Chapter 2: What Companies Are Really After
The things companies ask you to do—write documentation, attend AI Week, screen-record your workflows, boost your AI adoption metrics—look different on the surface but point at the same goal: turning the tacit knowledge in your head into a company-owned, replicable asset that doesn't depend on you.
Tacit Knowledge: The Last Missing Piece for Enterprise AI
Large models are pretrained on the public internet—Wikipedia, Stack Overflow, GitHub, papers, books. They know Java syntax, how to use Spring Boot, and distributed systems theory.
But they don't know:
- Why your company migrated its order system from MySQL to TiDB in 2023, and what went wrong during the migration
- Why your payment module has a "redundant-looking" retry block—because three years ago, a third-party gateway would randomly drop packets during peak traffic, and that logic was added after that incident
- When a product manager says "this feature is simple," which cases really are simple and which ones are missing upstream dependencies
- Who to call first when production breaks—not a technical question, but an organizational one: who has decision authority, whose word actually carries weight
This knowledge is what you earned through time, overtime, mistakes, and countless production incidents. It lives in your head, not in any documentation or code repository.
Companies have identified a bottleneck: the external component of AI capability (general programming knowledge) is already strong. But the "last mile" of deploying AI inside the enterprise—internal context, project history, business boundaries, organizational relationship maps—is completely absent from any model's pretraining data.
So they started asking you for it. SKILL files, screen recordings, technical docs, AI Week deliverables—all of these are attempts to fill that last mile.
This Is Overreach
Your employment contract covers your work output—code, documentation, system operations, task delivery.
It does not cover encoding years of accumulated judgment, intuition, and tacit experience into AI-consumable formats. That's not your job description. The company is using blurry power boundaries to get you to perform—for free—what should be separately priced knowledge engineering work.
Legally, it's a gray zone. Employment contracts have no clear answer to "is an employee obligated to produce training data for AI systems?" Your contract says you produce code and docs, not that you encode your judgment into AI-consumable formats. But in practice, more and more companies are embedding "AI training tasks" into performance reviews: cooperation means "embracing innovation," while refusal gets you flagged as "not aligned with company strategy."
Companies know this gray zone exists. They're exploiting it.
Sam Altman's Eulogy
On March 17, 2026, OpenAI CEO Sam Altman posted a tweet:
"I have so much gratitude to people who wrote extremely complex software character-by-character. It already feels difficult to remember how much effort it really took. Thank you for getting us to this point."
On the surface, it's gratitude. Read it again:
- "getting us to this point"—what point is this? The point where you're no longer needed.
- "It already feels difficult to remember"—your years of accumulated craft are already "hard to remember."
Against the backdrop of simultaneous mass layoffs at Amazon, Meta, and Atlassian, this isn't a thank-you. It's a eulogy.
These companies trained AI on the code you wrote, then thanked you for "getting us to this point."
Protecting Core Knowledge Is Common Sense in Every Mature Industry
If you think "protecting your own judgment" sounds selfish, look at other professions:
Doctors don't write up "why I chose Treatment A over Treatment B for this patient" and share it with the hospital. They share clinical guidelines (how), but the judgment applied to a specific patient (why) stays in their heads.
Lawyers—you can pay a consultation fee to ask a lawyer a question. But if you asked them to write out their full case experience, litigation strategy, and assessment of the judge in a complete markdown document and hand it over, you wouldn't get cooperation. You'd get slapped.
Chefs don't give the restaurant their recipes down to the gram. The existence of apprenticeship itself proves the point: core craft is transmitted through long-term relationships, not extracted through documentation.
Traders know from day one: the moment you write your strategy down, it starts losing value.
Only tech workers, steeped in open-source culture and the spirit of knowledge sharing, are eager to give everything away.
That openness was a virtue during growth eras—the reputational returns from sharing outweighed the risk of knowledge loss. But when the economy shifts from expansion to contraction, when companies shift from "using your knowledge to create new value" to "using your knowledge to replace you," the rules have changed.
This isn't about becoming closed off. It's about managing your core assets rationally, the way every other mature profession already does.
The Core Distinction: How vs Why
This is Zion's most important tool.
How (operational layer): How to do it. Steps, procedures, commands, configurations.
- The company has every right to require these from you
- This content genuinely should be documented—good operational docs are part of team infrastructure
- AI can learn operational sequences from this content
Why (judgment layer): Why it's done this way. Decision rationale, trade-off logic, historical lessons, gut calls.
- These are your core assets
- They're extremely hard to document completely—tacit knowledge is, by definition, hard to put into words
- Even if AI gets your "why," it can't correctly apply it in new contexts—because judgment depends on context, and context is always changing
| Type | Example | Hand Over? |
|---|---|---|
| How | "When you get an alert, first check Grafana panel X, then check Service Y's logs" | ✓ |
| Why | "We check panel X before Y's logs because during the 2023 incident, Y's logs dropped critical data under high load" | Protect |
| Why | "Chose Plan A over B because B had better performance but would create a dependency on Team Z, and Z was short-staffed at the time" | Protect |
This distinction doesn't require deliberate concealment. Good technical documentation is supposed to focus on "how"—"why" is too context-dependent, and once written into docs it risks going stale and misleading future readers. Just follow documentation best practices, and your most valuable knowledge naturally stays with you.
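As a concrete sketch of what "how"-only documentation looks like in practice (the alert name, Grafana panel, and service names here are hypothetical, not from any real system), a runbook entry written by this rule records the operational sequence and nothing else:

```markdown
## Alert: checkout-service p99 latency > 2s

1. Open the Grafana panel "checkout-service / DB connection pool".
2. If the pool is saturated, pull the downstream service's recent errors:
   `kubectl logs deploy/inventory-service --since=15m | grep ERROR`
3. If the errors show pool exhaustion, follow the worker-restart
   procedure in section 4 of this runbook.
```

Note what the entry omits: why the panel is checked before the logs, and which incident taught the team that ordering. As an operational guide it is complete; the judgment layer stays with its owner.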
In reality, the line between How and Why isn't black and white—many operational steps implicitly contain judgment. For a more detailed analysis of this, and how to apply the distinction in specific scenarios, see the quadrant model in Scenario: Knowledge Extraction.
Chapter Summary
What the company is after isn't your code output—they already own that. They're after what's inside your head: your judgment, your experiential intuition, your deep understanding of systems and people.
You earned these things with your career. Their value should not be transferred to the company at zero cost, then used to reduce the company's dependence on you.
Understand what you're handing over. Then make a clear-eyed choice.