Chapter 3: The Game Theory Framework
If you skipped the first two chapters and jumped straight here, here's the background you need: companies are using various methods to extract your tacit knowledge, and the incentives of that process work against you. This chapter uses game theory to break down those incentive structures so you can see the real game behind every decision.
Layer 1: You vs. the Company
The Basic Model: Principal-Agent Problem
The company (principal) wants to maximize knowledge extraction—turning what's in your head into company-owned assets. You (agent) want to maximize your own irreplaceability—maintaining the company's dependence on you.
These two goals are in fundamental tension: the more the company achieves its goal, the less safe you are.
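To make the tension concrete, here's a minimal sketch (hypothetical linear payoffs, chosen purely for illustration, not taken from any real model): both sides' payoffs ride on a single variable, the share of your tacit knowledge that has been extracted, and they move in opposite directions.

```python
# Illustrative sketch only: hypothetical linear payoffs, not a measured model.
# Both payoffs depend on one variable: the share of your tacit knowledge
# that has been extracted into company-owned assets.

def company_payoff(extracted: float) -> float:
    """Principal: captured knowledge is worth more the more of it is on file."""
    return extracted

def your_leverage(extracted: float) -> float:
    """Agent: irreplaceability is whatever share still lives only in your head."""
    return 1.0 - extracted

for share in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"extracted={share:.2f}  company={company_payoff(share):.2f}  "
          f"you={your_leverage(share):.2f}")
# Every step that raises the company's payoff lowers yours: a zero-sum axis.
```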
Information Asymmetry Is Your Structural Advantage
In this game, you have a natural edge: the company doesn't know what's in your head.
It knows your code output (git log), your task completion (project management tools), and your hours (attendance tracking). What it doesn't know:
- How deep your understanding of the system really goes
- What critical judgments you've made that were never recorded
- How much undocumentable experience you carry
Maintaining this asymmetry isn't deception. It's rational. You don't need to hide information. You just need to make sure the carrier of critical knowledge is always you, not a file.
The Company's Playbook
How the company works to close this information gap:
| Strategy | Method | How You Perceive It |
|---|---|---|
| Direct extraction | Requiring technical docs, SKILL files | "Knowledge sharing" |
| Indirect extraction | AI Week, Hackathons | "Learning opportunity" |
| Passive collection | IDE plugin usage data, AI tool input/output logs | You may not even know |
| Behavioral observation | Screen recording, operation logs | "Process standardization" |
| Social pressure | Tying it to performance reviews, the "not embracing AI" label | "Attitude problem" |
Every strategy tries to turn your private information into public information. You don't need to fight every single one (that's exhausting). Stick to one principle at the root level: how can be public, why stays in your head.
Layer 2: You vs. Your Coworkers
The Prisoner's Dilemma
If everyone protects their core knowledge, everyone's defenses hold. But if one person hands over everything for short-term performance points, everyone else's line gets breached.
Standard game theory matrix (higher = better outcome):
| | Coworker Protects | Coworker Hands Over |
|---|---|---|
| You Protect | Both safe (3, 3) | You look "uncooperative" (0, 4) |
| You Hand Over | Short-term praise for you (4, 0) | Both lose their moat (1, 1) |
Whatever your coworker does, handing over pays you more (4 > 3 if they protect, 1 > 0 if they hand over), so handing over strictly dominates. The Nash equilibrium lands in the bottom-right: both hand over, even though both would have been better off protecting (3, 3 beats 1, 1). Everyone is afraid of being the only one still protecting. That's the prisoner's dilemma: individually rational choices produce collective failure.
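You can check the equilibrium claim by brute force. A minimal sketch, reusing the illustrative payoffs from the matrix above (the action names and code structure are mine, not a standard library):

```python
# Brute-force best-response check over the 2x2 matrix above.
# Payoffs are the illustrative numbers from the table, not empirical values.

ACTIONS = ("protect", "hand_over")

# payoffs[(your_action, coworker_action)] = (your_payoff, coworker_payoff)
payoffs = {
    ("protect",   "protect"):   (3, 3),
    ("protect",   "hand_over"): (0, 4),
    ("hand_over", "protect"):   (4, 0),
    ("hand_over", "hand_over"): (1, 1),
}

def is_nash(mine: str, theirs: str) -> bool:
    """A profile is a Nash equilibrium if neither side gains by deviating alone."""
    my_best = max(ACTIONS, key=lambda a: payoffs[(a, theirs)][0])
    their_best = max(ACTIONS, key=lambda a: payoffs[(mine, a)][1])
    return (payoffs[(mine, theirs)][0] == payoffs[(my_best, theirs)][0]
            and payoffs[(mine, theirs)][1] == payoffs[(mine, their_best)][1])

for mine in ACTIONS:
    for theirs in ACTIONS:
        if is_nash(mine, theirs):
            print(f"Nash equilibrium: ({mine}, {theirs}) -> {payoffs[(mine, theirs)]}")
# Prints only (hand_over, hand_over) -> (1, 1): handing over strictly
# dominates, so mutual defection is the unique equilibrium.
```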
How Companies Manufacture the Prisoner's Dilemma
Companies understand this perfectly and actively engineer this game:
- Stack ranking: Coworkers become competitors. "More AI-forward than the next person" becomes a survival strategy
- Bottom-of-stack elimination: Non-cooperators get flagged first
- "Role model" awards: Publicly praising whoever hands over the most knowledge, applying implicit pressure on everyone else
- Information isolation: Nobody knows how much others have handed over; you can only assume they already have
When Someone Goes All In
In reality, some people will choose to go all in—hand over everything for short-term performance scores and promotion chances. You can't change this, and you don't need to try.
What you need to understand is the nature of that trade: it has an expiration date. Once the company finishes extracting their knowledge, the all-in cooperator's tradeable value drops to zero. Full cooperation doesn't mean "being spared." It just means being further back in the layoff queue.
More importantly: these people can only give up what they have access to. They can't touch your core assets. Your judgment, your understanding of system history, your decision instincts in critical moments—none of this gets exposed just because someone else handed over their share. Even if the company temporarily puts you at a disadvantage because of their impressive cooperation metrics, remember: their advantage is one-shot—once spent, it's gone. What you're protecting keeps generating value as long as the system is running.
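Discounting makes the asymmetry concrete. A back-of-the-envelope sketch with entirely made-up numbers: a one-shot reward for handing everything over, against a smaller reward that recurs every period for as long as the system runs.

```python
# Entirely hypothetical numbers: this only illustrates the shape of the trade,
# a single payoff collected once versus a payoff stream that keeps arriving.

def discounted_stream(per_period: float, discount: float, periods: int) -> float:
    """Present value of a payoff received every period, discounted each step."""
    return sum(per_period * discount**t for t in range(periods))

one_shot = 10.0                              # praise and a promotion bump, once
recurring = discounted_stream(2.0, 0.9, 40)  # steady leverage, period after period

print(f"one-shot:  {one_shot:.1f}")    # 10.0, then nothing
print(f"recurring: {recurring:.1f}")   # ~19.7: the smaller recurring payoff wins
```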
Your Strategy Doesn't Depend on Anyone Else
This means something important: you don't need to make any agreements with coworkers, don't need to convince anyone, don't need to build an "alliance," and don't need to figure out who the "defector" is. Just understand the game structure, then independently make the optimal decision for yourself.
Your protection happens at the depth layer. From the outside, you and the coworker who handed over everything look identical: positive attitude, on-time delivery, thorough documentation. The only difference is inside what you deliver: the how is rich, the why is left blank. That difference is externally unverifiable.
Layer 3: The Company vs. the Market
Why the Company Needs "AI Replacing Workers" Evidence
Layoffs aren't just a cost decision. They're a capital markets communication strategy. The company needs to show Wall Street:
- "Our AI investment is paying off"
- "Output per employee is increasing"
- "We're becoming an AI-native organization"
Your knowledge base contributions, your SKILL files, your screen recordings, your AI Week results—these aren't just "knowledge sharing." They're the evidence the company will present at its next earnings call to demonstrate "AI-driven efficiency gains."
Narrative Economics
A company's market cap doesn't just depend on its business. It depends on the story it tells. "AI transformation" is the most valuable story right now.
This means: even if AI hasn't produced real efficiency gains at your company, management has a powerful incentive to manufacture evidence of efficiency gains. And the easiest evidence to manufacture is: layoff numbers ("we reduced headcount by 20%") and internal case studies ("an employee used AI to complete a task that used to take a week").
What you demo at AI Week might show up in next quarter's investor deck.
The Limits of Game Theory
Game theory gives you analytical tools, but it has important limits:
1. It can't change power asymmetry.
When the company says "cooperate or get out," the game-theoretic optimal strategy might still be "cooperate"—because the cost of losing your job is too high. Zion can't erase this power gap. It can only help you make smarter choices within the space you do have.
2. It can't substitute for real capability.
The most fundamental protection doesn't come from game strategy—it comes from whether your irreplaceability is genuine. If all your value truly lies in repetitive operations, no strategy can protect you long-term. Game theory buys you time, but you have to use that time to build real capability.
Chapter Summary
| Game Layer | Core Tension | Your Strategy |
|---|---|---|
| You vs. Company | Knowledge extraction vs. irreplaceability | Maintain information asymmetry: how is public, why is kept |
| You vs. Coworkers | Prisoner's dilemma | No coordination needed: each person understanding "protecting is self-optimal" is enough |
| Company vs. Market | AI narrative needs evidence | Recognize how your contributions get repackaged; control the narrative |
The premise of every strategy: your non-cooperation happens at the depth layer, invisible at the behavioral layer.