Chapter 1: The Economic and Power Structure
Before we talk about "how to protect yourself," you need to understand why this is happening in the first place.
What Happened
The Capital Bubble and Its Correction
From 2008 to 2021, major global economies kept interest rates at or near zero (ZIRP, zero interest rate policy) for most of 13 years. Money was too cheap. Tech companies could raise hundreds of millions without turning a profit, then spend it all on one thing: hiring.
Amazon's headcount roughly doubled from 800,000 to 1.6 million between 2019 and 2021. Meta grew from about 45,000 to over 71,000 in the same period—a nearly 60% increase. This kind of bloat was everywhere. (Source: MacroTrends)
This wasn't demand-driven hiring. It was capital-driven hiring. Companies weren't hiring for "who we need now." They were hiring for "who we might need later" and "who our competitors might poach."
In 2022, the Fed started raising rates. The cost of capital reset. These companies realized they'd over-hired by hundreds of thousands. Layoffs began—not as isolated events at a few companies, but as a systemic correction across the entire industry. Per layoffs.fyi tracking data:
- 2023: ~153,000 people
- 2024: ~264,000 people
- 2025: ~165,000 people
The primary driver of this wave of layoffs is the correction of a financial cycle, not AI replacing workers. But AI provides a perfect narrative shell: instead of saying "we over-hired," companies say "AI made us more efficient." The first is a management failure. The second is strategic vision. If you were in management, which story would you tell?
The AI Narrative: Real Impact vs. Inflated Story
AI is genuinely replacing some work—junior development, basic testing, content production, customer support, data labeling. That part is real.
But when a company lays off 20% of its workforce, how much of that was AI actually replacing jobs, and how much was financial cost-cutting? No precise data can untangle the two. The narrative bias, though, is obvious: companies want to attribute all efficiency gains to AI, because that story excites Wall Street and pumps the stock price.
A set of reports from Fortune in February 2026 exposed both sides of this contradiction: on one hand, Stanford AI researchers claimed "productivity doubled in 2025"; on the other, a survey of thousands of executives showed most companies had yet to see real productivity gains from AI. Economists compared it to the "Solow Paradox" of the 1980s—"you can see the computer everywhere except in the productivity statistics."
Another Fortune article from the same period ran a blunter headline: "You hate AI because corporate profits are capturing your extra productivity, and your pay hasn't changed."
Meanwhile, internal AI efficiency data is being systematically polluted:
Leadership signals: "Everything must be tied to AI"
↓
Middle management passes down pressure: "Show measurable AI adoption"
↓
Workers are forced to perform: claiming 10x, 100x efficiency gains
↓
Leadership receives distorted data, concludes "AI can really replace most headcount"
↓
Decision: layoffs

Every layer is doing what's rational for itself—leadership wants a growth story, middle management wants to hit KPIs, workers want to keep their jobs. But these rational choices stack into a systemic lie: AI's actual capabilities are grossly overestimated. Managers who don't know what's happening on the ground make layoff decisions based on polluted data. A task that genuinely takes a week gets reported as "AI can do it in a day," so leadership decides the team can be cut by 80%. Then the people who actually did the work leave, and whoever remains carries 5x the workload—until the next round of "optimization."
Other Accelerating Factors
The AI narrative isn't the only force at play. In 2022, the U.S. Section 174 amendment (a delayed-effect provision from the 2017 Tax Cuts and Jobs Act) began requiring companies to capitalize R&D expenses—including software engineer salaries—and amortize them over 5 years (15 years for foreign R&D), instead of deducting them fully in the year incurred. (BDO detailed analysis)
Previously, a company hiring an engineer at $200K/year could deduct the full $200K against that year's taxes. Now it must be amortized over 5 years—only about 10% deductible in year one. The real after-tax cost of employing R&D workers jumped overnight. Cutting R&D headcount became the more tax-efficient move. This has nothing to do with AI—it's a structural effect of tax law—but it stacks with the AI narrative and the financial cycle to create an environment that's brutally hostile to engineers.
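The year-one arithmetic can be sketched in a few lines. This is a minimal illustration, assuming the straight-line, half-year-convention amortization Section 174 prescribes for domestic R&D and the 21% U.S. federal corporate rate; real filings involve more moving parts.

```python
def first_year_deduction(salary: float, years: int = 5) -> float:
    """Straight-line amortization with a half-year first period,
    the convention applied to domestic R&D under amended Section 174."""
    # Only half of one year's amortization lands in year one: 1/5 * 1/2 = 10%
    return salary / years / 2

SALARY = 200_000
TAX_RATE = 0.21  # US federal corporate rate

old_deduction = SALARY                        # pre-2022: fully expensed
new_deduction = first_year_deduction(SALARY)  # post-2022: 10% of salary

# Extra year-one taxable income per engineer, purely from the accounting change
extra_tax = (old_deduction - new_deduction) * TAX_RATE
print(f"Year-one deduction: ${new_deduction:,.0f}")  # $20,000
print(f"Extra year-one tax: ${extra_tax:,.0f}")      # $37,800
```

Nothing about the engineer's actual cost changed; only the timing of the deduction did. But cash taxes in year one jump by tens of thousands of dollars per R&D head, which is exactly the kind of line item a CFO notices.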
How It's Executed
Layoffs aren't just a decision. They're a well-practiced playbook.
"Cleanup"—Some companies internally use the word "cleanup" to describe layoffs. Thank them for the honesty. It drops the polished wrappers like "organizational optimization" or "talent upgrade" or "strategic refocus" and reveals an attitude: the people being laid off aren't "resources being reallocated" or "part of a strategic adjustment." They're things to be cleaned up. When a company's internal vocabulary treats employees as garbage, do you really expect it to consider your interests during knowledge extraction?
"Zero-Signal Termination"—Oracle, March 31, 2026. Around 20,000 employees—18% of the global workforce—received an email at 6 AM from "Oracle Leadership" informing them their role had been eliminated. No prior warning from HR or managers. Same-day termination. System access revoked within hours. Severance offered only after signing termination paperwork via DocuSign; unvested stock forfeited entirely. Oracle had accumulated $58 billion in new debt within two months to fund AI data center buildout and needed $8-10 billion in annual cash flow savings. Those people weren't replaced by AI. Their budget was reallocated to AI infrastructure. The company long regarded as the industry's "safe harbor for lifers" proved that no employer is a safe harbor.
"Float and Deny"—Companies leak layoff plans to media through "anonymous sources," observe market and regulatory reactions, then officially deny the reports. If public backlash stays within acceptable bounds and regulators don't move, the layoffs land shortly after—often at the exact scale described in the "inaccurate reports." In March 2026, Reuters reported "Meta plans to lay off 20%." Meta's spokesperson called it "speculative reporting about theoretical scenarios." But the information was already in the market. The reaction had already been collected. When you see "according to people familiar with the matter" followed by an official denial, the real information isn't in the denial. It's in the timeline.
Where the Money Went
Layoffs are the surface symptom. The deeper question: productivity keeps growing, corporate profits keep growing—where are those gains going?
According to BLS (Bureau of Labor Statistics) data, in Q4 2025, the labor share of national income stood at 53.8%—the lowest value since BLS began tracking in 1947. (FRED data)
EPI (Economic Policy Institute, a labor-leaning think tank), based on long-term BLS tracking, shows: from 1979 to now, U.S. economic productivity grew by about 83%, but median hourly wages for workers grew only about 29%. Productivity growth outpaced wage growth by nearly 3x. (EPI Productivity-Pay Gap)
A caveat: the exact size of this gap depends on how you measure it. Pro-market think tanks include health insurance and other non-wage benefits in "total compensation," which produces a smaller gap. But even with the most conservative methodology, the direction is the same: productivity gains are increasingly flowing to capital, not to workers. The economic numbers look great—GDP is growing, corporate profits are growing—but most people feel inflation, pressure, and a job that could vanish at any moment. That's Ghost GDP.
Why Wealth Won't "Trickle Down"
A natural reaction: "The massive wealth AI creates has to be spent somewhere. Rich people consume, and the money flows to everyone else, right?"
Intuitively reasonable. But several structural reasons make this very unlikely.
Most of the wealthy's money never enters consumption. In economics, "marginal propensity to consume" describes how much of each additional dollar earned gets spent. The higher the income, the lower the ratio. Someone earning $5,000/month spends nearly all of it. Someone earning $10 million/year might spend 10%, with the remaining 90% flowing into financial assets—stocks, funds, real estate. That money circulates among the wealthy: you buy my stock, I buy your house. It never passes through a worker's hands.
Even when the wealthy do spend, the transmission efficiency is far lower than you'd think. Spend $100K on a luxury handbag, and intuitively that $100K should flow to the leather workers, seamstresses, and shop employees. But top luxury brands typically have gross margins above 70%—over $70K of that $100K is brand premium, shareholder dividends, and executive compensation. Less than $10K reaches frontline workers. That same $100K, split among 10 ordinary people spending $10K each on everyday goods, touches far more workers—everyday supply chains are longer, more distributed, and more labor-intensive.
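As a toy model of that transmission gap—the labor-share percentages here are illustrative assumptions drawn from the paragraph above, not measured data:

```python
def wages_reached(spend: float, frontline_labor_share: float) -> float:
    """Portion of a purchase that ends up as frontline wages."""
    return spend * frontline_labor_share

# One $100K luxury purchase: assume under 10% reaches frontline workers
luxury = wages_reached(100_000, 0.08)

# The same $100K as ten $10K everyday budgets: assume a higher labor share
everyday = sum(wages_reached(10_000, 0.30) for _ in range(10))

print(luxury)    # 8000.0
print(everyday)  # 30000.0
```

Under these assumed shares, the same $100K delivers nearly four times as much in wages when spread across everyday spending—the point isn't the exact numbers, but that transmission efficiency depends on where the money lands.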
The digital economy makes this worse. Traditional wealthy spending on yachts and sports cars at least kept shipyards and autoworkers employed. But today, more and more wealth flows into digital assets—equity in AI companies, cryptocurrency, real estate held as investment vehicles rather than housing. These transactions create almost zero labor demand. A $50 million VC investment into an AI company might hire 20 engineers. The same money distributed as wages to 500 people would produce radically different consumption and employment ripple effects.
"Trickle-down" isn't an untested hypothesis—it's a tested mechanism whose effectiveness keeps declining. Under AI-accelerated wealth concentration, expecting it to automatically distribute money downward is a fantasy.
Is This Time Different?
"Technology ultimately creates more jobs"—NVIDIA CEO Jensen Huang and OpenAI CEO Sam Altman keep repeating this message: the structure of work will transform, and humans will be better off.
This isn't empty talk. Historically, the argument has held up: the First Industrial Revolution killed hand-weaving but created factory jobs. The Second Industrial Revolution eliminated massive amounts of physical labor, but the white-collar class emerged. We need to take this argument seriously rather than dismiss it outright.
The question is: past successful transitions depended on specific preconditions. Do those conditions still hold?
Precondition 1: An "escape upward" path exists
Every past technological revolution replaced the "lower tier" of labor at that time. Machines replaced handcraft; workers could move up to machine operation. Automation replaced physical labor; workers' children could move up to knowledge work—programmers, designers, analysts, lawyers. Each time, a higher tier was waiting.
This time, AI is replacing knowledge work itself. Code, copy, analysis, design, junior legal and medical judgment—these are exactly the destinations people "escaped upward" to in the past. When knowledge work gets replaced too, what's the next escape tier? Maybe something exists, but nobody can give a concrete answer—including Altman and Huang. They say "new types of jobs will emerge" but can't say what. This is different from before: during the Industrial Revolution, the new jobs (factory workers, railway engineers) were already visible as the old ones disappeared.
Precondition 2: The transition window is long enough for society to adapt
From the steam engine to the establishment of labor law, Britain took nearly a century. From the spread of assembly lines to the mass formation of the middle class, the U.S. took about half a century. These timescales meant a generation had enough room to retrain and transition through education.
AI spreads at a completely different speed. ChatGPT went from launch to 100 million users in two months. Companies go from "watching AI" to "mandatory AI adoption company-wide" in one or two quarters. If the transition happens over 5–15 years instead of 30–50, huge numbers of people won't have time to make the jump. "It'll work out eventually" might be correct, but what happens during the gap before "eventually" is borne by real people with real careers.
Precondition 3: Workers have bargaining power to force institutional change
This is the most critical one. Every past industrial revolution eventually produced labor protections (child labor laws, the eight-hour workday, minimum wage, unions)—not because capital had a change of heart, but because workers' strikes could shut factories down. This physical fact—you stop working, the machines stop—is the ultimate source of all labor rights.
In the AI era, this bargaining chip is being eroded. If your knowledge has already been extracted into SKILL files, your workflows have been screen-recorded, your decision logic has been documented—you leave, and the system still runs. Maybe not as well as when you were there, but well enough to outlast a strike. Workers are losing the physical foundation of their bargaining power.
We don't predict the future. New jobs may appear, or may not. The transition may take 10 years, or 50. But the three preconditions that made past tech transitions "eventually work out"—an escape path, sufficient buffer time, and workers' bargaining power—are all under extreme pressure right now. Under these conditions, "trusting history to repeat itself" is a gamble, and the stake is your career.
Why Nobody Is Coming
Maybe you're thinking: even if everything above is true, if things get bad enough, someone will surely stand up and fight, right? The government will step in, right?
The Preconditions for Resistance Have Been Dissolved
Look at history. Every successful labor movement shared one precondition: physical intolerability. Britain's 1833 child labor law came about because children were dying in mines. America's 1886 eight-hour workday movement was born from workers doing 14–16 hour days and suffering injuries and death. It wasn't because people suddenly became enlightened. It was because the status quo had reached the threshold of "change or die."
Modern technology has pushed that threshold very far away. You don't need a job to stay alive—UBI or the gig economy keeps you fed, streaming and social media keep you occupied, delivery apps and e-commerce mean you never have to leave home. You're uncomfortable, but you won't die.
The old threshold for resistance was "can't survive." The new threshold is "can't live with dignity." And "can't live with dignity" has never once produced large-scale collective action. No country has ever had an institution-changing social movement because "people felt their lives were meaningless."
Sam Altman's OpenResearch ran the largest randomized controlled UBI experiment in the U.S. from 2020–2023: 1,000 low-income participants received $1,000/month for three years. (Yahoo Finance report) The results deserve serious consideration—positive and negative: in year one, participants saw significant reductions in psychological stress and food insecurity, fewer low-quality part-time jobs, more investment in education, and a 2% increase in agreement that "work is a social responsibility." But by years two and three, the wellbeing improvements faded. $1,000/month couldn't offset rising rent, chronic illness, or childcare costs. The researchers concluded: UBI can relieve extreme hardship, but it can't replace systemic reform.
Put differently, UBI can keep you off the streets but can't give you a dignified life. It lands precisely in the gray zone between "can't survive" and "living well"—exactly the position least likely to produce resistance.
Policy Won't Arrive in Time
Effective regulation of AI-driven workforce displacement requires answering brutally hard questions: How do you define "AI replaced a worker"? If you tax it, at what rate? Will that drive companies offshore? Who audits how much of a company's work is actually done by AI?
The EU's AI Act took three years from proposal to enactment (2021–2024), and it barely touched workforce displacement. A genuinely effective labor protection framework—from legislation to enforcement—takes five years optimistically, ten years realistically.
And the government's incentive structure isn't on your side. GDP growth, tax revenue, employment rate—AI pushes the first two up and only pushes the third down, and the employment rate decline can be absorbed by statistical reclassification as "flexible employment" or "new work arrangements." Tech companies are among the largest tax bases and lobbying forces in many countries. In the short term, not intervening is the rational choice for government.
The Global Pattern Is Strikingly Consistent
In 2023, China's urban youth unemployment rate broke 20%. Young people's response: 躺平 (tang ping—"lying flat," a passive withdrawal from career competition), 考公 (kao gong—studying for civil service exams as a safe harbor), 考研 (kao yan—pursuing graduate school to delay entering the job market). In the same period, over 260,000 people were laid off in U.S. tech. Most people's response: polish LinkedIn, sign up for bootcamps, take gig work. In South Korea, young people coined "N포세대" (N-po sedae—the "N-give-up generation": giving up on dating, marriage, childbirth, homeownership)—retreat instead of resistance. Japan's "Employment Ice Age" produced an entire generation of freelancers and shut-ins. Structural problems get converted into individual anxiety. The conditions for collective action never form.
People who lose their jobs don't starve. The gig economy absorbs some; family support networks catch others. Consumption can be downgraded. Modern agriculture and industry can sustain basic survival. Short-form video, games, and livestreaming numb the discomfort. Information bubbles prevent discontent from aggregating—everyone assumes "I'm the only one struggling." Without the conditions for collective action, there's no pressure to accelerate policy.
The floor for how much you can squeeze from workers is much lower than you'd think. Every time someone invokes "the market will self-correct," they're trading time for space—and the time being spent is yours.
Chapter Summary
You're not facing one threat called "AI is going to replace you." You're facing a structural environment where multiple forces compound:
- Financial cycle: Post-ZIRP headcount correction
- AI narrative: Real impact can't be honestly quantified, but the narrative is already driving layoffs, and internal data is inflated at every level
- Tax incentives: Section 174 makes cutting R&D headcount more tax-efficient
- Wealth distribution: Productivity gains captured by capital, labor share at historic lows, trickle-down broken
- Escape path uncertain: The "escape upward" exit that existed in past revolutions may not exist this time
- Resistance threshold raised: Modern stability tools prevent "living without dignity" from turning into collective action
- Policy vacuum: Governments have no incentive to act on your timeline
Understanding this full picture is how you understand why Zion exists.
Zion isn't a project about "self-preservation." When we say "protect your irreplaceable value," what we mean is: in a system that's actively treating you as an optimizable cost line item, you have the right to reject that definition. Your experience, your judgment, your understanding of how systems actually work—these aren't things that can be bought out with two weeks' severance, and they aren't things a SKILL file can exhaust.
Every purchase you make is a vote for the world you want to live in. Every decision you make at work is too. If you accept them buying out your experience with two weeks' pay, accept them calling you something that needs to be "cleaned up"—then the world becomes exactly that.
Policy will come eventually—just like the Industrial Revolution eventually produced labor law. But "eventually" might be ten years. Twenty. Until then, your choices are your stance.