AI Isn't Taking Your Job. People Are.
Productivity describes the ratio of output to input. Historically, it has been increased by offloading work and creating separate classes of labor, enabling individuals to reclaim their time and focus on higher-impact work. That single principle underpins all of modern society. The creation of the knowledge worker, the “white collar” class, is one such result. The transfer of wealth and power has always come through the optimization of time, typically through delegation: the creation of an additional role.
So why do you keep hearing that you’ll lose yours?
The Framing Is Wrong
First, that phrasing is inaccurate — and when we’re trying to reason through something technical, we should hold ourselves to higher standards. People are being laid off because of people. Regardless of whether those decisions were delegated to an AI agent, a person was involved at some point. There might be legitimate reasons to downsize: financial hardship, legacy role removal, or a cultural mismatch. But the AI job-loss narrative currently circulating phrases it as though AI itself will replace the need for you.
Let’s correct that framing. When people talk about “AI”, they’re almost always talking about Large Language Models (LLMs) — systems built on neural network architectures that predict the next token in a sequence based on patterns learned from massive training datasets[1].
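As a rough sketch of what “predicting the next token” means mechanically, here is a toy version of the final step: converting per-token scores (logits) into a probability distribution and picking the likeliest candidate. The vocabulary and logit values below are invented purely for illustration; a real model computes its logits with billions of learned parameters.

```python
import math

# Toy vocabulary and hypothetical scores a trained model might assign
# to each candidate next token. These numbers are made up for illustration.
vocab = ["cat", "dog", "sat", "mat"]
logits = [1.2, 0.3, 2.5, 0.9]

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)

# Greedy decoding: pick the highest-probability token.
next_token = vocab[probs.index(max(probs))]
print(next_token)  # prints "sat"
```

Real systems usually sample from this distribution (with temperature, top-p, and similar knobs) rather than always taking the argmax, but that doesn’t change the point: the model scores continuations against learned patterns. Nothing in this step reasons about consequences.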
What they’re effectively telling you is that a system that acts as a universal function approximator[2] — a model that can, in theory, learn to approximate any continuous mathematical function given enough parameters — will replace you. Yes, LLMs are a genuine feat of engineering. But no, they are nowhere near as sophisticated as the human mind. They do not have the capacity to produce information outside of their training distribution. That would require a system capable of understanding and consolidating novel information in real time across a vast, open-ended space to create truly original work — effectively generating order from a pseudo-experience. That feat, if achieved, would change the world in a far more fundamental way than job displacement. And even then, machines cannot experience emotion. A machine has no sense of consequence.
Accountability Has No Substitute
Keep these questions in mind: What happens when the system fails? Who is responsible? How does a machine understand accountability?
Emotions and consequence are the mechanisms that have enabled humans to develop both frightening and marvelous things. They are evolutionary characteristics, shaped over hundreds of thousands of years, and they can’t be replaced even by the rapid advancement of technology. You can optimize for a loss function, but you can’t cover every possible path. The consequences of real decisions, and the desire to never experience a particular failure again, are powerful motivators for creativity, innovation, and error prevention.
As David Thomas and Andrew Hunt describe it in The Pragmatic Programmer[3]: good programmers are paranoid programmers. The same principle extends to any professional. Paranoia, doubt, and a willingness to explore and grow produce something novel. To replicate doubt within AI — even partially — would require something beyond computation.
More Output Means More Oversight, Not Less
So why does it appear that an entire class of workers is being removed? Logically, this doesn’t hold up. Productivity, by definition, means more output per unit of input. Even if you grant that AI output is guaranteed to be correct (a generous assumption), you’d still need people to define what correct means. Unless the objective is to eliminate humanity entirely, at which point this entire argument is moot, more output means there needs to be more accountability, not less.
When was the last time humanity let things run unsupervised without it ending in disaster? Humans don’t even trust other humans. Our entire society and legal infrastructure are structured around that single premise. We can’t even reach universal agreement on whether the suffering of a child is unacceptable. How are we going to trust AI to operate autonomously?
Jevons’ Paradox: Efficiency Creates Demand
To understand what should happen when productivity increases, it helps to consider a concept from 19th-century economics.
Jevons’ Paradox[4] is an economic observation introduced by William Stanley Jevons in 1865. It describes how technological improvements that increase the efficiency of a resource’s use tend to lead to increased total consumption of that resource — not decreased. As efficiency lowers the effective cost, demand rises to more than offset the savings. Jevons originally documented this with coal consumption during the Industrial Revolution, but the pattern has been observed across sectors ever since[5].
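The rebound effect can be made concrete with hypothetical numbers. The elasticity value below is an assumption chosen for illustration, not a figure from Jevons; any price elasticity of demand above 1 produces the same qualitative outcome.

```python
# Hypothetical scenario: a 50% efficiency gain halves the effective cost
# per unit of useful output. With demand elasticity above 1, induced demand
# more than offsets the savings, so total resource use rises.
baseline_cost = 1.0      # cost per unit of useful output before the gain
baseline_demand = 100.0  # units of useful output consumed

efficiency_gain = 0.5    # the resource now does twice the work per unit
new_cost = baseline_cost * (1 - efficiency_gain)

elasticity = 1.5         # assumed: demand rises 1.5% per 1% price drop
# Constant-elasticity demand curve: demand scales as price**(-elasticity).
new_demand = baseline_demand * (new_cost / baseline_cost) ** (-elasticity)

# Resource consumed = useful output * resource needed per unit of output.
baseline_resource = baseline_demand * 1.0
new_resource = new_demand * (1 - efficiency_gain)

print(round(new_demand, 1))    # about 282.8 units of useful output
print(round(new_resource, 1))  # about 141.4: total resource use rose
```

The efficiency gain cut the resource needed per unit in half, yet total consumption grew by roughly 40% in this toy model. That is the paradox in miniature.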
Apply this to software. Has the market become more saturated? Absolutely — and general-purpose solutions are losing demand fast. AI enables work and workers to be fluid. The cost of switching decisions becomes negligible. You no longer need to hire an entire company to build infrastructure. If you can assemble five specialized engineers who meet your exact criteria, you can develop the minimal infrastructure you need and move on. Companies no longer require thousands of employees to execute. They can be smaller, distributed into autonomous clusters rather than deep management hierarchies. The span of control increases; the depth of control compresses. Every team becomes autonomous with a deep focus on their specific product.
That’s what you’d expect when productivity is genuinely increasing. Instead, the rhetoric continues. So what gives?
A Thought Experiment on Who Actually Loses
Let’s assume AGI is created. Humans no longer need to work. Everything is automated. AI runs everything and is fully self-sustaining.
Who would this most negatively impact?
It’s not the lower or middle class. There’s a massive power discrepancy that exists solely because it’s in the self-interest of certain people to ensure that gap persists. Even if AGI could eliminate all roles, no one who benefits from the current control structure would willingly allow it — especially not individuals who derive their influence from that asymmetry. AI could become the single most effective control mechanism in history, or it could be a mechanism to enable genuine human prosperity. But the latter assumes something about human nature that isn’t grounded in the historical record.
AI is not taking your job. It’s being used as an excuse to increase profits.
The same jobs that are “lost” are often made available again in lower-cost labor markets. By creating and perpetuating fear, the narrative gives companies cover to pay people less than their work is worth. The data on this is unambiguous.
Productivity vs. Wages: The Numbers
According to the Economic Policy Institute[6], between 1979 and 2019, net productivity in the U.S. grew by 59.7%, while typical worker compensation grew by just 15.8% — a 43.9 percentage-point divergence. Since 2000, more than 80% of that gap has been driven by rising inequality: compensation flowing disproportionately to the top 1% and 0.1%, and a declining share of income going to labor relative to capital owners. This isn’t a new trend driven by AI. As EPI’s research shows, the same set of intentional policy decisions[7] — deregulation, erosion of the minimum wage, corporate globalization, and the hollowing out of union power — have been suppressing wages for over four decades.
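The EPI figures reduce to simple arithmetic; restating them makes the scale of the divergence explicit.

```python
# Cumulative growth over 1979-2019, per the EPI data cited above.
productivity_growth = 59.7   # % growth in net productivity
compensation_growth = 15.8   # % growth in typical worker compensation

# The divergence, in percentage points of cumulative growth.
gap_pp = productivity_growth - compensation_growth
print(round(gap_pp, 1))   # 43.9

# Put differently: productivity grew roughly 3.8 times as fast as typical pay.
ratio = productivity_growth / compensation_growth
print(round(ratio, 1))    # 3.8
```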
This is what technology without ethics and accountability produces. The internet was scary. AI is scary. Everything is scary — until you realize it’s only scary because fear enables people to take advantage of others. These excuses are as old as time: “Your skills aren’t worth paying for.” “It’s automated and simple.” This time, the rhetoric is being applied to the new middle class. The wealth distribution doesn’t lift people up; it pushes people down. That’s not because of AI. It’s because of a complete disregard for ethics by the people making these decisions.
Why No One Wants to Admit It
There are several structural reasons the AI job-loss narrative persists.
Investor protection. Investors who poured billions into companies reliant on predatory contracts and vendor lock-in are watching their moats evaporate. What was once friction is now a death sentence. By claiming AI is taking your job, these companies can reduce the most expensive line item on the balance sheet: human capital.
Executive disconnect. Most executives — even those with technical backgrounds — have created so many layers of abstraction between themselves and the actual work that they can no longer reason clearly through the decisions they’re making. You can be smart, very smart in fact. But unless you’re in the ring, there will always be a gap between what you think, what you see, and what actually is. This was manageable when different levels of abstraction served a genuine purpose. Very few people are capable of being deeply technical without developing tunnel vision. But that’s shifting. Abstraction of human thought and knowledge is collapsing into a broad, thin layer. Everyone needs a sufficient balance of knowledge now — almost the opposite of what we’ve trended toward in the last 50 years.
Pressure from capital. The desire to increase profit margins gives people an excuse to deploy AI recklessly. But notice who’s actually hiring: the frontier companies. OpenAI is planning to nearly double its headcount to 8,000 by the end of 2026[8]. Anthropic has been aggressively expanding its workforce[9], with plans for 200 new roles by 2027 in Dublin alone. NVIDIA continues to expand rapidly. The companies closest to frontier research have not claimed that humans are no longer needed. They’re hiring more people, not fewer.
You Can’t Eliminate Accountability Without Consequence
Here’s the truth about any system: you cannot eliminate underlying services without consequence. In the real world, that means you can’t eliminate a human worker without eliminating the accountability that human holds.
Consider what happened at Boeing. When the company prioritized cost-cutting and outsourcing over engineering rigor[10], two 737 MAX aircraft crashed within five months, killing 346 people[11]. The crashes were traced to design flaws in a flight control system that Boeing failed to properly disclose to regulators or pilots. Internal pressure to keep pace with Airbus led to shortcuts in safety analysis, inadequate pilot training requirements, and a culture where profit maximization displaced engineering judgment[12]. The company ultimately pled guilty to criminal fraud in July 2024 — years after the people who died could have been saved by adequate oversight.
There’s a clear and repeating pattern. Systems need time to balance, and any structural change must be made with full awareness of what it will cost. Task automation is real and does reduce the tedium of certain work. But the need for strong judgment, oversight, and accountability will always remain.
That’s not a limitation of AI. It’s a feature of reality.
References
- [1] IBM. "What Are Large Language Models?" IBM Think. https://www.ibm.com/think/topics/large-language-models
- [2] "Universal Approximation Theorem." Wikipedia. https://en.wikipedia.org/wiki/Universal_approximation_theorem
- [3] Thomas, D. & Hunt, A. The Pragmatic Programmer: 20th Anniversary Edition. Pragmatic Bookshelf. https://pragprog.com/titles/tpp20/the-pragmatic-programmer-20th-anniversary-edition/
- [4] "Jevons' Paradox." Wikipedia. https://en.wikipedia.org/wiki/Jevons_paradox
- [5] Alcott, B. "Unraveling the Complexity of the Jevons Paradox." Frontiers in Energy Research, 2018. https://www.frontiersin.org/journals/energy-research/articles/10.3389/fenrg.2018.00026/full
- [6] Economic Policy Institute. "The Productivity-Pay Gap." https://www.epi.org/productivity-pay-gap/
- [7] Bivens, J. & Mishel, L. "Growing Inequalities, Reflecting Growing Employer Power." Economic Policy Institute, 2021. https://www.epi.org/blog/growing-inequalities-reflecting-growing-employer-power-have-generated-a-productivity-pay-gap-since-1979-productivity-has-grown-3-5-times-as-much-as-pay-for-the-typical-worker/
- [8] "OpenAI Plans to Nearly Double Headcount." Fortune, March 2026. https://fortune.com/2026/03/21/openai-double-headcount-this-year-sam-altman-anthropic-google/
- [9] "OpenAI Hiring Expansion." Silicon Republic, March 2026. https://www.siliconrepublic.com/business/openai-double-headcount-by-end-of-year-reports-ft-ai-anthropic
- [10] "Why Boeing's Problems Began More Than 25 Years Ago." Harvard Business School Working Knowledge, 2024. https://www.library.hbs.edu/working-knowledge/why-boeings-problems-with-737-max-began-more-than-25-years-ago
- [11] "Boeing 737 MAX Groundings." Wikipedia. https://en.wikipedia.org/wiki/Boeing_737_MAX_groundings
- [12] "Boeing 737 MAX: Lessons for Engineering Ethics." PMC, 2020. https://pmc.ncbi.nlm.nih.gov/articles/PMC7351545/