Every technology cycle arrives with its own fear narrative. Steam engines were going to destroy the craft trades. Computers were going to kill clerical work. The internet was going to hollow out retail. Artificial intelligence is now cast as the next great job killer: a force that will wipe out white-collar work and make large parts of the workforce redundant.
That framing is seductive, simple, and largely wrong.
AI’s real impact isn’t that it replaces jobs outright. It’s that it quietly reshapes who has leverage, who controls output, and who captures value. The danger for most workers isn’t immediate unemployment — it’s becoming interchangeable, monitored, and squeezed in ways that weren’t previously possible.
Key takeaway: AI is less about mass redundancy and more about a redistribution of power inside organisations.
The mistake everyone makes when talking about AI and work
Most commentary treats jobs as atomic units: a role exists, AI arrives, the role disappears. That’s rarely how labour markets work in practice.
Jobs are bundles of tasks, judgement calls, relationships, and accountability. AI doesn’t replace the bundle. It unbundles it. Some tasks get automated, others get standardised, and a smaller number become more valuable because they sit above the system rather than inside it.
This aligns with research from the OECD on AI and the future of skills, which shows that AI augments tasks rather than eliminating entire job categories in a linear way.
Why employers love AI (even when productivity gains are modest)
There’s an uncomfortable truth in many boardrooms: AI is often more attractive as a management tool than as a pure productivity enhancer. Even where efficiency gains are incremental rather than transformational, AI offers something employers value deeply — control.
Workflow tracking, output comparison, automated quality checks, and predictive performance metrics all shift power upward. This mirrors findings from the McKinsey Global Institute on AI and automation, which notes that AI often changes how work is done before replacing it.
Tasks that once relied on trust or tacit knowledge can now be monitored. Decisions that once sat with experienced staff can be nudged, constrained, or overridden by systems. This doesn’t eliminate jobs, but it changes the terms on which people do them.
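To make that shift concrete, here is a deliberately simple, hypothetical sketch of the kind of output benchmarking this enables. The data model, scores, and threshold are invented for illustration; no specific vendor’s tooling is being described.

```python
# Hypothetical sketch: once output carries an automated quality score,
# comparing and flagging workers becomes a one-line query. All names,
# scores, and thresholds here are invented for illustration.
from dataclasses import dataclass
from statistics import mean

@dataclass
class WorkItem:
    worker_id: str
    quality_score: float  # e.g. assigned by an automated quality check

def flag_below_baseline(items: list[WorkItem], baseline: float = 0.8) -> set[str]:
    """Return workers whose average scored quality falls below a baseline."""
    by_worker: dict[str, list[float]] = {}
    for item in items:
        by_worker.setdefault(item.worker_id, []).append(item.quality_score)
    return {w for w, scores in by_worker.items() if mean(scores) < baseline}
```

Nothing in that snippet is technically impressive, which is the point: the power shift comes not from sophisticated AI but from the fact that comparison is now cheap, continuous, and one-sided.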
The regulatory response: slow, cautious, and already behind
Governments know this shift is coming, but regulation is struggling to keep pace — partly because AI doesn’t fit neatly into existing legal categories.
In the UK, the government has published an AI regulation review to align principles across sectors without over-prescribing. The focus is on accountability, transparency, fairness, and contestability.
The EU’s AI Act takes a more formal approach, categorising AI systems by risk level, with stricter obligations for high-risk applications.
Both approaches have merit. Both also underestimate how quickly AI will be embedded into everyday management decisions rather than deployed as standalone “systems” that are easier to regulate.
Winners and losers in the AI reshuffle
Who benefits
- Senior decision-makers who sit above AI systems and interpret their outputs.
- Highly autonomous professionals whose value lies in judgement and accountability.
- Large organisations that can afford integration, governance, and legal oversight.
- Specialists who design and audit AI systems, not just operate them, a group the World Economic Forum’s Future of Jobs Report 2023 identifies as increasingly in demand.
Who loses
- Mid-level knowledge workers whose tasks are structured but not senior enough to escape automation pressure.
- Junior roles that traditionally relied on repetition for learning.
- Contract and gig workers whose output can be benchmarked and commoditised.
- Small firms competing with larger players deploying AI at scale.
Who is most at risk — and why it’s not who you think
The most vulnerable group isn’t those doing “low-skill” work. It’s those doing work that looks skilled but is highly standardised. When judgement can be encoded into rules or probabilities, it becomes easier to deskill tasks.
Oxford Martin School research on the future of employment offers a framework for thinking about task-level automation and the gradual unbundling of jobs, rather than their wholesale elimination.
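To see what “encoding judgement into rules or probabilities” looks like in practice, consider a hypothetical claims decision. The weights and field names below are invented; the pattern is what matters.

```python
# Illustrative only: a judgement call reduced to a fixed, scored rule.
# The weights and field names are invented for this sketch.
def approve_claim(amount: float, history_score: float) -> bool:
    """What was once an adjuster's judgement becomes a formula.

    Once the rule exists, running it needs none of the experience that
    produced it, and that is precisely the deskilling mechanism.
    """
    risk = 0.7 * (amount / 10_000) + 0.3 * (1.0 - history_score)
    return risk < 0.5
```

The work that looks skilled here, weighing amount against history, has been captured in two coefficients; whoever sets the coefficients holds the leverage.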
Why productivity statistics miss the point
Much of the public debate fixates on productivity gains. Are workers more efficient? Are firms producing more per hour?
This misses the distributional impact. Even modest productivity gains can dramatically shift bargaining power. If one person can now do the work of three — even imperfectly — employers gain optionality. They don’t need to fire two people to change the dynamic. The threat alone is often sufficient.
Historically, technology-driven gains were often shared through higher wages or shorter hours. There is little evidence that AI will repeat that pattern on its own, without regulatory or competitive intervention.
The hidden risk: hollowing out experience
One under-discussed risk is what happens to experience pipelines.
Many professions rely on junior staff doing routine work to build judgement. If AI takes over routine work, how do people develop the context that senior roles require?
Organisations may find themselves efficient in the short term and brittle in the long term: dependent on systems they don’t fully understand, with fewer people capable of stepping in when things go wrong. A similar pattern has been observed in finance, aviation, and software, where automation can hide structural risk until something fails.
What sensible organisations should be doing now
The smartest firms aren’t asking “how many people can we replace?” They’re asking harder questions:
- Which decisions should remain human?
- Where does accountability sit when AI informs outcomes?
- How do we preserve learning pathways while using automation?
- What happens when the system is confidently wrong?
These aren’t technical questions. They’re organisational ones, and they require leadership, not just software procurement or vendor checkboxes. Even so, the answers eventually get written into software; the sketch below shows how “which decisions remain human” becomes a literal design parameter.
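As a purely illustrative example, here is what a minimal human-in-the-loop guard might look like. The `Decision` type, threshold, and routing labels are all invented for this sketch; real systems would be far more involved.

```python
# Hypothetical sketch of a human-in-the-loop guard. The Decision type,
# threshold, and routing labels are invented for illustration; this is
# not any particular product's API.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def apply_with_guard(decision: Decision, auto_threshold: float = 0.95) -> tuple[str, str]:
    """Act automatically only above a confidence threshold; otherwise escalate.

    Note the limit of this design: a model that is confidently wrong
    sails straight past the guard, which is why thresholds alone are
    no substitute for accountability and auditing.
    """
    if decision.confidence >= auto_threshold:
        return ("auto", decision.label)        # system acts on its own
    return ("human_review", decision.label)    # escalated to a person

# A 0.99-confidence wrong answer is still applied automatically:
print(apply_with_guard(Decision("approve_claim", 0.99)))  # ('auto', 'approve_claim')
print(apply_with_guard(Decision("approve_claim", 0.60)))  # ('human_review', 'approve_claim')
```

The sketch also makes the fourth question above concrete: confidence thresholds filter uncertainty, not error, so governance cannot stop at the code.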
Why the “AI arms race” narrative is misleading
There’s a tendency to frame AI adoption as an arms race: adopt fast or be left behind. But most competitive advantage comes not from adopting AI first, but from integrating it thoughtfully and governing it wisely.
Many early adopters discover they’ve automated inefficiency, scaled bad assumptions, or eroded trust with customers and staff. Long-term advantage will belong to organisations that treat AI as infrastructure rather than magic — powerful, fallible, and requiring oversight.
What comes next
Over the next five years, expect less drama and more quiet restructuring. Fewer mass layoffs. More role compression. More performance surveillance. More pressure on mid-career professionals to justify their value.
At the same time, expect growing political pressure as the effects become visible — not because AI exists, but because its benefits and costs are unevenly distributed.
The challenge for policymakers won’t be stopping AI. It will be ensuring that the gains don’t accrue exclusively to those already at the top of the system.
Conclusion: the real question AI forces us to answer
The most important question AI raises isn’t “will it take my job?” It’s “who decides how my work is valued?”
AI shifts that decision upward — towards those who design systems, set incentives, and control distribution. Understanding that shift matters far more than worrying about replacement headlines.
Technology doesn’t determine outcomes. Power does. AI just makes power easier to exercise at scale.
About the author
Nick Marr writes on regulation, technology, property, and market disruption, focusing on how policy and innovation reshape real-world outcomes.
This article is for informational purposes only and does not constitute professional advice.