What History and Real Experience Reveal
Something is happening right now in boardrooms across the country.
Leaders are cutting jobs because they feel pressure to cut jobs. Not because they have done the research. Not because they have a thoughtful plan. Because every headline says they should, because peers are doing it, and because “AI” has become the justification that requires no further explanation.
Here is what I think is actually true.
We are not living through a revolution in efficiency. We are living through a wave of reactive decision-making dressed up as one.
And waves pass.
I know this because I was building inside it.
For the last several months, I worked closely with developers and AI coding tools to build a real product. Not a prototype. A platform. With real users, real data, real compliance considerations.
When experienced developers stepped in to review what the AI tools had produced, they spent significant time correcting it. Not tweaking. Correcting.
Best practices that were skipped. Security gaps that looked fine until they were not. Implementation choices that worked in testing and would have failed in production.
This is not a complaint about the tools. They are genuinely useful. The best developers I know use them every day. But they use them the way a skilled craftsperson uses a power tool: with judgment, with oversight, with the knowledge that speed without precision creates problems you pay for later.
The code that AI generates compiles. It runs. It looks right. And in nearly half of cases studied across multiple research reviews, it carries security vulnerabilities that would not survive a proper audit. AI-assisted code also shows significantly higher rates of major issues compared to human-written code.
If you are building anything in a regulated industry, anything where data security, privacy, and compliance matter, please do not go blind with these tools. The gap between a working prototype and a secure, scalable, maintainable system is still human. That gap did not disappear because the code was generated quickly.
The math that nobody is actually doing.
A fifteen-person team becomes a five-person team.
The headline says efficiency. The reality says gaps.
When you replace three people with one person and one AI tool, you are not doing the same work more efficiently. You are doing less work, and hoping no one notices. You are surviving, not building.
Here is what the data actually shows: in 2024, AI contributed to the loss of roughly 12,700 jobs in the United States, while generating over 119,000 new ones. Nearly ten created for every one lost. Global projections point to a net gain of 78 million jobs by 2030.
And yet the dominant conversation inside most organizations is still: how fast can we cut?
That is not strategy. That is a wave.
We have lived this story before.
The printing press created editors, journalists, and typesetters. The industrial revolution built new industries where none had existed. The automobile gave rise to gas stations, repair shops, highway towns, the entire hospitality industry. The internet created roles that had no name in 1990.
Today, the majority of jobs did not exist 80 years ago.
The pattern is consistent across every technological wave in history. Jobs disappear. New industries emerge. Entirely new roles are created, roles people could not even name before the technology existed.
And every single time, people were certain that this time was different. It was not.
The technology did not lower the ceiling. It raised it. Just as every wave created new roles, the same is already happening in the work I am doing today.
AI will do the same. The transition will be real. There will be pain in it. The panic around it is optional.
The contradiction worth naming.
The same voices promoting AI as a revolution are also the ones expressing deep fear about it. This is not a personal critique. It signals that even those closest to the technology are uncertain.
And that matters. Because certainty in the public conversation is often performing something other than confidence.
What AI actually cannot do.
AI can process. It can generate. It can match patterns at scale.
But it cannot sense.
It cannot read a room. It cannot feel the weight of what was not said, or know when a moment calls for silence instead of information. It does not carry lived experience. It does not hold responsibility. It does not understand consequence.
Think about what it means to care for someone you love at the end of their life. The hand on the arm. The familiar voice. The presence of someone who actually knows you. No algorithm produces that. No humanoid robot approximates it.
And the idea that replacing human caregivers with machines is a cost-saving strategy reveals a confusion between efficiency and humanity that we should not let pass without naming.
Pattern matching is not judgment. Optimization is not care. And that same judgment underpins the work that truly matters, in code, in leadership, in human connection.
The question leaders are asking is too small.
Right now, most leaders are asking: how do we use AI to reduce people? That question leads to contraction.
A better question: what becomes possible for our people now that AI has taken the repetitive work off their plate? That question leads to expansion, to new roles, new capacity, new ways of building things that actually last.
What I believe is coming.
The organizations that moved too quickly, cutting too deeply, will feel the gaps. Not loudly. Quietly, as the quality problems surface, as the security vulnerabilities get discovered, as the institutional knowledge that walked out the door turns out to have been load-bearing.
And they will correct.
The leaders who are asking better questions now, who are integrating AI thoughtfully rather than reactively, who are treating it as a tool in the hands of skilled people rather than a replacement for skilled people, are building something that will still be standing when this wave settles.
I know this because I am in the middle of it.
I have seen developers catch flaws the tools missed, teams collaborate in ways AI cannot replicate, and leaders make tradeoffs that no algorithm could foresee.
That is what lasts.
Namita Mankad is an executive leadership coach and the founder of Oneness Leadership. She works with mid-career leaders navigating the gap between where they are and who they are becoming. Learn more at namitamankad.com.