Why the “AI will replace us” narratives (and most AI strategies) are wrong
The “AI will replace X” narrative is everywhere. Headlines shout. Executives panic. Workers worry.
But the more time I spend with companies struggling with AI implementation, the more I think “Will AI replace us?” is the wrong question.
I’ve helped build everything from AI personalisation engines to banking decision-support tools, and seen where these systems genuinely excel and where they break down.
When we get it right, the results are awesome:
- Financial advisors stop spending hours on portfolios and admin, and start having deeper conversations with their clients about life goals, fears, and family dynamics. The role doesn’t disappear - it becomes more human.
- Healthcare workers stop drowning in case files and documenting routine observations, and start providing care that understands context, stakes, and human complexity beyond what patterns can capture. The expertise doesn’t disappear - it focuses where it matters most.
- Designers stop obsessing over pixel-perfect execution and start exploring 30 alternative concepts in the time they once spent on 2. The craft doesn’t vanish - it evolves.
But getting it right goes hand in hand with an uncomfortable truth: admitting we were wrong. Wrong about what our roles should be. Wrong about what makes us valuable.
AI systems now excel at pattern recognition, synthesis, and generation - revealing how much work was complex but ultimately mechanical. Difficult enough to feel important, structured enough to be automated.
Like: The analysis that consumed hours but led to obvious conclusions. The documentation that kept us busy but yielded little insight. The variations that felt creative but were really just rearranging.
My controversial take: this was always work machines could do. We just didn’t have the right machines yet.
And this is where I see so many companies’ AI strategies heading in the wrong direction.
They’re approaching AI as an optimisation tool: make current processes more efficient, current roles more productive, current people more machine-like. The unspoken goal? Reduce headcount while maintaining output.
They should be asking: “What becomes possible when our people stop doing mechanical work? How do we redesign roles around capabilities that actually require judgment, creativity, and understanding?”
Because some problems don’t have optimal solutions. They have human solutions.
Problems where context matters more than patterns. Where values conflict and someone must decide. Where trust needs to be built, not just information exchanged. Where the stakes are human, not computational.
Most companies will approach AI as a cost-cutting tool and wonder why they’re not seeing transformative results. A few will use it to redesign work around human capabilities. Both will use AI. Only one will thrive.