'The Jesusification of AI' by Tey Bannerman Image created with Google Nano Banana 2 and edited with Freepik

The Bible’s Book of Exodus contains one of the sharpest observations about human nature ever written. It goes like this:

Moses, the man leading an entire people to a new homeland, climbs a mountain to receive the laws that will shape their future. And then he’s gone. Days pass. Weeks. No updates. No timeline. No indication of when - or whether - he’s coming back.

Imagine the anxiety. Your leader has disappeared. The future he promised is invisible. You have no information, no control, and no idea what happens next.

So the people do something deeply human. They decide that if the future won’t reveal itself, they’ll build something to believe in instead. Something they can see, something they can touch - something that makes the invisible feel manageable. They gather their gold, melt it down, and build an idol - a golden calf. A single, tangible “thing” they can point to and organise their anxiety around.

The golden calf wasn’t a god, though. It was a projection surface. A place to put fear and hope when the future felt unknowable.

I think we’re doing it again.

“AI will take your job”.
“That looks like it was written by AI”.
“I used AI to plan my holiday”.
“AI is going to destroy creativity”.

Google search suggestions for “will ai”, March 2026

Listen to how we talk. We’ve turned “AI” into something almost religious - a single, gleaming word onto which we project hopes, anxieties, and predictions about the future. I’ve started calling it the Jesusification of AI. We worship it or we fear it - but either way, we talk about it as though it’s one thing. A singular force. An entity with intentions.

It isn’t.

What we call “AI” is dozens of distinct technologies that work in fundamentally different ways.

When your phone predicts the next word you’re about to type.
When Spotify auto-plays a song that becomes the soundtrack to your year.
When your Apple Watch shows you your health metrics.
When a doctor catches a tumour early because a diagnostic model flagged an anomaly smaller than a grain of rice.

All of these are “AI”. But they’re all built differently, they work differently, and they have completely different implications for your life.

But we’ve melted them all down into one golden word. And that collapse is doing real damage…

It’s making us afraid of the wrong things.

“AI will take your job” was one of the most repeated sentences of 2025. But it’s so vague it’s almost meaningless. Which technology? In what role? Replacing which specific tasks? Augmenting which others?

A friend of mine - a graphic designer - spent months in genuine career anxiety because “AI is replacing designers”. When we actually sat down and talked through his work, it turned out that generative image models could handle roughly 15% of what he does - producing initial concept variations - while being essentially useless for the other 85%: understanding client politics, navigating brand strategy, making judgment calls about cultural context, building relationships. His fear wasn’t proportionate to the reality. But “AI is replacing designers” doesn’t leave room for that nuance.

Multiply that by millions of people, and the result is a kind of quiet collective hallucination - not in the dramatic, dystopian sense, but in the everyday sense of millions of people forming strong opinions about a thing that doesn’t actually exist as a single thing.

“Will AI take our jobs?” Depends entirely on which technology, which tasks, which industries, what timeline. But the golden-word framing forces a binary answer: yes or no. And binary answers to complex questions aren’t just unhelpful - they’re paralysing.

It’s making us trust the wrong things.

Then there’s the opposite problem: misplaced faith.

A friend told me recently that he’d typed his chest pain symptoms into ChatGPT. He described the response the way you’d describe a doctor’s reassurance - casually, settled. But the tool that had put his mind at ease hadn’t examined him, didn’t know about his current prescriptions, and had no awareness of the respiratory virus spreading through his child’s school. It had generated a confident sequence of words. But “using AI” made that interaction feel like consulting an authority. In reality, it was closer to asking a very well-read stranger on a bus.

A recruiter friend described the mirror image. She’s drowning in cover letters that are polished, well-structured… and completely pointless. Candidates think they’ve found a shortcut to sounding impressive. They’ve actually found a shortcut to sounding like every other applicant who found the same shortcut. None of them stopped to think about who would read this cover letter, what that person sees hundreds of times a day, and whether the tool they’d trusted had any idea about either.

In both cases, the technology did exactly what it was designed to do. The trust failure isn’t in the systems - it’s in a label that tells people, “this is intelligent” before they’ve asked, “at what?”. “AI” projects a blanket authority that flattens every tool into one. A person can’t calibrate their trust appropriately when the same two letters describe a text generator and a cancer detection model. So they don’t calibrate at all. They just trust - or don’t - based on how they feel about the monolith.

It’s making us passive when we should be curious.

This might be the biggest cost of all.

When “AI” becomes a monolith - a single, all-powerful force - it starts to feel like something that happens to you. Something you can’t understand, can’t influence, and certainly can’t shape. It creates a sense of helplessness that isn’t justified.

When you’re told “AI is reshaping the economy”, what can you do? It sounds like the weather - vast, impersonal, beyond influence.

But that framing is a lie. Every tool and technology that gets called “AI” is the product of hundreds of human decisions. A team chose a specific model architecture, trained it on specific data with specific objectives, and deployed it in a specific context with specific trade-offs.

Humans made every one of those choices. They could have chosen differently. And when you interact with these systems - when you use them, when you’re affected by them, when you pay for them - you’re not standing before an unknowable force. You’re using something that was designed, that has assumptions baked in, and that responds to pressure from the people who use it.

The way out is simpler than you think.

Here’s what I find encouraging. You already have the skill this requires.

When a doctor suggests a treatment, you don’t nod and accept it. You ask questions. What kind of treatment? What are the side effects? What’s the recovery like? Are there alternatives? You don’t worship medicine as a monolith. You engage with it specifically.

We’re perfectly capable of this kind of thinking. We just haven’t applied it here yet.

The people at the bottom of the mountain didn’t need to understand how the calf was built to stop worshipping it. They needed to recognise it for what it was: gold that had been shaped by hands, from materials, with choices. Something made. Something that could be questioned.

When someone says “AI is going to replace teachers”, ask: which tool? Doing what part of teaching? The lesson planning? The emotional support when a student is struggling? The ability to notice that a child who’s usually engaged has gone quiet for three days? Which part, specifically?

When you use one of these tools, test its limitations in your own context. Lie to it (seriously). Give it false information and see if you get pushback or if it happily builds on the fiction. That thirty-second experiment will teach you more about what you’re actually dealing with than a year of headlines.

Because behind every one of these tools is a company that wants your attention, your trust, and your money. They have support teams, feedback loops, and commercial incentives. You can push back. You can leave. You can choose competitors that are more honest, more reliable, or simply better for the thing you’re trying to do. You have more influence over these systems than the word “AI” will ever let you feel.

None of this requires expertise. It requires the willingness to stop projecting and start asking.

What the calf was always about

The golden calf was never about the calf. It was about needing certainty when the future felt unknowable. Something solid to point to when everything else was shifting.

That’s understandable. The pace of technological change right now is genuinely disorienting. Wanting a simple frame - something to worship or something to blame - is one of the most human responses there is.

But the cost of that simplicity is agency. As long as “AI” remains a golden idol - all-powerful, singular, beyond questioning - we remain worshippers or fearers rather than participants.

Yet every time you ask, “which system are we actually talking about?”, every time you double-check an output, every time you make the choice to switch to another tool, you’re reclaiming something the monolith took from you: the understanding that this isn’t a force. It’s a collection of technologies and tools, built by people, that you have every right to question.

The future doesn’t need our worship. It needs us to be specific.