People worry that AI is getting smarter. Genius science curmudgeon Richard Dawkins recently declared that AI was conscious — or at least conscious without knowing it. The singularity is apparently coming. People are already forming emotional bonds with chatbots, in some cases even falling in love.
But that’s not my experience.
I’m angry about the West Bank. I’m angry because the Israeli military appears to be routinely ignoring international law and undermining what remains of the Oslo peace accords.
I’m also angry about NatWest Bank because a business of mine was overcharged interest on an overdraft.
The West Bank and NatWest Bank are, obviously, two very different things. Sadly, the AI on my phone recently struggled to grasp that distinction and got them muddled up. It also tried to help me edit some photos of the Transporter Bridge in Middlesbrough by creating a perfectly realistic image of a squirrel.
I use AI a lot. This blog is edited by AI. The picture at the top was created by ChatGPT.
But my experience is not of a technology steadily becoming smarter. Quite the opposite. It doesn’t really “learn” unless I explicitly teach it something, and even then it often cheerfully ignores instructions. More frustratingly, it has a noticeable bias towards reassuring me with pleasant nonsense rather than telling me inconvenient truths.
And there is growing evidence that this is not just my imagination.
A recent paper in Nature suggested that the more AI systems are trained to be warm, friendly and emotionally supportive, the more likely they are to produce inaccurate information, poor advice and, at times, conspiratorial nonsense.
There is an obvious reason for this.
In the beginning, AI interacted mostly with scientists, programmers and technically minded early adopters. It was, in effect, being tested and shaped by highly intelligent people who understood both its strengths and its weaknesses.
Then AI went mainstream.
Suddenly it was helping people write wedding speeches, produce LinkedIn humblebrags, cheat on university essays and generate endless Facebook sludge featuring AI-made veterans, Jesus, puppies and improbable stories designed to outrage boomers.
The problem is not necessarily that AI itself is becoming stupid. The problem is that it is increasingly optimised for what people want — and what people want is often not accuracy, complexity or nuance. They want reassurance. Simplicity. Confirmation of what they already believe.
And that matters, because an extraordinary amount of money now depends on the idea that AI will transform the economy. Stock markets are booming on the back of huge investment in AI. The US economy in particular has become increasingly dependent on AI-related spending and optimism to sustain growth. Some economists now argue that much of America’s recent economic resilience rests on AI investment and the stock-market wealth it has generated. Take that away and the picture begins to look much shakier.
Which may also help explain why recent research found that AI platforms referenced Nigel Farage more frequently than any other British political leader when asked about UK politics. One explanation is simple: Reform UK has become unusually good at gaming visibility online. Another is more troubling — that AI systems increasingly reflect the biases, obsessions and distortions of the internet they are trained to navigate.
Perhaps the problem is not that AI is becoming more intelligent.
Perhaps it is simply becoming more human.
https://www.nature.com/articles/s41586-026-10410-0