Last week, YouGov retracted a poll suggesting church attendance was rising in the UK. The results, it turned out, had been distorted by bots or AI. That is bad enough on its own, but worse is that the research was published last year and received far more attention than the retraction ever will.
To understand why that matters, it helps to know how most modern opinion polling actually works.
The vast majority of market research in the UK is now conducted online. People sign up to panels using their email addresses and are then invited to complete surveys in exchange for small rewards — a few pounds, an Amazon voucher, something like that.
That creates an obvious incentive to game the system.
“Bots” in this context are not always sophisticated AI systems. More often, they are large numbers of fake or semi-automated accounts, sometimes controlled by a single person, all trying to extract those payments at scale.
There is a second problem, which is more subtle.
The people who sign up to these panels are not, by themselves, representative of the country. So pollsters have to construct a sample that looks representative. In practice, that means selecting respondents so that each survey has the right balance of age, gender, education and so on.
And that has an unintended consequence.
If you belong to a group that is underrepresented on panels, you get invited to more surveys.
So if you are setting up fake accounts, you don’t choose randomly. You choose the profile that will be invited most often.
In the UK, that group is usually young men.
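The invitation logic above can be sketched in a few lines. All the numbers here are invented for illustration, but the mechanism is the one described: the scarcer a group is on the panel relative to the population, the more often its members are invited.

```python
# Hypothetical sketch of quota-based inviting. A group's invite weight
# is its population share divided by its panel share, so groups that
# are underrepresented on the panel get invited more often.
# (All shares below are made up for illustration.)
population_share = {"young men": 0.10, "young women": 0.10, "older adults": 0.80}
panel_share = {"young men": 0.03, "young women": 0.08, "older adults": 0.89}

invite_weight = {g: population_share[g] / panel_share[g] for g in population_share}

for group, w in sorted(invite_weight.items(), key=lambda kv: -kv[1]):
    print(f"{group}: invited {w:.1f}x as often as their panel presence suggests")
```

With these made-up shares, "young men" come out with an invite weight above 3: exactly the profile a reward-harvesting account would want to claim.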
Which means that bots designed to harvest survey rewards tend to present themselves as young men. And because those accounts are not answering questions seriously — or are answering randomly — they can produce some very strange results.
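A toy simulation makes the distortion concrete. Every number here is an assumption, not real data: suppose the true weekly church attendance rate among young men is 5%, and one in ten "young men" on the panel is actually a bot answering at random.

```python
import random

random.seed(42)

TRUE_RATE = 0.05            # assumed real attendance rate (invented)
N_REAL, N_BOTS = 900, 100   # assume 10% of the "young men" cell are bots

# Genuine respondents answer truthfully; bots effectively flip a coin.
real_answers = [random.random() < TRUE_RATE for _ in range(N_REAL)]
bot_answers = [random.random() < 0.5 for _ in range(N_BOTS)]

observed = sum(real_answers + bot_answers) / (N_REAL + N_BOTS)
print(f"true rate: {TRUE_RATE:.0%}, observed rate: {observed:.1%}")
```

Because the bots' coin-flip answers sit near 50%, even a 10% bot share drags a 5% true rate up towards double its real value. The rarer the genuine behaviour, the worse the distortion.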
This is why we keep seeing headlines about young men that don’t quite make sense. According to various surveys, large numbers of young British men attend church regularly, support Reform, and hold increasingly extreme political views.
Some of that may be real. But a lot of it is noise — the by-product of a system that rewards scale over accuracy.
And there is a final twist.
The people commissioning surveys are not always neutral observers. They want attention, clicks, and headlines. Dramatic findings travel further than dull ones. A surprising result is more likely to be published, shared and discussed than a predictable one. Newspapers and online media companies commission surveys to fill space and produce cheap headlines. And it is not in their interest to look too hard at the results.
In case you hadn’t spotted, the quality of journalism in the UK is going down the toilet. There used to be a distinction between reporting and editorial, facts and opinions. Newspaper owners want to present their opinions as facts, and commissioning opinion polls is an easy way to do this. Instead of “I don’t like Muslims” you can run a news story about the percentage of the British population who don’t like Muslims.
And for some bad actors on the fringes of politics, anything which normalises radical views, or makes them seem more common, is a bonus.
Put all that together — a system that can be gamed, a weighting process that amplifies certain groups, and a media environment that rewards novelty — and you have the conditions for exactly this kind of error.
The result is not just one bad poll. It is a steady drip of questionable findings that shape how we think about the country, and about each other.