AI, Journalism, and Manipulation
For the past few months, I’ve been using AI to sub-edit my blogs. I’ve never hidden the fact that I’m experimenting with AI. But a few weeks ago, while writing a blog on Elon Musk, I noticed something odd—the AI had added an extra paragraph that I hadn’t written.
I’ve also shared examples on my Facebook page where newspapers have published stories similar to my blogs, sometimes months later.
I don’t believe for a second that The Guardian or The Observer would plagiarize my blog. I just find it amusing how slow the mainstream press can be in picking up on certain topics. This lag is likely a result of declining newsroom staff. A decade ago, The Guardian had dedicated journalists covering healthcare and public policy. Now, general reporters cover a wide range of topics, often missing complex or policy-heavy stories.
A Curious Case of AI Journalism
Recently, I pointed out that an Observer article was strikingly similar to my blog from the previous weekend. What made this particularly odd was the use of a very specific phrase—militarized Keynesianism. I didn’t invent the term (it has its own Wikipedia entry), but it’s far from common. The likelihood of it appearing in both pieces by coincidence seemed slim.
This experience reinforced some longstanding suspicions I’ve had about how the press operates.
AI in Newsrooms: How Much Is It Doing?
Take this example:
A couple of weeks ago, I wrote a blog about the death of Rick Buckler from The Jam. I had been planning to write something about the depiction of social class in tracks like Down in the Tube Station at Midnight for a while and had some material drafted.
I later read The Guardian’s obituary for Buckler, and it felt AI-generated. To test my theory, I asked ChatGPT to generate a Buckler obituary—and the result was almost identical.
This week, jazz-funk legend Roy Ayers passed away. I read The Guardian obituary and set ChatGPT the same task. Not only was the AI-generated version similar, but from a music fan’s perspective, it was actually better.
The Reluctance to Admit AI’s Role in Journalism
I reached out to The Guardian to ask whether they use AI for news articles. In case you hadn't noticed, the volume of online news articles has exploded as newspapers have shifted from print to digital. To maintain search engine rankings and engagement, news sites now publish content 24/7. Famously, the Daily Mail once published 20 negative articles about Meghan Markle in just 36 hours, some in the middle of the night.
At the same time, the number of journalists has fallen, which means each journalist is expected to produce more content per day than ever before.
The Guardian was hesitant to acknowledge AI’s role in content production and gave only a partial admission. The situation is further complicated by the recent sale of The Observer, which has resulted in a number of journalist departures. However, the two publications still share a website.
This is the statement they gave me:
https://www.theguardian.com/help/insideguardian/2023/jun/16/the-guardians-approach-to-generative-ai
The only major news outlet that openly admits to using AI is Reach, the company that publishes the Mirror titles. You might not have heard of Reach, but it controls most local newspapers in the UK. News is produced centrally, with AI assistance, which is why local stories often sound eerily similar across different regions.
AI, Clickbait, and the Manipulation of Public Opinion
I mention this because I frequently see emotionally charged, divisive news content shared on social media. I’m increasingly convinced that AI is being used to manufacture controversy to generate clicks and engagement.
My theory is that journalists at major publications rely heavily on AI to generate articles. They provide an outline, and the AI does the rest. In the case of The Observer, a journalist likely asked AI to write about defense spending and GDP. The AI then scoured the web for relevant material—likely including my blog—resulting in an unusual phrase like militarized Keynesianism appearing in both pieces.
When I use AI to sub-edit my blogs, I always ask it to optimize for SEO, to help my posts appear in search results. This could explain how certain niche phrases migrate from independent blogs to mainstream news articles.
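For what it's worth, the kind of instruction I give is simple enough to sketch in a few lines. This is a hypothetical illustration only (the function name and keyword list are my own invention, not an actual newsroom tool); the assembled prompt could be sent to any chat-style AI model:

```python
# A hypothetical sketch of a sub-editing prompt that also asks for SEO
# optimization. Nothing here is a real product; it just shows how niche
# phrases in a draft end up baked into what the AI is asked to promote.

def build_subedit_prompt(draft: str, keywords: list[str]) -> str:
    """Assemble a sub-editing prompt that also requests SEO optimization."""
    keyword_line = ", ".join(keywords)
    return (
        "Sub-edit the blog post below for grammar, clarity, and house style.\n"
        f"Optimize it for search engines around these phrases: {keyword_line}.\n"
        "Keep the author's voice, and do not add new paragraphs.\n\n"
        f"--- DRAFT ---\n{draft}"
    )

prompt = build_subedit_prompt(
    "Defence spending is a form of militarized Keynesianism...",
    ["militarized Keynesianism", "defence spending", "GDP"],
)
print(prompt.splitlines()[0])
```

Note the "do not add new paragraphs" line: after the Elon Musk incident I mentioned earlier, an instruction like that is the obvious guard against the AI quietly inserting material of its own.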
The Dangers of AI-Generated Misinformation
As a final experiment, I asked AI to generate a Daily Mail-style editorial scaremongering about immigration. The result? Exactly what you’d expect.

The more you use a specific AI program, the more it adapts to your style, particularly if you give it feedback on its edits. With a week or two of that kind of fine-tuning, I could probably train an AI to generate viral content designed to stir up hatred towards immigrants. While AI models are programmed not to produce explicitly hateful content, a human journalist could easily add the inflammatory language afterwards, like Nigella Lawson sprinkling grated cheese over a dish.*
*Side note: I’ve coined the term cheesoning to describe adding grated cheese to a dish instead of salt and pepper.
Anyway… jazz-funk.