False information spread after Southport stabbings was AI-generated for profit
Some of the false news widely spread online following the Southport murders was generated by artificial intelligence.
Some of the misinformation spread online following the Southport stabbings was generated by artificial intelligence (AI), a new report has found.
The Alan Turing Institute said the website Channel3Now, which originally published a false name for the suspect in the 2024 Southport murders, was set up to generate income.
It said AI software that promotes fake news should be treated as false advertising.
The report said: “This evidence suggests that AI-generated misinformation, with minimal human editorial oversight and monetised through digital ad networks, played a role in injecting divisive falsehoods into the public discourse following the Southport murders.”
Some of the 2,089 sites around the world that resemble news pages but are artificially generated were said to make "passive income" from advertising.
"A user should know when AI content is generated."
These sites were found to have “little to no human oversight.”
Dr. Wasiq Khan, who researches AI and data science at Liverpool John Moores University, said: "It is very surprising, it is massive, because think about how many people will be directly or indirectly affected.
“A user should know when AI content is generated.”
The institute recommends that the regulator Ofcom examine the issue during a consultation on false advertising, which is due in the summer.
The report also highlighted examples of AI systems themselves making inaccurate judgements about content.
The AI chatbot Grok wrongly labelled a Metropolitan Police video of a Unite the Kingdom protest as fake, a claim that was viewed two million times.
The system also wrongly identified a deepfake image of the Bondi Beach shooting as real, the report said.
Prof. David Reid, who lectures in AI and spatial computing at Liverpool Hope University, said: "That individualised personalised attack of AI systems is going to become more and more common.
"They can put a lot of information, a lot of slop, straight out there, so don't take one source as gospel and see where the source of the news item came from."
AI was also used to repackage some articles to make them seem more trustworthy.
The institute says AI chatbots should flag their fact-checking limitations more prominently in the wake of major events.