Swansea Uni study reveals people can't tell the difference between AI-generated and real images
The research found that participants were unable to reliably distinguish them from authentic photos, even when they were familiar with the person’s appearance
A new study has revealed that artificial intelligence can now generate images of real people that are virtually impossible to tell apart from genuine photographs.
Using AI models ChatGPT and DALL·E, a team of researchers from Swansea University created highly realistic images of both fictional and famous faces, including celebrities.
They found that participants were unable to reliably distinguish them from authentic photos, even when they were familiar with the person’s appearance.
Across four separate experiments, the researchers noted that adding comparison photos or participants’ prior familiarity with the faces provided only limited help.
The researchers say the findings highlight a new level of “deepfake realism,” showing that AI can now produce convincing fake images of real people, which could erode trust in visual media.
Professor Jeremy Tree, from the School of Psychology, said: “Studies have shown that face images of fictional people generated using AI are indistinguishable from real photographs. But for this research we went further by generating synthetic images of real people.
“The fact that everyday AI tools can do this not only raises urgent concerns about misinformation and trust in visual media but also the need for reliable detection methods as a matter of urgency.”
One of the experiments, which involved participants from the US, Canada, the UK, Australia and New Zealand, showed subjects a series of facial images, both real and artificially generated, and asked them to identify which was which.
The team say the fact that participants mistook the AI-generated novel faces for real photos shows just how plausible they were.
Another experiment asked participants whether they could tell genuine pictures of Hollywood stars such as Paul Rudd and Olivia Wilde from computer-generated versions.
Again, the study’s results showed just how difficult people find it to spot the authentic version.
The researchers say AI’s ability to produce novel synthetic images of real people opens up a number of avenues for use and abuse. For instance, creators might generate images of a celebrity endorsing a certain product or political stance, which could influence public opinion of both that person and the brand or organisation they are portrayed as supporting.