Worcestershire company working to tackle 'emerging threat' of cyber attacks from AI-generated voices

FARx is supporting businesses across the world with technology it is developing to detect whether a voice is that of a human speaking or is computer-generated

Clive Summerfield (pictured) is the founder of FARx, which uses machine learning to detect whether a voice is human or computer-generated
Author: Elliot Burrow | Published 23rd Feb 2026

Deepfakes created to manipulate images or mimic a person's voice are becoming "incredibly convincing", the founder of a Worcestershire company working to tackle them has said.

Malvern-based FARx was set up in 2022 and uses machine learning for speech and face recognition to identify what is being said and who is speaking.

Founded by Clive Summerfield, it aims to improve security by detecting whether a voice was generated by a computer or actually comes from a human.

He said it has been helping US companies, including banks, financial services and customer service firms, to identify situations where AI (artificial intelligence) is being used to mimic a real person.

"Detecting synthetic voice reliably becomes a very important cybersecurity tool to defend yourself against what is essentially a flood of synthetic or clone voice attacks that are now starting to emerge in banking and finance and so on," he said.

"Voice becomes a much more dangerous modality for things like transferring funds from here to here for instance in a banking transaction.

"If it's a voice, it's a voice that is potentially giving a command, it’s giving an instruction, so detecting cloned voices and synthetic voices is incredibly valuable for organisations such as banks, government agencies that rely on telephone calls, emergency services for instance."

Mr Summerfield said this is his third deep-tech venture, following his first in speech recognition in the 1990s and his second in the 2000s, building voice biometric systems to provide security around telephone services.

He said the AI algorithm has been trained on a database of around 55,000 synthetic voices so far, which he estimates is about 90% of the way through its current data set.

"In this particular instance, what we've done is we've taken the algorithm and we've trained it on a data set of synthetic voices so that it learns what are the, if you like, the acoustic attributes in the signal, that essentially give away the fact that it's a synthetic voice.

"Many of these acoustic attributes people will find incredibly difficult to recognise and hear, but because we've essentially overtrained the algorithm, it becomes hypersensitive to these unique aberrations or these unique acoustic attributes that appear in synthetic voices.

"We don't hear them because they're not natural, so we're not used to hearing them, but FARx is now very used to hearing these small attributes and therefore can say, well, this is likely to be a synthetic voice as opposed to a real voice."
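The approach Mr Summerfield describes — training a classifier on labelled synthetic voices so it becomes sensitive to acoustic artefacts humans cannot hear — can be illustrated with a toy sketch. FARx's actual model, features and training method are proprietary and not described in detail here; everything below (the feature vectors, the single "artefact" dimension, the logistic-regression classifier) is an invented stand-in to show the general idea only.

```python
# Toy illustration of training a detector on labelled real vs. synthetic
# audio features. NOT FARx's method: features are faked as random vectors,
# with synthetic samples carrying a small, consistent offset in one
# dimension, standing in for an acoustic artefact people don't notice.
import numpy as np

rng = np.random.default_rng(0)

def fake_features(n, synthetic):
    # 8-dimensional stand-in for acoustic features (e.g. spectral stats).
    x = rng.normal(0.0, 1.0, size=(n, 8))
    if synthetic:
        x[:, 3] += 2.5  # subtle but consistent artefact in one dimension
    return x

# Labelled training set: 0 = human voice, 1 = synthetic voice.
X = np.vstack([fake_features(200, False), fake_features(200, True)])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Logistic regression fitted by plain gradient descent.
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(synthetic)
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

def is_synthetic(features):
    # Flag a sample as synthetic when the decision score is positive.
    return (features @ w + b) > 0.0

# Held-out check: the trained model separates the two classes because it
# has learned to weight the artefact dimension heavily.
accuracy = (np.mean(~is_synthetic(fake_features(50, False)))
            + np.mean(is_synthetic(fake_features(50, True)))) / 2
```

The point of the sketch is the one the article makes: the discriminating signal sits in a dimension a human listener would never attend to, but a model trained on labelled examples weights it heavily and separates the classes.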

A survey released on Safer Internet Day earlier this month found three in five young people are worried about AI being used to make inappropriate pictures of them.

It also revealed nearly two thirds (65%) of parents said they were concerned about AI being used to make inappropriate pictures of their children.

The government is currently running a consultation which it has said will "identify the next steps in its plan to boost children's wellbeing online", "ensuring they have a healthy relationship with mobile phones and social media".
