Hampshire domestic abuse charity calls for more victim protection around deepfake imagery

There's been a backlash following reports that some users are prompting an AI chatbot to create sexualised deepfake content of women and children

The Internet Watch Foundation identified multiple instances in which users on X (formerly Twitter) used Grok to generate child sexual abuse material
Author: Freya Taylor | Published 12th Jan 2026

A Hampshire domestic abuse charity is calling for more protection for victims of sexualised deepfake imagery.

There's been a backlash following reports that some users are prompting the Grok AI chatbot on X to create sexualised deepfake content of women and children.

Image editing on X's Grok AI tool has now been limited to paid subscribers only.

Claire Lambon, CEO of Stop Domestic Abuse, said: "It must be so traumatic to see a photo of yourself that's been digitally changed and generated into a sexualised image.

"The impact that it can have - it can affect people's careers, confidence and reputation.

"This is not a laughing matter.

"There's no humour in generating a sexualised image of a woman or a minor.

"It is offensive and it should be treated really sensibly and severely.

"We are coming across it increasingly, and AI is improving every day.

"I dread to think what position we might be in in 12 months' time."

