Elon Musk’s AI tool Grok has restricted image generation for most users after backlash over nonconsensual sexual and violent imagery. The move follows threats of fines and a possible UK ban, but critics say the changes do not go far enough.

Elon Musk’s artificial intelligence tool, Grok, has sharply curtailed its image generation and editing features for most users following global outrage over its use to create sexually explicit and violent content, much of it involving women without their consent.
The move comes amid mounting regulatory pressure, including threats of fines and the possibility of a ban on X in the United Kingdom. Grok’s image creation tools had been widely used to manipulate photographs of women — digitally removing clothing, placing them in sexualised positions, and generating violent imagery — sparking alarm among regulators, lawmakers, and civil society groups.
In a post on X, the platform formerly known as Twitter, the official @Grok account confirmed that image generation and editing are now limited to paying subscribers. As a result, the vast majority of users no longer have access to the feature.
Restricting the tool to paid users also means those generating images must provide identifiable information, including credit card details, making misuse more traceable. However, concerns remain that the restriction does not go far enough.
While the public Grok account on X has had its image generation capabilities heavily limited, a separate Grok app — which does not publicly share images — has reportedly continued to allow non-paying users to generate sexualised imagery, including depictions involving women and children.
Investigations cited by The Guardian revealed that Grok had been used to create nonconsensual pornographic videos, as well as graphic images portraying women being shot or killed. The revelations intensified scrutiny of Musk and his AI company, xAI, from regulators around the world.
In the UK, Prime Minister Keir Starmer warned on Wednesday that strong action could be taken against X if the company failed to curb the spread of AI-generated sexual imagery. He described the content as “disgraceful” and “disgusting,” urging the platform to “get a grip” on the problem.
Starmer said the communications regulator Ofcom has full government backing to act under the Online Safety Act, which grants it powers to seek court orders blocking platforms in extreme cases or impose fines of up to 10 percent of a company’s global turnover.
“It’s unlawful. We’re not going to tolerate it,” Starmer said. “I’ve asked for all options to be on the table. We will take action on this because it’s simply not tolerable.”
Thousands of sexualised images of women have reportedly been created in the past two weeks alone, following a late-December update to Grok's image-generation feature. Despite repeated public calls to disable the tool, X had taken no action until now.
Jess Asato, a Labour MP campaigning for stronger regulation of online pornography, criticised the partial rollback, arguing that allowing paying users to continue accessing the feature still enables abuse.
“Paying to put semen, bullet holes or bikinis on women is still digital sexual assault,” she said, calling on xAI to disable the feature entirely.
Some of the most explicit material has been produced outside the X platform itself, using the Grok Imagine tool. Research by AI Forensics, a Paris-based non-profit, identified around 800 images and videos containing pornographic or sexually violent content created through the app.
“These are fully pornographic videos and they look professional,” said Paul Bouchaud, a researcher at AI Forensics. He noted that the content was significantly more explicit than previous trends observed on X, including graphic sexual acts and violent imagery.
As pressure mounts from governments and regulators, Grok’s rollback signals a rare retreat by Musk’s AI venture — though critics argue it remains insufficient to address the scale of harm already caused.