The unleashing on X (formerly Twitter) of a torrent of AI-generated images of women and children wearing bikinis, some in sexualised poses or with injuries, has rightly prompted a strong reaction from UK politicians and regulators. Monday’s announcement that X is being investigated was Ofcom’s most combative move since key provisions in the Online Safety Act came into force. None of the other businesses it has challenged or fined have anything like the global reach or political clout of Elon Musk’s social media giant. Whatever happens next, this is a defining moment. What is being defined is the extent to which some of the wealthiest companies on the planet are under democratic control.
But the announcement is only a first step. Ofcom has given no indication of how long its investigation will take. On Friday Downing Street described as insulting the decision to limit the use of the image‑making Grok AI chatbot to X’s paying subscribers. The government said that this amounted to turning the creation of abusive deepfakes into a “premium service”.
Such robust language was welcome. So was the announcement by the technology secretary, Liz Kendall, that a promised ban on the creation of non‑consensual intimate images will come into force this week, and that nudification apps will be outlawed quickly. At the weekend David Lammy claimed that JD Vance shares the UK government's objection to tools that enable users to undress children in photographs. Clearly, ministers do not want a fight with Donald Trump and would prefer US politicians to get on board with a challenge to big tech over image-based abuse. But Mr Musk's aggressive opposition to regulation may make a public battle inevitable. He wants Grok to be competitive with OpenAI's ChatGPT. And sex sells.
The UK is not alone in taking a stand. Indonesia and Malaysia have both restricted access to Grok in response to proliferating intimate deepfakes. Germany's media minister, Wolfram Weimer, has called on the European Commission to act against the “industrialisation of sexual harassment”. But with OpenAI expected to enable the creation of erotic material using ChatGPT soon, the fear is that the floodgates of deepfake pornography are about to open. Grave concerns about this are not limited to the need for age verification to protect children. Most UK 18-year-olds are still at school; they, and older adults, are also entitled to protection from the harm caused by intimate deepfakes. The risks from violent online pornography could also be amplified by AI, if it makes such material more easily accessible.
Tech businesses should never have been allowed to dictate the pace of change to the extent that they have, releasing new tools before their impact has been discussed or independently assessed. The UK’s online safety laws are among the world’s most advanced. But the craze for bikini shots has revealed a gap in a law that is more restrictive of images of people in underwear than in swimwear – even when the level of coverage is the same.
While children’s access to social media apps is a separate issue from the design of AI tools, it is no surprise that the issue of age limits has been raised by senior politicians of right and left in recent days. Ministers must get on the front foot, and decide what they think about children using AI. But the immediate priority is Grok. Ofcom has barked, and must show that it can bite as well.