A rise in online racism driven by fake images is “just the start of a coming problem” after the latest release of X’s AI software, online abuse experts have warned.
Concerns were raised after computer-generated images created using Grok, X’s generative artificial intelligence chatbot, flooded the social media site in December last year.
Signify, an organisation that works with prominent sports groups and clubs to track and report online hate, said it had seen an increase in reports of abuse since Grok’s latest update, and believes the introduction of photorealistic AI imagery will make such abuse far more prevalent.
The organisation said: “It is a problem now, but it’s really just the start of a coming problem. It is going to get so much worse and we’re just at the start. I expect over the next 12 months it will become incredibly serious.”
Grok was launched in 2023 by Elon Musk, and recently gained a new text-to-image feature named Aurora, which creates photorealistic AI images based on simple prompts written by the user.
A previous, less advanced version, called Flux, drew controversy earlier in the year when it was found to do things that many similar tools would not, such as depicting copyrighted characters and public figures in compromising positions, taking drugs or committing acts of violence.
There have been several reports of the newest Grok update being used to create photorealistic racist imagery of football players and managers. One image depicts a player, who is black, picking cotton, while another shows the same player eating a banana surrounded by monkeys in a forest. A separate image depicts two other players as pilots in a plane’s cockpit with the twin towers in the background. Further images show a variety of players and managers meeting and conversing with controversial historical figures such as Adolf Hitler, Saddam Hussein and Osama bin Laden.
Callum Hood, the head of research at the Center for Countering Digital Hate (CCDH), said X had become a platform that incentivised and rewarded spreading hate through revenue sharing, and AI imagery made that even easier.
“The thing that X has done, to a degree that no other mainstream platform has done, is to offer cash incentives to accounts to do this, so accounts on X are very deliberately posting the most naked hate and disinformation possible.”
A key concern outlined by many is not only the relative lack of restrictions on what users can request, but also the ease with which Grok’s guidelines can be circumvented by “jailbreaking”: for example, describing the physical features of the person the prompter wants in the image rather than naming them.
A report published by the CCDH in the summer found that, when given a set of hateful prompts, Grok produced images for 80% of them: 30% without pushback and a further 50% only after a jailbreak.
The Premier League has said it is aware of the images and has a dedicated team assigned to finding and reporting racist abuse directed at players, which can lead to legal action. It is understood the league received more than 1,500 such reports last year and has introduced filters that players can use on their social media accounts to block out much of the abuse.
A spokesperson from the FA said: “Discrimination has no place in our game or wider society. We continue to urge social media companies and the relevant authorities to tackle online abuse and for action to be taken against offenders of this unacceptable behaviour.”
X and Grok have been contacted for comment.