UK MPs warn of repeat of 2024 riots unless online misinformation is tackled


Failures to properly tackle online misinformation mean it is “only a matter of time” before viral content triggers a repeat of the 2024 summer riots, MPs have warned.

Chi Onwurah, the chair of the Commons science and technology select committee, said ministers seemed complacent about the threat and this was putting the public at risk.

The committee said it was disappointed in the government’s response to its recent report warning social media companies’ business models contributed to disturbances after the Southport murders.

Replying to the committee’s findings, the government rejected a call for legislation tackling generative artificial intelligence platforms and said it would not intervene directly in the online advertising market, which MPs claimed helped incentivise the creation of harmful material after the attack.

Onwurah said the government agreed with most of its conclusions but had stopped short of backing its recommendations for action.

Accusing ministers of putting the public at risk, Onwurah said: “The government urgently needs to plug gaps in the Online Safety Act (OSA), but instead seems complacent about harms from the viral spread of legal but harmful misinformation. Public safety is at risk, and it is only a matter of time until the misinformation-fuelled 2024 summer riots are repeated.”

MPs said in a report titled Social Media, Misinformation and Harmful Algorithms that inflammatory AI-generated images had been posted on social media platforms in the wake of the stabbings, in which three children died, and warned that AI tools had made it easier to create hateful, harmful or deceptive content.

In its response published by the committee on Friday, the government said new legislation was not needed and that AI-generated content is already covered by the OSA, which regulates material on social media platforms. It said introducing further laws would hamper its implementation.

However, the committee pointed to testimony from Ofcom in which an official at the communications regulator said AI chatbots were not fully captured by the act and that further consultation with the tech industry was needed.

The government also declined to act immediately on the committee’s recommendation to create a new body to tackle social media advertising systems that allow “the monetisation of harmful and misleading content”, including a website that spread misinformation about the name of the Southport murderer.

In its response, the government said it “acknowledges the concerns” about the lack of transparency in the online advertising market and would continue to review the regulation of the industry. It added that an online advertising taskforce was working to increase transparency and accountability in the sector, particularly in relation to illegal ads and protecting children from harmful products and services.

Addressing the committee’s demand for further research into how social media algorithms amplify harmful content, the government said Ofcom was “best placed” to decide whether research should be undertaken.

Responding to the committee, Ofcom said it had undertaken work into recommendation algorithms but recognised the need for further work across wider academic and research sectors.

The government also rejected the committee’s call for an annual report to parliament on the state of misinformation online, arguing it could expose and hinder government operations to limit the spread of harmful information online.

The UK government defines misinformation as the inadvertent spread of false information, while disinformation is the deliberate creation and spread of false information to create harm or disruption.

Onwurah singled out the responses on AI and digital advertising as specifically concerning. “In particular, it’s disappointing to see a lack of commitment to acting on AI regulation and digital advertising,” she said.

“The committee is not convinced by the government’s argument that the OSA already covers generative AI, and the technology is developing at such a fast rate that more will clearly need to be done to tackle its effects on online misinformation.

“Additionally, without addressing the advertising-based business models that incentivise social media companies to algorithmically amplify misinformation, how can we stop it?”
