AI is developing fast, but regulators must be faster | Letters


The recent open letter regarding AI consciousness on which you report (AI systems could be ‘caused to suffer’ if consciousness achieved, says research, 3 February) highlights a genuine moral problem: if we create conscious AI (whether deliberately or inadvertently) then we would have a duty not to cause it to suffer. What the letter fails to do, however, is to capture what a big “if” this is.

Some promising theories of consciousness do indeed open the door to AI consciousness. But other equally promising theories suggest that being conscious requires being an organism. Although we can look for indicators of consciousness in AI, it is very difficult – perhaps impossible – to know whether an AI is actually conscious or merely presenting the outward signs of consciousness. Given how deep these problems run, the only reasonable stance to take on artificial consciousness is an agnostic one.

Does that mean we can ignore the moral problem? Far from it. If there’s a genuine chance of developing conscious AI then we have to act responsibly. However, acting responsibly in such uncertain territory is easier said than done. The open letter recommends that “organisations should prioritise research on understanding and assessing AI consciousness”. But existing methods for testing AI consciousness are highly disputed, so they can only deliver contentious results.

Although the goal of avoiding artificial suffering is a noble one, it’s worth noting how casual we are about suffering in many organisms. A growing body of evidence suggests that prawns could be capable of suffering, yet the prawn industry kills around half a trillion prawns every year. Testing for consciousness in prawns is hard, but it’s nothing like as hard as testing for consciousness in AI. So while it’s right to take our possible duties to future AI seriously, we mustn’t lose sight of the duties we might already have to our biological cousins.
Dr Tom McClelland
Lecturer in philosophy of science, University of Cambridge

Regarding your editorial (The Guardian view on AI and copyright law: big tech must pay, 31 January), I agree that AI regulations need a balance so that we all benefit. However, the focus is perhaps too much on the training of AI models and not enough on the processing of creative works by AI models. To use a metaphor – imagine I photocopied 100,000 books, read them, and could then string together plausible sentences on topics in the books. Clearly, I shouldn’t have photocopied them, but I can’t reproduce any content from any single book, as it’s too much to remember. At best, I can broadly mimic the style of some of the more prolific authors. This is like AI training.

I then use my newfound skill to take an article, paraphrase it, and present it as my own. What’s more, I find I can do this with pictures, too, as many of the books were illustrated. Give me a picture and I can create five more in a similar style, even though I’ve never seen a picture like this before. I can do this for every piece of creative work I come across, not just things I was trained on. This is like processing by AI.

The debate at the moment seems to be focusing wholly on training. This is understandable, as the difference between training and processing by a pre-trained model isn’t that obvious from a user perspective. While we need a fair economic model for training data – and I believe it’s morally correct that creators can choose whether their work is used in this way and be paid fairly – we need to focus much more on processing rather than training in order to protect creative industries.
Michael Webb
Director of AI, Jisc

We are writing this letter on behalf of a group of members of the UN high-level advisory body for AI. The release of DeepSeek’s R1 model, a state-of-the-art AI system developed in China, highlights the urgent need for global AI governance. Even though DeepSeek is not an intelligence breakthrough, its efficiency highlights that cutting-edge AI is no longer confined to a few corporations. Its open-source nature, like that of Meta’s Llama and Mistral’s models, raises complex questions: while transparency fosters innovation and oversight, it also enables AI-driven misinformation, cyber-attacks and deepfake propaganda.

Existing governance mechanisms are inadequate. National policies, such as the EU AI Act or the UK’s AI regulation framework, vary widely, creating regulatory fragmentation. Unilateral initiatives like next week’s Paris AI Action Summit may fail to provide comprehensive enforcement, leaving loopholes for misuse. A robust international framework is essential to ensure AI development aligns with global stability and ethical principles.

The UN’s recent Governing AI for Humanity report underscores the dangers of an unregulated AI race – deepening inequalities, entrenching biases, and enabling AI weaponisation. AI’s risks transcend borders; fragmented approaches only exacerbate vulnerabilities. We need binding international agreements that cover transparency, accountability, liability and enforcement. AI’s trajectory must be guided by collective responsibility, not dictated by market forces or geopolitical competition.

The financial world is already reacting to AI’s rapid evolution. Nvidia’s $600bn market loss after DeepSeek’s release signals growing uncertainty. However, history shows that efficiency drives demand, reinforcing the need for oversight. Without a global regulatory framework, AI’s evolution could be dominated by the fastest movers rather than the most responsible actors.

The time for decisive, coordinated global governance is now – before unchecked efficiency spirals into chaos. We believe that the UN remains the best hope for establishing a unified framework that ensures AI serves humanity, safeguards rights and prevents instability before unchecked progress leads to irreversible consequences.
Virginia Dignum
Wallenberg professor of responsible AI, Umeå University
Wendy Hall
Regius professor of computer science, University of Southampton
