‘It cannot provide nuance’: UK experts warn AI therapy chatbots are not safe


Having an issue with your romantic relationship? Need to talk through something? Mark Zuckerberg has a solution for that: a chatbot. Meta’s chief executive believes everyone should have a therapist and, for those who don’t, artificial intelligence can do that job.

“I personally have the belief that everyone should probably have a therapist,” he said last week. “It’s like someone they can just talk to throughout the day, or not necessarily throughout the day, but about whatever issues they’re worried about and for people who don’t have a person who’s a therapist, I think everyone will have an AI.”

The Guardian spoke to mental health clinicians who expressed concern about AI’s emerging role as a digital therapist. Prof Dame Til Wykes, the head of mental health and psychological sciences at King’s College London, cites the example of an eating disorder chatbot that was pulled in 2023 after giving dangerous advice.

“I think AI is not at the level where it can provide nuance and it might actually suggest courses of action that are totally inappropriate,” she said.

Wykes also sees chatbots as potential disruptors of established relationships.

“One of the reasons you have friends is that you share personal things with each other and you talk them through,” she says. “It’s part of an alliance, a connection. And if you use AI for those sorts of purposes, will it not interfere with that relationship?”

For many AI users, Zuckerberg is merely highlighting an increasingly popular use of this powerful technology. There are mental health chatbots such as Noah and Wysa, while the Guardian has spoken to users of AI-powered “grieftech” – chatbots that simulate dead loved ones.

Chatbots are also used casually as virtual friends or partners, with services such as character.ai and Replika offering personas to interact with. ChatGPT’s owner, OpenAI, admitted last week that a version of its groundbreaking chatbot was responding to users in a tone that was “overly flattering” and withdrew it.

“Seriously, good for you for standing up for yourself and taking control of your own life,” it reportedly responded to a user, who claimed they had stopped taking their medication and had left their family because they were “responsible for the radio signals coming in through the walls”.

In an interview with the Stratechery newsletter, Zuckerberg, whose company owns Facebook, Instagram and WhatsApp, added that AI would not squeeze human friends out of people’s lives but add to them. “That’s not going to replace the friends you have, but it will probably be additive in some way for a lot of people’s lives,” he said.

Outlining uses for Meta’s AI chatbot – available across its platforms – he said: “One of the uses for Meta AI is basically: ‘I want to talk through an issue’; ‘I need to have a hard conversation with someone’; ‘I’m having an issue with my girlfriend’; ‘I need to have a hard conversation with my boss at work’; ‘help me roleplay this’; or ‘help me figure out how I want to approach this’.”

In a separate interview last week, Zuckerberg said “the average American has three friends, but has demand for 15” and AI could plug that gap.

Dr Jaime Craig, who is about to take over as chair of the UK’s Association of Clinical Psychologists, says it is “crucial” that mental health specialists engage with AI in their field and “ensure that it is informed by best practice”. He flags Wysa as an example of an AI tool that “users value and find more engaging”. But, he adds, more needs to be done on safety.

“Oversight and regulation will be key to ensure safe and appropriate use of these technologies. Worryingly we have not yet addressed this to date in the UK,” Craig says.

Last week it was reported that Meta’s AI Studio, which allows users to create chatbots with specific personas, was hosting bots claiming to be therapists, complete with fake credentials. A journalist at 404 Media, a tech news site, said Instagram had been putting those bots in her feed.

Meta said its AIs carry a disclaimer that “indicates the responses are generated by AI to help people understand their limitations”.
