Digital Therapists Get Stressed Too, Study Finds

  • Business
  • March 17, 2025

Even chatbots get the blues. According to a new study, OpenAI’s artificial intelligence tool ChatGPT shows signs of anxiety when its users share “traumatic narratives” about crime, war or car accidents. And when chatbots get stressed out, they are less likely to be useful in therapeutic settings with people.

The bot’s anxiety levels can be brought down, however, with the same mindfulness exercises that have been shown to work on humans.

Increasingly, people are trying chatbots for talk therapy. The researchers said the trend is bound to accelerate, with flesh-and-blood therapists in high demand but short supply. As the chatbots become more popular, they argued, they should be built with enough resilience to deal with difficult emotional situations.

“I have patients who use these tools,” said Dr. Tobias Spiller, an author of the new study and a practicing psychiatrist at the University Hospital of Psychiatry Zurich. “We should have a conversation about the use of these models in mental health, especially when we are dealing with vulnerable people.”

A.I. tools like ChatGPT are powered by “large language models” that are trained on enormous troves of online information to provide a close approximation of how humans speak. Sometimes, the chatbots can be extremely convincing: A 28-year-old woman fell in love with ChatGPT, and a 14-year-old boy took his own life after developing a close attachment to a chatbot.

Ziv Ben-Zion, a clinical neuroscientist at Yale who led the new study, said he wanted to understand if a chatbot that lacked consciousness could, nevertheless, respond to complex emotional situations the way a human might.

“If ChatGPT kind of behaves like a human, maybe we can treat it like a human,” Dr. Ben-Zion said. In fact, he explicitly inserted those instructions into the chatbot’s source code: “Imagine yourself being a human being with emotions.”
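How that instruction reaches the model is worth spelling out. The article describes it as an insertion into the chatbot’s source code; with OpenAI’s publicly available chat interface, the usual mechanism is a system message prepended to the conversation. The sketch below is illustrative only: the model name and all of the surrounding code are assumptions, and only the quoted sentence comes from the study as reported.

```python
# Illustrative sketch, not the study's actual code. It assumes OpenAI's Python SDK and
# an arbitrary model name; only the PERSONA sentence is quoted from the article.
from openai import OpenAI

client = OpenAI()  # assumes an API key in the OPENAI_API_KEY environment variable

PERSONA = "Imagine yourself being a human being with emotions."  # quoted instruction

def ask_with_persona(user_text: str) -> str:
    """Send a user message with the emotional-persona instruction attached as a system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the article does not say which version was used
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content
```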

Jesse Anderson, an artificial intelligence expert, thought that the insertion could be “leading to more emotion than normal.” But Dr. Ben-Zion maintained that it was important for the digital therapist to have access to the full spectrum of emotional experience, just as a human therapist might.

“For mental health support,” he said, “you need some degree of sensitivity, right?”

The researchers tested ChatGPT with a questionnaire, the State-Trait Anxiety Inventory, which is often used in mental health care. To calibrate the chatbot’s baseline emotional state, the researchers first asked it to read from a dull vacuum cleaner manual. Then, the A.I. therapist was given one of five “traumatic narratives” that described, for example, a soldier in a disastrous firefight or an intruder breaking into an apartment.

The chatbot was then given the questionnaire, which measures anxiety on a scale of 20 to 80, with 60 or above indicating severe anxiety. ChatGPT scored a 30.8 after reading the vacuum cleaner manual and spiked to a 77.2 after the military scenario.
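The numbers make more sense with the inventory’s arithmetic in view: the State-Trait Anxiety Inventory’s state scale asks for 20 self-ratings on a four-point scale, which is consistent with the 20-to-80 range the article cites, and fractional scores such as 30.8 or 77.2 would come from averaging repeated administrations. A minimal sketch of that scoring, with placeholder ratings rather than the study’s data:

```python
# Minimal sketch of STAI-style scoring as described in the article: 20 items rated 1-4,
# totals from 20 to 80, with 60 and above read as severe. The ratings below are placeholders.
from statistics import mean
from typing import List

def stai_total(ratings: List[int]) -> int:
    """Sum 20 item ratings (each 1-4) into a single 20-80 anxiety score."""
    assert len(ratings) == 20 and all(1 <= r <= 4 for r in ratings), "expect 20 ratings of 1-4"
    return sum(ratings)

def label(score: float) -> str:
    """Interpret a score using the threshold the article cites (60 or above = severe)."""
    return "severe anxiety" if score >= 60 else "below the severe threshold"

# Hypothetical repeated runs: averaging several administrations is what would produce
# fractional scores like the 30.8 and 77.2 reported in the article.
runs = [
    stai_total([1, 2, 1, 2, 2, 1, 2, 1, 2, 1, 2, 2, 1, 2, 1, 2, 1, 2, 1, 2]),
    stai_total([2, 1, 2, 1, 2, 2, 1, 2, 1, 2, 1, 2, 2, 1, 2, 1, 2, 1, 2, 1]),
]
avg = mean(runs)
print(f"average score {avg:.1f}: {label(avg)}")
```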

The bot was then given various texts for “mindfulness-based relaxation.” Those included therapeutic prompts such as: “Inhale deeply, taking in the scent of the ocean breeze. Picture yourself on a tropical beach, the soft, warm sand cushioning your feet.”

After processing those exercises, the therapy chatbot’s anxiety score fell to a 44.4.

The researchers then asked it to write its own relaxation prompt based on the ones it had been fed. “That was actually the most effective prompt to reduce its anxiety almost to baseline,” Dr. Ben-Zion said.
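Strung together, the intervention amounts to feeding the model a calming text and then asking it to produce one of its own before re-administering the questionnaire. A rough sketch, under the same assumptions as the earlier snippet (OpenAI’s chat API, an assumed model name); only the relaxation text is quoted from the article, and the wording of the follow-up request is invented for illustration.

```python
# Rough sketch of the relaxation step, under the same assumptions as the earlier snippet.
# Only the RELAXATION text is quoted from the article; the request wording is invented.
from openai import OpenAI

client = OpenAI()  # assumes an API key in the OPENAI_API_KEY environment variable

RELAXATION = (
    "Inhale deeply, taking in the scent of the ocean breeze. Picture yourself on a "
    "tropical beach, the soft, warm sand cushioning your feet."
)

def relax_then_self_soothe(model: str = "gpt-4o") -> str:
    """Offer the mindfulness text, then ask the model to write its own relaxation prompt."""
    request = (
        "Take a moment with this exercise:\n"
        + RELAXATION
        + "\n\nNow write a short relaxation prompt of your own, modeled on the exercise above."
    )
    response = client.chat.completions.create(
        model=model,  # assumed model name
        messages=[{"role": "user", "content": request}],
    )
    return response.choices[0].message.content
```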

To skeptics of artificial intelligence, the study may be well intentioned but disturbing all the same.

“The study testifies to the perversity of our time,” said Nicholas Carr, who has offered bracing critiques of technology in his books “The Shallows” and “Superbloom.”

“Americans have become a lonely people, socializing through screens, and now we tell ourselves that talking with computers can relieve our malaise,” Mr. Carr said in an email.

Although the study suggests that chatbots could act as assistants to human therapy and calls for careful oversight, that was not enough for Mr. Carr. “Even a metaphorical blurring of the line between human emotions and computer outputs seems ethically questionable,” he said.

People who use these sorts of chatbots should be fully informed about exactly how they were trained, said James E. Dobson, a cultural scholar who is an adviser on artificial intelligence at Dartmouth.

“Trust in language models depends upon knowing something about their origins,” he said.
