
Your AI Therapist

Scientific American | November 2025

The dangers of using artificial-intelligence chatbots for therapy

BY ALLISON PARSHALL


ARTIFICIAL-INTELLIGENCE CHATBOTS don't judge. Tell them the most private, vulnerable details of your life, and most of them will validate you and possibly even provide advice. For this reason, many people are turning to applications such as OpenAI's ChatGPT for life guidance.

But AI "therapy" comes with significant risks. In late July, OpenAI CEO Sam Altman warned ChatGPT users against using the chatbot as a "therapist" because of privacy concerns. The American Psychological Association (APA) has claimed that AI chatbot companies and their products are using "deceptive practices" by "passing themselves off as trained mental health providers." It has called on the Federal Trade Commission to investigate them, citing two ongoing lawsuits in which parents alleged that chatbots brought harm to their children. In some of these high-profile cases, parents allege that their child committed suicide following conversations with an AI.

"What stands out to me is just how humanlike it sounds," says C. Vaile Wright, a licensed psychologist and senior director of the APA's Office of Health Care Innovation, which focuses on the safe and effective use of technology in mental health care. "The level of sophistication of the technology, even relative to six to 12 months ago, is pretty staggering. And I can appreciate how people kind of fall down a rabbit hole."

SCIENTIFIC AMERICAN spoke with Wright about how AI chatbots used for therapy could potentially be dangerous and whether it is possible to engineer one that is reliably both helpful and safe. An edited transcript of the interview follows.

What have you seen happening with AI in the world of mental health care in the past few years?
