  • Examples of people using ChatGPT for therapy have proliferated online, with some claiming that talking to a chatbot every day has helped them more than years of therapy. Licensed professionals say that while AI can be a useful supplement to work with a licensed therapist, there are countless pitfalls to using ChatGPT for therapy.

ChatGPT has turned into the perfect therapist for many people: It’s an active “listener” that digests private information, and some would argue it appears to empathize with users as well as professionals can. Plus, it costs a fraction of what most human therapists charge. While many therapists bill $200 or more for a one-hour session, you can have unlimited access to ChatGPT’s most advanced models for $200 per month.

Yet despite the positive anecdotes online about using ChatGPT as a therapist, and the convenience of a therapist that’s accessible from almost any internet-connected computer or phone at any time of day, therapists warn that ChatGPT can’t replace a licensed professional.

In a statement to Fortune, a spokesperson for ChatGPT-maker OpenAI said the LLM often suggests that users who discuss topics like personal health seek professional advice. ChatGPT is a general-purpose technology that shouldn’t serve as a substitute for professional advice, according to its terms of service, the spokesperson added.

Singing the praises of AI therapy 

On social media, anecdotes about the usefulness of AI therapy are plentiful. People report the algorithm is level-headed and provides soothing responses that are sensitive to the nuances of a person’s private experiences. 

In a viral post on Reddit, one user said ChatGPT has helped them “more than 15 years of therapy.” The patient, whose identity could not be confirmed by Fortune, claimed that despite previous experience with inpatient and outpatient care, it was daily chats with OpenAI’s LLM that best helped them address their mental health.

“I don’t even know how to explain how much this has changed things for me. I feel seen. I feel supported. And I’ve made more progress in a few weeks than I did in literal years of traditional treatment,” the user wrote.

In a comment, another user got to the root of AI’s advantages over traditional therapy: its convenience.

“I love ChatGPT as therapy. They don’t project their problems onto me. They don’t abuse their authority. They’re open to talking to me at 11pm,” the user wrote.

Others on Reddit noted that even the most upgraded version of ChatGPT at $200 per month was a steal compared to the more than $200 per session for traditional therapy without insurance.

The dangers of overdependence on ChatGPT for therapy

Alyssa Peterson, a licensed clinical social worker and CEO of MyWellBeing, said AI therapy has its drawbacks, but it may be helpful when used alongside traditional therapy. Using AI to help work on tools developed in therapy, such as battling negative self-talk, could be helpful for some, she said. 

Using AI in conjunction with therapy can help a person diversify their approach to mental health, so they’re not using the technology as their sole truth. Therein lies the rub: Relying too heavily on a chatbot in stressful situations could hurt people’s ability to deal with problems on their own, Peterson said.

In acute cases of stress, being able to deal with and alleviate the problem without external help is healthy, Peterson added.

But AI can, in some cases, outperform licensed professionals with its compassionate responses, according to research from the University of Toronto Scarborough published in the journal Communications Psychology. Chatbots aren’t affected by the “compassion fatigue” that can hit even experienced professionals over time, the study claims. Despite its endurance, an AI chatbot may be unable to provide more than surface-level compassion, one of the study’s co-authors noted.

AI responses also aren’t always objective, licensed clinical social worker Malka Shaw told Fortune. Some users have developed emotional attachments to AI chatbots, which has raised concerns about safeguards, especially for underage users.

In the past, some AI algorithms have also provided misinformation or harmful information that reinforces stereotypes or hate. Because it’s impossible to know what biases went into creating an LLM, Shaw said, the technology is potentially dangerous for impressionable users.

In Florida, the mother of 14-year-old Sewell Setzer sued Character.ai, an AI chatbot platform, for negligence, among other claims, after Setzer committed suicide following a conversation with a chatbot on the platform. Another lawsuit against Character.ai in Texas claimed a chatbot on the platform told a 17-year-old with autism to kill his parents. 

A spokesperson for Character.ai declined to comment on pending litigation. The spokesperson said any chatbots labeled as “psychologist,” “therapist,” or “doctor” include language that warns users not to rely on the characters for any type of professional advice. The company has a separate version of its LLM for users under the age of 18, the spokesperson added, which includes protections to prevent discussions of self-harm and redirect users to helpful resources.

Diagnosing patients

Another fear professionals have is that AI could give faulty diagnoses. Diagnosing mental health conditions is not an exact science; it is difficult even for an AI, Shaw said. Many licensed professionals need years of experience before they can consistently diagnose patients accurately, she told Fortune.

“It’s very scary to use AI for diagnosis, because there’s an art form and there’s an intuition,” Shaw said. “A robot can’t have that same level of intuition.” 

People have shifted away from googling their symptoms to using AI, said Vaile Wright, a licensed psychologist and senior director for the American Psychological Association’s office of health care innovation. As demonstrated by the cases with Character.ai, the danger of disregarding common sense for the advice of technology is ever present, she said. 

The APA wrote a letter to the Federal Trade Commission raising concerns about companionship chatbots, especially those that label themselves as a “psychologist.” Representatives from the APA also met with two FTC commissioners in January to raise their concerns, before those commissioners were fired by the Trump administration.

“They’re not experts, and we know that generative AI has a tendency to conflate information and make things up when it doesn’t know. So I think that, for us, is most certainly the number one concern,” Wright said. 

While such options aren’t yet available, AI could in the future be used responsibly for therapy and even diagnosis, she said, especially for people who can’t afford the high price tag of treatment. Still, such technology would need to be created or informed by licensed professionals.

“I do think that emerging technologies, if they are developed safely and responsibly and demonstrate that they’re effective, could, I think, fill some of those gaps for individuals who just truly cannot afford therapy,” she said. 

This story was originally featured on Fortune.com