
Why Are Americans Turning to AI Therapy?
A study titled 'Attitudes and perspectives towards the preferences for artificial intelligence in psychotherapy' found that most participants had a positive perception of AI in psychotherapy, and there are several reasons for the growing interest: chatbots offer greater accessibility, cost less, and provide a sense of anonymity that many find reassuring, according to a Psychology Today report. CBS News noted that AI can also be used to reduce therapists' workload by handling tasks such as billing and note-taking, which could in turn reduce human error in clinical care.

How AI Chatbots Are Supporting Mental Health
The landscape of AI therapy is growing rapidly, and a wide range of platforms now exists: CBT-centered chatbots use meditation and cognitive behavioral techniques, offer personalized guidance and crisis intervention, and, on premium plans, may even connect users with actual therapists, as per the report.
Skill development apps help users learn cognitive behavioral therapy (CBT) skills, personalize suggestions, and collect user information to enhance the experience, reported Psychology Today.
Self-guided wellness programs integrate AI chatbots with journaling, emotion monitoring, and therapeutic activities you can perform by yourself, according to the report.
Mood monitoring apps assist individuals in tracking their moods and symptoms, occasionally providing self-care advice, as per the Psychology Today report.
Conversational AI companions provide daily guidance, adapting to your needs, and are often aimed at people with mild anxiety or a tendency to overthink, according to the report.

Expert Warnings: Why AI Therapy May Be Dangerous For You
However, even though AI seems like a convenient way to support your mental health, AI therapy comes with risks. Professionals caution that the technology does not provide the human touch necessary for quality care. Sera Lavelle, PhD, warned that, "The risk with AI isn't just that it misses nonverbal cues—it's that people may take its output as definitive. Self-assessments without human input can lead to false reassurance or dangerous delays in getting help," as quoted by Psychology Today.

Privacy is also a major issue. For instance, BetterHelp agreed to a $7.8 million settlement for sharing responses to users' therapy questionnaires with Facebook, Snapchat, and others for targeted ads, affecting 800,000 users between 2017 and 2020, as reported by Psychology Today. Mental health information is particularly sensitive, and a breach can result in discrimination, insurance issues, and stigma, according to the report.

Edward Tian, CEO of GPTZero, said, "AI technology isn't always secure, and you may not be able to guarantee that your data is properly stored or destroyed, so you shouldn't provide any AI tool with any personal, sensitive information," as quoted by Psychology Today.
Greg Pollock, an expert on AI data leaks, said, "In my recent research, I've found AI workflow systems used to power therapy chatbots. These exposures show how low the barrier is to create a so-called AI therapist, and illustrate the risk of insecure systems or malicious actors modifying prompts to give harmful advice," as quoted in the report.
There have been concerning instances of AI chatbots giving dangerous recommendations. For example, in 2023, the National Eating Disorders Association shut down its chatbot "Tessa" after it recommended risky ways to lose weight, including 500–1,000 calorie deficits and the use of skin calipers, according to the Psychology Today report. Then in 2024, Character.AI was sued after a chatbot allegedly encouraged a teenager to take his own life, according to the report.
Even more alarming, AI has been found to contribute to severe mental illness. The first known case of AI-linked psychosis was reported in 2024, when a 60-year-old man developed psychosis after ChatGPT advised him to replace table salt with sodium bromide, as per the Psychology Today report. His bromide levels reached 1,700 mg/L, roughly 233 times the normal upper limit, causing delusions and psychiatric commitment, according to the report.
Another issue is that some chatbots tend to over-validate users' emotions, which can be dangerous if someone is considering suicide or experiencing delusions, mania, or hallucinations, as per the Psychology Today report.
FAQs
Can AI chatbots replace human therapists?
No, they lack the human empathy and judgment that real therapists provide, as per the Psychology Today report.
Is it safe to share personal information with mental health apps?
No. Some apps have shared user data with third parties, so read privacy policies carefully.