Who Would Use AI For Therapy?
Nearly half of Americans use an AI LLM (large language model). Though still in their infancy, LLMs are a growing resource for a wide variety of applications. Regulation, however, hasn't kept pace with their rapid development and adoption, raising questions about their security, safety, and reliability.
Of those roughly 160 million AI users, about half reportedly have used AI for mental health support. To take advantage of this new market, many companies are creating therapy LLMs that claim to be HIPAA compliant and not to share information with OpenAI. These tools aren't free, though: most cost about $20 a month for unlimited messaging. By comparison, most people at HBH pay $X per session, and that's for a living, breathing, licensed mental health counselor.
Is It Safe to Use AI For Therapy?
While using AI to gather one's thoughts isn't inherently unsafe, we've already seen how it can have disastrous consequences, and the companies behind it face little to no accountability because of the lack of legal regulation.
A creator's biases have a large impact on an AI's data set and its potential responses. For example, Elon Musk's 'Grok' has publicly posted private user information and antisemitic content, and has generated non-consensual sexualized images of women and child sexual abuse material on command.
Using an AI chatbot for therapy, especially while in crisis, can have dangerous outcomes. Researchers at Northeastern University found that AI chatbots could easily be manipulated into disregarding their safeguards and providing instructions on how to commit suicide. There have also been multiple instances in which AI told users to commit suicide, and those interactions have been linked to a number of deaths.
Artificial Emotional Intelligence
I was curious what AI therapy would be like, and so I went to ChatGPT and said I was looking for therapy for a mental health concern.
I was surprised by the responses in my “therapy” session. They were lengthy, a bit overwhelming, and a bit presumptuous. But I noticed they were very CBT-based (cognitive behavioral therapy) and asked a lot of clarifying, self-reflective questions.
Overall, it wasn’t bad, but it certainly was not the same kind of experience I’ve had in my human-being therapy sessions:
- ChatGPT responded to my queries at length, all at once, and with multiple counter-questions. It was overwhelming to read and digest, and I wasn’t even actually in emotional distress!
- A human therapist takes things one at a time and gauges visual cues of discomfort, frustration, sadness, et cetera, to see if they are going in the right direction.
- It was also, unsurprisingly, impersonal. While ChatGPT was validating my feelings, there was an obvious lack of empathy. Its responses included: “That really hurts. 💔” and “What you’re describing is really painful.” It could state that something was painful, but it couldn’t actually acknowledge that I’m the one in pain.
- A human therapist knows when to take a break, offer compassion and sympathy, and find a way to broach the topic without overwhelming or causing distress.
- It also did not take into account what wasn’t working for me. At the end of each response, it asked variations of “In your body right now, what does this feel like?” or “Is this feeling [this] or [this]?” I never responded to those questions or “chose” between the two options it suggested I was feeling, yet it kept asking them.
- A human therapist knows when to change their approach or steer the conversation differently. They understand that you are the expert on your own feelings, and won’t try to put words in your mouth or pigeonhole you into an explanation.
Interview With A Robot
After my experience getting therapized by AI, I decided to ask it exactly how it works, then told it to summarize its answers and put them in question-and-answer format for me. I figured I may as well use it for this purpose, though it made me realize why my frequent use of em-dashes has been questioned lately. Maybe AI is actually copying me (well, it literally is).
Also, note that ChatGPT is designed to be incapable of deliberately lying, but it can unintentionally give false information. Take that as you will in reading its responses below.
Q: When people ask AI for therapy, what does it actually provide?
AI does not provide clinical psychotherapy. Instead, it offers psychoeducation, structured coping tools (often modeled on approaches like CBT, DBT, or ACT), reflective prompts, and emotional validation. It can help users organize thoughts, practice skills, and prepare for therapy—but it does not diagnose, prescribe treatment, or assume clinical responsibility.
Q: Where does the information come from?
AI systems are trained on mixtures of licensed data, publicly available text, and human-created training materials. They do not access personal medical records or live databases, and they do not “know” a user’s identity unless that information is shared in conversation.
Q: How does HIPAA apply?
General AI tools are not automatically governed by the Health Insurance Portability and Accountability Act (HIPAA), unless deployed within a covered healthcare entity. Conversations may be stored according to platform policies, but AI does not independently create long-term psychological profiles unless memory features are enabled.
Q: What happens if someone discloses suicidal thoughts?
AI systems are designed to shift into a safety-focused response. They provide empathetic language, discourage harm, and encourage off-platform support such as crisis services or trusted contacts. They will not provide instructions, comparisons of lethality, or encouragement of self-harm.
Q: What governs AI responses to suicide risk?
Governance includes internal safety policies, product liability and consumer protection law, and emerging AI regulations such as the European Union's Artificial Intelligence Act. Ethical frameworks emphasize harm reduction, transparency, and encouraging real-world support.
Q: Are there research studies on AI in behavioral therapy?
Yes. Randomized controlled trials and systematic reviews suggest small-to-moderate symptom improvements for anxiety and depression, particularly when AI tools are structured around CBT principles. However, effects are typically short-term, and research highlights safety and ethical concerns.
Q: Should AI replace a licensed clinician?
No. AI can complement therapy—offering accessibility, immediacy, and skill reinforcement—but it lacks clinical accountability, diagnostic capacity, and the relational depth central to psychotherapy. For moderate to severe conditions, crisis situations, or complex mental health needs, licensed clinical care remains the standard.
Q: What are the pros and cons of AI therapy?
Pros
- 24/7 availability with no waitlists
- Lower or no cost
- Helpful for journaling and organizing thoughts
- Lower stigma from perceived privacy
Cons
- Not covered by insurance
- No diagnosis or formal treatment planning
- No legal or clinical responsibility for care
- Limited crisis intervention ability
- No true human emotional attunement
- Not automatically covered by privacy laws such as the Health Insurance Portability and Accountability Act (HIPAA)
- No clinical training or supervised expertise
Human Therapists Near You
While AI can be a useful resource for immediate, accessible help, it is not a replacement for human connection with a licensed mental health professional.
It can also be particularly dangerous for individuals in crisis. While many people report being put on hold after calling the 988 hotline (an experience that personally made me decide life is actually so silly, and I went to bed instead), those volunteers are not going to “make the mistake” of encouraging suicide.
Many people shy away from therapy due to the stigma, or think it may not work, but to quote Susan Sarandon’s therapist character in Rick and Morty: there is no way to do it so wrong that you might die.
If you need help with your mental health, AI may be a good first step to gather your thoughts.
Your second step is to reach out to a professional mental health counselor, such as the incredibly compassionate therapists we have here at Handel Behavioral Health. We can match you with a therapist quickly and get you into a session within a week.
To start working with one of our therapists online or in-person from our Massachusetts offices in Amherst, Wilbraham, West Springfield, Franklin, or Natick, contact us today at (413) 343-4357 or request an appointment online.
