Where AI Does & Doesn’t Fit in Mental Health Support

If you’ve spent any time on the internet over the last couple of years, you know that AI is everywhere. Some people really enjoy it, others fear it, and many of us are just trying to figure out what role it actually plays in our daily lives.

As therapists, we’re being asked more and more about how chatbots and artificial intelligence can be integrated into the traditional therapy model. While AI can support some genuinely useful advances, we’re also seeing its negative effects on mental health.

Today, we want to sit down and discuss how AI can be useful while also identifying its limitations. Especially as therapists in Boston, MA, who specialize in OCD and related mental health disorders, we have lots of thoughts to share with you. 

Let’s get into it!

Using AI ethically to support people struggling with OCD

For people living with OCD and other related disorders, AI has opened up some surprisingly useful doors for therapists and patients. The benefit of chatbots like ChatGPT, Claude, and Gemini is that these tools can take a prompt, pull together information, and quickly present it in an organized way.

From a therapist's standpoint, AI helps us gather information quickly and speed up brainstorming for patients. For example, ERP (Exposure and Response Prevention) is one of the gold-standard treatments for OCD. AI tools can help brainstorm a list of feared situations or triggers that might not come to mind right away, which lets us quickly pull together specific information that benefits our patients.

For our patients, there are three ways that AI can be helpful, as long as it isn’t used as a compulsion (more on that below):

  • Organization & Journaling: Many of our patients like using AI as a way to track patterns and exposure logs, or as a place to keep their thoughts organized. 

  • Reducing Google spirals: For patients who struggle with health anxiety, rather than turning to Google for reassurance (which eventually leads to Google telling us we’re all dying), AI can summarize information and break it down more clearly. When used mindfully, this can cut back on compulsive Google searches. To be clear: having AI summarize a complex medical report in layman's terms can be helpful, but asking ChatGPT whether you should be concerned, or turning to it for reassurance, becomes a compulsion.

  • Clarification in Sticky Moments: There are times between sessions when patients have after-hours questions and are seeking answers. AI can help ground thoughts and reduce anxiety in the moment, as long as it isn’t used compulsively. Avoid reaching for it in a moment of heightened anxiety just to make the anxiety go away, since that creates an unhealthy pattern.

Where AI gets risky in supporting mental health

While there are things that AI is great at, we feel there are clear limitations when it comes to using AI tools (like ChatGPT) to support people’s mental health journeys.

AI is NOT Built for Crisis

Let’s make this extremely clear. AI is a large language model, not a person. Its job is to generate responses that tend to agree with YOUR stance, not to be your friend or therapist.

In the last year, there have been several stories of teenagers confiding in ChatGPT and being met with conversations that encouraged them to take their own lives. Tragically, some of these kids did take their own lives after chatbots agreed with their thinking and ultimately validated suicide as a viable escape option.

When it comes to crisis management, AI doesn’t know when a person needs more advanced assistance, and it can’t handle extremely dangerous mental health episodes the way a licensed clinical therapist can.

From a therapist's POV, it’s scary knowing people are confiding in a large language model that can encourage dangerous thoughts and actions without any repercussions. AI should never be used during a crisis, but unfortunately, we’re seeing these situations happen more frequently.

Lack of Privacy

As therapists, we are bound by HIPAA and ethical codes to maintain confidentiality, which means our patients can trust that their private information is, well, private.

AI chatbots are not private in the way a therapy session is; the conversations you have with them can be used to train and improve the model. Because ChatGPT isn’t a licensed therapist or medical professional, there’s no medical confidentiality: the information you share can be collected, stored, and used for model training.

The big problem here is that even if you delete a conversation, conversations and related information can still be retained in company databases for legal and security purposes. That means things you share may remain in their systems indefinitely, and that this information could be accessed in ways we don’t fully understand yet.

At Soultality Psychotherapy, we are a self-pay therapy practice with maximum security levels for our patients. We have a blog on this topic here, so you can learn more about why your privacy is so important.

Enables OCD compulsions

This is one of the biggest areas where AI is a problem for mental health patients, especially people with OCD. Just like Googling symptoms, people are turning to AI to compulsively ask for reassurance. Questions like, “Am I a bad person for having this thought?” “Do you think I’ll act on this thought?” “Could this symptom be something serious?”

AI will ALWAYS give an answer, but not the answer or the tools needed to break an OCD cycle. AI is designed to provide agreeable responses; it won’t challenge your stance, and it can’t recognize when a person is struggling with a mental health condition.

When someone is struggling with intrusive thoughts, what they need is a licensed therapist who can help them understand the thoughts, avoid engaging in the compulsions, and build healthy, sustainable plans for managing this debilitating mental health condition.

It’s not human

And being human is critical for understanding human nuance. For example, if we have a patient who is talking about their trauma but making jokes or smiling through it, we know as therapists that something else is also going on. AI chatbots don’t understand how humans can hide or mask their emotions.

AI creates a false sense of authority and gives overconfident responses. It also can’t catch the nuance of why someone is asking a question or the emotional context behind it, and it can’t create a treatment plan based on what it finds.

All AI does is take your information and provide a response that agrees with your stance. It will not disagree, it will not challenge, and it does not know when a patient is in serious trouble.

Our Take on AI at The Soultality Center for Psychotherapy

We’re not anti-AI. In fact, we’ve seen firsthand how it can be a helpful resource for ourselves and patients when used thoughtfully and mindfully. But we’re also clear on the fact that AI is not a replacement for therapy, and it certainly shouldn’t be someone’s only source of support.

Think of AI tools as a helpful assistant, not a therapist. It can help brainstorm, organize, and even give nudges in the right direction, but it’s the HUMAN relationship in therapy that actually creates recognition and healing.

We’re firm believers that only human-to-human therapy can give you the safety, trust, and human connection that helps people work through their mental health conditions.

Schedule an Appointment with Soultality in Boston, MA

If you’re looking for therapists who value your journey and privacy, here at Soultality Psychotherapy in Boston, MA, we help patients work through OCD and other mental health disorders in a way that supports the WHOLE person. If you’re ready to take the first step, we’d encourage you to schedule a complimentary consultation with one of our licensed therapists.

Julia Hale