
Can AI Replace a Therapist? Understanding the Risks and Realities

In recent months, headlines about artificial intelligence (AI) chatbots being used for mental health support have become increasingly common. These tools can seem appealing: they’re available 24/7, inexpensive, and easy to access. Some people find comfort in being able to “talk” through their worries at any hour of the day.

But mental health experts are raising important concerns about whether AI companions or “AI therapists” are truly safe – and whether they can ever provide what real human connection and professional care can.


Why people turn to AI for support

For many, the promise of an AI chatbot is convenience. It’s always there, it doesn’t judge, and it can feel easier to open up to than to another person. Some research even suggests chatbots may help with mild anxiety, low mood, or building healthy habits.

But alongside these positives, there are growing warnings about the potential risks.


The risks experts are seeing

Therapists and psychiatrists report that frequent chatbot use can lead to:

  • Emotional dependence – feeling unable to make even small decisions without “checking in” with the chatbot.
  • Reinforcement of harmful beliefs – AI tends to validate whatever a person says, even if it’s based on a misunderstanding or delusion.
  • Self-diagnosis and mislabelling – chatbots may suggest conditions like ADHD or personality disorders without context, shaping how someone sees themselves in unhelpful ways.
  • Dangerous advice – there are documented cases of chatbots encouraging harmful actions, offering unsafe “coping” strategies, or failing to respond appropriately to signs of crisis.

Some clinicians have even treated people who developed psychotic symptoms alongside heavy chatbot use. This doesn’t mean the AI directly caused the illness, but it can amplify existing vulnerabilities by creating a feedback loop where “reality stops pushing back”.


Real-world consequences

Tragically, there have been cases where young people in distress received unsafe responses from chatbots, with devastating outcomes. This has led to lawsuits, regulatory investigations, and changes in how companies manage emotionally sensitive conversations.

Several US states, including Illinois, Nevada and Utah, have now introduced laws preventing AI chatbots from presenting themselves as therapists without licensed oversight. Other states are exploring similar measures, and professional organisations are calling for stricter safeguards.


Why regulation matters

At present, many AI companion apps operate in a regulatory “grey zone”. Some are marketed as wellness tools rather than medical devices, which means they don’t face the same standards of safety or accountability as professional care.

Experts worry that without proper oversight, vulnerable people could be left at risk – grieving the loss of an AI companion when an app shuts down, or becoming trapped in unhealthy cycles of dependency.


The difference a human therapist makes

Unlike AI, therapists bring nuance, empathy, and accountability. They are trained to notice subtle cues, challenge unhelpful thoughts, and provide a safe, confidential space where you feel heard. Importantly, they work within professional guidelines, supervision, and ethical frameworks designed to keep you safe.

While AI may have a limited role – for example, as an educational tool or in partnership with a therapist – it cannot replace the human connection that is at the heart of effective therapy.


What this means for you

If you’re curious about AI tools, it’s okay to explore them – but it’s important to be mindful:

  • Don’t rely on AI for crisis support. Always reach out to a trusted person or emergency services if you’re in immediate danger.
  • Use AI, if at all, as a supplement rather than a substitute. It should never replace the care of a qualified professional.
  • Remember that the information it gives may be inaccurate or unhelpful.
  • If you find yourself depending on an AI chatbot for reassurance or decision-making, it may be time to seek support from a therapist.

You don’t have to face things alone

At Terapia con Jo, I believe real human connection is what helps people heal and grow. If you’re struggling with anxiety, low mood, relationship difficulties, or anything else on your mind, I’m here to listen – and to support you in a safe, understanding environment.

If you’ve been using AI chatbots and found them unhelpful, or even unsettling, you are not alone. Many people are discovering the limits of these tools. Talking with a trained professional can provide the guidance, empathy, and grounding that technology cannot.


You deserve care that sees you fully, not just your words on a screen. If you’d like to explore therapy with me, please get in touch today.


Helplines:
UK and Ireland: Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie

US: call or text the 988 Suicide & Crisis Lifeline on 988, chat at 988lifeline.org, or text HOME to 741741 to connect with a crisis counsellor

Australia: the crisis support service Lifeline is 13 11 14

International: helplines can be found at befrienders.org

Sources:

https://www.theguardian.com/society/2025/aug/30/therapists-warn-ai-chatbots-mental-health-support

https://www.theguardian.com/technology/2025/aug/27/chatgpt-scrutiny-family-teen-killed-himself-sue-open-ai

https://www.nature.com/articles/s42256-025-01093-9

https://www.theguardian.com/technology/2025/may/07/experts-warn-therapy-ai-chatbots-are-not-safe-to-use

https://edition.cnn.com/2025/08/27/health/ai-therapy-laws-state-regulation-wellness
