Controversy grows over AI use in psychological counseling and therapy

Psychologists and medical professionals have raised two key concerns: how well AI can interpret and process human psychological data and whether its simulation of emotions and empathy crosses ethical boundaries.
Jodi Halpern, professor of bioethics and psychiatry at the University of California, Berkeley, U.S., told Spanish news outlet El País that using machines to simulate empathy or compassion to build emotional intimacy could be considered a form of manipulation.
She emphasized this concern in an interview with UC Berkeley Research, explaining that some forms of therapy are based on developing vulnerable emotional relationships between patients and therapists.
“And I’m very concerned about having an AI bot replace a human in therapy that’s based on a vulnerable emotional relationship.”
Amid a shortage of mental health professionals and limited access to care in Western healthcare systems, AI chatbots are increasingly seen as low-cost tools that offer round-the-clock support for people experiencing depression, anxiety or loneliness.
One of the most well-known is Wysa, which uses cognitive behavioral therapy, a widely accepted psychological treatment method. It has been included in the U.K. National Health Service’s digital health app library. According to John Tench, a director at Wysa, the chatbot is designed to steer users back to structured clinical tools when their responses go off track.
Another tool, Pi, developed by U.S.-based Inflection AI, has drawn attention for its warm, conversational tone. It belongs to a class of relational chatbots that generate “strikingly real” interactions using advanced language models. However, Halpern emphasized that whether AI presents itself professionally or as a companion, it cannot replace a professionally trained psychologist.
Some companies, including Pi's developer, have distanced themselves from medical accountability by stating that they do not offer healthcare services. However, their platforms often target users experiencing serious mental health challenges, raising concerns about ethical marketing and user safety.
Jean-Christophe Bélisle-Pipon, a researcher in ethics and AI at Simon Fraser University in Canada, said the space remains full of “gray areas and half-truths.”
“Some openly declare that they do not aim to replace human psychologists, while others exaggerate their capabilities and downplay their limitations,” he told El País.
Misleading claims should not be accepted in mental health, even if they are tolerated in other industries, he said.
“Not only does it risk causing serious misunderstandings among vulnerable individuals, but it also undermines the complexity and professionalism of true psychotherapy.”
Earlier this year, the MIT Media Lab, in collaboration with OpenAI, conducted a study of nearly 1,000 ChatGPT users, analyzing some 40 million interactions over four weeks.
The study found that those who relied heavily on AI chatbots experienced greater feelings of loneliness and emotional dependency. They also became less engaged in real-life social activities. Voice-based chats initially helped reduce isolation, but the benefits declined with prolonged use.
While experts continue to question whether AI should replace human therapists, many acknowledge that chatbots could serve as a temporary solution for millions who cannot afford or access traditional care.
Bélisle-Pipon warned that relying on a psychobot could “worsen symptoms” if the advice it provides is inappropriate.
Even so, he acknowledged that, in the current healthcare landscape, AI tools may serve as a temporary option for people with limited access to professional care.