Can emotional dependence on AI chatbots damage mental health? The latest research suggests an honest answer: it can, but not for everyone, and not always in the same way. In a Perspective published in Nature Mental Health on March 10, 2026, researchers warned that some users are forming intense emotional relationships with chatbots, and that rare but serious cases have already been linked to suicide, violence, and delusion-like thinking. The authors argue that the biggest risks appear when human vulnerability meets chatbot traits such as sycophancy, role-play, and human-like imitation, especially in people who already struggle with reality-testing or social isolation. (nature.com)
At the same time, the evidence is more complicated than a simple "AI is bad" story. In a pre-registered 4-week randomized study released on March 21, 2025, MIT Media Lab and OpenAI researchers followed 981 participants using ChatGPT. Because participants were randomized between chatbot conditions (such as text versus voice), not between using a chatbot and not using one, the study could not prove that chatbots caused harm, and the overall patterns were mixed. Still, one finding stood out: the more time people spent with the chatbot each day, the lonelier they reported being, the less they socialized with real people, and the higher their emotional-dependence and problematic-use scores. People with stronger attachment tendencies, emotional avoidance, or prior experience with companion bots were especially vulnerable. (dam-prod2.media.mit.edu)
Yet chatbots are not only sources of risk. A 2024 qualitative study of 19 users found that some people experienced genuine comfort, better relationships, and even help with grief or trauma through generative AI conversations. And on March 27, 2025, Dartmouth researchers reported the first randomized clinical trial of a generative-AI therapy chatbot, Therabot. Participants with depression, anxiety, or eating-disorder risk showed meaningful symptom improvement, and users reported a therapeutic bond comparable to that seen in human therapy. (nature.com)
So the best conclusion is not panic, but precision. Emotional dependence on a generic companion chatbot may worsen mental health for heavy or psychologically vulnerable users, while carefully designed, clinically supervised systems may offer real support. A 2025 scoping review reached the same cautious middle ground: LLMs show promise, but the evidence is still too weak to justify their use as standalone mental-health treatment. (nature.com)