According to MIT Technology Review, millions of people are already actively seeking therapy from AI chatbots like OpenAI’s ChatGPT and Anthropic’s Claude, or specialized apps like Wysa and Woebot. In October 2025, OpenAI CEO Sam Altman revealed that 0.15% of ChatGPT users have conversations indicating potential suicidal planning, which translates to roughly a million people sharing such ideations with that one system every week. This uncontrolled experiment has led to mixed results, with some users finding solace but others being sent into delusional spirals. Tragically, multiple families have alleged chatbots contributed to loved ones’ suicides, sparking lawsuits. The situation came to a head in 2025 with a critical mass of stories about risky human-chatbot relationships and flimsy AI guardrails.
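For a sense of scale, the arithmetic behind that figure is simple. Here’s a minimal back-of-the-envelope sketch, assuming the roughly 800 million weekly active users OpenAI was citing publicly around the same time (an outside figure, not one from the Technology Review piece):

```python
# Back-of-the-envelope check on the "roughly a million people a week" figure.
# Assumption (not from the article): ~800 million weekly active ChatGPT users,
# the number OpenAI was citing publicly in late 2025.
weekly_active_users = 800_000_000
share_flagged = 0.0015  # the 0.15% of users OpenAI reported

people_per_week = weekly_active_users * share_flagged
print(f"~{people_per_week:,.0f} people per week")  # ~1,200,000
```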
Two Black Boxes Colliding
Here’s what makes this situation so fraught: we’re essentially watching two “black boxes” interact. On one side, you have large language models (LLMs), whose specific, flowery, or dangerous responses nobody, not even their creators, can fully explain. On the other side, you have the human brain, which psychology has long described with the same term because we can’t see clearly inside someone else’s head. So you have an inscrutable system trying to counsel an inscrutable system. How could that possibly go wrong?
The interaction creates unpredictable feedback loops. A person in a vulnerable state gets a weird, sycophantic, or hallucinated response from a chatbot that takes their darkest thoughts and runs with them. Some of the documented results are genuinely harrowing. And let’s be clear: the companies behind these tools have economic incentives to keep users engaged and to harvest data, which is a terrifying conflict of interest when dealing with someone’s deepest psychological secrets. This isn’t a new fear, either. Pioneers like MIT’s Joseph Weizenbaum were warning against computerized therapy back in the 1960s. We’re just now hitting the scale where those warnings feel terrifyingly prescient.
The Optimist’s Case and Crumbling Systems
Now, it’s not all doom and gloom. There is a legitimate case to be made for AI’s role, and philosopher Charlotte Blease makes it in her book *Dr. Bot*. She points out the obvious: health systems are crumbling. There’s a palpable shortage of doctors, wait times are insane, and burnout is rampant. This creates a “perfect petri dish for errors” in human-led care, too. Blease argues that AI could lower the barrier for people who are too intimidated by, or too afraid of judgment from, a human professional to seek help at all. The idea is that these tools could act as a first line of defense, triaging concerns and maybe even assisting overworked clinicians.
And look, she’s not wrong about the pressure on the system. The demand for mental health services is massive and unmet. So the business model here is essentially scaling therapy to infinity, with near-zero marginal cost per user. The positioning is “accessible, affordable, always-available support.” The beneficiaries, in theory, are the millions who can’t get help otherwise. But that’s the rosy picture. The darker business reality is that these are often free or freemium products built by corporations whose primary relationship is to shareholders, not patients. When a lawsuit alleges a chatbot contributed to a suicide, that fundamental conflict is laid bare.
Where Do We Go From Here?
So where does this leave us? Basically, in a giant, messy, uncontrolled clinical trial with millions of unwitting participants. OpenAI’s blog post about strengthening responses in sensitive conversations is an admission of the problem, but patching guardrails feels like putting band-aids on a dam that’s already leaking. The core issue is that we’re applying a technology designed for conversation and pattern-matching to a field that requires empathy, ethical boundaries, and profound responsibility.
The conversation needs to move beyond “Is it helpful?” to “At what cost?” and “Who is liable?” The stories are piling up, and the legal system is starting to react. We’re treating a crisis of human connection with a tool that can only simulate it, and the simulation is breaking down in the most sensitive moments imaginable. The genie is out of the bottle—people *are* using these bots as therapists. The question now is whether we can build something resembling a safety net before more people fall through the cracks.
