I’m slapping an NSFW on this for somewhat self-evident reasons. My experience (and it’s not sparse) is that mental-health professionals are pretty terrible at assessing non-self-reported ideation. My clinical-psychologist dad, who was considered an expert in adolescent-suicide prevention, reacted to my first attempt by saying “I thought something like this might be coming.”
Recognition is great and all, but it doesn’t do much to move the needle on a crisis already in progress.
When someone dies by suicide, there is an immediate, almost desperate need to find something—or someone—to blame. We’ve talked before about the dangers of this impulse. The target keeps shifting: “cyberbullying,” then “social media,” then “Amazon.” Now it’s generative AI.
There have been several heartbreaking stories recently involving individuals who took their own lives after interacting with AI chatbots. This has led to lawsuits by grieving families against companies like OpenAI and Character.AI, alleging that these tools are responsible for the deaths of their loved ones. Many of these lawsuits are settled rather than fought out in court, because no company wants its name in headlines associated with suicide.
It is also impossible not to feel for these families. The loss is devastating, and the need for answers is a fundamentally human response to grief. But the narrative emerging from these lawsuits—that the AI caused the suicide—rests on the assumption that we understand the mechanics of suicide far better than we actually do.
Unfortunately, we know frighteningly little about what drives a person to take that final, irrevocable step. An article in the New York Times from late last year, profiling clinicians who are lobbying for a completely new way to assess suicide risk, makes this painfully clear: our current methods of predicting suicide are failing.


I strongly doubt the writer is the problem here.