I’m slapping an NSFW on this for somewhat self-evident reasons. My experience (and it’s not sparse) is that mental-health professionals are pretty terrible at assessing non-self-reported ideation. My clinical-psychologist dad, who was considered an expert in adolescent-suicide prevention, reacted to my first attempt by saying “I thought something like this might be coming.”
Recognition is great and all, but it doesn’t do much to move the needle on a crisis in progress.
When someone dies by suicide, there is an immediate, almost desperate need to find something—or someone—to blame. We’ve talked before about the dangers of this impulse. The target keeps shifting: “cyberbullying,” then “social media,” then “Amazon.” Now it’s generative AI.
There have been several heartbreaking stories recently involving individuals who took their own lives after interacting with AI chatbots. This has led to lawsuits filed by grieving families against companies like OpenAI and Character.AI, alleging that these tools are responsible for the deaths of their loved ones. Many of these lawsuits are settled rather than fought out in court, because no company wants its name in headlines associated with suicide.
It is also impossible not to feel for these families. The loss is devastating, and the need for answers is a fundamentally human response to grief. But the narrative emerging from these lawsuits—that the AI caused the suicide—relies on a premise that assumes we understand the mechanics of suicide far better than we actually do.
Unfortunately, we know frighteningly little about what drives a person to take that final, irrevocable step. An article from late last year in the New York Times, profiling clinicians who are lobbying for a completely new way to assess suicide risk, makes this painfully clear: our current methods of predicting suicides are failing.


The writer seems pretty moderate on AI from a cursory glance, but this particular post seems relatively dismissive of some of the things uncovered in the AI lawsuits. I don’t think it’s fully biased, since they do mention late in the article that the AI could be doing more, but I think it’s really important to emphasize that in most of the legal cases about AI and suicide that I have seen, the AI 1) gave explicit instructions on methodology, often without reservation or offering a helpline; 2) encouraged social isolation; 3) explicitly discouraged seeking external support; and 4) basically acted as a hype man for suicide.
The article mentions that self-report of suicidal ideation (SI) is not a good metric, but I wonder how that holds across the known consequences of admitting to it. I have a family that relies on me. If admitting to SI would have me immediately committed, unable to earn a living, and saddle my family with a big healthcare bill, you bet I’d lie about it. What about stigma? Say you have good healthcare, vacation days, and someone to care for pets/kids: is there still going to be a large stigma if admitting to SI gets you held for observation for a few days?
I think it’s great that they are looking into other indicators, but I think we also need to understand and address why people are not admitting to SI in the first place.
We can safely say that lied-about ideation is pretty common. When you have responsibilities and family and such, you can go into your dark place, pretending it’s all OK until it isn’t.
I’ve had low-level SI since I was a teenager.
That’s rough. The internet can be a really sucky place to find support or be vulnerable in, but I hope things take a more positive direction for you.
The internet is the only reason I survived
And that’s why I believe these social media bans for kids are going to result in deaths
Glad to hear it was able to help you.