The issue we’re talking about is not getting a reply from a bot in a chat or phone call. We’re talking about people with mental health issues using AI in a way that exacerbates their problems. Specifically, we’re talking about people believing AI is their personal companion and forming a personal connection with it, to the point that wrong answers generated by AI affect their well-being. The vast majority of people don’t use AI like that.
I’m seeing people use LLMs for:
Notably, in the dating, customer support, and mental health hotline cases, people are not always informed they’re talking to an LLM bot.
I don’t think the “exposure to marijuana” analogy works here, because people are being exposed to it by businesses without their consent.
https://sfstandard.com/2025/08/26/ai-crisis-hotlines-suicide-prevention/