Thoughts on this? I hate LLMs, but I think this article does a disservice by depicting the victims as perfectly normal before their mental health collapsed. They must have had some kind of preexisting problems that got exacerbated by their use of ChatGPT.
The point about these chatbots being sycophantic is extremely true, though. I am not sure whether they are designed to be this way (because it sells more) or whether LLMs are just too stupid to be argumentative. I have felt its effects personally when using Deepseek. I have noticed that its reasoning section will often say something like “the user is very astute”, and as someone who is socially isolated and never complimented because of that, it feels good to read.
I guess the lesson here is to use these chatbots as tools rather than friends, because they are not capable of being the latter. I have tried a few times to have discussions about politics with Deepseek, but it is a terrible experience because of the aforementioned predisposition to sycophancy. It always devolves into it being a yes-man.
Everyone has certain traits. “Preexisting” conditions aren’t binary things like a missing leg. It’s more like a weak point in the spine that wasn’t that bad, but when it got overextended, it went bad. It’s kinda ableist to push these labels on people who can’t handle literal manipulation machines.
Humans should not use AI unless absolutely necessary. Same as regular TV, gambling, etc. All this stuff is highly dangerous.
Okay, but an AI basically never disagrees with your opinions, no matter how wrong they are, as long as they aren’t scientific claims. If people can’t subconsciously differentiate whether they are talking to a human or an AI, it doesn’t matter that it’s an AI. This “AI” (a marketing term for what is really an ML LLM) can be a tool, but unless there are laws and studies covering its use, there will just be more cases like this.
What do you think I meant by preexisting? I can’t parse your understanding of it from your response.
Well, someone who is diagnosed with a mental health condition obviously counts as having a preexisting condition.
But interacting with a bleeding-edge manipulation bot has, imo, no real known vulnerable groups.
It hasn’t been scientifically examined before, which is its own clusterfuck.
Tldr: everyone has “preexisting conditions” if you throw untested malicious programmes at them. It’s like saying a company had a preexisting condition for getting hijacked by ransomware, or that a person had a preexisting condition for getting kidnapped.
It is individualizing a systemic issue.
I mean, we’re really just talking about the diathesis-stress model, with chatbots being the requisite stressor in this case. It’s a new stressor, but the idea that some people are simply more vulnerable to/more at risk from certain stressors is not new.
You’re right, it’s not a new model. That doesn’t make it less stigmatizing, imo. Example: autistic people are a lot more prone to stress-induced mental health issues. This shifts the view from the capitalist murder machine to people who are “vulnerable”. That is the capitalist problem: individualizing systemic issues. Industrial exploitation shouldn’t exist; people who can’t deal with it aren’t vulnerable, they are sane.
And no, imo people don’t have to have a preexisting condition to fall prey to high-tech emotional manipulation. Such tech should not exist.