Thoughts on this? I hate LLMs, but I think this article does a disservice by depicting the victims as perfectly normal before their mental health collapsed. They must have had some kind of preexisting problems that got exacerbated by their use of ChatGPT.
The point about these chatbots being sycophantic is extremely true though. I am not sure whether they are designed to be this way because it sells more, or whether LLMs are just too stupid to be argumentative. I have felt the effects personally when using Deepseek. I have noticed that often in its reasoning section it will say something like "the user is very astute," and it feels good to read that as someone who is socially isolated and never gets complimented because of it.
I guess the lesson here is to use these chatbots as tools rather than friends, because they are not capable of being the latter. I have tried a few times to have discussions about politics with Deepseek, but it is a terrible experience because of the aforementioned predisposition to sycophancy. It always devolves into it being a yes-man.
Oh yeah, I’ve had to tell ChatGPT to stop bringing up shit from other chats before. Like if something seems related to another chat, it’ll start referencing it. As if I didn’t just make a new chat for a reason. The worst part is the more you talk to them, the more they hallucinate, so starting a fresh chat is usually the best way to go about things. ChatGPT seems to be worse at hallucinating these days than Deepseek, probably for this reason: new chats aren’t actually clean slates.