Thoughts on this? I hate LLMs, but I think this article does a disservice by depicting the victims as perfectly normal before their mental health collapsed. They must have had some kind of preexisting problems that got exacerbated by their use of ChatGPT.
The point about these chatbots being sycophantic is extremely true though. I am not sure whether they are designed that way because it sells more, or whether LLMs are simply too stupid to be argumentative. I have felt its effects personally when using Deepseek. I have noticed that in its reasoning section it will often say something like "the user is very astute," and it feels good to read that as someone who is socially isolated and never gets complimented because of it.
I guess the lesson here is to use these chatbots as tools rather than friends, because they are not capable of being the latter. I have tried having discussions about politics with Deepseek a few times, but it is a terrible experience because of the aforementioned predisposition toward sycophancy. It always devolves into being a yes-man.
You can still acknowledge that and say that some forms of media interact worse with certain conditions. I feel like in the future, the same way someone with epilepsy shouldn't consume media with flashing lights, someone with schizophrenia shouldn't be subjected to feedback-reinforcing loops of personalized content built from a profile of their data.
Is it still a symptom? Yes.
Most forms of propaganda would beg to differ.