Thoughts on this? I hate LLMs, but I think this article does a disservice by depicting the victims as perfectly normal before their mental health collapsed. They must have had some kind of preexisting problems that their use of ChatGPT then exacerbated.
The point about these chatbots being sycophantic is extremely true though. I am not sure whether they are designed to be this way, whether it is because it sells more or because LLMs are too stupid to be argumentative. I have felt its effects personally when using Deepseek. I have noticed that its reasoning section will often say something like “the user is very astute,” and that feels good to read as someone who is socially isolated and never gets complimented because of it.
I guess the lesson here is to use these chatbots as tools rather than friends, because they are not capable of being the latter. I have tried a few times to have discussions about politics with Deepseek, but it is a terrible experience because of the aforementioned predisposition to sycophancy. It always devolves into the model being a yes-man.
Semi-related, but many people seem to treat LLMs as some sort of all-knowing oracle. I saw a comment the other day where someone answered a serious advice question by citing ChatGPT, and when I said “Just because ChatGPT says so doesn’t make it true,” they acted like I was insane.
Like, it’s a machine that produces output based on whatever the input is. I’m not saying it is wrong all the time but it’s outright dangerous to abandon critical thinking as a whole and accept ChatGPT as some sort of deity. It’s not a real sentient being.
Tbh, it’s best practice to assume an LLM is wrong all of the time. Always verify what it says against other sources. It can technically say things that are factual, but because there is no way of checking directly via the model itself, and because it can easily bullshit you with 100% unwavering confidence, you should never trust what it says at face value. I mean, it can assign high probability to the correct answer and then, depending on how tokens get sampled and what is already in the context, draw one unlucky token and commit to a borked answer from there. Sorta like if humans could only speak by the rules of improv’s “yes, and…”: you can’t edit, reconsider, or self-correct, you just have to build on what’s already there, no matter how silly it gets.
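To make that sampling point concrete, here is a minimal sketch in Python with a completely made-up toy next-token distribution (not any real model or API): the model can put 90% of the probability on the right continuation and still occasionally sample the 7% option, and once that token is emitted it becomes part of the context with no mechanism to walk it back.

```python
import random

# Toy illustration of sampling going down a bad path.
# The tokens and probabilities are invented for this example; real models
# have vocabularies of ~100k tokens, but the mechanism is the same.
next_token_probs = {
    "Paris": 0.90,   # the "correct" continuation
    "Lyon": 0.07,    # plausible but wrong
    "Berlin": 0.03,  # wrong
}

def sample(probs):
    """Draw one token at random according to its probability."""
    r = random.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point rounding

random.seed(3)
first_token = sample(next_token_probs)
print("sampled:", first_token)

# Once a token is emitted, every later token is conditioned on it.
# There is no backtracking step, so the 7%-chance "Lyon" gets defended
# with the same fluent confidence as "Paris" would have been.
```

Run it a bunch of times without the fixed seed and most outputs are fine, which is exactly why the occasional confidently wrong one is so easy to swallow.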
There are articles in mainstream news outlets like the NYT where dumbass journalists share “prompt hacks” to make ChatGPT give you insights about yourself. Journalists are blown away by literal cold reading. The real danger of these chatbots comes from asking about topics you yourself don’t know much about. The response will look meaningful, but you will never be able to tell whether it has made a mistake, especially since search engines are useless garbage these days.