

I’m not saying it’s wrong all the time, but it’s outright dangerous to abandon critical thinking entirely and accept ChatGPT as some sort of deity.
Tbh, it’s best practice to assume an LLM is wrong all of the time. Always verify what it says against other sources. It can say things that are factual, but because there’s no way to check directly via the model itself, and because it can bullshit you with 100% unwavering confidence, you should never trust what it says at face value.

The model can have high confidence (meaning, a high baseline probability) in the correct answer and still, depending on token sampling and the surrounding context, draw one low-probability token and head down a path toward a borked answer. Sorta like if humans could only speak by improv’s “yes, and…” rules: you can’t edit, reconsider, or self-correct, you just have to build on what’s already there, no matter how silly it gets.
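A toy illustration of that failure mode (a plain Python/NumPy sketch with made-up numbers, not any real model’s distribution): the “correct” token dominates the softmax, yet sampling still occasionally draws a wrong one, and autoregressive decoding then conditions every later token on that mistake with no way to backtrack.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_token(logits, temperature=1.0):
    """Draw one token id from a softmax over the logits.

    Even when the 'correct' token has by far the highest
    probability, every other token keeps a nonzero chance
    of being sampled.
    """
    z = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(z - z.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical next-token distribution: token 0 is "correct"
# with roughly 85% probability, tokens 1 and 2 are wrong.
logits = [3.0, 1.0, 0.5]

draws = [sample_token(logits) for _ in range(30)]
print(draws)
# Mostly 0s, but a 1 or 2 slips through every so often. In a
# real model that one unlucky draw gets appended to the context,
# and every subsequent token is generated as if it were true:
# the "yes, and..." rule, with no edit or undo step.
```

Raising the temperature makes those unlucky draws more frequent, and even greedy decoding (always take the top token) only helps when the model’s ranking is right in the first place, since there’s still no mechanism for going back.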
The USian compulsion to externalize every problem it has by mapping it onto a foreigner (sometimes even a caricature of a foreigner the US itself made up) really should be studied. It’s like a knee-jerk inability to take on any responsibility for the US fundamentally being shit. Possibly in part due to the religion-like treatment of the constitution, and to the endless vilifying of other countries’ leaders and/or peoples over decades, which has kept one foreign specter or another constantly in public view.