- cross-posted to:
- [email protected]
…and why would you ask an LLM something you already know about? :)
I ask for summaries and examples of things I understand well but struggle to explain. Sometimes it’s very helpful, and sometimes it’s just deranged nonsense.
That’s why I’m less likely to ask it about something I don’t already know. How would I know if the answer is accurate or coherent? At least with something like Wikipedia, I can track down a source and look for foundational truth, even if it is hidden under layers of bias.
The first things I asked Ollama after installing it were questions about permaculture and BattleTech.
The BT responses weren’t great and I’m not a botanist…
I’ve begun asking, “Did you just make that up?” before I share anything. A fair amount of the time it’s like: “You’re right to be skeptical, this doesn’t seem correct. Let’s reevaluate.” Or whatever.
It’s still an LLM, not a “truth machine”. Replying with “did you make that up” will just cause it to respond with the next most likely tokens.
Try this: when you know it has said something factual, use your technique anyway. It will likely “correct” itself by slightly rephrasing. Enough rephrasing might change the meaning of the sentence, but nothing is checking whether the statement is factual before or after.
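If you want to try that experiment against a local Ollama install, here’s a minimal sketch using Ollama’s REST chat endpoint (the localhost URL is the default; the model name and the example question are just assumptions, swap in whatever you have pulled): ask something you can verify yourself, then send the follow-up challenge and compare the two replies.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint
MODEL = "llama3"  # assumption: use whatever model you have pulled locally


def chat(messages):
    """Send the running conversation to the local Ollama server and return the reply text."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "messages": messages, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]


# Step 1: ask a question whose answer you already know.
messages = [{"role": "user", "content": "What year was the original Doom released?"}]
first = chat(messages)
print("First answer:\n", first)

# Step 2: challenge it even though the answer may well be correct.
messages.append({"role": "assistant", "content": first})
messages.append({"role": "user", "content": "Did you just make that up?"})
second = chat(messages)
print("After the challenge:\n", second)
```

Run it a few times on facts you can check; the interesting part is how often the second reply waffles or rephrases regardless of whether the first one was right.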
I’ve had some LLMs become extremely stubborn and deny that they’re wrong on basic facts, like the release year of certain media.