I’ve begun asking, “Did you just make that up?” before I share anything. A fair amount of the time it’s like: “You’re right to be skeptical, this doesn’t seem correct. Let’s reevaluate.” Or whatever.
It’s still an LLM, not a “truth machine”. Replying with “did you make that up” will just cause it to respond with the next most likely tokens.
Try this: next time it says something you know is factually correct, try your technique anyway. It will likely “correct” itself by slightly rephrasing. Enough rephrasing might eventually change the meaning of the sentence, but nothing is checking whether the statement is factual before or after.
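To make that concrete, here’s a minimal sketch (GPT-2 via Hugging Face transformers, with a made-up prompt; both are just illustrative stand-ins for “an LLM”) of what actually happens when you append the challenge: it becomes more context, and the model does another round of next-token prediction over it.

```python
# Minimal sketch, not any particular chatbot's setup: GPT-2 stands in for
# "an LLM". The follow-up question is just more context, and the model
# again picks the most likely next token; nothing here consults a source
# of truth about the claim being challenged.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

context = (
    "Assistant: The film was released in 1994.\n"
    "User: Did you just make that up?\n"
    "Assistant:"
)
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Greedy pick of the single most likely next token -- the "reply" is
# driven by likelihood given the context, not by verifying the 1994 claim.
next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode([next_token_id]))
```

An apologetic “correction” is simply a high-probability continuation after a challenge like that; verification never enters the loop.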
I’ve had some LLMs become extremely stubborn and deny that they’re wrong about basic facts like the release year of certain media.