I feel like the word “glitch” is also misleading. There wasn’t a programming error; the LLM picked what was statistically likely to come next. It’s working as it’s supposed to. “Glitch” implies some error.
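To make that concrete, here’s a toy sketch (made-up vocabulary and made-up probabilities, not any real model) of what the sampling step amounts to. Nothing crashes when the model outputs a wrong continuation; the sampler just draws from a distribution:

```python
import random

# Hypothetical next-token distribution (illustrative numbers only)
next_token_probs = {
    "Paris": 0.62,      # most likely continuation
    "Lyon": 0.21,
    "Berlin": 0.12,     # plausible-sounding but wrong continuation
    "Atlantis": 0.05,   # nonsense, yet still has nonzero probability
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# Sometimes this prints "Atlantis". No exception is raised, nothing
# malfunctions -- the system did exactly what it was built to do.
print(random.choices(tokens, weights=weights, k=1)[0])
```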
I disagree that glitch is humanizing, but that’s just how I interpret the word. “Glitch” sounds very technical and digital to me. If we look at the results instead of the process and see that the output was “bad,” different from user expectations, etc., then I think glitch is appropriate. Something happened along the line from input to output that created a disconnect between what was expected to happen and what really happened.
Regardless, on OP’s part, AI “hallucinations” are definitely nothing like real conscious hallucinations. It’s a disservice to real intelligence to suggest otherwise.
The technical term used in industry is confabulation. I really think if we used that instead of anthropomorphic words like hallucination it would make it easier to have real conversations about the limits of LLMs today. But then OpenAI couldn’t have infinite valuation so instead we hand wave it away with inaccurate language.
No, it’s just a poor use of the word meant to humanize it. “Glitch” is more appropriate.
Lol, you are very funny, but nevertheless we are still in charge.