We call it “hallucination” when AI makes things up — but when humans do it, we call it imagination. Where’s the line?

    • Poayjay@lemmy.world · 28 days ago

      I feel like the word “glitch” is also too humanizing. There wasn’t a programming error; the LLM picked what was statistically likely to come next. It’s working as it’s supposed to. “Glitch” implies some kind of error.
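      To make “statistically likely” concrete: the model turns scores for every candidate next token into probabilities and samples from them. A minimal sketch in Python, with a made-up vocabulary and made-up logits standing in for what a real model would compute from the context:

      ```python
      import math
      import random

      # Hypothetical next-token scores (logits); a real LLM derives these
      # from the preceding text, here they are invented for illustration.
      vocab = ["Paris", "London", "banana", "1889"]
      logits = [4.0, 2.5, 0.1, 1.0]

      # Softmax: convert logits into a probability distribution.
      exps = [math.exp(x) for x in logits]
      probs = [e / sum(exps) for e in exps]

      # Sample the next token. Nothing here checks truth, only likelihood,
      # so a wrong-but-plausible token can and sometimes will be drawn.
      next_token = random.choices(vocab, weights=probs, k=1)[0]
      print({t: round(p, 3) for t, p in zip(vocab, probs)}, "->", next_token)
      ```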

      • defunct_punk@lemmy.world · 28 days ago

        I disagree that “glitch” is humanizing, but that’s just how I interpret the word. “Glitch” sounds very technical and digital to me. If we look at the results instead of the process and see that the output was “bad”, different from user expectations, etc., then I think “glitch” is appropriate. Something happened along the line from input to output that created a disconnect between what was expected to happen and what really happened.

        Regardless, to OP’s point: AI “hallucinations” are definitely nothing like real conscious hallucinations. It’s a disservice to real intelligence to suggest otherwise.

    • WxFisch@lemmy.world · 28 days ago

      The technical term used in industry is “confabulation”. I really think that if we used it instead of anthropomorphic words like “hallucination”, it would be easier to have real conversations about the limits of today’s LLMs. But then OpenAI couldn’t have an infinite valuation, so instead we hand-wave it away with inaccurate language.