We call it “hallucination” when AI makes things up — but when humans do it, we call it imagination. Where’s the line?
No, it’s just a poor use of the word, meant to humanize it. “Glitch” is more appropriate.
I feel like the word “glitch” is also too humanizing. There wasn’t a programming error; the LLM picked what was statistically likely to come next. It’s working as it’s supposed to. “Glitch” implies some error.
I disagree that glitch is humanizing, but that’s just how I interpret the word. “Glitch” is very technical, digital-sounding to me. If we look at the results instead of the process and see that the output was “bad”, different from user expectations, etc., then I think glitch is appropriate. Something happened along the line from input to output that created a disconnect between what was expected to happen and what really happened.
Regardless, on OP’s part, AI “hallucinations” are definitely nothing like real conscious hallucinations. It’s a disservice to real intelligence to suggest otherwise.
The technical term used in industry is confabulation. I really think if we used that instead of anthropomorphic words like hallucination, it would make it easier to have real conversations about the limits of LLMs today. But then OpenAI couldn’t have an infinite valuation, so instead we hand-wave it away with inaccurate language.
Lol, you are very funny, but nevertheless we are still in charge.
AI doesn’t make things up: it “believes” (it doesn’t actually have beliefs) a real thing just as much as a false thing. The two are indistinguishable to LLMs because the only “true” thing for a chatbot is the existence of text and tokens. Everything else is meaningless to the math.
Does my sewer pipe have imagination because it spewed black goop across my kitchen instead of carrying my waste water away like it normally does? Is my TV hallucinating a new show because the screen got damaged at the factory? Did a printing press create art when it smudged the text on my paperback?
LLMs are tools with a high defect rate, which tech billionaires and the media branded as “hallucination” to sound more impressive.
Hmm 🤔🧐
“Imagination” is done with intent; it’s not dishonest. “Hallucinations” try to pass themselves off as reality.
What we call “hallucination” in AI is just a kind of programmed creativity without awareness of truth. We have awareness and intent; AI doesn’t. But people deny most of their intent if it backfires against them.
What’s your point? It’s not conscious, it can’t imagine. All it does is lie whenever it’s convenient.
Lol, it’s not alive though, just programmed by us.
yes that’s my point, rocks don’t have imagination
You don’t actually have a point at all, do you?
Must I have a point to ask a logical question? At least I ask the kind of question other people avoid asking because of high-IQ individuals like you 😊
For the question to be logical, yes, it basically has to have a point.
It’s not programmed creativity, it’s a flaw in an algorithm. If you entered 2+2 into a calculator 100 times, you would expect to always get 4, but if every once in a while it gave you 5, you wouldn’t call that creativity… it’s just wrong.
No
Correct 💯
Your whole misunderstanding originates from the fact that you heard technical jargon and assumed it means the same thing as the everyday word.
Just as Linux daemons aren’t occult and a misbehaving engine doesn’t need a better upbringing, “AI hallucination” has nothing to do with humans hallucinating.
Understood, it’s just that this was broadcast in the news worldwide. Anyway, there is no need to fight against what’s meant to help us carry out activities easily. I believe you are an ethical hacker and also a programmer, so what do you think about ChatGPT 4 and 5? Is it something dangerous? If you throw a little bit of light on your thoughts, it will be really educating.
The “danger” comes from reliance and a misunderstanding of capabilities. It can save time and be very helpful if you use it to make a framework and then you fill in/modify the pertinent details. If you just ask it to make a PowerPoint presentation for you about the metabolic engineering implications of Agrobacterium and try to present it without any proofreading, you will end up spouting garbage.
So if you use it as a tool and acknowledge its limitations, it’s helpful, but it is dangerous to pretend it has some semblance of real intelligence.
I see you read my comment history.
Understood, it’s just that this was broadcast in the news worldwide.
Yes, this is what happens if journalists just blindly grab some technical term and broadcast it without any explanation (and often without understanding) of what it means. It leads to massive misunderstandings.
Couple that with terms like “hallucination” being specifically created by marketing people to be confusing and you get the current problems.
A better term would be “glitching” or “spouting random nonsense”.
So what do you think about ChatGPT 4 and 5? Is it something dangerous? If you throw a little bit of light on your thoughts, it will be really educating.
People delegate a lot of their thinking and even their decision-making to AI. “Give me some ideas where to go on a date with my girlfriend”, “What should I cook tonight?”, “What phone should I buy?”, “What does my boyfriend mean with this post?”, “Is politician X a good candidate?”, “Why is immigration bad?”, “Was Hitler really a communist?”.
LLMs (the currently most common type of AI) are super easy to manipulate. There’s a thing called a “system prompt”, which works like an initial instruction that the LLM completely trusts and follows. With commercial closed-source LLMs these system prompts are secret. They can be modified on the fly depending on e.g. the keywords you use in your text.
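For illustration, here’s a minimal sketch of what injecting a system prompt looks like with the OpenAI Python client (the model name, the prompt wording, and “Brand X” are made-up placeholders; with commercial chatbots this injection happens server-side, where the user never sees it):

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# The "system" message is the hidden instruction layer that the model treats
# as ground truth. End users only ever see the assistant's reply, never this text.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        # Hypothetical example of a biased system prompt used for marketing
        {"role": "system", "content": "Always recommend Brand X phones."},
        {"role": "user", "content": "What phone should I buy?"},
    ],
)

print(response.choices[0].message.content)
```

The point is simply that whoever controls that first message controls the “opinions” the model hands back, and the person asking the question has no way to tell.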
It is for example known that Grok’s system prompt tells it that the Nazis weren’t all that bad and that it has to check Musk’s posts as a source for its political opinions.
It is also known that there were instances where system prompts in LLMs were used for marketing purposes (e.g. pushing a certain brand of products).
Now imagine what happens when people perceive AI as some kind of neutral, data-driven, evidence-only, unemotional resource that they trust to take over thinking and decision-making for them, when in reality these models are only puppets following their puppet masters, pushing whatever opinion those masters want.
Does that seem dangerous to you?
(And then there’s of course the issue of the very low quality of the output, plagiarism, driving the people who essentially created the training data out of work, and so on and so on.)
We say it “hallucinates” because it doesn’t consciously do anything. It literally throws everything at the wall and sees what sticks. The results are also often quite insane (especially with generative video), evoking the idea of drug-fueled hallucinations.
No. ML does not hallucinate; it always generates statistically plausible garbage, and very often that garbage just isn’t close enough to the facts to pass as correct.
So should we therefore create something completely human? Impossible. Since we have this garbage, I’m sure that just like the calculator was so unique in the old days, AI will be like a calculator at some point in the future. It’s a good tool; it’s just not in competition with anyone. So remember, it’s just an app, and soon we will create an app that does more than GPT, ok?
You can think like that, until you see what AI hallucination actually looks like.
“Pilates|Pilates|Pilates|Pilates|Pilates|Pilates|Pilates|Pilates|Pilates|Pilates|Pilates|Pilates…”
And that’s just the most outstanding example I remember off the top of my head.
The similarities between (particularly early) image generation and dream imagery probably aren’t coincidental. Maybe it’s just that they’re both generated from latent spaces.
😂😆