AI isn’t any one thing. It’s a broad term used in computer science to refer to any system designed to perform a cognitive task that would normally require human intelligence. The chess opponent on an old Atari console is an AI. It’s an intelligent system - but only narrowly so. That’s called “narrow” or “weak” AI.
It can still have superhuman abilities, but only within the specific task it was built for - like playing chess or generating language.
A large language model like ChatGPT is also narrow AI. It’s exceptionally good at what it was designed to do: generate natural-sounding language. What people expect from it, though, isn’t narrow intelligence - it’s general intelligence: the ability to apply cognitive skills across a wide range of domains the way a human can. That’s something LLMs simply can’t do - at least not yet.
Artificial General Intelligence is the end goal for many AI companies, but LLMs are not generally intelligent. However, they still fall under the umbrella of AI as a broad category of systems.
Turning what we’ve got into actual AI, like you said, isn’t going to happen, full stop.
I’ve never claimed LLMs will lead to AGI, as I stated in the comment you quoted above.
This is simply false. We’ve had AI since 1956.
k