The way I see it:

AGI is inevitable given enough time, assuming we don’t destroy ourselves some other way first.
It has the capacity to solve literally all our problems and make life on Earth as close to utopia as possible.
That same capacity, however, also enables it to end the human race - either intentionally or as a byproduct of misalignment.
If the “West” doesn’t build it first, then China will. There’s no second place in this race.
Even if all nation-states somehow agreed to stop its development, a rogue underground group would do it - or possibly some random dude in his mom’s basement.
I genuinely see no solution to this. I can only hope things turn out well, or at the very least that it doesn’t happen during my lifetime. The genie isn’t going back into the bottle.
“I genuinely see no solution to this. I can only hope things turn out well, or at the very least that it doesn’t happen during my lifetime. The genie isn’t going back into the bottle.”
Turn off the computer, go outside, or stop doing drugs.
AGI might or might not be inevitable, but LLMs are very evidently not a path leading to it.
If someone really believes AGI is possible and will solve everything, they should be the first to wage active war against this generation of “AI”, though at this point it’s almost certainly too late already.
The future has been murdered for short-term profit, and once the bubble pops it’ll take ages before anyone invests in anything remotely related to AI again, despite LLMs having absolutely nothing to do with AI.
Not that investment would do any good during the dark ages that are to come, while we sift through the slop to try to find any remaining fragments of actual information, science, and culture.
AI is not something somebody is going to develop in their mom’s basement. AGI is NOT inevitable. The current models may grow sophisticated enough that it’s hard to distinguish them from AGI, but they will still be LLMs.
I see the current AI bubble as a bunch of guys digging a hole, realizing they can’t get out and deciding the only way out is to keep digging.
“AI is not something somebody is going to develop in their mom’s basement. AGI is NOT inevitable.”

Plenty of AI systems have already been developed by private individuals on their personal computers. This is not hypothetical. And I’m not claiming that our first AGI will have anything to do with LLMs.
I view AGI as inevitable because it’s the natural end goal of us incrementally improving our AI systems over a long enough period of time. As with all human-created technology, we will keep improving it. It doesn’t matter how slow the process is - as long as we keep heading in that direction, we will eventually reach the destination. The only things that could stop us, as far as I can see, are either destroying ourselves some other way before we get there or substrate dependence - meaning general intelligence simply cannot be created without our biological wetware. However, I see no reason to assume that, since human brains are made of matter just like computers are, and I don’t think there’s anything supernatural about intelligence.
The term AI has been greatly diluted over time. I guess I should have said AGI instead.
For your second point, I quote the Spartans: “If.” Current tech is hugely expensive.
“It has the capacity to solve literally all our problems and make life on Earth as close to utopia as possible.”
Sure… if it weren’t in the hands of people whose main purpose is to gather more money, resources, and power.
It won’t solve all our problems. It will solve theirs.
The thing is, all this can be true (and I don’t really understand why you’re being downvoted), but it’s also true that LLMs are no more evidence that we are close to AGI than ELIZA was.
AGI is inevitable, but it won’t come from an LLM, and all the hype in that direction from Anthropic, OpenAI et al. is just so much bullshit.
The problem is, we don’t need AGI to experience the catastrophic consequences; as bad or worse will be idiotic human intelligences putting very-much-not-AGI in charge of things it has no right to be in charge of, because they drank their own Kool-Aid (or rather, the investors did). That, unfortunately, is the future we are speedrunning - Skynet never needed AGI, it just needs fucking idiots to put an LLM in charge of a weapons system.
(As for AGI, my gut feeling is that it will come from the intersection of neural networks and quantum computing at scale - I’ll be filling my bunker with canned goods when the latter appears to be close on the horizon…)
I’d say LLMs are not necessarily an indicator that we’re close to AGI, but they’re not a non-indicator either. Certainly more of an indicator than the invention of the steam engine was. For narrowly intelligent systems, they’re getting quite advanced. We’re not there yet, but I worry that the moment we actually step into the zone of general intelligence might not be as obvious as one would think.
However, I also don’t think there’s any basis to make the absolute claim that LLMs will never lead there, because nobody could possibly know that with that degree of certainty.
And yeah, there are multiple ways to screw things up even with narrowly intelligent AI - we don’t need AGI for that.
I mean, I’m not particularly bothered about convincing anyone else, but personally I am absolutely 100% sure that no technology that is cognizant of absolutely nothing but tokens of language (entirely arbitrary human language at that, far from any fundamental ground truth in itself), and that is entirely incapable of discerning any actual meaning from that language other than which tokens appear likely to follow another, is ever, under any circumstances, going to lead to AGI.
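To make the “which tokens appear likely to follow another” point concrete, here’s a toy sketch of pure next-token prediction - a bigram table built from a tiny corpus. This is purely illustrative: real LLMs learn these statistics with a neural network over vast corpora rather than a lookup table, but the training objective is the same kind of thing.

```python
import random
from collections import Counter, defaultdict

# Toy "language model": count which token follows which in a tiny corpus,
# then generate text by repeatedly sampling a likely next token.
corpus = "the cat sat on the mat and the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # no known continuation for this token
        tokens, counts = zip(*candidates.items())
        out.append(random.choices(tokens, weights=counts)[0])
    return " ".join(out)

print(generate("the"))
# e.g. "the cat sat on the mat and the cat"
# The table "knows" nothing about cats or mats - only which token
# tends to follow which. That is the argument being made above.
```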
Yann LeCun is probably heading down a more realistic path to AGI with his world models - but for as long as my cat has a few orders of magnitude more synapses than Anthropic’s most world-beating model has parameters, I’m not going to get too stressed about that either.
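For the synapse comparison, a quick back-of-envelope sketch - the figures below are loudly assumed ballparks, not measurements (frontier model sizes aren’t public), so treat it as illustration only:

```python
import math

# Back-of-envelope for the cat-synapses vs. model-parameters comparison.
# ASSUMPTIONS, not measurements: ~1e13 synapses is a commonly cited
# ballpark for a cat's brain; 1e11-1e12 parameters is a rough guess for
# today's largest LLMs (actual sizes are undisclosed).
cat_synapses = 1e13

for model_params in (1e11, 1e12):
    ratio = cat_synapses / model_params
    print(f"{model_params:.0e} params -> cat has {ratio:,.0f}x more "
          f"synapses (~{math.log10(ratio):.0f} order(s) of magnitude)")
# Whether that counts as "a few" orders of magnitude depends entirely on
# which estimates you plug in.
```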
Good work, citizen! The tech bros need you to believe that their dumb digital parrots will eventually, magically metamorphose into AGI. It’s the only thing that keeps that sweet VC money flowing and the AI bubble from popping.
I’m just going to ignore your completely uncalled-for smug and dismissive tone and note that at no point have I suggested LLMs will lead to AGI.
Thank you for your contribution to making this platform a worse place for everyone.
The irony of your response is strong. Also, you DID say that:

“I view AGI as inevitable because it’s the natural end goal of us incrementally improving our AI systems over a long enough period of time. As with all human-created technology, we will keep improving it. It doesn’t matter how slow the process is - as long as we keep heading in that direction, we will eventually reach the destination. The only things that could stop us, as far as I can see, are either destroying ourselves some other way before we get there or substrate dependence - meaning general intelligence simply cannot be created without our biological wetware. However, I see no reason to assume that, since human brains are made of matter just like computers are, and I don’t think there’s anything supernatural about intelligence.”
It sounds like you’ve bought into techbro bullshit, but don’t realize it.
Feel free to help me realize it then, because whatever irony or conflict you’re seeing there, I don’t see it.
Yes, I can see that.
The “AI” that we have now is not actually AI; that’s just a marketing term. Actual experts (read: not people like Sam Altman) point out that LLMs are severely flawed and will always return bad information. This problem is baked into the way these models function. Making what we’ve got into actual AI like you said isn’t going to happen, full stop.
Don’t believe the horseshit you hear from people trying to sell something.
This is simply false. We’ve had AI since 1956, when the term was coined at the Dartmouth workshop.
AI isn’t any one thing. It’s a broad term used in computer science to refer to any system designed to perform a cognitive task that would normally require human intelligence. The chess opponent on an old Atari console is an AI. It’s an intelligent system - but only narrowly so. That’s called “narrow” or “weak” AI.
It can still have superhuman abilities, but only within the specific task it was built for - like playing chess or generating language.
A large language model like ChatGPT is also narrow AI. It’s exceptionally good at what it was designed to do: generate natural-sounding language. What people expect from it, though, isn’t narrow intelligence - it’s general intelligence: the ability to apply cognitive skills across a wide range of domains the way a human can. That’s something LLMs simply can’t do - at least not yet.

Artificial General Intelligence is the end goal for many AI companies, but LLMs are not generally intelligent. However, they still fall under the umbrella of AI as a broad category of systems.
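To make the narrow-AI point concrete, here’s a minimal sketch of the kind of system the Atari chess example stands for - exhaustive game-tree search (minimax), done for tic-tac-toe rather than chess only so it stays small and runnable. It plays its one game perfectly and can do literally nothing else:

```python
# Minimal minimax player for tic-tac-toe - a classic "narrow" AI:
# optimal within its single task, useless outside it.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`; X maximizes, O minimizes."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # draw
    best_score, best_move = None, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "
        if (best_score is None
                or (player == "X" and score > best_score)
                or (player == "O" and score < best_score)):
            best_score, best_move = score, m
    return best_score, best_move

score, move = minimax(list(" " * 9), "X")
print(f"best opening for X: square {move} (game value {score})")
# Prints a game value of 0 - tic-tac-toe is a draw with perfect play.
```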
“Making what we’ve got into actual AI like you said isn’t going to happen, full stop.”

I’ve never claimed LLMs will lead to AGI, as I stated in the comment you quoted above.
k
Is it inevitable within 500 years, though?
Nobody could possibly know. That’s why I make no claims about the timeline.