• Tim@lemmy.snowgoons.ro · 6 hours ago

    The thing is, all this can be true (and I don’t really understand why you’re being downvoted), but it’s also true that LLMs are no more evidence that we are close to AGI than Eliza was.

    AGI is inevitable, but it won’t come from an LLM, and all the hype in that direction from Anthropic, OpenAI et al is just so much bullshit.

    The problem is, we don’t need AGI to experience the catastrophic consequences; as bad or worse will be idiotic human intelligences putting very-much-not-AGI in charge of things it has no right to be in charge of, because they drank their own Kool-Aid (or rather, the investors did). That, unfortunately, is the future we are speedrunning - SkyNet never needed AGI; it just needs fucking idiots to put an LLM in charge of a weapons system.

    (As for AGI, my gut feeling is that it will come from the intersection of neural networks and quantum computing at scale - I’ll be filling my bunker with canned goods when the latter appears to be close on the horizon…)

    • Iconoclast@feddit.uk · 3 hours ago (edited)

      I’d say LLMs are not necessarily an indicator that we’re close to AGI, but they’re not a non-indicator either. Certainly more of an indicator than the invention of the steam engine was. For narrowly intelligent systems, they’re getting quite advanced. We’re not there yet, but I worry that the moment we actually step into the zone of general intelligence might not be as obvious as one would think.

      However, I also don’t think there’s any basis to make the absolute claim that LLMs will never lead there, because nobody could possibly know that with that degree of certainty.

      And yeah, there are multiple ways to screw things up even with narrowly intelligent AI - we don’t need AGI for that.

      • Tim@lemmy.snowgoons.ro · 8 minutes ago

        I mean, I’m not particularly bothered about convincing anyone else, but personally I am absolutely 100% sure that no technology which is cognizant of nothing but tokens of language (entirely arbitrary human language at that, far from any fundamental ground truth in itself), and which is entirely incapable of discerning any actual meaning from that language beyond which tokens are likely to follow which others, is ever, under any circumstances, going to lead to AGI.
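        For what it’s worth, the “which tokens follow which others” point can be sketched with a toy bigram counter - a deliberately crude caricature, nothing like a real LLM, over a made-up corpus:

```python
import random
from collections import Counter, defaultdict

# Hypothetical toy corpus; the tokens and counts here are invented
# purely to illustrate "predicting the next token from statistics".
corpus = "the cat sat on the mat the cat ran".split()

# Count how often each token follows each other token (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(token):
    """Return the most frequent successor of `token` in the corpus."""
    return follows[token].most_common(1)[0][0]

# "cat" follows "the" twice, "mat" only once, so "cat" wins.
print(most_likely_next("the"))
```

        The model has no notion of what a cat or a mat *is* - only successor frequencies - which is the gist of the objection above, even if real LLMs learn far richer statistical structure than this.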

        Yann LeCun is probably heading down a more realistic path to AGI with his world models - but for as long as my cat has a few orders of magnitude more synapses than Anthropic’s most world-beating model has parameters, I’m not going to get too stressed about that either.