• Iconoclast@feddit.uk
    3 hours ago

    I’d say LLMs are not necessarily an indicator that we’re close to AGI, but they’re not a non-indicator either. Certainly more of an indicator than the invention of the steam engine was. As narrowly intelligent systems go, they’re getting quite advanced. We’re not there yet, but I worry that the moment we actually step into the zone of general intelligence might not be as obvious as one would think.

    However, I also don’t think there’s any basis for the absolute claim that LLMs will never lead there, because nobody could possibly know that with such certainty.

    And yeah, there are multiple ways to screw things up even with narrowly intelligent AI - we don’t need AGI for that.

    • Tim@lemmy.snowgoons.ro
      9 minutes ago

      I mean, I’m not particularly bothered about convincing anyone else, but personally I am absolutely 100% sure of this: no technology that is cognizant of nothing but tokens of language (entirely arbitrary human language at that, far from any fundamental ground truth in itself), and that is incapable of discerning any actual meaning from that language beyond which tokens are likely to follow which others, is ever, under any circumstances, going to lead to AGI.

      Yann LeCun is probably heading down a more realistic path to AGI with his world models - but as long as my cat has a few orders of magnitude more synapses than Anthropic’s most world-beating model has parameters, I’m not going to get too stressed about that either.