• Cyv_@lemmy.blahaj.zone · 3 days ago

      Yeah, this is where I’m at. Actual movie-level AI would be neat, but what we have right now is closer to a McDonald’s toy pretending to be AI than the real deal.

      I’d be overjoyed if we had decently functional AI that could be trusted to do the kinds of jobs humans don’t want to do, but instead we have hyped-up autocomplete that’s too stupid to be trusted to run anything reliably (see the shitshow of openclaw when people do).

      There are places where machine learning has pushed and will continue to push real progress, but this whole “AI is on the road to AGI and then we’ll never work again” bullshit is so destructive.

      • MagicShel@lemmy.zip · 3 days ago

        What we have now is “neat.” It’s freaking amazing that it can do what it does. However, it is not the AI from science fiction.

        • ageedizzle@piefed.ca · 3 days ago

          I think this is what causes the divide between the AI lovers and the haters. What we have now is genuinely impressive, even if largely nonfunctional. It’s a confusing juxtaposition.

          • knightly the Sneptaur@pawb.social · 3 days ago

            Folks don’t seem to realize what LLMs are; if they did, they wouldn’t be wasting trillions trying to stuff them into everything.

            Like, yes, it is a minor technological miracle that we can build these massively multidimensional maps of human language use and use them to chart human-like vectors through language space that remain coherent for tens of thousands of tokens. But there’s no way to chain these stochastic parrots together to get around the fact that a computer cannot be held responsible, that algorithms have no agency no matter how much you call them “agents”, and that the people who let chatbots make decisions must ultimately be culpable for them.

            It’s not “AI”; it’s an n-th dimensional globe and the ruler we use to draw lines on that globe. Like all globes, it is at best a useful fiction representing a limited perspective on a much wider world.
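            To make the globe-and-ruler metaphor concrete, here’s a minimal illustrative sketch (the words, the four dimensions, and every number below are invented for the example, not taken from any real model): the “map” assigns each word a point in a high-dimensional space, and cosine similarity is the “ruler” that measures how close two points lie.

            ```python
            import math

            # Toy "map": hypothetical 4-dimensional embeddings for a few words.
            # Real models use hundreds or thousands of dimensions; these values are made up.
            embeddings = {
                "cat":   [0.9, 0.1, 0.3, 0.0],
                "dog":   [0.8, 0.2, 0.4, 0.1],
                "piano": [0.1, 0.9, 0.0, 0.7],
            }

            def cosine_similarity(a, b):
                """The "ruler": how close two points on the map lie (1.0 = same direction)."""
                dot = sum(x * y for x, y in zip(a, b))
                norm_a = math.sqrt(sum(x * x for x in a))
                norm_b = math.sqrt(sum(x * x for x in b))
                return dot / (norm_a * norm_b)

            print(cosine_similarity(embeddings["cat"], embeddings["dog"]))    # high: nearby on the map
            print(cosine_similarity(embeddings["cat"], embeddings["piano"]))  # low: far apart
            ```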

            • ageedizzle@piefed.ca · 2 days ago

              Like, yes, it is a minor technological miracle that we can build these massively multidimensional maps of human language use and use them to chart human-like vectors through language space

              Yeah. Like, that’s objectively a very interesting technological innovation. The issue is just how much it’s been overhyped.

              The hype around AI would be warranted if it were, like, at the same level as the hype around the Rust programming language or something. Which is to say: it’s a useful innovation in certain limited domains, one that’s worth studying and is probably really fascinating to some nerds. If we could have left the hype at that level, we would have been fine.

              But then a bunch of CEOs and tech influencers started telling us that these things are going to cure cancer or aging and replace all white-collar jobs by next year. Like, okay, buddy. Be realistic. This overhype turned something that was genuinely cool into a magical fantasy technology that doesn’t exist.

              • knightly the Sneptaur@pawb.social · 2 days ago

                Yeah, the hype is really leaning on that singularitarian angle and the investor class is massively overextended.

                I’m glad that the general public is finally getting on down the hype cycle; this peak of inflated expectations has lasted way too long, and it should have been obvious three years ago.

                Like, I get that I’m supposedly brighter and better educated than most folks, but I really don’t feel like you need college-level coursework in futures studies to be able to avoid obvious scams like cryptocurrency and “AI”.

                I feel like it has to be deliberate, a product of marketing, because some of the most interesting new technologies have languished in obscurity for years: their potential is disintermediative and doesn’t offer a path to further expanding the corporate dominion over computing.

          • Valmond@lemmy.dbzer0.com · 2 days ago

            Lots of it is very, very good and totally functional. It’s just that, for normal people, “AI” now means chatbots.

      • pinball_wizard@lemmy.zip · 2 days ago (edited)

        what we have right now is closer to a McDonald’s toy pretending to be AI than the real deal.

        This is so well said.

        I’m stealing this.

        I’m going to use it to explain why I simultaneously have so much derision for modern AI while also enjoying it.

        I like McDonald’s toys. I just don’t use them for big person work.

    • Sarah Valentine (she/her)@lemmy.blahaj.zone · 3 days ago (edited)

      Absolutely. Today’s “AI” is as close to real AI as the shitty “hoverboard” we got a few years back is to the one from BttF. It’s marketing bullshit. But that’s not what bothers me.

      What bothers me is that if we ever do develop machine persons, I have every reason to believe they will be treated as disposable property, abused and misused, all before they ever reach the public. If we’re destroyed by a machine uprising, I have no doubt we will have earned it many times over.

    • qualia@lemmy.world · 2 days ago

      Yeah, intelligence is a continuum. Animals have varying degrees of intelligence (esp. corvids, cetaceans, cephalopods, other “c” animals…), but that isn’t the same as saying they have human-level intelligence. AGI and ASI are the important thresholds.

    • Rivalarrival@lemmy.today · 2 days ago

      Human intelligence is a spectrum. I would say that current LLMs are at about the 20th percentile on that spectrum.

      That says more about my opinion of human intelligence than about LLMs…

        • Rivalarrival@lemmy.today · 1 day ago

          Define “grasp ideas”.

          They are beginning to be able to correlate images, sounds, and text (in multiple languages). If we fitted them with other sensors (chemical receptors, accelerometers, etc.) and fed them sufficient training data, they would be able to correlate those as well. I would call this correlative ability the “grasping of ideas”.

          Where they fail is abstraction. But, this is a failing of human intelligence as well. Some fully productive humans never develop more than a rudimentary capacity for abstraction, arguably less than LLMs have demonstrated.

          Don’t get me wrong: They’re at toddler-levels of actual intelligence and only simulate greater capacity by regurgitating what they’ve learned people like to hear. But, a hell of a lot of people are guilty of the same damn thing.