• entropiclyclaude@lemmy.wtf · 6 hours ago

    These fuckers will claim whatever nonsense to keep themselves relevant enough to take on more debt before they collapse.

    • fierysparrow89@lemmy.world · 5 hours ago

      I agree, they're starting to sound desperate to keep their current momentum going. I think the bubble will burst soon. Things look solid until they’re not.

  • MonkderVierte@lemmy.zip · 12 hours ago

    The Turing thing again: how good is a system at mimicking a human? Like, lots of dog owners would swear their dog is smarter than a cat. But dogs are only better at reading their human.

    I’ll believe him if he lets the LLM do his job.

    • wewbull@feddit.uk · 11 hours ago

      Cats may be able to read their human just as well or better, but as they don’t give a shit, there’s no feedback to base anything on.

  • PushButton@lemmy.world · 11 hours ago

    How can we take this idiot seriously? First slop DLSS, then telling us we are wrong about this (the buddy telling me what I prefer), then “we achieved AGI”…

    How low can he fall?

  • Zozano@aussie.zone · 20 hours ago

    LLMs aren’t AI, let alone AGI.

    They’re fucking prediction engines with extra functions.
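    “Prediction engine” literally means: given the tokens so far, pick the likeliest next one. A toy sketch of that idea (a bigram word counter, nothing remotely like a real transformer, corpus and names made up for illustration):

    ```python
    from collections import Counter, defaultdict

    # Toy next-token predictor: count which word follows which.
    # Real LLMs learn this with huge neural networks over subword
    # tokens, but the training objective is the same shape.
    corpus = "the cat sat on the mat the cat ate".split()

    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict(prev):
        # Return the word seen most often after `prev`.
        return counts[prev].most_common(1)[0][0]

    print(predict("the"))  # "cat" — seen twice after "the", vs "mat" once
    ```

    Everything else (chat, tools, “reasoning”) is extra machinery bolted on top of that basic loop.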

    • Onihikage@piefed.social · 16 hours ago

      The best description I’ve ever heard of LLMs is “a blurry jpeg of the internet”. From the perspective of data compression and retrieval, they’re impressive… but they’re still a blurry jpeg. The image doesn’t change, you can only zoom in on different parts of it and apply extra filters, and there’s nothing you can truly do about the compression artifacts (what we call “hallucinations”). It can’t think, it can’t learn, it just is, and that’s all it will ever be.

    • unnamed1@feddit.org · 13 hours ago

      So are we. Your definition of AI also seems off: it’s a field of computer science dealing with seemingly cognitive algorithms, basically everything that is not rule-based programming. I’ve worked in AI production for over ten years. It is absolutely valid and necessary to hate AI, but not to deny technical functionality. Also, on the other answer to your comment: of course training a neural network is a form of learning, whether by reinforcement or by training data. There were many applications of ML for years before LLMs; it makes no sense to deny that it exists.

    • MojoMcJojo@lemmy.world · 13 hours ago

      It’s an industrial-sized prediction engine. And when you apply that to bioscience, it predicts things that save lives.