• somethingDotExe@lemmy.world · 3 points · 3 hours ago

    Hyperintelligence SHOULD run the world. Right now my biggest concern is all the fucking no-intelligence that also rules the world.

  • gravitas_deficiency@sh.itjust.works · 28 points (1 downvote) · 6 hours ago

    Oh my god shut up about the singularity. We’re not there. The only great filter we’re facing at the moment is our own fucking society.

    • Rioting Pacifist@lemmy.world · 6 points · edited · 6 hours ago

      Specifically, LLMs fail multiple of the axioms that underpin the “theory” of the singularity:

      • Fail - Recursive self-improvement is possible - LLMs aren’t built out of code they could rewrite; beyond specific fields like image generation and programming, it’s not really clear how an LLM would improve itself in a general sense.
      • Fail? - Moore’s Law (or its generalization) - it seems like we are hitting the limits of fitting more transistors onto a chip, and LLMs are not going to solve that (see the back-of-the-envelope sketch after this list).
      • Fail - Human cognition is near the threshold for AI being able to self-improve - LLMs really seem to be demonstrating something AI researchers have known for a while: people are dumb and will anthropomorphize anything the moment it can pretend to talk to them.
      • Fail - Greater intelligence reliably translates into greater real-world capability - I think tech CEOs are doing a great job of demonstrating that this isn’t true, so the idea that a super-smart general AI would run the world rather than be stuck generating deepfake porn of children isn’t necessarily true.
      • Fail? - There is no fundamental ceiling on intelligence - it seems like each iteration of LLMs returns a smaller improvement than the last, which to my simple meat-bag brain implies there is a ceiling on the intelligence of LLMs. I don’t know whether this points to some fundamental limit of intelligence, but at least with LLMs it looks like an asymptotic limit.
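
      To put a rough number on the Moore’s Law point, here is a back-of-the-envelope sketch. Every figure in it (the current node size, the silicon bond length as a hard floor, the two-year cadence) is my own assumption for illustration, not something from this thread:

      ```python
      import math

      feature_nm = 3.0         # assumed: order of a current leading-edge node
      si_bond_nm = 0.235       # assumed: Si-Si bond length, a hard physical floor
      years_per_halving = 2.0  # assumed: the classic Moore's-law cadence

      # How many more halvings before feature sizes hit the lattice itself?
      halvings_left = math.log2(feature_nm / si_bond_nm)
      print(f"~{halvings_left:.1f} halvings left, "
            f"~{halvings_left * years_per_halving:.0f} years of classic scaling")
      # -> ~3.7 halvings left, ~7 years of classic scaling
      ```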
      • kkj@lemmy.dbzer0.com · 3 points · 5 hours ago

        Regarding the last point: asymptotic functions certainly look like that, but so do logarithms with a base greater than 1 and powers with an exponent strictly between 0 and 1, and both of those do eventually go to infinity. Now, most conceptions of the singularity have development accelerating, not decelerating, so this still doesn’t qualify, but it could technically keep increasing without limit.
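
        A quick numeric sketch of that distinction (my own toy example, with functions picked for illustration): all three curves below decelerate, but only the genuinely asymptotic one has a ceiling.

        ```python
        import math

        def gains(f, xs):
            """Successive increases of f over equally spaced sample points xs."""
            return [f(b) - f(a) for a, b in zip(xs, xs[1:])]

        xs = [100, 200, 300, 400, 500]  # equal steps, so shrinking gains = deceleration

        for name, f in [
            ("log2(x)  (unbounded)", math.log2),
            ("x**0.5   (unbounded)", lambda x: x ** 0.5),
            ("1 - 1/x  (ceiling 1)", lambda x: 1.0 - 1.0 / x),
        ]:
            print(name, [f"{g:.3f}" for g in gains(f, xs)])

        # All three show shrinking gains per step, but only 1 - 1/x is capped:
        # log2 and x**0.5 still pass any target value if x grows far enough.
        ```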

  • Deestan@lemmy.world · 7 points · 6 hours ago

    Fuck Fortune for using their position as a mass medium to give this charlatan a place to post his dream journal entries as news.

  • Corvidae@lemmy.world · 3 points · 6 hours ago

    This morning I watched a video posted on Lemmy of a guy getting a ticket for smoking cannabis in public, in Hollywood I believe. Yet the GOP Congress refuses to “ticket” Trump for his war on Iran. There is a massive difference in how laws are implemented and to whom they are applied. Why would the extinction of this inequality be a bad thing?

  • notsosure@sh.itjust.works · 1 point · 5 hours ago

    Although it is definitely possible to kill off all of humanity, doing so by May 13, 2037 is a stretch goal, even for AI.

  • marud@piefed.marud.fr · 1 point · 7 hours ago

    10 years? Hopefully climate change, now accelerated by the heavy fossil-fuel usage powering AI datacenters, will flush humanity before the machines can do it.