• Jo Miran@lemmy.ml · +130/−1 · 2 days ago

    So what’s the angle? The Internet is getting flooded by AI slop. AI needs fresh REAL content to train with. That’s the angle. You are there to provide fresh and original content to feed the AI.

      • Apathy@lemmy.world · +12 · 1 day ago

        Are you a youngin? Cause no product under the control of a billionaire is free. If it’s free, you are the product. AI is hated, and they’re trying to make a product using that hate as the basis for its target audience.

        • rumba@lemmy.zip · +2 · 1 day ago

          Nothing is free. If they can sell ads to people because those people don’t like AI, they will. They’re rebooting it with about the same intent it was originally designed to have.

    • chunes@lemmy.world · +9/−12 · 2 days ago

      Again with this idea of ever-worsening AI models. It just isn’t happening in reality.

      • EldritchFemininity@lemmy.blahaj.zone · +4/−1 · 21 hours ago

        It has been proven over and over that this is exactly what happens. I don’t know if it’s still the case, but ChatGPT was strictly limited to training data from before a certain date because the amount of AI content after that date had negative effects on the output.

        This is very easy to see, because an AI is simply regurgitating patterns derived from its training data. Any biases or flaws in that data become ingrained in the AI, causing it to output more flawed data. That output is then used to train more AI, which further exacerbates the issues as they become even more ingrained, and so on until the outputs are bad enough that nobody wants to use them.
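The feedback loop described above can be sketched with a toy simulation (a sketch for illustration only, not any real training pipeline: here the “model” is just an empirical distribution over token IDs, and every name in the code is made up). Each generation trains only on the previous generation’s output, so any token that goes unsampled even once gets probability zero and can never return:

```python
import random
from collections import Counter

random.seed(42)

def train(samples):
    """'Train' a model: the empirical frequency of each token."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def generate(model, n):
    """'Generate' n tokens by sampling from the fitted distribution."""
    toks = list(model)
    weights = [model[t] for t in toks]
    return random.choices(toks, weights=weights, k=n)

# Generation 0: "real" data drawn from 50 possible token types.
data = [random.randrange(50) for _ in range(100)]

diversity = []
for gen in range(40):
    model = train(data)
    diversity.append(len(model))       # how many token types survive
    data = generate(model, 100)        # next generation sees only model output

# Diversity can only shrink: a token absent from one generation's
# output has zero probability in every later model.
print(diversity[0], "->", diversity[-1])
```

Smaller samples or more generations collapse faster; the point is only that the loss of rare cases compounds across generations, mirroring the degradation described above.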

        Did you ever hear that story about the researchers who had two LLMs talk to each other until they eventually began speaking in a language that nobody else could understand? What really happened was that their conversation turned more and more into gibberish until they were just passing random letters and numbers back and forth. That’s exactly what happens when you train AI on the output of AI. The “AI created their own language” thing was just marketing.

      • cley_faye@lemmy.world · +9/−1 · 1 day ago

        Not only is it actually happening, it’s also well researched and mathematically proven.

      • pulsewidth@lemmy.world · +19 · 1 day ago

        The same reality where GPT-5’s launch a couple of months back was a massive failure with users and showed significant regression to less reliable output than GPT-4? Or perhaps the reality where, as reported this year, most corporations that used AI found no benefit and have given up?

        LLMs are good tools for some uses, but those uses are quite limited and niche. They are, however, a square peg being crammed into the round hole of ‘AGI’ by Altman et al. while they put their hands out for another $10bil - or, more accurately, while they make trade swap deals with MS or Nvidia or any of the other AI ouroboros trade partners that hype up the bubble for self-benefit.

      • theneverfox@pawb.social · +2/−3 · 1 day ago

        People really latched onto that idea, which was shared with the media by the very people actively working on how to solve the problem.