• hendrik@palaver.p3x.de · 2 days ago

    This is a lot of framing to make OpenAI look better, blaming everyone else and rushed technology instead of them. They did have these guardrails. It seems they even did their job and flagged him hundreds of times. But why don’t they enforce their TOS? They chose not to. Once I breach my contracts and don’t pay, or upload music to YouTube, THEY terminate my contract with them. They’re their rules, and it’s their obligation to enforce them.

    I mean, why did they even invest in developing those guardrails and mechanisms to detect abuse if they then choose to ignore them? It makes almost no sense. Either save that money and have no guardrails, or make use of them?!

    • ShadowRam@fedia.io · 2 days ago

      Well, if people started calling it what it is, a weighted random text generator, then maybe they’d stop relying on it for anything serious…

      • hendrik@palaver.p3x.de · 2 days ago

        Yeah, my point was more that this doesn’t have anything to do with AI or the technology itself. I mean, whether AI is good or bad or doesn’t really work… Their guardrails did work exactly as intended and flagged the account hundreds of times for suicidal thoughts, at least according to these articles. So it’s more a business decision not to intervene, and has little to do with what AI is and what it can do.

        (Unless the system comes with too many false positives. That’d be a problem with technology. But this doesn’t seem to be discussed in any form.)

        • Axolotl@feddit.it · 2 days ago

          I wonder what a keyboard with that kind of enhanced autocomplete would be like to use… provided, of course, that the autocomplete runs locally and the app is open source.

          • ferrule@sh.itjust.works · 9 hours ago

            There are voice-to-text apps that run a model on your phone. A few more cores on our devices, or some more optimisations to the models, and we could run an LLM. The problem is battery life and heat.

            • Axolotl@feddit.it · 3 hours ago

              I once ran some models on my phone through Termux. I tried Llama 3.2 with 1B and 3B parameters and it ran pretty well; I tried 8B and it was slow. I tried DeepSeek-R1: the 1.5B ran well, the 7B was slow.

              For text prediction, Llama 1B may be enough.

              Now, this is on a 300/400€ phone (Honor Magic 6 Lite).
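              For anyone who wants to try the same thing, a minimal sketch of that kind of setup (assuming Termux with its `ollama` package available; the model tag is just an example of a small quantized model):

              ```shell
              # Inside Termux: install Ollama and start the local server.
              pkg install ollama
              ollama serve &

              # Pull a small model (~1 GB quantized) and chat with it on-device.
              ollama pull llama3.2:1b
              ollama run llama3.2:1b "Suggest the next word after: the weather is"
              ```

              Other routes (building llama.cpp yourself, or a proot distro) work too; this is just the shortest path.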

      • AnarchistArtificer@slrpnk.net · 2 days ago

        I like how the computational linguist Emily Bender refers to them: “synthetic text extruders”.

        The word “extruder” makes me think about meat processing that makes stuff like chicken nuggets.

    • frunch@lemmy.world · 2 days ago

      I’m chuckling at the idea of someone using ChatGPT, recognizing at some point that they violated the TOS, immediately stopping use of the app, and then reaching out to OpenAI to confess and accept their punishment 🤣

      Come to think of it, is that how OpenAI thought this actually works?

    • MelonYellow@lemmy.ca · 2 days ago

      If they cared, it should’ve been escalated to the authorities and investigated as a mental health concern. It’s not just a curious question if he was searching it hundreds of times. If he was actively planning suicide, where I’m from that’s grounds for an involuntary psych hold.

      • hendrik@palaver.p3x.de · 2 days ago

        I’m a big fan of regulation. These companies try to grow at all costs, and they’re pretty ruthless. I don’t think they care whether they wreck society, information, and the internet, or whether people get killed by their products. Even bad press from that doesn’t really affect their investors, because that’s not what it’s about… It’s just that OpenAI is an American company, and I’m not holding my breath for that government to step in.

    • jackal@infosec.pub · 1 day ago

      > make use of [guardrails].

      Even if a company has the ability to detect issues, that doesn’t mean they’re also investing in paying a peon to monitor and handle them. This could be an area where there’s a gap, or a lack of resources to manage all the alarms. Not that I have any clue what’s actually going on, though.

      • hendrik@palaver.p3x.de · 1 day ago

        It’d be really interesting to ask them this question during the court case. At some point they had to make a willful decision about how to process these flags and how to handle abuse. It could have been anything from an automated system that strikes users the way YouTube does and limits or blocks their accounts after 5 attempts… or 10… or 100… Anything would have helped here. Or they could pay for a team of human content moderators like social media companies do (Facebook…). But it seems they went with just letting it slide.

        I think for once this means they can’t complain now that their TOS were violated, because they already accepted that’s how it goes. Moreover, it could be willful neglect once a company prioritizes profit over human life and just doesn’t address dangerous aspects of its products which could easily(?) be addressed… And I don’t see how that’d be impossible for them. They’re an AI company, so surely they can come up with an automated system like the one Google has in place for YouTube. And the sweatshops in Africa that do content moderation for Facebook aren’t that pricey compared to the pile of money OpenAI has available, or pays as salary to a single AI engineer?!