Incompetent half-assing is rarely this morally righteous an act, either, since your one act of barely-competent-enough incompetence gets transmuted into endless incompetence by becoming training data/QC feedback.

  • over_clox@lemmy.world · 1 day ago · +2 / −35

    Such pranks might sound like ‘fun’, but they also put people’s lives at risk when modern ‘smart’ vehicles can’t properly identify stop signs, school buses, motorcycles, bicycles, pedestrians…

    I don’t like this modern AI era much, but I think it’s really shitty to deliberately give AI misinformation.

      • over_clox@lemmy.world · 1 day ago · +1 / −15

        Do you really trust ‘smart’ vehicles that get confused when they see emergency vehicles, or just flashing lights in general?

        Remember, they’re using us as AI training mules, and every time someone gives it false information, it confuses the AI, which just makes modern vehicles more unpredictable.

        • WhyJiffie@sh.itjust.works · 15 hours ago · +5

          if you haven’t noticed yet, we want these “smart” vehicles to be abolished, along with the non-stop automatic surveillance and data mining they do

          • over_clox@lemmy.world · 15 hours ago · +3

            Um, you’re preaching to the choir here.

            As much as I loathe the AI and surveillance and shit, I get downvoted any time I express my opinions.

            Maybe I should be more clear…

            FUCK AI!!!

              • over_clox@lemmy.world · 15 hours ago · +1 / −1

                My point is, it exists now, whether we like it or not. I, for one, do not like it; I prefer Actual Intelligence, the kind that comes from the brain noodle.

                But if AI is gonna be the thing, which obviously it is, people shouldn’t deliberately give it bad training data.

                • supersquirrel@sopuli.xyz (OP) · 5 hours ago · +1

                  Yes they should, it is an act of disobedience against the rich, which is honestly one of the few things worth putting effort into in 2025.

        • supersquirrel@sopuli.xyz (OP) · 1 day ago (edited) · +8

          How about you remember we all pretty much know that?

          This is just the same old strategy of continuously refocusing a conversation about the huge amounts of waste the modern global economy creates onto the moral failure of individuals to recycle.

          Like, *waves arms at the unfurling chaos dragon in the sky*, what does that matter at this late stage of entanglement with weaponized and proud ignorance? Go give someone you love a genuine compliment; that is actually resisting in the way you think you are describing, but you are not.

            • over_clox@lemmy.world · 1 day ago · +1 / −5

            The only thing going to waste is, well, everything, plus people’s brains. There are children starting school now who don’t know how to climb stairs. I’m sure they know how to stare at an iPad, though.

            As far as the ‘smart’ vehicles, even if you don’t drive one yourself, the moron on autopilot next to you is sleeping, fucking, or on their phone, trusting a chunk of silicon that they don’t even understand to keep them, and you, safe…

        • Solumbran@lemmy.world · 24 hours ago · +5 / −1

          It’s like saying that if employees are paid and treated like shit, they should still work hard because the opposite would be immoral.

          That’s bullshit. If AIs are crap because they train on unwilling people, that’s the company’s fault, not the fault of the people who are coerced into working for free.

            • over_clox@lemmy.world · 24 hours ago · +1 / −1

            I never said any such thing. Matter of fact, I quit fixing phones and tablets back in 2017, partly because the employer had me installing used batteries, while he lied to his customers, telling them they were new batteries.

            I have better ethics than that. I’d rather be homeless than mislead people, especially for a lousy $10 an hour.

            So yeah, you’re right, it’s the company’s fault. Question is, are there any honest companies anymore?

              • supersquirrel@sopuli.xyz (OP) · 22 hours ago (edited) · +1

              > Question is, are there any honest companies anymore?

              Wrong question.

              The right question is whether there are any industries in countries like the US that, after the long acidic erosion of state functions by decades of neoliberalism and deregulation (especially financial deregulation), are still regulated effectively enough to scare unscrupulous companies into behaving as if they were honest companies, when they would really rather just save a buck and kill and maim a handful of innocent people.

              Example A: large corporations were bullied into pretending they were pro-trans and pro-gender fluidity right up until the precise moment after they stopped being bullied into pretending they were.

              My line of reasoning is the only way you are going to understand why planes are all of a sudden accidentally crashing into helicopters, in ways that, a decade or two ago, most engineers and pilots in the industry would never have let happen, even if preventing it took screaming down a CEO who was casually telling them to cut a corner they knew would lead to innocent children and people dying…

              That sense of trust people had in pilots, aerospace engineers, and regulators is why we were raised to feel an indirect pride in pilots: a pilot walking by in a neat, professional uniform reminds us that we live in a society with magic adults who whisk people into the sky and back so they can see loved ones, and who somehow do it with incredible safety, kindness, and consistency, as if it were just a matter of filing routine paperwork (no shade at secretaries, that shit ain’t easy either).

                • supersquirrel@sopuli.xyz (OP) · 22 hours ago (edited) · +1

                  nah bro lol, have you never played a video game? (this is a genuine question, sorry if that’s unclear). Those mean that you have a quest to turn in, or that you have to talk to that person in order to continue the story campaign.

    • LambdaRX@sh.itjust.works · 1 day ago · +16

      I didn’t ask to solve captchas. If someone wants accurate data, they’d better hire someone to train their AI.

        • supersquirrel@sopuli.xyz (OP) · 16 hours ago (edited) · +2

          Also expect your AI to be engaged in some heady and deep forms of self-hatred that are going to take decades to unravel.

          Sad angry people in, sad angry robots out.

          • TranquilTurbulence@lemmy.zip · 14 hours ago · +2

            If you use internet discussions as training data, you can expect to find all sorts of crazy biases. Completely unfiltered data should produce a chatbot that exaggerates many human traits while completely burying others.

            For example, on Reddit and Lemmy, you’ll find lots of clever puns. On Mastodon, you’ll find all sorts of LGBT advocates or otherwise queer people. On Xitter, you’ll find all the racists and white supremacists. There are also old school forums that amplify things even further.

    • latenightnoir@lemmy.world · 23 hours ago (edited) · +11

      I genuinely think that’s a noble sentiment and I share that concern. However, this would entail making a deal with the Devil at this point, and pretty much literally.

      Most if not all relevant models nowadays are owned by outwardly unscrupulous people, which means any correct interaction we have with their models only serves to build up the Devil’s throne.

      It is a downright tragedy that people will suffer as a result of said models, but that fault is not on us. Besides the fact that they’re essentially stealing labour and data in order to train their models, they’re also using them to dish out propaganda, to replace workers and throw them in a ditch, and to inflate yet another financial bubble that will flush the economy down the toilet when it inevitably pops - again.

      We need to let them fail, otherwise we are just encouraging others to use us in the same exact ways.

    • IsoSpandy@lemm.ee · 1 day ago · +13

      I didn’t agree to train their AI, though, you know. If the data is unreliable, then get your own data. Why would I tell it how to recognize a stop sign?

      If they open source the model and the weights, and allow me to turn on the seat warmers in my own car without a subscription, then maybe… Just maybe some day I would help them. Till then, gtfo.