Trust in AI technology and the companies that develop it is dropping, in both the U.S. and around the world, according to new data from Edelman shared first with Axios.

Why it matters: The findings come as regulators around the world are deciding what rules should apply to the fast-growing industry. “Trust is the currency of the AI era, yet, as it stands, our innovation account is dangerously overdrawn,” Edelman global technology chair Justin Westcott told Axios in an email. “Companies must move beyond the mere mechanics of AI to address its true cost and value — the ‘why’ and ‘for whom.’”

  • TrickDacy@lemmy.world · 10 months ago

    Yeah this would apply except for a few things.

    1. if you ask a person and trust your life to them like that, unless they give you good reason to believe they are reliable, you are a moron. Why would someone expect a machine to be as intelligent and experienced as a doctor? That is 100% on them.
    2. no, humans do not behave as you suggest; they make up stupid shit that is often far worse than what a random Google search will find you, which will be similar to AI-based advice. Look at any advice thread on any social media.
    3. safeguards against stuff like this are in place and will be added more and more as things progress.

    AI is a concern for a variety of reasons, but people randomly thinking shaky tech is perfect isn’t really one of them (except for the gullible idiots who think that way, which again is on them).

    • nyan@lemmy.cafe · 10 months ago

      Half of the human population is of below-average intelligence. They will be that dumb. Guaranteed. And safeguards generally only get added after someone notices that a wrong answer is, in fact, wrong, and complains.

      In part, I believe someone’s going to die because large corporations will only get serious about controlling what their LLMs spew when faced with criminal charges or a lawsuit that might make a significant gouge in their gross income. Until then, they’re going to, at best, try to patch around the exact prompts that come up in each subsequent media scandal. Which is so easy to get around that some people are likely to do so by accident.

      (As for humans making up answers, yes, some of them will, but in my experience it’s not all that common—some form of “how would I know?” is a more likely response. Maybe the sample of people I have contact with on a regular basis is statistically skewed. Or maybe it’s a Canadian thing.)

    • Eccitaze@yiffit.net · 10 months ago

      if you ask a person and trust your life to them like that, unless they give you good reason to believe they are reliable, you are a moron. Why would someone expect a machine to be as intelligent and experienced as a doctor? That is 100% on them.

      Insurance companies are already using AI to make medical decisions. We don’t have to speculate about people getting hurt because of AI giving out bad medical advice, it’s already happening and multiple companies are being sued over it.

      • TrickDacy@lemmy.world · 10 months ago

        Somehow we went from me saying this technology shouldn’t be downplayed to “but it’s already costing lives!”

        Not really sure how that happened, but yeah, it’s obviously shitty that people are irresponsible shitheads, and I think downplaying it, or quibbling about whether it’s actually AI or not, is far from helpful in light of such consequences.