Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

  • sbv@sh.itjust.works · 3 days ago

    It looks like the LLMs weren’t trained for medical tasks. The study would be more interesting if it had been run on something built for the task.

    • red_green_black@slrpnk.net · 3 days ago

      But that’s just it. According to the Snake Oil Tech Bros, AI is supposedly capable of doing medical tasks, or really just about everything else.