ChatGPT bombs test on diagnosing kids’ medical cases with 83% error rate | It was bad at recognizing relationships and needs selective training, researchers say.

  • LWD@lemm.ee
    10 months ago

    Sounds like the Gell-Mann Amnesia Effect, except instead of a newspaper, you’re reading something that wasn’t written by humans at all.

    “You open the newspaper to an article on some subject you know well… You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward… and then [you] turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read.”

    Like the newspaper, though, I would argue that generative AI is being presented as if it already knows everything about everything, or at least collective inertia implies that it does.