ChatGPT bombs test on diagnosing kids' medical cases with 83% error rate | It was bad at recognizing relationships and needs selective training, researchers say.
Sounds like the Gell-Mann Amnesia Effect: you notice the errors in coverage of a field you know well, then turn the page and trust the rest anyway. Except here, instead of a newspaper, you're reading something not generated by humans at all.
Like the newspaper, though, I'd argue that generative AI is being presented as if it already knows everything about everything, or at least the collective inertia around it implies that it does.