• cherrari@feddit.org

    That’s a very interesting insight. Maybe the amount of hallucination depends on whether the “knowledge” was loaded in the form of a prompt vs. training data? In the experience I’m talking about there’s no hallucination at all, but there are sometimes wrong conclusions and hypotheses, especially with really tricky bugs. That’s normal, though; the really tricky edge cases aren’t something I’d expect to find on SO anyway…