• Bazell@lemmy.zip

    Your explanation is not completely correct. A more accurate one would be: an AI chatbot that can gather information relevant to the user's input from internal or external sources, letting the model answer questions more precisely even if it was never trained on that data at all. This reduces the frequency and severity of hallucinations to some extent, but doesn't eliminate them.
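
    To make the idea concrete, here's a minimal sketch of the retrieval step. The corpus, the keyword-overlap scoring, and the `retrieve`/`build_prompt` helpers are all made up for illustration; real systems typically rank documents with vector embeddings, but plain word overlap keeps the example runnable without external services:

    ```python
    # Hypothetical mini-corpus standing in for "internal or external sources".
    CORPUS = [
        "The Atlas 9 widget ships with a 2-year warranty.",
        "Firmware 3.1 added offline mode to the Atlas 9.",
        "Support tickets are answered within 48 hours.",
    ]

    def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
        """Rank documents by simple word overlap with the question (toy scoring)."""
        q_words = set(question.lower().split())
        scored = sorted(
            corpus,
            key=lambda doc: len(q_words & set(doc.lower().split())),
            reverse=True,
        )
        return scored[:k]

    def build_prompt(question: str) -> str:
        """Prepend the retrieved passages so the model can answer from data
        it was never trained on."""
        context = "\n".join(retrieve(question, CORPUS))
        return f"Context:\n{context}\n\nQuestion: {question}"

    print(build_prompt("Does the Atlas 9 have a warranty?"))
    ```

    The augmented prompt is then sent to the model as usual; the model still generates the answer, which is why hallucinations are reduced but not eliminated.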