• Kay Ohtie@pawb.social · 16 hours ago

    whether it’s telling the truth

    “whether the output is correct or a mishmash”

    “Truth” implies understanding that these don’t have, and because the underlying method only generates plausible-looking responses from patterns in the training data, there is no “truth” or “lying”: they don’t actually “know” any of it.
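
    (To make the “underlying method” a bit more concrete, here’s a toy sketch in Python. It is nothing like a real transformer, just an illustration of the shape of the process: generation as repeated sampling from a next-word probability table. The table and all the numbers are made up for this example, and note that no step ever consults whether the output is factually true.)

    ```python
    import random

    # Toy stand-in for a language model: a hand-made next-word
    # probability table (all words and numbers invented for illustration).
    next_word = {
        "the": {"sky": 0.6, "cat": 0.4},
        "sky": {"is": 1.0},
        "cat": {"is": 1.0},
        "is": {"blue": 0.7, "green": 0.3},
    }

    def generate(start, steps=3):
        out = [start]
        for _ in range(steps):
            dist = next_word.get(out[-1])
            if dist is None:
                break
            words, weights = zip(*dist.items())
            # Pick the next word by plausibility alone; nothing here
            # checks whether the resulting sentence is true.
            out.append(random.choices(words, weights=weights)[0])
        return " ".join(out)

    print(generate("the"))  # can print "the sky is green": fluent, confident, false
    ```

    A real model replaces the lookup table with a learned network over billions of parameters, but the loop is the same shape: score plausibility, sample, repeat.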

    I know this probably comes off as super pedantic, and it is at least a little pedantic, but the anthropomorphism shown towards these things is half the reason they’re trusted.

    That and how much ChatGPT flatters people.

    • floofloof@lemmy.ca · 13 hours ago

      Yeah, it has no notion of being truthful. But we do, so I was bringing in a human perspective there. We know what it says may be true or false, and it’s natural for us to call the former “telling the truth”. But, as you say, we need to be careful not to impute to the LLM any intention to tell the truth, any awareness of telling the truth, or indeed any intention or awareness at all. All it’s doing is math that spits out words according to patterns in the training material.