• SlimePirate@lemmy.dbzer0.com
    2 days ago

    It does lie and hallucinate a lot, especially with biased context in the question (the bullshit part). The (biased) knowledge is hiding somewhere in its weights; it's just sometimes quite hard to recover.

    Your 40% depends a lot on how you ask the questions and the field of those questions. Humanity's Last Exam is a more objective benchmark for measuring the broad knowledge of LLMs.

    • Log in | Sign up@lemmy.world
      2 days ago

      Your 40% depends a lot on how you ask the questions and the field of these questions.

      Dude, they fail that exam with even worse error rates than I see!

      When you can verify it, it's OFTEN and REGULARLY wrong. It's stupid to trust it for anything you can't personally verify.

      The designed purpose of LLMs is to respond to human interaction, not to be correct. They are the showoff who pretends he can answer every question. They are the confident drunkard at the bar who will tell you anything that pops into their head. Intelligent, knowledgeable people say “I don’t know” when they don’t know. LLMs don’t do that. Ever. Trouble is, they don’t “know” anything. They’re a chatbot from the bottom up. Chatbot through and through. It’s their fundamental nature.

      Yes, there was knowledge and deep understanding in their training data. Also, I ate chicken curry for tea. However, I am not a chicken, I do not cluck, I haven't started eating worms, I cannot produce any chicken, and my poop is not chicken either. My poop smells faintly of curry. So it is with LLMs and the knowledge and understanding in their training data.

      • SlimePirate@lemmy.dbzer0.com
        2 days ago

        They beat any human on that knowledge benchmark, which is completely unrelated to your 40% "test". Try answering any of the example questions on its main page.

        I don't need a metaphor; I know LLMs hallucinate, lie, and bullshit. That doesn't invalidate my point.