I have not read the article yet, but I think this is a good topic to discuss here.

    • 小莱卡@lemmygrad.ml · 4 days ago

      They can be great at it, though; it all depends on the data fed to the LLM. If someone built one on all socialist literature, it could be a great tool. This specific one seems more like a front end to another model.

      • haui@lemmygrad.mlOP · 4 days ago

        It would probably be wisest to use a pretrained model and feed it ProleWiki, with particular focus on Marxist-Leninist writers, so as to give it the correct bias.
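One way to "feed it" a corpus like ProleWiki without retraining is retrieval-augmented generation: embed the wiki passages, then prepend the most relevant ones to each prompt before it reaches the pretrained model. A minimal sketch, assuming a toy bag-of-words similarity in place of a real embedding model, and a hypothetical three-passage corpus standing in for the wiki:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a real system would use a sentence-embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, passages, k=2):
    # Rank stored passages by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:k]

def build_prompt(query, passages):
    # Prepend retrieved context so the model answers from the curated corpus.
    context = "\n".join(f"- {p}" for p in retrieve(query, passages))
    return f"Use only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical passages standing in for ProleWiki articles.
corpus = [
    "Dialectical materialism is the philosophical basis of Marxism.",
    "The dictatorship of the proletariat is a transitional state form.",
    "Surplus value is the unpaid labour appropriated by capital.",
]

prompt = build_prompt("What is surplus value?", corpus)
```

The point of the design: the model's weights stay fixed, and the "correct bias" comes entirely from which passages the retriever is allowed to surface.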

      • robot_dog_with_gun [they/them]@hexbear.net · 4 days ago

        LLMs work by hallucinating; the wild shit that gets shared isn't an accident, it's how they generate all their output.

        People have trained models on internal document sets and they still get things wrong; they are simply not reliable for facts. They don't think, they don't have knowledge, they just pull Scrabble tiles in a clever statistical way that fools you into trusting it.

        • percyraskova@lemmygrad.ml · 4 days ago

          That's a tooling/prompting/context window management problem. It can be addressed with proper programming procedures and smart memory management.