• brucethemoose@lemmy.world · 18 points · edited · 2 days ago

    Open models are going to kick the stool out. Hopefully.

    GLM 4.5 is already #2 on LM Arena, above Grok and ChatGPT, and runnable on homelab rigs, yet it’s just 32B active (which is mad). Extrapolate that a bit, and it’s just a race to the zero-cost bottom. None of this is sustainable.
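
    If “32B active” means nothing to you: it’s a mixture-of-experts model, so only a slice of the weights fires per token. Rough napkin math below; the 355B-total/32B-active figures are from the GLM 4.5 announcement, and the ~4.5 bits per weight is just an assumed quant level, not gospel.

    ```python
    # Why "32B active" matters for an MoE like GLM 4.5 (all numbers approximate).
    TOTAL_PARAMS = 355e9    # reported total parameter count
    ACTIVE_PARAMS = 32e9    # reported params actually used per token
    BITS_PER_WEIGHT = 4.5   # assumed ~Q4-ish quantization

    storage_gb = TOTAL_PARAMS * BITS_PER_WEIGHT / 8 / 1e9          # ~200 GB to hold it
    read_per_token_gb = ACTIVE_PARAMS * BITS_PER_WEIGHT / 8 / 1e9  # ~18 GB touched per token

    print(f"store ~{storage_gb:.0f} GB, read ~{read_per_token_gb:.0f} GB per token")
    # Generation speed is bandwidth-bound, so it runs like a 32B model even
    # though you need the storage of a 355B one. That's the homelab trick.
    ```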

    • dubyakay@lemmy.ca · 7 points · 2 days ago

      I did not understand half of what you’ve written. But what do I need to get this running on my home PC?

      • brucethemoose@lemmy.world · 5 points · edited · 1 day ago

        I am referencing this: https://z.ai/blog/glm-4.5

        The full GLM? Basically a 3090 or 4090 plus a budget EPYC CPU, or maybe 2 GPUs on a Threadripper system.

        GLM Air? Now this would work on a 16GB+ VRAM desktop; just slap in 96GB+ (maybe 64GB?) of fast RAM. Or the recent Framework Desktop, any mini PC/laptop with the 128GB Ryzen AI Max 395 config, or a 128GB+ Mac.
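
        Napkin math for why that RAM figure works; the 106B-total figure is from the GLM 4.5 announcement, and the quant level and GPU split are just assumptions:

        ```python
        # Rough memory budget for GLM 4.5 Air on a 16GB-VRAM desktop (approximate).
        TOTAL_PARAMS = 106e9     # reported total params for GLM 4.5 Air
        BITS_PER_WEIGHT = 4.5    # assumed ~Q4 quantization

        weights_gb = TOTAL_PARAMS * BITS_PER_WEIGHT / 8 / 1e9  # ~60 GB of GGUF

        gpu_gb = 14                   # assume dense/attention layers + KV cache on the card
        cpu_gb = weights_gb - gpu_gb  # ~46 GB of experts sitting in system RAM

        print(f"{weights_gb:.0f} GB total, ~{cpu_gb:.0f} GB in system RAM")
        # ~46 GB of experts barely squeezes into 64 GB and fits easily in 96 GB,
        # which is why 64 GB is a "maybe" and 96 GB is comfortable.
        ```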

        You’d download the weights, quantize them yourself if needed, and run them in ik_llama.cpp (which should get support imminently); roughly the workflow sketched below.

        https://github.com/ikawrakow/ik_llama.cpp/
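
        Something like this, with heavy hedging: the repo name and quant level are placeholders, and ik_llama.cpp’s exact flags (it mostly mirrors mainline llama.cpp’s CLI) are worth checking in its README.

        ```python
        from huggingface_hub import snapshot_download  # pip install huggingface_hub

        # Grab a pre-quantized GGUF instead of quantizing yourself. The repo name
        # and quant level here are illustrative; check what actually exists for GLM 4.5.
        snapshot_download(
            repo_id="someone/GLM-4.5-Air-GGUF",   # hypothetical repo
            allow_patterns=["*Q4_K_M*"],          # pull just one quant, not the whole repo
            local_dir="models/glm-4.5-air",
        )
        # Then, once GLM support lands, point ik_llama.cpp's server at the .gguf
        # (flag names follow mainline llama.cpp; see ik_llama.cpp's README for
        # its MoE / expert-offload options):
        #   llama-server -m models/glm-4.5-air/<file>.gguf -c 16384 -ngl <layers that fit>
        ```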

        But these are…not lightweight models. If you don’t want a homelab, there are better ones that will fit on more typical hardware configs.

        • brucethemoose@lemmy.world · 5 points · 1 day ago

          It’s going to be slow as molasses on ollama. It needs a better runtime, and GLM 4.5 probably isn’t supported at this moment anyway.

            • WorldsDumbestMan@lemmy.today · 1 point · 1 day ago

              Qwen3 8B, sorry, idiot spelling. I use it to talk through problems when I have no internet or have maxed out on Claude. I can rarely trust it with anything reasoning-related; it’s faster and easier to do most things myself.

              • brucethemoose@lemmy.world · 3 points · edited · 1 day ago

                Yeah, 7B models are just not quite there.

                There are tons of places to get free access to bigger models. I’d suggest Jamba, Kimi, DeepSeek Chat, Google AI Studio, and the new GLM chat app: https://chat.z.ai/

                And depending on your hardware, you can probably run better MoEs at the speed of an 8B. Qwen3 30B is so much smarter it’s not even funny, and it’s faster on CPU.
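
                If you want to try that route, here’s a minimal llama-cpp-python sketch; the model path is just a placeholder for whichever Qwen3 30B A3B quant you grab.

                ```python
                from llama_cpp import Llama  # pip install llama-cpp-python

                llm = Llama(
                    model_path="models/Qwen3-30B-A3B-Q4_K_M.gguf",  # placeholder path
                    n_ctx=8192,       # context window
                    n_gpu_layers=-1,  # offload what fits; with only ~3B active params
                                      # per token it stays quick even partly on CPU
                )

                out = llm.create_chat_completion(
                    messages=[{"role": "user", "content": "Summarize why MoE models run fast locally."}],
                    max_tokens=200,
                )
                print(out["choices"][0]["message"]["content"])
                ```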