Taalas HC1: 17,000 tokens/sec on Llama 3.1 8B vs Nvidia H200’s 233 tokens/sec. 73x faster at one-tenth the power. Each chip runs ONE model, hardwired into the transistors.
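
A quick sanity check of those numbers (the ~700 W figure is the H200's commonly quoted TDP and is an assumption here, as is the HC1 wattage, which is only implied by the "one-tenth the power" claim):

```python
# Sanity-check the headline claims. h200_watts is the commonly quoted
# ~700 W H200 TDP (an assumption, not stated in the headline); the HC1
# wattage follows from the "one-tenth the power" claim.
hc1_tps = 17_000             # Taalas HC1, tokens/sec on Llama 3.1 8B
h200_tps = 233               # Nvidia H200, tokens/sec on the same model
h200_watts = 700             # assumed H200 TDP
hc1_watts = h200_watts / 10  # implied by "one-tenth the power"

speedup = hc1_tps / h200_tps                                     # ~73x
perf_per_watt = (hc1_tps / hc1_watts) / (h200_tps / h200_watts)  # ~730x

print(f"{speedup:.0f}x faster, {perf_per_watt:.0f}x more tokens per joule")
```

So at face value that's ~73x the throughput and roughly 730x the tokens per joule.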

  • dieICEdie@lemmy.org · 6 points · 16 hours ago

    This would be great if you could have a machine that let you swap chips… and if they only charged < 50 USD per chip.

    • boonhet@sopuli.xyz · 1 point · 2 hours ago

      Can’t be that cheap, unfortunately, if they maxed out the die area. Though it’s an older node, so maybe not as expensive as flagship GPU chips and shit.

          • MagicShel@lemmy.zip · 2 points · 14 hours ago

            The thing that differentiates ChatGPT and Claude is likely more the RAG pipeline that backs them and feeds them context. The models really aren’t getting better; we’re just getting better at breaking tasks down into units small enough that the AI can figure them out. I’d bet a GPT 5 model or a Claude Opus 4.6 model would last 5, maybe 10 years before you really started to notice its capabilities falling behind. I’ll bet you could use GPT 4o for 5-10 years and it would be fine.
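
            Rough sketch of what I mean by "the RAG pipeline" (purely illustrative; every name below is a stand-in, not any vendor's actual API, and the retrieval is deliberately naive):

            ```python
            # Minimal RAG loop: the product layer retrieves relevant context and
            # prepends it to the prompt, so answers can keep improving even while
            # the underlying model stays frozen. llm_complete() is a stand-in for
            # whatever fixed model you call (hosted, local, or baked into a chip).

            DOCS = [
                "The HC1 hardwires one model into the silicon.",
                "Llama 3.1 8B is an open-weights model from Meta.",
                "RAG retrieves documents and feeds them to the model as context.",
            ]

            def retrieve(question: str, k: int = 2) -> list[str]:
                """Rank docs by naive keyword overlap (real systems use embeddings)."""
                words = set(question.lower().split())
                return sorted(
                    DOCS, key=lambda d: -len(words & set(d.lower().split()))
                )[:k]

            def llm_complete(prompt: str) -> str:
                """Stand-in for the frozen model call."""
                return f"<model output for a {len(prompt)}-char prompt>"

            def answer(question: str) -> str:
                context = "\n".join(retrieve(question))
                return llm_complete(f"Context:\n{context}\n\nQuestion: {question}")

            print(answer("What does RAG do?"))
            ```

            The point being: all of that sits outside the model, so the product keeps improving even if the weights never change.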

          • dieICEdie@lemmy.org · 1 point · 14 hours ago

            But if they could make it so the chip is the only thing that goes obsolete, it could be recycled pretty easily, or resold.