Taalas HC1: 17,000 tokens/sec on Llama 3.1 8B vs Nvidia H200’s 233 tokens/sec. 73x faster at one-tenth the power. Each chip runs ONE model, hardwired into the transistors.
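A quick sanity check on the headline math, taking the two quoted throughput figures at face value:

```python
# Sanity-check the "73x faster" claim from the quoted numbers.
hc1_tps = 17_000   # Taalas HC1, Llama 3.1 8B tokens/sec (as quoted)
h200_tps = 233     # Nvidia H200 tokens/sec (as quoted)

speedup = hc1_tps / h200_tps
print(f"{speedup:.0f}x")  # ~73x, consistent with the headline
```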

  • MagicShel@lemmy.zip · 2 hours ago

    I’m middle of the road on AI. I think it has uses. I also think this technology is a dead end (i.e. this is not going to lead to AGI), and had people understood its limitations from the start, investment would’ve been more modest and cautious. It’s a great technology. You can do cool things with it. But it will never significantly replace humans, and it may be really painful watching the investor class wrestle with that reality.

    I think the chip does have uses, and I think a chip built even with today’s models would stay useful for a long time. But the number of scenarios where it is unequivocally better than nothing is smaller than AI bros (I draw a line between an enthusiast like myself and a bro who is all in and won’t hear reason) want to think.

    Last point. In theory this chip is great. Based on my reading, it’s a substitute for an H100, a data center GPU. This isn’t going into smart mines or drones, and probably not cars, not without more development. So while there is potential here, none of those use cases are practical yet. This is a way for OAI or whoever to run their current models, just as they are, for cheaper, but with a hardware cost every time they want to upgrade the model. This isn’t going to matter for the rest of us for a while.