Taalas HC1: 17,000 tokens/sec on Llama 3.1 8B vs Nvidia H200’s 233 tokens/sec. 73x faster at one-tenth the power. Each chip runs ONE model, hardwired into the transistors.
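A quick sanity check on the headline arithmetic, taking the claimed figures at face value (the "one-tenth the power" number is a stated claim, not an independently measured figure):

```python
# Claimed throughput figures from the headline (Llama 3.1 8B).
hc1_tps = 17_000   # Taalas HC1 tokens/sec (claimed)
h200_tps = 233     # Nvidia H200 tokens/sec (claimed)

speedup = hc1_tps / h200_tps
print(f"throughput speedup: {speedup:.0f}x")  # ~73x, matching the headline

# At one-tenth the power, tokens-per-joule improves by speedup * 10.
perf_per_watt_gain = speedup * 10
print(f"tokens/joule gain: ~{perf_per_watt_gain:.0f}x")
```

So the 73x figure is just the throughput ratio; the efficiency gap compounds it with the claimed power difference.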
But if they could make it so the chip is the only thing that becomes obsolete, it could be recycled pretty easily, or resold.
Then it would stop being 73 times faster than NVIDIA.
That doesn’t make sense.
If you add levels of indirection, extra transistors and such, it would be surprising if you could maintain the same level of performance, especially since this design seems to rely on hardwiring to achieve its speed…
Pretty sure the advantage is having the model baked directly into the chip.
Now it’s your proposal’s turn not to make any sense. This is an article about a chip with a hardwired model being super fast.
Of course the hardwiring is inflexible, and much, much faster.
I just think you want to argue