Taalas HC1: 17,000 tokens/sec on Llama 3.1 8B vs Nvidia H200’s 233 tokens/sec. 73x faster at one-tenth the power. Each chip runs ONE model, hardwired into the transistors.
What differentiates ChatGPT and Claude is likely less the models themselves and more the RAG pipeline that backs them and feeds them context. The models really aren’t getting much better; we’re just getting better at using them, breaking tasks down into units small enough that the AI can figure them out. I’d bet a GPT-5 or Claude Opus 4.6 model would last 5, maybe 10 years before you really start to notice its capabilities falling behind. I’d even bet you could use GPT-4o for 5-10 years and it would be fine.
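To make the "RAG pipeline" point concrete, here is a minimal sketch of the retrieve-then-prompt pattern that wraps a fixed model: embed the documents and the query, pick the closest passages, and paste them into the prompt. The `embed` and `call_model` steps are stand-ins for whatever embedding and inference APIs you actually use; this is an illustration of the pattern, not how ChatGPT or Claude is implemented internally.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Placeholder embedding: a bag-of-words vector. A real pipeline would
    # call an embedding model here; this just keeps the sketch runnable.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # The model never changes; only the context pasted in front of the
    # question does. That context is where most of the leverage is.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Use the following context to answer.\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    corpus = [
        "The HC1 hardwires a single model into the chip's transistors.",
        "RAG pipelines retrieve relevant documents and add them to the prompt.",
        "Llama 3.1 8B is an open-weights language model from Meta.",
    ]
    print(build_prompt("How does retrieval-augmented generation work?", corpus))
    # A real system would now send this prompt to a fixed, even frozen, model.
```

The point of the sketch: the model call at the end could be a five-year-old model, and better answers would still come mostly from better retrieval and task decomposition upstream.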