Isn’t that the whole shtick of the AI PCs no one wanted? Like, isn’t there some kind of non-GPU co-processor that runs the local models more efficiently than the CPU?
I don’t really want local LLMs but I won’t begrudge those who do. Still, I wouldn’t trust any proprietary system’s local LLMs not to feed personal info back for “product improvement” (which, for AI, means your data becomes training data).
NPU: neural processing unit.
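As a hedged illustration of how that co-processor gets used (not any particular vendor’s mechanism): runtimes like ONNX Runtime expose NPUs through “execution providers,” so a local model can be pointed at the NPU and fall back to the CPU when none is available. The model path, provider names tried, and input shape below are assumptions for the sketch, not a recipe for a specific machine.

```python
# Minimal sketch: run a local ONNX model on an NPU-backed provider if one
# is exposed, otherwise fall back to the CPU. Assumes onnxruntime is
# installed; "model.onnx" is a placeholder for whatever local model you have.
import onnxruntime as ort
import numpy as np

available = ort.get_available_providers()
print("Execution providers on this machine:", available)

# QNNExecutionProvider (Qualcomm) and DmlExecutionProvider (DirectML) are
# examples of NPU/accelerator-backed providers; which ones appear depends
# on the hardware and on how onnxruntime was built.
preferred = [p for p in ("QNNExecutionProvider", "DmlExecutionProvider")
             if p in available]

session = ort.InferenceSession(
    "model.onnx",                              # hypothetical local model file
    providers=preferred + ["CPUExecutionProvider"],
)

# Hypothetical input; real names, shapes, and dtypes depend on the model.
first_input = session.get_inputs()[0]
feed = {first_input.name: np.zeros((1, 8), dtype=np.int64)}
outputs = session.run(None, feed)
print("Ran on:", session.get_providers()[0])
```

The point of the provider-list ordering is simply that inference lands on the NPU when the build and hardware support it, and silently runs on the CPU otherwise; nothing about the model or your prompts has to leave the machine for that to work.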