…is this supposed to be news?
Kinda. It’s a novel technology and one that hasn’t been well analyzed or exhaustively tested.
It’s been tested a lot, and the results show it can’t be trusted at all unless you’re already an expert in the thing you’re asking it to “help” you with, so you can correct the many mistakes it will make. Even then it’s slower and, again, is **guaranteed** to make mistakes (hallucinations are built into what techbros insist on labeling “AI”, no matter how many resources you throw at it).
All of this at great environmental and human cost, too.
I think his point is that this is less “news” and more “well, duh”.