I think there’s an important semantic difference between worse performance and correctness. Tools like AI can underperform compared to humans and still be very useful and worth investing in, but only as long as they perform correctly.
Yeah, the ‘but’ is the entire problem. In my experience, LLM chatbots are like if you made a 12yo a junior admin and fed them speed: very quick to give you a confident answer, but wrong more often than not. The worst part is that a lot of what I’m doing is coding, and it gets basic commands and syntax wrong.