I think there’s an important semantic difference between worse performance and correctness. Tools, like AI, can underperform compared to humans and still be very useful and worth investing in, but only as long as they perform correctly.
Yeah, the ‘but’ is the entire problem. In my experience, LLM chatbots are like if you made a 12yo a junior admin and fed them speed: very quick to give you a confident answer, but wrong more often than not. The worst part is that a lot of what I’m doing is coding, and it gets basic commands and syntax wrong.
If we can’t expect better from an AI than from a human, why should we use the AI (other than so you don’t have to pay workers)?
Like there’s a big shortage of unemployed humans.
Unless you plan on enslaving them, please refer to my previous comment RE: paying humans.