Altman’s remarks in his tweet drew an overwhelmingly negative reaction.
“You’re welcome,” one user responded. “Nice to know that our reward is our jobs being taken away.”
Others called him a “f***ing psychopath” and “scum.”
“Nothing says ‘you’re being replaced’ quite like a heartfelt thank you from the guy doing the replacing,” one user wrote.


Sam is still early, and obnoxious, but I’ve been monitoring AI progress since the 1980s. Roughly one year ago, AI coding agents turned a corner: from being not much more useful than a Google search (which is itself very useful) to getting things right more often than they hallucinate. That was an important watershed, because from that point on they could make net forward progress, fixing more mistakes than they introduced.
In the 12 months since, there has been steady and rapid forward progress. If you haven’t asked an AI to code something for you in the last 3 months, you’re out of touch with where it’s at today.
Even free Gemini rips out really good bash scripts faster than you can look up the first weird thing you want it to do.
I personally don’t use AI, but I concede that it can be useful for some people, provided they use the AI as a tool for their own thinking rather than subordinating themselves to the chatbot. Mostly, this means ensuring they’re able to check whether the AI is right or not.
When I dabbled in using coding AI, there were a few basic tasks it was useful for. There were a few hallucinations, but because the tasks were basic and well within my proficiency to review, I could set them right; even with those corrections, it still saved me time overall. However, when I tried to use it on tasks beyond my own technical expertise, things got messy very quickly. When something didn’t work, I felt sure there must be a hallucinated error somewhere, but I couldn’t tell where, because the task sat at or beyond the limit of my own technical competency. A couple of times I eventually managed to figure out how to fix the error, but it was exhausting compared to how solving a code problem ordinarily feels, and I was left dissatisfied by how little I had learned.
Ordinarily, struggling through a complex code problem leaves me with a greater understanding of my domain, but this time it didn’t. I suppose I got a little better at prompting the AI, but I learned far less than if I had solved the problem myself. Battling through to build a thorough understanding of my problem and my tools takes a long time upfront, but the next time I do that task, or a similar one, I’ll be quicker, and those improvements compound as my proficiency grows. That’s why I stopped dabbling with AI coding assistants/agents: even though using them on that complex task still saved me time in the moment, in the long term the time savings from using an AI are negligible compared to the savings from increasing my own proficiency.
Now, I hear what you’re saying about how much more effective AI coding agents are becoming, and how the hallucination rate is lower than it was. I haven’t had much first-hand experience for quite a few months now, but I have no doubt I’d be incredibly impressed by the progress in such a relatively short time. The time savings from using AI are likely larger today than when I tested it, and in a year they’ll be larger still. In my view, though, that still won’t compete with the long-term time savings of a human gaining proficiency. You might disagree with me on that.
But the thing is, human proficiency isn’t just a means of saving time on routine tasks; it’s a valuable end in itself. Proficiency is how we protect ourselves when things go wrong in unexpected ways. Even if the AI models we’re using now could perfectly capture and reproduce the sum of our collected knowledge, I don’t believe they come close to rivalling humans at creating new knowledge, or at adapting to completely novel circumstances. Perhaps some day that will be possible for AI, but not with any of the architectures we have today. In the meantime, creative and proficient humans will keep finding ways to exploit the flaws in AI systems, possibly for nefarious ends. A society that relies heavily on AI will need more technical expertise, not less.
The crux of my argument is this: how does someone who isn’t proficient in bash tell whether the bash script an AI has generated is a good one or a bad one? Sure, humans are also far from perfect, but that’s why so many of our systems include oversight mechanisms that put many sets of eyes on critical work. Junior developers are mentored by more experienced devs, who help ensure they don’t break things through inexperience (at least in an ideal world; in practice, many senior devs are so overworked and stretched thin that they can’t give the guidance they should — again, a case for more proficient humans). Replacing proficient humans with AI will build a culture of unquestioningly following the AI. And even if the hallucination rate keeps dropping, even to a fraction of the human error rate, it will always be non-zero, and therefore there will be disasters.
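To make that crux concrete, here’s a hypothetical sketch of the kind of thing I mean (the directory and filename are invented for the demonstration). Both loops below are meant to “delete every file in a directory,” and both look plausible at a glance; only the second is correct, and spotting why requires exactly the shell proficiency the AI was supposed to replace.

```shell
#!/bin/sh
# Hypothetical example: two plausible-looking "delete every file in a
# directory" loops. Only one survives a filename containing a space.
set -u
dir=$(mktemp -d)
touch "$dir/report final.txt"

# The kind of loop an AI might plausibly emit: $(ls ...) word-splits
# on the space, so the loop sees "report" and "final.txt" as separate
# names and removes nothing.
for f in $(ls "$dir"); do
  rm "$dir/$f" 2>/dev/null
done
naive_left=$(ls "$dir" | wc -l | tr -d ' ')
echo "naive loop left: $naive_left file(s)"

# Correct version: let the shell glob and quote the expansion, so the
# filename stays intact as a single word.
for f in "$dir"/*; do
  [ -e "$f" ] && rm "$f"
done
glob_left=$(ls "$dir" | wc -l | tr -d ' ')
echo "quoted glob left: $glob_left file(s)"
rmdir "$dir"
```

Nothing here is exotic; the bug is simply invisible unless you already know how shell word-splitting works — which is the point.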
And when it all goes to shit, who will fix it if we have allowed human proficiency to wither away and die?