A corporate artificial intelligence frenzy is sowing fear among workers on a massive scale. Seventy-one percent of people in the U.S., according to a Reuters poll on A.I., are concerned “too many people will lose jobs.” Wall Street and Big Tech are running a huge hype machine to back up their massive, risky investment in A.I., pledging it will drive a “productivity surge,” meaning fewer workers and more profits. But workers can take heart that, so far, it’s mostly hot air. To date, A.I. is generating few profits.
Not bad advice when it comes to organizing around AI in the workplace, but by diminishing its capabilities (reducing it to ‘producing buggy code’ and ‘badly summarizing data’ at several points in the article), they risk making workers think it’s not a problem worth taking seriously, which undercuts the very message they want to convey. The reality is that developers at all levels are already using agentic methods a lot. I can’t speak to numbers, but it’s widespread, and they’re doing it themselves without management telling them to. So on that point I agree with the article: workers should be able to choose how they integrate AI into their workflow, just like they should be able to choose their preferred tools and methods of work.
That contradicts the direct experience of most workers, for whom AI is something forced upon them by managers that creates more problems than it solves. The minority using agentic AI is also probably harder to organize, so it’s better to relate to the discontented majority than to the minority that is vibe-coding.
But my point was that it doesn’t produce buggy code the way the article implies.
I’ve been doing software dev for over two decades now. These tools absolutely do work, and they can save you a lot of time. The reality is that these tools are still very new, and people are learning how to use them effectively. They’re not magic, and you don’t just type a prompt and get a working program. You have to spend the time to actually learn how to use them effectively. Most people haven’t done that, especially the ones who complain about them most incessantly. I’ve seen this happen a lot personally: people don’t want these tools to work, so they try them, and when the tools don’t magically do what they want, they use that as proof that they don’t work.
Basing the argument on these tools producing buggy code, not being effective, and not saving time is just building a straw man. There are plenty of good arguments for organizing that are rooted in reality. There’s no need to invent fake ones here.
Yes, I feel the same. While I do think those tools work, the productivity gains are not earth-shattering, especially because if you want to use them effectively you need to be very specific and then review the generated code afterwards.
The problem is not the AI tools per se, but the expectation that AI will be the new industrial revolution, and the belief among managers that they will be able to cut costs (and personnel) thanks to the fictional gains AI provides. It ends up becoming a self-fulfilling prophecy, since they force devs to work harder and longer against higher productivity targets.
I do believe AI will be a big flop. Not because it is a bad tool, but because it will never fulfill the hype.
I’m kind of expecting we’ll see a scenario similar to the dot-com bubble, which wiped out the majority of the existing tech companies, but once the dust settled, useful tech came out of the whole thing. We’re in the main hype phase of this technology right now, and a lot of companies are making idiotic bets that are obviously going to be ruinous.
Also worth noting that a spike in energy prices resulting from the war on Iran could be a catalyst for the bubble popping, since data centers need massive amounts of energy to operate.
I don’t think developers using AI are vibe coding. Non-devs, sure, but not people hired by businesses as software developers.