“It’s the classic technology scenario,” he said. “You’ve got a technology that’s very, very promising, but not as rigorously tested as you would like it to be, and the commercial pressure behind it is unbearable.”
Is it promising though, Michael Wooldridge? Have you recently attended any magic shows and become excited by the potential of invisibility technology?
Except for the one person on the ground, the only people harmed in the Hindenburg disaster were the ones on board. If you’re not “on board” when the AI bubbles pops and burns I expect you will not be hurt as much as those blindly taking that ride.
Unfortunately, we’re not all the ones that decide if we’re on board or not. Our employers are. We live in a world where profits are privatized and losses are socialized, so when this goes, it’s going to hurt the general public a lot more than it will every hurt the Epstein Class.
On board means part of the utility grid and industrial food infrastructure sooooo
The difference being that the Hindenburg was a perfectly functioning rigid airship that had a lot of inherent risks due to the nature of its design.
AI isn’t good enough at its actual job to be in this position. The risk of AI is people pretending that it works when it doesn’t. It would be like if you made a blimp and filled it with carbon dioxide and people kept buying tickets and just sitting there waiting for it to take off.
…but giving AI technology to Psycho Corporations that have an open declared goal of not caring about anything but profits - is not a problem. Got it…
Jeebus, “The Guardian” is infested with no/slow-thinking child ‘journalists’…
This is a good comparison if all it took for the Hindenburg to explode was just asking it to role-play as a ship that could explode. Conscious effort had to be expended to make the thing fail, but most models start to fail spectacularly if you use it in good-faith for more than like 30 minutes.
That’s a good point. The precarity of the AI is, as far as I’ve seen, unprecedented in human history. There simply hasn’t been anything that undergirds so much of the world economy and can fail so catastrophically in so many ways.
I really don’t think we have a good historical analogues to illustrate the scale of the risk. The only possible exception of I can think of is mutually assured destruction during the Cold War, but that hinged on only one decision by one of (arguably) two individuals at any given time, both of whom were highly incentivized not to make that decision. That, or the global climate’s collapse, but even that overlaps significantly with the bubble. With AI, compared to MAD at least, each catastrophic outcome isn’t the result of even a small set of actors, but many unregulated companies with incentives to be reckless (making negative outcomes not only more probable but more numerous). And increasing incentives at that, as the funding starts to dry up (AI hasn’t really proven itself a proper ROI).
Something—and possibly many somethings—will go horribly wrong. Some already have, like AI use by students at all levels robbing them of their education and their actual value to the workforce, and acceleration of the climate collapse (maybe that’s the only analogous crisis). But it remains to be what (not if) things go wrong or even worse.
But the truth is, I’m still relatively young. I’m just old enough to get a hint of the world’s workings, scale, and stakes. And in my life, nothing has seemed more like a loaded gun pointed at out heads than the AI bubble.
A disaster that causes a lot of bad publicity despite the majority (62/97) of the passengers surviving, and that may have been caused by sabotage?
I appreciate the people who help make sure AI doesn’t receive an ounce of the credit it doesn’t deserve
No!
Fire BIG. Big fire bad!
Run away!
And now we hear stories about how easy it is to hack systems with built in LLM’s and when you think about it, they are basically trained to be as helpful and forthcoming as possible, and then we give them the keys to the system!
“Oh the inhumanity!”
Hindenburg was a hiccup in history relative to the fallout from an AI bust.
What? Global interest? Self-driving cars? Hindenburg? Is this professor a cat? Markov chain? The provided info is so crazy that I decided to NOT read the article.
Hydrogen buildup?









