That’s a good point. The precarity of AI is, as far as I’ve seen, unprecedented in human history. There simply hasn’t been anything that undergirds so much of the world economy and can fail so catastrophically in so many ways.
I really don’t think we have a good historical analogue to illustrate the scale of the risk. The only possible exception I can think of is mutually assured destruction during the Cold War, but that hinged on a single decision by one of (arguably) two individuals at any given time, both of whom were highly incentivized not to make that decision. That, or the collapse of the global climate, but even that overlaps significantly with the bubble. With AI, compared to MAD at least, each catastrophic outcome isn’t the result of even a small set of actors, but of many unregulated companies with incentives to be reckless (making negative outcomes not only more probable but more numerous). And those incentives only grow as the funding starts to dry up (AI hasn’t really proven itself a proper ROI).
Something—and possibly many somethings—will go horribly wrong. Some already have, like AI use by students at all levels robbing them of their education and their actual value to the workforce, and the acceleration of climate collapse (maybe that’s the only analogous crisis). But it remains to be seen what will go wrong next (not if), and how much worse it will get.
But the truth is, I’m still relatively young. I’m just old enough to get a hint of the world’s workings, scale, and stakes. And in my life, nothing has seemed more like a loaded gun pointed at our heads than the AI bubble.