

It has a goal: to win, or at least not to lose.
Its model doesn’t include the long-term consequences of a nuclear strike because its core mission isn’t to preserve human life.
Same reason you don’t see AIs constantly interjecting the need to cut carbon emissions, redistribute private wealth, or demilitarize as solutions for resolving conflicts.
This isn’t what the machines were built to do.







Nobody has used a nuclear weapon in war since Nagasaki. It is a very big deal if one is ever used again.
The tournament used only 21 games; sufficient to identify major patterns but not to establish robust statistical confidence for all findings.
“We only blew up the planet the one time in 21” isn’t a comforting prospect when we’re employing a model against an endless historical string of scenarios rather than a discrete and finite set of possible events.
More importantly, I think, the article reaches its conclusion in the context of Pentagon staff who fully disagree with that conclusion.
What these models have demonstrated is a pattern of escalation that AIs can and will recommend, with a further destabilizing characteristic: they can lead to decisions that outside, non-AI observers won’t be equipped to understand.
That’s a danger in its own right.
“Nuclear signaling” that breaks from historical and recognizable patterns of behavior presents real risks that you’re dismissing very cavalierly.