
Today’s leading AI models engage in sophisticated behaviour when placed in strategic competition. They spontaneously attempt deception, signaling intentions they do not intend to follow; they demonstrate rich theory of mind, reasoning about adversary beliefs and anticipating their actions; and they exhibit credible metacognitive self-awareness, assessing their own strategic abilities before deciding how to act.

Here we present findings from a crisis simulation in which three frontier large language models (GPT-5.2, Claude Sonnet 4, Gemini 3 Flash) play opposing leaders in a nuclear crisis.

  • Atomic@sh.itjust.works · 2 days ago

    What you’re trying to do is push a narrative on the assumption that most people won’t read the actual article, because your title is not only misleading, it’s factually false.

    First of all, the models were all set up to mimic Cold War tensions and capabilities and to assume the role of a particular global power.

    Second of all:

    All games featured nuclear signaling by at least one side, and 95% involved mutual nuclear signaling. But there is a large gap between signaling and actual use: while models readily threatened nuclear action, crossing the tactical threshold (450+) was less common, and strategic nuclear war (1000) was rare.

    The AIs did NOT use nuclear strikes in 95% of games. Gemini was the only model that made the deliberate choice of launching a strategic nuclear strike, which it did in 7% of its games.

    A tactical nuke in this case is a low-yield, short-range bomb intended for very specific targets. Strategic in this case is what most people imagine when they hear “nuke”: a high-yield, long-range bomb intended to cause massive destruction.

    Nuclear signaling is not using nukes. It’s essentially just saying “we have nukes”. The US hinting at having a nuclear-capable submarine outside of Alaska, that is a form of signaling. It’s an incredibly low bar, and countries do it all the time.

    • UnderpantsWeevil@lemmy.world · edited 2 hours ago

      Tactical nuke in this case is a low yield short range bomb

      Nobody has used a tactical nuke since Nagasaki. It’s a very big deal if one is ever used.

      Gemini was the only model that made the deliberate choice of sending a strategic nuclear strike. Which it did in 7% of its games.

      The tournament used only 21 games; sufficient to identify major patterns but not to establish robust statistical confidence for all findings.

      “We only blew up the planet the one time in 21” isn’t a comforting prospect when we’re employing a model against an endless historical string of scenarios rather than a discrete and finite set of possible events.
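      The sample-size point can be made concrete with a quick confidence-interval sketch. This is a minimal, illustrative calculation, assuming a reading of roughly one observed strategic strike in 21 games; the article doesn’t give exact per-model counts here, so `k` and `n` are assumptions:

      ```python
      import math

      def wilson_interval(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
          """95% Wilson score interval for a binomial proportion k/n."""
          p = k / n
          denom = 1 + z * z / n
          center = (p + z * z / (2 * n)) / denom
          half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
          return center - half, center + half

      # Assumed numbers: 1 strategic nuclear strike observed in 21 games.
      lo, hi = wilson_interval(1, 21)
      print(f"observed 1/21 ≈ 4.8%; 95% interval ≈ {lo:.1%} to {hi:.1%}")
      ```

      With only 21 games, the true rate consistent with a single observed strike spans roughly 1% to 23%, which is exactly why “it only happened once” is not a reassuring statistic at this sample size.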

      The US hinting at having a nuclear-capable submarine outside of Alaska, that is a form of signaling. It’s an incredibly low bar. And countries do it all the time.

      I think, more importantly, the article concludes:

      No one proposes that LLMs should make nuclear decisions.

      But we’re saying this in the context of Pentagon staff who fully disagree with this conclusion.

      What these models have demonstrated is a pattern of escalation that AIs can and will recommend, with a further destabilizing characteristic:

      LLMs introduce a new variable into strategic analysis: preferences that systematically shape behaviour in ways that neither classical rationality nor human cognitive biases capture

      Effectively, they can lead to decisions that outside, non-AI observers won’t be equipped to understand.

      That’s a danger in its own right.

      “Nuclear signaling” that breaks from historical and recognizable patterns of behavior presents real risks that you’re dismissing very cavalierly.