• 24 Posts
  • 3.99K Comments
Joined 3 years ago
Cake day: June 14th, 2023


  • He hated his time with the police force, hated the British empire, and called imperialism “an evil thing.”

    Incredibly, the man once accused of communist tendencies, the creator of Big Brother, was by 1949 surreptitiously working for British intelligence. He drew up a list of names of crypto-communists for Britain’s Foreign Office Information Research Department (IRD), the spies who led the UK’s propaganda war.

    Orwell’s contact was Celia Kirwan, a former flame who visited the author while he battled tuberculosis at a sanatorium in England. Orwell had proposed to her years earlier, but by then they were simply friends, friends in high places. During her visit, Celia and Orwell discussed the secretive projects the IRD was doing “in great confidence, and he was delighted to learn of them, and expressed his wholehearted and enthusiastic approval of our aims,” according to Britain’s National Archives and Foreign Office records.

    Orwell listed the names of suspected communists who might betray Britain if they were hired to work as writers in the propaganda unit. In his now-famous letter dated April 6, 1949, Orwell writes: “I could also, if it is of value, give you a list of crypto-communists, fellow-travelers or inclined that way and should not be trusted as propagandists.”

    Orwell wanted his list to be ‘strictly confidential’. It includes dozens of literary luminaries of the ‘40s including J. B. Priestley, the novelist and playwright, and Manchester Guardian industrial correspondent John Anderson, described by Orwell as: “Probably sympathizer only. Good reporter. Stupid.”

    Orwell collapsed with tuberculosis after writing the first draft of Nineteen Eighty-Four and typed the second version of his novel while recovering in bed. He collapsed again when he had finished and died on January 21, 1950. The CIA, US Army, and British spies began courting his young widow, his second wife Sonia, almost immediately, hoping to buy the film rights to Animal Farm. The CIA closed the deal with a promise of cash and an introduction to Hollywood movie star Clark Gable. The Brits settled for the rights to turn Animal Farm into a comic strip.


  • The scarf has higher requirements for precision and a more constant overhead than a one-off giant summon.

    I mean, there’s a scarf.

    And then there’s a scarf.

    You could make them go “oof” on the summon if you added a requirement that the lava properly flow along the ground and interact with all characters near the event.

    I think the better question is “How many polygons do you want and what do you want them to do?”


  • A tactical nuke in this case is a low-yield, short-range bomb.

    Nobody has used a tactical nuke since Nagasaki. It would be a very big deal if one were ever used.

    Gemini was the only model that made the deliberate choice of launching a strategic nuclear strike, which it did in 7% of its games.

    The tournament used only 21 games, sufficient to identify major patterns but not to establish robust statistical confidence for all findings.

    “We only blew up the planet the one time in 21” isn’t a comforting prospect when we’re employing a model against an endless historical string of scenarios rather than a discrete and finite set of possible events.
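    A rough way to see why 21 games can’t pin down a rate like 7% is to put a confidence interval around it. Here is a minimal sketch using the Wilson score interval, assuming the 7% figure corresponds to roughly one strike in 21 games (the article’s exact count isn’t given here):

    ```python
    import math

    def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
        """95% Wilson score confidence interval for a binomial proportion k/n."""
        p = k / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return center - half, center + half

    # One strategic strike observed in 21 games (~5%) -- assumed count:
    lo, hi = wilson_ci(1, 21)
    print(f"95% CI for the strike rate: {lo:.3f} to {hi:.3f}")  # roughly 0.008 to 0.227
    ```

    Even a single strike in 21 games is statistically consistent with a true rate anywhere from under 1% to over 20%, which is exactly why “one time in 21” offers so little comfort.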

    The US hinting at having a nuclear-capable submarine off Alaska is a form of signaling. It’s an incredibly low bar, and countries do it all the time.

    I think, more importantly, the article concludes:

    No one proposes that LLMs should make nuclear decisions.

    But we’re saying this in the context of Pentagon staff who fully disagree with this conclusion.

    What these models have demonstrated is a pattern of escalation that AIs can and will recommend, with a further destabilizing characteristic:

    LLMs introduce a new variable into strategic analysis: preferences that systematically shape behaviour in ways that neither classical rationality nor human cognitive biases capture

    Effectively, they can lead to decisions that outside, non-AI observers won’t be equipped to understand.

    That’s a danger in its own right.

    “Nuclear signaling” that breaks from historical and recognizable patterns of behavior presents real risks that you’re dismissing very cavalierly.