We have three main red lines that guide our work with the DoW, which are generally shared by several other frontier labs:
• No use of OpenAI technology for mass domestic surveillance.
• No use of OpenAI technology to direct autonomous weapons systems.
• No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as “social credit”).
It specifically states that their AI can't/won't be used for surveillance or autonomous weapons. I'm not saying I trust them, of course, but isn't this the same thing Anthropic says they're against? What's the difference here, or what did I miss?
The "no domestic surveillance" line just mirrors some limitations (from their point of view) in the Patriot Act.
They're still willing to surveil people outside the USA, and in practice all they have to do is route domestic traffic through an international part of a network and they can legally spy on Americans at home, which is what already happens.
Anthropic put in clauses that are legally enforceable against future administrations. OpenAI's version amounts to "yeah, we totally trust you, bro."
Sam Altman is the king of "trust me bro" and then backpedaling on it.