OpenAI shares contract language and red lines with Department of Defense
OpenAI shared contract language from its agreement with the Department of Defense stating that its technology cannot be used for mass domestic surveillance, to power autonomous weapons, or in high-stakes decision systems such as "social credit" scores. The company said the deal includes red lines and that it believes the agreement has "more guardrails than any previous agreement for classified AI deployments, including Anthropic's." Under the deal, OpenAI said it retains full discretion over its safety stack, will deploy via the cloud, and will keep cleared OpenAI personnel in the loop, backed by strong contractual protections and existing U.S. law. The company said it could terminate the contract if the government violated its terms but added, "We don't expect that to happen." It also said it asked that the same terms be made available to all AI labs and that the government try to resolve matters with Anthropic to de-escalate the situation.
United States
openai, dod, contract language, red lines, mass surveillance, autonomous weapons, social credit, anthropic, safety stack, cloud deployment