OpenAI is revising its hastily arranged agreement to provide artificial‑intelligence tools to the U.S. Department of War after the company’s chief executive acknowledged that the arrangement appeared “opportunistic and sloppy.”
The contract sparked concerns that the San Francisco firm’s AI might be employed for domestic mass surveillance, but its head, Sam Altman, said on Monday night that the technology would be expressly prohibited from such use or from deployment by defense‑department intelligence bodies such as the National Security Agency.
OpenAI, whose ChatGPT service has more than 900 million users, struck the deal almost immediately after the Pentagon dropped its previous AI supplier, Anthropic.
Anthropic had argued that “using these systems for mass domestic surveillance is incompatible with democratic values,” prompting President Donald Trump to label Anthropic “left‑wing nut jobs” and to order the federal government to cease using its technology.
Although OpenAI denied that the pact permitted surveillance, commentators invoked the 2013 Snowden revelations, when it emerged that the NSA had been collecting large volumes of phone and internet data.
The agreement provoked an online backlash, with users on X and Reddit urging a “delete ChatGPT” movement. One post read: “You’re now training a war machine. Let’s see proof of cancellation.”
Anthropic’s chatbot Claude surged to the top of Apple’s App Store rankings, overtaking ChatGPT, according to Sensor Tower data.
In a memo to staff reposted on X, the OpenAI chief said the original deal announced on Friday had been concluded too hastily after Anthropic’s removal.
“We shouldn’t have rushed to get this out on Friday,” Altman wrote. “The issues are extremely complex and require clear communication. We were genuinely trying to de‑escalate and avoid a far worse outcome, but it simply looked opportunistic and sloppy.”
When the deal was first announced, OpenAI claimed the contract contained “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.”
Nevertheless, the prospect of AI use by the U.S. military has alarmed nearly 900 workers at OpenAI and Google, who have signed an open letter urging their leaders to refuse Department‑of‑War requests for surveillance and autonomous killing capabilities.
Warning that the government was attempting to “divide each company with fear that the other will give in,” the signatories wrote: “We hope our leaders will set aside their differences and stand together to continue rejecting the DoW’s current demands for permission to use our models for domestic mass surveillance and autonomous killing without human oversight.”
The letter bears the signatures of 796 Google employees and 98 OpenAI staff. In a blog post announcing the DoW contract, OpenAI said one of its red lines was “no use of OpenAI technology to direct autonomous weapons.”