OpenAI chief Sam Altman told staff on Tuesday that the firm does not dictate how the Pentagon employs its artificial‑intelligence tools in combat settings. The statement comes as lawmakers and ethicists intensify scrutiny of military AI use and as engineers voice worries about how their creations are deployed.
“You don’t get to make operational choices,” Altman said to employees, according to Bloomberg and CNBC.
“So whether you think the Iran strike was justified or the Venezuela operation was wrong, you’re not asked to weigh in,” he added.
The sector has been caught in fierce debate and strained talks in recent weeks after the Defense Department pressed AI firms to lift safety constraints on their models to broaden possible military uses. Sources say AI‑driven systems have already been deployed in the U.S. effort to detain Venezuelan leader Nicolás Maduro and in targeting decisions during the conflict with Iran.
Anthropic, a competitor that builds the Claude chatbot, turned down a Pentagon contract last week, citing fears that its model could be repurposed for domestic mass surveillance or fully autonomous weapons. Defense Secretary Pete Hegseth labeled the company a “supply‑chain risk,” a designation never before applied to a U.S. firm and one that could inflict serious financial damage if formalized.
On the same day Hegseth announced punitive steps against Anthropic, the Pentagon revealed a separate agreement with OpenAI that appeared intended to replace Claude in military projects. The timing of that pact, and the perception that OpenAI had crossed ethical lines Anthropic refused to cross, drew criticism both publicly and within OpenAI's workforce.
Altman and OpenAI have since stressed that their technology will be used lawfully and have sought to manage the fallout, with Altman conceding that the agreement was hurried and made the company appear “opportunistic and sloppy.”
Anthropic chief executive Dario Amodei denounced Altman as “mendacious” in a memo to staff on Wednesday and accused him of offering “dictator‑style praise to Trump,” according to The Information. “We have kept our red lines intact rather than colluding to stage ‘safety theater’ for the benefit of employees—something everyone at the Pentagon, Palantir, our political advisers, and others assumed we were trying to solve,” Amodei wrote.
He also targeted the Pentagon and the Trump administration. “The real reason the Pentagon and the Trump team dislike us is that we haven’t donated to Trump, whereas OpenAI and Greg have given heavily,” he added, referencing Greg Brockman, OpenAI’s president, who contributed $25 million to a pro‑Trump political action committee with his wife.