The U.S. armed forces are said to have employed Claude, the artificial‑intelligence system created by Anthropic, to guide their strike on Iran even though President Donald Trump had announced just hours earlier that all connections with the firm and its AI products would be cut.
The Wall Street Journal and Axios reported Claude’s involvement in the large‑scale joint U.S.–Israel bombardment of Iran that began on Saturday. The episode highlights how difficult it is for the military to withdraw sophisticated AI tools from operations into which they have become deeply integrated.
The Journal reports that military commanders used the system for gathering intelligence, for choosing targets and for running battlefield simulations.
On Friday, mere hours before the strikes on Iran began, Trump ordered every federal agency to cease using Claude at once. He castigated Anthropic on Truth Social as a “radical left AI outfit run by people who have no grasp of the real world.”
The dispute began when the military deployed Claude in a January raid aimed at detaining Venezuelan President Nicolás Maduro. Anthropic protested, citing its usage policy, which forbids employing Claude for violent purposes, weapon development or surveillance.
Relations among Trump, the Pentagon and the AI firm have since deteriorated. In an extensive post on X on Friday, Defense Secretary Pete Hegseth accused Anthropic of “arrogance and betrayal,” adding that “America’s warfighters will never be held hostage by the ideological whims of Big Tech.”
Hegseth called for unrestricted access to all of Anthropic’s AI models for any lawful application.
He also acknowledged the challenge of quickly extricating military platforms from the tool, given how pervasive it has become. He said Anthropic would keep providing its services “for no more than six months to allow a smooth shift to a more suitable and patriotic alternative.”
Following the split with Anthropic, competitor OpenAI moved in to fill the gap. OpenAI chief Sam Altman said the company had reached an agreement with the Pentagon to supply its tools, including ChatGPT, for use on the department’s classified network.