Iran conflict ushers in AI-driven bombings faster than the speed of thought

The deployment of artificial‑intelligence tools to direct attacks on Iran signals a new phase of bombing that proceeds faster than “the speed of thought”, analysts have warned, raising concerns that human decision‑makers may be bypassed altogether.

Anthropic’s language model, Claude, was reportedly used by the U.S. armed forces during the recent wave of strikes, with the technology “compressing the kill chain” – the sequence running from target identification through legal clearance to launch.

The United States and Israel, which have previously used AI to pinpoint targets in Gaza, carried out nearly 900 attacks on Iranian sites within the first twelve hours, a period that also saw Israeli missiles kill Iran’s supreme leader, Ayatollah Ali Khamenei.

Scholars of military AI argue that the technology is shrinking the planning interval required for intricate operations – a trend known as “decision compression”. Critics fear this could reduce military and legal specialists to rubber‑stamping pre‑generated strike plans.

In 2024, the San Francisco‑based firm Anthropic integrated its model into the U.S. Department of Defense and other security agencies to accelerate war planning. Claude was incorporated into a platform built by the technology company Palantir in partnership with the Pentagon to enhance intelligence assessment and support officials in their decision‑making.

“The AI system proposes targets, which in many respects is faster than the speed of thought,” said Craig Jones, a senior lecturer in political geography at Newcastle University and a specialist in kill‑chain dynamics. “You get both scale and speed, carrying out assassination‑type strikes while simultaneously undermining the regime’s capacity to respond with aerial ballistic missiles. What once took days or weeks in past conflicts now happens in a single moment.”

The newest AI suites can swiftly process vast quantities of data on prospective targets, ranging from drone imagery to intercepted communications and human reports. Palantir’s platform applies machine‑learning techniques to rank targets, suggest appropriate weaponry based on inventory and past effectiveness, and automatically assess the legal justification for an attack.

“This marks a new chapter in military strategy and technology,” said David Leslie, professor of ethics, technology and society at Queen Mary University of London, who has observed AI‑driven defence demonstrations. He cautioned that dependence on AI may lead to “cognitive off‑loading”, whereby individuals responsible for authorising strikes feel detached from the outcomes because the reasoning has been outsourced to a machine.

On Saturday, a missile strike on a school in southern Iran killed 165 people, many of them children, according to state media. The site was near a military barracks, and the United Nations described the incident as “a grave violation of humanitarian law”. The U.S. military said it was reviewing the reports.