Elon Musk’s platform X will prohibit users from earning revenue if they repeatedly share AI‑generated war videos without labeling them, following a surge of fabricated battle footage related to the Iran conflict.
With roughly 500 million monthly users, X will bar creators from receiving payment for 90 days when they post AI‑generated videos of armed conflict without a clear disclosure. A repeat violation will result in permanent exclusion, the company announced Tuesday night, after the opening days of the Iran hostilities were accompanied by a flood of false online clips.
Feeds on X, as well as on Instagram and Facebook, have displayed many fabricated battle scenes, such as Iranian rockets chasing and downing a U.S. jet—a video that BBC Verify recorded as having been viewed 70 million times—and another clip that employed AI to swap genuine smoke from a missile strike with an exaggerated fireball.
Creators who attract followings near 100,000 can earn several hundred dollars monthly on X through its advertising system, a factor that encourages the creation of sensational viral content.
“In periods of conflict, reliable on‑the‑ground information is essential,” said Nikita Bier, head of product at X. “Current AI tools make it easy to produce misleading material. Effective immediately, anyone who uploads AI‑generated videos of an armed clash without a clear disclosure will lose access to revenue sharing for ninety days. Further breaches will lead to permanent removal from the program.”
Additional fabricated war videos have attracted large audiences. An Instagram clip claiming to depict a massive blaze after “Iran destroyed the U.S. airbase in Riyadh” was later identified as eighteen‑month‑old footage of the aftermath of an Israeli strike on an oil refinery in Hodeidah, Yemen.
Full Fact, a UK fact‑checking organization, noted that it is “increasingly seeing AI accelerate the spread of misinformation on social media.”
“In recent days we have encountered numerous AI‑generated images circulated on various platforms as if authentic, such as fabricated pictures of an aircraft carrier, the Burj Khalifa ablaze, and an alleged photo of Ayatollah Khamenei’s body,” said Steve Nowottny, editor at Full Fact. “Even when the AI output appears low‑quality or bears a watermark, it is still shared widely; the sheer amount of such false material and the ease of its creation and distribution pose a serious problem.”