X will prohibit users from earning revenue for posting unlabeled AI‑generated war videos.

Elon Musk’s platform X will prohibit users from earning revenue if they repeatedly share AI‑generated war videos without labeling them, following a surge of fabricated battle footage related to the Iran conflict.

With roughly 500 million monthly users, X will bar creators from receiving advertising payments for 90 days if they post AI‑generated videos of armed conflict without a clear disclosure. A repeat violation will result in permanent exclusion from the revenue‑sharing program, the company announced Tuesday night, after the opening days of the Iran hostilities were accompanied by a flood of fabricated online clips.

Feeds on X, as well as on Instagram and Facebook, have carried many fabricated battle scenes, including Iranian rockets chasing and downing a U.S. jet, a video that BBC Verify said had been viewed 70 million times, and another clip that used AI to replace the genuine smoke from a missile strike with an exaggerated fireball.

Creators with followings of around 100,000 can earn several hundred dollars a month on X through its advertising revenue‑sharing system, an incentive that encourages sensational viral content.

“In periods of conflict, reliable on‑the‑ground information is essential,” said Nikita Bier, head of product at X. “Current AI tools make it easy to produce misleading material. Effective immediately, anyone who uploads AI‑generated videos of an armed conflict without a clear disclosure will lose access to revenue sharing for 90 days. Further breaches will lead to permanent removal from the program.”

Other fabricated war videos have also attracted large audiences. An Instagram clip claiming to show a massive blaze after “Iran destroyed the U.S. airbase in Riyadh” was later identified as 18‑month‑old footage of the aftermath of an Israeli strike on an oil refinery in Hodeidah, Yemen.

Full Fact, a UK fact‑checking organization, noted that it is “increasingly seeing AI accelerate the spread of misinformation on social media.”

“In recent days we have encountered numerous AI‑generated images circulated on various platforms as if authentic, such as fabricated pictures of an aircraft carrier, the Burj Khalifa ablaze, and an alleged photo of Ayatollah Khamenei’s body,” said Steve Nowottny, editor at Full Fact. “Even when the AI output appears low‑quality or bears a watermark, it is still shared widely; the sheer amount of such false material and the ease of its creation and distribution pose a serious problem.”