Social media platform X, operating under Elon Musk’s ownership, has announced a policy change requiring creators to disclose when their videos of armed conflicts are AI-generated. The move is part of the company’s stated commitment to information authenticity amid the ongoing conflict pitting the US and Israel against Iran.
Nikita Bier, X’s head of product, emphasized how urgently people need access to genuine information at such critical moments. “During times of war, it is essential that users have access to authentic content,” he stated, noting that current AI technologies make it “trivial” to produce misleading content and that more stringent policies are therefore needed.
This stance contrasts with X’s broader approach to content moderation under Musk. Since he acquired Twitter in October 2022, the company has largely disregarded policies against misinformation, viewing them as forms of censorship.
Announced by X on Monday, the new policy imposes a 90-day suspension on creators found posting AI-generated videos without disclosing their artificial nature. Repeat offenders may face permanent removal from the Creator Revenue Sharing program, which pays eligible users a share of the advertising revenue their posts generate.
To enforce these rules, X is relying on Community Notes—its crowd-sourced fact-checking system—along with technical signals to flag AI-generated content that violates the policy. The company also revealed that it had identified a user in Pakistan managing 31 accounts that posted fake AI war videos; the accounts had been hacked and rebranded with new usernames before being detected.
Beyond these steps, X says it remains vigilant for similar activity and aims to eliminate the incentive for such deceptive practices. The platform is positioning itself as a more transparent venue during these sensitive times, addressing concerns over misinformation and seeking to ensure users receive accurate information amid global conflicts.


