X is drawing a hard line against AI trickery in war coverage. As geopolitical flashpoints like the US-Israel-Iran standoff intensify, the platform will cut off revenue streams for any creator sharing undisclosed AI-made combat videos.
The deluge of ultra-realistic fake footage during crises poses a grave threat: it muddles facts, misleads viewers, and undermines the clarity needed to grasp unfolding tragedies.
Nikita Bier, X’s product chief, outlined the enforcement in a post on the platform: first-time offenders lose monetization privileges for three months, while repeat offenders are permanently cut off from the program. “Authentic info is paramount in war zones, yet AI makes deception effortless,” he noted.
To enforce the policy, X will pair automated detection of AI-generated content with its Community Notes system, which lets users append context or corrections to dubious posts and reflects the platform’s shift toward community-led moderation.
The revenue-sharing program, which pays creators a cut of ad earnings based on likes, shares, and views, was designed to spur content creation. Researchers warn, however, that it can reward outrage over accuracy, and that its low barriers to entry let fakes thrive.
Focused solely on depictions of warfare, the initiative is a targeted response to an immediate danger. Broader AI abuses in other domains remain outside its scope, though the move could herald wider reforms as generative tools advance.