X to Suspend Creators Who Post Undisclosed AI War Videos


X to Suspend Creators Who Post Undisclosed AI War Videos

X says it will penalize creators who share AI-generated videos depicting armed conflict without clearly labeling them as artificial.

On Tuesday, the platform’s head of product, Nikita Bier, announced that users who post misleading AI war content without disclosure will be removed from X’s Creator Revenue Sharing Program for 90 days. Repeat offenders who continue posting undisclosed AI content after their suspension ends will face permanent removal from the monetization program.

“During times of war, it is critical that people have access to authentic information on the ground,” Bier wrote on X. “With today’s AI technologies, it is trivial to create content that can mislead people. Starting now, users who post AI-generated videos of an armed conflict — without adding a disclosure that it was made with AI will be suspended from Creator Revenue Sharing for 90 days.”

How X Plans to Enforce the Rule

X says enforcement will rely on a mix of generative AI detection tools and its crowdsourced fact-checking system, Community Notes. Posts flagged as misleading and lacking proper disclosure could trigger monetization suspensions.

What’s at Stake for Creators

The Creator Revenue Sharing Program allows users to earn a portion of advertising revenue based on engagement with their posts. While the initiative was designed to encourage more high-quality content, critics argue it has sometimes rewarded sensationalism, clickbait, and outrage-driven posts. Others have raised concerns about weak content moderation standards and the requirement that participants be paid X subscribers.

A Narrow Fix?

Although the policy targets undisclosed AI content related to war, it stops short of addressing other areas where synthetic media can cause harm. AI-generated content is frequently used to spread political misinformation or promote deceptive products, practices that remain unaffected by this specific rule.

In short, X’s move limits financial incentives for a particularly sensitive category of misleading AI content. But it leaves broader questions about AI misinformation and platform accountability unresolved.
