Meta announced on Monday that it will implement additional measures to crack down on accounts sharing “unoriginal” content on Facebook, specifically targeting those that repeatedly reuse others’ text, photos, or videos. This year, Meta has already removed approximately 10 million profiles impersonating prominent content creators.
Additionally, the company has taken action against 500,000 accounts engaged in “spammy behavior or fake engagement.” Measures include demoting these accounts’ comments, reducing the distribution of their content, and preventing them from monetizing.
This update comes shortly after YouTube clarified its own policies regarding unoriginal content, including repetitive and mass-produced videos that have become easier to generate with AI technology.
Like YouTube, Meta stated it won’t penalize users who engage with others’ content, such as creating reaction videos or participating in trends. Instead, the focus is on the reposting of others’ content by spam accounts or those pretending to be the original creators.
Accounts that repeatedly reuse someone else’s content will temporarily lose access to Facebook monetization programs and experience reduced distribution of their posts. Facebook will also lower the visibility of duplicate videos to ensure original creators receive appropriate views and credit.
Furthermore, the company is testing a system that adds links to duplicate videos, directing viewers to the original content.
This update arrives amid criticism from users across Meta’s platforms, including Instagram, regarding erroneous enforcement of policies through automated means. A petition with nearly 30,000 signatures calls on Meta to address issues related to wrongfully disabled accounts and the lack of human support, which many users feel has negatively impacted small businesses. Meta has yet to publicly respond to these concerns.
While the new crackdown is primarily focused on accounts that exploit others’ content for profit, the issue of unoriginal content is becoming increasingly significant.
With the rise of AI technology, platforms have seen an influx of low-quality media content, often referred to as “AI slop,” which features AI-generated voiceovers paired with repurposed images or videos.
Meta’s update is framed around reused content, but it also hints at the growing problem of low-quality AI-generated videos. The company advises creators to avoid merely “stitching together clips” or adding watermarks to existing content, instead encouraging “authentic storytelling” over short videos that provide little value.
Additionally, Meta warns creators against reusing content from other apps or sources, reiterating a longstanding rule. It emphasizes the need for high-quality video captions, which may discourage reliance on automated AI-generated captions that lack creator input.
Meta plans to roll out these changes gradually over the coming months, allowing Facebook creators time to adjust. Creators can access new post-level insights in Facebook’s Professional Dashboard to identify distribution issues and can see if they risk penalties related to content recommendations or monetization in the Support section of their Page or professional profile.
Meta typically shares details about its content takedowns in its quarterly Transparency Reports. In the most recent quarter, it estimated that 3% of its worldwide monthly active Facebook users were fake accounts and said it took action against 1 billion fake accounts between January and March 2025.
Recently, Meta has shifted away from fact-checking content itself in favor of Community Notes in the U.S., allowing contributing users to assess whether posts are accurate and adhere to Meta’s Community Standards.
SOURCE: TechCrunch