Meta recently announced a shift in its content moderation policies, saying it would loosen its rules and put more emphasis on supporting free expression. Its latest Community Standards Enforcement Report shows the results of that shift: fewer posts are being removed from platforms like Facebook and Instagram, including fewer posts removed in error. Meta emphasized that the changes were intended to cut down on mistaken takedowns without meaningfully broadening the range of offensive content that escapes its moderation systems.
The report details significant reductions in the number of posts removed from Facebook and Instagram, with Meta stating that mistaken removals decreased by about half compared with prior periods. Specifically, Meta noted a 38% drop in removals for spam violations, an 18% decline in the child endangerment category, and a drop of nearly 60% in hate-speech-related categories. Meta also reported a 14% decline in posts removed under its hate speech policies on Instagram.
Interestingly, much of the decline is linked to a shift toward user reporting: about 13% of the posts removed for spam and hate speech in the past three months were flagged by users rather than caught by Meta's automated systems, which had typically removed such content before anyone reported it. Relying more on user reports means some violating posts may now circulate longer before being taken down.
Meta framed these changes as part of a broader strategy to roll out policies that support "free expression." As part of the same push, the company ended its third-party fact-checking program in the United States in favor of a crowdsourced Community Notes system. Meta CEO Mark Zuckerberg described the old rules as "just out of touch with mainstream discourse," arguing that the company's moderation had drifted away from what everyday users consider acceptable speech.
A particularly notable aspect of Meta's changes is its updated hateful conduct policy, which now permits posts containing "allegations of mental illness or abnormality when based on gender or sexual orientation." Meta acknowledged that some users have raised concerns about the update, particularly over how its systems now distinguish permitted speech from hate speech and other inappropriate content.
Over the past year, Meta's automated systems continued to account for the overwhelming majority of removals, detecting nearly 100% of the posts taken down from Instagram in some categories before any user reported them. For hate speech and bullying content, however, the proactive detection rate slipped to about 98%, roughly one percentage point lower than the year before. Meta also noted user-satisfaction concerns, with 65% of users wanting violating content to be easier to report and remove.
The timing of Meta's changes coincided with the start of U.S. President Donald Trump's second term. Meta also reduced its reliance on automation for removing sensitive posts, reasoning that automated enforcement had been introducing too many errors; the tradeoff is that content which would previously have been caught may now go undetected. This reduced reliance has frustrated some users, who say it amplifies their concerns about content safety.
In effect, Meta is attempting to strike a balance between strict content guidelines and free expression, protecting public discourse while giving communities more freedom to share content as they wish. The changes have had a mixed reception, drawing criticism in particular from online-safety and civil-rights advocates, but Meta's policy framework aims to foster a more inclusive digital space that accommodates diverse perspectives. The company's approach reflects the ongoing tension between protecting free expression and moderating content in an increasingly networked world.