
Meta shifts content policy: Fact-checkers gone, more harmful content expected

Published: 07/01/2025


NEW YORK, Jan 7: Meta has announced sweeping changes to its content moderation policies on Facebook and Instagram, significantly altering how posts, videos, and other content are reviewed. The company will phase out its third-party fact-checking program and replace it with user-generated “community notes,” similar to the system implemented by Elon Musk’s X (formerly Twitter). CEO Mark Zuckerberg made the announcement on Tuesday.

These changes come just as President-elect Donald Trump prepares to take office, amid ongoing criticism from Trump and other Republicans, who accuse Zuckerberg and Meta of censoring right-wing voices. Zuckerberg explained in a video that the existing fact-checking systems had become politically biased and were undermining trust, rather than fostering it. He said, “What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas, and it’s gone too far.”

Zuckerberg acknowledged a “tradeoff” in the new policy: relaxed moderation could allow more harmful content to appear on the platform. Meta’s newly appointed Chief of Global Affairs, Joel Kaplan, also weighed in, stating that while the partnerships with third-party fact-checkers were well-intentioned at the outset, political bias had become too pervasive in their operations.

The policy shift appears to align with a broader ideological move to the right within Meta’s leadership. Zuckerberg has made clear his desire to improve relations with Trump before he takes office. In a separate announcement just a day earlier, Meta revealed that Dana White, a Trump ally and UFC CEO, would join its board, alongside two other new directors. Additionally, Meta has pledged $1 million to Trump’s inaugural fund, with Zuckerberg expressing interest in taking an “active role” in tech policy discussions.

Kaplan, a well-known Republican who was promoted to the role of Chief of Global Affairs just last week, acknowledged that the shift in content moderation policies was directly tied to the changing political landscape in the U.S. He noted, “We saw a lot of societal and political pressure over the past four years, all in the direction of more content moderation, more censorship, and now we’ve got a new administration, and a new president coming in who are big defenders of free expression, and that makes a difference.”

The Real Facebook Oversight Board, an external accountability group comprising academics, lawyers, and civil rights advocates, criticized Meta’s decision. The organization, which plays on the name of the official Facebook Oversight Board, accused Meta of “going full MAGA” and called the policy changes “political pandering.”

This move marks a major reversal in how Meta has handled false and misleading claims on its platforms. In 2016, responding to criticism of its role in the spread of disinformation during the U.S. presidential election, Meta (then known as Facebook) launched an independent fact-checking program. Over the years, the company worked to combat misinformation related to elections, anti-vaccination claims, violence, and hate speech through safety teams, automated systems, and an independent Oversight Board that made tough moderation decisions.

However, Zuckerberg’s new approach mirrors that of Elon Musk, who, after acquiring X (formerly Twitter) in 2022, dismantled the platform’s fact-checking teams and replaced them with user-generated “community notes” as the primary method for addressing false claims. Following this pattern, Meta announced it would end its partnerships with third-party fact-checkers and implement community notes across all its platforms, including Facebook, Instagram, and Threads.

Kaplan expressed admiration for Musk’s role in reshaping the debate on free expression, saying, “Elon has played an incredibly important role in moving the debate and getting people refocused on free expression, and that’s been really constructive and productive.”

Meta also plans to refine its automated systems for detecting policy violations. These systems have been criticized for removing content that did not violate any policies, and the company now intends to focus them solely on illegal or “high-severity” violations, such as terrorism, child exploitation, drug trafficking, fraud, and scams. Lower-severity issues will be evaluated only after users report them.

Zuckerberg explained that Meta’s previous systems had removed too much content in error, noting that even a small mistake rate, such as 1 percent, could mean millions of posts wrongfully taken down. He admitted, “We’ve reached a point where it’s just too many mistakes and too much censorship.”

At the same time, Zuckerberg conceded that the tradeoff means more harmful content will be visible on the platform, explaining, “It means that we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”

As part of the broader policy changes, Meta will also lift content restrictions on certain sensitive topics, including immigration and gender identity, and reduce the limits on how much political content users can see in their feeds. Additionally, Meta will relocate its trust and safety teams, responsible for content policies, from California to Texas and other U.S. locations. Zuckerberg suggested that this move would help the company build trust, as the teams will operate in regions where there is less concern about political bias.

These changes reflect a significant shift in Meta's approach to content moderation, raising questions about the future of online discourse and the platform's role in managing misinformation and harmful content.