Meta Policy Changes: Understanding Fact-Checking
Image created by the author using DALL-E
Yesterday, Meta’s CEO and Founder, Mark Zuckerberg, announced massive rollbacks to the integrity programs that keep users safe across the three most popular applications in the U.S.: Facebook, Instagram, and Threads. The announcement came as a shock to many, but for some—like myself, who worked at Meta on both the Operational (Regulatory Escalations) and Governance (Global Governance of Meta Applications) sides—this was little surprise. The timing, announced less than a day after the 2024 election results were certified and a Republican Washington insider and Meta policy veteran was appointed as the President of Global Public Policy, was all a bit uncanny.
Yes, all of these things happened within a short window, but the policy changes Mark announced on Tuesday were years in the making. A few thoughts:
Companies are not democracies. As someone who worked in the U.S. Senate, it’s easy to forget that, at the end of the day, companies focus on revenue, while nation-states focus on the well-being of their constituents; for a company like Meta, the equivalent constituents are the consumers of its products. Content moderation is necessary for high-priority risk areas, but specific policies around hate speech, bullying, and harassment of protected classes are treated as “nice to have” unless they could lead to real-world harm.
While these changes are only going into effect for three applications, it’s worth noting that WhatsApp, MetaAI, and Meta hardware (Ray-Bans and Oculus headsets) have potential for policy changes down the line that may or may not be covered by current global regulation. In the U.S., there’s still the Executive Order on AI, and in the EU, there’s the EU AI Act, along with regulations in a handful of other non-Western countries. Everyone should keep an eye on the policy changes that may subtly appear over the upcoming months.
What’s most important is the signal these policy rollbacks send to other technology companies like Alphabet, TikTok, BlueSky, Snapchat, Discord, OpenAI, and emerging smaller platforms that people have integrated into their daily lives. More importantly, how will civil society, academia, and liberal democracies (and, in some cases, authoritarian ones that have strict privacy laws) circle their wagons to ensure the global community is safe from online harms?
Let’s break down what these policy changes mean for the U.S. and for countries that already have regulation in place to protect users—unlike in the U.S., where consumer protection laws don’t cover social media platforms.
Today, I’ll cover the update that’s at the top of everyone’s mind: the dissolving of the 3PFC for the U.S. market.
#1. Replace fact-checkers with Community Notes, starting in the U.S. (aka disbanding of the Third-Party Fact-Checking Program in the U.S., or 3PFC)
What is actually happening.
The U.S. leg of the 3PFC is being deprecated starting this week. Fact-checking organizations, including PolitiFact, have stated that they will continue to do the important fact-checking work, but the working relationship they have with Meta will no longer continue. As of today, this only impacts U.S. partners, not global partners.
What everyone is concerned about.
That this is the end of Meta taking mis- and disinformation seriously. Which is fair—this is a turning point. But here’s the thing: the program has experienced an uphill battle since its launch. Additionally, the U.S. has one of the better-funded fact-checking programs; what’s at risk is the deprecation of the entire global program and the rollback of fact-checking interventions once content is live on the platform in countries outside the EU (which has misinformation guardrails baked into the DSA).
Interventions and protections.
The 3PFC program was good but flawed. Community Notes is very flawed and not a place where real fact-checking exists. There isn’t a binary answer or a one-step solution—misinformation needs to be tackled on two ends of the spectrum:
A: Before it’s even posted online, handled by high-probability classifiers trained on data collected from a number of sources.
B: After it’s posted. If content isn’t automatically flagged and prevented from being uploaded because it didn’t reach the probability threshold, that’s where third-party fact-checkers come in. Flagged content gets routed to a system where fact-checkers can leave verified information, which then appears as part of an interstitial overlay on top of the content, giving users the option to learn more. This is crucial for people to understand and see that content is false and draw their own conclusions.
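The two-stage flow described above can be sketched as simple routing logic. This is a minimal illustrative sketch, not Meta’s actual system: the threshold values, class names, and routing labels are all assumptions I’ve made up for clarity, and the real classifier scoring is not public.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- real platform values are not public.
BLOCK_THRESHOLD = 0.95   # near-certain misinformation: stop it pre-upload (stage A)
REVIEW_THRESHOLD = 0.60  # uncertain: let it post, but route to fact-checkers (stage B)

@dataclass
class Post:
    text: str
    misinfo_score: float  # probability produced by an upstream classifier

def route(post: Post) -> str:
    """Triage a post through the two stages described above."""
    if post.misinfo_score >= BLOCK_THRESHOLD:
        # Stage A: high-probability classifier blocks content before it goes live.
        return "blocked"
    if post.misinfo_score >= REVIEW_THRESHOLD:
        # Stage B: content is live, but flagged and queued for third-party
        # fact-checkers, whose verdict would feed an interstitial overlay.
        return "queued_for_review"
    # Below both thresholds: published with no intervention.
    return "published"
```

The point of the sketch is the gap it exposes: anything scoring between the two thresholds depends entirely on the fact-checker queue, which is exactly the layer the 3PFC shutdown removes.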
In a world where there is no more 3PFC and we rely on Community Notes instead, it will be far harder to distinguish factual information from content posted by fake users or bots seeking to sow and amplify misinformation. There’s too much left to chance, and without proper content moderation, this is how false narratives spread. Unless a user follows a fact-checking account or that information is amplified by trustworthy sources, misinformation will stay online and false, dangerous narratives can and will spread. This is not a freedom of speech issue—this is a modern warfare issue that needs to be attacked on all sides. Without transparency into how information is gathered and pulled into the classifier system, or into how probability scoring works, we are in for a wild ride. Misinformation warfare is the battlefield we’re currently in.
Like everyone else, I’m curious to see what the Oversight Board intends to do with this new development. Are they going to step in and provide fact-checking services for the U.S.? Will they advocate for the program to be strengthened in high-risk countries? Will the human rights defenders on the Board push Meta to invest in better solutions for fact-checking and misinformation handling across all issue areas—including elections, health care, terrorism, and misinformation around protected classes?
Next week, I’ll distill what global lobbying by the Trump Administration on behalf of companies like Meta looks like, and the series of levers related to trade that can be pulled as the administration works to roll back “government overreach on censorship” of American companies. This is a complex dance that goes beyond social media platforms and includes AI companies as well.
[Originally Published on January 8, 2025 on the Council on Foreign Relations Member Wall]