Meta is consigning fact-checkers to its past.
Joel Kaplan, Meta's chief global affairs officer, announced Tuesday in a blog post that the company will end its current third-party fact-checking program in the United States. In its place, the company will launch a Community Notes program.
Kaplan said Meta will also address the "mission creep" that has made the rules governing its platforms too restrictive and prone to over-enforcement.
"We're getting rid of a number of restrictions on topics like immigration, sexual orientation, and gender identity that are frequently discussed in political discourse," he wrote. "It is not right that certain things can be said on television or on the floor of Congress but not on our platforms."
Meta will also be changing the automated scanning systems that check its platforms for policy violations. "This has resulted in too many mistakes and too much content being censored that shouldn't have been," Kaplan wrote.
Going forward, those systems will focus on serious and illegal violations, like terrorism, child exploitation, fraud, and scams. For less severe policy violations, action will be taken only after someone reports an issue.
Meta is also making it harder to remove content from its platforms by requiring multiple reviewers to reach a determination before anything is taken down, and it will allow users to see more civic content — posts about elections, politics, or social issues — should they desire it.
Censorship Tool
Kaplan explained that Meta launched its independent fact-checking program in 2016 because it did not want to be the arbiter of truth, so it handed responsibility for fact-checking content to independent organizations.
The program's goal was to give the public more information about the online content they are exposed to, especially viral hoaxes, so they could judge for themselves what they read and saw.
"That is not how things played out, especially in the United States," he continued. "Experts, like everyone else, have their own biases, and this showed up in the choices some made about what to fact-check and how."
"Over time, we ended up with too much fact-checked content that the public would understand to be legitimate political speech and debate," he said. "Our system then attached real consequences in the form of intrusive labels and reduced distribution." Too often, a program intended to inform became a tool of censorship.
David Inserra, a fellow for free expression and technology at the Cato Institute, a Washington, D.C.-based think tank, was bothered by the program's selection bias. "The only people who joined as fact-checkers were those who wanted to moderate content," he told TechNewsWorld. "People who wanted to let users make their own content decisions didn't become fact-checkers."
Darian Shimy is the CEO and founder of FutureFund, a fundraising platform for K-12 schools and PTAs in Pleasanton, Calif.
He told TechNewsWorld that while fact-checking adds a layer of accountability, it is too slow and inconsistent to keep pace with viral misinformation. "After talking to several people in my circle and doing some internal research, I found that many believed relying on third-party fact-checkers could create a perception of bias, which did not always help build user trust," he said.
‘Not a Victory for Free Speech’
Irina Raicu, director of internet ethics at Santa Clara University's Markkula Center for Applied Ethics, noted that Facebook's fact-checking process was allowing a lot of misinformation to spread.
"Part of the problem was the automation of content moderation," she told TechNewsWorld. "The algorithmic tools were blunt and did not take into account the subtleties of language or images. The problem was even worse in posts written in languages other than English."
Paul Benigeri is co-founder and CEO of Archive, a New York City-based company that develops software for automating e-commerce workflows.
"Fact-checking was more of a PR move," he told TechNewsWorld. "Sometimes it worked, but it was never enough to catch all the misleading posts."
Tal-Or Cohen Montemayor, of CyberWell, a San Francisco-headquartered organization that fights antisemitism online, questioned Meta's decision to discontinue its fact-checking.
"A reduction in accountability and investment on the part of platforms is not the answer," she told TechNewsWorld.
"This is not a victory for free speech," she said. "It is an exchange of the bias of a small, contained group of fact-checkers for bias at a larger scale through Community Notes. The only way to prevent censorship or data manipulation would be for governments to impose legal requirements on Big Tech and enforce transparency and social media reforms."
A Flawed Community Solution
Meta's Community Notes will be modeled on the fact-checking feature deployed on X, formerly Twitter. "The good part of the community-based approach is that it addresses the problem of scale," said Cody Buntain, an assistant professor at the University of Maryland's College of Information. "It allows a lot more people to get involved in this process, and it adds context."
"The problem with community notes is that, while they can work at scale for the occasional piece of information or the occasional viral story, in general, the system is not fast enough and gets overwhelmed by major new events," he explained.
"We saw this after the terror attacks in Israel in October 2023," he added. Twitter, he said, was swamped by the misinformation surrounding the event.
"When the platforms say, 'We're going to wash our hands of this and let the community deal with it,' that becomes problematic in those moments when the only ones who can really deal with massive influxes of high-velocity, low-quality information are the platforms themselves," he said. "Community notes aren't set up to handle those issues. And those are the times when you most want high-quality, accurate information."
Karen Kovacs North is a clinical professor at the Annenberg School for Communication and Journalism at the University of Southern California.
“The people who will write notes on something tend to be polarized or passionate,” she said in an interview with TechNewsWorld. “The middle-of-the-roaders don’t take time to put their comments down on a story or a piece of content.”
Currying Trump’s Favor
Vincent Raynauld, an assistant professor in the Department of Communication Studies at Emerson College, noted that while community moderation sounds great in theory, it has some issues. He told TechNewsWorld that even though content may be flagged as misleading or disinformation, it remains available.
Even if people read the community notes, he said, the content can still shape their attitudes, knowledge, and behavior.
In conjunction with Kaplan's announcement, Meta released a video of CEO Mark Zuckerberg lauding the company's latest moves. "We are going to get back to our roots, focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms," he said.
"Zuckerberg's announcement has everything to do with currying favor with Donald Trump and nothing to do with making Meta's platforms better," asserted Dan Kennedy, a professor of journalism at Northeastern University in Boston.
He told TechNewsWorld that Zuckerberg was once concerned about his products being misused to spread dangerous misinformation. "That was about Covid and the Jan. 6 insurrection," he said. "Now Trump is returning to office, and Elon Musk, one of Zuckerberg's rivals, is running wild with Trump's indulgence. So Zuckerberg just gets with the program."
He added that while no system of fact-checking or moderation is perfect, if Zuckerberg cared about it, he would work to improve it rather than get rid of it entirely.
Musk, a Trendsetter
There is some irony in Meta's latest move, according to Damian Rollison, director of marketing for SOCi, a co-marketing cloud platform headquartered in San Diego. "I think it's safe to say that no one predicted Elon Musk's chaotic takeover of Twitter would become a pattern other tech platforms would adopt, but here we are," he said in an interview with TechNewsWorld.
"We can now see in retrospect that Musk established a new standard for a more conservative approach to loosening up online content moderation, one that Meta has now embraced ahead of the incoming Trump administration," he said.
"What this means in practical terms is that Facebook users will see an increase in political speech and controversial posts," he continued.
He added that the change could make the platform less appealing to advertisers, much as it has on Musk's X, where ad revenue has reportedly dropped by half. "It could also be a sign that Facebook is settling into a role as a social network for older, more conservative users, ceding Gen Z to TikTok, with Instagram occupying a middle ground between them."