Meta CEO Mark Zuckerberg’s startling announcement that his company’s Facebook and Instagram sites would drop their censorious fact-checking process scared the press and leftist academics, who would no longer be able to use pseudo-scientific concepts like “misinformation” to silence conservative debate on issues like COVID, radical Islam, Black Lives Matter, and “climate change.”
Tuesday’s edition of the PBS News Hour was a prime example, with co-anchor Geoff Bennett warning in the show’s introduction: “Facebook and Instagram end their fact-checking program, a move critics say will pave the way for a spike in misinformation.”
The actual segment started out all right.
A clip from Zuckerberg followed, with the CEO saying “we have reached a point where it’s just too many mistakes and too much censorship….”
But then Bennett was joined by guest Renee DiResta, now at Georgetown University. Not mentioned: until last year she was research manager of the Stanford Internet Observatory, an online censorship shop and one of many left-wing academic entities funded during Trump’s first term to fight online “misinformation” (at least as they defined it), part of a censorship regime the Biden administration encouraged as it pressed social media companies to squelch even true COVID-related facts.
After a clip of Donald Trump taking credit for Zuckerberg’s turnaround, Bennett asked DiResta, “So what are the downstream implications of the political motivation behind all this?”
DiResta responded with hypocrisy by omission.
Yet academic censors like DiResta (and their handmaidens in journalism) had no qualms about the Biden administration “jawboning” or “working the refs” against so-called conservative disinformation. Meta employed websites like PolitiFact that falsely claim to be nonpartisan judges of fact while strongly favoring Democrat “facts” over Republican ones. Another fact-checker formerly employed by Facebook was the even more slanted outfit Snopes.
PBS News Hour
1/7/25
7:14:16 p.m. (ET)
Geoff Bennett: Facebook and Instagram’s parent company, Meta, announced today it’s ending third-party fact-checking on its platforms, calling the decision a return to a — quote — “fundamental commitment to free expression.”
Meta’s fact-checking program was rolled out in the wake of the 2016 election. CEO Mark Zuckerberg said today the rules had become too restrictive and prone to overenforcement.
Mark Zuckerberg, CEO, Meta: We built a lot of complex systems to moderate content. But the problem with complex systems is, they make mistakes. Even if they accidentally censor just 1 percent of posts, that’s millions of people.
And we have reached a point where it’s just too many mistakes and too much censorship. The recent elections also feel like a cultural tipping point towards once again prioritizing speech. So we’re going to get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms.
Geoff Bennett: To discuss the implications of this shift, we’re joined now by Renee DiResta, associate research professor at the McCourt School of Public Policy at Georgetown University.
Thanks for being with us.
Renee DiResta, Associate Research Professor, McCourt School of Public Policy, Georgetown University: Thanks for having me.
Geoff Bennett: So, let’s set the stage.
Why did Meta initially put this fact-checking program into place? And was it effective?
Renee DiResta: Yes.
So, it was launched in December 2016 in response to widespread criticism of fake news that had gone viral quite a bit during the 2016 presidential campaign. And the platform faced a lot of backlash in response to that. So the fact-checking initiative was launched as part of Facebook’s efforts to kind of restore its brand, restore trust.
It partnered with third-party fact-checking organizations that were certified by the International Fact-Checking Network. So, it went to existing organizations that were already quite reputable. And it worked to incorporate added context. They came up with a moderation framework called remove, reduce, inform.
Remove is when content is taken down. Reduce is when it’s reduced in distribution, it’s not pushed out to as many people. And the fact-checking piece was a really big part of inform, which tried to add a little bit more information to the stories that were going viral or articles that people were seeing in their news feed.
Geoff Bennett: Meta says it’s moving to a community notes practice, similar to what we see now on Elon Musk’s X, formerly Twitter.
What has the impact on that platform been? Can community notes be as effective as fact-checking?
Renee DiResta: It’s a little bit mixed. So it’s very hard to know what the impact of community notes is on X. It’s hard for us on the outside as academics to see it, because a lot of data access and transparency has been reduced.

I think that community notes is a great way to restore legitimacy to content moderation, but it doesn’t necessarily do the job that fact-checking did in quite the same way, so it’s better to have it as a complement.

And that’s because it’s often slow. It addresses just a very small fraction of the content. It really relies on people wanting to sit there and feeling like they should go and perform that almost, like, platform community service. On X, oftentimes, that means that you will see it happen on highly contentious political content, where people feel some sort of emotional response.

They want to go correct the record about their guy, that kind of thing. And so you see efforts to get community notes on that type of political content. But on the flip side, it’s platforms asking users to do work for them, and it is not necessarily going to catch all of the kind of topical coverage that a professional journalist or fact-checker might have more access to: the ability to call somebody up and ask them if something is true, the ability to send somebody into a conflict zone to see if something is real.
So it should be a complementary process, but because this has become so politicized, we’re seeing it broached as a replacement, rather than a complement.
Geoff Bennett: Let’s talk more about the political dimension here, because Zuckerberg, in his video statement, as we saw, framed this policy shift as a reaction to Republicans’ November victory. We heard him call it a cultural tipping point towards once again prioritizing speech.

We know he’s visited Mar-a-Lago, he’s dined with President-elect Trump, he’s donated to the Trump inaugural fund. He just named Dana White, the CEO of UFC and a longtime Trump ally, to Meta’s board.
And here’s what the president-elect, Donald Trump, said today when he was asked about this shift by Meta.
Donald Trump, Former President of the United States (R) and Current U.S. President-Elect: Honestly, I think they have come a long way, Meta, Facebook. I think they have come a long way.
Question: Do you think he’s directly responding to the threats that you have made to him in the past?
Donald Trump: Probably. Yes, probably.
Geoff Bennett: So what are the downstream implications of the political motivation behind all this?
Renee DiResta: It is probably in response to the threats that — and we call that jawboning, and we actually should see that as bad. We should see it as bad when the left does it and we should see it as bad when the right does it.
We should not want to see platforms that are supposed to be providing a service of value to their users, right, that are supposed to be facilitating a speech environment that protects the user, that creates a good experience for the user. That’s what the platform should be doing. Working the referees, trying to make the people deciding the calls advantage your team, that’s what’s actually happening here, right?

It is capitulation to ref-working. If Meta had come out and said, we are launching this fantastic new community notes initiative, that would have been absolutely great. And that would have been just a routine feature-set policy shift from a large social media platform that does that constantly.
But it was the tone of the communication. It was the specific language used in it that was very transparently saying, we are doing this in response to a shift in the political winds.
And I just don’t think that we should want to see our social media platforms quite so buffeted by political winds.
Geoff Bennett: Renee DiResta, thanks for your insights. We appreciate it.
Renee DiResta: Thank you.