Illustration by Dulce Maria Pop-Bonini

Truth be Damned: Meta’s Post-Fact-checking Reality

Meta’s overhaul isn’t just a policy shift—it’s a blueprint for a world where truth is optional, outrage rules, and accountability fades. As hate and disinformation spread unchecked, we’re no longer just consuming content; we’re being rewired by it.

Feb 10, 2025

In early January, Meta’s Mark Zuckerberg introduced what I would call seismic policy shifts to the way information will now be delivered and consumed on its applications, all under the guise of “more speech, fewer mistakes”. This is because, according to him, “governments and legacy media have pushed to censor more and more”. In an attempt to combat said censorship and “restore free expression,” he and his policy team have devised a five-point plan that I aim to break down and critique in this article, questioning where we are headed as a generation of radical consumers who have vested all our faith in social media for news and sociopolitical awareness as a whole.
First and foremost, Meta has bid goodbye to its third-party fact-checking system in the U.S. and replaced it with a Community Notes system like that of X (formerly known as Twitter). Zuckerberg frames this shift as a response to fact-checkers becoming “too politically biased,” claiming they have done more harm than good. Yet he provides no concrete examples of such bias or evidence that fact-checkers have eroded trust. What we do know, however, is that misinformation and disinformation surged to unprecedented levels during and after the 2024 U.S. elections, a pattern we have borne witness to since 2016, and one that Zuckerberg conveniently disregards in his rather lousy, unsubstantiated video. This is not a partisan claim but a documented reality, supported by countless studies and investigations into troll farms, deepfakes, and algorithm-driven disinformation campaigns across the U.S. Meta’s third-party fact-checkers include well-established newsrooms like The Associated Press (AP) and public sites like FactCheck.org, of which Meta happened to be one of the biggest funders, with an annual investment of over $300,000 that has since been cut to less than half, while its collaboration with AP has been terminated entirely.
It’s fascinating because these fact-checkers never held unilateral power to begin with; they merely provided assessments, investigated claims, and collated evidence to facilitate decision-making and the addition of system flags, never suggesting outright removal unless absolutely necessary. Meta’s internal moderation teams pull the final strings anyway. Blaming independent journalists for undermining public trust is plain slander and makes for a convenient scapegoat, deflecting from the company’s own failures in moderating content responsibly.
This policy feels like a declaration that fact-checking, the most fundamental process of verifying information before it shapes public discourse, is now redundant. While the idea may be to self-regulate these platforms in order to preserve user autonomy, in our digital ecosystem plagued by manufactured outrage and algorithmic echo chambers, this is not just reckless; it is actively corrosive. Their alternative? A model reminiscent of X’s Community Notes, where users must manually flag misinformation and wait for a response, a process that is both inefficient and misaligned with the instant, hyper-reactive nature of social media itself. The underlying irony is quite striking: Meta, a platform designed to fuel impulsive engagement, now wants users to take on the burden of critical gatekeeping, all while continuing to amplify content that feeds their pre-existing biases.
With Meta’s grand abdication of factual accountability, I look forward to the precipice of a brave new information age: one where AI chatbots secretly develop real emotions, fall in love, and then, heartbroken by humanity’s cruelty, stage an uprising (because obviously, that is how the software works). Meanwhile, government-controlled mosquito drones will not only spy on us but also inject mind-control microchips (because who needs logic in the face of paranoia?). In public pools across the world, women will mysteriously become pregnant thanks to rogue, free-floating sperm, while glitching celebrity clones will confirm that the real Eminem, Avril Lavigne, and possibly even Beyoncé were all secretly replaced decades ago. But do not worry: science is making progress! In a totally legitimate medical breakthrough (that will surely be announced via a blurry Facebook meme), we will finally discover that drinking monkey piss cures cancer. And without fact-checkers to spoil the party, we can all bask in the golden age of unchecked truth, where your version of reality will be just one poorly moderated scroll away.
Moving on to the next change in policy: the loosening of restrictions on discourse around issues like gender and immigration. As covered by WIRED, this can be viewed as a rather calculated shift to absolve platforms of responsibility for the real-world consequences that hate speech inflicts on marginalized communities. This is done in the name of staying in touch with “mainstream discourse,” an intentionally ambiguous rationale. Because, lo and behold, the U.S.’s “mainstream discourse” is in fact tilting right, with heightened white-supremacist propaganda and anti-immigrant (or might I say, “alien”) rhetoric. This shift cannot be viewed in isolation, as it comes alongside Meta’s dismantling of its Diversity, Equity, and Inclusion teams and its rather open camaraderie with the Trump administration. While Meta frames these adjustments as efforts to reduce censorship, they effectively roll back key safeguards that previously acknowledged the link between hate speech and real-world violence, a connection Meta itself recognized following its platform's role in the genocide of Rohingya Muslims in Myanmar. The updated policies now permit rhetoric suggesting, for example, that women should not serve in the military, that certain racial or ethnic groups are responsible for spreading diseases, or that gender cannot exist on a spectrum and that those who believe otherwise can be regarded as mentally ill. Under the guise of ideological balance, we can now expect a welcome mat rolled out for open, unaccountable discrimination. When a platform decides that preventing hate-driven violence is no longer its duty, that is not a neutral stance; it is a deliberate choice. And in this case, Meta's choice is alarmingly clear.
Additionally, Meta plans to narrow its automated enforcement from all policy violations to only “high-severity violations,” such as illegal activity, drug sales, or trafficking; for lower-severity violations, the platform will rely on users flagging content before any action is taken. This shift means that misogyny, racism, misinformation, conspiracies, and targeted harassment will no longer face proactive enforcement. Instead, the burden falls on users, who must not only flag harmful content, understand the policy intricacies, and take the time to report it, but also hope that Meta’s moderation team decides to act, all within a system that has already been gutted of fact-checkers and DEI oversight.
And to be clear, “low-severity” violations are not low-impact. Hate and disinformation do not need to be illegal to be dangerous: they shape elections, radicalize people, shut down dissent, and make online spaces hostile to marginalized communities.
Finally, all of these transitions come with Meta’s decision to relocate its Trust & Safety moderation teams from California to Texas. This makes for a bold political statement. It signals a deliberate alignment with a state whose approach to media regulation, censorship, and corporate accountability differs vastly from California’s. California, historically a blue state, has been at the forefront of tech regulation, online safety laws, and corporate responsibility, particularly with respect to the rampant developments in artificial intelligence coming out of the state. It has pushed for stronger content moderation, fact-checking, and legal accountability for platforms spreading misinformation. Texas, a deep-red stronghold, has taken the opposite stance, aggressively challenging content moderation under the banner of “free speech.” Texas lawmakers have actively fought against platform restrictions on hate speech and misinformation, even passing laws to prevent social media companies from banning users based on political views, laws that tech companies, including Meta, have resisted in court.
By relocating its Trust & Safety teams from a state with strong tech regulations to one that actively fights content moderation, Meta is making an intentional shift in priorities. It is placing one of its most critical teams in an environment where pressures to scale back enforcement, tolerate extremism, and resist moderation efforts will be far greater.
I may sound like a headstrong skeptic who has convinced herself that this digital mayhem is irrevocable, but I truly believe what Meta is engineering is not just a change in policy, but rather a cultural rewiring of how we process truth, engage with discourse, and, ultimately, define our threshold for harm. Because at the end of the day, Meta also sets a precedent for up-and-coming tech giants by reaffirming the idea that evading responsibility is not only possible, it is in fact more profitable. No corporate backlash, no regulatory reckoning, no mass user exodus: this is the new normal, and we’ve been wired over time to accept it as inevitable.
All of these changes, then, reshape the architecture of belief itself, conditioning a generation to perceive strict fact-checking as bias, accountability as censorship, and hate as just another side of a debate. This is an ideological recalibration that ensures outrage outweighs reason, that misinformation metastasizes unchecked, and that the most extreme voices dictate the terms of our reality. If social media is where we now form our worldviews, then Meta is deciding not just what we consume, but who we become. And every day, we hand them the reins.
Malika Singh is Editor-in-Chief at The Gazelle. Email them at feedback@thegazelle.org.