The Daily Tar Heel
Tuesday, Jan. 14, 2025

Column: Meta indicates the future of online accountability is bleak

Facebook CEO Mark Zuckerberg speaks about Facebook's News feature at the Paley Center for Media on October 25, 2019, in New York. Photo courtesy of Drew Angerer/Getty Images/TNS.

In the wake of Donald Trump’s presidential victory in 2016, Meta (then Facebook) introduced a sweeping fact-checking program designed to curb the spread of misinformation. Just over four years later, following the Jan. 6 Capitol riot, Meta took an even more decisive stance, banning Trump from its platforms, a signal that it was willing to prioritize safety and truth over unrestrained free speech.

Fast forward to 2024, and as Trump reclaims the presidency, Meta has dismantled its professional fact-checking program, opting for a more hands-off, community-driven approach. The timing is striking for what it reveals about the company’s guiding principles — or lack thereof. Once committed to combating misinformation, Meta now appears to be aligning its policies with political tides rather than consistent standards. This shift raises troubling questions about the role of tech giants in shaping public discourse and their willingness to prioritize power over accountability.

Trump’s rise to the presidency in 2016 marked a pivotal moment for social media accountability. Platforms like Facebook faced intense scrutiny for amplifying misinformation, with fabricated stories outperforming legitimate journalism in engagement during the final three months before the election. Meta’s introduction of a fact-checking program positioned the company as a defender of truth, a first step toward acknowledging its sizable influence on public discourse.

The Capitol riot further cemented Meta’s role in regulating online behavior. The platform’s decision to ban Trump, citing concerns over incitement and harm, signaled a willingness to enforce strict moderation policies to uphold safety and democratic principles. The message was clear: tech companies could intervene when online speech posed real-world dangers, protecting users and making the internet safer.

Yet in 2024, as Trump returns to power, Meta’s approach has taken a sharp turn. The company’s new strategy, modeled after X’s Community Notes, claims to democratize moderation by empowering users to flag and evaluate misinformation. While such a system may appear to foster open discourse, it lacks the rigor, and especially the neutrality, of professional oversight, leaving it vulnerable to exploitation by partisan agendas.

The timing of Meta’s policy reversal is impossible to ignore. Trump’s re-election coincides with the rollback of the very program introduced to combat the misinformation that benefited his 2016 campaign. By relaxing its stance on content moderation, Meta aligns itself with political powers rather than upholding truth and accountability. This alignment sets a dangerous precedent: if platforms reshape their policies to suit political pressures, the internet risks becoming a battleground where truth is negotiable and dictated by power.

The rollback of fact-checking on platforms like Meta has profound implications for democracy, social media and society. For democracy, unchecked misinformation during critical moments like elections poses significant risks. False narratives distort public perception, undermine trust in electoral processes and influence outcomes, eroding the principles of informed decision-making. 

Other platforms may follow suit, triggering a “race to the bottom” in which content moderation is deprioritized to maximize engagement and avoid partisan political backlash. Misinformation fuels polarization, deepens ideological divisions and fosters an environment of distrust. As platforms fragment into echo chambers, meaningful discourse diminishes, leaving society more vulnerable to manipulation.

This trend is not just ironic — it is dangerous. By prioritizing political gain and engagement over accountability, Meta and other tech giants risk eroding public trust in digital platforms as spaces for genuine discourse. The rollback of fact-checking sends a clear message: moderation policies are no longer guided by ethical standards but instead serve to appease powerful stakeholders.

To safeguard the integrity of the internet, platforms like Meta must commit to consistent and transparent moderation policies that operate independently of political pressures. Upholding truth in an era of widespread disinformation is not just a technological challenge — it is a moral imperative. 

@dthopinion | opinion@dailytarheel.com