What if? A hypothetical history of censorship.

Marei
Oct 1, 2023


Hindsight is Plenty

Abstract

This paper analyzes hypothetical scenarios in which major historical events may have been subject to contemporary Big Tech content moderation frameworks. The findings indicate a potential for substantially altered public understanding and social outcomes.

Introduction

A handful of Big Tech firms, including Google, Facebook, X and others, have gained unprecedented influence over public discourse and the flow of information through their social media platforms and search engines. These companies wield tremendous power through their ability to amplify, suppress, and remove content based on opaque moderation policies. This private governance of online speech and information warrants closer scrutiny for its broad implications for democratic values and social progress.

This article will imagine hypothetical case studies of how Big Tech content moderation could have impacted public debates and outcomes around major events across recent history, including the Iraq War, Watergate, the Civil Rights Movement, Occupy Wall Street, the Covid-19 pandemic, the tobacco industry’s deception around smoking risks, and anti-war activism.

The analysis indicates risks of entrenching establishment voices while suppressing dissent, narrowing the ‘Overton window’ of acceptable discourse. Excessive private controls over public debates contradict principles of democratic pluralism and risk muting radical ideas or marginalized groups in the name of order and civility.

What if this level of Big Tech control had existed during the Iraq War, or other recent events where the authorities were proven wrong?

Iraq War

If today’s content moderation systems had existed then, skepticism and anti-war activism surrounding the Iraq War could have been severely suppressed:

  • News articles and opinion pieces from major publications questioning the evidence for WMDs in Iraq may have been flagged as “misinformation” and demoted in search results and social media feeds. Their reach would have been dramatically curtailed.
  • Activist groups organizing anti-war protests could have had their pages/accounts suspended for “coordinating harm.” Without social media, their ability to get the word out and rally people would have been crippled.
  • Prominent voices like Hans Blix, who doubted the Bush administration’s claims, may have been banned from major platforms for spreading “disinformation.” Their dissenting perspective would have been largely silenced in the public discourse.
  • Whistleblowers may have thought twice about leaking sensitive documents that contradicted the WMD claims, for fear of reprisal under harsh anti-leak policies. Potentially important information exposing the flimsiness of the case for war may never have come to light.
  • Hashtags like #NoWarInIraq, #NotInOurName could have been banned by platforms for being “harmful” or undermining military efforts. Anti-war sentiment would have lacked amplification.
  • Without an open internet and social media, the massive worldwide protests of February 15, 2003 may never have come together, and the global demonstration of anti-war sentiment may never have materialized.
  • With diminished outlets for dissent and coordination, the anti-war movement would have struggled to gain momentum. Their counter-narrative would have been largely suffocated, making opposition seem fringe and isolated.
  • As a result, the Bush administration may have faced less critical domestic pressure in the lead-up to war, enabling them to move forward more easily. The war may have continued for many more years before Americans turned decisively against it. The human and economic costs could have been far greater.

Watergate Scandal

  • The Washington Post’s articles detailing the Watergate break-ins and cover-up by Nixon administration officials were based heavily on unnamed sources. Under modern “unverified claims” policies, publishing these explosive allegations from anonymous sources could have gotten the Post’s accounts removed or buried by platforms.
  • Woodward and Bernstein’s Watergate reporting revealed an intricate web of criminality and abuse of power in the Nixon White House. But much of it would have sounded like outlandish, unproven conspiracy theories at the time. Facebook’s fact-checkers may have labeled the allegations “missing context” or flagged them as outright misinformation.
  • When the Post first implicated the Committee for the Re-Election of the President (CRP) and connected Liddy and McCord to the Watergate break-in, these were bombshell claims based on unverified insider tips. Today, surfacing such unsubstantiated accusations about political operatives could be deemed harassment or defamation under content policies.
  • As the Post uncovered more smoking guns, like Nixon’s secret White House tapes, publishing the content of those tapes based on unnamed sources could have run afoul of prohibitions on publishing hacked or illegally obtained material.
  • Other papers amplifying the Post’s reporting with articles of their own may have had their reach throttled for spreading unverified claims. Discussion and dissemination of each new Watergate revelation could be stunted under the guise of stopping misinformation.
  • With its coverage muffled online, the Post’s Watergate revelations may have failed to gain traction. Nixon could have outmaneuvered investigations until after his re-election or the end of his second term, avoiding resignation. The full truth of Watergate may only have emerged years later, altering politics and oversight of the presidency.

Civil Rights Movement

  • Photos and videos documenting nonviolent sit-ins and protest marches could have been removed from platforms under “law and order” policies banning the promotion of illegal activity.
  • Speeches and essays by civil rights leaders condemning police brutality and advocating nonviolent civil disobedience may have been taken down for “inciting” unlawful conduct.
  • Groups like the Student Nonviolent Coordinating Committee (SNCC) and Southern Christian Leadership Conference (SCLC) may have had their online presences revoked for “coordinating” illegal protests. This could have hindered organizing and mobilization.
  • Promotion of the Freedom Rides, sit-ins, and other direct actions could have been algorithmically demoted as “dangerous content” for risking violence from opponents.
  • Hashtags used to highlight racial injustice and build momentum like #WeShallOvercome, #IHaveADream may have been banned for fomenting social discord.
  • With activist voices and on-the-ground reports restricted online, the mainstream media’s cautious coverage painting civil rights leaders as rabble-rousers could have dominated, undermining a sense of urgent need for federal action.
  • Politicians counseling patience, such as Southern senators, may have benefited from muted coverage of the shocking acts of racist violence that catalyzed national outrage and pressure for civil rights legislation.
  • Lack of online coordination tools combined with fears of platform reprisal may have significantly hampered grassroots activism, delaying landmark achievements like the Civil Rights Act.

Occupy Wall Street

  • Facebook pages and Twitter hashtags used to organize Occupy protests in cities across the country could have been removed for “coordinating harm” or violating platform rules. This would have hindered logistics and turnout.
  • Posts and tweets sharing photos and videos of large-scale encampments like Zuccotti Park may have been algorithmically downranked for showing “dangerous” unrest. This could have minimized the appearance of momentum and nationwide participation.
  • Accounts belonging to Occupy activists may have been suspended for inciting offline disruptions like sit-ins that were deemed illegal at the time. Key organizers could have lost their ability to reach followers.
  • Viral posts spotlighting income inequality statistics and critiques of Wall Street excess may have been flagged under fact-checking partnerships as divisive misinformation lacking context.
  • With Occupy’s decentralized online presence suppressed, the mainstream media’s cautious, negative portrayal of protesters as disorganized and lacking clear demands may have dominated, delegitimizing the movement as fringe and isolated.
  • Politicians and business leaders counseling that the activists’ message of inequality was exaggerated could have benefited from muted counter-narratives and on-the-ground reports. Pressure for Wall Street reform may have stalled.
  • Lack of Occupy visibility online could have hampered its ability to focus national attention on issues of financial industry power, accountability, and anti-corruption measures. The political narrative may have stayed centered on austerity instead.

Big Tobacco

  • Personal accounts from lung cancer patients and their families, recounting lives devastated by smoking, may have been deleted or demoted for making “harmful unverified medical claims” and promoting stigma around legal products.
  • Public health agencies and cancer research groups posting factual data on smoking’s carcinogenic properties and massive death toll could have had their reach throttled on social media for posting “contested claims” harmful to a large industry.
  • Fact-checking initiatives could have been tilted toward the industry position that studies on tobacco risks were inconclusive, or that sales of legal products to consenting adults should not be restricted. Critics could have been dismissed as promoting activism over impartial analysis.
  • News rating systems like NewsGuard could have awarded high credibility scores to media outlets echoing industry views, while those critical of Big Tobacco corporations could have been accused of failing journalistic standards.
  • Doctors testifying before Congress or regulatory agencies about mounting evidence of smoking’s destructive health impacts may have had their expertise called into question across platforms for lacking “consensus.”
  • With opposing voices suppressed, it might have taken even longer for the overwhelming evidence to sway public opinion and policy against tobacco company interests, as industry propaganda faced little social media friction.

COVID-19 Pandemic

  • Social platforms did enable dangerous misinformation early on, such as hype around unproven drugs like hydroxychloroquine. However, total censorship could have been an overreaction that stifled good-faith debates.
  • Speculation around COVID originating from a lab was reflexively removed as “conspiracy,” when some experts deemed it a valid hypothesis warranting investigation. Silencing dissenting opinions can obstruct the search for truth.
  • Scientific understanding evolves. Claims once labeled “misinformation” can later prove to have merit based on new evidence. Censoring minority expert views on public health issues can thus impede progress.
  • However, not all claims are created equal. Reputable empirical evidence should still be required to elevate hypotheses to mainstream acceptability. Unfounded conspiracy theories can cause real-world harm.
  • There are rarely easy solutions in moderation. But maintaining venues for rigorous, intellectually honest debate on matters of public concern, while stemming demonstrable falsehoods that threaten health and safety, may be a judicious balance.
  • Transparency in setting speech policies, involving diverse expertise, and allowing appeals all help reduce biased overreach. So does protecting good-faith claims, even controversial ones, when they are advanced responsibly.
  • Overall, neither misinformation nor censorship offer constructive paths forward during public health crises. Committing to truth-seeking through inclusive inquiry and discussion should be the guiding light.

Thoughts

Running through several key historical case studies highlights the censorious risks inherent in allowing private tech firms to govern civic debates through opaque and arbitrary speech policies. While defending against clear harms like stalking or threats is reasonable, overbroad suppression can entrench establishment voices, narratives and interests against legitimate challenges to the status quo. Those holding cultural capital and institutional power often shape the norms that define the boundaries of ‘respectable’ discourse. Outlier perspectives from cultural, ideological and epistemic minorities then face tremendous pressures to conform in order to be heard in the modern public square.

But the marketplace of ideas requires radical voices at the margins to provide alternatives to accepted dogma. Censoring dangerous ideas percolating from below risks creating an air of infallibility around elite consensus on issues like foreign policy, policing, or economic affairs that cry out for more dissent. Imposing excessive order contradicts the chaos and conflict inherent to democratic pluralism. And privileging civility and stability over social justice subordinates the marginalized groups who most urgently need platforms to exercise their free speech, often in provocative ways that push boundaries.

Rigid gatekeeping of online discussions under ‘community standards’ defined by unaccountable corporations clashes with traditions of free expression and the democratization of knowledge enabled by the internet. User communities themselves are capable of developing norms to counter misinformation or hate where necessary. Suppressing challenging ideas only risks driving them underground while stunting social progress.

Conclusion

The hypothetical historical case studies highlight risks inherent in allowing private Big Tech firms to govern civic debates and the bounds of acceptable thought through opaque content moderation policies. These companies now wield tremendous influence over social outcomes given their unprecedented position as mediators of information flows and arbiters of online expression. Their power merits much greater public transparency and accountability.

Examining potential impacts on pivotal past events highlights how excessive private controls over discourse could distort epistemic diversity and narrow the ‘Overton window’ of permissible opinion. While some content standards are justifiable, the public interest demands maximizing toleration for free inquiry and ideological dissent from below. Private firms must not unduly engineer consent on matters of common concern. Viewing speech restrictions in historical context compels caution and restraint, on democratic principles, around contemporary content governance. The cases illustrate how stifling radical ideas or marginal voices, however unsettling, contravenes the open debate essential to a self-governing society.

Relevant Reading

Benkler, Y. (2006). The wealth of networks. Yale University Press.

Gillespie, T. (2018). Custodians of the Internet. Yale University Press.

Tufekci, Z. (2017). Twitter and tear gas. Yale University Press.

Herman, E. & Chomsky, N. (1988). Manufacturing consent. Pantheon Books.

Cohen, J. (2019). Between truth and power. Oxford University Press.

Balkin, J. (2018). Free speech is a triangle. Columbia Law Review, 118(7).

Kaye, D. (2019). Speech police. Columbia Global Reports.

Zuboff, S. (2019). The age of surveillance capitalism. Profile Books.

Noble, S. (2018). Algorithms of oppression. NYU Press.

O’Neil, C. (2016). Weapons of math destruction. Crown.

Tufekci, Z. (2015). Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency. Colorado Technology Law Journal, 13.

Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. Boczkowski, & K. Foot (Eds.), Media technologies: Essays on communication, materiality, and society. MIT Press.

Buni, C. & Chemaly, S. (2016). The secret rules of the internet. The Verge.
