The First Amendment was a dead letter for much of American history. Unfortunately, there is reason to fear it is entering a new period of political irrelevance. We live in a golden age of efforts by governments and other actors to control speech, discredit and harass the press, and manipulate public debate. Yet as these efforts mount, and the expressive environment deteriorates, the First Amendment has been confined to a narrow and frequently irrelevant role. Hence the question—when it comes to political speech in the twenty-first century, is the First Amendment obsolete?

The most important change in the expressive environment can be boiled down to one idea: it is no longer speech itself that is scarce, but the attention of listeners.  Emerging threats to public discourse take advantage of this change. As Zeynep Tufekci puts it, “censorship during the Internet era does not operate under the same logic [as] it did under the heyday of print or even broadcast television.” Instead of targeting speakers directly, it targets listeners or it undermines speakers indirectly. More precisely, emerging techniques of speech control depend on (1) a range of new punishments, like unleashing “troll armies” to abuse the press and other critics, and (2) “flooding” tactics (sometimes called “reverse censorship”) that distort or drown out disfavored speech through the creation and dissemination of fake news, the payment of fake commentators, and the deployment of propaganda robots. As journalist Peter Pomerantsev writes, these techniques employ “information . . . in weaponized terms, as a tool to confuse, blackmail, demoralize, subvert and paralyze.”

The First Amendment first came to life in the early twentieth century, when the main threat to the nation’s political speech environment was state suppression of dissidents. The jurisprudence of the First Amendment was shaped by that era. It presupposes an information-poor world, and it focuses exclusively on the protection of speakers from government, as if they were rare and delicate butterflies threatened by one terrible monster.

But today, speakers are more like moths—their supply is apparently endless. The massive decline in barriers to publishing makes information abundant, especially when speakers congregate on brightly lit matters of public controversy. The low costs of speaking have, paradoxically, made it easier to weaponize speech as a tool of speech control. The unfortunate truth is that cheap speech may be used to attack, harass, and silence as much as it is used to illuminate or debate. And the use of speech as a tool to suppress speech is, by its nature, something very challenging for the First Amendment to deal with. In the face of such challenges, First Amendment doctrine seems at best unprepared. It is a body of law that waits for a pamphleteer to be arrested before it will recognize a problem. Even worse, the doctrine may actually block efforts to deal with some of the problems described here.

It may sound odd to say that the First Amendment is growing obsolete when the Supreme Court has an active First Amendment docket and there remain plenty of First Amendment cases in litigation. So that I am not misunderstood, I hasten to add that the First Amendment’s protection of the press and political speakers against government suppression is hardly useless or undesirable. With the important exception of cases related to campaign finance, however, the “big” free speech decisions of the last few decades have centered not on political speech but on economic matters like the right to resell patient data or the right to register offensive trademarks. The safeguarding of political speech is widely understood to be the core function of the First Amendment. Many of the recent cases are not merely at the periphery of this project; they are off exploring some other continent. The apparent flurry of First Amendment activity masks the fact that the Amendment has become increasingly irrelevant in its area of historic concern: the coercive control of political speech.

What might be done in response is a question without an easy answer. One possibility is simply to concede that the First Amendment, built in another era, is not suited to today’s challenges. On that view, any answer must lie in the development of better social norms, the adoption of journalistic ethics by private speech platforms, or action by the political branches. Perhaps constitutional law has reached its natural limit.

On the other hand, in the 1920s Justices Oliver Wendell Holmes and Louis Brandeis and Judge Learned Hand also faced forms of speech control that did not seem to be matters of plausible constitutional concern by the standards of their time. If, following their lead, we take the bolder view that the First Amendment should be adapted to contemporary speech conditions, I suggest it may force us to confront buried doctrinal and theoretical questions, mainly related to state action, government speech, and listener interests. First, we might explore “accomplice liability” under the First Amendment. That is, we might ask when the state or political leaders may be held constitutionally responsible for encouraging private parties to punish critics. I suggest here that if the President or other officials direct, encourage, fund, or covertly command attacks on their critics by private mobs or foreign powers, the First Amendment should be implicated.

Second, given that many of the new speech control techniques target listener attention, it may be worth reassessing how the First Amendment handles efforts to promote healthy speech environments and protect listener interests. Many of the problems described here might be subject to legislative or regulatory remedies that would themselves raise First Amendment questions. For example, consider a law that would bar major speech platforms and networks from accepting money from foreign governments for materials designed to influence American elections. Or a law that broadened criminal liability for online intimidation of members of the press. Such laws would likely be challenged under the First Amendment, which suggests that the needed evolution may lie in the jurisprudence of what the Amendment permits.

These tentative suggestions and explorations should not distract from the main point of this paper, which is to demonstrate that a range of speech control techniques have arisen from which the First Amendment, at present, provides little or no protection. In the pages that follow, the paper first identifies the core assumptions that proceeded from the founding era of First Amendment jurisprudence. It then argues that many of those assumptions no longer hold, and it details a series of techniques that are used by governmental and nongovernmental actors to censor and degrade speech. The paper concludes with a few ideas about what might be done.

I. Core Assumptions of the Political First Amendment

As scholars and historians know well, but the public is sometimes surprised to learn, the First Amendment sat dormant for much of American history, despite its absolute language (“Congress shall make no law . . . .”) and its placement in the Bill of Rights. It is an American “tradition” in the sense that the Super Bowl is an American tradition—one that is relatively new, even if it has come to be defining. To understand the basic paradigm by which the law provides protection, we therefore look not to the Constitution’s founding era but to the First Amendment’s founding era, in the early 1900s.

As the story goes, the First Amendment remained inert well into the 1920s. The trigger that gave it life was the federal government’s extensive speech control program during the First World War. The program was composed of two parts. First, following the passage of new Espionage and Sedition Acts, men and women voicing opposition to the war, or holding other unpopular positions, were charged with crimes directly related to their speech. Eugene Debs, the presidential candidate for the Socialist Party, was arrested and imprisoned for a speech that questioned the war effort, in which he memorably told the crowd that they were “fit for something better than slavery and cannon fodder.”

Second, the federal government operated an extensive domestic propaganda campaign. The Committee on Public Information, created by Executive Order 2594, was a massive federal organization of over 150,000 employees. Its efforts were comprehensive and unrelenting. As George Creel put it: “The printed word, the spoken word, the motion picture, the telegraph, the cable, the wireless, the poster, the sign-board—all these were used in our campaign to make our own people and all other peoples understand the causes that compelled America to take arms.” The Committee on Public Information’s “division of news” supplied the press with content “guidelines,” “appropriate” materials, and pressure to run them. All told, the American propaganda effort reached a scope and level of organization that would be matched only by totalitarian states in the 1930s.

The story of the judiciary’s reaction to these new speech controls has by now attained the status of legend. The federal courts, including the Supreme Court, widely condoned the government’s heavy-handed arrests and other censorial practices as necessary to the war effort. But as time passed, some of the most influential jurists—including Hand, followed by Brandeis and Holmes—found themselves unable to stomach what they saw, despite the fact that each was notably reluctant to use the Constitution for anti-majoritarian purposes. Judge Hand was the only one of the three to act during wartime, but eventually the thoughts of these great judges (mostly expressed in dissent or in concurrence) became the founding jurisprudence of the modern First Amendment. To be sure, their views remained in the minority into the 1950s and 60s, but eventually the dissenting and concurring opinions would become majority holdings, and by the 1970s the “core” political protections of the First Amendment had become fully active, achieving more or less the basic structure we see today.

Left out of this well-known story is a detail quite important for our purposes. The Court’s scrutiny extended only to part of the government’s speech control program: its censorship and punishment of dissidents. Left untouched and unquestioned was the Wilson Administration’s unprecedented domestic propaganda campaign. This was not a deliberate choice, so far as I can tell (although it does seem surprising, in retrospect, that there was no serious challenge brought contesting the President’s power to create a major propaganda agency on the basis of a single executive order). Yet as a practical matter, it was probably the propaganda campaign that had the greater influence over wartime speech, and almost certainly a stronger limiting effect on the freedom of the mainstream press.

I should also add, for completeness, that the story just told only covers the First Amendment’s protection of political speech, or what we might call the story of the “political First Amendment.” Later, beginning in the 1950s, the Court also developed constitutional protections for non-political speech, such as indecency, commercial advertising, and cultural expression, in landmark cases like Roth v. United States and Virginia State Board of Pharmacy v. Virginia Citizens Consumer Council, Inc. The Court also expanded upon both who counted as a speaker and what counted as speech—trends that have continued into this decade. I mention this only to make the boundaries of this paper clear: it is focused on the kind of political and press activity that was the original concern of those who brought the First Amendment to life.

Let us return to the founding jurisprudence of the 1920s. In its time, for the conditions faced, it was as imaginative, convincing, and thoughtful as judicial writing can be. The jurisprudence of the 1920s has the unusual distinction of actually living up to the hype. Rereading the canonical opinions is an exciting and stirring experience not unlike re-watching The Godfather or Gone with the Wind. But that is also the problem. The paradigm established in the 1920s and fleshed out in the 1960s and 70s was so convincing that it is simply hard to admit that it has grown obsolete for some of the major political speech challenges of the twenty-first century.

Consider three main assumptions that the law grew up with. The first is an underlying premise of informational scarcity. For years, it was taken for granted that few people would be willing to invest in speaking publicly. Relatedly, it was assumed that with respect to any given issue—say, the war—only a limited number of important speakers could compete in the “marketplace of ideas.” The second notable assumption arises from the first: listeners are assumed not to be overwhelmed with information, but rather to have abundant time and interest to be influenced by publicly presented views. Finally, the government is assumed to be the main threat to the “marketplace of ideas” through its use of criminal law or other coercive instruments to target speakers (as opposed to listeners) with punishment or bans on publication. Without government intervention, this assumption goes, the marketplace of ideas operates well by itself.

Each of these assumptions has, one way or another, become obsolete in the twenty-first century, due to the rise in importance of attention markets and changes in communications technologies. It is to those phenomena that we now turn.

II. Attentional Scarcity and the Economics of Filter Bubbles

As early as 1971, Herbert Simon predicted the trend that drives this paper. As he wrote:

[I]n an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.

In other words, if it was once hard to speak, it is now hard to be heard. Stated differently, it is no longer speech or information that is scarce, but the attention of listeners. Unlike in the 1920s, information is abundant and speaking is easy, while listener time and attention have become highly valued commodities. It follows that one important means of controlling speech is targeting the bottleneck of listener attention, instead of speech itself.

Several major technological and economic developments over the last two decades have transformed the relative scarcity of speech and listener attention. The first is associated with the popularization of the Internet: the massive decrease since the 1990s in the costs of being an online speaker, otherwise known (in Eugene Volokh’s phrase) as “cheap speech,” or what James Gleick calls the “information flood.” Using blogs, micro-blogs, or platforms like Twitter or Facebook, just about anyone can disseminate speech into the digital public sphere. This has had several important implications. As Jack Balkin, Jeffrey Rosen, and I have argued, it gives the main platforms—which do not consider themselves to be part of the press—an extremely important role in the construction of public discourse. Cheap speech also makes it easier for mobs to harass or abuse other speakers with whom they disagree.

The second, more long-term, development has been the rise of an “attention industry”—that is, a set of actors whose business model is the resale of human attention. Traditionally, these were outfits like broadcasters or newspapers; they have been joined by the major Internet platforms and publishers, all of which seek to maximize the amount of time and attention that people spend with them. The rise and centrality of advertising to their business models has the broad effect of making listener attention ever more valuable.

The third development is the rise of the “filter bubble.” This phrase refers to the tendency of attention merchants or brokers to maximize revenue by offering audiences a highly tailored, filtered package of information designed to match their preexisting interests. Andrew Shapiro and Cass Sunstein were among the first legal writers to express concern about filter bubbles (which Sunstein nicknamed “the Daily Me”). Over the 2010s, filter bubbles became more important as they became linked to the attention-resale business model just described. A platform like Facebook primarily profits from the resale of its users’ time and attention: hence its efforts to maximize “time on site.” That, in turn, leads the company to provide content that maximizes “engagement,” which is information tailored to the interests of each user. While this sounds relatively innocuous (giving users what they want), it has the secondary effect of exercising strong control over what the listener is exposed to, and blocking content that is unlikely to engage.
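
To make the ranking mechanism concrete, consider a deliberately minimal sketch of engagement-based filtering. It assumes a simple interest-matching score; the topic labels, numbers, and scoring function are hypothetical illustrations, not any company’s actual system:

```python
# Minimal sketch of engagement-ranked feed construction (hypothetical;
# real recommender systems use far richer signals, but the selection
# step shown here is the source of the "filter bubble" concern).
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    topics: dict  # topic -> strength of association, 0.0 to 1.0

def predicted_engagement(interests: dict, post: Post) -> float:
    """Score a post by how closely it matches the user's inferred interests."""
    return sum(interests.get(topic, 0.0) * weight
               for topic, weight in post.topics.items())

def build_feed(interests: dict, candidates: list, k: int) -> list:
    """Keep the top-k posts by predicted engagement; the rest are never shown."""
    return sorted(candidates,
                  key=lambda p: predicted_engagement(interests, p),
                  reverse=True)[:k]

# Interests inferred from the user's past clicks (invented values).
user_interests = {"politics": 0.9, "pets": 0.6}

candidates = [
    Post("Partisan outrage piece", {"politics": 1.0}),
    Post("Cute dog video", {"pets": 1.0}),
    Post("Report challenging the user's views", {"dissent": 1.0}),
]

for post in build_feed(user_interests, candidates, k=2):
    print(post.title)
# The dissenting report scores 0.0 and is silently dropped: no one
# censored it, yet the listener never encounters it.
```

Even in this toy version, nothing is banned; disfavored material simply never competes for the listener’s attention.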

The combined consequence of these three developments is to make listener attention scarce and fiercely contested. As the commercial and political value of attention has grown, much of that time and attention has become subject to furious competition, so much so that even institutions like the family or traditional religious communities find it difficult to compete. Additionally, some form of celebrity, even “micro-celebrity,” has become increasingly necessary to gain any attention at all. Every hour, indeed every second, of our time has commercial actors seeking to occupy it one way or another.

Hopefully the reader (if she hasn’t already disappeared to check her Facebook page) now understands what it means to say that listener attention has become a major speech bottleneck. With so much alluring, individually tailored content being produced—and so much talent devoted to keeping people clicking away on various platforms—speakers face ever greater challenges in reaching an audience of any meaningful size or political relevance. I want to stress that these developments matter not just to the hypothetical dissident sitting in her basement, who fared no better in previous times, but to the press as well. Gone are the days when the CBS evening news might reach the nation automatically, or when whatever made the front page of the New York Times was known to all. The challenge, paradoxically, has only increased in an age when the President himself consumes so much of the media’s attention. The population is distracted and scattered, making it difficult even for those with substantial resources to reach an audience.

The revolutionary changes just described have hardly gone unnoticed by First Amendment or Internet scholars. By the mid-1990s, Volokh, Kathleen Sullivan, and others had prophesied the coming era of cheaper speech and suggested it would transform much of what the First Amendment had taken for granted. (Sullivan memorably described the reaction to the Internet’s arrival as “First Amendment manna from heaven.”) Lawrence Lessig’s brilliant “code is law” formulation suggested that much of the future of censorship and speech control would reside in the design of the network and its major applications. Rosen, Jack Goldsmith, Jonathan Zittrain, Christopher Yoo, and others, including myself, wrote of the censorial potential that lay either in the network infrastructure itself (hence “net neutrality” as a counterweight) or in the main platforms (search engines, hosting sites, and later social media). The use of infrastructure and platforms as a tool of censorship has been extensively documented overseas and now also in the United States, especially by Balkin. Finally, the democratic implications of filter bubbles and similar technologies have become their own cottage industries.

Yet despite the scholarly attention, no one quite anticipated that speech itself might become a censorial weapon, or that scarcity of attention would become such a target of flooding and similar tactics. While the major changes described here have been decades in the making, we are nonetheless still in the midst of understanding their implications for classic questions of political speech control. We can now turn to the ways these changes have rendered basic assumptions about the First Amendment outmoded.

III. Obsolete Assumptions

Much can be understood by asking what “evil” any law is designed to combat. The founding First Amendment jurisprudence presumed that the evil of government speech control would be primarily effected by criminal punishment of publishers or speakers (or the threat thereof) and by the direct censorship of disfavored presses. These were, of course, the devices of the Sedition Act of 1798 and of the Espionage and Sedition Acts of the 1910s, with variations recurring through the 1960s. On the censor’s part, the technique is intuitive: it has the effect of silencing the speaker herself, while also chilling those who might fear similar treatment. Nowadays, however, it is increasingly not the case that the relevant means of censorship is direct punishment by the state, or that the state itself is the primary censor.

The Waning of Direct Censorship

Despite its historic effectiveness, direct and overt government punishment of speakers has fallen out of favor in the twenty-first-century media environment, even in nations without strong free speech traditions. This fact is harder to see in the United States because the First Amendment itself has been read to impose a strong bar on viewpoint-based censorship. The point comes through most clearly when observing the techniques of governments that are unconstrained by similar constitutional protections. Such observation reveals that multiple governments have increasingly turned away from high-profile suppression of speech or arrest of dissidents, in favor of techniques that target listeners or enlist government accomplices.

The study of Chinese speech control provides some of the strongest evidence that a regime with full powers to directly censor nonetheless usually avoids doing so. In a fascinating ongoing study of Chinese censorship, Gary King, Jennifer Pan, and Margaret Roberts have conducted several massive investigations into the government’s evolving approach to social media and other Internet-based speech. What they have discovered is a regime less intent on stamping out forbidden content than on distraction, cheerleading, and preventing meaningful collective action. For the most part, they conclude, the state’s agents “do not censor posts criticizing the regime, its leaders, or their policies” and “do not engage on controversial issues.” The authors suggest that the reasons are as follows:

Letting an argument die, or changing the subject, usually works much better than picking an argument and getting someone’s back up . . . . [S]ince censorship alone seems to anger people, the [Chinese] program has the additional advantage of enabling the government to actively control opinion without having to censor as much as they might otherwise.

A related reason for avoiding direct speech suppression is that under conditions of attentional scarcity, high-profile government censorship or the imprisonment of speakers runs the risk of backfiring. The government is, effectively, a kind of celebrity whose actions draw disproportionate attention. And such attention may help overcome the greatest barrier facing a disfavored speaker: that of getting heard at all. In certain instances, the attention showered on an arrested speaker may even, counterintuitively, yield financial or reputational rewards—the opposite of chill.

In Internet lore, one term for this backlash potential is the Streisand effect. Named after celebrity Barbra Streisand, whose lawyer’s efforts to suppress aerial photos of her beachfront residence attracted hundreds of thousands of downloads of those photos, the term stands for the proposition that “the simple act of trying to repress something . . . online is likely to make it . . . seen by many more people.” To be sure, the concept’s general applicability might be questioned, especially with regard to viral dissemination, which is highly unpredictable and rarer than one might imagine. Even still, the possibility of creating attention for the original speaker makes direct censorship less attractive, given the proliferation of cheaper—and often more effective—alternatives.

As suggested in the introduction, those alternatives can be placed in several categories: (1) online harassment and attacks, (2) distorting and flooding, or so-called reverse censorship, and (3) control of the main speech platforms. (The third topic is included for completeness, but it has already received extensive scholarly attention.) These techniques are practiced to different degrees by different governments abroad. Yet given that they could be used by U.S. officials as well—and that they pose a major threat to the speech environment whether or not one’s own government is using them—all are worth exploring in our consideration of whether the First Amendment, in its political aspects, is obsolete.

Troll Armies

Among the newer emerging threats is the rise of abusive online mobs who seek to wear down targeted speakers and have them think twice about writing critical content, thus making political journalism less attractive. Whether such mobs are directly employed by, loosely associated with, or merely aligned with the goals of the government or particular politicians, the technique relies on the low cost of speech to punish speakers.

While there have long been Internet trolls, in the early 2000s the Russian government pioneered their use as a systematic speech control technique with the establishment of a “web brigade” (Веб-бригады), often called a “troll army.” Its methods, discovered through leaks and the undercover work of investigative reporters, range from mere encouragement of loyalists, to funding groups that pay commentators piecemeal, to employing full-time staff to engage in around-the-clock propagation of pro-government views and attacks on critics.

There are three hallmarks of the Russian approach. The first is obscuring the government’s influence. The hand of the Kremlin is not explicit; funding comes from “pro-Kremlin” groups or nonprofits, and those involved usually disclaim any formal association with the Russian state. In addition, individuals sympathetic to the cause often join as de facto volunteers. The second is the use of vicious, swarm-like attacks over email, telephone, or social media to harass and humiliate critics of Russian policies or President Putin. While the online hate mob is certainly not a Russian invention, its deployment for such political objectives seems to be a novel development. The third hallmark is its international scope. Although these techniques have mainly been used domestically in Russia, they have also been employed against political opponents elsewhere in the world, including in Ukraine and in countries like Finland, where trolls savagely attacked journalists who favored joining NATO (or questioned Russian efforts to influence that decision). Likewise, these tactics have been deployed in the United States, where paid Russian trolls targeted the 2016 presidential campaign.

Soviet-born British journalist Peter Pomerantsev, who was among the first to document the evolving Russian approach to speech control, has presented the operative questions this way:

[W]hat happens when a powerful actor systematically abuses freedom of information to spread disinformation? Uses freedom of speech in such a way as to subvert the very possibility of a debate? And does so not merely inside a country, as part of vicious election campaigns, but as part of a transnational military campaign? Since at least 2008, Kremlin military and intelligence thinkers have been talking about information not in the familiar terms of “persuasion,” “public diplomacy” or even “propaganda,” but in weaponized terms, as a tool to confuse, blackmail, demoralize, subvert and paralyze.

Over the last two years, the basic elements of the Russian approach have spread to the United States. As in Russia, journalists of all stripes have been targeted by virtual mobs when they criticize the American President or his policies. While some of the attacks appear to have originated from independent actors who borrowed Russian techniques, others have come from the (paid) Russian force itself; members of the Senate Select Committee on Intelligence have said that over 1,000 people on that force were assigned to influence the U.S. election in 2016. For certain journalists in particular, such harassment has become a regular occurrence, an ongoing assault. As David French of the National Review puts it: “The formula is simple: Criticize Trump—especially his connection to the alt-right—and the backlash will come.”

Ironically, while sometimes the President himself attacks, insults, or abuses journalists, this behavior has not necessarily had censorial consequences in itself, as it tends to draw attention to the speech in question. In fact, the improved fortunes of media outlets like CNN might serve as a demonstration that there often is a measurable Streisand effect. We are speaking here, instead, of a form of censorial punishment practiced by the government’s allies, which is much less newsworthy but potentially just as punitive, especially over the long term.

Consider, for example, French’s description of the response to his criticisms of the President:

I saw images of my daughter’s face in gas chambers, with a smiling Trump in a Nazi uniform preparing to press a button and kill her. I saw her face photo-shopped into images of slaves. She was called a “niglet” and a “dindu.” The alt-right unleashed on my wife, Nancy, claiming that she had slept with black men while I was deployed to Iraq, and that I loved to watch while she had sex with “black bucks.” People sent her pornographic images of black men having sex with white women, with someone photoshopped to look like me, watching.

A similar story is told by Rosa Brooks, a law professor and popular commentator, who wrote a column in late January of 2017 that was critical of President Trump and speculated about whether the military might decline to follow plainly irrational orders, despite the tradition of deference to the Commander-in-Chief. After the piece was picked up by Breitbart News, where it was described as a call for a military coup, Brooks experienced the following. Her account is worth quoting at length:

By mid-afternoon, I was getting death threats. “I AM GOING TO CUT YOUR HEAD OFF………BITCH!” screamed one email. Other correspondents threatened to hang me, shoot me, deport me, imprison me, and/or get me fired (this last one seemed a bit anti-climactic). The dean of Georgetown Law, where I teach, got nasty emails about me. The Georgetown University president’s office received a voicemail from someone threatening to shoot me. New America, the think tank where I am a fellow, got a similar influx of nasty calls and messages. “You’re a fucking cunt! Piece of shit whore!” read a typical missive.

My correspondents were united on the matter of my crimes (treason, sedition, inciting insurrection, etc.). The only issue that appeared to confound and divide them was the vexing question of just what kind of undesirable I was. Several decided, based presumably on my first name, that I was Latina and proposed that I be forcibly sent to the other side of the soon-to-be-built Trump border wall. Others, presumably conflating me with African-American civil rights heroine Rosa Parks, asserted that I would never have gotten hired if it weren’t for race-based affirmative action. The anti-Semitic rants flowed in, too: A website called the Daily Stormer noted darkly that I am “the daughter of the infamous communist Barbara Ehrenreich and the Jew John Ehrenreich,” and I got an anonymous phone call from someone who informed me, in a chillingly pleasant tone, that he supported a military coup “to kill all the Jews.”

The angry, censorial online mob is not merely a tool of neo-fascists or the political right, although the association of such mobs with the current Administration merits special attention. Without assuming any moral equivalence, it is worth noting that there seems to be a growing, parallel tendency of leftist mobs to harass and shut down disfavored speakers as well.

Some suppression of speech is disturbing enough to make one wonder if the First Amendment and its state action doctrine (which holds that the Amendment only applies to actions by the state, not by private parties) are hopelessly limited in an era when harassment is so easy. Consider the story of Lindy West, a comedian and writer who has authored controversial columns, generally on feminist topics. By virtue of her writing talent and her association with The Guardian, she does not, unlike many other speakers, face difficulties getting heard. However, she does face near-constant harassment and abuse. Every time she publishes a controversial piece, West recounts, “the harassment comes in a deluge. It floods my Twitter feed, my Facebook page, my email, so fast that I can’t even keep up (not that I want to).” In a standard example, after West wrote a column about rape, she received the following messages: “She won’t ever have to worry about rape”; “No one would want to rape that fat, disgusting mess”; and many more. As West observes: “It’s a silencing tactic. The message is: you are outnumbered. The message is: we’ll stop when you’re gone.” Eventually, West quit Twitter and other social media entirely.

It is not terribly new to suggest that private suppression of speech may matter as much as state suppression. For example, John Stuart Mill’s On Liberty seemed to take Victorian sensibilities as a greater threat to freedom than anything the government might do. But what has increased is the ability of nominally private forms of punishment—which may be directed or encouraged by government officials—to operate through the very channels meant to facilitate public speech.

Reverse Censorship, Flooding, and Propaganda Robots

Reverse censorship, which is also called “flooding,” is another contemporary technique of speech control. With roots in so-called “astroturfing,” it relies on counter-programming with a sufficient volume of information to drown out disfavored speech, or at least distort the information environment. Politically motivated reverse censorship often involves the dissemination of fake news (or atrocity propaganda) in order to distract and discredit. Whatever form it takes, this technique clearly qualifies as listener-targeted speech control.

The Chinese and Russian governments have led the way in developing methods of flooding and reverse censorship. China in particular stands out for its control of domestic speech. Unlike North Korea, China has not sought to avoid twenty-first-century communications technologies; its embrace of the Internet has been enthusiastic and thorough. Yet the Communist Party has nonetheless managed to survive—and even enhance—its control over politics, defying the predictions of many in the West who forecast that the arrival of the Internet would soon lead to the government’s overthrow. Among the Chinese methods uncovered by researchers are the efforts of as many as two million people who are paid to post on behalf of the Party. As King, Pan, and Roberts have found:

[T]he [Chinese] government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, we show that the Chinese regime’s strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. We show that the goal of this massive secretive operation is instead to distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime.

In an attention-scarce world, these kinds of methods are more effective than they might have been in previous decades. When listeners have highly limited bandwidth to devote to any given issue, they will rarely dig deeply, and they are less likely to hear dissenting opinions. In such an environment, flooding can be just as effective as more traditional forms of censorship.

Related to techniques of flooding is the intentional dissemination of so-called “fake news” and the discrediting of mainstream media sources. In modern times, this technique seems, once again, to be a key tool of political influence used by the Russian government. In addition to its attacks on regime critics, the Russian web brigade also spreads massive numbers of false stories, often alleging atrocities committed by its targets. While this technique can be accomplished by humans, it is aided and amplified by the increasing use of human-impersonating robots, or “bots,” which relay the messages through millions of fake accounts on social media sites like Twitter.

Tufekci has documented similar strategies employed by the Turkish government in its efforts to control opposition. The Turkish government, in her account, relies most heavily on discrediting nongovernmental sources of information. As she writes, critics of the state found “an enormous increase in challenges to their credibility, ranging from reasonable questions to outrageous and clearly false accusations. These took place using the same channels, and even the same methods, that a social movement might have used to challenge false claims by authorities.” The goal, she writes, was to create “an ever-bigger glut of mashed-up truth and falsehood to foment confusion and distraction” and “to overwhelm people with so many pieces of bad and disturbing information that they become confused and give up trying to figure out what the truth might be—or even the possibility of finding out what is true.”

While the technique was pioneered overseas, it is clear that flooding has come to the United States. Here, the most important variant has been the development and mass dissemination of so-called “fake news.” Consider in this regard the work of Philip Howard, who runs the Computational Propaganda Project at Oxford University. As Howard points out, voters are strongly influenced by what they think their neighbors are thinking; hence fake crowds, deployed at crucial moments, can create a false sense of solidarity and support. Howard and his collaborators studied the linking and sharing of news on Twitter in the week before the November 2016 U.S. presidential vote. Their research produced a startling revelation: “junk news was shared just as widely as professional news in the days leading up to the election.”

Howard’s group believes that bots were used to help achieve this effect. These bots pose as humans on Facebook, Twitter, and other social media, and they transmit messages as directed. Researchers have estimated that Twitter has as many as 48 million bot users, and Facebook has previously estimated that it has between 67.65 million and 137.76 million fake users. Some percentage of these, according to Howard and his team, are harnessed en masse to help spread fake news before and after important events.
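
The arithmetic of such amplification is simple enough to sketch. The toy simulation below uses invented numbers and involves no real platform interface; it shows only how a modest number of centrally directed accounts can make a junk story appear as widely shared as a professional one:

```python
# Toy simulation of bot amplification (all figures hypothetical).
import random

random.seed(42)  # reproducible illustration

def organic_shares(num_humans: int, appeal: float) -> int:
    """Each human shares independently with probability `appeal`."""
    return sum(random.random() < appeal for _ in range(num_humans))

NUM_HUMANS = 10_000
professional = organic_shares(NUM_HUMANS, appeal=0.05)  # roughly 500 shares
junk = organic_shares(NUM_HUMANS, appeal=0.01)          # roughly 100 shares

# A single operator directing 400 fake accounts closes the gap on command.
junk += 400

print(f"professional story: {professional} shares")
print(f"junk story:         {junk} shares")
# To trending algorithms and casual observers, the two stories now look
# comparably popular, manufacturing a false sense of solidarity and support.
```

The point of the exercise is that flooding requires no sophisticated technology, only cheap speech deployed at scale.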

Robots have even been employed to attack the “open” processes of the administrative state. In the spring of 2017, the Federal Communications Commission put its proposed revocation of net neutrality up for public comment. In previous years, such proceedings attracted vigorous argument by (human) commentators. This time, someone directed robots to impersonate—via stolen identities—over one hundred thousand people, flooding the system with fake comments, all of which were purportedly against federal net neutrality rules.

* * *

As it stands, the First Amendment has little to say about any of these tools and techniques. The mobilization of online vitriol or the dissemination of fake news by private parties or foreign states, even if in coordination with the U.S. government, has been considered a matter of journalistic ethics or foreign policy, not constitutional law. And it has long been assumed (though rarely tested) that the U.S. government’s own use of domestic propaganda is not a contestable First Amendment concern, on the premise that propaganda is “government speech.” The closest thing to a constitutional limit on propagandizing is the premise that the state cannot compel citizens to voice messages on its behalf (under the doctrine of compelled speech) or to engage in patriotic acts like saluting the flag or reciting the pledge of allegiance. But under the existing jurisprudence, it seems that little—other than political norms that are fast eroding—stands in the way of a full-blown campaign designed to manipulate the political speech environment to the advantage of current officeholders.

IV. What Might Be Done

What I have written suggests that the First Amendment and its jurisprudence are bystanders in an age of aggressive efforts to propagandize and control online speech. While the Amendment does wall off the most coercive technique of the government—directly punishing disfavored speakers or the press—that’s just one part of the problem.

If it seems that the First Amendment’s main presumptions are obsolete, what might be done? There are two basic answers to this question. The first is to admit defeat and suggest that the role of the political First Amendment will be confined to harms that fall within the original 1920s paradigm. There remains important work to be done here, as protecting the press and other speakers from explicit government censorship will continue to be essential. And perhaps this is all that might be expected from the Constitution (and the judiciary). The second—and more ambitious—answer is to imagine how First Amendment doctrine might adapt to the kinds of speech manipulation described above. In some cases, this could mean that the First Amendment must broaden its own reach to encompass new techniques of speech control. In other cases, it could mean that the First Amendment must step slightly to the side and allow different legal tools—like the enforcement of existing or as-yet-to-be-created criminal statutes—to do the lion’s share of the work needed to promote a healthy speech environment.

Accepting a Limited First Amendment

If we accept the premise that the First Amendment cannot itself address the issues here discussed, reform initiatives must center on the behaviors of major private parties that are, in practice, the most important speech brokers of our times. What naturally emerges is a debate over the public duties of both “the media,” traditionally understood, and of major Internet speech platforms like Facebook, Twitter, and Google. At its essence, the debate boils down to asking whether these platforms should adopt (or be forced to adopt) norms and policies traditionally associated with twentieth-century journalism.

We often take for granted the press’s role as a bulwark against the speech control techniques described in this paper. Ever since the rise of “objectivity” and “independence” norms in the 1920s, along with the adoption of formal journalism codes of ethics, the press has tried to avoid printing mere rumors or false claims, knowingly serving as an arm of government propaganda efforts, or succumbing to the influence of business interests. It has also guaranteed reporters some security from attacks and abuse. The press may not have performed these duties perfectly, and there have been the usual debates about what constitutes a “fact” or “objectivity.” But the aspiration exists, and it succeeds in filtering out many obvious distortions.

In contrast, the major speech platforms, born as tech firms, have become players in the media world almost by accident. By design, they have none of the filters or safeguards that the press historically has employed. There are advantages to this design: it yields the appealing idea that anyone, and not only professionals, might have her say. In practice, it has precipitated a great flourishing of speech in various new forms, from blogging to user-created encyclopedias to social media. As Volokh prophesied in 1995: “Cheap speech will mean that far more speakers—rich and poor, popular and not, banal and avant garde—will be able to make their work available to all.” But it has also meant, as we’ve seen, that the platforms have been vulnerable to tactics that weaponize speech and use the openness of the Internet as ammunition. The question now before us is whether the platforms need to do more to combat these problems for the sake of political culture in the United States.

We might, for example, fairly focus on Twitter, which has served as a tool for computational propaganda (through millions of fake users), dissemination of fake news, and harassment of speakers. Twitter does little about any of these problems. It has adopted policies that are meant, supposedly, to curb abuse. But the policies are widely viewed as ineffective, in no small part because they put the burden of action on the person being harassed. West, for example, describes her attempt to report as “abusive” a user who threatened to rape her with an “anthropomorphic train.” Twitter staff responded that the comment was “currently not violating the Twitter Rules.” When Twitter’s CEO recently asked, “What’s the most important thing you want to see Twitter improve or create in 2017?” one user responded: a “comprehensive plan for getting rid of the Nazis.” To suggest that private platforms could—and should—be doing more to protect speakers from harassment and abuse is perhaps the clearest remedy for the emerging threats identified above, even if it is not clear at this time exactly what such remedies ought to look like.

The so-called troll problem is among the online world’s oldest problems and a fixture of early “cyberspace” debates. Anonymous commentators and mobs have long shown their capacity to poison any environment and, through their vicious and demeaning attacks, chill expression. That old debate also revealed that design can mitigate some of these concerns. For example, consider that Wikipedia does not have a widespread fake news problem. But even if the debate remains similar, the stakes and consequences have changed. In the 1990s, trolls would abuse avatars, scare people off AOL chatrooms, or wreck virtual worlds. Today, we are witnessing efforts to destroy the reputations of real people for political purposes, to tip elections, and to influence foreign policy. It is hard to resist the conclusion that the law must be enlisted to fight such scourges.

First Amendment Possibilities

Could the First Amendment find a way to adapt to twenty-first-century speech challenges? How this might be accomplished is far from obvious, and I will freely admit that this paper is of the variety that is intended to ask the question rather than answer it. The most basic stumbling block is well known to lawyers. The First Amendment, like other guarantees in the Bill of Rights, has been understood primarily as a negative right against coercive government action—not as a right against the conduct of nongovernmental actors, or as a right that obliges the government to ensure a pristine speech environment. Tactics such as flooding and purposeful generation of fake news are, by our current ways of thinking, either private action or, at most, the government’s own protected speech.

A few possible adaptations present themselves, and they can be placed in three groups. The first concerns the “state action” doctrine, which is the limit that most obviously constrains the First Amendment from serving as a check on many of the emerging threats to the political speech environment. If a private mob attacks and silences critics of the government, purely of its own volition, under a basic theory of state action there is no role for the First Amendment—even if the mob replicates punishments that the government itself might have wanted to inflict. But what about when the mob is not quite as independent as it first appears? The First Amendment’s under-discussed “accomplice liability” doctrine may become of increasing importance if, in practice, governmental units or politicians have a hand in encouraging, coordinating, or otherwise providing direction to what might seem like private parties.

A second possibility is expanding the category of “state action” itself to encompass the conduct of major speech platforms like Facebook or Twitter. However, as discussed below, I view this as an unpromising and potentially counterproductive solution.

Third, the project of realizing a healthier speech environment may depend more on what the First Amendment permits, rather than what it prevents or requires. Indeed, some of the most important remedies for the challenges described in this paper may consist of new laws or more aggressive enforcement of existing laws. The federal cyberstalking statute, for example, has already been used to protect the press from egregious trolling and harassment. New laws might target foreign efforts to manipulate American elections, or provide better and faster protections for members of the press. Assuming such laws are challenged as unconstitutional, the necessary doctrinal evolution may involve the First Amendment accommodating robust efforts to fight the new tools of speech control.

Let us look a little more closely at each of these possibilities.

State Action—Accomplice Liability

The state action doctrine, once again, limits constitutional scrutiny to (as the name suggests) actions taken by the state. However, in the “troll army” model, punishment of the press and political critics is conducted by ostensibly private parties or foreign governments. Hence, at first glance, such conduct seems unreachable by the Constitution.

Yet as many have observed, the current American President has seemingly directed online mobs to go after his critics and opponents, particularly members of the press. Even members of the President’s party have reportedly been nervous to speak their minds, not based on threats of ordinary political reactions but for fear of attack by online mobs. And while the directed-mob technique may have been pioneered by Russia and employed by Trump, it is not hard to imagine a future in which other Presidents and powerful leaders sic their loyal mobs on critics, confident that in so doing they may avoid the limits imposed by the First Amendment.

But the state action doctrine may not be as much of a hindrance as this end-run supposes. The First Amendment already has a nascent accomplice liability doctrine that makes state actors, under some circumstances, “liable for the actions of private parties.” In Blum v. Yaretsky, the Supreme Court explained that the state can be held responsible for private action “when it has exercised coercive power or has provided such significant encouragement, either overt or covert, that the choice must in law be deemed to be that of the State.” The Blum formulation echoes common-law accomplice liability principles: a principal is ordinarily liable for the illegal actions of another party when it both shares the underlying mens rea, or purpose, and when it acts to encourage, command, support, or otherwise provide aid to that party. Blum itself was not a First Amendment case, and it left open the question of what might constitute “significant encouragement” in various settings. But in subsequent cases, the lower courts have provided a greater sense of what factual scenarios might suffice for state accomplice liability in the First Amendment context.

For example, the Sixth Circuit has a line of First Amendment employment retaliation cases that suggest when public actors may be held liable for nominally private conduct. In the 2010 case Paige v. Coyner, the Sixth Circuit addressed the constitutional claims of a woman who was fired by her employer at the behest of a state official (Coyner) after she spoke out at a public meeting in opposition to a new highway development. Unlike a typical retaliation-termination case, the plaintiff presented evidence that she was fired because the state official complained to her employer and sought to have her terminated. The Sixth Circuit held that the lawsuit properly alleged state action because Coyner encouraged the firing, even though it was the employer who actually inflicted the punishment. Moreover, the court suggested an even broader liability standard than Blum, holding that the private punishment of a speaker could be attributed to a state official “if that result was a reasonably foreseeable consequence.” More recently, the Sixth Circuit reaffirmed Coyner in a case where a police officer, after a dispute with a private individual, went to her workplace to complain about her, with the “reasonably foreseeable” result of having her fired. Similar cases can be found in other circuits.

In the political “attack mob” context, it seems that some official encouragement of attacks on the press or other speakers should trigger First Amendment scrutiny. Naturally, those who attack critics of the state merely because they feel inspired to do so by an official’s example do not present a case of state action. (If burdensome enough, however, the original attack might be a matter of First Amendment concern.) But more direct encouragement may yield a First Amendment constraint. Consider, for example, the following scenarios:

  • If the President or other government officials name individual members of the press and suggest they should be punished, yielding a foreseeable attack;
  • If the President or other officials call upon media companies to fire or otherwise discipline their critics, and the companies do so;
  • If the government is found to be directly funding third-party efforts to attack or flood critics of the government, or organizing or coordinating with those who provide such funding; or
  • If the President or other officials order private individuals or organizations to attack or punish critics of the government.

Based on the standards enumerated in Blum and other cases, these scenarios might support a finding of state action and a First Amendment violation. In other words, an official who spurs private censorial mobs to attack a disfavored speaker might—in an appropriately brought lawsuit, contingent on the usual questions of standing and immunity—be subject to a court injunction or even damages, just as if she performed the attack herself.

State Action—Platforms

The central role played by major speech platforms like Twitter, Google, and Facebook might prompt another question: should the platforms themselves be treated as state actors for First Amendment purposes? Perhaps, like the company town in Marsh v. Alabama, these companies have assumed sufficiently public duties or importance that they stand “in the shoes of the State.” While some have argued that this is appropriate, there are a number of reasons why treating these platforms as state actors strikes me as an unpromising and undesirable avenue.

First, there are real differences between the Marsh company town and today’s speech platforms. Marsh was a case where the firm had effectively taken over the full spectrum of municipal government duties, including ownership of the sidewalk, roads, sewer systems, and policing. The company town was, in most respects, indistinguishable from a traditional government-run locality—it just happened to be private. The residents of Chickasaw had no way of escaping the reach of the company’s power, as the Gulf Shipbuilding Corporation claimed, in Max Weber’s terms, a “monopoly of the legitimate use of physical force.” To exempt such a company town from constitutional scrutiny therefore produced the prospect of easy constitutional evasion by privatization.

However important Facebook or Google may be to our speech environment, it seems much harder to say that they are acting like the government all but in name and thereby avoiding the Constitution. It is true that one’s life may be heavily influenced by these and other large companies, but influence alone cannot be the criterion for what makes something a state actor; in that case, every employer would be a state actor, and perhaps so would nearly every family. If the major speech platforms (including the major television networks) ought to be classified as state actors based not on the assumption of specific state-like duties but merely on their influence, it is hard to know where the category ends.

This is not to deny that the leading speech platforms have an important public function. In fact, I have argued in other work that regulation of communications carriers plays a critical role in facilitating speech, comprising a de facto First Amendment tradition. Yet if these platforms are treated as state actors under the First Amendment in all that they do, their ability to handle some of the problems presented here may well be curtailed. This danger is made clear by Cyber Promotions, Inc. v. America Online, a 1996 case against AOL, the major online platform at the time. In Cyber Promotions, a mass-email marketing firm alleged that AOL’s new spam filters were violations of the First Amendment as, effectively, a form of state censorship. The court distinguished Marsh on factual grounds, but what if it hadn’t? Holding AOL—or today’s major platforms—to be a state actor could have severely limited its ability to fight not only spam but also trolling, flooding, abuse, and myriad other unpleasantries. From the perspective of listeners, it would likely be counterproductive.

Statutory or Law Enforcement Protection of Speech Environments and the Press

Many of the efforts to control speech described in this paper may be best countered not by the judiciary using the First Amendment, but rather by law enforcement using already existing or newly enacted laws. Consider several possibilities, some of which target trolling and others of which focus on flooding:

  • Extensive enforcement of existing federal or state anti-cyberstalking laws to protect journalists or other speakers from individual abuse;
  • The introduction of anti-trolling laws designed to better combat the specific problem of “troll army”-style attacks on journalists or other public figures;
  • New statutory or regulatory restrictions on the ability of major media and Internet speech platforms to knowingly accept money from foreign governments attempting to influence American elections; and
  • New laws or regulations requiring that major speech platforms behave as public trustees, with general duties to police fake users, remove propaganda robots, and promote a robust speech environment surrounding matters of public concern.

The enactment and vigorous enforcement of these laws would yield a range of challenging constitutional questions that this paper cannot address in full. But the doctrinal question they hold in common is whether the First Amendment would leave sufficient room for such measures. To handle the political speech challenges of our time, I suggest that the First Amendment must be interpreted to give wide latitude to new measures that advance listener interests, including measures that protect some speakers from others.

As a doctrinal matter, such new laws would bring renewed attention to classic doctrines that accommodate the interests of listeners—such as the doctrines of "true threats" and "captive audiences"—as well as to the latitude that courts have traditionally given efforts to protect the electoral process from manipulation. Such laws might also redirect attention to a question originally raised by the Federal Communications Commission's fairness doctrine and the Red Lion Broadcasting Co. v. FCC decision: how far the government may go solely to promote a better speech environment.

We might begin with trolling, which could be addressed criminally as a form of harassment or threat. Current case law is relatively receptive to such efforts, for it allows the government to protect listeners from speech designed to intimidate them by creating a fear of violence. The death threat and the burning cross are the archetypal examples. As we have seen, trolls frequently operate by describing horrific acts, and not in a manner suggesting good humor or artistic self-expression. In its most recent statement on the matter, the Supreme Court explained that "[i]ntimidation in the constitutionally proscribable sense of the word is a type of true threat, where a speaker directs a threat to a person or group of persons with the intent of placing the victim in fear of bodily harm or death." That such threats are often not carried out is immaterial; the intent to create a fear of violence is sufficient. Given this doctrinal backdrop, there is reason to believe that the First Amendment can already accommodate increased prosecution of those who try to intimidate journalists or other critics.

This belief is supported by the outcome of United States v. Moreland, the first lower court decision to consider the use of the federal cyberstalking statute to protect a journalist from an aggressive troll. Jason Moreland, the defendant, directed hundreds of aggressive emails, social media comments, and physical mailings at a journalist living and reporting in Washington, D.C. Many of his messages referenced violence and “a fight to the death.” In the face of a multi-faceted First Amendment challenge, the court wrote:

His communications directly referenced violence, indicated frustration that CP would not respond to his hundreds of emails, reflected concern that CP or someone on her behalf wanted to kill Moreland, stated that it was time to “eliminate things” and “fight to the death,” informed plaintiff that he knew where her brother was, and repeatedly conveyed that he expected a confrontation with CP or others on her behalf. . . . [T]he Court concludes that the statute is not unconstitutional as applied, as the words are in the nature of a true threat and speech integral to criminal conduct.

Cases like Moreland suggest that while efforts to reduce trolling may present a serious enforcement challenge, the Constitution will not stand in the way so long as the trolling at issue looks more like true threats than like strongly expressed political views.

The constitutional questions raised by government efforts to fight flooding are more difficult. Much depends on the extent to which these efforts are seen as serving important societal interests beyond the quality or integrity of public discourse, such as the protection of privacy or the protection of the electoral process.

Of particular relevance, as more and more of our lives are lived online—for many Americans today, nearly every waking moment is spent in close proximity to a screen—we may be “captive audiences” far more often than in previous decades. The captive audience doctrine, first developed in the 1940s, describes situations in which one is left with no practical means of avoiding unwanted speech. It was developed in cases like Kovacs v. Cooper, which concerned a city ban on “sound trucks” that drove around broadcasting various messages at a loud volume so as to reach both pedestrians and people within their homes. The Court wrote that “[t]he unwilling listener is not like the passer-by who may be offered a pamphlet in the street but cannot be made to take it. In his home or on the street he is practically helpless to escape this interference with his privacy by loud speakers except through the protection of the municipality.” It is worth pondering the extent to which we are now captive audiences in somewhat subtler scenarios, and whether we have developed virtual equivalents to the home—like our various devices or our email inboxes—where it is effectively impossible to avoid certain messages. The idea that one might simply “avert the eyes” as a means to deal with offensive messages seems increasingly implausible in many digital contexts. Relying on cases like Kovacs, the government might seek to develop and enforce “anti-captivity” measures that are designed to protect our privacy or autonomy online.

Other government interests may be implicated by efforts to fight flooding in the form of foreign propaganda. Consider, for instance, a ban on political advertising—including payments to social media firms—by foreign governments or even foreigners in general. Such a ban, if challenged as censorship, might be justified by the state’s compelling interest in defending the electoral process and the “national political community,” in the same manner that the government has justified laws banning foreign campaign contributions. As a three-judge panel of the D.C. district court explained in a recent ruling: “the United States has a compelling interest for purposes of First Amendment analysis in limiting the participation of foreign citizens in activities of American democratic self-government, and in thereby preventing foreign influence over the U.S. political process.” It should not be any great step to assert that the United States may also have a compelling interest in preventing foreign interests from manipulating American elections through propaganda campaigns conducted through social media platforms.

I have left for last the question presented by potential new laws premised solely on an interest in improving the political speech environment. These laws would be inspired by the indelible dictum of Alexander Meiklejohn: “What is essential is not that everyone shall speak, but that everything worth saying shall be said” —and, to some meaningful degree, heard. Imagine, for instance, a law that makes any social media platform with significant market power a kind of trustee operating in the public interest, and requires that it actively take steps to promote a healthy speech environment. This could, in effect, be akin to a “fairness doctrine” for social media.

For those unfamiliar with it, the fairness doctrine for decades obligated broadcasters to use their power over spectrum to improve the conditions of political speech in the United States. It required that broadcasters affirmatively cover matters of public concern and do so in a "fair" manner. Furthermore, it created a right for anyone to demand an opportunity to respond to opposing views using the broadcaster's facilities. At the time of the doctrine's first adoption in 1949, the First Amendment remained largely inert; by the 1960s, a constitutional challenge to the regulations had become inevitable. In the 1969 Red Lion decision, the Supreme Court upheld the doctrine and, in doing so, described the First Amendment's goals as follows:

It is the right of the viewers and listeners, not the right of the broadcasters, which is paramount. It is the purpose of the First Amendment to preserve an uninhibited marketplace of ideas in which truth will ultimately prevail, rather than to countenance monopolization of that market, whether it be by the Government itself or a private licensee.

While Red Lion has never been explicitly overruled, it has been limited by subsequent cases, and it is now usually said to be dependent on the scarcity of spectrum suitable for broadcasting. The FCC withdrew the fairness doctrine in 1987, opining that it was unconstitutional, and Red Lion has been presumed dead or overruled by a variety of government officials and scholars. Nonetheless, in the law, no doctrine is ever truly dead. All things have their season, and the major changes in our media environment seem to have strengthened the constitutional case for laws explicitly intended to improve political discourse.

To make my own preferences clear, I personally would not favor the creation of a fairness doctrine for social media or other parts of the web. That kind of law, I think, would be too hard to administer, too prone to manipulation, and too apt to flatten what has made the Internet interesting and innovative. But I could be overestimating those risks, and my own preferences do not bear on the question of whether Congress has the power to pass such a law. Given the problems discussed in this paper, among others, Congress might conclude that our political discourse has been deeply damaged, threatening not just coherent governance but the survival of the republic. On that basis, I think the elected branches should be allowed, within reasonable limits, to try returning the country to the kind of media environment that prevailed in the 1950s. Stated differently, it seems implausible that the First Amendment cannot allow Congress to cultivate more bipartisanship or nonpartisanship online. The justification for such a law would turn on the trends described above: the increasing scarcity of human attention, the rise to dominance of a few major platforms, and the pervasive evidence of negative effects on our democratic life.

Conclusion

It is obvious that changes in communications technologies will present new challenges for the First Amendment. For nearly twenty years now, scholars have been debating how the rise of the popular Internet might unsettle what the First Amendment takes for granted. Yet the future retains its capacity to surprise, for the emerging threats to our political speech environment are different from what many predicted. Few forecast that speech itself would become a weapon of censorship. Indeed, some might say that celebrants of open and unfettered channels of Internet expression (myself included) have been hoist by their own petard, as those very same channels are today used as ammunition against disfavored speakers. The emerging methods of speech control thus present a particularly difficult set of challenges for those who share the commitment to free speech articulated so powerfully in the founding—and increasingly obsolete—generation of First Amendment jurisprudence.

© 2017, Tim Wu.


Cite as: Tim Wu, Is the First Amendment Obsolete?, 17-01 Knight First Amend. Inst. (Sept. 1, 2017), https://knightcolumbia.org/content/tim-wu-first-amendment-obsolete [https://perma.cc/9ED4-WMHE].