This post is a response to a post on the LPE Blog by Genevieve Lakier and Nelson Tebbe, “After the ‘Great Deplatforming’: Reconsidering the shape of the First Amendment.”
Genevieve Lakier and Nelson Tebbe’s rich and provocative essay probes questions at the First Amendment’s horizon, among them whether the First Amendment imposes duties on social media companies to respect the free speech interests of their users. It’s a tantalizing prospect, but as Justice Thomas’s concurrence in the Trump Twitter blocking case underscores, it may also be a lost cause. The Roberts Court has taken a singularly laissez-faire view of free speech, and this shows no signs of changing. Given that, the question whether the First Amendment imposes duties on the platforms may be less urgent than the question whether the First Amendment precludes democratically elected legislatures from doing so.
At the heart of this question is a war of analogies: Do the platforms resemble traditional common carriers, or are they more like newspapers? Whether the platforms are similar to one or the other has significant implications for whether they can be regulated and, if so, how. If the platforms are like common carriers, any reasonable regulation of them is unlikely to raise First Amendment concern. If they are like newspapers, targeting them is all but off the table.
The problem is that both labels fit like a bad suit. The platforms are not neutral pipes that merely carry information from point A to B. Nor are they exactly like newspapers, which are generally more concerned with nurturing their own voice than hosting others’ and which lack any bottleneck power over a medium of communication.
Indeed, in many ways companies like Facebook and Twitter are sui generis. They dominate the public sphere. They engage in pervasive surveillance of their users in order to maximize their targeted ad-based revenue. They also run on black box algorithms that have the potential to distort public discourse and significantly affect the health of democracy.
We should focus on these features, and resist the temptation to shoehorn the platforms into familiar, yet ill-fitting categories. Doing so will reveal a more accurate picture of the platforms’ relationship to free speech. It is also critical to shaping a First Amendment that promotes, rather than hinders, a democratic approach to online speech governance.
Granted, the analogy to common carriers has real force. As Justice Thomas remarked in his recent concurrence, there are strong similarities between platforms and communications utilities. Just as “a traditional telephone company laid physical wires to create a network connecting people,” social media platforms “lay information infrastructure” to much the same end. Some of these platforms also have substantial market power, like traditional utilities.
There are, however, considerable differences. Unlike the telephone or telegraph, the platforms are not passive receptacles of speech. They moderate content, often at an industrial scale, deciding which speech to take down and structuring and organizing the speech that remains. Through their policy and design decisions, they determine which interactions to facilitate and prohibit, and which speech to promote and suppress.
The platforms have argued that these decisions are akin to the editorial judgments made by newspapers, citing Miami Herald Publishing Co. v. Tornillo, in which the Court held that the First Amendment protects the editorial judgments of newspapers. But while the analogy to newspapers has superficial appeal, it falters on closer analysis, as people like Heather Whitney and Oren Bracha have pointed out. Seen at close range, most of the editorial judgments made by platforms are different in kind from those made by newspapers.
The most salient difference is the relationship the platforms have to the content they publish. While people may criticize Facebook for failing to take down violent or racist content, no one attributes this content to Facebook in the same way readers of The New York Times attribute articles and editorials in the paper to the Times. That is not just because of Section 230, which provides that platforms may not be held liable as the publisher or speaker of user content. It is a consequence of Facebook’s ethos and business model. Facebook seeks to “give people voice.” The Times seeks to give expression to its own voice.
To be sure, there are deviations from this norm. For example, both Facebook and Twitter applied warning labels to misinformation around the 2020 election—labels that users likely associated with the companies or assumed the companies endorsed. Most of the content edited by the platforms, however, does not fall into this category.
Focusing on the totality of this content, moreover, doesn’t really change matters. As Bracha has written, the Court has been “willing to assume a separate layer of expression attributable to the editor where the edited content constitutes a collective expressive entity.” But the content curated by platforms generally lacks this quality.
In Hurley v. Irish-American Gay, Lesbian, & Bisexual Group, the Court held that a parade organizer had a First Amendment interest in excluding a gay organization from participating in the parade because “[u]nlike the programming offered on various channels by a cable network, the parade does not consist of individual, unrelated segments that happen to be transmitted together for individual selection by members of the audience.” Rather, each unit’s expression would be perceived by spectators as part of a single expressive whole.
The same cannot be said of, say, Facebook’s News Feed. News Feed selects and organizes posts from users’ friends, pages, and Groups, but users don’t usually experience their feed as a single expressive entity. Nor are they likely to discern some overarching meaning in it—and for good reason. Facebook arranges the trappings of users’ digital experience not to convey any message, but to maximize their attention and capture their data in the service of its ad-driven business model.
The analogy to newspapers, then, deserves far more skepticism than it usually receives. Not all editorial judgments are equal. Social media companies may in rare cases exercise the kind of editorial judgment we associate with newspapers. But more often they do not. The companies should not, therefore, receive First Amendment protection whenever they make editorial judgments. They should receive protection only to the extent those judgments actually resemble those of newspapers.
Adopting this approach would create the space for the regulation Lakier and Tebbe gesture to: due process protections and transparency requirements, neither of which plausibly trenches on the platforms’ right to autonomy of message. But it would also give succor to structuralist regulation that addresses the platforms’ basic business model, for example, interoperability requirements (which platforms might argue interfere with their discretion to choose which data to make available through their APIs), or a tax or ban on surveillant advertising (which they might argue burdens their ability to show ads and target them to those audiences most likely to respond to them).
Moreover, even when regulations do encroach on the platforms’ First Amendment protected editorial judgments, they may not warrant the most stringent scrutiny. As the Supreme Court recognized in Red Lion Broadcasting Co. v. FCC, “differences in the characteristics of new media justify differences in the First Amendment standards applied to them.”
The Court’s treatment of broadcast and cable is instructive. In Turner Broadcasting System, Inc. v. FCC (Turner II), the Court explained that it had permitted more intrusive regulation of broadcast speakers than of other speakers in the media because of the supposed scarcity of broadcast frequencies. In considering regulations requiring cable providers to carry local broadcast stations, the Court found that justification inapplicable. But it rebuffed providers’ invitation to apply strict scrutiny, in part based on the specific characteristics of the cable medium.
For example, the Court noted that when a person subscribed to cable, the physical connection between the television set and the cable networks gave the cable operator “bottleneck, or gatekeeper, control over most (if not all) of the television programming that is channeled into the subscriber’s home.” This meant that cable operators could “silence the voice of competing speakers with a mere flick of the switch.” The Court refused to ignore this “potential for abuse of ... private power over [such] a central avenue of communication.”
Facebook (and to a lesser extent Twitter) also has gatekeeping power. But these platforms have other discourse-distorting characteristics that might demand a different and less deregulatory First Amendment standard: for example, their control over the public sphere, the opacity of their inner workings, and their surveillance-based business model. These characteristics render the platforms categorically distinct from the bygone behemoths of the 20th century. And critically, these characteristics engender unprecedented challenges to free speech in the digital age. If we are to meet these challenges, First Amendment doctrine must attend to the platforms’ actual features, rather than to analogical shortcuts that risk stifling the creative and calibrated legislative solutions Lakier and Tebbe call for.
Ramya Krishnan is a staff attorney at the Knight First Amendment Institute.