Large internet platforms’ pleas for free expression protection have vexed policymakers and scholars for over a decade. I am presently trying to reconcile the type of intermediary responsibility I call for in recent work with my earlier characterization of platforms as common carriers. These platforms assume a variety of distinctive roles and responsibilities that arise situationally. Sometimes a platform takes on real editorial responsibility. In other scenarios, it is unable or unwilling to exercise control over users, and regulators are unwilling or unable to formulate rules requiring it to act.

Heather Whitney’s paper Search Engines, Social Media, and the Editorial Analogy challenges these distinctions by highlighting how the media organizations and intermediaries at the heart of leading First Amendment precedents differ from contemporary platforms. Whitney’s subtle intervention carefully parses the roles and purposes of media outlets, fiduciaries, and other entities with longer histories of regulation than platforms have. It should serve as a vital corrective to the anachronistic metaphors that bog down First Amendment discourse to this day.

The question now is how to craft free expression doctrine capable of addressing platforms that are far more centralized, pervasive, and powerful than the vast majority of entities that pressed free expression claims before 2000. That is a project worthy of a treatment as expansive as Thomas Emerson’s classic The System of Freedom of Expression. In my brief intervention here, I merely hope to advance a perspective congruent with Whitney’s turn to Seana Shiffrin’s “thinker-based” theory of free expression. I believe that free speech protections are primarily for people, and only secondarily (if at all) for software, algorithms, artificial intelligence, and platforms.

“Free speech for people” is a particularly pressing goal given ongoing investigations into manipulation of public spheres around the world. American voters still do not know to what extent foreign governments, non-state actors, and bots manipulated social media during the presidential election of 2016. The Federal Election Commission failed to require disclosure of the source of much political advertising on Facebook and Twitter. Explosive reports now suggest that the goal of the Russian buyers of many ads “was to amplify political discord in the U.S. and fuel an atmosphere of divisiveness and chaos.” Social media firms are cooperating with investigators now. But they will likely fight proactive regulation by arguing that their algorithmic feeds are speech. They have already deleted critical information.

Courts are divided on whether algorithmic generation of search results and newsfeeds merits full First Amendment protection. As Tim Wu has observed, “[c]omputers make trillions of invisible decisions each day; the possibility that each decision could be protected speech should give us pause.” He and other scholars have argued forcefully for limiting constitutional protection of “machine speech.” By contrast, Stuart Benjamin has predicted that courts will expand the coverage of First Amendment protection to artificial intelligence (AI), including algorithmic data processing.

Given the growing concern about the extraordinary power of secret algorithmic manipulation to target influential messaging to persons with little to no appreciation of its ultimate source, courts should not privilege algorithmic data processing in these scenarios as speech. As James Grimmelmann has warned with respect to “robotic copyright,” First Amendment protection for the products of AI could systematically favor machine over human speech. This is particularly dangerous as bots begin mimicking actual human actors. Henry Farrell paints a vivid picture:

The world that the Internet and social media have created is less a system than an ecology, a proliferation of unexpected niches, and entities created and adapted to exploit them in deceptive ways. Vast commercial architectures are being colonized by quasi-autonomous parasites. . . . Such fractured worlds are more vulnerable to invasion by the non-human. . . . Twitterbots vary in sophistication from automated accounts that do no more than retweet what other bots have said, to sophisticated algorithms deploying so-called “Sybil attacks,” creating fake identities in peer-to-peer networks to invade specific organizations or degrade particular kinds of conversation.

There is also a growing body of empirical research on the troubling effects of an automated public sphere. In too many scenarios, bot interventions are less speech than anti-speech, calculated efforts to disrupt democratic will formation and fool the unwary.
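
To make concrete how little “speech” such automation involves, consider a minimal sketch of the mechanical retweeting Farrell describes. Everything here is hypothetical: the `client` object, its `recent_posts` and `repost` methods, and the handles and keywords are invented stand-ins for a real platform API, not any actual bot.

```python
import time

# A deliberately bare-bones sketch. The `client` object and its
# `recent_posts` and `repost` methods are hypothetical stand-ins for a
# real platform API; the handles and keywords are invented.
WATCHED_ACCOUNTS = ["@source_bot_1", "@source_bot_2"]  # a hypothetical bot network
AMPLIFY_KEYWORDS = {"election", "scandal", "rigged"}   # divisive topics to boost


def should_amplify(post_text: str) -> bool:
    """No judgment, belief, or intent: a bare keyword match decides what gets boosted."""
    return bool(set(post_text.lower().split()) & AMPLIFY_KEYWORDS)


def run_bot(client, interval_seconds: int = 60) -> None:
    """Poll the watched accounts and mechanically repost anything that matches."""
    while True:
        for handle in WATCHED_ACCOUNTS:
            for post in client.recent_posts(handle):
                if should_amplify(post.text):
                    client.repost(post.id)
        time.sleep(interval_seconds)
```

Nothing in that loop resembles a thinker, which is precisely the gap a thinker-based theory of free expression would track.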

To restore public confidence in democratic deliberation, Congress should require rapid disclosure of the data used to generate algorithmic speech, the algorithms employed, and the targeting of that speech. American legislation akin to the “right to explanation” in the European Union’s General Data Protection Regulation would not infringe on, but would rather support, First Amendment values. Affected firms may assert that their algorithms are too complex to disclose. If so, Congress should be entitled to ban the targeting and arrangement of information at issue, because speech protected by the Constitution must bear some recognizable relation to human cognition.
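
What might such disclosure look like in practice? The following is a minimal sketch, not a reading of the GDPR or of any pending bill; every field name is my own illustration of the triad of data, algorithm, and targeting proposed above.

```python
from dataclasses import dataclass
from typing import List

# Illustrative only: these field names are my own, drawn neither from the
# GDPR nor from any statute or pending bill.
@dataclass
class AlgorithmicSpeechDisclosure:
    """One record per targeted, algorithmically generated message."""
    message_id: str
    sponsor: str                    # who paid for or originated the message
    data_sources: List[str]         # the data used to generate or rank it
    model_description: str          # a plain-language account of the algorithm
    targeting_criteria: List[str]   # e.g., demographics, inferred interests
    audience_size: int              # how many accounts were reached

    def is_explainable(self) -> bool:
        """A crude proxy for the standard suggested above: the firm must at
        least be able to describe its model and targeting in human terms."""
        return bool(self.model_description and self.targeting_criteria)
```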

Authorities should also consider banning certain types of manipulation. The European Union’s Audiovisual Media Services Directive states that “audiovisual commercial communications shall not use subliminal techniques.” In a less esoteric mode, there is a long line of U.S. Federal Trade Commission (FTC) guidance forbidding misleading advertisements and false or missing indications of sponsorship. Given the FTC’s manifold limitations, U.S. states will also need to develop more specific laws to govern an increasingly automated public sphere. California State Senator Robert Hertzberg recently introduced the so-called “Blade Runner Bill,” which “would require digital bots, often credited with spreading misinformation, to be identified on social media sites.” Another proposed bill “would prohibit an operator of a social media Internet Web site from engaging in the sale of advertising with a computer software account or user that performs an automated task, and that is not verified by the operator as being controlled by a natural person.” I applaud such interventions as concrete efforts to ensure that critical forums for human communication and interaction are not overwhelmed by a posthuman swarm of spam, propaganda, and distraction.
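
Rules of this kind would be mechanically checkable on the platform side. A minimal sketch follows; the `Account` fields and function names are hypothetical, meant only to show the shape of the two proposed requirements, not the text of either bill.

```python
from dataclasses import dataclass

# Hypothetical account record; the fields are my own illustration.
@dataclass
class Account:
    handle: str
    is_automated: bool             # the account performs automated tasks
    verified_natural_person: bool  # operator verified a human controller
    labeled_as_bot: bool           # carries the identification the bill would require

def may_sell_advertising(account: Account) -> bool:
    """The second bill's rule: no ad sales to automated accounts that the
    operator has not verified as controlled by a natural person."""
    return not (account.is_automated and not account.verified_natural_person)

def satisfies_bot_labeling(account: Account) -> bool:
    """The Blade Runner Bill's rule: automated accounts must be identified."""
    return account.labeled_as_bot if account.is_automated else True
```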

As theorists develop a philosophy of free expression for the twenty-first century, they might take the principles underlying interventions like the Blade Runner Bill as fixed points of considered convictions to guide a reflective equilibrium on the proper balance between the rights of speakers and listeners, individuals and community, technology users, and those subject to technology’s effects. Even if free expression protections extend to algorithmic targeting and bot expression, disclosure rules are both essential and constitutionally sound. Courts should avoid intervening to protect “speech” premised on elaborate and secretive human-subject research on internet users. The future of human expression depends on strict rules limiting the power and scope of technological substitutes for real thinkers and real thoughts.

© 2018, Frank Pasquale.

Cite as: Frank Pasquale, Preventing a Posthuman Law of Freedom of Expression, 18-01.c Knight First Amend. Inst. (Feb. 26, 2018), https://knightcolumbia.org/content/preventing-posthuman-law-freedom-expression [https://perma.cc/T5CE-C9CL].