What could be one of the most consequential First Amendment cases of the digital age is pending before a court in Illinois and will likely be argued before the end of the year. The case concerns Clearview AI, the technology company that surreptitiously scraped 3 billion images from the internet to feed a facial recognition app it sold to law enforcement agencies. Now confronting multiple lawsuits based on an Illinois privacy law, the company has retained Floyd Abrams, the prominent First Amendment litigator, to argue that its business activities are constitutionally protected. Landing Abrams was a coup for Clearview, but whether anyone else should be celebrating is less clear. A First Amendment that shielded Clearview and other technology companies from reasonable privacy regulation would be bad for privacy, obviously, but it would be bad for free speech, too.
The lawsuits against Clearview are in their early stages, but there does not seem to be any dispute about the important facts. The company assembled a vast database of images scraped from the internet—including from social media networks, news sites, and employment sites—without the consent or knowledge of the people pictured. When a user of Clearview’s app uploads a photo of a face, the app converts the image into a series of coordinates and shows images from the database that share similar coordinates. The app also supplies links to the websites from which the company obtained those similar images. The technology is powerful and fast, which is why federal immigration authorities and hundreds of police departments are already using it.
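The matching step described above is, in essence, a nearest-neighbor search over numeric face representations. Here is a minimal sketch of how such a search might work; it is not Clearview's actual pipeline, and the embeddings, database, and URLs are random stand-ins (real systems derive the "coordinates" from a deep neural network):

```python
import numpy as np

# Hypothetical face "coordinates": each row is an embedding vector for one
# scraped image. These are random stand-ins for illustration only.
rng = np.random.default_rng(0)
database = rng.normal(size=(1000, 128))   # 1,000 faces, 128-dim embeddings
urls = [f"https://example.com/photo/{i}" for i in range(1000)]

def most_similar(query, db, k=5):
    """Return indices of the k database faces closest to the query,
    ranked by cosine similarity."""
    db_norm = db / np.linalg.norm(db, axis=1, keepdims=True)
    q_norm = query / np.linalg.norm(query)
    scores = db_norm @ q_norm             # cosine similarity to every face
    return np.argsort(scores)[::-1][:k]   # highest similarity first

# Embedding of an uploaded photo; the app would then return the source URLs
# of the closest matches.
query = rng.normal(size=128)
for i in most_similar(query, database):
    print(urls[i])
```

The speed the article mentions comes from exactly this reduction: once every face is a fixed-length vector, comparing an uploaded photo against billions of images is a batch of arithmetic rather than an image-by-image inspection.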
The people who’ve sued Clearview contend that the company is violating an Illinois privacy law that regulates the collection, use, and dissemination of biometric information. The company argues in defense that its business practices involve the kinds of activities that the First Amendment has been held to protect in the past—collecting publicly available information, analyzing it, and sharing the conclusions of that analysis. In a brief filed in October, it likened its app to a search engine and contended that its judgment about “what information will be most useful to users” is an “editorial” judgment akin to those made by newspapers.
That Clearview is reaching for the First Amendment is not a surprise. In recent years, the Supreme Court has been willing to extend the First Amendment’s protection to an ever-expanding range of activities. Technology companies have learned that an effective way to protect lucrative business practices from regulation is to characterize those practices as free speech. Google has been arguing, with some success in the lower courts, that judges should deal with any effort to regulate its search engine in the same way they’d deal with efforts to censor the Wall Street Journal. In Maine, internet service providers are arguing that the First Amendment protects their right to use and sell their customers’ sensitive data without their consent. Earlier this fall, President Donald Trump issued an executive order meant to shut down TikTok, the video-sharing platform. The company sued, arguing that the order violated the First Amendment because TikTok runs on code, and code is speech.
These arguments may sound audacious, but it’s more difficult than you might imagine to draw lines between these companies’ business practices and the kinds of activities that most of us believe the First Amendment must protect. Why, exactly, should the First Amendment treat Google’s decisions about which search results to highlight differently from the New York Times’ decisions about which articles to feature on its front page? Why is it, precisely, that the First Amendment should protect journalists who scrape the web in the service of their journalism but not Clearview when it scrapes the web in the service of its app? Principled line-drawing in this context is hard. (Go on, try it yourself.) The slopes are slippery. Even skeptics of the technology companies’ arguments may legitimately worry about the broader implications of allowing judges to place those companies’ activities outside the First Amendment’s protection.
But if line-drawing is difficult here, it’s also absolutely necessary. When courts conclude that a given activity is speech within the meaning of the First Amendment, they protect it from government regulation, which makes it more difficult for the government to address social harms associated with that activity. It’s important, then, that First Amendment protection be reserved for the kinds of activities that actually further the ends the First Amendment was meant to serve.
Is Clearview engaged in activity that the First Amendment should care about? This is a hard question, but the company’s arguments don’t immediately persuade us. The company says it’s engaged in the creation and dissemination of information, but American courts haven’t extended First Amendment protection mechanically to every activity that involves information, data, or even “speech” in the colloquial sense, as scholars have observed. Instead, courts have looked to the social meaning of the activity in question, asking, for instance, whether the activity belongs to a recognized medium of expression; whether it is intended to convey a message and whether that message is likely to be understood; and, perhaps most important, whether the activity has the effect of informing public discourse.
It was considerations like these that led the courts to extend the First Amendment’s protection to flag burning, video games, and picketing—though none of these things is speech in the ordinary sense of the word. And it was considerations like these that led courts to withhold First Amendment protection from the solicitation of hitmen, threats of immediate violence, and agreements to fix prices—though most people would describe all of these things as speech. It has always mattered to courts, in other words, what an activity signifies, and what it is, and what it does.
With all of this in mind, Clearview’s claim to First Amendment protection is less than compelling. The company’s extraction of biometric data from photos scraped from the internet—which is the specific activity that the Illinois law regulates here—does not belong to any recognized mode of expression. And it is not intended to inform public discourse, even if on occasion it might do so incidentally. For understandable reasons, Clearview wants courts to think of the company as an editor and the company’s app as a search engine. But the company’s arguments seem to rely on obscuring differences that matter.
The bigger problem for the company, though, may be that even if its business activities are speech within the meaning of the First Amendment, the Illinois law is a reasonable regulation of that speech. In its legal papers, Clearview argues that any law regulating its business activities should be subject to the most stringent form of constitutional scrutiny—the same kind of scrutiny courts would apply to a law censoring the press. But while laws censoring the press are speech-suppressive almost by definition, this isn’t true of laws protecting individual privacy. To the contrary, privacy is a precondition for all First Amendment freedoms, including the freedom of speech, and so laws protecting privacy can sometimes be speech-enhancing. As the Supreme Court has observed, in many privacy cases there are free speech interests on both sides of the balance. In those cases, the question the courts should ask is not whether the law can survive the most stringent scrutiny but whether it balances those interests in a reasonable way.
The Illinois law does this. It focuses narrowly on certain forms of nonconsensual surveillance that pose an especially serious threat to individual privacy. Facial recognition in particular is an immensely powerful form of surveillance whose abuse could fundamentally undermine civil liberties, including the liberties the First Amendment is meant to protect. Clearview’s technology highlights these dangers. The company’s app would allow anyone to identify the protesters who attended a particular political rally, or to identify the people who entered a particular house of worship or medical clinic. According to the New York Times, one of the app’s founders tried to sell access to a white supremacist who was running for Congress, telling him, quite accurately, that the app could be used to conduct “extreme opposition research.” Against this background, Illinois’ decision to restrict the nonconsensual collection and use of biometric information is perfectly understandable. The law is a straightforward response to technology whose unregulated deployment would, as the New York Times observed, “end privacy as we know it.”
This is not to say that the Illinois law is perfect. It doesn’t restrict the collection and use of biometric data by government agencies, for example. It doesn’t include exceptions for journalism—and, consequently, a journalist might be able to challenge the application of the law to a specific investigative project. On the whole, though, the law is a reasonable effort to balance competing interests, including those related to the First Amendment. It would be disappointing, to say the least, if the courts let Clearview use the First Amendment to kneecap the freedoms it was meant to protect. Carefully drawn privacy laws are a precondition for free speech in the digital age, not a threat to it.
More broadly, it would be terrible for the freedoms of inquiry, association, and speech if the courts didn’t think very carefully before allowing companies like Clearview to wrap their business models in the First Amendment. It hardly needs to be said that our speech environment looks radically different today than it did 50 years ago, when the Supreme Court issued many of the rulings that have come to define the First Amendment. Most political speech now takes place online. A small number of tech companies serve as the gatekeepers of online public discourse. Their business model entails pervasive surveillance of what we read, what we say, and whom we associate and correspond with. These developments demand that legislatures think creatively about how the digital public sphere can be kept moored to democratic values and the public interest. It would be a mistake to allow the First Amendment to become an obstacle to laws essential to protect the integrity and vitality of free speech in the digital age.
Jameel Jaffer is executive director of the Knight Institute.
Ramya Krishnan is a staff attorney at the Knight Institute.