In his contribution to the Knight First Amendment Institute’s “Emerging Threats” essay series, Fordham Law School’s Olivier Sylvain critiques a core U.S. internet law, Section 230 of the Communications Decency Act (CDA 230). CDA 230 immunizes platforms like YouTube and Craigslist from most liability for speech posted by their users. By doing so, it protects lawful and important speech that risk-averse platforms might otherwise silence. But it also lets platforms tolerate unlawful and harmful speech.

Sylvain argues that the net result is to perpetuate inequities in our society. For women, ethnic minorities, and many others, he suggests, CDA 230 facilitates harassment and abuse—and thus “helps to reinforce systemic subordination.” We need not tolerate all this harm, Sylvain further suggests, given the current state of technology. Large platforms’ ever-improving ability to algorithmically curate users’ speech “belies the old faith that such services operate at too massive a scale to be asked to police user content.”

CDA 230 has long been a pillar of U.S. internet law. Lately, though, it has come under sustained attack. In the spring of 2018, Congress passed the first legislative change to CDA 230 in two decades: the Allow States and Victims to Fight Online Sex Trafficking Act, commonly known as FOSTA. FOSTA has an important goal—protecting victims of sex trafficking. But it is so badly drafted that no one can agree on exactly what it means. It passed despite opposition from advocates for trafficking victims and the ACLU, and despite the Justice Department’s concern that aspects of it could make prosecutors’ jobs harder. More challenges to CDA 230 are in the works. That makes close attention to the law, including both its strengths and its weaknesses, extremely timely.

Supporters of CDA 230 generally focus on three broad benefits. The first is promoting innovation and competition. When Congress passed the law in 1996, it was largely looking to future businesses and technologies. In today’s age of powerful mega-platforms, the concern about competition is perhaps even more justified. When platform liability risks expand, wealthy incumbents can hire lawyers and armies of moderators to adapt to new standards. Startups and smaller companies can’t. That’s why advocates for startups opposed FOSTA, while Facebook and the incumbent-backed Internet Association supported it.

The second benefit of CDA 230 is its protection for internet users’ speech rights. When platforms face liability for user content, they have strong incentives to err on the side of caution and take it down, particularly for controversial or unpopular material. Empirical evidence from notice-and-takedown regimes tells us that wrongful legal accusations are common, and that platforms often simply comply with them. The Ecuadorian government, for example, has used spurious copyright claims to suppress criticism and videos of police brutality. Platform removal errors can harm any speaker, but a growing body of evidence suggests that they disproportionately harm vulnerable or disfavored groups. So while Sylvain is right to say that vulnerable groups suffer disproportionately when platforms take down too little content, they also suffer disproportionately when platforms take down too much.

The third benefit is that CDA 230 encourages community-oriented platforms like Facebook or YouTube to weed out offensive content. This was Congress’s goal in enacting the CDA’s “Good Samaritan” clause, which immunizes platforms for voluntarily taking down anything they consider “objectionable.” Prior to CDA 230, platforms faced the so-called moderator’s dilemma—any effort to weed out illegal content could expose them to liability for the things they missed, so they were safer not moderating at all.

Against these upsides, Sylvain marshals a compelling list of downsides. Permissive speech rules and hands-off attitudes by platforms, especially when combined with what Sylvain calls “discriminatory designs on user content and data,” enable appalling abuses, particularly against members of minority groups. Nonconsensual pornography, verbal attacks, and credible threats of violence are all too common.

Does that mean it is time to scrap CDA 230? Some people think so. Sylvain’s argument is more nuanced. He identifies specific harms, and specific advances in platform technology and operations, that he argues justify legal changes. While I disagree with some of his analysis and conclusions, the overall project is timely and useful. It arrives at a moment of chaotic, often rudderless public dialogue about platform responsibility. Pundits depict a maelstrom of online threats, often conflating issues as diverse as data breaches, “fake news,” and competition. The result is a moment of real risk, not just for platforms but for internet users. Poorly thought-through policy responses to misunderstood problems can far too easily become laws.

In contrast to this panicked approach, Sylvain says we should be “looking carefully at how intermediaries’ designs on user content do or do not result in actionable injuries.” This is a worthy project. It is one that, in today’s environment, requires us to pool our intellectual resources. Sylvain brings, among other things, a deep understanding of the history of communications regulation. I bring practical experience from years in-house at Google and familiarity with intermediary liability laws around the world.

To put my own cards on the table—and surely surprising no one—I am very wary of tinkering with intermediary liability law, including CDA 230. That’s mostly because I think the field is very poorly understood. It was hardly a field at all just a few years ago. A rising generation of experts, including Sylvain, will fix that before long. In the meantime, though, we need careful and calm analysis if we are to avoid shoot-from-the-hip legislative changes.

Whatever we do with the current slew of questions about platform responsibility, the starting point should be a close look at the facts and the law. The facts include the real and serious harms Sylvain identifies. He rightly asks why our system of laws tolerates them, and what we can do better.

CDA 230, though, is not the driver of many of the problems he identifies. In the first section of my response, I will walk through the reasons why. Hateful or harassing speech, for example, often doesn’t violate any law at all for reasons grounded in the First Amendment. If platforms tolerate content of this sort, it is not because of CDA 230. Quite the contrary: A major function of the law is to encourage platforms to take down lawful but offensive speech.

Other problems Sylvain describes are more akin to the story, recently reported, of Facebook user data winding up in the hands of Cambridge Analytica. They stem from breaches of trust (or of privacy or consumer protection law) between a platform and the user who shared data or content in the first place. Legal claims for breaching this trust are generally not immunized by CDA 230. If we want to change laws that apply in these situations, CDA 230 is the wrong place to start.

In the second section of my response, I will focus on the issues Sylvain surfaces that really do implicate CDA 230. In particular, I will discuss his argument that platforms’ immunities should be reduced when they actively curate content and target it to particular users. Under existing intermediary liability frameworks outside of CDA 230, arguments for disqualifying platforms from immunity based on curation typically fall into one of two categories. I will address both.

The first argument is that platforms should not be immunized when they are insufficiently “neutral.” This framing, I argue, is rarely helpful. It leads to confusing standards and in practice deters platforms from policing for harmful material.

The second argument is that immunity should depend on whether a platform “knows” about unlawful content. Knowledge is a slippery concept in the relevant law, but it is a relatively well-developed one. Knowledge-based liability has problems—it poses the very threats to speech, competition, and good-faith moderation efforts that CDA 230 avoids. But by talking about platform knowledge, we can reason from precedent and experience with other legal frameworks in the United States and around the world. That allows us to more clearly define the factual, legal, and policy questions in front of us. We can have an intelligent conversation, even if we don’t all agree. That’s something the world of internet law and policy badly needs right now.

Isolating Non-CDA 230 Issues

In this section I will walk through issues and potential legal claims mentioned by Sylvain that are not, I think, controlled by CDA 230. Eliminating them from the discussion will help us focus on his remaining important questions about intermediary liability.

Targeting Content or Ads Based on Discriminatory Classifications

Sylvain’s legal arguments are grounded in a deep moral concern with the harms of online discrimination. He provides numerous moving examples of bias and mistreatment. But many of the internet user and platform behaviors he describes are not actually illegal, or are governed by laws other than CDA 230.

As one particularly disturbing example, Sylvain describes how Facebook until recently allowed advertisers to target users based on algorithmically identified “interests” that included phrases like “how to burn Jews” and “Jew hater.” When ProPublica’s Julia Angwin broke this story, Facebook scrambled to suspend these interest categories. Sylvain recounts this episode to illustrate the kinds of antisocial outcomes that algorithmic decisionmaking can generate. However repugnant these phrases are, though, they are not illegal. Nor is using them to target ads. So CDA 230 does not increase platforms’ willingness to tolerate this content—although it does increase their legal flexibility to take it down.

To outlaw this kind of thing, we would need different substantive laws about things like hate speech and harassment. Do we want those? Does the internet context change First Amendment analysis? Like other critics of CDA 230 doctrine, Sylvain emphasizes the “significant qualitative and quantitative difference between the reach of [harmful] offline and online expressive acts.” But it’s not clear that reforming CDA 230 alone would curb many of these harms in the absence of larger legal change.

CDA 230 also has little or no influence on Facebook ads that target users based on their likely race, age, or gender. Critics raise well-justified concerns about this targeting. But, as Sylvain notes, it generally is not illegal under current law. Anti-discrimination laws, and hence CDA 230 defenses, only come into play for ads regarding housing, employment, and possibly credit. Even for that narrower class of ads, it’s not clear that Facebook is doing anything illegal by offering a targeting tool that has both lawful and unlawful uses. If the Fair Housing Act (FHA) does apply to Facebook in this situation, the result in a CDA-230-less world would appear to be that Facebook must prohibit and remove these ads. But that’s what Facebook says it does already. So the CDA 230 problem here may be largely theoretical.

Sylvain’s more complicated claim is that CDA 230 allows Airbnb to facilitate discrimination by requiring renters to post pictures of themselves. Given Airbnb’s importance to travelers, discrimination by hosts is a big deal. But CDA 230’s relevance is dubious. First, it’s not clear if anyone involved — even a host — violates the FHA by enforcing discriminatory preferences for shared dwellings. Even if the hosts are liable, it seems unlikely that Airbnb violates the FHA by requiring photos, which serve legitimate as well as illegitimate purposes. Prohibiting the photos might even be unconstitutional: A court recently struck down under the First Amendment a California statute that, following reasoning similar to Sylvain’s, barred the Internet Movie Database from showing actors’ ages because employers might use the information to discriminate. Finally, if Airbnb’s photo requirement did violate the FHA, it seems unlikely that CDA 230 would provide immunity. The upshot is that CDA 230 is probably irrelevant to the problem Sylvain is trying to solve in this case.

None of this legal analysis refutes Sylvain’s moral and technological point: The internet enables new forms of discrimination, and the law should respond. The law may very well warrant changing. But for these examples, CDA 230 isn’t the problem.

Targeting Content Based on Data Mining

Sylvain also describes a set of problems that seem to arise from platforms’ directly harming or breaching the trust of their users. Some of these commercial behaviors, like “administer[ing] their platforms in obscure or undisclosed ways that are meant to influence how users behave on the site,” don’t appear to implicate CDA 230 even superficially. Others, like using user-generated content in ways the user did not expect, look more like CDA 230 issues because they involve publication. But I don’t think they really fall under CDA 230 either.

In one particularly disturbing example, Sylvain describes an Instagram user who posted a picture of a rape threat she received—only to have Instagram reuse the picture as an ad. An analogous fact pattern was litigated under CDA 230 in Fraley v. Facebook, Inc. In that case, users sued Facebook for using their profile pictures in ads, claiming a right-of-publicity violation. A court upheld their claim and rejected Facebook’s CDA 230 defense. If that ruling is correct, there should be no CDA 230 issue for the case Sylvain describes.

But there is a deeper question about what substantive law governs in cases like this. The harm comes from a breach of trust between the platform and individual users, the kind of thing usually addressed by consumer protection, privacy, or data protection laws. U.S. law is famously weak in these areas. Compared to other countries, we give internet users few legal tools to control platforms’ use of their data or content. U.S. courts enforce privacy policies and terms of service that would be void in other jurisdictions, and they are stingy with standing or damages for people claiming privacy harms. That’s why smart plaintiffs’ lawyers bring claims like the right-of-publicity tort in Fraley. But the crux of those claims is not a publishing harm of the sort usually addressed by CDA 230. The crux is the user’s lack of control over her own speech or data — what Jack Balkin or Jonathan Zittrain might call an “information fiduciary” issue. Framing cases like these as CDA 230 issues risks losing sight of these other values and legal principles.

Addressing CDA 230 Issues

Sylvain suggests that platforms should lose CDA 230 immunity when they “employ software to make meaning out of their users’ ‘reactions,’ search terms, and browsing activity in order to curate the content” and thereby “enable[] illegal online conduct.” For issues that really do involve illegal content and potential liability for intermediaries—like nonconsensual pornography—this argument is important. At least one case has reviewed a nearly identical argument and rejected it. But Sylvain’s point isn’t to clarify the current law. It’s to work toward what he calls “a more nuanced immunity doctrine.” For that project, the curation argument matters.

I see two potential reasons for stripping platforms of immunity when they “elicit and then algorithmically sort and repurpose” user content. First, a platform might lose immunity because it is not “neutral” enough, given the ways it selects and prioritizes particular material. Second, it could lose immunity because curation efforts give it “knowledge” of unlawful material. Both theories have important analogues in other areas of law—including the Digital Millennium Copyright Act (DMCA), pre-CDA U.S. law, and law from outside the United States—to help us think them through.

Neutrality
All intermediary liability laws have some limit on the platform operations that are immunized—a point at which a platform becomes too engaged in user-generated content and starts being held legally responsible for it. Courts and lawmakers often use words like “neutral” or “passive” to describe immunized platforms. Those words don’t, in my experience, have stable enough meanings to be useful.

For example, the Court of Justice of the European Union has said that only “passive” hosts are immune under EU law. Applying that standard in the leading case, it found Google immune for content in ads, which the company not only organizes and ranks but also ranks based in part on payment. And in a U.S. case, a court said a platform was “neutral” when it engaged in the very kinds of curation that, under Sylvain’s analysis, makes platforms not neutral.

In the internet service provider (ISP) context, neutrality—as in net neutrality—means something very different. Holding ISPs to a “passive conduit” standard makes sense as a technological matter. But that standard doesn’t transfer well to other intermediaries. It would eliminate immunity for topic-specific forums (Disney’s Club Penguin or a subreddit about knitting, for example) or for platforms like Facebook that bar lawful but offensive speech. That seems like the wrong outcome given that most users, seemingly including Sylvain, want platforms to remove this content.

Policymakers could in theory draw a line by saying that, definitionally, a platform that algorithmically curates content is not neutral or immunized. But then what do we do with search engines, which offer algorithmic ranking as their entire value proposition? And how exactly does a no-algorithmic-curation standard apply to social media? As Eric Goldman has pointed out, there is no such thing as neutrality for a platform, like Facebook or Twitter, that hosts user-facing content. Whether it sorts content chronologically, alphabetically, by size, or by some other metric, it unavoidably imposes a hierarchy of some sort.

All of this makes neutrality something of a Rorschach test. It takes on different meanings depending on the values we prioritize. For someone focused on speech rights, neutrality might mean not excluding any legal content, no matter how offensive. For a competition specialist, it might mean honesty and fair competition in ranking search results. Still other concepts of neutrality might emerge if we prioritize competition, copyright, transparency, or, as Sylvain does in this piece, protecting vulnerable groups in society.

One way out of this bind is for the law to get very, very granular—like the DMCA. It has multiple overlapping statutory tests that effectively assess a defendant’s neutrality before awarding immunity. By focusing on just a few values, narrowly defining eligible technologies, and spelling out rules in detail, the DMCA makes it easier to define the line between immunized and non-immunized behavior.

DMCA litigators on both sides hate these granular tests. Maybe that means the law is working as intended. But highly particular tests for immunity present serious tradeoffs. If every intermediary liability question looked like the DMCA, then only companies with armies of lawyers and reserves of cash for litigation and settlement could run platforms. And even they would block user speech or decide not to launch innovative features in the face of legal uncertainty. Detailed rules like the DMCA’s get us back to the problems that motivated Congress to pass the CDA: harm to lawful speech, harm to competition and innovation, and uncertainty about whether platforms could moderate content without incurring liability.

Congress’s goal in CDA 230 was to get away from neutrality tests as a basis for immunity and instead to encourage platforms to curate content. I think Congress was right on this score, and not only for the competition, speech, and “Good Samaritan” reasons identified at the time. As Sylvain’s discussion of intermediary designs suggests, abstract concepts of neutrality do not provide workable answers to real-world platform liability questions.

Knowledge
The other interpretation I see for Sylvain’s argument about curation is that platforms shouldn’t be able to claim immunity if they know about illegal content—and that the tools used for curation bring them ever closer to such knowledge. This factual claim is debatable. Do curation, ranking, and targeting algorithms really provide platforms with meaningful information about legal violations? Whatever the answer, focusing on questions like this can clarify intermediary liability discussions.

Like the neutrality framing, this one is familiar from non-CDA 230 intermediary liability. Many laws around the world, including parts of the DMCA, say that if a platform knows about unlawful content but doesn’t take it down, it loses immunity. These laws lead to litigation about what counts as “knowledge,” and to academic, NGO, and judicial attention to the effects on the internet ecosystem. If a mere allegation or notice to a platform creates culpable knowledge, platforms will err on the side of removing lawful speech. If “knowledge” is an effectively unobtainable legal ideal, on the other hand, platforms won’t have to take down anything.

Some courts and legislatures around the world have addressed this problem by reference to due process. Platforms in Brazil, Chile, Spain, India, and Argentina are, for some or all claims, not considered to know whether a user’s speech is illegal until a court has made that determination. Laws like these often make exceptions for “manifestly” unlawful content that can, in principle, be identified by platforms. This is functionally somewhat similar to CDA 230’s exception for child pornography and other content barred by federal criminal law.

Other models, like the DMCA, use procedural rules to cabin culpable knowledge. Sylvain rightly invokes these as important protections against abuse of notice-and-takedown systems. Claimants must follow a statutorily defined notice process and provide a penalty-of-perjury statement. A DMCA notice that does not comply with the statute’s requirements cannot be used to prove that a platform knows about infringing material. Claimants also accept procedures for accused speakers to formally challenge a removal or to seek penalties for bad-faith removal demands.

A rapidly expanding body of material from the United Nations and regional human rights systems, as well as a widely endorsed civil society standard known as the Manila Principles, spell out additional procedures designed to limit over-removal of lawful speech. Importantly, these include public transparency to allow NGOs and internet users to crowdsource the job of identifying errors by platforms and patterns of abuse by claimants. Several courts around the world have also cited constitutional free expression rights of internet users in rejecting—as Sylvain does—strict liability for platforms.

As Sylvain notes, liability based on knowledge is common in pre-CDA tort law. Platforms differ from print publishers and distributors in important respects. But case law about “analog intermediaries” can provide important guidance, some of it mandatory under the First Amendment. The “actual malice” standard established in New York Times Co. v. Sullivan is an example. Importantly, the Times in that case acted as a platform, not as a publisher of its own reporting. The speech at issue came from paying advertisers, who bought space in the paper to document violence against civil rights protesters. As the court noted in rejecting the Alabama Supreme Court’s defamation judgment, high liability risk “would discourage newspapers from carrying ‘editorial advertisements’ of this type, and so might shut off an important outlet for the promulgation of information and ideas by persons who do not themselves have access to publishing facilities.” Similar considerations apply online.

Knowledge-based standards for platform liability are no panacea. Any concept of culpable knowledge for speech platforms involves tradeoffs of competing values, and not ones I necessarily believe we should make. What the knowledge framing and precedent provide, though, is a set of tools for deliberating more clearly about those tradeoffs.

Conclusion
Talk of platform regulation is in the air. Lawyers can make sense of this chaotic public dialogue by being lawyerly. We can crisply identify harms and parse existing laws. If those laws aren’t adequately protecting important values, including the equality values Sylvain discusses, we can propose specific changes and consider their likely consequences.

At the end of the day, not everyone will agree about policy tradeoffs in intermediary liability—how to balance speech values against dignity and equality values, for example. And not everyone will have the same empirical predictions about what consequences laws are likely to have. But we can get a whole lot closer to agreement than we are now. We can build better shared language and analytic tools, and identify the right questions to ask. Sylvain’s observations and arguments, coupled with tools from existing intermediary liability law, can help us do that.

Note: The author was formerly Associate General Counsel to Google. The Center for Internet and Society (CIS) is a public interest technology law and policy program at Stanford Law School. A list of CIS donors and funding policies is available here.

© 2018, Daphne Keller.


Cite as: Daphne Keller, Toward a Clearer Conversation About Platform Liability, 18-02.c Knight First Amend. Inst. (Apr. 6, 2018), [].