In Section 230 of the Communications Decency Act, lawmakers thought they were devising a safe harbor for online providers engaged in self-regulation. The goal was to encourage platforms to “clean up” offensive material online. Yet Section 230’s immunity has been stretched far beyond that purpose to immunize platforms that solicit or deliberately host illegality. As Olivier Sylvain’s thoughtful essay shows, it has been invoked to shield from liability platforms whose architectural choices lead ineluctably to illegal discrimination.

Section 230’s immunity provision has secured important breathing space for innovative new ways to work, speak, and engage with the world. But the law’s overbroad interpretation has been costly to expression and equality, especially for members of traditionally subordinated groups.

This response piece highlights Sylvain's important normative contributions to the debate over Section 230 and offers some practical reinforcement for his reading of the statute. Our central disagreement concerns the way forward: in my view, Congress should revise Section 230's safe harbor so that it applies only to platforms that take reasonable steps to address unlawful activity. I end with thoughts on why it is time for platforms to pair their power with responsibility.

The “Decency” Act and Its Costs to Free Speech and Equality

In the technology world, Section 230 of the Communications Decency Act (CDA) is a kind of sacred cow—an untouchable protection of near-constitutional status. It is, in some circles anyway, credited with enabling the development of the modern internet. As Emma Llansó recently put it, “Section 230 is as important as the First Amendment to protecting free speech online.”

Before we tackle the current debate, it is important to step back to the statute’s history and purpose. The CDA, which was part of the Telecommunications Act of 1996, was not a libertarian enactment. At the time, online pornography was considered the scourge of the age. Senators James Exon and Slade Gorton introduced the CDA to make the internet safe for kids. Besides proposing criminal penalties for the distribution of sexually explicit material online, the Senators underscored the need for private sector help in reducing the volume of noxious material online. In that vein, Representatives Christopher Cox and Ron Wyden offered an amendment to the CDA entitled “Protection for Private Blocking and Screening of Offensive Material.” The Cox-Wyden amendment, codified in Section 230, provided immunity from liability for “Good Samaritan” online service providers that either over- or under-filtered objectionable content.

Twenty years ago, federal lawmakers could not have imagined how essential to modern life the internet would become. The internet was still largely a tool for hobbyists. Nonetheless, Section 230’s authors believed that “if this amazing new thing — the Internet — [was] going to blossom,” companies should not be “punished for trying to keep things clean.” Cox recently noted that “the original purpose of [Section 230] was to help clean up the Internet, not to facilitate people doing bad things on the Internet.” The key to Section 230, explained Wyden, was “making sure that companies in return for that protection — that they wouldn’t be sued indiscriminately — were being responsible in terms of policing their platforms.”

Courts, however, have stretched Section 230’s safe harbor far beyond what its words, context, and purpose support. Attributing the broad interpretation of Section 230 to “First Amendment values [that] drove the CDA,” courts have extended immunity from liability to platforms that republished content knowing it violated the law; solicited illegal content while ensuring that those responsible could not be identified; altered their user interface to ensure that criminals could not be caught; and sold dangerous products.

Granting immunity to platforms that deliberately host, encourage, or facilitate illegal activity would have seemed absurd to the CDA’s drafters. The law’s overbroad interpretation means that platforms have no reason to take down illicit material and that victims have no leverage to insist that they do so. Rebecca Tushnet put it well a decade ago: Section 230 ensures that platforms enjoy “power without responsibility.”

Although Section 230 has been valuable to innovation and expression, it has not been the net boon for free speech that its celebrants imagine. The free expression calculus devised by the law's supporters fails to account for the voices lost to destructive harassment that platforms have encouraged or deliberately tolerated. As ten years of research has shown, cyber mobs and individual harassers shove people offline with sexually threatening and sexually humiliating abuse; the targeted individuals are disproportionately women, women of color, lesbian and trans women, and sexual minorities. The benefits that Section 230's immunity has enabled likely could have been secured at a lesser price.

Discriminatory Design and What to Do About It

Olivier Sylvain wisely urges us to think more broadly about the costs to historically disadvantaged groups wrought by Section 230’s overbroad interpretation. Platforms disadvantage the vulnerable not just through their encouragement of cyber mobs and individual abusers but also through their design choices. As Sylvain argues, conversations about Section 230’s costs to equality should include the ways that a platform’s design can “predictably elicit or even encourage expressive conduct that perpetuates discrimination.”

Sylvain’s focus on discriminatory design deserves the attention of courts and lawmakers. More than twenty years ago, Joel Reidenberg and Lawrence Lessig highlighted code’s role in channeling legal regulation and governance. A platform’s architecture can prevent illegal discrimination, just as it can be designed to protect privacy, expression, property, and due process rights.

As Sylvain has shown, platforms have instead chosen architectures that undermine legal mandates. Airbnb’s site, for instance, asks guests to include real names in their online profiles even though the company knows illegal discrimination is sure to result. As studies have shown, Airbnb guests with distinctively African-American names are 16 percent less likely to be accepted relative to identical guests with distinctively White names. Facebook’s algorithms mine users’ data to create categories from which advertisers choose, including ones that facilitate illegal discrimination in hiring and housing.

Sylvain’s normative argument is compelling. Platforms are by no means “neutral,” no matter how often or loudly tech companies say so. They are not merely publishing others’ content when their carefully devised user interfaces and algorithms damage minorities’ and women’s opportunities. When code enables invidious discrimination, law should be allowed to intervene. Facebook has built an advertising system that inevitably results in fair housing violations. Airbnb’s user interface still requires guests to include their names, which predictably results in housing discrimination. Sylvain is right—platforms should not enjoy immunity from liability for their architectural choices that violate anti-discrimination laws.

The question, of course, is strategy. Do we need to change Section 230 to achieve Sylvain's normative ends? In my view, we do not: properly read, Section 230 does not immunize platforms from liability related to their user interfaces or designs. In such cases, platforms are being sued for their code's illegality, not for their users' illegality or for the platforms' subsequent over- or under-removal of content. What is legally significant is the platform's adoption of a design (such as Facebook's algorithmic manipulation of user data to facilitate ads) that enables illegal discrimination.

Sylvain's argument finds support in recent state and federal enforcement efforts. For instance, in a suit against revenge porn operator Craig Brittain, the Federal Trade Commission (FTC) argued that it was unfair—and a violation of Section 5 of the FTC Act—for Brittain to exploit for financial gain individuals' personal information shared in confidence. The FTC's theory of wrongdoing had roots in earlier decisions involving companies that unfairly induced individuals to betray another's trust. Such inducement theories turn on a defendant's own acts, not on its publication of another's speech, so Section 230 poses no bar. The same is true of claims asserting that a platform's wrongful act is its adoption of a design that induces or enables illegal discrimination.

What if courts are not convinced by this argument? Sylvain urges Congress to maintain the immunity but to create an explicit exception from the safe harbor for civil rights violations. He notes that other exceptions could be added as well, such as those addressing nonconsensual pornography, sex trafficking, or child sexual exploitation. An example of that approach is the Stop Enabling Sex Traffickers Act, which recently passed the Senate by an overwhelming vote and would amend Section 230 to render websites liable for hosting sex trafficking content.

Congress, however, should avoid a piecemeal approach. Carving out exceptions risks leaving out other areas of the law that should not be immunized, and the statutory scheme would require updating each time a new problem seemed to demand it. Legislation built on piece-by-piece exemptions would, most likely, never be updated.

Benjamin Wittes and I have offered a broader though balanced legislative fix. In our view, platforms should enjoy immunity from liability if they can show that their response to unlawful uses of their services is reasonable. Accordingly, Wittes and I have proposed a revision to Section 230(c)(1) as follows (revised language is italicized):

No provider or user of an interactive computer service that takes reasonable steps to prevent or address unlawful uses of its services shall be treated as the publisher or speaker of any information provided by another information content provider in any action arising out of the publication of content provided by that information content provider.

The determination of what constitutes a reasonable standard of care would take into account differences among online entities. Internet service providers (ISPs) and social networks with millions of postings a day cannot plausibly respond to complaints of abuse immediately, or even within a day or two. On the other hand, they may be able to deploy technologies to detect content previously deemed unlawful. The duty of care will evolve as technology improves.

A reasonable standard of care will reduce opportunities for abuse without interfering with the further development of a vibrant internet or unintentionally turning innocent platforms into involuntary insurers for those injured through their sites. Approaching the problem as one of setting an appropriate standard of care more readily allows for differentiation among online actors: a website designed to facilitate mob attacks or enable illegal discrimination can be held to a different rule than a large ISP linking millions to the internet.

Parting Thoughts

We have come to an important inflection point. The public is beginning to understand the extraordinary power that platforms wield over our lives. Consider the strong negative public reaction to journalistic reports of Cambridge Analytica’s mining of Facebook data to manipulate voters, or Facebook’s algorithms allowing advertisers to reach users who “hate Jews,” or YouTube’s video streams that push us to ever more extreme content. Social media companies can no longer hide behind the notion that they are neutral platforms simply publishing people’s musings. Their terms-of-service agreements and content moderation systems determine whether content is seen and heard or muted and blocked. Their algorithms dictate which advertisements are visible to job applicants and home seekers. Their systems act with laser-like precision to target, score, and manipulate each and every one of us.

To return to Rebecca Tushnet’s framing, with power comes responsibility. Law should change to ensure that such power is wielded responsibly. Content intermediaries have moral obligations to their users and others affected by their sites, and companies are beginning to recognize this. As Mark Zuckerberg told CNN, “I’m not sure we shouldn’t be regulated.”

While the internet is special, it is not so fundamentally special that it should be exempt from ordinary legal rules. Online platforms facilitate expression, along with other key life opportunities, but no more so than workplaces, schools, and various other civic institutions, which are also zones of conversation and are not categorically exempted from legal responsibility for operating safely. The law has not destroyed expression in workplaces, homes, and other social venues.

When courts began recognizing claims under Title VII for sexually hostile work environments, employers argued that the cost of liability would force them to close their doors or, at the very least, ruin the camaraderie of their workplaces. That grim prediction has not come to pass. Rather, those spaces are now available to all on equal terms, and brick-and-mortar businesses have more than survived in the face of Title VII liability. The same should be true for networked spaces. We must make policy for the internet and society that we actually have, not the internet and society that we believed we would get twenty years ago.

© 2018, Danielle Keats Citron.


Cite as: Danielle Keats Citron, Section 230's Challenge to Civil Rights and Civil Liberties, 18-02.b Knight First Amend. Inst. (Apr. 6, 2018), https://knightcolumbia.org/content/section-230s-challenge-civil-rights-and-civil-liberties [https://perma.cc/V54Q-D3Z9].