Section 230 of the Communications Decency Act plays a vital role in protecting free speech online. It allows platforms to host and moderate user-generated speech without assuming legal liability. But the online landscape has changed dramatically since its enactment, prompting questions about whether its protections should remain in place.
While Section 230 shouldn’t be considered sacrosanct, its repeal would do little to address the problems lawmakers are trying to solve. In some ways, it would make the problems worse. What’s needed instead is a new legislative approach grounded in structural reform—one that protects users’ privacy, allows users to engage with platforms on their own terms or leave them more easily, and makes platforms more transparent and accountable.
Lawmakers, parents, and advocates have raised serious concerns about harmful content, particularly as it affects minors, as well as the power of large platforms and the inability to hold them accountable for their decisions. Congress is now considering bills that would sunset Section 230. The Senate Committee on Commerce, Science, and Transportation recently held a hearing, at which I testified, on how to move forward.
Framing the discussion about Section 230 as a choice between keeping the provision intact or scrapping it altogether misses a key point: Repeal wouldn’t fix many of the concerns raised.
Here’s why: Many of the harms driving calls for reform are tied to speech the First Amendment already protects. That constraint is not incidental. It shapes what Congress can do. As the Supreme Court recently reaffirmed, platforms’ editorial decisions about whether and how to display content are protected by the Constitution.
Repealing Section 230 would not change the fact that much of the content lawmakers are concerned about—often described as “lawful but awful” speech—would remain protected. The government cannot prohibit that speech, nor can it compel platforms to remove it.
What repeal would do is change the incentives that shape platform behavior, to the detriment of users and public discourse.
Without Section 230, platforms would face increased legal risk for hosting user speech, including defamatory claims alleging wrongdoing by identifiable individuals. The First Amendment doesn’t protect defamation. Truth is a defense to liability, but platforms cannot reliably determine the truth of such claims at scale. They would therefore have strong incentives to remove any content that might give rise to a defamation lawsuit. The result wouldn’t be a safer or meaningfully improved online environment, but one in which lawful, often socially valuable speech is taken down more frequently.
The effects wouldn’t be evenly distributed across platforms. While the largest platforms may be able to absorb the costs of increased liability, smaller or newer platforms may not. Community-driven sites and emerging services would be particularly vulnerable. The likely outcome would be an even more concentrated online landscape, with fewer options for users and less competition.
None of this is to defend the status quo. Few would argue that the digital public sphere is working for Americans or for our democracy. The question is how to respond to a rapidly evolving technology landscape in ways that are both effective and consistent with the First Amendment.
The most productive path forward lies in structural reform that targets the underlying features of the current system that contribute to online harms without creating incentives for platforms to remove lawful speech or giving the largest platforms even more control over online discourse.
Lawmakers could require greater transparency about how platforms operate, including how they collect and use consumer data and how their systems shape what users see. Congress could also establish protections for journalists and researchers who study platforms in the public interest, such as those outlined by my organization in the Knight Institute’s safe harbor proposal.
Platforms succeed in maintaining user engagement in part by relying on the extensive information they gather about their users. Lawmakers could strengthen privacy protections by limiting what information platforms collect about users and how that information is shared, including by restricting the sale of user data. They could also give users more control over their online experience, including by making it easier for them to move their data and connections across platforms or interact with users of competing services.
Congress could enact these reforms independently of Section 230 or condition its protections on compliance with these requirements. Either way, the platforms would have little choice but to respect the privacy of their users, provide greater transparency into how they operate and give users greater control over their online lives—all while preserving the space for public discourse.
Whether to repeal Section 230 is not the right question. The more important question is how to address online harms without undermining free expression.
The better course is to pursue targeted reforms that address real concerns about the online experience while respecting the constitutional limits that govern speech in the U.S.
Repealing Section 230 will not achieve that. Structural reform can.
Nadine Farid Johnson is policy director at the Knight Institute.