I was really pleased when Taylor [Owen, director of McGill University's Centre for Media, Technology, and Democracy] and Sonja [Solomun, deputy director] reached out to me about this event. As you all know, this is a pivotal moment in the debate about the regulation of new communications platforms. Democracies around the world are poised to adopt new laws meant to address harms ranging from monopoly power to child sexual abuse. These legislative proposals raise difficult questions, including about free speech. Many of the people who’ve thought deeply about these questions are connected in some way to McGill, and to the Centre for Media, Technology, and Democracy in particular. So I was honored to get this invitation, and of course I’m glad to be sharing the stage with Frances Haugen, who’s done as much as anyone to illuminate the ways in which platform policies may be causing real harms online and off.

In the United States, where I work, this debate has been idiosyncratic in a number of respects. Many of the largest platforms are of course based in the United States, and in part for this reason they have a great deal of influence over the legislative process there, especially at the national level. The First Amendment, which gives broad protection to the freedom of speech, has also shaped the debate in the United States, though not always in the ways you might have expected it to. I’ll come back to that in a minute.

Perhaps the most distinctive thing about the social media debate in the United States is that it’s become a major front in the culture war, with Section 230 of the Communications Decency Act now somehow occupying a place alongside affirmative action and abortion and gun rights in the national imagination, and the contest for control of social media platforms attracting political figures, hip-hop moguls, and space-faring billionaires. It might be comical if there weren’t so much at stake.

But the questions of who should own social media platforms, and how these platforms should be operated and regulated, are among the most urgent ones of our age. The principal reason for this is that these platforms are where a great deal of the speech that’s most necessary to our democracies now takes place. Privately owned social media platforms are where we learn about the world, bring attention to injustice, organize political movements, hear from and petition our political leaders, advocate for change, and engage with other citizens. The result is that the pathologies of social media are pathologies of self-government, and even of self-determination.

Around the world, and across the political spectrum, people seem to understand this, intuitively. There is a widely shared sense that our democracies are slipping away from us, and that social media has something to do with this, even if the precise causal links are difficult to pin down, and even if the problems plaguing public discourse plainly have many other causes, some of them with deep roots in history.

Over the next year, the social media debate in the United States is likely to become even more charged, because the U.S. Supreme Court is preparing to weigh in on these issues for the first time. The Court will hear two cases, called Gonzalez and Taamneh, about when social media platforms can be held responsible in court for harms allegedly caused by their recommendation algorithms. The Court is also widely expected to take up two cases in which a coalition of technology companies, called NetChoice, has challenged the constitutionality of social media laws enacted by Florida and Texas.

It would be difficult to overstate how consequential these cases are likely to be. In Gonzalez and Taamneh, the plaintiffs are the estates of people who were killed in ISIS-sponsored terrorist attacks in Paris, Istanbul, and San Bernardino, California. The plaintiffs allege that the platforms allowed ISIS to post videos and other content to communicate its message, radicalize new recruits, and further its mission. They argue that an anti-terrorism statute makes the platforms liable for hosting ISIS’s speech and for the real-world harms that they say were caused by that speech. They also argue that the platforms shouldn’t be permitted to invoke the protection that would otherwise be supplied by Section 230, the intermediary immunity statute, because the platforms knew they were hosting ISIS’s speech, and because they didn’t merely host the speech but also, through their algorithms, recommended it.

I don’t want to get too deep in the weeds here, but the questions presented by these two cases—Gonzalez and Taamneh—are a really big deal. For a quarter century, the platforms have relied on lower court rulings that allowed them to host user speech without having to worry that they could be held civilly liable for it. Now there’s the very real possibility that the Supreme Court will read Section 230 more narrowly, and that the platforms will lose some of the immunity they’ve relied on. What would happen if they did lose it? Well, the platforms might be more zealous about taking down the kind of content the plaintiffs highlight in the Gonzalez and Taamneh cases—content posted by terrorist organizations. But with the threat of legal liability hanging over their heads, the platforms would certainly take down a lot of other speech, too, including speech that is socially valuable. I wonder whether the #MeToo movement, for example, could have gotten off the ground in a legal context in which platforms could be held civilly liable for the speech of their users.

The questions presented by the NetChoice cases are, if anything, even more momentous. Again, these cases involve challenges to social media laws enacted earlier this year by Florida and Texas. These laws restrict social media companies’ ability to take down speech posted on their platforms, and require them to make far-reaching public disclosures about their content-moderation policies and decisions.

One question in these cases is whether the social media companies exercise “editorial judgment” when they decide what content can and can’t be posted on their platforms. This question is important because editorial judgment is protected by the First Amendment. The U.S. Supreme Court has held in the past that newspapers exercise editorial judgment when they decide what stories to publish, and that parade organizers exercise editorial judgment when they decide what floats to allow in their parades. Now the social media platforms are arguing that they exercise editorial judgment when they decide what content will appear on their platforms. On the other side, Florida and Texas contend that the platforms are not expressive actors at all—that they are akin to common carriers, with no First Amendment rights to speak of.

A second question in the NetChoice cases is what it means if the platforms are exercising editorial judgment. Does it mean they can’t be regulated, as the platforms argue? Or are there some kinds of regulations that might be constitutional even if the platforms’ content-moderation policies are protected by the First Amendment? This is a big-deal question, too, because the answer to it determines how much regulatory latitude legislatures in the United States will have.

The issues presented by these cases are genuinely hard. One reason they’re hard—and I’ll close with this point—is that there are perils on both sides.

On one side there is the danger of unrestrained government power, of investing public officials with broad authority to distort, constrain, and censor public discourse. The risks here are real. All around the world today, governments are using laws that criminalize “fake news” to persecute their political opponents and suppress legitimate dissent. If the Trump administration had had misinformation laws available to it, you can be confident that those laws would have been used in similar ways. The Florida and Texas laws, which were enacted by Republican legislatures intent on retaliating against companies that deplatformed Trump, are themselves a kind of warning about the ways in which new censorial power is likely to be deployed.

But on the other side, of course, there is the power of the platforms themselves. A small number of private companies have acquired immense influence over the expressive spaces that are most important to our democracies. In the digital public sphere, they decide who can speak, which ideas get heard, and, perhaps most important, which ideas get traction. The consolidation of editorial power in the hands of a handful of technology companies whose interests are not at all aligned with those of their users, let alone with the interests of the democratic public, also poses a substantial and urgent threat to our societies.

Caught between these two perils—of unrestrained public power on one side, and unaccountable private power on the other—what can we do to ensure that the digital public sphere better serves democracy? Are there regulatory interventions that would promote, rather than undermine, self-government?

The most promising proposals, in my view, are ones that are structural in nature—transparency mandates, interoperability requirements, due process safeguards, antitrust enforcement, new restrictions on what data platforms can collect and how they can use that data, and new protections for journalists and researchers who study the platforms. There are risks here, too, because even structural and content-neutral rules can be applied in discriminatory and retaliatory ways. But it seems to me that structural interventions like these, if crafted carefully, would be the best way for us to reconnect the digital public sphere with the democratic values we need it to serve.

Thanks to all of you for coming, and thanks again to Taylor and his team for the invitation.