Introduction

The fear that terrorist organizations will exploit the reach and affordances of social media platforms to spread propaganda, incite violence, and recruit followers was one of the first—and remains one of the most intense—anxieties about the harms caused by the internet. For years, both political and public conversation about terrorist content online has been dominated by concern that social media platforms are not doing enough to stem the tide of terrorist propaganda or mitigate the ways in which the internet has increased its reach and impact. Concern about freedom of speech—which has increasingly dominated the discourse surrounding the content moderation of other kinds of speech—comes up much more rarely in conversations about online terrorist content. To the extent the First Amendment gets mentioned (and too often it does not), it is frequently seen as hampering the government’s ability to respond effectively to extremely dangerous speech. Law professors Cass Sunstein and Eric Posner and others have argued, for example, that First Amendment protections for terrorist speech need to be revisited in the digital age to adequately protect the public against the threat of terrorist violence.

While the First Amendment constrains the government in how it responds to protected expression, social media companies face no such limitations. As a result, they have faced persistent calls to do what the government cannot: censor vast swathes of terrorism-related content. Initially, social media companies resisted calls to aggressively remove all terrorism-related content from their sites. In 2008, for example, YouTube responded to a letter from Senator Joe Lieberman that urged the platform to remove all videos mentioning or featuring Islamic terrorist organizations by politely declining the request, stating that “YouTube encourages free speech and defends everyone’s right to express unpopular points of view.” But things have changed. In the years since, platforms have taken an increasingly aggressive approach to moderating terrorist content. And even if there are still regular complaints about how effectively they enforce their rules, almost all platforms have created broad prohibitions against content related to terrorism or designated terrorist groups.

From the vantage point of those concerned about the rise of terrorist propaganda and recruitment online, this might be considered a success story. But much of the debate about online terrorist content has always assumed that it is easy to discern which speech related to terrorism should be removed and which should not, and that once the content to be removed is defined there is little collateral cost of its removal. It has assumed, in other words, that the line between harmful speech related to terrorist organizations and all other speech is easy to discern.

The Supreme Court made a similar assumption in Holder v. Humanitarian Law Project when it held that a federal law that criminalized “material support” to terrorist organizations could constitutionally be applied to ban even the provision of nonviolent speech to designated groups. In rejecting a First Amendment challenge brought by human rights organizations that wanted to train designated terrorist organizations on how to peacefully pursue their political aims, “[f]or the first time in its history, the Court upheld the criminalization of speech advocating only nonviolent, lawful ends on the ground that such speech might unintentionally assist a third party in criminal wrongdoing.” To justify this historic ruling, the Humanitarian Law Project Court relied in large part on the notion that speech directed at foreign terrorist organizations was a matter of “foreign affairs and national security,” and that this made it separate and different from domestic political debate. For this reason, said the Court, different First Amendment rules could apply.

But the reality is messier than the Court assumed. As I show in what follows, the material support law, and the broader discourse about the dangers of online terrorist speech, have in fact led platforms to moderate content on their services in ways that have all kinds of impacts on domestic political debate. As social media platforms have ramped up their efforts to remove terrorist content from their sites, they have also erased important documentation of human rights abuses, stifled political discourse, and discriminated against Arabic-language content. This collateral damage of their efforts to police content related to terrorism has profoundly influenced political debate not only overseas but also in the United States. While platforms’ overbroad approach has a number of causes, fear of potential liability for providing material support—that is, for prosecution under the law upheld in Humanitarian Law Project—is clearly one reason.

First Amendment doctrine normally takes these kinds of collateral harms to freedom of expression very seriously. Laws that criminalize only unprotected speech can still be found unconstitutional if they appear likely to deter, or chill, protected expression by incentivizing risk-averse actors to steer well clear of the unlawful zone. But the Court in Humanitarian Law Project paid no attention to such potential impacts of the material support law. The Court appeared to assume that restrictions on communication with foreign terrorist organizations would have no impact whatsoever on domestic discourse. If this was ever true (which is doubtful), it clearly no longer holds today, as the platforms’ moderation of terrorist content vividly demonstrates.

This paper sheds light on the long shadow that the material support law casts over online discourse to show that the factual assumptions underpinning the Court’s decision in Humanitarian Law Project are wrong. It proceeds in three parts. Part I describes the Court’s decision in Humanitarian Law Project and the specious foreign/domestic distinction that it relied on. Part II illustrates how this has impacted social media content moderation. Part III uses the example of how social media platforms have moderated content about Palestine to show that the line between foreign discourse and domestic debate is illusory. If Humanitarian Law Project was not wrong the day it was decided, its impact on online discourse suggests it is wrong today.

I. The Law’s Line Between Foreign and Domestic

Criminalizing even peaceful speech to, and association with, foreign terrorist organizations would seem to be at odds with foundational First Amendment precedents that protect, in all but the most limited circumstances, people’s rights to speak and associate with even those groups that the government determines are dangerous. This Part explains how the Court nonetheless upheld the federal material support law as applied to even this kind of peaceful speech and association in Humanitarian Law Project by erecting a firm but ultimately illusory line between the spheres of foreign and domestic discourse.

The material support law upheld in Humanitarian Law Project makes it a crime to “knowingly provide[] material support or resources to a foreign terrorist organization.” The definition of “material support or resources” is extremely broad and includes any property, tangible or intangible, or any service. To be convicted under the law, a person must know that the organization they are providing support to is a designated terrorist organization, but there is no requirement that they know the intended use of the property or service they provide, much less that it will be used to further unlawful purposes. This broad prohibition was intended to prevent the provision of aid to designated groups “under any circumstances irrespective of the provider’s intent or belief about how the recipient will use it.” As then-Solicitor General Elena Kagan described the theory of the law at oral argument in Humanitarian Law Project, “Hezbollah builds bombs. Hezbollah also builds homes. What Congress decided was when you help Hezbollah build homes, you are also helping Hezbollah build bombs.”

This law “sit[s] at the heart of the Justice Department’s terrorist prosecution efforts” and has been the most common charge in international terrorism cases. The potential penalties are heavy, including up to 20 years’ imprisonment, or life imprisonment if the material support results in the death of any person. The law has been criticized for chilling humanitarian work, with aid organizations reporting a “dramatic, negative effect on the provision of humanitarian assistance in conflict-stricken regions” due to fear of liability. But many, if not most, applications of this provision raise no constitutional issues. Giving someone money, food, shelter, or (as in Solicitor General Kagan’s example) building materials receives no special constitutional protection.

But speaking and associating with others does receive special protection, and the material support statute also prohibits this kind of behavior. The definition of material support includes “training” and “expert advice or assistance”—that is, certain kinds of speech. This was controversial when it was proposed and generated strong critiques of its constitutionality. Gregory Nojeim, legislative counsel for the American Civil Liberties Union, told the House Committee on the Judiciary that the law “smack[s] of McCarthyism at its worst.”

Indeed, the law seems to fly in the face of decisions that form the very basis of modern First Amendment doctrine. These cases hold that the government cannot ban speech even (or perhaps especially) when it is speech to entities the government does not like. Almost every First Amendment student will read Justice Brandeis’ searing opinion in Whitney v. California, which laments the criminalization of not “the practice of criminal syndicalism, nor even directly . . . the preaching of it, but association with those who propose to preach it,” and his insistence that it is the “command of the Constitution” that “[i]f there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”

Brandeis’ words brought little aid to Anita Whitney herself: The Supreme Court upheld her conviction for criminal syndicalism based on her work with the Communist Labor Party of America. But students will also learn that this result is now considered a stain on First Amendment history. We teach Brandeis’ opinion because his philosophy became embedded in First Amendment doctrine. The state cannot even punish advocacy of violence unless “such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action,” declares the famously high standard from Brandenburg. Assembly or association with an organization that advocates unlawful acts is protected by the First Amendment, as the Court’s later cases dealing with attempts to criminalize association with the Communist Party confirm. These cases and the principles they stand for are, in the words of David Cole, “the linchpin of the First Amendment's protection of political expression.” The strong protections they erect are emblematic of America’s First Amendment exceptionalism. The material support law, by contrast, criminalizes speech to certain designated disfavored groups regardless of whether it is unprotected incitement and regardless of whether the speaker intends to further those groups’ unlawful aims. Nevertheless, when the issue of the material support law’s constitutionality reached the Court in 2010, the Court upheld the application of the statute to even some forms of peaceful speech and association.

The plaintiffs in Humanitarian Law Project were individuals and human rights organizations who had been—and wanted to continue—working with the Kurdistan Workers’ Party (PKK) and the Liberation Tigers of Tamil Eelam (LTTE), both of which had been designated as foreign terrorist organizations (FTOs). This work involved encouraging the FTOs to resolve their disputes through peaceful means and teaching them how to do so, advising them on how to petition various bodies like the United Nations for relief, training them on how to engage in political advocacy, and other activities that the plaintiffs worried would fall under the material support law’s prohibition on “training” and “expert advice or assistance.” That is, the plaintiffs had no interest or intention in furthering the illegal or violent objectives of these FTOs—quite the opposite. They wanted to encourage these organizations to pursue their goals through nonviolent, lawful means. This kind of peaceful speech and association, even with disfavored groups, would seem to be on its face exactly the kind of speech that settled First Amendment doctrine made clear should be protected.

Chief Justice Roberts’ opinion for the Court acknowledged that because the plaintiffs’ speech did not fall within any of the established exceptions from First Amendment coverage, applying the material support statute in these circumstances would have to satisfy “demanding” scrutiny. Nevertheless, he went on to hold that the material support statute could constitutionally be applied to plaintiffs’ proposed activities. This was because “[e]veryone agrees that the Government’s interest in combating terrorism is an urgent objective of the highest order,” and it was permissible for Congress to conclude that “aiding a foreign terrorist organization’s lawful activity promotes the terrorist organization as a whole.”

Many have critiqued the decision and its apparent inconsistency with the Court’s precedents. As Justice Breyer’s dissent points out, the majority’s conclusion seems inconsistent with many of the Court’s foundational decisions, including the ordinarily stringent protection for political speech, the high bar for prosecuting even incitement to unlawful action, and the central principle that association for peaceful purposes can never be made a crime. Quoting Brandeis’ opinion in Whitney, Breyer asked how one could possibly explain the majority’s decision to people “who live, as we do, in a nation committed to the resolution of disputes through ‘deliberative forces’?” How, then, did the majority explain its decision?

The Court’s decision to uphold the law was clearly influenced by the national security context of the case. Throughout the opinion, the Court was explicit that Congress and the Executive are entitled to deference in their assessment of the necessity of the material support statute because terrorism “implicates sensitive and weighty interests of national security and foreign affairs.” Therefore, “Congress and the Executive are uniquely positioned to make principled distinctions between activities that will further terrorist conduct and undermine United States foreign policy, and those that will not.” But even the national security context cannot fully explain the decision in Humanitarian Law Project. After all, as the dissent points out, in other cases, the Court had made clear that Congress and the Executive’s claims of authority and expertise in matters of national security and foreign affairs “do not automatically trump the Court’s own obligation to secure the protection that the Constitution grants to individuals.” Were it otherwise, the government could simply do an end-run around the First Amendment to suppress any speech it deemed contrary to the nation’s national security interests.

In his majority opinion, Roberts seemed anxious to limit the reach of the decision so as not to appear to overturn any long-established First Amendment principles. Indeed, in an apparent acknowledgment of the difficulty of reconciling his decision with the rest of the First Amendment’s doctrinal architecture, the opinion rejected the idea that all applications of the material support statute would be constitutional. “In particular,” Roberts wrote, “we in no way suggest that a regulation of independent speech [that is, speech not done in coordination with designated FTOs] would pass constitutional muster.” This sentence appeared designed to reassure us that, notwithstanding its victory in the case, the government could not, after Humanitarian Law Project, lock people up for expressing what it deemed to be “dangerous” views, such as general support for terrorist organizations or their points of view. And as for First Amendment protections for freedom of association, Roberts also wrote that the decision should not be read to “suggest that Congress could extend the same prohibition on material support at issue here to domestic organizations.”

This last statement, made without elaboration or citation, suggests a constitutional distinction between foreign and domestic groups. It is not only that the Court will grant the government more deference in matters of national security and foreign affairs, but that the government simply cannot restrict speech and association in a domestic context that it could in a foreign context. The implication seems to be that there is a clear dividing line between the two—that the ordinary pre-Humanitarian Law Project First Amendment rules protecting speech and association with groups remained intact when those groups were domestic, but the foreign nature of designated foreign terrorist organizations justified a different rule. The opinion does not explain, however, why this would be the case. And the distinction, which the Court apparently finds intuitive, is sound in neither principle nor practice.

David Cole has tried to reconstruct the Court’s potential (unspoken) “reasons for directing special skepticism at the regulation of domestic speech and association” as compared with speech and association with foreign groups. The most compelling of these is that the government has greater opportunity to monitor domestic organizations’ conduct, and therefore “[i]t can reduce the likelihood that such groups use their resources for terrorist activity without restricting speech or association.” This kind of factual distinction may indeed be relevant to questions about whether a law is narrowly tailored or the least restrictive means for achieving the government’s aims, but it is exactly the kind of argument that the Court normally strictly scrutinizes rather than simply leaving it to readers to hypothesize after the fact. After all, there are reasons to think the opposite might be true—because of fewer constitutional constraints on governmental surveillance of foreign groups, the government might instead have more capacity to monitor foreign rather than domestic groups.

Cole also suggests that the distinction might be justified by the First Amendment’s “democratic purpose.” He suggests that “[i]t is virtually impossible to imagine meaningful self-government if the state can prohibit speech in coordination with domestic political groups it disfavors, but restrictions on speech with foreign organizations arguably pose a less direct challenge to the mechanisms of democracy.” Further, “[t]he risk of improperly motivated censorship is arguably greater with respect to domestic than foreign political groups.”

Cole’s heart was hardly in these arguments—he represented the plaintiffs before the Supreme Court in Humanitarian Law Project, after all. But these purported rationales for a constitutional distinction between foreign and domestic groups are not convincing. There is no categorical reason why speech to, with, and from foreigners should be any less helpful to Americans trying to obtain the truth about the world or exercise self-governance. In fact, on many topics in a globalized world, foreign voices may be the most informative, and the freedom of Americans to speak and associate with foreigners therefore directly implicates their own ability to exercise self-governance. In other contexts, the Supreme Court has insisted that “[t]he inherent worth of the speech in terms of its capacity for informing the public does not depend upon the identity of its source.” To the extent that speech to and from foreigners is perceived as dangerous because foreigners may have goals and interests that conflict with America’s, ordinarily the remedy for such dangers is “more speech, not enforced silence.” As Alexander Meiklejohn, the most influential theorist of the self-governance theory of free speech, argued, it is unflatteringly paternalistic to think foreign speech is uniquely dangerous:

Why may we not hear what these [people] from other countries, other systems of government, have to say? . . . Do We, the People of the United States, wish to be thus mentally “protected”? To say that would seem to be an admission that we are intellectually and morally unfit to play our part in what Justice Holmes has called the “experiment” of self-government.

But I can set aside the arguments about the value to Americans of speech with foreigners for now because the point I want to emphasize here is different: Even if drawing a line between aid to foreign and domestic groups were normatively justifiable, it is practically impossible. First Amendment case law has long recognized that there is no neat distinction between foreign and domestic discourse. As the Court has explained, interfering with the dissemination of foreign speech not only affects the foreign speaker but also infringes on the right of the domestic listener to receive information and ideas. These listeners’ rights are as vital to a functioning system of freedom of expression as speakers’ rights, as Justice Brennan wrote in his concurring opinion in Lamont v. Postmaster General, because “[i]t would be a barren marketplace of ideas that had only sellers and no buyers.”

Thus, First Amendment rights have never been simply delineated by geographical lines. And even if there was once a time when national borders demarcated where a country’s public discourse began and ended, the rise of the internet has surely made this a thing of the past. The internet is radically transnational. As Jack Balkin observes, “what people do on the Internet transcends the nation state; they participate in discussions, debate, and collective activity that does not respect national borders.” It is true that the online world is not as immune from being forced to respect local jurisdictions and national borders as some have argued. Nevertheless, the internet enables much more cross-pollination of foreign and domestic speech and ideas than offline media—when a person posts online, they can be heard and replied to from almost anywhere in the world instantaneously. While some platforms have adopted country-specific content rules that apply in a single jurisdiction in order to comply with local regulations, these are the exception. By and large, platforms insist on having a single global set of content standards because it is technically and commercially simpler. As a result, when people use social media, they are often speaking and listening to people from all around the world. The participants in the replies to a single post can span multiple countries and legal jurisdictions, and the conversation can be read in many more. And all this discourse takes place on (largely) American-owned social media platforms, whose community standards set the rules for these global conversations.

The material support law casts a long shadow over all this expressive activity. It may not be immediately obvious why this would be the case. The majority in Humanitarian Law Project seemed to understand itself as creating a very narrow exception to the First Amendment’s protections. It repeatedly emphasized that the law did not place “any restriction on independent advocacy, or indeed any activities not directed to, coordinated with, or controlled by foreign terrorist groups.” Americans therefore remain free to talk independently about, and even praise, foreign terrorist organizations. They also have the right to receive the speech of people the government labels as terrorists, unless it falls within narrow categorical exceptions such as the exception carved out for incitement under Brandenburg v. Ohio or the exception for speech integral to crimes such as conspiracy. It is a common misconception that the vast majority of speech related to terrorism is necessarily unprotected—in fact, the opposite is true. For example, the Texas solicitor general told the Supreme Court during oral argument last term that a Texas law prohibiting platforms from taking down certain kinds of content would not impact their ability to take down terrorist content because Texas’ law did not apply to “illegal” content. The solicitor general evidently assumed that most terrorism-related content was illegal, but, as Justice Kavanaugh jumped in to remind him, this is simply not the case.

Even so, and despite the Court’s insistence in Humanitarian Law Project that the material support law’s ambit is narrowly confined, the law has nevertheless had a dramatic impact on online discourse because of the way social media companies have interpreted it. Recall the broad drafting of the statute, which prohibits providing “material support,” defined to include any service or property, including communications equipment. A plain reading of this prohibition could conceivably cover allowing a designated FTO to use a social media platform. Of course, the mens rea requirement would still need to be satisfied—but recall, too, how low the mens rea standard is under the law. While the defendant must know that the service is being provided to an FTO, they do not need to know that it will, or intend for it to, further any unlawful purpose.

To be sure, applying the material support law to a social media platform that merely fails to remove FTO-related accounts from its service would be a far cry from even Humanitarian Law Project, where the plaintiffs sought to provide individualized training to FTOs. Platforms, by contrast, would simply be letting FTOs use their generally available services to engage in what will, by and large, be protected expression. Just two terms ago, in Twitter v. Taamneh, the Court held that these circumstances would not satisfy the elements of a different statute that prohibited aiding and abetting an act of international terrorism. But the Court based its holding primarily on the fact that the plaintiffs had failed to show that the platforms had any intention to assist the FTO (in that case, ISIS), nor had they taken any affirmative steps to do so. That, of course, is exactly the kind of mens rea that the material support statute does not require. Taamneh therefore provides little guidance (or reassurance) as to how the material support statute might apply to platforms that merely host FTO accounts.

That legal issue—how the material support law applies to FTOs’ use of social media services—has never been litigated. Some have argued that the plain reading of the statute supports its application to social media platforms that refuse to take down “terrorist information and advocacy.” Indeed, writing two years after Humanitarian Law Project, Cole agreed that the problematically broad reasoning of the majority raised the possibility that social media platforms could be found guilty of material support for simply allowing designated FTOs to maintain accounts on their services. The ACLU’s legislative counsel was similarly concerned.

Government actors have continued to cultivate the idea that the law might apply very broadly to criminalize the ordinary commercial decision-making of tech companies. They have failed to clarify the reach of the material support law, and adopted sweeping interpretations of international sanctions related to terrorism in other contexts. In 2021, for example, the Department of Justice shut down foreign websites it alleged were disinformation campaigns “disguised as news organizations or media outlets” on the basis that they were owned or controlled by sanctioned entities and, therefore, the U.S. servers that hosted them would violate sanctions by providing them “website and domain services.” In 2022, the Treasury Department’s Office of Foreign Assets Control (OFAC) told a nonprofit organization, the Foundation for Global Political Exchange, that allowing designated individuals on other terrorism sanctions lists to appear and talk at their events would be “the provision of a platform for them to speak” which OFAC “considered to be a service.” OFAC later revoked this interpretation as part of a settlement of a lawsuit brought by the Knight First Amendment Institute on behalf of the Foundation and reaffirmed that sanctions are not intended to “restrict the exchange of information or informational materials.” But it is not hard to understand why these precedents could make a social media platform nervous that the government might one day argue that allowing sanctioned entities to maintain social media accounts is “the provision of a platform” and thus a prohibited “service” under the material support law.

There is, to date, no public indication that the government intends to prosecute any social media platform for material support of terrorism. But against this background, it may not need to. As Robert Chesney warned, it would be “too cavalier” to conclude that the impact of the material support law is marginal based only on the kinds of prosecutions the government has brought so far. Such a view “fail[s] to account for the substantial impact that the mere prospect of prosecution can have. That the statute has not, or at least has not often, been used in [any particular way] does not mean that it cannot be.” To be clear—any such applications should be found unconstitutional and would represent a dramatic expansion of even Humanitarian Law Project to generally-available services that facilitate protected expression. But the technical legal argument does not matter if sufficient ambiguity in practice creates enough incentive for platforms to err on the side of caution. Indeed, this is the quintessential chilling effect of broadly worded laws that First Amendment doctrine has long been concerned about. The next Part shows how this exact dynamic plays out in the context of social media platforms.

II. Platforms’ Moderation in the Shadow of the Material Support Law

Platforms take down terrorist and terrorism-related content for many reasons. First and foremost, platforms are commercial entities and, in general, such content is not good for business. The vast majority of users do not want to see violent content, or content calling for violence, when they open their social media feeds in the morning or do one last refresh at night. Loud and persistent political pressure and public criticism of terrorists’ use of platforms also create political and reputational incentives to moderate such content. But these voluntary commercial and reputational incentives cannot fully explain platforms’ broad and blunt approach to removal. As this Part shows, the threat of legal liability under the material support law plays an important role in shaping how platforms moderate terrorism-related content. Risk-averse intermediaries will be particularly susceptible to government pressure when their legal obligations are vague. This is compounded in the content moderation context by the practical challenges of effectively moderating content at scale, which leads platforms to rely on automated tools that necessarily lack nuance. To be sure, platforms’ approach is somewhat overdetermined, given the variety of pressures incentivizing them to remove content related to terrorism—but the material support law casts a long shadow over their approach and clearly informs platform decision-making.

In general, when the scope of a criminal law that proscribes certain speech is unclear, we would hypothesize that platforms will err on the side of caution and over-remove content in order to avoid even the specter of legal liability. This is exactly what the First Amendment doctrine of chilling effects explicitly expects and protects against—the natural incentive created by broad or vague laws to deter lawful expression or its distribution. Scholars have long argued that these kinds of chilling effects are likely to affect platforms’ content moderation particularly severely. Seth Kreimer calls platforms “the weakest link” in protecting online expression because they have limited incentive to accept the risk of sanctions rather than just engaging in prophylactic censorship of their users’ speech. Kreimer thought the broad material support law was a quintessential example of a law that would create such an incentive and predicted, in 2006, that “[a] risk-averse Internet intermediary would not need to descend into paranoia to conclude that the most prudent course would be to proactively censor messages or links that might prove problematic, and to respond to official ‘requests’ with alacrity.”

Testing this hypothesis in practice is somewhat difficult, given the opacity of content moderation in general. But what information there is tends to confirm that this is exactly what happens.

The independent impact of the threat of legal liability is perhaps most visible with respect to those platforms that style themselves as “free speech” alternatives to the “overly censorious” mainstream social media platforms—that is, those platforms whose very brand identity depends on the absence of content moderation. Even those platforms remove terrorist content. Elon Musk’s X, which famously now styles itself as a bastion of free speech, prohibits content “affiliate[d] with or promot[ing] the activities” of terrorist organizations. Telegram, which notoriously refuses to cooperate with law enforcement requests (or at least did until France locked up its CEO), similarly notes that it “block[s] terrorist (e.g. ISIS-related) bots and channels.” In some instances, the influence of perceived legal obligations on these policies is explicit. Rumble, the video-sharing platform that explicitly markets itself as the haven for those moderated by other platforms, bans “content or material that … [p]romotes or supports entities and/or persons designated by either the Canadian or United States government as terrorists or terrorist organizations.” Truth Social, the Trump-owned platform that intended to be an “impenetrable beachhead of free speech,” states in “TRUTH #1” that “[t]he law requires us to exclude offensive content from our platforms[, including] content posted by or on behalf of terrorist organizations.” The consistency with which even those platforms that generally denigrate content moderation as “censorship” and refuse to remove violent or hateful material in other contexts remove the content of designated FTOs is best explained by a common understanding that this is legally required.

The impact of the material support law has been visible in certain individual cases as well, including the video conferencing platform Zoom’s abrupt shutdown of several events that included Leila Khaled, a radical activist associated with the FTO the Popular Front for the Liberation of Palestine. Representatives from the company admitted that “the current state of the law is not horribly clear” and that they were uncertain “whether allowing Leila Khaled on our platform would be the provision of ‘services’” under the material support law. But given the “risk that Zoom could be under legal scrutiny,” they decided to cancel the events anyway. They acknowledged that liability in such a case would depend on a court adopting a very aggressive reading of the law, but decided that, even so, the risk was not worth taking. Facebook and YouTube also removed a livestream of the event with Leila Khaled from their platforms, but they cited violations of their own content policies rather than potential legal liability.

The decision by Facebook and YouTube to justify their removal decisions by reference to their own speech policies, rather than federal law, is representative of the usual approach of the major platforms. Zoom was remarkably candid about their assessment of potential legal liability, but in general companies tend to be much less explicit about how they understand their legal obligations, making the law’s influence less visible. Meta, for example, has generally insisted that it cannot be specific about its approach to moderating terrorist content without endangering its employees or helping banned entities circumvent its policies. The authors of a human rights due diligence report that Meta commissioned on its impacts in Israel and Palestine recommended that the company publicly state its understanding of its legal obligations under the material support law and “[f]und public research into the optimal relationship between legally required counterterrorism obligations and the policies and practices of social media platforms.” Meta declined to do so, however, and somewhat brazenly replied that while “[l]egal advice is an important foundation to our [Dangerous Organizations and Individuals] Policy,” it would not release that advice because “[a]s with other legal advice, we do not direct or fund legal guidance for other companies.” This is an unusually clear statement of something that otherwise can only be inferred: It is not in platforms’ interest to be explicit about their legal obligations in the face of an ambiguous law.

Nor is it in the government’s interest to clear up the ambiguity these companies clearly face when it comes to how the material support law applies to social media platforms. Anticipatory compliance by platforms is a more efficient way for the government to achieve its aims of suppressing certain kinds of speech than a clearly spelled out regulatory regime—that is, the chilling effects are the point. As a result, it is very difficult to know what the platforms think, or reasonably should think, about the application of the material support laws to their content moderation decision-making.

It is nevertheless clear from the platforms’ content moderation policies and practices that the threat of legal liability under the material support law significantly influences their operations. For example, many of the major platforms use specific lists of dangerous or terrorist organizations that are subject to removal under their rules—lists that often closely track the list of designated foreign terrorist organizations maintained by the U.S. government. In many cases, these platforms’ rules sweep more broadly than even the broadest theory of liability under the material support law could justify. Platforms often prohibit, for instance, even independent praise of terrorist groups—exactly the kind of speech that the Humanitarian Law Project majority went out of its way to insist would not be covered by the law. But platforms doing more to repress terrorist speech than the government mandates would be perfectly consistent with a strategy of government appeasement in the face of threatened liability.
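To make the structure of these policies concrete, the following sketch models a rule that reaches beyond the government’s designation list and beyond coordinated support. It is purely illustrative: the entity names, the “praise” markers, and the matching logic are invented for this example and do not describe any platform’s actual list or tooling.

```python
# Illustrative sketch only. The lists, "praise" markers, and matching logic
# below are hypothetical and do not reflect any platform's actual policy.

GOVERNMENT_FTO_LIST = {"example designated group"}  # stand-in for the U.S. designation list

# A platform's internal list is often broader than the government's.
PLATFORM_BANNED_ENTITIES = GOVERNMENT_FTO_LIST | {"example undesignated militia"}

# Crude signals of "praise," far blunter than any real classifier.
PRAISE_MARKERS = ("support", "praise", "glory to")


def violates_policy(post_text: str) -> bool:
    """Flag a post that mentions a banned entity approvingly, even as independent speech."""
    text = post_text.lower()
    mentions_entity = any(entity in text for entity in PLATFORM_BANNED_ENTITIES)
    expresses_praise = any(marker in text for marker in PRAISE_MARKERS)
    # The rule reaches any approving mention, regardless of whether the speaker
    # is coordinating with the group, which is broader than the statute requires.
    return mentions_entity and expresses_praise


print(violates_policy("I support example designated group"))         # True (independent praise)
print(violates_policy("News analysis of example designated group"))  # False
```

The point of the sketch is only that a list-plus-praise rule of this shape mechanically captures independent advocacy of the sort Humanitarian Law Project said the law does not reach.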

The impact of legal ambiguity is compounded by the practical challenges of moderating content at scale. Given the sheer volume of content posted to social media platforms every day, platforms rely on automated tools to perform content moderation at the required speed and scale. These tools are improving but remain deeply imperfect. They generally lack the ability to judge the context of a particular post and rely on blunt signals to classify content, meaning they are deployed with full knowledge that they will often make mistakes. At times of crisis—say, for example, in the aftermath of a terrorist attack—when there is a large influx of both violating and non-violating content, companies may intentionally lower the confidence threshold at which their moderation tools remove content in order to deal with the influx of material to review. This is, again, exactly what you would expect a risk-averse intermediary to do. The incentive to consciously over-remove even nonviolating content to avoid accidentally leaving up even a small amount of violating content is high when there is the possibility of criminal liability.
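A minimal sketch can illustrate the mechanics of this kind of threshold adjustment. It is hypothetical: the scores, threshold values, and example posts are invented for illustration and are not drawn from any platform’s actual moderation system.

```python
# Illustrative sketch only. Scores, thresholds, and example posts are hypothetical.

NORMAL_THRESHOLD = 0.90  # remove only when the model is quite confident
CRISIS_THRESHOLD = 0.60  # a lower bar, hypothetically adopted during a surge of content


def should_remove(terrorism_score: float, crisis_mode: bool) -> bool:
    """Remove a post if its model-assigned score clears the currently active threshold."""
    threshold = CRISIS_THRESHOLD if crisis_mode else NORMAL_THRESHOLD
    return terrorism_score >= threshold


# Hypothetical posts with scores a classifier might assign from text and imagery.
posts = [
    ("FTO propaganda video", 0.97),
    ("News report on a terrorist attack", 0.72),
    ("Human rights documentation of abuses", 0.65),
    ("Post mentioning a mosque", 0.40),
]

for description, score in posts:
    print(f"{description}: normal={should_remove(score, False)}, crisis={should_remove(score, True)}")

# Under the normal threshold only the propaganda video is removed; once the
# threshold drops in crisis mode, the news report and the human rights
# documentation are swept in as well.
```

Lowering the threshold trades precision for recall: it is the cheapest way for a risk-averse intermediary to ensure little violating content slips through, and the cost is borne by the protected speech removed along with it.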

These factors—vague legal obligations, risk-averse intermediaries, and clumsy automated moderation tools—when combined have resulted time and again in the over-moderation of valuable speech in platforms’ attempts to remove terrorism-related speech from their services.

Civil society has for years been drawing attention to platforms’ inability to “consistently differentiate activism, counter-speech, and satire about extremism from extremism itself,” with the result that “marginalized users are the ones who pay for those mistakes.” Anecdata and examples of errors abound. YouTube’s introduction of a new algorithmic moderation tool to remove terrorist propaganda resulted in the deletion of hundreds of thousands of videos documenting human rights abuses in places like Syria. Facebook once removed references to Al-Aqsa Mosque—one of Islam’s holiest sites—during a flare-up in tensions and violence in Israel and Palestine (including at the mosque itself) in 2021 because references to “Al-Aqsa” were mistakenly classified as references to a designated terrorist organization. Facebook called this an “enforcement error.” It has also mistakenly removed news coverage of terrorism, because automated tools do not always differentiate between content shared for the purpose of glorification and praise, and content shared for legitimate reporting purposes. On another occasion, Instagram was auto-translating the word “Palestinian” in users’ bios to read “Palestinian terrorists”—a particularly glaring demonstration of the biases that automated tools can perpetuate—and hiding comments that featured nothing more than Palestinian flag emojis.

Scattered examples like these show the very real costs to free speech from broad and blunt enforcement of platforms’ policies relating to terrorist content. Of course, the scale of online content means it will always be possible to find individual examples of mistakes, but it is much harder to get insight into systemic biases. The pile of civil society reports raising alarm about such potential biases, however, is large and growing. Indeed, long-standing public concern that moderation of terrorist content exhibited biases against Arabic-language content, and in particular Palestinian content, led Meta’s Oversight Board to recommend that the company commission an independent review of its moderation in the region. This review concluded that while there was no intentional bias at Meta against any particular racial or ethnic groups, there were “instances of unintentional bias” that “lead to different human rights impacts on Palestinian and Arabic speaking users.” The review attributed this bias at least in part to Meta’s interpretation of its compliance obligations under the material support laws.

There are important practical ramifications of the fact that the impact of the material support law is largely indirect, working through chilling effects and anticipatory compliance rather than actual enforcement actions. When platforms cite their own community standards rather than the law as the reason they remove content, they obscure the role that the law plays in motivating their actions. Platforms’ proactive compliance also obviates the need for the government to formally order platforms to do anything (indeed, that is the point). This is a problem not only because it means there is no transparency or accountability for the law’s role in driving platforms’ broad interpretations, but also because the absence of a formal legal order makes those interpretations harder to challenge. Users do not have a First Amendment claim against platforms that take down their content—indeed, platforms themselves have a First Amendment right to engage in such content moderation. And a user cannot bring a First Amendment claim against the government for an order that does not exist. The Supreme Court also recently set a very high bar for bringing claims based on informal government pressure on platforms where the platforms have their own independent incentives to moderate content, as they do in this context.

As a result, the influence that the material support law has on platforms’ content moderation is opaque, even if obviously significant, and a product of indirect and structural effects that are difficult to challenge legally. As the next Part shows, this is not merely a problem for “foreign” speech; its effects penetrate right into the heart of domestic political debate.

III. Foreign Content and Domestic Debate

Platforms’ often ham-fisted approach to moderating terrorist content clearly impacts public discourse abroad, where it can limit important political debate. As Jillian York observes, people living in areas where designated groups may also have a political role “need to be able to discuss those groups with nuance, and [platform] policy doesn’t allow for that.” But how platforms moderate this “foreign” content has also had profound impacts on “domestic” debate. As Part I discussed, the Court appeared to assume in Humanitarian Law Project that the spheres of foreign and domestic discourse were entirely separate and, therefore, upheld the material support statute’s application to nonviolent speech on the basis that it only impacted foreign discourse. But the reality of how platforms moderate terrorist content in the shadow of the material support law, described in Part II, belies this assumption. My focus in what follows will be on one example of the inseparability of foreign and domestic discourse—moderation of content related to Palestine and the ongoing conflict in Gaza. The point, however, is—and will continue to be—generalizable in our modern online public sphere.

It hardly needs to be said that the ongoing conflict in the Middle East, and the United States’ policy with respect to it, has been a topic of intense domestic political debate. It has spurred a sweeping wave of protests around the country and influenced the voting behavior of some sizeable constituencies in the 2024 presidential election. That is, speech from and about Israel and Palestine is a matter of obvious importance to domestic politics, not just foreign affairs.

Much of the speech related to this topic takes place online and is profoundly shaped by platforms’ moderation choices. Particularly for young people, social media is often a key source of news. And in a severely restricted communications environment, online content has been a main way for people in Gaza to get information out about what is happening in the region, which then informs how people understand and talk about what is going on. Former Secretary of State Antony Blinken has remarked that the way discourse about Israel’s actions in Gaza “has played out on social media has dominated the narrative.” Frustration with the way pro-Palestinian voices dominated social media debates has even been cited by lawmakers as a reason to ban TikTok. Israel seems to understand the importance of online content to domestic political outcomes, reportedly conducting an online influence campaign in order to foster support for its actions in Gaza. At the very least, this shows that those with decision-making power or seeking to influence American policy perceive online discourse about Palestine as consequential to domestic politics.

Against this backdrop, disproportionate moderation of Palestinian voices or content related to Palestine clearly shapes domestic understanding of one of the most contentious political issues of the current moment. Of course, the material support law does not require this result directly. Even if platforms might be liable for knowingly allowing designated FTOs to use their services, this does not require the restriction of Palestinian content more generally. But the practical challenges of content moderation combined with the heavy potential sanctions for violations of the material support law have meant that the law reaches far further, in practice, than its formal ambit would suggest. The examples in Part II show that in anticipatory compliance with the material support law, platforms moderate a far broader range of unquestionably protected speech, including documentation of war crimes, references to mosques, and Palestinian flags.

This is doctrinally material for two reasons. First, this is exactly the kind of chilling effect that the First Amendment is supposed to protect against. In Smith v. California in 1959, the Court held that a California law that criminalized booksellers who distributed obscenity without knowing its content or its character was unconstitutional even though, on its face, it only targeted unprotected speech (i.e., obscenity), because the law would incentivize risk-averse booksellers to take protected books from their shelves in order to avoid even the specter of liability. This potential “self-censorship” by booksellers was seen by the Court as just as constitutionally problematic as direct governmental speech suppression. The material support law makes platforms act exactly as the Court hypothesized booksellers would act under the law struck down in Smith—they remove more material than they need to, including protected speech, in order to avoid even the specter of liability. The law should be understood to be unconstitutional as applied to platforms for that reason—at least absent more clarification from Congress about specifically what it means and how it applies.

Second, the way platforms moderate in the shadow of the material support law shows that the Court’s assumption in Humanitarian Law Project that the application of the material support laws would not affect domestic political debate was wrong. The effects of this law can be felt at the very core of domestic public discourse. The vibrant, transnational discourse on social media platforms makes plain what has always been true—there are no neat lines to be drawn at the water’s edge regarding the kinds of speech the Constitution should be concerned with protecting.

Conclusion

Platforms’ content moderation of terrorism-related content currently sits at a troubling equilibrium, at which platforms anticipatorily adopt a very broad reading of the law, thus preventing the government from having to explain the law’s true reach, let alone attempt to enforce it against social media companies. This equilibrium is the product of platforms’ self-interested risk aversion, an erroneous assumption by the Supreme Court about what the effects of allowing the government to restrict even peaceful speech as “material support” would be, and no one having both the incentive and capacity to change the status quo. As a result, a U.S. law, and the Court’s interpretation of it, has encouraged the suppression of core political discourse, not just beyond the borders of the United States but also within them. The Court’s decision in Humanitarian Law Project thus casts a long shadow over us all.

Acknowledgments

Many thanks to Anna Diakun, Katy Glenn Bass, Jameel Jaffer, Ramya Krishnan, Genevieve Lakier, and the Knight First Amendment Institute for their comments and material support in making this paper better.

 

© 2025, Evelyn Douek

Cite as: Evelyn Douek, The Long Online Shadow of the Material Support Law, 25-07 Knight First Amend. Inst. (Mar. 19, 2025), https://knightcolumbia.org/content/the-long-online-shadow-of-the-material-support-law [https://perma.cc/LYA3-343W].