Introduction
The fear that terrorist organizations will exploit the reach and affordances of social media platforms to spread propaganda, incite violence, and recruit followers was one of the first—and remains one of the most intense—anxieties about the harms caused by the internet.
For years, both political and public conversation about terrorist content online has been dominated by concern that social media platforms are not doing enough to stem the tide of terrorist propaganda or mitigate the ways in which the internet has increased its reach and impact. Concern about freedom of speech—which has increasingly dominated the discourse surrounding the content moderation of other kinds of speech—comes up much more rarely in conversations about online terrorist content. To the extent the First Amendment gets mentioned (and it too often does not), it is frequently seen as hampering the government’s ability to respond effectively to extremely dangerous speech. Law professors Cass Sunstein and Eric Posner, among others, have argued, for example, that First Amendment protections for terrorist speech need to be revisited in the digital age to adequately protect the public against the threat of terrorist violence.
While the First Amendment constrains the government in how it responds to protected expression, social media companies face no such limitations.
As a result, they have faced persistent calls to do what the government cannot: censor vast swathes of terrorism-related content. Initially, social media companies resisted calls to aggressively remove all terrorism-related content from their sites. In 2008, for example, YouTube responded to a letter from Senator Joe Lieberman urging the platform to remove all videos mentioning or featuring Islamic terrorist organizations by politely declining the request, stating that “YouTube encourages free speech and defends everyone’s right to express unpopular points of view.” But things have changed. In the years since, platforms have taken an increasingly aggressive approach to moderating terrorist content. And even if there are still regular complaints about how effectively they enforce their rules, almost all platforms have created broad prohibitions against content related to terrorism or designated terrorist groups.
From the vantage point of those concerned about the rise of terrorist propaganda and recruitment online, this might be considered a success story. But much of the debate about online terrorist content has always assumed that it is easy to discern which speech related to terrorism should be removed and which should not, and that once the content to be removed is defined there is little collateral cost to its removal. It has assumed, in other words, that the line between harmful speech related to terrorist organizations and all other speech is easy to draw.
The Supreme Court made a similar assumption in Holder v. Humanitarian Law Project when it held that a federal law that criminalized “material support” to terrorist organizations could constitutionally be applied to ban even the provision of nonviolent speech to designated groups.
In rejecting a First Amendment challenge brought by human rights organizations that wanted to train designated terrorist organizations on how to peacefully pursue their political aims, “[f]or the first time in its history, the Court upheld the criminalization of speech advocating only nonviolent, lawful ends on the ground that such speech might unintentionally assist a third party in criminal wrongdoing.” To justify this historic ruling, the Humanitarian Law Project Court relied in large part on the notion that speech directed at foreign terrorist organizations was a matter of “foreign affairs and national security,” and that this made it separate and different from domestic political debate. For this reason, said the Court, different First Amendment rules could apply.
But the reality is messier than the Court assumed. As I show in what follows, the material support law, and the broader discourse about the dangers of online terrorist speech, have in fact led platforms to moderate content on their services in ways that reverberate through domestic political debate. As social media platforms have ramped up their efforts to remove terrorist content from their sites, they have also erased important documentation of human rights abuses, stifled political discourse, and discriminated against Arabic-language content.
This collateral damage from their efforts to police content related to terrorism has profoundly influenced political debate not only overseas but also in the United States. While platforms’ overbroad approach has a number of causes, fear of potential liability for providing material support—that is, fear of prosecution under the law upheld in Humanitarian Law Project—is clearly one reason.
First Amendment doctrine normally takes very seriously these kinds of collateral harms to freedom of expression. Laws that criminalize only unprotected speech can still be found unconstitutional if they appear likely to deter, or chill, protected expression by incentivizing risk-averse actors to steer well clear of the unlawful zone.
But the Court in Humanitarian Law Project paid no attention to such potential impacts of the material support law. The Court appeared to assume that restrictions on communication with foreign terrorist organizations would have no impact whatsoever on domestic discourse. If this was ever true (which is doubtful), it clearly no longer holds today, as the platforms’ moderation of terrorist content vividly demonstrates.
This paper sheds light on the long shadow that the material support law casts over online discourse to show that the factual assumptions underpinning the Court’s decision in Humanitarian Law Project are wrong. It proceeds in three parts. Part I describes the Court’s decision in Humanitarian Law Project and the specious foreign/domestic distinction on which it relied. Part II illustrates how this has impacted social media content moderation. Part III uses the example of how social media platforms have moderated content about Palestine to show that the line between foreign discourse and domestic debate is illusory. If Humanitarian Law Project was not wrong the day it was decided, its impact on online discourse suggests it is wrong today.
I. The Law’s Line Between Foreign and Domestic
Criminalizing even peaceful speech to, and association with, foreign terrorist organizations would seem to be at odds with foundational First Amendment precedents that protect, in all but the most limited circumstances, people’s rights to speak and associate with even those groups that the government determines are dangerous. This Part explains how the Court nonetheless upheld the federal material support law as applied to even this kind of peaceful speech and association in Humanitarian Law Project by erecting a firm but ultimately illusory line between the spheres of foreign and domestic discourse.
The material support law upheld in Humanitarian Law Project makes it a crime to “knowingly provide[] material support or resources to a foreign terrorist organization.”
The definition of “material support or resources” is extremely broad and includes any property, tangible or intangible, or any service. To be convicted under the law, a person must know that the organization they are providing support to is a designated terrorist organization, but there is no requirement that they know the intended use of the property or service they provide, much less that it will be used to further unlawful purposes. This broad prohibition was intended to prevent the provision of aid to designated groups “under any circumstances irrespective of the provider’s intent or belief about how the recipient will use it.” As then-Solicitor General Elena Kagan described the theory of the law at oral argument in Humanitarian Law Project, “Hezbollah builds bombs. Hezbollah also builds homes. What Congress decided was when you help Hezbollah build homes, you are also helping Hezbollah build bombs.”
This law “sit[s] at the heart of the Justice Department’s terrorist prosecution efforts”
and has been the most common charge in international terrorism cases. The potential penalties are heavy, including up to 20 years’ imprisonment, or life imprisonment if the material support results in the death of any person. The law has been criticized for chilling humanitarian work, with aid organizations reporting a “dramatic, negative effect on the provision of humanitarian assistance in conflict-stricken regions” due to fear of liability. But many, if not most, applications of this provision raise no constitutional issues. Giving someone money, food, shelter, or (as in Solicitor General Kagan’s example) building materials receives no special constitutional protection.
But speaking and associating with others does receive special protection, and the material support statute prohibits this kind of behavior as well. The definition of material support includes “training” and “expert advice or assistance”
—that is, certain kinds of speech. This was controversial when it was proposed and generated strong critiques of its constitutionality. Gregory Nojeim, legislative counsel for the American Civil Liberties Union, told the House Committee on the Judiciary that the law “smack[s] of McCarthyism at its worst.”
Indeed, the law seems to fly in the face of decisions that form the very basis of modern First Amendment doctrine. These cases hold that the government cannot ban speech even (or perhaps especially) when it is speech to entities the government does not like. Almost every First Amendment student will read Justice Brandeis’ searing opinion in Whitney v. California,
which laments the criminalization of not “the practice of criminal syndicalism, nor even directly . . . the preaching of it, but association with those who propose to preach it,” and his insistence that it is the “command of the Constitution” that “[i]f there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”
Brandeis’ words brought little aid to Anita Whitney herself, whose conviction for criminal syndicalism, based on her work with the Communist Labor Party of America, the Supreme Court upheld. But students will also learn that the decision is now considered a stain on First Amendment history. We teach Brandeis’ opinion because his philosophy became embedded in First Amendment doctrine. The state cannot punish even advocacy of violence unless “such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action,”
declares the famously high standard from Brandenburg. Assembly or association with an organization that advocates unlawful acts is protected by the First Amendment, as the Court’s later cases dealing with attempts to criminalize association with the Communist Party confirm. These cases and the principles they stand for are, in the words of David Cole, “the linchpin of the First Amendment's protection of political expression.” The strong protections they erect are emblematic of America’s First Amendment exceptionalism. The material support law, by contrast, criminalizes speech to certain designated disfavored groups regardless of whether it is unprotected incitement and regardless of whether the speaker intends to further those groups’ unlawful aims. Nevertheless, when the issue of the material support law’s constitutionality reached the Court in 2010, the Court upheld the application of the statute even to some forms of peaceful speech and association.
The plaintiffs in Humanitarian Law Project were individuals and human rights organizations who had been—and wanted to continue—working with the Kurdistan Workers’ Party (PKK) and the Liberation Tigers of Tamil Eelam (LTTE), both of which had been designated as foreign terrorist organizations (FTOs). This work involved encouraging the FTOs to resolve their disputes through peaceful means and teaching them how to do so, advising them on how to petition various bodies like the United Nations for relief, training them on how to engage in political advocacy, and other activities that the plaintiffs worried would fall under the material support law’s prohibition on “training” and “expert advice or assistance.”
That is, the plaintiffs had no interest in furthering the illegal or violent objectives of these FTOs—quite the opposite. They wanted to encourage these organizations to pursue their goals through nonviolent, lawful means. This kind of peaceful speech and association, even with disfavored groups, would seem, on its face, to be exactly the kind of speech that settled First Amendment doctrine protects.
Chief Justice Roberts’ opinion for the Court acknowledged that because the plaintiffs’ speech did not fall within any of the established exceptions from First Amendment coverage, applying the material support statute in these circumstances would have to satisfy “demanding” scrutiny.
Nevertheless, he went on to hold that the material support statute could constitutionally be applied to plaintiffs’ proposed activities. This was because “[e]veryone agrees that the Government’s interest in combating terrorism is an urgent objective of the highest order,” and it was permissible for Congress to conclude that “aiding a foreign terrorist organization’s lawful activity promotes the terrorist organization as a whole.”
Many have critiqued the decision and its apparent inconsistency with the Court’s precedents.
As Justice Breyer’s dissent points out, the majority’s conclusion seems inconsistent with many of the Court’s foundational decisions, including the ordinarily stringent protection for political speech, the high bar for prosecuting even incitement to unlawful action, and the central principle that association for peaceful purposes can never be made a crime. Quoting Brandeis’ opinion in Whitney, Breyer asked how one could possibly explain the majority’s decision to people “who live, as we do, in a nation committed to the resolution of disputes through ‘deliberative forces’?” How, then, did the majority explain its decision?
The Court’s decision to uphold the law was clearly influenced by the national security context of the case. Throughout the opinion, the Court was explicit that Congress and the Executive are entitled to deference in their assessment of the necessity of the material support statute because terrorism “implicates sensitive and weighty interests of national security and foreign affairs.”
Therefore, “Congress and the Executive are uniquely positioned to make principled distinctions between activities that will further terrorist conduct and undermine United States foreign policy, and those that will not.” But even the national security context cannot fully explain the decision in Humanitarian Law Project. After all, as the dissent points out, in other cases the Court had made clear that Congress’s and the Executive’s claims of authority and expertise in matters of national security and foreign affairs “do not automatically trump the Court’s own obligation to secure the protection that the Constitution grants to individuals.” Were it otherwise, the government could simply do an end-run around the First Amendment to suppress any speech it deemed contrary to the nation’s security interests.
In his majority opinion, Roberts seemed anxious to limit the reach of the decision so as not to appear to overturn any long-established First Amendment principles. Indeed, in an apparent acknowledgment of the difficulty of reconciling his decision with the rest of the First Amendment’s doctrinal architecture, the opinion rejected the idea that all applications of the material support statute would be constitutional.
“In particular,” Roberts wrote, “we in no way suggest that a regulation of independent speech [that is, speech not done in coordination with designated FTOs] would pass constitutional muster.” This sentence appeared designed to reassure us that, notwithstanding its victory in the case, the government could not, after Humanitarian Law Project, lock people up for expressing what it deemed to be “dangerous” views, such as general support for terrorist organizations or their points of view. And as for First Amendment protections for freedom of association, Roberts also wrote that the decision should not be read to “suggest that Congress could extend the same prohibition on material support at issue here to domestic organizations.”
This last statement, made without elaboration or citation, suggests a constitutional distinction between foreign and domestic groups. It is not only that the Court will grant the government more deference in matters of national security and foreign affairs, but that the government simply cannot restrict speech and association in a domestic context in ways that it can in a foreign context. The implication seems to be that there is a clear dividing line between the two—that the ordinary pre-Humanitarian Law Project First Amendment rules protecting speech and association with groups remained intact when those groups were domestic, but the foreign nature of designated foreign terrorist organizations justified a different rule.
The opinion does not explain, however, why this would be the case. And the distinction, which the Court apparently finds intuitive, is sound in neither principle nor practice.
David Cole has tried to reconstruct the Court’s potential (unspoken) “reasons for directing special skepticism at the regulation of domestic speech and association” as compared with speech and association with foreign groups.
The most compelling of these is that the government has greater opportunity to monitor domestic organizations’ conduct, and therefore “[i]t can reduce the likelihood that such groups use their resources for terrorist activity without restricting speech or association.” This kind of factual distinction may indeed be relevant to whether a law is narrowly tailored or the least restrictive means of achieving the government’s aims, but it is exactly the kind of argument that the Court normally scrutinizes strictly rather than leaving it to readers to hypothesize after the fact. After all, there are reasons to think the opposite might be true—because there are fewer constitutional constraints on governmental surveillance of foreign groups, the government might have more capacity to monitor foreign groups than domestic ones.
Cole also suggests that the distinction might be justified by the First Amendment’s “democratic purpose.”
He suggests that “[i]t is virtually impossible to imagine meaningful self-government if the state can prohibit speech in coordination with domestic political groups it disfavors, but restrictions on speech with foreign organizations arguably pose a less direct challenge to the mechanisms of democracy.” Further, “[t]he risk of improperly motivated censorship is arguably greater with respect to domestic than foreign political groups.”
Cole’s heart was hardly in these arguments—he represented the plaintiffs before the Supreme Court in Humanitarian Law Project, after all. But these purported rationales for a constitutional distinction between foreign and domestic groups are not convincing. There is no categorical reason why speech to, with, and from foreigners should be any less helpful to Americans trying to understand the world or govern themselves. In fact, on many topics in a globalized world, foreign voices may be the most informative, and the freedom of Americans to speak and associate with foreigners therefore directly implicates their own ability to exercise self-governance.
In other contexts the Supreme Court has insisted that “[t]he inherent worth of the speech in terms of its capacity for informing the public does not depend upon the identity of its source.” To the extent that speech to and from foreigners is perceived as dangerous because foreigners may have goals and interests that conflict with America’s, ordinarily the remedy for such dangers is “more speech, not enforced silence.” As Alexander Meiklejohn, the most influential theorist of the self-governance theory of free speech, argued, it is unflatteringly paternalistic to think foreign speech is uniquely dangerous:
Why may we not hear what these [people] from other countries, other systems of government, have to say? . . . Do We, the People of the United States, wish to be thus mentally “protected”? To say that would seem to be an admission that we are intellectually and morally unfit to play our part in what Justice Holmes has called the “experiment” of self-government.
But I can set aside the arguments about the value to Americans of speech with foreigners for now because the point I want to emphasize here is different: Even if drawing a line between aid to foreign and domestic groups were normatively justifiable, it is practically impossible. First Amendment case law has long recognized that there is no neat distinction between foreign and domestic discourse. As the Court has recognized, interfering with the dissemination of foreign speech not only affects the foreign speaker but also infringes on the right of the domestic listener to receive information and ideas.
These listeners’ rights are as vital to a functioning system of freedom of expression as speakers’ rights, as Justice Brennan wrote in his concurring opinion in Lamont v. Postmaster General, because “[i]t would be a barren marketplace of ideas that had only sellers and no buyers.”
Thus, First Amendment rights have never been simply delineated by geographical lines. And even if there was once a time when national borders demarcated where a country’s public discourse began and ended, the rise of the internet has surely made this a thing of the past.
The internet is radically transnational. As Jack Balkin observes, “what people do on the Internet transcends the nation state; they participate in discussions, debate, and collective activity that does not respect national borders.” It is true that the online world is not as immune from local jurisdictions and national borders as some have argued. Nevertheless, the internet enables much more cross-pollination of foreign and domestic speech and ideas than offline media—when a person posts online, they can be heard and replied to from almost anywhere in the world instantaneously. Some platforms have adopted country-specific content rules that apply in a single jurisdiction in order to comply with local regulations, but these are the exception. By and large, platforms insist on a single global set of content standards because it is technically and commercially simpler. As a result, when people use social media, they are often speaking and listening to people from all around the world. The participants in a conversation in the replies to a single post can span multiple countries and legal jurisdictions and can be read in many more. And all this discourse takes place on (largely) American-owned social media platforms, whose community standards set the rules for these global conversations.
The material support law casts a long shadow over all this expressive activity. It may not be immediately obvious why this would be the case. The majority in Humanitarian Law Project seemed to understand itself as creating a very narrow exception to the First Amendment’s protections. It repeatedly emphasized that the law did not place “any restriction on independent advocacy, or indeed any activities not directed to, coordinated with, or controlled by foreign terrorist groups.”
Americans therefore remain free to talk independently about, and even praise, foreign terrorist organizations. They also have the right to receive the speech of people the government labels as terrorists, unless it falls within narrow categorical exceptions, such as incitement under Brandenburg v. Ohio or speech integral to crimes such as conspiracy. It is a common misconception that the vast majority of speech related to terrorism is unprotected—in fact, the opposite is true. For example, the Texas solicitor general told the Supreme Court during oral argument last term that a Texas law prohibiting platforms from taking down certain kinds of content would not impact their ability to take down terrorist content because Texas’ law did not apply to “illegal” content. The solicitor general evidently assumed that most terrorism-related content was illegal, but, as Justice Kavanaugh jumped in to remind him, this is simply not the case.
Even so, and despite the Court’s insistence in Humanitarian Law Project that the material support law’s ambit is narrowly confined, the law has had a dramatic impact on online discourse because of the way social media companies have interpreted it. Recall the broad drafting of the statute, which prohibits providing “material support,” defined to include any service or property, including communications equipment. A plain reading of this prohibition could conceivably cover allowing a designated FTO to use a social media platform. Of course, the mens rea requirement would still need to be satisfied—but recall, too, how low the mens rea standard is under the law. While the defendant must know that the service is being provided to an FTO, they do not need to know that it will, or intend for it to, further any unlawful purpose.
To be sure, applying the material support law to a social media platform that merely fails to remove FTO-related accounts from its service would be a far cry from even Humanitarian Law Project, where the plaintiffs sought to provide individualized training to FTOs. Platforms, by contrast, would simply be letting FTOs use their generally available services to engage in what will, by and large, be protected expression. Just two terms ago, in Twitter v. Taamneh, the Court held that these circumstances would not satisfy the elements of a different statute that prohibited aiding and abetting an act of international terrorism.
But the Court based its holding primarily on the fact that the plaintiffs had failed to show that the platforms intended to assist the FTO (in that case, ISIS) or had taken any affirmative steps to do so. That, of course, is exactly the kind of mens rea that the material support statute does not require. Taamneh therefore provides little guidance (or reassurance) as to how the material support statute might apply to platforms that merely host FTO accounts.
That legal issue—how the material support law applies to FTOs’ use of social media services—has never been litigated.
Some have argued that the plain reading of the statute supports its application to social media platforms that refuse to take down “terrorist information and advocacy.” Indeed, writing two years after Humanitarian Law Project, Cole agreed that the problematically broad reasoning of the majority raised the possibility that social media platforms could be found guilty of material support simply for allowing designated FTOs to maintain accounts on their services. The ACLU’s legislative counsel was similarly concerned.
Government actors have continued to cultivate the idea that the law might apply very broadly to criminalize the ordinary commercial decision-making of tech companies. They have failed to clarify the reach of the material support law and have adopted sweeping interpretations of terrorism-related international sanctions in other contexts. In 2021, for example, the Department of Justice shut down foreign websites it alleged were disinformation campaigns “disguised as news organizations or media outlets” on the basis that they were owned or controlled by sanctioned entities and, therefore, the U.S. servers that hosted them would violate sanctions by providing them “website and domain services.”
In 2022, the Treasury Department’s Office of Foreign Assets Control (OFAC) told a nonprofit organization, the Foundation for Global Political Exchange, that allowing designated individuals on other terrorism sanctions lists to appear and talk at its events would be “the provision of a platform for them to speak,” which OFAC “considered to be a service.” OFAC later revoked this interpretation as part of a settlement of a lawsuit brought by the Knight First Amendment Institute on behalf of the Foundation and reaffirmed that sanctions are not intended to “restrict the exchange of information or informational materials.” But it is not hard to understand why these precedents could make a social media platform nervous that the government might one day argue that allowing sanctioned entities to maintain social media accounts is “the provision of a platform” and thus a prohibited “service” under the material support law.
There is, to date, no public indication that the government intends to prosecute any social media platform for material support of terrorism. But against this background, it may not need to. As Robert Chesney warned, it would be “too cavalier” to conclude that the impact of the material support law is marginal based only on the kinds of prosecutions the government has brought so far.
Such a view “fail[s] to account for the substantial impact that the mere prospect of prosecution can have. That the statute has not, or at least has not often, been used in [any particular way] does not mean that it cannot be.” To be clear—any such applications should be found unconstitutional and would represent a dramatic expansion of even Humanitarian Law Project to generally available services that facilitate protected expression. But the technical legal argument does not matter if ambiguity in practice creates sufficient incentive for platforms to err on the side of caution. Indeed, this is the quintessential chilling effect of broadly worded laws that First Amendment doctrine has long been concerned about. The next Part shows how this exact dynamic plays out in the context of social media platforms.
II. Platforms’ Moderation in the Shadow of the Material Support Law
Platforms take down terrorist and terrorism-related content for many reasons. First and foremost, platforms are commercial entities and, in general, such content is not good for business. The vast majority of users do not want to see violent content, or content calling for violence, when they open their social media feeds in the morning or do one last refresh at night.
Loud and persistent political pressure and public criticism of terrorists’ use of platforms also create political and reputational incentives to moderate such content. But these voluntary commercial and reputational incentives cannot fully explain platforms’ broad and blunt approach to removal. As this Part shows, the threat of legal liability under the material support law plays an important role in shaping how platforms moderate terrorism-related content. Risk-averse intermediaries will be particularly susceptible to government pressure when their legal obligations are vague. This is compounded in the content moderation context by the practical challenges of effectively moderating content at scale, which lead platforms to rely on automated tools that necessarily lack nuance. To be sure, platforms’ approach is somewhat overdetermined, given the variety of pressures incentivizing them to remove content related to terrorism—but this Part shows that the material support law casts a long shadow over their approach and clearly informs platform decision-making.
In general, when the scope of a criminal law that proscribes certain speech is unclear, we would hypothesize that platforms will err on the side of caution and over-remove content in order to avoid even the specter of legal liability. This is exactly what the First Amendment doctrine of chilling effects expects and protects against—the natural incentive created by broad or vague laws to deter lawful expression or its distribution.
Scholars have long argued that these kinds of chilling effects are likely to affect platforms’ content moderation particularly severely. Seth Kreimer calls platforms “the weakest link” in protecting online expression because they have limited incentive to accept the risk of sanctions rather than just engaging in prophylactic censorship of their users’ speech. Kreimer thought the broad material support law was a quintessential example of a law that would create such an incentive and predicted, in 2006, that “[a] risk-averse Internet intermediary would not need to descend into paranoia to conclude that the most prudent course would be to proactively censor messages or links that might prove problematic, and to respond to official ‘requests’ with alacrity.”
Observing this hypothesis in practice is somewhat difficult, given the opacity of content moderation in general.
But what information there is tends to confirm that this is exactly what happens.
The independent impact of the threat of legal liability is perhaps most visible with respect to platforms that style themselves as “free speech” alternatives to the “overly censorious” mainstream social media platforms—that is, platforms whose very brand identity depends on the absence of content moderation. Even those platforms remove terrorist content. Elon Musk’s X, which now famously styles itself as a bastion of free speech,
prohibits content “affiliate[d] with or promot[ing] the activities” of terrorist organizations. Telegram, which notoriously refuses to cooperate with law enforcement requests (or at least did until France locked up its CEO), similarly notes that it “block[s] terrorist (e.g. ISIS-related) bots and channels.” In some instances, the influence of perceived legal obligations on these policies is explicit. Rumble, the video-sharing platform that explicitly markets itself as a haven for those moderated by other platforms, bans “content or material that … [p]romotes or supports entities and/or persons designated by either the Canadian or United States government as terrorists or terrorist organizations.” Truth Social, the Trump-owned platform intended to be an “impenetrable beachhead of free speech,” states in “TRUTH #1” that “[t]he law requires us to exclude offensive content from our platforms[, including] content posted by or on behalf of terrorist organizations.” Even platforms that generally denigrate content moderation as “censorship” and refuse to remove violent or hateful material in other contexts consistently remove the content of designated FTOs. That consistency is best explained by a common understanding that such removal is legally required.
The impact of the material support law has been visible in certain individual cases as well, including the video conferencing platform Zoom’s abrupt shutdown of several events that included Leila Khaled, a radical activist associated with the FTO the Popular Front for the Liberation of Palestine.
Representatives from the company admitted that “the current state of the law is not horribly clear” and that they were uncertain “whether allowing Leila Khaled on our platform would be the provision of ‘services’” under the material support law. But given the “risk that Zoom could be under legal scrutiny,” they decided to cancel the events anyway. They acknowledged that liability in such a case would depend on a court taking a very aggressive reading of the law, but even so, the risk was not worth it. Facebook and YouTube also removed a livestream of the event with Leila Khaled from their platforms, but they cited violations of their own content policies rather than potential legal liability.
The decision by Facebook and YouTube to justify their removal decisions by reference to their own speech policies, rather than federal law, is representative of the usual approach of the major platforms. Zoom was remarkably candid about its assessment of potential legal liability, but in general companies tend to be much less explicit about how they understand their legal obligations, making the law’s influence less visible. Meta, for example, has generally insisted that it cannot be specific about its approach to moderating terrorist content without endangering its employees or helping banned entities circumvent its policies.
The authors of a human rights due diligence report that Meta commissioned on its impacts in Israel and Palestine recommended that the company publicly state its understanding of its legal obligations under the material support law and “[f]und public research into the optimal relationship between legally required counterterrorism obligations and the policies and practices of social media platforms.” Meta declined to do so, however, and somewhat brazenly replied that while “[l]egal advice is an important foundation to our [Dangerous Organizations and Individuals] Policy,” it would not release that advice because “[a]s with other legal advice, we do not direct or fund legal guidance for other companies.” This is an unusually clear statement of something that otherwise can only be inferred: It is not in platforms’ interest to be explicit about their legal obligations in the face of an ambiguous law.
Nor is it in the government’s interest to clear up the ambiguity these companies clearly face when it comes to how the material support law applies to social media platforms. Anticipatory compliance by platforms is a more efficient way for the government to achieve its aim of suppressing certain kinds of speech than a clearly spelled out regulatory regime—that is, the chilling effects are the point. As a result, it is very difficult to know what the platforms think, or reasonably should think, about the application of the material support law to their content moderation decision-making.
It is nevertheless clear from the platforms’ content moderation policies and practices that the threat of legal liability under the material support law significantly influences their operations. For example, many of the major platforms use specific lists of dangerous or terrorist organizations that are subject to removal under their rules—lists that often closely track the list of designated foreign terrorist organizations maintained by the U.S. government.
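The bluntness of rules keyed to such lists is easy to illustrate. What follows is a deliberately naive sketch, not any platform’s actual system: the alias list, the posts, and the matching logic are all invented. But it captures the structural problem that a rule triggered by designated names cannot distinguish an organization from a mosque that shares its name, or advocacy from news reporting.

```python
# A deliberately naive sketch of list-keyed moderation. The alias list and
# posts are hypothetical; no platform's real system is this simple.
DESIGNATED_ALIASES = {"al-aqsa", "isis", "pkk"}  # invented alias list

def should_remove(post: str) -> bool:
    """Flag any post containing a designated alias, regardless of context."""
    tokens = {word.strip(".,!?\"'").lower() for word in post.split()}
    return not tokens.isdisjoint(DESIGNATED_ALIASES)

# All three posts are removed, though only the first could even arguably
# implicate the material support law: the second refers to the mosque,
# and the third is news reporting.
print(should_remove("Support al-aqsa today"))                   # True
print(should_remove("Prayers at al-aqsa were peaceful."))       # True
print(should_remove("News: PKK attack on army post reported"))  # True
```

Real systems layer machine-learned classifiers and human review on top of such lists, but the examples collected below suggest that this same basic conflation survives at scale.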
In many cases, these platforms’ rules sweep more broadly than even the broadest theory of liability under the material support law could justify. For example, platforms often prohibit even independent praise of terrorist groups—exactly the kind of speech that the Humanitarian Law Project majority went out of its way to insist would not be covered by the law. But platforms doing more to repress terrorist speech than the government mandates is perfectly consistent with a strategy of appeasing the government in the face of threatened liability.
The impact of legal ambiguity is compounded by the practical challenges of moderating content at scale. Given the sheer volume of content posted to social media platforms every day, platforms rely on automated tools to review it at the required speed and scale.
These tools are improving but remain deeply imperfect. They generally lack the ability to judge the context of a particular post and rely on blunt signals to classify content, meaning they are deployed with full knowledge that they will often make mistakes. At times of crisis—say, in the aftermath of a terrorist attack—when there is a large influx of both violating and non-violating content, companies may intentionally reduce the accuracy thresholds of their moderation tools in order to cope with the surge of material to review. This is, again, exactly what you would expect a risk-averse intermediary to do. When criminal liability is a possibility, the incentive to consciously over-remove non-violating content, rather than accidentally leave up even a small amount of violating content, is high.
These factors—vague legal obligations, risk-averse intermediaries, and clumsy automated moderation tools—have combined, time and again, to produce over-moderation of valuable speech in platforms’ attempts to remove terrorism-related speech from their services.
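The crisis-time dynamic can be made concrete with a stylized sketch. Everything in it is invented for illustration (the posts, the classifier scores, the threshold values), and real moderation systems are vastly more complicated, but the underlying tradeoff is the same: lowering the removal threshold catches more violating content only by sweeping in more protected speech.

```python
# A stylized sketch of threshold-based automated moderation. All posts,
# scores, and threshold values here are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    score: float      # classifier's confidence that the post is terrorist content
    violating: bool   # ground truth, which the system never actually observes

POSTS = [
    Post("FTO recruitment video", 0.95, True),
    Post("News report on an FTO attack", 0.70, False),
    Post("Human rights documentation from a war zone", 0.55, False),
    Post("Satire mocking an FTO", 0.45, False),
]

def removed_at(threshold: float) -> list[Post]:
    """Everything scoring at or above the threshold is removed automatically."""
    return [p for p in POSTS if p.score >= threshold]

for label, threshold in [("ordinary operation", 0.90), ("crisis mode", 0.50)]:
    removed = removed_at(threshold)
    wrongful = sum(not p.violating for p in removed)
    print(f"{label}: {len(removed)} removed, {wrongful} non-violating")
# ordinary operation: 1 removed, 0 non-violating
# crisis mode: 3 removed, 2 non-violating (the news report and the documentation)
```

A platform facing possible criminal liability has every reason to prefer wrongful removals over wrongful non-removals, which is precisely the asymmetry that chilling effects doctrine describes.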
Civil society has for years been drawing attention to platforms’ inability to “consistently differentiate activism, counter-speech, and satire about extremism from extremism itself,” with the result that “marginalized users are the ones who pay for those mistakes.”
Anecdata and examples of errors abound. YouTube’s introduction of a new algorithmic moderation tool to remove terrorist propaganda resulted in the deletion of hundreds of thousands of videos documenting human rights abuses in places like Syria. Facebook once removed references to Al-Aqsa Mosque—one of Islam’s holiest sites—during a flare-up in tensions and violence in Israel and Palestine (including at the mosque itself) in 2021 because references to “Al-Aqsa” were mistakenly classified as references to a designated terrorist organization. Facebook called this an “enforcement error.” It has also mistakenly removed news coverage of terrorism because its automated tools do not always differentiate content shared to glorify or praise from content shared for legitimate reporting purposes. On another occasion, Instagram was auto-translating the word “Palestinian” in users’ bios to read “Palestinian terrorists”—a particularly glaring demonstration of the biases that automated tools can perpetuate—and hiding comments that featured nothing more than Palestinian flag emojis.
Scattered examples like these show the very real costs to free speech of broad and blunt enforcement of platforms’ policies relating to terrorist content. Of course, the scale of online content means it will always be possible to find individual examples of mistakes, but it is much harder to get insight into systemic biases. The pile of civil society reports raising alarm about such potential biases, however, is large and growing.
Indeed, long-standing public concern that moderation of terrorist content exhibited biases against Arabic-language content, and in particular Palestinian content, led Meta’s Oversight Board to recommend that the company commission an independent review of its moderation in the region. This review concluded that while there was no intentional bias at Meta against any particular racial or ethnic group, there were “instances of unintentional bias” that “lead to different human rights impacts on Palestinian and Arabic speaking users.” The review attributed this bias at least in part to Meta’s interpretation of its compliance obligations under the material support laws.
There are important practical ramifications of the fact that the material support law’s impact is largely indirect, working through chilling effects and anticipatory compliance rather than actual enforcement actions. When platforms cite their own community standards rather than the law as the reason they remove content, they obscure the role that the law plays in motivating their actions. Platforms’ proactive compliance also spares the government from ever having to formally order platforms to do anything (indeed, that is the point). This is a problem for two reasons. First, there is no transparency or accountability for the law’s role in leading platforms to adopt such broad interpretations. Second, the absence of a formal legal order makes a broad interpretation of the law harder to challenge. Users do not have a First Amendment claim against platforms that take down their content—indeed, platforms themselves have a First Amendment right to engage in such content moderation.
And a user cannot bring a First Amendment claim against the government for an order that does not exist. The Supreme Court also recently set a very high bar for claims based on informal government pressure on platforms where the platforms have their own independent incentives to moderate content, as they do in this context.
As a result, the influence that the material support law has on platforms’ content moderation is opaque, even if obviously significant, and a product of indirect and structural effects that are difficult to challenge legally. As the next Part shows, this is not merely a problem for “foreign” speech; its effects penetrate right into the heart of domestic political debate.
III. Foreign Content and Domestic Debate
Platforms’ often ham-fisted approach to moderating terrorist content clearly impacts public discourse abroad, where it can foreclose important political debate. As Jillian York observes, people living in areas where designated groups may also have a political role “need to be able to discuss those groups with nuance, and [platform] policy doesn’t allow for that.”
But how platforms moderate this “foreign” content has also had profound impacts on “domestic” debate. As Part I discussed, the Court appeared to assume in Humanitarian Law Project that the spheres of foreign and domestic discourse were entirely separate and, therefore, upheld the material support statute’s application to nonviolent speech on the basis that it only impacted foreign discourse. But the reality of how platforms moderate terrorist content in the shadow of the material support law, described in Part II, belies this assumption. My focus in what follows will be on one example of the inseparability of foreign and domestic discourse—moderation of content related to Palestine and the ongoing conflict in Gaza. The point, however, is—and will continue to be—generalizable across our modern online public sphere.
It hardly needs to be said that the ongoing conflict in the Middle East, and the United States’ policy with respect to it, has been a topic of intense domestic political debate. It has spurred a sweeping wave of protests around the country and influenced the voting behavior of some sizeable constituencies in the 2024 presidential election. That is, speech from and about Israel and Palestine is a matter of obvious importance to domestic politics, not just foreign affairs.
Much of the speech related to this topic takes place online and is profoundly shaped by platforms’ moderation choices. Particularly for young people, social media is often a key source of news.
And in a profoundly restricted communications environment, online content has been a main way for people in Gaza to get information out about what is happening in the region, which in turn informs how people understand and talk about what is going on. Former Secretary of State Antony Blinken has remarked that the way discourse about Israel’s actions in Gaza “has played out on social media has dominated the narrative.” Frustration with the way pro-Palestinian voices dominated social media debates has even been cited by lawmakers as a reason to ban TikTok. Israel seems to understand the importance of online content to domestic political outcomes, reportedly conducting an online influence campaign in order to foster support for its actions in Gaza. At the very least, this shows that those with decision-making power, or those seeking to influence American policy, perceive online discourse about Palestine as consequential to domestic politics.
Against this backdrop, disproportionate moderation of Palestinian voices or content related to Palestine clearly shapes domestic understanding of one of the most contentious political issues of the current moment. Of course, the material support law does not require this result directly. Even if platforms might be liable for knowingly allowing designated FTOs to use their services, that does not require the restriction of Palestinian content more generally. But the practical challenges of content moderation, combined with the heavy potential sanctions for violations of the material support law, have meant that the law reaches far further in practice than its formal ambit would suggest. The examples in Part II show that, in anticipatory compliance with the material support law, platforms moderate a far broader range of unquestionably protected speech, including documentation of war crimes, references to mosques, and Palestinian flags.
This is doctrinally material for two reasons. First, this is exactly the kind of chilling effect that the First Amendment is supposed to protect against. In Smith v. California, decided in 1959, the Court held unconstitutional a California law that criminalized booksellers’ distribution of obscene books even when they did not know of the books’ contents or character. Although on its face the law targeted only unprotected speech (i.e., obscenity), it was unconstitutional because it would incentivize risk-averse booksellers to take protected books off their shelves in order to avoid even the specter of liability.
This potential “self-censorship” by booksellers was seen by the Court as just as constitutionally problematic as direct governmental speech suppression. The material support law makes platforms act exactly as the Court hypothesized booksellers would act under the law struck down in Smith—they remove more material than they need to, including protected speech, in order to steer well clear of any possibility of liability. The law should be understood to be unconstitutional as applied to platforms for that reason—at least absent clarification from Congress about what exactly the law means and how it applies.
Second, the way platforms moderate in the shadow of the material support law shows that the Court’s assumption in Humanitarian Law Project that the application of the material support laws would not affect domestic political debate was wrong. The effects of this law can be felt at the very core of domestic public discourse. The vibrant, transnational discourse on social media platforms makes plain what has always been true—there are no neat lines to be drawn at the water’s edge regarding the kinds of speech the Constitution should be concerned with protecting.
Conclusion
Platforms’ content moderation of terrorism-related content currently sits at a troubling equilibrium, in which platforms anticipatorily adopt a very broad reading of the law, relieving the government of any need to explain the law’s true reach, let alone attempt to enforce it against social media companies. This equilibrium is the product of platforms’ self-interested risk aversion, an erroneous assumption by the Supreme Court about the effects of allowing the government to restrict even peaceful speech as “material support,” and the absence of any actor with both the incentive and the capacity to change the status quo. As a result, a U.S. law, and the Court’s interpretation of it, has encouraged the suppression of core political discourse, not just beyond the borders of the United States but also within them. The Court’s decision in Humanitarian Law Project thus casts a long shadow over us all.
Acknowledgments
Many thanks to Anna Diakun, Katy Glenn Bass, Jameel Jaffer, Ramya Krishnan, Genevieve Lakier and the Knight First Amendment Institute, for their comments and material support in making this paper better.
© 2025, Evelyn Douek
Cite as: Evelyn Douek, The Long Online Shadow of the Material Support Law, 25-07 Knight First Amend. Inst. (Mar. 19, 2025), https://knightcolumbia.org/content/the-long-online-shadow-of-the-material-support-law [https://perma.cc/LYA3-343W].
Jillian C. York, Silicon Values: The Future of Free Speech Under Surveillance Capitalism 100–113 (2021) (reviewing the pressure on tech companies to remove terrorist content from 2008 onwards).
See, e.g., Brian Fung & Andrea Peterson, Hillary Clinton Wants Tech Companies to Help "Disrupt" ISIS. What Does that Even Mean?, Wash. Post (Dec. 7, 2015), https://www.washingtonpost.com/news/the-switch/wp/2015/12/07/hillary-clinton-wants-tech-companies-to-help-disrupt-isis-what-does-that-even-mean/; Alexander Tsesis, Social Media Accountability for Terrorist Propaganda, 86 Fordham L. Rev. 605, 619 (2017) (“While social media companies have independently worked to eliminate many terrorist postings, they are too often recalcitrant, tardy, or uncooperative . . . .”).
Genevieve Lakier & Evelyn Douek, The Amendment the Court Forgot in Twitter v. Taamneh, Harv. L. Rev. Blog (Mar. 1, 2023), https://harvardlawreview.org/blog/2023/03/the-amendment-the-court-forgot-in-twitter-v-taamneh/.
Cass R. Sunstein, Islamic State’s Challenge to Free Speech, Bloomberg (Nov. 23, 2015), https://www.bloomberg.com/view/articles/2015-11-23/islamic-state-s-challenge-to-free-speech; Eric Posner, ISIS Gives Us No Choice but to Consider Limits on Speech, Slate (Dec. 15, 2015), https://slate.com/news-and-politics/2015/12/isiss-online-radicalization-efforts-present-an-unprecedented-danger.html. See also Lyrissa Barnett Lidsky, Incendiary Speech and Social Media, 44 Tex. Tech L. Rev. 147 (2011).
Moody v. NetChoice, 144 S. Ct. 2383, 2401 (2024).
The YouTube Team, Dialogue with Sen. Lieberman on Terrorism Videos, YouTube Official Blog (May 19, 2008), https://blog.youtube/news-and-events/dialogue-with-sen-lieberman-on/.
See infra Part II.
Holder v. Humanitarian L. Project, 561 U.S. 1 (2010).
David Cole, The First Amendment’s Borders: The Place of Holder v. Humanitarian Law Project in First Amendment Doctrine, 6 Harv. L. & Pol’y Rev. 147, 149 (2012).
See infra Part II.
See infra Part III.
Smith v. California, 361 U.S. 147 (1959). See also Frederick Schauer, Fear, Risk and the First Amendment: Unraveling the Chilling Effect, 58 B.U. L. Rev. 685, 685 (1978) (“[T]he concept of the chilling effect . . . [is] a major substantive component of first amendment adjudication. Its use accounts for some very significant advances in free speech theory . . . .”).
18 U.S.C. § 2339B(a)(1).
Id. §§ 2339B(g)(4), 2339A(b)(1).
Id. § 2339B(a)(1).
Robert M. Chesney, The Sleeper Scenario: Terrorism-Support Laws and the Demands of Prevention, 42 Harv. J. on Legis. 1, 18 (2005).
Transcript of Oral Argument at 40, Holder v. Humanitarian L. Project, 561 U.S. 1 (2010).
Charles Doyle, Cong. Rsch. Serv., R41333, Terrorist Material Support: An Overview of 18 U.S.C. § 2339A and § 2339B (2023), https://crsreports.congress.gov/product/pdf/R/R41333.
Shirin Sinnar, Separate and Unequal: The Law of “Domestic” and “International” Terrorism, Mich. L. Rev. 1333, 1354 (2019) (“Material support to terrorism laws supply some of the most common—and controversial—charges in federal terrorism cases.”); Francesca Laguardia, Considering a Domestic Terrorism Statute and Its Alternatives, 114 Nw. U. L. Rev. Online 1061, 1071–72 (2020) (“[M]ost ‘terrorism prosecutions’ are material support prosecutions under § 2339B.”).
18 U.S.C. § 2339B(a)(1).
Justin Fraterman, Criminalizing Humanitarian Relief: Are U.S. Material Support for Terrorism Laws Compatible with International Humanitarian Law?, 46 NYU J. Int’l L. & Pol. 399, 428–30 (2012). See also Sam Adelsberg et al., The Chilling Effect of the Material Support Law on Humanitarian Aid: Causes, Consequences, and Proposed Reforms, 4 Harv. Nat’l Sec. J. 282, 282 (2013) (noting that the statute, and the Supreme Court’s interpretation of it, “has led many charitable organizations to raise concerns about the reach of the statute and the chilling effect it has on their activities in the parts of the world most desperately in need of aid.”).
18 U.S.C. §§ 2339B(g)(4), 2339A(b)(1).
Chesney, supra note 16, at 16–17.
Id. at 17 (citing The Comprehensive Antiterrorism Act of 1995, Hearing Before the House Comm. on the Judiciary, 104th Cong. 322 (1995) (statement of Gregory T. Nojeim, Legis. Counsel for American Civil Liberties Union)).
274 U.S. 357 (1927).
Id. at 373.
Id. at 377.
Brandenburg v. Ohio, 395 U.S. 444, 447 (1969) (per curiam).
De Jonge v. Oregon, 299 U.S. 353, 365 (1937) (observing that “peaceable assembly for lawful discussion cannot be made a crime”); Scales v. United States, 367 U.S. 203, 229–30 (1961).
Cole, supra note 9, at 147.
Frederick Schauer, The Exceptional First Amendment, in American Exceptionalism and Human Rights 29, 43 (Michael Ignatieff ed., 2009) (“[T]he American reluctance to ban political parties or accept government assertions about threats to national security might be explained as a reaction to American anti-Communist and antisocialist excesses during the Red Scare of 1919 and the McCarthy era of the late 1940s and early 1950s.”).
Holder v. Humanitarian L. Project, 561 U.S. 1 (2010).
Id. at 15–16; Cole, supra note 9, at 151.
Holder, 561 U.S. at 9 (citing United States v. O’Brien, 391 U.S. 367 (1968)).
Id. at 28.
Id. at 31.
See, e.g., Cole, supra note 9, at 156 (“[T]he rationale and result in Humanitarian Law Project sharply depart from some of the Court’s most fundamental First Amendment precedents and principles.”); Amanda Shanor, Beyond Humanitarian Law Project: Promoting Human Rights in a Post-9/11 World, 34 Suffolk Transnat’l L. Rev. 519, 528 (2011) (“[T]he Court arguably sub silentio overruled the Communist Party precedents . . . .”); Owen M. Fiss, The World We Live In, 83 Temple L. Rev. 295, 308 (2011) (arguing that “[t]he ban on political advocacy that the Court sustained” would, if not a mere aberration, “alter the very architecture of the First Amendment”); Aziz Huq, Preserving Political Speech from Ourselves and Others 29 (Univ. of Chi. Pub. L. & Legal Theory Working Paper No. 374, 2012) (arguing that the Court’s inconsistent approach to political speech in different contexts “is a subtle pressure in favor of speakers and forms of speech of which the Court approves”); Ronald J. Krotoszynski, Jr., Transborder Speech, 94 Notre Dame L. Rev. 473, 499 (2018).
Holder, 561 U.S. at 42–52.
Id. at 52.
Id. at 33.
Id. at 35.
Id. at 61.
Cf. N.Y. Times Co. v. United States, 403 U.S. 713 (1971).
Holder, 561 U.S. at 39.
Id.
Cf. Schenck v. United States, 249 U.S. 47 (1919); Debs v. United States, 249 U.S. 211 (1919); Abrams v. United States, 250 U.S. 616 (1919).
Holder, 561 U.S. at 39.
At least one lower court has understood Humanitarian Law Project in this way. See Al Haramain Islamic Found., Inc. v. U.S. Dep’t of Treasury, 686 F.3d 965, 1001 (9th Cir. 2012) (distinguishing Humanitarian Law Project on the basis that it only involved “wholly foreign” organizations, and that its rationales applied “much more weakly” to cases involving domestic organizations). See also Krotoszynski, supra note 37, at 508.
Cole, supra note 9, at 173–74.
Id. at 174.
Id.
Id. at 173.
Id.
See generally Evelyn Douek, The Free Speech Blind Spot: Foreign Election Interference on Social Media, in Defending Democracies: Combating Foreign Election Interference in a Digital Age 265 (Jens David Ohlin & Duncan B. Hollis eds., 2021). See also Krotoszynski, supra note 37, at 476 (“No necessary relationship exists between the geographic origin of speech or a speaker and its potential utility to the project of democratic self-government.”); Timothy Zick, The First Amendment in Trans-Border Perspective: Toward a More Cosmopolitan Orientation, 52 B.C. L. Rev. 941, 1000 (2011) (“In our interconnected world, a self-governing person must not only have access to information regarding the local community, but she must also have at least a working knowledge of issues of global scope and significance.”); Huq, supra note 37, at 22 (“Even casual observation demonstrates, however, that foreign affairs matters occupy a meaningful tranche of the national political debate initiated by domestic actors.”).
First Nat’l Bank of Boston v. Bellotti, 435 U.S. 765, 777 (1978). See also Citizens United v. Fed. Election Comm’n, 558 U.S. 310, 340–41 (2010) (observing that “it is inherent in the nature of the political process that voters must be free to obtain information from diverse sources in order to determine how to cast their votes” and “[s]peech restrictions based on the identity of the speaker are all too often simply a means to control content”).
Whitney v. California, 274 U.S. 357, 377 (1927).
Alexander Meiklejohn, Free Speech and Its Relation to Self-Government xiii–xiv (1948).
Lamont v. Postmaster Gen., 381 U.S. 301, 306–07 (1965); Kleindienst v. Mandel, 408 U.S. 753, 762 (1972).
Lamont, 381 U.S. at 308.
Cole, supra note 9, at 168–69 (“In the modern world, when speech can be immediately communicated around the world via the Internet, virtually all speech might implicate foreign affairs.”); Zick, supra note 57, at 990 (“Globalization and the digitization of expression have decreased the significance of territorial borders insofar as First Amendment activities are concerned.”).
Jack M. Balkin, The Future of Free Expression in a Digital Age, 36 Pepp. L. Rev. 427, 438 (2009).
See, e.g., Jack Goldsmith & Tim Wu, Who Controls the Internet?: Illusions of a Borderless World (2006).
Monika Bickert, Defining the Boundaries of Free Speech on Social Media, in The Free Speech Century 254, 260 (Lee C. Bollinger & Geoffrey R. Stone eds., 2018) (“Social media platforms offer a place to communicate with a larger and more diverse audience than any offline audience. . . . To preserve this sort of dialogue, people need to be seeing the same content, and they need to be able to engage in real time. And for that, they need one set of global content standards.”).
Holder, 561 U.S. at 36.
Transcript of Oral Argument at 68–69, NetChoice, LLC v. Paxton, 144 S. Ct. 2383 (2024) (No. 22-555).
Twitter, Inc. v. Taamneh, 598 U.S. 471 (2023).
Id. at 505.
Daphne Keller, Observations on Speech, Danger, and Money 12 (Hoover Inst., Aegis Series Paper No. 1807, 2018) (“[T]he precise contours of material support law as applied to platforms—including whether providing social media accounts constitutes material support—have not been established.”).
Tsesis, supra note 3, at 616. See also Benjamin Wittes & Zoe Bedell, Tweeting Terrorists, Part II: Does It Violate the Law for Twitter to Let Terrorist Groups Have Accounts?, Lawfare (Feb. 14, 2016), https://www.lawfaremedia.org/article/tweeting-terrorists-part-ii-does-it-violate-law-twitter-let-terrorist-groups-have-accounts.
David Cole, Is Hamas’s Twitter Account Illegal?, Daily Beast (Nov. 20, 2012), https://www.thedailybeast.com/articles/2012/11/20/is-hamas-s-twitter-account-illegal.
Gabe Rottman, Hamas, Twitter and the First Amendment, Am. Civil Liberties Union (Nov. 21, 2012), https://www.aclu.org/news/national-security/hamas-twitter-and-first-amendment (“[T]here is an argument that Twitter is providing material support to Hamas by simply hosting its feed . . . and that should frighten the daylights out of all of us.”).
United States Seizes Websites Used by the Iranian Islamic Radio and Television Union and Kata’ib Hizballah, U.S. Dep’t of Just. (June 22, 2021), https://www.justice.gov/opa/pr/united-states-seizes-websites-used-iranian-islamic-radio-and-television-union-and-kata-ib. See also Matthew Petti, U.S. Website Seizures Targeting Iran Cast Wide Net Over Dissident and Religious Broadcasters, Intercept (June 26, 2021), https://theintercept.com/2021/06/26/us-iran-censor-websites-evidence/.
Letter from Nikole Thomas, Assistant Dir., Licensing Div., Off. of Foreign Assets Control, to Joshua Andresen, The Found. for Global Pol. Exch., Inc., https://knightcolumbia.org/documents/9o19ay739f.
Letter from Lisa M. Palluconi, Acting Dir., Off. of Foreign Assets Control, to Joshua Andresen, The Found. for Global Pol. Exch., Inc., https://knightcolumbia.org/documents/h6fexgrmd3. See also Joshua Andresen & Xiangnong (George) Wang, Treasury’s Reversal on Sanctions Authority Is a Victory for Free Speech, Just Sec. (Dec. 5, 2024), https://www.justsecurity.org/105426/treasury-reversal-sanctions-free-speech/.
Office of Foreign Assets Control, FAQ: Do U.S. Sanctions Target Persons for Engaging in Political Speech, Religious Practice, or Other Constitutionally Protected Activities?, U.S. Dep’t of Treasury (Aug. 27, 2024), https://ofac.treasury.gov/faqs/1190.
Robert Chesney, The Supreme Court, Material Support, and the Lasting Impact of Holder v. Humanitarian Law Project, 1 Wake Forest L. Rev. F. 13, 19 (2010).
Id.
Adrian Chen, The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed, Wired (Oct. 23, 2014), https://www.wired.com/2014/10/content-moderation/ (“[S]ocial media’s growth into a multibillion-dollar industry, and its lasting mainstream appeal, has depended in large part on companies’ ability to police the borders of their user-generated content—to ensure that Grandma never has to see images like the one Baybayan just nuked.”); Keller, supra note 71, at 4 (“A social media service that does not successfully prune such material from users’ day-to-day experience would risk losing both users and advertisers.”).
See sources cited supra notes 1–4.
Smith, 361 U.S. at 153–54 (holding that California could not impose strict liability on booksellers for the distribution of obscene speech because “[t]he bookseller’s limitation in the amount of reading material with which he could familiarize himself, and his timidity in the face of his absolute criminal liability, thus would tend to restrict the public’s access to forms of the printed word which the State could not constitutionally suppress directly”). See also Speiser v. Randall, 357 U.S. 513, 526 (1958); Sullivan, 376 U.S. at 279; Schauer, supra note 12, at 698.
Seth F. Kreimer, Censorship by Proxy: The First Amendment, Internet Intermediaries, and the Problem of the Weakest Link, 155 U. Pa. L. Rev. 11, 28 (2006).
Id. at 93–94. See also Keller, supra note 71, at 13 (arguing that the material support statute “provides strong legal incentive to err on the side of deletion, and very little protection for lawful—or strategically important—speech”).
See, e.g., Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media 212 (2018); Robert Gorwa & Timothy Garton Ash, Democratic Transparency in the Platform Society, in Social Media and Democracy: The State of the Field, Prospects for Reform 286, 296 (Joshua A. Tucker & Nathaniel Persily eds., 2020).
X Safety, Stand with X to Protect Free Speech, X Blog (Nov. 18, 2023), https://blog.x.com/en_us/topics/company/2023/stand-with-x-to-protect-free-speech (“Above everything, including profit, X works to protect the public’s right to free speech.”).
Violent and Hateful Entities Policy, X Help Ctr. (Apr. 2023), https://help.x.com/en/rules-and-policies/violent-entities.
Lily Jamali, Telegram Will Now Provide Some User Data to Authorities, BBC (Sep. 23, 2024), https://www.bbc.com/news/articles/cvglp0xny3eo.
Telegram FAQ, Telegram, https://telegram.org/faq?ref=platformer.news#q-do-you-process-data-requests.
Our Story, Rumble, https://corp.rumble.com/our-story/.
Website Terms and Conditions of Use and Agency Agreement, Rumble, https://rumble.com/s/terms (last visited Sep. 2, 2024).
Eva Dou, Trump Media Reports $16.4 Million Quarterly Loss, Wash. Post (Aug. 9, 2024), https://www.washingtonpost.com/technology/2024/08/09/trump-media-loss-quarter/.
Community Guidelines, TRUTH Help Ctr., https://help.truthsocial.com/community-guidelines-page/ (last updated Feb. 4, 2022).
Alice Speri & Sam Biddle, Zoom Censorship of Palestine Seminars Sparks Fight Over Academic Freedom, Intercept (Nov. 14, 2020), https://theintercept.com/2020/11/14/zoom-censorship-leila-khaled-palestine/; James Vincent, Zoom Cancels Talk by Palestinian Hijacker Leila Khaled at San Francisco State University, Verge (Sep. 24, 2020), https://www.theverge.com/2020/9/24/21453935/zoom-facebook-youtube-cancel-talk-leila-khaled-san-francisco-state-university.
Jen Patja et al., The Lawfare Podcast: How Zoom Thinks About Content Moderation, Lawfare (Dec. 2, 2021), https://www.lawfaremedia.org/article/lawfare-podcast-how-zoom-thinks-about-content-moderation.
Id.
Id. (calling it a “narrow theory”).
Speri & Biddle, supra note 95; Vincent, supra note 95.
See, e.g., Biddle, supra note 100.
Dunstan Allison-Hope et al., Human Rights Due Diligence of Meta’s Impacts in Israel and Palestine in May 2021, BSR 10 (Sep. 22, 2022), https://www.bsr.org/reports/BSR_Meta_Human_Rights_Israel_Palestine_English.pdf.
Meta, Meta Response: Israel and Palestine Due Diligence Exercise 15 (Sep. 2022), https://about.fb.com/wp-content/uploads/2022/09/Meta-Response_-Israel-and-Palestine-Due-Diligence-Exercise.pdf.
See, e.g., Sam Biddle, Revealed: Facebook’s Secret Blacklist of “Dangerous Individuals and Organizations,” Intercept (Oct. 12, 2021), https://theintercept.com/2021/10/12/facebook-secret-blacklist-dangerous/ (“Facebook takes most of the names in the terrorism category directly from the U.S. government”); Violent Extremist or Criminal Organizations Policy, YouTube Help, https://support.google.com/youtube/answer/9229472?hl=en (“[W]e terminate any channel where we have reasonable belief that the account holder is a member of a designated terrorist organization, such as a Foreign Terrorist Organization (U.S.).”).
See, e.g., Violent Extremist or Criminal Organizations Policy, supra note 100 (“Content intended to praise, promote, or aid violent extremist or criminal organizations is not allowed on YouTube.”).
Evelyn Douek, Governing Online Speech: From “Posts-As-Trumps” to Proportionality and Probability, 121 Colum. L. Rev. 759, 792–93 (2021).
Id.
See, e.g., Sam Schechner et al., Inside Meta, Debate Over What’s Fair in Suppressing Comments in the Palestinian Territories, Wall St. J. (Oct. 21, 2023), https://www.wsj.com/tech/inside-meta-debate-over-whats-fair-in-suppressing-speech-in-the-palestinian-territories-6212aa58; Evelyn Douek, What Facebook Did for Chauvin’s Trial Should Happen All the Time, Atlantic (Apr. 21, 2021), https://www.theatlantic.com/ideas/archive/2021/04/facebook-should-dial-down-toxicity-much-more-often/618653/.
Abdul Rahman Al Jaloud et al., Caught in the Net: The Impact of “Extremist” Speech Regulations on Human Rights Content 6 (2019), https://www.eff.org/files/2019/05/30/caught_in_the_net_whitepaper_2019.pdf. See also Arab Center for Social Media Advancement, Facebook and Palestinians: Biased or Neutral Content Moderation Policies?, 7amleh (Oct. 2018), https://www.apc.org/sites/default/files/booklet-final2-1.pdf; Marwa Fatafta, How Meta Censors Palestinian Voices, Access Now (Feb. 19, 2024), https://www.accessnow.org/publication/how-meta-censors-palestinian-voices/; Human Rights Watch, Meta’s Broken Promises: Systemic Censorship of Palestine Content on Instagram and Facebook (2023), https://www.hrw.org/sites/default/files/media_2023/12/ip_meta1223%20web.pdf.
See id. (collating examples).
Hadi Al Khatib & Dia Kayyali, YouTube Is Erasing History, N.Y. Times (Oct. 23, 2019), https://www.nytimes.com/2019/10/23/opinion/syria-youtube-content-moderation.html; Armin Rosen, Erasing History: YouTube’s Deletion of Syria War Videos Concerns Human Rights Groups, Fast Co. (Mar. 7, 2018), https://www.fastcompany.com/40540411/erasing-history-youtubes-deletion-of-syria-war-videos-concerns-human-rights-groups.
Ryan Mac, Instagram Censored Posts About One of Islam’s Holiest Mosques, Drawing Employee Ire, BuzzFeed News (May 12, 2021), https://www.buzzfeednews.com/article/ryanmac/instagram-facebook-censored-al-aqsa-mosque.
Id.
Case Decision 2021-009-FB-UA, Oversight Bd. (2021), https://www.oversightboard.com/decision/FB-P93JPX02/ (Shared Al Jazeera post case).
Samantha Cole, Instagram “Sincerely Apologizes” for Inserting “Terrorist” into Palestinian Bio Translations, 404 Media (Oct. 19, 2023), https://www.404media.co/instagram-palestinian-arabic-bio-translation/.
Sam Biddle, Instagram Hid a Comment. It Was Just Three Palestinian Flag Emojis., Intercept (Oct. 28, 2023), https://theintercept.com/2023/10/28/instagram-palestinian-flag-emoji/.
See sources cited supra note 107.
Case Decision 2021-009-FB-UA, supra note 110.
Allison-Hope et al., supra note 103, at 8.
Id.
Moody v. NetChoice, LLC, 144 S. Ct. 2383 (2024).
Murthy v. Missouri, 144 S. Ct. 1972 (2024).
Sam Biddle, Revealed: Facebook’s Secret Blacklist of “Dangerous Individuals and Organizations,” Intercept (Oct. 12, 2021), https://theintercept.com/2021/10/12/facebook-secret-blacklist-dangerous/.
Rebecca Leppert & Katerina Eva Matsa, More Americans – Especially Young Adults – Are Regularly Getting News on TikTok, Pew Rsch. Ctr. (Sep. 17, 2024), https://www.pewresearch.org/short-reads/2024/09/17/more-americans-regularly-get-news-on-tiktok-especially-young-adults/.
Perry Bacon, Jr., Social Media Has Played a Huge Role in the Coverage of the Gaza Conflict, Wash. Post (May 10, 2024), https://www.washingtonpost.com/opinions/2024/05/10/social-media-gaza-protests-israel/.
Secretary Antony J. Blinken at McCain Institute’s 2024 Sedona Forum Keynote Conversation with Senator Mitt Romney, U.S. Dep’t of State (May 3, 2024), https://2021-2025.state.gov/secretary-antony-j-blinken-at-mccain-institutes-2024-sedona-forum-keynote-conversation-with-senator-mitt-romney/.
Nikki McCann Ramirez, Lawmakers Admit They Want to Ban TikTok Over Pro-Palestinian Content, Rolling Stone (May 6, 2024), https://www.rollingstone.com/politics/politics-news/lawmakers-tiktok-ban-pro-palestinian-content-1235016101/.
Sheera Frenkel, Israel Secretly Targets U.S. Lawmakers with Influence Campaign on Gaza War, N.Y. Times (June 5, 2024), https://www.nytimes.com/2024/06/05/technology/israel-campaign-gaza-social-media.html.
Smith, 361 U.S. 147.
Id. at 153–54.
Evelyn Douek is an assistant professor of law at Stanford Law School and was a senior research fellow at the Knight First Amendment Institute from 2021 to 2022.