Jameel Jaffer
I am Jameel Jaffer, and this is War and Speech, an exploration of the free speech fallout of the war in Israel and Gaza. In a report released in December, Human Rights Watch contended that Meta, the parent company of Facebook and Instagram, was engaged in systemic censorship of pro-Palestinian content. An earlier report by 7amleh, a digital rights organization, reached a similar conclusion, accusing Meta of shadow-banning, or invisibly demoting, Palestinian content. More recently, some American legislators have accused TikTok of elevating pro-Palestinian content and turning younger people against Israel. On today's episode, we'll be talking about the role that social media companies are playing in shaping or distorting public discourse about the war. We have two guests. Deborah Brown is a senior researcher and advocate on digital rights at Human Rights Watch. Deborah, thanks so much for joining us.
Deborah Brown
Thanks so much for having me, Jameel.
Jameel Jaffer
And Evelyn Douek is a law professor at Stanford University and a one-time visiting scholar at the Knight Institute. Evelyn, it's great to have you here too.
Evelyn Douek
Great to be back and honored to be on such a great podcast series. Thank you.
Jameel Jaffer
Deborah, last year you investigated Meta's policies and practices relating to content about the war. Can you tell me a little bit about how you did that investigation and what you found?
Deborah Brown
So when the war broke out after October 7th, we started hearing reports of content about Palestine being censored, and this rang familiar, because in 2021 we had also researched the censorship of content about Palestine in response to the forced evictions in the Sheikh Jarrah neighborhood of East Jerusalem. Building on that experience, we put out a call for evidence using Human Rights Watch's social media platforms. We asked for any instances of censorship, on any platform, and we listed the types we were looking for.
We kept it neutral in the sense that we didn't ask what types of views were being censored, and what we found was that, overwhelmingly, the cases sent to us were about censorship on Meta's platforms (not exclusively, but predominantly on Instagram, with some on Facebook), and that all except one case involved censorship of content about Palestinian human rights, about Palestine broadly speaking. In the end, the data set that we used for the research was 1,050 cases, and 1,049 of those cases involved content that was censored in support of Palestine.
Jameel Jaffer
Can you say more about what kind of content we're talking about here? I mean, are we talking about posts that are praising specific terrorist acts? Are we talking about posts that call for an end to the occupation? What kinds of posts are we talking about here?
Deborah Brown
We weeded out anything that could be seen as inciting violence, hatred, or discrimination, but basically we included anything that could have been seen as peaceful support for Palestine or as neutrally discussing events relating to the war. One set, I would say, were hashtags: ceasefire now, stop the genocide, from the river to the sea, the more popular slogans you'd see at anti-war and pro-Palestine protests offline as well. Another grouping was of posts that dealt with terrorism, in the sense that there were hostage videos that were quite viral at the moment.
We saw people post the videos and criticize Netanyahu's government, or call for the hostages to be released, or something like this, but because there might have been terrorist symbols in the video, or because Hamas was simply mentioned, that was enough to get the post taken down. That has to do with Meta's Dangerous Organizations and Individuals policy combined with its use of automation, and we found that the combination of the two resulted in the removal of posts that definitely should not have been taken down.
Jameel Jaffer
So somebody uses a hashtag like "ceasefire now," say, or "river to the sea," and the post might get censored on that basis. What does censored mean in this context? You said taken down in that last example, but is that the extent of it, or are there other forms of suppression here?
Deborah Brown
The most common reason that hashtags like those were taken down was the spam policy. Basically, we saw people posting a Palestinian flag emoji or saying "ceasefire now," repeating the same phrase over and over again, maybe in different posts or many times per day. What looks like spam to Meta's systems is what activism looks like in today's world. People could have been posting something different entirely, and in fact we found people saying, "I think my posts are getting censored," and then being told that they had violated the spam policy again.
So to answer your question, for maybe 24 or 48 hours they couldn't post anything anymore. They wouldn't lose the ability to view things, but they couldn't post. It was more than just a comment being taken down; they couldn't comment at all. And for different types of violations, for example DOI violations, they might lose the ability to have their content monetized, to have it recommended to people who don't follow them, to live stream, or to use certain other account features.
Jameel Jaffer
Can you say more about the Dangerous Organizations and Individuals, or DOI, policy and how that factors into the suppression of this kind of speech?
Deborah Brown
So what's a bit inconvenient in the timeline is that we published our report, I think it was in December, and then on December 29, Meta updated the DOI policy. A lot of the issues that we raised are not necessarily present under the new policy, and we don't know how it's being enforced. But the issue with the policy, at the time, was that it prohibited any representation, praise, or substantive support of entities or individuals, and in some cases events, included on Meta's list. One of the issues that we saw was that praise included things like speaking positively about an entity on the list or suggesting that its conduct was morally justified or acceptable, and the internal guidelines, which aren't part of the formal policy but are what content moderators use to implement it, had even said that praise includes content that makes people think more positively about a group.
So one could see how posting horrific evidence of human rights atrocities being experienced in Gaza could make people who had no sympathy for Hamas's cause think a bit more positively about the group than they ordinarily would. These types of content that would fall under praise were an issue. Substantive support included things like directly quoting a group without specifically saying that you condemn it or that you're reporting on it neutrally. And while there are exceptions under this policy for neutral reporting, for condemnation, for simply discussing a group without showing praise or support, if there was any doubt about someone's intention, the default was to remove the content.
Jameel Jaffer
So speech relating to the conflict that supports, praises, or even in some cases discusses one of these dangerous organizations or individuals would be suppressed in some way. How does Facebook come up with that list?
Deborah Brown
That's a good question. I think they've now become more transparent about the basis for the list, but they still have not published the actual list. It relies on the State Department's Foreign Terrorist Organizations list, but there are also other lists or other additions that we don't know about. One of our recommendations was to publish the list, but also to say whether Israel's list is incorporated, because in recent years Israel has designated human rights organizations as terrorist organizations. The Intercept did publish a version of the list a few years ago, and I'm not sure to what extent that reflects the current list, but Meta says that it can't publish the list for various reasons, yet it has not, to my knowledge, cited any harm or fallout from The Intercept having published it.
Jameel Jaffer
So Meta has this list of dangerous organizations that Meta's users aren't allowed to praise, support, or at least practically speaking, sometimes even discuss, and that list is given to Meta, at least in part, by the US government and possibly by the Israeli government as well. Did I get that right?
Deborah Brown
It's not exactly that they're giving Meta the list. I think it's more that Meta interprets its legal obligation to comply with US law as requiring the use of that list. Certain obligations are real, but Meta doesn't need to go as far as it has: its risk assessment, its appetite for what it allows in terms of speech about people on that list, goes far beyond what US law requires, and it's not consistent with what other platforms, like Twitter, now X, do. Then, in terms of other governments, one would imagine a similar calculus is happening. We just don't actually know the source of the list.
Jameel Jaffer
Evelyn, I want to bring you into this conversation. Can you tell me about this Dangerous Organizations and Individuals List and how it relates to American sanctions law?
Evelyn Douek
The relevant law here is that it is a federal crime to knowingly provide material support or resources to a designated foreign terrorist organization, and material support is given a very broad definition, which includes any service, financial services, lodging, training, expert advice, or assistance. It has been interpreted to include some forms of communication with foreign terrorist organizations, or at least I assume platforms are concerned about providing communication services to designated terrorist organizations and their members. Now, all of that should be subject to the First Amendment. The First Amendment must constrain what the federal government can criminalize in this context, and so protected speech can't be criminalized, and Meta can't be held liable for protected speech. But there are a number of different factors at play here. First of all, if you're Meta, you don't have a lot of incentives to push the boundaries on the interpretation of the law: you don't want to be held liable for a federal crime, and your interest in any particular piece of content is pretty minimal.
And so, it would be no surprise if the platforms' decision making was extremely risk averse here: err on the side of caution, take down more content than necessary, because why bother? It's not worth the headache. The second reason why platforms might be especially risk averse in this context is that the Supreme Court has indicated that it might not be as solicitous of First Amendment rights and as protective of this kind of speech in this context, in the infamous case of Holder v. Humanitarian Law Project, which you know well, Jameel, where the court surprisingly said that even peaceful communication, training on international law with a designated foreign terrorist organization, can fall under this statute, can be criminalized, and can be prohibited. That seems at odds with a lot of the other doctrine that we have about freedom of association and free speech. So those factors combine: the United States government designates these organizations as FTOs, material support is criminalized, and as a platform, I think all of your incentives are to err on the side of caution here.
Jameel Jaffer
Deborah, does all of that sound right to you? Why do you think all of this censorship is happening? Is it the result of errors and incompetence, or is Meta interpreting its legal obligations too broadly, or is it something even more nefarious than that?
Deborah Brown
I think there are a few different causes. One is what we've been talking about: the way that some of their policies are drawn. Though they obviously have legal obligations not to provide material support, the way that the Dangerous Organizations and Individuals policy is drawn goes beyond that, and the way that it's enforced goes beyond that. They're using automation, knowing that their automated tools aren't always very accurate. It's very hard to get this right, especially at scale, and they're making choices about whether they're willing to have more false positives or false negatives, to take down more content or leave more content up. I would say they're skewing toward taking more content down, because they don't want to risk having certain types of things on their platform.
That's a choice, and so I don't think it's a deliberate conspiracy to censor certain types of speech, but it is about whose voice and whose safety they prioritize at a given moment in time. In addition to that, I think there is a concern that governments have influence over these decisions. If that's not the case, we need to see evidence that it's actually not happening and that governments aren't exercising undue pressure. We've seen time and again that these are underlying issues, but that when there's a lot of content, when there's a huge political event or a moment like we're seeing now and over the last few months, their systems simply aren't equipped to deal with the volume of content or to understand the nuance.
And so, one of the things that our reporting found was that the appeal function was actually broken: a lot of people weren't able to disagree with decisions or escalate their appeals within the platform itself, and that helps break down trust. If you think you're being discriminated against for your viewpoint, and then you try to appeal and can't get anywhere, it's only going to reinforce that. Similarly, some of the notifications people sent us didn't even cite a policy that had been violated. In some cases it was unclear whether the platform had determined that a policy was violated or whether a user had simply complained about the content, and the notification would say something like, "This post is offensive or may be offensive," when it was a post of a Palestinian flag. So the lack of transparency, the lack of appeal, and then problems with the policies themselves and their enforcement: together, that's how we got here.
Jameel Jaffer
I want to go back to the dangerous organizations and individuals issue once more. I find this actually really troubling. I mean, so troubling that it's actually kind of incredible that this is the situation. The US government has a list of foreign terrorist organizations. It has a list of designated terrorists, and US social media platforms are required by law not to provide material support to those organizations, and Meta at least has, in the past, interpreted its obligations broadly enough that it views certain kinds of content by its users as effectively putting Meta in legal jeopardy.
And so, Meta takes down content by its users that praises or discusses these organizations that the US government has decided, through an executive branch process, should be blacklisted in this particular way. This idea that the US government gets to decide the outer limits of public discourse, and not just any public discourse, but discourse about national security policy and war, core political speech, is kind of crazy as a First Amendment matter. Evelyn, how can you justify this kind of legal regime under the First Amendment? What is the set of arguments that makes this okay?
Evelyn Douek
This might be a totally naive thing to say, but surely platforms could decline to adopt this interpretation and take a more tolerant, or at least less risk-averse, approach, and certainly not take down everything that happens to mention Hamas. That's not exactly what they're doing, but they are erring on the side of caution, and a lot of that content gets taken down. Well, that's core political speech. It cannot be said to be materially supporting Hamas. The speech itself is protected in that it's independent political advocacy. Then, certainly, the platform's decision to neutrally provide a conduit for that speech would almost certainly be protected. And so I would hope that the platforms would say, "Bring it on. Bring a material support prosecution against us. We will raise our First Amendment defense, and we'll create some excellent First Amendment law that says all of this is protected."
Now, that's naive on so many levels. First of all, why would a platform do that? That doesn't sound like something that's in their interest to do, unless they have some unusually risk-tolerant general counsel. And second, like I said, the Humanitarian Law Project case itself is hard to reconcile with the rest of the First Amendment. It allowed the government to proscribe speech that included nonprofits giving these terrorist organizations international law advice on how to pursue their ends through peaceful means. We would think that to be core protected political speech, and the court held, no, that could be proscribed, and that is so hard to reconcile. And so, I guess there is also this concern that the test case wouldn't come out the way that we would hope, even applying all of our core First Amendment principles. I share your exasperation, Jameel.
Jameel Jaffer
Evelyn, there's been a long-running debate about the role that social media platforms are playing in determining who gets to speak, what gets said, and which ideas get traction in the digital public sphere. The claim that TikTok is turning young people against Israel played an important role, I think, in persuading some legislators to enact a law intended to ban that app altogether. Now, people claim all sorts of things. It doesn't mean that they're true, so I want to ask you first, how much evidence is there that TikTok is elevating pro-Palestinian content? Is there evidence of that?
Evelyn Douek
I mean, the short answer is no, in a word. One of the problems in this area, generally, is that these platforms are extremely opaque. It's very difficult to know what's going on on any platform, what they're elevating and what they're not. Deborah's account of the difficulties of studying Meta and what's going on there makes that clear. You can't do a systematic analysis of how a platform is censoring Palestinian content or boosting whatever kind of content; you have to rely on individual instances that may add up to demonstrate a pattern. So we're in a situation where we rely on a lot of anecdata, individual examples that get pulled out. And you're right: in the debate over the TikTok bill and how it got passed, it seems that the concern that TikTok might have been boosting pro-Palestinian content was something that helped get it over the line with lawmakers, and a lot of that was allegations or examples based on individual hashtags that went viral on social media.
So some individuals pointed to #StandWithPalestine having many more views than #StandWithIsrael. There are all sorts of problems with relying on those kinds of individual instances. The first is that it's such a limited data set. People use different hashtags; yes, those two hashtags are comparable, but we don't know how people are expressing their views on these topics, and they might use lots of different kinds of hashtags. Second, there's no evidence that the discrepancy is a result of TikTok's manipulation. It could be a result of organic user views on the platform. We know that TikTok is a platform that is especially popular with young people, and we also know that pro-Palestinian views are especially prevalent among that demographic. The fact that there is this discrepancy, or that one side, for want of a better word, is winning on TikTok, does not say anything at all about what TikTok is or is not doing behind the scenes. It could just be reflective of user sentiment about these issues.
Jameel Jaffer
So for the past few years, American social media companies have been arguing that they have a First Amendment right to decide what content appears on their platforms, and where and how that content appears. I think it's fair to say that every major First Amendment organization, including the Knight Institute, has accepted that argument, at least to a point, but then if we accept that argument, on what basis do we object when a social media platform decides it wants its users to see more pro-Israel content than pro-Palestinian content or vice versa? What kind of objection can we make there? Are we making a legal objection, or is it an ethical objection, or what?
Evelyn Douek
We'll have to wait a couple of weeks, when the court hands down the NetChoice decisions, to see whether we have any legal objection to this particular argument, because of course the Supreme Court is currently considering exactly this question. How much editorial discretion do platforms have? How much does the First Amendment protect? And can states constrain that editorial discretion, including, in Texas's case, by imposing an obligation of viewpoint neutrality on platforms? So that's exactly a live issue, and by the time this podcast comes out, the Supreme Court may make me wrong, whichever way I land on that particular point.
But I think it does highlight why it really matters how we think about these things and how important this issue is, because that is an enormous amount of power that we are giving to platforms in this moment of extremely high-stakes debate, a debate in which lives are at stake and that is so enormously important to so many people, to politics, and to policy. We don't necessarily know what's going on on these platforms, and these platforms have an enormous role in shaping that debate. I mean, I don't know about you, but my understanding of what is going on in Gaza is fundamentally shaped by social media platforms. That is one of my main sources of news. That is how I understand what is going on in Gaza, or what is happening, for example, on US campuses, and public sentiment about these debates as well. It is our window into understanding these events, and so the idea that platforms have infinite discretion to shape that is, I think, something that we should be concerned about.
And it's not only that they have the power to shape that, but that they have the power to do it without us knowing, without any transparency. That, I think, is the real key here, because maybe it is the case that these intermediaries should have discretion. News organizations publish editorials; they have a certain political viewpoint. You watch MSNBC, you read the New York Times, and they shape our understanding of those issues through the way they present them as well. But with social media platforms, we're sitting here having this conversation just trying to work out: what is Meta doing? What is TikTok doing? Are they elevating pro-Palestinian content? Are they censoring pro-Palestinian content? We don't have reliable answers to those questions, and that is something people are right to be concerned about. I don't know if it's a legal objection, but it certainly is, I guess, an ethical, moral, or policy one.
Jameel Jaffer
Should we be worried about the longer-term implications of the platforms' policies relating to speech about the war?
Evelyn Douek
The theme of this podcast is the role of social media platforms in shaping our understanding of this conflict, and a lot of that is focused on what's happening in the moment, but the platforms are also going to play a pretty important role, I think, in the aftermath, in the historical record, and indeed in legal actions that may take place after the fact about what's happening in Gaza right now, including war crimes prosecutions, for example. One of the downstream consequences of all of this upstream opacity, and potentially censorship, is that a lot of the historical record is also being distorted or potentially wiped away. When a video gets taken down because of a risk-averse or overbroad interpretation of a DOI policy, a hate speech policy, or a spam policy, whatever it is, that video disappears and can't be collected by the media organizations, historians, or lawyers who might ultimately bring certain actions. So the power that these platforms are wielding is enormous in this moment, but its effects could be felt for years and years.
Jameel Jaffer
So you're focusing just now on the problem of the lack of transparency about platforms. That is, I agree with you, a huge problem, but there is an adjacent problem, which I know you've thought a lot about, Evelyn, and which Deborah mentioned earlier: the problem of governments influencing the platforms' editorial decisions, if you think of them as editorial decisions. So it's not just that Meta and other major social media platforms have this immense amount of control over what we see online and what we think about the world. It's that Meta itself is, in some cases, doing the work of governments. Can you say a little bit about how you think about that problem and what lines might be drawn there, what kinds of limits we might want to place on the government's ability to influence social media platforms' editorial decisions?
Evelyn Douek
This has been a huge debate this year in US platform regulation circles, because of course the other big case that the Supreme Court is considering this year is a case about what is colloquially known as jawboning. In that case, Murthy v. Missouri, the question is whether the Biden administration illegitimately or unconstitutionally pressured platforms to remove content about politics, election disinformation, or COVID misinformation, based on various kinds of pressure behind the scenes or in public. We've heard a lot of voices, predominantly conservative voices, saying, "This is unconstitutional. We should be really worried when the government comes to platforms and asks them to take down protected speech, and does it in a way that we don't have any insight into," because basically that would allow the government to do an end run around the First Amendment.
If the government can't regulate that speech directly, it shouldn't be able to regulate it indirectly by knocking on the platform's door and saying, "Can you do this for us? We can't do it, but it'd be great if you could." Now, that is exactly the concern that we have in this context, whether it's the US government pressuring platforms in public or behind the scenes. Congress hasn't been shy about its views on what sorts of voices need to be censored here. We've seen this debate playing out on college campuses, and so you don't have to think very hard, if you're a platform, about what kind of content Congress would want you to remove from your platforms too. But of course, you'd also be concerned about other governments, including the Israeli government, or, indeed, Europe, where we have this new regulatory package, the Digital Services Act.
Commissioners were coming out at the start of the war and saying, "We're very worried about platforms allowing terrorist content to run rampant on their services, and we have this new big hammer that we're willing to wield against them, with huge fines." Again, if you're a platform, and this is what lawmakers are telling you, and these are the threats that they're making, you don't have any particular incentive to make sure that you're being very careful about censoring only the actual terrorist content and not sweeping up the other content that mentions Hamas in passing or talks about Gaza. Your incentive is just to err on the side of caution and stay in their good graces, and I think that should be a real concern. That is government using its informal powers, when it doesn't have the formal power to censor this content, to try to make platforms restrict debate in particular ways.
Jameel Jaffer
Deborah, Evelyn. Thanks for making time for this.
Evelyn Douek
Thanks so much for having us.
Deborah Brown
Thank you so much.
Jameel Jaffer
On the next episode of War and Speech, we discuss free speech and anti-discrimination law with Professor Michael Dorf of Cornell University. Views on First is produced by Ann Marie Awad, with production assistance and fact-checking by Isabel Adler, and research and fact-checking by Hannah Vester. Candace White is our executive producer. The art for our show was designed by Astrid Da Silva. Views on First is available on Apple, Spotify, and wherever you get your podcasts. Please subscribe and leave a review. We'd love to know what you think. To learn more about the Knight Institute, visit our website, knightcolumbia.org, and follow us on social media. I'm Jameel Jaffer. Thanks for listening.