On October 20, 2023, the Knight Institute will host a closed convening to explore the question of jawboning: informal government efforts to persuade, cajole, or strong-arm private platforms to change their content-moderation policies. Participants in that workshop have written short notes outlining their thinking on this complex topic, and the Knight Institute is publishing those notes in the weeks leading up to the convening. This blog post is part of that series.

***

Governments around the world “jawbone” platforms by pressuring them to take down users’ speech. In the U.S., those pressures can be unconstitutional. The seminal Bantam Books case held that a state “Commission to Encourage Morality in Youth” violated the First Amendment by pressuring book distributors to stop circulating allegedly obscene books. More recently, the Fifth Circuit held in Missouri v. Biden that members of the Biden administration had exceeded constitutional bounds in urging platforms to remove misinformation about COVID-19.

State actors are not going to stop talking to platforms, though. We shouldn’t want them to stop. And interposing courts to mediate every discussion, in order to avoid any risk of prior restraint, is not an option. Even Bantam Books itself made clear that state employees need not “renounce all informal contacts” with private speech distributors. The question is what constitutional guardrails should shape these important and unavoidable conversations.

I’ve been speaking and writing about jawboning for half a decade now. I was on the receiving end of jawboning for a decade before that, as a lawyer for Google. I have a lot of thoughts; this post shares six of them. Some are cranky. (Part 1: First Amendment Rules Need to Cover Speech We Like and Also Speech We Don’t Like.) Some push in opposing directions and make the constitutional questions about jawboning even murkier. (Parts 2 and 3: State Actors Can Violate Users’ First Amendment Rights Without Coercing Platforms and State Actors Should Be Able to Yell at Platforms Without Violating the First Amendment.) One sounds Pollyannaish, but I think it’s a big deal. (Part 4: Transparency from the Government Can Solve a Multitude of Problems.) One is boring but hopefully useful. (Part 5: There Are a Lot of Other Legal Fault Lines, and They Are Messy.) The last one goes back to being cranky. (Part 6: Everyone Is Doing It.)

1. First Amendment Rules Need to Cover Speech We Like and Also Speech We Don’t Like

This one should go without saying. The First Amendment limits on state efforts to suppress lawful speech should be the same whether or not we share the state’s disapproval of particular lawful-but-awful speech. If extreme situations—like a global pandemic—warrant special leeway for state action, there should be a coherent doctrinal reason why that is the case. Moral instinct is not enough.

Older jawboning cases can provide a test suite of sorts. The rules we accept in a case like Missouri v. Biden should also lead to justifiable outcomes if U.S. state actors behave in the ways described in these cases. I will use these cases as reference points throughout this post.

  • Zhang v. Baidu: Democracy activists sued the Chinese search engine Baidu in a U.S. district court, arguing that Baidu had suppressed their speech at the behest of the Chinese government. The court held that the search engine had a First Amendment right to set its own editorial policies, even if that meant doing what the Chinese government told it to do.
  • Dart v. Backpage: The sheriff of Cook County wrote ominous letters to Visa and other payment processors, urging them to suspend service to Backpage—a site that featured commercial sex advertisements and allegedly turned a blind eye to sex trafficking. The Seventh Circuit held that the sheriff had violated the First Amendment.
  • Adalah v. Cyber Unit: In Israel, Palestinians sued a government Internet Referral Unit (IRU), arguing that the IRU had unconstitutionally caused platforms to suppress lawful dissent. The IRU had, they alleged, wrongly identified lawful posts as unlawful terrorist content, and asked platforms to remove them under their Terms of Service. The Israeli Supreme Court held that the plaintiffs had presented insufficient evidence that platforms had been coerced into acting against their will, or even that any lawful speech had been removed. Affected users had not been notified of the state’s role when their posts were removed, and the IRU did not keep records of the removed content.

2. State Actors Can Violate Users’ First Amendment Rights Without Coercing Platforms

Commentators and courts in jawboning cases often focus on whether state actors “coerced” online intermediaries. The sheriff’s behavior in the Backpage case seemed demonstrably coercive, for example, while the milder communications in Missouri and some other U.S. cases did not. The Baidu case likely involved little direct coercion at all, since Chinese Internet companies generally do not wait to be told what the government wants.

Coercion matters, of course, if the claim is that the state has violated the platform’s speech rights. It matters in some cases about users’ rights. But it should not be the sine qua non for jawboning claims. Users’ rights against state actors should not disappear simply because platforms were not coerced. In practice, that test would provide no check on state power when platforms are indifferent or eager to garner government approval. Rational platforms may well do whatever makes powerful officials happy, as long as it does not impose economic, reputational, or other costs on the platform. Acquiescence on platforms’ part should not eliminate users’ rights against state action.

The Fifth Circuit tried to cover this situation by applying two distinct legal standards: one prohibiting “coercion” and one prohibiting “substantial encouragement,” in which the state exercises “active, meaningful control over the private party’s decision.” Together, those cover two scenarios: when platforms are pressured into forfeiting users’ rights, and when they forfeit those rights simply by being passive and eager to please authorities. That leaves an important third scenario: when platforms authentically and independently choose an editorial position that aligns with state preferences. Many jawboning claims fall into that third category. Generally speaking, those claims should fail.

There are some wrinkles, of course. For one thing, applying any of these standards is hard, as the Fifth Circuit’s conclusions about agencies like the CDC illustrate. It’s also strange to treat platforms’ existing rules as “voluntary,” when platforms’ practices are often the product of years of state pressure. This is most conspicuously true for highly political, well-publicized topics. Platforms gradually expanded their removal of lawful but terrorism-related content under pressure from leaders like UK Prime Minister Theresa May in the 2010s, for example. Should that history cease to matter once platforms stop resisting, and metabolize state-initiated changes as part of their own internal policies and external talking points?

Whatever the standard applied, it is important to distinguish speakers’ rights against the state from rights against private platforms. (That’s why I called my article on this topic Who Do You Sue?) Users might lack any cause of action against Facebook, but still have one against the FBI. Or their claims against platforms might fail because of the platforms’ own statutory immunities or editorial rights, while claims against the government might still succeed.

3. State Actors Should Be Able to Yell at Platforms Without Violating the First Amendment

Dialogue between governments and platforms is important. Governments have historically had a lot to learn from platforms about technology and content moderation. Understanding which problems cannot be solved by “nerding harder” is essential if legislators are to pass good laws, enforcers are to set good priorities, and elected officials are to identify viable policy choices.

Platforms have even more to learn from the government. State officials are uniquely positioned to provide reliable information about the locations of polling stations or libraries, for example. Voters and taxpayers invest heavily in building up agency expertise about topics ranging from disaster relief to virology—and many platforms appropriately choose to rely on it. Smaller platforms may also actively welcome guidance from federal anti-terrorism experts, particularly if no one at the platform speaks the languages used in potentially terrorist posts or videos. (Those same platforms will, of course, be less able to recognize state overreach if officials claim that legal speech violates the law—something Israeli officials almost certainly did in Adalah.)

The range of legitimate state communications to platforms might even include what the Fifth Circuit called “foreboding, inflammatory, and hyper-critical phraseology.” I’ve been on the receiving end of such communications, and it is not fun. But voters elect politicians to advance their stated policy goals. It would be perverse and undemocratic if gaining office meant that an elected official could no longer give voice to those policies. Important communication would also break down if platforms had no way to know when officials are angry with them or want to change the laws that affect them. Shielding platforms from that crucial information, and excluding them from policy discussions in the name of avoiding coercion, would not be productive in the long run.

4. Transparency from the Government Can Solve a Multitude of Problems

State actors could solve a lot of problems by being more transparent about their efforts to influence platforms. To put that in litigation terms, courts should more closely scrutinize government communications that are hidden from the public. The Fifth Circuit was attentive to this at oral argument, implying that there might be a legal difference between President Biden’s very public statement that platforms were “killing people” and aggressive messages that the White House delivered by email or in private meetings. Public transparency seems particularly important if officials engage in heated rhetoric (because being public can make such statements less intimidating) or advocate for legislative change (because publicity connects the statements to a democratic process, rather than a backroom negotiation that might circumvent that process).

State actors who know their actions will be seen by members of the public are less likely to try anything improper in the first place. Public disclosure reduces the temptation to bluff about facts or legal authority, since critics with diverse beliefs and areas of expertise may notice. Disclosures in the form of notifications to the affected users would have eliminated a major problem in the Israeli case: Neither affected speakers nor the courts could tell when the state had a role in silencing particular speech.

Transparency about what state officials are doing would also have benefits on the platform side. Platforms of all sizes might be less likely to simply hand over the reins to state officials if they knew that users and shareholders would notice. Publicity would also increase the odds of improper state demands coming to the attention of the right people inside the platform—including lawyers who will recognize a Bantam Books situation when they see it. Disclosure about state actors’ behavior could change the tenor of interactions ranging from senators’ private luncheons with CEOs at Davos to legislative staffers’ hallway discussions with junior platform employees at CES. Platform employees might have more reason, and more cover, to resist state pressure if everything were going to be made public.

Transparency can’t solve everything, of course. If the Israeli government told Facebook that its users were actually planning a terrorist attack, the government might justifiably not want the platform to notify those users of the allegation right away. Similarly, the URLs of child sexual abuse material (CSAM) reported to platforms should not be publicly disclosed. Neither should information that would help CSAM purveyors evade detection in the future. Some productive communications can be deterred by fear of disclosure, as David Pozen has pointed out, and unduly burdensome transparency rules can be just as wasteful for governments as they are for platforms. As a signal about the constitutionality of state action, though, public transparency can do a lot of work.

5. There Are a Lot of Other Legal Fault Lines, and They Are Messy

Many other potential fault lines might add nuance—and complexity—to the assessment of jawboning claims. Courts might, for example, distinguish between state communications of the following kinds.

1. Factual statements that have no particular coercive force, but are useful to platforms in making their own decisions (like the CDC’s take on Covid transmission mechanisms)

2. Factual statements that carry an implicit threat of liability if a platform does not act (like the DOJ identifying fentanyl distribution sources)

3. Emotionally charged language that lets platforms know how angry officials are (like President Biden’s “they’re killing people” statement)

4. Threats of

    4a. Prosecution
    4b. Legislative change
    4c. Reputational harm
    4d. Other adverse actions

5. Threats that

    5a. Relate to the removal request (like prosecution for material support of terrorism)
    5b. Are unrelated (like an IRS investigation)

6. Threats from

    6a. State actors with authority to follow through
    6b. State actors without that authority
    6c. State actors who cannot act unilaterally, like individual members of Congress threatening legislative change
    6d. State actors from specific branches of government (executive, legislative, or judicial)

7. Requests that

    7a. Claim speech is unlawful
    7b. Imply that speech might be unlawful
    7c. Clearly concern only lawful speech

8. Moral suasion, pragmatic arguments, and the like

9. State communications that lead platforms to

    9a. Enforce their existing speech policies
    9b. Change their policies
    9c. Violate their policies (leaving violating content up or taking non-violating content down)

Many of these distinctions have come up in cases or in commentary on jawboning, but their relative importance, and the interactions among them, are far from clear.

6. Everyone Is Doing It

Both conservative and liberal state actors pressure platforms to remove lawful speech. Both conservatives and liberals pressure platforms to reinstate posts that match the state actors’ own preferred messages. Missouri v. Biden has state actors on both sides of the v, as lawyers say. The Republican attorney general of Missouri wants platforms to carry anti-COVID-vaccine speech; the Democratic president wants them not to.

Those positions could easily be reversed. A Democratic attorney general in a state like California might one day demand that Elon Musk reinstate Black Lives Matter posts or pro-transgender-rights posts on Twitter, for example. Or an attorney general might insist that Twitter users should be allowed to criticize political leaders in India, Turkey, or Saudi Arabia. A future President Trump might disagree, and his White House might urge Twitter and other platforms not to permit such content—perhaps saying that it incites violence or undermines foreign relations. The resulting California v. Trump case would be, politically, the mirror image of Missouri v. Biden.

It is also more than a little artificial to say that only the attorney general demanding reinstatement of user speech has the First Amendment on his side. Every state actor involved in these cases is bringing state pressure to bear on platforms and trying to get them to adopt the state’s preferred rules for users’ speech. The attorneys general in Missouri v. Biden and the hypothetical California v. Trump use the state’s litigation might—and presumably also private pre-litigation communications—to pressure platforms to carry certain speech based on its content and viewpoint. If their efforts succeed and platforms reinstate that content, the attorneys general will have effectively forced many Internet users to listen to messages they did not want to hear, as the cost of accessing the information they are actually interested in. Those listeners have First Amendment rights. So does almost everyone else in this scenario, including the platforms. Every state actor in these cases is at least a little bit suspect. Casting any of them as straightforward, heroic crusaders for speech would be a mistake.