On October 20, 2023, the Knight Institute will host a closed convening to explore the question of jawboning: informal government efforts to persuade, cajole, or strong-arm private platforms to change their content-moderation policies. Participants in that workshop have written short notes to outline their thinking on this complex topic, which the Knight Institute is publishing in the weeks leading up to the convening. This blog post is part of that series.


We were jawboned. Repeatedly. Routinely.

We both worked on the public policy team at Facebook: Katie as the head of the politics and elections team, and Matt as the head of the policy development team. We were there for roughly a decade: Katie from 2011 until 2021 and Matt from 2011 until 2019. In our roles, we met regularly with government officials in the United States and internationally. We met regularly with company executives, often in decisional meetings to determine the best course of action for addressing a challenge.

During our tenure at Facebook, jawboning was incessant. It increased in prevalence as the U.S. government stumbled in its attempts to impose more stringent regulations on the tech sector, despite escalating frustrations with social media generally and Facebook in particular. That gap—between the anger at the industry and the inability to take punitive or preventative action against it—was filled by jawboning.

The frequency and intensity of jawboning mean it is as closely linked to our experience working at Facebook as hoodies and cafes with cold brew dispensers. It was so prevalent that we knew the concept before we even learned of the term. A government official can’t get what they want by passing a law or implementing a rule, so they lean on someone they know—and point and yell or give a serious stare—and threaten retribution by some other means.

Jawboning isn’t only about outcomes. It’s also about power and stature. Before 2016, the tech industry was the place every policymaker wanted to visit to look cool and appeal to younger voters. After 2016, the tech industry was viewed as a destroyer of democracy. We were down. Not popular. And everyone wanted to pile on with criticism. That left us eager to take steps to prove we were responsible actors and to be liked again, which made the jawboning we received particularly persuasive.

Jawboning evokes emotion as well. When we were jawboned, we were often accused of caring only about money or not caring at all about social good or democracy, even when we were working long hours to try to advance those objectives. We were told that hard decisions were easy, that certain consequences we weighed heavily didn’t matter (like costs to online expression or the potential for online speech to translate to offline harms), and that the right decision would become obvious if we were smarter or more ethical. Being on the receiving end of this type of criticism can take a psychological toll.

In this post, we examine jawboning from the perspective of practitioners. How did government officials jawbone Facebook employees, and how did it affect company decision-making? We then discuss why jawboning can be problematic, as well as some of its benefits. We conclude with some ideas about steps that companies and governments could take to avoid the most problematic aspects of jawboning, focusing on transparency that will bring these practices out of the shadows. Our view is that the government and tech industry should maintain open lines of communication so that the government can educate the industry about its perspectives on company decision-making but should avoid using its power and influence to pressure or coerce a company into changing its position. If the government wants to change a company policy or decision, it should do so through formal democratic channels.

Jawboning in practice

In our time at Facebook, jawboning was sometimes explicit and sometimes implicit. In a meeting, a government official might push for an outcome on one particular issue. If we declined or demurred, the official might threaten to punish the company in another way.

Sometimes, the jawboning was far more subtle than a request paired with a threat. Battered in the wake of the 2016 election by journalists, academics, and civil society, as well as government officials—on front pages, on op-ed pages, in congressional hearings, and at conferences—Facebook shifted its approach to free expression in response, vacillating between vigorously defending online speech and more aggressively policing content on the platform. It’s hard to separate which changes the company made because of government threats and which it made because of overall public pressure. But the government pressure undoubtedly became more impactful because of the public dynamics around tech platform accountability.

A complicating factor is that much of what the government and others wanted changed dealt with content. Government officials piled on publicly and privately, even though the First Amendment likely barred them from acting to regulate Facebook’s content moderation policies and practices. In other cases that didn’t deal with content, where the First Amendment or other legal restrictions were not a barrier to government action, a policymaker sought an outcome that he or she could not achieve by passing a law or a rule, for lack of political support.

For example, in a Senate office, Katie once explained Facebook’s approach to providing more transparency around political ads. The Honest Ads Act had been introduced in 2017 and would apply to the internet the requirements, limitations, and protections that govern political advertising in traditional media. The bill would require disclosure of information such as the audience the ad targets, the average rate charged for the ad, and the name of the candidate, office, or legislative issue to which the ad refers.

However, the bill did not cover the use of data directly by campaigns to segment voters. That data could be uploaded to a platform like Facebook through a tool called Custom Audiences, and advertisers could use it for targeting.

The senator’s office told Katie that they really wanted to ban that practice but knew they would never get it through the Senate, since so many campaigns relied on such tools in their elections. So instead, they said, they were going to pressure tech companies like ours to ban the use of the tool, in the hopes that if one of us did so, the others would as well. Although Facebook did not stop offering Custom Audiences entirely, it and other platforms did dramatically reduce the targeting options available to political advertisers.

Of course, even the most vigorous jawboning does not always cause a company to change course. Government officials routinely ask for changes to company policy that the company rejects or ignores. The last decade of encryption policy provides one example. Despite repeated public and private calls for companies to build mechanisms for law enforcement to access user data, companies have largely refused to alter the level of encryption they provide. Another example is Facebook’s refusal to remove altered videos of Nancy Pelosi, despite active public and private efforts to get the company to change its mind. Speaker Pelosi subsequently leveled vague threats against the company: “All they want are their tax cuts and no anti-trust action against them.”

Why jawboning is problematic

Based on our experience being jawboned at Facebook, we see two primary reasons that jawboning can be problematic: It is antidemocratic, and it degrades the quality of company decision-making.

First, jawboning circumvents the democratic process, ignoring important legal and political realities. If action cannot be taken through the normal executive and legislative processes, that may be the result of insufficient political consensus in support of that action. And when a court strikes down executive or legislative action as illegal or unconstitutional, those judicial determinations may reflect important considerations about permissible conduct.

Circumventing the democratic process is particularly problematic at a time when democracy is backsliding all around the world. Other countries look to the United States to set an example. Already, we see governments getting increasingly aggressive in asking companies to take action—and not just in authoritarian countries. This summer, France and some EU commissioners suggested they might shut down platform access during civil unrest. That threat could lead platforms to take down content more aggressively in the hopes of avoiding a shutdown. That kind of power shouldn’t be solely in the hands of a government or a company.

Jawboning can also be antidemocratic because of the practical realities of the power dynamics between the jawboner and the jawboned. We might imagine that jawboning occurs most frequently between principals, but in practice, jawboning is as much about interactions between junior staffers as it is about conversations between a senator and Mark Zuckerberg. That’s because staffers interact regularly, sometimes daily, when an issue is hot. The power dynamics are often particularly acute at the staffer level, with a leading staffer of a congressional office or a federal agency interacting with someone at a company who feels far less empowered to push back than a CEO or a COO might.

Second, jawboning leads to flawed company decision-making. Inside the companies, jawboning can push executives to make ad hoc decisions in response to pressure from individual policymakers—decisions that could run counter to what the majority of government wants in the long run and counter to the interests of a company’s users. When these situations arise, decisions are often made quickly and in response to individual events rather than in a strategic and thoughtful way. Those decisions will be shaped by who has more power and sway internally, and by the desire to stop public ridicule that is hurting the company’s reputation, rather than by research and long-term policy development. In addition, jawboning can silence less powerful speakers and the public overall, as government pressure can sometimes be at odds with what is right for the people using the platforms. A company’s user base may be large but diffuse, and users may be less able to marshal a strong point of view on a topic than an individual politician who can write op-eds and public letters, make speeches, and hold hearings.

Often, we received conflicting messages from government officials about what they wanted us to do. We see this today: some worry that platforms take down too much content, while others worry they don’t take down enough. This can cause platforms to tie themselves in knots trying to make everyone happy.

A few benefits of jawboning

Jawboning may be problematic for the above reasons, but it has some benefits, too.

First, poor communication between companies and the government has stymied smart tech governance. It’s worth remembering that one of the gaps identified after the 2016 election regarding Russian interference was the lack of communication between the government and tech companies. New agencies such as the Cybersecurity and Infrastructure Security Agency (CISA) were created, and companies started to build teams specifically focused on these new adversarial threats. These teams made it a point to communicate more with the government and other companies. This information sharing has helped to reduce the amount of foreign interference on these online platforms.

The opportunity to jawbone can lead to more open lines of communication between the private and public sectors and provide forums for each side to learn about the other. The chance of altering a company’s position incentivizes government officials to communicate with industry, and the potential for retribution incentivizes companies to maintain an open door to government officials.

Second, jawboning may produce more desirable outcomes in some cases. The most obvious example would be a case where a company’s position is the wrong one—either because it objectively produces problematic results or because it serves a company interest while imposing steep societal costs—but the government is hamstrung in its ability to address the issue because of legal limits on its authority, such as the First Amendment. In that case, changing the company’s position would produce a better result, and jawboning is simply a mechanism for achieving that outcome. Jawboning fills a gap in the government’s authority, and when filling this gap yields a preferable solution, jawboning could be viewed as a success.

Third, dialogue can sometimes be most productive when it occurs outside of the spotlight. Sunlight is undoubtedly a disinfectant, but it can also burn important nuances out of a discussion. Would a company executive raise an unpopular but accurate consideration in response to a question from a government official if he knew that his comment would be made public? Would a government official be willing to ask questions to learn more about an unpopular tech company policy decision if he knew his line of inquiry would end up ridiculed on social media the next day? Private dialogue can give each side room to explore, learn, and brainstorm options, sometimes producing better outcomes than public interactions.

These benefits are important to consider in crafting options for constraining jawboning. In the next section, we describe our suggestions for these constraints. In light of the positive effects of jawboning, we caution against implementing solutions that cut off communication between the government and tech companies, which could exacerbate the challenges that companies and governments face in developing smart tech governance.

Beyond jawboning

Governments and companies should have open lines of communication on difficult issues. Government officials have access to information that companies lack, and as democratic representatives, they have important perspectives on national values and interests that differ from those of a company that builds technology products. Using this perspective to educate companies is important for the government, but it should not cross a line into undemocratic influence.

To preserve the benefits of jawboning but limit its problematic effects, we recommend a series of steps that can help lead to a healthier system of accountability between government and companies. We leave the questions of constitutionality to others (including the legal scholars participating in this symposium) but aim here to provide guidance that both companies and government officials might use to educate each other about their respective points of view, without exerting undemocratic pressure that is likely to lead to flawed company decision-making.

A path for governments

First, the administration should publish an executive order outlining permissible conduct for government officials when they engage on policy issues with industry executives. This type of guidance already exists in some contexts. For instance, federal regulations already govern how officials at the Federal Trade Commission can interact with those outside of the FTC on a pending matter. A broader executive order covering all executive agencies and the White House would help establish norms for permissible conduct.

The order should clearly delineate between education and influence. Educational conversations should be permissible, such as when the government provides information to companies that might inform their decision-making, or when companies offer product tutorials or context about their policies and enforcement processes. The order should also require any government requests to remove content or access user data to be in writing and submitted through a formal channel, such as a law enforcement portal. The executive order might also include public disclosure obligations to facilitate transparency, such as an annual report to Congress on contacts between executive branch officials and industry. Congress would also have the power to hold oversight hearings to better understand how government officials are engaging with the companies and how the companies responded to those communications.

Second, companies should have a formal mechanism to report inappropriate jawboning. Federal regulations on ex parte communications are a useful analogy. If a non-government employee seeks to influence an FTC proceeding through ex parte communications, a government employee must promptly disclose those communications. Similarly, a company employee should have a mechanism to disclose a government communication intended to influence a corporate decision. The executive order mentioned above could specify how this process should look in practice, including the scope of prohibited conduct, the procedures that companies should follow in making these disclosures, remedies, and how the government should be transparent with the public about the number and nature of the reports it receives.

Third, the government should institute a firewall between those who engage with technology companies on national security or public health issues and those who handle regulatory and policy matters for the tech industry. A similar firewall exists in the political context, where certain political action committees cannot communicate directly with campaigns and vice versa. Instituting one for government communications would add a buffer, ensuring that an official asking a technology company for data or for action on content does not also have the power to enact penalties against the company.

Finally, Congress should pass legislation that authorizes data sharing with companies, including sharing from companies to the government and between companies and researchers. This may seem a step removed from jawboning, but we think the two are related. At the end of the day, jawboning occurs because of concerns about the harmful impacts platforms might have. To know what those impacts are, we need to facilitate more data sharing and an ongoing, open dialogue about product and policy impacts. Congress should specify the types of sharing that are permissible and provide companies and researchers with safe harbors for data sharing that abides by certain privacy and security best practices.

One additional point merits mention. In our view, based on our experience being jawboned, the prevailing test of jawboning’s legality—coercion (illegal) versus mere persuasion (legal)—is too difficult to implement and too permissive in enabling the government to use its power to exert influence over the speech decisions of private companies. First Amendment expert Mike Masnick thoughtfully emphasizes the value of drawing this line, and we agree with him that any conduct crossing over into coercion should not be permitted. But in our experience, it may be valuable to consider shifting that line to prohibit a wider range of conduct, since persuasion can quickly approach coercion in the hands of a powerful actor. While we are not law professors or legal theorists, and therefore defer to others on the proper approach for jurists in delineating between legal and illegal conduct, our experience suggests that even when company employees are on the receiving end of “mere” persuasion, they may find it difficult to decline to pursue the government’s preferred path.

For this reason, the government should aspire to a position of education rather than argumentation. The government should certainly be able to outline its views of optimal policy and its understanding of the impact of a company’s decisions on American lives, but it should do so in an educational register. In this respect, the government might consider borrowing from the rules that govern advocacy by nonprofit employees: You can meet with officials to educate, but not to influence.

A path for companies

Even though government officials initiate jawboning, companies have a role to play in reforming jawboning practices. For instance, they should share data that will help the government better understand the impact of their products and product policies. They should conduct regular training on their tools, as Katie did almost daily in her job at Facebook. In these interactions, they should encourage the government to provide feedback and incorporate this feedback into their product and policy development.

They should also maintain official public and private channels for government input. Companies should have an intake system for the governments to make requests, such as a law enforcement portal. They should design this system to ensure that only recognized government officials can make requests and that those requests are routed to appropriate company decision-makers. They should then aggregate data on government requests and publish it regularly in transparency reports.

Companies should also try to diversify their internal decision-making processes. Public policy teams are an important voice in many types of cases mentioned in the debates over jawboning, such as COVID-19 misinformation content moderation. At the same time, public policy teams are likely to be more heavily influenced by senior government actors and more wary of public relations considerations. Those judgments should not be sidelined entirely but also shouldn’t solely determine outcomes. Incorporating the perspectives of safety, product, and business teams into company decision-making will help to limit the impact of jawboning within companies.


Engagement between governments and tech companies is necessary but also fraught with challenges. While we aren’t at Facebook—now Meta—anymore, our colleagues there and at many other companies continue to speak to government officials every day. And the gap between the desire to regulate or rein in tech and the inability to actually do so still exists. The cycle starts again as the government and Congress pressure companies to self-regulate on artificial intelligence.

These issues are too important to be left solely to CEOs, their boards, and shareholders. But the government shouldn’t use threats to push for an outcome it can’t get through normal executive and legislative processes that would stand up in court.

At the same time, we must be careful not to throw the baby out with the bathwater when looking for the right guardrails to put in place. While we suggest that accounting for the impact of government power might result in prohibiting some communication that is persuasive but not coercive, we also recommend implementing transparency and oversight systems. These systems will enable us to preserve the valuable benefits of educational communication between government and companies, while putting in place some accountability mechanisms that will help to mitigate the most problematic behavior. These difficult issues need more democratic processes and input, not less.