Regulators who want to make tech platforms more accountable for their content moderation decisions have increasingly turned to transparency mandates. Lawmakers want more information about platforms’ rules, how they enforce them, and how information circulates on their services.

These mandates follow years of pressure from academics and civil society to force platforms to open up their content moderation systems to researchers and the public. There are two reasons transparency mandates have become such a focus of reform. First, regulating in the dark is always a bad idea and bound to have unintended consequences. Lawmakers can’t solve problems they don’t understand, and given the profound ways platforms’ decisions can affect society, it is truly astounding how little we know about the content moderation decisions platforms are making. Second, as we explore in this series of blog posts, substantive government regulation of speech decisions is constitutionally tricky and subject to important limits. Procedural regulations are therefore an enticing alternative.

Some have argued, however, that the First Amendment strictly limits not just substantive regulation of social media companies’ content moderation decisions, but transparency regulation as well. In recent months, a number of litigants, academics, and civil society groups have argued that transparency mandates of the kind that have been proposed or applied to the platforms violate the First Amendment. The trade association NetChoice, which represents platforms before the courts, has made this claim repeatedly. To support this argument it has relied upon the Supreme Court’s 1979 decision in Herbert v. Lando.

Those who invoke Herbert tend to emphasize the line in the opinion for the Court that asserts that “no law that subjects the editorial process to private or official examination merely to satisfy curiosity or to serve some general end such as the public interest … [could] survive constitutional scrutiny as the First Amendment is presently construed.” They interpret this statement to mean that, because platforms’ content moderation rules reflect their editorial judgments, lawmakers generally cannot require them to disclose information about how they formulate or enforce those policies (because doing so would only further a “general end”). Some have gone so far as to suggest that transparency mandates would be constitutional only in individual cases when there are specific plaintiffs who have suffered specific legal harms—and even then, maybe only in the context of a court proceeding.

It’s worth pausing to note the extraordinary breadth of this reading of Herbert. The argument seems to be that the government cannot require some of the most powerful companies in the country—the world, perhaps—to provide even basic information about their policies and practices, when those policies and practices involve making decisions about speech. This would be a remarkably deregulatory outcome that disables almost any government regulation of much of the technology sector. Fortunately, this is not what Herbert says, as we explore in the rest of this post.

Herbert is not Clear

Let’s start with the holding of Herbert. The Court held that there was no First Amendment bar to discovery orders that required newspapers to provide information about their editorial processes when relevant to the litigation of a defamation lawsuit. Specifically, the Court overturned a lower court’s decision that prevented plaintiffs in a defamation case from questioning an employee of CBS, one of the named defendants, about the editorial process. Contrary to the reading urged by critics of transparency mandates, the opinion was on the whole rather pro-transparency. Justice White noted approvingly in his opinion that “courts across the country have long been accepting evidence going to the editorial processes of the media without encountering constitutional objections.” The Court also rejected the idea that prior First Amendment decisions—such as the 1974 decision, Miami Herald Publishing Co. v. Tornillo, which held that “neither a State nor the Federal Government may dictate what must or must not be printed” in the newspaper—“impliedly suggest that the editorial process is immune from any inquiry whatsoever.” It consequently permitted the disputed discovery order.

To reach this result, the Court adopted a purposive approach to the First Amendment analysis in cases that involve questions about editorial discretion. Rather than treating the First Amendment as an absolute categorical bar against any inquiry into editorial processes, the Court looked to why the editorial process was ordinarily protected. It recognized that, although protecting information about the reporting and editing process from disclosure might sometimes be necessary to avoid imposing “an intolerable chilling effect on the editorial process and editorial decisionmaking,” certain kinds of chill were not incompatible with the values and interests that the First Amendment was intended to protect. As Justice White put it: “[I]f the claimed inhibition flows from the fear of damages liability for publishing knowing or reckless falsehoods, those effects are precisely what New York Times and other cases have held to be consistent with the First Amendment.” This makes sense. After all, the whole point of causes of action such as defamation is to prevent the publication of harmful falsehoods!

That is, Herbert says we must look to the purpose of the protection of editorial discretion and not only the fact that it exists. Such a purposive approach means that to figure out whether transparency mandates are constitutional, we need to first understand when these mandates—and any possible chilling effect they may have on speech—threaten First Amendment values. In other words, do First Amendment values require social media platforms’ power over speech to be completely opaque and unbounded? Or are there, perhaps, First Amendment values protected by understanding how platforms curb people’s speech that also need to be weighed in the balance? 

Answering these questions may require taking account of the differences between social media companies and newspapers. “Editorial processes” and “editorial discretion” are not blanket terms that apply in exactly the same way every time a company decides whether or not to publish speech. Platforms and newspapers perform different functions when it comes to the public sphere, and the way in which they exercise their power over speech is very different. The burdens transparency might impose on that power are also very different, as suggested by the companies’ own practices and statements. Many platforms, and all the major ones, now publish content moderation transparency reports on a regular basis. Many have also called for transparency mandates as a path forward for regulators. Clearly, then, platforms do not view all transparency obligations as unreasonable infringements on their content moderation practices.

Part of the work, then, of figuring out whether (and which of) the new crop of platform transparency mandates are constitutional will be working out why and how different entities exercise their discretion, what First Amendment values are protected by that exercise, and how different kinds of transparency mandates might burden it. Until this work is done, what Herbert means in the new context of social media is not at all clear.

Herbert is not Stable

Herbert is a decades-old case, and there’s reason to believe some of its statements about editorial discretion would not be considered persuasive today. For one thing, the broader statements in the opinion about disclosure mandates in contexts other than defamation lawsuits are dicta. The issue before the Court was in fact quite narrow, and its musings about what might happen in other cases do not bind lower courts.

Second, Herbert is a case that revolves around New York Times v. Sullivan, which established the “actual malice” standard for defamation proceedings involving public officials. As discussed in our previous post on Alvarez, at least two justices on the Court have expressed skepticism that Sullivan is well-suited to our modern information ecosystem and the era of social media. Whether or not that skepticism is warranted, we simply note here that it might affect how the Court would apply Herbert today.

How far courts will extend Herbert or confine it to its facts is also still up for grabs. Will courts even consider Herbert relevant to questions about social media platform transparency? So far, they haven’t. Neither of the district court opinions issued in ongoing litigation involving the recently enacted Florida and Texas social media transparency mandates mentioned the decision. Nor did the Eleventh Circuit when, just this week, it held that all but one of the transparency obligations written into Florida’s new social media law were not substantially likely to be unconstitutional, and could go into effect. Herbert was also not mentioned in a recent decision by the D.C. Superior Court, which allowed the D.C. attorney general to subpoena Meta for information about the enforcement of its content moderation policies in order to determine whether Meta violated the District’s consumer protection laws. Nor did a 2019 case preliminarily enjoining a Maryland law that required online publishers of social media and news websites to disclose information about the political advertisements they host include any reference to Herbert.

Instead, courts have applied Herbert almost exclusively in the context of discovery orders against journalists or newspapers—see, for example, here, here, and here—and have almost never relied on Herbert in cases involving ex ante disclosure mandates or agency investigations. This makes sense—Herbert does not mention legislatures at all. It is therefore not clear that Herbert applies to legislatures or administrative agencies, or to contexts other than court-supervised discovery. And even if it does, the opinion makes clear that while editorial processes cannot be subjected to scrutiny merely to satisfy public curiosity, they can be subjected to scrutiny in pursuit of the vindication of a valid legal interest. There is no reason to think that legislatures cannot create the kind of legal interests that could justify mandated disclosure—assuming, of course, that the disclosure requirement is not unduly broad, or viewpoint discriminatory, or problematic for some other reason.

Indeed, corporate transparency mandates are common in a wide variety of contexts, and the Supreme Court has regularly upheld them. As the Court has said, “[T]he State does not lose its power to regulate commercial activity deemed harmful to the public whenever speech is a component of that activity.” For example, the Securities and Exchange Commission requires corporations to make a wide variety of disclosures about matters that could inform investors’ and potential investors’ decisions to buy or sell securities. 

These kinds of disclosure requirements apply to corporations that engage in the business of expression as well. For example, in many states (here, here, and here) newspapers are required to disclose when material that appears in their pages is paid for by an outside sponsor. And under federal law, broadcast, cable, and satellite television providers are required to provide “quarterly lists of the most significant programs [they] aired concerning issues of importance to [the] community,” as well as information about the amount of time they spent broadcasting children’s educational television, and other matters.

In the context of the platforms, there is good reason to think that information about content moderation practices and policies is relevant to investor decision-making. The major platforms have all, in the past, come under pressure from investors and advertisers to be more transparent about their content moderation—because content moderation is core to platforms’ operations and a commercial and reputational risk. In the words of Tarleton Gillespie, “Content moderation is, in many ways, the commodity that platforms offer.” And perhaps one of the few lessons foregrounded by the saga of Elon Musk’s proposed acquisition of Twitter is how significant content moderation policies can be to the business side of platform operations.

Focusing on Herbert therefore risks ignoring other, perhaps more relevant, lines of authority on mandated disclosures. Relying on Herbert to provide the framework for analyzing social media transparency laws means ignoring the disclosure obligations already imposed, without constitutional problems, on many corporations, including (as discussed above) media corporations and other organizations whose business involves speech. It also means ignoring decisions such as Zauderer v. Office of Disciplinary Counsel, which provides the legal framework for analyzing compelled disclosures in the commercial context. Because platforms are commercial actors, it would be perfectly plausible—indeed, far more plausible—for courts to use Zauderer as the governing authority in cases involving social media transparency laws than to use Herbert. This is precisely what the Eleventh Circuit did just a few days ago.

The bottom line is that the rules that govern platform transparency mandates are far from settled. What is clear, however, is that just because platforms exercise editorial decision-making does not mean, under existing law, that transparency mandates are necessarily unconstitutional, even when they further legitimate government purposes and are reasonably tailored. Transparency mandates that discriminate on the basis of viewpoint or subject matter may raise separate, thornier constitutional questions. We leave a full exploration of that question for another time. But the mere fact that transparency mandates burden editorial discretion did not mean in 1979, and does not mean today, that they violate the First Amendment. A reading of Herbert that holds the opposite would imperil broad swathes of important, existing regulation. It would represent a departure from current doctrine, not a vindication of it.
Why does this matter? Hard cases make bad law, as the saying goes. There are good reasons to think that the Florida and Texas laws are not good-faith efforts to impose the kind of transparency requirements that many have argued are necessary to make social media platforms publicly accountable, or that they would even be effective. But we should not throw the quest-for-transparency baby out with the bad-faith bathwater. Could transparency mandates be drafted in such a way as to be unconstitutional fishing expeditions designed to chill platforms’ valid exercise of their editorial discretion? Absolutely. Will they always? Absolutely not.

It would be a mistake to allow Texas and Florida’s new social media laws to be used as an excuse to interpret the First Amendment as an absolute bar against imposing even reasonably tailored and good-faith transparency requirements on some of the most powerful corporations in history. Herbert requires no such result. To the contrary: Herbert suggests that such mandates may sometimes be justified and beneficial, even if they burden editorial discretion. 

The authors thank Rachel Smith for her excellent research assistance.