Introduction

Problematic and divisive content dominates today’s online platforms. Disinformation, abuse, conspiracy theories, and hate speech have risen to prominence and created challenges both online and off. Policymakers, advocates, journalists, and academics have all called on platforms to ensure that this content doesn’t overtake more beneficial uses of their services. Many proposed solutions focus on platforms’ ability to directly allow or prohibit particular types of speech or users and ask the platforms to do more to remove specific instances of abusive speech.

These proposals raise serious questions about the role a private corporation should play in determining the limits of acceptable speech. Because private companies—such as social media platforms—are not subject to First Amendment restrictions, they alone determine what content is permissible in the public sphere, with little restriction, and that dynamic may erode or obviate many of the protections granted under the First Amendment. On the other hand, government regulation or oversight of moderation raises significant questions about whether such restrictions would be permissible under the First Amendment as it is currently understood.

It is our intuition that, rather than focus on particular speakers or speech, policymakers should instead address the underlying platform business practices that encourage much of the most harmful speech. Any long-term solution must address the ways in which platforms collect data and curate content in order to maximally harvest users’ attention and sell advertisements, the main method of monetization online.

Specifically, online platforms curate and promote content to drive user engagement (the intensity and frequency with which a user interacts with a platform), which in turn enables platforms to collect more detailed information about the types of content users engage with. This data collection gives platforms insight not just into the kinds of content that provoke users to engage generally, but also into the preferences, interests, and likely behaviors of specific users. Those insights can then be used to personalize and deliver the content that is most engaging to each individual. Because provocative content is more likely to garner and retain attention, platforms are economically incentivized to permit, and even to encourage, the spread of extreme, controversial, or harmful speech, as it is likely to directly benefit them financially. In a targeted advertising model, misinformation and conspiracy theories are often the product, not an accident.

Addressing targeted advertising as a business practice, rather than specific types of speech and speakers, avoids many of the thorniest First Amendment issues. Removing or lessening platforms’ financial incentives to promote bad content to their users dampens questions about the prudence of a private company acting as a de facto censor on public speech and removes concerns over the limits of government commandeering of platforms. At the same time, placing meaningful restrictions on targeted advertising practices addresses many of the structural supports for the wide reach of fake news and harassment.

This paper will explore a framework for regulating platforms without mandating content-based restrictions. To do so, it will explain that the platforms have the technical means to provide curated spaces. The analysis will then show why an ex post content-based moderation system cannot solve the root problems faced by platforms. It will then examine the incentive structure created by the targeted advertising model and show that addressing those incentives would likely reduce the spread of harmful content on many platforms without running afoul of the First Amendment.

Platform Moderation Will Always Be Imperfect

The primary solution proposed to platforms is simply to remove more speech and speakers. Platforms have responded to these proposals and have shown some willingness to ban problematic or fake users outright, build algorithms to detect hate speech and fake news, promote “trusted” news sources and demote others, and employ human moderators. Despite some positive indications, these tools raise significant civil rights and First Amendment concerns on their own—detailed below—and do not always work. Additionally, the platforms have demonstrated that moderation and content removal are often driven by public opinion, rather than established practices, and have not been consistently applied. This section will examine the technical moderation tools available to platforms and analyze their shortcomings.

Many of the current platform moderation approaches can be seen as extensions of early anti-spam techniques developed by email service providers. In the 1990s, email provided a new means to communicate instantly with the world. Naturally, solicitation, fraud, and abuse followed closely behind. Email providers were forced to grapple with unwanted speech and develop technical tools to reduce unwanted messages. As platforms faced similar problems, they turned toward existing techniques to sanitize their services. Platform moderation therefore looks very similar to email spam control: Senders who might have been “blacklisted” on email servers are essentially users banned on platforms. Whitelisted users are now “verified,” and whitelisted domains are “trusted sources.” Email content filtering has evolved into a complex web of platform policies, filtering techniques, and machine-learning tools.

While both email providers and social media platforms must regulate huge volumes of speech and content, the challenges they face are considerably different, and email-based moderation tools have not risen to the challenge. Unlike email anti-spam techniques, which largely affected private communications, platform moderation plays out in the public eye and directly affects the ability of individuals and entities to speak in public forums, and platform policies shape the degree of protection individual speakers may enjoy.

Blacklisting and Whitelisting

Banning individual users and domains—blacklisting—was one of the first techniques adopted to counter spam. Early email users simply began keeping lists of IP addresses that sent fraudulent emails and refused to accept mail from those addresses. This practice, slightly modified, remains a crucial part of platform content moderation: Platforms routinely identify speakers that contravene policies and ban those accounts. Twitter alone removed over 70 million accounts over the course of two months in 2018; Facebook removed 583 million—a quarter of its user base—in the first three months of the same year.
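To make the mechanism concrete, the sketch below shows the kind of IP blocklist check an early mail server might have run. The addresses and names are illustrative only, not drawn from any real blocklist; platform account bans follow the same pattern, keyed on account or device identifiers rather than IP addresses.

```python
# Minimal sketch of early-style IP blocklisting for inbound mail.
# Addresses come from reserved documentation ranges; the list is illustrative only.

BLOCKED_IPS = {
    "203.0.113.7",    # hypothetical address previously seen sending bulk fraud
    "198.51.100.22",  # hypothetical address reported by other mail operators
}

def should_accept_connection(sender_ip: str) -> bool:
    """Refuse the connection outright if the sending address is blocklisted."""
    return sender_ip not in BLOCKED_IPS

if __name__ == "__main__":
    for ip in ("203.0.113.7", "192.0.2.15"):
        print(ip, "accepted" if should_accept_connection(ip) else "rejected")
```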

Despite high visibility, this practice has not kept pace with the actual challenges posed by harmful speech on the platforms. For a variety of reasons, platforms have not taken action against many of the most egregious accounts or have only done so after severe public pressure. Many of the accounts identified with promoting “fake news” during the 2016 U.S. presidential election remain online. Banning accounts is also financially disincentivized for platforms, since user growth and engagement are central to their business models: Daily and monthly active users are the primary metrics by which a service’s growth is measured. Further, accounts are only banned after they have violated a policy—by posting hate speech or advocating violence, for instance. Blacklisting alone is not an effective solution to prevent harmful speech.

Blacklisting also requires platforms to be able to identify users, even pseudonymously, which affects the right to speak anonymously and to associate freely. Additionally, to enable effective blacklisting, platforms must have a record of individual speakers’ account information—such as their IP addresses—to ensure that the speaker does not simply create new accounts and avoid the ban. Resorting to banning users means that platforms cannot provide an arena for anonymous or pseudonymous speech, cornerstones of the First Amendment rights enjoyed by Americans. Platforms’ central role also makes blacklisting particularly disruptive. While an email user blacklisted from certain domains was still able to communicate with other parties, a user blacklisted from a platform has no ability to continue communications on that platform. Blacklisting therefore not only is largely ineffective to protect against harmful speech but also erodes constitutionally protected anonymity of speech at a time when the U.S. government has argued that an individual has no right to anonymity when posting online.

Whitelisting on platforms faces different but complementary challenges. Email whitelisting protects legitimate bulk emailers by placing them on pre-approved sender lists, enabling those senders to avoid being filtered as spam. Modern platforms largely mimicked this technique by creating affirmative signals, such as badges and trust-based rankings. One such program, Facebook’s “trusted news source” program, ranked news sources and prioritized stories from “trusted news sources,” including major outlets such as the New York Times and the Wall Street Journal. Stories from trusted sources were placed higher in news feeds, displacing other content, such as “blogs that may be on more of the fringe” or spam. These filters rely on determinations about the speaker, rather than the speech itself, skirting many of the main challenges on the platforms. Many mainstream news sources and speakers, for instance, promote content that could be considered hateful or misinformation, and promoting those voices absent determinations on their content risks simply lending legitimacy to fringe ideas without addressing other structural issues on platforms.

Platforms also whitelist users in the form of “verified” badges. Like “trusted news sources,” verified badges are based on the identity of an individual, rather than any specific speech. While Twitter views verified badges as a simple indication that a particular account is authentic, users may well believe that the checkmark indicates a level of veracity or trust in a verified account. Even more than trusted news, this approach can have significant error costs and promote harmful and abusive content under the guise of “verification.” Shock jock Alex Jones, for instance, was verified on Twitter before he was banned, as was aspiring provocateur Jacob Wohl. Whitelisting also removes any right to anonymity an individual might have had, since platforms generally will not verify pseudonymous or anonymous accounts.

Both blacklisting and whitelisting on platforms create significant limitations on the rights of speakers without meaningfully addressing the promulgation and spread of the most harmful types of speech online. Relying on or enhancing these methods is unlikely to lead to sustainable or desirable solutions. Increasing blacklisting, for instance, will not curb harmful speech, as banning occurs only after bad actions. At the same time, an effective banning policy will require platforms to invade the privacy rights of their users and collect personal information to permanently prevent platform access. The whitelisting focus on speakers, rather than speech, means that divisive or fringe speakers can still promote their theories, only now with an air of legitimacy granted by recognition from the platforms. At the same time, anonymous speech is disadvantaged, since platforms will not verify anonymous accounts and the platform has inserted itself as a gatekeeper to legitimate speech.

Blacklisting also creates immense potential for abuse, and companies have overstepped boundaries in the past. For instance, the Spamhaus Project, an international email reputation service widely used by businesses to filter email, has been accused of abusing its position by blacklisting companies without due process or based on personal animosity. Though Spamhaus, like major social media platforms, protests that it operates “within the boundaries of the law” and remains “accountable to [its] users,” others have alleged that the company uses its powerful position to force other companies to refuse to deal with individuals Spamhaus has decided should be blocked. By virtue of its power to make blacklisting decisions, Spamhaus, a private company, exerts enormous extra-judicial pressure and influence over the business operations of other enterprises with little to no accountability. As platforms continue to consolidate power over online speech, there is significant risk that similar abuses might occur.

Direct Content Filtering

Platforms have also adopted content filtering tools and methods from email providers. Web providers have filtered content since nearly the beginning of the web. Early spam filters relied largely on reading the content of an email to look for keywords or to “score” the message in order to determine whether it was legitimate. For both email and platforms, filtering is central to the product. Without it, there would simply be too much information for consumers to absorb.
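As an illustration of the keyword-scoring approach described above, the sketch below assigns weights to suspect phrases and flags a message once its total score crosses a threshold. The phrases, weights, and threshold are invented for illustration rather than taken from any real filter.

```python
# Minimal sketch of early keyword-based spam scoring.
# The phrases, weights, and threshold are invented for illustration.

SPAM_KEYWORDS = {
    "free money": 3.0,
    "act now": 2.0,
    "winner": 1.5,
}
SPAM_THRESHOLD = 3.0

def spam_score(message: str) -> float:
    """Sum the weights of every suspect phrase found in the message."""
    text = message.lower()
    return sum(weight for phrase, weight in SPAM_KEYWORDS.items() if phrase in text)

def is_spam(message: str) -> bool:
    return spam_score(message) >= SPAM_THRESHOLD

if __name__ == "__main__":
    print(is_spam("You are a WINNER! Act now for free money."))  # True
    print(is_spam("Meeting moved to 3 pm; agenda attached."))    # False
```

Modern platform filters replace the hand-tuned keyword list with machine-learning classifiers, but the basic shape (score the content, compare it to a threshold) persists.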

Despite the necessity of the practice, filtering creates the most direct potential First Amendment issues. As private actors, platforms can legally remove any content they like. With no external checks, First Amendment protections mean very little, since private companies, not the government, are the bottlenecks to speech. But, if the government were to enforce filters and rules about what speech is permitted, moderation actions would run headlong into existing restrictions on policing speech. Either way, directly moderating content poses serious challenges.

The First Amendment protects only against actions taken by the government. In practice, this means that platforms have wide latitude to set their own policies on what content is and is not acceptable. Platforms do not have to guarantee any procedural rights for users before removing content, nor are platforms subject to formal outside scrutiny of their policies or decision making. Recent strides toward transparency and accountability are encouraging but fall short: they are not meaningful substitutes for procedural safeguards or for checks and balances that protect users. Indeed, many platform users have been unsuccessful when challenging content removal decisions. These harms are exacerbated by platforms’ spotty ability to remove actual hate speech and disinformation. In response, platforms have proposed new methods for accountability, mimicking existing democratic structures, such as Facebook’s proposed “Supreme Court.” Unless checked, platforms’ nearly limitless power to shape and suppress online speech has the potential to swallow large portions of the First Amendment’s protections against the government.

Checking platform moderation power is not a simple proposition. Involving the government in censorship or moderation activities runs headlong into the most salient First Amendment issues. Under current law, the government cannot force a private company to withhold or to publish particular content. More recent cases suggest that the government may not be able to directly prohibit “fake news” or other false speech simply because it is untrue. The complexity of involving the government in platform moderation may be enough to dissuade policymakers from pursuing that course of action. However, if the government does become involved, platforms are likely to bow to governmental pressure in the United States as they have done in other countries where governments have insisted on content concessions. To be sure, the U.S. government has already eroded some legislative speech protections for platforms by effectively insisting that platforms take particular moderation actions against certain types of content.

Overall, direct moderation of content and users creates enormous potential for collision with or usurpation of the protections of the First Amendment. At the same time, it is not even clear that these moderation tools are effective at protecting users and removing illicit content. Direct content moderation is likely an insufficient response to address the speech problems faced by platforms and platform users. The next section will examine how regulating advertising practices online could alleviate many of the most serious challenges online and provide a framework for addressing harmful speech moving forward.

Targeted Advertising Encourages Platforms to Prioritize Controversial Content

Direct moderation of content will likely always fail to achieve the stated goals of the platforms because the business models of the platforms themselves encourage and reward divisive or controversial content. Today, nearly every social media platform makes money selling advertisements, rather than charging individual users. Ad-based platforms auction off users’ attention to advertisers and can serve more ads the longer users stay on the platform. Platforms are therefore incentivized to serve the content that is most engaging and likely to provoke a response, whether that be a share, a like, or a purchase. Time and again, the best fodder for such a response has proved to be incendiary, controversial, and divisive material. For instance, users may share “fake news” even though they know it is factually incorrect when that content reaffirms a user’s sense of identity or culture. Platforms themselves may also share or promote controversial content simply to spark discussion or debate, retain user focus, or create engagement. This dynamic was laid bare when Facebook changed its content sorting to promote direct content from a user’s individual friends over other content: After platform engagement dropped significantly, the company reversed the move. Platforms therefore have significant interest in promoting harmful content in order to keep users engaged as long as possible.
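The incentive described above can be made concrete with a toy ranking sketch: if a feed is ordered purely by predicted engagement, the most provocative items float to the top whenever they score highest on that single metric. The sketch below illustrates the incentive structure, not any platform’s actual ranking system; the posts and scores are hypothetical.

```python
# Toy illustration of engagement-maximizing feed ranking.
# Posts and predicted engagement scores are hypothetical; this is not a
# description of any real platform's ranking model.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    predicted_engagement: float  # estimated likelihood of clicks, shares, comments

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed purely by predicted engagement, highest first."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("friend", "Photos from our hiking trip", 0.02),
        Post("page", "Outrage-bait conspiracy claim", 0.11),
        Post("outlet", "Local council approves new budget", 0.01),
    ]
    for post in rank_feed(feed):
        print(f"{post.predicted_engagement:.2f}  {post.author}: {post.text}")
```

Nothing in an objective like this one distinguishes engagement driven by genuine interest from engagement driven by outrage, which is precisely the gap the advertising model leaves open.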

Once a user is engaged, platforms amass enormous amounts of information to target advertisements directly toward what a platform knows a user will respond to. Platforms collect information including social interactions, demographics, what a user clicks and doesn’t click, and myriad other data points to build a digital dossier on each user. They know about users’ personality types, behavioral quirks, and even emotional states. Platforms know exactly what pushes a user’s buttons and how to engage that individual. Platforms then monetize the users by allowing advertisers to engage directly with specific populations. Recent controversies have highlighted the mismatch between business practices and filtering principles: Until recently, Facebook permitted advertisers to use the category “jew haters” as a target for advertisements. It also distributed anti-vaccination advertisements to potential young mothers. Advertisements in this context are not simply a means to sell a product but act to directly shape user behavior off-platform. Unscrupulous advertising practices and content delivery don’t just create a harmful platform: They also contribute to off-platform behavior, such as Pizzagate and racial discrimination, and provide harmful advertisers an enormous platform to reach vulnerable populations.
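The targeting mechanism described above can be sketched in a few lines: the platform infers interests from collected signals, and an advertiser buys reach against a category without ever seeing the underlying data. The profiles, categories, and matching logic below are invented for illustration and do not describe any platform’s actual system.

```python
# Hypothetical sketch of interest-category ad targeting: a profile is built
# from collected signals, and an advertiser purchases reach against a category.
# Profiles, categories, and matching logic are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    inferred_interests: set[str] = field(default_factory=set)  # built from clicks, likes, etc.

def audience_for(category: str, profiles: list[UserProfile]) -> list[str]:
    """Return the users whose inferred interests include the purchased category."""
    return [p.user_id for p in profiles if category in p.inferred_interests]

if __name__ == "__main__":
    profiles = [
        UserProfile("u1", {"parenting", "hiking"}),
        UserProfile("u2", {"parenting", "alternative_medicine"}),
        UserProfile("u3", {"football"}),
    ]
    # An advertiser purchases the "parenting" audience; the platform resolves
    # it to specific users without the advertiser ever handling raw data.
    print(audience_for("parenting", profiles))  # ['u1', 'u2']
```

The harm does not require the advertiser to see any raw data: the platform’s own profiling does the work of locating a vulnerable audience.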

While platforms have paid lip service to reforming content moderation, they have largely avoided making any such commitments with regard to advertising practices. Instead, platforms have doubled down on growth and profits and insisted that additional accountability measures, such as transparency and changes to internal governance, are sufficient to redress content issues online. This is simply not the case. So long as platform profits are reliant on keeping users on-platform as long as possible, controversial and harmful speech will continue to proliferate.

Despite the ubiquity of targeted advertising, there is growing skepticism that the practice creates more value for anyone outside of the companies providing the targeting. Researchers have found that “behaviourally targeted advertising had increased the publisher’s revenue but only marginally” while costing “orders of magnitude” more to deliver. Major platforms have misled brands and advertisers about the value of placing ads. Recently, several large brands have scaled back or stopped the practice altogether. Procter & Gamble reduced its targeted-ad budget in 2018 after it determined that the program was a waste of money. Following the enactment of the European General Data Protection Regulation (GDPR), Google is supporting non-targeted ads in Europe, and the New York Times stopped behavioral advertising altogether while raising its overall advertising revenue in the same period. The past several years have seen a reckoning with targeted advertising from within the industry, and it is past time that conversation took hold among the public.

Regulating Advertising Practices Reduces Constitutional Friction

This essay began by outlining the reasons that speech- and user-focused platform moderation is unlikely to provide a successful long-term solution and why those practices pose the most acute challenges to the First Amendment as it is currently understood. It then discussed why platform business practices create incentives and opportunities for platforms to accept and promote controversial content online. The essay will now outline several proposals for addressing the mismatch between platform and public incentives and explain how each approach might stem the flow of unwanted content online.

Congress should restrict platform practices as part of any privacy legislation.

Following a wave of high-profile platform scandals, Congress is considering a number of proposals to increase privacy and data security for U.S. consumers. Despite mounting evidence that platform structure incentivizes privacy-invasive practices, legislators have largely ignored that relationship in the legislative text proposed so far. While the relationship between privacy and targeted ads is distinct from questions of platform moderation, the two are closely related. Just as targeted ads incentivize platforms to serve divisive content, so too does the model incentivize platforms to gather as much personal information as possible on users without regard for its provenance or future use. Congress and state legislatures should ensure that privacy laws meaningfully address the advertising models of platforms and place restrictions on how data can be collected and used for advertising purposes.

Existing law provides guidance for how Congress might structure such restrictions. For example, the GDPR places restrictions on what data companies may acquire and, once collected, how those companies may use the data. Congress could address some of the most egregious platform behavior by ensuring that, for instance, certain types of information are not collected by platforms and, when data is collected, that not all of it can be used for advertising or commercial purposes. Additionally, restrictions could prevent information collected in one context from being used in a different context—for example, prohibiting phone numbers collected for account recovery from being repurposed for cross-platform advertising. The devil is in the details, of course, but focusing on business behavior rather than consumer harms provides a strong basis for addressing platform harms generally and would lessen the need for aggressive moderation.
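A purpose-limitation rule of this kind can be pictured as a simple check that runs before any secondary use of data. The sketch below is hypothetical; the purpose labels and policy table are invented to illustrate the idea and are not drawn from the GDPR or any proposed bill text.

```python
# Hypothetical sketch of purpose limitation enforced in code: each datum is
# tagged with the purpose disclosed at collection, and any other use is refused.
# Purpose names and the policy table are invented for illustration.

from dataclasses import dataclass

# Which uses are permitted for data collected under each disclosed purpose.
ALLOWED_USES = {
    "account_recovery": {"account_recovery"},                    # e.g., a phone number given for recovery
    "ad_targeting_opt_in": {"ad_targeting", "account_recovery"}, # explicit opt-in covers both
}

@dataclass
class CollectedDatum:
    value: str
    collected_for: str  # purpose disclosed to the user at collection time

def may_use(datum: CollectedDatum, requested_purpose: str) -> bool:
    """Permit a use only if it falls within the purpose the data was collected for."""
    return requested_purpose in ALLOWED_USES.get(datum.collected_for, set())

if __name__ == "__main__":
    phone = CollectedDatum("+1-555-0100", collected_for="account_recovery")
    print(may_use(phone, "account_recovery"))  # True
    print(may_use(phone, "ad_targeting"))      # False: a different context
```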

The California Consumer Privacy Act (CCPA) also provides a starting point for regulators. While many of the harms from divisive speech begin with platforms promoting divisive content to engage users, others result from sharing consumer data with third parties. The CCPA addresses this type of harm by restricting the types of data that platforms and other online providers can share with third parties and placing affirmative obligations on the third parties once they have received that information. This should, in turn, reduce the ability for third parties to amass detailed profiles about individuals across multiple sites, services, and devices and significantly constrain the ability to influence those users.

The Federal Trade Commission has authority to address advertising practices.

Congress may or may not decide to approach platform advertising head-on. Until it does, or if it does not, the Federal Trade Commission (FTC) likely has existing authority to curtail targeted advertising practices. Under Section 5 of the FTC Act, the Commission may enjoin “unfair or deceptive” practices. While claims under the so-called “deceptive authority” are more common, the Commission here may well be able to exercise its authority to ban unfair practices.

In an unfairness case, the FTC must show that (1) the injury to consumers is substantial, (2) any injury is not outweighed by other benefits, and (3) the injury is not reasonably avoidable by consumers. As explained above, targeted advertising has the potential to create substantial injury both to individual consumers and to society. Individual consumers may also find themselves excluded from seeing certain ads based on protected categories. While societal harms are more diffuse, they are evident through effects such as filter bubbles, radicalization, and other off-platform activities. These negative effects, though diffuse, provide ample evidence that platform business activities may not provide much of their promised benefit to consumers. Against direct and diffuse harms, there is a growing consensus that targeted advertising may not provide financial benefit to anyone other than the ad-tech companies. Researchers have shown that behavioral advertising likely only creates nominal value compared to contextual or direct advertising and costs significantly more to produce and deliver. A recent empirical study showed that targeted advertisements are, on average, only worth $0.00008 (4%) more than non-targeted ads to the publishers of the advertisements. The risk of data breach or theft to platforms and users is significant, as websites serving targeted advertisements must create, maintain, and store enormous datasets on millions of users. There is a limit to what type of targeted advertising users will even tolerate, and any purported benefit to consumers is likely outweighed by the harm suffered individually and in the aggregate. Lastly, consumers often have no way to avoid harm other than by not using platforms or playing a cat-and-mouse game with online ad blockers. While this is feasible for some, it is not a realistic option in many cases. Platforms themselves do not even have the capability to reduce the potential for harm once consumer information is in the wild.
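To put the cited figure in context, a back-of-the-envelope calculation, offered purely as an illustration derived from the numbers reported above rather than as an additional finding, backs out the implied value of a non-targeted impression:

```latex
% Illustration only: the reported $0.00008 premium is described as a 4% lift,
% which implies the baseline value of a non-targeted impression to a publisher.
\[
  \text{non-targeted value} \approx \frac{\$0.00008}{0.04} = \$0.002
  \qquad
  \text{targeted value} \approx \$0.002 + \$0.00008 = \$0.00208
\]
```

On these figures, targeting adds a small fraction of a cent per impression for the publisher, while the data collection required to deliver it carries the individual and societal risks described above.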

Depending on the explicit or implicit representations made by platforms to their users, the Commission may also be able to pursue action based on its deceptive practices authority. For example, if the Commission determined that a platform held itself out as a neutral arbiter of discourse but was instead actively promoting certain viewpoints over others, that discrepancy may rise to the level of a violation of the FTC Act. The Commission has sought enforcement in other situations when advertisers misrepresented the origins and nature of advertisements to consumers.

While additional facts may be necessary to fully support an enforcement action, the Commission certainly has the tools and authorities to explore whether it can bring an action under the unfairness authority. It should do so.

Platforms may face liability despite the Communications Decency Act.

Section 230 of the Communications Decency Act (CDA 230) protects online service providers from being liable for content posted by their users. Under the Act, providers are not treated as the “publishers” of users’ speech and therefore are not responsible for its content. This protection applies even when providers restrict access to material, even if that material is constitutionally protected. This provision has often been held out as a form of absolute immunity for social media platforms and other providers for the information on their platforms.

However, Section 230 may not extend as broadly as some claim. Notably, while CDA 230 protects platforms from liability for content published by third parties and hosted by the platform, those protections may not extend to platforms’ decisions about business practices and other non-content activities, so long as those practices are not explicitly a publication activity by the platform or a third party. For instance, platforms may be liable for enabling discrimination or other illegal practices by providing certain ad categories to third parties, since creating and monetizing the categories is not a publication activity. Similarly, legal commentators have noted that Section 230 is accorded a much more sweeping effect than its plain language alone might support. While platforms most likely cannot be held accountable for specific instances of hate speech or harassment, they may well face liability for non-speech business practices that enable and encourage malicious or illegal content.

In addition to current liability for non-publication business practices, we should consider whether platforms should face liability for content when the platform has substantially transformed the presentation beyond the content’s original form or context. Platforms do not simply pass along user content—they take an active role in how, when, and where that content is displayed (including selectively not displaying new content or promoting older content repeatedly). To do this, platforms use information about individuals in combination with aggregate information about group behavior (so-called “big data”) as well as inferences made about a particular user’s behavior (personalized content). They use that information to create insights about likely responses and engagement to particular material and then curate the universe of user-generated content to maximize engagement, independent of the original chronology and intent of the content creator.

The platform editorial process does not simply present users with posts created by their friends or connections: It substantially modifies the order, context, and meaning of user content. Platforms are aware of this transformative effect and have acknowledged that their role is more akin to a media company than just a tech platform. Twitter has considered surrounding “fake news” with posts debunking false assertions, an acknowledgment that context and position change meaning. Platforms ought not be held liable for every piece of harmful or malicious content generated by their users, but we should seriously consider whether their content curation functions should fall outside the protective embrace of Section 230.

Conclusion

The proposals outlined above are intended to show that many of the challenges posed by speech on platforms—and platforms’ responses—may be partially addressed by legal and regulatory tools that encroach less immediately on the First Amendment. The authors believe that existing constitutional protections likely remain sufficient to address new types and controllers of speech online. However, before addressing whether speech online must be further curtailed, or grappling with thorny questions of which entities are best able to make content determinations, it is first necessary to challenge structural practices that encourage harmful speech.

To this end, the authors have suggested taking concrete steps to better align private and public interests. By focusing on platform practices, we have suggested that many of the incentives for promoting harmful speech can be removed and platforms’ priorities more closely aligned with those of the public. That realignment may serve to reduce or obviate many of the most challenging questions thrust into sharp relief by the proliferation of speech online. These solutions are neither complete nor perfect. But they represent actions necessary to understand the types and degree of harms and challenges that do truly exist and provide a framework to grapple with larger questions about the role of platforms in enabling and policing public speech.

 

© 2019, Jeff Gary & Ashkan Soltani.

 

Cite as: Jeff Gary & Ashkan Soltani, First Things First: Online Advertising Practices and Their Effects on Platform Speech, 19-04 Knight First Amend. Inst. (Aug. 21, 2019), https://knightcolumbia.org/content/first-things-first-online-advertising-practices-and-their-effects-on-platform-speech [https://perma.cc/5PVM-2GHN].