One question that has prompted debate about the First Amendment’s role in modern society is: What should we do about “bots” on social media? Bot accounts are online accounts where a substantial majority of the content is generated or posted without direct user input. Bot accounts have a bad reputation, thanks in large part to the thousands of Russian-linked Facebook and Twitter bot accounts deployed at scale to spread disinformation and sow social discord in the lead-up to the 2016 election. Policymakers and commentators have since called for regulation to deal with harmful social media bots, most commonly in the form of mandated bot-labeling schemes. The hope is that if bots are labeled to make them more visible to social media users, malicious actors won’t be able to use them to artificially inflate the influence of certain accounts or content or manufacture a false consensus surrounding political candidates or issues of public debate.
There are circumstances in which it makes sense to require bot accounts to be labeled. Still, it’s important to drill down into the details and practical implications of the various proposals. This is not merely an academic exercise. These proposals will have real effects on online speech, both intentional and unintentional, and those effects will depend entirely on the precise language of the proposed legislation. In particular, the potential unintended real-world consequences of bot-labeling proposals provide a good illustration of why the First Amendment’s high threshold for speech regulation remains important—not just for protecting the speech of particular individuals but also for resisting governmental efforts to deputize unaccountable private actors to censor speech on its behalf. This is the threat the First Amendment was designed to protect against, and it is still a threat we need to be concerned about today.
That is not to say that concerns about malicious bots on social media platforms are unfounded. While the precise impact of bot accounts on election outcomes in recent years is uncertain, bots—particularly bots used at scale—have undeniably been used to give an air of legitimacy to fabricated content by promoting it and helping it to spread. This includes conspiracy theories that thereafter not only went viral but also inspired real-world violence. Bots have also long been used by oppressive governments to drown out the speech of human rights activists.
But smart regulation in this context requires a scalpel, not a bludgeon. As a practical matter, because automation is an essential and ubiquitous part of the internet, regulating bots is more complicated than it sounds. Mandatory bot labels in all cases will not only sweep up innocuous bots, but in certain cases, such as for voice-to-text input programs used by those with special accessibility needs, requiring bot labels could actually be detrimental to the speech of entire communities.
There are also limits to what a bot-labeling law can realistically accomplish. Mandatory bot labels won’t, for example, stop malicious social media mavens from gaming the system and exploiting platforms’ content algorithms to generate and amplify hoaxes and false information online. (Bots are not required to do this.) They won’t stop foreign adversaries from attempting to promote divisive content. They won’t teach people to think critically about the veracity of what they read online and the reliability of their sources, even and especially when what they are reading confirms their preexisting opinions about the world. And given the technical challenge of identifying bots, they won’t even help social media platforms identify and remove malicious bots masquerading as real people any faster than they already are.
Unfocused and ill-conceived bot-labeling laws—including laws that lack precision or that attempt to solve problems beyond what a bot-labeling law can realistically achieve—risk unintentionally suppressing speech of the very voices they are intended to protect, while not even scratching the surface of the problems they are intended to solve. Some proposals, for example, have included mandates that platforms review and rapidly label or remove user content. As UN Special Rapporteur David Kaye has noted, regulations of this type, which create punitive frameworks, are “likely to undermine freedom of expression even in democratic societies.” Proceeding thoughtfully and cautiously is critical—even in the United States.
This essay outlines how well-intentioned bot-labeling proposals could, if not meticulously crafted, enable censorship and silence less powerful communities, threaten online anonymity, and result in the takedown of lawful human speech. It explains how California’s recently enacted bot-labeling law—which is narrowly tailored to the precise problems the bill was intended to solve—may be a good template for avoiding such negative unintended consequences. It concludes by arguing that the First Amendment’s guarantee of free speech, designed to serve as a bulwark against government censorship, remains a necessary foundation for democracy and that we should not lower the bar on speech regulation to allow for more far-reaching bot-labeling laws—even if that means that some bots go unlabeled on social media platforms. Most bot-labeling proposals are grounded in a concern over protecting democracy. Chipping away at the foundation of democracy is not the way to protect it.
Where Bot-Labeling Proposals Can Go Wrong
Multiple factors make the task of crafting bot-labeling laws that target harmful bots without negatively impacting lawful speech a challenging one. First, not all bots are bad. In fact, the internet is full of harmless, and even helpful, bots. Technically, a “bot” is merely a software application that runs automated tasks over the internet. Bots are used for all sorts of tasks online; they are an essential and ubiquitous part of the internet. In 2016, for example, bots accounted for 52 percent of web traffic, whereas humans accounted for only 48 percent.
Second, on social media in particular, bots often represent the speech of real people, processed through a computer program, and the human speech underlying bots is protected by the First Amendment. That technology or computer software is involved does not in and of itself mean that the underlying speech is less important or unworthy of First Amendment protection—and adopting such a view would be a dangerous break in legal precedent.
Third, bots are hard to identify—and increasingly so. Accounts controlled entirely by humans have been and will continue to be flagged as bots, both accidentally and maliciously. Bot-labeling laws will thus inevitably impact the accounts of innocent human users.
It is therefore critical that bot-labeling laws (1) carefully define the technology targeted, (2) specify the contexts in which the bill should apply, and (3) avoid forcing platforms to implement mechanisms that will be abused to censor speech.
How Is the Technology Defined?
The most fundamental concern lies in the difficulty of defining a “bot.” Proposals to regulate bots all “face a common challenge in the regulation of new technology: defining the technology itself.” Without a definition targeted to the precise type of bots that policymakers seek to regulate, bot-labeling laws will inevitably end up unintentionally sweeping in some of the countless other bots operating harmlessly around them.
On Twitter, for example, a law that defines a “bot” purely in terms of automation would apply to simple technological tools like vacation responders, scheduled tweets, or voice-to-text input programs used by those with special accessibility needs. Senator Dianne Feinstein’s Bot Disclosure and Accountability Act of 2019 would on its face apply to such tools, because it targets any “automated software program or process intended to impersonate or replicate human activity online.” The bill, if passed, would require the Federal Trade Commission (FTC) to promulgate regulations within a year of enactment defining that term—“automated software program or process intended to impersonate or replicate human activity online”—“broadly enough so that the definition is not limited to current technology,” but it places no obligations on the FTC to ensure that the definition is narrow enough to avoid unintentionally impacting use of simple technical tools that merely assist human speech.
Being precise about definitions is important, even for proposals that simply mandate a label. A label may seem innocuous, but a bot label may diminish the impact of the speech to which it is attached. That is, after all, the hope behind these proposals. And because many social media platforms’ terms of service ban bot accounts altogether, a mandated label could result in an account being flagged and booted off the platform entirely. Use of simple technical tools, like scheduled tweets or voice-to-text programs, falls beyond the goals of bot-labeling regulations and should not give rise to a mandated label that could diminish the impact of human speech. Poorly defined bot-labeling bills that require such broad labeling could end up chilling basic technological innovations that merely assist speech.
Are There Limitations on the Bill’s Scope?
The second, and related, concern lies with scope. Some proposals would apply to all bots, regardless of what a bot was being used for or whether it was causing any harm to society. Such proposals would sweep up not only bots deployed at scale for malicious ends, but also one-off bots used by real people for activities protected by the First Amendment. This includes bots used on social media for research, journalism, poetry, political speech, parody, and satire—such as poking fun at people who cannot resist arguing, even with bots.
Here again, requiring that a bot used as part of a parody or research project be labeled may seem innocuous. However, as Madeline Lamo explains in an article for Slate, “the unique ambiguity of the bot format” allows speakers an unprecedented opportunity “to explore the boundaries between human and technology.” And mandated labels would restrict the speech of academics, journalists, parodists, or artists whose projects or research necessitate not disclosing that a bot is a bot. What’s more, if labels result in an account being flagged or deleted, then a bot-labeling bill could eventually end up driving all bot-driven humor and art off of the platforms that have become the twenty-first century’s public square.
How Will Mechanisms Required to Implement the Bill Impact Real People?
The final concern lies with the practical downstream implications of the mechanisms necessary to implement a bill’s requirements.
Some bot-labeling proposals, for example, have contained provisions mandating notice and takedown or labeling mechanisms that could be used to censor or unmask the very voices the regulators intend to protect. Early versions of California’s recently enacted bot-labeling law, S.B. 1001, for example, would have required platforms to create a notice and takedown system for suspected bots that would inevitably have led to innocent human users having their accounts labeled as bots or deleted altogether. The provision was modeled on the notoriously problematic Digital Millennium Copyright Act (DMCA), a 1998 federal law that requires platforms to remove content accused of violating copyright. It would have required platforms to accept reports of suspected bots from their users and then determine within 72 hours whether or not to label the account as a bot or remove it altogether.
On its face, this may sound like a positive step in improving public discourse, but years of attempts at content moderation by large platforms show that things inevitably go wrong in a variety of ways. As we have learned in the two decades since the DMCA’s enactment, for example, platforms have little, if any, incentive to push back against takedown requests on behalf of their users when simply taking down content will fulfill their legal obligations. In the face of legally mandated takedown schemes like the DMCA, platforms generally err on the side of caution, automatically complying with even absurd takedown requests rather than risking legal penalties. Such legally mandated takedown schemes thus inherently make platforms susceptible to the “heckler’s veto”: those who dislike certain content can rely on these systems to get the content taken down, because it is easier for the platform to simply take down content rather than investigate whether it actually should be taken down. This has made copyright law a tempting tool for unscrupulous censors—including political candidates, small businesses, and even the president of Ecuador.
This sort of abuse is not restricted to the DMCA. Platforms’ own terms of service are also routinely abused as tools for systematic, targeted censorship. Those seeking to censor legitimate speech have become experts at figuring out precisely how to use platforms’ policies to silence or otherwise discredit their opponents on social media platforms. The targets of this sort of abuse have been the sorts of voices that supporters of bot regulation would likely want to protect—including Muslim civil rights leaders, pro-democracy activists in Vietnam, and Black Lives Matter activists. As the Center for Media Justice reports, “Black activists report consistently being censored for mentioning race, referring to or describing structural racism, or reacting to incidents of police violence with frustration or sadness” while “explicit white supremacists have called for the death of Muslims, threatened to kill Black, indigenous and other activists of color and more without being found to violate Facebook community standards.”
Bot-labeling notice and takedown systems are all the more problematic given how hard it can be to determine whether an account is actually controlled by a bot, a human, or a “centaur” (i.e., a human-machine team). Platforms can try to guess based on the account’s IP addresses, mouse pointer movement, or keystroke timing, but these techniques are imperfect. They could, for example, sweep up individuals using VPNs or Tor for privacy, or news organizations that tweet multiple links to the same article each day. And accounts of those with special accessibility needs who use voice-to-text input could be mislabeled by a mouse or keyboard heuristic. Bots, meanwhile, are getting increasingly good at sneaking their way through Turing tests. As soon as a new test is developed, bots are programmed to fake their way through it, turning the detection of bots into a never-ending cat-and-mouse game. Additionally, all of these indicators are signals that other users on a social media platform cannot see. As a result, the typical internet user won’t actually be able to provide the platforms with any information that would help them get a bot determination right. That doesn’t mean that platforms shouldn’t accept reports of suspected bots from users; it means that they should not be legally mandated to take action based on those reports.
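The fragility of the detection signals described above can be sketched as a toy scoring heuristic. Everything in this sketch is an invented assumption for illustration only; the signals, weights, and thresholds are not any platform’s actual method, and real classifiers are proprietary and far more sophisticated.

```python
# Toy sketch only: scores an account using two crude signals the essay
# mentions (timing regularity and client metadata). All thresholds and
# weights here are invented assumptions, not a real platform's method.

from statistics import pstdev

def naive_bot_score(post_intervals_sec, uses_api_client):
    """Return a 0.0-1.0 'bot-likeness' score from two weak signals.

    post_intervals_sec: seconds between consecutive posts.
    uses_api_client: whether posts come from an automation API
                     rather than an official app.
    """
    score = 0.0
    if uses_api_client:
        score += 0.5  # automation API use is a weak signal, not proof
    if len(post_intervals_sec) >= 2:
        # Highly regular posting intervals look machine-scheduled --
        # but so does a newsroom account tweeting links on a schedule.
        if pstdev(post_intervals_sec) < 5.0:
            score += 0.5
    return score

# A human journalist who schedules tweets at exact hourly intervals
# via a third-party tool scores as a likely "bot," while an erratic
# human poster does not -- the false-positive problem the essay warns
# about.
journalist = naive_bot_score([3600, 3600, 3600], uses_api_client=True)
erratic_human = naive_bot_score([120, 5400, 30], uses_api_client=False)
```

The point of the sketch is that any heuristic built on these observable signals will misfire on scheduled posters, news organizations, VPN and Tor users, and accessibility tools, which is why legally mandating action on such determinations is so risky.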
Bot-labeling mandates will also likely lead platforms to demand that the owners of suspected bot accounts prove their identities, so as to ensure that a single person isn’t running an entire bot army. This will have the inevitable result of piercing anonymity. The takedown regime described above, for example, would be hard to enforce in practice without unmasking anonymous human speakers. While merely labeling an account as a bot does not pierce anonymity, anyone who wishes to challenge their account’s “bot” designation would likely be required by a platform to prove their humanity by verifying their human identity.
The right to speak anonymously is protected by the First Amendment, and anonymous speech constitutes an “honorable tradition of advocacy and of dissent” in the United States. As the Supreme Court has recognized, “Anonymity is a shield from the tyranny of the majority.” Journalists around the world rely on anonymity and pseudonymity on social media to protect their safety. So do transgender people and other vulnerable users, including domestic violence survivors and drag performers who do not use legal names to protect their physical safety or privacy. Overbroad bot-labeling bills would threaten vulnerable communities by putting anonymity on social media at risk.
A Template for Narrowly Tailored Regulation
California’s Bot-Labeling Law
To help avoid such negative consequences, it is important that laws aimed at regulating bots, just like other government-imposed restrictions on speech, be held to First Amendment standards. This means that bot-labeling proposals must satisfy at least intermediate scrutiny, which requires that time, place, and manner restrictions on speech be narrowly tailored to serve a significant government interest unrelated to the suppression of speech and leave open ample alternative channels of communication.
California’s recently enacted bot-labeling legislation, S.B. 1001, may provide a good template for a narrowly tailored law that leaves open ample channels of communication for lawful speech. The law, authored by Senator Robert Hertzberg, makes it unlawful to surreptitiously use bot accounts in an attempt to influence either commercial transactions or how people vote in elections.
The law’s precise impact on speech has yet to be seen, as it only went into effect on July 1, 2019, but the legislation specifically does not apply to all bots across the internet. Instead, it applies only to bots used “with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.” A similar bill based largely on S.B. 1001 is already being considered in New Jersey.
While S.B. 1001’s language could be clarified—for example, to indicate clearly whether there is one intent requirement or two—its limited scope is critical. While addressing both fraud and political manipulation are significant government interests, they would not sustain a bill that applied to any and all bots. A concern over consumer or political manipulation does not justify a requirement that parodists or artists tell us whether a person or a bot is behind their latest creation. A bill that applies to all bots would also not leave open ample alternative channels of communication. In the context of bots used for artistic purposes, for example, “it seems unlikely that an algorithmic artist whose work hinges on the uncertainty of whether her account is human-run could effectively communicate her message through alternative channels of communication.” S.B. 1001 avoids these pitfalls by targeting the precise types of harmful bots that prompted the legislation in the first place.
S.B. 1001 also attempts to avoid unintentionally sweeping in simple technological tools, like scheduled tweets and auto-responders, by defining “bot” as not merely an automated account but “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.” It also appropriately targets large platforms—those with 10 million or more unique monthly U.S. visitors. The problems this new law aims to solve are caused by bots deployed at scale on large platforms, and limiting the law to large platforms ensures that it will not unduly burden small businesses or community-run forums.
S.B. 1001 likely will not stop actors who don’t care one way or the other whether their conduct is illegal (for example, foreign actors not subject to the jurisdiction of the United States). But it may encourage businesses and political campaigns—and anyone else looking to convince someone in California to enter into a commercial transaction or vote a certain way in an election—to be forthcoming about when they are using bots to drive influence, and it provides a remedy when they are not. We’ll learn more over the next year about S.B. 1001’s effectiveness and potential for abuse, but the law may prove to be a good template for jurisdictions looking to protect citizens against bot-based fraud and political manipulation without unduly burdening online speech and running afoul of the First Amendment.
Bot-Labeling Alone Is Insufficient to Protect Democracy
It is worth noting that requiring bots to be labeled is likely not sufficient to combat the harms to democracy that legislators are concerned about. Policymakers who dig a bit deeper into the question of how malicious actors exploit social media platforms to artificially inflate their influence will find that substantial structural issues—far beyond the reach of any bot-labeling proposal—need to be addressed in order to prevent these harms going forward.
First, platforms use sophisticated surveillance technology to surreptitiously follow users across the internet, amass as much data as possible, and allow advertisers to serve micro-targeted ads in hyper-personalized feeds. These tools are incredibly powerful. As demonstrated by Facebook’s 2012 study into the impact of users’ news feeds on their emotions, social media has the power to inspire “massive-scale emotional contagion.” The same powerful and persuasive tools available to advertisers are also available to politicians and propagandists. In order to rein in these powerful tools, policymakers should consider strong, carefully crafted privacy regulations that give internet users real choices about who may track their browsing history across the web and how their data may be used. After all, bots are not required for malicious actors to use hyper-personalized targeting to direct precisely tailored propaganda to those who will be most vulnerable to it.
Second, our data-driven, hyper-personalized feeds are designed to get ahold of our attention and keep us on the platforms, often by showing us content that makes us feel outraged—and this includes divisive content spread by malicious actors. Professor James Grimmelmann describes “the disturbing, demand-driven dynamics of the Internet today,” where “attention is money and platforms find and focus attention” and “where any desire, no matter how perverse or inarticulate, can be catered to by the invisible hand of an algorithmic media ecosystem that has no conscious idea what it is doing.” We need regulation to counteract this—to require and incentivize platforms to give their users more information about how their personalization algorithms function, as well as more tools so that users can directly adjust what they see, instead of just relying on the algorithms to decide based on what might drive the most engagement or make users stay on the platform the longest.
Finally, too few platforms have too much power. As Tim Berners-Lee has stated, “The fact that power is concentrated among so few companies has made it possible to weaponize the web at scale.” We need more competition among the speech platforms themselves, so there are fewer chokepoints and more places for speech of different types and points of view and from people of different backgrounds.
We also need to reeducate ourselves as a society about media literacy. Over the past few decades, we’ve grown accustomed to relying on trusted name brand media outlets to do the critical thinking for us. With a majority of adults in the United States now getting news on social media, however, we cannot outsource this responsibility any longer. We need to retrain ourselves to slow down and think critically about what we consume. This issue, like the structural issues listed above, transcends social media bots; bots may be used to exacerbate the spread of misinformation online, but they aren’t the root cause of it. One recent Pew Research Center study showed that about a quarter of Americans say they have shared a “fabricated” news story, and that roughly half of those individuals knew the information was false. Another recent study showed that both Republicans and Democrats are more likely to think news statements are factual when they appeal to their side, even if they are opinions.
Media literacy won’t teach the average media user to identify bot-generated content, but neither will bot-labeling laws, because those intent on distorting reality will simply not comply with them. Media literacy will, though, train people to question the veracity of what they see online and the reliability of the sources of the information. In an era of deepfakes and AI-generated “synthetic media” that threaten to upend our collective understanding of truth, the need to instill healthy skepticism in internet users is growing more urgent by the day.
Further, merely understanding the difference between opinions and facts, the need to consider veracity and reliability, and the importance of confronting all sides of an issue is no longer sufficient for true media literacy. Media literacy today also means understanding how social media platforms work—e.g., why social media platforms promote certain content to certain users, how news feeds are optimized for virality, how clicks are monetized, and how this all capitalizes on (and aggravates) the confirmation biases we all innately carry. Psychologists and social scientists “have repeatedly shown that when confronted with diverse information choices, people rarely act like rational, civic-minded automatons. Instead, we are roiled by preconceptions and biases, and we usually do what feels easiest—we gorge on information that confirms our ideas, and we shun what does not.” In today’s fragmented internet, people are “attracted to forums that align with their thinking” and, once there, are more susceptible to impulsive behavior and an irrational cult mentality. A 2016 study found that homogeneous and polarized social networks, or “echo chambers,” help unsubstantiated rumors and conspiracy theories persist, grow, and disseminate rapidly online. “This creates an ecosystem in which the truth value of the information doesn’t matter,” one of the study’s authors told the New York Times. “All that matters is whether the information fits in your narrative.” This is not a problem that any label will fix. But teaching people how and why we got here, and the importance of breaking free of it, just might.
The First Amendment’s Threat Model: Why Requiring Narrowly Tailored Regulation Remains Key
Holding bot-labeling laws to existing First Amendment standards is not merely a matter of principle. Lowering the bar in this context would require lowering the bar for government regulation of speech across the board. This is not something to be taken lightly. Existing First Amendment law developed as a direct response to the American propaganda machine in the 1920s and 1930s, which “reached a scope and level of organization that would be matched only by totalitarian states.” The threat the First Amendment is designed to protect against is that of the government dictating what we can and cannot say, and who gets to speak and who doesn’t. This includes laws, no matter how well-intentioned, that deputize unaccountable private actors to censor speech on the government’s behalf.
It may seem that we no longer need protection against government-sponsored propaganda campaigns and attempts by the state to control the media, but that’s only because the First Amendment’s free speech guarantee does not allow it. Freedom House reported in 2016 that internet freedom had declined for the sixth consecutive year, “with more governments than ever before targeting social media and communication apps as a means of halting the rapid dissemination of information, particularly during anti-government protests.” The United States is not immune to such threats.
This does not mean, however, that the First Amendment should be used as an excuse to thwart laws that would “subsidize and so produce more . . . speech” or promote access to information. It should not. Nor should the First Amendment be relied upon to hide from the public important information about the policy decisions represented in the algorithmic or AI systems that increasingly impact our lives. Such disingenuous and often politicized applications of the First Amendment “obscure, not clarify, the true value of protecting freedom of speech.”
This also does not mean that all speech online is necessarily good, or something that we have to agree with. It means that giving the government the power to police speech is dangerous. With censorship, disinformation, and propaganda campaigns again on the rise, not only in the United States but across the globe, it is more important than ever that government-imposed restrictions on speech—including how, when, and where people use technology like bots to amplify their voices—satisfy standards designed to protect free speech.
© 2019, Jamie Lee Williams.
See Daniel Oberhaus, Election 2016 Belongs to the Twitter Bots, Motherboard (Nov. 7, 2016), https://motherboard.vice.com/en_us/article/78kmky/election-2016-belongs-to-the-twitter-bots (reporting on a study finding that 20 percent of election tweets were generated by bots).
Whether bots were effective in swaying the opinions of voters, versus inspiring more ardent political views and thereby polarizing political discussion, or impacting the likelihood of a person to vote, is a subject of research and debate. See, e.g., Christopher A. Bail, et al., Exposure to Opposing Views on Social Media Can Increase Political Polarization, 115(37) Proc. of the Nat’l Acad. of Sci. 9216 (Sep. 11, 2018), http://www.pnas.org/content/115/37/9216 (finding, pursuant to a field experiment on the political views of Republicans and Democrats exposed to bots on social media, that “Republicans who followed a liberal Twitter bot became substantially more conservative posttreatment” while the effect on Democrats who followed a conservative Twitter bot was “not statistically significant”); Chris Baraniuk, How Twitter Bots Help Fuel Political Feuds, Sci. Am. (Mar. 27, 2018), https://www.scientificamerican.com/article/how-twitter-bots-help-fuel-political-feuds/ (reporting on a study that found “[t]he stronger a person’s partisan identity, the more likely that person is to actually vote as opposed to merely complaining about the opposition”); cf. James Grimmelmann, The Platform is the Message, 2 Geo. L. Tech. Rev. 217, 232–33 (2018), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3132758 (“Studies have found that fake news didn’t seem to be particularly influential in the 2016 election. Rather, it was a kind of captivating entertainment for people already disposed to be interested; you can’t stop clicking on the outrages.”) (citing Andrew Guess, Brendan Nyhan & Jason Reifler, Selective Exposure to Misinformation: Evidence from the Consumption of Fake News During the 2016 U.S. Presidential Campaign (Jan. 9, 2018), http://www.dartmouth.edu/~nyhan/fake-news-2016.pdf; Hunt Allcot & Matthew Gentzkow, Social Media and Fake News in the 2016 Election, 31 J. Econ. Persp. 211 (2017); Nick Rogers, How Wrestling Explains Alex Jones and Donald Trump, N.Y. Times (Apr. 25, 2017), https://www.nytimes.com/2017/04/25/opinion/wrestling-explains-alex-jones-and-donald-trump.html).
Marc Fisher, John Woodrow Cox & Peter Hermann, Pizzagate: From Rumor, to Hashtag, to Gunfire in D.C., Wash. Post (Dec. 6, 2016), https://www.washingtonpost.com/local/pizzagate-from-rumor-to-hashtag-to-gunfire-in-dc/2016/12/06/4c7def50-bbd4-11e6-94ac-3d324840106c_story.html; Matthew Haag & Maya Salam, Gunman in ‘Pizzagate’ Shooting Is Sentenced to 4 Years in Prison, N.Y. Times (June 22, 2017), https://www.nytimes.com/2017/06/22/us/pizzagate-attack-sentence.html.
Adam Segal, China’s Twitter-Spam War Against Pro-Tibet Activists, The Atlantic (Mar. 23, 2012), https://www.theatlantic.com/international/archive/2012/03/chinas-twitter-spam-war-against-pro-tibet-activists/254975/; Oiwan Lam, China: A Typical Online Political Harassment, Global Voices Advox (Apr. 9, 2011), https://advox.globalvoices.org/2011/04/09/china-a-typical-online-political-harassment/.
See, e.g., Jonah Engel Bromwich, Sam Hyde and Other Hoaxes: False Information Trails Texas Shooting, N.Y. Times (Nov. 6, 2017), https://www.nytimes.com/2017/11/06/us/shooting-texas-hoaxes.html (describing how online trolls gamed social media algorithms in the aftermath of a shooting in Texas, leading Google News to amplify tweets falsely claiming that the gunman had been a supporter of Senator Bernie Sanders and Hillary Clinton and had converted to Islam).
David Kaye, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, A/HRC/38/35 (Apr. 6, 2018), http://daccess-ods.un.org/access.nsf/Get?Open&DS=A/HRC/38/35&Lang=E.
See Jamie Susskind, Chatbots Are a Danger to Democracy, N.Y. Times (Dec. 4, 2018), https://www.nytimes.com/2018/12/04/opinion/chatbots-ai-democracy-free-speech.html.
Bot, Merriam-Webster, https://www.merriam-webster.com/dictionary/bot (defining bot as “a computer program that performs automatic repetitive tasks”); Internet Bot, Wikipedia (last updated Oct. 13, 2018), https://en.wikipedia.org/wiki/Internet_bot (noting that bots are not the same as botnets; a botnet, a portmanteau of “robot” and “network,” is a network of private computers or devices infected with malicious software and controlled without the owners’ knowledge).
Igal Zeifman, Bot Traffic Report 2016, Incapsula (Jan. 24, 2017), https://www.incapsula.com/blog/bot-traffic-report-2016.html.
See, e.g., Junger v. Daley, 209 F.3d 481, 484 (6th Cir. 2000) (recognizing that code, like a written musical score, “is an expressive means for the exchange of information and ideas”).
Josh Dzieza, Why CAPTCHAs Have Gotten So Difficult, The Verge (Feb. 1, 2019), https://www.theverge.com/2019/2/1/18205610/google-captcha-ai-robot-human-difficult-artificial-intelligence (explaining how CAPTCHA tests are getting harder for humans as they become easier for bots).
See, e.g., Sarah Burnett, Crackdown on ‘Bots’ Sweeps Up Tennessee Woman, Others Who Tweet Often, Knox News (Aug. 6, 2018), https://www.knoxnews.com/story/news/local/tennessee/2018/08/06/crackdown-bots-sweeps-up-tennessee-woman-others-who-tweet-often/912701002/.
Madeline Lamo, Regulating Bots on Social Media Is Easier Said Than Done, Slate (Aug. 9, 2018), https://slate.com/technology/2018/08/to-regulate-bots-we-have-to-define-them.html.
Bot Disclosure and Accountability Act of 2019, S. 2125, 116th Cong. (1st Sess. 2019), available at https://www.congress.gov/bill/116th-congress/senate-bill/2125/text.
See, e.g., @soft_focuses, Twitter, https://twitter.com/soft_focuses (last visited June 20, 2019) (a bot that tweets poetry, without identifying itself as a bot); Jia Zhang, Introducing censusAmericans, A Twitter Bot For America, FiveThirtyEight (July 24, 2015), https://fivethirtyeight.com/features/introducing-censusamericans-a-twitter-bot-for-america/ (describing a Twitter bot that tweets short biographies based on Census data); @lenadunhamapols, Twitter, https://twitter.com/lenadunhamapols (last visited June 20, 2019) (a “Lena Dunham Apology Generator” parody bot).
See Kaitlyn Tiffany, The Internet’s Alt-right Are Mistakenly Arguing with a Bot, The Verge (Oct. 7, 2016), https://www.theverge.com/2016/10/7/13202794/arguetron-twitter-bot-alt-right-internet-bigots-4chan-sarah-nyberg.
Madeline Lamo, Regulating Bots on Social Media Is Easier Said Than Done, Slate (Aug. 9, 2018), https://slate.com/technology/2018/08/to-regulate-bots-we-have-to-define-them.html.
See Madeline Lamo & Ryan Calo, Regulating Bot Speech, 66 UCLA L. Rev. 988 (2019).
Packingham v. North Carolina, 137 S. Ct. 1730, 1732 (2017). We cannot assume that platforms will, for example, simply take down objectionable content but leave up parodies and social commentary. These decisions involve “difficult line-drawing exercises, highly contestable value judgments, sensitivity to radically different cultural contexts, and untenable distinctions between sincerity and irony” and are thus not decisions that platforms are well equipped to make. See James Grimmelmann, The Platform Is the Message, 2 Geo. L. Tech. Rev. 217, 224 (2018).
S.B. 1001, 2018 Leg. (Cal. 2018). The provision was removed from the bill after legislators considered the potential unintended consequences of such a notice-and-takedown scheme for innocent human users.
Digital Millennium Copyright Act of 1998, Pub. L. No. 105-304, 112 Stat. 2860 (codified as amended in scattered sections of 17 U.S.C.).
See Corynne McSherry, Platform Censorship: Lessons From the Copyright Wars, Elec. Frontier Found. (Sept. 26, 2018), https://www.eff.org/deeplinks/2018/09/platform-censorship-lessons-copyright-wars.
Jamie Williams, Absurd Automated Notices Illustrate Abuse of DMCA Takedown Process, Elec. Frontier Found. (Feb. 24, 2015), https://www.eff.org/deeplinks/2015/02/absurd-automated-notices-illustrate-abuse-dmca-takedown-process.
See Jones v. Dirty World Entm’t. Recordings, L.L.C., 755 F.3d 398, 407 (6th Cir. 2014) (discussing the “heckler’s veto” in the context of a Section 230 case).
Daniel Nazer, Copyright, The First Wave of Internet Censorship, Elec. Frontier Found. (Jan. 18, 2018), https://www.eff.org/deeplinks/2018/01/copyright-first-wave-internet-censorship.
See, e.g., Russell Brandom, Facebook’s Report Abuse Button Has Become a Tool of Global Oppression, The Verge (Sept. 2, 2014), https://www.theverge.com/2014/9/2/6083647/facebook-s-report-abuse-button-has-become-a-tool-of-global-oppression.
Mandie Czech, How Do Facebook “Community Standards” Ban a Muslim Civil Rights Leader and Support Anti-Muslim Groups?, Chi. Monitor (June 27, 2016), http://chicagomonitor.com/2016/06/how-do-facebook-community-standards-ban-a-muslim-civil-rights-leader-and-support-anti-muslim-groups/.
Mai Nguyen, Vietnam Activists Question Facebook on Suppressing Dissent, Reuters (Apr. 9, 2018), https://www.reuters.com/article/us-facebook-privacy-vietnam/vietnam-activists-question-facebook-on-suppressing-dissent-idUSKBN1HH0DO.
Sam Levin, Civil Rights Groups Urge Facebook to Fix “Racially Biased” Moderation System, The Guardian (Jan. 18, 2017), https://www.theguardian.com/technology/2017/jan/18/facebook-moderation-racial-bias-black-lives-matter.
Center for Media Justice, Facebook’s Hate Speech Algorithms Support Racial Bias, Say Digital Rights Advocates, MediaJustice (June 30, 2017), https://centerformediajustice.org/2017/06/30/facebooks-hate-speech-algorithms-support-racial-bias/.
Computer AI Passes Turing Test in “World First,” BBC News (June 9, 2014), https://www.bbc.com/news/technology-27762088.
Requiring identification is in line with platforms’ interests; it makes tracking and surveillance easier. Some platforms, like Facebook, already have policies requiring users to use their real names when registering for an account, even though some communities rely on anonymity to protect their safety. Ravin Sampat, Protesters Target Facebook’s “Real Name” Policy, BBC News (June 2, 2015), https://www.bbc.com/news/blogs-trending-32961249.
Buckley v. American Constitutional Law Found., 525 U.S. 182, 200 (1999).
McIntyre v. Ohio Elections Comm’n, 514 U.S. 334, 357 (1995).
See, e.g., Freedom House, Report on Freedom of the Press (2016): Kazakhstan, https://freedomhouse.org/report/freedom-press/2016/kazakhstan (discussing physical attacks and threat to safety faced by independent journalists and news outlets in Kazakhstan); Freedom House, Report on Freedom of the Net (2017): Kazakhstan, https://freedomhouse.org/report/freedom-net/2017/kazakhstan (discussing the Kazakhstan government’s plan of launching three different content monitoring systems, including software to monitor social networking sites); Jamie Williams, Judge Rules Kazakhstan Can't Force Facebook to Turn Over Respublika's IP Addresses in Another Win for Free Speech, Elec. Frontier Found. (Mar. 4, 2016), https://www.eff.org/deeplinks/2016/03/another-ruling-against-kazakhstan-its-attempt-use-us-courts-censorship-and (describing the Kazakhstan government’s efforts to unmask the administrators of the Facebook page of an independent newspaper critical of the ruling regime).
Sam Levin, As Facebook Blocks the Names of Trans Users and Drag Queens, This Burlesque Performer is Fighting Back, The Guardian (June 29, 2017), https://www.theguardian.com/world/2017/jun/29/facebook-real-name-trans-drag-queen-dottie-lux.
While the original language was much broader, the bill was narrowed and refined prior to its passage. Jamie Williams & Jeremy Gillula, Victory! Dangerous Elements Removed From California’s Bot-Labeling Bill, Elec. Frontier Found. (Oct. 5, 2018), https://www.eff.org/deeplinks/2018/10/victory-dangerous-elements-removed-californias-bot-labeling-bill.
Cal. Bus. & Prof. Code § 17941(a).
See N.J. Assemb. 4563, 218th Leg. (N.J. 2018), https://www.njleg.state.nj.us/2018/Bills/A5000/4563_I1.htm.
See Madeline Lamo & Ryan Calo, Regulating Bot Speech, 66 UCLA L. Rev. 988, 1015 (2019) (“[W]hile preserving free and fair elections can serve as a compelling reason for requiring political bots to disclose their bot-ness when engaged specifically in electioneering, a different justification than preserving elections would be necessary in all other political contexts.”).
Id. at 25.
Cal. Bus. & Prof. Code § 17940(a). The bill’s definition of “bot” was previously limited to online accounts automated or designed to mimic an account of a natural person, which would have applied to parody accounts that didn’t even involve automation, but not auto-generated posts from fake organizational accounts. See S.B. 1001, Bots: Disclosure, Compare Versions, Cal. Legis. Comm’n (last visited June 20, 2019), https://leginfo.legislature.ca.gov/faces/billVersionsCompareClient.xhtml?bill_id=201720180SB1001&cversion=20170SB100197AMD.
Cal. Bus. & Prof. Code § 17940(c).
Cf. Brendan Nyhan, Fake News and Bots May Be Worrisome, But Their Political Power Is Overblown, N.Y. Times (Feb. 13, 2018), https://www.nytimes.com/2018/02/13/upshot/fake-news-and-bots-may-be-worrisome-but-their-political-power-is-overblown.html (explaining that “to combat online misinformation,” steps should be taken “based on evidence and data, not hype or speculation”).
Adam Kramer, Jamie E. Guillory & Jeffrey T. Hancock, Experimental Evidence of Massive-scale Emotional Contagion Through Social Networks, 111(24) Proc. of the Nat’l Acad. of Sci. 8788 (June 17, 2014), https://www.pnas.org/content/111/24/8788; see also Andrew Marantz, Reddit and the Struggle to Detoxify the Internet, New Yorker (Mar. 19, 2018), https://www.newyorker.com/magazine/2018/03/19/reddit-and-the-struggle-to-detoxify-the-internet.
See James Grimmelmann, The Platform Is the Message, 2 Geo. L. Tech. Rev. 217, 217, 230 (2018); see also Craig Silverman, Facebook Made This Sketchy Website's Fake Story A Top Trending Topic, BuzzFeed News (Aug. 29, 2016), https://www.buzzfeednews.com/article/craigsilverman/a-site-that-facebook-made-a-top-trending-topic-is-a-sketch; Craig Silverman & Lawrence Alexander, How Teens In The Balkans Are Duping Trump Supporters With Fake News, BuzzFeed News (Nov. 3, 2016), https://www.buzzfeednews.com/article/craigsilverman/how-macedonia-became-a-global-hub-for-pro-trump-misinfo#.mo5D9Wn1L (explaining how a “strange hub of pro-Trump sites in the former Yugoslav Republic of Macedonia” illustrates “the economic incentives behind producing misinformation specifically for the wealthiest advertising markets and specifically for Facebook, the world’s largest social network, as well as within online advertising networks such as Google AdSense”).
Noah Kulwin, The Internet Apologizes . . . , New York Magazine (Apr. 16, 2018), http://nymag.com/intelligencer/2018/04/an-apology-for-the-internet-from-the-people-who-built-it.html.
A 2016 study by the Pew Research Center found that 62 percent of adults in the United States get their news on social media—up from 49 percent in 2012—and that 18 percent do so often. Jeffrey Gottfried & Elisa Shearer, News Use Across Social Media Platforms 2016, Pew Res. Ctr. (May 26, 2016), http://www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016/.
Michael Barthel, Amy Mitchell & Jesse Holcomb, Many Americans Believe Fake News Is Sowing Confusion, Pew Res. Ctr. (Dec. 15, 2016), http://www.journalism.org/2016/12/15/many-americans-believe-fake-news-is-sowing-confusion/.
Amy Mitchell, et al., Distinguishing Between Factual and Opinion Statements in the News, Pew Res. Ctr. (June 18, 2018), http://www.journalism.org/2018/06/18/distinguishing-between-factual-and-opinion-statements-in-the-news/.
Joshua Rothman, In the Age of A.I., Is Seeing Still Believing?, New Yorker (Nov. 5, 2018), https://www.newyorker.com/magazine/2018/11/12/in-the-age-of-ai-is-seeing-still-believing.
Farhad Manjoo, How the Internet Is Loosening Our Grip on the Truth, N.Y. Times (Nov. 2, 2016), https://www.nytimes.com/2016/11/03/technology/how-the-internet-is-loosening-our-grip-on-the-truth.html.
Lauren Tousignant, The Trolling Has Only Just Begun, N.Y. Post (Mar. 31, 2017), https://nypost.com/2017/03/31/trolls-are-taking-over-the-internet/ (quoting Vint Cerf).
Jesse Singal, To Understand Pizzagate, It Helps to Understand Cults, The Cut (Dec. 14, 2016), https://www.thecut.com/2016/12/to-understand-pizzagate-it-helps-to-understand-cults.html.
Michela Del Vicario, et al., The Spreading of Misinformation Online, 113(3) Proc. of the Nat’l Acad. of Sci. 554, 554–59 (Jan. 19, 2016), http://www.pnas.org/content/113/3/554 (“[H]omogeneity and polarization are the main determinants for predicting cascades’ size.”).
Farhad Manjoo, How the Internet Is Loosening Our Grip on the Truth, N.Y. Times (Nov. 2, 2016), https://www.nytimes.com/2016/11/03/technology/how-the-internet-is-loosening-our-grip-on-the-truth.html (quoting Walter Quattrociocchi).
Tim Wu, Is the First Amendment Obsolete?, Knight First Amend. Inst. (Sep. 2017), https://knightcolumbia.org/content/tim-wu-first-amendment-obsolete.
“[S]uch rules involve risks to freedom of expression, putting significant pressure on companies such that they may remove lawful content in a broad effort to avoid liability. They also involve the delegation of regulatory functions to private actors that lack basic tools of accountability.” David Kaye, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, A/HRC/38/35 (Apr. 6, 2018), http://daccess-ods.un.org/access.nsf/Get?Open&DS=A/HRC/38/35&Lang=E. According to Kaye’s report, “Obligations to monitor and rapidly remove user-generated content have . . . increased globally, establishing punitive frameworks likely to undermine freedom of expression even in democratic societies.” Id.
Cf. Shelby Cty., Ala. v. Holder, 570 U.S. 529, 569 (2013) (Ginsburg, J., dissenting) (“Demand for a record of violations equivalent to the one earlier made would expose Congress to a catch-22. If the statute was working, there would be less evidence of discrimination, so opponents might argue that Congress should not be allowed to renew the statute. In contrast, if the statute was not working, there would be plenty of evidence of discrimination, but scant reason to renew a failed regulatory regime.”).
Freedom on the Net 2016 — Silencing the Messenger: Communication Apps Under Pressure, Freedom House (Nov. 2016), https://freedomhouse.org/report/freedom-net/freedom-net-2016 (“Social media users face unprecedented penalties, as authorities in 38 countries made arrests based on social media posts over the past year. Globally, 27 percent of all internet users live in countries where people have been arrested for publishing, sharing, or merely ‘liking’ content on Facebook.”). The Freedom House report notes, “The increased controls show the importance of social media and online communication for advancing political freedom and social justice.” Id.
See, e.g., PEN America Files Lawsuit Against President Donald J. Trump For First Amendment Violations, PEN America (Oct. 16, 2018), https://pen.org/press-release/lawsuit-trump-first-amendment-violations/ (alleging unconstitutional use of regulatory and enforcement powers of government to punish the press for criticism); Jennifer Rubin, Suing Trump for Endangering the First Amendment, Wash. Post (Oct. 16, 2018), https://www.washingtonpost.com/news/opinions/wp/2018/10/16/suing-trump-for-endangering-the-first-amendment/?utm_term=.238dd84c94e8 (quoting Erwin Chemerinsky: “No president in history has repeatedly threatened the press as Donald Trump does on a regular basis. Under long-standing First Amendment precedents, these threats violate freedom of press and the First Amendment. [The PEN America] lawsuit addresses an urgent threat to our Constitution.”).
Ariz. Free Enter. Club’s Freedom Club PAC v. Bennett, 564 U.S. 721, 763 (2011) (emphasis in original) (Kagan, J., dissenting).
Nat’l Inst. of Family & Life Advocates v. Becerra, 138 S. Ct. 2361, 2382–83 (2018) (Breyer, J., dissenting) (discussing how use of the First Amendment “to strike down economic and social laws that legislatures long would have thought themselves free to enact” such as laws insisting that medical providers tell women about the possibility of abortion is antithetical to the First Amendment’s guarantee of free speech).
Id. at 2382–83 (2018) (Breyer, J., dissenting).
Jamie Lee Williams is a staff attorney at the Electronic Frontier Foundation.