Introduction

Anonymity has emerged in recent years as an important focus of debates about the digital public sphere. An opinion piece in The Wall Street Journal argued that a solution to the problems besetting social media was to “end anonymity.” Soon after, Senator John Kennedy announced he would introduce a bill to ban anonymity online. In the United Kingdom, anonymity also featured in the discussions about the Online Safety Act. Bills designed to curb anonymity are also frequent in Brazil; one recent example was introduced by the select committee investigating the Bolsonaro administration’s handling of the pandemic, as part of the recommendations in the committee’s final report to punish those who engage in disinformation.

Supporters of proposals targeting anonymity sometimes argue that requiring users to make themselves known will remedy many of the pathologies afflicting the digital public sphere, including misinformation. Identification is seen as a tool for creating a more truth-based discourse, by inducing speakers to behave more responsibly, as well as providing listeners with information to assess the credibility of the speaker. The assumption often is that anonymity promotes lies and incivility, while identification induces truth and civility. Nathaniel Persily sums it up: “If online anonymity is the cause of many of the democracy-related ills of social media, then disclosure might be the best disinfectant.”

In fact, in an environment beset by political polarization, instead of serving as a disinfectant, identification can add fuel to the fire of mis- and disinformation. Not only that, anonymity can also play an underappreciated role in enabling public political deliberation. This paper surveys literature from multiple disciplines and challenges assumptions behind the prevailing stances towards anonymity and mis- and disinformation. It argues that anonymity and identification do not have a fixed function; it speaks instead of the plurality of identification and the plurality of anonymity. “Plurality” is meant to emphasize that both anonymity and identification shape and are shaped by factors such as social norms and platform affordances. As such, whether identification will contribute to a more truth-based public discourse and to a more civic-minded digital sphere is a question that can only be answered if we account for those factors. Considering the identity-based components of the spread of disinformation in polarized contexts, anonymity can serve as a device to create opportunities for conversation and to avoid some of the mechanisms triggering those components.

A few notes on terminology and scope should be helpful. Anonymity stands for namelessness in the vernacular, yet conceptually it must be appreciated as going beyond names. In fact, names are imperfect unique identifiers, as they can often be shared by more than one person. Identification, correspondingly, is not constrained to names. Identification and anonymity can be seen as “different poles of a continuum.” Anonymity is relational: Someone might have knowledge that allows them to identify a speaker, while another person might not.

This relational aspect can be relevant particularly when we are considering illegal content, where it is not just listeners who pass judgment on anonymous speech, but also authorities seeking to hold speakers accountable. That is, the audience not having knowledge that identifies a speaker (because their name is not unique or the speaker uses a pen name) can be a different issue than law enforcement and other officials being able to trace the speech. Although the questions are connected, this paper will not discuss traceability. It will focus on identification with one kind of identifier, real names, as a lever that commentators and policymakers have turned to with the aspiration of governing legal speech. Combatting mis- and disinformation is one reason why commentators want to expand identification. One shared hope is that both speakers and listeners will be closer to the truth through real-name identification. That is the central concern of this paper.

Part I introduces the concept of the plurality of identification, which the paper uses to call attention to how real names operate differently on social media. Names, which were not previously employed as ubiquitously as they are now (e.g., full names in Facebook profiles), work in markedly transformed ways when they offer an index to massively aggregated, permanent information on every one of us that is accessible through social media and search engines. Calls for identification often rest on an assumption that real names instantiate the same identity regardless of the context in which they are displayed. This ignores the impact of context collapse, the flattening of different social contexts, which impels individuals to perform their identity for an imagined, unspecified audience with which they engage much like micro-celebrities.

At the same time, anonymity is thought to prevent accountability by disconnecting us from drivers of norm-abiding behavior. Part II shows that this is only sometimes true—and introduces the plurality of anonymity. It surveys research establishing that anonymous settings may produce greater conformity to local, i.e., group-related, social norms (which may or may not be democratically desirable). The paper then argues that the impact of anonymity on user behavior depends on content moderation practices and community norms.

Part III consolidates those points and discusses the role of political polarization and the sharing of false information. Although it is commonly assumed that identification is a means of fostering veracity, as well as civility, this is often not the case. The paper explores findings from psychology and computational social science to argue that real names are part of mechanisms that drive misinformation in settings marked by affective polarization (negative attitudes toward the other party). Anonymity, conversely, has potential as a device for reducing polarization as well as creating opportunities for conversations not infected by those mechanisms.

This paper aims to add to a years-long debate about the place of anonymity in a healthy digital public sphere. Much work has been done on the disproportionate effects real-name policies have on marginalized communities, on individuals who have legitimate reason to fear for their safety in disclosing their real names, or on those whose names do not match their official government identification. Indeed, in 2011, the announcement of now-defunct Google Plus’s real-name policy prompted considerable backlash along those lines, leading to what were described as the Nymwars; in 2015, a new battlefront opened over changes in Facebook’s enforcement of its policies, which was met with opposition by a collection of civil society organizations gathered around the Nameless Coalition. Scholars have suggested that such concerns can be addressed exceptionally, in specific cases, only “where anonymity is needed to avoid ‘threats, harassment, or reprisals,’” as Justice Scalia argued in McIntyre, a landmark case on the topic. My hope with this paper is to explore the role of anonymity and identification even beyond the risk of speech suppression and disproportionate effects.

I. The Plurality of Identification: Real names and context collapse

Mark Zuckerberg framed real names as the appropriate norm for online interaction when he claimed, in 2010, that “[h]aving two identities for yourself is an example of a lack of integrity.” The notion seems to be: People hardly ever use assumed names in offline life, so why should they behave differently online?

By implementing a real-name policy for Facebook, and its 3 billion users worldwide, Zuckerberg lent credence to the notion that using real names is what is generally expected of people online. He made his assertion a kind of self-fulfilling prophecy.

Discussions of online anonymity often frame it as a deviation from established social norms—a deviation that is justified by an individual’s legitimate fear of retaliation, or as a legitimate response to surveillance. However, despite the allure of the familiarity of names in a pre-internet, offline world, it is instead real-name policies that break with longstanding conventions. As Section A shows, the internet did not always presume real names.

Even when users adopt real names online, doing so significantly alters the function those names play, as Section B explores. This is because those names get indexed on multiple social media and search engines, the result being that users’ multiple audiences, representing a range of social interactions, are flattened into a single one. This forces them to perform to their “most sensitive [audience] members: parents, partners, and bosses,” as if they were broadcasting for these networked audiences. Real-name identification in such a setting does not mean the same as it would in each social context; this evidences how identification is multifarious.

A. Before real names

In fact, the early days of computers and the internet were marked by identifiers other than real names. As Emily van der Nagel reports, the earliest usernames were actually numbers: System administrators would assign individuals unique user identification numbers to distinguish their activities from those of others who shared the same computer (then owned only by institutions).

Early email accounts, too, were at first controlled by institutions, not individuals; institutions tended to use employees’ or students’ full names (or a combination of initials and numbers) as email identifiers. As the internet developed and the commercial internet grew, however, service providers started offering personal email addresses for a fee. Once (institutional and financial) constraints on email creation disappeared, people began to choose usernames creatively, “play[ing] with numbers, nicknames, interests, in-jokes and cultural references.” Those creative email addresses were also a way to establish boundaries between work and personal life, which made more sense before connected portable devices (laptops and phones) eroded those divisions.

Pseudonyms were also a staple of users of social media precursors such as bulletin boards and IRC (internet relay chat) channels. As early as the 1970s, people played with usernames at the Electronic Information Exchange System, a computer conferencing bulletin board, so they could adopt “a role in particular conferences, have the freedom to say things they would not want attributed to them or their organisation, signal that the discussion was not to be taken too seriously, and let newcomers experiment with sending messages on the board without fear of revealing their lack of skill in the medium.” Foundational work by Sherry Turkle, writing on the early days of the commercial internet, discussed how users in IRC channels had fluid identities and explored how this helped to create a space in which conventions around gender, age, and race could be redefined and transformed. Those hopes were not borne out as Turkle might have expected, in large part because “forms of discrimination such as racism and sexism are not solely based on appearance.”

The tendency of users to continue to rely on pseudonyms was a consequence of many features of the early internet, which Bernie Hogan discusses. First, pre-Web 2.0, user-generated content was generally text-based, and digital cameras and webcams were not yet widespread. As such, constructing a new identity required less effort. Second, because relatively few people used the internet, communities tended to be interest-based, not based on social ties. This meant there were few costs for using pseudonyms online. Third, the internet was still a mystery to many, even those who used it, and people were wary of exposing their “real-world” identities.

B. Real names in context collapse

The rise of social media altered many of these features of the internet. And because social media platforms were designed to link people to those they were already connected to offline in some way, it made sense for users to employ their real names when they used the platforms. Indeed, it is worth remembering that Facebook was early on described as an online version of Harvard’s paper face books. Real names made sense for TheFacebook, just as pseudonyms made sense for other websites. Initially, Facebook was limited to the Harvard community; it would later be extended to other universities in the US. Still, it was a walled garden, with social norms appropriate for the context of that community.

An important change happened, however, when the platform became accessible to anyone with an email address. Users could now interact simultaneously with their high school friends, college classmates, family, coworkers, and so on. This meant that users had no single set of social norms they could rely upon when communicating to these multiple audiences. The opening up of Facebook resulted, in other words, in context collapse, a term which stands for “[t]he lack of spatial, social, and temporal boundaries mak[ing] it difficult to maintain distinct social contexts.”

The consequence was that, although Facebook’s real-name policy stuck around, users’ real names no longer played the role they did offline. For one thing, users’ real names were now persistent and searchable: When users spoke online, their words were not only broadcast to everyone in their online network; they could be found and associated with them at any later point. With context collapse, users would be read by audiences they might not have expected. Attempting a joke after giving the barista one’s name meant, at most, looking silly to a handful of people nearby or drawing a few chuckles from them. With real-name social media accounts, the embarrassment goes much further, as does the comedy. This is not just a question of reach; it affects how users see themselves and what they post.

Alice Marwick and danah boyd have explored this transformation in how people interact online. They show how collapsed social contexts drive people to engage in the practices of “micro-celebrities,” much as in broadcast television, with the caveat that “unlike broadcast television, social media users are not professional image-makers.” To the extent that each social interaction enacts identity, the collapsing of contexts on social media means that users must present themselves to an imagined audience (those they think might consume their content) that does not share a set of norms regarding what is appropriate.

So, while sticking to real names might seem a continuation of established social practices, it is not, because internet affordances change how names operate socially. Our names are indexed, along with our café encounters, our workplace banter, and our relationships. In short, now that we are visible to all, we have to perform our identities for all those people, or pay the price for not doing so.

In light of that, we can see that pseudonyms in fact make sense online, because they allow people to navigate different contexts and speak in different registers to different audiences. This is not to say that real names on social media do not make sense. Billions of users found value in connecting to high school friends, distant family members, former coworkers, etc. The point to be clear on is that real names online are not a continuation of our pre-digital practices. And, as Part I, Section A showed, the ensuing transformation is not directly a result of technological change. As Bernie Hogan notes, “[t]he real-name web is not a technology; it is a practice and a system of values.” The familiar appeal of using real names, therefore, rests on an inadequate understanding of how internet affordances changed what our names mean. The impact of attaching real names to our speech and actions varies, and this is how we can see the plurality of identification.

II. The Plurality of Anonymity: Norm conformity and the mediation of other affordances

Part I explored how the same form of identification can function differently according to the context. Real names have different implications in digital settings. Part III will explore how this variation frustrates the assumptions of commentators who put faith in identification to combat mis- and disinformation. This Part shows how anonymity can play a part in making behavior conform to social norms, a point that is often neglected. Section A introduces the theoretical model that describes how. Section B then transitions from theory to practice. It canvasses some of the ways anonymous communities work to shape identities around their aspirations and goals. Section C discusses quantitative research that has sought to understand the role of anonymity in the quality of online content by studying newspaper comment sections.

A. Anonymity does not mean absence of social norms

It is tempting to think of online anonymity as bringing out the worst in us. If users cannot be held accountable through their offline identities, the argument goes, then incentives to refrain from abusive behavior are removed, and only incentives to indulge in toxic disinhibition remain. In short, the idea is that when individuals are anonymous, they will flout social norms and behave badly. This tracks classic theories of deindividuation in social psychology. This familiar view of the impact of anonymity has been challenged in recent decades by scholars in social psychology and communication studies who have developed the social identity model of deindividuation effects (SIDE).

This model holds that, in many situations, “group immersion and anonymity le[a]d to greater conformity to specific (i.e., local) group norms, rather than to transgression of general prosocial norms, as deindividuation theory proposed.” Contrary to classic deindividuation theory, which links the lack of identification with individuals acting in disdain for any social norms, the SIDE model predicts that, when group identity is salient, it will modulate anonymous individuals.

Deindividuation theory would see the behavior of individuals in a crowd as irrational and anti-normative, reflecting a loss of identity and of the constraints of self-awareness. The SIDE model sees such behavior as a consequence of individuals conforming to group identity and local norms, acting in accordance with what the group finds normative. In a nutshell, where deindividuation theory “implies a loss of self in the group,” the SIDE model instead recognizes “the emergence of the group in the self”—when individuals perceive each other as “interchangeable group members.” Initially applied to text-based media, the model has been extended to other kinds of media as well (e.g., video-based). The SIDE model has been supported by multiple research findings.

So the notion that online anonymity entails a negation of identity and of any kind of social norms must be revised in light of research showing how, even in conditions of anonymity, identities are still mediated by norms. We should be careful about what this means. It does not mean that group identity and the corresponding norms will always prevail. Which identity will be salient depends on a wide range of factors; the SIDE model does not claim that group identity will always win out. Instead, it rejects a “blanket assumption that people will always act in line with individual self-interest when anonymous.” It also does not mean that the resulting norms will guide group behavior toward positive social outcomes. Importantly, the norms here are local, i.e., those embraced by the group, and might be in tension with broader social norms or with the law.

Indeed, as noted, SIDE explains (instead of refuting or ignoring) how, in groups such as mobs, individuals can be guided toward extreme conduct. While one might think that the anonymity of the mob (i.e., the fact that individual behavior is less likely to be discerned) releases mob members from social norms, the reverse is often the case: Individuals are swept up in the mass behavior because they have fused with the (destructive) group identity. The insight borne out by this framework is that this is not a result of the absence of social norms. It is rather the opposite: Groups can become more extreme than the aggregation of their members’ attitudes precisely because group identity exerts such an overwhelming force. We turn now to 4chan and Reddit to see in practice how group identity can be shaped to very different results.

B. Affordances and norms shape identity even in anonymous settings

The SIDE model shows that we should not assume that anonymity necessarily erodes the constraints of identity and social norms. Identity can play a part in anonymous settings, and identity performance is then not unlike what takes place in non-anonymous settings when we perform not just one but many roles (or, under context collapse, try to negotiate performing those identities for audiences with differing expectations). How we make decisions regarding identity performance in such circumstances is the result of the interplay of digital affordances and social norms, which are reciprocally shaped.

The outcome of this complicated function can affirm or undermine our democratic aspirations for the digital public sphere. The argument here is not that anonymity always yields valuable results. Instead, it is that the role of anonymity in that function is not fixed, although commentators often talk as if it were.

To see how, we can consider platforms that allow users to be anonymous, where anonymity is the norm, and that are nevertheless markedly different from one another. 4chan and Reddit both enable users to post without any verification. Users can employ multiple handles and create temporary accounts (known on Reddit as throwaway accounts), even one for each post they want to make; 4chan goes a step further and allows the same handle to be shared by multiple users, which is the norm. The two platforms fall roughly on the same extreme of the spectrum that runs from real-name verified accounts to no identification at all. In spite of that, the 4chan boards and Reddit subreddits that we will consider are starkly contrasting.

Reddit operates with federated community standards and moderation, with site-wide (or federal) policies and practices supplemented by more specific, community-built and enforced (local) subreddit rules. Site-wide policies and their enforcement were significantly tightened after very visible incidents, particularly the use of the website for the non-consensual sharing of intimate images of celebrities, which led the platform to ban a community that had hosted much of the material. In 2015, Reddit announced an update to its harassment policy that culminated in the banning of “a fatphobic community [targeting] photographs and videos of overweight and/or obese persons.” Other subreddits were later banned, and the platform also started using quarantine as an enforcement instrument.

Once again, this federal level of policies and enforcement sits on top of the communities’ own, which can involve stringent rules for eligibility to participate (sometimes by obtaining assurances of who the user is without checking any official or institutional forms of identification) and for the manner of participation. This shows that group identity is deliberately and fastidiously molded by the communities, which promulgate and patrol the model of behavior they have elected for themselves. That effort by communities sits within Reddit’s “karma” system and its upvote and downvote mechanisms, which affect content visibility and which subreddits can to an extent wield as part of their governance strategies (e.g., by instructing users to use the downvote function to enforce community rules rather than to signal mere dislike of the content). There are also other platform affordances that subreddits can use and adjust to their needs, including automation tools for moderation and “flairs,” color tags that can be attached by moderators to both pieces of content and usernames (when displayed in that community). For instance, r/AskHistorians uses flairs as badges of community-verified expertise.

4chan, on the other hand, is decidedly not invested in that kind of meticulously manicured public forum. 4chan message boards such as /b/ are often described as “a well-known trolling stomping ground” and are notoriously accorded the distinction of being among “the dark corners of the internet.” Like Gab and 8chan, 4chan “engage[s] in little or no moderation of the content posted.”

That might be taken to suggest that group identity and social norms do not play a role. Yet the opposite is true. Meaningful participation in such 4chan boards in fact requires intricate demonstrations of membership, which are designed to cordon off outsiders. These range from the digital equivalent of shibboleths (for instance, being able to post unusual Unicode characters), to particular slang, to a choreography involving sarcastic use of design features (such as “memeflags”), grasp of community tropes regarding current affairs, and textual and nontextual representations. Seasoned users explicitly tell the uninitiated to observe and assimilate the ways of the community. Mastery of social norms is persistently tested, and lack of familiarity prompts chastisement. Archetypes about members and unwanted participants are also upheld.

More specifically, the import of the SIDE model is that we should not assume that anonymity works the same on platforms such as 4chan and Reddit. The former does virtually no moderation; community norms are uncodified, and abuse and harm toward out-group users often meet with apparent informal approval. The latter, in contrast, operates with federated community standards and moderation, with site-wide (or federal) practices supplemented by more specific, community-built and enforced (local) subreddit rules. In terms of requiring and validating identifying information, the two platforms might otherwise be seen as quite similar. Yet the differences are striking. While certain 4chan boards are often relegated to “the dark corners of the internet,” researchers have shown how subreddits are able to create vibrant forums for scholarly knowledge, parenting, and intimate content, among others. The SIDE model offers insight as to why: anonymity is employed with patently different goals—and outcomes.

C. Measuring the impact of anonymity: the role of content moderation

Research about the role of anonymity in the comment sections of newspaper websites has been prolific. It provides additional insight by showing how forums that are not interest-specific (like some subreddits) or extremist (like some 4chan message boards) are affected by anonymity.

Several studies seek to evaluate the role of anonymity by assessing discursive civility, which an influential study notes “has been defined as arguing the justice of one’s own view while admitting and respecting the justice of others’ views.” Civility is not, of course, the only value that critics of online anonymity argue it threatens. Anonymity has also been linked to hate speech, actual threats, and harassment. Nevertheless, research on civility can help shed light on the extent to which anonymity drives people to behave without respect for social norms, which include, but are not limited to, disapproval of uncivil speech.

What, then, do studies on comment sections tell us about civility and anonymity? The evidence is mixed. A highly cited 2014 study compared 11 online newspapers and found that “over 53 percent of the anonymous comments were uncivil, while 28.7 percent of the non-anonymous comments were uncivil.” The same researcher more recently examined 30 outlets, with similar results. Yet competing explanations were not discussed, and so differences in the audiences of each website, as well as varying content moderation practices, could have interfered with the observed effects. Another study compared comments on The Washington Post website, which “afford[ed] users a relatively high level of anonymity,” with the newspaper’s Facebook page, finding the former had significantly more uncivil discussions than the latter. Yet again, other factors cannot be excluded, and it is plausible that the content moderation practices available to and deployed by Facebook in early 2013 were considerably more effective than those The Washington Post website could make use of. Conversely, a study comparing comments posted to newspaper websites and their respective Facebook pages in Brazil in 2016 (a period of considerable disruption that saw President Dilma Rousseff’s removal from office after her impeachment trial) identified no significant difference in terms of incivility and actually found more intolerance on Facebook.

Knustad and Johansson examined the toxicity of the comment sections of The New York Times and The Washington Post and assessed whether anonymous commenters were more toxic than non-anonymous commenters. The outlets were selected for comparison because they are both “east-coast, national, fairly mainstream, left-leaning newspapers,” thus reducing “the likelihood of interfering variables, such as the affordances of different platforms, with different rules of conduct, moderation and different comment section cultures.” They found a “small or tiny” correlation between anonymity and toxic comments, but a much larger difference between the two publications: The Post had considerably more toxic comments than The Times. This led the researchers to conclude that “website is a stronger explanation for toxicity than anonymity alone.” The authors speculated that these results might be a product of different content moderation strategies, since both newspapers “have extensive community rules and guidelines that are linked to in the comment sections […] that reflect their desire for civil and well-informed comments, and neither allow personal attacks, vulgarity or off-topic comments,” noting that The Times uses machine learning software developed by Jigsaw, part of the Alphabet conglomerate. The researchers hypothesized that The Times’ system might be “better at catching unwanted comments than the system used by The Washington Post,” which boasted about having its own, proprietary machine learning system.

Another potential factor is that The Times also relies on “NYT Picks,” which are selected by moderators to showcase “high quality comments with exceptional insights that are highlighted in the commenting interface.” A study found evidence of “the positive impact of highlighting desirable behaviors via NYT Picks to encourage a higher-quality communication in online comment communities.” The Post also highlighted comments, not for their quality, but to call attention to “[u]sers with direct involvement in a particular story.” The Times’ content moderation strategy of spotlighting quality contributions while taking advantage of design features might be an important factor in the differences found by research on anonymous comments.

This reaffirms the centrality of content moderation practices to understanding how anonymous communities work. Policies, strategies, and enforcement are crucial in governing the digital public sphere, and not just as assessed by, e.g., the volume or prevalence of infringing or abusive content. The point here is not that creative content moderation or more efficient systems can keep the anonymous vandals out. Indeed, we should not underestimate issues with automation in content moderation, particularly with Perspective, the Jigsaw software which was adapted to create The Times’ Moderator. The point is instead that content moderation is a component in shaping the identity of those taking part in a particular digital forum, which takes place even when identification is not required. Content moderation can do so by modeling positive behavior, as with the NYT Picks (or flairs in some subreddits, as seen in Part II, Section B), as well as by curbing unwelcome content and preventing users from being provoked into emulating it.

III. False Information, Polarization, and Identity

Identification is often seen as a means to better democratic deliberation, while anonymity is regarded as an abettor of lying. Because “more information moves the market closer to truth,” identification is said to make for an improved marketplace of ideas by equipping listeners with better information to form their judgment. Identification is taken “as a beneficial and purifying process,” through which “[t]he sense of being exposed to public view spurs us to engage in the actions of the person we would like to be.” “[C]ivil and dignified” discourse is also associated with identification, which furthermore upholds civic virtues needed for democratic decision-making.

The previous Part has shown that categorical statements such as those fail to appreciate how anonymous settings can shape identities in different ways. Just as anonymity on Reddit and 4chan results in contrasting outcomes, we should not expect that anonymity will always undermine the democratic values with which commentators are concerned. Anonymity is not intrinsically inferior to identification, because the effects of anonymity and, as Part I showed, of identification vary. This Part goes further than claiming that anonymity is no worse than identification: I will argue that identification can in fact be an agent in the pathologies afflicting social media, particularly dis- and misinformation.

To see how, we need to understand the real-world interplay of identity, community, and norms. Commentators have assumed that anonymity “facilitate[s] the kind of lying and misrepresentation that undercut a well-informed electorate” because “the speaker bares no cost for repeating lies and promoting false content.” But research into political polarization paints a different picture. It tells us that, in affectively polarized settings, identified speakers can reap rewards from lies and inaccuracies, instead of being punished by their listeners.

In an affectively polarized landscape where hyperpartisans dominate social media, accuracy and truth do not exert the disciplining force that the commentary about identification assumes. In fact, users have been shown to sever the decision to share from their judgment of the truthfulness or falsity of the news. This indicates that it is not only anonymity but also identification that has been insufficiently conceptualized. The next sections will bring together findings from different strands of scholarly literature to explore the role that identification plays in the sharing of mis- and disinformation.

A. Identity and affective polarization

An important concern about the current state of the online landscape is political polarization, which has been described as “the greatest threat to American democracy” and one of the “four horsemen of constitutional rot.” Social media has been blamed for reinforcing preexisting beliefs through repeated exposure to homogeneous viewpoints, which in turn further cements beliefs and insulates them from being challenged. It thus contributes to increasingly polarized politics, with each side of the divide living in its own “echo chamber,” according to a popular account of the issue.

In fact, the “echo chambers” theory of social media as a driver of polarization is quite controversial. Researchers have found little empirical support for the thesis, or have concluded that the claim is overstated. One study found that modest monetary incentives may considerably dissipate the incorrect partisan beliefs about facts that respondents report. And even when it comes to opinions, the echo chambers account might fail to consider how partisan attitudes toward policy positions are formed. For instance, in one study, participants voiced support for a policy aligned with their perception of party ideology but expressed the contrary view when told that the party’s stance was in favor of the policy. Furthermore, the echo chamber account of political polarization may dramatically underestimate the role of legacy media actors in driving the phenomenon.

Instead of issue-based, or ideological, polarization, many scholars are increasingly interested in the escalation of affective polarization, described as a “phenomenon of animosity between the parties.” Affective polarization refers to the process whereby identities get sorted along a cultural divide that predicts where people buy food, what clothes they wear, and what shows they watch on TV. In short, rather than in policy positions (e.g., support for a proposed gun control measure), this kind of polarization manifests itself in more encompassing terms. The divide runs not only along partisan lines, but also along racial, religious, cultural, and geographic ones, all of which are increasingly conflated.

Affective polarization helps explain seemingly paradoxical results from a research intervention that was designed to decrease “echo chamber” insulation (and hence partisan distance). In that study, partisans were paid to follow a Twitter bot account that exposed them to opposing political ideologies. If greater political polarization is understood as the result of social media reinforcing views and information, and not exposing partisans to different thinking, we would expect that participants who saw more cross-partisan content would hold less polarized attitudes. The study instead found that participants subsequently exhibited more partisan attitudes. The key to unraveling this paradox is in understanding how identities are shaped on social media in a context of affective polarization.

The echo chamber account sees polarization as a consequence of insulation created by social media. Scholarship highlighting affective polarization instead frames it as being “driven by conflict rather than isolation.” Exposure to cross-party content such as that offered by the study thus does not break echo chambers, argues the lead author of the study in subsequent work, because it does not breed reflection and deliberation. Rather, it “sharpen[s] the contrasts between ‘us’ and ‘them,’” magnifying affective polarization.

Chris Bail uses the metaphor of a prism to explain how social media plays an important role in shaping political identities by reflecting back a distorted image of society. In a setting where affective polarization festers, extremists get validation and social support from denigrating the out-party, as well as from disciplining in-party members who stray from in-party views. This sort of behavior is then normalized both for moderates (who are given the impression that their views are less prevalent than they in fact are) and for extremists (who become further entrenched not just in their views but in their tactics).

The affective polarization account points to identity, more than policy positions or information, as crucial to understanding political polarization on the internet. Platforms magnify feedback processes around the presentation of the self; they “enable us to make social comparisons with unprecedented scale and speed.” That is, we can clearly see what sort of content gets positive engagement from other users, and what sort of content brings about the embarrassment of being ignored or the stress of being contested. Given that social media is now so ingrained in everyday life, straying from partisan expectations is very costly, socially and emotionally. This, Bail contends, creates a prism that distorts our sense of the environment, inducing us to see the partisan out-group as more extreme than it actually is, through the rewarding of radical partisan behavior and the silencing of moderate behavior. All of that culminates in “status seeking on social media creat[ing] a vicious cycle of political extremism.”

B. Performing Lies and Misinformation: Identification as a driver

So social media is a cog in a machine that rewards greater affective polarization. Platforms “do not isolate us from opposing ideas; au contraire, they throw us into a national political war.” The prevalence of dis- and misinformation online must be understood against that background. Once we appreciate this, the connection between identification (particularly the kind established by real-name policies) and misinformation becomes clear.

There is increasing evidence that content employing “moral-emotional language” does significantly better on social media. Moral psychologists use the term to refer to language that expresses both a moral judgment about what is right and wrong and an emotional state (such as “hate” or “contempt”). Moral-emotional content shows a propensity to go viral online, in a process researchers have described as “moral contagion” given how “it mimics the spread of disease.” One study of over 500,000 tweets found a 20 percent increase in sharing for each word marked by that kind of language.

Disinformation campaigns have leveraged that viral propensity of moral-emotional content. A study that looked at news articles shared on Twitter concluded that false news (established as such through concurring assessments by fact-checking organizations) evoked more disgust than real news.

Research focused on moral outrage (a subcategory of moral-emotional language) on social media has explored the mechanisms behind the virality of that sort of content. I want to foreground how identification is a part of those mechanisms.

One important component in expressing moral outrage is the reputational gain that can be reaped by signaling to the in-group that we care about serious moral violations. This logic is valid both offline and online, but whereas expressing outrage at, say, how badly a fellow commuter was treated might typically earn us credit with others waiting in the subway station, “doing so online instantly advertises your character to your entire social network and beyond,” as M. J. Crockett puts it. This line of research emphasizes how social media “is a context in which our political group identities are hypersalient,” which amplifies motivations to engage in moral-emotional expression to defend the in-group against perceived out-group threats and to accrue in-group reputational gains. Status seeking (much as Bail described) is an important component of the spread of dis- and misinformation online.

When users are anonymous in online settings, they are less likely to express outrage, as an important part of their underlying personal motivation is removed: “the need to maintain an image as a good group member in the eyes of other group members.” This is supported by related research on “online aggression” on a German petitions website, which found that non-anonymous users’ comments were more aggressive when engaging in firestorms against public officials and policies, given that aggressiveness would not be something they would want to conceal—on the contrary, they would want to be seen as standing up for their values.

My argument is that real names on social media significantly raise the stakes of the rewards for expressing moral outrage. Granted, pseudonymous users can benefit from reputational gains within that particular digital context. The reputational gains for identified users, however, can yield material benefits, including offline. The favorable recognition they achieve might translate, for instance, into media appearances in prestigious legacy publications or into professional opportunities. The fact that they can extract those tangible gains is a function of the affordances for identification on a given platform. A platform that disfavors identification, where users are not identified across different posts, like Yik Yak or Whisper, impedes users who might wish to claim authorship of a viral piece of content; they will have problems establishing themselves as the genuine posters—anyone would be able to fabricate a screenshot and try to get credit.

The entanglement between identification and moral outrage goes further than the rewards those expressing it online can garner. Moral outrage directed at a member of the out-group upholds in-group norms and thus also affirms group identity, to the detriment of the out-group. In an affectively polarized setting, moral outrage is an assault on the opposing party and its political capital. Identification again is crucial here. Real names in a polarized setting will enable and invite users to try to establish whether the target of their outrage is a member of the opposing party. Digital encounters in identified settings then provide opportunities, particularly for hyperpartisans, to raid the opposing party at every flank where moral outrage can be expressed. Even if, on a given platform, users find insufficient cues about the potential targets of moral outrage (such as how those targets identify through their bios, profile pictures, likes, or follows), other information on the web can be used to try to infer party affiliation.

To be clear, this kind of antagonistic behavior is not exclusive to real-name settings; it can also take place with pseudonyms whenever there are sufficient cues for users to make inferences about others. Yet this mechanism is contingent on the norms of an anonymous setting: it depends, that is, on whether users display their party affiliations or give them away inadvertently. In real-name social media, platform design makes this inescapable. As noted earlier, these in-group-oriented motivations extend to the sharing of fake news, regardless of whether the user “ha[s] a firm belief in” it, as research has found. Indeed, one line of study highlights that whether people believe false information stands separately from whether they condone it—“they recognize it as false, but give it a moral pass.”

And while it might be objected that moral outrage did not start with the internet, M. J. Crockett points to several factors explaining why outrage is amplified by social media. It multiplies opportunities: There is evidence that in-person observation of violations of moral norms is uncommon. On platforms driven by user engagement, moral outrage is more likely to go viral. And while expressing moral outrage in person is costly (because many will shy away from confrontation or be intimidated by the risk of retaliation from the target of the outrage, including violent retaliation), online the costs are lower, and the corresponding positive feedback can be much more immediate. Again, such positive feedback translates into gains to the individual’s reputation, accrued by signaling virtue to the in-group, which is a function of their identification. Once more, online identification with real names can yield different results compared to offline identification, as discussed in Part I.

While the discussion so far has emphasized the deleterious effects of moral outrage, the literature stresses that such emotional phenomena should not be viewed as intrinsically positive or negative, citing, for instance, the role outrage plays in propelling collective action around social inequality and injustice and in fundraising campaigns. The point here is not to pass judgment on moral outrage but to note its part in the mechanisms that underpin the sharing of misinformation online and to highlight how identification magnifies those mechanisms.

Commentators see identification as beneficial because they believe users will behave better, refraining from toxic speech out of fear of how their actions online will affect their standing in their social circles. What is generally not accounted for in that narrative is how real names on social media also impel users to perform their context-collapsed identities under conditions of affective polarization. The audience (composed of their friends, family, coworkers, and so on) is watching and will pass judgment on deviations from group loyalties. On real-name platforms, experimentation, self-questioning, and crossing the aisle to try to understand the other side come at a price. Posts and comments supporting the in-group are rewarded; content opposing the in-group will often lead to disciplining. Risks flowing from context-collapsed identities on social media have been described in terms of what users will or will not post. What we are considering here is how norm enforcement will effectively shape not only what users themselves post but also how they consume content posted by other users. In other words, both the content of the posts users share and how they read posts by others are in part a function of how identities are presented on a platform. This can create a vicious cycle. Conversely, these drivers can be prevented in certain anonymous settings. This is exactly what some researchers have been exploring and is the topic of Part III, Section C.

C. Anonymity as a depolarizing, discourse-enabling device

With polarization breaking records and social media engulfed in a vicious cycle elicited by status seeking based on constant feedback from like-minded individuals, it might sound like an inane notion to participate in online communities to solicit views contradicting our beliefs on topics such as immigration, gender identity, and the disbandment of one of the main political parties in the U.S. Still, those are examples of conversations at r/ChangeMyView, a subreddit created in 2013 to serve as a venue where users deliberately invite challenges to their opinions.

The community operates within Reddit, which, as discussed above, requires no more than a username and password for account creation and employs a policy that allows and even encourages temporary or “throwaway” accounts. It illustrates how anonymity, combined with platform design and content moderation strategies, can mold identity in ways that help digital spaces overcome afflictions plaguing much of social media.

First, rather than all-encompassing policies that must apply to a wide range of contexts, at r/ChangeMyView the rules meticulously govern not just what can and cannot be posted but also how. They cover the text, the attitude, and the manner and effort of participation expected from users who submit issues to the community. The rules are accompanied by “indicators of violations,” which give more insight into how the rules are interpreted and applied. There are also rules for commenters, establishing, for example, that top-level comments (i.e., direct responses to the OP) “must challenge or question at least one aspect of the submitted view,” whereas comments within a thread may express agreement with the OP.

Second, the subreddit leverages platform design to promote the community’s goals. The rules set out criteria for when to award and when not to award deltas, which any user is able to do. Deltas are “a token of appreciation towards a user who helped tweak or reshape your opinion” and are displayed as community badges within r/ChangeMyView. Like mainstream social media, then, the subreddit makes use of gamification strategies; unlike them, however, it does not optimize for user engagement, and instead leverages platform affordances to “celebrat[e] view changes, [which] is at the core of Change My View.”

Third, policy enforcement. Moderators are active and adopt a range of approaches to steer the subreddit. An extensive set of “Moderation standards and practices” addresses “procedures for removing posts/comments, how bans are decided and implemented, how the six (6) month statute of limitations is applied for offenses, and how our appeal process works.” Policy enforcement is therefore also tailored to support the community’s goals, including by providing explanations for post removals, adapting automation tools, and employing and modifying design features, such as flairs and the delta system.

The extent to which r/ChangeMyView actually vindicates its name is debated. Researchers who conducted interviews with 15 participants reported that users “typically did not change their view completely,” even though they saw the community as useful. More importantly for affective polarization concerns, they found participants thought “posting on CMV helped them develop empathy towards users they earlier disagreed with.”

Another example of anonymity being put to use to achieve what real names could not is DiscussIt, a “mobile chat app [developed] to conduct a field experiment testing the impact of anonymous cross-party conversations on controversial topics.” After recruiting 1,200 Democrats and Republicans, DiscussIt matched each of them with a participant from the opposing political party. Participants were issued an androgynous-sounding pseudonym to join a chat with question prompts asking for their views on either immigration or gun control, and they received notifications if they became non-responsive. Comparing surveys of participants who responded before and after the experiment, one of the authors says he is “cautiously optimistic about the power of anonymity,” as “many people expressed fewer negative attitudes to the other parties or subscribed less strongly to the stereotypes about them,” and “many others expressed more moderate views about political issues they discussed or social policies designed to address them.” The study reported changes in sentiment toward opposing party members as well as in views on the issues discussed.

Experiences such as DiscussIt and r/ChangeMyView show us at least two ways that anonymity can be instrumental in creating a more vibrant digital public sphere. One is by attenuating affective polarization, as noted. This is in line with research suggesting that positive contact with the out-group can reduce affective polarization. There is evidence that partisans have exaggerated perceptions of members of the opposing party, so that engaging in conversation with an actual, average Republican or Democrat can dispel stereotypes and reduce negative attitudes. Anonymous social media can create opportunities for cross-party interaction that do not take place on the battlegrounds of a “national political war,” and where reputation is not gained by scoring points against the opposing party by any available means. We have seen how affective polarization is connected with misinformation, so alleviating one could help with the other.

A further way anonymity can play a part in enabling more truth-based discourse is by making possible conversations that are grounded in facts and guided by what is generally expected of public deliberation—even if they do not move the needle on polarization. Anonymity can lower the stakes of engaging in what could otherwise be seen as heretical partisan equivocation that would be met with disciplining. It can thus facilitate hard conversations that at least some users hunger for.

These examples might be too hopeful if thought of as immediate prototypes for replacing Facebook or Twitter. It is true that the interest and the time to invest in civic-minded exercises such as DiscussIt or r/ChangeMyView should not be presumed to be universal. Still, they are valuable in creating an alternative environment where people can have sincere conversations about topics that have been battlegrounds of the political divide. More importantly, both r/ChangeMyView and DiscussIt suggest an exciting path for re-engineering platforms, tweaking the levers to steer the digital environment toward democracy-empowering settings and helping to allay the illnesses afflicting politics instead of exacerbating them. Importantly, both highlight “how the design of our platforms shapes the types of identities we create and the social status we seek.” Tinkering with anonymity and identification is thus an important component of potential experimentation with other social media affordances and should be a focus of attention when considering the other kinds of internet ecosystems that scholars such as Ethan Zuckerman have imagined.

Conclusion

The plurality of identification and the plurality of anonymity emerging from the study of networked communities hold cross-cutting insights. This paper begins the work of setting out what they entail. Identity is not fixed but perennially shapes and is shaped by group identity and norms, as Robert Post explicates. Real names as used in a networked society do not work the way real names work offline. Anonymity has been wrongly conceived as a marker of the absence of communal identity and of community norms. In effect, it is an ingredient in establishing communities, mediated by other affordances, including design, norms, and practices. We should not understand anonymity as operating according to a uniform function. Identification is likewise mediated. This point has overlooked implications for the condition of the digital public sphere, revealing identification as a driver of political polarization and misinformation within a complicated, interconnected machinery. Rather than being a piece of that machinery, anonymity may instead afford a disentangling device to respond to the pathologies of political discourse that have concerned commentators.

 

© 2023, Artur Pericles Lima Monteiro.

 

Cite as: Artur Pericles Lima Monteiro, Anonymity, Identity, and Lies, 23-13 Knight First Amend. Inst. (Dec. 5, 2023), https://knightcolumbia.org/content/anonymity-identity-and-lies [https://perma.cc/E5XX-SMTH].