When you invent the ship, you also invent the shipwreck; when you invent the plane, you also invent the plane crash; and when you invent electricity, you invent electrocution . . . Every technology carries its own negativity, which is invented at the same time as technical progress.

— Paul Virilio, Politics of the Very Worst (1999)

Introduction

As I write this in May 2024, there is, yet again, anxiety about the power that an emerging technology, namely generative artificial intelligence (or GenAI), has to shape journalism, news, and the viability of a robust, economically healthy, and free press. While earlier panics focused on the influence of the internet and digital data on journalistic practice, social media’s control of attention and advertising, and the power of complex and proprietary algorithms to create “echo chambers” and “filter bubbles” that undermine journalism’s role in democracy, the contemporary moment features a potentially more existentially destabilizing force.

Depending on which technologist, activist, scholar, or pundit you read, the prediction of how GenAI is likely to influence journalism varies. It may free journalists from mundane tasks and empower them to pursue more complex reporting, or it may further shrink an already atrophying labor force struggling for stable work. It may infuse internet search results with high-quality news that has previously been scraped or licensed from publishers, or it may sequester news into opaque datasets and probabilistic language models that deliver neither reliably true results nor traffic to news advertisers. GenAI’s propensity to haphazardly extract, combine, and hallucinate data may even further erode online public spheres, or it may privilege publishers as producers of high-quality news created in the public interest over the generic “big data” that technology companies greedily need for general computational models and advertising markets.

In many ways, GenAI is not a new threat to skilled workers like journalists. In his 1978 study of how workers responded to automatically controlled machine tools, David Noble identified three ways that technological innovations challenge workforces, all of which apply to journalists in this moment of GenAI. First, workforces must “transcend the ideology of technological determinism by demystifying the technology itself.” Journalists often critique and attempt to undercut the fantastical claims that some technologists—and some fellow journalists—make about GenAI’s power to produce believable, useful, and democratically forceful language. Second, the workforce must “regain its confidence by preparing itself” and articulating “its own choices in its own . . . interest.” Journalists might differentiate what they do—reporting, writing, and editing with sophistication and in the public interest—from what GenAI, which is inscrutable and incapable of sentience, professionalism, or reflection, does, namely mimicking the statistical patterns of datasets and models that produce media. Finally, workers must “struggle, on the shop floor and politically, to get into the position to make these choices and thereby steer the course of ‘progress.’” That is, journalists must not only debunk the fantastical claims of GenAI proponents and differentiate their work from GenAI’s outputs but must also be able to act in their own interests and know what those interests mean in different technological contexts. They must be able to practice their craft in ways that reflect what they think their profession and public obligations demand. This capacity to act with knowledge and intent requires, as a preliminary condition, knowing well enough how a technology works to appreciate how it matters to professional practices. 
Only then can professionals appreciate what they should do with a technology and make informed and intentional choices about resisting or reshaping it and/or their professional standards.

This capacity to act with knowledge and purpose is a kind of press freedom. An autonomous press knows its systems well enough to reshape them in its own professional image, or to refuse them, and also understands its public obligation. A press that is beholden to a technology’s fantastical claims or moral panics, or that cannot separate its own vision from that of the technologists, will be unable to structure and change itself, a hallmark of any profession’s autonomy. By “autonomy,” then, I mean the profession’s capacity to make its own decisions about its practices—for example, what to cover, how to write, who to interview, when to publish, or which ledes to foreground. An autonomous press can see its own actions, reflect on them, and change them when it chooses to.

To explore how well the press can change itself, I focus here on how the press responds to mistakes by examining moments when the press has investigated, corrected, suffered, or tolerated some failed or broken-down aspect of either its own journalistic work or the larger information environment it operates within. My contention is that a strong, autonomous, self-empowered, and self-corrective press has the capacity to report and publish on mistakes—its own and others’—and that GenAI poses a particular threat to this capacity because so many aspects of synthetic media and its use in journalistic work are currently unknown.

Taking seriously the idea that an autonomous press self-monitors and self-corrects, we might question whether the institution has recursive press freedom—a capacity to create for itself the structures that enable journalism and serve the public interest. A recursively free press would confidently trace and shape the social, cultural, political, and technological forces that structure journalism and create news without needing to rely on outsiders for knowledge, permission, or capacity. Such recursive freedom would be especially critical in the context of the synthetic press, which refers to the journalistic practices, forms, and institutional conditions that are intertwined with GenAI and often unknowable and unchangeable within proprietary datasets, machine learning models, and private corporations. Recursive, synthetic press freedom would be the press’s own capacity to shape GenAI according to its own journalistic mission and its own sense of public service.

This chapter investigates what this recursive, synthetic press freedom could look like in four parts. First, it briefly frames the current GenAI challenge to press freedom in terms of long-standing scholarship on journalism’s relationship to technological change. Second, it presents a case study to illustrate this issue, focusing on how the free press pursues and self-corrects its own mission and how it deals with “error” in a variety of ways. Third, the chapter uses instances of recent GenAI errors to demonstrate how this new technology hampers the press’s ability to know and change itself, thus damaging its autonomy. And, finally, inspired by Noble’s study of how workers might regain autonomy in the face of outside technological change, the chapter sketches a model of “recursive press freedom” that all defenders of journalistic autonomy might adopt to understand and reshape this GenAI moment.

I hope that different scholarly audiences will find this chapter useful in multiple ways. Journalism scholars, for example, are rapidly endeavoring to understand the relevance and usefulness of GenAI to news production and its impact on normative ideals of the press. Examining instances when GenAI has failed gives us a way to think about what a successful press is presumed to do and what good or proper uses of GenAI in journalism might be. When scholars or practitioners foreground journalistic errors and call for remedies, they reveal their capacity to monitor, correct, and envision a better press. The press’s power to self-correct—to define, prioritize, and fix its own errors—is a key aspect of press freedom, and one that is increasingly critical as many aspects of reporting, editing, and publishing become intertwined with technological infrastructures existing beyond the control of newsrooms.

This is an important moment to develop a precise understanding of how technologies influence journalism because there are ramifications for understanding the scope and meaning of press rights. As Frederick Schauer cautioned, failing to appreciate the “sociological messiness of institutional demarcation”—for example, how blogging, investigative journalism, pornography, sports broadcasting, or recommendation algorithms all pose different speech challenges—may cause a First Amendment “institutional ‘compression’” that leaves everyone with fewer rights than they might have had if journalists, technologists, and courts alike had better understood the significance of drawing the press’s boundaries and of defining “the press” narrowly as what journalists do or more broadly as what technologies, technologists, and online audiences also do. Focusing on journalism’s disposition toward errors and mistakes is one way of achieving empirical precision and of understanding how different institutions and forces constituting the press collide. This is a challenge ideally suited for scholars of science and technology studies, critical internet studies, and platform studies who have long traced how technological practices, actors, values, and politics intersect to create newsrooms, professional journalism, audience cultures, and news economics. A focus on journalistic errors and GenAI may reveal how different social and technological actors define mistakes, debate causes, classify harms, anticipate risks, assign responsibility, and apply remedies. If press freedom emerges not only from intersections between journalism and jurisprudence but also among messy collisions—of people, professions, norms, institutions, data, and computation—then analyzing mistaken meetings and failed alignments should illustrate what press freedom could be if the power and politics of digital media were otherwise.

Press Freedom as an Institutional, Infrastructural Achievement

To better understand what press freedom means in this era of GenAI, it is essential to first understand how “the press” is always a product of an era’s institutional and technological forces and how the concept of “press freedom” always depends on ideals relating to its public power and obligations.

Though it is beyond the scope of this chapter to review the large, long-standing, and interdisciplinary research tracing the various forces that have constituted “the press,” it is worth highlighting how three dimensions of the press—news forms, journalistic practices, and institutional relationships—have emerged from the collisions of social and technological forces. This expansive and flexible view of the press may help stave off the compression of rights that Schauer cautioned against.

First, news forms—text, image, sound, video, virtual reality, games, and participatory forums—have always been intertwined with advances in media representations and publishing technologies. Advances in printing, layout, and graphics transmission enabled newspapers to develop brand-specific visual designs, direct readers’ attention, sustain audience engagement across a newspaper, and use high-quality imagery to signal both journalists’ authority to report and the authenticity of their accounts. Subsequent advances in synchronized audio and visual content, chyron overlays, animated graphics, and data visualizations allowed news organizations to create immersive, layered news experiences that could display and interweave different information streams simultaneously. Developments in transportation infrastructure, satellite networks, and lightweight cellular technologies enabled “live news” to emerge as a unique genre. And the popularization of communication technologies amongst readers—encompassing everything from cheap and reliable postal service and home telephones to online bulletin boards and social media platforms—spurred the creation of letters to the editor, call-in shows, and comment threads that blurred the distinction between the journalistic voice and public participation.

Every new technological advancement has prompted the question of what it means for news stories to be authoritative, authentic, timely, persuasive, and professionally produced. Put differently, to the extent that press freedom depends upon and enables journalists’ ability to create stories that are defensible as forms of information—for example, that are factual, free of mistakes, and not manipulative—that freedom is always inseparable from an era’s communication technologies and media representations.

Second, journalistic practices—fact-finding, interviewing, witnessing, writing, editing, and publishing—have always reflected how an era’s communication technologies influence epistemological standards, modes of working, and professional norms. Publishers in the colonial era invested in boat-building techniques to produce small ships that could quickly reach ships arriving from England with news, mail, and commodity prices, ensuring that their newspapers had the timeliest information and feeding early anxieties about being scooped by competitors. Advances in literacy, printing, distribution, industrial advertising, and market segmentation all fed the emergence of journalistic objectivity, not as an ideal of impartiality and public service but as an economic practice that let publishers avoid offending any one set of readers and, most importantly, advertisers.

As technology has continued to advance, so too have modern debates about these new journalistic practices. Computer-assisted reporting and data-focused journalism have allowed investigative reporters to portray themselves as disinterested scientists reliably and defensibly discovering patterns of injustice consistent with the moral standards of their times. And now that online publishers can track how well their stories are performing in search engine results, digital advertising markets, and on social media platforms, newsrooms must grapple with how to do journalism that is aware of, but not beholden to, internet traffic patterns. Similarly, journalists are wrestling with whether social media platforms like X are places to seek out stories, find sources, speculate, publish independently, develop personal followings, or some combination thereof. Advances in drone technologies, meanwhile, have opened new events and regions for journalists to observe, but they have also invited skepticism about the value of such high-altitude witnessing versus grounded reporting. Encrypted tools like Signal, PGP keys, and the Tor network allow journalists to connect securely with potential sources and whistleblowers, and while the process is still cumbersome, increasing efficiencies in Freedom of Information Act (FOIA) requests give journalists more access to public information more quickly, enabling new forms of public oversight reporting.

Fast boats, mail service, computer databases, search engine analytics, social media platforms, drones, and encryption are just a few of the myriad technological advancements that have influenced journalistic practice, and it would be a mistake to see them as separate from how journalists make judgments about newsworthiness, facticity, audience engagement, public service, or source ethics. If press freedom is, in part, a journalist’s ability to decide the workflows, norms, and practices that they believe align with their professional judgment, then it is impossible to separate these decisions from the communication systems that suggest actions, encode standards, enable practices, and generally structure information work. Is it a mistake or an infringement of press freedom if a journalist subconsciously trusts a source with a PGP key, a Twitter user with a blue checkmark, or an organization highly ranked by a search engine? These seemingly trivial examples point to a messy and implicitly structured space of journalistic practice that is inseparable from communication technologies.

Finally, institutional relationships—the press’s connections to governments, markets, industries, and audiences—have always influenced how the press has understood its mission and scoped its autonomy. Defining institutions as “social patterns of behavior identifiable across the organizations that are generally seen within a society to preside over a particular social sphere,” Cook showed how “the press” is actually a set of relationships among not only news organizations but also government officials, public relations professionals, civil society actors, political parties, and other communicators interested in structuring communicative self-governance. These relationships, according to Cook, need and sustain the institutional practices of mimicry, isomorphism, and labor division to distribute the work and power of the press across largely invisible backstage processes. Instead of seeing the press as an autonomous institution living within news organizations, professional bodies, and public ideals, we might more correctly see it as a pattern of distributed, intertwined communication practices that seem coherent to the extent that they resemble what a given era socially, culturally, economically, and technologically expects “the press” to be. This coherence is stabilized when, for example:

  • a government spokesperson frames events and gives quotes in ways that resonate with a journalist’s expectations and a publisher’s conception of its audience;
  • public relations representatives position press releases and product launches to fit not only with their clients’ goals but also with journalists’ understanding of industry novelty and technological innovation;
  • news organizations perennially realign their judgments of newsworthiness and public service with social media platforms’ own assessments of audience engagement and advertiser value (as seen in publishers’ “pivot to video” to meet Facebook’s expectations and contemporary investment in short videos that perform well on TikTok and YouTube);
  • fact-checking organizations structure their rhythms and standards of verification to align with the goals of the social media platforms that spread disinformation, creating a symbiotic service arrangement in which fact-checkers are essentially platform partners; and
  • publishers like the Associated Press and NewsCorp license their content to OpenAI, ensuring that future ChatGPT results will somehow include or account for the journalism produced by those companies.

These examples of the institutional perspective on press freedom may seem like reasonable arrangements that simply ensure journalism’s survival and relevance in rapidly changing media systems. Without such alignments, patterns, and partnerships, the press might simply disappear. But they also suggest that a common-sense understanding of “the press” as comprising what journalists do and what publishers distribute is simply incorrect. The press is actually a set of largely invisible, strategic relationships with their own logics of alignment, their own standards of quality, and their own senses of what it means to tell factual stories in the public interest.

This image of “the press” as a powerful and messy interplay of social and technological arrangements has gained traction in recent years. Early digital newsrooms were shaped by mixes of journalistic and technological mindsets and practices. Digital algorithms quickly influenced reporting and publishing practices. As social media rose in prominence, a hybrid media system of seemingly disparate communicators emerged, which journalists both participated in and reported on. Histories of the press show how its last 50 years have been inseparable from the datasets and computational epistemologies that journalists use to know their world.

My own work has examined how the press’s digital infrastructures enable the public to hear the perspectives of others and receive information that they would likely not seek out for themselves, and Kate Crawford and I have suggested that the press is a “liminal” achievement somewhere between journalistic judgment and technological design. Emily Bell and Taylor Owen have identified a “platform press” that has emerged from publishers’ seemingly endless responses to new technologies and policies. Matt Carlson and Seth Lewis have traced the contemporary press through the complex boundary work of diverse laborers, which Adrienne Russell has shown also includes digitally savvy activists, environmentalists, and solution-oriented journalists. Stephen Reese has demonstrated how the crisis of the institutional press is a problem not only of journalism and publishing but also of information economics and political culture. Newer work centered on artificial intelligence is revisiting these questions, with a focus on how computation, technologists, datasets, and experimental methods are currently structuring the press, its language, and its own understanding of its public service role.

It is impossible in this chapter to fully capture the extent of this literature, but there is ample scholarship indicating that “the press” is and always has been a messy and historically situated mix of colliding social and technological forces. The idea, then, that “press freedom” is only, or even mostly, a matter of journalism and jurisprudence is incorrect. It is more accurately understood as an ongoing achievement of multiple, intertwined actors, existing in historical-technological moments, which create separations and dependencies that are continually judged against normative ideals of what journalistic autonomy is supposed to be and what journalism’s public mission should be in any era.

If these conceptions of the press and its freedom form a plausible framework, then the challenge becomes not only how to empirically observe these collisions of forces—these separations and dependencies—but also how to judge whether they move us closer to or further from any particular ideal of press freedom.

I want to bracket the discussion of which version of “press freedom” is worth pursuing amidst this mix of forces—much excellent scholarship has analyzed the term and the significance of various definitions—and instead argue for a way of seeing press freedom, namely as the press’s capacity to self-monitor, self-correct, and learn from its mistakes. Taking inspiration from Samuelson’s call for a “freedom to tinker,” from scholarship on how “broken world thinking” offers new ways to see system maintenance and repair, and from studies of institutional learning and organizational technologies, I wish to suggest that a largely ignored aspect of press freedom is the press’s ability to learn from mistakes. While some works have encouraged journalists to experiment with GenAI, learn how it can be useful for news work, and avoid making mistakes while using it, I want to invoke a larger, more generative sense of experimentation to ask how the press might learn about itself by critically engaging with how GenAI mistakes happen, why they matter, and what they might mean for press freedom in our contemporary era of synthetic media. If an autonomous press can self-monitor and self-correct—knowing its own forces and reshaping them when they fail—then the autonomy of the synthetic press (the journalistic practices, forms, and institutional conditions intertwined with GenAI) depends upon how well it knows, learns from, and changes in response to its mistaken engagements with GenAI.

The remainder of the chapter argues that the press has always defined itself through relationships with “mistakes” (broadly construed), that the contemporary synthetic press is struggling to understand itself through GenAI failure, and that a press that can learn from its mistakes has a kind of “recursive press freedom” that will be essential for defining its own normative vision amidst continued technological turmoil.

The Press and Mistakes—Its Own and Others’

In many ways, the press feeds on mistakes. Mistakes can be targets for correction, triggers for outrage, focuses of investigations, opportunities to teach audiences how knowledge develops, chances to develop and defend the profession, indicators of mistaken institutional relationships, and even matters to strategically ignore. While a complete review of how the press deals with mistakes, errors, and failures is beyond the scope of this chapter, as context for understanding GenAI journalism mistakes, I offer five ways to understand the press’s relationship to error: as (1) misinformation to correct, (2) transgressions spurring outrage, (3) professional malpractice and self-regulation, (4) misconfigured publishing systems, and (5) tolerated ambiguities.

(A note on terminology: I use the word “mistake” somewhat loosely and interchangeably with “error” or “failure” or “breakdown.” This is not because I see these terms as synonymous—indeed, an extensive body of interdisciplinary literature differentiates these words more precisely—but because I mean for this chapter to broadly assert that the press’s freedom requires a capacity to self-correct and to structure and change itself in relation to things that it sees as somehow wrong, failing, broken, or misaligned with its vision of ideal professional journalism or healthy, democratic, communicative self-governance.)

A. Mistakes as misinformation to correct

Part of what professional journalism does is create a framework of facts for understanding and governing social life, and it also fights mis- and disinformation that harms this framework. As Tucher argued in her history of “fake news” in American journalism, one of the profession’s early motivations was to ferret out and correct misinformation. Sometimes those errors came from “actors who [were] distorting, manipulating, misunderstanding, or faking the prevailing journalistic conventions in order to present ‘truths’ of their own for motives or reasons of their own” and sometimes they resulted from a “clash within journalism between practitioners of the real seeking to defend their profession and perpetrators of the fake working to exploit it.” That is, sometimes fakery, errors, falsity, and failures come from outside journalism, and sometimes they exist as struggles among journalists with different visions of their roles. Most recently, this friction has played out in fact-checking communities populated with information producers, professional journalists, information activists, and platform content moderators. These actors attempt to define, find, correct, and prevent the circulation of information—some of it resembling news—seen as harmful to healthy democratic communication. As Graves chronicled, the field of contemporary fact-checking works by identifying the forces that produce mistaken or erroneous information (within and outside of news organizations), intervening to correct particularly harmful or exemplary misinformation, and trying to structurally change media systems in ways that might curb the future production, circulation, and power of misinformation. Intentionally produced or not, misinformation is evidence of a broken media system that journalists must try to correct and fix.

B. Mistakes as transgressions spurring outrage

Mistakes also play a role in shaping the press as a key institution of public investigation, monitoring, and oversight. This model of the press conceives of its role as going beyond simply providing information and uncovering facts; rather, the press does so selectively and with a keen sense that it is meant to be a bulwark against social injustice and transgressions by those in power. In some cases, investigative reporters, who see themselves as “custodians of conscience,” uncover these transgressions by researching and building stories that reveal how a powerful figure or system has failed so egregiously that it defies common sense, with audiences expected to share the journalist’s outrage. In other instances, whistleblowers, leakers, and self-appointed watchdog sources bring transgressions to the attention of journalists, teaching reporters how to understand their worlds and recognize when a powerful industry, technology, or organization has grievously failed and why that failure is publicly significant. While the motivations and interests of whistleblowers and journalists alike are complex and often go unstated, many of the most notable examples of transgression-driven reporting—such as coverage of governmental failures, state surveillance, and tech industry corruption—emerge from sources and journalists coming together around a shared sense that something has gone wrong and that investigative journalism may have a role to play in fixing it. The press’s ability to decide which transgressions it thinks are worthy of investigation and publication is part of its freedom—it has autonomy from whistleblowers’ outrages and autonomy to choose mistakes, failures, and breakdowns that it sees as mattering to the public.

C. Mistakes as malpractice and self-regulation

Journalists care not only about mistakes that appear as misinformation, drive investigative reporting, or motivate sources’ outrage. They are also concerned with errors that they themselves make in the course of their work. Sometimes these mistakes become infamous cases of misconduct. Notable examples include the instances of professional malpractice by Janet Cooke, Stephen Glass, Jayson Blair, Jack Kelley, Judith Miller, and Dan Rather, which all demonstrated how reporting judgment is often precarious, idiosyncratic, and susceptible to both individual deviation and organizational weaknesses. Drawing on engineers’ acceptance that their complex systems will fail with a regularity that enables risks to be anticipated and expected—what Perrow calls “normal accidents”—media scholars suggest that journalistic practices and newsroom cultures are, in effect, designed to produce mistakes. Newsroom leaders often inadvertently and systematically create an environment conducive to errors by fostering weak internal communication, tolerating poor reporting habits, trusting journalists with bad track records, enabling substandard fact-checking, resisting regular performance reviews, failing to scrutinize source expertise, and “isolat[ing] deviance as individual misconduct in order to stave off systematic criticism.” By trusting journalists with too much independence and failing to implement strong management practices, newsrooms harm the public’s freedom not to be deceived or misinformed.

Journalists sometimes call out their own mistakes openly, and many news organizations have explicit policies about when and why they issue corrections, clarifications, and, in rare cases, retractions. A long line of research, however, indicates that these policies are applied unevenly, with only a small fraction of errors ever receiving official corrections. Many publications are rife with mistakes concerning objectively verifiable facts and offer interpretations that experts agree are mistaken. Most errors are never fixed and, if they are, it is often because a reporter has some personal connection to the topic or source. Fixes often fail to repair the original misunderstanding, which echoes through subsequent stories years after publication. Publications that fail to self-monitor and self-correct stories miss the opportunity to build trust with readers through apologies, to teach them about the challenges of journalism, and to educate readers on the positive aspects of learning from mistakes and building knowledge by grappling with error. Indeed, when news organizations do acknowledge their mistakes, they often strategically contain them, their causes, and their harms. They explain away mistakes as rare and unavoidable missteps that are of little significance to journalism’s entrenched habits and structures, or they address them with vague promises to do better and study internal processes. Finally, newer work confirms that online publishers are constantly updating their stories, but the reasons for these updates are often opaque. Sometimes events change and new reporting emerges, but sometimes earlier errors are corrected without comment or explanation. It may be that there is a subtle, error-correcting process at work among digital publishers, but the patterns and motivations of such practices are unclear.

Across these scenes of journalistic mistakes—infamous malpractice, newsroom mismanagement, and rituals of correction—press freedom is about journalists strategically narrating, containing, and addressing their mistakes in ways that preserve their power to decide when and how news stories change. These choices leave largely intact the idea that published news is factual and that errors are rare and benign.

D. Mistakes as misconfigured publishing systems

Another class of journalistic mistake involves publishers’ relationships to technology company products and services. Though the details point to a variety of forces at work, examples abound of moments when platforms interfered with or mismanaged news stories that publishers intended to share through social media.

For example, Facebook removed from its platform a story that The Vindicator (a small Texas newspaper) had published as part of its reprinting of the U.S. Declaration of Independence because the story included the Declaration’s phrase “merciless Indian Savages,” which Facebook judged to violate its prohibition against hate speech. Facebook similarly clashed with a publisher’s editorial judgment when it attempted to censor a post by Aftenposten (Norway’s largest newspaper) that included Nick Ut’s Pulitzer Prize-winning photo “Napalm Girl” because the platform’s policies prohibit nudity. Facebook reversed both decisions. In another instance, the company had to backtrack and apologize after it added to a DailyMail.com post “an AI-generated label of ‘primates’ to a news video . . . that featured black men” being harassed by a white man. Facebook also had to apologize after deleting from its site all stories by the Kansas Reflector; here, the company had labeled a Reflector series critiquing the climate policies of Facebook’s parent company Meta as a “cybersecurity threat.” The chain of events leading to this censorship was similar to the platform’s deletion of stories by The Winchester Star mentioning sexual assault, which Facebook flagged as violating its community standards. Scientists criticized the platform for labeling a scientific article by the British Medical Journal as “partly false” and “needing context,” after one of Facebook’s fact-checking partners wrongly concluded that the peer-reviewed scientific publication “could mislead people.”

These mistakes all occurred in slightly different contexts. Some involved platforms failing to understand the editorial context and journalistic value of words and images, others involved the algorithmic additions of metadata that overrode journalistic framings, and still others involved human judgments by platforms that failed to acknowledge and defer to subject-matter experts. All these errors indicate how publishing systems—content moderation guidelines, algorithmic labeling, fact-checking partnerships—can interfere not only with journalists’ freedom to publish on their individual websites (channels they control) but also with their practical freedom to reach the large, algorithmically curated, and economically lucrative audiences that convene on social media platforms. It is noteworthy that three of the victims of these errors—the Vindicator, the Reflector, and the Star—were small publications with limited readerships and limited organizational capacity to anticipate or contest such errors, suggesting that the harms of platform mistakes may be unevenly distributed among publishers and have the most impact on small news organizations.

E. Mistakes as tolerated ambiguities

Finally, though far less common than some of the more observable examples of journalistic mistakes, communication and journalism scholars identify a more subtle type of error that complicates the assumption that journalists are committed to creating a clear factual record for the public. Three examples illustrate how the press creates and tolerates subtle and ambiguous forms of language that strict standards of news writing might call mistaken but that preserve journalism’s power to create the stories it thinks should be told—a kind of press freedom.

The first example falls under the umbrella of what Basil Bernstein referred to as “restricted codes,” or language that is shared and readily understood by a particular demographic group, social class, or community of interpretation. While outsiders may find such language confusing or simply fail to appreciate its depth, insiders “get” it and signal their identity and social position by correctly reading its subtleties. News writing is rife with such language. For example, throughout the AIDS epidemic, The New York Times and many other newspapers used the phrase “longtime companion” to euphemistically refer to a deceased person’s gay partner, lover, or boyfriend. The phrase suggested a relationship that some readers would fail to notice but that others would understand and identify with. Journalists could thus steganographically signal a meaning of “longtime companion” that might have challenged the era’s mores, tolerating stories with indirect language when strict adherence to principles of clear news writing would have required clarity. Though such language is not really “mistaken,” it suggests that publishers sometimes take license to use ambiguity and opacity strategically when doing so serves their aims.

The second example of tolerated ambiguity is irony. Journalists can use ironic language to “direct readers, viewers, or listeners to a preferred or intended understanding” of a story that may literally contradict a writer’s words and that may subtly and winkingly invite the audience to share the journalist’s own interpretation in a manner that would otherwise fail traditional tests of journalistic objectivity. For example, when veteran reporter Lou Cannon faced the conundrum of covering President Ronald Reagan’s factually incorrect 1985 claim that South Africa had “eliminated” segregation (it had not), instead of juxtaposing Reagan’s statement against experts who could debunk or contextualize it, Cannon used irony, highly selective quotations, and interpretive verbs like “contended” to simultaneously report on and discredit the president’s words.

Journalists also employ “stance verbs” that “reflect a speaker’s attitude or stance toward the content of his or her speech” to signal their own interpretation of statements or events, and they append stance adverbs like “obviously,” “clearly,” “apparently,” and “presumably” when the reported facts do not require them. While such words can seem like innocuous narrative devices when used in reporting on official speech (for example, “the President clearly stated that . . .”), they subtly embed a journalist’s own judgment. The use of interpretive words remains common. For example, a Washington Post story about the families of Sandy Hook Elementary School victims offering Alex Jones a settlement described the offer as “only about 6 percent of what he owes” (emphasis added). When journalists use irony and stance words, they are not making “mistakes” of the kind that might be caught and corrected by editors and fact-checkers, but they are nonetheless straying from publishing strictly factual accounts. They are exercising autonomy over language and enjoying a freedom to embed subtle interpretations and leave ambiguities unresolved.

The third example concerns the profession’s power. In his exhaustive historical study of misreported stories in American journalism, W. Joseph Campbell suggested that journalists often let mistakes stand when they align with how journalists would like the public to perceive their profession’s power. Campbell illustrated this point with Nick Ut’s “Napalm Girl” photo. When discussing the power of this photo, journalists persist in using phrases like “American napalm” and “American plane,” thus perpetuating the trope that the image played a major role in ending the Vietnam War. Per Campbell, however, while the napalm was indeed American-made, it was not dropped by American pilots; similarly, the plane that carried out the attack was American-made but not owned or operated by the U.S. military. While such details and distinctions may seem trivial in the context of the war’s overall brutality, that seeming triviality is exactly what Campbell sought to highlight in arguing that some facts have been allowed to stand as ambiguous when they served what he called “misleading ‘consensus narratives’ . . . anecdotes and legends that are found at the heart of a profession’s culture and are readily recalled.” These are the facts and stories that journalists want to be true about their profession and its power to shape the world. Adopting Shelby Steele’s phrase, Campbell asserted that such facts are “poetic truths” serving a “larger essential truth” that journalists sometimes prioritize over the facticity that their profession usually unyieldingly demands.

***

Across these scenes of error—misinformation, transgressions of justice, professional malpractice, platform censorship, and ambiguous language—a distinct image of the press and its freedom emerges. This image depends upon seeing the press as a site of unfettered reporting, editorial judgment, and publishing; as a set of social and technological dependencies and separations; and also as a communication system with the power to diagnose, prioritize, fix, and tolerate mistakes. This press sees some errors as grievous mistakes demanding correction, others as social ills deserving its attention, others as unfortunate but understandable missteps, and still others as circumstances that it wants to have the power to contextualize, strategically contain, or altogether ignore.

In turning now to GenAI and its relationship to press freedom, it is worth keeping in mind these complex views of the press, press freedom, and journalism’s relationship to mistakes. In doing so, we can more effectively consider what, exactly, is “wrong” with the “synthetic press” and ask which forces cause which failures and how contemporary efforts to diagnose and correct the synthetic press’s failings reveal different understandings of the press and its freedoms.

The Synthetic Press and Its Mistakes

The synthetic press, which encompasses journalistic practices, news forms, institutional relationships, economic models, and regulatory frictions prompted by GenAI datasets, large language models, and commercial products, presents a new space of mistakes, errors, and failures for thinking about press freedom.

A considerable body of recent scholarship has traced how journalists report on GenAI, how newsrooms adopt and experiment with GenAI, how media organizations establish GenAI guidelines, how media guilds bargain around GenAI, and how GenAI poses new normative and regulatory challenges. But there has been little work so far framing GenAI as a press freedom problem. Thus, the focus of the present work is to understand how the press’s mistaken or failed uses of GenAI might repeat or break its historical patterns of contextualizing, containing, tolerating, or ignoring mistakes.

Beginning around the fall of 2022, GenAI began appearing in new, popularly available systems like ChatGPT, Midjourney, Copilot, DALL-E, Meta AI, Claude, and Sora. It was already at work in familiar tools like Google Search, Microsoft Office, and Google Docs and in countless, largely invisible, back-end integrations with existing organizational information systems. Journalists were quick to use and misuse the technology.

Indeed, examples of mistaken or failed GenAI journalism already abound. The German magazine Die Aktuelle fired its editor-in-chief after the magazine published a synthetically generated “interview” with race car driver Michael Schumacher that gave audiences no indication it was fabricated. The Guardian discovered that ChatGPT was delivering plausible, but entirely fictional, news stories that the technology claimed the news organization had published. USA Today apologized and corrected countless sports stories after discovering that many of them were synthetically generated and contained errors. CNET paused using GenAI after it published a series of stories offering readers financial guidance that contained myriad mistaken formulas and calculations. Gizmodo ran afoul of Star Wars fan communities after it used GenAI to publish several stories with errors about the franchise’s characters and plots. Microsoft was harshly criticized after its GenAI news platform placed a poll next to a Guardian story inviting readers to speculate whether the subject of the story died by murder, accident, or suicide. Sports Illustrated backtracked on its use of GenAI after an investigation found that it had published an undisclosed number of entirely fabricated stories under the bylines of synthetically generated journalist personas; the magazine blamed a third-party content provider for the mistake. Gannett paused its use of GenAI after it published several synthetically generated articles that contained placeholders in place of story details (for example, running the lede “The Worthington Christian [[WINNING_TEAM_MASCOT]] defeated the Westerville North [[LOSING_TEAM_MASCOT]] 2-1 in an Ohio boys soccer game on Saturday.”). And MSN.com recently featured a synthetically generated story falsely accusing an Irish broadcaster of sexual misconduct, produced by a third-party chatbot that paraphrased real news stories.

While some individual errors are amusing and idiosyncratic growing pains of technological experimentation, together they suggest that the synthetic press’s inclusion of GenAI yields new types of journalistic mistakes and thus new ways to think about press freedom through GenAI mistakes. Note, for example, how these mistakes span questions of editorial judgment, failed third-party relationships, factual errors, journalistic taste or decorum, specialized knowledge, reporter identity, and system design. These instances of mistaken GenAI journalism all deserve deeper theorization and case studies, but taken together, they suggest a framework for thinking about synthetic press freedom. In the following, I briefly sketch three sites of GenAI journalistic mistakes, suggesting each site as a place to consider how press autonomy might appear in synthetic media systems.

Journalistic GenAI Mistakes as Synthetic Press Freedom Dynamics

A. Uncritically using tools and infrastructure

GenAI uses datasets and computational language models to produce media that its designers judge statistically acceptable. It does not “know” anything about the data it uses, the meaning of its patterns, or the nuances of the language it produces; it is little more than a “stochastic parrot.” As scholarship on journalistic errors shows, reporters and editors have considerable freedom to choose sources, frame ledes, analyze data, phrase and juxtapose claims, and signal interpretations, with few errors or mistakes ever being corrected. This individual autonomy is a hallmark of the profession and a key element of seeing press freedom in terms of intrepid journalists pursuing truth at all costs. Style guides and newsroom policies can go a long way toward setting standards and expectations for how, if at all, journalists should use GenAI (for example, catching factual errors and noncompliant language). However, more subtle forms of error can appear if journalists too quickly and uncritically accept as “good enough” the default results of synthetically generated source suggestions, story ledes, document analyses, research summaries, interview translations, transcripts, and myriad other seemingly innocuous aspects of journalistic work. In many news organizations, the pressure to produce many passable stories on short timeframes may foster uncritical reliance on tools and habits that are unlikely ever to rise to the level of an error demanding correction but that nonetheless erode the quality of the work.

News organizations with the money, data, engineering expertise, and editorial oversight to create their own GenAI tools and infrastructure (like The New York Times and The Washington Post) will arguably have more freedom to avoid the risks and errors that come with uncritically using publicly available synthetic media technologies. Knowing what datasets a model includes, how it was trained, which tests it passed before release, and who is responsible for fixing errors is arguably a much stronger position from which to critically judge GenAI outputs. To the extent that the privilege of avoiding or controlling synthetic media mistakes is unevenly distributed—few small newsrooms can afford homegrown AI systems—the synthetic press is a field with uneven freedom.

Little harm is likely to result from an individual journalist on a single beat or story uncritically accepting GenAI outputs as “good enough,” but if many journalists too quickly accept the statistically generated outputs of similar tools across workflows and newsrooms, GenAI begins to look like a subtle, distributed, largely invisible, and hard-to-track curb on the autonomy of individual journalists. If the ideal of professional journalistic autonomy is that reporters and editors have the power to use their expertise to decide for themselves whether a story is publishable, then the gradual incursion of GenAI into those judgments—to suggest phrasings, speed up work, recommend sources, reorder text, summarize research, analyze interview transcripts, and more—has the real potential to displace the feeling or fact of professional independence. To be sure, tools have always structured human practices—journalists were not free from technological influences before GenAI—but even if journalists seek out and approve of GenAI in their work, there is still an impact on work that they might have done without GenAI’s help. And to the extent that journalists mistake commonly used GenAI tools as neutral, objective, innocuous, or even sentient, they run the risk of implicitly surrendering some of their own freedom, and obligation, to scrutinize sources, research, facts, framings, language, and interpretations in the public interest.

B. Echoing past errors

Although it is difficult to quantify, research suggests that many published stories contain errors that have never been corrected. In a study of 2,700 stories containing factual errors or serious inaccuracies, published in 22 small and midmarket newspapers, Scott Maier found that a correction was issued only 11 percent of the time. In another survey of 4,800 human sources cited in news stories and interviews, 61 percent of sources said the stories contained errors. Though these sources opined that the errors seriously damaged the value of the stories, they saw such errors as inevitable, correction processes as futile, and journalists as largely resistant to discussing a story’s accuracy after publication. An internationally comparative study of news accuracy found errors in 48 percent of U.S. stories, 60 percent of Swiss stories, and 52 percent of Italian stories. A meta-review of news accuracy studies from 1936 to 2005 revealed that newspapers have always been rife with errors, with between 41 and 61 percent of all published articles over that period containing mistakes of some kind.

Though news stories are often celebrated as exactly the kind of high-quality information that GenAI datasets should include, many contain errors or inaccuracies. As makers of GenAI systems buttress their large language models with datasets of news stories, aiming to increase the veracity and reliability of synthetic outputs, they are potentially reproducing decades of errors contained in published news. Mistakes in any single news story are unlikely to be reproduced in a GenAI output because GenAI datasets include multiple sources, machine learning outputs are probabilistic, and training methods like reinforcement learning from human feedback try to reduce GenAI errors. Nevertheless, GenAI systems that include or prioritize news data may be reproducing past journalistic mistakes. As a matter of press freedom, this means that news errors that were created years ago, in part by individual journalists acting with autonomy, newsrooms with poor oversight, and publishers reluctant to make corrections, all feed a contemporary situation in which journalists and general users alike find themselves using GenAI systems informed by mistake-riddled news. To be sure, all of these actors were exercising professional autonomy and choosing to practice in ways they saw fit, but the result of that autonomy is a news record with errors—a record that is now driving the datasets and machine learning models being used to generate synthetic news. Without auditing GenAI datasets for news, extracting erroneous news data, and updating large language models, it is practically impossible for anyone using or subject to GenAI to be free from the mistakes created by a profession that has historically protected its autonomy to define, find, correct, and ignore its errors.

C. Politics of reproducing datasets

Beyond the risk of error-ridden news datasets creating ripples of mistakes and inaccuracies in GenAI systems, there is a more subtle danger, namely that GenAI journalism reproduces the politics of the datasets underpinning synthetic media systems. As historians, cultural critics, and auditors of datasets have demonstrated, machine learning datasets often contain information that was unethically collected; exclude large swaths of people; organize individuals into racist, gendered, homophobic, and xenophobic categories; and prioritize English-language and Western-centric descriptions of the world.

These dataset politics have the potential to reappear if GenAI systems uncritically incorporate corpora of news stories without evaluating them for problematic histories or patterns. Because there is no such thing as data free from the influence of politics or history—historian Lisa Gitelman has called “raw data” an oxymoron—corpora of news stories reflect the traditions, languages, epistemologies, and ethics of the people and eras that created the narratives. As histories of the AP Stylebook, a powerful authority on language in many newsrooms, show, journalistic language is constantly changing. It was once editorially acceptable to use terms like “illegal immigrant,” “transsexual,” or “addict.” The New York Times long referred to women only as “Miss” or “Mrs.” (calling “Ms.” a linguistic fad), and The Washington Post recently started capitalizing the word “Black” to “identify the many groups that make up the African diaspora in America and elsewhere.” While adherence to stylebooks can prevent the replication of problematic individual words or phrases in contemporary stories, to the extent that GenAI datasets include these older news stories, they are rife with claims, assumptions, and language politics that were once common and acceptable but are now considered mistaken and unacceptable.

The power of dataset politics also appears in the dominance of The New York Times “Annotated Corpus,” a dataset of 1.8 million documents consisting of nearly every article published by The New York Times between January 1987 and June 2007 that is organized by topic, individuals, locations, and other elements within The New York Times’ metadata system. It has quickly become the de facto standard of news language for machine learning technologists and is regularly used as a benchmark dataset for teaching AI systems how to generate high-quality story summaries, recognize news events, and standardize language. The power of The New York Times and its “Annotated Corpus” dataset to computationally standardize news and signal acceptable news language (to both journalists and outsiders alike) is nothing new. It echoes an earlier “arterial process” that media sociologists determined could explain why so many of the nation’s newsrooms tried to mimic The New York Times in terms of reporting techniques, editorial judgment, language choices, topic selection, audience relations, and publication timing. The New York Times has long held a central position in American journalism, and it continues to do so through the leverage of its powerful datasets and indexes—information and power that it is currently suing Microsoft and OpenAI to protect.

These dataset politics, which potentially include problematic collection practices, unethical categorizations, stylebook changes, and powerful indexes, mean that much of the language that informs synthetically produced journalism emerges from unexamined assumptions about who or what is newsworthy, how stories are told, and which organizations dominate. While individual journalists may feel a sense of independence as they report and write, whenever they use GenAI tools they are automatically and often unknowingly embedded in a system of datasets, categories, models, and organizations with uneven power and complex histories.

Conclusion: Recursive Press Freedom for the Synthetic Press

Through this chapter, I have developed three lines of argument. First, I suggested that “the press” is best understood as a set of intertwined social, cultural, political, and technological forces. Although often associated primarily with the work of journalists and the production of news, the press is in reality an entity shaped by the actions and influences of multiple actors.

Second, I argued that a key element of press freedom is the press’s capacity to define, uncover, explain, control, fix, or ignore mistakes. Broadly understood, mistakes are a key means by which the press knows and regulates itself, as well as a lens through which it observes and endeavors to change society. Mistakes are an important and largely understudied way of understanding the press and its power.

Finally, I have tried to argue that, today, the press is a synthetic press to the extent that GenAI, through its datasets, models, and products, structures journalistic practices, news forms, institutional relationships, economic models, and regulatory frictions. And just as past eras of the press had their own types of errors, the synthetic press has its own emerging set of mistakes that manifest themselves in how journalists engage GenAI tools, how past errors echo in GenAI infrastructures, and how dataset politics structure journalism and news.

This conception of press freedom as the capacity to know, control, make, and learn from mistakes is not a stand-alone model of press freedom but is deeply connected to more general ideas about how institutions achieve their autonomy and how the public can govern themselves through communication technologies.

As Schauer argued, institutions discover and demonstrate their independence, in part, through the capacity to create and control exceptions. When institutions reject default rules, carve out special cases, reverse precedents, and use their own judgment to evaluate the significance of their circumstances, they distinguish themselves from other institutions and demonstrate their ability to act with self-awareness, purpose, and confidence. A free press should see mistakes as exceptional opportunities to know itself better, to judge whether an error illustrates a pattern to tolerate or change, and sometimes to argue that a mistake is not its own but someone else’s failure to treat it differently. A free synthetic press, for example, would make its own GenAI infrastructure that fixes, prevents, or tolerates errors as it sees fit, and it would reject or work to change GenAI infrastructure that causes errors that GenAI may tolerate but journalism cannot. In practice, and aligned with calls for institutional exceptions to data-driven rules, synthetic press autonomy would demand freedom from unauthorized data scraping, unethical datasets, deceptive interfaces, and anachronistic language models, as well as freedom to create its own GenAI systems. A free synthetic press would create exceptions for itself from general-purpose GenAI because the mistakes that would otherwise result would be unacceptable.

As Chris Kelty contended, publics become “recursive publics” when they have the “discursive and technical” means of creating and recreating themselves. Put differently, they can talk about who they are and why they are structured as they are, and they have the expertise and material power to sustain and change themselves without relying on outsiders. A simple example of a recursive public is an online forum whose members have the communicative capacity to discuss why and how they convene and the engineering expertise to maintain or redesign the forum, all without needing to consult or rely upon external actors.

A recursively free press, then, is a press that knows and can confidently discuss the social, cultural, political, and technological relationships that structure its journalism and produce its news and that has the power to change those relationships with intention and without needing outsiders. A recursively free press, for example, would not try to emulate Mark Zuckerberg’s model of community and would not need Google’s funding. A recursive synthetic press would deeply understand GenAI’s datasets, models, and products; would understand and be able to discuss their significance to journalism and news; and would be able to reject, change, or invent a GenAI infrastructure that it desires, without relying on outsiders. Such a press would have command of its future and be able to, as Erin Carroll has argued, “conjure the language that describes the press we need and want.”

Artificial intelligence pioneer and eventual critic Joseph Weizenbaum wrote that “the computer scientist has a heavy responsibility to make the fallibility and limitations of the systems he is capable of designing brilliantly clear.” The same might be said for the GenAI journalist, who should know how and why the synthetic press fails, which failures matter, and how those failures might be fixed. Such a journalist would hold an image of journalism, debate it with others, and trace it across datasets, models, interfaces, economics, politics, ideals of the public, and imaginings of what the synthetic press could be if it were autonomous.

References

Amoore, Louise, Alexander Campolo, Benjamin Jacobsen, and Ludovico Rella. 2024. “A world model: On the political logics of generative AI.” Political Geography 113:103134. https://doi.org/10.1016/j.polgeo.2024.103134.

Ananny, Mike. 2018a. Networked press freedom: Creating infrastructures for a public right to hear. Cambridge, MA: MIT Press.

———. 2018b. “The partnership press: Lessons for platform-publisher collaborations as Facebook and news outlets team to fight misinformation.” Accessed September 3, 2018. https://www.cjr.org/tow_center_reports/partnership-press-facebook-news-outlets-team-fight-misinformation.php/.

———. 2023. “Making Mistakes: Constructing Algorithmic Errors to Understand Sociotechnical Power.” Osiris 38:223-241.

Ananny, Mike, and Kate Crawford. 2015. “A liminal press: Situating news app designers within a field of networked news production.” Digital Journalism 3 (2):192-208. https://doi.org/10.1080/21670811.2014.922322.

Ananny, Mike, and Jake Karr. “Press freedom means controlling the language of AI.” https://www.niemanlab.org/2023/09/press-freedom-means-controlling-the-language-of-ai/.

Anderson, C.W. 2013a. Rebuilding the news: Metropolitan journalism in the digital age. Philadelphia, PA: Temple University Press.

———. 2013b. “Towards a sociology of computational and algorithmic journalism.” New Media & Society 15 (7):1005-1021. https://doi.org/10.1177/1461444812465137.

———. 2018. Apostles of certainty: Data journalism and the politics of doubt. Oxford, UK: Oxford University Press.

Appadurai, Arjun, and Neta Alexander. 2019. Failure. New York, NY: Wiley.

Aronczyk, Melissa, and Maria I. Espinoza. 2022. A Strategic Nature: Public Relations and the Politics of American Environmentalism. Oxford, UK: Oxford University Press.

Assmann, Karin. 2022. “Whistleblowers and their Faith in Journalism.” Journalism Practice:1-20. https://doi.org/10.1080/17512786.2022.2161067.

Balkin, Jack M. 2018. “Free speech is a triangle.” Columbia Law Review 118 (7):2011-2056.

Barassi, Veronica, Antje Scharenberg, Marie Poux-Berthe, Rahi Patra, and Philip Di Salvo. “From ChatGPT to crime: how journalists are shaping the debate around AI errors.” European Journalism Observatory. https://en.ejo.ch/specialist-journalism/from-chatgpt-to-crime-how-journalists-are-shaping-the-debate-around-ai-errors.

Barkin, Steve M., and Mark R. Levy. 1983. “All the News That’s Fit to Correct: Corrections in the Times and the Post.” Journalism Quarterly 60 (2):218-225. https://doi.org/10.1177/107769908306000202.

Barnhurst, Kevin G., and John Nerone. 2001. The form of news: A history. New York, NY: Guilford Press.

Bartholomew, Jem, and Dhrumil Mehta. “How the media is covering ChatGPT.” Tow Center for Digital Journalism, Columbia University. https://www.cjr.org/tow_center/media-coverage-chatgpt.php.

Beaver, David, and Jason Stanley. 2023. The politics of language. Princeton, NJ: Princeton University Press.

Becker, Kim Björn, Felix M. Simon, and Christopher Crum. 2023. “Policies in Parallel? A Comparative Study of Journalistic AI Policies in 52 Global News Organisations.”

Bell, Emily, and Taylor Owen. 2017. “The platform press: How Silicon Valley reengineered journalism.” New York, NY: Tow Center for Digital Journalism, Columbia University.

Bella, Timothy. 2023. “Sandy Hook families offer Alex Jones a deal to settle $1.5 billion debt.” The Washington Post.

Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. Virtual Event, Canada: Association for Computing Machinery.

Bennett, Lance W., Lynne A. Gressett, and William Haltom. 1985. “Repairing the News: A Case Study of the News Paradigm.” Journal of Communication 35 (2):50-68. https://doi.org/10.1111/j.1460-2466.1985.tb02233.x.

Bernstein, B. 1962. “Social class, linguistic codes and grammatical elements.” Language and Speech 5 (4):221-240.

Berry, Fred C. 1967. “A Study of Accuracy in Local News Stories of Three Dailies.” Journalism Quarterly 44 (3):482-490. https://doi.org/10.1177/107769906704400309.

Bessette, Lily G., Sacha C. Hauc, Heidi Danckers, Agata Atayde, and Richard Saitz. 2022. “The Associated Press Stylebook Changes and the use of Addiction-Related Stigmatizing Terms in News Media.” Substance Abuse 43 (1):127-130. https://doi.org/10.1080/08897077.2020.1748167.

Bezanson, Randall P. 2012. “Whither freedom of the press?” Iowa Law Review 97:1259-1274.

Bien-Aimé, Steve. 2016. “AP Stylebook normalizes sports as a male space.” Newspaper Research Journal 37 (1):44-57. https://doi.org/10.1177/0739532916634640.

Blach-Ørsten, Mark, Maria Bendix Wittchen, and Jannie Møller Hartley. 2021. “Ethics on the Beat: An Analysis of Ethical Breaches Across News Beats from 1999 to 2019.” Journalism Practice:1-17. https://doi.org/10.1080/17512786.2021.1956363.

Blankenburg, William B. 1970. “News Accuracy: Some Findings on the Meaning of Errors.” Journal of Communication 20 (4):375-386. https://doi.org/10.1111/j.1460-2466.1970.tb00896.x.

Bloch-Wehba, Hannah. 2024. “The Promise and Perils of Tech Whistleblowing.” Northwestern University Law Review 118 (6):1503-1562.

Boczkowski, Pablo. 2009. “Technology, monitoring, and imitation in contemporary news work.” Communication, Culture & Critique 2 (1):39-59.

Boczkowski, Pablo J. 2004. Digitizing the news: Innovation in online newspapers. Cambridge, MA: MIT Press.

Bødker, Henrik. 2015. “Journalism as Cultures of Circulation.” Digital Journalism 3 (1):101-115. https://doi.org/10.1080/21670811.2014.928106.

Bollinger, Lee C. 1991. Images of a free press. Chicago, IL: The University of Chicago Press.

———. 2010. Uninhibited, robust and wide-open: A free press for a new century. Oxford, UK: Oxford University Press.

Brause, Saba Rebecca, Jing Zeng, Mike S. Schäfer, Christian Katzenbach, and Simon Lindgren. 2023. “Media representations of artificial intelligence: surveying the field.” In Handbook of Critical Studies of Artificial Intelligence, edited by Simon Lindgren, 277-288. Cheltenham, UK: Edward Elgar Publishing.

Breed, W. 1955. “Newspaper opinion leaders and the process of standardization.” Journalism Quarterly 32:277-284.

Brennen, J Scott, Philip N Howard, and Rasmus K Nielsen. 2022. “What to expect when you’re expecting robots: Futures, expectations, and pseudo-artificial general intelligence in UK news.” Journalism 23 (1):22-38. https://doi.org/10.1177/1464884920947535.

Bridges, Lauren E. 2021. “Digital failure: Unbecoming the ‘good’ data subject through entropic, fugitive, and queer data.” Big Data & Society 8 (1):2053951720977882. https://doi.org/10.1177/2053951720977882.

Broussard, Meredith. 2023. More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. Cambridge, MA: MIT Press.

Brown, John Seely, and Paul Duguid. 1991. “Organizational learning and communities-of-practice: Toward a unified view of work, learning, and innovation.” Organizational Science 2 (1):40-57.

Brown, Pete. “Licensing deals, litigation raise raft of familiar questions in fraught world of platforms and publishers.” Tow Center for Digital Journalism, Columbia University. https://www.cjr.org/tow_center/licensing-deals-litigation-raise-raft-of-familiar-questions-in-fraught-world-of-platforms-and-publishers.php.

Buchanan, Tyler. 2023. “Dispatch pauses AI sports writing program.” Axios.

Buolamwini, Joy, and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” In Conference on Fairness, Accountability, and Transparency, edited by Sorelle A. Friedler and Christo Wilson, 1-15.

Burkhardt, Sarah, and Bernhard Rieder. 2024. “Foundation models are platform models: Prompting and the political economy of AI.” Big Data & Society 11 (2):20539517241247839. https://doi.org/10.1177/20539517241247839.

Callison, Candis, and Mary Lynn Young. 2020. Reckoning: Journalism’s limits and possibilities. Oxford, UK: Oxford University Press.

Campbell, Joseph W. 2016. Getting It Wrong: Debunking the Greatest Myths in American Journalism. 2nd ed. Berkeley, CA: University of California Press.

Carlson, Matt. 2013. “Gone, but not forgotten: Memories of journalistic deviance as metajournalistic discourse.” Journalism Studies 15 (1):33-47. https://doi.org/10.1080/1461670X.2013.790620.

———. 2015. “The many boundaries of journalism.” In Boundaries of journalism: Professionalism, practices and participation, edited by Matt Carlson and Seth C. Lewis, 1-18. New York, NY: Routledge.

Carroll, Erin C. 2022. “A Free Press Without Democracy.” U.C. Davis Law Review 56:289-345.

———. “Press Benefits and the Public Imagination.” https://knightcolumbia.org/blog/press-benefits-and-the-public-imagination.

Cen, Sarah H., and Manish Raghavan. 2023. “The Right to Be an Exception to a Data-Driven Rule.” MIT Case Studies in Social and Ethical Responsibilities of Computing (Winter). https://doi.org/10.21428/2c646de5.a15f7255.

Chadwick, Andrew. 2017. The hybrid media system: Politics and power. 2nd ed. Oxford, UK: Oxford University Press.

Chanda, Sasanka Sekhar, and Debarag Narayan Banerjee. 2022. “Omission and commission errors underlying AI failures.” AI & Society. https://doi.org/10.1007/s00146-022-01585-x.

Choi, Sukyoung. 2024. “Temporal Framing in Balanced News Coverage of Artificial Intelligence and Public Attitudes.” Mass Communication and Society 27 (2):384-405. https://doi.org/10.1080/15205436.2023.2248974.

Christin, Angele. 2020. Metrics at Work: How Web Journalists Make Sense of their Algorithmic Publics in the United States and France. Princeton, NJ: Princeton University Press.

Christin, Angèle, and Caitlin Petre. 2020. “Making Peace with Metrics: Relational Work in Online News Production.” Sociologica 14 (2):133-156. https://doi.org/10.6092/issn.1971-8853/11178.

Cobbe, Jennifer. 2024. “The Politics of Artificial Intelligence: Rhetoric vs Reality.” Political Insight 15 (2):20-23. https://doi.org/10.1177/20419058241260785.

Coddington, Mark. 2014. “Clarifying journalism’s quantitative turn.” Digital Journalism 3 (3):331-348. https://doi.org/10.1080/21670811.2014.976400.

Colyvas, Jeannette A., and Spiro Maroulis. 2015. “Moving from an Exception to a Rule: Analyzing Mechanisms in Emergence-Based Institutionalization.” Organization Science 26 (2):601-621. https://doi.org/10.1287/orsc.2014.0948.

Cook, Timothy E. 1989. Making laws and making news. Washington, DC: Brookings Institution Press.

———. 1998. Governing with the news. Chicago, IL: University of Chicago Press.

Cools, Hannes, and Nicholas Diakopoulos. 2023. “Writing guidelines for the role of AI in your newsroom? Here are some, er, guidelines for that.” Nieman Lab. https://www.niemanlab.org/2023/07/writing-guidelines-for-the-role-of-ai-in-your-newsroom-here-are-some-er-guidelines-for-that/.

Cools, Hannes, Baldwin Van Gorp, and Michael Opgenhaffen. 2022. “Where exactly between utopia and dystopia? A framing analysis of AI and automation in US newspapers.” Journalism 0 (0):14648849221122647. https://doi.org/10.1177/14648849221122647.

Cools, Hannes, Baldwin Van Gorp, and Michaël Opgenhaffen. 2023. “Newsroom Engineering Teams as ‘Survival Entities’ for Journalism? Mapping the Process of Institutionalization at The Washington Post.” Digital Journalism:1-20. https://doi.org/10.1080/21670811.2023.2195115.

Coombes, Rebecca, and Madlen Davies. 2022. “Facebook versus the BMJ: when fact checking goes wrong.” BMJ 376:o95. https://doi.org/10.1136/bmj.o95.

Crawford, Kate. 2023. “Archeologies of Datasets.” The American Historical Review 128 (3):1368-1371. https://doi.org/10.1093/ahr/rhad364.

Crawford, Kate, and Trevor Paglen. 2019. “Excavating AI: The Politics of Training Sets for Machine Learning.” https://excavatingai.com/.

Culpepper, Sophie. 2023. “The AP announces five AI tools to help local newsrooms with tasks like transcription and sorting pitches.” Nieman Lab. https://www.niemanlab.org/2023/10/the-ap-announces-five-ai-tools-to-help-local-newsrooms-with-tasks-like-transcription-and-sorting-pitches/.

Daston, Lorraine. 2005. “Scientific Error and the Ethos of Belief.” Social Research 72 (1):1-28.

David, Emilia. 2024. “The New York Times is building a team to explore AI in the newsroom.” The Verge.

Deck, Andrew. 2024. “The Washington Post’s first AI strategy editor talks LLMs in the newsroom.” Nieman Lab. https://www.niemanlab.org/2024/03/the-washington-posts-first-ai-strategy-editor-talks-llms-in-the-newsroom/.

Denton, Emily, Alex Hanna, Razvan Amironesei, Andrew Smart, and Hilary Nicole. 2021. “On the genealogy of machine learning datasets: A critical history of ImageNet.” Big Data & Society 8 (2):20539517211035955. https://doi.org/10.1177/20539517211035955.

Denton, Emily, Alex Hanna, Razvan Amironesei, Andrew Smart, Hilary Nicole, and Morgan Klaus Scheuerman. 2020. “Bringing the People Back In: Contesting Benchmark Machine Learning Datasets.” In Proceedings of ICML Workshop on Participatory Approaches to Machine Learning, 2020.

Deuze, Mark, and Charlie Beckett. 2022. “Imagination, Algorithms and News: Developing AI Literacy for Journalism.” Digital Journalism:1-6. https://doi.org/10.1080/21670811.2022.2119152.

DiMaggio, P.J., and W.W. Powell. 1991. “Introduction.” In The new institutionalism in organizational analysis, edited by W.W. Powell and P.J. DiMaggio, 1-38. Chicago, IL: The University of Chicago Press.

Dorner, Dietrich. 1989. The Logic Of Failure: Recognizing And Avoiding Error In Complex Situations. New York, NY: Basic Books.

Dourish, P., and M. Mazmanian. 2011. “Media as material: Information representations as material foundations for organizational practice.” In Third International Symposium on Process Organization Studies. Corfu, Greece.

Dowd, Cate. 2020. Digital Journalism, Drones, and Automation: The Language and Abstractions Behind the News. Oxford, UK: Oxford University Press.

Downer, John. 2024. Rational Accidents: Reckoning with Catastrophic Technologies. Cambridge, MA: MIT Press.

Dupre, Maggie Harrison. 2023a. “Gizmodo’s AI-Generated Star Wars Article Still Has Errors, and Now It’s Ranking on Google.” Futurism.

———. 2023b. “Sports Illustrated Published Articles by Fake, AI-Generated Writers.” Futurism.

Eason, D.L. 1986. “On journalistic authority: The Janet Cooke scandal.” Critical Studies in Mass Communication 3:429-447.

Ehrlich, Matthew C., and Joe Saltzman. 2015. Heroes and scoundrels: The image of the journalist in popular culture. Chicago, IL: University of Illinois Press.

Elish, Madeleine Clare. 2019. “Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction.” Engaging Science, Technology, and Society 5:40-60. https://doi.org/10.17351/ests2019.260.

Ettema, James S., and Theodore L. Glasser. 1998. Custodians of conscience. New York, NY: Columbia University Press.

Fass, John, and Angus Main. 2014. “Revealing the news: How online news changes without you noticing.” Digital Journalism. https://doi.org/10.1080/21670811.2014.899756.

Fridman, M., R. Krøvel, and F. Palumbo. 2023. “How (not to) Run an AI Project in Investigative Journalism.” Journalism Practice:1-18. https://doi.org/10.1080/17512786.2023.2253797.

Galison, Peter. 2005. “Author of Error.” Social Research 72 (1):63-76.

Giles, Robert H. 2010. “New economic models for U.S. journalism.” Daedalus 139 (2):26-38.

Gillespie, Tarleton. 2010. “The politics of ‘platforms’.” New Media & Society 12 (3):347-364.

———. 2024. “Generative AI and the politics of visibility.” Big Data & Society 11 (2):20539517241252131. https://doi.org/10.1177/20539517241252131.

Ginart, Antonio A., Melody Y. Guan, Gregory Valiant, and James Zou. 2019. “Making AI forget you: data deletion in machine learning.” In Proceedings of the 33rd International Conference on Neural Information Processing Systems, Article 316. Curran Associates Inc.

Gitelman, Lisa, ed. 2013. “Raw Data” is an oxymoron. Cambridge, MA: MIT Press.

Glasser, Theodore L., and James S. Ettema. 1993. “When the facts don’t speak for themselves: A study of the use of irony in daily journalism.” Critical Studies in Mass Communication 10 (4):322-338.

Glasser, Theodore L., and Mark Gunther. 2005. “The legacy of autonomy in American journalism.” In The Institutions of American Democracy: The Press, edited by Geneva Overholser and Kathleen Hall Jamieson, 384-399. Oxford, UK: Oxford University Press.

Graves, Lucas. 2016. Deciding what’s true: The rise of political fact-checking in American journalism. New York, NY: Columbia University Press.

Graves, Lucas, and C. W. Anderson. 2020. “Discipline and promote: Building infrastructure and managing algorithms in a ‘structured journalism’ project by professional fact-checking groups.” New Media & Society 22 (2):342-360.

Graves, Lucas, and Laurens Lauer. 2020. “From Movement to Institution: The ‘Global Fact’ Summit as a Field-Configuring Event.” Sociologica 14 (2):157-174. https://doi.org/10.6092/issn.1971-8853/11154.

Greenwald, Glenn. 2014. No place to hide. New York, NY: Metropolitan Books.

Griffith, Keith. 2021. “DailyMail.com wins apology from Facebook for AI fail that labelled website’s news video of black men being harassed and arrested as ‘primates’.” Daily Mail.

Grynbaum, Michael M., and Ryan Mac. 2023. “The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work.” The New York Times.

Halberstam, Jack. 2011. The Queer Art of Failure. Durham, NC: Duke University Press.

Haugen, Frances. 2023. The Power of One: How I Found the Strength to Tell the Truth and Why I Blew the Whistle on Facebook. New York, NY: Little, Brown, and Company.

Haunschild, Pamela, and David Chandler. 2008. “Institutional-Level Learning: Learning as a Source of Institutional Change.” In The SAGE Handbook of Organizational Institutionalism, edited by Royston Greenwood, Christine Oliver, Thomas B. Lawrence and Renate E. Meyer, 624-649. London, UK: SAGE.

Haynes, Kenneth. 2021. “Error.” In Information: A historical companion, edited by Ann Blair, Paul Duguid, Anja-Silvia Goeing and Anthony Grafton, 424-432. Princeton, NJ: Princeton University Press.

Helberger, Natali, Max van Drunen, Judith Moeller, Sanne Vrijenhoek, and Sarah Eskens. 2022. “Towards a Normative Perspective on Journalistic AI: Embracing the Messy Reality of Normative Ideals.” Digital Journalism 10 (10):1605-1626. https://doi.org/10.1080/21670811.2022.2152195.

Hendrix, Justin. “Facebook Whistleblower Frances Haugen and WSJ Reporter Jeff Horwitz Reflect One Year On.” https://techpolicy.press/facebook-whistleblower-frances-haugen-and-wsj-reporter-jeff-horwitz-reflect-one-year-on/.

Henke, Jakob, Stefanie Holtrup, and Wiebke Moehring. 2022. “Forgiving the News: The Effects of Error Corrections on News Users’ Reactions and the Influence of Individual Characteristics and Perceptions.” Journalism Studies 23 (7):840-857. https://doi.org/10.1080/1461670X.2022.2044889.

Hill, Kashmir, and Tiffany Hsu. 2024. “It Looked Like a Reliable News Site. It Was an A.I. Chop Shop.” The New York Times.

Holpuch, Amanda. 2023. “German Magazine Editor Is Fired Over A.I. Michael Schumacher Interview.” The New York Times.

Horwitz, Jeff. 2023. Broken Code: Inside Facebook and the Fight to Expose Its Harmful Secrets. New York, NY: Penguin.

Jackson, Steven J. 2013. “Rethinking repair.” In Media technologies: Essays on communication, materiality, and society, edited by Tarleton Gillespie, Pablo J. Boczkowski and Kirsten A. Foot, 221-239. Cambridge, MA: MIT Press.

Jacobsen, Benjamin N. 2023. “Machine learning and the politics of synthetic data.” Big Data & Society 10 (1):20539517221145372. https://doi.org/10.1177/20539517221145372.

Jia, Chenyan, Martin J. Riedl, and Samuel Woolley. 2024. “Promises and Perils of Automated Journalism: Algorithms, Experimentation, and ‘Teachers of Machines’ in China and the United States.” Journalism Studies 25 (1):38-57. https://doi.org/10.1080/1461670X.2023.2289881.

Jones-Jang, S Mo, and Yong Jin Park. 2022. “How do people react to AI failure? Automation bias, algorithmic aversion, and perceived controllability.” Journal of Computer-Mediated Communication 28 (1). https://doi.org/10.1093/jcmc/zmac029.

Jones, RonNell Andersen, and Sonja R. West. 2017. “The Fragility of the Free American Press.” Northwestern University Law Review 112 (3):567-596.

———. 2022. “The Disappearing Freedom of the Press.” Washington and Lee Law Review 79 (4).

Kelty, Chris M. 2005. “Geeks, social imaginaries, and recursive publics.” Cultural Anthropology 20 (2):185-214.

Kesan, Jay P., and Rajiv C. Shah. 2006. “Setting software defaults: Perspectives from law, computer science and behavioral economics.” Notre Dame Law Review 82 (2):583-684.

Kwoka, Margaret. “Returning FOIA to the press.” https://knightcolumbia.org/blog/returning-foia-to-the-press.

Lasorsa, Dominic L., and Jia Dai. 2007. “Newsroom’s normal accident?” Journalism Practice 1 (2):159-174. https://doi.org/10.1080/17512780701275473.

Lave, Jean, and Etienne Wenger. 1991. Situated learning: Legitimate peripheral participation. Cambridge, UK: Cambridge University Press.

Leonardi, P. M. 2011. “When flexible routines meet flexible technologies: Affordance, constraint, and the imbrication of human and material agencies.” MIS Quarterly 35 (1):147-167.

Lessin, Jessica. 2024. “News organizations rushing to absolve AI companies of theft are acting against their own interests.” The Atlantic.

Levin, Sam, Julia Carrie Wong, and Luke Harding. 2016. “Facebook backs down from ‘napalm girl’ censorship and reinstates photo.” Accessed September 11, 2016. https://www.theguardian.com/technology/2016/sep/09/facebook-reinstates-napalm-girl-photo.

Li, Qin, Hans J. G. Hassell, and Robert M. Bond. 2023. “Journalists’ networks: Homophily and peering over the shoulder of other journalists.” PLOS ONE 18 (10):e0291544. https://doi.org/10.1371/journal.pone.0291544.

Lin, Cindy Kaiying, and Steven J. Jackson. 2023. “From Bias to Repair: Error as a Site of Collaboration and Negotiation in Applied Data Science Work.” Proc. ACM Hum.-Comput. Interact. 7 (CSCW1):Article 131. https://doi.org/10.1145/3579607.

Lipari, Lisbeth. 1996. “Journalistic authority: Textual strategies of legitimation.” Journalism & Mass Communication Quarterly 73 (4):821-834.

Lopez, Marisela Gutierrez, Colin Porlezza, Glenda Cooper, Stephann Makri, Andrew MacFarlane, and Sondess Missaoui. 2023. “A Question of Design: Strategies for Embedding AI-Driven Tools into Journalistic Work Routines.” Digital Journalism 11 (3):484-503. https://doi.org/10.1080/21670811.2022.2043759.

Luccioni, Alexandra Sasha, Frances Corry, Hamsini Sridharan, Mike Ananny, Jason Schultz, and Kate Crawford. 2022. “A Framework for Deprecating Datasets: Standardizing Documentation, Identification, and Communication.” In 2022 ACM Conference on Fairness, Accountability, and Transparency, 199–212. Seoul, Republic of Korea: Association for Computing Machinery.

Magaudda, Paolo, and Gabriele Balbi. 2024. “Theorizing failure in digital media. Four eclectic theses.” Annals of the International Communication Association:1-14. https://doi.org/10.1080/23808985.2024.2326056.

Maier, Scott R. 2005. “Accuracy Matters: A Cross-Market Assessment of Newspaper Error and Credibility.” Journalism & Mass Communication Quarterly 82 (3):533-551. https://doi.org/10.1177/107769900508200304.

———. 2007. “Setting the record straight.” Journalism Practice 1 (1):33-43. https://doi.org/10.1080/17512780601078845.

Matsakis, Louise. 2018. “What happens when Facebook mistakenly blocks local news stories.” WIRED.

McGregor, Shannon C. 2019. “Social media as public opinion: How journalists use social media to represent public opinion.” Journalism. https://doi.org/10.1177/1464884919845458.

McGregor, Shannon C., and Logan Molyneux. 2018. “Twitter’s influence on news judgment: An experiment among journalists.” Journalism. https://doi.org/10.1177/1464884918802975.

Milmo, Dan. 2023. “Microsoft accused of damaging Guardian’s reputation with AI-generated poll.” The Guardian.

Moran, Chris. 2023. “ChatGPT is making up fake Guardian articles. Here’s how we’re responding.” The Guardian.

Moran, Rachel E., and Sonia Jawaid Shaikh. 2022. “Robots in the News and Newsrooms: Unpacking Meta-Journalistic Discourse on the Use of Artificial Intelligence in Journalism.” Digital Journalism 10 (10):1756-1774. https://doi.org/10.1080/21670811.2022.2085129.

Morrow, Allison. 2024. “Meta is accused of censoring a non-profit newspaper and an independent journalist who criticized the company.” CNN.

Mullin, Benjamin, and Katie Robertson. 2022. “USA Today to Remove 23 Articles After Investigation Into Fabricated Sources.” The New York Times.

Nerone, John. 2010. “Genres of journalism history.” The Communication Review 13 (1):15-26.

Newman, Nic. “How publishers are learning to create and distribute news on TikTok.” https://reutersinstitute.politics.ox.ac.uk/how-publishers-are-learning-create-and-distribute-news-tiktok.

Nishal, Sachita, and Nicholas Diakopoulos. 2023. Envisioning the Applications and Implications of Generative AI for News Media. Paper presented at the CHI ‘23 Generative AI and HCI Workshop, Hamburg, Germany, April 23–28.

Noble, David F. 1978. “Social Choice in Machine Design: The Case of Automatically Controlled Machine Tools, and a Challenge for Labor.” Politics & Society 8 (3-4):313-347. https://doi.org/10.1177/003232927800800302.

———. 1986. Forces of production: A social history of industrial automation. Oxford, UK: Oxford University Press.

Olesen, Thomas. 2018. “The democratic drama of whistleblowing.” European Journal of Social Theory 21 (4):508-525. https://doi.org/10.1177/1368431017751546.

———. 2019. “The Politics of Whistleblowing in Digitalized Societies.” Politics & Society 47 (2):277-297. https://doi.org/10.1177/0032329219844140.

———. 2021. “Democracy’s Autonomy Dilemma: Whistleblowing and the Politics of Disclosure.” Sociological Theory 39 (4):245-264. https://doi.org/10.1177/07352751211054874.

———. 2023. “Whistleblowing and the press: Complicating the standard account.” Journalism 24 (11):2418-2435. https://doi.org/10.1177/14648849221109046.

Ophir, Yotam, and Kathleen Hall Jamieson. 2021. “The effects of media narratives about failures and discoveries in science on beliefs about and support for science.” Public Understanding of Science 30 (8):1008-1023. https://doi.org/10.1177/09636625211012630.

Orlikowski, Wanda. 2010. “Technology and organization: Contingency all the way down.” Research in The Sociology of Organizations 29:239-246. https://doi.org/10.1108/S0733-558X(2010)0000029017.

Owen, Laura Hazard. 2024. “Microsoft, pushing generative AI in newsrooms, partners with Semafor, CUNY, the Online News Association, and others.” Nieman Lab. https://www.niemanlab.org/2024/02/microsoft-pushing-generative-ai-in-newsrooms-partners-with-semafor-cuny-the-online-news-association-and-others/.

Parasie, Sylvain. 2022. Computing the news. New York, NY: Columbia University Press.

Pariser, E. 2011. The filter bubble. New York, NY: Penguin Press.

Partnership on AI. “AI Adoption for Newsrooms: A 10-Step Guide.” Partnership on AI. https://partnershiponai.org/ai-for-newsrooms/.

Parvin, Nassim, and Anne Pollock. 2020. “Unintended by Design: On the Political Uses of ‘Unintended Consequences.’” Engaging Science, Technology, and Society 6:320-327. https://doi.org/10.17351/ests2020.497.

Pea, Roy D. 2004. “The social and technological dimensions of scaffolding and related theoretical concepts for learning, education, and human activity.” The Journal of the Learning Sciences 13 (3):423-451.

Perrow, Charles. 1984. Normal accidents: Living with High Risk Technologies. New York, NY: Basic Books.

Petre, Caitlin. 2021. All the news that’s fit to click. Princeton, NJ: Princeton University Press.

Petridis, Savvas, Nicholas Diakopoulos, Kevin Crowston, Mark Hansen, Keren Henderson, Stan Jastrzebski, Jeffrey V Nickerson, and Lydia B Chilton. 2023. “AngleKindling: Supporting Journalistic Angle Ideation with Large Language Models.” In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Article 225. Hamburg, Germany: Association for Computing Machinery.

Poirier, Lindsay. 2021. “Reading datasets: Strategies for interpreting the politics of data signification.” Big Data & Society 8 (2):20539517211029322. https://doi.org/10.1177/20539517211029322.

Porlezza, Colin, Scott R. Maier, and Stephan Russ-Mohl. 2012. “News accuracy in Switzerland and Italy.” Journalism Practice 6 (4):530-546. https://doi.org/10.1080/17512786.2011.650923.

Powell, W.W. 1990. “Neither market nor hierarchy: Network forms of organization.” Research in Organizational Behavior 12:295-336.

Reese, Stephen D. 2021. The crisis of the institutional press. London, UK: Polity.

Rettberg, Jill Walker. 2022. “Algorithmic failure as a humanities methodology: Machine learning’s mispredictions identify rich cases for qualitative analysis.” Big Data & Society 9 (2):20539517221131290. https://doi.org/10.1177/20539517221131290.

Rosenberg, Eli. 2018. “Facebook censored a post for ‘hate speech.’ It was the Declaration of Independence.” The Washington Post.

Russell, Adrienne. 2011. Networked: A contemporary history of news in transition. London, UK: Polity.

———. 2023. The mediated climate: How journalists, big tech, and activists are vying for our future. New York, NY: Columbia University Press.

Salamon, Errol. 2023. “Supporting Digital Job Satisfaction in Online Media Unions’ Contracts.” In Happiness in Journalism, edited by Valérie Bélair-Gagnon, Avery E. Holton, Mark Deuze and Claudia Mellado, 87-95. London, UK: Routledge.

———. 2024. “Negotiating Technological Change: How Media Unions Navigate Artificial Intelligence in Journalism.” Journalism & Communication Monographs 26 (2):159-163. https://doi.org/10.1177/15226379241239758.

Salkin, Erica, and Kevin Grieves. 2022. “The ‘major mea culpa’: Journalistic Discursive Techniques When Professional Norms are Broken.” Journalism Studies 23 (9):1096-1113. https://doi.org/10.1080/1461670X.2022.2069589.

Samuelson, Pamela. 2016. “Freedom to Tinker.” Theoretical Inquiries in Law 17:563-600.

Sandhaus, Evan. n.d. “The New York Times Annotated Corpus Overview.” New York, NY: The New York Times.

Sandhaus, Evan, and Rob Larson. n.d. “A Century of Semantic Technology: Semantics At The New York Times.” New York, NY: The New York Times.

Sato, Mia, and Emma Roth. 2023. “CNET found errors in more than half of its AI-written stories.” The Verge.

Schauer, Frederick. 1991. “Exceptions.” The University of Chicago Law Review 58 (3):871-899.

———. 2005. “Towards an institutional first amendment.” Minnesota Law Review 89:1256-1279.

Schiller, Dan. 1979. “An historical approach to objectivity and professionalism in American news reporting.” Journal of Communication 29:46-57.

Schneider, Rebecca. 2021. “Glitch.” In Uncertain Archives: Critical Keywords for Big Data, edited by Nanna Bonde Thylstrup, Daniela Agostinho, Annie Ring, Catherine D’Ignazio and Kristin Veel, 259-269. Cambridge, MA: MIT Press.

Schudson, Michael. 1978. Discovering the news: A social history of American newspapers. New York, NY: Basic Books.

———. 2001. “The objectivity norm in American journalism.” Journalism 2 (2):149-170.

———. 2005. “Autonomy from what?” In Bourdieu and the journalistic field, edited by R. Benson and E. Neveu, 214–223. Cambridge: Polity Press.

Shah, Rajiv C., and Jay P. Kesan. 2008. “Setting online policy with software defaults.” Information, Communication & Society 11 (7):989-1007.

Silverman, Craig. 2007. Regret the Error: How Media Mistakes Pollute the Press and Imperil Free Speech. New York, NY: Union Square Press.

Simon, Felix M. 2022. “Uneasy Bedfellows: AI in the News, Platform Companies and the Issue of Journalistic Autonomy.” Digital Journalism 10 (10):1832-1854. https://doi.org/10.1080/21670811.2022.2063150.

———. 2023. “Escape Me If You Can: How AI Reshapes News Organisations’ Dependency on Platform Companies.” Digital Journalism:1-22. https://doi.org/10.1080/21670811.2023.2287464.

Singer, Jessie. 2022. There Are No Accidents. New York, NY: Simon & Schuster.

Slavtcheva-Petkova, Vera, Jyotika Ramaprasad, Nina Springer, Sallie Hughes, Thomas Hanitzsch, Basyouni Hamada, Abit Hoxha, and Nina Steindl. 2023. “Conceptualizing Journalists’ Safety around the Globe.” Digital Journalism:1-19. https://doi.org/10.1080/21670811.2022.2162429.

Smelser, Neil J. 2005. “The Questionable Logic of ‘Mistakes’ in the Dynamics of Knowledge Growth in the Social Sciences.” Social Research 72 (1):237-262.

Snyder, Gabriel. “The Times corrects factual errors. What about bigger controversies?” Accessed November 1, 2020. https://www.cjr.org/public_editor/nyt-correction-factual-errors-editors-note.php.

Spangher, Alexander, Xiang Ren, Jonathan May, and Nanyun Peng. 2022. NewsEdits: A News Article Revision Dataset and a Novel Document-Level Reasoning Challenge. Paper presented at the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Seattle, WA.

Stilgoe, Jack. 2023. “We need a Weizenbaum test for AI.” Science 381 (6658):eadk0176. https://doi.org/10.1126/science.adk0176.

Stubenvoll, Marlis, and Jörg Matthes. 2022. “Why Retractions of Numerical Misinformation Fail: The Anchoring Effect of Inaccurate Numbers in the News.” Journalism & Mass Communication Quarterly 99 (2):368-389. https://doi.org/10.1177/10776990211021800.

Stupart, Richard. 2023. “Anger and the investigative journalist.” Journalism 24 (11):2341-2358. https://doi.org/10.1177/14648849221125980.

Sweet, Paige L., and Danielle Giffort. 2021. “The bad expert.” Social Studies of Science 51 (3):313-338. https://doi.org/10.1177/0306312720970282.

Taylor, Miles. 2023. Blowback: A Warning to Save Democracy from the Next Trump. New York, NY: Atria.

The Washington Post. 2020. “The Washington Post announces writing style changes for racial and ethnic identifiers.” The Washington Post.

Thomson, T. J., Ryan J. Thomas, and Phoebe Matich. 2024. “Generative Visual AI in News Organizations: Challenges, Opportunities, Perceptions, and Policies.” Digital Journalism:1-22. https://doi.org/10.1080/21670811.2024.2331769.

Thylstrup, Nanna Bonde. 2021. “Error.” In Uncertain Archives: Critical Keywords for Big Data, edited by Nanna Bonde Thylstrup, Daniela Agostinho, Annie Ring, Catherine D’Ignazio and Kristin Veel, 191-200. Cambridge, MA: MIT Press.

———. 2022. “The ethics and politics of data sets in the age of machine learning: deleting traces and encountering remains.” Media, Culture & Society 0 (0). https://doi.org/10.1177/01634437211060226.

Tucher, Andie. 2022. Not Exactly Lying: Fake News and Fake Journalism in American History. New York, NY: Columbia University Press.

van Dijck, José, Thomas Poell, and Martijn de Waal. 2018. The Platform Society. Oxford, UK: Oxford University Press.

Vaughan, Diane. 1996. The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. Chicago, IL: University of Chicago Press.

Volokh, Eugene. 2012. “Freedom for the press as an industry, or for the press as a technology? From the framing to today.” University of Pennsylvania Law Review 160:459-540.

Vultee, Fred. 2012. “A paleontology of style.” Journalism Practice 6 (4):450-464. https://doi.org/10.1080/17512786.2012.674834.

Wahl-Jorgensen, K., and J. Hunt. 2012. “Journalism, accountability and the possibilities for structural critique: A case study of coverage of whistleblowing.” Journalism 13 (4):399-416. https://doi.org/10.1177/1464884912439135.

Wenger, Etienne. 1998. Communities of practice: Learning, meaning, and identity. Cambridge, UK: Cambridge University Press.

Wilczek, Bartosz, Mario Haim, and Neil Thurman. 2024. “Transforming the value chain of local journalism with artificial intelligence.” AI Magazine:1-12. https://doi.org/10.1002/aaai.12174.

Wiley, Sarah K. 2021. “The Grey Area: How Regulations Impact Autonomy in Computational Journalism.” Digital Journalism:1-18. https://doi.org/10.1080/21670811.2021.1893199.

Wilner, Tamar, Ryan Wallace, Ivan Lacasa-Mas, and Emily Goldstein. 2021. “The Tragedy of Errors: Political Ideology, Perceived Journalistic Quality, and Media Trust.” Journalism Practice:1-22. https://doi.org/10.1080/17512786.2021.1873167.
© 2024, Mike Ananny.

Cite as: Mike Ananny, Recursive Press Freedom as the Capacity to Control and Learn From Mistakes, 24-04 Knight First Amend. Inst. (Jul. 16, 2024), https://knightcolumbia.org/content/recursive-press-freedom-as-the-capacity-to-control-and-learn-from-mistakes [https://perma.cc/3PJ6-UQD7].