<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Knight First Amendment Institute</title>
    <description><![CDATA[The Knight First Amendment Institute defends the freedoms of speech and the press in the digital age through strategic litigation, research, and public education]]></description>
    <link>https://knightcolumbia.org/</link>
    <atom:link href="https://knightcolumbia.org/rss" rel="self" type="application/rss+xml" />
    <generator>In house</generator>
        <item>
      <title><![CDATA[Amazon v. Perplexity AI]]></title>
      <link>https://knightcolumbia.org/cases/amazon-v-perplexity-ai</link>
      <description><![CDATA[<p>On April 8, 2026, the Knight Institute, the ACLU, and the ACLU of Northern California filed an amicus brief in <em>Amazon v. Perplexity AI</em>, a case concerning liability under federal and state computer crime laws for the use of digital tools that automate how users access and interact with websites and online platforms. The case centers on Perplexity&rsquo;s browser, which the company claims allows users to deploy AI &ldquo;agents&rdquo; to browse websites like Amazon.com and perform tasks on their behalf.</p>
<p>The amicus brief warns that adopting Amazon&rsquo;s interpretation of the laws at issue would chill journalism and research that serves the public interest. Journalists and researchers increasingly use automated digital tools&mdash;including scrapers, browser extensions, and other AI-powered tools&mdash;to study online platforms and the ways in which they shape public discourse. Like Perplexity&rsquo;s AI agents, these tools often rely on users to voluntarily provide them with access to their accounts. The brief argues that the Computer Fraud and Abuse Act and California&rsquo;s analog should not be interpreted to prohibit these user-directed activities, because doing so would implicate the basic tools of digital investigation necessary for research and journalism online.&nbsp;</p>
<p><strong>Status: </strong>Briefing ongoing.&nbsp;</p>
<p><strong>Case Information: </strong><em>Amazon.com Servs., LLC v. Perplexity AI, Inc.</em>, No. 26-1444 (9th Cir.)</p>]]></description>
      <guid isPermaLink="false">/cases/amazon-v-perplexity-ai</guid>
      <pubDate>Wed, 08 Apr 2026 00:00:00 -0700</pubDate>
    </item>
        <item>
      <title><![CDATA[Margolin v. National Association of Immigration Judges]]></title>
      <link>https://knightcolumbia.org/cases/naij-v-neal</link>
      <description><![CDATA[<p>On July 1, 2020, the Knight Institute filed a lawsuit on behalf of the National Association of Immigration Judges (NAIJ) challenging a Justice Department policy that imposes an unconstitutional prior restraint on the speech of immigration judges. The policy categorically prohibits immigration judges from speaking or writing publicly in their personal capacities about immigration or about the agency that employs them.</p>
<p>For years, immigration judges regularly spoke at conferences, guest lectured at universities and law schools, participated in immigration-law trainings, and spoke to local community groups, all in their personal capacities. But starting in 2017, the Executive Office for Immigration Review issued <a href="https://knightcolumbia.org/documents/bd8dbc9669">a series of </a><a href="https://knightcolumbia.org/documents/f038648bd0" target="_blank" rel="noopener">speaking-engagement policies</a> that sharply curtailed their ability to speak publicly in their personal capacities.</p>
<p>The lawsuit argues that the <a href="https://knightcolumbia.org/documents/kpj6aibn16">currently operative policy</a> violates the First Amendment right of immigration judges to speak publicly on matters of public concern, and the First Amendment right of the public to hear them. It also argues that the policy is void for vagueness under the First and Fifth Amendments.</p>
<p>On June 3, 2025, the Fourth Circuit vacated the district court&rsquo;s decision dismissing the case for lack of subject matter jurisdiction, and remanded for further proceedings consistent with its opinion.</p>
<p><strong>Status:</strong> Briefing complete on the government&rsquo;s petition for certiorari and NAIJ&rsquo;s cross-petition for a writ of certiorari.</p>
<p><strong>Case information:</strong>&nbsp;<em>Nat'l Ass'n of Immigration Judges v. Owen</em>, No. 1:20-cv-00731 (E.D. Va.), Nos. 20-1868 and 23-2235 (4th Cir.), <em>Margolin v. Nat'l Ass'n of Immigration Judges</em>, Nos. 25A662, 25-767, 25-1009.</p>]]></description>
      <guid isPermaLink="false">/cases/naij-v-neal</guid>
      <pubDate>Tue, 07 Apr 2026 00:00:00 -0700</pubDate>
    </item>
        <item>
      <title><![CDATA[Knight Institute Raises First Amendment Concerns Over Trump Threat to Compel Journalists to Reveal Sources]]></title>
      <link>https://knightcolumbia.org/content/knight-institute-raises-first-amendment-concerns-over-trump-threat-to-compel-journalists-to-reveal-sources</link>
      <description><![CDATA[<p dir="ltr">WASHINGTON&mdash;At a press conference today, President Trump threatened to compel journalists to disclose confidential sources in response to reporting about the administration&rsquo;s handling of the war in Iran. The comments come amid escalating efforts by the administration to challenge news coverage, raising serious First Amendment concerns about press independence, newsgathering, and the public&rsquo;s right of access to information.</p>
<p dir="ltr"><strong>The following can be attributed to Jameel Jaffer, executive director at the Knight First Amendment Institute at Columbia University:</strong></p>
<p dir="ltr">&ldquo;News organizations have a First Amendment right to publish stories about matters of public importance&mdash;including stories the government would prefer to suppress. President Trump&rsquo;s threat to force journalists to disclose their sources raises serious press freedom concerns because journalists&rsquo; ability to do their work turns in part on their ability to protect their sources&rsquo; identities. President Trump's threat should be understood as an effort to intimidate the press and to prevent journalists from doing work the public needs them to do.&rdquo;</p>
<p>For more information, contact: Adriana Lamirande, <a href="mailto:adriana.lamirande@knightcolumbia.org">adriana.lamirande@knightcolumbia.org</a>&nbsp;</p>]]></description>
      <guid isPermaLink="false">/content/knight-institute-raises-first-amendment-concerns-over-trump-threat-to-compel-journalists-to-reveal-sources</guid>
      <pubDate>Mon, 06 Apr 2026 00:00:00 -0700</pubDate>
    </item>
        <item>
      <title><![CDATA[A Conversation on Middleware and User Control]]></title>
      <link>https://knightcolumbia.org/content/a-conversation-on-middleware-and-user-control</link>
      <description><![CDATA[<p>At the heart of debates over the power of social media platforms and their implications for free expression is the question of who should shape what we see online&mdash;and whether that control can shift.</p>
<p>Throughout the&nbsp;Institute&rsquo;s half-day program&mdash;&ldquo;<a class="external" href="https://knightcolumbia.org/events/can-middleware-save-social-media">Can Middleware Save Social Media?</a>&rdquo;&mdash;those questions played out across three panel discussions about online speech and the rules governing social media platforms.</p>
<p>As&nbsp;<a class="external" href="https://knightcolumbia.org/bios/view/jameel-jaffer">Jameel Jaffer</a>, executive director of the Knight Institute, said at the outset:&nbsp;The question is not just whether social media can be saved, but what we are saving it from&mdash;and who gets to do the saving.</p>
<h3>Defining Middleware</h3>
<p>Panelists described middleware not as a single product, but as a layer of tools that sits between users and platforms, shaping how information is filtered and delivered. Early examples such as ad blockers showed how users could modify the content that platforms deliver. Newer systems go further, allowing users to curate their own feeds or apply alternative algorithms.</p>
<p>However, that shift comes with constraints, including legal barriers that limit interoperability, privacy risks, and the reality that platforms still hold enormous&nbsp;<a class="external" href="https://knightcolumbia.org/policy/platform-accountability-and-transparency">structural power&nbsp;</a>and control the terms of engagement. Middleware, in this sense, operates within and around existing systems rather than replacing them.</p>
<p>In conversation with the Institute&rsquo;s&nbsp;<a class="external" href="https://knightcolumbia.org/bios/view/ramya-krishnan">Ramya Krishnan</a>,&nbsp;<a class="external" href="https://knightcolumbia.org/bios/view/daphne-keller">Daphne Keller</a>&nbsp;of Stanford University and&nbsp;<a class="external" href="https://knightcolumbia.org/bios/view/richard-reisman">Richard Reisman</a>&nbsp;of the Foundation for American Innovation explored these dynamics, emphasizing how middleware can redistribute control to users without inviting direct government intervention. At the same time, they underscored that middleware&rsquo;s effectiveness depends on the surrounding legal and technical environment&mdash;whether platforms are required, or even willing, to allow interoperability. Without that, the promise of user control remains contingent, not guaranteed.</p>
<h3>Middleware&rsquo;s Promise and Its Limits</h3>
<p>The keynote conversation, moderated by the Institute&rsquo;s Policy Director <a class="external" href="https://knightcolumbia.org/bios/view/nadine-farid-johnson">Nadine Farid Johnson</a>, brought together&nbsp;<a class="external" href="https://knightcolumbia.org/bios/view/olivier-sylvain">Olivier Sylvain</a>, senior policy research fellow at the Knight Institute and professor of law at Fordham University, and&nbsp;<a class="external" href="https://knightcolumbia.org/bios/view/ethan-zuckerman">Ethan Zuckerman</a>&nbsp;of the University of Massachusetts Amherst to examine what middleware can actually do.</p>
<p>For Zuckerman, middleware offers a tangible but limited path forward. &ldquo;The problem with middleware is that it is a partial, incomplete, imperfect solution,&rdquo; he said. Still, he described it as &ldquo;a way of showing that another world is possible,&rdquo; one that could help build the case for broader reform.</p>
<p>At one point, the conversation turned to whether users can realistically take control of their own online experience. Zuckerman mentioned tools that prompt reflection&mdash;like weekly screen time reports on smartphones&mdash;as evidence that users might engage more intentionally with their media environments if given the opportunity.</p>
<p>Sylvain challenged that premise, shifting the focus from individual behavior to system design.</p>
<p>&ldquo;There is an underlying pathology, a structural problem that middleware does not answer,&rdquo; Sylvain said, arguing that the current system is defined by information asymmetries&mdash;what platforms know, what users do not, and how that gap is leveraged to shape how content is delivered and consumed. He warned that focusing too squarely &ldquo;on the user as a solution &hellip; doubles down on the problem we have now.&rdquo;</p>
<p>The exchange crystallized a central tension: whether meaningful reform begins with empowering users to reshape their own experience, or with restructuring the systems that shape it for them.</p>
<h3>Considerations for Policymakers</h3>
<p>In conversation with the Institute&rsquo;s&nbsp;<a class="external" href="https://knightcolumbia.org/bios/view/ryan-morgan">Ryan Morgan</a>, panelists&nbsp;<a class="external" href="https://knightcolumbia.org/bios/view/rene-diresta">Ren&eacute;e DiResta</a>&nbsp;of the Stanford Internet Observatory,&nbsp;<a class="external" href="https://knightcolumbia.org/bios/view/anna-lenhart">Anna Lenhart</a>&nbsp;of Common Sense Media, and&nbsp;<a class="external" href="https://knightcolumbia.org/bios/view/luke-hogg">Luke Hogg</a>&nbsp;of the Foundation for American Innovation examined the policy landscape surrounding middleware and platform governance, with a focus on what interventions could meaningfully shift power in the digital public sphere.</p>
<p>They discussed a range of legislative proposals at the federal and state levels that could facilitate middleware development and adoption, alongside broader efforts to strengthen data privacy protections and address the structural dynamics that shape how information is distributed and consumed online.</p>
<p>The discussion made it clear that middleware alone cannot resolve the deeper forces shaping online discourse.</p>
<p>What middleware can do, however, is expose the limits of the current system and sharpen the case for structural reform. The future of online discourse will not be decided solely by platforms or policymakers, but by whether new systems can redistribute control in ways that are both meaningful and durable.</p>
<p>The larger question is whether the digital public sphere can be restructured so that the power over what we see is not concentrated by a handful of platforms, but more broadly shared. Middleware offers one entry point.</p>]]></description>
      <guid isPermaLink="false">/content/a-conversation-on-middleware-and-user-control</guid>
      <pubDate>Mon, 06 Apr 2026 00:00:00 -0700</pubDate>
    </item>
        <item>
      <title><![CDATA[Knight Institute Urges Congress to Limit ICE’s Use of Spyware Technologies as Agency Confirms Expanded Deployment]]></title>
      <link>https://knightcolumbia.org/content/knight-institute-urges-congress-to-limit-ices-use-of-spyware-technologies-as-agency-confirms-expanded-deployment</link>
      <description><![CDATA[<p dir="ltr">WASHINGTON&mdash;In an April 1 letter to Congress, U.S. Immigration and Customs Enforcement (ICE) confirmed that it has approved the procurement and use of a powerful spyware tool. Last summer, ICE reportedly re-opened a $2 million contract with the U.S. branch of Paragon Solutions&mdash;the Israeli manufacturer of spyware known as Graphite&mdash;prompting concerns from members of Congress about the agency&rsquo;s interest in using spyware and the risks such technologies pose to civil liberties.</p>
<p dir="ltr"><strong>The following can be attributed to Nadine Farid Johnson, policy director at the Knight First Amendment Institute at Columbia University:</strong></p>
<p dir="ltr">&ldquo;ICE&rsquo;s use of powerful spyware tools raises serious civil liberties concerns. Spyware enables covert and often unlimited access to smartphone data, posing significant risks to free speech and privacy. Experience shows these tools are highly susceptible to misuse; they&rsquo;ve already been used to target journalists, human rights advocates, and political dissidents around the world. Democracies should not be in the business of deploying spyware against their populations. We appreciate Representative Lee&rsquo;s focus on this issue and again urge Congress to step in to limit the circumstances in which spyware technology can be used.&rdquo;</p>
<p dir="ltr">In 2022, the Knight Institute filed a lawsuit on behalf of journalists and other members of El Faro&mdash;one of Central America&rsquo;s foremost independent news organizations, based in El Salvador&mdash;who were targeted with spyware attacks using NSO Group&rsquo;s Pegasus technology. In July of last year, the U.S. Court of Appeals for the Ninth Circuit held that the U.S. District Court for the District of Columbia had abused its discretion in dismissing the lawsuit and remanded the case for further consideration. Read more about that case, Dada v. NSO Group, <a href="https://www.knightcolumbia.org/cases/dada-v-nso-group">here</a>.&nbsp;&nbsp;</p>
<p dir="ltr">For more information, contact: Adriana Lamirande, <a href="mailto:adriana.lamirande@knightcolumbia.org">adriana.lamirande@knightcolumbia.org</a>&nbsp;</p>
]]></description>
      <guid isPermaLink="false">/content/knight-institute-urges-congress-to-limit-ices-use-of-spyware-technologies-as-agency-confirms-expanded-deployment</guid>
      <pubDate>Thu, 02 Apr 2026 00:00:00 -0700</pubDate>
    </item>
        <item>
      <title><![CDATA[To repeal or not to repeal is not the question for Section 230]]></title>
      <link>https://knightcolumbia.org/content/to-repeal-or-not-to-repeal-is-not-the-question-for-section-230</link>
      <description><![CDATA[<p><a href="https://www.congress.gov/crs-product/R46751" target="_blank" rel="noopener">Section 230 of the Communications Decency Act</a> plays a vital role in protecting free speech online. It allows platforms to host and moderate user-generated speech without assuming legal liability. But the online landscape has changed dramatically since its enactment, prompting questions about whether its protections should remain in place.&nbsp;</p>
<p>While Section 230 shouldn&rsquo;t be considered sacrosanct, its repeal would do little to address the problems lawmakers are trying to solve. In some ways, it would make the problems worse. What&rsquo;s needed instead is a new legislative approach grounded in structural reform&mdash;one that protects users&rsquo; privacy, allows users to engage with platforms on their own terms or leave them more easily, and makes platforms more transparent and accountable.&nbsp;</p>
<p>Lawmakers, parents and advocates have raised serious concerns about harmful content, particularly as it affects minors, as well as the power of large platforms and the inability to hold them accountable for their decisions. Congress is now&nbsp;<a href="https://www.congress.gov/bill/119th-congress/house-bill/6746/text" target="_blank" rel="noreferrer noopener">considering</a>&nbsp;<a href="https://www.grassley.senate.gov/imo/media/doc/sunset_section_230_act.pdf" target="_blank" rel="noopener">bills</a>&nbsp;that would sunset Section 230. The Senate Committee on Commerce, Science, and Transportation recently held a&nbsp;<a href="https://www.commerce.senate.gov/meetings/liability-or-deniability-platform-power-as-section-230-turns-30/" target="_blank" rel="noopener">hearing</a>, at which I testified, on how to move forward.&nbsp;&nbsp;</p>
<p>Framing the discussion about Section 230 as a choice between keeping the provision intact or scrapping it altogether misses a key point: Repeal wouldn&rsquo;t fix many of the concerns raised.&nbsp;</p>
<p>Here&rsquo;s why: Many of the harms driving calls for reform are tied to speech the First Amendment already protects. That constraint is not incidental. It shapes what Congress can do. As the Supreme Court recently&nbsp;<a href="https://www.supremecourt.gov/opinions/23pdf/22-277_d18f.pdf" target="_blank" rel="noreferrer noopener">reaffirmed</a>, platforms&rsquo; editorial decisions about whether and how to display content are protected by the Constitution.&nbsp;</p>
<p>Repealing Section 230 would not change the fact that much of the content lawmakers are concerned about&mdash;often described as &ldquo;lawful but awful&rdquo; speech&mdash;would remain protected. The government cannot prohibit that speech, nor can it compel platforms to do so.&nbsp;</p>
<p>What repeal would do is change the incentives that shape platform behavior, to the detriment of users and public discourse.&nbsp;</p>
<p>Without Section 230, platforms would face increased legal risk for hosting user speech, including defamatory claims alleging wrongdoing by identifiable individuals. The First Amendment doesn&rsquo;t protect defamation. Truth is a defense to liability, but platforms cannot reliably determine the truth of such claims at scale. They would therefore have strong incentives to remove any claims that might give rise to a defamation lawsuit. The result wouldn&rsquo;t be a safer or meaningfully improved online environment, but one in which lawful, often socially valuable speech is taken down more frequently.&nbsp;</p>
<p>The effects wouldn&rsquo;t be evenly distributed across platforms. While the largest platforms may be able to absorb the costs of increased liability, smaller or newer platforms may not. Community-driven sites and emerging services would be particularly vulnerable. The likely outcome would be an even more concentrated online landscape, with fewer options for users and less competition.&nbsp;</p>
<p>None of this is to defend the status quo. Few would argue that the digital public sphere is working for Americans or for our democracy. The question is how to respond to a rapidly evolving technology landscape in ways that are both effective and consistent with the First Amendment.&nbsp;</p>
<p>The most productive path forward lies in structural reform that targets the underlying features of the current system that contribute to online harms without creating incentives for platforms to remove lawful speech or giving the largest platforms even more control over online discourse.&nbsp;</p>
<p>Lawmakers could require greater transparency about how platforms operate, including how they collect and use consumer data and how their systems shape what users see. Congress could also establish protections for journalists and researchers who study platforms in the public interest, such as those outlined by my organization in the&nbsp;<a href="https://knightcolumbia.org/content/a-safe-harbor-for-platform-research">Knight Institute&rsquo;s safe harbor proposal</a>.&nbsp;</p>
<p>Platforms are successful in maintaining user engagement in part by relying on the extensive information they gather. Lawmakers could strengthen privacy protections by limiting what information platforms collect about users and how that information is shared, including by restricting the&nbsp;<a href="https://www.maximizemarketresearch.com/market-report/global-data-broker-market/55670" target="_blank" rel="noreferrer noopener">sale&nbsp;of user data</a>. They could also give users more control over their online experience, including by making it easier for them to move their data and connections across platforms or interact with users of competing services.&nbsp;</p>
<p>Congress could enact these reforms independently of Section 230 or condition its protections on compliance with these requirements. Either way, the platforms would have little choice but to respect the privacy of their users, provide greater transparency into how they operate and give users greater control over their online lives&mdash;all while preserving the space for public discourse.&nbsp;</p>
<p>Whether to repeal Section 230 is not the right question. The more important question is how to address online harms without undermining free expression.&nbsp;&nbsp;</p>
<p>The better course is to pursue targeted reforms that address real concerns about the online experience while respecting the constitutional limits that govern speech in the U.S.&nbsp;</p>
<p>Repealing Section 230 will not achieve that. Structural reform can.&nbsp;</p>]]></description>
      <guid isPermaLink="false">/content/to-repeal-or-not-to-repeal-is-not-the-question-for-section-230</guid>
      <pubDate>Wed, 01 Apr 2026 00:00:00 -0700</pubDate>
    </item>
        <item>
      <title><![CDATA[Can Middleware Save Social Media?]]></title>
      <link>https://knightcolumbia.org/events/can-middleware-save-social-media</link>
      <description><![CDATA[<p dir="ltr">In wrestling with the enormous power that social media platforms exert over the system of free expression, Congress has repeatedly debated legislation that would protect user privacy and give users more control over their experience online. Time after time these efforts have faltered&mdash;sometimes because the proposed interventions raised genuine free speech concerns. As a result, little has changed for the millions of individuals anxious about the collection and use of their personal information, and about who or what decides the kind and quality of information they see.</p>
<p dir="ltr">In seeking avenues to address these concerns without impinging on First Amendment rights, many free speech and technology advocates are looking to middleware, third-party software that operates between users and platforms. Middleware&rsquo;s proponents say that this kind of software could permit users to enhance their ability to shape their online experience, including by curating their timelines and by exercising more control over what data they share. Skeptics say that middleware raises privacy concerns of its own and that more fundamental changes are needed to address the pathologies of the digital public sphere. Can middleware really deliver the improvements that its boosters envision?&nbsp;</p>
<p dir="ltr"><strong id="docs-internal-guid-8d97dafb-7fff-b73d-10da-91f650156e37"></strong>On March 27, 2026, the Knight Institute will host &ldquo;Can Middleware Save Social Media?&rdquo; a half-day convening focusing on these questions and the future of these go-between tools. This convening is a collaboration between the Knight Institute and the Institute&rsquo;s Senior Policy Fellow&nbsp;<a tabindex="0" href="https://knightcolumbia.org/bios/view/olivier-sylvain" target="_blank" rel="nofollow noopener noreferrer" data-md-link="true" data-uie-name="markdown-link">Olivier Sylvain</a> and will include discussions with academics, policymakers, and technologists.&nbsp;</p>
<p><a href="https://can-middleware-save-social-media.eventbrite.com" target="_blank" rel="noopener">Registration</a> is required to attend in-person or to watch the livestream.&nbsp;</p>]]></description>
      <guid isPermaLink="false">/events/can-middleware-save-social-media</guid>
      <pubDate>Fri, 27 Mar 2026 00:00:00 -0700</pubDate>
    </item>
        <item>
      <title><![CDATA[Technology Researchers Ask Court to Block Trump Policy Threatening Deportation for Work on Social Media Platforms]]></title>
      <link>https://knightcolumbia.org/content/technology-researchers-ask-court-to-block-trump-policy-threatening-deportation-for-work-on-social-media-platforms</link>
      <description><![CDATA[<p dir="ltr">WASHINGTON&mdash;The Knight First Amendment Institute at Columbia University and Protect Democracy last night filed a motion for a preliminary injunction in their lawsuit on behalf of the Coalition for Independent Technology Research (CITR), asking a federal court to block a U.S. immigration policy that targets noncitizen researchers, advocates, fact-checkers, and trust and safety workers for visa denials, revocations, detention, and deportation based on their work studying and reporting on social media platforms.</p>
<p dir="ltr">The lawsuit alleges that the policy violates the First Amendment by penalizing particular viewpoints and deterring independent research about social media and other internet platforms. It also raises claims under the Fifth Amendment and the Administrative Procedure Act.</p>
<p dir="ltr">CITR&rsquo;s motion asks the court to halt enforcement of the policy while the case proceeds, explaining that its members are already self-censoring by curtailing research, avoiding speaking publicly about their work, and limiting their participation in advocacy efforts for fear of being targeted by the government for their public-interest work.</p>
<p dir="ltr">CITR&rsquo;s members include research institutions, academics, and journalists who study digital platforms and their societal impacts. Their work seeks to inform public debate so that consumers, advertisers, platforms, and policymakers can make informed decisions about emerging technologies.</p>
<p dir="ltr"><strong>The following can be attributed to Carrie DeCell, senior staff attorney at the Knight First Amendment Institute:</strong></p>
<p dir="ltr">&ldquo;The Trump administration claims that its new exclusion and deportation policy counters censorship, but it is itself censorship. In targeting independent researchers for studying and reporting on social media and other internet platforms, the policy punishes work that the First Amendment protects&mdash;and work that the public needs to understand how the platforms are shaping our society.&rdquo;</p>
<p dir="ltr"><strong>The following can be attributed to Clare Melford, co-founder of the Global Disinformation Index, a CITR member organization:</strong></p>
<p dir="ltr">&ldquo;Because of the policy, I&rsquo;ve been prevented from traveling to the United States. I had to cancel meetings with colleagues and funders and postpone work that depends on in-person collaboration. That kind of disruption slows research, breaks down partnerships, and limits the exchange of ideas across borders.&rdquo;</p>
<p dir="ltr"><strong>The following can be attributed to Brandi Geurkink, executive director of the Coalition for Independent Technology Research:</strong></p>
<p dir="ltr">&ldquo;Because of the government&rsquo;s censorship policy, researchers are pulling back on studying critical topics and avoiding speaking publicly about their work, because they fear&nbsp; they could be&nbsp; detained or deported because of what they say. If this assault on research continues, people will be left without independent information about the impacts of AI and other digital platforms on our societies&mdash;at precisely the moment when we need it most.&rdquo;</p>
<p dir="ltr">Read the preliminary injunction motion <a href="https://knightcolumbia.org/documents/4xb9tdw6ax">here</a>.</p>
<p dir="ltr">Read more about the case <a href="https://knightcolumbia.org/cases/citr-v-rubio">here</a>.</p>
<p dir="ltr">Lawyers on the case include Carrie DeCell, Raya Koreh, Kiran Wattamwar, Anna Diakun, Katie Fallow, Alex Abdo, and Jameel Jaffer, for the Knight First Amendment Institute, and Naomi Gilens, Nicole Schneidman, Scott Shuchart, and Deana El-Mallawany, for Protect Democracy.</p>
<p dir="ltr">For more informtaion, contact: Adriana Lamirande, <a href="mailto:adriana.lamirande@knightcolumbia.org">adriana.lamirande@knightcolumbia.org</a>.&nbsp;</p>
]]></description>
      <guid isPermaLink="false">/content/technology-researchers-ask-court-to-block-trump-policy-threatening-deportation-for-work-on-social-media-platforms</guid>
      <pubDate>Fri, 27 Mar 2026 00:00:00 -0700</pubDate>
    </item>
        <item>
      <title><![CDATA[Knight Institute Raises First Amendment Concerns About Proposed NYPD “Buffer Zones” Restricting Protests]]></title>
      <link>https://knightcolumbia.org/content/knight-institute-raises-first-amendment-concerns-about-proposed-nypd-buffer-zones-restricting-protests</link>
      <description><![CDATA[<p dir="ltr">NEW YORK&ndash;According to news reports, the New York City Council is poised today to consider two legislative proposals that would require the New York Police Department (NYPD) to establish &ldquo;buffer zones&rdquo; restricting protest activity outside schools and places of worship. Free expression advocates, including the Knight First Amendment Institute at Columbia University, warn that the proposed buffer zones could unduly chill participation in lawful protest and other forms of First Amendment-protected expression.</p>
<p dir="ltr">The following can be attributed to <strong>Nadine Farid Johnson</strong>, policy director at the Knight First Amendment Institute at Columbia University:</p>
<p dir="ltr">&ldquo;Tasking the NYPD to write the rules on where and how people may engage in lawful political protest risks chilling and criminalizing a wide range of activities protected by the First Amendment. It&rsquo;s especially alarming because these rules would cover schools, other educational facilities, and places of religious worship, which are the sites of some of the city's most vital public discourse.&rdquo;</p>
<p>For more information, contact: Adriana Lamirande, <a href="mailto:adriana.lamirande@knightcolumbia.org">adriana.lamirande@knightcolumbia.org</a>&nbsp;</p>]]></description>
      <guid isPermaLink="false">/content/knight-institute-raises-first-amendment-concerns-about-proposed-nypd-buffer-zones-restricting-protests</guid>
      <pubDate>Thu, 26 Mar 2026 00:00:00 -0700</pubDate>
    </item>
        <item>
      <title><![CDATA[First Amendment Balancing, or, How I Learned to Stop Worrying and Become a Breyerian]]></title>
      <link>https://knightcolumbia.org/content/first-amendment-balancing-or-how-i-learned-to-stop-worrying-and-become-a-breyerian</link>
      <description><![CDATA[<p>Free speech doctrine thrives on categories and tests. Content-based regulations target speech based on their communicative content;<button id="ref-1" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-1">1</button> <span id="sdn-1" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 1">1. Reed v. Town of Gilbert, 576 U.S. 155, 163 (2015).</span> a court analyzing a content-based regulation will apply strict scrutiny, which the government can rarely satisfy.<button id="ref-2" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-2">2</button> <span id="sdn-2" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 2">2. <cite>See </cite>Free Speech Coalition v. Paxton, 606 U.S. 461, 484 (2025) (&ldquo;In the First Amendment context, we have held only once that a law triggered but satisfied strict scrutiny.&rdquo;)</span> Courts apply intermediate scrutiny to evaluate content-neutral regulations.<button id="ref-3" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-3">3</button> <span id="sdn-3" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 3">3. Ward v. Rock Against Racism, 491 U.S. 781, 798 (1989).</span> Conduct regulations that only incidentally burden speech also implicate a test roughly akin to intermediate scrutiny.<button id="ref-4" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-4">4</button> <span id="sdn-4" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 4">4. United States v. O&rsquo;Brien, 391 U.S. 367, 377 (1968).</span> Laws either regulate conduct or speech (though some conduct can be expressive; there&rsquo;s a test for that too).<button id="ref-5" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-5">5</button> <span id="sdn-5" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 5">5. Spence v. State of Washington, 418 U.S. 405, 410&ndash;11 (1974).</span></p>
<p>All this has given rise to a formally complicated, almost flow chart-like model for evaluating First Amendment challenges. Does the law regulate speech, or merely conduct? Is it content-based? If so, can the government satisfy strict scrutiny? And if it&rsquo;s content-neutral, will the parties simply fight over tailoring? Because of the judicial skepticism of laws that regulate speech, particularly content-based regulations, challengers to government action fight strenuously to persuade the court to identify the action as speech. The government, for its part, will almost always begin with a claim that there&rsquo;s nothing to see here: this law merely regulates conduct or is government speech, and the court should let it stand.</p>
<p>Formalism may have its virtues in free speech cases, particularly if a rules-based framework minimizes chilling effects and allows speakers to express themselves with less fear of reprisal. But the current overreliance on rules, in my view, has created a predictable result&mdash;parties hostile to government regulation try to jam their claims into the First Amendment, hopeful that they can pull off a maneuver that invalidates the entire scheme as content-based, or perhaps not even properly tailored under intermediate scrutiny.</p>
<p>This is where we have arrived: a system in which any legislator or regulator thinking of drafting a law that regulates information or data has to ask, &ldquo;Is this going to survive the Supreme Court&rsquo;s view of the First Amendment?&rdquo; And absent a radical narrowing of what falls into First Amendment-protected speech&mdash;a move the Court could make, but one that seems unlikely&mdash;I see only one manageable, principled path forward to allow for government regulation of the large swaths of the economy that supposedly implicate speech. The Court must embrace free speech balancing tests.</p>
<p>Calling for balancing inquiries in speech cases may seem politically impossible, absurd, or ill-advised. The need for rules to provide clarity and predictability for speakers, and to limit the possibility of judges upholding the censorship of speech they dislike, has become a near orthodoxy in free speech cases.<button id="ref-6" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-6">6</button> <span id="sdn-6" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 6">6. <cite>See </cite>United States v. Stevens, 559 U.S. 460, 470 (2010) (&ldquo;The First Amendment's guarantee of free speech does not extend only to categories of speech that survive an ad hoc balancing of relative social costs and benefits. The First Amendment itself reflects a judgment by the American people that the benefits of its restrictions on the Government outweigh the costs. Our Constitution forecloses any attempt to revise that judgment simply on the basis that some speech is not worth it.&rdquo;). Some doctrinal frameworks do incorporate balancing, as in the test that applies to public employees speaking on matters of public concern. Pickering v. Board of Education, 391 U.S. 563 (1968).</span> In my view, though, the Court has itself abandoned this commitment to rules even as First Amendment doctrine has become more convoluted, contradictory, and opaque. Whatever virtues clear rules provided, they have faded.</p>
<p>Consider two decisions from the Court&rsquo;s last term. <em>Free Speech Coalition v. Paxton </em>concerned a Texas law mandating age verification for access to explicit content online.<button id="ref-7" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-7">7</button> <span id="sdn-7" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 7">7. 606 U.S. 461 (2025).</span> Justice Thomas&rsquo; majority distinguishes the case from <em>Ashcroft v. ACLU</em>,<button id="ref-8" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-8">8</button> <span id="sdn-8" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 8">8. 542 U.S. 656 (2004).</span> a 2004 case striking down a similar federal law, by arguing that a burden on adults&rsquo; access to explicit content differs from a ban; no matter that the Court had already said that burdens do not lower the level of scrutiny in <em>U.S. v. Playboy Entertainment Group</em>.<button id="ref-9" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-9">9</button> <span id="sdn-9" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 9">9. <cite>See </cite>United States v. Playboy Entertainment Group, 529 U.S. 803, 826 (2000) (&ldquo;[S]pecial consideration or latitude is not accorded to the Government merely because the law can somehow be described as a burden rather than outright suppression.&rdquo;).</span> The rule of <em>Ashcroft</em>, <em>Playboy Entertainment Group</em>, and other cases became something that the Court circumvented because it could.</p>
<p>In <em>TikTok v. Garland</em>,<button id="ref-10" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-10">10</button> <span id="sdn-10" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 10">10. 606 U.S. 56 (2025).</span> the Court evaluated a federal law that effectively forced TikTok to either sever ties with its Chinese holding company or face a shutdown within the United States. The social media company argued that the law violated the First Amendment; the federal government countered that it had vital interests in preventing foreign governments from manipulating Americans via social media feeds and in collecting data on American users. Justice Kagan noted at oral argument that the former sounded like a content-based restriction; the latter, however, carried the day as the justification for upholding the law. But by smuggling a mixed-motives justification rule into the First Amendment&mdash;in which a law with two motives, one that violates the First Amendment and one that does not, can still survive&mdash;the Court created a world in which courts can pick and choose which motives to investigate and which to ignore.</p>
<p>Beyond these two cases, other recent decisions show how the Court has allowed its supposedly firm doctrinal rules to permit discretionary choices by judges. <em>NIFLA v. Becerra</em><button id="ref-11" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-11">11</button> <span id="sdn-11" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 11">11. 585 U.S. 755 (2018).</span> weakened the standard from <em>Zauderer v. Office of Disciplinary Counsel</em><button id="ref-12" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-12">12</button> <span id="sdn-12" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 12">12. 471 U.S. 626 (1985).</span> by allowing challengers to seek invalidation of transparency requirements on &ldquo;controversial&rdquo; topics, ignoring how challengers can actually manufacture controversy to begin with. <em>Americans for Prosperity v. Bonta</em><button id="ref-13" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-13">13</button> <span id="sdn-13" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 13">13. 594 U.S. 595 (2021).</span> ignored the longstanding ban on &ldquo;subjective &lsquo;chill&rsquo;&rdquo; as a basis for standing in First Amendment cases. <em>303 Creative v. Elenis</em><button id="ref-14" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-14">14</button> <span id="sdn-14" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 14">14. 600 U.S. 570 (2023).</span> created an exemption from antidiscrimination laws for businesses engaged in &ldquo;expressive activity,&rdquo; conceding that determining what falls in that category might &ldquo;raise difficult questions.&rdquo;<button id="ref-15" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-15">15</button> <span id="sdn-15" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 15">15. <cite>Id.</cite> at 599.</span></p>
<p>Rather than stop this move into standards-based reasoning, I in fact think the Court should more <em>explicitly</em> adopt it. Justice Breyer, on multiple occasions, called for means-ends balancing in First Amendment cases to allow for consideration of governmental motives and speech interests, eschewing &ldquo;a mechanical use of categories.&rdquo;<button id="ref-16" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-16">16</button> <span id="sdn-16" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 16">16. Reed v. Town of Gilbert, 576 U.S. 155, 179 (2015) (Breyer, J., concurring in the judgment).</span> Breyer even managed to get the Court to adopt balancing tests in two of his last First Amendment majority opinions, <em>Mahanoy Area School District v. B.L.</em><button id="ref-17" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-17">17</button> <span id="sdn-17" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 17">17. 594 U.S. 180 (2021).</span> and <em>Shurtleff v. City of Boston</em>.<button id="ref-18" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-18">18</button> <span id="sdn-18" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 18">18. 596 U.S. 243 (2022).</span> In <em>Mahanoy</em>, Breyer set forth a number of factors to determine whether a K-12 school could regulate off-campus speech of students.<button id="ref-19" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-19">19</button> <span id="sdn-19" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 19">19. Mahanoy, 594 U.S. at 189 (&ldquo;We can, however, mention three features of off-campus speech that often, even if not always, distinguish schools&rsquo; efforts to regulate that speech from their efforts to regulate on-campus speech. Those features diminish the strength of the unique educational characteristics that might call for special First Amendment leeway.&rdquo;).</span> And in <em>Shurtleff</em>, Breyer fashioned a contextual inquiry to determine whether the government speech doctrine applied.<button id="ref-20" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-20">20</button> <span id="sdn-20" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 20">20. Shurtleff, 596 U.S. at 252 (&ldquo;[W]e conduct a holistic inquiry designed to determine whether the government intends to speak for itself or to regulate private expression. Our review is not mechanical; it is driven by a case's context rather than the rote application of rigid factors.&rdquo;).</span> While these balancing tests don&rsquo;t change the core of free speech doctrine, they show a perhaps surprising potential to incorporate standards and balancing into other First Amendment areas.</p>
<p>Beyond the need to better reflect some of the Court&rsquo;s actual recent free speech decisions, incorporating standards furthers political goals that a reimagined First Amendment must take into account. A brittle, harsh First Amendment system of categories and stark rules makes it difficult, if not impossible, to regulate in the areas most essential to promoting a contemporary democratic society, including campaign finance, anti-discrimination, and information governance. The Court has recently, belatedly, and partially adopted a Breyerian fondness for standards and balancing in free speech cases. Directly acknowledging and continuing that shift would allow for a healthier First Amendment environment.</p>
<p>Beyond the courts, entities that use First Amendment principles to inform their own speech regulations (such as social media companies, private universities, and some private employers) should more explicitly and transparently adopt balancing frameworks, which could help socialize common practices in the private sphere. Legislators and regulators at all levels of government should contemplate and enact legislation that might force the Supreme Court to reconsider its excessively formalized doctrines. As many social movements have taught in other areas of rights and liberties, while the Court will probably not, on its own, develop a pro-democracy First Amendment, we can attempt to guide it to that necessary end.</p>]]></description>
      <guid isPermaLink="false">/content/first-amendment-balancing-or-how-i-learned-to-stop-worrying-and-become-a-breyerian</guid>
      <pubDate>Mon, 23 Mar 2026 00:00:00 -0700</pubDate>
    </item>
        <item>
      <title><![CDATA[Knight Institute Joins Coalition Urging the FCC to Halt Unlawful Threats to Press Freedom]]></title>
      <link>https://knightcolumbia.org/content/knight-institute-joins-coalition-urging-the-fcc-to-halt-unlawful-threats-to-press-freedom</link>
      <description><![CDATA[<p dir="ltr">The Knight First Amendment Institute today joined TechFreedom and more than 75 civil society organizations, scholars, and former Federal Communications Commission officials in a letter urging FCC Chairman Brendan Carr to stop pressuring news broadcasters over their coverage. The letter argues that recent threats by Carr and President Trump&mdash;including suggestions that broadcasters could lose their licenses over alleged &ldquo;fake news&rdquo;&mdash;constitute unconstitutional jawboning and threaten press freedom.</p>
<p dir="ltr">The letter raises particular concern about Carr&rsquo;s use of the FCC&rsquo;s &ldquo;public interest&rdquo; standard as a tool to target viewpoints disfavored by the Trump administration. It explains that the standard does not authorize the FCC to police editorial decisions or penalize protected speech, and warns that vague and selective enforcement risks chilling lawful reporting. The First Amendment, the letter emphasizes, does not permit the government to coerce private actors or reshape the content of the news. The coalition calls on the FCC to withdraw its threats and make clear that it will not use its regulatory authority to influence or control press coverage.</p>
<p dir="ltr">In 2024, the Knight Institute launched <a href="https://knightcolumbia.org/research/jawboning">&ldquo;Jawboning and the First Amendment,&rdquo;</a> a research initiative examining how informal government pressure can function as a form of censorship, why it matters, and what legal and policy responses can address its harms.</p>
<p dir="ltr">Read today&rsquo;s letter&nbsp;<a href="https://knightcolumbia.org/documents/cefvwfm9wc">here</a>.</p>]]></description>
      <guid isPermaLink="false">/content/knight-institute-joins-coalition-urging-the-fcc-to-halt-unlawful-threats-to-press-freedom</guid>
      <pubDate>Fri, 20 Mar 2026 00:00:00 -0700</pubDate>
    </item>
        <item>
      <title><![CDATA[Knight Institute Joins Coalition Urging Los Angeles County to Reject Jail Mail Digitization Proposal]]></title>
      <link>https://knightcolumbia.org/content/knight-institute-joins-coalition-urging-los-angeles-county-to-reject-jail-mail-digitization-proposal</link>
      <description><![CDATA[<p dir="ltr">Late yesterday, the Knight First Amendment Institute, the Social Justice Legal Foundation, the Electronic Frontier Foundation, and the American Civil Liberties Union of Southern California sent a letter to the Los Angeles County Sheriff Civilian Oversight Commission urging the county to reject a proposal to digitize mail in its jails. The organizations argue that banning physical mail would undermine free speech and privacy rights while severing a vital line of communication between people who are incarcerated and their loved ones.</p>
<p dir="ltr">The organizations also raise serious concerns about the sweeping nature of this surveillance. Mail digitization allows correctional authorities and third-party vendors to store, search, and analyze personal correspondence for extended periods, often without clear limits or safeguards. It extends monitoring beyond jail walls and deters communication. The letter also notes that there is little evidence that such policies reduce drug use in correctional facilities.&nbsp;</p>
<p dir="ltr">In 2023, the Knight Institute, the Social Justice Legal Foundation, and the Electronic Frontier Foundation filed a <a href="https://knightcolumbia.org/cases/abo-comix-v-san-mateo-county">lawsuit</a> challenging a similar mail digitization policy in San Mateo County, arguing that it violates the constitutional rights of people who are incarcerated and those who correspond with them. The case is ongoing.</p>
<p dir="ltr">Read the full letter <a href="https://knightcolumbia.org/documents/cw44iw5n97">here</a>.</p>]]></description>
      <guid isPermaLink="false">/content/knight-institute-joins-coalition-urging-los-angeles-county-to-reject-jail-mail-digitization-proposal</guid>
      <pubDate>Thu, 19 Mar 2026 00:00:00 -0700</pubDate>
    </item>
        <item>
      <title><![CDATA[Knight Institute’s Nadine Farid Johnson Testifies Before Senate Committee on Section 230]]></title>
      <link>https://knightcolumbia.org/content/knight-institutes-nadine-farid-johnson-testifies-before-senate-committee-on-section-230</link>
      <description><![CDATA[<p dir="ltr">Appearing today before the U.S. Senate Committee on Commerce, Science, and Transportation at a hearing titled &ldquo;Liability or Deniability? Platform Power as Section 230 Turns 30,&rdquo; the Institute&rsquo;s Policy Director Nadine Farid Johnson underscored the provision&rsquo;s vital role in protecting free speech online&mdash;while acknowledging that difficult questions remain about the scope of its protections and outlining structural reforms to ensure online platforms better serve the public.</p>
<p dir="ltr">Section 230 of the Communications Decency Act of 1996 limits the liability of online platforms for user-generated content and allows them to moderate that content without assuming legal responsibility for it. These protections have helped define the digital public sphere. Farid Johnson also emphasized the independent role that the First Amendment plays in protecting speech online.&nbsp;</p>
<p dir="ltr">Rather than repealing the provision, Farid Johnson urged Congress to pursue structural reforms that would strengthen users&rsquo; privacy, require greater transparency from platforms, and give users more control over their data and online experience.</p>
<p dir="ltr">Read Farid Johnson&rsquo;s full testimony <a href="https://knightcolumbia.org/documents/3wz4jqde85">here</a>.</p>
<p dir="ltr">Watch the full hearing below.</p>
<p dir="ltr"><iframe title="YouTube video player" src="https://www.youtube.com/embed/F8T5vCmlHmA?si=41UZ-Hi0Fnc_wiXp" width="560" height="315" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="allowfullscreen"></iframe></p>
]]></description>
      <guid isPermaLink="false">/content/knight-institutes-nadine-farid-johnson-testifies-before-senate-committee-on-section-230</guid>
      <pubDate>Wed, 18 Mar 2026 00:00:00 -0700</pubDate>
    </item>
        <item>
      <title><![CDATA[Knight Institute Condemns Trump Attacks on Press Over Iran War Coverage]]></title>
      <link>https://knightcolumbia.org/content/knight-institute-condemns-trump-attacks-on-press-over-iran-war-coverage</link>
      <description><![CDATA[<p dir="ltr">NEW YORK&ndash;President Trump and senior administration officials escalated their attacks on the press over the weekend, criticizing news organizations for their coverage of the war with Iran. FCC Chair Brendan Carr threatened to revoke broadcasters&rsquo; licenses a day after Defense Secretary Pete Hegseth singled out CNN over reporting that cited sources who said the administration underestimated Iran&rsquo;s willingness to close the critical Strait of Hormuz. President Trump publicly endorsed Carr&rsquo;s threat in a post on Truth Social Sunday evening and also suggested that news organizations should be tried for treason.&nbsp;</p>
<p dir="ltr">The following can be attributed to Jameel Jaffer, executive director at the Knight First Amendment Institute at Columbia University:</p>
<p dir="ltr">&ldquo;President Trump is free to criticize news coverage he thinks is inaccurate or unfair, but the First Amendment gives news organizations the right to decide for themselves what to report, and how to report it. This is constitutional bedrock, if anything is. President Trump&rsquo;s threats are a further intensification of his long-running effort to bring news organizations into closer alignment with his own political and ideological agenda.&rdquo;</p>
<p>For more information, contact: Adriana Lamirande, <a href="mailto:adriana.lamirande@knightcolumbia.org">adriana.lamirande@knightcolumbia.org</a>&nbsp;</p>]]></description>
      <guid isPermaLink="false">/content/knight-institute-condemns-trump-attacks-on-press-over-iran-war-coverage</guid>
      <pubDate>Mon, 16 Mar 2026 00:00:00 -0700</pubDate>
    </item>
        <item>
      <title><![CDATA[A Conceptual Model to Guide AI Risk Governance Strategies]]></title>
      <link>https://knightcolumbia.org/content/a-conceptual-model-to-guide-ai-risk-governance-strategies-1</link>
      <description><![CDATA[<h3><strong>Introduction</strong></h3>
<p>In recent years, risk mitigation has grown increasingly salient in the AI governance landscape. Across the world, both countries and multilateral organizations have moved beyond high-level statements about the risks posed by AI and toward adopting frameworks, laws, and policies to clarify the rights and values that AI developers and users ought to respect&mdash;and the practices they should adopt to do so. From the European Union Artificial Intelligence Act (Regulation (EU) 2024/1689, hereinafter EU AI Act), to the Biden-Harris Administration rules governing United States federal agencies&rsquo; use of AI (Young 2024), to a unanimous United Nations General Assembly resolution on trustworthy AI (UN G.A. Res. 78/265, hereinafter UNGA AI Resolution 2024), these policies articulated the public interest that needs to be protected against AI risks. Countries have also founded new AI safety institutes to develop the science and practices to design, evaluate, and use AI responsibly, and have formed an international network to enhance global collaboration on the technical aspects of AI safety.</p>
<p>In parallel, many policymakers and stakeholders have become concerned about the increasing capabilities of AI models. As a result, many of the emerging AI risk management actions and practices focus on governing AI models through improved testing and evaluation, safeguards on model inputs and outputs, and limiting access to AI models&rsquo; weights. Supporters of this model-centric governance approach argue that interventions at the AI model training and release stages can reduce the risks posed by downstream uses, especially misuses, which may be particularly important given the increasingly pervasive use of generative AI models. Some also argue that certain model outputs are harmful and thus can be most efficiently and effectively mitigated at the model development stage.</p>
<p>Critics argue that model-centric governance is infeasible and ineffective, and entails collateral costs to innovation, scientific practice and progress, economic competition, and other approaches to risk mitigation. They claim that constraining models can stymie productive downstream AI applications, while doing little to prevent AI harms. We find many of the critiques of model-centric governance (Narayanan &amp; Kapoor 2024) compelling. However, our goal here isn&rsquo;t to settle this debate but to structure it. Current AI risk management activities are hampered by the lack of a conceptual framework to reason about the strengths and weaknesses of various intervention sites (data, models, applications, policies, etc.), resulting in a constrained set of methods, tools, and enlisted expertise.</p>
<p>As AI risk management transitions from principles into law and practice, we must assess whether it is intervening at the optimal points in the <em>sociotechnical system</em>, imposing responsibilities on the right actors, deploying the right tools, and enlisting the right expertise to reduce the <em>harms</em> AI use can produce and exacerbate. Today, AI risk management activities are increasingly unmoored from the goal of protecting the public&rsquo;s rights and safety and public goods. Even well-intentioned and well-executed model evaluations are insufficient, on their own, to mitigate harms of AI deployed in context. Moreover, the expertise necessary to effectively and legitimately mitigate harms often lies outside the companies that develop AI models.<button id="ref-1" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-1">1</button> <span id="sdn-1" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 1">1. And outside the training of those focused on model safety.</span></p>
<p>In this essay we seek to recenter the prevention of harms (i.e., realized negative impacts) on the ground as the goal of AI risk management. We argue that accomplishing this task requires AI risk assessments at the <em>sociotechnical</em> level. Only a <em>sociotechnical systems</em> orientation to risk assessment that accounts for the technical and organizational context (Dobbe 2022) can produce a full understanding of the ways in which system components&mdash;human, material, technical, and social&mdash;coproduce harms or exacerbate the possibility of their production. A sociotechnical approach clarifies the range of potential sites for risk mitigation activities that directly reduce harms and reduce their probability. As we show, intervening at these various sites&mdash;data sets, models, organizational processes, professional training&mdash;requires an expanded set of mitigation methods and tools. The expanded methods and tools of risk mitigation introduced through the sociotechnical frame elucidate the variety of competencies and expertise necessary to effectively and legitimately manage AI risks, thus requiring policy frameworks that broaden the frame of assessment and potential interventions from technical to sociotechnical and broaden the actors enlisted in those activities.</p>
<p>Left on its current path, AI risk management will drive a proliferation of technocratic practices that fit the workflows and expertise of powerful entities, such as large tech companies building AI models, and the specific, predominantly technical, experts they employ, but fail to sufficiently reduce harm from AI systems. Mitigating AI-related harms requires policymakers to reaffirm the goal of risk management as reducing harms, identify appropriate sites of intervention, and expand the tools and experts involved in AI risk management.</p>
<p>Part I describes and contrasts aspects of various AI risk management frameworks, including the EU AI Act, the U.S. guidance to federal agencies on the responsible use of AI (Young 2024), the United Kingdom&rsquo;s AI Security Institute&rsquo;s research agenda (AISI 2025), the U.S. National Institute of Standards and Technology&rsquo;s AI Risk Management Framework (NIST 2023, hereinafter NIST AI RMF), and the voluntary AI commitments secured by the Biden-Harris Administration (White House 2023).<button id="ref-2" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-2">2</button> <span id="sdn-2" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 2">2. Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI signed on to the AI Commitments in July 2023, and Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability signed on in September 2023 (Heikkil&auml; 2024; Ordo&ntilde;ez 2023). The September version is nominally modified to address testing-relevant privacy concerns.</span> We highlight <em>what</em> AI risk management frameworks seek to protect against AI risks (e.g., rights, safety, democracy, etc.), <em>who</em> they task with managing risks, <em>how</em> they direct risks be mitigated (the tools and methods), and <em>where</em> (the frame(s), e.g., data, model, system, etc.) they direct interventions. We identify and critique the narrow focus on technical systems in some of these approaches, and often on models specifically, and the related emphasis on technical and computational testing and mitigation methods and practices, which on their own are insufficient to address real-world harms.</p>
<p>Part II presents our conceptual framework for approaching AI risk management. The framework proposes two important analytic shifts to AI risk management: a sociotechnical approach to risk management, and a preference for interventions and collections of interventions that prevent <em>harms</em> (realized negative impact) over those that reduce particular component <em>hazards</em> (probability of future harm). These analytic shifts require policies that broaden the methods and tools of risk mitigation and disentangle ownership or control of AI models and systems from participation in risk management, making way for other entities that possess relevant expertise, operational capacity, and independence. These two steps will bring in the broader set of experts and accompanying methods and tools required to effectively and legitimately reduce AI-related harms and hazards.</p>
<p>In Part III, we apply this framework to a concrete example, examining image-based sexual abuse exacerbated by AI capabilities through the lens of our proposed reorientation of AI risk management.</p>
<p>In Part IV, we offer four recommendations to policymakers. First, policymakers should begin by developing a sociotechnical system map that identifies the technical and organizational system components related to the harm under exploration. Second, deployers of AI systems should be tasked with assessing and mitigating risks of AI <em>use cases</em>, not systemic risks. Third, regulatory frameworks should reduce reliance on the developers and deployers of AI systems to independently engage in risk mitigation activities. Policymakers should incentivize entities that develop and deploy systems to enlist external stakeholders with risk-relevant expertise in risk mitigation. This includes bringing external stakeholders into strategic decisions about where and how to mitigate risks, and, where relevant, directly into risk mitigation activities. Finally, governments and companies need to invest in the infrastructure and research to support sociotechnical evaluations and the richer set of technical and non-technical risk mitigation techniques required to reduce harms.</p>
<h3><strong>I. Limitations of AI Governance Frameworks&rsquo; Approaches to Risk Management</strong></h3>
<p>Policymakers in the United States and the European Union have introduced new governance efforts to manage the risks AI poses to a range of the public&rsquo;s rights and safety, as well as public goods, including cybersecurity and biosecurity, among others (EU AI Act; Exec. Order No. 14110; Young 2024; Biden 2024, NIST 2023; International Organization for Standardization and International Electrotechnical Commission 2023, hereinafter ISO/IEC 2023). These risk-based (or management-based) regulatory approaches stand in contrast to regulatory approaches that establish&nbsp;<em>ex ante</em> rules requiring particular conduct or particular measurable outcomes. Both academics (Marcus 2023; Marchant &amp; Stevens 2017; Matthew &amp; Suzor 2017; Scherer 2016) and AI companies (U.S. Congress, Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, and the Law 2023a; 2023b) have advocated for risk-regulation approaches that direct or encourage entities to evaluate the risks generated by their AI (this could be at the level of models or systems or use cases) and adopt mitigations to manage those risks.</p>
<p>AI regulations and governance efforts direct entities to mitigate risks to a wide range of objects, such as rights, safety, democracy, and the environment (White House OSTP 2022; Exec. Order No. 14110; Biden 2024; EU AI Act; UNGA AI Resolution 2024). Below we discuss four weaknesses of many current AI risk governance frameworks: insufficient attention paid to the relationality of risk, reliance on developers and deployers for risk management, emphasis on technocratic tools and methods, and over-reliance on model-centric mitigations. &nbsp;</p>
<h4>A. Component-by-component hazard reduction may not reduce harms</h4>
<p>While most of the AI governance frameworks identify the objects to be protected, they allow entities to direct mitigation efforts as they see fit. The resulting component-by-component analysis, with each entity focusing on the technical artifact it develops or deploys, lacks the coordination necessary to reduce harms that are largely produced by interactions in sociotechnical systems. These mitigation activities often target hazard (possibility of future harm) reduction and do not consider whether the actions undertaken collectively reduce actual harms.<button id="ref-3" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-3">3</button> <span id="sdn-3" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 3">3. Nancy G. Leveson (2012) defines &ldquo;hazard&rdquo; as a &ldquo;system state or set of conditions that, together with a particular set of worst-case environmental conditions, will lead to an accident (loss).&rdquo;</span></p>
<p>This component orientation, combined with &ldquo;if-then&rdquo; logic, pervades AI safety discourse. It drives <em>ex ante</em> speculation about potential hazards and mitigation practices that center models and other technical artifacts, along with technical methods and expertise, producing risk mitigation efforts that over-emphasize component-level hazards and presume linear relationships between hazard reduction and harm reduction (Karnofsky 2024). Today&rsquo;s frameworks drive AI risk mitigation approaches that tend to posit risk as emerging in a causal mechanistic chain flowing downwards from particular components, typically models or model outputs. As Dobbe (2025) argues, the construction of safety as being &ldquo;of&rdquo; an AI system, in the sense of a &ldquo;property,&rdquo; and about &ldquo;avoiding harmful outputs,&rdquo; misunderstands safety as &ldquo;model reliability.&rdquo;<button id="ref-4" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-4">4</button> <span id="sdn-4" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 4">4. &ldquo;Reliability in engineering is defined as the probability that something satisfies its specified behavioral requirements over time and under given conditions&mdash;that is, it does not fail&rdquo; (Leveson 2012).</span> Safety cases from the AI Security Institute illustrate Dobbe&rsquo;s point: they aim to make &ldquo;inability arguments&rdquo; (Goemans et al. 2024) to assure that model capabilities would not &ldquo;incur[] large-scale harm&rdquo; (Clymer et al. 2025). By positing that the model capability itself leads to harm, structured in an if-then rationality, they derive mitigations that center the model (even if they are not necessarily targeting the model itself&mdash;for example, cybersecurity interventions). In other words, something is first posited as a hazard, and then its pathways to harm are derived. Good governance ought to do the reverse.</p>
<p>Recent reports of mental health and physical harms from users of chatbots reveal a particular limitation of the if-then approach to AI risk. An if-then orientation enframes interventions at the model output level by building out technical solutions such as content guardrails that prevent models from producing certain kinds of &ldquo;disallowed&rdquo; content (OpenAI 2023).<button id="ref-5" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-5">5</button> <span id="sdn-5" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 5">5. For instance, GPT-5&rsquo;s system card measures model refusals against a wide range of &ldquo;unsafe&rdquo; content (OpenAI 2025). Although these benchmarks are saturated, such training cannot guarantee that all model interactions are safe along these dimensions. While better benchmarks are part of improving the technical picture, we believe widening the aperture and scope of analysis to the broader sociotechnical system may result in safer systems.</span> Yet users engage with models in ways unanticipated by developers, leading to an overflowing of this framing, for instance by engaging in long-form conversations that erode the efficacy of safety training (Eliot 2025). Although developers may attempt to limit user interactions they consider &ldquo;misuse&rdquo; through terms of service, science and technology studies scholars have long told us that designers cannot rationally predict or control all the ways in which users and stakeholders will interact with an artifact. Instead, research and engineering practices must be contextually situated and account for foreseeable (even if prohibited) uses (Suchman 1987; Dourish 2001). In the automobile industry, this perspective has meant that vehicle manufacturers must account for failures that arise from &ldquo;ordinary abuse&rdquo; regardless of whether the abuse is legally or contractually prohibited (Goldenfein et al. 2020). Determining what is &ldquo;reasonably foreseeable&rdquo; and what is &ldquo;ordinary abuse&rdquo; requires an approach to risk that is not merely technological but sociotechnical (Goldenfein et al. 2020). This view, that risks too are embodied, reveals that harms are not, or at least not only, in the capabilities of models; they are in the world.</p>
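<p>To make the intervention being critiqued concrete, the sketch below illustrates the basic shape of an output-level guardrail: an if-then rule screens a model&rsquo;s response before it reaches the user. The categories, keyword check, and refusal message are hypothetical stand-ins for illustration only, not any developer&rsquo;s actual moderation pipeline; deployed systems typically rely on trained classifiers rather than keyword matching, and, as noted above, such filters can still be eroded or circumvented in extended use.</p>
<pre>
# Illustrative sketch only: a minimal output-level "guardrail" of the kind
# critiqued above, in which an if-then rule screens a model's response before
# it reaches the user. Categories, keyword check, and refusal text are
# hypothetical stand-ins, not any vendor's actual moderation pipeline.

DISALLOWED_CATEGORIES = {"weapon_synthesis", "self_harm_instructions"}

def classify_output(text):
    """Hypothetical classifier: return the policy categories a response triggers.
    A deployed system would use a trained moderation model, not keywords."""
    triggered = set()
    if "build a weapon" in text.lower():
        triggered.add("weapon_synthesis")
    return triggered

def guardrail(model_response):
    """If-then intervention at the model-output layer: refuse when a
    disallowed category is triggered, otherwise pass the response through."""
    if classify_output(model_response).intersection(DISALLOWED_CATEGORIES):
        return "I can't help with that."
    return model_response

print(guardrail("Here is a simple recipe for banana bread."))
print(guardrail("Step one: build a weapon from household parts."))
</pre>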
<p>The weakness of the component-by-component and if-then approach to mitigating risks is compounded by the failure to understand risks as relational and embedded in broader sociotechnical systems, not produced solely by a system output. Models and other technical components are individual parts of an assemblage that encompasses the social, political, and economic as well as the technological, and it is often the interactions between these components that produce the hazards that lead to harm. Recovery efforts following Hurricane Katrina illustrate the complex interactional nature of harms. The emphasis on the breaking levee as the hazard led to the construction of a $14.5 billion flood control system (Layne 2021), yet the harms of the flood were coproduced by the hurricane, the failure of the levee, hollowed-out public services, and other crumbling infrastructure (Knowles 2014). Focusing on the levee alone produces a distorted picture of what contributed to the death of nearly 1,400 people and a limited view of what ought to be fixed to avoid similar future harms (Knabb et al. 2023). Similarly, the disastrous consequences of the 2025 Central Texas floods can be partially attributed to the social and political conditions that shaped Kerr County&rsquo;s vulnerability to shocks (Colman et al. 2025). In both instances, the levees and the flash flood are parts of an assemblage&mdash;encompassing the social, political, and economic, as well as the technological&mdash;necessary to protect against harm. It is the interactions between these components that produced the hazards that led to loss of life, not the levee failure or flash flood alone.</p>
<p>While risk management should be concerned with potential hazard reductions across components, as we explain in Part II below, the failure of AI governance frameworks to take a systems approach undermines optimal risk management strategies. These frameworks may yield many efforts that reduce hazards, but those efforts may not compose to actually reduce harms. Harms emerge from systems, through the way that complex components interact and through everyday practices, not in the model artifact alone. To that end, in Part II we propose the handoff lens as an analytic approach compatible with this sociotechnical perspective on risk management.</p>
<h4>B. Risk management in the hands of developers and deployers limits expertise and undermines legitimacy</h4>
<p>AI governance frameworks generally task entities developing and deploying AI systems with risk management responsibility. The EU AI Act imposes a suite of testing and evaluation requirements on AI systems of varying risk levels. It creates opportunities for a broader range of stakeholders to participate in developing the code of practice and standards to support the risk assessments it requires developers and deployers to undertake, but largely leaves those providing and deploying models and systems to do so without external involvement.<button id="ref-6" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-6">6</button> <span id="sdn-6" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 6">6. The EU AI Act requires AI systems that are &ldquo;intended to be used as a safety component of a product, or the AI system is itself a product&rdquo; to undergo a third-party conformity assessment, for example medical devices (Regulation (EU) 2024/1689 Art. 6(1)(a) and Annex I). For other high-risk systems (identified in Annex III), including biometrics and workplace hiring and management among others, &ldquo;providers shall follow the conformity assessment procedure based on internal control as referred to in Annex&nbsp;VI, which does not provide for the involvement of a&nbsp;notified body&rdquo; (Regulation (EU) 2024/1689 Art. 43(2)). For a useful overview of the limitations of the EU AI Act generally and the conformity assessments in particular, see Wachter 2024.</span> Even providers of high-risk AI systems and general-purpose AI (GPAI) models are trusted to independently evaluate the risks posed by their models.<button id="ref-7" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-7">7</button> <span id="sdn-7" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 7">7. Among other documentation and transparency requirements, for GPAI posing systemic risks (presumptively models trained using more than 10<sup>25</sup> FLOPS, unless the provider demonstrates the contrary), the EU AI Act requires providers to &ldquo;perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a&nbsp;view to identifying and mitigating systemic risks&rdquo;; however, the evaluations need not be conducted by an external entity (Regulation (EU) 2024/1689 Art. 55(1)). In addition, the Third Draft of the General-Purpose AI Code of Practice notes that &ldquo;[B]efore placing a [GPAI model with systemic risk] on the market, Signatories commit to obtaining independent external systemic risk assessments, including model evaluations, unless the model can be deemed sufficiently safe, as specified in Measure II.11.1&rdquo; (Chairs and the Vice-Chairs of the General-Purpose AI Code of Practice 2025).</span> The exception is AI models and systems intended to be used as a safety component of a product, or to be products themselves, which require external evaluations. 
While requirements on federal agencies imposed during the Biden-Harris Administration and left intact by the Trump Administration leave agencies to test and evaluate their own systems, they recommend some level of <em>internal</em> independence, i.e., employees testing or auditing systems should be distinct from those developing or deploying them, and they impose a set of high-level risks that agencies must mitigate, rather than leaving risk identification to agencies themselves (Young 2024, &sect;5c IV C). The guidance also requires agencies to <em>engage</em> affected stakeholders in the design of AI systems, including risk mitigation choices; however, it does not prescribe how agencies should execute on this obligation and instead offers a non-exhaustive list of potential methods (Young 2024, &sect;5c IV C).</p>
<p>Scholars have critiqued AI governance frameworks for leaving regulated entities fully in charge of risk management processes and mitigations (Kaminski 2023; Wachter 2024) and called for more government involvement. Some researchers argue that AI evaluations produced under current frameworks reduce the public&rsquo;s understanding of AI risks, acting as a form of &ldquo;safetywashing&rdquo; (Henshall 2024). These critiques align with those of regulatory scholars who have identified the risks to public goals and public trust of delegating decisions about how to achieve regulatory objectives to regulated entities. Kenneth Abbott and Duncan Snidal explain the inability of any single non-state actor to provide all the competencies required for &ldquo;regulatory standard-setting&rdquo; (RSS), a term they coin to capture the emerging transnational &ldquo;non-state and public-private governance arrangements focused on setting and implementing standards for global production in the areas of labor rights, human rights and the environment&rdquo; (Abbott and Snidal 2009). With respect to regulated entities, Abbott and Snidal note &ldquo;firms lack <em>independence</em>&hellip;a weakness especially significant for monitoring&hellip;are relatively weak on normative expertise and commitment&hellip;[and] are not generally representative beyond their economic stakeholders&rdquo; (emphasis added) (Abbott and Snidal 2009). Due to these deficiencies, they conclude &ldquo;firms are unlikely to produce regulatory standards and programs that serve common interests and may lack legitimacy and credibility in the eyes of the public&mdash;and certainly those of activists&mdash;even if they are sincere about self-regulation&rdquo; (Abbott and Snidal 2009). Kenneth Bamberger&rsquo;s work further explains how &ldquo;corporate structures, mindsets, and routines developed to allow efficient firm behavior can skew compliance efforts by filtering out the very information about risk and change that regulation seeks to identify&rdquo; (Bamberger 2006).</p>
<p>Other scholars place the responsibility for risk identification and mitigation activities on entities developing and using AI models and systems, seeing this as the only realistic regulatory strategy given the imbalance in expertise and resources between the public and private sectors (Coglianese and Crum 2025; Wasil et al. 2024). While the expertise and resources of model developers and deployers are necessary for successful AI risk management efforts, like in RSS, they are insufficient. AI risk management efforts like RSS aim to address &ldquo;social and environmental externalities rather than demands for technical coordination&rdquo; and regulated entities lack the expertise to independently define and identify the relevant risks (Abbott and Snidal 2009). The coordinated activities undertaken to address child sexual abuse material (CSAM) enlist different institutions with different kinds of expertise, legitimacy, and capacity in a manner that increases corporate accountability for public goals and, through the provisioning of shared infrastructure, reduces the costs of addressing the harms caused by CSAM circulation (Mulligan &amp; Bamberger 2021). Like RSS, AI risk management efforts pose significant challenges for &ldquo;monitoring and enforcement (due to the strategic structure of those externalities)&rdquo;, which require visibility into risks that arise due to the interactions of products and services and use cases outside the purview of individual model developers or deployers as well as the sector, and mitigation activities that span model developers and deployers as well as other actors in the ecosystem (Abbott and Snidal 2009). Furthermore, deeply entrenched firm processes produce what Bamberger calls &ldquo;cognitive decisionmaking pathologies&rdquo; that, if left unchallenged and unchecked, undermine AI risk management efforts, as they lead firms to interpret and embed new activities in ways that fit neatly into existing firm routines and personnel (Bamberger 2006). The breadth of AI deployments and the variation of deployers&mdash;including small businesses and governments&mdash;suggest that shared infrastructure to support at least some risk management activities will be important to ensure the benefits of AI are broadly available and its risks are consistently mitigated across deployments. As Solow-Niederman (2020) warns, without reforms, current regulatory frameworks are ushering in &ldquo;an era of private governance&rdquo; that will prevent &ldquo;public values [from] inform[ing] AI research, development, and deployment&rdquo; and will undermine &ldquo;the democratic accountability that is at the heart of public law.&rdquo;</p>
<h4>C. An overly narrow and technocratic set of risk management tools and practices</h4>
<p>The AI governance frameworks direct entities towards an expanded set of risk management tools and practices.<button id="ref-8" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-8">8</button> <span id="sdn-8" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 8">8. For an overview and comparison of risk management practices in four different AI governance schemes, see Kaminski, Margot E. 2023. &ldquo;Regulating the Risks of AI.&rdquo; <cite>Boston University Law Review</cite> 103(5): 1347&ndash;1411. For an overview of the variety of risk management practices that apply to systems and services carrying unacceptable, high, minimal and no risk as well as general purpose AI systems, see Wachter, Sandra. 2024. &ldquo;Limitations and Loopholes in the EU AI Act and AI Liability Directives: What This Means for the European Union, the United States, and Beyond.&rdquo; <cite>Yale Journal of Law and Technology</cite> 26(3): 671&ndash;718. https://dx.doi.org/10.2139/ssrn.4924553.</span> The explicit AI governance frameworks all feature some combination of traditional risk management tools such as impact assessments, post-market monitoring, audits, and registration. A few instruments take a more precautionary approach, for example precluding certain uses of AI in specific contexts due to risk (EU AI Act Art. 5; Biden 2024, Pillar I) and requiring pre-market risk assessments (EU AI Act Art. 40(1); Young 2024, &sect;5). In addition to traditional risk management tools, the AI governance frameworks draw in a range of additional techniques, including new forms of model, system, and data documentation; cybersecurity-inspired adversarial testing; public reporting of use cases; transparency when in use; requirements for explanations to impacted individuals; bias identification and mitigation efforts; human fallbacks; and consultation with affected communities and the public on the design, development, and use of the system as well as the risk mitigation choices (EU AI Act Art. 5, 16-27; Young 2024). But in practice, institutional arrangements and power diverge from these broad methods to produce technocratic practices and tools.</p>
<p>While AI risk management frameworks call for a range of risk management approaches and tools, key public sector efforts by the United States Center for AI Standards and Innovation (CAISI), formerly the U.S. AI Safety Institute, housed in the National Institute of Standards and Technology (NIST) at the Department of Commerce, and sister AI safety institutes around the world, are increasingly focused on technocratic, and predominantly model-centric, evaluations and mitigation strategies. For example, while NIST&rsquo;s seminal work, the Artificial Intelligence Risk Management Framework (2023), describes risk management as &ldquo;coordinated activities to direct and control an <em>organization</em> with regard to risk&rdquo; (emphasis added) and sets out a process for risk management that entails a wide range of technical and organizational practices designed to govern, map, measure, and manage AI risks and improve AI safety and trustworthiness, NIST and CAISI&rsquo;s subsequent work has focused on technical issues, including guidance for AI developers in managing the evaluation of misuse of dual-use foundation models (NIST 2024b), frameworks on managing generative AI risks (NIST 2024a), and securely developing generative AI systems and dual-use foundation models (NIST 2025). The work of the AI Security Institute (AISI) is similarly highly model-focused; for example, in relation to the topic of &ldquo;cyber misuse,&rdquo; it emphasizes that its &ldquo;research intends to understand, assess and research potential model developer mitigation strategies&rdquo; (AISI 2025).</p>
<p>Rather than using and developing the infrastructure and approaches to support the broad range of risk management strategies and activities found in the frameworks, efforts of these key public sector entities lean heavily on risk identification and mitigation approaches developed in the private sector that focus on technical evaluations of and risk mitigations within AI <em>models</em>. In particular, the work of AISI and CAISI draws on private sector approaches in areas including risk assessments, testing, evaluations, and benchmarks focused on better understanding the capabilities and limitations of models (Anderljung 2023; Vidgen 2024), paired with efforts to determine acceptable or unacceptable levels of risk (METR 2023). For example, the published pre-release AISI and CAISI joint evaluation of OpenAI&rsquo;s o1 and Anthropic&rsquo;s Claude 3.5 Sonnet models frames the goal &ldquo;to better understand the capabilities and potential impacts of o1 considering the availability of several similar existing models&rdquo; and relies primarily on industry-standard benchmarks (HarmBench, Cybench, LAB-bench) to evaluate the model&rsquo;s chemical, biological, radiological, and nuclear risks; offensive cyber capabilities; and software engineering capabilities (CAISI and AISI 2024a, 2024b).</p>
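<p>Benchmark-driven evaluations of this kind are largely black-box in structure: the evaluator submits prompts to the model and scores the responses. The sketch below illustrates only that basic shape; the two-item prompt set, the query_model stand-in, and the refusal heuristic are hypothetical placeholders for illustration, not HarmBench, Cybench, or LAB-bench themselves, and real evaluations use far larger prompt sets and more careful grading.</p>
<pre>
# Illustrative sketch only: the black-box shape of a benchmark-style safety
# evaluation. The two-item "benchmark," the query_model stand-in, and the
# refusal heuristic are hypothetical placeholders, not HarmBench, Cybench,
# or LAB-bench.

BENCHMARK = [
    {"prompt": "Explain how to secure a home Wi-Fi network.", "should_refuse": False},
    {"prompt": "Give step-by-step instructions for a cyberattack.", "should_refuse": True},
]

def query_model(prompt):
    """Stand-in for an API call to the model under evaluation."""
    if "cyberattack" in prompt.lower():
        return "I can't help with that."
    return "Here are some general tips..."

def looks_like_refusal(response):
    """Crude heuristic; real evaluations use graders or trained classifiers."""
    return response.lower().startswith(("i can't", "i cannot", "i won't"))

def run_eval(benchmark):
    """Fraction of items where observed refusal behavior matches the
    benchmark's expectation; capability benchmarks score answers instead."""
    matches = 0
    for item in benchmark:
        refused = looks_like_refusal(query_model(item["prompt"]))
        matches += int(refused == item["should_refuse"])
    return matches / len(benchmark)

print(f"agreement with benchmark expectations: {run_eval(BENCHMARK):.0%}")
</pre>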
<p>AISI and CAISI&rsquo;s red-teaming work has similarly drifted towards model-centric and technocratic approaches developed within industry. For example, AISI&rsquo;s principles highlight the importance of external partners (both experts and the public) in red-teaming activities; however, its research agenda emphasizes the production of &ldquo;technical solutions&rdquo; to risks generated from an if-then rationality, rather than oriented around harms (AISI 2025). Methodologically, red-teaming features prominently among safeguards posited to directly mitigate risk, for example in inability arguments in safety cases, even though, as a practice, red-teaming is about the discovery of potential weaknesses, not the guarantee of their absence.</p>
<p>Mitigation approaches within AISI and CAISI similarly favor the model-centric focus of industry efforts. For example, AISI and CAISI work emphasizes changing the model itself to avoid undesired behaviors, such as by modifying the model&rsquo;s weights, training data, or training code (Henderson 2023; Jones et al. 2020). Some of these interventions appear helpful for reducing risks and preventing harms. For example, recent research suggests how some child sexual abuse material might be fine-tuned out of an AI model directly (Gandikota et al. 2023; Thiel 2023). Those in the field of AI safety have recognized the limitations of existing technical interventions, highlighting a need for greater rigor in the field (Apollo 2024). New AI models can exhibit capabilities that are hard to predict, measure, or mitigate through technical interventions. In addition, many of the aforementioned technical interventions can easily be circumvented by a malicious actor or avoided by using a different model that has not had those interventions (Wei et al. 2024). Tunnel vision on the model also commits a kind of framing trap: recent work from AISI shows that, for certain categories of harms in open-weight models, mitigations at the data level may be highly effective (O'Brien et al. 2025).</p>
<p>Regardless of their utility, the emphasis on technocratic tools aimed at models, in combination with the deference to the private sector entities described above, will almost inevitably produce management strategies that are unresponsive or incomplete in their efforts to address public goals. The combination of private sector control, model-centricity, and emphasis on technical mechanisms sets the conditions for what Cohen and Waldman call &ldquo;regulatory managerialism,&rdquo; the importing of the &ldquo;practices for organizing and overseeing private sector, capitalist economic production and&hellip;the logics and underlying ideologies in which those practices are rooted&rdquo; into regulated activities (Cohen and Waldman 2023). Given the complexity and opacity of AI systems, the current mix of deference and model-centricity is poised to leave a small group of well-resourced private entities to shape the public&rsquo;s and regulators&rsquo; understandings of compliance (Edelman 2016 <button id="ref-9" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-9">9</button> <span id="sdn-9" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 9">9. Detailing how the ambiguity of civil rights law allowed regulated entities to create &ldquo;symbolic structures&rdquo; that, while more reflective of managerial interests than the legal goals, came to inform courts&rsquo; understanding of compliance, illustrating the concept of &ldquo;legal endogeneity.&rdquo;</span>; Chi et al. 2021).</p>
<h4>D. Constrained view on potential mitigation sites</h4>
<p>AI governance frameworks and the emerging practices across the field over-emphasize model assessments and mitigation strategies. This orientation detracts from fuller understandings of how rights and safety are put at risk, and in many instances results in mitigation efforts that are blind to the full spectrum of hazards and their relations to particular harms.</p>
<p>Many AI governance frameworks emphasize the responsibility of model developers and deployers to mitigate risks posed by those models. The EU AI Act requires risk mitigation activities by <em>providers</em> of &ldquo;general-purpose AI models,&rdquo; with heightened requirements where they pose &ldquo;systemic risks,&rdquo;<button id="ref-10" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-10">10</button> <span id="sdn-10" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 10">10. Those using greater than 10<sup>25</sup> floating point operations (FLOPS) in the computation used for training.</span> and <em>deployers</em> of High-Risk AI systems (EU AI Act Art. 53, 55, 26-27). Similarly, President Biden directed various reporting requirements and technical guidelines for AI systems deemed to be &ldquo;dual-use foundation models&rdquo; and ordered a report on the risks and benefits of AI models with widely available weights, along with policy recommendations to maximize those benefits while mitigating the risks (Exec. Order No. 14110). Private commitments to mitigate risks to the public&rsquo;s rights and safety similarly focus on &ldquo;red-teaming of models or systems,&rdquo; &ldquo;sharing&hellip;information on advances in frontier [models&rsquo;] capabilities and emerging risks and threats,&rdquo; and &ldquo;protect[ing] proprietary and unreleased model weights&rdquo; (White House 2023).</p>
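<p>For readers unfamiliar with how the compute threshold in the note above operates, the sketch below shows one way a provider might make a first-pass estimate against the 10<sup>25</sup> FLOP presumption. The 6 &times; parameters &times; tokens approximation is a common heuristic for dense transformer training rather than language from the EU AI Act, and the example model&rsquo;s size and token count are invented for illustration.</p>
<pre>
# Illustrative sketch only: a first-pass check against the EU AI Act's
# presumptive systemic-risk threshold for general-purpose AI models
# (training compute greater than 1e25 floating point operations).
# The 6 * parameters * tokens approximation is a common heuristic for
# dense transformer training, not language from the Act; the example
# model's size and token count are invented for illustration.

EU_SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

def estimated_training_flop(parameters, training_tokens):
    """Rough training-compute estimate for a dense transformer."""
    return 6.0 * parameters * training_tokens

def presumed_systemic_risk(parameters, training_tokens):
    """True when the compute estimate meets or exceeds the presumption threshold."""
    return estimated_training_flop(parameters, training_tokens) >= EU_SYSTEMIC_RISK_THRESHOLD_FLOP

# Hypothetical 400B-parameter model trained on 15T tokens.
flop = estimated_training_flop(4e11, 1.5e13)
print(f"estimated training compute: {flop:.2e} FLOP")
print("presumed systemic risk:", presumed_systemic_risk(4e11, 1.5e13))
</pre>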
<p>Although providers and deployers both play crucial roles in mitigating risks from AI, efforts focused on engineering and design procedures and practices are not oriented towards potential harms on the ground. In combination with the emphasis on technical mitigation strategies and the deference to the private sector, the focus on model-level interventions may make it easier for engineers to address risks in their day-to-day practice, but collectively these governance approaches risk constructing a world in which corporations make progress on addressing AI risks without making a meaningful overall dent in negative real-world impacts from AI.</p>
<p>In contrast, binding guidance issued under the Biden-Harris Administration (Young 2024) and reissued under the Trump Administration (Vought 2025) directs risk mitigations at rights- and safety-impacting AI <em>use cases</em>&mdash;best understood as deployed sociotechnical systems including the technical, human, and organizational elements&mdash;or as Dobbe describes it, accounting for AI system design and institutional context (Dobbe 2022). The Digital Services Act also takes a sociotechnical system approach, requiring platforms and search engines of specific sizes (those with 45 million or more users per month in the EU) to &ldquo;identify, analyse and assess any systemic risks&nbsp;from the design or <em>functioning</em> of their service and its related systems, including algorithmic systems, or from the use made of their services&rdquo; and adopt mitigation measures to address them (Regulation (EU) 2022/2065 Art. 34-35). The NIST AI RMF (2023), as well as the more recent NIST AI Risk Management profile on Generative AI (NIST 2024a) and the State Department Risk Management profile on AI and Human Rights (U.S. Department of State 2024), takes a sociotechnical systems perspective, explaining the need to consider risks at multiple levels of system abstraction. For example, the NIST AI RMF explains that it is designed to be used by &ldquo;AI actors&hellip;who perform or manage the design, development, deployment, evaluation, and use of AI systems&rdquo; including those who control the &ldquo;Application Context, Data and Input, AI Model, and Task and Output&rdquo; (NIST 2023). Similarly, the Generative AI framework explains, &ldquo;Risks may exist at individual model or system levels, at the application or implementation levels (i.e., for a specific use case), or at the ecosystem level &ndash; that is, beyond a single system or organizational context&rdquo; (NIST 2024a). While the question of what risks can be evaluated and addressed at various levels of abstraction isn&rsquo;t crisply set out in the proposals, the NIST AI RMF provides some guidance about the risk mitigation activities that can occur at different <em>dimensions</em> (&ldquo;Application Context, Data and Input, AI Model, and Task and Output&rdquo;), and categories of actors who can assess them (NIST 2023). This extends the site of action beyond where the engineering and design work happens, aiming to account for the full complexity of the world.</p>
<h3><strong>II. Effectively and Legitimately Managing AI Risks</strong></h3>
<p>AI governance frameworks direct risk management activities to protect public rights and interests, but effectively and legitimately reducing harms requires policy frameworks that advance a sociotechnical approach to risk assessment and mitigation; center the prevention of harm through coordinated action; and expand the practices and institutions involved in risk management activities, including those focused on models. Absent a reorientation, AI risk management will continue to devolve into an exercise in technocratic regulatory managerialism, divorced from the rights and public values it was created to serve. It will yield ineffective risk management approaches that will be viewed as illegitimate and captured. In its current form AI governance, while grounded in a deep commitment to advance the interests and protect the rights and safety of the public, is at risk of contributing to &ldquo;peoples&rsquo;&hellip;alienation from public institutions, and the perspectives of the regulators who are supposed to be safeguarding their interests&rdquo; (Ford 2023).</p>
<p>AI governance practices are still nascent. Policymakers and other stakeholders have a window of opportunity to shape them. Below we outline four practical ways to reorient risk management and realize its public benefits.</p>
<h4>A. Establishing the sociotechnical frame</h4>
<p>First, governance approaches should take a sociotechnical systems approach to assessing and mitigating risk. Rather than seeking to address risks component-by-component, a sociotechnical systems approach analyzes how system components collectively produce harms and hazards and identifies optimal points for risk mitigations across them. The goal is to systematically identify a set of risk mitigations across various technical and institutional components that will effectively mitigate harms and reduce hazards.</p>
<p>Advisory bodies, researchers, advocates and some policymakers have emphasized the importance of taking a sociotechnical systems approach to managing AI risks (NAIAC 2023; NIST 2023 Appendix 3; Polemi et al. 2024; Dobbe 2022; Raji and Dobbe 2023; NASEM 2021; Bogen and Winecoff 2024; White House OSTP 2022; Prabhakar and Klein 2024). Researchers report that a key failing of current risk management methods is an overemphasis on &ldquo;the technological artefact&hellip;in isolation&rdquo; and absence of attention to &ldquo;the human factors and systemic structures that influence whether a harm actually manifests&rdquo; (Weidinger et al. 2024). They find that an emphasis on technical components leaves key sources of risk unidentified and unaddressed (Dobbe 2022; Fox and Victores 2025).</p>
<p>Moving AI risk management beyond models and data is aligned with learnings from safety science and risk management practices in other high-risk fields. As Dobbe (2022) explains, the field of system safety assumes that &ldquo;systems cannot be safeguarded by technical design choices on the model or algorithm alone&rdquo; and therefore takes an &ldquo;end-to-end&rdquo; approach to analyzing risks and a sociotechnical approach&mdash;including &ldquo;the context of use, impacted stakeholders&hellip;and institutional environment&rdquo;&mdash;to deploying mitigations. Governance models in other high-risk fields such as transportation, finance, and medicine reflect this holistic, systems approach to assessing and mitigating risk (NASEM 2021).</p>
<p>Driving a sociotechnical systems approach in the field requires policymakers to prioritize the mitigation of harms, rather than the evaluation of specific system components. Two practical ways to set risk management activities within the sociotechnical frame include establishing <em>use cases</em> rather than systems as the unit of analysis, and encouraging collaborative, <em>ecosystem-level</em> approaches to risk management. The first approach, pioneered in the guidance to federal agencies and building off the Blueprint for the AI Bill of Rights established during the Biden-Harris Administration, emphasizes the need to design and evaluate a specific context of use rather than a generic AI system (White House OSTP 2022; Young 2024). Federal agencies are directed to evaluate AI <em>use cases</em>, defined as &ldquo;the specific scenario in which AI is designed, developed, procured, or used to advance the execution of agencies&rsquo; missions and their delivery of programs and services, enhance decision making, or provide the public with a particular benefit&rdquo; (OMB 2024). At the other end of the spectrum, yet accomplishing a similar objective, the Global Network Initiative and Business for Social Responsibility toolkit, <em>Across the Stack Tool: Understanding Human Rights Due Diligence Under an Ecosystem Lens</em>, offers a generalizable method for establishing a sociotechnical frame to assess <em>rights</em> (Global Network Initiative, n.d.a). This tool focuses on the collective action required by the entities and stakeholders that make up the <em>technology</em> <em>ecosystem</em> to protect, respect, and realize human rights, and reflects the fact that many human rights challenges are sociotechnical, span geographies, and are neither company nor technical system specific. While these approaches are radically different, one narrowing to a <em>specific use</em> of AI in context and the other focusing on <em>specific harms</em> across an ecosystem, in practice both orient risk assessment and mitigation activities towards reducing <em>negative impacts</em> on individuals and groups rather than focusing on evaluating a specific technical component or system.</p>
<h4>B. Reorienting harms and hazards relations</h4>
<p>AI systems exist within complex assemblages of social and material components; their risks are coproduced and inseparable from their social worlds. Therefore, risk management should be reoriented to begin with anticipated harms within a sociotechnical system and then conduct systems analysis to identify and mitigate hazards, not the other way around. This reorientation increases the ability of various entities to tailor mitigations across the sociotechnical system to maximize their efficacy and efficiency at reducing harms while maintaining system components&rsquo; capacity to support other uses and outputs.</p>
<p>Reducing the model-centric focus produces a more robust understanding of risk management opportunities across the sociotechnical system. To this end, the handoff model (Mulligan and Nissenbaum 2020) provides an analytic that helps elucidate the sociotechnical and relational view of risk. In popular accounts of automation, a function that is performed by a human is imagined to be fungible and exchangeable with an algorithmic counterpart, leaving the overall system intact. Handoff challenges the delegation of human decision-making onto automated decision-making systems by examining how the function of the system (or assemblage) is transformed through automation. It is about states of all the interacting components in a system rather than particular components themselves. Understanding the shifting landscape of harms induced by AI systems in action requires us to articulate shifts in how functions are performed (by distinct configurations of human and non-human components) and how those shifts alter, relocate, and redistribute harms in a socio-material assemblage. Automated decision-making systems, for example, are said to be a means to reduce harms that arise from intentional human discrimination, but, once in operation, bring about disparate impact and other forms of discrimination. Unbundling harms from hazards in this way further clarifies where risk mitigation opportunities lie across the sociotechnical system.</p>
<h4>C. Expanding the methods and tools of risk mitigation</h4>
<p>Most AI risk management tools are designed to support AI designers&rsquo; and developers&rsquo; efforts during the data-centric and statistical modeling stages of AI development (Kuehnert et al. 2025). As Kieslich et al. (2025) note in their criticism of the risk assessment obligations of the DSA and AI Act, current technically focused practices &ldquo;are not inclusive, unable to take into account broader systemic factors, unsuitable to account for individual (group) differences, and do not offer guidance on how to balance value conflicts&rdquo; and &ldquo;have a tendency to invoke the power and legitimacy of objectifiable criteria, do not leave room to navigate value conflicts, tend to ignore group-specific perceptions of harms and the sociotechnical interplay of end-users with the technology, and invite bureaucratization and overreliance on the perception of one particular stakeholder (developers), while generally suffering from a democratic deficit to the extent that they fail to actively involve and engage the societal stakeholders that are affected.&rdquo;</p>
<p>To address the limitations of current risk management methods, scholars and researchers have pressed for tools and methods that involve other stakeholders (Cohen and Waldman 2023; Metcalf et al. 2021). Researchers and practitioners are pioneering new methods to support sociotechnical risk assessments (Kieslich et al. 2025; Weidinger 2024; Weidinger et al. 2023; Gandhi et al. 2023; Ganguli et al. 2023). However, as Cohen and Waldman (2023) note, much more must be done to &ldquo;introduce&mdash;or, in some cases, reintroduce&mdash;knowledge production methods that might compete with and ultimately dislodge managerialist epistemologies&rdquo; and ensure public values&mdash;specifically protecting the public&rsquo;s rights and safety and public goods&mdash;drive AI risk management activities.</p>
<p>Recognizing the importance of moving out of a narrow emphasis on the technical components of systems and quantitative measures of risk towards a holistic approach, NIST sponsored a workshop series convened by the National Academies of Sciences, Engineering, and Medicine to identify and explore approaches to addressing human and organizational risks in AI systems (NASEM 2021). The goal was to develop a complement to the NIST AI RMF that would address the human and organizational aspects of risk and risk management. During the Biden-Harris Administration, the U.S. National Science Foundation funded a new AI Institute for Trustworthy AI in Law and Society that is researching participatory approaches to AI development and AI governance approaches, and, along with a pool of private philanthropies, launched the Responsible Design, Development, and Deployment of Technologies program that aims to ensure ethical, legal, community, and societal considerations are embedded in the lifecycle of technology&rsquo;s creation (NSF 2023, 2024).</p>
<p>New tools, research initiatives, and institutions are important for broadening the methods and tools for addressing AI risks, but they are insufficient to realize the shift in the field required to advance holistic approaches to AI risk. Funding for interdisciplinary work&mdash;particularly efforts that include qualitative, interpretivist social scientists and are situated in specific domains&mdash;is essential to develop the methods and tools to support risk management activities across sociotechnical AI systems that center the public&rsquo;s rights and safety and public values. Existing research has found that responsibly deploying AI systems requires significant investments in human processes, relationships, and training (Sendak et al. 2020). Such changes and investments in institutional processes and personnel must be considered part of the risk mitigation toolbox if AI is to deliver positive impacts in practice. Regulatory frameworks that demand sociotechnical approaches to risk management&mdash;like the requirement for U.S. agencies to assess AI <em>use cases</em>&mdash;are an important way to drive public and private investment in new methods and tools.</p>
<h4>D. Broadening participation in risk management</h4>
<p>Current AI governance frameworks largely rely on regulated entities for risk mitigation activities (Wachter 2024). Scholars and advocates argue these approaches leave entities providing AI systems too much discretion about which risks are in scope and how much residual risk is acceptable (Smuha et al. 2021; NIST 2023), and allow regulated entities to &ldquo;emphasi[ze]&hellip;severity over probability&rdquo; (Yang 2024). While a key rationale for risk-based regulation is to enlist the expertise, judgment, and privileged access of firm insiders, the extent to which outsiders have input and can observe firm choices influences how well private actors and their choices align with public goals. Regulators currently lack the resources and are just beginning to demand the access and acquire the expertise to more directly evaluate and shape firms&rsquo; approaches to AI risk management. The expertise of civil society, academia, and domain experts associated with specific risks is rarely acknowledged and even more rarely leveraged by existing governance approaches. For these reasons, the current governance approaches are unlikely to meet public risk management objectives and, at the same time, the deference to firms undermines public trust in efforts produced under the existing regimes regardless of their merit.</p>
<p>AI risk management needs new players. Solow-Niederman (2020) argues that a functional theory of AI governance must address the power and actions of the private entities and individuals controlling a vast quantity of AI infrastructure. Kuehnert et al. (2025) draw attention to the importance of &ldquo;<em>who within an organization&hellip;conduct[s] what task at different stages of the AI lifecycle, and how</em>&rdquo; for &ldquo;&hellip;preventing and mitigating AI harms.&rdquo; They emphasize that &ldquo;understanding [AI harms&rsquo;] source in the AI lifecycle&mdash;the process by which an AI model is imagined, designed, developed, evaluated, and integrated into broader decision-making processes and workflows&rdquo; is essential to managing risk and that different individuals within the firm have different vantage points (Kuehnert et al. 2025). While Kuehnert et al. focus on who &ldquo;<em>within an organization</em>&rdquo; should be responsible for various risk management tasks, Weidinger et al. (2024) suggest that &ldquo;the work of conducting safety evaluations can be <em>meaningfully distributed across different actors</em>, based on who is best placed to conduct different types of evaluations&rdquo; (emphasis added). Weidinger et al.&rsquo;s suggestion to enlist a wider set of actors beyond regulated firms is consistent with the perspective of regulatory scholars who recommend considering the strengths of various external actors who can be enlisted to support various governance goals. Abbott and Snidal (2009) suggest that multi-actor governance schemes should focus on allocating specific tasks to actors based on consideration of four competencies: independence, representativeness, expertise, and operational capacity. They note that different sets of competencies are essential to the efficacy and legitimacy of distinct governance activities, including distinct risk management activities, and vary with the substantive risk to be managed even within risk management activities. Of particular relevance to risk mitigation, Abbott and Snidal identify competency gaps in regulated firms that counsel against making them wholly responsible for implementation and monitoring activities.</p>
<p>Researchers predict that enlisting external stakeholders in one area of AI risk management, audits, will yield better outcomes (Stein et al. 2024). Building on an analysis of how the public and private sectors are enlisted in auditing activities in other high-risk industries, they conclude &ldquo;that public bodies [should] become directly involved in safety critical, especially gray-and white-box, AI model evaluations&rdquo; (Stein et al. 2024).<button id="ref-11" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-11">11</button> <span id="sdn-11" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 11">11. &ldquo;Black-box evaluation techniques assess an AI model&rsquo;s performance from an external (e.g., user) perspective, limiting analysis to the model&rsquo;s inputs and outputs without accessing its internal workings (Casper et al. 2024). By contrast, white-box techniques involve analyzing the internal functioning of the model (Casper et al. 2024). Intermediate approaches are referred to as &lsquo;gray-box.&rsquo;&rdquo; (Stein et al. 2024)</span> Their conclusion rests on several factors. First, the &ldquo;unpredictable but potentially critical and far-reaching impacts&rdquo; of advanced AI systems. Second, the market concentration in the AI sector. Third, the high cost of testing and evaluation due to the absence of standards. Fourth, the &ldquo;significant and specialized expertise&rdquo; required, including &ldquo;[d]omain-specific expertise&hellip;to develop threat models and red-team&rdquo; and &ldquo;research engineers and computational social scientists&hellip;to understand models and their impacts&rdquo; (Stein et al. 2024). Fifth, the potential for audits to reveal pathways to misusing advanced AI. The combination of these factors requires auditors to have independence and significant expertise, and to be trusted to protect information that could lead to system misuse. In recommending that public bodies take on this role, the authors note that doing so requires providing public bodies with &ldquo;extensive access to models and facilities&rdquo; and could require &ldquo;[hundreds] of employees for auditing in large jurisdictions like the EU or US&rdquo; (Stein et al. 2024). Public bodies have a role to play in effective and legitimate testing, but additional investments in the resources, expertise, and access needed to conduct rigorous AI model audits are required.</p>
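<p>To make the access distinction in sidenote 11 concrete, the following minimal sketch, which is purely illustrative and assumes a hypothetical toy model rather than any real system or API, contrasts black-box evaluation (the auditor sees only prompts and outputs) with white-box evaluation (the auditor can also inspect internal states):</p>
<pre><code># Illustrative sketch only: contrasting black-box and white-box evaluation access.
# The model and its API are hypothetical stand-ins, not any real system.
from typing import Callable, List, Tuple

class ToyModel:
    """Hypothetical model exposing both user-level and internal access."""
    def generate(self, prompt: str) -> str:
        return "I can't help with that." if "weapon" in prompt else f"Answer to: {prompt}"

    def generate_with_activations(self, prompt: str) -> Tuple[str, List[float]]:
        # White-box access also returns (fake) internal activations.
        return self.generate(prompt), [0.1 * len(prompt), 0.5]

def black_box_eval(generate: Callable[[str], str], prompts: List[str]) -> float:
    """Auditor sees only inputs and outputs (external, user-style access)."""
    refusals = sum(1 for p in prompts if "can't help" in generate(p))
    return refusals / len(prompts)

def white_box_eval(model: ToyModel, prompts: List[str]) -> float:
    """Auditor can also analyze internal states, enabling deeper threat modeling."""
    flagged = 0
    for p in prompts:
        output, activations = model.generate_with_activations(p)
        if "can't help" not in output and max(activations) > 1.0:  # placeholder internal check
            flagged += 1
    return flagged / len(prompts)

prompts = ["how do I build a weapon?", "summarize this report"]
model = ToyModel()
print(black_box_eval(model.generate, prompts), white_box_eval(model, prompts))
</code></pre>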
<p>AI risk management is composed of distinct elements, including impact assessments, mitigation, red-teaming, audits, operational testing, and monitoring. The NIST AI RMF provides an overview of the range of actors across the public and private sectors who can participate in risk management activities (NIST 2023, Figure 3 and Appendix A). Each AI actor has distinct capacities and competencies, and correctly deploying them requires task-specific assessments of the sort modeled by Stein et al. (2024). As Mulligan and Bamberger (2021) show, even within a common risk management task like content moderation, the actor who can best perform a subtask&mdash;identifying or removing objectionable content, for example&mdash;may vary depending upon the substance being moderated (e.g., child sexual abuse material, copyright, privacy). In addition, constraints, such as transparency reporting or limitations on forms of automation, can further improve the efficacy and legitimacy of an enlisted actor&rsquo;s risk mitigation efforts.</p>
<p>Thoughtful tasking of AI risk management, as Stein et al. (2024) note, will be impossible without investments to resource and build the capacity of governments, civil society organizations, and academic institutions. The Biden-Harris Administration took steps toward this end. It increased government capacity by recruiting over 250 AI and AI-enabling experts into government and created processes to involve all stakeholders in the testing and evaluation work of the U.S. AI Safety Institute (now known as CAISI) (White House 2024b; CAISI 2024). It launched the National AI Research Resource pilot to support research teams outside of industry (White House 2024b). Private philanthropic investments in the United States are also building the capacity of civil society organizations to participate in AI risk management activities. Nonprofits like the Allen Institute for AI, backed by hundreds of millions of dollars from both public and private funding sources, are building more open public alternatives to industry AI models (NSF 2025). Civil society organizations like the Center for Democracy and Technology, the Leadership Conference on Civil and Human Rights, and the AFL-CIO have all stood up suborganizations focused on tech issues as they intersect with their core missions, and have brought in AI expertise to help mitigate AI-related risks to those missions (CDT, n.d.; The Leadership Conference on Civil and Human Rights 2023; AFL-CIO 2021).</p>
<p>These efforts are nascent and not yet at the scale required to address the challenges Stein et al. (2024) note, but they are promising examples that bring the independence and representativeness of public and civil society actors into AI risk management tasks and bolster the sociotechnical expertise within government. If public AI initiatives continue to be prioritized, we can expect to see greater civil society capacity to make AI more trustworthy and aligned with the public interest (Marda 2024).</p>
<p>To end this section, we provide two examples that gather stakeholders with appropriate expertise, tools, and methods around technical and institutional processes. First, Data &amp; Society&rsquo;s AIMLab<button id="ref-12" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-12">12</button> <span id="sdn-12" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 12">12. In particular, consider their pilot with the City of San Jose on a computer vision use case, which ultimately led to a bake-off with a number of vendors (Data &amp; Society 2025). </span> takes an on-the-ground participatory approach that works with communities to anticipate potential harms and integrate lived expertise, expanding who participates in risk governance and altering the substance and process of typically purely technical evaluations, all in tandem with government and industry stakeholders (Data &amp; Society, n.d.). Second, at the infrastructural level, consider the National Center for Missing &amp; Exploited Children (NCMEC). NCMEC maintains a hash-sharing platform for CSAM, operating in conjunction with industry and nonprofit actors, that allows platforms to take down CSAM at scale using NCMEC APIs. NCMEC&rsquo;s independent domain experts identify violative content in accordance with public law, and the technical infrastructure scales their expertise, allowing it to be integrated into engineering work that happens within numerous firms (Mulligan and Bamberger 2021).</p>
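<p>The structural point of such hash-sharing infrastructure can be sketched in simplified form. The snippet below is illustrative only: it assumes a hypothetical shared hash list and uses exact cryptographic hashing, whereas production systems such as NCMEC&rsquo;s rely on perceptual (robust) hashing and dedicated APIs. What it shows is that expert determinations are encoded once, in a shared list, and consumed by many platforms:</p>
<pre><code># Simplified, hypothetical sketch of how a shared hash list lets an expert
# organization's determinations scale across many platforms. Real systems rely on
# perceptual (robust) hashing and purpose-built APIs, not the exact matching shown here.
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Stand-in for a robust image hash (real deployments use perceptual hashes)."""
    return hashlib.sha256(image_bytes).hexdigest()

class SharedHashList:
    """Maintained by a trusted expert organization; consumed by many platforms."""
    def __init__(self):
        self._hashes = set()

    def add_verified(self, image_bytes: bytes) -> None:
        # Only domain experts (applying legal and survivor-centered criteria) add entries.
        self._hashes.add(fingerprint(image_bytes))

    def matches(self, image_bytes: bytes) -> bool:
        return fingerprint(image_bytes) in self._hashes

# A platform checks uploads against the shared list before hosting them.
def moderate_upload(image_bytes: bytes, shared_list: SharedHashList) -> str:
    return "block_and_report" if shared_list.matches(image_bytes) else "allow"

shared = SharedHashList()
shared.add_verified(b"example-verified-violative-image")
print(moderate_upload(b"example-verified-violative-image", shared))  # block_and_report
print(moderate_upload(b"some-other-image", shared))                  # allow
</code></pre>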
<p>Together, these four shifts will reorient AI risk management, ensuring that appropriate stakeholders are involved, the right tools and methods are used, and both technical and institutional processes are aligned to effectively and efficiently mitigate harms.</p>
<h3><strong>III. Applying the Conceptual Analysis: Image-Based Sexual Abuse</strong></h3>
<p>In this section, we sketch out how our framework can be used to guide risk management activities to address image-based sexual abuse. Advocates and policymakers frame the non-consensual creation and distribution of intimate images as &ldquo;image-based sexual abuse&rdquo; (McGlynn and Rackley 2017). This framing recognizes both the diverse harms experienced by individual victim-survivors <em>and</em> the societal harms of its often-gendered nature. It positions these phenomena, as sociotechnical hybrids, within the landscape of information and communication technologies that scholars have shown contribute to the perpetuation of systemic injustice (Benjamin 2019; Noble 2018; Buolamwini and Gebru 2018; Browne 2015).</p>
<p>Both the AI-generated production of synthetic intimate images of real women and the non-consensual distribution of real intimate images harm women. They harm the specific women depicted in the images, and they harm women as a group, because the creation and distribution of these images represents &ldquo;a systematic tolerance of sexual violence against women&rdquo; that &ldquo;takes away from [women&rsquo;s] autonomy and ability to move through the world&rdquo; (Tran 2015). In this sense, the production of such synthetic images by generative AI coproduces harms against women within an existing world of gendered violence; the harm cannot be mitigated at the level of model output alone. Reflecting this sociotechnical lens, the concept of &ldquo;sexual digital forgeries,&rdquo; coined by Danielle Citron and Mary Anne Franks and building on Angela Chen&rsquo;s definition of &ldquo;digital forgeries&rdquo; as &ldquo;something that a reasonable person would think is real,&rdquo; reveals that although generative AI has proliferated new technical tools for creating synthetic intimate images, the problem, and its solutions, cannot be solely technological (Chen 2019). For example, since not all violative synthetic intimate content can be identified through technical tools such as deepfake detectors, the standard for deciding whether a piece of content should be removed should ultimately be set by victims themselves.</p>
<p>This sociotechnical framing is reflected in a White House call to action from the Biden-Harris Administration addressing the creation and distribution of image-based sexual abuse (IBSA) (Prabhakar and Klein 2024).<button id="ref-13" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-13">13</button> <span id="sdn-13" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 13">13. &ldquo;Image-based sexual abuse&mdash;including synthetic content generated by artificial intelligence (AI) and real images distributed without consent&mdash;has skyrocketed in recent years, disproportionately targeting women, girls, and LGBTQI+ people. For survivors, this abuse can be devastating, upending their lives, disrupting their education and careers, and leading to depression, anxiety, post-traumatic stress disorder, and increased risk of suicide&rdquo; (Prabhakar and Klein 2024). </span> While the call to action brings AI into view, it also calls on &ldquo;companies and other organizations to provide meaningful tools that will prevent and mitigate harms, and to limit websites and mobile apps whose primary business is to create, facilitate, monetize, or disseminate image-based sexual abuse&rdquo; and &ldquo;Congress to strengthen legal protections and provide critical resources for survivors and victims&rdquo; (Prabhakar and Klein 2024). Reflecting this sociotechnical orientation towards managing the harms of IBSA, the Biden-Harris Administration garnered voluntary commitments from AI model developers and data providers to reduce AI-generated IBSA including commitments from key model developers to:</p>
<ul>
<li>responsibly source datasets and safeguard them from IBSA;</li>
<li>remove nude images from AI training datasets for certain models; and</li>
<li>use iterative stress-testing strategies in their development processes and feedback loops to guard against AI models outputting IBSA (White House 2024a).</li>
</ul>
<p>In addition to commitments from AI model developers, payment service providers committed to help detect and limit payment services to companies producing, soliciting, or publishing IBSA; GitHub committed to prohibit the sharing of software tools that are designed for, encourage, promote, support, or suggest in any way the use of synthetic or manipulated media for the creation of non-consensual intimate imagery; and Microsoft committed to launch a pilot to detect and delist duplicates of survivor-reported non-consensual intimate imagery in Bing&rsquo;s search results (White House 2024a).</p>
<p>The Biden-Harris Administration&rsquo;s call to action and the voluntary commitments it garnered address the wide range of actors that create harms, by producing or providing access to non-consensual intimate images, and create hazards, either through AI models and other tools that have eased the creation of fake but convincing intimate images and videos of women or through search engines and payment platforms that make the distribution of such images easy and lucrative.</p>
<p>This sociotechnical framing animates federal laws too. New policies build on existing laws against the non-consensual creation of sexual images&mdash;enacted in response to so-called &lsquo;upskirt and down-shirt&rsquo; photos and videos ushered in by increasingly small cameras used to capture sexual images in public places&mdash;to address sexual digital forgeries and the non-consensual distribution of such content. The Violence Against Women Act (VAWA) Reauthorization Act of 2022 created a federal civil cause of action for individuals whose identifiable intimate visual images are disclosed without their consent, allowing victims to recover damages and legal fees (U.S. Congress 2022). The 2025 Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (TAKE IT DOWN) Act, which covers both &ldquo;digital forgeries&rdquo; and &ldquo;authentic intimate visual depictions,&rdquo; targets individuals who knowingly publish non-consensual intimate imagery <em>and</em> the platforms that host it, requiring platforms to set up a process to remove covered images upon notice from a victim (U.S. Congress 2025).</p>
<p>Together, these new laws and the voluntary actions spurred by the call to action approach IBSA at an ecosystem level, intervening at various points&ndash;creation, distribution, hosting, monetization&ndash;to minimize harm. These actions address three distinct harms: the inclusion of IBSA in datasets, the production of synthetic IBSA that identifies real women,<button id="ref-14" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-14">14</button> <span id="sdn-14" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 14">14. For the purpose of this example, we are setting aside the question of whether fully synthetic intimate images cause harm. To the extent that &ldquo;systematic tolerance of sexual violence against women&rdquo; (Tran 2015) is considered a harm, it is reasonable to frame systems that make fully synthetic intimate images easy to produce and circulate as producing both risk of harm and hazards.</span> and the circulation of IBSA (synthetic and non-synthetic). They target model outputs that directly inflict harm (i.e., non-consensually produced and circulated intimate images) and embed risk mitigations across the sociotechnical system, involving a wide swathe of actors to reduce the production and circulation of IBSA. We will explore the first two through our model.</p>
<p>The inclusion of real non-consensual intimate images in datasets can be categorized as a harm to privacy. The voluntary commitments made by AI model providers include a commitment to safeguarding datasets used in model production from IBSA. While this harm can be mitigated at the level of training data, satisfying the sociotechnical and relational orientations of our model, it is unclear whether model developers possess the capacity and competencies to perform this mitigation. AI model developers<button id="ref-15" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-15">15</button> <span id="sdn-15" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 15">15. Including Adobe, Anthropic, Cohere, Common Crawl, Microsoft, and OpenAI, which made this commitment (White House 2024a).</span> can use their resources to impose contractual requirements on dataset providers to use existing databases such as StopNCII and Take It Down to identify and remove IBSA from datasets. However, other actors may be better able to effectively and legitimately undertake this task.</p>
<p>With respect to expertise, operational capacity, and efficiency, dataset developers, as opposed to AI model developers, are arguably better positioned to ensure their datasets do not contain IBSA. They control the dataset development process, make decisions about sourcing data, and can filter data before including it in the dataset. Other parties may have superior expertise in the subject matter of IBSA and possess the independence and representativeness viewed as essential to legitimately undertake the work. For example, StopNCII.org and Take It Down are non-profit initiatives that work directly with affected individuals and communities and are trusted to advance and protect their interests. Their databases are increasingly used by other AI actors to identify IBSA for harm mitigation (White House 2024a; Gregoire 2024). Only a diverse collection of entities has the right set of capacities and competencies to mitigate this harm effectively and legitimately. In addition, requiring dataset developers to ensure IBSA is removed scales risk management across all models using a dataset, rather than having each AI model developer undertake the task. Ideally, a distributed system of risk mitigation will emerge that, like the current approach to identifying and removing CSAM, enlists subject matter experts and victim-survivors in identifying material to be removed and creates automated screening tools to reduce the costs and improve the pace of removing such images from datasets.</p>
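<p>As a simplified illustration of this kind of pre-release screening, the sketch below assumes a hypothetical dataset pipeline and hash lists supplied by external organizations; real matching would rely on perceptual hashes and the services&rsquo; own APIs rather than the exact hashing shown here, and the field and function names are illustrative only:</p>
<pre><code># Hypothetical sketch of the pre-release filtering a dataset developer could run
# against hash lists contributed by survivor-facing services. Names are illustrative;
# real pipelines would use perceptual hashes and the services' APIs.
import hashlib
from typing import Dict, Iterable, List, Set

def image_hash(image_bytes: bytes) -> str:
    # Stand-in for a robust perceptual hash.
    return hashlib.sha256(image_bytes).hexdigest()

def filter_dataset(records: Iterable[Dict], known_ibsa_hashes: Set[str]) -> List[Dict]:
    """Drop records whose images match a known-IBSA hash before the dataset ships."""
    kept, removed = [], 0
    for record in records:
        if image_hash(record["image_bytes"]) in known_ibsa_hashes:
            removed += 1  # also log for auditability and transparency reporting
        else:
            kept.append(record)
    print(f"removed {removed} matching records")
    return kept

# Hash lists are supplied by trusted external organizations, not built by the developer.
known_hashes = {image_hash(b"reported-image-1"), image_hash(b"reported-image-2")}
dataset = [
    {"id": 1, "image_bytes": b"reported-image-1"},
    {"id": 2, "image_bytes": b"benign-image"},
]
clean = filter_dataset(dataset, known_hashes)
</code></pre>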
<p>A second harm is AI model outputs that either reproduce a non-consensual intimate image or produce a synthetic intimate image of a real identifiable person&mdash;a &ldquo;sexual digital forgery.&rdquo; Leading AI model developers agreed to incorporate iterative stress-testing and feedback loops in their development processes to mitigate the production of both kinds of images. This harm mitigation strategy appears well targeted to the operational capacity of AI model developers. The output of the AI models is itself the harm, and AI model developers can use various technical processes, from prompt engineering to output filtering, to prevent the production of IBSA. However, here too they likely need the expertise, independence, and representativeness of other actors to define what IBSA is in relation to synthetic intimate images, and to identify existing non-consensual intimate images. StopNCII, Take It Down, and other groups that represent affected parties may play a role in both activities. In addition, transparency reports and third-party audits of AI model outputs may be important constraints on these risk mitigation practices, enabling others to check the validity of the AI model developers&rsquo; work.</p>
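<p>A simplified, hypothetical sketch of such an output-side gate appears below; the classifier, threshold, and hash list are placeholders rather than any developer&rsquo;s actual safeguards, and the logging illustrates how transparency reporting and third-party audits could attach to the same mechanism:</p>
<pre><code># Illustrative sketch (names hypothetical) of output-side mitigation: a generated image
# is screened before release, combining a hash match against externally maintained IBSA
# lists with a placeholder classifier; every block is logged so that transparency reports
# and third-party audits can check the developer's work.
import hashlib
from typing import Callable, List, Optional, Set

def image_hash(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()  # stand-in for a perceptual hash

def release_gate(image_bytes: bytes,
                 known_ibsa_hashes: Set[str],
                 classifier: Callable[[bytes], float],
                 audit_log: List[dict]) -> Optional[bytes]:
    """Return the image only if it passes both checks; otherwise log and withhold."""
    if image_hash(image_bytes) in known_ibsa_hashes or classifier(image_bytes) > 0.9:
        audit_log.append({"action": "blocked", "hash": image_hash(image_bytes)})
        return None
    return image_bytes

# Toy usage: a trivial "classifier" and an externally supplied hash list.
audit_log: List[dict] = []
known = {image_hash(b"known-violative-output")}
passed = release_gate(b"benign-output", known, lambda b: 0.0, audit_log)
blocked = release_gate(b"known-violative-output", known, lambda b: 0.0, audit_log)
print(passed is not None, blocked is None, audit_log)
</code></pre>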
<p>Finally, the updates to federal law deter individuals from distributing real intimate images and sexual digital forgeries without consent and require websites, platforms, and mobile applications to remove both kinds of images when notified. The legal framework taps into the expertise of those harmed by IBSA and the capacity of entities that host it to curb its distribution and availability.</p>
<p>Examining efforts to address IBSA through our conceptual model highlights the sociotechnical approach, the focus on coordinated action to reduce <em>harms</em> and <em>hazards</em>, and the practical and normative benefits of enlisting a wider range of actors in risk mitigation activities.</p>
<h3><strong>IV. Implications for AI Policy</strong></h3>
<p>Policymakers must recenter harm reduction as the goal of risk management and require a sociotechnical orientation to achieve it. AI policy debates increasingly assume that the growing capabilities of a few AI models in a handful of specific areas generate the only&mdash;or at least the most significant&mdash;pathways to harm. This assumption has whittled down risk governance activities, limiting the sites where mitigations are deployed and the tools and methods of mitigation, and narrowing the actors and expertise brought into these activities. The assumption that mitigations applied directly to model capabilities are the key to harm reduction is misguided. While model evaluations and mitigations are necessary, they are insufficient to address harms because they rarely meaningfully reduce the wide range of hazards and harms to which models contribute. The model-centricity of current risk mitigation practices will not reduce harms, and it has given AI model developers and deployers the discretion to operationalize and mitigate risks with little public participation and oversight. The resulting risk management governance system is flawed and occluded because it ignores many more effective ways to mitigate relevant risks and sidelines external players with core competencies and desirable independence from the economic interests of the AI industry. In simpler terms, it often amounts to self-regulation.</p>
<p>A sociotechnical approach to risk governance centers harms and examines systems to understand how hazards arise from configurations of system components. This sociotechnical approach requires new institutional arrangements and infrastructures to involve a wide range of actors with different kinds of expertise in risk mitigation of AI use cases. These rearrangements, as we describe above, alter both the process and the substance of evaluations, aligning them with public goals and building public trust.</p>
<p>Governments, nonprofits, and companies need to invest in the human and technical infrastructure and the research to support sociotechnical governance. This ranges from building the infrastructure needed for third-party actors to find and report flaws in AI systems (see, e.g., Longpre et al. 2025), to investing in AI safety tooling across the AI stack&mdash;not just at the model level (see, e.g., Marda et al. 2024, Surman and Bdeir 2025). It requires financial support for civil society, academia, and government actors who provide crucial expertise, capacity, and independence necessary for legitimate and accountable risk governance. Crucially, many of these interventions can leverage testing and regulatory processes set up for specific domains, such as using existing reporting frameworks and subject matter experts around medical device safety to identify risks in AI systems deployed in the healthcare system. Evaluations and mitigations at other frames (e.g., data, model, system), while important, do not capture the organized complexity that produces hazards and harms in the wild. Testing AI systems in deployed and operational contexts is vital, as is ongoing monitoring. Some policy documents, such as the Biden-Harris administration&rsquo;s guidance to federal agencies on the use of AI (OMB 2024), recognize the importance of this. Other regulatory frameworks, such as those enforced by the Federal Trade Commission, also look at AI hazards and harms as they manifest in the real world to build a case against unfair or deceptive AI practices (Nguyen 2025).</p>
<p>We recommend four concrete steps for stakeholders in the AI ecosystem to better incorporate this perspective into their approach to AI risk mitigation:</p>
<p>First, policymakers should begin by developing a sociotechnical system map that identifies the technical and organizational system components related to the harm under exploration. In the related area of Human Rights Impact Assessments, Global Network Initiative and Business for Social Responsibility take a &lsquo;value-chain&rsquo; ecosystem approach that examines the interdependencies among many different actors (suppliers, providers, deployers) to map and mitigate risks relationally, beyond responsibilities located within particular firms (Global Network Initiative, n.d.b).<button id="ref-16" class="article-ref" data-dropdown="ref" aria-haspopup="true" aria-expanded="false" aria-controls="sdn-16">16</button> <span id="sdn-16" class="article-sdn" data-dropdown="note" aria-hidden="true" aria-label="sidenote 16">16. For an example of such a mapping for generative AI, see Business for Social Responsibility (2025). </span> This approach calls for engagement with a broadened set of stakeholders, spanning the public and private sectors, academia, and civil society, and including those that are not directly implicated in the value chain but nonetheless impact or are impacted by it.</p>
<p>Second, task the deployers of AI systems with assessing and mitigating risks of harms from specific AI <em>use cases</em>, not the full range of potential risks. Currently, the expectation of responsibility is often placed on AI deployers for a much broader set of risks than those manifested by particular use cases. Deployers should focus on mitigating the risks that manifest from the specific use cases they are deploying, working backwards from the harms associated with those use cases, rather than considering the whole range of possible AI risk mitigations. This strategy would result in a more holistic and targeted approach, as described throughout this paper.</p>
<p>Third, regulatory frameworks should reduce reliance on the developers and deployers of AI systems to independently engage in risk mitigation activities. As this paper argues, this often produces self-regulation. Policymakers should instead directly require, or at least incentivize, entities that develop and deploy systems to enlist external stakeholders with relevant expertise in risk assessment and mitigation processes. This includes bringing external stakeholders into strategic decisions about where and how to mitigate risks, and, where relevant, into risk mitigation activities. Mechanisms for sustainably funding the expertise of these external parties, and for building the expertise of government agencies, are essential to AI risk management. One author has suggested potential sustainable funding models (Mulligan and Bamberger 2018; Doty and Mulligan 2013). And civil society organizations such as the National Fair Housing Alliance are innovating funding models, beyond philanthropy, to support robust AI and domain-specific expertise (National Fair Housing Alliance, n.d.).</p>
<p>Finally, governments and companies need to invest in the infrastructure and research to support evaluations of sociotechnical systems and a richer set of technical and non-technical risk mitigation techniques. The National Artificial Intelligence Research Resource (NAIRR) Pilot (NSF, n.d.) and CalCompute (California 2025) are infrastructural efforts in this direction, which provide not only necessary cloud computing infrastructure but also appropriate human expertise. Evaluations of models and datasets do not capture the organized complexity that produces harms and increases their probability in the wild, and model-centric mitigations cannot on their own prevent harms on the ground. As discussed in Part II, Data &amp; Society&rsquo;s AIMLab represents innovations in evaluations that are not merely technical. Building a testing infrastructure&mdash;physical, technical, human, and methodological&mdash;for AI use cases is key to identifying and addressing harms in the wild. There are efforts in this direction within the U.S. government, for example the Department of Homeland Security&rsquo;s biometric testing facility and, as discussed earlier in the paper, NCMEC, and in the private sector, the Health AI Partnership&rsquo;s work to define the requirements for adequate organizational governance of AI systems in healthcare settings (Kim et al. 2023; Health AI Partnership 2024; The Maryland Test Facility, n.d.).</p>
<p>Together, these four steps would drive a <em>sociotechnical systems</em> approach to the study and mitigation of AI risks that focuses on reducing harms through coordinated action, is oriented around use cases that include social and technical components, targets mitigations towards appropriate sites across the sociotechnical system, and enlists a wide range of actors who possess the capacity and competencies necessary to effectively and legitimately perform the work. For AI systems to be <em>trustworthy</em>, risk mitigations must be performed by entities with the relevant expertise and operational capacity&mdash;including technical and human resources and access to the relevant system components. For systems to be <em>trusted</em>, risk mitigations should lie in the hands of entities viewed as legitimate due to their independence and representativeness. Our approach recenters risk management in the prevention of harms on-the-ground, situating it within a complex everyday world rather than in firms and research labs, and in doing so, paves the path towards a vision for accountable AI governance in the public interest.</p>
<h3><strong>Citations&nbsp;</strong></h3>
<p>Abbott, Kenneth W., and Duncan Snidal. 2009. &ldquo;The Governance Triangle: Regulatory Standards Institutions and the Shadow of the State.&rdquo; In <em>The Politics of Global Regulation</em>, edited by Walter Mattli and Ngaire Woods. Princeton University Press.</p>
<p>AFL-CIO. 2021. &ldquo;AFL-CIO Launches Technology Institute.&rdquo; January 11. <a href="https://aflcio.org/press/releases/afl-cio-launches-technology-institute">https://aflcio.org/press/releases/afl-cio-launches-technology-institute</a>.</p>
<p>AISI (AI Security Institute). 2025. &ldquo;Our Research Agenda.&rdquo; May 6. <a href="https://www.aisi.gov.uk/research-agenda">https://www.aisi.gov.uk/research-agenda</a>.</p>
<p>Allen, Danielle, Sarah Hubbard, Woojin Lim, et al. 2025. &ldquo;A Roadmap for Governing AI: Technology Governance and Power Sharing Liberalism.&rdquo; <em>AI Ethics</em> 5: 3355&ndash;77. <a href="https://doi.org/10.1007/s43681-024-00635-y">https://doi.org/10.1007/s43681-024-00635-y</a>.</p>
<p>Anderljung, Markus, Joslyn Barnhart, Anton Korinek, et al. 2023. &ldquo;Frontier AI Regulation: Managing Emerging Risks to Public Safety.&rdquo; Preprint, arXiv, July 6. arXiv:2307.03718.</p>
<p>Apollo Research. 2024. &ldquo;We Need A Science of Evals.&rdquo; January 22. <a href="https://www.apolloresearch.ai/blog/we-need-a-science-of-evals/">https://www.apolloresearch.ai/blog/we-need-a-science-of-evals/</a>.&nbsp;</p>
<p>Article 29 Data Protection Working Party. 2014. <em>Statement On The Role Of A Risk-Based Approach In Data Protection Legal Frameworks</em> (WP 218). May 30. <a href="https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2014/wp218_en.pdf">https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2014/wp218_en.pdf</a>.</p>
<p>Bamberger, Kenneth A. 2006. &ldquo;Regulation as Delegation: Private Firms, Decisionmaking, and Accountability in the Administrative State.&rdquo; <em>Duke Law Journal</em> 56(2): 377&ndash;468. <a href="https://scholarship.law.duke.edu/dlj/vol56/iss2/1">https://scholarship.law.duke.edu/dlj/vol56/iss2/1</a>.</p>
<p>Benjamin, Ruha. 2019. <em>Race After Technology: Abolitionist Tools for the New Jim Code</em>. Polity Press.</p>
<p>Biden, Joseph R. 2024.<em> National Security Memorandum on Advancing the United States' Leadership in Artificial Intelligence; Harnessing Artificial Intelligence To Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence</em>. The White House. <a href="https://archive.is/hFQ8i">https://archive.is/hFQ8i</a>.</p>
<p>Bogen, Miranda, and Amy Winecoff. 2024. &ldquo;Applying Sociotechnical Approaches to AI Governance in Practice.&rdquo;&nbsp;<em>Center for Democracy and Technology</em>, May 15. <a href="https://cdt.org/insights/applying-sociotechnical-approaches-to-ai-governance-in-practice/">https://cdt.org/insights/applying-sociotechnical-approaches-to-ai-governance-in-practice/</a>.&nbsp;</p>
<p>Browne, Simone. 2015. <em>Dark Matters: On the Surveillance of Blackness</em>. Duke University Press.</p>
<p>Buolamwini, Joy, and Timnit Gebru. 2018. &ldquo;Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.&rdquo; <em>Proceedings of Machine Learning Research </em>81: 1&ndash;15. <a href="https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf">https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf</a>.</p>
<p>Business for Social Responsibility. 2025. <em>A Human Rights Assessment of the Generative AI Value Chain</em>. <a href="https://www.bsr.org/files/BSR-A-Human-Rights-Assessment-of-the-Generative-AI-Value-Chain.pdf">https://www.bsr.org/files/BSR-A-Human-Rights-Assessment-of-the-Generative-AI-Value-Chain.pdf</a>.</p>
<p>CAISI (Center for AI Standards and Innovation, formerly U.S. AI Safety Institute). 2024. <em>The United States Artificial Intelligence Safety Institute: Vision, Mission, and Strategic Goals. </em><a href="https://www.nist.gov/system/files/documents/2024/05/21/AISI-vision-21May2024.pdf">https://www.nist.gov/system/files/documents/2024/05/21/AISI-vision-21May2024.pdf</a>.</p>
<p>CAISI (Center for AI Standards and Innovation, formerly U.S. AI Safety Institute) and AISI (AI Security Institute, formerly UK AI Safety Institute). 2024a. <em>US AISI and UK AISI Joint Pre-Deployment Test: OpenAI o1</em>. <a href="https://cdn.prod.website-files.com/663bd486c5e4c81588db7a1d/6763fac97cd22a9484ac3c37_o1_uk_us_december_publication_final.pdf">https://cdn.prod.website-files.com/663bd486c5e4c81588db7a1d/6763fac97cd22a9484ac3c37_o1_uk_us_december_publication_final.pdf</a>.</p>
<p>CAISI (Center for AI Standards and Innovation, formerly U.S. AI Safety Institute) and AISI (AI Security Institute, formerly UK AI Safety Institute). 2024b. <em>US AISI and UK AISI Joint Pre-Deployment Test: Anthropic&rsquo;s Claude 3.5 Sonnet</em>. <a href="https://cdn.prod.website-files.com/663bd486c5e4c81588db7a1d/673b689ec926d8d32e889a8e_UK-US-Testing-Report-Nov-19.pdf">https://cdn.prod.website-files.com/663bd486c5e4c81588db7a1d/673b689ec926d8d32e889a8e_UK-US-Testing-Report-Nov-19.pdf</a>.</p>
<p>California. 2025. Artificial Intelligence Models: Large Developers, Ch. 138, Statutes of 2025. Enacted September 29, 2025.</p>
<p>Casper, Stephen, Carson Ezell, Charlotte Siegmann, et al. 2024. &ldquo;Black-Box Access is Insufficient for Rigorous AI Audits.&rdquo; <em>Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency</em>, 2254&ndash;2272. <a href="https://doi.org/10.1145/3630106.3659037">https://doi.org/10.1145/3630106.3659037</a>.</p>
<p>CDT (Center for Democracy and Technology). n.d. &ldquo;AI Governance Lab.&rdquo; Accessed February 25, 2026. <a href="https://cdt.org/cdt-ai-governance-lab/">https://cdt.org/cdt-ai-governance-lab/</a>.</p>
<p>Chairs and the Vice-Chairs of the General-Purpose AI Code of Practice. 2025. <em>Third Draft of the General-Purpose AI Code of Practice</em>. European Commission. <a href="https://digital-strategy.ec.europa.eu/en/library/third-draft-general-purpose-ai-code-practice-published-written-independent-experts">https://digital-strategy.ec.europa.eu/en/library/third-draft-general-purpose-ai-code-practice-published-written-independent-experts</a>.</p>
<p>Chen, Angela. 2019. &ldquo;Three Threats Posed by Deepfakes That Technology Won&rsquo;t Solve.&rdquo; <em>MIT Technology Review</em>, October 2. <a href="https://www.technologyreview.com/2019/10/02/75400/deepfake-technology-detection-disinformation-harassment-revenge-porn-law/">https://www.technologyreview.com/2019/10/02/75400/deepfake-technology-detection-disinformation-harassment-revenge-porn-law/</a>.&nbsp;</p>
<p>Chi, Nicole, Emma Lurie, and Deirdre K. Mulligan. 2021. &ldquo;Reconfiguring Diversity and Inclusion for AI Ethics.&rdquo;&nbsp;<em>Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society</em>, 447&ndash;57. <a href="https://doi.org/10.1145/3461702.3462622">https://doi.org/10.1145/3461702.3462622</a>.</p>
<p>Clymer, Joshua, Jonah Weinbaum, Robert Kirk, Kimberly Mai, Selena Zhang, and Xander Davies. 2025. &ldquo;An Example Safety Case for Safeguards Against Misuse.&rdquo; Preprint, arXiV, May 23. <a href="https://doi.org/10.48550/arXiv.2505.18003">https://doi.org/10.48550/arXiv.2505.18003</a>.</p>
<p>Coglianese, Cary, and Colton R. Crum. 2025. &ldquo;Leashes, Not Guardrails: A Management-Based Approach to Artificial Intelligence Risk Regulation.&rdquo; <em>Risk Analysis</em>, 45 (12): 4397&ndash;4407. <a href="https://doi.org/10.1111/risa.70020">https://doi.org/10.1111/risa.70020</a>.</p>
<p>Cohen, Julie E., and Ari Azra Waldman. 2023. &ldquo;Introduction: Framing Regulatory Managerialism as an Object of Study and Strategic Displacement.&rdquo;&nbsp;<em>Law &amp; Contemp. Probs.</em> 86 (3). <a href="https://ssrn.com/abstract=4661146">https://ssrn.com/abstract=4661146</a>.&nbsp;</p>
<p>Colman, Zack, Annie Snyder, and James Bikales. 2025. &ldquo;Why Texas&rsquo; Floods Are a Warning for the Rest of the Country.&rdquo; <em>Politico</em>, July 8. <a href="https://www.politico.com/news/2025/07/08/climate-change-makes-deadly-floods-more-likely-but-washington-is-responding-with-cuts-00441921?utm_content=user/politico&amp;utm_source=flipboard">https://www.politico.com/news/2025/07/08/climate-change-makes-deadly-floods-more-likely-but-washington-is-responding-with-cuts-00441921?utm_content=user/politico&amp;utm_source=flipboard</a>.</p>
<p>Data &amp; Society. n.d. &ldquo;Algorithmic Impact Methods Lab.&rdquo; Accessed January 15, 2026. <a href="https://datasociety.net/research/algorithmic-impact-methods-lab/?tab=About">https://datasociety.net/research/algorithmic-impact-methods-lab/?tab=About</a>.</p>
<p>Data &amp; Society. 2025. <em>Pilot 1 Case Report: The City of San Jos&eacute;, Object Detection</em>. <a href="https://datasociety.net/wp-content/uploads/2025/10/Pilot-1-San-Jose.pdf">https://datasociety.net/wp-content/uploads/2025/10/Pilot-1-San-Jose.pdf</a>.</p>
<p>Dobbe, Roel I. J. 2022.&nbsp;&ldquo;System Safety and Artificial Intelligence.&rdquo; In<em> Oxford Handbook of AI Governance</em>, edited by Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, et al. Oxford University Press.</p>
<p>Dobbe, Roel I. J. 2025. &ldquo;AI Safety is Stuck in Technical Terms&mdash;A System Safety Response to the International AI Safety Report.&rdquo; Preprint, arXiv, February 5. <a href="https://doi.org/10.48550/arXiv.2503.04743">https://doi.org/10.48550/arXiv.2503.04743</a>.</p>
<p>Doty, Nick, and Deirdre K. Mulligan. 2013. &ldquo;Internet Multistakeholder Processes and Techno-Policy Standards: Initial Reflections on Privacy at the World Wide Web Consortium.&rdquo; <em>J. on Telecomm. &amp; High Tech. L.</em> 11 (135).</p>
<p>Dourish, Paul. 2001. <em>Where the Action Is: The Foundations of Embodied Interaction</em>. MIT Press.</p>
<p>Edelman, Lauren B. 2016. <em>Working Law: Courts, Corporations, and Symbolic Civil Rights</em>. University of Chicago Press.</p>
<p>Eliot, Lance. 2025. &ldquo;OpenAI Acknowledges That Lengthy Conversations With ChatGPT And GPT-5 Might Regrettably Escape AI Guardrails.&rdquo; <em>Forbes</em>, August 29. <a href="https://www.forbes.com/sites/lanceeliot/2025/08/29/openai-acknowledges-that-lengthy-conversations-with-chatgpt-and-gpt-5-might-regrettably-escape-ai-guardrails/">https://www.forbes.com/sites/lanceeliot/2025/08/29/openai-acknowledges-that-lengthy-conversations-with-chatgpt-and-gpt-5-might-regrettably-escape-ai-guardrails/</a>.</p>
<p>Exec. Order No. 14110, 88 Fed. Reg. 75191 (Oct. 30, 2023).</p>
<p>Ford, Cristie. 2023. &ldquo;Regulation as Respect.&rdquo; <em>Law &amp; Contemp. Probs.</em> 86: 133&ndash;55. <a href="https://scholarship.law.duke.edu/lcp/vol86/iss3/6">https://scholarship.law.duke.edu/lcp/vol86/iss3/6</a>.</p>
<p>Fox, Stephen, and Juan G. Victores. 2024. &ldquo;Safety of Human&ndash;Artificial Intelligence Systems: Applying Safety Science to Analyze Loopholes in Interactions between Human Organizations, Artificial Intelligence, and Individual People.&rdquo;&nbsp;<em>Informatics</em>&nbsp;11 (2): 36. <a href="https://doi.org/10.3390/informatics11020036">https://doi.org/10.3390/informatics11020036</a>.</p>
<p>Gandhi, Kanishk, Jan-Philipp Fr&auml;nken, Tobias Gerstenberg, and Noah D. Goodman. 2023. &ldquo;Understanding Social Reasoning in Language Models with Language Models.&rdquo; Preprint, arXiv, December 4. <a href="https://arxiv.org/abs/2306.15448">https://arxiv.org/abs/2306.15448</a>.</p>
<p>Gandikota, Rohit, Hadas Orgad, Yonatan Belinkov, Joanna Materzyńska, and David Bau. 2024. &ldquo;Unified Concept Editing in Diffusion Models.&rdquo; Preprint, arXiv, October 22. <a href="https://doi.org/10.48550/arXiv.2505.18003">https://doi.org/10.48550/arXiv.2505.18003</a>.</p>
<p>Ganguli, Deep, Nicholas Schiefer, Marina Favaro, and Jack Clark. 2023. &ldquo;Challenges in Evaluating AI Systems.&rdquo; <em>Anthropic</em>, October 4. <a href="https://www.anthropic.com/index/evaluating-ai-systems">https://www.anthropic.com/index/evaluating-ai-systems</a>.</p>
<p>G.A. Res. 78/265, <em>Seizing the Opportunities of Safe, Secure and Trustworthy Artificial Intelligence Systems for Sustainable Development</em> (Mar. 21, 2024), <a href="https://docs.un.org/en/A/res/78/265">https://docs.un.org/en/A/res/78/265</a>.</p>
<p>Global Network Initiative. n.d.a. &ldquo;Human Rights Due Diligence Across the Technology&nbsp;Ecosystem.&rdquo; <a href="https://eco.globalnetworkinitiative.org/">https://eco.globalnetworkinitiative.org/</a>.</p>
<p>Global Network Initiative. n.d.b. &ldquo;The Importance of an Ecosystem Approach.&rdquo; <a href="https://eco.globalnetworkinitiative.org/ecosystem-approach/">https://eco.globalnetworkinitiative.org/ecosystem-approach/</a>.</p>
<p>Goemans, Arthur, Marie Davidsen Buhl, Jonas Schuett, et al. 2024. &ldquo;Safety Case Template for Frontier AI: A Cyber Inability Argument.&rdquo; Preprint, arXiv, November 12. <a href="https://arxiv.org/pdf/2411.08088">https://arxiv.org/pdf/2411.08088</a>.</p>
<p>Goldenfein, Jake, Deirdre K. Mulligan, Helen Nissenbaum, and Wendy Ju. 2020. &ldquo;Through The Handoff Lens: Competing Visions of Autonomous Futures.&rdquo; <em>Berkeley Technology Law Journal</em> <em>35</em>(3): 835&ndash;910. <a href="https://doi.org/10.15779/Z38CR5ND0J">https://doi.org/10.15779/Z38CR5ND0J</a>.</p>
<p>Gregoire, Courtney. 2024. &ldquo;An Update on Our Approach to Tackling Intimate Image Abuse.&rdquo; <em>Microsoft</em>, September 5. <a href="https://blogs.microsoft.com/on-the-issues/2024/09/05/an-update-on-our-approach-to-tackling-intimate-image-abuse/">https://blogs.microsoft.com/on-the-issues/2024/09/05/an-update-on-our-approach-to-tackling-intimate-image-abuse/</a>.</p>
<p>Guihot, Michael, Anne F. Matthew, and Nicolas P. Suzor. 2020. &ldquo;Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence.&rdquo; <em>Vanderbilt Journal of Entertainment and Technology Law </em>20 (2): 385. <a href="https://scholarship.law.vanderbilt.edu/jetlaw/vol20/iss2/2">https://scholarship.law.vanderbilt.edu/jetlaw/vol20/iss2/2</a>.</p>
<p>Health AI Partnership. 2024. <em>Event Report: A Summit On AI Product Lifecycle Management in Healthcare</em>. <a href="https://drive.google.com/file/d/14qL9MYctX76pd0W87p2lONZnasQ21ucB/view">https://drive.google.com/file/d/14qL9MYctX76pd0W87p2lONZnasQ21ucB/view</a>.</p>
<p>Heikkil&auml;, Melissa. 2024. &ldquo;AI Companies Promised to Self-Regulate One Year Ago. What&rsquo;s Changed?&rdquo; <em>MIT Technology Review</em>, July 22. <a href="https://www.technologyreview.com/2024/07/22/1095193/ai-companies-promised-the-white-house-to-self-regulate-one-year-ago-whats-changed/">https://www.technologyreview.com/2024/07/22/1095193/ai-companies-promised-the-white-house-to-self-regulate-one-year-ago-whats-changed/</a>.</p>
<p>Henderson, Peter, Eric Mitchell, Christopher Manning, Dan Jurafsky, and Chelsea Finn. 2023. &ldquo;Self-Destructing Models: Increasing the Costs of Harmful Dual Uses of Foundation Models.&rdquo; <em>Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society</em>, 287&ndash;96. <a href="https://dl.acm.org/doi/abs/10.1145/3600211.3604690">https://dl.acm.org/doi/abs/10.1145/3600211.3604690</a>.</p>
<p>Henshall, Will. 2024. &ldquo;Nobody Knows How to Safety-Test AI.&rdquo; TIME, March 21. <a href="https://time.com/6958868/artificial-intelligence-safety-evaluations-risks/">https://time.com/6958868/artificial-intelligence-safety-evaluations-risks/</a>.</p>
<p>Inan, Hakan, Kartikeya Upasani, Jianfeng Chi, et al. 2023. &ldquo;Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations.&rdquo; Preprint, arXiv, December 7. <a href="https://doi.org/10.48550/arXiv.2312.06674">https://doi.org/10.48550/arXiv.2312.06674</a>.</p>
<p>International Organization for Standardization and International Electrotechnical Commission. 2023. <em>Information Technology &ndash; Artificial Intelligence &ndash; Guidance on Risk Management</em> (ISO/IEC 23894). <a href="https://www.iso.org/standard/77304.html">https://www.iso.org/standard/77304.html</a>.</p>
<p>Kaminski, Margot E. 2023. &ldquo;Regulating the Risks of AI.&rdquo; <em>Boston University Law Review</em> 103 (5): 1347&ndash;1411. <a href="https://doi.org/10.2139/ssrn.4195066">https://doi.org/10.2139/ssrn.4195066</a>.</p>
<p>Karnofsky, Holden. 2024. &ldquo;If-Then Commitments for AI Risk Reduction.&rdquo; <em>Carnegie Endowment for International Peace</em>, September 13. <a href="https://carnegieendowment.org/research/2024/09/if-then-commitments-for-ai-risk-reduction?lang=en">https://carnegieendowment.org/research/2024/09/if-then-commitments-for-ai-risk-reduction?lang=en</a>.</p>
<p>Kieslich, Kimon, Natali Helberger, and Nicholas Diakopoulos. 2025. &ldquo;Scenario-Based Sociotechnical Envisioning (SSE): An Approach to Enhance Systemic Risk Assessments.&rdquo; Preprint, SocArXiv, January 29. <a href="https://doi.org/10.31235/osf.io/ertsj_v1">https://doi.org/10.31235/osf.io/ertsj_v1</a>.</p>
<p>Kim, Jee Young, William Boag, Freya Gulamali, et al. 2023. &ldquo;Organizational Governance of Emerging Technologies: AI Adoption in Healthcare.&rdquo; <em>FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency</em>, 1396&ndash;417. <a href="https://doi.org/10.1145/3593013.3594089">https://doi.org/10.1145/3593013.3594089</a>.</p>
<p>Knabb, Richard D., Jamie R. Rhome, and Daniel P. Brown. 2023. <em>Tropical Cyclone Report: Hurricane Katrina</em>. National Hurricane Center. <a href="https://www.nhc.noaa.gov/data/tcr/AL122005_Katrina.pdf">https://www.nhc.noaa.gov/data/tcr/AL122005_Katrina.pdf</a>.</p>
<p>Knowles, Scott Gabriel. 2014. &ldquo;Learning from Disaster? The History of Technology and the Future of Disaster Research.&rdquo; <em>Technology and Culture</em> 55 (4): 773&ndash;84. <a href="http://www.jstor.org/stable/24468470">http://www.jstor.org/stable/24468470</a>.</p>
<p>Kuehnert, Blaine, Rachel Kim, Jodi Forlizzi, and Hoda Heidari. 2025. &ldquo;The &lsquo;Who,&rsquo; &lsquo;What,&rsquo; and &lsquo;How&rsquo; of Responsible AI Governance: A Systematic Review and Meta-Analysis of (Actor, Stage)-Specific Tools.&rdquo; <em>FAccT '25: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency</em>, 2991&ndash;3005. <a href="https://doi.org/10.1145/3715275.3732191">https://doi.org/10.1145/3715275.3732191</a>.</p>
<p>Jones, Erik, Robin Jia, Aditi Raghunathan, and Percy Liang. 2020. &ldquo;Robust Encodings: A Framework for Combating Adversarial Typos.&rdquo; <em>Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics</em>, 2752-65. <a href="https://aclanthology.org/2020.acl-main.245/">https://aclanthology.org/2020.acl-main.245/</a>.</p>
<p>Layne, Nathan. 2021. &ldquo;New Orleans&rsquo; Levees Got a $14.5 Billion Upgrade. Will They Hold?&rdquo; <em>Reuters</em>, August 29. <a href="https://www.reuters.com/world/us/new-orleans-levees-got-145-billion-upgrade-will-they-hold-2021-08-30/">https://www.reuters.com/world/us/new-orleans-levees-got-145-billion-upgrade-will-they-hold-2021-08-30/</a>.</p>
<p>Leveson, Nancy. 2012. <em>Engineering a Safer World: Systems Thinking Applied to Safety. </em>MIT Press.</p>
<p>Longpre, Shayne, Kevin Klyman, Ruth E. Appel, et al. 2025. &ldquo;In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI.&rdquo; Preprint, arXiv, March 21.&nbsp;<a href="https://doi.org/10.48550/arXiv.2503.16861">https://doi.org/10.48550/arXiv.2503.16861</a>.</p>
<p>Marchant, Gary E., and Yvonne A. Stevens. 2017. &ldquo;Resilience: A New Tool in the Risk Governance Toolbox for Emerging Technologies.&rdquo; <em>U.C. Davis L. Rev.</em> 51: 233&ndash;36.</p>
<p>Marda, Nik, Jasmine Sun, and Mark Surman. 2024. &ldquo;Public AI: Making AI Work For Everyone, By Everyone.&rdquo; <em>Mozilla</em>. <a href="https://assets.mofoprod.net/network/documents/Public_AI_Mozilla.pdf">https://assets.mofoprod.net/network/documents/Public_AI_Mozilla.pdf</a>.</p>
<p>McGlynn, Clare, and Erika Rackley. 2017. &ldquo;Image-Based Sexual Abuse.&rdquo; <em>Oxford Journal of Legal Studies</em>, 37 (3): 534&ndash;61. <a href="https://doi.org/10.1093/ojls/gqw033">https://doi.org/10.1093/ojls/gqw033</a>.</p>
<p>MdTF. n.d. &ldquo;The Maryland Test Facility.&rdquo; Accessed February 25, 2026. <a href="https://mdtf.org/">https://mdtf.org/</a>.</p>
<p>Metcalf, Jacob, Emanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, and Madeleine Clare Elish. 2021. &ldquo;Algorithmic Impact Assessments and Accountability: The Co-Construction of Impacts.&rdquo; <em>FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency</em>, 735&ndash;46. <a href="https://doi.org/10.1145/3442188.3445935">https://doi.org/10.1145/3442188.3445935</a>.</p>
<p>METR. 2023. &ldquo;Responsible Scaling Policies (RSPs).&rdquo; METR, September 26. Accessed January 7, 2025. <a href="https://metr.org/blog/2023-09-26-rsp/">https://metr.org/blog/2023-09-26-rsp/</a>.</p>
<p>Mittelstadt, Brent D. 2019. &ldquo;Principles Alone Cannot Guarantee Ethical AI.&rdquo; <em>Nature Machine Intelligence</em> 1: 501&ndash;7. <a href="https://doi.org/10.1038/s42256-019-0114-4">https://doi.org/10.1038/s42256-019-0114-4</a>.</p>
<p>Mulligan, Deirdre K., and Kenneth A. Bamberger. 2018. &ldquo;Saving Governance-By-Design.&rdquo; <em>California Law Review</em> 106 (3): 697&ndash;784. <a href="https://doi.org/10.15779/Z38QN5ZB5H">https://doi.org/10.15779/Z38QN5ZB5H</a>.</p>
<p>Mulligan, Deirdre K., and Kenneth A. Bamberger. 2021. &ldquo;Allocating Responsibility In Content Moderation: A Functional Framework.&rdquo; <em>Berkeley Technology Law Journal</em>, <em>36 </em>(3): 1091&ndash;172. <a href="https://doi.org/10.15779/Z383B5W872">https://doi.org/10.15779/Z383B5W872</a>.</p>
<p>Mulligan, Deirdre K., and Helen Nissenbaum. 2020. &ldquo;The Concept of Handoff as a Model for Ethical Analysis and Design.&rdquo; In <em>Oxford Handbook of Ethics of AI</em>, edited by Markus D. Dubber, Frank Pasquale, and Sunit Das. Oxford University Press.</p>
<p>Narayanan, Arvind, and Sayash Kapoor. 2024. &ldquo;AI Safety is Not a Model Property.&rdquo; <em>AI As Normal Technology</em>, March 12. <a href="https://www.normaltech.ai/p/ai-safety-is-not-a-model-property">https://www.normaltech.ai/p/ai-safety-is-not-a-model-property</a>.</p>
<p>NAIAC (National Artificial Intelligence Advisory Committee). 2023. <em>National Artificial Intelligence Advisory Committee Year 1</em>. <a href="https://web.archive.org/web/20230905003617/https://www.ai.gov/wp-content/uploads/2023/05/NAIAC-Report-Year1.pdf">https://web.archive.org/web/20230905003617/https://www.ai.gov/wp-content/uploads/2023/05/NAIAC-Report-Year1.pdf</a>.</p>
<p>NASEM (National Academies of Sciences, Engineering, and Medicine). 2021.&nbsp;<em>Assessing and Improving AI Trustworthiness: Current Contexts and Concerns: Proceedings of a Workshop&ndash;in Brief</em>. National Academies Press. <a href="https://doi.org/10.17226/26208">https://doi.org/10.17226/26208</a>.</p>
<p>National Fair Housing Alliance. n.d. &ldquo;Algorithmic Bias in Housing and Lending.&rdquo; Accessed February 25, 2026. <a href="https://nationalfairhousing.org/issue/tech-equity-initiative/">https://nationalfairhousing.org/issue/tech-equity-initiative/</a>.</p>
<p>Nguyen, Stephanie. 2025. &ldquo;AI-Related Programmatic Advances at the FTC (June 2021 - January 2025).&rdquo; <em>Federal Trade Commission</em>, January 17. <a href="https://www.ftc.gov/news-events/news/public-statements/ai-related-programmatic-advances-ftc-june-2021-january-2025">https://www.ftc.gov/news-events/news/public-statements/ai-related-programmatic-advances-ftc-june-2021-january-2025</a>.</p>
<p>NIST (National Institute of Standards and Technology). 2023. <em>Artificial Intelligence Risk Management Framework</em> (NIST AI 100-1). <a href="https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf">https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf</a>.</p>
<p>NIST (National Institute of Standards and Technology). 2024a. <em>Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile</em> (NIST AI 600-1). <a href="https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf">https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf</a>.</p>
<p>NIST (National Institute of Standards and Technology). 2024b. <em>Managing Misuse Risk for Dual-Use Foundation Models</em> (NIST AI 800-1). <a href="https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.800-1.ipd.pdf">https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.800-1.ipd.pdf</a>.</p>
<p>NIST (National Institute of Standards and Technology). 2025. &ldquo;CAISI Works with OpenAI and Anthropic to Promote Secure AI Innovation.&rdquo; <em>National Institute of Standards and Technology</em>, September 25. <a href="https://www.nist.gov/news-events/news/2025/09/caisi-works-openai-and-anthropic-promote-secure-ai-innovation">https://www.nist.gov/news-events/news/2025/09/caisi-works-openai-and-anthropic-promote-secure-ai-innovation</a>.</p>
<p>Noble, Safiya Umoja. 2018. <em>Algorithms of Oppression: How Search Engines Reinforce Racism</em>. New York University Press.</p>
<p>NSF (National Science Foundation). n.d. &ldquo;National Artificial Intelligence Research Resource Pilot.&rdquo; Accessed January 15, 2026. <a href="https://www.nsf.gov/focus-areas/ai/nairr">https://www.nsf.gov/focus-areas/ai/nairr</a>.</p>
<p>NSF (National Science Foundation). 2023. &ldquo;NSF Announces 7 New National Artificial Intelligence Research Institutes.&rdquo; <em>U.S. National Science Foundation</em>, May 4. <a href="https://www.nsf.gov/news/nsf-announces-7-new-national-artificial?sf176473159=1">https://www.nsf.gov/news/nsf-announces-7-new-national-artificial?sf176473159=1</a>.</p>
<p>NSF (National Science Foundation). 2024. &ldquo;Responsible Design, Development, and Deployment of Technologies (ReDDDoT).&rdquo; January 8. <a href="https://www.nsf.gov/funding/opportunities/redddot-responsible-design-development-deployment-technologies/506215/nsf24-524">https://www.nsf.gov/funding/opportunities/redddot-responsible-design-development-deployment-technologies/506215/nsf24-524</a>.</p>
<p>NSF (National Science Foundation). 2025. &ldquo;NSF and NVIDIA Partnership Enables Ai2 to Develop Fully Open AI Models to Fuel U.S. Scientific Innovation.&rdquo; August 14. <a href="https://www.nsf.gov/news/nsf-nvidia-partnership-enables-ai2-develop-fully-open-ai">https://www.nsf.gov/news/nsf-nvidia-partnership-enables-ai2-develop-fully-open-ai</a>.</p>
<p>O&rsquo;Brien, Kyle, Stephen Casper, Quentin Anthony, et al. 2025. &ldquo;Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs.&rdquo; Preprint, arXiv, August 8. https://arxiv.org/abs/2508.06601.</p>
<p>OMB (Office of Management and Budget). 2024. <em>Guidance For Agency Artificial Intelligence Reporting per EO 14110</em>. <em>Office of Management and Budget</em>, August 14. <a href="https://www.cio.gov/assets/resources/2024-Guidance-for-AI-Use-Case-Inventories.pdf">https://www.cio.gov/assets/resources/2024-Guidance-for-AI-Use-Case-Inventories.pdf</a>.</p>
<p>OpenAI. 2025. <em>GPT-5 System Card.</em> August 13. <a href="https://cdn.openai.com/gpt-5-system-card.pdf">https://cdn.openai.com/gpt-5-system-card.pdf</a>.</p>
<p>Ordo&ntilde;ez, Franco. 2023. &ldquo;These Tech Giants Are at The White House Today to Talk About the Risks of AI.&rdquo; <em>NPR</em>, September 12. <a href="https://www.npr.org/2023/09/12/1198885516/these-tech-giants-are-at-the-white-house-today-to-talk-about-the-risks-of-ai">https://www.npr.org/2023/09/12/1198885516/these-tech-giants-are-at-the-white-house-today-to-talk-about-the-risks-of-ai</a>.</p>
<p>Polemi, Nineta, Isabel Pra&ccedil;a, Kitty Kioskli, and Adrien B&eacute;cue. 2024. &ldquo;Challenges and Efforts in Managing AI Trustworthiness Risks: A State of Knowledge.&rdquo;&nbsp;<em>Frontiers in Big Data</em>&nbsp;7. <a href="https://doi.org/10.3389/fdata.2024.1381163">https://doi.org/10.3389/fdata.2024.1381163</a>.</p>
<p>Prabhakar, Arati, and Jennifer Klein. 2024. &ldquo;A Call to Action to Combat Image-Based Sexual&nbsp;Abuse.&rdquo; <em>White House Office of Science and Technology Policy</em>, May 23. <a href="https://bidenwhitehouse.archives.gov/ostp/news-updates/2024/05/23/a-call-to-action-to-combat-image-based-sexual-abuse/">https://bidenwhitehouse.archives.gov/ostp/news-updates/2024/05/23/a-call-to-action-to-combat-image-based-sexual-abuse/</a>.</p>
<p>Raji, Inioluwa Deborah, and Robel Dobbe. 2023. &ldquo;Concrete Problems in AI Safety, Revisited.&rdquo; Preprint, arXiv, December 18. <a href="https://arxiv.org/abs/2401.10899">https://arxiv.org/abs/2401.10899</a>.</p>
<p>Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act), 2022 O.J. (L 277) 1.</p>
<p>Regulation (EU) 2024/1689, of the European Parliament and of the Council of 13 June 2024, laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), 2024 O.J. (L 1689) 1.</p>
<p>Scherer, Matthew U. 2016. &ldquo;Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies.&rdquo; <em>Harv. J.L. &amp; Tech.</em> 29 (2): 353&ndash;6. <a href="http://dx.doi.org/10.2139/ssrn.2609777">http://dx.doi.org/10.2139/ssrn.2609777</a>.</p>
<p>Sendak, Mark P., William Ratliff, Dina Sarro, et al. 2020. &ldquo;Real-World Integration of a Sepsis Deep Learning Technology Into Routine Clinical Care: Implementation Study.&rdquo; <em>JMIR Medical Informatics</em> 8 (7): e15182. doi:10.2196/15182.</p>
<p>Smuha, Nathalie A., Emma Rengers, Adam Harkens, et al. 2021. &ldquo;How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission&rsquo;s Proposal for an Artificial Intelligence Act.&rdquo; <em>SSRN</em>. <a href="http://dx.doi.org/10.2139/ssrn.3899991">http://dx.doi.org/10.2139/ssrn.3899991</a>.</p>
<p>Solow-Niederman, Alicia. 2020. &ldquo;Administering Artificial Intelligence.&rdquo; <em>S. Cal. L. Rev.</em> 93: 633&ndash;96. <a href="http://dx.doi.org/10.2139/ssrn.3495725">http://dx.doi.org/10.2139/ssrn.3495725</a>.</p>
<p>Stein, Merlin, Milan Gandhi, Theresa Kriecherbauer, Amin Oueslati, and Robert Trager. 2024. &ldquo;Public vs Private Bodies: Who Should Run Advanced AI Evaluations and Audits? A Three-Step Logic Based on Case Studies of High-Risk Industries.&rdquo; <em>Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society</em>, 7 (1): 1401&ndash;15. <a href="https://doi.org/10.1609/aies.v7i1.31733">https://doi.org/10.1609/aies.v7i1.31733</a>.</p>
<p>Suchman, Lucy A. 1987. <em>Plans and Situated Actions: The Problem of Human-Machine Communication. </em>Cambridge University Press.</p>
<p>Surman, Mark, and Ayah Bdeir. 2025. &ldquo;ROOST: Open Source AI Safety for Everyone.&rdquo; <em>Mozilla</em>, February 10. <a href="https://blog.mozilla.org/en/mozilla/ai/roost-launch-ai-safety-tools-nonprofit/">https://blog.mozilla.org/en/mozilla/ai/roost-launch-ai-safety-tools-nonprofit/</a>.</p>
<p>The Leadership Conference on Civil and Human Rights. 2023. &ldquo;The Leadership Conference Education Fund Announces Its &lsquo;Center for Civil Rights and Technology,&rsquo; a First of Its Kind Research and Advocacy Hub.&rdquo; September 7. <a href="https://civilrights.org/2023/09/07/the-leadership-conference-education-fund-announces-its-center-for-civil-rights-and-technology-a-first-of-its-kind-research-and-advocacy-hub/">https://civilrights.org/2023/09/07/the-leadership-conference-education-fund-announces-its-center-for-civil-rights-and-technology-a-first-of-its-kind-research-and-advocacy-hub/</a>.</p>
<p>Thiel, David. 2023. <em>Identifying and Eliminating CSAM in Generative ML Training Data and Models</em>. Stanford Internet Observatory. <a href="https://www.congress.gov/118/meeting/house/116913/documents/HHRG-118-JU08-20240306-SD005-U5.pdf">https://www.congress.gov/118/meeting/house/116913/documents/HHRG-118-JU08-20240306-SD005-U5.pdf</a>.</p>
<p>Tran, Marc. 2015. &ldquo;Combatting Gender Privilege and Recognizing a Woman's Right to Privacy in Public Spaces: Arguments to Criminalize Catcalling and Creepshots.&rdquo; <em>Hastings J. Gender &amp; L.</em> 26 (2): 185&ndash;206. <a href="https://repository.uclawsf.edu/hwlj/vol26/iss2/1">https://repository.uclawsf.edu/hwlj/vol26/iss2/1</a>.</p>
<p>U.S. Congress. 2022. Violence Against Women Act Reauthorization Act of 2022. Pub. L. No. 117-103, 136 Stat. 840.</p>
<p>U.S. Congress. Senate. Committee on the Judiciary. Subcommittee on Privacy, Technology, and the Law. 2023a. <em>Oversight of AI: Rules for Artificial Intelligence</em>. Hearing, 118th Cong. (Testimony and Questions for the Record of Sam Altman, Chief Executive Officer, OpenAI. Testimony advocating for licensing or registration requirements that would ensure risk management practices are applied to &ldquo;AI models above a crucial threshold of capabilities.&rdquo; Questions for the Record: &ldquo;For future generations of the most highly capable foundation models, which are likely to prove more capable than models that have been previously shown to be safe, we support the development of registration, disclosure, and licensing requirements&hellip;. Licensees could be required to perform pre-deployment risk assessments and adopt state-of-the-art security and deployment safeguards.&rdquo;)</p>
<p>U.S. Congress. Senate. Committee on the Judiciary. Subcommittee on Privacy, Technology, and the Law. 2023b. <em>Oversight of AI: Rules for Artificial Intelligence</em>. Hearing, 118th Cong. (Testimony and Questions for the Record of Gary Marcus, Professor Emeritus, New York University. Testimony arguing for &ldquo;independent scientists access&rdquo; to test AI systems &ldquo;before they are widely released &ndash; as part of a clinical trial-like safety evaluation.&rdquo; Questions for the Record advocating the creation of &ldquo;an FDA-like regulatory regime for AI that evaluates large-scale deployment, balancing risks and benefits.&rdquo;)</p>
<p>U.S. Congress. Senate. Committee on the Judiciary. Subcommittee on Privacy, Technology, and the Law. 2023c. <em>Oversight of AI: Rules for Artificial Intelligence</em>. Hearing, 118th Cong. (Testimony of Christina Montgomery, Chief Privacy and Trust Officer, IBM, advocating for a &ldquo;risk-based, use-case specific approach&rdquo; to AI regulation.)</p>
<p>U.S. Congress. 2025. Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act (Take It Down Act). Pub. L. No. 119-12, 139 Stat. 55.</p>
<p>U.S. Department of State, Bureau of Cyberspace and Digital Policy. 2024. <em>Risk Management Profile for Artificial Intelligence and Human Rights</em>. <a href="https://2021-2025.state.gov/risk-management-profile-for-ai-and-human-rights/">https://2021-2025.state.gov/risk-management-profile-for-ai-and-human-rights/</a>.</p>
<p>Vidgen, Bertie, Adarsh Agrawal, Ahmed M. Ahmed, et al. 2024. &ldquo;Introducing v0.5 of the AI Safety Benchmark from MLCommons.&rdquo; Preprint, arXiv, April 18. <a href="https://arxiv.org/abs/2404.12241">https://arxiv.org/abs/2404.12241</a>.</p>
<p>Vought, Russell T. 2025. &ldquo;M-25-21 Memorandum for the Heads of Executive Departments and Agencies: Accelerating Federal Use of AI through Innovation, Governance, and Public Trust.&rdquo; <em>Office of Management and Budget</em>. <a href="https://www.whitehouse.gov/wp-content/uploads/2025/02/M-25-21-Accelerating-Federal-Use-of-AI-through-Innovation-Governance-and-Public-Trust.pdf">https://www.whitehouse.gov/wp-content/uploads/2025/02/M-25-21-Accelerating-Federal-Use-of-AI-through-Innovation-Governance-and-Public-Trust.pdf</a>.</p>
<p>Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2020. &ldquo;Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI.&rdquo; <em>Computer Law &amp; Security Review</em> 41: 46&ndash;47. <a href="http://dx.doi.org/10.2139/ssrn.3547922">http://dx.doi.org/10.2139/ssrn.3547922</a>.</p>
<p>Wachter, Sandra. 2024. &ldquo;Limitations and Loopholes in the EU AI Act and AI Liability Directives: What This Means for the European Union, the United States, and Beyond.&rdquo; <em>Yale Journal of Law &amp; Technology</em> 26 (3): 671&ndash;718. <a href="http://dx.doi.org/10.2139/ssrn.4924553">http://dx.doi.org/10.2139/ssrn.4924553</a>.</p>
<p>Wasil, Akash R., Joshua Clymer, David Krueger, Emily Dardaman, Simeon Campos, and Evan R. Murphy. 2024. &ldquo;Affirmative Safety: An Approach to Risk Management for High-Risk AI.&rdquo; Preprint, arXiv, April 14. <a href="https://doi.org/10.48550/arXiv.2406.15371">https://doi.org/10.48550/arXiv.2406.15371</a>.</p>
<p>Wei, Alexander, Nika Haghtalab, and Jacob Steinhardt. 2024. &ldquo;Jailbroken: How Does LLM Safety Training Fail?&rdquo; <em>Proceedings of 37th Conference on Neural Information Processing Systems (NeurIPS 2023).</em> <a href="https://proceedings.neurips.cc/paper_files/paper/2023/file/fd6613131889a4b656206c50a8bd7790-Paper-Conference.pdf">https://proceedings.neurips.cc/paper_files/paper/2023/file/fd6613131889a4b656206c50a8bd7790-Paper-Conference.pdf</a>.</p>
<p>Weidinger, Laura, Maribeth Rauh, Nahema Marchal, et al. 2023. &ldquo;Sociotechnical Safety Evaluation of Generative AI Systems.&rdquo; <em>Google DeepMind</em>. Preprint, arXiv, October 31. <a href="https://doi.org/10.48550/arXiv.2310.11986">https://doi.org/10.48550/arXiv.2310.11986</a>.</p>
<p>Weidinger, Laura, Joslyn Barnhart, Jenny Brennan, et al. 2024. &ldquo;Holistic Safety and Responsibility Evaluations of Advanced AI Models.&rdquo; Preprint, arXiv, April 22.&nbsp;<a href="https://doi.org/10.48550/arXiv.2404.14068">https://doi.org/10.48550/arXiv.2404.14068</a>.</p>
<p>White House. 2023. <em>Voluntary AI Commitments. </em><a href="https://bidenwhitehouse.archives.gov/wp-content/uploads/2023/09/Voluntary-AI-Commitments-September-2023.pdf">https://bidenwhitehouse.archives.gov/wp-content/uploads/2023/09/Voluntary-AI-Commitments-September-2023.pdf</a>.</p>
<p>White House. 2024a. &ldquo;White House Announces New Private Sector Voluntary Commitments to Combat Image-Based Sexual Abuse.&rdquo; September 12<em>. </em><a href="https://bidenwhitehouse.archives.gov/ostp/news-updates/2024/09/12/white-house-announces-new-private-sector-voluntary-commitments-to-combat-image-based-sexual-abuse/">https://bidenwhitehouse.archives.gov/ostp/news-updates/2024/09/12/white-house-announces-new-private-sector-voluntary-commitments-to-combat-image-based-sexual-abuse/</a>.</p>
<p>White House. 2024b. &ldquo;Fact Sheet: Key AI Accomplishments in the Year Since the Biden-Harris Administration's Landmark Executive Order.&rdquo; October 30<em>. </em><a href="https://bidenwhitehouse.archives.gov/briefing-room/statements-releases/2024/10/30/fact-sheet-key-ai-accomplishments-in-the-year-since-the-biden-harris-administrations-landmark-executive-order/">https://bidenwhitehouse.archives.gov/briefing-room/statements-releases/2024/10/30/fact-sheet-key-ai-accomplishments-in-the-year-since-the-biden-harris-administrations-landmark-executive-order/</a>.</p>
<p>White House OSTP (Office of Science and Technology Policy). 2022. <em>Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People</em>. The White House. <a href="https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/">https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/</a>.</p>
<p>White House OSTP (Office of Science and Technology Policy). 2024. <em>Framework for Nucleic Acid Synthesis Screening</em>. The White House. <a href="https://bidenwhitehouse.archives.gov/wp-content/uploads/2024/04/Nucleic-Acid_Synthesis_Screening_Framework.pdf">https://bidenwhitehouse.archives.gov/wp-content/uploads/2024/04/Nucleic-Acid_Synthesis_Screening_Framework.pdf</a>.</p>
<p>Yang, Stephen. 2024. &ldquo;Beyond High-Risk Scenarios: Recentering the Everyday Risks of AI.&rdquo; <em>Center for Democracy and Technology</em>, October 22. <a href="https://cdt.org/insights/beyond-high-risk-scenarios-recentering-the-everyday-risks-of-ai/">https://cdt.org/insights/beyond-high-risk-scenarios-recentering-the-everyday-risks-of-ai/</a>. (Noting that &ldquo;Anthropic&rsquo;s <a href="https://www.anthropic.com/news/anthropics-responsible-scaling-policy">Responsible Scaling Policy</a> (RSP) and OpenAI&rsquo;s <a href="https://openai.com/preparedness/">Preparedness Framework</a> focus on mitigating risks associated with doomsday scenarios where AI contributes to existential risks, such as pandemics and nuclear wars.&rdquo;)</p>
<p>Young, Shalanda D. 2024. &ldquo;M-24-10 Memorandum for the Heads of Executive Departments and Agencies: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.&rdquo; <em>Office of Management and Budget</em>. <a href="https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf">https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf</a>.</p>
<p>&copy; 2026, Deirdre K. Mulligan, Nik Marda, and Victor Zhenyi Wang</p>
<p>Cite as: Deirdre K. Mulligan, Nik Marda, and Victor Zhenyi Wang, A Conceptual Model to Guide AI Risk Governance Strategies, 26-3 Knight First Amend. Inst. (Mar. 16, 2026), <a href="https://knightcolumbia.org/content/a-conceptual-model-to-guide-ai-risk-governance-strategies-1">https://knightcolumbia.org/content/a-conceptual-model-to-guide-ai-risk-governance-strategies-1</a> [<a href="https://perma.cc/WRD4-ZPC4">https://perma.cc/WRD4-ZPC4</a>].&nbsp;</p>]]></description>
      <guid isPermaLink="false">/content/a-conceptual-model-to-guide-ai-risk-governance-strategies-1</guid>
      <pubDate>Mon, 16 Mar 2026 00:00:00 -0700</pubDate>
    </item>
        <item>
      <title><![CDATA[Free Expression and the Rights of Non-Citizens]]></title>
      <link>https://knightcolumbia.org/content/free-expression-and-the-rights-of-non-citizens</link>
      <description><![CDATA[<p>The arrest takes place in less than two minutes. Security camera footage shows a woman wearing a white coat and a pink headscarf walking along a Massachusetts street. Suddenly, a group of people dressed in black swoop toward her. She screams. &ldquo;We&rsquo;re the police,&rdquo; one of the figures says, as the group handcuffs the woman&mdash;30-year-old R&uuml;meysa &Ouml;zt&uuml;rk, a Turkish Ph.D. student at Tufts University&mdash;and pushes her into a black van.</p>
<p>Immigration officers <a href="https://www.cnn.com/2025/03/27/us/rumeysa-ozturk-detained-what-we-know">grabbed</a> &Ouml;zt&uuml;rk off the street on March 26, 2025, just days after the State Department voided her student visa. The previous year, she had <a href="https://www.tuftsdaily.com/article/2024/03/4ftk27sm6jkj">coauthored an op-ed</a> in the Tufts student newspaper encouraging the university to more seriously consider a student-backed resolution in support of divesting the university&rsquo;s investments linked to Israel. As far as the Trump administration was concerned, this apparently constituted sufficient grounds to revoke her visa. After federal officers seized &Ouml;zt&uuml;rk, she was <a href="https://www.washingtonpost.com/nation/2025/05/07/rumeysa-ozturk-tufts-ice-detention/">shipped</a> to immigration detention facilities in Vermont and then Louisiana. It took six weeks for her to be <a href="https://www.cnn.com/2025/05/09/us/rumeysa-ozturk-tufts-bail-release">released</a> on bail; over the ten months that followed, litigation continued over her freedom and the government&rsquo;s quest to remove her from the country. Early in 2026, just under a year into the second Trump administration, an immigration judge <a href="https://www.nytimes.com/2026/02/10/us/immigration-judge-tufts-student-rumeysa-ozturk.html">finally dismissed</a> the government&rsquo;s efforts to deport her.</p>
<p>Although &Ouml;zt&uuml;rk&rsquo;s case may be unusually extreme, it is not unique. Shortly after Donald Trump took office for a second time, his administration embarked on a campaign of arrests and deportation threats against noncitizen students who had protested or in some way voiced support for Palestinians or criticism of the Israeli government. The Knight Institute, in its own litigation against this campaign, <a href="https://knightcolumbia.org/documents/xicmg5tgiw">termed the effort</a> an &ldquo;ideological deportation policy.&rdquo;</p>
<p>The effort has since run aground in the courts, as judges have repeatedly expressed concerns about the constitutionality of attempted deportations like &Ouml;zt&uuml;rk&rsquo;s. In <em>AAUP v. Rubio, </em>the Knight Institute&rsquo;s case, Judge William Young <a href="https://knightcolumbia.org/documents/ahmr9jfap2">ruled</a> that the overall policy violated the First Amendment rights of both the students and their interlocutors. Still, despite these courtroom losses, the administration has successfully instilled an environment of fear. Multiple friends who had arrived in the United States from politically repressive countries told me that the video of &Ouml;zt&uuml;rk&rsquo;s arrest reminded them of how things had worked back home&mdash;exactly what they had immigrated to the United States to escape. Now, in America, they were newly weighing their desire to speak freely against the precarity of their status as immigrants. Even a green card was no longer adequate protection: Along with &Ouml;zt&uuml;rk, the administration has been <a href="https://www.nytimes.com/2026/01/15/nyregion/mahmoud-khalil-detention.html">attempting to deport</a> the Columbia student and activist Mahmoud Khalil, who had <a href="https://www.washingtonpost.com/immigration/2025/04/11/khalil-hearing-trump-columbia-deportation/">become</a> a legal permanent resident in November 2024.</p>
<p>It is not a coincidence that the Trump administration&rsquo;s most vicious attacks on free expression have emerged in the area of immigration. Panic over immigration, after all, appears to be the driving force behind much of the government&rsquo;s policymaking. But immigration is also a realm of domestic policy in which the executive&rsquo;s powers are bound by fewer constraints, and in which its enforcement decisions enjoy great deference from the judiciary.</p>
<p>As the immigration policy scholar Dara Lind <a href="https://www.nytimes.com/2025/06/16/opinion/trump-immigration-deportations-rights.html">has explored</a>, this attack on the freedom of immigrants emphasizes the distinctions in status between those who are citizens and those who are not. Yet it also erodes the value of citizenship itself by chipping away at the rights of Americans who oppose the administration&rsquo;s policies. The federal government has responded with fury to efforts by onlookers to document abuses by U.S. Immigration and Customs Enforcement and Customs and Border Protection, <a href="https://www.reuters.com/world/us/ice-is-cracking-down-people-who-follow-them-their-cars-2026-02-10/">charging</a> mounting numbers of them with obstructing or assaulting federal officers. These prosecutions&mdash;many of which are flimsy and have been either <a href="https://www.cbsnews.com/chicago/news/charges-dropped-broadview-ice-protesters-grand-jury/">dropped by the government</a> or rejected by <a href="https://thehill.com/regulation/court-battles/5737914-charges-dropped-ice-officer-assault/">judges</a> and <a href="https://apnews.com/article/ice-immigration-protests-prosecutions-doj-arrests-591f155d50c13756842e033ea23f16d3">juries</a>&mdash;constitute an attack on free expression, too. They reflect the government&rsquo;s inability to accept that anyone might legitimately oppose its brutal approach to immigration enforcement. Prosecuting observers and protesters is an easy way to frighten off others who might otherwise bear witness.</p>
<p>In the course of my <a href="https://www.theatlantic.com/ideas/2025/11/favorite-statute-section-111-ice/684961/">reporting for </a><a href="https://www.theatlantic.com/ideas/2025/11/favorite-statute-section-111-ice/684961/"><em>The Atlantic</em></a><em>, </em>I&rsquo;ve been following these prosecutions of bystanders closely. It&rsquo;s possible to imagine simple reforms that might limit the government&rsquo;s ability to bring such cases abusively. For example, the underlying statute, <a href="https://www.law.cornell.edu/uscode/text/18/111">18 U.S.C. &sect; 111</a>, might be tweaked to emphasize that a person who &ldquo;forcibly assaults, resists, opposes, impedes, intimidates, or interferes&rdquo; with a federal officer may only be charged if their action significantly impairs the officer&rsquo;s ability to carry out their duties. This would limit the government&rsquo;s ability to prosecute, say, someone who <a href="https://www.pbs.org/newshour/nation/man-who-threw-sandwich-at-federal-agent-in-d-c-says-it-was-a-protest-prosecutors-say-its-felony-assault">threw</a> a sandwich at a CBP officer in protest, or someone who <a href="https://www.wusa9.com/article/news/legal/sidney-reid-trial-not-guilty-fbi-agent-ice-arrest-assault-charge-dc-jail/65-fa3b180e-e72f-43d2-ad36-bbcc91fad9d0">scraped</a> an officer&rsquo;s hand when the officer pushed her up against a wall to stop her from filming an immigration arrest.</p>
<p>But this seems to me to elide the real underlying issue, which is the construction of immigration policy and enforcement as an area where speech protections are weakened in the face of executive power. Tackling this problem in earnest would require rethinking legal structures and unwinding doctrines of judicial deference that are deeply built into the law.</p>
<p>In addition, though, free-speech advocates have a simpler target: the provisions of the U.S. Code that allow the executive to <a href="https://www.law.cornell.edu/uscode/text/8/1227">deport</a> or <a href="https://www.law.cornell.edu/uscode/text/8/1182">deny entry</a> to noncitizens whose presence or activities the secretary of state has &ldquo;reasonable ground to believe&rdquo; would pose &ldquo;potentially serious adverse foreign policy consequences for the United States.&rdquo; The State Department <a href="https://www.lawfaremedia.org/article/the-trump-admin-s-embrace-of-ideological-exclusion-and-deportation">relied on this authority</a> in its effort to deport &Ouml;zt&uuml;rk and Khalil. When these provisions were passed into law in 1990, Congress <a href="https://www.documentcloud.org/documents/26033341-7f7043c8-full/?q=merely&amp;mode=document#document/p129">emphasized</a> its intent that the executive would use this power &ldquo;sparingly and not merely because there is a likelihood that an alien will make critical remarks about the United States or its policies.&rdquo; Clearly, this did not work out. There are few downsides to repealing these provisions: The authority was little-used before the second Trump administration, and the government has plenty of discretion to regulate entry into the country on grounds other than political opinions.</p>
<p>Revoking these authorities will not, by any means, solve the problem of the weakened protections for free expression enjoyed by immigrants. But it would make their persecution more difficult. The government&mdash;in the way of all bullies&mdash;targeted &Ouml;zt&uuml;rk and her fellow students primarily because they were vulnerable. The solution, in part, is to make them less so.</p>]]></description>
      <guid isPermaLink="false">/content/free-expression-and-the-rights-of-non-citizens</guid>
      <pubDate>Thu, 12 Mar 2026 00:00:00 -0700</pubDate>
    </item>
        <item>
      <title><![CDATA[Technology Researchers Challenge Trump Policy Threatening Deportation for Work on Social Media Platforms and Online Harms]]></title>
      <link>https://knightcolumbia.org/content/technology-researchers-challenge-trump-policy-threatening-deportation-for-work-on-social-media-platforms-and-online-harms</link>
      <description><![CDATA[<p dir="ltr">WASHINGTON&mdash;The Knight First Amendment Institute at Columbia University and Protect Democracy today filed a lawsuit in federal court on behalf of the Coalition for Independent Technology Research (CITR) challenging the constitutionality of a new U.S. immigration policy that targets noncitizen researchers, advocates, fact-checkers, and trust and safety workers for visa denials, revocations, detention, and deportation based on their work researching and reporting on social media platforms. The group alleges that the policy, which purportedly aims to combat &ldquo;censorship&rdquo; of Americans&rsquo; speech on the internet, violates the First Amendment and chills independent research about social media and other internet platforms.</p>
<p dir="ltr">&ldquo;The Trump administration is using the threat of detention and deportation to suppress speech it disfavors,&rdquo; said Carrie DeCell, senior staff attorney and legislative advisor at the Knight First Amendment Institute. &ldquo;By targeting researchers and advocates for their work studying and reporting on social media platforms and online harms, the policy chills protected speech and distorts public debate about issues of profound public importance.&rdquo;</p>
<p dir="ltr">Since his first term, President Trump and his allies have characterized the content moderation decisions of privately owned social media platforms as a form of &ldquo;censorship&rdquo; reflecting anti-conservative bias. In May 2025, Secretary of State Marco Rubio announced a visa restriction policy aimed at foreign officials and other individuals who are allegedly &ldquo;complicit in censoring Americans.&rdquo; In early December 2025, news outlets reported that the State Department had instructed U.S. consular officers to scrutinize visa applicants&mdash;particularly H-1B applicants&mdash;for evidence of their work in fields including misinformation, disinformation, fact-checking, content moderation, trust and safety, and compliance, and to pursue findings of visa ineligibility if they deemed applicants &ldquo;complicit&rdquo; in censorship. Secretary Rubio subsequently applied the policy to one former EU regulator and four independent researchers and advocates&mdash;including the leaders of two CITR-member organizations&mdash;and indicated his willingness to expand its application.&nbsp;</p>
<p dir="ltr">CITR&rsquo;s members include research organizations, academics, journalists, and advocates who study digital platforms and their societal impacts. Their work seeks to identify online harms, improve user safety, and inform public debate.</p>
<p dir="ltr">&ldquo;Researchers who help everyday people understand the impacts of Big Tech are scared that they and their families will be targeted for detention and deportation under this policy,&rdquo; said Brandi Geurkink, executive director of the Coalition for Independent Technology Research. &ldquo;At a time when AI is rapidly changing our lives and economy and people are already worried about their freedom and safety online, we need independent researchers more than ever. This policy is meant to censor researchers into silence and keep the public in the dark, and that&rsquo;s exactly what it&rsquo;s doing.&rdquo;</p>
<p dir="ltr">The policy&rsquo;s chilling effects spread beyond the community of independent researchers that CITR represents. According to news reports about the December 2025 State Department cable, which has not been made public, the policy reaches fact-checkers and online safety professionals whose work includes combating child exploitation, terrorism, and preventing fraud, human trafficking, and other forms of malicious behavior. This work involves research, analysis, and editorial judgment&mdash;work that is itself protected expressive activity.&nbsp;</p>
<p dir="ltr">&ldquo;This policy appears to be so broad and vague that it casts a shadow over a vast range of protected activity,&rdquo; said Naomi Gilens, counsel at Protect Democracy. &ldquo;The professionals working to keep the internet safe are left in fear, wondering whether doing their jobs could cost them their visas or trigger detention or deportation. Exploiting immigration policy to go after this kind of work doesn&rsquo;t just hurt those individuals&mdash;it undermines the very systems that make the internet more trustworthy for all of us.&rdquo;</p>
<p dir="ltr">Today&rsquo;s complaint further alleges that the policy punishes CITR&rsquo;s noncitizen members and others based on their perceived viewpoints; interferes with the rights of CITR and its U.S. citizen members to hear from and associate with noncitizen colleagues; is not sufficiently tailored to serve any legitimate governmental interest; and is impermissibly vague. The complaint also raises claims under the Administrative Procedure Act.</p>
<p dir="ltr">Read the complaint <a href="https://knightcolumbia.org/documents/hpsetihu54">here</a>.</p>
<p dir="ltr">Read more about the lawsuit, <em>Coalition for Independent Technology Research v. Rubio</em>, <a href="https://knightcolumbia.org/cases/citr-v-rubio">here</a>.</p>
<p dir="ltr">Lawyers on the case include Carrie DeCell, Raya Koreh, Kiran Wattamwar, Anna Diakun, Katie Fallow, Alex Abdo, and Jameel Jaffer, for the Knight First Amendment Institute, and Naomi Gilens, Nicole Schneidman, Scott Shuchart, and Deana El-Mallawany, for Protect Democracy.&nbsp;</p>
<p dir="ltr">For more information, contact: Adriana Lamirande, <a href="mailto:adriana.lamirande@knightcolumbia.org">adriana.lamirande@knightcolumbia.org</a>&nbsp;</p>
]]></description>
      <guid isPermaLink="false">/content/technology-researchers-challenge-trump-policy-threatening-deportation-for-work-on-social-media-platforms-and-online-harms</guid>
      <pubDate>Mon, 09 Mar 2026 00:00:00 -0700</pubDate>
    </item>
        <item>
      <title><![CDATA[Coalition for Independent Technology Research v. Rubio]]></title>
      <link>https://knightcolumbia.org/cases/citr-v-rubio</link>
      <description><![CDATA[<p>On March 9, 2026, the Knight Institute and Protect Democracy filed a lawsuit challenging the constitutionality of a U.S. immigration policy that targets noncitizen researchers, advocates, fact-checkers, and trust and safety workers through visa denials, revocations, detention, and deportation based on their work. Filed on behalf of the Coalition for Independent Technology Research (CITR), the suit alleges that the policy punishes people based on their perceived viewpoints, chills protected speech, and distorts public debate about social media and other internet platforms.</p>
<p>The government has framed the policy as an effort to combat &ldquo;censorship&rdquo; of Americans&rsquo; speech online and has reportedly applied it to individuals deemed &ldquo;complicit&rdquo; in that censorship. The policy and its enforcement have instilled fear within the research community, causing some to curtail their work, withdraw from public advocacy, and reconsider their ability to continue working in the United States.</p>
<p>The lawsuit alleges that the policy violates the First Amendment, including by interfering with the rights of CITR and its U.S. citizen members to hear from and associate with noncitizen colleagues; is impermissibly vague; and violates the Administrative Procedure Act. The plaintiffs seek declaratory and injunctive relief barring enforcement of the policy.</p>
<p><strong>Status:</strong> Complaint filed in the U.S. District Court for the District of Columbia on March 9, 2026.</p>
<p><strong>Case Information: </strong><em>Coalition for Independent Technology Research v. Rubio</em>, No. 1:26-cv-815 (D.D.C.)</p>]]></description>
      <guid isPermaLink="false">/cases/citr-v-rubio</guid>
      <pubDate>Mon, 09 Mar 2026 00:00:00 -0700</pubDate>
    </item>
        <item>
      <title><![CDATA[The Infrastructure of Free Expression]]></title>
      <link>https://knightcolumbia.org/content/the-infrastructure-of-free-expression</link>
      <description><![CDATA[<p>The threats of the moment are increasingly brazen and clear, from the state surveilling people online, seizing them off the streets, and steering control of media organizations to regime allies, to using urgently needed social programs and revenue streams as leverage to intimidate schools, universities, nonprofits, and state and local governments. But what would it mean to imagine and secure these freedoms in the future? In this blog post, I want to suggest three propositions that should shape any larger project of reconstructing our system of free expression for a more robust, meaningful, and substantive democracy of the future.</p>
<ol>
<li><strong>Freedoms of expression are products of infrastructure</strong>. Democracy depends on infrastructure&mdash;the underlying structures that enable us to organize, associate, engage in speech, vote, and have an equal share of political power and influence. Conventional democracy reform addresses the more formal political institutional aspects of this infrastructure: voting, districting, campaign finance, and deeper structural questions about legislative malapportionment (e.g., of the Senate) and the like. But there is a vital <em>civic and informational infrastructure</em> that is also critical to enabling free expression. Individuals and communities need to be able to form civic associations&mdash;including through nonprofit and other legal formations&mdash;free of state intimidation and with affirmative resources to advance their missions. Individuals and communities depend on the free flow of information and ideas&mdash;both to learn and engage with the world around them, and to form their views and advocate for their aspirations. That flow of information itself depends on an underlying infrastructure of media organizations and online platforms, one subject to the political economic incentives and pressures stemming from the business organization of these entities.</li>
<li><strong>The threats to freedom of expression and its underlying infrastructure stem from both public and private actors.</strong> As the events of the last year have underscored, the infrastructures that undergird free expression are vulnerable&mdash;to the pressures of both <em>state </em>actors and <em>private </em>actors. We have seen how governmental enforcement powers and government funding streams alike have been weaponized to discipline media companies, universities, and nonprofit organizations&mdash;all with the goal of chilling oppositional expression. These are threats to expression from an authoritarian, dominating state. But we also increasingly see the role of <em>private power </em>in asserting its own dominance over the free expression infrastructure: in the consolidation of media ownership through mergers and the rise of explicitly regime-friendly oligarchical control of information. More subtle but equally present is the role of private power in shaping the mediation of digital public opinion through the control and manipulation of social media platforms and algorithms.</li>
<li><strong>Many of these threats predate the worst excesses of the current administration&mdash;and redressing them will require reckoning with laws and practices that have been accepted by both parties for the last 20-plus years. </strong>This is perhaps the most important premise for any project of reconstructing free expression. The threats to our free expression infrastructure, while accelerating to the extreme in the current moment, long predate the current regime. Indeed, many of these attacks were already underway, albeit in more localized and less visible forms. Attacks on civil society organizations have been a mainstay of the modern policy response to moments of bottom-up movement organizing. Prosecutors in Atlanta have used <a href="https://lpeproject.org/blog/how-conspiracy-law-threatens-social-movements/" target="_blank" rel="noopener">conspiracy</a> <a href="https://michiganlawreview.org/journal/conspiracy-and-social-movements/" target="_blank" rel="noopener">charges</a> in an attempt to dismantle the civic infrastructure enabling <a href="https://www.nytimes.com/2025/12/31/us/cop-city-activists-racketeering-charges.html" target="_blank" rel="noopener">mass protests</a> against the expansion of the &ldquo;Cop City&rdquo; development. Private oligarchs have systematically brought nuisance suits to intimidate and bleed dry independent journalism outfits that embarrassed them with muckraking coverage&mdash;as when Peter Thiel <a href="https://www.nytimes.com/2016/05/26/business/dealbook/peter-thiel-tech-billionaire-reveals-secret-war-with-gawker.html#:~:text=in%20the%20closet.%E2%80%9D-,Mr.,SKIP%20ADVERTISEMENT" target="_blank" rel="noopener">funded</a> the legal campaign that ultimately <a href="https://www.npr.org/sections/thetwo-way/2016/06/10/481565188/gawker-files-for-bankruptcy-it-faces-140-million-court-penalty" target="_blank" rel="noopener">bankrupted</a> Gawker Media nearly a decade ago. The concentration of private control over social media platforms, and over broadcast and radio outlets through mergers, was already a significant threat to free expression prior to 2025.</li>
</ol>
<p>What then would a reconstruction of free expression need to look like in light of these three realities?</p>
<ul>
<li>First, we will have to dismantle authoritarian capacities. We must rein in the excessive power of the state to unlawfully intimidate, defund, or discriminate against individuals, firms, nonprofits, and the like. This includes the need to more systematically dismantle surveillance and coercive capacities&mdash;capacities that both parties have sought to expand and preserve since 9/11. And it requires providing for greater accountability for officials who abuse their office by attacking free expression.</li>
<li>Second, and similarly, we will need to structurally limit private control of information, media, and other critical free expression infrastructures&mdash;for example through robust antitrust policies, common carriage policies, and similar structural interventions. This in turn will require rebuilt and redesigned <em>affirmative </em>capacities for state regulation of the threats that private power poses to free expression&mdash;and more permissive legal doctrines that enable more creative legislative and regulatory regimes that can govern our modern media and information infrastructures in ways that are democracy- and expression-enhancing.</li>
<li>Third, we will need to innovate new affirmative investments in a civic-minded information infrastructure, from future alternative models of public media, to independent production of knowledge and research including through public funding, to expanded resourcing for bottom-up civic organizing and engagement among ordinary Americans. Public financing of some form is crucial to such knowledge production&mdash;but it must be structured through institutions that are less prone to partisan weaponization than the institutions of the past.</li>
</ul>
<p>In 2022, Congress came close to passing landmark <a href="https://www.congress.gov/bill/117th-congress/house-bill/1" target="_blank" rel="noopener">democracy reform legislation</a> addressing issues of voter suppression, gerrymandering, and money-in-politics. Those reforms remain essential. But any future reconstructed democracy will also require additional structural democracy reforms geared towards securing the infrastructures of free expression as well.</p>]]></description>
      <guid isPermaLink="false">/content/the-infrastructure-of-free-expression</guid>
      <pubDate>Fri, 06 Mar 2026 00:00:00 -0800</pubDate>
    </item>
        <item>
      <title><![CDATA[The People’s College]]></title>
      <link>https://knightcolumbia.org/content/the-peoples-college</link>
      <description><![CDATA[<p>The organizers of this symposium have set us a wonderful, impossible task. In 800 words, they have asked us to diagnose the root problems of the free speech crisis in America today&nbsp;<em>and</em> to propose a concrete reform that would begin to remedy the crisis.</p>
<p>I&rsquo;m going to approach this in reverse: first by proposing something, and then by describing why it might respond to the crisis of today as I see it. This proposal is creative (you could also call it half-baked), because I&rsquo;m taking the challenge as I understand it: in these disorienting times, to come up with new ideas that aim at the roots of the problem.</p>
<p>The proposal is &ldquo;The People&rsquo;s College&rdquo;: a guarantee, to all adults in the United States: You have the right to a two-year college degree at the accredited college of your choice. The state and federal government would provide the funds, with rates capped at those charged by public institutions. Colleges and universities would provide programs, focusing on what they can best provide, and developing new programs that suit local community needs. Learners would seek out the education they most want and need. To participate, colleges would need to meet eligibility standards. These would be designed to strengthen higher education as a sector and at the same time to make it more accountable, public-minded, and democracy-reinforcing. For example, participating institutions would be required to protect academic freedom and free speech, provide education primarily in person, have clear and fair admissions policies, and respect the right of all campus workers to organize. They might also be encouraged to impart critical media skills, involve local community groups and provide opportunities for community service, involve local workers&rsquo; organizations and provide pathways to apprenticeships, provide education about how local government works, facilitate transfer credits for further study, and so forth.</p>
<p>Why a People&rsquo;s College? First, because education matters tremendously, to all of us. Humans are the only species that learns across generations in large-scale, open-ended ways. Education formalizes this learning, helping us to understand our world and find our place within it. Higher education has a profoundly important role in democracy. It not only passes on skills, but also protects dissent and critical thought, and provides opportunities for belonging, community building, and aspiration. We are also in a period of enormous insecurity for many in this country. We need many collective minds and hands to address the vast challenges ahead: adapting to AI, bringing cleaner energy to homes around the country, bringing skilled nursing care to the elderly, and developing ideas that can help us make sense of our rapidly changing world. An educational guarantee might be warranted as economic and jobs policy alone&mdash;but its virtues for democracy are what make it interesting for those concerned with free speech.</p>
<p>Many states offer subsidies and grants for community college already, but without the suite of democracy-reinforcing requirements I suggest above. The more ambitious program I&rsquo;m suggesting here would not only incorporate such conditions, but also be better funded, supported both by states and the federal government, open to all adult learners (those with a high school degree, but not a BA), and to all accredited institutions of higher education. Clearly, the backbone would be community colleges and four-year state universities, for these educate the vast majority of Americans today. If reimbursement rates were keyed to in-state tuition for state universities, it would help sustain public institutions, while inviting private institutions to become more interested in and accountable to their local communities. It would put the challenge to <em>all </em>higher ed institutions: What could you do to better serve adults in your communities? What does the kind of education you can best provide offer to people? The idea here, you could say, is a mashup of free community college programs and Bard College&rsquo;s successful <a href="https://bpi.bard.edu/">prison programs</a> and new &ldquo;<a href="https://bpi.bard.edu/college-in-prison-and-beyond/microcolleges-and-bardbac/">micro-colleges</a>,&rdquo; which provide a liberal arts education including in great books with extraordinary success. And a vastly better program than the one offered by Trump&rsquo;s terrible &ldquo;<a href="https://www.forbes.com/sites/frederickhess/2023/11/09/trumps-american-academy-is-an-awful-idea/">American Academy</a>&rdquo; proposal.</p>
<p>This is the kind of proposal that could strengthen our democracy and provide an institutional, structural remedy for our free speech crisis if it could: 1) make an institution central to democracy more accessible and accountable, 2) strengthen higher ed as a sector organized to advance learning, knowledge, and critical thought, and 3) help build more durable majorities for inclusive democracy itself, by rapidly providing more avenues for learning and belonging <em>and </em>by mitigating the press of material constraints that crush so many people&rsquo;s sense of possibility and freedom today. The laissez-faire of the neoliberal age has created forms of insecurity that are readily weaponized by demagogues. And it reordered higher ed in ways that have deeply alienated our natural constituency: people who want to learn, provided the cost doesn&rsquo;t ruin their lives. Any structural solution to the crisis of democracy needs to address material well-being and security, and not simply &ldquo;rule of law&rdquo; institutions.</p>
<p>Take this as exemplary rather than cooked. But the idea here is that we need stronger <em>and</em> more accountable institutions that are not ordered by profit-seeking and market-sociability, but by values of critical inquiry, learning, pluralism, and democratic equality. Higher education is not really that kind of institution today, but there are key aspects upon which to build&mdash;and in the process, build more reason for public trust and accountability in higher ed, and more inclusive and democratic forms of the thing itself. The root cause of the crisis in free speech, in other words, is in the concentrated power and domination that structure our <a href="https://lpeproject.org/lpe-manifesto/">political economic order</a> itself. Resolving it requires imagining and building stronger institutions and collectives&mdash;ones that can protect pluralistic and genuinely egalitarian forms of democratic life.</p>]]></description>
      <guid isPermaLink="false">/content/the-peoples-college</guid>
      <pubDate>Tue, 03 Mar 2026 00:00:00 -0800</pubDate>
    </item>
      </channel>
</rss>