The Knight First Amendment Institute invites submissions for its spring 2025 symposium, “Artificial Intelligence and Democratic Freedoms.” The symposium will take place in two parts: a private work-in-progress workshop on November 18-19, 2024, and a public event aimed at stakeholders across government, industry, and civil society in mid-April 2025, held at Columbia University. A discussion of the theme of the symposium is below, followed by logistical information for those who wish to participate.
Introduction
Recent years have seen major advances in AI research [1, 2]. Autonomous AI systems capable of taking sequences of consequential actions without direct human supervision are now feasible [3-7]. Advanced AI is already transforming society, and is likely to bring even more profound changes to our economic, political, cultural, and associative lives. With these changes come obvious threats; among the most profound is their potential to undermine democratic freedoms [8-10]—including at least the civil and political liberties that democracies protect, and the basic right to stable and functioning democratic institutions.
Around the world, democratic freedoms are already deeply imperiled [11]. Their preservation and restoration depend on various preconditions—epistemic, economic, infrastructural, institutional, and cultural—each of which advanced AI systems may stress.
AI is already embedded in our epistemic environment [12]; as we rely more on AI assistants [13, 14] to navigate the tidal waves of misinformation and AI slop, we may create single points of potentially critical failure and control [15]. Advanced AI is also likely to radically concentrate resources and power (including power over compute), making corporations harder to govern democratically, and potentially causing economic upheaval and resource conflicts due to its massive environmental costs [16].
Our democratic freedoms depend on networked infrastructure. This infrastructure could be destabilized if advanced AI systems ultimately upset the attack-defense balance in cybersecurity (see e.g. [17]; for a more optimistic view, see [18]). Meanwhile, AI is already being used to undermine civil and political liberties and their institutional protection; advanced AI systems will predictably continue to supercharge state (and corporate) surveillance [19].
And even as we witness the impact of AI-infused digital platforms on social cohesion [20], the cultural preconditions for democracy face further threats: advanced AI companions could enable one-to-one radicalization, and widespread economic displacement could create a fertile breeding ground for extremist ideologies.
Nothing about AI’s future is certain. Research progress may plateau (see e.g. [21-23]), and societal impacts will depend on many intervening factors. But anticipating threats from AI systems is crucial to preventing them or mitigating their impacts. We need an approach that centers the mitigation of risks from advanced AI systems, draws on a pluralistic range of values, situates both AI and democratic freedoms in their sociotechnical context, and deploys sociotechnical as well as technical interventions to steer AI for the better [24]. And we need an approach that can look towards the horizon without becoming obsessed with what lies beyond it. This workshop aims to foster that kind of approach, under the broad heading of sociotechnical AI safety [25-27].
Call for Abstracts
This symposium aims to bring that agenda to the forefront. Through an initial closed-door work-in-progress workshop and a subsequent public event to launch the resulting papers and engage stakeholders, we will help galvanize this emerging field, providing both a resource and leadership for researchers and policymakers alike.
We invite submission of abstracts for papers from any discipline (or combination of disciplines) related to the above themes. We will aim for a balanced collection of papers, with contributions from across and beyond law, philosophy, computer science, and the social sciences, and will favor papers that integrate different disciplinary approaches and that engage head-on with advanced AI systems through a sociotechnical lens (under some description). We welcome any papers responding to the above narrative; to prompt reflection, we suggest thinking about four clusters of questions, focused on aims, threats, interventions, and methods:
Aims: Sociotechnical AI safety demands an account of what it means for AI systems to be safe—what kinds of direct risks from advanced AI systems we should be focusing on. This entails articulating ideals for what a society with advanced AI should look like, so that we can evaluate and diagnose how AI might cause us to fall short of those ideals. What kinds of democratic freedoms should we aim to realize in a world infused with advanced AI? It also entails developing processes for eliciting those ideals and constraints from a broader public, whether through participatory design or through quasi-democratic deliberative procedures (AI itself might prove assistive here [28, 29]). Beyond this, it means designing AI systems and institutions that enable meaningful democratic control of AI [9, 30].
Threats: What kinds of threats might advanced AI pose to democratic freedoms in the near- to mid-term? How credible are concerns about the impact of generative AI on the digital public sphere [8]? How might more autonomous AI systems, such as AI agents, undermine either the epistemic or associative foundations of democracy [10, 31]? What are the prospects of governments using advanced AI to undermine civil liberties within or across nation-states? Or are these concerns overblown, because democracies have their own homeostatic systems for managing rapid technological change, or because the real threats to democracy are much deeper than transient new technologies? And beyond just enumerating threats, what are our best means for attaching more precise probabilities to them?
Interventions: How can cross-disciplinary perspectives advance progress on the design, evaluation, and governance (broadly understood) of advanced AI systems [32]? While this includes sociotechnically informed design and evaluation interventions [33] as well as technical AI governance proposals [34], these don’t exhaust the landscape of sociotechnical interventions. We also welcome papers focused on (among other approaches) strategic litigation, civil liability, investigative journalism, market design, norm entrepreneurship [35], building strategic resilience, broader governance strategies [36, 37], and fostering public engagement and participation.
Methods: Sociotechnical AI safety should involve analyzing and evaluating the sociotechnical preconditions for this very research to take place [38]. This would include analysis and critique of AI safety and related research fields. It could also include proposals for shaping the methodology and agenda of sociotechnical AI safety (or displacing it).
However, we welcome any submission suitably related to sociotechnical AI safety in the defense of democratic freedoms, even if it does not cluster around these themes.
Confirmed participants include:
- Tino Cuéllar (Carnegie Endowment for International Peace)
- Henry Farrell (Johns Hopkins University)
- Hahrie Han (Johns Hopkins University)
- Hoda Heidari (Carnegie Mellon University)
- Seth Lazar (Australian National University)
- Sydney Levine (Allen Institute for AI)
- Deirdre Mulligan (UC Berkeley)
- Arvind Narayanan (Princeton University)
- Alondra Nelson (Institute for Advanced Study)
- Spencer Overton (George Washington University)
- Daniel Susskind (King’s College London)
- M.H. Tessler (Google DeepMind)
More to be added.
Dates, Deadlines, and Logistics
Those interested in participating should send an abstract (500-1,000 words, not including references) to [email protected] by September 20, 2024. Abstracts will be selected by Seth Lazar, Professor of Philosophy at the Australian National University and Senior AI Advisor at the Knight Institute, and Katy Glenn Bass, Research Director of the Knight Institute, with the assistance of other Institute staff and scholars. We anticipate selecting 12-16 final papers of 6,000-8,000 words, and will notify those selected to write by the end of September.
The symposium, which will take place in New York City, will be divided into a private, pre-read, work-in-progress workshop (November 18-19, 2024), and a public symposium (mid-April, 2025). Draft papers for the workshop will be due October 25, 2024. Revised drafts will be due after the workshop, preferably in time to enable peer review and revisions prior to the public event in April. Final papers will be published on the Knight Institute’s website beginning at the time of the public event. Authors are free to pursue subsequent publication in a journal or other venue. Each paper will receive an honorarium of U.S. $6,000 (divided between co-authors as needed). The Knight Institute will cover participants’ hotel and travel expenses for the two events.
References
1. OpenAI, GPT-4 Technical Report. 2023.
2. Bengio, Y., et al., International Scientific Report on the Safety of Advanced AI: Interim Report. 2024.
3. Lazar, S., Frontier AI Ethics, in Aeon. 2024.
4. Kolt, N., ‘Governing AI Agents.’ Available at SSRN, 2024.
5. Schick, T., et al., ‘Toolformer: Language Models Can Teach Themselves to Use Tools.’ arXiv preprint, 2023: https://arxiv.org/abs/2302.04761.
6. Yao, S., et al., ‘ReAct: Synergizing reasoning and acting in language models.’ arXiv preprint, 2022: https://arxiv.org/abs/2210.03629.
7. Wang, L., et al., ‘A survey on large language model based autonomous agents.’ Frontiers of Computer Science, 2024. 18(6): 186345.
8. Solaiman, I., et al., ‘Release Strategies and the Social Impacts of Language Models.’ arXiv preprint, 2019: https://arxiv.org/abs/1908.09203.
9. Bengio, Y., ‘AI and catastrophic risk.’ Journal of Democracy, 2023. 34(4): 111-121.
10. Allen, D. and E.G. Weyl, ‘The Real Dangers of Generative AI.’ Journal of Democracy, 2024. 35(1): 147-162.
11. Waldner, D. and E. Lust, ‘Unwelcome change: Coming to terms with democratic backsliding.’ Annual Review of Political Science, 2018. 21(1): 93-113.
12. Narayanan, A., ‘Understanding Social Media Recommendation Algorithms.’ Knight First Amendment Institute, 2023: 1-49.
13. Gabriel, I., et al., ‘The ethics of advanced AI assistants.’ arXiv preprint, 2024: https://arxiv.org/abs/2404.16244.
14. Lazar, S., ‘Frontier AI Ethics.’ Aeon, 2024.
15. Seger, E., et al., ‘Tackling threats to informed decision-making in democratic societies: Promoting epistemic security in a technologically-advanced world.’ 2020.
16. Luccioni, S., Y. Jernite, and E. Strubell. ‘Power hungry processing: Watts driving the cost of AI deployment?’ in The 2024 ACM Conference on Fairness, Accountability, and Transparency. 2024.
17. Fang, R., et al., ‘LLM Agents can Autonomously Hack Websites.’ arXiv preprint, 2024: https://arxiv.org/abs/2402.06664.
18. Schneier, B., ‘Artificial intelligence and the attack/defense balance.’ IEEE Security & Privacy, 2018. 16(2): 96.
19. Lazar, S., Connected by Code: Algorithmic Intermediaries and Political Philosophy. Forthcoming, Oxford: Oxford University Press.
20. Settle, J.E., Frenemies: How Social Media Polarizes America. 2018: Cambridge University Press.
21. Valmeekam, K., et al., ‘On the Planning Abilities of Large Language Models—A Critical Investigation.’ arXiv preprint, 2023: https://arxiv.org/abs/2305.15771.
22. Mitchell, M., A.B. Palmarini, and A. Moskvichev, ‘Comparing Humans, GPT-4, and GPT-4V on abstraction and reasoning tasks.’ arXiv preprint, 2023: https://arxiv.org/abs/2311.09247.
23. Srivastava, S., et al., ‘Functional Benchmarks for Robust Evaluation of Reasoning Performance, and the Reasoning Gap.’ arXiv preprint, 2024: https://arxiv.org/abs/2402.19450.
24. Kneese, T. and S. Oduro, ‘AI Governance Needs Sociotechnical Expertise: Why the Humanities and Social Sciences are Critical to Government Efforts.’ Data & Society Policy Brief, 2024: 1-10.
25. Weidinger, L., et al., ‘Sociotechnical Safety Evaluation of Generative AI Systems.’ arXiv preprint, 2023: https://arxiv.org/abs/2310.11986.
26. Lazar, S. and A. Nelson, ‘AI safety on whose terms?’ Science, 2023. 381(6654): 138.
27. Curtis, S., et al., ‘Research Agenda for Sociotechnical Approaches to AI Safety.’ 2024.
28. Ovadya, A., ‘Reimagining Democracy for AI.’ Journal of Democracy, 2023. 34(4): 162-170.
29. Huang, S., et al., Collective Constitutional AI. 2024.
30. Lazar, S. and A. Pascal, ‘AGI and Democracy.’ Allen Lab for Democracy Renovation, 2024.
31. Chan, A., et al., ‘Harms from Increasingly Agentic Algorithmic Systems,’ in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. 2023, Association for Computing Machinery: Chicago, IL, USA. p. 651-666.
32. Dobbe, R. ‘System safety and artificial intelligence.’ in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. 2022.
33. Weidinger, L., et al., ‘STAR: SocioTechnical Approach to Red Teaming Language Models.’ arXiv preprint, 2024: https://arxiv.org/abs/2406.11757.
34. Reuel, A., et al., ‘Open Problems in Technical AI Governance.’ arXiv preprint, 2024: https://arxiv.org/abs/2407.14981.
35. Finnemore, M. and K. Sikkink, ‘International Norm Dynamics and Political Change.’ International Organization, 1998. 52: 887-917.
36. Shavit, Y., et al., Practices for governing agentic AI systems. 2023, OpenAI.
37. Kolt, N., ‘Governing AI Agents.’ SSRN, 2024: https://dx.doi.org/10.2139/ssrn.4772956.
38. Ahmed, S., et al., ‘Field-building and the epistemic culture of AI safety.’ First Monday, 2024.