Arvind Narayanan
Knight Institute Visiting Senior Research Scientist 2022-2023; Princeton University
Arvind Narayanan is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. He co-authored a textbook on fairness and machine learning and is currently co-authoring a book on AI snake oil. He led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was among the first to show how machine learning reflects cultural stereotypes, and his doctoral research showed the fundamental limits of de-identification. Narayanan is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE), twice a recipient of the Privacy Enhancing Technologies Award, and thrice a recipient of the Privacy Papers for Policy Makers Award.
Narayanan was the Knight First Amendment Institute’s 2022-2023 visiting senior research scientist. He carried out a research project on algorithmic amplification on social media and hosted a major conference on the topic in Spring 2023.
Selected Projects
-
Algorithmic Amplification and Society
A project studying algorithmic amplification and distortion on social media, and exploring ways to minimize their harmful effects
Selected Events
-
Optimizing for What? Algorithmic Amplification and Society
A two-day symposium exploring algorithmic amplification and distortion as well as potential interventions
Writings & Appearances
-
Deep Dive
We Looked at 78 Election Deepfakes. Political Misinformation Is Not an AI Problem.
Technology isn’t the problem—or the solution.
-
Deep Dive
A Safe Harbor for AI Evaluation and Red Teaming
An argument for legal and technical safe harbors for AI safety and trustworthiness research
-
Deep Dive
Generative AI companies must publish transparency reports
The debate about AI harms is happening in a data vacuum.
-
Essays and Scholarship
How to Prepare for the Deluge of Generative AI on Social Media
A grounded analysis of the challenges and opportunities
-
Quick Take
Introducing Visualizing Virality
Illustrating and investigating virality and demotion on Twitter