Editor’s note: The Q&A below is excerpted from “Friends of the Court: Gonzalez Amici Offer Their Perspectives,” which was published in Rebooting Social Media’s RE:COMMITTED newsletter. Knight Institute Senior Counsel Scott Wilkens’s responses are reposted below.

What is the ideal outcome from Gonzalez?

The best reading of Section 230 would immunize internet platforms for their use of recommendation algorithms except where those algorithms materially contribute to the alleged harm in a way that goes beyond the mere amplification of speech.

To be sure, interpreting Section 230 in this way would immunize some conduct that causes real harm, because some harmful conduct results from the mere amplification of speech. But categorically excluding recommendation algorithms from Section 230’s protection would have devastating consequences for free speech online. It would require internet platforms such as search engines and social media services to remove large swaths of content in order to avoid the possibility of crippling liability.

What do you think the Petitioners/Respondent get wrong, and what intervention does your brief make on this issue?

Although the petitioners’ position has changed significantly over the course of the litigation, their main error is interpreting Section 230 not to protect the mere amplification of speech. The petitioners try to limit “publishing” in the online context to hosting speech, as distinct from recommending speech, but that distorts the plain meaning of the term. Publishing a magazine or newspaper, for example, necessarily involves recommending the content being published, and all the more so the stories placed prominently. The same is true of publishing a list of search results or a social media feed.

Our amicus brief argues that YouTube is shielded by Section 230 in this case because the petitioners seek to hold YouTube liable for amplifying certain content, and amplification, without more, does not amount to a material contribution to the alleged illegality. The brief also argues that the use of recommendation algorithms to do more than merely amplify speech falls outside Section 230’s immunity if it materially contributes to the alleged illegality. The circuit courts have applied this material contribution test to immunize mere recommendation or amplification of content while leaving room for other kinds of claims against the platforms.

What might the court get wrong in this case? How would the web and social media change?

It’s conceivable that the Court could categorically exclude recommendation algorithms from Section 230’s immunity. The Court could do so by holding that when a platform recommends content to users, the platform is not protected by Section 230 because it is going beyond acting as a publisher.

If the Supreme Court does so, the decision will have a drastic, negative impact on free speech online. Many internet platforms, including search engines and social media platforms, provide services that are largely, if not entirely, dependent on recommendation algorithms. As a result, it would be impossible for these platforms to avoid massive liability by simply no longer using recommendation algorithms. They would have no choice but to remove large swaths of constitutionally protected speech: any speech that could potentially result in a lawsuit.

What other fields and domains might the holdings in this case unexpectedly shape?

While this case is about recommendation algorithms, it could affect any online service that is potentially eligible for Section 230 immunity, meaning any online service that disseminates third-party content. This is the first time the Supreme Court will interpret Section 230, including critically important terms like “publisher,” which the statute doesn’t define. The Court’s decision will have ramifications not only for existing online services but also for future ones. Anyone who wants to develop a new online service that disseminates third-party content would probably think twice if the Court’s decision makes it doubtful that their service would be protected by Section 230. One has to wonder whether the absence of Section 230 immunity for recommending content to users would have inhibited the invention or development of search engines like Google or social media platforms like Facebook.

Is judicial interpretation the best way to go about resolving these questions, or would you prefer congressional intervention, however unlikely? What would that congressional intervention look like, and how might it differently tailor the immunities currently provided by 230?

Legislative action would be far preferable to a judicial interpretation of Section 230’s immunity that categorically excludes algorithmic recommendations. Again, the amplification of content can cause real harms; no one should pretend otherwise. But legislatures can address or mitigate the harms associated with amplification through other mechanisms, including by requiring platforms to be more transparent, establishing legal protections for journalists and researchers who study the platforms, limiting what information platforms can collect and how they can use it, and mandating interoperability and data portability.