Speaker 1:
What is Twitter?
Evelyn Douek:
G'day, and welcome back to "Views on First."
Speaker 1:
Twitter, Twitter, Twitter, Twitter, Twitter, Twitter, Twitter.
Evelyn Douek:
This is a podcast about the First Amendment in the digital age, from the Knight First Amendment Institute at Columbia University.
Nicole Wong:
It's possible we're viewing it in the wrong lens. It could be that we should make a policy choice about building a public square.
Evelyn Douek:
This podcast explores what on earth the First Amendment is going to do about social media platforms and free speech in the digital age.
Alex Stamos:
All these decisions are based upon what the companies think are both right for their platform and their users and right for society. And so they're just kind of winging it and building their own models.
Nicole Wong:
I think we should wrestle with that and I don't think we should delegate those kind of moral choices to a machine.
Evelyn Douek:
This is episode four. If you are just happening on this episode somehow and you haven't heard the first three, head back to episode one, do not pass go, do not collect $200. For those of you who have been with us so far, you'll know that we've been thinking and talking a lot about how the First Amendment should adjust to the new challenges of the platform era, and that shouldn't be a surprise. We're a First Amendment Institute and so we spend a lot of time thinking about the First Amendment and legal doctrine. But most of our free speech debates happen outside the courts. They're about norms and rules that are not legal or constitutional issues, but social ones. And increasingly, they're corporate ones. Because alongside the story that we've been telling about how tech platforms have collided with the First Amendment, there's another story about how tech platforms have collided with different understandings of free speech. And that's an important story because tech platforms are perhaps the most important speech regulators in the world.
They're certainly the most prolific. In the third quarter of 2022, Facebook took down about 23,339 pieces of content and YouTube about 5,653 channels, videos and comments every minute. The numbers are mind-boggling. By contrast, in its entire history, the Supreme Court of the United States has decided about... Drum roll, please, 247 First Amendment cases. That alone tells you something about how content moderation is different to old-fashioned offline speech regulation, and also tells you how important it is. And so the story of free speech in the digital age would not be complete without the story of how these tech companies have evolved in their thinking about speech regulation.
In episode two, you heard from Yoel Roth, the former head of Twitter's trust and safety, about some of the tools that companies might use to do content moderation. But that needs to be put into the broader context of how we got here and how companies came to develop those tools in the first place. This episode is that story. Once upon a time, there were some engineers who thought they had some neat ideas for communications platforms on the internet. They didn't exactly intend to become all-powerful speech overlords; they were just into creating cool products. I chatted with someone who happens to know a lot about what it was like.
Nicole Wong:
My name is Nicole Wong.
Evelyn Douek:
Nicole is kind of like the Forrest Gump of the early platform era, popping up all over the place.
Nicole Wong:
I started my career as a First Amendment attorney. And this was back in the mid-nineties, so most of my early clients were actually traditional media clients, like newspapers and TV stations and radio stations. But as most folks know, in the late nineties a lot of those media companies started moving online. And I followed them there and then began representing pure play internet companies like Yahoo and Netscape and others.
Evelyn Douek:
She then spent over seven years at Google, leading their legal team and launching products globally, followed by a stint at Twitter doing similar work. And from these front-row seats to the internet's youth in the nineties and early aughts, Nicole saw how companies thought about content moderation as they were first getting off the ground, or how they didn't think about it.
Nicole Wong:
If you think about some of the early platforms, and now I'm thinking [inaudible 00:04:24] they're like message boards, right? So AOL's message boards or Yahoo's message boards, or there were small platforms like Silicon Investor, which was about trading penny stocks, those were not really thinking about what their obligations to speech rights were at all. They thought about themselves as software engineers, and so they didn't really understand both their rights and obligations as part of a media ecosystem.
Evelyn Douek:
But then things changed and government and public pressure made companies start to take their responsibilities more seriously. This led to Nicole earning pretty much the coolest unofficial title ever during her time at Google, The Decider. How did you get the title of The Decider?
Nicole Wong:
It started as a joke in our department.
Evelyn Douek:
The moniker was a joke, but the job that earned Nicole the moniker absolutely wasn't.
Nicole Wong:
Google, when we started, was a search engine, and so some of the policies were around what kind of search results will we show? Are there any types of search results that we shouldn't surface? People's social security numbers, for example, or child pornography or other things that you might find on the internet, are those things that as a search engine we should suppress? Eventually, in 2007, we acquired YouTube. There were just a whole host of areas where content decisions got made. And one of the jobs I had was to decide what kind of content should go up or stay down. And that's where the Decider moniker, which I think was originally a tongue-in-cheek reference to George Bush, who was also being called the Decider at the time, came from.
Evelyn Douek:
So these companies didn't set out to become behemoth speech regulators, but as they became giant, they stumbled into that role nonetheless. There were suddenly all these really difficult questions about what speech to leave up and what to take down. And at the end of the day, the person that had to make that call was Nicole. I asked her if that was awesome or awful.
Nicole Wong:
The responsibility felt very weighty. I think there was one time where there was content that was critical of a political and religious organization in India. There were people protesting outside of our offices in India; this was at Google. There were threats of violence that were happening. And that's a big deal and a lot of responsibility.
Evelyn Douek:
Yeah, no kidding. Let's take a step back and look at why it was Nicole's job in the first place. See, the thing is that when it comes to most of the decisions that companies have to make about what they should leave up or take down, the law doesn't provide a lot of guidance. Of course, companies don't want to get on the wrong side of the law, but working out what content is illegal is only the start of working out what rules they might want to have on their services.
Alex Stamos:
When you look at the decisions companies make, the vast majority of them are not guided by law.
Evelyn Douek:
That's Alex Stamos. Alex knows a thing or two about this, both from his current job-
Alex Stamos:
I'm the director of the Stanford Internet Observatory.
Evelyn Douek:
And from a few jobs he had before that.
Alex Stamos:
I was the chief security officer at Facebook and the chief information security officer at Yahoo. Before that, I spent about 20 years in the information security industry.
Evelyn Douek:
He doesn't have a cool title like The Decider, but he's still got significant cred in the field. Alex is content moderation famous for being in charge of the team that discovered Russian interference on Facebook in the run-up to the 2016 election.
Alex Stamos:
I got pushed onto the stage by being the guy who had to have his name attached to revealing all of the influence operations that we had discovered on Facebook.
Voice of America Newscaster:
Facebook's chief security officer Alex Stamos announced that from June 2015 to May of 2017, about $100,000 was spent on about 3,000 Facebook ads that violated the social network's terms of service and may have influenced people's opinions on the election.
Evelyn Douek:
So, Alex has seen some things, and so he knows what he's talking about when he says:
Alex Stamos:
The vast majority of social media products that only apply the First Amendment standard would become unusable. First, because of spam. Most commercial speech is protected by the First Amendment. It is completely legal in New York for somebody to be walking down the street and yell, "Yo, come buy shoes at Joe's, come buy shoes at Joe's," or, "Come buy these Ray-Bans." It's always fake Ray-Bans, that's the thing you see all the time on the internet now. Or male enhancement products.
The difference in an online platform is that it's not just the number of people that can fit on a New York sidewalk that you can hear. It is millions of people from thousands of miles away with armies of automated bots that have the ability to yell things that you hear. So just applying the First Amendment standard to any platform that allows for open registration, a free platform where you don't have to know people ahead of time, I think would instantly become unusable.
Evelyn Douek:
Alex and Nicole have been at the front lines of working out how to police speech on social media. And, being from a First Amendment institute, I couldn't help asking them about how large the First Amendment loomed when these corporations were thinking about constructing a system of speech regulation.
Nicole Wong:
The continual reference to First Amendment is such a weird dichotomy when we're talking about private companies.
Evelyn Douek:
It's weird for the reasons we covered in episode one, when we looked at the words of the First Amendment itself. Let's take a read. "Congress shall make no law... abridging the freedom of speech." That is, the First Amendment only constrains Congress, and as the Supreme Court has interpreted it, other government acts. So Nicole wasn't bound by the First Amendment when she was capital D Deciding what should be on Google or YouTube.
Nicole Wong:
These are all private parks that we're in, not public squares.
Evelyn Douek:
In other words, they're businesses. They have their business interests to take care of.
Nicole Wong:
It's driven by the north stars of how do we serve up the best, most robust product? The owners of the private park, the Twitters and the Facebooks and the YouTubes, have made decisions about being super-permissive about the speech there because they believe that's how they're adding value to their users, in particular, and maybe also to the world, but particularly to their users.
Evelyn Douek:
Platforms are businesses and businesses care about their products and their bottom line. That's literally their job. It's not that they're evil. There's nothing wrong with businesses thinking about profits. That's what we expect. But pursuing profits might not be the best way to achieve a healthy forum for public discourse.
Nicole Wong:
It's possible we're viewing it in the wrong lens if we think that they actually have to become public squares, because they serve this value to us that we then have to impose upon them all of the obligations of a public square. It could be that we should make a policy choice about building a public square.
Evelyn Douek:
Now, of course, not everyone in these very large companies, or in the industry as a whole, thought in exactly the same ways about these issues. But at the end of the day, the fact that platforms are businesses means that of course they're going to regulate speech differently from how a government might. But how exactly they would regulate was not entirely clear.
Alex Stamos:
All these decisions are based upon what the companies think are both right for their platform and their users and right for society, and so they're just kind of winging it and building their own models.
Evelyn Douek:
Winging it. Awesome. That's definitely how we want the most expansive system of speech regulation humankind has ever known to be built. But while the First Amendment wasn't legally relevant, that didn't mean it completely left the building, because the people constructing those models within platforms were American lawyers who turned to the tools they knew.
Nicole Wong:
There was a maximization of "information should be free." I think that was really consistent with the philosophy, the approach, the ethos of the internet in its early years. And certainly my First Amendment background, but also I think just general free expression contexts, would've erred towards that as well, and that drove a lot of my decision-making.
Alex Stamos:
Often the people who are running these teams are American-trained JDs. They're people who went to an American law school and took con law from a real law professor like yourself. And so they have been trained in the history of First Amendment jurisprudence, and so all of the framing of "is this an immediate threat to somebody's life?" These are ideas that inevitably work their way into the conversation because almost everybody around the table is a JD. And so I think the contours of how speech is treated in America are deeply embedded in all of these rules just because it's American lawyers who are writing them.
Evelyn Douek:
And it wasn't only the substance of the rules that was influenced by their legal training, it was also the processes of rule writing and enforcement.
Nicole Wong:
We needed the rules to be understandable and executable by a wide range of people across many continents. And so this notion of what in legal parlance we'd call precedent, and adherence and consistency, was important.
Evelyn Douek:
Nicole had other very good reasons for wanting clear, predictable rules that didn't require every single decision to be evaluated from scratch.
Nicole Wong:
I don't want to have to be The Decider at two o'clock in the morning for every small piece of content.
Evelyn Douek:
Very fair. Okay, so you've got these companies and they start out by writing some code that they think is pretty dope. They suddenly find themselves having to make really difficult speech decisions, so they turn to the tools that they know, First Amendment-style thinking. They create a robust and predictable system of speech rules, and that solves everything. Season over. Thanks for listening.
Nope, of course not. It turns out, constructing a system of speech regulation is really hard, and platforms were in for a bumpy ride. First of all, as they expanded, they found out that other countries weren't as hot on the First Amendment as Americans are.
Nicole Wong:
If you think about the late '90s, early 2000s, you're talking about what I've referred to before as the first generation of internet countries, which were largely Western-style democracies, so North America, Canada and the US, Western Europe, Japan, Australia, but a fairly small group of countries and user base. And within that context, the actual people on the internet were largely male, largely white, largely educated, with really similar approaches to rule of law, to free expression, to privacy. And so the types of disputes you would have were in a pretty narrow band.
Evelyn Douek:
But Nicole says that changed a lot around 2007.
Nicole Wong:
What I would call the second generation of countries started to have really meaningful internet penetration, and importantly, the fastest growing markets: Brazil, Russia, India, China, but also Saudi Arabia, Vietnam, Thailand, all of these countries where their approach to rule of law, free expression, privacy was actually really different, and came immediately to the fore at that point in terms of how do we wrestle with what showing up appropriately in that country will look like?
Evelyn Douek:
And these companies got schooled.
Nicole Wong:
Because in the early days, it was literally like, I'm just a small US-based startup. I'm not intending to be in Chile. Why am I even getting this letter?
Evelyn Douek:
It turned out other countries weren't and still aren't happy to just let these American companies waltz in and impose their own norms.
Alex Stamos:
When you're thinking about these issues inside of one of these companies, you always have a global perspective. Only 5% of Facebook's users are Americans. And in fact, the most important country for global speech regulation now is India, for sure. You would say China, but most of these American platforms are blocked in China, so in a way, the Chinese have no more influence because there's nothing for them to cut off. But India is a huge, huge market for all the big American tech companies. They have a legitimately democratically elected government that has the real support of the people, but they're using that support of the people in a way that [inaudible 00:16:55] and Americanized especially is completely inappropriate from a speech perspective.
Evelyn Douek:
In other cases, companies moved fast and broke into new markets without building an infrastructure to actually moderate their platforms, with tragic costs.
Democracy Now! Newscaster:
Facebook has been accused by UN investigators and human rights groups of facilitating violence against the minority Rohingya Muslims in Burma by allowing anti-Muslim hate speech and false news to spread on its platform.
Evelyn Douek:
And if platforms found themselves in over their heads because of the way they went out into the world, they also faced new challenges as the world came to them.
Alex Stamos:
Some of the first users of the internet were actually white supremacists and members of the Ku Klux Klan and such, who used it for a variety of means, but effectively to get around the government's ability to monitor them by monitoring their mail and such, so they moved online. ISIS was a specific example of an organization that took that to an extreme. Al-Qaeda had their website and their content. You had terrorist groups pushing videos and such. And then ISIS was the first terrorist organization where a significant portion of their leadership had grown up with the internet, understood the internet really fundamentally, and were able to use it in kind of a non-ironic, really serious way to be digitally native in their propaganda, in their recruitment. And so in the 2014, 2015 timeframe, a big driver for a number of countries was seeing the effectiveness of ISIS to recruit and to celebrate their attacks online.
ABC News Newscaster:
The FBI director is sounding the alarm about ISIS, saying the terror group is reaching deeper into America thanks to a disturbingly sophisticated social media campaign.
CNN Newscaster:
An American journalist has been beheaded by ISIS terrorists. A video showing the horrific killing and its gruesome aftermath was released on the internet a short while ago along with a message-
Evelyn Douek:
And there were other pressures on these companies to clean up their act too.
CBS News Newscaster:
A new report from Amnesty International says women are often threatened on Twitter, and that even though the company's policies prohibit abuse, the social media platform is not providing "adequate remedies" for the victims of those threats.
CNET Newscaster:
In an internal memo obtained by The Verge, the head of Twitter, Dick Costolo, wrote frankly to employees, "We suck at dealing with abuse and trolls on the platform, and we've sucked at it for years. It's no secret, and the rest of the world talks about it every day."
Evelyn Douek:
Users demanded platforms do better, otherwise, it just became too unpleasant to spend time in these spaces. And advertisers demanded platforms do better so that their ads didn't appear next to offensive content.
CNN Newscaster:
Companies like AT&T, Verizon, Johnson & Johnson, these big companies, they've just found out that their ads are being played before some pretty offensive content and they want it to stop.
Evelyn Douek:
It's not exactly a great look for a brand to pop up right next to a hate screed.
Hank Green:
Major brands are being advertised on top of very vile YouTube videos. A bunch of advertisers fearing backlash removed their ads entirely from YouTube. And during this period, every YouTuber saw a decrease in revenue.
Evelyn Douek:
And then, of course, there were the Russians.
ABC News Host:
Congressional investigators say the Russians planted ads on Facebook as part of the Kremlin effort to help get Donald Trump elected.
Alex Stamos:
In the aftermath of Donald Trump winning, a lot of people were really searching for how this happened. And there was legitimately both what people call fake news, actual fake news, as well as government influence operations. And people took the fact that those things existed and ran with, well, this is the only reason Donald Trump is president, due to these online platforms.
Evelyn Douek:
So faced with all of this anger and backlash, platforms started to take a different approach to regulating speech on their services.
Alex Stamos:
What I saw a lot of in the 2016, 2017 timeframe was much more of general societal responsibility beyond what is just good for users at that moment being placed onto these companies. And a lot of what has happened over the last five years is a response to that.
Evelyn Douek:
And so these companies cleaned up their act and everyone lived happily ever after. Season over. Thanks for listening. Okay, not quite. It turns out building a system of speech regulation is hard. Platforms just posting rules on their blogs and websites is not the same as enforcing them in practice.
Alex Stamos:
You have policy people, again, trained as lawyers, who have law professors who Socratically call on them and say, try to distinguish this thing from this thing. And then you turn to the tech people and the operations people, and there are 97 million pieces of content that have to be moderated under that standard. The idea that we can slice it that finely is ridiculous, right? A lot of the ideas around First Amendment law in America rest on the idea that you can have an intelligent, educated judge apply a seven-step test created by the Supreme Court, and they've got two weeks to do so, aided by an army of clerks.
In the content moderation world, it is first a machine learning algorithm that is effectively a statistical model trained on video cards, and a person in the Philippines who has 30 seconds to make a decision; those two entities together have to make a speech decision. And so what you can do with people that you can train to make decisions very quickly plus machine learning is very different than having a judge think about "is this an imminent threat," and spend weeks and weeks thinking about it.
Evelyn Douek:
The fundamental problem of content moderation is the scale. Remember, Facebook has taken down something like 513,000 pieces of content since you started listening to this episode. And to do that, platforms have to rely on technology. Using machine learning to do a first pass at reviewing whether the content violates their rules will be practically necessary, but it also causes all sorts of problems. Now, of course, the tech has come a long way since the early days.
Nicole Wong:
I was outside counsel to a company where I was reviewing their complaints of child pornography. So they would literally print pieces of child pornography, put them in a manila envelope for me to review and make a legal decision about whether it should come down, first of all, and second, whether we should report it. So all of these tools that have made the ability to identify content scalable, accurate and efficient in decision-making are a huge step from when I was reviewing pieces of paper out of a manila folder.
Evelyn Douek:
But the thing is, as much progress has been made, artificial intelligence still just isn't that intelligent.
Alex Stamos:
Having the world's best machine learning algorithms is like having an army of five-year-olds. There are tasks for which if you have an army of five-year-olds, it's really good. Like if I had a Mount Everest-sized pile of Skittles and I wanted that Mount Everest-sized pile of Skittles to be broken into five different piles based upon color, then having a million five-year-olds would make that a much faster job.
But a million five-year-olds are never going to build the Taj Mahal, and that's what machine learning is like. You can teach machine learning to do something reasonably simple by giving it lots and lots of examples of how you want something done. And then if it's really good, it will be able to replicate that decision at speed and at scale. And so you can have a set of here's a thousand decisions we've made, and now based upon those 1,000 or 10,000 or 100,000 decisions, we want you to make those decisions against a hundred million or a hundred billion. That's what machine learning's good for.
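To make that concrete, here is a minimal, hypothetical sketch of the pattern Alex is describing: learn from a set of past human moderation decisions, then replicate those decisions against new content at scale. The example posts, labels, and model choice are all invented for illustration, not any platform's actual pipeline.

```python
# A toy version of "here's a thousand decisions we've made; now make those
# decisions against a hundred million." All data and choices here are
# illustrative assumptions, not any platform's real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical past decisions: (post text, 1 = violated policy, 0 = fine).
past_decisions = [
    ("buy cheap ray-bans now, limited offer!!!", 1),
    ("come buy shoes at joe's, come buy shoes at joe's", 1),
    ("happy birthday! here's a cat meme for you", 0),
    ("my thoughts on the new city budget proposal", 0),
]
texts, labels = zip(*past_decisions)

# Turn text into features and fit a simple classifier on the labeled examples.
vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(texts), list(labels))

# Apply the learned decision pattern to a stream of new posts.
new_posts = ["get your fake ray-bans here", "anyone have hiking tips for this weekend?"]
scores = model.predict_proba(vectorizer.transform(new_posts))[:, 1]
for post, score in zip(new_posts, scores):
    print(f"{score:.2f}  {post}")
```

In practice the training sets, features, and models are vastly larger and more sophisticated, but the shape of the task is the same: the machine can only replicate decisions humans have already made many times over.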
Evelyn Douek:
So let's say there's some fictional universe in which we could all come together and decide on the perfectly crafted, uncontroversial hate speech rule that makes everyone happy. That doesn't matter if it can't be enforced at scale.
Alex Stamos:
All the time what will happen is that you'll have the content policy people, who, again, are mostly lawyers and trained as lawyers, say, we want to make this decision and we want to change the standard. And then they'll hand that to the people who are actually responsible for implementing it, the operational people and the engineers, and then they'll go test it. They'll go grab 10,000 pieces of content that are relevant, and then they'll try to enforce the rules against it, using machine learning and humans. And then they'll come back and say, our false positive rate here was 20%. This is impossible for us. We need to make this a simpler rule. That kind of stuff happens all the time.
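The back-and-forth Alex describes, where the policy team proposes a standard and the operations and engineering teams test it against a labeled sample and report back the false positive rate, might look roughly like this in miniature. The rule, the sample, and the resulting number are invented purely for illustration.

```python
# Sketch of testing a proposed rule against content already labeled by human
# reviewers and measuring the false positive rate. Everything here is a
# made-up stand-in for the "grab 10,000 relevant pieces of content" step.
def proposed_rule(text: str) -> bool:
    """Hypothetical stricter rule: flag any post that mentions these terms."""
    banned_terms = {"ray-bans", "miracle cure"}
    return any(term in text.lower() for term in banned_terms)

# (post text, does it actually violate policy according to human review?)
labeled_sample = [
    ("get your fake ray-bans here, huge discount", True),
    ("my ray-bans broke on vacation, any repair tips?", False),   # benign mention
    ("this miracle cure will change your life, buy now", True),
    ("photos from the lake this weekend", False),
]

flagged_benign = sum(1 for text, violates in labeled_sample
                     if proposed_rule(text) and not violates)
total_benign = sum(1 for _, violates in labeled_sample if not violates)

print(f"False positive rate: {flagged_benign / total_benign:.0%}")
# Prints 50% here: a signal to go back to the policy team and simplify the rule.
```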
Evelyn Douek:
And ultimately there's just no getting around the fact that making speech decisions is a matter of judgment and values. Current debates around platform "censorship" and political bias show just how fraught all of this is. And there's no computer in the world that can answer exactly where to draw the line.
Alex Stamos:
When you talk about some kinds of content moderation, especially around disinformation, around hate speech, around anything that's political, then there are so many corner cases and pieces of context that can't be encoded into machine learning, that inevitably human beings have to get involved. And if you've designed your system right, that's fine. If the machine learning algorithm says there's a 99% chance this is hate speech, then maybe you're like, okay, that's fine. We'll allow the machine to take steps. If it says it's 1%, you're like, okay, we're going to let it through and let it be posted. If it says, I'm not totally sure, I think this is a 50% chance that this is bad, then you get a human being involved. That human being makes a decision.
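A minimal sketch of the triage Alex outlines: let the machine act on its own when it is confident either way, and route the uncertain middle to a human reviewer. The thresholds and action names below are assumptions for illustration, not any platform's actual values.

```python
# Confidence-based routing: high-confidence scores get automated handling,
# uncertain ones go to a person. Thresholds here are illustrative only.
def route(violation_probability: float) -> str:
    """violation_probability: the model's estimate that a post breaks the rules."""
    if violation_probability >= 0.99:
        return "auto_action"     # "we'll allow the machine to take steps"
    if violation_probability <= 0.01:
        return "auto_allow"      # "let it through and let it be posted"
    return "human_review"        # "I'm not totally sure" -> a person decides

for score in (0.995, 0.004, 0.5):
    print(f"{score:.3f} -> {route(score)}")
```

Where those thresholds sit is itself a policy choice: push them toward the middle and the machine does more on its own; pull them toward the extremes and more decisions land on human reviewers.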
Nicole Wong:
Sometimes the executives from companies are saying, yeah, we know that terrorist content or porn or whatever is a problem, but we're going to figure out some better AI to address that with.
Evelyn Douek:
Here is Mark Zuckerberg telling Congress exactly that.
Mark Zuckerberg:
No amount of people that we can hire will be enough to review all of the content. We need to rely on and build sophisticated AI tools that can help us flag certain content, and we're getting good in certain areas.
Nicole Wong:
I want us to be really skeptical about whether AI can do things like that and whether we want them to. Because I think humans should make hard decisions about the type of content we feel is appropriate. I think we should wrestle with that, and I don't think we should delegate those kind of moral choices to a machine.
Evelyn Douek:
Great. So platforms are in a situation where they have to use automated tools to make speech decisions, but also automated tools suck at making speech decisions. And it gets worse. There's no set and forget when it comes to online speech regulation. Everything is constantly changing.
Alex Stamos:
Trust and safety is not like other engineering areas where you can make decisions that stand the test of time. You make a decision about how the Golden Gate Bridge is built, and then you see over a hundred years how those decisions play out. When you make a trust and safety decision, you will find out the next day what your intelligent adversaries are going to do in reaction.
Evelyn Douek:
Whatever rule platforms come up with, people try and game it.
Nicole Wong:
A bunch of the trolling that happens online is really subtle because the trolls are super smart about coming right up to the line of our policies, but not quite going over, or denying it once they get there because they were just joking. So while they're doing a bunch of harassment that is online but not necessarily totally detectable, the impacts on a person are actually happening offline and outside the purview or even the visibility of these platforms. And so I feel like the platforms have really struggled with how much do I take into account statements that don't violate my policies on my site, but are clearly having an impact?
Evelyn Douek:
Okay. But what happens when the deciders within platforms start thinking not only about how to police their services, but also how to take into account what's happening offline?
Alex Stamos:
I was in a meeting in 2015 with some senior Facebook executives who had just come back from the UK, where they had been yelled at by the prime minister, David Cameron. This was a discussion of how Facebook should react to ISIS using Facebook's products. And one of the points of discussion, a question that was asked, was: well, what's our goal here? And one of the senior executives said, our goal is to defeat ISIS.
Evelyn Douek:
Step one, make a tool that reminds people when it's their friends' birthdays and shows them funny cat memes. Step two, defeat ISIS. Step three, profit?
Alex Stamos:
And my boss, the general counsel, to his credit, said, whoa, whoa, wait a second. Let's pump the brakes on this. That's not a reasonable goal for a private tech company. Our goal needs to be something that's doable by us, that's tied to what we can do. Defeat ISIS is the goal of a government, it can't be the goal of a private company. And so there's this big philosophical argument that really has stuck with me of what is the role of a platform. And the outcome from that one, which I think is actually reflected in a lot of these other decisions, is the idea that, well, yes, our goal isn't to defeat ISIS, our goal should be that Facebook doesn't make ISIS's job easier. Which I think is a much more reasonable place than to say we should make the world a better place.
Evelyn Douek:
Yeah. Maybe we can't just code our way to kumbaya.
Alex Stamos:
There are some people, both inside tech companies and especially outside of tech companies, who really still believe that the companies should strive to just make humanity better. And I think that's actually a really dangerous impulse.
Evelyn Douek:
And it's dangerous because it's hubristic. Nicole and Alex are really nice people. I really enjoyed talking to them. They probably make better decisions than I do as someone who pretty much exclusively wears black in order to avoid the tremendously difficult and high stakes decision of what clothes to put on in the morning. But do I think they have the answers to all the world's problems?
Nicole Wong:
It's totally a fair question. Who gets to be the Decider, and why should we trust that person, and why should it be one person? And I was asked a version of that question once, should Jack Dorsey or whoever get to be the sole decider of what's on any given platform? And I don't think that's what we want, that it should be one person, but I'm not sure the answer is that it has to be many.
Evelyn Douek:
So to recap, it's not great when platforms don't moderate. There are all sorts of costs to society from rampant toxic speech. But when they do moderate, it turns out that it's really hard, and there are some really legitimate questions that come up about why they should be the ones to decide how to moderate.
Alex Stamos:
When you're inside the companies, 85, 90% of your work, maybe 99% of your work by volume, is stuff that everybody agrees on. And then you've got the 1% where every decision you make becomes this massive political thing that maybe gets you hauled in front of Congress or in front of a parliament or yelled at on TV. You have every political actor in the world both taking advantage of the kinds of amplification and attention they can get via social media companies while also berating the same companies for any decisions that do not go their way. And so you have this constant kind of working of the refs back and forth that has only gotten worse.
Evelyn Douek:
So if there was a turn to the capital D Deciders to clean up the online public sphere over the last decade, in the past few years, we've seen swings back against that. There's been the rise of alt-tech internet platforms that proclaim themselves to be more free speechy.
Nicole Wong:
Maybe it's that we have many venues for content decisions to be made, and each bubble is its own bubble, like Truth Social can do whatever it's going to do. I have no idea what it's doing over there, but I'm assuming that whoever its users are probably happy with what it is. And maybe that will be the same for Twitter. I do feel like we're in this weird conundrum where certain platforms are so big and the platform is trying to please such a wide and diverse audience, and that is always going to be impossible.
Alex Stamos:
January 6th has had a big impact, but it's less on the policies of the platforms and more on the fact that the reaction to January 6th was the companies finally cutting off Trump. And as a result, we've had this massive fracturing of the social media landscape in the US. We've had the rise of Parler and Gab and Truth Social and Telegram, which always existed, but became much more popular in the US. And that is where the heart of the alt-right movement has moved, from a speech perspective: onto these alternate platforms.
You certainly still see it on Instagram and Facebook and Twitter and the other major platforms, but what you see is the echo or the intentional shadow, where the really radical speech is much safer elsewhere. And so you'll have a group that radicalizes itself on these platforms and then intentionally takes less radical positions on the big platforms, taking into account what their content moderation strategy is. And so that is a huge change with the 2022 and 2024 elections, and that's going to be a really big issue for people to pay attention to.
Evelyn Douek:
Some people are buying and trying to change the most popular platforms.
Forbes Narrator:
Elon Musk has finally completed his $44 billion deal to take over Twitter late Thursday, and quickly went to work rebuilding the company to his vision.
7NEWS Newscaster:
The world's richest man has taken control over one of the most influential platforms on the planet, and he immediately went to work firing staff within minutes.
ABC News Newscaster:
The breaking news overnight, Twitter suspended multiple journalists from prominent outlets including The Washington Post, The New York Times, and CNN.
Sky News Australia Newscaster:
Donald Trump has had his Twitter account reinstated. Earlier, Twitter CEO Elon Musk asked his followers to vote on whether or not the former US president should be allowed to return to the platform. More than 15 million accounts responded to the poll. A short time ago, the billionaire businessman tweeted, "The people have spoken. Trump will be reinstated."
Evelyn Douek:
And then there are the lawmakers.
Nicole Wong:
I don't think government regulators get super exercised about the content on a given platform until it starts to offend either them or one of their constituents in a big way. And so how much difference do a company's content policies make to the average legislator? Probably not too much, until we really get it wrong. And we've had many opportunities to really get it wrong.
Evelyn Douek:
In some ways, the law sat on the sidelines watching these mega platforms grow up, but now it wants in and in a big way, because the private and public spheres of speech regulation do not exist in complete isolation from each other. Up until now, by and large, the law gave platforms space to make their own rules. But there are lawmakers who think it's time for that to come to an end.
Texas Governor Greg Abbott:
We see that the First Amendment is under assault by these social media companies, and that is not going to be tolerated in Texas.
Florida Governor Ron DeSantis:
Big tech has come to look more like Big Brother with each passing day. If George Orwell had thought of it, he would have loved the term content moderation.
Donald Trump:
Currently, social media giants like Twitter receive an unprecedented liability shield, based on the theory that they're a neutral platform, which they're not.
Evelyn Douek:
In the meantime, companies will continue to make billions of speech decisions, fumbling their way along and upsetting everyone as they go. Because, well, constructing a system of speech regulation is, you guessed it, hard.
Alex Stamos:
There's a bunch of these decisions that they know are not solvable. These are problems that can't be fixed.
Evelyn Douek:
And now it's no longer going to be up to the companies alone to try. The legal landscape is changing, and in many ways, working out good regulatory reforms is long overdue.
Nicole Wong:
Yeah, I mean, the best time would've been 10 years ago, but the second-best time is tomorrow.
Evelyn Douek:
But of course, that's only if the cure isn't worse than the disease. And, well, the next episode of "Views on First" is about some legislative "cures" and how they may in fact not make anything better.
This episode of "Views on First" is produced and edited by Maren Lazian, and written and hosted by me, Evelyn Douek, with production assistance and fact-checking from Kushil Dev. Candace White is our executive producer. Audio and production services are provided by Ultraviolet Audio with production and scoring by Maren Lazian and mixing and sound design by Matt Boynton. "Views on First" is brought to you by the Knight First Amendment Institute at Columbia University. To learn more about the Knight Institute, visit their website, knightcolumbia.org, or follow them on Twitter at @KnightColumbia, and on Mastodon at the same handle. I'm Evelyn Douek. Thanks for listening.