Mr. Altman Goes to Washington, and Casey Goes on This American Life
This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email [email protected] with any questions.
Casey, can I show you an app?
Show me an app.
OK, this is a new app called “New York Times Audio.” And as you might guess from the name, it is a “New York Times” audio app.
So this time the app is coming from inside the house that we’re sitting in right now.
Right. So this is a new iOS app. It’s for “New York Times” subscribers. It’s called “New York Times Audio.” Our show is on this app, as well as a daily playlist of news. It’s got narrated articles. It’s got podcasts from “This American Life,” “Serial Productions,” and “The Athletic.” And I’ve been using this app as a beta tester for a while now, and it is really good.
All right, I’m going to get on the Wi-Fi and do this.
I believe in you.
Now, I’m looking at this for the first time, and I’m scrolling. And let me tell you what I’m seeing. I’m seeing Turkey’s President fighting for political survival. I’m seeing how $89 million of phone donations disappeared. “The Daily’s” on here.
Small niche podcasts.
And I actually am seeing a new episode of “Hard Fork.” So when I see that, Kevin, I think I’m getting everything.
And there’s articles?
Narrated articles.
There’s narrated articles, and it’s the actual reporter taking time away from doing journalism to read it to your lazy ass. That’s all happening in “The New York Times Audio” app. And it’s free if you subscribe to “The New York Times.”
You can also — and this is a very exciting feature for me — you can choose between eight different playback speeds ranging from 0.8x all the way up to 3x. And if you are listening to “Hard Fork” on 3x, I actually do want to hear from you.
That’s too fast. I’ll say, that’s too fast.
Yeah, I kind of want to listen to it on 3x now just to hear what it sounds like.
[AUDIO PLAYING IN 3X SPEED]
Oh, come on.
This is 3x. [AUDIO PLAYING IN 3X SPEED]
Wow, our laughter at 3x, it does not sound great.
No.
We sound like chipmunks. [AUDIO PLAYING IN 3X SPEED]
So if you want to listen to “Hard Fork” or any other show from “The New York Times”—
At any speed ranging from 0.8 to 3.0 —
You can download “New York Times Audio” at nytimes.com/audioapp.
And you better.
I think we should try to record the show at three times speed today.
Let’s do it because I want to get to happy hour.
(SPEAKING RAPIDLY) I’m Kevin Roose from “The New York Times.”
(SPEAKING RAPIDLY) I’m Casey Newton from Platformer.
(SPEAKING RAPIDLY) You’re listening to “Hard Fork.”
[MUSIC PLAYING]
I’m Kevin Roose, a tech columnist at “The New York Times.”
I’m Casey Newton from “Platformer.” And you’re listening to “Hard Fork.”
This week, “The New York Times” Cecilia Kang talks to us about why lawmakers are cozying up to OpenAI’s CEO Sam Altman. Then Twitter’s former head of trust and safety, Yoel Roth, talks to me about his battles against Donald Trump, Elon Musk, and other forces conspiring against content moderation.
[MUSIC PLAYING]
Casey, the big news this week in tech was not in California, but in Washington, D.C., with a big Senate hearing about AI and AI regulation. Testifying, most notably, was former “Hard Fork” podcast guest Sam Altman, CEO of OpenAI, along with two other AI experts: Christina Montgomery, who is Vice President and the Chief Privacy and Trust Officer for IBM, and Gary Marcus, who’s a professor emeritus at NYU.
Did you watch this hearing?
Well, I was on a plane to New York to meet with you in the studio, so I missed it. But more than anything, I was just surprised that we’re already here. Congress is talking about it. It’s not just Don Beyer anymore, that congressman who we interviewed on a previous episode, who went back to school to study AI.
Congress is paying full attention to this. And I think that’s a good thing.
Yeah, social media had existed for 10 or 15 years before the first congressional hearings where Mark Zuckerberg and other CEOs were called to testify. ChatGPT came out last November.
Right.
And we’re already having congressional hearings about it. This thing is moving so quickly, and lawmakers are really trying to get their heads around it. And so this week we got a glimpse of basically the first time that a congressional hearing has addressed this issue of generative AI and some of the risks and promises that the technology has.
So one of our colleagues at “The New York Times,” Cecilia Kang, was following along with the hearing. Cecilia covers tech policy and regulation for “The Times.” Cecilia, welcome to “Hard Fork.”
Hey, thanks for having me, guys.
Hi, Cecilia.
So Cecilia, tell us about this hearing. Often, with hearings in Washington that Casey and I have talked about, the hearings sort of come after some tech company has done something really bad or really spooky, like the Facebook hearings over Cambridge Analytica, or Twitter’s hearings over content moderation. There’s something that gets screwed up, and the executives are called to Congress to testify about it.
There’s like a smoking crater, and Congress is like, what made that?
So in this case, what did make Congress eager to have a hearing with the CEO of OpenAI and two other experts?
Yeah, well, I think it helped that when ChatGPT by OpenAI was released late last year, everybody was trying it. And that included lawmakers in Washington. And so they were running speeches on ChatGPT. They were conducting experiments. And they just had their holy-S moments of, whoa! This thing can do what I’m paid to do and elected to do, which is to give speeches and to have positions on policy. And this is scary. So I think it hit close to home.
Interesting.
It’s almost like how social media hearings really dialed up after politicians started using them for their campaigns because they were like, oh, this affects me and my job and my constituents.
And I’d also say that there is a recognition in Washington that Congress has completely failed when it comes to regulation of social media. And there were a lot of nods to this during the hearing yesterday, with lawmakers saying, we don’t want to make the mistakes of the last few years, which was to talk a lot about regulating and not do anything. So they are trying to be faster and to look around corners.
Which I find very heartening, I have to say, right, because I’m somebody who, like Cecilia, sat through those hearings and saw bill after bill. And then nothing happened. And with some of the risk around AI, I think we do want to see them moving faster. So I actually found it gratifying that they were moving here.
Yeah, and speaking of those hearings, we have some clips, like a blooper reel of congressional tech appearances.
Yeah, if you missed the past few years, I think these clips really showcase the knowledge and perspective that Congress brought to the discussion.
archived recording (congressman)
How do you sustain a business model in which users don’t pay for your service?
archived recording (mark zuckerberg)
Senator, we run ads.
archived recording (congressman)
I have a seven-year-old granddaughter who picked up her phone before the election, and she’s playing a little game, the kind of game a kid would play. And up on there pops a picture of her grandfather. And I’m not going to say into the record what kind of language was used around that picture of her grandfather. But I’d ask you, how does that show up on a seven-year-old’s iPhone who’s playing a kid’s game?
archived recording (sundar pichai)
Congressman, iPhone is made by a different company.
archived recording (congressman)
Mr. Chew, does TikTok access the home Wi-Fi network?
archived recording (shou chew)
Only if the user turns on the Wi-Fi. I’m sorry. I may not understand the question.
archived recording (congressman)
So if I have a TikTok app on my phone, and my phone is on my home Wi-Fi network, does TikTok access that network?
archived recording (shou chew)
It will have to access the network to get connections to the internet, if that’s the question.
Oh, three classics. That was, of course, Mark Zuckerberg, Sundar Pichai, and Shou Chew of TikTok.
Yeah, so this is the kind of tech hearing that we’ve come to expect from Congress. I would say the tone of most of these hearings has been most similar to a Genius Bar appointment.
— with a very confused customer.
But like an angry Genius Bar appointment.
And that’s the thing. These lawmakers, when they ask their bad questions, they ask with so much anger and confidence.
Yes, totally. So I would say, not totally reassuring.
Can I just underscore that point? Because that is the most emblematic thing: I’m going to ask you a question. And I’ve never been more mad, and I also have no idea what I’m talking about.
Precisely.
Yes, my iPhone is on the fritz, and it is a personal affront to democracy. So this hearing, however, was a little bit different.
Yeah, so I was really struck by how not adversarial this hearing was and how lawmakers were very friendly, particularly towards Sam Altman. They were really approaching him like he was a professor, like, come educate us, Sam Altman, on this technology. And tell us how we should regulate you. The posture was so different. There wasn’t this performative, I’m so angry, your company is terrible for democracy tenor in the questioning.
There was a lot of doomsaying in a lot of what the lawmakers were saying. They are projecting concerns about what artificial intelligence could do to, and wreak on, the economy and society. But they were looking to Sam Altman for answers and for guidance, and that was very different from what I’ve seen in hearings past, and surprising. And also, the questions weren’t, dare I say it, terrible.
They were pretty good.
Yeah, and they were not terribly deep, but they were not terrible. They were not off. Nobody was asking the CEO of Google how an iPhone works.
So I’m really curious about why the tone might have been so different. And I think one thing is that we’re early enough that there has not been a huge calamity yet. There is not a smoking crater that everyone is mad about. But also, I wonder if the fact that OpenAI started out as a little bit more of a research lab than a big consumer internet company might help. And I also wonder, did Sam and the other folks at OpenAI spend a lot of time leading up to this trying to get ahead of the story?
Yes, that last bit is, I think, key. Sam Altman has been in Washington multiple times. Just this week, on Monday, he was having dinner with 60 lawmakers on the House side. He gave a presentation. People left the meeting and told me they were super impressed with him, with how he explained how the technology works, and with how cooperative he seemed. So that was just one example of many of his meetings.
He’s given personal demonstrations to many members of Congress and their staff. So he’s accessible, which is very different from the early years of big tech titans who would come to Washington only under duress to testify. They never wanted to engage with Washington to talk about how their technology worked, and they were very defiant and defensive, insisting that their technology could not be harmful at all. And I think Sam Altman has a very different view. He has a very balanced approach to how what he’s making can be harmful but also hold lots of opportunities for good.
Yeah, one of the most remarkable pieces of this hearing to me was that they really didn’t shy away from talking about some of the downsides and the risks of AI. And I really wanted to unpack that with you because I thought there were some super interesting moments. So we actually pulled some clips from the hearing and edited them down for clarity. And I thought we could just listen to them and then talk about what happened.
So this first clip, I think, is about one of the concerns that really drew the most attention during this hearing, which is the medium and long-term risks of AI and how it could impact, not just jobs, but humanity as a whole. So this came from a moment where Senator Richard Blumenthal from Connecticut was asking all of the witnesses what their biggest nightmare is with AI. And they went one by one, and Sam Altman said something about jobs. And then at the end of Gary Marcus’s answer, he pointed out that Sam Altman had skirted the question.
archived recording (congressman)
And last, I don’t know if I’m allowed to do this, but I will note that Sam’s worst fear, I do not think, is employment. And he never told us what his worst fear actually is. And I think it’s germane to find out. Thank you. I’m going to ask Mr. Altman if he cares to respond.
archived recording (sam altman)
Yeah, look, we have tried to be very clear about the magnitude of the risks here. I think jobs and employment and what we’re all going to do with our time really matters. I agree that when we get to very powerful systems, the landscape will change. I think I’m just more optimistic that we are incredibly creative, and we find new things to do with better tools. And that will keep happening.
My worst fears are that we, the field, the technology, the industry cause significant harm to the world. I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening. But we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that.
So this clip really speaks to one of the central tensions, I feel like, in the conversation about AI as a whole: should lawmakers be focused on the near-term risks that we can see now, things like disinformation, propaganda —
Bias.
Bias, people churning out news stories using ChatGPT, or students using it to cheat on homework, or other misuses of this technology. And then there are people who think, well, actually, the bigger risks, and the ones we should be regulating to try to prevent, are the long-term risks: the danger that AI could get so powerful that it could actually destroy or disempower humanity as a whole. So when you were listening to this hearing, how did you hear the lawmakers and the witnesses grappling with this tension between near-term risks and long-term risks?
Yeah, and I’d say that all of it was discussed. Some specific near-term risks, like copyright infringement, as you said, and definitely election interference, on a scale that’s so much greater than what we saw in social media, were discussed in detail. And there’s great concern about that. The problem with the long-term risks is that, aside from using the kinds of words that Sam Altman did, which were kind of vague, like, this could be really terrible for humanity,
it’s hard to be specific about this very hypothetical, cataclysmic result of this technology. So I truly believe that lawmakers have a hard time grappling with the long-term risks beyond what they read in sci-fi. Not to say that some of that stuff isn’t legitimately concerning. But for them, when it comes to policymaking, it’s harder to grasp those bigger, more ambiguous and broad concerns that aren’t specific.
Well, there’s this term that people are starting to ask me about, and I wonder if you’ve heard this. Have either of you heard the term P doom?
Yes.
OK.
P doom.
P doom. So P doom is what the AI safety people call the probability that AI will cause doom, right, a superhuman intelligence subjugating humanity. And in the AI research community, there are people who think that probability is like 10 percent or higher. And so I bring it up because, if you’re Congress, we might want you to have a personal P doom. And you might want to have a sense of, if you think the P doom is 10 percent or 20 percent, then maybe you do pay more attention to that than to how this thing is going to affect the next election.
Right.
But also maybe not. I don’t know. It’s hard.
I was having lunch with some AI safety folks the other day, and everyone was going around the table and saying, what’s your P doom? What’s your P — this is like —
It’s Silicon Valley’s latest parlor game.
So Congress was evaluating its own P doom. What other concerns did the senators at this hearing bring up about AI?
Yeah, they did talk about these specific things related to how synthetic media could be used to create fake videos, fake audio clips. And that is a big, front-and-center concern in Washington and, actually, across the world right now. It’s clear that everybody who has tried ChatGPT or other chatbots or other AI tools, such as DALL-E, can see the potential for massive fake misinformation everywhere, just a flood that we haven’t seen yet. So that was discussed quite a bit.
There was a concern from Tennessee Republican Marsha Blackburn about how music clips are being used; she’s from Tennessee, so she represents Nashville, and the musicians are just super upset about how their music is being used over and over again. We see this also with Getty Images, which is actually suing over how images are being used in ways that aren’t fair use. So there’s a lot of discussion around whether copyright needs to be reformed. There was concern generally about who should be held liable if something false is said about you. And there were some proposals discussed on what regulation could look like.
Yeah, let’s talk about those proposals, because one of the things that was brought up during this hearing was whether Section 230, the law that shields tech platforms from legal liability for user-generated content (you can’t get sued if someone posts a nasty comment on your blog), should apply to generative AI programs like ChatGPT. Should OpenAI be liable if ChatGPT, for example, tells someone to do something really harmful and they go out and do it? So this next clip is from Senator Dick Durbin asking Sam Altman how we should think about Section 230 in relation to generative AI. And here’s what he said.
archived recording (sam altman)
I don’t know yet exactly what the right answer here is. I’d love to collaborate with you to figure it out. I do think for a very new technology we need a new framework. Certainly, companies like ours bear a lot of responsibility for the tools that we put out in the world, but tool users do as well, and also people that will build on top of it between them and the end consumer. And how we want to come up with a liability framework there is a super important question. And we’d love to work together.
So it sounds like what Sam Altman is saying is, no, we don’t want Section 230 to apply to generative AI, because, in some sense, it’s not exactly like a social platform. It’s more like a tool. We don’t hold Microsoft liable if someone writes something crazy or libelous in a Microsoft Word document. That’s just not how our legal framework is set up. Is that how you interpret it?
I interpret this as him saying that Section 230 was meant to shield platforms from lawsuits over things that happen on their platform that they don’t create and don’t intend to create. So in a way, he’s inviting more scrutiny and potentially litigation. That’s how I read it. And in fact, we’re hearing more people in Washington say that Section 230 should not apply to AI. Lina Khan, the chair of the FTC, has said, we’re looking really hard at AI when it comes to fraud and consumer protection, and we don’t think that AI is protected by Section 230.
Yeah, and this makes sense to me, because, if you’re on a social platform and a person wants to exercise their speech rights and defame someone, I do think the liability should fall more on that person than on the person who created the text box, right? There are some nuances there that we could get into, but that’s, at least basically, how I feel.
If, on the other hand, you want to use your Microsoft example, Kevin, well, if Microsoft writes half the document for you, and the document that Microsoft’s technology wrote defames me, then it does seem like Microsoft might bear some responsibility for that. And in fact, we’ve started to see some legal cases about this. There is one in Australia, where a politician has threatened to sue because ChatGPT misrepresented something about his career. So we are going to see these things get tested, and I’m interested to see how it plays out.
Yeah.
So the last clip I want to play is about this question of, well, yes, we’re all concerned about AI. And we can agree and disagree about what our biggest concerns are. But I really heard from the senators at this hearing a hunger for ideas, for concrete proposals, for new rules that could help mitigate some of the risks of AI. So this clip is from Senator John Kennedy, who’s a Republican from Louisiana. And he’s essentially asking the witnesses, what should we do? Give us some ideas.
archived recording (john kennedy)
Please, tell me in plain English two or three reforms, regulations, if any, that you would implement if you were Queen or King for a day. Mr. Altman, here’s your shot.
archived recording (sam altman)
Thank you, Senator. Number one, I would form a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards. Number two, I would create a set of safety standards focused on dangerous capability evaluations.
One example that we’ve used in the past is looking to see if a model can self-replicate and self-exfiltrate into the wild. We can give your office a longer list of the things that we think are important there. And then third, I would require independent audits, so not just from the company or the agency, but experts who can say the model is or isn’t in compliance with these stated safety thresholds and these percentages of performance on question X or Y.
archived recording (john kennedy)
Can you send me that information?
archived recording (sam altman)
We will do that.
archived recording (john kennedy)
Would you be qualified if we promulgated those rules to administer those rules?
archived recording (sam altman)
I love my current job.
Was he asking him if he wants to lead a federal agency?
I think so.
I think that’s a job that this guy is not interested in.
Completely. I love everything about that clip. So I wanted to play this clip because this idea of a licensing scheme for AI creators is very controversial —
Is it, really? Interesting.
— in the AI industry. Well, there’s this idea out there, and you saw a lot of this from other AI companies reacting to this hearing, that OpenAI, by advocating for this licensing law, is actually just trying to entrench itself: that one effect of requiring every person or company who wants to build a large language model above a certain scale to register for a license is that you don’t have as much competition if you’re OpenAI, because they’re going to get the license.
But some college student or hacker in his room who’s building a large language model is not going to have the lawyers and the compliance department and the people who are needed to secure all the necessary licenses. And so there could be a kind of regulatory entrenchment of the big AI players. Do you buy that? Or is that a concern that some people in Congress have?
I don’t think it’s a concern people in Congress have. I think something like that is a model that they know really well when it comes to licensing and testing. It sounds very FDA, very consumer product safety, modeled after those kinds of agencies. And so it’s familiar. But you bring up a super important point. I think that, even though Sam Altman in this very same hearing said that he is concerned that there won’t be enough competition, this is one of those things that speaks loudly to those who understand what the industry is like.
But the public does not see that this is the kind of thing that only a big company like Microsoft or Google can afford to do, because they have massive legal departments that can spend the money and put the resources into requesting licenses and making sure that all their products meet safety standards, et cetera.
Sure, but just to take the flip side of that: if you’re building a supercomputer that could subjugate all of humanity, we might want you to have a license for that. We might at least want to know that you’re working on that. And so I hope that, if such a licensing regime shapes up, it’s in that spirit: if you’re building one of the world’s most powerful computers, it feels like somebody in the government should know that. There are security risks associated with it, among others.
Yeah, so this idea of licensing capture, or regulatory capture, I know is being discussed in circles in Silicon Valley. I got a lot of messages during this hearing and immediately afterward about it. People are really worried that, by going to Washington, Sam Altman is basically trying to convince the government that OpenAI is the good, aboveboard, regulatory-compliant AI creator, and that basically everyone else is suspect.
I think that’s a really important point, and that’s the kind of detail that, unless you’re pretty astute on the topic, you wouldn’t know to ask about. And so that betrays the knowledge gap between Washington and Silicon Valley on that piece. But also, I do want to note that there is a really important point to make about how Washington is so enamored right now with Sam Altman, because of these things that we talked about: he’s making all the trips there, he’s having dinners, he’s being very open and spending time with people and doing these demos.
And it’s astounding to me, not surprising, but astounding, that members of Congress are so easily wooed by this. And this happens all the time. If you’re a powerful corporate executive, there’s something about that position that appeals to people in Washington. It’s just another position of power, one that’s not a Washington kind of power, because you’re a powerful person in business.
And so I think that we should watch Sam closely, and he may not be getting the skepticism he deserves. I was actually surprised, listening to the hearing, that Cory Booker of New Jersey kept calling him Sam. I thought that was a little weird, a little too familiar. And that’s a little bit Cory Bookerish, but it was very chummy. And it made me think, not, oh, wow, this is so friendly, but, oh, wow, this is such a friendly tenor. That could have some downsides long term.
I do think it is a masterstroke on the part of Altman and OpenAI to run to Congress and say, we’re building something. It could be very dangerous. Please, regulate us before we get out of control. Because it speaks so perfectly to the moment we were just in, as you pointed out earlier, Cecilia, where the social media companies didn’t do that. And Congress is still so mad at them. And so now you have this young man who comes along and says, we’re determined to do it the right way. Go ahead and pass any regulation.
And I don’t know if they’re this cynical, but I’m certainly cynical enough to say, one reason why you can say that is because nothing might happen, right? The whole model from the past five years is that they didn’t pass a single bill. So if you’ve gone to them and begged for regulation and they don’t deliver, who’s really the bad guy? There is a lot of sophisticated diplomacy happening right here.
I think you’re so spot-on, Casey. I like that take. One might call it cynical. You just did. But I do like that take. And let’s also be clear. Sam Altman is not tapping the brakes at all when it comes to his technology and its development. So he’s saying, look, I’m really concerned. I want to be the voice of sobriety on this technology. I’m different from the other technologists because I’m not saying everything is perfect. But he is not slowing down. And I think that’s something that was lost in the hearing.
I think that’s right. And not only are they not slowing down, but I think there’s a good case to be made that OpenAI really did kick this off, first with the launch of DALL-E, the text-to-image generator, and then with ChatGPT, which then, of course, kicked it into overdrive. If you’re looking for the list of companies that really changed the conversation about AI, there’s really only one name at the top.
Right, Cecilia, you’ve been covering tech and tech policy and regulation for a long time.
Long time.
And when it comes to this topic, I default to skepticism, or maybe even cynicism, because we’ve seen so many hearings. We’ve heard from so many grandstanding senators about the newfangled technology that they’re trying to get their minds wrapped around. They make all of these promises, and then nothing happens. No laws are passed. No bills advance. It is a total exercise in hot air and futility.
Which was the original title of “Hard Fork,” but Kevin made me change it.
So after this hearing, are you feeling optimistic or pessimistic or something else about the likelihood that Congress will actually regulate AI in the short term?
I don’t know if I’m optimistic, but I do think things are a little different. I think that partly things are different because Congress feels ashamed for those very things that you just said, Kevin, that they haven’t done anything. And they understand that there’s risk for spending so much time and energy and being so theatrical about the doomsday of technology when it came to social media and not doing anything.
So they do want to do something. But I think it’s going to — making regulation is hard. It’s controversial. The companies have not weighed in heavily in a negative way. We got a little glimpse of that when Christina Montgomery, the Chief Privacy and Trust Officer at IBM, differed in her opinion on what should be done. She differed with Sam Altman in that she said, I actually don’t think there should be an independent agency. She said, I think the existing laws are enough. So she was arguing for a light approach to regulation.
And so when I heard her talk, I thought, OK, that’s actually what’s really going on. What’s really going on is that IBM and a bunch of other companies are going to swoop in and say, actually, yeah, regulate us, but in the most light-touch way.
Which is sort of what happened with social media, right?
Yes.
Facebook and these other companies, they did ask for regulation.
They did.
But when it came to the actual bills that were proposed or the agencies that were trying to rein them in, they were lobbying furiously to stop it.
Yes.
So that could happen again with AI, is sort of what I’m hearing you say.
I expect it. I expect it. And again, I hate to sound so cynical. But yeah, I’ve been covering this for a while. And I think there will be a lot of political interest by these members of Congress to be a big voice, a loud voice on AI and concerns about AI. So that’s where the political theater comes in, and I think you can expect a lot more of that.
Yeah.
One of my feelings after listening to this hearing — and I confess I only watched about half of it.
Oh, Kevin.
How could you?
Kevin, can you please tell me you listened to the last half?
No, I listened to the first half.
Because you were name-checked in the hearing. They mentioned you. They said, “The New York Times” writer who used a chatbot and was told to get a divorce.
Oh, my god. Well, my congressional debut, there we go. Not what I thought I would be making the floor of Congress for.
What did you think you’d be making the floor of Congress for?
I don’t know.
Just by maybe getting a Presidential Medal of Freedom for your journalism.
Yes.
The jokes on the podcast, they’re so good. We must honor this man.
One of my thoughts after listening to part of this hearing was just that I feel there’s so much energy and excitement around doing something about AI. But a, they’re not really clear on what the something is. And b, they’re not actually really clear on what the AI is. And how fast it’s moving actually makes the regulation of AI a really challenging target.
The rules that you write today are going to be obsolete in two or three years when all of the technology has changed and advanced. So I just think it’s a really challenging spot for Congress because, clearly, they want to do something. But it doesn’t seem to me like they actually understand what the underlying issues are or it’s even maybe possible to regulate something that is changing so quickly. Am I reading that right? Or how do you feel about that?
I think that’s absolutely right. And I think you’re seeing a faster uptick in interest in the industry and in regulating the industry. But the industry is moving faster than any other technology that I’ve seen. So that’s what Congress is up against. And there are plenty of examples of bad regulations that have been created and regulations that get outdated very quickly. So that’s the challenge. The education gap, the knowledge gap between members of Congress and their staff and technologists is still pretty wide. It’s getting a little bit better, but it just has to be so turbocharged to catch up with what’s happening right now in Silicon Valley.
And by the way, this is the case for an agency, right? An agency is set up in a way that it can respond faster to things. I think Senator Michael Bennet from Colorado said recently that you wouldn’t want Congress to have to pass a law to approve every new drug, right? So instead, we have the FDA. And he has a bill coming out that has some AI-related stuff in it. But it is pro-agency for this reason, that you want an agency that can just —
Subject experts.
Subject experts making decisions a little bit more on the fly, not needing a literal act of Congress to do anything as this stuff evolves.
But I do think there is a case that Congress should understand this technology and know at least the basics. So Cecilia, if you would just be willing to tell all of your sources on Capitol Hill to listen to the “Hard Fork” podcast —
But they already are.
Oh.
Cecilia, thank you so much for coming on.
It was so much fun. Thank you guys.
[MUSIC PLAYING]
So Casey, you had a very interesting experience recently of going on a different podcast than “Hard Fork.”
I did, and not only a different podcast, but I would say one of the greatest podcasts of all time.
Yeah, it was a little like you’re hanging out on the lot at like Warner Brothers and Martin Scorsese taps you on the shoulder and is like, hey, you want to be in a movie? I feel like that’s what happened to you.
Yeah, so we’re talking, of course, about “This American Life,” which legitimately has been my favorite podcast since I was in college. And recently, I had the opportunity to work with them on a story that touches on a lot of the themes that we talked about here on “Hard Fork.”
So this episode, I listened to it in the car after it came out. It was very, very fun and entertaining and informative. And today on the show, we just decided we’re going to play it for you because it is a true labor of love. And I think it turned out really well. And I wanted all of our listeners to be able to hear it. So let’s listen to the story, and then afterwards, let’s catch up. And let’s pull it into the present and talk about what’s happening at Twitter now and where this whole field may be headed.
[MUSIC PLAYING]
There’s been one person in particular at Twitter that Casey has been wanting to talk to, a very senior employee at the company who, while just doing his job, ended up having to take on two of the most powerful people on the internet and in the world. Those two people, Elon Musk and the former President of the United States, Donald Trump. Casey wanted to hear all about that. And also what it was like for the guy, what he was thinking, what he was doing once Elon took over and the place started taking on water. Here’s Casey Newton.
Yoel Roth did a lot of jobs at Twitter over the years, but it was always the same kind of job. He was in the content moderation business, one of those people who decides which of your posts can stay up on the internet and which ones need to come down. And he got his first glimpse at what life as a content moderator would be like while he was in college on a date. He’s gay. So am I.
- yoel roth
-
I went out for drinks with somebody without knowing where he worked. And he volunteered that he actually worked for the parent company of the website Manhunt, which was one of the early gay websites that was very specifically sexually focused.
And even in these early days of the web there was already a team of people who were deciding what you could and couldn’t post there.
- yoel roth
-
They had a set of convoluted rules about what types of nudity you were allowed to show in which places. So nudity, fine, but not all nudity. So there were specifics. And he described to me a system of color-coding images, of red, yellow, green, and then a team of people who were responsible for making those designations. And I’ll never forget. He said, the people doing these reviews are almost entirely straight women.
And I was just floored in that moment of thinking, god, there’s a team of heterosexual women who have to look at the depraved things that gay men are posting on the internet? I’m so sorry.
Right, the senior hole pic specialist at Manhunt was some poor woman.
- yoel roth
-
That’s not an exaggeration.
Yeah, we hope she’s doing OK. Are you out there? Call into “This American Life.”
- yoel roth
-
I’m so sorry.
I’m sorry for what you saw. [MUSIC PLAYING]
After the date, Yoel had one thought.
- yoel roth
-
I was like, aha, that’s my dissertation topic.
Yoel was in grad school. He got his PhD and, soon after, a job at Twitter. They gave him a small desk. This was 2015. The office’s most striking feature was probably —
- yoel roth
-
A giant, life-sized cardboard cutout of Justin Bieber sat directly behind my desk.
Justin Bieber, obviously being a major figure in early Twitter.
Maybe the most popular user, at least for some period of time.
- yoel roth
-
Yes, there were rumors that Twitter had entire servers just dedicated to serving Justin Bieber-related traffic.
Besides Bieber, what Twitter was really known for back then was its trolls. The site was plagued by users harassing other users, particularly women. That year, I co-reported a story about how the site’s then CEO, Dick Costolo, wrote a memo saying, quote, “we suck at dealing with abuse and trolls on the platform, and we’ve sucked at it for years.” That was the backdrop for Yoel’s new job.
As an intern at Twitter the previous year, he spent part of his time moderating content. He’d seen this video of a dog getting abused. He removed it from the site, but for years, it haunted him.
- yoel roth
-
It was never even the specific image. I couldn’t tell you what the dog looked like or what the video was. I just remember its existence, and I remember that feeling of seeing it and then of clicking, I think the button said no.
More than anyone ever talks about, it’s this mostly invisible job of content moderation that makes Twitter usable for the average person.
[MUSIC PLAYING]
It’s what makes every forum on the internet usable at all. And Yoel was good at the job. He got promotion after promotion in his department, what Twitter and a lot of other tech companies now call trust and safety. It’s a hard job, and it just kept getting more complicated.
The way Yoel tells it, there was a wild new case to examine almost every day. Foreign governments impersonating their enemies, real people organizing harassment campaigns, impossible debates over what should count as hate speech, and regular meetings over whether to put labels on tweets that didn’t quite violate the company’s rules but would benefit from more context, like about COVID.
In 2020, the biggest case yet landed on Yoel’s desk. It was a case about a user who kept causing problems. And this guy’s fans were even more rabid than Justin Bieber’s. It was the President of the United States, Donald Trump. This is a couple of months into the pandemic. Trump had tweeted that mail-in ballots in that year’s election were going to lead to widespread fraud. And just to lay my own cards on the table, I thought that was really bad because they won’t lead to widespread fraud.
Anyway, Twitter’s policies prohibited misleading people about the voting process the way Trump was doing. But the company had never taken action against the president’s tweets before. Yoel had to decide what to do.
- yoel roth
-
I didn’t see a basis for changing the policy, modifying it, winking at it, squinting and finding a violation. There was no way around it. It was clearly a violation of our policy. Truthfully, there was a lot of nervousness about crossing this line, for the first time, taking action on a tweet from the President of the United States.
The company decided that, instead of removing the president’s post, it would put a label under it, a label that just said, get the facts about mail-in ballots, with a link to a page that pushed back on Trump’s claims.
- yoel roth
-
At a certain point when it became clear that, yes, this was going to happen, it became a question of who could push the button?
On some level, we probably understand that in a moment like this someone has to take a physical action to type the words, get the facts about mail-in ballots, and click the button to attach the label to the post. I’ve talked to dozens of content moderators over the years, but I’ve never talked to someone who had moderated the President of the United States.
When it came time to take action, only a handful of people at Twitter had the power to do it. The company had locked down access after an incident where a former contractor on his last day working there briefly deactivated Trump’s account. Shout out to Bahtiyar Duysak, who says it was an accident. Also, Twitter had just introduced this idea of putting labels on misinformation a couple of weeks before.
- yoel roth
-
And so it was this perfect storm where it required elevated access and knowledge of this incredibly convoluted system for applying these labels. And I was the only one who knew how to do it. And so I got an instruction from my boss that said, all right, we’re going to do this.
Also, because this is how life goes, Yoel and his husband were moving houses the day all this happened.
- yoel roth
-
I excused myself from wrangling the dog and the movers and the relocation of stuff and sat in the front seat of the car with my cell phone tethered to my work laptop. I was on a video call with some of the other leaders at the company who were making this decision. And I remember a countdown where I was going to push the button that would apply the label to this tweet.
At that same moment, Twitter’s communications staff was going to announce the decision.
- yoel roth
-
And it felt very important in that moment for the timing to be exactly joined up for some reason. We counted down. I clicked the button. And then I refreshed the public view of the tweet and saw the label. And the communications team said, we’ve got it from here. And I said, OK, I have to go back and deal with the movers now. And I hung up the call, and I closed my laptop. And I crossed the street back into my apartment.
If they made a movie about Trump and Twitter, you can imagine how they’d shoot this scene, with the Twitter employees hunched over a console in a control room high-fiving. But in reality, of course, it’s the opposite. Most content moderators try really hard not to bring their own political beliefs into the job. In a way, the legitimacy of the whole company they work for depends on it.
Shortly before that Trump tweet, Twitter had explained its reasoning for adding labels to misleading information in a blog post. Importantly, the post was signed with Yoel’s name. So soon after that first label showed up on Trump’s tweet, his name was everywhere.
- yoel roth
-
I wake up one morning, the third day that my husband and I are in our new home, to my phone exploding because Kellyanne Conway has just talked about me on Fox News and has said that I’m responsible for the censorship of the president’s account and am responsible for censorship at Twitter more generally. And in that moment, everything exploded.
- archived recording (donald trump)
-
Thank you very much. We’re here today to defend free speech from one of the gravest dangers —
- yoel roth
-
The president held up a copy of “The New York Post” with me on it in the Oval Office as he announced an executive order restricting censorship by Silicon Valley companies.
- archived recording (donald trump)
-
His name is Yoel Roth, and he’s the one that said that mail-in balloting — you look — mail-in, no fraud? No fraud? Really?
- yoel roth
-
And for weeks, discussion of me and my political opinions and my beliefs became a symbol of everything that was allegedly wrong with Silicon Valley and with the decisions that companies have made.
Twitter had to hire security to protect Yoel and his husband. It had all taken him by surprise. He’d expected the criticism but not that he would be the target. In cases like this, people would usually come after the CEO or the company itself. But soon, Yoel realized that what his harassers were doing was much more effective. If you make companies believe that their employees could be hurt for enforcing the rules, they might be more reluctant to enforce them.
Twitter didn’t stop though. They kept putting labels on his tweets. And Trump, of course, lost the election. Though that’s probably not how he would describe what happened. And after the January 6 attack on the Capitol, he lost his Twitter account too. Yoel did not press the button on that one, but here’s a detail about that day that I love.
- yoel roth
-
Yeah, there is a technical question about whether it would work or whether Twitter would crash.
Can you actually ban Donald Trump’s account, or is he so —
- yoel roth
-
Banning somebody with that many followers is actually technically very complicated, right? When you suspend somebody, Twitter’s systems have to figure out what to do with all of the people who followed them. And —
In other words, if you follow Trump, Twitter has to remove him from your list of follows.
- yoel roth
-
Which sounds very straightforward. But when you have to do that tens of millions of times immediately, we had to think about, if we push this button, is the site going to go down?
As it turned out, the site stayed up, and Trump was banned, for a while anyway. It was such a strange moment. With a click of a mouse, Twitter had managed to do something that Congress attempted twice and failed, to punish Donald Trump in a way that had real and immediate consequences for him.
[MUSIC PLAYING]
Trump headed off to Mar-a-Lago. Yoel got promoted. He was running the whole department. And that’s when another mouthy, rich guy started to complain about all the rules on Twitter. That guy was Elon Musk. In April 2022, Musk announced he’d acquired a big stake in the company. A few days after that, he announced his intention to buy it outright.
As soon as the news broke, Yoel’s employees started asking what it meant for them. Elon had been tweeting a lot about free speech and his feeling that Twitter didn’t have enough of it. He posted a photo of six people in dark robes with the caption, “shadow ban council reviewing tweet,” and, “Truth Social exists because Twitter censored free speech.” Also stuff like, “next, I’m going to buy Coca-Cola and put the cocaine back in,” and, “let’s make Twitter maximum fun.”
Some employees working in trust and safety worried that maximum fun might mean Elon would dismantle their whole operation. Yoel was willing to give him a chance though.
- yoel roth
-
What I told them and what I sincerely believed was, it’s too soon to tell. People are frequently caricatured and villainized in the media. Certainly I was. And that’s not a reflection of who they actually are, and so don’t prejudge.
At the same time, Yoel knew that his more concerned employees might be right, that he was aboard a ship that might be about to sink. He knew he needed to be alert for the signs. His solution was to make a list, to write down the red lines that he would not cross no matter what. Most days, his job was to enforce other people’s rules. But with Elon coming in, he wanted to write down some rules for himself.
- yoel roth
-
You have to have written policies and procedures so that, when the moment comes to make that decision, you just follow the procedure that you had laid out before.
Your whole job was about trying to not make decisions out of impulse and emotion but by following a playbook. And that meant that, before Elon took over, you actually had to give yourself a playbook.
- yoel roth
-
That’s right.
And so on a notepad by his desk at his house, he wrote down his red lines. I will not break the law. I will not lie for him. I will not undermine the integrity of an election. By the way, if you ever find yourself making a list like this, your job is insane. Then Yoel wrote down one more rule.
- yoel roth
-
This was a big one. I will not take arbitrary or unilateral content moderation action.
So if Elon came up to you and just said, ban this person, you weren’t going to do that.
- yoel roth
-
That was the limit.
Did people on your team show you the lists that they were making too or talk to you about them?
- yoel roth
-
We did. [MUSIC PLAYING]
Yoel’s list of rules got its first test pretty quickly on the day Elon officially took over Twitter. It was the end of October. Lawyers were finalizing paperwork, and Twitter staff was attempting to enjoy the annual company Halloween party. The scene was surreal. Were you there for the Halloween party?
- yoel roth
-
I was.
Were you dressed up?
- yoel roth
-
I was not.
Lots of people did dress up though. Employees brought their kids. There were balloons and face painting. I’ve talked to so many people who went to this party, and every one of them has added some bizarre new detail. Some people saw a guy dressed as a scarecrow walking around with what appeared to be a handler. They wondered if it was Musk. It turned out to be a hired performer.
- yoel roth
-
As the Halloween party had started, I was sitting in a conference room doing some work. And we start hearing rumors that, not only has the deal closed, but also, the company’s executives have been fired. And at first, it’s unconfirmed. I get texts from a couple of reporters who ask me, is it true that Vijaya has been fired? And I said, no, I just saw her. She’s still online in the company’s Slack and Gmail. Of course not. Your sources are lying to you. And then it was true.
Such an important lesson. Always trust the reporters.
[MUSIC PLAYING]
Pretty soon afterward, Yoel gets summoned over to the part of headquarters where Elon and his team had set up shop. He was nervous.
- yoel roth
-
And I thought, OK, I’m about to be fired. So I walk past a number of my employees, and I don’t let on that any of this is happening, because I don’t want to panic them because they’re there with their kids. And so I smile and make jokes about Halloween costumes and walk over to this other part of the office, where I’m met by somebody who, I gather, works for Elon Musk in some capacity, but they don’t introduce themselves. They just say, how do I get access to Twitter’s internal content moderation systems?
And I pause and blink, and say, you don’t. That’s not going to happen. I explained that Twitter is operating under an FTC consent decree, that access to internal systems is regarded as highly sensitive, and that there are both legal and policy reasons why we simply couldn’t grant access to somebody.
Elon’s aide explains that they’re worried about an insider threat, someone who might try to sabotage the site on their way out. Yoel tells them, sure, I can help with that. He explains some steps they can take to protect the company. And to Yoel’s surprise, the aide says, OK, you’re going to tell that to Elon. And then he leaves and comes back with Elon Musk.
- yoel roth
-
Who, at this point, I’ve seen on the internet but I had not met in person. So Elon sits down and asks, well, let me see our tools, our tools. He owns the company at this point. And so I show him his own account in Twitter’s set of enforcement tools. And I explain to him what the basic capabilities are. And then I make a recommendation to him of what I think Twitter should do to prevent insider misuse of tools during the corporate transition.
Yoel also had recommendations about the midterms and the upcoming presidential election in Brazil.
- yoel roth
-
And as I start to explain some of the rationale related to the Brazilian election, Elon interrupts me and says, yes, Brazil, Bolsonaro and Lula, very dangerous. We need to protect that. And I was floored. I came into that conversation expecting him to fire me. And instead, he jumps ahead of me to say that he is sensitive to the risks of offline violence in the context of the Brazilian election and wants to make sure that we don’t interrupt Twitter’s content moderation capabilities. It was like a dream come true.
You’re thinking, maybe I’m actually aligned with this person.
- yoel roth
-
Yes. [MUSIC PLAYING]
And so Yoel stayed. He was surprised in a good way. On Twitter, Elon talked about the company as if it should barely have any rules at all. But in that moment, one on one, Yoel thought he might turn out to be more reasonable. Maybe spending some time inside the company would show Elon the real value of those rules, which is that without them you lose your users. And you lose your advertisers. And Yoel felt like Elon could be sensible. One of his first requests was to restore the account of the Babylon Bee, a right-wing satire site. But Yoel explained how it had broken Twitter’s rules, and Elon backed off.
- yoel roth
-
I found him to be funny. I found him to be reasonable. I found that he responded well to having evidence-backed recommendations be put in front of him. And I, for a moment, felt that it might be possible for Twitter’s trust and safety work to not just continue but also to get better.
After that, things began to move really quickly. About a week later, Elon laid off half the staff. Suddenly, Yoel was one of the highest ranking employees from the old Twitter who was still working at the new one. And it seemed like Elon liked him. After some trolls went after Yoel for some of his old tweets, Elon tweeted that he supported him.
[MUSIC PLAYING]
The US midterm elections took place mostly without incident. Same for the election in Brazil. Elon kept pushing his teams to move faster, even as he was laying them off. At first, Yoel said that Twitter still had enough content moderators to keep the site safe. But the cuts kept coming, and the work got harder and harder. Soon, Elon unveiled his first big idea for making lots of money and recouping the $44 billion he had spent to buy the company. Yoel and his team thought it was insane. The plan? To let anyone get a blue verified badge for their profile for $8 a month. The company called it Twitter Blue.
The risks seemed obvious. People would just make new accounts to impersonate brands and politicians and other celebrities. Yoel and his team wrote a seven-page document outlining the risks. But the badges went on sale anyway. And almost immediately, impersonators started buying them and wreaking havoc. In maybe the most famous case, someone impersonated the drugmaker Eli Lilly and said that insulin would now be free. The real Eli Lilly’s stock price dropped more than 5 percent. It was a vivid illustration of why companies like Twitter make rules in the first place. Impersonators were suddenly all over the site.
- yoel roth
-
And so, OK, we have to ban them. But somebody has to review them. We can’t just ban everyone. And so you do that with content moderators. And we had instructions to fire more of our contract content moderation staff to cut costs.
All of this seems really self-evident to me, and I think it would have seemed self-evident even before you launched this. What was Elon’s take on this? How did he respond to you raising these concerns?
- yoel roth
-
Do it anyway. And that was a breaking point for me.
We reached out to Twitter for comment but didn’t hear back. Reporters have sometimes gotten automated poop emojis, but I didn’t even get that.
[MUSIC PLAYING]
Yoel had spent a long time gaming out scenarios for what might make him leave Twitter. He made that whole list. He wouldn’t break the law for Elon. He wouldn’t undermine an election. But ultimately, what got to him was something he didn’t foresee. It wasn’t on the list. It was something more personal. He knew this bizarre plan wouldn’t just make people lose trust in Twitter. They would lose trust in him.
- yoel roth
-
Behind Elon Musk, I was the most prominent representative of the company, period. And I became aware that when Twitter Blue turned into the predictable hot mess that it was, that people would ask, why didn’t the trust and safety teams see this coming? Yoel, why are you so bad at your job?
The day after the launch, Yoel and Elon got on the phone. Elon thought the problem could be fixed if Apple would just hand over all the credit card information of the people doing the impersonations. Yoel had to explain that Apple would never do that. He also asked Elon to slow down the rollout of Blue so that they would have time to hire and train more content moderators to look for impersonators. Elon didn’t understand why that would take longer than a day.
- yoel roth
-
I got off that phone call and thought, I can’t solve this problem. I will spend the rest of my time at this company trying to bail out a ship that might sink more slowly because I’m there bailing it out. But I don’t want to spend the rest of my life bailing out a sinking ship.
Yoel had made up his mind to leave. He called a couple of his employees to let them know.
- yoel roth
-
I knew that that day I did not want to be walked out of Twitter after almost eight years by corporate security. I wanted to leave on my own terms. And there was an all-hands going on at the time, Elon’s first time addressing the company in person. And during that all-hands meeting, I hit send on my resignation email, put my laptop in my bag, and walked out of the building for the last time.
Did you purposefully send it when you knew he was on stage?
- yoel roth
-
Yes, absolutely. I knew that it would take some time for the HR team to see it and process it, for that to get to him, for him to react to it. And in that time, I knew I wanted to be back at home and not be in the office.
Was it a long email?
- yoel roth
-
It was one sentence.
What was the sentence?
- yoel roth
-
I am no longer able to perform the responsibilities of my job and resign it as of today at 5:00 PM.
[MUSIC PLAYING]
- yoel roth
-
I remember feeling two things. On one hand, I felt relieved. And then I also just felt deeply sad. I just wanted to get home.
Yeah.
- yoel roth
-
So I left Twitter’s garage and was driving. And I was about halfway across the Bay Bridge when I think Zoe broke the news that I left Twitter, and my phone exploded.
What? You didn’t even get across the bridge before Zoe broke the news? God, I love her. Zoe is my coworker. I’m immensely proud of her, even if she did kind of mess up Yoel’s plans. The car Yoel was driving that day was a Tesla, by the way. He was leasing it. He’d been trying to return it but couldn’t get anyone to respond to him. Maybe they’d all been drafted to work at Twitter.
[MUSIC PLAYING]
Yoel laid low for a few days. He spent some time writing and published an op-ed in “The New York Times.” It explained, in a very dry and principled way, why he’d left. That’s when some rando account reshared something Yoel had tweeted in 2010 about relationships between adults and minors. Around that time, he’d been working on his dissertation, which called for tech companies to do more to protect minors at gay hookup sites like Grindr.
But Elon replied with a tweet, quote, “This explains a lot.” Then he linked to Yoel’s dissertation. Quote, “looks like Yoel is arguing in favor of children being able to access adult internet services in his PhD thesis.” Not true, but Yoel’s phone exploded with abusive messages. It made the backlash to labeling a Trump tweet look minor by comparison.
- yoel roth
-
Hundreds of messages per hour, homophobic, anti-Semitic, and also violent, just deeply, endlessly violent. And he only had to tweet once. He didn’t even have to say directly, Yoel is a pedophile. He just had to wink and nod in that direction, and people took his lead.
[MUSIC PLAYING]
When Yoel had first used the internet, it felt like a small, self-contained space, separate from what we used to call real life. But by the time Yoel quit Twitter, the distinction between online and off had collapsed. And it had collapsed in large part because of the company he worked at, Twitter. The site brought together so many of the world’s most influential people and then pitted them against each other in these all-consuming daily battles. And the anger coming out of that could drive people to do things, violent things. Pretty soon, Yoel and his husband were overwhelmed with death threats.
- yoel roth
-
My husband turned to me one day and said, I’ve seen you through a lot of being targeted and being harassed. I’ve never seen you look scared before. And that was the moment that we decided to leave our home.
And so, once again, they moved. I met with Yoel at the temporary house that he and his husband are staying at while they look for a new place. After all this, I thought Yoel might want a different kind of job. I would have wanted a different kind of job. The internet had almost killed him, or threatened to anyway. But still, somehow, he’s optimistic about what the internet could be in a way you almost never hear anymore.
- yoel roth
-
I love the internet. I really do. I think the internet’s power to bring people together and help folks all over the world find connections that matter to them is magical and is one of humanity’s greatest achievements. I also think the internet can be incredibly dangerous and scary. And the work of trust and safety is trying to push that back a little bit and to make the internet more of what it can be and less of the dangers of what it could turn into.
Yoel’s idealism about the internet feels radical given how destabilizing it’s been, how destabilizing Twitter has been. But I know what he means. Back when he was a teenager, the internet gave Yoel a place to discover other gay people, the chance to talk to everyone in the world instantaneously. It gave him a career. It gave me all those things too. I remember life before the internet. It was a less frantic time, but it was also a lonelier one.
[MUSIC PLAYING]
Here’s how Twitter is doing since Yoel left. Hate speech is on the rise. Advertisers have fled. Banks that funded Musk’s takeover have marked their investments down by more than half. Musk himself has warned repeatedly that the site might go bankrupt. I kind of hope it does because what’s happening at Twitter right now is teaching us a lesson it’s taken us way too long to learn. The people like Yoel, they’re not the enemies of free speech online. They’re the ones who make it possible.
If you get any value out of social media at all, it’s in part because of them. They clean the place up, make it feel good to be there. They pull us back when we go too far, and they do censor us. And of course, we hate them for it. We convince ourselves we’d do a much better job if it were us. That’s what Elon thought. Look what happened. Nobody likes the guy enforcing the rules, but watching Twitter sink into the ocean, you can’t help but notice how much you miss that guy when he’s gone.
[MUSIC PLAYING]
We’ll be right back.
[MUSIC PLAYING]
So Casey, a, congratulations on that story.
Thank you. Loved it as much the second time as I did the first. And listeners should know, he did not actually listen to it again. We are recording this before they insert that story. So you just heard Kevin Roose lie to you.
You don’t know that.
On the air.
I was up last night in my hotel room listening to your “This American Life” episode.
Well, thank you then.
I wasn’t, actually.
OK.
I was watching “Diners, Drive-ins, and Dives” on my hotel TV.
I knew it. Guy Fieri is hard to compete with.
You’ve got the hair, kind of.
I do.
It’s Fieriesque.
So Casey, let’s bring this story into the present. Have you talked to Yoel Roth since the story ran? And what’s he up to now?
Well, we have messaged a bunch, and he is doing some stuff in the academic realm. So Yoel is currently a technology policy fellow at the University of California at Berkeley and is a nonresident scholar at the Carnegie Endowment for International Peace. So I think it’s safe to say his interests are still very much focused on trust and safety. And he’s going to continue to be a player in that world.
And let’s also update our Twitter conversation because a lot has been happening at Twitter since you started working on this story. So Twitter announced a new CEO, Linda Yaccarino, who was previously the advertising chief at NBC Universal. Elon Musk announced her appointment and also announced that he will be the CTO, that’s chief technology officer, of Twitter going forward. What did you make of this announcement?
Well, I think the most important thing to remember about this is that the title now held by Linda Yaccarino was previously held by Elon Musk’s dog Floki. During a recent interview, when an interviewer was trying to press Elon on being the CEO, he said, oh, I’m not CEO. My dog is CEO, and nothing gets past him.
Well, those are big shoes to fill because Floki was an operational genius.
Yeah. Well, I think the company’s actual revenues might disagree with that statement. But the point is, when Elon came in and said I have hired a new CEO, I was just very skeptical for a few reasons. One is this is not a job that he has previously placed a lot of importance on. Two, he said he wants to continue to manage the product.
And so what he’s essentially done is bring in someone to run the ad business, which he spent the past six months undermining, right? That was the confusing part to me is he has said before that he hates advertising, that he doesn’t want Twitter to be an ad-based business, that he wants to pivot to subscriptions and other revenue streams. And then he chooses as his new CEO someone who is steeped in the world of advertising. What did you make of that?
Well, I think that tells you how well Elon Musk’s subscription business is going for him, right? If that thing were taking off, I don’t think that he would feel the need to bring in an ads chief. But it’s not working, and so he’s turning back to that. And I like to remind people that, before he took over, Twitter was a $5 billion business. And the vast majority of that was advertising. Some significant percentage — we don’t know exactly how much of that — has now gone away. And so now he’s going to try to build it back. But wow, those relationships are going to be super tough to repair, I think.
And what do we know about Linda Yaccarino?
Well, she was a longtime ads chief at NBC Universal and is well-known and really well liked by advertisers. Interestingly, she spent a lot of time in her previous job telling them that social media was not a safe place to advertise. And if you really wanted to be safe for your brand, you should advertise on TV. So she will now presumably be singing a different tune at Elon Musk’s Twitter.
Right, and do we know anything more about how Twitter’s approach to content moderation is changing or may change in response to this collapse of its advertising business?
Well, I don’t know what their plans for content moderation are going forward. But a lot of people noticed recently that they were doing very innocuous searches, and they started to see videos of animal abuse and cruelty, which unfortunately, on any social platform, bad people will just upload that anywhere. And you have to put systems in place to catch it. The fact that Twitter either didn’t have them or those systems started to break raised a lot of questions in people’s minds about how seriously they’re taking this. So Linda Yaccarino is really going to have her work cut out for her, I think, in making that platform safe for advertisers.
Got it. Thank you.
It’s my pleasure. [MUSIC PLAYING] Please, never leave me for Ira Glass again.
“Hard Fork” is produced by Rachel Cohn and Davis Land. We’re edited by Jen Poyant. This episode was fact checked by Caitlin Love, who I met in person for the first time today. Lovely.
The Queen of Facts.
Yeah, she did not fact check me during our conversation.
Well, you must have gotten it right. Today’s show was engineered by Alyssa Moxley, original music by Dan Powell, Elisheba Ittoop, Marion Lozano, and Rowan Niemisto. Special thanks to Paula Szuchman, Ira Glass, David Kestenbaum, Christopher Swetala, Nell Gollogly, Pui-Wing Tam, Kate Lopresti, Jeffrey Miranda, Prince Harry, Barack Obama. I just thought I should keep listing famous — Ira Glass in the credits really got me feeling some kind of way.
LeBron James, Beyoncé.
All of whom helped with this week’s episode. You can email us at [email protected].
[MUSIC PLAYING]