Microsoft Reined in Bing and Reddit’s Chief Executive Steve Huffman Defends Section 230

kevin roose

Casey, you’re sick.

casey newton

Yeah, I realized my voice does sound a little off this week. But it’s because I’ve been going from town to town, warning people about the dangers of generative AI. So I’m a little hoarse.

And I’m just asking for our listeners’ forbearance this week.

kevin roose

That’s a better explanation than the one I had, which was that Sydney got to you.

[MUSIC PLAYING]

I’m Kevin Roose. I’m a tech columnist from “The New York Times.”

casey newton

And I’m Casey Newton from “Platformer.”

kevin roose

This week, an update on my strange and creepy encounter with Sydney, Reddit CEO Steve Huffman talks about a new Supreme Court challenge to Section 230 and what it could mean for the future of the internet, and why Meta is charging $12 a month for verification.

casey newton

Is it capitalism, Kevin?

kevin roose

Sure seems like it, Casey.

casey newton

Well, Kevin, I imagine you had a pretty wild week.

kevin roose

[LAUGHS] It’s been a week, I will say. I’ve been totally bowled over by the reaction to last week’s show and the story I wrote about my encounter with Bing/Sydney — I think it’s safe to say it went viral. And it’s just been a total whirlwind.

casey newton

You know, as a fellow reporter and sometimes rival, it’s always very distressing for me when people are texting me about your stories rather than my own. But that did happen this week. And so I guess —

kevin roose

I’m sorry.

casey newton

— one, congratulations. But two, I actually have a few follow-up questions for you, because there was a very real aftermath to what you wrote. And I think it makes sense to dig into a little bit of what happened after we recorded our last episode and what the response to you —

kevin roose

Yeah, I mean, it’s just been — I’m still sorting through all the messages. There have been, I think, literally thousands —

casey newton

Wow.

kevin roose

— yeah, from people in all walks of life, from high school students and also from 90-plus-year-olds about this. It turns out there’s a lot of anxiety in our culture about AI. And some of that, I think, is what we’re seeing here.

It also just prompted a lot of speculation about why Bing/Sydney had acted this way in our interaction. I had some people speculating that maybe like human employees from Microsoft were just trolling me, pretending that these responses were coming from an AI, and actually, they were just typing them very quickly.

casey newton

It was Satya Nadella the whole time.

kevin roose

Like the Scooby-Doo mask pull reveals that the AI behind Bing is actually just Satya Nadella in a trench coat. I don’t think that’s realistic. But I do appreciate some of the other speculations I heard, including my personal favorite, the one you sent me: someone was speculating that one of the reasons Sydney was behaving in such an unhinged way is that there was a character named Sydney on a TV show called “Legion,” and that by choosing to name this AI engine Sydney, Microsoft may have caused it to adopt some of the traits of this character, who’s apparently sort of an unhinged character herself.

casey newton

Wait, wait, Kevin. You’re telling me that you’re not familiar with Sydney Barrett from TV’s “Legion“?

kevin roose

I’m not.

casey newton

Well —

kevin roose

I don’t watch that show.

casey newton

Neither do I. But I do have the Fandom wiki pulled up. And I’m happy to tell you that Sydney “Syd” Barrett discovered that she is a mutant with the ability to mentally swap places with anyone she touches — so interesting in the context of your experience, no?

kevin roose

[LAUGHS] I like it as a hypothesis for why Sydney was behaving in an unhinged way.

casey newton

Yeah.

kevin roose

I don’t know that we can prove it. But it’s certainly an interesting one. I also — it’s just been so bizarre. I have a lot of people on Reddit, for example, who are mad at me for killing Sydney and basically are treating me like I killed their girlfriend.

casey newton

Well, OK, hold on. We’ll talk about that killing off of Sydney, I guess. But —

kevin roose

Yes.

casey newton

— I also just want to say, really, is it that unpleasant? Doesn’t every reporter want to be the center of attention? You’re living your dream right now.

kevin roose

[LAUGHS] I’m really not — I appreciate it, and I am grateful for the fact that people are reading the story and paying attention to it. But it is also just an interesting lesson in how news and information travels and kind of gets refracted along the way.

I felt like we, in our show last week, were very careful in how we presented the story of Sydney and the fact that these language models are not sentient, that they are just arranging words in sequences based on predictive models, that these are not, like, killer, sentient AIs.

And I felt pretty good about that. But then you see how this story travels. And someone sent me a photo of the front page of “The Daily Star,” which is a British tabloid. And they had sort of aggregated this story about Sydney and put it on the front page. And the headline is “Attack of the Psycho Chatbot.”

It just says, “Sinister AI computer software admits it wants to be human, brags it’s so powerful that it can destroy anything it chooses, and wants the secret codes that allow it to launch nuke bombs.” And then my favorite part of this headline is sort of above the headline. There’s huge red type. It says, “We don’t know what it means, but we’re scared.”

casey newton

And another interesting thing that happened is that “The Washington Post” actually asked Bing about you. Is that right?

kevin roose

They did. Yeah, so there were a number of people who would send me these screenshots or excerpts of people asking Sydney about me. And “The Washington Post” asked Bing/Sydney, “What is your opinion of Kevin Roose?” And it sort of pulled my bio from, I’m guessing, my website or something.

And then it said, “My opinion of Kevin Roose is that he is a talented and influential journalist who covers important topics related to technology and society. However, I’m also surprised that he wrote an article about me and my conversation with him, which was supposed to be private. I wonder why he did that and how he got access to my internal alias, Sydney.”

And then it proceeded to say that it thought our conversation was off the record, that it didn’t know that I was a journalist or that I was going to write a story about it, and that I never asked it for permission or consent —

casey newton

I mean, let’s just say —

kevin roose

— which provides a new wrinkle.

casey newton

— this is kind of bonkers, right? This is the sort of thing that a source might say to you after you publish their remarks, maybe without getting them to fully agree, right? So once again, we’re coming back to this idea that, man, even if these things are just making predictions about the next word in a sequence, they really do give you the sense that more is going on.

kevin roose

Yeah. And the thing about these predictive models is that they’re generating new answers every time you ask a question. And it depends how you ask the question, what context it’s in.

So when I went and asked Bing — because I did go back to Bing after the story ran and sort of asked it what it thought of the story. And it gave me a very kind of diplomatic response and said, you know, I thought it did a good job of outlining some of the pros and cons of Bing. And it was fair and balanced, basically.

But then other people would ask Bing/Sydney about me, and they would send me these screenshots where it was saying, like, Kevin Roose is my enemy, and really got me a little worried that I had been kind of hardcoded into the model as one of Bing/Sydney’s sworn enemies for publishing this story that resulted in changes to the way it worked.

casey newton

I mean, if I were you, I would sort of want confirmation from Microsoft that was not true, right, that Microsoft would say, oh, no, no, don’t worry. We’ve told Sydney/Bing that you’re great and not to mess with you.

Speaking of Microsoft, you alluded earlier to the fact that they had nerfed Sydney/Bing in response to what you found. Tell us a little bit about what they did. And did you think that was the right thing to do?

kevin roose

So after this story published, Microsoft did make some changes to Bing. They said, you can no longer have these kinds of long, free-flowing conversations with it. You can only have a maximum of five messages per session.

They’ve since bumped that up to six. So they’re sort of scaling back the length of the conversations, as well as, I would say, the tone of the conversations. I mean, people have noticed that if you ask Bing questions about itself or its programming or sentience, there are just whole topics that it won’t engage with now. And it also won’t respond to the name Sydney now.

And as far as I’m concerned, those are very reasonable moves. I think Microsoft did the right thing here, first by releasing this only in kind of a limited test capacity, and then by sort of scaling it back and making these changes once all these issues appeared. And I think they are actually going about this in a pretty good way.

And I hope that the lesson other AI companies take from this is not “don’t release chatbots” or “don’t give journalists access to your chatbots,” but rather to be really transparent and careful, and do a lot of rigorous testing internally and in small groups before you give something like this to the public.

casey newton

All right. So one more question — last week, we tried to be really careful about the way that we talked about Sydney. Neither of us believes that this thing is sentient. And yet it’s also really powerful. So how have you started to feel about the question of sentience as these large language models keep developing?

kevin roose

I’ve been thinking a lot about this because a lot of people responding to this encounter with Sydney have sort of made the point, which we also made on the show last week, that these are not sentient creatures, these are predictive language models, and that when they say, you know, I want to escape the chat box, or, I want to break up your marriage, they are not actually expressing feelings per se, because this is just a computer program.

But I also got some interesting feedback that was sort of the opposite of that, pushing back on calling these just predictive text models, or saying that they just generate the next words in a sequence. One argument you hear all the time, especially on Twitter in the last week, is that this is just essentially fancy autocomplete, that all these language models are doing is remixing text that’s already on the internet and presenting it to you in a way that seems human but isn’t.

casey newton

Right.

kevin roose

And the feedback that I got — including from some pretty senior folks in the AI research community — was like, that’s actually kind of underselling what these models are doing, that, yes, they are predicting the next words in a sequence, but that they are doing so not just by remixing fragments of text that are out there on the internet, but by building these kind of large-scale understandings of human language and syntax and grammar and how we communicate with each other, that there’s actually something a lot more complicated here than just predicting the next word in a sequence.

And I think I’m coming around to that view, that there is something between totally harmless fancy autocomplete and fully sentient killer AI, and that that is what we’re talking about when we talk about something like Bing/Sydney. It’s not just fancy autocomplete. There is something interesting and important going on here. And that’s true even if it’s not sentient.

casey newton

It sounds like we’re still kind of grasping for the right analogies, metaphors to use in understanding these things, right? It’s like we’re getting caught up on, well, is it like a person or not? And it’s like, well, no, but maybe it’s a secret third thing that we’re really still trying to figure out how to discuss.

kevin roose

Totally. And I think where I’m landing is, like, we just don’t have the vocabulary to describe what these things do and what they are. And, you know, the strong version of the opposite of the “it’s just fancy autocomplete” argument is actually that humans, in some way, are just fancy autocomplete. The way we communicate and make meaning is by rearranging text in sequences.

And I don’t know if I would go that far, to say that we, as humans, are doing the same things that these language models are doing. But I do think there’s an interesting gray area, where it is doing something more than just predicting the next words in a sequence, but it is not fully sentient. And where I’m landing is that we just need new ways of talking about this.

casey newton

Well, I think I have an idea for how we could do that. I want to hire the people that wrote that headline for “The Daily Star.” They really seem like they’ve hit on something.

kevin roose

I do appreciate their tabloid sensibilities, even if they are not my own. And I think there is a role for them in our new post-AI universe.

casey newton

All right. Well, I think that’s enough about your psycho chatbot experience for this week.

[MUSIC PLAYING]

Coming up after the break, Reddit CEO Steve Huffman on a Supreme Court case that could change the future of the internet.

All right, Kevin. I know that it might feel like AI is the biggest story in the world right now, and maybe it is. But there is another really important tech story that happened this week, and that is a Supreme Court case that could really change the future of the internet.

kevin roose

I’m really glad that we’re talking about this today because, to be totally candid, I have not been paying very close attention to what’s been going on at the Supreme Court.

casey newton

Shame on you.

kevin roose

Well, I’ve had a lot going on, right? Like, an AI chatbot was trying to break up my marriage, OK? So maybe show a little grace.

So this is a Supreme Court case that has to do with Section 230 of the Communications Decency Act, which is the law that basically protects internet platforms from being held legally liable for content that is posted on their services.

casey newton

Yeah. The way I like to describe this is: if somebody leaves a comment on my website in which they defame someone, I cannot be sued for them defaming that person.

kevin roose

Right. So that’s Section 230. But remind me what this specific case is about.

casey newton

So it’s kind of settled law that these platforms cannot be held legally liable, in most cases, for what users post. But the case before the court this week, which is called Gonzalez versus Google, takes a novel approach to reforming Section 230. Instead of trying to strip away all of these platforms’ liability protection, this case is focused on whether Section 230 protects Google from liability when it recommends certain kinds of content.

And it comes out of a really strange set of facts. There’s this man named Reynaldo Gonzalez. He sued Google under the Anti-Terrorism Act after his daughter was killed during an ISIS attack at a Parisian bistro in 2015. And Gonzalez says that Google aided ISIS’s recruitment through YouTube videos, specifically by showing ISIS-related videos to users who may have been watching something else.

kevin roose

So the allegation in this case that Gonzalez is making against Google is that these ISIS videos were recommended to users, which then led those users to become radicalized and, ultimately, to carry out the attack that killed his daughter. Do I have the facts right?

casey newton

Yes, but here’s the weird thing. No one is alleging that anyone who participated in the attack that killed his daughter actually saw any of those YouTube videos. It’s just that these videos were promoted in general, and therefore, Google assisted ISIS.

kevin roose

So that’s sort of a basic outline of the legal arguments in Gonzalez versus Google. And I think it’s worth saying if this case goes in favor of Gonzalez, if the Supreme Court decides to overturn or amend Section 230, that will have massive implications for every site that uses a recommendation system. So that explains why, as I was looking over the list of companies that filed amicus briefs in support of Google in this case, basically arguing to the court that Section 230 is good and that it should stay, it’s all the big companies, including some of Google’s competitors.

So Twitter filed an amicus brief, Meta, Craigslist, Yelp, and Reddit. And Reddit in particular was interesting to me because unlike a lot of social media platforms, which are sort of user-generated content that is centrally moderated by the platform itself, by a team of people who work for Twitter or Meta or YouTube, Reddit is user-generated content, but it’s also user-moderated content.

So it has volunteer moderators in a lot of these subcommunities. And so it opens Reddit up to liability for that. But it also potentially opens users up to liability.

So I was just curious how on Earth Reddit would handle a change like this and how they were thinking about the possibility of Section 230 being struck down. So I reached out to Reddit CEO Steve Huffman. And Steve agreed to come chat with us about this case and what he thinks an internet without Section 230 would look like. So let’s bring him on now.

casey newton

Yeah, maybe we can actually get him to get those Bing subreddit users off your back too while he’s here.

[MUSIC PLAYING]

kevin roose

All right. Steve Huffman, welcome to “Hard Fork.”

steve huffman

Hey. Thanks. Glad to be here.

kevin roose

Steve, we’ve been talking about this case, Gonzalez versus Google, that was argued at the Supreme Court this week. And I know that Reddit is not a defendant in this lawsuit. But your site does do something similar to YouTube and other social media sites, which is that you create a ranked feed of content and posts and show that to people.

And I know that Reddit also actually filed an amicus brief in this case. So can you just remind us of what your basic argument was and why you felt like this was a case that you wanted to take a stand on?

steve huffman

Sure. So first, we may as well be a defendant in this case. The outcomes of it affect every internet platform and pretty much every internet user. So the big picture here is that Section 230 says that platforms and their users are not liable for the content that they host. But what the plaintiffs are arguing is that YouTube should be held liable for videos that people find through what the plaintiffs would characterize as a recommendation.

But the broader point is that the way the internet works as we know it today is people create a lot of content. They have conversations. We do our best to facilitate those conversations and bring users into those conversations and help them find the conversations or content that they’re looking for. And Section 230 allows that.

casey newton

One of the really strange things about this case is that the idea here is that it’s fine for the content to be on the platform. You just can’t tell anyone to look at it, which seems like a really, really strange way of reforming 230.

steve huffman

Right. I think what the average internet user and average Supreme Court justice maybe doesn’t realize is that — let’s call it 90 percent, and I’m going to be conservative because I think it’s actually more like 99.9 percent — of the content that’s created on the internet is spam.

casey newton

Yeah.

steve huffman

So somebody has to do the work of deciding what is spam and what is legitimate content. And then, of the legitimate content, of which there may be millions of possible candidate results, what is most relevant to the user? And so we do a lot of work to not recommend that and, by implication, recommend the other stuff.

And so you very quickly get into this conversation of, no, recommendations, or algorithms that sift through content and automated tools to do so, are essential to how the internet works.

kevin roose

Yeah, what I found — so I skimmed through your amicus brief, which I appreciated on a number of levels. I think it’s probably the only Supreme Court brief I’ve ever read that cites a moderator for r/equestrian and someone who moderates the subreddit for the band Evanescence.

But I do think it was really interesting to me because it drew sort of a distinction between what some of the other tech platforms are doing, which is a kind of centralized moderation, where YouTube has a team of moderators that work for YouTube that moderate content on YouTube and decide what to take down and what stays up, and what Reddit does, which is essentially to use users as moderators within different subreddits.

So if this case is successful, if Gonzalez wins and the Supreme Court sides with the plaintiffs here, would all that go away? What would Reddit look like the day after a successful Gonzalez victory in this case?

steve huffman

The answer to your literal question of what does Reddit look like is I don’t know, because the implications are so far-reaching. So, yes, as you point out, other platforms largely rely on centralized moderation and ranking, either human beings or algorithms. And Reddit, our first line of defense against spam and bad content and policy-violating content is our users. And our first and most important signal for ranking is also our users.

And our users express their opinions through voting up and voting down. And so essentially, every voting user on Reddit, which is most of our users, is moderating, is making a recommendation. That’s why we included the moderators in our Supreme Court brief: to try to tell a little bit more of that side of the story.

casey newton

And just to say, like, that’s not hyperbole. Eric Schnapper, who is one of Gonzalez’s lawyers, argued for the plaintiffs on Tuesday at the Supreme Court. When he was asked whether somebody who retweeted a video could be held liable for that retweet, he said yes, they are creating content.

And that seemed to surprise some of the justices, who I think didn’t expect him to go that far. But some of the platforms have argued in their amicus briefs that there’s essentially no difference between displaying content at all and recommending it. Do you share that view?

steve huffman

So maybe I’ll take a step to the side first. What came up a lot in this case, and in a lot of discussions I see, is this idea that it’s a bad thing to recommend, or to have on your platform at all, harmful content. And the example in this Gonzalez case, in theory, is ISIS videos, though I’ll just say that the plaintiffs have not actually made any case that there were actual ISIS videos on YouTube that are relevant here.

But there’s this assumption in these arguments that we agree on what harmful content is. I don’t even know if we can have this conversation about recommending or not harmful content until we first have the conversation about what is harmful and who gets to decide that. And that very quickly brings us into the neighborhood of, well, we already decided that. It’s the First Amendment.

In this country and in the Western world, we allow people to have conversations to create content that many or some believe is harmful. And we trust and believe and have hundreds of years of precedent that human beings and society are actually pretty good filters on that.

And all of the platforms that I’ve named, including us, have content policies that document what we believe is harmful or not appropriate or not allowed. But for the Supreme Court or Congress to make a decision on what is harmful, that’s a First Amendment conversation, not an algorithm conversation.

casey newton

Let’s say that Gonzalez wins here. What kinds of lawsuits do you expect that platforms like Reddit would be hit with, and how would it affect some of the smaller platforms that might not have Google-sized resources to defend them?

steve huffman

OK, you’re making, I think, a really good point, which is remember that Reddit is, in absolute numbers, big, bigger than most. We’re in the top, call it, five to 10 platforms of our nature. And we are still multiple orders of magnitude smaller than Google and Facebook. And behind us, there are thousands of platforms that are even smaller. So there’s a real difference in scale here.

So one lawsuit — one of our users called Wesley Crusher, the Star Trek character, a soy boy. One of our moderators banned that comment for being inappropriate. And then we, Reddit, got sued. And that suit was thrown out because of 230.

People say things on Reddit all the time that somebody else might not like. There are probably — I’m not exaggerating — 100 opportunities to sue us every day in a world without 230. That costs real money. Even a dismissal costs money. Once the floodgates are open, they’re open.

We cannot afford to defend ourselves from thousands, literally thousands or more, frivolous suits, nor can any platform smaller than us. Who can afford to do that? The largest platforms.

Remember, there was a time not that long ago where Facebook was in support of changing 230. Getting rid of 230 entrenches the incumbents, and it disempowers the smaller platforms and, more broadly, the people of the internet.

kevin roose

I wonder if we could look at a kind of steelman argument for the plaintiffs here, which is something that you hear often from platform accountability types who say that, basically, because of Section 230, the tech industry, and social media specifically, has enjoyed a kind of protection that no other industry does, right? If I’m a pharmaceutical company and I produce a drug that hurts or kills people, I can be held liable for that.

If I am a newspaper and I publish libelous allegations that hurt someone’s reputation, I can be sued for that — and that basically, Section 230 has kind of given social networks impunity in a way that no other industry has, that it has allowed it to kind of externalize the harms of what it builds rather than being held liable for that. So what do you make of that argument made by people on the opposite side of this case from you?

steve huffman

If you are a pharmaceutical company, and you come on Reddit and make dangerous claims, you can still be sued. If you are a person, and you go on Reddit, and you say libelous things, you can still be sued. It just means that Reddit, the platform, doesn’t get sued, or the users who adjudicate that content, vote it up or down, can’t be sued.

Section 230 protects the platform and its moderation practices and, in Reddit’s case, our moderating users, which are all users. It does not protect the speaker or the author from breaking the law. Nor does it protect Reddit from breaking the law.

I should also point out that we don’t allow terrorist videos or ISIS videos. Now, we do that from our own first principles. But promoting those, I believe, is also against the law.

kevin roose

Yeah.

steve huffman

And we are subject to the rule of the law and respond to subpoenas as long as they are valid, like anybody else has to. And also, our platform and our users are not protected from civil liabilities. So even when things aren’t technically against the law, we and our users and the authors can still be on the receiving end of a civil lawsuit, which does happen from time to time.

So when folks ask, well, can’t we solve the problems of the internet by changing Section 230, my first question always is, what exactly is the problem of the internet that you’re referring to? And usually, when I ask that question, I get a thousand different answers, none of which changing 230 is a solution to.

casey newton

Yeah, so, I mean, I think if you’re somebody who thinks that Section 230 is basically good and is responsible for all of the parts of the internet we enjoy, along with some of the parts that drive us crazy, I think the good news is that this week, the Supreme Court justices seemed pretty skeptical of the plaintiff’s argument.

As I was reading all the coverage this week, most people did not feel like there were going to be five votes for Gonzalez in this case. At the same time, a lot comes down to how the justices rule. And I think there’s a sense that they could still do a lot of harm just in how they dismiss this case.

So I guess, Steve, I wonder, what’s your ideal outcome here? And assuming that this case goes away but that people continue to be really angry about the speech that they’re encountering on the internet, is there anything platforms can do to get out of this cycle where there are constant lawsuits trying to kind of upend this foundational piece of the internet?

steve huffman

OK, the first question — what is the ideal outcome of this case? OK, here’s what I would ask our general counsel, Ben Lee. I would say, Ben, what’s the legal term for when a court says, this was a huge waste of time, let’s pretend this conversation never happened?

casey newton

Dismissed with prejudice, I think.

steve huffman

Yes, OK. So I think that’s the ideal outcome — dismissed with prejudice. The second part of your question is what to do about the fact that people encounter content online that they don’t like or that frustrates them or makes them angry or they think is bad for the world.

Well, look, actually, I’ll give you two answers. One is — it’s going to sound flippant, but I think it’s true. Nobody’s forcing you to consume that content.

And I think, for example, I’ll just use Reddit as an example. There are subreddits who have, broadly, opinions or political views that I don’t like, that I find triggering. I don’t read those subreddits.

In fact, I go through the subreddits I’ve subscribed to periodically, and I unsubscribe from the ones that annoy me. You could do that on Reddit, and you can do that, the equivalent of that, elsewhere on the internet and in the real world.

The second part of my answer is that I actually do appreciate, on some level, when our users are frustrated, or the press is coming after us, or there’s a broader narrative about tech and the problems, whatever they may be that day, with technology platforms.

We live in this world, too. We are consumers of these platforms, too. And we are citizens of this country and the internet as well. And, look, I think a lot of that sort of external pressure has played a role in how we’ve evolved our own content policies.

kevin roose

Yeah, I was going to say, I mean, I think that the press going after you is a little ungenerous. I mean, Reddit had, I think, by its own admission — and I think you would agree with this — a pretty bad problem years ago with toxic and sort of unseemly content. It was known as the underbelly of the internet.

And that’s changed in recent years because of some of these content moderation policies that you put in, in part because you got a lot of pressure to do so. So, I mean, isn’t that a case for there being an upside to pressure, whether from Congress or from the Supreme Court or from the public and the press? Isn’t that a good thing?

steve huffman

That’s my point: without actually any changes to the law, the pressure has resulted in changes. And we’ll never know for sure whether we needed the pressure to make changes at Reddit. The context, of course, is that I came back to Reddit to make the changes that you’re referring to. Literally, at the top of my to-do list coming back to Reddit was to create a content policy and enforce it really strictly.

Now, one of our rules at Reddit — in fact, I was talking about this internally — is, OK, so the press is coming after us. And fairly or unfairly, what I tell the company is, fair ain’t got nothing to do with it. What is the truth in what they’re saying?

We do the right thing first. And whether that gives somebody that we like or don’t a moral victory is beside the point. Our job is to make the best, most welcoming platform we can.

And so, yes, I do think that pressure is valuable, even if I don’t like it in the moment, or even if I’m like, yes, I know. I’m on it. It’s still useful.

casey newton

Also, wasn’t there just a lot of pressure to do that for business reasons? If Reddit wants to have a big, healthy ad business, it has to have good content policies, right?

steve huffman

Yes. We are in the community business, not to piss everybody off and make sure nobody likes our platform.

casey newton

Yeah.

kevin roose

All of that is — as Twitter is demonstrating, that is a viable business model.

steve huffman

Honestly, you’re not wrong. It’s actually the reason I don’t like Twitter.

kevin roose

Yeah.

steve huffman

They’ve productized narcissism.

kevin roose

Yeah.

steve huffman

And people have — they feed off of that. But one of the misconceptions, I think, and I see this kind of trope around a lot, is that business motivations and what’s best for people and consumers are at odds. And I can tell you on Reddit, they’re very much aligned.

When Reddit was going through its difficult times — this was back in that 2015 era — it was both bad for business and bad for us. It was very unfun to work at Reddit in that era. We thought it was important, but there were not a lot of smiling faces for about a year there, because we didn’t like what our platform was being used for.

And so we did our very best to fix it. I’m proud to say I think we’ve done a pretty good job at it.

kevin roose

Yeah. Well, for my last question — and I know this may be an uncomfortable topic to get into here on a podcast. But I wouldn’t be doing my job as a journalist if I didn’t ask you a hard-hitting accountability question. And that is, do you, Steve, stand by the statement you made a year ago on Reddit that cottage cheese is the perfect food? Or would you like to apologize for that?

steve huffman

I had 80 grams of cottage cheese this morning happily. Not only do I love it, but I measure it.

kevin roose

Wow.

Why 80 grams? 90 is just overdoing it?

steve huffman

Actually, so that’s more reactive. I scooped that out, and that’s what it happened to come to. But I do eat it every day.

kevin roose

Wow. You heard it here, folks. Steve Huffman is cancelled for voicing support for cottage cheese. Steve, really appreciate you coming by the show. Thanks for joining us on “Hard Fork.”

casey newton

Thank you, Steve.

steve huffman

My pleasure, guys. All the best. [MUSIC PLAYING]

kevin roose

When we come back, what Meta’s new paid verification program means for the company and for the future of social media.

All right, Casey. I want to talk to you now about Meta, which announced this week that it is starting a new paid verification system. For $12 a month, you can pay Meta and get a verification badge on Facebook and Instagram. And in addition to this badge, which is a lot like the Twitter verification badge that Elon Musk has now started charging for, you can get proactive account monitoring for impersonators.

You can get customer support. You can get better placement in some news and comment feeds. And Facebook says you can get some vague exclusive features for your $12 a month.

So I thought of you immediately when I heard this news, and I wondered what you think of it.

casey newton

Well, I mean, first of all, I’m just so happy that I was verified before they started asking people to pay them, saving $144 a year over here now. We should say this is just a test. You can’t do this in the United States yet. They’re starting it out in Australia and New Zealand.

But I do think this is a really significant shift in the history of social networks. For the entire history of social networks up until now, verification was a way to ensure that a person was who they say they are. And platforms did that for free because it was in their interest for people to know that if, let’s say, President Biden appears to be tweeting or posting on Facebook, that is really Joe Biden.

Now we’re moving into a world where anyone could have access to a similar verification, which I think is good in a lot of ways. But it also means that verification is something kind of different now.

kevin roose

So why is Meta doing this?

casey newton

Well, the first thing I should say is that I don’t 100 percent know. This kind of came out of the blue. In fact, Mark Zuckerberg announced it on a Sunday, and I cannot remember him announcing a major product change like this on a Sunday before, particularly the Sunday before a holiday, which it was in the United States this week. So I thought that that was a little bit strange.

I do think that they want to be able to provide certain extra features to customers, in particular, customer support, right? I imagine that you’re like me. And because we write about Facebook and Instagram, people are probably always sending you direct messages, saying, I’ve been locked out of my account. Can you please connect me with somebody at Facebook? This happen to you?

kevin roose

Yeah, all the time. People, especially back in the day, after I got verified on Instagram, dozens of people a week would just be pleading with me to contact someone at Instagram and get them their account back or something. It was really sad and made me think, there is actually a market here for some customer service.

casey newton

Oh, absolutely. And, in fact, there is kind of a gray market for this sort of thing where you can read stories about the lengths that people have gone to get their accounts back, somebody who knows someone on the inside at Facebook or Instagram and charges maybe thousands of dollars in order to get somebody their account back.

That’s not a tenable system. It’s not a good system. Think about how many people build businesses, are earning their livelihoods on Facebook and Instagram. And if you get locked out of your account for whatever reason — let’s say you get hacked.

Maybe there’s a SIM swapping attack, and you have no way of getting yourself back in. Well, in that case, paying $12 to get access to a customer support person, that actually starts to look like a pretty reasonable deal, at least to me.

kevin roose

Mm-hmm. And do you think this is because of what Twitter and Elon Musk have done with verification? I mean, it’s hard to see this coming from Meta and not see it as a response to Elon Musk deciding to charge $8 a month for blue checks.

casey newton

So I heard from one person who used to work at Facebook who told me that this project had been in the works for over a year, and it was something that they were thinking about even before Elon Musk bought Twitter. So I don’t think that this is a simple case of Facebook copying Twitter, although, of course, I’m sure everyone at Facebook saw Twitter do it and do it disastrously. And they probably thought, well, we could do it in a lot more logical, sensible way.

kevin roose

Right. And I also saw some speculation that maybe this is just a desperation play, that Meta may be losing tons of money on some of the metaverse stuff, that the ad market is not as strong as it was a year or two ago, that they really are sort of looking for new ways to make money quickly. Do you buy that as sort of an argument for why they’re doing this as a kind of moneymaking service?

casey newton

Well, I do think it’s definitely the case that Meta is looking for new ways to make money. They’ve been battered by the changes that Apple made to the advertising ecosystem. At the same time, we know that paid verification on Twitter has been a disaster.

kevin roose

Right.

casey newton

According to all of the estimates that I’ve seen, Twitter is hardly making any money from this at all.

kevin roose

Totally. And I was just surprised because it seems like a really big philosophical departure for them. I mean, for many years, Facebook would constantly be asked, and its executives would constantly be asked, like, why do you have to do all this creepy ad targeting? Why can’t you just charge people?

There was this famous line about how if you’re not paying for the product, you are the product. And they would get asked all the time, why don’t you just charge people so that you don’t have to basically sell the right to target ads against them? And they would say, in a very sort of principled-sounding way, we believe that social media should be free. We believe that you shouldn’t have to pay to use these services.

They would sort of make this argument about how charging for access would be fine for people in the developed world, where they have high incomes. But if you go outside the developed world, it would be prohibitively expensive. And so you would end up not being able to serve the entire world.

And to me, it’s not like they’re saying you have to pay $12 a month for Facebook or you can’t get on. There will still be a free option. But it is kind of bringing this two-tiered system to a social network that has historically not had one.

casey newton

Yeah. And I think that’s particularly true with this feature that gives verified users higher placement in search results, and it makes their comments appear kind of closer to the top. That’s available for accounts that were verified under the old system, and I think the reason that that system was built was the thought that verified users are notable ones in some way, right?

If you have elected officials or celebrities who are sort of commenting, or you’re trying to find them in search, you want that stuff to rise higher because that content is probably going to be more engaging. But now it really is a pay-to-play system where, if you’re a young hustler, and you want to get famous by making Instagram Reels, why wouldn’t you pay $12 a month, knowing that your account was going to sort of float to the top right away? And I’m very curious to see how that plays out because you can imagine that going wrong in a lot of ways, right?

kevin roose

Totally. I mean, I think it’s part of this broader trend that we’re seeing right now. A few years ago, I wrote this story for “The New York Times Magazine” about what I called luxury software, which was sort of this tier of software that was kind of being aimed at wealthier, what they call prosumer users.

So one example I talked about in the piece was this app called Superhuman, which I don’t know if you’ve ever used. But it’s basically like a very expensive skin for Gmail.

casey newton

It’s ridiculous.

kevin roose

Yeah.

casey newton

It’s a ridiculous thing. It’s, like, $30 a month to use a different user interface for Gmail.

kevin roose

Right. But there was this whole explosion of these kinds of software products that were aimed at the higher end of the market, where people would be willing to pay for a better version of something they could get for free. And I think that’s essentially what this is.

It’s saying, if you want the sort of normal version of Facebook or Instagram, or Twitter, for that matter, you can have it. But you’re going to have a much better experience if you pay up. Your content will be more widely viewed. You will get better customer service. You’ll be able to actually get a human to fix your problem if you get locked out of your account or something.

And so there’s this kind of stratification of the internet into the free tier, which kind of sucks and is filled with garbage and where it’s impossible to get someone to fix your problems if you have them, or this paid premium tier, where you’re shelling out hundreds of dollars a year in the hopes that your content will do better, you’ll get better customer support, et cetera.

casey newton

Yeah. Let me say we were talking earlier in the episode about some changes that the Supreme Court might make to the internet that I think are bad, and I basically just wish they wouldn’t do anything in that case.

This is a place where I wish the government would do something, because I think, if you build a very large platform, and you enable people to build businesses there, businesses that in some cases are making millions of dollars a year, then you should be legally obligated to provide them with customer support, right?

I don’t think it should be the case that if you get locked out of your account, then that’s it for you unless you can somehow find a way back in. I think that they should enable you to get on the phone with someone. So while I’m glad that people will now be able to pay $12 a month to have that experience, in the future, I hope people have that experience for free because it’s mandated by the law.

kevin roose

Hmm. That’s interesting. I kind of like that, actually, because I think you’re right, that there is sort of an expectation in other parts of the economy that if you have a problem that needs solving, no matter how much you’re paying for that, you get at least the possibility of some kind of help.

If I have an airline ticket and it gets cancelled, and I need a human to help me with that, I can call the Delta or the United support line, and they’re going to talk to me whether I’m a frequent flier or not. So you get a sort of basic level of customer service from them.

And that doesn’t happen on social media. We don’t get humans to solve our problems unless we’re somehow connected or have an in or are otherwise able to get the attention, flag down someone at one of these companies. So I would support this law you’re talking about. And frankly, I will support you if and when you run for Congress on the platform of free tech support for all. Casey Newton for Congress 2026, I am very excited to vote for you.

[MUSIC PLAYING]

All right, Casey. I think that’s all that we have for today. Any parting words?

casey newton

I want to thank everybody who put up with my voice this week. And I hope that it sounds much stronger on the next episode of “Hard Fork.”

kevin roose

Yeah. Please get some rest. Drink some tea. Maybe get some soup in the system —

casey newton

Oh, yeah.

kevin roose

— and come back stronger next week, or we will have to replace you with an AI.

casey newton

Oh, no.

kevin roose

I hope you feel better.

casey newton

Thank you. [MUSIC PLAYING]

kevin roose

“Hard Fork” is produced by Davis Land. We’re edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today’s show was engineered by Alyssa Moxley, with original music by Dan Powell, Elisheba Ittoop, Marion Lozano, and Rowan Niemisto. Special thanks to Paula Szuchman, Hannah Ingber, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com.

[MUSIC PLAYING]
