This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email [email protected] with any questions.
I had a very exciting night last night.
I was out at a party with my wife. And we’re in San Francisco at a nice little bratwurst joint. And who walks in but one Casey Newton. You were there, by total happenstance, in the same restaurant. It was a delightful meet-cute.
It was very fun to see you out in the wild, unanticipated. And of course, my friends were very excited to meet you, having heard so many of my complaints about you over the last several months.
But they put on a brave face and shook your hand, and it was cool.
I did just feel a little weird for the other people at the bar while we were sitting there just gabbing. We talk a lot about the risks of big American cities these days. We’ve got crime. We’ve got break-ins. We’ve got open air drug markets. They don’t tell you about the biggest risk of going out in San Francisco, which is that a podcast might spontaneously break out.
I introduced my wife to one of your friends, and they were just like, Sydney!
So even when I go out, I cannot escape —
— my AI past.
I’m Kevin Roose. I’m a tech columnist at The New York Times.
And I’m Casey Newton from Platformer.
And you’re listening to “Hard Fork.” This week we’ve got big AI news from Snap, Meta and possibly Elon Musk. Then New York Times reporter David Yaffe-Bellany is going to tell us whether crypto is dead or only mostly dead. And finally, we’ll talk about the controversial new TikTok filter that may be making people dangerously hot.
OK. So Casey, while we’ve been talking a bunch on this show about Bing and Microsoft and their AI experiments, there is a lot more happening in the world of AI. In fact, there’s been a bunch of stuff happening that I think we should catch up on.
I would go so far as to say the AI news accelerated.
Yeah, it sure did. And we actually haven’t played our “This Week in AI” theme song in a long time because every show has been sort of about AI. But I think we should play the AI theme song.
Play the theme song.
(SINGING) This week in AI.
So the first big AI news of the week is that Snapchat announced this week that it is introducing an AI chatbot into its product named My AI. And initially, it will be available to subscribers of Snapchat Plus, which is their $4 a month subscription program. But the goal, Evan Spiegel, CEO of Snap, told the Verge, is to eventually make the bot available to all of Snapchat’s users.
So I haven’t been able to get access to this yet, but I’ve seen screenshots floating around. It is basically a version of ChatGPT that exists in your Snapchat feed. So you’d have your texts and snaps from your friends, and then you’d have this little robot avatar thing with kind of a purple face and bluish, greenish hair called My AI. You open up a text with it, and you can just chat with it, like you would with your friend.
You can send it your nudes and say, hey, what do you think of these?
[LAUGHING]: Yeah, it’ll send you nudes back, but it’s just the inside of a server rack.
So I haven’t been able to use this chatbot yet. But I’ve been looking at other people’s reactions to it, and it seems like they really have clamped down on the chatbot as far as what it will and won’t talk about. A lot of people are saying, this thing is boring. It’s giving the most obvious answers.
And I think this is on purpose. In their announcement for this product on Monday, Snap said that this is an experimental chatbot and that it’s designed to avoid biased, incorrect, harmful, or misleading information. And this is, I should say —
Which is more than I can say for most of the friends I talk with on Snapchat, but go on.
Exactly. So I haven’t seen anyone saying that the Snapchat My AI bot has tried to break up their marriage yet, so that’s a good thing. But it is interesting to me because this is sort of the first time that a social app has really built this into its core product like this. I mean, I also — I just think this is going to do such interesting things to the social world of teenagers.
The first thing that I thought when I heard about this new Snapchat AI bot is like, I really, really hope that Snapchat did fine-tuning in a way that makes it safe to unleash on a population of, let’s face it, mostly teenagers and young people.
Well, I mean, what does safety mean in that context to you?
I’ve been thinking about this a lot because on one level, these chatbots are interesting because they will say interesting things. If a chatbot would just tell you the weather and what time it is and what the latest movies are, that’s not —
Then it’s Alexa.
Right, then it’s Alexa. Then no one’s going to use it. So in some cases, I think that there is a direct correlation between how interesting these chatbots are allowed to be, how much crazy stuff or unnerving stuff or even disturbing stuff they’re allowed to say, and how interested people will be in them. But I really do worry. Social media is already such a minefield for teenagers who are self-conscious or depressed or lonely. And I just don’t know what it looks like if you all of a sudden show up with an AI chatbot that can be everything to everyone.
Yeah, and at the same time, what if you’re a kid, and you’re feeling lost? And you go to your Snapchat AI and you just say, hey, what’s my place in the world? And it says, well, actually, you’re going to be a corporate litigator. You could really save these kids a lot of existential doubt and despair.
Wait, that’s supposed to make them happier if you tell them it’s going to make them —
Yeah, because —
Are the corporate litigators you know happy people?
Well, no, but it doesn’t matter. When you’re 14, you think you’re going to go start a nonprofit and save the world. Then by the time you get out of college, you’ve got $100,000 in student loans, and you have to pay them off somehow. And so you become a corporate litigator. Why not save yourself the eight years of existential doubt when you could just embrace your soulless future while you’re 14?
OK, this somehow got even darker.
No, I really hope that the companies that are doing this kind of integration are thinking really hard about not just putting 100 words in the Do Not Say bucket for the chatbot, but in trying to steer these conversations in a positive direction. I can imagine myself as a teenager asking questions about identity and body image and friends and social life and school and stress. And I just — I hope that these products are ready or can be made ready for that kind of interaction because, let’s face it, teenagers are going to be using it as a proxy friend. And for that reason, I think it’s really, really important that the right safety work go into these products.

OK. Next story: Meta is another big tech company that is throwing its hat into the ring when it comes to generative AI. Mark Zuckerberg announced on Monday that Meta now has a team that is dedicated to building tools powered by artificial intelligence and that they are exploring putting these tools into products like Messenger and WhatsApp. They’re also experimenting with using AI for things like creative Instagram filters as well as video and multimodal experiences. So did this announcement from Mark Zuckerberg on Monday surprise you?
I don’t know that it was a huge surprise. We know that Meta has been working on AI tools for a long time, and it makes sense that they would want to create some kind of high-profile team within the company that was going to be working on that stuff in a very visible way. On the other hand, I think it’s fair to say there’s been a bit of flailing over there over the past year or so.
They have announced a lot of things that kind of came and went. There was a newsletter effort. They were really interested in audio for, like, a year and podcasts and stuff. And then, of course, last year, along with everyone else, they got interested in Web3, and they started building ways for people to showcase their NFTs and Instagram and that sort of thing.
So fast forward to March of 2023, and AI is the flavor of the month. And so now Meta has a big AI team. And I can smirk at that a little bit. But in the end, I think it was obviously the right thing for them to do.
Yeah, it feels a little bit like they’re chasing a trend, but it’s the same trend that everyone is chasing right now. And we should say, it’s not as if this is Meta’s first foray into AI. So for many years, they’ve had a very large, very well-funded AI research lab. And actually, last week that lab introduced what they’re calling Llama, which is a foundational large language model.
It’s very similar in how it works to things like GPT-3 or LaMDA from Google. And what’s different with this model is that Meta actually made several different versions of it in different sizes, basically so researchers who don’t have a ton of computing power can still work with it. In the research paper Meta put out on this, they said one version of Llama, quote, “outperforms GPT-3 on most benchmarks, despite being 10× smaller.”
So did you read about Llama? Do you have any thoughts about it?
Yeah, I mean, it’s one of those where it’s really difficult to assess how impressive it is because they won’t let us try it. This is something that they are only allowing research groups to try out, and they have really framed it as a tool for making AI more responsible in general. And that sounds great, but I’m not going to be the person who’s using it to make AI safer. And so I guess I will just sit back and wait for researchers to tell me whether this was useful or not.
Right. Interested to see what comes out of that. I also have some words for whoever at these companies is naming these models. We now have Llama, LaMDA, ChatGPT, Bard. I mean, it’s really — they need some help over there in the writing department.
What do you think would be a good name for an AI model?
I think sort of the best one out there right now is from this company, Anthropic, which has a model. And they just call theirs Claude. Like, C-L-A-U-D-E — like a Frenchman.
Yeah, it sounds like a chipper butler.
Yes. So I think Claude is easily the best of the names that we have so far.
Well, I can’t wait until OpenAI merges its image generator model with Meta’s new language model, and they call it Dalai Llama.
Oh, boy. And that was our show. OK.
All right. Here’s where I think all of this gets interesting, Kevin. And let’s stipulate that none of what I’m about to say is going to happen this year. But an ongoing problem that Meta has is it always wants more people creating more content in the hopes that you stumble across the thing that keeps you engaged for another two or three swipes of the thumb and maybe see an ad, maybe make a purchase. That’s the model.
Right now they are limited by what human beings can create. But if their AI gets really good, that won’t be a limitation anymore. So imagine someday in the near future, you, as you’re normally doing on an average weeknight, are watching Instagram Reels of people dancing. And you see one that you really, really like.
And imagine you could say, show me more like this. And when you tap that button, instead of seeing a bunch of humans doing different but similar dances, the AI just creates videos of people doing related dance —
Like, on the fly.
On the fly, just generating video for you to watch based on what it already knows that you like. If Facebook can get there, then I think all of these investments wind up being really valuable to them.
But I guess my question about the social networks jamming AI-generated content into their products is like, doesn’t that kind of destroy the whole point of a social network? Facebook has had a real name policy for many years. It wants people to represent who they actually are. It wants the posts that people create to represent what they’re actually — if your cousin is posting photos of their baby, they want it to be actual photos of your cousin’s baby. Isn’t a social network that has AI-generated content all over it kind of antithetical to the whole point of a social network, which is keeping up with your family and friends?
Well, I think that that era has already started to end for Facebook. Last year they said, we are going to start showing you way more suggested posts, which is to say stuff that you are not following that their ranking algorithms are just guessing that you’re going to like seeing. This is an idea, of course, that they borrowed from TikTok, which by some measures, is the biggest app in the world.
So Facebook and Instagram took a look at what TikTok was doing and how much it was succeeding. And they said, we’re moving on into this new era, where it’s not going to be about your friends and family exclusively. They’ll still be there, but the real point, as ever, is just to show you stuff that gets you to keep swiping.
Well, and then what’s to stop them from augmenting the friends and family content with AI, too? I mean, if you could —
I wish they would. I mean, some of my cousins — you’ve never read a more boring post. I’m just kidding. I love all of my cousins.
Right. So you could have dynamically generated baby photos of non-existent babies. This could take a really dark turn. But it is going to be interesting to see how they do this. One word that did not appear anywhere in Mark Zuckerberg’s post about AI — metaverse.
Yeah. That word was nowhere to be found. So my question, Casey, is what the hell happened to the metaverse? Have they lost their interest in it? Are they no longer going to be spending tons of money trying to make it happen? Is generative AI just so hot right now that you can’t get enthusiasm for anything else? Why are they — they seem to be dropping this focus on the metaverse, or at least downplaying it.
Well, I think they came around to the same observation that a lot of other folks have had, which is that a true metaverse is just still a really long time away — like, more than five years, maybe 10 years. In order for it to be real, that hardware has to get a lot better. Tens of millions more people need to buy that hardware. And then the software inside it needs to get much better.
I can remember us talking on one of the early episodes of this show about how their “Horizon Worlds” product was not good at all. And they were essentially going back to the drawing board with it because people would just try it and then bounce off of it immediately. So I think they’re just realizing that, while they still believe there is something there, it’s just not going to materialize for a long time. And they’re going to have to do something else in the meantime to convince their employees and investors that this is an important company right now, and we’re not going to need to wait until the 2030s for it.
It just really feels to me like this company is flailing. Clearly, they’re still making tons of money on ads, and I don’t think they’re anywhere near declaring bankruptcy or whatever. But it seems like they are just looking for anything that has sort of a pulse that they can jump on and make their own version of. So I look at this, and I see, OK, this is a company that doesn’t have many new ideas of its own, that is looking for something out there that is resonating with people that they can sort of co-opt and turn into their own.
I’m a little more optimistic than you. If you look at their most recent quarterly earnings, I was surprised because they had managed to get their users up by low single digits. It seems like their ad revenue is stabilizing. Another thing that they’re doing with this AI is just essentially using it to guess things about you, even when they’re not able to collect the data that they used to. And that seems to be helping them on the revenue front.
Oh, that’s interesting. So it could be like a workaround to some of the Apple privacy stuff.
How does that work?
They just build predictive models. But instead of trying to make it fall in love with you, they just try to say, do you want this sweater?
They just say, maybe you’re unhappy in your marriage. Would you like —
— a meeting with a divorce counselor? Is that —
Yeah. It’s something like that. And then the last thing is that they’re actually having success getting people to watch the short form video — these reels. Not just in Instagram, by the way, but they’re having success getting people to watch them in Facebook. And so that seems like it’s meaningfully affecting TikTok’s ability to grow.
And of course, TikTok was just banned from government devices in Canada, and EU officials can no longer have it on their phones if they use those phones for work at all. So I agree with you. It doesn’t feel like they have the most focus on successful product strategy that they’ve ever had right now, but the company is still doing pretty good.
I mean, one question I have for Facebook about their push into AI is how this is going to affect their efforts to moderate content. One thing that happened last week was that there was this sci-fi magazine called “Clarkesworld” that actually had to shut down submissions because they were getting a massive spike of stories sent to them that it just totally overwhelmed them. The publisher — a guy named Neil Clarke — he blames this on generative AI.
He says that it seems very likely that of the submissions that are flooding into our inbox, a lot of them have been written by ChatGPT or other AI programs. And he basically said that this had become a spam problem. And so it made me think. These social networks that already have billions of people posting content every day, they’re already struggling to moderate that amount of content.
Now throw AI-generated content into the mix, where you have people who can make 50 different versions of a video and post them all in hopes that one will go viral, and you just have just a massive scale problem. So I don’t know. How are you thinking about that?
Well, I mean, I think it’s going to be a continuation of the arms race that already exists. All of these giant world-scale platforms are already dealing with hundreds of millions of posts a day. They already have to account for that. Now, if you add a 10x multiplier on top of that, then yeah, I’m sure the problem gets more difficult, and I bet we see a lot of weird things.
But on balance, I think interesting stuff will still probably rise to the top. But if you are a sole proprietor or work on a very small team for a magazine and you’re used to being able to review the — I don’t know — few dozen submissions that you’re getting every week with relative ease and then, all of a sudden, you’re looking at 10,000 stories, then yeah, that does become really difficult. And I think it speaks to the need for us to develop tools that let us know when something has been created with a ChatGPT-like tool or not.
I hope that when these platforms stick generative AI into their social networks that they give you the option of the organic feed — the actual —
The human feed?
The human feed. Because sometimes, yeah, I probably do want to be distracted with 25 dynamically generated videos. But it would be a real bummer if that really crowded out your ability to see — it’s already kind of hard to find your friends and family on these social networks because they’ve jammed so many videos and reels and other things into them. I just hope they give us the option of the kind of human tab.
Yeah, that feels like sort of the new give me the chronological feed, where it’s something a very vocal minority will beg for, for months. But then when it actually gets introduced, no one will actually use it.
Yeah, because the AI cousins are going to be more interesting than the real cousins.
All right. Well, that is just a little slice of what happened this week.
We haven’t even talked about arguably the most important AI story of the week, Kevin.
There’s another one. OK, one more big AI story this week, which is that Elon Musk has apparently been thinking about starting his own AI lab to develop an alternative to ChatGPT. So this was according to reporting by the Information.
They said that Elon Musk has approached AI researchers in recent weeks about forming a new research lab and that he’s been recruiting someone named Igor Babuschkin, who is a former DeepMind researcher who specializes in the kind of machine learning that powers ChatGPT and other large language models. And this effort, according to the Information, is still in the early stages, and there are no concrete plans. But Elon Musk is thinking about this and talking with people in the field and may be starting an OpenAI competitor.
And it could be the first AI lab to run inside of a hyperloop in an underground tunnel. And I think that’s really exciting.
[LAUGHING]: Musk has been interested in AI for a long time. He was one of the founders of OpenAI back in 2015. And he’s been talking for years about how superintelligent AI could rise up and kill us all and how we should prevent that. And now it seems like he and some other folks in his orbit are very concerned about these models not expressing the right political beliefs.
He’s tweeting about based AI, and based is sort of like shorthand on the internet now for the opposite of woke. And he really wants these chatbots, it sounds like, to be able to behave in lots of ways that we might consider offensive or dangerous to have fewer guardrails around them. And that just feels, to me, like such a shrinking of ambition. You wanted to save the world from killer AI, and you ended up starting an AI lab to make sure that it can write poems about Donald Trump.
Right. On one hand, it all seems very silly. Another way of saying that the AI is being trained to be woke is just saying the AI is being trained with some safeguards in place. There was a pretty significant backlash in the past when AI models were released and, for example, said a bunch of racist stuff. So if you’re a for-profit business, it makes sense that you would want to create tools that were not going to trigger that same sort of public backlash and potentially destroy the value of what you’re building.
Now, at the same time, it’s clear that as AI takes over more and more products, people are going to want it to reflect their own beliefs and their own viewpoints. And if they feel like they can’t get the AI to talk like them or to answer questions the way that they wish it would, then I think they are going to create a marketplace for an alternative. Although we should say that Sam Altman, who runs OpenAI, has talked about over time wanting to make sure that the models can be adjusted to reflect a wide range of political viewpoints. They want to offer a very politically generic model that people will be able to adjust to their liking, and that makes a fair amount of sense to me.
I think there’s a pretty good argument that at the point where this becomes just kind of a program running on your computer, you should have really wide flexibility to get it to say what you want. And one of the reasons why I’m comfortable with that is that all of the big social networks and places online where you might post whatever you’re making with your political AI, they’re still going to have their own rules. So even if you use an AI to write something really terrible, you’re still going to be limited in where you can post it and how fast it can spread.
Right. I did see someone suggesting that Elon Musk’s AI lab would have an advantage because he would be able to use Twitter data to train an AI chatbot, which I mean, if you want an AI that behaves like a psychopath, you should train it on Twitter data. I cannot imagine a more toxic training ground to use for your new AI model. But if he wants to try it out, go for it.
I do think we are going to see this kind of splintering of the AI research community along ideological lines. I think you’ll have your AI models that behave more like a Democrat and your AI models that behave more like a Republican and ones that behave more like a libertarian. And I think those may end up just coming from different communities and different companies. So it’ll be interesting to see whether there can be such a thing as a truly neutral AI model, which is, I think, what OpenAI and others are trying to build. Or whether, as with a social network, you have to put your foot down at some point and say —
Of course you do. Of course. I mean, that’s why I think this is going to be such an exhausting and tedious conversation is, at some point, you have to decide whether you’re going to let the model talk about certain subjects and in what ways they’ll be allowed to talk about them. And there are just always going to be people working those refs, even though, as humans, we still do retain the ability to write ourselves. And if the output of the model isn’t exactly what you want, you can always just write a few sentences. But I don’t expect that that argument’s going to get very far.
Totally. The other question I have about this is whether this means that he is getting bored of Twitter — whether he is making plans to exit Twitter.
I mean, I think that that would be wonderful. And I hope he is getting bored with it. On Wednesday, Twitter went down for, like, two hours. There was some reporting this week that the site is going down much more often than it used to. That, of course, we assume, is connected to the fact that he keeps laying off hundreds of people every few weeks. So yeah, I think there is actually some pretty good evidence that Elon is starting to get bored.
Or maybe he just wants all the tweets to be generated by AI. I think that’s where this is all leading.
Who knows? My tweets have been generated by AI for years.
Is that right?
Only the bad ones. The good ones, I write myself.
When we come back, we’ll talk to “Times” reporter David Yaffe-Bellany about what is happening with FTX and the great crypto crackdown.
Hey, guys. How’s it going? Sorry I’m not in studio today.
You know, I had a bad cold last week, so something is going around.
They call it the “Hard Fork” curse.
They’re calling it the “Hard Fork” curse.
Well, I wasn’t even in San Francisco last week, so clearly something’s going around nationwide.
Something is sweeping the nation.
And they’re calling it AI fever. It’s not crypto fever. I’ll tell you that much. Crypto fever has passed.
Well, CZ was tweeting this morning about Binance’s new AI product. It’s some kind of generative AI thing where you make yourself a profile picture, and then they turn it into an NFT. So —
Automatically evades money laundering regulations.
It’s just a bot that says, you’re sure you wouldn’t like to buy a few more Bitcoin there, David? Come on.
All right. David Yaffe-Bellany, welcome back to “Hard Fork.”
Thanks for having me.
So the last time we had you on the show for one of your patented DYB FAQs on FTX, you were telling us about Sam Bankman-Fried, who had just been arrested and was sitting in a jail somewhere in the Bahamas. Since then, Sam Bankman-Fried has been returned to the US, where he faces 12 counts of fraud and conspiracy charges in connection with the collapse of FTX. So we’ve learned a little bit more through these court proceedings about FTX and what was going on. Can you catch us up on what prosecutors are now saying about Sam Bankman-Fried and FTX?
Yeah, so a lot’s happened since we last spoke. He’s not in jail, but he’s under house arrest at his parents’ house in Palo Alto. And then last week, prosecutors unveiled a sort of revised indictment against him that added some new charges onto the ones that they had already filed. They added a bank fraud charge onto there.
They added a money transmitting charge, and they also revealed a lot of new details about the campaign finance part of the original case. So initially, he was charged with campaign finance violations, but it wasn’t totally clear the specifics of what he allegedly did. And so a lot of that was kind of clarified and expanded upon in this new indictment.
So now all of a sudden it’s illegal to try to buy off a politician? I thought this was America. Come on.
So yeah, I mean, of course, there’s all sorts of legal chicanery that you can do in the political process to buy power and influence. But what you’re really not supposed to do is funnel campaign money through other people. You’re not supposed to give $1,000,000 to your friend, tell them who to donate to, and then have them donate in your name. That’s what’s called a straw donation, and that is the crux of what FTX and SBF are accused of doing.
Not only was the money that was going to these campaigns customer money that had been deposited in the exchange, but it was basically Sam pulling the strings. And donations would be made in the names of other executives — and in particular, two executives who are mentioned in this revised indictment, one who is donating a lot to Republicans, and the other who is donating a lot to Democrats.
I think if we saw one thing from SBF last year, it’s that this was not a person who was afraid of putting his name on donations. So what is the thinking about why he would bring in these surrogates to put their names on these donations?
Particularly on the Republican front, the idea is that you can kind of play both sides in a tricky way. You can sort of influence the political process without having to deal with the potential PR consequences of donating to a lot of Republican politicians who get criticized in liberal media and that sort of thing. Also, if somebody doesn’t like SBF or doesn’t want to be associated with him for whatever reason, then theoretically you could still use your money to influence the process but strip yourself out of it.
You also reported this week that another FTX executive had pleaded guilty in this case. So who was that FTX executive? And what was his role in all of this?
So the executive who pleaded guilty is a guy named Nishad Singh, who was the Director of Engineering at FTX and also one of the original founders of the exchange. And as you may remember, shortly after SBF was arrested and charged, two of his other top lieutenants, Gary Wang and Caroline Ellison, pleaded guilty.
And Nishad was kind of the fourth member of the inner circle, basically. Unlike Caroline and Gary, Nishad was also involved in the campaign finance part of the charges. He was the guy who was making donations to Democrats that prosecutors now say were essentially SBF donations. And he was sort of used as the straw donor in that scheme.
But really, the charges against Nishad and his guilty plea strengthen the campaign finance part of the case for the prosecution because now they have somebody who was a straw donor saying, I was a straw donor, and I was basically acting on Sam’s behalf. And so that’s a powerful bit of testimony to have.
Now, at this point, SBF has pleaded not guilty to all of the charges. Is that right?
Yes, he’s pleaded not guilty. A trial date has been set for fall of this year. I think it’s pretty unlikely that the trial will actually happen around then because these things take a long time to prepare for. But that is the kind of current state of affairs.
But at this point, if his top lieutenants have pled guilty and said, oh, yeah, we were definitely doing a lot of crimes —
Yeah, it’s not looking good for him. Now, not only have they pled guilty, they’ve agreed to full cooperation. And so they’re going to get up on the witness stand at this trial and say, I committed crimes with Sam Bankman-Fried. And that’s not going to look good to a jury.
So in addition to the campaign finance thing, you’ve reported that as part of the bankruptcy process, a lot of the money that SBF had given away or invested was being clawed back in an attempt to make FTX investors and customers whole, or as whole as possible. So what can you tell us about the status of that process of trying to dig up the money that was disbursed from SBF and from FTX and give it back to the customers?
So there are some unknowns here still. We still don’t know the exact amount of money that is missing, the exact size of the hole. But the rough estimate is that it’s about $8 billion. And of that $8 billion, the new executive team that has taken over FTX and the lawyers working for them — that team has managed to recover about $5.5 billion —
— which sounds like a lot. It’s a pretty good return, honestly, after only a few months of work. But there are some mitigating factors there. One is that a lot of this is kind of like the low-hanging fruit that they’re able to recover really quickly. It’s not likely that the next $2.5 billion or however much will materialize super quickly.
And then when you break down what makes up that $5.5 billion, yes, there’s some cash. There are some traditional securities that are maybe easily convertible into cash. But then there are a bunch of cryptocurrencies, and the value of some of that crypto is pretty uncertain. And that $5.5 billion includes a lot of FTT, which was the in-house FTX token that you may remember as one of the —
The magic beans. Yes.
— prime drivers, yes, of this whole fiasco. And so customers aren’t going to be super happy if they’re paid back in FTT — a now virtually worthless token.
Right. You lost $1 million when FTX collapsed, but here’s a handful of magic beans.
Exactly. Exactly. So that $5.5 billion maybe isn’t really $5.5 billion, and it’s definitely not $8 billion. But how do you get to $8 billion? At this point, the bankruptcy team is going after the money that SBF distributed to various places.
So he invested hundreds of millions of dollars in other startups, so you can ask those startups to return the money in kind of a friendly way. Or you can sue them and try to claw it back more aggressively. The bankruptcy team is also reaching out to all of the PACs and political campaigns and politicians who got money from SBF and trying to get it back.
But in a lot of these cases, when it was basically like VC money being pumped into other startups or political donations, the funds just aren’t there anymore. They’ve been spent. And so it’s not totally clear how much of that will be recoverable. And even the funds that are recoverable could take years to get back.
Well, the next time you talk to the bankruptcy team, I hope you’ll tell them my advice, which is to start a generative AI startup. They’ll raise $8 billion in no time. So things are not looking good for SBF, but there’s so much else happening in the rest of the crypto space, in particular with crypto regulation.
You’ve been reporting on how regulators in Washington are cracking down on crypto. You described it as a flurry of actions. And this is something that crypto advocates have been fearing for a long time: that the government would basically wake up and decide to go after them. And they pinned a lot of their frustrations, at least initially, on Gary Gensler, the chair of the SEC, who actually said in an interview with New York Magazine last week that he thinks that basically every cryptocurrency except Bitcoin should be considered a security.
And I immediately, when this came out, saw tons of crypto people just losing their minds, very upset about this comment. Why are they so upset about this? What would it mean for the crypto industry if every cryptocurrency except Bitcoin was considered a security?
Yeah, before I answer that, I mean, it’s also just worth reflecting on how much crypto people hate Gary Gensler. It’s really kind of remarkable. I mean, there’s always tension between industry and regulators, but I have talked on the record to crypto executives who describe him as a sociopath.
Days after Kraken settled with the SEC, Kraken’s founder, Jesse Powell, went on Twitter and posted masturbation-themed memes about Gensler that he eventually deleted. The level of toxicity is kind of incredible. And I’ve asked Gensler about it, and he’s basically just like, oh, yeah, yeah, I’m doing my job, sort of thing.
But anyway, to actually answer your question, yeah. I mean, Gensler’s central claim for the couple of years that he’s been the SEC chair is that the vast majority of cryptocurrencies are securities, akin to shares traded on the stock market and that sort of thing. And that’s significant because there are a whole bunch of regulatory requirements that come with something being a security. Gensler wants to kind of extend all those requirements to the crypto industry.
So if you started your random coin, you would actually have to explain what the idea behind it was and go into more detail about the technology and that sort of thing so that people would know what they were getting into. The crypto industry is very resistant to that. They have all sorts of legal arguments about why cryptocurrencies don’t actually meet the standard for a security.
But also, it would be incredibly expensive for the industry to suddenly have to get all these licenses and meet all of these disclosure requirements associated with securities. And so that’s kind of the crux of the fight. Gensler has long acknowledged that Bitcoin is not a security.
Why? What is different about Bitcoin that, in his eyes, makes it not a security?
In essence, it’s that Bitcoin is sufficiently decentralized that it’s not a security. There’s no central group of people that is in charge of Bitcoin, that issues Bitcoin, whose business plan will determine whether Bitcoin is a success. That’s sort of what it comes down to.
So my understanding, just from talking with folks in the crypto industry, is that there’s a belief that if every crypto instrument except for Bitcoin were declared or treated as a security, it would basically destroy everything that the crypto industry has spent the last decade building. All NFTs would be considered securities.
Stablecoins — these sort of crypto coins that are supposed to behave like government-issued currency and be pegged to the value of government-issued currency — those would be considered securities. The tax implications, the regulatory implications — that it would basically just destroy the entire thing. So were they exaggerating about that, or is there a realistic fear that if these tokens are treated as securities, the whole industry could collapse?
It’s a complicated question, and a lot depends on how the crypto industry would respond to that state of affairs. A lot of times, industry will say, oh, if you make this designation or institute this rule, our whole industry will collapse. But then when the rule actually gets instituted, you adapt to it. You figure out ways to respond, and you’re able to maintain the basic technological breakthrough.
I mean, Gensler would argue, yeah, too bad. OK, you’re distributing something that meets the legal standard of a security. The argument that, oh, this is really fun and great and so we shouldn’t actually have to play by the rules, doesn’t hold much water. That’s what he would say.
And it’s also certainly the case that the crypto industry has done a pretty good job destroying itself over the last year without any of those rules existing. And maybe even a lot of the problems that we’ve seen over the course of 2022 could have been prevented if some of these basic protections were in place. But it’s an incredibly contentious issue in crypto land. And the industry boosters would rather have their cryptocurrencies categorized as commodities, which come with a lighter touch regulatory regime.
I’m just wondering if these venture firms that raised billions of dollars for crypto-specific investments could have ever raised that much money in a world where cryptocurrencies were considered these kinds of securities. And to the extent that you thought that crypto was going to be the foundation of a new internet, this, to me, feels like the moment where maybe we say, well, actually, just no. It’s not going to be a new internet anymore. The amount of surface area to build on just got way, way smaller. And so it seems possible that this might be one of the more significant developments in the history of the industry right now that we’re talking about.
Yeah, it’s incredibly significant, and it’s going to get heavily litigated. There is one case that’s pending over the cryptocurrency Ripple, which the SEC claimed was a security a couple of years ago. Ripple fought back. And we’re waiting for the judge’s ruling on that, and that’s likely to come relatively soon. And so that’ll be a kind of landmark legal decision in this debate over the classification of cryptocurrencies.
Another thing you hear a lot, or at least that I hear a lot, from people in the crypto industry is that if regulators crack down, as they now appear to be doing in the US, that these companies will just move offshore and that there will be — it won’t actually have the effect of changing what crypto is. It’ll just change where it happens, and the US will lose out on all of this growth and these companies and these jobs.
So my question for you is, do you see any evidence that that is true? Are companies moving offshore to get away from this new crackdown? Or was that always just kind of a scare tactic?
Yeah, I mean, I think it’s probably too soon to judge whether that’s happening in a new way now as a result of this recent crackdown. But it is definitely the case that the biggest crypto company in the world at this point, Binance, has always been offshore. And so yeah, there are definitely crypto people who are arguing that the growth and innovation that might be unleashed if you allowed that sort of trading in the US are now flowing to other countries. It’s definitely the case that the SEC sees its job as protecting investors. It’s not a concern for the SEC whether the economic benefits of crypto flow to the US or to another country.
Also, the economic benefits of crypto — I sort of want to put that whole thing in air quotes, just because I feel like there have not been a lot of economic benefits of crypto for many of the people who’ve been using it.
Listen, the benefits to working class Americans of having robust Bored Ape factories in our borders — I mean, those jobs are not going to create themselves. So I think this is an urgent priority.
Yeah. Yeah. No, I mean, the crypto industry hasn’t really demonstrated that there are strong benefits, so that’s a big issue.
And also, the regulatory piece seems to have a couple dimensions to me along which it could be harmful to the industry. So one is just that it makes the investment case for crypto projects much less strong, because if you could be sued out of existence or have your executives charged with fraud or selling unregistered securities, what VC wants to be investing in that? There’s also sort of a psychological hit, I think.
A lot of the people that I talk to in crypto came from traditional finance because they wanted to be able to play around in this new Wild West where you could do basically whatever you wanted, where there was no compliance person or lawyer looking over your shoulder while you did things — just the fun of being able to experiment in a totally new and untested and ungoverned industry. And a lot of that goes away if you have compliance people who are standing over your shoulders saying, did you comply with this, did you comply with that? It just makes the whole thing a lot less fun.
Yeah, it’s fun to play Grand Theft Auto because there are laws in that game that you can break without any consequence. And you can kill people, and you can fly a helicopter into a mountain and still live the next day. But at the end of the day, you are playing a video game.
Yeah. And that’s going to come to an end.
It’s also just that the central philosophical idea behind the origins of crypto was that it was a corrective to the mainstream financial system. And it showed all the flaws in the existing setup and that sort of thing. And now if it’s just governed by the same regulatory agencies and kind of operates in the same type of way, what’s the point at all, really? What does it do that’s different?
Well, I think the point was the friends we made along the way. And I hope that —
And the Apes we bought.
Hold tight to those folks. Yeah.
Well, David Yaffe-Bellany, as always, great to talk to you. Thank you for catching us up.
Thank you, David.
Yeah, thanks for having me. [THEME MUSIC]
When we come back, we reverse the ravages of time.
(SINGING) If I could turn back time.
Mm. Oh, I would love to get Cher on the show. I would die.
Casey, there is a big controversy brewing this week in the world of TikTok filters.
TikTok has these filters that you can apply to your videos. And two of them are really causing a stir. The first is called Bold Glamour, and it is a feature that is ultra-realistic and that is coming under fire, according to Vice, for being too good. Vice says the filter, which convincingly alters facial features to look more conventionally attractive and simulates a soft glam makeup look, has some users freaking out that it conveys unrealistic beauty standards without viewers realizing that the look comes from software.
Now, bold glamour is, of course, one of the core values of the “Hard Fork” podcast, but we have never actually used this filter ourselves.
Should we do it?
Let’s try it.
OK. So let’s open up TikTok here.
Now, can I just search Bold Glamour?
Yeah, I think you have to go into Effects and search Bold Glamour. And glamour is spelled the British way with a U.
All right, I’m using this effect. Wow. Unfortunately, I do actually look incredibly hot in this filter.
Casey is winking at his TikTok. OK.
What would you say are the biggest differences?
Well, one, it’s really evened out my skin tone. So I have a sort of default pasty pink complexion, and this has made it very tan, I would say. My cheekbones have really been made more prominent, and I think my jawline is extra defined. I think it also made my eyes bigger and maybe my teeth whiter. But I’ll go ahead and just show you hot me.
Oh, yeah. You look — you look great.
Here, I’ll record a little bit of me in the Bold Glamour filter.
And I’ll show you what that looks like.
Oh, it looks super handsome.
It gave me a little chisel on the jaw. Big eyes. It made my eyebrows bigger for some reason.
It did. And I think it made your eyebrow bones more prominent. It’s like —
Yes. Yeah, I have a little bit of Promethean brow going on.
You have that classic Prometheus brow.
The look that all the Zoomers are going for. So users are very excited about this filter, but also a little spooked. One user said, “As someone who experienced body dysmorphia growing up, this makes me sick to my stomach.
It’s sickening for our youth.” And this user said that if they had had it when they were younger, it would have, quote, “emotionally destroyed me.” So Casey, why do you think this has struck a nerve with people?
One thing I think we should say about these filters is that your reaction to them will probably be quite different depending on your gender. I think if you are a woman, you’re under an enormous amount of pressure to conform to certain standards. And I can imagine opening up TikTok and all of the sort of negative feelings that that might bring up for you.
You’re thinking, oh, here we go. Here is one of the most powerful companies in the world that is reinforcing the idea that we should all have these giant eyes and super prominent cheekbones. And that would probably be a really upsetting thing, or at least it could be.
And there also is a pretty robust set of studies at this point that show that these augmented photos, at least, have a direct impact on the body image of adolescent girls in particular. So there was a study a few years ago where they showed manipulated Instagram selfies to girls between the ages of 14 and 18. And the researchers found that exposure to these manipulated photos — these filtered, retouched photos — led directly to lower body image and that girls with higher social comparison tendencies were especially negatively affected by seeing these manipulated photos. So I do think there’s pretty good evidence at this point that the retouching and filtering of at least photos on social media has created worse mental health outcomes.
Right. Although at the same time, if you grew up in the ‘90s, you were also bombarded with completely unrealistic body images. And it just feels like this is something that kind of recurs in every generation in its own way.
It is true that there have been manipulated images of people designed to make them look prettier and more conventionally attractive for forever in every medium, but I do think that there is a difference between retouching your selfies and having this dynamic and always-on filter that is — I mean, I imagine that — right now stuff like this is built into TikTok.
But I imagine that, at some point, it could just be built into the camera on your iPhone. I mean, iPhone cameras already have image retouching on them. You can apply filters right in the camera itself. And maybe that’s going to be popular enough — people will like the way they look so much more with these filters applied — that they will just want it to be sort of the base layer of their camera. And that could lead to a really interesting world.
The other thing that I’m just thinking about, looking at all of this, is drag, which has really just completely taken over queer culture over the past several years, thanks to “RuPaul’s Drag Race.” And I think so many of my gay friends are just fascinated to use this stuff to have the easiest way imaginable of just picturing themselves as a drag queen. And I think there’s just a lot of fun and self-expression in trying on those different looks. And so I don’t know. The more that I talk about this, the more I think I’m generally positive about people having access to these tools to just play around with their identity expression.
Speaking of identity and playing around with it, another TikTok filter is getting a lot of attention this week, which is the Teenage Look filter. Have you used this one?
Actually, I did use this one because when I opened up TikTok the other day, I saw somebody using this filter and, of course, immediately said, well, I need to try that for myself.
So the idea is — I haven’t used it. Basically, it shows you what you would look like as a teenager — what you did look like as a teenager.
But the gimmick is it shows you both your current self and what you would look like as a teenager and sort of stacks those two views on top of each other. So you get to see your teenage self and then your horrifying adult self in the same frame, and people are having very strong reactions to looking at the difference.
Oh, no. OK, I have to do this. So I have Teenage Look and — wow. That’s very strange. It turned me into — here, I’ll just show you. So this is my —
It gave me very soft-looking skin and just basically made me look very unblemished and smooth-skinned, which I appreciate because as a teenager, I did not have smooth skin.
I had acne.
I know. It makes you look like the Neutrogena ad version of yourself.
So I did not look like that as a teenager. But people were getting really emotional about this. I saw some videos of people who saw their teenage self through this filter and just started crying because they remembered being a teen, or it brought that back to them in a really interesting way. Or it looked like their kid or something like that. I think it’s really powerful.
It is. And when I was watching some of the videos, it seemed like it was triggering in people all of these questions about what their lives had been like. And maybe they happened to be in a spot that is not as far along as they hoped they would be when they were a teenager. I think there is something about being confronted with your teenage self that makes you recall all of the dreams you had for yourself at that age.
And if you haven’t achieved them, I think there is something very emotional. Or maybe you just had a really hard time in high school, and you’re confronted with that version of yourself. And all of a sudden, all of those feelings come up again. That makes a lot of sense to me. And yet, I never would have predicted that it was going to be a TikTok filter that got people there.
Totally. Totally. I mean, I love doing this kind of digital archaeology on myself. A little while ago, I found my high school LiveJournal account that I had sort of forgotten about.
And we’ll include a link to that in the show notes.
No, we will absolutely never. That is being deleted from the internet as we speak. I’m going to go immediately after this taping and make sure no listener can ever find it. I did have a very moody teenage LiveJournal, and it was fascinating to see the world as I saw it at 16 or 17.
And I really — I think one of the things that I like most about the internet is that it does sort of collapse time in this way, where now we’ve got all these apps that will show you your photos from this date 10 years ago or your memories from this part of your life. But I worry a little bit that preserving our digital selves is going to be harder than just throwing everything into a scrapbook because LiveJournal, since I had it, has been sold to a Russian company, and so now it’s basically impossible to use.
And it could go offline at any moment. And so all these pieces of our digital pasts, I worry about how easy it’s going to be to preserve them. But maybe if we can’t preserve them, we can just recreate them using the TikTok filters of 2027.
Part of me is interested that these filters keep going viral. Before TikTok had a lot of success with this, Snapchat used to release, it felt like, one or two of these filters every year that would be totally captivating. And at this point, you’d sort of think that we had seen every iteration of how young or old one of these filters can make you and how much like a girl and how much like a boy it can make you look. But clearly, there is something left in there to be discovered.
Yeah. Yeah, I remember the one that made people look old on Snapchat. That was a big deal.
Well, I mean, for me, the reason it was a big deal was that I used it, and I just looked exactly like my dad, which made me feel like it was probably quite accurate.
Yeah, I used it and then immediately started investing in skin care products. I can’t go there.
We’ll be right back.
Before we go, so this week we talked about the man whose sci-fi journal has been overrun by AI submissions. And we want to hear more stories like that. How is AI showing up in your everyday life, at your job, at your kids’ school? What are you using it for? Send us a voice memo and just email it to us at [email protected]. We want to hear your story.
And we may turn these into an upcoming episode. We are interested in telling stories not just about the companies that are creating AI, but about how it’s changing your life for the better or the worse. So let us know. Is it my week on the credits?
It’s my week because you took my week last week.
It’s OK. No. Yeah, I asked you to.
You were sick.
[CLEARS THROAT]: “Hard Fork” is produced by Davis Land. We’re edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today’s show was engineered by Alyssa Moxley.
Original music by Dan Powell, Elisheba Ittoop, Marion Lozano, Sophia Lanman and Rowan Niemisto. Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. That’s all for next week.
That’s all for next week.
That’s all for next week.
I’m getting a little ahead of myself.
Did the illness do something to your brain?
I am slowly falling apart, Kevin.
There’s nothing slow about it.