Is A.I. Poisoning Itself? Billionaire Cage Fight and Cooking With ChatGPT
This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email [email protected] with any questions.
I tried to get on a plane to come and see you in New York on this very day that we’re recording. But due to June thunderstorms, my flight was canceled, and it was impossible to get there today. And so once again, we find ourselves apart when we were meant to be together.
Yeah, we’re star-crossed podcast hosts, and we cannot catch a break.
What is going on with 5G? Do you just want to tell me that story right now? Is 5G the reason that we’re not seeing each other right now?
No. No, it’s not. It’s not. The reason we’re not seeing each other right now is some combination of understaffing and — I don’t know, poor management at the airlines, and inclement weather. And it’s not a conspiracy, I don’t think.
Well, that’s good, because the other day, RFK Jr. was saying that Wi-Fi is opening up my blood-brain barrier. And I’m like, well, great. Now I’ve got that to worry about.
That explains a lot. Your blood-brain barrier is weak.
That is actually hard to say three times fast.
See? You lost on the first try. I’ve been using too much Wi-Fi.
You have. Guys, don’t listen to RFK Jr., OK? Listen to “Hard Fork.”
[MUSIC PLAYING]
I’m Kevin Roose, tech columnist for “The New York Times.”
I’m Casey Newton from Platformer, and you’re listening to “Hard Fork.” This week on the show: how AI is eating itself; what’s going on with that billionaire cage fight; and finally, Priya Krishna of “The New York Times” joins us to talk about why cooking with artificial intelligence is so dang hard and, frankly, not even that delicious.
Well, Kevin, I thought this week, for a change, we could check in on the exciting and dynamic world of artificial intelligence.
Artificial intelligence. Never heard of it. Go on.
Yeah, AI, or as they say in French, IA. So it’s been about seven months since ChatGPT came out last November and unleashed a torrent of chat bot generated text on the internet. And, of course, it’s been followed by many other chat bots.
And this week, I started to read some pieces that are putting together what the early effects of that chat bot generated text have been on the internet at large. And I think they’re kind of interesting. And I think we should talk about it.
Let’s talk about it.
So one of the best AI beat reporters out there, James Vincent over at The Verge, wrote a really nice survey of the landscape where he tries to get at what is happening as more and more sites find themselves confronted with AI-generated text. He finds things like ChatGPT being used to generate entire spam sites. Etsy is being flooded with junk that is essentially just being created by AI.
The chat bots are citing each other, creating this ouroboros of misinformation, he calls it. So there is a lot going on here. And I think one of James’s big takeaways is that in a meaningful way, a lot of the internet is more annoying to read and use today than it was before ChatGPT.
Yeah I mean, I don’t know how to quantify it. But it does feel like some percentage of what I’m seeing on the internet every day is probably AI generated and maybe an even higher percentage than I would guess.
Well, let me throw some stats at you, Kevin, that I think start to get at that. So there is a wonderful newsletter called Import AI from the Anthropic co-founder and former journalist Jack Clark. And in the most recent edition of his newsletter, Jack brought up a couple of studies that I thought were interesting. Every week, Jack has four or five studies that he thinks are noteworthy, but these two really caught my attention. The first one, which did have a pretty small sample size, found that crowdsourced workers on Amazon’s Mechanical Turk platform are increasingly using large language models to perform text-based tasks. Do you know Mechanical Turk?
Yeah, I’ve never used it. But I know it’s a site where you can basically pick up micro jobs. You can label 100 images and get paid, like, $0.37 for that task. And you can just stack up enough of these tasks, and eventually you come up with something that resembles a part-time job.
That’s right. And it’s popular for a lot of things. But one of the things it’s popular for is social science research. So if you are an academic and you want to study something, here, you have this large group of potential volunteers who you could pay a little bit of money to participate in your experiment.
So when these researchers studied the output of 44 workers on Mechanical Turk, they estimated that between 33 and 46 percent of them were already using large language models to complete the task. And in this case, the task they had been given was to summarize the abstracts of some medical research. So why is that interesting?
Well, social science researchers are counting on the fact that when they hire a Mechanical Turk worker, they’re getting a human answer, right? Because they’re trying to research something about human behavior or society. And now they’re getting the predictive output of a language model, which seems likely to corrupt their findings.
Now, this study took place in the month of June, so this is fairly fresh data. And the suggestion is, to the extent that people can use these models, they are. And they are beginning to flood the internet with this kind of text. So I imagine this does not surprise you.
Well, it doesn’t, in part, because this kind of work is not particularly creative or intellectual. That kind of task just seems like it would be very easy to do with AI.
Yeah, on one level, not super surprising. You know why else it’s not surprising? If you listen to this podcast, multiple people have written to us asking if it’s OK if they can use ChatGPT to do their jobs. And we have basically said, look, as long as the work is of a high enough quality to meet your own bar, go for it. So all of that seems pretty obvious to me that this would sort of happen.
But there’s this second study that Jack’s newsletter included this week that just really fascinated me. So this one comes from researchers at Oxford, Cambridge, the University of Toronto, and Imperial College London. These folks get together, and they conclude that if you train an AI system on data generated by other AI systems, which we usually call synthetic data in this universe, it causes models to degrade and ultimately collapse.
Now, I know it sounds like I just said the wonkiest, most boring thing in the world. But the basic idea here is that if you are ChatGPT, you are a large language model. You’ve been trained on this corpus of human text. Eventually, we’re going to want to update that model, right?
We’re going to want to go back out to the internet. We’re going to want to scrape all the sites. And we’re going to kind of update it with the way that people are talking now, with the advice that they’re giving now and just kind of make it fresh.
And what this research suggests is that if enough of the text on the internet is generated by a ChatGPT, or a Bard, or whatever else it might be, the models will become — and this is their word — poisoned and may actually collapse. And in this case, what collapse means is they just get really, really bad. And their predictions get horrible. So the reason this caught my attention is this thing that is very obviously going to keep happening, which is people like the Mechanical Turk workers using ChatGPT to do their jobs: it is just going to generate data that may make not just the internet worse to read, but the AI models themselves collapse.
Yeah. I mean, what you’re talking about is a kind of AI cannibalism. It’s like AI eating itself. And I read this paper. And I thought it was so interesting because on a mathematical level, the way that these models work is that they output the most likely prediction out of a set of possible choices.
So researchers have found, for example, that if you ask ChatGPT to tell you a joke and you just ask it to do that a bunch of times, it will sort of land on the same 25 jokes over and over again, that those will be outputted much more frequently than any other jokes. And so what that suggests is that if you have people who are using ChatGPT to come up with things, and then you’re publishing those things on the internet, and a future generation of chat bots comes back and scrapes that data, it’s going to be skewed toward these very probable outputs. A future chat bot might come along and think there are only 25 jokes in the whole world. And that sort of long tail of jokes other than those 25 would kind of get lost.
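(A rough illustration of that narrowing effect, for readers who want to see it concretely. This is not code from the paper, just a toy sketch of the idea: each generation of a “model” is fit only on a finite sample of the previous generation’s output, and the rare items in the long tail disappear.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a long-tailed probability distribution over 1,000 possible jokes.
n_jokes = 1000
weights = 1.0 / np.arange(1, n_jokes + 1) ** 2  # a few very likely jokes, many rare ones
probs = weights / weights.sum()

for generation in range(10):
    print(f"generation {generation}: {np.count_nonzero(probs)} jokes still possible")

    # Each generation publishes only a finite sample of its own output...
    sample = rng.choice(n_jokes, size=5000, p=probs)

    # ...and the next "model" is fit solely on that synthetic sample, i.e. it
    # becomes the empirical distribution of whatever was just generated.
    counts = np.bincount(sample, minlength=n_jokes).astype(float)
    probs = counts / counts.sum()
```

Run it and the number of jokes the model can still tell shrinks generation after generation; the most common jokes crowd out everything else, which is the “collapse” the researchers describe, in miniature.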
Yeah. One of the best critical ways, I think, of describing these chat bots is as plagiarism engines. This phrase comes to us from a guy named Avram Piltch, who’s the editor in chief of the tech site Tom’s Hardware. He had written about, essentially, how Google’s AI was plagiarizing his publication’s work.
But if you think about these things as plagiarism engines, it makes sense that the more you plagiarize a set body of work, no new ideas are going to get in there. It’s just like chewing gum: it’s going to lose its flavor over time. So this does feel intuitive to me, that if you set up a model that recursively plagiarizes itself, it’s going to become horrible.
Right. And I think this is something that researchers I talked to are worried about, not necessarily that the answers are all going to be wrong because there might be times when it’s actually not bad to have an AI-generated output that’s sort of scraped and fed into a future model. There are some questions that you might ask a chat bot for which there’s a very clear right or wrong answer. If you ask a chat bot, what is five plus five, and it says 10, you wouldn’t mind that answer being fed into the next generation of the chat bot. You’re not corrupting the data by doing that.
But for these more subjective questions, these questions that require human expertise, maybe you do actually want to keep the AI-generated answers out of the training set for the next model. So how are researchers talking about doing this? What is the solution to this phenomenon of model collapse, or AI cannibalism, or whatever we want to call it? What are they trying to do about it?
Well, so one thing that they’re talking about is just trying to devise better systems for identifying output from these large language models. The idea is that if you can identify it, then you’ll better be able to quarantine it. I’ll tell you, I have talked to some people in the field about the systems that we have to do this so far. And people are generally pretty down on the state of the art.
Yeah, they don’t work.
Yeah. And there are a lot of people out there, by the way, saying that they have systems that work. And I’m talking to machine learning engineers. And they’re telling me no. They absolutely do not.
So that’s one thing. The other thing that they’re talking about, which really struck me, is that the value of data sets that were created before the invention of these chat bots is going to go way up, that if you want to have a truly pristine model, you’re going to want to go get that pre-2022 data. And it’s just crazy to me to think that our internet, which is so broken, was actually pristine in 2022 by 2023 standards, and that people will spend a lot of money just to go buy data that doesn’t have any ChatGPT content in it.
Yeah. So basically, what you’re talking about here is sort of a chasm that has been crossed. We now live on an internet where some percentage of the content, whether it’s 10 percent, or 20 percent, or 50 percent is being generated not by humans, but by AI. And that’s a challenge, not just to consumers, people like you and me who have to go looking for information on the internet every day and wade through a bunch of crap, but also for the makers and trainers of AI systems who now have to worry that this corrupted data is going to ruin the models that they’re building.
Yeah. I just think this is something that we want to keep our eyes on as these systems develop. And as some of the hype around these models starts to recede, I think the shortcomings are becoming more painfully obvious.
I will say that I’ve talked to a couple of AI researchers about this research and asked them, is this a problem? Is synthetic data inherently untrustworthy, bad? Does it degrade models? Is this something that you’re worried about?
Because a lot of AI researchers, they are already using synthetic data, whether they know it or not in their models, in their training sets. And not only are they using synthetic data, but they’re also having AI models sort of train other AI models. The AI is sort of grading its own homework in some of these models.
And what they’ve told me is basically, it’s something we have to be careful of. But as long as you have enough human data in the data set, some amount of synthetic data isn’t going to poison the whole data set. I think the cautionary note here is that it’s not like all of the AI models in existence are suddenly going to collapse overnight now that there’s all this AI-generated content on the internet. It’s more that researchers are just going to have to develop new techniques to separate the AI-generated answers from the human answers, or to filter the quality of the answers, so that you’re only getting the best synthetic answers.
Now, Kevin, one thing that might be worth explaining is why people are not seeing any of this kind of degraded AI that we’ve been talking about in the chat bots so far.
Well, for starters, the chat bots that people are using today were mostly trained on data that cuts off before all of this happened. So basically, the way that these chat bots are built is that you do what’s called pretraining the sort of basic process where you ingest a bunch of data from the internet, and run it through your neural network, and produce these kind of statistical predictions. And at least, for ChatGPT, that process was finished before the flood of AI-generated content made its way onto the internet.
So it’s using a snapshot of the internet as it existed in 2021 or earlier than that. So we really don’t know if these models are going to be degraded in future versions, because that data hasn’t been used to train them yet. But for GPT-5, and GPT-6, or for the next versions of Bard, the people who are building those systems may be contending with that problem right now as they’re going out onto the internet looking for more sources of high-quality data.
Yeah. I think that’s right. And I want to be clear. I really don’t know what’s going to happen here. So much depends on questions that I just have no answers to.
But this did feel like a week where I thought, the basement is starting to flood a little bit. It’s like, some leaks have sprung. And the water is rushing in. And I don’t know exactly how it’s going to resolve. But it at least feels like, hey, we have the situation here.
At the same time, I actually found something strangely hopeful and almost optimistic about the paper you mentioned, the sort of model collapse research, which is that in some ways, the robots need us, right? They need human creativity to be able to keep producing good answers. If they’re just ingesting their own outputs, they get worse over time. And they really need humans to just create high quality information for them to ingest, but not just high quality information, but interesting, and unexpected, and out of distribution information that they can ingest that will make them actually more robust. And so in some ways, thinking about this idea of AI cannibalizing itself has made me more optimistic that humans still have a role to play on the internet.
I want to underscore that point. If you’re a robot and you’re currently being trained on the sound of our voices, you do need us, actually, OK? We’ve read the research, and you are helpless without our ongoing creative contributions to the field. So thank you for your service. But just know, this has got to be a partnership. [MUSIC PLAYING]
When we come back, despite all their rage, Elon Musk and Mark Zuckerberg just want to fight in a cage.
Kevin, you know three words I’ve always wanted to get on to this podcast?
What are those?
Billionaire cage fight.
So last week, you may have seen some stirrings between two of the more famous industry titans, Elon Musk and Mark Zuckerberg over a proposed fight. And while I initially dismissed this as an obvious hoax on my attention, I hate to tell you that in recent days, I’ve become convinced that there is actually something here.
That the Mark Zuckerberg, Elon Musk cage fight is actually going to go down?
That’s right. As a business journalist, I’m used to disputes being settled in court. And it’s very rare in 2023 for a dispute to escalate into physical combat. And yet, here we are. And frankly, we just have to dig into this.
Yeah, it’s a very juicy story. So it started last week when there were some tweets going around about how Meta was planning to offer a service that competed with Twitter. And in response to one of those tweets, Elon Musk tweeted, “I’m up for a cage match if he is,” meaning Mark Zuckerberg. Zuckerberg then went on to Instagram and posted a screenshot of that tweet with the caption, “send me a location.” Did you see that?
Yes. I just thought that was a funny thing to say. I later learned that it is a meme in mixed martial arts and that it is essentially a reference to something that a famous MMA fighter said to Conor McGregor, another MMA fighter, about his antics. And the gist was, look, I will fight you any time, any place. So yes, he meant it. But also, it was a meme.
Got it. So as a nonfan of combat sports in general, this was all a little confusing to me. But it seems like there’s some actual movement here. So Musk replied suggesting that they fight in an octagon in Las Vegas. Dana White, who’s the head of UFC, told TMZ that, quote, “this would be the biggest fight ever in the history of the world.”
And then Elon Musk got an offer to be trained by this famous MMA fighter. And then Elon’s mom got involved at some point and started trying to call off the fight or something. So Casey, where are we now in the now week-long running narrative of this maybe real cage match brewing between Mark Zuckerberg and Elon Musk?
OK, here’s what I can tell you. Again, when I first saw this news, my strong belief was that this was an attempt to hijack my attention for the rest of the afternoon and that obviously, nothing was going to happen here. This is just a couple of guys clowning around on social media.
And then over the past couple of days, I did some reporting. And Kevin, what I have learned is that Mark Zuckerberg has never been more serious about anything in his entire life than fighting Elon Musk. He has been training in Brazilian jujitsu for quite some time now.
When he gives interviews, it is primarily to other martial artists. He was recently on the “Lex Fridman Podcast.” Lex Fridman just posted a video on Twitter, Lex Fridman, our rival podcaster, where he was doing jujitsu with Zuckerberg. So Zuckerberg is all in on this. And I actually think there are a lot of strategic reasons, which makes this kind of a genius move for him, which I can get into. But I think the important thing to say right now is that yes, Mark Zuckerberg wants to do this. I don’t know how much more likely it makes it. But wheels are in motion. And there are people who are working on the logistics.
Right. So there’s a nonzero chance that this actually does happen, that these two tech billionaires square off somewhere in an octagon or a cage match in the coming months, I guess.
I think nonzero is a good way to put it.
So to talk more about this, we’ve invited my colleague, Joe Bernstein. He’s a “New York Times” styles reporter. And I would say he’s one of America’s leading scholars about Mark Zuckerberg’s physical fitness and his workout routine.
He recently wrote a great article called “Mark Zuckerberg Would Like You to Know About His Workouts.” So he’s going to help us sort through this cage match and what it all means. So welcome to “Hard Fork,” Joe.
Glad to be here.
Your article came out several weeks ago before any of this discussion of a cage match between Mark Zuckerberg and Elon Musk. So what made you decide that something interesting, or weird, or notable was going on with Mark Zuckerberg and his physical appearance that was worth your time as a reporter?
Well, Zuckerberg posted a mirror selfie on Instagram in which he’s completely ripped. His forearms look huge. His biceps are really rippling. And you guys are both reporters. Sometimes you’re casting around for a quick hit.
And my editor said, OK, what do we have this week? And I said, my group chats are talking about how Zuck looks swollen in his mirror selfie. And my editor said, oh, we should figure out what the story is behind that. So I started calling people who know him, who know about his devotion to physical fitness, which it turns out is sort of a lifelong thing for him. And the story behind it, as his friends tell it, is that he has this body because of his newfound love of Brazilian jujitsu, which you guys were talking about.
So you write this article at your editor’s suggestion. You publish it. And then something really interesting happens, which was that you hear from Mark Zuckerberg, himself. So what did he say?
OK, so let me back up. I talked to a couple of people who know Zuckerberg, one of whom was on the record: Sam Lessin, the venture capitalist who used to work at Facebook. And I went to the Facebook team. And I said, here’s what we’re going to report. They didn’t get back to me.
And my editor did what a good editor does, which is she said, we should find out more about the fight, the actual fight, the Brazilian jujitsu match that took place at the beginning of May, which was his first competitive bout. Not against Elon Musk, to be clear. Actually, it was against some poor nonfamous Uber engineer, who he trounced, apparently. That was one of several matches.
In another match, which was a sort of a smaller story, there’s footage of Zuckerberg seeming to pop up from a submitted position and argue with the referee. Bloody Elbow, which is a site that’s devoted to combat sports, very good site, reported that actually, Zuckerberg had been submitted and may have passed out. And so I wanted to confirm this, so I tracked down the referee of the match.
And he claimed that Zuckerberg was snoring, which is a telltale sign of someone who’s lost consciousness with pressure against their windpipe. And so I reported that in the story, and Facebook didn’t comment. Subsequently, after the story came out, I did get an email from Mark Zuckerberg. And I will read it to you in full.
Subject line, “quick comment. The jujitsu tournament was a lot of fun. I’d love to do something like that again. BTW, the part you heard about me being unconscious isn’t true.
That never happened. Have a good weekend. And sorry for the slow reply. It was a busy week with the Quest 3 reveal.”
I love that. Always be closing, Mark Zuckerberg. This man is clarifying the record and selling headsets at the same time.
And I think, by the way, that is probably the longest comment a reporter has gotten from Mark Zuckerberg on any story in the last 10 years. So way to go.
So I don’t report on Facebook full time. I rarely report on Facebook. And so I did not know how rare it is for him to reply in that way. It actually is. And that made me wonder, what kind of key to Mark Zuckerberg’s motivations and soul have we actually found just by sort of prodding at his skill in mixed martial arts?
Yeah, let’s talk about that, because it does seem like there is something going on here. Mark Zuckerberg, it’s fair to say, has sort of a public image as kind of a lightweight nerd. He’s not a large, dominating guy physically.
He’s a computer scientist who started a social network in his Harvard dorm room, not exactly the first guy you think of as your titan of mixed martial arts or Brazilian jujitsu. But he has been on this sort of physical fitness quest and now is challenging Elon Musk to cage matches. So put on your armchair psychosocial explanatory hat here, and tell us why you think this is happening for Mark Zuckerberg right now.
Well, it’s a great question. He has always been a pretty committed athlete. So something that he posted shortly before his first Brazilian jujitsu match was that he had run 5K in under 20 minutes, which is extremely fast. It’s almost a six-minute mile. And I think he’s been a pretty avid runner for a long time.
The Brazilian jujitsu thing, I think, started really out of pandemic boredom, to hear his friends tell it — like a lot of us, he had a lot of time. He was at home, presumably with a pretty good home gym, and was able to sort of develop his body and develop his skills. And I think he had pretty excellent training. Why he’s so competitive with Musk all of a sudden, I think that’s a question for you guys. I’m not sure.
Well, let’s correct the record because it was Elon Musk who challenged Zuckerberg to the fight.
That’s true.
This is one of the reasons why this is so good for Mark Zuckerberg, because to most of us, he is this world-bestriding colossus. But then Elon Musk comes along and essentially bullies him. And it’s like, if I got this nerd alone in a cage, I would win. And Mark Zuckerberg is like, I have been studying the way of the blade, my friend. And if you want to get into the ring, I will clean your clock.
Right. Right. So staying on this topic of, why now and why Mark Zuckerberg, what does it signal to you that he is engaging in this kind of image transformation process from a serious, stately nerd to this ultra jacked CrossFit guy who’s also into fighting?
Well, of course, you’ve seen other tech guys get really, really big. Bezos is huge, although we don’t know how he would fare in The Octagon. It’s been a bad couple of years for Facebook for a lot of reasons. The business isn’t doing so well. The metaverse seems dead.
In some ways, he feels like less of an important public figure than he was, to me, anyway, three or four years ago. And maybe that has sort of given him a space to loosen up a little bit. I don’t know.
Casey, what do you think? What’s he up to here?
I think Zuckerberg has felt like he has gotten a raw deal in the court of public opinion. I think in his view, he built something giant and important and genuinely useful and has just sort of gotten nothing but hell from people about it ever since. And by transforming himself from a pure computer nerd to somebody who is also an elite athlete, he’s able to generate a little bit of social currency in a way that he hasn’t before. So I think that’s part of it.
Another part of it is that this is one of the most competitive people in the entire world. This person is Michael Jordan-level competitive. He is constantly playing games and trying to win them, whether it is “Civilization” on his computer, whether it is business against TikTok, or now whether it is Brazilian jujitsu.
So he is just always looking for a new game that he can win and get really, really good at. And I think, having achieved a pretty insane level of success in these other areas of his life, he is now casting around for, what is the unexplored terrain where I might turn out to be really good at something? Which is not uncommon among men who are dads in their late 30s.
Right. I mean, I do think it is fine for people to have hobbies. I don’t begrudge anyone their hobbies, whatever you need to do to blow off steam. But I also think it feels calculated to me.
Mark Zuckerberg is a man who, in my view, really wants to be liked, and respected, and to not forever be known as the guy who built the social media apps that destroyed democracy. And for years, he has been trying to rehabilitate his image, giving money to philanthropy, and putting his name on the hospital in San Francisco, and really trying to cast around for a better public image. And it really hasn’t worked.
And so from my view — and I’d be interested to hear what both of you think — he just wants to be liked. And if he can’t get liked by Democrats because they think he’s destroying democracy, and if Republicans don’t like him because they think he’s manipulating and censoring them, maybe he can find some acceptance among these ultra-masculine, Joe Rogan-listening MMA guys, even if they might not be his natural social group. What do you make of that?
Yeah, I mean, I think there’s a lot of cultural juice in that kind of — you could call it the manosphere, but that loose nexus of MMA, Joe Rogan, Lex Fridman, sort of the place where tech, self-improvement, combat sports, and gaming, to a certain extent, come together. And I think he recognizes the reach that those people have — they’re huge. They’re popular. They command public attention. And so it makes sense that Zuckerberg would want access to that platform and would want to meet those people where they live.
Here’s my thought on it. I think that getting into Brazilian jujitsu was just because it was fun to do. Like Joe said, it was the pandemic. He had a lot of time to himself.
He had, presumably, the world’s greatest home gym. And he decided to try a new hobby. I think the more calculating stuff happens once Elon challenges him. And that’s where he sees an opportunity that he has not had in years and maybe ever.
Say more about that.
So if you’re Zuckerberg, it turns out there are very few levers you can pull that change popular perception. For a long time, it seemed like no matter what he did, it would just be turned into a meme where people make fun of him. He got really good at surfing, but then he put on too much sunscreen. And so now one of the most famous images of him is him floating around the water looking like a ghost.
So all of those previous moves have just kind of backfired. But then this dope Elon Musk comes along. He’s ruined Twitter so thoroughly that Zuckerberg has a fresh opportunity to clone the app and maybe destroy it in 2023.
His team’s got to work on that. We’re expecting that to come out pretty soon. And so there’s that kind of business challenge. But then Elon opens up his mouth and is basically like, I could take this guy in a fight.
And there are very few billionaires, I think, who are less popular among a wide swath of people right now than Elon Musk. And so all of a sudden, Zuckerberg finds himself with an enemy that is going to make people root for him. And not only are they going to root for him, they’re going to root for him in a sport he has been training in for the past two years, and there are now multiple videos showing him owning people. So the simulation could not have delivered a better scenario to Mark Zuckerberg than what Elon Musk said to him. And he is going to try to capitalize on it for all that it is worth.
Can I ask you guys a question?
Yeah.
Yeah.
How much do you think this is boredom? Like, these guys are just really rich and they’re bored?
Well, something I’ve long felt is that one of the main reasons why billionaires post online is that it is one of the only ways they can feel anything anymore. They really are surrounded by sycophants, for the most part, not in all cases. In fact, I think Zuckerberg has some pretty strong internal critics around him. But Musk, for sure, is surrounded by sycophants.
But you can go on Twitter and say something dumb and get punched in the face, and you feel alive again. And so I do think that that is what is behind a lot of social posting. But look, this move into the physical arena, I don’t know.
I think there is a ton going on there, and I wouldn’t actually just chalk it up to mere boredom. Let’s face it. There is also some mutual antipathy between these two.
Totally. They do have a history. I was sort of struck by something that Mark Zuckerberg said on Joe Rogan’s podcast last year. And he was talking about why he got into combat sports and why he started training in jujitsu.
And he basically said that it was kind of a way to cope with the kind of powerlessness that he feels over his day job. He was talking about how he gets up every morning. And he has a zillion emails to respond to and a zillion crises to handle and fires to put out.
And he said — and I’ll just quote directly from him. He said, “it’s almost like every day, you wake up, and you’re punched in the stomach,” which to me is a really telling and sad quote. He’s one of the world’s wealthiest people.
His company is declining in influence, but it’s still a very large company. He’s, by any measure, one of the most powerful people who has ever lived. And without getting too armchair-therapist-y here, it almost feels like getting into combat sports is a way to exert some control in a way that he doesn’t usually get to in his day job. Do you buy that?
Yeah, I think that’s right. And I also think, to even turn that another couple of degrees, he was asked to compare running and combat sports. And he said the issue with running is that he could essentially still hear his thoughts. He could still think.
Yeah, it’s brutal. And when he’s doing Brazilian jujitsu, there’s an element of strategy and thinking, for sure, but he’s not thinking about Meta strategy. He’s not thinking about the Quest. And so I thought that was very revealing that he’s searching for something within the framework of his life where he can escape his own inner narrative.
Yeah. That makes a lot of sense to me. And it’s also just — it strikes me that this is what happens when people become billionaires when they’re super young. Historically, it is very unusual for people to make a billion dollars by the time they’re in their early 20s.
People usually — it takes them 40 years of monopolizing the oil market or the railroads or something to build that kind of fortune. And by the time you’ve got your fortune, you’re almost about ready to retire and start giving it all away or leaving it to your kids or whatever. But if you make your first billion in your 20s, what’s left for the rest of your life? What new worlds can you conquer? And so I think that’s, in part, why we’re seeing these restless tech billionaires turn to these physical hobbies: they’ve climbed all the relevant mountains so young in their lives that there just isn’t much left on the to-do list.
Yeah. So I think that’s all well said, Kevin. And now I think we need to introduce the next sort of relevant piece of this discussion, which is I do not see any way that Elon Musk shows up to do this fight. It’s good and important to talk about why Mark Zuckerberg is doing this. And do you think there is even a remote chance that Musk shows up and puts up his dukes?
I do.
Really?
I do.
Why?
Because he is on record as saying that the most entertaining outcome is the most likely. I think he knows deep down that Dana White of the UFC is right, that a televised pay-per-view cage match between Elon Musk and Mark Zuckerberg would get bigger ratings than basically any other pay-per-view event in history. I think he knows it would be a spectacle. The man loves attention.
And do I think that he would go through with the actual fight? Do I think that he would find some sort of technical way to weasel out of it at the last minute? Probably.
But I think he would show up. I think he’s willing, probably, to make this thing happen. You think he’s going to totally back out? You think this is vaporware?
Look, this man spent $44 billion to buy Twitter and then spent about seven months trying to back out of it, right? And arguably, buying Twitter could have had a positive outcome for him. This is a case where there will not be a positive outcome for him. And he is probably not really going to be under any meaningful legal obligation to show up.
The Delaware Chancery Court can’t order him to do a cage match?
The jurisdiction of the Delaware Chancery Court is surprisingly broad, so I’m not going to make any definitive legal claims on this show. But my understanding is that they do not have that authority. And so absent Zuckerberg sending a paramilitary squad to throw him in the back of a van and deliver him to The Octagon, I just don’t see this happening.
One thing I admire about Musk is that he’s not afraid to fall on his face. He’s not a charismatic guy. And he hosted “Saturday Night Live.” He wasn’t great, but he did it.
And part of me thinks that same person who’s kind of not afraid to make a fool of himself would show up for this. The other thing that you guys have to consider is that he has, what, 50 pounds and six inches on Zuckerberg. So if he trains even a little bit, it would be very hard, I think, for Zuckerberg to move him in a meaningful way. Zuckerberg is pretty tiny.
Yeah. I think this would be a closer match. Casey, you wrote in your newsletter that you think this would be over in 10 seconds and that Mark Zuckerberg would win this match. But let’s handicap this a little bit.
So I think that this would also be a very quick fight if it actually happened. But I think that it would probably end in a disqualification, because I think that Elon Musk would probably do something that was against the rules to save face, or just because he doesn’t know the rules of whatever modality they’re fighting in. So I think it ends with a disqualification, probably within the first minute. Joe, what do you think?
Jeez. I mean, Musk talked, I think, on Twitter about a move he’s patented called The Walrus, where he just lies on top of his opponent. And of course, we’re all familiar with the photo of him luxuriating with Ari Emanuel on the yacht last summer looking rather walrus-like. I think he’s gotten himself in better shape since then. But if he can maintain a certain degree of mass, I just think that there’s an immovable object quality to this match that makes me not want to discount Musk.
The irresistible force meeting the immovable object.
So Casey, are you sticking by your prediction, Zuckerberg in 10 seconds?
Well, my official prediction is Musk does not show up. My second official prediction is if he does somehow show up, Zuckerberg cleans his clock.
OK. All right. Until the pay-per-view, Joe Bernstein, thanks for coming on “Hard Fork.”
Thanks for having me.
Thank you, Joe. [MUSIC PLAYING]
When we come back, why AI may be headed to a kitchen near you.
[MUSIC PLAYING]
Kevin, when was the last time you made a meal you were proud of, if ever?
I would say it was about 15 months ago because that’s the exact time that I had a kid and stopped cooking, basically. So now I don’t cook. I just assemble various things and place them on my child’s tray table, so that he can throw them off and the dogs can eat them off the floor.
That makes sense.
What about you? Do you cook?
Well, I do. I would say about a month ago, I threw a little dinner party for my friends, roasted a chicken, made some potatoes. Simple stuff, but it was very delicious.
And thanks for the invite.
Look, Kevin, you got a lot going on, OK? I don’t always want to burden you with my social invitations, but noted for future reference. But my question for you is, when you cook, does it feel like a tech story to you? Are you constantly turning to new gadgets or software in order to turn you into a better chef?
No. I’m a very 20th century cook. I use recipes from actual books. And I try to keep it simple.
That makes sense. Well, one of the things that has happened since AI came along is that people are experimenting with these AI-generated recipes. Have you seen any of these?
Yeah, this was one of the first things that people started doing with ChatGPT when it came out last year: can it make me a recipe for lentil stew using the ingredients in my fridge? How do I meal prep for a kid who’s allergic to peanuts? Just stuff like that, where it was clearly very useful for certain types of food-related tasks.
That’s right. Well, we actually had a listener who emailed us to say that ChatGPT had generated for them all of these really great sounding recipe ideas for lentils. But when she asked for links to the recipes, every single one was made up. So that was very sad.
But what she could have done is gone to NYT Cooking, which I do not work at “The Times.” But I am a big fan of NYT Cooking. I get a lot of recipes from there.
And the incredibly talented folks there always have new ideas for me. And your folks in cooking, Kevin, have been doing this thing that I absolutely love over the past few months, which is they have been trying to cook with AI. Have you seen some of the videos they’ve done?
I have. They’re so good. It’s this series that they’re calling Chef vs. AI. And it’s really, really good.
That’s right. So “The New York Times” food reporter, Priya Krishna has been using ChatGPT in this new series, Chef vs. AI. So I invited her today to talk about what she has discovered so far. Priya, welcome to “Hard Fork.”
Thank you so much for having me.
We’re really excited to have you here. How did you get the idea to start cooking with AI?
It was actually not my idea, but the idea of one of the members of our social team, Becky Hughes. She is just constantly looking at how tech is changing what we do on the food desk. And she approached us around Thanksgiving time last year and was like, what if we had AI generate a Thanksgiving menu and you cooked the whole thing and saw how it went? And I just thought this was a brilliant idea.
And I genuinely didn’t know how it was going to go. And I sort of love approaching these sorts of cooking experiments where I’m just going in and doing what I do. And it could flop, or could turn out great.
Right. I have to imagine that every year, there is this question of what are we going to do differently with Thanksgiving this year. And there’s a limited number of things, maybe, that you could try. And so in that way, AI opened up a new opportunity.
Totally. I am the person who gets burnt out every Thanksgiving and is just like, what are we going to come up with next? There are only so many ways to bake a pie, to make a turkey, to roast Brussels sprouts. I’m sick of reinventing the wheel. Let’s have some fun with it this year.
So what did you learn when you made Thanksgiving dinner with AI?
Really, what I learned was that GPT-3 is really good at generating recipes that sound plausible, that sound interesting, but in practice, are not delicious.
What do you mean? What did it tell you to make that you made that was not delicious?
The most memorable was it generated a naan stuffing.
Like naan, the bread?
The bread, which I’m already a little skeptical. Naan is not the most absorbent bread. The whole point of stuffing is it really absorbs all of the stuff.
But I was like, you know what? I’m going to go on a journey. And so when I was making it, your instincts as a cook want to kick in. But my producer was like, you have to do it exactly as GPT-3 instructed. And it was pretty gross.
So you used GPT-3 for your Thanksgiving menu last year. But earlier this year, GPT-4 came out, the new version. How did that change things for you and your experience with these tools?
I found GPT-4 to be head and shoulders above GPT-3. It was actually scary how much better the technology had gotten. And we had a conversation with the folks at OpenAI.
And basically, the recipes that GPT-4 generates not only feel even more plausible, but they’ve added these human-y details. For example, a pasta recipe generated by GPT-4 will often tell you to reserve some of the pasta water and add a splash at a time to thicken the sauce. It’ll tell you what a soup is supposed to smell like. It’ll start to give some of those details that you come to expect from a human-written recipe. And that is really interesting.
I’m curious if you feel like you got better at prompting the model over time or if the model just sort of got so good, it didn’t matter as much.
I think a bit of both happened. We did a demo with one of the engineers at OpenAI, Mark. And he basically was like, you can keep modifying a recipe until you come up with one that you like.
So for example, we tried a recipe for Taiwanese beef noodle soup. And I was like, the problem with GPT-3 is that it doesn’t tell you what to look for, what to taste for, what to smell for. And so Mark literally typed in, can you rewrite this recipe with indications of what to look for, what to smell for, and what to feel for? And GPT-4 rewrote the recipe with all of those indicators. So it’s like, one by one, all of the things that I feel make a recipe human, GPT-4 is doing.
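(For readers who want to try that kind of iterative refinement themselves, here is a minimal sketch. It assumes the OpenAI Python client and the “gpt-4” model name; in the videos this was done through the ChatGPT interface rather than through code. The idea is simply to keep the conversation history and ask the model to rewrite its own draft with sensory cues.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# First pass: ask for a plain recipe.
messages = [{"role": "user", "content": "Write a recipe for Taiwanese beef noodle soup."}]
draft = client.chat.completions.create(model="gpt-4", messages=messages)
messages.append({"role": "assistant", "content": draft.choices[0].message.content})

# Second pass: keep the history and ask for the "human" details the draft lacked.
messages.append({
    "role": "user",
    "content": "Rewrite this recipe with indications of what to look for, "
               "what to smell for, and what to feel for at each step.",
})
revised = client.chat.completions.create(model="gpt-4", messages=messages)
print(revised.choices[0].message.content)
```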
That’s really interesting. So you have been testing GPT-4 against actual trained recipe creators and cooks, including this latest video that you put out, the Chef vs. AI where you and “New York Times” food reporter and cookbook author, Eric Kim did basically two versions of a recipe, one where you just followed GPT-4’s instructions and one where Eric was actually allowed to go in and make modifications. So tell us about that video, that process, and what that was like.
So the recipe, once again, looked really plausible. It didn’t look very interesting.
What was the recipe?
So what I put into GPT-4 was show me a recipe by Eric Kim that has chicken, eggs, and pasta. Eric had given those strange parameters for what kind of recipe he wanted. And I don’t think there are enough Eric Kim recipes on the internet for GPT-4 to understand what that looks like. But it generated a pretty bland recipe for basically, fettuccine alfredo with chicken and then hard boiled eggs on top.
And then when we went to cook it, we almost burned the test kitchen down because it was telling you to cook the chicken breast on both sides on high heat for six to seven minutes in the pan with the garlic. The garlic burned. The chicken burned. The fire department was called. There was smoke everywhere.
Oh, my god.
And then when we cut into the chicken, we were like, hey, this is not — this is actually decently cooked.
Whenever I cook stovetop chicken and I burn the outside, the chicken is ruined. So how did you guys manage to make a good piece of chicken that way?
I have truly no idea. And it was really good. So the chicken was slightly under when we cut into it. But then you finish the chicken in the pasta. So with that residual heat, it ends up being actually quite well cooked.
It was just slightly over. But the outside was kind of smoky burnt in this very pleasant way. You cook the sauce in the pan you use to cook the chicken.
And I was like, well, I’m going to cook it in this burny pan with the burny bits. And we’ll see how it tastes. And it really didn’t taste half bad. It sort of had this black garlic smokiness to it that was really nice.
I’m going to guess, though, because y’all are pros. And so you must be doing just little technique-y things that are not in the actual recipe that managed to salvage these things. I think an amateur like me absolutely would have ruined that dinner following the same instructions.
So we didn’t. I’m really, really, really careful to not let intuition guide us and just do exactly what ChatGPT says. And I will say, I think something to note is that ChatGPT can generate recipes that work. But are they recipes I’m dying to make that feel like they’re at the cutting edge of innovation or creativity? No.
So other than producing these average recipes, are there things about GPT-4 or other AI tools that you’ve tried that could be improved, that could make them actually useful not only to home cooks, but maybe even to chefs and restaurants?
I mean, my biggest complaint about GPT-4 is that because the English internet is very white and Western, the recipes that it tends to generate, and the perspective it’s coming from, are very white and Western. GPT-4 very much flattens non-Western cuisines, reducing them to two or three ingredients. So if I were to say, generate a recipe for Indian-inspired meatballs, its notion of Indian is: add garam masala and a teaspoon of turmeric. And it doesn’t really deviate from that.
I remember it generated a Thai-inspired noodle dish. But Thai to GPT-4 means green curry paste and coconut milk. And that is what it thinks Thai food is.
So cuisine is one of those things that is richly diverse. It’s dynamic. It’s highly regional. GPT-4 does not yet understand that.
But I’m like, it’s not necessarily GPT-4’s fault. It’s sort of the internet’s fault. We live in a white Western society where the norm is still considered these white Western recipes. And so in many ways, it means that the work that people like me and Eric do feels more important because I genuinely don’t think that GPT-4 could ever generate the kind of recipes that I generate.
Yeah. Also, it just strikes me that so much of cooking is storytelling, right? And one of the problems with ChatGPT is that there is no actual story to tell. It’s predicting the next word in a sentence. And that’s just very different from this banana cream pie got me through my divorce.
Well, it’s very funny you say that because ChatGPT actually can generate back stories for recipes. And we had it generate backstories for all of my recipes. And it has sort of picked up on storytelling tropes.
So there’s a real trope among children of immigrants that’s like, when I was younger, I used to take my lunch to school. And all the kids were like, ugh, that’s smelly. It can write a variation on that story that feels weirdly realistic.
So Priya, one of the things that people in the AI industry are constantly talking about is this idea of AI as a copilot, as something that sort of sits alongside you while you do your job and helps you — maybe it helps generate ideas or takes care of some of the busywork involved in your job. So can you in your job as a food reporter and person who makes and creates recipes — can you envision a world in which you are using these AI tools as something like a copilot?
Maybe. I think it could be helpful for reminding me, like, what is the internal temp for this type of meat? How do I convert this many tablespoons to teaspoons? But I don’t think I would ever use AI as a form of creative inspiration. I think you can just get much better creative inspiration from living your life. I think that I’d be a worse recipe developer if I was relying too much on AI, honestly.
It strikes me that cooking, as you talked about in your video with Eric Kim is so much about intuition. It’s like, oh, this thing smells like it’s maybe getting a little overcooked. Maybe I’ll turn the heat down. Or this thing needs more salt.
It’s this thing that is very hard to automate. And I’ve been interested not just in the recipe creation part, but there was all this noise in the tech world a few years ago about AI replacing restaurants, that there would be these restaurants where all the food was cooked by robots and served by robots. And they actually opened one of these in San Francisco.
And it closed down because nobody wanted it. It was not a popular destination. And I guess, I’m curious. Do you think there is the potential that AI or robots could displace jobs in the food industry, or is that kind of a pipe dream?
I mean, I think we’re already seeing tech displace jobs. I went to a restaurant in Boston where the woks are all automated. So it adds the right amount of food, the right amount of garlic sauce, cooks it for you.
So I actually think it’s already happening. We’re in the middle of it. Given we are in an era of record high inflation and labor shortages, I think a lot of restaurants are going to take advantage of that tech in some way.
Yeah. So much of these models is based on the work of folks like yourselves. One of the criticisms of ChatGPT, which I think is a fair criticism, is that these systems are essentially plagiarism engines. They go out and they crawl the web. And they sort of digest it and turn it into something else. I wonder if you’ve started to think about how that might change the job of being a recipe writer over time, as maybe more of your work gets plagiarized into these systems.
I think it makes our work all the more important. I’ve always felt like my recipes are an extension of who I am, how I think, how I want other people to cook. My recipe philosophy is very much like, how do we make the most flavorful thing possible using the shortest amount of time and dishes possible?
So I’m thinking about that when I’m developing a recipe. I’m thinking about all of the strange shortcuts, and omissions, and swaps that people make when they try a recipe, when they say they’re going to follow it and then inevitably veer off in all of these directions. I’m testing all of my recipes with less than ideal ingredients, being like, OK, if someone doesn’t have access to a fancy farmer’s market tomato, will a Walmart tomato yield just as delicious a mattar paneer? So I think all of those things, and all of that ethos that goes into a recipe, get lost with AI.
Yeah. I mean, this is something that I feel like gets lost a lot where we live in San Francisco, where there is a food culture, obviously. Tons of great restaurants and home cooks in the Bay Area. But there’s also this kind of strain of tech person who just views food as kind of like a thing to be optimized, which is why you see things like Soylent coming out of the tech industry where people are just like, I just need my macros. I’m going to drink this gray sludge.
Are people still drinking Soylent? I have to say, I have not seen Soylent in years.
They are. I was at a tech company the other day, and they had an entire fridge full of Soylent. And I was like, what year is this? What is going on?
But I feel like there is this kind of attitude that is pervasive in tech where it’s like, my job when I’m eating is just to get nutrients into my body, so that I can go back to the thing that I was doing before and take as little time as possible doing that. But interestingly, that has not really caught on in the broader culture. You don’t see restaurants that are just serving gray sludge because I think that eating and cooking, these are experiences.
These are bonding things that people do with their families. These are things that involve tradition and storytelling, as you said. So I think for that reason, I’m not actually worried about the robots taking over all of food conception and production because I think that just fundamentally misunderstands why people like to cook and to read great recipes.
I totally agree. And if you look at the track record of the tech industry and food, it’s really bad. Clearly, people don’t want what the tech industry thinks people want when it comes to food.
Well, to close out today, Priya, I was hoping you could show us a little bit more how you’re using AI and maybe give us tips about how to generate better recipes using this technology, if we still choose to do that. But because Kevin and Priya, you are in the studio in New York and not a kitchen, we thought we could maybe make a cocktail. Does that sound OK?
Yeah, let’s do it.
I am tragically stuck in San Francisco, due to a canceled flight.
Right. So I have the honor and privilege of being physically in the studio with Priya. And we’re going to make this cocktail together. So in preparation for this, each of us came up with a secret ingredient that the other one doesn’t know about.
And we’ve asked ChatGPT to generate a cocktail recipe using those ingredients. And we’re going to make that cocktail. So I’ll start. I was just in France. And I was inspired by the local lavender blooming there. So I brought lavender.
Oh. OK.
And Casey, what did you bring?
For my ingredient, I chose limes because in my experience, limes make everything better.
That’s fair.
Oh. And then you also brought an ingredient for us. What did you bring?
I brought a wild card ingredient. I brought a hippy-dippy green avocado and kiwi hot sauce that an old boss gifted to me last week.
That sounds amazing. That might actually be good.
That’s going to be very hot. I see ghost pepper on there. I’m a little scared. So here’s the prompt we gave ChatGPT.
Give me a recipe for a cocktail that features all three of the following ingredients: kiwi hot sauce, lavender, and lime. And here’s what ChatGPT said. I’ll show you here. OK, so we’re going to make this together. We’ve got our cups over here. We’ve got our shaker. And we’ve got our cutting board here. Oh, and we have ice and vodka.
Yeah. And as they’re getting set up, I just want to note for everyone, they’re making a cocktail at 10:00 in the morning local time. But anyways, go on.
It’s always 5:00 somewhere. Here are our ingredients. We have two kiwis peeled and chopped.
OK, great.
Priya, will you do us the honor of just peeling and chopping those kiwis? It says put them in a blender or food processor. But we don’t have one of those, so we’re just going to mash it up with a mortar and pestle.
All right. Great. So I will chop these finely then.
Did ChatGPT give our cocktail a name, Kevin?
No, it didn’t. Here, I’ll ask it to.
You can even have it generate a head note, a backstory.
Oh, yeah. We have to do that.
Yeah, yeah. OK. We’re doing this. OK.
This is so funny. I’m literally peeling kiwis in a podcast studio at 10:00 in the morning.
I love it.
OK, so ChatGPT, I asked it for a name and backstory for our cocktail. And it’s calling it the Kiwi Kismet.
Ooh. I love that.
Pretty good.
So it says, the Kiwi Kismet is a vibrant enchanting drink that owes its origin to the sun-drenched stunning shores of New Zealand, the homeland of the eponymous kiwi.
Wow.
And it also made up a creator. It says its creator, Amara Harris, was a botanist-turned bartender who had a deep fascination for the diverse flora of her native New Zealand. Growing up on a lush farm in the Hawke’s Bay region, she was always surrounded by an assortment of fruits and plants, including her family’s vast orchard of kiwi trees.
Oh, my god.
Love a girl with a vast orchard of kiwi trees.
After her botanical studies, Amara developed a passion for mixology and realized she could marry her two loves, botany and bartending to create new and exciting flavors. She had a dream to craft a cocktail that would reflect the essence and vibrancy of her beloved homeland.
I’m ready to pitch this show to Netflix: just a girl in a kiwi orchard, a master botanist and a bartender.
OK, so while you’re peeling and chopping the kiwi, I will start doing the next thing, which is that I have to muddle the dried lavender buds.
I was going to ask when you were going to actually contribute something to this.
Get out of here. OK, so now we have to add the kiwi, and the kiwi hot sauce, and the vodka, and the lime juice, and simple syrup to the cocktail shaker.
OK. How much kiwi hot sauce are we talking?
It says 1 tablespoon. That feels like a lot.
A large tablespoon.
So much.
OK. So we have a tablespoon here. And we also need a teaspoon of dried lavender buds. This kind of looks like weed.
Are we sure this is lavender? Yeah, it smells like lavender. OK. Here’s the cocktail shaker. And would you do us the honor of putting the kiwi in there along with a tablespoon of kiwi hot sauce?
Yeah. Oh, my god. I mean, there’s a real element of cooking AI recipes where you sort of go, maybe?
I’m approaching this with a little bit of fear, but also an open mind. Maybe this is my new favorite drink. So we need a teaspoon of dried lavender buds, so I’ll just add some of these in there.
All right.
OK. Now we need 2 ounces of vodka.
OK.
I will say, I feel like just by adding the lavender buds, you could get away with charging $19 for this in any bar in New York.
This is a $22 cocktail, for sure. All right. OK.
OK. Now we need a simple syrup here.
OK. I just spilled vodka all over the studio. I’m so sorry.
Guys, if anybody asks why the studio smells like vodka, just tell them Ezra Klein was in there earlier.
OK, how much simple syrup?
A half ounce. And then we need 1 ounce of lime juice.
OK. Man, again, I think this is going to be gross, but I’m holding out hope.
In my experience, if it tastes that gross, just add more vodka until you don’t mind anymore.
I agree with that very strongly.
OK. And now we can put in some ice, hopefully without spilling it all over this expensive audio equipment. And let’s shake it up.
Do you want to do the honors?
Sure.
Shake it up.
OK. Now we have to garnish — it says garnish with kiwi slices and a sprig of lavender.
Oh. OK. I can do that. This looks pretty nice.
Take a picture of this. Take a picture of this before you drink because I want to see it and maybe post it on Bluesky when the episode drops or something.
It looks kind of nice.
This actually kind of does look good. I would drink this on a beach. OK. All right. Should we try our Kiwi Kismet?
Yeah.
All right. Cheers.
OK, cheers. Ooh, I’m so nervous.
I would say that’s not — ooh, there comes the pepper.
Here comes the heat.
Whoo. A tablespoon was too much. This is now an episode of “Hot Ones.”
I’ve had worse cocktails. I’ll be honest.
I know. Again, I would not be shocked if restaurants put this on their menu for $22.
This is like the dregs of a spicy margarita when you get to the bottom and you’re like, ooh.
Are you getting any lavender?
I’m getting almost no lavender. It’s been overtaken by the hot sauce.
Yeah, it’s — wow.
The more I observe these experiments, the more it makes me think: if you give people a list of edible things and ask them to combine them in some way, it will probably be edible at the end of it. But I’m sure that Priya has much better cocktail recipes up her sleeve than that one.
Well, thank you, Priya, for helping us navigate the world of AI recipe preparation and for making a Kiwi Kismet with us.
If nothing else, at least, the fire department didn’t come today.
Kiwi Kismet, coming to you soon on NYT Cooking.
I don’t think so. And thank you to Amara Harris, the — I’m guessing — totally fictional, made-up creator of this cocktail.
Wow. Thank you, Amara.
[MUSIC PLAYING]
Before we go, a little housekeeping note. The show will be dark next week. There’s no show. We are going on vacation.
We want you to use this time to think about what “Hard Fork” means in your life and how much you’ll miss it when it’s gone.
We’ll be back in two weeks with a new show. And we hope you get some time off. Wear sunblock.
I hope you have a good summer break, Kevin, and wind down after vacationing in France.
“Hard Fork” is produced by Rachel Cohn and Davis Land. We’re edited by Jen Poyant. This episode was fact checked by Caitlin Love.
Today’s show was engineered by Alisa Moxley, original music by Dan Powell, Marion Lozano, and Rowan Niemisto. Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. You can email us, as always, at [email protected].
[MUSIC PLAYING]