Elon’s Crumbling Empire and Generative A.I. Goes to Court
This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email [email protected] with any questions.
So this week, in an effort to raise money for the rapidly-declining Twitter corporation, Elon Musk decided to auction off office furniture. Basically, everything that was not bolted down at headquarters, which he may not even be paying rent at, you could have bid on if you wanted a piece of Twitter history in your own home, which who doesn’t?
Who doesn’t? I know it’s an online auction, and it’s not actually happening this way. But I just love the idea of Elon standing on a stage and doing the auctioneer call like, going once. Going twice. I got a Herman Miller office chair. 25. Give me 25. Give me 30. Give me 35. Who wants a pizza oven in the shape of a bird?
[MUSIC PLAYING]
I’m Kevin Roose, tech columnist at “The New York Times.”
I’m Casey Newton, from “Platformer.”
This week, an update on the shambolic state of Elon Musk’s empire. We talk to one of the artists suing the makers of AI image generating software. And a cautionary tale about what happens when AI becomes a reporter.
[MUSIC PLAYING]
Casey, big week for you this week. You have a story in this week’s edition of “New York Magazine.” You, and Zoe Schiffer, and Alex Heath, from “The Verge,” wrote the cover story this week called, “Extremely Hardcore,” which is — I would say it’s fair to call it a compendium of some of the best reporting and some unreported details that you, and Zoe, and Alex have had about Twitter under the Elon Musk regime. So I want to get into it. But first —
Oh.
OK.
You printed the thing out.
I printed it out.
Love that.
This is —
We love to see a story in print. You almost never do these days.
This is how you know a millennial loves you. They’re willing to print out your story.
Right. Well, because the first thing is it means you actually have to find a printer, which no millennial has in their home.
Exactly. Well, I do, and I printed this out. And maybe I’m going to print out all your work, maybe every “Platformer” edition, and take copious notes on them.
Please do.
I did take some notes on this, and I wanted to just run them by you for your reaction.
And I hope the notes are like, this is boring. Get rid of this. Move this down. Who cares?
No, this is a reading experience that I want for every piece, which is that I get to take notes and then immediately grill the author about everything in the piece. So it’s a great story. It’s very long. And it traces the last three or four months of Twitter’s history, from the moment that Elon Musk showed up with a kitchen sink in his hands at Twitter’s headquarters through all of the disasters and drama that have unfolded, including this incredible scene at the Twitter Halloween party. Just tell us about that scene.
Yeah. Well, so one of the most cinematic elements of the entire story is that the day that Elon actually rolled in to take over the company was the day of the Twitter employee Halloween party. And this was no ordinary Halloween party. This wasn’t just a Jack-o-lantern full of Reese’s Pieces on the conference table. They had hired entertainers. Employees had brought their families. Children were walking around, getting candy. And this is also the day that Elon Musk shows up, and the deal is closing.
And so over the course of a few hours, you have all of the top executives at the company fired. You have the head legal counsel walked out the door. You have people crying in the bathrooms, as they realize that they’re about to lose half or more of their colleagues. And amidst all of that, one of the people we talked to, as she’s crying, looks up. And she sees Jack Sparrow walking by.
Captain Jack Sparrow, from “Pirates of the Caribbean”—
From “Pirates of the Caribbean.”
— exactly who you want to see as your company is being destroyed.
And it was just the perfect surreal note to kick off this story about one of the wildest moments in American business history.
Yeah. My first thought was that is definitely going to be the opening scene of the Netflix series about this takeover. I hope you get to play a cameo as Captain Jack Sparrow.
It’s the role I was born to play. It’s a big, swishy pirate. Come on.
So, OK. I have some thoughts on this. My first note — the first thing I wrote on my printed copy of this story is murder mystery in reverse because it feels to me like the story of Elon Musk’s acquisition and takeover of Twitter, it’s the reverse of a murder mystery. We know who did it. We’re just waiting for the body to turn up. We don’t know when they’re going to start falling. We don’t know when the site is going to go down. But all signs are that everything that’s coming out of Twitter these days — good, bad, and ugly — is the result of basically Elon’s whims, that there is no real brain trust. There is no real board. There’s no — it’s one guy. Twitter is a guy now.
Yeah, it’s sort of just like random neural firings, which are then immediately translated into policy and disseminated, damn the consequences.
Right. It’s sort of — I don’t want to belabor this point. But it does feel very much like the Trump administration in that way, where it’s like everyone is just reverse engineering these policies and rationales because he said something in a meeting that everyone then scrambles to turn into reality.
Oh, and I think the comparison is actually super apt because in both cases you have top staffers who are getting their policy directives from just watching a Twitter feed to see what is the next instruction.
Totally. My second note on this that I wanted to get your reaction to was just I wrote zero interest rate decision. I don’t know if you’ve seen the meme on social media recently about things being zero interest rate phenomena. So the one I saw the other day was like, “My girlfriend just broke up with me. She told me that our relationship was a zero interest rate relationship.” And it’s basically a jokey shorthand for, when interest rates were very low, near zero, as they have been for a lot of the past decade really, people make these impulsive, irrational decisions, right? People running companies, people acquiring companies, people just in various ways behave as though money is free because in some ways it essentially is.
If you are, say, Elon Musk at the beginning of 2022, you’re thinking like, I’m the richest man in the world. My Tesla stock is going up. Money is essentially free. And you just decide on a whim to buy Twitter, and it feels in some ways like the ultimate zero interest rate decision. What do you think?
I think that that’s true, but I think that Elon Musk has always lived his life that way, right? He has not been super careful about his career decisions. He does what he thinks is the most interesting, and he’s been willing to bet his entire fortune on things before. So I do think that this is just a personality characteristic for him. But if six more months had gone by before he made the offer to buy Twitter, and the company’s value had collapsed, and interest rates had gone up, and the challenges of the company became more clear, would he still have done it? It’s an interesting question to ask.
Yeah. So, OK. That’s my second note. My third note was, this could have worked because one thing that I learned from your, and Zoe, and Alex’s reporting is that there was actually a larger group of Twitter employees than I thought who were willing to give Elon Musk a chance.
For sure.
You interviewed this woman you call Alicia. That’s not her real name, but that’s what you named her in the piece — and then this other senior engineering manager. And both of them took the posture of, well, it seems like it’s going to be a chaotic takeover, but I’m going to stick it out because part of me is actually excited to work for Elon Musk. Twitter’s former culture under Jack Dorsey and then Parag Agrawal was, I think you described it as, benevolent anarchy. It was just this free-flowing, chaotic atmosphere, where people didn’t really know what they were supposed to be doing, and things weren’t really happening all that fast. But this group of employees that you talked to, they seemed like they would have been willing to work hard for him if he had been less erratic and made fewer bad decisions.
That’s absolutely right. And I think I would not have called it free-flowing anarchy. I think it was more of a stultifying paralysis — right — where for years people knew where Twitter needed to go. They proposed those ideas. But there was always someone along the chain that would say, nope, not right now. Try again. And so you had so many people who were frustrated and wanted someone like an Elon to come in, and just bring a sledgehammer to that bureaucracy, and say, we’re going to start doing things fast. And if you look at everyone in the venture capital world who was so excited about Elon coming in, and who in some cases bought equity in the company, this was the big reason they thought it was going to work: Elon just moves fast. Elon will snap his fingers, and things will get done. No more two-year design explorations. Everything is just going to get done within days or weeks. And so I agree with you. Had he brought in some humility and respect for the people he had just paid $44 billion for, this could have had a very different ending.
Right. And instead, we’re looking at what we have now, which is a Twitter that is rapidly losing money.
That’s auctioning off their pizza oven for $10,000 just to keep the lights on.
Yeah, and it really feels like not only could this take down Twitter, this acquisition. But it’s kind of looking like it could take down Elon Musk too. One of the things that your article had, which I loved, is an Elon Musk net worth tracker. Tell me about that.
That’s right. So we co-published the story with “The Verge” and “New York Magazine.” And on “The Verge” version of the story, as you scroll down — because our story unfolds chronologically — and as the chronology unfolds, as you scroll down the page, you can watch Elon Musk’s net worth shrink in real time. And, yes, those things are connected, right? Twitter is a private company. But Tesla, the main source of Elon’s wealth, is not. And as Tesla investors have watched Elon get more and more wrapped up in Twitter and Tesla has started to face some very real challenges of its own, investors are saying, he does not have his eye on the ball here. This company needs a full-time CEO. It doesn’t have one. And so the price of Tesla stock has just been cratering.
Yeah. In some ways, it reminds me of the inverse of this thing called Gell-Mann Amnesia. The idea is that you read something in the newspaper about a subject that you know really well. If I’m an astronomer and I’m reading astronomy coverage in the newspaper, I’m going to find things that are wrong with it. And the point is, you then forget to apply that same skepticism to areas that you know less about. You say, oh, their coverage of this conflict abroad must be accurate because it’s an esteemed news outlet, when the people who know lots and lots about that conflict are reading it the way that you, as an astronomer, read astronomy articles.
So anyway, it’s this whole phenomenon of once you see the faults in an authority figure or an institution, it’s hard to stop seeing them. So if I’m a Tesla investor, or a SpaceX fan, or some other fan of one of his ventures, I’m looking at how he’s fumbling this probably winnable battle at Twitter. And I’m just saying, maybe he’s doing the same thing at these other companies. How do I know that his rockets are being built with care? How do I know that he’s not doing this reckless, self-destructive thing at Tesla with all the self-driving stuff? So I think that feels like the big takeaway from this article is I came into it, thinking Twitter is probably going to go down. And I came out of it, thinking Elon Musk may go down.
Yeah. There’s two ways that people approach this question. There’s the way that people approach it on Twitter, which is to have been the first person ever to say that Elon Musk is not a good leader. And I find that very exhausting, although people will play that game forever. I think the more interesting view of this is, what if Elon Musk used to be really good and then just got worse over time? Because to me, that’s what feels truer, right? It’s undeniable that, for at least some period of time, he was a great steward of Tesla, growing the brand. But also, people love those cars, right? And he really did galvanize a lot of interest in electric vehicles. SpaceX has had a lot of success. But then the pandemic happened, and it doesn’t seem like he’s made a lot of great choices since then.
Yeah. OK, so that’s a subject that we’ve covered a lot on this show, the ongoing debacle at Twitter. But there’s actually been some news since you wrote and published this article, I would imagine.
Yeah, so many things are happening.
What’s going on at Twitter?
Oh, my gosh. Well, let’s see. Well, one thing that has gone on is that the Taliban is now buying verification.
Excuse me?
So —
Wait. Wait. No, no, no.
So you may remember that last year Elon relaunched the Twitter Blue subscription so that anyone could get a verification check if they wanted one. This was promptly misused. They rolled that program back, then they have since relaunched it. You can now buy it for $84 a year. And who has bought it? Well, two leaders from the Taliban, which does now control Afghanistan, were among the folks who bought these badges. So people will notice that. And, of course, when you have those badges, your replies show up higher in tweets. It’s a way of essentially buying more prominence on the platform. Once folks pointed this out, both of those folks lost their verification badges. But still notable that among the most prominent customers for the new Twitter Blue are these authoritarian leaders. So, yeah, that one caught my eye.
But perhaps even more consequentially, they wiped out the Twitter third-party developer ecosystem, which sounds like a very wonky story. But it’s really a story about the whole history of Twitter.
Wait. When you say third-party developer ecosystem, can you translate that to English?
Sure. So most people use Twitter through the Twitter app, but that is not the only way to use Twitter. There are other apps with names like Tweetbot and Twitterrific. I actually use Tweetbot for more of my time on Twitter than any other app. And the thing about those apps and many of the ones that preceded them was that they arguably played a bigger role in inventing Twitter as we know it than Twitter itself. It was Twitterrific that invented the word tweet, for example. There was an app called Tweetie that invented the idea of pulling down on the timeline to refresh. And Twitter saw it and liked it so much that they bought that app, and that became the first Twitter app, right?
So Twitter has always relied on these third-party developers to come up with ideas for what Twitter could be, and they helped Twitter grow enormously in the old days. Now, for a really long time, Twitter has had mixed feelings about this because they don’t make money really from these third-party apps. They don’t show ads on them, for example. And so they may make some money from fees for using the Twitter API, but it’s not a huge amount of money. And it’s actually quite understandable why somebody like Elon would want to come in and say, OK, that’s over. We’re just going all in on the Twitter app.
Right. Other social networks don’t do this. You can’t — there’s no TikTok app that can access TikTok videos that isn’t run by TikTok, the company.
Right. But this is interesting to me for two reasons. One is when Jack Dorsey recruited Elon Musk to buy the company, he was beating this drum of, Twitter should not be a company. It should become a decentralized protocol. Anyone should be able to build a client for it. Anybody should be able to bring in their own algorithms. The future of Twitter is decentralized. So a world where there are third-party developers was really key to that vision. And Jack, I think, believed that Elon was going to support this. Well, now Elon has said, no. Absolutely, we’re not going to do it.
The second thing is just the way they did it. It would have been one thing if Elon just said on Twitter, hey, we have to shut down this program. It doesn’t make us any money. For the company to survive, we need there to be one app, and we need to show you ads in it. And we need to let you be able to buy subscriptions in it. And I’m sorry, but that’s the way it goes. I would have had a lot of respect for that decision. I think it’s actually a very sound business decision. What happened instead was that these apps just stopped working. You would just open one up. And it would say, error, can’t log you in or whatever. And, of course, many people were asking questions. The developers were putting up blog posts, saying, hey, we have no idea what’s happening with our apps. And then finally the Twitter dev account a couple of days ago just tweeted, “Twitter is enforcing its longstanding API rules that may result in some apps not working.” So people have a lot of questions about this such as, which rules, and which apps? And no answers are forthcoming, right?
So I read this. And I just thought, oh, man. Twitter is just fully in its gaslight era now. They’re not being remotely respectful of these developers who, again, helped to invent the Twitter ecosystem as we know it. And they’re falsely accusing them of violating these unnamed rules. So it’s just a really silly but also Orwellian end to the story of the Twitter third-party developers.
Right. So that’s a pretty minor chapter, I would say, in the decline story of Twitter. But there’s also this bigger question of, product changes aside, staff morale aside, this company is not in good shape financially. So what have we learned about that in the past week?
Yeah. So with my partner, Zoe Schiffer, at “Platformer,” we reported that on Tuesday Twitter employees were told that revenue is down 40 percent year over year. So a company that was making about $5 billion in 2021 may be on the road to making something closer to $3 billion. And this is a company that is loaded up with debt and has to make a massive payment on the interest on that debt. The first of those payments could be due at the end of this month. Elon has been sounding a warning note about bankruptcy for a couple of months now. And when I talk with a lot of the former employees, who are counting on Twitter remaining financially solvent to pay their severance and potentially settlements from the lawsuits that are now being filed, they’re getting really nervous that bankruptcy is going to be the next move here.
And some of that fall off of revenue is probably related to the economy, right? Every social media company is making less money in 2023 than it was in 2021 and 2022. The advertising market has declined and things like that. So of that 40 percent, do you think most of that is due to factors outside Elon Musk’s control? Or do you think it’s mostly the things that he’s done since taking over that have contributed to that?
I think it was mostly Elon’s actions. You remember when there was the debacle over the Twitter Blue subscription. You had hundreds of top advertisers leaving the platform. I just pulled up the Meta quarterly earnings for their most recent quarter, and their revenue was down 4 percent. Of course, the comparisons aren’t perfect. Facebook has a much better advertising engine than Twitter ever did.
But when I talk to people who used to work on the revenue side at Twitter, over the past couple of months, they would say, this actually should have been an amazing quarter for Twitter because not only was it the holiday quarter when most advertising businesses make the bulk of their money anyway. But you also had the World Cup, which is this event that only comes around every few years, and that is the time that more eyeballs are on Twitter than any other time. So they actually had a lot of things going for them recently, and they still wound up managing to lose all that money.
And how is the rest of Elon Musk’s business empire going? There was a story this week about Tesla, which has cut prices on a lot of its electric cars in the US and Europe, some by as much as 20 percent. And some of that is probably due to more competition from other automakers. But it seems like Tesla is also struggling. Do you have any thoughts on that?
Yeah. In September, the stock price was trading around $308. This week, it’s been around $129.
Wow.
So the stock price is one thing, but there was also this story this week about how this very hyped video that Tesla released in 2016 of its driver assist system, which it calls Autopilot, that purported to show a car driving itself. Well, the director of Autopilot software said in a deposition that it was faked, that it was set up as a way to, quote, “Portray what was possible to build the system but not what customers should expect the system to do.”
Wait. That’s incredible. I remember that video coming out, and it was — I don’t want to inflate its importance too much, but it was what got a lot of people hyped on self-driving cars and feeling not only that they were coming but that they were coming soon and that Tesla was going to be the company that got there first.
Yes. So when this video comes out, Elon tweets a link saying, “Tesla drives itself.” The video shows a Tesla appearing to drive and park itself. It avoids obstacles. It obeys red and green lights. There is a title card at the beginning that says, “The person in the driver seat is only there for legal reasons,” and that, quote, “He is not doing anything. The car is driving by itself.” But according to this director, that demo was following a predetermined route. It had a variety of other pre-mapped information that was written into the code that the car was following. And I guess at the time this guy was an engineer on the team that helped with that video. So in other words, no, it was not dynamically planning its route. Engineers had to do the work.
So some of this had already been known. A great newspaper called “The New York Times” had previously reported it, but this was the first time it actually came on the record from a former Tesla official. But you add all of that up. And, man, if there is one thing that you do not want people to be playing fast and loose with the truth on, it’s like, can this car drive itself or not? And, of course, among Tesla observers, it has long been an issue that Tesla seems to be overstating the capabilities of its driver assistance features, right? People get mad when journalists like us even use the term Autopilot, which is their brand name but suggests more capabilities than this thing actually has.
But, man, if you’re listening to this and you’re just like, OK, but what does this have to do with the whole Twitter story, it’s like, I do feel like this aura of can do no wrong, which Elon did have, with a large audience of fans at least, I really do think this has been pierced. And it’s not the cliche story of, oh, journalists — a person gets successful, and they decided to tear him down. No, it’s like every time you turn a rock over, you find out that something has been overstated, or it’s flat out false, or there was some act of cruelty or callousness. And so I do think that the day that Elon Musk decided that he wanted to buy Twitter, a finger really did curl on a monkey’s paw. And a lot of the other parts of his empire just started to look very fragile and brittle in ways that they hadn’t before.
Right. And all of that contributed to Elon Musk this week breaking the Guinness World Record for the largest loss of personal fortune in history. He has lost approximately $182 billion since November 2021, although some sources suggest that it could actually be closer to $200 billion.
Well, now, here’s where I want to say something optimistic. If you’ve lost $200 billion, don’t cry because it’s over. Smile that it happened. You know what I mean? You had $200 billion. That is such a huge achievement.
Yeah.
You know?
And yet, you can’t even afford to keep the pizza oven. That’s —
I know. He’s going to have to be, yeah, making mini pizzas in a toaster oven like I used to do after high school.
[MUSIC PLAYING]
After the break, artist Sarah Andersen tells us why she and other artists are suing the companies behind Stable Diffusion, Midjourney, and DreamUp.
[MUSIC PLAYING]
Sarah Andersen, welcome to “Hard Fork.”
Hi, thanks so much for having me.
Sarah, we wanted to have you on because you’re part of a group of artists who have filed a class action lawsuit in federal court here in San Francisco against three companies that have made AI image generating tools — Stability AI, Midjourney, and DeviantArt. I believe this is one of the first major lawsuits we’ve seen against these companies, and I think there are going to be more. And I definitely want to get into what you and the other artists are claiming. But first, let’s just talk about who you are and what kind of work you do.
Sure. So I’m a cartoonist and illustrator. I write a series called “Sarah’s Scribbles,” and that’s where probably a lot of people know me from. I also have another series called “Fangs,” and I — also, I’ve illustrated other graphic novels as well. And, yeah, I love my job. It’s very cool.
And as a web comic artist, how do you make your money? Is this a subscription? Do you put ads on them? How does that work?
So I make most of my money through publishing. So once every two years or so, I’ll put out a collection, and I’ll put new comics in the collection and then collect the best of what was online. And that’s pretty much my bread and butter.
Do you remember when you first heard about these AI art generators and how you felt?
Yeah. So I first really heard about them back in October when someone sent me an image where they had used my name as a prompt. And I immediately had a knee-jerk reaction of dislike towards that because I don’t mind fan art or people trying to draw in my style. But there was something about the fact that you could just use my name and spit out an image immediately that really removed the humanity from the art process. And I saw all of these dark potentials bubbling up immediately. And so I had a very visceral reaction to it right off the bat.
And what was that image that person sent to you?
The person took it down now, but I believe it was just a person holding an umbrella, a very simple illustration.
And they had just typed, “person holding an umbrella in the style of Sarah Andersen”?
Something like that. I’m not sure what the exact prompt was, but they had used my name in the prompt.
And other than this initial example that this person sent to you of a person holding an umbrella in your style, have you seen other people using your name in these AI art generators to make work that looks like work that you’ve done? Have they been selling it or trying to pass it off as a Sarah Andersen original? What other examples of this have you seen?
Well, I didn’t look for that. What I did look at was how much of my work had been fed into the data sets that create, or partially create, these AI generators. So these text-to-image generators were trained off of a data set called LAION, which is basically billions of images that help to train the generators. And where artists take issue with it is that our images were put into these data sets and then used to create the generators without our consent. And you can search them. There are multiple ways of searching, but the one that seems to have really risen to the top is a site called haveibeentrained.com. And there’s so much of my work in there that when you type in my name, it fills up the entire screen.
You said that seeing this conjured up some visions of some dark futures in your mind. What were some of the things you started to think about?
Well, my work has been appropriated before. I wrote about it for “The New York Times.” People used to use my images to create these very extreme right-wing propaganda images. They were altered to reflect neo-Nazi ideology through the text being changed, and there was a typeface made of my handwriting. And some of them also had alterations in the drawings. So I saw AI as a tool that could make that faster and streamline it and make it even more difficult to escape. So it got dark very fast for me because it’s happened to me before, and the fact that it can now happen in a streamlined manner was really concerning to me. But it’s also just violating as an artist.
So let’s turn to this lawsuit because it strikes me as a big deal in the sense that whatever way it resolves could set a lot of precedent and could get a lot of people thinking about issues like, what is the role of copyright in AI generated art? So tell me about this lawsuit. Tell me how it came about. And then tell me what you all are alleging about the makers of these platforms.
Sure. So we are basically alleging multiple copyright violations. And a lot of this, how the images have been scraped, how the technology works, and its relation to copyright, has not really been tested yet. And so this is a lawsuit that will test that, and I really hope that we are successful.
And what are you looking for? Are you looking for your work to be excluded from these data sets? Are you looking for payment? What are the damages that you’re hoping to win?
So I’m not so much concerned about damages. I like to refer to what we are in the art community calling the three Cs. We are looking for credit, consent, and compensation from AI art generators.
So when someone types into Midjourney or Stable Diffusion, “man holding an umbrella in the style of Sarah Andersen,” you would want A, for you to have opted in to that to have your work included in that data set and then to have credit on whatever image results from that prompt, as well as maybe some compensation — some money that would be paid to you as the person whose name was used in the prompt. Do I have that right?
Yes, exactly. And I think for me, the big one is consent. People have talked about that maybe in the future artists will be able to opt out. For me, that’s not good enough. I really think if this is your life’s work, as an artist, you should be able to opt in. It should be up to you about whether or not you are part of these generators.
And what do you make of the argument that we’ve heard from some folks who have talked about these kinds of copyright claims that the actual offender is not the platform but rather the person who might, say, use this — one of these generators to make a piece of work that looks like yours and maybe sell it as if it were one of yours? We had Emad Mostaque, the CEO of Stability AI, on the podcast a few months ago. And he made the point that if someone goes and makes a rip-off Mickey Mouse image in Photoshop, that person gets in trouble. Photoshop doesn’t get in trouble, right? That is just the tool that is being used to create the copyright-violating image. So what do you make of that argument?
Well, first, I would go back to the data sets and still argue that these images were scraped in a violating and unethical way. And I think it’s bizarre that you would create this tool and then shrug responsibility for all the harmful things it can be used for. I just — I don’t agree with that argument at all. I think if you’re going to create something like this, then you need to be considering the ethics.
There’s this legal idea of a right to publicity, right, which is this intellectual property right that protects you from having your name or likeness misappropriated. So no one can use my face to sell paper towels without my permission, that sort of thing. What you seem to be saying, with this lawsuit, is your drawing style should be considered eligible for copyright protection such that no AI can imitate it without granting you some legal privileges.
We’re definitely not trying to copyright style. We are arguing the idea that our portfolios were scraped without our consent, and it’s also our names. It’s not so much how you draw or the style. It’s the fact that it is the actual files being taken and put into the generator. When our work is used, traditionally, we go to licensing. So for me, it goes back to the fact that it is our physical works in digital format that are being fed into these generators.
I’m curious. Have you seen — there’s a website called stablediffusionfrivolous.com. Have you seen this website?
I haven’t looked at it, but I’ve heard about it. I don’t know exactly what it alleges.
I just found this website. I was looking up your lawsuit. I was actually looking for a copy of the complaint itself, and I found this website that I guess was put up in response to this lawsuit by a group of people calling themselves tech enthusiasts. And it basically does a point-for-point response to the lawsuit, and a lot of it’s really in the weeds, talking about how these AI diffusion models actually work and what they’re actually storing. Is it a copy of the image? Is it a hash — a mathematical hash related to the image?
But the main thrust of it is that artists’ creative control is not unlimited, right? This is not a new principle. Picasso is often credited with saying, “Good artists copy. Great artists steal.” Artists borrow from each other all the time, whether it’s stylistic elements or techniques. This has been a controversy that dates back to before the invention of the computer, even, and this is basically just a group of artists who are scared about competing in the market with these AI tools that they don’t feel equipped to compete against, and that this is basically just —
I want to dig up a quote here and have you respond to it. It says, “Artists are faced with change in their industry brought on by advancements in technology. And while many embrace it, others fear and resist it. And one can have sympathy for those people, but sympathy does not give them the right to throw fair use in the garbage. I can have sympathy for an artist whose dying wish was that nobody parody their works, but that doesn’t give them the actual right to impose such restrictions.” So what do you make of that criticism that this is essentially just a group of artists who are scared about having to compete?
It’s just unfair to compete when you’re taking our skills and then streamlining them. It’s like I’m almost competing with a shadow version of myself. And it’s not so much about even skill at that point. I just feel like the idea of competition is nonsensical when they’re making us compete against our own work.
Yeah. And just to advocate for Sarah’s position here, I think we would all agree that if you took all of these AI generators that we’ve been having so much fun with and you removed from the training sets all of the artists whose work contributed, if you only used — I don’t know — pictures that were publicly posted to maybe Flickr and Google Maps or something, these things would be terrible, right? So we know that artists have contributed a massive amount of value to these image generators, and we know that they haven’t been compensated for it.
So there is this legal question of, is this OK or not? But I think on just a moral, ethical dimension, there’s a very good argument that, no, the artists have been mistreated here, right? You’ve taken their labor, and you’ve pulped it. And now other people are making millions of dollars, and the artists are getting nothing.
Yeah. And I think from a personal level, I don’t understand why we could not be using AI in something that frees people to make art and to be creative as opposed to taking something that is very fundamentally human and automating and streamlining it. Art is something that we’re supposed to be present with in the process, and it’s a very human thing. And I can’t understand why it would be automated in this way.
So, Sarah, I write this newsletter three times a week, and often I will illustrate that newsletter with DALL-E, which at this point I haven’t even given them any money. I just get some number of credits. Do you think it’s unethical for newsletter writers, like me or other independent writers, to be using tools like this to illustrate our works?
I’m sorry if I offend you. But, yes, I do think it’s unethical because I think you should be hiring an artist to do that.
All right.
Casey, you’ve been owned.
Well, I invited the question because it’s something I think about.
I wonder if your position, Casey, would be that the option was not between hiring an artist and using DALL-E for free. It was between using DALL-E and not having an image on your newsletter or using a public domain image or something like that. So I can’t imagine that your actual thought process was, I’m going to use DALL-E instead of paying an artist.
That is certainly how I have rationalized it. Often, I have about five minutes before my self-imposed deadline to send out my newsletter, and that’s not enough time for an artist to take a fair crack at something. But at the same time, I want to be open to the argument that, actually, this is relying on a bunch of exploited labor, and I should be more conscious of that as I make decisions around that stuff.
I will say, Sarah, that I feel like DALL-E has made me a more creative person because I’m somebody who can’t draw worth a lick. And the idea that I can type a few words into a box and get back some pretty cool-looking stuff, it makes me feel like a wizard. And as somebody who tried to learn how to draw as a kid and just never got very good at it, there’s something thrilling about it. But there is a flip side of it, which is how these data sets were trained. And so I really think that yours is an important story to tell here.
Yeah. I want to make clear, a lot of artists, including myself, are actually not against the technology. I am not against the generators. I am against the way they were created, and I can see ways that they can be ethical. A lot of artists do. Like I said, it goes back to credit, consent, and compensation. So if we could rebuild these, I see that as valid. I can see that helping people’s creativity.
Although what I will say, again coming back to a more personal lens, is that there’s something about art being automated that doesn’t quite sit right with me, as someone who has made art my entire life. And I think there’s something about learning, and being present in the process, and bringing your own style, and your own line weight, and your own thoughts to it that is really important for people not to just give up on.
The process of making art is a really beautiful thing, and I think automation would kill a piece of creativity that is so vital and beautiful to humanity. So, again, I am not against AI generators as a whole. I am against them being done unethically.
Yeah. Also, all other things being equal, if there were two websites, and one of them was an AI image generator where all the artists had given their consent for their work to be in there, and one was one where they hadn’t, it’s pretty clear to me which one most people would choose to use, right?
Yeah.
If they were both equally good, it’s like, yeah, you’d want to use the one that artists felt good about.
Well — but if one of them cost $1 per image and one of them didn’t —
Yeah.
— that becomes harder. So I’m curious, Sarah, how you think about the compensation piece of the three Cs. As we know, a lot of artists that are compensated by platforms for their work don’t feel like they get enough. Spotify is a notorious example of this, where they pay out billions of dollars a year in royalties to musicians whose songs are streamed on their platform. But if you ask the musicians, it’s like, they’re not happy. They get a tiny fraction of a penny every time someone streams their song, and they’re making a lot less than they were in the days of CDs. So what do you think would be fair? If someone types in, “Sarah Andersen-style drawing of a man with an umbrella” into one of these image generators, how much would you feel OK receiving as a check or a payment for that?
Oh, my gosh. I haven’t thought about what the dollar amount would be because I personally would not choose to opt in. I think it will probably wind up being an amount that will not be amazing for artists, but there will be some artists that would agree to this. But I really think there should be compensation. In terms of a dollar amount, though, I would really have to sit down with a pen and paper and determine what selling a piece of my soul would be worth.
Right. One thing I’m thinking about, as we’re talking, is that we are essentially in the same position as you, right? Casey and I are both artisanal human content creators.
Word artists, if you will.
Right. Our word art is published and is probably also being fed into these text generators — these AI text generators. And no one ever asked us. And so, hypothetically, someone could go into one of these text generators and say, write me a newsletter about tech in the style of Casey Newton or Kevin Roose. And it could do that. And somehow, that doesn’t seem like an existential threat to me. I don’t — maybe I’m being naive here, but I feel like the value that I have is just continuing to come up with new ideas and not just remixing old ideas.
So I’ve heard some artists defending these platforms on the grounds that, well, they’re just allowing people to remix and sample our old work. But our value, as artists, is actually continuing to push the envelope, continuing to reinvent ourselves, continuing to come up with new styles, and new formats, and new ideas. Does that argument hold any water with you?
I would say then it becomes a thing where you are competing with your past self in this very bizarre way, and that’s something that I do not want to do. But I have been very curious about the difference between something like a writing AI and how writers might feel.
Did you see that Nick Cave recently put something out where he felt that it was an artistic violation? I thought that was very interesting because there are so many different perspectives. Like you said, you seem to not be so bothered by it. But then there are people, like Nick Cave, who do see this as a violation. And I think when you consider what you do to be art, it can be very disconcerting to see your work twisted or created in a way that you didn’t consent to.
Yeah. Also, go and tell the Beatles that their real value is in coming up with new ideas, right? See how far that gets you with their back catalog. If you haven’t read this Nick Cave thing yet, Nick Cave is this outlaw singer-songwriter. And he wrote a post, I believe on his Substack, where he had the AI, probably ChatGPT, write a song — a Nick Cave song in his style. And he hated it. Yeah, it gave him the willies in the same way that it’s given you, Sarah.
And, yeah, I just think that writing is not quite the same as fine art or music, where you can say, eh, don’t worry. I’ll have a new idea tomorrow. Or maybe you will, but that doesn’t mean that what you created in the past doesn’t have value. In fact, what you created in the past might be some of the most valuable stuff that you’ve ever created.
Yeah. Just because it’s in the past doesn’t mean that I feel OK about that work being used or changed.
Right. Should this system that you’re talking about, this credit, and consent, and compensation, should this only apply to living artists? Or should I — if I go into Stable Diffusion and I say, make me an image of a computer in the style of Vincent van Gogh, should I be required to pay a fraction of a cent to the van Gogh estate?
I have thought a lot about this. I think legally it makes more sense to use artists whose copyrights have basically expired due to time. But on a personal level, I feel in a very complex way about it because those artists could never have anticipated this. And therefore, they could never have consented to it. I think my position at this point is probably a little bit extreme, where I really view art as this personal, sacred thing. And I think even if that person is deceased, it’s not OK to take their work and then use it without their consent. So I think legally probably it would be more OK. But I also think just because someone has passed, that that doesn’t mean we shouldn’t consider their wishes.
Well, I will just — I want to put it on the record that after I pass away, I would like an AI to continue advocating for all the ideas that I’ve advocated for in my columns and create massive online troll armies, if necessary, until all of my goals on this earth are accomplished.
Perfect.
I’m just glad we’ve established that Casey, through his newsletter, is ruining something beautiful and human about the world today. That was really why we invited you on, Sarah.
That’s more than you’ve ever done for this world, Roose.
Sarah, thanks so much for coming. Really great chatting.
Thank you. Thank you.
[MUSIC PLAYING]
Just a quick note. After we talked to Sarah, we asked the companies that she and the other artists are suing for comment. Midjourney and DeviantArt didn’t get back to us before deadline. But a spokesperson for Stability AI said, “The company takes the allegations seriously but believes them to be a misunderstanding of generative AI and copyright law.”
Midjourney’s CEO, David Holz, did give an interview to “Forbes” last year before this lawsuit. He said the company had been looking at providing an opt-out option for artists but said the company had not sought consent because, quote, “There isn’t really a way to get a hundred million images and know where they’re coming from.”
We’ll be right back.
[MUSIC PLAYING]
So, Kevin, lately we’ve been talking a lot about AI and all the different ways it can be used, all of the different industries that it might revolutionize. And this week, we saw AI try to do journalism, and it did not go well.
So what happened?
Well — so I used to work for a publication called CNET in 2012, and there are a lot of talented human writers over there.
What did you do? What did you cover there?
Well, I covered Google for only about eight months. But I’ve known people over the years, very talented, sweet people. I like the CNET people. And so I was very interested to see, though, recently that all those articles on the CNET website were no longer exclusively being written by human beings and that, in fact, they had contracted with some sort of mystery AI to write some of their stories.
Wow. I heard about this because, as often is the way that I hear about these things, it went wrong in pretty hilarious fashion. So let’s talk about what went wrong, and then we’ll get into what it means.
Sure. So basically, there was another publication called Futurism, which essentially caught them. I believe somebody else had tweeted about this first. But Futurism investigated, and they found that CNET had published articles under a byline called CNET Money. And those articles were the explainer content that exists really just to fulfill Google searches, right? So maybe you want to get a mortgage, and you search, what are mortgage rates near me? And CNET has a lot of incentive to create a page that explains something about mortgages to you so that you might click a link on that page and actually go get a mortgage, and they’ll get a kickback.
Right. This is SEO bait.
Exactly. Well, then it turns out, though, that CNET had had an AI write at least 75 of these stories.
Did they label them? Was it — did the stories say, this is written by AI?
So at first, these articles had the byline CNET Money staff. And after it was written about by Futurism, that got shortened to just CNET Money, I guess because, upon reflection, it didn’t seem like an AI could really be called part of the staff. And then they added a tagline that said, “This article was assisted by an AI engine and reviewed, fact checked, and edited by our editorial staff.” So it’s like, OK, yeah. We might be using AI, but we put human editors on this. And they’re going to do their jobs. And, in fact, they added the name of a human editor to the page so that you would be confident, like, OK. A person really took a look at this.
There have been other examples of the media using automation technology before — the “Associated Press,” for example, has been using it to do some kinds of stock reporting: this company’s stock price went up or down, that sort of thing. But this really felt like the first time that we started to see AI creeping into mainstream digital publishing in a way that was actually quite easy to predict, which is, there’s money to be made on this search engine optimization stuff. We could pay a human to do it. But, man, if we could automate it, and just snap our fingers, and then just collect all that sweet, sweet traffic and ad revenue, why wouldn’t we do it?
Well — and humans don’t historically like writing those kinds of SEO bait articles. I’ve written a few in — early in my career. I’m sure you did too. They’re not fun.
No.
But they make money and get traffic, so publishers want to do them.
Right. And, of course, the hope is that what you write in that article is correct. But this stuff was not correct. [LAUGHS]
Wow.
Futurism did a follow-up piece where they were able to catch CNET’s AI making all sorts of errors, explaining, for example, compound interest in an incorrect way. And one reason why I thought this was so funny is that, as we’ve talked about on the show, all these AI tools do is predict the next word in a sentence. And when you’re trained to predict the next word in a sentence, you don’t develop any capability with math whatsoever. Math is a totally different discipline than this word guessing thing. So if you’re running a publication and you want to use the AI to do one thing, the thing that would scare me the most would be asking the AI to do math because we just know at this point that AIs are not good at math.
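For reference, the arithmetic at issue is a one-line formula. Here is a minimal sketch in Python (the function name and the $10,000-at-3% figures are illustrative, not taken from the CNET article):

```python
def compound_interest(principal: float, rate: float, periods_per_year: int, years: float) -> float:
    """Final balance when interest compounds n times a year: A = P * (1 + r/n) ** (n * t)."""
    return principal * (1 + rate / periods_per_year) ** (periods_per_year * years)

# $10,000 at a 3% annual rate, compounded monthly for one year.
balance = compound_interest(10_000, 0.03, 12, 1)
print(round(balance, 2))           # 10304.16 -- the ending balance
print(round(balance - 10_000, 2))  # 304.16   -- the interest earned, which is not the same thing
```

The distinction in the last two lines, between the balance and the interest earned, is the kind of detail an explainer has to get right.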
Right. Well, this kind of AI is not good at math. What’s so interesting about this, to me, is that, yeah, as you described, these large language models are notoriously bad at math. That’s the place where they are the weakest. But if you don’t understand that, I can see why, as an editor or an executive at CNET, you might say, oh, we’ll use the robots to do the number stories —
Right.
— because robots are good at numbers.
Right.
And meanwhile, everyone who knows this technology is like, that’s the last thing you would use it for, right? I actually think large language models could be used to do the first draft of lots of different kinds of journalism. But I think it’s actually a bigger threat to things like movie reviews, or tourist guides, or things like that, like lifestyle content rather than the number stories, which is actually the place where these specific types of AIs are the weakest.
And we should say, as you said before, we don’t know which AI CNET tried to use for these stories. But it does seem, from the specific mistakes that it made, that it must have been one of these large language model, GPT-style AI engines. And I will say, I’m inclined to cut the AI a little bit of a break here because human reporters aren’t perfect either. And this is the kind of thing that some reporters, especially inexperienced reporters, might get wrong. But if you are a person who’s Googling things about compound interest and you’re relying on this CNET article to explain compound interest to you, you would have come away with a wrong understanding.
Well — and that’s what makes this a funny story: the whole point of these articles was to explain those concepts, and then they explained them all completely wrong. That’s why I’ve been laughing about this story so much this week. The whole idea of the explainer genre is that it takes you, a person who does not know, and it says, we’re going to break this down. We’re going to make this very easy for you. And then you would come away from this article, and you would know absolutely nothing. Or you would be completely misled.
So I think this is a really good time to set some industry norms. I think one good norm would be that if an AI wrote the article, that’s actually what the byline should say. The byline should not say CNET Money. It should say, by AI, and then it should actually name the AI, right? I think part of building trust is that some of these AIs are going to be better than others. And if there’s one that is better suited to doing some kinds of automated explainer journalism, I want to know it was that one and not the cheaper version that the cut-rate publisher decided to use instead, right? So tell us what these tools were.
And then there should probably be automated error reporting, where it’s like, hey, I found a flaw in your math. Let me report that on the site, so I can give you some more direct feedback. So those are just a few things that I would like to see to prevent this stuff from taking over the internet in an opaque way.
Yeah. I would also just add to that that I think if you’re a publication that’s considering unleashing ChatGPT on your website, I think you just need to really spend some time getting to know the tool. This goes back to something that Sherri Shields, our high school English teacher friend, told us on the episode last week, which is that you need to know your way around these large language models before you can actually use them in a way that is accurate and helpful. So you have to know something about the topic of compound interest if you want to write an article about compound interest using an AI model because the potential for error is right now, at least, so high.
And so it actually works much better as an assistant, a reporting assistant, something that could help you come up with an outline or a sketch for a draft. But you have to be at least somewhat knowledgeable about the underlying information already, or else you’ll end up getting a bad grade on your English paper or getting made fun of on a tech podcast.
Yep. Something else I’m thinking about in context of the conversation that we had with Sarah, the artist, is to the extent that the CNET AI can explain anything, it’s because it has taken 1,000 other explainers that it already found on the web, and then just digested those into a slurry, and then just spat them out again. And so I’m not ready to call what CNET is doing plagiarism, but it’s definitely just remixing the entire web and just regurgitating it without attribution to anyone, right? And it wouldn’t work if all those other articles hadn’t already been out there on the web. So —
Yeah, it’s like pink slime journalism, like pink slime that was used to make Chicken McNuggets or something.
Yeah.
It’s just churning — blending all of the undifferentiated mass of garbage SEO bait articles into new garbage SEO bait articles. And I don’t want to be too hard on CNET. I think someone was always going to go first and fall on their face in this thing. But I do think that it highlights the danger of ascribing too much capability to these tools before they’re ready. Even OpenAI has said ChatGPT should not be used for anything that’s truly important. And you could argue about whether an article that explains compound interest on cnet.com is truly important. But somebody may have looked at that article and said, you know what? I think I understand compound interest now. I’m going to go invest in a certificate of deposit. And then they got the details wrong, or they went to the bank and said, why are my APR and my APY mixed up?
And if you are that person, call us. We want to talk to you on the show. Have you been led into bankruptcy by the CNET explainer on compound interest? We want to help.
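On the APR/APY point: APR is the nominal annual rate, while APY folds the compounding schedule into it, so the two only match when interest compounds once a year. A quick sketch of the relationship (function name illustrative):

```python
def apy_from_apr(apr: float, periods_per_year: int) -> float:
    """Annual percentage yield implied by a nominal APR compounded n times a year."""
    return (1 + apr / periods_per_year) ** periods_per_year - 1

# A 5% APR compounded monthly works out to roughly a 5.116% APY.
print(round(apy_from_apr(0.05, 12) * 100, 3))  # 5.116
```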
[MUSIC PLAYING]
“Hard Fork” is produced by Davis Land. We were edited this week by Shreeya Sinha and Paula Szuchman. This episode was fact checked by Caitlin Love. Today’s show was engineered by Alyssa Moxley. Original music by Dan Powell, Elisheba Ittoop, and Marion Lozano. Special thanks to Hanna Ingber, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. As always, you can email us at [email protected]. That’s all this week. See you next time.
So what is compound interest?
Compound interest — you don’t know about compound interest? It’s the most — it’s one of the biggest forces in human society.
I’m excited to hear more.
Let’s do a whole episode about compound interest. [MUSIC PLAYING]