WASHINGTON — In a case with the potential to alter the very structure of the internet, the Supreme Court did not appear ready on Tuesday to limit a law that protects social media platforms from lawsuits over their users’ posts.
In the course of a sprawling argument lasting almost three hours, the justices seemed to view the positions taken by the two sides as too extreme, giving them a choice between exposing search engines and ordinary Twitter users to liability on the one hand and protecting algorithms that promote pro-ISIS content on the other.
At the same time, they expressed doubts about their own competence to find a middle ground.
“You know, these are not like the nine greatest experts on the internet,” Justice Elena Kagan said of the Supreme Court, to laughter.
Others had practical concerns. Justice Brett M. Kavanaugh, echoing comments made in briefs, worried that a decision imposing limits on the shield “would really crash the digital economy with all sorts of effects on workers and consumers, retirement plans and what have you.”
Drawing lines in this area, he said, was a job for Congress. “We are not equipped to account for that,” he said.
The federal law at issue in the case, Section 230 of the Communications Decency Act, shields online platforms from lawsuits over what their users post and the platforms’ decisions to take content down. Limiting the sweep of the law could expose the platforms to lawsuits claiming they had steered people to posts and videos that promote extremism, advocate violence, harm reputations and cause emotional distress.
The case comes as developments in cutting-edge artificial intelligence products raise profound new questions about whether old laws — Section 230 was enacted in 1996 — can keep up with rapidly changing technology.
“This was a pre-algorithm statute,” Justice Kagan said, adding that it provided scant guidance “in a post-algorithm world.” Justice Neil M. Gorsuch, meanwhile, marveled at advances in A.I. “Artificial intelligence generates poetry,” he said. “It generates polemics.”
The case was brought by the family of Nohemi Gonzalez, a 23-year-old college student who was killed in a restaurant in Paris during the terrorist attacks in November 2015, which also targeted the Bataclan concert hall. Eric Schnapper, a lawyer for the family, argued that YouTube, a subsidiary of Google, bore responsibility because it had used algorithms to push Islamic State videos to interested viewers, using information that the company had collected about them.
“We’re focusing on the recommendation function,” Mr. Schnapper said.
But Justice Clarence Thomas said that recommendations were vital to making internet platforms useful. “If you’re interested in cooking,” he said, “you don’t want thumbnails on light jazz.” He later added, “I see these as suggestions and not really recommendations because they don’t really comment on them.”
Mr. Schnapper said YouTube should be liable for its algorithm, which he said systematically recommended videos inciting violence and supporting terrorism. The algorithm, he said, was YouTube’s speech and distinct from what users had posted.
Justice Kagan pressed Mr. Schnapper on the limits of his argument. Did he also take issue with the algorithms Facebook and Twitter use to generate people’s feeds? Or with search engines?
Mr. Schnapper said all of those could lose protection under some circumstances, a response that seemed to surprise Justice Kagan.
“I can imagine a world where you’re right that none of this stuff gets protection,” she said. “And, you know, every other industry has to internalize the costs of its conduct. Why is it that the tech industry gets a pass? A little bit unclear. On the other hand, I mean, we’re a court. We really don’t know about these things.”
Justice Amy Coney Barrett asked whether Twitter users could be sued for retweeting ISIS videos. Mr. Schnapper said the law at issue in the case might allow such a suit. “That’s content you’ve created,” he said.
Justice Samuel A. Alito Jr. said he was lost. “I don’t know where you’re drawing the line,” he told Mr. Schnapper. “That’s the problem.”
Mr. Schnapper tried to clarify his position and in doing so revealed its breadth. “What we’re saying is that insofar as they were encouraging people to go look at things,” he said, “that’s what’s outside the protection of the statute.”
Section 230 was enacted in the infancy of the internet. It was a reaction to a decision holding an online message board liable for what a user had posted because the service had engaged in some content moderation.
The provision said, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
The provision helped enable the rise of social networks like Facebook and Twitter by ensuring that the sites did not assume legal liability for every post.
Malcolm L. Stewart, a lawyer for the Biden administration, largely argued in support of the family’s position in the case, Gonzalez v. Google, No. 21-1333. He said that successful lawsuits based on recommendations would be rare but that the immunity provided by Section 230 was generally unavailable for such claims.
Justice Kagan acknowledged that many suits would fail for reasons unrelated to Section 230. “But still, I mean, you are creating a world of lawsuits,” she said. Justice Kavanaugh echoed the point.
Lisa S. Blatt, a lawyer for Google, said the provision gave the company complete protection from suits like the one brought by Ms. Gonzalez’s family. YouTube’s algorithms are a form of editorial curation, she said. Without the ability to provide content of interest to users, she said, the internet would be a useless jumble.
“All publishing requires organization,” she said.
She added: “Helping users find the proverbial needle in the haystack is an existential necessity on the internet. Search engines thus tailor what users see based on what’s known about users. So does Amazon, TripAdvisor, Wikipedia, Yelp, Zillow, and countless video, music, news, job-finding, social media and dating websites. Exposing websites to liability for implicitly recommending third-party content defies the text and threatens today’s internet.”
A ruling against Google, she said, would force sites either to take down any content that was remotely problematic or to allow all content no matter how vile. “You have ‘The Truman Show’ versus a horror show,” she said.
Justice Kagan asked Ms. Blatt whether Section 230 would protect an algorithm that was “pro-ISIS” or one that promoted defamatory speech. Ms. Blatt said yes.
Section 230 has faced criticism across the political spectrum. Many liberals say it has shielded tech platforms from responsibility for disinformation, hate speech and violent content. Some conservatives say the provision has allowed the platforms to grow so powerful that they can effectively exclude voices on the right from the national conversation.
The justices will hear arguments in a related case on Wednesday, also arising from a terrorist attack. That case, Twitter v. Taamneh, No. 21-1496, was brought by the family of Nawras Alassaf, who was killed in a terrorist attack in Istanbul in 2017.
The question in that case is whether Twitter, Facebook and Google may be sued under the Antiterrorism Act of 1990, on the theory that they abetted terrorism by permitting the Islamic State to use their platforms. If the justices were to say no, the case against Google argued on Tuesday could be moot.
Whatever happens in the cases argued this week, both involving the interpretation of statutes, the court is very likely to agree to consider a looming First Amendment question arising from laws enacted in Florida and Texas: May states prevent large social media companies from removing posts based on the views they express?