Google C.E.O. Sundar Pichai on the A.I. Moment: ‘You Will See Us Be Bold’
Here are some other highlights of Mr. Pichai’s remarks:
On the initial, lukewarm reception for Google’s Bard chatbot:
We knew when we were putting Bard out, we wanted to be careful … So it’s not surprising to me that’s the reaction. But in some ways, I feel like we took a souped-up Civic and kind of put it in a race with more powerful cars. And what surprised me is how well it does on many, many, many classes of queries. But we are going to be iterating fast. We clearly have more capable models. Pretty soon, maybe as this goes live, we will be upgrading Bard to some of our more capable PaLM models, which will bring more capabilities, be it in reasoning or coding; it can answer math questions better. So you will see progress over the course of next week.
On whether ChatGPT’s success came as a surprise:
With OpenAI, we had a lot of context. There are some incredibly good people, some of whom had been at Google before, and so we knew the caliber of the team. So I think OpenAI’s progress didn’t surprise us. I think ChatGPT … you know, credit to them for finding something with product-market fit. The reception from users, I think, was a pleasant surprise, maybe even for them, and for a lot of us.
On his worries about tech companies racing toward A.I. advancements:
Sometimes I get concerned when people use the words “race” and “being first.” I’ve thought about A.I. for a long time, and we are definitely working with technology which is going to be incredibly beneficial, but clearly has the potential to cause harm in a deep way. And so I think it’s very important that we are all responsible in how we approach it.
On the return of Larry Page and Sergey Brin:
I’ve had a few meetings with them. Sergey has been hanging out with our engineers for a while now. He’s a deep mathematician and a computer scientist. So to him, the underlying technology, I think if I were to use his words, he would say it’s the most exciting thing he has seen in his lifetime. So it’s all that excitement. And I’m glad. They’ve always said, “Call us whenever you need to.” And I call them.
On the open letter, signed by nearly 2,000 A.I. researchers and tech luminaries including Elon Musk, that urged companies to pause development of powerful A.I. systems for at least six months:
In this area, I think it’s important to hear concerns. There are many thoughtful people behind it, including people who have thought about A.I. for a long time. I remember talking to Elon eight years ago, and he was deeply concerned about A.I. safety then. I think he has been consistently concerned. And I think there is merit to be concerned about it. While I may not agree with everything that’s there and the details of how you would go about it, I think the spirit of [the letter] is worth being out there.
On whether he’s worried about the danger of creating artificial general intelligence, or A.G.I., an A.I. that surpasses human intelligence:
When is it A.G.I.? What is it? How do you define it? When do we get there? All those are good questions. But to me, it almost doesn’t matter because it is so clear to me that these systems are going to be very, very capable. And so it almost doesn’t matter whether you reached A.G.I. or not; you’re going to have systems which are capable of delivering benefits at a scale we’ve never seen before, and potentially causing real harm. Can we have an A.I. system which can cause disinformation at scale? Yes. Is it A.G.I.? It really doesn’t matter.
On why climate change activism makes him hopeful about A.I.:
One of the things that gives me hope about A.I., like climate change, is it affects everyone. Over time, we live on one planet, and so these are both issues that have similar characteristics in the sense that you can’t unilaterally get safety in A.I. By definition, it affects everyone. So that tells me the collective will will come over time to tackle all of this responsibly.