Artificial Intelligence Experts Agree That It Needs Regulation. That’s the Easy Part.

This article is part of our special section on the DealBook Summit that included business and policy leaders from around the world.


  • The emergence of generative artificial intelligence, such as ChatGPT, signals a radical change in how A.I. will be used in every area of society, but it still must be viewed as a tool that humans can use and control — not as something that controls us.

  • Some sort of regulation of A.I. is needed, but opinions vary widely on the breadth and enforceability of such rules.

  • To realize the potential of A.I. and control its risks as much as possible, technology companies cannot go it alone. There should be genuine partnerships with other sectors, such as universities and government.


Get seven artificial intelligence experts together in one room and there’s debate about nearly everything, from legislation to transparency to best practices. But they could agree on at least one thing.

It’s not supernatural.

“A.I. is not something that comes from Mars. It’s something that we shape,” said Francesca Rossi, an IBM fellow and the company’s A.I. ethics global leader. Ms. Rossi, along with other representatives of industry, academia and the European Parliament, participated in last week’s DealBook Summit task force on how to harness the potential of A.I. while regulating its risks.

Acknowledging that A.I. did not emerge from outer space was the easy part. Deciding how it should be shaped, not just in the United States but globally, was far more difficult. What role should governments play in controlling A.I.? How transparent should technology companies be about their A.I. research? Should A.I. adoption go more slowly in some fields even if the capability exists?

While A.I. has been around for decades, ChatGPT became a worldwide phenomenon as soon as OpenAI released it a year ago. Kevin Roose, a technology writer for The New York Times and the moderator of the task force, wrote: “ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the general public.”

These new chatbots can communicate in an eerily humanlike manner and in countless languages. And all are in their infancy. While ChatGPT is the best known, there are others, including Google’s Bard and, most recently, Amazon’s Q.

“We all know that this particular phase of A.I. is at the very, very early stages,” said John Roese, president and chief technology officer of Dell Technologies. No one can be complacent or think of A.I. “just as a commodity.”

“It is not,” he said. “This is not something you just consume. This is something you navigate.”

While A.I. has taken a giant leap forward — and is evolving so rapidly that it is hard to keep up with the state of play — it is important not to mystify it, said Fei-Fei Li, a professor of computer science at Stanford University and co-director at the university’s Human-Centered A.I. Institute. “Somehow we’re too hyped up by this. It’s a tool. Human civilization starts with tool using and tool invention from fire to stone, to steam to electricity. They get more and more complex, but it’s still a tool-to-human relationship.”

While it’s true that even A.I.’s developers cannot fully explain some of the ways it works, Professor Li noted that the same is true of pharmaceuticals such as acetaminophen. Part of the reason most people don’t hesitate to take such drugs, she said, is that a federal agency, the Food and Drug Administration, regulates medications.

That raises a question: Should there be an equivalent of the F.D.A. for A.I.?

Some regulation is needed, participants agreed, but the trick is deciding what that should look like.

Vice President Kamala Harris, who was interviewed at the DealBook conference, spoke separately on the issue.

“I know that there is a balance that can and must be struck between what we must do in terms of oversight and regulation, and being intentional to not stifle innovation,” she said.

It’s finding the balance that is tough, however.

The European Parliament is hammering out the first major law to regulate artificial intelligence, something the rest of the world is watching closely.

Part of the law calls for assessments of A.I. used in identified high-risk areas, such as health care, education and criminal justice. It would require makers of A.I. systems to disclose, among other things, what data is used to train their systems, to help avoid biases and other issues, and how they manage sensitive information and the technology’s environmental impact. It would also severely limit the use of facial recognition software.

Brando Benifei, a member of the European Parliament and a task force participant, said he hoped the law would be passed early next year, with a grace period before it takes effect.

In October, the White House issued a lengthy executive order on A.I., but without an enforcement mechanism, something Mr. Benifei sees as necessary. “Obviously, it’s a delicate topic,” he said. “There is a lot of concern from the business sector, I think rightly so, that we do not overregulate before we fully understand all the challenges.” But, he said, “we cannot just rely on self-regulation.” The development and use of A.I., he added, must be “enforceable and explainable to our citizens.”

Other task force members were far more reluctant to embrace such broad regulation. Questions abound, such as who is responsible if something goes wrong — the original developer? A third-party vendor? The end user?

“You cannot regulate A.I. in a vacuum,” Mr. Roese said. “A.I. has a dependency on the software ecosystem, on the data ecosystem. If you try to regulate A.I. without contemplating the upstream and downstream effects on the adjacent industries, you’ll get it wrong.”

For that reason, he said, it makes more sense to have an A.I. office or department within the relevant government agencies — perhaps with an overarching A.I. coordinator — rather than try to create a centralized A.I. agency.

Transparency is key, all agreed, and so are partnerships between government, industry and university research. “If you are not very transparent, then academia gets left behind and no researchers will come out of academia,” said Rohit Prasad, senior vice president and head scientist at Amazon Artificial General Intelligence.

Professor Li, the lone academic representative in the room, noted that companies often say they want partnerships but don’t “walk the walk.”

In addition, she said, “It’s not just about regulation. It really has to do with investment in the public sector in a deep and profound way,” noting that she has directly pleaded with Congress and President Biden to support universities in this area. Academia, she said, can serve as a trusted neutral platform in this field, but “right now we have completely starved the public sector.”

A.I. has been called an existential threat to humanity — potentially through its use in surveillance that undermines democracy or in launching automated weapons that could kill on a massive scale. But such highly publicized warnings distract from more mundane but more immediate problems of A.I., said Mr. Benifei.

“We have today problems of algorithmic biases, of misuse of A.I., that is in the daily life of people, not about the catastrophe for humanity,” he said.

All of these issues concern Lila Ibrahim, chief operating officer of Google DeepMind. But she raised a major one that the group hadn’t had time to touch on: “How do we actually equip youth today with A.I. skills and do it with diversity and inclusion?” she asked. “How do we not leave people further behind?”

Moderator: Kevin Roose, technology writer, The New York Times

Participants: Brando Benifei, member of the European Parliament; Lila Ibrahim, chief operating officer, Google DeepMind; Fei-Fei Li, professor of computer science, Stanford University and co-director, Stanford Institute for Human-Centered A.I.; Rohit Prasad, senior vice president and head scientist at Amazon Artificial General Intelligence; David Risher, chief executive, Lyft; John Roese, president and global chief technology officer, Dell Technologies; Francesca Rossi, IBM fellow and A.I. ethics global leader
