OpenAI CEO Sam Altman fears artificial intelligence could “go quite wrong”
Sam Altman, the CEO of the company behind ChatGPT, expressed concern that artificial intelligence could “go quite wrong” at a Senate committee hearing on Tuesday focusing on how to regulate the rapidly developing field of AI.
Altman, who leads San Francisco-based OpenAI, said in response to a question about his greatest fear regarding AI that the technology and industry could “cause significant harm to the world” unless it is properly regulated.
“If this technology goes wrong, it can go quite wrong,” he told the Senate Judiciary’s Subcommittee on Privacy, Technology and the Law. “We want to be vocal about that. We want to work with the government to prevent that happening. But we have to be clear-eyed about it.”
Asked by Sen. Josh Hawley, R-Mo., about the risk that so-called large language models like ChatGPT, which can already predict public opinion with accuracy, could be used to manipulate people, such as undecided voters, Altman replied, “I’m nervous about it.” He also drew a parallel with the emergence of Photoshop in the late 1990s and early 2000s, when many people were initially fooled by photoshopped images before developing an understanding of image manipulation.
“This will be like that on steroids,” he said.
AI a threat to democracy?
The congressional hearing covered a range of concerns, and senators from both parties broadly agreed that AI needed regulation, without reaching firm conclusions on how to do that. Sen. Chris Coons, Democrat of Delaware, fretted that AI models developed in China would promote a pro-China “point of view,” and pushed for the creation of AI that would promote “open markets and open societies.”
Hawley later rattled off a list of potential negative effects from AI: “Loss of jobs, loss of privacy, manipulation of personal behavior, manipulation of personal opinion and destabilization of elections in America,” he said.
But Altman expressed optimism that AI would create more jobs than it destroys, saying, “We’re very optimistic that there will be fantastic jobs in the future and that current jobs can be much better,” and said that ChatGPT was “good at doing tasks, not jobs.”
IBM Chief Privacy and Trust Officer Christina Montgomery, who also testified at the hearing, used herself as an example of AI creating new jobs, noting that she heads a team of AI governance professionals.
Indeed, the technology is already disrupting some fields. Earlier this month, IBM’s chief executive told Bloomberg the company would pause hiring for jobs that could be done by AI, affecting roughly a third of the company’s headcount, or 7,800 positions.
Altman and AI researcher Gary Marcus expressed support for government regulation of AI. That could include creating a new agency to oversee the technology, requiring companies to make AI models and their underlying data public, requiring AI creators to obtain a license or demonstrate their products' safety before public release, and requiring independent auditing of AI models. Montgomery advocated a more narrowly focused approach in which the government would regulate only certain “use cases” for artificial intelligence.
“These things will have learned from us”
The rapid emergence of “generative AI” — tools that can produce reams of writing or visual images, helping doctors communicate with their patients or real estate professionals quickly write listings, for example — has heightened public concern about the technology. AI pioneer Geoffrey Hinton, who recently left his job at Google to speak freely about the technology, told a conference that AI could pose a range of threats.
“These things will have learned from us, by reading all the novels that ever were and everything Machiavelli ever wrote, how to manipulate people,” Hinton said, according to the Associated Press. “Even if they can’t directly pull levers, they can certainly get us to pull levers.”
In March, a number of prominent CEOs and researchers signed a letter asking for a six-month moratorium on developing major AI models. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” they asked.
At Tuesday’s hearing, however, the consensus was that the explosion in AI would continue apace, as companies and investors pour billions of dollars into the technology.
“There’s no way to stop this moving forward,” said Sen. Cory Booker, D-N.J. “There will be no pause. There’s no enforcement body to enforce a pause. Forgive me for being skeptical, nobody’s pausing.”