As A.I. Booms, Lawmakers Struggle to Understand the Technology
In November 2016, the Senate Subcommittee on Space, Science and Competitiveness held the first congressional hearing on A.I., with Mr. Musk’s warnings cited twice by lawmakers. During the hearing, academics and the chief executive of OpenAI, a San Francisco lab, batted down Mr. Musk’s predictions or said any such dangers were, at the least, many years away.
Some lawmakers stressed the importance of the nation’s leadership in A.I. development. Congress must “ensure that the United States remains a global leader throughout the 21st century,” Senator Ted Cruz, Republican of Texas and chair of the subcommittee, said at the time.
DARPA subsequently announced that it was earmarking $2 billion for A.I. research projects.
Warnings about A.I.’s dangers intensified in 2021 as the Vatican, IBM and Microsoft pledged to develop “ethical A.I.,” meaning organizations that are transparent about how the technology works, respect privacy and minimize biases. The group called for regulation of facial recognition software, which uses large databases of photos to pinpoint people’s identities. In Washington, some lawmakers tried to create rules for facial recognition technology and for company audits to prevent discriminatory algorithms. The bills went nowhere.
“It’s not a priority and doesn’t feel urgent for members,” said Mr. Beyer, who failed to get enough support last year to pass a bill on audits of A.I. algorithms, sponsored with Representative Yvette D. Clarke, Democrat of New York.
More recently, some government officials have tried bridging the knowledge gap around A.I. In January, about 150 lawmakers and their staffs packed a meeting, hosted by the usually sleepy A.I. Caucus, that featured Jack Clark, a founder of the A.I. company Anthropic.
Some action around A.I. is taking place in federal agencies, which are enforcing laws already on the books. The Federal Trade Commission has brought enforcement actions against companies that used A.I. in violation of its consumer protection rules. The Consumer Financial Protection Bureau has also warned that opaque A.I. systems used by credit agencies could run afoul of anti-discrimination laws.
The F.T.C. has also proposed commercial surveillance regulations to curb the collection of data used in A.I. technology, and the Food and Drug Administration issued a list of A.I. technologies in medical devices that fall under its purview.