A group of top AI researchers, engineers, and CEOs has issued a new warning about the existential threat they believe AI poses to humanity.
The 22-word statement, trimmed short to make it as broadly acceptable as possible, reads as follows: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
This statement, published by a San Francisco-based non-profit, the Center for AI Safety, has been co-signed by figures including Google DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman, as well as Geoffrey Hinton and Yoshua Bengio — two of the three AI researchers who won the 2018 Turing Award (sometimes referred to as the “Nobel Prize of computing”) for their work on AI. At the time of writing, the award’s third winner, Yann LeCun, now chief AI scientist at Facebook parent company Meta, has not signed.
The statement is the latest high-profile intervention in the complicated and controversial debate over AI safety. Earlier this year, an open letter signed by some of the same individuals backing the 22-word warning called for a six-month “pause” in AI development. The letter was criticized on multiple levels. Some experts thought it overstated the risk posed by AI, while others agreed with the risk but not the letter’s suggested remedy.
Dan Hendrycks, executive director of the Center for AI Safety, told The New York Times that the brevity of today’s statement — which doesn’t suggest any potential ways to mitigate the threat posed by AI — was intended to avoid such disagreement. “We didn’t want to push for a very large menu of 30 potential interventions,” said Hendrycks. “When that happens, it dilutes the message.”
Hendrycks described the message as a “coming-out” for figures in the industry worried about AI risk. “There’s a very common misconception, even in the AI community, that there only are a handful of doomers,” Hendrycks told The Times. “But, in fact, many people privately would express concerns about these things.”
The broad contours of this debate are familiar, but the details can seem interminable, resting as they do on hypothetical scenarios in which AI systems rapidly increase in capability and no longer function safely. Many experts point to swift improvements in systems like large language models as evidence of projected future gains in intelligence. They say that once AI systems reach a certain level of sophistication, it may become impossible to control their actions.
Others doubt these predictions. They point to the inability of AI systems to handle even relatively mundane tasks like, for example, driving a car. Despite years of effort and billions of dollars of investment in this research area, fully self-driving cars are still far from a reality. If AI can’t handle even this one challenge, say skeptics, what chance does the technology have of matching every other human accomplishment in the coming years?
Meanwhile, both AI risk advocates and skeptics agree that, even without improvements in their capabilities, AI systems present a number of threats in the present day — from enabling mass surveillance, to powering faulty “predictive policing” algorithms, to easing the creation of misinformation and disinformation.