Monday, December 4, 2023

Risk of extinction from AI should be a global priority alongside pandemics, nuclear war: Experts


On Tuesday, 80 artificial intelligence scientists and more than 200 “other notable figures” signed a statement that says, “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The one-sentence warning from the diverse group of scientists, engineers, corporate executives, academics, and other concerned individuals doesn’t go into detail about the existential threats posed by AI.

Instead, it seeks to “open up the discussion” and “create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously,” according to the Center for AI Safety, a US-based nonprofit whose website hosts the statement.

Lead signatory Geoffrey Hinton, often called “the godfather of AI,” has been sounding the alarm for weeks. Earlier this month, the 75-year-old professor emeritus of computer science at the University of Toronto announced that he had resigned from his job at Google to speak more freely about the dangers associated with AI.

Asked about the chances of the technology “wiping out humanity,” Hinton warned that “it’s not inconceivable.”

That frightening potential doesn’t necessarily lie with currently existing AI tools such as ChatGPT, but rather with what is called “artificial general intelligence” (AGI), in which computers would be capable of developing and acting on their own ideas.

“Until quite recently, I thought it was going to be like 20 to 50 years before we have general-purpose AI,” Hinton told CBS News. “Now I think it may be 20 years or less.”

Pressed by the outlet on whether it could happen sooner, Hinton conceded that he wouldn’t rule out the possibility of AGI arriving within five years, a significant change from a few years ago when he “would have said, ‘No way.’”

“We have to think hard about how to control that,” said Hinton. Asked if that’s possible, Hinton said, “We don’t know, we haven’t been there yet, but we can try.”

The AI pioneer is far from alone. According to the 2023 AI Index Report, an annual assessment of the fast-growing industry published last month by the Stanford Institute for Human-Centered Artificial Intelligence, 57% of computer scientists surveyed said that “recent progress is moving us toward AGI,” and 58% agreed that “AGI is an important concern.”

Although its findings were released in mid-April, Stanford’s survey of 327 experts in natural language processing—a branch of computer science essential to the development of chatbots—was conducted last May and June, months before OpenAI’s ChatGPT burst onto the scene in November.

OpenAI CEO Sam Altman, who signed the statement shared Tuesday by the Center for AI Safety, wrote in a February blog post: “The risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world.”

The following month, however, Altman declined to sign an open letter calling for a half-year moratorium on training AI systems beyond the level of OpenAI’s latest chatbot, GPT-4.

The letter, published in March, states that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Demands from outside the industry for robust government regulation of AI are growing. While ever-more dangerous forms of AGI may still be years away, there is already mounting evidence that existing AI tools are exacerbating the spread of disinformation, from chatbots spouting lies and face-swapping apps generating fake videos to cloned voices committing fraud.
