CIOInsider India Magazine

Top Artificial Intelligence Executives Including OpenAI CEO Sam Altman Joined in Raising the Risk of Extinction from AI

CIO Insider Team | Wednesday, 31 May, 2023

Top artificial intelligence (AI) executives, including OpenAI CEO Sam Altman, have joined experts and professors in raising the risk of extinction from AI, which they urged policymakers to treat on a par with the risks posed by pandemics and nuclear war.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," more than 350 signatories wrote in a letter published by the nonprofit Center for AI Safety (CAIS).

As well as Altman, they included the CEOs of AI firms DeepMind and Anthropic, and executives from Microsoft and Google.

Also among them were Geoffrey Hinton and Yoshua Bengio - two of the three so-called godfathers of AI who received the 2018 Turing Award for their work on deep learning - and professors from institutions ranging from Harvard to China's Tsinghua University.

A statement from CAIS singled out Meta, where the third godfather of AI, Yann LeCun, works, for not signing the letter.

The letter coincided with the U.S.-EU Trade and Technology Council meeting in Sweden where politicians are expected to talk about regulating AI.

In April, Elon Musk and a group of AI experts and industry executives had been among the first to publicly cite such potential risks to society.

The warning comes two months after the nonprofit Future of Life Institute (FLI) issued a similar open letter, signed by Musk and hundreds more, demanding an urgent pause in advanced AI research, citing risks to humanity.

Recent developments in AI have produced tools that supporters say can be used in applications ranging from medical diagnostics to writing legal briefs, but the technology has also sparked fears that it could enable privacy violations, power misinformation campaigns, and give rise to smart machines that think for themselves.

