Developers and scientists warn that artificial intelligence poses a risk comparable to nuclear war

A group of 350 executives from major companies developing artificial intelligence, along with scientists and researchers who are experts in the technology, have signed a new manifesto warning of “the most serious risks of advanced artificial intelligence”. In a brief 27-word statement, the signatories say the technology poses a “risk of extinction” to humanity that should be treated like a pandemic or nuclear war.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement reads. Among the signatories is the top leadership of OpenAI, the company that developed ChatGPT and is now pressing the international community to regulate the technology. The promoters of the letter include Sam Altman, its chief executive, who was in Europe last week and met with Pedro Sánchez; some twenty of the company's executives and researchers have signed alongside him.

Also on the list of signatories are Kevin Scott, Microsoft's chief technology officer, and Demis Hassabis, the head of Google DeepMind, the company's artificial intelligence research division. Google is the company that contributes the most names to the manifesto, with 38 executives, researchers or university professors linked to it. There are also representatives of smaller developers such as Anthropic, Stability AI and Inflection AI.

This is the second such international initiative in two months. In the previous one, published in late March, hundreds of business leaders and academics expressed themselves in similar terms about the dangers the technology poses if it is not regulated soon. In the text introducing the petition to treat AI like nuclear war, the signatories of the letter published today acknowledge that “journalists, political leaders and the general public are increasingly discussing a wide range of significant and urgent risks posed by AI”, but, they add, “it can be difficult to express concern about some of the most serious risks of advanced artificial intelligence”.

Among the 350 signatories of this manifesto are two Spaniards: Helena Matute, professor of psychology at the University of Deusto, and Ramon Carbó-Dorca, theoretical chemist and professor emeritus at the University of Girona. “I think it is very important that artificial intelligence does not continue to grow out of control, that our leaders do something and that we all realize that it is important, that it is a very dangerous weapon,” Matute explained in a statement. “We need to reach a global agreement on a minimum level of security that no one can guarantee today and that will not be achieved overnight. A lot of things can go wrong and should be avoided. We must act as we did with the atomic bomb, human cloning and other technologies that carry great risks,” she urges.

The call for AI regulation by these entrepreneurs and academics coincides with a large-scale investigation in the European Union into possible privacy violations that OpenAI may have committed with ChatGPT. The continent’s data protection regulators suspect that Europeans’ personal information was used without their consent to train the system.

During his European tour last week, Altman hinted that if he disagrees with the outcome of the investigation and with the content of the AI regulation in Brussels, he could pull ChatGPT out of the EU. Google has done something similar with Bard, a comparable system available in 180 countries but not in Europe.

Source: El Diario
