The leaders of OpenAI, the creator of ChatGPT, have called for the regulation of “superintelligent” artificial intelligence (AI) through a body equivalent to the International Atomic Energy Agency, to protect humanity from the risk of accidentally creating something with the power to destroy it.
OpenAI CEO Sam Altman and co-founders Ilya Sutskever and Greg Brockman posted a brief note on the company’s website calling for an international regulator to begin working out how to “inspect systems, require audits, check compliance with security standards, and impose restrictions on deployment and security levels” in order to reduce the “existential risk” these systems may pose.
“Within the next 10 years, it is conceivable that AI systems will exceed the skill level of experts in most fields and carry out as much productive activity as one of today’s largest corporations,” they wrote in their note. “In terms of potential advantages and disadvantages, superintelligence will be more powerful than any other technology humanity has had to deal with in the past. Our future may be much more prosperous, but to get there we need to manage the risks. Given the possibility of existential risk, we cannot simply be reactive.”
In the short term, the three signatories call for “some coordination” among the companies at the forefront of AI research, to ensure that the development of increasingly powerful models integrates smoothly into society while prioritizing safety. That coordination, they write, could be achieved through a government-led project or a collective agreement to limit the growth of AI capabilities.
Although researchers have warned of the potential risks of superintelligence for decades, those risks have become more concrete as the development of artificial intelligence has accelerated. According to the Center for AI Safety (CAIS), a US-based organization created to “reduce societal risks from artificial intelligence,” AI development poses eight categories of “catastrophic” and “existential” risk.
“Totally dependent on machines”
Beyond the fear some feel about the possibility of an artificial intelligence so powerful that it completely destroys humanity, whether by accident or on purpose, CAIS addresses other, more pernicious harms. A world where AI systems are handed an ever-growing number of tasks could see humanity “lose the ability to self-govern and become completely dependent on machines,” a process of “enfeeblement”; and a small group controlling powerful systems could turn AI into a centralizing force, producing a “value lock-in” between rulers and ruled in an eternal caste system.
According to OpenAI’s leaders, “people around the world must democratically decide what the limits and default values of AI systems are” in order to avoid these risks, although they admit that “it is not yet known how such a mechanism will be designed.” Still, they say it is worth developing robust risk-management systems.
“We believe it will lead to a much better world than we can imagine today (we are already seeing the first examples in areas such as education, creative work, and personal productivity tools),” they write, warning that halting its development could also be dangerous. “Because the benefits are so huge, the cost of building it decreases every year, the number of actors building it is growing rapidly, and it is an inherent part of our current technological path, stopping it would require something like a global surveillance regime, and even that is not guaranteed to work. So we have to get it right.”
Source: El Diario