In an open letter, scientists shared worry that the loss of human control or malicious use of AI systems could lead to catastrophic outcomes for all of humanity.
A group of artificial intelligence scientists is urging nations to create a global oversight system to prevent potential “catastrophic outcomes” if humans lose control of AI.
In a statement released on Sept. 16, a group of influential AI scientists raised concerns that the technology they helped develop could cause serious harm if human control is lost.
“Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity,” the statement read before continuing:
“Unfortunately, we have not yet developed the necessary science to control and safeguard the use of such advanced intelligence.”
The scientists agreed that nations need to develop authorities to detect and respond to AI incidents and catastrophic risks within their jurisdictions and that a “global contingency plan” needs to be developed.
“In the longer term, states should develop an international governance regime to prevent the development of models that could pose global catastrophic risks.”
The statement builds upon findings from the International Dialogue on AI Safety in Venice in early September, the third meeting of its kind organized by the nonprofit US research group Safe AI Forum.
Johns Hopkins University Professor Gillian Hadfield, who shared the statement in a post on X, said, “If we had some sort of catastrophe six months from now, if we do detect there are models that are starting to autonomously self-improve, who are you going to call?”
The signatories stated that AI safety is a global public good, requiring international cooperation and governance.
The scientists proposed three key processes: emergency preparedness agreements and institutions, a safety assurance framework, and independent global AI safety and verification research.
The statement had more than 30 signatories from the United States, Canada, China, Britain, Singapore, and other countries. The group comprised experts from leading AI research institutions and universities, along with several Turing Award winners, recipients of what is often described as the Nobel Prize of computing.
The scientists said the dialogue was necessary given shrinking scientific exchange between superpowers and growing distrust between the US and China, both of which add to the difficulty of achieving consensus on AI threats.
In early September, the US, EU, and UK signed the world’s first legally binding international AI treaty, prioritizing human rights and accountability in AI regulation.
However, tech corporations and executives have said that over-regulation could stifle innovation, especially in the European Union.