Over 200 Leaders Demand Binding Global AI Rules Before It’s Too Late

Over 200 scientists and politicians, including Nobel laureates and former heads of state, are calling for a set of global artificial intelligence rules to protect society from the technology’s greatest risks. Examples include prohibiting the use of AI to launch nuclear weapons or conduct mass surveillance.

Notable signatories include the most-cited living scientist Yoshua Bengio, OpenAI co-founder Wojciech Zaremba, Nobel Peace Prize laureate Maria Ressa, and Geoffrey Hinton, who won the Nobel Prize in Physics and left his position at Google to speak openly about the dangers of AI.

Several signatories have worked or currently work at prominent AI companies, including Baidu, Anthropic, xAI, and Google DeepMind, or have received the Turing Award, widely regarded as the highest honour in computer science.

Risks and red lines outlined

The signatories call on governments to implement so-called “red lines” by the end of 2026, limiting what AI systems are allowed to do and what humans can do with them. The letter warns that the technology will soon surpass human capabilities, a goal that the likes of OpenAI and Meta are actively pursuing, and that it will “become increasingly difficult to exert meaningful human control” over it.

“Governments must act decisively before the window for meaningful intervention closes,” the “Global Call for AI Red Lines” reads. “An international agreement on clear and verifiable red lines is necessary for preventing universally unacceptable risks.”

Categories of potential risks

The risks can be categorized into two areas: AI usage and AI behavior.

Examples of AI usage include leveraging the technology to deploy lethal weapons or impersonate a human, while examples of AI behavior include a system developing chemical weapons or replicating itself without human intervention. The signatories want international rules that prohibit both the most dangerous uses of AI and the most dangerous AI behaviors.

These individual risks could lead to devastating large-scale events, the open letter says, including engineered pandemics through bioweapons, widespread disinformation, mass unemployment, manipulation of children and adults, and systematic human rights violations.

While the letter does not demand concrete next steps, it suggests potential pathways such as nations drafting red lines, scientists publishing verifiable standards, global forums endorsing them by 2026, and eventually negotiating a binding treaty.

Former presidents and ministers from Italy, Colombia, Ireland, Taiwan, Argentina, and Greece have also backed the call for AI red lines, as have over 70 organisations, many of which are dedicated specifically to AI safety. Prominent figures outside science have co-signed the letter as well, including actor and author Sir Stephen Fry and author Yuval Noah Harari.

AI vendors are reluctant to support binding regulations

In recent years, several open letters on AI safety have been penned, including a 2023 letter co-signed by Elon Musk, which called for a six-month pause on AI development. There was also a statement, backed by OpenAI’s Sam Altman and Anthropic’s Dario Amodei, urging that AI risk mitigation be treated as a global priority.

None of these three prominent AI figures has joined the call for AI red lines, which differs from earlier efforts in one crucial way: it calls for binding regulations.

AI companies do not typically support third-party oversight, as they see their growth as dependent on their ability to innovate freely. OpenAI, Meta, and Google have all pushed back against government efforts to manage their actions. However, they are not averse to making less concrete, voluntary commitments, such as allowing their models to be used for safety testing, which helps them maintain a responsible image.

The signatories of the Red Lines letter acknowledge the existence of vendors’ internal frameworks and policies, and several companies have committed to a certain standard of self-regulation, as recognized by former US President Joe Biden in 2023 and at last year’s Seoul AI Summit. Still, research indicates that companies follow through on such promises only about 52% of the time.

Co-signatory Yoshua Bengio has launched a nonprofit dedicated to ensuring that AI systems are safe, honest, and fundamentally aligned with human values.
