You can't put AI back into Pandora's box. But the world's largest AI companies have voluntarily agreed to a new deal meant to address the biggest concerns surrounding the technology and allay fears that unchecked AI development could lead to sci-fi scenarios in which AI turns against its creators. The companies are working with governments, but without strict legal provisions to back up those governments' efforts, the deal can only go so far.
This morning, 16 influential AI companies, including Anthropic, Microsoft, and OpenAI; 10 countries; and the European Union met at a summit in Seoul to set guidelines for responsible AI development. One of the big outcomes of the summit was an agreement by the AI companies in attendance to a so-called kill switch: a policy under which they would halt development of their most cutting-edge AI models if those models were deemed to have crossed a certain risk threshold. It is unclear how effective the policy will be in practice, however, given that the agreement carries no real legal weight and does not define specific risk thresholds. AI companies that were not present, as well as competitors of the signatories, are not subject to the pledge.
A policy document signed by AI companies including Amazon, Google, and Samsung states that, "in the extreme," organizations commit "not to develop or deploy a model or system at all" if mitigations cannot be applied to keep risks below the threshold. The summit follows last October's Bletchley Park AI Safety Summit, which brought together a similar group of AI developers and was criticized as "valuable, but pointless" for its lack of viable short-term commitments to protect humanity from AI's proliferation.
Following that summit, a group of participants wrote an open letter criticizing the forum's lack of formal rulemaking and the outsized role AI companies play in regulating their own industry. "Experience shows that these harms are best addressed not through self-regulation or voluntary measures, but through enforceable regulatory obligations," the letter read.
Writers and researchers have warned about the risks of powerful artificial intelligence for decades, first in science fiction and now in the real world. One of the best-known references is the "Terminator scenario," the theory that, if left unchecked, AI could grow more powerful than its human creators and turn on them. The theory takes its name from the 1984 Arnold Schwarzenegger film, in which a cyborg travels back in time to kill a woman whose unborn son will one day fight an AI system bent on triggering a nuclear holocaust.
“AI offers an enormous opportunity to transform our economy and solve our greatest challenges, but I have always been clear that we will only be able to realise this full potential if we can grip the risks posed by this rapidly evolving and complex technology,” said UK Technology Secretary Michelle Donelan.
AI companies themselves recognize that their cutting-edge products are venturing into technologically and morally uncharted territory. Sam Altman, CEO of OpenAI, defined artificial general intelligence (AGI) as AI that exceeds human intelligence and said it is “coming soon” but carries risks.
“AGI would also come with serious risk of misuse, drastic accidents, and societal disruption,” an OpenAI blog post reads. “Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.”
But so far, efforts to cobble together a global regulatory framework for AI have been scattershot and have largely lacked legislative authority. A UN policy framework calling on countries to guard against AI risks to human rights, monitor the use of personal data, and mitigate other AI harms was approved unanimously last month, but it is nonbinding. And the Bletchley Declaration, the centerpiece of last October's global AI summit in the UK, contained no specific regulatory commitments.
Meanwhile, AI companies are starting to establish their own organizations to push AI policy. Yesterday, Amazon and Meta joined the Frontier Model Forum, an industry nonprofit “dedicated to improving the safety of frontier AI models,” according to its website. They join founding members Anthropic, Google, Microsoft, and OpenAI. The group has yet to put forward any firm policy proposals.
Individual governments have had more success. President Biden's executive order regulating AI safety, signed last October, went beyond the vague promises of similar documents by incorporating strict legal requirements, marking the first time a government took the lead on AI policy. Biden, for example, invoked the Defense Production Act to require AI companies to share safety test results with the government. The EU and China have also enacted formal policies addressing topics such as copyright law and the collection of users' personal data.
States are taking action as well: Yesterday, Colorado Governor Jared Polis signed a new bill into law that bans algorithmic discrimination in AI and requires developers to share internal data with state regulators to demonstrate compliance.
This will not be the last chance for global AI regulation: France plans to host another summit early next year, following the gatherings in Seoul and Bletchley Park. By then, participants say, they will have developed formal definitions of the risk benchmarks that would demand regulatory action — a major step forward in a process that has so far been notably cautious.