Amid growing concerns about the lack of safeguards for ChatGPT-style AI systems, more than a dozen of the world's leading artificial intelligence companies pledged to develop and use their technology safely at a global summit on Wednesday.
Fourteen companies, including South Korea's Samsung Electronics, tech giant Naver and America's Google and IBM, agreed on the final day of the Seoul summit to "minimize risks" as they advance the cutting-edge field.
“We are committed to continuing to advance research efforts to promote the responsible development of AI models,” they said in the Seoul AI Business Pledge.
The companies also promised to “minimize risks and enable reliable evaluation of functionality and safety.”
The two-day summit, co-hosted by South Korea and the UK, brought together heads of global AI companies such as OpenAI and Google DeepMind to explore ways to ensure the safe use of the technology.
Their work builds on the consensus reached at the first Global AI Safety Summit held at Bletchley Park in the UK last year.
Under the new pledge, the companies also agreed to support vulnerable groups through AI technology, although no details were given on how they would do this.
Sixteen technology companies, including ChatGPT maker OpenAI, Google DeepMind and Anthropic, also pledged new safety initiatives Tuesday, including sharing how they assess the risks of their technology.
This includes defining which risks are considered "intolerable" and what companies will do to ensure those thresholds are not crossed.
ChatGPT became a huge success shortly after its release in 2022, sparking a generative AI gold rush in which technology companies around the world have poured billions of dollars into developing their own models.
These AI models can generate text, photos, audio and even video from simple prompts, and their proponents tout them as a breakthrough technology that will improve lives and businesses around the world.
But critics, rights activists and governments have warned that the technology could be misused in a variety of ways, including to manipulate voters with fake news articles and "deepfake" photos and videos of politicians.
kjk/ceb/pbt