SINGAPORE — In a rare moment of global unity, leaders from 28 countries and select technology companies came together recently to agree on the safe and responsible development of artificial intelligence (AI).
The inaugural AI Safety Summit began on Wednesday (Nov 1) at Bletchley Park in southern England, where Britain’s World War II code-breakers famously cracked Nazi Germany’s “Enigma” code.
The two-day conference was attended by leaders such as host British Prime Minister Rishi Sunak, United States Vice President Kamala Harris, European Union chief Ursula von der Leyen and United Nations Secretary-General Antonio Guterres.
They were also joined by Chinese Vice Minister of Science and Technology Wu Zhaohui.
Tech firms at the summit included Microsoft-backed OpenAI, Anthropic, Google DeepMind, Microsoft, Meta and xAI.
The conference focussed on managing the increasing risks of “frontier AI” and ensuring the responsible and safe development and deployment of AI technology.
“Frontier AI” refers to the most advanced, cutting-edge AI systems, exemplified by generative AI tools such as ChatGPT, which some experts believe could become more intelligent than people at a range of tasks.
At the summit, some tech and political leaders warned that AI poses huge risks if not controlled. These risks range from eroding consumer privacy to endangering humans and even causing a global catastrophe.
These concerns have sparked a race by governments and institutions to design safeguards and regulations.
TODAY looks at what the summit means for the future of AI and how it could impact the region, including Singapore.
WHAT WAS THE OUTCOME OF THE SAFETY SUMMIT?
The “ultimate goal” of the summit was “to work towards a more international approach to safety where we collaborate with partners to ensure AI systems are safe before they are released”, said Mr Sunak in a speech last week.
As such, in a landmark agreement, 28 countries signed the “Bletchley Declaration”, a pact to tackle the risks posed by “frontier AI” models.
Mr Sunak said that the declaration, the action on testing and a pledge to set up an international panel on risk would “tip the balance in favour of humanity”.
The declaration agreed on “the urgent need to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community”.
Besides the declaration, Britain announced the establishment of an AI Safety Institute.
This new hub will help spur international collaboration on AI’s safe development, working with leading AI nations and companies, including partners such as the US, Singapore and Google DeepMind.
Google DeepMind is a British-American artificial intelligence research laboratory which serves as a subsidiary of Google.
Britain also plans to invest £225 million (S$374.5 million) in an artificial intelligence supercomputer, called Isambard-AI after the 19th-century British engineer Isambard Kingdom Brunel.
Ms Harris also confirmed that the US Department of Commerce would establish the US AI Safety Institute (US AISI).
The institute will create “guidelines, tools, benchmarks and best practices for evaluating and mitigating dangerous capabilities and conducting evaluations to identify and mitigate AI risk”.
Tech billionaire Elon Musk, who has long warned of the dangers of AI, stole some of the limelight by warning, in an interview conducted by Mr Sunak, that AI could eventually eliminate all jobs.
He added that people would have robot friends thanks to AI.
DISAGREEMENTS AND BACKLASH
Mr Sunak had been criticised by lawmakers from his own party for inviting China to the AI Safety Summit.
This came after many Western governments reduced their technological cooperation with Beijing, but Mr Sunak said any effort on AI safety had to include its leading players.
He also said it showed the role Britain could play in bringing together the three big economic blocs of the United States, China and the European Union.
“It wasn’t an easy decision to invite China, and lots of people criticised me for it, but I think it was the right long-term decision,” Mr Sunak said at a press conference.
Mr Wu signed the Bletchley Declaration on Wednesday. However, China was not present on Thursday and did not put its name to the agreement on testing.
SINGAPORE AND AI
The conference was attended in person by Communications and Information Minister Josephine Teo, while Prime Minister Lee Hsien Loong attended it virtually.
Mrs Teo chaired a roundtable on “Risks from Loss of Control over Frontier AI”, attended by 33 participants, which looked at how advanced AI systems could lead to the loss of human control and oversight.
The session also looked at the risks this would pose, as well as tools to monitor and prevent these scenarios.
In her remarks, Mrs Teo mentioned how, given Singapore’s size and population, the workforce has been “an ongoing constraint” in relation to AI.
“In that context, AI represents a significant force multiplier, probably the greatest we will have for a long time to come,” she said.
While AI seems like a solution to many problems, Mrs Teo highlighted three points to ensure that these systems are not given “disproportionate control”: growing global expertise in AI safety research and development to build safer and more aligned models, deepening collaborations in AI testing, and continuing multi-stakeholder exchanges to bring diverse perspectives to the issue.
In a Facebook post, Mr Lee welcomed the UK’s new AI Safety Institute and its cooperation with Singapore on safety testing.
He said the AI field is developing rapidly and transforming lives while at the same time raising “deep ethical questions”.
“AI systems must be imbued with human context and human values,” he said.
“Singapore has taken some small steps, such as introducing testing toolkits like AI Verify and evaluation sandboxes, to mitigate these risks.”
AI Verify is an AI governance testing framework to help organisations objectively demonstrate responsible AI through standardised tests.
“Singapore is honoured to work with international partners so that we can all reap the benefits of AI, and make AI a force for good contributing to our common prosperity,” said Mr Lee.
While the AI Safety Summit is the first conference of its kind, the UK government announced that more are lined up.
South Korea is set to host a “mini virtual” summit on AI in the next six months, and France will host the next in-person AI summit next year.