5 Takeaways From UK AI Safety Summit

2024 will see AI Safety Summits in France and Korea

Ben Wodecki, Junior Editor - AI Business

November 7, 2023

4 Min Read
U.K. Prime Minister Rishi Sunak at the AI Safety Summit. Getty Images

The U.K. AI Safety Summit has drawn to a close, with 100 attendees from across the world descending on a quaint English country estate to discuss considerations for building and governing future AI systems.

British Prime Minister Rishi Sunak said the U.K. is “proud to have brought the world together and hosted the first summit.”

Here are five major takeaways from the event.

1. Consensus Reached – But It’s Still Early Days

The EU and 28 countries signed the Bletchley Declaration, a joint commitment to designing AI that is safe and human-centric.

The declaration consists of pledges to ensure AI is designed and deployed responsibly. Signatories commit to working together through existing international forums to promote cooperation. Of particular focus are the most advanced AI systems, such as Google Gemini or the rumored GPT-5, which are seen as having the potential to inflict the greatest harms, up to and including human extinction.

The summit held eight roundtable discussions chaired by high-ranking government officials, researchers or academics. One group stated that “we are only at the early stages of understanding how these models work, the risks they pose, and therefore how to develop adequate safeguards.”

2. Sharing Is Caring

A key takeaway from the Bletchley agreement was that attendees agreed to cooperate by building a “shared scientific and evidence-based understanding” of AI risks, as a foundation for policy.

Part of those shared efforts would see increased transparency by private actors developing AI models, as well as the development of tools for safety testing that organizations can access.

Each nation, however, can categorize AI risks based on its own national circumstances and legal frameworks, so signatories are not tied to a single global viewpoint.

Suki Dhuphar, head of international business at Tamr, said: “Through actively collaborating in AI safety research, organizations and governments can strike a balance between innovation and safeguarding against potential harms, ultimately unlocking AI's full potential while ensuring ethical and accountable AI deployment.”

3. Open Source AI Crowd Disagrees

The overarching theme of the event was mitigating the risk of advanced AI systems wiping out humanity if they fall into the hands of bad actors.

But a vocal group, specifically Meta's Chief AI Scientist Yann LeCun and Google Brain co-founder Andrew Ng, believes the focus on AI's existential threat is misguided and would lead to over-regulation that would harm open-source efforts and innovation.

They and about 150 other open-source advocates signed a statement published by Mozilla saying that open AI is “an antidote, not a poison.”

They have at least one high-profile backer: U.K. Deputy Prime Minister Oliver Dowden. “If we want to make sure [AI] spreads globally, in terms of the developing world, … I think there is a very high bar to restrict open source in any way,” he told Politico.

Amanda Brock, CEO of OpenUK, lauded Dowden's stance. “Recognizing the value of open source to global economies, its role in democratizing technology and building trust through transparency is critical to the evolution of AI. It is the only way to ensure that our digital future is equitable and that we learn the lessons from our recent digital history.”

4. China Plays Ball

China's inclusion on the guest list surprised some, and its attendance perhaps even more so. A delegation from China’s Ministry of Science and Technology, along with representatives from Alibaba and Tencent, joined the summit.

China even signed onto the Bletchley Declaration, agreeing to its principles and pledging to attend future events.

Addressing the concern around China in his closing speech, Prime Minister Sunak noted that “some said, we shouldn’t even invite China. Others that we could never get an agreement with them. Both were wrong.”

“A serious strategy for AI safety has to begin with engaging all the world’s leading AI powers. And all of them have signed the Bletchley Park Communique.”

In emailed comments, Peter van Jaarsveld, Global Head of Production at OLIVER, said: “Talk of U.S. / China is interesting and highlights that on the surface, there is a desire for global coordination, which will ultimately benefit people bringing AI into their day to day. But announcing an intention to collaborate does not guarantee collaboration or concrete outcomes. The summit represents the first tentative steps towards a global approach to AI, but whether it bears any fruit remains to be seen."

5. More AI Safety Summits to Come

The event at Bletchley Park marked the first global summit on AI safety, but more are on the way: signatories to the Bletchley Declaration agreed to hold further such gatherings in the future.

In 2024, there will be AI Safety Summits in South Korea and France.

In his closing speech, Sunak said: “While this was only the beginning of the conversation, I believe the achievements of this summit will tip the balance in favor of humanity.

“Because they show we have both the political will and the capability to control this technology and secure its benefits for the long-term.”

This article first appeared in IoT World Today's sister publication AI Business.


About the Author(s)

Ben Wodecki

Junior Editor - AI Business

Ben Wodecki is the junior editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to junior editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others.
