On Nov. 1 and 2, the U.K. is hosting a two-day AI safety summit at the historic Bletchley Park, 55 miles north of London. Representatives from countries around the world, including the U.S. and China, have been invited, and Prime Minister Rishi Sunak will have the opportunity to showcase the U.K.'s contribution to the global dialogue on AI regulation.
It is the U.K.'s most significant AI summit to date, arriving as policymakers and regulators worldwide grapple with how quickly the technology is advancing. The U.S. and China, both major players in the race to build advanced AI, will be represented. Since the release of ChatGPT by Microsoft-backed OpenAI, governments have grown increasingly anxious to manage AI, and particularly the possibility that it could surpass or undermine human intelligence.
The summit will be held at Bletchley Park, the celebrated landmark where, in 1941, a codebreaking team led by Alan Turing cracked Nazi Germany's Enigma machine. By choosing the site, the U.K. is underscoring its standing as a global innovator.
The key purpose of the U.K. AI summit is to build international consensus on standards for the ethical and responsible development of AI applications. The summit centers on so-called "frontier AI" models: state-of-the-art large language models (LLMs) developed by companies such as OpenAI, Anthropic, and Cohere. It will tackle two principal categories of AI risk: misuse and loss of control. Misuse risks arise when a malicious actor is aided by new AI capabilities; a hacker, for instance, might use AI to create a novel type of malware that evades security researchers, or state actors could employ it to develop dangerous bioweapons. Loss-of-control risks cover scenarios in which AI built by humans is turned against them, dangers that could "arise from advanced systems that we would aspire to be aligned with our values and objectives," the government said.
Notable figures from both the tech and political spheres will be present.
Among them are Microsoft President Brad Smith, OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, Meta AI chief Yann LeCun, Meta President of Global Affairs Nick Clegg, U.S. Vice President Kamala Harris, and members of a Chinese government delegation from the Ministry of Science and Technology, as well as European Commission President Ursula von der Leyen.
Some leaders, however, have declined to attend, among them U.S. President Joe Biden, Canadian Prime Minister Justin Trudeau, French President Emmanuel Macron, and German Chancellor Olaf Scholz. Asked whether Prime Minister Sunak felt snubbed by his international counterparts, his spokesperson denied it, saying the government had brought together the right mix of international representatives and AI experts to examine the risks posed by the technology. The gathering is nonetheless a first of its kind, convening world leaders and AI experts around the technology's risks.
The British government is hosting the summit to provide a forum for shaping the technology's future, with an emphasis on safety, ethics, and responsible development, and Sunak hopes to find common ground for collaboration across countries. In a speech last week, he said AI could prove as influential as the industrial revolution, the invention of electricity, or the arrival of the internet. That potential carries risk, including the possibility that humanity loses control of a so-called "superintelligence" if the technology is not developed safely and responsibly. Sunak is launching the world's first AI safety institute, which will evaluate and test new types of AI to understand the risks they pose, and he plans to establish at the summit a global expert panel, drawn from attending countries and organizations, to produce a comprehensive report on the state of AI science. The decision to invite China, itself a world leader in AI, has become a point of contention, as the U.S. is locked in a tense standoff with the country over technology and trade. International agreement on a technology as intricate as AI is challenging at the best of times, and more so amid friction between two of the summit's biggest attendees.
Washington recently curtailed sales of Nvidia's A800 and H800 artificial intelligence chips to China. Governments worldwide have been drawing up their own strategies for regulating the technology to limit the risks it poses around misinformation, privacy, and bias. The European Union aims to finalize its AI Act, which would be among the first regulations of AI, before the end of 2023 and adopt it in early 2024, ahead of the June European Parliament elections. In the U.S., Biden issued an executive order on Monday, the first of its kind from the government, requiring safety assessments, guidance on equity and civil rights, and further research into AI's effect on the labor market. James Manyika, senior vice president of research, technology, and society at Google, called AI a "transformative technology" with the potential to help resolve many societal challenges, but stressed the importance of making sure its advantages are "broadly distributed." He said Google's aim for the summit is to help countries build a shared understanding of the near- and long-term prospects and threats of AI models, and to promote international collaboration so that AI governance stays consistent. Emad Mostaque, CEO of Stability AI, the British open-source AI company, said the U.K. has the opportunity to become an AI superpower and to ensure the benefits of AI are not confined to the Big Tech organizations. That, he said, can be achieved through a shared view of the positive changes AI will bring and an understanding of the emerging risks, so that innovation proceeds with honesty and systems are put in place to guarantee safety and security.
Some in the tech industry feel the summit's focus is too narrow. By concentrating only on the most advanced AI models, they argue, it misses the chance to hear from a broader cross-section of the tech community. "We are incompletely understanding the situation by focusing only on the current top AI models," Sachin Dev Duggal, CEO of Builder.ai, told CNBC last week. "By only concentrating on the people who are developing those models right now, everybody else is left out of the conversation." Others are disappointed that the summit is devoted to examining "existential threats" posed by AI, arguing the government should prioritize more pertinent, immediate issues, such as the potential for deepfakes to disrupt the 2024 elections.
Stefan van Grieken, CEO of generative AI firm Cradle, likened the conversations about the risks of artificial general intelligence to a fire brigade conference spent planning for a hypothetical meteor strike, urging attendees to focus instead on the "real fires": the more pressing, present-day threats. Marc Warner, CEO of British AI startup Faculty.ai, by contrast, defended long-term strategic planning, calling the focus on the risks of artificial general intelligence "very reasonable." Warner noted that there is no scientific evidence proving such technology is safe, and argued it is preferable for governments to tackle the potential risks before they become a problem.