AI Safety Summit in the UK – Expert Reaction

This post was updated 2/11/23, after the signing of the “Bletchley Declaration”.

The UK, US, EU, China, and Australia have signed a declaration agreeing that artificial intelligence poses a potentially catastrophic risk to humanity.

New Zealand is not listed among the 29 countries signing the “Bletchley Declaration” at the summit, held at Bletchley Park in the UK, which focuses on the safety risks that could arise at the ‘frontier’ of AI development.

The SMC asked local experts to comment.

Dr Andrew Lensen, Senior Lecturer in Artificial Intelligence, Victoria University of Wellington, comments:

“It is great to see this summit take place. I support the focus on “Frontier AI”: the latest AI systems, which have the potential to be applied to many different tasks with only small tweaks to their algorithms.

“Many CEOs and leaders in Big Tech companies prefer to scaremonger around AI taking over the world in the future – but I am much more concerned about the effects it will have in the next 5 years, through technologies such as Frontier AI. Safety issues such as bias, discrimination, and misalignment (where the behaviour of AI systems is not aligned with our society’s views) are immediate and pressing.

“I want to see the summit lead to international safety standards and guidelines on the use of AI that can then be leveraged in countries like New Zealand when forming our own AI regulation.

“New Zealand is floundering in our regulation of AI. Our politicians should be watching this summit very carefully to guide their thinking – while also considering the issues specific to Aotearoa, such as Māori Data Sovereignty and the tendency for commercial AI systems to be optimised for Caucasian demographics.”

No conflict of interest.

Dr Karaitiana Taiuru, Taiuru & Associates Ltd, comments:

“We are all at the crossroads of an evolution that could address inequities faced by Māori and others, if our voices are heard now.

“From a New Zealand Māori perspective, I would like to see some consideration given to the long-overlooked rights of Indigenous Peoples, particularly Māori: intellectual property rights; bias in data and data-driven systems; false and racist narratives published as truth that AI is now using as fact; and how regulating these systems can both protect and enhance Māori and all of New Zealand.

“One way this could be done is by recognising and implementing Te Tiriti rights in all AI in New Zealand, and by working with tech companies that set up in New Zealand to recognise and implement these rights in cooperation with the Māori social and tech sectors. As such a small and developed country, we could lead the world in this area.

“Moreover, this summit is an ideal opportunity for our new government to consider ethical regulation and New Zealand sovereignty that could bring economic and social gains – a chance to prepare our country for a new wave of employment and investment opportunities: investments in education, in how we currently manage our nation’s documents, in how governments operate, and in how to tackle wicked problems such as the economic and financial impacts of systemic colonisation and inequities.

“AI has the potential to even the playing field for everyone.”

Conflict of interest statement: Director of Taiuru & Associates Ltd. Tangata Whenua Representative of Aotearoa Artificial Intelligence Research Association. Member of the Kāhui Māori of AI Forum.

Prof Albert Bifet, Director, Te Ipu o te Mahara – AI Institute, University of Waikato, comments:

“In the AI research world, people largely fall into two camps.

“One group, including experts like Yoshua Bengio and Geoffrey Hinton, is worried about the dangers of AI. They think AI can be risky, and we need to be careful.

“Another group, including Yann LeCun and Andrew Ng, is more worried about too many rules for AI. They think too many rules could stop good AI work and research.

“I think Yann LeCun (Meta, Facebook) has a good point: Bengio and Hinton might be inadvertently helping those who want to put AI research and development under lock and key and protect their business by banning open research, open-source code, and open-access models.

“Fearing AI is not a new thing; in fact, the first warning about AI came from New Zealand in 1863 (!), in a letter from Samuel Butler to the editor of the Press, a Christchurch newspaper.

“Now, at the AI Safety Summit in England, we should talk about these things. But we should also discuss how AI can help us address existential risks such as climate change, war, and poverty. AI can be a tool to make the world better if we use it right.”

No conflict of interest.

Dr Andrew Chen, Research Fellow, Koi Tū – Centre for Informed Futures, University of Auckland, comments:

“With the US President’s Executive Order on Safe, Secure, and Trustworthy AI fresh on people’s minds, it will be interesting to see the outcomes of the AI Safety Summit in the UK this week.

“While there has been a lot of discussion about the existential risk that AI might bring (i.e. the risk that humans become extinct), much of the regulatory protection has been more near-term and practical in nature, such as requiring safety testing of AI systems, mandating transparency and accountability, or watermarking AI outputs. In several jurisdictions, heavy regulatory pressure has been placed on the largest actors only (e.g. the big tech companies), on the basis that their systems have the widest impact – which leaves room for a small start-up with outsized impact to create significant risk.

“A significant theme of the AI Safety Summit is unpredictability – that ‘leaps’ in AI capability are often unforeseen even by those actively working in these sectors, and may arise from unexpected sources. Practical regulations for current-day AI systems, like generative AI for text and images, are now relatively well understood (and need to be negotiated and then legislated in a race against the continuing advancement and use of AI technology); how we as a global society protect ourselves from threats unknown and unseen is the more interesting potential outcome of the AI Safety Summit.”

No conflict of interest.

Dr Collin Bjork, Senior Lecturer in Science Communication, Massey University, comments:

“The AI Safety Summit has identified and described many key risks posed by the latest AI tools like ChatGPT, Claude and Bard (which they call “frontier AI”). But the word “Indigenous” does not appear anywhere in their 45-page discussion document on AI safety. It’s essential that Indigenous voices and perspectives be included in influential discussions like this because many of these tools threaten or violate Indigenous data sovereignty. Indigenous data sovereignty is important because, historically, non-Indigenous groups have used Indigenous data to subjugate Indigenous communities, including here in Aotearoa.

“But the importance of including Indigenous voices in discussions of AI risk is not just about protecting Indigenous communities. It’s also about imagining how to develop new AI tools that operate on totally different ideologies. For example, the AI Safety Summit repeatedly mentions the lack of “sufficient economic incentives” to develop safe AI. This indicates that using the marketplace to govern the development of AI actively endangers humans. While regulation is one response to the failure of the marketplace, another response is to create tools that operate on totally different ideologies, like Te Hiku Media’s Māori language speech recognition tool. Led by a commitment to Indigenous data sovereignty (rather than profits), Te Hiku Media crowdsourced audio from their communities to develop an effective speech recognition tool that is built and owned by Māori.

“In short, regulating existing technologies is not the only response to AI risk. If we really want to create a safe future with AI, then we also need to invest in building new tools governed by ideologies other than Silicon Valley’s “move fast and break stuff.” Today’s developers could learn a lot from Indigenous communities. But they need to first include Indigenous communities in conversations like the AI Safety Summit.”

No conflict of interest.

Dr Pan Zheng, Senior Lecturer, UC Business School, University of Canterbury, comments:

“After reading the discussion paper “Capabilities and risks from frontier AI”, I feel there are two distinctive aspects of AI development and safety. First, AI infrastructure is no longer just AI by itself – computers and algorithms – but computers and humans together: artificial intelligence plus human intelligence. The trend is towards collective intelligence (CI). The involvement of human expertise is the most vital factor in eliminating risk and safeguarding the use of AI. MIT established its Center for Collective Intelligence in 2006 and is pioneering research in that direction.

“The second aspect concerns the data used in training AI. Training AI algorithms is like teaching children: it is particularly important to understand and use the data in the right way. In many instances, humans can tell whether data contains facts or opinions; AI cannot make that distinction by itself. To ensure the soundness of AI, researchers should always train AI models with factual data. However, distinguishing between facts and opinions in the data can sometimes be a challenge even for humans.

“In short: 1) human experts are important in developing low-risk AI – hence collective intelligence; and 2) the data used in training needs to present facts rather than opinions, to ensure the integrity of the AI and keep it impartial and unbiased.”

No conflict of interest.

Associate Professor Jeremy Moses, Department of Political Science and International Relations, University of Canterbury, comments:

“While discussions around the use and regulation of AI technologies are of pressing importance, it should be noted that a lot of the language around existential risk, safety, trust, and responsibility comes from those with vested interests in the ongoing development and deployment of these technologies.

“In fact, there is a broad and ongoing debate amongst many notable figures in the AI industry over where the regulatory focus should lie, with accusations flying as to who stands to benefit from representing the risks of AI in particular ways. In this context it is worth being sceptical about the over-representation of ‘big tech’ companies and researchers at the talks in the UK and elsewhere, as well as the proposed commitments to securing the public against ‘existential risk’ that are said to be posed by these technologies.

“AI safety, from this point of view, may be more about protecting the interests of certain corporations and researchers in this field, rather than working to protect the general public against harms that such technologies are already generating.”

No conflict of interest.

Hema Sridhar, Strategic Advisor – Technological Futures, Koi Tū: The Centre for Informed Futures, comments:

“AI has captured mainstream interest since the release of ChatGPT nearly a year ago. Much of the discourse to date has focused on the extremes – either the opportunities and benefits generative AI offers, or the existential risks and threats it poses.

“Navigating this uncertainty has been a challenge faced by all – government, industry, academia and society. With many nations grappling with the best way forward, the UK AI Safety Summit, as well as President Biden’s executive order, presents a further step towards a common understanding of the risks and the measures to mitigate them.

“However, it is essential that any proposed measures aren’t only focused on addressing immediate concerns but adequately accommodate the long-term risks and implications across social, economic and geostrategic dimensions.”

Note: Hema Sridhar and Sir Peter Gluckman have authored a discussion paper ahead of the AI Safety Summit.

No conflict of interest declared.

Associate Professor Adrian Clark, School of Product Design, University of Canterbury, comments:

“Within the creative tech and screen industries, there has been considerable concern about the future of work with the rise of Generative AI. Text-based Generative AI such as ChatGPT can write functioning computer code and engaging narratives and stories from simple text prompts. And with just a few lines of text, image Generative AI such as Midjourney can create artwork in a seemingly endless variety of styles – almost indistinguishable from works created by professional artists – and can even produce photographic-quality images which are hard to distinguish from reality.

“The way video games, film, TV, books, and other forms of entertainment media are created is already changing rapidly, with opposition to these changes clearly evident in this year’s Writers Guild of America (WGA) strike. Although the WGA won protections restricting the use of AI in its industry, the technology will only continue improving, and whether such protections alone will be enough to protect jobs is uncertain. I will be very interested to hear the ideas from the AI Safety Summit on how to protect and support workers in industries which are increasingly at risk of being outsourced to AI.”

Conflict of interest statement: Associate Professor Clark is a founder and former employee of creative tech company QuiverVision.

Giulio Valentino Dalla Riva, Senior Lecturer in Data Science, University of Canterbury, comments:

“The AI Safety Summit declaration is a good starting point, though surely not the definitive word on the topic. In the declaration, there is much more emphasis on current, daily uses of AI than on apocalyptic sci-fi scenarios.

“The international nature of the challenge has been recognised, and this is reflected in the list of nations represented in the agreement. Yet many countries are missing from that list, and truly international adoption remains a challenge.

“The declaration recognises that developers have a “strong responsibility for ensuring the safety of […] AI systems”. A framework of regulations and incentives (carrots and sticks) needs to be carefully designed around this principle. And, as the declaration notes, this framework can only come from the collaboration of “nations, international fora and other initiatives, companies, civil society and academia”.

“Hence, I really hope that the “internationally inclusive network of scientific research” that is going to be created to tackle this challenge will have ample representation of civil society, and a truly inclusive diversity of voices, starting with Indigenous Peoples’ voices.

“Overall, the meeting walked a tightrope, trying to balance the futuristic risk of super-human robots against the immediate harms and risks posed by existing AIs.

“Providing a much-needed grounding for the AI ethics discussion, Dr Rumman Chowdhury reminded attendees that it won’t be AI that solves poverty: we have long had the means of solving it; it is the political commitment that has been missing so far.

“In discussing the limitations and risks of Large Language Models, Dr Abeba Birhane highlighted how these models embody racist and gender biases. In an interactive demo, Dr Sasha Luccioni and Dr Kristian Lum showed how asking generative AIs to write bed-time stories for kids (and other applications of Text-to-Image algorithms) produces results that can transmit harmful societal biases.

“Public discussions on technology’s impact on society are crucial. However, the focus on “Frontier”, futuristic, sci-fi AI scenarios can overshadow the very real challenges posed by today’s AI and data science technologies. I’m less scared of super-human robots than of the use of surveillance software to stifle political dissent. For instance, the “Risks from Loss of Control over Frontier AI” panel invites us to ponder “whether and how very advanced AI could in the future lead to loss of human control and oversight”. I have deep regard for our philosophers, futurists, and speculative fiction writers. But the truth is, many of us already lack control and oversight over the current AI algorithms that shape our lives.

“From credit scores and predictive policing to facial recognition and sentiment analysis of work chats, AI is steering our life paths, and we are often not aware of it. What’s more concerning is that control and oversight are often the prerogative of a privileged few. This is less about the technology itself and more about how it’s used in society. The frontier we should be wary of might not be the next big AI breakthrough but the frontier of neo-liberal exploitation, ever expanded by current AI systems. While AI has made the repression of political dissidents easier and has amplified discrimination in hiring processes, it has yet to show significant strides in alleviating poverty, combating climate change, or preventing wars, despite bombastic promises.

“The voices present at AI summits matter immensely. There’s a danger that commercial interests will dominate the AI ethics conversation, overshadowing other crucial perspectives. In Aotearoa, Māori are at the forefront of both AI innovation and its ethical considerations. Globally, Indigenous Peoples, Black communities, LGBTQ individuals, and other marginalised groups must have a more prominent role in these discussions. Without diverse voices in AI ethics and regulation talks, we are doomed to overlook vital issues.

“Indeed, most AI technical advancements nowadays come from a handful of companies, with academia’s role diminishing since 2012. A sceptical perspective might suggest that established AI giants are using the Summit to push for regulations that could hinder emerging competitors from innovating and challenging their dominance. At the same time, designing regulations that would have an effect on companies in different jurisdictions, or on organisations under less public scrutiny (e.g. the military), is an open challenge.

“In the preliminary documents of the Summit, people often seem to be viewed merely as consumers, and democratic erosion is reduced to “a loss of consumer choice”. I’m eager to see the outcomes of the summit, especially if it addresses the glaring omissions in its preliminary documents: terms like “war”, “colonialism”, “racism”, and “economic disparity” are notably absent. And the risk of increased capacity for weapons development seems to be confined to “bad players” and “terrorists”. The goal of the Summit to nurture the development of “AI for public good” is admirable. Yet the challenge is huge, and it requires more than just superficial correction.”

Conflict of interest statement: Giulio Valentino Dalla Riva is the Director of Baffelan – Data Climbing, a data science consultancy company. Dalla Riva is a founding member of the Digital Democracy Institute and a Principal Investigator of Data Fluencies.