
AI experts call for more regulation – Expert Q&A

An open letter signed by more than twenty AI experts calls on the government to better regulate the new technology, pointing to low trust and possible harms.

The Science Media Centre asked the authors and some of the signatories about what kind of regulations New Zealand could put in place, and what could happen if we don’t regulate this technology.

Comments are organised by the following themes for easier navigation:

  1. Author comments
  2. Legal considerations and impact on women and children
  3. Impact on Māori
  4. Regulatory options
  5. Additional legal and policy perspectives

Dr Andrew Lensen, Senior Lecturer/Programme Director of Artificial Intelligence, Victoria University of Wellington, comments:

Note: Dr Lensen is an author of the open letter.

What AI regulations do other countries have in place that we could learn from?

“There are a number of examples we could draw on. The most discussed one is the EU AI Act, which employs a risk framework similar to what we are proposing in our letter. I think that is a good starting point, alongside what has been done in Australia. The EU’s act is also very comprehensive, though; we don’t need to have something so complex.”

What could happen if we don’t put AI regulations in place?

“If we don’t regulate AI, Kiwis will suffer. This includes both direct harms, such as deepfakes, fraud, and biased decision making, but also more subtle ones: increases in CO2 emissions, loss of intellectual property, and interference with democracy. We have an opportunity to decide how we want AI to be used in Aotearoa, and we must seize it.”

How urgent is it that we regulate AI?

“AI harm is already occurring. I am asked nearly daily about the latest harms caused by an AI system, and things are not slowing down. Regulating AI quickly would let us reduce the harm caused, while also providing businesses with regulatory confidence and increased public trust.”

Conflict of interest statement: “In addition to my substantive academic role, I am co-director of LensenMcGavin AI.”


Chris McGavin, Director, LensenMcGavin AI, comments:

Note: Chris McGavin is an author of the open letter.

What AI regulations do other countries have in place that we could learn from?

“Most of the world’s jurisdictions that are exploring AI regulation are opting for risk management models. This means that regulatory intensity scales proportionally with the perceived risk of a model or application. Where a model is more likely to interact with vulnerable people or fundamental rights, it is deemed high risk. For instance, a spell checker will have relatively few regulatory requirements in comparison to a facial recognition tool.

“Two very good, and different, examples of this style of regulation are the EU’s AI Act and the proposed Australian mandatory guardrails, both of which would be highly instructive if we were to produce our own AI regulation. However, it is imperative that any regulation we develop is not simply a copy-paste from another jurisdiction, but one suited to our unique context and circumstances, giving ample weight to Te Tiriti o Waitangi and important concepts such as Māori Data Sovereignty.

“New Zealand’s lack of regulatory certainty has likely stifled innovation more than promoted it, and has done very little to improve public trust in AI. This means that any gain that could have been made from becoming a fast adopter could be left on the table. Further, regulatory uncertainty may see AI increasingly used in areas in which it is not suitable, and without proper governance or oversight, which could cause the proliferation of AI-related harm of the kind highlighted in our letter.

“It would be exceptionally naive to continue on our current path: one that favours as-yet-unproven efficiency and economic gains over the mitigation of proven harm. It is time New Zealand took advantage of our laggard status and examined approaches globally to produce regulation of a very high quality, be that through a standalone AI Act or a revision of our existing law.”

No conflicts of interest.


Dr Cassandra Mudgway, Senior Lecturer in Law, University of Canterbury, comments:

Note: Dr Mudgway is an author of the letter.

What could happen if we don’t put AI regulations in place?

“If we fail to regulate AI, we may see an escalation of online gender-based violence that our current laws in New Zealand are ill-equipped to address. “Nudify” apps such as Undress App, advertised across Meta, generate non-consensual sexualised images of women. Elon Musk’s Grok chatbot has been used to virtually undress women in public posts on X without their consent. Meta’s internal documents show how chatbots were permitted to flirt with children and reproduce racist and sexist stereotypes. These harms are compounded by recommender systems that are designed to maximise engagement but often end up pushing harmful material to the top of people’s feeds.

“The most relevant law for addressing online harm, the Harmful Digital Communications Act 2015, was not designed with generative AI in mind. While it can address some abusive content after the fact, it does not deal easily with AI tools that automate abuse, replicate it across hundreds of accounts, or create synthetic sexual images. Likewise, criminal offences such as intimate visual recording under the Crimes Act 1961 do not neatly apply to deepfake abuse. These offences require the victim to be in a place where they have a reasonable expectation of privacy, but with deepfakes there is no such ‘place’, because the image is not real.

“Returning to Meta’s AI chatbots allowing sexually suggestive conversations with children, this raises difficult accountability questions: when the abuse is produced by an AI chatbot, who is the perpetrator? Is it the child themselves who entered the prompt? Or is it the platform that allowed its chatbot to respond in a harmful way?

“Currently, it is unclear whether any New Zealand law can adequately protect women and children from such harms or hold perpetrators and platforms accountable. These gaps leave core human rights, such as gender equality, privacy, bodily integrity, and the right to participate safely in public life, at risk of being systematically undermined.”

No conflicts of interest.


Dr Kevin Shedlock (Ngāpuhi, Ngāti Porou, Whakatōhea), Lecturer, School of Engineering and Computer Science, Victoria University of Wellington, comments:

Note: Dr Shedlock is a signatory of the letter. 

What could happen if we don’t put AI regulations in place?

“The call for AI regulation provides an avenue to establish good governance, offering transformative benefits for New Zealand if managed correctly. This technological shift will undoubtedly impact Māori at all levels of society. Central to any regulation must be a governance model that co-exists with Māori as a Tiriti partner and protects Māori communities. The alternative is an unregulated technology that poses a significant threat: it could be used to manipulate narratives and suppress Māori aspirations.

“The thought of AI generating false narratives about Te Tiriti o Waitangi or creating deepfakes of Māori leaders is a real and dangerous possibility, one that would lead to the widespread dissemination of misinformation. Other challenges include mitigating ‘algorithmic bias’. AI systems are often trained on data encoded with the biases of their creators; for instance, Western algorithm developers may lack understanding of Māori tribal decision-making processes. In the wrong hands, Māori communities risk being harmed by decision-making algorithms – such as biased image scanning deployed by the Ministry of Justice, incomplete records created by the Ministry of Health, or discriminatory systems designed to allocate resources to selected schools.

“We must prepare for an equitable AI future by building capacity and capability within our communities. If this is not addressed, it will impact all New Zealanders, but without careful governance, Māori will be disproportionately affected. Building trustworthy technology is paramount for Māori businesses, academics, and iwi to engage with AI safely and confidently, allowing them to harness its potential. Regulation with good governance will provide a level of transparency that prevents a new form of institutional racism from being created.

“In essence, the push for positive change through regulation is a modern extension of the fight for Māori rights. It is about ensuring this powerful technology serves to uplift and empower Māori communities by protecting their culture, promoting equity, and guaranteeing their sovereignty in the digital age – rather than repeating the patterns of harm and exclusion seen in the past.”

No conflicts of interest.


Ali Knott, Professor in Artificial Intelligence, Victoria University of Wellington, comments:

Note: Professor Knott is a signatory of the letter.

What meaningful actions can New Zealand take to regulate AI? 

“Cutting-edge AI systems are mostly developed by large multinational companies. Overseeing how these systems are developed is a task best undertaken by international coalitions of countries. New Zealand can contribute by playing its part in these coalitions. We do indeed play our part. For instance, New Zealand contributes to AI governance work done in the OECD, in particular by the Global Partnership on AI, which is now part of the OECD. We also contribute to AI policy discussions taking place in the EU. The EU is taking a lead on AI legislation. Its AI Act is the world’s most comprehensive legislation on AI, and its Digital Services Act provides similarly comprehensive legislation for social media platforms, which are largely powered by AI. I co-lead the Global Partnership’s project on social media governance: this project has had significant impacts on both these pieces of EU legislation (see here for a recent summary). Many of my New Zealand colleagues are also involved in international AI governance.

“The letter from Andrew Lensen and Chris McGavin to Chris Luxon and Chris Hipkins calls for a bipartisan effort to set up a national AI oversight body. I fully support this proposal. It would serve two important purposes. Firstly, it would help to coordinate our contributions to international discussions about AI safety. Many countries have national AI Safety / AI Security Institutes; these Institutes are becoming a key vehicle for international conversations about AI. New Zealand doesn’t yet have an AI safety institute; this means we are absent from some important international discussions.

“Secondly, a national body would help us to coordinate AI legislation relevant to New Zealand. New Zealand-specific legislation focusses on how AI is deployed in this country. We don’t need a home-grown ‘AI Act’ that attempts to govern how AI is developed in multinational companies. But we do need coordinated thinking about AI’s impacts in this country, and how to manage them. AI is newly relevant to many of the decisions made by Parliament.

“I’ll give just one example. The Fair Digital News Bargaining Bill, currently up for its second reading, would oblige search engine providers (like Google) to negotiate with New Zealand-based news providers for use of their news content. This is an important general principle, ensuring local providers are fairly remunerated for their content. But Google’s new policy of including an AI-generated summary at the top of its search results raises the stakes enormously. Google is making new use of New Zealand news content in the prompts that produce its AI summaries. And these AI summaries tend to keep Google users on Google, preventing them from following links to local news sites. The point is that this Bill is newly about AI. This fact hasn’t yet been properly recognised on either side of the House. A national AI oversight body would ensure we pay due attention.”

No conflicts of interest.


Dr Joshua Yuvaraj, Senior Lecturer in Law, University of Auckland, comments:

Note: Dr Yuvaraj is a signatory of the letter.

“There are generally three approaches to AI regulation: AI-permissive, AI-restrictive, and AI-neutral. An AI-permissive approach to regulation says the value that AI technology brings to society is worth the costs we might have to pay. This means the government should remove hurdles to entice AI companies to develop and operate their products in that country. Australia seems to be going down this road. The Productivity Commission’s recent interim report suggests significant law changes should be made in pursuit of a projected 4.3% ‘labour productivity growth’ figure for the Australian economy.

“An AI-restrictive approach to regulation considers AI dangerous enough to place safeguards around it. The European Union’s AI Act is the most well-known example. It places numerous restraints on AI companies, such as what standards they must comply with and what information they must disclose, depending on the type of model they are developing. This approach views the government as a protector of citizens from the risks of technology. Companies may, however, flee the more restrictive regimes in favour of more permissive ones, in the same way they might set up in low-tax jurisdictions.

“An AI-neutral approach to regulation is a ‘wait-and-see’ approach: the government wants to determine whether existing legal frameworks are adequate and, if not, what changes need to be made without being too sweeping.

“AI affects so many parts of society, and so many people, that it’s likely different types of regulation will be needed. More urgent is the need for informed policymaking and public discourse about AI. The Government would do well to listen closely to the advice of experts in computer science, engineering, law and the humanities in developing any AI regulations. Otherwise, such regulations will either stymie innovation or fail to protect New Zealanders against the real risks of AI misuse.”

No conflicts of interest.


Dr Michael Daubs, Senior Lecturer in Media, Film and Communication, University of Otago, comments:

Note: Dr Daubs is a signatory of the letter.

“Various forms of artificial intelligence (AI), including generative AI (GenAI), are already being incorporated into many of the tools and digital services people use every day. Search engines such as Google now regularly provide users with AI summaries of search results, and Microsoft increasingly incorporates its AI, Copilot, into its Windows operating system and Office software. Information providers such as the New Zealand Herald use AI to personalise the content displayed to individual users when they access its website, and companies are turning to AI to assist with tasks such as vetting job applications. In short, people are interacting with or affected by AI tools, perhaps without even being aware of it.

“As the letter from Andrew Lensen and Chris McGavin details, however, there is ample evidence of potential problems with AI tools, including ‘hallucinations’ (fabricated information), the perpetuation of gender, racial, and ethnic biases, and potential privacy violations. Although ethical principles and voluntary guidelines may recognise these potentially harmful outcomes, dubious rhetoric about productivity, efficiency, and economic gains resulting from AI adoption may prove too alluring for companies and even government agencies to ignore, and existing legislation, while potentially relevant, may have significant gaps that need to be addressed.

“For these reasons, it is imperative that New Zealand urgently consider binding, statutory, risk-based regulation of AI that addresses accountability, protection of rights, and redress options for people negatively impacted by the outputs of AI tools. Ideally, this work would be overseen, coordinated, and supported by a central agency. While the need for regulation is pressing, the development of that regulation should be carefully considered and evidence-based. Work needs to start now, however, to identify weaknesses in the current regulatory regime so that individual rights, as well as economic interests, are protected.”

Conflict of interest statement: “I worked as a Senior Policy Analyst on the Digital Policy team at the Department of Internal Affairs from September 2023 to January 2025.”


Dr Olivia J. Erdelyi, Senior Lecturer Above the Bar, University of Canterbury, comments:

Note: Dr Erdelyi is a signatory of the letter.

What AI regulations do other countries have in place that we could learn from?

“Yes, several countries have put regulations in place that address certain AI-related aspects. However, the real opportunity for New Zealand lies in looking at rules that go beyond a single jurisdiction or even reflect international consensus. Perhaps the two most important examples are (1) the EU AI Act, which contains a fully-fledged horizontal regulatory regime for AI systems with a primarily product-regulatory focus, and (2) the international AI standards developed by ISO/IEC JTC 1/SC 42, which offer readily implementable guidance on a wide range of AI-related issues.”

What could happen if we don’t put AI regulations in place?

“Simply put, inadequate regulation (too much, too little, or the wrong type) hinders the optimal functioning of an economy. So, if AI regulation is needed and we fail to put it in place, we will not be able to realise the benefits and mitigate the risks of AI. Due to the resulting legal uncertainty, AI adoption will remain below optimal levels; studies show that this is already happening in New Zealand. Thus, it is paramount to regulate AI, and we must do it right.”

How urgent is it that we regulate AI?

“From the previous answer, it follows that until an appropriate AI regulatory environment is established in New Zealand, we will not be able to get the most out of AI and will be at increased risk from its harmful impacts. Hence regulating AI is an urgent task.”

No conflicts of interest.