Government use of Artificial Intelligence – Expert Reaction

New Zealand is a leader in government use of artificial intelligence, but regulation is needed to ensure it is used safely, researchers say.

A University of Otago-led study, funded by the Law Foundation, looks at predictive algorithms – including machine learning and data mining techniques – and considers whether issues around accuracy, bias and discrimination, transparency, human control and privacy have been accounted for.

The authors say New Zealand should consult on and establish a new regulator to oversee our government’s algorithm use.

The SMC asked experts to comment on the report.

Dr Amy Fletcher, Associate Professor of Political Science and International Relations, University of Canterbury, comments:

“Government Use of Artificial Intelligence in New Zealand is an important new report from the University of Otago. As we become an ‘algorithmic society’, increasingly reliant upon Big Data, machine learning, and social media platforms, it is crucial that citizens understand both the possibilities and limitations of these tools.

“Effective government use of AI could lead to more transparent, equitable, and efficient delivery of core services. However, without robust regulation and tech literacy across the public sector, we risk the reinforcement of bias, inequality, and systemic racism. It is imperative that the innovative and financial potential of the tech sector also be balanced against the rights of citizens and governments to have reasonable access to the algorithms that drive key decisions in legal or hiring disputes. Globally, these issues of transparency and fairness will likely become urgent in an era of autonomous lethal weapons and weaponised AI.

“This report does a real service in introducing the core issues and providing a comprehensive survey of New Zealand’s current regulatory landscape. Hopefully, this report will inform a process of collectively considering how we can enable algorithmic literacy for all New Zealanders from primary school through to lifelong adult learners.”

No conflict of interest declared.

Associate Professor David Parry, Head of Department, Computer Science, AUT, comments:

“The report understates the risks from the use of algorithms/AI in government. Unfortunately, most decision-makers have very little understanding of how these algorithms work or what the results actually mean. Bias is caused by data selection, the right to opt out of data collection, existing bias in decision making and inappropriate choice of algorithm. Only open, well-designed trials will allow a true assessment of the value of these systems; they cannot be assessed effectively by simply being inspected and complying with a set of pre-existing rules.

“A suitable regulator would behave like a medicines agency, requiring proof from independent studies that the algorithm is effective, proof that it is being used correctly, and surveillance of outcomes, along with the right to stop an algorithm being used. Such a regulator would benefit from responding to the very thoughtful and insightful views coming from Māori groups, for example. These ways of thinking about collective and personal rights and benefits are applicable to everyone and, I believe, are leading the way to acceptable models of use.

“New Zealand has an exceptional opportunity to get this right and become a world leader in the use, assessment and development of algorithmic approaches in government if we are prepared to take a scientific and inclusive approach.”

No conflict of interest.

Dr Benjamin Liu, Senior Lecturer, Department of Commercial Law, University of Auckland, comments:

“This report comes at the right time, as the legal and policy issues of artificial intelligence are becoming more and more pressing. Indeed, government use of AI has sparked myriad controversies and objections. Take facial recognition as an example. Two weeks ago, San Francisco banned local law enforcement from using facial recognition. Last week, an office worker in the UK launched the first legal challenge over police use of surveillance equipment because his picture was taken while shopping.

“While AI technologies raise difficult legal and ethical questions, we must not forget the huge potential AI can offer. For example, police in New Delhi recovered 3,000 missing children within just four days after using facial recognition software. And back home, we all enjoy the convenience brought by eGate when we pass through Auckland International Airport.

“In the end, the key question is not whether to use or ban AI, but how to preserve the fundamental values we share in our society. This will require policy-makers, lawyers, technologists and the AI industry to work closely together to ensure the use of AI is consistent with current laws and regulations, and to make new rules when the need arises.”

No conflict of interest.