Towards trustworthy and trusted automated decision-making in Aotearoa

Report | 2021-04-16 | By: Digital Council for Aotearoa


Supporting documents:

  • Easy Read version of our report
  • Report infographic: DC Report Infographic 2021 (Adobe Acrobat PDF file 154 KB)
  • Te Kotahi Research Institute: Māori perspectives that informed our recommendations
  • Literature review that informed our research
  • Scenarios used in workshops that informed this report: Trust in ADM workshop scenarios (Adobe Acrobat PDF file 316 KB)

NZ Sign Language videos: Welcome; Summary of Findings and Recommendations


Tēnā koe Hon Dr David Clark, Minister for the Digital Economy and Communications, Minister of Statistics

Imagine an Aotearoa where digital and data-driven technologies are designed and used with ambition and care in ways that grow trust, embrace innovation, increase equity and give effect to Te Tiriti o Waitangi. Where individuals, whānau and communities have the resources and opportunities to achieve their aspirations, innovators build tools that improve the lives of people and the planet, and the government ensures technology is used to build a more equitable and inclusive Aotearoa New Zealand.

As you know, our role as the Digital Council for Aotearoa is to advise you about how to make this future vision a reality.

We are excited to present you with this report and set of recommendations from our research project for 2020, which forms part of the Council’s wider work programme. This project was led by the Council’s Research Leads, Marianne Elliott and Colin Gavaghan. Our research topic was trust in digital and data-driven technologies. This is a vital foundation for achieving an equitable, thriving digital future for Aotearoa. As this is such a big topic, we chose automated decision-making (ADM) as a case study to help focus our research.

Broadly speaking, automated decision-making refers to decision-making systems where parts of the process are carried out by computer algorithms.

ADM is everywhere and has big impacts on our lives. It informs what we see on our social media feeds and what price we are offered when booking a hotel online. It also informs big decisions, like whether people are approved for parole. ADM is not just about computers and automation. ADM systems are built by people, who also decide what data to feed into them, and how to make decisions based on the outputs. Those people, in turn, are part of larger institutions and systems with reputations, power relationships and histories. All these elements influence how people see, and whether they trust, ADM systems.

There has been research on many aspects of trust and ADM, but most of it was produced in an academic context and rarely includes hearing directly from the people most affected by ADM. We wanted to help fill this gap. Our team ran community workshops to hear from a diverse range of people, most of whom do not usually get a seat at the table when it comes to designing and implementing ADM systems. Their voices are central to this report.

Workshop participants not only shared their insights about how automated decision-making should be used now, but also their vision for building a better digital future that centres on the needs and aspirations of people.

We learned that many workshop participants were uncomfortable with aspects of the way ADM systems are used today, especially when personal data is used to make big decisions. We know there are examples of good practice around ADM already, but as far as we can tell, these do not yet address all the concerns and suggestions raised in the workshops.

We built on what we heard at the workshops, and from a range of other experts, to develop recommendations to increase people’s level of trust in the use of ADM. These are practical suggestions, but they are also bold. They ask people designing and implementing ADM systems to fundamentally rethink how things are done. We recognise that making big changes is not easy, nor is it fast. But it’s worth it, to build a better future for Aotearoa and all New Zealanders.

We want you to pick up these recommendations and encourage and support agencies to run with them. We are also here to help. It is not our style to drop a report on your desk and walk away — we look forward to contributing our skills and expertise as these recommendations are made a reality.

Mitchell Pham (Chair), Colin Gavaghan, Kendall Flutey, Marianne Elliott, Nikora Ngaropo, Rachel Kelly and Roger Dennis
Digital Council for Aotearoa New Zealand

Summary of findings

We spoke to over 180 people throughout Aotearoa about different situations where ADM has specific impacts on the lives of individuals, whānau and communities. We heard loud and clear that ADM and other decision-making systems should be built for — and with — the people who are impacted. This is essential for ensuring trusted and trustworthy systems.

When workshop participants talked about ADM, they focused on more than the technology itself. Instead, they talked about algorithms as being part of a much wider system that also included the way data is collected and used, the people and organisations that develop the systems, and the interventions resulting from decisions. Participants thought that ADM, with its ability to process data fast and at scale, is well suited to some situations. However, they were clear that it can be harmful in other situations and can intensify pre-existing bias and discrimination — especially when the decision has major impacts on the lives of individuals and their whānau.

Participants provided clear and concise suggestions of what would make them feel more comfortable in situations where ADM is used. They want systems that are built to meet the needs and reflect the values of the communities impacted. To achieve this, it is important to participants that people who have similar lived experience to them are involved in the development of decision-making systems and the interventions that result from them. Participants told us they would be more comfortable if there was transparency and clear communication about how the government uses ADM and how it is used to make decisions.

We took these clear and urgent suggestions and used them as a basis to develop a set of recommendations to the government. We looked at work already underway and the barriers preventing systemic change, and gathered input from experts to inform our thinking.

To build towards a thriving and equitable digital future for Aotearoa, the Digital Council recommends that the government:

Recommendation 1: Fund community groups to lead their own data and ADM projects.

Recommendation 2: Fund and support a public sector team to test and implement ADM best practice.

Recommendation 3: Establish a public sector ADM hub.

Recommendation 4: Work collaboratively to develop and implement private sector ADM rules and best practice.

Recommendation 5: Build ADM systems from te ao Māori perspectives.

Recommendation 6: Build a diverse digital workforce.

Recommendation 7: Increase the digital skills and knowledge of public sector leaders.

About the research

In 2020, the Digital Council led a research project to help answer the question: What is needed to ensure people in Aotearoa New Zealand have the right levels of trust required to harness the full societal benefits of digital and data-driven technologies?

As this is such a big topic, we have focused on automated decision-making as a specific case study. This specificity has allowed us to take a deep dive in one particular area which has significant impacts on people’s lives. We expect that some of the insights gained from looking at automated decision-making in depth will have wider relevance and application to other digital and data-driven technologies.

How we did the research

We partnered with Toi Āria: Design for Public Good, a research unit at Massey University, to facilitate conversations about ADM around Aotearoa. The team at Toi Āria ran a series of in-person and online workshops with 186 people. This is outlined in more detail in the ‘About the participatory research’ section of this report.

To support this participatory research, the Digital Council and its research team gathered additional information from a range of sources. The Digital Council commissioned two literature reviews. The first was carried out by Brainbox to understand the trust and ADM landscape, and inform the development of scenarios to discuss with communities. The second was by Te Kotahi Research Institute and looked at Māori perspectives on trust and ADM, including Māori data sovereignty. Te Kotahi also facilitated an expert wānanga about Māori perspectives on trust and ADM. We also carried out a range of interviews with experts working on ADM and digital issues, including from across the New Zealand public service and the United Kingdom, Canadian and Singaporean governments.

These perspectives all informed the work of the research team, and helped us understand what a more equitable, sustainable, and thriving digital future could look like. Our job as the Digital Council has been to present what we heard, pull out the key themes, and work with experts to develop a set of recommendations to bring these aspirations to life.

Scope of our research

ADM is a really big topic. The scenarios discussed at the participatory workshops focus on uses of ADM which include personal data and decisions that directly affect people's lives. They also focus primarily on government applications of ADM, with a few examples from the private sector. We made this scoping decision to reflect that the government holds significant datasets about the lives of communities, whānau and individuals. In addition, many decisions made by government agencies have direct impacts on people, and often affect big life events. This includes getting a visa, being up for parole, or receiving different levels of government support and social services.

In this research, we focused primarily on understanding the levels of trust that people subject to, or impacted by, ADM have in those systems. We recognise that there are many other places in an ADM system where trust is needed, for example by people developing, procuring, and deploying these technologies. Although not our main focus, we hope that some of our suggestions may increase their levels of trust in ADM too. Based on the findings of the Brainbox literature review, we decided that trust and trustworthiness would be a more helpful framing for this piece of research than ‘social licence’. 

The analysis and recommendations in this report reflect the scope of our research. There are many applications of ADM which we have not addressed in this research, for example environmental monitoring, predictive maintenance, or pricing algorithms. However, we think our analysis and many of the recommendations are applicable not only to wider applications of ADM, but also to how people develop and use digital and data-driven technologies more broadly.

Supporting documentation

This report is a high-level overview of the issues we are exploring and the research we carried out and commissioned this year. It does not cover all the issues we have explored since April 2020, or the full range of comments we heard from the workshops. For this reason, a suite of supporting documentation will sit alongside the report on the Digital Council’s website. This includes the two literature reviews commissioned as part of this research and the full scenarios that were tested with participants at the workshops.

Trust and automated decision-making

Digital and data-driven technologies underpin all aspects of our lives. They can be used to increase equity, inclusion and wellbeing for everyone. People can also design and use these technologies in ways that can cause harm and lead to exclusion, either on purpose or as a result of not taking a broad range of issues into consideration when developing them.

As the Digital Council, we think ensuring that digital technology is trusted and trustworthy is central to realising a thriving, equitable future for Aotearoa. There has been a lot written on the subject of trust, and many definitions of the concept.

In this report, when we talk about trust, we are talking about people feeling comfortable and confident when they are affected by other people’s decisions or actions. 

Trust is relevant in all types of power relationships — whether it’s between landlords and tenants, partners in a relationship, or people and their government. In most of the scenarios we explored there was a clear power differential, and this is reflected in our research. The organisations deploying ADM systems generally had the power to affect and influence people’s lives and wellbeing with the decisions they were making. We also note that trust is never absolute — it is conditional and relational.

Introducing automated decision-making

Broadly speaking, ADM refers to any decision-making process where some aspects are carried out by algorithms. An algorithm is a set of instructions designed to perform a specific task. For this report, we are talking specifically about digital algorithms, which are the set of instructions a computer follows to solve a problem by analysing data.
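To make this definition concrete, a digital algorithm can be only a few lines of code. The sketch below is purely illustrative (the task and data are invented for this example, not drawn from any system discussed in this report):

```python
# A minimal digital algorithm: a fixed set of instructions a computer
# follows to solve a problem by analysing data.
def average_weekly_hours(timesheets):
    """Analyse (week, hours) records and return the mean weekly hours."""
    total = sum(hours for _, hours in timesheets)
    return total / len(timesheets)

print(average_weekly_hours([("week 1", 38), ("week 2", 42)]))  # prints 40.0
```

Real ADM systems chain together many such instructions, but the basic shape is the same: data in, a defined procedure, a result out.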

Algorithms can play a variety of roles in ADM systems. Often, they will provide scores or recommendations to human decision-makers — assisting, rather than replacing, humans. Most uses of ADM by New Zealand government agencies are cases of assistance rather than replacement. The 2018 Algorithm Assessment Report states that, “Humans, rather than computers, review and decide on almost all significant decisions made by government agencies.” 

In other contexts, humans have less involvement in making decisions. Humans do not routinely check the ranking of results when we carry out a Google search, or the recommendations on our social media feeds. Ultimately, what it means to talk about ‘automated decision-making’ will depend a lot on the context.
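The difference between assisting and replacing comes down to where the final decision sits. In this sketch (the fields, scoring rule and threshold are all invented, not taken from any agency’s system), the algorithm only recommends a queue; in both branches a person still makes the decision:

```python
# Illustrative only: an algorithm that assists rather than replaces.
# All fields and thresholds here are invented for the sketch.
def priority_score(application):
    """Produce a simple numeric score from an application record."""
    score = 2 if application["complete"] else 0
    score += min(application["years_waiting"], 5)
    return score

def route(application):
    """The algorithm recommends a queue; a human still decides the outcome."""
    if priority_score(application) >= 5:
        return "human review: high-priority queue"
    return "human review: standard queue"

print(route({"complete": True, "years_waiting": 4}))
```

In a fully automated setting, the human review step would be absent and the score alone would determine the outcome, which is why the assist/replace distinction matters so much for trust.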

Outside of individual decisions, people also shape the way algorithms work and the wider systems they are embedded in. For example, people determine which decisions need to be made, and what criteria to use to make them. ADM systems are designed and built by people who decide what data sets and algorithms to use for different situations. People also decide what data to collect, what actions to take once a decision has been made, and how — and whether — to communicate about the ways their organisation uses ADM to inform decision-making.

ADM has been around for decades and is part of our day-to-day lives. ADM helps inform who gets a bank loan, when planes should be maintained and what TV shows or movies you are recommended on streaming services. The uses and impacts of ADM are increasing as more data is collected and stored about people and the wider world, and as computing power and storage increase. ADM systems can carry out tasks like data analysis much faster and at a much larger scale than people can by themselves.

There are some uses of ADM that have been controversial due to potential bias or negative effects on people. An example is facial recognition technologies, which can exhibit systemic racial and gender bias, and have been banned in some American cities.

Te Kotahi Research Institute notes in its literature review, “While there are certainly many potential benefits to using ADM, there is also significant potential for them to (re)produce harm for Māori. Primarily because, technologies rely on the availability of data to inform their processes.”

Each application of ADM has differing levels of complexity, types of data inputs, and impacts on people and society. This diversity means there is not one set of risks and benefits that covers all ADM applications. Risks, benefits and trust levels are context-specific. They will likely vary depending on the particular application of ADM, the potential impacts, the types of data and technology used, and who is designing and implementing the systems.

The benefits and risks of ADM in a New Zealand context are outlined in more detail in the report Government use of Artificial Intelligence in New Zealand.

Trust and automated decision-making

It is important that systems are not only trusted, but also trustworthy. While trust can be built in a number of ways that may not require a system itself to be trustworthy, for example through engaging branding and marketing, this is not our goal. Our research and recommendations aim to move Aotearoa towards a place where ADM systems are only used in ways that are appropriate, reliable and accurate, mitigate negative bias, and are safe, just and effective. ADM systems, and the people who create and use them, need to earn our trust.

The government uses ADM in many parts of its work, including in criminal justice, immigration and social development. Ensuring there is trust in the way that ADM is developed and used is vital. Public Service Commissioner Peter Hughes says, “The public service can’t operate without the trust and confidence of the people we serve… Without public trust we lose our licence to operate.” 

If ADM systems are not trusted, or if they are seen to be unjust, there can be public backlash that sometimes results in those systems being abandoned, and trust in these technologies can be undermined more generally. Recent examples include the controversial use of ADM to determine students’ final results in the United Kingdom and the ‘robodebt’ programme in Australia.

Trust is not the only way to frame these issues

Trust is not the only framing people use to understand these issues. We learned from this research that it might not even be the most appropriate framing in some contexts. Ideas about trust also overlap with a range of other concepts and values.

The report from the Māori expert wānanga (page 33) noted the English term ‘trust’ does not have a fully equivalent kupu (word) in te reo Māori. In the report from the wānanga, the authors note the importance of, “Being able to frame questions in ways that comport with Māori concepts and values, allowing for discussion and debate within a te ao Māori view and from the point of view of Māori interests as opposed to having the discussion framed in a way that is not easy to reconcile with Māori ways of doing and thinking about the relevant issues.” 

In their research, Te Kotahi introduces (page 34) the mana-mahi framework as a potential model for framing conversations about people and technology through a te ao Māori lens. While it was developed to organise roles and responsibilities in the Māori Data Sovereignty space, Te Kotahi suggests that the mana-mahi principles are broadly applicable to the ADM conversation. The six principles in the framework cover both governance (mana) and practice (mahi), and include concepts like whanaungatanga (obligations), kotahitanga (collective benefits) and manaakitanga (reciprocity). The concepts all intersect with issues discussed in our research, like trust, power, and relationships.

We recognise that there are many different ways into this issue. Starting our research with a specific question about trust offered us a starting point for this conversation. It also provided room for participants and experts to offer up their perspectives about the other really important factors or values needed for them to feel comfortable with ADM systems and decision-making processes more broadly.

About the participatory research

During September and October 2020, the team from Toi Āria: Design for Public Good, a research unit at Massey University, facilitated a series of in-person and online workshops across Aotearoa called ‘Algorithms and decisions, where do you stand?’

Workshops were held in Christchurch, Wellington, Palmerston North, and online with participants from around Aotearoa. The online workshops were held over Zoom using a digital collaboration tool called Miro.

Who we spoke to

For the workshops, the Digital Council wanted to hear from people who are most affected by ADM systems but whose insights and expertise are not often listened to when it comes to their development or use. We aimed to speak to people with a range of experiences and backgrounds, and worked with organisations and community advocates to recruit workshop participants from their stakeholder communities.

Workshops were attended by people from the following groups:

  • Blind and vision impaired people
  • Ethnic community leaders
  • Ethnic community youth
  • General public
  • Māori and Pacific youth
  • Pacific youth leaders
  • Women with migrant and refugee backgrounds
  • Whānau Ora navigators
  • Young people with care experience.

We knew that the participatory component of this project would not be statistically representative of Aotearoa, partly as a result of Covid-19 restrictions and partly because of resource constraints. For example, we did not hold specific workshops with people living in Auckland, people from the wider disabled community, or people in rural areas. We did, however, engage widely outside the workshops, for example with the Disabled People’s Organisations Coalition, and insights from this and other expert groups informed our analysis.

Demographics of participants

We asked workshop participants some questions about themselves to help us understand the demographics of the people we spoke to.

It was important to us that participants could self-identify around ethnicity and disability. This let us better capture the broad range of participants’ life experiences. For instance, instead of just providing ‘Pacific’ as an option, we were able to see that participants had diverse backgrounds across many Pacific nations. The higher percentage of female than male participants is something we often see in voluntary community workshops, and is not necessarily indicative of which genders are more affected by algorithms. While only one of our workshops specifically brought together disabled participants (blind and vision impaired people, in this case), there were people across all workshop groups that identified as disabled or having an impairment. This broad spread and overlapping of demographic groups highlights the intersectional nature of the topics we discussed.

Workshop participant demographics table, showing age group, gender, level of confidence in using technology, identity as being disabled or having an impairment, ethnicity, and rural or urban location.

Workshop methodology

In each workshop, participants were given a number of scenarios based on real-world situations where algorithms are used. While the Council’s research question uses the terminology ‘automated decision-making’, the workshops acknowledged that, for most people, the word ‘algorithm’ is more familiar. This choice of wording was reflected in the workshop conversations, and is used throughout this section of the report.

Toi Āria facilitators used the Comfort Board method of research they developed in 2017 for public engagement about the use of personal data. (This methodology was developed for the Data Futures Partnership’s Our Data, Our Way project.) This method provides a structure for meaningful conversations about subject matter that is both technically complex and potentially emotionally challenging.

We developed six scenarios for the workshops, which outlined situations where people were subject to, or affected by, decisions that were informed by algorithms. These were delivered in two parts, each with different considerations about the use of data and impacts on people’s lives. Workshop participants were asked about their levels of trust in, and perceived benefit of, the use of algorithms for each scenario. Participants were then invited to identify what would increase their comfort. Finally, they were asked to identify and prioritise the themes and concerns they consider most important to increasing comfort.

Discussions took place in small groups, each with one or two facilitators. Participants were encouraged to have open and frank discussions in the workshops on the basis that the findings are reported anonymously to protect individual privacy.

An illustration of the Comfort Board: a light blue quadrant with "benefit" (1-7, low to high) on the vertical axis and "trust" (1-7, low to high) on the horizontal axis. Written diagonally across the quadrant, from the bottom-left (low) corner to the top-right (high) corner, are the labels "Very uncomfortable", "Uncomfortable", "Comfortable" and "Very comfortable".
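As a rough sketch of how the board’s layout works, the function below maps a trust rating and a benefit rating (each 1-7) to the comfort bands that run along the board’s diagonal. The band boundaries here are our own simplification for illustration, not Toi Āria’s actual method:

```python
# Illustrative sketch of the Comfort Board layout: comfort rises along
# the diagonal from low trust/low benefit to high trust/high benefit.
# The band boundaries below are invented for this sketch.
def comfort_label(trust, benefit):
    """Map two 1-7 ratings to a comfort band along the board's diagonal."""
    position = trust + benefit  # 2 (low/low corner) to 14 (high/high corner)
    if position <= 5:
        return "Very uncomfortable"
    if position <= 8:
        return "Uncomfortable"
    if position <= 11:
        return "Comfortable"
    return "Very comfortable"

print(comfort_label(2, 2), comfort_label(6, 7))  # Very uncomfortable Very comfortable
```

In the workshops themselves, participants placed themselves on the board and discussed their placement; the point of the method is the conversation, not a formula.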


About the scenarios

The scenarios were developed by the Digital Council research team, and drew on the literature review carried out by Brainbox and publicly available information.

The scenarios were fictional two-part narratives based on real-world uses of algorithm systems. Each narrative was designed to explore a different type of algorithm system, in a way that allows participants to quickly understand the broad strokes and to imagine themselves or their whānau in the scenario. The algorithms discussed in the scenarios provided a jumping-off point for teasing out generalisable observations about ADM more broadly.

The scenarios were:


Streaming services

This scenario was based around algorithms used to inform what music and movies are recommended to people on streaming services. The first part invited participants to consider their comfort with their data being used to curate and recommend content. The second part explored their comfort with providing more information to the streaming service provider to help refine its recommendations.


Recruitment

This scenario was based around search engine and CV filtering algorithms. The first part asked participants to consider their comfort with a search engine that sorts job search results and collects personal data in the process. The second part explored their comfort with an algorithm being used to filter their CV as part of a recruitment process.

Youth support

This scenario was based around the Not in Education, Employment or Training (NEET) algorithm used by the Ministry of Social Development to identify, and provide support to, young people. The first part asked participants to consider their comfort with their teenage school leaver being proactively contacted with an offer of support as a result of an algorithmic risk score. The second part explored their comfort around aspects of the algorithm’s accuracy and the outcomes of the assistance given to school leavers.


Immigration

This scenario was based around a risk analysis algorithm used by Immigration New Zealand to inform decisions about visa applications. The first part asked participants to consider their comfort with the algorithm being used to assist and speed up decision-making. The second part explored their comfort with the assessment process in the event of an appeal.


Health

This scenario was based around a nationally-recognised algorithm used by District Health Boards to inform decisions about waiting list priority. The first part invited people to consider their comfort with algorithmic waiting list prioritisation. The second part explored their comfort with adjustments for equity based on Māori and Pacific health indicators.

Criminal Justice

This scenario was based around the Risk of Reconviction x Risk of Reimprisonment algorithm (RoC*RoI) used by the Department of Corrections. This algorithm produces a risk score to inform a range of decisions, including whether people are granted parole. The first part invited participants to consider their comfort with algorithmic scoring to assess the risk of reconviction/reimprisonment. The second part explored their comfort with the statistical and social fairness of the tool.

What workshop participants told us

There were a number of recurring themes about comfort, benefit, and trust that emerged across all the workshops. These illustrate how participants feel about algorithms and the wider systems they sit within, and issues and opportunities they see with the current approaches.

Algorithms are one part of a much wider system

Workshop participants did not talk about algorithms in isolation, but as part of much wider systems. Their view of what makes up an algorithm ‘system’ includes the organisations that develop and use the ADM, the interventions that are implemented as a result of a decision, the process for developing systems, the way these processes are communicated, and the values and data that underpin the decision-making process. Participants also emphasised the role of the wider social, historical and cultural systems that inform and shape these values and data, including through bias and discrimination.

All these elements of the wider algorithm system affect the participants’ level of comfort in a particular scenario. For example, even when algorithms were working as designed, participants’ level of comfort tended to be low if the resulting intervention was seen as ineffective.

"We shouldn't separate the system and the algorithm because, for something to work, we have to consider both. It has only ever been designed to be part of the system."

General public, immigration scenario

“It’s not whether the algorithm is testing what it’s supposed to test, it’s what they’re doing afterwards.”

Young people with care experience, youth support scenario

Algorithm systems have strengths and weaknesses

Workshop participants clearly articulated what they saw as the benefits of using algorithms. In general, participants were fairly comfortable with situations where algorithms are used to carry out straightforward or perceived low-risk tasks at speed and scale, or to act as an ‘assistant’ that helps people get work done effectively.

“If it’s real-time data then I’d trust an algorithm over a human. People can get stressed, tired, emotional and irrational, and doctors are no exception.”

Pacific youth leaders, health scenario

“An algorithm is good for things like buying petrol because it’s a process, whereas for things about people, it’s not good for that.”

Māori and Pacific youth, recruitment scenario

“There is a benefit if the system works. Immigration is overloaded, and it is a good thing if it speeds up the process and frees up resources.”

General public, immigration scenario

Participants generally had lower comfort when algorithms were used to inform complex decisions that had major impacts on people’s lives. In these situations, participants were looking for a ‘human approach’ that incorporates empathy, discretion, fairness and an understanding of cultural nuance. While algorithms may offer benefits to the people deploying them (for example, by saving time in tasks like application processing), these can carry less weight if the outcome of the process is seen to have a significant impact on health or quality of life.

“If it is a simple yes/no task, I want the algorithm to do a perfect job. But if it is required to consider nuanced stuff, I want a human in the mix.”

General public, immigration scenario

“I trust algorithms to do a lot of things. They’re really good at things like ‘is this stamp the right way up?’ and that sort of thing. This is the justice system and I can’t imagine a training set that didn’t come from past decisions. Even if they were only last week’s decisions. Their risk of offending would be based on data on reoffending which is based on getting caught, getting convicted which we already know has got a huge amount of bias in this country so it would just self-perpetuate. They don’t have the data on that person. They have data on some other set. There is no data set that could create this system without bias.”

General public, criminal justice scenario

Algorithm systems can make existing problems like discrimination and bias worse

Participants spoke about the bias and discrimination that could be embedded in decisions informed by algorithms, often due to the types of data collected and the way they are used. Participants noted that algorithms and data sets are developed by people, who hold biases of their own.

“There's too much systemic racism and discrimination that's built into the government, which means the algorithm that they create to collect data and make decisions is also grounded in systemic racism and discrimination. And so it just creates a feedback loop of discrimination.”

General public, youth support scenario

“Algorithms are only as good as the people who designed them. Machine learning might help with that, but right now most of the algorithms are people-designed, so people’s individual biases and so on can come into play on that.”

Blind and vision impaired participants, health scenario

“It just confirms my fears about the system being built for us to lose, and when I say us, I mean Māori. Remove this algorithm. And if it doesn’t get removed at least educate us on what it is, and when they will update it, so it’s not stuck in the past so we can all move forward together.”

Māori and Pacific youth, criminal justice scenario

Some participants spoke about how data could follow them around for significant periods of time, leaving them defined by their history and unable to move on from events in the past.

“The term risk sends up a flag; it’s producing a risk score from my history. Do these factors that you haven’t had any control over follow you around all your life? I was so young, just a kid. What if we change, the data is still there, always there, but it doesn’t mean that I’m that person now.”

Māori and Pacific youth, youth support scenario

“Everyone deserves a second chance. But the system seems way too old to analyse all of this — it’s from 2001, aye? It needs an update because people evolve and change. The things that society values also change.”

Pacific youth leaders, criminal justice scenario

Participants also noted that the kinds of data collected often focus on people’s deficits or difficulties. These data also fail to capture necessary context about people’s whānau or communities, or to surface their aspirations.

“People don’t focus on the negative things like these algorithms do. Who wants their life to be based on stink stuff from their past, that came from things from their parent’s past that they had no control over? Stop focusing algorithms on what you think is the matter with us. Instead focus them on what matters to us, the changes we want to make. Ask us, and start collecting that data.”

Whānau Ora navigators, youth support scenario

Many participants had low trust in organisations using algorithms, especially government agencies

Many workshop participants had very low trust in scenarios where government agencies were using ADM systems in high-stakes decision-making. This came through particularly in workshops with Māori, Pacific, and blind and vision impaired participants.

The level of trust in the ADM system was strongly linked to the trust in the wider organisation and the participants’ perception of its performance.

“My hunch is that the health professionals and systems creating these algorithms don’t have disabled people’s views in them. And what those people see as quality of life is quite often different to what we think of as ‘quality of life’ (like discussions in the End of Life choice referendum). I think that something that should be brought into the design of algorithms is having the disability community involved in the system. Have them involved in the making and review of those systems.”

Blind and vision impaired participants, health scenario

“The scenario says that ‘the DHB will set up Maori and Pacific clinical leadership and advisory groups’. The DHB will set up this new Māori thing. Straight away that takes Māori out of the picture. We know the DHB and the token gesture. Māori should be leading those, not the DHB. Māori will come up with a different algorithm based on whakapapa and some other things, as opposed to the DHB who base it on their deficit approach and all their data. How can we give this back to Māori so that you can come up with your own solutions, and come up with your own algorithm whatever that might look like?”

Whānau Ora navigators, health scenario
Image: young people with care experience positioning themselves on the comfort board, a blue and white plastic mat laid out on the floor.

What will increase comfort levels?

Workshop participants clearly articulated their discomfort with aspects of the scenarios. For example, some people were surprised and concerned that algorithms like these were being used at all, especially in the scenarios where the systems had potential bias or where the interventions resulting from decisions had limited effect.

During the workshops, participants spent time considering what could be done to respond to the issues they raised and increase comfort levels. The final activity of the workshop was to discuss these ideas collectively and group them into themes.

These discussions showed that participants had a nuanced understanding of a range of issues related to the use of algorithms and the wider systems that surround them. Many participants discussed the tradeoffs they make when engaging with algorithm systems. People pointed out that ADM systems should not be judged against hypothetically perfect human decision-making processes, which they recognised do not exist. Some participants were able to speak from personal experience, or about the experience of people in their whānau.

It is clear from what we heard that making changes only to algorithms, and to the data that feeds and shapes them, will not be enough to ensure trust in, and the trustworthiness of, algorithms and other digital and data-driven technologies. A more holistic approach is needed, with attention paid to the systems and organisations within which algorithms are situated.

Across the workshops, some key themes and topics emerged about what needs to happen for people to feel more comfortable with the use of algorithms. We outline these themes below.

Centre the needs of communities

Throughout the workshops, participants wanted to see services and decision-making systems that centre the needs, values, strengths and aspirations of their communities. This included, but was not limited to, algorithm systems.

Many of the suggestions and concerns we heard were about identifying the right problem to solve, designing effective interventions, or ensuring that the data and criteria for the algorithms are appropriate and represent people’s needs.

Participants emphasised that decision-makers should first understand the needs of people and communities and work from there to find a solution, rather than jumping to algorithms or data-centric solutions in the first instance. This idea was framed by some participants as ensuring that algorithms are used as part of a human system, rather than people being fitted into an algorithm system.

Things that would raise comfort fall into two broad categories:

1. Understand and address the specific needs of affected communities in the development and use of ADM systems. This includes identifying the right problem to solve and designing culturally appropriate actions or interventions that take whānau and community context into account — not just individuals.

“If you want to know an area that an algorithm could help with, find an area that actually matters to family. It’s redefining the job of algorithms and getting some real data. How would you capture the intangible? Instead of a blanket, one size fits all approach.”

Whānau Ora navigators, youth support scenario

“With Pasifika, being approached individually doesn’t work. As a group, if you can get the church, family — it’s much better. Why? Family values and Christian values.”

Pacific youth leaders, youth support scenario

2. Focus on data and interventions that reflect the strengths and aspirations of people and communities. Do not just focus on ‘deficit’ data, collected when people are facing challenges or need assistance. One Whānau Ora navigator summed this up by saying, “Focus on what matters to us, not what’s the matter with us.” This was a significant theme that many people mentioned and felt strongly about.

“Data is always based around the deficit. ‘Something is wrong’. Where’s the aspirational data? You won’t find any, because everyone stops collecting it when whānau start achieving. I went into WINZ one day and said, ‘I’ve had these whānau, and they’ve all come off the benefit, have you got data on their progress?’ She says, ‘No, why would they? We’re just into giving the benefit out.’ I said it shows that the whānau can contribute to the community, are taxpayers, a better person in society. But that’s not captured. Create the algorithms around that — what makes up a happy person, a happy whānau. Don’t just collect data when something’s wrong.”

Whānau Ora navigators, health scenario

“The system focuses more on history, numbers and algorithms to say we shouldn’t let you out. They should be focusing on if the person is actually changing in jail. I just don’t get this whole system. It’s just unfair.”

Young people with care experience, criminal justice scenario

“I feel like if they were accessing information from a wider pool, not just government organisations, it would sit better with me because I think the way that government organisations work is you either fit in this box or you don’t, it’s not about you as a human being and all of your potential and all of the things you’re actually working towards.”

Young people with care experience, youth support scenario

‘Nothing about us without us’

Throughout the workshops, participants said they would be more comfortable if they, or people with similar lived experience, had led or been meaningfully involved in the design, development and deployment of algorithm systems. Participants also noted that people with a diversity of experiences, cultures and world views should be included in the design of all services and decision-making processes that have significant impacts on their lives.

Things that would make participants feel more comfortable fall into three broad categories:

1. Ensure people affected by algorithm systems have meaningful input into the development and maintenance of these systems and surrounding infrastructures. Participants expressed interest in different types of input, ranging from public conversation and involvement, to handing leadership of a particular project to an affected community to develop its own solutions.

“The people directly affected need to be consulted about the criteria being written for the algorithm and definite checks and balances are needed, reviewing and monitoring them, and also that things are being created to take into account social disadvantage.”

Blind and vision impaired participants, general discussion

“We want to see Iwi, hapū, whānau involvement in creating them. Co-develop the solution.”

Whānau Ora navigators, youth support scenario

2. Ensure the diversity of people in Aotearoa New Zealand is reflected across the teams that design, build, use and make decisions based on algorithms, including in leadership roles. Participants commented that current ADM systems often reflect the biases of the people that build them, and that their comfort may be increased if algorithm systems were developed by people that have whakapapa, culture or lived experiences in common with them.

“Algorithms should be constructed by a diverse group in order to remove obvious bias”

General public, suggestions for increasing comfort

“My biggest concern is that the algorithms involved with identifying these kids are probably created by white middle-aged men, which therefore marginalises indigenous people and values. So my concern is the bias of the programmer will come through in the algorithm. I think the idea is quite cool and can see the intended benefits — but I’d feel more comfortable if indigenous people are involved in the processes of creating these algorithms.”

General public, youth support scenario

3. Ensure communities have the tools — including access to algorithms and other digital technologies — they need to solve problems and grasp opportunities, and trust them to do so. This would require the government to share power and decision-making authority in some areas.

“The Crown must share decision-making power and resources equitably with Māori.”

General public, general discussion

“There needs a higher level of autonomy given to the groups that are most affected so that they can make some movement to change the outcomes, and it be a bit more balanced at the top. How much of a voice can they give to Māori and Pasifika so that they can be in the driver’s seat. Greater leadership, a greater voice, greater impact, just greater input and balance.”

Whānau Ora navigators, health scenario

Let people know what is going on

Across all the scenarios, workshop participants noted that they would be more comfortable if they knew more about when algorithm systems are being used, how they work, and their effects on people and communities. Some participants were surprised that algorithms were being used at all by government agencies.

“I wasn’t aware at all of this kind of thing. I didn’t even know that there’s all these government departments doing this kind of thing.”

Ethnic community youth, criminal justice scenario

Participants often framed these discussions around the idea of transparency. This included the transparency of algorithm systems, the algorithms themselves, the criteria used and the data involved. Participants wanted open and clear communication from the organisations using algorithms, so people know what is going on and can ask questions or push back when necessary. Below, we outline what participants meant by transparency, and the types of communication that would make them feel more comfortable.

Aspects of transparency

Participants talked about transparency in these four ways:

1. Transparency of the wider system, including the purpose and intent of the algorithm system, what processes are in place to come to decisions, who built the algorithm and developed the criteria, what governance is in place, and how the system is maintained.

“Transparency is about being able to see the impact of the algorithm: so we know what decisions it’s making, what impacts those decisions have, who is designing it and making the decisions about design, how often is getting things wrong.”

General public, general discussion

2. Transparency of data use, including knowing what data is used to make decisions, and other information such as how long data is kept, and how it was collected.

“I need to know and understand what information goes in and what are the parameters.”

Whānau Ora navigators, health scenario

3. Transparency of criteria, including understanding the rules, criteria and weighting that automated decisions are based on, and the reasons why a decision was made. Participants recognised that knowing specific details of the criteria may enable people to “game the system”.

“I think I’d like to see a list of variables that it’s using to determine the order that it puts people in. It’s fine enough to say this machine figures out when you’re going to have your surgery, but how? What is it actually looking at? You’re putting your life in its hands so I’d like a bit more information other than, ‘just trust the algorithm, it’ll work’.”

Young people with care experience, health scenario

4. Transparency of the algorithm, including understanding the code used in the algorithms and how they work to get a desired outcome.

“I would feel more comfortable if I understood how the algorithm was written and how it works, and for it to be explained in such a way that you can understand it.”

Blind and vision impaired participants, health scenario

Communication that would increase comfort

Participants emphasised that these four aspects of transparency needed to be accompanied by clear, open communication. Organisations that communicated often and openly were also seen as more trustworthy.

Participants’ comments about raising comfort through better communication fell into three main categories:

1. Provide simple, clear information for the public that could be understood easily by non-experts. This included communication about when algorithms are used to inform decisions, as well as information about how specific decisions were made, especially when the decisions affect people’s lives. They also wanted to know what data was used to make decisions, and what happened to the data afterwards.

“I would be more comfortable if they told me what they were going to do with the information afterwards. Sometimes you apply for a job and they say they’ll send your CV back or say we will destroy your CV. If they are analysing this amount of detail, what are they going to do with it now?”

Blind and vision impaired participants, recruitment scenario

2. Provide opportunities to ask questions, get clarification, and give feedback. Participants said they wanted to know they could get more information about decisions, and to have avenues for recourse if they wanted to contest a decision or outcome.

“I would want people being told when an algorithm was being used to make decisions about them, and opportunities for them to give feedback on it.”

Young people with care experience, general discussion

“Provide a human touch. Provide feedback. Don’t rely on the computer to tell you that somebody is or isn’t a good person.”

Māori and Pacific youth, immigration scenario

“So the positive things are that somebody is reviewing it and somebody is inquiring into the algorithm. But just think of the stress of going through an appeals process. ...What’s the feedback loop? What happens to the algorithm again? Who’s in charge? Who decides? How transparent are those decisions?”

General public, immigration scenario

3. Recognise the importance of personal, or kanohi ki te kanohi (face to face), communication and relationship-building as part of decision-making systems. Participants emphasised that ADM is just one part of a wider system that involves human decisions and biases, and that communication about ADM systems should reflect this.

“Empathy is a huge thing when it comes to algorithms and artificial intelligence. They don’t have it. Possibly some refined form of artificial intelligence will take into consideration things like empathy and human emotion but I don’t think algorithms can.”

Ethnic community youth, recruitment scenario

“Algorithms work best in conjunction with relationships.”

General public, recruitment scenario

“I’m reassured by the non-algorithmic stuff in the scenario, the clinical leadership by Māori and Pacific people and care navigators who are guiding their own people through the system. That suggests there is a focus on changing reasons why there is a disparity in the first place.”

General public, health scenario

Additional things to increase comfort

Along with the three key areas outlined above, participants repeatedly raised a number of other mechanisms for increasing comfort. These are:

1. Having clear rules, standards, legislation or frameworks to govern ADM use, as well as capacity to assess whether they are implemented properly and enforce them where required.

“I think the government has to be involved to make sure some sort of criteria is right because my words might not be the words that it picks.”

Women with migrant and refugee backgrounds, recruitment scenario

A similar theme emerged in Te Kotahi Research Institute’s report on Māori perspectives on trust and ADM. For example, the report notes “There ought to be an audit process or design check to rate ADM systems according to criteria that would engender trust by Māori.” 

2. Treating data with care and consideration. This includes collecting only appropriate data, and storing and using it in a way that ensures privacy and security.

“The problem is when it asks for personal information, you don’t know who out there is getting your information and what they are doing with it. No matter how much they promise they will keep it confidential, what level of confidentiality can they really guarantee? So I don’t really trust them.”

Women with migrant and refugee backgrounds, recruitment scenario

3. Ensuring people have opportunities to exert choice and control over how their data is used, and that consent and buy-in are actively sought rather than assumed or “opt-out.”

“You should be able to give consent for people to be looking into your personal information. What is the chance of them leaking the information to the public domain? You definitely want to know what kinds of controls are in place. And you want to know that they are using your data.”

Ethnic community youth, youth support scenario

4. Making sure algorithms and wider decision-making systems are treated with care and ongoing attention. They should be effectively monitored, tested, and maintained.

“That’s the thing with machine learning, you have to make sure that it keeps on learning; keep on feeding things and the system will improve.”

Ethnic community leaders, health scenario

A similar theme about the need for ongoing care and maintenance of algorithm systems came up at the Māori expert wānanga. One participant drew a comparison between the kaitiakitanga (guardianship) of algorithms and pou whakairo (freestanding carved figure) — both need ongoing care.

“You don’t leave a carving alone in the rain.”

Māori expert wānanga participant

5. Building capability and skills around algorithms and related digital processes. This includes making sure people and communities can understand how algorithms work, and making sure that people involved in implementing algorithm systems have the right skills and capabilities.

“If I saw the science behind the algorithms and the papers that were quoted, that kind of thing that would be useful, but obviously not all users want to see that. If there were peer reviews, people that I trusted, people that were academics for example that had reviewed the algorithms or basically said that this thing doesn’t have any inbuilt biases or whatever it happens to be. That kind of thing would be useful.”

General public, recruitment scenario

Working towards solutions

In the workshops, we heard a wide range of views and suggestions about what would make people feel more comfortable in the scenarios that were presented. Many of these fell into two broad categories.

First, we heard that people affected by significant decisions and resulting interventions want agency throughout the process. In many cases this will mean the government should enable communities to lead, or have significant input into, projects and provide them with the resources they need. People emphasised that ADM is not the appropriate solution to every problem. Where ADM is used, the definition of success used to inform decision-making systems must reflect the needs and aspirations of people who are impacted.

We also heard it is important that technical and organisational processes driving ADM systems are safe, secure, well managed, well maintained, and appropriately regulated. This includes ensuring that algorithms, criteria, data flows and wider systems are transparent; having clear rules or guidelines around development and use of algorithms; having regular maintenance of systems; and having clear privacy and security practices.

Following the workshops, our role as the Digital Council has been to work with our research team to take what we learned and undertake further research and analysis. From this, we developed concrete recommendations the government can implement to help realise a trusted, trustworthy, and equitable digital and data ecosystem for Aotearoa.

Our analysis process included reading the full transcripts from each workshop; holding a full-day co-design workshop with the research team; attending a wānanga with Māori experts facilitated by Te Kotahi Research Institute; gleaning insights from the two literature reviews we commissioned; interviewing experts from other jurisdictions, including Canada, the United Kingdom and Singapore; and undertaking an in-depth thematic analysis of participants’ suggestions about what would increase their comfort. We also held two workshops with senior government officials throughout our research, including to test our thinking on recommendations.

As the Digital Council, our thinking is generally aligned with the concerns and categories for improvement expressed by workshop participants and experts alike. For example, one of the key findings from the Māori expert wānanga was that, for Māori, “Meaningful participation and partnership is critical in the development of governance mechanisms around ADM.” 

Some mechanisms are already in place

Work does not need to start from scratch. Where possible, initiatives to help improve trust in, and the trustworthiness of, ADM should build on what is already in place. It is also important to check that what already exists is fit for purpose, and to fill any gaps from there.

To this end, there is a range of frameworks and rules, from voluntary to legally binding, relevant to the development and use of ADM. These vary widely, from the AI Forum of New Zealand’s Trustworthy AI in Aotearoa principles to the Department of the Prime Minister and Cabinet’s community engagement advice in its ‘Policy Methods Toolbox’. Many of these relate to government systems using personal information. If implemented effectively, these frameworks would help make significant progress towards creating safer, more equitable and trustworthy ADM systems along the lines of what our workshop participants suggested. We can also learn from practitioners in other countries who have developed frameworks and methodologies for digital projects in their jurisdictions.

Throughout the public service, some good practice is emerging in implementing a range of relevant frameworks. A list of the relevant frameworks we know about is attached as Appendix A. There is well-established practice and recent regulatory reform in some areas, such as privacy and the use of personal information. However, we know that implementation is either developing or patchy in other areas. There is also a lack of coordination across the system to join up these frameworks and ensure people working on ADM projects know their full range of responsibilities.

The government’s Algorithm Charter for Aotearoa New Zealand covers many of the key areas identified by participants and experts for increasing comfort. Charter signatories make commitments to maintain transparency, embed a te ao Māori perspective, focus on people, provide clear communication, and maintain human oversight and channels for appeal. The Charter was published in 2020, so it is not yet clear what implementation will look like on the ground.

While the Charter covers many key areas for action, we heard clear concerns, especially from participants in the expert wānanga and groups like Te Mana Raraunga, that the Algorithm Charter offers insufficient protection for Māori, and does not do enough to ensure the rights and interests of communities affected by ADM systems are represented. The development of the Charter represents a massive effort and a strong intention to improve systems. However, the Charter is voluntary, and there are currently no reporting or monitoring systems in place to support it. The Te Kotahi Research Institute report also notes that the Charter’s risk assessment matrix leaves much room for judgement calls around what counts as “high risk.” 

The algorithm-specific commitments in the Charter are supported by related legislation, such as the Privacy Act 2020 and the Official Information Act 1982, as well as frameworks such as the Ministry of Social Development’s Privacy, Human Rights and Ethics Framework (PHRaE) and the Data Protection and Use Policy, published by the Social Wellbeing Agency.

To conceptualise and implement the system-wide change needed to truly put people and communities at the centre of ADM systems, we need to look to other frameworks that extend beyond digital. Te Tiriti o Waitangi is the founding framework for building equitable relationships and sharing power between Māori and the Crown in Aotearoa New Zealand, and should be a foundation for all government digital projects. Numerous human rights instruments also set expectations for ensuring fundamental rights are upheld in all decision-making processes, including those where data and digital technology are used. These include legislation like the New Zealand Bill of Rights Act 1990 and Human Rights Act 1993, and international agreements like the Universal Declaration of Human Rights, the United Nations Declaration on the Rights of Indigenous Peoples and the Convention on the Rights of Persons with Disabilities.

More recent frameworks like Te Mana Raraunga’s Principles of Māori Data Sovereignty and the Government Digital Service Design Standard provide more specific and concrete guidelines for how to implement aspects of the above frameworks in a digital context.

To properly honour Te Tiriti o Waitangi and build the kind of trusted and trustworthy systems that communities and experts alike have indicated they want, considerable structural and cultural change is needed. While there are aspects of good practice emerging, there are a range of barriers to implementing these frameworks. These include a lack of coordination across the system, deeply entrenched power structures, a conservative approach to risk, and a culture that is not open by default. The barriers exist in a wider context of government systems, which have their own long-standing systemic barriers and silos (Department of Internal Affairs: Systems settings changes). Public servants and Ministers have started to address these, but there is still a significant way to go.

Changing long-established systems can be hard and slow, especially where it requires sharing power in new ways. Current work will need to be expanded, coordinated, and better communicated, and new initiatives will need to be developed from the ground up. Some of these changes will be uncomfortable, as we learn new ways of working, and replace old ways of doing things that have become barriers to the better future that we are working towards together.

The Digital Council’s recommendations to the government

It is possible to realise an equitable, thriving digital future that centres the needs of people and ensures the right levels of trust and trustworthiness. It will take clear leadership, cross-government collaboration, and a willingness to take risks and do things differently.

Change is needed that goes beyond a few surface-level tweaks.

The Digital Council has developed seven recommendations to the New Zealand government that are concrete and achievable. They are also substantial, and will require ways of thinking and working that are innovative, iterative, and bold. These recommendations reflect what we heard from the people we spoke to in our research. They are only a first step, but an important one.

The Council’s recommendations fall into four broad groups:

  1. actions that will give effect to Te Tiriti o Waitangi across government ADM and data projects
  2. on-the-ground projects that test new ways of doing and thinking about things, demonstrate value and build a knowledge base to enable successes to be scaled and replicated
  3. projects that bring clarity and cohesion to work already underway on ADM projects and fill any gaps in frameworks and guidance that are needed
  4. projects that focus on building digital skills and knowledge and increase the diversity of the digital workforce.

These recommendations are a first step, not a full recipe. They could be implemented in a number of ways across different agencies. Importantly, the recommendations are not about starting from scratch. They identify areas where work is needed and their implementation should leverage off what is already underway.

While many of these recommendations are for the government to lead or implement, it is vital that communities and industry are central to the way they are further shaped, developed and implemented.

RECOMMENDATION 1: Fund community groups to lead their own data and automated decision-making projects

We recommend the government funds 3–5 community-led ADM or data collection/use projects. These projects could include communities collecting and storing aspirational data about themselves, monitoring local environments or ecosystems, or using ADM as part of community planning initiatives. These projects should be designed by communities that have identified data or ADM as an opportunity area.

Why this recommendation?

We heard that people wanted decision-making processes and supporting interventions that are led by communities and reflect their specific aspirations, needs and circumstances. The government needs to test ways to share power with communities and give them agency to make decisions. One participant in the Māori expert wānanga noted, “When you share power, you don’t lose power”.

Communities and the organisations that support them are innovative and know their own needs. Some have data expertise, and many are interested in developing and using data-driven technologies for better outcomes.

A key barrier preventing community-led approaches is the lack of dedicated action to share power, decentralise data collection/storage, and enable communities to make their own decisions. Projects also need to be funded properly. This recommendation will help to break these barriers on a small scale so future initiatives have clear case studies and precedents.

We think that for these projects to be successful, they will:

  • have clear, specific objectives decided on by communities, and a project with an ADM or data collection/storage component
  • be community-led by a group with established structures and existing relationships with relevant government agencies
  • be funded for 2–3 years, plus ongoing maintenance costs
  • have government support to embed and scale successful aspects of the project
  • have non-burdensome, effective evaluation based on community-decided outcomes
  • include at least one Māori-led use-case involving te reo Māori, tikanga Māori, and mātauranga Māori
  • collect and share lessons learned with communities and the government so aspects can be replicated by communities and incorporated into government approaches.

RECOMMENDATION 2: Fund and support a public sector team to test and implement automated decision-making best practice

Recommendation one is about divesting decision-making power to communities. In many cases, it will be appropriate for the government to retain administrative responsibility for ADM systems which impact people's lives. We recommend the government funds one public sector team to implement a transparent, people-centred approach to an ADM project and share their work as they go. This will include fully honouring Te Tiriti o Waitangi, upholding human rights obligations, implementing relevant ADM frameworks, and learning to work in new, inclusive, collaborative ways. 

Why this recommendation?

We know that properly implementing already-existing frameworks, from Te Tiriti o Waitangi to the Government Algorithm Charter, will significantly increase the trustworthiness of, and trust in, government data and ADM projects.

It is not feasible to implement all these frameworks immediately and fully across the entire public sector. Barriers include entrenched ways of doing things, lack of a clear pathway forward, limited resourcing, and concerns about openly sharing work-in-progress — especially when things do not go to plan. This recommendation will provide a strong case study to learn from, and a spark for mobilising and accelerating new ways of doing things across the public sector.

This project will:

  • be led by an agency or cross-agency team where new ADM work or a refresh of an existing project has been internally identified as a priority. This project is suited to an area where there is low trust among, and high stakes for, the people affected by decisions
  • identify all applicable frameworks and approaches at the beginning of the project and methodically work through how to implement them, drawing on external expertise where needed
  • develop robust equity assessment protocols for algorithms
  • have a diverse, multi-disciplinary team that shares information about their work openly, transparently and regularly
  • co-design solutions with impacted communities and embed their input all the way through
  • support collaborative partnership in project governance and the development and use of algorithms
  • have sufficient funding and supportive leadership who remove barriers and empower and protect the team as they take risks and try new ways of doing things
  • develop and publish accessible guidance, lessons learned and components that can be replicated in other ADM projects, both inside and outside government. 

RECOMMENDATION 3: Establish a public sector automated decision-making hub

We recommend the government establish a hub to bring cohesion and oversight to ADM work across the public sector. The hub would be developed in phases and could take a number of forms. The first phase would establish the hub as a neutral repository to bring together and communicate about work underway. It will also act as a first port of call for anyone with questions or concerns about the use of ADM.

Why this recommendation?

We heard a lot of work is happening to build understanding and help guide the use of ADM systems across the government. But things aren’t joined up and implementation of relevant frameworks is patchy. This lack of coherence makes it harder for people to know their responsibilities and get guidance. People outside the government also do not have one clear place to go if they have any questions or concerns about the use of ADM. This recommendation will provide an ongoing foundation to address these gaps and more.

We think that for the first phase to be successful, it will:

  • understand the ADM system across government, share this understanding widely, and build strong relationships with leaders of ADM projects across government
  • understand all the frameworks and guidance available to guide aspects of ADM projects and share this information widely
  • be led by a small public sector team and be properly funded
  • identify areas where implementation guidance is needed, or there are other barriers to implementing frameworks
  • be a central point of contact for anyone, inside or outside government, with ADM questions, concerns or complaints.

Once the hub is established, it would be reviewed and changes to its work and operating model determined. Future phases of work may include:

  • filling gaps in frameworks in partnership with Māori and stakeholders
  • ensuring appropriate governance of ADM projects and ensuring Te Tiriti o Waitangi is honoured and human rights principles are applied
  • an ongoing guidance development and capability-building role
  • collaborating with community and private sector stakeholders on key ADM issues
  • considering a monitoring and compliance function.

RECOMMENDATION 4: Work collaboratively to develop and implement private sector automated decision-making rules or best practice

We recommend the government works alongside Māori, the private sector and community stakeholders to understand current regulations and best practice frameworks that apply to aspects of private sector applications of ADM and ensure any gaps are filled. This will be accompanied by mechanisms to demonstrate value and encourage uptake.

Why this recommendation?

We heard from workshop participants that they want organisations to follow clear rules when using ADM, and communicate clearly about what is going on. We have also heard that private sector organisations want certainty around regulatory requirements. This kind of regulatory certainty is integral to some other industries — the clear rules around food safety are one example. One size does not fit all when it comes to ADM rules and frameworks, since applications vary so widely, so blanket regulation is unlikely to be appropriate or effective.

While some legislation (e.g. the Privacy Act 2020) applies to all organisations using personal data and there is no shortage of ADM principles that could be applied, the private sector needs clarity, coordination, and a strong value proposition. In addition, some specific applications of ADM may need more formal regulatory tools to ensure systems are developed and used in ways that reduce harm and increase wellbeing outcomes.

This recommendation will help ensure that private sector ADM systems are developed in a way that builds trust and ensures positive social outcomes, while demonstrating value and providing clarity to private sector companies.

This project will:

  • bring together private sector stakeholders with Māori, communities, and government to develop a set of best practices. This will leverage off what already exists, reflect the needs of communities, and be adaptable to the needs of different sectors
  • collectively identify areas where formal rules are needed and start work on policy development, while keeping a watching brief on future needs
  • develop mechanisms to encourage uptake of guidelines and regulations, including demonstrating value to the companies implementing them. 

RECOMMENDATION 5: Build automated decision-making systems from te ao Māori perspectives

We recommend the government put in place a range of measures to give effect to Te Tiriti o Waitangi and promote greater transparency about the use of ADM as it affects Māori communities. Te Kotahi Research Institute has suggested six specific actions.

The Digital Council endorses these suggested actions and thinks they should be implemented for all ADM projects across the public sector.

They will also bring significant value to private sector organisations.

Some of the actions are also included in our other recommendations — there is a lot of alignment and overlap. Since these actions all need specific, urgent, on-going attention, we have also added them as their own recommendation.

Why this recommendation?

It is vital the government honours Te Tiriti o Waitangi throughout all ADM, digital and data projects. The actions in this recommendation will provide a clear path to follow. They also reflect what we heard from Māori participants at the workshops, and help bring their suggestions to fruition. Action in these areas is time-critical because ADM systems are impacting Māori communities now.

The recommendations from Te Kotahi Research Institute are:

  • build Māori data and digital capacity within both Māori communities and across networks of Māori practitioners
  • develop robust equity assessment protocols for algorithms
  • ensure meaningful Māori participation in institutional algorithm self-assessment processes
  • support collaborative partnership in project governance and the development and use of algorithms
  • create and implement a Māori values framework and tikanga guidelines to support ADM design, development, use and maintenance
  • explore te ao Māori use-cases involving te reo Māori, tikanga Māori, and mātauranga Māori in ADM.

RECOMMENDATION 6: Build a diverse digital workforce

We recommend the government hires, develops and nurtures a public sector workforce for digital and data projects that represents the diversity of Aotearoa New Zealand, and encourages the private sector to do the same.

Why this recommendation?

During our research, many people told us they would feel more comfortable with ADM systems if they knew that people like them or from their communities were part of the teams that designed and built them. It is also well established that having teams with a diverse range of skills and backgrounds leads to products and services that better serve diverse communities.

This recommendation will bolster current efforts to increase workforce diversity. There are opportunities to collaborate with other initiatives on this recommendation. This could include the public service Papa Pounamu diversity and inclusion programme, and groups such as NZTech and the Tech Alliance, the Iwi Chairs Forum, and private sector companies.

This project will:

  • identify the interventions needed to ensure organisations hire more people with diverse backgrounds, including people from Māori, Pacific, disabled, rainbow, and ethnic communities. This could include implementing hiring practices that recognise and value diversity and adopting diversity targets
  • build Māori data and digital capacity within both Māori communities and across networks of Māori practitioners
  • build a clear pipeline to identify the skills needed in the future and ensure people get the training and support they need from childhood through to tertiary education and beyond
  • enable multi-disciplinary teams including but not limited to policy experts, social scientists, designers, and people with communications, ethics, privacy and human rights expertise working alongside people in technical roles
  • ensure workplaces are safe, nourishing and welcoming for diverse teams
  • foster diversity at all levels, and provide dedicated support for emerging leaders.

RECOMMENDATION 7: Increase the digital skills and knowledge of public sector leaders

We recommend the government establishes a coherent programme of work to increase the digital skills and knowledge of senior public servants across the board and broaden the knowledge of people in digital and data roles.

Why this recommendation?

Digital and data-driven projects operate in a complex, fast-changing environment, and require specific knowledge and expertise to be successful. However, the government’s Digital Inclusion Blueprint identifies senior leaders in the public and private sector as being at risk of not being digitally included — especially when it comes to the skills to adapt to a rapidly changing digital environment. We also heard this in our research.

There is a huge opportunity for the government to develop and deploy ADM and other digital technologies in ways that significantly improve wellbeing outcomes and meet the needs of communities. Without skilled digital leaders across the board, these opportunities will be lost and the potential for significant harm will increase.

This recommendation will strengthen digital leadership across the public service, resulting in more successful and transformational digital and data-driven projects.

This project will:

  • develop a comprehensive, condensed digital leadership course for senior leaders across government, which would also be adapted for interested Ministers
  • encourage and support current and potential future leaders of digital transformation projects to complete available qualifications
  • enable people in digital and data roles across the public service to broaden their skills, including through practical or academic courses, or through secondments to the technology sector
  • ensure all public sector senior leaders take meaningful Te Tiriti o Waitangi and human rights training
  • ensure leadership teams have the right mix of digital skills, identify ongoing sector needs for capability building, and make a plan to meet these needs
  • look at where capability-building initiatives could also benefit the private sector, for example in building digital skills at a board level.

Next steps: Making these recommendations a reality

If the recommendations in this report are implemented, they will move the needle towards a more equitable, resilient, and flourishing digital future for Aotearoa.

However, unless there is ongoing effort and overarching coordination, this work will be unlikely to result in sustained improvements in the lives of all New Zealanders.

As we noted in our briefing to you as incoming Minister, we want your machinery of government to get into gear to act on these recommendations in a sustained, systemic, and innovative way. Of course, agencies will need to be appropriately funded for any work additional to their current responsibilities. Trying to make these recommendations happen on a shoestring is a recipe for failure.

We acknowledge that implementing the recommendations will not be easy. One big challenge is that responsibility for digital issues doesn’t sit in one or two places — all government agencies and many private organisations have a big role to play. Some agencies like Statistics New Zealand, the Department of Internal Affairs (DIA), and the Ministry of Business, Innovation and Employment (MBIE) have specific responsibilities for data and digital issues, and will likely be more directly affected by these recommendations.

A small team can help make action happen

We think a small public sector team should be established to do further work on what each recommendation will look like in practice and coordinate across agencies to make it happen. The team could establish where each project would live, join up expertise across the system, help break down barriers as they arise, coordinate funding, and ensure that communities are involved throughout all projects. This team could provide a pilot demonstration for joining up and delivering across your portfolios, and share what it learns as it goes.

You will also need a long-term plan

Transitioning ADM and digital projects to a consistently people-centred approach that fully honours Te Tiriti o Waitangi requires vision and foresight. You will need a plan.

We think the government should make and execute a specific plan to monitor and manage progress on all the recommendations in this report.

These recommendations should also be embedded in the foundations of a wider digital strategy for Aotearoa New Zealand.

While many of the recommendations are specific to ADM and the work of government, they would also serve as an exciting trial case for wider advancement in the digital and data ecosystem.

The Digital Council can help

We are not the kind of advisory group that walks away from our work once it is published. We want to help make these recommendations become a reality, and see them deliver concrete results. We will do all we can to ensure that this opportunity is seized, to build a future in which all decision-making systems, including ADM, contribute to a more equitable and inclusive society.

Specifically, we could play an ongoing role in supporting the small team we have recommended be established to implement these recommendations. In particular, we want to ensure that a diverse range of people impacted by ADM systems are engaged in identifying what implementation will take.

We are available as trusted advisors to everyone working to make these recommendations a reality. We look forward to working with you on this exciting mahi. Together with our communities, we can ensure a thriving and equitable digital future for Aotearoa and all New Zealanders.

Appendix A: Existing frameworks

This Appendix is a list of relevant frameworks, legislation, tools and guidance we know about that help to guide the use of personal data and algorithms in New Zealand.

AI Forum New Zealand (2020) Trustworthy AI in Aotearoa: AI Principles

Data Futures Partnership (2020) Trusted Data Use Guidelines for Aotearoa New Zealand

Department of Prime Minister and Cabinet (2020) Policy Methods Toolbox: Community Engagement

Government Chief Data Steward and the Privacy Commissioner (2018) Principles for the safe and effective use of data and analytics

Government Chief Privacy Officer (2014) Privacy Maturity Assessment Framework

Ministry of Social Development (2018) Privacy, Human Rights and Ethics Framework

New Zealand Government (2018) Digital Service Design Standard

New Zealand Government (2018) Open Government Partnership Action Plan 2018-2020

Social Wellbeing Agency (2019) Data Protection and Use Policy

Te Mana Raraunga (2018) Principles of Māori Data Sovereignty

Statistics New Zealand (2020a) Algorithm Charter for Aotearoa New Zealand

Statistics New Zealand (2020b) Ngā Tikanga Paihere

Legislation relevant to the use of ADM in Aotearoa

Human Rights Act 1993 

New Zealand Bill of Rights Act 1990 

Official Information Act 1982 

Privacy Act 2020 

Public Records Act 2005 


Thank you to workshop hosts and participants

The heart of our research was going out and listening to a wide range of people from around Aotearoa. We give our sincere thanks to everyone who participated in these workshops and shared their thoughts and feelings about this important topic with us.

Toi Āria would especially like to thank the following people and organisations: Shane Murdoch and young people with care experience from VOYCE Whakarongo Mai; Herbie Bartlett, Ete Igelese and the Kāfa Kollective from Toi Rauwhārangi, College of Creative Arts; Hisham Eldai, Candy Wu Zhang, Deborah Lam, and Ngahuia Harney from the Office of Ethnic Communities; Te Tari Matawaka, Christchurch and the youth leaders, Ethnic Leaders and Women’s group from Ōtautahi; Bruce Kereama and Te Tihi O Ruahine Whānau Ora Alliance Charitable Trust; Peter Butler and the Highbury Whānau Centre; Guntej Singh and the Hutt City taxi community; Mary Fisher and Blind Low Vision NZ; James Ting-Edwards, InternetNZ and the NetHui community; Massey University College of Creative Arts; and staff and friends who participated in prototyping workshops at Massey University’s College of Creative Arts.

Thank you to everyone who we engaged with during the development of this report and its recommendations. Your help and input is much appreciated and has added huge value to our work. This includes the many public servants, both here and in other jurisdictions, members of the Disabled Persons Organisation’s (DPO) Coalition, and the Data Ethics Advisory Group.

We acknowledge everyone who has attended Digital Council stakeholder engagement sessions throughout the year and everyone who has joined our Digital Council Alliance community. Your insights have informed us directly and indirectly as we worked on this project and shaped up this final report. You have also helped to give us the confidence to put ourselves forward to support the implementation of our recommendations.

Thank you to past and present members of the Digital Council secretariat: Elena, Gemma, Jacqui, Jill, Judy and Victoria. You have done amazing work to bring together this research project and coordinate the many moving parts.

Finally, to our research team, thank you. Thank you for your dedication to respectfully and accurately gathering perspectives from such a diverse range of experts, including people with lived expertise. Thank you for the rigour with which you held us accountable to those perspectives as we developed our analysis and recommendations. Thank you for working in innovative and collaborative ways to bring all the different strands of this research together. Thank you for your patience, your flexibility, your curiosity and your clarity. Our work has been greatly improved by your contributions.

Thank you to those who informed the report and recommendations

The 2020 Digital Council research team

Marianne Elliott and Colin Gavaghan are the Digital Council Research Leads for this work and were supported by the other members of the Digital Council and the Digital Council Secretariat. This 2020 research project was carried out in collaboration with our research partners.

Antistatic: Anna Pendergrast, Kelly Pendergrast

Antistatic were the writers for this final report, and provided policy advice for the report’s recommendations.

Brainbox: Curtis Barnes, Tom Barraclough

Brainbox produced the literature review on trust and ADM, conducted unstructured interviews with key stakeholders and provided advice to the Council on the research and the scenarios.

Digital Council Secretariat 

Victoria Wray from the Department of Internal Affairs was the research stream lead for the Digital Council in 2020 and coordinated the many parts of this research.

Te Kotahi Research Institute: Kiri West, Daniel Wilson, Ari Thompson, Maui Hudson

Te Kotahi facilitated the Māori expert wānanga and produced a report that included a literature review on Māori perspectives on trust and ADM and a summary of the wānanga.

Toi Āria: Design for Public Good: Anna Brown, Matt Law, Simon Mark, Tim Parkin, Ana Reade, Sakura Shibata, Andrew Tobin

Toi Āria designed and facilitated the participatory research workshops and provided expert advice. They also designed the final report for this project.