Our workstream for 2020: Trust

What is automated decision-making

View the script for this video
(White background appears, then black text stating “Digital Council for Aotearoa New Zealand” appears as upbeat music plays. This text then fades away. A black line appears in the middle of the screen; above the line text appears “Colin Gavaghan, Digital Council” and below the line “On the Council’s latest research project”. Music fades slightly to become faint background noise.)

“The first focus for our research is going to be on what we call Automated Decision Making.” (The speaker appears facing the screen, from the shoulders upwards, slightly angled towards the left.)

“What on earth is that? Well, it's actually a whole family of things.” (The camera zooms out and shows the speaker sitting down, angled towards the left. The speaker is making slight hand gestures as he is speaking.)

“It’s a whole umbrella term that covers a range of different decisions that are made at least partly by automated processes. Sometimes with humans involved as well.” (The camera zooms in to show the speaker from the shoulders upwards, slightly angled towards the left.)

“Sometimes largely by machines. Now they can range from pretty trivial things…” (The camera zooms out and shows the speaker sitting down, angled towards the left. The speaker is making slight hand gestures as he is speaking.)

“like what you get recommended to watch on Netflix, all the way up to the most important decisions in our lives.” (The camera zooms in to show the speaker from the shoulders upwards, slightly angled towards the left.)

“Decisions about whether people get jobs, immigration status, whether people get released from prison, or whether children are removed from families.”

(White background appears, then black text stating “Digital Council for Aotearoa New Zealand” appears as upbeat music plays. The video ends.)

Automated decision-making (ADM) means any process where parts of a decision are made by computer algorithms. Some of these are pretty simple and easy to understand. Others can involve more advanced technologies like “artificial intelligence” and “machine learning”.

We’re interacting with ADM when we unlock a smartphone with a fingerprint or turn to a streaming service for recommendations.

When we apply for a job or loan, or need surgery, ADM might be used to determine if our application is accepted or how far up the waiting list we’re placed.

Why we focused on trust and trustworthiness

We chose trust and trustworthiness because they are key factors in unlocking the potential of digital technologies for social and economic wellbeing.

We included trustworthiness in our thinking because organisations that use automated decision-making must earn the trust of their communities, not expect it.

We focused on automated decision-making because it has significant impacts on individual lives and on how society functions.

There are many beneficial uses of ADM: for example, helping us navigate a new city with an app, or increasing the speed and accuracy of diagnosis for certain medical conditions. However, ADM is not without risks. It can be used in ways that cause harm and reinforce historical bias and injustice.

If not used responsibly, ADM can have significant negative impacts on New Zealand, especially for people who are already disadvantaged or marginalised.

Who we talked with and worked with

We wanted to know about New Zealanders’ experiences, hopes and challenges with organisations using automated decision-making.

We sought out members of Māori, Pasifika, disability, youth and ethnic communities to participate in workshops with us. Members of these communities are often excluded from the design of ADM systems.

We partnered with Brainbox and Toi Āria. Brainbox carried out a review of existing research literature on the topic and interviewed experts. Toi Āria travelled the country using their participatory design process.

Participatory design helps those using or impacted by a system to have a say in how it works. Understanding how someone might solve a challenge they face can surface new insights for design. People become active partners in designing a service, not passive recipients of it.

Toi Āria measured levels of trust using a Comfort Board. Workshop participants were given a series of scenarios where algorithms were used to make decisions. These related to immigration, job search, health, youth support and criminal justice. Participants then located themselves and their level of comfort with each scenario on a trust-benefit matrix. The scenarios had deliberate information gaps or ambiguities built in to elicit discomfort, prompting participants to voice their concerns and the criteria that would make them comfortable in their own words.

A matrix with "trust" rating from 1-7 along the horizontal axis and "benefit" rating from 1-7 along the vertical axis.
Toi Āria's Comfort Board used to ascertain people's levels of comfort with a scenario

What's next

We’ve taken the results of the workshops and have been feeding them back into communities.

We’ve been working with various representative organisations on recommendations for the government.

We're working on a final report to the Hon Dr David Clark, Minister for the Digital Economy and Communications. We'll have this on our website in February 2021.