What we've heard
We've been asking: what's needed to build the right levels of trust so we can harness the full societal benefits of digital and data-driven technologies?
We've been looking at trust in the context of automated decision-making (ADM).
ADM is when some aspects of decision-making processes are carried out by computer algorithms.
Examples of ADM are everywhere: from what we see on our social media feeds, to whether we're approved for a bank loan, to where we sit on a surgical waiting list. ADM has big impacts on our lives, and these can be positive or negative depending on the many decisions made by the people who design and implement these systems. Government agencies are using algorithms more and more.
There has been research on many aspects of trust and ADM, but we identified a gap: hearing directly from the people most affected by ADM.
So we filled that gap by going out and listening to a wide range of people from around Aotearoa. This included Māori and Pacific youth, blind and vision impaired members of the disability community, Whānau Ora Navigators, and members of Aotearoa’s diverse ethnic communities.
The groups we spoke to are often excluded from the design and implementation of ADM systems, yet they are directly affected by the decisions those systems make.
We used a participatory design method developed by Toi Āria, a public engagement centre at Massey University. Participatory design helps those using or impacted by a system to have a say in how it works.
We developed six scenarios for the workshops relating to immigration, recruitment, youth support, criminal justice and health. Using Toi Āria's Comfort Board, workshop participants were asked to locate themselves on the board according to their levels of trust in, and perceived benefit of, the use of algorithms for each scenario.
Participants were then invited to identify what would increase their comfort. The focus was on discussing and deliberating rather than taking a static position.
Follow their voices through our blogs
- Honouring digital experiences, hopes and challenges
- The closer people are to developing the criteria for algorithms, the more careful they will be
- Don’t just judge us by our data
- The digital dilemmas of Pacific people during COVID-19
- Pacific people should be involved in creating algorithms used on their people
Our interpretation of what people said
Participants were generally aware of how algorithms work and how they make decisions for and about them. They were, however, surprised at just how extensive the use of algorithms is, particularly in government.
Participants wanted to know more about the assumptions behind the algorithm, the data that it was trained on and how it would be used.
When workshop participants talked about ADM, they focused on more than the technology itself. They talked about algorithms as being part of a much wider system. This system included the way data is collected and used, the people and organisations that develop the systems, and the interventions resulting from decisions.
Participants thought that ADM, with its ability to process data fast and at scale, is well suited to some situations.
However, they were clear that it can be harmful in other situations. It can intensify pre-existing bias and discrimination—especially when the decision has major impacts on the lives of individuals and their whānau.
What we heard New Zealanders want
- Algorithms to support rather than replace human roles in decision-making.
- Algorithms to be used only for tasks that don’t require human discretion and empathy.
- To have representation when algorithms are created, used and monitored in relation to their own people and matters that are important to them.
- A non-deficit approach to using algorithms, with a focus on what matters to them, not what’s the matter with them.
- Algorithms that consider a wider context that draws from real whānau stories—creating mana enhancing kōrero for whānau to contribute to.
- Transparency around the data that feeds algorithms, what algorithms are doing, who is making the decisions that affect people’s lives, and what will be done with the data they use and collect.
- Ongoing monitoring to ensure algorithms remain relevant and appropriate for use.
- To be given some level of control regarding algorithms that may use their data.