Don’t just judge us by our data

Blog | 2020-10-06 | By: Digital Council for Aotearoa New Zealand
[Image: A baby in a white bodysuit lies on a bamboo floor facing the camera, an open book in front of it with reading glasses resting on top. Image source: Viktoriia Lykhonosova/bigstock.com]

Big data comes with some big questions and big dilemmas that extend far into our futures. Its judgements can follow us, stay with us, and define us intergenerationally.
“Don’t judge a book by its cover.”
“Don’t be sneaky when doing the right thing.”
“Use emotional intelligence when inputting data.”
“Who watches the watchers?”

This is a small selection of the thoughtful and powerful quotes we've heard as we've travelled the country exploring how people whose lives are affected by automated decision making (known to many as algorithms) define trustworthiness. What are their current levels of trust, and what do they need in order to trust these technologies?

Over the next few months we plan to share these insights in more detail as we bring attention and energy to the dilemmas in this space. These insights were gleaned through participatory workshops designed and delivered by Toi Āria: Design for Public Good. Their methodology helps people to locate themselves and their levels of trust in relation to a number of scenarios in which algorithms are used: medical waitlists, job searching, at-risk youth, the justice system and immigration.

The following summaries are early insights from specific workshops, giving a taste of the sort of feedback we're getting from our community engagement.

“Don’t judge a book by its cover”

Traditionally we talk about not prejudging the worth or value of something by its outward appearance alone. Automated decision making takes this to a whole new dimension.

Some workshop participants are concerned that algorithms make deficit-based assumptions about people and do not consider their strengths. They're concerned that algorithms can exacerbate existing racial inequalities, and that algorithms (or perhaps those developing them) don't show respect for cultural difference or for personal information.

Participants talk about a desire for strength-based algorithms that build whānau capabilities, rather than being deficit focused. This relates to both the type of data being collected (“data is only collected when you do something wrong”) and the purpose of the algorithm. They would like algorithms to consider a wider context that draws from real whānau stories — “create mana enhancing kōrero for whānau to contribute to”. They’d like to see positive changes acknowledged through data being updated. And they’d like to see algorithms that balance whānau health and wellbeing with efficiency. This requires a human element.

“Use emotional intelligence when inputting data”

[Image: A robot hand and a human hand reaching out to touch each other. Image source: World Image/bigstock.com]

Computers can’t ‘see’ or ‘know’ a person and their personal situation. They have artificial intelligence, not emotional intelligence. So there is a wish for algorithms to be used only for tasks that don’t require human discretion and empathy.

Many participants place a high value on the need for human involvement in processes where algorithms are used. They talk about humans being at the centre of creating, utilising and agreeing on the purpose of any algorithm.

Those humans need clear, good intent. After all, who is watching the watchers?

“Who watches the watchers?”

[Image: Close-up of a dark-haired woman's left eye behind clear smart glasses, computer imagery etched on the lens. Image source: dragonstock/bigstock.com]

Automated decision making interfaces with and impacts on our lives in many ways, some of which most of us can't even begin to imagine. This generates mistrust.

At a higher level, participants would like to see sound governance, oversight, regulation and review of those using automated decision making. This includes a comparative analysis of algorithmic efficiency and effectiveness with human decision-making.

At an organisational level participants talk about transparency from those gathering and using our data. What criteria are they using? Who gets to make the decisions that affect people’s lives so deeply?

At an individual level, participants want to see what data is being gathered on them, for what purpose and how it will make a difference for them. They want more accessible, understandable information about who is using their data, how and for what purposes.

After all, as one participant puts it, “don’t be sneaky when you’re doing the right thing”.


*******

ACTIVITY SUMMARY

  • We have almost completed our workshops looking at how people whose lives are impacted by automated decision making define trustworthiness. We are starting to send out invitations to Town Hall events for interested groups to discuss the findings before we draft a report and recommendations for the government.
  • A reminder if you’re interested in examining where you stand on automated decision making, you can enrol to take part in our workshop at Nethui on 12 October.
  • Last week we heard from representatives of the travel industry and the supply chain logistics industry. Tomahawk, a tourism digital marketing firm, talked with us about the impact of rapid advancements in online travel bookings on the tourism sector. On the other side of the coin, TradeWindow talked with us about the 10–20 paper documents it takes to complete an export process and how digital adoption could make this process more efficient globally.