Have you noticed the increasing number of automated decisions being made about you?
Many automated decisions are benign.
The Australian Tax Office operates an automated system that gathers much of our income data and assesses our tax returns; the days of ATO staff manually checking each return are long gone.
Most of us hope for a decision that puts a few extra dollars into our pockets.
We are surrounded by many similar systems that automatically generate decisions. But as we now know, some automated decision-making processes can be devastating to an individual's or family's life.
There is a whole chapter in the robodebt inquiry which describes the impact of automated decision-making on social service recipients.
The inquiry found that even well-intentioned staff were not enough to stop robodebt from automatically issuing erroneous debt notices to recipients based on inaccurate calculations and questionable assumptions.
Such automated decision-making wrecked lives.
As Commissioner Catherine Holmes summed up in her royal commission report on the robodebt scheme, it was an "extraordinary saga" of "venality, incompetence and cowardice".
The lack of recourse for the people impacted by robodebt made this failed approach to uncovering overpayments worse.
There was no access to a legitimate complaints process for recipients.
Alarmingly, as the inquiry records, departmental officers receiving calls to the inquiries line were unable to explain how the debts had been calculated.
There was no access to a human who could examine and explain the information on which automated decisions were made.
No avenue for justice existed.
As a leap forward in the data-led world bears down upon us, it's critical that we learn from robodebt.
Artificial intelligence (AI) now generates content, forecasts and recommendations, and even makes decisions for us.
Robodebt was not an AI system. It completed data matching and automated decision-making without the use of AI.
AI adds a layer of sophistication to automated decision-making, turbocharging the ability of industry and governments to gather and process huge amounts of data.
The coming together of AI and automated decision-making creates a new and significant risk for marginalised people and communities, on a scale that even robodebt didn't achieve.
There is a developing evidence base that suggests unregulated AI will amplify our society's existing racism, sexism and other biases, in ways that could further entrench disadvantage.
Researchers in the US recently devised a test for AI bias and found that its recognition process was prejudiced against dark-skinned people.
The research concluded that AI will embed unfair practices into the hiring, insurance and renting sectors, and even education.
These new AI processes have the potential to entrench the discrimination and racism that many in our community already experience, especially if the software designers are not willing to grapple with these issues.
Authorities are racing to catch up with the evolving technology. In 2019 the Commonwealth Department of Industry, Science and Resources published the national AI Ethics Principles, followed by the Australian Human Rights Commission's release of multiple reports examining the use of AI in decision-making and how this technology intersects with human rights.
While these bodies have rightly recognised how biased data can be amplified by algorithms and possible measures to prevent it, it is debatable whether such reports and a voluntary code of conduct provide any protection against the new AI.
Stronger measures are afoot in the European Union, where new laws have been approved to protect people's rights online and people negatively affected by AI may be eligible for compensation.
Australian social service organisations work hard to address racism, sexism and other biases in our community.
We don't want to see this important work to make our society more equitable undone by the indiscriminate use of technology without appropriate levels of scrutiny and safeguards.
We cannot foresee the next frontier in the application or development of AI, but we should be concerned that there is no obvious place for independent review and recourse if the outcomes of that application are unfair or discriminatory.
There is no legitimate third party to review decisions made by AI for those who find themselves trapped by poorly designed, biased technology.
Insufficient consideration has been given to the impact of AI paired with automated decision-making, especially as it applies to marginalised people and communities.
The Data for Good Project, a new partnership between the Centre for Social Impact Flinders and Uniting Communities, is exploring the increasing impact of data, AI and automated decision-making on disadvantaged people and communities, and will work to help shine a light on this area.
We need clearer regulation so that there is a legal requirement to create proper complaint and recourse processes.
We do not want to see unfair automated practices develop in hiring, renting or education.
State and Commonwealth anti-discrimination legislation is the logical starting point for safeguards.
AI has the power to be a hugely positive force.
But without a regulatory framework which pays attention to AI's capacity to reinforce biases, it has the potential to entrench further disadvantage and marginalise people within our community.
- Peter McDonald is Project Lead at Data for Good, Centre for Social Impact - Flinders University.