- Problems with the automated debt-recovery system run by Australia's Centrelink have made headlines worldwide
- Many have criticised the government's decision to use an algorithm to determine whether benefits were overpaid
- Algorithms still offer enormous potential benefits to policymakers – and citizens
The Australian Government’s Ombudsman recently launched an investigation into Centrelink’s automated debt-recovery system. This comes after numerous complaints about incorrectly calculated debt claims and condemnation from various public figures, including opposition leader Bill Shorten and former Digital Transformation Office head Paul Shetler.
To understand the issues that unfolded over the holiday break, it helps to be precise about what has changed. Centrelink has been using an automated system to check reported income against Australian Tax Office (ATO) data for a number of years. The primary objective of the system is to identify instances where the reported earnings figures don’t match tax records and investigate whether a benefit has been incorrectly paid.
The problem is, the matching process isn’t a straightforward comparison exercise.
Making numbers count
While Centrelink collects earnings figures on a fortnightly basis, the ATO collects yearly income figures. For some people, it’s easy to translate yearly income into fortnightly income and quickly check whether the reported figures match. But for lots of people (part time workers, freelancers, contractors, among many others) simply dividing an annual income figure by 26 won’t represent their true fortnightly income.
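The flaw described above can be made concrete with a small sketch. The figures below are purely illustrative (not drawn from any real case): a casual worker who earns in only half the year reports every fortnight correctly, yet the crude annual-average rule flags every single fortnight as a mismatch.

```python
# Illustrative sketch with hypothetical figures: why dividing an annual
# income by 26 misrepresents fortnightly earnings for irregular earners.

ANNUAL_INCOME = 26_000  # total income reported to the ATO for the year

# A casual worker's actual fortnightly earnings: paid in some
# fortnights, nothing in others (e.g. seasonal work).
actual_fortnights = [2_000] * 13 + [0] * 13  # sums to 26,000

# The crude averaging rule assumes the same amount every fortnight.
averaged = ANNUAL_INCOME / 26  # 1,000 per fortnight

# Compare the averaged figure against each fortnight actually reported.
flagged = [reported != averaged for reported in actual_fortnights]

print(f"Averaged fortnightly figure: {averaged:.0f}")
print(f"Fortnights flagged as mismatches: {sum(flagged)} of 26")

# Every fortnight is flagged, even though the annual totals agree
# exactly and the person reported their income correctly.
assert sum(actual_fortnights) == ANNUAL_INCOME
```

The point is not that averaging is always wrong — for a salaried worker on steady pay the two figures coincide — but that the rule silently assumes steady pay, which is exactly what part-time, freelance and contract workers don't have.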
The smart humans at Centrelink know this, so they had been (up until Christmas) reviewing cases of identified mismatches to ensure they weren’t commencing debt recovery processes against people who have, in fact, been correctly reporting their income. This whole process saw Centrelink issue around 20,000 notices each year for debts calculated under the system. However, even this system incorporating human review tended to be wrong in about 20% of cases.
Late last year, Centrelink switched to a fully automated system that reduced the amount of human oversight and review. As a result, Centrelink began issuing around 20,000 debt notices a week, in stark contrast to the 20,000 notices issued each year under the old system. The problem was that the automated system preserved the faulty assumption about earnings: it took annual earnings, divided them by 26, and sent out letters seeking further clarification and information whenever the result didn’t match the amount Centrelink had on file. Unsurprisingly, many of these letters were based on questionable calculations.
Those defending Centrelink have argued that the request-for-information process is intended to seek clarification and is exactly the same process as in previous years. They point out that many of the discrepancies arise from mismatches between the data provided to the ATO and the data provided to Centrelink. Of those who received a request-for-information letter, around 20% turned out to have been paid their entitlements correctly once the discrepancies were resolved. In short, the defenders argue, the process is working exactly as intended and the implementation was a success.
However, given the sheer volume of complaints the switch has generated, alongside the accompanying negative media and political attention, it is clear that whether you technically define this system as “working” or not is a moot point. The new system has clearly undermined public confidence in Centrelink and the debt-recovery process.
An artificial debate
Many have criticised the government’s decision to use an algorithm to determine whether benefits were overpaid and have questioned the “idiotic faith in big data”. Others have used this as an example of why we should be sceptical about the increased use of automated processes in the delivery of government services. Let’s be clear: to call what Centrelink used an algorithm is to be generous with the term. Usually, when we talk about algorithms (despite their explosion in use over the last few years), we’re talking about a set of rules that are followed in calculations to solve a problem. Here, Centrelink had one rule – divide a given figure by 26. It was crude, it was simple and (at least 20% of the time) it was wrong. To reject entirely the use of AI in government based on one poorly conceived and executed instance of it seems unfounded.
The real problem with this algorithmic episode, however, is that it risks overshadowing the enormous potential benefit technologies like this can have in government. Around the world, governments are starting to use artificial intelligence to improve outcomes for citizens. In the US, the Chicago Department of Public Health is piloting the use of AI to predict the risk of childhood lead poisoning based on evidence such as the age of a house, the history of children’s exposure at that address, and economic conditions of the neighbourhood. In India, an app that uses machine learning and data analytics is helping farmers understand the best time to sow crops depending on weather conditions, soil and other indicators. Local governments in Japan are using a machine learning platform to track the impact of anti-littering campaigns and determine where littering is most likely to take place. The City of Milwaukee seeks to prevent juvenile interactions with the criminal justice system by integrating data from across different departments and systems and identifying young people at risk.
Moving forward – carefully
Centrelink’s use of a simple algorithm without human oversight has had a significant impact on the people targeted by the debt-recovery process. But equally concerning is the impact these issues have had on the public’s trust in artificially assisted decision-making processes.
It is clear that we stand on the threshold of a new era of enormous potential benefit – everything that civilisation has to offer is a product of human intelligence, and we cannot predict what we might achieve when that intelligence is magnified by the tools AI may provide. Governments must carefully consider the way in which they implement these technologies, or they risk depriving their citizens of these benefits by providing grounds – albeit unwarranted – for the wholesale rejection of algorithms.
- Government and the approaching AI horizon. Danny Buerkli considers how AI is likely to reshape the business of government
- From the machines of today, to the artificial intelligence of tomorrow. IP Australia is pioneering the use of artificial intelligence in the Australian government. Its general manager of Business Futures, Rob Bollard, tells us how services – and citizens – benefit from its deployment
- Mapping the future: how governments can manage the rise of AI. How can policymakers control and steer the future trajectory of Artificial Intelligence? Cyrus Hodes and Nicolas Miailhe, co-founders of Harvard’s AI Initiative, offer up some suggestions
- March of the machines: how governments can benefit from AI. The rapid advances of artificial intelligence are poised to reshape our world, says Philip Evans. He explains why governments should embrace, and not retreat from, this upcoming revolution
- Changing times: Why it’s time to plan, not panic for AI. Although artificial intelligence can lead to many positive results, policymakers should be vigilant about the implications and direction of this fast moving technology, says Sebastian Farquhar. He explains why clear goals and objectives are paramount
- How AI could improve access to justice. With the law never having been less accessible, Joel Tito explains how the smart deployment of artificial intelligence can help
- Data detectives: how AI can solve problems large and small. Richard Sargeant is putting the knowledge gained as one of the founders of the UK’s Government Digital Service to full use by exploring how artificial intelligence can help governments address systemic challenges – he tells us how