Article • November 5th, 2019
Technology • Innovation

Can communities trust AI? Some guidance for government

Article highlights


"Transparency is essential for building trust in all aspects of the public sector, & the use of AI is no exception" -@EvanStubbs & @AdamJura


At a roundtable held by @StandardsAus, @EvanStubbs & @AdamJura cite #transparency as essential for communities to trust government use of AI


People continue to doubt use of AI, despite proven benefits to public interest. @BCG's @EvanStubbs & @AdamJura explore.



Communities have yet to trust AI applications, especially when they feel those applications are not in the public interest. Evan Stubbs and Adam Jura from Boston Consulting Group's Sydney office look at how Standards Australia, a standards-setting NGO, is working to encourage ethical AI.

In March 2019, Boston Consulting Group (BCG) conducted a global survey on the ethics of AI. We asked a broad cross-section of communities how comfortable they were with certain decisions being made by a computer rather than a human being, what concerns they had about the use of AI by governments, and their expectations of its impact on the economy and employment.

Homing in on local data, we found that only 37 percent of Australians were positive about AI.

We need to understand why this lack of trust exists - and how to overcome it - if we want to benefit from the opportunities AI can offer.

The potential use of AI in criminal law offers some explanation for people's negative responses. They are especially anxious about governments using AI to support decision-making in areas such as criminal investigations and trials, or an individual's eligibility for parole or welfare.

People are concerned, for example, that conclusions drawn from current policing methods perpetuate existing forms of bias and discrimination. We know that individual and societal bias may be built into the police's data and its sources, and into the algorithms that are used to interpret the data. This is particularly true of self-learning systems, whose operations may not be fully understood even by their developers.

The use of AI in surveillance technology, such as facial recognition in public places, is another problematic area. People want to know what data is being gathered about them and what it is used for. The technology has advanced so rapidly that the law and governments have failed to keep up.

We believe that building the necessary trust depends on introducing government-backed standards, and ensuring that AI applications are transparent and make a positive impact on our public life.

The Australian conversation about AI standards

In June, Standards Australia (SA) began the process of defining a nationwide approach to AI. In order to gauge public opinion, it published a discussion paper entitled Developing Standards for Artificial Intelligence and held a series of roundtable meetings in several Australian cities - we attended the one in Sydney on 16 July 2019.

SA then asked participants to make written submissions in response to nine key questions. These included identifying where Australia's competitive advantage lies, where standards should be focused, and the anticipated benefits and costs of their implementation. The final question was “what are the consequences of no action in regard to AI standardisation?”

We responded that the big tech companies would probably self-regulate, holding themselves to account against standards that they had created, and this would benefit neither government nor communities.

In our other main submissions, we reflected on the need for a balanced approach. BCG's preference is for standards that provide guidance rather than prescription, and initially tend towards voluntary acceptance rather than regulatory enforcement. They should provide a detailed framework that individual organisations can adapt to suit their own specific circumstances.

This would offer a practical solution, one that takes account of the public's often negative perceptions about AI and makes trust-building a priority.

The OECD has also provided some guidance on this, calling for 'consensus-based standards for trustworthy AI' - Australia has joined 41 other countries as a signatory to these principles.

Trust through transparency

Transparency is essential for building trust in all aspects of the public sector, and the use of AI is no exception. It is particularly important in complex areas where careful judgement is required, as in medical diagnosis and the operation of the law courts.

There needs to be transparency about the data, the design, and the software itself, and a high standard of proof that systems' reasoning and decision-making are fair and just.

We believe this can only happen if all AI applications are open to either public scrutiny or unencumbered independent review, so that communities - including tech experts - can have confidence in the code and the data that lies behind it. We expect organisations to be accountable for the outcomes of their systems, and we follow this principle with our own platform, Source.AI. The developer builds code on the platform and hands the source to the client, an approach we would like to see applied to all AI developments.

The technology has to make a positive impact

For communities to adopt a more positive view of AI, they need to be convinced that applications are valuable in themselves and work in the public interest.

As AI systems successfully enter arenas where - as we found in our survey - the public is happy for them to operate, such as identifying tax fraud, improving traffic flow, and matching jobseekers to jobs, people's reactions will become more positive.

There are already a number of well-regarded systems in Australia's leading commercial and industrial sectors, such as agriculture, mining and finance. In agriculture, for example, AI is being used to manage the environmental impact of events such as droughts and floods. In mining, the applications include autonomous drilling and robots that load and drive vehicles to and from the mines, helping reduce the chance of accidents. And financial firms are introducing AI technologies to control financial risk, as in “regtech”, which converts regulatory texts into compliance obligations. The more AI becomes successfully embedded across the public and private sectors, the more communities will come to accept it.

The degree of rigour applied to monitoring and testing AI systems should, of course, reflect their impact on communities. As we pointed out during the Sydney roundtable, getting a health diagnosis wrong has a far more serious effect than a traffic light staying red for longer than it should. If people can see a string of positive outcomes from AI-supported medical diagnosis, for example in tackling sight loss, they can start to trust the technology in these more morally complex areas.

This is why precise and detailed government-backed standards are so important in promoting ethical AI, and why we are fully engaged with SA's new initiative. We believe that robust national and global standards can help ensure that AI developments take place in a well-managed environment, both making the best use of the technology and serving the public interest.

Written by:

Evan Stubbs, Partner and Associate Director, Data and Digital Platforms, BCG