Article • October 16th, 2018
Technology • Legitimacy • Innovation

How governments can secure legitimacy for their AI systems

Article highlights


As the use of AI expands into more sensitive and contentious domains, citizens are beginning to worry. #AILegitimacy


We need to be realistic about the potential of #AI – it is not a panacea for all the world’s problems. #AILegitimacy


For #AI to fulfil its potential as a tool for government and citizens, it needs legitimacy. Discover why in our latest report. #AILegitimacy



Public sector organisations across the world are looking to AI to improve their policymaking and service delivery. In doing so, they are contending with two main obstacles: the technology is often over-hyped, and there is public anxiety over the moral and ethical issues it raises, as evidenced by a forthcoming Boston Consulting Group survey.[1] We are realistic about both concerns, as we demonstrate in our new report, How to make AI work in government and for people. Our main conclusion is that in order to be a valuable tool for government and citizens, AI has to possess legitimacy - the deep well of support that governments need in order to achieve positive public impact.

While AI can already automate well-defined, repeatable tasks and augment human decision-making, governments ought to be very circumspect about its future direction. As AI expands into more sensitive and contentious domains, citizens are beginning to worry about the implications of such a far-reaching technology. We believe that AI can be used in government to improve outcomes for citizens, and that even against a backdrop of mistrust and uncertainty, legitimacy is an achievable aim. But how can we secure it?



An action plan for public sector AI

In order to support governments' decision-making, we have defined a five-point action plan for deploying AI, seen through the lens of legitimacy.

 

  1. Understand and empathise with the real needs of end-users

Governments should base their AI interventions on the requirements of end-users, who may be frontline workers or citizens themselves. Such interventions must reflect the diversity of frontline staff's experiences and perspectives, ideally involving them as codesigners from the very beginning of new projects. Building this kind of authentic connection is essential if the AI development process is to succeed, and if end-users' trust in government is to increase.

 

  2. Focus on specific and doable tasks

Public sector AI applications function best when they are narrow and well-defined, such as directing a citizen's query to the correct destination (a minimal illustrative sketch of such a task follows the five points below). They will rarely, if ever, replace entire roles; instead, they reshape them by automating discrete tasks. This frees up government employees to focus on more creative, problem-solving, citizen-facing work that enhances service delivery and increases their own job satisfaction.[2]

 

  3. Build AI literacy in the organisation and the public

Our view is that governments have three principal methods of developing AI literacy: first, train civil servants to spot potential AI applications in their own work; second, enable frontline workers to build their AI collaboration skills - they must be able to work with systems and their developers, and continually reassess whether a system's conclusions can be trusted; and third, encourage education and debate by engaging with the media, educational and cultural institutions, civil society groups, and individual citizens to spread the word about AI.

 

  4. Keep maintaining and improving AI systems

Governments will be developing AI systems in a demanding environment, where the underlying technologies, data, policies and legislation are all subject to change. They should anticipate and manage any risks that may cause systems to fail, so that the legitimacy of individual systems and the technology as a whole is not undermined. Maintaining and enhancing AI systems is a vital function, and must be taken into consideration at the outset of any project.

 

  5. Design for and embrace extended scrutiny

An important aspect of ensuring legitimacy is engaging with and listening to the voices of individuals and civil society groups. Governments have to be resolutely open about their AI systems, making them available online to civil society wherever possible. They should invite close external scrutiny of the design and low-level code of their systems, so they are seen as technically sound and free of discrimination or bias.
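
To make the idea of a narrow, well-defined task concrete, here is a minimal sketch of the kind of query routing mentioned in point 2. Everything in it - the departments, the keywords and the route_query function - is a hypothetical illustration rather than any system described in the report; a real deployment would use a trained classifier with human oversight.

```python
# A deliberately narrow, illustrative task: routing a citizen's query to the
# right department. The departments, keywords and scoring rule below are
# hypothetical examples, not a real government system.

DEPARTMENT_KEYWORDS = {
    "housing": {"rent", "landlord", "tenancy", "housing", "eviction"},
    "tax": {"tax", "refund", "income", "vat", "self-assessment"},
    "transport": {"bus", "road", "parking", "licence", "vehicle"},
}

def route_query(query: str) -> str:
    """Return the department whose keywords best match the query,
    falling back to 'general enquiries' when nothing matches."""
    words = set(query.lower().split())
    scores = {
        dept: len(words & keywords)
        for dept, keywords in DEPARTMENT_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general enquiries"

if __name__ == "__main__":
    print(route_query("My landlord is raising the rent"))    # housing
    print(route_query("Where do I renew my parking permit"))  # transport
```

Even a toy example like this illustrates why narrow tasks are attractive: the scope is explicit, failures are easy to spot, and a human fallback ("general enquiries") is built in from the start.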

 

Creating an institutional AI strategy

Governments need to create the right environment for innovation - the bedrock of any successful AI strategy. The priority is to build a community of skilled AI practitioners, by hiring the right individuals and enabling them to learn from and share their expertise with other specialists in the field. Looking outwards is essential - governments will want to collaborate with academic institutions, industrial partners, and other public sector organisations, both at home and internationally.

Governments should consult data scientists and AI developers about any investment in new infrastructure, so that it is compatible with their existing systems, databases, and AI workflows. The overall technical infrastructure has to promote transparency, so that citizens and communities can access the data and reasoning of any AI systems that may affect their lives, and raise any concerns about their correctness and objectivity.

In defining the processes and timescales for technology procurement and deployment, governments must not focus on risk management at the expense of experimentation. Only by grasping the emerging opportunities for innovation can they make a fruitful and legitimate use of AI on behalf of their citizens.


The Centre for Public Impact is investigating the way in which artificial intelligence (AI) can improve outcomes for citizens. 

Are you working in government and interested in how AI applies to your practice? Or are you an AI practitioner who thinks your tools could have an application in government? If so, please get in touch.

 

NOTES:

 

[1] In its forthcoming report, 2018 BCG Digital Government Benchmarks: What Citizens Think About Governments' Use of AI, Boston Consulting Group surveyed over 13,000 internet users in 30 countries. Around 30% of respondents were strongly concerned that the moral and ethical issues of AI have yet to be resolved.

[2] This is important because, in the same BCG survey, 35% of respondents said they were very concerned about the potential impact of AI on jobs.

Written by:

Margot Gagliani, Former Senior Programme Associate