How governments can secure legitimacy for their AI systems
Article highlights
- As the use of AI expands into more sensitive and contentious domains, citizens are beginning to worry.
- We need to be realistic about the potential of AI – it is not a panacea for all the world's problems.
- For AI to fulfil its potential as a tool for government and citizens, it needs legitimacy.
Public sector organisations across the world are looking to AI to improve their policymaking and service delivery. In doing so, they are contending with two main obstacles: the technology is often over-hyped, and there is public anxiety over the moral and ethical issues it raises, as evidenced by a forthcoming Boston Consulting Group survey.[1] We are realistic about both concerns, as we demonstrate in our new report, How to make AI work in government and for people. Our main conclusion is that, to be a valuable tool for government and citizens, AI has to possess legitimacy - the deep well of support that governments need to achieve positive public impact.
While AI can already automate well-defined, repeatable tasks and augment human decision-making, governments ought to be circumspect about its future direction. As AI expands into more sensitive and contentious domains, citizens are beginning to worry about the implications of such a far-reaching technology. We believe that AI can be used in government to improve outcomes for citizens, and that even against a backdrop of mistrust and uncertainty, legitimacy is an achievable aim. But how can we secure it?
An action plan for public sector AI
In order to support governments' decision-making, we have defined a five-point action plan for deploying AI, seen through the lens of legitimacy.
- Understand and empathise with the real needs of end-users
Governments should base their AI interventions on the requirements of end-users, who may be frontline workers or citizens themselves. Such interventions must reflect the diversity of frontline staff's experiences and perspectives, ideally involving them as co-designers from the very beginning of new projects. Building these kinds of authentic connections is essential if the AI development process is to succeed and end-users' trust in government is to grow.
- Focus on specific and doable tasks
Public sector AI applications function best when they are narrow and well-defined, such as directing a citizen's query to the correct destination. They will rarely, if ever, replace entire roles; instead, they will reshape them by automating discrete tasks. This frees up government employees to focus on more creative, problem-solving, citizen-facing work that enhances service delivery and increases their own job satisfaction.[2] A minimal sketch of what such a narrow task can look like appears after this list.
- Build AI literacy in the organisation and the public
Our view is that governments have three principal methods of developing AI literacy. First, train civil servants to spot potential applications in their work in government. Second, enable frontline workers to enhance their AI collaboration skills: they must be able to work with systems and their developers, and constantly reassess whether the systems' conclusions can be trusted. And third, encourage education and debate by engaging with the media, educational and cultural institutions, civil society groups, and individual citizens to spread the word about AI.
- Keep maintaining and improving AI systems
Governments will be developing AI systems within a demanding environment, where technologies, data, policies and legislation are fluid and subject to change. They should anticipate and manage any risks that may cause systems to fail, so that the legitimacy of individual systems, and of the technology as a whole, is not undermined. Maintaining and enhancing AI systems is a vital function and must be taken into consideration at the outset of any project. One such safeguard, checking live data for drift away from what a system was trained on, is sketched after this list.
- Design for and embrace extended scrutiny
An important aspect of ensuring legitimacy is engaging with and listening to the voices of individuals and civil society groups. Governments have to be resolutely open about their AI systems, making them available online to civil society wherever possible. They should invite close external scrutiny of the design and low-level code of their systems, so they are seen as technically sound and free of discrimination or bias.
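To make the "specific and doable tasks" point concrete, here is a minimal sketch of routing a citizen's free-text query to the right team, the kind of narrow, well-defined application the action plan recommends. The team labels, example queries, and the choice of scikit-learn are illustrative assumptions on our part, not a prescribed implementation.

```python
# A hedged sketch of a narrow public sector AI task: routing citizen queries.
# All team labels and training examples below are hypothetical illustrations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical historical queries, each labelled with the team that handled it.
queries = [
    "How do I renew my passport?",
    "My bin was not collected this week",
    "I want to appeal a parking fine",
    "When is my next council tax payment due?",
    "My recycling box is broken",
    "Where can I pay a parking ticket?",
]
teams = ["identity", "waste", "parking", "tax", "waste", "parking"]

# A simple, auditable text classifier: TF-IDF features + logistic regression.
router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(queries, teams)

# Route a new query; low-confidence predictions can be escalated to a human.
probabilities = router.predict_proba(["Who handles a missed bin collection?"])[0]
best = probabilities.argmax()
print(router.classes_[best], round(float(probabilities[best]), 2))
```

The point is not the particular model but the shape of the task: a single, bounded decision whose inputs, outputs and failure modes a frontline worker can inspect and, where necessary, override.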
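And to illustrate the "keep maintaining and improving" point, here is a sketch of one maintenance safeguard: raising an alert when the mix of live inputs drifts away from the data a system was trained on, a signal that review or retraining may be due. The categories, proportions and alert threshold are illustrative assumptions.

```python
# A hedged sketch of input-drift monitoring; all data and thresholds below are
# illustrative assumptions, not values from a real deployment.
from collections import Counter

def category_shares(labels):
    """Return each category's share of the given labels."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

def drift_score(train_labels, live_labels):
    """Total absolute difference between training and live category shares."""
    train, live = category_shares(train_labels), category_shares(live_labels)
    categories = set(train) | set(live)
    return sum(abs(train.get(c, 0.0) - live.get(c, 0.0)) for c in categories)

# Hypothetical example: the mix of query types seen in production has shifted
# away from the mix the system was trained on.
training_mix = ["tax"] * 50 + ["waste"] * 30 + ["parking"] * 20
live_mix = ["tax"] * 20 + ["waste"] * 30 + ["parking"] * 50

ALERT_THRESHOLD = 0.3  # illustrative; a real system would tune this empirically
if drift_score(training_mix, live_mix) > ALERT_THRESHOLD:
    print("Input drift detected - schedule a review of the system")
```

Simple distribution checks like this will not catch every failure mode, but they make maintaining and enhancing a system a routine, observable activity rather than an afterthought.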
Creating an institutional AI strategy
Governments need to create the right environment for innovation - the bedrock of any successful AI strategy. The priority is to build a community of skilled AI practitioners, by hiring the right individuals and enabling them to learn from and share their expertise with other specialists in the field. Looking outwards is essential - governments will want to collaborate with academic institutions, industrial partners, and other public sector organisations, both at home and internationally.
Governments should consult data scientists and AI developers about any investment in new infrastructure, so that it is compatible with their existing systems, databases, and AI workflows. The overall technical infrastructure has to promote transparency, so that citizens and communities can access the data and reasoning of any AI systems that may affect their lives, and raise any concerns about their correctness and objectivity.
In defining the processes and timescales for technology procurement and deployment, governments must not focus on risk management at the expense of experimentation. Only by grasping the emerging opportunities for innovation can they make a fruitful and legitimate use of AI on behalf of their citizens.
Get the report and case studies
The Centre for Public Impact is investigating how artificial intelligence (AI) can improve outcomes for citizens.
Are you working in government and interested in how AI applies to your practice? Or are you an AI practitioner who thinks your tools could have an application in government? If so, please get in touch.
FURTHER READING:
- Transforming technology, transforming digital government. Rare is the policymaker who doesn't support digital government as a doorway for strengthening public services, Miguel Carrasco explains.
- Government must be made more human or risk becoming irrelevant. Nadine Smith reports on CPI's new report on finding the human in government.
- Power to the people. Few countries have embraced the digital era as successfully as New Zealand. We talk to one of its government's key digital transformation leaders, Richard Foy, about how they've done it.
- Computer says yes. Governments are increasingly reliant on digital technology to deliver public services - and Australia's myGov service is a potential game-changer, says Gary Sterrenberg.
- Briefing Bulletin: Going digital - how governments can use technology to transform lives around the world
- Why governments need to dig deeper on digital. Danny Buerkli explores why there is no excuse to ignore data and the potential of artificial intelligence.
- Digital dawn. It may not be obvious, but US policymakers have had an important role to play in the creation of today's digital era. But sometimes it involves stepping back rather than stepping up, suggests David Dean.
[1] In its forthcoming report, 2018 BCG Digital Government Benchmarks: What Citizens Think About Governments' Use of AI, Boston Consulting Group surveyed over 13,000 internet users in 30 countries. Around 30% of respondents were strongly concerned that the moral and ethical issues of AI have yet to be resolved.
[2] This is important because, in the same BCG survey, 35% of respondents said they were very concerned about the potential impact of AI on jobs.