If people don’t trust what governments are up to, we risk missing out on the potential artificial intelligence has to improve public services. When it comes to AI, we hear and fear either the hype or the horror.
The excitement surrounding AI is officially out of control. Publications are full of wild-eyed headlines about how AI will magically solve all kinds of problems that governments deal with, from wildfires to cancer diagnosis.
This irresponsible combination of hype and scaremongering reached absurd heights with “Sophia the AI robot”, an animatronic robot that would be right at home in an amusement park ride, being presented as an intelligent entity and receiving Saudi citizenship. Soon publications started to worry about whether “Sophia” might want to “destroy humans”.
Unlocking the full potential of AI
The potential of AI to improve the way we deliver public services is enormous – but not in the way breathless stories about this technology taking over the world would make us believe.
Both the hype and horror get in the way of a clear-eyed assessment of what AI can and cannot do.
To cut through the scaremongering narrative of ‘machines taking over the world’, we need to take measured steps to build legitimacy for the use of AI in government. For AI in government to be successful, it needs to be designed and implemented in a legitimate way – in a way that commands trust and understanding.
Building trust is crucial
I ran a roundtable debate on the potential of AI in government at the recent Tallinn Digital Summit, a meeting of some of the world’s most digitally advanced governments. The ministers, senior civil servants and technical experts all agreed that using AI in a way that builds in trust and legitimacy from the get-go is critical.
Recent polling has shown that citizens worldwide are generally positive about governments’ use of AI. The level of support, however, varies a lot by use case. As the use of AI expands into more sensitive domains, citizens are beginning to worry. For example, 51% disagree with the use of AI to determine innocence or guilt in a criminal trial.
People will only accept the use of AI in public services and policymaking when they trust it. If they don’t, we will quickly see a backlash forming and we’ll lose out on the promise and potential of this technology.
Using AI in government thoughtfully and in a way that is seen as legitimate is possible. Governments are, however, just learning how to do it.
In 2012 Durham Constabulary, the police force responsible for the area around Durham in the northeast of England, began developing an AI-based tool which supports custody officers in assessing the likelihood that an individual will re-offend.
While many open questions remain about how exactly the tool performs, its introduction has been comparatively thoughtful and deliberate. The police force has been relatively open about the tool and has made details about the model publicly available. The introduction of the risk assessment tool was also set up as an experiment with research partners from Cambridge University, who provide an independent review of the tool’s effectiveness.
Delivering on the promise of AI in government
The only way to make the promise of AI in government come true is to do this with citizens rather than to them, to develop systems with those working in public services and the public rather than for them.
This requires government to operate in ways it’s not necessarily used to. It requires, for example, empathising with the needs of citizens and civil servants and building AI systems that are resolutely open to external scrutiny. We have set out a practical plan to help governments achieve this.
Now is the right time to improve public service delivery and policymaking with the help of AI, but we need to do so without the hype or the horror.
Unless we do this with care, governments will not get the broad public support they need for these technologies to be successful in improving people’s lives and delivering better public services.
There’s lots at stake here and neither governments nor citizens can afford to lose this opportunity.
The Centre for Public Impact is investigating the way in which artificial intelligence (AI) can improve outcomes for citizens.
Are you working in government and interested in how AI applies to your practice? Or are you an AI practitioner who thinks your tools can have an application in government? If so, please get in touch.
- REPORT: How to make AI work in government and for people