Article · October 25th, 2018
Legitimacy • Artificial Intelligence

It's time to demystify the use of AI in government or risk losing it altogether

Article highlights


To cut through the scaremongering narrative of 'machines taking over the world', building trust around AI is crucial.


The only way to make the promise of AI in government come true is to do this with citizens rather than to them.


Using AI in government thoughtfully and in a way that is seen as legitimate is possible. But where do you start?



We risk missing out on the potential of artificial intelligence (AI) to improve public services if people don't trust what governments are up to. When it comes to AI, we hear and fear either the hype or the horror.

The excitement surrounding AI is officially out of control. Publications are full of wild-eyed headlines about how AI will magically solve all kinds of problems that governments deal with, from wildfires to cancer diagnosis.

This irresponsible combination of hype and scaremongering reached absurd heights with “Sophia the AI robot”, an animatronic robot that would be right at home in an amusement park ride, being presented as an intelligent entity and receiving Saudi citizenship. Soon publications started to worry about whether “Sophia” might want to “destroy humans”.

 

Unlocking the full potential of AI 

The potential of AI to improve the way we deliver public services is enormous - but not in the way breathless stories about this technology taking over the world would have us believe.

Both the hype and horror get in the way of a clear-eyed assessment of what AI can and cannot do.

To cut through the scaremongering narrative of 'machines taking over the world', we need to take measured steps to build legitimacy for the use of AI in government as we proceed. For AI in government to be successful, it needs to be designed and implemented in a legitimate way - in a way that commands trust and understanding.

 

Building trust is crucial

I ran a roundtable debate on the potential of AI in government at the recent Tallinn Digital Summit, a meeting of some of the world's most digitally advanced governments. The ministers, senior civil servants and technical experts all agreed that building trust and legitimacy into the use of AI from the get-go is critical.


Recent polling has shown that citizens worldwide are generally positive about government's use of AI. The level of support, however, varies considerably by use case. As the use of AI expands into more sensitive domains, citizens are beginning to worry: for example, 51% disagree with the use of AI to determine innocence or guilt in a criminal trial.

People will only accept the use of AI in public services and policymaking when they trust it. If they don't, we will quickly see a backlash forming and we'll lose out on the promise and potential of this technology.

Using AI in government thoughtfully and in a way that is seen as legitimate is possible. Governments are, however, just learning how to do it.

In 2012 Durham Constabulary, the police force responsible for the area around Durham in the northeast of England, began developing an AI-based tool which supports custody officers in assessing the likelihood that an individual will re-offend.

While many open questions remain about how exactly the tool performs, its introduction has been comparatively thoughtful and deliberate. The police force has been relatively open about the tool and has made details about the model publicly available. The introduction of the risk assessment tool was also set up as an experiment with research partners from Cambridge University, who provide an independent review of the tool's effectiveness.


 

Delivering on the promise of AI in government

The only way to make the promise of AI in government come true is to do this with citizens rather than to them: to develop systems with those working in public services and with the public, rather than for them.

This requires government to operate in ways it's not necessarily used to. It requires, for example, empathising with the needs of citizens and civil servants and building AI systems that are resolutely open to external scrutiny. We have set out a practical plan to help governments achieve this.

Now is the right time to improve public service delivery and policymaking with the help of AI, but we need to do so without the hype or the horror.

Unless we do this with care, governments will not get the broad public support they need for these technologies to be successful in improving people's lives and delivering better public services.

There is a lot at stake here, and neither governments nor citizens can afford to lose this opportunity.

 

The Centre for Public Impact is investigating the ways in which artificial intelligence (AI) can improve outcomes for citizens.

Are you working in government and interested in how AI applies to your practice? Or are you an AI practitioner who thinks your tools could have an application in government? If so, please get in touch.

 


Written by:

Danny Buerkli Co-Founder, staatslabor