
Governments are grappling with how to introduce artificial intelligence (AI) into sensitive areas while bringing people with them. For many governments, this uncertainty leaves AI at a standstill, even amid widespread hype about how the technology can change the world.

At CPI, we take a more realistic approach. In 2017, we published our report ‘Destination unknown: Exploring the impact of Artificial Intelligence on government’, in which we looked at the possibilities that AI offers the public sector.

Building on this and our 2018 report, ‘Finding a more human government’, and the legitimacy behaviours it outlines, we are now examining how to unlock the potential of AI in government, and the role that legitimacy plays.

We believe that in order for AI to be a valuable tool for government and citizens, it has to possess legitimacy – the deep well of support that governments need in order to achieve positive public impact.

Download our report ‘How to make AI work in government and for people’ along with the supporting case studies below to get your action plan for public sector AI and fulfil AI’s potential as a tool for government and citizens.


Supporting Case Studies

While there is a lot of hype around AI in government, it is not easy to find examples of where the public sector is already applying AI to its own operations or service delivery. To share how governments have already started using AI, we have researched four applications of AI in government:

  • Policing: Since 2017, Durham Constabulary in the UK has been trialling an AI-powered risk assessment tool called HART (Harm Assessment Risk Tool) to help custody officers make referral decisions for offenders
  • Social welfare: Allegheny County Children and Youth Services in the United States has employed a predictive risk scoring tool to help social workers decide whether children reported via child protection hotlines should be investigated further
  • Taxation: Several OECD countries – anonymised due to the sensitivity of the topic – are trialling various uses of AI technology to improve the collection and processing of taxes
  • Internal operations: Several countries such as the UK and Singapore are starting to use machine reading to analyse text and improve their internal operations, e.g. by sorting FOI requests or analysing citizen feedback

This report, ‘How to make AI work in government and for people’, was launched at the Tallinn Digital Summit 2018.

The Centre for Public Impact is investigating the way in which artificial intelligence (AI) can improve outcomes for citizens. 

Are you working in government and interested in how AI applies to your practice? Or are you an AI practitioner who thinks your tools could have an application in government? If so, please get in touch.
