Finding Legitimacy in the Age of AI: Challenges & Opportunities
Article highlights
Trust in government has been on a downhill path and governments are struggling to maintain legitimacy. What does this mean for the age of #AI?
#Digital infrastructures & literacy are still underdeveloped because #policymakers are struggling to catch up with technology.
Promoting digital literacy among #policymakers is crucial for translating citizens’ voices into effective #publicpolicy.
This week CPI released its advice on artificial intelligence (AI) for government at the 2018 Tallinn Digital Summit - legitimacy is a prerequisite for the effective use of AI in government, yet it remains the missing ingredient, and this cannot continue for much longer.
So far, emerging technologies have proved to be a considerable challenge for Western governments in their quest for #FindingLegitimacy. Confronted with systemic complexities and changes that are “redefining human potential” and turning society and established institutions upside down, “political leaders seem unable to articulate a convincing vision for getting through the current upheavals to reach a brighter future”. As a result, trust in government has been on a downhill path and governments are struggling to maintain legitimacy - yet the “fourth industrial revolution” and the transition into the age of AI have already begun.
As CPI's previous report ‘Finding a more Human Government' shows, legitimacy is only gained through a successful and sustained effort to work on the relationship between a government and the people it serves. And as in every well-functioning relationship, clear communication, mutual commitment, and especially trust are needed to adapt to changing circumstances and overcome crises together. More specifically, CPI found five behaviours for greater legitimacy that government needs to work harder to adopt in an ever-changing world. So how will AI help or hinder government in displaying the key behaviours CPI has found matter most?
- Work together with people towards a shared vision
- Bring empathy into government
- Build an authentic connection
- Enable the public to scrutinise government
- Value citizens' voices and respond to them
An essential basis of a “healthy society is that people are able to trust [public] services and have agency to fix things when they go wrong”. In the past, there was little public understanding of economic policies, and political leaders sought to persuade the electorate by explaining complex economic mechanisms in simple terms. “Economic liberalism” was presented as “freedom”, whilst more socially minded economic policies advocated for “an economy that works for all”. Policies created with the use of AI present a similar problem in many ways: there is what Doteveryone recently coined an “understanding gap” in how these technologies operate. To give just one figure, only around 30% of the British public are aware that data they have “not actively chosen to share is collected” by companies and government.
The technical complexity of AI exacerbates this issue, as a recent report on digital attitudes found: “people feel disempowered by a lack of transparency in how digital services operate”. Unlike in economics, this understanding gap covers not only the public but also policymakers and, at times, even technologists themselves. For now, AI is still man-made, but in the near future machines may be able to teach themselves increasingly complex tasks, creating an AI “black box” whereby only a handful of technologists may be capable of understanding how an algorithm made a particular decision.
Ignorance creates opacity, but transparency is key for ensuring accountability. Therefore, cultivating the digital literacy of political leaders as well as the public will need to be at the heart of future government efforts. This will be instrumental in enabling the public to access and understand what information and assumptions are feeding into the decisions that impact on their lives, to participate in relevant policy discussions and better scrutinise government.
Moreover, promoting digital literacy among policymakers is crucial for translating citizens' voices into effective public policy. New York Times bestselling author Cathy O'Neil, for instance, has reported how several policies implementing AI in education, judicial trials, and financial institutions had disastrous social consequences. Most importantly, what prevented a thorough audit of these policies was a lack of transparency and public officials' general ignorance of how these algorithms worked.
Having feedback mechanisms about AI policies is important for creating legitimacy because providing answers and responding to public concerns form the bedrock of public trust in government. A well-calibrated AI policy offers an incredible opportunity, by providing “near real-time analytics” and open APIs for improving the performance of public services and allowing the public to join the conversation. Doteveryone even goes so far as to claim that an independent digital body should be set up to conduct this conversation between government and its citizenry - and in fact, 60% of people living in the UK considered such an institution a good idea: “People are not sure who they should turn to when they have concerns… many different government departments or regulators already cover different aspects of technology - that's partly why it's hard for the public to understand who's in charge”.
A centralised body making use of open APIs and digital communication channels can function as a crucial venue where citizens can voice their views and concerns, and participate more in public life.
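To make this more concrete, the sketch below shows one way such an open feedback channel could look in practice. It is a minimal, illustrative example only - the use of Flask, the endpoint names, and the topic categories are assumptions made for illustration, not a description of any existing government system.

```python
# Minimal sketch of an open citizen-feedback API (illustrative only).
# Assumptions: Flask for the web layer, in-memory storage instead of a real
# database, and invented endpoint names and topic labels.
from collections import Counter
from datetime import datetime, timezone

from flask import Flask, jsonify, request

app = Flask(__name__)

# In a real deployment this would be a durable, audited data store.
feedback_log = []
topic_counts = Counter()

@app.post("/feedback")
def submit_feedback():
    """Accept a citizen's comment on a policy area and store it."""
    payload = request.get_json(force=True)
    entry = {
        "topic": payload.get("topic", "other"),   # e.g. "housing", "policing"
        "comment": payload.get("comment", ""),
        "received_at": datetime.now(timezone.utc).isoformat(),
    }
    feedback_log.append(entry)
    topic_counts[entry["topic"]] += 1
    return jsonify({"status": "received"}), 201

@app.get("/feedback/summary")
def feedback_summary():
    """Open, near real-time view of what citizens are raising, by topic."""
    return jsonify({
        "total_submissions": len(feedback_log),
        "by_topic": dict(topic_counts),
    })

if __name__ == "__main__":
    app.run(debug=True)
```

Publishing the aggregate view openly, rather than only collecting raw submissions, is what would allow both the public and an independent digital body to scrutinise how government responds.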
Authentic connections through AI
Such a centralised body would help government build an authentic connection with its citizens. Through open APIs and citizens' feedback, government would have access to a wealth of information and a direct way of evaluating citizens' individual needs. In this way, AI offers the opportunity to channel large datasets to understand citizens' concerns and provide bespoke communications on the issues that matter most to them. But how can citizens feel government is listening to them if communications are provided by machines? A solution would be to distinguish between communications that can be automated and those where a true human-to-human interaction is more appropriate.
If citizens have an enquiry of a purely informational nature about a certain regulation, an AI system could be suitable. But for those issues where citizens want to voice their concerns (e.g. job losses or rising crime rates in their district), the dialogue should take place between two humans. Making this distinction would allow government to automate communications where little empathy is needed, and in turn be more human about issues citizens truly care about.
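As a purely illustrative sketch of that triage, the routing rule could look something like the snippet below. The keyword lists and categories are invented for illustration; a real system would need far more careful design and oversight.

```python
# Illustrative sketch only: a naive triage rule deciding whether an enquiry
# can be answered automatically or should be routed to a human caseworker.
# The keyword lists and category names are assumptions, not a proposal.

CONCERN_KEYWORDS = {"worried", "concern", "unfair", "lost my job", "crime", "afraid"}
INFORMATIONAL_KEYWORDS = {"how do i", "what is", "deadline", "form", "opening hours"}

def route_enquiry(message: str) -> str:
    """Return 'automated' for simple informational requests, 'human' otherwise.

    Defaults to a human response whenever the message looks like a concern
    or the rule is unsure, so empathy is the fallback, not the exception.
    """
    text = message.lower()
    if any(keyword in text for keyword in CONCERN_KEYWORDS):
        return "human"
    if any(keyword in text for keyword in INFORMATIONAL_KEYWORDS):
        return "automated"
    return "human"  # when in doubt, keep the interaction human-to-human

print(route_enquiry("What is the deadline for registering my business?"))   # automated
print(route_enquiry("I'm worried about rising crime rates in my district"))  # human
```

The design choice worth noting is the default: anything ambiguous goes to a person, so automation is reserved for the low-empathy, informational cases described above.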
Current trends in AI development suggest, however, that all interactions will be automated, as we will no longer be able to discern whether we are speaking to a machine or a human being - as shown, for instance, by Google Duplex. For #FindingLegitimacy, this could be problematic, as it might be detrimental to building a connection that is authentic and ultimately real.
AI is not the vision
Finally, it is clear that AI can help government in #FindingLegitimacy. However, “simply adding a neural network to a democracy does not mean it will be instantaneously more inclusive, fair or personalised”.
Machines mirror human values: they offer large-scale social good if developed appropriately, but systematically exacerbate human fallacies if developed in the wrong way. It should, therefore, be clear that AI in itself is not the vision - it is the means by which government can enhance its service to citizens and move towards #FindingLegitimacy in an increasingly complex world.
FURTHER READING:
- How governments can secure legitimacy for their AI systems. For AI to fulfil its potential as a tool for government and citizens, it needs legitimacy. Discover why in our latest guide.
- Government must be made more human or risk becoming irrelevant - our new report shows how #FindingLegitimacy. Nadine Smith reports on CPI's new report on finding the human in government
- Power to the people. Few countries have embraced the digital era as successfully as New Zealand. We talk to one of its government's key digital transformation leaders, Richard Foy, about how they've done it
- Transforming technology, transforming digital government. Rare is the policymaker who doesn't support digital government as a doorway for strengthening public services, Miguel Carrasco explains
- Why governments need to dig deeper on digital. Danny Buerkli explores why there is no excuse to ignore data and the potential of artificial intelligence
- Becoming a more human government - five behaviours for greater legitimacy. Magdalena Kuenkel reports on CPI's new report on how governments can change their behaviour to strengthen their legitimacy