• AI requires changes to policymaking and politics and a completely new level of data literacy in public services, says @MTBracken
• For AI to work, we’re going to require a level of data literacy and quality that is unheralded in government, says @MTBracken
• AI requires genuine data scientists in the civil service on an equal level with the policymaking class, says @MTBracken

Artificial intelligence (AI) can already do many things, from menial tasks such as helping people navigate traffic to more existential tasks such as diagnosing rheumatic heart disease in children or mapping and monitoring earthquake-prone regions.

But these improvements require changes to policymaking and politics and a completely new level of data literacy in public services. Such changes also bring real challenges and an acceptance of a different type of risk.

To understand these changes, let’s consider how AI would work in practice in our current setup. Let’s suppose that there’s a spate of elderly drivers being involved in fatal accidents. Politicians and the media agree that “something must be done”.

Today, in order to meet public concern, the relevant minister would pledge to reduce the number of people killed by older drivers. Then a civil service policy team would go away to explore policy options, such as a new driving test at 85. A new policy would be selected, discussed in parliament, and – months or years after the public outrage – changes to regulation introduced.

Many years later, there will be some evidence about whether the new policy has had any effect, but it’s easy for governments to cherry-pick data that suits them if they want to make further policy changes. The whole process is inherently political and rests on incomplete and often partly corrupted data that’s used, if at all, to justify the changes made. In short, the process of answering “why did this happen?” is already scandalously limited in evidence and data analysis.

AI changes all that

Fast-forward to an era when AI could be used to tackle this issue. Firstly, the minister and officials would have near real-time data to explore the causes of traffic incidents. Indeed, we might have the ability to incorporate real-time data about every driver in the country into a digital driving licence. If – and only if – conventional data analysis tools failed to identify a common root cause of these incidents, we could use AI to predict the drivers and conditions most likely to cause these incidents in future.

Within days, thousands of motorists could have their motoring patterns changed. AI systems could stop them driving at night or on motorways in certain weather. They could limit the distances they’re allowed to travel without a period of rest or remove their licences for a day or a month or altogether. Smartphones linked to health data, blood pressure monitors in watches, and smart sensors in cars would feed back data to the AI software in real time, allowing for continuous improvement.

The country would be safer, with only a minor set of inconveniences faced by a small set of drivers. Motorists wouldn’t need their plastic driving licence any more, and insurance premiums would go down because insurance companies would have access to real-time data about who is a higher insurance risk. And some elderly drivers who had been disallowed might be able to drive again if the AI algorithm predicted they were not as risky as previously thought.

How does AI work?

AI works on the basis that huge amounts of data – called training data – are made available to allow the algorithm to learn, over millions of iterations, until its predictive capability is far beyond our current levels. However, we set the parameters of the algorithm ourselves, and inevitably we load them with bias. When the training data is incomplete, or corrupted over time, AI can exponentially amplify that corruption. Setting the parameters is explicitly a value question. Who gets to define those parameters, and how are they called to answer for the outcomes?

All sounds fine in our example, but what if a year after the AI-powered driving licence comes into effect it is revealed that twice as many people banned from driving are left-handed? The algorithm learned that, by chance, there was a pattern of left-handed elderly drivers in the training data, and applied that to future decisions. The algorithm has also (correctly) deduced that younger males are the largest group responsible for fatal accidents, and many have had their licences revoked, while insurance premiums have become unaffordable for many others.
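The left-handedness problem can be sketched in miniature. The following toy Python example is purely illustrative (it is not any real licensing or insurance system, and the data is invented): a naive model “learns” per-group accident rates from a small, chance-skewed training sample, then applies that pattern to future decisions.

```python
# Toy illustration of a spurious correlation learned from biased training data.
from collections import Counter

# Hypothetical training records: (handedness, had_fatal_accident).
# Purely by sampling chance, left-handers are overrepresented among accidents.
training = [
    ("left", True), ("left", True), ("left", True), ("left", False),
    ("right", True), ("right", False), ("right", False),
    ("right", False), ("right", False), ("right", False),
]

# "Learning" here is just counting: an accident rate per group.
totals, accidents = Counter(), Counter()
for hand, crashed in training:
    totals[hand] += 1
    accidents[hand] += crashed  # True counts as 1

risk = {hand: accidents[hand] / totals[hand] for hand in totals}
# risk -> left-handers look several times riskier, purely by chance

# Applying the learned pattern: revoke licences above an arbitrary threshold.
decision = {hand: ("revoke" if r > 0.5 else "keep") for hand, r in risk.items()}
print(risk, decision)
```

Nothing in the code is wrong in a mechanical sense – the pattern really is in the training data – which is exactly why the resulting decisions are so hard to challenge after the fact.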

Those wishing to appeal against a decision to limit their driving powers discover that there’s no way to recreate the original decision – it was part of the algorithm’s development and, now that the data has changed, they have limited recourse in law.

How has this happened? Who is accountable? Is this wrong? Are left-handed drivers more dangerous and we never knew about it? AI’s wider value to society is unquestionable, but these sorts of problems are bound to arise – and it’s hard to see how we can navigate our way through them with our current machinery of government.

Critically, there will be more issues of accountability and trust. The quality of the data currently being used to make decisions is also in the spotlight. We know from bitter experience that it is all too often incomplete, biased, poor quality, and out of date.

The need for government reform

For decades, government’s data practices have suited the tactical needs of departments and agencies rather than doing the hard work of creating public-facing, canonical registers of data, controlled by registrars with responsibility to parliament.

These canonical registers and the training data they can provide are the basis for successful AI in government, and their adoption was government policy just a few years ago. This policy presents some simple challenges – like forcing the departments for education and business to use the same list of universities rather than each using its own. Other challenges are much harder – such as finally making Companies House the canonical record for businesses, and compelling HMRC to use the same data.

Yet in the recent Digital Economy Bill, despite much concern from us and many others close to the issue, the government attempted to give the civil service widespread powers to share data and thus worsen the already poor state of data integrity. And the minister responsible? Why, the same one responsible for the recent AI fund and ethics commission. Demand-side outcomes are much more attractive than the detailed business of reforming the government, it seems.

The problem here is obvious: successful AI implementation rests on accurate and accessible training data, and to have that in the public realm requires substantial supply-side reform of government and an inevitable relegation of departments’ needs below the needs of the user.

For AI to work, we’re going to require a level of data literacy and quality that is unheralded in government, as well as genuine data scientists in the civil service on an equal level with the policymaking class. These reforms require political courage, attention to detail and, on occasion, some major changes to the machinery of government. And this machinery is covetously overseen by permanent secretaries, who are almost entirely dead set against such reforms.

Unless we have enlightened reform of our existing government system, the promise of AI-led improvements in public services and policy outcomes will remain elusive, and that will indeed be a tragedy of the Commons.

 
