- AI has increasingly emerged as something with huge potential for government and policymaking – but let's all calm down a moment.
- Although some very clever algorithms are able to solve very narrowly defined problems, this does not equate to “artificial intelligence”.
- The UK also lacks the necessary basic infrastructure for the effective use of advanced data processing – all the more reason to get real about the prospect of AI immediately revolutionising the business of government.
Some of us have been around a long time when it comes to debates about artificial intelligence – or "AI", as it is fashionably, and most often misleadingly, known.
In my case, it's about 50 years since I first came across ideas about "AI" in science fiction. In the decades since, I have been a telecoms engineer (in the 1980s) and a social scientist – though only professionally for the last three decades.
In that time I have seen various "boom and bust" technology fashions: revolutions that never really happened, and others that no-one expected.
The latest fad in government is the potential of AI and associated ideas around “big data”.
I want to sound a note of caution.
What is ‘natural’ intelligence?
Let’s start with the notion of “artificial intelligence” itself. It has recently become fashionable to label all sorts of things as “AI” – from software apps that can win at the ancient Asian game of “Go” to fridges that can tell you what you need to replenish.
The word "artificial" is counter-posed, mostly implicitly, to "natural" intelligence – that is, the sort we humans (and possibly other species) have evolved over millions of years.
My first problem is simple – we don't really know, or at least agree on, what "natural" intelligence is. For a lot of the 20th century, the dominant idea was that there was something called "general intelligence" – or "IQ" – that humans, and only humans, possessed.
This idea has come under attack from numerous directions. The idea that there is a purely logical form of IQ has been challenged by thinkers like Herbert Simon, who suggested that much of our thinking is actually "bounded rationality" (limited by both capacity and information) combined with "satisficing" (settling for roughly 'good enough' conclusions).
Simon's ideas have been developed by many scholars looking at what are called "heuristics" – that is, 'rules of thumb' we develop to deal with very varied situations that would take too long to analyse fully in real life.
Others, like Howard Gardner, have famously developed the idea that there is no such thing as "general intelligence" (IQ), but rather multiple intelligences evolved to deal with a variety of different problems: visual-spatial; linguistic; logical-mathematical; musical; and so on. Gardner's ideas have been immensely influential in educational psychology and some educational practice.
Keith Stanovich and colleagues have also developed the idea that there is a significant difference between what has usually been called “general intelligence” – IQ – and what they call “rational intelligence” or RQ. I would characterise this as the difference between purely logical intelligence (which can often lead to bizarre conclusions) and more realistic intelligence that takes context into account when applying logic.
Also, over the past three or four decades, studies in ethology (the science of animal behaviour) have shown that humans are not the only species to display quite sophisticated intellectual capacities – other primates, cetaceans and even some bird species have been shown to have high-level cognitive and problem-solving capabilities.
There are other critiques, but my point is we have no agreed and generally accepted definition of what human ‘natural’ intelligence is. This makes it extremely hard to accept that we have any idea what “artificial” intelligence would look like.
Let's take a very simple example. A great deal was made of the fact that a program known as "AlphaGo", created by Google's DeepMind, beat a renowned Go master. But ask a simple question: could this program win at chess, or draughts, or the Viking board game of Hnefatafl? It could not even begin to play them – which almost any human, with a little coaching, could.
Please do not misunderstand – there are some very clever algorithms being developed that are very good at solving very narrowly defined problems. But that is very, very, far from artificial intelligence.
We need better infrastructure
My second issue with the ‘AI is good for Government’ fad is very simple: much of the infrastructure that could make it useful in the future has yet to be developed.
The UK may be a unique case – but I doubt it. The basic infrastructure for the effective use of advanced data processing is a data transport system. Rather like roads and railways in the 19th century or early telephones and telegraph in the 20th, you need to have ‘data highways’.
In all the sexy talk about AI one thing is missing – a serious policy to develop the basic infrastructure. I am very lucky in that I have just moved into a new-build house that has optical fibre to the premises. But I am one of a tiny proportion of the UK population that has this access.
When I worked for British Telecom back in the early 1980s it was almost ready to roll out optical fibre across the UK. But then the policies of "liberalisation" and then "privatisation" pushed by the Thatcher government brought BT's plans to a shuddering halt. Instead, Britain went off into a cul-de-sac of broadcast broadband (Sky) and inferior coaxial cable networks. At the current rate of progress, it could be another 10-20 years before the UK has a proper digital infrastructure.
This creates the possibility of a huge digital social divide, with poor and rural areas deprived of a proper digital transport network. That would not just harm the digitally excluded; it would also mean losing the benefits of 'big data' that covers the whole population across a variety of issues.
Let’s get real
I am a technophile – over the past five decades I have been an early adopter of every new technology I could afford. But I am also a realist, and that means looking coolly and calmly at the promises of the AI hypers (who usually have a product to sell).
We've been here before, whether it was the "micro revolution" of the 1980s or the "dot-com" boom of the late 1990s. Let's not get fooled again.