What is artificial intelligence?
Broadly defined, artificial intelligence (AI) is software that enhances and automates the knowledge-based work done by humans.
Depending on the type of work it is being applied to, AI can be described as either narrow or general. Narrow AI describes the application of AI to individual tasks that are repetitive or based on patterns, whilst general AI aims to create a machine that is capable of performing all the intellectual tasks of the human brain.
Artificial general intelligence will reason, learn, and solve problems in complex and changing environments as effectively as humans. These abilities do not yet exist and are not expected to emerge before 2030; given the difficulties involved in achieving general AI, some estimates suggest these capabilities will not develop for over a century. Some analysts forecast the emergence of artificial super-intelligence, a type of AI that far surpasses human intellect and abilities in nearly all areas, by 2100.
A particularly promising AI process is machine learning (ML), whereby computer programs analyse data and use their newfound knowledge to inform a decision or prediction. There are two types of machine learning: “supervised” and “unsupervised”.
Supervised machine learning
In supervised learning, machine learning algorithms (MLAs) are used to find the most accurate way (known as a function) to predict an outcome (y) based on previous examples of relationships between inputs (x) and that outcome. The user tells the MLA which outcome to predict.
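As a minimal sketch (a toy example, not a production MLA), a supervised algorithm can learn a straight-line function y = a·x + b from labelled examples by ordinary least squares:

```python
def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares from labelled (x, y) examples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Labelled training examples: the user supplies the outcome to predict.
xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]  # underlying relationship: y = 2x + 1
a, b = fit_line(xs, ys)
predict = lambda x: a * x + b
```

The learned function can then predict the outcome for inputs it has never seen, which is the essence of supervised learning.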
Unsupervised machine learning
In contrast to supervised MLAs, in unsupervised machine learning the algorithm learns on its own: the user does not tell the algorithm what to look for. Instead, the MLA is programmed to do things like discovering the structure of the whole dataset (a task known as modelling), or identifying interesting subsets of the dataset, such as anomalous records or relationships (a task known as detection). An example of modelling might be the identification of different groups of prisoners within a dataset of convicted felons or the extraction of different features in unlabelled image data, while an example of detection might be the identification of fraudulent transactions or unusual patterns of care and health outcomes.
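A simple modelling task of this kind can be sketched with one-dimensional k-means clustering: the algorithm is given no labels, yet discovers the groups hidden in the data. This is an illustrative toy, not a real MLA deployment:

```python
def kmeans_1d(values, k=2, iters=20):
    """Group unlabelled values into k clusters: no outcome is supplied."""
    centres = sorted(values)[:k]  # naive initialisation: first k sorted values
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each value to its nearest cluster centre.
            nearest = min(range(k), key=lambda i: abs(v - centres[i]))
            clusters[nearest].append(v)
        # Move each centre to the mean of its assigned values.
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres, clusters

# Two natural groups hidden in the data; the algorithm finds them unaided.
data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
centres, clusters = kmeans_1d(data)
```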
So, what can AI do for governments?
Whilst the capabilities that AI presents are wide-ranging and interconnected, there are four clear techniques that AI programs can deploy to improve both the outcomes governments seek to achieve and the way in which they make policy:
1. Predictive analytics
2. Detection
3. Computer vision
4. Natural language processing
1. Predictive analytics
Better predictions can enable governments to implement preventive and/or more targeted policies. For example, by using massive training datasets to select the attributes that best predict an outcome of interest, such as an inmate’s risk of reoffending, AI can help both policymakers and frontline civil servants to make predictions in a way that is more comprehensive and less subject to human bias.
A second goal is to provide predictions on the basis of incomplete information. For instance, predictive analytics can be used to map a complex decision tree of all possible outcomes, which then simplifies human decision-making.
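The decision-tree idea can be sketched as follows. The attribute names, thresholds, and risk bands here are entirely invented for illustration; a real system would learn them from training data:

```python
# A toy decision tree mapping case attributes to a predicted outcome,
# illustrating how predictive analytics can simplify choices under
# incomplete information. All attributes and thresholds are hypothetical.
def predict_risk(age, prior_offences):
    """Return a coarse risk band from two hypothetical attributes."""
    if prior_offences >= 3:
        return "high"
    if prior_offences >= 1:
        return "medium" if age < 25 else "low"
    return "low"
```

Each path through the tree corresponds to one combination of attribute values, so a human decision-maker only has to check which branch a case falls into.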
2. Detection
Here, the goal is to automatically detect individual data records (such as a fraudulent transaction) or patterns (such as a relationship between a mode of hospital care and a health outcome) within massive and complex datasets in order to identify those that are abnormal or unusual.
There are two main benefits in using AI for these detection tasks. First, detection MLAs help government officials to identify important patterns that might be causal. Second, detection MLAs help governments achieve an unprecedented level of situational awareness. For example, a city official may want to know whether, taking account of the variables that affect a system, there is an abnormality in a specific area of the city, such as a higher density of toxic particles in the air.
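A minimal version of such anomaly detection flags any reading that sits far from the rest of the data. The readings and the two-standard-deviation threshold below are invented for illustration:

```python
import statistics

def flag_anomalies(readings, threshold=2.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(readings)
    sd = statistics.stdev(readings)
    return [r for r in readings if abs(r - mean) > threshold * sd]

# Hypothetical hourly particle-density readings for one district;
# one value is clearly abnormal.
readings = [41, 40, 42, 39, 41, 40, 43, 95, 41, 40]
anomalies = flag_anomalies(readings)
```

Real systems account for many interacting variables rather than a single series, but the principle, scoring each record against a model of what is normal, is the same.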
3. Computer vision
Computer vision (CV) enables the collection, processing and analysis of any digital image data from various sources such as satellite and aerial images and digital video. In such contexts, the goal of an AI programme is to find unsupervised methods of feature recognition; identify objects, actions or characteristics; describe content; and, overall, automate labour-intensive cognitive tasks that would usually require human supervision.
Traditionally, governments have used CV for things like traffic control (e.g. automated number plate identification technology) and policing (e.g. fingerprint matching). More advanced applications such as the analysis of MRIs or CT scans have been limited, due to low accuracy rates and low processing speed. However, deep neural networks, algorithms built from many stacked “layers” that progressively transform an input into an output, have pushed the accuracy of CV past human-level ability in certain areas.
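At the lowest level, CV operates on grids of pixel intensities. The toy filter below, a simplified stand-in for the kind of operation a convolutional layer learns, highlights vertical edges by differencing neighbouring pixels:

```python
def horizontal_edges(image):
    """Apply a simple [-1, 1] horizontal gradient filter to a 2-D grid
    of pixel intensities, highlighting vertical edges."""
    return [[row[x + 1] - row[x] for x in range(len(row) - 1)]
            for row in image]

# A tiny 3x4 image: dark on the left, bright on the right.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
edges = horizontal_edges(image)  # large values mark the dark-to-bright boundary
```

Deep networks stack many learned filters of this general shape, which is how they progress from raw pixels to objects, actions, and scene descriptions.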
4. Natural language processing
Natural language processing (NLP) makes it possible for machines to process and understand audio and text data to automate tasks such as translation, interactive dialogue and sentiment analysis. Machine learning has enabled NLP research to evolve from the meaning of individual words to the meaning of sentences and even narrative understanding, which enables machines to process wells of unstructured text data and derive meaningful insights.
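Word-level sentiment analysis, the simplest point on that evolution, can be sketched with a hand-written lexicon. The lexicon and comments below are invented; real systems learn word weights from large labelled corpora:

```python
# Hypothetical mini-lexicon; real sentiment systems learn these weights from data.
LEXICON = {"good": 1, "great": 2, "helpful": 1, "bad": -1, "slow": -1, "awful": -2}

def sentiment(text):
    """Score a text by summing lexicon weights of its lowercased words."""
    return sum(LEXICON.get(word.strip(".,!?"), 0)
               for word in text.lower().split())

comments = ["Great service, very helpful!", "Slow and awful process."]
scores = [sentiment(c) for c in comments]  # positive and negative comments
```

Sentence- and narrative-level understanding requires models that account for word order and context, but the output, a structured signal extracted from unstructured text, is the same in kind.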
Government interest in testing these technologies has concentrated on two areas: firstly, to mine public sentiment and expert content regarding citizen preferences and information related to policy propositions or implementations; and secondly, to deploy highly advanced applications of NLP for surveillance and biometric identification.