Analysing AI: The impact of artificial intelligence on policy
The primary responsibility of the police is to keep people safe. However, many police departments have traditionally directed their activities on the basis of information-constrained, subjective assessments by police officials. With AI, police departments can turn the predictability of criminal activity to their advantage. Over the last decade, “predictive policing” has developed. This technology has been used principally to predict crime hotspots, but departments have also developed applications for the prediction of gun violence.
In 2011, the Santa Cruz Police Department piloted a predictive analytics tool, PredPol. This tool uses data about the place and time of individual crimes to predict crime hotspots: 500-foot-by-500-foot areas in which a crime is probable within the next twelve hours. The Los Angeles Police Department (LAPD) adopted PredPol in selected neighbourhoods in 2011, and in that year burglaries fell by 27%. In 2012, property theft fell by 19% and, relative to LA neighbourhoods in which PredPol had not been introduced, the overall crime rate fell by 13%. On the back of this success, the LAPD extended its use of the tool and, by 2014, one in three of its geographic policing divisions was using PredPol to identify crime hotspots.
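The grid-based idea behind hotspot prediction can be sketched in a few lines. This is a deliberately simplified illustration: PredPol's actual model is proprietary (reportedly built on self-exciting point processes), so the count-based ranking, cell coordinates and toy incident data below are assumptions for demonstration only.

```python
from collections import Counter

CELL_FT = 500  # PredPol-style grid cell size: 500 ft x 500 ft

def to_cell(x_ft, y_ft):
    """Map a crime's coordinates (in feet) to a grid cell."""
    return (x_ft // CELL_FT, y_ft // CELL_FT)

def predict_hotspots(incidents, top_n=3):
    """Rank grid cells by historical incident count.

    incidents: list of (x_ft, y_ft) crime locations.
    Returns the top_n cells most likely to see further crime.
    """
    counts = Counter(to_cell(x, y) for x, y in incidents)
    return [cell for cell, _ in counts.most_common(top_n)]

# Toy history: a cluster of burglaries near (600, 700) and one outlier.
history = [(610, 720), (640, 690), (590, 710), (2200, 100)]
hotspots = predict_hotspots(history, top_n=1)
```

In practice the ranking would also weight recent incidents more heavily, which is what distinguishes a predictive model from a simple historical heat map.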
Overall, by predicting more than twice as many crimes as experienced analysts (4.7% of crimes compared with 2.1%), PredPol has saved the LAPD US$9m per year. Another example is the FBI's Facial Analysis, Comparison and Evaluation (FACE) Services Unit, which provides investigative support by using face recognition software to compare facial images held in government systems (e.g. the Department of Motor Vehicles' photograph database) with pictures of missing persons and fugitives.
In the UK, Leicestershire Police began trialling a similar application, NeoFace, in April 2015. It has been shown to identify pictured suspects within seconds in 45% of cases. NeoFace was deployed on over 200 occasions during its pilot phase and is now used daily in police investigations, delivering cost savings by cutting out labour-intensive and lengthy searches of criminal mugshot databases.
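Modern face recognition tools typically reduce each face image to a numerical embedding and then compare embeddings by similarity. The sketch below illustrates only that matching step; the three-dimensional vectors, gallery names and 0.9 threshold are invented for illustration, and real systems such as NeoFace compute embeddings with deep neural networks rather than receiving them ready-made.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(probe, gallery, threshold=0.9):
    """Return the gallery identity whose embedding is most similar
    to the probe, or None if no score clears the threshold."""
    best_id, best_score = None, -1.0
    for identity, emb in gallery.items():
        score = cosine(probe, emb)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None

# Hypothetical mugshot gallery with toy 3-dimensional embeddings.
gallery = {"suspect_a": [0.9, 0.1, 0.2], "suspect_b": [0.1, 0.8, 0.5]}
probe = [0.88, 0.12, 0.21]  # embedding of a CCTV still
match = best_match(probe, gallery)
```

The threshold is the operational lever: set too low, the system returns false matches; set too high, it misses genuine ones, which is why pilot-phase hit rates such as NeoFace's 45% are reported alongside deployment counts.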
Risk assessments are used across the court system, but they have traditionally been made in an ad hoc, information-constrained and subjective way. While the convicted individual's crime and record are useful pieces of information in an assessment of their likelihood of reoffending, a risk assessment based on this information may overlook a number of important predictors.
Several jurisdictions in the US are trying to automate risk assessments, and remove bias from them, by using machine learning algorithms (MLAs) across the court system. For instance, the Laura and John Arnold Foundation commissioned the development of Public Safety Assessment-Court (PSA), an algorithm that uses data about an individual's age, criminal record and previous failures to appear in court to assess the likelihood that they will commit a further crime or fail to appear at future hearings. The tool is currently used by states such as Arizona and New Jersey and cities such as Chicago and Pittsburgh, with some initial success: its introduction has been associated with lower crime rates and jail populations.
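Pretrial tools of this kind typically combine a small number of factors into a points score that maps to a risk band. The sketch below shows the shape of such a scorer; the weights, caps and band cut-offs are invented for illustration and are not the PSA's published scale.

```python
def pretrial_risk_points(age, prior_convictions, prior_fta):
    """Toy points-based pretrial risk score (illustrative weights only)."""
    points = 0
    if age < 23:
        points += 2                        # youth is a known risk factor
    points += min(prior_convictions, 3)    # cap the record's contribution
    points += 2 * min(prior_fta, 2)        # failures to appear weigh heavily
    return points

def risk_band(points):
    """Map a points total to a coarse band for the judge."""
    if points <= 2:
        return "low"
    if points <= 5:
        return "moderate"
    return "high"

score = pretrial_risk_points(age=21, prior_convictions=1, prior_fta=1)
band = risk_band(score)
```

A transparent points scale of this sort is easier to audit than an opaque model, which is one reason checklist-style instruments remain common in court settings.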
A high rate of recidivism - the tendency of a convicted criminal to reoffend - is a failing of many criminal justice systems. In the UK, the reoffending rate has fluctuated between 26% and 29% since 2003. Fortunately, reoffending is not random: it follows patterns that AI can learn to predict.
Pennsylvania has a record on recidivism that is representative of the national average: one in three inmates is rearrested or reincarcerated within a year of being released (PR Newswire, 2015). In 2006, Philadelphia's Adult Probation and Parole Department partnered with the University of Pennsylvania to develop a prediction algorithm to forecast the risk of recidivism for individual probationers. This tool has been used in Philadelphia for seven years and has achieved an average accuracy rate of 66% across all probationers.
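A recidivism forecaster of this sort is, at its core, a classifier trained on historical outcomes. As an illustrative stand-in (the toy data, the two features and the training settings below are assumptions, not the Philadelphia model), a minimal logistic-regression sketch:

```python
import math

def train_logreg(X, y, lr=0.1, epochs=2000):
    """Fit a logistic-regression reoffending model by gradient descent.

    X: feature rows; y: 1 = reoffended within a year, 0 = did not.
    """
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def recidivism_risk(w, b, x):
    """Predicted probability of reoffending for one probationer."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy training rows: [prior_arrests, age_at_release / 10]
X = [[5, 2.1], [0, 4.5], [3, 2.5], [1, 3.9]]
y = [1, 0, 1, 0]
w, b = train_logreg(X, y)
high = recidivism_risk(w, b, [4, 2.2])  # young, many priors
low = recidivism_risk(w, b, [0, 4.4])   # older, no priors
```

The 66% accuracy cited for the Philadelphia tool reflects exactly this kind of evaluation: comparing predicted probabilities against observed reoffending across a held-out population.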
- Patient treatment
Patient treatment is a prediction problem that doctors face every day. The efficacy of a treatment for a given condition differs from patient to patient because of genetic and phenotypic variation. Indeed, many drug treatments are only effective for 30%-60% of treated individuals. But with AI, doctors are able to segment patients into population sub-groups according to their genotype and phenotype (known as molecular signature matching) and tailor the treatment type and dosage to improve efficacy and reduce side-effects.
This approach enables clinicians to make comparisons of treatment options that were too complex and time-consuming to be made manually before.
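Molecular signature matching can be thought of as mapping a patient's marker profile to the therapy whose required signature it satisfies. The marker names and therapy pairings below are simplified illustrations for demonstration only, not clinical guidance.

```python
# Hypothetical signature-to-therapy rules, checked in priority order.
SIGNATURE_RULES = [
    ({"EGFR_mut"}, "tyrosine-kinase inhibitor"),
    ({"HER2_amp"}, "anti-HER2 antibody"),
]
DEFAULT_THERAPY = "standard chemotherapy"

def match_treatment(patient_markers):
    """Return the first therapy whose required markers are all present
    in the patient's profile; fall back to the default otherwise."""
    for required, therapy in SIGNATURE_RULES:
        if required <= patient_markers:  # subset test: all markers present
            return therapy
    return DEFAULT_THERAPY

# A toy cohort segmented by genotype.
cohort = [{"EGFR_mut"}, {"HER2_amp", "TP53_mut"}, {"TP53_mut"}]
plan = [match_treatment(p) for p in cohort]
```

Real systems weigh far more markers, drug-interaction data and dosage, but the core operation - segmenting patients by signature and tailoring treatment per segment - is the one shown here.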
One of the first applications of MLAs in this area is the “Oncology Expert Advisor”, which draws on large volumes of patient data and medical literature to provide oncologists with evidence-based, first-line therapy recommendations tailored to each patient. The application has proved a successful way of quickly assessing the best treatments for an individual patient based on the latest evidence: the overall accuracy of its standard-of-care recommendations in 200 test leukaemia cases was over 80%.
- Patient monitoring
While in hospital, some inpatients experience a deterioration in their medical condition or suffer from a complication of their illness such as a life-threatening blood clot. To protect patients against these risks, clinicians have traditionally made subjective assessments of the probability of individual patients suffering a deterioration in their condition, based on ad hoc interpretations of vital signs such as
temperature and blood pressure. By applying MLAs to this prediction problem in the form of Early Warning Systems, clinicians are able to automate patient risk assessments and, in doing so, improve their accuracy and comprehensiveness. This, in turn, will prevent a number of deaths due to deterioration. Several NHS trust hospitals in the UK have implemented this application. The health sector has also explored facial expression recognition software, a subset of facial recognition that is already used in the private sector for consumer behaviour research, usability studies and market research. MLA-trained software can learn to recognise facial expressions, from the most obvious to hidden micro-expressions, with applications ranging from psychological analysis to hospital patient monitoring.
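An Early Warning System typically scores each vital sign against reference bands and escalates care as the total rises. The bands, point values and escalation thresholds below are illustrative only, loosely modelled on NEWS-style scoring rather than any deployed NHS tool.

```python
def vital_score(value, bands):
    """Score one vital sign. bands: list of ((low, high), points),
    with inclusive ranges; values outside every band score maximum concern."""
    for (low, high), points in bands:
        if low <= value <= high:
            return points
    return 3

# Illustrative reference bands (not clinical values).
TEMP_BANDS = [((36.1, 38.0), 0), ((35.1, 36.0), 1), ((38.1, 39.0), 1)]
SYS_BP_BANDS = [((111, 219), 0), ((101, 110), 1), ((91, 100), 2)]

def early_warning(temp_c, sys_bp):
    """Aggregate per-vital scores; higher totals trigger escalating review."""
    total = vital_score(temp_c, TEMP_BANDS) + vital_score(sys_bp, SYS_BP_BANDS)
    if total >= 5:
        return total, "urgent clinical review"
    if total >= 2:
        return total, "increase observation frequency"
    return total, "routine monitoring"

score, action = early_warning(temp_c=39.5, sys_bp=85)
```

An MLA-based system replaces the fixed bands with weights learned from outcome data, but the escalation logic - score, threshold, action - stays the same, which is what makes the approach auditable on a ward.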
A team of paediatricians from the Institute for Neural Computation, together with Emotient, an emotion measurement company, developed a facial expression recognition computer vision (CV) model for assessing children's postoperative pain following laparoscopic appendectomy. The team argued that human pain assessment is subject to bias and often under-recognises pain in young people, whereas facial expressions are a reliable biomarker.
The model avoids this human bias: it estimated young people's self-reported pain more accurately than nurses did and, in assessing pain severity, was as accurate as parents' assessments for the 50 children who took part in the experiment.
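Expression-based pain assessment commonly builds on facial action units (AUs). One published summary metric, the Prkachin-Solomon Pain Intensity, sums a handful of pain-related AU intensities; the sketch below assumes AU intensities have already been estimated by a CV model, and the frame values are invented. It is offered as an illustration of the scoring step, not as the metric used in the appendectomy study.

```python
def pspi(aus):
    """Prkachin-Solomon Pain Intensity from facial action-unit intensities.

    aus: dict of AU name -> intensity (0-5 each; AU43 eye closure is 0 or 1).
    Combines brow lowering (AU4), cheek raise/lid tighten (AU6/AU7),
    nose wrinkle/upper-lip raise (AU9/AU10) and eye closure (AU43).
    """
    return (aus.get("AU4", 0)
            + max(aus.get("AU6", 0), aus.get("AU7", 0))
            + max(aus.get("AU9", 0), aus.get("AU10", 0))
            + aus.get("AU43", 0))

# One hypothetical video frame's estimated AU intensities.
frame = {"AU4": 3, "AU6": 2, "AU7": 4, "AU9": 1, "AU43": 1}
pain = pspi(frame)
```

Averaging such per-frame scores over a clip is what lets a monitoring system report a single pain estimate comparable with nurse or parent ratings.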