
Adrian Brown has raised a difficult question for those who want policy decisions to be supported by evidence. In this article, he dissects a systematic review of the impact of Scared Straight prison visitation programmes conducted by the Campbell Collaboration. Based on this review, the UK’s College of Policing ranks the evidence against Scared Straight as “very strong”, but as Adrian demonstrates, that evidence is based on “seven US studies from 1967-1982, only two of which are statistically significant, and both of these have potentially serious flaws”.

He concludes that drawing such firm conclusions from such a flimsy base has prevented further experimentation that would generate more robust evidence. It has deterred would-be innovators from exploring modified approaches to prison visitation programmes that might be more effective. Systematic reviews are held up as the gold standard of evidence, but Adrian likens them to the AAA-rated bonds filled with low-quality sub-prime mortgage debt that eventually sank the global economy in 2008.

This raises two questions for people who, like me, champion evidence as a crucial part of sound policymaking.


Prime assets?

First, how big is this problem? The systematic review of Scared Straight draws on far fewer, far older and far less robust studies than many others. In the field of criminal justice, the Campbell Collaboration is the leading organisation producing systematic evidence reviews globally. Its 2012 systematic review of “hot spot policing”, led by Professor Anthony Braga, covers 19 studies including a total of 25 experiments that investigated the impact of having police on patrol in high crime areas. Ten of these were randomised controlled trials and three were from 2011. The conclusion was that “the extant evaluation research provides fairly robust evidence that hot spot policing is an effective crime prevention strategy. The research also suggests that focusing police efforts on high-activity crime places does not inevitably lead to crime displacement”.

This conclusion hasn’t stopped people from testing the impact of different approaches to hot spot policing. Instead, police organisations and a growing number of academics are trying to refine conclusions about how they can better target, test and track the impact of police presence on higher crime locations.

Scared Straight is, however, not an isolated case of a systematic review that has been talked about – and occasionally presented – as having generated more conclusive evidence than might be justified by the underlying studies. It is also clearly a problem that only the most nerdy, committed and methodologically literate readers are likely to spend time digging into the underlying studies to test their robustness. In short, there is still a problem – albeit perhaps not of sub-prime mortgage magnitude.


A new rating system

Second, if we agree there is a problem, how can we fix it? Here, I think we may be missing a trick. In the world of finance, the ratings agencies may be flawed (there are books written on this topic) but there is at least a rating system which signals to the market the level of risk in the financial instrument being bought or sold. In the world of evidence, there are also rating systems but they are in some respects less sophisticated than in finance. Systematic reviews are often seen as the apex of evidential proof, but there is no rating that signals to a time-pressed researcher or practitioner the quality of the underlying studies within them. The Maryland Scientific Methods Scale provides a 1-5 rating of the robustness of individual studies sitting within systematic reviews, so why not weight conclusions based on these scores?
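To make the idea concrete, here is a minimal sketch of what such a weighting could look like. The study names, effect values and function are entirely illustrative assumptions, not part of any existing rating system: each study's finding is weighted by its Maryland Scientific Methods Scale rating (1 = weakest design, 5 = a randomised controlled trial), so that a strong negative finding is not drowned out by several weak positive ones.

```python
def weighted_evidence_score(studies):
    """Weight each study's effect estimate by its Maryland Scientific
    Methods Scale rating and return the weighted average.

    Illustrative only: real reviews combine effect sizes with
    meta-analytic methods, not a simple weighted mean.
    """
    total_weight = sum(s["sms"] for s in studies)
    if total_weight == 0:
        raise ValueError("no studies with positive weight")
    return sum(s["effect"] * s["sms"] for s in studies) / total_weight

# Hypothetical inputs: effect = +1 (intervention helped), -1 (it harmed).
studies = [
    {"name": "Study A", "sms": 5, "effect": -1.0},  # strong design, negative result
    {"name": "Study B", "sms": 2, "effect": 1.0},   # weak design, positive result
    {"name": "Study C", "sms": 1, "effect": 1.0},   # weakest design, positive result
]

score = weighted_evidence_score(studies)
print(round(score, 2))  # -0.25: the weak positive findings are discounted
```

An unweighted average of these three hypothetical studies would be mildly positive; weighting by methodological quality flips the overall signal, which is exactly the kind of nuance a headline "very strong" rating can hide.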

There is, of course, no perfect substitute for understanding the detail and nuance underlying the evidence we are using. I am frequently frustrated by the misuse of evidence and wrong-headed thinking in criminal justice, so much so that I wrote a book about it. We also need to acknowledge that while evidence-based policy is a no-brainer on one level, some interpretations of it are too narrow and restrictive, prioritising some types of public policy intervention and evidence too highly over others. A critical issue in criminal justice is that we spend a lot of time and money evaluating small-scale interventions (for example, a weekly, one-hour cognitive behavioural therapy session for prisoners) and insufficient time considering the no doubt far greater (though far harder to measure) impact of broader factors such as the attitudes and behaviour of prison staff.


A problem shared

It is vital that policymakers, academics and practitioners engage in the debate about what constitutes “good” evidence and how we can deal with the messy realities of the world. We need to be careful of presenting “evidence-based policy” as a total solution, and instead see it as a work in progress that will never be complete, but could become ever more valuable.


FURTHER READING: