“Scared straight” was a government programme with the goal of deterring at-risk youth from committing crimes.
The initiative involved young people visiting prison inmates and spending time with them - the idea being that it would deter them from committing future crimes. Makes sense, doesn't it? Policymakers and parents of at-risk youth were understandably enthusiastic. Unfortunately, the scheme backfired: it turned out that it actually made young people more likely to turn to a life of crime.
In a similar way, there was a fierce debate about the best way to distribute antimalarial bed nets. Some people advocated handing the bed nets out for free, while others strongly favoured charging a small price for them. They felt that charging for bed nets would reduce waste, as only those who really needed one would buy one and - as a result - it would be used and valued more.
Sounds reasonable, right? Wrong. The evidence from rigorous evaluations shows very clearly that it is far better to hand out bed nets free of charge. This evidence led to a complete change in policy, and a recent Nature article found that 450 million cases of malaria and 4 million deaths could be averted as a consequence.
These are just two examples of policy ideas that made sense on an intuitive level but ran aground when confronted with the rigorous evidence of pilot testing. That's why rigorous assessment of policies is so important. And that's why I have spent the last five years doing impact evaluations, in particular randomised controlled trials (RCTs), of different aspects of governments' and NGOs' work. RCTs revolutionised medicine and are now used more and more to test and improve social policy. In the simplest case, you choose randomly who participates in an intervention or programme (the treatment group) and compare that group's outcomes to those of the group that does not (the control group). In more complicated versions, you can also compare different variants of a programme.
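The simplest case described above - randomly assign units, then compare average outcomes across the two groups - can be sketched in a few lines. This is an illustrative toy, not any specific evaluation from the article; the function name and data are made up.

```python
import random

def assign_and_compare(outcomes_if_treated, outcomes_if_control, seed=0):
    """Randomly split units into treatment and control, then compare mean outcomes.

    A minimal sketch of the simplest RCT design: half the units are
    assigned to the programme at random, and the estimated effect is the
    difference in average outcomes between the two groups.
    """
    rng = random.Random(seed)
    n = len(outcomes_if_treated)
    indices = list(range(n))
    rng.shuffle(indices)                # random assignment is the key step
    treated = set(indices[: n // 2])    # half the units get the programme

    treat_outcomes = [outcomes_if_treated[i] for i in range(n) if i in treated]
    control_outcomes = [outcomes_if_control[i] for i in range(n) if i not in treated]

    avg_t = sum(treat_outcomes) / len(treat_outcomes)
    avg_c = sum(control_outcomes) / len(control_outcomes)
    return avg_t - avg_c                # estimated average treatment effect
```

Because assignment is random, the two groups are comparable on average, so the difference in means can be attributed to the programme rather than to who was selected for it.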
Not knowing might be more expensive than running an RCT
To be fair, opponents of a more evidence-based approach have some legitimate concerns. A primary issue is cost: RCTs can be expensive, chiefly because large sample sizes are needed to detect effects reliably. Yet not knowing whether your programme works, and potentially spending millions of dollars on something that is not working, can be extremely expensive too. In some cases, like the “scared straight” programme mentioned above, a programme could even produce more adverse effects than no programme at all.
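The sample sizes behind that cost concern can be made concrete with the standard normal-approximation formula for a two-group comparison. This is a textbook back-of-the-envelope calculation, not a method taken from the article; the function name is my own.

```python
import math

def sample_size_per_arm(effect_size, alpha_z=1.96, power_z=0.84):
    """Approximate sample size per arm for a two-group mean comparison.

    Uses the standard normal-approximation formula
        n ~ 2 * (z_{alpha/2} + z_{beta})**2 / d**2
    where d is the standardised effect size (difference in means divided
    by the outcome's standard deviation). The default z-values correspond
    to a 5% significance level and 80% power.
    """
    return math.ceil(2 * (alpha_z + power_z) ** 2 / effect_size ** 2)
```

For a modest effect of d = 0.2 this gives roughly 390 participants per arm - nearly 800 in total - which is why trial samples, and hence data-collection budgets, tend to be large. Halving the detectable effect quadruples the required sample.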
Another important factor to consider is that often governments already possess vast datasets, and hence there is no need to collect much additional data. So if you are considering introducing interventions with the goal of increasing children's test scores, for example, or reducing people's delays in paying their taxes, this information is typically already collected by government. Using such data can lower the cost of an RCT dramatically.
Finally, an inexpensive way of improving a planned intervention or programme is to consider the lessons learned from other rigorous evaluations conducted in a similar context.
The ethics of randomised access to a programme
Others have ethical concerns, particularly around the fact that RCTs by their very nature do not allow access to a programme to all those who are potentially interested. That's a fair point, but we need to consider that most governments, NGOs and other organisations apply specific selection criteria when choosing participants for their programmes, and very often these criteria are no fairer than a random selection in which everybody eligible for a programme can potentially benefit from it.
During the time I spent in Kenya, I saw that in many cases people are selected because they live close to the main road or have connections with the local administration. Random selection was perceived as much fairer by the people in the communities I worked with. Finally, it is possible to randomise in ways that let everyone in the study ultimately benefit from the programme - for example, by randomising the order in which communities are phased into it.
Start with using existing evidence
There are many different approaches that organisations can take. One place to start is the evidence already out there. No matter what area of policy you work in, you should always want to know what rigorous research has already been done on the subject. Good places to start this investigation are the What Works Network in the UK, IPA, 3ie, and the Poverty Action Lab. If there is not much in-house capacity, you might also hire someone to summarise the existing evidence and present it to your team in an actionable way - this is not a trivial exercise. Be careful when drawing insights for your own programme or intervention, and consider carefully both the methodological soundness of each evaluation (internal validity) and the context in which it was produced (external validity).
Organisations can also run their own trials. When existing evidence is scarce, it can make sense to run a large, fully fledged impact evaluation. In other cases, a small and quick operational test is more appropriate. I always advise organisations to start by drawing up their theory of change and writing down the evidence they have for each of the assumptions they have made. This serves as a useful starting point for deciding which elements still need to be tested.
Many organisations lack the capacity to find, understand and apply existing evidence to their work, or to run such tests themselves, which is why we founded Policy Analytics. We help organisations find and understand existing evidence, and we conduct impact evaluations and operational trials to maximise the effectiveness of their programmes.
The good news is that there has been much progress and research over the past years, and a large number of high-quality impact evaluations have been conducted all over the world. The task now, however, is to bridge the gap between research and policy. To achieve this, we need to use our existing knowledge and extend the use of rigorous testing methods for decision-making to the public sector. I am convinced that rigorous evaluations have the potential to help us find what works best in order to address the most pressing questions of our time.