- Evidence should provide government with the insights to achieve better citizen outcomes
- We need to broaden our understanding of when and how evidence is used and who produces it
- Many countries find performance management and measuring policy performance a challenge
Rising up the agenda…
Today’s corridors of power increasingly echo to conversations about the use of evidence in government. Citizens want their governments to work effectively. Governments operate in complex environments, so it is often not obvious which policies work, which don’t and which actually cause more harm than good. Evidence should provide government with the insights it needs to achieve better outcomes for citizens. In this briefing bulletin, we explore what ‘evidence’ actually means in the policy context.
An example to get us started
Take the Scared Straight programmes. Here, at-risk kids in the US were taken on prison visits, where inmates confronted them with the realities of prison life, in order to deter them from committing crimes. Although this sounds plausible enough, rigorous evaluations showed that it had the opposite effect: kids who took part in the programme were more likely to commit a crime than those who didn’t.
While a range of methods is in use, Randomised Controlled Trials (RCTs) are widely seen as the most rigorous way to figure out what works and what doesn’t. Originally used in agriculture and popularised by medicine, RCTs are increasingly popular in public policy, too. From the ‘What Works’ initiative in the UK – a network of institutions dedicated to helping government departments understand the impact of policies – to the US Congress instituting an Evidence-Based Policymaking Commission, it is clear that the idea of using better evidence is taking hold.
But potential still untapped
There is a risk, however, that the evidence conversation is being framed too narrowly on at least two counts: the point in time at which the evidence is gathered, and who gets to produce it.
It ‘works’ there – but will it work here?
The current international ‘what works’ debate shows a strong focus on empirically rigorous ‘upfront evidence’ – existing evidence generated somewhere else. Using the results of an RCT on the effects of, say, introducing tablets to schools in Scotland as a basis for their introduction in England is one such example.
Accordingly, ‘what works’ institutions in different countries tend to focus on collecting, assessing, synthesising and disseminating such upfront evidence. While it is helpful to know what has previously worked and what has not, upfront evidence alone is not likely to be sufficient. The outcomes of most public policies are highly dependent on context and cannot be assumed to replicate, even within the same jurisdiction.
It’s empirically rigorous – but is it actionable?
Similarly, there is a strong focus on academic researchers as the primary producers of evidence. Governments, however, need evidence that is not just robust but, above all, actionable. No matter how empirically robust it may be, the value of evidence is limited if the question it addresses and the problem it tries to solve are not informed by both practitioners’ experiential insight and the judgment of policymakers. Solving this problem is hard, since academics operate under incentive structures that tend to reward empirical rigour rather than ‘actionability’.
A more inclusive approach is needed
In an idealised process, governments:
1. Distil the best available evidence,
2. Decide on the configuration of a policy based on the evidence,
3. Implement the policy and measure its effects, and
4. Evaluate the results, and learn and improve from them.
These elements never come as neat steps in a linear process, nor do they all receive the same attention. When thinking about how to leverage evidence, we must recognise that evidence cannot simply be inserted at a given step of a procedure.
To achieve improved outcomes we may need to broaden our understanding of when and how evidence is used and who is involved in producing it.
Much emphasis has been placed on the importance of evidence forming part of the policymaking process upfront. This is partly because many countries find performance management and measuring policy performance in terms of final outcomes to be a challenge. It can be done, however.
In order to know ‘what works’, test here rather than rely on evidence from somewhere else
Evidence can also be produced in the relevant context by conducting local experiments, drawing on existing performance management data, or a combination of both. Performance management data is the information generated through administrative processes and by the implementing organisation, and it is explicitly linked to the system in question. Introducing tablets in English schools, monitoring test scores in those schools, and then adjusting policy based on the results is one example of how performance management data can drive improvement. The introduction of tablets could also be randomised across schools to allow for stronger causal inference.
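As an illustrative sketch only (not drawn from the bulletin), a school-level randomisation of this kind comes down to assigning schools to treatment and control at random, then comparing average outcomes. The school names, score model and assumed uplift below are all hypothetical:

```python
import random
import statistics

random.seed(42)

# Hypothetical school identifiers; in practice these would come from
# administrative (performance management) data.
schools = [f"school_{i:02d}" for i in range(40)]

# Randomly assign half the schools to receive tablets (treatment)
# and half to continue as before (control).
random.shuffle(schools)
treatment = set(schools[:20])

def mean_test_score(school: str) -> float:
    """Stand-in for scores taken from performance management data.
    Purely simulated here: a noisy baseline plus a small assumed
    uplift for treated schools."""
    baseline = 60 + random.gauss(0, 5)
    uplift = 2 if school in treatment else 0
    return baseline + uplift

scores = {s: mean_test_score(s) for s in schools}
treated = [scores[s] for s in schools if s in treatment]
control = [scores[s] for s in schools if s not in treatment]

# Because assignment was random, the difference in means is an
# unbiased estimate of the average effect of introducing tablets.
effect = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated effect: {effect:.2f} points")
```

The point of the randomisation is that, on average, treated and control schools differ only in whether they received tablets, so the difference in means can be read causally rather than as a mere correlation.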
For example, the UK’s Cross-Government Trial Advice Panel, a group of 25 external experts brought together by the Cabinet Office, works alongside civil servants and implementing organisations to run in situ trials and generate useful evidence within those organisations during implementation.
Simultaneously, the UK Cabinet Office, through the independent National What Works Advisor and its own What Works unit, focuses on developing the government’s own capacity to produce and use rigorous evidence. By contrast, in the international debate these aspects of the system receive significantly less attention than other elements.
Make it actionable
In the current discussion, academic researchers are treated as the key source of evidence, along with government policy think tanks and the analytical offices that support public administrations in policymaking. Yet the process of integrating evidence into final decision-making involves politicians, policy advisors and practitioners, all of whom operate in a complex system. All need to be closely involved in co-creating evidence to ensure it is actionable, by which we mean:
- Demand-led: answers questions that are being asked by those who are using the evidence
- Pragmatic: sensitive to operational and political constraints
- Comprehensible: translated into a format that can be digested and easily understood
- Timely: available when it is actually needed
- Cost-effective: the long-term positive effects outweigh the initial outlay
- Robust: validated and approved by respected external organisations