- Support from the top of the Obama Administration opened doors to evidence-based policymaking
- On evidence, there have been improvements at the federal level, but less good news at state and local level
- Building an ecosystem and teamwork adds huge momentum to the use of evidence
One thing’s for sure: the insidious impact of the financial crisis has spread far and wide. From high unemployment to mortgage foreclosures, drained government budgets to dormant factories, its seismic effects tore through communities large and small. In the US, however, amid the destruction there was one chink of light.
The Obama Administration’s Recovery Act not only unleashed huge amounts of new spending in a bid to stimulate the economy and save jobs but, as Kathy Stack points out, provided a fresh opportunity to restructure and redesign programmes to increase their effectiveness. “We wanted to ensure that the spending had a maximum impact, of course, but it was also a chance to create incentives for learning and evaluation as part of a pattern of continuous improvement,” she says. “This focus on evaluation and what works hadn’t happened to the same extent before.”
Stack, a 28-year veteran of the Office of Management and Budget (OMB), adds that the extra weight given to evidence and results flowed right from the top. “In every case where we made progress, it happened because there was demand from the top-level leadership,” she says. “But there is still so much more that can be done.”
DC’s data doyenne
Stack retired from federal service last year after 34 years (prior to OMB, she served for six years at the Department of Education). Not for her, though, the sunlit uplands of retirement. Instead, she headed for the Laura and John Arnold Foundation – a private foundation run by a former hedge fund manager and his wife. Stack serves as the foundation’s vice president for evidence-based innovation.
It’s a role that plays to her insights and experience gained during her long stint at OMB, where she helped federal agencies design innovative grant-making models that allocate funding based on evidence and evaluation. “As an agency that doles out money, OMB has a powerful way of persuading other leaders and agencies to embrace its way of thinking,” she concedes. “And at the time when President Obama took office, there was a new set of leaders coming on board who were in full agreement about the need to use evidence. The stars were aligned.”
Still, she also admits that the current state of play on evidence-based policymaking in the US is very much a mixed picture. “At the federal level, there have been dramatic improvements in certain social service programmes, building on the work that was done in the Bush administration, where there was a lifting up of the value of rigorous evaluation. At the state and local level, it is very spotty. There are some amazing things going on in places like New York City, Chicago and Utah, but there are also opportunities to do much more. The success stories are important, though, because they make it easier to drive the agenda at a federal level as you can point to how it helps local constituents.”
In her role as advisor for evidence-based innovation at OMB, Stack sought to help agencies use data and evidence to make better decisions. There was certainly no shortage of areas to focus on. After all, the federal government spends tens of billions annually on social programmes – many of which deliver results that are far from optimal.
“Under the Obama administration, we designed a set of new programmes that authorised the largest grants for strategies and interventions which have the strongest evidence that they work – the safest bets, in other words,” she says. “But these new programme designs also allowed for some smaller grants for strategies that have promising evidence. We also ‘stole’ social impact bonds from our friends in the UK, and called them ‘Pay for Success’. These types of projects have taken off and are being implemented at the federal level, as well as the state and local level. And we have a number of programmes where we offer flexibility to states and localities on the condition that they conduct rigorous evaluation.”
Steps to success
Clearly, this pattern of work is a tale of progress made, but Stack believes there is far more that can still be achieved – starting with harnessing the administrative data that lies, often hidden and quiet, in different corners of government. “There are all kinds of high-value datasets that sit in various agencies that are used to administer various programmes, but they tend to be locked up in their individual silos,” she says. “It often takes years to get data-sharing agreements in place for one ad hoc study, and I think people are recognising there is a gold mine there if we can create an infrastructure to support it. We want to be able to link data across programmes at different levels of government, including looking at ways to include private sector data where appropriate.”
This, she adds, is a key lesson from her time in post: if evaluation and evidence are seen as a ‘gotcha’ tool, agencies won’t embrace them. “But if you present it as a tool for learning, you can get people engaged and they see it as something that is interesting and helpful,” she adds. “It helps them embrace the evaluation concept from day one.”
Culture should be another area of focus. Stack illustrates her argument by pointing out that, in the past, evaluation and evidence were seen by those in agencies as threatening – rather than an opportunity to learn – and this mindset still persists today. “There are still large parts of government where evaluation is seen as something that is done to you by researchers and provides no benefits to the programmes,” she says.
Asked why this is the case, she says that it goes back to the 1970s, when evaluation officers were first installed in independent offices across government. “While evaluation studies always need to be independent to be credible, they were seen as an accountability tool rather than a learning tool, and this created a huge divide between the people who run the programmes and the evaluators, who were subsequently, in many instances, marginalised. In the agencies where leadership at the top has recognised that the two sides need to work together, they have demonstrated how rigorous evaluation can help programmes improve. When programme and evaluation offices work as a team, it enables them to identify the key problems and the types of evaluations that are needed to generate useable information.”
This sense of an ecosystem and teamwork – connecting different agencies that are doing similar things – adds huge momentum. “By bringing evaluation officers and data together they can learn from experiences and get excited about the possibilities,” she says. “This approach works, not only across agencies but also across different levels of government.”
And speed is another issue to address. “It is difficult to get people excited about evaluation when they perceive that it will take years to reach conclusions or obtain findings, and by the time the evaluation has been completed the policy is likely to have changed or people will have moved on,” she says. “The more we are able to demonstrate ways to conduct evaluations in a shorter timeframe – via low-cost experiments or randomised controlled trials that use existing administrative data – the greater the opportunities are to bring people to the table.”
Stack remains very much an optimist, believing that the journey of evidence-based policymaking in the US – one she helped kick-start and then oversaw – is well placed to pick up the pace in the years ahead. “Although there will always be challenges, the Evidence-Based Policymaking Commission Act is hopefully a sign of things to come,” she says.
“The fact that two senior leaders in Congress sponsored this initiative – Republican Speaker Paul Ryan and Democrat Senator Patty Murray – means that there is genuine demand for this type of approach to take root. And when there is demand from both the White House and Congress, it opens up the potential to accelerate progress and help facilitate discussions between federal, state and local governments around how they could benefit from evidence-based policymaking. We’re just getting started.”