Measurement for Learning: A Different Approach to Improvement

We are a working group of ten pioneering local authorities focused on meaningful measurement, as part of the Upstream Collaborative, an active learning network of Local Government innovators supported by Nesta. This blog series sets forth a different measurement approach for public services centred on learning, and will culminate in a discussion paper later this summer. This approach has been shaped by our practical insights and experiences working in local government. 

We are from the following local authorities: Barking & Dagenham, Derbyshire, Gateshead, Huntingdonshire, Kirklees, Leeds, Greater Manchester, Oxfordshire, Redcar & Cleveland, and York. John Burgoyne, from the Centre for Public Impact (CPI), serves as listener and facilitator for the working group, and has gathered our collective input to shape this blog series and the forthcoming discussion paper.


Our current system of measurement in the public sector is not working: a focus on metrics designed for control creates gaming and perverse incentives, and ultimately makes the job of public servants harder.

One of our working group members, Sara Fernandez, who leads Oxford Hub, describes who often gets to set the metrics in our system and why this can lead to adverse effects: “people at the top of the hierarchy, who may not have authentic experiences with the problem, have the authority to set targets”. As a result, those delivering services do not feel trusted, and they alter their behaviours to hit those targets – even if that means worse outcomes for citizens.

Together, we set out to propose something better: an alternative approach, based on our practical experiences in communities across the UK, which focuses on measurement for learning and improvement rather than measurement for control.

Two Competing Schools of Thought: Control vs Leave Us Alone

The debate around measurement has too often been marked by two competing ways of thinking that sit at opposite ends of the spectrum. 

On one end of the spectrum, there is the belief that “what gets measured gets managed.” To address a social problem, this side would argue, we need to determine quantitative metrics to understand and track our progress, and then collect data that allow us to manage interventions against those metrics. This way of thinking, dominated by top-down accountability, can be characterised as “measurement for control”: the assumption is that if we control an intervention well enough, and do enough of it, we will achieve impact. It aligns more broadly with New Public Management, where those at the top of the hierarchy set performance measures and manage those at lower levels to hit those measures.

While this approach is dominant in public services, it has unintended consequences that can make it hard to achieve impact. As Lisa Keenan, the Commissioning Officer at Leeds City Council and one of our working group members, describes:

Not only do the metrics fail to accurately capture the impact we are having, they negatively change the dynamic of the relationship between government and residents, where government is viewed as a transactional service provider not an authentic connector.

On the other end of the spectrum, there is the belief that metrics alone cannot capture complex reality and that we should instead rely on professionalism and public sector ethos to do a good job. Given the complexity of the social problems we face, it is impossible to attribute quantifiable impact to any single intervention. This side would argue that measurement is a pointless and often harmful endeavour, as it can hold people accountable for things beyond their control, leading to gaming and perverse incentives. This way of thinking can be characterised as “leave us alone”: an absence of accountability mechanisms and ways of measuring.

While this “leave us alone” camp is in direct opposition to the mainstream measurement for control approach, our working group recognised it also has its flaws. It assumes that the only form of accountability is top-down scrutiny, forgetting, as Nigel Ball, the Executive Director of the Government Outcomes Lab, has argued, that “scrutiny does not need to be exercised by a distant centre, through bossy top-down finger-wagging. People could scrutinise each other, which might actually help to build trust.” As Oliver Morley, Corporate Director at Huntingdonshire District Council and one of our working group members, describes: “It is certainly hard to measure and we will not get it perfect all the time, but that does not mean there is no value in measuring, as the results enable us to learn, respond, and improve. Measurement is therefore key to addressing complex challenges.”

A Third Way: Measurement for Learning

While in theory it may be tempting to join one side of the debate fully and view the world in black and white, this working group has found there is a lot of grey to be embraced when putting thinking into practice. It is not helpful to reject either side outright, as both have merits; at the same time, it is important to understand and design measurement approaches that acknowledge the trade-offs of each.

We propose a third way of thinking that acknowledges that:

  • Collecting data through measurement is useful, as it allows us to better understand problems we face and our progress
  • The problems we face are complex in nature, however, so holding people accountable to metrics beyond their control is not helpful
  • Measurement should not be used for top-down control, but rather to learn about complex problems and the people experiencing them, so we can adapt and improve our approach

We argue that because the world is in fact complex, we need better measurement to understand it, to navigate it, and to develop new forms of accountability that move beyond top-down reporting of metrics. Importantly, if the purpose of measurement is control – as it often is in public services – rather than learning, it will have adverse effects.

Unfortunately, we feel that measurement for learning is not as common as it should be in the public sector. As Dave Kelly, Head of Reform at Greater Manchester Combined Authority and one of the members of the working group, reflects:

There is a lack of any legitimate recognition of measures that drive learning and improvement across public services organisations.

Instead, Dave argues we see an over-reliance on “narrow quantitative measures” and therefore an undervaluing of qualitative data and open-ended reflection. 

Whilst we understand the desire for central government, commissioners, and the public to use quantitative measures to hold local government to account and track the impact of their pounds, we believe there are more creative ways to involve these groups. Becky Lomas, who leads Derbyshire’s Thriving Communities work and is one of the members of the working group, makes the case for “involving people proactively in shaping what gets measured” in order to “change the way we define success.”

In order to enable workers’ practice to improve, our definition of measurement must be broadened beyond “narrow quantitative measures”. Collaborate and Dr Toby Lowe have developed a helpful way of thinking about how to broaden this lens: Human Learning Systems, an alternative approach to public management centred on learning in complexity. Human Learning Systems calls for practices such as “experimentation, gathering data, sense-making, and reflective practice” to be included in how we think about measurement (see more on Human Learning Systems, including examples from Gateshead and Plymouth, here). Given the complexity of the environments government operates in, learning can no longer be viewed as “a luxury”, but instead as a core purpose of how public services improve.

What This Looks Like In Practice: Examples from across the UK

In the coming weeks, we will expand on what this approach looks like in practice, including the values and principles that underpin it and importantly, examples of how it is being used across the UK. 

We will learn:

  • from Huntingdonshire, how they are using data to better understand their residents
  • from Oxfordshire, how they are using storytelling as an evaluation methodology that helps them learn and improve
  • from York, how they have moved from checklists to conversations to improve adult social care
  • from Derbyshire, how they are capturing people’s experiences of Covid-19 and thoughts about the possibilities of a different future
  • from Leeds, how they collaboratively developed outcomes by working with community builders

…and more!

More than Measurement: A Bigger Shift

Whilst we embarked on a journey to develop a more meaningful approach to measurement, our conversations and debates ended up exploring much broader themes and trends taking place in the public sector. For it is difficult to understand different approaches to measurement if we do not first understand the context within which these approaches sit – the shifting power dynamics, the impact of Covid-19, the mindsets that underpin our thinking.

The measurement approach we will describe in the upcoming blogs and discussion paper is part of a bigger shift we are seeing in the public sector – a different way of thinking about and approaching performance, improvement, and accountability.

This way of thinking centres on learning with those closest to the problems, and builds on Nesta’s New Operating Models for Local Government and ongoing research into how local governments are adapting in the face of Covid-19. It is also supported by a rich body of thinking from organisations in this space, from Human Learning Systems and NLGN’s work on community power to Collaborate’s Manifesto for a Collaborative Society, and the Shared Power Principle and A Manifesto for a Better Government from our team at CPI.

#MeasurementforLearning