Is there a need to re-imagine evaluation?

6-minute read

 

Why change?

Evaluation is often defined as examining programmes and interventions to determine their value, merit and worth.[1] It aims to contribute to making sense of what works and what doesn’t, and of what is deemed to be ‘value for money’. Often, evaluation contributes evidence for policy, and an emphasis is placed on this as an idealised ‘end goal’.

This idealisation highlights an urgent need to re-imagine evaluation, for several reasons. Primarily, evaluation isn’t just about assessing whether an intervention yields value for money, nor necessarily about influencing a particular policy debate. Evaluation should challenge our current assumptions about how and why we think something works, and place emphasis on the fact that change is neither linear nor always measurable. It is important to challenge funders, partners and organisations working on and commissioning an evaluation about how we define impact, and about the most appropriate method of understanding the intervention within the context in which it is situated.[2]

By re-imagining evaluation in this way, we can also challenge the foundations on which the discipline was formed. Importantly, we can acknowledge the post-colonial ideology underpinning the need to evaluate. Though an important topic, this is almost entirely neglected in the evaluation field, across both national and international contexts.

 

The actions we can take

Re-imagining evaluation shouldn’t be a debate – organisations such as Charity So White and #BAMEOnline have already started important conversations in the UK.[3] However, organisations hiring evaluators, and those working in this field, also need to think critically about how to reframe their own practice.

Some important approaches for doing this are outlined below:

1. Acknowledging positionality – where do organisations, evaluators and funders fit into the evaluation process? Often, markers of success are determined by those working in services, by donors and by evaluators. This means that defining what works and what doesn’t is done at a macro level – without necessarily involving those with direct experience of engaging with a programme or service.

We interrogate data to find causal links that demonstrate impact, but when do we interrogate our own positionality and how we perceive the world and our findings? These questions are central to acknowledging our own position within the evaluation ecosystem. Currently, evaluative practices do not embed reflexivity into their work, yet this is a key stage at which to zoom out and consider how our own experiences, and with them our biases, may affect how we do evaluation – from the objectives and questions that are developed to the tools used.

Tips for organisations:

  • When thinking about the aims and objectives of the evaluation project, ask yourself: do these speak to the work that is being done? How can you measure them in a way that acknowledges the organisation’s and funder’s roles and requirements as part of the evaluation?

  • Consider how you can co-develop the evaluation aims and objectives by bringing clients into the process.

2. Never forget the wider context when analysing the findings – complex evaluations are emerging and, though challenging by the very nature of the approach, they offer much insight into how a programme, service or intervention operates. Importantly, they also allow us to engage in critical discussions around the structural systems that heighten inequity – be it lack of access to mental health support for children and young people, systemic racism in maternal health, or legislation that can result in the over-criminalisation of certain children and young people. Failing to acknowledge or include these when ‘doing’ evaluation reinforces the lack of accountability of the systemic structures that create these very inequities.

Tips for organisations:

  • When planning an evaluation, conduct background research into how the organisation fits into the wider ecosystem of services available.

  • Draw on publicly available data, such as deprivation markers and health and wellbeing indicators, to help understand the context in which the organisation functions. In part, this also means understanding who engages with the services or offers available – a minimal sketch of this kind of analysis follows this list.

  • Be candid about structural systems that heighten inequity – there is integrity in being able to draw on the socio-economic context to situate why the services provided matter.
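
As one illustration of the second tip above, here is a minimal sketch of joining a list of service users to a public deprivation lookup, so you can see which deprivation deciles your reach actually covers. All file and column names here are assumptions for the example; in England, for instance, the Indices of Multiple Deprivation publish a decile for each small area.

```python
import pandas as pd

# Hypothetical inputs - adjust the names to your own data.
# service_users.csv : one row per client, with the small-area code
#                     (e.g. an LSOA code) where they live.
# imd_lookup.csv    : public lookup mapping each area code to a
#                     deprivation decile (1 = most deprived).
users = pd.read_csv("service_users.csv")   # columns: client_id, area_code
imd = pd.read_csv("imd_lookup.csv")        # columns: area_code, imd_decile

# Attach each client's area-level deprivation decile.
context = users.merge(imd, on="area_code", how="left")

# Count distinct clients per decile to see who the service reaches.
reach_by_decile = context.groupby("imd_decile")["client_id"].nunique()
print(reach_by_decile.sort_index())
```

A table like this is a starting point for the candid conversation the third tip calls for: if reach is concentrated in the least deprived deciles, that is part of the context worth naming.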

3. Change is rarely linear or clear – acknowledging this, and re-imagining evaluation to understand, explore and interrogate how and why change takes place, often provides more insight than evidencing whether something works. This matters because people’s lives do not operate in siloes; they are situated within a wider ecosystem that extends beyond any one intervention or programme. Part of this is paying attention to the ecosystems in which a project, service or intervention is situated, and being open and candid about the level of contribution it can make to an individual, community or wider society.

Tips for organisations:

  • This is an opportunity to set out a clear Logic Framework or Theory of Change that describes how activities lead to outcomes.

  • Create a narrative about how change happens, which can be aided by a set of hypotheses relating to programme areas or activities. Break this down into short-, medium- and long-term outcomes.

  • Identify the key levers or mechanisms of change – what is integral to making a change, and what does this require in terms of resources or approach? A sketch of one way to record this follows the list.
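
To make these tips concrete, here is a minimal sketch, with entirely illustrative names and content, of how an activity, its mechanism of change and its short-, medium- and long-term outcomes could be written down as a simple structure. A real Theory of Change is a narrative, co-developed document; this only shows how its parts relate.

```python
from dataclasses import dataclass, field

@dataclass
class Outcome:
    description: str
    horizon: str  # "short", "medium" or "long" term

@dataclass
class Activity:
    name: str
    mechanism: str                 # the lever that makes change possible
    outcomes: list[Outcome] = field(default_factory=list)

# Illustrative example - all content is hypothetical.
mentoring = Activity(
    name="Weekly peer mentoring",
    mechanism="A trusted, consistent relationship with a mentor",
    outcomes=[
        Outcome("Young person attends sessions regularly", "short"),
        Outcome("Improved confidence and self-reported wellbeing", "medium"),
        Outcome("Sustained engagement with education or work", "long"),
    ],
)

# Read the hypothesised chain of change back as a narrative.
print(f"Activity:  {mentoring.name}")
print(f"Mechanism: {mentoring.mechanism}")
for o in mentoring.outcomes:
    print(f"  {o.horizon}-term outcome: {o.description}")
```

Writing the hypotheses down this explicitly makes them testable: each outcome becomes something the evaluation can look for, and each mechanism something it can interrogate.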

4. Pay attention to lived experience – a simple, often overlooked, but core element of evaluation. Organisations, evaluators and funders have a responsibility to acknowledge individual lived experience, and should do so by engaging service users and those with lived experience throughout the evaluation – whether refining a Theory of Change, testing assumptions about what works and why, understanding how a particular service fits into a person’s life, or indeed asking whether an intervention is what’s needed.

There is a tendency to over-rely on shiny, polished case studies in reports that mask people’s lived experiences. Over-relying on these can neglect the realities of many people and reinforce the idea of an ideal service user – one who goes through a transformative journey as a result of receiving support.

Tips for organisations:

  • Draw on people who directly engage with services or activities to find out what works, for whom and under which circumstances. This will help any organisation understand whether its services or activities are meeting clients’ needs.

  • Pay attention to small but important changes that have emerged from an individual’s engagement with a service or a range of services. Transformative change is rarely achieved in a silo, and it’s important to recognise that individuals do not engage with a service alone. Understanding small milestones will help organisations understand the mechanisms that make change possible.

5. Knowledge is power – but so is ensuring that information is disseminated through appropriate channels and reaches the people it needs to. An unspoken ill in evaluation is the failure of evaluators and organisations to share information back with the people they aim to serve. In clinical research, the science-to-practice gap has been estimated at 17 years; a similar lag poses a risk to evaluation and social research in disseminating ‘real-time’ evidence.

Tips for organisations:

  • Embed knowledge mobilisation or sharing as part of everyday practice. Some approaches that could be taken include:

    (a) Disseminating summaries and findings to participants following data collection

    (b) Ensuring reports use language that speaks to the service user rather than the funder

    (c) Engaging stakeholders throughout the evaluation process – getting their insights to validate findings or test hypotheses.

There are ample opportunities to think outside the box and reframe how we share what we gather, as a way of counterbalancing the extractive nature of evaluation and research.

 

Changing the conversation

There are many ways for us to re-imagine evaluation, and this creates opportunities to interrogate current practices, acknowledge the wider social context and improve how practitioners share and learn as a collective. At Habitus, these are some of the ways in which we work; part of this is also having open and candid conversations with each other, to challenge our own thinking and recentre equity and inclusion in our practice.

At Habitus, we know that measurement forms the building blocks that can help you strategise, evolve, and grow your programmes and initiatives. We can help by reviewing what you do, evaluating your programmes, building high-impact strategies, and training your staff to continue great work.

Find out more about what we do by clicking the button below.
