As Diane Kennedy retires, she takes the opportunity to reflect on how third sector evaluation has changed over 14 years.
I’ve been at ESS for nearly 14 years. It’s been a fantastic job: great colleagues, great organisations to work with, and always something new to learn or ponder.
Relationships between funders and funded organisations have improved. I remember, early in my career, leading a workshop and saying ‘remember funders are people too’, and someone shouted out ‘No they’re not’. To be fair, that wasn’t a typical response even then, but it’s unimaginable now. In my view, these relationships have matured. Funded organisations have less of the ‘fear of funder’ syndrome. Relationships seem more equal, and communication is better. There is more of a sense that they are working together to make a difference.
Language has got a bit simpler. 14 years ago, ESS still talked about inputs, outputs and outcomes, qualitative and quantitative information. Now we talk about resources, activities and outcomes, stats and stories. It’s all a bit friendlier.
It’s not all about reporting. It never was for ESS; improving was always as important as proving. But it feels to me as though this is a message that the sector has embraced. Ask ‘Who is evaluation for?’ and people will tell you it’s for service users and for staff. It’s about hearing and adapting our services so that we get the best outcomes for people. And it’s for funders and other stakeholders.
Funders recognise this too. In recent years, there has been a shift towards evaluation being done and used internally, rather than simply proving impact. Of course this varies widely depending on the purpose of the fund. But it feels to me as though there is often a bigger focus on organisational learning.
Logic models have gone mainstream. 14 years ago, only the geeks knew what a logic model was. Now lots of people are using them to show how their work links to longer term or strategic outcomes. Importantly for me, they are also using them to be clear about the outcomes that come directly from their work, are within their control and are more measurable. For example, a fuel project can show that it has increased claims for fuel allowances, increased insulation and increased knowledge of how to use heating controls. It can’t directly prove that it has reduced deaths from hypothermia. There is recognition that individual organisations contribute to long term outcomes, but you can’t attribute the precise amount.
We have realised that evaluation is an adaptive, rather than purely technical, exercise. Whilst some technical knowledge is required (how to frame an outcome, identifying good indicators, developing methods for collecting information), we also know that it’s just as important to get staff and volunteers on board, feeling that evaluation is important and useful. Otherwise, that great method doesn’t get used, or people don’t use and reflect upon the information they have collected. We have done some important research into embedding evaluation, produced Making it stick, and now include reflective questions about embedding evaluation in our let’s evaluate workshops. ESS is now exploring leadership and evaluation.
There is more focus on self-evaluation and less on external evaluation. 14 years ago, it was more common to set aside a sum for an external evaluator. (We still have requests for external evaluations, and hear people saying that external evaluators can be, or are seen to be, more objective and impartial. A case of ‘well they would say that wouldn’t they’.) There may be a number of reasons for this: less money, a shift in attitude, better self-evaluation evidence. I should say, we believe there is a role for external evaluation, but it is no replacement for collecting, reflecting and learning as you work. It should add value to self-evaluation, not replace it.
It’s more ok to talk about evidence. In the early years we always talked about ‘outcome information’ because we didn’t want to put people off. Evidence sounded like something scary and academic. It still has that connotation for some. But there is an increasing recognition that evidence takes many forms (including experiences and stories) and comes from lots of places (records, staff observations, what people tell us, external stats and assessments). ‘Evidence for what?’ is a key question when thinking about what’s good enough evidence. And third sector evidence is good evidence.
We could do more work around using evidence beyond reporting and improving. We could use evidence (our own and others’) better for planning, influencing policy and evaluating ‘what works’. This is not going to be a priority for smaller organisations, but our recent consultation event suggests there is some appetite amongst larger organisations and intermediary bodies.