Learning from self-evaluation – ESS self-evaluation process

To ensure ESS keeps practising what we preach, we have been evaluating how we evaluate…

In evaluating how we evaluate, we followed our own pathway and used our principles for good evaluation to assess how we could improve.  

All staff completed a short anonymous questionnaire to find out how we felt about our existing evaluation processes.  We learned that there was a lot that was good, for example:

We set outcomes in all our work proposals… By practising what we preach, we demonstrate a range of methods in use to the people we work with. 

However, it had been longer than we realised since we last reviewed our processes, so we updated some things. 

We used our first meeting to identify possible areas for improvement, some quick wins, and the need for a new ESS Evaluation Plan (using our template).  We prioritised staff time and effort, and made sure that we weren’t embarrassed by any of our practice!

We reviewed how we typically use our internal self-evaluation results throughout the following year (e.g. to improve our work, for reports to our board and to funders, to inform tender responses, etc.).  This helped us restructure our annual self-evaluation meeting with a focus on our key evaluation questions.

Our outcomes had just been reviewed as part of our strategic planning process, so we were confident that they were fit for purpose.  The next stage, therefore, was to work together to draw up long lists of possible indicators for each of our short- and medium-term outcomes.  These long lists were then circulated so that everyone had the chance to add to or amend them.  At our next meeting we had a go at identifying key indicators for each outcome.  

Going back to outcomes

A message we frequently share is that the evaluation pathway is a cyclical rather than linear process. And we ran into proof of this recently!  We found we were struggling to come up with useful indicators for one of our outcomes: 

Third sector and funders are better able to use evaluation evidence.   

From our experience supporting other organisations, we know this can be a sign that an outcome isn’t quite right. This prompted us to question its place in our logic model.

We realised it couldn’t sit with our short-term outcomes because they need to be achieved before this one becomes relevant.  Also, all three of our medium-term outcomes are about funders / third sector organisations using their evaluation in different ways – a similar concept. 

We have therefore decided to ditch it.  As we often tell others – don’t be afraid to go back and tweak other parts of your evaluation plan as you go.  Each part of the process can throw up fresh thinking! 

Your plan will always be a work in progress.  

Collecting evidence

Most recently, we looked at both our existing evidence collection methods and our storage systems.  Again, this session prompted some radical rethinking.  We realised we don’t currently seem to have much evidence about our medium-term outcomes.  As we are satisfied that our outcomes are right, and we know we routinely evaluate the impact of our activities, could this mean we need to design some new activities that specifically address these outcomes?  And/or can we find a way to map the evidence we already gather more directly against these outcomes?  

There is obviously more work to do.  Our next session will be in October 2023.  Watch this space!