Dr Kevin Guyan, ESS trustee, considers the influences of equality, diversity and inclusion on the practice of evaluation
Alongside my role as a trustee of Evaluation Support Scotland, I work as an equality, diversity and inclusion (EDI) researcher for a higher education organisation. Some of my recent work has reviewed approaches to the evaluation of EDI interventions in UK research and innovation. The study examined the evaluation methods organisations used to measure change (e.g. an increase in female applicants for a particular role) and the rigour of the evidence gathered. As in other sectors, the quality of evaluation work was mixed: some organisations designed interventions with evaluation as an integral component, while for others evaluation was an afterthought.
We can learn much from the evaluation of EDI interventions. However, it is also helpful to consider the influences of EDI on the practice of evaluation in general.
Evaluating the effectiveness (or ineffectiveness) of an initiative involves working with real people in real-world situations. This brings with it a number of EDI considerations. As with research in general, the practice of evaluation is neither ‘neutral’ nor ‘objective’. It is shaped by decisions about the methods used, the outcomes assessed and the people involved in the work. These decisions are likely informed by an evaluator’s life experiences, frames of reference and conscious or unconscious biases. In particular, the identity characteristics of evaluators, and of those who participated in the work being evaluated, are likely to inform the approaches followed.
Alongside efforts to ensure evaluations capture a diversity of experiences, it is important to remember that the practice of evaluation brings with it the power to include and exclude. For example, a focus group might be an effective method for gauging the views of an environmental group after it received funding to develop an area of local woodland. However, if the focus group is held in an inaccessible location, or no expenses are provided for those with caring responsibilities, this method will automatically exclude some people from this key element of the project.
It may not be instantly clear who is missing from evaluation work, as absences (by their nature) are not apparent. It is also difficult to design evaluation methods that work for everyone, at all times and in all contexts. However, a reflexive approach to evaluation – in which you consider the impact of your personal biases, frames of reference and limitations – is a healthy first step. Thinking about EDI as part of the evaluation process, with an inclusive approach that empowers everyone with an equal chance to participate, will help ensure evaluations accurately reflect the reality of the work they intend to describe.
At ESS we have been working with some youth organisations to involve young people in evaluation. (See case studies here.) Kevin’s message is pertinent for all organisations, and involving your service users in evaluation may be a step towards ensuring that the feedback you receive represents a wide range of their views.