
Applying evaluation methods

In this step you will learn how to ensure that your methods produce useful and meaningful evaluation findings.

Before we get to the discussion of specific evaluation methods, it is useful to think about what you need to do to ensure that your methods produce useful and meaningful evaluation findings. In particular, we will consider context, rigour and skills.

Context


Numbers are only really meaningful if they have context.

For example: if an event has 1,000 attenders, on its own that doesn’t tell you much. It’s much more meaningful if you know whether that’s more or fewer than last time, or than other similar events, or the target, or the number needed to ‘break even’. Or perhaps it’s more people than are legally allowed in the venue! Each context gives a different meaning to the same figure.

For evaluation, we often want to know what difference doing the activity actually made. We can only know this if we have context. Typically, this means that we need a baseline: a figure for what was the case before the activity, against which we can compare the figure afterwards to see how much difference was made.

For example: suppose you were running art classes to encourage local teenagers to do art and found out that, after the class, 10% of local teenagers did art. That could be good or bad, depending on how many did it before. Was it 5%, in which case you’ve doubled the proportion? 10%, in which case you’ve made no difference? Or 15%, in which case you’ve actually put some of them off? That last result might still be acceptable if one of your objectives was to enhance the quality rather than the quantity of engagement.
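The arithmetic above can be written out as a quick check. This is an illustrative sketch only (the function names and figures are invented for this example, not taken from the course): the same 10% follow-up figure is compared against the three hypothetical baselines just discussed.

```python
def percentage_point_change(baseline_pct, after_pct):
    """Change in participation, in percentage points."""
    return after_pct - baseline_pct

def relative_change(baseline_pct, after_pct):
    """Change relative to the baseline (1.0 means the proportion doubled,
    0.0 means no change, a negative value means a decrease)."""
    return (after_pct - baseline_pct) / baseline_pct

# The same 10% follow-up figure, against three possible baselines:
for baseline in (5.0, 10.0, 15.0):
    print(baseline,
          percentage_point_change(baseline, 10.0),
          round(relative_change(baseline, 10.0), 2))
```

With a 5% baseline the proportion has doubled (relative change 1.0); with a 10% baseline nothing changed; with a 15% baseline the relative change is negative.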

This highlights the need to plan and begin evaluation long before your activity actually starts, wherever possible.

Sometimes, the past isn’t the most appropriate comparator, particularly where there are wider changes taking place. It can be useful to know what the difference is now between those affected by the activity and those not affected (the ‘control group’).

To return to the previous example: it might be that we know that the proportion of teenagers doing art usually decreases in years that they have exams at school. In that case, comparing to the proportion who did art the year before would be unhelpful. If you saw that the proportion doing art was higher where you had run the activity than where you hadn’t, then that would suggest a positive impact in terms of breadth of engagement, even if levels had decreased since the baseline.
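The comparison described above (the change where the activity ran, set against the change where it did not) is often called a ‘difference in differences’. A minimal sketch, using made-up participation percentages purely for illustration:

```python
def difference_in_differences(treated_before, treated_after,
                              control_before, control_after):
    """Estimated impact: the change in the group that got the activity,
    minus the change in the comparison (control) group."""
    return (treated_after - treated_before) - (control_after - control_before)

# Made-up figures: participation fell everywhere in an exam year,
# but fell less in the areas where the activity ran.
impact = difference_in_differences(
    treated_before=12.0, treated_after=10.0,   # fell by 2 points
    control_before=12.0, control_after=7.0,    # fell by 5 points
)
print(impact)  # 3.0 percentage points
```

Even though participation dropped below the baseline where the activity ran, the drop was smaller than in the control areas, suggesting a positive impact of around three percentage points under these illustrative numbers.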

It’s important to understand the impact of activity in relation to the appropriate context. But if you use a control group, it’s also important to understand whether the ones doing the activity are different from those who don’t. For example, are activities only offered in wealthier areas or to those who speak a certain language?

Rigour


All established evaluation methods have recognised standards of ‘good practice’ in how they are delivered. Ideas of ‘rigour’ in evaluation have developed beyond ‘objective’, ‘scientific’ and ‘neutral’ study, recognising that valuable insights can be gained from approaches that are, for example, subjective, artistic or rooted in a declared viewpoint. But this is a reason for more care and rigour of approach, not less.

The methodology of analysis should be shared in a way that means that someone else analysing the same data/information the same way would arrive at a reasonably similar interpretation.

Similarly, there are standards for the selection of research participants to ensure they are representative of who you want to know about: you ask a selection of attenders or activity participants (the ‘sample’) as a way of learning about all attenders or activity participants (the ‘population’). There are also standards for the analysis of statistics. If you ask a larger sample of a particular population, you can be more confident and/or more precise about the result.
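The link between sample size and precision can be illustrated with the standard approximate margin of error for a proportion (a textbook formula, not something specific to this course; the figures below are illustrative):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# How precision improves as the sample grows, for an estimated
# proportion of 50% (the worst case for this formula):
for n in (50, 200, 1000):
    print(n, round(margin_of_error(0.5, n), 3))
```

With 50 respondents the estimate is only accurate to within roughly 14 percentage points either way; with 1,000 respondents that shrinks to about 3 points. Note that quadrupling the sample only halves the margin, so there are diminishing returns, which is one reason to keep data collection proportionate.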

Equally, all evaluation has limitations and it is better to be clear about these than to conceal them.

Skills


Different evaluation methods require different skills. In the next activity we discuss when you might choose to do evaluation yourself or get others to do it, along with many other things you might want to think about when making that decision.

For now, it’s just important to note that the skills available to you, and how far you are able to provide context and rigour, may affect which evaluation methods you choose to use.

Above all, it is important to understand what you might want to know about the event, activity, or programme you are evaluating, what you (practically) can know, and what methods can contribute. It’s also important to be aware of what you can’t, or won’t, know. 

Key questions checklist

You will also find a copy of these questions in the Applying evaluation methods section of your Evaluation Workbook, which you can keep for future reference and to use as required.

There will always be more that you could find out, but part of the skill of effective evaluation is knowing what will be most useful, and what will be enough, to support your learning. This is linked to the Evaluation Principle of Proportionality explored in Week 1.

This article is from the free online course Evaluation for Arts, Culture, and Heritage: Principles and Practice, created by FutureLearn.
