
Evaluation Process Applied to Training – Evaluation Planning

General process of evaluation explained, and its application in education
[Figure: W.K. Kellogg Foundation Evaluation Handbook illustration representing the evaluation process as a circular, hence iterative, process]
© Wellcome Connecting Science

In general, the evaluation process is very similar to the research process in any other social science: it starts with question(s), followed by design, data collection and analysis, and ends with sharing the results, which may inform further actions. Simplified, the main phases of the evaluation process are:

  • Determining the need for evaluation and who the stakeholders are
  • Formulating evaluation questions: What do we want to know about the course/initiative we are evaluating?
  • Choosing indicators that are going to help us observe/measure specific aspect(s) we are looking at
  • Collecting the information we need in an ethical manner
  • Analysing the collected data in a valid and reliable way
  • Sharing the results of the analysis and making decisions about further actions

The process of evaluation in teaching and learning follows the general principles described above.

Evaluation Planning

Evaluation of teaching and learning will be broadly similar whether the course is face-to-face, hybrid or some form of digitally based learning and teaching. To make the evaluation equitable (i.e. just and fair to all involved), the needs and attributes of all the stakeholders should be analysed and considered throughout the planning and implementation of the evaluation.

Defining what we would like to know about a specific course, i.e. formulating the evaluation question(s), is part of the planning. The scope and type of evaluation will depend on the questions we are asking. To help define the questions, it is sometimes useful to represent the course/initiative in terms of inputs, activities, outputs and short- to long-term outcomes (this is called a logic model). Formative evaluation would then look at the inputs, activities and immediate outputs of the course, while summative evaluation would cover the short-, medium- and long-term outcomes.
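
To make the logic model idea concrete, here is a minimal sketch in Python; the course elements listed are invented placeholders, not taken from any real evaluation:

```python
# A minimal sketch of a logic model for a hypothetical course.
# All entries are illustrative placeholders, not from a real evaluation.
logic_model = {
    "inputs": ["trainers", "funding", "learning platform"],
    "activities": ["lectures", "hands-on exercises", "discussion forums"],
    "outputs": ["number of learners trained", "materials produced"],
    "outcomes": {
        "short_term": ["increased knowledge and skills"],
        "medium_term": ["changed practice at work"],
        "long_term": ["organisational or community change"],
    },
}

# Formative evaluation focuses on inputs, activities and immediate outputs;
# summative evaluation focuses on the short- to long-term outcomes.
formative_focus = [part for part in logic_model if part != "outcomes"]
print(formative_focus)  # ['inputs', 'activities', 'outputs']
```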

Examples of formative evaluation questions relate to how the course is implemented (competencies covered, use of certain resources, learners’ participation, trainers’/facilitators’ roles, pedagogical approaches used, innovations applied, intended learning outcomes, etc.).

Examples of summative evaluation questions usually relate to the course/initiative’s short-, medium- and long-term outcomes, including the barriers that might have prevented the intended outcomes from happening.

For example, when we designed the evaluation for the course you are currently following, our overarching evaluation question concerned the ways in which, and the extent to which, this course influences learners’ practice and/or careers, propagates to local communities and contributes to organisational/institutional change.

Evaluation Design

Design would normally draw upon existing theories of how people learn, in general and/or in a specific context, to define the theoretical framework for the methods of data collection and analysis, based on the evaluation questions asked. We discussed relevant learning theories at the start of this week. Based on these theories, several evaluation frameworks have been developed and are widely used to support evaluation design; we will mention one such framework here, relevant to adult education and Continuing Professional Development.

The Kirkpatrick evaluation framework is most often used to evaluate the efficacy of training in the short, medium and long term. It was developed by Donald Kirkpatrick in the 1950s and has survived since then in a slightly adapted form, which currently comprises four levels:

  1. Reaction of learners – their experience during the course
  2. Learning (including skills and changes in attitudes) – the increase in knowledge resulting from course attendance
  3. Behaviour – the application of learning in employment and/or other context, such as study or career
  4. Results – the broader impact the learning has on the learners’ wider contexts, immediate community or organisation.
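
To make the four levels concrete, here is a small, hypothetical sketch pairing each level with the kind of question it typically answers; the wording of the questions is illustrative only, not taken from any specific evaluation:

```python
# Kirkpatrick's four levels paired with example evaluation questions.
# The questions are invented for illustration, not from a real evaluation.
kirkpatrick_levels = {
    1: ("Reaction", "How did learners experience the course?"),
    2: ("Learning", "What knowledge, skills or attitudes did learners gain?"),
    3: ("Behaviour", "Are learners applying the learning at work or in study?"),
    4: ("Results", "What wider impact has the learning had on the community or organisation?"),
}

for level, (name, question) in kirkpatrick_levels.items():
    print(f"Level {level} ({name}): {question}")
```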

Other frameworks exist, applicable to different contexts of teaching and learning. This particular course uses a framework specific to networked and community learning, largely based on Kirkpatrick’s model, to evaluate learners’ experiences and engagement, the build-up of knowledge capital, and the application of the learning in their own practice and organisation.

When deciding on methodology and accompanying methods of data collection, the most important choice is between qualitative and quantitative methodologies (or a mixed-methods approach). While qualitative methodology draws upon non-numerical data (text, images, etc.), quantitative methodology uses numerical data, which is often available through learning analytics collected in platforms such as VLEs, or can be collected using questionnaires.

A qualitative approach often allows you to go into depth and discover detail about learner experiences. Qualitative data is well suited to drawing inferences about cause and effect, and about the reasons why a course performed well or was less successful. In contrast, quantitative methodologies can be applied to a larger sample and have the potential for drawing general conclusions.

A combination of the two methodologies, called mixed methods, is often used by evaluators.

Data Collection

Typical qualitative methodologies used in the evaluation of teaching and learning include case studies and ethnographic/cyber-ethnographic approaches, with data collection methods such as open-question questionnaires and surveys, interviews, focus groups, observation, historical texts, reflective texts, visual artefacts and more.

Quantitative methodology sometimes uses a quasi-experimental design, which tries to measure the effect of a ‘treatment’ in teaching. Usual data collection methods in quantitative design include scales, such as a Likert scale rating how much people agree with statements, and surveys or questionnaires that use closed questions or open questions that can be quantified. Surveys are nowadays often conducted online via services such as SurveyMonkey, Google Forms or SmartSurvey. Institutional or openly available statistical data can also be a source of quantitative data.
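
As a brief sketch of what such quantitative data can look like in practice, the snippet below converts hypothetical Likert-scale responses into numbers using an assumed 1-to-5 coding and summarises them; both the responses and the coding are invented for illustration:

```python
import statistics

# Hypothetical responses to a single Likert item, e.g.
# "The course met my learning needs." (invented data)
responses = ["Agree", "Strongly agree", "Neutral", "Agree", "Disagree"]

# An assumed 1-5 numerical coding of the agreement scale.
scale = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
         "Agree": 4, "Strongly agree": 5}

scores = [scale[r] for r in responses]
print("mean:", statistics.mean(scores))      # 3.6
print("median:", statistics.median(scores))  # 4
print("mode:", statistics.mode(scores))      # 4
```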

When collecting data, blended and digital learning offer the additional option of digital data collection through learning analytics, learners’ discussion forums or comments areas such as the one used in this course.
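
As one hedged illustration of what learning analytics might offer, the snippet below computes a simple completion rate per learner from an invented platform event log; the log format and the metric are assumptions made for this sketch, not a description of any particular platform’s analytics:

```python
# Invented event log from a hypothetical learning platform.
events = [
    {"learner": "P01", "step": "1.1", "action": "completed"},
    {"learner": "P01", "step": "1.2", "action": "viewed"},
    {"learner": "P02", "step": "1.1", "action": "completed"},
]

TOTAL_STEPS = 2  # assumed number of steps in the course

# A simple engagement metric: the fraction of steps each learner completed.
completed = {}
for event in events:
    if event["action"] == "completed":
        completed[event["learner"]] = completed.get(event["learner"], 0) + 1

rates = {learner: n / TOTAL_STEPS for learner, n in completed.items()}
print(rates)  # {'P01': 0.5, 'P02': 0.5}
```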

Probably the most widely used data collection methods are surveys or questionnaires and scales. You will find more tips on both attached at the end of this step.

Data collection requires an ethical approach. For example, when collecting data about learners, their rights to privacy, security and confidentiality must be respected: participants’ data should be treated confidentially and anonymously, stored securely, and handled in compliance with the relevant data protection laws, especially if you are working internationally. This includes observing learners’ right to withdraw their data at any point during the evaluation, which is an important principle of informed consent.
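
As one minimal illustration of the anonymisation point (a sketch only, not a complete data-protection solution), the snippet below replaces participant names with random pseudonyms before analysis; the records are invented:

```python
import secrets

# Invented participant records; the names are placeholders.
records = [{"name": "Alice", "score": 4}, {"name": "Bob", "score": 5}]

# Replace identifying names with random pseudonyms. The mapping should be
# stored securely (or discarded) so individuals cannot be re-identified,
# and kept only as long as learners may still withdraw their data.
pseudonyms = {r["name"]: f"P{secrets.token_hex(3)}" for r in records}
anonymised = [{**r, "name": pseudonyms[r["name"]]} for r in records]
print(anonymised)
```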

Data Analysis

The method of analysis will depend on the scale and type of data. For qualitative data, thematic analysis is often used. Qualitative analysis requires triangulation across data sources, methodologies and the individuals who analyse the data, so that credibility and confirmability of the results can be achieved. The analysis can be done manually for smaller-scale data, but with larger sets software can be used to support the coding process. Quantitative analysis includes statistical approaches, often in the form of descriptive statistics, though more advanced statistical analysis can be performed using widely available software.
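
As a simple, hypothetical illustration of the coding step that thematic analysis builds on, the snippet below tallies how often analyst-assigned codes appear across a handful of invented interview excerpts; real thematic analysis involves iterative coding and the triangulation described above:

```python
from collections import Counter

# Invented interview excerpts, each already tagged with codes by an analyst.
coded_excerpts = [
    {"text": "The hands-on sessions helped me most.",
     "codes": ["practical work"]},
    {"text": "I struggled to find time alongside my job.",
     "codes": ["time barriers"]},
    {"text": "The exercises made the theory click.",
     "codes": ["practical work", "understanding"]},
]

# Count how often each code occurs: a first step towards grouping
# related codes into broader themes.
code_counts = Counter(code for excerpt in coded_excerpts
                      for code in excerpt["codes"])
print(code_counts.most_common())
```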

The risks and unintended consequences of every evaluation process should be considered and allowed for. Many contextual factors can affect the evaluation process and its results, so the more factors are considered, the more valid and reliable the results will be. Finally, the results of the evaluation are normally shared, through appropriate forms of dissemination, with most if not all of the stakeholders involved.

This course’s evaluation will use mixed methods to analyse data collected via FutureLearn analytics and metrics, end-of-course and follow-up surveys, as well as participant observation notes and feedback.

© Wellcome Connecting Science
This article is from the free online course Train the Trainer: Design Genomics and Bioinformatics Training.
