How Do Trainers Evaluate Their Training?

Learn more about evaluation of training for practice.
How can you be sure that your training has been effective? It is pleasant to receive positive comments on evaluation sheets completed by learners at the end of the session. But positive responses at the end of training don’t guarantee the learners will incorporate the knowledge and skills into their practice. To ensure they really do put the learning into practice, you need some form of follow-up inquiry after the learning has been completed. For example, you could contact a sample of learners two weeks later and ask whether, and how, they have put the learning into practice. Or you could ask the sample to complete a follow-up test. Other approaches include asking managers whether they have noticed a difference.
These all take additional time, but they are the only way to be certain your training is genuinely doing what it’s supposed to.

Healthcare trainers want to know how effective their teaching has been. Evaluation methods range from attendance registers and eLearning completions, through assessment, to evaluating the effects of training on the health of the local population.

This continuum appears to offer an ever-increasing quality of useful information. The reality is more complex. The levels which are most easily measured often show little correlation with lasting learning and transfer to practice.

The Danger of Measuring Only What’s Easy to Measure

Measuring that which is easy to measure and ignoring that which is harder to measure is known as the McNamara fallacy. For example, an easily measured number of homeopathic preparations dispensed gives no indication of the efficacy of this form of therapy.

In training, the number of completed eLearning modules for Infection Prevention &amp; Control within an organisation may not correlate with a reduction in the number of its Hospital-Acquired Infections (HAIs).

Levels of Evaluation

Many training evaluation models exist. For several decades, one of the best known has been the Kirkpatrick Model. It evaluates at four levels:

1. Reaction: Learner perception of ‘Knows that’ and ‘Knows how’.

2. Learning: Learning assessed at the time: ‘Shows how’.

3. Behaviour: Evaluation of transfer to practice: ‘Does’.

4. Results: Impact on the organisation or intended field.

The above four levels of evaluation, shown as arrows pointing left to right

We have outlined the Kirkpatrick model here because for new trainers it is an easily understood introduction to evaluation. However, as we show below, it can present issues for those unfamiliar with its limitations. At the end of this step we recommend a model which is more complex, but which may better meet the needs of healthcare training.

Level 1. Post-Course Evaluations

Post-course evaluations (also known as ‘Happy Sheets’ or ‘Smile Sheets’) are questionnaires completed by trainees at the end of training. They often comprise a series of statements to which the trainee is invited to indicate how much they agree or disagree.

Five-point scale from 'Poor' to 'Excellent' headed 'Training'. The 'Excellent' box is ticked.

These questionnaires and their online equivalents offer a simple way of recording trainees’ feelings and are widely used by trainers and learning designers. However, post-course questionnaires do not correlate well with the efficacy of the learning. Positive comments and high scores are satisfying to receive, but they are not a reliable indicator of lasting learning.

Trainees may unwittingly offer suggestions on post-course evaluations which would make the training quicker but less effective. In Step 1.10 we described how a group of physicians had no significant retention of learning from an online module after 55 days. The report authors noted that these results were poorer than those of previous similar studies. This may have been because they had ‘streamlined’ the learning materials following user feedback. In doing so they may have removed learning activities which would have enhanced long-term retention.

Trainers may need to be wary of learner suggestions that parts of the training could be shortened. Learners can base such suggestions on an illusion of learning, simply because they understood it at the time. For example, they may have understood a concept when explained by the trainer and felt the ensuing group work to have been superfluous. But the group discussion may have been included for the explicit purpose of encouraging further processing.

Level 2. ‘Shows How’

In this level, some form of assessment of learning takes place. If the assessment matches the outcome(s) as described in Step 3.7 and Step 3.8 it shows the learner has achieved the outcome set for the training session.

A successful assessment undertaken at the end of learning does not guarantee it will also be transferred to practice. As mentioned in Step 4.4, massed learning produces a slightly better result than the more effective interleaved alternatives if assessments are carried out at the end of training. For this reason, evaluation at this level may not indicate use of the optimum learning method.

Level 3. ‘Does’

In Step 1.5 we explored the difference between ‘being taught’ and ‘being told’ and we gave the example of cabin crew demonstrating how to find and put on a lifejacket.

In January 2009, the passengers on US Airways Flight 1549 received just such a demonstration. Less than 10 minutes later, the aircraft crash-landed on the Hudson River. The photograph below shows passengers who have evacuated onto the aircraft wings. Most are not wearing lifejackets.

Image shows aircraft which crash-landed on the Hudson river in New York. Passengers can be seen standing on the wings without lifejackets

This stark image indicated that the ‘Does’ element of that day’s lifejacket demonstration did not materialise.

In Step 2.3 we mentioned how ‘Does’ following attendance at an Infection Prevention and Control session was established by audits and records of HAIs.

Will Thalheimer (whose evaluation model we recommend below) cautions against accepting individuals’ evaluations of their own transfer to practice. He suggests a subjective view may be unreliable when compared with objective evaluations such as audits and interviews with managers.

Level 4. Impact on Organisation and Recipients of Healthcare

This level may be above the reach of someone new to healthcare training, but it is the ultimate aim of all training. In the evaluation model offered below, this level is divided into two.

The Learning Transfer Evaluation Model (LTEM)

The Learning Transfer Evaluation Model represents an evaluation method which we believe is suited to healthcare. LTEM is too detailed to explore fully in this step, so we have included it in Downloads below. We recommend that participants download the document and consider its use within their organisation.

Talking Point

Consider sharing your answers to these questions in Comments below:

  • What form of evaluation do you carry out for your own training?
  • How might you undertake a higher level evaluation for your topic?

This article is from the free online

Train the Healthcare Trainer

Created by
FutureLearn - Learning For Life
