
How Do Trainers Evaluate Their Training?

Learn more about how trainers evaluate training and its transfer to practice.

Healthcare trainers want to know how effective their teaching has been. Evaluation methods range from attendance registers and eLearning completions, through assessment, to evaluating the effects of training on the health of the local population.

This continuum appears to offer an ever-increasing quality of useful information. The reality is more complex: the levels which are most easily measured often show little correlation with lasting learning and transfer to practice.

The Danger of Measuring Only What’s Easy to Measure

Measuring that which is easy to measure and ignoring that which is harder to measure is known as the McNamara fallacy. For example, an easily measured number of homeopathic preparations dispensed gives no indication of the efficacy of this form of therapy.

In training, the number of completed eLearning modules for Infection Prevention & Control within an organisation may not correlate with a reduction in the number of its Hospital Acquired Infections (HAIs).

Levels of Evaluation

Many training evaluation models exist. For several decades, one of the best known has been the Kirkpatrick Model. It evaluates at four levels:

1. Reaction: Learner perception ('Knows that' and 'Knows how').

2. Learning: Learning assessed at the time ('Shows how').

3. Behaviour: Evaluation of transfer to practice ('Does').

4. Results: Impact on the organisation or intended field.

The four levels of evaluation shown as arrows pointing from left to right.

We have outlined the Kirkpatrick model here because for new trainers it is an easily understood introduction to evaluation. However, as we show below, it can present issues for those unfamiliar with its limitations. At the end of this step we recommend a model which is more complex, but which may better meet the needs of healthcare training.

Level 1. Post-Course Evaluations

Post-course evaluations (also known as ‘Happy Sheets’ or ‘Smile Sheets’) are questionnaires completed by trainees at the end of training. They often comprise a series of statements to which the trainee is invited to indicate how much they agree or disagree.

Five-point scale from 'Poor' to 'Excellent', headed 'Training'. The 'Excellent' box is ticked.

These questionnaires and their online equivalents offer a simple way of recording trainees’ feelings and are widely used by trainers and learning designers. However, post-course questionnaires do not correlate well with the efficacy of the learning. Positive comments and high scores are satisfying to receive, but they are not a reliable indicator of lasting learning.

Trainees may unwittingly offer suggestions on post-course evaluations which, if adopted, would make the training quicker but less effective. In Step 1.10 we described how a group of physicians showed no significant retention of learning from an online module after 55 days. The report authors noted that these results were poorer than those of previous similar studies. This may have been because they had 'streamlined' the learning materials following user feedback; in doing so they may have removed learning activities which would have enhanced long-term retention.

Trainers may need to be wary of learner suggestions that parts of the training could be shortened. Learners can base such suggestions on an illusion of learning, simply because they understood the material at the time. For example, they may have understood a concept when the trainer explained it and felt the ensuing group work to be superfluous. But the group discussion may have been included for the explicit purpose of encouraging further processing.

Level 2. ‘Shows How’

In this level, some form of assessment of learning takes place. If the assessment matches the outcome(s) as described in Step 3.7 and Step 3.8 it shows the learner has achieved the outcome set for the training session.

A successful assessment undertaken at the end of learning does not guarantee it will also be transferred to practice. As mentioned in Step 4.4, massed learning produces a slightly better result than the more effective interleaved alternatives if assessments are carried out at the end of training. For this reason, evaluation at this level may not indicate use of the optimum learning method.

Level 3. ‘Does’

In Step 1.5 we explored the difference between ‘being taught’ and ‘being told’ and we gave the example of cabin crew demonstrating how to find and put on a lifejacket.

In January 2009, the passengers on US Airways Flight 1549 received just such a demonstration. Less than 10 minutes later, the aircraft crash-landed on the Hudson River. The photograph below shows passengers who have evacuated onto the aircraft wings. Most are not wearing lifejackets.

Image shows aircraft which crash-landed on the Hudson river in New York. Passengers can be seen standing on the wings without lifejackets

This stark image indicated that the ‘Does’ element of that day’s lifejacket demonstration did not materialise.

In Step 2.3 we mentioned how 'Does' following attendance at an Infection Prevention and Control session was established through audits and records of HAIs.

Will Thalheimer (whose evaluation model we recommend below) cautions against accepting individuals' evaluations of their own transfer to practice. He suggests a subjective view may be unreliable compared with objective evaluations such as audits and interviews with managers.

Level 4. Impact on Organisation and Recipients of Healthcare

This level may be beyond the reach of someone new to healthcare training, but it is the ultimate aim of all training. In the evaluation model offered below, this level is divided into two.

The Learning Transfer Evaluation Model (LTEM)

The Learning Transfer Evaluation Model (LTEM) represents an evaluation method which we believe is well suited to healthcare. LTEM is too detailed to explore fully in this step, so we have included it in Downloads below. We recommend that participants download the document and consider its use within their organisation.

Talking Point

Consider sharing your answers to these questions in Comments below:

  • What form of evaluation do you carry out for your own training?
  • How might you undertake a higher level evaluation for your topic?

 

This article is from the free online

Train the Healthcare Trainer

Created by
FutureLearn - Learning For Life
