What is Wrong with Training Evaluation?
3rd November 2014 | Richard Griffin
Author Richard Griffin throws light on the issues surrounding training evaluation and how to overcome them.
There are lots of reasons why we should evaluate training: accountability, control and demonstrating return on investment to name just three. And there are lots of reasons why we don't: lack of expertise, lack of time, lack of tools to do the job and a feeling that it just is not worth the effort.
Unfortunately, the latter almost always wins out over the former. Probably fewer than 5% of businesses seek to evaluate the impact of their investment in training (and for the UK that is currently a massive £45 billion a year).
Evaluation, though, should be an integral part of the training cycle: one that starts with identifying a need and ends with assessing whether, over time, learning has made a difference or not.
The learning and development field has made huge strides in recent years, not least in its embracing of virtual learning technologies. Organisational spending on training and development continues to rise, yet evaluation practice is, well, frankly in a bit of a rut.
Why is this?
Fundamentally I think there are two related problems.
First off, many of the insights we have gained into how and why adults learn, and whether they transfer that learning into their jobs, have not filtered through to where it really matters - practitioners. Evaluation thinking remains rooted in an outdated view of learning. While we do not train people on the premise that they passively acquire learning, we evaluate them as if that's exactly how they learn.
What do I mean by this? By far the most common means of evaluating training remains the handing out of a 'Happy Sheet' reaction survey at the end of an instruction session. The problem? Asking people how satisfied they are with training will not tell you whether they have acquired learning relevant to their job and transferred that learning into improved job performance. Did you know research shows there is a 'euphoria' effect at the end of training? High satisfaction might only measure the fact that people are happy they have completed the training!*
The second problem is that the tools available to evaluate are not up to the job. While academics have been very vocal in their critique of the approaches used by practitioners, they have done next to nothing to provide user-friendly alternatives. The result? Practitioners and academics, as one commentator has pointed out, essentially ignore each other. This isn't good enough.
Complete Training Evaluation seeks to address these problems head on by providing information and guidance that is easy for busy practitioners to use but also based on sound research. The book argues that we need a step change in our thinking about evaluation, one that is based on the reality of workplace learning.
Here are five steps to impactful evaluation based on the book’s insights:
Step 1: Plan
Evaluation should never be an afterthought. Advance planning is essential for effective evaluation.
Step 2: Stakeholders matter
You need to think about who the evaluation is for, and do not assume there is consensus on what success looks like. Ask stakeholders what they think the training should deliver.
Step 3: Start at the end
Think about the sort of data stakeholders will want to see and how you will present it. This will help shape your thinking about what information to collect.
Step 4: Be creative
There is nothing wrong with a survey, but there are lots of other ways impact information can be gathered. Consider using something different, like pictures.
Step 5: Maximise the reach of your evaluation results
Think training awards. Think articles. Think press releases.
* That is not to say that gathering satisfaction reactions does not have its uses. Reactions provide useful training design feedback and can improve employee relations as staff value organisations asking their opinions. Also, of course, happy trainees are a great advert for L&D!