Most writing on training evaluation focuses on the best ways to measure training impact and to show how the training function influences the business. Of course, the objective of training evaluation is not only to deliver data reports but to use that data to improve training delivery and impact. The path from having results to acting on them holds an underestimated challenge, perhaps even more crucial and tricky than training measurement itself: How can L&D teams really use evaluation results to systematically improve their training? Four steps can maximize the usefulness of training evaluation, each addressing elements that must be considered to achieve this end.

Step 1: Asking the Right Questions

The approach of “we’ll look at the data once it’s gathered and we’ll surely find something to improve” is the perfect way for evaluation results to sit and gather dust. There may be interesting, unsuspected results that deserve attention, but the point is not to rely entirely on this kind of post-hoc exploratory analysis. A balanced analysis should include predefined targets that, when met (or unmet), trigger real decisions and interventions. This is one of the reasons why, when starting a measurement project, you should start with your stakeholders’ needs, whether they’re inside or outside the L&D team.

The first question to ask is what the L&D team wants to know about the training. To frame the discussion, a second question could be what they intend to do based on the results. Paradoxically, what they want to know sometimes won’t help much with what they want to do; this is the distinction between interesting and useful information. Asking these types of questions ensures that you cover stakeholders’ needs more exhaustively, avoid missing an important aspect and, by concentrating your efforts on useful information, prioritize and narrow your metrics framework.

At the end of this step, you should have a list clearly specifying all the decisions that will be made based on your evaluation results, whether those decisions imply no change or a total revamp of the training. You should also clearly state which questions will and won’t be answered. This prevents stakeholders from later stretching or twisting the results to serve other purposes.

Step 2: Determining What to Measure

Identify which metrics are relevant and necessary to support decision-making. When confronted with all the possible training measures, it can be hard to determine what is useful to know, realistic to obtain and likely to earn buy-in from your stakeholders. For example, training participation rates may indeed be relevant, but they don’t reveal much about training impact or ways to improve it. It is also now widely accepted that training measurement should go beyond trainee satisfaction, given the weak correlation between satisfaction and training transfer.

Where should you begin when deciding what to measure? One convenient way to categorize your metrics is to use the efficiency, effectiveness and impact categories. Efficiency metrics are operational measures (e.g., participation rate, cost per trainee, hours of training). Effectiveness refers to the changes in attitude, learning or behavior that result from the training. Impact refers to the influence of those changes on business results and strategic objectives.

In contrast with Kirkpatrick’s traditional four-level model, which assumes impact is the ultimate metric destination, all three categories of measurement can be useful depending on your stakeholders’ needs and plans. For each metric category, set distinct objectives and targets, then identify a means of collecting the data.

You may find it practical to separate the intended metric from the means of collecting it. For example, if the metric of interest is performance improvement, the means of collecting this information could differ greatly: interviewing or surveying the trainees or their managers, or collecting business data. Thus, for the same intended metric, different methodological choices can be made based on resources, availability of data and the need for more robust results. For impact metrics, which are often harder to grasp, it is recommended to use a multi-source method that gives you a more complete assessment and more confidence in the results.

Step 3: Being a Matchmaker

Determine the best way to match metrics to metrics users. It is often assumed that interest in metrics is homogeneous among stakeholders, but it depends on their responsibilities in the training process. For example, a trainer might not be interested in the same metrics as a training designer. Trainees’ evaluation of trainer clarity could be valuable for trainers and ignite their motivation to improve, but it might be too narrow a metric for the head of HR.

How do you find out which metrics each stakeholder needs? First, map all your stakeholders, external and internal, who would benefit from training measurement results. Then, structured interviews with each stakeholder, conducted before you deploy your measurement strategy, can be sufficient to identify the most relevant metrics for them. Knowing which metrics you must deliver to each stakeholder makes your measurement more precise, more valuable and less overwhelming for everyone involved.

Step 4: Delivering the Results at the Right Time

This is the moment when the results are analyzed and the conclusions reported. To gain the momentum and exposure that evaluation results deserve, L&D professionals need to present them at a strategic time: not too soon, not too late, just in time. Stakeholders interested in the training outcomes won’t want to wait until some unknown moment; they will probably find another way of guessing the results (such as making one or two phone calls to trainees’ managers). The consequences are that your measurement effort will have been in vain and their conclusions about the training might be wrong, resulting in decisions made on unreliable grounds.

Note, however, that if your conclusions are partial and subject to change as more data is gathered, you should refrain from announcing them until you’re confident in them. To get the timing right, it helps to pre-establish the timeframe for delivering results and the conditions under which each metric will be reported (e.g., reaching a minimum response rate). You and your stakeholders may want to negotiate this timeframe to find the right balance between ideal timing and what is possible and trustworthy in terms of results.

Summary

Derive a story from your data. Place the results in context and orient them toward decision-making. Once decisions are made to improve the training, the four steps proposed here can go on to drive further improvements. Keep in mind that interest in training impact must not overshadow the impact of measurement itself: training evaluation should be approached within its organizational system to deliver real value and usefulness. Putting these four steps in place before training deployment will maximize the chance that your measurement effort makes a valuable contribution.