Kirkpatrick’s model has become a standard method for evaluating the success of training programs in any organization. Most learning and development professionals are familiar with its four levels of evaluation: reaction, learning, behavior and results (often expressed as ROI). But perfectly drawn infographics and instructional design theory don’t always translate neatly to real-life situations and challenges – or to your LMS. Here are three common challenges and how to approach them.

Challenge One: Choosing Which Levels to Work With

Kirkpatrick’s Model is based on four levels:

  1. Reaction: Does the learner like the training?
  2. Learning: How effectively does the learner acquire the intended knowledge and skills?
  3. Behavior: Does the training actually change how the learner works going forward?
  4. Results: What business impact, or ROI, does the training produce?

Not every project requires all four levels of evaluation. Sexual harassment training, for example, is often mandatory regardless of ROI, so you may be able to disregard that level. Compliance training is usually focused on making sure learners retain the information and apply it at work, so your focus there may fall mostly on levels two and three.

Challenge Two: Picking the Right Metrics

Every level of Kirkpatrick’s model involves some form of metrics. The “reaction” level measures emotional response. The “learning” level measures knowledge retention. The “behavior” level measures how new knowledge transfers to everyday work habits. And of course, the fourth level measures business results, or ROI. Across these four levels, any given project may have hundreds of possible metrics, and it can be overwhelming to determine which ones are actually useful to evaluate and will help you draw accurate conclusions.

Consider the following factors when selecting metrics to make sure they will give you deep insights:

  • Ask yourself if stakeholders will care about this metric. Metrics should drive business decisions and determine the value of programs for the people who care about them (or more importantly, who fund them).
  • Decide if the metric is relevant. Some performance indicators might be outside the scope of the project. For example, improving customer satisfaction might not occur as a direct result of distributing booklets on recycling protocols, but you may see a reduction in waste.
  • Determine how you can connect the metric to key performance indicators (KPIs). For example, the number of user interactions alone probably cannot indicate engagement, so you may need several metrics together to demonstrate success.
  • Ensure that the data source will stick around. To evaluate a metric over time, the data source must remain stable for the life of the project, whether that’s two months or several years.
  • Make sure that you have enough data to draw conclusions. Data volume is important for data integrity – after all, the more data you have for a metric, the more likely it is to produce accurate insights. Also consider the proportion of data: one hundred data points on employee engagement might seem like a lot, but it isn’t if you have 1,000 employees (see the sketch after this list).
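
If it helps to put a number on that last point, here is a minimal Python sketch using the standard margin-of-error formula for a sample proportion, with a finite-population correction. All figures are hypothetical; the takeaway is that 100 responses from 1,000 employees still leaves roughly a nine-point margin of error, which is why proportion matters as much as raw volume.

```python
import math

def margin_of_error(sample_size: int, population: int,
                    p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a survey proportion,
    assuming a simple random sample from a finite population."""
    # Standard error of a proportion (p = 0.5 is the worst case)
    se = math.sqrt(p * (1 - p) / sample_size)
    # Finite-population correction: shrinks the error when the
    # sample covers a large share of the population
    fpc = math.sqrt((population - sample_size) / (population - 1))
    return z * se * fpc

# Hypothetical: 100 engagement survey responses from 1,000 employees
moe = margin_of_error(sample_size=100, population=1000)
print(f"Margin of error: +/-{moe:.1%}")  # roughly +/-9.3%
```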

Have a plan in place for how you will use the evaluation to make business decisions down the road. Often, learning and development professionals rush into a project without stepping back to think about their end goals. Having good data available without a plan to use it wastes the time and resources spent gathering it. Make these decisions in the beginning phases of each project.

Challenge Three: The Evaluation

Evaluation, whether you use Kirkpatrick’s model or not, carries both time and process costs, depending on which metrics you use, how you measure them, and which tools you have available to streamline the effort.

Sometimes, conducting an evaluation will cost more than it’s worth. For example, you can use a post-training survey to evaluate a short program that raises employee awareness of company policies through weekly email blasts. The blasts and the survey are easy to send and relatively cheap, and with the right LMS you can automate them. Afterward, you can focus on metrics like emails opened, links clicked and survey responses, as in the sketch below. Scheduling and conducting in-person employee interviews to gauge the program’s impact would be far more time-consuming and would pull employees away from their jobs, while yielding roughly the same results.
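
The calculations behind those email and survey metrics are simple arithmetic on counts your LMS or email platform can export. The counts in this sketch are hypothetical:

```python
# Hypothetical counts exported from an LMS or email platform
emails_sent = 500
emails_opened = 320
links_clicked = 180
surveys_completed = 95

open_rate = emails_opened / emails_sent          # share of recipients who opened
click_through = links_clicked / emails_opened    # clicks among those who opened
response_rate = surveys_completed / emails_sent  # survey completions per email sent

print(f"Open rate:     {open_rate:.0%}")      # 64%
print(f"Click-through: {click_through:.0%}")  # 56%
print(f"Response rate: {response_rate:.0%}")  # 19%
```

Tracking these rates week over week, rather than as one-off totals, is what turns them into a level-one and level-two evaluation rather than a vanity report.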

Calculating ROI, on the other hand, can be complex, especially when you need to assign dollar values to concepts like improved engagement or customer satisfaction. If you can, work with someone who has analytics experience during the planning process. Try the accounting department or the marketing analytics team to find someone inside your company who already calculates ROI, or hire a consultant from outside your organization.
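
Once the dollar values are agreed on, the arithmetic itself is the standard training ROI formula: net program benefits divided by program costs, expressed as a percentage. The hard part is the estimation that feeds it; the figures in this sketch are hypothetical.

```python
def training_roi(total_benefits: float, total_costs: float) -> float:
    """Standard training ROI formula: net benefits over costs, as a percentage."""
    return (total_benefits - total_costs) / total_costs * 100

# Hypothetical example: estimated annualized dollar value of benefits
# (e.g., reduced waste, fewer support escalations) vs. program costs
benefits = 120_000.0  # estimated value produced by the program
costs = 75_000.0      # development, delivery, and employee time
print(f"ROI: {training_roi(benefits, costs):.0f}%")  # 60%
```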

Overcoming Real-World Challenges Means Real-World Results

If you can bring your training program out of the world of theory and into the realities of what data and evaluation can offer, you’ll be able to evaluate your L&D project successfully, draw conclusions that help drive business decisions, and implement effective training programs.