Measurement is a dirty word in learning and development (L&D). While most organizations use some form of it for employee training, the perception of measurement in L&D is not unanimously positive.

Although it’s not completely overlooked, the importance of measurement is not equally recognized across organizations. A recent sample of learning leaders shows that only 2% feel measurement isn’t critical to their business, while 54% consider it extremely important. In other words, only slightly more than half of organizations view quantifying learning as essential, while the remaining 44% assign it just moderate importance.

This common scenario presents a unique opportunity. Since measurement in L&D is not universally embraced, there’s untapped potential for companies to leverage employee learning data. By shifting the narrative around training measurement, learning leaders can quantify the impact of their training programs, identify areas for improvement and make data-driven decisions. However, we recognize this is easier said than done.

While L&D measurement may benefit from some repair work, it simultaneously offers a golden opportunity for forward-thinking organizations. By championing measurement, leaders can set new industry standards and shift toward truly data-driven L&D practices. This approach can transform learning programs from mandatory requirements that organizations passively comply with into mission-critical tools that drive employee upskilling and process improvement. Downstream, organizations can enhance their reputation, attract top talent, and ultimately strengthen their bottom line. Measurement, once seen as a dirty word, could become the key to L&D success.

What Organizations Are Doing Now

So, let’s start by looking at how organizations currently quantify the impact of their learning programs. Figure 1 highlights the varying importance placed on different evaluation methods within organizations. It’s interesting to note that while most organizations rely on feedback from learners (55%), a significant portion does not measure learning impact at all (17%). This flags an immediate area for improvement in how organizations track their learning programs (i.e., “start doing it”). But it doesn’t let everyone else off the hook. If organizations only gauge training impact through learner feedback, there are a lot of pieces missing from the puzzle.

Similarly, we found that organizations typically use about two approaches to training evaluation. So, the picture the data paints is that many training programs collect something from learners and compare it with enterprise-level results (e.g., business impact, return on investment, etc.).

Figure 1. Current Approaches to Measuring Learning Impact (2024, N=368)

Only about a quarter of organizations use the classic Kirkpatrick model of L&D metrics, which most in our field herald as the “gold standard” for measurement strategy in training. But where does it come from, and if it’s such a bedrock, why aren’t more organizations using it?

The origin of the Kirkpatrick model, with its four levels of training evaluation, can be traced back to a series of articles published by Dr. Donald Kirkpatrick in the Journal of ASTD from late 1959 into early 1960. It was a simple and elegant set of suggestions for evaluating multiple aspects of a training program. But that’s all it was ever intended to be: a set of suggestions. And like a proverbial snowball rolling down a hill, the influence of Kirkpatrick’s suggestions became more widespread over time until they were essentially codified in the language of training professionals.

However, the original scope of this model has been embellished far beyond its intended use. “In the November 1959 article,” Dr. Kirkpatrick wrote in 1996, “I used the term ‘four steps.’ But someone, I don’t know who, referred to the steps as ‘levels.’ The next thing I knew, articles and books were referring to the four levels as the Kirkpatrick model.” The nomenclature isn’t as important as the model’s utility, as reflected in Dr. Kirkpatrick’s comment, “I don’t care whether it is a model or a taxonomy as long as professionals find it useful in evaluating programs.” And if it is useful, it is useful. But we should not be lulled into thinking that training outcomes are simple phenomena that can be boiled down into a handful of conceptual levels, or that other evaluative criteria can be dismissed out of hand.

So, why aren’t more than 24% of organizations using this model? We suspect it’s because stakeholders don’t “speak” Kirkpatrick; the model is not a universal language. Organizational stakeholders, including executives, managers and employees, often have different priorities and may not be familiar with the model’s “levels” or see its relevance to their specific concerns. Additionally, while the model’s simplicity is a strength, it may not fully capture the complexity of training processes and outcomes, especially in today’s fast-changing, digital-first business environment.

Moreover, the model may prescribe a measurement approach that doesn’t fully optimize resources or may lead learning leaders to spend time on less impactful analysis. Rigidly adhering to the levels can limit the scope of training evaluation efforts while simultaneously misdirecting them.

We’re not suggesting L&D professionals abandon the Kirkpatrick model entirely, but rather that they start thinking beyond it. There is, after all, no rule against complementing Kirkpatrick with other tools and approaches. The challenge has been the lack of a clearly articulated alternative that maintains the interests of the business and enhances training experiences.

As shown in Figure 1, organizations are engaging in some form of training evaluation, but there’s room for improvement. If the current state of training evaluation needs fixing and the Kirkpatrick model is more popular in theory than practice, how can organizations improve their measurement?

The 5-4-3 Model of Training Evaluation

In 2023, the International Organization for Standardization (ISO) released a set of suggested standards for evaluating workplace training. We’ve dubbed the ISO 30437 standards the “5-4-3 Framework” for training evaluation. Based on these standards, the numbers in the 5-4-3 moniker represent:

  • 5 types of consumers/stakeholders of training evaluation data.
  • 4 reasons to evaluate training that shape the form of both the questions and answers.
  • 3 classes of metrics that can be used to provide insights to stakeholders.

Why is the 5-4-3 Framework useful? It’s specifically tailored for employee training in a business context, not higher education or other forms of training. It’s accessible to training professionals, avoiding academic theory and jargon. Instead of replacing the Kirkpatrick model, the 5-4-3 Framework builds on it, contextualizing the four levels in a broader, more relevant way that appreciably expands the applicability of measurement.

Here, we’re going to focus on the “4” of the 5-4-3 Framework, which outlines the reasons why organizations measure employee training. Before we go further, it’s helpful to leave ideas about “levels” of evaluation behind; you don’t need them. Instead, this framework offers cross-sections of evaluation possibilities: five stakeholders with contrasting interests in training, four reasons to measure it and three types of metrics that can answer questions about it. Evaluation success depends on matching stakeholder needs with the right forms of information, making “levels” irrelevant.

The ISO defines four purposes for training metrics, which can be thought of as different ways to frame the desired depth of insights about training (see Figure 2); a brief sketch after the list shows each depth in action.

  1. Inform: Provides straightforward answers, like usage trends or number of offerings.
  2. Monitor: Compares metrics to historical benchmarks to contextualize the data (e.g., How have engagement metrics changed over the last several quarters?).
  3. Evaluate: Encompasses inquiries about efficiency, effectiveness and target outcomes — in other words, program evaluation.
  4. Manage: Uses metrics to drive ongoing improvements in training.
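
To make these four depths concrete, here is a minimal sketch in Python that applies each one to the same dataset. Every number in it (completion counts, assessment scores, the improvement threshold) is invented for illustration and comes from neither the ISO standard nor our survey:

    from statistics import mean

    # Hypothetical learning data for one program.
    completions_by_quarter = {"Q1": 410, "Q2": 455, "Q3": 430, "Q4": 520}
    pre_scores = [62, 70, 58, 66, 74]   # assessments before training
    post_scores = [78, 85, 71, 80, 88]  # assessments after training

    # Inform: a straightforward answer, such as current usage.
    latest_usage = completions_by_quarter["Q4"]

    # Monitor: the same metric contextualized against historical benchmarks.
    history = list(completions_by_quarter.values())[:-1]
    pct_change = (latest_usage - mean(history)) / mean(history) * 100

    # Evaluate: effectiveness against a target outcome (here, score gains).
    avg_gain = mean(post - pre for pre, post in zip(pre_scores, post_scores))

    # Manage: use the metric to drive ongoing improvement, e.g., flag programs
    # falling short of a (hypothetical) 10-point improvement target for review.
    needs_review = avg_gain < 10

    print(f"Inform: {latest_usage} completions last quarter")
    print(f"Monitor: {pct_change:+.1f}% vs. the trailing average")
    print(f"Evaluate: average score gain of {avg_gain:.1f} points")
    print(f"Manage: flagged for program review: {needs_review}")

The point is not the code but the progression: the same underlying data can answer four successively deeper questions about a program.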

While there are four reasons in this framework, they are not levels in the sense that one is better than another; rather, they reflect the depth of specificity different stakeholders need. For example, senior leaders may prefer “monitor” or “evaluate” insights rather than just “inform” details. Similarly, learners might need “inform” and “monitor” information but find “evaluate” or “manage” details less relevant or overwhelming.

The key is to match the stakeholder’s needs with the appropriate depth of measurement and metrics, rather than focusing on any idea of hierarchical levels.
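
In practice, that matching can be pictured as a simple lookup from stakeholder to preferred depths. The sketch below is one hypothetical way to encode it; the senior-leader and learner pairings come from the examples above, while the remaining stakeholder names and pairings are our own assumptions, not part of the ISO standard:

    # Hypothetical mapping of stakeholders to the insight depths they prefer.
    PREFERRED_DEPTHS = {
        "senior leaders": ["monitor", "evaluate"],
        "learners": ["inform", "monitor"],
        "managers": ["evaluate", "manage"],  # assumed pairing for illustration
        "l&d team": ["manage", "evaluate"],  # assumed pairing for illustration
    }

    def insights_for(stakeholder, available):
        """Return only the insights whose depth matches this stakeholder's needs."""
        depths = PREFERRED_DEPTHS.get(stakeholder.lower(), [])
        return [message for depth, message in available.items() if depth in depths]

    # Illustrative insights, one per depth.
    available_insights = {
        "inform": "1,815 course completions this year",
        "monitor": "completions up 12% vs. the trailing average",
        "evaluate": "average assessment gain of 14 points",
        "manage": "two programs flagged for redesign",
    }

    print(insights_for("Senior leaders", available_insights))
    # -> ['completions up 12% vs. the trailing average',
    #     'average assessment gain of 14 points']

A filter like this makes the framework’s central idea mechanical: each audience receives only the depths of insight it actually needs, with no hierarchy among them.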

An Uncomfortable Truth About Evaluation

So, is one depth of insight better than another? It depends on the question. Powerful insights can come from any of the four depths if they align with the evaluation purpose and stakeholder needs. But what are organizations doing?

We surveyed about 350 learning leaders on how they use different insights to inform their training strategy. As shown in Figure 3, organizations most frequently work with insights at the “evaluate” depth (68%) and least often at the “manage” depth (41%). This might suggest that while organizations focus on outcome goals, they may be disregarding other valuable insights.

Figure 3. Types of Insights Used to Make Decisions About Training Strategy (2024, N=351)

The data from Figure 3 also reveals that only 10% of organizations use all four depths of insights to make decisions about strategy, while a combined 65% use one or two. In short, the figure shows how and where L&D needs to improve: nearly 60% of organizations don’t use insights at the “manage” depth, and 90% don’t engage with the full suite of reasons to evaluate their training.
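
For learning leaders who want to produce this kind of breakdown from their own data, here is a minimal sketch of the tabulation; the five responses below are fabricated for illustration and do not reproduce our survey results:

    from collections import Counter

    DEPTHS = ("inform", "monitor", "evaluate", "manage")

    # Each (hypothetical) respondent reports the depths they use.
    responses = [
        {"evaluate"},
        {"inform", "evaluate"},
        {"inform", "monitor", "evaluate", "manage"},
        {"monitor"},
        {"inform", "monitor", "evaluate"},
    ]
    n = len(responses)

    # Share of respondents using each depth (the per-bar numbers in Figure 3).
    per_depth = {d: 100 * sum(d in r for r in responses) / n for d in DEPTHS}

    # Distribution of how many depths each respondent uses.
    counts = Counter(len(r) for r in responses)
    all_four = 100 * counts[4] / n
    one_or_two = 100 * (counts[1] + counts[2]) / n

    print({d: f"{pct:.0f}%" for d, pct in per_depth.items()})
    print(f"all four depths: {all_four:.0f}%; one or two: {one_or_two:.0f}%")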

As we navigate the 21st-century L&D landscape, measurement is a powerful tool for transformation whose value should not be underestimated. The journey of training evaluation is one of opportunity. It offers a gateway for organizations to become industry leaders, drive employee upskilling and process enhancement, and contribute to their bottom lines. It provides a chance to change the narrative about training measurement, converting it from a “dirty word” to a crucial key for business success. And most importantly, it is a vehicle to make a difference in the lives of employees by improving their skills and knowledge and empowering them to achieve their full potential.

We hope the introduction of the 5-4-3 Framework can help learning leaders begin to move beyond traditional levels of evaluation and adopt a more holistic and business-first view of training measurement.