Why Everyone Talks About Results Measurement

Results measurement (in particular measuring business impact, or level 4 of the Kirkpatrick Model) is the holy grail of any industry — including training. Thirty years ago, marketers dreaded return on investment (ROI) conversations. With the advent of digital marketing (and the corresponding cascade of data), marketers can easily show their cost per marketing qualified lead (CPMQL) and justify a larger budget.

This type of transformation is one that all training professionals desire. Despite the challenges, training and enablement professionals of every variety continue to proclaim their dedication to results measurement. Having concrete business results to show leaders following a training initiative would make it easier to ask for more investment and headcount. Yet for all the discussion of results measurement, relatively few training initiatives (even in sales training) prove their business impact.

Why Results Measurement Happens So Rarely

Go to any training conference or peruse any vendor website, and you’ll find that everyone promises three easy steps to measure impact — yet it so rarely happens. Why?

The Analysis Itself Is Actually Hard

First, you need access to the key business data. Then you must design your experiment in advance, control for confounding variables and run regression analyses to confirm that the training itself drove the change. It's not all that surprising that the average training organization lacks that capability in house.
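To make the idea of isolating the training effect concrete, here is a minimal sketch of a difference-in-differences comparison, one common way to strip a market-wide trend (a confounding variable) out of a before/after measurement. All group names and win-rate numbers are invented for illustration.

```python
# Hypothetical example: estimating training impact with a
# difference-in-differences comparison. All numbers are illustrative.

def mean(xs):
    return sum(xs) / len(xs)

def diff_in_diff(trained_pre, trained_post, control_pre, control_post):
    """Change in the trained group minus change in the control group.

    Subtracting the control group's change removes market-wide trends
    that would otherwise be confounded with the training effect.
    """
    return (mean(trained_post) - mean(trained_pre)) - (
        mean(control_post) - mean(control_pre)
    )

# Quarterly win rates (%) for reps, before and after the program.
trained_pre = [20, 22, 19, 21]
trained_post = [27, 28, 25, 28]
control_pre = [21, 20, 22, 19]
control_post = [23, 22, 24, 21]

impact = diff_in_diff(trained_pre, trained_post, control_pre, control_post)
print(f"Estimated training effect: {impact:.1f} percentage points")  # 4.5
```

A naive before/after comparison of the trained group alone would report a 6.5-point lift; subtracting the control group's 2.0-point drift attributes only 4.5 points to the training itself.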

Most People Don’t Want to Measure Results

Rather, most vendors and internal training professionals are incentivized not to measure results. Under the status quo, finding a null result (no or negative change) would be problematic for a major training initiative, but it would likely happen more frequently than we'd like to believe. It's much more comfortable to obtain good net promoter scores and make sure your stakeholders are happy than to tell them that the result of a $500,000 training program was nada.

What You Can Do About It

Make the Analysis Easier

Pick data you already have access to and metrics that you already measure. Create natural experiments so that you have control groups without excluding large cohorts from training altogether. Bring in strategy, finance or operations analysts to help; they'll be keen to see proof of, or an improvement in, ROI. In the longer term, you'll want to use data to target specific metrics with your interventions and then watch those metrics move.
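One way to get a natural experiment without excluding anyone is a staggered rollout: regions trained in a later wave serve as the control group for the earlier wave. The sketch below illustrates that idea; the regions, quarters and win-rate figures are all invented.

```python
# Hypothetical sketch of a phased rollout as a natural experiment.
# Regions trained in wave 2 act as a control group during Q1,
# so no cohort is permanently left out of the training.

ROLLOUT = {"north": "Q1", "east": "Q1", "south": "Q2", "west": "Q2"}

# Win rate (%) by region in Q1, after wave 1 trained but before wave 2.
Q1_WIN_RATE = {"north": 26, "east": 28, "south": 21, "west": 23}

def treated_vs_control_gap(rollout, metric, quarter="Q1"):
    """Average metric in already-trained regions minus not-yet-trained ones."""
    treated = [metric[r] for r, q in rollout.items() if q == quarter]
    control = [metric[r] for r, q in rollout.items() if q != quarter]
    return sum(treated) / len(treated) - sum(control) / len(control)

gap = treated_vs_control_gap(ROLLOUT, Q1_WIN_RATE)
print(f"Trained regions outperform by {gap:.1f} points")  # 5.0
```

In practice you would still check that the waves are comparable before the rollout (similar baseline performance, territory mix), since a non-random wave assignment can reintroduce the confounds the control group was meant to remove.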

Change the Culture and Incentives

When making the business case to leadership, present a public hypothesis on business impact, and pre-commit to measuring it. Create a minimally viable training program, and test your hypothesis before scaling. In exchange for doing more experiments, gain leadership buy-in to accepting a null result. In the longer term, you’ll want to change the organizational incentives to reward experimenting and measuring results rather than leading a big training initiative that leaders and participants liked.

Solving the results measurement problem would transform how we build capabilities and drive performance within our organizations. But until we do the hard work of understanding how to design experiments and change the incentives within our organizations, we’re doing ourselves and our colleagues a disservice. Training will undergo a shift similar to the one marketing saw over the past few decades. We can either wait for it to happen to us, or we can make it happen.
