If you are running an L&D function and finding it hard to assign business outcomes to all of the blood, sweat and biscuits your team gives out, this blog post is for you. Being fairly new to an L&D function (my background is in analytics), I have been both amused and alarmed by the ambition of some organizations to reach level four of the widely used Kirkpatrick model.
It is commonly thought that the steps of Kirkpatrick’s model (reaction, learning, behavior and results) are to be achieved sequentially and that they increase in complexity, and usually cost, from level one through level four. This is a barrier for many, who say moving beyond “happy sheets” to measuring change in performance is just too hard.
What many L&D functions do not seem to have grasped is that the world of data, behavioral science and analytics has been racing ahead of them. Yes, we read about and use learning record stores and learning management systems, throw money at xAPI, and introduce online and remote learning, but many of us are still measuring these experiences using only level one of Kirkpatrick’s model: “How did the learner feel about the training?”
Traditional learning evaluation stops at results: “OK, so we showed people how to do things, but have our metrics moved?” Kirkpatrick’s model was expanded in 1994 by Phillips to include a fifth level: return on investment (ROI). The top 3 percent of the world’s leading L&D functions are managing to measure ROI in the new five-level model, but even within this group, results are largely directional. They often attribute performance change to the training itself, ignoring many other influencing factors, such as support on the floor, general buzz, the weather or new tools.
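The arithmetic behind the Phillips level is simple, and writing it out makes the attribution problem obvious: the formula only works if the benefit figure really belongs to the training. Here is a minimal sketch in Python; the dollar figures are illustrative placeholders, not real program data.

```python
# A minimal sketch of the Phillips level-five ROI calculation.
# The figures below are illustrative placeholders, not real program data.

def phillips_roi(benefits: float, costs: float) -> float:
    """ROI (%) = ((program benefits - program costs) / program costs) * 100."""
    return (benefits - costs) / costs * 100

# Example: a program costing $50,000 that we credit with $80,000 of benefit
print(phillips_roi(benefits=80_000, costs=50_000))  # 60.0, i.e. a 60% return
```

The hard part is never the division; it is deciding how much of that $80,000 was really the training, rather than the support on the floor or the new tools.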
Organizations that are struggling to reach level two, three or four of a five-level model may not want to hear that we can now introduce a sixth level, yet that is what I am going to do.
I believe that this five-level model has been overtaken by evolving capabilities in data, and we can now introduce a sixth level: understanding differences in learner populations. Here’s the cool bit: this level can be applied as a lens over any of the five existing levels easily and cheaply, and it will return immediate insights. It will also inform the more difficult levels of the Kirkpatrick-Phillips model.
For many L&D professionals, the idea of understanding differences in L&D outcomes across various populations may seem improbable, difficult or even like science fiction. After all, we seem to struggle just to attain qualitative feedback! The truth, however, is that for most other functions in most businesses, this kind of analysis is bread and butter. We segment our customer base to identify whom to market to and when, and we run A/B tests on online visitors to optimize engagement. This data collection is mostly outward-looking, aimed at customer or supplier populations, but there is no reason we cannot do it for our L&D customers – our learners. I have seen it happening firsthand, and I know the information to do it is available within your organization.
Understanding differences in experience by population is good for our learners, our companies and us as professionals. It is also very achievable: a small upfront investment to train learning professionals in the use of data, plus new connections throughout the business to build the relationships that will open up richer data sets. At its most basic level, results can be achieved quickly in Excel.
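To make that concrete, here is a minimal sketch of the population “lens” in Python with pandas, the scripted equivalent of an Excel pivot table. The file and column names (post_course_survey.csv, feedback_score, site, tenure_band) are assumptions about what a typical survey export might contain.

```python
# A minimal sketch: level-one feedback scores, split by population.
# File and column names are illustrative assumptions, not a real schema.
import pandas as pd

feedback = pd.read_csv("post_course_survey.csv")  # hypothetical survey export

# Average feedback score by site and tenure band: the same pivot you
# would build in Excel, but repeatable for every course you run
by_population = feedback.pivot_table(
    values="feedback_score",
    index="site",
    columns="tenure_band",
    aggfunc="mean",
)
print(by_population)
```

If one site or one tenure band consistently scores lower, you have your first population-level insight before touching levels two through five.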
For level six measurement, we need to be scientific; we need to experiment in a controlled way. Running experiments across training treatments is easy, and measuring outcomes is a cinch if we design them against a control group. The control group may be people with the same pre-training performance, with the same mix of tenures, who do the same work at the same site, from whom we deliberately withhold the training. If the experiment is properly controlled, any difference between the groups in how performance moves from before to after training can be attributed to your learning intervention. You can even pilot multiple formats of training and then model the business return if a particular format were rolled out. Overlaying your population data will then let you see how effective your intervention is for various populations and help you form a hypothesis to test in your next experiment.
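Here is a minimal sketch of that before-and-after comparison in Python, assuming you have pre- and post-training scores for matched trained and control groups. The file and column names are illustrative, and the t-test is just one simple way to check that the lift is not noise.

```python
# A minimal sketch of the controlled comparison described above.
# Assumes a file with columns: group ("trained"/"control"), pre_score, post_score.
import pandas as pd
from scipy import stats

perf = pd.read_csv("experiment_results.csv")  # hypothetical export
perf["change"] = perf["post_score"] - perf["pre_score"]

treated = perf.loc[perf["group"] == "trained", "change"]
control = perf.loc[perf["group"] == "control", "change"]

# Difference in performance movement between the two groups:
# the lift we can attribute to the learning intervention
lift = treated.mean() - control.mean()

# Welch's t-test as a sanity check that the lift is unlikely to be chance
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
print(f"Estimated lift: {lift:.2f} (p = {p_value:.3f})")
```

Overlay your population columns on the same data (a groupby on site or tenure band, for instance) and you can see for whom the intervention worked best, which is exactly the hypothesis-generating loop described above.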
Working in this way will allow your function to become much more creative, foster an internal culture of learning about your learners, demonstrate ROI and constantly improve business results by delivering the right training in the right way to the right learners, based on what you know to be true.