Why is it “out of this world”? Because I recently used it to evaluate training at the National Aeronautics and Space Administration (NASA). But before we go to a galaxy far, far away, let me give you a little down-to-earth context.
I recently contracted to do training measurement work for the federal government. This meant assessing the impact of training at organizations like the United States Department of the Interior (DOI), the Food and Drug Administration (FDA) and NASA. As this work began, and we started to scope the potential impact of the training (what we call the “story of impact”), the critical questions that kept arising were “Can we apply a 6-level model of training evaluation?” and, if so, “Up to what level of impact can we measure?”
The reason why this became a sticking point was that most organizations that measure the higher levels of impact (like Levels 4 and 5) are for-profit businesses using Level 4 to look for “business impact,” which typically means an increase in revenue or cost savings for the company. This revenue or cost saving is then compared to the overall cost of training and a return on investment at Level 5 can be calculated. This focus on business impact and ROI clearly wasn’t going to work for agencies that don’t define success in terms of profit and loss.
So how would we apply our 6-level approach and customize it for this government training? The general consensus was that we could certainly tell our story of impact up to Level 3 (employee behavior changes and on-the-job improvements that are “mission critical”), but when we got to Level 4, we had to push our thinking away from how these behaviors impacted the “business” to how they impacted the “health of the organization.”
In other words, what organizational outcomes can we expect to improve by having all of an agency’s employees go through training? It may sound like a semantic difference, but until we teased out this distinction between organizational and business outcomes, the stakeholders were not convinced we were shooting for the right story of impact. Here’s how we defined each level of impact and how it differed from a purely business ROI approach.
The Levels of Evaluation
Level 1 measures participants’ experience in the training. Were they engaged with the design, the content and the way it was delivered? This level of data works for both business and government, but one distinction here was that our government stakeholders were especially tuned into how relevant the training was to the specific role of participants and how likely participants were to recommend it to peers in the same position. This was important because it immediately indicated (right after the training) whether the content was on target or had to be revised.
Level 2 measures participants’ new knowledge and skills that were acquired during the training. Are they leaving with new learning and capabilities that will help them do their jobs better? This may also include an attitude change or a new way of looking at a problem. While we typically conduct these assessments with business clients by asking a few simple questions about the new knowledge they gained, it was especially important for these government agencies to understand the real difference between where employees were before and after the training.
To accomplish this without a rigorous pre- and post-test at Time 1 and Time 2 for all training sessions, we used a retrospective scale that carved out each unique knowledge topic and then asked participants, within the same question, to rate where their level of knowledge was before and after the training. Thus, the difference or percent increase in knowledge for each session could be quantified using only a post-training assessment.
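The retrospective-scale arithmetic above can be sketched in a few lines. This is a minimal illustration, not the authors’ actual instrument: the 1-to-5 rating scale and all the response numbers are assumptions made up for the example.

```python
# Sketch of a retrospective pre/post knowledge-gain calculation.
# Each response is a (before, after) pair of self-ratings for one
# knowledge topic, collected in a single post-training question.
# The 1-5 scale and all figures are illustrative assumptions.

def knowledge_gain(before: float, after: float) -> float:
    """Percent increase in self-rated knowledge from before to after."""
    return (after - before) / before * 100

def average_gain(responses: list[tuple[float, float]]) -> float:
    """Mean percent knowledge gain across all participants for one topic."""
    gains = [knowledge_gain(before, after) for before, after in responses]
    return sum(gains) / len(gains)

# Four hypothetical participants rating one topic before and after.
responses = [(2, 4), (3, 4), (1, 3), (2, 3)]
print(round(average_gain(responses), 1))  # 95.8
```

Because both ratings come from the same post-training question, the percent increase for each session can be reported without a separate Time 1 pre-test.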
Level 3 measures whether participants are returning to their everyday jobs and actually applying what they learned in training. Level 3 is probably the most critical measure because if behavior change doesn’t happen, then all the training impact stops there. Conversely, if you can demonstrate that your training improved the way employees do something, these improvements can then be correlated to the critical metrics that matter most to the organization. And as important as it is to a business impact story, it’s even more critical to a government impact story because these “mission-critical” behaviors have historically defined the success or failure of their training initiatives.
To measure this level for NASA and the DOI, we asked questions about improvement in crucial behaviors 60-90 days post-training and asked raters to use a scale from “no improvement” to “exceptional improvement.” We were then able to report the percentage of participants, and other rater groups, that observed each level of improvement back on the job. This level of measurement is crucial because it shows that the learning at Level 2 was transferred and applied on the job at Level 3.
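Reporting the share of raters who observed each level of improvement is a simple tally. A sketch follows; note that the source names only the scale endpoints (“no improvement” and “exceptional improvement”), so the intermediate labels here are assumed for illustration, as are the ratings themselves.

```python
from collections import Counter

# Assumed labels: only the endpoints appear in the source article;
# the middle three are hypothetical placeholders.
SCALE = [
    "no improvement",
    "slight improvement",
    "moderate improvement",
    "significant improvement",
    "exceptional improvement",
]

def improvement_distribution(ratings: list[str]) -> dict[str, float]:
    """Percent of raters who selected each level of observed improvement."""
    counts = Counter(ratings)
    total = len(ratings)
    return {level: 100 * counts.get(level, 0) / total for level in SCALE}

# Ten hypothetical ratings collected 60-90 days post-training.
ratings = (["moderate improvement"] * 5
           + ["significant improvement"] * 3
           + ["no improvement"] * 2)
dist = improvement_distribution(ratings)
print(dist["moderate improvement"])  # 50.0
```

The same tally can be run separately for each rater group (participants, managers, peers) to compare perspectives on the observed behavior change.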
Level 4 typically measures whether these behavioral improvements actually lead to better business metrics and higher performance. Here is where we had to make a significant divergence from business impact and focus on organizational efficiencies that could increase organizational health by saving time, resources and/or budget. To do this, we had to examine the overarching objectives, mission and vision of the entire agency, and identify how the behaviors taught in training specifically fed into those larger objectives.
For instance, instead of asking about how satisfied paying customers were with a business service, we asked questions about how the DOI was able to make quicker decisions and resolve federal land disputes, and how NASA was able to create internal efficiencies where participants could have more productive and inclusive hybrid meetings, spend less time in unproductive meetings and thus spend more time sending people into the stratosphere.
Level 5 measures the extent to which the benefits of a particular training experience outweigh the costs of that training experience. The ROI calculation can only be done when the benefits identified in Level 4 have been monetized and turned into dollar values for the organization. Once monetized, they can then be compared to the total costs, which are already monetized.
How could we possibly do this for agencies that aren’t looking for financial ROI, but still want to know whether the training was “worth it”? Here we proposed translating at least some of the efficiencies and outcomes we found at Level 4 into a quantifiable dollar benefit. For example, if issues are being resolved quicker, or employees are having more productive meetings, then we can quantify the “hours saved” into salary dollars saved.
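The “hours saved into salary dollars” translation and the resulting Level 5 ROI comparison can be sketched as below. Every figure is a made-up assumption for illustration (participant count, hours saved, the loaded hourly rate and the program cost are not from the article).

```python
# Sketch of monetizing Level 4 efficiencies into a Level 5 ROI figure.
# All numbers are hypothetical, chosen only to show the arithmetic.

def monetized_benefit(hours_saved: float, hourly_rate: float) -> float:
    """Translate time efficiencies into salary dollars saved."""
    return hours_saved * hourly_rate

def roi_percent(benefit: float, cost: float) -> float:
    """Level 5 ROI: net monetized benefit over total training cost, as a percent."""
    return (benefit - cost) / cost * 100

# Assume 500 participants each save 2 hours per month for a year,
# at an assumed $50/hour fully loaded salary rate.
benefit = monetized_benefit(500 * 2 * 12, 50.0)  # $600,000 in salary dollars
cost = 400_000.0                                 # assumed total program cost
print(roi_percent(benefit, cost))                # 50.0
```

Even when an agency has no revenue line, this framing lets stakeholders see whether the training was “worth it” in terms they can defend in a budget conversation.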
Level 6 is an evaluation that simply measures your learning climate and tells you which environmental factors are most affecting the impact of your training. That is, what are the environmental factors (things that are happening in the employee’s immediate work environment) that are either supporting or sabotaging the impact of training back on the job?
We call these Level 6 factors “ROI Maximizers” (for business) or “Transfer Climate” factors (for other organizations) because they tell us exactly which climate factors are most powerful when employees attempt to transfer what they learned in training back to their everyday jobs. I see it as a critical level of evaluation because this additional analysis can ultimately tell you how to improve the impact and ROI of any training program in the future.
For instance, for the DOI, we measured how immediate manager support can influence the effects of training and found that high manager support can more than double the impact.
Measuring the impact of training is no longer a nice-to-have. It’s an absolute learning and development imperative everywhere training is delivered. Whether you’re a business looking for a profitable return on investment, or a government agency looking to increase the health of the organization, you must measure the impact of your training. And whether you’re training employees in this world, or out of this world, stakeholders will still be asking the same question: “Did the training work?”
By pushing your evaluation strategy to higher levels of impact and using a simple 6-level approach, you will not only be able to tell your stakeholders whether training is “working” or not, but also tell them how to improve that impact. By simply defining how training leads to new learning, how learning leads to mission-critical behavior change and how those behaviors then lead to positive organizational outcomes, your measurement results will tell the entire story of impact. That story describes in critical detail the employee’s journey from a training event to their organization’s ROI, and it allows you to identify all the factors along the way that can either help or hinder them from reaching their performance destination.
Simply put, measuring the impact of your training will maximize the impact of your training. So, if that training is for government employees who make our world a better place, then that’s reason enough to measure impact. And if that training is for employees who shoot asteroids out of the sky before they reach our world, then that’s an even better reason to measure impact.