In the first video episode of The Business of Learning, we spoke with Tom Whelan, Ph.D., director of corporate research at Training Industry, about training measurement. Specifically, we discussed the International Organization for Standardization, or ISO’s, latest guidance for L&D metrics, which is further detailed in the Training Industry Research e-book “Learning Measurement for Modern Business: Making Sense of ISO 30437.”
Tune in now for insights on:
- The 5-4-3 Framework.
- The three categories of metrics mentioned in the report — efficiency, effectiveness and outcome metrics.
- What these guidelines mean for the training vendor market moving forward.
Additional Resources:
- E-Book: “Learning Measurement for Modern Business: Making Sense of ISO 30437.”
- Research Catalog: Research – Training Industry, Inc.
- Article: [Industry Coverage] ISO L&D Metrics Standards Provide Long-Awaited Framework for Training Measurement
- Article: Out With the Old — It’s Time to Stop Oversimplifying Training Measurement and Evaluation
Listen Now:
Fill out the form below for a preview of the Training Industry Research e-book “Learning Measurement for Modern Business: Making Sense of ISO 30437.”
The transcript for this episode follows:
Michelle Eggleston Schwartz: Hi, welcome back to The Business of Learning. I’m Michelle Eggleston Schwartz, editor in chief at Training Industry.
Sarah Gallo: And I’m Sarah Gallo, senior editor. Measuring the impact of training has long been a challenge for learning leaders. Training Industry research even found that of all the core training management responsibilities, assessing business and training performance is the one learning professionals are most likely to rate themselves below average on. So today, we’re excited to discuss the International Organization for Standardization’s, or ISO’s, recently released standards for L&D metrics and how they can provide a much-needed framework for training measurement. This topic was also explored in the recently released Training Industry e-book, “Learning Measurement for Modern Business: Making Sense of ISO 30437,” which we’ll link to in the shownotes for this episode. With us today, we have Dr. Tom Whelan, director of corporate research here at Training Industry. Tom, welcome to the podcast.
Tom Whelan: Thank you for having me. It’s a pleasure to be here.
Michelle Eggleston Schwartz: Yes, welcome. I’m super excited to dig into this topic today because, as Sarah mentioned, measuring the impact of training is extremely challenging. As Sarah also mentioned, the ISO recently released guidelines for L&D metrics, which are the first-ever official international standard for L&D metrics. Can you touch on why this is significant, and why it took so long to formalize guidelines around L&D metrics?
Tom Whelan: So I’ll answer the second half of it first. I think the reason it took so long to formalize guidelines around L&D metrics is that, largely, most of us have been working off of the Kirkpatrick Model for evaluation. I go into this in excruciating detail in other spots, so I’ll give the CliffsNotes version here: it was crafted between 1959 and 1960, which is when he first published it. It gained steam, and for a long time it wasn’t really broken, so nobody was trying to fix it. Or at least, for a good number of years, for as much as learning leaders needed to report back to stakeholders, or higher-ups, or whoever they were accountable to, Kirkpatrick allowed them to gather enough ammunition, as it were, to adequately address the questions they were getting from the larger organization. That said, a lot of us in L&D have had a sentiment for a while, certainly pre-COVID: How do we get a seat at the table? We’re doing all these activities, and it doesn’t seem like anybody up the ladder is really all that concerned about it. So how are we showing our value? How are we substantiating what we’re doing here? In the absence of anybody paying attention to you, anything that you can bring to the table is good, and in a lot of those situations, the Kirkpatrick framework afforded a lot of learning leaders just enough to be able to come to the table. Now, at least in my opinion, from everybody I’ve talked to that’s been through business school, some people get introduced to the model, but most people largely don’t know what it is. So there’s been an army of L&D professionals walking around talking about Level 4 ROI and Level 2 learning outcomes, and if they walk into a board meeting and use that vocabulary assuming everybody else knows what they’re talking about, they might, but chances are equally good that they won’t. And so you have this learning professional walking around using jargon that, unless somebody goes and reads a Wikipedia entry, is just going to fall flat. And with COVID, and the shift to doing any training we were doing digitally, either in part or in full, I think we have seen the strategic importance of training grow a lot in the past five years. With that, suddenly there’s been a renewed focus on, “Okay, what are we doing? How good is it? How efficient are we at it? Are we hitting the marks that we need to be able to hit?” And people have increasingly been turning around and realizing that this framework that comes all the way from 1960 isn’t cutting the mustard anymore. It doesn’t account for digital transformation. It doesn’t account for globalization and the scope and scale of what training looks like now. As I hopefully not unfairly point out in the e-book, when the Kirkpatrick Model was started, we didn’t have people going up into space yet. We didn’t have computers around; mobile phones and the internet were science fiction dreams. That evaluation framework came from a world where you were probably doing training in a classroom with an instructor, and everybody was probably smoking cigarettes, or at least the instructor probably was.
It was a totally different time. And so I think over time, as we’ve continued to try to fit this mold of Kirkpatrick around evolving and modernizing training activities, it fits less and less well. What was once a one-size-fits-all hat isn’t really fitting anybody’s head anymore. And this has been reflected in data we collected earlier this year: I think 24% of the organizations we surveyed, out of around 350, said that they’re trying to adhere to the Kirkpatrick Four-Level Model. So three-fourths of organizations are not using this quote-unquote gold standard of training evaluation. Now you might say, “Well, what are they doing instead? Are they doing something else, something bigger and better?” And from what we see in the data, no. They’re doing piecemeal things that aren’t … it’s giving them some information, but it’s not quite addressing the questions of, “Are we doing the right things? Is it working? Are we spending money in the right places?” So that was probably too long. To go back to the first part of the question, “Why is this significant?” Our sentiment is that these ISO standards are coming from a place of business. Kirkpatrick and Phillips and a lot of other people have tried to appreciably improve, or just discuss, what we do for training evaluation, but a lot of times, they’re taking their cues from a more academic approach to learning. And while that’s great, and obviously I’m coming from that world too, so I’m 100 percent behind it, you can’t necessarily talk that language to business professionals. Like, what are the differences between learning and knowledge versus a skill? There are nuances there, so how you might need to go after each one is different. That’s interesting to me, interesting to us in L&D, but does a stakeholder in senior leadership care about any of that stuff? Not that all of them don’t, but on average, they probably don’t, or at least they aren’t going to be interested in the metrics that tell them all about those things. So I think what makes the ISO significant is that it gives learning leaders a way to look at training evaluation and approach it in a way that puts the stakeholder and the audience first. Rather than saying, “I have the structure of levels, so here’s how I organize my answers and communicate them to everybody, and if you don’t get what this is, then catch up, because I’m talking about Level 2 here, and stop calling it other things; it’s Level 2, it’s always been Level 2” … none of that matters anymore. It’s all about “Who’s the stakeholder? What do they want to know?” Or at least, “What’s relevant to them?”
“What’s useful to them? And then, what can you use to answer those particular questions, and to get a much fuller picture of the health and the progress that training is making inside the organization?”
Sarah Gallo: Yeah, and really speaking the language of the business is key. We’ve heard that time and time again through conversations on this podcast. We’ll definitely dive deeper into the nitty-gritty of the guidelines, but before we do, Tom, can you share a little bit more about some of those underlying challenges and touch on why measuring impact is such a notorious challenge?
Tom Whelan: I think historically, some of the challenges that organizations have faced with this have been around, at least in my mind, a couple of things. One is getting access to the data at all. If there is data out there, it might live in an HRIS somewhere, or in the LMS, or part of it might sit in another HR system; and for a lot of effectiveness or performance metrics, you have to go to the department that owns them. Getting the information together that you can then tie to training, to be able to look at things like outcomes or impact, has just never been easy. And even if you can get the data, merging it into whatever you already have is an equally eyeball-bleeding, frustrating task. To anybody who says it shouldn’t be that hard: how much have you played with data sets? They’re evil. They don’t care about your feelings. It’s usually not a fun time. So even getting to where the information lives to be able to look at it has historically been a challenge. For years, one of the choruses we would always hear from learning leaders was, “I know it exists, I just can’t get to it,” or whoever owns it isn’t going to let me look at it, or there’s some business reason why they’re holding it where they are and won’t fulfill the request if somebody in L&D is asking, even if it’s a valid reason. And some of that was because training wasn’t particularly seen as strategically important; maybe that story has changed now, so maybe access is different in a lot of places. But that has been one of the big challenges: the information probably exists, but where is it? Who owns it? Are you friends with that person? Can you get access to it? If it’s something that you’re going to need repeated access to, how do you set that up? How do you forge that rapport and create that relationship? In some ways, measurement is almost just as much about being able to navigate the ladders of politics inside the organization. Say, for instance, you’re running a big sales training initiative: Can you get access to sales performance data on the back end that you can then use to connect? Okay, we did this here in training, and here’s what happened over here in outcomes. If you have no outcomes, suddenly all you can do is say, “People liked the training. The catering was good. We made sure there were good sandwiches,” and so people gave it a thumbs up at the end … you have no idea what the impact is if you can never get your hands on that information. I think access is always one of the biggest challenges. The second one, as I already alluded to, is just, “Can you wrangle it?” I think there are a lot of L&D professionals that, and this is not meant as any shade against them, don’t have the analytics horsepower to take a lot of this data and work with it to get it to do what they need it to do. And that’s not meant to insult L&D people, because what I’m talking about isn’t just, “Oh, can you put some data in Excel and move it around?” It might be way more eye-watering than that, like, “Can you get this SQL database to talk to this one over here?”
And you’ve got to figure out some Python code to pull this in from over here so that you can even get everything in one place and begin to see if you can work with it. There are, I think, a lot of technical hurdles to working with data. If you’re new to it, there is no shallow end of the pool; you jump in and you can’t touch the bottom, so you have to learn to swim pretty quickly. And access is only step one: you have the data, but step two is analyzing it to get the results. So if nobody on the team has the skills, or maybe they do have the skills but don’t have the time or bandwidth, then access might be great, but if you’re not able to actually run the numbers, that’s another problem. And then, and I’m going to try to keep this concise, the third challenge, which I think is more historically accurate than maybe it is now, has been just getting stakeholders to listen. If somebody in L&D has something to say about, “Hey, here’s how we can improve training; here’s how we could use this initiative to drive this part of the business forward,” getting an audience for that has been a big challenge for a lot of people. Like I said, I think that has changed in the past couple of years; it hasn’t totally turned 180 degrees for everybody, but it has been a big challenge. And along those lines, that’s why I feel the ISO standards afford learning leaders not necessarily a different language, but a different lens to look at training evaluation, so that the way they’re talking about it is different and maybe other stakeholders will listen. One of the things that struck me about this framework when the ISO put it out is that it distinguishes stakeholders. That is crucial, and something I’ve never seen in any discussion of Kirkpatrick. If you’re talking about return on investment or transfer outcomes, who cares? I don’t say that rhetorically; not everybody cares. Some people do, but who are those people? Does the learning professional even know? If they’re going to package together all these metrics, who is the consumer of those metrics? If you’re in the C-suite versus being a learner, the head of L&D, a training manager or a line manager, you don’t want the same information about what’s going on with training. The same information isn’t relevant to the person in the C-suite as it is to the employee or the head of L&D. Each of those stakeholders has different data needs; some more intense, some less. They don’t all want the same answers. What you communicate to a learner, the C-suite doesn’t care about. What you communicate to the C-suite, the learners couldn’t care less about; it doesn’t help them or tell them anything about their job. In terms of the challenge of getting anyone to listen to us about training, the ISO goes a long way toward illuminating a pathway. If I’m going to talk to this person, what should I come with? What sorts of things might they want to know? What types of questions should I be asking? What form should those answers take? That disposes of all the Level 1 and Level 2 outcome stuff, because that doesn’t matter. Level 1 or Level 2 outcomes are important or not depending on who you’re talking to and what the content of those insights is, not so much what level they came in at. With greater specificity of the model, the challenge of getting anyone to pay attention starts to erode.
If you’re coming to different parts of the business with answers relevant to those parts, suddenly your case isn’t, “Hey, pay attention to me and my important Level 4 stuff.” It’s less about that and more about, “I’m already talking in your language. Here are some things that you need to know. Ignore them at your own peril,” rather than trying to convince the stakeholder that they should buy into your interpretation, or that data they might initially view as unimportant has relevance. You should meet them right where they already care.
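Tom’s sales training example maps to a common data-wrangling pattern: joining training records that live in the LMS with an outcome owned by another team. Here is a minimal, hypothetical sketch in Python with pandas (the file names, columns and course name are invented for illustration, not tied to any particular LMS or CRM):

```python
import pandas as pd

# Hypothetical export from the LMS: one row per learner per course completion.
lms = pd.read_csv("lms_completions.csv")      # employee_id, course, completed_on, quiz_score
# Hypothetical extract owned by sales ops: quarterly close rates per rep.
sales = pd.read_csv("sales_close_rates.csv")  # employee_id, quarter, close_rate

# Keep only records for the initiative being evaluated.
training = lms[lms["course"] == "negotiation_2024"]

# Join outcomes to training records on the shared employee key.
merged = sales.merge(
    training[["employee_id", "completed_on", "quiz_score"]],
    on="employee_id",
    how="left",
)
merged["trained"] = merged["completed_on"].notna()

# Compare average close rate for trained vs. untrained reps, per quarter.
comparison = (
    merged.groupby(["quarter", "trained"])["close_rate"]
    .mean()
    .unstack("trained")
)
print(comparison)
```

Even this toy version surfaces the point made above: the two files live in different systems with different owners, and the shared key (here, an employee ID) is exactly what a learning leader has to negotiate access to before any analysis, let alone a causal claim, can begin.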
Michelle Eggleston Schwartz: You’ve illustrated the immense challenges associated with measuring the impact of training. We’re at an interesting time where we have access to an overwhelming amount of data, so understanding what to look at, and whether we can access that data, is crucial. As you pointed out, we’re at a turning point where we need direction and guidelines so we can all speak the same language. Turning back to the ISO guidelines: if anyone has had a chance to check these out, they are extremely thorough and a little complicated. I’m hoping today, Tom, you can help break these guidelines down into digestible insights and walk us through them.
Tom Whelan: So, ISO 30437. As I’ve joked, it’s not a very snappy name; it’s not something we’ll see on T-shirts. When we became aware of this in mid-2023, when they released it, I thought it was great. We looked at the industry’s reaction to it, and as you said, it doesn’t simplify the complexity in the same way that the Kirkpatrick levels do, and for some people that might be one of the first hurdles to overcome when looking at the framework. But to us, it’s not that complicated: certainly more complex than Kirkpatrick, but really not that complicated. We’ve named it the 5-4-3 Framework, because five, four, three are descending numbers that roll off the tongue nicely, and each of those numbers stands for something. The five represents the different types of stakeholders interested in training information: senior organizational leaders, group or team leaders, the head of learning, training managers, and the learners or employees themselves. Those five categories are not all going to want the same information; the same information isn’t going to be relevant in the same way to all of them. They’re different consumers of information. The head of learning probably wants a lot of information. Learners might say they want a lot of information, but you don’t want to give them tons of it, and a lot of it isn’t going to be relevant to them anyway. That’s the first part of the framework: there are five different stakeholders worth considering, from who’s in charge of the company down to the learners. The four is about the four reasons to measure: to inform, to monitor, to evaluate or to manage. I know those can sound like synonyms, but they all stand for distinct things. Inform is simple questions: if the question is, “How many fingers do I have?” the answer is five, and no more interpretation is needed. Monitor means you have a historical benchmark to compare against: if you had five fingers three years ago and still have five, you’re tracking with the historical benchmark. Evaluate is when you measure what’s going on against goals. Manage is about proactively using information to improve training; instead of just taking a pulse of something or comparing it to a goal or historical norm, manage is about continuous process improvement. The three, the last digit in the 5-4-3 Framework, is about the types of metrics: efficiency metrics, effectiveness metrics and outcomes. In the L&D lexicon, we often use those interchangeably, but in the ISO standard, they mean very distinct things. Most things we’re familiar with are probably efficiency metrics: how many hours were used in training, or how many learners went through training. Effectiveness starts to look at how much it cost or how well people did on tests of the content: can we substantiate that people learned anything? Outcomes focus on the impact. You can have training that runs very inefficiently and doesn’t seem very effective but still has a lot of impact on the back end. Conversely, you can have training that’s super efficient but doesn’t have much impact. Not all training has to have all of this data relevant to every single stakeholder, or cover all four reasons, but this framework allows learning leaders to push back from the table and look at what they’re doing or what questions they’re being asked. They can consider who’s asking, why they’re asking, what they want to know, and what data they have on hand. It’s about matching the metric to the need rather than walking up to a big array of numbers and thinking you’re going to analyze everything.
As you pointed out, there’s way too much data now; it’s about streamlining what you’re actually looking at. If we’re using the Kirkpatrick Model, calculating Level 1, 2, 3 and 4 metrics gets harder as you go up the levels, and not everyone will care about all of them. The 5-4-3 Framework allows learning leaders to focus in and think about which metrics are worth tracking. Whatever the business initiative or organizational shift is, how does training support that? How can we substantiate that? It’s just as important to get information indicating that things are going poorly as it is to know things are working. There’s bad information and there’s bad training, but those are not the same. Hopefully, that seems simple in a nutshell; that’s how I think about the 5-4-3 Framework. Rather than being a recipe used regardless of the situation, it’s more of a toolkit. It’s adaptable, and once learning leaders become familiar with it, or think through that lens, it allows for a much broader array of conversations with different stakeholders about training and its importance. I think it appreciably expands the impact that we in L&D recognize training could have on learners and the business.
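The 5-4-3 structure Tom walks through can be summarized as plain data. Below is a minimal sketch; the enum values are the categories named above, while the `frame_question` helper and all identifier names are our own illustration of stakeholder-first reporting, not terminology from ISO 30437:

```python
from enum import Enum

class Stakeholder(Enum):          # the "5": who consumes the information
    SENIOR_LEADER = "senior organizational leader"
    TEAM_LEADER = "group or team leader"
    HEAD_OF_LEARNING = "head of learning"
    TRAINING_MANAGER = "training manager"
    LEARNER = "learner/employee"

class Reason(Enum):               # the "4": why you are measuring
    INFORM = "answer a simple question"
    MONITOR = "compare against a historical benchmark"
    EVALUATE = "compare against a goal"
    MANAGE = "drive continuous improvement"

class MetricType(Enum):           # the "3": what kind of metric it is
    EFFICIENCY = "volume and utilization of training"
    EFFECTIVENESS = "quality of the training"
    OUTCOME = "business impact of the training"

def frame_question(who: Stakeholder, why: Reason, what: MetricType) -> str:
    """Start from the audience and the purpose, then pick the metric."""
    return f"For the {who.value}, report {what.value} in order to {why.value}."

# Example: a team leader monitoring an efficiency metric.
print(frame_question(Stakeholder.TEAM_LEADER, Reason.MONITOR, MetricType.EFFICIENCY))
```

The design point is the argument order: audience and purpose come before the metric, which is the reversal of the levels-first habit Tom describes.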
Sarah Gallo: Thanks for breaking down that helpful framework to guide our listeners through these guidelines. Like you said, it allows them to strategically focus their limited time, energy, and resources on what really matters. We’ll be right back after a brief message from our sponsor.
[Ad]
As a training professional, your job is to effectively manage the business of learning. You probably tune in to this podcast to gain insights on L&D trends being used by some of the most innovative thought leaders in our market. But did you know that Training Industry also provides data-driven analysis and best practices through our premium research reports? Our entire catalog, including reports on deconstructing 70-20-10, learner preferences, the modern learner experience and AI’s impact on L&D, just to name a few, can be found at TrainingIndustry.com/shopresearch. New insights create new ways for L&D to do business. Let Training Industry research reports take your training initiatives to new heights. Go to TrainingIndustry.com/shopresearch to view the entire catalog.
Michelle Eggleston Schwartz: Tom, I’d love for you to share more about the three categories of metrics mentioned in the report: efficiency, effectiveness, and outcomes. Are any of these metrics harder to measure than others? If so, can you share how we should approach them?
Tom Whelan: Sure. As I mentioned earlier, the ISO calls out that most metrics are efficiency measures, so we’re mostly awash in those. For example: What’s the utilization rate? What’s the percentage of completion? You don’t need more information to interpret that; a higher percentage is always better than a lower percentage. But for a lot of efficiency metrics, you need them to come in at least at the monitor level. Just having an answer, without any historical knowledge or benchmark behind it to tell you where things are going, means you’re sort of twisting in the wind. You can say our utilization is 65%; well, is that good or bad? Without context, 65 is just a number. Efficiency metrics are the easiest ones for us to grasp because they surround us all the time; a lot of the automated metrics that many platforms provide tend to be efficiency metrics. Effectiveness metrics are all about the quality of the training: How good was the training? Was it designed well? Was the delivery effective? Was the content engaging? Do employees seem able to apply it? If learners react poorly to a training, meaning they don’t really like it and they don’t learn much, the chance that they’re going to apply it later diminishes. If it costs too much overall, then you have ineffective training. Effectiveness is the category that gets closest to the classic Kirkpatrick levels, and maybe some efficiency metrics do too. That’s how I think about those two categories as defined by the ISO. The last one, and to answer the question, is outcomes. For outcomes, it’s not about the training; it’s about the metric targeted by the training. If you’re instituting a sales training program to increase close rates, that’s the metric, or outcome, you’re aiming for. It may seem obvious, but how many learning programs have been set up to affect something like sales close rates? You’re measuring effectiveness metrics, like whether the learners liked it. You’re measuring all sorts of efficiency metrics, such as completion rates. But did it move the needle? That information typically doesn’t live in L&D, so it can be harder to get, or harder to figure out who has the data you need to make it tangible. That’s the ultimate goal, trying to drive that outcome, but it’s also what makes outcomes the most difficult to measure. An outcome metric isn’t just one type of thing. Is return on investment an outcome metric? Yes. Is return on expectations an outcome metric? Yes. Is a sales increase an outcome metric? Yes. Did something meet or exceed expectations? That’s an outcome too. It’s not one thing. The easiest way for me to think about how the ISO positions it is that an outcome is something you need to take learning data and add something else to in order to get. If you’re just looking at data from an LMS or internal training data, you have a lot of efficiency and effectiveness metrics but no outcomes, until you tie it to the outside world or something else going on in the business.
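Tom’s 65% utilization example is the difference between inform and monitor in miniature: the same number is uninterpretable alone and meaningful once there is history behind it. A small illustrative sketch (the function name and all figures are invented, not drawn from the standard):

```python
def interpret_utilization(current: float, history: list[float] | None) -> str:
    """Interpret a utilization rate; without history, we can only inform."""
    if not history:
        return f"Utilization is {current:.0%}. (Inform only: no benchmark, no trend.)"
    baseline = sum(history) / len(history)
    direction = "above" if current > baseline else "at or below"
    return (f"Utilization is {current:.0%}, {direction} the historical "
            f"average of {baseline:.0%}. (Monitor: the trend is interpretable.)")

print(interpret_utilization(0.65, None))                # twisting in the wind
print(interpret_utilization(0.65, [0.58, 0.61, 0.60]))  # context makes 65% meaningful
```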
Sarah Gallo: Thanks for breaking down those metrics for us. They definitely can blend together. I want to touch on another element here, specifically the compliance element, because as of now, there’s really no formal compliance certification or audit process for these new guidelines. It’s up to businesses to determine how to follow or not follow these standards. Do you have recommendations on how our listeners can make this determination?
Tom Whelan: From the jump, one of the things we liked about these standards is the fact that it’s the ISO that put them out, just by virtue of who they are. There are a lot of small businesses that, if you start talking about ISO 9001 or 14001, might think it’s just numbers and acronyms. But anyone who knows what those things are probably knows because it’s painful: getting certified, getting the blessing of the ISO, or being able to say you adhere to these standards carries weight in international commerce. For me, at least, who’s putting these standards out matters. Because it came from the ISO, it affords learning leaders the opportunity to walk around their organization with these and trumpet where they came from. It’s relevant to business. While there is no formal audit process, there never was one for Kirkpatrick either. From a compliance standpoint, I’m not sure what organizations might want to do internally to say they are or aren’t following bits and pieces of this. At least for now, it’s going to take some companies a bit of experimentation to apply some of this and find out where it does and doesn’t work for them. One of the things the standards are not shy about is trying to be broadly applicable. They acknowledge that if you’re a large organization, there are a lot of different variables to contend with, while for small organizations, some of the metrics and concerns about training may not exist. All companies are different. Kirkpatrick was a bad one-size-fits-all approach, and I think the ISO is trying to say they are not attempting that. Yes, it’s supposed to be broadly applicable, but the way everybody wears this hat is not going to look the same. If you don’t have a lot of learning technology in your organization, there’s a whole bunch of stuff that you either won’t have access to or that won’t concern you. Conversely, if you’re a huge multinational organization with a complex but well-functioning learning technology stack, you’re dealing with a whole different range of variables. How do you say you’re adhering to the ISO 30437 standards? Just say that you are, and try; I don’t think there’s a checklist for it. I would encourage people to test the waters. You can talk about Dr. Donald Kirkpatrick in a boardroom and people might not know who that is, but if you say there are ISO standards on this, you will likely catch more ears. Politically, I think knowing who developed these standards helps with how organizations socialize and apply them in their companies. As Michelle alluded to, they are not the simplest to go through, but they were made by business, for business.
Sarah Gallo: As we’ve alluded to throughout the conversation, these guidelines illuminate a pathway for training measurement for learning leaders. But what about training providers and vendors? How do you see these guidelines impacting the training vendor market?
Tom Whelan: That’s a great question. I think the potential impact on the training provider market is that it gives them a language to walk into organizations and say, “Hey, what are you already doing? Are you looking at this or that?” They can start asking detailed questions rather than starting from, “Are you measuring Level 1 metrics?” They can have a much broader conversation with customers about what they’re doing, and it’s in vendors’ interests to start doing that. It might not be the most exciting thing to talk about, but if I’m a vendor and don’t know what these things are while a customer is talking about them, that’s probably not good for me. These guidelines may show how the product or service I’m offering should or could impact the organization. Maybe there’s a solution that offers many benefits for the customers that use it which may not be articulated in a Kirkpatrick Level 1 through 4 analysis. If vendors try to embrace these guidelines, it can provide more ways to substantiate the effects of their solutions and the results customers can expect. If their solution is all about efficiency, then that’s what it is; they can clearly tell people that’s what it’s designed to do. It might not move the needle on any of your outcomes; that’s up to you to decide. This is what our product affects and what you can expect it to do. It’s akin to going to the store, buying a new cooking pot, bringing it home, and wondering where your dinner is: you have the stuff, but that doesn’t get you the answers. From the vantage point of providers, it presents business opportunities and potential marketing messaging. It’s a way to look at whatever product or solution you currently offer and ask if there’s a different way to take this puzzle piece and make it fit into more puzzles. Can we talk about a product differently? Can we talk about what it will do for the business differently? By the same token, can we shape the language based on who we’re talking to at the customer organization? If you’re talking to the C-suite, what they want to know about learning, or what’s important to them, is going to be different from what the head of learning wants, so you have to have a different conversation with each. That sounds standard because, of course, those stakeholders have different interests, but this framework lets providers look at what those different interests are, how they might differ, and what about training is going to be important to each of them. In a sense: “How can we help our customers better? How can we help them in ways they were already being helped but didn’t even know?” Or, “How could they be helped in ways they’re not even aware they need?” That’s how I look at it. These guidelines have the potential to improve communication and tailor messaging toward your audience. It can only improve things if everyone is speaking the same language and reading from the same book.
Michelle Eggleston Schwartz: Before we wrap up, Tom, do you have any final takeaways for our listeners?
Tom Whelan: Takeaways about measurement: one of my favorite statements is that all data isn’t created equal. There’s a lot of garbage information and bad data used routinely; just because we have numbers doesn’t mean they’re good. Re-examining what we do regarding evaluation is always a healthy exercise. History is filled with examples where we thought one thing, but closer examination of the numbers showed we were wrong. The job is never done, and measurement isn’t going to get easier. I don’t think we’ll reach a point where AI can just give us all the answers magically, in the right context and frame, relevant to everyone. It takes discernment to determine what information is worthwhile. An example that comes to mind is the idea that there’s another planet beyond Pluto causing anomalies in Neptune’s orbit. For decades, it was believed there was something out there. Eventually, they figured out it was caused by one rogue data set from an observatory that had changed some machinery; the data was slightly off from that one place. If you reanalyze it and exclude the bad data, you find there’s nothing wrong with Neptune’s orbit. That’s just an example of how you can have lots of numbers, but it doesn’t mean you should use all of them or that they’re all useful. L&D professionals often face a riddle. On one hand, we need more information to answer our questions; on the other, we get all this information and feel overwhelmed. It’s a catch-22 from both directions. But again, not all data is good. Some of it can and should be discarded; it doesn’t have feelings. Just because you have Level 1 outcomes doesn’t mean they hold value. One of my favorite unpopular correlations in training research is that the more you enjoy the training, the less you tend to learn. The idea that people should love the training, so that they’re engaged and retain everything? Not quite. Yes, there’s some truth to it: if they dislikee it, they won’t engage. But it doesn’t need to be a feel-good hug fest to promote the best learning. Learning is supposed to be hard; you’re supposed to fail at it. I think the same applies to evaluation. It’s going to be challenging, and I don’t think it will get easier as we access more information. There are more ways to automate how we analyze it, but that won’t simplify things. As long as we know that not all data is good, we can exclude some of it. That’s not the same as cherry-picking only the good news; I think it’s fine to focus on quality data. For as much power as quality insights give us, if we don’t know that what we’re using is the best, we might waste time instead of promoting efficiency and effectiveness in learning.
Sarah Gallo: Yeah, great.
Tom Whelan: I don’t know if that’s a sunshiny note, but…
Sarah Gallo: I love that. A great note to end on. Tom, thank you so much for speaking with us today on the podcast.
Tom Whelan: Thank you. It was a pleasure to be here.
Sarah Gallo: For more resources on training measurement, check out the description and shownotes for this episode at trainingindustry.com/podcast. Visit the Training Industry research store, linked in the description and shownotes, to shop our entire research catalog. Lastly, don’t forget to rate and review us wherever you tune in to The Business of Learning.