When faced with the task of evaluating training, a common approach is to solicit feedback using smile sheets – post-training satisfaction surveys, which gather data about the training content, the instructor and the immediate reaction to the training sessions. This data is useful for the providers of learning and development (L&D), so that they can improve how they administer training courses, especially during pilot programs for new content.

However, when training is delivered in comfortable training classrooms by skilled and experienced training professionals, and the training content itself is purchased off the shelf from established and validated providers, this data delivers few surprises. The training professionals themselves often value this type of data to confirm their skills and proficiency (and perhaps to stroke their egos). But how valuable is it to the organization when analyzing training impact?

It can be argued that this data does little to communicate the level of performance improvement participants may achieve on the job. Nor is it data that can be presented to senior leaders to show return on investment (ROI) for their significant training expenditure (a concern if you are facing the threat of future cuts to your training budgets).

Identifying the Right Data

So what data is the right data to gather? To answer this question, you can research articles, studies and white papers, finding lots of information about different evaluation methods and data sources. For example, many discussions about evaluating training impact include Donald Kirkpatrick’s widely utilized model, first developed in 1959, which outlined four levels of training evaluation:

  1. Reaction
  2. Learning
  3. Behavior
  4. Results

Dr. Jack Phillips and his wife, Dr. Patti Phillips, offer a five-level framework as part of their ROI Methodology:

  1. Reaction and Planned Action
  2. Learning
  3. Application and Implementation
  4. Business Impact
  5. Return on Investment

Both models of training evaluation include smile-sheet surveys at their first level, measuring satisfaction and reaction to the training itself. The data collected at these levels can be extremely beneficial when your goal is to improve training delivery and focus on what happens within the classroom itself. Historically, many L&D professionals have stopped here, checking the “evaluation” box as the necessary final part of training provision.

Models such as Kirkpatrick’s Four Levels and Phillips’s ROI Methodology advocate the continuation of evaluation beyond the classroom, to gather impact data that indicates actual learned skills and their subsequent application after the training has ended. Phillips and Phillips go one step beyond Kirkpatrick, in fact, to provide a framework and methodology for gathering tangible data, which, when converted to monetary value, can indicate significant financial impact of training provision.
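That final conversion step can be sketched concretely. The Phillips ROI Methodology expresses Level 5 as a benefit-cost ratio and an ROI percentage computed from fully loaded program costs and monetized benefits; the dollar figures below are purely hypothetical, used only to show the arithmetic:

```python
# Sketch of the Level 5 calculations in the Phillips ROI Methodology.
# The monetary figures are hypothetical examples, not real program data.

def benefit_cost_ratio(benefits: float, costs: float) -> float:
    """BCR = monetized program benefits / fully loaded program costs."""
    return benefits / costs

def roi_percent(benefits: float, costs: float) -> float:
    """ROI (%) = (net program benefits / program costs) * 100."""
    return (benefits - costs) / costs * 100

# Example: a program costing $80,000 that yields $120,000 in
# monetized benefits.
print(benefit_cost_ratio(120_000, 80_000))  # 1.5
print(roi_percent(120_000, 80_000))         # 50.0
```

A BCR above 1.0 (or an ROI above 0%) indicates the program returned more than it cost; the hard part, of course, is the credible conversion of impact data into that benefits figure.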

When to Evaluate Training

In addition to knowing what level of data to collect, an L&D professional must understand when to evaluate training impact. This is where a clear evaluation plan can prove extremely useful. Project management tools such as the Program Evaluation and Review Technique (PERT) chart or the Gantt chart are useful ways to track when and how evaluation should occur. Administering both pre- and post-training surveys and skills tests is also a great way to determine whether learning and improvement have occurred.

Lastly, post-training surveys can ask participants to estimate their likelihood of use and performance improvement, directly related to the skills and knowledge learned from the training (Levels 3 and 4 of both models). These surveys can also include key performance indicators, such as business strategies or competencies, and ask participants to estimate the impact the training will have on those metrics. They can then be followed by another survey 60 to 90 days post-training asking the same questions, but this time requesting actual use and impact data. This approach gives L&D professionals another way to evaluate training, using the collected data to present training impact in a credible and quantifiable way.
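A minimal sketch of that comparison, assuming each participant reports an estimated improvement at course end, an actual improvement at the 60-to-90-day follow-up, and a confidence rating in that follow-up estimate (the field names, sample values and confidence adjustment are illustrative assumptions, not a prescribed survey format):

```python
# Comparing end-of-course estimates with follow-up "actual" data.
# All numbers below are hypothetical sample responses.

from statistics import mean

# Each record: estimated % performance improvement at course end,
# actual reported % improvement at follow-up, and the respondent's
# confidence (0-1) in their follow-up estimate.
responses = [
    {"estimated": 30, "actual": 20, "confidence": 0.8},
    {"estimated": 50, "actual": 35, "confidence": 0.6},
    {"estimated": 25, "actual": 25, "confidence": 0.9},
]

avg_estimated = mean(r["estimated"] for r in responses)
# Discounting each actual figure by the respondent's confidence is one
# way to err on the conservative side when claiming training impact.
avg_adjusted_actual = mean(r["actual"] * r["confidence"] for r in responses)

print(f"Average estimated improvement: {avg_estimated:.1f}%")
print(f"Confidence-adjusted actual:    {avg_adjusted_actual:.1f}%")
```

The gap between the two averages is itself useful data: a large drop from estimated to actual impact often signals barriers to on-the-job application rather than a failure of the training content.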

But what if the data shows no significant impact? What if, after all that effort to evaluate the training, the on-the-job application and performance improvement data indicates no justifiable ROI? Here lies one of the biggest quandaries facing training professionals today, and perhaps one of their biggest oversights: providing training that is either unnecessary or does not address an identified skills gap.

Conducting a Needs Analysis

In the early instructional design practices of the 1950s and 1960s, practitioners such as Thomas Gilbert and B.F. Skinner realized that even the most well-designed training program can fail to have a measurable impact on an organization’s or an individual’s performance if the performance problem is not due to a lack of skill or competence. Despite this revelation, we still suggest training as the preferred solution for many performance problems, and then seem surprised when it fails to hit the mark.

Best practice should include an understanding that the right data must be collected and analyzed long before training is even suggested as a solution. This data forms part of a comprehensive and systematic needs assessment: a strategic planning effort to identify what an organization needs in order to be successful now and in the future, and to analyze the organization’s current strengths, weaknesses, opportunities and threats. This needs assessment data could reveal a workforce of skilled and competent employees, making training unnecessary. The potential for high performance and/or production among these employees may instead be hindered by uncommunicated expectations, unclear procedures, out-of-date tools and resources, or other environmental obstacles preventing success.

There are many great evaluation methodologies available to us as L&D professionals. If training has been identified as a necessary performance solution, we can evaluate its impact by defining specific, measurable objectives. We must also plan how we will evaluate training impact, when it will happen and what data will be important to communicate to each stakeholder. We must, however, be extremely diligent in our use of training and be willing to push back against those who request it as the go-to solution to every problem or opportunity. Otherwise, the training evaluation data we collect will be weak, and its impact will be a shot in the dark rather than a targeted, intentional and effective bullseye.