Susan was proud to teach the flagship leadership development program that had taken so much time and effort to build. At the end of the class, she launched the company’s electronically administered, standardized post-program evaluation. And 90 days later, she used the same system to launch an automated follow-up survey. At the end of the year, she looked at the results the program had delivered.

If you use this common yet flawed approach to training evaluation, you are making several mistakes that will cost you valuable data, reduce your program's results and potentially threaten your job. Avoiding these common errors is not as difficult as some believe.

1. Waiting to Evaluate Until After a Program Has Launched

Many training professionals mistakenly design, develop and deliver a training program, and only then start to think about how they will evaluate its effectiveness. Using this approach nearly guarantees that there will be little or no value to report.

To avoid this mistake, begin each program with a focus on the high-level organizational results you need to support or accomplish. This automatically focuses efforts on what is most important. Consider which organizational metrics you plan to influence, and articulate how influencing them will contribute to the overall success of your organization.

For example, Susan’s leadership development program was requested because numerous key managers in the organization were nearing retirement age, and a viable succession plan needed to be built. The organization was already monitoring employee satisfaction among the direct reports of newer managers, customer satisfaction among those served by their departments, and turnover of both managers and direct reports. Leadership believed this focus would keep the company strong and support sales and profitability goals in the long run.

Once you have identified targeted program outcomes at the highest possible level, think about what really needs to occur on the job to produce good results. For example, what should new managers do on the job to create customer and employee satisfaction, and contribute to sales and profitability? Detail these behaviors in specific, measurable terms.

Then, and only then, consider what training or other support the targeted group requires to perform well on the job. You may find that traditional training is only part of a broader performance support package that includes coaching, job aids, process changes and a recognition program. Finally, think about what training format and environment will be conducive to imparting the required skills successfully.

2. Viewing All Training Programs as Equal

Not all training programs are created equal, and only the most mission-critical programs should have a comprehensive evaluation plan created and implemented for them. From there, the less impact a program has on the bottom line or mission accomplishment, the less you should invest in evaluating it at any level.

During the program planning phase, consider each program through the lens of how it could contribute to your organization’s most important outcomes. If you cannot easily articulate how the program contributes organizationally, it is probably not a good candidate for a robust evaluation plan. Think about what data would be useful to you, the training professional, and what information stakeholders might want to see, and focus evaluation efforts on those items.

In the case of a leadership development program, the possible outcomes could be significant to the company. So, good thought and planning should be put into how progress and outcomes will be monitored and reported throughout the program. As an added benefit, regular monitoring allows for corrections along the way, maximizing program success and results.

3. Spending Resources to Evaluate Training Only

On-the-job performance is the most important thing to evaluate for mission-critical training. Without on-the-job application, training has no hope of contributing to organizational results, and therefore is of little value to the organization.

Conserve resources for post-program evaluation by building evaluation into the design of the program itself. Nearly any activity and interaction during training can be tracked and reported as validation of knowledge and skill.
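
For example, many learning platforms can record in-training activities as structured statements. The sketch below is a minimal illustration using the xAPI (Experience API) statement format common in learning record stores; the participant, activity ID and score shown are hypothetical placeholders, not a definitive implementation.

    # A minimal sketch of recording one in-training activity as an
    # xAPI-style statement. The email address, activity URL and score
    # are hypothetical; adapt them to your own LMS or learning record
    # store (LRS).
    import json

    statement = {
        "actor": {"mbox": "mailto:participant@example.com", "name": "Participant"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/completed",
            "display": {"en-US": "completed"},
        },
        "object": {
            "id": "https://example.com/activities/coaching-roleplay",
            "definition": {"name": {"en-US": "Coaching role-play exercise"}},
        },
        "result": {"score": {"scaled": 0.9}, "success": True},
    }

    # In practice, this JSON would be sent to your LRS; printing it here
    # simply shows the structure of the tracked evidence.
    print(json.dumps(statement, indent=2))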

To evaluate customer satisfaction with your programs, focus on formative methods that are implemented during the training. Then, formally evaluate the few key items you plan to analyze and use, or report to stakeholders.

4. Relying Solely on Standardized Surveys

Some believe in the existence of a miracle survey that will give you all the training evaluation data you need. Don’t buy it. Surveys, particularly those administered and tabulated electronically, are a wonderfully efficient means of gathering data. However, response rates tend to be low, and there is a limit to the types of information that can be gathered.

It is so easy to disseminate these surveys that they are often launched after every program, no matter how large or small. When the questions are not customized to the program or the data you need, people quickly pick up on the garbage-in, garbage-out cycle. This creates survey fatigue and makes it harder to gather meaningful data for any program.

For mission-critical programs, gather both quantitative (numeric) and qualitative (descriptive) data. Open-ended survey questions can gather qualitative data to some degree, but adding another evaluation method provides richer data. For example, a post-program survey could be administered and the results analyzed. If a particular trend is identified, a sample of program participants could be interviewed and asked open-ended questions on that key topic.
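
As an illustration, even a simple script can tabulate quantitative survey results and flag the trends worth probing in follow-up interviews. The sketch below assumes responses have been exported to a CSV file; the file name, column names and threshold are hypothetical, not part of any standard tool.

    # A minimal sketch: average Likert ratings per survey question and
    # flag low scorers as candidates for follow-up interviews.
    # Assumes a CSV with hypothetical columns "question_id" and
    # "rating" (a 1-5 scale).
    import csv
    from collections import defaultdict
    from statistics import mean

    def flag_low_scoring_questions(path, threshold=3.5):
        ratings = defaultdict(list)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                ratings[row["question_id"]].append(int(row["rating"]))
        # Keep only questions whose average rating falls below the threshold.
        return {q: mean(r) for q, r in ratings.items() if mean(r) < threshold}

    if __name__ == "__main__":
        for question, avg in flag_low_scoring_questions("survey_results.csv").items():
            print(f"{question}: mean rating {avg:.2f} - consider follow-up interviews")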

Build touch points into your training programs for facilitators to solicit feedback, and ask your facilitators for their feedback via a survey or interview after the program.

5. Not Using Collected Data

When you survey a group of individuals, you make an implicit agreement with them that you will act upon their aggregated feedback. If participants can clearly see that you are doing nothing with their input, they will quickly come to expect that nothing will change, and they will stop giving it.

Timely review of evaluation data is also important. Instead of simply measuring and reporting what happened, review the data as you receive it, and respond proactively to what is (or is not) happening. Make positive changes to maximize performance and outcomes. Publicize how the data is being used to support on-the-job performance and key organizational results.

For your next major initiative, create an evaluation plan with touch points during and after the training. A training professional at a hospital chain found that early adoption of the safety practices she taught was low. She spoke with nurses, observed their behavior in the hospital and modified the program accordingly, and implementation levels improved. She commented that her stakeholders were more impressed by the initial failure and the recovery plan than by the stories of things that went well from the beginning. By keeping stakeholders apprised of program progress, she earned their trust and support.

Expect and plan for program modifications during and after training, and use data to guide how to maximize performance and results. You will create and demonstrate value to your organization, and earn stakeholder respect as a trusted business advisor.