Training measurement is repeatedly positioned as a means to gain a seat at the table and a bigger budget. Although those are desirable outcomes, they should not be the initial focus. We are in the business of problem solving, performance improvement, and building a learning culture, so it is hypocritical to chase the seat or the budget rather than to use analytics to become better business partners.

If measuring impact beyond reaction were easy, more practitioners would be doing it. It takes process rigor and a spirit of continual service improvement, and it requires a change in attitude from both training and the business.

Therefore, training functions need to stop hesitating and dive in by piloting with a single team or initiative. The results will provide actionable data that will likely:

  • Expose the unreliability of reaction (Level 1) surveys
  • Uncover training gaps
  • Reduce training waste
  • Create more opportunities to partner with the business

One way to do this is to apply an interpretation of the scientific method to examine the cause and effect of training initiatives. For general guidance on incorporating experimentation into instructional design, follow the steps below.

Ask Questions and Observe

Do a proper needs assessment: ask why training is needed, what needs to be done differently, how performance is currently measured, and so on. The questions may not be easy to answer if the stakeholder is not used to them, so be patient and coach. This is how the business becomes an accountable partner rather than an insatiable and impatient boss.

Next, work with SMEs and thought leaders to outline the applicable knowledge, skills, and attitudes while gaining a deeper understanding of department and business strategy. Once you are sure the problem can be solved through training, summarize, report, recommend, and agree on the goals.

Form a Hypothesis

Key performance indicators vary, but all of them are influenced by workforce behavior. Form a hypothesis that if certain behaviors change, the metric will move in a positive direction. Then write the learning objectives, assessment questions, and delivery strategy that will build the required applied knowledge and skills.

If there is no current way to measure performance on the job, work with the business to create one.

By this point (prior to content development), the measurements and targets for all four Kirkpatrick levels should be defined and agreed on by the stakeholders; otherwise there is no strategy, just an array of potential and accidental outcomes. Keep in mind that return on investment (ROI) and value of investment (VOI) can be calculated later.
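
What such an agreement looks like will vary by business, but a minimal sketch can make it concrete. Everything below (the measures, the targets, the dollar figures) is hypothetical rather than a prescription:

```python
# Hypothetical measurement plan agreed with stakeholders before content development
measurement_plan = {
    "level_1_reaction": {"measure": "post-class relevance rating",       "target": ">= 4.0 / 5"},
    "level_2_learning": {"measure": "pre-test vs. post-test score",      "target": ">= 20-point gain"},
    "level_3_behavior": {"measure": "observed use of the new call flow", "target": ">= 75% of monitored calls"},
    "level_4_results":  {"measure": "upsell conversion rate (the KPI)",  "target": "8% to 12% within a quarter"},
}

def roi_percent(monetary_benefits, fully_loaded_costs):
    """Classic training ROI: net benefits over costs, expressed as a percentage."""
    return (monetary_benefits - fully_loaded_costs) / fully_loaded_costs * 100

# Calculated later, once Level 4 benefits have been isolated and converted to money
print(roi_percent(60_000, 25_000))  # 140.0
```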

Test the Hypothesis

This is the easy part (we are already good at this). Develop and deliver the content in the appropriate modality. For extra rigor, set up a control group that does not receive the training.
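
If you do set up a control group, the comparison itself does not have to be elaborate. Here is a minimal sketch, assuming the same on-the-job metric can be pulled for both groups after the pilot (the metric and the numbers are invented for illustration):

```python
from statistics import mean

# Hypothetical post-pilot values of the agreed KPI (e.g., first-contact resolution %)
trained = [78, 82, 75, 80, 84, 79]
control = [71, 74, 69, 73, 70, 72]

lift = mean(trained) - mean(control)
print(f"trained {mean(trained):.1f} vs. control {mean(control):.1f} -> lift of {lift:.1f} points")
# With small pilot groups, treat the lift as directional; a significance test (or a
# friendly statistician) can tell you whether the difference is more than noise.
```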

Analyze Results

If you are used to receiving positive Level 1 survey results, the data gathered from your pilot may expose serious gaps not captured by feelings.

For example, the evolving needs of the business required a change in behavior from a support group that had not been hired for its sales acumen. I followed the process outlined above to create a rudimentary sales skills class.

Careful and objective analysis of the pre-test and post-test data alone (Level 2) led to these conclusions (a sketch of this kind of per-module analysis follows the list):

  • Learning objectives were appropriate and assessment questions were reliable and valid, but the content of module four failed to deliver positive results
  • The survey results were positive because the content was good for the relevant, comprehensive role-play scenarios built on the first three modules, but not for the module four learning objectives
  • Pre-test scores were very low for module four, so that module should have been the focus
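
To make that concrete, here is a minimal sketch of the kind of per-module gain analysis that surfaces conclusions like these. The scores are hypothetical, not the actual class data:

```python
from statistics import mean

# Hypothetical pre-test / post-test scores per module (0-100), one entry per learner
scores = {
    "module_1": {"pre": [55, 60, 52, 58], "post": [85, 88, 82, 90]},
    "module_2": {"pre": [62, 58, 65, 60], "post": [84, 80, 88, 83]},
    "module_3": {"pre": [57, 61, 55, 63], "post": [86, 89, 84, 87]},
    "module_4": {"pre": [30, 28, 35, 32], "post": [41, 38, 45, 40]},
}

for module, s in scores.items():
    gain = mean(s["post"]) - mean(s["pre"])
    print(f"{module}: pre {mean(s['pre']):.0f}, post {mean(s['post']):.0f}, gain {gain:.0f}")

# A very low pre-test average combined with a small gain (module_4 here) flags the
# content that failed to deliver, which is exactly what Level 2 data can expose.
```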

This is an incredible amount of actionable data, and it comes from analysis of Kirkpatrick Level 2 alone. Think of the value you can bring to an organization with a similar analysis of transfer (Level 3) and results (Level 4).

Repeating this process will strengthen partnerships and reduce training waste, eventually and naturally earning executive consideration in the big decisions.
