When the subject of measurement comes up, instructional designers and trainers often feel an impending sense of doom. I have two theories: (1) they fear the data will show they are ineffective or incompetent, and (2) nobody told them math would be required for the job. But neither of these feelings helps a designer get results that matter.

It is best to approach the subject of measurement with designers and trainers by communicating why learning and development (L&D) leaders measure learning. It is not about how happy the learner is or how much they like the trainer. It is also not about passing a written test that measures what participants know. Ask any stakeholder about the participants who attended training for the prescribed number of weeks and still could not do their jobs effectively.

Learning leaders must communicate, in the language of the business, that the learner can perform the behavior at the expected level while in training (The Kirkpatrick Model, Level 2) and on the job (Level 3), and that participants will collectively raise the performance bar to impact business results (Level 4). If it is a major initiative, the stakeholder will also want to know that the time, money and effort put into the learning initiative paid off: return on investment (ROI), or Level 5. All of this should be discussed before the designer puts pen to paper. If designers do not know the why behind the program, it will be less effective, and the stakeholder will not be satisfied.
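The ROI calculation itself is simple arithmetic: benefits minus costs, divided by costs. Here is a minimal sketch, using entirely hypothetical numbers, of what that Level 5 conversation boils down to:

```python
def roi_percent(benefits: float, costs: float) -> float:
    """ROI as a percentage: (benefits - costs) / costs * 100."""
    return (benefits - costs) / costs * 100

# Hypothetical figures: a program costs $50,000 and produces $80,000
# in measurable benefits (e.g., fewer errors, faster time to proficiency).
print(roi_percent(80_000, 50_000))  # 60.0 -> a 60% return
```

The hard part is not the formula; it is agreeing with the stakeholder, up front, on what counts as a benefit and how it will be measured.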

If we do not plan for measuring the learning outcomes at the beginning of the project, it will never be done well — or at all. Starting a project without knowing what the outcome is supposed to be is like diving into a tank filled with sharks and not expecting to be eaten alive. Measuring a program well will net the engagement results that trainers and businesses are looking for, and it will happen organically because the right behaviors were identified as the program was developed. This means the process will be informed by data, and the result will be something we can act on.

Here are four steps that will help drive the conversation:

1. Know the Baseline.

  • When a business says “X” is the problem, dive deeper into the conversation.
  • Ask what the learner should be able to do by the end of the training session and how well they should be able to do it.
  • Ask if anyone on the team is currently at the level of expected performance. If no one is, ask where the performance is now. Once you know, you will have found your gap.
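Finding the gap is straightforward once you have both numbers. As an illustration with hypothetical scores (the scale and figures are invented for this sketch):

```python
# Hypothetical scores on a 0-100 performance scale.
expected_level = 90                 # what the business says "good" looks like
current_scores = [62, 70, 58, 65]   # observed performance of the team today

# The baseline is where performance is now; the gap is what
# the training must close.
baseline = sum(current_scores) / len(current_scores)
gap = expected_level - baseline
print(f"Baseline: {baseline:.1f}, gap to close: {gap:.1f}")
```

The point is not the math — it is that without a measured baseline, you cannot later prove the training moved anything.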

2. Go Beyond Outcomes and Look at Environments.

  • Make sure that you understand the environment where the training will take place and how it is the same and/or different from the place where the job must be done.
    • If training is done in a quiet room and the job is performed in a loud and chaotic environment, it will impact the outcome.
    • If the learning environment has a faulty internet connection, and the learner is expected to train in an asynchronous environment, it will impact the outcome.
    • If the participant must find time in between customers to learn and is expected to perform their job and the learning simultaneously, it will impact the outcome.

All environmental situations will impact the outcome. If they are not accounted for, measurement will not be effective.

3. Ask the Stakeholder How They Measure Success.

  • It may be different from how they measure performance.
    • You might deliver great results, but if they are not the results the business wants, you will not be successful.
    • Make sure you know how the performance aligns to the long-term success of the organization.
  • Measure the results more than once. A single great result at the end does not mean the program was successful or sustainable.
    • Measure over time, throughout training and in the role. This lets you see how the learner progresses, and if performance is slipping, course corrections can be made more easily.
    • When you measure over time, do not ask only participants for feedback. Be sure to include managers, and even stakeholders if applicable.

4. Use the Right Tools for Success.

  • Be sure the measurement includes something observable, and that the metrics are both valid and reliable. Valid means the metric measures what you expect it to measure. Reliable means it produces consistent, repeatable results over time.
  • Using the right tool at the right time makes a difference in the success of the learning experience.
  • When readiness is measured by performance milestones, it can better predict success in the role.
  • Consider a practical measurement like a checklist, rubric or feedback evaluation sheet.
  • Test scenarios, simulations and case studies also measure success well, provided they are as close to the reality of the role as possible.
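A checklist becomes a usable metric once each item is an observable behavior scored the same way every time. A small sketch, with hypothetical checklist items for a customer-service role:

```python
# Hypothetical observation checklist for a customer-service call.
# Each item is an observable behavior: score 1 if demonstrated, 0 if not.
checklist = {
    "greets customer by name": 1,
    "verifies account before discussing details": 1,
    "offers a relevant solution": 0,
    "confirms the issue is resolved": 1,
}

# Percentage of expected behaviors actually observed.
score = sum(checklist.values()) / len(checklist) * 100
print(f"Observed behaviors demonstrated: {score:.0f}%")
```

Because every item is observable and scored identically by any observer, the same checklist supports the validity and reliability described above.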

In Conclusion

The best thing a designer can do to create success is to build the overall measurement process into the design, apply it, and track and manage the results throughout the life of the program. This works best when measurement, design and training teams work together and are transparent and honest about results. When this happens, risks are communicated in real time, course corrections are made, and participants are successful.

Register for the spring Training Industry Conference & Expo (TICE) to hear Loren Sanders’ session, “The ‘B’ Word: Eliminate Bias From the Learning Environment.”