Thought Leader - Dr. Nabeel Ahmad

Several years ago, I led the learning and development (L&D) function supporting business development strategy at a Big Four accounting firm, where I helped consultants improve their client relationships. In many industries, L&D is widely viewed as a necessary evil. People feel learning is a distraction from their real work, but know they need it to maintain industry certifications. To win the minds of these learners, I developed a learning video series on the neuroscience of trusted business relationships.

The idea was simple: Use brain science to show how your brain and your client’s brain react when you interact with one another, and use that insight to establish a trusted relationship. Through a combination of subject matter experts, animations, actor-led scripted scenes and supporting facts and figures, we created several teachable moments and tracked the learning’s short- and long-term impact. This approach contrasted with past “soft science” learning interventions grounded in psychology, which often rely on inferring what we think is happening inside someone’s head.

How can you measure learning impact from an intervention like this?

Lagging to Leading Indicators

Learning measurement often focuses on lagging indicators because they are easy to measure and tie to compliance. Course completion rates are one example; weight loss in a nutrition program is another. Leading indicators, by contrast, are within a learner’s control and can drive the outcome they’re looking for. Attending live courses and frequently logging into the learning management system (LMS) are leading indicators, just as exercise and calorie intake are leading indicators of weight loss.
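
To make the distinction concrete, here is a minimal sketch in TypeScript, assuming a hypothetical LMS that exports per-learner event logs. The record fields and function names are illustrative, not taken from any real LMS API:

```typescript
// Hypothetical LMS event record; the field names are illustrative,
// not taken from any real LMS API.
interface LmsEvent {
  learnerId: string;
  type: "login" | "live_course_attended" | "course_completed";
}

// Lagging indicator: completion rate across a cohort, known only after the fact.
function completionRate(events: LmsEvent[], cohortSize: number): number {
  const completers = new Set(
    events.filter(e => e.type === "course_completed").map(e => e.learnerId)
  );
  return completers.size / cohortSize;
}

// Leading indicators: behaviors within the learner's control today,
// such as login frequency and live-course attendance.
function leadingIndicators(events: LmsEvent[], learnerId: string) {
  const mine = events.filter(e => e.learnerId === learnerId);
  return {
    logins: mine.filter(e => e.type === "login").length,
    liveCoursesAttended: mine.filter(e => e.type === "live_course_attended").length,
  };
}
```

The lagging number tells you what already happened; the leading numbers track behaviors a learner can still act on today.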

Chunk Content

The neuroscience learning series consisted of 20 minutes of video content. With the rise of microlearning and of short social media videos (often under 60 seconds), learners are unlikely to consume a 20-minute video in one sitting. So, we tested how many learners consumed the 20-minute video versus the same video split into five 4-minute videos. We discovered that simply chunking the content greatly influenced video views.
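
As an illustration of how such a test might be scored, here is a sketch assuming hypothetical view-tracking records. The schema and the 90% completion threshold are assumptions, not the tracking setup used for the actual series:

```typescript
// Illustrative view-tracking record; the schema is an assumption,
// not the tracking setup used for the actual series.
interface VideoView {
  learnerId: string;
  variant: "single_20min" | "five_4min";
  secondsWatched: number;
  videoLengthSeconds: number;
}

// Treat a view as complete if most of the video was watched;
// the 90% threshold is a judgment call.
const COMPLETION_THRESHOLD = 0.9;

function completionRateByVariant(
  views: VideoView[],
  variant: VideoView["variant"]
): number {
  const inVariant = views.filter(v => v.variant === variant);
  if (inVariant.length === 0) return 0;
  const completed = inVariant.filter(
    v => v.secondsWatched / v.videoLengthSeconds >= COMPLETION_THRESHOLD
  );
  return completed.length / inVariant.length;
}
```

Comparing completionRateByVariant(views, "single_20min") with completionRateByVariant(views, "five_4min") gives a simple head-to-head read on whether the chunked format was actually consumed more.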

Randomize Content Order

Is there a set path that learners must follow, or can they choose their own? If the latter, consider randomizing the order in which content is presented to a user. For the neuroscience learning series, the five 4-minute videos could be consumed in any order. During testing, we found that the first two videos on the learning webpage had far higher view rates than the remaining three, and we suspected it was simply because they were the first two videos listed. We created a simple script to randomize the order whenever someone visited the learning webpage. The webpage analytics then showed that views were far more balanced across all five videos, since each video now had the same likelihood of being displayed at the top of the list.
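
A minimal sketch of such a script, in TypeScript, is below; an unbiased shuffle run on page load is all it takes. The video IDs are placeholders, and the original script isn’t shown here:

```typescript
// Fisher-Yates shuffle: every ordering is equally likely, so each video
// has the same chance of appearing at the top of the list.
function shuffle<T>(items: T[]): T[] {
  const result = [...items];
  for (let i = result.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}

// Hypothetical usage on page load: render the five videos in a random order.
const videoIds = ["part1", "part2", "part3", "part4", "part5"];
const displayOrder = shuffle(videoIds);
```

A Fisher-Yates shuffle is worth the few extra lines over quick hacks like sorting on Math.random(), because it makes every ordering equally likely, which is exactly the property the analytics relied on.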

More Use ≠ Better Learning

Accessing more learning content does not necessarily mean that the learning is effective. Let’s say there is a learning module on how to properly execute bodyweight exercises, and you see that several learners keep going back to a certain video that shows how to perform a burpee. You might think this is good because the burpee training is popular, but then you notice it’s the same group of users repeatedly accessing it. A more reasonable inference is that those learners are confused by the video and still lack a clear understanding of how to do a burpee. If they had already learned to properly do a burpee, why would they keep returning to that video?
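
One way to surface this pattern is to separate total views from distinct viewers. A minimal sketch, assuming a hypothetical access log (the field and function names are assumptions):

```typescript
// Illustrative access-log record; the field names are assumptions.
interface AccessEvent {
  learnerId: string;
  contentId: string;
}

// Separate raw popularity (total views) from reach (distinct viewers).
// A high views-per-learner ratio suggests the same group keeps returning.
function repeatAccessSignal(log: AccessEvent[], contentId: string) {
  const hits = log.filter(e => e.contentId === contentId);
  const distinctLearners = new Set(hits.map(e => e.learnerId)).size;
  return {
    totalViews: hits.length,
    distinctLearners,
    viewsPerLearner: distinctLearners === 0 ? 0 : hits.length / distinctLearners,
  };
}
```

Many total views spread across few distinct learners reads less like popularity and more like a confusion signal worth investigating.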

A Moving Target

Measuring learning impact will evolve as the needs of your organization and learners change. Keep a constant pulse on what you’re measuring, how you’re using that insight and what you still need but lack today. Do this and you’ll hit a bullseye even when the target moves.