In his 1982 book “Megatrends,” author John Naisbitt proclaimed, “We are drowning in information but starved for knowledge.” Naisbitt’s concern from the early days of the Information Age has become so commonplace that it is almost an expectation. For decades, L&D departments had to expend great effort to gather even the most basic sets of data; now, they are awash in learner data. But as Naisbitt noted in the early 1980s, having access to data does not guarantee that we will be able to gain insight from it.

Like most industries, L&D now has instant access to descriptive metrics (completions, content accesses, raw scores, etc.) that tell us about our activity. But descriptive metrics don’t inform us of the value of those activities. If we are going to use analytics to improve our performance, we need to know which data points are valuable – as both positive and negative examples. These outliers then become the building blocks of our strategies; we model the behaviors of the positive outliers, and we avoid or remediate the behaviors of the negative ones. But we can only identify these building blocks by looking at the data more broadly and making sound comparisons.

Context is King: The Power of Comparative Analytics

Comparative analytics is the process of gaining insight by comparing two or more sets of similar data. For example, at the end of a classroom session, we are left with a set of descriptive metrics for each individual in the class: an LMS completion record, a post-course assessment and possibly a course reaction survey. These are individual data points, and viewed on their own for a single learner, they provide little insight. If our goal is to use data to diagnose, prescribe or predict behavior, we have to understand the data and the context in which it exists.

If a friend wins $50 in a raffle, it may seem like a cause for celebration. But if the friend spent $100 on raffle tickets, the victory becomes less impressive. If the friend spent $1,000 on raffle tickets, the $50 prize is a tragedy. The winning $50 ticket is the same data point in each scenario, but the context determines its impact and value.

For our post-class descriptive data to have valid meaning, they require relevant context – similar data points that tell us the true value of each measurement. Easy comparisons can be made simply by setting the class’s data points against one another. By comparing all the raw scores on the post-course assessments, we know which learners scored lowest, and we can target them for remediation, while the learners who scored highest can be targeted for opportunities such as becoming mentors.
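
To make this concrete, here is a minimal sketch in Python rather than a spreadsheet; the learner names and scores are invented for illustration. Ranking the class’s scores immediately surfaces the candidates at either end.

```python
# A minimal sketch with made-up learners and post-course assessment scores.
scores = {
    "Avery": 72, "Blake": 91, "Casey": 84, "Devon": 68,
    "Emery": 95, "Finley": 79, "Harper": 88, "Jordan": 83,
}

# Rank the class from lowest to highest score so the extremes stand out.
ranked = sorted(scores.items(), key=lambda item: item[1])

print("Lowest scorers (remediation candidates):", ranked[:2])
print("Highest scorers (mentoring candidates): ", ranked[-2:])
```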

Descriptive Metrics vs. Comparative Metrics

Without the benefit of relevant, similar data points, we don’t know the actual value of a data point like an 84 percent score on the post-course assessment. If we look at it as an individual metric, we’re forced to provide our own context, which may or may not be helpful (or accurate). We might convince ourselves that 84 percent is good, because it would be a B grade in most schools, or because it’s fairly high within the scale of 0 to 100 percent. But what if every other learner scored above 90 percent? In that context, the learner is a poor performer.
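
As a rough illustration (the class scores below are invented), a simple percentile rank shows how the same 84 percent reads very differently depending on the class it sits in:

```python
# A minimal sketch: the same 84 percent score placed in two hypothetical classes.
def percentile_rank(score, class_scores):
    """Percentage of the class scoring at or below this learner's score."""
    return 100 * sum(s <= score for s in class_scores) / len(class_scores)

class_a = [70, 75, 78, 80, 84, 86]   # most of the class scored lower
class_b = [84, 90, 92, 94, 96, 98]   # everyone else scored above 90

print(percentile_rank(84, class_a))  # ~83: a strong result in this class
print(percentile_rank(84, class_b))  # ~17: the weakest result in this class
```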

Looking Beyond the Obvious

While the idea of comparative analytics may sound like a logical process, it’s frequently overlooked. Practitioners instead provide context through arbitrary “passing” scores and unrelated metrics. These types of thresholds exist because they provide incentive to take action: people with passing scores return to work, people with failing scores take the training again, etc. By using comparative analytics, we can make more informed decisions. Within each class, for instance, there is a distribution of scores: a few high scorers, a few low scorers and a large bunch in the middle. (Think of the old bell curves from school.) While descriptive analytics would only let us consider an individual’s score, or the average score of everyone, comparative analytics – using a tool as simple as a spreadsheet – lets us identify legitimate statistical outliers whose needs we can address with the appropriate amount of remediation.
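
One way to find those outliers, mirroring spreadsheet functions like AVERAGE and STDEV, is to compute how far each score sits from the class mean. The scores and the two-standard-deviation threshold below are illustrative assumptions, not a fixed rule:

```python
# A minimal sketch: flag scores more than two standard deviations from the class mean.
from statistics import mean, stdev

scores = [88, 85, 90, 84, 87, 62, 86, 89, 91, 99, 88, 85]  # invented class data

avg = mean(scores)
sd = stdev(scores)

for score in scores:
    z = (score - avg) / sd  # distance from the class mean, in standard deviations
    if abs(z) > 2:          # threshold is a judgment call, not a fixed rule
        need = "remediation candidate" if z < 0 else "mentoring candidate"
        print(f"score {score}: z = {z:+.2f} ({need})")
```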

More importantly, we can apply those data to our entire operation. Instead of measuring the data within a single class, we can compare multiple classes. We can compare entire curricula to identify which classes are performing better than others and which are performing worse. Spreadsheet functions can even identify the outliers that are statistically significant. By simply comparing similar data, we can move from guesswork to statistical significance, with little effort and a minimal toolset. And this shift isn’t limited to completion records or test scores. If the data sets are similar, we can apply this approach at every level of training analysis, from simple completion records all the way up to impact analyses and ROI.
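
The same simple technique scales up a level. Here is a sketch that compares hypothetical class averages against one another, rather than judging any one class in isolation; the class names, figures and threshold are assumptions for illustration.

```python
# A minimal sketch comparing classes to one another (all figures are invented).
from statistics import mean, stdev

class_averages = {
    "Onboarding A": 88.2,
    "Onboarding B": 86.9,
    "Safety 101":   71.4,
    "Sales Skills": 90.1,
    "Compliance":   87.5,
}

overall = mean(class_averages.values())
spread = stdev(class_averages.values())

for name, avg in class_averages.items():
    z = (avg - overall) / spread
    if abs(z) > 1.5:  # flag classes that sit well outside the pack
        print(f"{name}: average {avg} (z = {z:+.2f}) -- worth a closer look")
```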

At a time when L&D departments are looking to identify their performance impact more specifically – and to take informed actions to guarantee that impact – adopting a comparative mindset will yield immediate, valuable insight.
