The phrase popularized by Mark Twain, “There are three kinds of lies: lies, damned lies, and statistics,” may seem like an odd way to lead into something that supports the use of metrics, but it highlights an important point: just because there’s a bar graph, trend line, or something else having to do with numbers doesn’t mean a set of metrics is infallible. That’s not to suggest that metrics are a bad idea and that emails mentioning them can be happily deleted from your inbox, but they do have pitfalls that consumers of these statistics need to be aware of. The problems, however, are seldom with the actual numbers; they lie with the interpretation and application of the metrics.

How can we avoid making missteps with metrics? Here are a few suggestions:

Choose Wisely

This is perhaps the most critical decision point. The appropriateness of a metric, in terms of how specific it is, how often data can be collected, and how it relates to training objectives, is vital to obtaining an accurate indicator of a training program’s effectiveness. As they say, garbage in, garbage out. If a company starts with poorly thought-out, half-baked metrics, it will have a difficult time proving that training had the desired effect.

Accept the Bloody Nose

Unfortunately, metrics don’t care about our feelings at all, and they will throw a punch of bad news without warning. The worst thing one can do is dodge this punch for fear of what might happen when it connects. When metrics have been chosen carefully, downward trends shouldn’t be ignored or swept under the rug in favor of saving face politically or keeping a pet project afloat.

See the Picture Instead of the Frame

How metrics are presented can have a meaningful effect on what they lead us to think. Intentionally choosing one portrayal of the data over another to steer that thinking can be patently unethical, but even with honest intentions we still face a choice of how to represent the data. Let’s say we have data showing a learning outcome on a 100-point scale across multiple departments, with aggregate scores for each department ranging from about 45 to 70. From that information alone, one could conclude that there is some variation between departments, but we can’t say for sure how meaningful it really is. With the exact same numbers, we may be presented with a representation like this, making it look like the departments are all over the place:

[Bar graph]

Or it could look like this, where the differences between departments are visually minimized:

[Pie chart]

Hopefully the issue here is pretty obvious: the choice of how to display data can lead us to see more or less variation between data points than is really there. The figures themselves aren’t incorrect, but without an understanding of what the underlying data represents, a misguided interpretation can easily lead to misguided action (or lack thereof).
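To make this concrete, here is a minimal sketch in Python using matplotlib, with hypothetical department names and scores standing in for the figures above. The same four numbers drive both panels; only the presentation changes:

```python
# A sketch of how the same numbers can look dramatic or flat
# depending on the chart. Department names and scores are hypothetical.
import matplotlib.pyplot as plt

departments = ["Sales", "Support", "Engineering", "HR"]
scores = [45, 58, 63, 70]  # aggregate scores on a 100-point scale

fig, (ax_bar, ax_pie) = plt.subplots(1, 2, figsize=(10, 4))

# Bar chart with a truncated y-axis: starting the axis at 40
# instead of 0 exaggerates the gaps between departments.
ax_bar.bar(departments, scores)
ax_bar.set_ylim(40, 75)
ax_bar.set_ylabel("Learning outcome (100-point scale)")
ax_bar.set_title("Truncated axis: differences look dramatic")

# Pie chart of the same scores: each slice is a share of the total,
# so a 45 and a 70 end up looking nearly identical.
ax_pie.pie(scores, labels=departments, autopct="%1.0f%%")
ax_pie.set_title("Pie chart: differences nearly vanish")

plt.tight_layout()
plt.show()
```

Plotting the bar chart on a full 0-to-100 axis is usually the safer default here, since the visual scale then matches the instrument the scores came from.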

Caveat Emptor

Beware the allure of relying too much on a single primary metric, a good example being training ROI. You might think, “But return on investment is such a common concept, and it’s a significant yardstick of impact.” And that’s absolutely true: the ratio of net benefits to costs can give any program a hug of encouragement that it’s successfully operating in the black. However, looking only at the flow of dollars in and out of a program misses all the important bits that occur in between.
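For reference, here is a minimal sketch of that calculation, with entirely hypothetical benefit and cost figures:

```python
# The standard training ROI formula: net benefit divided by cost,
# expressed as a percentage. All figures below are hypothetical.
def training_roi(monetary_benefits: float, program_costs: float) -> float:
    """Return ROI as a percentage of program costs."""
    return (monetary_benefits - program_costs) / program_costs * 100

# Example: $150,000 in estimated benefits against $100,000 in costs.
print(f"ROI: {training_roi(150_000, 100_000):.0f}%")  # ROI: 50%
```

A single number like this tells us the program is in the black, but nothing about which parts of the training drove the benefit or whether the result will hold.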

In short, endeavor to be a conscientious consumer of training metrics, but also be aware that the real power of metrics comes from what they can tell us when considered over time. Doing so requires holding things constant as much as possible and resisting the urge to immediately revamp some or all of a training program following a single batch of subpar numbers. To quote Lao Tzu, “Do you have the patience to wait until your mud settles and the water is clear?”
