To manage performance, management needs a model that predicts outcomes from drivers, a measurement system, and targets. Management math is involved in monitoring progress and, when necessary, in making mid-course corrections before a financial verdict is pronounced at the end of an accounting period. There is a problem, though: the prestige and precision of math can obscure lapses in logic that might mar the performance management effort.
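The loop of model, measurement, and target can be sketched in a few lines of code. This is a minimal illustration, not a real model: the drivers, weights, and target below are entirely hypothetical.

```python
# A minimal sketch of a driver-based performance model.
# Drivers, weights, and the target are made up purely for illustration.

def predict_outcome(drivers, weights):
    """Weighted-sum model: an outcome predicted from measurable drivers."""
    return sum(weights[name] * value for name, value in drivers.items())

# Hypothetical drivers of, say, quarterly revenue (arbitrary units).
weights = {"ad_spend": 3.0, "sales_headcount": 5.0, "repeat_rate": 40.0}
drivers = {"ad_spend": 10.0, "sales_headcount": 8.0, "repeat_rate": 0.6}

predicted = predict_outcome(drivers, weights)  # 3*10 + 5*8 + 40*0.6 = 94.0
target = 100.0

# The management-math step: compare prediction to target mid-course,
# leaving time to correct before the accounting period closes.
gap = target - predicted
print(f"predicted={predicted}, target={target}, gap={gap}")
```

Everything interesting about performance management lives in how well those weights and drivers reflect reality, which is what the rest of this piece is about.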
All models are wrong, though some are more useful than others. A model is an abstraction of reality, an approximation of the real thing. In physics, one can start with a few initial assumptions, apply the laws of deduction to them, and wind up with valid and useful predictions that tell us a great deal about how the world works. In business, there are no laws – only models. Social mores and consumer tastes can blow your model right out of the water, evolving along trajectories that are simply too unpredictable for any one equation to describe. Think Facebook and Google Plus – why one platform dazzles while another disappoints is a question that yields more questions than answers.
The Measurement System
Are you measuring the parameter that actually matches your cause-and-effect strategy narrative? A reliable measurement system produces consistent results – results that may nonetheless be invalid. We sometimes get causality wrong, basing it on visible attributes associated with outcomes rather than on the mechanisms that produce them. The association between feathered wings and aviation is easy to see, but it takes far more than strapping feathered wings on our arms to fly, no matter how hard we flap.
When targets become an object of attention, they tend to evolve into objectives in and of themselves, and many of us feel tempted to manipulate processes just to make the target. Once a measure becomes a target, it ceases to be a good measure – the pattern known as Goodhart's law. A Soviet nail factory whose output target was based on the weight of the nails it produced achieved its target by producing a single giant nail. When consumers use calorie counts as a proxy for the healthfulness of food, some manufacturers may cut calories by substituting artificial sweeteners whose own healthfulness is questionable.
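The nail-factory story is easy to simulate. The sketch below, with hypothetical weights and thresholds, shows a proxy measure (total output weight) that both an honest output and a gamed output satisfy, while only the honest output survives a check of what the customer actually values.

```python
# Toy illustration of Goodhart's law via the nail-factory story.
# All quantities (nail weights in grams, thresholds) are hypothetical.

def meets_weight_target(nails_g, target_g):
    """The official measure: total weight of output, in grams."""
    return sum(nails_g) >= target_g

def meets_utility_test(nails_g, max_usable_g=50):
    """What the customer cares about: nails light enough to be usable."""
    return all(w <= max_usable_g for w in nails_g)

honest_output = [10] * 10_000   # ten thousand 10 g nails -> 100 kg total
gamed_output = [100_000]        # one 100 kg nail

for label, output in (("honest", honest_output), ("gamed", gamed_output)):
    print(label,
          meets_weight_target(output, target_g=100_000),
          meets_utility_test(output))
# Both outputs hit the weight target; only the honest one passes the utility test.
```

The proxy cannot tell the two outputs apart; only a measure anchored in customer utility can.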
Performance data can be obfuscated and abused, leading us to infer illusory insights and make faulty decisions that destroy value.
Fixing the problem requires learning from such lapses. Three lessons come in handy:
- Keep the strategy narrative as simple as possible. Every strategy must have four Ps: people, processes, promise and profit. Models can be useful up to a point, but they must constantly be reviewed and revamped, challenged well before they reach the breaking point.
- Validate the links. Run experiments. Cause-and-effect relationships mirror our understanding of how things work and evolve, but causality in the real world often cannot be shoehorned into a single equation. A multitude of drivers, each carrying a different weight, can produce an outcome under certain circumstances. We can sometimes trust our gut instinct, but we must always verify our hypotheses, especially when the downside risk is significant. Lots of small, successive experiments can help us verify our logic, maximizing the rate of failure while minimizing its cost.
- Set the right targets. Targets must be meaningful to your customer. They must reflect your commitment to both efficiency and effectiveness. A single giant nail meets the weight criterion but fails the utility test. A company that used to paint cars for the automakers in Detroit dropped its dollars-per-gallon-of-paint target in favor of a more customer-centric dollars-per-painted-car target.
Recognizing lapses in our logic is a necessary first step to learning important lessons about managing performance and closing the gap between what a strategy promises and what it delivers. Performance management requires balancing numeracy and nuance and will add value to the extent that it fuels a search for meaning beyond the math of management metrics.