Reports play an interesting role in the psychology of management. On one hand, they provide certainty and assurance that, indeed, someone is “minding the store.” They indicate that the organization is identifying performance indicators, assigning metrics and gathering data. On the other hand, far too frequently, we use them only to convey information – not to drive action. We deliver reports that describe what we’ve done but provide no direction for follow-up action; we require no oversight to ensure that action takes place and perform no follow-up analysis on the effects of our changes. Far too often, we set a learning initiative in motion and report on its metrics without any intention of acting on the findings.
This process of non-accountable, non-actionable reporting works when we implement a learning initiative without any problems and it then consistently achieves all of its expected business results. Roughly translated, that statement means, “This process works very rarely.” Non-accountable, non-actionable reporting is like trying to save one million dollars by checking your bank balance every day without addressing changes in your spending habits, acknowledging unforeseen expenses or considering alternate investment strategies.
Our primary responsibility when developing reports is to design them so that they incite and prescribe appropriate actions whenever necessary.
Assign Owners and Accountabilities
The biggest pitfall of reporting structures is attempting to consolidate all metrics into a single report or dashboard for the entire audience. This “one-size-fits-all” approach exists only to communicate information. It puts the responsibility of contextualization on each reviewer’s shoulders without assigning accountability to anyone.
Before launching any reporting effort, you should identify who will receive the reports and – more importantly – what they will be expected to do with the findings. This process may take some change management on your part; far too many organizations have simply created reports and opened access to anyone interested, in the wistful hope that someone – anyone – will take appropriate action. By identifying the recipients of each report and their responsibilities, you will greatly improve the likelihood of a successful measurement effort.
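One lightweight way to make this ownership explicit is to keep a simple registry that pairs each metric with an accountable owner and the action that owner is expected to take. The sketch below is purely illustrative – the metric names, owners and expected actions are hypothetical placeholders, not a prescribed schema:

```python
# A minimal, hypothetical ownership registry: each metric is paired with an
# accountable owner and the action that owner is expected to take.
REPORT_OWNERS = {
    "completion_rate": {
        "owner": "Program Manager",
        "expected_action": "Follow up with managers of non-completers",
    },
    "assessment_score": {
        "owner": "Instructional Designer",
        "expected_action": "Review the assessment items most learners missed",
    },
}

def owner_of(metric: str) -> str:
    """Return the accountable owner for a metric, or flag the accountability gap."""
    entry = REPORT_OWNERS.get(metric)
    return entry["owner"] if entry else "UNASSIGNED – no one is accountable"

print(owner_of("completion_rate"))       # Program Manager
print(owner_of("time_to_proficiency"))   # UNASSIGNED – no one is accountable
```

The value of even a toy registry like this is that an unassigned metric becomes visible immediately, rather than surfacing months later as a report no one acted on.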
Precision, Concision and Clarity
It’s not enough to define your audience and their responsibilities if they are unable to locate or understand the metrics within the report. Reports should contain only information relevant to the recipient. Instead of providing one comprehensive report, divide the information into more precise individual reports, each focused on its intended recipient and his or her responsibilities. Creating individual reports eliminates any confusion over roles and responsibilities, and it conveys the important information without overloading the recipient with extraneous detail.
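As a rough sketch of what that separation might look like, the snippet below splits one comprehensive metric set into per-recipient reports. The recipients, metric names and values are hypothetical; the point is simply that each person receives only the metrics tied to his or her responsibilities:

```python
from collections import defaultdict

# One comprehensive metric set (hypothetical values), each metric tagged
# with the recipient responsible for acting on it.
ALL_METRICS = [
    {"name": "completion_rate", "value": 0.60, "owner": "Program Manager"},
    {"name": "assessment_score", "value": 0.91, "owner": "Instructional Designer"},
    {"name": "seat_utilization", "value": 0.78, "owner": "Program Manager"},
]

def split_reports(metrics):
    """Group metrics by owner so each recipient sees only what he or she owns."""
    reports = defaultdict(list)
    for metric in metrics:
        reports[metric["owner"]].append(metric)
    return dict(reports)

for owner, metrics in split_reports(ALL_METRICS).items():
    lines = ", ".join(f"{m['name']}: {m['value']:.0%}" for m in metrics)
    print(f"Report for {owner} – {lines}")
```

Each recipient’s report now answers only for the metrics he or she owns, which removes the contextualization burden that the one-size-fits-all report imposes.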
To the same end, make sure to write each report clearly and concisely. Poorly organized information creates confusion and distances readers from the key purpose of the report: to incite and prescribe appropriate actions.
Here are some simple best practices for creating concise, clear reports:
- Make sure the report is error-free.
- Make the purpose of the report clear.
- Only include relevant information.
- Keep visual design to a minimum (e.g., no unnecessary shadows, colors or images).
- Make the language in the report reader-friendly and tailored to the audience. The report should already “know” what the reader knows, so it does not waste his or her time.
- Avoid overly technical, punishing or condescending language and jargon.
Tell Them Where They Are and Where They Should Be
For a clear, concise report to be actionable, the content needs to answer three questions for the reader:
- How is my metric performing?
These metrics should appear on most reports. They are the common metrics you report on regularly, the ones that tell what has happened (e.g., “Sixty percent of learners have completed training,” “The average score on the final assessment was 91 percent,” etc.).
- How is my metric performing in relation to relevant others?
It’s one thing to know how you are performing, but an 85 percent success rate is only meaningful in context. An 85 percent success rate compared to a typical success rate of 55 percent is a success, but compared to 96 percent, it is not. An actionable report provides clear context for its key metrics, so readers immediately know whether they need to improve their performance or continue doing what they are doing.
Remember to provide that context against relevant others. Comparisons to previous learners in the same initiative, pilot groups or industry benchmarks can provide relevant context around expectations. Comparisons to different learning interventions, guesses or academic standards (e.g., “an ‘A’ is 90 percent”) generally do not provide relevant context and are unlikely to offer useful guidance.
- What should I do about it?
While contextualized, precise metrics should be enough for a reader to identify the necessary action, it can be helpful to provide recommendations. When you define the purpose of each report, you have the opportunity to learn about the possible actions a recipient could take to improve his or her metrics. Including recommendations (in plain language) can help ensure the report findings lead to action; the sketch following this list shows one way to combine all three answers.
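To illustrate how these three questions might translate into a single automated report line, here is a brief sketch that compares a metric to a benchmark drawn from relevant others and attaches a plain-language recommendation. The benchmark values and recommendation text are hypothetical assumptions for the example:

```python
def report_line(metric_name, value, benchmark, recommendation):
    """Answer the three questions for one metric: how it is performing,
    how it compares to relevant others, and what the reader should do."""
    gap = value - benchmark
    if gap >= 0:
        context = f"{gap:.0%} above the benchmark of {benchmark:.0%}"
        action = "Continue doing what you are doing."
    else:
        context = f"{-gap:.0%} below the benchmark of {benchmark:.0%}"
        action = recommendation
    return f"{metric_name}: {value:.0%} ({context}). Recommended action: {action}"

# The same 85 percent success rate reads very differently in different contexts.
print(report_line("Success rate", 0.85, 0.55, "Review the intervention design."))
print(report_line("Success rate", 0.85, 0.96, "Review the intervention design."))
```

The first line tells its reader to stay the course; the second prescribes a specific follow-up – the same metric, made actionable purely by context and a recommendation.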
By clearly answering these three questions for a specific metric, you can change reports from broad-but-inert pieces of information into impactful drivers of performance improvement.