Are your learning and development (L&D) solutions successful? How do you know?
The ultimate purpose of L&D is to support business goals. To that end, it is critical for training organizations to be able to articulate whether an L&D solution is worth the cost in dollars, time, resourcing and effort. Business leaders and stakeholders require this information to inform strategic budget planning and identify impact on organizational goals.
In practice, however, it can be a huge undertaking to evaluate and report on L&D solutions regularly, which is probably why it’s so infrequently done well. In one study by McKinsey, only 8% of the companies surveyed track a program’s return on investment (ROI). There are a number of other ways to measure L&D efficacy beyond a dollar ROI, but without comprehensive evaluation, companies and training organizations will find themselves making decisions without understanding the impact (or lack thereof) of current programming. As a result, an evaluation toolkit is invaluable for enabling data-driven L&D decisions within the department and across the organization.
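The ROI figure referenced above is straightforward to compute once program costs and estimated monetary benefits are in hand: net benefit divided by cost, expressed as a percentage. A minimal sketch in Python, using hypothetical figures:

```python
def training_roi(benefit: float, cost: float) -> float:
    """Return program ROI as a percentage: net benefit over cost."""
    return (benefit - cost) / cost * 100

# Hypothetical figures: $120,000 in estimated benefit from an $80,000 program.
print(f"Program ROI: {training_roi(120_000, 80_000):.0f}%")  # -> Program ROI: 50%
```

The hard part, of course, is not the arithmetic but agreeing with business stakeholders on how to estimate the benefit figure.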
Why Create a Toolkit?
Creating a toolkit to guide evaluation efforts has several advantages:
A toolkit supports a process-oriented L&D department, a characteristic of high-performing training organizations.
With templates and developed resources, there’s no time lost to reinventing the wheel when evaluating new solutions.
Using the same methods and metrics for evaluation enables organizations to compare apples to apples across training programs and over time.
An effective toolkit offers organizations the flexibility to meet specific program evaluation needs while upholding core consistencies.
What Is Included?
Each toolkit should include a comprehensive and flexible foundation for evaluation and reporting efforts. These needs will differ for each company, but here are some broad areas to consider in the development process:
Standard Operating Procedures
Clarify and articulate the evaluation and reporting process the L&D team has committed to. This process should align with any company-wide processes and/or reporting cadences. Other questions to ask while establishing a process include:
- What kind of data do we collect, when do we collect it, and for which types of L&D programs or solutions do we collect it?
- Which audiences receive evaluation impact reports or dashboards, for which solutions, and at what cadence?
- On the L&D team, who is responsible for which components of evaluation and reporting?
- What is the L&D team’s retrospective process for reviewing training solutions and impact reports and updating the L&D strategy?
Surveys and Assessments
While the content of L&D programs varies, the impacts to consider will often remain the same. The Kirkpatrick Model is a good guide for developing these resources to standardize evaluation across programs:
- Level 1, reaction: the degree to which learners found the training engaging and relevant to their current role.
- Level 2, learning: the degree to which learners acquired the intended knowledge, skills and abilities based on their completion of the training.
- Level 3, behavior: the degree to which learners apply the lessons they learned during training to their work.
- Level 4, results: the degree to which the training impacts business goals.
L&D teams can often assess level 1 using similar surveys across all training programs, while levels 2 and 3 will need individualization based on content. Notably, L&D teams may not directly measure level 4 metrics (e.g., retention or performance targets), but, in partnership with business leaders, they can identify them and determine whether they were influenced by L&D solutions.
Addressing all four levels may require the creation of pre-training knowledge assessments, post-training satisfaction surveys, post-training knowledge assessments, on-the-job performance evaluations and other materials.
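For level 2 in particular, a common metric is the average gain from pre-training to post-training knowledge assessments. A minimal illustration, assuming hypothetical scores on a 100-point scale:

```python
def average_learning_gain(pre_scores, post_scores):
    """Mean per-learner score change from pre- to post-assessment."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

# Hypothetical assessment scores for three learners.
pre = [55, 60, 70]
post = [80, 85, 75]
print(f"Average gain: {average_learning_gain(pre, post):.1f} points")
```

Tracking the same gain metric across programs supports the apples-to-apples comparisons described earlier.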
Impact Reports and Dashboards
Evaluation is futile unless the results are communicated and considered in future decision-making. Impact reports and dashboards are effective ways for L&D teams to distribute this information to their stakeholders.
Impact reports are typically formal, contain a comprehensive evaluation and include descriptive text. They are most useful as an annual or biannual report on L&D solutions, their impact and performance against annual goals.
Dashboards clearly and concisely display business and strategic drivers to an executive audience. L&D teams can choose from a plethora of tools to create them, including slide decks, business intelligence tools, single-page HTML or a learning management system (LMS). They can also segment dashboards by audience, such as by business unit or seniority level. Regardless of tool or audience, it is critical that the dashboard include metrics that are of interest to the people receiving it. To that end, it’s best practice to solicit feedback from each audience on the design of its dashboard before distributing it on a regular basis.
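For teams assembling dashboards by hand, before investing in a business intelligence tool, segmenting metrics by audience can be as simple as grouping evaluation records. A hypothetical sketch, assuming survey responses are stored as rows with a business-unit field and a satisfaction score:

```python
from collections import defaultdict

# Hypothetical evaluation records: one row per learner response.
records = [
    {"business_unit": "Sales",   "program": "Onboarding", "satisfaction": 4.5},
    {"business_unit": "Sales",   "program": "Onboarding", "satisfaction": 4.0},
    {"business_unit": "Support", "program": "Onboarding", "satisfaction": 3.5},
]

def segment_averages(rows, segment_key, metric):
    """Average a metric per audience segment (e.g., per business unit)."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[row[segment_key]].append(row[metric])
    return {segment: sum(vals) / len(vals) for segment, vals in grouped.items()}

print(segment_averages(records, "business_unit", "satisfaction"))
```

The same grouping logic works for any segment (seniority level, region) and any metric the toolkit standardizes on.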
But why stop at impact reports and dashboards? Many successful L&D leaders go a step further by creating email or slide templates for recurring communications. The stronger the infrastructure in place to support evaluation and reporting, the more robust and reliable the process.
The process does not end once an L&D team has created a toolkit. First and foremost, they must be sure to share it. In a decentralized training organization, the consistency offered by this approach is key to maintaining evaluation alignment across training functions for various lines of business.
Once L&D leaders have shared the toolkit, they can start leveraging those valuable resources to celebrate L&D evaluation and report wins along the way. They can also act on the data: future L&D strategies will have it as an input, but if mid-year reports show that a program is not creating the desired impact, learning leaders can investigate further and, perhaps, make adjustments.
Lastly, it’s important to revisit and revamp the toolkit as needed. An annual review of these resources is sufficient to ensure that the portfolio of evaluation and reporting materials is up to date and poised to provide valuable data to drive L&D decisions and, ultimately, organizational success.