Training professionals must be able to demonstrate training’s value to the organization. How does training impact business results? What is the return on expectations established by our clients? Whether we are an external consulting service that specializes in training development or a department that serves an organization’s internal learning needs, the correlation between learning results and business results must be clear and evident to our business leaders. We can’t assume that our executives see those connections. Worse, we run the risk of being seen as superfluous if those connections are not visible and emphasized. If that is the case, the next stop for us may be the unemployment line.

As training and development professionals, we are on a quest to prove that the learning we deliver meets the immediate needs identified by the project sponsor and aligns with the operational and strategic objectives of our business. To assume that these correlations are apparent or to neglect making this alignment explicit is to undersell our value as critical assets in advancing our organizations’ missions and values. But how do we go about tying the immediate results of learning to sustained on-the-job performance and business impact?

Most practitioners measure learners’ reactions to training, and many organizations strive to measure training’s impact on on-the-job behaviors and business results. However, we struggle to do this because of time pressures, limited access to performance-related data, an absence of clearly recorded metrics, or any number of other obstacles that stand in the way of finding measurable results. I am sure that my team, Atrium Health’s IAS Education team, is not alone in this situation.

What We Did

Our team’s desire to align our work with Atrium Health’s mission, vision and strategic priorities led us to consider what such an evaluation program would look like. Where do we start? What tools might we use? And where do we get the metrics that can show how learner performance impacts business results? With these questions as catalysts, my team set out to develop an evaluation program that could provide clear correlations between the learning that we foster and the impact of learner performance on key strategic priorities for Atrium Health.

Our first step was to research evaluation and find models that could help us conceptualize our own program. Donald Kirkpatrick, the father of training evaluation, provided a framework for us to consider with the Four Levels of Evaluation. In the recent update of his original model, his son James Kirkpatrick and James’ wife, Wendy Kayser Kirkpatrick, provide meaningful suggestions on how to link our learning goals to broader business outcomes using Kirkpatrick’s traditional approach: Reaction, Learning, Behavior, Results.

As I read the first few chapters of the updated book, I could see that it was something our whole team should read. We structured our study as a “book club,” in which team members volunteered to lead the discussion of specific sections and provide prompts. This sort of “do-it-yourself” professional learning served as the foundation of our evaluation program development, helping us to see and fill in gaps in ongoing learner support, such as tools that managers could use to monitor, reinforce, encourage and reward skill retention.

Our local ATD chapter also hosted a one-day workshop with Jim Kirkpatrick. Three of our team members attended and came away with a map of criteria-based outcomes that could link our training to the organization’s highest-level result, the ongoing achievement of Atrium Health’s mission: To improve health, elevate hope, and advance healing – for all. As part of this exercise, we constructed a “flags up the mountain” illustration to map the leading indicators that would usher us toward that highest-level result at the top of the mountain.

With our study and our “map to the mission,” our team then considered how we could build on our current evaluation strategy. As part of our training, we were capturing Level 1: Reaction data from our learners, in both our instructor-led courses and our virtual and online offerings. This data proved critical in our efforts to make training more flexible and individualized for learners. Additionally, we were evaluating learners’ skill proficiency through performance assessments administered at the end of courses. At first these were observation-based scorecards that facilitators used to monitor specific tasks completed by learners. More recently, we have partnered with some of our internal applications specialists to create auto-graded assessments that record those same tasks that were initially observed by facilitators. This change has yielded more time for facilitators to work with individuals who may need additional practice to master specific skills. It also introduces a greater measure of objectivity in the assessment, as observations can vary depending on facilitators or situational factors.
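To illustrate what the shift from observation scorecards to auto-graded assessments can look like, here is a minimal sketch. The task names, the logging mechanism and the scoring rule are all hypothetical, not drawn from our actual assessments; the idea is simply that the training environment records which steps a learner completes and flags anyone who misses a required task for facilitator follow-up.

```python
# Minimal sketch of an auto-graded performance assessment.
# Task names and the scoring rule are hypothetical; in practice the
# training application records which steps a learner completed.

REQUIRED_TASKS = {
    "open_patient_chart",
    "enter_vital_signs",
    "document_medication",
    "sign_encounter_note",
}

def grade_assessment(completed_tasks: set[str]) -> dict:
    """Score a learner against the required task list."""
    missed = REQUIRED_TASKS - completed_tasks
    score = (len(REQUIRED_TASKS) - len(missed)) / len(REQUIRED_TASKS)
    return {
        "score_pct": round(score * 100),
        "missed_tasks": sorted(missed),   # flagged for facilitator follow-up
        "needs_practice": bool(missed),
    }

# Example: a learner who skipped one step is flagged for extra practice.
print(grade_assessment({"open_patient_chart", "enter_vital_signs",
                        "sign_encounter_note"}))
```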

Until recently, we struggled to collect data on the sustained behavior change demonstrating skill improvement in our learners and on the resulting business impact. We work primarily with newly hired employees, and with existing employees only when application upgrades occur. We had not established a system for conducting follow-up work with our learners, and once they completed their new hire applications course, they might not return for additional training. Therefore, we needed to establish methods to collect performance data that are minimally intrusive on our learners’ daily work. Doing so would provide us with information on sustained, applied behavior on the job (Level 3) and give us a way to begin linking that performance to Atrium Health’s broader mission and values, particularly if we could collect metric data, such as time and money, to correlate with improved performance.

Where We Are

Given our starting point, our team has made significant headway in developing a comprehensive training evaluation program. Here is a glimpse, per Kirkpatrick’s Four Levels, of where we are.

Level 1

We continue to collect and use feedback to make enhancements to our learning environments, such as providing dual monitors to view training and practice in the live training application environment and moving courses to a blended model in which parts of the training are completed asynchronously off-site prior to our instructor-led skills labs. As such changes are made, we will continue to check the pulse of our learners to see how we can shape the experience in ways that impact skill competency, confidence and commitment to apply skills learned.

Level 2

We continue to automate more of our performance assessments so we can leverage facilitators as guides for learners who need additional practice to reinforce the skills taught. In addition, data collected from the performance assessments is used to inform training updates so that we can offer additional resources, such as practice exercises, that help learners become proficient. Tracking patterns in the learning results also enables us to proactively target areas of the electronic medical record (EMR) that are inherently difficult, as sketched below. These target areas influence the ongoing support, such as job aids or toolkits, that we offer to new hires as well as existing employees.
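Here is one way that pattern tracking might work. The assessment records and the failure-rate threshold in this sketch are hypothetical; the point is that aggregating pass/fail results by EMR area surfaces the inherently difficult spots that warrant a job aid or toolkit.

```python
from collections import Counter

# Hypothetical assessment records: (learner_id, emr_area, passed).
records = [
    ("rn01", "medication_orders", False),
    ("rn01", "flowsheets", True),
    ("rn02", "medication_orders", False),
    ("rn02", "flowsheets", True),
    ("rn03", "medication_orders", True),
]

attempts, failures = Counter(), Counter()
for _, area, passed in records:
    attempts[area] += 1
    if not passed:
        failures[area] += 1

# Flag EMR areas whose failure rate suggests they need targeted support.
THRESHOLD = 0.4  # assumed cutoff, not from the article
for area in attempts:
    rate = failures[area] / attempts[area]
    if rate >= THRESHOLD:
        print(f"{area}: {rate:.0%} failure rate -> candidate for a job aid")
```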

Level 3

We have developed a Level 3 evaluation rounding form to observe and rate learner performance three to six months after the initial training. We have on-site assistance from the Canopy Advanced Support Team (CAST) and the Clinical Informatics Coordinators (CICs), who provide physicians and nurses with at-the-elbow support for the EMR. The IAS Education team, specifically the facilitators, is partnering with them to use the rounding form to collect metric data on users’ efficiency (accuracy + speed + quality = efficiency) using a one-to-five Likert scale. The rounding form also asks the teammate to note anecdotal information, such as what a user documented accurately and quickly, or inaccurately and slowly. This qualitative data will help our team make training adjustments and improvements, in the same way that the Level 2 learning data shapes the changes we make to the learning components. At the same time, the IAS Education teammate will work alongside CAST and CICs to recommend performance improvements to the physicians and nurses, thus enabling a continuous learning paradigm.
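As a worked example of the rounding form’s arithmetic, the sketch below sums the three one-to-five Likert ratings into a single efficiency score, which therefore ranges from 3 to 15. The function name and the validation step are illustrative, not part of the form itself.

```python
# The rounding form rates accuracy, speed and quality on a 1-to-5 Likert
# scale and sums them (accuracy + speed + quality = efficiency),
# so composite scores range from 3 to 15.

def efficiency_score(accuracy: int, speed: int, quality: int) -> int:
    for rating in (accuracy, speed, quality):
        if not 1 <= rating <= 5:
            raise ValueError("Likert ratings must be between 1 and 5")
    return accuracy + speed + quality

# Example: an accurate, reasonably fast, high-quality documenter scores 13 of 15.
print(efficiency_score(accuracy=5, speed=3, quality=5))
```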

Level 4

This is the area in which we have the most work to accomplish. We are working to establish partnerships with teammates across the organization who are responsible for collecting data related to our strategic priorities, operational excellence initiatives, and financial health indicators. While we have an organizational scorecard that is updated monthly, we need to understand the measurement tools behind it in order to correlate those measures with the training that we offer. What questions are asked? What numbers are used to arrive at specific calculations? What factors affect a percentage? Once we have a clear understanding of these metric sources, we can build strong correlations between the behaviors our training impacts and the outcomes that are being systemically measured. That alignment will help us show how our work contributes to the improved outcomes at the foundation of Atrium Health’s mission.
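Once we have those metric sources, the statistical step itself can be simple. The sketch below computes a Pearson correlation between unit-level Level 3 efficiency scores and one plausible scorecard measure, average documentation minutes per encounter. Both data series are illustrative placeholders, not real Atrium Health data, and the choice of measure is an assumption.

```python
from statistics import correlation  # Pearson's r; Python 3.10+

# Hypothetical paired observations per unit: average Level 3 efficiency
# score (3-15) and average minutes spent documenting per encounter.
# All values are illustrative only.
efficiency = [11, 12, 9, 14, 10, 13]
doc_minutes = [32, 30, 38, 26, 35, 28]

r = correlation(efficiency, doc_minutes)
print(f"Pearson r = {r:.2f}")
# A strongly negative r would suggest that higher efficiency scores track
# shorter documentation times, the kind of alignment we want to demonstrate.
```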

What Lies Ahead

At Atrium Health, the IAS Education team supports learning as it relates to the EMR. In doing so, we have the potential to impact the quality of care each physician and nurse provides. Our role is to ensure that those physicians and nurses are masters of the most efficient methods used to document patient care. While we will continue to make improvements in our collection of learner reaction (Level 1) and learning (Level 2) data, our biggest opportunities lie in measuring sustained skills application on the job (Level 3) and the impact that efficient performers have on Atrium Health’s measured outcomes (Level 4) in the effort to improve health, elevate hope, and advance healing – for all.

Once we establish strong partnerships with the strategic and operational data owners and apply the work we have done in aligning our leading indicators with those strategic and operational priorities, this initial journey of creating a comprehensive training evaluation plan will have “paved a (metaphorical) road.” This will make it easier to track the ongoing impact our training has on individuals’ use of the EMR and how our training affects the business in our efforts to be the first and best choice for care.
