With continuous pressure on business leaders to drive revenue and productivity, there is an increased focus on proving the value of internal investments. One way that learning and development (L&D) partners can help leaders demonstrate impact is through technology-driven measurement and evaluation of key development programs.

Evaluation is the systematic collection of descriptive and predictive information that can be used to improve the effectiveness of training decisions. For those who are new to learning evaluation, there are several reasons an L&D practitioner may want to evaluate training programs, including measuring the impact of training on knowledge, skills and abilities; examining the cumulative effects of changes made to the training program over time; uncovering areas for improvement in the course curriculum and instructor(s); and demonstrating a return on investment (ROI).

However, the most important thing to remember is that not everything that can be measured should be measured. L&D practitioners should focus on evaluating larger “big bet” programs that require substantial investments of time, money and resources. Here are a few tips to focus a training evaluation and ensure accountability for it:

  • Determine what will be considered a “big bet” business-relevant outcome of the training program (e.g., increase in sales)
  • Outline the knowledge and key behaviors that will drive success for learners (learning objectives)
  • Socialize the expected outcomes with leaders to secure agreement on measuring impact using a technology solution or in-house tools
  • Share the evaluation plan with stakeholders and leaders to create accountability and commit to a post-program debrief on outcomes achieved
  • Gather high-quality data matched to training objectives and conduct descriptive and predictive analyses
  • Present a clear, visual summary to stakeholders on the results, lessons learned and next steps for the program
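
As a minimal sketch of what “descriptive and predictive analyses” can look like on exported evaluation data, the Python below computes an average knowledge gain (descriptive) and a least-squares estimate of sales lift per module completed (predictive). All field names and figures are invented for illustration, not drawn from any particular tool:

```python
from statistics import mean

# Hypothetical evaluation records, one per trainee. Field names and
# values are illustrative only -- substitute your program's measures.
records = [
    {"pre_score": 62, "post_score": 81, "modules_done": 8, "q_sales": 118},
    {"pre_score": 70, "post_score": 78, "modules_done": 5, "q_sales": 104},
    {"pre_score": 55, "post_score": 74, "modules_done": 9, "q_sales": 121},
    {"pre_score": 66, "post_score": 69, "modules_done": 3, "q_sales": 98},
    {"pre_score": 59, "post_score": 83, "modules_done": 10, "q_sales": 125},
]

# Descriptive: how much did knowledge scores move, on average?
avg_gain = mean(r["post_score"] - r["pre_score"] for r in records)

# Predictive: least-squares slope of quarterly sales on modules completed,
# i.e., the estimated sales lift per additional module.
x = [r["modules_done"] for r in records]
y = [r["q_sales"] for r in records]
mx, my = mean(x), mean(y)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)

print(f"Average score gain: {avg_gain:.1f} points")
print(f"Estimated sales lift per module completed: {slope:.2f}")
```

In practice this would run over the technology’s full exported data set, and a credible predictive analysis would also control for factors beyond training, such as tenure, territory or seasonality.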

Effective training evaluation can produce valuable information to improve learning experiences and to align training investments with changes in business strategy. Yet, while many organizational leaders and learning practitioners recognize the value of evaluating the company’s training programs, the question remains: why isn’t it done more often? Sometimes this gap is related to a lack of accountability for such analyses, a lack of awareness of how to effectively measure training, or a lack of resources. Recent advances in learning technologies remove or reduce many of these barriers. Today’s learning technologies provide a host of new opportunities to incorporate a consistent evaluation strategy into the delivery of a learning solution.

Identifying a learning technology to drive measurement

While some organizations have in-house tools to deliver and gather data on a training program, today’s learning technology marketplace includes solutions that support end-to-end content design and delivery of training experiences, personalization and on-demand features that capture learners’ attention within and beyond the classroom, and robust integrated reporting and tracking. These off-the-shelf technologies make evaluation, often the weakest part of the training development cycle, more manageable, accurate and compelling. Below are a few such examples.

Content design and delivery that track usage data:

  • Enable content authoring through easy-to-use solutions
  • Deliver content and an assessment in one place to fuel reporting later
  • Offer on-demand, anytime, anywhere access to learning content
  • Host mobile and social learning
  • Personalize learning content

Reporting and tracking to show key findings:

  • Enable live manager dashboards
  • Track usage of learning materials
  • Log ratings and coaching conversations over time
  • Download trend analyses and summary visuals (heatmaps)
  • Track qualitative and quantitative measurements in one place
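
Even before a live dashboard is configured, the usage data these tools export can be summarized directly. The sketch below tallies per-module completion rates from a simple (learner, module, completed) log; the schema and names are purely illustrative, not any specific vendor’s export format:

```python
from collections import defaultdict

# Hypothetical usage log rows exported from a learning platform:
# (learner, module, completed). Schema and names are invented.
log = [
    ("ana", "Mod 1", True), ("ana", "Mod 2", True), ("ana", "Mod 3", False),
    ("ben", "Mod 1", True), ("ben", "Mod 2", False), ("ben", "Mod 3", False),
    ("cat", "Mod 1", True), ("cat", "Mod 2", True), ("cat", "Mod 3", True),
]

# Tally completions per module: module -> [completed, attempted].
totals = defaultdict(lambda: [0, 0])
for _, module, done in log:
    totals[module][1] += 1
    if done:
        totals[module][0] += 1

# Report a completion rate per module -- the kind of figure a
# manager dashboard or heatmap would visualize.
for module in sorted(totals):
    done, n = totals[module]
    print(f"{module}: {done}/{n} complete ({done / n:.0%})")
```

A falling completion rate on a single module is exactly the kind of trend that flags where course content or pacing needs attention.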

Engagement IN the classroom to show application of concepts in real-time:

  • Enable live polling or quizzes
  • Apply gamification principles in learning
  • Design leaderboards using teams
  • Apply virtual classroom methods

Engagement BEYOND the classroom to reinforce concepts and apply behaviors:

  • Enable automatic reminders
  • Provide engaging reinforcement
  • Design flexible pre- and post-assessments
  • Provide a best-in-class demo for trainees to watch
  • Design video practice moments and coaching observations
  • Deliver custom learning content

Among a crowded range of technology options, L&D practitioners must discern which tools will offer value rather than picking the “cool” or “cutting-edge” solution. When selecting a learning technology, be sure to ask the following questions to understand the tool’s measurement capabilities:

  • How will the learning technology acquire and track data about the training experience (e.g., higher trainee satisfaction, post-training engagement)?
  • What learning objectives will be assessed in the tool and how will you pull the data for analyses (e.g., exported database, visual summary reports)?
  • What types of reporting does the tool offer managers to ensure their teams are successfully learning the content?
  • Does the technology have the ability to integrate with other HR systems or learning management systems to connect and analyze multiple data sets?

Know what is critical to measure and identify which capabilities or areas will drive the right behavior in your organization. From there, invest in a technology solution that can deliver long-term value. Below are some common challenges and best practices for learning practitioners when creating an evaluation plan with new technology solutions.

Challenges and Best Practices 

  1. Overcoming resistance to new tools or approaches. Don’t be afraid to experiment with a platform and its reporting functionality before you fully commit. Be willing to ask the vendor to partner on a pilot program in order to test out the analytic capabilities, gain buy-in and ensure adoption.
  2. Knowing if the learning technology is worth the investment. Outline what success will look like to remove bias from the evaluation of the technologies. Success criteria might include expected levels of usage, specific ratings on the likeability of the tool, and ease of access to and use of reporting capabilities and dashboards.
  3. Ensuring accountability for the training program evaluation. Using the technology’s data and other in-house measurements (e.g., post-training satisfaction surveys), commit to a post-training summary. Socialize the measurement approach so that both the team conducting the data analysis and the stakeholders are accountable.
  4. Ensuring effective reporting and clear visualizations with limited time. With the right initial set-up, supported by the tool’s vendor team, you can produce clear reporting quickly; many tools ship with ready-made graphs and charts, which strengthens stakeholder buy-in and makes results more memorable.
  5. Capturing and summarizing comments or learner sentiment data. Many tools offer the ability to track comments on the learning experience, content, or even assessments, allowing you to adjust training collateral on the go to ensure effective experiences.
  6. Using pulse surveys to collect data over time. Many tools enable pulse data-gathering and collect ongoing metrics such as access rates, time in course and pre-assessment attempts.
  7. Integrating with other HR/people databases. Too often training data sits across multiple systems and tools, but more technologies now offer the ability to integrate with other HR systems so that data can be overlaid and connected when looking for trends over time. Be sure to ask the vendor about the steps required and the time/cost commitment as you begin exploring products.
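
To make the integration idea concrete: once the learning platform and the HR system can each export data keyed by an employee ID, overlaying them is a simple keyed join. The sketch below assumes flat exports with invented IDs and field names, not any real system’s schema:

```python
# Hypothetical exports: training results from the LMS and role/tenure
# data from an HR system, joined on an employee ID. Fields are invented.
lms_export = {
    "E001": {"course": "Negotiation 101", "post_score": 88},
    "E002": {"course": "Negotiation 101", "post_score": 74},
}
hr_export = {
    "E001": {"region": "EMEA", "tenure_years": 4},
    "E002": {"region": "APAC", "tenure_years": 1},
    "E003": {"region": "EMEA", "tenure_years": 7},  # no training record yet
}

# Overlay the two data sets so results can be cut by HR attributes
# (e.g., score trends by region or tenure band).
combined = {
    emp_id: {**hr_export[emp_id], **lms}
    for emp_id, lms in lms_export.items()
    if emp_id in hr_export
}

for emp_id, row in sorted(combined.items()):
    print(emp_id, row)
```

In a real deployment this join would typically happen inside the vendor’s integration layer or a data warehouse rather than in a script, which is why it pays to ask about integration steps and costs up front.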

Measurement is and will continue to be a critical skill and area of opportunity for any L&D professional. By demonstrating ROI, you will be thinking like a strategic advisor, helping any leadership team know where to invest. Collecting important evaluation data can aid in telling meaningful, influential stories to business leaders and can ultimately shift training and development from a nice-to-have to a critical investment.