As a learning professional with over 20 years in the field, I can honestly say that one of the most challenging questions to answer is, “How do we know it worked?” It’s an age-old question, and one worth answering well.

Every great learning design begins with performance goals and objectives. After all, if you don’t know what you are trying to achieve, how will you know you achieved it? We know the best practice is to make these goals measurable, but we also need to tie them to business outcomes. Begin by answering these key questions:

  • What behavior do we want to change?
  • What’s happening now? (Be specific; what can you observe?)
  • How can we measure it? (Be specific; what quantifiable metrics can you collect and analyze?)
  • How will we know we’re successful? Are there quantifiable targets?

If you know what you need to achieve, do you also know how the results will differ by audience? We often create audience profiles when we begin our instructional analysis, but, due to timeline, budget or other constraints, we look for efficiencies and overlaps and try to create a shared experience that meets the majority of our needs.

However, if the needs of your audiences differ from the beginning, it’s important to measure the differences you expect in the results. One way to do so is by creating archetypes.

By using archetypes for analysis, you can group audiences and analyze your data at a deeper level to track and measure mindset shifts. This process requires going beyond the standard definitions of roles and tenure and thinking about how learners interact with your content and make meaningful, long-lasting changes as a result of the learning experience.

Let’s say your initiative is to upskill leaders to model your new diversity and inclusion practices. Your instinct may be to analyze results by tenure (new managers versus seasoned leaders) or by geography, which is an excellent start! What if you also grouped them by archetype?

Here are three steps to using archetypes to evaluate training effectiveness.

1. Create Meaningful Audience Archetypes

Archetypes, or profiles, define how your audience engages with the content. As an example, let’s use motivation as the organizing dimension:

  • The go-getter: This person is excited, engaged, eager to try, and willing to share feedback and inform next-generation systems and actions.
  • Hesitant but involved: This person is not sure about what you’re asking of him or her, is willing to try, and expects immediate results that will lead to long-term, sustained change.
  • Resistant: This person is not at all excited to try and will repeatedly ask for a clear picture of the desired state, with a strong focus on results, which can’t come soon enough.

Grouping your audience in this way is not entirely scientific, as it’s based on your (or your analysts’) perceptions of audience motivation. Consider what works for your organization and the outcomes you are seeking to achieve with your training.

Meaningful audience segmentations can help you analyze the extent to which you were able to change a mindset and a set of key behaviors. Within each group, you can then also slice by the standard dimensions: tenure, geography and much more.
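
To make this concrete, here is a minimal sketch in Python of how you might tag learners with an archetype and then slice results by it. The archetype labels come from the list above; the learner fields, names and data are hypothetical, invented purely for illustration.

```python
from dataclasses import dataclass

# The three motivation archetypes described above.
ARCHETYPES = ("go-getter", "hesitant but involved", "resistant")

@dataclass
class Learner:
    name: str       # all fields here are hypothetical, for illustration
    tenure: str     # e.g., "new manager" or "seasoned leader"
    region: str
    archetype: str  # assigned from ARCHETYPES during analysis

learners = [
    Learner("A. Rivera", "new manager", "EMEA", "go-getter"),
    Learner("B. Chen", "seasoned leader", "APAC", "resistant"),
    Learner("C. Okafor", "new manager", "AMER", "hesitant but involved"),
]

# Group by archetype first, then slice within each group by tenure, region, etc.
by_archetype = {a: [p for p in learners if p.archetype == a] for a in ARCHETYPES}
for archetype, group in by_archetype.items():
    print(archetype, [(p.name, p.tenure) for p in group])
```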

2. Create Protocols

Protocols help you collect the best information from a representative sample in each archetype. Begin with question sets that apply to all groups; then, drill deeper to see where there are similarities and differences in how each group has adopted and applied the new information.

Most evaluations measure net promoter score, or NPS (“Would you recommend this course to others?”), or value for time spent (VFTS), and include Likert-scale questions to measure the following (a scoring sketch follows the list):

  • I learned new knowledge, skills and/or mindsets in this program.
  • I practiced useful skills.
  • The course was engaging and held my interest.
  • I will be able to apply what I learned.
  • I would recommend this course to colleagues.
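
As a rough sketch of the scoring mechanics: the standard NPS calculation is the percentage of promoters (9 or 10 on a 0-to-10 recommendation scale) minus the percentage of detractors (0 through 6), and the Likert items can simply be averaged per archetype. The response records and field names below are hypothetical.

```python
# Hypothetical survey responses: each has a 0-10 recommendation score
# and 1-5 Likert answers keyed by item; archetype is assigned at intake.
responses = [
    {"archetype": "go-getter", "recommend": 10,
     "likert": {"learned_new": 5, "practiced": 4}},
    {"archetype": "resistant", "recommend": 6,
     "likert": {"learned_new": 3, "practiced": 2}},
    {"archetype": "hesitant but involved", "recommend": 9,
     "likert": {"learned_new": 4, "practiced": 4}},
]

def nps(scores):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

print("Overall NPS:", nps([r["recommend"] for r in responses]))

# Mean Likert score per item, per archetype.
for archetype in {r["archetype"] for r in responses}:
    group = [r for r in responses if r["archetype"] == archetype]
    means = {item: sum(r["likert"][item] for r in group) / len(group)
             for item in group[0]["likert"]}
    print(archetype, means)
```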

Using archetypes as your guide, you can push your measurement further for deeper understanding. For example, you can test for accuracy, relevance and usability. Let’s return to our previous example, in which you created an instructional ecosystem to help leaders adopt diversity and inclusion practices and close known gaps in the organization, and examine this testing further.

3. Test

Write questions that capture each major dimension, and consider using a scale of 1 (strongly disagree) to 5 (strongly agree). A short scoring sketch follows the list.

  • Accuracy: How accurate was the information shared?
    • The facts painted a clear picture of the need for our organization to change.
    • I’m prepared to share the facts with my team so they see the need to change.
  • Relevance: How relevant was the learning experience to the changing needs of the business?
    • I learned new knowledge, skills and/or mindsets in this program.
    • I practiced useful skills.
    • I can see how the tools shared fit the new way we need to work.
    • I can apply what I learned immediately back on the job.
  • Usability: Was the experience well-constructed and easy to follow?
    • The course was engaging and held my interest.
    • I will be able to apply what I learned.
    • I would recommend this course to colleagues.
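
One way to turn those items into comparable numbers is to average each learner’s 1-to-5 answers within a dimension, producing accuracy, relevance and usability subscale scores. In this minimal sketch, the item keys are hypothetical shorthand for the questions above.

```python
# Map each 1-5 item to its dimension (keys are hypothetical shorthand).
SUBSCALES = {
    "accuracy":  ["facts_clear", "prepared_to_share"],
    "relevance": ["learned_new", "practiced", "tools_fit", "apply_now"],
    "usability": ["engaging", "can_apply", "would_recommend"],
}

def subscale_scores(answers):
    """Average a learner's 1-5 answers within each dimension."""
    return {dim: sum(answers[i] for i in items) / len(items)
            for dim, items in SUBSCALES.items()}

answers = {"facts_clear": 4, "prepared_to_share": 3, "learned_new": 5,
           "practiced": 4, "tools_fit": 4, "apply_now": 3,
           "engaging": 5, "can_apply": 4, "would_recommend": 5}
print(subscale_scores(answers))  # e.g., {'accuracy': 3.5, ...}
```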

When you know learners’ archetypes from the beginning (go-getter, hesitant but involved and resistant), you can create baseline metrics for each type and estimate the amount of change you expect.
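
For instance, a small sketch like the one below compares each archetype’s observed post-training shift against the change you estimated up front; every number in it is invented for illustration.

```python
# Hypothetical baseline and post-training subscale means per archetype,
# plus the shift estimated up front. All numbers are invented.
baseline = {"go-getter": 3.8, "hesitant but involved": 3.1, "resistant": 2.4}
post     = {"go-getter": 4.4, "hesitant but involved": 3.9, "resistant": 2.9}
expected_shift = {"go-getter": 0.5, "hesitant but involved": 0.7, "resistant": 0.8}

for archetype in baseline:
    observed = post[archetype] - baseline[archetype]
    gap = observed - expected_shift[archetype]
    print(f"{archetype}: observed {observed:+.1f} vs expected "
          f"{expected_shift[archetype]:+.1f} (gap {gap:+.1f})")
```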

What if you don’t have the budget for in-depth analytics but still want a pre- and post-training assessment to see how far you’ve moved the needle? Consider asking these three questions to track growth at the highest level (a quick delta sketch follows the list):

  • How motivated are you to build this new skill?
  • How confident are you that you can do what’s expected?
  • How ready are you to apply these skills in the flow of your work?
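
Even at this lightweight level, a simple pre/post delta per question gives you a directional read on growth. Here is a minimal sketch, assuming invented 1-to-5 averages for the three questions above.

```python
# Hypothetical pre- and post-training averages on a 1-5 scale. All invented.
questions = ["motivation", "confidence", "readiness"]
pre  = {"motivation": 2.9, "confidence": 2.6, "readiness": 2.4}
post = {"motivation": 3.8, "confidence": 3.5, "readiness": 3.2}

for q in questions:
    print(f"{q}: {pre[q]:.1f} -> {post[q]:.1f} (delta {post[q] - pre[q]:+.1f})")
```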

What do you think? I’m interested in hearing your perspectives to keep this important conversation going. Tweet us @TrainingIndustr and @TTATraining.
