Learning organizations are starting to realize the benefits of digital assistant-driven training. It’s quick and relatively cheap to deploy. It’s also relatively easy to distribute to everyone (with some limitations). Digital assistants are versatile and powerful, and adopting them can support training and development professionals in many ways.
A digital assistant (sometimes known as a “virtual assistant,” “intelligent agent,” etc.) is a technology-based helper that provides immediate and contextual support. It can take a visual form (e.g., an avatar or animated character), a textual form (e.g., a messaging bot or “smart chatbot”) and/or an audio form (e.g., a “voice assistant”). Digital assistants can be integrated into learning software and platforms, or they can be completely standalone. They can focus specifically on learning support, or they can provide a wide range of functions for personal and professional purposes.
Based on a survey of available articles and published case studies, standalone, general-purpose voice assistants (which are primarily audio-based) have been the most underutilized in enterprise learning. You may know them as Alexa, Siri, Google Assistant and others. Installed on almost every mobile phone and numerous other smart devices, they are nearly ubiquitous. If you were to say the right phrase right now (“Hey ….”), one of them would probably answer.
The tools for creating and sharing your own voice assistant learning actions are readily available. They are usually free or low cost, and getting started is surprisingly straightforward. More advanced interactions do require specialist programming, but digital learning designers and developers can create and launch simple (yet effective) learning actions in a matter of days.
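To make this concrete, here is a minimal, SDK-agnostic sketch of the intent handling behind a simple voice learning action. The intent names and tips are hypothetical examples; real platforms (such as the Alexa Skills Kit or Actions on Google) wrap this same pattern in their own request and response objects.

```python
# Minimal sketch of a voice "learning action" handler, independent of any
# specific assistant SDK. Intent names and tip content are hypothetical.

DAILY_TIPS = [
    "When giving feedback, describe the behavior before its impact.",
    "Schedule practice in short, spaced sessions rather than one long block.",
]

def handle_intent(intent_name: str, tip_index: int = 0) -> str:
    """Map a recognized intent to a spoken response string."""
    if intent_name == "DailyTipIntent":
        tip = DAILY_TIPS[tip_index % len(DAILY_TIPS)]
        return f"Here is today's learning tip. {tip}"
    if intent_name == "HelpIntent":
        return "You can ask me for a daily tip, or say stop to exit."
    # Fallback: reprompt the learner rather than failing silently.
    return "Sorry, I didn't catch that. Try asking for a daily tip."

print(handle_intent("DailyTipIntent"))
```

The fallback branch matters more in voice design than in screen design: with no visible menu, a graceful reprompt is the learner’s only way back on track.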
In addition to their near omnipresence, these assistants offer unique and powerful capabilities. Always available, they can also be scheduled to engage learners at particular times. They move learning beyond the screen into an ambient, multisensory experience and communicate in a user-friendly dialogue. You can easily tap into their built-in artificial intelligence (AI) features (such as natural language processing). Deepening the sense of interactive personalization, they provide learners with a high degree of control.
Here are five primary use cases for engaging digital assistants in learning experiences to consider:
- “Recommender:” The digital assistant recommends learning to users based upon their profile, history and preferences. This is the AI-enabled recommendation engine that we see with many bots and system assistants.
- “Responder:” The digital assistant provides performance support on request, meeting the moment of need. Voice-based assistants can be especially useful for drivers, operators and others who can’t physically interact with a screen but can still benefit from learning in the flow of work.
- “Resourcer:” The digital assistant offers skill-building resources and regularly updated (e.g., daily or weekly) training content, such as guides, job aids, tips and podcasts. Voice-based assistants offer intuitive ways to explore and access curated and topical content.
- “Reinforcer:” The digital assistant delivers nudges and reminders to sustain learning and build habits. Incorporating learning into assistant-driven “routines” (that is, sequences of interactions that play at a specific time or are triggered by a location or an event) is a powerful way to support continuous learning and growth.
- “Role player:” The digital assistant engages the learner in a realistic conversational simulation. AI-based role playing that doesn’t just involve selecting from pre-defined options is already available (or in beta), presenting incredible opportunities for skills practice in areas like providing feedback, negotiations and coaching.
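As one illustration of the “Reinforcer” pattern above, a routine can be modeled as an ordered list of prompts tied to a trigger. The sketch below uses a time-of-day trigger and hypothetical content; commercial assistants also support location- and event-based triggers.

```python
import datetime

# Hypothetical "routine": an ordered list of learning prompts that an
# assistant plays once a trigger fires (here, a time of day).
ROUTINE = {
    "trigger_time": datetime.time(hour=9, minute=0),
    "steps": [
        "Recap: yesterday you practiced active listening.",
        "Today's goal: ask one open-ended question in your first meeting.",
    ],
}

def due_steps(now: datetime.datetime) -> list:
    """Return the routine's prompts if the trigger time has arrived."""
    if now.time() >= ROUTINE["trigger_time"]:
        return ROUTINE["steps"]
    return []
```

For example, `due_steps` returns the two prompts for any time at or after 9:00 a.m. and an empty list before it, mirroring how a morning learning routine would stay silent until its scheduled moment.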
Designing learning experiences for digital assistants and a voice user interface (VUI) requires a fundamentally different approach than designing content for other modalities. Conversation design (CD) has emerged as a discipline to optimize assistant-based exchanges. One of its foundational principles is to mirror natural human conversation: keep prompts and responses short and informal. This eliminates the need to teach learners rules for interacting with the assistant, but it also demands a deeper appreciation of the idiosyncrasies and nuances of open dialogue (e.g., handling colloquialisms and edge-case responses, avoiding breakdowns in the pattern of cooperative understanding between interlocutors, and encouraging exploration and reflection through probing questions). Designs should support both quick access to relevant information and deeper dives into what’s on offer.
Beyond the shift in design mindset, there are a number of other factors to weigh when determining your approach to learning through voice-enabled assistants. Accessibility is key for learners who cannot use dialogue- or audio-driven formats. There may be security considerations for your content as well, and tracking completion and effectiveness requires more advanced integrations such as APIs or webhooks. Translation requires field testing and localized design. And remember that assistant-based learning interventions require ongoing updates and maintenance, particularly if you offer regularly changing content.
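On the tracking point, one common approach is for the assistant’s backend to emit an xAPI statement to a learning record store when a learner completes an action. The sketch below builds such a statement; the learner email and activity ID are illustrative, and a real integration would POST the JSON to the record store with authentication.

```python
import json

def build_completion_statement(learner_email: str, activity_id: str) -> dict:
    """Build a minimal xAPI statement for a completed voice learning action.

    The verb IRI is the standard ADL "completed" verb; actor and object
    values here are placeholder examples.
    """
    return {
        "actor": {"mbox": f"mailto:{learner_email}"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/completed",
            "display": {"en-US": "completed"},
        },
        "object": {"id": activity_id},
    }

stmt = build_completion_statement(
    "learner@example.com", "https://example.com/actions/daily-tip"
)
print(json.dumps(stmt, indent=2))
```

Because the statement is just JSON, the same pattern works whether the completion event originates from a voice skill, a chatbot or a routine, keeping voice-based learning visible in your existing analytics.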
Earlier, I referred to digital assistants as “helpers.” Today’s digital assistants have a somewhat subservient personality, which will evolve over the next few years as they become more multidimensional. We’re already seeing this in some performance support spaces, such as AI tools that monitor customer service conversations and provide real-time prompts to agents on how to improve style, tone, engagement and productivity. Digital assistants will extend the ways in which they support learning professionals in building skills, mindsets and habits across the enterprise.