Since ChatGPT became widely available to the general public at the end of 2022, artificial intelligence (AI) has become a hot topic in nearly every industry. While the discussion may feel like the classic hype that accompanies the introduction of all new technologies, we’re told that AI could be different, with its potential for exponential self-improvement and its promise to do for knowledge work what the industrial age did for blue-collar work.

There’s also tremendous pressure on business leaders to figure out, quickly, how their organizations and disciplines can adopt AI before they’re left behind. AI poses some clear risks for human resources (HR) functions, which often involve a great deal of social and ethical nuance that, at least today, remains the purview of human beings. The idea of allowing AI to make hiring or promotion decisions, or even suggestions, raises significant concerns about fairness and about unethical or illegal discrimination. Just last year, the US Justice Department warned organizations that using AI tools for selection can violate the Americans With Disabilities Act (ADA).

While there are critical legal, ethical and technical questions yet to be answered about how AI can be used in recruitment, hiring, career development and employee management, a potentially less risky place to start is learning and development (L&D).

L&D: The Petri Dish for AI in HR?

Each organization must assess the risks and benefits of adopting AI projects on an individual basis. Those risks are specific to the company’s industry, and organizations in compliance-heavy sectors may face more of them than others.

Commercially available AI technology can already offer real benefits to employee learning. Tools on the market today offer features like:

  • Individualized and adaptive courses. AI that learns from each learner can tailor content and its delivery to meet learners where they are and how they learn best.
  • Virtual assistance. The gold standard of one-on-one learning may become much easier to reach with AI that can answer questions and interact intelligently with learners.
  • Automated content creation. While some L&D pros may question whether AI can create truly useful course content today, generative AI can allow for a kind of “headless” learning platform where a single general repository of content can be adapted to any medium. For example, if AI learns that a particular learner does better with video than text, it could script, narrate, illustrate and produce a video version of existing text content tailored to that learner (a rough sketch of this idea follows this list).

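To make the “headless” idea concrete, below is a minimal, hypothetical sketch of routing a single canonical lesson to a learner’s preferred medium. The profile fields, medium names and stubbed generation step are illustrative assumptions, not features of any particular product:

```python
from dataclasses import dataclass

@dataclass
class LearnerProfile:
    """Hypothetical record of what the platform knows about a learner."""
    name: str
    preferred_medium: str  # e.g., "text", "audio" or "video"

def render_for_learner(canonical_text: str, learner: LearnerProfile) -> str:
    """Adapt one canonical lesson to the learner's preferred medium.

    In a real platform, the non-text branches would call a generative
    model to script, narrate and produce the derived asset; here they
    are stubbed so the sketch stays self-contained.
    """
    if learner.preferred_medium == "video":
        return f"[video script generated from lesson]\n{canonical_text}"
    if learner.preferred_medium == "audio":
        return f"[narration generated from lesson]\n{canonical_text}"
    return canonical_text  # default: deliver the original text as-is

lesson = "Giving feedback: be specific, timely and focused on behavior."
print(render_for_learner(lesson, LearnerProfile("Ana", "video")))
```
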
These examples just scratch the surface, but they should illustrate that the potential for AI in L&D is quite strong. If an organization is interested in experimenting with AI and L&D seems like a good place to start, it’s still important to be precise and cautious.

Precise Understanding

Before deciding to pursue an AI project, it’s critical that organizations understand the capabilities and limitations of the technology or platform they’re considering. At the Myers-Briggs Company, we take this kind of precision quite seriously: while the Myers-Briggs assessment is a powerful tool for development and tempting to use for selection, it shouldn’t be used for hiring, selection or performance prediction.

It’s easy to let our creativity run wild with ideas for how AI could be used in HR, but the real-world capabilities and limitations of these technologies are a better guide to success. For example, a technology that can assess an employee’s competence in a certain area might be better deployed today as part of an L&D initiative than as a driver of hiring decisions.

Assessing Risk and Benefit

With a precise understanding of a technology’s capabilities and limitations in hand, an organization should then assess whether the technology is right for it. The Harvard Business Review has published a model for choosing an AI project that guides decision-making around risk and demand.

Risk assessment in this context answers the question: “How much damage could this do if something went wrong?” Demand answers a different question: “How much do we actually need the capabilities or benefits this technology offers?”

This model may provide the justification for adopting AI first in L&D, since for many organizations such projects pose less risk while meeting strong demand from employees and supporting larger goals.
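
As a loose, hypothetical illustration of the risk-and-demand idea (the candidate functions, 1-to-5 scores and scoring rule below are illustrative assumptions, not part of the HBR model itself), a team could rank potential AI projects like this:

```python
from dataclasses import dataclass

@dataclass
class CandidateProject:
    name: str
    risk: int    # 1 (low) to 5 (high): damage if something goes wrong
    demand: int  # 1 (low) to 5 (high): how much we need this capability

# Illustrative scores a team might assign in a planning workshop.
candidates = [
    CandidateProject("Resume screening", risk=5, demand=3),
    CandidateProject("Promotion recommendations", risk=5, demand=2),
    CandidateProject("Adaptive L&D content", risk=2, demand=4),
    CandidateProject("L&D virtual assistant", risk=2, demand=3),
]

# Favor high demand and low risk; best candidates print first.
for c in sorted(candidates, key=lambda c: c.demand - c.risk, reverse=True):
    print(f"{c.name}: demand {c.demand}, risk {c.risk}, score {c.demand - c.risk}")
```

With these made-up numbers, the L&D projects rise to the top of the list.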

Get the Basics Right First

The demand axis of the HBR model suggests that, whatever AI’s possibilities, we must take an honest look at how much we need the help AI offers and whether that help will really support our objectives.

If an organization is deciding where among the array of HR functions to first experiment with AI, it should weigh AI’s potential across all of those functions and understand where it stands without AI. In particular, does the organization:

  • Have clear diversity and inclusion objectives and an understanding of how well it’s already meeting them?
  • Have a clear picture of how well it’s already addressing unwanted discrimination and bias in the selection process? If bias, discrimination and diversity objectives are already being addressed effectively, introducing AI in these areas could increase overall risk.
  • Already ensure that only reliable, valid and fair assessment results are used in selection?

Unless the answers to these questions already paint an organization’s efforts in a good light, the introduction of AI to areas like selection could add unpredictable risk. If there are unclear goals around diversity or uncertainty about how discrimination is being addressed, it’s tempting to look to AI as a solution — but from what we’ve seen so far, that would be a gamble.

Whether an organization decides to adopt AI tools now or not, the potential for AI in L&D is certainly exciting, and L&D just may be the safest and most rewarding place to start the experiment.