It is no secret that artificial intelligence (AI) has a bias problem. Facial recognition technologies consistently misidentify people of color; biased algorithms tend to reinforce sexism in the workplace; and AI often overlooks positive qualities in job candidates with disabilities, holding them back from valuable employment opportunities.
Of course, we cannot blame the technology alone for its shortcomings. Michael Yurushkin, Ph.D., chief technology officer and founder of AI and data science consulting company BroutonLab, says AI, machine learning (ML) and deep learning are “as racially biased as the data sets and proxies that humans expose it to.”
For example, he says, if 80% of job candidates that a company hired in the past were white males, “the algorithm will automatically filter out females or non-white-sounding names.” This bias occurs because the algorithms “are matching the patterns” in the training data the technology uses to make future predictions. “Any bias contained in the data used to train AI will affect how the system performs,” Yurushkin says.
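Yurushkin's point about pattern matching can be made concrete with a toy model. The sketch below is illustrative only — the data and names are hypothetical, not from any real hiring system — and shows how a model that simply learns historical frequencies reproduces the 80%-white-male pattern he describes:

```python
# Hypothetical historical hiring data: (gender, hired) pairs in which
# 80% of past hires were men -- the pattern Yurushkin describes.
history = [("male", True)] * 80 + [("female", True)] * 20 + \
          [("male", False)] * 40 + [("female", False)] * 60

def hire_rate(records, gender):
    """P(hired | gender) learned purely from historical frequencies."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

def score(candidate_gender):
    """A naive screening model: score a candidate by the historical
    hire rate of their demographic group. It 'matches the patterns'
    in the training data -- bias in, bias out."""
    return hire_rate(history, candidate_gender)

print(score("male"))    # 80/120, roughly 0.67
print(score("female"))  # 20/80 = 0.25
```

No rule in this model ever mentions gender preference; the disparity emerges entirely from the skewed training data, which is exactly why "any bias contained in the data used to train AI will affect how the system performs."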
Let’s take a deep dive into how bias impacts AI and how training can help fix it.
Women, people of color, employees with disabilities and other groups have long faced systemic barriers in (and outside of) the workplace. With few of these employees holding leadership roles, workplace policies and procedures are all too often designed for the people creating and enforcing them: white, typically able-bodied males.
Women of color are among the most underrepresented groups in the C-suite: The Center for American Progress found that they represent just 4.7% of executive or senior-level officials and managers in S&P 500 companies. Further, with Indra Nooyi’s exit as chief executive officer of PepsiCo in October 2018, only two women of color remain in CEO positions in the Fortune 500.
Disparities among underrepresented groups in the workplace can fuel implicit bias — which can easily seep into seemingly objective algorithms. Darrell West, vice president and director of governance studies and director of the Center for Technology Innovation at the Brookings Institution, and co-author of “Turning Point: Policymaking in the Era of Artificial Intelligence,” says, “Minorities already face a number of barriers in terms of bias, prejudice and discrimination.” If you add biased algorithms on top of those issues, he says, it “can be devastating” for their career advancement.
Companies dedicated to diversity, equity and inclusion must commit to eliminating bias from AI — which starts with eliminating it from their organization.
How Training Can Help
Training and education are vital for organizations that want to improve how they use AI and other deep learning technologies, West says. After all, “People are not always conscious of their own implicit biases or how organizational practices can introduce possible barriers.”
When done well, implicit bias training can help employees identify where their biases lie so they can work to dismantle them, but delivering successful implicit bias training is easier said than done. Often, our biases are so deeply ingrained that they go unnoticed (which is why they are called “implicit” biases).
To achieve real behavior change, implicit bias training should not only highlight biases but provide actionable solutions for addressing them. For example, if a learner realizes that he or she has a bias against co-workers with disabilities, there should be resources available to learn how to support and advance employees with disabilities in the workplace moving forward.
Organizations should also train the employees working with AI or ML technologies on best practices for creating inclusive algorithms. This training will help ensure that they apply what they learned in the implicit bias training on the job. Specifically, learning leaders should advise employees responsible for creating algorithms to follow several best practices.
Start Early
Eliminating bias starts in the early development of AI, “well before even the prototyping phase,” says Rex Freiberger, CEO of Gadget Review. When building out a proof of concept, it is important to “screen your team” for implicit bias. Recognizing the potential for bias at an early stage can help mitigate it before it “sneaks into the algorithms upon which the AI functions,” he says.
Bring in Outside Expertise
Employees responsible for creating algorithms or deciding how data will be applied often have limited sets of diverse experiences to “bring to the ultimate AI product,” says Miriam Vogel, president and CEO of EqualAI, a nonprofit organization dedicated to reducing bias in AI. She recommends that companies hire outside experts from diverse demographic groups to scan their algorithms for bias to ensure that they are as objective and fair as possible.
Each of these “human touch points,” Vogel says, is an opportunity to identify and reduce the implicit bias in your AI.
Identifying and eliminating bias from AI is not a one-and-done process. Freiberger says, “You owe it to your team and your customers to perform rigorous checks to prevent [bias] from happening as much as possible.”
Equitable AI for an Equitable Future
Eliminating bias from AI is simply “the right thing to do,” as it helps ensure that employees have equal access to opportunity and that organizations have a diverse “demographic mix” of talent, says Tomas Chamorro-Premuzic, chief talent scientist at ManpowerGroup, co-founder of assessment platform Deeper Signals and author of “Why Do So Many Incompetent Men Become Leaders? (And How to Fix It).”
But it also benefits the bottom line: Research shows that companies dedicated to advancing diversity, equity and inclusion are “more productive, more creative [and] more innovative,” Chamorro-Premuzic says. “Even if those benefits are not realized directly, there’s a huge [public relations] and reputational cost today for not caring about diversity.”
While it may seem like companies working to eliminate bias from AI for the sake of their reputation (and to avoid a lawsuit) are simply “checking a box,” Chamorro-Premuzic says, “this is actually not the worst-case scenario … It means society is evolving and making progress.” It means consumers expect organizations to advance, not hinder, inclusivity in the workplace.
To that end, actionable implicit bias training can help eliminate bias in AI by ensuring that the humans behind the machines are developing inclusive algorithms that, in turn, contribute to a more equitable future of work.