
HR is in the age of reinvention

This is the age of reinvention. Businesses started the 21st century by reinventing themselves as online businesses; then they took the next step and became data-driven; now they are incorporating artificial intelligence and machine learning into their processes. The COVID-19 pandemic has only accelerated these changes: a business that hadn’t made the digital transition by the start of 2020 had little chance of surviving to 2021.

HR is no different. We’ve all seen HR-related applications in the marketplace: applications for screening resumes, applications for managing benefits, applications for monitoring employee behaviour and productivity, and more. HR is reinventing itself just as the rest of the company is reinventing itself, from the executive suite down. (What executive suite? We’re all working from home now; that’s just one more change facing HR.)

The next year will undoubtedly bring more changes to HR, including increased reliance on AI applications. It’s easy to see why: even when unemployment is low, a highly desirable job can receive hundreds of applications. When unemployment is high, jobs can easily draw thousands. Who is going to read all of those resumes? That’s a massive task, whether you’re at a small company with a few job postings or an industrial giant with hundreds. And it’s a task that can be automated by using AI to read and evaluate applications and make hiring recommendations.

But as HR professionals, we know there are many pitfalls. Hiring processes need to be immune to bias–and it’s entirely too easy for an AI system to produce unfair, biased results. In one frequently cited case, Amazon discarded a system for screening resumes because it penalized women: having played on a women’s chess team or having attended a women’s college caused applicants to be downgraded. Why? The most important part of developing an AI system is training a model. In this case, the model was trained on the resumes of past applicants and the outcomes of those hiring processes. Because a majority of Amazon’s employees are male, the algorithm learned to associate successful applications with male-oriented words. We have to wonder how much great talent would have been turned away if this tool had been deployed.
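
To see how this happens, here is a minimal sketch in Python–hypothetical toy data, not Amazon’s actual system. A text classifier trained on historically biased hiring outcomes learns a negative weight on words associated with women, even though gender is never an explicit input:

```python
# A minimal sketch (hypothetical toy data, not Amazon's actual system) of how a
# resume screener trained on historically biased outcomes learns to penalize
# words associated with women's activities.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "historical" data: past resumes and whether the applicant was hired.
# The outcomes reflect a biased process, not applicant quality.
resumes = [
    "captain mens rugby team, software engineer",
    "mens chess club president, data analyst",
    "software engineer, hackathon winner",
    "captain womens chess team, software engineer",
    "womens college graduate, data analyst",
    "womens coding society founder, hackathon winner",
]
hired = [1, 1, 1, 0, 0, 0]  # biased historical decisions

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect what the model learned: the weight on "womens" is strongly negative,
# even though gender was never an explicit input feature.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
```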

Outright examples of systematic unfairness aren’t the only kind of bias to be concerned about. Another kind of bias occurs when error rates are higher for one group than for another. For example, as research by Joy Buolamwini and Timnit Gebru has shown, gender classification algorithms are most accurate for white men and least accurate for black women. And if a system can’t accurately classify gender from a face, it’s unlikely to identify faces correctly either. Imagine the consequences for an algorithm that screens employees entering a building: alarms going off for women of colour, while white men come and go freely.
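
Detecting this kind of bias is straightforward once you look for it: report error rates per group rather than a single overall number. A minimal sketch of such a disaggregated evaluation, with illustrative numbers (not Buolamwini and Gebru’s data):

```python
# A minimal sketch of a disaggregated evaluation: instead of one overall
# accuracy number, break error rates out by group. Data here is illustrative.
from collections import defaultdict

# (group, true_label, predicted_label) for a hypothetical classifier
results = [
    ("lighter-skinned men", 1, 1), ("lighter-skinned men", 0, 0),
    ("lighter-skinned men", 1, 1), ("lighter-skinned men", 0, 0),
    ("darker-skinned women", 1, 0), ("darker-skinned women", 0, 1),
    ("darker-skinned women", 1, 1), ("darker-skinned women", 0, 1),
]

errors = defaultdict(lambda: [0, 0])  # group -> [mistakes, total]
for group, truth, predicted in results:
    errors[group][0] += int(truth != predicted)
    errors[group][1] += 1

for group, (mistakes, total) in errors.items():
    print(f"{group}: error rate {mistakes / total:.0%}")
# Overall accuracy can look acceptable while one group bears most of the errors.
```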

In both of these cases, training data is part of the problem. Amazon trained their algorithm on their own job applicants; the face recognition algorithms that Buolamwini and Gebru studied were trained on datasets that were “predominantly composed of lighter-skinned subjects.” But training data isn’t the entire problem. Biased data comes from biased workplaces, and dealing with issues of bias and fairness is HR’s job. Bias doesn’t just enter through training data (which reflects historical biases); it enters through the questions we choose to ask, who has power over whom, and what kind of conduct is acceptable or unacceptable. Many women and minorities leave jobs because of abuse; will a model trained on hiring and retention data therefore evaluate minority candidates as a “poor fit”? Questions about existing culture and practices may seem unrelated to AI software itself, but they have everything to do with how those applications and their results are used.

In HR, we make decisions that affect human lives. We don’t have the option of “moving fast and breaking things,” because when we make mistakes, we break people. We need to be aware of the problems AI systems bring. But we won’t make progress by rejecting AI. AI brings the promise of making decisions at scale: dealing with thousands (or tens of thousands, or hundreds of thousands) of resumes. And it also brings the promise of making better decisions. It’s easy to point at biased AI systems, but we can never forget that humans are biased, too. As we’ve argued, it’s much easier to audit an AI for bias than to audit a human. We’re too tricky, too good at rationalising decisions, too good at hiding our real motives from ourselves. An AI will never tell you “I don’t have a racist bone in my body.”

So how do we use AI effectively in HR? First, we have to think critically about our systems. That’s something Amazon did right: they would never have known about those biased outcomes if they hadn’t been willing to audit their algorithm and scrutinise its results. And we have to think carefully about what an audit requires. While we’re understandably nervous about collecting information about protected groups (such as race and gender), it’s very difficult to tell whether an algorithm is biased if you don’t have that information. As Amazon’s experience demonstrated, the training process can produce a gender- or race-biased model even without explicit data about protected classes.
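
What might such an audit look like? One common check in US employment practice is the “four-fifths rule”: compare each group’s selection rate to that of the most-selected group, and flag ratios below 0.8. A minimal sketch with illustrative numbers and hypothetical group labels–note that it can’t be computed at all unless group membership was recorded:

```python
# A minimal sketch of why auditing needs protected-attribute data: the
# "four-fifths rule" compares selection rates across groups, which is
# impossible to compute if you never recorded group membership.
# Numbers and group labels below are illustrative.

def selection_rate(decisions):
    return sum(decisions) / len(decisions)

screened = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],  # 1 = advanced to interview
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1, 0, 0],
}

rates = {g: selection_rate(d) for g, d in screened.items()}
baseline = max(rates.values())
for group, rate in rates.items():
    ratio = rate / baseline
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({flag})")
```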

Second, we need to educate ourselves about the technology we’re adopting: what AI can do, what it can’t do, where it succeeds, and how it fails. That’s not in our training. It needs to be. We don’t need to understand the algorithms, but we do need to know how bias creeps into training data, how bias is encoded into our human institutions, and how AI systems amplify that bias. In the end, our systems are just reflecting who we are. And we need to be familiar with methods and tools for making AI explainable or interpretable by showing the factors that go into any decision.
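
For a simple linear scoring model, that kind of explanation can be as direct as listing each factor’s contribution to a decision. A minimal sketch–the feature names and weights are illustrative, not from any real screening product:

```python
# A minimal sketch of a per-decision explanation for a linear scoring model:
# each factor's contribution is its weight times its value, so a reviewer can
# see exactly what drove a score. Feature names and weights are illustrative.
weights = {
    "years_experience": 0.4,
    "relevant_degree": 0.8,
    "employment_gap": -0.6,  # penalizing gaps can itself encode bias
    "referral": 0.5,
}

applicant = {
    "years_experience": 5,
    "relevant_degree": 1,
    "employment_gap": 1,
    "referral": 0,
}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
for factor, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{factor:>18}: {value:+.2f}")
print(f"{'score':>18}: {score:+.2f}")
```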

We need to learn how to investigate the digital “supply chain.” Your company may have an exemplary record on equal opportunity. But if you start using a third-party application (or, more likely, a service) to screen resumes, you have to think about much more than your company’s history. Was the AI model trained on your data? On someone else’s? On some combination of the two, or on a different dataset altogether? If data from other organisations was used in training, do you trust that data to be fair and unbiased? You’re responsible for the entire data chain, and if you don’t ask the right questions, you won’t get the answers you need.

While HR leaders often haven’t been trained in technology, we have been trained in thinking about bias and systemic issues. That’s our superpower. As people working in human resources, we should be in touch with the human dimension of these technical problems. At our best, we can use these tools to investigate our own biases and problems. When we look at the results produced by an AI system, do we like what we see? If we have goals for hiring a more representative workforce, are these new tools helping us meet those goals, or hindering us? When you discover that a model is biased and unfair, use that discovery to analyse and address the human factors and biases that created the training data. Those practices can’t be addressed until they’re out in the open; use AI to expose them.

So, what can we do to take advantage of new AI technology in HR in a way that’s unbiased and fair? Here are a few suggestions:

  • Be aware that AI systems can easily be biased or unfair. Some of that bias arises from training data–but not all. Some of it comes from your own corporate culture and power structures.
  • Think about the supply chain: an off-the-shelf system was trained using data over which you have no control. It may reflect your vendor’s biases, or the biases of third- or fourth-party datasets.
  • Ensure that humans are making the final decisions. AI systems can be a powerful aid to humans, but they shouldn’t be making decisions on their own.
  • Put systems in place for remediation: applicants and employees need a way to appeal decisions, and a human needs the authority to overturn the algorithm when it gets one wrong.
  • Avoid the temptation to build your own. It isn’t that difficult for a programmer to create something that sources applicants or screens resumes–but these simple solutions will give very low-quality results.
  • Above all, get learning. It isn’t necessary to understand application development or the math behind AI, but it is essential to understand how AI works, how it amplifies biases, and how to audit its results. HR teams excel at helping train and upskill their technical employees. The most important thing we can do now is put that same requirement on ourselves.

It’s all too easy for AI systems to perpetuate and amplify bias–but, if you’re careful, AI can uncover bias, rather than perpetuate it. To reach that goal, you can’t just install a system and trust its results. You have to be critical: you have to think carefully about the kind of results you want, and make sure that you get them.
