
Addressing algorithmic bias in hiring

Discover how artificial intelligence (AI) is revolutionizing the HR industry and enhancing the recruitment process. With AI, HR departments can automate tedious tasks, identify patterns that predict employee turnover and improve the candidate experience, reducing both time and cost. However, these technologies can also perpetuate existing biases and require careful management to prevent harm. This blog explores AI’s current use in HR, emerging regulations in the US and EU, and how to safeguard against potential risks with AI risk management strategies.

Do hiring algorithms prevent bias or amplify it?

As the use of artificial intelligence (AI) continues to expand across all industries, human resources (HR) departments have been among the first to embrace its capabilities. AI provides HR departments with new tools to streamline the job application process for both candidates and employers, automate repetitive tasks in the onboarding process and recognise patterns indicative of employee turnover intention. While these technologies offer a range of benefits, such as improving the candidate experience and reducing time and cost, they also pose risks, such as perpetuating existing biases. As a result, it is essential to manage these risks to ensure that AI is used to its full potential without causing harm. 

This blog provides an overview of how AI is being used in HR, examines regulations that are emerging in the US and EU to shape the use of HR technology, and outlines steps that employers and businesses can take to protect against unintended harm caused by AI systems through AI risk management. 

Shaping the candidate pool and narrowing the funnel

The applications of AI in HR are vast, including targeting job postings, screening applications, and evaluating assessment and interview performance. Although automated tools have benefits for both candidates and HR staff, they can pose serious risks. As recruiters use algorithms at more steps of the hiring process, bias can enter hiring decisions in several ways. Algorithms can influence hiring decisions by targeting job postings based on factors such as age or gender, limiting the pool of potential applicants. Programs that screen resumes can lead to discrimination if, for example, they penalize resumes that contain the word “women’s” or disclose a disability. Models that analyse video interviews may struggle to accurately recognize facial features for applicants with darker skin tones, or may penalize non-native speakers. In short, HR practices are becoming less human-led and more automated each year, and AI systems are increasingly being used to assist or even replace human decision-making.

Regulation ahead

The passing of laws aimed at regulating HR technology signals that managing the risks of automated tools is becoming an increasing priority across the industry. The first legislation of its kind, Illinois’ Artificial Intelligence Video Interview Act, came into effect on 1st January 2020, seeking to increase transparency around the use of AI to evaluate video interviews. Under this law, employers using AI in their video interviews must disclose this to candidates, explain how the tool works and state which characteristics it considers. Candidates must also consent to the tool before it is used to evaluate them. Maryland has taken similar action, introducing legislation that requires employers to obtain a signed waiver from candidates before using AI-driven video interviews.

The New York City Council has passed legislation that mandates bias audits of automated employment decision tools (AEDTs). Local Law 144, colloquially known as the NYC Bias Audit law, requires employers to commission independent, impartial bias audits of their AEDTs before using them to evaluate candidates for employment or employees for promotion within New York City. The law stipulates that employers and employment agencies must notify candidates or employees at least 10 business days before an AEDT is used; notice can be given in the careers section of the employer’s website, in a job posting or via email. As part of this notification, employers must set out the factors and variables the tool considers in making its decision and provide the AEDT data retention policy, and candidates can also request information about the type and source of the data being used. This enables candidates to give informed consent: they have at least two working weeks to research the tool they will interact with, weigh its implications and decide whether they object to it.
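One core calculation in these audits is the impact ratio. For tools that output numerical scores rather than simple pass/fail decisions, the city’s implementing rules frame this in terms of a scoring rate, broadly the share of each demographic category scoring above the sample median, compared against the category with the highest rate. The minimal sketch below illustrates that arithmetic only; the scores and category names are invented, and a real audit must be performed by an independent auditor on historical data.

```python
from statistics import median

# Invented audit data: (demographic category, AEDT score) pairs.
results = [
    ("group_a", 82), ("group_a", 55), ("group_a", 71), ("group_a", 64),
    ("group_b", 60), ("group_b", 48), ("group_b", 77), ("group_b", 52),
]

# Scoring rate: share of each category scoring above the overall sample median.
cutoff = median(score for _, score in results)
categories = {cat for cat, _ in results}
scoring_rates = {
    cat: sum(1 for c, s in results if c == cat and s > cutoff)
         / sum(1 for c, _ in results if c == cat)
    for cat in categories
}

# Impact ratio: each category's scoring rate relative to the highest category's.
highest = max(scoring_rates.values())
impact_ratios = {cat: rate / highest for cat, rate in scoring_rates.items()}
print(impact_ratios)  # here: group_a 1.0, group_b ~0.33
```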

Outside of the US, the European Commission’s EU AI Act seeks to regulate AI systems placed on the EU market. It takes a risk-based approach: obligations, and the penalties for failing to meet them, are proportional to the risk a system presents, with risk categorized as minimal, limited, high or unacceptable. Because AI systems used for employment and talent management are deemed high-risk, employers using automated HR solutions must take steps to mitigate the risks those systems pose.

Towards equitable hiring

Although U.S. and EU laws impose some limitations on employers utilizing predictive hiring technologies, they are inadequate for addressing the evolving risks associated with machine learning-enhanced tools. For example, the Civil Rights Act of 1964 prohibits discrimination on the basis of sex or race in hiring, promotion and firing decisions. Similarly, the Equal Employment Opportunity Commission’s Uniform Guidelines on Employee Selection Procedures state that the four-fifths rule should be used to identify potential adverse impact in hiring practices: a selection rate for any group that falls below four-fifths (80%) of the rate for the group with the highest rate is taken as evidence of adverse impact. Yet this is often insufficient to address the novel harms posed by the use of AI.
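To make the four-fifths rule concrete, it reduces to a simple ratio test on selection rates. The sketch below uses invented applicant and hire counts purely for illustration; a real analysis would also weigh sample sizes and statistical significance.

```python
# Four-fifths (80%) rule: a group's selection rate below 0.8 times the
# highest group's rate is treated as evidence of adverse impact.
# Counts are invented for illustration.
applicants = {"men": 200, "women": 150}
hired = {"men": 40, "women": 15}

rates = {g: hired[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    verdict = "adverse impact indicated" if ratio < 0.8 else "within guideline"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {verdict}")
```

Here women are selected at half the rate of men (0.10 versus 0.20), an impact ratio of 0.50, well below the four-fifths threshold.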

So, how can we ensure hiring algorithms promote equity? Regulation moves slowly and industry-wide best practices are still in their early stages, but both have roles to play. In the interim, vendors constructing automated hiring tools, as well as the employers utilizing them, should aim for more than the bare minimum of regulatory compliance.

Before deploying any predictive tool, vendors must thoroughly evaluate whether their algorithms can produce equitable hiring outcomes. They must also assess how subjective measures of success might affect the tool’s predictions over time. Moreover, employers should not only look for evidence of adverse impact at the selection phase but also monitor their entire recruitment pipeline to identify any stage where bias may be present or arise, as the sketch below illustrates.
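One way to put pipeline monitoring into practice is to compute stage-to-stage pass-through rates by group, rather than checking only the final selection rate. In this sketch the funnel counts are invented; note how the disparity is concentrated at the screening step while later stages sit closer to parity.

```python
# Invented funnel counts by group at each stage of the recruitment pipeline.
funnel = {
    "applied":     {"group_a": 500, "group_b": 400},
    "screened":    {"group_a": 250, "group_b": 140},
    "interviewed": {"group_a": 100, "group_b": 50},
    "offered":     {"group_a": 25,  "group_b": 10},
}

stages = list(funnel)
for prev, curr in zip(stages, stages[1:]):
    # Pass-through rate from one stage to the next, per group.
    rates = {g: funnel[curr][g] / funnel[prev][g] for g in funnel[prev]}
    highest = max(rates.values())
    for group, rate in rates.items():
        print(f"{prev} -> {curr} | {group}: {rate:.2f} "
              f"(ratio vs highest: {rate / highest:.2f})")
```

In this invented data, screening passes group_b at only 70% of group_a’s rate; auditing each stage rather than the pipeline as a whole localizes where bias enters.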

The need for AI risk management in HR tech

One approach to ensuring responsible AI is risk management: the process of identifying, assessing and managing risks. Under an AI risk management framework, bias, once identified, can be addressed at its source: by debiasing the data the model is trained on, by modifying the model to make it more equitable across groups, or by amending the model’s outputs so that its predictions are more equitable. Vendors and employers can support these efforts in a number of ways, including being transparent about the type of technology and model behind the algorithm, any efforts to test the model for bias, and how the model performs in terms of accuracy.
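As one illustration of the third option, amending a model’s outputs, a common post-processing technique from the fairness literature is to choose group-specific decision thresholds so that selection rates end up comparable across groups. The sketch below uses invented scores and a hypothetical target rate; whether such an adjustment is appropriate or lawful in a given jurisdiction is a question for counsel, not code.

```python
# Post-processing sketch: pick a per-group score threshold that yields
# roughly the same selection rate for every group. Scores are invented.
scores = {
    "group_a": [0.9, 0.8, 0.7, 0.6, 0.5, 0.4],
    "group_b": [0.7, 0.6, 0.5, 0.4, 0.3, 0.2],
}
target_rate = 0.5  # hypothetical: select the top half of each group

thresholds = {}
for group, vals in scores.items():
    ranked = sorted(vals, reverse=True)
    k = max(1, int(len(ranked) * target_rate))  # how many to select per group
    thresholds[group] = ranked[k - 1]           # k-th highest score

selected = {g: [s for s in vals if s >= thresholds[g]]
            for g, vals in scores.items()}
print(thresholds)  # {'group_a': 0.7, 'group_b': 0.5}
print({g: len(v) / len(scores[g]) for g, v in selected.items()})  # both 0.5
```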

Minimizing these risks, particularly bias, will soon become a legal requirement, and employers will be expected to be more transparent about how AI supports their employment decisions. To decrease bias in hiring, organizations must actively build and examine their tools with intention. Without AI risk management, the technology is unlikely to deliver on that promise and may even make bias worse.
