
Meet Brielle Bot*: Will Your New Recruiter Be Fair?

To the wearied HR administrator seeking to come up for air amid a sea of resumes, the call of Brielle Bot is much like the siren song of the lovely mermaid calling to the lonely sailor in the middle of the ocean.

Computer robots are making inroads into human resources departments. Will 2022 be the year of the bot? According to the Frevvo Workflow Automation Blog, 38% of enterprises are already using artificial intelligence (AI) in the workplace for HR applications, and 62% report that they expect to be using it in the coming year.[1]

The digitalization of HR is seen as a fast track to addressing the massive record-keeping responsibility that is a critical part of the HR function. The urgency of this transition is amplified by what has been called the Great Resignation, the highly publicized outgrowth of the COVID pandemic in which as many as 23% of individuals have voiced plans to leave their current employer in search of better-paying or more rewarding jobs, according to a report in the business magazine Fast Company.[2]

These are some of the challenges faced by the harried HR manager today. There is an intensified need to efficiently fill the numerous requisitions for replacement staff through internet- and social media-based recruiting efforts that can now generate hundreds of responses to help-wanted ads. Everyone who has participated in employee recruitment is well aware of the time demands of sourcing, scanning, sorting, and prioritizing resumes to find suitable candidates for open jobs.

Enter Brielle Bot*, your new AI-based conversational chatbot, who will rescue the overworked HR department by performing many analytical HR tasks in a way that saves time, saves money, and removes human bias, all while objectively matching the best candidates to open positions. Brandon Bot*, a twin brother chatbot, is available for assignments requiring a male voice presence.

Brielle Bot handles in a fraction of a second what a staff of living, breathing recruiters used to do in weeks, while taking no sick or annual leave days, working on a 24/7 schedule, and never complaining about workloads or burnout.

Brielle Bot promises to use objective data analysis and algorithms to recruit, scan, sort, select, and screen resumes; then text or e-mail candidates, conduct pre-screening video or telephone interviews, ask and answer questions, or schedule in-person interviews when needed. These tasks are performed using machine learning skills that objectively evaluate the numerous data sets to select the best candidate.

Brielle Bot can conduct data searches and consider key metrics such as work experience, education, job progression, and skills, measuring candidates against defined criteria and objectively including or eliminating data per instructions while applying machine learning as the database grows. Further, she does these tasks without making conscious or unconscious decisions based on a bias for or against any particular protected class or category of individuals.

A review of marketing information posted on the internet reveals that the leading AI providers identify many well-known names on their client lists. The lists detail such recognizable big-ticket names as Milwaukee Tool, Deloitte, Anixter, Adecco, L'Oréal, Chili's, Arby's, and Wayfair, along with many Fortune 500 firms.

It seems like everyone is jumping on the AI bandwagon. But wait. Questions relating to the fairness of relying on artificial intelligence in employment decisions have been raised in the US, the UK, and the EU.

The Society for Human Resource Management (SHRM) raised concerns in a December 2019 article entitled “AI: Discriminatory Data In, Discrimination Out.” Author Allen Smith, J.D., cautions that employers who use AI in recruiting might inadvertently discriminate against women and minorities if the data fed into the screening platform is flawed. Employers using AI, as well as vendors who create or administer such systems, may be named as defendants in employment discrimination lawsuits.[3]

In October 2021, the US Equal Employment Opportunity Commission (EEOC) launched an Initiative on Artificial Intelligence and Algorithmic Fairness. The EEOC has asserted that while technology is evolving, anti-discrimination laws still apply. The Commission has already defined rules relating to employee selection procedures, and US courts have long held that tests and selection criteria must be job-related and must not have a discriminatory intent or an adverse effect on protected-class individuals.

Acknowledging that artificial intelligence and algorithmic decision-making tools have great potential to improve employment processes, EEOC Chair Charlotte A. Burrows has cautioned that “…these tools may mask and perpetuate bias or create new discrimination barriers to jobs.” The Commission’s initiative will seek to guide applicants, employees, and employers to assure that such tools are used fairly, consistent with non-discrimination laws.[4]

The EEOC’s Uniform Guidelines on Employee Selection Procedures, published in 1978, set a standard of guidance for employers to determine whether their tests and selection procedures were lawful under the disparate impact theory of Title VII of the Civil Rights Act of 1964. The Commission’s sensitivity to the growing use of big data and algorithmic decision-making has prompted the agency to update its existing guidance.[5]

In the UK, the anti-discrimination framework of the Equality Act 2010 offers individuals protection from discrimination, whether it results from human or automated decision-making. Researchers Reuben Binns and Valeria Gallo of the Information Commissioner’s Office (ICO), an independent authority committed to upholding information rights in the public interest, forewarn that AI systems learn from the data they process, but that this does not guarantee that their outputs will be free of human bias or discrimination.[6]

The problem, the ICO researchers suggest, may stem from imbalanced training data used by the machine learning algorithm. If diverse individuals are inadequately represented in the data set, the model may produce biased predictions. Secondly, if the training data reflects past discrimination, such discriminatory practices could be carried forward into the machine learning algorithm’s outputs.[7]
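
To make the ICO researchers’ point concrete, here is a minimal, purely illustrative Python sketch. The records, field names, and values are invented rather than drawn from any real screening system; it simply shows the two checks they describe: whether any group is under-represented in the training data, and whether the historical outcomes the model will learn from already favour one group.

```python
from collections import Counter

# Hypothetical historical screening records used to train a resume-screening
# model. Field names and values are illustrative only.
training_records = [
    {"gender": "F", "hired": 0}, {"gender": "F", "hired": 0},
    {"gender": "F", "hired": 1}, {"gender": "M", "hired": 1},
    {"gender": "M", "hired": 1}, {"gender": "M", "hired": 1},
    {"gender": "M", "hired": 0}, {"gender": "M", "hired": 1},
]

# 1. Representation check: is any group badly under-represented in the data?
counts = Counter(record["gender"] for record in training_records)
total = len(training_records)
for group, n in counts.items():
    print(f"{group}: {n / total:.0%} of training examples")

# 2. Historical-outcome check: do past hiring decisions already favour one group?
for group in counts:
    group_rows = [r for r in training_records if r["gender"] == group]
    hire_rate = sum(r["hired"] for r in group_rows) / len(group_rows)
    print(f"{group}: historical hire rate {hire_rate:.0%}")
```

If either check shows a marked skew, a model trained on that data can be expected to reproduce it, which is exactly the risk the ICO researchers warn about.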

In the European Union, a proposed Artificial Intelligence Act is progressing through the EU legislative process. The proposed law would ban outright AI systems that manipulate individuals, that are used by public authorities for social scoring, or that are used for real-time biometric identification of individuals for law enforcement purposes. It would permit regulated use of AI in product-safety applications, creditworthiness assessments, and recruiting. The regulated-use provisions require providers to feed the AI system with training, validation, and testing data that meet specific quality requirements.[8]

An article in Computer Law & Security Review entitled “Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI” cautions that a clear gap exists between the statistical measures of fairness embedded in myriad fairness toolkits and governance mechanisms and the contextual approach to equality used by the courts to evaluate sensitive issues.[9]

To the wearied HR administrator seeking to come up for air amid a sea of resumes, the call of Brielle Bot is much like the siren song of the lovely mermaid calling to the lonely sailor in the middle of the ocean. Clearly, important issues need to be sorted out as this new technology gains acceptance in the C-suite as a cost-effective tool to improve the efficiency of beleaguered human resources personnel working hard to meet the people needs of the organization.

When evaluating and implementing AI applications for HR, the following tips are offered:

Take care to continue complying with current law. The agencies that enforce non-discrimination laws will continue enforcing the laws and regulations now on the books, and the courts will adjudicate disputes based on applicable law and precedent.

Be alert for new laws being implemented in this rapidly changing landscape. New York City has passed a law requiring bias audits, beginning in 2023, of job-screening tools that use AI.[10] Legislative processes continue in the EU as the proposed Artificial Intelligence Act is considered.

Screen vendors carefully, taking a close look at how their proposed AI services address EEO issues in compliance with applicable law. Be wary of vendors incorporating “no liability” clauses in their service agreements.

Evaluate your AI model design to assure that a) the data used to train the model is representative and balanced, b) the model doesn’t include any discriminating parameters such as gender, age, or ethnicity, and c) the model doesn’t perpetuate any existing skew from past discriminatory employment decisions.
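
As one hedged illustration of point b), the Python sketch below flags model input features that are protected attributes, or obvious proxies for them. The feature and attribute names are hypothetical and not drawn from any specific vendor tool; a real design review would be broader and specific to the system under evaluation.

```python
# Hypothetical guardrail: warn when a model's input features include protected
# attributes (or likely proxies for them). Names below are illustrative only.
PROTECTED_ATTRIBUTES = {"gender", "age", "ethnicity", "date_of_birth"}
SUSPECT_PROXIES = {"first_name", "postcode", "graduation_year"}

def review_feature_list(features: list[str]) -> list[str]:
    """Return a list of warnings for features that should be questioned."""
    warnings = []
    for feature in features:
        name = feature.lower()
        if name in PROTECTED_ATTRIBUTES:
            warnings.append(f"'{feature}' is a protected attribute: remove it")
        elif name in SUSPECT_PROXIES:
            warnings.append(f"'{feature}' may act as a proxy for a protected attribute: justify or remove it")
    return warnings

if __name__ == "__main__":
    proposed_features = ["years_experience", "education_level", "gender", "postcode"]
    for warning in review_feature_list(proposed_features):
        print(warning)
```
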

Train HR staff and line managers on AI policies and procedures. Get a cross-section of input from the various elements of your organization to consider all the departmental perspectives and needs when planning and implementing AI applications in the human resources arena.

Evaluate issues with input from the diverse perspectives of your organization’s employee population, being sure to include the viewpoints of individuals in all protected-class categories.

Conduct pilot programs prior to implementation, testing for unintended adverse effects of AI models.

Periodically monitor and test AI algorithmic assessments for adverse effects.
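
One way to make such pilot testing and periodic monitoring concrete is the “four-fifths rule” from the EEOC’s Uniform Guidelines cited above, under which a group’s selection rate below 80% of the highest group’s rate is generally regarded as evidence of adverse impact. The Python sketch below applies that check to invented applicant counts; it is an assumption-laden illustration, not a legal test.

```python
# Minimal adverse-impact check based on the EEOC "four-fifths rule":
# a group's selection rate below 80% of the highest group's rate is
# generally treated as evidence of adverse impact. Counts are invented.
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    # Impact ratio: each group's rate relative to the most-selected group.
    return {group: rate / highest for group, rate in rates.items()}

if __name__ == "__main__":
    pilot_results = {"group_a": (48, 120), "group_b": (30, 110)}
    for group, ratio in four_fifths_check(pilot_results).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this invented example, group_b’s selection rate is about 68% of group_a’s, so the tool would flag the result for human review.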

Consider designating a chief AI officer to define accountability and raise the degree of professionalism in this emerging field.

In the interest of promoting greater transparency, notify candidates, employees, customers, or other users when they are exposed to or interacting with AI bots.

To guard against violations of the Americans with Disabilities Act, consider incorporating a process that assures reasonable accommodation for qualified individuals with a disability by engaging in dialog and individual assessment of requested accommodations.

* Brielle Bot and Brandon Bot are fictitious names for artificial intelligence robots with female and male voices respectively.

William S. Hubbartt, MSIR, SPHR, is the author of Drawing a Line: A Look Inside the Corporate Response to Sexual Harassment (https://drawingaline175408723.wordpress.com/).

William.hubbartt@att.net

Endnotes:

[1] 25+ HR Automation Stats You Should Know – frevvo Blog

[2] 23% of American workers say they’ll soon quit their jobs (fastcompany.com)

[3] AI: Discriminatory Data In, Discrimination Out (shrm.org)

[4] EEOC Launches Initiative on Artificial Intelligence and Algorithmic Fairness | U.S. Equal Employment Opportunity Commission

[5] Questions and Answers to Clarify and Provide a Common Interpretation of the Uniform Guidelines on Employee Selection Procedures | U.S. Equal Employment Opportunity Commission (eeoc.gov)

[6] Human bias and discrimination in AI systems | ICO

[7] Ibid.

[8] The Artificial Intelligence Act Proposal and its Implications for Member States – EIPA

[9] Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI – ScienceDirect

[10] Silicon Legal: New York City to Require Bias Audits on Job Screening That Uses AI | The Recorder (law.com)
