As artificial intelligence continues to transform HR, with three in four HR teams now adopting AI and automation tools, a new report* suggests that AI models may reduce, not increase, bias in hiring. The finding comes even as concerns about accountability and fairness continue to worry business leaders and employees alike, and as high-profile cases such as Mobley v. Workday capture headlines. Three-quarters (75%) of HR leaders cite bias as a top concern when evaluating AI tools, second only to data privacy.
Fairer and more consistent across tested demographics
The report, which analyzed “high-risk” AI systems used in talent acquisition and intelligence platforms, reveals that AI may help organizations make fairer decisions than humans do. Most of the AI models audited were found to deliver outcomes that were, on average, fairer and more consistent across the demographic groups tested.
The findings challenge the prevailing public assumption that AI is inherently less fair, and run counter to academic research showing a rising incidence of bias in publicly available AI models. Warden AI’s audits evaluate how AI models perform across different demographic groups, such as male and female candidates. They assess both the overall outcomes and the model’s sensitivity to demographic attributes. In the first test, models are checked to see whether different groups receive similar outcomes, with 85% meeting industry fairness thresholds. In the second, models are checked to see whether changing attributes linked to demographics (such as names) affects the result, with 95% meeting high standards of consistency across groups.
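The report does not publish its test implementations, but the two checks described above correspond to well-known audit patterns. Below is a minimal sketch, assuming the first check behaves like a selection-rate (impact) ratio and the second like a counterfactual name-swap test; the function names, toy model, and data are illustrative assumptions, not Warden AI’s actual methodology.

```python
# Minimal sketch of the two audit patterns described above.
# All names, data, and the toy model are illustrative assumptions.

def impact_ratio(outcomes_by_group):
    """Check 1: do different groups receive similar outcomes?

    outcomes_by_group maps each demographic group to a list of
    binary outcomes (1 = candidate advanced, 0 = rejected).
    Returns the lowest group selection rate divided by the highest;
    1.0 indicates perfect parity.
    """
    rates = {g: sum(o) / len(o) for g, o in outcomes_by_group.items()}
    return min(rates.values()) / max(rates.values())

def counterfactual_consistency(model, candidates, swap):
    """Check 2: does changing a demographic proxy change the result?

    Returns the share of candidates whose score is unchanged when an
    attribute linked to demographics (e.g. the name) is swapped.
    Exact equality is fine here because the toy model is deterministic.
    """
    unchanged = sum(1 for c in candidates if model(c) == model(swap(c)))
    return unchanged / len(candidates)

# Toy screening model that scores on experience only, ignoring the name.
def score(candidate):
    return 0.1 * candidate["experience"]

def swap_name(candidate):
    new = "Lakisha" if candidate["name"] == "Emily" else "Emily"
    return {**candidate, "name": new}

outcomes = {"female": [1, 0, 1, 1], "male": [1, 1, 0, 1]}
pool = [{"name": "Emily", "experience": 5},
        {"name": "Lakisha", "experience": 8}]

print(impact_ratio(outcomes))                              # 1.0 (parity)
print(counterfactual_consistency(score, pool, swap_name))  # 1.0 (name-blind)
```

One common bar for the impact ratio is 0.8, following the US “four-fifths rule” in employment selection, though the report’s exact thresholds are not stated here.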
Some AI systems remain unfair
Not all systems performed equally well: 15% of AI tools fell short of fairness thresholds for at least one demographic group, with performance varying by as much as 40% between vendors. This underscores the importance of carefully selecting responsible vendors to partner with. Nonetheless, the data suggests AI outperforms humans on fairness metrics, with an average fairness score of 0.94, compared with 0.67 for human-led hiring.
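For intuition on those scores, here is a small worked example, assuming (an assumption on our part; the report’s metric is not defined in this summary) that the fairness score behaves like the impact ratio sketched above:

```python
# Illustrative arithmetic only; the selection rates are hypothetical.
# If the score is a ratio of group selection rates, 0.67 means the
# least-favored group advances at roughly two-thirds the rate of the
# most-favored group, while 0.94 is close to parity.
human_led = {"group_a": 0.30, "group_b": 0.20}    # hypothetical rates
ai_assisted = {"group_a": 0.32, "group_b": 0.30}  # hypothetical rates

for label, rates in [("human-led", human_led), ("AI-assisted", ai_assisted)]:
    print(label, round(min(rates.values()) / max(rates.values()), 2))
# human-led 0.67
# AI-assisted 0.94
```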
Notably, the data suggests female candidates experience up to 39% fairer treatment when AI is involved than when decisions are human-led. For racial minority candidates, that figure is even higher, at up to 45%.
Still, HR and talent acquisition leaders remain cautious in their AI buying decisions. Only 11% of HR buyers report ignoring AI risk when assessing vendors, and 46% say a vendor’s clear commitment to Responsible AI is a critical factor in the procurement process.
High-profile cases are damaging AI perception
Warden AI’s research comes at a pivotal moment, as public awareness of AI bias grows. The recent Mobley v. Workday case (a high-profile lawsuit alleging algorithmic discrimination in hiring) underscores the real-world legal, reputational, and financial risks AI bias can pose.
“Business leaders and the public rightfully are concerned about AI bias and its impacts. But this fear is causing us to lose sight of how flawed human decision-making can be and its potential ramifications for equity and equality in the workplace,” said Jeffrey Pole, CEO and co-founder of Warden AI. “As our research shows, AI isn’t automatically a better or worse solution for talent acquisition. This is a wake-up call to HR and business leaders: when used responsibly, AI doesn’t just avoid introducing bias, it can actually help counter inequalities that have long existed in the workplace.”
Kyle Lagunas, Founder and Principal at Kyle & Co., said, “After a decade advising HR and Talent leaders on how to adopt technology responsibly, I’ve seen excitement around AI quickly give way to concern, especially around bias and fairness. But now is the time to lean in—and find real answers to the real risks we face.
“This report brings a number of interesting points together to crystallize this critical conversation. As the findings highlight, while AI bias is real, it is also measurable, manageable, and, thankfully, mitigatable.”
*Report from Warden AI, ‘The State of AI Bias in Talent Acquisition 2025’.