Diversity, equity, inclusion and belonging (DEIB) have received a lot of attention over the past few years. Employees are putting the onus on their employers to show that they are serious about the subject. In fact, 76% of employees recently surveyed said that DEIB matters are one of the most important factors when choosing an employer.
For HR teams, that means finding new and better ways to strengthen these initiatives – not only so that employees feel the organisation takes them seriously, but also to show candidates that their future workplace is somewhere they can be their true selves. One technology with untapped potential in this area is AI, and as it continues to grow in importance in HR, it can play a key role in driving real change.
AI has become more than just an algorithm machine that streamlines processes and helps with recommendations and predictions. It is now advanced enough to assess opportunities within an organisation – for example, skills gaps among employees – and match those gaps with tailored learning content, helping to close them and give employees a personalised career and development journey. If built and used properly, it can also help remove bias from processes like CV screening, which drives better inclusion and diversity. For this, organisations must approach AI in the right way, for example by looking carefully at bias management and relying on a proper ethical framework to ensure the AI performs ethically.
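The skills-gap matching described above can be sketched in a few lines. This is a minimal illustration only; the role names, skills and course titles are all hypothetical, and a real HR system would infer skills from richer data rather than hand-written sets.

```python
# Minimal sketch of skills-gap matching: compare an employee's
# current skills against a target role's required skills, then
# map the gaps to learning content. All names are hypothetical.

role_skills = {
    "data analyst": {"sql", "statistics", "visualisation"},
}

learning_catalogue = {
    "sql": "Intro to SQL",
    "statistics": "Statistics Fundamentals",
    "visualisation": "Data Visualisation Basics",
}

def recommend(employee_skills, target_role):
    """Return learning content for skills the employee is missing."""
    gaps = role_skills[target_role] - employee_skills
    return sorted(learning_catalogue[skill] for skill in gaps)

# An employee who already knows SQL gets the remaining two courses.
print(recommend({"sql"}, "data analyst"))
```

The point of the sketch is the shape of the personalisation loop: requirements minus current skills equals the gap, and the gap drives the individual development journey.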
So, how can organisations make sure that they are using AI to its full potential, without causing existing inequities to worsen?
Ethical AI in HR
Behaving ethically in the workplace is nothing new. Many organisations set out guidelines for how employees should behave through purpose statements, values and other HR initiatives, and the same should be set out for ethical AI. If HR teams are using AI for hiring, promotion or salary recommendations based on employees’ skills and performance, they need to consider how to remove discrimination linked to characteristics such as race and socioeconomic status. However, it’s not as easy as simply writing an ethical algorithm.
Successful ethical AI rests on the foundations of the data – it needs unbiased training data to be able to make ethical decisions. Personality and aptitude tests, for example, are popular hiring tools among organisations and HR teams, yet they may contain inherent biases if the questions aren’t thought through carefully. A question like “How many planes take off from Heathrow every day?” might sound like a standard aptitude question, but for someone who has never been on a plane or in an airport – which could be down to their socioeconomic status – it creates a natural bias towards those with a more affluent, travelling lifestyle.
How do I know if my AI is ethical?
Even once the AI is set up using ethical data, it won’t necessarily stay that way. Over time, the algorithm may change and adapt in ways that are difficult to predict, so it must be monitored on an ongoing basis to ensure it remains within the ethical boundaries of the organisation. When using AI for candidate recommendations in the hiring process, for example, humans should remain involved and compare the outputs of the machine with those of the HR team. The AI may be favouring individuals with university degrees because it is learning from existing employee data, or it may be biased towards male candidates because of the disparity between female and male employees in the organisation. Monitoring these kinds of indicators regularly will help to avoid bigger problems down the line. Even if the AI comes from an external vendor, as is often the case, vendors cannot guarantee 100% against bias – particularly in the context of your organisation – so it remains the responsibility (and the power) of the HR teams using the technology to ensure it performs ethically.
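One common way to put this kind of monitoring into numbers is the “four-fifths” adverse-impact ratio: compare the selection rate of the least-selected group with that of the most-selected group, and investigate when the ratio drops below 0.8. The figures, group labels and threshold below are illustrative assumptions, not output from any specific HR product.

```python
# Sketch of ongoing bias monitoring for AI candidate recommendations,
# using the "four-fifths" adverse-impact ratio as the indicator.
# All data and the 0.8 threshold are illustrative assumptions.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, total applicants)."""
    return {group: sel / tot for group, (sel, tot) in outcomes.items()}

def adverse_impact_ratio(outcomes):
    """Lowest group selection rate divided by the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Example: AI shortlisting outcomes by gender over one review period.
ai_outcomes = {
    "female": (12, 80),   # 15% selected
    "male": (30, 100),    # 30% selected
}

ratio = adverse_impact_ratio(ai_outcomes)
if ratio < 0.8:  # common rule-of-thumb threshold for review
    print(f"Review needed: adverse impact ratio {ratio:.2f}")
```

Running the same check on the HR team’s own shortlists gives the human baseline to compare the machine against, which is exactly the human-in-the-loop comparison described above.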
A driver of change for DEIB
Using AI within HR is an ongoing process, especially when applying it to DEIB opportunities – it’s about setting goals, making progress towards those goals and course-correcting. With gender pay gap reporting, for example, if a target of 10% is set for the gender pay gap, HR teams need to set accountability goals, monitor the organisation’s progress towards them and adjust the AI along the way.
This creates an opportunity for HR and diversity teams to develop additional competencies, so they are better equipped to ensure progress on ethical AI and DEIB goals. While using AI applications in HR hasn’t yet become the norm, it can prove a powerful tool when used in the right way, allowing businesses to become more people-centric and forward-thinking than ever before.