Using AI to gauge employee well-being, satisfaction

Artificial intelligence has the potential to make a variety of human resources tasks faster and more efficient, but while the technology is sophisticated, employers should be aware of its limitations and risks, said Matissa Hollister, assistant professor of organizational behaviour at McGill University’s Desautels Faculty of Management, during Benefits Canada’s 2023 Future of Work Summit.

“There’s this interesting paradox, which is that both of the following statements are true: AI has the potential to make things more fair and AI has the potential to make things much worse.”

“This creates a real challenge in deciding to use AI systems, especially in the employment context,” said Hollister, who designed and led a World Economic Forum project in 2019 and 2020 to create a guide on the responsible use of AI in HR tasks.

Read: Does artificial intelligence have a place in human resources?

The past year has seen a boom in AI-powered tools and the rapid advancement of generative AI products such as OpenAI’s ChatGPT and Microsoft Corp.’s Bing chatbot. But their widespread adoption has heightened worries about the potential for these tools to entrench inequities and displace workers in certain sectors.

Hollister also advised employers to watch for forthcoming AI-specific regulations, including Canada’s Artificial Intelligence and Data Act and the European Union’s AI Act, the latter of which includes specific provisions on the use of AI in employment and HR tasks. The use of AI can also fall under existing adverse impact, labour and data privacy laws, she added.

There are two types of AI, both underpinned by machine learning techniques in which a computer is given a massive amount of data from which it can learn, find patterns and make predictions. Task-based AI is designed to use that data to perform one specific task incredibly well, said Hollister, and represents the vast majority of AI tools on the market. Generative AI tools, on the other hand, are trained on vast amounts of data so they can create text, images or video.

Task-based AI has been used in the HR space to try to identify top job candidates, scan job postings to flag language that might discourage diverse candidates from applying and more, she noted.

Read: Humi using ChatGPT to remove administrative burden from HR, recruitment processes

But the way these algorithms are built reveals two challenges employers will need to wrestle with. Using an example of a tool for recommending job candidates, Hollister said the algorithm would be tasked with determining applicant characteristics that are associated with people who perform a certain job very well. To do that, it would be fed data from applicant profiles to compare against training data on previously successful employees and those who didn’t perform as well.
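To make those mechanics concrete, here is a minimal sketch of how such a screening tool could be built, assuming a hypothetical dataset of past hires labelled by performance. The feature names, labels and library choice are illustrative, not details from Hollister’s talk.

```python
# Minimal sketch of a task-based candidate-screening model.
# The dataset, features and "high_performer" label are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Training data: characteristics of previous employees, labelled by
# whether they performed the job well.
past_hires = pd.DataFrame({
    "years_experience": [1, 4, 7, 2, 10, 3, 6, 8],
    "referral":         [0, 1, 1, 0, 1, 0, 0, 1],
    "degree":           [1, 1, 0, 0, 1, 1, 0, 1],
    "high_performer":   [0, 1, 1, 0, 1, 0, 1, 1],
})

X = past_hires.drop(columns="high_performer")
y = past_hires["high_performer"]

# The model learns which applicant characteristics were associated
# with high performers among past employees.
model = LogisticRegression().fit(X, y)

# New applicants are scored by how closely their profiles resemble
# past high performers: a pattern match, not a guarantee.
applicants = pd.DataFrame({
    "years_experience": [5, 2],
    "referral":         [0, 1],
    "degree":           [1, 0],
})
print(model.predict_proba(applicants)[:, 1])
```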

Even if the algorithm identifies a strong candidate, there’s no way to guarantee a successful outcome, she said. Plenty of external factors that aren’t measured by an algorithm can affect someone’s job performance, including relationships with managers and colleagues, unexpected events in their personal lives and more.

As well, the decision about which applicant profile details the algorithm is given to evaluate and which metrics are considered relevant training data reflects human behaviour “and therefore is not objective,” she said. For example, years of unconscious bias against women or minority groups at the hiring and performance evaluation stages could embed inequities in the training data and therefore skew the tool’s recommendations.
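One common way to surface that kind of skew, which Hollister did not detail but which maps onto the adverse impact laws mentioned above, is the four-fifths rule check used in U.S. employment guidance: compare each group’s selection rate against the most-selected group. The numbers below are hypothetical.

```python
# Illustrative adverse-impact check using the four-fifths rule.
# Counts are hypothetical; "recommended" means the tool advanced
# the candidate, "applied" is the applicant pool per group.
recommended = {"group_a": 40, "group_b": 18}
applied     = {"group_a": 100, "group_b": 100}

rates = {g: recommended[g] / applied[g] for g in applied}
top_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / top_rate
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} ({flag})")
```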

Employers that are considering creating or procuring a task-based AI tool need to know where the training data came from, what outcome the tool is trying to achieve and what inputs it uses to make its predictions, said Hollister.

Read: Canadian employees have mixed emotions about impact of AI on career, skills: survey

She also recommended seeking perspectives from stakeholders with diverse backgrounds across the company, who may be able to identify unintended biases in the training data, inputs or intended outcome.

Meanwhile, generative AI could be used in some HR functions, such as developing job interview questions or writing emails, with Hollister pointing to a study that found it was used successfully to provide customer service agents with suggestions on how to respond to clients.

But Hollister cautioned employers to use it with guardrails, including not entering any confidential company data into an LLM chatbot, since that information can become part of its training data. The text these tools generate is better used as a starting point than accepted wholesale, she added, since it’s created from the tool’s prediction of which words should follow each other rather than a genuine understanding of language and intent.
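Her point that these tools predict which words should follow each other, rather than understand them, can be illustrated with a toy next-word model. Real LLMs are far larger neural networks operating over tokens, but the training objective is the same kind of prediction; the corpus below is made up.

```python
# Toy next-word predictor: count which word follows which in a small
# corpus, then generate text by always picking the most frequent
# successor. The output looks fluent with no understanding behind it.
from collections import Counter, defaultdict

corpus = ("thank you for your email . i will review the report "
          "and respond by friday . thank you for your patience .").split()

successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

word = "thank"
output = [word]
for _ in range(8):
    if word not in successors:
        break
    word = successors[word].most_common(1)[0][0]  # most likely next word
    output.append(word)

print(" ".join(output))
```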

Read more coverage of the 2023 Future of Work Summit.