Artificial intelligence is everywhere and will affect every sector, said Nathalie de Marcellis-Warin, president and chief executive officer of the Center for Interuniversity Research and Analysis of Organizations (CIRANO), during a keynote session at Benefits Canada's 2020 DC Plan Summit in Montreal in February.
AI is becoming disruptive, with startups knocking at the doors of big businesses like insurance companies to offer their services. While most of these AI experts come from computer science or technology backgrounds, they can assist in areas like detecting fraud or building algorithms.
However, with the abundance of AI come ethical considerations, she said, citing virtual chatbots that listen to everything as an example. One issue of utmost importance is how AI gathers and uses personal data.
On the other hand, robo-advisors are an example of helpful AI, since they can help advisors make better decisions or answer routine questions, said de Marcellis-Warin, noting they can also provide on-demand services overnight or on weekends. Either way, with the introduction of new technologies, people will be displaced or see their roles evolve.
Also, there are still issues around trust, she added, referring to a 2017 survey by HSBC Bank Canada that found Canadians are twice as likely to trust a robot to perform open-heart surgery as to open a bank account for them.
In a survey of Quebecers’ perception of AI, CIRANO found 35 per cent are concerned about the development of AI and robots, while 42 per cent think the development of AI will make companies more efficient. About half (53 per cent) believe AI will lead to lost jobs in the province and 61 per cent think AI will change many work tasks.
Canadian researchers are working on developing AI and machine learning, but they're also aware of the potential for abuse, said de Marcellis-Warin. "AI is real and the dangers are very real." For example, human beings have biases, and there have been examples of AI amplifying discrimination and bias. AI can also be used to make deepfake videos that spread lies on the internet, she added.
In 2018, the Montreal Declaration for the Responsible Development of AI was launched to combat the dangerous use of AI. It brought together various stakeholders to develop 10 principles and 60 subprinciples for the responsible development of AI.
One subprinciple is the solidarity principle, which highlights the goal of ensuring AI focuses on collaboration with humans on complex tasks and facilitating collaboration between humans. “You can develop a robot. You can develop a chatbot,” said de Marcellis-Warin. “But think about your employees around that — your consumer, your customers.”