
While they aren’t legally binding, Québec’s workplace artificial intelligence guidelines offer a roadmap for how existing privacy law will be interpreted and where legal expectations are headed, said Arianne Bouchard, partner and national co-lead of Dentons Canada LLP’s employment and labour group, in an emailed statement to Benefits Canada.
“The guidance marks a turning point. Employers need to understand that AI isn’t just a tech issue. It touches on employee rights, privacy and workplace fairness, and that means policies, training and transparency need to catch up.”
Earlier this year, Québec’s privacy regulator, the Commission d’accès à l’information (CAI), submitted a landmark brief to the province’s Ministry of Labour warning that AI is transforming the workplace in ways that demand stricter oversight. It outlined how AI tools are already being used to automate hiring, monitor performance and track employee behaviour, calling for a regulatory framework that ensures transparency, fairness and privacy protection.
Read: What are the considerations for employers introducing AI in a unionized workplace?
The brief urges employers to publish internal AI policies that name the systems in use, explain how decisions are made and clarify what data is being collected. It also recommends employers notify staff when AI will be used to support or fully automate decisions that affect them.
For smaller employers, this may sound like a heavy lift. “Start simple. Even a basic internal record that outlines how AI tools are used is better than waiting,” said Alexandra Quigley, senior associate in Dentons’ litigation and privacy groups, in an emailed statement to Benefits Canada. “For smaller businesses, this creates clarity without overloading their administrative capacity.”
The CAI goes further by discouraging the use of AI in areas where it could do more harm than good. This includes systems that try to analyze emotions, use biometrics or make fully automated decisions that significantly impact employees. It also reminds employers to check if vendors are using employee data to train AI models — something that could trigger privacy law violations if done without consent.
“Employers should be asking direct questions,” said Charles Giroux, associate in Dentons’ corporate and privacy groups, in an emailed statement to Benefits Canada. “If your vendor is using employee data for model training, you need to know how that aligns with privacy rules.”
Employers are also encouraged to conduct algorithmic impact assessments alongside their privacy reviews. These assessments should include employee feedback and consider whether an AI tool is necessary, proportionate and free of bias. Although these steps may feel onerous, they offer a layer of protection against future legal or reputational fallout.
Employers should also make sure their use of AI aligns with individual contracts, collective agreements and, where applicable, Québec’s Charter of Human Rights and Freedoms. This includes respecting the right to privacy, dignity and equality in the workplace, as well as the right to fair and reasonable working conditions. Missteps here could lead to grievances or legal action.
Read: E.U. introducing new rules governing use of AI in the workplace
At a minimum, employers should document their AI systems and data use, notify employees of any automation in decision-making, confirm third-party data handling practices and train key staff on privacy obligations. These steps can be scaled to the size and needs of each organization, but skipping them invites unnecessary risk.
Even though Québec’s privacy reforms have already raised the bar, the CAI says more clarity is needed for AI in employment settings. Until that framework arrives, it says the safest path is to adopt a proactive approach that centres on transparency, employee rights and responsible data practices.