Human and civil rights groups call for withdrawal of Canada’s AI legislation

Critics are concerned about a lack of public consultation, little accountability for artificial intelligence use by public bodies, and no mention of Indigenous data sovereignty.

Why It Matters

In Budget 2024, the federal government announced $5.1 million to enforce the Artificial Intelligence and Data Act. Most consultations have been with industry, while human rights groups and non-profits have had little involvement in crafting the legislation to date.

Prime Minister Justin Trudeau at a conference in November 2023. Weeks after the federal budget was dropped, several groups say the country’s AI legislation needs to be pulled and redone. (Facebook/Supplied)

Three weeks after the federal government announced a $2.4-billion investment in the Canadian artificial intelligence ecosystem, civil society organizations, human rights campaigners, and academics say the proposed AI legislation should be withdrawn. 

Led by digital rights non-profit OpenMedia, the open letter follows previous communications to Innovation, Science and Industry Minister François-Philippe Champagne, in which a group of signatories asked to have the Artificial Intelligence and Data Act (AIDA) decoupled from the government’s proposed privacy bill, Bill C-27. 

In another letter, signatories said Innovation, Science and Economic Development Canada, the federal department responsible for growing the AI sector in the country, should not also be tasked with regulating it.

“From a human rights perspective, AIDA is hugely problematic in its current form,” noted a priority recommendations package submitted to the Standing Committee on Industry and Technology (INDU) in March.

“It fails to recognize that fundamental human rights must take precedence over narrow, commercial interests. It fails to recognize AI’s potential to cause harm to identifiable groups, not just to individuals. And it fails to recognize that the human rights risks arising from AI development are inequitably distributed, with negative impacts falling disproportionately on individuals and groups that are already marginalized.”

“It’ll be a huge PR win for the government to have seen this [bill] pass,” said Matt Hatfield, executive director at OpenMedia. 

“But our concern – and the concern of many of the groups that signed this letter – has been that they’re not making the legislation as good as it could be because they haven’t taken the time to get it right.”

The Conservative Party is “publicly trying to delay almost all legislation in order to minimize the accomplishments of this government, which is very frustrating,” Hatfield added.

The history of a ‘hasty, confusing and rushed’ bill

Following feedback from stakeholders, Minister Champagne proposed several amendments to AIDA at the end of 2023. Most notably, the definition of “high-impact” AI systems was clarified and divided into several classes where AI might be applied: recruitment and employment, service provision, biometric information, content moderation, healthcare and emergency services, justice and courts, and law enforcement. 

The amendments also granted more investigatory and audit powers to the proposed independent Artificial Intelligence and Data Commissioner, where previously, these powers would have been held by the Minister. The Commissioner could also share information with other federal agencies, such as Health Canada, the Canadian Human Rights Commission, and the Financial Consumer Agency of Canada. 

The Commissioner, however, would still sit within Innovation, Science and Economic Development Canada (ISED). 

In their recommendations package, OpenMedia, the International Civil Liberties Monitoring Group and the Privacy and Access Council of Canada highlight that these amendments still fail to address two vital issues: the amplification of misinformation and disinformation, and the potential use of artificial intelligence in weaponry and military systems. 

In a separate submission to the Standing Committee on Industry and Technology in January, the Canadian Union of Public Employees (CUPE) suggested that Canada should follow the European Union’s example and ban specific applications of AI altogether. 

“Banned applications in the EU AI Act include cognitive behavioural manipulation, untargeted scraping of facial images, emotion recognition in the workplace and educational institutions, social scoring, and biometric identification and categorization of people,” CUPE wrote in their submission.

“Minister Champagne’s amendments would allow AI technologies to identify individuals based on their biometric information and assess their behaviour or state of mind.”

In his proposed amendments to AIDA, Minister Champagne also argued that the legislation should pass now. 

“The costs of a delay to Canadians would be significant,” he wrote. “Failing to act now would mean that AI systems, which are being used across many industries and sectors, remain unregulated in Canada for many more years.” 

Canada currently has no enforceable AI regulation; the closest instrument is a voluntary code of conduct, which 23 organizations have signed. 

Meanwhile, the European Parliament approved its own legislation, the EU AI Act, in March 2024. 

In an email follow-up with the ISED media team, a representative confirmed that the Standing Committee on Industry and Technology is still studying the bill. It will then be referred back to the House of Commons for third reading, and afterwards to the Senate. 

“Post-Royal Assent of AIDA, ISED will move quickly to develop regulations under the Act and establish the Office of the AI and Data Commissioner,” the representative said, adding that Budget 2024 proposed $5.1 million for the new commissioner’s office in 2025-26. 

Lack of public consultation concerning 

Hatfield said that when the government announces significant legislation, it typically opens up a public review process for a few months to incorporate perspectives into the legislation. 

In the case of AIDA, the federal government has instead favoured private meetings with stakeholders in industry, he added. 

“This Bill is first and foremost being written for industry,” Hatfield said. “[The government] is trying to do cleanup to also make it relatively good for Canadians. But the primary purpose here is to assure the industry that Canada is going to be a great place – and an intentionally lower [regulatory] standard than the EU – to do development.”

Future of Good asked ISED four separate questions: who had been engaged thus far in developing AIDA, whether non-profits and community groups had been consulted, whether human rights groups had been consulted, and whether the federal department had plans to conduct a full public consultation. 

In response to all four questions, ISED’s representative wrote that the government has held more than 300 engagements with “a broad range of stakeholders from academia, civil society and industry, and human rights groups. These include the International Civil Liberties Monitoring Group, the Distributed AI Research Institute and Citizen Lab.” 

ISED provided a complete list of stakeholders engaged between June 2022, when Bill C-27 was tabled, and September 2023. Of the 315 meetings listed, some stakeholders appear multiple times: Amazon is listed six times, the Canadian Bankers Association and the Canadian Marketing Association 12 times each, and Microsoft 15 times, including as part of a consultation on the AI code of conduct. 

Amii, Mila, and the Vector Institute—the three national AI research hubs—met with the government 11 times between them.

The government has engaged with the Office of the Privacy Commissioner 15 times, and only twice with the Canadian Human Rights Commission.

The glaring omission is anyone not involved in technology at all, according to Bianca Wylie, a partner at Digital Public, a research and advocacy organization focused on digital governance. 

The government is speaking to too narrow a group, most of them parties with a vested interest in developing the country’s AI sector, Wylie said.

“What about a nurse or a teacher?” she asked. “How would someone who has never heard of this topic walk into the room and talk about AI?”

She mentioned staff in the non-profit sector who may be feeling the pressure to shake up or reframe their entire portfolio of work. 

Equally, she pointed out that those working in sectors outside of technology, such as health and finance, should be able to comment on AI applications in their respective domains and in light of pre-existing regulatory frameworks. 

If the government were to begin a full public consultation over the summer, giving people a few months to provide feedback, they could have amended legislation by the end of the year or in early 2025, said Hatfield. 

Indigenous rights at risk

In Canada, Hatfield said, Indigenous communities “ought to be a significant stakeholder when we’re talking about this comprehensive reshaping of people’s rights.” 

The Assembly of First Nations (AFN) raised this issue in its submission to the Standing Committee on Industry and Technology, writing that “the process is flawed because there was no Nation-to-Nation consultation between Canada and First Nations. As a result, the Minister did not hear First Nations and does not understand First Nations, and it shows in the legislation.”

Under the United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP), First Nations have the right to own and govern their own data, including making decisions about how the information is collected and for what purpose. 

AFN wrote in their submission that “Bill C-27 infringes these and other rights in both the process of its development and its substance.” 

“Litigation by First Nations is likely if the Crown fails to meet its obligations, which could result in the suspension of the legislation entirely,” they wrote.

“Litigation is expensive, and the crown loses more times than not. Better to get it right the first time and meet everyone’s desires for a safe and secure digital future.” 

The AFN’s submission points out that ISED’s language about consulting stakeholders does not mean it is consulting rights holders. It also describes the exemption of certain public bodies, such as National Defence, the Canadian Security Intelligence Service, and the Communications Security Establishment, as “chilling.”

“First Nations are left to wonder and worry because it is precisely these tools of the colonial government that have been used to oppress First Nations in the past,” the AFN wrote, referencing “the collective failure to protect Indigenous women and girls” and “the chilling effect of Harper’s Government direction to Indigenous and Northern Affairs Canada to ramp up surveillance on First Nations individuals and communities.”

Over-surveillance and over-policing of historically marginalized communities are a particular focus at the BC Civil Liberties Association, said Aislin Jackson, policy staff counsel. 

The rise of “predictive policing” and the lack of disclosure about algorithmic use in law enforcement are concerning, they added.

Rights can and should be codified in legislation, but Canadians still face significant barriers in accessing the justice system, said Wylie.

The Women’s Legal Education and Action Fund also acknowledged this in their submission to the Standing Committee: “It takes time and effort to detect and challenge the harm of AI. For many people, they may not become aware they were subjected to harm by AI, unless they have the capacity to follow up and investigate.”

Government not subject to current AI regulation

Joanna Redden, an associate professor at Western University, recently found that the federal government has already begun implementing artificial intelligence in various projects and initiatives. Her database shows more than 300 tools with predictive and automated technologies in government departments.

“AIDA is not designed to apply to the public sector,” the ISED representative said. 

“It seeks primarily to address risks arising from the development and deployment of AI systems in the course of commercial activities.” 

The Directive on Automated Decision-Making is a mandatory policy for all federal departments, as are guidelines published in 2023 on how to use generative AI. Both are issued by the Treasury Board Secretariat. 

In his amendments to AIDA, Minister Champagne recognized that many of the AI systems used by governments are developed and managed by private sector organizations, and that these systems need to undergo appropriate risk management. 

“AIDA would still place requirements on systems and services marketed by the private sector and utilized in the public sector. […] If a high-impact system is made available in the course of international and interprovincial trade for use by police, court or healthcare authorities, then AIDA will apply.” 

Both Wylie and Jackson noted that Canadian institutions and organizations often rely on technology developed in the United States and data that flows through data centres outside of Canada in fundamentally different privacy jurisdictions.

“There is this porous boundary between public and private surveillance that is really concerning in the age of mass data analysis and AI,” Jackson said.


Sharlene Gandhi is the Future of Good editorial fellow on digital transformation.

Sharlene has been reporting on responsible business, environmental sustainability and technology in the UK and Canada since 2018. She has worked with various organizations during this time, including the Stanford Social Innovation Review, the Pentland Centre for Sustainability in Business at Lancaster University, AIGA Eye on Design, Social Enterprise UK and Nature is a Human Right. Sharlene moved to Toronto in early 2023 to join the Future of Good team, where she has been reporting at the intersections of technology, data and social purpose work. Her reporting has spanned several subject areas, including AI policy, cybersecurity, ethical data collection, and technology partnerships between the private, public and third sectors.