Federal department in charge of regulating AI should consider “privacy as a fundamental human right,” say experts
Why It Matters
AIDA is not likely to come into force until 2025, but various organizations – including those in the non-profit sector – have already begun implementing AI-based tools and technologies in their work. The sector must be aware of its obligations to the public regarding applications of AI and data privacy legislation.

A group of 45 experts, academics and non-profit organizations say the federal department responsible for the economic growth of Artificial Intelligence (AI) in Canada should not be in charge of regulating it.
In late September, 45 signatories, including Amnesty International Canada, the Women’s Legal Education and Action Fund (LEAF) and the Canadian Civil Liberties Association, signed an open letter addressed to the Minister in charge of Innovation, Science and Economic Development (ISED) Canada, François-Philippe Champagne. In it, they demand that “ISED should not be the primary or sole drafter of a bill with broad human rights, labour and cultural impacts.”
“It is inappropriate for the regulation of AI to fall completely under the auspices of ISED, whose mandate is to support the economic development of the AI industry,” the letter reads.
“The lack of any public consultation process has resulted in proposed legislation that fails to protect the rights and freedoms of people across Canada from the risks that come with burgeoning developments in AI.”
“The government seems more focused on moving quickly than necessarily doing what is right,” says Matt Hatfield, executive director of OpenMedia, a non-profit advocating for an “open, affordable and surveillance-free” Internet and one of the organizations that have signed the letter.
“Our concern hinges on ISED leading here. It’s a contradiction that the ministry that is directly charged with sponsoring innovation and making sure that Canada is a leader in the technology is also the only ministry that is setting limits on how the technology should be used. We would like to see other government ministries also involved, who have different mandates, to protect Canadians and respect our rights.”
So, what do they want?
The House of Commons is reviewing the Artificial Intelligence and Data Act (AIDA) as part of a broader bill, C-27.
At the time of writing, Bill C-27 has gone through its second reading and referral to the Standing Committee on Industry and Technology. Likely, AIDA will not come into force until 2025, which the government’s Advisory Council on Artificial Intelligence has already raised concerns about, given the rapid development and widespread use of the technology.
The signatories are asking for AIDA to be separated from Bill C-27 and redrafted through a cross-ministerial approach with public consultation.
The group also asks for clearer definitions to prevent “things slipping through the cracks,” Hatfield says. An example raised in the letter is the use of the term “high-impact system” to describe AI technologies, which the letter writers say is “illegible and void of substance.”
Many of the letter’s signatories have been monitoring artificial intelligence over the years, Hatfield says. This year, however, has seen a “major public awareness breakout on generative AI,” he says, adding that we’re now also seeing companies embedding AI into various commercial applications.
“One thing that is not even in the letter, but is one of my deepest concerns here, is the capacity of these new forms of AI that can speak to people in natural language,” Hatfield adds.
“I’m concerned about harnessing that persuasive power that AI could have to systems that are trying to talk people into buying things or adopting certain political positions or beliefs.”
ISED may also “not want to step on anything that could make a lot of money for Canadian businesses,” says Hatfield, noting there may be an economic incentive for Canada to position itself as less regulated than other jurisdictions such as the European Union.
“Some of us [who have signed the letter] have mostly been talking about privacy, and some of us have mostly been talking about AI, and we’re being forced to mash those conversations and concerns together. There are many concerns that are completely distinct to each.”
As part of the AIDA, ISED also envisions creating a new role for an AI and Data Commissioner. “Codifying the role of Commissioner would separate the functions from other activities within ISED and allow the Commissioner to build a centre of expertise in AI regulation,” says the AIDA companion document.
The AI and Data Commissioner would have a specific mandate to “lead the development of cross-sectoral standards” by coordinating with or supporting other regulators, says a spokesperson for ISED.
The letter to ISED points out that placing a Commissioner within the ministry undermines that individual’s ability to conduct independent reviews of the state of AI in Canada. On the other hand, Hatfield suggests that the role could sit within the Office of the Privacy Commissioner – “although concerns go well beyond privacy,” he adds.
What has been the reaction to the letter so far?
Following the publication of the letter and the advocacy work that led up to it, OpenMedia and other signatories have had some responses and conversations with ISED officials, says Hatfield.
The Minister of Innovation, Science and Industry “has listened to stakeholder concerns and is open to amendments to AIDA in key areas that would help it to meet its objectives and build trust in the framework,” says an ISED spokesperson.
These pieces of legislation had not been a priority for, or received sustained attention from, the federal government for a long time, Hatfield says. Because of this gap, individual departments such as Health Canada and the Office of the Superintendent of Financial Institutions have already updated their own guidelines for using AI in their respective industries.
The letter also stresses “the lack of structured, deliberative, and wide-ranging consultations before and since tabling AIDA is anti-democratic, and it has deprived people in Canada of the rights-protecting, stress-tested AI legislation they need.”
There was never a public consultation on AI, Hatfield adds.
“Certain issues may not even have been identified because they haven’t had a chance to have that big, public back-and-forth,” he says.
“I know [the government] have had quite a few private meetings with a few non-industry folks, but a lot of it has been with industry boosters and people from the AI industry.”
In September 2023, ISED published a voluntary code of conduct for the responsible development and management of advanced generative AI systems, which was the culmination of a consultation period, ISED’s media team pointed out.
The code of conduct – signed by 14 organizations, including BlackBerry, TELUS and the Council of Canadian Innovators – requires technologists to commit to safe, fair, equitable and transparent development and “human oversight and monitoring.”
As well as the existing Advisory Council on Artificial Intelligence, the Government of Canada launched a Public Awareness Working Group in 2020 “as a means of listening to and informing citizens in the context of this rapidly evolving Canadian AI ecosystem.” Workshops took place in 2021, with a final report published in February 2023. The Working Group made several recommendations, including creating a free AI literacy course, public information campaigns, an Equity, Diversity, Inclusion and Accessibility Strategy, and access to high-speed internet.
Members of the working group also highlighted that while AI could have perceived positive impacts on industries such as manufacturing and transportation, there are significant negative implications to applying AI in law enforcement. The working group plans to release a further report after engaging with Indigenous communities.
Since the letter, OpenMedia has launched an AIDA-focused action, allowing people to write to their MPs about some of the calls to action addressed in the letter.
“So much of the public discourse is being sucked up by people fighting about existential risk. There are people who think the robots are going to be ten times as smart as us tomorrow, and the people who are very angry at those people,” Hatfield adds. “I’m not sure that is the most important thing to be looking at.
“Some of the most informed and passionate people in AI are being distracted by this conversation, which doesn’t speak to the more immediate, clear and big short-term problem with AI.”