Tool to help AI developers ascertain potential human rights infringements
The Law Commission of Ontario and the Ontario Human Rights Commission have developed an impact assessment tool to help developers “identify, assess, minimize and avoid discrimination and uphold human rights obligations” in the design of AI systems.
The self-assessment tool asks questions about who the AI system is designed to benefit and who it could harm. It also encourages developers to consider whether AI is a necessary part of the solution to the problem at hand.
The tool further suggests internal strategies for mitigating human rights risks, such as auditing the datasets used to train the AI and establishing thorough processes for addressing human rights infringements should they arise.
The tool is applicable to any organization – public or private – that is developing an AI system.
The Law Commission wrote that while the impact assessment framework focuses on Ontario law, it can be useful for organizations across Canada.
The commissions also cautioned that the human rights impact assessment tool “does not constitute legal advice and does not provide a definitive legal answer regarding any adverse human rights impacts, including violations of federal or provincial human rights law.”
A proposed bill in Ontario aims to boost cybersecurity capabilities in the province’s public sector, which would “provide the groundwork for the responsible use of artificial intelligence,” law firm Norton Rose Fulbright said.
Meanwhile, the federal Artificial Intelligence and Data Act, which would regulate AI systems across the country, has yet to come into force.