Tool to help AI developers ascertain potential human rights infringements

The Law Commission of Ontario and the Ontario Human Rights Commission have developed an impact assessment tool to help developers “identify, assess, minimize and avoid discrimination and uphold human rights obligations” in the design of AI systems. 

The self-assessment tool asks questions about who the AI system is designed to benefit and who it could harm. It also encourages developers to consider whether AI is even necessary to solve the problem at hand. 

It also suggests internal strategies for mitigating these human rights risks, such as auditing the datasets used to train the AI and establishing thorough processes for dealing with human rights infringements should they arise. 

The tool is applicable to any organization – public or private – that is developing an AI system.

The Law Commission wrote that while the impact assessment framework is designed to focus on Ontarian laws, it can be useful for organizations across Canada. 

The commissions also highlighted that the human rights impact assessment tool “does not constitute legal advice and does not provide a definitive legal answer regarding any adverse human rights impacts, including violations of federal or provincial human rights law.”

A proposed new bill in Ontario is looking to boost cybersecurity capabilities in the province’s public sector, which would “provide the groundwork for the responsible use of artificial intelligence,” law firm Norton Rose Fulbright said. 

Meanwhile, the federal Artificial Intelligence and Data Act, which would regulate high-impact AI systems across the country, has yet to come into force. 

  • Sharlene Gandhi is the Future of Good editorial fellow on digital transformation.

    Sharlene has been reporting on responsible business, environmental sustainability and technology in the UK and Canada since 2018. She has worked with various organizations during this time, including the Stanford Social Innovation Review, the Pentland Centre for Sustainability in Business at Lancaster University, AIGA Eye on Design, Social Enterprise UK and Nature is a Human Right. Sharlene moved to Toronto in early 2023 to join the Future of Good team, where she has been reporting at the intersections of technology, data and social purpose work. Her reporting has spanned several subject areas, including AI policy, cybersecurity, ethical data collection, and technology partnerships between the private, public and third sectors.