Amnesty toolkit will help researchers uncover algorithmic discrimination in the public sector

Amnesty International has released a new Algorithmic Accountability Toolkit to help rights defenders, community organizations, and journalists shed light on harms and discrimination caused by algorithmic decision-making “in the public sector realms of welfare, policing, healthcare and education.”

Algorithms increasingly drive automated decision-making within government bodies, raising the risk of exclusion from essential services and limiting individuals' ability to challenge decisions made about them, Amnesty said.

Amnesty outlines multiple avenues for researchers and organizations to investigate the use of algorithms in government, ranging from Freedom of Information requests to Subject Access Requests, which allow people to obtain the data a particular organization holds about them.

Procurement information, such as Requests for Proposals, Memoranda of Understanding and loan agreements, can also uncover the technology supply chain that contributes to the development of an algorithm, Amnesty said. 

Amnesty also provides guidance for researchers to audit algorithms themselves using machine learning and statistical analysis, while recognizing that not all community organizations have access to this technical expertise.
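As a minimal sketch of what such a statistical audit can look like (the toolkit does not prescribe a specific method; the group labels, decision data, and four-fifths threshold here are illustrative assumptions), one common check compares an algorithm's approval rates across demographic groups:

```python
# Hypothetical audit sketch: checking a decision system's outputs for
# disparate impact across groups using the "four-fifths rule" heuristic.
# All data below is invented for illustration.

from collections import defaultdict

def selection_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    Ratios below 0.8 are commonly flagged as potential disparate impact."""
    return min(rates.values()) / max(rates.values())

# Invented example: benefit decisions tagged with a fictional group label.
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),    # group A: 75% approved
    ("B", True), ("B", False), ("B", False), ("B", False),  # group B: 25% approved
]
rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates)            # {'A': 0.75, 'B': 0.25}
print(round(ratio, 2))  # 0.33 -- well below the 0.8 threshold
```

An audit like this needs only the system's inputs and outcomes, not its internal code, which is one reason procurement records and Subject Access Requests are useful starting points for obtaining that data.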

The toolkit emphasizes the importance of participatory research methods and community-led data collection, particularly those that involve the people most affected by algorithmic decision-making.


  • Sharlene Gandhi is the Future of Good editorial fellow on digital transformation.

    Sharlene has been reporting on responsible business, environmental sustainability and technology in the UK and Canada since 2018. She has worked with various organizations during this time, including the Stanford Social Innovation Review, the Pentland Centre for Sustainability in Business at Lancaster University, AIGA Eye on Design, Social Enterprise UK and Nature is a Human Right. Sharlene moved to Toronto in early 2023 to join the Future of Good team, where she has been reporting at the intersections of technology, data and social purpose work. Her reporting has spanned several subject areas, including AI policy, cybersecurity, ethical data collection, and technology partnerships between the private, public and third sectors.
