Amnesty toolkit will help researchers uncover discrimination by algorithm in the public sector
Amnesty International has released a new Algorithmic Accountability Toolkit to help rights defenders, community organizations, and journalists shed light on harms and discrimination caused by algorithmic decision-making “in the public sector realms of welfare, policing, healthcare and education.”
Algorithms increasingly drive automated decision-making within government bodies, raising the risk that people will be excluded from essential services and limiting their ability to challenge decisions made about them, Amnesty said.
Amnesty outlines multiple avenues for researchers and organizations to investigate the use of algorithms in government, ranging from Freedom of Information requests to Subject Access Requests. The latter allow individuals to obtain the personal data an organization holds about them.
Procurement records, such as Requests for Proposals, Memoranda of Understanding and loan agreements, can also reveal the technology supply chain behind an algorithm's development, Amnesty said.
The toolkit also gives researchers a means to audit algorithms themselves using machine learning and statistical analysis, while recognizing that not all community organizations have access to that technical expertise.
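As an illustration of what such a statistical audit can look like, one common starting point is a disparate impact check: comparing the rate of favourable decisions an automated system produces for different demographic groups. The Python sketch below is purely illustrative; the records, group labels, and the 80% ("four-fifths") threshold are assumptions made for this example, not methods prescribed by Amnesty's toolkit.

```python
# A minimal sketch of a disparate impact audit, assuming access to outcome
# data broken down by demographic group. All data here is hypothetical.
from collections import Counter

# Hypothetical records: (group, decision) pairs from an automated system,
# where decision is True if the person received a favourable outcome.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = Counter(group for group, _ in records)
favourable = Counter(group for group, decision in records if decision)

# Selection rate per group: the share of each group receiving a favourable decision.
rates = {group: favourable[group] / totals[group] for group in totals}

# Disparate impact ratio: the worst-off group's rate over the best-off group's rate.
# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
ratio = min(rates.values()) / max(rates.values())

for group, rate in sorted(rates.items()):
    print(f"{group}: selection rate {rate:.2f}")
print(f"disparate impact ratio: {ratio:.2f}" + (" (below 0.8, flagged)" if ratio < 0.8 else ""))
```

On the hypothetical data above, group_a is approved 75% of the time and group_b 25%, giving a ratio of 0.33 and triggering the flag; a real audit would need far more data and careful attention to confounding factors before drawing any conclusion.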
The toolkit emphasizes the importance of participatory research methods and community-led data collection, particularly those that involve the people most affected by algorithmic decision-making.