As one recent article on artificial intelligence (AI) use in the non-profit sector remarks, “used poorly, there is no doubt that artificial intelligence can serve to automate bias and disconnection, rather than supporting community resilience.”
For the social sector in Canada, a values-driven, human-centred, inclusive process of development can help to mitigate the ethical risks of developing artificial intelligence.
Nonprofits in the U.S. are already using AI in app development, accessing and analyzing massive amounts of open-source data to, for example, report and rate experiences with police officers, or to identify high-risk texters and dramatically shorten response times for crisis counselling and suicide prevention.
The use of AI surfaces patterns, some of them very nuanced, from reams of information.