As one recent article on artificial intelligence (AI) use in the non-profit sector remarks, “used poorly, there is no doubt that artificial intelligence can serve to automate bias and disconnection, rather than supporting community resilience.”
For the social sector in Canada, a values-driven, human-centred, inclusive process of development can help to mitigate the ethical risks of developing artificial intelligence.
Nonprofits in the U.S. are already starting to use AI in app development, accessing and analyzing massive amounts of open-source data to, for example, report and rate experiences with police officers, or to identify high-risk texters and dramatically shorten response times for crisis counselling and suicide prevention.
The use of AI enables the surfacing of patterns detectable from reams of information, some of which is very nuanced.