What Social Impact Practitioners Are Missing About Artificial Intelligence

The Unintended Consequences of AI Blind Spots Can Be Ominous

Why It Matters

AI systems have massive potential, but issues of bias loom. Massive data sets, when put through various filters and systems, can lead to great harm. Impact-focused leaders must understand what these risks mean and how to spot them.

We made it clear in Part 1 that artificial intelligence in Canada is expanding quickly and will influence the HR mix in the world of social impact. For all its benefits, we can't ignore the unintended consequences that arise, or what mathematician and author Cathy O'Neil calls "Weapons of Math Destruction," the title of her book, subtitled How Big Data Increases Inequality and Threatens Democracy.

O'Neil has spent time in academia, at a hedge fund, and in a tech startup. She has theoretical and applied experience worthy of our attention, and she is raising the alarm about potentially harmful applications of AI and machine learning.

Algorithmic models can be punitive, discriminatory and, in some instances
