What Social Impact Practitioners Are Missing About Artificial Intelligence

The Unintended Consequences of AI Blind Spots Can Be Ominous

Why It Matters

AI systems have massive potential, but issues of bias loom large. Massive data sets, filtered through opaque models and systems, can lead to great harm. Impact-focused leaders must understand these risks and learn how to spot them.

We made it clear in Part 1 that artificial intelligence in Canada is expanding quickly and will influence the HR mix in the world of social impact. For all its benefits, we can’t ignore the unintended consequences that arise, or what mathematician and author Cathy O’Neil calls “Weapons of Math Destruction,” the title of her book, subtitled How Big Data Increases Inequality and Threatens Democracy.

O’Neil has spent time in academia, at a hedge fund, and at a tech startup. She has theoretical and applied experience worthy of our attention, and she is raising the alarm about potentially harmful applications of AI and machine learning.


Algorithmic models can be punitive, discriminatory, and, in some instances, even illegal. Many add an extra layer of harm to already vulnerable populations.

People seeking fair rent, an affordable loan, a fair shot at a job interview, or a fair judgment in court are all profiled in the book.

“A model’s blind spots reflect the judgments and priorities of its creators,” O’Neil writes. “Our own values and desires influence our choices: from the data we choose to collect, to the questions we ask. Models are opinions embedded in mathematics.”

She paints a troubling portrait of an AI industry that wants to do no harm but is prone to causing it, especially at the design phase of an algorithm. She notes that the cost of acquiring data and feeding an algorithm, along with relevant training within social mission organizations, are factors to account for if AI and machine learning programs are to provide value rather than become weaponized (hence the title of her book).

When platforms lack data on the behaviours they are interested in, they may substitute stand-in data, or proxies, which can skew the outcome. She also holds up Moneyball as a contrast to damaging AI models, because of its transparency: “Everyone has access to the stats and can understand, more or less, how they’re interpreted.”
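To make the proxy problem concrete, here is a minimal Python sketch under invented assumptions (the “postal_zone” feature, the group labels, and all probabilities are hypothetical, not drawn from any real platform): a scoring model that never sees a protected attribute, but relies on a correlated proxy, still produces starkly different outcomes by group.

```python
# A minimal, hypothetical sketch of the proxy problem O'Neil describes.
# Every name and number here is an invented assumption for illustration,
# not real data or any real platform's model.
import random

random.seed(42)

def make_applicant():
    group = random.choice(["A", "B"])  # protected attribute; the model never sees it
    # Assumption: residential segregation makes postal zone a near-perfect
    # stand-in for group membership.
    if group == "A":
        postal_zone = 1 if random.random() < 0.9 else 0
    else:
        postal_zone = 0 if random.random() < 0.9 else 1
    qualified = random.random() < 0.5  # true qualification: independent of group, unused by the model
    return group, postal_zone, qualified

applicants = [make_applicant() for _ in range(10_000)]

def score(postal_zone):
    # Stand-in for a trained model that learned to weight the proxy heavily.
    return 0.8 if postal_zone == 1 else 0.2

approved = [(group, score(zone) > 0.5) for group, zone, _ in applicants]

for g in ("A", "B"):
    decisions = [ok for group, ok in approved if group == g]
    print(f"group {g}: approval rate {sum(decisions) / len(decisions):.0%}")
```

Although the protected attribute never enters the model, approval rates diverge sharply (roughly 90 percent for one group, 10 percent for the other), because the proxy encodes group membership almost perfectly. The bias is baked in at the design phase, exactly where O’Neil says the danger lies.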

THE NEW PLAYBOOK

Moneyball fans will recall the central plot: When general manager Billy Beane realized he could not outspend his counterparts (civil society organizations can sympathize), he had to break from conventional scouting and recruiting and write a new playbook, one that could assemble the talent he needed at a price he could afford.

AI-enabled startups share a common set of organizational ingredients that allow them to expand, grow, and adapt. Because these firms are in the express lane of the innovation economy, they can attract top talent in their field.

The impact economy could learn from their playbook.

For one, their org charts include skills and competencies in the domains of service design, data, and technology. Nearly all of them hire specially trained service designers, data analysts, data scientists, or visualization specialists. Nearly all have adopted cloud-based productivity tools to ease collaboration and production. And they almost always have someone in charge of talent and culture.

To counter some of the recent tech backlashes, several AI-centric firms are considering adding AI ethics to ombudsperson functions or creating AI ethics task forces, to further address bias and unintended consequences. Google, for instance, published its AI ethics principles in response to internal staff and external concerns. These guidelines are a natural extension of thoughtful service design principles expressed elsewhere: the notion that the end users of a product want to know their data isn’t being weaponized or exploited.

While baseball is a zero-sum game (only one team wins the series), the work of community organizations is not. That does not mean we should not adopt a similar user-centric, data-driven, and technology-enabled approach.

And while the concept of automation in the workplace, especially in civil society, might be off-putting, there is good evidence that pairing smart humans with powerful technology tends to produce better results than either alone. It is a question of getting the balance right and doing it in a transparent manner.