The Trappings of Artificial Intelligence

The Future of Social Impact Work—and of Workers—Depends on Getting the HR/AI Mix Right

Why It Matters

The AI train isn’t stopping. For social impact and other sectors, the talent mix of the near future will blend human intelligence with machine intelligence. Impact-focused leaders need to think about how they are building teams to make use of, and push back on, an internet economy powered by algorithms.

The 2018 AI Now Symposium focused on ethics, organizing, and accountability within the artificial intelligence field. During their keynote, Kate Crawford and Meredith Whittaker highlighted the dizzying pace at which the AI ecosystem is moving, citing examples that were both extremely positive and deeply concerning.

One positive example is the recently established New York City Task Force to Examine Automated Decision Systems, which sets out a policy framework for when, and when not, to use AI technologies.

Conversely, they cited numerous facial recognition developments at Amazon, Facebook, and IBM that are of grave concern. IBM, for instance, working with NYC officials, covertly developed facial recognition technology trained on CCTV footage, without citizens’ consent to use their images.

Despite the long list of concerning examples, the keynote closed with a talk about hopeful horizons, which include new civil society actors bringing interdisciplinary skill sets to the table: legal scholars, ethnographers, journalists, and health and education workers, among others.

The concerns voiced at AI Now echo questions raised here in Canadian civil society. Specifically: Will artificial intelligence do more harm than good? The question comes up often, but that debate misses the point. Why? Because the chessboard has already been set. Put another way: the AI train isn’t going to stop. The more salient question is: How do we organize and insert the right skill sets into the right AI arenas (service design and delivery, ethics, legislation, research) so that we can improve the lives of more people without creating more harm or inequality?

Civil society organizations need to be at the table in order to make a dent in the AI future of good.

If they don’t, technology giants will have a dominant say in how human intelligence versus machine intelligence plays out. The goal is to align service delivery with an understanding of data and human impact in order to do our best work. A few key books and disruptive companies offer insights.


The authors of Prediction Machines: The Simple Economics of Artificial Intelligence describe the moment when they realized artificial intelligence is different from other technologies. I had my AI moment after being introduced to Logojoy, an online brand-identity creator. It spoke to the amateur graphic designer and entrepreneur in me, as it provides a viable alternative to gig economy platforms and boutique studios whose services can range from hundreds to thousands of dollars.

In addition to reducing the cost to less than a hundred bucks, the site massively reduces the time and creative friction it takes to get to an approved concept: from weeks or months to mere minutes. What’s more, Logojoy can generate more logos in a day than I could in a lifetime. Whoa.

Many of my colleagues like these online logo generators, wowed by the cost, speed, and ability to visually spitball ideas, colours, and taglines. Others have sampled them and returned to human designers.


As with Logojoy, companies such as Kabbage, WealthSimple, Car2Go, Shazam and EatFeast offer a frictionless and always “delightful” user experience. Those experiences got me looking into the greater HR questions underpinning their successes:

  • How are they attracting top talent?
  • What types of skills and competencies are they hiring for?
  • How do they describe the culture they are trying to create for people to do their best work?
  • What technologies are they using to allow people to do their best work?

Logojoy, like other machine learning and AI startups, hires for a common set of skills. Broadly speaking, they are building teams that are end-user centric, data-driven, and technology-enabled. As part of my research on the future of social impact work, I continue to add to a database of career sites, job descriptions, and recruitment platforms. As the picture comes into focus, it’s apparent that these firms are being deliberate about the type of organizational muscle needed to remain industry leaders.

The authors of Prediction Machines call this type of configuration “The New Division of Labour.” Their key point? Both humans and machines have specific strengths and weaknesses to consider, and statistical decision making should complement human judgment, not replace it.

They anchor their point in the 2011 baseball movie Moneyball, the plot of which centres on using stats to identify undervalued talent and turn it into wins. The authors hit a home run when they turn to behavioural economics to show that decision making almost always improves with the assistance of data-driven models. However, these models are not without biases, and those biases can create great harm. We dive into that issue in Part 2 of this series.