The Changes That Artificial Intelligence Will Bring to the Social Sector

AI for good (and not-so-good) has already arrived

Why It Matters

When we speak about AI, we’re not talking about some far-off future. AI has fully arrived, both in Canada and around the world, and it is already impacting society — from the way we feed our families down to the way crisis line workers do their jobs. James Stauch and Alina Turner continue their series on AI and social impact.

At some point in the coming decades, experts predict we’ll reach the “technological singularity” — the point at which artificial intelligence abruptly triggers runaway technological growth, resulting in unfathomable changes to human civilization. A technological singularity could send humanity down any of several paths, from the harrowing enslavement or extinction of our species to the exhilarating possibility of becoming an incalculably more powerful and omniscient species, perhaps even conquering mortality itself.

It would be naïve, if not downright ignorant, to think that the social sector is not already profoundly impacted by these changes. Our daily lives are already algorithmically augmented and assisted in countless ways, from our Netflix-viewing and Spotify-listening preferences, to predictive search results, news feeds, and chatbots, to ride-sharing and air transportation. Since their introduction in 2014, over 100 million Alexa-enabled devices have been sold.

But how do AI and machine learning play out in the world of social impact?


Current applications of AI in social impact

We’ve already seen racial and gender bias revealed in algorithms for hiring, policing, judicial sentencing, and financial services. We have even borne witness to socially malicious applications of AI, such as the use of bots to generate and amplify anti-vaxxer social media posts. Beyond this, AI is already disrupting employment patterns and the job market, and likely intensifying patterns of inequality, at least in the near term. 

At the same time, from a social impact standpoint, certain aspects of AI are absolutely beneficial. For example, Facebook’s “proactive detection” technology identifies self-harming and suicide-risk behaviours. Machine learning is making life much easier for people with disabilities, thanks to tools like voice recognition and speech-to-text. Instant translation, meanwhile, is breaking down communication barriers across languages, with positive social impacts.

While we have yet to see the full impact of the current transformation, we can reasonably anticipate that its effects on society and the social impact sector will be significant. In contemporary conversations about how we tackle the “social deficit” and our seemingly intractable complex challenges and “wicked problems,” it is impossible not to consider the role of AI.

So what role is AI already playing in the social impact space? So far, the use of new technologies to address social issues remains relatively underdeveloped, but we are seeing early experimentation with machine learning among non-profit organizations in the United States, where analysis of open data has, for instance, enabled the reporting and rating of racially motivated harassment by police officers. Deep learning has also been used to identify “high-risk” texters, dramatically shortening response times for crisis counselling.

Below, we explore several fields where AI is already rapidly changing the way we produce social impact.


Poverty & Food Security

In Canada alone, one in seven people currently lives in poverty. Zoom out to a global lens and the picture is starker still: roughly half of the world’s population — over 3 billion people — lives on less than $2.50 a day.

To address social challenges like poverty, satellite imagery and algorithms are already being used to identify wealthy and poor regions. With this knowledge, policies and interventions can be targeted to the areas most in need. AI can also be used to target educational training or food systems, and to make predictions that help prevent social problems before they emerge. Such predictive models have already been deployed in the United States to identify homeless persons likely to become high-cost users of public services, or families at the highest risk of homelessness.
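As a rough sketch of how a predictive model of this kind works — not any agency’s actual system — the example below trains a simple classifier on a handful of hypothetical administrative features and ranks households by predicted risk, so that limited prevention resources can be prioritized. The features, data, and labels are all invented for illustration.

```python
# A minimal, illustrative risk-scoring sketch: train a classifier on
# hypothetical administrative features and rank households by predicted
# risk of homelessness. All feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical columns: prior shelter stays, ER visits last year,
# monthly income (in $100s), eviction filings.
X_train = np.array([
    [0, 1, 28, 0],
    [3, 4,  9, 1],
    [1, 0, 22, 0],
    [5, 6,  6, 2],
    [0, 2, 18, 0],
    [4, 3,  7, 1],
])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = became homeless within a year

model = LogisticRegression().fit(X_train, y_train)

# Score new households and rank them so outreach goes first to those
# with the highest predicted risk.
X_new = np.array([[2, 5, 8, 1], [0, 0, 30, 0]])
risk = model.predict_proba(X_new)[:, 1]
for household, score in sorted(zip(["household_A", "household_B"], risk),
                               key=lambda pair: -pair[1]):
    print(f"{household}: predicted risk {score:.2f}")
```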

Given the impacts of AI and automation on employment across income levels, it is ironic that we might also use these same tools to mitigate the replacement of human labour with machines. For example, machine learning could be used to identify the best ways to reskill those left unemployed in a changing economy. It could help decision-makers analyze the job market, predict employability, and identify future job losses and market gaps, allowing labour force development needs to be anticipated. In turn, governments could incentivize particular careers and industries.

Another global challenge — this one born of environmental pressures and climate change — is food security, which has already become one of the greatest issues facing humanity: 821 million people faced chronic food deprivation in 2017, and the world’s population is expected to grow by another 2 billion by 2050.

AI has become key to addressing such issues. Algorithms are currently being used in agriculture for livestock, water, soil, and crop management, and the agri-tech industry has developed advanced tools to improve nearly every stage of food production while reducing its associated risks. Farmers can now receive smartphone alerts warning them about wind changes, or about possible pests landing on crops, tracked via satellite imagery.


Mental Health

Considering that in any given year, one in five people in Canada will experience a mental health problem or illness, this is another area in which machine learning may prove immensely useful. Using algorithms to predict violent or suicidal behaviour would have been unthinkable a decade ago, and any attempts to do so would have been rightly dismissed as techno-naivety in the extreme — but it’s happening today. One ongoing experiment that looks at predictive factors for riots, lynchings, and other mob violence in Liberia is already producing rich and often counter-intuitive insights using machine learning.

The non-profit Crisis Text Line, which analyzes millions of texts to predict suicidal behaviour, revealed that the word “Ibuprofen” is “16 times more likely to predict the need for emergency aid than the word ‘suicide,’” according to Elizabeth Good Christopherson, president of the Rita Allen Foundation. This insight has allowed the service to reshuffle its response queue, and lives have been saved as a result.
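As a toy illustration of how a single word’s predictive power might be estimated — this is not Crisis Text Line’s actual methodology, and the messages and labels below are invented — one can compare how often each word appears in conversations that required emergency aid versus those that did not:

```python
# Toy sketch: estimate how much more often a word appears in
# emergency-level conversations than in the rest (with smoothing).
# Messages and labels are fabricated for illustration only.
from collections import Counter

labeled_messages = [
    ("i took a whole bottle of ibuprofen", True),
    ("ibuprofen is all i have left in the house", True),
    ("thinking about suicide a lot lately", False),
    ("my friend said something about suicide", False),
    ("i just feel alone tonight", False),
]

high, low = Counter(), Counter()
for text, needed_emergency_aid in labeled_messages:
    (high if needed_emergency_aid else low).update(set(text.split()))

n_high = sum(1 for _, label in labeled_messages if label)
n_low = len(labeled_messages) - n_high

def risk_ratio(word):
    """Add-one smoothed ratio of the word's rate in emergency vs. other texts."""
    p_high = (high[word] + 1) / (n_high + 2)
    p_low = (low[word] + 1) / (n_low + 2)
    return p_high / p_low

for word in ("ibuprofen", "suicide"):
    print(word, round(risk_ratio(word), 2))
```

A signal like this could then be used to move the highest-risk texters to the front of the queue, which is the kind of reshuffling the article describes.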

Further analysis with the same technology also showed that crisis workers were more successful when they employed improvised, adaptable responses instead of scripted therapeutic interventions (ironically, the AI is telling crisis line workers that they need to be more human and less robotic). Other studies are testing complicated algorithms based on decades of accumulated insight: a meta-analysis of a half century of research on risk factors for suicide suggests that incredibly complex algorithms incorporating hundreds of variables would be required to have any predictive value at all. 

Despite this complexity, teams of researchers at the Royal Mental Health Centre in Ottawa and at Florida State University are using machine learning to analyze social media activity and anonymized patient records for suicide risk. Beta versions of this technology show promise: Florida State University’s study predicted suicide attempts with 80 percent accuracy up to two years in advance, rising to 92 percent accuracy one week before an attempt.

Similarly, advances in AI have enabled schools to monitor students’ internet searches and even identify at-risk students through machine learning algorithms. GoGuardian, an internet content monitoring tool, prevents students from accessing damaging content and generates alerts to staff members about searches that could signal impending harm. Using this technology, staff at a school in Florida were able to identify and provide appropriate intervention services to a child who was planning suicide. Similar applications can help identify students searching for weapons, pornography, or narcotics.
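A minimal sketch of this kind of monitoring is shown below: a search query is matched against category watchlists and, if it matches, an alert record is produced for staff review. The categories, keywords, and alert format are hypothetical and are not GoGuardian’s actual rules.

```python
# Minimal sketch of keyword-based search monitoring: flag queries that
# match category watchlists and emit an alert for staff. All categories,
# phrases, and field names are hypothetical.
from datetime import datetime, timezone

WATCHLISTS = {
    "self_harm": {"how to hurt myself", "ways to end my life"},
    "weapons": {"buy a gun without id", "how to make a weapon"},
}

def check_search(student_id: str, query: str):
    """Return an alert dict if the query matches a watchlist, else None."""
    normalized = query.lower().strip()
    for category, phrases in WATCHLISTS.items():
        if any(phrase in normalized for phrase in phrases):
            return {
                "student_id": student_id,
                "category": category,
                "query": query,
                "flagged_at": datetime.now(timezone.utc).isoformat(),
            }
    return None  # nothing concerning detected

alert = check_search("student_042", "Ways to end my life quietly")
if alert:
    print("Notify counselling staff:", alert)
```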


Justice & Democracy

When it comes to AI’s role in justice and democracy, there are plenty of risks that could be drawn straight from newspaper headlines over the last two years — from the Cambridge Analytica scandal that shook the 2016 American election to concerns raised over Canada’s use of algorithms in immigration and refugee application processing.

But while new technologies can undermine democracy, they can also — paradoxically — improve it. The primary agora of political debate, participation, and activism is now social media. Meanwhile, the European Union and Canada are probing the use of AI to streamline security and citizenship processes — screening people at ports of entry to detect lies and possible national threats, sorting temporary visa applications, and granting refugee protection.

And as we move forward, social good can be realized — and might even flourish — with the use of AI: because machine learning makes predictions from large amounts of information, governments could benefit greatly from systems that help them develop better policies by forecasting which interventions are likely to produce positive outcomes.

Governments could better foresee the impact a particular policy would have if implemented, or run tests to obtain a clearer picture of the best policies to implement to solve specific social problems. If properly regulated by governments, AI and machine learning could help predict the impacts of global warming and pollution, as well as forecast earthquakes or tsunamis. It could also assist governments in deploying resources in a more accurate, strategic, and affordable way.


Moving forward

Considering the myriad ways that AI is already impacting social challenges — in ways both positive and negative — the further development of machine learning systems and artificial intelligence should include not only computer scientists, but sociologists, anthropologists, social workers, designers, lawyers, economists, and historians in an attempt to better understand the effects and potential of AI on Canadian social good.

The integration of “high tech” and social change efforts will be essential moving forward. It is interesting to consider the implications of certain universal rights informing the design of machine-assisted social programs, or how public policy could be optimized by deep learning applied to the literature on promising interventions and welfare-state supports.

Super-intelligent AI may well determine, based on reams of high-quality, peer-reviewed research and petabytes of liberated data on pilot projects and social intervention prototypes, that we need policies and programs that are politically unpalatable in today’s context. Universal basic income, a flexible 15-hour workweek, decriminalization of all narcotics, psychotropic treatment of addictions, nature-based incarceration, or any number of other audacious-sounding social good decisions may emerge. In this light, AI may well prove to be the worst nightmare of status quo politicians — or of status quo nonprofits, for that matter.


Next up

Stay tuned for the next article in our series on AI and social impact, available next week, or find James Stauch and Alina Turner’s full report, In Search of the Altruithm: AI and the Future of Social Good, here.