With today’s technology revolution already afoot, what role can the world of social impact play and where can we add value as part of our contribution to lasting social change within the Fourth Industrial Revolution? If we view the explosion of AI technology, both in ubiquity and sophistication, as just that — an “explosion” — then we can extend the metaphor to see our role as helping carefully guide where the charges are laid.
Nick Bostrom, the founding director of the Future of Humanity Institute at the University of Oxford, characterizes this as “the most careful controlled detonation ever conceived.” How will the detonation be controlled such that our social foundations and ecosystems are maintained and optimized, while the biased, exploitative, and inequitable implications of AI are avoided?
As mathematician Cathy O’Neil points out, it is not the algorithms themselves that are responsible for the socially destructive effects of AI; it is the human bias already implanted in their coding, since algorithms are best thought of as “opinions embedded in math,” as she puts it. But which opinions have mattered so far? And from where do they originate? As we have already seen, the answer is currently far less prosocial than what we think should be the case.
While by no means exhaustive, there are several competencies and conditions that those working in the social impact sector need to get very serious about investing in over the coming years, within and outside civil society and the social impact ecosystem. These competencies and conditions help us overcome fear of AI, while at the same time gaining agency and voice in how AI is developed. Each of these competencies extends beyond the individual to the role of civil society, the public sphere, and even to the private sector, which is increasingly professing interest in social responsibility.
To be successful in this new era, both extreme enlightenment and hyper-citizenship skills are essential. Increasingly, the public is challenged to distinguish between the fake and the real: to know whether what is being seen and heard is authentic or manipulated. The line between objectivity and subjectivity is thinner than ever before, and as George Orwell put it in his essay “Looking Back on the Spanish War,” “The very concept of objective truth is fading out of the world. Lies will pass into history.”
Some analysts believe that, by 2020, “AI-driven creation of ‘counterfeit reality’, or fake content, will outpace AI’s ability to detect it.” We can only expect to see new and more sophisticated forms of digital skullduggery, such as “deepfake” creations: manipulated audio and video that fabricate speeches and embarrassing or incriminating scenarios that are false but assumed to be real, with potentially catastrophic impacts.
In this light, a skeptical society and critical thinking skills are more necessary than ever. Knowing “real” truth will be more challenging and time-consuming than in the past, given the rapidly growing sophistication of manipulation, but such vigilant sleuthing and skepticism can make us better citizens.
AI may also help us find a pathway to rational compassion rather than relying on our imperfect, hyper-biased sense of empathy. Empathy privileges the near and familiar over the different and faraway. It is a useful and necessary mental function, essential to our very sense of humanness, but the same cognitive machinery also produces racism, parochialism, and wildly uneven, often deeply irrational, social outcomes when extended to the practice of charity or public policy.
Creativity & collective imagination
Jobs in manufacturing, sales, transportation, accounting, security, diagnosis, and basic research and storytelling (including much of journalism) are likely to disappear in the not-so-distant future. However, careers involving caring, complexity, and creativity will continue to exist, and our imagination will be better nurtured and rewarded than during previous industrial eras. We may even bear witness to a post-digital Renaissance.
Media theorist Marshall McLuhan once said that art is a “distant early warning system.” As such, artists need to be at the centre of AI development, and the humanities will take on a renewed relevance and importance in the service of fostering creative mindsets, systems thinking, and mental elasticity.
The ethical frameworks for AI might benefit, from a social justice standpoint, from embedding such notions as Peter Singer’s “effective altruism” (ensuring global fairness in the relief of poverty), John Rawls’s “veil of ignorance” (ensuring genuine equality of opportunity and eliminating barriers to social mobility), and Susan Moller Okin’s feminist modifications to liberal theories of justice, to name just a few potential examples.
The social impact sector may withstand, at least in theory, the impacts of AI and automation because of its focus on caring at the frontline levels — but we cannot assume that the way we work now will remain remotely the same. Already, we see the introduction of low-cost counselling services on online platforms, the disruption of nonprofit and government funding models by open data, and the democratization of information as technology challenges traditional service-access pathways.
This is not to lament these changes; in fact, we can make the case that from the user perspective, technology has empowered consumers with real-time information about benefits and services they can access. Similarly, the donor and taxpayer can now understand the financial flows into the social safety net in a much more transparent fashion.
These changes are all opportunities for creativity, and we can leverage AI to support social impact in ways we could never have previously imagined. AI’s potential to amplify our talent, creativity, openness, and capacity for inclusion should make all people who share and care for a living incalculably more effective and valued.
Technological & data literacy
Technological and data literacy will be essential for the social impact sector, both to support better machine learning analysis and to develop better policies and societal outcomes. We don’t need the entire sector to become proficient coders in order to navigate technology successfully, but at the moment the pipeline of talent for socially oriented AI development is thin, and competition for that talent will be fierce for the foreseeable future. The social impact sector has historically been on the losing end of such battles, although it will help that more and more computer science, engineering, and business students are embracing social change work.
Karen Hao, MIT Technology Review’s AI reporter, contends that we also need to “stop perpetuating the false dichotomy between technology and the humanities.” She argues that, in order to build more ethical products and platforms, software engineers and programmers need better grounding in the liberal arts. Conversely, policymakers, social changemakers, and civic leaders need better technology literacy.
AI bootcamps and short intensives for social sector managers, designers, and evaluators could prove useful, and universities should consider social impact work-integrated residencies for AI specialists finishing their PhDs or other advanced credentials. Without integration of these skills, we risk becoming marginalized from both debates and practical applications of these new technologies.
There is extreme risk in our attempts at social innovation being merely social engineering. We have seen what happens when we leave affordable housing solutions to architects or city building to transportation engineers. Similarly, leaving social application development solely to computer scientists and data specialists is a recipe for unintended harm, no matter how well-meaning the aim.
Enhanced efforts to integrate social innovation and high-tech innovation will be essential moving forward, including working as diverse, interdisciplinary, cross-functional teams who can help anticipate and mitigate bias, assumptions, and unintended consequences. As such, people who care about and know a lot about — including having lived experience with — a given social issue must be deeply embedded at every stage of machine-enabled deep learning regarding that issue. We also need “data translators” in the sector, such as NGOs employing data scientists who can interpret and stress-test an algorithm’s “brittleness” and bias vulnerability.
While the private sector and, to a lesser extent, the government have enjoyed access to technology software and hardware, the social impact sector has always faced a significant challenge accessing basic technology infrastructure. Community organizations often run on donated or low-cost machines, with obsolete software and slow performance. Tight resources have made investing in new technologies, software, IT, and professional development on AI difficult as well.
This has hampered capacity development in the frontline and management layers of service providers’ learning about these technologies and how they could benefit the population they serve. Encouragingly, as tech gets easier to use and cheaper to buy, this divide is closing, and funders can play an obvious role here to help close the gap.
Fortunately, there are also organizations working to close this gap. American initiatives such as the Partnership on AI and AI4ALL aim to make AI approachable to the general public, recognizing that AI can be an intimidating topic and that those who engage with the topic are disproportionately male, privileged, and working in the commercial sector. Google has also invested in the STEM education organization Actua to develop an artificial intelligence curriculum for high school students across Canada.
Participating in the co-creation of a future with AI will require our sector to embrace risk and have a stronger voice in public decision-making. We will need to adapt to and influence future technological advances while developing better services and contributing to social justice goals. We would be foolish to assume we are disruption-proof: take, as one small example, the emergence of Benevity, which has disrupted workplace employee giving such that it is displacing the United Way campaign model in some Canadian cities.
We will need to develop a different lens for risk internally to fully take advantage of the opportunities ahead. This should be a wake-up call to nonprofit boards, many of whom are notoriously risk-averse. An important dimension of this is data sharing between organizations: “open data” has ruptured barriers that we took for granted, and there are many other areas in which we will need to adapt and pursue social impact very differently at the frontline, policy, and funding levels. The age of charity may be yielding to an age of shared, collaborative social impact infrastructure, and many organizations will not survive these transitions.
With fewer people needed to perform routine tasks, our sector will find itself caught up in the broader shifts in work, as well as in shifting attitudes toward work and living. With more time available to dedicate to creative and recreational pursuits, and to connect with nature, we can likely expect the arts, sport, environmental protection, and entrepreneurship to flourish, and citizens will likely become more responsible, critical, and skeptical about the information they receive. Governments may also have a more diverse array of potential representatives.
Our ability to innovate and adjust to this new reality can contribute to a constructive unfolding of this new industrial and technological revolution, but only if we are leaders and active participants in this change.
Public commitments, declarations, & protocols
The visibility of AI’s social implications and transformational possibilities must be greatly elevated in the public sphere, because if the monopoly power of commercial tech giants and totalitarian regimes is not greatly circumscribed, AI will fail to serve the common good.
At a global level, we are seeing the emergence of accords such as the Montreal Declaration for a Responsible Development of AI, “born from an inclusive deliberation process that initiates a dialogue between citizens, experts, public officials, industry stakeholders, civil organizations, and professional associations.” Such initiatives are emphatically voluntary and aspirational, and they need to be built upon with multilateral teeth, anticipatory regulation, and legislative commitments, as we see happening with the Ministerial Declaration on AI in the Nordic-Baltic Region. Facilitated through the Future of Life Institute, that declaration’s policy objective is to develop and promote the use of AI “to serve humans better.”
In December 2018, Canada and France announced the creation of an International Panel on Artificial Intelligence (IPAI), which aims to include representation from civil society and will align AI investment to the United Nations’ Sustainable Development Goals. Despite this commitment, however, Canada has been weak in its criticism of autonomous weapons systems and other forms of socially malignant AI in international arenas.
Such global and regional efforts also need to penetrate the public consciousness within nations and cities. Existing or future networks and coalitions of nonprofits, foundations, and social innovation practitioners should be keen to enhance awareness and visibility of AI, and be active agents in the promotion of responsible AI.
This is one issue on which the social impact sector should not be deferential. Social impact organizations cannot sit at the AI “kids’ table” hoping to be asked to join the big table. There will be those who use techno-obfuscation to keep the social purpose voices on the margins, but there are likely many more allies and champions who would welcome a strong social impact sector voice in the development of accords, protocols, and processes.
This series was based on research conducted by James Stauch and Alina Turner; their full report, In Search of the Altruithm: AI and the Future of Social Good, is available online.