New framework designed to advocate for responsible AI use among fundraisers. How does it work?

“This is a half-trillion-dollar sector in the US alone with very few guidelines.”

Why It Matters

Since ChatGPT launched in November 2022, many non-profit organizations have used it to drive efficiency and innovation. However, without understanding how generative AI affects them, their donors, and their clients, non-profits risk jeopardizing the public's trust in them.


This independent journalism on data, digital transformation and technology for social impact is made possible by the Future of Good editorial fellowship on digital transformation, supported by Mastercard Changeworks™. Read our editorial ethics and standards here.

TORONTO/TREATY 13 – When ChatGPT first launched last fall, there was an “enormous appetite” within the non-profit sector to understand how it would help them with their work, says Nathan Chappell, senior vice president of DonorSearch. However, many organizations jumped on board without asking important questions about the implications of embedding generative AI into their tech suite. 

Non-profits didn’t seem to be aware of the potential harm AI could cause their organizations or the communities they serve, he says. “Globally, charitable participation is eroding, and the number of people donating is diminishing. I felt that the irresponsible use of AI could accelerate that decline, especially if it wasn’t prioritizing relationships and trust.” 

Chappell is part of the steering committee at Fundraising.AI, an organization advocating for the responsible adoption of artificial intelligence in the non-profit sector. The organization recently released the first iteration of its responsible AI framework, targeted specifically towards fundraisers and staff in non-profit organizations. 

The framework is an amalgamation of existing AI frameworks, says Chappell, whose content was then synthesized and contextualized for the non-profit sector.

When non-profits and philanthropic organizations sign onto the framework, they commit to the responsible use of artificial intelligence as defined by Fundraising.AI. That includes ethical and accurate data collection that “actively addresses biases and disparities” within AI systems.

Signatories must also be accountable for any AI-based technologies they develop and transparent about how that technology is deployed and used. They must also establish a method of auditing AI-based technology.

Organizations that commit to the framework should also be able to assess the environmental impact of AI technologies, as well as commit to staying informed about the latest developments in AI. 

Both the framework and organization have been in development for four years, but Chappell says there was a dramatic shift when ChatGPT was released to the public in 2022.

“With the broad adoption of generative AI, the risk was too high to let people fumble on their own,” he says.

When they first started developing the framework after the launch of ChatGPT, Fundraising.AI called on 92 experts, including non-profit practitioners, data scientists and technologists. Twenty-five of those experts now sit on an advisory council that will continue adapting the framework as AI tools evolve. “If we’d built this framework last year, we would have mentioned generative AI, but not in the same lens we would today. And next year, there will be something different and bigger too,” Chappell says.

Supporting non-profits in adopting, developing and procuring new technologies

“Non-profits are also most likely to believe that responsible AI is the responsibility of the platforms and not their own,” Chappell adds. This signals a lack of knowledge about how artificial intelligence works. It is a participatory technology, which means that it learns from the data that users themselves add to it, and can reflect certain biases. 

In other words, if non-profits aren’t responsibly managing AI use among staff, there is a risk that sensitive, personal information of donors and clients could become part of an AI tool’s training data.

Adopting the framework can also help non-profits make more informed procurement decisions when it comes to new technology partners. It can serve as “a shorthand for evaluating a [technology] vendor”, Chappell says, and for determining whether that vendor complies with the appropriate standards. Entering negotiations with a pre-set questionnaire for vendors can give non-profits some leverage in technology partnerships.

Data scientists consulted during the development process wanted to create something similar to a rigorous legal framework, Chappell says. But ultimately, the wording in this area was left vague enough for non-profits to tailor the framework to their needs, without scaring off larger technology vendors.

“We couldn’t put too many teeth on this because Fundraising.AI is also a volunteer-led effort,” Chappell adds. “We don’t have a governance person, so there is no way we can enforce those teeth. The framework is as specific as we could make it in terms of providing directional guidelines on responsible AI, without making it so restrictive that it was hard to comply with.” 

What’s the social impact sector saying? 

Jason Shim, the chief digital officer at the Canadian Centre for Nonprofit Digital Resilience, has already observed an influx of AI technology across the sector. “You have interesting use cases like Furniture Bank using AI for image generation, and Kids Help Phone [using AI] to help triage messages from youth.”

Amy Sample Ward, CEO of NTEN, says staff members may also be using AI tools unofficially “as is often the case when technologies are getting so much media attention.” Many non-profits are also not aware of AI capabilities built into software they’re already using, over which they have no control, they say.

That the framework asks non-profits and fundraisers to commit to continuous learning is particularly important, Shim says, because the field is rapidly evolving, and organizations need to be able to respond quickly.

On the other hand, Sample Ward feels that the focus on fundraising might be problematic for non-profits. “I think it is important that we are careful about assigning frameworks to technologies for specific departments of an organization that are inherently not limited to those teams,” they say. 

For instance, if data from an organization’s donors is fed into a machine-learning model to identify the best times to make donation requests, and the assumption is that those donors’ data is all about fundraising, the model’s outcomes will carry that bias, too.

“The data related to those donors’ program and service interactions, or event participation, or even volunteer history, is likely not included,” they add. “We are complex people, and we should both honour that a constituent has complex data they may choose to share with the organization and that we should have more complex approaches to our use of that data when it is entrusted to us.”

Fundraising.AI will host its first summit in October to allow non-profits and philanthropic organizations to learn from one another and share best practices regarding responsible AI implementation. Chappell says there is still a significant information gap when it comes to assessing non-profits’ readiness to adopt AI. “This is a half-trillion-dollar sector in the US alone with very few guidelines.”


Author

Sharlene has been reporting on responsible business, environmental sustainability and technology in the UK and Canada since 2018. She has worked with various organizations during this time, including the Stanford Social Innovation Review, the Pentland Centre for Sustainability in Business at Lancaster University, AIGA Eye on Design, Social Enterprise UK and Nature is a Human Right. Sharlene moved to Toronto in early 2023 to join the Future of Good team, where she has been reporting at the intersections of technology, data and social purpose work. Her reporting has spanned several subject areas, including AI policy, cybersecurity, ethical data collection, and technology partnerships between the private, public and third sectors.
