Since ChatGPT launched in November 2022, many non-profit organizations have used it to drive efficiency and innovation. However, without understanding how generative AI affects their organizations, donors, and clients, non-profits risk jeopardizing public trust.
Misinformation and disinformation that originate online can not only sow confusion and distrust among communities, but can also be directly tied to racism, misogyny, and queerphobia, putting certain people at risk. For staff in community organizations, speaking to the community about the origins of false information, or reporting that information appropriately so it doesn’t spread, can add to an already heavy workload.
As artificial intelligence becomes more ubiquitous, the charitable sector risks being left behind if organizations don’t feel they have adequate knowledge and resources to learn about AI-based tools and applications.
Join Future of Good publisher Vinod Rajasekaran and guest speaker Sharlene Gandhi on Tuesday, March 14, from 1:00 to 1:45 p.m. ET for an engaging discussion examining the ethics of using artificial intelligence when working directly with communities.
It took only five days for ChatGPT to reach a million users. While there are clear benefits to using AI-enabled tools to reduce the time and resources spent on tasks, an open question remains for the social impact world in particular: what are the ethics of using artificial intelligence when working directly with communities?
Digital identification could alleviate certain accessibility issues, but it could also exacerbate inequities in digital literacy and device access. It’s also unclear how exactly digital identification will benefit or intersect with the work of community-serving social purpose organizations.