Technical staff and clean data key to winning funding for AI projects, funders say
Why It Matters
Few funders focus on technology and artificial intelligence grants, and the non-profit sector has few technically skilled staff who can expertly handle data science, machine learning, and artificial intelligence projects.

Clean data and sound data governance policies are key criteria that funders look for when making grants and assessments about AI projects, according to two technology-focused philanthropic organizations.
The data needed to fuel AI models in the social purpose sector has significant gaps, said Jennifer Pratt Miles, practice director at the Meridian Institute, which runs the Lacuna Fund.
The fund, which focuses on technology and data grants in low- and middle-income countries, was created through a coalition of several philanthropic organizations, including the Rockefeller Foundation and Canada’s International Development Research Centre.
The data that AI is trained on also often comes from North America and Europe.
“That means that the benefits of AI are not accessible or useful for people in Africa, Asia and Latin America,” Pratt Miles said.
“You can have super high-quality data and perhaps an equitably trained model, but the [data] product itself can either be good or bad,” added Nick Cain, vice president of strategy and innovation at the Patrick J. McGovern Foundation, one of the coalition members that co-founded the Lacuna Fund.
In this context, a data product is an accessible interface that helps users understand and glean insights from the data.
How are funders making decisions about AI projects?
At the Lacuna Fund, the steering committee and technical advisory panel mainly focus on projects with well-labelled data.
There is a lot of data out there, said Pratt Miles, but without adequate cleaning, annotation and labelling, it cannot be used for machine learning or to train artificial intelligence models.
“For a computer or an AI model to be able to recognize the difference between maize, sorghum and millet [crops], a human being is going to have to look at and label thousands of images of crop types before a machine learning or AI model can use them,” she said.
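To make that labelling step concrete, here is a minimal, illustrative sketch, not drawn from the Lacuna Fund or the Makerere project, of how human-assigned labels are often captured: images are sorted into per-class folders and a manifest file records each image's label so a machine-learning pipeline can consume it. The folder layout, file names and crop classes are assumptions for illustration only.

```python
# Illustrative sketch only: builds a labelled-image manifest from an assumed
# folder layout like images/maize/, images/sorghum/, images/millet/.
import csv
from pathlib import Path

IMAGE_ROOT = Path("images")   # assumed layout: images/<crop_label>/<file>.jpg
MANIFEST = Path("labels.csv")

def build_manifest(root: Path, out_path: Path) -> int:
    """Write a CSV mapping each image file to the crop label a human assigned to it."""
    rows = []
    for label_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        label = label_dir.name  # e.g. "maize", "sorghum", "millet"
        for image_path in sorted(label_dir.glob("*.jpg")):
            rows.append({"image": str(image_path), "label": label})
    with out_path.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["image", "label"])
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)

if __name__ == "__main__":
    count = build_manifest(IMAGE_ROOT, MANIFEST)
    print(f"Wrote {count} labelled examples to {MANIFEST}")
```

A training pipeline would then read such a manifest and learn to associate image contents with the human-assigned labels; the hard part, as Pratt Miles notes, is the human effort of looking at and labelling thousands of images in the first place.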
In Kampala, Uganda’s capital city, a team of researchers at Makerere University has been awarded funding by Lacuna to develop a dataset that identifies, classifies and eventually detects crop disease.
The research team collected and annotated more than 127,000 images and 39,000 spectral data points to train a machine-learning model to detect potential diseases in crops proactively.
A key part of the criteria for applicants to the Lacuna Fund is to ensure safe and equitable working conditions and fair compensation for all those involved in building the AI model, especially those working in data collection, labelling, and annotation.
When reviewing applications, the steering committee and technical advisory panel also consider whether the team has enough expertise to produce and maintain a high-quality dataset, said Pratt Miles.
Those team members could be data scientists, or domain experts in agriculture, climate, health, and language preservation who collaborate with technical staff.
“The technological readiness of an organization is really important,” added Cain. “That could be an organization that has been on a slow and steady journey to embracing technology and data – maybe that started with them hiring someone with data science or data management skills to ensure their data was housed in the right way, has strong governance, is structured, tagged and labelled appropriately, and is queryable.”
Lacuna Fund grantees must meet several hosting and data documentation guidelines, such as assigning a digital object identifier, making their dataset searchable on major search engines, and tracking views and downloads.
The fund also advises that grantees consider dataset hosting options that “are already used by communities who might implement envisioned use cases.”
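As a rough illustration of what such documentation can look like, the sketch below assembles a minimal dataset metadata record covering the kinds of fields described above: a persistent identifier, a searchable description, and simple view and download counters. The field names and values are illustrative assumptions, not the Lacuna Fund's actual schema or guidelines.

```python
# Illustrative sketch: a minimal dataset metadata record. Field names and values
# are assumptions for this example, not the Lacuna Fund's actual schema.
import json

dataset_record = {
    "title": "Crop disease image dataset (example)",
    "doi": "10.0000/example-doi",            # a persistent digital object identifier
    "description": "Labelled crop images for machine-learning research.",
    "keywords": ["agriculture", "crop disease", "machine learning"],
    "license": "CC-BY-4.0",
    "hosting_url": "https://example.org/datasets/crop-disease",
    "usage": {"views": 0, "downloads": 0},   # counters a host could increment
}

def record_event(record: dict, event: str) -> None:
    """Increment a usage counter ('views' or 'downloads') for basic tracking."""
    record["usage"][event] = record["usage"].get(event, 0) + 1

record_event(dataset_record, "downloads")
print(json.dumps(dataset_record, indent=2))
```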
How are funders assessing the long-term sustainability of AI projects?
When making assessments about where to allocate AI funding, the Lacuna Fund asks whether applicants have taken a participatory approach to their projects, Pratt Miles said.
In other words, have applicants spoken with the community about how they might benefit from using the data and AI tools?
“We also ask how the project could be sustained after the period of the grant,” she added.
“The grants are typically one-to-two years, and so is there a community or user that is interested in this data and might be willing to update it?”
Beyond grassroots communities, projects could also be sustained by academic research labs or startups. For instance, another Lacuna Fund grant recipient, a team developing a Ghanaian-language financial inclusion speech dataset, plans to host its dataset at Ashesi University.
For Cain, the context in which AI is being deployed plays a significant role in assessing long-term sustainability and the potential for harm or bias. Certain use cases, he said, are much more sensitive than others.
“For example, AI has a role in better monitoring ecosystem restoration through satellite imagery and on-the-ground data. But that comes with a different set of questions to the potential use of AI to help people more effectively apply for public benefits,” he said.
“They’re really, really different conversations, and so we’re always looking for [funding] partners that can demonstrate an awareness of the distinctions and sensitivity and incorporate the voices of those that are most proximate to the challenge.”