Most foundations don’t know how to evaluate and fund artificial intelligence projects, even as the technology rapidly transforms society, according to a new survey.
Nearly two-thirds of program officers at major grant makers like the Gates Foundation, Chan Zuckerberg Initiative, and Annie E. Casey Foundation lacked confidence in their ability to assess AI-related proposals, according to a March 5 report by the research nonprofit Project Evident. The study analyzed grant-making practices at 38 large philanthropies that have funded AI implementation projects — say, chatbots that answer questions about social services or algorithms that identify students at risk of dropping out — revealing widespread uncertainty even among the sector’s most tech-savvy and well-resourced funders.
The findings come amid diminishing prospects for federal regulation even as the technology is embedded in online tools at breakneck speed. Across the economy, even highly educated white-collar workers — whose jobs are far more likely than most to be affected by AI in the near term — have had little time or training to develop proficiency in the technology. Nonprofit and foundation workers are no exception.
“Just because you don’t use it doesn’t mean it’s not coming,” said Sarah Di Troia, senior adviser at Project Evident, who emphasized the role that even reluctant nonprofits are likely to eventually play in addressing the profound societal implications of AI. “Just because you don’t use it doesn’t mean you won’t be on the front lines of the cleanup.”
AI Learning Curve
Philanthropies have largely trailed behind their nonprofit grantees when it comes to adopting AI, according to Project Evident’s research. As of last year, roughly two-thirds of nonprofits used some kind of AI in their work, compared with fewer than half of foundations, the vast majority of which do not view technology as a grant-making priority.
The foundations in Project Evident’s recent survey represent both the financial elite of philanthropy — 88 percent come from the wealthiest 10 percent of foundations — and the vanguard of AI grant making, having already funded at least one AI implementation project. Yet even these deep-pocketed funders struggled to evaluate AI proposals with confidence, suggesting that smaller foundations are likely even less equipped to fund technological innovation in the sector.
Only 36 percent of program officers surveyed felt confident in their ability to assess the technical feasibility of AI proposals, with many turning to both internal and external experts for guidance. That uncertainty has created a kind of institutional imposter syndrome: Foundations anxious about their AI literacy may be less likely to wade into funding such proposals at all, says Di Troia.
Program officers reported somewhat higher confidence in their ability to assess the ethical impact of AI proposals, with about half highly confident in addressing questions about data safety, biases, and informed consent.
“If foundations don’t feel comfortable making investments in innovation, then it really slows down the whole innovation pipeline for nonprofits,” said Di Troia, who warned that philanthropy’s sluggish response to previous technological shifts like social media has left the sector without meaningful influence over how these tools were designed and deployed.
“Nobody was at the table around how social media should be integrated in our society, certainly not those who are closest to our nation’s youth, parents, and educators,” she said. “That does not mean that parents and educators are now not on the front line of trying to deal with the ramifications of social media in our society. I think the same will be true for AI.”
What Matters Most?
The foundations surveyed have yet to fully align on what matters most when evaluating AI proposals — a disconnect that Di Troia says leaves nonprofits struggling to navigate the emerging landscape.
On technical assessments, foundations overwhelmingly prioritize determining whether “AI is the best solution to the problem” and evaluating whether grantees possess adequate technical talent, while only 15 percent of program officers considered the long-term viability of AI projects, despite the technology requiring ongoing investment in licenses, computing resources, and model maintenance.
Meanwhile, though program officers broadly prioritized data safety, they diverged sharply on what ethical priorities mattered most. Just over half focused on design processes that directly involve the people who will use the tools, while fewer than 35 percent prioritized community involvement in defining problems AI would solve — a potential blind spot given the technology’s potential to reinforce existing biases.
“It’s important to hold two things in mind when you look at AI: It presents amazing opportunities, but it also brings its measure of risks,” said Mike Belinsky, director of the AI Institute at Schmidt Sciences. “I see a lot of the time people look at one but not the other.”
Belinsky warns that while AI has become “the shiny thing that everyone’s excited about,” grant makers need to press harder on fundamental questions when evaluating potential projects.
“How will this tool make the work you are doing different or better?” he said. “Or is it just kind of another tool?”
Beyond the Check
For the most tech-savvy philanthropies like Schmidt Sciences, founded by ex-Google CEO Eric Schmidt and his wife, Wendy, grant making for AI projects often goes well beyond financial support.
The foundation provides crucial access to AI tools through partnerships with major companies like OpenAI and Google, giving grantees otherwise costly computational resources in an environment where such access is “sometimes more scarce than money,” said Belinsky.
More than half of the foundations surveyed by Project Evident provided additional capacity-building assistance, connecting grantees with the educational content or the engineering and developer talent they need to carry out AI projects.
Still, as an emerging technology, AI remains challenging territory for both nonprofits and the foundations that fund them. For foundations working to build their staff’s confidence in evaluating AI proposals, connecting with other funders to discuss what works — and what doesn’t — could be a good place to start. The report found that 59 percent of respondents already participate in formal or informal AI-focused communities of practice, including groups like the Partnership on AI and the Data Funders Collaborative.
“When you are doing something very new, and there’s only so many other people that are doing the exact same thing, it’s very valuable to gather regularly and exchange practices,” Belinsky said. “Some of them are going to become best practices. Some not, but you really have to share in a very robust way.”
As the technology rapidly develops, the stakes of philanthropy’s AI learning curve have only grown, says Di Troia, who noted that “we are in a country that has yet to make at a federal level any regulation or guidelines around AI,” a situation unlikely to change given the Trump administration’s deregulatory approach.
In the meantime, virtually all major AI tools remain built and controlled by tech giants like Google, Microsoft, and OpenAI, whose philanthropic arms often fund and provide technical support for nonprofits’ AI-related ventures.
As commercial interests continue to dominate the technology, the window for foundations to wield meaningful influence over how AI is deployed and developed appears to be rapidly narrowing.
“The nonprofit sector has an incredibly important voice to help shape how our society determines where we should or should not use AI in our workplaces and our school systems and our society writ large,” Di Troia emphasized. “It’s very hard to have a voice and a seat at the table if you don’t have a perspective, and the only way to gain a perspective is by using the tools.”