Artificial intelligence is transforming much of the nonprofit world — from creating new, and occasionally questionable, fundraising practices to providing innovative tools for reaching constituents and addressing problems. But a major gap exists in the fluency and adoption of A.I., with a few foundations leading the effort as most funders and grantees fall behind.
A recent survey by the Technology Association of Grantmakers, known as TAG, found that while 81 percent of foundations are experimenting with A.I., just 30 percent have an A.I. policy in place, and only 9 percent have an advisory group focused on both the technology and policy.
Foundations are also unsure of A.I.’s role in their work with grantees. According to a study released last year by Candid, only 10 percent of funders accept grant applications with A.I.-generated content. And while the technology is popular among grantees, the TAG survey found that just 5 percent of grant makers fund A.I. tools and 3 percent offer A.I. training and resources.
The data overwhelmingly point to a disconnect, with foundations increasingly willing to experiment with A.I. for their own operations but reluctant to invest in or support its use for grantees eager to explore these tools.
This disconnect exists because funders tend to support what they know and are often risk averse about new technologies. By stepping too slowly into an A.I. future, however, they leave nonprofits unsupported and risk losing the chance to harness A.I.’s potential to address some of humanity’s greatest challenges.
Four Steps for Funders
Philanthropy, in collaboration with grantees, needs to take a leading role in exploring the potential of this technology, with a commitment to responsible, equitable, and safe adoption of A.I. tools. Based on our research and observations, we recommend that funders take the following four steps.
Engage staff and grantees. Foundations should start by bringing an A.I. expert on board or creating a task force to oversee the process of responsible A.I. adoption. For example, one of us, Chantal, serves as the A.I. strategy resident at the Annenberg Foundation, helping shape how the organization uses A.I. to advance its specific needs and mission, both internally and in partnership with grantees.
Part of this process has included a survey of staff to gauge their concerns, aspirations, and current engagement with A.I. Their responses helped Annenberg develop practices for responsible A.I. use, incorporating, for instance, the strong staff view that equity means all employees have access to tools and training.
Foundations should also actively reach out to grantees to hear what they need before making A.I. investments. Annenberg discovered that the organizations it funds are even more interested in A.I. tools than foundation staff, and that those interests weren’t always what the funder assumed. According to a survey of the foundation’s Los Angeles-based grantees, developing an A.I. strategy topped the list, followed by training and microgrants for A.I. tools.
Create an A.I. policy. This doesn’t have to be complicated and can start with the foundation’s guiding principles for using the technology, as well as key goals, such as increased efficiency or innovation.
Protecting data should be at the top of the list. The policy should specify which types of data are categorized as “public,” “proprietary,” or “private,” with the goal of keeping sensitive data out of A.I. systems. It should also include guidelines for which A.I. tools can be used, while clearly prohibiting unauthorized or risky applications. The policy might stipulate, for example, that all content generated by A.I. must be reviewed by humans.
All policies should clearly spell out the ethical considerations involved in deploying A.I. The Gates Foundation, for example, has established an advisory committee of outside experts to share insights on ethical A.I. practices and provide accountability to the foundation and its grantees.
These policies should be created by leadership teams working together across discipline areas to ensure their widespread relevancy and adoption. For example, the operations and program teams at the Gates Foundation “have been in lockstep … balancing experimentation with our duty to protect the foundation’s reputation through safe and responsible use,” says Bob Benoit, CIO of the foundation.
Foundations should guide grantees in adopting similar policies. They might offer workshops on A.I. governance or responsible use of the technology and share sample A.I. policies with their grantees. This will help ensure that data privacy, security, and ethical considerations are embedded in the A.I. policies of organizations they support.
Build A.I. skills. A “lack of A.I. skills” is the second most common barrier to A.I. adoption after privacy and security concerns, cited by 43 percent of foundations in the TAG survey. Once guardrails are in place, foundations should deepen literacy by offering training to both their staff and grantees in areas such as crafting A.I. prompts and A.I.-assisted research techniques. They could also provide hands-on sessions with A.I.-enabled productivity tools like Microsoft Copilot, Perplexity, or ChatGPT.
This training should be available to all staff and include customized learning opportunities so no one is left behind. For example, all staff at the Annenberg Foundation, regardless of role or technological literacy, were invited to receive personalized ChatGPT training in developing prompts and creating customized A.I. chatbots, along with guidance in ethical use of such tools.
Leverage board experience. Board members with tech expertise can play a critical role in developing a strategy and long-term vision for A.I. use. One of us, Alethea, has seen this up close in her work at Board.dev, a social enterprise that places tech leaders on nonprofit boards.
Tech-focused board members can help nonprofits incorporate multiple perspectives when deploying new technologies and identify what works best with the organization’s mission. They can also help at a tactical level, on everything from hiring tech talent to data privacy and security.
Tech-focused nonprofits are often a good model for how to use boards effectively. For example, Quill.org, which uses generative A.I. to teach students reading and writing skills, has brought on multiple board members with tech backgrounds to offer different perspectives on how to build an A.I. strategy. “They often disagree with each other, which has been very useful in helping us determine our overall strategy,” says Peter Gault, CEO of the nonprofit.
Urgent Action Needed
Grant makers need to act quickly. But they will have greater success supporting grantees’ A.I. use if they first learn how the technology can safely and ethically support their own work. “Foundations can help nonprofits deploy A.I. in a thoughtful way: Keep it mission focused, make sure your nonprofit has tech expertise to deploy these tools, and allow for experimentation,” says Nate Wong, a partner at the Bridgespan Group.
Fast Forward, which supports tech nonprofits, has produced a roadmap for funders to help them get started. The Philanthropist’s Guide to AI Investments is based on its work with dozens of A.I.-savvy organizations and offers helpful information for the field.
Philanthropy’s strength and relevancy lie in its response to the most urgent needs and opportunities of our time. Today, A.I. is among society’s greatest challenges. Staying on the sidelines isn’t a neutral stance — it is a choice to let others decide how A.I. shapes our world. By grappling with its complexities now, we can ensure these tools reflect our values and amplify our missions. This is not just an opportunity — it is our responsibility.