Artificial intelligence is quickly transforming the way we live and work — and nonprofits are no exception. From using ChatGPT to jumpstart your grant proposals to building out responsible and secure A.I.-use policies, nonprofits are grappling with new questions on how to leverage the technology to advance their mission — while proactively navigating the risks.
We spoke with several A.I. experts about the promise and perils of this technology for nonprofits, and what you need to know to get started.
What is A.I.?
In the past, training a machine meant specifying every step it needed to take to complete a task. Now, thanks to a technique called machine learning, many artificial intelligence tools can learn, communicate, and create in ways that sometimes mimic human intelligence.
Much like a toddler learns to speak by listening to the conversations around them, programmers train A.I. systems on large swaths of data, which those systems then use to learn to perform a particular task, whether it’s playing chess, predicting the next words in your email, or creating an illustration.
That ability to learn on its own — and quickly — makes A.I. a flexible and powerful tool.
Today, A.I. is the algorithm that filters your incoming email or fills your social media with eerily specific targeted ads. If you’ve browsed Netflix, searched Google, or listened to Spotify — you’ve used A.I.
In other words, “you’re already using it, you just don’t know it,” says James Ellis, managing director of EV Strategic Partners, which provides technology and A.I. consulting to nonprofits. And those uses have only exploded in the last couple of years as A.I. has grown by leaps and bounds.
Going beyond just recognizing patterns, generative A.I. — the type of advanced technology behind ChatGPT — can use data to generate entirely new text, images, and other media.
How are nonprofits using A.I.?
While the largest and most tech-savvy nonprofits have been building out their own custom A.I. systems for years, the scale of experimentation with A.I. has exploded since ChatGPT came on the scene last year.
Fundraisers have used ChatGPT, which is free (for now) and simple to use, to offload time-consuming tasks like drafting thank-you notes to donors, completing lengthy grant applications, and scheduling social media posts. Other tools on the market, like Grantable, an A.I.-powered grant-writing assistant, and DonorSearch, which uses machine learning for donor prospecting, have made A.I. accessible to even the smallest nonprofit fundraisers.
In 2021 and 2022, nearly 80 percent of nonprofits used automation for online fundraising, including 15 percent that used A.I. for donor prospecting, according to Nonprofit Tech For Good’s 2023 report.
Meanwhile, nonprofits with robust tech teams and big data repositories have partnered with the software industry on their own in-house A.I. solutions that go far beyond fundraising and communications. The American Red Cross already boasts over 20 such projects, including chatbots that teach kids about fire safety or direct disaster survivors to the nearest shelter.
One tool, nearly out of beta testing, uses drone footage to quickly assess the damage level of disaster-stricken places, which can help the Red Cross determine where to deploy assistance. What once took weeks of in-person outreach and door-knocking may soon take hours.
“Many of these projects are actually accelerating nonprofits’ ability to deliver critical services on the ground,” says Michael Jacobs, sustainability and social innovation leader at IBM, where he leads a $30 million initiative for A.I.-powered philanthropy projects.
Elsewhere, nonprofits have used A.I. to monitor deforestation, help kids with their homework, and train volunteer crisis counselors. The technology has delighted museum-goers in Paris and Washington, D.C., and preserved endangered Indigenous languages.
The use of both custom A.I. tools and large models like ChatGPT has also generated controversy — over biased algorithms, dangerous inaccuracies, and privacy concerns. (More on these risks below.)
Most nonprofits today have just begun to test the A.I. waters — and build out strategies to avoid the pitfalls.
What tools can help me get started?
“There’s no award for the nonprofit that adopts the most A.I. the most quickly,” says Afua Bruce, an expert in public-interest technology and author of The Tech That Comes Next: How Changemakers, Technologists, and Philanthropists Can Build an Equitable World.
Instead, she recommends that nonprofits start small. Prioritizing data collection and taking inventory of your IT priorities are great first steps, and indeed prerequisites, to taking full advantage of new A.I. tools. IBM’s free Data and A.I. Readiness Assessment and NTEN’s A.I. Readiness Checklist can help nonprofits think through their goals and capabilities. NTEN, a network of nonprofit professionals who embrace technology, also provides tech training and certificate programs to help nonprofit staff make more data-informed decisions. The Chronicle has several virtual briefings on A.I. to help nonprofits get started.
While flashy new A.I. products seem to pop up every day, most paid software is “probably overkill” for the average nonprofit, Ellis says.
There are plenty of free or affordable options for those just getting started:
- ChatGPT and Bard can quickly compose emails, poetry, or social media posts based on simple prompts. They’re great for jump-starting ideas or creating first drafts of content, but can sometimes produce information that’s inaccurate or out of date. OpenAI, the company behind ChatGPT, has a list of best practices for new users, and the fundraising platform DonorBox has a free guide to using ChatGPT.
- Canva provides nonprofits free access to its easy-to-use A.I.-powered design tools for crafting social media posts, flyers, and infographics, alongside simple how-to guides.
- Grammarly gives nonprofits free access to its A.I. assistant, which scans for spelling and grammar mistakes and offers writing suggestions. Grammarly also has a step-by-step guide to getting started.
- Descript offers discounted pricing to nonprofits for its suite of audio and video editing tools, which use A.I. to make podcasts or promotional videos far easier to create. Descript also has free guides for using its software to create videos and podcasts.
Many nonprofits may also already have access to A.I. for crunching data through their cloud computing systems.
If I don’t use A.I., will I fall behind?
Not exactly, says Bruce, but only because even the most technophobic nonprofits are probably using A.I. already without knowing it. That will only become more true as companies continue to integrate A.I. into their products.
Send emails? Gmail and Outlook are testing new ways to help you write. Use Slack? You’ll soon notice A.I.-powered conversation summaries. Tools that nonprofits use every day, like Zoom, Dropbox, and QuickBooks, have all touted new A.I. capabilities in recent months.
“For better or for worse, nonprofits won’t have an option of saying no,” Bruce says, and in all likelihood, “they’ve already been using A.I. for a while now.”
That’s not to say every nonprofit needs to debut a cutting-edge chatbot or predictive modeling system to stay relevant, and A.I. is not the solution to every problem. But experts agree that the technology is here to stay, and nonprofits ought to be thinking about how they’ll approach it.
“My biggest advice is to lean into experimentation” as A.I. becomes more accessible, says Brigitte Gosselink, director of product impact at Google.org, Google’s philanthropic arm.
“A.I. is real, and it’s an opportunity” that’s already impacting nonprofits, she says. “The question is how to use it in a way that really drives value for people and the planet.”
Will A.I. take my job?
As A.I. continues to evolve, many workers fear that their jobs could soon become obsolete. Why pay a human to do what a chatbot can do in 30 seconds?
That’s unlikely to happen anytime soon, says Ellis. While ChatGPT can quickly compose clear and concise text, it tends to write in general platitudes and sometimes hallucinates false information with alarming confidence. That won’t impress donors. Instead, experts agree that the best uses of A.I. still require a human touch. For example, a fundraiser might use A.I. to create a first draft of a donor letter — before editing it to be more personalized.
“It’s just not a great writer” yet, Ellis says.
Still, A.I. is already changing how people work — and is likely to gradually replace certain jobs. It wouldn’t be the first time that automation has shaken up industries.
More than half of nonprofits anticipate that A.I. will at least somewhat reduce their staff headcount over the next three years, according to a new survey of 250 groups by NonprofitHR, though most said the impact would be small. Fifty-five percent of the nonprofits surveyed also believed A.I. will lower their organization’s need for gig workers.
What are the risks of using A.I.?
A.I. tools can be biased, often scaling up prejudices present in the data they’re trained on. Amazon stopped using a recruiting algorithm after finding that it favored men’s resumes and penalized candidates for attending all-women’s colleges. Americans are more concerned than excited about the growing role of A.I., according to the Pew Research Center.
Sharing data with A.I. systems, which are typically developed by large tech companies, can also pose privacy concerns for nonprofits, donors, and the people they serve, says Jacob Metcalf, program director at Data & Society, a research institute that studies the impact of A.I.
It’s important that nonprofits develop policies for acceptable usage of A.I. — plus ways to regularly involve their stakeholders in conversations about the use of their data, he says.
“There’s very sensitive information that nonprofits hold,” he says. Even if nonprofits take care to embed privacy protections in their A.I. systems, “they still run a serious reputational risk” if donors or beneficiaries feel that their data was used without their consent.
What’s next for A.I.?
The federal government has yet to create comprehensive regulations around A.I. systems, which raise unprecedented questions around copyright infringement, misinformation, and privacy invasion.
It’s still unclear who owns the content that A.I. produces and the data, writing, or art that it’s trained on. As a result, the next few years will likely see a lot of legal wrangling, Ellis says.
“If you’re training on paintings from the Renaissance, it’s unlikely that anybody’s going to come after you,” he says. “But if you’re using modern artists, attribution becomes a much bigger question — how do you give credit where credit’s due?”
Nonprofits will also be faced with their own questions around how to responsibly use A.I. to advance their mission.
“It’s important to always keep in mind what your mission is and who you are impacting,” Bruce says. “If that is your guide, you can make the decisions about what actually will make their lives better.”