Interest in artificial intelligence has been exploding in recent months with the emergence of new tools like ChatGPT. Yet fewer than 30 percent of nonprofits have started using or exploring A.I., says Nathan Chappell, senior vice president of DonorSearch, a software company that provides prospect research and donor-intelligence data to nonprofits.

Chappell and three other experts joined the Chronicle for an online discussion about how nonprofits can use technology to do their work more efficiently.

Nonprofits can’t afford to fall behind the curve of learning and adopting A.I., Chappell says. “This is not a fad — this has fundamentally changed how our world works. The nonprofit sector has not only an opportunity but a responsibility to rise to the occasion.”

As more and more companies use A.I. to improve their interactions with consumers, he says, donors are increasingly going to expect the same level of “personalization and precision” from nonprofits — and could choose not to support those that don’t deliver it.

Plus, A.I. is a time-saving tool, he says, and time is the “most precious commodity” for every nonprofit.

The session, titled “Putting A.I. to Work at Nonprofits,” was hosted by Sara Herschander, breaking news reporter at the Chronicle, and included:

  • Nathan Chappell, senior vice president of DonorSearch
  • Philip Deng, CEO of Grantable, an A.I.-powered platform that helps nonprofits write grant proposals
  • Allison Fine, president of Every.org, a nonprofit that provides a free online fundraising platform for charities
  • Gayle Roberts, chief development officer at Larkin Street Youth Services, a group that fights youth homelessness in San Francisco

Read on for highlights, or watch the video to get all the insights.

Move Slowly and ‘Do No Harm’

As A.I. becomes more advanced and accessible, it has the potential to transform how nonprofits operate, from increasing efficiency to optimizing fundraising. But this technology comes with some key risks that leaders and fundraisers shouldn’t ignore.

The biggest return on investment in A.I. could be the “dividend of time” it creates for staff to focus on relationship-based work, Fine says. “That, to me, is the greatest upside we could hope for with A.I.,” she says. “But that only happens if you use A.I. carefully and strategically on the rote tasks that are sucking up so much of our work time — the 30 percent of time that we spend that way.”

Many groups are eager to jump into A.I., but it’s crucial to focus on responsible and ethical use of this technology, Chappell says. “In fact, I’m really trying hard not to ever use the word A.I. in fundraising without the word responsible in front of it.”

This is especially important for nonprofits, he says, because they rely on the public’s trust. Unlike in the private sector, a mistake by a large, reputable charity, such as using a biased algorithm, will hurt all nonprofits in the eyes of the public. And with donations to charity falling, he says, “there’s not a lot of room to risk doing this wrong.”

Leaders should make sure to “do no harm” with A.I. by taking steps like asking software providers what their tools are really doing, including what goes into their A.I. algorithms and how they account for bias. “Demand to understand how decisions are being made for your organization,” Chappell says.

Any software product that uses A.I. is likely going to be biased, Fine says. “Chances are that it was programmed by a white man and then tested on historic data sets that already exist, which tend to benefit white people,” she says. “So you’ve got a double whammy there that by the time you take a product for workflow improvement or hiring or providing services to communities, it’s likely biased both against people of color and women.”

Take the time to ask “good, hard questions” of the firms that created these tools, Fine advises: What assumptions did they build into them, and how did the developers try to mitigate bias? Then you can work to do the same.

Then start implementing these tools very slowly with “tiny pilots,” Fine suggests, and check what happens as you automate each function. For example, consider whether any employees were left out of decision making, or whether people outside your organization, such as job applicants, were screened out of a hiring process.

Stay ‘Deeply Human-Centered’

“Using A.I. well is a fundamental leadership challenge for organizations,” Fine says. “This starts at the C-suite level of [asking] what are the really fundamental human things that we do in this organization that we need to protect and do more of? And how can A.I. augment that work ... without doing any harm?”

The way to start that conversation, she says, is to identify an “exquisite pain point” at your organization, one that is preventing a lot of other things from happening.

For example, a few years ago the Trevor Project, which operates a hotline for LGBTQ youths, was having trouble training enough volunteers. “Instead of replacing volunteers with a bot, they created a chatbot called Riley to train the volunteers — always with human supervision,” Fine says. “That was the pain point for them, and they’ve done a beautiful job of always making sure that there’s human oversight of the bots.”

If people take only one thing away from this panel discussion, Fine says, it should be this: “We cannot unleash the bots on the world without human supervision, and we have to always stay deeply human-centered in this work.”

How to Start

A.I. tools are very simple to use, Roberts says; the hardest part is managing people. “A lot of folks have fears around these tools. So just like with any initiative, you’ve got to manage those fears.” For example, she asked her team to read The Smart Nonprofit: Staying Human-Centered in an Automated World, which Fine co-wrote with Beth Kanter, and then held a discussion to address questions and help staff become more comfortable with using technology wisely.

Roberts suggests starting by setting goals for the technology, sharing lessons with other departments as you use A.I., training employees and addressing their fears and concerns, and choosing the right tools for your needs. Then do some testing, review your results, and revise your strategies as you go.

Acknowledge the “emotionality” of A.I., Deng says. “I think when you type into a box and it replies in a way that feels intelligent, it is many things, one of which could be unsettling.” Many of the organizations that are most fluent with this technology seem to take a playful approach to get to know it, he says.

“We’re hardwired to take on difficult, challenging, puzzling situations by applying a game or a playful framework to that,” he says. “So I would say, lower the stakes. Don’t experiment with A.I. on your next fundraiser keynote speech. Do it in ways that are much lower stakes but allow you to repeat and get those practice exposures — and try to have a little bit of fun with it.”

For example, have ChatGPT tell you stories, try out image generators, or use a tool called Census GPT to pull census information for you. “It’s really cool to just ask about your neighborhood and practice your prompting,” Deng says.

In his work, Chappell sees many nonprofits hold off on using A.I. because they think they don’t have enough data or that their data isn’t consistent enough. “Usually my response is, with A.I. you’ll never really be ready, and then also you’ll never really be done,” he says.

Start small, he suggests. “I do not believe that A.I. will replace fundraisers or nonprofits, but nonprofits that use A.I. will replace those that don’t,” he says. “Start — because if you don’t, you will be left behind.”