In a matter of months, ChatGPT has become a go-to tool as many people use generative artificial intelligence to help them do their jobs better. Some fundraisers use it to write social-media posts, draft thank-you notes, and create internal documents. One nonprofit went so far as to use an A.I. chatbot to respond to people who contacted its eating-disorder chatline.
While new and exciting, the technology is not a panacea. The information the tool provides is based on what it has learned by perusing the internet, so sometimes the content it produces is wrong, biased, or inappropriate, experts say. Because of this, it’s crucial for nonprofits jumping into A.I. to think carefully about how they use the technology so it doesn’t violate laws or ethical principles or betray the faith of a charity’s constituents.
“If we use this irresponsibly, we will diminish trust in our sector,” says Nathan Chappell, senior vice president at the company DonorSearch. “And that’s something that no one wants or can afford.”
When using generative A.I., there are a few key areas to watch out for, leaders say. To protect your organization and serve your community, nonprofits need to address these issues with thoughtful policies and conversations with employees.
“The technology has exploded, and many organizations right now are still trying to catch up,” says Rodger Devine, president-elect of APRA, a group dedicated to prospect research. He adds that staff may be using the technology to do their work without their bosses knowing. “All the more reason we need to catch up and provide some safeguards, guide, teach, and help people.”
Serious Privacy Risks
One of the major concerns with generative A.I. tools is privacy, contends Jeffrey Tenenbaum, a nonprofit lawyer. Generative A.I. is trained both by scraping the internet and by the information users feed into it. Problems can arise if a nonprofit feeds private information about donors or other constituents into an A.I. it doesn’t own or control.
“The terms of use of many of these generative A.I. platforms make clear that once you upload something into it, you’re giving essentially a broad license for them to do anything they want with it,” Tenenbaum says. “And it’s also going to be captured and used by that A.I. platform to generate responses to other people’s questions of the future. So you lose any exclusive rights to your content once you put it in there.”
Some generative A.I. platforms do allow organizations to turn off the ability of the A.I. to share their content, says Charles Lehosit, vice president of technology at the RKD Group. However, he doubts many folks take that option.
“Are people really turning those off?” he says. “Or when you sign up for a service and you have a lot of terms and conditions to read, are you just hitting accept? Nonprofits need to look closely at the terms and conditions.”
Tenenbaum says nonprofit employees who are uploading data into an A.I. should make sure they’re following their organization’s data privacy policies, which are often drafted so they comply with local, state, and federal privacy laws. If you want to train the A.I. tool to write thank-you notes, be sure the samples you upload for training are stripped of any identifying information.
“You want to make sure you’re scrubbing any of that personal information from the thank-you letter,” Tenenbaum says. “You don’t want to include a donor home address, their email address, phone number. You want to make sure you’re scrubbing dollar amounts, the organization’s name. Make it as generic as possible.”
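For organizations that want to automate that kind of scrubbing before sample letters ever leave their own systems, a rough sketch in Python might look like the following. It is illustrative only, assuming plain-text letters and simple pattern matching; the placeholder labels are invented for the example, and names in particular still call for human review.

```python
import re

def scrub_letter(text: str) -> str:
    """Replace common personal identifiers with generic placeholders."""
    # Email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    # U.S.-style phone numbers such as 555-123-4567 or (555) 123-4567
    text = re.sub(r"\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}", "[PHONE]", text)
    # Dollar amounts such as $1,000 or $25.50
    text = re.sub(r"\$\s?\d[\d,]*(\.\d{2})?", "[AMOUNT]", text)
    # Simple street addresses such as "123 Main St."
    text = re.sub(r"\b\d{1,5}\s+\w+(\s\w+)*\s(St|Ave|Rd|Blvd|Lane|Dr)\b",
                  "[ADDRESS]", text)
    return text

if __name__ == "__main__":
    sample = ("Dear friend, thank you for your gift of $1,000. "
              "Write to us at donors@example.org or call 555-123-4567.")
    print(scrub_letter(sample))
```

Simple patterns like these will miss plenty, which is why a person should still read every scrubbed letter before it is uploaded.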
A.I. May Be Wrong
There’s a chance the material you get from that snazzy new A.I. tool will be wrong. Generative A.I. learns by searching the internet for content, and the internet has a lot of inaccurate information. Devine, from APRA, says any organization using generative A.I. needs to make sure the information it gets is accurate before sharing it.
“When you use these open tools, there’s a trade-off,” he says. “They can produce seemingly remarkable results, but the source of the information is often opaque. Just because you got the answer fast doesn’t mean it’s high quality.”
Recently, the National Eating Disorders Association decided to replace its hotline staff with an A.I.-powered chatbot. Shortly before it was supposed to transition completely to the chatbot, which had been responding to some people who contacted the hotline, the organization halted all use of the bot. The reason: A user reported that the bot had told her to restrict her calorie intake to unhealthy levels — advice that can lead to eating disorders rather than fight them. Via Instagram, the group said the chatbot “may have given information that was harmful and unrelated to the program. We are investigating this immediately and have taken down that program until further notice for a complete investigation.”
According to Tenenbaum, organizations risk legal liability if they give “advice on how to do something or what to do or not do” and the advice is wrong or incomplete and “someone relies on it and gets injured as a result.”
Even if inaccurate content provided by A.I. falls short of legal liability, Devine notes that it can also cause “reputational harm” to an organization.
Lehosit adds that generative A.I. may be well suited for crafting internal documents or ones that are somewhat formulaic but time-consuming. He likes using A.I. to create a first draft of requests for proposals because they’re pretty formulaic, and he has written enough of them that it’s easy for him to spot and correct errors. Having that first draft completed, he says, saves a lot of time.
A.I. Is Biased
Because A.I. is trained on content created by humans, it shares the same biases present in that material.
“You have to think and be thinking about the amplification of bias,” Devine says. “All A.I.-powered tools are subject to their training data and garbage in, garbage out.”
The experts say bias can be hard to spot because A.I. doesn’t explain how it reaches its conclusions; it just provides them. At a recent webinar, an association said it had considered using A.I. to identify which journal submissions should move to the next round and be reviewed by human editors. But in a test, the tool selected only papers submitted by Harvard and Yale researchers, basing its choices on the prestige of the authors’ universities rather than on the papers’ content, so it was useless for that purpose.
“With A.I. tools, how did it come to this conclusion is not always clear,” Devine says.
Legally speaking, Tenenbaum says, the biggest risk to most nonprofits is “in the employment setting” if they are using A.I. tools to help select job candidates and those tools are biased. “If there’s bias built into resume screening and they make decisions about whether or not to hire someone because of that built-in bias in the platform, that can give rise to a potential discrimination claim,” he says.
From an ethical standpoint, organizations that plan to use A.I. tools to help make decisions — such as a foundation narrowing down which proposals to fund or prospect researchers choosing whom to contact — need to understand the data sets that are being used and know where to look for possible bias.
For example, historical data often reflects racism of the time. Data that shows wealth by ZIP code doesn’t explain to the computer the historical practice of redlining that kept people of color from buying into certain communities, Devine says. When looking at the data inputs and outputs, he suggests examining the data holistically and considering who has been left out or isn’t represented.
“We can’t just rely on these tools and descriptions of the past to tell us how the future is going to be,” Devine says. “We have a responsibility to shape and create a more equitable and just world that we want to live in.”
How Much Disclosure Is Necessary?
Another ethical consideration is whether to disclose that something was generated by A.I.
“How would a prospective partner or a donor feel if they knew that a computer had generated these messages that you are using to engage them?” Devine says. “So much of the work that we do in advancement is relationship building, and the center of that is trust and integrity and transparency. So we want to think through that.”
Chappell, from DonorSearch, notes that it’s important to add the human touch to anything that starts off as A.I.-generated.
“Our whole industry is built around relationships, and you don’t want to automate a relationship,” he says. “You want to keep a human in the relationship. Sounding authentic is not the same as being authentic.”
That doesn’t mean a nonprofit needs to say it used ChatGPT to craft 50 percent of, say, a fundraising appeal, just as people don’t disclose that they use templates to craft fundraising letters, Lehosit says. In fact, that level of transparency can backfire: Vanderbilt University apologized after facing criticism for acknowledging it used ChatGPT to create a letter in response to a mass shooting.
However, it’s important not to be misleading, Lehosit says, which can happen if a group uses A.I. to generate images. “If you’re trying to come off as, this is a cancer survivor that you’ve helped or this is a homeless person that you’re helping in your community, that’s problematic.”
Create A.I. Policies
Tenenbaum says organizations need policies to address multiple aspects of A.I. at their nonprofit. The first place to start is with employees.
“There’s no question you have to have a strong policy governing employees’ use of A.I.: when they can use it, when they can’t, when it has to be disclosed that some content came from A.I.,” Tenenbaum says.
Policies also need to flesh out acceptable uses of A.I. for volunteers and vendors. Can they use generative A.I. to create products for your organization? If they use it, what information must they disclose? If vendors are handling your donor data, their policies should align with your own, and they should not provide the nonprofit’s data to third-party A.I. Devine notes that APRA has a tool kit to help nonprofits ask the right questions when dealing with vendors, including those that will be handling the organization’s data.
Few sample generative A.I. policies for staff and vendors exist, but one circulating among people concerned about the issue is an interim policy by the City of Boston. It spells out some things nonprofits might want to consider as they craft a policy.
Even if organizations don’t have a policy yet, Devine says the most important thing at this point is to be aware of the problems and talk through where the concerns are. Some of the concerns may sound scary, but he thinks the technology is super important for nonprofits.
Says Devine: “My advice is to stay informed and to keep asking questions and sharing useful tools with your colleagues to help advance literacy and discussion around this rapidly evolving space.”