The National Eating Disorders Association made headlines in 2023 after its artificial intelligence-powered chatbot gave people seeking advice about eating disorders suggestions that were inappropriate and unhealthy. While the group pulled the bot quickly, the case is a powerful example of how using A.I. without enough oversight can harm a nonprofit’s reputation.
While many nonprofits using A.I. today are performing simple tasks like text or image generation, even those uses should be guided by organizational policies that help groups stay true to their principles and avoid damaging the trust of their supporters, say consultants and executives who study the issue.
“I’m very hopeful about the promise of A.I. to help nonprofits do amazing things,” says Nathan Chappell, a co-founder of Fundraising.AI, a platform dedicated to exploring A.I. use at nonprofits. “The reality is that we could diminish trust at scale if we use A.I. inappropriately or irresponsibly.”
Unfortunately, few nonprofits have policies to make sure A.I. is being used appropriately: Chappell describes the current state as “very wild West.” But if more nonprofits put on their good cowboy hats, the A.I. frontier can be tamed.
To create policies for ethical A.I. use in fundraising and other common nonprofit activities, Chappell and others say it’s crucial to start with organizational values, focus on key concerns like privacy, bias, and transparency, and remember that humans, not the technology, should be top of mind in all the work.
Even nonprofits whose leaders don’t intend to use A.I. need to have a policy, says Beth Kanter, a technology consultant and author.
“There’s shadow use going on,” she says. “People aren’t telling their bosses, but they’re using it. And I think that’s where we can run into trouble.”
A.I. Should Complement Mission
When nonprofits create policies to use A.I. ethically, organizational values have to be at the core, says Kanter, co-author of The Smart Nonprofit: Staying Human-Centered in an Automated World. For example, if an organization values equity, then A.I. use policies should start from there.
“You need to have in your acceptable-use policy some kind of value statement: ‘We carefully review A.I.-generated content through an equity lens to avoid perpetuating harmful stereotypes and for accuracy,’” she says. “So that’s the value, but operationalizing it, you might make sure that content’s being reviewed by a diverse team with a diverse point of view.”
The organization’s values are the starting point because they are not going to change; the same can’t be said about technology, says Karen Boyd, director of research at the Policy & Innovation Center.
“Think about what those values are before you dive into decision making — this technology is changing crazy fast,” Boyd says, noting the technology tools an organization might be thinking about when writing a policy “will probably be outdated by the time” the policy is applied.
Consider Data Security, Privacy, Bias
With values in mind, a few key issues arise with the ethical use of A.I., says Ben Miller, senior vice president of data science and analytics at the tech company Bonterra. Those are data security and privacy, bias, accuracy, and transparency.
Data security and privacy concerns relate to keeping organizational data — such as donor information — both private and secure. Organizations shouldn’t upload private or proprietary data into systems they don’t control because that data could end up exposed, Miller says.
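One way to operationalize that advice is to strip obvious identifiers from any text before it is pasted into an outside tool. The sketch below, in Python, is a minimal, hypothetical illustration only; the patterns and placeholder labels are assumptions, not a vetted privacy filter, and a real policy would cover far more than e-mail addresses and phone numbers.

```python
import re

# Hypothetical sketch: scrub obvious donor identifiers from a draft before it
# is pasted into an external A.I. tool the organization doesn't control.
# The patterns and placeholder labels are illustrative, not a complete filter.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")

def redact(text: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL REMOVED]", text)
    return PHONE.sub("[PHONE REMOVED]", text)

draft = "Thank Jane Doe (jane.doe@example.org, 555-123-4567) for her $500 gift."
print(redact(draft))
# Thank Jane Doe ([EMAIL REMOVED], [PHONE REMOVED]) for her $500 gift.
```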
Any data uploaded to an A.I. tool also might be used to train that system, says Dan Kershaw, executive director of the Canadian charity Furniture Bank. While private data needs to be kept safe, Kershaw thinks nonprofits should upload documents that speak to the heart and soul of their organizations so that what A.I. generates reflects more of the nonprofit spirit.
“If we don’t, the for-profits will happily keep training the models on the data they have,” he says. “The amount of our social wisdom — to give it a name — will become smaller and smaller in the training.”
To Kershaw’s point, bias has become a real concern for A.I.-produced content because some of the data that trained the technology contains significant biases.
“You may not understand what biases are there,” Miller says. “It’s really important to ask questions like, ‘If I’m asking the bot to send back information about nonprofits that might help my organization in referrals, I might miss out on underfunded nonprofits because they’re not in the news or they’re not on the corpus of data that it’s been trained on.’”
Another concern with the use of A.I. is accuracy. The programs are known to “hallucinate,” a tech industry euphemism for making things up that sound true but aren’t. The technology can also provide matches that are wrong — and potentially offensive.
Boyd says she’s heard of people asking for images of monkeys but getting images of Black people, or someone seeking playground pictures but being served up concentration camp images.
“It’s not making a moral choice,” Boyd says. “It’s making a pattern-matching choice. But that choice has ethical implications — serious ones.” She stresses that it’s crucial to have humans check A.I. output.
Charities should also think through how they’ll address mistakes — which will inevitably happen, says Afua Bruce, co-author of The Tech That Comes Next: How Changemakers, Philanthropists, and Technologists Can Build an Equitable World.
“Define what harm looks like and what repairing harm looks like,” she says. “If an organization is using a chatbot to provide responses to clients and it gets it wrong, how are people notified? What happens? How is that information corrected? How is harm repaired?”
Transparency is also a hot topic in A.I. ethics circles: whether organizations should disclose when they use A.I. to create content. For text, how much A.I. was used generally determines whether nonprofits should disclose it.
“Are you using it to help you brainstorm or as a glorified spell checker? Maybe you wouldn’t disclose,” Kanter says. “If you’ve used it to help write a first draft and then that’s been human edited, you probably want to disclose.”
How you feel about disclosure might be an important signal, Miller says.
“One good gut check is, if you’re nervous about saying that you used A.I., you might be using it incorrectly,” Miller says, adding that if A.I. wrote the entire piece, the technology should be credited, not a human. “If you’re willing to say, ‘I used A.I. to help,’ it’s probably because this is still your idea,” Miller says. “It’s still what you’ve done.”
A.I. Images
The best practice consensus on A.I.-generated images is that they should be disclosed, says Kanter, because people generally assume images are real unless told otherwise.
Creating images with the intent of passing them off as genuine is deception, Miller says. “If you’re trying to deceive somebody,” he says, “that’s not OK whether you use A.I. or not.”
Furniture Bank has been using A.I.-generated images in its work. The Canadian charity helps people who can afford living quarters but not furnishings. However, it was nearly impossible to show donors images of the people their money would support, says Kershaw, the group’s executive director.
“In 26 years of operating, we’ve only ever had three families that invited us in to show the squalor and trauma that comes when you have housing but you don’t have the resources to furnish it so your children end up sleeping in a bed of clothes, garbage bags or storage milk crates or tables,” Kershaw says.
In 2022, he used A.I. to generate images of what clients’ living spaces were like for a fundraising campaign called “The picture isn’t real. The reality is.”
“We had 40-plus pages worth of stories that had been written by families over the years that, in their own words, articulated what it is like,” Kershaw says. “We could take the power of the words, maintain the anonymity and the privacy, and visualize something that, economically as a small charity, we’d never have been able to afford. That would have cost $60,000 to $80,000 to do it the traditional way.”
By clearly disclosing the use of A.I., the campaign touched hearts and was generally well received, according to Kershaw. “It was not trying to exploit,” he says. “It was trying to educate and start a discussion.”
Allowing people to get help without asking them to consent to have their images published in fundraising appeals is a good use for A.I., say Kershaw and Miller. Bruce, the author and technology consultant, says that while using A.I. to generate images can help avoid exploitation, it’s not a guarantee.
“The question is still there as to whether or not you’re preserving dignity,” Bruce says. “If you have an exploitative picture, whether it was taken by a photographer or generated by A.I., it could still be exploiting that situation. I encourage people to think about what message it is they’re conveying through whatever images they’re sharing.”
Another ethical issue some organizations consider with A.I.-generated images is whether they’re displacing humans, Boyd says.
“Are we taking a paid gig away from a human photographer who may bring a point of view and a human touch?” she says. “There are lots of cases where we might use stock images. We know those people have consented, we know we’re not exploiting folks, and someone’s getting paid for that.”
‘Real People in the Loop’
Finally, when creating policies for the ethical use of artificial intelligence, “it’s important to always have real people in the loop,” Chappell says.
That means having humans both review A.I. output and decide when and how A.I. is used in the first place, Miller advises.
“The important decisions should be in human control,” Miller says. “I might say that when someone gives me a donation, within 24 hours, I’m always going to send them a thank you. I made the decision that this is the time that I want them to get that thank you. If you let the bot make the decision, then that’s a problem.”
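To make that division of labor concrete, here is a minimal, hypothetical sketch in Python of Miller’s example: the 24-hour rule and the review step belong to people, and the generative tool (represented by a stub function) only drafts text. All names and the stub are assumptions for illustration, not any particular organization’s workflow.

```python
from datetime import datetime, timedelta

# A human, not the bot, decides the rule: every donor is thanked within 24 hours.
THANK_YOU_WINDOW = timedelta(hours=24)

def draft_thank_you(donor_name: str, amount: float) -> str:
    # Stand-in for a generative A.I. call; staff review the draft before sending.
    return f"Dear {donor_name}, thank you for your generous gift of ${amount:,.2f}."

def thank_you_due(gift_time: datetime, now: datetime) -> bool:
    """Human-defined rule: the thank-you must go out within 24 hours of the gift."""
    return now - gift_time <= THANK_YOU_WINDOW

gift_time = datetime(2024, 5, 1, 9, 0)
now = datetime(2024, 5, 1, 18, 0)
if thank_you_due(gift_time, now):
    # The draft goes to a person for review, not straight to the donor.
    print(draft_thank_you("Jane Doe", 500))
```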
While many nonprofit professionals are excited about artificial intelligence, some people may be resistant to or fearful of it, says Kanter, who has worked for 30 years helping organizations with technology adoption. She says it’s important to talk about A.I. and help people understand it’s just another technology.
“I hear a lot of misconceptions because they haven’t put their hands on it, or they’ve read something and they just have some narrative in their head,” Kanter says. “Having that conversation about what the concerns are, and getting it, and showing them that you’re addressing those concerns.”
A.I. is really good for automation, and that is where many nonprofits are going to see the most benefit, Boyd says. She advises talking to staff first — even before the group crafts A.I. guidelines — to determine how A.I. can be used most effectively at their organization.
“Figuring out how people are currently using it is a good place to start, because that tells you where the most acute need is,” Boyd says. “In addition to what they’re using it for now, it’s useful for leaders to know what their workers are worried about and excited about with A.I. So understanding where they see potential moral issues in their work, and where they’re worried about A.I. They’ll know where some of that gray area is. So talking to them about that is really useful.”