In 2021, in the midst of a major pandemic-induced diaper crisis, the Greater DC Diaper Bank turned to a then-fledgling tool to cope with surging demand: artificial intelligence.
At first, “I wasn’t even sure we really needed A.I.,” says Cassie Fassett, director of partnerships and impact at the Greater DC Diaper Bank, which distributes millions of diapers to low-income families every year. Fassett first applied to IBM’s A.I. incubator for social impact “on a whim” at a time when supply-chain issues and rising prices had left many American families struggling to stock up on diapers.
The Greater DC Diaper Bank used a machine-learning model (machine learning is a subset of A.I.) designed by IBM to scrape and organize anonymized data from government benefit rosters, local tax codes, internal distribution data, and census demographics to predict the areas with the highest diaper need. The model has helped the nonprofit identify neighborhoods where it has fallen short, find new distribution partners, and bring more attention to the shortage in ways that Fassett hopes could one day become a nationwide standard.
“People will be able to see and understand the issue in a way that we haven’t been able to before,” she says.
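The article doesn't spell out how the model works under the hood, but the basic pattern, training on public data to score small geographic areas, is common. A rough sketch in Python, with invented column names and data file (the diaper bank's actual schema and IBM's model are not public), might look something like this:

```python
# Hypothetical sketch of area-level need prediction from public data.
# Column names and the file are illustrative assumptions, not the
# diaper bank's actual schema or IBM's actual model.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

tracts = pd.read_csv("tract_features.csv")  # one row per census tract
features = ["pct_households_in_poverty", "pct_children_under_3",
            "snap_enrollment_rate", "median_rent_burden"]

# Train only on tracts the nonprofit already serves, where demand at
# partner distribution sites gives a rough ground truth for need.
served = tracts[tracts["has_partner_site"]]
model = GradientBoostingRegressor(random_state=0)
model.fit(served[features], served["observed_demand"])

# Score every tract, then surface high-need areas with no partner yet:
# candidates for outreach and new distribution partnerships.
tracts["predicted_need"] = model.predict(tracts[features])
gaps = tracts[~tracts["has_partner_site"]].nlargest(10, "predicted_need")
print(gaps[["tract_id", "predicted_need"]])
```

Note that the design choice Fassett describes, relying on public, area-level data rather than personal records, shows up in a sketch like this too: nothing in the feature set identifies an individual family.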
Public interest in A.I. has exploded in recent months, thanks to the power — and potential — of expansive new tools like ChatGPT, which can write and process commands in a way that mimics the complexity of human thought. Yet A.I. has been quietly transforming nonprofit operations for years, driven largely by an influx of corporate philanthropy from major tech companies. As more nonprofits of all sizes seek to use the technology, they’re considering both the benefits and the risks of an A.I.-driven future.
At the Greater DC Diaper Bank, A.I. has been both a boon for expanding its reach and a challenge for the organization’s small staff.
As part of the incubator, experts from IBM worked pro bono alongside the diaper bank’s staff to create a machine-learning model that can give a hyper-local look at diaper needs in the D.C. metro area. While the tool “really helped us to start getting very targeted about our services,” it ultimately became too unwieldy for the diaper bank’s 10-person team to maintain once the experts were gone, says Fassett. At her request, IBM has since created a simplified, yet still sophisticated, version of the original model that’s been easier for the team to manage on its own.
“These types of custom tools are just not something that small nonprofits will be able to sustain” on their own, “if they have access to them at all,” says Fassett, who also stressed the importance of “careful data governance and guidance” that takes into account people’s privacy and consent when building out new A.I. projects. For example, the diaper bank opted to use largely public and anonymous geographic data to build out its A.I., rather than personal data from its partners or beneficiaries, to avoid privacy violations.
Because of these concerns and others, it's important for nonprofits of all sizes to have a seat at the table as A.I. tools become more mainstream, says Michael Jacobs, sustainability and social innovation leader at IBM, where he leads a $30 million initiative for A.I.-powered philanthropy projects. Although IBM provides the technical expertise, tools, and occasional cash grants, he says, nonprofits are the experts in creating accessible and equitable solutions for their communities.
“Tech companies have a lot to learn from these organizations, too,” says Jacobs.
Starting Small
While not all nonprofits have been as eager to adopt the new technology, experts agree that A.I. is here to stay — and that organizations ought to start thinking about their next steps.
“The barriers to access are coming down and will continue to come down” for A.I. tools, says Brigitte Gosselink, director of product impact at Google.org, the philanthropic arm of tech giant Google, which has given over $100 million in cash grants and 160,000 hours in pro bono consulting to more than 150 organizations for A.I.-related projects over the past several years.
The organizations that Google.org supports say their A.I. projects have helped achieve their goals in a third of the time and at half the cost, according to surveys Google has conducted. That claim is echoed in research about A.I.'s impact on productivity. A study by Stanford University and the National Bureau of Economic Research found that A.I. increased workers’ productivity by 14 percent; another released by MIT researchers in March found that ChatGPT improved workers’ efficiency by 37 percent.
Most of the A.I.-driven tools used by nonprofits bear little resemblance to more advanced (and expensive) A.I. like Google’s Bard or DALL-E, which can generate their own text and images. Simpler forms of A.I., like Apple’s Siri or even an automatic spam filter, focus instead on analyzing and making predictions based on existing data.
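A spam filter is a handy illustration of that simpler, prediction-only style of A.I.: it learns patterns from examples that humans have already labeled, and it never generates anything new. A toy version in Python, using scikit-learn and a handful of invented messages, might look like this:

```python
# Toy spam filter: learns word patterns from labeled examples and
# predicts a label for new text. Illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A handful of hand-labeled examples; real filters train on far more.
messages = ["win a free prize now", "claim your cash reward today",
            "meeting moved to 3pm", "minutes from today's board call"]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features feeding a Naive Bayes classifier.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(messages, labels)

# The model predicts from learned word patterns; it generates nothing.
print(clf.predict(["claim your free prize"]))  # -> ['spam']
```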
The Trevor Project, for example, a nonprofit that provides crisis support to LGBTQ+ youths, worked with Google.org on two such tools: a chatbot that helps train its volunteers, and a model that identifies the highest-risk young people reaching out by chat and puts them in touch with a volunteer.
Reimagining Disaster Response
In the past five years, the American Red Cross has launched more than 20 A.I.-powered projects, including disaster-response chatbots that can help people find the nearest shelter and algorithms that can predict levels of attendance — and anticipate staffing needs — at future blood drives.
One project uses a tool similar to the Greater DC Diaper Bank’s machine-learning model to determine which areas of the country have the highest risk of fire. Using that data, a campaign to install free smoke alarms around the country has been able to target the communities most at risk.
More recently, the group has begun exploring more advanced deep-learning models, which rely on much larger datasets than other forms of A.I. and can produce more complicated predictions and analyses. Two new tools, which are nearly ready for pilot testing, will let the group automatically assess the level of damage in disaster-stricken communities using drone footage and a set of GoPro video cameras affixed to a car.
“Identifying damage takes a lot of time because you need to have a lot of people on the ground going door to door,” says Sajit Joseph, chief innovation officer at the American Red Cross. “The process could take weeks — and technology’s changing that to hours, or maybe days.”
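The Red Cross hasn't published how those assessment models work, but the task is usually framed as image classification over individual video frames. A minimal sketch, assuming PyTorch and torchvision, an ImageNet-pretrained backbone, and invented damage categories (none of which are confirmed details of the Red Cross's tools), might look like this:

```python
# Hypothetical sketch of per-frame damage classification; the Red
# Cross's actual models and label set are not described in the article.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

LABELS = ["no_damage", "minor", "major", "destroyed"]  # assumed categories

# ImageNet-pretrained backbone with its final layer swapped out for the
# damage categories; fine-tuning on labeled footage would come next.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(LABELS))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_frame(path: str) -> str:
    """Label one frame pulled from drone or car-mounted camera footage."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return LABELS[model(img).argmax(dim=1).item()]

print(classify_frame("frame_0001.jpg"))  # hypothetical extracted frame
```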
An A.I.-powered project at the American Red Cross helps predict levels of attendance — and anticipate staffing needs — at future blood drives. (Photo: American Red Cross)
While most of the American Red Cross’s A.I. tools are developed in-house through a dedicated innovation team, the newer and more technically advanced projects have been built with the support of Microsoft and Amazon Web Services.
The disaster nonprofit has also begun thinking about how it might use generative A.I., the technology behind ChatGPT, for internal processes. A new volunteer, for example, might soon be able to ask a chatbot for a bite-size explanation of how to conduct shelter counts without sifting through thousands of internal Red Cross documents.
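A common recipe for that kind of internal assistant is retrieval-augmented generation: search the document library first, then let the language model answer only from what was retrieved. A bare-bones sketch, assuming the openai Python package (the model name, documents, and keyword search here are all placeholders; real systems typically use embedding-based search):

```python
# Hypothetical retrieval-augmented chatbot over internal documents;
# the Red Cross's actual approach is not described in the article.
from openai import OpenAI

# Stand-in document library; a real deployment would index thousands
# of files. None of this content comes from the Red Cross.
DOCS = {
    "shelter_counts.txt": "Shelter counts are taken daily at 6pm by ...",
    "volunteer_intake.txt": "New volunteers complete orientation and ...",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Crude keyword retrieval; production systems use embedding search."""
    words = question.lower().split()
    return sorted(DOCS.values(),
                  key=lambda doc: -sum(w in doc.lower() for w in words))[:k]

def answer(question: str) -> str:
    context = "\n\n".join(retrieve(question))
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer using only these documents:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("How do I conduct a shelter count?"))
```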
Still, the group is in no rush to deploy the technology, which has been plagued by bias and privacy concerns, to external users, says Joseph. Critics of A.I. contend that the technology often replicates and scales up the racial and gender biases embedded in its training data, while exposing users' personal information. It's important to choose the right projects for such advanced tools, he says, and to make sure that employees and the public alike understand them before they're deployed.
“The opportunity with these A.I. models is to change the way that work is done,” says Joseph. “It takes a little time to make sure that change is really well understood.”
A.I. for Everything?
Ensuring that projects are properly planned and targeted is key to building A.I. programs that genuinely advance a nonprofit's mission, says Jacob Metcalf, program director at Data & Society, where he leads an initiative researching the impact of A.I.
Some nonprofits and government agencies have already generated controversy for biting off more A.I. than they could chew. In Pittsburgh, a child-welfare tool, built to lighten the load for overwhelmed city social workers, has been accused of discrimination against families with disabilities. The mental-health hotline Crisis Text Line came under fire for sharing user data with its for-profit A.I.-driven customer-service spinoff, and the National Eating Disorders Association was criticized earlier this year for replacing its hotline staff with a problem-plagued chatbot.
“If all you have is a hammer, everything looks like a nail,” says Metcalf, warning against an overly zealous approach that ignores existing biases or expects that the “solution is A.I. before you even figure out the problem.”
It’s the biggest lesson Gosselink herself has learned while leading Google’s A.I. for social-good initiatives. Not everything needs to be A.I., she says.
“I worry about people thinking it’s an inaccessible opportunity for them,” she says. “Or getting so distracted by the hype that it becomes something they’re investing in before they should be.”
Instead, before every project, she recommends that nonprofits ask themselves whether A.I. will actually advance the organization's mission. Many nonprofits could benefit from starting with smaller-scale data projects to improve their operations, she says.
“You don’t need to be all the way there. You don’t need to be programming a robot or developing some profound new algorithm,” says Gosselink. “Most of what we’re doing here is really thinking about how to have more data-driven insights.”