What’s the common thread between a chatbot that helps kids with their homework, a machine learning model that combs through biomedical research papers, and a tool that can decode symbols on ancient tablets?
They are all part of a wave of tech-savvy nonprofits aiming to solve pressing problems using advanced A.I. tools. Fast Forward, a nonprofit accelerator, on Monday released a landscape analysis that categorizes the four main ways nonprofits are using artificial intelligence. The analysis of dozens of organizations suggests that this is just the beginning of a broader transformation. But some experts urge caution, given the emerging technology’s possible ethical pitfalls and unintended consequences.
Over the past year, the number of A.I.-focused nonprofits applying for Fast Forward’s flagship tech training program has increased more than sixfold, from 13 to 84, says Kevin Barenblat, one of Fast Forward’s co-founders and a longtime software entrepreneur. While they make up just a tiny fraction of the country’s more than 1.8 million nonprofits, these tech-forward organizations have multiplied even as the vast majority of charities have barely begun to consider the implications of the rise of A.I.
“There’s a risk of diving in too fast because the market is changing so quickly,” Barenblat concedes. But unlike the Silicon Valley start-ups of his early career, where virtually “every company has become an A.I. company,” nonprofits need not rush to adopt A.I. if it doesn’t fit their mission, he says.
When it comes to finding solutions to issues like homelessness, educational inequity, or climate change, “there may be a way that tech can help, and within that, maybe A.I. is a part of it,” he says. “But for nonprofits, it needs to stay impact first.”
Fast Forward’s analysis, which attempts to demystify the growing field for other tech-curious organizations and funders, identifies four primary ways nonprofits are leveraging A.I.:
- Advising: A.I. chatbots and virtual assistants that provide personalized support to beneficiaries, volunteers, or staff.
- Structuring data: A.I. tools that help nonprofits make sense of vast amounts of data, uncovering insights and patterns.
- Translating: A.I. tools that help nonprofits communicate across language barriers or decode information.
- Platforms: Tools that make it easier for others to take advantage of A.I. or build their own tools.
An Emerging Landscape
Although A.I. may have appeared to emerge out of thin air with the release of ChatGPT in 2022, many of the tech-savvy nonprofits Fast Forward analyzed have been using more limited versions of the technology for years. Artificial intelligence refers not only to advanced chatbots but also to a range of tools capable of learning from data to make predictions, identify trends, or monitor content.
According to Fast Forward’s analysis, many tech nonprofits use various forms of A.I. to structure data, a task that can include surfacing real-time insights and making relevant information easier to find. For over a decade, the mental health nonprofit Koko has used A.I. to monitor and identify potentially dangerous search queries about self-harm on social media sites like Tumblr and Discord.
Recently, advancements in A.I. have made that monitoring work — and other kinds of data analysis — far more efficient and autonomous. “It’s been really striking to see how our techniques and approaches have evolved once the largest language models matured in the last year or two,” says Rob Morris, the co-founder of Koko. “It’s very different for us in terms of how we make this stuff, how we get it to work, how we audit and analyze it.”
Not all of Koko’s experiments with A.I. have gone smoothly. In early 2023, the organization came under fire for a trial in which it used GPT-3, an earlier version of the technology behind ChatGPT, to help write mental-health support messages on its peer-to-peer platform. The experiment aimed to augment human-written messages with A.I.-generated suggestions, but critics assailed the nonprofit for using A.I. to feign empathy in such a sensitive context.
“It was kind of an interesting moment where the public was really reacting to ChatGPT as a new technology, and they were uncomfortable with any use case,” Morris says. While he sees some promise for the use of generative A.I. for mental health care, he says that Koko has no plans to create an “A.I. therapist” or use generative A.I. to aid in communications again.
“I think it’s important to be aware of the capabilities but to always, always think about what’s that core unmet need you’re solving for,” says Morris. “You can’t start with the tool and then look for a problem. Start with the problem and then figure out what the tool is.”
A.I. for Advice
Many nonprofits included in Fast Forward’s analysis use A.I. in multiple ways. A nonprofit that works in community outreach, for example, might build an A.I. chatbot that can train volunteers, while deploying a separate A.I.-powered translation service to help facilitate conversations between those volunteers and beneficiaries. At the American Red Cross, more than 20 A.I.-powered projects are in the works, including several chatbots and advanced analysis tools.
Some nonprofits analyzed by Fast Forward are focusing primarily on building platforms that help others take advantage of A.I. The nonprofit Playlab, for example, has a platform to help educators build their own customized A.I. tools and apps.
Across the burgeoning industry, developments in A.I. have made it far easier to experiment, says Barenblat. “When people used the technology before, they had to learn the language of the machines” through coding, he says. With the rise of tools like ChatGPT, “the machines have learned our language,” making A.I. far more accessible and flexible than it was before.
That’s made it easier for nonprofits to use A.I. to interact directly with beneficiaries, he says, with a growing number of nonprofits using A.I. for advisory roles. A.I., for example, can be used to chat with people navigating public benefits enrollment or to answer students’ questions about their math or chemistry homework.
Take CareerVillage, a nonprofit that aims to connect young people with career advice. It recently launched an A.I.-powered career coaching tool called Coach. The tool uses several advanced language models to provide personalized career guidance to students, who can ask the bot wide-ranging questions about job applications, interviews, and career planning.
“Virtually nobody has access to a career coach,” says Jared Chung, founder of CareerVillage. “And we’ve known for 70 years that personalized career coaching from a trained professional coach has hugely positive outcomes on job placement, on income, and on job satisfaction.”
Fear of Falling Behind
Other nonprofits have built similar A.I. coaches atop large language models like the ones that power ChatGPT, including the education nonprofit Khan Academy, which debuted an A.I. tutor called Khanmigo last year. Building such a tool has gotten easier but still isn’t simple — or cheap — says Chung, and it can be difficult to root out biases and inaccuracies. Khanmigo, for one, still struggles with basic math.
Indeed, A.I. is still an emerging technology, replete with stubborn bugs, lagging regulations, energy inefficiencies, and frequent “hallucinations,” in which an A.I. tool spews made-up results. Some experts caution that the technology is simply too new and too fraught for nonprofits to safely experiment with, especially in ways that could reach beneficiaries.
The idea that nonprofits should quickly take up A.I. so they can compete with more tech-savvy organizations is “not helping us and it’s certainly not helping our missions and our communities,” says Amy Sample Ward, CEO of NTEN, a nonprofit that provides technology education and support to other nonprofits.
“That is actually feeding into the same narrative that the five tech giants want the sector to feel and to fear — that they are falling behind” on A.I., says Sample Ward. “That they should not read those terms of service, they should click accept and use these tools.”
Yet for A.I. advocates like Chung, the investment and care required to constantly debug A.I. systems and monitor them for bias are worth the possibilities the technology presents. He argues that while it’s important to be thoughtful and cautious about the use of A.I., ultimately the decision should be driven by the needs and preferences of the people nonprofits serve. In CareerVillage’s case, he says, the positive feedback from students and the impact Coach is having on their career prospects make the investment in A.I. worthwhile.
“The good news is that you actually can create systems to identify when something goes wrong,” he says. “If it doesn’t meet our quality standard or accuracy standard or does something in a biased fashion, we want to know about it and proactively reach out to the learner.”