Wyatt the chatbot has already helped more than 30,000 high-schoolers complete their federal student-aid applications on time — a key step to unlocking money to pay for college. Day and night, students text him questions about the country’s byzantine financial-aid processes, but Wyatt never burns out. The larger his caseload, the smarter he becomes.
With A.I. doing the busy work, employees can focus on what matters most.
Before there was ChatGPT, there was Wyatt — an A.I.-powered college adviser designed in 2019 by the nonprofit Benefits Data Trust — one of many innovations that have emerged or evolved over the past few years as A.I.'s advance suddenly kicked into warp speed.
A.I. will save the world. It may also doom it. It could supercharge productivity for an overstretched nonprofit work force — or it might replace human workers altogether. The truth is, nobody knows for sure what comes next for a technology that is already changing the way we work and live — and is expected to replace some white-collar jobs. Yet experts say A.I. could eventually help burned-out employees focus more on what matters most — the tasks that require a human touch — while the robots take care of the busy work.
“It’s about getting the A.I. to do what it does best — rote tasks — and freeing up people to do what people do best,” says Allison Fine, president of Every.org and co-author of “The Smart Nonprofit: Staying Human-Centered in an Automated World.” It’s this kind of “co-botting” that will likely come to dominate the “future of work,” she says.
In other words, if Wyatt can handle the most mundane of tasks — like nudging students to fill out their forms on time or answering routine questions — then school counselors, who advise upward of 400 students a year, will have more time to focus on one-on-one care. And a fundraiser who uses A.I. to streamline donation processing will have more time to give donors the kind of individualized attention that cannot be automated.
“I don’t want to get lunch with a robot,” says Fine. “If we use A.I. badly and we make people feel less connected to other human beings — it will be a tragedy.”
A.I. Isn’t Good at Everything (Yet)
The last several years have seen a surge in funding for artificial intelligence, an umbrella term for a range of technologies that can mimic human capacity for language, analysis, and learning.
Google.org, the tech giant’s philanthropic arm, has given more than $100 million in grants for A.I. projects. The American Red Cross already has more than 20 A.I. projects in the works, including disaster-response chatbots and algorithms that assess damage in disaster-stricken communities. Hundreds of tech companies — including Salesforce and Microsoft — are integrating A.I. into the tools nonprofits already use every day.
The rise of A.I., and especially generative A.I. — the deep-learning models, trained on vast swaths of internet data, that power tools like ChatGPT — has led to widespread fears that robots could soon replace workers across the economy.
Indeed, around one in five Americans works in a job considered “most exposed” to A.I., according to a report by the Pew Research Center, meaning that many of their important tasks may be replaced or automated entirely. Many “most exposed” jobs — including web developers, technical writers, and budget analysts — are in higher-paying fields and currently require high levels of education.
That exposure doesn’t necessarily mean those jobs will disappear entirely — in many cases, A.I. may simply change how the work is done or free up humans to focus on other tasks. While A.I. excels at repetitive, data-driven tasks, it often lacks the so-called soft skills that humans take for granted. ChatGPT may be able to pass the bar exam, but babies still outscore A.I. when it comes to common sense.
“A.I. is still really stupid,” says Trooper Sanders, CEO of Benefits Data Trust, the nonprofit behind Wyatt and a member of the Biden administration’s National Artificial Intelligence Advisory Committee. “It is not the human brain by any stretch of the imagination.”
Everyday Automation
Instead, Sanders likens A.I. to a really sharp chef’s knife: a powerhouse tool in the kitchen that “can do a lot of great things but can also cause a lot of damage” if used incorrectly.
For nearly two decades, Benefits Data Trust has developed technology-based solutions to address pain points in accessing the country’s public-benefits programs. Low-income Americans lose out on $60 billion a year in benefits like SNAP or Medicaid because of administrative snags, many of which could lend themselves well to automation — a priority also outlined by the White House in its guidance on the technology.
In recent years, with $20 million from MacKenzie Scott and backing from the Gates Foundation and Google.org, the nonprofit has used A.I. to build tools like Wyatt, which Sanders says can help staff save time on administrative tasks and focus more on building positive relationships with the people they serve.
Wyatt, for example, was not built to replace school guidance counselors, who handle students’ sensitive needs; it’s meant to make their lives easier by doing their administrative work for them, says Sanders.
“What A.I. can do is give you back 15 minutes by automating” certain tasks, he says, leaving enrollment specialists more time to help people access public benefits in “as dignified a way as possible.”
The Chatbots Will See You Now
Some jobs may one day disappear because of A.I. The virtual assembly line of data analysts, office managers, and others who manage our digital lives could eventually see their work replaced or transformed beyond recognition.
It’s difficult to predict exactly which jobs A.I. may replace — or inspire. Around 60 percent of careers common today — like social-media managers and software developers — did not exist in 1940.
Still, in much of the nonprofit world, where empathy, social connection, and adaptability are often paramount, experts say A.I. is unlikely to be an adequate replacement for people anytime soon.
That hasn’t stopped some nonprofits from trying.
Last March, the National Eating Disorders Association, or NEDA, announced it would shut down its two-decade-old helpline — just two weeks after staff voted to unionize — and focus instead on promoting Tessa, a wellness chatbot.
Tessa was a disaster. In June, it was taken offline after making headlines for giving users dangerous weight-loss advice. NEDA, which once served more than 70,000 people a year, no longer offers a helpline, either chatbot- or human-run.
Abbie Harper, one of four staffers laid off by NEDA, recalls warning the group about the dangers of transitioning to a chatbot that can’t “provide the empathy and validation” needed for such sensitive work.
“It is so hard to reach out for help,” says Harper, who, like other helpline staff and volunteers, often relied on her own experience with eating disorders to connect with callers.
“If you do reach out, and that’s what you get — it just broke our hearts,” she says.
In A.I. We Trust?
If nonprofits move to integrate A.I. too quickly, they could jeopardize their relationships with donors, beneficiaries, and staff — or undermine their missions altogether.
More than half of Americans say they feel more concerned than excited about A.I., compared with only 10 percent who feel more excited than concerned, according to the Pew Research Center.
While a majority of Americans still place a high degree of trust in nonprofits compared with other institutions, organizations risk eroding that trust if they adopt A.I. without proper safeguards and transparency.
“If you’re here to help people, scaring them is not going to do anybody any good,” says Michael Jacobs, sustainability and social-innovation leader at IBM, where he heads a $30 million initiative for A.I.-powered philanthropy.
NEDA is not the only nonprofit whose foray into A.I. has become a cautionary tale. Crisis Text Line, a nonprofit hotline, came under fire in 2022 for sharing user information with a for-profit spinoff.
A.I. algorithms, already widely used in résumé screening, have also been found to discriminate against women and people of color in hiring.
It’s true that A.I. can do a lot of good, but “you have to keep humans in the loop” and be mindful of the risks, says Yolanda Botti-Lodovico, policy and advocacy lead at the Patrick J. McGovern Foundation, whose grant making focuses on A.I. and data science.
“We’re starting to see what can happen when we combine the capacities of A.I. with the human-centered approach of nonprofits,” says Botti-Lodovico, who stresses the importance of maintaining transparency and training staff to take full — and ethical — advantage of new A.I. tools.
“We need leaders who really fundamentally understand how to combine the uniquely human talents and qualities that their employees bring to the table — things like empathy, passion, creativity, and human inspiration — with the visionary capabilities of technology,” she says.