My colleague always takes time to write thoughtful notes to donors to our university, and I am moved by the effort she puts into it. She takes the time to learn the names of each donor, and she researches their stories to craft each note.
I remember watching her write to a long-time donor who had recently passed away. She wrote about how much his donations had meant to the university, and how his legacy would live on for generations to come …
Until those last two sentences, you were probably appreciating this essay’s personal tone and clear writing. Now you’re likely wondering why the Chronicle of Philanthropy’s copy editor didn’t catch an obvious mistake: Why was my colleague writing to someone who was dead?
Well, artificial intelligence isn’t perfect. But considering the minimal effort I put in, it isn’t bad. To create the opening for this piece, I instructed the new text-generating artificial-intelligence system ChatGPT to “write a short, touching anecdote about watching a colleague who takes time to write thoughtful notes to donors to our university.”
Here’s the rest of the story ChatGPT wrote about my pretend colleague’s correspondence with the deceased donor: “She wrote about how his generosity had touched the lives of so many students, and how his memory would be kept alive in the university. I was so moved by her words and her effort to craft such an intimate and thoughtful tribute to this generous donor. It reminded me that our work has the power to touch lives in ways we never expected.”
Aside from that one fatal giveaway, the overall effect is remarkably human. There’s pathos, a throughline, and embellishments that I doubt I would have come up with.
This was a fun exercise, but it was also eye-opening. It demonstrated to me the potential of language applications such as ChatGPT to change philanthropy in significant ways. Donor communications, proposal writing, giving profiles, donation reporting, press releases — and on the other end, philanthropic funding decisions — will, for better or worse, increasingly be influenced by A.I.
Although I now work in the advancement department of the University of Utah, I started my career as a data scientist. It was common in the field to look down on those who were overly enamored of A.I.’s potential to either destroy the world or build a better one. To many of us, A.I. was just a glorified form of what’s known as “curve fitting” — it could find patterns in data as any statistician would, but it wasn’t capable of higher intelligence. Early text models bore this out, producing paragraphs that were barely cogent and failed the Turing test, the method devised by the renowned computer scientist Alan Turing to distinguish machines from people.
More recently, the scope and scale of these models have given them a sophistication and set of capabilities that once seemed impossible. And while human-level A.I. is still far away, recent advances have shown that higher-order thinking is not necessary for some of its most important applications.
What does this mean for philanthropy? On the fundraising side, A.I. will create efficiencies and simplify work that was previously done by experts, such as gathering information about potential donors and developing marketing campaigns.
ChatGPT is still in a review phase, but anyone can sign up and test it for free. If the actual costs are reasonable — a big “if” considering the computing power required — it’s easy to imagine small nonprofits creating funding applications on par with major nonprofits and large, sophisticated universities, such as the one where I work.
A.I. will likely reduce the cost per dollar raised, especially for smaller, scrappier organizations that don’t have resources to communicate regularly with donors. It will help with bookkeeping, research, hiring, and even more abstract tasks, such as designing a theory of change and demonstrating a program’s impact. The overall effect could be one of leveling the playing field, giving historically marginalized organizations the resources to compete for funding against the biggest and best-endowed nonprofits.
The downsides of embracing ever-more-powerful A.I. are harder to predict. As many on social media have discovered, ChatGPT can be both confidently wrong and oddly persuasive in its incorrect reasoning. While the examples tend toward the inconsequential and usually hinge on the wording of the prompt, this is a worrying bug.
Finding Beneficiaries
Donors will also rely more on A.I. to achieve their philanthropic goals. This will likely begin with consulting firms using it to help foundations select from a sea of potential grant recipients. They could use prompts such as “find all the Hispanic-serving institutions with below-average graduation rates that are being led by a progressive president.” Even a small variation in the prompt — or of more concern, the underlying data — could give results that are ostensibly correct but missing the most impactful, deserving, or needy organizations. In this case, ChatGPT spewed out a list of 10 universities, but some quick fact-checking revealed that the information provided was, in several cases, either wrong or incomplete.
Research shows that algorithms often behave badly when they are based on fallible, biased data. “Big Data processes codify the past. They do not invent the future,” wrote Cathy O’Neil in her book Weapons of Math Destruction.
Human Touch Is Essential
The danger is that consultants, donors, and fundraisers will rush in to use the technology before its limitations are understood. To ensure philanthropy benefits from A.I. without exacerbating its potential for harm, a thorough and uniquely human approach to monitoring the technology will be needed. O’Neil advocates for ethical audits that uncover bias and discrimination in these types of systems — a difficult task given the billions of data inputs and unpredictable reactions from ChatGPT and other new systems.
I put the challenge to ChatGPT, and it produced a completely reasonable and valuable response: “There must be a strong focus on using A.I. for the public good and avoiding the pitfalls of using it to amplify existing inequities. This will require a holistic strategy that focuses on A.I. education, responsible use of data, and access to technology and resources for underserved and disadvantaged communities. Additionally, there will need to be clear guidelines and ethical frameworks for A.I. use in philanthropy, and an emphasis on transparency and accountability. Finally, it will be important to ensure that those making decisions about A.I. use in philanthropy come from diverse backgrounds and are aware of the potential consequences of their decisions.”
Ironically, A.I. can write these guidelines, but it cannot implement them. It will be up to the humans in the nonprofit world to tackle the issues of transparency, accountability, and diversity that ChatGPT so accurately described. Much like the fundraiser it conjured who wrote thoughtful, personal letters to every donor, getting this right will require a human-centered and heartfelt approach.