Months after an earthquake rocked Nepal, the humanitarian group Direct Relief posted online an accounting of how it had used $5.5 million in donations to help the country. Titled “Where’d the Money Go?” the report included a link to an interactive map documenting the charity’s aid disbursements.
“Not often do you have a charitable organization that is so straightforward and honest enough to share details,” declared a donor on the group’s website. “Hats off to you.”
The exchange approaches the utopian donor-charity relationship that many envision — nonprofits supply substantive information about their work and earn funding based on evidence of impact.
That dream is far from reality.
“We’re still relatively early in considering what this would look like,” says Katherina Rosqueta, executive director of the Center for High Impact Philanthropy at the University of Pennsylvania. “The tools available are still pretty limited.”
Still, advocates of results-focused giving are working to build out their vision. GuideStar recently introduced a “platinum” seal, its highest recognition, for nonprofits that provide at least some information showing how they measure results. Charity Navigator plans to roll out a similar tool this year. Dan Pallotta of the Charity Defense Council has called for nonprofits to collectively invest about $500 million to build a “user-friendly, iTunes-like” database to display charity results “on a massive scale,” accessible to every American who wants to give.
Individually, many groups like Direct Relief are producing “just the facts, ma’am”-style reports that target donors’ reason more than their emotions. Year Up, a fast-growing charity that provides career training and corporate internships to young adults, issues a Wall Street-style prospectus for its capital campaigns that details its growth and impact plans for the next five years.
Fighting the Overhead Myth
The push to promote results is partly a reaction to concerns that charities are too often judged by their spending and overhead. In such calculations, groups with low overhead are often deemed the most effective.
GuideStar, Charity Navigator, and BBB Wise Giving Alliance, another charity-ratings service, have in recent years fought to debunk the “overhead myth.” They are encouraging charities to give donors something else to focus on, including explicit data and information about their performance and impact.
At the same time, many fundraisers are concluding that donors want to know exactly what their money pays for and what impact it delivers. Direct Relief has in recent years emphasized good results in communiqués like its Nepal earthquake report. Contributions from individuals have grown fourfold since 2009, to $32 million. “It seems to be what people want these days,” says CEO Thomas Tighe.
Amanda Seller, vice president for revenue at the International Rescue Committee, says precepts of giving will change even more as younger supporters come of age. Donors in their 30s aren’t satisfied with light-on-details appeals promising easy answers, she says.
“Unlike the baby boomers, they’re not going to have a huge, built-in optimism about the state of the world,” Ms. Seller says. “They know the world is a tough and difficult and complicated place. If we’re going to engage that generation in philanthropic giving, they’re going to need a more sophisticated proposition.”
Donors and Impact
Despite the movement coalescing around results-focused fundraising, research hasn’t definitively confirmed that donors actually care about impact. A recent study by economists Dean Karlan and Daniel Wood tested this idea through a mailing to two sets of donors by an international development organization. The letter to one group included data documenting success by “independent researchers” who had conducted “rigorous impact studies.” That language increased the likelihood of giving by large donors but “turned off” small donors, the study found.
Caroline Fiennes, founder of the British group Giving Evidence and a leading advocate of charitable giving based on proven results, says there’s little research yet on how donors respond to reports of impact. “We all have ideas, but there have only been maybe one or two proper studies of whether evidence of effectiveness influences donors.”
The results movement also has not yet settled on how success should be defined. Ms. Fiennes says charities are not equipped to do the kind of rigorous evaluation needed to illustrate impact. “When most charities report their results, it’s total garbage,” she says. “They have neither the skills nor the resources nor the incentive to do it properly.”
Most of the more than 1,400 groups that have earned GuideStar’s platinum seal are reporting measures of outputs, not results, according to GuideStar. For instance, the Urgent Action Fund, which provides rapid-response grants to women’s-rights groups worldwide, lists the number of grants it awards and their total dollar value, not their impact, says Caitlin Stanton, director of partnerships. “They are actually process indicators rather than results indicators, but in this context, they may be more easily translatable.”
The field as a whole is only just beginning to study and develop metrics, says Eva Nico, GuideStar’s senior director of programs. Still, she says collecting and reporting outputs is a critical first step, particularly for organizations new to thinking about data: “If you’re not measuring outputs, you’re nowhere on the journey to outcomes.”
Charity Navigator 3.0
This summer, Charity Navigator hopes to release an improved version of its ratings, dubbed CN 3.0. It will incorporate information on how nonprofits measure their impact, but the organization has yet to define how.
The watchdog group awards zero to four stars to more than 8,000 charities based on measures of financial health plus accountability and transparency. Though it announced its goal of adding a results dimension in January 2013, it put the effort on hold when early data collection showed that most charities were not measuring impact.
The new rating will likely involve collecting information directly from nonprofits or aggregating it from other entities, like GuideStar, that are already gathering more qualitative data.
Collecting information on how nonprofits measure results in a consistent and common manner is challenging, says Michael Thatcher, Charity Navigator’s president. “We don’t have that nice, crisp single number that we can point to.” And a one-size-fits-all approach won’t work when nonprofits are as diverse as food banks, hospitals, and museums.
For now, Charity Navigator staff are talking with community foundations, academics, and other groups about ways to collaborate. “We don’t want to replicate any existing efforts, and we don’t want to create redundant efforts for the charities,” Mr. Thatcher says. “I’m slowing us down a little bit, but I think it’s the right approach.”