Many of us in the nonprofit world cheered when the watchdog group Charity Navigator threw in the towel on an evaluation system that focused narrowly on the percentage of expenses a charity spends on overhead—a naïve indicator of nonprofit efficiency and effectiveness.
Now a new movement is under way, in the form of a cornucopia of nonprofit ranking and assessment tools and Web sites. These rating sites choose favorites, award gold stars (or their equivalent), and declare victory as they anoint charities “impactful” and worthy of a donor’s investment. But it is far from clear that the new systems are any better than the ones they seek to replace, and both donors and nonprofits need to beware of the potential trouble spots. A better approach would be to provide donors with a truly meaningful blend of information about an organization’s leadership, direction, revenue model, capital needs, and program results.
The aspirations of the ratings movement are well intentioned. That philanthropists should focus less on a handful of financial measures and more on the connection between investment dollars and program results is a given. Most people would also agree that donors will benefit from new sites that make it easier to compare charities and that aggregate critical data in one place.
Encouraging openness among nonprofit organizations is a critical goal for anyone who cares about improving philanthropy. Charity Navigator’s recently announced efforts to catalog whether nonprofits make information easily available on their governance and ethical practices, finances, effectiveness, and results are one step in the right direction.
Equally important is the growing interest in more comprehensive information about financial health.
While financial strength does not guarantee nonprofit effectiveness, financially strong organizations are far more likely to have the leadership, staffing, systems, and working capital in place to support successful programs. They stand a better chance of recovering from mistakes and of successfully pursuing new, often risky opportunities.
But little attention has been focused on the flaws in the new approaches to ranking charities. A review of some of the more prominent ratings systems raises more questions than it answers. While the rating models are different—some offer Morningstar-style reports based on an array of criteria while others depend on votes from experts or reviewers—certain widespread trends raise concern.
Among the most prominent weaknesses in the new systems:
Selection bias leads to herd mentality. In selecting “whom they know” (or whom their “experts” know), many ratings sites are rallying potential donors around a small number of nonprofits from a handful of arenas that may or may not have the best solution for a given social challenge or opportunity. At worst, it can seem at times like a popularity contest. How will this bias affect highly effective charities that don’t ever make it to the ratings table?
“Expert” affiliations aren’t always disclosed. Some of the new sites are cloaked in too much secrecy. We need to do a better job of disclosing potential conflicts of interest. Donors should demand that anointed experts disclose any relationships they have with nonprofits they might be rating—board participation, volunteer association, grant maker, consultant, or the like. Anonymity should not be an option.
Many new measures are arbitrary, inconsistent, and misleading. New measurements of financial condition are, thankfully, replacing the old rating gold standard: the comparison of spending on overhead and fund raising versus a charity’s direct efforts to carry out its mission. But new criteria that lead ratings sites to claim “cost effectiveness,” “scalability,” and “financial sustainability”—to name just a few frequently cited in the rankings—are often no better indicators of nonprofit effectiveness than the overhead comparisons.
In some cases, the criteria used for a specific assessment aren’t disclosed so comparability across organizations is questionable. In other cases, the definitions and standards are not grounded in a nuanced understanding of nonprofit economics: They fail to account for the variety of sound business models, capital structures, and investment requirements.
Indeed, while Charity Navigator has expanded its ratings methodology, it continues to score organizations on “efficiency” and “capacity” using how much a group spends in specific categories and its growth rates, measures that do not take into account the individual circumstances that might lead a charity to expand or scale back its operations to improve its quality and effectiveness.
Other charity-watchdog sites do no better. Alarm bells went off in my head when I was reading one site’s description of whether a nonprofit is “overfunded” because of large amounts of reserves: “Large accumulations of reserves or stagnant expenses may serve as warnings about a charity’s ability to productively use additional funds,” it said.
These are not the standards we want to set for high-performing organizations that may have made the choice to improve the quality of their services while they build up their income to cover growth and change in the years to come.
Certainly, commonly used measures like “months of cash” available or “cost to serve” a client do help tell an organization’s story and should be considered as part of any donor’s decision. But context matters, and rigid determinations often lead to suspect judgments.
For example, compare a charter school that serves low-income students in a troubled neighborhood to a private school that serves wealthy students with a devoted pool of alumni. How much cash should each of these organizations maintain on hand? Should the wealthier school have the higher ranking because it has six months of cash, while its poorer peer gets by on two? What does a more “cost effective” school that educates students for $10,000 versus $15,000 really tell us about the quality of the education or the population being served? When we base donation decisions on arbitrary yardsticks, rather than take the time to do a comprehensive analysis of the context, risk, and opportunities facing each nonprofit, we do ourselves and nonprofits a huge disservice.
Perhaps instead of trying to improve rating systems, nonprofits and outside experts on philanthropy should examine whether offering gold stars or other scores will ever really help donors make better giving decisions.
The much-touted annual college rankings do not necessarily lead young people to make the right decisions about where to attend college. Nor is it necessarily “better” to finance a highly rated charity five states away than it is to get involved as a donor, volunteer, or board member with a neighborhood program that provides critical support in a donor’s own community.
So why the rush to rate and rank? Why not provide information and let donors decide?
Financial-performance indicators should be one tool donors examine, along with measures of progress in managing operations and efforts to carry out an organization’s mission. Watered-down scores (and the judgments they lead to) are meaningless without context and analysis. Carefully chosen trends and ratios can help assess a nonprofit’s financial strengths and weaknesses; no one “magic number” can take the place of a full range of strong data.
A sophisticated consumer will look beyond a simple rating and ask data-driven questions about a broad range of ingredients that lead to success in achieving an organization’s mission.
Certain kinds of comparisons can also play a useful role in evaluating organizations. Online platforms like the Cultural Data Project provide nonprofits and grant makers with comprehensive multiyear financial and program information in a standard form, for use in individual organization or peer-group analysis. Understanding the similarities and differences of a group of nonprofits that work on the same mission can guide donors to ask good questions and help them better understand common challenges and opportunities.
If rating efforts are to do more good than harm, they will require:

- Greater sophistication and consistency about what we’re measuring and why.
- Greater transparency about who is doing the measuring and evaluating.
- Candor about the limitations of what we can achieve through any ranking system.
Finally, and most important: as we rush to rate organizations, we need to be mindful of the many highly effective nonprofit groups we may overlook in the process.