Opinion
March 10, 2013

As Nonprofit ‘Research’ Proliferates, It Must Be Viewed With Healthy Skepticism

In the last decade or so, organizations and academic institutions doing what is billed as “research” on philanthropy have proliferated.

That is, by and large, a good thing, and I have been heartened at the Center for Effective Philanthropy by how hungry foundation leaders are for pragmatic information about what it takes to do a better job. But the rise of so many research groups increases the importance of understanding what—in the slew of reports and articles being e-mailed, tweeted, and otherwise distributed—is based on careful data gathering and analysis and what is not.

This matters because nonprofit leaders are looking at what is published to inform—and change—their practices. So those of us putting out work purporting to say how nonprofit and foundation workers should do their jobs had better be darn sure we are getting it right.

But I am not sure that is happening. Much of today’s nonprofit “research” is not as rigorous as it should be, and that leaves readers with no choice but to be more discerning.

Here are five simple questions everyone should ask about nonprofit research:

What was the methodology used? Shockingly, many reports masquerading as “research” disclose little or nothing about the approach used to gather data or reach conclusions.

The methodology matters. A survey sample selected to be representative of a population is more reliable than a sample based on an open invitation, which is little better than the “text your vote” polls run by television news shows. Yet reputable organizations, such as the consulting firm McKinsey & Company, conduct surveys of nonprofits by sending e-mails seeking participants.

Too often it isn’t clear what approach was taken, nor is the bias introduced by haphazard approaches acknowledged.

If we learned anything from Nate Silver, the New York Times guru on election statistics, it’s that some survey samples are better than others for gauging the views of a large population.
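To make the sampling point concrete, here is a minimal, purely illustrative Python sketch; the population size, the 50 percent satisfaction rate, and the response propensities are invented for the example, not drawn from any survey mentioned here. It shows how an opt-in sample can badly misstate what a random sample from the same population would find.

```python
import random

random.seed(1)

# Hypothetical population of 10,000 nonprofits. Each has a true opinion
# (satisfied or not) and a propensity to answer an open e-mail invitation.
population = []
for _ in range(10_000):
    satisfied = random.random() < 0.50  # assume the true rate is 50%
    # Assume dissatisfied organizations are keener to speak up.
    responds_to_invite = random.random() < (0.10 if satisfied else 0.30)
    population.append((satisfied, responds_to_invite))

# Representative approach: a simple random sample of 500 organizations.
random_sample = random.sample(population, 500)
print("random sample:", sum(s for s, _ in random_sample) / len(random_sample))

# Open-invitation approach: whoever chooses to respond is the sample.
opt_in_sample = [s for s, responds in population if responds]
print("opt-in sample:", sum(opt_in_sample) / len(opt_in_sample))
```

Run as written, the random sample lands near the true 50 percent figure while the opt-in sample lands near 25 percent, even though both came from the same population; that gap is the self-selection bias a report’s methodology section should own up to.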

In addition, quantitative studies should discuss the statistical approaches used and indicate whether any differences being reported are big enough to matter.
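As a hypothetical illustration of that point (the counts below are invented, not taken from any study discussed here), a careful report might show a calculation like this one, which asks both whether a reported gap between two groups is distinguishable from noise and whether it is large enough to matter.

```python
import math

# Invented example: 62 of 120 small nonprofits versus 70 of 125 large
# nonprofits report having a written fundraising plan.
x1, n1 = 62, 120
x2, n2 = 70, 125

p1, p2 = x1 / n1, x2 / n2
diff = p2 - p1

# Standard error of the difference between two proportions and a rough
# 95% confidence interval using the normal approximation.
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
low, high = diff - 1.96 * se, diff + 1.96 * se

print(f"difference: {diff:.1%}, 95% CI: {low:.1%} to {high:.1%}")
# If the interval comfortably straddles zero, the gap may be noise; and even
# a "significant" difference of a point or two may not matter in practice.
```

With these invented numbers, the gap is about four percentage points, with an interval running from roughly minus 8 to plus 17 points, which is exactly the kind of detail worth disclosing rather than simply declaring that one group does better.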

Studies based on qualitative data should also disclose analytic approaches. Too many studies are based on a set of interviews with no details about the analysis: The reader is asked simply to trust that the findings were not cherry-picked to support the author’s preconceived views.

Is the conclusion warranted? Too often I see reports leap from a legitimate, data-based research finding to an overreaching conclusion, implication, or “practical step.”

The temptation is always there, of course, to jump to a conclusion that may not stand up to scrutiny. That’s especially true for those in pursuit of media coverage of their research reports. (And who isn’t?) And the fact is, many readers understandably want it all boiled down to something easy to process. But we need to be careful.

One simple example is the 2011 “Daring to Lead” study, jointly produced by CompassPoint Nonprofit Services and the Meyer Foundation, which includes some important data about nonprofit leadership and succession. But it concludes that “many boards of directors are underprepared to select and support new leaders,” citing as evidence that “just 17 percent of organizations have a documented succession plan.”

But, I wonder, is a “documented succession plan” necessarily a good thing to have?

My organization’s board regularly discusses succession and has chosen not to write it all down, for all kinds of good reasons. It’s important not to assume the authors have drawn the right conclusions, no matter their reputation.

Is this really research at all? Authors too often fail to differentiate between making observations and gathering data through original research.

Consulting firms are probably the most notorious offenders here. One example would be the notion of “collective impact,” put forth by FSG and widely discussed among nonprofit leaders seeking to pull together business, government, and other players to advance new ways to improve society. Examples cited in the FSG consultants’ own 2011 Stanford Social Innovation Review article on collective impact date back decades, suggesting the practice has been around a while. Yet a more recent article in the same publication, by the same authors, discusses collective impact as a “new and more effective process for social change” that is “upending conventional wisdom.”

The authors of that article refer to findings from “our research and our consulting,” suggesting they have evidence to bolster their claims of effectiveness.

I don’t doubt that practices FSG promotes are often helpful, but where are the descriptions of the research they cite? It’s difficult for the reader to know whether collective impact is a promising idea, a proven approach, a historical fact, or some mix of those things.

Has other relevant research been done on this topic? Too often, authors of reports fail to build on the work of others.

Those conducting research should feel obliged to understand what related research has been conducted; a good report will put new findings in the context of previous ones.

Too much of what is put out about nonprofits and foundations makes it seem as if the world was created yesterday, with no acknowledgment of what was done before.

When we don’t make these connections, we lose the chance to see whether our findings confirm those of other studies, thereby increasing confidence in their accuracy, or whether they diverge.

Some of the best research on nonprofits is sometimes overlooked by those studying the same topic.

One example: The Johns Hopkins Center for Civil Society Studies has conducted numerous well-done, rigorous studies on key issues facing charitable organizations, yet its research is cited much less frequently than it should be—I have no idea why.

Who paid for it? Too often, it’s hard to identify the interests, donors, and potential conflicts of a report’s authors.

At the Center for Effective Philanthropy, we do research about foundations, receive grants from foundations, and try to get foundations to use our assessment tools to learn and improve.

Our mantra is to be true to the data, no matter what it shows—but readers of our work have a right to know where we get our money and who our clients are, and to judge for themselves whether our work seems influenced by those who make grants to us or hire our organization to study them.

I have confidence that what we’re putting out stands up to scrutiny, but if readers decide that we are hopelessly conflicted, that should be their right.

So we maintain on our Web site a list of all donors by the amount they give as well as a list of those who use our assessment tools.

If we received project support for a particular effort, we make that clear, too, for everyone to see. This seems basic, but some organizations don’t do it.

These questions are just a start. I know we at the Center for Effective Philanthropy are not perfect. But there’s no question we do our work more thoughtfully and rigorously now than we did in our first couple of years, when we lacked staff with the right skills. I don’t possess those skills, but once I realized I was out of my depth—about a year into my job (read: one year too late)—I hired people who do.

And of the mistakes I have outlined, I think our biggest has been that in our early years we did not pay enough attention to the research that had been done—or was being done—by others.

Although those with formal training should and often do know better, those taking shortcuts come with all kinds of degrees—MBAs, PhDs, you name it. Some are even tenured faculty at the most prestigious colleges and universities in the country.

The nonprofit world’s work is too important to get this stuff wrong.

If organizations putting out work purporting to be research don’t step up their game, then we all need to become much, much more discerning—and less credulous—readers.

 

Phil Buchanan is president of the Center for Effective Philanthropy and a regular columnist for The Chronicle of Philanthropy.