More Nonprofits Adopt Scientific Approach to Testing Whether Social Programs Work

By Nicole Wallace
February 24, 2014

Working in rural Cambodia, an aid organization knew that increasing the number of latrines was critical to cut the incidence of diarrhea, a leading cause of death for young children. But it wanted to know if more households would buy its low-cost latrines if it helped families finance the purchases.

So the nonprofit, International Development Enterprises, conducted an experiment.

In 30 similar villages, the charity randomly selected households without latrines. Sales teams offered people in half of the villages one-year loans to pay for latrines, while people in the other villages got the charity’s standard offer, which required cash on delivery.


The results were impressive: Four times as many people bought latrines when offered the loan as when offered the standard cash-on-delivery terms.

What’s more, the study found that increasing the number of sales in each village made the program more cost-effective to operate.
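
For readers who want to see how such a comparison is tallied, here is a minimal sketch in Python. The per-village purchase rates and group sizes below are invented for illustration; they are not figures from the International Development Enterprises study.

```python
from statistics import mean

# Hypothetical per-village purchase rates (the share of offered households that
# bought a latrine), with 15 villages randomly assigned to each offer. These
# numbers are invented for illustration, not taken from the study.
loan_offer_rates = [0.28, 0.31, 0.25, 0.35, 0.30, 0.27, 0.33, 0.29,
                    0.26, 0.32, 0.34, 0.24, 0.30, 0.28, 0.31]
cash_offer_rates = [0.07, 0.08, 0.06, 0.09, 0.07, 0.08, 0.05, 0.07,
                    0.08, 0.06, 0.09, 0.07, 0.08, 0.06, 0.07]

print(f"Loan-offer villages: {mean(loan_offer_rates):.1%} bought a latrine")
print(f"Cash-offer villages: {mean(cash_offer_rates):.1%} bought a latrine")
print(f"Uplift: {mean(loan_offer_rates) / mean(cash_offer_rates):.1f}x")
```

Because the villages were assigned to the two offers at random, a gap this large can be attributed to the loan offer itself rather than to differences among the villages.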

International Development Enterprises hopes the rigorous study will help it raise more money to expand the program and influence other organizations doing sanitation work, says Stu Taylor, the group’s director for performance measurement. “When you have a paper like this in your pocket that you can pull out to advocate for a particular approach,” he says, “that just really strengthens your message.”

Randomized, controlled trials, like the one run by International Development Enterprises, are getting a lot of attention as the nonprofit world increases its focus on using data to make decisions and demonstrate results.

Gold Standard

The approach, borrowed from clinical drug trials, requires that organizations select at random a group of people to participate in a program and a similar group who will not. The idea is that evaluators will be able to pinpoint the impact that can be attributed to the program rather than to other factors.

But the trials have their drawbacks. They’re difficult and expensive, and don’t make sense for every program.

Some charities argue that helping some people but not others presents an ethical dilemma. The trials’ supporters counter that there’s never enough money to aid everyone and that selecting beneficiaries at random is the fairest way to apportion assistance.

In addition, some nonprofit leaders charge that charities are selective about which trials they share, hiding studies with less-than-glowing results.

Well-designed trials test concepts in a way that advances an entire field, not just the particular program being studied, says Rachel Glennerster, head of the Abdul Latif Jameel Poverty Action Lab, a center at the Massachusetts Institute of Technology.

Sometimes the trials upset conventional wisdom, she says. In a series of studies in which researchers charged a small fee for disease-prevention products, such as anti-malarial bed nets, overall demand for the products decreased, and people who bought them were no more likely to use them than those who received them free.

“We have to think of this as a public good and something that many other programs will learn from,” says Ms. Glennerster. “That’s the context in which it’s valuable.”

Hiding Bad Results

But too often, nonprofit organizations hide data on failed strategies, says Paul Niehaus, co-founder of GiveDirectly.

His organization fights poverty in Kenya and Uganda by giving very poor people money to spend as they see fit. From the beginning, it designed its program as a randomized, controlled trial, selecting participants and members of a control group at random and hiring an independent evaluator to measure the program’s impact.

Before the test began, the fledgling nonprofit announced the trial publicly to National Public Radio and GiveWell, the charity-ratings group.

“We had very little cover if the results had come in bad, if they had been embarrassing,” says Mr. Niehaus. “We stuck our necks out quite a bit.”

It turns out that GiveDirectly had nothing to worry about. The organization’s trial found that recipients’ income and assets increased compared with those of the control group.

But Mr. Niehaus says many charities aren’t willing to take that kind of chance, waiting instead to see the results first. If they’re good, they get published, he says, but if the results are bad, “they just get buried and nobody ever hears about it.”

“There are a lot of organizations out there right now that are thought of as being very effective and very impactful that have done [trials] with academics who I know personally and have kept those results hidden because they’re actually quite unflattering,” says Mr. Niehaus.

Big vs. Small Issues

Nonprofits and researchers, experts say, sometimes tussle over what to study: whether to focus on big issues—like whether an approach works—or more granular questions about how a program is run.

Randomized, controlled trials can be extremely helpful when charities need to decide among approaches to carry out their missions, says Neil Buddy Shah, a co-founder of IDinsight, the nonprofit evaluation group that helped conduct the International Development Enterprises study of loans for latrines.

“You can be much more confident about the big decisions that you have to make based on evidence as opposed to kind of guessing,” he says.

Because the research in Cambodia was paid for by the Bill & Melinda Gates Foundation, International Development Enterprises was able to design the tests to focus on exactly the questions it wanted to answer, says Mr. Taylor. For example, it is now testing ways to use subsidies to help the poorest households get latrines without decreasing other residents’ willingness to pay for them.

But when a charity works with a university scholar, says Mr. Taylor, such researchers are “interested in publishing and in answering some of the big questions. So these nitty-gritty operational questions are not as much of interest.”

Other Options

Despite their popularity, randomized, controlled trials are not the only approach, say evaluation experts.

For instance, three trials have shown that the Nurse-Family Partnership, a program in which nurses visit vulnerable, low-income women during their first pregnancy and until the child turns 2, improves children’s health, reduces child abuse, and helps families become more self-sufficient.

But the testing doesn’t stop there. Nurses carefully record information about participants and visits, which the charity monitors to make sure the program is carried out consistently, to determine which sites need more help, and to identify what approaches would be smart for everyone to adopt.

One of the measures that the Nurse-Family Partnership examines: how well each local unit does at keeping mothers in the program until their babies become toddlers.

The program found that in a number of cities with high retention rates, nurses regularly took photographs of the infants and gave the moms prints during their next visit, says Nancy Botiller, chief operating officer at the charity’s national service office.

“That is a huge retention strategy,” she says. “The mom keeps her appointment because she knows she’s going to get that picture.”
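
As a rough illustration of that kind of site-level monitoring, a retention report might be tallied along the lines of the sketch below. The site names, enrollment counts, and the 70 percent flag threshold are hypothetical, not the Nurse-Family Partnership’s actual reporting system.

```python
# Hypothetical site-level retention tally: the share of enrolled mothers at
# each site who stayed in the program until the child turned 2. Site names,
# counts, and the flag threshold are invented for illustration.
sites = [
    {"site": "Site A", "enrolled": 120, "retained_to_age_2": 96},
    {"site": "Site B", "enrolled": 85,  "retained_to_age_2": 51},
    {"site": "Site C", "enrolled": 143, "retained_to_age_2": 122},
]

for row in sites:
    rate = row["retained_to_age_2"] / row["enrolled"]
    flag = "  <- may need extra support" if rate < 0.70 else ""
    print(f'{row["site"]}: {rate:.0%} retained{flag}')
```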

Surveys and Sampling

Groups that want to measure their impact but can’t afford a randomized, controlled trial are turning to alternatives.

Population Services International, a health organization, is working closely with Harvard University researchers to develop a survey and sampling method it can use to evaluate multiple elements of a program.

For example, the nonprofit could use the survey to determine which parts of a program to reduce HIV/AIDS among sex workers were effective, such as outreach efforts in bars and radio advertisements. The survey method the group had been using allowed it to assess the overall effect of a program but not specific activities.

“Your gold standard is always the randomized, controlled trial, but if you can’t get that—and in most cases, we don’t get it—then this is a great alternative,” says Kim Longfield, the charity’s director of research and metrics. “It is better, obviously, than doing nothing, and it’s a lot better than what we were doing before.”
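
To make the idea concrete, the toy sketch below compares an outcome among survey respondents who were and were not reached by each activity. The variable names and responses are invented, and this naive comparison stands in for, rather than reproduces, the survey and sampling method the organizations are developing.

```python
# Toy sketch: examine specific program activities by comparing a reported
# outcome among respondents who were and were not reached by each one.
# All variable names and responses are invented for illustration.
surveys = [
    # (reached_by_bar_outreach, heard_radio_ad, reported_protective_behavior)
    (1, 1, 1), (0, 1, 0), (1, 0, 1), (0, 1, 1),
    (0, 0, 0), (1, 0, 1), (1, 0, 0), (1, 1, 1),
    (0, 1, 1), (0, 0, 0), (1, 1, 0), (0, 1, 1),
]

def outcome_rate_by_exposure(column: int, label: str) -> None:
    exposed = [r[2] for r in surveys if r[column] == 1]
    unexposed = [r[2] for r in surveys if r[column] == 0]
    print(f"{label}: exposed {sum(exposed) / len(exposed):.0%}, "
          f"unexposed {sum(unexposed) / len(unexposed):.0%}")

outcome_rate_by_exposure(0, "Bar outreach")
outcome_rate_by_exposure(1, "Radio ads")
```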

Do-It-Yourself Study

For a lot of nonprofits, sophisticated survey methods are out of reach. But that doesn’t keep them from trying to document the results of their programs.

Opportunity Junction provides job training to help low-income people become self-sufficient. Many more people than the 20 it can accommodate apply for its seven-month program, so the group chooses participants through a system of interviews and ratings.

When one of the organization’s grant makers asked what happened to the people who don’t get into the program, Opportunity Junction devised a way to try to find out.

For one of its training classes, the group kept in touch with 18 people who nearly made the cut or who were offered a place but turned it down. Over two years, the nonprofit interviewed them in person six times, paying them $25 for each meeting and an additional $5 if they brought in a pay stub.

Alissa Friedman, Opportunity Junction’s executive director, is quick to point out that the comparison group isn’t an “apples-to-apples” match with program participants, but she says the exercise still provided important information.

Only half of the people in the comparison group got a job during the two years. By contrast, all of the people who completed the program found jobs.
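
The arithmetic behind that comparison is simple enough to sketch. The size of the completer group below is assumed for illustration, since the article reports the employment rates and the 18-person comparison group rather than exact counts.

```python
# Back-of-the-envelope version of the comparison described above. The size of
# the completer group is assumed; only the rates are reported in the article.
groups = {
    "Comparison group (near misses)": {"tracked": 18, "found_job": 9},
    "Program completers":             {"tracked": 20, "found_job": 20},
}

for label, g in groups.items():
    print(f"{label}: {g['found_job'] / g['tracked']:.0%} employed within two years")
```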

The results were a morale booster for the charity’s staff, says Ms. Friedman: “People believe that they’re doing good work, but it’s nice to have it actually borne out with that kind of a comparison.”

Nicole Wallace
Nicole Wallace is features editor of the Chronicle of Philanthropy. Follow her on Twitter @NicoleCOP.