When you’re setting out to measure your nonprofit’s programs, there are many factors to consider. Chief among them is what kind of study makes sense for your organization.
Lisbeth B. Schorr, senior fellow at the Center for the Study of Social Policy, says it’s important to customize measurement rather than assume any particular assessment will yield the most helpful information. “You have to be very pragmatic in figuring out what you need in your community,” she says.
Here is a list of questions to help guide your nonprofit as it plans how to evaluate its work.
What assessment best suits our needs?
According to the Abdul Latif Jameel Poverty Action Lab (J-PAL), a network of more than 100 professors who partner with nonprofits to conduct randomized evaluations, these are the most common types of assessments and the purposes they serve:
- A needs assessment uses interviews, focus groups, surveys, and existing data to help a nonprofit describe and understand current problems and solutions. It can focus on a neglected problem, determine whether the conditions for a successful program exist, and determine the right target for an intervention.
- A process evaluation tracks how well a program is carried out. It can help a nonprofit determine whether a new program or a revision of an existing program would better address a need.
- A business-case assessment determines whether a program would work in a different context, examines realistic outcomes, and weighs them against the costs of the proposed program and its alternatives.
- A literature review draws on the results of other relevant programs to inform a nonprofit about how to proceed, while taking into account that some programs are context-specific and unlikely to translate well.
- A randomized, controlled trial (known to scientists as an RCT) allocates subjects at random to either a group receiving an intervention or a control group to conclusively determine the effects of a program. RCTs can be difficult to design properly and expensive to carry out, but some researchers, donors, and policymakers believe these types of experiments, which are used in medical and pharmaceutical research, provide the clearest and most compelling evidence about which programs work. Others, such as Ms. Schorr, argue that there’s no clear hierarchy of research methods, and that programs that are place-based and lack clear causal relationships may not be well-suited for an RCT.
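To make that comparison concrete, here is a minimal sketch of the core logic an RCT relies on: split participants into treatment and control groups at random, then compare the groups’ average outcomes. The participant IDs and outcome scores below are invented for illustration; they do not come from any real study.

```python
# Minimal illustration of an RCT's core comparison, using made-up data.
import random
import statistics

random.seed(1)  # fixed seed so the illustration is reproducible

# Hypothetical pool of 200 participants.
participants = [f"person_{i}" for i in range(200)]
random.shuffle(participants)
treatment = participants[:100]   # randomly chosen to receive the program
control = participants[100:]     # randomly chosen comparison group

# Invented outcome scores measured after the program ends; in a real trial
# these would come from surveys or administrative records.
treatment_set = set(treatment)
outcome = {p: random.gauss(60 if p in treatment_set else 50, 10) for p in participants}

treatment_mean = statistics.mean(outcome[p] for p in treatment)
control_mean = statistics.mean(outcome[p] for p in control)

# Because assignment was random, the difference in group averages estimates
# the program's effect.
print(f"Estimated effect: {treatment_mean - control_mean:.1f} points")
```

A real evaluation would, of course, also report sample sizes and uncertainty, not just a single difference in averages.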
Would the assessment interest an academic?
Working with an academic researcher offers several benefits: expert guidance, the credibility of an independent review, and access to additional funding opportunities.
Not all nonprofits’ work interests professors, though. “If it doesn’t speak to any broader issue or question, it’s probably going to be a tough sell for an academic,” says Paul Niehaus, assistant professor of economics at the University of California at San Diego and president of GiveDirectly, a nonprofit that enables donors to give money directly to impoverished people.
But some organizations tackle problems related to hot topics in the academic sphere and have rich datasets that professors would love to mine.
Groups that serve as matchmakers between nonprofits and academic researchers include Innovations for Poverty Action and J-PAL. IPA enlists academics to conduct RCTs to evaluate social and development programs in many countries. Although J-PAL is based in the economics department of the Massachusetts Institute of Technology, its affiliated professors work at institutions around the world, and they form relationships with nonprofits at conferences and through less-formal outreach efforts. To fund the RCTs, J-PAL’s academic partners work with nonprofits to apply for government or foundation funding.
Another matchmaker is DataKind, which charges a management fee to pair nonprofits with data scientists who work for six months to analyze organizations’ data and make their programs more efficient. DataKind also hosts DataDives, weekend-long gatherings of data scientists who tackle research-related problems for nonprofits. The John S. and James L. Knight Foundation recently gave DataKind $1.1-million to support its efforts.
Mary Ann Bates, deputy director of J-PAL North America, says there are three essential ingredients to forging a good partnership between a nonprofit and an academic: having good data-collection and storage systems in place, being willing to let researchers publish the results regardless of what they find, and being flexible enough to test assumptions, try new methods, and work collaboratively to build an evaluation.
Is a randomized, controlled trial within your reach?
Even if a partnership with an academic researcher isn’t an option, your nonprofit may still be able to run an RCT. These trials don’t have to be complicated or costly, Ms. Bates says.
The nonprofit Coalition for Evidence-Based Policy is studying how to lower the cost of RCTs, in part by running a contest that finances experiments conducted by winning nonprofits. The coalition’s findings suggest nonprofits ask themselves the following questions:
1. If you’re already collecting data, can you incorporate randomization? Making adjustments to the research your nonprofit already does may prove more efficient than starting over from scratch. “The expensive part is collecting data, and often you have to collect data for any kind of research you’re doing,” says Ms. Bates. “It’s more about knowing you have to embed randomization from the beginning.”
The goal of randomization is to ensure a fair comparison between two groups, according to Andrew J. Vickers, attending research methodologist at Memorial Sloan Kettering Cancer Center. Randomization means researchers can’t predict or change the group to which a subject is assigned. If these conditions are not met, the trial risks selection bias: the groups may differ in systematic ways, making it impossible to tell whether differences in outcomes were caused by the program or by who ended up in each group.
As with any experiment, it’s important to avoid contaminating the data. Writing in the Handbook of Practical Program Evaluation, professors Carole J. Torgerson, David J. Torgerson, and Celia A. Taylor recommend that tests be selected, run, and assessed by people who don’t know which subjects are assigned to which groups, to avoid introducing bias.
Using a lottery process or random assignment to select which participants take part in your nonprofit’s programs sets your organization up to run an RCT. (A simple sketch of such a lottery follows this list of questions.)
For more guidance on running an RCT properly, Ms. Torgerson, Mr. Torgerson, and Ms. Taylor recommend referring to the Consolidated Standards of Reporting Trials (CONSORT) checklist and diagram, developed by a group of research experts. These documents explain what information the scientific community expects to be gathered and reported in RCTs.
2. Do the data you need already exist? A nonprofit doesn’t always need to collect new data, Ms. Schorr explains. “Some of the information may be available in administrative data,” she says. For example, “You may just need someone to persuade the board of education to use their data.”
That was the case for Youth Guidance and World Sport Chicago’s Becoming A Man—Sports Edition initiative, which offered mentoring, group counseling, and sports as a way to reduce violent crime among boys in public school.
More people wanted to participate than the program could accommodate, so researchers at the University of Chicago Crime Lab randomly selected which kids would take part. They then looked at state education and crime records—which were already being collected at no cost to the nonprofit—for those students in both the participation and control groups to figure out whether the program was effective.
“Based on that, they were able to very, very credibly say the impact of the program was a 40 percent reduction in the rate of violent crime,” Ms. Bates says.
3. Have you considered the underlying mechanisms? Studying the results of other organizations’ RCTs—or conducting a literature review—can save your nonprofit time and money by revealing concepts that have worked in other situations.
Rather than trying to test all possible approaches with the people your nonprofit serves, Ms. Bates recommends first looking for the “underlying mechanism that’s leading some approaches to work, and some not to work, and some to work better than others.”
For example, she says, a J-PAL analysis of studies conducted in India and Kenya found that remedial tutoring was very effective in helping children learn, suggesting that trying that approach first with a different population, rather than another effort such as buying more textbooks, may yield similar results.
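As promised above, here is a minimal sketch of how an oversubscribed program might run the kind of lottery described in the first two questions. The applicants.csv file, the applicant_id column, and the number of slots are hypothetical placeholders, not a prescribed format; the point is simply that a recorded, random draw both fills the program fairly and creates the comparison group an RCT needs.

```python
# Hypothetical lottery for an oversubscribed program: randomly fill the available
# slots and save every applicant's assignment so outcomes can be compared later.
import csv
import random

SLOTS = 50  # hypothetical number of openings in the program

with open("applicants.csv", newline="") as f:    # assumed file: one row per applicant
    applicants = list(csv.DictReader(f))         # assumed column: applicant_id

rng = random.Random(20240101)  # record the seed so the draw can be audited
rng.shuffle(applicants)

with open("assignments.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["applicant_id", "group"])
    writer.writeheader()
    for i, row in enumerate(applicants):
        group = "program" if i < SLOTS else "control"
        writer.writerow({"applicant_id": row["applicant_id"], "group": group})

# The saved assignments can later be matched against administrative records,
# such as school or public-safety data, to compare outcomes between the groups.
```

Keeping the assignment file, and documenting how the draw was made, is what lets outside researchers verify later that the comparison was truly random.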
Have you thought about ethics?
Research that involves human participants and is supported by federal money is subject to ethical regulations known as the Common Rule. The primary concerns of these regulations are privacy, confidentiality, weighing the risks of an experiment against its benefits, making sure subject selection is fair, and ensuring that subjects know participation is voluntary.
The Common Rule requires proposed research to be approved by an institutional review board made up of scientists and nonscientists qualified to assess experiment proposals. Typically, researchers submit their experiment proposals to an institutional review board after receiving funding.
Some organizations, such as Population Services International, have their own internal review boards that critique proposals from their researchers. Kelly O’Keefe, the manager of the internal review board at the nonprofit, says the benefit of an in-house board is that its members understand the issues relevant to the work its researchers do and can review cases quickly.
The PSI board also reviews cases for other organizations that don’t have their own boards. Ms. O’Keefe says using an external board can have advantages of its own, especially for groups with tight budgets.
The regulatory requirements of each federal agency vary, so groups seeking guidance on federal regulations should contact the department or agency supporting their research.
If your nonprofit is conducting the study in-house, is your staff trained?
The details of conducting research properly can be complicated, and there are several options for ensuring nonprofit staff members are adequately prepared to design and run studies.
For example, researchers at PSI use ethics-training modules from the Collaborative Institutional Training Initiative at the University of Miami, which has an annual subscription fee of $3,000 for nonprofits. The National Institutes of Health Office of Extramural Research also offers a free training program.