Results matter to nearly all types of donors, but especially to grant makers. For nonprofits that are new to tracking and analyzing the effectiveness of their programs, the vocabulary can be a little intimidating. Here we define common terms to help you get a better handle on evaluation.
Logic Model: A flowchart that maps the sequence of events intended to effect change. It describes a nonprofit’s resources, the way it uses them in programs, and the long-term results. Here’s a template provided by United Way of Greater Richmond & Petersburg in Virginia.
Theory of Change: A framework for demonstrating your impact by tying your activities to long-term goals and results. This is the foundation of your ability to measure lasting change, says Jamie Austin, senior director of impact and learning at Tipping Point Community, in the Chronicle webinar “Tips for Demonstrating Impact in Grant Proposals.” Tipping Point is an anti-poverty grant maker in the San Francisco Bay Area that helps nonprofits improve their ability to measure effectiveness.
For example, says Austin, if your end goal is to help unemployed adults get jobs, you might create this hypothesis: By teaching X number of participants certain skills, such as interviewing techniques, through Y number of hours of instruction, we will help Z number of people enter the work force. Theories of change range from one-page infographics to much longer reports, Austin says.
To create a theory of change, groups often start by identifying their desired outcomes, then work backward to map the connection to their programs. Here’s a guide produced by the Annie E. Casey Foundation on creating theories of change.
(The terms “theory of change” and “logic model” are sometimes used to mean the same thing. When mentioned separately, the theory of change often refers to an organization’s overall work, while the logic model may describe the activities of a particular program, Austin says.)
Target population: The people you’re trying to serve. For example, if your nonprofit runs a literacy program, you might target people at a certain reading level, native or non-native speakers, or those in a specific geographic area, he says.
Inputs/Resources: “The human, financial, organizational, and community resources a program has available to direct toward doing the work,” such as money, facilities, and staff members’ skills and time, according to the W.K. Kellogg Foundation.
Activities: “The processes, tools, events, technology, and actions that are an intentional part of the program implementation,” according to the Kellogg Foundation. In other words, your activities are how you’re trying to reach your goals, Austin says.
The literacy program example might involve activities such as online courses, one-on-one tutoring sessions, or classroom instruction. It could also involve a “secret sauce” approach, like working with a local college to design a program tailored to your target population’s needs or geared toward specific outcomes, he says.
Outputs: The immediate results, or direct products, of program activities, such as the number of participants in a program or the total hours of training provided. Outputs convey the volume of your work but not your impact, Austin says. Outputs of the literacy program might include the number of students you’ve trained or the hours they spent in the classroom or online, he adds.
Outcomes: “Specific changes in program participants’ behavior, knowledge, skills, status, and level of functioning,” according to the Kellogg Foundation. The number of participants who found jobs directly as a result of your program is an outcome.
Austin advises nonprofits to focus on the metrics that show whether they’re making progress toward their goals. You can set short-, intermediate-, and long-term outcomes, he says. Here are some examples:
- Short-term: Changes that happen to participants during your program, such as learning to read a prescription or parts of a newspaper, or reaching a third- or fourth-grade reading level.
- Intermediate: Changes that affect clients by the end of your program, like the ability to read and understand an entire newspaper or letters from their kids’ teachers, or to pass a literacy test. These results should predict your long-term outcomes, Austin says.
- Long-term: The changes you ultimately want to see, which are tricky to track and evaluate, he says. Examples include the number of participants who get a job in which they use literacy skills learned in your program or who learn to communicate with teachers by email.
Indicators: Measures of success toward reaching your desired outcomes, such as a pay stub, diploma, or apartment lease. Indicators should be measurable, specific, time-limited, and meaningful. And some are better than others. For instance, if you’re working to improve kindergarten readiness, a weak indicator would be one teacher’s account of how students are doing, Austin says. Drawing on several teachers’ accounts, reviews by trained observers, and clinical assessments would be a stronger approach.
Impact: “Fundamental intended or unintended change occurring in organizations, communities, or systems as a result of program activities,” according to the Kellogg Foundation. Or, as Austin says in the Chronicle webinar, “impact is the sum of all your outcomes.”
Randomized controlled trial: A study that randomly assigns subjects either to a group that receives an intervention or to a control group that does not, in order to isolate the effects of a program. You’ve likely heard of trials like this in pharmaceutical research: some patients take a new drug while others receive a placebo.
Impact evaluation: A type of assessment for measuring your organization’s outcomes. If your nonprofit distributes mosquito nets, for instance, an impact evaluation might determine the effect of your work on rates of malaria.
Process evaluation: A type of assessment for measuring your charity’s internal effectiveness. For example, if your organization trains volunteers, a process evaluation might determine how many volunteers you trained, how much information they learned during the training, or how many of those trainees went on to become actual volunteers.