Too often, foundations spend an inordinate amount of time and money developing a strategic plan, and then do little to make sure it’s implemented effectively. This is a costly mistake. Execution is just as important as planning. And the key to achieving the desired results is monitoring and evaluating the performance of grantees, because they, after all, are the ones who actually carry out the plan.
Monitoring performance involves answering questions like these: Is the grantee meeting its goals? For example, did the grantee recruit the number of trainees it aimed to reach, hold the number of workshops promised, or build the specified number of affordable housing units on schedule?
The John A. Hartford Foundation has long been committed to monitoring the performance of its grantees. Even before awarding grants, the foundation often works with prospective grantees to decide on the metrics to be tracked. Staff members, sometimes accompanied by board members, routinely make annual site visits. Information from the tracking and the site visits is shared with both grantees and the board.
In many cases, this kind of basic monitoring can provide early warning of potential problems. But even careful monitoring cannot gauge the impact of a program. That requires an evaluation.
Generally, the more rigorous an evaluation, the more money and effort it requires. So, before deciding what kind of evaluation to conduct, think carefully about the reasons for the assessment. If you want only a general sense of what was accomplished and do not need the results to convince anyone else to support the program, an informal self-evaluation by the grantee or a review by foundation staff should suffice.
If, on the other hand, you want to leverage your investment by convincing others to replicate a program, then consider a more rigorous (and costly) evaluation. The David and Lucile Packard Foundation paid Mathematica, an independent research and evaluation firm, millions of dollars to study the effectiveness of a program to expand children’s health insurance in Santa Clara County, California. The findings persuaded other California counties to adopt the Santa Clara model and convinced governments and foundations around the country to support it. A rigorous program evaluation like this can be a powerful strategic intervention in its own right.
The findings of evaluations do not have to be positive to be influential; disastrous results can be beneficial in the long run. Take the large-scale evaluation the Robert Wood Johnson Foundation commissioned in the late 1980s of a program to improve end-of-life care for terminally ill patients. The results showed that the intervention had no impact on patient care. Rather than giving up, the foundation redoubled its efforts: in collaboration with the Open Society Institute, it embarked on a campaign to change deeply ingrained medical norms and public attitudes. The new effort led to widespread acceptance of palliative care for seriously ill people, one of the major philanthropic success stories of recent decades.
Here are five ways to ensure that your evaluations gauge a program’s actual impact:
- Whenever possible, design and plan the evaluation at the same time as the program. Too often, evaluators are brought in after a program is underway, when it is too late to collect the necessary baseline data.
- If possible, select evaluators whose findings will be timely enough to factor into future program decisions. Evaluation results often come too late to be of much use; in many cases, particularly when outside academics conduct the evaluation, the findings may not be available until after a program has ended and the foundation has already moved on.
- Consider the solid middle ground between basic monitoring of a grantee’s performance and a rigorous evaluation by specialists. For example, the Robert Wood Johnson Foundation’s annual Anthology provided credible assessments of its strategies and programs, written primarily by experienced journalists. Many readers found them more informative than conventional evaluations.
- Don’t be a prisoner of data. Quantitative information is important, but case studies, focus groups, reportage, and personal interviews provide valuable insights too.
- Remember the denominator. It’s easier to look at, and cheer, the numerator (for example, how many school principals were trained) than the denominator (how many need training). A program that trains 250 school principals could be considered successful, but if 10,000 principals need training, the program has reached only 2.5 percent of them, and a good evaluation would conclude that it merely scratched the surface.
If a foundation approaches evaluation with common sense and is clear about why it wants to evaluate a program or strategy, evaluation can be an effective way to increase both the short-term and long-term impact of its grant making.
The writers are founding partners of Isaacs/Jellinek, a consulting company that works with foundations, and the authors of “Foundations 101: How to Start and Run a Great Foundation.”