By Jane Wales
In recent years, the philanthropic sector has neared consensus on the need to improve measurement and evaluation of its work. Although the philanthropies its members lead use different methods, the Aspen Philanthropy Group (APG) has agreed that basic principles and practices can inform efforts to monitor performance, track progress, and assess the impact of foundation strategies, initiatives, and grants. Its members hope to build a culture of learning in the process.
Over the past two years these CEOs of private, corporate, and community foundations have supported a series of meetings on measurement and evaluation (M&E) with leaders of grantee organizations, issue experts, and evaluators. They have concluded that, when done right, assessment can achieve three goals. It can strengthen grantor and grantee decision-making, enable continuous learning and improvement, and contribute to field-wide learning. Below are broad observations from the workshop process, followed by articles from five APG authors describing the M&E philosophies of the institutions they lead. Their articles will be among those to appear in an edited e-volume, to be published by the Aspen Institute and continuously updated to capture evolving foundation practice and comments from voices in the field. This is what we learned.
APG members found that differing terminology can undermine efforts by grantors and grantees to collaborate effectively in the design and implementation of an M&E system. Many grantors and grantees use the terms “evaluation,” “impact measurement,” and “measurement and evaluation” interchangeably. In fact, M&E encompasses distinct activities with distinct purposes, methods, and levels of difficulty. In his article, William and Flora Hewlett Foundation president Paul Brest separates M&E into three categories undertaken at three stages: describing theories of change and devising logic models during the initial design of a project or foundation initiative; tracking progress against that strategy during the life of the grant or initiative; and assessing impact after the fact. The first is essential background for M&E, and together the three provide a useful means of organizing its various activities and purposes. The second enables a grantor and grantee to gain the information needed to make mid-course corrections to the strategy and to learn throughout the process. The third activity—assessing impact—is the most daunting. Brest notes that in some undertakings, such as policy advocacy or Track II diplomacy, exogenous influences make it hard, if not impossible, to attribute impact to any one actor or strategy. He argues for demonstrating “contribution” rather than claiming “attribution,” where contribution means increasing the likelihood of success, and notes that the true impact of such “risky” grants may not be possible to ascertain. Nonetheless, they are well worth pursuing.
At its best, M&E informs decision-making and provides for continuous learning. In his article, Matthew Bannick, managing director of Omidyar Network (ON), discusses why M&E is more likely to be used—and used to good effect—when it is designed collaboratively by grantor and grantee, and when data are gathered and organized around decisions that each needs to take. It is therefore critically important that they agree on their evaluation approach at the outset. Ford Foundation president Luis Ubiñas agrees, adding that “from the very beginning, grantees should have a clear sense of what benchmarks of success are expected of them at each stage of initiative development”—and why.
The Cost-Benefit Ratio Matters
Ubiñas points out the costs of M&E, arguing that in designing an evaluation system, careful consideration must be given to the burdens on each party. Failure to do so, he writes, can lead to “excessive data gathering” in which grantor and grantee gather as much data as possible in search of evidence of impact. The costs are fourfold. “First, it is a burden to grantees, creating surplus work for often tightly staffed and financially strapped nonprofits. Second, it undermines quality because grantees will provide the requested information to meet their grant obligations, but may not have time to supply the insight that is often more valuable than the data. Third, it inundates foundation staff with information but may leave them little time to use it effectively. Fourth, it may not provide the information that is actually needed to understand how effective our initiatives and grantmaking are.” According to Bannick, ON reduces the burden by using a limited number of easily collected metrics, and, as an alternative to the “time-consuming, costly, and complicated challenge of measuring impact,” ON often measures outputs as proxies. As for the price tag, Rockefeller Foundation president Judith Rodin notes the efficiencies gained by using technology to gather real-time data. Brest notes (in Money Well Spent, coauthored with Hal Harvey) that “if you are a philanthropist with a long-term commitment to a field, it is well worth putting your funds—and lots of them—into evaluation.”
To read the full report, please visit the Aspen Institute's website.