“Currently, there is no one proven model for social impact measurement. The conversation to find better ways to measure and create collective impact requires active participation, reflection, coordination and action. Only then would we truly be able to deliver actual value to the social sector.”
A reflection paper by Zhao Binru Bryan, submitted in partial fulfillment of a BBA (Honours) module, “Measuring Success in Philanthropy and Impact Investing”.
This paper discusses whether managers in the public sector and in nonprofit organizations (NPOs) should use performance measurement for performance management. According to Hatry (2014), performance measurement refers to the regular collection of output and/or outcome data throughout the year for an NPO’s programs and services. Performance managers then use these data to make decisions that continually improve their services to their customers. In this paper, the concept of philanthropy is extended to ‘entrepreneurial philanthropy’, comprising ‘venture philanthropy’ as well as ‘impact first impact investing’ (John, Tan and Ito, 2013), in which financial and human capital are deployed primarily to achieve social impact and outcomes.
The adage that “you can’t manage what you can’t measure” appears to be unequivocal. Without performance measurement, managers would not be able to make decisions pertaining to evaluation, control and budgeting (Behn, 2004). Both Brest (2003) and Gates (2013) emphasize the importance of measuring progress and outcomes in NPO programs. Performance measurement thus serves as a feedback loop to ascertain whether a program is on track and whether corrections should be made. Moreover, Brest and Born (2013) define the achievement of social impact as an increase in the quantity or quality of the enterprise’s social outcomes beyond what would otherwise have occurred. Determining the social impact solely attributable to a program, through quantitative or qualitative indicators, may help an NPO test the validity of its theory of change. For instance, the Abdul Latif Jameel Poverty Action Lab at the Massachusetts Institute of Technology showed that eligibility for merit-based scholarships led to better academic grades among students, along with improved student and teacher attendance (Brest and Krieger, 2010). Measurement through randomized controlled trials demonstrated that the intervention works. This serves as useful information for program managers to continue the program and make further improvements to it. Even if the results had been negative or inconclusive, they would still have informed subsequent, more targeted studies.
Based on the theoretical and empirical evidence presented, measurement that is tied to outputs and outcomes is essential for performance managers to make objective and informed decisions. However, measurement is not an end in itself and being able to measure performance does not necessarily equate to the ability to manage performance.
“Not everything that counts can be counted and not everything that can be counted counts”.
The first half of this statement posits that it may sometimes be impossible to obtain quantitative data for performance management. Brest (2003) acknowledges this through the example of a performing arts organization, which should place emphasis on the quality of its productions as well as the size of its audience. While audience size is a quantitative indicator that is comparable and objective, quality of production is not; it could instead be assessed by critics. However, Brest also argues that subjective data such as production quality may be quantified for comparative purposes, noting that this should be done with the program’s goals and outcomes in mind. This shows that measurement should be carried out merely as a tool to assess outcomes and impact. Barkhorn, Huttner and Blau (2013) further debunk the first half of the statement. Through their Advocacy Assessment Framework, they were able to quantify and compare the advocacy efforts of different NPOs. This quantitative estimator of the likelihood of success pushes the boundaries of evaluating advocacy, an area traditionally thought too risky to measure.
The second half of the statement considers circumstances where the quantification of outputs does not guarantee the achievement of the intended outcomes. The Acumen Fund used an output metric, the sales and distribution of bed nets, as a proxy for the outcome of malaria prevention (Ebrahim and Rangan, 2009). However, the metric did not capture whether the bed nets were actually used, leaving an information gap in establishing causation and impact. This presents a limitation of performance measurement for the management of outcomes. However, Trelstad (2008) explains that understanding outcomes and demonstrating the counterfactual is both complicated and costly. He recommends measuring outputs and using literature reviews to justify the link from outputs to impacts. While this may circumvent the problem of linking outputs to outcomes, it reveals another argument against measurement: that it may be costly and fail to produce useful results.
“Measurement is expensive and its results are often ignored”.
Brest (2003) recognizes that data collection becomes more difficult and expensive when measuring intermediate and ultimate outcomes. NPOs often have limited time and money, and lack the administrative expertise to track social outcomes (Tuan, 2008). Although foundations and funders may sometimes bear the cost of measurement, the process still imposes a heavy time cost on NPOs. In addition, measurement results may sometimes be ignored. Tuan (2008) highlights the decision of REDF’s primary funder to discontinue using SROI metrics, as the SROI results had no impact on any investment decisions in the REDF portfolio. Likewise, the William and Flora Hewlett Foundation decided to exit its Nonprofit Marketplace Initiative after some review: the report “Money for Good” (Hope Consulting, 2010) revealed that American donors’ demand for information to identify top-performing nonprofits was much lower than expected, with only 3% of Americans comparing NPOs when making a gift. However, Hatry (2014) presents an alternative viewpoint. He posits that advances in technology have lowered the cost of information and produced more timely results, and that evidence-based decision making and program evaluation have driven a widespread increase in demand for reliable evidence in the NPO sector. The examples of REDF and the Hewlett Foundation reveal that measurement results may sometimes be ignored. However, one must note that the SROI framework was intended to measure returns to society as a whole, not to track individual program outcomes. Likewise, the Nonprofit Marketplace Initiative served to improve information availability for the American population at large. In fact, 85% of the population cares about the performance of NPOs, as measured by indicators (Hope Consulting, 2010). As such, it is not justified to say that results are often ignored.
“The more any quantitative social indicator is used for decision making, the more it will be subject to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor”.
Fitzgerald (2013) points out that emphasizing social impact reporting may leave NPOs overly concerned with securing funding from donors and funding agencies. Focusing too much on measurement may also lead NPOs to take on only those activities that can be measured by quantitative metrics (Brest, 2008). This can create perverse incentives for creaming and cherry-picking. The Oklahoma Milestone Payment System, an outcome-based payment system, created incentives for managers to screen out difficult customers (O’Brien and Cook, 2005). Such creaming practices may, however, be countered through higher payment levels for clients with higher support needs (O’Brien and Revell, 2005). While quantitative social indicators may lead to unintended consequences, social impact can also be measured by qualitative metrics. These can serve as a check and balance on programs, and be considered in tandem with quantitative results for performance measurement.
The above discussion leads to the viewpoint that managers in the field of NPOs should use performance measurement for performance management. Empirical evidence also suggests that pre-investment venture capital practices do matter for the expected performance of social investment funds (Lam, Leong and Lek, 2010). That said, it is imperative to keep in mind that the process of measurement is not an end in itself. Devoting too many resources to measurement may result in managers neglecting the social impact and outcomes of their programs. Customers and beneficiaries are often overlooked, and yet their experiences provide leading indicators of long-term program effectiveness (Twersky, Buchanan and Threlfall, 2013). As such, strong engagement with partners and beneficiaries is crucial to obtaining high-quality information and data. Currently, there is no one proven model for social impact measurement. The conversation to find better ways to measure and create collective impact requires active participation, reflection, coordination and action. Only then would we truly be able to deliver actual value to the social sector.
Barkhorn, I., Huttner, N., & Blau, J. (2013). Assessing Advocacy. Stanford Social Innovation Review, 1-8.
Behn, B. (2004). Why Measure Performance. Bob Behn’s Public Management Report, 1(11), 1-2.
Brest, P. (2008, November 20). Paul Brest, President, William and Flora Hewlett Foundation: Smart Philanthropy in Tough Times. (P. N. Digest, Interviewer)
Brest, P., & Born, K. (2013, August 14). Unpacking the Impact in Impact Investing. Stanford Social Innovation Review. Retrieved March 15, 2015, from http://www.ssireview.org/articles/entry/unpacking_the_impact_in_impact_investing
Brest, P., & Krieger, L. H. (2010). Problem Solving, Decision Making, and Professional Judgment. Interpreting Statistical Results and Evaluating Policy Interventions, 185-206.
Brest, P. (2003). Update on the Hewlett Foundation’s Approach to Philanthropy: The Importance of Strategy. William and Flora Hewlett Foundation 2003 Annual Report.
Ebrahim, A. S., & Rangan, V. K. (2009). Acumen Fund: Measurement in Venture Philanthropy (B). Harvard Business Case.
Fitzgerald, J. (2013). “Just Do It” – Making and Measuring Social Impact. Asia-Pacific Centre for Social Investment and Philanthropy, 1-9.
Gates, B. (2013, January 25). Bill Gates: My Plan to Fix The World’s Biggest Problems. The Wall Street Journal. Retrieved March 15, 2015, from http://www.wsj.com/articles/SB10001424127887323539804578261780648285770
Hatry, H. P. (2014). Transforming Performance Measurement for the 21st Century. The Urban Institute, 1-91.
Hope Consulting. (2010). Money for Good: The US Market for Impact Investments and Charitable Gifts from Individual Donors and Investors, 1-107.
John, R., Tan, P., & Ito, K. (2013). Innovation in Asian Philanthropy: Entrepreneurial Social Finance in Asia. Singapore: The Asia Centre for Social Entrepreneurship and Philanthropy (ACSEP) in National University of Singapore.
Lam, S. S., Leong, S. M., & Lek, S. M. (2010). Venture Capital Practices: Do They Matter for the Expected Performance of Social Investment Funds? ACSEP Research Working Paper Series No. 14/01, 1-35.
O’Brien, D., & Cook, B. (2005). Oklahoma Milestone Payment System. 1-18.
O’Brien, D., & Revell, G. (2005). The Milestone Payment System: Results based funding in vocational rehabilitation. Journal of Vocational Rehabilitation, 101-114.
Trelstad, B. (2008). Simple Measures for Social Enterprise. Innovations: Technology, Governance, Globalization, 3(3), 105-118.
Tuan, M. T. (2008). Measuring and/or Estimating Social Value Creation: Insights into Eight Integrated Cost Approaches. Bill & Melinda Gates Foundation Impact Planning and Improvement, 1-45.
Twersky, F., Buchanan, P., & Threlfall, V. (2013). Listening to Those Who Matter Most, the Beneficiaries. Stanford Social Innovation Review, 1-7.