Social investing and philanthropy make up one of the few sectors of the economy where we only measure after we invest. Think about it: most investors on Wall Street would not take a position in a public company without first predicting the stock price using Bloomberg or FactSet; most bankers would not issue a loan without first running a credit score through Experian or Equifax; and most lawyers would not try a case without first researching legal precedent via Westlaw or LexisNexis. But in the nonprofit and public sector we invest first, and then measure to see what we got. That may be a key reason why we aren't making as much progress as we could be.
Predicting the success of social programs before we fund them holds great promise for the future of social impact:
- Rational resource allocation. Policymakers, government budget analysts and funders could use data to predict the likely success of a prospective social intervention -- rather than relying on politics, cronyism and misleading metrics like overhead ratios.
- Benchmarks. Standardized data for social programs would enable us to compare the relative performance of one program against another.
- Level playing field. Currently, only those programs that can afford to hire an evaluator and conduct a formal evaluation are considered "evidence-based." Predictive analysis assesses all programs on the same basis, using evidence-based factors to determine which are most likely to succeed.
- Attract private capital. I've never heard of a bond being issued without a rating. But social impact bonds are issued without ratings, and as a result they cannot attract real risk-seeking capital. With predictive data, market-based investors could invest in philanthropy, and capital markets could emerge to fund social change.
Some argue that predictive data is irresponsible. We can never be 100 percent sure that a prediction is correct, so using this type of data would unfairly penalize certain programs.
Others argue that data doesn't tell the whole story -- there are many other "non-quantifiable" factors like quality of leadership, context, geography, etc. that go into determining a program's success.
Still others say that predictive data would stifle innovation -- because innovative programs have no track record they would likely come up short in any analysis.
Finally, many say that predicting social impact is just plain impossible. It's never been done before. There's no way to predict human behavior.
These are all reasonable concerns. However, each argument fails on its merits. The standard for predictive data is not perfection; it's being directionally right. Predictive data isn't the only consideration in other sectors; it's just one input into a calculus of professional judgment. Think about it: when Moneyball-style analytics arrived, teams didn't fire all the baseball scouts -- they just gave them an additional tool to use. And predictive data is unlikely to kill innovation; indeed, the opposite is likely. Currently, limiting funding to "evidence-based" programs that have been fully measured and evaluated is what limits innovation. Predictive data levels the playing field between innovative and established programs by shifting the analysis to risk and expected outcomes, rather than to how well a program evaluation was conducted.
Finally, with regard to the concern about impossibility: if we can predict outcomes in the fields of medicine, economics, finance, entertainment, law, sports and weather, we can surely find a way to predict outcomes in social change. We have a number of things on our side:
- Advances in meta-analysis
- A robust evidence base
- Accessibility of database technology
- Data science and statistical modeling
The time is now. The demand for data is growing. But data is most valuable ex ante, not ex post. The analyses that government policymakers and funders are trying to accomplish these days are predictive:
"Which program is most likely to produce the outcome we want?"
"Which program is going to give me the best bang for my buck?"
"Will this program achieve the outcomes we want?"
"What is the ROI for investing in this program?"
"How do we design a program to maximize its effectiveness?"
Professor James Heckman, a Nobel Laureate at the University of Chicago and one of the world's leading experts on early childhood development, told me: "I get calls every day from policymakers asking me 'how do I design the best early childhood program?' and I tell them 'I don't know; I just know that the one I studied worked.'" We need to crack the code on social impact. We need to figure out what works. And predictive data is the only way to do that. It may not be easy, and it may not be perfect. But it is possible. And I would argue that we are more irresponsible as a sector for not trying at all than for trying and failing to get it right.