That World Bank economist was me, and the outcome has weighed heavily on me over the last two decades. What if I was wrong?
This story captures the essence of why we need much more open-sourcing in the aid and philanthropy fields. Experts at official aid agencies (the World Bank, USAID, et al.) and large foundations say, "If we do enough research and analysis, we can: a) determine the most pressing social and economic problems facing a society, b) come up with good ideas that address those problems, and c) develop investment projects that will solve those problems." In this equation, "we" = a small group of development, program and government experts.
During the multi-year standoff between Indonesian government officials and me, I got little input from civil society groups, private sector business people or even the farmers themselves. Consequently, I had no idea what they thought of the original project design. Nor did they have any way to suggest alternative designs that might have worked better. Against this background, it is not surprising that studies have failed to find much social or economic impact from the bulk of the $2 trillion of development aid provided by the World Bank since 1950. This analytical, expert-driven, data-intensive approach has not panned out as we had expected.
Fortunately, the world has changed -- especially in the last five years. Because of our transformative experience with the Development Marketplace at the World Bank, Mari Kuraishi and I created GlobalGiving to allow anyone in the world with a good idea to submit it, and anyone in the world to fund it or comment on it. So far, almost 100,000 individuals and many leading companies have provided more than $28 million to 2,000 projects in 115 countries.
Throughout our nine years, we have experimented with ways to engage "the crowd" in choosing the best projects on GlobalGiving. Early on we tried voting-only contests, but these proved too complicated for a small, giving-focused organization to manage. In late 2007, we were fortunate to work with the Case Foundation on the first large-scale online philanthropy challenge -- America's Giving Challenge. This campaign demonstrated to hundreds of nonprofits that they could use the Internet and social networks to mobilize donors to support their work...and the results of that mobilization determined which causes would receive $50,000 bounties provided by the Case Foundation.
More recently, a number of companies and foundations have launched their own online philanthropic crowdsourcing campaigns. One of the most impressive of these has been the Pepsi Refresh Project. Pepsi is giving away over $1 million per month -- a total of $20 million this year -- to ideas selected by popular vote.
The approach is almost pure crowdsourcing. Any person, organization, or even company in the U.S. (and soon Canada) is eligible to submit an idea, and as long as the idea serves a public benefit, includes sufficient information, and doesn't violate the published rules, it goes up for public vote. In the first three months, millions of votes were cast, and many great ideas have been funded. Undoubtedly, some of the ideas will not be stellar, but it's only by being open to all ideas, and being willing to take some risks, that Pepsi has a chance to fund amazing ideas that would otherwise not have had access to this level of investment. Either way, Pepsi is committed to providing an open platform for updates and feedback on the ideas as they become realities. (Disclosure: GlobalGiving is a partner in Refresh.)
So far, Pepsi is alone in the scale of brand dollars it is using to power this campaign. But other innovators such as American Express, GiveMN, Chase, Google, and Sam's Club have used crowdsourcing to allocate corporate philanthropy. At GlobalGiving, we conduct quarterly Global Open Challenges in which interested organizations must demonstrate their ability to use our platform by attracting 50 donors and $4,000 in donations, thereby securing a spot on the GlobalGiving platform. Small prizes sweeten the pot. This is crowdfunding, more than crowdsourcing of votes. And it works for us. We've found this mechanism effective in identifying and qualifying organizations that an "expert" at GlobalGiving would not otherwise have found or given a chance to succeed.
Different structures will work better for different types of organizations and campaigns. Matching, prizes, thresholds, voting -- all have a place in the crowd (fill-in-the-blank) approach to social impact. Some contests have had flaws, but that is to be expected at this stage, and we hope the experimentation will continue. My bet is that, in aggregate, these contests have produced more social good than corporate or foundation committees making the decisions alone would have.