Use Proven Programs or Manage Using Data? Two Approaches to Evidence-Based Reform

In last week's blog, I welcomed the good work Results for America (RfA) is doing to promote policy support for evidence-based reform. In reality, RfA is promoting two quite different approaches to evidence-based reform, both equally valid but worth discussing separately.

The first evidence-based strategy is "use proven programs (UPP)," which is what most academics and reformers think of when they think of evidence-based reform. This strategy depends on the creation, evaluation, and widespread dissemination of proven programs capable of, for example, teaching beginning reading, algebra, or biology better than current methods do. If you see "programs" as being like the individual baseball players that Oakland A's general manager Billy Beane chose based on statistics, then RfA's "Moneyball" campaign could be seen as consistent in part with the "use proven programs" approach.

The second strategy might be called "manage using data (MUD)." The idea is for leaders of complex organizations, such as mayors or superintendents, to use data systematically to identify and understand problems and then to test out possible solutions, expanding those that work and scaling back or abandoning the others. This is the approach used in the "Geek Cities" celebrated by RfA.

"Use proven programs" and "manage using data" have many similarities, of course. Both emphasize hard-headed, sophisticated use of data. Advocates of both approaches would be comfortable with the adage, "In God we trust. All others bring data."

However, there are key differences between the UPP and MUD approaches that have important consequences for policy and practice. UPP emphasizes the creation of relatively universal solutions to widespread problems. In this, it draws from a deep well of experience in medicine, agriculture, technology, and other fields. When an innovator develops a new heart valve, a cow that produces more milk, or a new cell phone, and proves that it produces better outcomes than current solutions, that solution can be used with confidence in a broad range of circumstances, and may have immediate and profound impacts on practice.

In contrast, when a given school district succeeds with the MUD approach (for example, analyzing where school violence is concentrated, placing additional security guards in those areas, and then noting the changes in violence), this success is likely to be valued and acted upon by district leaders, because the data come from a context they understand and are collected by people they employ and trust. However, the success is unlikely to spread to or be easily replicated by other school districts. The MUD district may tout its success locally, but its leaders have little incentive to tell outsiders about it and usually lack the staff to write it up. Further, since MUD approaches are not designed for replication, they may or may not work in other places with different contexts. The difficulty of replicating success in a different context also applies to UPP strategies, but after several program evaluations in different contexts, program developers are likely to be able to say where their approach is most and least likely to work.

From a policy perspective, MUD and UPP approaches can and should work together. A district, city, or state that proactively uses data to analyze all aspects of its own functioning and to test out its own innovations or variations in services should also be eager to adopt programs proven to be effective elsewhere, perhaps doing its own evaluations and/or adaptations to local circumstances. If the bottom line is what's best for children, for example, then a mix of solutions "made and proven here" and those "made and proven elsewhere and replicated or tested here" seems optimal.

For federal policy, however, the two approaches lead in somewhat different directions. The federal government cannot do a great deal to encourage local governments to use their own data wisely. In areas such as education, federal and state governments use accountability schemes of various kinds as a means of motivating districts to use data-driven management, but NAEP scores since accountability took off in the early 1980s suggest this strategy is not going very well. The federal government could identify well-defined, proven, and replicable "manage using data" methods, but if it did, those MUD models would just become a special case of "use proven programs" (and in fact, the great majority of education programs proven to work use data-driven management in some form as part of their approach).

In contrast, the federal government can do a great deal to promote "use proven programs." In education, of course, it is doing so with Investing in Innovation (i3), the What Works Clearinghouse (WWC), and most of the research at the Institute of Education Sciences (IES). All of these are building up the number of proven programs ready to be used broadly, and some are helping programs to start or accelerate scale-up. The proven programs emerging from these and other sources create enormous potential, but they are not yet having much impact on federal policies relating, for example, to Title I or School Improvement Grants. That could be coming.

Ultimately, "use proven programs" and "manage using data" should become a seamless whole, using every tool of policy and practice to see that children are succeeding in school, whatever that takes. But the federal government is wisely taking the lead in building up capacity for replicable "use proven programs" strategies to provide new visions and practical guides to help schools improve themselves.
