Evidence-Based vs. Evidence-Proven
Way back in 2001, when we were all a lot younger and more naïve, Congress passed the No Child Left Behind Act (NCLB). It had all kinds of ideas in it, some better than others, but those of us who care about evidence were ecstatic about the often-repeated requirement that federal funds be used for programs "based on scientifically-based research (SBR)," particularly "based on scientifically-based reading research (SBRR)." SBR and SBRR were famously mentioned 110 times in the legislation.

The emphasis on research was certainly novel, and even revolutionary in many ways. It led to many positive actions. NCLB authorized the Institute of Education Sciences (IES), which has greatly increased the rigor and sophistication of research in education. IES and other agencies promoted training of graduate students in advanced statistical methods and supported the founding of the Society for Research on Educational Effectiveness (SREE), which has itself had considerable impact on rigorous research. The U.S. Department of Education has commissioned high-quality evaluations of a variety of interventions, such as computer-assisted instruction, early childhood curricula, and secondary reading programs. IES funded development and evaluation of numerous new programs, and the methodologies promoted by IES are essential to Investing in Innovation (i3), a larger effort focused on development and evaluation of promising programs in K-12 education.

The one serious limitation of the evidence movement up to the present is that while it has greatly improved research and methodology, it has not yet had much impact on practices in schools. Part of the problem is just that it takes time to build up enough of a rigorous evidence base to affect practice. However, another part of the problem is that from the outset, "scientifically-based research" was too squishy a concept. Programs or practices were said to be "based on scientifically-based research" if they generally went along with accepted wisdom, even if the specific approaches involved had never been evaluated. For example, "scientifically-based reading research" was widely interpreted to support any program that included the five elements emphasized in the 2000 National Reading Panel (NRP) report: phonemic awareness, phonics, vocabulary, comprehension, and fluency. Every reading educator and researcher knows this list, and most subscribe to it (and should do so). Yet since NCLB was enacted, National Assessment of Educational Progress reading scores have hardly budged, and evaluations of specific programs that just train teachers in the five NRP elements have had spotty outcomes, at best.

The problem with SBR/SBRR is that just about any modern instructional program can claim to meet the standard. "Based on..." is a weak criterion, open to anyone's interpretation.

In contrast, government is beginning to specify standards of evidence far more rigorous than "based on scientifically-based research." For example, the What Works Clearinghouse (WWC), the Education Department General Administrative Regulations (EDGAR), and i3 regulations have sophisticated definitions of proven programs. These typically require comparing a program to a control group, using fair and valid measures, appropriate statistical methods, and so on.

The more rigorous definitions of "evidence-proven" mean a great deal as education policies begin to encourage or provide incentives for schools to adopt proven programs. If programs only have to be "based on scientifically-based research," then just about anything will qualify, and evidence will continue to make little difference in the programs children receive. If more stringent definitions of "evidence-proven" are used, there is a far greater chance that schools will be able to identify what really works and make informed choices among proven approaches.

Evidence-based and evidence-proven differ by just one word, but if evidence is truly to matter in policy, this is the word we have to get right.