The biggest problem in evidence-based reform in education is that too few replicable programs with strong evidence of effectiveness are available to educators. The evidence provisions of the Every Student Succeeds Act (ESSA) encourage the use of programs that have strong, moderate, or promising evidence of effectiveness, and they require School Improvement efforts (formerly SIG) to include approaches with evidence that meets these definitions. A significant number of programs do meet these definitions, but not enough to give educators multiple choices of proven programs for each subject and grade level. The Institute of Education Sciences (IES), the Investing in Innovation (i3) program, the National Science Foundation (NSF), and England's Education Endowment Foundation (EEF) have all been supporting rigorous evaluations of replicable programs at all levels, and this work (and work funded by others) is progressively enriching the offerings of programs that are both proven to be effective and ready for widespread dissemination. However, progress is slow. The large-scale randomized experiments these funders demand are expensive and can take many years to complete. As in any scientific field (such as medicine), most experiments do not show positive outcomes for innovative treatments. At a time when demand is starting to pick up, the supply needs to keep pace.
Given that Congress and other funders are not throwing money at education research, how can promising innovations be evaluated, made ready for dissemination, and taken to scale? First, existing funders need to be supported adequately to continue the good work they are doing. Education Innovation and Research (EIR) grants will pick up where i3 ends, and IES needs to maintain its leadership in supporting the development and evaluation of promising programs in all subjects and grade levels. The National Science Foundation should invest far more in creating, evaluating, and disseminating proven STEM approaches. All of this work, in fact, needs increased funding and publicity to build political and public support for the entire enterprise.
However, there are several additional avenues that might be pursued to increase the number of proven, ready-to-disseminate approaches. One promising model is low-cost randomized evaluations of interventions supported by government or other funding. Both IES and the Laura and John Arnold Foundation are offering support for such studies. For example, imagine that a school district is introducing a digital textbook to its schools but can afford to provide the program to only 30 schools each year. If the district finds 60 schools willing to receive the program and randomly assigns half of them to start in a given year, then it is spending no more on digital textbooks than it planned to spend. If state test scores can be obtained and used as pre- and post-tests, then measurement costs nothing. The only costs of studying the effects of the digital textbooks might be the costs of data analysis, perhaps some questionnaires or observations to find out what schools did with the digital textbooks, and a report. Such a study would be very inexpensive, might produce results within a year or two, and would be evaluating something that is appealing to schools and ready to go.
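The waitlist design described above can be sketched in a few lines of code. This is only an illustration under stated assumptions: the school names and test scores are hypothetical, and a real analysis would also adjust for pretest scores and account for clustering of students within schools.

```python
import random
import statistics

def assign_schools(school_ids, seed=0):
    """Randomly split volunteer schools into an early-start (treatment)
    half and a waitlisted (control) half."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = school_ids[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def treatment_effect(post_scores, treatment, control):
    """Difference in mean post-test (state test) scores between the
    treatment and control groups."""
    t_mean = statistics.mean(post_scores[s] for s in treatment)
    c_mean = statistics.mean(post_scores[s] for s in control)
    return t_mean - c_mean

# 60 willing schools; 30 get the digital textbook in year one,
# 30 are waitlisted and serve as the comparison group.
schools = [f"school_{i:02d}" for i in range(60)]
treated, waitlisted = assign_schools(schools, seed=42)
```

Because the district was going to buy 30 schools' worth of textbooks anyway, the randomization itself adds no program cost; the only analytic step is comparing the two groups' state test scores once they are available.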
Beyond these existing strategies, others might be considered to speed up the proven-programs process. One example might be to build on Small Business Innovation Research (SBIR) grants. At $1 million over two years, these grants, limited to for-profit companies, are often too small to develop and evaluate promising approaches (usually, technology applications). IES or other funders might proactively look for promising SBIR projects and encourage them to apply for larger grants to complete development and conduct rigorous evaluations. One advantage of SBIR projects is that they usually come from small, ambitious, undercapitalized companies, which are motivated to take their programs to scale.
Another strategy would be to fund "aggregators" whose job would be to identify promising approaches from any source, help assemble partnerships if necessary, and then help prepare applications for funding. This could help young innovators with great ideas combine their efforts, create more complete and powerful innovations, and subject them to rigorous evaluations. In addition to SBIR-funded projects, promising program elements might be found in projects funded by private foundations or agencies outside of education. They might be components of IES or i3 projects that produced promising but not conclusive outcomes in their evaluations, perhaps due to insufficient sample size. Aggregators might link programs with broad reach but limited technology to brash technology start-ups in need of access to markets. If the goal is finding promising but incomplete efforts and helping them reach effectiveness and scale, every source should be fair game.
Government has made extraordinary progress in promoting the development, rigorous evaluation, and scale-up of proven programs. However, its success has created a demand for proven programs that it cannot fulfill at the usual pace. Current grant programs at IES and i3/EIR should continue, but in addition we need innovative strategies capable of greatly accelerating the pace of development, evaluation, and scale-up.