Since launching Evidence for ESSA on February 28, I’ve gotten a lot of emails. In general, the responses to the website have been very positive. However, a small minority of emails have been really angry about the entire project.
The writers of these angry emails are upset that positive ESSA evidence levels were assigned to what they considered “bad programs” and less positive ESSA evidence levels were assigned to what they considered “good programs.” In each case, I explain that we review only the existing evidence of demonstrated impact on students’ learning, and assign ESSA evidence levels according to the standards defined by the evidence provisions of the Every Student Succeeds Act (ESSA), which is now the law of the land. We are not assigning ESSA evidence levels to programs based on their “goodness” or “badness” on any dimension other than impact on achievement.
The critics of Evidence for ESSA were having none of it. In their minds, “good programs” are ones that adhere to well-established principles, have been endorsed by experts, or are aligned with state or national standards. “Bad programs” are ones that, in their opinion, violate these standards or fail to incorporate well-supported principles.
Expert opinions and standards are important, of course, but how about effectiveness? I asked how anyone could tell if a program was good or bad unless they knew if it actually benefitted students. This did no good. “Don’t you understand?” they asked. “Such-and-such experts or so-and-so standards support these programs, so they are good.” End of story.
But adhering to principles of good practice is not at all the same as demonstrated effectiveness. To understand this, imagine a textbook that meets every standard and conforms to all current conceptions of good practice, yet teachers are given only three hours of in-service to use it. An evaluation would probably find no improvement in learning. Now imagine a program built around the very same textbook that provides a week of training, in-school coaching once a month, videos to demonstrate program elements, and so on. This program is much more likely to work. The point is, the content of a curriculum is part of what might make it effective or ineffective. The professional development and other features are also essential. So declaring a program or curriculum “good” or “bad” based on content alone is misleading.
The conversations I am having with Evidence for ESSA critics illustrate the sea change being brought about by the ESSA evidence standards. Way back in . . ., well, 2016, educational programs were largely judged according to alignment with standards, state textbook and software reviews, correspondence with expert opinion, or, most often perhaps, leaders’ preferences, tips from nearby districts, and appeals from sales reps. Actual proven impact on students was hardly ever involved. Today, as the ESSA evidence standards begin to be implemented, evidence of effectiveness is beginning to get some respect. This is a good thing for students, teachers, parents, and our nation, but it is deeply uncomfortable for those who have long relied on curriculum content or opinion to drive the selection of educational programs. Those are the people contacting me to complain about the “bad programs” being assigned positive ESSA evidence levels, the ones that, “bad” as they may be in some people’s opinions, actually enhance student achievement.
For many years, in school principals’ and superintendents’ offices all over America, I’ve seen the following statement proudly displayed on the wall:
“In God we trust. All others bring data.”
At long last, this saying is beginning to apply to the critical choices educators make in selecting programs, books, software, and professional development. Good programs? Bad programs? Don’t tell me your opinions. Show me the data!
This blog is sponsored by the Laura and John Arnold Foundation