By now everyone who cares about National Assessment of Educational Progress (NAEP) results might be sick of thinking about them, in part because the 2015 results from what is often called the nation's report card were -- let's face it -- depressing.
It is possible to come up with explanations for the drop in mathematics scores. For example, the Common Core State Standards call for a couple of math topics to be taught in later grades than those at which NAEP tests them, so kids in states that adopted Common Core might not do as well on those kinds of questions.
But why the stagnant reading scores? No one has really given a feel-better answer for that.
The best anyone can say is, "Don't panic! One data point does not make a trend!" Not exactly satisfying for anyone who is convinced that we need improvement.
But the fact is that we don't really have a national NAEP score; we have at least 51 scores from the 50 states and the District of Columbia (plus Puerto Rico and a few other places), as well as 21 cities that participate as if they were states.
To me the more interesting results are not the national results but the state and local ones.
So, for example, I am intrigued by the improvement in fourth-grade reading and math scores in the District of Columbia, a continuation of a decade's worth of trends. No doubt some will attribute the gains to consistent leadership at the district level. But they will have to find a different explanation for Chicago, which jumped 7 points in fourth-grade reading and 6 points in eighth-grade math since 2013. Chicago has been through a turnstile of superintendents since Arne Duncan left the city to become U.S. Secretary of Education in 2009, and consistent is not a word that can be used to describe the district's leadership.
One of the states I immediately looked at when the NAEP scores were released was Oklahoma. Several years ago the state developed a cadre of early-reading teachers recruited from every region in the state and provided them with sophisticated training in the science of early-reading instruction. I spoke to a group of those specially selected and trained reading teachers a while back, and their energy and enthusiasm for ensuring that every Oklahoma student learn the basics of early reading were palpable. I told those teachers then that I expected their fourth-grade NAEP reading scores to improve in the following years.
So I was waiting for Oklahoma's results with bated breath. From 2013 to 2015 there was a 5-point gain (on top of a 2-point gain from 2011 to 2013). At 222, Oklahoma is now 1 point above the national average in fourth-grade reading -- instead of several points below, which it used to be. There was no real change in eighth-grade reading or math, but that's a nice gain in early reading.
Any serious person would say that you can't draw a direct causal line from any one factor to an improvement on NAEP without doing a LOT of complicated analysis. In the case of oil-rich Oklahoma, for example, researchers would want to look at whether the economic boom Oklahoma has been experiencing for the past few years -- until the recent drop in oil prices -- could have played a role in the improvement on NAEP.
But I certainly hope that some researcher is looking at whether Oklahoma's statewide, systematic effort to ensure that early elementary teachers understand the basic scientific principles of reading instruction can be credited with the improvement in fourth-grade reading scores. It certainly seems as if it is the kind of thing that could have an effect.
That kind of state analysis is what Stanford researcher Martin Carnoy called for recently in a presentation at the Economic Policy Institute on his new paper, Bringing it Back Home: Why state comparisons are more useful than international comparisons for improving U.S. education policy.
His essential argument was that we spend too much time comparing the United States with other countries, many of which (such as Singapore) have little in common with a large, diverse industrial democracy like ours. He argues that it would be much more productive to find stories within the United States.
For example, he and his colleagues, Emma García and Tatiana Khavenson, analyzed the mathematics data through 2013 (that is, not including the latest release of NAEP data) from several pairs of similar states and found that some states improved much more than others. After controlling for demographic factors such as poverty, they found that students in Massachusetts improved at much faster rates than students in Connecticut -- even though they started in similar places in 2003. He found himself curious about why two such similar states with similar resources (in fact Connecticut is arguably wealthier than Massachusetts) would have such different results.
The different results in different states are no surprise to anyone who has hung around The Education Trust. In fact, I wrote a whole chapter on the gains in Massachusetts in HOW It's Being Done: Urgent Lessons from Unexpected Schools.
But that kind of analysis is not often heard in research circles, so this is a welcome contribution.
I don't know that we as a country want to ignore the international comparisons that emerge from TIMSS, PISA, and PIRLS; they provide a lot of information that leads to important questions and investigations.
But Carnoy is right that there are rich stories to be mined here in the United States, looking within and across states to try to understand which factors lead to improvement, stagnation, and decline.
I hope other researchers start taking a look.