In June, DCPS Chancellor Kaya Henderson announced that, according to the District of Columbia Comprehensive Assessment System (DC-CAS), more students are proficient in math and reading than ever before. Moreover, gains from last year extend to African-American and low-income students. That is wonderful news. D.C. students, especially minority and low-income students, have historically been left out of gains, particularly in recent years. They deserve better, and we all hope they have gotten it. At the same time, this should have been a moment for caution, rather than unfettered celebration.
That caution is due, in part, to DC-CAS' multiple limitations. Like all state assessments, D.C.'s test of student "proficiency" in math and reading provides, at best, point-in-time and growth measures of students' knowledge of narrow subject areas. It captures neither other subject areas considered key to student success nor potentially more important skills such as critical thinking, problem-solving, and perseverance. It is not necessarily designed to validly indicate appropriate growth from one cohort or one year to another. Finally, and most critically, teachers can gear their instruction toward the tested content, making it difficult to distinguish between test-preparation skills and true learning.
Because DC-CAS scores constitute a substantial proportion of teachers' evaluation rankings, and because DCPS actively encourages such test-focused instruction, cramming is common, particularly in low-income schools where the stakes are highest. As investigations into unusual gains in some DCPS schools and a growing number of other districts indicate, heightened stakes can push system-gaming into outright cheating. The need for caution is greatly increased by recent revelations that, after initial test scores revealed that students had not made the anticipated gains, DCPS shifted to a different set of cut scores, which showed the large gains touted. DCPS officials assert that they made the shift to enable comparability, but the timing suggests otherwise. Moreover, current tests measure different content from prior assessments, rendering such comparisons invalid.
The call for caution is triggered, too, by differences between the DC-CAS and another test taken by DC students, the National Assessment of Educational Progress, or NAEP. Often called "The Nation's Report Card," NAEP shares DC-CAS' limitations in terms of subjects (though it also covers science). In other key respects, however, it compensates for flaws inherent to DC-CAS. NAEP provides a more in-depth assessment of student knowledge in the areas covered, and is much harder to teach to; as such, NAEP scores are widely regarded as valid indicators of student learning. Because NAEP is given to an anonymous sample of students every two years, there is neither reason nor capacity to game the results. It is also designed to provide valid cohort-to-cohort and year-to-year trends, so that changes in the share of students deemed "basic" or "proficient" can be trusted, whether for the nation overall, for individual states, or for income, racial, and ethnic subgroups.
These substantive differences manifest themselves in a gap between the proportion of DCPS students deemed "proficient" on the district's own assessments and the proportion "proficient" by NAEP's standards. The gap is illustrated, among other sources, in a 2009 report comparing the two. While state "proficiency" standards vary widely, it finds, "most states' proficiency standards are at or below NAEP's definition of Basic performance." With slightly higher-than-average standards, DC-CAS reading "proficiency" is the equivalent of NAEP "Basic."
Changes to state standards over time also drive growth patterns that vary across the two tests. As per the DCPS press release, "Students in all but one ward, Ward 3, improved reading performance compared to 2012 and students in every ward have shown steady reading progress since 2007." When measured by NAEP, however, DCPS students showed no growth whatsoever in reading between 2007 and 2011. In fact, scores for many groups fell: Hispanic eighth graders' scores, for instance, dropped an astronomical 12 points, or 5 percent, in those four years. As my organization has argued, it is hard to square claims of success on state-level assessments with clear evidence from NAEP of (sometimes substantial) losses.
Of course, we do not know whether this mismatch will continue; 2013 NAEP data come out later this fall. This prior pattern, however, should prompt skepticism at the very least.
In sum, there are many reasons for caution. Even the most reliable test scores demonstrate only a narrow slice of a complex picture. In DCPS, that picture includes the IMPACT teacher evaluation system, as well as school closures and reconstitutions, also based on test scores, and a teacher pool that is heavy on novices in the most struggling schools and that turns over frequently. These elements are all more likely to impede than to boost student achievement. While recent score gains do not necessarily indicate cheating or manipulation, the district has not provided proof that they truly demonstrate "proficiency" or across-the-board growth, as asserted. It is time for a much closer look before celebrating the latest DC-CAS "miracle."