Education Reform: Nothing New Under the Sun

In a recent Washington Post article entitled "Four Decades of Failed School Reform," Pat Welsh -- a retired 43-year-veteran teacher from Northern Virginia -- discussed the various education initiatives that have come down the pike over his career in the classroom, none of which noticeably improved either teaching or student outcomes. For this 10-year-veteran public school teacher, reading Mr. Welsh's catalog of failed initiatives yielded an exasperating realization: The last three decades of reform have been just as foolhardy as the current one. And it's not just in the D.C. metro area.

In New York City, our most recent and controversial reform has been the rollout of a set of standardized tests called Measures of Student Learning -- or MOSL. (My colleagues keep pointing out that this sounds like the name of a klezmer band.) MOSL's mission is twofold: first, to evaluate teachers, as students' standardized test scores make up 20 percent of their teacher's yearly "rating" (the number by which the teacher will be judged effective or ineffective) under the NYCDOE's "Advance" initiative; second, to evaluate the school itself, based on the improvement in students' scores from the first sitting, in September, to the second, sometime next spring.

The New York City test was apparently released in mid-September (though no teachers had access to it until the day before it was given to students). The kids sat for the English Language Arts (ELA) portion one week and the math portion the following week. The prompt on the ELA test asked my 10th graders whether genius was the result of hard work or was innate -- but while one passage, by Malcolm Gladwell, actually addressed this topic (discussing the number of hours violin students would practice to achieve proficiency), the other passage was about Dr. Temple Grandin's accomplishments in spite of having autism.

From the outset, my students observed (correctly, I felt) that the two passages about which they were supposed to write a point-counterpoint argumentative essay were only tangentially connected to each other. "Miss, why didn't you prepare us for this test?" they asked, clearly alarmed. They did not realize that their teachers had received a sample test only twenty-four hours prior, and thus had little more idea of the content than they did. "So, does this really count? For our grades?"

When I assured them that the test would neither adversely affect their grades nor jeopardize their graduation -- that it was, rather, "a tool to evaluate our school as a whole" -- many of them flat-out refused to take the test; the rest scribbled down superficial responses (muttering complaints about the quality of the readings or the opacity of the prompt) and spent the rest of the period staring out the window or doodling. They repeated that performance on the math section the following week.

(Note to New York City public school parents reading this article: If you too object to having your children's instructional time spent in this manner, I encourage you to bring your concerns to our DOE higher-ups.)

The irony is that after the final reckoning, when all tests are scored and grades are entered, what will likely turn out to be an abysmal performance by many students across the city is in fact the ideal outcome for their teachers and schools. Low scores on this fall round of the MOSL tests will make it easier for students to improve in the spring (particularly now that both students and teachers have actually seen the test), making their teachers more likely to be designated "effective" when the students' improvement is factored into that 20 percent of their ratings.

While I would certainly like to be labeled an "effective" teacher, and thus earnestly hope that my students do better on the spring MOSL tests, it's impossible to ignore that this entire system is something of a dog-and-pony show. Aside from the poor writing and construction of the test itself, it seems intuitively obvious that students will do poorly on a test for which they have received no preparation, and better once they've been made aware of its format and content. What that shows about me as a teacher seems fairly negligible.

But beyond the question of whether student test scores are actually good evidence of teacher quality, there is the unavoidable question of how Advance and MOSL will benefit anyone -- students or teachers. In this new reform effort, what is the end result? How will these reform efforts improve the educational outcomes of students in New York City, or make schools better? In simply creating a new evaluation rubric, exactly how is instruction being improved?

In the world of education reform, the adage holds true: The more things change, the more they stay the same.
