Cal State Coming Clean About Math Test's Limitations

California State University has done many things right in the twenty years since it became one of the nation's first universities to take on the challenge of remedial education. For example, it developed a widely lauded eleventh-grade readiness assessment, now being replicated around the country, that helps students catch up academically during their senior year of high school and avoid remedial courses in college.

It's hard to understand, therefore, why the university would be anything but candid about how it measures students' proficiency in math and English. But according to a 2010 study that recently came to light, the placement test required for entering CSU students does not predict whether students can succeed in college-level math.

Scores on that test, the Entry Level Mathematics Test, or ELM, send nearly 20,000 new CSU admits to remedial math courses when they get to campus each fall (or during the preceding summer). According to the study, conducted by test maker ETS, those students could have enrolled directly in college-level math courses and done just fine, or at least as well as the 9,000 or so students who passed the test.

The news about the test's apparent lack of predictive power won't shock anyone who follows the research on standardized tests. As discussed in my series of reports on college math requirements, studies repeatedly have found that high school records are more effective predictors of student success in college courses than test scores. Just this summer, ACT acknowledged as much when it decided to jettison its current placement test, Compass.

The revelation might, however, surprise the tens of thousands of California high school seniors required to take the test each year. Last fall, CSU students required to take remedial math had an average high school grade point average of 3.2. Like all CSU students, they were in the top one-third of high school grads in the state. But they included a disproportionate share of under-represented minority students - 51 percent of African Americans and 35 percent of Mexican Americans vs. just 15 percent of whites and Asians - raising the possibility that the test perpetuates racial discrimination.

CSU, which commissioned the study, has for years been less than transparent about it, declining to share it even upon request. So has ETS. Instead, the validity study came to light only after a doctoral student obtained it under the state's public records act and described its findings in a recent dissertation. This week, CSU officials said that, going forward, the analysis will be available to those who request a copy.

Earlier this year, in an interview for my report on math placement exams in California, the faculty chair of the system's ELM committee told me that the test "performs okay in validity studies." I tried to obtain a copy of such validity studies from ELM committee members as well as CSU headquarters. Having no luck there, I reached out to an ETS official.

In an email, I asked the official to what extent the ELM predicts students' grades in college-level math courses. "The purpose of ELM is not to predict a course grade. The purpose is to identify students who are not ready for a college-level mathematics course," she responded. She said the study involved a survey of professors at ten campuses, which found that "students were being appropriately placed and the ELM cut score was not changed."

She failed to mention the first question in the validity study:

Is the ELM an effective predictor of students' success in mathematics courses, where success is defined by final course grades?

Or its answer: 78 percent of the students who "failed" the test (by scoring below 50) but nevertheless enrolled in college-level math courses passed those courses. What about students who passed the ELM? Exactly 77 percent of them were successful in the course.

What is not clear is why no changes were made to the test or the cut-off score after the validity study was shared with CSU officials in 2010. According to Nathan Evans, CSU Chief of Staff for Academic and Student Affairs, none of the administrators involved with the validity study still work for the system, and no record has been found about the response to the study.

An executive order in 2010 requested that math chairs review the cut-off score every other year, which apparently has not occurred. In a statement earlier this year, the math chairs blessed the test content - mostly algebra, geometry and number sense - as reasonable to expect of college students. But that doesn't mean the test scores are meaningful indicators of students' ability to succeed in math classes.

Yet CSU provided the data on which ETS based the study, and there is no indication that its findings have been questioned - or superseded by subsequent research. Furthermore, longstanding norms in educational testing dictate that test scores alone should not be the basis for any high-stakes decision.

And why the secrecy? Even administrators and mathematics faculty within CSU have told me they had trouble obtaining a copy of the CSU-funded study. The ETS official implied that only psychometricians (i.e., testing experts) could understand it.

"These are psychometric technical reports and the language used is very technical," she said. "At the creation of the ELM in the late 1970s and early '80s, it was decided not to make this a public document."

That decision appears to be a violation of California's public records act as well as professional standards for educational testing.

With new Common Core-aligned assessments replacing CSU's former eleventh-grade readiness test and a new SAT test in the works, CSU now intends to take a new look at the various ways it measures students' readiness, including the ELM. "The discussion needs to be occurring," Evans noted.

After five years of secrecy, that's a good thing. The lack of transparency harms students, faculty, and the university itself. A test that can deter students' educational progress should have solid evidence behind it. High school and college faculty devoted to improving students' math learning deserve an accurate account of how their students are faring. And the university's path-breaking commitment to improving math proficiency should not be undermined by inaccurate measures of success.