K12 Schools Need To Know What Works

The challenges of analyzing purchasing decisions against student achievement are many.

America spent over $640 billion on public K-12 education last year. Add expenditures for private and parochial schools, and the total soars beyond three-quarters of a trillion dollars annually. That is more than the Pentagon and foreign aid budgets combined.

Not surprisingly, most of that education spending—about 80%—goes to salaries: for teachers, administrators, and support personnel. But about 17% of public school expenditures—some $110 billion annually—goes to costs like transportation, supplies, technology, and curriculum. Notably, since 2002, the share of curriculum spending has grown.

Understandably, educational researchers have worked hard to gauge the efficacy of different learning models, behavioral theories, and instructional materials in improving student achievement. Countless hours have been invested in discerning what works, what doesn’t, and what works with which types of students.

This research yields important insights. That is the good news. The bad news is that it takes far too long for these findings to get back into the hands of teachers and administrators. And because the findings are generally based on small numbers of students, many educators conclude they aren’t pertinent to their own student populations.

The information is out there that could make a difference in student achievement. But it is buried among far too much data emerging from fifty states, thousands of school districts, tens of thousands of individual schools, myriad researchers, and the federal government. The useful information isn’t being intentionally hidden; it is obscured by bureaucratic silos, privacy concerns, and a lack of transparency.

That needs to change.

For 35 years, I have used technology to try to improve student achievement. My first company, The Princeton Review, shook up the testing world. My second, 2U, brought high-quality online teaching to higher education. Now, with Noodle Markets, I hope to bring some transparency to K-12 product selection, spending, and achievement. What TripAdvisor is to the travel world—taking you from discovery through booking—we hope Noodle Markets will be for K-12 purchasing.

We just released the first of what will be a series of monthly reports. Each report analyzes which educational products school districts choose, what they spend, the demographic details of their student populations, and how their students achieve. We started with K-3 math instructional resources; the results can be seen on our own site and on EdWeek.

Each report will help educators with three essential questions: (1) What products are popular with school districts like mine, and which are waxing or waning? (2) What is the relative pricing among similar products across districts? (3) How do product decisions correlate with student performance?

The challenges of analyzing purchasing decisions against student achievement are many. First, we recognize that correlation is not causation. Researchers understandably worry about whether the choice of a curricular product actually leads to better (or worse) student achievement, or whether an apparent effect is just statistical noise.

Second, it is tough to control for different socio-economic characteristics. While we evaluate the impact of curricular materials across three tiers of student income, the same materials may work quite differently with, say, urban versus rural students.

But as we get better at sussing out these subtleties, our goal is to identify real differences in the efficacy of specific instructional resources.

Unfortunately, there is more bad news: while the data exists to extend this research across many more subjects and instructional methodologies, it is excruciatingly difficult to parse without the cooperation of various stakeholders. Some folks—the occasional researcher, the sponsoring university, the district bureaucrat—are reluctant to share their data. As a result, there’s a lot of data we still can’t include in our analyses. For instance, we don’t know how curricular or supplemental programs are actually used in individual classrooms, or whether professional development accompanied the curriculum.

Pulling together richer, more detailed longitudinal data, and persuading more stakeholders to cooperate, will not happen quickly or easily. The academic community’s inherent suspicion that sophisticated research will be misapplied makes the task formidable. We share its concerns about privacy and about summative evaluations of teachers and schools, and we are confident that mutually acceptable workarounds are achievable.

But we need to make faster progress; student achievement in math and reading has seen modest and mixed changes since 2000. Reading scores for younger children have improved, but for high school students the results have actually eroded. Math scores have crept up, but we still rank far below many other countries.

Improving student achievement is always challenging. Giving teachers and administrators the best available information about what works is an important tool. We are about to move from a series of small, controlled experiments—the mindset of medical research—to the world of big data, connecting K-12 schools and districts to the power of their own purchasing decisions and outcomes. But we need to do this faster. And to that end, we need the cooperation of all stakeholders.