This is a guest post by Dr. Helen Janc Malone, Director of Institutional Advancement and National Director of the Education Policy Fellowship Program at the Institute for Educational Leadership.
The new Every Student Succeeds Act (ESSA) is shifting state accountability systems away from test-driven academic performance and toward a balance between academic and non-academic factors. While ESSA opens new conversations about the types of assessments and their applications to inform teaching and learning, it is vital that the dialogue start with a basic question: how much are our public school students being tested?
Data show that since the implementation of the No Child Left Behind Act (NCLB), students have been taking more tests, more often, and for a wider variety of reasons. Dr. Raymond Hart, Director of Research for the Council of the Great City Schools (CGCS), recently discussed his latest report, which sought to capture the amount of time students spend in school taking tests.
According to Dr. Hart's report, Student Testing in America's Great City Schools: An Inventory and Preliminary Analysis, students in the major urban school districts spent an average of 4.22 days, or 2.34 percent of school time in 2014-15, taking tests. That year, 401 unique tests were administered across subjects to students in the 66 Great City School systems, and the average student took about eight standardized tests per year, some to fulfill NCLB requirements and some to meet state and local mandates. Although the opt-out movement has gained some traction, fewer than 1 percent of students in CGCS systems boycotted standardized tests.
The higher the grade level, the more tests students encounter in CGCS systems. High schoolers carry a particularly heavy test burden, including federally and state-mandated compliance tests, formative and mastery assessments, end-of-course exams in subject areas, and postsecondary transition tests.
Multiple layers of regulation guide schools' testing schedules. Although federal regulations set requirements across the board, states have also imposed additional formative and/or benchmark assessments, blurring the lines between federal and state mandates.
The study also noted redundancy among the exams, a lack of alignment among some tests, and reporting lags, all of which reduced the utility of test data for informing instructional practice. Although Dr. Hart acknowledges the importance of tracking student progress to inform policy, he also underscores the need for meaningful measurements that help teachers track students' content knowledge and identify areas of growth and improvement.
Meaningful student comparisons are difficult to make. As Dr. Hart points out, between 2011 and 2013, states changed their NCLB standardized tests at least once in half of the assessed school districts. In the 2014-15 school year, 65 percent of the districts changed their assessments again, most opting for either PARCC or SBAC. Given the diversity of state assessments students take over time and across states, comparing students' progress becomes a challenging proposition.
While the study assesses how much time students spent taking tests in 66 large districts, it did not calculate the amount of instructional time devoted to test preparation or the variation within and across schools, which might offer a deeper picture of the effects testing has had on school culture and classroom practice.
ESSA is changing the accountability parameters, shifting decision-making power back to the states and adding non-academic measures to the mix. The new law offers an opportunity to reimagine assessments, but will they remain standardized or move toward a more individual-centered approach? Will the new assessments recognize the role of non-academic factors in student learning? How will your state respond to the new accountability framework in 2017?
Dr. Raymond C. Hart is the Director of Research for the Council of the Great City Schools and has more than 20 years of experience in research and evaluation. He presented the findings of the student testing study at the monthly American Educational Research Association/Institute for Educational Leadership (AERA/IEL) session in January 2016.
The AERA/IEL Luncheon series, launched in the mid-1980s, is a monthly lecture series featuring renowned scholars and practitioners focused on salient issues in education policy. To get on the mailing list for the Washington, DC-based series, please email email@example.com.