If Tests Aren't Working for Teachers and Families, They're Not Working

The major backlash against student testing has arisen because teachers and families are getting little value out of it. If a test is to be worthwhile, it needs to produce information that's useful in classrooms and at kitchen tables.

At the Data Quality Campaign (DQC), we've been talking for years about the great power data have to paint a full picture of a student's learning. But when most people encounter the term education data, they still hear just one thing: test scores. We at DQC know that data are much more than test scores. Education data also include information from multiple sources that is used to support student learning and manage schools (for example, student and teacher attendance, services students receive, student academic development and growth, teacher preparation information, postsecondary success, remediation rates, and more). When the system is working effectively, it gets information back to teachers and families in a timely manner and in formats that are actually meaningful and useful to them. When it's not, we're collecting data only for accountability.

Accountability is one important goal of education data use, but accountability alone does not tap the full power of data. Only when we create a culture that supports the use of test results (from both "high-stakes" summative exams and formative assessments) to explore important questions will we be able to get the results our kids deserve and our country needs. No one puts a dipstick into their engine, finds out that they are low on oil, blames the car mechanic (or themselves) for not filling it, does nothing about it, and then is surprised when the oil is even lower the next time they check. Yet that is how the vast majority of schools are using test results.

Yes, tests are an important piece of the data puzzle, but we need to have a conversation about how those tests are used. Many teachers and parents feel besieged by tests -- they feel there are too many, that the results are used to blame and shame them, and that the tests limit the amount of learning that can happen in the classroom. These concerns need to be heard and addressed. But the major backlash against student testing has arisen because teachers and families are getting little value out of it. If a test is to be worthwhile, it needs to produce information that's useful in classrooms and at kitchen tables.

When testing -- both summative and formative -- is working, it produces timely, useful information that educators can use to adjust their instruction and that administrators can use to adjust curricula and the use of time, training, and talent to improve student achievement. Good tests can demonstrate what's working and what's not for teachers and kids. At a DQC event earlier this month, DC Public Schools teacher Jennifer George explained that she has always relied on test data to improve her instruction, using them to pinpoint exactly how her students are doing and to shine a light on which lessons were and were not successful. She was able to use interim assessments and checks that incorporated observation, attendance, exit tests, and more to improve student outcomes.

States and districts need to help teachers by providing time, opportunities for collaboration, and professional development around multiple types of assessments and their uses so that teachers are not left to do all of this work by themselves. A great example of this leadership is Georgia, where the state worked with teachers to determine what information they need at their fingertips and the best format for delivering it. Teachers so appreciated clear, easy access to assessment information in the state dashboard that districts asked the state to upload their formative and interim assessment information to be viewed alongside what the state was already providing. This illustrates how assessments, presented in context, generate demand from educators.

We must move away from the outdated, ineffective culture of testing in which teachers instruct their students on a concept for a few weeks, test them, and then move on no matter what the scores are. Did your child get a C- in fractions? Oh well, maybe she'll do better in decimals. This is not the best model for deep learning. Jennifer George is never surprised by how her students do on a summative assessment, and that's because she uses data effectively -- from tests and other sources -- to make adjustments in real time. Without this important function, we're just testing for testing's sake, to generate scores for accountability.

What a waste. Instead of supporting effective data use, we've created incentive systems around tests that leave people panicked and stressed and that push them to attack the very tools that could be most useful in helping their kids. This is not the conversation we should be having. We should be using testing data to answer important questions: Which of your students aren't getting the concept, and why? What needs to change to get better results? What help does my child need to address her identified academic gaps? That's the conversation we should be having. Data are no replacement for teacher and parent judgment; rather, when deployed effectively, they are important tools that inform that judgment by illuminating the current situation faster and more accurately than the naked eye can.

This post originally appeared on the Hunt Institute blog, The Intersection.
