By Dina Hasiotis
We know that by consistently helping teachers reach their full potential in the classroom, we can put millions of students on the path to success. And as we shared previously, conventional wisdom holds that we all already know how to help teachers improve. But do we really?
Over the last two years, we’ve looked at the existing research and policy guidance on teacher development with a critical eye, to help us better understand what we already know. We’ve noticed that the common threads running through many previous studies, reports, and commentaries on teacher development—once you dig into the footnotes and peel back the compelling prose—are logical, practical ideas grounded in a selective review of the evidence base. There seems to be a collective inclination among researchers and public policy leaders alike to look for the bright spots in all this research. And these bright spots are often based on the perceived usefulness of certain types of professional development (PD). What about actual usefulness and actual improvement?
While thousands of studies have been carried out on teacher development, today, most major discussions on the topic refer to a handful completed in the early 2000s. At that time, most teachers participated in short-lived “sit and get” workshops that tended to lack relevant content—the “drive-by PD” almost every teacher can attest to.
As a result, a camp of researchers tested alternatives and produced a body of work that generally supported a few recommendations for what effective professional development should provide: more time on task and longer, ongoing sessions; more job-embedded opportunities rooted in content; and activities and lessons stimulating enough for adult learners (e.g., Garet et al., 2001; Desimone et al., 2002; Carpenter et al., 1989; Cohen et al., 2000; Supovitz et al., 2000). While a few of those studies did rely on observations of teachers’ actual practice—and, more rarely, on changes in student learning—the majority of the research conducted at that time was based largely on teachers’ self-evaluations and satisfaction.
In 2007, another group of researchers set out to assess the statistical rigor of existing studies on teacher development (Yoon et al., 2007). The goal was to highlight higher quality research that relied on strong methodology and meaningful sample sizes. But after reviewing 1,300 studies, they found only nine that met rigorous research design standards. And among those nine, the researchers found positive, though not always statistically significant, improvement in student outcomes when teachers participated in professional development.
A commitment to gathering a clear understanding of what truly helps teachers led to two federally funded studies on reading and math instruction starting in 2008. Led by Mike Garet, these studies used an experimental design—the gold standard in research methods—to see what happens when some teachers got the “best” professional development in terms of time, content, coaching, and more, while others didn’t. While they looked at outcomes (including teacher classroom observation scores, teacher content knowledge, and student achievement) immediately following the delivery of development activities, their rigorous methods also included tracking student achievement and teacher knowledge over time to determine if professional development had lasting effects.
The results were surprising. Garet reported that teachers who received the best of the best were no more likely to see large, lasting improvements in their practice, knowledge, or student learning. In fact, many did not use the techniques they’d been trained to employ—even when researchers were in the room to observe them. But both of these studies are often overlooked in public discussions on teacher improvement, despite their design rigor.
And what about those studies you may be familiar with that do, in fact, track change in teacher practice and student learning (e.g., Biancarosa et al., 2010; Allen et al., 2011; Saunders et al., 2009)? We have relied on those in the past to guide our work as well. But when we begin to ask hard questions of those studies—how large was the sample of teachers, across how many schools or subjects, and what does that change in student learning really mean for kids?—we end up with more questions than answers.
This is why we launched our own study. We’ve been working over the past two years in three large school districts and one charter management organization to understand what may actually help teachers at scale. After all, figuring out how to help more teachers improve is a challenge that’s close to our hearts as an organization. It’s at the center of our work—and we don’t think we have any better answers than anyone else. We wanted answers that would help us—and the districts, school leaders, and teachers we work with—improve our efforts to positively affect student outcomes.
On August 4th, we will share what we’ve learned—and what we haven’t—and where we think we should all go from here when it comes to helping teachers to do their very best work.
Dina Hasiotis is Partner at TNTP.