Recently, Secretary of Education John King announced the Department's new regulations for a teacher preparation program accountability system. This is the Department's final rule concerning the Federal role in regulating teacher preparation under Title II of the Higher Education Act, with provisions very similar to prior versions. On cue, Randi Weingarten, President of the American Federation of Teachers, rejected the rules out of hand.
Will other professional groups, also opposed to prior versions of these regulations, break ranks and stand down from obstructing requirements that student learning be an outcome for which teacher preparation programs are held accountable? Or will they instead stand up and lead the profession, ensuring these regulations are implemented with rigor? If both major teachers' unions and the association of colleges of teacher education again line up in opposition, painting these as another instance of over-valuing high-stakes testing, they will have cemented their position as obstacles to the long-term improvement of the profession they claim to represent.
In the version Secretary King presented, state-approved reporting systems will require programs to provide data on four clusters of graduates' outcomes: employment in the profession; employer and graduate surveys; learning outcomes for the students taught by graduates; and an "other" category that includes accreditations and the like. The indicator likely to attract the most heat, by far, is student learning outcomes. In the frenzy over value-added measures (VAM) a few years ago, unions and other groups skillfully portrayed VAM as the sole metric for student learning. This was not true then and is less true now. The regulations give states tremendous flexibility to approve a range of assessments, including portfolios, pre-post tests, and growth indicators, among others. Opponents who single out VAM as the primary reason for rejection are not telling the whole story.
The devil that resides in the details will be the rigor of the assessments and procedures states use to compile program information into categories of effectiveness: low-performing, at-risk, and effective. Absent strong assessments that discriminate among teachers and the programs from which they graduated, it's almost guaranteed we will see a Lake Wobegon effect: 95 percent of programs rated as effective, even as we wring our hands daily over whether teacher preparation leaves graduates ready to enter the classroom.
Unfortunately, the Department has no oversight authority over the state-level teams responsible for approving program plans, aggregating data, and reporting. The only "stick" the feds can use is to deny programs' eligibility for TEACH grants, a source of student aid. Let's hope that professionals and stakeholder groups recognize this as an opportunity to build a system that systematically improves the impact that teachers, and the programs that prepare them, have on student learning.
I am a dean of a school of education with a program in teacher preparation, and I watch our faculty, staff, and students not only discuss rigorous performance measures but implement them during the time our students are with us. The stumbling block to achieving our next goals in using evidence to drive program design is access to data on our graduates' performance as teachers, including learning outcomes for the students they teach. We want this information, and many of my colleagues who lead schools with teacher preparation programs have similar interests. These regulations will provide a major boost toward that goal.
I wrote a similar piece a few years ago, when the prior version of the regulations was proposed and shot down by the very groups that purport to lead this profession. Although individual stakeholders, such as Deans for Impact (of which I am a member), and states such as Louisiana have taken the lead in re-thinking regulation of the profession, our field should not let this opportunity pass us by, lest we fail in our mission to serve students and fall further in the public's assessment of our credibility and value. Real leadership, on the part of the profession, would embrace this opportunity, recognize that we may all be a little uncomfortable for a while, and roll up our sleeves to sort out the thorny matters of implementation. Count me among those who want to get to work.