Florida's Department of Education released the first results of a sweeping new teacher evaluation system Wednesday morning that sought to provide more accurate data on teacher effectiveness and increase accountability.
But state officials retracted the information just hours later, citing inaccuracies: thousands of teachers had been double-counted because of duplicate job codes, the Tampa Bay Times reports. The data rated 95 percent of the state's teachers as "effective," according to WTSP, but showed results for 23,970 teachers, when the state actually employs fewer than 15,000 educators.
Florida Education Department spokesperson Cynthia Sucher told the Times that the error is "distressing" to the agency. But critics of the new evaluations were not surprised -- among them the National Center for Fair & Open Testing, known as FairTest, a longtime opponent of high-stakes testing.
"Garbage in, garbage out," FairTest Public Education Director Bob Schaeffer told the Times. "The teacher evaluation system is ideologically driven and not ready for prime time . . . When you rush to put a shoddy system in place, you get ludicrous results."
Kathy Hebda, Florida's deputy chancellor for educator quality, told the Florida Times-Union that districts must submit all evaluation data by the end of the month, and that a more complete teacher report will be available in January.
Florida's redesigned teacher evaluation system is part of the Obama Administration's education reform efforts through Race to the Top, a $4.3 billion competitive grant program that rewards states for designing new ways to assess teachers and for connecting student performance to teacher pay.
Previously, teachers were evaluated only by their principals and graded either "unsatisfactory" or "satisfactory." Under the new system, teachers are assessed in three areas: professional development, a principal's evaluation and student test scores -- which account for at least half of a teacher's grade.
The third area is the most contentious. Critics argue that heavy reliance on student test scores is unfair, as the scores don't take into account external factors that affect performance -- like poverty. Advocates say value-added models provide the most accurate objective measure of a teacher's effectiveness.
Value-added analysis gauges a teacher's effectiveness in improving student performance on standardized tests. Based on students' past test scores, a model forecasts how each student should perform; the forecast is compared to the student's actual scores, and the difference is considered the "value added" -- or subtracted -- by the teacher.
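The forecast-and-compare logic can be sketched in a few lines of Python. This is an illustrative toy, not Florida's actual model: the sample scores, teacher labels and the simple one-variable regression are all hypothetical assumptions.

```python
# Toy value-added sketch (hypothetical data; not the state's real formula).
from statistics import mean

# (prior_score, actual_score, teacher) for a handful of students
records = [
    (60, 68, "A"), (70, 75, "A"), (80, 83, "A"),
    (60, 58, "B"), (70, 66, "B"), (80, 79, "B"),
]

# Step 1: forecast each student's score from past scores with a
# least-squares line fit over all students.
xs = [r[0] for r in records]
ys = [r[1] for r in records]
x_bar, y_bar = mean(xs), mean(ys)
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
intercept = y_bar - slope * x_bar

def forecast(prior):
    return intercept + slope * prior

# Step 2: "value added" per student is actual minus forecast; a
# teacher's score is the average over that teacher's students.
value_added = {}
for prior, actual, teacher in records:
    value_added.setdefault(teacher, []).append(actual - forecast(prior))

for teacher, diffs in sorted(value_added.items()):
    print(teacher, round(mean(diffs), 1))  # A 3.8, B -3.8
```

In this toy example, teacher A's students beat their forecasts by about 3.8 points on average, while teacher B's fell short by the same margin -- the kind of difference a value-added model attributes to the teacher.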
And the stakes are high. Top-performing teachers could receive permanent salary bumps, while the lowest-performing ones could get the boot for failing to improve over two years.
The state's Education Department already had a misstep in July, when it miscalculated grades for hundreds of schools across Florida, fueling public distrust in the state's accountability system.
The flub affected 40 of the 60 districts, incorrectly grading 213 schools -- 8 percent -- by omitting one component of a complex, newly revised grading formula.