
Thompson: Value-Added True Believers Should Listen to Principals

Sadly, a new Gates-funded study, "Principal Use of Teacher Effectiveness Measures for Talent Management Decisions," provides an ideal metaphor for what is wrong with value-added evaluations, in particular, and corporate school reform, in general.

I do not question the quality of the work of its authors (Ellen Goldring, Christine M. Neumerski, Mollie Rubin, Marisa Cannata, Timothy Drake, Jason A. Grissom, and Patrick Schuermann) or its findings.

The problem is that the report seems to assume that principals who disagree with the Gates Foundation are simply incorrect and need retraining; it never considers the possibility that value-added models are inappropriate for teacher evaluations.

Goldring et al. found that 84% of the principals they interviewed believed teacher-observation data to be valid "to a large extent" for assessing teacher quality, but only 56% viewed student achievement or growth data as equally valid. The study acknowledged that value-added is perceived to have "many shortcomings." Principals doubt whether the data will hold up in official grievance processes, and they perceive that teachers have little trust in teacher-effectiveness data.

Education Week's Denisa Superville reports that value-added expert Douglas Harris echoes the findings: "the results confirmed feedback he had received from other educators about the challenges in using teacher-evaluation systems."

Rather than ask whether principals know something about the real-world use of statistical models that Gates doesn't understand, Goldring et al. recommend that systems "clarify their expectations for how principals should use data and what data sources should be used for specific human-resources decisions." In other words, they apparently believe that more training in value-added estimates will convince educators that the theorists were correct all along.

Goldring et al. did listen on one issue, however. Because principals cited a lack of time to use the new data for evaluations, the study recommended that principals receive more support in the dismissal process.

Principals also reported using test-score results to transfer low-scoring teachers from tested grades into the all-important but untested first and second grades. That inevitable and oft-repeated misuse of test scores raises the question of why true believers in bubble-in accountability won't acknowledge that practitioners might know something about the real world that escaped the policy wonks who pushed value-added. Why can they not agree to shift resources away from experimental policies that principals and teachers do not trust and invest instead in high-quality early education? – JT(drjohnthompson)

Comments


Value Added Assessment is junk science. It does not measure the growth of individual students. It measures different groups of students year to year, and in doing that, the standardized measures lack validity from their inception. All standardized tests are wide approximations.

Value-added is an insidious means of branding schools as failing so they can be turned over to private management organizations for profit. It is time we had some honesty about what is really going on. Wake up.

This is Rich from Philly speaking. I am a reading diagnostician with years and years of experience diagnosing reading ability, then actually teaching the kids I test. I cannot believe the intellectual dishonesty going on today.

Rich,
Maybe I'm naïve. I keep getting more and more shocked about the intellectual dishonesty of today.
I keep hoping, but it's hard to believe these reformers sold themselves such a dishonest bill of goods.
Thanks

Thanks for this post, John. It reminds me of studies and debates regarding the impact of master's degrees. When reformers, policy makers and policy wonks say that a master's degree doesn't make a teacher better, I wonder if that means maybe we don't know how to measure the effect (something other than standardized test scores perhaps?). Or maybe we're doing something wrong in terms of what we ask of teachers if increased education doesn't positively affect practice.

The comments to this entry are closed.

Disclaimer: The opinions expressed in This Week In Education are strictly those of the author and do not reflect the opinions or endorsement of Scholastic, Inc.