
Bruno: Why Teacher Observation Guidelines Are So Complicated

EdWeek's Stephen Sawchuk has easy-to-digest coverage of a new white paper from TNTP arguing that the frameworks and rubrics used to evaluate teachers during classroom observations are too complicated.

Unfortunately, and somewhat characteristically for the group, TNTP is operating on naive assumptions about why teacher evaluation works the way it does.

For example, to hear TNTP tell it, teacher evaluations are "often inflated" because observation guidelines are too vague, sprawling, and complex.

With so many poorly defined criteria to judge, evaluators are unable to focus on the things that matter and thus unable to discriminate effectively between better and worse teachers.

This is a superficially plausible story about high teacher observation ratings, but it's not well-supported by the evidence, nor does it acknowledge other possible explanations.

The TNTP report briefly comes close to identifying one problem with its story: namely, that even when reformers have made efforts to revamp observation and evaluation processes and put pressure on administrators to evaluate more harshly, evaluations as a whole remain very positive (i.e., "inflated").

This suggests that something is putting considerable upward pressure on teacher evaluations. I suspect it's that administrators are loath to pick fights with, or disappoint, teachers they see little upside in dismissing, but regardless of the details TNTP doesn't seem to realize there must be more to the "inflated evaluations" story.

At the same time, TNTP is correct in its assessment that there is considerable ambiguity in teacher observation criteria, but here too it adopts an overly simplistic model of the problem.

The authors suggest that what evaluators look for during an observation should be more "specific and narrow" so that they can make more rapid, decisive judgments about effectiveness and provide more focused feedback to teachers.

Their proposed examples of "specific and narrow" indicators, however, demonstrate that this is easier said than done. Looking to see whether "students provide evidence to support their thinking" in the classroom - as TNTP suggests - doesn't actually avoid the hard problem of determining what counts as "providing evidence" in different situations: e.g., while completing a mathematics problem set as opposed to writing a persuasive essay.

So current observation rubrics may be vague and unwieldy because classroom contexts vary considerably and because we often lack a clear professional consensus about what "good teaching" looks like anyway.

These alternative stories are more complicated than the ones advanced by TNTP, but they also have more explanatory power. And to the extent that a story better helps us understand the causes of the status quo, it's likely to be more useful for identifying next steps forward. - PB (@MrPABruno)

