Bruno: Good Formative Assessment Is Hard
The most informative piece I've seen on the Seattle teacher boycott of locally-imposed mid-year testing was written by Sherman Dorn over the weekend, and it's as much about problems with assessment generally as it is about Seattle's MAP exams.
As Dorn points out, while the MAP isn't ideally designed to offer useful formative assessment data for teachers, the fact is that there doesn't appear to be much demand for that sort of data to begin with.
Ideally, teachers would give quick, frequent assessments to students and then use the results to modify their instructional plans almost immediately. In practice, this doesn't happen with or without the MAP. Dorn attributes this lack of meaningful formative assessment primarily to administrator demand.
There's something to that, but I think it's helpful to recognize a real lack of demand at the teacher level as well. Teacher attitudes toward formative assessments are important both in their own right and for understanding why administrators want these tests to begin with.
I'll elaborate on my thinking below the fold.
Part of the problem is definitely that teachers are uncomfortable communicating even low-stakes assessment results to their administrators. Most districts give MAP-like "benchmark" or "interim" assessments in the core subjects. I've never seen those results shared with administrators without some degree of defensiveness and at times I've even seen the results gamed.
Frankly, though, "stakes" are not the only thing that makes formative assessment challenging for teachers. Dorn alludes to the "five-minute Friday quiz," and that's an ideal I've aspired to myself - mostly at the encouragement of administrators - but one I implement only infrequently.
My inconsistent use of formative assessments comes down to at least two factors. First, consistently finding the class time is difficult. I've been told that my timing in class is unusually precise, and I can reliably plan half-period quizzes or whole-period tests, but my planning is not so precise that five spare minutes will materialize in a class period just because I want them.
Second, while I could probably resolve to find those five-minute blocks more often, the payoff always seems so doubtful that I'm disinclined to bother. The results of such formal-but-formative assessments are typically ambiguous and do not lend themselves obviously to decisions about how to use instructional time going forward, so their marginal value is usually small.
The result is that I mostly muddle through with occasional larger-scale assessment data from quizzes and tests supplemented with murky-but-plentiful informal assessment information based on my day-to-day interaction with students in class. This isn't ideal, but it's functional and consistent with the quantity of time I'm prepared to devote to data analysis and curriculum modification on my nights and weekends.
While that's usually good enough for me - and, I think, for most teachers - it's probably not very comforting for administrators, who have meaningful access to few of those data and would prefer not to be completely surprised when state test results come in over the summer.
It's not surprising, then, that administrators go for standardized MAP-like tests. Their thirst for data could conceivably be satisfied by completely autonomous teachers, but it's not clear to me that teachers as a group are interested in collecting and sharing those data to begin with.
I don't blame teachers for this. Formative assessment is hard to do well, and it's easy to overestimate its importance. We also shouldn't overstate the distinction between "formative" and "summative" assessment when the latter can be used "formatively" to plan instruction for future students.
By the same token, though, the fact that administrators want to see formative assessment data - or even summative data collected mid-year - is a natural consequence of the fact that teachers by and large do not provide such data themselves. - PB (@MrPABruno)