
Bruno: Standards, Instructional Programs, & Field Testing

On July 12 Diane Ravitch expressed concern that the Common Core standards "are being rolled out in 45 states without a field trial anywhere." That sounded like a reasonable enough objection to me, but Kathleen Porter-Magee thinks the idea "betrays a fundamental misunderstanding" because standards "aren’t an instructional program or curriculum...They are nothing more or less than a simple list of knowledge and skills that students should learn at particular grade levels. You can’t 'field test' what a state should expect its students should learn."


Even putting aside the implausible suggestion that Ravitch doesn't understand the distinction between standards and instructional programs, that distinction doesn't really tell us much about whether new standards should be field tested. Maybe standards shouldn't be field tested in the same way as instructional programs, but I can think of two reasons why we might want to field test them in some way.

First, while it's true that in theory standards and instructional programs aren't the same thing, the distinction can get pretty blurry in practice. For example, this past week saw the release of "publishers' criteria" from the lead writers of the Common Core math standards indicating how long textbooks should be and how instructional time should be allocated in math classes. And all of this follows the release of analogous criteria based on the ELA standards. The standards writers themselves do not seem to believe that the standards/curricula distinction is a clear one.

Second, it's probably not safe to assume that the new standards are going to be perfectly clear to teachers. I am not an English or math teacher, and I'm not very familiar with their respective Common Core standards, but in my experience teachers often complain that even existing standards can be vague and subject to interpretation. I'm actually a pretty big fan of California's existing science standards, but consider 8th grade standard 3.a, which requires that:

Students know the structure of an atom and know it is composed of protons, neutrons, and electrons.

What exactly does it mean to say students know "the structure of an atom"? Do they mean Thomson's "plum pudding" model? The Rutherford model? The Bohr model? Do they mean for 8th graders to be learning the more complex (but more accurate!) quantum mechanical models? Do students need to know how electrons are distributed across different orbitals, or just that they are outside of the nucleus?

Suffice it to say that despite being given a perfect score for "clarity and specificity" by Porter-Magee's own Fordham Institute, California's science standards are still in some places neither clear nor specific. In fact, Porter-Magee herself co-wrote the foreword to Fordham's critique of the upcoming common science standards, which stated that the drafts as written have "left too much to curriculum developers" by being excessively vague. We cannot assume, then, that standards are necessarily "a simple list of knowledge and skills"; that depends on how well they are written.

Certainly, one way to catch these shortcomings is by releasing drafts of the standards for public comment. Isn't it reasonable to think, however, that it might also be helpful to field test the standards to see which elements teachers find difficult to interpret and work with? If anything, Fordham's previous assessments of state and Common Core standards suggest such field tests could be quite valuable. - PB (@MrPABruno) (image source)

 

Comments


I’m surprised a standard as unclear as the atom one is part of an otherwise-tidy set of standards for science. I personally feel the more complex model should be taught alongside the others, so students have both a jumping-off point and the reality.

To some extent there's a limit to how clear you can make the standards without giving the additional context of what assessment looks like, but yeah, this one's particularly unclear.

My feeling is that the quantum mechanical models are unnecessarily complex for 8th graders to bother with. I started learning about them in high school, and it was helpful to have learned about the Bohr model, but I didn't feel like I needed a preview of the QM models before then.

Ultimately all of the models are "wrong" to various degrees and in various ways. At some point hopefully students begin to grasp that all models have weaknesses and are subject to revision as we gain knowledge. So I'm not really wedded to the idea that kids always need to get the *most* accurate model scientists have available.

I think there are two sets of confusions here:

The one pointed out already is the distinction between the standards (how high the bar is set) and the implementation of the standards.

The other is what a field test is. A field test is a trial to determine the appropriateness of a technique. The standards, as distinguished above from implementation, are not a technique. They are the expectation. The only thing that can be field tested in this scenario is how to achieve them. There is no way to know scientifically, for sure, whether the bar is too high. The point of a field test is to determine how the practical application plays out, and since standards are not about whether a certain application works, a field test is not appropriate. What is appropriate is for studies of brain function at various ages to determine whether, in ideal circumstances, students are capable of achieving particular outcomes.

In other words, one does not lower the bar based on current practice, one adjusts practice to meet a bar.

@Ari - Those are both issues I dealt with directly in the post. In practice the standards/implementation distinction is not clear-cut.

More importantly, in my view, it's not really true that standards are just a simple list of what students need to know; as the Fordham Institute has ably demonstrated, and as I tried to illustrate with an example, they are often unclear and subject to great differences in interpretation. I see no reason why field tests wouldn't help with ambiguous or vague standards. The bar, in other words, has to be clear before we can adjust practice to it in a useful way, and bars are not always clear.

Also, realistically, existing "studies of brain function at various ages" are not robust enough to indicate clearly which standards are "appropriate" for different ages, or even whether such age-appropriate cutoffs exist in most cases.


Disclaimer: The opinions expressed in This Week In Education are strictly those of the author and do not reflect the opinions or endorsement of Scholastic, Inc.