
Thompson: Asking the Wrong Questions regarding NCLB and Testing

The Shanker Institute's Matt DiCarlo, in The Debate and Evidence on the Impact of NCLB, offers a typically nuanced, precise, and (I'd say overly) cautious summary of what quantitative researchers may have proved about the meager positive effects of NCLB, while overlooking the extreme "mis-naepery" of non-educators who support test-driven accountability.

DiCarlo correctly asserts that it is invalid to "use simple, unadjusted NAEP changes to prove or disprove any policy argument." But, he ignores a more meaningful and relevant reality: it is possible to use NAEP scores to disprove disingenuous claims that NAEP shows NCLB worked.

DiCarlo concludes that "(test-based) school accountability in general" (emphasis in the original) "tends to have moderate positive estimated effects on short-term testing outcomes in math, and typically smaller (and sometimes nil) effects in reading" (emphasis mine).

The quantitative researcher then concludes, "There is scarce evidence that test-based accountability policies have a negative impact on short-term student testing outcomes." Such a narrowly worded statement is not false.

But, DiCarlo then states that "the vast majority of evaluations of test-based accountability policies suffer from an unavoidable but nonetheless important limitation: It is very difficult to isolate, and there is mixed evidence regarding, the policies and practices that led to the outcomes." That conclusion ignores the vast body of qualitative evidence by journalists and scholars who do not limit themselves to regression studies.

The Shanker Institute researcher thus dismisses the experience of countless teachers who testify to the damage done after NCLB testing and sanctions were imposed, as he fails to challenge the illogical and evidence-free claim by Eric Hanushek and Margaret Raymond that NAEP increases that preceded NCLB should be attributed to pre-NCLB "consequential accountability" (emphasis mine).

Let's not forget that Hanushek and Raymond asserted that nearly half of the states had "consequential accountability" that increased student performance. But, they admitted, "Most states did not, however, actually impose the consequences, particularly any sanctions, during the introductory phase."  The consequences "in most states" were "really potential." In other words, tests that didn't have stakes attached were supposedly high-stakes tests, and potential consequences without sanctions were supposedly "consequential."

More importantly for their methodology, Hanushek and Raymond made no effort to prove that pre-NCLB accountability measures were perceived as consequential in any given state or that they influenced administrators or classroom practices. Determining which states had measures that were actually consequential would have required a massive, multivolume study of what happened in those states. It would have required a body of journalism and scholarship comparable to that which now shows the harm done by NCLB.

I do not deny that there was consequential - or semi-consequential - accountability in some places, but Hanushek and Raymond do not even attempt to determine which systems or states merely passed meaningless benchmarks or went through the motions, and which actually imposed real accountability. Often, the real purpose of education accountability is creating the opportunity to chirp "Accountability! Accountability! Accountability!" over and over again. But, unless Hanushek and Raymond can provide evidence that the states that raised their NAEP scores actually had consequential accountability, they are engaging in sloganeering, not research.

For instance, Oklahoma City had consequential accountability in the 1980s, but it had died out by the time I entered the classroom in 1992. The horror stories that teachers and students told me about testing were credible, but until NCLB they were all in the past tense. Contrary to Hanushek and Raymond's claims, Oklahoma passed a law in 1995 that did not come close to being consequential. It sure didn't influence classroom practices.

By the way, my state's experience can't prove that none of Hanushek and Raymond's sample had consequential accountability; it's just an illustration of their failure to try to prove their assertion.

I could go on about the mis-naepery done by economists that DiCarlo ignores, such as using 2004 NAEP survey data to disprove post-2007 evidence of the harm done by the law. But, I will limit myself to two big points. NCLB did not become law until 2002, and its sanctions usually did not take effect until 2005. Moreover, many or most states kicked the can down the road, delaying as much as possible the law's punitive measures. It is intellectually dishonest to claim, without qualitative evidence, that the law's sanctions (whether they started in 2002 or 2005 or later) caused student achievement gains that preceded the beginning of those sanctions. (It's possible that fear of impending consequences, along with the additional resources that preceded the sanctions, had an effect in some places, but the burden of proof is on those who would make such a claim, and it would require more than regression studies to make such an argument.)

Secondly, we've had a deluge of outstanding books and great journalism documenting the pain and anger of students, educators, and parents that is a legacy of NCLB. I'm not surprised that economists, who seem completely uncurious about realities inside schools and who have made their names promoting test-driven reform, dismiss the judgments of other scholars and educators. But, does DiCarlo believe that we are all suffering from a mass hallucination? Otherwise, why did he ignore the evidence that does not rely solely on statistical models? -JT (@drjohnthompson)

 

 

