The Shanker Institute's Matt DiCarlo, in The Debate and Evidence on the Impact of NCLB, offers a typically nuanced, precise, and (I'd say) overly cautious summary of what quantitative researchers may have proved about the meager positive effects of NCLB, even as he overlooks the extreme "mis-naepery" of non-educators who support test-driven accountability.
DiCarlo correctly asserts that it is invalid to "use simple, unadjusted NAEP changes to prove or disprove any policy argument." But he ignores a more meaningful and relevant reality: it is possible to use NAEP scores to disprove disingenuous claims that NAEP shows NCLB worked.
DiCarlo concludes that "(test-based) school accountability in general" (emphasis in the original) "tends to have moderate positive estimated effects on short-term testing outcomes in math, and typically smaller (and sometimes nil) effects in reading" (emphasis mine).
The quantitative researcher then concludes, "There is scarce evidence that test-based accountability policies have a negative impact on short-term student testing outcomes." Such a narrowly worded statement is not false.
But DiCarlo then states that "the vast majority of evaluations of test-based accountability policies suffer from an unavoidable but nonetheless important limitation: It is very difficult to isolate, and there is mixed evidence regarding, the policies and practices that led to the outcomes." That conclusion ignores the vast body of qualitative evidence gathered by journalists and scholars who do not limit themselves to regression studies.