
Thompson: "Big (Dumb) Data"

Those of us who oppose the misuse of data to punish educators and students must always remember that computers are not going away, and that "Big Data" has great potential for improving schools and our lives in unanticipated ways. 

Big Data: A Revolution That Will Transform How We Live, Work, and Think, by Viktor Mayer-Schonberger and Kenneth Cukier, gives a perceptive appraisal of the benefits and dangers of data-driven decision-making.

While not specifically mentioning value-added teacher evaluations, Mayer-Schonberger and Cukier seem to agree that punishing an individual based on an algorithm would be misguided. On C-SPAN, Mayer-Schonberger asserts, "government must never hold an individual responsible for what they are predicted to do."

Mayer-Schonberger and Cukier argue that the "datafication" of our world will make data messier and ubiquitous. Big Data seeks correlations rather than causation. Data-driven analysis, or "letting data speak," will be "good enough" to be transformational.

Mayer-Schonberger and Cukier recall the 2008 Wired magazine proclamation by Chris Anderson that "the data deluge makes the scientific method obsolete." While rejecting such overreach, they agree that Big Data will create analyses where "more trumps better." 

Mayer-Schonberger and Cukier then explore the dangers of misusing this powerful new way of viewing the world. In their book, and on C-SPAN, they exhibit a cautiousness that has been missing from data-driven accountability in education. By failing to properly appreciate the difference between correlation and causation, school reformers would institutionalize what Mayer-Schonberger and Cukier call "punishment without proof."

Mayer-Schonberger, who holds two law degrees, explains that data correlations are "singularly unfit to decide who to punish and who to hold accountable." Such a policy would be a "dictatorship of data," and it would "cut off common sense."

Mayer-Schonberger and Cukier also offer a check and balance for some of the worst aspects of high-stakes data-driven policies. They propose an umpire they call an "algorithmist."

Proponents of value-added evaluations, we must remember, trust algorithms to predict how much an individual teacher should increase student performance.  A teacher who fails to meet his test score growth target would essentially be indicted as ineffective.  He would have to hope that observations, guided by rubrics created by the system that adopted the value-added model, would keep such estimates from damaging or destroying his career. Moreover, school "reformers" have imposed value-added without ensuring checks and balances to protect educators. 
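The mechanics described above can be sketched in miniature. Real value-added models are far more elaborate, but this toy version (all names and numbers hypothetical, not any district's actual model) shows the core move: predict each student's end-of-year score from prior performance, then treat the average residual across a teacher's classroom as that teacher's "effect."

```python
# Toy value-added sketch. The prediction line (slope, intercept) is a
# hypothetical fit; real models regress on many covariates.

def predict_score(prior_score, slope=0.9, intercept=8.0):
    """Predicted end-of-year score from the prior-year score alone."""
    return slope * prior_score + intercept

def value_added(students):
    """Mean residual (actual - predicted) across a teacher's students.

    A positive number is read as the teacher 'beating' the projection;
    a negative number as 'missing' it.
    """
    residuals = [actual - predict_score(prior) for prior, actual in students]
    return sum(residuals) / len(residuals)

# Two hypothetical classrooms: (prior_score, actual_score) pairs.
class_a = [(60, 64), (70, 72), (80, 81)]
class_b = [(60, 60), (70, 68), (80, 78)]

print(round(value_added(class_a), 2))  # → 1.33 (above projection)
print(round(value_added(class_b), 2))  # → -2.33 (below projection)
```

Note what the sketch leaves out, which is precisely Thompson's point: nothing in the residual distinguishes the teacher's contribution from poverty, peer effects, or school and district policies, yet the high-stakes reading attributes the whole residual to the teacher.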

Before we can truly benefit from Big Data, Mayer-Schonberger and Cukier say, we need a system of data user accountability. They call for impartial auditors, or "algorithmists." Internal algorithmists would question the methodology of statistical models, much as ombudsmen do at newspapers. External algorithmists would review the accuracy and validity of data-driven predictions. People who believe they have been harmed by those predictions should be able to look to algorithmists to appeal those decisions.

I do not doubt that these in-house algorithmists would conclude that the statistical methodology of value-added advocates is sound. I expect that an objective auditor would be dumbfounded, however, by the way that methodology is used to make policy decisions. For instance, the craftsmanship of statisticians who might predict with 80% accuracy a teacher's effectiveness in raising test scores would probably be lauded. But the idea that such accuracy justifies value-added evaluations would raise eyebrows.

I find it hard to believe that an algorithmist would look at the relatively primitive ways that value-added models try to account for poverty and peer effects, and conclude that those models would not damage inner city schools.  Even if value-added is only one of "multiple measures" of high-stakes decision-making, it is hard to believe that statisticians will ever devise a model that does not disadvantage teachers in schools where it is harder to raise test scores.

Advocates of value-added evaluations should have to prove to these auditors that a failure to meet test score projections is caused by the individual teacher, as opposed to the school's or the district's policies. I am confident that algorithmists would determine that those models disproportionately impose collective punishment on teachers who have committed to the toughest schools. -JT (@drjohnthompson)






Disclaimer: The opinions expressed in This Week In Education are strictly those of the author and do not reflect the opinions or endorsement of Scholastic, Inc.