The Case Against Statistical Significance
No one cares about education research. Still, a few more thoughts on what researchers had to say about last week's post (Do Funders Sink Education Research, Too?) -- including the notion that maybe statistical significance itself is part of the problem.
Responses from my small and nonrandom sample ranged from those who doubt it ever really happens that way to those who think, well, of course it does. All the time. I'm inclined to think it does.
The argument that negative research doesn't get shelved in education the way it is in medical R&D is essentially that the education context is so different. The research sponsors are agencies and foundations, not for-profit companies. Most of it isn't done as randomized trials. And the fixed costs for labs and the like are much lower (grad students are cheap!).
But many of the interests and dynamics are the same in education research as in any other field -- and that's the overwhelming response I got from researchers I asked to comment on the situation. They talked about the
"file drawer effect, which makes average findings look more positive because the zero or negative findings are thrown into a file drawer and forgotten." Zero or null findings seem like just as much of a problem as out-and-out negative findings.
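The file-drawer mechanism is easy to see in a toy simulation. Below is a minimal sketch (my own illustration, not from any of the researchers quoted): we simulate many small two-group studies with the same modest true effect, then compare the average effect across all studies with the average among only the "published" ones, where publishing requires a positive, statistically significant result. The 0.2 effect size, sample size, and z-test cutoff are all assumptions chosen for illustration.

```python
import random
import statistics

random.seed(1)

def run_study(true_effect=0.2, n=30):
    """Simulate one small two-group study.

    Returns the estimated effect and whether it clears a
    crude two-sided z-test at roughly the 0.05 level.
    """
    treat = [random.gauss(true_effect, 1) for _ in range(n)]
    ctrl = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(treat) - statistics.mean(ctrl)
    se = (statistics.pvariance(treat) / n + statistics.pvariance(ctrl) / n) ** 0.5
    return diff, abs(diff / se) > 1.96

results = [run_study() for _ in range(2000)]
all_effects = [d for d, sig in results]
# The file drawer: only positive, significant findings get written up.
published = [d for d, sig in results if sig and d > 0]

print(f"mean effect, all studies:      {statistics.mean(all_effects):.2f}")
print(f"mean effect, 'published' only: {statistics.mean(published):.2f}")
```

The "published" average comes out well above the true effect, even though every individual study was honest -- the bias lives entirely in which studies make it out of the drawer.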
Possible fixes include focusing on larger studies, counting dissertations and tech reports regardless of their findings, and not relying solely on published reports, I'm told. Another idea I hadn't considered is moving away from the whole notion of statistical significance that's been drummed into us, which apparently isn't the gold standard we think it is. See attached report: The_case_against_statistical_significance_testing.pdf