
Bruno: In Defense of Multiple-Choice Tests

With people using a confusing pineapple-related question on a standardized state test to prove the supposed evils of standardized testing, it's probably worth mentioning that multiple-choice tests are not, in fact, inherently flawed. Yes, multiple-choice tests have some limitations in what they can assess. And yes, poorly-designed questions can make them deeply problematic.

By the same token, though, when they are well designed, multiple-choice tests can be very useful for teachers and other educators. Obviously, they're quick and efficient to grade and analyze, which can be important given how little time we have to work with and for our students. Granted, they're often easiest to use to test factual recall, but factual recall is badly underrated by many educators, who sometimes don't appreciate how necessary recall is for "higher level" thinking abilities.

And maybe most importantly, taking multiple-choice tests can help you learn. This isn't just because self-testing is a good way to learn in general, although that's part of it. It's also that in some respects multiple-choice tests are better for learning than open-ended recall tests.  As Wray Herbert explains, new research finds that, compared to recall tests, "the learning fostered by the multiple-choice tests was broader, including even material that had not been tested". This is apparently because "the wrong answers were plausible enough that the students had to think about why the correct answer was correct", something students aren't forced to do on an open-ended recall test.

This is one of the reasons I don't mind my students having to spend a couple of hours during the year taking a multiple-choice test for the state. It not only has the potential to be informative for me, it might even be good for them. - PB (@MrPABruno)

Comments


A couple of hours during the year when the tests are used for positive purposes -- not a problem. Many hours and an entire school year devoted entirely to nothing but test prep -- very bad.

Read Linda Perlstein's "Tested," about a low-income school in Maryland under crushing pressure to keep its test scores up, and report back, @Paul.

The pineapple flap just put this in the spotlight. In my view the story is fine -- the issue is putting questions about a deliberately ambiguous story that don't HAVE a clear right answer into a multiple-choice format, then basing students', teachers' and schools' futures on the outcomes of such tests. Paying multigazillions to the corporation that creates the test (which of course is one of the real purposes of all this) is a huge issue too.

@Caroline - As a teacher of low-income students in a school under pressure to keep test scores up, I don't think I need to be patronized on that particular point. Obviously, the question should be thrown out and we should be careful about how we use the test results generally, esp. on reading tests. My point is just that too many commentators conflate legitimate criticism of the uses of the tests with aesthetic objections about "bubble-in" tests being inherently lousy.

Paul,

The "self-test" literature is *NOT* about multiple-choice tests but the opposite, writing in one's own words as much about material as one can remember. The item highlighted a few months ago in the popular press was quite unstructured.

@Paul, I'm not commenting for your benefit but for the benefit of your readers.

When you write about multiple-choice tests without mentioning the actual reason for the widespread and increasing objections to high-stakes testing, you mislead and misinform your readers. The dissent is not due to the misuse of the pineapple story -- that's a symptom and a symbol.

To restate (again, for the benefit of readers), the reason for the increasing dissent is that the high stakes attached to standardized tests do damage to children and education (and teachers too).

@Sherman - It's definitely been a while since I looked at it closely, but I always interpreted the literature as transferring in a pretty straightforward way to multiple-choice contexts, at least insofar as "self-test" basically means "practice remembering". Even if that wasn't previously obvious from the literature, isn't that what's suggested by the new research Herbert was writing about? I realize there are some inferential leaps I'm making, but I didn't think of them as especially large.

@Caroline - Since this post was about the (wrong, in my view) idea that the format of multiple choice tests makes them inherently severely flawed, it is true that I did not digress into other, even related, topics. It will probably continue to be the case that I will often write about one thing without discussing every other conceivable related thing.

Paul ... Please comment on the amount of test prep in your building. Too much? Too little? Just right?

@Art - I'm reluctant to generalize about my entire (medium-large) school, especially since my experience is with the relatively de-emphasized science test, rather than with math or ELA. My science experience is that it's about right: the students will spend about 90 minutes of advisory class time reviewing general test-taking strategies using released CST questions (for all subjects), and we will spend perhaps another 60-90 minutes in science class focusing on released science questions in particular. Since the questions cover my content anyway, and because the scores are more meaningful to me if the students are familiar with the format of the test, I think this is a worthwhile use of time. In any case, I doubt I could raise their scores more with additional "test prep" than by just spending that time on my science content.

@Sherman - Here's an example of what I mean about testing effects generalizing to multiple-choice contexts:

http://pps.sagepub.com/content/1/3/181

I realize this isn't technically "self-testing", but I think it's close enough for the purposes of this discussion.

It's completely true. The problem is the tests, not the method. Many students feel more comfortable taking multiple-choice tests, thanks to an ill-founded notion that they are easy. And any test on which students can feel comfortable is better than the alternative. The problem is that, too often, I've seen tests in which a few answers are technically correct, or in which one answer is very obviously correct. Efficiency isn't bad; it's when laziness and efficiency go hand in hand that there's an issue.


Disclaimer: The opinions expressed in This Week In Education are strictly those of the author and do not reflect the opinions or endorsement of Scholastic, Inc.