20 February 2015

Because I'm Bad, So Bad, You Know It

Here are a few reasons I regularly hear or see for why students score low on a state test:
  1. The kid was having a bad day.
  2. It's a bad test.
  3. The kid is bad at taking tests.
The first one is possible, but I rarely put any stock in it. Why? Because not once have I heard the opposite reason given for a good score. "Wow! That kid passed the test. He must have been having a great day." It's understandable that we get disappointed when students perform poorly---especially when we have been beside them every day and have seen all the progress they've made. That makes it all the more important to celebrate the successes we have. One score does not define a student or a teacher. But we shouldn't dismiss it based solely upon chance, either. If we're not willing to attribute the scores we like to good days, we can't toss the others to bad ones.

As for the second, I'm willing to believe it far more for a teacher-made test than for a large-scale version. Why? Most teachers have had no formal training in assessment. I've seen plenty of poorly developed items and poorly constructed tests at the classroom level---including my own. For much of my classroom career, I could not have told you whether an item was good---Did it measure what I wanted it to measure? Was it free from bias? Was the content accurate? Did the analysis of student responses support the goal of the whole assessment? Bad tests exist. But they are not the ones your state is using to measure student learning. Don't like the way the results are used? That's a whole different discussion.

The last reason is similar to the first in that it does exist...but very rarely. In my classroom career, I ran across only one student who was truly terrible with tests. She was in an AP class and could answer nearly any question I asked her in a conversation. On paper---even with the benefit of her textbook and all of her notes, labs, and assignments---she couldn't pick the right answer. She is the only student I ever discouraged from taking the AP test. I tried coaching her all year...we went over questions and talked about the thinking involved in answering them...but we just couldn't give her enough strategies to make it work. So she was the one out of thousands for whom I could truly say reason 3 fit. As for the rest, I rarely made a concerted effort at "test prep." I did work with students on metacognition---being aware of how they were making choices with questions. But the most important thing to do was focus on content and teach to the standards.


Lots of people---both in and out of the classroom---don't like large-scale testing. I'm not one of them, as unpopular as it might be to say so. But then, I've had the benefit of being involved with writing items, test builds, scoring sessions, rangefinding, and more. I've seen the entire sausage get made. However, I don't like how the scores from these tests get used---for graduation requirements, student placement, teacher evaluation, and other purposes for which the tests were never intended. But we won't disconnect those uses from testing by attacking the tests themselves as bad.

1 comment:

PamelaTrounstine said...

In general, I agree with you. But how do you explain the talking pineapple on the NY state test a few years ago? For sure, most of what I have seen that is problematic has been due to bias, inappropriate content, or vocabulary several grade levels above the one being tested, which would fall under content, maybe, or under whether the question measures what we want it to, which it won't.


But I also believe we shouldn't have ambiguous questions or questions where the right answer can be subjective. This mostly applies to reading comprehension in my experience. A typical multiple-choice question will have one clearly or laughably wrong answer, one wrong answer, one almost-right answer, and one that is correct. But what does it mean when all the adults reading the question see two perfectly good answers? I even question the style of question where you "choose the best answer" among several that are correct, depending on, say, whether the point of view came from the tortoise or the hare, but "from the rabbit's point of view" was not part of the question. I do not believe this is giving anyone useful data, especially as the best way to prepare for questions like that is to ask yourself, "In what way is some adult committee trying to trick me with this one?" Of course, I do know that strategy works, because I got through the CBEST, the CSET, and especially the RICA that way, but I am an adult. I don't see the value with third graders.