I am starting to see districts build leveled assessments. This means that the questions are ordered to reflect the proficiency scale. Questions that address the descriptors associated with Approaching Standard are placed first, then those for At Standard, and finally, Above Standard. Sometimes, the step from the proficiency scale is included on the assessment—but I have seen districts that don’t include it. Personally, I like it on there. If we’ve been using the language from the proficiency scale with students and have been making intentional connections about how performance does or does not match the items, then it seems like a logical choice to include it at the end, too.
I mocked up a test to show this. Keep in mind that assessment is much broader than tests and quizzes—there’s no reason why you couldn’t apply the same format to labs, projects, and so forth. I also won’t claim that these questions are the best; just take them at face value for the sake of the model.
I do know a district that chooses to set each cut at the minimum points necessary for that level. For example, this assessment has 22 points, 9 of which are assigned to the Approaching Standard questions. So, earning 9 points would be the minimum needed to score the test overall at Approaching Standard/Level Two. Ten more points are assigned to At Standard questions, so 19 would be the next cut…and 22 the final cut (to earn an “Above Standard” evaluation). In other words, you have to get a perfect score to get a “4” on the test.
The scale I chose is a bit of a mix on this. I kept the first cut, scaled back the final cut to getting any of the three possible points for the Above Standard item, and then split the rest. I do not have a psychometric reason for any of this…so feel free to set the cuts at whatever makes sense for your own work in the classroom. And if you want to throw some half-points in there, that’s your choice, too.
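If it helps to see the cut logic spelled out, here is a minimal sketch of a function that maps a raw point total to an overall level. The default cuts (9, 19, 22) are the strict ones from the district example above; the alternate cuts in the usage line are purely illustrative stand-ins for a softened scale, not my actual numbers.

```python
def overall_level(score, cuts=(9, 19, 22)):
    """Map a raw point total to an overall proficiency level (1-4).

    cuts holds the minimum points for Approaching (level 2),
    At Standard (level 3), and Above Standard (level 4); anything
    below the first cut is level 1. Set the cuts at whatever makes
    sense for your own classroom.
    """
    approaching, at_standard, above = cuts
    if score >= above:
        return 4
    if score >= at_standard:
        return 3
    if score >= approaching:
        return 2
    return 1

# With the strict district cuts, 21/22 is still only a 3:
print(overall_level(21))                      # prints 3
# With hypothetical softened cuts, the same score earns a 4:
print(overall_level(21, cuts=(9, 15, 20)))    # prints 4
```

Half-points work just as well here, since the comparisons don’t care whether the score is a whole number.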
The nice thing about developing these leveled assessments is that they dovetail with standards-based grading so nicely. Once you’ve determined the score, it slides right into your gradebook. It also makes providing feedback to students very clear. You could even have them track how many items were scored correctly in each category.
You can download my version of the assessment here, if you want to play around with things yourself. (Note: I was too lazy to write the individual guidelines for the short answer items.)
Are you using leveled assessments, proficiency scales, or related ephemera in your classroom? How’s it working for you?
Getting back to a question from the last post—Is there a difference between a proficiency scale and a rubric? I still think there is, even though they have several things in common. In fact, Jennifer asked this question of the Marzano Research Labs and got this answer:
In my mind, however, a proficiency scale has a more universal application in the way we structure the information we provide to students, how we score their work, and how we evaluate their overall performance. It is more than just a measurement device...bigger than just evaluating student progress toward a standard. I may be drawing a very thin line in making that distinction, but I think it's enough of a different tool to do so.