28 December 2009

Crossing the Rubricon

Next month, an intrepid group of educators from around the state will be joining me to help construct our assessments for Educational Technology. While I can't say much about them individually (oh, those pesky confidentiality agreements...), I can say that collectively, they are a "dream team" of teachers from all walks of K-12. They have significant experience with developing, rangefinding, and scoring large-scale assessments. A few are nationally recognized for their contributions to the profession. I am totally stoked about meeting them and working with them over the next eighteen months, in part because we have some big issues to hash out. I will share what I can along the way, as I will need your help, too.

As I plot, plan, and prepare for this project, I am struggling to think through how the rubrics will shake out. Take a standard like this:
Generate ideas and create original works for personal and group expression using a variety of digital tools.
  • Create products using a combination of text, images, sound, music and video.
  • Generate creative solutions and present ideas.
This standard is not about a tool. We aren't interested in whether a student can make a PowerPoint presentation. This is a little like asking a student to create a picture. The kid might choose watercolors or charcoal or pastels or pen and ink or...the list goes on. The same is true for digital products. A student might choose PowerPoint, but they could also choose VoiceThread or ZuiPrezi or Google Apps or...the list goes on. So part of the challenge is to develop a way to score student products when there are no parameters around the media used.

The bigger challenge, however, is that these standards don't nicely fit into a rubric. I have been trying for a while and you know what? I've decided not to try anymore, at least for now. If I am trying to make a square peg fit in a round hole, doesn't it make more sense to go find the square hole rather than keep pounding away at the round one in impotent frustration? (Okay, that sounds naughtier than intended.)

What are the alternatives to using a rubric to evaluate student performance tasks? Are there other scales of performance out there? I've been looking around...and there isn't much. The Council of Chief State School Officers (CCSSO) is working on a project called EdSteps that makes some attempts to do so, but they are some distance from showing off their efforts.

Or maybe we just need to get back to the roots of rubric-ness. I was reading something recently that reminded me that a Level One performance is not about identifying the worst characteristics of a product or a list of what is lacking---it is about describing what the work of a beginner looks like. This is an excellent perspective. I know that I have been guilty of building a rubric by identifying "at standard" performance and then taking away from that to get to Level One. Instead, the approach should be more individual for each level: here is what a student at standard looks like...and here is what a student who is just beginning to engage with the standard looks like. It is more about identifying what is present, rather than what is absent.

I'm glad that I will have a constellation of superstars joining me in a few weeks to have some real time conversation about these issues. However, for those of you reading this who have your own ideas about how you would evaluate standards like the one described above, leave a comment for me to pass along. Suppose you could create whatever system you wanted to score student performance---would it include rubrics? Or are there other/better ways?

12 comments:

Mr. B-G said...

Just out of curiosity, why do we have to score student performance? I managed to get through high school, college, and graduate school without teachers "scoring" me on everything I did.

Yes, I received grades (but not for everything, and not in the form of rubrics), but most importantly I received feedback and constructive criticism. I don't understand the current wave of breaking every performance task down into a number.

So far this year eight of my journalism students have had articles published in the major daily newspaper in our region. I haven't graded a single article. That's right! No grades, no rubrics, no artificially compartmentalized categories. Do you know what I did? I had conversations with my students about their work, and in turn encouraged them to have conversations about their work with other students. And, believe it or not, I didn't use a rubric to evaluate them on the quality and substance of their conversations! I merely listened and responded using my voice.

These conversations led to organic rewrites spurred by a desire to express ideas clearly and be published for an audience, not to move from one rubric category to the next.

Think back to your own education. What was most beneficial? A piece of paper with a numerical score and appropriately circled boxes of verbiage, or the conversations, discussions, and anecdotes shared by your teachers?

The Science Goddess said...

I agree with you that scoring/grading is not required for every classroom assignment---and is not as meaningful as well-constructed feedback. I ran my own classroom that way. Sounds like you're doing great things with your students, too.

My issue du jour is that I have to construct a system that can be used statewide. I am required by law to create something that can be "scored consistently by school personnel." I have no doubt that at the individual classroom level, teachers will provide individual feedback to students (and these assessments need not count for a grade)---I'm just wrestling with the kinds of things that emerge when you're trying to have a tool for 1 million students vs. 30.

Mr. B-G said...

You're required by law to create a tool for 1 million students? Perhaps the law needs to be changed!

I remember reading that the further away from the classroom an assessment instrument is created, the less reliable it is. Why not come up with a set of general guidelines or standards and let classroom teachers create the means of assessment... or even better, why not have the STUDENTS create the assessment tool?

Hugh O'Donnell said...

Mr. B-G, You're quite right, grading, by whatever means, is not essential to learning.

Student engagement and feedback from teacher and peers (and self, upon reflection) does the trick.

Since we're stuck with reporting systems and state/community expectations, we (SG and I, and others) try to make grading as unobtrusive and accurate as possible. In the best of all possible grading worlds, grading, i.e., report cards, would become mild background noise drowned in the roar of learning. (Please don't grade my metaphor! :) )

The Science Goddess said...

There are many tests in education developed for tens of thousands (up to millions) of students. Most states have them to fulfill NCLB requirements. Reliability and validity are really not issues at that scale in most states because the item and test development process is far more rigorous than what happens at the classroom level.

Teachers already can and do develop classroom assessments. Our task is not to supplant those. The task is to create a valid and reliable assessment that can be used and scored in the classroom by a teacher. It should be a good way for a teacher to reflect on what they are teaching and measuring, in contrast with a tool that has undergone a full development process.

Mr. B-G said...

Hugh, your metaphor passes muster. I'd give you an "A", but would that be as meaningful as me just saying I enjoyed it?

Science Goddess, you provide a legal/financial rationale for outside agencies creating assessments (to fulfill NCLB, for example) but you don't provide a compelling pedagogical rationale.

I've read a number of stories about the flaws of state assessments born out of the "rigorous" development process of which you speak. For example, one open response question on the Massachusetts Comprehensive Assessment System asked students to describe activities they would do on a "snow day," yet failed to consider that a number of students who had moved to MA from warmer climates had never experienced a snow day and thus didn't know how to answer the question.

Individual classroom teachers know such things about their students, and aren't prone to the types of errors of a distant corporate or bureaucratic agency charged with test creation.

I find fault in the notion that a group of outsiders can do a better job creating an authentic assessment to measure my students than I (or my students) can.

This is akin to a coach relying on an outsider who doesn't know his players or team philosophy to develop a system to assess his players and team. This simply doesn't happen because such a system wouldn't be as reliable or accurate as one the coach and his colleagues could create themselves, based on their intimate knowledge of their players' and team's overall strengths and weaknesses.

If teachers had half the autonomy and professional respect of the people who are paid to facilitate the playing of games, we might actually be able to see improvements in K-12 education.

Top-down assessments from education corporations and bureaucracies are not the solution.

Roger Sweeny said...

This is akin to a coach relying on an outsider who doesn't know his players or team philosophy to develop a system to assess his players and team.

Perhaps this metaphor doesn't point the way you mean it to. There is a very accurate way to assess how well a team is doing: how many games it wins. This is considerably more accurate than asking the coaching staff, "How is the team doing?" Coaches will often be over-optimistic. They want to be encouraging. And, let's be honest, few coaches or teachers want to say, "they're doing poorly" because it seems to reflect poorly on the coach or teacher.

A big problem in education is that we have no equivalent of "go out and play the game."

Roger Sweeny said...

Mr. B-G,

Your journalism students do have a way to "go out and play the game." They try to get an article published. Most students don't have anything like that.

I would love to be able to have deep conversations with all my students about everything we're trying to do. But with a hundred students that is simply not possible. Numerical grades have their problems, but they are a remarkably efficient way of conveying information in many circumstances.

Mr. B-G said...

Records aren't determined by outsiders. They're determined by performance. I'd argue that education does have a "go out and play the game" equivalent.

For example, regional accreditation agencies verify the veracity of educational programs to ensure students are learning what they need to play the game at the appropriate level. Graduation rates and college acceptance rates then provide data that let a school know how well its programs are serving the students and how many are "winning."

As for the purpose of grades, well, on a basic level they serve to provide colleges with data which they can use to measure applicants. But what do grades really represent? Knowledge? Task completion? Performance ability? All three? Something else?

Roger Sweeny said...

Mr. B-G,

My school just went through its every-ten-year re-accreditation process with the regional accreditation agency. Once again I was struck by how much our business is concerned with inputs and how little with outputs. The process was very concerned with "what are you doing?" and hardly at all concerned with "how are your students doing?"

Graduation rates don't mean a hell of a lot. For years, schools found it hard to refuse graduation to anyone who hadn't been absent too much and who was basically a nice person. It was just considered too mean to deny that person a diploma. So the high school degree became devalued.

Out of that failure came NCLB and the high stakes testing movement. Schools couldn't be trusted to test their own students so outsiders would have to do it. Alas, in most places the tests themselves have been dumbed down or the passing score set absurdly low.

College acceptance rates mean something, though that too is limited. If a college has accepted a number of your students in the past and they have done well, it has an idea how well-prepared new applicants will be. But a lot of schools won't have enough experience. They will have to guess based on the characteristics of the high school. And they can use SAT/ACT scores, themselves a kind of numerical grade.

Of course, there are a large number of colleges that will accept anyone with a high school diploma (and some without). The statement, "Ninety percent of our graduates will be attending college in the fall" doesn't tell you much.

Mr. B-G said...

Roger,

I think there's a pretty solid connection between what we do as teachers and administrators and how our students do.

I'd argue that graduation rates do mean a lot. Your generalization that high schools gave out diplomas simply for showing up and smiling is inaccurate. The national graduation rate is about 70%, and has been for the last couple of decades.

Students who do not receive a high school diploma have a significantly higher rate of incarceration and reliance on public welfare systems. There is, and has been, value in a high school diploma.

NCLB is a move toward the privatization and corporatization of K-12 education. It means millions of dollars for testing agencies and other for-profit institutions "hired" via legislation to "fix" education.

I think the majority of high schools would be very happy with a statement of "Ninety percent of our graduates will be attending college in the fall," as national data show only two-thirds of students attend any type of college, with one-third of those students attending four-year schools.

Roger Sweeny said...

Mr. B-G,

I asserted that graduation rates do not "provide data that let a school know how well its programs are serving the students." You then told me that graduation rates haven't changed "for the last couple of decades." Does that mean you are agreeing or disagreeing with me?

My high school is very happy with the fact that, as of graduation day, ninety percent of the graduates have told the Guidance Office that they will be attending college in the fall. Many of them will attend for a few classes or a few semesters but never get a degree. Of course, we don't know who because we make no attempt to find out. Teachers hear things but no one tries to put together any quantitatively useful statistics.

The fact that 90% say they are going to college says a lot about where the school pushes people and where families push their children. It doesn't say much about how well we've actually prepared them for college.