18 December 2012

Spot the Differences

Last week, I had the pleasure of listening to a keynote by Charlotte Danielson. Her framework has had a place on my shelf for a long time. It was a key piece of my conversations with beginning teachers and a source of self-reflection. This framework is one of three being used throughout our state as a basis for new teacher and principal evaluation.

Evaluation.

It's an oft-dreaded word in the realm of teaching, or at least one that might not be taken seriously. I know I have been guilty of that, especially early in my career when there was just a simple checklist for the principal to use. I didn't feel that the tool (or follow-up conversation) was particularly useful...and really, could an observation a couple of times a year capture all that I could do in the classroom?

In the intervening years---and with the advent of high-stakes testing---evaluation has taken on a more sinister feel. What is your "value added" as a teacher?

But I'd like to set that aside for now, because the similarities between quality teacher evaluation and student evaluation are so striking. These were the four questions Danielson started with:
  1. How good is good?
  2. Good at what?
  3. How do we know?
  4. Who should decide?
This is not so different from how we approach student work. What is "good enough" and against what standard? How will you know when you see it? In most classrooms, the teacher is the "decider," but I continue to see more student self-assessment and conversations about how a grade is determined.

In general, we view teachers as experts about the students and subject(s) they teach---their roles as evaluators are not called into question. But when we take things up a level, I often hear distrust start to creep into the conversation. Does a teacher evaluator always know what good teaching looks like for any and all classrooms? Is the principal the best decider? I hadn't really thought about it this way before---this shift between teacher as student evaluator and teacher as subject of evaluation. There are lots of other things at play, of course, but the basic questions are the same for both.

In order to get a teacher evaluation system in place, Danielson said schools need
  • A clear and validated definition of teaching (the “what”)
  • Instruments and procedures that provide evidence of teaching (the “how”)
  • Trained and certified evaluators who can make accurate and consistent judgments based on evidence
  • Professional development for teachers to understand the evaluative criteria
  • A process for making a final judgment
These pieces elaborate on the four questions above. But when I think about them within the context of a classroom, I have to wonder how grading practices would shift if all of these were tightened up a bit. There has been considerable energy devoted to developing standards (the "what"), assessments and rubrics (the "how"), and PD in both areas. PLCs don't necessarily replace becoming a "trained and certified evaluator," but I do think that more conversations about evidence of learning are happening.

But that last bullet?

For a long time, the $1M question in standards-based grading has been "How do you crunch the numbers?" In other words, if you have to give a summary grade at the end of the marking period, how do you determine it? I've seen a lot of different attempts to answer this question (I've even made my own)...but I haven't seen a lot of agreement. And this area, for teacher evaluation, appears to stump Danielson a bit, too. Here is a copy of the slide she shared as she talked about this.

[Slide: evidence gathered for each teaching attribute, leading to interpretation and then judgment]

The idea here is that there should be a lot of evidence for different attributes, such as questioning and rapport. On the right side of the slide, we see two parts of the process: interpretation and judgment. This is not so different from assigning a grade---we often assess against various standards, consider what we see, and determine a final grade to represent the work. Danielson spoke about the need for a process, as well as the dangers and pitfalls of various approaches, but she offered no bottom-line guidance.

Whoever solves this problem in a way that makes everyone happy will make a mint...but I think it's an impossible task. Some are taking the seductively easy (and oversimplified) way out by reducing it all to number crunching. However, humans evaluating other humans will always be a subjective endeavor. We might take comfort in reducing things to formulas---whether we're weighting student scores or teacher evidence. We might say that it's better than leaving things up to the subjectivity of an evaluator ("What if my principal has it in for me?"). Just like all of the growing pains with standards-based grading and reporting over the last several years, we are going to have to figure out how to communicate about good teaching while separating it from a one-size-fits-all rating system. I hope that one conversation (grading) will inform the other (teacher evaluation). Right now, the two feel very disconnected, as if they are separate problems...but I'm having a hard time trying to spot the differences.
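
To make the number-crunching point concrete, here is a minimal sketch (my own illustration, nothing Danielson presented): the same five scores on a four-point scale produce different summary grades depending on which formula does the crunching. The scores, the 0.6 weight, and the "decaying average" rule are all hypothetical.

    # Hypothetical data: one student's scores on a 4-point scale for a
    # single standard, listed oldest to newest.
    from statistics import mean, median

    scores = [2, 3, 3, 4, 4]

    def decaying_average(xs, weight=0.6):
        # Each newer score counts more than the running average before it.
        avg = xs[0]
        for x in xs[1:]:
            avg = (1 - weight) * avg + weight * x
        return avg

    print("mean:            ", round(mean(scores), 2))               # 3.2
    print("median:          ", median(scores))                       # 3
    print("most recent:     ", scores[-1])                           # 4
    print("decaying average:", round(decaying_average(scores), 2))   # 3.81

Each of those formulas is defensible, and each tells a slightly different story about the same evidence, which is part of why "just crunch the numbers" never settles the argument.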

10 December 2012

Super Eight

I've been thinking about this space. I know I haven't been around much this year. Some health issues have kept my energy levels low...most days, it takes all I have to get through the workday. But I am feeling better and stronger all the time. I picked up blogging over at Excel for Educators about a month ago, mostly because I was getting more nagging to share ideas over there. But on this, the 8th birthday of this blog, it is time to re-awaken the space and rejoin the community here, too.

Welcome back.