Evaluation.
It's an oft-dreaded word in the realm of teaching, or at least one that might not be taken seriously. I know I have been guilty of that, especially early in my career when there was just a simple checklist for the principal to use. I didn't feel that the tool (or the follow-up conversation) was particularly useful...and really, could an observation or two a year encapsulate all that I could do in the classroom?
In the intervening years---and with the advent of high-stakes testing---evaluation has taken on a more sinister feel. What is your "value added" as a teacher?
But I'd like to set that aside for now, because the similarities between quality teacher evaluation and student evaluation are so striking. These were the four questions Danielson started with:
- How good is good?
- Good at what?
- How do we know?
- Who should decide?
In general, we view teachers as experts about the students and subject(s) they teach---their roles as evaluators are not called into question. But when we take things up a level, I often hear distrust start to creep into the conversation. Does a teacher evaluator always know what good teaching looks like for any and all classrooms? Is the principal the best decider? I hadn't really thought about it this way before---this shift between teacher as student evaluator and teacher as subject of evaluation. There are lots of other things at play, of course, but the basic questions are the same for both.
In order to get a teacher evaluation system in place, Danielson said schools need:
- A clear and validated definition of teaching (the “what”)
- Instruments and procedures that provide evidence of teaching (the “how”)
- Trained and certified evaluators who can make accurate and consistent judgments based on evidence
- Professional development for teachers to understand the evaluative criteria
- A process for making a final judgment
But that last bullet?
For a long time, the $1M question in standards-based grading has been "How do you crunch the numbers?" In other words, if you have to give a summary grade at the end of the marking period, how do you determine it? I've seen a lot of different attempts to answer this question (I've even made my own attempt)...but I haven't seen a lot of agreement. And this area, for teacher evaluation, appears to stump Danielson a bit, too. Here is a copy of the slide she shared as she talked about this.
The idea here is that there should be a lot of evidence for different attributes, such as questioning and rapport. On the right side of the slide, we see two parts of the process: interpretation and judgment. This is not so different from assigning a grade---we assess students often against various standards, consider what we see, and determine a final grade to represent the work. Danielson spoke about the need for a process, as well as the dangers and pitfalls of various approaches, but there was no bottom line in terms of guidance.
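To see why the formula-driven route is so tempting, here is a minimal sketch (in Python, with made-up scores and a simple decaying-average rule that I'm using purely as an illustration, not as Danielson's or anyone else's endorsed method) of how arithmetic can turn a pile of evidence into one summary number:

```python
# Hypothetical illustration: turning repeated standards-based scores (1-4)
# into a single summary grade with a decaying average, so newer evidence
# counts more than older evidence. The decay rate, the attribute names,
# and the scores are all invented for this example.

def decaying_average(scores, decay=0.65):
    """Blend each newer score into the running summary, weighted by `decay`."""
    summary = scores[0]
    for score in scores[1:]:
        summary = (1 - decay) * summary + decay * score
    return summary

# Made-up evidence for one student (or teacher) across three attributes
evidence = {
    "questioning": [2, 3, 3, 4],
    "rapport":     [3, 3, 4, 4],
    "content":     [1, 2, 2, 3],
}

per_attribute = {name: decaying_average(scores) for name, scores in evidence.items()}
overall = sum(per_attribute.values()) / len(per_attribute)

print({name: round(value, 2) for name, value in per_attribute.items()})
print(round(overall, 2))
```

The sketch runs, and it spits out a tidy number---but every choice in it (the decay rate, the 1-4 scale, averaging across attributes at all) is a human judgment dressed up as arithmetic.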
Whoever solves this problem in such a way that everyone is happy will make a mint...but I think it's an impossible task. Some are taking the seductively easy (and oversimplified) way out by reducing it to number crunching. Humans evaluating other humans is always going to be a subjective endeavor, however. We might take comfort in reducing things to formulas---whether we weight student scores or teacher evidence. We might say that it's better than leaving things up to the subjectivity of an evaluator ("What if my principal has it in for me?"). Just like all of the growing pains with standards-based grading and reporting over the last several years, we are going to have to figure out how to communicate about good teaching while separating it from a one-size-fits-all rating system. I hope that one (grading) will inform the other (teacher evaluation). Right now, the conversations feel very disconnected, as if they are two separate attempts...but I'm having a hard time trying to spot the differences.