31 January 2010

The Ups and Downs of Grades

I'm never quite sure what is meant by the term "grade inflation." Does it refer to a student who gets a grade higher than what has been earned or deserved (however that is defined)? Or are we talking about something larger in scale---that out of any given student population there should always be a normal distribution of grades, and any distribution with a positive skew means that some sort of grade-related shenanigans are occurring?

Arkansas is judging grade inflation as a mismatch between a grade on a transcript and a score on a standardized test (via Memphis Daily News):
The Arkansas Department of Education says 58 public high schools inflated Algebra I and geometry grades last year.

The action means graduates of those schools face additional requirements to qualify for the new Arkansas Academic Challenge Scholarship. The scholarship is funded by the lottery and could be worth up to $5,000 a year.

Graduates of the 58 schools will have to earn more than the minimum 2.5 grade point average or earn more than the minimum 19 on the ACT college entrance exam.

The inflation report compares the grades of students who made an A or B in Algebra I and geometry in the 2008-09 school year — but scored at below-proficient levels on state exams.
There are two points of interest here for me. First of all, the state exam represents what a student knows on a given day about only a sample of the standards selected for that test---not what a student understands about Algebra or Geometry on the whole. However, I can also see the other side of the argument here. If a "well-taught, hard-working" student has been provided a standards-based education, then the sample score should reflect the overall score. If you have a teacher who starts on page one of the textbook and just keeps working forward, regardless of relevance to learning targets (or instructional adjustments), then you're definitely going to have a mismatch between the scores.

The assumption, of course, is that the test score is not only accurate, but will always be lower than the grade. Otherwise, there is no "grade inflation." I have to think that there are going to be students for whom the reverse is true---they blow the top off the state test and earn a D or F in their math class. This happens when late or missing work and other non-academic factors get rolled into the grade. You end up with a kid who knows the standards, but the grading scale only counts that as part of the score.

I don't believe in grade inflation, the Tooth Fairy, or the Easter Bunny. I do believe that there are inaccurate grades and Arkansas would do well to address that issue as opposed to punishing kids because schools/teachers need help with grading practices.

Even more mythical---or so I thought---was grade deflation. Until now, I can't recall ever seeing a single news article describing such an issue. (via New York Times)
When Princeton University set out six years ago to corral galloping grade inflation by putting a lid on A’s, many in academia lauded it for taking a stand on a national problem and predicted that others would follow.
Galloping grade inflation! Jumpin' Jehosaphat! It's a national problem...or not.

But the idea never took hold beyond Princeton’s walls, and so its bold vision is now running into fierce resistance from the school’s Type-A-plus student body.

With the job market not what it once was, even for Ivy Leaguers, Princetonians are complaining that the campaign against bulked-up G.P.A.’s may be coming at their expense.

It is no secret that grades are currency. They buy things---from cheaper car insurance rates to athletic eligibility, scholarships, and college entrance to a final ticket to the working world. Princeton students and families have paid a lot of money for that Golden Diploma Ticket. You can argue all you like about whether or not being able to fork over tuition is enough to entitle someone to a sheepskin, but the school isn't playing fair, either:

In September, the student government sent a letter to the faculty questioning whether professors were being overzealous in applying the policy. And last month, The Daily Princetonian denounced the policy in an editorial, saying it had “too many harmful consequences that outweigh the good intentions behind the system.”

The undergraduate student body president, Connor Diemand-Yauman, a senior from Chesterland, Ohio, said: “I had complaints from students who said that their professors handed back exams and told them, ‘I wanted to give 10 of you A’s, but because of the policy, I could only give five A’s.’ When students hear that, an alarm goes off.”

Nancy Weiss Malkiel, dean of the undergraduate college at Princeton, said the policy was not meant to establish such grade quotas, but to set a goal: Over time and across all academic departments, no more than 35 percent of grades in undergraduate courses would be A-plus, A or A-minus.

I realize that college is not a standards-based environment. However, if a student completes the requirements of a course at a top level, shouldn't s/he receive an A? How does one justify a cutoff of one-third of the population?

It would appear that schools at every level need to take some time to really think about what a grade does and should represent. Until then, artificial terms such as "grade inflation" don't help the discussion---and they certainly don't support students.

Update: Looks like grade inflation isn't just for kids. NYC plans to change the way it "grades" public schools so that not so many will have A's and B's. You can read more in this NYT article.

30 January 2010


I have seen any number of fussy bloggers over the years---the kind who only post negative thoughts and angry sentiments. I have no beef with them. Blogging should be whatever you want it to be. I have hoped to be more reflective and positive. When I hit the patches in my professional life where I am more interested in shaking my fist than being capable of finding solutions, I don't write as much.

For whatever reason, I've been feeling rather impotent (in terms of job function) as of late. This has led me to thinking about who really has power in education and who those people listen to. Even with a stateside balcony seat, I often find that I have very little influence on educational events. My goal is not one of power or influence for its own sake---my interests are in supporting kids. It was too difficult to stay in a district where money meant more than people. I wish I could tell you that people at a higher level serve a higher purpose where schools are concerned. The fact is, a lot of them do. Many of the people I work with have kids first and foremost in their minds as they make decisions. Unfortunately, these people are not in the kinds of leadership positions where that could make a real difference.

I do know that the biggest impact on kids is made at the classroom level. In that sense, teachers have more power than anyone. And yet, in terms of the education system as a whole, teachers often have the least say in what happens in terms of policy and budgets.

I'm not sure how to resolve this---or even if this rambling makes a lot of sense. I don't wish to be negative about it, although it's been a downer sort of thing rolling around in my head. How do we ensure that those who are most passionate about doing what is best for kids are the ones who have the biggest voice in shaping policy?

23 January 2010

And In Other News...

Not only did I have to deal with coming down off of a ScienceOnline 2010 high this week, I also had to kick off the assessment project which is the primary reason for my employment...all while having a rather nasty case of laryngitis. However, I have been looking forward to this week for months and even sounding like a 12-year-old boy undergoing puberty was not going to stop me from enjoying the work.

There is a stunning group of educators working on this project. You never know when you put a call out for help who will respond---and even sifting through a pile of applications is no guarantee that you will have the cream of the crop. I am sure that I am not the only one who has been burned in the past by an applicant who looked beautiful on paper and was nothing but heartbreak in the flesh. This time, however, there appears to have been a perfect storm of events and I have roughly a dozen superstars from all walks of education to help guide this process.

Their presence comes at a time when I need them most---not simply for the task at hand, but as I wrestle with various ideas related to educational technology and what happens in a classroom. I had to listen this week to talk about the worthlessness of public schools and teachers from someone who has never worked in one (nor places any value on my lifelong passion for them and experiences within them). Public education is far from perfect, but it is not a useless social experiment either. How and where the most recent advent of educational technology fits remains to be seen. There are plenty of predictions out there about how these tools will transform education in the next 10 years. I don't agree. I have nothing to base that opinion on, other than anecdotal evidence. Over the last 20 years, we've seen computers and the internet move into classrooms, but I am unconvinced that instruction has undergone any significant changes as a result of these tools. I think more change has been driven by policy, not tools.

I was thinking this week about the various stakeholders in the educational process and their buy-in for educational technology. It's simpler to think about those associated with higher SES; however, if I'm working a minimum wage job at Wal-Mart, should I care that my child is able to create a Voicethread or collaborate on a wiki when those tools have no impact on my world? Is that what I want schools teaching my child? Does a migrant worker care more about whether or not an interactive white board is in a classroom or whether his child feels safe at school? I don't know the answers to these questions, but I am thinking that it is a mistake to not know what these voices would say. It contributes to the ever-increasing divide between the haves and have-nots.

I know that there is a lot of instructional power in educational technology. I know that the tools are engaging for students and can create opportunities for learning that did not previously exist. I also know that they aren't necessary in order to develop students who think critically and creatively...who can collaborate and organize information...who can read and write. As I move forward with the assessment group that I have, I will be looking for some answers as to how we justify change.

18 January 2010

ScienceOnline 2010: Final Thoughts (For Now)

---photo by MamaJoules

I don't really know how to eloquently sum up the whole ScienceOnline 2010 experience. All I can think of is a Keanu Reeves-like "Whoa."

I am sure that I will be processing all of the ideas, conversations, and experiences for a while. There is quite the archive of tweets from the conference, if you are so inclined. Here are some of my Lessons Learned:
  • I will never be the owner of a Sleep Number bed. I tried to find my perfect sleep number---I really did. I inflated and deflated the inner balloon in the mattress. I set different numbers for each side so I could do a comparison before making a decision for the night. I even read the posts Bora suggested for us in my search for a comfortable night on such a mattress. It was all for nought. The problem with these mattresses is that they have no give to them---they don't conform to the shape of your body or your sleeping position. Very uncomfortable.
  • I gained a cold (courtesy of Scicurious) and lost my coat. Double sigh.
  • As frustrated as we educators are with our IT staff and administration when it comes to using web 2.0 tools in the classroom, I can guarantee you that similar frustrations are being voiced at colleges, universities, science museums/zoos, and other institutions. However, there are exceptions to every rule. Damond Nollan, web manager at NCCU, is just such an exception. While there is no doubt that there will always be concerns about web security and the "digital footprints" we leave as we make our way through the internet, these should not rule out our attempts to learn and connect. Thank you, Damond, for your leadership in this area.
  • Cell phones in learning (and not just for science) will become increasingly important. I understand the challenges of harnessing their power for the classroom---and that we educators will have to figure out how to manage that. However, I picked up two interesting pieces of information last week that have made me more determined to work on incorporating cell phones into instruction. First of all, SMS (texting) is the "king" of communications---it works across all types of carriers and in all countries. Secondly, it is a tool that is not necessarily impacted by disparities in equity. (Data plans/Smartphones are---but not texting from a basic cell phone.) This means that the kinds of divides we see among "haves and have nots" for other technology and access don't exist with this tool.
  • I still cannot believe I really met all of the amazing people that I did. I will try to work on a list and some links to share of new-to-me blogs and conversations. There was only one session which was a disappointment to me---the moderators were the only evidence of a clique I saw at the conference, and their approach to the topic, showing no hint of personal reflection, was a bit insulting. However, the blogosphere takes all kinds. I can appreciate examples of what not to do just as much as the blogs which inspire me.
  • Finally, below are three screenshots of tweets from the conference. They are the ideas that intrigued me most, but didn't get explored. Perhaps they will serve as fodder for future posts. If you have some thoughts to share about any of these ideas, I'd enjoy hearing them.

Assuming that ScienceOnline continues and grows, I wonder if what makes it intimate and participant-driven will be able to remain at the center of things. What is the maximum size a conference can reach while remaining a reflection of what attendees build for themselves?

Did you attend the conference either in-person or "ether"eally? If you're an educator, would you be interested in something like this---what sorts of topics would be most useful? Educon is coming up, which is probably akin to ScienceOnline in some ways, but does not attract the diversity and expertise we had last week (although it attracts plenty of attention from the EdTecherati). Maybe we need to reboot our educational gatherings.

Update: There is a great list of Blog/Media Coverage on the ScienceOnline 2010 site. One in particular is a response to this post by Greg Laden. Go give him some comment love.

17 January 2010

ScienceOnline 2010: The Beginning of the End

Today is the last day of ScienceOnline 2010. The conference experience has been one akin to a reunion---so many people I feel like I have "known" for a long time from their blogs and Twitter feeds, and this was our chance to finally meet in person.

I enjoyed the sessions I attended, as well as those I facilitated. The Data Visualization session I moderated was particularly interesting to me. I had adapted my material for the "unconference" format and also for a different audience. I almost exclusively present to educators these days. Scientists? Not so much. But I liked the connections that they made with the material and the discussions they had about the changes they see happening in the sciences. I expect that these conversations will continue in one form or another, as DataViz seems to be of increasing interest.

One of the things I have appreciated the most at this conference is the diversity of connections to science. There are science librarians; artists who paint or photograph scientific concepts; online and print journalists, bloggers, authors, and editors; students and educators from public and private institutions; science industry reps; physicians; museum, zoo, and aquarium staff; and many others. These various areas of expertise lend so much to the conversation. Journalists are contributing to the discussions of ethics in science reporting while librarians give us different ways to document and catalog work. Teachers can help researchers understand what is needed for their students to participate in citizen science projects. Those institutions which are already using social media can help the rest of us understand what is and isn't working. I don't know what this would look like in the context of an educational conference, but we need to find a way to do this.

In a few hours, I will board a plane bound for the west coast, headed back to my normal quiet life. I am already anticipating returning here for ScienceOnline 2011.

16 January 2010

Enough Is Enough

Regulars here at Ye Olde Blog know that I have posted many times about classroom grading practices. Regardless of which philosophy you subscribe to, at the end of the term, a teacher must make a decision about whether or not a student has learned (and to what extent). How much evidence is enough "to convict a student of learning," as Rick Stiggins would say? Is it by the number of assignments completed? Quality of work? Length of time information is retained? The answer is not as cut-and-dried as we might like. I have wondered if there is an answer at all.

I was pondering this particular conundrum again on Friday evening during a ScienceOnline 2010 keynote by Michael Specter, author of Denialism. The largest bone of contention amongst the crowd (and with Specter) was around how many expert opinions are "enough" to determine what the truth of the scientific matter is. For example---Is the H1N1 vaccine safe? How many doctors, virologists, physiologists, epidemic researchers, etc. must one talk to before accepting that the answer is "Yes."? Is just a doctor enough? Do you need three who agree? How much expertise is necessary for an opinion to be considered?

The struggle for some in the crowd appeared to be around defining the exact quantity of expert opinions in agreement that should be required. Others cared more about making sure the "right" experts were used. I can empathize with that sort of mental wrestling. It is similar to the questions I get from teachers at grading workshops: How many assignments should there be for each standard? Should I have three summative assessments...or five? I can never really answer their questions any more than Specter could provide a definitive answer last night about how much scientific expertise is enough. It isn't that I don't want to answer teachers, or that I mean to frustrate them. These questions just do not fall into black and white sorts of categories.

The danger, of course, in not thinking about guidelines and trying to get to the answer is that there continues to be wide variation in what is acceptable. Just as one teacher finds three tests and an essay enough to say whether or not a student should receive credit for a course while another needs 8 tests and 5 essays, some people will accept one opinion about vaccines while others want four. Unfortunately, many of those who accept a single opinion often choose one that is not based on evidence. As Specter pointed out last night, "185,000 people died from measles last year...just no one Jenny McCarthy knew."

At some point, we all have to come to terms with balancing quantity and quality of information. While I doubt that the mothers of the 185K measles victims would share McCarthy's opinion about vaccines, it does not mean that those mothers have any more medical expertise than she. We have to make a decision about how to weigh both validity and reliability of the information we have access to, whether we are teachers looking at our gradebooks, or citizens and scientists evaluating information that impacts our health and environment. How do we determine when "enough is enough"?

15 January 2010

Notes from ScienceOnline 2010: Day 1.5

The event has not begun in earnest yet, and I can already tell you that ScienceOnline 2010 is the best. conference. ever. It's a place where egos do not appear to exist---only enthusiasm to share and learn. People are very friendly, always willing to strike up a conversation and share a story. Bora is a delightful host, boundless in energy and as genial as I had always imagined.

I don't have enough headspace at the moment to fully develop a post, but I did want to share some of my observations from the first day or so.
  • It is a different sort of crowd here. Not only is everyone interested in science, but also in social media. Several people I have met have described that they are the only ones in their lab, library, or office who dabble in blogging and tweeting. Many of them have run up against institutional policies or disinterest in these endeavours. This is an important point for me to note, because I run across so many teachers who feel the same way. It is not just schools which are undergoing growing pains when it comes to integrating "web 2.0" (or whatever you wish to label it)---we are not as behind as we might think.
  • Blog posts are like lesson plans. You know how we educators will spend hours crafting what we think is the most awesome, engaging lesson in the history of our classrooms, only to have kids chew it up and spit it out...and then, on another day when we have five minutes to plan, discover that students love whatever we toss together? I've heard a few comments here about the same sort of relationship with posts. Scientists take a ton of time to research and construct a post, only to find that they get more conversation and comments on the "toss offs." Maybe there is something to be said for deadlines.
  • People rarely resemble their avatars---even the ones who use their own photos. I don't care how many times I've seen someone's tiny avatar on Twitter or on their blogs, the 3D experience is very different.
  • Blogging 101 was a ton of fun. An hour was woefully inadequate for getting people up and running with their own blogs, but it was enough time to allay some fears and provide places to start. I so enjoyed their positive energy and enthusiasm. I really hope that at least some of them get into blogging.
Most of the sessions will happen tomorrow. Considering the atmosphere oriented toward personal learning, the level of participation, and the openness of this conference where everyone can contribute, I really think this may well be the most powerful learning experience I have had.

12 January 2010

ScienceOnline 2010: Sessions

The countdown is on! I leave for North Carolina in the wee hours on Thursday. By Friday morning, I will be sleep deprived and jetlagged when I start my first session at ScienceOnline 2010. Go me!

Blogging 101
This session is meant to be a boot camp, of sorts. Those attending will be new to blogging, so we'll start with the basics:
  1. What is blogging and why would anyone want to have a blog?
  2. How do I get started? (choosing a platform/hosting, template basics)
  3. How can I create and publish a post? (how posting works, including adding links, graphics, video, etc.)
  4. How do other people find my blog? (ways to connect and communicate your information; dealing with comments and establishing “house rules” for visitors; logging visits)
I have set up two blogs, one in Blogger and one in WordPress, for us to play with. We are scientists, after all. Why not experiment a bit?

After our boot camp, I'll be expecting them all to drop and give me 20 (posts).

Data Visualization
Readers here know that this particular topic is my new passion. I am really looking forward to a conversation which puts a science spin on things. Research scientists, physicians, science writers, and other stakeholders are going to have some unique needs.
  1. How do the capabilities of open publishing and associated tools change the ways in which we can visualize and share data with various audiences?
  2. What do you need your data to do that you can’t currently make happen (either due to lack of knowledge and/or tools)? For example, would you like to be able to overlay various samples with Google Maps?
  3. What tools (both commercial and open source) are you using to develop visualizations?
  4. How can we use visualization to better communicate messages with the general public?
I have pulled a few slides to use as a way to guide the conversation along and stimulate some thinking, but beyond that, our discussion will be participant driven.
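To make question 3 a little more concrete for anyone reading along at home: one thing I like about code-based, open tools is that a visualization can be generated straight from raw data with nothing proprietary in the way. Here is a toy sketch, using only the Python standard library, that turns a handful of invented sample readings into an SVG scatter plot you could embed in a blog post or wiki. The data values, sizing, and scaling choices are all made up for illustration.

```python
# Toy sketch: generate a shareable SVG scatter plot from raw sample data
# using only the Python standard library. The sample values are invented.

samples = [(1, 4.2), (2, 5.1), (3, 4.8), (4, 6.0), (5, 5.5)]  # (site, pH)

def scatter_svg(points, width=300, height=200, r=4):
    """Return an SVG string plotting (x, y) points with a 20px margin."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]

    def sx(x):  # scale x into the drawing area
        return 20 + (x - min(xs)) / (max(xs) - min(xs)) * (width - 40)

    def sy(y):  # scale y, flipped because SVG's origin is the top-left corner
        return height - 20 - (y - min(ys)) / (max(ys) - min(ys)) * (height - 40)

    circles = "".join(
        f'<circle cx="{sx(x):.1f}" cy="{sy(y):.1f}" r="{r}"/>' for x, y in points
    )
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{width}" height="{height}">{circles}</svg>'
    )

svg = scatter_svg(samples)
print(svg)  # the markup can be pasted directly into a web page
```

Nothing fancy, but it illustrates the point: when the visualization is just text generated from data, it is easy to version, share, and tweak---exactly the kind of openness the session questions are poking at.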

Citizen Science and Students
This session is moderated by Sandra Porter of Digital Bio and we are joined by Antony Williams (ChemSpider). Sandra has written a post to get the conversation started on her blog. If you have examples of ways in which your students are involved with research science (e.g. water quality, bird counts...), please leave a comment on her blog. I know that ChemSpider has already done a bit of thinking about this and other sessions. Me? I'm a bit of a slacker in this group. You know---the person you never wanted to do a group project with because they totally biffed the whole thing and then got the same credit as everyone else? I'm teetering on that line, but I am working on getting my poop in a pile. My experience has been more from the classroom vs. researcher side, obviously; but I am hoping to speak to how schools can be engaged with ongoing work.

So, there you have it. As for sessions that I am just attending for my own edification...well, I haven't made my final decisions yet. I do know that on Friday morning after my Blogging 101 session, I want to drop in to the Podcasting workshop. In the afternoon, I've signed up to go to Duke's Immersive Virtual Environment to experience a "3-D simulator that shows the path a molecule of ethanol makes from a beer can to your brain, with molecular-scale stops along the way." I've signed up for the Saturday evening dinner and may nab one of the last Monti tickets for Thursday (although I worry about arriving late). I have time to attend a couple of sessions on Sunday morning before the long trip home. Amongst all of this, I hope to post updates here. I'll hang out the "Do Not Disturb" sign on Monday.

11 January 2010

A Preview of Coming Attractions

---Science Online 2010 Promo by Cephalopodcast

Later this week, I'm off to North Carolina for ScienceOnline 2010. Bora from A Blog Around the Clock started recruiting me last August. And I, being a girl who can't say "No," decided to jump on in and participate this year.

I was telling some colleagues earlier this week that what intrigues me most about this conference is that while it is all well and good for us educators to promote "21st Century Skills" in classrooms---here is a group of adults (most of whom were educated in "traditional" environments) who are remaking their professional world. Can we, as educators, claim that blogs, wikis, cell phones, and other tools have a place in the classroom when we don't couple that with examples of how real world professionals use these? I won't pretend that the kinds of online tools available to a kindergartner today will be the same as the ones when s/he exits graduate school, but I will predict that open access and the ability to connect with others across the globe will be even more important. So I am approaching this conference with a bit of an anthropological take.

I am leading or co-moderating three sessions (more on that in another post). It has been fun for me to un-think my usual approach for this "unconference," where sessions are driven by the knowledge, skills, and interests of participants. I like the idea that I don't have to be the expert...and I also like the idea of being part of the collective expertise for the sessions I attend. My plan is to immerse myself in as many events as possible. I am hoping not to become too starstruck among the science bloggerati that will be present: Carl Zimmer, PZ Myers, Dr. Isis, and more.

So, expect a slew of posts (I believe that is the proper collective noun) this week about ScienceOnline 2010. You can also follow the event on Facebook, Twitter, and via the main conference wiki. Just click the ScienceOnline 2010 link at the beginning of the post. If you can't be there with us in person, at least you can be present in ether.

10 January 2010

Universal Design

In most public school classrooms in the US, it isn't unusual to have at least one student on an IEP (Individualized Education Plan) or 504 Plan. These plans identify accommodations for students with one or more disabilities so that they may fully participate in the educational program offered at the school. Over the years, I've learned a lot about how to adjust curriculum, instruction, and assessment in the classroom for students with these plans---but I have to admit that until recently, I hadn't thought about accessibility on a large scale. In many circles (both inside and outside of education), the term Universal Design is used to refer to "solutions...that are usable and effective for everyone, not just people with disabilities."

What are the costs and benefits of using technology to achieve Universal Design?

As our state moves to a testing model that is computer-based, some have pointed out that there are great possibilities for Universal Design. It is relatively simple for all students (not just those who are blind or have reading disabilities) to plug in headphones and listen to the test. Although not currently under discussion, color options for text and graphics, the ability to magnify text, and question layouts that encourage focus are all examples of ways we could change the testing experience for students. (I thought this idea on color-coding for the color blind was intriguing, and believe the symbols would be useful for nearly all students.) It doesn't change the content or structure of the test---only the way it can be presented.

I have already had several inquiries from the special education community in our state about our upcoming technology assessments. And why not? They have not always been included in the conversations, perhaps due to the view that students' IEPs could cover any accommodations as opposed to the test itself being flexible. I cannot guarantee that we will develop assessments that can be used by every possible group, but I will guarantee that Universal Design will be a consideration throughout the process.

With the possibilities that come with technology, there are also costs to consider. One of the most interesting articles I've run across in this regard was in the New York Times this week. It asks, "With New Technologies, Do Blind People Lose More Than They Gain?" The article centers on the illiteracy developing among the blind because Braille is no longer as necessary as it once was. When you can have a computer read all your text, why learn to read yourself? Beyond that is an interesting cultural commentary from within the blind community about the "elite" who use Braille vs. those who don't (and tend to suffer economically).

This makes me wonder about other possible pitfalls to increasing access and where the balance is. In our zeal to design universally, are we neglecting other considerations along the way?

08 January 2010

Makin' a List...Checkin' It Twice

In my recent search to build a better rubric, I have run across the idea of using a checklist several times. Assessment gurus offered a checklist as an alternative to using a rubric. I wasn't convinced that this was a viable option for me in my current situation. It felt too binary (present/absent)---and if that was going to be the case, why not just give a test made of objective items?

And then I was pointed to an article on National Public Radio (NPR) this week about The Checklist Manifesto by Atul Gawande. Although the book is written by a surgeon about the world of medicine, I am wondering what the applications might be for education.

"Our great struggle in medicine these days is not just with ignorance and uncertainty," Gawande says. "It's also with complexity: how much you have to make sure you have in your head and think about. There are a thousand ways things can go wrong."

At the heart of Gawande's idea is the notion that doctors are human, and that their profession is like any other.

"We miss stuff. We are inconsistent and unreliable because of the complexity of care," he says. So Gawande imported his basic idea from other fields that deal in complex systems.

"I got a chance to visit Boeing and see how they make things work, and over and over again they fall back on checklists," Gawande says. "The pilot's checklist is a crucial component, not just for how you handle takeoff and landing in normal circumstances, but even how you handle a crisis emergency when you only have a couple of minutes to make a critical decision."

This isn't the route medicine has traveled when dealing with complex, demanding situations.

"In surgery the way we handle this is we say, 'You need eight, nine, 10 years of training, you get experience under your belt, and then you go with the instinct and expertise that you've developed over time. You go with your knowledge.' "

Might this be true for the classroom, too? The closest thing to a checklist I have ever seen in education was really more like a flow chart. We had it at an elementary school and used it for developing reading groups for students. If a kid scored X on the latest DIBELS test and the teacher had observed Y, then the kid was placed into Z group and given a particular curriculum. For kids who were behind, the flowchart guided a teacher toward which intervention materials should help eliminate the deficiency. For kids who were at or above standard, there were suggestions as to how to move them forward.
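The placement logic in that flowchart might be sketched roughly as below. To be clear, the thresholds, group names, and materials here are invented for illustration; the actual school chart used DIBELS cut scores and locally chosen curricula that I don't have in front of me.

```python
# Hypothetical sketch of the reading-group flowchart described above.
# All cut scores, group names, and materials are made up for illustration.

def place_student(dibels_score: int, teacher_flagged: bool) -> dict:
    """Map a test score plus a teacher observation to a reading group
    and suggested materials, in the spirit of the elementary flowchart."""
    if dibels_score < 20 or teacher_flagged:
        # Kids who are behind get intervention materials
        return {"group": "intensive", "materials": "intervention kit"}
    elif dibels_score < 40:
        return {"group": "strategic", "materials": "targeted practice"}
    else:
        # Kids at or above standard get suggestions for moving forward
        return {"group": "benchmark", "materials": "enrichment options"}

print(place_student(15, False))
# → {'group': 'intensive', 'materials': 'intervention kit'}
```

The point of the sketch is only that the decision is mechanical once the inputs are known, which is exactly what made the flowchart useful to teachers.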

Teachers are diagnosticians, of a sort. We are expected to determine each child's abilities and then tailor our curriculum, instruction, and assessment to meet students' personal needs. Might a checklist of some sort help us along? I understand that every child is unique and that we aren't making widgets---but teachers are juggling either 25 kids engaged in several content areas of learning at elementary or 150+ kids at secondary in one or more content areas. It isn't reasonable to assume that we can be an expert on every student in every subject area. Perhaps a checklist might provide some guidance?

Here is a sample one for surgeons from the World Health Organization (click to embiggen):

What would be included in a version for education? Who are the stakeholders? Would time for other classroom pursuits be freed up if checklists were available? I don't believe that there will ever be a checklist for instruction---just like we don't see a step-by-step sort of thing in the list shown above. This is more of a pre/post idea. The "during" is still quite flexible.

At the other end of the spectrum is the assessment piece, which is where I originally started. I'm still not 100% convinced that a checklist is appropriate for the kind of assessment and evaluation I want to build, but I am no longer going to rule it out. Perhaps by giving teachers another way to identify what a student can and cannot do in terms of using technology (and some ideas about interventions), a large-scale assessment might gain additional functions. This alone makes checklists worth a second look.

02 January 2010


I've been collecting a variety of rubrics recently, along with various bits and pieces of research and advice on their construction. It's not that I haven't written them before. I just haven't had to write them for standards that are like the one below.
Generate ideas and create original works for personal and group expression using a variety of digital tools.
  • Create products using a combination of text, images, sound, music and video.
  • Generate creative solutions and present ideas.
I've been feeling a little "descriptipated," that is to say, having trouble cranking out what I think would represent the levels of a rubric for the standard shown above (and others like it). As I mentioned in my last post, this sort of standard reminds me of something you might see in the arts---there is a creative process involved. Several arts rubrics were recommended to me. Here is an example of one:

This type of rubric makes me a little sad. Why? First of all, it's about quantity---not quality. Even if every child is not a Monet or Picasso, I would like to think that their understanding of the basic principles should be assessed, as opposed to how many of the principles show up in the product. If one student product demonstrates only 4 attributes, but at an expert level, it scores lower than a product showing 7 poorly executed attributes---because, hey, 7 is better than 4, right? I think there's something wrong with this approach.

Although I have not included this information with the graphic above, a score of "3" is at standard for this product (as described in its directions). I would like to see detailed descriptors for every level---but if you're only going to do one, make it at the standard, not at Level 4. I'm also a little concerned at the number of standards each part of the rubric ostensibly addresses. How do you give effective feedback to kids when there is a melting pot of standards present in a holistic rubric?

I have to say that quite a few of the rubrics I'm running across suffer from one or more similar issues. This is especially bothersome when I see things like this:

Why? Because there is no requirement that the student actually consider the validity of the source. If s/he comes up with any three sources and lists the basic information...it's "Excellent." We are missing opportunities for asking for critical thinking from our students in favor of something more rote. Apparently, reading three things is good enough (regardless of veracity), as long as you include the title, author, type of source, and date in a list. I have other beefs with this rubric, including the "Minimal" to "Excellent" labels and the whole "passing/not passing" thing at the top, but that is another rant for another time.

These sorts of examples are clogging up my thinking. I've been needing a healthy dose of brain fiber...a mental cleanse and new starting point in writing descriptors. What I'm starting with now is thinking about a basic question: What is involved in creating a multimedia product? There's likely some research...some understanding about which tool is best for developing the product you're after (e.g. ppt, voicethread, wiki, etc.)...the ability to make original content (as opposed to just pulling it from others into a single product)...a sense of how to use the various elements (graphics, text, audio) to enhance the overall message.

Now I can start thinking about how a beginner might approach such a task (e.g. probably borrows all content) vs. an expert (records own audio/video) and using these to write some descriptors. It's not about quantities---did the ppt have 10 slides with three bullet points each? Are there 3 graphics and two outbound links from the webpage?---but the actual characteristics of a performance. This is admittedly a much more difficult thing to do. I think it will be more meaningful in the end in terms of what kinds of feedback students get and the instructional steps teachers can take next.

Onward we will go with this task this month. I will share what I am allowed to float along the way. I'm hoping not to feel the mental bloat of descriptipation much longer.