Archive for January, 2011
I have been giving a boatload of feedback these days. Our ed school students are ramping up to submit professional articles for publication, my advisees are starting the job hunt and are looking for resume feedback, some chapters for the book we’re working on have crossed my desk, and my students are blogging like mad to reflect on articles.
I like giving feedback. I think it’s empowering to students. And as someone whose mentor once said, “They’re master’s students … we should be helping them become masters at their profession,” it’s empowering to me, too. Because feedback is one of the ways that we can help guide our students into professional ways of being. And that’s heady stuff.
But I’ll be honest. I don’t like *grading.* (Sorry, SI students. I just don’t.) Let me explain. I like going through and giving lots of feedback. I don’t like the moment of plunking down the grade.
I think it’s part of why I design elaborately long rubrics or checklists — because I think it will remove some of that sting of whittling a student down to a number or a letter grade. I don’t like quantifying something down to just one indicator. I’m never quite certain that such a system does a student the kind of justice he or she deserves. And I often end up tweaking rubrics for the next time around because I realize I didn’t measure something I needed to.
My hope, as I give out all that feedback and chart each characteristic on a rubric or checklist, is that students walk away with a better understanding of the individual skills and strategies in which they excel or in which they are weak. So if they get an 89, they know it’s because they didn’t quite master one aspect, and they can look to the rubric or checklist or comments to know where the dip came.
While summative assessment can technically include all that feedback, the reality is that it almost always boils down to a grade. (Not here, but that’s an exception. And for the record, that feedback could nail or praise me better than any grade could.)
A summative evaluation – most often a percentage or a letter grade – depends very much on the person who does the grading, doesn’t it? It can be inherently subjective. Rubrics and checklists can minimize wild fluctuations in the scores given by that person, but even the rubrics and checklists implicitly represent the bias of the person who made them. Remember Ralphie in *A Christmas Story*, who was so certain that his Christmas essay was going to be A+++++? His implicit evaluation criteria were different from his teacher’s. Ralphie didn’t think “You’ll shoot your eye out” would be part of the essay criteria.
But the problem with rubrics and checklists can be that they over-prescribe what student work should look like. Instead of letting our students loose to soar beyond our expectations, we can accidentally rein them in by being overly specific. We can spend so much time mandating that students follow certain formatting procedures or citation protocols, for example, that we can overlook whether the cited material actually leads to a synthesis.
I saw a gifted and talented checklist once. Every single item on the checklist could be boiled down to, “Did you follow directions?” If you had the right pages in your binder in the right place, if you filled in all the brainstorming sheets, if you had captions on your poster board, on and on … A for following directions. Which wouldn’t be so bad if the purpose of gifted and talented programs weren’t to stretch kids into creative and deeper thinking, right? I *know* those teachers didn’t mean to grade obedience … but they did. Now along the way, I’ll bet my life on the fact that they gave qualitative feedback that pushed students further and deeper into their work. It just wasn’t reflected in the checklist.
Enough about me.
For an upcoming Nudging column, we’d love to hear YOUR experiences with summative assessment. How do you participate in the evaluation of students? Do you grade bibliographies, for example? If so, what are you grading? Quality of resources? Accuracy of formatting?
How much do you grade process over product?
How do you make sure students know the specific areas in which they’ve done exemplary work versus the areas that still need improvement?
Hope you’ll share your ideas below. And please, people. Don’t shoot your eye out!