Thursday, June 24, 2010
Hooked on Measurement
Just last year, Stanley Fish was playing Clint Eastwood with his manifesto: Do Your Job, Punk! (or, My Tinfoil Hat Keeps Politics Out of My Teaching--Get Yours Today!) In that widely panned book, he argued that the role of the faculty was to produce and distribute knowledge magically apart from the mundane and political.
Earlier this week he more convincingly took on the student evaluation of teaching and specifically, a Texas proposal to hold tenured faculty “more accountable” by giving faculty bonuses of up to $10,000 for earning high customer assessments of specified learning outcomes.
Fish makes two arguments against the proposal. He squanders pixels bolstering his weaker point, that students aren’t necessarily in a position to judge whether Fish-as-teacher-phallus has, ugh, “planted seeds that later grew into mighty trees of understanding."
Far better is his second point:
Students tend to like everything neatly laid out; they want to know exactly where they are; they don’t welcome the introduction of multiple perspectives, especially when no master perspective reconciles them; they want the answers. But sometimes (although not always) effective teaching involves the deliberate inducing of confusion, the withholding of clarity, the refusal to provide answers; sometimes a class or an entire semester is spent being taken down various garden paths leading to dead ends that require inquiry to begin all over again, with the same discombobulating result; sometimes your expectations have been systematically disappointed....
Needless to say, that kind of teaching is unlikely to receive high marks on a questionnaire that rewards the linear delivery of information and penalizes a pedagogy that probes, discomforts and fails to provide closure. Student evaluations, by their very nature, can only recognize, and by recognizing encourage, assembly-line teaching that delivers a nicely packaged product that can be assessed as easily and immediately as one assesses the quality of a hamburger.
This part rings mostly true for me. No question, though, Fish generalizes too broadly about students and evaluation instruments. As students enter majors and graduate programs, they are of course far more likely to welcome the sort of intellectual adventure that he describes.
And it’s just plain out of touch with the subject he is purporting to address to claim that all kinds of student evaluation are “by their very nature” (huh? philosopher much?) of the sort that can “only recognize” teaching-as-information-delivery. Nonetheless, that’s the kind administrators mostly impose, so his point is valid despite the unwarranted generalization.
That said, I personally like getting student evaluations of my teaching, even the lame sort that predominate and that Fish is critiquing here. I learn things even from bad instruments poorly used by respondents with little knowledge of the field or with imperfect judgment, and so on.
My concern is with the way these instruments are misused--by activist administrators and politicians, aided and abetted by paid policy flacks. The managerial literature cheerfully describes all this as the “assessment movement,” a means of consolidating administrative control of the “institutional mission."
Faculty themselves, even with tenure, learn all too quickly to teach to the instrument.
Example: long after receiving tenure (twice!) I once got mid-range scores in response to a question asking students to assess whether their capacity for critical thought improved. The next term I included a twenty-minute exercise studying different definitions of critical thought the week before they took the survey: my scores jumped to the top of the range, with no other change in the syllabus.
I use that example because it’s double-sided. On the one hand, it shows how a modest change can essentially manipulate the results or, more to the point, manipulate the students providing the results.
On the other hand this modest change, motivated by a base consideration, was also a real one: it marked a moment where I took seriously the importance of reflection in the learning process.
By asking students to reflect on what had happened to their thinking in the class, they were not only more likely to appreciate the teaching, they were more likely to appreciate, value--and retain--the change itself.
So the stupid instrument, my vanity, and a modest change resulted in better learning.
While that instance of teaching to the instrument worked out more or less fine, most responsible studies are pretty clear that teaching to the instrument is generally harmful.
For instance, one Fish commenter quoted a reliably constructed study that concluded “professors who excel at promoting contemporaneous student achievement teach in ways that improve their student evaluations but harm the follow-on achievement of their students in more advanced classes."
In other words: teaching to get high customer assessments produces intellectual junk food: the focus group says “yum!” but it’s all bad news after that. This is consistent with study after study on “teaching to the test” in K-12: the more tightly that management and politicians grip the handful of sand that is teaching and learning, the less they grasp.
Most of the commenters don’t address the motivation for the Texas proposal, which is to standardize and marketize the curriculum along the lines supported by the current administration. An easily assessable form of learning-as-information-download is an easily commodified form of learning: “Log in to Pixel University, where you get the exact same education as Yalies!” It’s also more easily controlled by a political bureaucracy, along the lines of K-12. Both Republicans and Democrats are actively supporting for-profit “education providers,” and the leading edge of their contribution is redefining knowledge as information delivery.
So what’s best about Fish’s effort here is the emphasis upon the nature of learning itself, which is easily distinguishable from information download.
The most difficult lesson for my first-year students to learn--the most frustrating, the one with the longest-term impact--is the construction of a review of scholarly literature, toward posing a research question unanswered by that literature. I ask them to zero in on a “bright spot” in the literature, where conflicting views are unresolved, or a “blank spot,” a question that hasn’t been posed. I try to help them to think of a modest but original way that they might advance the conversation.
The lesson takes them on a journey of the sort that Fish describes, full of frustrations and ventures into the failings of academic prose, dead ends and discombobulations. What they learn is that any act of knowledge origination emerges from a vast multivocal conversation and is framed by the professional modesty of the actual researcher. They are often amazed by the narrow frame of actual research questions, the extent of qualifications and hesitations, and the ways that knowledge is produced by error. They are often confused by the extent of collaboration, the fact that questions aren’t constructed in binary terms, the fact that questions are constructed, and by the amount of time spent acknowledging the diverse views and paths explored by one’s professional colleagues.
As Fish points out, students come to us trained to see “the master perspective” (history-as-objective-fact, e.g., rather than history-as-historiography, the writing of Helen Keller, Jack London, and Einstein’s socialism into, or out of, the conversation). Or at most they see two perspectives, the binary either/or of right and wrong, for and against, good and evil, etc. I tell them that easy clarifications--such as “are you for or against” such and such a proposition--are usually trick questions, that making knowledge and the act of learning entail entering into a hive of confusion, ambiguity, and error.
They don’t always like this lesson, which is deeply experiential: they have to try to read difficult things, ask for help, wait in line to get journals delivered to them. But they are always glad to have had it, and it clearly yields real results in subsequent classes.
Can this sort of lesson and journey be assessed? Yes, but not so easily by the sort of instruments we use for the purpose. We do need better instruments. For instance, measurement per se is not useful: you might say losing 20 lbs at Pixel U is the same as losing 20 lbs at Swankfield--until you learn that at one school you lost the weight by exercising, and at the other they amputated a limb.
More than better instruments, though, we need better attitudes toward these instruments. We could start with a critical understanding of why administrations and politicians support the kind of assessments they do, and not the many better alternatives.
Above all: we need to be able to offer a clear, cogent justification of education as learning and distinguish between learning and download.