Sunday, May 28, 2006
It looks like this year’s SAT results dipped significantly from last year’s. Here’s an AP story on preliminary findings, which come from university admissions offices. A full picture won’t appear for a few weeks, but some schools report a serious decline in the cumulative numbers. The UC system, for instance, tallied a 15-point slide.
The last time verbal scores dipped significantly, in 2002, one researcher at the organization that administers the test suggested that the cause lay in an increasing portion of high school English classes being devoted to visual materials. But scores rose the following year, and 2004 NAEP data indicated that the amount of reading assigned to 8th- and 12th-graders was rising. At the same time, however, leisure reading by the same group dipped significantly. In fact, trends in leisure reading tracked NAEP scores more closely than trends in in-class reading did. And yet, in addressing the issue of reading scores, educators, researchers, and journalists rarely talk about trends in leisure reading habits.
How many especially easy (or especially hard) questions would it take to change the stats this much? Could it come down to, say, four too-hard questions one year, or four too-easy questions the next, or some other statistically unexpected combination?
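For a rough sense of scale, here's a back-of-envelope sketch in Python. The numbers in it (a conversion of roughly 10 scaled points per raw point) are assumptions for illustration only, not official College Board figures:

```python
# Toy calculation: how many questions' worth of raw score could
# account for a 15-point slide in an average scaled score?
# ASSUMPTION: each raw point converts to roughly 10 scaled points --
# a rule of thumb for illustration, not an official conversion table.

SCALED_PER_RAW = 10   # assumed scaled points per raw point
AVERAGE_SLIDE = 15    # the reported UC-system drop

raw_shift = AVERAGE_SLIDE / SCALED_PER_RAW
print(f"Average raw-score shift: {raw_shift} questions")  # 1.5
```

If that rule of thumb is anywhere near right, one or two harder-than-intended questions per student would be enough to move a cohort average by 15 points, which is why equating anomalies are worth ruling out before reaching for behavioral explanations.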
The original analysis (less leisure reading = lower SAT scores) is ALMOST too shallow to bother commenting on, but here are a few problems with it.
1. It neglects to note that the SAT changed in the past year.
2. It ignores the fact that the new SAT is longer and thus increases the fatigue factor of the test. The new SAT has ten sections rather than seven (including an essay, which many people would find more tiring than, say, reading comp) and lasts 3 hours and 35 minutes (the old test was 3 hours)--longer than the LSAT (law school), the GRE, and the GMAT (MBA programs), longer than a grad seminar--and when you take into account breaks and the amount of time a student typically waits for the test to start, most students can expect to be there for five hours. Throw in a 7:45 arrival time and lots of anxiety; I'd rather do my orals again any day of the week.
3. It does not mention that although the test is longer and has more sections, it actually has fewer Math and Verbal questions in total than the old test did, AND the Math section has a larger percentage of difficult questions (according to the test's own scale) than the old test.
4. It does not mention that the new test no longer has Analogies and Quantitative Comparison questions, which lent themselves much more than other questions to guessing and coachable techniques.
5. There are very limited data about which sections suffered drops in average scores. UVA reported drops in both Math and Reading scores, which, if anything, suggests a cause other than less leisure reading--some cause that would affect Math as much as Reading. See items 1-4, for instance.
Now, I suspect all of this information was neglected out of ignorance rather than a desire to hide some facts.
I’m not sure which I find worse--ignorance or obfuscation. Either, in the service of an argument that seems at best hobby-horse riding, if not self-promotion, is lame. Is this not the guy who thought it worthwhile to call out some grad students at a composition conference for their embarrassing paper titles in order to warn us all that the world is watching the humanities departments? I would think this same guy, a representative of the US government, no less, might also think it worthwhile to spend the time necessary to know his subject in order to ensure that he not leave us thinking he was ignorant, duplicitous, and/or always ready to speak before thinking. Of course, that impression seems just about right on for our government.
The preceding comment, insults and all, begins by referring to the “original analysis (less leisure reading = lower SAT scores).” But the post doesn’t do any analysis of SATs and leisure reading or claim to do so. The correlations drawn concerned NAEP scores, in-class reading, and out-of-class reading, not SAT scores, and they can all be found in the NAEP 2004 Trends report. Also, the AP story linked to addresses several of the points listed by the commentator.
With all due respect, it’s not wholly clear to me from the post what sort of discussion, if any, you were seeking here. That is, the information you posted was straightforward enough—but your post seemed to end abruptly without emphasizing the connection in your mind between paragraphs (1) and (2), beyond the segue from 2006 SAT scores to 2002 SAT scores. Is there a particular conversation you wanted to initiate with your post, or did you just mean to provide information? I apologize profusely if I’m not reading carefully or if I’m missing some obvious point, since it’s at least clear that people’s failure to read carefully is a problem you take seriously.
“Knocking our heads against each other seems an unlikely path to enlightenment,” as the irenic scholar Jeffrey Perl likes to say.
It’s hard to tell, jasmurph, whether you are bashing Mark’s post, the study, the report on the study, or all three. So I’ll just assume you’re after all three of them, and respond to Mark.
That said, the questions could be the variable, the leisure reading could be the variable, or both could be. Stats are slick that way. Reports are often not focused on the most interesting of their own findings.
But, that said, there’s heaps of anecdotal evidence that those who read for fun, and more so those who read widely for fun, become more skilled in comprehension and analysis as well as in writing--that is, in taking in, processing, and expressing ideas. Have we not all had students who are and are not readers and seen the difference in their facility with thinking and language?
There’s no harm in the strongest implication of Mark’s post: encourage the youngins to read, to read many genres and areas of discourse.