Thursday, August 24, 2006
For the Historical Record: Cog Sci and Lit Theory, A Chronology
Back in the ancient days of the Theory’s Empire event I contributed a comment paralleling the rise of Theory with that of cognitive science. That parallel seems, at least to me, of general interest. So I decided to dig it out from that conversation and present it here, in lightly edited form. The parallel I present does not reflect extensive scholarship on my part, no digging in the historical archives, etc. Rather, it is an off-the-top-of-my-head account of the intellectual milieu at the periphery of which I have lived my intellectual life.
I take 1957 as a basic reference point. That’s when Frye published his Anatomy; that’s when Chomsky published Syntactic Structures. 1957 is also when the Russians launched Sputnik, the first artificial satellite to circle the globe. The Cold War was in full swing at that time and Sputnik triggered a deep wave of tech anxiety and tech envy. One consequence was more federal money going into the university system and a move to get more high school students into college. So we see an expansion of college and university enrollments through the 60s and an expansion of the professoriate to accommodate it. Cognitive science (especially its AI side) and, perhaps to a lesser extent, Theory rode in on this wave. By the time the federal money began contracting in the early 70s an initial generation of cognitivists and Theorists was becoming tenured in, and others were in the graduate school and junior faculty pipeline. Of course, the colleges and universities couldn’t simply halt the expansion once the money began to dry up. These things have inertia.
We may take cognitive science for granted now, but the fact is that there are precious few cognitive science departments. There are some, but mostly we’ve got interdisciplinary programs pulling faculty from various departments. These programs grant PhDs by proxy; you get your degree in a traditional department but are entitled to wear a cog sci gold seal on your forehead. As Jerry Fodor remarked somewhere (I forget where) in the last year or three, most cognitive psychologists don’t practice cognitive science. They do something else, something that most likely was in place before cognitive science came on the scene.
Let’s look at the late 1940s through the early 1960s:
1948: Norbert Wiener, Cybernetics
1949: Claude Shannon and Warren Weaver, The Mathematical Theory of Communication (expanding Shannon’s 1948 Bell System Technical Journal paper)
1953: Double helix model of DNA published in Nature (Watson and Crick)
1956: The Dartmouth Summer Program on Artificial Intelligence (coined the term “artificial intelligence”); The Magical Number Seven, Plus or Minus Two (George Miller)
1957: Syntactic Structures (Chomsky), Anatomy of Criticism (Frye), Mythologies (Barthes)
1958: Anthropologie Structurale (Lévi-Strauss), The Computer and the Brain (von Neumann)
1959: Chomsky’s review of Skinner’s Verbal Behavior
1961: Histoire de la Folie (Foucault)
We can conveniently mark the coming-to-visibility of high theory with the 1966 structuralism conference at Hopkins and the subsequent publishing of its proceedings (my modest contribution is on pp. 243-244):
Richard Macksey and Eugenio Donato, eds. (1970). The Languages of Criticism and the Sciences of Man.
The following two volumes can serve to mark the unveiling of cognitive science as a specific, if diffuse, interdisciplinary activity:
Marvin Minsky, ed. (1968) Semantic Information Processing, Cambridge, Mass.
Endel Tulving and Wayne Donaldson, eds. (1972) Organization of Memory.
This is when things, in both arenas, really started to take hold and move out. Note that there was a real, though failed, attempt on the part of literature to hook up with cognitive science through Chomsky (stylistics and Culler’s early structuralism). Terms such as “competence” and “deep structure” gained some purchase but, for better or worse, the substance of Chomsky’s (often obscure) thought remained safely in linguistics. There is also a story grammar literature, developed mostly in the 1970s and 1980s, that is beholden to both strands of thinking. For that matter, it strikes me that one Sheldon Klein did a computer simulation of Lévi-Strauss’s myth theory that, in fact, looked more like Propp. I read a tech report on this sometime in the mid-1970s.
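For readers who have never seen one, a story grammar is just a set of rewrite rules over narrative units rather than phrases. Here is a toy sketch in Python; the rules and event names are my own illustrative inventions, not drawn from Propp or from any published grammar:

```python
import random

# A toy story grammar: rewrite rules over narrative units. Uppercase symbols
# are nonterminals; strings not in the grammar are terminal events. All
# rules and event names here are illustrative inventions.
GRAMMAR = {
    "STORY":        [["SETTING", "COMPLICATION", "RESOLUTION"]],
    "SETTING":      [["a hero lives in a village"]],
    "COMPLICATION": [["a villain causes harm", "QUEST"],
                     ["the hero lacks something", "QUEST"]],
    "QUEST":        [["the hero departs", "the hero is tested"]],
    "RESOLUTION":   [["the villain is defeated", "the hero returns"],
                     ["the lack is resolved", "the hero returns"]],
}

def generate(symbol, rng):
    """Expand a symbol depth-first into a flat sequence of story events."""
    if symbol not in GRAMMAR:        # terminal: an atomic narrative event
        return [symbol]
    events = []
    for part in rng.choice(GRAMMAR[symbol]):
        events.extend(generate(part, rng))
    return events

if __name__ == "__main__":
    print(". ".join(generate("STORY", random.Random())) + ".")
```

Different random choices yield different plots, but every plot is well-formed by construction, which is the point of the story-grammar enterprise: narrative coherence as grammaticality.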
[As for viciousness, the inter-school arguments by and around Chomsky are as bitter as anything in and around Theory. The rancor continues to this day.]
It seems to me that by the late 70s and early 80s the main ideas were on the table in both camps. Consolidation was setting in. The early 80s also saw an attempt to commercialize AI technology, but that went bust by 1985 or so and Roger Schank, among others, began talking about AI winter. Two benchmarks:
Stanley Fish (1980) Is There a Text in this Class?
George Lakoff and Mark Johnson (1980) Metaphors We Live By.
Fish is a well-known phenomenon at The Valve, so I’ll leave him alone. But Lakoff deserves a remark or two. He was an early student of Chomsky’s who, along with James McCawley, Haj Ross, and others, developed something called generative semantics and thereby precipitated a nasty war within Chomskydom (see The Linguistics Wars by Randy Allen Harris). While generative semantics is still mostly syntax, the metaphor book is deep in semantic territory, which had pretty much been forbidden to linguists by Leonard Bloomfield, a ban Chomsky was happy to reinforce. Lakoff and Johnson see Metaphors (and associated work) as marking a second-generation cognitive science, one that emphasizes embodied cognition. From my (biased) POV this second generation looks a lot like New Criticism with a new set of tropes and a modest interest in laboratory experimentation.
Two more reference points: In 1986 J. Hillis Miller was president of the MLA and thus delivered the presidential address. He complained at that time about the decline of interest in deconstruction; the address was published in the May 1987 issue of PMLA.
Meanwhile, the 1984 meeting of the American Association for Artificial Intelligence had a panel discussion on “The Dark Ages of AI.” This appeared in AI Magazine in the fall of 1985. The field was running low on new ideas and the business community was souring on AI’s commercial promise.
I don’t know what Chomsky & Co. were up to at that time.
As far as I know there really isn’t anything in cognitive science that’s parallel to Theory’s Empire. That is in part because these two intellectual areas are organized along different lines, with different publication habits and pedagogical needs. But I’ll list three anthology volumes:
R. Núñez and W. J. Freeman, eds. (1999). Reclaiming Cognition.
R. F. Port and T. van Gelder, eds. (1995). Mind as Motion: Explorations in the Dynamics of Cognition.
J. Petitot, F. J. Varela, B. Pachoud, and J.-M. Roy, eds. (1999). Naturalizing Phenomenology: Issues in Contemporary Phenomenology and Cognitive Science.
These volumes all argue that “classical” cognitive science has failed and that we need a more dynamic approach, one more realistic about the nervous system and, incidentally, friendlier toward the continental tradition in philosophy. Walter Freeman, in particular, has been pursuing a rapprochement with Derrida.
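The flavor of the dynamical alternative can be suggested with a toy example: instead of applying a discrete rule, the system’s state settles into one of two attractors, and that settling is the “decision.” The double-well system below is a generic textbook illustration of my own choosing, not a model from any of these volumes:

```python
# A generic double-well dynamical system: the state x rolls downhill on the
# potential V(x) = -x**2/2 + x**4/4 - evidence*x, settling into an attractor
# near +1 or -1 depending on the (weak) evidence supplied. Illustrative only.

def settle(evidence, steps=2000, dt=0.01):
    """Euler-integrate dx/dt = x - x**3 + evidence from rest; return final x."""
    x = 0.0
    for _ in range(steps):
        x += dt * (x - x**3 + evidence)
    return x
```

Here nothing computes the answer by rule; weak positive evidence tips the trajectory toward the attractor near +1, weak negative evidence toward the one near -1. That is the kind of story these volumes want to tell about cognition generally.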
My quick and dirty reading of this intellectual history is that it has been driven by ideas that began crystallizing during the 1950s. Those ideas have now given up their vitality. There’s nothing new to be gained from them. We stand in need of fundamentally new starting points. Just what they might be . . .
Frank O’Connor’s highly regarded book on the short story, The Lonely Voice, was published in 1957, too, an extraordinarily rich year of book publishing for literary criticism, I think, though one sorely lacking the vital diversity of the multicultural (and gender) boom of recent decades. The first five books of 1957 listed below may well be considered landmark works of criticism; possibly others are as well. In addition to a couple of lively books by Gilbert Highet and others, the standout books of 1957 include:
The Lonely Voice, Frank O’Connor
Anatomy of Criticism, Northrop Frye
Politics and the Novel, Irving Howe
The Rise of the Novel, Ian Watt
The American Novel and Its Tradition, Richard Chase
The Shape of Content, Ben Shahn
Mythologies, Roland Barthes
The Court and the Castle, Rebecca West
Contexts of Criticism, Harry Levin
The Territory Ahead, Wright Morris
The Living Novel, Granville Hicks, Ed.
Literature in America, Philip Rahv, Ed.
Literary Criticism: A Short History, Wimsatt and Brooks
Bracket the list with The Mirror in the Roadway, Frank O’Connor, 1956, and American Moderns: From Rebellion to Conformity, Maxwell Geismar, 1958....
I take it from your summary Bill that there hasn’t been much if any professional literature on this disciplinary comparison. Am I right in assuming that? I started as a linguistics major at Yale, and then switched to literature (just graduated), so I sort of got my education right on this divide or non-divide, and got primarily interested in just this type of parallelism. Such comparison seems fruitful both for the internal futures of both fields, as you suggest, but also, I think, on the more abstract level of intellectual history and studies of disciplinary formation. I’m heading over to a hist/phil of science program at Cambridge this fall and hope to pursue the kinds of resonance you’ve pointed out. So I wondered if you had come across any “professional” interest in the cross-disciplinary, historical sort of research.
I haven’t really looked for that kind of literature, Jeremy, but I think that if anything major had been done, the ripples would have gotten to me. The lit crit world, of course, has had a number of studies of the professionalization of the discipline, which I’ve not read. I don’t think cog sci has been so self-conscious, though I don’t really know. Some years ago Howard Gardner wrote a book on the cognitive revolution that may have a historical angle to it. I know there’s a journal of the history of computing; just how much it gets into the AI end of things, I don’t know.
One place to start would be Flight from Eden: The Origins of Modern Literary Criticism and Theory, which starts at the beginning of the 20th century with Russian formalism. Two people you should contact are Reuven Tsur at Tel Aviv and Haj Ross at U North Texas at Denton. Ross was a student of Chomsky’s back in the 1960s but he went across the river and studied poetics with Jakobson as well. For the last several years he’s devoted more energy to poetics and has established a poetics program at Denton. Tsur visited at Yale in the 1980s and, I believe, that’s where he got interested in empirical work on the sounds of poetry. But Roger Schank and Co, were at Yale during that period and Tsur certainly knew about their work.
And there’s me. I did a Ph.D. in literature at SUNY Buffalo in 1978. My dissertation was on “Cognitive Science and Literary Theory.” Two years before that I’d published an article on “Cognitive Networks and Literary Semantics” in the Centennial issue of MLN. That article was basically hard-core knowledge representation using a literary example (Shakespeare’s Sonnet 129). As far as I know (other than some Chomsky stuff and early story grammar) that’s the deepest and earliest attempt to bring cognitive science and literature together. In terms of influence, it went nowhere.
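For the curious, the knowledge-representation style of that era can be sketched as a semantic network: concept nodes joined by typed arcs. The code below is a minimal illustration with made-up relations; it does not reproduce the analysis of Sonnet 129 in the MLN article:

```python
from collections import defaultdict

# A minimal semantic network: concept nodes joined by typed arcs, the basic
# machinery of 1970s knowledge representation. The relations below are a
# made-up illustration, not the analysis from the MLN article.
class SemanticNetwork:
    def __init__(self):
        self.arcs = defaultdict(list)    # head -> [(relation, tail), ...]

    def add(self, head, relation, tail):
        self.arcs[head].append((relation, tail))

    def query(self, head, relation):
        """All nodes one typed arc away from head via the given relation."""
        return [tail for rel, tail in self.arcs[head] if rel == relation]

net = SemanticNetwork()
# A few toy propositions loosely suggested by the sonnet's "lust in action":
net.add("lust", "is-a", "desire")
net.add("lust", "leads-to", "shame")
net.add("shame", "follows", "action")
```

The interpretive work then consists of mapping the poem’s lines onto paths through such a network, which is what made this approach feel like a bridge between linguistics and criticism.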
I just got this note from Reuven Tsur:
That’s true, Tsur visited at Yale in the 1980s, but he got interested in empirical work on the sounds of poetry at the Haskins Laboratories, ten minutes’ walking distance from the department of Comparative Literature. However, the professors of Comparative Literature had not heard of this institute until they came to make recordings for Tsur. (Professors of psychology and linguistics, by contrast, were deeply involved in research at the laboratories.)
Indeed, Tsur did participate in all the courses in artificial intelligence by Roger Schank and Co. (Wendy Lehnert) available that year. He also met, on private occasions, Robert Abelson.
1959: Chomsky’s review of Skinner’s Verbal Behavior
The Noam vs. BF chestnut is worth a re-perusal: whatever difficulties a materialist, behaviorist account of language acquisition presents, they are less problematic than those posed by the Chomskyan Plato-meets-Rousseau model of Universal Grammar. And there exists ample evidence demonstrating the associative and observational aspects of language learning.
NC’s deep structure (though NC seems to change his programme about every 7 years) is quite a bit more of a stretch than operant conditioning and reinforcement; BF never denied genetic determinism or the possibility of a biochemical account of cognition, as Chomsky seems to suggest he did.
Just what they might be . . .
AI was, to some extent, re-invigorated when the idea of Embodied Cognition took hold. The credit for this, as Michael Anderson points out in his field review in the journal Artificial Intelligence (see http://ecl.ucsd.edu/EmCog_Anderson.pdf), goes to Rodney Brooks, now the director of the AI Lab at MIT. Two of his famous papers, “Intelligence without Representation” and “Intelligence without Reason,” can be found here:
More work on this idea has been done by David Chapman and Phil Agre, both of whom worked for Brooks at MIT. It continues and is very promising, I think. Interestingly, Agre, in his book Computation and Human Experience (an extract can be found at http://polaris.gseis.ucla.edu/pagre/che-intro.html), draws extensively on the humanities (Rorty and Derrida, particularly deconstruction) to locate and critique the philosophical foundations of what is called computationalism, or GOFAI (Good Old-Fashioned AI). I’m not sure how many AI researchers still buy his thesis (I do); however, researchers working in Human-Computer Interaction do draw upon continental philosophy, which was first introduced to the field by Terry Winograd in his Understanding Computers and Cognition (which in turn drew heavily on Hubert Dreyfus’ interpretations of Heidegger).
Interesting. I suspect that few of the cognitive critics who talk about embodied cognition have much awareness of Brooks at all. What he’s up to is too deep into technical issues to register in the literary world.
Hi Bill, while I wouldn’t classify what Brooks does as excessively “technical” (I presume that by technical you mean too mathematical?), you’re right in that I can’t imagine how Brooks’ ideas might be applied to literary theory and criticism. Still, he’s probably the only technical writer I know who writes with such verve (see “Intelligence without Representation”).
However, what’s interesting is that Brooks is actually building on Dreyfus’ criticism of AI, even though he doesn’t put it that way. Dreyfus is famous for criticising AI’s symbol-based approach to intelligence. This tendency, which Phil Agre has called “computationalism,” posits that intelligence arises out of the manipulation of mental representations: there is a world, we build mental representations of it in our heads, manipulate those representations (“computation”), and then apply the result back to the real world. Dreyfus criticised this aspect of AI and, drawing on the work of Heidegger and Wittgenstein, argued that it is impossible to create “intelligence” this way, because human intelligence is clearly not simply the result of transforming mental representations. He argued that AI’s initial successes, widely reported, were illusions because the researchers had restricted themselves to simple situations and “block worlds.”
The problem with Dreyfus’ criticism, for engineers, is that while he criticised the approach, he offered no suggestions, and I can’t really blame him, because he isn’t trained to do that. As engineers, we’re sort of trained to “do” as opposed to “think,” so Dreyfus’ criticisms left many AI researchers unmoved, even hostile. And then of course, the level of invective rose. In his book Dreyfus all but calls Minsky an idiot (“idiotic theory of free will” are the words, I think). But in a way, there wasn’t any alternative, really. Computers are all about representations and their manipulation, so it was (and still is) hard to see how AI could proceed any other way.
Brooks is responsible for changing that. His point is that AI has wasted too much time on chess-playing programs and such, while we don’t even have robots that can do the simple things that ordinary insects do: walk without falling, avoid obstacles, basically simple things. His main contribution is his “subsumption architecture,” with which he and his team have actually built robots that are able to do these things (walking, avoiding obstacles, following paths) without actually using any form of “mental representations.” That’s basically what Dreyfus was saying too. (Dreyfus, however, has hardly any good words for the new approaches. I read his preface to the second edition of What Computers Can’t Do, and his point was basically “yada yada yada, this is all good, but they still haven’t figured out how to do this, this, this, etc.” Come on, give them time; at least they took your criticisms seriously!)
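The core idea of subsumption is simple enough to sketch: independent behavior layers each map sensor readings to a motor command, and a higher-priority layer suppresses those below it. The layer names and sensor format here are invented for illustration; this is not Brooks’ actual implementation:

```python
# A minimal subsumption-style controller: behavior layers, highest priority
# first, each mapping sensor readings to a motor command or passing (None).
# Layer names and the sensor dict format are invented for illustration.

def avoid(sensors):
    """High priority: turn away when an obstacle is close."""
    if sensors.get("obstacle_distance", float("inf")) < 1.0:
        return "turn"
    return None                     # no opinion: defer to lower layers

def wander(sensors):
    """Low priority: default behavior, always drives forward."""
    return "forward"

LAYERS = [avoid, wander]            # ordered from highest priority down

def control(sensors):
    """The first layer that produces a command subsumes those below it."""
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command
```

Note that no layer builds a model of the world; each reacts directly to sensing, and coordination comes only from the priority ordering. That is the sense in which the robots get by without “mental representations.”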
I was actually very curious about how embodied cognition ideas have been applied to literary theory and criticism (We’re sort of starting to grapple with them in Computer Science). Perhaps you might want to write a Valve post on that?
I’m quite favorable to the “against representation” argument, though I tend to buy a version from Walter Freeman’s neuroscience. I’m not going to even try to convince literary folks that we’ve got to scrap the notion of representation.
As for embodied cognition and literature, it’s really simple: cognitive metaphor theory is half of it, the notion that the mind is in the brain is another half, and something else is the third half. I’ve got four long articles at PsyArt that lay out something of how I go about it. In my (now ancient) dissertation I developed a cognitive network model that I put to literary uses; the model was loosely related to psychological data, and even some neural data.
I end the above chronology in the mid-1990s. I wonder what would happen if I extended it into the current millennium. Would, for example, object-oriented ontology appear? Probably. Would it appear in one line, the Theory/Continental Philosophy line, or in both?
I observe that Tim Morton is an OOO-er who comes out of Continental philosophy. Ian Bogost, though he does appear to have Continental intellectual roots, is also a coder. And that puts him in the cognitive science line of development.
Is that how it goes? Is that how it’s coming to be?