Wednesday, October 24, 2007

Intention and Story-Telling: A Neural Explication

Posted by Bill Benzon on 10/24/07 at 07:35 AM

But in the night of thick darkness enveloping the earliest antiquity, so remote from ourselves, there shines the eternal and never failing light of a truth beyond all question: that the world of civil society has certainly been made by men, and that its principles are therefore to be found within the modifications of our own human mind.
- Giambattista Vico

Consciousnesses present themselves with the absurdity of a multiple solipsism, such is the situation which has to be understood.
- Maurice Merleau-Ponty

Though I find myself perplexed over all the wit, intellect, and energy expended in contemplation of peculiar hypotheticals that, so far as I can tell, have yet to materialize - you know, Wordsworth on the beach and such - I nonetheless find myself thinking about intention from time to time. Most recently I’ve been thinking about the notion that all those present at the telling of a story - teller and audience alike - share the same “intentional frame,” where intentional frame is defined with respect to the operations of the nervous system. This would be true of the actors and audience of a play or the audience at a movie as well. I also think it true of all those who read a given novel, though that situation differs sufficiently from those face-to-face situations that the generalization cannot be casually granted.

My object in this post is to lay this out. First I’ll use Walter Freeman to establish the use of intentionality when theorizing about the nervous system. Then I’ll argue that people engaged in face-to-face conversation share the same intentional frame. Then I’ll consider oral story-telling and develop a restricted notion of intentional frame to cover that situation. The point of this exercise is to come up with a way of thinking about story-telling at the neural level.

A Neural View

Let’s consider how a neuroscientist, Walter Freeman, talks about intention. I first encountered Freeman’s treatment of intentionality in his Societies of Brains, but I’m going to quote from an essay on The Self-Organizing Subject of Psychoanalysis (PDF):

The basic Thomist premise is the unity and inviolability of the self that is inherent in the brain and body. This unity does not allow the entry of forms (we would say information) into the self. The impact of the world onto the senses gives rise to states of activity he called ‘phantasms’, which are ephemeral and unique to each impact and therefore cannot be known. The function of the brain is to exercise the faculty of the imagination, which is not present in the Aristotelian view, in order to abstract and generalize over the phantasms that are triggered by unique events. These processes of abstraction and generalization create information that assimilates the body and brain to the world. Assimilation is not adaptation by passive information processing, nor is it an accumulation of representations by resonances. It is the shaping of the self to bring it into optimal interaction with desired aspects of the world. The goal of an action is a state of competence that Maurice Merleau-Ponty (1945) called “maximum grip”. It is the beginning for all knowledge. Sensory impacts that are attended by the brain are only those which can be assimilated on the basis of the pre-existing structure and capabilities of the body and brain, which have already been created through the prior experience.

Thus the manner of acquisition of knowledge is by thrusting the body into the world, from which our word ‘intention’ has come, from the Latin “intendere” = ‘stretching forth’. The thrust initiates the action-perception cycle, which is followed by the changes through which the self learns about the world, and ultimately about God, by assimilation (from the Latin “adequatio”) of the self to the world. There is no transfer of information across the senses into the brain, but instead the creation of information within the brain under the existing constraints of the brain and body. In this respect cognition is related to digestion, which protects the integrity of the immunological self by breaking all forms of foodstuffs into elementary ions and molecules that are absorbed and built into complex macromolecules, each now bearing the immunological signature of the individual self. Similarly, events and objects in the world are broken into sheets of action potentials like pinpoints of light, the ‘raw sense data’ of analytic philosophers and the phantasms of Thomists, and new forms emerge through constructions by the chaotic dynamics in sensory cortices. The explanation for this manner of function of both the neural and the digestive systems is essentially the same: the world is infinitely complex, and the self can only know and incorporate what it makes within itself. This is why neurobiologists using passive neural networks cannot solve the figure-ground problem, why linguists cannot do machine translation, why philosophers cannot solve the symbol grounding problem, why cognitive scientists cannot surmount the limitations of expert systems, and why engineers cannot yet build autonomous robots capable of operating in unstructured environments. The unbounded complexity of the world defeats those classic Platonic and Aristotelian approaches.

So, that’s Freeman on intention. He’s been investigating the nervous system considered as a dynamical system (he’s been influenced by the physicist Hermann Haken among others). In particular, he’s studied the olfactory system, looking at how the brain “stretches forth” to comprehend odors and how it assimilates its own structures to the activity patterns imposed upon it by odorants. We need not worry about the details of his models except to note that they are very much about the timing of impulse trains and how they propagate through the nervous system. [Note: FWIW, Piaget would talk of accommodation where Freeman talks of assimilation. Piaget uses assimilation for a different purpose.]

Conversation

Now let’s consider ordinary conversation between two people. Such conversations generally involve fluid turn-taking; many remarks are elliptical and/or grammatically slipshod. Conversation is dynamic and two-way and is thus unlike the prototypical situation in those strange hypotheticals beloved of philosophers and literary critics, where some observer is simply confronted with (mysteriously) written signs. It’s not at all clear to me that intuitions “pumped” from such a situation (to use John Holbo’s Valvological term) are of much use in understanding basic conversational interaction - but then, that’s not what those strange tales are about, is it?

The situation of written signs allows us to slip into what the cognitive linguists call the conduit metaphor for communication. This is the notion that the author puts meaning into the text, which then serves as a conduit that conveys that meaning to the reader. That is what happens with the electrical signals in a telephone conversation, for example; those signals do travel through wires (and satellite links too) between people. But the meaning does not. Similarly, written marks pass from author to reader, but the meaning does not mysteriously tag along on those ink splotches, waiting to leap from the page into the mind of the attentive reader. Something else happens, something we don’t understand very well. Hence the attraction of talking about communication as though it were sending meaning through a conduit: That’s easy to understand. But wrong.

Getting back to conversation and its constant two-way interaction, I am going to say that, in conversation, two people (or more, as the case may be) share the same intentional framework. It sometimes happens, for example, that one person will finish a sentence begun by the other. This is not mind-reading in the sense of paranormal access to the thoughts of another, but it certainly implies that, in conversation, one can become highly attuned to what’s on the other’s mind.

To be useful, however, the notion of intentional frame needs to be more than a matter of mere definition. The definition needs to “pick out,” call our attention to, a significant range of observations. Here’s a start:

Starting back in the 1960s and continuing on through the 1980s, a Boston psychiatrist named William Condon filmed and video-taped people interacting with one another. He found that in normal successful interactions the physical motions of the participants were entrained to one another, so that they shared the same temporal framework to within tens of milliseconds. People with certain kinds of disabilities - e.g. schizophrenia, autism - were not able to synchronize with others. Condon further discovered that neonates could synchronize their body movements to the rhythms of adult speech within an hour after birth. This interactional synchrony, as he called it, is the physical correlate of sharing the same intentional framework. It is evidence that something physical is supporting intersubjective intentionality, something more tangible than being able to finish someone else’s sentence for them.

Here’s some more on interactional synchrony from my review of Steven Mithen’s The Singing Neanderthals:

Using Freeman’s work as a starting point, I have previously argued that, when individuals are musicking with one another, their nervous systems are physically coupled with one another for the duration of that musicking (Benzon 2001, 47-68). There is no need for any symbolic processing to interpret what one hears or to generate a response that is tightly entrained to the actions of one’s fellows.

My earlier arguments were developed using the concept of coupled oscillators. The phenomenon was first reported by the Dutch physicist Christiaan Huygens in the seventeenth century (Klarreich 2002). He noticed that pairs of pendulum clocks mounted to the same wall would, over time, become synchronized as they influenced one another through vibrations in the wall on which they were mounted. In this case we have a purely physical system in which the coupling is direct and completely mechanical.

In the twentieth century the concept of coupled oscillation was applied to the phenomenon of synchronized blinking by fireflies (Strogatz and Stewart 1993). Fireflies are, of course, living systems. Here we have energy transduction on input (detecting other blinks) and output (generating blinks) and some amplification in between. In this case we can say that the coupling is mediated by some process that operates on the input to generate output. In the human case both the transduction and amplification steps are considerably more complex. Coupling between humans is certainly mediated. In fact, I will go so far as to say that it is mediated in a particular way: each individual is comparing their perceptions of their own output with their perceptions of the output of others. Let us call this intentional synchrony.

Further, this is a completely voluntary activity (cf. Merker 2000, 319-319). Individuals give up considerable freedom of activity when they agree to synchronize with others. Such tightly synchronized activity, I argued (Benzon 2001), is a critical defining characteristic of human musicking. What musicking does is bring all participants into a temporal framework where the physical actions - whether dance or vocalization - of different individuals are synchronized on the same time scale as that of neural impulses, that of milliseconds. Within that shared intentional framework the group can develop and refine its culture. Everyone cooperates to create sounds and movements they hold in common.

That people “give up considerable freedom of activity” when they engage others in, e.g. conversation, is important. When you converse with someone you commit yourself to being intelligible to them; you tailor your remarks to (your best understanding of) their conceptual competence and interests (cf. that well-known Grice article that I’ve never read). I’ll call on this “giving up” when talking about story-telling. Let’s continue with the passage from my review:

There is no reason whatever to believe that one day fireflies will develop language. But we know that human beings have already done so. I believe that, given the way nervous systems operate, musicking is a necessary precursor to the development of language. A variety of evidence and reasoning suggests that talking individuals must be within the same intentional framework.

Consider an observation that Mithen offers early in his book (p. 17). He cites work by Peter Auer who, along with his colleagues, has analyzed the temporal structure of conversation. They discovered that, when a conversation starts, the first speaker establishes a rhythm to which the other speakers time their turn-taking. That is, even though they are only listening, other parties are actively attuned to the rhythm of the speaker’s utterance (cf. Condon 1986). What if this were necessary to conversation, and not just an incidental feature of it?

Let us recall some passages from Eric Lenneberg’s landmark review and synthesis, Biological Foundations of Language (1967). While he does not address the issue of conversational turn-taking, he does devote the better part of chapter three to timing issues. He was particularly interested in problems arising from the fact that neural impulses travel relatively slowly and that the recurrent nerve, innervating the larynx, is over three times as long as the trigeminal branch innervating one of the jaw muscles. It also has a smaller diameter, which means that impulses travel more slowly in it than in the trigeminal. The upshot, observes Lenneberg, is that “innervation time for intrinsic laryngeal muscles may easily be up to 30 msec longer than innervation time for muscles in and around the oral cavity.” He goes on to observe: “Considering now that some articulatory events may last as short a period as 20 msec, it becomes a reasonable assumption that the firing order in the brain stem may at times be different from the order of events at the periphery” (96). It is on the basis of such considerations, which he discusses in some detail, that Lenneberg concludes: “rhythm is … the timing mechanism which should make the ordering phenomenon physically possible” (119).

It follows from this that, if you wish your utterances to smoothly intercalate with those of others, you need to share their rhythms; that is the only way your conversational entrances will be appropriately timed. Still, this might merely be a conversational convenience, not a necessity. So, let us consider the problem of speech perception.

We know that, while we tend to hear speech as a string of discrete sounds, that is something of an illusion. Sonograms do not show the segmentation that we hear so easily (Lenneberg 93-94). The brain is doing some sophisticated analysis of the sound stream. Though I am not aware that anyone has investigated this, I can imagine that it would be very useful if the listener operated within the same temporal framework as the speaker. This might help with the segmentation. If this is so, rhythmic synchronization is no longer simply a feature of how the nervous system happens to operate. It becomes essential to being able to treat the speech stream as a string of phonemes; it is necessary to linguistic communication.

Let us push the argument a step further. For the last decade or so there has been considerable interest in the notion that people acquire a so-called theory of mind (TOM) early in maturation and that this TOM is critical to interpersonal interaction (see e.g. Baron-Cohen 1995). Gaze following is one behavior implicated in TOM. Humans beyond a relatively early age will follow the direction of one another’s gaze. I would like to suggest that we notice gaze direction in people with whom we synchronize, but not otherwise.

Think about the perceptual requirements of noticing and tracking gaze direction. Even at conversational distance, another person’s eyes are small in relation to the whole visual scene; thus the visual cues for gaze direction will also be small. Further, people in conversation are likely to be in constant relative motion with respect to one another. The motions may not be large - head turns and gestures, trunk motion - but they will be compounded by the fact that one’s eyes are in constant saccadic motion. Synchronization would eliminate one component of relative motion between people and therefore simplify the process of picking up the minute cues signaling gaze direction. But if one cannot properly synchronize with others, then those cues will be more difficult to notice and track. Thus the capacity for interpersonal synchrony may be a prerequisite for the proper functioning of TOM circuitry.

In this light let us now consider Paul Bloom’s (2000) recent work on language acquisition. He has demonstrated that young children do more than merely associate the words they hear with the objects and events to which they refer. Such associations are not sufficient. Rather, children make inferences about a speaker’s intentions when listening to them and learning the meanings of the words they use. In the current parlance, children use a so-called theory of mind (TOM) to infer what, of many immediate possibilities, the speaker’s words refer to. Inferring another’s intentions also plays a large role in Quine’s (1960, 26 ff.) classic argument about radical translation.

The general point, then, is that asserting that people in conversation share the same intentional framework is not a matter of mere definition. It’s a conception that’s tied to various empirical correlates. It’s a statement about how human nervous systems interact as physical systems.
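As a toy illustration of that physical-systems claim, here is a minimal sketch of entrainment between two coupled phase oscillators. This is a standard Kuramoto-style model, offered only as an analogy for interactional synchrony - it is not Freeman’s or Condon’s actual model, and all parameter values are illustrative.

import math
import random

# Two phase oscillators with mismatched natural frequencies, weakly coupled.
# A Kuramoto-style sketch of entrainment; parameters are illustrative only.

def phase_difference(coupling, steps=20000, dt=0.005):
    w1, w2 = 1.00, 1.15          # natural frequencies, like two unlike clocks
    th1 = random.uniform(0, 2 * math.pi)
    th2 = random.uniform(0, 2 * math.pi)
    for _ in range(steps):
        # Each oscillator nudges its phase toward the other's.
        d1 = w1 + coupling * math.sin(th2 - th1)
        d2 = w2 + coupling * math.sin(th1 - th2)
        th1 += d1 * dt
        th2 += d2 * dt
    return (th2 - th1) % (2 * math.pi)

# When |w1 - w2| < 2 * coupling, the phase difference locks to a small
# constant; with no coupling it drifts freely.
print("coupled:  ", phase_difference(coupling=0.5))
print("uncoupled:", phase_difference(coupling=0.0))

Run it a few times: the coupled case settles to the same small offset regardless of the random starting phases, which is the coupled-oscillator analogue of two conversants falling into a shared rhythm.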

Given that conversation requires a shared intentional framework, we’re now ready to think about story-telling.

Story-Telling

One reason for taking oral story-telling as my paradigm case is that it is historically prior to written texts, which is where almost all literary discussion of intentionality begins. The fundamental point about oral performance is that teller and listeners are there, together, in one another’s visible and audible presence. The teller can sense immediately whether or not the audience is enjoying the tale; and audience members can register their interest or boredom, their pleasure or their anxiety. To be sure, each person’s subjective experience is, of course, private. But not totally so, for their posture, gestures, facial expressions, sighs, murmurs, groans, giggles and exclamations are all apparent to everyone else and to the speaker as well. The living significance of these non-verbal expressions is obvious to all, as they are grounded in biological behaviors that evolved to communicate inner states to conspecifics; these behaviors may be modified by cultural convention, to be sure, but those present share the same conventions.

The storyteller can thus modulate his performance in response to audience reaction and individual audience members can modulate their reactions by taking into account the reactions of their friends and family. Here literature - sometimes called orature - exists in the interaction among people assembled together. To be sure, there is an asymmetry between the role of storyteller and that of audience members in that audience members do not converse with the story-teller in the normal fashion. The teller talks, everyone else listens. Still, audience reaction is directly available to the teller, and to others in the audience. The experiences of people in this situation are public and shared.

The details of just how this happens are something of a mystery. But I can employ the notion of an intentional frame without having to solve that mystery. In fact, the purpose of using the idea is to develop a way of thinking about literary communication at the neural level without having first to work out the details of verbal and non-verbal communication. The idea depends only on certain general characteristics of the situation, not on all the messy details of how the mechanisms work.

Now we need to take a very abstract view of the nervous system, the kind of view adopted by AI types when they think about computers and brains and just when it is that we’ll have a computer as intelligent as a human brain. They make the argument in terms of raw computing capacity, arguing that when computers have the raw capacity of a human brain, they’ll be as smart as we are - and when computers have more raw capacity, they’ll outsmart us. Forget about whether or not they’re right about that. It’s the thumbnail capacity estimate that interests me.

The estimate is stated in terms of the number of states a physical system can assume. Both computers and brains are, after all, physical systems. The number of states depends on how many elements the system has, how many states each element can assume, and the way the elements constrain one another. If the elements are completely independent, then the last factor drops out. The elements in both brains and computers, however, do not operate independently of one another. Neurons signal one another, thereby influencing one another’s states. So do elements in computers.

However the estimate works out for a human brain, the number is very large. That’s all that matters right now. We don’t need to know how many elements are in a typical brain (the problem here is that we don’t know just what the appropriate element is - the neuron, the synapse, the molecule), nor how many states each element can assume, nor even just how they constrain one another. All we need to know is that that’s what we’re talking about and that the number is quite large.

Now consider the system consisting of all those people present during the telling of a story - the members, say, of a single hunter-gatherer band. How many states are available to that “collective” brain? Since, say, 30 brains taken together have thirty times as many elements as one brain, you might think that that system has many more states available to it. If one brain has X states available to it, then 30 brains taken independently would have X^30 states available to them - a number that is hugely vaster than X. And that would indeed be the case if we were considering the band, as it were, at some time when the people were going about their business either as lone individuals or in groups of two, three, or so. In such a situation, constraints on overall activity are loose. But that’s not the situation we are considering.
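To make the arithmetic concrete, here is a back-of-the-envelope sketch. The element count and the binary-state assumption are placeholders (as noted above, nobody knows what the right element is); only the comparison of magnitudes matters.

import math

# Placeholder assumptions: neurons as the elements, two states per element.
elements_per_brain = 10**11
states_per_element = 2

# Independent elements multiply: one brain has s**n states. Work in log10
# so the numbers stay printable.
log10_one_brain = elements_per_brain * math.log10(states_per_element)

# Thirty independent brains: the state counts multiply again, i.e. X**30
# for X states per brain, so the log10 values just add.
log10_thirty_independent = 30 * log10_one_brain

print(f"one brain:             ~10^{log10_one_brain:.3g} states")
print(f"30 independent brains: ~10^{log10_thirty_independent:.3g} states")

The claim developed in the next paragraphs is that story-telling collapses the second number back down toward the first: coupling is a constraint, and constraints shrink the accessible state space.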

We’re thinking of these thirty brains as they are committed to the story-telling situation. Though only one brain is telling the story, all brains are intent on it. That places severe constraints on the ensemble. Let me suggest that, in the story-telling situation, the state-space of the collective is no larger than that of a single brain and may, in fact, be somewhat smaller, given the constraints imposed by the story itself.

Consider further that, during the story-telling interval, cells and synapses are dying, synaptic weights are being adjusted, and so forth. That is to say, each brain is modifying itself in modest ways as it “stretches forth” to comprehend and assimilate the unfolding story. Everyone is thus becoming attuned to the same set of (imaginary) events. And this happens time and again, with the same story, with related stories, and has been going on for generations. This is culture at work, one aspect of it.

* * * * *

That’s the basic story. Now for some further comments. In the first place, it seems to me that traditional story-telling is far more restrictive than ordinary conversation. The stories are from a repertoire that’s well known and relatively fixed. No new information is being conveyed. On the contrary, it’s very important that the characters and incidents be the same from telling to telling, though the exact verbal formulations will differ. It is because of this restriction that I made an explicit argument about the size of the collective state space.

I made no such argument about general conversation. Yes, there are restrictions, and parties to a conversation do try to be mutually intelligible. But many conversations are intended to convey information from one person to another and, beyond that, some conversations are intended to change the way people think in significant ways. In such conversations it is common that one person will say things that another cannot understand. That is to say, one party will move into a region of their neural state space that has no correlate in the state space of some other party; hence, that other party cannot “follow” the argument. It may well be that all parties in the conversation will make such statements. These issues may or may not get resolved. If they are not resolved, then the collective state space still contains regions that at least one party cannot enter.

It thus seems to me that the collective state space of ordinary conversation might well be larger than that of either nervous system that is a party to the conversation. In the absence of mathematical analysis - which will have to be done by someone with more math than I’ve got - it’s difficult to say much about this.

[Note, however, that this is a different issue from whether or not the parties share the same intentional frame. That doesn’t necessarily entail mutual understanding; it only implies a certain mode of interaction.]

Finally, consider the world of written texts, which is the world that interests most literary critics. Obviously various issues must be addressed if we wish to generalize this story to written texts. I want to look at only one of these issues, the fact that stories are no longer drawn from a fixed repertoire known to all. But of course, while any given story is new, the situations and kinds of characters in it are not - hence the notion that there are only 36 plots, or whatever the number is. What interests me, though, is that some texts are very popular, while others are not. And the popularity of a text may well change over time.

What we see is a selective process in which authors and publishers put texts on the market and people, in effect, form themselves into a diffuse network of communities around those texts. [Whether or not these are Fish’s interpretive communities is a question I’ll leave to the reader.] Each of these communities shares an intentional framework that is regulated by a given text (or, as the case may be, body of texts).

I think that what’s going on is that the gradual and continuous shaping of a fixed body of stories in oral cultures has been transformed into a somewhat different way of guaranteeing the “fit” between text and audience. The audience has a large number of texts at their disposal, more than any one individual can read, and each person chooses those which best fit their needs. In this they are influenced by family, friends, and peers (Duncan Watts has some web-based experiments on this, though I don’t have any citations conveniently at hand).

Exit, pursued by a bear

There you have it, a neural explication of intention among groups of story-telling apes. I suppose that, for the literary critic who doesn’t buy into intentionalism, this argument is strange enough that it may be safely ignored. For the critic who wishes to use intentionalism as a way of restricting interpretive play, there may or may not be something here. The state-space restriction I’ve stated cannot be taken to imply that a story means the same thing to everyone who hears it. It might well mean different things. To figure that one out we’d need a way of determining the meaning of a text.

That’s a different discussion. On that issue, my opinion is that you can’t determine the meaning of a text “from the outside.” Meaning is subjective in the way that color is subjective, and considerably more complex. And that means you can’t determine what a text means to someone else any more than you can determine whether or not red is exactly the same for them as it is for you. So why bother?

In general, this is a different conceptual universe from that of standard-issue lit. crit. and its Theory-laden alternatives. The standard questions either do not exist in this universe, or they are transformed in sometimes strange ways.


Comments

An engaging collation of germane material, Bill—thank you. I’d like to twist it back to John Holbo’s starting point, if I may.

You frequently mention “intention” and “intentionality”, meaning by that, I take it, that conversationalists, storytellers, and audiences put forth these strikingly complex efforts voluntarily (unless it’s a classroom or brainwashing or interrogation or whatnot). But John begins with functionality, which is then (perhaps too quickly) associated with intention: if we intend to achieve some end, what is that end?

Now, in the anthropological and folklorist material I’ve read, the question “Why are you doing this?” receives a bewildering variety of (sometimes bewildered) answers. Thus the old-fashioned tautology, “Cultures tell stories to maintain their [storytelling] culture.” (So why do the great-grandchildren prefer to watch TV?)

Under the circumstances, and given the evidence of social psychology and the neurosciences, the most parsimonious explanation may be that storytelling and conversation are the primary movers, with functional justifications being secondary examples of storytelling or conversation.

By Ray Davis on 10/24/07 at 11:27 AM | Permanent link to this comment

I didn’t really address the function of story-telling in the post, Ray. I just take it as something people do. And, at some level of analysis, I’m willing to assert they do it because it’s pleasurable. Now, there may be something else going on as well, as I’ve indicated in the link to my Pinker post (at the phrase “mutual understanding”), but that’s outside the immediate scope of these remarks.

As for Holbo and functionality, I assume he’s after something more differentiated than the function of this or that text as a whole, that he wants to motivate a criticism that asks how elements and aspects of a text function within that text. What do they do?

By Bill Benzon on 10/24/07 at 11:38 AM | Permanent link to this comment

Bill, your second paragraph neatly sums up my discomfort with John’s posts. Which of those two he’s after was left ambiguous, and the philosophical terminology seemed to reinforce rather than resolve the ambiguity.

By Ray Davis on 10/24/07 at 12:43 PM | Permanent link to this comment

That is, it’s not just that screwdrivers are for X, but that the function of the screwdriver’s handle is to allow the user to apply rotary force to the shaft. Etc.

By Bill Benzon on 10/24/07 at 12:45 PM | Permanent link to this comment

Bill (see here), this made me think of sampling, and googling brought up resonant sampling (Frontiers in Numerical Analysis, MsFEM, Ain’t It Grand! & Black Studies: Rap and the Academy). This is interesting:

“Generally, neural networks can be divided into two large classes. One class contains feedforward neural networks (FNNs), and the other contains recurrent neural networks (RNNs). This book focused on RNNs only. The essential difference between FNNs and RNNs is the presence of a feedback mechanism among the neurons in the latter. A FNN is a network without any feedback connections among its neurons, while a RNN has at least one feedback connection. Since RNNs allow feedback connections in neurons, the network topology can be very general: any neuron can be connected to any other, even to itself. Allowing the presence of feedback connections among neurons has an advantage, it leads naturally to an analysis of the networks as dynamic systems, in which the state of a network, at one moment in time, depends on the state at a previous moment in time [...] Convergence analysis is one of the most important issues of RNNs [...] Convergence of RNNs can be roughly divided into two classes of convergent behaviours: monostability and multistability. In a monostable RNN, the network has one equilibrium point only [...] Generally speaking, monostability requires all the trajectories of a RNN to converge to an equilibrium [...] Though monostability of RNNs has successful applications in certain optimization problems, monostable networks are computationally restrictive. They cannot deal with important neural computations such as those necessary in decision making, where multistable networks become necessary [...] In a multistable network, stable and unstable equilibrium may coexist in the network.” pp1-4, Convergence Analysis of Recurrent Neural Networks, Zhang Yi, Kok Kiong Tan

By on 10/25/07 at 12:36 AM | Permanent link to this comment

Bill, you write: “intentional frame is defined with respect to the operations of the nervous system.” I don’t really see how this can work - nor does your post seem to me to provide a hint of an answer.

Compare: mimicry is defined with respect to the diffusions that generate the spot patterns on butterfly wings.

Is that likely to be true?

There is a sense in which mimicry (in this case) is a matter of spot patterns (let’s say). But you cannot define the phenomenon of mimicry ‘internal’ to the wings. You have to study the larger (ahem) functional system. (At this point I get all swampman about it, for extra clarity. But I will abstain.) Why should I believe that ‘intentional frame’ - a thing that crucially concerns stuff outside my brain: how not? - can be defined with respect to the operations of stuff that is happening inside my brain?

Why should cognition be like digestion? Digestion is plausibly something whose structures and operations can - descriptively and normatively - be captured ‘internally’. (I actually doubt it. But I’ll let it pass.) But cognition? Surely not? We are naturalists, I take it. We need to understand cognition’s place in nature, not just in that comparatively confined space in nature we call the brain.

In short the brain does not “stretch forth”, as you suggest. There aren’t enough scare-quotes in the world to allow it. (There have been some SF movies in which this happened. But I think that’s strictly swampman fodder. Scary movies.) If something stretches forth, it isn’t the brain.

If I can make an analytic philosophy suggestion: it sounds to me as if what Freeman is advocating is a very conceptually confused version of a view that is explored in, for example, Donald Davidson’s work. You might try reading “A Nice Derangement of Epitaphs” on ‘prior’ and ‘passing’ theories.

I am sure you will want to reply that you didn’t mean for ‘defined with respect to’ to be construed so strongly, as ‘defined only with respect to’. But it seems to me that the plausibility of the working assumption about the neuronal stuff depends on sticking with the untenable, stronger reading. ‘Intentional frame’ is not a term for denoting neural activity any more than ‘chemical diffusions within a membrane cell’ is a term for denoting mimicry. Specifically, you are going to end up missing the externalistic, normative structure of the concept. So I would probably argue.

By John Holbo on 10/30/07 at 10:08 PM | Permanent link to this comment

Are you objecting to the functional relationship I’m arguing for or to my use of intention in characterizing it?

Intentions may often (always?) be about something that is outside one’s brain, but surely those intentions take place inside one’s brain, no? If the brain is shut down, those intentions disappear, but whatever it is in the world that they were about, that still exists. So, however you want to phrase it, I’m interested in the process that happens in the nervous system.

And I’m arguing that under certain conditions, two (or more) human beings can interact in such a way that their nervous systems have a certain functional relationship. I’ve characterized that relationship in various ways, some of them functional.  I note, further, that this relationship does have to do with how we communicate with one another, through linguistic and non-linguistic means. If this relationship is not functioning well, communication is poor or non-existent.

[Though I didn’t say this in the post, I also believe this kind of functional relationship is unique to human beings. As far as I know, members of other animal species do not have this kind of relationship.]

I’ve decided to call this relationship one of “sharing the same intentional framework.” One set of issues is whether or not that relationship, in fact, is functionally real and has the various properties I attribute to it. It seems to me that this is an empirical matter requiring further investigation.

And there is this other issue: What do we call that relationship? I’ve coined the phrase “sharing an intentional framework” to serve that purpose. Perhaps that’s not such a good usage. Maybe we should say, for example, that the individuals in such a relationship are “interdigitated,” whatever.

As for “stretching forth,” it’s a metaphor that makes more sense when you’ve been through Freeman’s model of perception, which is not something I’d want to hash out here.

By Bill Benzon on 10/30/07 at 10:53 PM | Permanent link to this comment

Bill writes: “So, however you want to phrase it, I’m interested in the process that happens in the nervous system.”

I don’t care how you phrase it either, but I take this as pretty good evidence that you are not, in fact, interested in ‘intentional frames’, if by ‘intentionality’ you mean any of the usual things - ‘aboutness’, and so forth. There is a fundamental conceptual confusion in your framework, I think. This is where I think some philosophy could help: clearing up confusions about how to conceive of the naturalistic study of something like intentionality. I agree that it is an empirical matter whether the brain can ‘stretch forth’, but my hypothesis is that the skull would get in the way every time. Seriously, you are welcome to explain to me how the metaphor can be traded for one that makes literal sense, but I submit that we will get one or the other of two results: we are talking about something ‘stretching forth’, in which case we are not talking about the brain, i.e. neural stuff. Or we are talking about the brain, in which case we are not talking about something ‘stretching forth’, i.e. intentionality. But feel free to explain to me how you propose to square the circle.

Think about the mimicry-wing intentionality-brain analogy. It is very seriously meant. You couldn’t study mimicry by studying the chemical basis of wing coloration (though of course that basis is very relevant to the study of mimicry). Mimicry is not a NAME for anything so internal as the chemistry of wing coloration. It points in the direction of things happening outside the wing as well - in infernally complex, normative fashion. Mutatis mutandis, it seems to me that you are taking ‘intentional frame’ to be a name for something that is far too internal to the skull to be a serious candidate for the role of intentional frame. You need to be more of an externalist about how intentionality talk works (and I’m not just talking about salvaging ordinary usage here). Maybe swampman is too strong, as externalist medicine goes, but you need to consider that you have the wrong sort of conceptual framework. You need to be more of an externalist.

By John Holbo on 10/30/07 at 11:38 PM | Permanent link to this comment

If it helps, suppose that someone said that mimicry works by wing chemicals ‘stretching forth’ beyond the bounds of the wings. This is obviously a metaphor, and one I actually understand. But really it is just a dodge. It is a way of staving off admission that you are not actually talking about a chemical process in the wing. You are talking about a larger system in which the wing chemicals are only one part. You will reply that, of course, you admit there is a larger natural system in which cognition takes place. But the question is: is ‘intentional frame’ properly conceived as denoting one specifically cranially-confined component of that system? I say: not.

By John Holbo on 10/30/07 at 11:42 PM | Permanent link to this comment

OK, one final point. Bill concludes that we might, alternatively say “that the individuals in such a relationship are “interdigitated,” whatever.” No. Not ‘whatever’. The brain does not have digits. There is no particular reason to suppose that this metaphor has any validity at the neural level. What this shows is that the relationship being captured by the finger metaphor is not, per se, neural. The ‘whatever’ here is hazardously breezy. It implies that we may safely go on empirically studying the brain, confident that eventually the objects of our study will, in due course, ‘stretch’ forth. I don’t see that these empirical studies will have any automatic tendency to overcome the stark, conceptual problem with the frame. Not that this in any way vitiates the interest of the neural studies. But you need to be clear about what you are studying.

By John Holbo on 10/30/07 at 11:54 PM | Permanent link to this comment

If it helps, suppose that someone said that mimicry works by wing chemicals ‘stretching forth’ beyond the bounds of the wings. This is obviously a metaphor, and one I actually understand. But really it is just a dodge. It is a way of staving off admission that you are not actually talking about a chemical process in the wing. You are talking about a larger system in which the wing chemicals are only one part.

Right, as you had originally stated it, the butterfly mimicry example omits the full range of phenomena necessary for a proper causal explanation. We need the apparatus of evolutionary theory to account for that one, differential survival rates of phenotypes having differences in wing coloration, where those differences have to do with chemicals, in an environment where there is another butterfly species, etc.

Similarly, evolutionary process accounts for how animals are able to communicate with one another through scent, vocalization, and gesture. A lot of that goes on in primate societies and, of course, we are primates. And we do all of that. But we also have such things as music and art and language and stories. And it’s pretty hard to account for those things simply through biological evolution. Biological evolution is way too slow to account for the changes that have taken place in the history of human cultures. What’s different about us that we can do these things, and how do they work? All of them are social phenomena; they exist in and among groups of human beings interacting with one another.

But the question is: is ‘intentional frame’ properly conceived as denoting one specifically cranially-confined component of that system?

But you see, that’s not what I’m talking about at all. This intentional frame is something that obtains between two or more individuals and hence is not confined to the cranium of either one. If we’re going to talk about human interaction, then we need to talk about something that isn’t confined to the cranium. Since interaction is obviously mediated by the brain, which is so confined – though the optic nerves and retina, which are, embryonically, part of the brain, do come awfully close to escaping that cranium – we need some conceptual framework that gets us out of the brain. That’s what I’m building, and I’m building it on a variety of observations about human interaction that point to very precise temporal coordination during communication, coordination on the time-scale of neural firing, coordination on the time-scale of endogenous spontaneous firing of neurons, coordination which, on my view, authorizes me to assert that, in this situation, we may treat the ensemble as one physical system. Hence interaction between individuals so-coupled is communication within a single (very special) physical system.

Such systems are short-lived. They come and go all the time. The individuals who participate in these systems enter into them rather easily, and disband with little difficulty. In many different combinations of individuals. Day in, day out.

On the “other side” of this argument I’ve got a thought experiment which says that actually connecting two brains via some kind of electrical-electronic system transmitting millions of signals in parallel between brains is not, in principle, at all plausible.

What I’m trying to do is understand human culture as a collective phenomenon at the neural level. Perhaps we don’t need such an understanding. But, I observe, our understanding of current human culture is not very effective either. Something is missing. So why not give a go at couching such understanding at the neural level? So far almost nothing is being done in those terms.

Does analytic philosophy have anything to say about culture as a collective phenomenon? Or about language as a collective phenomenon? If not, then perhaps it is of little use to me. Then there’s this business of Other Minds: Have we any rational reason to believe they exist? Is that problem still on the docket or has it been resolved one way or the other? And if the consensus is that there really is no good reason to believe in Other Minds, well, how silly is that? If the consensus runs the other way, well, give me the references to those arguments in favor of Other Minds that most, if not all, philosophers now accept.

Then there’s all this strange business about Wordsworth on the beach – though I believe it was the Lord’s Prayer on the beach when I learned about it as an undergraduate. But what does that establish?

In your various discussions of that business you’ve invoked speaker meaning and hearer meaning and then something like sentence meaning, all as (logically) distinct objects for analysis and discussion. Well, speaker meaning is bound up in some neural process in the speaker’s brain, and hearer meaning is bound up in some neural process in the hearer’s brain. What of sentence meaning? If it’s not the speaker’s meaning, then it can’t be happening in the speaker’s brain. And if it’s not the hearer’s meaning, it can’t be happening in the hearer’s brain. So it isn’t happening in any brain at all. And if it isn’t happening in any brain at all, where is it, what is it? Does it exist anywhere at all except as a notion in the minds of analytic philosophers?

I suppose the notion of sentence meaning follows from some notion of language as a system of conventionalized symbols and is supposed to capture the meaning of a sentence or proposition minus the little idiosyncrasies and nuances and “spin” imposed by individual minds. So it’s some sort of neutral meaning. Only one problem: it’s not real. Maybe it’s a facilitating abstraction. The idea is certainly common enough. But so far it’s not gotten us a workable semantics, it’s not gotten us a workable understanding of language. Maybe we need to rethink our approach.

If, for example, you want to formulate an account of how language originated, you might say that a system of conventionalized symbols descended to earth on golden tablets, one by one, and the people began to read them and lo! they began speaking with one another. Not terribly plausible. You have to figure out how language can come into being without the prior existence of a system of conventionalized symbols. I don’t see how a philosophy that assumes the existence of such a symbol system can be of any help.

* * * * * *

So, we have Daddy and young Jane playing. Spot walks into view. Daddy looks at Spot. That is to say, he intends Spot. Jane notices that Daddy’s head moves, so she focuses on his face, notes the direction of his gaze, and directs her gaze to the same point. Now Jane intends Spot as well. This happens in all of a second or two.

Next day, same situation. Spot comes in view. Daddy looks, and says “doggie.” Jane follows Daddy’s gaze, looks at Spot, hears Daddy’s utterance, and says “goghh.” They play with Spot awhile, then it’s nap time. Jane naps for a bit, wakes up, and there’s Spot, looking at Jane and wagging his tail. “Goghh.” Jane has learned a word.

That’s the sort of thing I want to understand. If the philosophical notion of intention is of no use in understanding that situation, then I’m happy to discard it. But first I’d like to try to refit it so it’s workable in such cases.

By Bill Benzon on 10/31/07 at 07:07 AM | Permanent link to this comment

Bill: “Does analytic philosophy have anything to say about culture as a collective phenomenon? Or about language as a collective phenomenon?”

What would culture or language be if not a collective phenomenon? It’s not like there is a contrast class between the accounts of language and culture that accept this and the ones that do not. (At least not after Wittgenstein.)

Bill: “And if the consensus is that there really is no good reason to believe in Other Minds, well, how silly is that? If the consensus runs the other way, well, give me the references to those arguments in favor of Other Minds that most, if not all, philosophers now accept.”

Crikey! Why would you think the consensus is that there are no Other Minds? (Wouldn’t it be a bit silly to have a consensus that there are no other minds?) If you just want a reference to post-Wittgensteinian analytic philosophers who think language is a collective phenomenon and who are generally not solipsists, that’s easy: post-Wittgensteinian analytic philosophy as a whole. Do you have any significant body of counter-examples to that generalization? (I’m genuinely puzzled as to who you think you are pushing against here.)

“That’s the sort of thing I want to understand. If the philosophical notion of intention is of no use in understanding that situation, then I’m happy to discard it.”

But obviously the notion is of use in understanding the situation. What I was arguing was precisely that BECAUSE the notion is applicable to such situations, the notion might be unavailable to you, given your apparent theoretical commitment to neural accounting. Basically I want to understand how it is that you can be BOTH talking about such situations AND offering a ‘neural explanation’ - a neural definition of ‘intention’. I do realize that you WANT to end up talking about such situations. It just seems to me that your characterization of ‘intentional frame’ probably precludes that. I just don’t get it. Please set me straight about how it’s supposed to work.

By John Holbo on 10/31/07 at 09:43 AM | Permanent link to this comment

Well, John, the examples I see are either about lone individuals trying to make sense of signs (e.g. Wordsworth on the beach) or about abstract individuals variously referring to or meaning this or that. Where’s the discussion of continuous real-time interaction in face-to-face conversation? If you’re going to tell me that such interaction is irrelevant to a proper philosophical understanding of the situation, well, then I’ll dispense with proper philosophy.

As I see it, such interaction is how people arrive at mutual understandings. The one-shot situation where a person emits a message and the other receives it, without any interaction between them other than that one-way message, that’s not a realistic starting situation. I don’t think that a treatment of language that starts there is going to tell us how language works.

And then there is that sentence meaning stuff, the stuff that’s not connected to any real brain and so must be out there in the ether somewhere. It may be available to each and every one of us, but that’s no way to talk about language and culture as collective phenomena.

. . . given your apparent theoretical commitment to neural accounting. Basically I want to understand how it is that you can be BOTH talking about such situations AND offering a ‘neural explanation’ - a neural definition of ‘intention’.

Then maybe I’ll have to get along without intention. If what you’re telling me is that human intentions of the philosophical sort can exist independently of human nervous systems, then such intentions are clearly philosophical chimeras and I’ll do without them. Though, if you don’t mind, I’ll retain the ordinary language sense of intention.

Do you seriously believe that if Fred is, e.g. watching the sunrise and suffers a massive stroke that destroys his visual system, the intention can nonetheless persist in the absence of viable neural tissue?

Or maybe you’re just telling me that intention is something that, by definition, obtains only between an individual and, well, an intentional object, in which case my notion simply violates that definition. So, maybe I scrap the talk of intentional frame.

Note, for whatever the hair-splitting is worth, that I’m talking about a neural explication (that’s the word in the title), not an explanation. I’m saying that here’s the empirical situation with respect to the nervous system when, e.g. two people are conversing. I’m going to say that they share the same X framework; it’s a matter of definition, not explanation. In that situation it’s easy for them not only to attend to the same things, but to know that the other is doing so. Maybe I’ll call that an attention framework and simply not bother with intentionality in the philosophical sense.

Keep in mind that what I’m ultimately aiming at is literature. At the moment I’m looking for a way to talk about what’s going on in the face-to-face story-telling situation. If “attention framework” gets me there without having to bother with intention, well, I can live with that.

By Bill Benzon on 10/31/07 at 10:52 AM | Permanent link to this comment

Bill: “Well, John, the examples I see are either about lone individuals trying to make sense of signs (e.g. Wordsworth on the beach) or about abstract individuals variously referring to or meaning this or that.”

Look, Bill, if this is your impression then the only thing I can say is that you have, in my opinion, a wildly erroneous impression of what has been going on in Anglo-American philosophy departments the last half century. We can leave it at that.

Bill: “Do you seriously believe that if Fred is, e.g. watching the sunrise and suffers a massive stroke that destroys his visual system, that that intention can nonetheless persist in the absence of viable neural tissue?”

Nor do I think that if I paint a butterfly’s wings some other color, its capacities as a mimic will be unimpaired. It hardly follows that I can, after all, account for mimicry just by studying the chemistry of its wing color. Saying that X isn’t just a matter of Y is not saying that X necessarily does not depend on Y in any way. You are misconstruing the form of the argument, Bill.

By John Holbo on 10/31/07 at 11:02 AM | Permanent link to this comment

. . . the last half century.

Too long, John. I studied in such a department until ’68. Call it 40 years.

It hardly follows that I can, after all, account for mimicry just by studying the chemistry of its wing color.

But then, I’ve not argued that, have I?

So, what does intention depend on other than an intending subject with an intact brain and mind? Well, it would help if there’s an external world full of things that can be intended. But I haven’t denied that, not at all. What else? Am I missing something, or am I simply abusing a definition?

Take my little stories about Daddy, Jane, and Spot. As far as I can tell, the stuff up there in my main post accounts for what’s happening in those stories. What am I missing?

By Bill Benzon on 10/31/07 at 11:20 AM | Permanent link to this comment

"But then, I’ve not argued that, have I?”

Well, yes. I think you must have. If your objection to my point about intentionality were valid, you would have to make this analogous, absurd argument about mimicry. Either that or explain why my proposed analogy between mimicry and intentionality was unsound.

Look, there is a huge literature on this stuff. Read Dennett and Millikan and Davidson just for starters, I would suggest.

Let’s try again. Suggesting that being in an intentional state might depend on more than having an intact brain is just like suggesting that being a mimic might depend on more than having certain spots on your wing. The idea is that intentionality might be something that can only be modeled/explained/accounted for in a more ‘externalist’ sort of way. Obviously this doesn’t commit you to any sort of bizarre mysticism or skyhooks. It’s a thoroughly naturalistic proposition.

“Or maybe you’re just telling me that intention is something that, by definition, obtains only between an individual and, well, an intentional object”

Again, the normative account might be much more complex. You don’t need to think that being a mimic needs to be a relation between the butterfly and some ‘intentional object’. There is no ONE object that the butterfly is mimicking, for example.

You ask why your post doesn’t account for what is happening in the stories. Well, to quote your previous comment: “here’s the empirical situation with respect to the nervous system.” I don’t see that you can account for these stories ‘with respect to the nervous system’. I don’t see that there is compelling reason to suppose that the explanation will be neural. Basically the concern is that intentionality is a more broadly holistic phenomenon than you are allowing, and therefore proposing this sort of neural explication would be like (what’s a vivid analogy) trying to reduce ecology to physiology. (That’s extreme, but it may help you see what I am getting at. Also, that it’s not Platonism or mysticism or chimeras or any of that.)

By John Holbo on 10/31/07 at 11:37 AM | Permanent link to this comment

I should change some of those ‘is’s to ‘may be’s. It’s not that I know what the hell a naturalistic account of intentionality would necessarily look like. It’s just that the accounts that attract me most - that look most promising - look to me probably inconsistent with what you are proposing, Bill.

By John Holbo on 10/31/07 at 11:40 AM | Permanent link to this comment

Well, yes. I think you must have. If your objection to my point about intentionality were valid, you would have to make this analogous, absurd argument about mimicry.

Nonsense. What I said was this: “Right, as you had originally stated it, the butterfly mimicry example omits the full range of phenomena necessary for a proper causal explanation. We need the apparatus of evolutionary theory to account for that one, differential survival rates of phenotypes having differences in wing coloration, where those differences have to do with chemicals, in an environment where there is another butterfly species, etc.”

What more can there be to mimicry than the environment and the mechanisms of evolution? As for holism, what’s more holistic than that? It’s the whole biosphere.

Basically the concern is that intentionality is a more broadly holistic phenomenon than you are allowing, . . .

I’ve said: “So, what does intention depend on other than an intending subject with an intact brain and mind? Well, it would help if there’s an external world full of things that can be intended. But I haven’t denied that, not at all.” We’ve got the mind-brain and the external world. What more is there? Is intentionality anything other than some kind of a relationship between a mind and the world? Have I somehow denied that?

. . . and therefore proposing this sort of neural explication would be like (what’s a vivid analogy) trying to reduce ecology to physiology.

Ah, so reductionism’s the issue. Well, that’s a bit different. I have written something about why the mind cannot be reduced to the brain. You’ll find it in the section on Computation: Literature in the Mind, in this essay:

http://www.clas.ufl.edu/ipsa/journal/2006_benzon01.shtml

That won’t get you all the way to conversation, but it’s a start. Chapters 2 & 3 of my book on music (Beethoven’s Anvil) flesh out the neural situation with respect to people making music together. And, while I do discuss intentionality there, I do not use the notion of an “intentional frame.” That’s a phrase I’m just now trying out and it’s not looking too promising. But then, that’s what blogging’s for.

It’s not that I know what the hell a naturalistic account of intentionality would necessarily look like.

Didn’t think you did.

It’s just that the accounts that attract me most, look most promising, look to me probably inconsistent with what you are proposing, Bill.

I can live with that.

By Bill Benzon on 10/31/07 at 02:29 PM | Permanent link to this comment

Bill: “Nonsense. What I said was this ...”

I suggested that a certain condition (being in a neural state) might not be sufficient for being x (being in an intentional state). Your objection to this claim was that I must be absurdly committed to it being not necessary. You are confusing necessary and sufficient conditions. I know what you said about the butterfly. I just don’t understand how you can be saying BOTH that yours is a neural explication AND that it is sufficiently holistic. I guess you take my point (whether you agree or not.)
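
Put schematically - a gloss in standard notation, where N and I are my placeholder predicates for ‘being in the relevant neural state’ and ‘being in an intentional state’ (my shorthand, not anything you’ve written):

\[
\begin{aligned}
\text{`N is sufficient for I':}\quad & N \rightarrow I\\
\text{`N is necessary for I':}\quad & I \rightarrow N
\end{aligned}
\]

Denying the first says only that \(N \wedge \neg I\) is possible; it leaves \(I \rightarrow N\) entirely untouched.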

You write: “Conversation is dynamic and two-way and is thus unlike the prototypical situation in those strange hypotheticals beloved of philosophers and literary critics, where some observer is simply confronted with (mysteriously) written signs.”

I think you think it’s just twiddling because you don’t see the problems it is meant to deal with, which are actually (in some cases) your problems. I fear your framework won’t really handle the dynamics of conversation. You’ll fall back on uncashable metaphors of brains ‘stretching forth’, whereas the philosophers might have a better grip (better notions about how notions like ‘intentional frame’ need to be framed.) But we’ll have to agree to disagree about that.

Above I named Millikan, Davidson, Dennett. I should have added that they are only one side of the conversation. You would probably find that Searle, Chalmers, Fodor, Kent Bach, Tim Crane and many others have intuitions similar to your own. (With important reservations in each individual case.) The issue really is the one that McGinn raises at the end of his review of Pinker. What are concepts? How do you account for/explain the nature/status of content? Pinker has no answer, says McGinn. I’ll bet McGinn’s right. Pinker has written a book called “The Stuff of Thought” in which he provides no account of the nature of concepts. (I haven’t read it. But I’ve read his other stuff, and I suspect McGinn is right.) This is sort of a blind spot for Pinker - not that I don’t like Pinker. I actually do. Your account seems to me likely to have a similar blind spot. But, to be fair, there are lots of philosophers who disagree and think I’m worried about nothing. I’m only getting a bit snippy with you because there’s this huge ongoing debate, which you seem inclined to dismiss on the grounds that those engaging in it must surely be a bunch of solipsists, denying Other Minds, denying that language is social. Oh, well. Up to you whether you give it a look.

By John Holbo on 10/31/07 at 07:44 PM | Permanent link to this comment

I suggested that a certain condition (being in a neural state) might not be sufficient for being x (being in an intentional state). Your objection to this claim was that I must be absurdly committed to it being not necessary. You are confusing necessary and sufficient conditions. I know what you said about the butterfly. I just don’t understand how you can be saying BOTH that yours is a neural explication AND that it is sufficiently holistic.

What is this holism you’re invoking? What is there OTHER than a neural state? Aboutness? What is that? Where is it? How does it work? As far as I can tell, nervous systems, from the jellyfish through us, are aboutness machines; activity inside them is about states of affairs external to them. Just how this could be the case is under investigation; but I don’t think anyone seriously doubts that neural activity is about the body (which is, after all, external, if not to the nervous system as a whole, then to the CNS) and about the external world. What more is there, GHG: General Holistic Goodness?

Telling me that a “sufficient holism” is a necessary condition for an explication of intentionality is not telling me very much.

Meanwhile, you keep stonewalling on sentence meaning - which I’ve brought up in comments on one or two of your posts. As I see it, if sentence meaning doesn’t take place in a nervous system, then it doesn’t take place in the world. I haven’t got the foggiest idea how it can help us understand language. If it does take place in a nervous system, then it must be either speaker meaning or hearer meaning because those are the only kinds of nervous system we have.

Concepts: here are the final two paragraphs of McGinn’s review:

Pinker has listed the types of concepts that may be supposed to lie at the foundation, but he hasn’t told us what those concepts consist in—what they are. So we don’t yet know what the stuff of thought is—only that it must have a certain form and content. Nowhere in the course of a long book on concepts does Pinker ever confront the really hard question of what a concept might be. Some theorists have supposed concepts to be mental images, others that they are capacities to discriminate objects, others dispositions to use words, others that they are mythical entities.

The problem is not just that this is a question Pinker fails to answer or even acknowledge; it is that without an answer it is difficult to see how we can make headway with questions about what our concepts do and do not permit. Is it our concepts themselves that shackle us in the cave or is it rather our interpretations of them, or maybe our associated theories of what they denote? Where exactly might a concept end and its interpretation begin? Is our concept of something identical to our conception of it—the things we believe about it? Do our concepts intrinsically blind us or is it just what we do with them in thought and speech that causes us to fail to grasp them? Concepts are the material that constitutes thought and makes language meaningful, but we are very far from understanding what kind of thing they are—and Pinker’s otherwise admirable book takes us no further with this fundamental question.

I find that last paragraph of questions rather baffling. It seems like a bunch of throat-clearing that’s never going to result in actual speech. It may just be a matter of my intellectual style, nothing deeper than that. But my style is my style and I’ve got to live with it. As for whether or not concepts are “mental images, ... capacities to discriminate objects, ... [or] dispositions to use words,” those are not exclusive alternatives unless one declares them to be so for some reason. And if they are mythical entities, I can live with that as well.

It’s a difficult and messy business. We all have to hedge our bets. I’m quite aware that my program could collapse at any point for this or that reason. For better or worse, I’m more worried about empirical collapse than philosophical.

By Bill Benzon on 11/01/07 at 10:29 AM | Permanent link to this comment

Well, as a downpayment I’ll register the irony of the situation we’ve worked our way round to.

You write: “What is there OTHER than a neural state? Aboutness? What is that? Where is it? How does it work? As far as I can tell, nervous systems, from the jellyfish through us, are aboutness machines; activity inside them is about states of affairs external to them.”

Also, you are baffled by McGinn’s final paragraph, which (to me) has quite a sharp conceptual bite and is the furthest thing in the world from throat-clearing.

It looks to me as if you take it to be throat-clearing because you are a bit too quick to assume a certain sort of solipsism, if I may say so, and it is pinching your sense of the conceptual problem, hence (to some degree) the empirical possibilities. I, happily raised up by the likes of Swampman from my youth, am more inclined, by contrast, to consider language and culture as social phenomena.

It is not self-evident (to me) that intentional systems are ‘internal’ to the organism in the way you seem prepared to assume they must be. There isn’t anything mystical about my doubts on this heading - no more so than are my doubts that you can tell, from the physical properties of a butterfly in a box, whether it was a mimic or not. ‘Intentional system’ may not, properly, be construed as a ‘narrow’ fact about brains. It may be, rather, a broader social or linguistic or ecological fact. (I do not say must be. I am being agnostic here.) Talking about an organism’s intentional states is a way of talking about an organism in its engagement with the environment. (Just as calling something a mimic is a way of generalizing about a very complex set of states of affairs.) There is no obvious reason to suppose that there is ‘nothing more’ than neural stuff here. Just as there is no obvious reason to suppose that there is nothing more to being a mimic than having certain chemicals in one’s wing.

This is what McGinn is getting at in the final paragraph. It’s a very important and basic conceptual point. And it has empirical bearing: if someone decided to investigate butterfly mimicry and didn’t look at anything outside the wing - sure that somehow the chemicals in the wing would ‘stretch forth’ - that would be a mistake. Similarly, just taking it for granted that it makes sense to talk about brains as ‘aboutness engines’ seems to me likely to result in empirically confused investigations. You are helping yourself to a relation that extends beyond the bounds of the object you are studying. Why should you suppose you have taken up enough to study the relation?

As to sentence meaning: I’m not sure why sentence meaning should only ‘take place’ in a nervous system in order for it to exist. I admit that having a nervous system is probably necessary to understand sentence meaning. But that’s a different thing.

By John Holbo on 11/01/07 at 08:54 PM | Permanent link to this comment

Shorter version: suppose someone said that the chemicals in mimic butterfly wings constitute ‘aboutness engines’, due to their representational or proto-representational quality. To my mind, this is a mystifying way to talk about the relations between those chemicals and their organismal and ecological environment. I am worried that (to a lesser degree) talking about brains as ‘aboutness engines’ will be similarly mystifying. It’s not that it’s going to be flat-out wrong. But it may treat the concept of intentionality as being a different sort of concept than it may actually be. It’s a kind of category error.

By John Holbo on 11/01/07 at 09:59 PM | Permanent link to this comment

Talking about an organism’s intentional states is a way of talking about an organism in its engagement with the environment.

Of course it is. Have I ever denied this? But the intentional states are in the organism, not the environment. No organism, no intentional state.

When Freeman investigates the properties of the olfactory system, he doesn’t do so by examining a brain in a vat. He observes activity in the living brain of an intact rat, albeit one with electrodes resting on the surface of the olfactory bulb. And that rat is sniffing an odorant with known properties. That odorant is in the environment. The states Freeman observes reflect the engagement of the rat’s olfactory system with (something in) the environment, and they also, in his view, reflect the entire history of the rat’s engagement with the world.

When I say that nervous systems are aboutness engines I’m talking about systems that are, from their very inception, interacting with the environment. The development of nervous systems is guided by interaction with the environment. It is not as though an animal’s nervous system is constructed in isolation from the world and then, at some particular instant, is at one and the same time activated and exposed to the environment. Neurons are living cells; they are continually interacting with the environment.

Nor are nervous systems “driven” by stimuli from the environment. They generate their own endogenous activity. Perception is a process involving the interaction of endogenous activity and activity stimulated by the environment (through sensors). When Freeman is talking about “stretching forth” he’s using that as a metaphor for the interaction of the endogenous and exogenous activity. That interaction is not a metaphorical activity. It is real.

Shorter version: suppose someone said that the chemicals in mimic butterfly wings constitute ‘aboutness engines’, due to their representational or proto-representational quality.

That someone wouldn’t be me.

To my mind, this is a mystifying way to talk about the relations between those chemicals and their organismal and ecological environment.

Agreed.

By Bill Benzon on 11/01/07 at 11:58 PM | Permanent link to this comment

I write: “Talking about an organism’s intentional states is a way of talking about an organism in its engagement with the environment.”

Bill replies: “Of course it is. Have I ever denied this?”

You haven’t meant to, but I have argued that you have done so, by implication. The explication of an organism’s relations to its environment will not be a ‘neural explication’ any more than an explication of a mimic butterfly’s relations to its environment will be a ‘chemical explication’.

Bill objects to me worrying away at ‘stretching forth’. “That interaction is not a metaphorical activity.”

Nor is it a neural activity. So the explication of that activity is not ‘neural explication’. This may seem picky-picky. But it seems to me rather important to get clear about exactly what, and how, we are ‘explicating’. ‘Neural’, when we are talking about intentional frames, puts an excessively solipsistic spin on it. It implies that we will find sufficient explanation a bit further in than, it seems to me, we can be sure we will find it.

The crux of the biscuit, I think, is trying to manage a naturalistic account of the ‘aboutness’ relation. You say that these are “systems that are, from their very inception, interacting with the environment.” Again, you are confusing necessary and sufficient conditions. No one doubts that interaction is necessary. The question is: is it sufficient? Our solar system is interacting with the rest of the galaxy. It hardly follows that our solar system is ‘about’ the galaxy. The puzzle of ‘aboutness’ is trying to figure out what the hell to say about this, short of throwing up one’s hands and throwing over naturalism altogether for some Cartesian or otherwise mysterious or transcendental options. (I am personally friendly with that sort of move, but would like to dance with the one what brung me - naturalism - a few more rounds.) How is it that intentional states acquire ‘content’? This is what McGinn is pressing Pinker on. Pinker talks a good game about the Stuff of Thought. But he hasn’t thought about the Stuff of the Stuff of Thought: what it is to be a concept. How can something be ABOUT something else? I do think there is a lot of bite to this critique of Pinker.

By John Holbo on 11/02/07 at 12:31 AM | Permanent link to this comment

I probably shouldn’t say ‘solipsistic’. That’s just me being intolerably cheeky. I should say: individualistic. Or: you are implying, more strongly than you can know (but you could turn out to be right!) that one part of the system - the stuff within the skull - is sufficiently ‘isomorphic’ with the ‘intentional frame’ (the whole holistic banana) that it can be taken as an index of it.

By John Holbo on 11/02/07 at 12:38 AM | Permanent link to this comment

"Meanwhile, you keep stonewalling on sentence meaning - which I’ve brought up in comments on one or two of your posts. As I see it, if sentence meaning doesn’t take place in a nervous system, then it doesn’t take place in the world.”

As to the stonewalling about sentence meaning: I’m not sure that sentence meaning is something that ‘takes place’. A sentence meaning is not an event, exactly. Sentence utterances and acts of understanding take place. But there is no need to identify sentence meaning, in the abstract, with either of those poles of the embodied language-use process. Whence the pressure to identify sentence meaning with neural events?

By John Holbo on 11/02/07 at 01:39 AM | Permanent link to this comment

To clarify further: sentence meaning is, at least plausibly, a type (rather than a token.) It is a bit unnatural to theorize a type - as opposed to any given concrete token - as ‘taking place’. (Not that there aren’t ways of working around that, possibly. But tell me what you have in mind.)
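
In bare-bones notation - a sketch of my own, nothing more:

\[
S = \text{a sentence type}, \qquad u_1, u_2, \ldots = \text{its tokens (dated utterances or acts of understanding)}
\]

Each \(u_i\) takes place at a time, in a nervous system; \(S\) itself no more ‘takes place’ than the type dog is born or dies.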

By John Holbo on 11/02/07 at 01:44 AM | Permanent link to this comment

You haven’t meant to, but I have argued that you have done so, by implication.

You mean through some kind of implicit assent to your butterfly mimicry example? Well then, just scrap it. I was never very clear about what’s analogous to what in the analogy. In particular, I don’t think “wing chemicals” is analogous to “the nervous system,” though it might be analogous to “chemicals in the nervous system.” But my argument is about nervous systems taken as functioning, integrated systems, not their chemical components.

OTOH, if you want to keep asserting the analogy, then tell me what’s analogous to what, keeping in mind that, as far as I’m concerned, the relevant comparison includes the full ontogeny and phylogeny of the organisms.

This may seem picky-picky.

Yes it is. Or perhaps, “virtuoso twiddling of a high order.”

Again, you are confusing necessary and sufficient conditions. No one doubts that interaction is necessary. The question is: is it sufficient?

Sufficiency, presumably, demands a certain kind of interaction. I’m asserting that nervous systems – in their full ontogeny and phylogeny with operational capacities intact – have the appropriate kind of interaction. If they don’t, then what are we? Zombies? More seriously, I’m also asserting that we now know enough about nervous systems to say, “yes, they’re intentional systems.” Now, I certainly have not said enough on that score in this post for someone to make an informed judgment, nor do I intend to try. When I say “full ontogeny and phylogeny with operational capacities intact,” I mean it. That’s a bit much to go into here.

How can something be ABOUT something else? I do think there is a lot of bite to this critique of Pinker.

I haven’t read the whole book, so I don’t really know the book’s full scope – I tend to read around in these sorts of books rather than read them straight through. But it seems likely that he doesn’t provide such an account. Pinker didn’t do it all. But is what he has done useful? Can one do useful work on language and thought without having a fully tested and certified account of what concepts are?

Look, I don’t think Pinker’s got the last word on the subject. I think he’s got a very useful pile of information. For the casual reader, it’s useful on the face of it. For the scholar, it’s a useful set of pointers into a very large technical literature. Some reviewer pointed out, however, that there are no pointers to some of my favorite stuff: Rosch on categories, folk taxonomy. What can I say? He didn’t get it all - who could?

Given that this is, after all, some kind of blog about literature, I’d also say any student of literature interested in natural meaning – as I understand your use of the phrase – needs to know what’s in this book, and then some. Think of it as a handbook, a compendium, not a full-out attempt to figure out what thought is and how it works.

Or: you are implying, more strongly than you can know (but you could turn out to be right!) that one part of the system - the stuff within the skull - is sufficiently ‘isomorphic’ with the ‘intentional frame’ (the whole holistic banana) that it can be taken as an index of it.

But I defined intentional frame only with respect to two or more individuals interacting in a certain way. I said nothing about the single individual interacting with the world, but not with another individual. Does the concept of intentional frame apply there as well? Can’t really say. I didn’t come up with it to cover that case and haven’t really given it much thought. Offhand, I don’t see that it adds anything to our understanding of that case. That is to say, I don’t think one must have an intentional frame in order to have an intention. An intentional frame is not simply, as I have defined it, an intention minus its “content.”

Now that I am thinking about it, when Condon did his work on interactional synchrony he discovered, for example, that autistics have trouble synchronizing with others. That is, they cannot establish an intentional frame with others. Yet surely they have intentional relations with the world, and even with other people. They just can’t enter into a shared intentional frame.

To clarify further: sentence meaning is, at least plausibly, a type (rather than a token.)

That helps. I certainly wouldn’t want to deny the existence of dogs simply because all one ever sees is this particular collie, dachshund, beagle, etc. If sentence meaning is a type, over what does it range? All possible hearer meanings plus the speaker meaning? I can vaguely see that one might be able to do something with this analytically, but I’m not about to try.

By Bill Benzon on 11/02/07 at 06:54 AM | Permanent link to this comment

Nah, the zombie stuff is the consciousness debate. (That’s next door.) We don’t need no stinkin zombies in the intentionality debate.

By John Holbo on 11/02/07 at 11:41 AM | Permanent link to this comment
