Friday, February 10, 2006

A Question About Wrongness

Posted by Adam Roberts on 02/10/06 at 06:53 PM

Here’s a question that bothers me. How valuable is a philosopher if she or he is wrong?

Maybe there’s a problem with that word ‘wrong’. Perhaps it’s too sweeping, too dismissive, perhaps even rude and disrespectful. Or perhaps it’s that ‘wrong’ is too coarse-grained a term to be of use in philosophy: there may, after all, be too little precision in calling a spade a spade if the implement to which one is referring is in fact a left-sprocketed foot-trowel with Wiltshire orientation. But nevertheless I can’t shake the sense that there are times when wrong is the only word.

Here’s a non-philosophy example of what I mean. In mid-nineteenth-century Britain two theories offered incompatible explanations of how disease spread. One theory was that contagion passed through the air as a sort of fog or miasma of ill-health (malaria, as every schoolchild knows, literally means ‘bad air’). The other theory was that contagion was passed by miniature animalcules or specks of disease called ‘germs’. Established scientific opinion debated the respective merits of these two theories with a right good will, since many millions were dying annually of various communicable illnesses. Then one day a doctor called Snow proved that the latter theory was the right one by removing the handle of a pump in a London street called Broad, on the belief that the water supply from that pump was infected with cholera. The spread of cholera in that area was halted, because disease is indeed spread by tiny animalcules, which we nowadays call ‘bacteria’ and ‘viruses’. Or to put it another way, the miasma theory of contagion was wrong. You can’t catch malaria from a bad smell.

To dwell for just a moment longer here: it’s not that the miasma theory is uninteresting, or that it wasn’t (and perhaps still is) an empirical fact about the way certain people thought about disease. And it’s not that I (for one) would surely have seen through its wrongness had I happened to be alive in the 1840s; perhaps I would have been a fierce advocate of it, and would have trusted to lighting incense in my bedroom to preserve me from cholera. That’s not at issue. What’s at issue is only the wrongness of the theory.

Now, here’s some philosophy. Here’s Plato, the man of whom it’s said that the writing of footnotes to his work has been the whole occupation of subsequent Western philosophy. Plato says that when we point at a chair and say ‘that’s a chair’ what we’re doing is identifying something intrinsic to the chair, noting the resemblance that this particular chair has to a perfect, ideal chair that exists in some otherworldly realm which he called the Forms. This resemblance, he says, is what all the chairs in the world have in common. They may have lots of points of difference (three legs or four, black or red, big or small) but they all nevertheless share one crucial thing: their resemblance to the form of Chair. How else, Plato asks, could we see a chair and recognise it as a chair? This doctrine of ideal forms is pretty much central to almost everything that Plato’s thought is about.

Plato is wrong here. The resemblance between these chairs is a feature of the pattern-recognising nature of human consciousness, not an aspect of the external world. This business of grouping things together into sets goes on inside people’s crania, not in the real world, nor in some otherworld.

It’s not hyperadvanced thinking to point this out; it’s more like Philosophy 101. But if Plato is wrong, we might wonder why he is still studied as a live thinker—which is to say, not as a historical curiosity, or as a rudimentary sophist whose ideas have been largely superseded, but as a Philosopher in the fullest sense. How would we feel if, in Medicine 101, the relevant professor started his lecture course with: ‘Western Medicine is all footnotes to the miasma theory; until you’ve grasped it you won’t understand anything about modern clinical practice’?

One way round the difficulty (if it is a difficulty) is Gadamer’s. A lifelong Plato scholar (and, like Plato, a fellow clearly much smarter than I), he insisted that it was not the content of Platonic thought that mattered, but the form: the dialectic interchange of thought. But this, to me, smacks rather of an attempt to salvage the semblance of rightness from a whole system that is, well, wrong. ‘It is not the content of the miasma theory that is so important today; it is the form in which it was advanced. Perhaps disease is actually spread by germs, but the most useful way to think about disease as such is via the conceptual structures of miasma.’ (I also, as a side issue, don’t find Plato’s dialectics to be all that, well, dialectical: they seem to me largely to consist of Plato’s ventriloquist-doll Socrates saying Very Clever Things and his interlocutors saying stuff like ‘how true!’ ‘yes indeed!’ and ‘of course you are right!’ But that’s not the main point I’m making.)

Here’s another example from another philosopher. Heidegger insists that time is prior to space and being. He is, accordingly, very stern about people who use spatial metaphors when talking about time (stuff like ‘the long road of life’) because, he says, it’s a conflation of incompatible discourses. It goes without saying that time (and ‘Being’) are not garnish or add-ons to Heidegger’s philosophy; they’re core to the massy edifice of his thought. But he’s wrong about time.

Here’s a better account of the matter: Einstein argues that time and space are both dimensions of something he calls spacetime, neither one prior to the other. How can we be sure that Einstein is right and Heidegger wrong? Einstein’s theory can be tested and, if wrong, falsified; and it has been tested and it hasn’t been falsified. (It hasn’t been proven either; but neither Einstein nor Heidegger is amenable to proof like that.) If we synchronize two atomic clocks and leave one at home and fly the other rapidly about in an airplane for a couple of hours, we discover that this velocitous motion through space has affected the time told off on the second clock, which fits Einstein’s theory perfectly, and which contradicts Heidegger’s.
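The size of the effect is easy to estimate. For speeds far below light-speed, the flying clock lags by roughly t·v²/2c². Here is a back-of-envelope sketch in Python (the cruising speed and flight time are illustrative assumptions, not figures from the actual clock experiments, which also involved gravitational effects):

    # Rough special-relativistic lag of a flying clock relative to one at home.
    c = 299_792_458.0   # speed of light, m/s
    v = 250.0           # assumed airliner cruising speed, m/s
    t = 2 * 3600.0      # assumed flight time: two hours, in seconds

    # For v << c, 1 - sqrt(1 - v**2/c**2) is approximately v**2 / (2 * c**2).
    lag = t * v**2 / (2 * c**2)
    print(f"{lag * 1e9:.1f} nanoseconds")   # about 2.5 ns

A couple of nanoseconds is invisible to any wristwatch but comfortably within the resolution of atomic clocks, which is why the experiment is decisive.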

Now I’m even more wary of mouthing the ‘Heidegger is wrong’ slogan, partly because I know so many very clever people for whom he is a central thinker, and I’d hate to think that they were all wasting their time; and partly because it veers rather close to the ad hominem ‘Heidegger was a Nazi’ argument. Of course he was a Nazi; but the fact of his being a Nazi does not (in itself) disqualify his thought. On the other hand we might want to ask: ‘Martin, when you say that time is prior to space and being, what’s your evidence for that statement?’ Well, we can ask, but Martin doesn’t really do ‘evidence’. The evidence is that Martin says so, and only that.

Most of Heidegger’s philosophy proceeds by assertion, which I suppose means that in effect it’s grounded in saying: ‘Heidegger was certainly clever, and he thought about stuff a great deal, and this is what he came up with’. But that’s a very lazy way for any student of the ideas to do philosophy, particularly in regard to Heidegger, who (right or wrong) was as far from slapdash as any thinker has ever been. How could anybody who reads Heidegger be happy with merely second-hand opinions? A considered, continual attentiveness to every moment of conceptual working-through is absolutely the Heideggerian way. That’s not wrong. But he is wrong about time.

I suppose one possibility is that calling oneself a ‘Platonist’ or a ‘Heideggerian’ is actually a matter of faith, in the strong sense of that word. Because, of course, to go to a believing Christian and say ‘you know your church says that the bit of bread turns into the body of God on Sundays? That’s wrong’ isn’t going to persuade anybody to abandon their faith. What sort of gossamer faith would it be if this were enough to unsettle it? Showing that the bread stays bready would be easy enough to do, but would be irrelevant to most believers. Is this how philosophy works? My notional Heideggerian says: ‘I don’t care what you say; Heidegger’s thought speaks to me, it makes sense of the cosmos and my place in it, and I’m not giving that up on the spurious grounds of whether he’s right or wrong ….’

Indeed, I’m not sure this goes far enough. I’m starting to wonder whether being wrong doesn’t have some special place in the process of contemporary thinking. Maybe it’s even the ground of modern philosophical enquiry. I don’t mean this in the obvious sense that a healthy thinker must beware letting her thought calcify into certainty—that, in other words, a certain openness to wrongness (‘skepticism’) is a healthy thing in a philosopher. That’s probably true, but it’s not what I mean.

The great thinker about wrongness remains Nietzsche. This is what he says in The Gay Science: ‘Man has been educated by his errors … if we removed the effects of these errors, we should also remove humanity, humaneness, and “human dignity”’ [115]. ‘The conditions of life might include error’ [121]. ‘What are man’s truths ultimately? Merely his irrefutable errors’ [265]. In Beyond Good and Evil he insists that ‘the falseness of a judgment is for us not necessarily an objection to a judgment’ [4]. ‘Truth?’ he complains in The Will to Power: ‘who has forced this word on me? But I repudiate it’ [749]. I could go on and on, but I hardly need to. Nietzsche is fascinated by wrongness.

I get the impression that lurking behind Nietzsche’s thought here is an admiration for the individual who perseveres in his or her error as if there is something heroic in that very perseverance: the heroism, the splendid tragic will, is more important than the wrongness. A timid person can stay within the bounds of what is right; it takes a great spirit to step outside that comfort zone.

As Sellar and Yeatman characterised the Roundheads and Cavaliers of the English Civil War, so philosophers (perhaps) have regarded the divide between the workings-through of modern physics and Heidegger’s creative ponderings: one is right but repulsive, the other wrong but wromantic. What sort of a man is Kierkegaard’s Knight of Faith but a man who is wrong, who knows he is wrong, but who carries on heroically in the teeth of his wrongness on the chance in a million that a miracle will ultimately prove him right and alchemically convert his leaden wrongness into the gold of being right? Isn’t this a stirring thought? As if it is somehow pusillanimous to live with a strict and pedantic adherence to correctness! In religious faith and philosophy, so in life more broadly: we admire the man living the rock and roll lifestyle more than the vegan who jogs twelve miles every day and takes care to balance his chequebook. So what if the former individual will die at 50? He has lived! … or so we think, as if a life lived without error isn’t really a life at all.

But I’m struck by the thought that wrongness (of the right sort) might occupy an even more central place in modern thought than this. We can think to ourselves, admiringly, ‘well, it takes a strong person to be so wrong and still to carry on thinking and working …’ but this still characterises the wrongness as a negative symptom, like a man struggling on up the mountain even though his arm is broken (how admirable! How strong he must be!). But perhaps wrongness is the very health of the mountaineer, not his sickness. As Nietzsche suggests, perhaps error is the ground of thought, not its occasional malaise.

This is something like the celebrated defiance of the observation that the doctrine of the trinity is impossible: ‘credo quia impossibile’, ‘I believe it because it is impossible.’ This strategy of argument has been greatly maligned, and for good reason [‘I believe that drinking eight pints of bleach will turn me into a superman, with the power to leap tall buildings.’ ‘But that’s impossible!’ ‘Aha! but I believe it because it is impossible’ … or more pertinently: ‘I believe that if we carpet bomb Afghanistan and Iraq killing hundreds of thousands of Arab civilians it will cure the world of Arab-based terrorism’ ‘Sir! That’s impossible’ ‘Aha! but …’]. On the other hand, maligning this piece of pseudo-logic may well be the least interesting response to it. What? You’re put off by its wrongness? Are you a philosopher or a mouse?

Critics attack Foucault’s historical research. And it’s hard to deny that as a historian he was sloppy, partial and cavalier; that he dug out historical data to flesh out pre-existent theories and ignored data that didn’t fit. But this may be to miss the point of Foucault’s achievement. Maybe it’s precisely his ground in wrongness that gives Foucault his great reach and influence as a thinker. Or take the case of Freud: very often wrong, and arguably wrong right down to the bone, it is nevertheless his wrongness that animates the reach and potency of his insights. Or take Benjamin’s essay on ‘The Work of Art in the Age of Mechanical Reproduction’. Benjamin thinks that watching jump-cuts in cinema films will radically reconfigure the way ordinary people see the world around them. He’s wrong (in fact jump-cuts very closely mimic the way the human brain interprets visual data from the eye). Should we discard his essay in this regard? On the contrary!

Nietzsche’s fascination with wrongness is more than an example of his neat self-reflexive relativism: which is to say, he is doing more than pointing out that the urge to suppress error and uphold truth is itself, in a manner of speaking, an error. With his characteristic vocabulary of necessity, will, strength and gaiety he is asserting the centrality of error to life itself. The will to truth, Nietzsche says, is a concealed will to death. This, I think, is the heart of philosophy’s secret love for wrongness; the peculiar indulgence with which it treats its Big Thinkers, not despite but because of their errors. The urge to correct Plato, or Heidegger, or Freud is a misunderstanding of the process of philosophy. And, of course ….

This leads inevitably, of course (this being a guest post on The Valve after all) to Zizek. Is Zizek wrong? Does he misread what other philosophers say, does he muddle his arguments, is his worldview premised on the dodgy pseudo-science of Freudianism as filtered through the dodgier gobbledygook-science of Lacan? Insofar as this is the case, then this, exactly, is the basis of his merit as a great thinker. We must embrace wrongness if we want to be philosophers.


Comments

This bit: Critics attack Foucault’s historical research. And it’s hard to deny that as a historian he was sloppy, partial and cavalier; that he dug out historical data to flesh out pre-existent theories and ignored data that didn’t fit. But this may be to miss the point of Foucault’s achievement. Maybe it’s precisely his ground in wrongness that gives Foucault his great reach and influence as a thinker.

Seems quite different from the rest.  It’s the content of Foucault’s historical research that’s shoddy, not necessarily the theories which inform it.  Granted, the sloppiness of his research--and the defense of it is “sloppiness,” not, as it could also be, convenience or dishonesty as his principle of selection--could point to problems inherent in his methodology, but I don’t know that it does.  (And as I work on my dissertation, I think to myself “with qualifications, I don’t think that it does.”) All I point to here is that Foucault’s wrongness seems categorically different from the other ones you identify.

By Scott Eric Kaufman on 02/10/06 at 08:14 PM | Permanent link to this comment

I like ‘wromantic’. Proposed definition: commendable belief in something known to be false. (In the same semantic ballpark as ‘truthiness’.)

But more seriously. Adam writes: “Plato is wrong here. The resemblance between these chairs is a feature of the pattern-recognising nature of human consciousness, not an aspect of the external world. This business of grouping things together into sets goes on inside people’s crania, not in the real world, nor in some otherworld.”

I don’t deny that Platonism is problematic. As I have written before: Plato’s Heaven manufactures more mystery than it can consume locally. But what Adam is advocating here is, straightforwardly, a form of empirical idealism. It is a belief that the physical universe is a creation, not of our minds, but of our brains. And that is, likewise, a very problematic point of view. (The fact that two chairs resemble each other is, at least in part, a function of their physical properties. If the fact that two chairs resemble each other is to be explained in terms of things going on inside crania, then the conclusion must be that the chairs themselves are not just inside our mind but inside our skulls. A proposition subject to doubt.) Another way to put the point: what is a pattern, such that we can recognize it? A pattern is a kind of form, no?

Again, it’s not that Platonism is better than the other views. It’s that it isn’t any worse than the other views. As explanations, they all share the property of being unexplanatory and/or having absurd implications. Which is sort of interesting. But the debates still have a somewhat fruitless quality, as a result of everyone being wrong. (That’s why I’m a Wittgensteinian, rather than a realist or an idealist. Isn’t that clever of me?)

At any rate, I think Adam is too quick to put questions about Platonism and the phenomenology of space and time in the same category with questions that have been more or less settled in some scientific way.

That said, I think the question is a good one. There is a sort of flabby rhetoric on behalf of philosophy: discussing all these old books gets you in touch with ‘eternal verities’. You might say, ‘eternal falsities’, rather. Now why would that be valuable? Working toward answers when you don’t believe you can actually get answers is a funny sort of business.

(Re: the absurd leap of faith stuff. I thought once about writing “Fear and Trembling For Day-Traders: Beating the Market the Kierkegaardian Way”.)

By John Holbo on 02/10/06 at 08:50 PM | Permanent link to this comment

If Plato had pointed to, e.g., a piece of iron and had said that it has the form of iron, that would have been a better start. Or (as he actually did) he might have pointed at a dog or a chicken, saying that they had the form of their species.

Using the chair was a bad example, though I suppose you could define public engineering criteria for chairs.

On the other hand, the “triangle” and “circle” examples work pretty well. They’re the foundation of his method.

To me the problem with Plato was his attempt to move from the forms of species and elements and geometrical entities to things like goodness and justice etc. His desire to have unambiguous definitions of politico-ethical terms, the way we have unambiguous definitions of circles and triangles, was an OK desire. But that tack didn’t really work.

By John Emerson on 02/10/06 at 11:43 PM | Permanent link to this comment

For many folks (not all of them positivists) the basic problem of figures like Heidegger is not that they are wrong but that they are, as in Pauli’s famous putdown, not even wrong, not sufficiently coherent to achieve error.

Why do you think Nietzsche admires “heroic persistence in error”? Isn’t part of his point that errors are valuable because we learn from them?

As for physics being “right but repulsive”—have you ever enquired why people devote their lives to studying the subject? Perhaps you think it’s just an evil impulse, or a total lack of imagination?

By Rich Crew on 02/10/06 at 11:47 PM | Permanent link to this comment

I think it would be a good thing if people were more careful about arriving at those types of conclusions, Rich, a point that extends particularly to the comments about Foucault as a historian made above. Deciding that you don’t have time is one thing; making up your mind about complicated things based on a received opinion or two is another.

By Jonathan on 02/11/06 at 12:27 AM | Permanent link to this comment

I think it would be a good thing if people were more careful about arriving at those types of conclusions...

Figures like Wittgenstein and Austen spent a great deal of energy on what it meant to make sense (they had, to be sure, other targets than Heidegger, though others, such as Carnap, had Heidegger very specifically in mind). You may disagree with what they had to say, but I doubt if you can say that they weren’t careful.

I’ll admit I have very little admiration for romantic persistence in error. The current occupant of the Oval Office is quite persistent in a number of errors; does this make him one of your heroes? Of course, philosophy is quite different from politics (pace Plato), but does that really mean that philosophers have a more extended license for error?

By Rich Crew on 02/11/06 at 12:48 AM | Permanent link to this comment

Sure, people read Austen, but do they read Wittgenstein and Carnap? Or do they remember a third-hand citation of something about nothing nihilating and feel reassured about dismissing difficulty?

By Jonathan on 02/11/06 at 01:05 AM | Permanent link to this comment

"For many folks (not all of them positivists) the basic problem of figures like Heidegger is not that they are wrong but that they are, as in Pauli’s famous putdown, not even wrong, not sufficiently coherent to achieve error.”

I disagree with you here: I think it’s fine to call incoherent, largely meaningless beliefs false. An incoherent and undefined proposition such as:

The afsop is nasfo and not nasfo

is false: there is no thing “afsop”, nor is there a predicate “nasfo”, so this proposition is false, and necessarily false, because it is a claim that isn’t true. One might say it’s not a claim at all, but I think this line of thought will have a difficult time accounting for the fact that the proposition is clearly self-contradictory.
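One can even check the form mechanically: ‘P and not P’ comes out false under every truth-value assignment. A throwaway sketch in Python (purely illustrative, of course):

    # "P and not P" is false whichever truth value P takes.
    def contradiction(p: bool) -> bool:
        return p and not p

    print([contradiction(p) for p in (True, False)])   # [False, False]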

By on 02/11/06 at 03:02 AM | Permanent link to this comment

John H:  But what Adam is advocating here is, straightforwardly, a form of empirical idealism. It is a belief that the physical universe is a creation, not of our minds, but of our brains. And that is, likewise, a very problematic point of view. (The fact that two chairs resemble each other is, at least in part, a function of their physical properties. If the fact that two chairs resemble each other is to be explained in terms of things going on inside crania, then the conclusion must be that the chairs themselves are not just inside our mind but inside our skulls. A proposition subject to doubt.)

I don’t understand.  I’m sure by saying that I’m in effect saying ‘I’m pitifully ignorant of philosophy’, but nevertheless:

I’m happy to accept that those things we call chairs exist in the real world: they’re ‘out there’, not only in our heads.  But the resemblance between a footstool, the royal throne in Westminster Abbey and Arthur’s Seat in Edinburgh ... that resemblance is inside our heads, not in the world as such.  Plato thought that this resemblance was in the world as such (or in a world beyond the world).  That’s what I’m saying is wrong.

I don’t believe ‘that the physical universe is a creation ... of our brains’ or of our minds either.  I’m content with Physics’s account of the physical universe [I don’t regard Physics as repulsive, by the way, Rich C.; it seems mostly coherent and rather beautiful to me].  I don’t think I’m an idealist.

Two things occur to me: one is another way of putting it.  We might say that pattern recognition is a really important part of human psychological process, most likely for excellent evolutionary reasons.  Indeed, recognising patterns is so impressive and satisfying a mental process that some people start to believe that it’s not a mental process at all, but actually a feature of the world itself.  But it’s not.  Constellations: the stars exist in the real world, the pattern of stars as a centaur or whatever exists in our minds.

Or, another way of putting it: the problem with Platonism might not be Plato as such, but the neoplatonic move of identifying the realm of the Forms with God.  Credo quia impossibile, after all, was said about the stuff God supposedly does; religious believers get credit for believing impossible stuff about the divine.  It shows that they’re tough enough for the leap of faith.  Once that identification has been made it muddies the whole question about the rightness or wrongness of what Plato says.  It becomes a matter of faith rather than philosophy.

By Adam Roberts on 02/11/06 at 04:10 AM | Permanent link to this comment

Adam, I should write a longer follow-up, but no time right now. You might check out this Stanford Encyclopedia entry on abstract objects or - I think it’s somewhere online - David Lewis’ Socratic dialogue “Holes”. I think of Platonism as denoting belief in the mind-independent existence of abstract objects. But that’s not necessarily how you have to use your terms.

By John Holbo on 02/11/06 at 04:50 AM | Permanent link to this comment

Very interesting link; thank you John.  I am learning things, which is a Very Good Thing. But it doesn’t address my primary problem, I fear; Frege and White (in the linked doc) seem to be saying very interesting stuff, stuff that’s not obviously wrong.  Plato, on the other hand ...

I think of Platonism as denoting belief in the mind-independent existence of abstract objects. But that’s not necessarily how you have to use your terms… I’m happy with those terms, although I’d suggest that Platonism denotes belief in the mind-independent existence of all abstract objects, and also denotes the specific and ultimately mystical nature of that collective abstraction.  Now if I understand you, you’re not saying that’s right (your version of Plato, I mean; not mine).  Specifically you’re saying it’s no more wrong than any other philosophical system.  But I can’t buy that: it seems to me much more wrong than many other systems.

One last example from me, this yummy pancake.  I’d say that the resemblance between the pancake and Pugwash is in the eye of the beholder, and not in the pancake.  I’m not saying that the pancake is in some sense ‘only in my mind’ (it’s a concrete object, clearly); nor am I denying that it carries upon it certain marks, and that those marks approximate to a face (faces also being concrete objects).  I’m also not denying that the abstract quantity ‘resemblance’ enters into it; clearly it does look a bit like a pirate.  My beef isn’t with the distinction between concrete and abstract, it’s with Plato.  Plato’s argument is that this pancake and the image of old Puggy next to it both resemble the Ideal reality of Pugwash, which exists in the realm of the Forms.  Such a belief is ‘no more wrong’ than any other?  Really?  Seems to me daffy as a duck.

So for me the question then becomes: why does Western philosophy have so much respect for Plato?  Why do thinkers never call him on his daffiness?  Rephrasing Platonism as ‘belief in the mind-independent existence of abstract objects’ makes him sound so much saner than saying he believes that the cartoon and the pancake are both pale imitations of an eternal, unchanging Pugwash who lives in the mystical realm.

By Adam Roberts on 02/11/06 at 05:15 AM | Permanent link to this comment

"So for me the question then becomes: why does Western philosophy have so much respect for Plato?  Why do thinkers never call him on his daffiness?  Rephrasing Platonism as ‘belief in the mind-independent existence of abstract objects’ makes him sound so much saner than saying he believes that the cartoon and the pancake are both pale imitations of an eternal, unchanging Pugwash who lives in the mystical realm”

Well Plato, in a sense, got us started. Before him metaphysics, epistemology, moral psychology, meta-ethics etc were much more confused. He separated different questions, invented new ones, laid out something of the structure of the field, introduced various bits of philosophical culture (i.e. defining ourselves in part by our opposition to sophistry) and continued the march away from arguments from authority towards rationality. For that we respect and venerate him, even if we think his ideas about topics as diverse as the innateness of knowledge, the proper attitude towards art, good government and a proper taste in music were a little nutty. The problem is when you don’t recognise some of the wackier aspects of his ideas for what they are; that can lead to the closing of your American mind, if you get my drift.

By on 02/11/06 at 06:36 AM | Permanent link to this comment

“So for me the question then becomes: why does Western philosophy have so much respect for Plato?  Why do thinkers never call him on his daffiness?”

I’m an anti-Platonist, as are most people in philosophy these days, AFAIK. The general idea is that, while he made a developed rational discourse possible (reasoning before Plato was very fragmentary, haphazard, and episodic), he took a wrong turn with his rationalism, idealism, and eternalism, and that vestiges of this “Platonism” still have bad effects today.

This is a far different thing than saying that Plato was always wrong and that all of his ideas were erroneous. At the time he chose the wrong fork in the road, it was a sensible thing to do. Later on we had to make a difficult correction.

Since about 1500 philosophy and science HAVE been moving away from Plato. Bertrand Russell was particularly vehement about this. However, a given anti-Platonist can often be shown to retain significant traces of Platonism.

The “footnotes on Plato” quote is from Whitehead, who was an anomaly in XXc philosophy and more Platonist than most others.

By John Emerson on 02/11/06 at 10:48 AM | Permanent link to this comment

Instead of Forms of chairs or redness, let’s say the supposed universal is the circumference of a circle (2*Pi*R), numbers, or, say, Justice.  Where is the 2*Pi*R in nature? Does the child learn that in the same way he learns the color red? Unlikely. And isn’t 2*Pi*R a priori true for humans, and not simply a matter of empirical observation or inference? (monkeys observe nature, may even make inferences, but can’t do math). The cognitivist may claim that universals are now part of the brain but that doesn’t really refute Plato’s view; it sort of modifies it, embeds the forms as part of the biochemistry of the brain. And there are quite a few mathematics people who still hold to some forms of Platonism: Russell himself included.
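To see how little nature needs to be consulted here: Archimedes squeezed pi out of pure geometry, with no physical circle measured anywhere, by doubling the sides of an inscribed polygon. A quick sketch in Python (the unit radius, the hexagon start and the iteration count are just illustrative choices):

    import math

    side, n = 1.0, 6          # regular hexagon inscribed in a unit circle
    for _ in range(10):       # double the number of sides ten times
        side = math.sqrt(2 - 2 * math.sqrt(1 - (side / 2) ** 2))
        n *= 2

    print(n * side / 2)       # perimeter/diameter -> 3.1415925..., approaching pi

No measurement of the world enters; only definitions do.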

By x on 02/11/06 at 12:35 PM | Permanent link to this comment

I can’t really disagree with x today.

By John Emerson on 02/11/06 at 01:00 PM | Permanent link to this comment

Sure, people read Austen, but do they read Wittgenstein and Carnap? Or do they remember a third-hand citation of something about nothing nihilating and feel reassured about dismissing difficulty?

I’m sure plenty of people read Wittgenstein. I can’t imagine getting through a philosophy graduate program without at least some encounter with the Investigations. But perhaps I’m being naive. And as for what they’re reading in literature programs, well, I’m not the one to ask.

I’m sure Carnap is rather less frequently read, although there are plenty who can’t pronounce the name without crossing themselves, so to speak.

Ayer is also out of fashion, and yet he’s one of the few examples of a philosopher brave enough to say that he was wrong and that the whole enterprise (positivism) was misguided. An example to keep in mind in any discussion of error. And we did learn from his efforts, as Ian Hacking points out: thanks to Ayer, we know that attempts to state a coherent verification principle are doomed.

Now please excuse me while I go nihilate…

By Rich Crew on 02/11/06 at 01:53 PM | Permanent link to this comment

The afsop is nasfo and not nasfo

This is false, not for lack of references, but because any statement of the form “P and not-P” is false. At least in mathematics. And for all we know, that’s what’s being talked about in this sentence.

Unless there’s some quango with the acronym NASFO…

By Rich Crew on 02/11/06 at 01:58 PM | Permanent link to this comment

If I’m not mistaken, Wittgenstein is probably more often read by undergraduate philosophy majors than graduate students, where I’d guess that he’d be considered as part of the “history” of philosophy--a not very fashionable subject. Austin, it is.

By Jonathan on 02/11/06 at 02:37 PM | Permanent link to this comment

Wittgenstein is overrated; even Russell, commonly viewed as some Polonius-like pedant, produced far more profound writings on the foundations of mathematics, on language and logic, and on epistemology: the Principia and other writings from that period tower over the Tractatus--the PI, while containing some interesting psychological tidbits, seems nearly like anthropology (which is perhaps why pomos enjoy it).  Russell and perhaps Quine (at least he ponders a mathematical platonism), and, quite differently, Chomsky offer real challenges to a pure scientific materialism, tho’ it’s debatable that any varieties of platonic realism (or platonic aesthetics) will hold when cognitivism and genetic determinism are finally worked out.

By x on 02/11/06 at 02:48 PM | Permanent link to this comment

Before our dear abbot founded the monastery, he, a disillusioned philosophy professor from Yale, in the late ‘60s, drove his VW microbus into the desert of the American southwest, and came upon a Sherwood Covington, who over the years, in fact his whole adult life, attempted to build the perfect desert dwelling, using all natural materials, yet he, uneducated in really anything except perhaps auto repair using but a hammer and pliers, spent years gathering timber from the distant Sierra Nevada, planed said timber and created a rather unique notching system for nailless construction, that and a method of mixing native adobe with strands of wasted cacti and forming rather hardy bricks, whereupon he used this to foundation his ‘perfect’ desert dwelling, but even these very creative building blocks could not negate the fact that Sherwood did not understand certain fundamentals, such as gravity. When our future abbot arrived, John Eastley as he was then called, he marvelled, in the beginning, at Sherwood’s creation, but over the months he realized that Sherwood wasn’t building the perfect desert dwelling, but was frantically attempting to keep what he had built years before from toppling, and what amazed the bearded, and I suppose you could say hippie, John, was that Sherwood wasn’t even aware that he was spending his entire life propping up a mis-designed structure. Sherwood soon died, and John never revealed to Sherwood that his life’s work was all in error. But the end-life friendship resulted in John receiving the title to Sherwood’s property, and after much thought and, I should add, anguish, John disassembled Sherwood’s life work, and then, after drafting a well thought out design, used Sherwood’s hand-crafted building blocks to construct what is today the monastery chapel, a beautiful structure that, the now abbot always reminds us, was constructed from the building blocks lovingly crafted by a Sherwood Covington, his life’s work.

By Bro. Bartleby on 02/11/06 at 03:12 PM | Permanent link to this comment

"The afsop is nasfo and not nasfo

This is false, not for lack of references, but because any statement of the form “P and not-P” is false. At least in mathematics. And for all we know, that’s what’s being talked about in this sentence.

Unless there’s some quango with the acronym NASFO”

That’s exactly my point: this statement is false, even though it couldn’t possibly be true, because it is, for a given sense of meaningless, meaningless. What I am arguing is that it is quite possible that what Heidegger is saying is both meaningless and false, because his sentences are necessarily non-referring, against someone who argued earlier that he is mostly just meaningless and “not even wrong”.

By on 02/11/06 at 05:56 PM | Permanent link to this comment

I’ve got no problem with respecting Plato, nor do I think of him as a living philosopher. Platonism may be alive, somehow, but Plato’s doctrines? Though, come to think of it, I’m fond of his reasoning on why he should stick around and drink hemlock rather than slip away in the night.

Basically, I’m with Adam on the doctrine of the forms. Here’s something from an unpublished ms:

9. McCulloch Contemplating the Brain of Plato

Now I want to shift gears a bit. Up until now we have been considering how a Self is constructed in a human brain. But we have not considered the question of the brain constructing an account of itself. In a sense it obviously must be possible for (instances of) the brain to ponder the brain’s nature, otherwise, just what is this essay about anyhow? And such a question is implied by the work which has been done on self-reproducing machines.  But that work is about the abstract theoretical possibility of some computing mechanism representing or recreating instances of itself, not about how humans have actually arrived at theories of brain structure and function.

How it has actually been done is a story involving thousands of years of cultural evolution, with ever more sophisticated concepts being made available for our use (Xxxxxx and Hays, 1990, Xxxxxx, 1993a, 1993b, 1996, 1997). This is no place to retell that story. But it may perhaps be useful to consider an exemplary moment or two.

Let us start relatively late in the cultural game, with Plato, his philosophy, and his brain. In his philosophy Plato wondered how some thing, such as a bed, could exist when it presented so many different appearances, appearing large and small or variously tilted, and so forth. Thus in the Theaetetus (152d) Plato has Socrates teaching a secret doctrine of Protagoras:

It declares that nothing is one thing just by itself, nor can you rightly call it by some definite name, nor even say it is of any definite sort. On the contrary, if you call it ‘large,’ it will be found to be also small, if ‘heavy,’ to be also light, and so on all through, because nothing is one thing or some thing or of any definite sort. All the things we are pleased to say ‘are,’ are really in process of becoming, as a result of movement and change and of blending one with another.

Plato inferred that there must be something behind those appearances holding them together. That something was the Ideal Form, of bedness, of treeness, of rockness, of goatness, and so on, which existed in a realm of Ideals.

This problem is one quite familiar to researchers in the cognitive sciences, only we do not think of it as having anything to do with the nature of beds. Rather, we think of it as having to do with the nature of perception: How can the nervous system identify objects given the multiplicity of appearances they present to the eye? Many proposals have been made in answer to this question, some rather general, others quite specific. Let us consider a brief passage from a classic essay on “How We Know Universals: The Perception of Auditory and Visual Forms” by Walter Pitts and Warren S. McCulloch (1947, pp. 46-47):

Numerous nets, embodied in special nervous structures, serve to classify information according to useful common characters.  In vision they detect the equivalence of apparitions related by similarity and congruence, like those of a single physical thing seen from various places.  In audition, they recognize timbre and chord, regardless of pitch. The equivalent apparitions in all cases share a common figure and define a group of transformations that take the equivalents into one another but preserve the figure invariant.

Pitts and McCulloch talk of figure where Plato talked of Ideals. They do not talk of some abstract disembodied realm. Rather, they talk of a group of transformations which relate perceptual “apparitions” to the figure.  Other thinkers use different terms and different mathematics. But the import is the same. There is no need for an ideal world Out There Somewhere. All talk is of the nervous system.
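The flavor of such a group of transformations can be conveyed with a toy computation (a sketch only: the triangle and the normalization below are invented for illustration, and are far cruder than anything in Pitts and McCulloch):

    import numpy as np

    # Map a figure to a canonical form that translation and uniform
    # scaling cannot change: center on the centroid, scale to unit size.
    def canonical(points):
        centered = points - points.mean(axis=0)
        return centered / np.linalg.norm(centered)

    figure = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
    apparition = 3.0 * figure + np.array([5.0, -2.0])  # same figure, seen "elsewhere"

    print(np.allclose(canonical(figure), canonical(apparition)))  # True

Two apparitions related by the group’s transformations land on the same canonical form; that form is the invariant.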

Given this last observation we might speculate that Plato’s philosophical efforts were, in effect, an attempt to understand the operations of his own nervous system. Just what Plato in fact knew of his nervous system isn’t quite clear. But he certainly didn’t have anything approaching a contemporary understanding. We know that he believed humans to be animated by three souls, one located in the head and concerned with reason, another in the breast ("midway between the midriff and the neck") and concerned with the passions, while the third was located in and about the liver ("between the midriff and the boundary of the navel") and concerned itself with physical appetite (Timaeus 69d-71b).  We now know that control of all of these functions is ultimately located in the brain, which is located in the head--though, of course, the nervous system has its ganglionic satellites, the cardiac, coeliac, and hypogastric plexuses, and we must also consider the chemical regulation of the endocrine system, not to mention the immune system. So, if we are going to grant that Plato was constructing a representation of the operations of his own nervous system, we also have to grant that it was a rather poor representation--just as future generations may look back in amusement at our own theories on that score.

However, there is some reason to believe that McCulloch and Pitts had a somewhat deeper view of these matters than did Plato. They had the benefit of two millennia of human intellectual effort subsequent to Plato. They could learn from Plato and his successors in a way impossible for Plato himself. Thus it is safe to say that McCulloch and Pitts knew more about Plato’s brain, considered as an arbitrary instance of the human brain, than did Plato himself. As for McCulloch’s successors, one might easily imagine them arriving at an understanding of the brain deeper than his own.  One might even imagine them arriving at a theory which allows them to understand how Plato’s brain thought the particular thoughts it did, and McCulloch’s as well, and to simulate such a theory on a computer constructed with an artistry we can scarcely imagine.

By Bill Benzon on 02/11/06 at 07:13 PM | Permanent link to this comment

Plato’s philosophy was at the very beginning—read some pre-Socratic stuff to see what I mean. So we can say that he’s a pioneer and could not have been expected to be right all the time, etc., etc.

At the same time, let’s not condescend to him too easily. In Plato’s thought the question of how we perceive things and understand things was too much mixed in with the question of how it is things are what they are. Furthermore, he did use those stupid chair-table examples. But one of the things Plato was talking about, as x said above, was what it is to be a circle, how a circle is defined, what a circle is, and so on. Not about how we perceive circles, or what our concept of the circle is. And cog psych doesn’t touch this aspect of his philosophy, which proved fruitful over the centuries.

Given this last observation we might speculate that Plato’s philosophical efforts were, in effect, an attempt to understand the operations of his own nervous system.

We might do a lot of things, but this is only what a cognitive philosopher would say about Plato. It’s too subjectivist. Plato was also talking about things “out there”, and in the long run “forms” became things like genes, atoms, molecules, species, etc.

By John Emerson on 02/11/06 at 07:30 PM | Permanent link to this comment

"‘Man has been educated by his errors … if we removed the effects of these errors, we should also remove humanity, humaneness, and “human dignity”’ [115]. ‘The conditions of life might include error’ [121]. ‘What are man’s truths ultimately? Merely his irrefutable errors’ [265]. In Beyond Good and Evil he insists that ‘the falseness of a judgment is for us not necessarily an objection to a judgment’ [4]. ‘Truth?’ he complains in The Will to Power: ‘who has forced this word on me? But I repudiate it’ [749].”

It’s pretty easy to get along with the first two quotes; I think I’ve been educated by my errors, insofar as I’ve learned that they are errors and found some alternative.  But the third quote sounds very p & ~p, or “nasfo and not nasfo”, doesn’t it? 

And the fourth too: it sounds as though Nietzsche isn’t making a distinction between “false” and “fictive.” Or something along those lines.  We don’t take issue with the fact that sentences in Austen’s works (or mentioned sentences in Austin’s work, so “fictive” is probably broad enough) aren’t true, because they aren’t asserted.  But if fictive sentences were asserted (that is, were judgments), wouldn’t noting their falsity have to be an objection?

Maybe I can generalize from this point: philosophical works that are wrong often make good fiction.  (Or they would, if authorial intent matters.)

And, yeah, they can help us understand philosophers who aren’t wrong: how ignoring certain distinctions can lead to confusion, that sort of thing.  (For instance:  if Heidegger means that time is *chronologically* prior to space, yeah, that’s weird and more theological than philosophical.  But if he meant that time is *logically* prior to space—well, I’d have to see his argument.  [Or the Heideggerian equivalent.] It seems true enough that spatial metaphors for time are misleading; even Wittgenstein has funny fantasias about this in the Brown Book and elsewhere.  Etc.)

In that sense, it’s hard not to agree with Nietzsche quotes one and two.  In order for philosophy to proceed, there need to be some philosophers who get things wrong.  (Even Socrates’ sock puppets.) That’s where Nietzsche quote five is hard to stomach.  Being educated by our errors means learning something true that shows the erroneousness of our erroneous beliefs. 

(Altogether different from the error--not necessarily erroneous beliefs--of the meat-&-potatoes-gobbling rockstar.  A person can rationally justify a profligate lifestyle, but she can’t rationally justify holding a belief that she knows to be false.)

By on 02/11/06 at 07:56 PM | Permanent link to this comment

‘What are man’s truths ultimately? Merely his irrefutable errors’

Naturally this is straightforwardly self-contradictory (is that statement an error?), but it’s also self-contradictory for another reason: if the errors were really irrefutable they couldn’t be proven false, hence Nietzsche cannot provide evidence for his position.

Nietzsche was great as a philosophical psychologist, historian and anthropologist but epistemology isn’t exactly his strong point.

By on 02/11/06 at 08:38 PM | Permanent link to this comment

"Nietzsche was great as a philosophical psychologist, historian and anthropologist but epistemology isn’t exactly his strong point.”

True, that.  But so, where he goes wrong might help us respond to the initial question, which (unless I’m wrong) was something like, “What’s wrong with being wrong?”

By on 02/11/06 at 08:44 PM | Permanent link to this comment

What annoys me about what little I’ve read of Nietzsche is that despite his incessant claims he’s not a demythologizer; he comes across as more of a mystic, as can be seen in his impossible epistemological views. But then I haven’t read very much of him, so I am probably wrong (shrug).

By on 02/11/06 at 08:48 PM | Permanent link to this comment

It’s been some time since I’ve read Nietzsche, but I was into him as an undergraduate. And mysticism would have been part of the formula. That’s also part of why I liked the Wittgenstein of the Tractatus.

By Bill Benzon on 02/11/06 at 09:05 PM | Permanent link to this comment

‘What are man’s truths ultimately? Merely his irrefutable errors’

This is pretty evidently a deliberate paradox. It looks like a sort of throwaway quip, too—not a profound paradox, but sort of a jokey one. Nietzsche has to be read as literature, because his expression is never straightforward. But regardless of what he said at various times, it’s not fictional literature—you can’t bracket out “truth” entirely. He doesn’t so much want to find Truth as to replace it—the non-fictional part of it is there.

By John Emerson on 02/11/06 at 09:36 PM | Permanent link to this comment

"you can’t bracket out “truth” entirely. He doesn’t so much want to find Truth as to replace it”

Well, how much pressure should we put on those shudder quotes?  Or the capital T?  I’m not sure how the grand scale project gets around the same paradoxes as the jokey thumbnail.

If what Nietzsche says is true, then we don’t need to appreciate his work on its literary or wrong-but-educational value (but then it’s hard to take seriously that he *really* wants us to forget about truth, or that he’s not just wrong on that particular point).  I’m assuming the capital T indicates a kind of metaphysical grandeur.  If what he says is supposed to be True, but he’s waging war against the concept of Truth, then . . . well, still paradoxical, but like Bill Benzon says, it kind of sounds Tractarian avant la lettre. 

Or maybe he’s fighting against placing too much metaphysical or explanatory weight on truth in the ordinary sense?  That’s a reasonable project, and distinct from the other two options; but I don’t know Nietzsche well enough to know whether he’s said anything of the sort.  (Help?)

I should say, I didn’t mean to suggest that all wrong philosophy has literary merit, or that that’s the only use for wrong philosophy.  I should also say, I don’t think that a writer’s intentions do or should entirely determine whether we read their works as fiction.  I’m pretty sure that few philosophers would like the idea.  Not even the ones who tried to make nonfictional philosophical points in fiction.  (But if one guy would, you’d think it’d be Nietzsche!  Well, and Emerson.  Ralph Waldo, I mean.)

By Jennifer on 02/11/06 at 10:16 PM | Permanent link to this comment

"mysticism would have been part of the formula. That’s also part of why I liked the Wittgenstein of the Tractatus.”

There’s nothing wrong with mysticism (in terms of its aesthetics at least; logically it’s on much shakier ground), I just think Nietzsche’s mysticism disrupts other parts of the persona he tried to project. That’s the great ambiguity of what I’ve read in Nietzsche: is he a strange sort of rationalist (as his rhetoric seems to indicate) or a confused mystic, as some of his content seems to suggest? Perhaps it’s this ambiguity that’s made him so long-lasting, and made him fascinating to read. For me this ambiguity increases his strength as a psychologist and a life critic but dims his insight in other areas, such as logic, epistemology, metaphysics and ethics (I haven’t read much but I don’t see that his critique of values is really successful). But perhaps I am a mere pedant to hold him responsible in these areas, because he basically was, as he said, a new sort of philosopher, one not largely concerned with the old problems; when I read him I should constantly remind myself that he is not trying to do analytic philosophy.

By on 02/11/06 at 11:20 PM | Permanent link to this comment

Pitts and McCulloch talk of figure where Plato talked of Ideals. They do not talk of some abstract disembodied realm. Rather, they talk of a group of transformations which relate perceptual “apparitions” to the figure.  Other thinkers use different terms and different mathematics. But the import is the same. There is no need for an ideal world Out There Somewhere. All talk is of the nervous system.

This is quite good and important, but cognitivists and brain scientists have not as of yet offered anything like a precise account of how perceptions, sensations, images become thoughts and ideas or memories, or of how language or mathematical concepts are realized cognitively. Linguists such as Chomsky have succeeded in convincing some that there is a “language faculty” (which seems quite platonic itself), but this language faculty has not been correlated with specific brain functioning. Broca’s area is thought to be the source of language, but there are no corridors where syntax might be perceived interacting with neurons: linguists have not been able to address where the syntactical or mathematical rules are converted into chemicals; so in a sense Plato has not really been disproven. There may be strong grounds for claiming that all thought and awareness is neurological and biochemical, but it’s not a physiological fact: there are neurologists and “psycholinguists” and cognitivists making inferences about various areas of the brain, but not offering some cerebral map demonstrating how (and where) a perception (say in reading) becomes a thought.

By x on 02/11/06 at 11:22 PM | Permanent link to this comment

Broca’s area is thought to be the source of language, but there are no corridors where syntax might be perceived interacting with neurons . . . but not offering some cerebral map demonstrating how (and where) a perception (say in reading) becomes a thought.

I don’t know how to read that passage at all. If it’s not metaphor, then it’s full of category errors and so unintelligible. If it is metaphor, well, just what does that get us?

No, it’s not been worked out, and it may never be worked out fully. Still, much is known; more, I think, than your mysterious sequence of doubts allows for. A return to Plato, or even William James, seems unlikely.

By Bill Benzon on 02/11/06 at 11:51 PM | Permanent link to this comment

Yes, “corridors” was a metaphor. Tant pis. No mystery: I make no claims to being a Chomsky or Searle, and really tend to side more with the sociobio. writers such as Dawkins and Dennett against any forms of platonism or idealism as a whole. Nonetheless, with no firmly established science of perception or definitive account of the gap between generative syntax and biology (or math and bio.), one cannot conclusively rule out idealist or even immaterialist accounts of consciousness: when the mental acts associated with math and logic (and with decision making as well) are finally correlated with specific brain areas and functions, then metaphysics will most likely have come to an end, and merged with brain science. That is not likely to occur for decades, if ever.

By x on 02/12/06 at 12:08 AM | Permanent link to this comment

Since it was misquoted in the original, here’s Tertullian’s line (from De Carne Christi): credibile est, quia ineptum est… Et sepultus resurrexit; certum est, quia impossibile.

Not “I believe because it is impossible” but “It is credible because it is silly” and “It is certain (that he was buried and rose again) because it is impossible.” Tertullian is not saying that, while acknowledging the falsehood of his belief, he will believe it; he is saying that he holds that his belief is correct, because Marcion has shown that it is “impossible” according to “worldly wisdom” which is (with reference to Paul) untrustworthy in such matters.

Just to clear Tertullian of the accusation of irrationalism on this point. Or, at least, to get the right quotation out there.

Now back to the fun bits about the dogfood inside Plato’s skull.

By Daniel on 02/12/06 at 04:16 AM | Permanent link to this comment

A generous interpretation of Tertullian might be that he was saying Christianity is so bizarre no human could have invented it, hence god likely invented it (who else might have?), hence we should believe it, QED. Not the best argument in the world, but at least it’s a real argument, with perhaps some evidential force so long as Christianity can be proved sufficiently bizarre (i.e. more bizarre than other religions). Tertullian might have held to that interpretation, but I doubt it (shrug).

By on 02/12/06 at 05:58 AM | Permanent link to this comment

I think one of the main issues here is the degree to which the history of philosophy is of interest.  In physics, for example, you do not study Newton’s original theory, followed successively by the revisions. You study the present state of classical mechanics according to the present consensus.

Like many other areas of study, analytic philosophy wants to be science and believes that it has succeeded, more or less. And it’s a more or less valid consequence of this belief that the history of philosophy is of rather limited interest.

I’m not sure that this is true even of science. In things I read by major scientists I sometimes find them looking back on the history of their field and resurrecting someone who’s been rejected and forgotten, if only as a heuristic. The one that pops to mind is Claude Bernard, an adversary of Pasteur. Bernard was wrong about some big things, but when medical people started thinking of things systemically they found his ideas to be of interest.

Something similar to what was done in philosophy was done in economics: political economy and economic history were stuck in the closet for a few decades. Recently the trend has been in the other direction, because the older economists studied things that the new theoretical economists neglected, and these things have proven important as the weaknesses of theoreticism became more apparent.

It’s always possible to define an ancient author in terms of some particular question upon which he was wrong. Except in cases when this is the only question he ever wrote about, this kind of refutation is dubious. I’m interested in archaic (pre-1950) philosophy because it has a much wider range than analytic philosophy and can be a source of questions and problems the way analytic philosophy cannot.

By John Emerson on 02/12/06 at 09:35 AM | Permanent link to this comment

Not to return the conversation to Plato, but he didn’t always think that there were forms of chairs and other purely physical objects. (The doubts are expressed in the first part of the Parmenides, where Parmenides brings a young Socrates to aporia by asking him whether there are forms of such things as mud and nails.) Most of the interest of forms for Plato lay in things such as beauty, goodness, and abstract objects like circles and sometimes animal species. And there is scholarly controversy about the place of the forms in his later work (it certainly doesn’t look as if the exact theory that appears in the Republic, with its forms of beds and chairs, is used in dialogues like the Sophist and the Statesman, which have no such forms).

More generally, a single philosopher expresses many views in his/her career, not all of which may be wrong; some of these views may still be fruitful today, even if only as sources of disputation. Many philosophers think reading the Theaetetus on the problem of knowledge is a worthwhile endeavor, for example.

By on 02/12/06 at 10:00 AM | Permanent link to this comment

... Nonetheless, with no firmly established science of perception or definitive account of the gap between generative syntax and biology (or math and bio.), one cannot conclusively rule out idealist or even immaterialist accounts of consciousness ...

As a matter of abstract principle, I suppose so. But as a practical matter, no, I don’t think we’re going to go back. It’s a bit like me wondering where I misplaced my favorite pen knife. Until it’s been found I can’t conclusively prove that it’s not been spirited away to Pluto. But I’m not about to book passage to the outer planets on the off chance that it’ll show up there.

As for Dennett, he’s apparently taken the inconclusiveness of the neurosciences as warrant to shill for belief in meme-things that flit from brain to brain, gobbling up neural real estate, sometimes at the expense of their hosts.

By Bill Benzon on 02/12/06 at 10:07 AM | Permanent link to this comment

I just posted this on Crooked Timber, and think it is appropriate here. --BB

#

My argument is that you folks are analyzing and operating on the freckles on the skin, and ignoring what underlies this thin veil. You are wrapped up in the surface (the “our lives are bounded, fastened and festooned by laws and customs …”), but either ignore, or are unwilling to take scalpel to, the freckled skin.

Perhaps we need to look to the ‘mad’ for advice? Certain ‘disorders’ cause the individual to peer beneath the veil of ‘humanity’ and confront the eternal void. We have ample labels for these poor folks. Prisons are full of individuals who think they understand that life is a ‘joke’ or, as some say, a ‘cosmic joke’ … so I’m saying that if one really believes in no Higher Power, then for the most part evolution has ‘kept that secret’ from all beings, until … until the human mind crossed over into self-awareness. Then the cat is out of the bag, so to speak: we need this powerful brain to survive, because our puny bodies can’t stand against other more powerful animals, yet that powerful brain has a downside, for it can now see the void of nothingness, and this too becomes a feared predator. All the “festooned laws and customs” keep that new predator at bay, and all of science and philosophy are but 2×4s nailed to the door in order that that door to the void is never really opened. A few poor souls manage by chance or circumstance to open that door, and all the rest of us pity these poor souls as they descend into ‘madness’ …

The heart of the question: if no “God”, then we are simply another accumulation of atoms that came together by chance and evolution, and have a very brief existence before dissolving and disassembling into the original atoms that constructed us, and that is it. Birth, life, death. Period. No rhyme or reason. And yes, we can sit about and study all this, and even philosophize about it, but again, we are just hammering more 2×4s to that door behind which is the ultimate reality—nothing. And after all, isn’t it Truth that we say we are seeking?

By Bro. Bartleby on 02/12/06 at 11:40 AM | Permanent link to this comment

Bro, “God” is just another, bigger accumulation of atoms, and when he realizes that he becomes depressed, just as we do. But then we go to him for help, and it makes him even more depressed, because he’s supposed to be all hot shit and he knows he really isn’t.

But he’s working on his problems, at least. Give him that much credit.

By on 02/12/06 at 12:52 PM | Permanent link to this comment

There are lots of things philosophers say that are either right or else arguable: but sometimes philosophers say stuff that’s flat wrong.  Aristotle thought the apple falling from the tree fell because, innately, it wished to return to ‘its proper place’ at the centre of the earth. Wrong.  Plato argued that there’s a place where a perfect, eternal and ideal Captain Pugwash sits conversing (indeed, having the perfect, eternal and ideal conversation) with the perfect, eternal and ideal Spongebob Squarepants.  Wrong.  Heidegger thought that time is prior to space.  Wrong.

What interests me, and what sparked my post, was the way the protocols of philosophical criticism handle stuff like this. A scientist might explain phlogistonic theory, but would never talk of the proponents of that theory in the terms Timothy the Scrivener talks of Plato: Plato got us started. Before him metaphysics, epistemology, moral psychology, meta-ethics etc. were much more confused. He separated different questions, invented new ones, laid out something of the structure of the field, introduced various bits of philosophical culture (i.e. defining ourselves in part by our opposition to sophistry) and continued the march away from arguments from authority towards rationality. For that we respect and venerate him, even if we think his ideas about topics as diverse as the innateness of knowledge, the proper attitude towards art, good government and a proper taste in music were a little nutty. It’s not that these comments seem to me unrepresentative; on the contrary, they’re perfectly representative of Philosophy; an eloquent expression of the Philosophical position, or so it seems to me. I’m happy to be corrected (I may not have read the right books), but I’ve read a fair few critical studies on Plato, some introductory and some more advanced, and all of them treat his ideas with respect; and none of them use the ‘W’ word to describe his thought.

This seems to be unique to philosophy, and I wonder why. The miasma theory of disease ‘got medicine started’; it even promoted some good practice (one reason why the British navy conquered other navies was that they adopted the practice of burial at sea; the French navy stored their dead in the bilge, with consequences it’s only too easy to imagine. But the admiralty types who insisted upon burial at sea did so because of the bad smell; they did the right thing for the wrong reason. Florence Nightingale was another believer). All well and good: but no intellectual worker in the field of medical studies today would therefore treat the miasma theory with reverence and respect, publish dozens upon dozens of scholarly monographs upon it, or make it the foundation of university pedagogy. Because, although it did all these things, it was wrong.

Is it that professional philosophers don’t like to think of their discipline as admitting wrongness?  For example, that metaphysical questions can never be resolved right or wrong and must be endlessly and subtly debated?  Is it that Philosophers (most of the ones I’ve met being highly intelligent people) find smart reasons to justify unsmart beliefs, after the manner of smart people everywhere?  So instead of saying that Plato believed in the eternal, unchanging Spongebob Squarepants, they say with John Holbo ‘Platonism denotes belief in the mind-independent existence of all abstract objects’?  It’s much harder to pin the wrongness tail on that donkey, certainly: but is it a good use of brainpower to find ways of rephrasing daft ideas to sound less daft?

Or is it something deeper about Philosophy; an allergy to the very concept of paradigm shift?  By which I mean: other intellectual discourses may have paradigm shifts, but not philosophy, no sir.

This would be a shame, I feel, if it’s true. A scientist is happy admitting that her paradigm is a step-change from previous paradigms, because such an admission does not preclude the possibility that a future step-change will alter everything again. So if you want to plot the path of a space probe to Neptune, Newton’s gravity-theory is better than Aristotle’s, but that doesn’t make Newton eternally ‘right’ or ‘true’. ‘Better’ is a more useful category than ‘true’; so, indeed, is ‘more useful’. But without some sense that paradigms do get superseded, I don’t see how ‘better’ can come into it. Hence John H.’s assertion that all philosophical explanation is as daffy, or as brilliant, as Plato’s. [I summarise, inexactly.]

In sum: I don’t invoke ‘wrongness’ as a criterion of absolute truth, absolute falsity: I don’t see how those two notions would work. But you don’t need absolutes to understand that some things are better and some worse at the businesses they set themselves. And I know on which side of the cart the ideas I cite above come down.

By Adam Roberts on 02/12/06 at 12:52 PM | Permanent link to this comment

I think one of the main issues here is the degree to which the history of philosophy is of interest.  In physics, for example, you do not study Newton’s original theory, followed successively by the revisions. You study the present state of classical mechanics according to the present consensus.

This is probably more true of mathematics than of physics (and mathematics was certainly more of a model for the early analytic philosophers). Newtonian mechanics doesn’t get seriously revised until the 20th century. What is today taught as “classical mechanics” (I exclude here things like thermodynamics and electromagnetism) is not so far from the early 19th-century consensus. To be sure, Newton looks rather different, but the differences are mostly those of mathematical technique.

By Rich Crew on 02/12/06 at 12:52 PM | Permanent link to this comment

Oh, and thanks Daniel for correcting my Tertullian; I was quoting from memory (v. sloppy, that).  But I’m not sure that ‘it is certain because it is impossible’ clears Tertullian of the accusation of irrationalism on this point exactly.

By Adam Roberts on 02/12/06 at 12:59 PM | Permanent link to this comment

Is it time to quote Stephen Dedalus on Shakespeare?

By Jonathan on 02/12/06 at 01:12 PM | Permanent link to this comment

Plato argued that there’s a place where a perfect, eternal and ideal Captain Pugwash sits conversing (indeed, having the perfect, eternal and ideal conversation) with the perfect, eternal and ideal Spongebob Squarepants

Amusing, but not really an accurate depiction of the problem of a priori knowledge or innateness (as discussed, say, in the Meno). Integrals don’t grow on trees; and they are true in a way that statements/facts about, er, a rat’s intestines are not. An empirical and evolutionary account of mathematical knowledge may be possible, but again the interface is hidden. Consciousness may be similar to a computer, but no one has as yet specified the exact location of many of the essential components, i.e. the CPU or hard drive (where and how are memories stored?) or network protocols. When you can put an RJ-45 in the back of your skull, jack in (or maybe wireless), and download your memories of Suzy Creamcheese onto your Mac, cognitivism and AI will have triumphed. Or something like that.

By x on 02/12/06 at 01:15 PM | Permanent link to this comment

Consciousness may be similar to a computer, but no one has as yet specified the exact location of many of the essential components ...

Not, BTW, the sort of thing I’ve ever advocated. I know there are people who believe this sort of thing, but I’m not one of them. One can believe that the mind is something the brain does without believing the class of proposals that says the brain is a digital device. The evidence on that proposition is not good, not good at all.

By Bill Benzon on 02/12/06 at 01:24 PM | Permanent link to this comment

Adam, nobody reads Aristotle’s Statics any more (where the theory of falling apples is) except for purely historical reasons. People don’t read his biology either (he seems to have been right about the sex life of squid, however, which he wrote a fair amount about). Some of his stuff has been entirely superseded, and some not.

One of the reasons why Aristotle and Plato should still be read is that they tried to cover all questions, and they were tremendously fertile and diligent. As time went on some of the questions they raised ended up being answered, and others not. If a question is still problematic today, sometimes what some earlier writer had to say about it, e.g. Aristotle or Leibniz, might be interesting and suggestive.

Agreeing with X for an unprecedented second time: Plato’s thought seems subjectivist because he believed that the mind in its deepest essence (but not in everyday use) truly reflects reality. But what Plato was primarily talking about was reality and not the mind. So the Forms/ideas are not objects of consciousness, but exist independently.

By John Emerson on 02/12/06 at 01:45 PM | Permanent link to this comment

Hmm, does this sentence—“The resemblance between these chairs is a feature of the pattern-recognising nature of human consciousness, not an aspect of the external world”—make any sense?

I don’t think it does, especially as “human consciousness” here is intolerably vague. However, we do know, and we’ve known this since the early nineties, that “pattern recognition” is a brain activity that, like many brain activities, is based on a Darwinian competition between neural paths, a theory first suggested by Gerald Edelman. One consequence of this theory seems to be that there is no meaning in making the “exterior world” and the “internal world” categorically independent. The competition would make, in other words, no sense if the chair itself was not involved, since the criterion of success would then be sheer chance. An autonomous competition between neural paths would mean, basically, that the chair could be an elephant or the color red. I suppose that could be a philosophical position, but it is one that would have to reject the whole notion of the organism’s adaptation to the environment.
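To put that in toy form (a sketch of my own, not Edelman’s actual model, just a cartoon of selection with and without the stimulus in the loop): a population of competing “paths” converges on “chair” only when the chair itself does the selecting; make the competition autonomous and the winner is sheer chance.

import random

# Toy selectionist competition (illustrative only, not Edelman's model).
LABELS = ["chair", "elephant", "red"]

def compete(trials=500, stimulus="chair", use_stimulus=True):
    weights = {label: 1.0 for label in LABELS}  # competing neural "paths"
    for _ in range(trials):
        total = sum(weights.values())
        # A path wins in proportion to its current strength.
        winner = random.choices(LABELS, [weights[l] / total for l in LABELS])[0]
        if use_stimulus:
            # Selection: reinforcement depends on the external world.
            weights[winner] *= 1.10 if winner == stimulus else 0.95
        else:
            # "Autonomous" competition: success is sheer chance.
            weights[winner] *= random.choice([1.10, 0.95])
    return max(weights, key=weights.get)

print(compete())                    # almost always "chair"
print(compete(use_stimulus=False))  # arbitrary: any label can win

Without the chair in the loop, nothing in the dynamics favors “chair” over “elephant” or the color red; that is the sense in which the competition presupposes the environment.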

If Plato is wrong about the forms, it is certainly not for the reason outlined in this post.

By roger on 02/12/06 at 03:36 PM | Permanent link to this comment

Actually, I think Plato was at least as interested in the fact that one and the same chair can have so many different appearances, depending on how far away it is, angle of regard, and lighting conditions. The image you see is different from one case to the next, but the identity of the chair through all those differences is constant. That’s what we need the Forms for.

The beauty of geometry is that, while the triangles and lines and squares and circles we draw are all imperfect, we have a way of talking and reasoning about them exactly, perfectly. So we generalize from this to the world at large, and, voilà!, we have the theory of Ideal Forms.
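To give one worked instance of that exactness (my gloss, nothing Plato himself says): for the ideal circle,

C = 2\pi r, \qquad A = \pi r^2, \qquad \frac{C^2}{4\pi A} = \frac{(2\pi r)^2}{4\pi \cdot \pi r^2} = 1

and that final ratio is exactly 1 by reasoning alone. Measure any drawn circle and you get 1 only approximately; the exactness belongs to the ideal object, which is just the intuition the theory of Forms generalizes.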

BTW, Edelman did not invent either the idea of pattern recognition in the brain or of competition between alternatives. Both were in the literature before Edelman. He did put a Darwinian spin on competition, but that notion of his is but one idea among many in the neurosciences. My sense is that it plays larger in the popular market than in the neuroscience community.

By Bill Benzon on 02/12/06 at 04:01 PM | Permanent link to this comment

The Forms of chairness or of redness--adjectivals--would seem to be quite different, either platonically or cognitively speaking, from the Forms of geometric objects, or equations; circles have the necessary relation of 2*Pi*R, but objects possessing that “that which can be sat upon” quality seem far more unbounded, as Wittgenstein might have said: a stump will serve as chair as well as a throne might. As the person who mentioned the dialogue Parmenides above said, the Forms pertained to mathematical entities and to concepts such as Justice, Truth, Beauty, etc. This is a bit out of my metaphysical league, but I tend to think Forms--the Reals, as the math people say--are related to a non-empirical, a priori or innate knowledge; whereas defining “chairs” or redness would be dependent on experience, culture, language. If evolutionary accounts of knowledge are ultimately confirmed, then the Forms would perhaps be read as brain functions, if not discarded; but it does seem strange to hold that 2*Pi*R is an empirical fact--or a provisional, a posteriori definition, as Quine seemed to suggest--in the same way that biology facts are empirical …

By x on 02/12/06 at 04:35 PM | Permanent link to this comment

This all sort of reminds me of when, as a kid, all the older brothers would gather around a ’32 Ford and puzzle over why the engine would not start; in the end, it was all about hot rods and this particular Ford engine. Thoughts of physics or the origin of this Ford were not even considered; they were having too much fun getting the hot rod to start, and anticipating the real fun to follow. I think Plato was thinking similar thoughts in the dialogue with Glaucon:

[Socrates] And suppose once more, that he is reluctantly dragged up a steep and rugged ascent, and held fast until he’s forced into the presence of the sun himself, is he not likely to be pained and irritated? When he approaches the light his eyes will be dazzled, and he will not be able to see anything at all of what are now called realities.

I guess we could continue and ponder if “hot rod” is a form or essence. But, as I always say, I prefer not to.

By Bro. Bartleby on 02/12/06 at 08:28 PM | Permanent link to this comment

Bro, if God hadn’t been so annoying, people would have believed in him. He has no one to blame but himself.

By John Emerson on 02/12/06 at 08:59 PM | Permanent link to this comment

Actually, when Edelman started out, you could say that his notions had little effect on the neurological community. Now I think they’ve been absorbed, as it becomes more evident that the whole metaphoric of “hard wiring” and single-loop feedbacks—the former pop-sci image of the brain—doesn’t fit with the picture of the brain we get from modern neurology. And his notion of neural selection has a lot of evidence for it. When Edelman started out, the idea that the brain was a fixed, stable-state thing past a certain stage in very early development was conventional wisdom. Now it isn’t. I think that is due, in part, to him.

That doesn’t mean Edelman is completely right, or that he has accounted for consciousness. My point, however, was that dismissing Plato as wrong because of the obvious “pattern-recognizing nature of human consciousness” makes little sense until you put it into some context in which it makes controversial sense—that is, it becomes part of some theory which some believe and some don’t. And if we have gone past Plato, it isn’t because we’ve learned to talk about consciousness better, but because we have better brain science. And to talk about consciousness without talking about the environment in which it functions—which is not created by consciousness, any more than patterns are created by consciousness—seems crazy.

By roger on 02/12/06 at 09:27 PM | Permanent link to this comment

Bro, if God hadn’t been so annoying, people would have believed in him. He has no one to blame but himself.
--Narcissus

Bro, if people hadn’t been so annoying, God would have believed in them. They have no one to blame but themselves.
--Echo

By Bro. Bartleby on 02/13/06 at 12:30 AM | Permanent link to this comment

God doesn’t believe in people because he has better brain science than we do. God knows that “people” are part of folk psychology. Only atoms and the void are real.

The confirmation word I had to enter for this post was “pattern”. I am amused by this.

By Daniel on 02/13/06 at 03:15 AM | Permanent link to this comment

I feel I am learning stuff here: not only learning more about the philosophical debates over ‘concrete’ and ‘abstract’, and not only that Bill Benzon knows a lot more, and has thought a lot more, about human consciousness than I do/have, and that I would do well to listen to him. I also feel (more tentatively) that I’m learning how philosophers respond to this charge of wrongness: namely

(a) that nobody wants to stand up and defend Aristotle on gravity.  (’Nobody reads that anymore ...’)

(b) that nobody wants to touch Heidegger.  For some reason.

(c) that some people agree with me that Plato’s notions on ‘the realm of the Forms’ are dodgy, but they still respect Plato for all that he did for philosophy, and such.

(d) that other people don’t agree with me that Plato is wrong.  ‘Plato’s doctrine of the forms is wrong?  You’re wrong!’ For some reason this makes me think of the Simpsons, and specifically the scene when Lisa comes home to find that Bart has stripped naked and pulled on a chicken carcass as underpants, and is dancing around the kitchen going ‘behold the chicken man!’ Lisa complains, ‘Bart! That’s tomorrow’s dinner!’ and he retorts—brilliantly, I feel—’you’re tomorrow’s dinner.’

Why do I keep coming back to cartoons?  Ho hum

My problem is still my problem (indeed I’m very happy to be told that this whole thing is only a problem for me, and not for anybody else). Perhaps I need to turn it around. Why don’t physicists spend time rephrasing Aristotle, or phlogiston, in complex contemporary theoretical idioms in order to make them sound less daft? Maybe this (as John Emerson suggested) would indeed be useful as a heuristic strategy, even if it doesn’t get us away from the original wrongness.

What would a paradigm shift in philosophy look like?  Or is the secret business of the widespread community of philosophers always to resist the paradigm shift in their discipline?  You won’t find a physicist who’s prepared to argue the case for Aristotle’s gravity; but you’ll always find philosophers who’ll argue the toss for anything previously said by another philosopher, just as you’ll always find a philosopher prepared to disagree.  That’s just philosophers.

[The codeword I had to type in for this is ‘woman53’.  What can it all mean?]

By Adam Roberts on 02/13/06 at 06:40 AM | Permanent link to this comment

"What would a paradigm shift in philosophy look like?”

Adam R, philosophy seems to be inextricably bound up with Great Man-ism.  This appears to me to be even more true for continental than for analytic philosophy.  Therefore there can be no admitted paradigm shift; change has to happen clandestinely.

Look at Newton in physics, say—he’s commonly regarded as the single greatest physicist of all time, within the field.  Does that mean that scientists go back and re-read the _Principia Mathematica_, or that there are Newtonists?  No.  What he did right has been identified and summarized; what he did wrong has been rejected.

But philosophy has no real way of comparing ideas with the natural world and seeing whether they are falsified or not.  So the entire history of the field has to be preserved as possibly of contemporary relevance.  Thus the mock-Christian apologia over Plato’s Forms, say.

By on 02/13/06 at 10:13 AM | Permanent link to this comment

I’ll try again. Plato and Aristotle tried to ask and answer every question that there was in those days. There are unquestionably better answers on many of those questions (the sex life of squid, gravity) and A&P’s contributions are pretty much forgotten now. (In the squid case, not so much because Aristotle was wrong, but because we now have more detailed information).

Other questions are still open. People still can spend a decade working on something and come up with something which looks Platonic. (Many in philosophy of mathematics, for example.) I remember thinking, when I was reading Max Weber on ideal types, that his presentation, which seemed tortured and evasive, would have been better if he’d keyed it on Aristotle.

So there’s a difference between questions which we’re still working on, where Plato or Aristotle’s writing is suggestive, and questions we’ve answered, where P&A are simply wrong.

Most contemporary philosophers are anti-Platonic and anti-Aristotelian, but the ones for whom philosophical history begins in 1950 (or maybe with Frege) strike me as having limited their questions too narrowly. The same would go for many in science. If Plato’s topic were simply “Philosophy of Mind”, or brain science, then he’d be simply wrong, but it’s self-serving to define his work that way.

Only when all of the questions Plato put on the table have been given better answers will it make sense to use the word “wrong” to describe Plato’s whole body of work.

By John Emerson on 02/13/06 at 10:22 AM | Permanent link to this comment

The matter is the study of history. The historians have it right: they study the good and the bad and the ugly; after all, we are the byproducts of it all.

By Bro. Bartleby on 02/13/06 at 10:31 AM | Permanent link to this comment

Why is it that this entire post strikes me as deeply wrong?

By Adam Kotsko on 02/13/06 at 11:28 AM | Permanent link to this comment

John E, thanks for trying again.  I’ll try again myself.  “Only when all of the questions Plato put on the table have been given better answers will it make sense to use the word “wrong” to describe Plato’s whole body of work.” I’m not saying that the whole body of work is wrong.  Plato thought that, ethically, we ought to try to do the right thing.  That’s not wrong; on the contrary, I’d say that’s right.  My post was about the doctrine of Forms (and Heidegger on time).  So, Forms.  That’s some crazy shit, no?  Well:

(a) That’s not crazy, that’s still current, many many people still think that’s right.  To which I’d say, ‘wha?’

(b) We don’t believe that any more.  OK, so why is it still studied, and taught, as more than a historical curio like phlogiston?  Why does this Plato loom so hugely over contemporary philosophy?  Or, to turn it about, where are the scores and scores of academic monographs and conferences on the Miasma theory of contagion?

Adam K: why does this whole post strike you as deeply wrong?  Um.  How would you suggest I go about answering a question about your state of mind?

By Adam Roberts on 02/13/06 at 12:16 PM | Permanent link to this comment

I’ve got it!  It’s the implicit assumption that philosophy should be just like science, and the practice of drawing on science as a ready-made body of facts that can be compared and contrasted with certain propositions drawn from Heidegger or some other philosopher.  The way that Heidegger is speaking about time being prior to space does not strike me as being falsifiable in the same way that the miasma theory of contagion is—yet you’re taking it as a given that Heidegger is “wrong” on this point.  Why?  Because (your idea of science) says so?  Are there any scientists in the room who have studied both Heidegger and theoretical physics to such an extent that they can definitively prove that

(a) a critique of Heidegger based on the findings of theoretical physics would touch on Heidegger’s actual project (i.e., what Heidegger thinks he’s trying to do, not what one thinks a philosopher should be doing); and

(b) granted that such a critique would be relevant, Heidegger’s way of understanding time cannot be construed in such a way that is compatible with theoretical physics (as we presently understand it)?

If this is just another case of me getting pissed off because you’re criticizing someone I like, then we can all go get a beer and forget about the whole thing.  But I don’t like Heidegger all that much.

By Adam Kotsko on 02/13/06 at 12:28 PM | Permanent link to this comment

Read Quine’s “On What There Is” for starters: the ontology of math and logic has some relation to platonism, for better or worse. Where are conditionals “located”, for instance--was the law of contradiction inferred from nature? These sorts of questions are perhaps trivial to physicists, but they do seem to offer some grounds to question that all knowledge is derived from experience; or at least math/logic knowledge operates far differently than the physical sciences (though Newton relied both on induction and the calculus, of course). If the scientist or scholar wants to deny all “a prioriness” and side with the Meat Popsicle/inductivist school, great, but there is something at stake: mathematical objectivity seems undermined, as does, say, Justice if not Truth as a whole. That there are no universals, except perhaps defined by consensus, might be troubling to some who thought that, say, 2*Pi*R was true independently of mind, or that Justice was not a matter of observation, if not personal whim.

Mathematical facts and knowledge are quite a different type of activity than facts about the physical world. That is why platonism (small “p”, rather than big P, idealism, mysticism, etc.) is still considered somewhat important. This is perhaps a different issue than the brain-science debate, but at some point brain scientists will have to be able to demonstrate how higher functions such as math/logic/syntax do realize themselves, well, cortically and biologically. Failing that, Plato might do as well, or better, than biochemistry or Screepture.

By on 02/13/06 at 12:49 PM | Permanent link to this comment

Oh sure, Adam; do tell another one.  We all know Heidegger’s sweaters were pedantically ironic.  What more evidence does one really need?

By Matt on 02/13/06 at 12:53 PM | Permanent link to this comment

Nevermind, I see the local host has conclusively addressed the problem already.

By Matt on 02/13/06 at 12:55 PM | Permanent link to this comment

. . .at some point brain scientists will have to be able to demonstrate how higher functions such as math/logic/syntax do realize themselves, well, cortically and biologically.

This is under way, and has been for some time. In Biology and Knowledge Piaget claims to have found the roots of mathematics in human biology. It’s a difficult argument in an obscure book, so I’m not saying I’m buying it. Piaget has also written quite a bit about the history of mathematical and scientific concepts as the unfolding of human cognitive processes.

Beyond this there is a variety of work on the neural and psychological underpinnings of mathematical concepts. See e.g. Dehaene, The Number Sense, and Lyn English, ed., Mathematical Reasoning: Analogies, Metaphors, and Images.

I have no idea how any of this will shake out, but the project you say that brain scientists should be doing is well under way and has a growing technical literature.

By Bill Benzon on 02/13/06 at 01:14 PM | Permanent link to this comment

Matt.  “Oh sure, Adam; do tell another one.” Me?  Er ...

Adam K. I apologise if I’ve pissed you off. Didn’t mean to do that. If you’re interested, I wholeheartedly agree with you that “the implicit assumption that philosophy should be just like science” is a very dodgy assumption.

If you’re still interested after that, I’d suggest that it hinges on this: “The way that Heidegger is speaking about time being prior to space does not strike me as being falsifiable in the same way that the miasma theory of contagion is ...” We may have to agree to disagree on this. 

What do you mean by ‘doesn’t strike you’?  What is it about Heidegger’s time that exempts it from the kinds of experimental testing that applies to all the other kinds of time?

You say: ‘Why? Because (your idea of science) says so?’ I’d say (humbly) that it’s Einstein’s theory of time, not mine. My contribution to this goes no further than saying that I’m persuaded by Einstein’s theory: but I can’t think of any reason why my opinion should matter on this. My imagination is so meagre, in fact, that I can’t even imagine a serious thinker working on the question of Time in the mid- and late-twentieth century settling down to the job by thinking ‘yes, I don’t need to read anything by this Einstein bloke, it’s enough for me to constellate Plato, Husserl and my enormously capacious brain.’ That boggles my imagination.
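(For concreteness, and with numbers of my own invention: the sort of thing ‘Einstein’s theory of time’ commits one to is special-relativistic time dilation,

\Delta t_{\text{earth}} = \frac{\Delta t_{\text{ship}}}{\sqrt{1 - v^2/c^2}}, \qquad v = 0.8c \;\Rightarrow\; \Delta t_{\text{earth}} = \frac{12 \text{ years}}{\sqrt{1 - 0.64}} = \frac{12}{0.6} = 20 \text{ years},

so twelve years of ship time against twenty of earth time is not a report of how time feels; it is what two atomic clocks will read. Any account of time has at least to decide what to say about that.)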

I’m not very good at reading ‘implicit’ or ‘coded’ messages within e-posted text. If I were I might tag your comments as being of the ‘you dare call Heidegger wrong? You, who are not worthy to clean his boots? Heidegger was a hundred times as intelligent and wise as you’ variety. I, perfectly genuinely and happily, concede this last point, by the way. What I like about Heidegger is precisely his refusal to take anything about the process of thinking on trust: I like the way he works carefully and deeply through all sorts of questions that many other thinkers had assumed had long since been sorted out. But given that this is what he does, how should we react to a position advanced in Being and Time that time and space are not co-ordinate? Should we go ‘hmm, yes, that sounds about right to me’? Or should we test the notion against other approaches to time? If his statements do not admit of testing (which I take to be a synonym for ‘falsification’), then what good are they? Whatever else a person thinks about Heidegger, it seems to me that they must concede that what surrounds his writings is the very opposite of a cloud of unknowing.

By Adam Roberts on 02/13/06 at 01:27 PM | Permanent link to this comment

the project you say that brain scientists should be doing is well under way and has a growing technical literature.

Yes, sir, and I don’t mean to question that project. Reading a debate from a few years ago featuring Dr. Changeux vs. the philosopher Ricoeur, I was convinced of the utter triumph of cognitivism and neurology, and the inadequacies if not irrelevance of traditional philosophy (and the humanities), analytical or especially continental. But I do think objections to computational or cognitive models, raised by the likes of Searle and Dreyfus (whose arguments on why computers can’t learn to drive are quite powerful), will have to be overcome. And they probably will be overcome; but it seems that if mind may be reduced to pattern recognition and to biochemistry, to cortical function, then mental constructs and interfacing should be possible, yet that of course has not been done. (William Gibson provides some decent fictional depictions of digital human “constructs”.)

By x on 02/13/06 at 01:29 PM | Permanent link to this comment

I was reading G. H. Hardy’s A Mathematician’s Apology, and he was saying that mathematicians can really only do mathematics well, and there are very few mathematicians. Reading Mr. Roberts’s post, it occurred to me that there are likewise very few people with an appropriately philosophical kind of mind. Mr. Roberts is obviously smart but has no philosophical sense, like I have little sense for painting. And you can try all you like to beat it out of existence by clubbing it with the authority of science, but it still isn’t going anywhere. For instance, Heidegger could catch a ride on a spaceship and come back to earth 20 years later and his theories about time would not be touched in the least by an atomic clock, and a moment’s reflection makes that clear. Which says more about the clubber than the clubbee.

By on 02/13/06 at 01:36 PM | Permanent link to this comment

No, I’m not saying Heidegger is necessarily more right than you (or anyone).  Your reference to Being and Time points in the direction that I was thinking—he’s not talking about time “as such,” but time as disclosed to Dasein (or to put it in more normal terms, time as it appears to subjectivity).  He is saying that time is more fundamental to human experience than is space (I apologize for not using sufficiently Heideggerianized language here).

I can see someone going against Heidegger in terms of the project he’s setting himself (and that’s a really great thing to do, in my opinion), but I’m still not sure that Einstein touches on what he’s talking about, not because Einstein isn’t relevant or worthwhile, but because it’s not the same kind of thing.

In his later work, it’s possible that Heidegger is encroaching on Einstein’s territory, but then again, maybe he isn’t?  I always get yelled at for not being able to exhaustively back up everything I’m saying, but I’m pretty confident that your simple assertion that “Heidegger is wrong about the priority of time” is not correct.

By Adam Kotsko on 02/13/06 at 01:42 PM | Permanent link to this comment

Mr jbjtIVIV (if I’m pronouncing your name correctly) I salute your powers of intuition.  You quite correctly intuit my complete lack of any ‘feel’ for philosophy, just as you correctly address me as ‘Mr Roberts’.  Indeed the charge goes deeper than that; I even lack the belief that there’s any such thing as a ‘feel’ for philosophy.  To my tin ear the statement:

For instance, Heidegger could catch a ride on a spaceship and come back to earth 20 years later and his theories about time would not be touched in the least by an atomic clock, and a moment’s reflection makes that clear.

possesses exactly the same form as:

For instance, a Flat Earther could catch a ride on a spaceship and circle the earth 20 times, all the time staring out of the porthole, and his belief that the Earth is flat would not be touched in the least by what he saw, and a moment’s reflection makes that clear.

Which says more about the club than the clubber, I think.

Adam K: I hear what you’re saying. But (my imagination failing me again) I can’t see how Being and Time works if it’s only about ‘subjective time’; it would be like an atlas that concerned itself only with the subjective apprehension that the Earth is flat (which it certainly seems to be). I don’t mean to be dismissive there: what I mean, specifically, is that when I read Being and Time (and a very long time it took me) I did so on the understanding that it was about more than just the way that the watched pot never seems to boil.

By Adam Roberts on 02/13/06 at 02:00 PM | Permanent link to this comment

A place to go is philosophy of mathematics. Many in the field are described, often self-described, as Platonists, which means that they think that mathematics has some kind of objective reality and is not created by human consciousness. Their doctrine is not exactly the same as Platonism; it’s a late development of Plato-like thinking, just as physics is a late development of Newton-like thinking. Kurt Goedel is one example of a mathematical Platonist, though there are many. (Ironically, Goedel actually was crazy, but his mathematical / philosophical work is unaffected by that).

Since I am not a Platonist but an anti-Platonist, yet think that Plato should be read, this is becoming tedious. There are few or no actual Platonists alive today. What’s in question seems to be whether philosophy should be studied historically or not, since science is not. The point of view I’m arguing against is a positivist form of presentism: if philosophy is a science, then only the most recent results are worth bothering with.

That would be true if philosophy were a science. What happens is that from time to time specific philosophical questions are given exact scientific answers, as with gravity, in which case they no longer are philosophical questions. When a scientist looks at a philosopher and sees only what he’s interested in—the part he’s just scientized—then the philosopher disappears for the scientist. But if his original understanding of the philosopher was incomplete, his opinion is not authoritative.

If there were an enumerable quantity of philosophical questions, say 100, they could be solved one by one until there were zero. However, it’s always possible to increase the stock of questions—new answers produce new questions.

My long-running beef with analytic philosophy is that it claims to have scientized all of philosophy, but that it has done this by reducing the range of questions asked.

Similar questions have arisen recently in economics. Orthodox neo-classical economic theory has reached an extraordinary stage of sophistication, but it’s under heavy attack for its narrowness.

By John Emerson on 02/13/06 at 02:00 PM | Permanent link to this comment

If you thought Platonic metaphysics was mystical and unverifiable, try Herr Heidegger’s Zeit und Sein. “Revealings,” dread, anxiety, facticity, “thrownness,” the ontic, the four-fold nature of Time, etc.: he’s like a séance-medium compared to the Greek rationalists. Run the Greeks through Kant, Hegel and the “phenomenologists” and you get H., a Nazi poet-metaphysician who thought Gott showed himself when you’re in a bad mood … Whatever inadequacies the Frege-Russell tradition or empiricism in general possesses, they pale next to the product of that Schwein and his bastard sons.

By jake on 02/13/06 at 02:04 PM | Permanent link to this comment

Adam, While I was out moving my vehicle in order to be in compliance with Chicago’s Byzantine parking regulations, I thought of a better way to put this than I in fact put it: Heidegger’s analysis of time in B&T is about time insofar as it really, really matters to a human subject.  To that extent, it is intimately tied to his analysis of death.  One could say that a scientific account is about time insofar as human beings are curious about it. 

It seems to me that such a distinction is justified, because I would say that an argument about what it means that I must one day die is a qualitatively different kind of argument from one about what it would be like if I were looking at a clock on a vehicle moving very near the speed of light.  The same methods cannot and should not be used in both cases (although I do not at all deny the validity and interest of Einstein’s account of time).

As I said, however, depending on how one understands the shift in Heidegger’s later thought, it is possible that those writings are encroaching on the kind of analysis that Einstein is doing and could as such be critiqued as antique or surpassed or whatever.  I’m not absolutely sure about those later writings, having not recently spent a great deal of time with them, but I am fairly sure that the Heidegger of Being and Time is not susceptible to the critique you level at him here.

By Adam Kotsko on 02/13/06 at 02:11 PM | Permanent link to this comment

What’s funny about your disavowal of philosophy is that you’re the dupe of two philosophers, Kant and then Karl Popper. Nobody believes in the falsifiability version of science, much less philosophy. Insert Keynes quote here.

By on 02/13/06 at 02:13 PM | Permanent link to this comment

After reviewing, I’d have to say that the comparison with the miasma theory of epidemics is spectacularly ill-chosen.

By John Emerson on 02/13/06 at 02:13 PM | Permanent link to this comment

Adam, my reply to you was hurried and not very clear. I feel prompted to elaborate: I can’t deny, as an empirical fact of human behaviour, that people judge rightness and wrongness by different criteria on different occasions. The same physicist who gauges rightness in the lab by one set of beliefs employs a completely different set when he goes to Mass on a Sunday. I don’t bring in ‘faith’ in my original post in a throwaway fashion: I think it’s a really crucial matter.

Philosopher A says ‘time is a, and b, and c.’ Person B says, ‘but what about the experiment that suggests that time is not a in the least?’ The strong dismissive version of Philosopher A’s reply would be ‘clearly you don’t understand what I mean by time’, which carries the implication ‘you narrow-minded fool’. The polite dismissive version would be ‘perhaps you and I understand different things by “time”’. But I dislike this because it suggests that in order to ‘get’ Heidegger I must stop thinking in a contrarian frame of mind and ‘get with the programme’; I must see things the way he sees things. I must submit, in other words. But I’m constitutionally disinclined to submit. I resent Spinoza on free will (I’d rather it weren’t true), but I can’t find a way of grazing the smooth surface of the argument Spinoza presents. And though I resent it, I admire Spinoza for not insisting I think the way he does: I can bring any discourse I like to his case. But I can’t bring any discourse I like to the business of reading Heidegger without pissing off people who don’t even like Heidegger very much. This, perhaps, is what Mr jbjtIVIV means by the ‘feel’ for philosophy: the ability to fit your mind into the flow of the thinker you’re trying to apprehend. Must it be so? I hope not.

This, it seems to me, is precisely the difference between Faith and Philosophy. Faith does depend, to a greater or lesser extent, upon a mental submission: a joyous and liberating submission, as I understand it. But philosophy surely should be the exact opposite of this.

Hence the bread example. Christianity does indeed say “this bit of bread literally turns into the body of Christ during the eucharist”. I think it would be inept of me to go “that’s wrong”, even though (ahem) the bread doesn’t literally change; it stays purely bready by any standard science or common sense might apply to test it. To accuse the eucharist of being wrong is to miss the point of the eucharist. Is philosophy the same as this? If I’m to be ticked off for trying to apply scientific standards to philosophy, then what’s the punishment for trying to apply religious ones?

By Adam Roberts on 02/13/06 at 02:19 PM | Permanent link to this comment

Nobody believes in the falsifiability version of science, much less philosophy.

You mean nobody in the Soothsaying program, er, postmodernism or literature, believes that. Funny that Thomas Kuhn, who stresses both verification and falsifiability, is still taught in most Phil of Science courses, as are Popper and Quine. And I guess you should tell the students in the biology or biochemistry departments looking for physical evidence and data to confirm various hypotheses that they need no longer do that, ‘cuz some French hipsters say verification is no longer hip.

Viva Sokal

By jake on 02/13/06 at 02:22 PM | Permanent link to this comment

Adam,

No.  No, no, no. 

The distinction I’m talking about is not the same as the distinction between the laboratory and the altar.  You don’t need to be of any particular religion to understand what Heidegger says about the fact that we never have enough time.  We all talk about time that way.  It’s not a piece of jargon. 

The time that you measure with a watch and the time that you experience as you struggle to make a meaningful life with death approaching at an unknown (but definitely rapid) pace are two different things, but the latter is not something that you have to undergo some special initiation process to understand. Heidegger’s language in Being and Time is admittedly difficult, and there are definitely some critiques one could level at that—but the idea that he has some bizarrely idiosyncratic idea of time such that you have to mind-meld with him (or sacrifice reason) is ridiculous.

By Adam Kotsko on 02/13/06 at 02:26 PM | Permanent link to this comment

Off (my) topic: Does Plato himself ever actually use the example of the chair?  Did they have a lot of chairs sitting around back then?  I always assumed that the chair example was a standard in lectures on Plato because they are items that are normally readily available in a classroom (and conveniently enough, the chairs in a classroom are often all the same).  But this is an empirical question, and I’m not hanging my entire philosophical system on the idea that Plato never used the example of a chair.

By Adam Kotsko on 02/13/06 at 03:00 PM | Permanent link to this comment

Let’s agree to the somewhat obvious point that Plato, Aristotle, and Greek idealism are outdated and wrong in terms of providing accurate scientific models. What is it that makes them wrong? It was the rise of inductivism and of empirical methods, starting with Bacon, Galileo, Newton, then the English physiologists, up to Lyell and Darwin, and then Einstein (who insisted on physical confirmation of special relativity via the eclipse) and the quantum theory. And verification in various forms is an essential aspect of those empirical methods; mathematical modeling techniques are of course essential as well, and probability viewed broadly. If this is how demonstrable knowledge advances, then it seems reasonable to affirm both deductive (more or less analytical) and inductive (synthetic) methods, and to question non-rational types of “knowledge”--art and mysticism, theology (perhaps the brain scientists will succeed in proving that math knowledge is a brain function as well, a type of synthesis). That may result in large sections of the campus library being gutted (i.e. literature, aesthetics and theology), but that may be ultimately necessary.

By x on 02/13/06 at 03:11 PM | Permanent link to this comment

Adam K, the last time I checked I found “bed” in some translation of some dialog. Now maybe it’s “chair” in another translation of the same dialog, or maybe it’s “chair” in some other dialog. Or maybe not. But he definitely used a piece of furniture as an example. I don’t think much of anything hangs on which kind of furniture or, for that matter, on his using furniture at all.

By Bill Benzon on 02/13/06 at 03:54 PM | Permanent link to this comment

I would personally find it more amusing if Plato had never used a furniture example and the ubiquitous example of the “chair” were just a long-standing oral tradition among lecturers on Plato, but oh well.

By Adam Kotsko on 02/13/06 at 04:00 PM | Permanent link to this comment

I don’t think much of anything hangs on which kind of furniture or, for that matter, on his using furniture at all.

You got that right, Doc. If some of the blog goombas around here had ever bothered to read the Meno, or even Chomsky’s CliffsNotes synopsis of it, they might realize what is at stake here--and it ain’t about chairs.

By jake on 02/13/06 at 04:19 PM | Permanent link to this comment

Adam K: ‘We all talk about time that way.  It’s not a piece of jargon.’

Agreed. But we all talk about ‘sunrise’. The sun doesn’t actually rise. The fact that ‘sunrise’ connects with some intuitive sense of our place in the universe (perhaps that we are too weighty, important and central to move; the sun must move around us instead) doesn’t stop it being wrong. And, you know, being, time, mutatis mutandis …

If you’re right about Being and Time (and I’ve no reason to doubt your account of it, which makes a great deal of sense) then it’s not what I took it to be, which is to say an intervention into debates about, you know, time.  And being.  In a larger sense.  It becomes instead ‘some stuff that a really clever German guy extrapolated from his own experience of time after reading some other philosophy’.  Which isn’t to dismiss it.  The clever German was certainly very clever.  But Einstein was another clever German, and his thinking was about the universe as a whole, not about the bit that poked into the back of his brain.

Or, you know what? Not. So far on this thread, amongst the many things I’ve learned (and I do think I’ve learned a great deal: thanks to all who helped rub a little more of the crud of ignorance from my brain), one main thing I’ve learned is that philosophically-minded people can get really cross if you call their famous names ‘wrong’ in any particular—which is to say, in the terms of the post I originally posted, that most people think that, hey, wrong is the (uh) wrong term to apply to philosophy. I can clearly be wrong, as many contributors to the thread have asserted many times. But not Plato. I withdraw the term.

Nearly time for my bed now anyway.

By Adam Roberts on 02/13/06 at 04:34 PM | Permanent link to this comment

What Adam Roberts means by “wrong” is the wrong category for philosophy, and that’s because philosophy (as most people understand the term) is dealing with something like the “existential” issues we face (e.g., what it means that I must die), rather than with questions of fact that can be settled once and for all—that is, people understand philosophy not to be an attempt at scientific knowledge.

The very fact that people (such as myself) thought that you were “wrong” means that the category of “wrong” must apply—not that there’s some kind of eternal dignitas to the person of Plato (or Heidegger) that must not be violated. 

I personally think that Heidegger is wrong to take one’s own death as the most important; I agree with Levinas that the death of the other is more constitutive.  I also agree with Jean-Luc Nancy that Heidegger was wrong to place so little emphasis on “being-with” and that it should be more central to philosophical analysis.  For instance.  But saying that there is some empirical fact that someone (even someone very brilliant) came across that proves that Heidegger is wrong—no, that’s wrong.

But your dogged insistence that a scientific concept of time is the only possible concept of time just baffles me.  In the context of scientific experimentation and explanation, that concept of time is great.  Day-to-day, it doesn’t have much effect on my existence.  What does Einstein tell me about what it means that I’m going to die, that my time is limited?  Jack shit.  That’s not a problem with Einstein, just not the question he’s asking. 

(Perhaps I’m biased, though, in that my primary experience with hard-core empiricism has involved someone who is obviously commitably insane.)

By Adam Kotsko on 02/13/06 at 04:50 PM | Permanent link to this comment

The couch or bed of Republic Book X is more like a chaise longue than anything else.  You can find some vases with them about halfway down this link:

http://www.designboom.com/history/3.html

It is, I think, a little ridiculous to imagine that there are culture-independent forms for furniture, outside of time and space.  I bet that Plato didn’t believe in them, either.  In Aristotle’s Met. A.9 and in On Ideas, he implies that Platonists don’t believe in forms for artifacts.

By on 02/13/06 at 05:21 PM | Permanent link to this comment

commitably?  not a word in Webster’s unabridged

You get both eloquence and fear and trembling, man, from Billy Bob’s Drive-Through Bible College out in Peoria, Ill.

By z on 02/13/06 at 05:27 PM | Permanent link to this comment

Just to note: I think Heidegger did have some awareness of what the sciency types were saying about time. I remember running across a footnote for some book of Heisenberg’s in “Question Concerning Technology”. As I recall, Heidegger was a bit curt, saying something about “clock time” and how he wasn’t dealing with that. But then, it’s been a while since I read the essay, and I don’t have a copy ready-at-hand.

Might be worth looking at; I thought Question was an easier read than some of his other works.

By Daniel on 02/13/06 at 05:42 PM | Permanent link to this comment

What Einstein discovered is that light has a speed, whereas light for Newton or Kant was instantaneous. Newton and Einstein have nothing to say about, as you call it, what time is. Heidegger on the other hand argued that there is no time that is not the finite time of one human being. Clock time, or Einstein time, is a derivation of human time, not the other way around. How would we know of any time that wasn’t the time of one finite human being? Through clocks? But like Heidegger says, time cannot be understood in terms of motion or space, because time cannot be understood using the imagination, in the same way that death cannot be understood through the imagination. If you use clock time to bash Heidegger for his lack of rigor, Heidegger will calmly point out that time has nothing to do with clocks, and you are still trapped in picture thinking.

By on 02/13/06 at 05:49 PM | Permanent link to this comment

Technically that’s known as a crock of shit, like most of Heidegger. Time defined subjectively might be good for a few poets in the Alps: physics depends on time objectively defined, independent of human individuals, as Heisenberg noted in his dismissals of German Idealism apres-Copenhagen.

By y on 02/13/06 at 05:55 PM | Permanent link to this comment

Physics never defines time. That’s not what they do. Go ask a physicist what the definition of time is and he will groan.

By on 02/13/06 at 06:04 PM | Permanent link to this comment

Time is not a good example. In physics there is no time; time is a dimension of space and is theoretically reversible. Heideggerian time is more like our common-sense experience of time than Einstein’s is—Einstein is the counterintuitive one. I’ve written extensively about this: http://idiocentrism.com/prigogine2.htm

You can’t write history in physicist’s time. It’s not just poetry.

By John Emerson on 02/13/06 at 06:17 PM | Permanent link to this comment

But there is biology, too. And I’m not at all sure that biology can make do with physicist’s time.

By Bill Benzon on 02/13/06 at 06:35 PM | Permanent link to this comment

Bill and John, then we’ll have to stop doing history and biology until we find a way to do them using the only possible definition of time.  Sorry.

By Adam Kotsko on 02/13/06 at 06:36 PM | Permanent link to this comment

Classical Newtonian mechanics, still applicable to the vast number of physical events (like stomachs and automobiles), depends on absolute time. Einstein, following Minkowski’s complex arguments for a non-Euclidean geometry, introduces time as a 4th coordinate (perhaps that is what Emerson means by saying t is a dimension of space) and states that this overturns the classical model; perspective may of course alter the perception of events (the simultaneity example), but that does not suggest some Kantian type of subjectivity; it means that different events must be timed separately, does it not (Einstein discusses this in the lightning example). I am not sufficiently knowledgeable about physics apres-Einstein, Heisenberg, and Bohr, but even allowing an anthropic principle would not somehow magically reintroduce the Kantian ghost or violate “locality.” There may be a somewhat metaphysical issue remaining in regard to the perception of time in relation to physical events, but that’s hardly a reason to reanimate idealism....

By y on 02/13/06 at 06:39 PM | Permanent link to this comment
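To put the simultaneity point above in symbols: a minimal sketch using the standard textbook Lorentz transformation (generic special-relativity notation, not anything quoted from the thread):

```latex
% Standard Lorentz transformation, frames S and S', relative velocity v along x:
\[
t' = \gamma\left(t - \frac{vx}{c^{2}}\right), \qquad
x' = \gamma\,(x - vt), \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
\]
% Two events simultaneous in S (t_1 = t_2) but spatially separated (x_1 \neq x_2)
% are assigned different times in S':
\[
t'_{2} - t'_{1} = -\,\gamma\,\frac{v\,(x_{2} - x_{1})}{c^{2}} \;\neq\; 0
\]
% Simultaneity is frame-relative (Einstein's lightning-on-the-embankment
% example), yet each frame's timing is fully objective: relativity relativizes
% simultaneity without making time subjective.
```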

All living creatures live within irreversible time, which is an aspect of entropy and only appears at the thermodynamic level and not at the subatomic or cosmological level.

By John Emerson on 02/13/06 at 07:42 PM | Permanent link to this comment
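Emerson’s claim has a compact textbook form; a minimal sketch in standard notation (nothing here is specific to his own writing on the subject):

```latex
% Boltzmann entropy of a macrostate realized by W microstates:
\[
S = k_{B} \ln W
\]
% Second law, for an isolated system: entropy is non-decreasing,
\[
\frac{dS}{dt} \ge 0
\]
% The underlying microscopic laws are time-reversal invariant; the one-way
% "arrow of time" is statistical, emerging only for macrostates -- hence
% irreversibility appears at the thermodynamic level and not below it.
```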

Relativity, spec. and gen., did as much as anything in terms of overthrowing the objectivity of time and space; tho’ you seem at rest, in reality you are moving at 66,600 mph; were you renting a pad on the sun and watching the earth spin round by telescope you would note that--but moving in a spaceship the speeds of earth and sun would be different--so what is the absolute frame of inertia? There really isn’t one; moreover, due to time dilation, your pals back on earth are aging a bit more slowly than you are on El Sol.  So much for Plato: tho’ Spock probably keeps a copy of the Republic around his room on the Enterprise for kix.

By x on 02/13/06 at 09:33 PM | Permanent link to this comment
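As a back-of-envelope check on the 66,600 mph figure (roughly the Earth’s orbital speed, about 29.8 km/s): the dilation is real but minute. A rough sketch with the standard formula:

```latex
% Time-dilation factor, expanded for v << c:
\[
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} \approx 1 + \frac{v^{2}}{2c^{2}}
\]
% With v \approx 29.8 km/s, v/c \approx 9.9 \times 10^{-5}, so
\[
\gamma - 1 \approx \tfrac{1}{2}\left(9.9 \times 10^{-5}\right)^{2}
          \approx 5 \times 10^{-9}
\]
% i.e. about 0.15 seconds of clock disagreement per year -- enough to refute
% absolute time in principle, far too small for anyone's pals to notice.
```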

Bro. John,
Perhaps you have the answer to this; from your website I can see you are a fellow of many and varied interests. I recently came across a water spider, I think it is Argyroneta aquatica (Argyronetidae). It builds a ‘diving bell’ out of an air bubble, uses a strand of silk as a sort of anchor line, and then lives underwater, catching critters and pulling them into the air bubble, where they are feasted upon. Using both a strand of silk and the air bubble, the little guy looks like some sort of Jules Verne bathyscaphe. Now this isn’t some sort of happenstance event that the spider stumbled upon, or a learned behavior, but innate. How does evolution explain this sort of very clever innate behavior? How does learning something beneficial make a change in the brain (and I don’t even know the size of a spider brain), and then how does that brain change alter the DNA, so that the next generation has the same DNA with the ‘brain change’ in it? Or is that even possible? Is the DNA unaffected by changes caused by experience and memory in the brain? In the past I’ve read some papers on feral children, and in most, if not all, cases, when the children had no contact with other humans, they seemed to progress mentally and physically only to the state of the animals that they came into contact with. In other words, lack of language and lack of learning from others kept them in a very savage state. It is as though none of the learned and civilized state that we find ourselves in is innate. That little underwater spider seems more sophisticated than the feral human. Or perhaps it is that spiders have been evolving longer than humans? Well, I know this all seems off topic, yet when reading the many posts, it is as though we are all in our own little air bubbles, floating in the water of life, awaiting our next meal.

By Bro. Bartleby on 02/14/06 at 02:05 AM | Permanent link to this comment

Robert Frost explains.

By John Emerson on 02/14/06 at 09:03 AM | Permanent link to this comment

I did enjoy that, but I do think that the cast of characters and circumstances in Frost’s poem are far easier to explain than spiders living in air bubbles. It is not their living in air bubbles that puzzles me, but how that clever talent became innate in one spider brain and then embedded in the DNA so that it could be passed on to the next generation.

By Bro. Bartleby on 02/14/06 at 11:20 AM | Permanent link to this comment

Tho we live now in an age of DNA and scientific materialism, a type of platonism, or at least essentialism, shows itself in many activities, literature included. When someone discusses, say, the meaning of Ozymandias, isn’t he sort of saying something exists, a “meaning,” which the poem refers to or represents?  Apart from statements that directly refer to events in the real world--say a biology textbook--much writing does seem to rely on a type of meaning as essence--which again relates to the problem of universals. If one does assume objective meanings somehow exist apart from sentences and words, then a type of essentialism is required; one task of brain scientists then would seemingly be to determine the location or specific neurology of meaning--semantic or literary. Perhaps Derrida’s deconstruction of Plato had something to do with this (however obscurely organized Of Gramm. was), but language does seem bound up with various metaphysical commitments. Quine insisted on reference (as did Witt. of the Tractatus), instead of meaning, as a way of avoiding any discussion of platonic essences, but it’s not clear that that really eliminates the need for some semantic--and mathematical--abstract objects....

By x on 02/14/06 at 11:41 AM | Permanent link to this comment

Bartleby, ask a biologist. It doesn’t seem especially difficult to me.

By John Emerson on 02/14/06 at 01:32 PM | Permanent link to this comment

Bro. John, I took your advice and asked a biologist, and his reply seems to stir up the matter instead of clarifying it. Now I must puzzle over how ‘clones’ pass on their learning to the next generation, because DNA can’t do it.

“I have visited the jungles in Belize and Galapagos Islands. In the jungle I spent some time studying ants, specifically leafcutters and army ants. A soldier leafcutter with huge mandibles guards the entrance to the nest, keeping out interlopers. If a worker comes along without a piece of leaf, he bites off its head. Why is he doing this? Is it part of the struggle for available energy? No leaf means less leaf to grow fungus upon, and that means less food for the colony. I spent a lot of time on the ground watching the procession, and most ants do have a leaf, but if they drop one they are doomed. Is the inability to hang onto a leaf in the DNA? If so, what difference does it make, because worker ants are cloned and their DNA is not passed on? Ants have not changed in millions of years. How can natural selection explain such behavior? Here is something about leafcutters that is even stranger. If you look closely you will see a very tiny ant called a minim riding at the top of some of the leaf pieces being carried. Why are they there? There is a fly that glues an egg to the leaf so that entrance to the nest can be gained and the maggot can develop there. What does the minim do? It throws off the fly egg.

Natural selection depends upon passing on DNA that is desirable and rejecting that which is not, but social insects are cloned. I could go on and on, but it seems to me that there are other factors involved in evolution besides a struggle for energy and competition.”

By Bro. Bartleby on 02/14/06 at 02:22 PM | Permanent link to this comment

Bro, I suggest you take this to P.Z. Myers at Pharyngula—a biologist. He may well have heard the question before.

By John Emerson on 02/14/06 at 07:35 PM | Permanent link to this comment

Adam K, one point:  ‘What Adam Roberts means by “wrong” is the wrong category for philosophy, and that’s because philosophy (as most people understand the term) is dealing with something like the “existential” issues we face (e.g., what it means that I must die), rather than with questions of fact that can be settled once and for all—that is, people understand philosophy not to be an attempt at scientific knowledge.’

I’m sorry but I don’t recognise your version of ‘scientific knowledge’.  It’s certainly not what I understand by science, or scientific knowledge (“facts that are settled once and for all”?  Puh-lease.  To reiterate a phrase I’ve used several times: in science paradigms shift, thank heavens.  It’s philosophy that seems to have the problem letting go its exploded paradigms.) I guess if that’s what you think I’ve been saying I can understand why it would make you cross; and all that I can say is that that’s not what I’ve been saying.  If you heard ‘Heidegger is wrong in every particular because he doesn’t measure up against the Eternal Truth handed down by Science’, then please believe me it wasn’t from my mouth.  Or keypad.

I don’t say Heidegger is utterly wrong.  (‘saying that there is some empirical fact that someone came across that proves that Heidegger is wrong—no, that’s wrong’).  There’s lots about Heidegger that’s interesting.  I’m not saying he’s wrong about the subjective experience of time; I’m not saying he’s wrong about Being-towards-Death; I’m not saying he’s a Nazi and we should disregard him.  (I’m very specifically not saying any of those things).  I’m making this one point: Heidegger makes an assertion about time being prior to space.  It’s not.  There’s no Kantian ‘space’ on the one hand and ‘time’ on the other; there’s this thing called spacetime, which is where we all live.

OK: I’m ripe to be persuaded that Heidegger being wrong about time being prior to space makes no difference to his larger philosophical project.  Perhaps it’s perfectly irrelevant to H.’s insights.  Perhaps it’s so trivial a point that none of the critics and philosophers working on Heid. (and I’ve read a fair few of those) so much as mention it.  But that’s not what this thread has been telling me. This thread has been saying ‘Heidegger wrong?  You’re wrong!’

My original post takes the form of a question because I genuinely don’t know how important it is that H. is wrong on this matter: is it significant? Is it trivial?  Why do philosophers never talk in terms of their boys and girls being right and/or wrong?

But I’d say this: if you’re arguing that Heidegger’s theory that time is prior to space and Einstein’s theory that there is only spacetime are both equally valid (equally good as describing and predicting the data) then you’re wrong.

If you’re saying ‘data is irrelevant to what Heidegger says’ (data like what it feels like to get out of bed in the morning, or experience art, or breathe the air) then I’d say you’re both wrong and odd.

Of course, if you’re saying: ‘maybe Heidegger is wrong about spacetime, but that doesn’t matter for reasons x y z’ then I’ll shut up and listen; although I might insert a quiet question as to why students of H. never talk about this (perhaps quite trivial) issue.

On the other hand, you’re perfectly at liberty to say: ‘No.  No no no’ again.  One ‘no’ is unlikely to persuade me; but the repetition ... that’s a whole different rhetorical ballgame.  I feel my inner doubts and uncertainties just dissolving away.

By Adam Roberts on 02/15/06 at 04:52 AM | Permanent link to this comment

It’s philosophy that seems to have the problem letting go its exploded paradigms.

If I may add a pedantic note to this conversation. The notion of paradigm was introduced into contemporary discourse through Kuhn’s work on scientific revolution. But it quickly became generalized far beyond Kuhn’s usage. Kuhn distinguished between the pre-paradigm phase of a line of inquiry and its paradigmatic, or scientific, phase. In that usage, philosophy simply isn’t organized by (proper) paradigms at all. It is preparadigmatic.

I believe that this is a matter of mere semantics. But it does rather point up the fact that science is doing something that philosophy, for example, is not. And it also makes me wonder why philosophy ever proposes and accepts substantially new ideas at all. We know, more or less, why science does so—it’s constantly being tested against evidence, and sooner or later the tests fail or become incomprehensibly recondite in their explanation. So new ideas are considered and, eventually, a new paradigm emerges. Philosophy works differently.

It’s clear that Plato functions in current thought mostly as a historic figure. To the extent that Platonism is alive, it’s being explicated, advanced, and argued in terms many of which would be meaningless to Plato. The chatter we’ve had, for example, over cognitive science and AI would be unintelligible to him. So would Heidegger. It would seem that Heidegger “knows” something that Plato doesn’t, but we need not count Plato wrong on that account, or on some other account.

And so we’re back where Adam Roberts began.

By Bill Benzon on 02/15/06 at 08:47 AM | Permanent link to this comment

One of the paradoxes, or whatever, of philosophy is that the ambitious philosophers all aspire to present a comprehensive finished work, but even the disciples of these philosophers immediately start tweaking their master’s system. They might describe their tweaks as mere adjustments of detail, or they might proclaim the perfection of the master’s thought and claim merely to be explaining “what he really meant”, but they still change things. (Alternatively, they might change almost everything but make formal obeisances to the master). Marxists and Freudians are especially bad in the worshipful aspect. But criticism starts immediately and responses must be made.

When someone calls themself a Platonist today, it’s a way of stancing oneself against some tendency which is clearly anti-Platonist, probably some sort of nominalism or pragmatism. As far as I know, there are no Platonists who say anything like “I’ll have to reread the Theaetetus before I can tell you what I think about that question.” I do think that there remain Marxists and Freudians who think that way.

The original post was pretty complicated and responses were all over the place. The specific question of temporality is a hard one to discuss quickly. Roughly, mathematicians and physicists still believe in a sort of Platonic timelessness, whereas for anyone dealing with living or conscious beings temporality (irreversibility or entropy) is necessary. But Heidegger’s stress on the experience of time is still a different question; I’m sure that there’s a structuralist / brain science deconstruction of subjective time experience which counters Heidegger.

As I said, philosophers remain of interest if they’re discussing a question which remains open. Scientists avoid these open questions ("mysteries") and gravitate to questions which are on the point of being solved ("problems"). Positivism basically says that the presently-undecidable “mysteries” are not even worth talking about at all, and that’s where a lot of the disagreement about philosophy lies.

By John Emerson on 02/15/06 at 09:43 AM | Permanent link to this comment

Talk about slippery slopes. We start off asking why talk about Zizek in English departments and end up asking why take Plato seriously.

Given the practical terms of human existence, the only sensible way to pursue “undeniable truth” would be to study rhetoric. “What are man’s truths ultimately? Merely his irrefutable errors”—this seems a sensible recognition of the facts of the case. (And I happen to know that if Einstein were here, he would agree with me.)

However, “love of wisdom” doesn’t mean just a neurotic scraping away at fallible flesh to reach absolute truth. It includes love of questions about faith, justice, ethics, aesthetics, even the sheer weave of agreement and disagreement—it’s a communal impulse and practice. Philosophers do the polis in different voices, but the polis is where they work.

As members of the polis, here are some things to be gained by studying philosophy:

1) Intellectual empathy. Other people exist and those other people have minds. It’s a social good to understand the sincere careful expressions of those minds as best we can. For some of us, it’s an actual source of pleasure. At any rate, it’s good exercise.

2) Intellectual humility. Having imaginatively committed oneself to a brilliant abstract thinker, and then to the brilliant attacks, and then to the brilliant defenses of the neo-school, one might (and this is a long shot, I admit) eventually come to terms with the fact that one’s own certainties will seem transparently foolish someday. Maybe in one’s own lifetime. Maybe even by one’s self. (Of course, for most people, the history of science—particularly medical and psychiatric science—is an even more essential study. If that’s your point, I don’t dispute it.)

3) Actual insights. I don’t have to agree with everything someone says to learn something from them, even if they believed it was all one inseparable system.

Yes, being wrong comes with the territory. No, just being wronger isn’t a sign of superiority.

(See also Philo-fighter’s recent comment on another thread.)

By Ray Davis on 02/15/06 at 09:58 AM | Permanent link to this comment

But that’s not what this thread has been telling me. This thread has been saying ‘Heidegger wrong?  You’re wrong!’

That last bit seems to me an extremely ungenerous response to Adam K, Adam R (just to clarify, my original comment was somewhat ponderously ironic, and directed to Adam K).  But why should the burden of proof be entirely on the commenters or the potential experts-in-hiding, once again?  If you really wish to understand Einstein’s potential irrelevance to Heidegger, my guess is--you know--that you will have to read and work through Heidegger, and in the process perhaps acquire something like Adam K’s humility, you might say.  Now, if you’ve indeed done this, then it isn’t evident here, though there are probably a number of specific textual places where one could start.  That might be a very interesting post, and for whatever it’s worth I’d like to read it.  That all said, I apologize if the tone of this is in any way off, having only skimmed the thread.  I’m not sure it would be useful to discuss Plato (or “Plato’s wrongness”) in the same close-reading post, because that seems to change the nature of the context and the claims rather significantly.  (But then again, admittedly, there are limits to blogability, not least of all regarding economy of interest, working against what might be the most responsible approach.  Still, I think one owes any potential commenters an a blogiori admission of these limits.)

In re-reading this, I think Ray Davis makes a similarly-oriented point very well, if not better.

By Matt on 02/15/06 at 11:53 AM | Permanent link to this comment

"But I’d say this: if you’re arguing that Heidegger’s theory that time is prior to space and Einstein’s theory that there is only spacetime are both equally valid (equally good as describing and predicting the data) then you’re wrong.”

I don’t know what “time is prior to space” would mean. That would take a while to try to understand, and I think anyone who tries to think about either time or space quickly comes to the conclusion that time and space are very hard to understand. To take one example, Aristotle denied the existence of three dimensional space because the idea that both space and a thing are occupying the same place is hard to understand and paradoxical. What is this space that isn’t a thing and isn’t a void? It wasn’t until the Christian Neoplatonist Philoponus that the paradoxical nature of space was affirmed, and then it took another thousand years before Galileo started applying Euclidean geometry to (sublunary) nature. So the self-evidence of something as hard to understand as Einstein’s space-time escapes me.

Heidegger begins with the phenomenological method, with our own experience of time. That seems as good a place to start as any. And I certainly wouldn’t take the leap and say that Einstein’s space-time is real time and Heidegger’s account of time is therefore wrong. Even if you were to say that Heidegger’s time is to Einstein’s as the color blue is to a certain wavelength in the electromagnetic spectrum, that’s an interesting argument that you have not made. It is not enough to wave vaguely in the direction of “Einstein” and the physics department, and what I object to in your line of argument is an implicit kind of intellectual bullying. If you want to carefully lay out what “time is not prior to space” means, and also define time and space, I would be glad to listen and learn.

By on 02/15/06 at 11:55 AM | Permanent link to this comment

The physicists’ conclusion seems to be that time-space (spatialized time, not really time the way we think of it) is prior to and more fundamental than what we call time, which is local and dependent on entropy (which is not one of the fundamental laws of physics).

What “prior to time” would mean, however, is not clear to me.

By John Emerson on 02/15/06 at 12:10 PM | Permanent link to this comment
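The “spatialized time” Emerson describes has a standard expression, the Minkowski line element; a minimal sketch in textbook notation (not drawn from his linked essay):

```latex
% Minkowski line element of flat spacetime (signature -+++):
\[
ds^{2} = -c^{2}\,dt^{2} + dx^{2} + dy^{2} + dz^{2}
\]
% Proper time along a timelike worldline:
\[
c^{2}\,d\tau^{2} = -\,ds^{2}
\]
% Nothing in ds^2 distinguishes past from future: the geometry is
% time-symmetric. A one-way, lived time has to come from elsewhere
% (e.g., entropy), which is the distinction being drawn here.
```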

It’s just struck me that Adam Roberts’s feelings towards philosophers, like Plato’s feelings towards poets, might carry a bit of turf war.

I’ve heard it said that “science fiction” would be better described as “philosophical fiction,” and the more common “speculative fiction” makes a similar point.

It could be fairly maintained that sf has the advantage in any fantasized battle royale. Since we’re bound to be wrong anyway, why not openly acknowledge it in the word “fiction” rather than aspiring to “knowledge”? That shows good common sense. (My own stance as “essayist” is similarly risk-averse.)

But lowering the stakes and loosening the rules makes the game not just safer but different. Figure skating hasn’t replaced hockey even though no team is invincible, and a saber duel differs from a sword dance even if all men are mortal.

Adam R., wuddya think?

By Ray Davis on 02/15/06 at 12:55 PM | Permanent link to this comment

From Brian Leiter’s blog, a quote from just-deceased Peter Strawson that may shed some light on the question:

....there is a massive core of human thinking which has no history—or none recorded in histories of thought; there are categories and concepts which, in their most fundamental character, change not at all. Obviously these are not specialities of the most refined thinking. They are the commonplaces of the least refined thinking; and are yet the indispensable core of the conceptual equipment of the most sophisticated human beings. It is with these, their interconnexions, and the structure that they form, that a descriptive metaphysics will be primarily concerned.

Metaphysics has a long and distinguished history, and it is consequently unlikely that there are any new truths to be discovered in descriptive metaphysics. But this does not mean that the task of descriptive metaphysics has been, or can be, done once for all. It has constantly to be done over again. If there are no new truths to be discovered, there are old truths to be rediscovered. For though the central subject-matter of descriptive metaphysics does not change, the critical and analytical idiom of philosophy changes constantly. Permanent relationships are described in an impermanent idiom, which reflects both the age’s climate of thought and the individual philosopher’s personal style of thinking. No philosopher understands his predecessors until he has re-thought their thought in his own contemporary terms; and it is characteristic of the very greatest philosophers, like Kant and Aristotle, that they, more than any others, repay this effort of re-thinking.

From Strawson’s Individuals, which I haven’t read. It seems to me that much philosophy operates on a “recovery” model of inquiry and learning, instead of a “progress” model of inquiry and learning. Not that there isn’t progress, but often what progress there is is simply a more sophisticated restatement of things that were said before. (Wittgenstein’s epigraph to the PI says something like “Generally speaking progress has this about it, that it always looks bigger than it actually is” - I think that applies to philosophical questions most of all)

By on 02/15/06 at 01:29 PM | Permanent link to this comment

Mr. Roberts’ use of Occam’s razor on the philosophical tradition is to be applauded, as is his insistence on modern science being free of most philosophical concepts. Einstein proved something regarding time (and time dilation), whereas Heidegger, another son of Kant, merely speculates.

Yet AR is skirting one important philosophical issue, that of abstract objects, which has at least a sort of remnant of platonism to it (see the Stanford Encyc. of Phil.).  Frege for one held that entities such as numbers, equations, logical connectives, and relations must exist somewhere, yet they are neither a part of observable nature nor a property of individual minds (both of which would entail that they are empirical inferences, and thus the objectivity of math. seems greatly diminished). (Chomsky seems to hold to a different flavor of platonism as well--his Language and Problems of Knowledge takes much from the Meno.)  OK, this is all fine for the college bs session, but obviously that is a fairly large claim against a pure scientific materialism, or at least it demands some sort of modification thereof. As the witty and talented Roger of Limited Inc pointed out, brain scientists are attempting to provide neurological accounts for mathematical knowledge and “abstract entities”, but until some specific neural module is found and the biochemistry charted, one cannot really conclusively state that platonism, whether trad. or more Fregean varieties, has been overturned.

By x on 02/15/06 at 01:40 PM | Permanent link to this comment

Late to the party, I know, but I just came across this, which seemed rather apropos:

“Error acquires a sense only once the play of thought ceases to be speculative and becomes a kind of radio quiz.” (Deleuze, Difference and Repetition 150)

This is part of a long critique of the notion, or at least the utility, of “error.”

By Jon on 02/15/06 at 10:37 PM | Permanent link to this comment

We might know all there is to know about brains, yet Fergean abstract objects would be as unharmed as ever. They are causally inert, you see, so no empirical discovery can harm or help them, only conceptual discoveries.

I never would have expected to find an analytic philosophy discussion on a site about literature, though; this is an interesting development.

By on 02/16/06 at 12:13 AM | Permanent link to this comment

Adam K is referring to an overly popular view about philosophy. It’s a little-known fact, but most Anglo-American philosophers simply couldn’t give a **** about existential issues. Virtually all analytic philosophers talk about is factual issues: do theoretical entities exist? What is a mind? Do universals (like blue, or eleven) exist?

A lot of philosophers feel a bit guilty about this state of affairs, but what are you going to do?

By on 02/16/06 at 12:29 AM | Permanent link to this comment

The tradition of nominalism already hints at how non-platonic views of mathematical entities might be constructed; and so here is offered a low-budget, beer-fueled sketch, a sort of plebeian shout to accompany William of Occam’s haircut, if not decapitation, of King Platon.

There is, for instance, throughout Quine a tendency to nominalism: a semantic view of numbers and variables, a view of formal language as reference, the denial of analyticity, and a verificationist criterion of truth. And epistemologically speaking, math/language learning depends on observation and inference from perceptions; a view of geometry as evolving from agriculture and construction is not untraditional; an abacus or ancient clay tablets show that math/counting had early connections to economics and trade. (Other types of supposed universals--say colors and adjectivals--seem much more amenable to empirical explanations; blind people don’t know what blue is.)

Nominalism based on a sort of basic empiricism may not dazzle like Platonic realism does, but in some sense it often appears that the phil. people excited over “Math as Being” routinely overlook how efficacious that math knowledge is in terms of practical engineering--bridges, construction--or business. Integrals may or may not be causally forceful, but they do have some relation to nature, and indeed to space and our perception of space. Were they invented or discovered? I am not qualified to say, but the relations of derivatives and integrals exist, or at least are represented, in a man-made space--Cartesian coordinates; tho’ that’s not to deny a type of conceptualizing power of mind that is unique to some human primates.

By x on 02/16/06 at 01:27 AM | Permanent link to this comment
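For concreteness, the derivative/integral relation gestured at above is the fundamental theorem of calculus; however one reads its metaphysics, the statement itself is short:

```latex
% Fundamental theorem of calculus: if f is continuous on [a, b] and F is an
% antiderivative of f (i.e., F' = f), then
\[
\int_{a}^{b} f(x)\,dx = F(b) - F(a)
\]
% The relation holds however it is notated, but its familiar statement -- and
% its use in bridges, construction, and trade -- lives in the man-made space
% of Cartesian coordinates, which is the nominalist's opening.
```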

I generally don’t bother to point out typos—they pretty much come with the territory in blogsylvannia—but I’m pleasantly amused contemplating the possibility of “Fergean abstract objects.”

By Bill Benzon on 02/16/06 at 07:58 AM | Permanent link to this comment

Ray D:  “It’s just struck me that Adam Roberts’s feelings towards philosophers, like Plato’s feelings towards poets, might carry a bit of turf war. I’ve heard it said that “science fiction” would be better described as “philosophical fiction,” and the more common “speculative fiction” makes a similar point ... Adam R., wuddya think?”

Very interesting.  I certainly can’t pretend that I inhabit some pure disinterested realm from which I critique the world around me with impartial eye ... no such place.

On the other hand, and attracted though I am to the notion of SF as ‘literature of ideas’, a quasi-philosophy, I’m not wholly convinced.  My take on ‘philosophy’ is basically Deleuzian, in the sense that I take philosophy to mean ‘creating cool ideas and notions’ (more or less): and that ought to be how SF works.  But I think the truth of it is that what are sometimes taken for ideas in SF are more usually conceits: e.g. ‘the world is a virtual reality prison created and controlled by malign machine intelligences to subjugate humanity’ ... that’s a conceit, not an idea.

By Adam Roberts on 02/16/06 at 12:33 PM | Permanent link to this comment

The Norton Anthology of American Literature classifies Frost’s poetry into four categories. What are they?

By on 02/15/07 at 03:35 PM | Permanent link to this comment
