Tuesday, November 21, 2006
Dave Maier on Michael Bérubé - A Bunch of Picky Philosophy Points
Dave Maier has a long post about Bérubé’s book, focusing on the problems you can get into, being a Rortyan.
My first gripe concerns the cutting of philosophical corners. It’s understandable that people, especially non-philosophers, try to deal with the issue of realism and relativism in the moral/political context without first deciding what to say about scientific or commonsense facts.
I agree with that. I think those accused would mostly deny making this mistake - but I think they mostly are making it. But actually I don’t think Bérubé is a model offender. Cutting corners in the classroom, i.e. simplifying for the kids, is more defensible than cutting them in certain sorts of straight scholarly argument - that is, by treating certain issues as springboards into discussion of something else, when properly they are suited only to being bogs, to get bogged down in.
MB applauds Rorty for not claiming that what he says (about truth and knowledge) is true, but saying instead merely that it’s useful to act as if it were true. MB says this on his own behalf in other places, seeing this as a virtuous consistency, necessary to foil the traditional realist accusation of self-refutation. (Is Rorty’s pragmatism “really true”? If we answer “yes,” the familiar thought goes, then we affirm and renounce its truth in the same breath, a contradiction; but if we answer “no,” then, Rorty feels, there’s no problem.) Again, this is skepticism; and the problem with skepticism is that it makes hash of the notion of belief (and with it of meaning; of this more below). It is true that in particular cases we may intelligibly advocate acting, for instrumental reasons, as if something were true that we do not in fact believe to be the case. But this cannot be our general attitude. It makes no sense to argue passionately for a particular view, and then, when familiar muddles cause the conversation to grind to a halt, or spin its wheels uselessly, to cut the Gordian knot by saying, “oh well, I wasn’t saying my view is true.” Of course you were. If you weren’t, then I was wrong to take you as believing it, and now I am more confused than ever.
I agree. I just gave a conference paper on Rorty and Dewey and pragmatism where I talked about a related Rortyan tic. He does this weird thing I call ‘imminent critique’ - or ‘soon you will have been critiqued’. (More colloquially, this anti-foundationalist anticipatory retrospective mood might be glossed: ‘all your bases will have been belong to us.’) He doesn’t give you reasons to believe him. He invites you to consider the possibility of a future when ‘we’ will no longer think this way, i.e. by which time the person he is disagreeing with will, ex hypothesi, have had their paradigm shifted. But the fact that my paradigm might shift - true enough - doesn’t give me a reason to shift my paradigm. Life must be lived in a forward direction. So preaching paradigm shift in this meta-sort of way is not just intellectually weird but rhetorically unmoving. (I mean: Galileo didn’t just shout at the Pope ‘did you ever consider that someday you will have had your mind changed about astronomy?’ Lenin didn’t write an essay: ‘What is to have been done?’ [have had been done?]) The fact that he finds himself obliged to address us in this mode very much has to do with a tension between what Dave recognizes as his Pyrrhonism, and his progressivism. It’s a hard hortatory row to hoe. But Rorty is a pretty good philosopher all the same, it’s true. Also, Bérubé managed not to pick up this anticipatory retrospective habit during his time with the master, although he did maybe catch a case of thinking there’s virtuous consistency in what seems to be a species of irrationalism, as Dave says. (Not that irrationalism is always bad, but it never makes sense.)
I think Dave’s post will be very interesting to folks who are a little interested in all this stuff. Roughly: how analytically-trained philosophers tend to scratch their heads a bit at this stuff.
Good title. I am indeed analytically trained, but I must insist that I do not speak for the guild here (if I ever do). I’d like to point out that I take pains to exempt MB’s classroom strategy, and (possibly) even his political practice (but see R.A. Fox’s comment on the post) from my worry about corner-cutting.
I actually don’t mind “imminent critique” (heh), but sometimes it just fails. Give me a *reason* to talk that way, ding bing it! So, I agree that (gulp) Rorty’s row is a hard hortatory row to hoe. (Doesn’t Dennett have a joke about arguing “a rortiori,” i.e. for even more obscurely Continental reasons?).
I think it’s ‘for even more fashionable continental reasons’.
I’m a literature person, so forgive my ignorance on these issues (seriously—I don’t mean that sarcastically). I suppose I’m confused about the realist position on what might be called “epistemology of the social.”
So, a realist thinks, to use Rorty’s metaphor, that his knowledge is a mirror of nature. What he knows is, if it’s true, what is real. (Or, as a Popperian type might say, what he knows approximates the real as best as is possible right now.)
But when it comes to knowing ethics or other social phenomena, where *is* the stuff to be known? A realist can know about an atom because an atom is out there. A realist can know about mental phenomena to some extent by observing brain-scans or working backward from observations of behavior (i.e., the behaviorist) or observing what happens when someone gets a pole through their skull and survives.
On the other hand, the Sam Harris realist seems to think that he can “discover” ethical principles “out there.” But where? Same would go for aesthetic phenomena. I mean, a painting is a real thing in the world, and I agree with the realist that we can know its colors, its forms, and so on. But aesthetics is about *valuing* certain forms over others, just as ethics is about *valuing* certain behavior over others. And values, while being real things, are largely social constructions (sure sure, I’m sure there are some hard-wired human attractions to certain formal patterns or even hard-wired human attractions to certain types of behavior, but in the end, society is so far beyond “the state of nature” that would have shaped such biological structures that even if we are programmed biologically to value certain things, we are taught how to value even those values).
So I can say: I value peace, love, and harmony. What ethical prescriptions will get us there? And I can test them like a good realist and see if they work, but I’m not “discovering” ethical principles out in the world. I am actively making them up. What I’m discovering is the effects of certain ethical principles and how they compare to my desired goals. But at no point can I compare my model of ethics to some “real” ethics out in the world, like I can compare my model of chemical composition to the actual chemical composition of something.
In the end, I’m interested in the effects of philosophical worldviews on education. If one is really a realist, then education should consist of the transmission of right knowledge. Why reinvent the wheel by having kids drop different weighted objects out of a window when the truth can simply be given to students? You might teach kids how to go about finding new truths, but most science lab work through the undergraduate level is probably a hundred years behind how scientists currently work in labs.
If one is some form of constructivist—so the educational thinking goes—and you believe the world is actively constructed through our models of it, then education should consist of giving students the opportunities to construct their own knowledge, whether as individuals (Piaget) or in social situations (Vygotsky).
I’d love to hear John’s and David’s thinking on my doubtlessly moronic set up of the issues here.
I think that the point of Rorty’s imagined “world in which we no longer think this way” is that it would be about the same as a foundationalist world in which we do think this way, except that people wouldn’t be arguing about foundations of ethics, or believing in ethical truths which exist for us to find. So it was like a comparative thought experiment.
The actual ethical problems of the non-foundational world would be the same as those of the foundational world, but just argued differently. In the foundational world there would still be about the same ethical disagreements, and in the non-foundational world there still would be ethical authority.
I certainly understand the point, John. It’s only the way of carrying it out that I question. The peculiarity of it is not just that he’s ‘telling stories’, as he himself often says (which may be enough to bother some philosophers), but that he’s telling stories about the possibility of him having told stories. Very meta.
Luther, your comment deserves a long reply, which I don’t really have time for this morning. But I will say something: “If one is really a realist, then education should consist of the transmission of right knowledge.” This does not follow. A realist will almost certainly be a fallibilist. And a fallibilist will know that the practical consequence of advocating the transmission of ‘right knowledge’ will be transmission of what he THINKS is right knowledge, which may not be knowledge at all. This makes advocacy of merely transmitting this stuff sound rather unpalatable.
There’s more to your comment, obviously. Later.
Luther: Sorry, I was over at the original post getting into it with Uncle Meat, but I think we’re finished now (if anyone else wants to have a crack at him, feel free to join in). Your thought about values not being “out there” is a natural one, and has often been used to motivate anti-realism in ethics (e.g. John Mackie). And indeed, they’re not “out there” in the way that atoms are. But what way is that? Whatever way that is, it doesn’t mean (as you yourself note) that values aren’t “real”—yet neither does it mean that the distinction is an ideally sharp one (i.e. that we should indulge a metaphysical fact/value dualism). That’s the point of noting the ineliminable moment of subjectivity in *factual* (scientific or commonsense) judgments (something Searle doesn’t do, except epistemologically; which is why I criticize Bérubé for caving on this point). If you’re committed to the dualism, as metaphysical realists are, then this makes you jump up and down with indignation, because it makes it look like science isn’t (in a distinct sense of the word) “objective,” and that its deliverances are thereby put into question. But they’re not, and (returning to the moral case) neither are those of moral deliberation when we acknowledge a factual dimension; which is to say that they cannot be *reduced* to consensus. In order to be seen as norms (or facts) at all, i.e. binding on us whether we like it or not, they must have some measure of, and here we have another contested term, “independence.”
We have to watch the terms here like a hawk. Harris is a realist because he thinks that ethical values are themselves facts; but his is a dualistic position, because his conception of “fact” requires a Cartesian notion of objectivity as dualistically opposed to subjectivity. Another way of letting the dualism pull you around by the nose would be to say that because values can’t be facts in that sense, they can’t be facts in any sense at all which transcends utility or consensus. Bérubé agrees, or would like to, but I argue that he no longer possesses the resources to do so once he concedes realism in the scientific case.
So when you say “I can test [ethical prescriptions] like a good realist,” I take this to mean that you can test them to see if they really, i.e., actually, work. No problem—that’s not metaphysical realism, just empiricism (the okay kind). But it’s also anti-realism, if that means that their reality as directives is constituted by their actual utility and not by how things are morally speaking—morality collapses into prudence. However, there’s another issue (fasten seatbelt).
Now I was just telling Uncle Meat that we must distinguish, in the standard jargon, between “normative-ethical” theory, which tries to explain in what it is that the morality of moral actions consists, and “applied ethics,” which tells us what to do (and how to figure out what to do) in particular cases (and of course they have *some* bearing on each other, or we wouldn’t be interested in the former as much as we are, if we are). But realism and anti-realism are “meta-ethical” positions, concerning the meaning of moral claims (i.e., whether they even attempt to state facts, or just express emotional reactions, or what). And it is the dualism between realism and anti-realism which has been causing problems, as I’ve been saying. But once we rescue the notion of fact, taking it to mean simply something which is irreducible to consensus or utility (and not something metaphysically mysterious, something which we would need to “place somewhere”), we can say perfectly well that a moral fact is a fact like any other (in the relevant sense; while being different in other ways). If you say that torturing the innocent is okay, you are mistaken, and would be even if everyone came to agree with you. But once the back of metaphysical realism is broken (oh Lord, may that day come soon), then that meta-ethical judgment can easily be detached from the fact that you may yet, in theory mind you, convince me that in some cases it might be the right thing to do. That’s because even when we construe moral commitment as *doxastic* commitment, which the rejection of anti-realism allows, and thus as possibly mistaken though everyone believes it, doxastic commitment is essentially revisable (or “corrigible”). Not like I think you can indeed convince me (i.e. of the potential permissibility of torture); but you’re welcome to go ahead and try if you like.
But back to the question of “utility.” I referred above to the practice of judging principles by their utility as a form of anti-realism, i.e. instrumentalism: we should believe X (not because it’s really true, but) because it serves our purposes to do so (thus, I argue, eviscerating the idea that you “believe” them in the first place). But it is also possible to be a realist utilitarian; that is, to believe that moral principles are factual, not instrumentally useful—but that what it is in which their morality (really) consists is their consequences (their “utility” in this sense). Or you can be an emotivist deontologist, who believes that while moral statements only express emotional reactions rather than facts, they necessarily take the form of statements of moral rules (this would be a weird position though). In practice, as you see, this level distinction can get blurred; but that’s a natural result, given the problem of distinguishing the value of good consequences from the moral value of acting in order to bring those good consequences about (and, not incidentally, morality from prudence or obedience or whatever other value you have floating around). Okay, I’ll stop now.
Another question from an uneducated rube. OK, a fact is whatever is irreducible to utility or consensus. What accounts, though, for the irreducibility, and how is it that, when it comes to norms, this can’t be viewed ultimately as consensus or utility?
I assume that “when it comes to norms” covers both questions, as it is obvious what makes “that’s gold” a fact irreducible to consensus: we can agree all day long that that’s just pyrite, but as the saying goes, “saying doesn’t make it so.” (This was the point of Sokal’s invitation to postmodernists to “transgress” the law of gravity from outside the window of his high-rise apartment.) But not so fast. Sokal missed the point, and so would we if we left it at that. For our use of “that’s gold” is itself subject to norms. First, that our beliefs express knowledge rather than illusion—that they be true—is a norm of inquiry. Related to that is a further norm: that when we say “that’s gold,” we use the term as others do—to refer to gold and not something else. This reflects our sense that when we speak falsely, we can do either of two things wrong: we can be wrong about how things are, or we can see the world rightly but be wrong about what certain words really mean. That these two norms are distinct and yet interconstitutive is the point of Davidson’s discussion of belief and meaning. If we try to separate them, the first becomes unintelligible, while the second reduces to a purely pragmatic and not semantic norm (“speak as others do if you wish to be understood”). Consider for example how you would evaluate the following claim: “Pluto is a planet.” Even in normal cases, what we say clearly depends in some measure on how we want to talk (something which is of course itself at least partly a matter of utility and consensus). So even here, if “viewing something ultimately as consensus or utility” means that those aspects are ineliminable—that there is no question of the realist idea of “brute fact”—then, well, you may indeed want to talk that way. However, I don’t recommend it, as it tends, as in Rorty, to slide into anti-realism—as if the world had nothing to do with it, except causally—that truth is reducible to consensus.
But at the end of the day we’re still trying to get things right, and this is because we are capable of getting things wrong.
With that in mind, let’s turn to moral claims. There are two main cases: intercultural (e.g., suttee, foot-binding, hijab), and intracultural (from the death penalty to some spat with your sister about who knows what), although some things, like abortion, start looking inter-subcultural or something after a while. The former example is easier to see (though not to solve). I say foot-binding is unethical. But they say a) they have achieved consensus on the matter, or b) this practice serves their ends. (Let’s say this is true, although ethical consensus is a rare thing even in homogeneous communities.) Surely this does not satisfy us. That they agree, or that it serves their ends, doesn’t make it right. This is the anti-relativist point. (On the other hand, not every divergent custom need be seen as a moral issue; and in practical terms this is one place in which the rubber hits the road: for example, albeit a simplified one—is homosexuality a moral issue? Not if you’re gay it isn’t; it’s just the way you are. For more about the practical aspects see Michael’s response to Ophelia Benson here (scroll down, past the Jodi Dean part).) This point about consensus is true even if it is our own community we are talking about. That we agree does not entail that we are right, as we must leave open the possibility of revision; but on the other hand, given our agreement we don’t regard that possibility as a live option at the moment. But contra realism, we don’t need the idea of “super-facts” to license the idea of transcending consensus or utility.
Okay, but what about cases in which we disagree among ourselves? In this case there’s no consensus; and once we achieve consensus, we are indeed satisfied. This can make it look as if consensus was what we were really interested in, and that we no longer need the notion of consensus-transcendent fact (and if we were stuck with the dualistic conception of fact, that would be reason for us to get rid of it entirely; but we’re not, so it isn’t). But the only reason we agree that P in the first place, thus achieving consensus, is that each of us has become convinced that P is true—and this is true even if we have tweaked the concept of P to pick out what we were interested in (i.e., for utilitarian purposes—and why not). After all, now that we have decided how to use the word, isn’t it true—and (here’s the kicker) won’t it remain so even if we once again change the definition, as we will then simply describe the same facts in other ways—that Pluto is not a planet but, well, whatever it is? (Compare: if “red” meant “green”, then grass wouldn’t be red! It’d still be green; we’d just express that fact by saying “grass is red,” which would, on that interpretation, be a true statement. Think about that one for a while.) I didn’t use moral examples here, but the same ideas apply. Once we agree among ourselves that foot-binding is wrong, if someone among us (not you or I) were to say “no, it’s not,” we wouldn’t just speak differently than he; we would disagree with him. After all, he wouldn’t just speak differently; he’d act differently—the explanation of which action requires attributing different beliefs, even if it also requires attributing value commitments irreducible to belief (again, just as in the non-moral case).
Okay, I’m going around in circles again. But I’m not trying to give a knockdown argument (as if I could decide for you exactly how to modify your views—that’s your job, and yours, and yours); like Rorty, I’m using my preferred way of talking even in arguing for it. But unlike him, I want us to think of what I say as true and not merely useful, so his toothless “imminent critique,” as John calls it, won’t work for me. That doesn’t mean that in so arguing I can’t stress the utility of so talking, and indeed, the best way of becoming convinced of its truth is to take it out for a test drive and see how it handles. This takes time though, as we have to disentangle the old and new strands in our thought and talk. So we can’t separate out truth and utility so easily. (Question: did I just move up to the metalevel, or did I just make the same point again at the object level?)
Points off for not following up “you will have been critiqued” with some line like “he invites you to consider the possibility of a perfect future and then talks in the future perfect.”
I’d like to follow that up with a question that probably has an embarrassingly obvious answer: how do you justify the claim that, while p isn’t true (not because you think it’s false but because you disavow truth talk), it’s useful to act as if it were? Surely in a dispute about p, if you try that gambit, your disputant’s next move will be to ask why you think acting as if p were true is actually useful. Just saying “try it and see! It’ll be totally useful, I swear” won’t get you anywhere.
I was also mightily confused by the quotation from Sartre about “the truth of man”—isn’t that a sterling example of bad faith? (In fact, a lot of stuff in that section I found hard to make sense of for reasons related to the above question, but the distance separating me from my copy of the book currently makes it impossible to give a better account of precisely what I thought was going wrong.)
Ben: good question! I take it to be directed at Michael, but let me respond/elaborate:
I’m not really into Sartre, but as I understand it, the “bad faith” idea goes like this: when we’re trying to figure out what we should do, we are often reluctant to use our own judgment, because then we would be responsible for what we do, and that makes us uncomfortable (i.e., ängstig). So we look for a way to pin the responsibility on the world, so that our “judgment” simply reflects how things are out there: a moral fact that (metaphysically speaking) has nothing to do with us (as on Harris’s view). We have no choice in the matter, we say: that’s just how things are [shrug].
Sartre (and Rorty and Bérubé) doesn’t think there are any such things as “brute” moral facts (nor do I, for that matter). It’s “bad faith” to look for such things to get yourself off the moral hook. Instead, you should use your own judgment. But that doesn’t mean, as a careless realist objection goes, Sartre thinks you can do whatever you want. Not at all: again, Sartre’s not looking for a way to avoid responsibility, but to insist that such avoidance is impossible. We are always and at all times responsible for what we do. Avoiding “bad faith” means recognizing this and stepping up to take responsibility, which means being invested in our judgments about what it is right to do.
The disagreement among the three positions—realist, antirealist, and (let’s use McDowell’s term) anti-antirealist—concerns what such judgments amount to. The realist thinks that looking for realist moral facts just is moral judgment, i.e. judgment about how things are objectively, morally speaking. But Sartre’s phenomenology holds that such judgments are (in my terms) just as much existential commitments as they are doxastic ones. In fact, the very idea of objectivity—that things are a certain way—is, on this view, conceptually and ontologically dependent on the prior idea of my judging that things are a certain way.
And that would be okay, if only it didn’t (as in Sartre et al) unnecessarily erode the doxastic component of judgment down to a mere formality. What he sees as *my judgment* that things are thus and so, anti-antirealists like McDowell and me would have us see as my judgment that *things are thus and so*. It is indeed an important reminder, if that’s what it is, that subject and object are, as Sartre does not put it, gleichursprünglich; but when I make a judgment, I’m a subject, and so if I don’t say that my judgment is true of the object, then I’m not (a subject making) a judgment at all. As I’ve already argued, this does not leave us stuck with realism, nor is it an expression of bad faith. I am as existentially invested in my judgment as Sartre would demand; but that doesn’t mean it’s not a judgment.
So what Sartre then goes on to say about fascism is motivated by the desire to escape realism (and bad faith), but as Ben suggests, it falls into another form of bad faith. This is the idea that since judgments aren’t indicative simply of how things are (that is, that they are not pure doxastic commitments, an impossibility), this means that they don’t depend at all on how things are, but depend instead on consensus (or, in other versions, on utility). Thus the skepticism masquerading as humility, which allows/requires Sartre to say that if fascism achieves consensus, heaven forbid, then it will be the “truth of man,” with nothing outside it for it to be false of (which he thinks would be required for us to say, now, that fascist morality isn’t just distasteful but wrong, morally and factually). In other words, he leaves the strong realist conception of fact in place and simply rejects facts entirely. This is a mere recoil into antirealism, and perfectly well described as itself bad faith = shying away from owning up to one’s own judgment qua judgment, only this time from the other direction.