Sunday, March 18, 2012
The Valve - Closed For Renovation
It’s probably past time I made such an announcement. Not that there’s been anything wrong with what Bill B. has been up to on his own lonesome for some time hereabouts. But the Valve was never intended to be Bill Benzon’s personal blog. Best he relocate to his own digs if it’s to be a solo operation.
I want to keep the site up. I would be sad if the archives disappeared. Lots of good stuff. But keeping the place up? ... well, we’ll see. Maybe I’ll get around to organizing some of those good old book events again soon. Best of luck to all our past authors, wherever they have wandered to by this point. And thanks to all the readers and commenters who made it such fun while it lasted. But nothing lasts forever.
Happy Trails to You
I first heard of The Valve back in late Spring or early Summer of 2005 when I caught wind of an upcoming discussion of an anthology entitled Theory’s Empire. The idea, it seems, was to look at what had by now become capital “T” Theory and, perhaps, hope! hope! lay it to rest.
So I showed up in early July and joined the commentariat. I soon found myself playing an unaccustomed role, that of the old-timer who says, “Now, back in my day . . .” Now, I’m not that old, and it wasn’t that long ago in calendar years, but it WAS before the personal computer and internet. I was an undergraduate at Johns Hopkins when the French landed in 1966 and I’d lived the ferment in literary studies that had been occasioned by those ideas. I also encountered developmental psychology, through Mary Ainsworth, and psycholinguistics through James Deese. And so I added Jean Piaget and Noam Chomsky to my repertoire while studying semiotics and comp lit in English translation under the tutelage of Dick Macksey, one of the organizers of the (in)famous Structuralism conference.
The upshot of all that is that I shuffled off to Buffalo and joined the cognitive revolution under the tutelage of David Hays in the Linguistics Department. But my degree was in English and my dissertation was on cognitive science and literary theory. I’d decided that structuralism led, not to post-structuralism, deconstruction, or postmodernism, but to cognitive science. The profession did not agree with my assessment of the situation; we parted ways several years after I got my degree.
But I kept up my intellectual program anyhow, publishing this and that, here and there. I showed up at the Theory’s Empire event to see how things were going in the literary academy and to engage that part of it that might, I thought, just might, be looking for something new. Later in that year I was asked to try out for the masthead and was accepted early in 2006. Since then The Valve has been the closest thing I’ve had to an institutional home base.
Up into the second half of 2010 or so The Valve functioned as a vigorous group blog, more vigorous at some times than others, but strong and interesting. In the Spring of 2010 I set up my own blog, New Savanna, mostly so I could post on a wider range of topics than I felt appropriate to The Valve. By the end of 2010 it was clear, however, that, as a group effort, The Valve was dying. I continued to post here, as well as at New Savanna, because it was easy enough to do and because there seemed to be constant traffic from somewhere out there in the ether.
All things change, however. John Holbo, The Valve’s progenitor, informs me that it’s time for The Valve to go the way of the Phoenix and be reborn. To do that, however, it must first die. Really and truly. Dead.
And so I will cease posting at The Valve in order that this plot of cyberspace may lie fallow for a while.
It was a good run.
I’d like to thank John and the other Valvsters for the good intellectual company and you, our readers, for your kind and generous attention.
Thursday, March 15, 2012
What’s an Encyclopedia These Days?
I remember browsing through the encyclopedia when I was young. We had an Americana, not the Britannica, which just announced that it will cease print publication, and I would spend hours reading from one entry to another. The volumes were heavy and substantial and the set of them gave a visible and tactile sense of complete knowledge. That sense, of course, was an illusion, but it was there.
The Wikipedia affords a different experience. Of course, I come to the Wikipedia as a mature adult with a great deal of intellectual sophistication; how it would appear to a bright 11-year-old, I don’t know. But there’s no way to get a sense of all-knowledge-complete from the Wikipedia; you can’t see it on the shelf, you can’t handle it volume by volume. It just trails off into the ether, in many, many different directions.
There is, of course, the question of accuracy and authority. I know that comparisons have been done between Wikipedia entries and, I believe, Britannica entries. And Wikipedia has come out well in these comparisons. But that’s not all there is to IT.
By IT I mean both authority and, well, accuracy, I guess. They’re closely related, but not quite the same. In the case of conventional encyclopedias, such as the Britannica, the authority resides in the institution itself. Where the entries themselves came from, who wrote them and what sources they consulted, that’s pretty much a mystery.
Not so with the Wikipedia. Every article has a Talk page, where editors, as Wikipedia contributors are called, and others discuss the article. In some cases these discussions can be quite extensive. Moreover, each article has a History page as well. This page logs every change to the article from its beginning to the present state. The change may be minor, such as the addition or deletion of a comma, or major, the addition or deletion of several paragraphs. Whatever the change, it’s logged. And, if you wish, you can view the article as it was at any stage in its history.
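The mechanism is simple enough to sketch. Here is a hypothetical, much-simplified model of an append-only revision history of the sort the History page exposes; Wikipedia’s actual software, MediaWiki, is of course vastly more elaborate, and everything below (class name, fields) is invented for illustration:

```python
# A toy append-only revision history: every change is logged, and the
# article as it stood at any stage can be retrieved. Purely illustrative;
# this is not how MediaWiki actually stores revisions.

class ArticleHistory:
    def __init__(self, initial_text=""):
        # Each entry: (editor, edit summary, full text after the edit).
        self.revisions = [("system", "created", initial_text)]

    def edit(self, editor, summary, new_text):
        self.revisions.append((editor, summary, new_text))

    def current(self):
        return self.revisions[-1][2]

    def as_of(self, revision_number):
        # View the article as it was at any stage in its history.
        return self.revisions[revision_number][2]

    def log(self):
        # The History page: who changed what, in order.
        return [(i, editor, summary)
                for i, (editor, summary, _) in enumerate(self.revisions)]

art = ArticleHistory("Bugs Bunny is a cartoon rabbit.")
art.edit("ed1", "add date", "Bugs Bunny is a cartoon rabbit, created in 1940.")
art.edit("ed2", "add studio",
         "Bugs Bunny is a cartoon rabbit, created in 1940 by Warner Bros.")
print(art.current())
print(art.as_of(1))   # the article as it stood after the first edit
```

Whether the change is the addition of a comma or of several paragraphs, it lands in the log the same way, which is what makes the pedigree of the article recoverable.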
Strolling through that material can be very tedious, of course. The important point, though, is simply that it’s there. The article no longer has to stand alone, pretending to be a philosophical unmoved mover. It has an accessible pedigree, a visible origin. It is grounded.
THAT is an important difference. VERY important.
Wednesday, March 14, 2012
Encyclopedia Britannica to Shut Down Print Operations
From today’s New York Times:
In an acknowledgment of the realities of the digital age — and of competition from the Web site Wikipedia — Encyclopaedia Britannica will focus primarily on its online encyclopedias and educational curriculum for schools. The last print version is the 32-volume 2010 edition, which weighs 129 pounds and includes new entries on global warming and the Human Genome Project.
“It’s a rite of passage in this new era,” Jorge Cauz, the president of Encyclopaedia Britannica Inc., a company based in Chicago, said in an interview. “Some people will feel sad about it and nostalgic about it. But we have a better tool now. The Web site is continuously updated, it’s much more expansive and it has multimedia.” ...
Sales of the Britannica peaked in 1990, when 120,000 sets were sold in the United States. But now print encyclopedias account for less than 1 percent of the Britannica’s revenue. About 85 percent of revenue comes from selling curriculum products in subjects like math, science and the English language; 15 percent comes from subscriptions to the Web site, the company said.
About half a million households pay a $70 annual fee for the online subscription, which includes access to the full database of articles, videos, original documents and to the company’s mobile applications.
Monday, March 12, 2012
Intimate Enemies: What’s Opera, Doc?
It is a truth universally acknowledged that What’s Opera, Doc? is one of the finest cartoons ever made. It satirizes opera, Wagner in particular; it parodies Disney’s Fantasia, and, for that matter, it parodies the routines of its stars, Bugs Bunny and Elmer Fudd. The production was, by Warner Brothers’ standards, lavish, and the layouts, by Maurice Noble, are inspired.
All of that’s obvious. What’s not so obvious is that the film plays on the nature of reality in a way that’s reminiscent of Dance of the Hours from Disney’s Fantasia. As I’ve argued in Animal Passion? Hyacinth Hippo and Ben Ali Gator, that episode depicts the inability of animal dancers to stay in role with the result that, when Ben Ali Gator courts Hyacinth Hippo, we don’t know whether they’re acting roles or whether their passion is, well, real. Something like that is going on in What’s Opera, Doc? Elmer Fudd is in role as, well, Siegfried I guess, from beginning to end, but Bugs is not.
Note: I’m not going to comment on the design. But you should pay attention to it. Note the colors, the camera angles, and the use of lines. It’s really exquisite.
Kill the Wabbit
Let’s start at the beginning. As the title card and credits roll we hear an orchestra warming up. We thus know that, yep, as the title says, this is going to be opera. The opening music is wild and stormy and we see a stormy sky, and then a large hulking shadow appears projected against a cliff. More sky and lightning, and then we see that the large shadow is projected by a rather small fellow:
At this point a simple, and rather old, point has been made: things aren’t always what they seem to be. The camera zooms in and it’s Elmer Fudd, in heroic costume as a Nordic warrior, informing us that he’s “hunting wabbits.”
As Elmer sings “Kill the wabbit” while poking his spear into a rabbit hole, Bugs hears him and is rather distressed. Bugs approaches and delivers his classic line, “What’s up, Doc?” Bugs, however, is not in costume and so not, presumably, in role. He’s just Bugs.
For whatever reason, Elmer is completely oblivious to the identity of this character, which has us, the audience, wondering what indeed is up? When Bugs asks the mighty Nordic warrior just how he expects to kill the rabbit, Elmer replies that he’ll use his spear and magic helmet. The spear, of course, is just a standard weapon, the Nordic warrior equivalent of Elmer’s more usual shotgun. But a magic helmet, that’s something else entirely and gives the Nordic warrior powers that Elmer never had. Note how the helmet glows when Elmer mentions it:
Perhaps sensing a challenge, Elmer offers to demonstrate the helmet’s powers and climbs to the top of a tall promontory and summons up foul weather in a scene reminiscent of the dream sequence from The Sorcerer’s Apprentice. Lightning strikes a tree next to Bugs, who’s OK. But he starts running.
It’s then, when they’re far apart, that Elmer realizes: that creature, that’s the wabbit!
Elmer sets out in hot pursuit until he comes to a halt at the foot of another promontory, atop of which he spies, and is smitten by, Brünnhilde. Brünnhilde, we see instantly, is Bugs in drag, and she’s plunked atop an enormously overweight horse, as though the horse had to make up for the lack of fat on the proverbial operatic Fat Lady as played by Bugs. We assume, of course, that Bugs knows full well he’s in costume, playing a role.
They dance a pas de deux, Elmer looking rather boyish, at the end of which Bugs retreats atop another tower, this one with steps up the side and a gazebo at the top. They sing a passionate duet as Elmer climbs the steps. He reaches her, she falls into his arms, and her helmet falls away, revealing rabbit’s ears. The jig is up!
Elmer is enraged. As Bugs flees, the Nordic warrior invokes his powers again and conjures up a powerful storm, leaving Bugs apparently dead from a lightning strike.
Upon spotting Bugs’ body, Elmer is suddenly filled with remorse. He rushes to the body, picks it up, and carries it off to, well, it’s not clear exactly where, Valhalla one presumes. As he does so the camera zooms in on Bugs, who reveals that he’s alive, asking: What did ya’ expect in an opera, a happy ending?
According to the Wikipedia article on What’s Opera, Doc? this is one of only three cartoons in which Elmer gets the “upper hand” on Bugs. The others are Rabbit Rampage and Hare Brush. Neither of them is, shall we say, a “standard” Bugs and Elmer cartoon. And in neither of them does Elmer think that he’s killed Bugs.
Rabbit Rampage is an exercise in meta-level surrealism comparable to Duck Amuck. Bugs is the lone on-screen character addressing himself to an unseen artist who keeps redrawing both the setting and Bugs himself. The camera pulls back at the very end to reveal that Elmer’s the animator. Far from being Bugs’ killer, Elmer is, in a sense, the one who creates Bugs, who gives him, if not life, at least visible substance.
In Hare Brush, Bugs and Elmer switch roles. As the cartoon opens Elmer is a wealthy industrialist who thinks he’s a rabbit; he hops on all fours and munches a carrot. His board commits him to an asylum, where we see him in his room in a rabbit suit. This, that, and the other happens; Bugs takes a mind-altering pill; and voilà! bunny-suit Elmer gets chased around the countryside by Bugs-as-Elmer with a shotgun and wearing hunting clothes. A tax agent nabs Bugs-as-Elmer and hauls him off to prison, with bunny-suit Elmer basking in his triumph. But: he doesn’t think Bugs is dead and there’s no mourning.
So, in one case, Rabbit Rampage, Elmer has had the upper hand from the beginning, but we don’t know that because we aren’t aware of his role until the end. In the other case we have, if not quite role reversal from the beginning, certainly role confusion; Elmer thinks he’s a rabbit and Bugs hunts him. The point is that Elmer having the upper hand at the END isn’t just a matter of the final moves in the plot. Rather, it seems to entail a reconfiguration of the whole cartoon from beginning to end. We’re in the intellectual territory Lévi-Strauss entered in his studies of myth, where he showed how a large body of South American myths was based on the strategic rearrangement and transformation of a relatively few underlying elements (see my posts From Bollocks to Lévi-Strauss on Myth and The King’s Wayward Eye: For Claude Lévi-Strauss).
And that reconfiguration certainly includes playing with the conventions of reality. In Rabbit Rampage the so-called fourth wall is destroyed from the very beginning. At every step of the way we are told, in one way or another, that this is a cartoon. In Hare Brush Elmer is crazy and Bugs is drugged.
Things aren’t what they seem.
What’s Up, Really?
That’s certainly the case with What’s Opera, Doc? From the beginning to the end, Elmer is caught up in a role in an opera; he never appears as ordinary Elmer. When he enters, Bugs does not appear to be playing a role. But he’s worried about this Nordic warrior who’s after him. At first he’s just Bugs. But, when Nordic warrior sees him as “the wabbit” he dons a costume and takes a role in the opera. He becomes Nordic warrior’s beloved Brünnhilde.
The two then dance together and serenade one another. That is quite unlike anything that happened in either Hare Brush or Rabbit Rampage. And this, I suggest, is why, at the end, Elmer mourns the dead Bugs—who isn’t really dead. Yes, when he realizes the deception he goes into a rage and, in that rage, conjures up a storm that lays Bugs/Brünnhilde out for dead. When the storm dissipates and he sees Bugs/Brünnhilde there, well, his rage is gone too and so he mourns the wabbit, the wabbit with whom he’d danced a dance of love and sung a song of love.
What else could he do?
We’re in the land of myth logic and the rules are different from those in the real world. In myth logic mourning is the necessary answer to passionate love, as destructive rage is the necessary answer to deception. And perhaps that’s it, it was the deception that angered Elmer/Nordic warrior and it was the deception for which he sought revenge. That is, he wasn’t merely hunting a wabbit, as he was at the beginning, he was exacting revenge.
And that’s different from simply hunting rabbits.
At this point I see a pile of questions which I’m not prepared to address. For one thing, Elmer vs. Bugs had been a staple of Warner Brothers cartoons for years. Most people in the audience would know this. But how would What’s Opera, Doc? play for those who didn’t know that? And what about relatively young children who had not yet absorbed the conventions of cartoons, such as the fact that, no matter how much violence we see, no one is injured?
Not only is the Bugs/Elmer conflict a known item, but it’s almost always presented as an on-going conflict. Elmer and Bugs have a long-standing relationship. Elmer’s not hunting any arbitrary rabbit, he’s hunting this particular wabbit. It’s personal, and has been for some time.
What does it mean to be locked into THAT kind of conflict? It’s as though a significant component of Elmer’s identity is invested in his conflict with Bugs. That kind of conflict is steeped in ambivalence. The love duet in this cartoon was no mere act; it revealed an aspect of the relationship between Bugs and Elmer that’s otherwise been completely masked in standard-issue cartoon violence and conflict.
While Bugs and Elmer aren’t even the same species, much less the same family, that is only appearance. Or, if you will, that’s art. The response these cartoons evoke in us, the audience, that response speaks to close personal relationships. It’s about family. Wife and husband, parent and child, sibling and sibling, that’s what we’re dealing with. Those relationships are fraught with ambivalence, ambivalence that’s on full display in What’s Opera, Doc?
Now all I need’s a good explicit argument to that effect, rather than a few paragraphs of tap dancing and hand waving. That argument, that’s going to take more than a blog post, much more.
BTW, did you look closely at those screen shots, the color and layout?
* * * * *
Bonus points: As you know, Michael Barrier insists on the importance of acting in animation. That is extraordinarily important in this cartoon. Pick one scene and explicate the acting subtleties it displays.
Sunday, March 11, 2012
Alphonso Lingis talks of various things, cameras and photos among them
A new journal, Singularum, has an interview with philosopher Alphonso Lingis, who translates Merleau-Ponty, writes, travels, and takes photos.
I had long resisted buying a camera, thinking that there was something false about collecting images of things seen and people encountered and who have passed on, trying to retain the past. I thought that what was real was what from a trip left one changed. I started taking pictures when a friend who was taking me to the airport gave me a camera on the way.
I soon realized that the camera had changed my perception. The light: it was no longer just cleared space in which things took form; it had direction, it led the gaze, its shafts excavated situations isolated in the dark, sometimes it spread in a scintillating, dazzling, blazing medium without boundaries. Shadows took on substance; they stretched, flowed, condensed things in themselves. It occurred to me that I saw them that way when I was a child. Things looked different: the contours of shadows and of things that overlapped other things pushed out the contours that contained things in themselves. Flat surfaces showed corrugations, grain, stubble and texture, and sheets of gleam. And the continuity of the landscape drifting by would be abruptly broken by momentary events—the spiraling neck of a heron probing the space, the poised pause of an antelope, the legs of a child in an arabesque she will never be able to do once grown up, the grin of a passerby at something inward. The landscape is abruptly splintered, a segment isolates, magnetizes and pulls the glance into it.
Thursday, March 08, 2012
Feynman, John von Neumann, and Mental Models
Since George Dyson’s recent history of modern computing, Turing’s Cathedral: The Origins of the Digital Universe, was written in part to restore John von Neumann to prominence, I thought I’d republish a double review, lightly edited, I wrote some years ago: “A Tale of Two Geniuses,” Journal of Social and Evolutionary Systems, 17(2): 227-230, 1994. Richard Feynman was one of the geniuses and John von Neumann was the other.
* * * * * *
Genius: The Life and Science of Richard Feynman, by James Gleick, New York: Pantheon Books, 1992, 532 pp.
John von Neumann, by Norman Macrae, New York: Pantheon Books, 1992, 405 pp.
Students of cognitive evolution and of twentieth century thought are fortunate in the simultaneous appearance of these two biographies. No doubt the simultaneity is mostly coincidence. The physicist Richard Feynman is most widely known, alas, for two autobiographical collections of anecdotes which reveal him to be a waggish and riggish anti-establishment sort; he is most deeply known for his contributions to quantum electrodynamics. John von Neumann was a thoroughly establishment sort - soldiers guarded his hospital room as he lay dying of brain cancer just in case he let out defense secrets in his sleep - and is most widely known as the name which appears in phrases like “computers using the von Neumann architecture.” The two men crossed paths in Los Alamos, where they worked on the atomic bomb. That crossing is a reasonable place to begin our review.
Feynman was recruited to Los Alamos while still a graduate student. He was in charge of group T-4, Diffusion Problems. The problem was to figure out how neutrons, which drive the fission reaction, diffuse through the explosive core. Knowing the rate and pattern of diffusion was essential to determining the mass and configuration of fissile material. Since the late 30s von Neumann had been working on similar problems in connection with shock waves and explosions in general and so was able to help the Los Alamos effort between 1943 and 1945.
The difficulty was that the relevant equations could not be solved analytically. Rather, it was necessary to simulate neutron diffusion numerically by calculating the step-by-step motion of individual neutrons. That requires lots of calculations, which were performed by a group of people operating calculators. The problems would be broken into components; each person would be responsible for one component, with each problem being passed from person to person as individual components were calculated.
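The step-by-step procedure just described, following individual neutrons through random scattering steps and tallying what becomes of them, is what we would now call a Monte Carlo simulation. A toy sketch in Python, with every parameter (slab width, step size, absorption probability) invented for illustration; nothing here reflects the actual Los Alamos calculations:

```python
# Follow individual neutrons through a 1-D slab of material. Each neutron
# takes random scattering steps until it is absorbed or wanders out of the
# slab; the escape fraction is estimated by simple tallying. All numbers
# are made up for the sake of the sketch.
import random

def follow_neutron(slab_width=10.0, mean_step=1.0, absorb_prob=0.1):
    """Track one neutron; return 'escaped' or 'absorbed'."""
    x = slab_width / 2.0              # start in the middle of the slab
    while 0.0 <= x <= slab_width:
        if random.random() < absorb_prob:
            return "absorbed"
        # Scatter: a random-length step, left or right.
        x += random.choice((-1, 1)) * random.expovariate(1.0 / mean_step)
    return "escaped"

def diffusion_estimate(n_neutrons=10_000):
    """Estimate the fraction of neutrons that escape the slab."""
    escaped = sum(follow_neutron() == "escaped" for _ in range(n_neutrons))
    return escaped / n_neutrons

random.seed(42)
print(f"escape fraction ~ {diffusion_estimate():.3f}")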
Computing and von Neumann
That, of course, is the general way computers solve problems, with the computational plan being an algorithm. But, they did not have computers at Los Alamos. Computers came after the war and von Neumann was central to the effort. He understood that the computer is essentially a logical device and clarified that logic with the concepts of the stored program (Macrae, pp. 282-284), the fetch-execute cycle (p. 287), and conditional transfer (see Bernstein 1963, 1964, pp. 60 ff.). That is to say, von Neumann clearly distinguished between the physical structures and connections of the devices from which the computer is constructed and the logical requirements which those devices have to fulfill. For that he is the progenitor of the computer.
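Those three ideas can be made concrete with a toy stored-program machine. The instruction set below is invented for the sketch and corresponds to no historical machine; the point is only that program and data share one memory, that the machine runs by repeatedly fetching and executing instructions, and that a jump-if-zero instruction lets control flow depend on the data:

```python
# A minimal stored-program machine: instructions and data live in the same
# memory, the loop below is the fetch-execute cycle, and JZ (jump if the
# accumulator is zero) is a conditional transfer. The instruction set is
# invented purely for illustration.

def run(memory):
    acc, pc = 0, 0                      # accumulator and program counter
    while True:
        op, arg = memory[pc]            # FETCH the instruction at pc
        pc += 1
        if op == "LOAD":                # EXECUTE it
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "JZ":                # conditional transfer
            if acc == 0:
                pc = arg
        elif op == "JUMP":
            pc = arg
        elif op == "HALT":
            return memory

# Program: add counter + (counter-1) + ... + 1 into a total.
# Cells 0-8 hold instructions; cells 10-12 hold data -- one shared memory.
memory = {
    0: ("LOAD", 10), 1: ("JZ", 8),       # while counter != 0:
    2: ("ADD", 11),  3: ("STORE", 11),   #   total += counter
    4: ("LOAD", 10), 5: ("ADD", 12),     #   counter -= 1
    6: ("STORE", 10), 7: ("JUMP", 0),
    8: ("HALT", 0),
    10: 3,      # counter
    11: 0,      # total
    12: -1,     # constant
}
run(memory)
print("3 + 2 + 1 =", memory[11])   # prints: 3 + 2 + 1 = 6
```

Because the program is itself just memory contents, nothing stops a program from reading or even rewriting its own instructions, which is exactly the property that separates a stored-program computer from a machine with a fixed, hard-wired sequence of operations.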
Later on von Neumann initiated the use of computers in weather modeling. This, plus his earlier work on shock waves and the atomic bomb, makes him one of the founders of numerical analysis, a loose collection of techniques important in many scientific and technical fields. While pursuing the conceptual foundations of life, he worked out the concept of the cellular automaton, a highly parallel kind of computational device which is much favored by current theorists of chaos and dynamical systems. His work on game theory created a new field of economic and strategic analysis. Before the war von Neumann did important work on the mathematical foundations of quantum mechanics.
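For a feel of what a cellular automaton is, here is a minimal one-dimensional (elementary) automaton in Python. Rule 90 is chosen arbitrarily for the sketch; von Neumann’s own self-reproducing automaton was two-dimensional and vastly more elaborate:

```python
# An elementary cellular automaton: a row of cells, each updated in
# parallel from its own state and its two neighbors' states by a fixed
# local rule. Rule 90 is used here only as a simple example.

def step(cells, rule=90):
    """Apply an elementary CA rule to one generation (wrap-around edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right
        out.append((rule >> neighborhood) & 1)   # look up the rule bit
    return out

# Start with a single live cell and watch the pattern grow.
cells = [0] * 15
cells[7] = 1
for _ in range(7):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, rule=90)
```

Run it and a Sierpinski-like triangle unfolds from one cell, a small illustration of how rich global behavior can emerge from a purely local rule, which is the property that endears cellular automata to theorists of chaos and dynamical systems.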
Feynman and Quantum Mechanics
And so we segue to Feynman, whose most important work was that which he did in the late 1940s on quantum electrodynamics. The quantum world is notorious as the domain where the fundamental stuff of the universe acts sometimes like a wave, sometimes like a particle. Particles and waves are readily visualized. But how can you visualize something which is both and neither? And, if you can’t visualize it, then how do you get the physical intuition which is, for many, so important to scientific thinking (cf. Miller, 1986)? It was Feynman’s genius to create diagrammatic conventions for quantum interactions which made physical intuition much easier and facilitated calculation as well. The so-called Feynman diagrams became ubiquitous once Feynman introduced them and, in 1949, Freeman Dyson [father of George Dyson] proved the diagrams to be equivalent to the more rigorously mathematical, and less intuitive, axiomatic approach of Julian Schwinger and Sin-Itiro Tomonaga.
Feynman went on to do important work in superfluids, the weak nuclear force and, while on sabbatical, did some creditable molecular biology. In the wake of the Challenger disaster Feynman received a great deal of attention by performing a simple demonstration with ice water and a rubber O-ring. That simple demonstration unmasked the self-serving bureaucratic disregard for reality which led to the Challenger disaster. He also served on the board of directors of Thinking Machines, Inc., whose massively parallel computers are often used to implement models based on von Neumann’s concept of the cellular automaton.
Mental Math, Computing, and Late 20th Century Thought
With this return of von Neumann we get to the point of this review: When taken together, what do Feynman and von Neumann give us? Their intellectual lives crossed paths only once, albeit in a cauldron whose intellectual fecundity may, in the long run, outlive the bomb which was its immediate purpose. For the most part, they worked in separate arenas. But taken together those various arenas encompass much of the deepest and most rigorous scientific and technical thinking of our century. Perhaps by looking at their work we can gain some insight into the basis of that thinking.
Mental mathematics is a motif which crops up in both books. Feynman and von Neumann worked in fields where calculational facility was widespread and both were among the very best at mental mathematics. In itself such skill has no deep intellectual significance. Doing it depends on knowing a vast collection of unremarkable calculational facts and techniques and knowing one’s way around in this vast collection. Before the proliferation of electronic calculators the lore of mental math used to be collected into books on mental calculation. Virtuosity here may have gotten you mentioned in “Ripley’s Believe it or Not” or a spot on a TV show, but it wasn’t a vehicle for profound insight into the workings of the universe.
Yet, this kind of skill was so widespread in the scientific and engineering world that one has to wonder whether there is some connection between mental calculation, which has largely been replaced by electronic calculators and computers, and the conceptual style, which isn’t going to be replaced by computers anytime soon. Perhaps the domain of mental calculations served as a matrix in which the conceptual style of Feynman, von Neumann, (and their peers and colleagues) was nurtured.
Piaget (1976, pp. 320 ff.) talks of higher mental processes which operate on lower level processes; perhaps the world of calculations is the lower level world over which all these thinkers built their higher level processes. In the terms David Hays and I introduced in our account of cognitive evolution (Benzon & Hays, 1990), these higher level processes implement models. Earlier science had been based on theories, while philosophy is grounded in rationalization, with modeling, theorizing and rationalizing understood as distinct types of abstract conceptualization. I’m suggesting that these conceptual models, which are central to 20th century thought, were originally constructed in a mental space richly populated with the tricks and procedures of mental calculation. It is in such a mental space that von Neumann and Feynman made their contributions to our thought.
If so, does the advent of the electronic calculator and computer mean that we have just thrown away the possibility of such deep thought by raising a generation of thinkers who routinely turn to calculators and computers for tasks that Feynman, von Neumann, and their many colleagues would perform in their heads? It is possible. But, another possibility is more interesting, and not nearly so depressing in its implications.
Yes, the models of Feynman and von Neumann are built on a foundation of calculational wizardry. But most of that wizardry was irrelevant to that genius. It turns out that the relevant component is not only required in programming computers, but is used there in a form unadulterated by a vast collection of mere facts and details. Thus programming, with its own collection of tricks and procedures, provides a much more effective basis for creating the conceptual matrix required to understand particle physics, game theory, and, of course, computing itself. A student who has learned to program need not be a Feynman or a von Neumann to grasp matters which, not so long ago, only Feynman and von Neumann and very few others, could grasp. Thus, far from destroying the necessary conceptual matrix, computers may make that matrix more readily available.
This is, of course, only a speculation; not even that, it is that most fragile of conceptual objects, a Mere Speculation. Disreputable as they are, Mere Speculations are nonetheless unavoidable, for they are starting points. One can only wonder at how many Mere Speculations von Neumann and Feynman must have considered as they worked their way toward their more substantial contributions.
Benzon, W. L. & Hays, D.G. (1990) “The Evolution of Cognition.” Journal of Social and Biological Structures, 13, 297-320.
Bernstein, J. (1963, 1964) The Analytical Engine. New York: Random House.
Miller, A. I. (1986) Imagery in Scientific Thought. Cambridge, MA: The MIT Press.
Piaget, J. (1976) The Grasp of Consciousness. Cambridge, MA: Harvard University Press.
Wednesday, March 07, 2012
Support Michael Sporn’s Film about Edgar Allan Poe
Several years ago I spent a delightful evening at New York City’s Museum of Modern Art viewing a retrospective of Michael Sporn’s films. Every day I check his blog, which is a treasure trove for those interested in animation. Now I’m asking you to support his Kickstarter project, which involves a biography of Edgar Allan Poe that he’s been working on for several years.
Here’s how Sporn describes the film:
The Animatic, above, is a rough representation of animation in progress. It helps us tell the story. We hope to turn the many segments started into completed animation to be able to thrust the feature film, POE, into complete production. The Kickstarter money will do that for us and help satisfy the needs of the possible distributors and financiers who are already interested.
What’s the story?
Edgar Allan Poe was a brilliant writer who lived a very short and eccentric life. He died at the age of 40 and in that time created literary genres including the detective mystery, the sci fi epic, the horror story, and many of the most beautiful love poems imaginable. Within this life there is a very dynamic story to be told.
The film opens with baby Edgar dragged from theater to theater by his ever-squabbling actor parents. They travel to cities up and down the East Coast performing, as their marriage falls apart. Edgar’s father disappears, and his mother dies of consumption. The three year old watches the last theater his mother performs in burn to the ground. He’s left an orphan, and the film begins.
Poe’s life was destroyed not by drugs or alcohol, as is often stated, but by absolute poverty, and this is the crux of our film. Many of the women in his life died of consumption and illness as he was too poor to be able to care for them properly. He himself died in a poorhouse hospital.
Our film will show various biographical key points in his life and will use selections from his great fiction to depict this dramatic story.
The film is now completely scripted and story-boarded and 20 minutes of an animatic have been completed. Four Poe stories will be set in counterpoint to the biography: The Premature Burial, Murder in the Rue Morgue, The Black Cat, and Ms. in a Bottle.
Now’s your chance to step into film history by supporting this project.
Monday, March 05, 2012
Philosophy, Ontics or Toothpaste for the Mind
Writing in, of all places, The New York Times, Colin McGinn, a distinguished philosopher—for only distinguished philosophers get to appear in “the paper of record”—has called for a rebranding of the discipline of philosophy. No, “rebranding” isn’t his word, though it was astutely used by one of the commenters. McGinn just called for a name change. “Ontics” is his suggested alternative.
McGinn notes that the name is misleading to non-philosophers, who “immediately assume you are in the business of offering sage advice, usually in the form of unargued aphorisms and proverbs.” And when you try to explain, well, they just don’t get it. Whatever this discipline is, “lover of wisdom”—the etymological meaning of the name—is too generic.
Well, sure, yeah, it is. But then, is what McGinn does, or what most academic researchers do, wisdom in any meaningful sense? Thomas Kuhn famously argued that what most scientists do is rather like puzzle-solving, and he did not mean the term at all pejoratively. The point of the term was to suggest that most scientists—and McGinn thinks of philosophy as science, in a broad sense of the term—work within fairly well-specified conceptual boundaries.
Which they do. And so it is with most academics. That’s just the nature of the enterprise.
There is tremendous respect for the mythology of “going boldly where no man has gone before,” but little on-the-ground tolerance for that activity in the flesh. I rather suspect that McGinn wouldn’t recognize one of the bold ones if she bit him in the ass. Whatever it is that McGinn does, is there any reason whatever to suspect that he gives a fig about wisdom?
Not, mind you, that I think “wisdom” a particularly good term for “going boldly where no man has gone before.”
But it’s not a bad term for that, not at all. And, yes, I know the phrase comes from the cheesy opening of a cheesy classic TV science-fiction program. That cheesy TV program, Star Trek, spoke to a deep need for adventure. We can argue about how well it spoke to that need, but the need is real and I’m willing to grant that Gene Roddenberry’s animating impulse was a desire to speak to adventure, and that he was sincere in that.
On the other hand, a senior academic who whines about his discipline’s public image in, yes, the paper of record, The New York Times, can’t possibly know or have known intellectual adventure. This, as Graham Harman has noted, seems to be the core of McGinn’s plaint:
Our current name is harmful because it posits a big gap between the sciences and philosophy; we do something that is not a science. Thus we do not share in the intellectual prestige associated with that thoroughly modern word. We are accordingly not covered by the media that cover the sciences, and what we do remains a mystery to most people.
In the words of Rodney Dangerfield, “I don’t get no respect.”
Dangerfield is a comedian. McGinn is not, nor is he a philosopher. He is, at best, an ontician, whatever that is.
Perhaps one reason so many people hold the academic world in contempt is that they sense that those who run it have utterly given up on the quest for truth, for, yes, wisdom. Those words from Star Trek mean something to people—they mean something to me, something very important. People want to believe that those privileged to live the academic life believe in and support the questing for truth. Perhaps the public senses, if not quite knows, that academicians have lost sight of the truth. And so the public has little respect for the intellectual life.
They want to see something higher, something noble, something worthy of sacrifice and respect. What does McGinn do? He treats his professional work as though it were toothpaste for the mind, that’s what he does. How can anyone respect that? McGinn surely doesn’t.
I mean, really! Kvetching in the NYTimes about not getting respect? He gets THAT pulpit handed to him and that’s what he does with it?
With mouthpieces like that, the academy doesn’t need enemies.
Friday, March 02, 2012
Nazi Rules for Regulating Funk ‘n Freedom
J.J. Gould has a short piece in The Atlantic that lists Nazi regulations for dance orchestras in Czechoslovakia:
- Pieces in foxtrot rhythm (so-called swing) are not to exceed 20% of the repertoires of light orchestras and dance bands;
- in this so-called jazz type repertoire, preference is to be given to compositions in a major key and to lyrics expressing joy in life rather than Jewishly gloomy lyrics;
- As to tempo, preference is also to be given to brisk compositions over slow ones (so-called blues); however, the pace must not exceed a certain degree of allegro, commensurate with the Aryan sense of discipline and moderation. On no account will Negroid excesses in tempo (so-called hot jazz) or in solo performances (so-called breaks) be tolerated;
- so-called jazz compositions may contain at most 10% syncopation; the remainder must consist of a natural legato movement devoid of the hysterical rhythmic reverses characteristic of the barbarian races and conducive to dark instincts alien to the German people (so-called riffs);
- strictly prohibited is the use of instruments alien to the German spirit (so-called cowbells, flexatone, brushes, etc.) as well as all mutes which turn the noble sound of wind and brass instruments into a Jewish-Freemasonic yowl (so-called wa-wa, hat, etc.);
- also prohibited are so-called drum breaks longer than half a bar in four-quarter beat (except in stylized military marches);
- the double bass must be played solely with the bow in so-called jazz compositions;
- plucking of the strings is prohibited, since it is damaging to the instrument and detrimental to Aryan musicality; if a so-called pizzicato effect is absolutely desirable for the character of the composition, strict care must be taken lest the string be allowed to patter on the sordine, which is henceforth forbidden;
- musicians are likewise forbidden to make vocal improvisations (so-called scat);
- all light orchestras and dance bands are advised to restrict the use of saxophones of all keys and to substitute for them the violin-cello, the viola or possibly a suitable folk instrument.
H/t Graham Harman.
Monday, February 27, 2012
The Early History of Modern Computing: A Brief Chronology
This chronology is from a Guardian interview with George Dyson, who’s just written Turing’s Cathedral: The Origins of the Digital Universe. One of the book’s central aims is to restore prominence to John von Neumann, the great Hungarian polymath.
1936 Alan Turing submits his paper ‘On computable numbers, with an application to the Entscheidungs problem’ to the Proceedings of the London Mathematical Society.
1941 Konrad Zuse, working in isolation in Germany, builds the Z3. He knows nothing about Turing’s work.
1944 The first Colossus computer is operational at Bletchley Park, Buckinghamshire, significantly contributing to the allied war effort by doubling the codebreakers’ output. It contained 1,500 thermionic valves, was the size of a room and weighed around a ton. In all, 10 Colossus computers were in use by the end of the war.
1945 John von Neumann publishes a paper setting out the architecture of a stored-program computer.
1946 First public showing of the Eniac computer built in the preceding three years at the University of Pennsylvania.
1952 Von Neumann’s IAS computer becomes operational and is extensively cloned – there is no patent.
Sunday, February 26, 2012
Computing Encounters Being, an Addendum
As soon as I finished yesterday’s post on Brian Smith’s On the Origin of Objects, I had a thought: Ahh...so THAT’s why the philosophy of computing leads to metaphysics. If your intuitions about computing are dominated by your practice of arithmetic, well, that’s calculation, and calculation is only an aspect of computing as it has evolved since World War II.
Consider the opening paragraph to the Preface of Domain-Driven Design by Eric Evans (xiv):
Leading software designers have recognized domain modeling and design as critical topics for at least 20 years, yet surprisingly little has been written about what needs to be done or how to do it. Although it has never been formulated clearly, a philosophy has emerged as an undercurrent in the object community, a philosophy I call domain-driven design.
In that paragraph the object community is not a fellowship of philosophers, it’s a bunch of computer programmers using languages such as C++ or Java and working in a style that came to be called object-oriented long before the philosophers re-coined the phrase for their own purposes.
But that’s a side-note.
What I want you to think about: “domain modeling” and “domain-driven,” about that “domain.” The point is that programs are ABOUT some world, aka the domain. Programs are written to perform some useful task, e.g. inventory control in a liquor store, simulating fluid flow around an airfoil, formatting strings of alpha-numeric characters in text documents (a word processor), moving fictional characters and objects through fictional worlds as in, e.g. video games, and so forth. To write such programs one must model the application domain: What kinds of objects, relations, and processes are in that domain? One then writes software that not only embodies that model, but also gives human users the ability to control and manipulate that model. That is, the software itself embodies, enables, translates an interaction between humans and the world.
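To make “domain modeling” a bit more concrete, here is a minimal, purely illustrative sketch in Python (the names are mine, not Evans’s) of a domain model for something like that liquor-store inventory example. The classes mirror the kinds of objects and processes in the domain, and the methods give a user the ability to manipulate the model:

```python
# Hypothetical sketch of a domain model, not code from Evans's book.
# The domain: a liquor store's inventory. The model: products, stock
# levels, and the processes of receiving deliveries and making sales.

class Product:
    """A kind of thing the store sells, identified by its SKU."""
    def __init__(self, sku, name, unit_price):
        self.sku = sku
        self.name = name
        self.unit_price = unit_price

class Inventory:
    """The store's stock, plus the operations that change it."""
    def __init__(self):
        self._stock = {}  # maps sku -> (product, quantity on hand)

    def receive(self, product, quantity):
        """Record a delivery: add quantity of product to the stock."""
        prod, on_hand = self._stock.get(product.sku, (product, 0))
        self._stock[product.sku] = (prod, on_hand + quantity)

    def sell(self, sku, quantity):
        """Record a sale and return its total; refuse to oversell."""
        prod, on_hand = self._stock[sku]
        if quantity > on_hand:
            raise ValueError("insufficient stock")
        self._stock[sku] = (prod, on_hand - quantity)
        return prod.unit_price * quantity

    def on_hand(self, sku):
        """How many units of this SKU are currently in stock."""
        return self._stock[sku][1]
```

The point of the sketch is only this: every name in it refers to something in the domain, not to anything in the computer, which is what makes such a program ABOUT a world.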
Programming’s a very difficult job, and one that’s not often done well. There’s lots of lousy software in the world and the failure rate of custom software projects is ferocious. But that’s another side note.
So, here we have this complex and difficult task, writing software. And now Brian Cantwell Smith wants to philosophize, not about the writing of software, or even about the software itself, but about what the software does: computing, in the largest sense of the word. How can he NOT engage in metaphysics? If computing, in that largest sense (as opposed to the more restricted notion of calculation), is always about the world, then a philosophy of computing must also be a philosophy of the world.
So, it’s inevitable then that, properly engaged, a philosophy of computing must be metaphysics of the deepest sort.
Now, how do we get from that to the observation I made two months ago that reading some current philosophers seems, in places, very much like reading computerists thinking about knowledge representation? Both kinds of talk are very abstract probings about basic things, relations between them, and processes among them all. And there seems to be some kind of deep rift between these folks HERE who want to make objects the basic stuff of the world and those folks over THERE who see processes and relations as the basic stuff. What I’m wondering is whether, at a sufficiently abstract level, these competing views aren’t mutually interconvertible.
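As a toy illustration of that interconvertibility, entirely my own and not drawn from any of the philosophers or computerists in question, here is a tiny “world” rendered both ways in Python: object-first (things bearing properties) and relation-first (a flat set of triples), with a mechanical conversion in each direction:

```python
# Illustrative only: the same little world as an object-centric
# mapping and as a set of (subject, relation, value) triples.

def objects_to_triples(objects):
    """Flatten {name: {attribute: value}} into a set of triples."""
    return {(name, attr, value)
            for name, attrs in objects.items()
            for attr, value in attrs.items()}

def triples_to_objects(triples):
    """Regroup triples back into an object-centric mapping."""
    objects = {}
    for subject, relation, value in triples:
        objects.setdefault(subject, {})[relation] = value
    return objects

world = {"cat": {"color": "black", "on": "mat"},
         "mat": {"location": "floor"}}

# The round trip loses nothing: the two views carry the same content.
assert triples_to_objects(objects_to_triples(world)) == world
```

At this level of abstraction, at least, nothing in the content forces a choice between objects and relations; the choice is one of emphasis.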
Saturday, February 25, 2012
On the Origin of Objects (towards a philosophy of computation)
While cruising the web I came across a 1996 book by Brian Cantwell Smith, On the Origin of Objects. Smith is a computer scientist who was, in fact, in search of a theory of computation but found himself smack in the middle of metaphysics. Interesting, no? Just what computing is, is not exactly clear. And with folks, such as Stephen Wolfram (and he wasn’t the first), proposing that the universe is, beneath it all, a giant computer of some sort, well, you can see how chasing down the nature of computation could be interesting.
The publisher’s blurb was provocative:
Everything that exists - objects, properties, life, practice - lies, Smith claims, in the “middle distance,” an intermediate realm of partial engagement with, and partial separation from, the enveloping world. Patterns of separation and engagement are taken to underlie a single notion unifying representation and ontology: that of subjects’ “registration” of the world around them.
That had just a whiff of object-oriented ontology about it, though the book’s date puts it before the term was coined.
I found an ontology site that had excerpts from the book, from critics, and from Smith’s reply. It had this bit from the book’s conclusion:
Overall, the project was to develop what I called a successor metaphysics, one that would honor the following pretheoretic requirements (345-346):
1. Do justice to what is right about:
a. Constructivism: a form of humility, or so at least I characterized it, requiring that we acknowledge our presence in, and influence on, the world around us; and
b. Realism: the view that adds to constructivism’s claim that “we are here” an equally profound recognition that we are not all that is here, and that as a result not all of our stories are equally good.
2. Make sense of pluralism: the fact that knowledge is partial, perspectival, and never wholly extricable from its (infinite) embedding historical, cultural, social, material, economic and every other kind of context. The account of pluralism must:
a. Avoid devolving into nihilism or other forms of vacuous relativism, and in particular not be purchased at the price of (successor notions of) excellence, standards, virtue, truth, or significance; and
b. Not license radical incommensurability, provide an excuse to build walls, or in any other way stand in the way of interchange, communion, and struggle for common ends.
Two additional criteria were applied to how these intuitions are met:
3. Be irreductionist—ideologically, scientifically, and in every other way. No category, from sociality to electron, from political power to brain, from origin myth to rationality to mathematics, including the category “human,” may be given a priori pride of place, and thereby be allowed to elude contingency, struggle, and price.
4. Be nevertheless foundational, in such a way as to satisfy our undiminished yearning for metaphysical grounding. That is, or so at least I put it, the account must show how and what it is to be grounded simpliciter - without being grounded in a, for any category a.
Along the way, the account should:
5. Reclaim tenable, lived, work-a-day successor versions of many mainstay notions of the modernist tradition: object, objective, true, formal, mathematical, logical, physical, etc.”
That site, in turn, sent me to an interesting and curious review of the book by R. P. Loui, which appeared in Artificial Intelligence, 106: 353-358, December 1998. Early on we find this:
ORIGIN OF OBJECTS is thus an important book, even a beautiful book. It reasserts its author as one of the deepest and erudite thinkers of computing. It is also, to this reviewer, an intellectually uninteresting book and thoroughly frustrating to read. These are two separate points: First, the book is a meditation on some metaphysical questions posed by symbol systems, and the author admits repeatedly that this is a purely metaphysical exercise (I would have demanded an apology rather than an admission). The problem is that metaphysics is a love-it or hate-it area of philosophy that has no practical implications (unlike, for example, epistemology which is crucially involved in explicating formal criteria for knowledge and belief). Second, the book frustrates this reviewer because the author, as is his reputation from prior work, is incapable of getting to the point. It is not just that Brian Smith seems to lean toward the semi-literary style of, for example, Friedrich Nietzsche. It is probably due to a mismatch of interests between writer and reader (a mismatch I suspect Brian Smith will have with most readers in AI). Both of these points are developed below.
The remarks about not getting to the point and writing in a literary style might prick up the ears of my continental philosophical friends. Smith’s problematic appears to be this:
What is the idea? Basically, Brian Smith believes that there is one “real world out there” (his “metaphysical monism”) while there is an apparent arbitrariness in how we symbolize it (our “ontological pluralism” p. 375). He feels that this mismatch of monism and pluralism is an intolerable situation which must be remedied.
To deal with this problem Smith has recourse to the aforementioned middle distance:
At this new level, there is “registration.” It is a level at which reference to the world can be made, but not through linguistic commitment or through symbols that have intentionality (reference). It is the level that philosophers of science might call pre-theoretic: there is an observer, but there are not yet data, since data are not theory-neutral. This level is made possible by the embedding of an observer in the “real world”, and of course, Brian Smith aims to permit that observer to be an AI program as well as a biological system.
And so forth and so on. I can’t keep going on like this because I’ll end up quoting most of the review, which doesn’t make much sense when you can go there and read it for yourself. Loui concludes by asserting that the book “has catapulted Brian Smith, for a time (as long as the philosophy of computing is mistaken to be the modern philosophy of mind), ahead of all who would today claim to be computing’s principal philosopher.” He is surely correct in his parenthetical; whatever the philosophy of computing is, it cannot be considered to be a philosophy of mind, real or artificial.
I’ll conclude with three (from over a dozen) quotations Loui culled from the book:
... designers, playwrights, artists, ... are drawn into the act ... . Few fields, if any, are being left behind; ... it would be a mistake to think that these people are just users of computation. On the contrary, they are participating in its invention. ... The line between specifically computational expertise and general computational literacy is fading ... (p. 360)
... notions of mathematical proof [are] being revised ... . Other distinctions are collapsing, such as those between and among theories, models, simulations, implementations ... (p. 360)
... we are post-Newtonian, in the sense of being inappropriately wedded to a particular reductionism of scientism, inapplicable to so rich an intentional phenomenon. Another generation of scientists may be the last thing we need. Maybe, instead, we need a new generation of magicians. (p. 361)
Thursday, February 23, 2012
Symposium on Graeber’s Debt
Crooked Timber is running a symposium on David Graeber’s Debt: The First 5000 Years. Contributions so far:
- Seminar on David Graeber’s Debt: The First 5000 Years – Introduction, Chris Bertram
- The unmourned death of the double coincidence, John Quiggin
- The World Economy is not a Tribute System, Henry Farrell
- Debt Jubilee or Global Deleveraging, Barry Finger
- The end of debt, John Quiggin
- The Return of Grand Narrative in the Human Sciences, Neville Morley
- The Dangers of Pricing the Infinite, Malcolm P. Harris, on student debt
All are worth reading, as are many of the comments. I’ll end with the last paragraph from Bertram’s introduction:
Does Graeber find in utopian and democratic resistance to the Axial empires an historic precedent for the Occupy movement to emulate? Perhaps our best possibilities lie not in grand schemes of societal transformation but in developing the “baseline communism” and the democratic instincts that persist even in the heart of modern capitalism. The anarchist writer Colin Ward used a phrase from Ignazio Silone – “the seed beneath the snow” – to make a similar idea vivid. We cannot take the beast on in a direct assault, and nor should we, but we can work together to develop a more human society within the nooks and crannies of the commercial one.
Sounds a bit like a plug for the Transition Movement, which originated in England and has since spread around the world.
Tuesday, February 21, 2012
The Nightmare of Digital Film Preservation
I have distinct memories of the days when the prospect of digital media everywhere led to thoughts of how easy it would be to preserve everything: Digital Will Never Die! The basic idea was that, as digital is All or Nothing, the signal is strong and clear and so resistant to degradation. All we have to do is just keep transferring it from one substrate to another as the substrates wear out.
Piece of cake.
That’s not how things have worked out. David Bordwell has written a useful essay on the nasty problems of digital preservation: Pandora’s digital box: Pix and pixels. The law of unintended consequences strikes again, and again; as Bordwell observes: “It seems likely that digital projection has, in unintended and unexpected ways, put the history of film in jeopardy.” There are many problems, more than I care even to list, much less summarize. Let one little paragraph stand for many:
Storing 4K digital masters costs about 11 times as much as storing a film master. You can store the digital master for about $12,000 per year, while the film master averages about $1,100.
For that’s what it all comes down to: cost.