Because politics is complicated, and if you want to live in a democracy you have to try to figure it out.

America is famously anti-intellectual, however, so we’ve somehow wound up in the situation we’re in right now, aptly summed up by Eugene Robinson:

The nation demands the impossible: quick, painless solutions to long-term, structural problems. While they’re running for office, politicians of both parties encourage this kind of magical thinking. When they get into office, they’re forced to try to explain that things aren’t quite so simple — that restructuring our economy, renewing the nation’s increasingly rickety infrastructure, reforming an unsustainable system of entitlements, redefining America’s position in the world and all the other massive challenges that face the country are going to require years of effort. But the American people don’t want to hear any of this. They want somebody to make it all better. Now.

In plain English, this state of affairs is called “wishful thinking.” It doesn’t work that well for my four-year-old kid, and it isn’t going to work that well for you, either.

The world is a complicated place. No one is going to figure it out for you, but they will figure it out for themselves and use that knowledge to take advantage of you. And the danger comes, as I’ve always seen and heard it described, when too many people surrender their powers of intellect to a few demagogues and power brokers. That’s when really bad things like revolutions and tyrannies and dictatorships happen. See history for examples.

Now, I find myself once again suggesting that studying literature is the way to help us avoid these kinds of bad outcomes. And surely, if there are any political scientists or financiers or historians reading this blog, they’re already sharpening their argumentative knives to prove that it’s their discipline, not literature, that’s really practical when it comes to dealing with these problems. And once again, I’ll simply note that literature is classically defined as “literary productions as a whole; the body of writings produced in a particular country or period, or in the world in general” (OED).

As you’ll learn in your literature classes, it’s the context–the historical context, the political context, the intellectual context, the debates that one book responds to and then creates afterward as other people read it–that really counts. Reading one book is almost as bad as reading no books. The point is to start reading, to become a skilled and informed reader, and to keep reading–until you know everything that’s relevant.

Andrew Pettegree, Professor of History at the University of St. Andrews in Scotland, was recently interviewed by The Boston Globe, and his interview landed on the front page. The lead-in is very interesting in terms of book history:

In the beginning, before there was such a thing as a Gutenberg Bible, Johannes Gutenberg laid out his rows of metal type and brushed them with ink and, using the mechanism that would change the world, produced an ordinary little schoolbook. It was probably an edition of a fourth-century grammar text by Aelius Donatus, some 28 pages long. Only a few fragments of the printed sheets survive, because no one thought the book was worth keeping.

“Now had he kept to that, doing grammars…it probably would all have been well,” said Andrew Pettegree, a professor of modern history at the University of St. Andrews and author of “The Book in the Renaissance,” the story of the birth of print. Instead, Gutenberg was bent on making a grand statement, an edition of Scripture that would cost half as much as a house and would live through the ages. “And it was a towering success, as a cultural artifact, but it was horribly expensive,” Pettegree said. In the end, struggling for capital to support the Bible project, Gutenberg was forced out of his own print shop by his business partner, Johann Fust.

Inventing the printing press was not the same thing as inventing the publishing business. Technologically, craftsmen were ready to follow Gutenberg’s example, opening presses across Europe. But they could only guess at what to print, and the public saw no particular need to buy books. The books they knew, manuscript texts, were valuable items and were copied to order. The habit of spending money to read something a printer had decided to publish was an alien one.

Nor was print clearly destined to replace manuscript, from the point of view of the book owners of the day. A few fussy color-printing experiments aside, the new books were monochrome, dull in comparison to illuminated manuscripts. Many books left blank spaces for adding hand decoration, and collectors frequently bound printed pages together with manuscript ones.

“It’s a great mistake to think of an absolute disjunction between a manuscript world of the Middle Ages and a print world of the 16th century,” Pettegree said.

And then, of course, the article goes on to make a link between early printers and the early years of the internets. To be fair, it’s a link explored by Pettegree in his comments. Gotta make things relevant, always, I guess.

Here’s where Pettegree gets into the nitty-gritty of what we actually know about early publishing:

Most narratives of print have relied on looking at the most eye-catching products — whether it’s Gutenberg’s Bible or Copernicus or the polyglot Bible of Plantin — these are the ones which seem to push civilization forward. In fact, these are very untypical productions of the 16th-century press.

I’ve done a specific study of the Low Countries, and there, something like 40 percent of all the books published before 1600 would have taken less than two days to print. That’s a phenomenal market, and it’s a very productive one for the printers. These are the sort of books they want to produce, tiny books. Very often they’re not even trying to sell them retail. They’re a commissioned book for a particular customer, who might be the town council or a local church, and they get paid for the whole edition. And those are the people who tended to stay in business in the first age of print.

Lots of interesting new twists on the famous narrative of the Gutenberg Bible and the Nuremberg Chronicle (which I recently saw three copies of, back in May when I was up at Penn reviewing their Reading Pictures exhibition, which just closed a couple of weeks ago). Worth reading in full, I think, though I did try to excerpt the best bits above. I’m going to poach a few ideas from this article in a few weeks, when I deliver my introductory lecture on the Renaissance in British Literature I.

So that you don’t become a know-nothing:

It would be nice to dismiss the stupid things that Americans believe as harmless, the price of having such a large, messy democracy. Plenty of hate-filled partisans swore that Abraham Lincoln was a Catholic and Franklin Roosevelt was a Jew. So what if one-in-five believe the sun revolves around the earth, or aren’t sure from which country the United States gained its independence?

But false belief in weapons of mass-destruction led the United States to a trillion-dollar war. And trust in rising home value as a truism as reliable as a sunrise was a major contributor to the catastrophic collapse of the economy. At its worst extreme, a culture of misinformation can produce something like Iran, which is run by a Holocaust denier.

It’s one thing to forget the past, with predictable consequences, as the favorite aphorism goes. But what about those who refuse to comprehend the present?

Political debate is conducted largely through words. So how else are you going to learn how to handle words–how to parse an argument and discover whether it’s based on faulty reasoning–how to articulate your own opinions, responses, and rebuttals?

OK, to be fair, you can learn some of these things in other humanities fields, like philosophy or political science. But one of the definitions of literature is “literary productions as a whole; the body of writings produced in a particular country or period, or in the world in general” (OED). Therefore philosophy and political science are literature, at least in the broad sense. Double-majoring in something practical, like political science or English, and learning how to read, speak, and listen, is a helpful complement to the theoretical knowledge you pick up during all those ivory-tower business classes–you know, the ones conducted in a lecture hall, with none of the pressures or politics of an actual workplace.

So you should double-major in a humanities field–unless, I suppose, you have an ingenious plan for doing business sans communication.

In one of the last essays he wrote before he died, Tony Judt explains why words are irreplaceable:

Cultural insecurity begets its linguistic doppelganger. The same is true of technical advance. In a world of Facebook, MySpace and Twitter (not to mention texting), pithy allusion substitutes for exposition. Where once the internet seemed an opportunity for unrestricted communication, the commercial bias of the medium – “I am what I buy” – brings impoverishment of its own. My children observe of their own generation that the communicative shorthand of their hardware has begun to seep into communication itself: “People talk like texts.”

This ought to worry us. When words lose their integrity so do the ideas they express. If we privilege personal expression over formal convention, then we are privatising language no less than we have privatised so much else. “When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean – neither more nor less.” “The question is,” said Alice, “whether you can make words mean so many different things.” Alice was right: the outcome is anarchy.

In Politics and the English Language, Orwell castigated contemporaries for using language to mystify rather than inform. His critique was directed at bad faith: people wrote poorly because they were trying to say something unclear or else deliberately prevaricating. Our problem is different. Shoddy prose today bespeaks intellectual insecurity: we speak and write badly because we don’t feel confident in what we think and are reluctant to assert it unambiguously (“It’s only my opinion …”). Rather than suffering from the onset of “newspeak”, we risk the rise of “nospeak”.

I am more conscious of these considerations now than at any time in the past. In the grip of a neurological disorder, I am fast losing control of words even as my relationship with the world has been reduced to them. They still form with impeccable discipline and unreduced range in the silence of my thoughts – the view from inside is as rich as ever – but I can no longer convey them with ease. Vowel sounds and sibilant consonants slide out of my mouth, shapeless and inchoate. The vocal muscle, for 60 years my reliable alter ego, is failing. Communication, performance, assertion: these are now my weakest assets. Translating being into thought, thought into words and words into communication will soon be beyond me and I shall be confined to the rhetorical landscape of my interior reflections.

No longer free to exercise it, I appreciate more than ever how vital communication is: not just the means by which we live together but part of what living together means. The wealth of words in which I was raised were a public space in their own right – and properly preserved public spaces are what we so lack today. If words fall into disrepair, what will substitute? They are all we have.

(P.S. I’m also pleased to hear someone else call out the kind of political “discourse” at the top of the page as the product of “know-nothings.” I’ve been calling these kinds of willfully ignorant people “know-nothings” since at least the 2008 election.)

From a column about a subversive Norman Rockwell:

I, of course, prefer the subversive interpretation. I am one of those readers who prefers the bloody-minded Robert Frost of “Design” and “Never Again Would Birds’ Song Be the Same” to the kindly old white-maned Yankee bard cherished by my third-grade teacher, Miss Martin — all the while admiring Frost for being able to appeal to the both of us.

Likewise I see Jane Austen as a hard-headed realist, intolerant of fools of all kinds and occasionally cruel in her judgment of well-meaning idiots, a far cry from the Gentle Jane of the current (nauseating) Austen industry.

Pretty accurate, I’d say, though the products of the Gentle Jane school are not as nauseating as he would make out.

From an article on the changes in King’s College, Cambridge, and more generally in Oxbridge, since the 60s generation. The author, the recently deceased historian Tony Judt, remembers the 60s fondly as a time when things seemed balanced between tradition and change, but winds up in the fairly conservative position that a return to that kind of balance would be better than what he sees as the uncontrolled and unsubstantiated idealism of the present. He backs this position up with evidence, looking at the various problems caused by an endless program of educational reform in England, including the ironic ascendancy of public schools at a time when the official stance is supposed to be equality of opportunity. As Judt points out, “It does seem curious to curse the private schools for thriving in a market while enthusiastically rewarding bankers for doing so.”

The following quote, however, is not about the state of education as a political issue, but about Judt’s encounter with what he calls “real teaching,” which he connects with the idea of intellectual liberalism. Observe:

But what Leach did stand for—more than Annan and certainly more than the intellectually undistinguished John Shepherd—was pure smarts: an emphasis further accentuated when Leach was succeeded by the incomparable Bernard Williams. I served for a while as a very junior member on the College Fellowship Electors with Williams, John Dunn, Sydney Brenner (the Nobel Prize winner in medicine), Sir Frank Kermode, Geoffrey Lloyd (the historian of ancient science), and Sir Martin Rees (the Astronomer Royal). I have never lost the sense that this was learning: wit, range, and above all the ability (as Forster put it in another context) to connect.

My greatest debt, though I did not fully appreciate it at the time, was to Dunn, then a very young college Research Fellow, now a distinguished professor emeritus. It was John who—in the course of one extended conversation on the political thought of John Locke—broke through my well-armored adolescent Marxism and first introduced me to the challenges of intellectual history. He managed this by the simple device of listening very intently to everything I said, taking it with extraordinary seriousness on its own terms, and then picking it gently and firmly apart in a way that I could both accept and respect.

That is teaching. It is also a certain sort of liberalism: the kind that engages in good faith with dissenting (or simply mistaken) opinions across a broad political spectrum. No doubt such tolerant intellectual breadth was not confined to King’s. But listening to friends and contemporaries describe their experiences elsewhere, I sometimes wonder. Lecturers in other establishments often sounded disengaged and busy, or else professionally self-absorbed in the manner of American academic departments at their least impressive.

Perhaps the reform we need is a genuine and sustained engagement with other ideas. That’s not a reform that has to be hammered out against the opposition of state legislatures; it can begin right in the classroom. I come back once again to the example of Socrates, who always treats the ideas of others seriously. After all, if an idea is held–and still more if it becomes widespread–then clearly it has the power to persuade and inform the actual lives of real people. An idea therefore deserves serious treatment, even if that treatment is disagreement and refutation, and not just contempt. And when a real person holds the idea, it seems better to engage it seriously and disagree gently, as in the example of Judt and Dunn above, where Dunn treats the youthful Judt with great respect instead of holding him in contempt for what might be termed a simple mistake.

Last Tuesday, Frank Kermode died. A loss, because he was a critic’s critic. I’ve read several of his books, including his classics The Sense of an Ending and The Genesis of Secrecy, and they really are brilliant.

This passage, from the conclusion of the NYT obit, sums up an important part of Kermode’s appeal:

His writing, though, reached beyond academia. His “Shakespeare’s Language” (Farrar, Straus & Giroux, 2000) — which traced the development of the playwright through the evolution of his poetry and concluded that “Hamlet” signified a major turning point — was a best seller in England.

At the time he worried about the book’s accessibility, telling an interviewer for The Irish Times: “What I do is despised by some younger critics, who want everything to sound extremely technical. I spent a long time developing an intelligible style. But these critics despise people who don’t use unintelligible jargon.”

Perhaps there was a touch of sarcasm in the comment, a bit of grumbling. But he clearly had little patience for critics who seemed to write only for other critics. As he wrote in “Pieces of My Mind: Essays and Criticism 1958-2002” (Farrar, Straus & Giroux, 2003), criticism “can be quite humbly and sometimes even quite magnificently useful.” But it must also “give pleasure,” he added, “like the other arts.”

To understand criticism as an art, and not just a science, is important. The best critical books are, like any other book, individual and not reproducible. Kermode has that quality: when you’re reading him, you feel you’re listening to a very fine mind thinking about literature and writing about it with a certain elegance and beauty.

The loss is profound because it’s irreplaceable.

Because science (and business, for that matter) is doing it now, too.

From an article by Geoffrey Harpham, Director of the National Humanities Center down in beautiful North Carolina:

Autonomy, singularity, creativity–each of these terms names both a long-standing concern of the humanities and a set of contemporary projects being undertaken in the sciences.

Many such projects–from the relatively familiar such as stem-cell research and the Human Genome Project to the more exotic, such as attempts to upload the component parts of consciousness into a computer, bioinformatics, and advanced nanotechnology–appear to have serious implications for our basic understanding of human being. These projects may well force us to modify our understanding of traditional moral and philosophical questions, including the definition of and value attached to such presumptively nonhuman concepts as “the animal” and “the machine.”

Humanists, who have been only partially aware of the work being done by scientists and other nonhumanists on their own most fundamental concepts, must try to overcome their disciplinary and temperamental resistances and welcome these developments as offering a new grounding for their own work. They must commit themselves to be not just spectators marveling at new miracles, but coinvestigators of these miracles, synthesizing, weighing, judging and translating into the vernacular so that new ideas can enter public discourse.

They–we–must understand that while scientists are indeed poaching our concepts, poaching in general is one of the ways in which disciplines are reinvigorated, and this particular act of thievery is nothing less than the primary driver of the transformation of knowledge today. For their part, those investigating the human condition from a nonhumanistic perspective must accept the contributions of humanists, who have a deep and abiding stake in all knowledge related to the question of the human.

We stand today at a critical juncture not just in the history of disciplines but of human self-understanding, one that presents remarkable and unprecedented opportunities for thinkers of all descriptions. A rich, deep and extended conversation between humanists and scientists on the question of the human could have implications well beyond the academy. It could result in the rejuvenation of many disciplines, and even in a reconfiguration of disciplines themselves–in short, a new golden age.

Harpham’s call for a conversation between scientists and humanists about the nature of “the human” isn’t quite as far-fetched as it might seem, since he, as Director of the NHC, has been running these kinds of talks for several years now. Some of the results have found their way into print in a special issue of Daedalus. So the question for other humanists is, I suppose, what science have you read lately?

Catching up on my blog-reading this morning, I came across this brilliant bit excerpted from Fretful Porpentine’s undergraduate diaries.

And then there is a page of cynical, but probably accurate, advice for dealing with professors. I’m fairly sure that I figured out most of these the hard way.

Rules for Students

1 ) Don’t ever forget how powerless you truly are. If you may speak freely to a professor, it’s by his will, not your own. Know when to bite your tongue, when not to, & smile.

2 ) On the other hand, you have an advantage because you know your prof far better than he will ever know you. Also, you’re trained to listen and he’s trained to talk. Keep your ears open & learn to judge character!

3 ) Rapport is a gift from heaven. Don’t question it, analyze it, or push it too far. Do enjoy it.

4 ) Expect to do all the listening & almost all the remembering.

5 ) Gossip only with fellow students.

6 ) If your prof is in the habit of bad-mouthing her colleagues, do not trust her.

7 ) Demand no favors.

8 ) An acid tongue is OK, but don’t forget to smile!

9 ) Be honest — but know when to keep silent.

10 ) Remember they’re human (as if I could ever forget).

11 ) Even if your prof is a priest or a deacon, don’t ask him to deliver your wedding sermon. You will get a bad sermon.

I can categorically state that I was not so perceptive as an undergrad. Probably “presumptuous” would be a better word to describe my self of fifteen years ago. Also “stupid,” “brash,” “suffering from foot-in-mouth disease.”

And now that I think of it, how little things have changed.

I don’t have full access to the Chronicle (or at least, not without more computer-internet trouble than it’s sometimes worth). But this article, about how the Shakespeare Quarterly has experimented pretty successfully with a kind of open, non-anonymous reviewing, should be interesting reading for anyone who is thinking about new, digital possibilities for research.

Because somebody has to figure out what to do with e-readers.

I’ve tended to be a bit negative about e-reading, partly because I just like holding a book, and partly because many studies seem to suggest that it significantly changes the reading experience, and I’m pretty happy with the way it is, thanks.

Given that I approach this subject negatively, I was surprised to find myself fascinated by Adrian Johns’s talk “As our Readers Go Digital.” Johns makes a provocative case for the benefits of using e-readers. He does this by relating such uses back to the development of print–his field of expertise–with a nifty reading of a passage from Milton’s “Areopagitica”:

First, the advent of this dislocated, ‘universal’, skeptical reading practice – along with the places and formats that sustained it – provoked constructive responses as well as calls for simple repression. The problem was to uphold cultural order, and to do that one must confirm “good” reading in the new environment of rivalry. You see the problem addressed first in John Milton’s famous tract on censorship, Areopagitica. Much of Milton’s tract was devoted to defining the responsibilities of reading in this febrile world. Consider his marvelous image of London:

Behold now this vast city, a city of refuge, the mansion house of liberty….The shop of war hath not there more anvils and hammers waking to fashion out the plates and instruments of armed justice in defense of beleaguered truth than there be pens and heads there, sitting by their studious lamps, musing, searching, revolving new notions and ideas wherewith to present, as with their homage and their fealty, the approaching reformation; others as fast reading, trying all things, assenting to the force of reason and convincement. What could a man require more from a nation so pliant and so prone to seek after knowledge? What wants there to such a towardly and pregnant soil but wise and faithful laborers to make a knowing people, a nation of prophets, of sages, and of worthies? … Where there is much desire to learn, there of necessity will be much arguing, much writing, many opinions; for opinion in good men is but knowledge in the making.

The question was how to frame what Milton called a “generous prudence” that could furnish common ground for sparring readers beyond all institutions. It did not exist when Milton was writing. By the time of, say, Addison and Steele, less than a century later, it did: there were widely accepted norms of good and bad, expert and inexpert reading – in coffeehouses, of periodicals. Coffeehouse culture had become civilized, learned. As Thomas Hobbes said, even science itself came to find a home in the media and practices of this environment.

Johns sees an analogy between the creation and development of the infamous Enlightenment public sphere and the prospects ahead of us in the digital age. He calls on those of us who care about judicious reading both to maintain those values and to instill them in students. This seems to me a very important point in the wash of fears and panic-mongering about the new information onslaught:

This matter of readerly responsibility is, I think, important in a generalized sense. It may even be the key to the 3 Ps’ role in digital culture. We, like Milton, live at a moment when knowledge has left institutions and is wandering through strange and ill-defined spaces. We too face fears of skepticism, credulity, and a world where the civility of reading – its role in civic life – is up for grabs. In early modern Europe, it was only when a civility of reading was developed – partly through the revival of older skills, partly by a new forensics of the book – that corrosive credulity and skepticism were checked and a “public sphere” could come into being. Much of the Enlightenment depended on that achievement. What we have in prospect is another historic shift of the same order of magnitude. We too need to establish foundations for it: we need standards for creditable reading. As in Casaubon’s day, they will surely come from a merger of the old and the new, and some element of them will be forensic. That is, they will involve source criticism – both textual and algorithmic, including a comprehension of metadata – and historical – the history of the book, for example – at the same time.

Someone needs to articulate this. The fact that books have “left the building” leaves us inside the building with a more important task than ever. But it is something that calls for real change in the institutional and professional structures we inhabit – change of a kind not often enough appreciated. The roles of librarian and faculty need to be interpermeable again (as they were in Milton’s day). It isn’t enough for librarians to become “informationists,” as those at Johns Hopkins reportedly have. When places, practices, and publics are all up for grabs, to forge them anew we all need to become something more – something for which the seventeenth century had a much better word: intelligencers. Intelligencers were the skilled mediators of Milton’s day – people like Samuel Hartlib, Henry Oldenburg, Marin Mersenne, and Nicolas-Claude Fabri de Peiresc – who made possible the maturing of rancor, credulity, and skepticism into a relatively polite and cosmopolitan public sphere. We need more like them now.

I am not at all sure that academia is well equipped to produce these intelligencers; none of those I have just named came from universities. But if not us, who?

This turn to the ethics of e-reading, and to the invention of new ethical standards to accommodate it, seems to me much needed. Ethical questions, and more broadly human questions, tend to be given little if any attention in discussions of e-reading technology. I recently blogged about an example here, where the questions being asked are about speed-reading rather than comprehension. In other words, they’re about the results that are easiest to communicate in our current public sphere. How many of us know how fast we read? Probably not many, but all of us have some idea of what we’ve read and why.

To quote a favorite film, Miller’s Crossing, “I’m talkin’ about friendship. I’m talkin’ about character. I’m talkin’ about – hell. Leo, I ain’t embarrassed to use the word – I’m talkin’ about ethics.”