
Neil Gussman

Nerve gas and other unconventional weapons.


Nerve gas is becoming the weapon of choice for TV doomsday scenarios. In last year’s season of 24, for example, Russian terrorists steal twenty canisters of a made-for-TV nerve gas and threaten to kill tens of thousands of people. They do manage to kill about 100 people, despite the best efforts of series hero Jack Bauer (Kiefer Sutherland).


War of Nerves: Chemical Warfare from World War I to Al-Qaeda

Jonathan Tucker (Author)

Anchor Books

496 pages

$18.95

Watching season five of 24 makes it clear why we should be afraid of gas, particularly nerve gas, although this terrifying weapon was cleaned up and tamed for TV. The “Weaponized Centox” featured on 24 kills its victims with the lethal efficiency of real-world nerve gas—VX, Tabun, Sarin, and so on—but unlike actual nerve gases, Centox then conveniently disappears.1 Real nerve gas poses a huge decontamination problem. It sticks to walls and wings, cars and computers, and it is just as deadly on the skin as in the air. When the TV nerve gas Centox is released within CTU (Counter Terrorism Unit) headquarters in Los Angeles, the gas quickly kills nearly half of the staff, but those who make it to sealed rooms and survive simply return to their workstations and resume the high-tech fight against determined terrorists inside and outside the government.

Personally, I would not want to be tapping on a keyboard and drinking coffee in a room that had held a lethal dose of nerve gas just a few minutes before. But if TV gets the details wrong, it gets the terror right. Closed, crowded places make tempting targets for terrorists. The 24 terrorists attack a mall and offices, and attempt to attack thousands of homes through the natural gas system.

If you are interested in the history of the most deadly class of chemicals used in warfare, War of Nerves by Jonathan B. Tucker recounts many tales of developing, producing, and deploying chemical weapons, with a particular focus—as the title suggests—on nerve gas. The author of previous books on smallpox and leukemia and editor of a volume on chemical and biological warfare, Tucker takes the reader from the German laboratory where the first nerve agent was developed right up to the present.

So absorbing is Tucker’s chronicle that you may lose track of time while learning how an errant U.S. Army test of VX nerve gas killed thousands of sheep in Utah in the 1960s. Lest you think this an exaggeration, I asked my 15-year-old daughter, Lisa, to read chapter 16 while we were on a rather long drive to a mall. When we arrived, she had two pages left and wanted to finish the chapter rather than run straight into Abercrombie & Fitch. Chapter 16 describes the life of the man responsible for the Tokyo subway nerve gas attack that left twelve dead and hundreds injured. Most histories of chemical warfare would not slow a teenager on the way to a clothes store.

In his dramatic style, Tucker occasionally reaches beyond knowable facts to get inside the minds of his subjects. He says that Dr. Gerhard Schrader, in his lab at I.G. Farben, “[a]s always, felt a pleasant tingle of anticipation as a new substance emerged from the synthetic process.” At the time, December 23, 1936, Dr. Schrader was working in a lab decorated with “a large framed photograph of German Chancellor Adolf Hitler in heroic profile.” A man in these circumstances could have experienced a tingle for any number of reasons: chemistry, Christmas, or Hitler’s portrait. But Tucker doesn’t hesitate to read minds.

Aside from this quibble, the stories Tucker finds of ordinary people are both delightful and chilling. Delightful because they are well told and give the reader some insight into the kind of person who would develop or mass-produce weapons of mass destruction. Chilling because his subjects focus on the problem at hand—making thousands of tons of nerve gas, for example—with no apparent qualm. It’s the job. They do it.

My favorite of Tucker’s tales is the story of Boris Libman, a native of Latvia who could have walked straight out of the works of Aleksandr Solzhenitsyn. Born in 1922, Libman was just 18 when the invading Russians confiscated his family’s land and property and drafted him into the Soviet Army. He was seriously wounded early in the war, returned to duty after a long recovery, and was again badly wounded, the second time left for dead. He survived the war and applied to study at the Moscow Institute for Chemistry tuition-free as an honorably discharged disabled veteran. Libman was turned down because he was officially dead. He managed to prove he was alive, attended university, and became quite a talented chemical engineer. He supervised production of thousands of tons of nerve gas on impossible schedules for many years. In trying to do his best for the Soviet Union, he made an error with a containment pond for toxic wastes. A storm caused a flood, the pond burst its dike, and tons of toxic waste poured into the Volga River. Months later the delayed effects of the spill killed millions of fish for 50 miles downriver. Libman was blamed and sent to a labor camp to appease an outraged public. But as it turned out, no one else could run the nerve gas plant, and Libman was quietly released and returned to work after one year.

Fear of toxic gas and wild exaggeration of its dangers have their American roots in the debate over chemical warfare after World War I. In Chemical Warfare: A Study in Restraints (first published by Princeton University Press in 1968 and now reissued by Transaction with a new introduction by Jeanne Guillemin), Frederic J. Brown recalls the terror of gas during the years between the world wars. “Propagandists were totally irresponsible in their exaggerations of new weapons developments,” Brown writes. He quotes H. G. Wells on the aftermath of a fictional chemical attack by aircraft using the Centox of the 1930s, what Wells called “Permanent Death Gas”:

[the area attacked] was found to be littered with the remains not only of the human beings, cattle and dogs that strayed into it, but with the skeletons and scraps of skin and feathers of millions of mice, rats, birds and such like small creatures. In some places they lay nearly a metre deep.

Not quite “blood as deep as horses’ bridles,” but still a vision to warm the hearts of apocalypse addicts.

Brown—Lieutenant General, retired, U.S. Army; he was a junior officer when he wrote the book—carefully recounts the military history of the use and, more significantly, the non-use of chemicals as weapons in both world wars and the period in between. Thorough and well documented, his book also captures the policy decisions and leaders’ attitudes that kept chemical weapons, for the most part, off World War II battlefields.

Brown’s book has the fat footnotes that have long been out of style even in scholarly publishing, but these footnotes are a delight for the reader who wants details. On page 18 is a three-paragraph, nearly full-page, small-type footnote describing President Woodrow Wilson’s attitude toward gas warfare, with references to his biography and a meeting with the French commander at the battle of Ypres.

Sometimes the footnotes illuminate and enliven a rather dull passage. In a section on civil defense Brown says, “Since it has to be assumed that an enemy would use the most destructive mixture of weapons available, gas shelters had to be bomb- and fireproof as well as gasproof.” Why is this true? Note 48 at the bottom of the page explains: “High explosives to penetrate collective shelters and homes, incendiaries to drive the population into the streets, gas to kill in the streets.” Brown tends to the passive voice in the text but can be vivid in the notes.

While the combatants of World War I expected gas warfare in future conflicts, no combatant in World War II attacked another with gas, apart from limited use in China. The aversion to gas warfare stands in stark contrast to the fate of the other two weapons introduced in World War I: the tank and the bomber. When World War II began in September of 1939, German tanks backed by bombers made short work of Poland. The following spring the same German juggernaut ripped through France, Belgium, and Holland and defeated every major Allied combatant except the United Kingdom. In the Pacific, the Japanese showed how effective ship-based bombers could be, winning many victories against neighboring countries in the early years of the war and eventually bringing the U.S. into the war with the carrier-based bomber attack on Hawaii on December 7, 1941.

The bomber and the tank became indispensable weapons for the major combatants of World War II, but gas warfare did not. Brown says the first reason was revulsion by military professionals. A small group of senior officers strove to make chemical warfare integral to the plans of the U.S. military, but most professional officers wanted no part of warfare they saw variously as inhumane, cowardly, and out of their control. Gas is also more complicated to use than conventional weapons. Gas warfare creates a logistics burden all its own: using gas means providing protective equipment for all friendly soldiers operating in the area affected by gas. Gas munitions displace conventional rounds. The more gas rounds fired, the fewer explosive rounds that can be fired by the same gun. In the fast-moving battles of World War II, persistent gas would slow the successful attacker, forcing his soldiers to operate in an area they contaminated. And in the case of naval use of gas, there is a potential disaster in any ship having a magazine loaded with gas rounds. Any leak of toxic gas inside a ship leaves the entire crew in a contaminated container with little prospect of escape.

Brown shows how politics pushed the warring nations further away from the use of gas. First use by one army meant retaliation by the other. Germany and England bombed each other throughout most of the war. Even when one country was clearly winning, the other was able to retaliate. If one side used gas, the other would be sending gas back across the Channel in short order. Neither of these particularly vulnerable countries wanted to provoke gas warfare, nor did they want any of their allies to add gas to the mix of weapons. Also, the men at the head of the largest armies in the war were for their own reasons strongly opposed to gas warfare. Hitler was gassed during World War I and Brown shows that the German leader did not seriously consider using gas until the final days of the war. Franklin Delano Roosevelt was opposed to gas as a “barbarous and inhumane” weapon; he stated to the world in 1943 that the United States would not initiate gas warfare but would retaliate in kind if necessary.

Brown’s main narrative closes at the end of World War II. He shows that gas was never seriously considered as an alternative to the use of the atomic bomb or invasion of the Japanese mainland. In his conclusion Brown judges that the circumstances which prevented the use of chemical warfare in World War II still obtained in 1968. The professional military was largely opposed to the use of chemical warfare, and the main antagonists of the postwar period—the United States and the Soviet Union—both had many allies who would not want gas or nuclear weapons used on their soil.

Quite rightly, Brown took a measure of comfort in reflecting that the restraints which existed in World War II continued in the Cold War era. Alas, this modest reassurance does not carry over to our own day. Terrorists are not soldiers. As their name suggests, their purpose is to inflict terror on the civilian population, while at the same time they can trust traditional Western reticence not to respond with indiscriminate murder in retaliation.

For readers who would like to see Brown’s book come to life, at least in fiction, I recommend Tom Clancy’s Red Storm Rising. This 20-year-old best seller describes a conventional war in Western Europe in the late 20th century in which neither side uses chemical or nuclear weapons. The reasons could have been lifted straight from Chemical Warfare. The soldiers on both sides of the conflict share the attitude toward gas and nuclear weapons that Brown describes. And in a prescient prologue, Clancy’s World War III begins with Arab terrorists blowing up a Soviet refinery, causing a crippling fuel shortage.

If I found the hopeful note in Brown’s conclusion tied closely to the circ*mstances of the Cold War, I found some practical hope in Tucker’s book. His long descriptions of the problems encountered by Saddam’s chemists in the Iran-Iraq war—along with the troubles encountered by the cult that attacked the Tokyo subway—show how difficult it is to make nerve gas. The ingredients are corrosive and dangerous. The equipment required to make it is specialized and difficult to obtain. Even the most talented chemists and chemical engineers Tucker introduces in the book faced huge difficulties producing nerve gas—and in many cases failed partially or completely. Even for those with millions and millions of dollars to spend, nerve gas synthesis is very, very difficult. Luckily for us, no weapon in the real world is as easy to use or works quite as well as its fictional counterpart.

Neil Gussman writes a column on the history of chemistry for Chemical Engineering Progress magazine.

1. “Weaponized” means put in a bomb, artillery shell, mine, or other system for use. In 24, the nerve agent was loaded into pressurized cylinders that were intended for release in ventilation systems. Why the U.S. government would weaponize nerve gas in a form most useful for theft and use by terrorists rather than for the battlefield is a question only the show’s writers can answer.

Copyright © 2007 by the author or Christianity Today/Books & Culture magazine.


Gary M. Burge

Seeing through the eyes of Palestinians.


My first foray into the troubled world of Israeli-Palestinian politics came in the late 1980s, not long after the outbreak of the first Uprising (the Intifada). I was leading trip after trip to Israel, guiding students around the countryside, teaching the Bible from a wide array of biblical sites. When I finally stepped off Israel’s well-worn tourist trail, I was astonished at what I saw in the Palestinian territories. I thought I was in another country.

I wrote up these experiences in 1993 (Who Are God’s People in the Middle East?) and naïvely thought that my evangelical readers would be fascinated to learn that there was another side to the story. That there was something else, some other narrative, if you just turned off the famous road to Bethlehem, if you dared go to Bet Jalla or even Hebron. Some evangelical readers were interested. Some were decidedly not. My formal work on the problem—as a theologian masquerading as a political scientist—began in the 1990s and culminated with another book (Whose Land? Whose Promise?) that set me deeper at odds with evangelicalism’s political-right turn during that same decade. We had publicly decided on our narrative to explain the Middle East, and we weren’t going to budge.

Most work in this area concerns the plight of those Palestinians who live under Israeli military occupation (over 3 million) or the many others (4.2 million in the Middle East alone) who have been made refugees by Israeli land seizures following a series of wars (1948, 1967, 1973). When television footage shows clashes with so-called Palestinian “terrorists,” complete with gunfire and smoking tires, these scenes generally come from Gaza or the Occupied West Bank, areas captured by Israel in fighting that effectively took millions of Arabs captive. Two uprisings later—one that began in 1988 and another in 2000—the story continues to draw our interest. Peace proposals rise and fall; exhausted people hope and despair; the news moves on.

But there are two more stories that are now unfolding. First, Israeli historians have begun to revisit their country’s cultural narrative and challenge its sacred mythologies.1 Some Israelis (such as the prolific Benny Morris) and American Jews (Norman G. Finkelstein at DePaul University) have been bold in their critique and in many cases inflammatory. They challenge the usual perception of a besieged Israel surrounded by massed Arab armies and the near-miracle victories of wars in 1948 and 1967. Using research from Israel’s own archives and the stories of Arab and Israeli witnesses, they describe the conquests of these early decades, atrocities on both sides, the mass expulsion of Palestinians, and the intentional confiscation of Arab land. And they understand the present Palestinian uprising as a reaction to Israeli oppression and occupation. When they refer to these events as an “Israeli Pogrom” against Arabs, their words are guaranteed to ignite debate. And when they refer to walled-in Palestinian villages with terms such as “apartheid” or “prisoner camps,” well, you can just imagine.

Despite the combative voices either dismantling the Israeli myth (Finkelstein) or defending it at all costs (Alan Dershowitz), even mainstream Israeli scholars and diplomats are writing up the story with a new sensitivity to the evils brought by both sides in this conflict. Shlomo Ben-Ami is an Israeli with a distinguished record of achievement: Oxford-educated, he taught at Tel Aviv University before beginning an illustrious career in public service that culminated with an appointment as Israel’s Minister of Foreign Affairs. He was a participant in many of the Arab-Israeli peace conferences and attended the Camp David summit in 2000. In Scars of War, Wounds of Peace: The Israeli-Arab Tragedy, Ben-Ami gives what may be the most comprehensive study of Israel’s modern history to date. What is remarkable is his ability—limited still, but impressive nonetheless—to openly describe the faults of Israel’s own behavior:

The State of Israel was born in war, and it has lived by the sword ever since. This has given the generals and the military way of thinking … a paramount role in the Jewish state and too central a function in defining both Israel’s war aims and her peace policies. Throughout, the army preached “activism” and frequently overreacted to real and sometimes imaginary threats.

It is hard to imagine such words written by an Israeli government leader even 15 years ago. To be sure, Ben-Ami spares no words when it comes to Arafat and the leadership of the PLO more generally. But this is to be expected. Ben-Ami’s greatest despair, he tells us, was prompted by the failure of overtures to Arafat made during Ehud Barak’s tenure, when opportunities for concession and agreement were lost thanks to arrogance on both sides. The results of this failure were catastrophic, delivering the Palestinians into the hands of Hamas, destroying the Israeli peace movement, and handing the reins of Israeli politics to the far right.

But certainly Ben-Ami exhibits a new sensibility, a new willingness to acknowledge what is happening in the occupied territories. And what is happening there? Consider the case of Hamdi Aman. One Saturday afternoon in the spring of 2006, Hamdi, 28 years old, living in Gaza and proud of his new white Mitsubishi, rounded up some family members to take them for a drive. He pulled up to an intersection in the Tel al-Hawa neighborhood. An SUV approached from the rear and began to pass. Then all hell roared. A missile fired from a silent Israeli attack helicopter slammed into the SUV, destroying it. But at the same time, the explosion and its shrapnel ripped through Hamdi’s car, killing his wife, Naima (27), his mother, Hanan (46), and his six-year-old son, Muhannad. Hamdi’s three-year-old daughter, Maria, and his uncle, Nahed (34), remain hospitalized with severed spinal cords, unable to breathe on their own. This once proud father today is left alone with a two-year-old son, Mu’min. (I include the personal names of these victims so that I can remember that they are not mere statistics, occupying another couple of lines in the accident record.)

The Israeli response? An investigation, regret, and a promise to learn how to reduce such risks in the future. According to Israeli civil rights groups, the Israeli army has killed 234 Palestinians like this in the last five years—and in the process has killed 123 innocent bystanders. The army claims that it is assassinating “terrorists”—but it cannot be sure who is in the targeted car, nor is the person given a trial. They are simply executed from on high. And the bystanders? Collateral damage.

Now here is the shift. Israeli condemnations of Palestinian violence now share the spotlight with Israeli condemnations of Israeli violence. You can read about Hamdi’s story in the newspapers (as I did), where Israelis express frustration with their own country’s violence. The same is true regarding Israel’s disproportionate bombing attack on Lebanon in August 2006. Not only were vast residential areas ruined, non-military targets destroyed, and 1,200 civilians killed, but in the last days of the war—even as the world was calling for a stop to it all—Israel dropped 1.2 million cluster bomblets all over South Lebanon, virtually turning it into a minefield. (The American government, which supplied the cluster bombs, was oddly silent.) And for the sharper critics, this behavior on both sides deserves no finer word than “terrorism.”

The second important development comes from within Israel itself. Few realize that before Israel’s war of independence in 1948, Palestinians were the majority. In 1948 the population of Israel proper (excluding the current occupied territories) was about 1.5 million, of whom 900,000 were Arabs. When the war was over, 85 percent of these Arabs were uprooted and became refugees. This was a well-planned Israeli strategy to shift (or “ethnically cleanse”) the population and build their state. The Palestinians left behind eventually were integrated into the growing Israeli society; now numbering about a million, they are Israeli citizens. But they live a tenuous life between two worlds—one Palestinian compared it with “holding two watermelons with one hand.”

The Israeli government has produced a great deal of research on these Arabs inside Israel. For instance, they average five persons per household (50 percent more than Jews), they have a 60 percent higher unemployment rate, 75 percent of them are “low income,” and a high proportion rely on welfare payments. Their town councils receive about 50 percent less money per capita than Jewish town councils. They own less land (they are 16 percent of the population but own 3.5 percent of available land). And since land allocation is generally done with an eye on ethnicity (e.g., Israeli settlements), the idea of allocating land for a “Palestinian settlement” is deemed by most Israelis as absurd. The government even subsidizes mortgages, but this is a benefit for those who serve in the military (as most Jews, but few Arab Israelis, do).

I had thought that I was fairly shock-proof when it came to the Palestinian story. And then I came across the little book Coffins on Our Shoulders: The Experience of the Palestinian Citizens of Israel. A number of things make it unique. It is authored by one Jewish scholar and one Palestinian scholar. Dan Rabinowitz teaches sociology at Tel Aviv University, while Khawla Abu-Baker is a lecturer at Emek Yezreel College. Rabinowitz and Abu-Baker discovered that not only were their careers similar (as academics invested in the social sciences) but their family roots took them both back to early 20th-century Haifa. His family came to Haifa from Eastern Europe (Kiev, Ukraine) and hers migrated from the village of Ya’abad (near Jenin in today’s West Bank). Two families—one Jewish, one Arab—both young, looking for a future. But that is where their common experience ends.

Each chapter in this book does something remarkable. The authors walk us through the history of Israel/Palestine by telling each family’s story in a given period. First you may hear Rabinowitz (“Asher Bodankin was born around 1883 in Pinsk on the border zone between Byelorussia and Poland”) and then it is Abu-Baker’s turn (” ‘Aarif Abu-Shamla was born in 1903 to a well-to-do rural family in Ya’abad”). The narratives are told with honesty and respect. And then the authors interpret the period together.

The book shows in stunning detail how Israeli Arabs are marginalized and dismissed—to a degree that reminded me of the African American experience in the mid-20th century. Through each war, through every uprising, the crushing of the Abu-Baker family’s identity in Israel is told. In contrast, Dan Rabinowitz describes how his family was heir to an expanding Israel’s successes: opportunity for education and jobs, access to social advantages, mortgages subsidized by the government—the list is endless. And he admits that his family simply did not “see” the Palestinian world.

But his conversion came in 1989. In his doctoral work at Cambridge, Rabinowitz had studied the relationship between Arabs and Israelis in Nazareth. He lived in Nazareth Illit (or Upper Nazareth, which is Jewish) while a large Arab community was in the lower “bowl” of the village. By the 1980s he was a freelance journalist and in 1989 it was a story, a horrible story, that got him.

A small army jeep convoy, entering the nearby village of Faqu’a, was hit with a barrage of stones. The convoy sped away, but the commanders felt that the unit’s pride had been wounded. And so they returned with a vengeance to teach Faqu’a a lesson. First they destroyed some Palestinian houses. Dan picked up the story there and interviewed a number of the village residents. But then the army unit returned, this time with intent to kill. One of Dan’s primary interviewees, Yusuf Abu-Na’im, saw the soldiers, ran from them through an olive orchard, but was hunted down and executed. Dan Rabinowitz was never the same. He now saw what his family had missed.

Meanwhile, Khawla Abu-Baker’s life had taken her into social work in a psychological care center in Gaza, helping train therapists to use role-play with victims of the violence. But the story she tells is entirely different. She and the younger generation she describes represent a body of Palestinians who will no longer accept wholesale discrimination. When told by some “but your life here is better than, say, those other Arabs in Gaza,” she will recoil and ask what sort of state Israel pretends to be.

The book begins with a scene that is as symbolic as it is poignant. One of the gems of Arab Galilee is the small mountain village of ‘Iblin. This is where the now-famous Father Elias Chacour has built Mar Elias College, a school dedicated to higher education for Jews, Muslims, and Christians. And this is where anyone can meet young Arabs who understand their culture and the way in which opportunities are systematically denied them—and who are determined to resist.

During a recent graduation ceremony at Mar Elias, a choir sang what has become a deeply moving Palestinian song, Mawtini (“My Homeland”). When the first notes began, the audience stood in silence. They continued standing for the next song as well. It was Samih al-Qasem’s “Muntasib al-Qama” (“The Standing Tall”). And here one refrain stands out:

Standing tall I march,
My head held high,
An olive branch in my palm,
A coffin on my shoulder,
On I walk.

Dan Rabinowitz and Khawla Abu-Baker tell us story after story of olive branches and coffins, story after story of Palestinian and Israeli mistakes along the way—but above all they describe a “new sociological generation called the Stand Tall Generation. Its representatives and leaders, many of them women, display a new assertive voice, abrasive style, and unequivocal substantive clarity. They have unmitigated determination, confidence, and a sense of entitlement the likes of which had only seldom been articulated previously by Palestinians addressing the Israeli mainstream.”

If asked to recommend one book to place in the hands of friends who already know the basic facts of the Israeli-Palestinian conflict, or think they do, Coffins on Our Shoulders is the book I would choose.

Gary M. Burge is professor of New Testament at Wheaton College & Graduate School. He is the author of Whose Land? Whose Promise? What Christians Are Not Being Told About Israel and the Palestinians (Pilgrim).

1. See Ilan Pappé, “The Post-Zionist Discourse in Israel, 1990-2001,” Holy Land Studies, Vol. 1, No. 1 (2002), pp. 9-35.

Copyright © 2007 by the author or Christianity Today/Books & Culture magazine.


Eugene McCarraher

The ambiguous legacy of William Jennings Bryan.

Before they were the party of Howard Dean and Nancy Pelosi, the party of Bill and Hillary Clinton, before they were the party of Franklin Delano Roosevelt and Harry Truman, the Democrats were the party of William Jennings Bryan, whose apparent erasure from the pantheon of Democratic heroes recalls the clumsy removal of Trotsky from Bolshevik photographs. Not that Democrats don’t have excellent reasons to forget the Great Commoner. A three-time loser in the race to the White House, Bryan also failed to turn his oratorical gifts against racism and segregation, and he ended his life with a public and imperishable display of scientific ignorance. But even as they are gloating over their resounding triumph in the 2006 midterm elections, Democrats would be well-advised to remember Bryan.


A Godly Hero: The Life of William Jennings Bryan

Michael Kazin (Author)

Anchor Books

432 pages

$15.11

A good place to start is the famous but little-read “Cross of Gold” speech, Bryan’s address to the Democratic convention in 1896. Unlike today’s gerrymandered speeches—with the lexicon and syntax of demographics inscribed on every neutered paragraph—Bryan’s oration was a lavish political sermon, an un-triangulated exposition of the ways of God to Wall Street. Clad only in “the armor of a righteous cause”—that of “the producing masses of this nation and the world”—Bryan preached against that leviathan his descendants are too timid and feckless to name: “the encroachments of organized wealth,” the unelected government of money and property, the devotees of Mammon who consider democracy a franchise of corporate capital. This isn’t the twaddle of “values,” or the high-priced pabulum of consultants, speechwriters, and other peddlers of the latest fashions in euphemism and sophistry. It’s the clarion of populist insurgency, leavened and propelled by the spirit of the prophets, the battle-cry of the meek and lowly who’ve been promised the earth as their estate.

Michael Kazin considers Bryan a prophet whose challenge to the first Gilded Age might inspire resistance to ours, the second. In his timely biography, Kazin holds up Bryan as the prototype for a resurgent populist liberalism and for a “Christian left” inspired to crusade for the peace and justice of the Kingdom. Routinely vilified by intellectuals as the exemplary rube of fundamentalism—”a peasant come home to the barnyard,” as H. L. Mencken described him at the Scopes trial—Bryan becomes, in Kazin’s tale, a knight of democratic nobility, a defender of the faith that commoners are wiser than pedants, clerics, and moneybags.

It’s a righteous cause, and Kazin will surely lift the spirits of liberals with his account of Bryan’s “applied Christianity,” a “radically progressive interpretation of the Gospels” in which the Beatitudes were the measure of modernity. At its best, Bryanism was one of our broadest and most charitable political visions. That latitude and charity stemmed from Bryan’s evangelical faith, and Kazin, though an avowed unbeliever, is too respectful of the abundant historical evidence to leave the American Left in its undogmatic slumber.


Alas, Bryan’s vision had its limits, its degrees of myopia and patches of blindness, and they raise serious questions, which Kazin doesn’t always answer or even raise, about the legacy of American populism. For all its incendiary rhetoric—perhaps even because of it—the populist tradition has never posed a serious or even genuine threat to capitalism. Its whiteness prolonged the tyranny of Jim Crow and infected the cultural nationalism now in play in debates about “immigration reform.” And its evangelical religiosity, even as it provided a language of social reform, sustained the mythology of possessive individualism. Like most tribunes, Bryan was an equestrian in plebeian’s clothing, a minister without portfolio in the government of property and empire. In the flamboyant and combustible art of demagoguery, Bryan was without peer, arguably the most big-hearted practitioner of an often malevolent trade. The most popular of losers, Bryan ended his life a magnificent ruin, and his relevance may lie in the lessons he drew from the magnitude of his failure.

The Great Commoner was born in 1860 in Salem, Illinois, to Silas and Mariah Bryan. The future scourge of plutocracy grew up in a prosperous political household. A stalwart Democrat, Silas was a well-to-do lawyer, judge, and farmer who served in the state senate; Mariah ran the farm, joined temperance groups, and educated Will in the bedrock verities of “the Bible, the McGuffey’s Readers, and a geography text.” Though raised a Baptist, Will defected, at the age of thirteen, to the Cumberland Presbyterians, who had renounced the traditional Calvinist doctrines of election and predestination.

In Salem and in Jacksonville, where he attended a private academy to prepare for college, Will learned the gospel according to Jesus and Jefferson. Mobilized in “the Democracy,” as the party was called, this credo combined a tolerant if not quite ecumenical Christianity with a romance of small proprietorship. Even after the Civil War and its acceleration of industrial capitalism, Americans still cherished the vision of a “producer’s republic,” an evangelical Protestant empire where the omnicompetent male freeholder, commonsensically interpreting “his” Bible, exercised firm but benevolent dominion over family, property, and government. But this Christian herrenvolk democracy was a strictly white-faced affair: as Kazin notes, Democratic campaign literature abounded with images of “popeyed, electric-haired, and slack-jawed black men.” In addition to hastening the demise of Reconstruction, racism held together the motley and fractious Democratic coalition: in the industrial North, with its urban immigrants and “Bourbon” moneybags; the Jim Crow South, garrison of white supremacy after “Redemption” from Reconstruction; the Midwest, home to small-town entrepreneurs and struggling, often insolvent farmers.

After graduating from Illinois College, Bryan studied law in Chicago, where for the first time he saw the human damage wrought in the nexus of graft and industrialism. Though scandalized by the poverty and corruption, Bryan seems to have had little interest in the new metropolitan culture of ethnic and religious diversity. After passing the bar exam, he set up a practice in Lincoln, Nebraska, where he spent much of his time collecting debts that farmers owed to their corn suppliers. (When Bryan later excoriated the cruelty of “the money power,” he spoke from guilty experience.) He also married Mary Baird, a steely and intelligent woman who learned German to read political economy. (Was Das Kapital on the list, I wonder?)

Nebraska Democrats were led by their home-grown Bourbons, who combined fiscal conservatism with an aversion to “moral crusades” such as prohibition and redistribution of wealth. (Their ideological descendants are “fiscally responsible,” pro-choice Democrats—that is, junior-varsity Republicans.) Against the Bourbons stood a younger legion of jacobins, firebrands for farmers, railroad workers, and small businessmen. Having shoveled manure for Bourbon creditors, Bryan, eager to lose his publican stench, aligned himself with the insurgents, and found therein his political salvation. Elected to the House of Representatives in 1890, Bryan soon became a national figure, bearing the banner of righteous resistance to Wall Street and “the plutocracy.”

When Bryan took up the insurgent standard, he ventured into a zone of turbulence that stretched well beyond the farms of Nebraska. On both sides of the Atlantic, an epic battle was underway over the future of capitalist modernity. Against the forces of money and steel were arrayed the battalions of bread and roses: socialists, anarchists, a host of other zealots against the new imperium of capital. Since 1989, we’ve all been encouraged to consign these radicals to the dustbin of history, vilify them as heralds of the Gulag, and forget their hopes that something saner and lovelier than avarice could rule the world.

In the United States, these hopes took shape in programs for what was popularly called “the cooperative commonwealth.” From Henry George’s “single-tax” scheme of land redistribution, to the consumerist collectivism (complete with credit cards) advocated by Edward Bellamy in the utopian bestseller Looking Backward, to the “workingmen’s democracy” envisioned by the Knights of Labor, Americans confronted the corporate reconstruction of capitalism with proposals for bringing the titanic forces of industry and science under some kind of democratic control. Because the machinery of the industrial state—the gold standard, exorbitant railroad shipping rates, miserable factory conditions—was mastered by distant, faceless, and unaccountable captains of finance and production, runaway capitalism threatened not only the material livelihoods of farmers and workers but the integrity of producer democracy itself.

Like its later, urban cousin Progressivism, Populism was another version of the “cooperative commonwealth.” Responding to the demolition of small proprietorship and artisanal skill by corporate consolidation and technology, both Populists and Progressives saw their historical moment as an enormous debate about the nature and destiny of democracy in industrial America. Ranging from Jane Addams’ Hull House to the editorial offices of the New Republic, and reaching its high point of political excitement in Theodore Roosevelt’s presidential run in 1912, Progressivism put down its firmest roots among urban middle-class professionals. Emerging from the Grange and the Farmers’ Alliances, and crystallizing in the People’s Party in 1892, Populism found its greatest support among farmers and small-town entrepreneurs.

Often in debt to suppliers and railroads, these rural yeomen detested the gold standard and its tight-money grip on their lives, and so Populists naturally focused on the coinage of silver. But far from being crackpots obsessed with free silver, Populists advanced a comprehensive and formidable agenda for the extension of popular power: women’s suffrage, workers’ rights to organize and bargain collectively, a graduated income tax, federal insurance for bank deposits, regulation (if not outright nationalization) of railroads and communications. At their most imaginative, Populists encouraged producers’ cooperatives, community-owned banks (the ancestors of today’s credit unions), and a “subtreasury” plan which, unlike the present Federal Reserve system, would have put the nation’s monetary policies under direct Congressional control.

Oddly, Kazin doesn’t do much to locate Bryan in relation to this history—he skips hastily over the Farmers’ Alliances, and provides not even a thumbnail sketch of Populism itself. Still, his unconventional characterization of Bryan as a “radical progressive” both deftly underlines the continuity between Populists and their urban cousins and counters the lingering caricature of Bryanism as a purely rural phenomenon. From his tumultuous campaign of 1896 to the end of his life, Bryan espoused much that was dear to the hearts of both rural and urban reformers, dropping only the subtreasury plan as an impossibly radical scheme. (He even admired the German Social Democrats, an unlikely sympathy for a peasant from the barnyard.)

Still, even if the American political economy had been renovated according to Bryan’s just and humane specifications, it would not have been fundamentally transformed. The problem with Kazin’s anointment of Bryan as “radically progressive” is that it evades the question of what was “radical” about Populism in the first place. While it’s true that the Democrats denatured the Populist agenda by fixating on free silver, Populism was always a half-way covenant between capitalism and socialism—which is to say, an amended version of the old covenant of capital. The spectral presence of the old “producer’s republic” hovered around Populism, obscuring the realities of class in rhetoric about “the producing masses.” Bryan’s Cross of Gold speech was a sterling example. “The man who is employed for wages,” Bryan asserted, “is as much a business man as his employer.” Tell that to the workers at Wal-Mart.

Like many other scholarly partisans of lowercase populism—Christopher Lasch springs to mind—Kazin desires an alternative to what he considers a moribund socialist tradition. But populist rhetorical shorthand about “plutocracy,” “Wall Street,” or “working Americans” has long been a surrogate for serious thought about capitalism as a system. The substitution of palaver about “the people” for political analysis goes a long way in explaining why populists have been such easy marks for currency panaceas (like free silver) that preserve the power of finance capital; for “tax the rich” schemes that leave the architecture of accumulation relatively undisturbed; for pabulum about “fairness” that ignores the structural imperatives of capitalism and postpones indefinitely all reflection about the nature of wealth itself.

Kazin rightly contends that Bryan and other Populists proved more prescient than most of their Progressive and socialist contemporaries about the dangers of centralized power and the alienation spawned by large-scale production. But he neglects to consider that those same contemporaries also realized, more clearly than the Populists, that the corporate transformation of the economy was raising unprecedented questions about the nature of property, the politics of the workplace, and the meaning of labor. Rather than look to the Social Democrats, Bryan could have turned to British guild socialists like G. D. H. Cole and J. N. Figgis—the latter a theorist of Christian socialism—who offered answers to these questions that combined the decentralist and artisanal features of populism with a greater respect for modern technology and cosmopolitanism. Given the indomitable survival of the “producer republic” in our moral economy—where regulations on multi-national firms are rebuffed in the name of “private enterprise”—you don’t have to be a socialist, Christian or otherwise, to think that Americans have not even begun to grapple with, let alone resolve, these issues. For that expensively indefinite postponement, we have Populism to thank, in part.

We also have Populism to thank, in part, for the persistence of racism, unctuously disguised today in battles over “immigration reform” by references to “our historic national character.” Bryan’s indisputable racism plainly embarrasses Kazin, who concedes that his support for Jim Crow was “his one great flaw.” (For her part, Mary thought that poor whites were wading in the shallow end of the gene pool. The “mountain people” she endured while in Dayton, Tennessee, would, she lamented, “marry and intermarry until the stock is very much weakened.”)

True to the whiteness that held the Democrats together, Bryan always endorsed “suffrage qualifications,” and as late as 1924 he was shielding the Ku Klux Klan from denunciation in the party platform—this at a time when the Klan was at its most visible, influential, and ferocious. Though he loathed the race-baiting of southern Democrats, Bryan never posed the slightest challenge to white supremacy, refusing to lend his eloquence and moral authority to an assault on the most wicked and glaring injustice of his time. His acquiescence in the antics of James Vardaman, Ben Tillman, and Josephus Daniels enabled that grotesque marriage of Jim Crow and labor liberalism whose final tragic offspring was George C. Wallace. And when that unholy union ended with the civil rights legislation of the 1960s, the messy and fateful divorce delivered the South into the hands of the Republican Party.

Kazin’s attempts at damage control only further tarnish Bryan’s reputation. Hinting that Bryan felt “a certain discomfort with white supremacy,” Kazin cites a “Poem on Colored Man” among Bryan’s papers, glossing it by arguing that its counsel of Christian forbearance is “not as patronizing as it sounds.” Maybe not, but it’s hard to imagine Bryan giving the same advice to beleaguered white farmers. There’s also the example of W. Thomas Soders, an attorney whose acerbic letter to Bryan, cited at length by Kazin, was a minor masterpiece of prophetic censure. “You call yourself a Christian,” Soders scolded. “Pray tell me what kind of Christianity is this you profess?” Bryan’s chilly and ludicrous reply—your letter, he told Soders, implied “what the colored race would do if they had the power”—revealed the guilt and hysterical fear that made whiteness so heavy and vicious a burden on everyone.

However sardonic and accusatory, Soders’ appeal to Bryan’s Christian faith pointed to the evangelical heart of his politics. Hard as it might be to imagine a time when evangelicals could stand against business, the biblical texture of American culture once ensured that Scripture provided the standards by which Americans condemned the new pharaohs of capital. Well-versed in religious history, Kazin reminds us that evangelicalism is not necessarily the religion of which unbridled capitalism is the economy. Still, it’s no secret that evangelicals have, on the whole, taken a hard turn to the right, and until Kazin, only a handful of writers had even noted Bryan’s once-happy alignment of evangelical religion and progressive politics.

One could argue that Bryan’s finest hours as an evangelical politician were not on the campaign trail but in the White House. His tenure as Secretary of State for President Woodrow Wilson (1913-15), together with his opposition to imperialism, form a case study in the evangelical peace witness. Of course, like his domestic agenda, Bryan’s foreign policy was marred by racism. Kazin strongly implies that Bryan’s disapproval of U.S. expansion after the Spanish-American War had as much to do with fear of racial contamination as it did with any Christian aversion to empire-building. Later, Bryan matched Wilson’s odious condescension to “our little brown brothers” in Mexico with his own doubt that Haitians, “a largely unchurched black nation,” would “do much to save themselves.” (Apparently, Bryan had never heard of Toussaint Louverture.)

But Bryan also embraced Emilio Aguinaldo, the Filipino ex-guerrilla leader, and befriended Leo Tolstoy, whose Christian but unchurched anarcho-pacifism would seem light-years in sensibility from populist evangelicalism. Bryan and Tolstoy shared a powerful revulsion at the demonic power of violence, even when employed in allegedly “just” causes. (Incidentally, Bryan opposed capital punishment, arguing—like Tolstoy—not on the utilitarian ground of deterrence but on the eminently theological ground that even murderers shared in the imago Dei.)

In the spirit of ploughshares, Bryan proposed that the United States negotiate a series of bilateral treaties stipulating that each side submit quarrels to an outside investigative tribunal, and postpone armed conflict for at least a year. Today, alas, Bryan would face the umbrage of “Christian realists,” scorning his meekness as a symptom of impotence and spiritual rot, or of nativists like Patrick Buchanan, opposing the “surrender” of national sovereignty.

His most courageous act as secretary was his resignation in 1915. Refusing any longer to tolerate Wilson’s wrong-headed and duplicitous resolve to enter the war in Europe, Bryan set a standard of integrity that many of us have seen both honored and trampled in our lifetimes. When Jimmy Carter approved a foolish and disastrous rescue mission during the Iran hostage crisis in 1980, Cyrus Vance quit rather than front for a policy he disavowed. If only others had emulated Vance and Bryan. I can’t imagine Bryan, if confronted with the deaths of half a million Iraqis thanks to his government’s sanctions, making Madeleine Albright’s ghoulish reply: “We think it’s a price worth paying.” (That’s always an easy call to make when the currency in question is other people’s lives.) It’s hard to see Bryan dissembling as lissomely as Condoleezza Rice about torture and illegal imprisonment. And Bryan might now be praying for Colin Powell, who disgraced himself and his office shilling for an invasion that he now admits he suspected was based on flimsy evidence.

Bryan’s peace-mongering, as well as his social gospel, depended on the strength of Protestant cultural authority, whose demise largely accounts for the self-pitying and punitive sense of dispossession that infects today’s Christian Right. But by the same token, Bryan’s career discredits all facile equations of evangelical social thought with unfettered capitalism. It’s easy to show that evangelicals have been insensitive to the structural character of evil, defining social injustice as the sum of individual failings, and social reform as the unadjusted tally of personal regenerations. But Bryan’s inclination to activist government, together with the electoral support he did elicit from evangelical voters, demonstrates that the evangelical imagination was not completely hostage to the producer’s republic.

Still, Billy Sunday and Russell Conwell (the latter’s surname worthy of Dickens) were at least as popular as Bryan. Sunday famously dubbed Jesus “a real scrapper,” while Conwell promised “acres of diamonds” to the plucky and ingenious faithful. Today, millions of evangelicals long to be impaled on Bryan’s cross of gold, as eager to “name it and claim it” as were Conwell’s beguiled fans. Proud as they might be of Bryan’s defense of biblical inerrancy, many if not most evangelicals would find his politics insupportable. Bryan would be a prophet with honor, but a politician without a base.

Because Kazin is uninterested in theology, academic or popular, he doesn’t appreciate that Bryan and Conwell represented a larger argument about the political trajectory of evangelicalism. He attributes the rightward movement of evangelicalism to the usual suspects—fear of cultural modernity, liberal distrust of public piety, material prosperity—but never entertains the possibility that evangelical political theology might also be a culprit. As Mark Noll’s recent work has emphasized, evangelical faith partook of the individualist, “common-sense” ideology of antebellum America—and thus, gave credence to the proprietary conception of democracy that both inspired and lamed the populist tradition.

If that’s true, Bryan’s beloved failure as a presidential candidate raises two historical and theological questions: Was Bryan’s social gospel an aberrant episode in the history of the evangelical moral economy? And is evangelical anthropology so bound up with possessive individualism that it precludes a coherent and enduring social gospel? I cheerfully pose these questions as a provocation to evangelicals, and especially to proponents of an “evangelical Left,” who most need to answer them.

Bryan’s hope for a Christian commonwealth prompted the final failure of his career: his foolhardy decision to testify as an “expert witness” at the Scopes Trial. Worldly wisdom would have advised him to steer clear of the ambush in Dayton. Clarence Darrow ate Bryan for lunch, of course, and Kazin makes none of the usual excuses on account of age or misguided valor. Bryan knew, or should have known, what he was getting into. (Kazin muses wittily that “it was the seventh day of the trial, and Bryan should have rested.”)

But if one wanted to cite an example of how God extracts pearls from the dung of defeat, one couldn’t do much better than Bryan’s undelivered speech to the jury. (Barred on account of the judge’s ruling, it appeared posthumously as an appendix to Bryan’s memoirs.) Kazin cites it only briefly, but it’s a magnificent testament to the moral and political import of Christian love, and Bryan might never have composed it if he hadn’t gone to Dayton. Read past Bryan’s scientific ineptitude, and it becomes apparent that his opposition to evolution stemmed partly from the same anti-elitism that kindled his politics—directed, in this case, at the aristocracy of learning in the universities. Pointing to “distinguished educators and scientists,” an “irresponsible oligarchy of self-styled ‘intellectuals,'” Bryan warned that their lack of a “heavenly vision” ensured their deference to the strong, and especially to the state. If the undelivered speech was a bit too heavy on grand-standing—”self-styled” is always a revelation of status anxiety—it nevertheless offered a critique of the military-industrial-educational complex a generation before the New Left.

Bryan was no theologian, but he clearly sensed that beatitude was indeed the crux of the matter. The battle in Dayton, he claimed, was “a renewal of the issue in Pilate’s court”: the struggle for jurisdiction in human affairs between “the law of force” and “the law of love.” Figured, for Bryan, in Roman imperial might and Darwinian natural selection, the law of force was indiscriminate and unforgiving, and its verdicts always ratified the dumb and evanescent grandeur of power. But the “meek and lowly Nazarene”—the “Apostle of Love,” perhaps the greatest loser of all—presided over a very different court, whose judgments, inerrant even at their most severe, were always delivered with the majestic quality of mercy. Here is no “childish theology,” by which Mencken thought Bryan was “deluded.” It’s a genuine Christian realism to fathom that fidelity and witness, not victory and dominion, should be our bedrock political commitments.

Bryan measured the veracity of the gospel not by its “contribution” to wealth or hegemony but by its faithfulness to the Beatitudes; his apostolate of love—lethal and improvident foolishness by the standards of Caesar—is the realpolitik of charity, if you will. Our most lyrical, generous, and pacific demagogue, Bryan learned the power in weakness. I seriously doubt that Democrats will find it marketable, but like it or not, it’s a truth that Americans—Democrats and Republicans too, anarchists and Greens—urgently need to hear.

Eugene McCarraher teaches humanities at Villanova University. He is writing The Enchantments of Mammon: Corporate Capitalism and the American Moral Imagination.

Copyright © 2007 by the author or Christianity Today/Books & Culture magazine.


Sarah Hinlicky Wilson

The theology of a comic strip.


First you have to heft it. The Complete Calvin and Hobbes feels like a critical edition. It’s the work of ten years: 3,160 strips in all, carried at the strip’s peak by 2,400 newspapers, first collected in 17 books with 30 million copies already in print, and now assembled in a 22-and-a-half-pound, three-volume set running to 1,440 pages. Every strip—from the beginning in November 1985 to the last day of 1995—plus every cover from the individual collections, as well as the bonus material in the treasury collections, finds its place here. The CC&H has a few things the previous publications lack, such as colored Sunday panels from Attack of the Killer Monster Snow Goons, a new essay by Watterson with some kinder words about Universal Press Syndicate (with whom he battled for years over licensing rights), and early comic incarnations of Calvin with his hair in his eyes like the eventual bully Moe. If it is still a trifle less than Compleat—it lacks the commentary of The Calvin and Hobbes Tenth Anniversary Book and the black-and-white originals of Sunday pages in the gallery edition of Calvin and Hobbes Sunday Pages 1985-1995—it is an impressive testimony to the cultural significance of the strip all the same.

The Complete Calvin and Hobbes

Bill Watterson (Author), Bill Watterson (Illustrator)

Andrews McMeel Publishing

1440 pages

$124.99

Calvin and Hobbes thus bookended stands as an oeuvre, a body of work, and inevitably invites scholarship. Calvin himself set the stage for it with his infamous report on “Bats: The Big Bug Scourge of the Skies,” and his academically adept book report, “The Dynamics of Interbeing and Monological Imperatives in Dick and Jane: A Study in Psychic Transrelational Gender Modes.” And so, at this inauguration of a new wave of Calvinism, and in honor of the icon of total depravity himself, a few predictions about the future of the field are in order.

There will be, of course, compilations of mere trivia. For instance, one might list the six R-rated movies Calvin tries to see despite Mom and Rosalyn’s proscriptions: Venusian Vampire Vixens, Attack of the Coed Cannibals, Vampire Sorority Babes, Killer Prom Queen, Cannibal Stewardess Vixens Unchained, and Sorority Row Horror. There are two exceptions to Chocolate Frosted Sugar Bombs for breakfast—once, early on, there’s plain old Crunchy Sugar Bombs, and then, much later, there’s a go at the parents’ Pulp-N-Stuff. (On another occasion, Calvin drops an Alka-Seltzer into Raisin Bran.) Mabel Syrup is Calvin’s favorite writer, author not only of Hamster Huey and the Gooey Kablooie—requiring squeaky voices, gooshy sound effects, and the happy hamster hop from a much-afflicted Dad—but also of Commander Coriander Salamander and ‘er Singlehander Bellylander. Three heroes grace the pages of Calvin’s comic books—captains Maim, Napalm, and Steroid—and only two monsters under the bed get a name: Maurice and Winslow.

Historians of a higher order will want to trace the genetic influences and the transmutations thereof in Calvin and Hobbes. Minimalistic precursor Peanuts makes its presence felt, for instance, in subtle ways. Anxiety in Calvin or his parents is indicated by parentheses around the eyes, characteristic of Charlie Brown and friends. Then there is the Schulzian emotional reality of Calvin’s life, which evinces the torture of childhood much more than the illicit decals and T-shirts care to notice. (“People who get nostalgic about childhood,” comments a scuffed-up Calvin just shoved by lower-case-lettered Moe, “were obviously never children.”) Calvin doesn’t play psychiatrist, but he does hawk his wares at a great variety of Lucy-like booths during his ten-year career, selling Great Ideas, a Swift Kick in the Butt, Scientific Names, a Suicide Drink, Candid Opinions, and a Frank Appraisal of Your Looks. By way of contrast, while the Peanuts kids are lonely in a crowd and devoid of any adult presence, Calvin lives with his best and faithful friend, but at the mercy of grown-ups. Charles Schulz said there were no adults in Peanuts because they simply wouldn’t fit in the strip, but in Watterson’s world, Calvin shrinks to make room for the grown-ups, to whom he is only knee-high. Even Hobbes towers above him, friend though he is.

A rival school might focus on the Krazy Kat influence. Its members would tout the little-known introductory essay Watterson wrote for the first volume of The Komplete Kolor Krazy Kat. They would observe how the moon in early Calvin and Hobbes strips has the distinct “melon wedge” shape the cartoonist mentions in said essay, and compare the common feline traits of Krazy and Hobbes. Striking, too, is how Watterson follows Herriman’s use of scenery as a character in its own right. In the enormous, overstuffed Krazy Kat panels, the backdrop keeps changing even when the figures stay put. Calvin, for his part, moves so fast in his little wagon that the backdrop constantly changes just to keep up with him. The aspiring national forest of white birch and deep green woods giddily chases the mostly indifferent protagonist.

Influence is, of course, the most elusive chalice in the historical quest; one may as well draw attention to the characters' Muppet mouths and Looney Tunes dives over the edge of cliffs. Other Calvinists will investigate the artistic development of the strip on its own terms. Already at the 12th daily strip, they will observe, Calvin and Hobbes race along in the wagon to the soundtrack of philosophical speculation about fate, fatefully headed for a splash in the lake. In the first month, a strip addresses the pernicious pull of TV, that opiate of the masses which even Karl Marx couldn't foresee, a frequent theme of the whole decade's output. The third Sunday page makes use of the entire "throwaway" panel across the top of the strip—designed to be dispensable for newspaper editors who wanted to save space—to showcase a long if rather cartoony alien landscape. The fourth Sunday page does the same. Already, this early, the possibilities of the redesigned, post-sabbatical Sunday strips are foreshadowed. There's no denying that the new Sunday page format begat a rambunctious dynamism: up to 20 panels in a single strip, innovative layouts, and sometimes the restraint of a Japanese woodcut. The gain, however, was balanced by a loss: with the new Sunday pages, all the energy went out of the dailies, which featured fewer serial stories and weaker humor.

Theorists and critics of art will seize the opportunity here to insert themselves into the dialogue. Calvin is not only the object of paradigmatic struggles between big evil syndicate and lonely little artist. He is also an artist in his own right, an avant-garde sculptor of suburban postmodern snow sculptures. (He wanted to be a neo-deconstructionist, but Mom wouldn’t let him.) He’s torn between marketable traditional snowmen and more meaningful works that insult or disturb the viewer: Bourgeois Buffoon; The Torment of Existence Weighed Against the Horror of Nonbeing; and the entire blank landscape, post-commentary and post-symbolism because art is dead, signed with his name at the bargain price of a million dollars. Hobbes demurs. It doesn’t match his furniture.

The psychologists, for their part, will have little use for such esoteric tomfoolery. They are intrigued by the family system. They observe that, but for the obvious loss of charm, the strip could just as well be called Calvin and Mom and Dad. The parents, although never named beyond their generic labels, appear as often, and with as much impact on Calvin's life, as Hobbes himself. And whereas the tiger gets only one solo strip, Calvin-free, Mom and Dad manage a fair few more across the years. The struggle entangling the family is inevitable and universal, between the values of the child and the values of the adult. Calvin thoughtlessly defends the former—thoughtlessness itself is reserved to children, not yet condemned to relentless self-awareness—while Dad, a patent lawyer, and Mom, a homemaker of abundant hobbies, work to instill the latter in their recalcitrant offspring. They never succeed, but that's not to say Calvin doesn't absorb adult values at all. He does, but they're the virtues of the vicious: greed, lust for fame, aversion to work, the manufacture of endless excuses for oneself. By the end of the strip's run, Calvin is less and less the exuberant child and more and more the mouthpiece for the ideal American—i.e., someone who could stand to build some character.

But enough of psychology, the philosophers will interrupt. The metaphysicians among them will devote themselves to the problem of Hobbes' reality: he is neither a somber-faced stuffed animal who magically springs to life nor a mere figment of Calvin's imagination—he is altogether too uncooperative for that. If anything, he most resembles Mom's green dinner glop that battles Calvin and sometimes serenades him. Hobbes, though, is neither Calvin's better half nor his psychopomp. He is every bit the rascal that Calvin is, though considerably more disguised in his misdeeds, especially in a rousing game of Calvinball or at the sight of Susie Derkins. If Calvin is the unrepentant sinner, the ethicists will observe, then Hobbes is the Pharisee, smug in his virtuous living and immensely proud of not being human.

And that is but the beginning of the moral problems posed by the strip. Is it right to charge ten billion dollars for a dinosaur skeleton constructed from backyard trash? Is it wrong to steal a truck from a bully if he stole it from you first? How about the quandaries raised by recent advances in corrugated cardboard technology? One must consider what benefits could be gained for science by transmogrifying oneself into a 500-story tall gastropod, a slug the size of the Chrysler building. One must decide whether the duplicator is better used as a counterfeit money machine or as a clone generator to supply oneself with a baseball team. What if the duplicator has an ethicator tacked on—and what if one’s good side is prone to badness?

Taking that cue, the theologians will indulge in exegesis of a more elevated nature. Calvin, avowedly named for his predestinarian predecessor, is a bit mystified about the whole Santa Claus thing. “Why all the secrecy? Why all the mystery?” he wonders. “If the guy exists, why doesn’t he ever show himself and prove it? And if he doesn’t exist, what’s the meaning of all this?” Hobbes, scratching his head, remembers that Christmas is a religious holiday, but Calvin counters that he has the same questions about God. Two strips later, though, he’s made a Pascalian wager: it’s worth it to him to believe if the end result is tons of loot.

As the years roll on, Calvin realizes that the loot is also contingent upon his good behavior—or so the popular carols and disinformation of adults would like him to believe. The lure of a slushball square in Susie’s face confronts his limitless greed in an epic struggle. Calvin therefore tries to rationalize. It should count for more if a bad kid tries to be good than if a naturally good kid is good. Ten spontaneous, if reluctant, acts of good will a day should compensate for a year’s sordid sin. Relinquishing retaliation rights on Susie should produce presents by the truckload (never mind that Calvin provoked her in the first place and lied to Mom about it). Still, every Christmas without fail, Calvin is acquitted of his crimes and showered with gifts, even when he learns the wrong lesson from it. A parable of God’s love for the sinner and justification by faith, not works, the theologians infer—good Calvinism, indeed.

Sarah Hinlicky Wilson is the pastor at St. John Lutheran Church in Trenton, New Jersey, and a doctoral candidate in systematic theology at Princeton Theological Seminary.

Copyright © 2007 by the author or Christianity Today/Books & Culture magazine.

    • More from Sarah Hinlicky Wilson

Andrew Jones

Since the days of Columba, Patrick, and Augustine of Canterbury, the British Isles have been home to more than their share of missionaries. So it may be appropriate that as the Christian Vision Project turns its attention to Christianity’s global scope and mission, we begin with an essay by Andrew Jones, a globe-hopping consultant on church planting who lives in the Orkney Islands off Scotland’s northern coast. Jones, best known as the writer of the weblog tallskinnykiwi.typepad.com, is an irrepressible New Zealander who chronicles the wide and sometimes wild world of innovative efforts to proclaim the gospel in the midst of “emerging global culture.” He is the first respondent to our “big question” for 2007, posed as Western Christians adjust to their minority status in global Christianity, and as technologies of travel and communication make cross-cultural encounters ever more accessible to the majority world and minority world alike: What must we learn, and unlearn, to be agents of God’s mission in the world?

Pilgrim, Pilgrim, where have you been?
I’ve been to London to visit the emerging church scene.
Pilgrim, Pilgrim, what did you there?
I found a little queen sitting on her chair.

What did we go out to see? The same thing we always see. The same thing, but in a different place. We seek out sameness. We go to a foreign city to eat noodles, and end up with a hamburger and fries. We know that global church growth is largely happening in the margins, among ordinary people, without big budgets or impressive credentials. But when we go out to worship with the “indigenous” church in Colombia or Malaysia or Italy, we end up sitting on a pew singing expat choruses with a national pastor who has colonized himself for our approval. To be discovered. To be seen by people who do not have eyes to see.

We search for the ubiquitous but discover the obvious. We hunt the exotic but are haunted by the echo of our expectations. We seek judges but we see kings. Or in the allegory of my inverted nursery rhyme, we go out to watch mice but get wooed by a monarch.

By focusing our attention on Western look-alikes rather than the God-breathed expressions of ekklesia, we miss the joy of participating with the global church. We also miss the blessing these networks and ministries can offer us. But even more tragic is the reinforcement of our Western stereotypes as superior models, each one another mega-brick in the colonial tower of Western Christian supremacy. Any attempts at finding a third space, where their world and ours could meet, are thwarted by our search for what appears successful in our own eyes.

We need to learn to see the unexpected and unlearn our compulsion to see the expectable.

“What did you go out to see?” Jesus asked the crowds, in reference to a popular desert pilgrimage to John the Baptist. They expected a monarch, but God sent a monk. Outmoded expressions of prophetic ministry, warped by the greed of the Sadducees and the short-sightedness of the Pharisees, had to be unlearned.

Jesus’ disciples had to be taught how to see. The disciples saw the clean robe of Jairus; Jesus saw the stained garment of a bleeding woman. The disciples saw a prostitute groveling at Jesus’ feet; Jesus saw a servant preparing his body for burial. The disciples saw a threatening alien force teaching in Jesus’ name; Jesus saw more partners for the harvest. Jesus saw a woman giving two coins, illustrating the mysterious generosity of Kingdom economics; the disciples would not have seen anything at all if Jesus had not pointed her out.

Paul had to teach the Corinthians how to see. They saw a church composed of small élite circles, each well-defined group following their own celebrity, whether Apollos, Peter, or Paul. But there was only one church, Paul told them. God’s servants were watering it, but God was causing the growth. Being agents of God’s mission starts with seeing what God is bringing to life, in order that we may water it. But before we water it, we have to find it.

The tiny: At a recent meeting in Johannesburg, Bindu Choudhrie explained how she and her husband Victor, a medical doctor, started several thousand churches in their region of India over the last decade. But if you went out to see something spectacular, you might miss it completely. The leaders are workers, housewives, students, and, in some cases, children. There is no large Easter or Christmas celebration to photograph—they don’t celebrate those festivals. There are no weekly services to attend—they meet daily in homes over meals.

In his book Greet the Ekklesia, Victor describes it as a secret fellowship. “We do not go to church, as we are the Ekklesia, wherever we happen to meet, in a house or anywhere else. The house ekklesia is not a series of meetings in someone’s house on a particular day, at a certain time, led by a particular leader. It is a household of God consisting of twenty-four-hours-a-day and seven-days-a-week relationships.”

Tiny like a mustard seed. To see the tiny, we will need to unlearn the value system that has guided our vision. Thomas Friedman has called it the Cold War Mindset, a way of seeing that places undue value on size, weight, and longevity. That not only sums up the inhumane system of the grinding mechanical-industrial world: it pretty much describes how we used to introduce Christian conference speakers.

What did we go out to see? The influential missionary Roland Allen was once asked by his board to report some spectacular stories from the field. His response was unexpected: “I do not trust spectacular things. Give me the seed growing secretly every time.”

The virtual: Much of our life has been relocated to the Web. But when we try to see church online with old eyes, we miss it. If older folk find themselves squinting awkwardly into the mysterious world of new media, our screen-age children have less of a vision problem. “Generation Text” are at home with the computer screen, which is replacing the movie screen as the primary visual medium. Cinematography taught us to see a sequential world where the future was always replacing the present and displacing the past. A disconnected world of cuts and invisible edits. We looked for the new as the old dissolved in a cross fade. Interestingly, the worship services of Western churches reflected the same mindset.

The computer screen shifts this way of seeing toward complexity and modularity, placing the power of navigation in the user’s hands, teaching the eye to look for different things and in different ways.

We see that new media images are actually composites of nested layers. We can send layers to the back or bring them to the front. We can fade them with transparency or composite them with other layers—but we don’t have to delete them.

We value continuity over cut. We expect navigation. Our eyes look for hyperlinks, places where we have been, places to go next. We seek the romance of virtual pilgrimage through what Lev Manovich has described as “navigable space.”

We see the visible representation as less permanent than the invisible code that informs it—say, a sequence of numbers in a database that may or may not be accurately represented, depending on the operations performed on that data or the quality of the screen itself. Young people find it easy to see church this way also. Invisible and yet experiential, mystical yet tangible, global and yet aggregated locally and uniquely each time.

What do you go out to see? A cyberchurch with regular service times? You will probably find it if you look hard enough, but you might miss Church 2.0, that strange collection of new church forms native to the Web. Pastor and blogger Tim Bednar describes it this way: “I participate with bloggers who collectively link the cyberchurch into existence.” Whatever ekklesia will look like on the web, we first need to learn how to see it.

The indigenous: What do we go out to the desert to see? Do we see cheap fireworks, casinos, and tacky souvenirs? Or a special people called out by God for global missions in this new millennium? That's what my friend Richard Twiss sees. Richard is a member of the Rosebud Lakota/Sioux Tribe and President of Wiconi International. "No other people group is so uniquely positioned for global missions as First Nations people are today," says Richard, whose mission sends out teams of "Native men and women who follow the Jesus Way and are skilled traditional drummers, singers, and dancers, to communicate the love of the Father with audiences worldwide." In the past three years teams from Richard's mission have seen thousands come to know the Creator in outdoor events and house meetings in Pakistan. It seems God is raising up a post-colonial mission force out of the margins of our own culture, out of a people who have felt the sting of colonialism themselves.

The Kingdom of God is at once tiny and massive, both of which are hard to see. Kingdom potentials are tiny like mustard seeds, buried like treasure, sunken like fishing nets, inconspicuous like yeast working its way through the whole lump of dough. And yet the final product is blatantly visible. The mustard seed grows into a tree so large that the “birds of the sky” see it from a distance and nest in its branches. Caged fowl do not see it. They only see a coop and a fence. They need to leave the familiar, to seek out a bird’s-eye view of what God is doing—to be flung out into all the world.

Andrew Jones leads the Boaz Project, which supports church planting movements in the global emerging culture.

Copyright © 2007 by the author or Christianity Today/Books & Culture magazine.

    • More from Andrew Jones

John Wilson

In my very first semester of college—in 1966, at Chico State College, since elevated to California State University, Chico—I had two extraordinary professors. One was a professor of philosophy, Marvin Easterling, who later was killed in an accident while riding his bicycle. The other was a professor of English, Lennis Dunlap.

“Mr. Dunlap,” he was called, because he stopped after the master’s degree. He once told me that the prospect of doctoral work was simply too tedious to contemplate. He was from the South—Tennessee, I think—and he had studied, among other places, at the Sorbonne. He was then in his early forties, handsome in a rather Mephistophelean way, with a sonorous voice and the posture of an equestrian. Unlike most members of the English Department, he dressed with impeccable style—he tried in vain to instruct me in such matters—and was said to have an independent income. Along with Hugh Kenner, he was the most intelligent man I have known.

His favorite period of literature was Restoration drama, especially the plays of Wycherley and Congreve: witty, sophisticated, unencumbered by illusions—a category that included the evangelical Christianity in which I had been raised, and from which I had for the moment detached myself. Amoral? No, but the code of a natural (not hereditary) aristocracy, embodied by the superior couples of the Restoration stage.

Which brings us to Beaumarchais. If you are even a casual opera-goer, chances are you have taken in a performance of Mozart's The Marriage of Figaro or Rossini's The Barber of Seville, both of them based on plays by Pierre Augustin Caron de Beaumarchais (1732-1799). Hardly known today except among certain scholars of French culture, Beaumarchais nevertheless created some of the best-known characters in world literature, above all the barber Figaro, who like Homer's Odysseus is never at a loss no matter how daunting the circumstances.

The same could be said of Figaro's creator, as the epigraph to Hugh Thomas' Beaumarchais in Seville suggests. "My inexhaustible good humour never left me for a moment," Beaumarchais wrote to his father on January 28, 1765, in the course of describing his adventures in Madrid. Fleshed out in Thomas' splendidly entertaining narrative, this is a morality of sorts, an attitude toward life.

Thomas, a distinguished historian, has written a book in which great learning is worn lightly. It’s short—the main text isn’t much more than 150 smallish pages, the lines generously spaced—and its very title, Beaumarchais in Seville, is a joke, though one with a point: Beaumarchais never was in Seville, but his visit to Spain in 1764-65, most of that time spent in Madrid, allowed him to create the imaginary Seville that still—250 years later—brings tourists to the real Seville.

Very well, you say, but why should I care? It sounds like a coterie book. Not at all. It is good to inhabit for a little while a time and place distant from our present, and Thomas is an excellent guide. His first chapter is called “A Golden Age,” golden in part because in 1764 the world was more or less at peace, but for other reasons too: “At that time the Industrial Revolution had not begun, even in England, though a few iron wheels already defaced her countryside. But most towns remained beautiful: even their suburbs.” This sets the tone for the book; we must keep in mind Thomas’ subtitle, “An Intermezzo,” not to mention his playful list of dramatis personae, consisting of 58 people, or an average of more than three per page.

So we are taken back to a moment when every person of note must have that marvel of technology, a watch. "The new age," Thomas writes, "was indicated by King Louis XIV standing in Versailles with a stopwatch in his hand. A minister arrived on the stroke of ten in the morning. The king said, 'Ah, monsieur, you almost made me wait.'" Beaumarchais' father, André-Charles Caron, was among the leading watchmakers in Paris, hence a reasonably affluent paterfamilias, proud father of six surviving children: five daughters and a son.

Doted on by his sisters, the young Beaumarchais flourished. He was protean, inventing (when he was 21) a device that significantly improved the accuracy of timepieces (and winning a battle against a well-established clockmaker who had tried to steal the invention and claim it as his own), then going to the royal court, where he soon became the music teacher for Louis XV’s four daughters. “From the moment he arrived in Versailles,” a friend recalled, “all the women were struck by his height, his elegant figure, the regularity of his features, his assured look, his lively mien, the dominating air which he seemed to have and which seemed to elevate everything and everyone surrounding him.”

Indeed. He had married a slightly older widow, who died soon afterward, and parlayed various royal appointments to be able to add “de Beaumarchais” to the end of his name. He wrote little plays and made friends with an exceedingly rich financier who took a shine to him and began giving him money to establish himself.

What brought him to Spain was an appeal from two of his sisters, who had been living for some time in Madrid. One of them had received a promise of marriage from a curious character, José Clavijo y Fajardo, who had started a Spanish magazine modeled on The Spectator and who was later to become well known as a naturalist. Clavijo was not fulfilling his promise, and Beaumarchais agreed to travel to Madrid to straighten things out. At the same time, he was entrusted with a complex and (from this vantage point) insanely ambitious commission by his financier friend and patron, for which purpose he was given a large sum. Finally, Beaumarchais’ father asked him to try and recover money owed to him by a number of grandees in Madrid, who had their watches but had never paid up.

Believe me, I am ruthlessly simplifying, and we haven't even left France yet. You will have to read Thomas' book to get the flavor of his account of life in Madrid and Spain more generally. Imagine a story with more twists and turns than could be accommodated in an opera, no matter how convoluted the plot, a comic tale with dark undercurrents (one part of the multipart commission—which ended with failure across the board—was to try and obtain the license to sell slaves to the Spanish Empire), set against a Spanish background largely unfamiliar to us, though with touches that we know from the Spain of fiction and drama and poetry and music. (There is a wonderful description of the fandango, "this obscene dance.") Added to that—and for some readers the highlight—is Thomas' account of certain experiences in Spain—including plays Beaumarchais saw or may have seen—that could have influenced Beaumarchais' two masterpieces, especially the conception of the characters.

And keep in mind that there are morals to be drawn from this well-told tale. For example, it is good to be handsome, like Louis XV (“the best-looking monarch to be seen on a throne for many years”). It is bad to be ugly, like the pious Charles III of Spain, who moreover cuts an absurd figure because after the death of his domineering wife, when he is still a relatively young man, he neither remarries nor—in marked contrast both to his French counterpart and to Beaumarchais—indulges in amorous adventures.

When Beaumarchais meets in Spain an extraordinarily lovely young woman, Madame de Croix, whose military husband is always off somewhere, he persuades her—overcoming her initial refusal—to try and seduce Charles, evidently hoping that he might thus be able to exercise influence of benefit to France on the king of Spain. But although Charles is clearly mightily attracted to her, he withstands the temptation, for which he is portrayed here as an earnest fool.

The lovely Madame de Croix does not hold this against Beaumarchais, and they are a couple for the remainder of his time in Spain. In the summing up at the end, we learn that “she seems to have had no more intrigues, and gave herself up to religion. The Baron de Gleichen in Paris remembers her retaining her wonderful looks into old age. ‘She exists only for the poor,’ wrote Baron Gleichen, who added that she remained vivacious to the end of her life.” This helps a little to ward off the chill that has crept unannounced into Seville.

Copyright © 2007 by the author or Christianity Today/Books & Culture magazine.

    • More from John Wilson

Allen C. Guelzo

Richard Hofstadter and scholarly fashion.

Fashion is fickle even among historians. At the time of his death from leukemia in the fall of 1970, Richard Hofstadter was Columbia University's DeWitt Clinton Professor of American History, a two-time winner of the Pulitzer Prize (for The Age of Reform in 1956 and Anti-Intellectualism in American Life in 1964), intellectual godfather to Eric McKitrick, Christopher Lasch, Linda Kerber, and Eric Foner, vice-president of the Organization of American Historians, and an oracle among American historians. Today, Hofstadter's reputation is nearly as dead as Marley's doornail. His books remain in print, but they tend to be read as period pieces, or as provocatively entertaining essays, rather than as serious historical analysis. They are the sort of thing one assigns to undergraduates to perk up interest in an American history survey course, or to graduates in a seminar devoted to historical fashions.

Although Hofstadter died at the comparatively young age of 54, he was part of a generation, along with Arthur Schlesinger, Bernard De Voto, Daniel Boorstin, and Perry Miller, which still understood history writing to be a species of the humanities, in which felicity of style and a Continental broadness of interpretive reach were virtues. He had a vague identification with late 19th-century American thought, through The Age of Reform and Social Darwinism in American Thought (his first book, in 1944), but for all practical purposes, there was no specific “era” on which he hung his hat. In truth, Hofstadter was an editorialist of the American experience, and he was profoundly uninterested in either slogging monkishly through archives, or the people who worked in them (whom he described as “archive rats”). Conclusions rather than method, and bon mots rather than footnotes, were his long suit. “If one were to compare the proportion of time given to expression with that given to research,” he once remarked, “my emphasis is on the first.” Even though he filled the most prestigious chair among American university historians, people rarely read Hofstadter because he was an academic, nor did he care that most of his readership was itself not academic.

This was not the direction in which American history writing was headed in 1970. In that year, Michael Zuckerman, Philip Greven, and Kenneth Lockridge published three landmark studies of colonial New England which sent methodological shock-waves through the guild of American academic historians and stamped doom on history-writing in the epicurean style represented by Hofstadter. All three books—Zuckerman's Peaceable Kingdoms, Greven's Four Generations: Population, Land and Family in Colonial Andover, Massachusetts, and Lockridge's A New England Town: The First Three Hundred Years—rested on exhaustive analysis, not just of archives, but of the minutiae of everyday life which had slipped the attention even of the archivists. They wallowed in probate inventories, tax lists, marriage and death records, county clerk records, all of which supported narratives which looked more like anthropology than history. What they offered was a picture of what Peter Laslett, one of the British pioneers of this "new" history, called "a world we have lost"—a pre-industrial world of "human size" in which "the whole of life went forward in the family" and "industry and agriculture lived together in some sort of symmetry." In very short order, the new methods and the misty world of "pre-capitalism" would become an invitation to political romanticization; but in 1965, when Laslett wrote those words, the real point he wanted to make was that the life of the modern industrial world "makes us very different from our ancestors."1 And that difference, to the dismay of Richard Hofstadter, made it infinitely more difficult to write historical editorials based on the lives of people who turned out to be as incommensurate with modern experience as the rocks on Mars.

David Brown’s biography of Hofstadter describes him as an exemplar of “twentieth-century liberalism,” but in fact there was very little about Richard Hofstadter which could be called “liberal.” His father, a furrier, was a skeptical Jew from Buffalo, New York; his mother was an observant Lutheran who had young Richard baptized, but who died when the boy was ten, and as a result, Hofstadter grew up resentful, introverted, and never sure of belonging anywhere. Despite the onset of the Depression, Emil Hofstadter prospered, and was able to put both Richard and his sister through college. But like many of his generation, Hofstadter’s insulation from the ravages of the Depression imparted no sense of security; if anything, it only made him more hostile to the machinery of American commercial society which had kicked so many others in the American middle class down the rungs of the economic ladder. He was captivated by Charles Beard’s The Rise of American Civilization (1927) and its leering revelation that American history was really governed by corporate greed rather than Enlightenment idealism. And when in 1934 he met Felice Swados, a committed Marxist and rookie novelist, Hofstadter’s conversion to the Left became complete. He joined the National Student League (one of a bevy of talented radical Stalinist groups in the 1930s, alongside the Young Communist League, Young People’s Socialist League, and the Spartacus Youth League), married Felice, and enrolled as a law student at Columbia.

Hofstadter quickly became bored with law; instead, he wrote a master’s thesis on the Agricultural Adjustment Act, and set to work on a doctoral program under Merle Curti. At the same time, he and Felice joined the Young Communist League, and then formally attached themselves to the Communist Party in 1938. Felice loved the CP; Richard was less enthusiastic, especially after the news of the Stalin show-trials gradually became public, and even more after rubbing shoulders with the CP leadership, Earl Browder and Max Schachtman. Hofstadter had discovered that he loved history more than he loved the working class—or rather, that he had never loved the working class at all, and had no confidence that a dictatorship of the proletariat would be much easier to live with than fascism. “The Communist Party,” he would later write, “wanted no writers who would not subject themselves to its characteristic rigid discipline,” and that discipline was dominated by a “cult of proletarianism” which Hofstadter loathed.2

Hofstadter won his PhD in 1942, and took a job teaching at the University of Maryland, where he made common radical cause with three other newly hired faculty, Kenneth Stampp, Frank Freidel, and C. Wright Mills. He was still oozing Marxist hostility to the New Deal, but it was tempered by a suspicion of any mass movements, or anything which grabbed for power in the name of political righteousness. Then, in 1944, Felice was diagnosed with cancer (she died in July of 1945), and—armed with the urge to escape from the blandness of life in Maryland—Hofstadter leapt at an offer to fill Curti’s professorship at Columbia, and began teaching there in 1946. He dropped away from the CP, and from almost everything else in his past.

It may be a misnomer to say that Hofstadter taught at Columbia, since he actually found teaching onerous and students boring. Brown's interviews with Hofstadter's one-time students and advisees routinely turned up a portrait of a man who was distant, aloof, and defensive of his own time, even with his own colleagues. His passion was writing. "I'm not a teacher," he explained to Eric Foner, "I'm a writer." The urge to distance himself from people extended to his histories of popular movements. The Age of Reform, which won him the 1956 Pulitzer in History, was a savage questioning of the bona fides of the Progressives, in which the reformist urges of Theodore Roosevelt, William Jennings Bryan, and the other happy warriors of Bull Moose persuasion were defrocked of their benevolent intentions and exposed as incubators of "status-anxiety," a disease which impelled those who felt power slipping from their control to try to regain it by demagoguery. The sardonic pleasure Hofstadter displayed in exposing the rage, the nativism, and the anti-Semitism that seethed beneath the surface of the American Midwestern heartland angered both the Left (who liked to coo, in homoerotic socialist-realist fashion, over the Tom Joads of the plains) and the Right. Hofstadter did not care. He had come to see the place of the intellectual—by which he meant, himself—as an endangered cosmopolitan pocket in a sea of mass rural idiocy.

Feeling the hypocrisy became a cultural habit for Hofstadter, and his writing oozed a kind of schadenfreude about the failures and limitations of American politics (which he characterized as a compound of anti-intellectualism, paranoia, and self-delusion) and American democracy. In The American Political Tradition and the Men Who Made It (1948), Hofstadter offered a series of semi-biographical vignettes (of Jefferson, Jackson, Calhoun, Lincoln, Bryan, Wilson, Hoover, and both Roosevelts) which clawed away the comfortable heroism enwrapping each of them and left the reader wondering if American democracy had any genuineness at all. (His comment on the Emancipation Proclamation became one of the most memorable one-liners in American historiography: “The Emancipation Proclamation of January 1, 1863, had all the moral grandeur of a bill of lading”3). In 1964, he won his second Pulitzer (this time in general nonfiction) for Anti-Intellectualism in American Life, in which he attacked evangelicalism, capitalism, and “our persistent, intense and sometimes touching faith in the efficacy of popular education” for promoting faith-based stupidity and a cavalierly instrumental attitude toward the life of the mind. The rebirth of conservative intellectualism in the 1960s agitated him even more, since he saw in the Goldwater Right nothing but another menacing upsurge of populist fascism.

Yet, contemptuous as he was of American democracy, Hofstadter also saw that the great conundrum for American intellectuals was that they could not live without politics (whether for protection or for access to resources) and could not live with it (because of its inherent corrupting force). This was a dangerous moment for Hofstadter, and for two reasons. First, a critical politics which loses all confidence in how to deal prudentially with power risks a fatal descent into irony and cheap rib-nudging. And the truth was that much of Hofstadter’s coruscating denunciations of progressivism, emancipation, and mass democracy could easily be read as history done with a smirk, as though H.L. Mencken and Sinclair Lewis had collaborated on a new narrative of the American past.

Christopher Lasch, sensing this in 1965, complained of his mentor that Hofstadter had sold his store to a cheap leftist snobbery—or, as Brown puts it, that Hofstadter was himself suffering from "status-anxiety." But the other danger for Hofstadter was posed in the mid-Sixties by a New Left which was utterly and unapologetically intoxicated by the prospect of power. The New Left was fully as impatient as Hofstadter with the dull embourgeoisement of the American classes, and accepted the call of Herbert Marcuse to university students to constitute themselves as the revolutionary vanguard by overthrowing capitalism in its real fortress, the citadel of bourgeois morality. But they turned their attack first on the university, which they saw as the processor of capitalist enculturation and not (as Hofstadter did) as a safe-house for the mind. The campus shut-downs of 1964-65 and the "occupation" of Columbia's classrooms by Students for a Democratic Society in April 1968 infuriated Hofstadter, who regarded sds (despite its lineal descent from the National Student League) as the newest enemy of academic freedom. "I was raised in the 1930s, on a more severe brand of Marxism," Hofstadter sniffed. "What you have, in place of revolutionaries, are clowns like Abbie Hoffman and Jerry Rubin."

The wind was still blowing from the Left in Hofstadter’s mind, but it was very much the wind of the Old Left of the 1930s. When Hofstadter delivered the commencement address at Columbia that spring, offering a dogged defense of the integrity of uncoerced intellectual life, over three hundred Columbia students stood up and walked out.

Brown has not had an enviable task in writing a biography of Richard Hofstadter. Beatrice Hofstadter (his second wife, who died in 1986) kept a protective watch over her husband's papers, and even for this book, Brown was refused permission to quote from Hofstadter's letters and manuscripts. Nor was Hofstadter one of those colorful academic eccentrics who, like the subjects of Noel Annan's The Dons, splayed bizarre but noteworthy behavior behind them like the wake of a ship. He was a man without hobbies, without passions, and without drinking buddies. Brown has compensated for this absence of the remarkable by mining the papers of Hofstadter's colleagues and students, especially Kenneth Stampp, Eric McKitrick, and Alfred Kazin, and teasing important material out of interviews with over thirty others who knew Hofstadter. The result makes an impressive shape of a life which might otherwise have appeared consumed by the innately uninteresting humdrum of academia.

What we miss in this, however, are two things. First, Brown offers us remarkably little political context for understanding Hofstadter, especially at the crucial nexus in the late 1940s which split the New Left from the Old Left. Hofstadter was a Leftist without any hope or faith in revolution, and a democrat who regretted that democracy had to include the booboisie, and that makes for a very fuzzy picture indeed if we have no understanding of where the ideological lines were drawn among New York intellectuals in the post-World War II decades. Not to have a clear sense of what first attracted him to the CP in the 1930s, or to have a thorough definition of his subsequent position in contrast to, say, Wright Mills (who embraced the New Left) or Sidney Hook (who repudiated it), is a serious flaw in this book.

Second, Brown tends to see the resurgence of the Right as an intellectual movement largely through Hofstadter’s eyes, as alarming in volume but philosophically insignificant by unit. This underestimation of the hitting power of Right intellectuals has been one of the chronic failures of the American Left; and as Hofstadter’s own attitude demonstrates, there is no real cure for this failure, since the logic of Left politics actually requires that intellectuals on the Right be defined, ipso facto, as an impossibility. Brown remarks pretty sharply that whether it was “out of fear, anger or fantasy, the Far Right inspired Hofstadter to write some of the most original studies of American political culture ever produced.” But “the Left never provoked such a productive reaction.” Hofstadter preferred “to instruct radicals, not—as he had conservatives—to diagnose their mental tics.”

So, despite the fact that Hofstadter lived his entire life "in an era dominated by liberal politics," he insisted on describing himself as "politically alienated." And from what, exactly? Born to the modest privileges of the urban upper-middle class, he treated peace, plenty, and truth as the normal setting of human life, and intolerance, hypocrisy, and inequality as intolerable aberrations, when the norm of human history has been exactly the other way around. While making a university-subsidized apartment on the Upper East Side his home and a place on Cape Cod his summer retreat, and bathing in book contracts worth $1.3 million at the time of his death, Hofstadter nonetheless had never a good word to say about the nation, the politics, or the economic system which guaranteed his entitlements to these things. And despite the Andes of corpses which "a more severe brand of Marxism" piled up around the world in the 20th century, it was not the abominations of Stalin but the infelicities of Abraham Lincoln's prose which summoned forth his most vivid malediction. The vital power of Richard Hofstadter's oeuvre lay in the grace and color of his writing. But it was an almost entirely negative power, in the service of a freedom he wanted for himself, but not necessarily for anyone else.

Allen C. Guelzo is Henry R. Luce Professor of the Civil War Era and director of the Civil War Era Studies program at Gettysburg College. He is at work on a book about the Lincoln-Douglas debates of 1858.

1. Peter Laslett, The World We Have Lost: England Before the Industrial Age, 2nd ed. (Scribners, 1971), pp. 17, 22.

2. Richard Hofstadter, Anti-Intellectualism in American Life (Knopf, 1963), pp. 291-292.

3. Richard Hofstadter, The American Political Tradition and the Men Who Made It (Knopf, 1948), p. 131.

Copyright © 2007 by the author or Christianity Today/Books & Culture magazine.

    • More from Allen C. Guelzo

Sean Everton

A look at the 2004 presidential campaign.

In the fall of 2000 at the Houston meetings of the Society for the Scientific Study of Religion (SSSR), I sat in on a paper presentation by Kraig Beyerlein about a study that he and Mark Chaves had conducted regarding the political activities of religious congregations in the United States. While the results of their study changed somewhat by the time it appeared in the Journal for the Scientific Study of Religion (JSSR),1 the story they told in Houston remained essentially the same: namely, that religious traditions tend to specialize when it comes to political activism. Conservative Protestants tend to do one thing, mainline Protestants another, and Roman Catholics still another. One result, more than any of the others, caught my eye: Black congregations are 7 times more likely than mainline Protestant churches, 24 times more likely than conservative Protestant churches, and 42 times more likely than Roman Catholic churches to invite a political candidate to come and speak. Since we were in the midst of a presidential campaign, I could not help wondering who was speaking at more churches: Al Gore or George Bush. I suspected that it was Al Gore. That is, if Black congregations are more likely than any other type of congregation to invite a political candidate to come and speak, and if African Americans tend to vote for Democratic candidates, then it seemed likely that Al Gore was receiving more invitations from churches to come and speak than was George Bush.

I initially planned to test my hunch on the 2000 campaign, but for a variety of reasons it became easier to wait until 2004 to track where and when the presidential and vice-presidential candidates spoke. My working hypothesis, of course, was that Senators John Kerry and John Edwards would visit and speak at more churches than George Bush and Dick Cheney. As it turned out, my hunch was right.

To track where the candidates appeared from March 3, 2004 (the day when John Kerry effectively wrapped up the Democratic nomination) through November 2, 2004 (the day of the 2004 Presidential election), I gathered data from numerous sources.2

I tracked the campaign appearances only of President Bush and Senator Kerry until the latter selected a running mate (July 6). From that point on, I tracked the appearances of Vice-President Dick Cheney and Senator Edwards as well. Into one category—Church Campaign Appearances—I sorted all speaking appearances by any of the candidates at places of worship (Christian or otherwise)3 and grouped them according to broad denominational classification (Roman Catholic, Mainline Protestant, Conservative Protestant, Black Protestant, and Other). In a separate category—Other Faith-Based Campaign Appearances—I sorted candidate appearances at non-church events that had ties to faith-based institutions or movements.

Examples of appearances by President Bush in this second category include his appearance at the Knights of Columbus (Roman Catholic) gathering in Dallas, his via-satellite addresses to the annual meetings of the Southern Baptist Convention and the National Association of Evangelicals, his meeting with the Pope at the Vatican, his videotaped addresses to faith-based gatherings such as the National Hispanic Prayer Breakfast, and a speech at Concordia University. Examples of appearances in this category on the part of Senators Kerry and Edwards include Kerry's appearances at the Quadrennial Conference of the African Methodist Episcopal Church and the annual gathering of the National Baptist Convention, Kerry's various meetings with Black Protestant and Roman Catholic clergy, and Edwards' speech at the Congressional Black Caucus' Annual Prayer Breakfast in Washington, D.C. Finally, I coded secular appearances by the candidates into a variety of categories: private venues (e.g., homes, hotels); community centers, parks, fairgrounds; convention centers, stadiums, and arenas; elementary and high schools, colleges, universities, and technical schools; other public venues (e.g., White House news conferences and photo opportunities); and television and radio interviews.

What did I find? On the one hand, I discovered how seldom any of the candidates actually appeared at events that could be construed as religiously based. Less than 2 percent of the combined campaign appearances of all four candidates occurred at houses of worship, and less than 5 percent could be interpreted as faith-related in any way. That is, out of approximately 1,400 total campaign appearances, the four candidates spoke at a total of only 20 churches and made only 43 other faith-based campaign appearances. On the other hand, Senators Kerry and Edwards visited and spoke at far more churches than did President Bush and Vice-President Cheney. The former appeared and spoke at 19 churches while the latter spoke at only one. Not surprisingly, most of these appearances occurred at African American churches. Indeed, the one time that President Bush appeared and spoke at a church, it, too, was an African American church. Interestingly, the only candidate to speak at a conservative Protestant church was Senator Edwards, who addressed the faithful at First Baptist Church, Canton, North Carolina. (Figure 1 illustrates these findings.)

Of course, one could argue that although Bush and Cheney did not speak at churches, they did speak at other religiously based events and that is how they reached out to their conservative base. Such an assertion would be true. As I have already noted, George Bush met with religious leaders, addressed faith-based conventions and conferences, and even spoke at a church-related university. However, so did John Kerry and John Edwards. As Figure 2 illustrates, while Bush and Cheney made 18 non-church faith-based campaign appearances, Kerry and Edwards made 25.

What are we to make of these findings? They certainly challenge the widespread perception that Christian conservatives are the most politically active religious group in the United States. Ever since the born-again Jimmy Carter ran for and was elected President in 1976, largely because of the support he received from conservative Protestants, academics and the media have been fascinated with the political activism of religious conservatives. At the same time, however, they have virtually ignored the political activities of other religious groups.

Kenneth Wald, for example, in his introductory text on religion and politics, devotes an entire chapter to the political activism of conservative Protestants, but only one additional chapter to the political activism of other religious groups such as Roman Catholics, mainline Protestants, Black Protestants, and Jews.4 Wald also provides a very helpful chart that summarizes the major organizations of the Christian Right along with groups opposed to the Christian Right, but he provides no similar chart for organizations with ties to any other religious constituency (e.g., the Christian Left), even though they certainly do exist.5 Perhaps just as telling is the coverage that George Bush's addresses to the annual meeting of the Southern Baptist Convention (SBC) received compared with the coverage of John Kerry's address to the General Conference of the African Methodist Episcopal Church. To be sure, Kerry's visit came on the day that he announced that he had chosen John Edwards as his running mate, so it is not surprising that the media did not devote too much space to his visit. But then again, Bush did not even attend the SBC's annual meeting. He addressed the delegates via satellite. Kerry, at least, showed up.6

While the standard perception of religion and political activism in the United States is undoubtedly driven, in part, by the role that conservative religious leaders such as Jerry Falwell, Pat Robertson, Ralph Reed, and James Dobson have played in shaping the Republican Party’s political agenda, it also probably reflects the fear of many on the political Left that the goals of conservative Protestants threaten the very core of American democracy. For example, Flo Conway and Jim Siegelman are convinced that conservative Protestants are waging “a guerrilla war on our private thoughts, feelings, and beliefs, on our nation’s timeless values and historic freedoms.”7 Similarly, Sara Diamond contends that while it would be a mistake to regard the Christian Right as a monolithic movement, it appears “to be united in a single overall effort: to take eventual control over the political and social institutions in the United States and—by extension—in the rest of the world.”8

And Christian Smith tells the story of an acquaintance who, while attending an Ivy League graduate program in the social sciences, heard a professor remark in class that “If American evangelicals had had political power during the McCarthy Era in the 1950s, there would have been another holocaust.” Smith’s friend noted that in response to this remark, not “one student … raised an eyebrow. The idea appeared perfectly credible to the class, and the discussion moved on.”9

Yet, how credible is this idea? Are conservative Protestants as “dangerous” as many believe? Christian Smith’s study of American evangelicalism suggests they are not. He notes that “many of the conventional assumptions about evangelicals and politics … are misguided and simplistic. When it comes to politics, evangelical views are replete with diversity, complexity, ambivalence, and incongruities.”10 For instance, while he found that most evangelicals believe that Christians should be involved in politics, by this most of them simply meant “informed voting.” “Politics for the majority of evangelicals,” Smith writes, “is not a trumpet call to take sides in the much-ballyhooed ‘culture wars,’ but a matter of basic citizen responsibilities and rights.”11 To be sure, Smith encountered evangelicals who wanted to impose their morality on others, but they accounted for only a small percentage of the evangelicals he interviewed; indeed, they accounted for a smaller percentage of those he interviewed than those who believed that Christians should stay out of politics altogether.

While we cannot generalize Smith's findings to all conservative Protestants, they are consistent with other studies indicating that while conservative Protestants share certain theological views, they can hardly be regarded as a monolithic voting bloc and may not be as committed to the Republican Party as they are often portrayed.12 Furthermore, apart from handing out Christian Right voter guides, when it comes to most forms of political activism, conservative Protestants are remarkably inactive. Beyerlein and Chaves found that conservative Protestant congregations are the least likely group to tell people at worship about opportunities for political activity, to form groups to organize a demonstration or a march, to lobby elected officials, to discuss politics, or to register people to vote. In fact, even when it comes to handing out voter guides, they are less active than Black Protestant congregations.

By contrast, Black Protestant congregations score high on almost every measure of political activity. They are more likely to tell people at worship about opportunities for political activity, to form groups to discuss politics or organize voter registration campaigns, to distribute voter guides, or (as we have already seen) to invite someone running for office as a visiting speaker. And this does not appear to be a recent development. In 1992, 1996, and 2000, Bill Clinton and Al Gore visited and spoke at several black churches during the closing days of their presidential campaigns,13 and Andrew Young and Martin Luther King, Sr. dragged Jimmy Carter around to a number of African American churches and had him meet with numerous African American clergy prior to the 1976 presidential election.14 And lest we forget, in 1960 the Kennedy campaign covertly distributed two million pamphlets at African American churches on the Sunday before the election.15

Political activism in African American churches is by no means limited to presidential campaigns. Anecdotal evidence suggests that it is common for African American political candidates to solicit the blessings of their pastors and ministerial associations. For example, Fredrick Harris has documented Carol Moseley Braun's appearance before a gathering of African American ministers to help jump-start her 1992 senatorial campaign,16 and Mary Sawyer's study of 14 members of the Congressional Black Caucus found that 13 received endorsements from pastors, ten received endorsements from ministerial bodies, ten spoke at Black churches during their campaigns, and five received financial contributions from churches.17

It is also helpful to note that the congregational survey on which Beyerlein and Chaves’s study is based asked if any (not just presidential) political candidates were ever invited to come and speak.

Am I claiming that the Bush-Cheney campaign ignored its religious base? Not at all. The Bush-Cheney campaign clearly sought to activate its religious base by having President Bush meet with clergy and speak at various non-church gatherings of the faithful. Moreover, there is some evidence to suggest that the campaign did not have to be too proactive in courting its religious base. For example, Rick Warren, the influential conservative Protestant pastor and author of The Purpose-Driven Life, sent an email to 136,000 pastors urging them to compare the candidates on five non-negotiable issues: abortion, euthanasia, human cloning, same-sex marriage, and stem cell research.18 My point is not to suggest a lack of initiative on the part of President Bush and his reelection campaign in this respect; rather, I am simply arguing that the Kerry–Edwards campaign was quite busy promoting its own faith-based initiative: the wooing of African American churchgoers.

While financial contributions by churches to candidates do violate IRS guidelines, speaking appearances by political candidates at churches do not.19 An interesting study would be to track how the media and church-state watchdog groups cover such events. Do they focus on some events and not others? Are they selective in the “violations” they highlight? Also, given the recent attention that the IRS has paid to a sermon preached at an Episcopal Church in Los Angeles, some enterprising researcher may find it interesting to discover whether the IRS is selective in the attention it pays to the political activities of congregations.

One cannot help but wonder how far back into the past we can project current patterns of religious politicking by political candidates. A worthy research project would be to comb the archives of national and regional newspaper coverage of past presidential campaigns to see how Democratic (and Republican) courting of the faithful has changed over the past 40 years. But that is a study for another day. For now we need to be content knowing that some of the assumptions that many have held regarding the political activism of presidential candidates and people of faith have simply been wrong, which is why sometimes there is no substitute for good, hard, empirical data.

An earlier version of this paper was presented at the 2005 meetings of the Society for the Scientific Study of Religion, the Religious Research Association, and the Association for the Study of Religion, Economics, and Culture in Rochester, New York. I wish to thank Larry Iannaccone for his helpful suggestions and continual encouragement. I would also like to thank Courtney Magner for help in gathering data on John Kerry's campaign appearances.

[Chart: Total Church Campaign Appearances, March 3, 2004 – November 2, 2004]

[Chart: Other Faith-Based Campaign Appearances, March 3, 2004 – November 2, 2004]

Sean Everton is a Ph.D. candidate in sociology at Stanford University and a full-time lecturer at Santa Clara University.

1. Kraig Beyerlein and Mark Chaves, “The Political Activities of Religious Congregations in the United States,” Journal for the Scientific Study of Religion, Vol. 42, pp. 229-246.

2. Sources I used included the print and online versions of national and regional newspapers, the Yahoo! News Service, U.S. Newswire, LexisNexis, Democracy in Action's coverage of the 2004 campaign, official campaign websites, White House news sources, and Yahoo! News Photos Slide Shows.

3. I did not include instances where the candidates showed up for worship but did not address the faithful. Throughout the campaign John Kerry regularly attended Roman Catholic masses, and President Bush often worshipped at St. John’s Episcopal Church when he was in Washington, D.C.

4. Kenneth D. Wald, Religion and Politics in the United States, 4th ed. (Rowman & Littlefield, 2003).

5. Sojourners, www.sojo.net; the Baptist Peace Fellowship of North America, www.bpfna.org/home; and the Center for Progressive Christianity, www.tcpc.org, are three such examples.

6. To be fair, the New York Times did cover Kerry’s visit to the National Baptist Convention in considerable depth.

7. Flo Conway and Jim Siegelman, Holy Terror: The Fundamentalist War on America's Freedoms in Religion, Politics and Our Private Lives (Dell, 1984), p. 9, quoted in Christian S. Smith, Christian America? What Evangelicals Really Want (Univ. of California Press, 2000).

8. Sara Diamond, Spiritual Warfare: The Politics of the Christian Right (South End Press, 1989), p. 45.

9. Smith, Christian America?, p. 92.

10. Ibid., p. 94.

11. Ibid., p. 98.

12. Nancy J. Davis and Robert V. Robinson, “Religious Orthodoxy in American Society: The Myth of a Monolithic Camp,” Journal for the Scientific Study of Religion, Vol. 35, No. 2 (1996); Michael Hout and Andrew M. Greeley, “A Hidden Swing Vote: Evangelicals,” New York Times, September 4, 2004.

13. Gwen Ifill, “Clinton Rallies Supporters for Final ‘Long Walk’,” New York Times, November 2, 1992, A1, 14; Alison Mitchell, “Avoid ‘Politics of Division,’ Says Clinton,” New York Times, November 4, 1996, A1, B7; Katharine Q. Seelye and Kevin Sack, “The 2000 Campaign: The Vice President; Focus Is on Crucial States in Campaign’s Final Hours,” New York Times, November 6, 2000, A24.

14. C. Eric Lincoln and Lawrence H. Mamiya, The Black Church in the African American Experience (Duke Univ. Press, 1990), p. 215.

15. Taylor Branch, Parting the Waters: America in the King Years, 1954-1963 (Simon & Schuster, 1988).

16. Fredrick C. Harris, Something Within: Religion in African-American Political Activism (Oxford Univ. Press, 1999), pp. 12-26.

17. Mary R. Sawyer, “Black Politics, Black Faith,” 1982, cited in Lincoln and Mamiya, p. 216.

18. Alan Cooperman and Thomas B. Edsall, “Evangelicals Say They Led Charge for the G.O.P.,” Washington Post, November 8, 2004.

19. K. Hollyn Hollman, “Churches and Political Campaigns,” Report from the Capital, May, 2004.

Copyright © 2007 by the author or Christianity Today/Books & Culture magazine.


Paul Merkley

Three views (including former President Carter’s).


The literature which serves the historian of the relations between the churches and the State of Israel is sparse and for the most part lightweight. Most of the books that actually get read on this theme, those that are put out by publishers of religious literature and which are available in “Christian bookstores,” are polemical, dedicated to either denigrating or exalting Israel’s performance as the civil host of all the Christians who live and work in Israel.


Palestine: Peace Not Apartheid

Jimmy Carter (Author)

264 pages

$15.14

Two things need to happen before serious scholarly histories of this story begin to appear. The first is that academic historians must come to recognize the centrality of this theme (the relations between the churches of the Holy Land and the Jewish State) in the overall story of the relations of the State of Israel with the whole world. The second is that archives held by the principal participants in this story must be opened for disinterested investigation. The unwillingness of the Vatican to allow outsiders into its archives is well-known, but other Christian bodies—including the Orthodox churches, the Anglicans, the Lutherans, and still more who have played a part in the religious life of the Holy Land—have been every bit as reluctant to let strangers into their basement archives, and have accordingly paid the price of being mistrusted by scholars and misrepresented in the scholarly histories. So long as this state of affairs exists—so long as the materials necessary for writing honest history are unavailable—the amateurs and the partisans and the court-historians and the mindless polemicists are able to present their generalizations, their rumors, and their partisan prejudices as the story.

Uri Bialer’s new book, Cross on the Star of David: The Christian World in Israel’s Foreign Policy, 1948-1967, shows how serious historical research is done. This is the kind of research that historians wait for before they write their weighty books. It is (in other words) the kind of research that the popular historians flee from, when it exists. Bialer has been first in line at the archives of all the governmental bodies in the State of Israel—the Israel State Archives, the Central Zionist Archives, the Foreign Ministry and Ministry of Religion—as they have opened, in the last few years, files of the 1940s and 1950s. He has augmented these findings with research in the British Foreign Office and elsewhere. Apart from the occasional bit of supporting quotation from published books, everything that is told us in this book is documented by reference to these archives.

Among the themes which figured in early dealings between Israeli authorities and the churches were titles to property, missionary activities, the right to run schools and other facilities, and the right of representatives of the churches who are not citizens to travel in and out of Israel or to reside and to work in Israel. Of interest to historians of foreign policy are the connections that Israeli authorities made between settlement of these local issues and the behavior of parent church bodies in Europe as well as attitudes of nation-states which had among their citizens large numbers of members of certain churches which in the past had behaved as their protectors.

The recorded exchanges between the many parties to these negotiations make for colorful reading. Even more colorful are internal memoranda and diary entries which Bialer has located and quoted. We are shown a great deal that is not pretty. Here is Foreign Minister Sharett on his negotiations with Vatican principals (from the pope down) over the latter’s refusal to recognize the State and its determination to wreck Israel’s chances for survival by imposing “international status” upon Jerusalem: “[This is for them] a matter of retribution, the squaring of an account concerning something that happened here in Jerusalem, if I am not mistaken, 1,916 years ago when Jesus was crucified… . [They are saying] that the Jews need to know once and for all what they did to us and now there is an opportunity to let them feel it.” Here is Cardinal Tardini, the Vatican’s Secretary of State: “I have always been convinced that there was no real need to establish that state… . Its existence is a constant source of danger of war in the Middle East. Now that Israel exists, there is, of course, no possibility of destroying it, but every day we pay the price of this mistake.” As for diplomacy: “There is no possibility of contact or negotiations with the killers of God.” This is the kind of history that grownups like, because it requires us to make our own judgments about motives and meaning.

Apart from offering a few sensible observations on later developments (such as the Holy See’s foot-dragging about recognition of the State of Israel, down to the year 1993), Bialer concludes his account with the year 1967. The context changed drastically after the Six-Day War, when the nominal headquarters of most of the major churches and large numbers of their properties as well as their adherents were transferred from Jordanian to Israeli jurisdiction. We must hope that Bialer will show the same zeal in pouncing upon all the documentation as it becomes available for these later chapters.

In Christian Zionism: Road-Map to Armageddon?, Stephen Sizer advances the fantasy, previously elaborated by a host of anti-Zionist polemicists, that the long and honorable history of defense by Christians of Israel’s right to be Israel is merely an epiphenomenon of the history of a singular, off-center school of theology called premillennial dispensationalism. According to this thesis, all Christian Zionists are mindless acolytes of a Sanhedrin of pamphleteers which carries on the teachings of John Nelson Darby.

By my casual reckoning, about 80 percent of the book is devoted to a sedulous taxonomy of End Times speculation. The project began as a doctoral thesis for which Sizer bravely sifted through the mountain of English-language prophetic theology from the 17th to the end of the 20th century and disposed its components into categories: amillennialist, postmillennialist, and premillennialist—the latter further divided into covenantal and dispensationalist, and, in a later section of the book, apocalyptic-dispensationalist and political dispensationalist. Do not despair: there are charts.

Early in the book, Sizer outlines a sequence of political figures who carried the message of premillennial dispensationalism forward into a plan of action for establishing a Jewish state. The list breaks off with Balfour, and thus Sizer spares himself having to explain the connection between dispensationalism and Woodrow Wilson, Franklin Roosevelt, Harry Truman, and their successors in the front ranks of political actors after 1918.

Among major misrepresentations of historical fact too numerous to list, let alone to deconstruct, I take the case of Arthur James Balfour, he of the Balfour Declaration, who stands in this book for the entire class of Christian Zionists. We learn that he was a man who was “brought up in an evangelical home and was sympathetic to Zionism because of the influence of dispensational teaching,” hence naïve, uncultivated, weak-minded, his thinking processes dulled, like those of the rest of us Christian friends of Israel today, by low-brow pamphleteering and thus easily led by the Zionists. Balfour, dim bulb that he was, “regarded history as an instrument for carrying out a Divine Purpose.” (Since when did this become a heresy?)

In truth, Lord Arthur James Balfour was a member of the most prominent political family of his day, noted for its achievements in science and the arts; he had a place at the very heart of British intellectual and artistic circles, was educated up to his ears, and was a widely published critical-academic philosopher, which earns him a long entry today in the Encyclopedia of Philosophy. The quotient of dispensationalism in Balfour's intellectual makeup was zero.

In fact, of all the major Christian Zionists whom Sizer describes as standing at the end of the line whose head and fount is the dispensationalist Prophet, John Darby, only one, William Blackstone, was in fact a dispensationalist, or, for that matter, speculated at all about covenants and dispensations. (And how on earth did the notoriously agnostic Lord Palmerston get into this sequence of the mindless dupes of premillennial dispensationalism?)

Sizer’s cartoon-Balfour stands for all the Christian Zionists jerked around by scheming Jews. Think of contemporary Christian Zionists, puppets of the Likud, cheering from the sidelines, never questioning, never doubting, as bulldozers destroy the vineyards and homes of Palestinians (as illustrated on the cover of the book), as illegal settlements are expanded towards the never-admitted but palpable goal of extending Israel’s boundaries to include Damascus, Beirut, Amman, and Baghdad—perhaps, who knows, to China. Like the cartoon-Balfour, Christian visitors to Israel are swiftly taken captive by State-appointed tour-guides who drag everybody off to Yad Vashem (which exists “to represent Israel as a victim”) and then to the Wailing Wall and Masada in order “to perpetuate a favorable image of Israel, stifle criticism and reinforce their claim to the land.” Related to this red herring is the one about being in love with cosmic-death scenarios inspired by provocative passages in Daniel and Revelation. The debt which Sizer owes to the Chomsky-Finkelstein-Ateek school of the History of Israel is readily apparent.

Some of my best friends are premillennial dispensationalists, but we get along anyway. For a Christian Zionist of my ilk, a full and sufficient biblical mandate is in Genesis 12, with special reference to verse 3: “I will bless those who bless you, and I will curse him who curses you, and in you all the nations of the world shall be blessed”—a text which Sizer turns inside out on page 147.

It does not seem of any interest to Sizer to note that we stand today on historical ground very different from that of the age of the dispensationalist prophetic conferences. What we have to speculate about today is whether the being of Israel should be undone by human force. Christian Zionists are realists. They no longer attend conferences in which anyone proposes a theory about Israel’s coming into existence. Their speculations about what is right and wrong, what should be done and not done, start from the premise that Israel is. Anti-Zionists, meanwhile, live in the same counterfactual world as do the Muslims who speculate about the legitimacy of Zion.

It is a common feature of anti-Christian Zionist literature that little interest is shown in the actual historical circumstances that brought the modern State of Israel into existence. In Sizer's book there is absolutely none, unless we count this oddity on page 148: “in 1948 the U.S. government was just as opposed to the founding of the State of Israel [as was] Britain.” Is this revisionism, or what? It is Franklin Roosevelt attacking the Japanese fleet at Pearl Harbor. Did none of that long list of people who are thanked on the Acknowledgements page twig to this incriminating bit of confusion? Does InterVarsity Press not have fact-checkers? This is embarrassing. It is, however, all we have to indicate that Sizer knows that once there was no State of Israel but now there is—somehow.

With this book, says Colin Chapman in his back-cover appreciation, “Sizer has thrown down the gauntlet in a way that demands a response from those who support the state of Israel for theological reasons.” Well, anytime, anywhere.

Even before Jimmy Carter’s Palestine: Peace Not Apartheid had been published, and while reviewers were still reading the embargoed pre-publication text, the book was making news—and possibly even making some history.

Over the summer months, Carter's view of the Hezbollah war had been broadcast widely. That view was, in brief, that the Olmert and Bush governments had been lying in wait to rain destruction upon innocent Lebanon and that an excuse was finally found when a few “militants” had slipped across the border and captured Israeli soldiers. Israel's goal, a Carthaginian peace, was thwarted only when all the nations of the world stood together at the UN.

Carter insisted that in expressing these views, “I think I represent the vast majority of Democrats in this country.” Then-House Minority Leader Nancy Pelosi, running for re-election, took a different slant: “With all due respect to former President Carter, he does not speak for the Democratic Party on Israel.” Leaders of the Democratic Party took out an ad in the Jewish Daily Forward to proclaim that “For 58 years and counting Democrats stand with Israel.” Included among pictures of several Democratic presidents was one of President Jimmy Carter standing with Prime Minister Menachem Begin at the time of the Camp David negotiations. The ad brought on angry letters making the counter-claim that Carter, since leaving office in 1981, has evolved into a sleepless enemy of Israel's peace.

Almost as though he were rising to prove this very point, Carter announced his new book, whose four-word title and subtitle, Palestine: Peace Not Apartheid, as everyone immediately saw, could run on the banner under which Israel's enemies worldwide have gathered since the Durban Conference of August 2001.

Newspapers reported in October that Carter’s book was to have been released on November 2, but that the publishers had responded to the panic of the politicians by holding off publication until November 16, a few days after the election. In any case, enough was leaked to make clear that the title of the book did not mislead: Carter had updated the line he has taken for a quarter-century now—that Israel is conducting “a system of oppression, apartheid, and sustained violence,” that “Israel’s continued control and colonization of Palestinian land have been the primary obstacles to a comprehensive peace in the Holy Land.”

The irony is that Carter’s book has probably drawn more attention and therefore been more of an issue than if the book had just gone out like other books on the date announced and been bought and wrapped for Christmas for Dad.

Carter is not an anti-Jewish ideologue. His views are not irrational, they are just unbalanced—driven by an unquenchable private need for vindication. He cannot let go of the fact that the only part of his Camp David Accords of 1978-1979 which has lasted (and that just barely) is the achievement of a Peace Treaty and exchange of diplomatic recognition between Israel and Egypt. He proclaimed at the time that the three parties (the United States, Egypt, and Israel) were committed under the Accords to persuade the Palestinians and all the Arab nations to resolve their quarrel with Israel along parallel lines. Because Israeli and American opinion can be affected by the disquisitions of former presidents and because Arab opinion cannot, Carter has been working out his frustration regarding the failure of the larger hopes for “Middle East peace” against the former ever since, seeking to shame us all into setting things straight.

But Carter's Camp David formula was built on a fantasy: that the Arab world's complaint against Israel has to do with geography. The real complaint is theological. The creation of the State of Israel is an intolerable reversal of the judgment of the Prophet Muhammad that, for their refusal to heed his voice, “humiliation and wretchedness were stamped upon them [the Jews] and they were visited with wrath from Allah” (Sura II: 61; cf. Sura III: 112). It is for this unforgivable assault on the credibility of Islam that Israel cannot be permitted to stand.

There is not a word about Islam in Carter’s book, except in passing as a benign presence (like the Christian church, here and there) consoling lives lived in the shadow of Jewish oppression. Neither is there any developed attention to the dynamic of terror, except to note in passing that decent people don’t do certain things—never naming the names of those who proudly claim “responsibility,” thus leaving us with the impression that the failure of decency is evenly distributed. Indeed, it is the Palestinians who are the primary victims of terror, since Israel seizes upon “provocative acts by Arab militants” as excuses for “devastating military response.” Admittedly, “Some Palestinians react by honoring suicide bombers as martyrs to be rewarded in heaven and consider the killing of Israelis as victories.” Regrettable, but perfectly understandable.

This allusion to “provocative acts” just about uses up Carter's interest in discussing terrorism. What is more interesting to him is Israel's inexplicable practice of locking up “thousands of Palestinians” in its prisons. Indeed, “one of the vulnerabilities of Israel, and a potential cause of violence is the holding of prisoners … [including] the revered prisoner, Marwan Barghouti.” (Barghouti is “revered,” in case you didn't know, because he is directly responsible for the murder of several Israeli citizens. To Israel it makes sense that he should be a prisoner. To Carter, it does not.) In view of this policy of locking up thousands of people (inexplicable except in terms of some kind of congenital sadism), we are invited to admire the tactical genius which motivates the kidnapping of Israeli soldiers—namely, the reckoning that in the past Israel has exchanged “1,150 Palestinians for three Israelis in 1985; 123 Lebanese for the remains of two Israeli soldiers in 1996,” and so on. This passage, in my view, is the lowest point so far in Jimmy Carter's descent into total Chomskyism.

Carter’s handle on the Gaza withdrawal of 2005 is consistent with his commitment to never admitting an honorable motive to any Israeli action. The Israelis may have thought that hauling off the Israeli residents, and leaving Gaza to the Gazans, would register with the world as an exercise to reduce the extent of her “occupation.” But Jimmy Carter shares Mahmoud Abbas’ logic: “Israel is constantly bringing more land under her occupation”—ergo, withdrawing is really a cunning way of expanding. As for the Gazans, Israel intends to “strangle” them.

Carter does not mention those philanthropic Jews who put up millions of dollars in early 2005 in order to meet the needs of a population said to be suffering because of Israeli oppression, transferring ownership and custody of the scientifically advanced, productive greenhouses and orchards—the most advanced facilities of their kind in the world—cost-free, to the local Arabs. The Arab response was to trash everything, carry off all the pipes and equipment and hoses and sprinklers, and then to plant in the garbage dump that remained beds for the missiles which rain down terror over the Negev today. (According to Carter, that Gaza today has no greenhouses and no commerce sufficient to justify opening up the ports for traffic abroad is attributable to Israel’s “system of oppression, apartheid, and sustained violence.”)

Near the end of the book, Carter pauses to reflect:

It must be noted that by following policies of confrontation and inflexibility, Palestinians have alienated many moderate leaders in Israel and America and have not regained any of their territory or other basic rights. The fate of all Palestinians depends on whether those in the occupied territories choose to pursue their goals by peaceful means or by continued bloodshed.

This is well said. Alas, whatever better angel (or passing whim) inspired that gesture toward seeing the Israeli point of view, it is gone and utterly forgotten when we get to the last page:

The bottom line is this: Peace will come to Israel and the Middle East only when the Israeli government is willing to comply with international law, with the Roadmap for Peace, with American official policy, with the wishes of a majority of its own citizens – and honor its own previous commitments – by accepting its legal borders. All Arab neighbors must pledge to honor Israel’s right to live in peace under these conditions.

But this is not the frame of mind of the people who so recently elected Hamas to be their government, and who consistently tell the pollsters, by whacking great margins, that there will never be peace until Israel ceases to exist. The Palestinians are never going to embrace this healthy attitude so long as international voices with the prestige of Jimmy Carter keep up their unrelenting assault on Israel’s right to life.

Paul Charles Merkley is the author of Christian Attitudes Towards the State of Israel (McGill-Queen's Univ. Press) and American Presidents, Religion, and Israel (Praeger).

Copyright © 2007 by the author or Christianity Today/Books & Culture magazine.


Donald A. Yerxa

Turning point in the Pacific War.


In August 1942, on an obscure island in the Solomon Islands inhabited by 15,000 Melanesians and about fifty Europeans (mostly missionaries), the United States launched its first offensive of the Pacific War. It was probably the only time in the war that Japan and the United States met on more or less equal terms, and the outcome remained in doubt for several months. Both sides eventually recognized that Guadalcanal might well be the decisive campaign of the war and poured reinforcements into the South Pacific theater. Though they fought doggedly, in the end the Japanese could not match superior American airpower, firepower on the ground, and logistical support. In early February 1943, the Imperial Navy evacuated the tattered, malnourished remnants of a once-proud Japanese ground force. Although years of fighting remained in the Pacific, the strategic postures of Japan and the United States had shifted irreversibly. The Japanese, not the Americans, were on the defensive. Guadalcanal was the turning point in the Pacific War.1

Few battles in American history stir the emotions like Guadalcanal. Mention of it conjures up images of beleaguered Marines in hideous jungle conditions desperately defending what for a few months was the most precious real estate in the Pacific, of rotting corpses, of emaciated Japanese soldiers attempting to blunt American firepower with little more than courage and determination, of deadly Japanese Long Lance torpedoes sending many American warships and crews to their graves in the shark-infested waters of Iron Bottom Sound, and of underpowered American P-400s and sturdy F4F Wildcat fighters scrambling from Henderson Field to meet daily attacks from Mitsubishi-built Betty bombers and Zero fighter-escorts. The epic air, sea, and land campaign—”triphibious” in Churchillspeak—still serves as a source of inspiration, horror, instruction, scholarly debate, box-office receipts, and authors’ royalties.

Guadalcanal had it all. Horrific combat in the air, on the seas, and especially on the ground. Colorful heroes, ignominious failures, and ordinary men on both sides who died far from home. Guadalcanal was a test of the strategic instincts of the best and the brightest of both Japan and the United States. It was a test of will and, some would argue, of national character. Above all, it was a test of the ability of each combatant to conduct operations hundreds of miles away from main bases in some of the most hostile physical conditions on the planet. It soon became a prolonged campaign of attrition, where the ability to provide food, medical supplies, war materiel, and more troops would be decisive.

To appreciate the drama and significance of Guadalcanal, we must do our best to bracket our knowledge of how the war in the Pacific turned out. To be sure, the battle of Midway in June was a dramatic victory for the United States, and the loss of four Japanese aircraft carriers compared to the single American flattop, Yorktown, was a major reversal for the Imperial Navy. But in and of itself, Midway was not decisive. The Imperial Fleet, though weakened, was still formidable; the Japanese army had not yet tasted defeat; and the Rising Sun still flew over much of the Central and South Pacific.

During the first six months of 1942, the Japanese had pushed deeper into the South Pacific, seizing Rabaul in the Bismarck Archipelago, Lae and Salamaua in New Guinea, and Tulagi in the Solomon Islands. This was an effort to consolidate earlier gains as well as to establish a network of bases that could support air and sea operations against Allied counterattack. In what must have seemed like a relatively routine decision at the time, the Japanese naval brass in mid-June 1942 authorized the construction of an airbase on the island of Guadalcanal, about 25 miles across the Sealark Channel from a seaplane base already established at Tulagi. A functioning air base at Guadalcanal not only would enhance the Japanese defensive perimeter in the South Pacific, it would threaten vital sea lanes to Australia. Throughout June and July 1942 advance units and construction forces began to clear land and build an airstrip on the site of a coconut plantation.

The activity on Guadalcanal did not go unnoticed. Air reconnaissance, analysis of Japanese radio traffic, and the reports of civilian coastwatchers organized by the Australian navy all confirmed that an airfield was being developed on Guadalcanal. American naval planners, who wanted to capitalize on the momentum of Midway, were already devising a more aggressive effort in the South Pacific than prewar plans had envisaged. The new intelligence about Japanese activity in the Solomons convinced the planners to invade Guadalcanal as soon as possible.

Soon after his arrival on July 25 at the major Japanese base at Truk on his way to Rabaul to assume command of the newly organized Outer South Seas Force, Vice Admiral Gunichi Mikawa quizzed naval staffers about the prospects of an Allied attack on Guadalcanal. He was reassured that this could not happen. The staffers were, of course, very wrong. And Mikawa would not have long to wait for his fears to be realized. B-17s from the island of Espíritu Santo began regular bombing raids on July 31, and increased radio activity suggested to Japanese intelligence that the Americans were planning something against Guadalcanal. With stunning speed, American planners had pulled together Operation “Watchtower,” and a flotilla of transports containing the 1st Marine Division steamed from New Zealand undetected. A strong naval escort that included three of the four American aircraft carriers in the Pacific offered protection.

The Japanese on Guadalcanal were dumbfounded to see a large Allied fleet offshore on the morning of August 7. At first, the landings at Lunga Point went so smoothly that the Marines reported the whole operation seemed like a “peace-time drill.” The troops moved inland unopposed through a coconut plantation, but their advance slowed as cautious Marines encountered the Guadalcanal jungle for the first time. It would prove to be almost as formidable an opponent as the Japanese. Forced to navigate by compass through thick jungle and across a winding stream, the Leathernecks were soon behind schedule. Other units moved slowly along the beach from the landing zone in disorganized fashion. Back at the beachhead the scene was one of chaos. Supplies piled up on the beach, which became so littered that follow-on landing craft could not off-load their cargoes. The Marine shore party was too small to accommodate the volume of traffic streaming from the transports. This was the first American amphibious assault of the war in the Pacific, and it lacked the efficiency of subsequent operations. Nevertheless, on the second day of the operation, the Marines captured the deserted airstrip. The Japanese offered almost no resistance save for ineffective air attacks on the American transports.2 Any celebration, however, was definitely premature.

In the early morning hours of August 9, the U.S. Navy suffered one of its most galling defeats ever. Using effective nighttime tactics that took advantage of vastly superior Japanese long-range torpedoes, Admiral Mikawa led a strike force of cruisers and destroyers against a stronger Allied naval force near Savo Island off the northwest tip of Guadalcanal. When it was all over, the Allies had lost four heavy cruisers (three American, one Australian) and a destroyer. Fearing counterattack from carrier-based American aircraft, the Japanese commander retreated, losing only one destroyer on the way back to Rabaul.

As humiliating as the defeat at Savo Island was, it could have been much worse. American transports still offloading necessary supplies were a more important strategic target than the naval units that protected them. In some respects Savo Island resembled Pearl Harbor: both were great tactical victories that missed larger strategic objectives. Still, Savo Island and the prospect of Japanese air attacks spooked Admiral Frank Fletcher, who ordered his precious carriers to retire. His controversial decision had enormous repercussions. Without the cover of American carrier-based aviation, the transports were vulnerable to Japanese air attack. The senior officer for the expeditionary force, Rear Admiral Richmond Kelly Turner, decided to keep the transports off Guadalcanal throughout the daylight hours of August 9 without air cover. That night they retreated to New Caledonia, their holds still carrying both men and cargo needed ashore. “The marines,” as historian Richard B. Frank notes, “were now alone.”3

Marine commander Major General Alexander Archer Vandegrift was in a tough position. He had virtually no air cover. Only a fraction of the necessary supplies had been offloaded before the transports departed. About 1,800 troops hadn’t made it ashore either. He would have to hold on until the ships and planes returned. Fortunately for the Marines, the early days of the campaign were relatively uneventful. Both sides engaged in some limited reinforcement activity using destroyers pressed into service as transports. The Americans landed a small force trained to set up an advanced air base along with some aviation fuel and spare parts. Getting the captured airstrip ready for flight operations was critical.

We have only sketchy details of the first real engagement with Japanese troops. Lt. Col. Frank Goettge, a Marine intelligence officer, was leading a patrol of about 25 men to follow up on a report that a group of Japanese to the west of American lines might be prepared to surrender. Goettge's patrol was ambushed sometime during the night of August 12-13. Only three Marines escaped, one of whom claimed that the Japanese attackers used swords and bayonets to butcher wounded Americans. At sunrise on August 19, the Marines held off a reckless charge near the Matanikau River, inflicting disproportionate casualties on the Japanese. Although these first encounters were relatively small-scale operations, they were revealing. Early on, stories spread throughout the American ranks of Japanese treachery and brutality. The Marines were prepared to respond accordingly. The Japanese, for their part, were convinced that the Americans were soft and could not stand up to the fierce determination of the Imperial troops. Their flair for “tactically dramatic” assaults—seemingly bordering on the suicidal—stemmed from their dismissive assessment of the American fighting spirit.4

Back at Imperial General Headquarters in Tokyo, army and navy planners deliberated on their next move. Guadalcanal must be retaken, although the New Guinea campaign and the assault on Port Moresby remained their strategic priority. Furthermore, because they believed the American force stranded at Guadalcanal was relatively small and could easily be overrun, the Japanese initially diverted only modest forces from the New Guinea operation to Guadalcanal. They assigned the task of recapturing the now operable airfield to Colonel Kiyonao Ichiki, whose actions as a company commander in China in 1937 precipitated the famous Marco Polo Bridge Incident.

On August 19, six Japanese destroyers landed Ichiki’s detachment of approximately 1,000 men about twenty miles from Lunga Point. Two days later the impetuous Ichiki attacked the Marine perimeter at Alligator Creek (inaccurately known as the Battle of the [nearby] Tenaru River). The Japanese never came close to the airfield. As was the case a few days earlier at Matanikau River, the Japanese forces were mauled as they advanced in the open against superior firepower. With almost 800 of his troops lying dead on the shores of Alligator Creek, Ichiki committed suicide. The Japanese Army was not used to such defeats. Neither were the Americans. The waste of soldiers’ lives amazed the Marines. And they were disgusted when wounded Japanese used hand grenades to blow themselves up rather than surrender when Marines approached.

On August 16, a powerful Japanese fleet had left Truk for Guadalcanal. At this point in the war, Japan had more naval assets in the South Pacific than did the United States. Loaded with troops and equipment, the Japanese transports that steamed toward Vandegrift’s Marines could count on escort and cover from four carriers, one escort carrier, four battleships, 16 cruisers, and 30 destroyers. To confront this powerful armada, the USN had only three carriers, one battleship, seven cruisers, and 18 destroyers in South Pacific waters.

Contact between the two fleets occurred on August 24, but the resulting Battle of the Eastern Solomons was inconclusive. The Japanese lost an escort carrier, a destroyer, and a transport, while the American fleet carrier Enterprise had to retire to Pearl Harbor for repairs. Both sides, especially the Japanese, seemed more concerned about losing carriers than gaining a decisive victory.5 The important thing to note, however, is that the Japanese convoy turned back.

For the remainder of August and early September, the Japanese conducted a daytime air war against Guadalcanal-based American airpower, all the while using destroyers to make the high-speed run at night down “the Slot” (the channel running between the islands in the Solomon chain) to deposit men and supplies. These “Tokyo Express” runs would often continue down the Guadalcanal coast to bombard Henderson Field (named after Major Lofton Henderson, the first Marine aviator killed in the Battle of Midway) and surrounding American positions. This led to what Richard Frank has called “a curious tactical situation” in which the Americans enjoyed overall command of the skies and seas around Guadalcanal in the daylight, while the Japanese controlled the waters at night.

With the benefit of hindsight, the historian can claim that the Japanese approach favored the Americans, even though their hold on Guadalcanal was still precarious. The amazing capacity of Stateside shipyards and aircraft plants would certainly call into question any Japanese effort to fight a campaign of attrition in the Pacific (though recent studies such as Tim Maga's America Attacks Japan: The Invasion that Never Was show how integral attrition was to Japan's thinking down to the very end). But it is important to resist playing the hindsight card and to recall the grand strategic context of September 1942. Hitler's armies threatened Stalingrad, and Rommel's Afrika Korps was outside Alexandria, Egypt. The Joint Chiefs of Staff had made Operation Torch in North Africa America's top strategic priority. Consequently, the Guadalcanal campaign had to compete for resources. Yet even in these global circumstances, the JCS (especially Admiral Ernest King) had a keener understanding of the strategic importance of Guadalcanal than the Japanese high command. If the Marines could hold on and keep Henderson Field open, eventually help would come.

By September, the Tokyo Express had landed enough new troops to enable the Japanese to mount an attack on Marine lines. They chose the southern defensive perimeter of the airfield. The assault was marred by poor coordination between the various units, which lost cohesion in the thick jungle terrain. On the night of September 12-13, the Japanese struck American positions along Edson's Ridge, named in honor of the Marine commander whose forces—augmented by paratroopers—held their ground in savage and often confused close-quarters combat. Edson's troops inflicted heavy casualties on the Japanese, who endured unbelievable hardship as they retreated through the jungle to base camps with almost no food or medical supplies. The Battle of Edson's Ridge probably was the pivotal battle in this pivotal campaign in the Pacific War. Had the Japanese secured the highlands to the south of Henderson Field, they could have severely disrupted, if not prevented, flight operations. And Edson's Ridge was important in another respect: it helped convince the Japanese that to beat the Americans they would have to make Guadalcanal the centerpiece of their Pacific strategy.

To make good on the resolve to elevate Guadalcanal’s strategic status, none other than Admiral Isoroku Yamamoto developed a grand plan. In October, the Combined Fleet would support a large, high-speed convoy which would land sufficient troops and supplies to retake Guadalcanal. Japanese battleships would position themselves offshore prior to the landings and bombard Henderson Field, making it unusable for Marine aviators. With air superiority the Japanese could interdict and then retake the island.

As Yamamoto prepared the knock-out blow, Japanese planes based at Rabaul kept up the pressure with daily bombing raids. The two sides also sparred in several ground actions near the Matanikau River. (One of these skirmishes provided John Hersey with the material for his combat report, Into the Valley.)

Yamamoto's operation began with a very loud bang on the night of October 13-14. Two battleships lobbed about 1,000 shells at Henderson Field in what the Americans called “the Bombardment.” It was a terrifying event for those on the receiving end, and it succeeded in temporarily knocking out the main airstrip, as well as destroying most of the aircraft and aviation fuel. But a smaller airstrip and a couple of dozen fighters managed to survive. From all accounts this was perhaps the most desperate time of the entire campaign for the Americans. Their ability to command the skies was questionable; the USN was much weaker than Yamamoto's Combined Fleet; and a large Japanese convoy was en route.

On October 14-15, the Japanese offloaded about 4,500 troops and two-thirds of their supplies before American air attacks sank three transports and forced the rest to retire. Heroic action by the handful of planes still flying out of Henderson Field combined with newly arrived reinforcements and air strikes from the carrier Hornet made life miserable for the Japanese ashore. Yamamoto countered with an air strike from two of his carriers on the morning of October 17, but American cryptanalysts detected the attack in advance, and Wildcat fighters fought off the attackers, who inflicted only limited damage. Nevertheless, the Leathernecks braced themselves for a major Japanese ground assault on positions surrounding Henderson Field.

Yamamoto's plan called for a swift Japanese ground assault, building on the “shock and awe” impact of the Bombardment. But the assault did not occur until the night of October 24-25. In typical Japanese tactical fashion, simplicity was sacrificed for multi-pronged attacks. In theory, coordinated assaults made sense, but in the jungles of Guadalcanal maintaining the cohesion of even small units was extremely difficult. Complicated maneuver requiring the coordination of large units was virtually impossible. From the start, the operation was a mess. Advance patrols got lost, and the main units groped blindly in dense jungles with very little semblance of order. The Japanese finally attacked a defensive line commanded by Marine Lt. Col. Lewis “Chesty” Puller. Supported by substantial artillery fire, Puller's men mowed down the Japanese, who once again underestimated both the devastating impact of American firepower and the enormous challenge of the Guadalcanal terrain.

While the Japanese effort to dislodge the Marines collapsed in disarray, the two navies met again. The USN had just appointed a new commander for the South Pacific, one of the most memorable senior naval officers of World War II: Vice Admiral William F. “Bull” Halsey. The appointment of Halsey, who exuded confidence and an aggressive spirit, was a shot in the arm for American forces. At the Battle of the Santa Cruz Islands, the USN lost the carrier Hornet and a significant number of planes but succeeded in turning back a stronger Japanese naval force, decimating its aircraft and aircrew strength. Three of four Japanese carriers engaged in the battle had to retire to home waters for repairs. The Japanese found it harder to replace these losses than the Americans did to deploy another aircraft carrier. All the while, the battle of attrition in the air raged. During October the Japanese lost 131 planes at a cost of 103 American aircraft. Time was running out on Guadalcanal for the Rising Sun.

In mid-November, the Japanese attempted one last major effort to turn the tide in the South Pacific. This time, they assembled a convoy carrying 30,000 troops to land on Guadalcanal and overwhelm the Americans. But a naval covering force ran into American units on November 13, and a tough, close-range night action ensued. In this first phase of the naval Battle of Guadalcanal, both sides lost a few warships (including the American cruiser Juneau and the aging Japanese battleship Hiei). From the larger strategic perspective, the most important aspect of this engagement was that the Japanese convoy aborted its run and returned to port. But the battle is remembered more for the tragic saga of the Juneau's survivors, who drifted on rafts in shark-infested waters for days. Their plight has become one of the most sobering stories of the war. Of the 683 sailors serving on the Juneau, only 14 survived. This was the largest proportional loss of life of any American warship of cruiser size or larger during the entire war. Among the losses were the five Sullivan brothers from Waterloo, Iowa, who had requested—against normal navy practice—to serve together on the same warship.

The Japanese convoy took to sea again the next night. This time Halsey had two battleships waiting in ambush at Iron Bottom Sound. The ensuing engagement off Savo Island on the evening of November 14-15 was an American victory. Not only did the Japanese lose the battleship Kirishima, a destroyer, and several transports, but they were only able to land about 2,000 troops along with just a few days’ supply of food and ammunition. Compounding the difficulty for the Japanese, the naval action served to cover the successful landing that same evening of about 5,500 Americans with tons of supplies.

The relative ability of each side to reinforce and supply the troops on Guadalcanal was the single most significant factor in the campaign. Sustained military operations on the island required a steady stream of food, ammunition, medical supplies, and more men to replace those lost in combat and to disease. Despite several successful naval encounters, the Imperial Navy proved incapable of providing the necessary logistical support. The naval Battle of Tassafaronga on November 30 is a perfect example. In attempting to reinforce their troops again, a covering force of Japanese destroyers inflicted another humiliating defeat on the USN, battering several cruisers. But that result was of secondary strategic importance: what really mattered was that critical supplies never reached the desperate Japanese forces on Guadalcanal. Meanwhile the American supply train was becoming more efficient with each passing week. Troops and supplies poured into Guadalcanal. By December, when the Army took over operations from the Marines, the American force totaled 50,000 men.

From a military perspective, the remainder of the campaign was anticlimactic, though by no means uneventful. Life for the Japanese troops on Guadalcanal became wretched. Many died of starvation, and many more were so weakened by malaria and malnutrition that they could not fight. The Japanese high command, recognizing it had lost the campaign, smuggled a fresh force onto the island to serve as a rearguard to allow the survivors the opportunity to retreat to the northwest corner of Guadalcanal for risky evacuation by sea. Meanwhile U.S. troops steadily—albeit perhaps too cautiously, given the pathetic condition of their opponents—pushed the Japanese back, annihilating pockets of resistance. In early February 1943, the Japanese skillfully executed evacuations of nearly 13,000 from Cape Esperance. But this was not a Pacific Dunkirk. The Japanese were not retreating in order to return to offensive operations sometime in the future. They would never again adopt a strategically offensive stance in the Pacific War. Organized resistance on Guadalcanal ended on February 9, though the last known Japanese straggler surrendered in October 1947.

Historian Eric Bergerud maintains that Guadalcanal was a catastrophe for the Japanese. Approximately 25,000 Japanese soldiers—by one estimate, two-thirds of all the Japanese who served on Guadalcanal—died. The total exceeds 30,000 when Japanese sailors and airmen are included. The Japanese lost 24 warships and almost 700 aircraft. Allied losses were also high: 25 Allied warships (one Australian cruiser, the rest American) were sunk, including two fleet carriers, and over 600 aircraft were shot down or destroyed. The United States suffered approximately 7,000 fatalities in the campaign. But, as Bergerud reminds us, the ratio of these losses does not adequately measure Guadalcanal's significance. Prior to the campaign, Tokyo assumed that American soldiers would not stand up to Japanese infantry. Guadalcanal proved that the Americans could match the Japanese in courage and resolve, while exceeding them in firepower and besting them in logistics. Guadalcanal also had an enormous impact on Japanese strategy and operations in the South Pacific. Efforts to retake Guadalcanal came at the expense of the Japanese drive on Port Moresby in New Guinea, which collapsed in the face of counterattacks by Australian and American forces. The Japanese hold on the South Pacific, which looked so strong in the summer of 1942, crumbled one year later. Guadalcanal changed everything.

Almost everyone who writes about the ground combat in the Pacific War, especially at Guadalcanal, comments on how brutal and unrestrained the fighting was. Bergerud, for example, describes the South Pacific battlefield as an intensely savage place where no quarter was given. The Pacific War, he writes, was “the most vicious light-infantry war ever fought by industrial nations.”6

Accounting for the combat savagery is by far the most controversial and troubling issue of the Guadalcanal campaign for military historians. Both sides developed a visceral hatred of each other. In one of the most influential accounts of the problem, John Dower attributed the carnage of the Pacific War to racial hatred. The Japanese cultivated stereotypes of Americans as unclean, materialistic demons, whereas American magazines and cartoons crudely presented the Japanese as bespectacled, buck-toothed simians. Racial hatred and dehumanization of the enemy, Dower concludes, led to a merciless war.7

Craig Cameron has refined Dower’s thesis, offering a stronger link between racism and regular battlefield behavior. Cameron is concerned with the images that Marines had of themselves and the “Japanese Other” that shaped their actions in combat. It is a rich, often disturbing argument. The Marines saw themselves as the “warrior representatives” of American national character, and they landed at Guadalcanal prepared for “a clash of warring samurai.” As the fighting raged on, however, the Marines developed a view of their Japanese opponents that differed sharply from their view of themselves. In American eyes, the Japanese fought with “almost demoniacal fanaticism” and contempt for life. And they could be counted upon to resort to treachery and cunning. Given this approach to combat, the notion that the Japanese were fellow warriors evaporated, as did much of the Americans’ restraint with Japanese prisoners and wounded as the Guadalcanal campaign unfolded.8

Bergerud has responded thoughtfully to the Dower-Cameron argument. He concludes that ingrained racism and a faulty appreciation for “the Other,” although undoubtedly present, are inadequate explanations, no matter how appealing they might be to the prevailing sensibilities of the academy. He argues that the Marines’ hatred of the Japanese—a hatred, incidentally, unlike the passions voiced by American sailors and airmen in the campaign—arose out of firsthand experience with the Japanese battle-ethos of death.

According to Bergerud, the only explanation for this visceral hatred and lack of restraint on the battlefield is fear mingled with a lust for revenge. Early on, the Marines perceived that the Japanese were uniquely cruel fighters who preferred death to surrender, even when there was no clear military purpose involved. The fate of Goettge’s patrol and Ichiki’s suicidal attack at the Tenaru confirmed this. Every encounter with the Japanese generated an intense sense of danger and fear. Since the Japanese would do anything to kill Americans, the Marines took no chances. The “savage physical environment” of Guadalcanal only intensified the fear. Visibility was often limited to a few yards in a jungle filled with strange and threatening sounds. Without dismissing the ferocity of American combat practices, Bergerud points the finger at the Japanese military government for indoctrinating soldiers “to find meaning in oblivion, and to accept the frightening idea that spiritual purification comes through purposeful death.”9

The past—especially its military dimension—is a vast storehouse for those who would rummage its contents to find lessons for the present moment. Predictably, Guadalcanal gets trotted out a lot these days. Those who believe that war inevitably unleashes unspeakable evil can find plenty of evidence in the Guadalcanal campaign to support their case. Others who derive inspiration from acts of bravery and sacrifice in war look to Guadalcanal’s rich supply of source material. No wonder we are still drawn to Guadalcanal. Something very significant happened there—something that altered the course of the Pacific War, and that continues both to inspire and to repulse us.

Donald A. Yerxa is editor of Historically Speaking and a professor of history at Eastern Nazarene College.

1. This paragraph—indeed, the bulk of this essay—is largely based on two superb books: Richard B. Frank, Guadalcanal: The Definitive Account of the Landmark Battle (Penguin, 1990); and Eric Bergerud, Touched with Fire: The Land War in the South Pacific (Penguin, 1996).

The literature on Guadalcanal may exceed that of any other single campaign in World War II. Some of the best military writers have been attracted to Guadalcanal, including Richard Tregaskis, whose Guadalcanal Diary became a best seller as soon as it was published in 1943 and remains a model of war reporting, and the dean of mid-20th-century naval historians, Samuel Eliot Morison. For the reader with other interests in life, a modest list is recommended. Ronald H. Spector’s Eagle Against the Sun: The American War with Japan (Vintage, 1984) remains the best single-volume treatment of the Pacific War. Bergerud’s Touched with Fire is unsurpassed in its ability to convey what the Guadalcanal campaign was like for both sides, and he properly treats Guadalcanal alongside parallel operations in New Guinea. And while historians usually place the adjective definitive in sneer quotes, it is hard to imagine a better operational history of the Guadalcanal campaign from both sides than Frank’s book. His narrative keeps an eye on the grand strategic context while masterfully integrating the complex air-land-sea components of this campaign.

2. The Japanese offered fairly stiff resistance against a parallel attack on nearby Tulagi, but they were soon subdued.

3. Frank, Guadalcanal, p. 120.

4. Ibid., p. 133.

5. Overall, the Americans were more willing to risk carriers than the Japanese, even though the Americans had fewer of them available at the time. The reasons go beyond superior shipbuilding capacity. Strategically speaking, the United States was a maritime-oriented sea power, whereas Japan, despite its magnificent navy, remained at core a land power. The bulk of Japan’s military resources in 1942 and throughout the war were devoted to the ground war in China. Throughout modern history, as naval historian Clark Reynolds argues in several important books, land powers are typically more reluctant to commit their naval forces in all-out battle, knowing that they lack the resources to replace lost ships.

6. Bergerud, Touched with Fire, p. 271.

7. John Dower, War Without Mercy: Race & Power in the Pacific War (Pantheon, 1986).

8. Craig Cameron, American Samurai: Myth, Imagination, and the Conduct of Battle in the First Marine Division, 1941-1951 (Cambridge Univ. Press, 1994), pp. 30-48, 89-129.

9. Bergerud, Touched with Fire, pp. 403-25.

Copyright © 2007 by the author or Christianity Today/Books & Culture magazine.
