Erika Lorraine Milam on Creatures of Cain: The Hunt for Human Nature in Cold War America

After World War II, the question of how to define a universal human nature took on new urgency. Creatures of Cain charts the rise and precipitous fall in Cold War America of a theory that attributed man’s evolutionary success to his unique capacity for murder. Drawing on a wealth of archival materials and in-depth interviews, Erika Lorraine Milam reveals how the scientists who advanced this “killer ape” theory capitalized on an expanding postwar market in intellectual paperbacks and widespread faith in the power of science to solve humanity’s problems, even to answer the most fundamental questions of human identity.

What surprised you when you were researching the book?

I never intended to write about violence. The book started as a kernel of a story about the development and reception of an educational program called Man: A Course of Study, or MACOS. When Americans learned that the Soviet Union had launched the world’s first man-made satellite into orbit, they feared the technological prowess of Soviet engineers and scientists would quickly outstrip their own, unless they poured significant energy into science education. The result was a series of educational programs developed by experts and made available for use in elementary school classrooms around the country: the PSSC, BSCS, and others. MACOS was the first to tackle questions central to the social sciences. Led by cognitive psychologist Jerome Bruner, it focused students’ attention on three questions: “What is human about human beings? How did they get that way? How can we become more so?” I wanted to know more. The program, I discovered, used a wide array of materials—among them: films, booklets, and board games—to get students to contemplate these larger questions about the diverse communities in which they lived. But quickly I realized, too, that when MACOS was adopted by local school systems it was met with protests from community members who objected to the violent content of the materials. It was difficult for me to square the project’s intentions with the accusations hurled at it only a few years later. My research snowballed. Debates over violence during the Cold War—its causes and consequences—served as proxies for scientists thinking about questions of sex, race, and their own contested authority to answer these fundamental issues. This book is the result.

You interviewed a lot of people for the book. What was that like?

Thanks for asking me this! Creatures of Cain would have been a very different book without the generosity of the scientists and writers who took the time to speak with me about their research. In reconstructing past events, historians necessarily rely on archival research. This works brilliantly when people have already deposited their correspondence and papers in an archive, but those collections are rarer than you might think, are often highly curated, and are usually available only after someone has died. (Not everyone is keen to have future historians read through old letters.) When working on recent history, talking with scientists while they are still alive gives historians like me access to voices and perspectives that would otherwise be difficult to include. Much about a scientist’s life is never recorded in a paper trail: from the books and experiences people found inspiring when they were teenagers to the friends and colleagues who sustained them during and after graduate school. Talking with people about their histories is thus invaluable, especially in trying to recreate informal networks of collaboration that I would otherwise have missed. Plus, I find it thrilling to meet people in person. The lilting cadence of a voice, the disorderliness of an office, or the art on a wall: each of these things leaves a singular impression impossible to glean from the written word alone.

How did you choose the images for the book?

For centuries images have played a crucial role in communicating scientific ideas, including concepts of human nature. After the Second World War, with the exciting coverage of paleoanthropological fossil discoveries in Africa and nature documentaries about modern human cultures from all over the world, still and moving images stirred audiences’ interest in anthropological topics. When selecting images for the book, I chose to emphasize drawings and illustrations that depicted the theories under discussion or scientists hard at work. Their striking visual styles reflect both the artistic conventions of the time and the highly visual nature of scientific conversations. More so than photographs, which can easily be read as flat representations of the past, I hope these images center readers’ attention on the creativity required to bring theories of human nature to life.

How did you become a historian of science?

I came to the history of science fortuitously. In my undergraduate and early graduate work, I studied biology. Only in my second year of graduate school in the Ecology and Evolutionary Biology program at the University of Michigan did I come to realize that there was a whole community of people, like me, who were interested in the humanistic study of science, technology, and medicine. I started reading books on the history of evolutionary theory, on gender history, and on the history of American science. I was gripped. Now I study how intellectual and social concerns are tightly bound together within scientific inquiries. I find especially fascinating research on the biological basis of sex and aggression in human behavior—each of which touches on the broader question of what it means to be human in a naturalistic world.

What are the lessons for us today that we learn from Creatures of Cain?

When I talk about my project, people ask me whether the growing violence of the struggle for Civil Rights domestically or the escalating Vietnam War made it easier for scientists and citizens to embrace the idea that humans were naturally murderous. The “killer ape” theory, as it came to be known, posited that the crucial divide between humans and all other animals lay in our capacity to kill other members of our own species. Did the violence of the era, perhaps, explain why it was easy to imagine the history of humanity as characterized by violence and only punctuated by moments of peace? I answer by saying that only a decade earlier, in the wake of the death and horrific atrocities of the Second World War, scientists chose instead to emphasize the fundamental unity of humankind. Only through a common struggle against the environment, they argued, had our human ancestors survived life on the arid savannah—we humans may have clawed our way to the present, but we did it together. Biological theories of human nature have been used both to dehumanize and to promote progressive anti-racist conceptions of humanity as a whole. As these accounts demonstrate in juxtaposition, there is no consistent correlation between the desire to biologize human nature and either periods of violence or schools of ideological persuasion.

Equally important, fundamental questions about the nature of humanity—in the colloquial scientific books I make the center of my analysis—have helped recruit and inspire generations of students to pursue careers in the natural and social sciences. Even though such discussions rarely appear in the pages of professional scientific journals, they are central to how scientific and popular ideas about human nature change. Drawing a sharp distinction between specialist and non-specialist publications would thus distort the history of ideas about human nature in these decades. After all, scientists read (and reviewed) colloquial scientific publications, too, especially when exploring new ideas outside their immediate expertise.

When observations that chimpanzees also killed chimpanzees became broadly known in the latter half of the 1970s, it spelled the end of the killer ape theory. Although the idea that aggression provided the secret ingredient to the unique natural history of humanity has faded, this theory helped lay the groundwork for how scientists conceptualize human nature today.

Bonus question (if you dare): Please summarize the book in a tweet.

Oh wow! Okay, here’s a sentence from the introduction that actually fits: “In its broadest scope, Creatures of Cain demonstrates that understanding the historical fate of any scientific vision of human nature requires attending to the political and social concerns that endowed that vision with persuasive power.”

Erika Lorraine Milam is professor of history at Princeton University. She is also the author of Looking for a Few Good Males: Female Choice in Evolutionary Biology.

An Interview with the Authors of Dark Matter Credit

Imagine a world without banks. Because there are no credit cards, you have to pay cash for everything, and there’s no way to borrow either. How do you buy a car or a house, or start a new business? You hide cash under your mattress. Such a world would be desperately poor, or so research in economics teaches us. Yet somehow Europe managed to become rich long before banks spread across the continent. How was that possible?

Dark Matter Credit by Philip T. Hoffman, Gilles Postel-Vinay, and Jean-Laurent Rosenthal solves the mystery. Using data on 250,000 loans from France, the authors found that credit abounded in Europe well before banks opened their doors, thanks to a huge shadow credit system whose importance no one had ever measured before. The system let nearly a third of French families borrow way back in 1740, and by 1840 it funded as much mortgage debt as the 1950s US banking system. And when banks finally appeared, it out-competed them, helping people to borrow, save, and even make payments. It thrived right up to World War I, not just in France but in Britain, Germany, and the United States, only to be killed off by government intervention after 1918.

According to the authors, their discovery overturns standard arguments about banks and economic growth and reveals a shadow system made up of thousands of loans between individuals, as in modern peer-to-peer lending. Dark Matter Credit sheds light on the problems peer-to-peer lending will face as it spreads and suggests how those problems can be solved.

What led you to uncover a huge and unknown shadow banking system?

We knew that people were borrowing and lending long before banks existed, because thousands of loan contracts survived in the French archives. We wanted to know how that was possible without banks. How did the lenders know that the borrowers would repay? After all, there was no such thing as a credit score or even an easy way to tell if property had been mortgaged, and potential lenders had for centuries been worried about the risk of default. Could lenders only make loans to family members or close friends? Was that how credit markets worked? If so, lending would have been severely limited.  Early investigations suggested, though, that lending was not so small, and not as local as previous scholars had thought. We suspected that informal intermediaries were matching borrowers and lenders and increasing the level of confidence in the market. To get at what had actually happened, we set out to measure all this lending across France and to analyze what made it possible.

How much lending was there?

Well, in 1840, outstanding mortgage debt came to 27 percent of GDP. That was almost as much as in the United States during the housing boom in the 1950s, when there were numerous banks, savings and loans, and government-backed mortgages, but all the lending in France was done without any bank involvement, and without any of the government support that stimulated housing construction in the United States. Even way back in 1740, the credit system in France allowed a third of all families to borrow and lend. And the system was incredibly persistent: it was only killed off by government intervention after 1918, but even as late as 1931, it was still providing 90 percent of all borrowers with their loans.

How did it work?

The loans, it turns out, were arranged by notaries, who had been drawing up legal documents and preserving official copies of records since the Middle Ages. Over time, they began serving as real estate brokers and providing legal and financial advice, and since they knew who had money to lend and who was creditworthy, they were soon matching lenders up with borrowers who had good collateral and were likely to repay. And if they couldn’t find a match among their own clients, they referred borrowers and lenders to one another. One notary might send a good borrower off to another notary, or he might receive a lender from yet another notary. That allowed loans to be made when the borrowers and lenders didn’t know one another. The loans didn’t pass through banks at all—they were all loans between individuals, as in modern, web-based peer-to-peer lending, but all without the web, obviously.

Did it do anything else?

The notaries also helped people make payments and manage their savings. And their loan business continued to thrive after banks opened their doors. There were in fact more banks in France than anyone imagined (we know—we counted them), but it took them nearly a century to make any serious inroads into mortgage lending. We also discovered that notaries and bankers actually cooperated with one another to devise a new way for peasants to pay their bills at a time when doing so was difficult outside of cities. This sort of innovation is surprising because it runs counter to an influential argument that financial markets should have been stifled by the legal system prevailing in France and many other parts of the world—so-called civil law, which was supposedly less favorable to financial development than British and American common law. That argument is also contradicted by the fact that the notaries themselves were thriving loan brokers, because the notaries kept the written records that were at the heart of the civil law.

How did you measure all the lending?

We visited a lot of archives! We had to, because we started in a period before there were any government statistics about lending. So we assembled loan information from original contracts and fiscal sources. Of course, reading a quarter of a million loan contracts would have been impossible, but we also knew that summaries of the loans survived in French tax archives from the early eighteenth century up through the 1900s. The tax records plus some ingenious sampling allowed us to gather the data on our quarter of a million loans and to estimate what was happening in the credit market for France as a whole across two centuries. With the sample, we could analyze the impact of urbanization, economic growth, financial crises, and enormous institutional changes during the French Revolution and the nineteenth century. We also investigated the spread of banking in France and the interaction between bankers and notaries, and we compared French banking with banking in Britain. The comparison suggested that Britain probably lacked as strong a peer-to-peer lending system as France’s, although it did have one. Evidence from other countries implies that similar systems operated in Germany and the United States in 1900. They too had big peer-to-peer lending systems that have yet to be explored. And one has recently cropped up in China, but it has caused massive losses and triggered protests, because of problems that the French system avoided.

Philip T. Hoffman is the Rea A. and Lela G. Axline Professor of Business Economics and History at the California Institute of Technology. Co-author Gilles Postel-Vinay is professor emeritus at the Paris School of Economics, and co-author Jean-Laurent Rosenthal is the Rea A. and Lela G. Axline Professor of Business Economics and the Ronald and Maxine Linde Leadership Chair in the Division of the Humanities and Social Sciences at the California Institute of Technology.

Luke Hunter on Carnivores of the World

Covering all 250 species of terrestrial, true carnivores, from the majestic polar bear and predatory wild cats to the tiny least weasel, Luke Hunter’s comprehensive, up-to-date, and user-friendly guide, Carnivores of the World, features 93 color plates by acclaimed wildlife artist Priscilla Barrett that depict every species and numerous subspecies, as well as more than 400 drawings of skulls and footprints. Features new to this edition include revised and expanded species coverage, a distribution map for every species, 25 new behavioral illustrations, and much more. Detailed species accounts describe key identification features, distribution and habitat, feeding ecology, behavior, social patterns, reproduction and demography, status, threats, lifespan, and mortality. An introduction includes a concise overview of taxonomy, conservation, and the distinct families of Carnivora.

What’s new in the second edition?

The text has been completely revised for the second edition, with new data and observations published since 2011 to update and improve the original text throughout. By way of one example, most reproductive data for the Andean Bear in the first edition had been collected from captive animals, but the first population-level information from long-term research on the species in the wild (in Peru) was published in 2018, and has been incorporated in the book. Similarly, some species which were very poorly known at the time I wrote the first edition have since been the focus of at least one dedicated research effort, providing much better information for the new book; examples include the Bush Dog, Fishing Cat and Narrow-striped Boky.

A major addition in the new edition is the inclusion of nine new species delineated since 2011, largely as a result of recent genetic analyses. Perhaps the most dramatic example is the African Wolf, formerly believed to be an African population of the Eurasian Golden Jackal. The new book covers numerous cases where one species has been re-classified into two or even three, e.g. European, Asian and Japanese badgers, Northern and Southern Oncillas, and Mainland and Sunda Leopard Cats.

Finally, the IUCN Red List category indicating degree of endangerment has been revised for most carnivores; I provide a new assessment of Population Trend for each species; and the second edition includes distribution maps for every species based on the most recent IUCN Red List population data.

It is surprising that so many new species have been described since the first edition was published. How did these discoveries arise?

All the new species in the book arose largely as a result of advances in genetic technology, which have made very powerful and cost-effective analyses widely accessible to researchers. These advances have allowed geneticists to look with ever-increasing resolution at the differences between populations, which, in some cases, turn out to be so-called “cryptic species.” The same process has also revealed cases where populations formerly considered to be separate species (based mainly on appearance) actually show only minor genetic differences, subsuming two former species into one. For example, Grandidier’s Vontsira is now regarded as a distinct population of the Broad-striped Vontsira. Whereas the first edition included accounts of 245 species, the second edition covers 250 species, nine of them newly described.

To many readers, uncovering new species by genetic differences probably does not have the same excitement as news of an entirely unknown animal never before seen by scientists being discovered in a remote corner of the globe. Do you think the new species in the book are as interesting or even valid?

The question of validity is an interesting one; even geneticists debate the degree of genetic divergence indicative of two distinct species (versus lower-level delineations, for example, indicative of sub-species). There is a genuine danger of a ‘gold-rush’ in which researchers rush to publish new discoveries based on relatively minor distinctions between populations: there are already examples in the scientific literature. I took a conservative approach in the book, and included only those new species supported by strong published evidence and generally accepted by the relevant authorities, e.g., the International Union for Conservation of Nature Species Survival Commission (IUCN SSC) Specialist Groups devoted to carnivores.

Even with that, the question of validity remains a moving target. I believe that any newly discovered genetic distinctions must reflect other significant biological differences, such as in morphology, ecology, distribution and especially in reproductive isolation, the classic (some say old-fashioned!) defining characteristic of species. This is not always well understood, even for some of the new species included in this new edition. In an introductory section on the 13 families of terrestrial carnivores, I list other cases that I consider borderline or questionable; these are not treated as full species in the book but some may eventually be recognized as such with better data and analyses in future. This is a story that will continue to unfold.

Priscilla Barrett’s artwork is superb, with many species which have never been so accurately and beautifully painted. What was it like working with her?

Priscilla is an exceptional collaborator. With her zoology background, she brings a scientist’s rigor to the process. She draws on her vast collection of reference material (photos of museum skins and samples, sketches and notes from the field), and we also used hundreds of recent camera-trap images, supplied by colleagues from around the world, including images of many species or forms that have otherwise never been photographed in the wild. The result is art that is not only beautiful but also highly accurate; viewing Priscilla’s carnivores, I always feel a surge of recognition, that she has captured the true essence of each species.

Beyond each individual piece of art, each plate benefits from Priscilla’s very intuitive sense of design. The process started with her sketching rough lay-outs to decide the poses for each species or form, and how each interacted with the others on the page. Once we had decided that a plate worked, she painted all of the components. It has been very rewarding for me to come to understand how that process produces complete plates with both balance and life.

Field guides to mammals are becoming more common. Do you think this reflects greater interest in watching mammals?

Two colleagues who recently published a review of mammal-watching put it nicely when they said ‘Mammalwatching today is arguably where bird-watching was a century ago.’ That said, the same paper notes how mammal-focused tourism has increased dramatically in the last couple of decades, not only for the large charismatic species that every safari-goer to Africa wants to see, but increasingly for small and often difficult-to-see species requiring specialist guides and local knowledge.

Amateur mammal-watchers have also contributed to scientific discoveries, including the first documented record since the 1970s, complete with terrific photos, of the virtually unknown Pousargues’ mongoose in Uganda, and the first records of Pale Fox and Rüppell’s Fox from northeastern Ethiopia; I referred to both papers for the second edition. I also had access to many dozens of trip reports written by mammal-watchers since the first edition. There’s little doubt all this reflects an increase in mammal-focused tourism, a trend that I am sure will continue. And one, I hope, that helps foster the growing demand for more and better mammal-focused field guides!


Luke Hunter is one of the world’s leading authorities on wild carnivores. His books include Wild Cats of the World and Cheetah. He lives in New York City.

Stanley Corngold on Walter Kaufmann: Philosopher, Humanist, Heretic

Walter Kaufmann (1921–1980) was a charismatic philosopher, critic, translator, and poet who fled Nazi Germany at the age of eighteen, emigrating alone to the United States. He was astonishingly prolific until his untimely death at age fifty-nine, writing some dozen major books, all marked by breathtaking erudition and a provocative essayistic style. He single-handedly rehabilitated Nietzsche’s reputation after World War II and was enormously influential in introducing postwar American readers to existentialism. Until now, no book has examined his intellectual legacy. Stanley Corngold’s Walter Kaufmann provides the first in-depth study of Kaufmann’s thought, covering all his major works.

How did you come to write this book?

There is an immediate cause and a deeper one. The immediate cause was Princeton University Press’s renewed interest in the work of Walter Kaufmann. After publishing a new edition of Kaufmann’s masterwork Nietzsche: Philosopher, Psychologist, Antichrist, the Press decided to republish another distinguished work by Kaufmann—The Faith of a Heretic (1959, 2015). I was approached to write a preface and gladly accepted. To do the job I read a good deal more of Kaufmann and was struck by his astonishing range of interests and the clear and vital precision of his writing. I then proposed a book to the Press that would cover the (near) entirety of his corpus—Walter Kaufmann: Philosopher, Humanist, Heretic—and here it is: a critical compendium to all his major works.

You said there was a deeper reason.

Yes, my “experience” of Walter goes back to early days. As I note in a chapter on Kaufmann’s extraordinary first book, “In summer 1954, a naval cadet in the NROTC unit at Columbia University, I lay sprawling on the steel floor of the destroyer USS Steinaker reading Nietzsche: Philosopher, Psychologist, Antichrist, the cover quite visible and flagrant. An officer saw me and shouted, ‘Why are you wasting your time reading this book!’” Ever since then, I have felt myself especially protective of this book, the author, and his subject.

Is that necessary? Does Nietzsche need protection from serious readers?

One reads that Kaufmann, on arriving at Princeton in 1947 as an assistant professor of philosophy, was introduced to Albert Einstein; both, after all, were German-Jewish émigrés from Berlin. Einstein asked Kaufmann about the subject of his Harvard Ph.D. thesis and Kaufmann replied, “Nietzsche’s Theory of Values.” Einstein is supposed to have responded, “But that is simply dreadful!” Nietzsche had been stained with a (mostly spurious) Nazi stripe. But Kaufmann was certainly not stopped in his tracks by Einstein’s dismay or other scholars’ horror of the subject. His 1950 masterwork is an original and decisive defense of Nietzsche as a serious thinker in a humanistic tradition of Bildung (or self-formation)—a thesis that has produced volumes of critical commentary by professional philosophers even today, some 70 years later!

Weren’t you and Walter Kaufmann contemporaries—at least for a time—at Princeton?

We were. I’d like to recall my first encounter with Walter, though, which preceded our few, informal meetings at Princeton—they were few and informal because, at that time, owing to my training, I belonged to a rival school of thought—Deconstruction or, better, Rhetorical Analysis—that called for a different way of reading Nietzsche, tending to “put under erasure” all his substantive claims. I’ll quickly add that almost all of Kaufmann’s oppositional readers were dependent on his superb Nietzsche translations! But a certain resistance to Kaufmann’s work on my part had set in by that time, beginning even with his in-person presentation of the Existentialist worldview at Columbia University in 1955. To my regret, I was unable to feel myself addressed, for the very callow reason that I could not expect a professor who himself looked like an undergraduate and, as I recall, wore lederhosen, to speak with much authority. Since then, evidently, I have learned to take him very seriously!

Do you treat Kaufmann’s life and personality in your book?

Only glancingly. I’ve been eager to follow Kaufmann’s own instruction, and to address the very best part of him in the pages that he wrote. That is how he wished to be remembered. But you cannot overlook the striking features of his life and personality: the fact, for example, that at the age of 13, being dissatisfied with his converted father’s Lutheran account of the Holy Spirit, he demanded an official state document certifying his withdrawal from the church, which prepared him for his conversion to Judaism. In fact, his heritage was Jewish in the very first place. What stands out is the extraordinary boldness of a very young man in 1933, no doubt aware of Hitler’s ascension to power, converting “back” to Judaism!

Do you treat him, then, as a Jewish writer?

Well, it is not perfectly clear what a “Jewish writer” is, besides the obvious, but the thrust of your question is to ask about his commitment to Judaism. The answer is that soon after his arrival in the United States in 1939 (he attended Williams College), he turned away from this and any other devotion to the rituals of a church or synagogue. On the other hand, his work is marked by a deep admiration for the ethical teachings of the Hebrew Bible. And he remained attached to the “religious experience” of both himself and others.

What do you mean by “religious experience” outside of an attachment to this or that world religion?

One could quote Einstein, in this case, to give color to Kaufmann’s position. Einstein speaks of “the mysterious … the fundamental emotion which stands at the cradle of true art and true science … the experience of mystery—even if mixed with fear—that engendered religion. A knowledge of the existence of something we cannot penetrate, of the manifestations of the profoundest reason and the most radiant beauty, which are only accessible to our reason in their most elementary forms—it is this knowledge and this emotion that constitute the truly religious attitude; in this sense, and in this alone, I am a deeply religious man.” Kaufmann, the humanist, would locate the “mysterious” in the human aspiration to overcome its “ontological deficit”—in a word, to become more. Challenged to explain this fundamental aspiration, Kaufmann wrote, early on: “As human beings, we have ideals of perfection which we generally find ourselves unable to attain. We recognize norms and standards of which we usually fall short; we long for a triumph over old age, suffering, and death; we yearn for perfection and immortality—and seem incapable of fulfillment. We desire to be ‘as gods,’ but we cannot be so.” And still, we strive—or ought to strive. This is his great refrain: a heightening of the Faustian ideal of continual effort or—equally—of the Nietzschean ideal of self-overcoming.

And religion in this?

Toward the end of his short life, his passion for religion was enriched, if you like, by his pilgrimage to the places of religion. He traveled around the world five times and seems to have covered most of the ground by walking. He inspected the sacred places in Asia and the Middle East that armchair philosophers encounter only in photographs and in his later work Religion in Four Dimensions supplied us with these very photographs in a brilliant format.

Do you think the work of Walter Kaufmann has contemporary relevance? And whom did you imagine as your audience?

I have learned a ton from Kaufmann, both by absorbing his statements and by pushing myself to respond to them, either with gratitude or resistance. The latter, especially, called for solid commentary: I was pushed to defend my objections. I do hope the book conveys this lively obligation to the readers I wish for it.

Does a book on Walter Kaufmann inspire other books?

A mathematician, Carl Faith, recalled in his memoirs that in the 70s he had seen Walter Kaufmann and Erich Kahler—a polymathic émigré and, if I may say so, (Thomas) Mann’s best friend—frequenting Princeton’s PJ’s Pancake House.

This led me to the figure of Erich Kahler and the discovery that ca. 1940 in Princeton there was a Kahler Circle, involving several of the great German, mostly German-Jewish émigrés then living in Princeton, including, besides Erich Kahler, Thomas Mann, Albert Einstein, Hermann Broch, and to some extent Ernst Kantorowicz, Erwin Panofsky, and Kurt Gödel. I think a wonderful book could be written about the Circle’s world of thought.

Stanley Corngold is professor emeritus of German and comparative literature at Princeton University. His many books include The Fate of the Self: German Writers and French Theory; Complex Pleasure: Forms of Feeling in German Literature; Lambent Traces: Franz Kafka (Princeton); and Franz Kafka: The Ghosts in the Machine. He lives in Princeton, New Jersey.


David Colander on Where Economics Went Wrong


Milton Friedman once predicted that advances in scientific economics would resolve debates about whether raising the minimum wage is good policy. Decades later, Friedman’s prediction has not come true. In Where Economics Went Wrong, David Colander and Craig Freedman argue that it never will. Why? Because economic policy, when done correctly, is an art and a craft. It is not, and cannot be, a science. The authors explain why classical liberal economists understood this essential difference, why modern economists abandoned it, and why now is the time for the profession to return to its classical liberal roots. Contending that the division between science and prescription needs to be restored, Where Economics Went Wrong makes the case for a more nuanced and self-aware policy analysis by economists.

Where Economics Went Wrong is a somewhat audacious title. Can you briefly tell us what’s wrong with economics?

Our central argument is that scientific work and applied policy work are best done when there is a firewall between science and policy. Classical liberal economists had such a firewall, and we are calling for a return to Classical liberal methodology.

Why have a firewall? The firewall discourages applied policy economists from trying to be too scientific, and economic scientists from worrying too much about the policy implications of their work. The firewall is necessitated by the values inherent in applied policy analysis. Scientific methodology isn’t designed to resolve differences in values. If a theorist is thinking about policy, the theory won’t be as creative as it can be. And if applied policy economics is too closely tied to current theory, it won’t be as creative as it can be. Applied policy work requires that scientific methodology be integrated with more open and discursive engineering and philosophical methodologies that are designed to narrow differences in values and sensibilities and arrive at solutions to policy problems.

There are a lot of books out there criticizing economics; how does your critique differ?

The biggest difference is that we aren’t criticizing all of economics, but only one aspect of it—how economics relates theory to policy. We see ourselves as friendly critics, critiquing the economics profession from the inside rather than from the outside. In our view most of the outside critiques of economics miss their mark—they don’t convey the way top economists see themselves doing economics, which leads top economists to discount the critiques. Our critique is focused narrowly on economists’ blending of economic science and economic policy methodology.

How does the subtitle of the book, Chicago’s Abandonment of Classical Liberalism, fit into your story?

Chicago is a useful case study for us because it was the last bastion of Classical liberalism in U.S. economics. It was Classical liberalism’s Alamo. Classical liberalism included both a methodology and a set of policy recommendations. The methodology involved keeping a firewall between economic science and policy for the protection of both science and policy. Classical liberals argued that if scientific researchers had policy views, those policy views would influence their science and their science would be tainted. If economists used scientific justifications for policy, without making clear that policy had to have a value component, policy would be tainted. It was a broad-tent, not a narrow-tent, methodology, and it reached its high point with the work of John Stuart Mill.

In the 1930s that changed; Classical liberalism was abandoned and was replaced with a new semi-scientific Pigovian welfare economics that blended science and policy into one field. Solutions to policy problems were to be found in better science, not in reasoned discourse.

The applied policy revolution started outside Chicago—at schools such as MIT and Harvard—and was quite pro-government interventionist. It seemed as if economic science was directing government to intervene in the economy. Chicago economists, led by Frank Knight, objected to both the change in methodology and the interventionist nature of the policy recommendations.

With the advent of the Chicago school of economics, the intellectual leadership of Chicago economics moved from Knight to Milton Friedman and George Stigler. They gave up Knight’s methodological fight, and concentrated on objecting to the interventionist nature of the new policy approach. They developed a pro-market scientific economic theory based on the Coase Theorem that led to the policy results they wanted. They presented it as a scientific alternative to the newly developed government interventionist scientific economics theory. In doing so they abandoned Classical liberal methodology, which held that science did not lead to policy recommendations. So the Chicago case study nicely highlights where economics went wrong.

What’s your solution to what’s wrong with economics?

Our solution is to bring back the firewall between science and policy. Using the Classical liberal approach, economic science includes only those aspects of economic reasoning and thinking that all economists agree can be scientifically determined. By design, there should be almost no debate about scientific economic theory. If there is serious debate about the theory, then the theory hasn’t reached the level of scientific theory; it is simply an hypothesis that needs further empirical study. Policy analysis uses economic science, but it also uses any other insights and analysis that the policy economist finds useful to arrive at policy conclusions.

The approach we are advocating for applied policy has much in common with engineering methodology. It is much looser and more open than scientific methodology. Engineering methodology is designed to solve problems, not to find truth. For an applied policy economist, a scientific theory is simply a useful heuristic, to be used when useful. Engineering methodology specifically allows for the integration of values and does not present itself as infallible. It invites challenges and discursive exploration. Using an engineering methodology will make values in economics more transparent, and more subject to philosophical debate that can clear up some of the value and sensibility differences.

Can you be more explicit about how an engineering methodology differs from a scientific methodology?

Adopting an engineering methodology involves a change in how economists think about theory and policy. For an applied policy economist, theory becomes simply a useful heuristic. Debates about science are reduced enormously because the domain of economic science is reduced. In policy analysis a much broader pluralistic methodology is used. Scientific methodology is designed to discover truth, which means it must be very precise. Engineering methodology is designed to solve problems in the least-cost fashion. It is far less precise because precision is costly.

How do you see such a change coming about?

Slowly, but surely. We see it more as an evolutionary change than a revolutionary change. The change is already occurring. Many top economists are already following the Classical liberal methodology we advocate—they just don’t call it that. So one of the goals of the book is to highlight their work and encourage young economists to use it as a model. In the last chapter of the book we consider the work of six top economists who do quite different types of economics—they include theorists, empirical economists, and applied policy economists—who are all currently following what we call a classical liberal methodology. We show how that methodology influences the work they do and the interpretation they give to their work.

Our advice to other economists is to follow their lead. That means that:

  • in policy work, economists should be far less worried about carefully following scientific methodological guidelines; they should replace those scientific guidelines with educated common-sense engineering guidelines designed to answer the type of policy questions they are dealing with.
  • in theoretical work economists should stop worrying about relating theory to policy and let their imagination roam without concern about policy. They should go where few economists have gone before.
  • in blended theoretical and empirical work, economists should be more creative and less concerned about dotting i’s and crossing t’s. Leave that for the theoretical clean-up crew.
  • in econometric work, economists should use all the evidence that sheds light on the issue, not just the limited evidence that meets the profession’s current version of scientific rigor.

Our advice is for economists to free themselves from historically determined methodological scientific conventions and replace those conventions with pragmatic, state-of-the-art conventions that take advantage of technological, computational, and analytic advances.

David Colander is Distinguished College Professor at Middlebury College. His many books include The Making of an Economist, Redux and Complexity and the Art of Public Policy (both Princeton). Craig Freedman is the author of Chicago Fundamentalism and In Search of the Two-Handed Economist.

Jason Brennan on When All Else Fails

The economist Albert O. Hirschman famously argued that citizens of democracies have only three possible responses to injustice or wrongdoing by their governments: we may leave, complain, or comply. But in When All Else Fails, Jason Brennan argues that there is a fourth option. When governments violate our rights, we may resist. We may even have a moral duty to do so. The result is a provocative challenge to long-held beliefs about how citizens may respond when government officials behave unjustly or abuse their power.

What led you to write this book?

Almost daily for the past year, I have come across news stories about police officers using excessive violence against civilians, or about people being arrested and having their lives ruined over things that shouldn’t be crimes in the first place. I watched the Black Lives Matter protests and started reading histories of armed resistance. I watched as president after president killed innocent civilians while pursuing the “War on Terror.” I see people’s lives destroyed by the “War on Drugs,” which continues on the same course even though we have strong evidence it makes things worse, not better. Every day, government agents acting ex officio are committing severe injustices. 

I found that contemporary philosophy was largely impotent to analyze or deal with these problems. Most political philosophy is about trying to construct a theory of an ideal, perfectly just society, which means philosophers usually imagine away the hard problems rather than consider how to deal with them. Philosophers often try to justify the government’s right to commit injustice, but they often rely upon irrelevant or incoherent models of what governments and their agents are like. For example, Suzanne Dovi’s theory of political representation is grounded in a false theory of voter behavior, while John Rawls’s argument for government simultaneously assumes that people are too selfish to pay for public goods and that government agents are too angelic to abuse their power. I saw an opening not only to do original philosophy, but to do work that bears on the pressing events of our times.

You can see that in the book. The “thought experiments” I use are all based on actual cases, including police officers beating up black men who did nothing more than roll slightly past a stop sign; officers shooting unarmed, subdued men; governments spying on and wiretapping ordinary citizens; drone strikes on innocent civilians; throwing people in jail for smoking marijuana or snorting cocaine; judges having to enforce absurd sentences or unjust laws; and so on.

Can you give a summary of your argument?

The thesis is very simple: the conditions under which you may exercise the right of self-defense or the right to defend others against civilians and government agents are the same. If it is permissible to defend yourself or others against a civilian committing an act, then it is permissible to defend yourself or others against a government agent committing that same act. For instance, if I wanted to lock you in my basement for a year for smoking pot, you’d feel no compunction in defending yourself against me. My thesis is that you should treat government agents the same way.

My main argument is also simple: Both laypeople and philosophers have offered a few dozen arguments trying to defend the opposite conclusion: the view that government agents have a kind of special immunity against defensive resistance. But upon closer examination, we’ll see that each of these arguments is bad. So, we should conclude instead that our rights of self-defense or to defend others against injustice do not simply disappear by government fiat. On closer inspection, there turns out to be no significant moral difference between the Commonwealth of Virginia imprisoning you for owning pot and me imprisoning you in my basement for the same thing.

To be clear,  I am not arguing that you may resist government whenever you disagree with a law. Just as I reject voluntarism on the part of government—I don’t think governments can simply decide right and wrong—so I reject voluntarism on the part of individuals. Rather, I’m arguing that you may resist when governments in fact violate people’s rights or in fact cause unjust harm.

Some will no doubt complain this thesis is dangerous. In some ways it is, and I take care to highlight how to be careful about it in the book. But on the other hand, the opposite thesis—that we must defer to government injustice—is no doubt even more dangerous. People tend to be deferential and conformist. Most people will stand by and do nothing while armed officers send people to death camps. Stanley Milgram showed most people will electrocute another person to death because a man in a white lab coat told them to. If anything, defenders of the other side—of the view that we should defer to government injustice—have a duty to be cautious pushing their dangerous view.

Can you talk a bit about the meaning behind the title? What exactly has to fail in order to justify the actions you describe?

Usually, lying, stealing, destroying property, hurting others, or killing others is wrong. However, you may sometimes perform such actions in self-defense or in defense of others. The basic principle of defense, codified in both common law and commonsense morality, is this: you may use a defensive action (such as sabotage, subterfuge, deceit, or violence) against someone else when they are initiating a severe enough injustice or harm, but only if it is necessary to defend yourself. Here, “necessary” means that you cannot use violence if a nonviolent means of defense is equally effective; you cannot use deceit if a non-deceitful means of defense is equally effective. So, the title is meant to signal that defensive actions—such as deceit or violence—are, if not quite last resorts, not first resorts either. 

What is the place of uncivil disobedience within a peaceful and successful polity?

What we call “civil disobedience” is a form of public protest. In civil disobedience, people publicly and explicitly break the law for the purpose of trying to have the law changed. They will often accept legal punishment, not necessarily because they think punishment is warranted and that even bad laws must be respected, but because it is strategic to do so to garner sympathy for their cause. Civil disobedience is about social change.

But self-defense is not about social change. If I kill a would-be mugger, I’m not trying to reduce crime or change gun policy. I’m trying to stop myself from being the victim of that particular injustice. Similarly, if you had been present and had acted in defense of Eric Garner, you would not necessarily have been trying to fix American policing—you would have just been trying to save Garner’s life. Defensive actions—or uncivil disobedience—are about stopping particular wrongdoers from committing particular harms or violating particular people’s rights. 

What are your thoughts on recent protests and movements such as Take a Knee, Me Too, and March for our Lives?

Globally, US policing and US criminal policy are outliers. American criminal justice is unusually punitive and harsh. We have 4.4% of the world’s population but around 25% of the world’s prisoners. We give longer, harsher sentences than illiberal countries such as Russia or China. Our police are unusually violent, even to the most privileged in our society. I applaud movements that bring attention to these facts.

It wasn’t always this way. In the 1960s, though the US had a higher than normal crime rate, its sentence lengths, imprisonment rate, and so on, were on the high end but similar to those of other liberal, rich, democratic countries. But starting in the 1970s, things got worse. 

Right now, Chris Surprenant and I are writing a book called Injustice for All explaining why this happened and offering some ideas about how to fix it. We argue that the problem is not explained by racism (as leftists argue), the War on Drugs (as libertarians argue), or crime and family collapse (as conservatives argue), though these things are each important factors. Rather, the US criminal justice system became dysfunctional because nearly every person involved—from voters to cops to judges to politicians—faces bad incentives created by bad rules.

Are there examples from history of individuals or groups following your philosophy with success?

Two recent books, Charles Cobb Jr.’s This Non-Violent Stuff’ll Get You Killed and Akinyele Omowale Umoja’s We Will Shoot Back provide strong evidence that the later “nonviolent” phase of civil rights activism succeeded (as much as it has) only because in earlier phases, black Americans involved in protest armed themselves in self-defense. Once murderous mobs and law enforcement learned that they would fight back, they turned to less violent forms of oppression, and activists in turn began using the nonviolent tactics with which we are familiar.

Do you think there are changes that can be made that would lessen instances in which uncivil disobedience is justified?

A facile answer: all governments have to do is respect citizens’ rights.

More realistically: we need to train police differently, change recruitment tactics, and stop using SWAT teams so often. We should decriminalize many behaviors that are currently criminalized. We need to change tax codes so that poor localities are not dependent upon law enforcement issuing tickets to gain revenue. We need Congress to rein in the executive branch’s war and surveillance powers.

But even these kinds of ideas are too facile, because there is no willpower to make such improvements. Consider an example: violent crime in the US has been dropping since 1994 (and no, it’s not because we keep locking up all the violent criminals). Yet most Americans mistakenly believe, year after year, that crime is rising. They feel scared and vote for politicians who promise to be tough on crime. The politicians in turn support more confrontational, occupying-force style methods of policing. Here, we know what the problem is, but to fix the system we need to fix the voters, and we don’t know how to do that. To be clear, When All Else Fails is not a theory of social change, and not a prescription for fixing persistent or systematic social problems. As I often tell my political economy students, while we may know which institutions work better than others, no one yet has a good account of how to move from bad institutions to good.

Jason Brennan is the Robert J. and Elizabeth Flanagan Family Professor of Strategy, Economics, Ethics, and Public Policy at Georgetown University’s McDonough School of Business. His many books include Against Democracy and The Ethics of Voting.

Ethan Shagan on The Birth of Modern Belief

This landmark book traces the history of belief in the Christian West from the Middle Ages to the Enlightenment, revealing for the first time how a distinctively modern category of belief came into being. Ethan Shagan focuses not on what people believed, which is the normal concern of Reformation history, but on the more fundamental question of what people took belief to be. Brilliantly illuminating, The Birth of Modern Belief demonstrates how belief came to occupy such an ambivalent place in the modern world, becoming the essential category by which we express our judgments about science, society, and the sacred, but at the expense of the unique status religion once enjoyed.

What led you to write this book?

Good works of history often begin with a chance discovery that sticks like a splinter in the historian’s mind: something weird or surprising in the historical record that demands an explanation. In this case, that oddity was something I found in Martin Luther’s collected writings: his claim that most people do not believe that God exists. This struck me as utterly outlandish. Besides the fact that more or less everyone in sixteenth-century Europe believed in God, Luther also wrote elsewhere that atheism was virtually impossible because knowledge of God is imprinted on all human souls. So what on earth was going on? Upon further research, I found other versions of this same bizarre claim popping up elsewhere in the sixteenth century. John Calvin wrote in his Institutes of the Christian Religion that anyone who follows their own passions in defiance of heavenly judgment “denies that there is a God”—the translator of the modern English edition changed this passage to “virtually denies that there is a God,” presumably because he thought the original must have been some sort of mistake. The radical spiritualist Sebastian Franck claimed, far more drastically, that “there is not a single believer on earth!” These remarkable and unexpected ideas were not written in obscure places, nor were they written by unknown people. So why had no historian ever written about them before?

These discoveries set me on a journey that has lasted seven years. I started with the intuition that “belief” itself had changed its meaning over time. Thus, for instance, Luther could say that everyone knows God exists, but he could still argue that most people do not believe God exists, because he took “belief” to be a more difficult condition. But from there I had to figure out what preexisting, medieval understandings of belief Luther was rejecting. Then I had to figure out how the different factions in the Reformation interpreted belief. And then, most importantly, I set myself the task of figuring out how a modern understanding of “belief” emerged. Hence this became a book about the birth of modern belief: a whole new way of imagining the relationship between religion and other kinds of knowledge, which we take to be absolutely timeless and natural but was in fact an invention of the seventeenth century and a touchstone of the Enlightenment. 

Can you explain a bit about the book’s argument? What do you mean by a modern category of belief?

Belief has a history; the concept changes over time. We take it for granted that “belief” means private judgment or opinion. From that assumption, which we assume is timeless but is in fact profoundly modern, lots of other conclusions follow which seem equally unquestionable. For example, if belief is private judgment, then our beliefs might change over time in light of new evidence or further reflection. Likewise, if belief is opinion, then our belief on any particular issue might be probable rather than absolute: we might reasonably say we believe something if we think it’s likely, even if we’re uncertain. Most importantly, if belief is private judgment, then I might believe a religious doctrine in more or less the same sense that I believe that Lee Harvey Oswald acted alone, or that our sun is part of the Milky Way galaxy.

None of this would have been taken for granted in the Western tradition before the seventeenth century, and indeed a great deal of intellectual energy was poured into denying that any of it was true. Of course, people sometimes used the verb “believe” (credo in Latin, glauben in German, etc.) in a colloquial way—“I believe this peach is ripe,” or “I believe my husband loves me”—but a vast range of theology and philosophy was devoted to the proposition that this was totally different from belief in its proper, religious sense. To believe required an absolute, certain conviction, guaranteed to be true by reliable authority. Anything lesser or different could easily be denounced as unbelief, a failure of the mind and soul; anyone who believed wrongly, or insufficiently, or for the wrong reasons, or in the wrong way, might be taken not to believe at all. So my book is a history of how belief was freed from these constraints, creating the conditions in which religion could flourish in a secular age, but only at the cost of relinquishing the special status religion had previously enjoyed.

It seems intuitive that modern belief formed as a reaction against the Church, but how was it also a reaction against Luther and Calvinism?

Lots of people think that the Reformation produced religious liberty, because in the Reformation individuals—like Luther purportedly saying, “Here I stand, I can do no other”—insisted upon their own conscientious right to believe differently from the Roman Catholic Church. But this is quite wrong. Luther and his allies did indeed insist that their own beliefs were genuine, and that their own consciences were inviolable. But in the very act of making this claim for themselves, they insisted that all other beliefs were not simply false; they were not even beliefs at all. When early modern Protestants claimed the right to believe as they would, they were creating a new and exclusive category of belief to which others did not have access. So the Reformation did not inaugurate modern belief. Instead it produced a new kind of authoritarianism: whereas Catholics disciplined people to believe, Protestants accepted that belief was rare, and instead disciplined unbelievers. The reaction against these twin pillars of orthodoxy thus came from dissidents within both traditions. Modern belief emerged in fits and starts, not as a revolution against Christianity, but as a revolution from within Christianity by mutineers whose strained relationship to orthodoxy necessitated a more porous understanding of belief.

How does the modern idea of belief travel through later intellectual movements such as the Enlightenment? Did it undergo changes there as well?

This is really a book about the Enlightenment, as much or more than it’s a book about the Reformation, because it was in the Enlightenment that modern belief truly emerged as a powerful force in the world. But the Enlightenment you’ll find in these pages may not be the one you expect.

First, it is an Enlightenment that is inclusive of religion rather than against religion. I do not deny, of course, that there was a “radical Enlightenment” which attempted, often quite explicitly, to undermine the claims of organized Christianity. But by far the more significant project of the Enlightenment was to reestablish religion on a new basis, to render it not only compatible with reason but a partner in the task of criticism which was at the heart of eighteenth-century ideas. The Enlightenment thus pioneered a question which we take for granted today, but which had received remarkably little attention previously: on what grounds should I believe? There were many different answers in the Enlightenment—as there remain today—but the task of Enlightenment religion was to tear down the medieval architecture of the mind which had strictly separated belief, knowledge, and opinion, and had thus made the question itself virtually meaningless. Enlightenment Christianity established what the Reformation had not: the sovereignty of the believing subject.

Second, my Enlightenment is not about the triumph of reason, but rather the triumph of opinion. Modern critics of the Enlightenment, on both the Left and the Right, often denigrate Enlightenment reason—and not without reason, if you’ll pardon the pun—as a false universal which allowed a new orthodoxy to establish itself as the natural frame of all argument rather than a peculiar argument in its own right. But this understanding of the Enlightenment, which takes Immanuel Kant as its avatar, misses huge swathes of late-seventeenth and eighteenth-century thought which instead privileged opinion, a kind of judgment that was particular rather than universal. In this book, I want to resuscitate an Enlightenment that privileged autonomous judgment rather than judgment constrained by someone else’s reason, and thus led to new kinds of spiritualism as much as it led to new kinds of scientism. At its worst, this modern spirit of autonomy produces the world of “alternative facts” and “fake news”; but at its best, it produces the conditions of freedom that allow for peace in a diverse society.

What is the relationship between the history of belief and secularization?

Every page of this book is engaged at least obliquely with the secularization question, but one of my key points is that secularization is the wrong question.

Secularization assumes that the crucial development in modernity is the creation of spaces outside or apart from religion; in modernity, this argument goes, religion has been relegated to a separate, private sphere. But by contrast, what I find is that modernity’s encounter with religion is not about segregating belief from the world, but rather about the promiscuous opening of belief to the world. Belief becomes, in modernity, not the boundary separating religious claims from other kinds of knowledge, but rather the least common denominator of all knowledge. Here my favorite example is the claim of many modern Christians that scientific knowledge—like the theory of evolution, for instance—is just another form of belief. This claim would have been literally nonsensical before the seventeenth century, because the whole point of belief was to preserve a special prestige for Christianity: science was a different beast altogether, belonging to different mental faculties and defended in different ways. The fact that scientific theories can now be understood as beliefs suggests that instead of thinking about the rise of a modern secular, we instead need to think about what happened when the walls separating religious belief from other kinds of knowledge-claims were breached.

What do you hope readers will take away from reading this book?

That belief has proliferated rather than waned in modernity, but only because the definition of belief has changed in our society to make it compatible with diversity, democracy, and freedom of thought. The old world of belief—where it was structured by authority, and where it functioned as an axis of exclusion to preserve orthodoxy—is dead and buried, and we should be thankful for its demise rather than nostalgic for the oppressive unity it once provided.

Ethan H. Shagan is professor of history at the University of California, Berkeley. He is the author of The Rule of Moderation: Violence, Religion, and the Politics of Restraint in Early Modern England and Popular Politics and the English Reformation. He lives in Orinda, California.

Kieran Setiya: Idleness as Flourishing

This article was originally published by Public Books and is reprinted here with permission.

It is hard work to write a book, so there is unavoidable irony in fashioning a volume on the value of being idle. There is a paradox, too: to praise idleness is to suggest that there is some point to it, that wasting time is not a waste of time. Paradox infuses the experience of being idle. Rapturous relaxation can be difficult to distinguish from melancholy. When the academic year comes to an end, I find myself sprawled on the couch, re-watching old episodes of British comedy panel shows on a loop. I cannot tell if I am depressed or taking an indulgent break. As Samuel Johnson wrote: “Every man is, or hopes to be, an Idler.”[1.Samuel Johnson, The Idler, no. 1, April 15, 1758; reprinted in The Idler and The Adventurer, edited by W. J. Bate, John M. Bullitt, and L. F. Powell (Yale University Press, 1963), pp. 3–4.] As he also wrote: “There are … miseries in idleness, which the Idler only can conceive.”[2.Johnson, The Idler, no. 3, April 29, 1758; in The Idler and The Adventurer, p. 11.]

This year brings three new books in praise of wasting time: a manifesto by MIT professor Alan Lightman; a critical history by philosopher Brian O’Connor; and a memoir by essayist Patricia Hampl. Each author finds a way to write in the spirit of idleness. Yet none of them quite resolves our double vision. Even as they bring its value into focus, they never shake a shadow image of the shame in being idle.

Why idleness now? Because we are too busy, too frantic; because of the felt acceleration of time. Lightman supplies a measure. “Throughout history,” he writes, “the pace of life has always been fueled by the speed of communication.”

When the telegraph was invented in the nineteenth century, information could be transmitted at the rate of about four bits per second. By 1985, near the beginnings of the public Internet, the rate was about a thousand bits per second. Today, the rate is about one billion bits per second.

We are in principle accessible anywhere, at any time; we can be texted, emailed, tagged: “The world today is faster, more scheduled, more fragmented, less patient, louder, more wired, more public.” There is not enough downtime. So Lightman argues in his brisk, persuasive essay. His snapshots of the relevant social science portray the grim effects of over-connection in our digital age: young people are more stressed, more prone to depression, less creative, more lonely but never really alone. Our time is ruthlessly graphed into efficient units. The walking speed of pedestrians in 32 cities increased by 10 percent from 1995 to 2005.

With its brief chapters and bright illustrations, Lightman’s book is itself well-designed for the attention deficits of the internet era, perfect for the postliterate teenager or the busy executive with only an hour to spare. It makes an elegant case for downtime: unstructured and undistracted, time to experiment and introspect. For Lightman, this is the kind of time-wasting that is not a waste of time. It augments creativity, which draws on undirected or “divergent” thinking. It replenishes and repairs us. And it gives us space in which to find ourselves.

Lightman’s definition of “wasting time” as undirected introspection is deliberately tendentious. The phrase could just as well describe the smartphone addict playing Angry Birds. Ironically, one of the most intriguing studies in Lightman’s book concerns the positive impact of trivial games. Asked to come up with new business ideas, people who were forced to procrastinate with Minesweeper or Solitaire for several minutes were “noticeably more creative.” Lightman does not pause to ask whether this effect can be scaled up. (I pushed it pretty far myself in graduate school, with mixed results.) But he offers a suggestive catalog of artists and scientists whose best ideas arrived when they were staring at a wall.

Lightman ends with concrete, practical prescriptions: 10-minute silences during school days, “introspective” college courses that give students more time to reflect, electronics-free rooms at work, unplugged hours at home. The changes are not radical and leave intact the media ecology in which we are to live. “It is within the power of each of us as individuals,” Lightman writes, “to make changes in our way of living to restore our inner lives. … With a little determination, each of us can find a half hour a day to waste time.”

Perhaps it is modesty, or realism, that prevents Lightman from seeking social remedies for a social problem. In the short term, he suggests, we have to work on ourselves: a conservative therapy for what ails us. Lightman’s apology for wasting time is conservative in other ways, too. He celebrates not downtime itself but its instrumental value, its usefulness as a means to integrity and achievement. Lightman cites psychologist Abraham Maslow on two forms of creativity: the kind that involves an artistic escape from stress and the kind that fuels “‘self-actualization,’ the desire to become the best we can be.” For Lightman,

there is a kind of necessary homeostasis of the mind: not a static equilibrium but a dynamic equilibrium in which we are constantly examining, testing, and replenishing our mental system, constantly securing the mental membrane between ourselves and the external world, constantly reorganizing and affirming ourselves.

If this is wasting time, who has the energy for it?

Not Brian O’Connor, who makes bolder, larger claims on behalf of being idle. Idleness flouts the prevailing social order and the conception of autonomy as arduous self-fashioning that Lightman and Maslow share. O’Connor traces the exhausting project of self-constitution to Kant and Hegel, through Karl Marx. What Lightman depicts as the ultimate purpose of wasting time, O’Connor sees as an alien imposition, an order issued without authority. Modern philosophy instructs us to make something of ourselves, but it has no right to tell us what to do, and its edicts are appropriated by societies that make exorbitant demands for work, tie recognition to material success, and exalt the individual at the cost of real community. For O’Connor, idleness is indifference to productive work and social prestige; it rejects the need for guiding purpose or self-formation. He adds to the acknowledged benefits of downtime its value as social critique.

Although O’Connor’s book has a guiding purpose, it nonetheless stays true to the ethos of idling. For the most part, O’Connor is content to answer the case against idleness made by its philosophical critics, not to argue for idleness itself. The burden of proof is placed on the opponents of being idle, who must work to convince the idler he is wrong. The idler’s objections are appropriately laconic.

O’Connor’s principal antagonist is Kant, who argues that we must make every choice as if we were legislating for all, and that we have a consequent duty to develop our talents. Scholars may query O’Connor’s interpretation of Kant as drawing on “that special feeling of worthiness” that comes from being useful to society. But even if he is wrong about this, O’Connor is right to find in Kant a vision of freedom as responsibility, of autonomy as work: the daunting project of determining how to be. For Kant, freedom requires one to live by principles one can will as laws for every rational being. One must bring this severe ambition to everything one does; only then is one entitled to be happy. “It is,” O’Connor writes, “a profound theoretical justification of an idea that has now become commonplace: that a life worth living is one marked by effort and achievement.” The idea that a good life calls for onerous self-creation fuels Nietzsche’s injunction to “become who you are” and Sartre’s existentialism.

Marx is a more difficult customer, since his emphasis on the alienation of labor under capitalism could easily be read as a critique of work. In fact, it is a call for the transformation of work into new, authentic forms. Marx’s idea of alienation was developed by Herbert Marcuse, the closest O’Connor gets to an intellectual ally. For Marcuse, alienation involves the internalization of goals that have nothing to do with what we really want. In order to function, contemporary society requires its members to be alienated in this way. What O’Connor finds suspicious in both Marx and Marcuse is the desire to solve the problems of alienation by changing the nature of work, rather than putting it in its place. Describing the conditions of work under communism, Marx writes: “What appears as a sacrifice of rest may also be called a sacrifice of idleness, of unfreedom, of unhappiness.” Marcuse strives instead for a synthesis of work and play.

O’Connor sees no hope of reconciling labor with leisure. Where Marx wants to “hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner,” O’Connor wonders why he can’t just take a nap.[3.Karl Marx and Friedrich Engels, The German Ideology, translated from the German by Salo Ryazanskaya, in Karl Marx: Selected Writings, 2nd ed., edited by David McClellan (Oxford University Press, 2000), p. 185.] Work needs to be transformed, but even after its transformation, it should not be our model of meaning in life and it cannot subsume the value of being idle. Idleness is freedom not just from alienated labor, but from the pressures of autonomy and authenticity. It is another mode of flourishing, against which the lure of striving and success should seem, at best, a lifestyle choice.

What O’Connor’s provocations miss is that for Kant, and for Sartre, the responsibility for oneself that defines autonomy is at the same time a responsibility to others. It is one thing to slack off when I could develop my talents; that is no one’s problem but my own. It is another to be idle in the face of urgent need, and so to be indifferent to suffering. John Berger wrote: “On this earth there is no happiness without a longing for justice.”[4.John Berger, Hold Everything Dear (Verso, 2007), p. 102.] It has been an aspiration of philosophers since Plato to show that this is true. An adequate defense of idleness would have to address that aspiration, to assuage the idler’s guilt. I may not owe it to myself to strain and struggle, but don’t I owe it to you?

Ironically, the work that most directly confronts the tension between idleness and ethical responsibility is neither a manifesto nor a monograph, but an essay in the spirit of Montaigne. Like Montaigne, Patricia Hampl is moved to reflect by grief and writes in conversation with someone she has lost. Like Montaigne, she rates description over narrative. And like Montaigne, she is willing to meander. Framed by a pilgrimage to Montaigne’s tower near Bordeaux, Hampl’s book does not arrive at his estate for more than two hundred pages and stops at its destination for a perfunctory eight. On the way, it pays visits to the homes of authors, saints, and scientists who embraced idleness by retiring from the world.

The most memorable are two Anglo-Irish women, Sarah Ponsonby and Lady Eleanor Butler, who eloped together unsuccessfully, disguised as men, in 1778. Returned to their homes, they wore their families down and were permitted to leave together two months later, setting up a cottage in Llangollen, Wales, where they lived on their limited family income, reading books, writing letters, and tending their garden, “famous for wishing to be left alone.” They were visited by celebrities from Shelley and Byron to the Duke of Wellington and Sir Walter Scott.

What the Ladies of Llangollen have in common with Montaigne is a strategy of “[retreat] during ages of political mayhem,” in their case the French Revolution, in his the Reformation. Today, many of us may also feel tempted to retreat. The way of life the Ladies called “our System,” with its monastic regularity and disdain for social expectations, is subversively attractive. Like Montaigne’s essays, it assures us that “the littleness of personhood is somewhere alive, taking its notes,” that it is okay to “enjoy yourself in the littleness of the moment” when the narrative of history goes awry. Withdrawal is not defeat. And if it is irresponsible to withdraw completely, doing so has a point. The limit cases of Montaigne or Ponsonby and Butler, whose idleness did not serve some further goal, show that wasting time is worthwhile in itself. This is what we see in the model their lives present even if, in the face of our obligations to others, it is not a model for us.

It may not even be a model for them. At the end of her book, Hampl quotes a passage from Montaigne: “We say; ‘I have done nothing today.’ What, have you not lived? That is not only the fundamental but the most illustrious of your occupations … He says this in his Essai titled—what else?—‘On Idleness.’” Except he doesn’t. The quotation is from the sprawling essay “Of Experience,” with which the Essays close. “Of Idleness” is an earlier piece, a distillation of self-doubt in which Montaigne indicts his enterprise: “The soul that has no fixed goal loses itself.” If he commits his extravagances to paper, he writes, it is in order “to make my mind ashamed of itself.”[5.Michel de Montaigne, “On Idleness,” The Complete Essays of Montaigne, translated from the French by Donald M. Frame (Stanford University Press, 1958), p. 21.]

Like Montaigne, who played a diffident but competent role in politics—he was mayor of Bordeaux—most of us forge a rotten compromise between idleness and industry. What else can we do? We see the flourishing of life in the little moments, as we see the scale of its shirked responsibilities. To manage our ambivalence is necessary work.

Kieran Setiya is professor of philosophy at the Massachusetts Institute of Technology. He is the author of Midlife: A Philosophical Guide, Reasons without Rationalism (Princeton) and Knowing Right from Wrong. He lives in Brookline, Massachusetts, with his wife and son.

Idleness: A Philosophical Essay by Brian O’Connor is available here.

Hassan Malik on Bankers and Bolsheviks

In a year that has seen emerging markets, including Argentina and Turkey, experience major market crashes, Hassan Malik’s Bankers and Bolsheviks is a timely reminder of the long history of emerging market booms and busts. Bankers and Bolsheviks charts the story of the foreign investment surge that made Russia the largest net international borrower in the global bond market, and the collapse which culminated in the largest default in history in the aftermath of the Bolshevik Revolution. Based on research in government and banking archives in four countries and three languages, the story is truly global. It focuses on the leading gatekeepers of international finance in Europe and the United States, showing their thinking about the most significant emerging market of the age through some of the most important events in world history.

Many scholars, writers and filmmakers have engaged with the period you chose to write about. What in particular attracted you to it?

I was always struck by how frequently financial history surveys focus on a few set stories and episodes – the Dutch Tulipmania of the seventeenth century, the hyperinflation in Weimar Germany, or the 1929 stock market crash – but how rarely they mention Russia, especially given the scale of the Russian borrowing binge in the late nineteenth and early twentieth centuries. As a banker living and working in Moscow during the mid-2000s, I was constantly walking by pre-revolutionary buildings that had once housed banks. These vestiges of a previous Russian boom piqued my interest in the role of finance during the revolutionary period and inspired me to approach the subject through the archives and writings of key individual players in this drama. The Russian case was particularly interesting given that all the major players in global finance were able to participate in Russian markets. Unlike other emerging markets that were dominated by a single country or bank, the Russian story featured a diverse group of actors, and so provided an ideal vantage point from which to write about global finance during the first modern age of globalization.

What are the parallels with today’s standoff between Ukraine and Russia over sovereign debt?

Central to the book is the notion of “odious debt” – the idea that a population cannot be held liable for the debts contracted on its behalf but without its consent by an illegitimate regime. The Bolshevik default of 1918 was remarkable for reasons other than sheer magnitude. Unlike Argentina in 2001 or Greece in 2012, the Bolsheviks not only defaulted but repudiated the debts contracted by pre-revolutionary governments. It is notable that the Bolsheviks were not outliers in this respect – moderate liberals in Russia also objected to debts the Tsarist government in particular raised in international bond markets.

Fully 100 years on, the Ukrainian government is fighting Russian claims on a similar basis with respect to a bilateral loan structured as a $3bn Eurobond contracted by the government of Viktor Yanukovych in December 2013, shortly before it was overthrown in the 2014 uprising. The Ukrainian government ultimately defaulted on the loan in 2015. Like the Bolsheviks in 1918, the current Ukrainian government claims that Yanukovych was a dictator ruling without the consent of his people, and that therefore, they should not be held accountable for debts contracted by his government. Like the Bolsheviks and liberal opponents of the Tsarist regime in the early twentieth century, the present Ukrainian government is also claiming that the creditor in question actively sought to undermine and control the debtor country.

What lessons does the book hold for investors in emerging market bonds today?

Another of the book’s central messages is that investment in emerging markets does not happen in a vacuum. Politics matter, on several levels. Most obviously, managing and hedging against geopolitical risk remains very important. Global politics also influenced thinking about Russia, even amongst ostensibly clear-eyed investors. Fears of an ascendant Germany during the time period discussed in the book are mirrored in present-day apprehension about the rise of China and relative decline of “the West.” More specifically, such fears can generate biases and influence investment decisions. The strategic decisions of the first National City Bank of New York – one of the largest in the world at the time, and a forerunner to Citigroup – were heavily influenced, for example, by the wartime context, and led to a remarkable expansion of the bank’s operations in Russia on the eve of the Bolshevik revolution.

Politics also operate on a subtler level. The case of Russia, for example, demonstrates how the act of investing itself became a political act – when investors enter an emerging market, they are often aligning themselves with a particular set of political forces. Bankers in Russia at the time failed to appreciate the degree to which they were becoming entwined in domestic politics – and with the Tsarist regime in particular. Today, a similar theme is evident along the New Silk Road that China is developing across Eurasia, Africa, and the Indian Ocean as part of President Xi Jinping’s Belt and Road Initiative.

What are the implications for China’s Belt and Road Initiative?

The investment wave Russia witnessed during the first modern age of globalization was inextricably intertwined with contemporary geopolitics. While notionally private French, British, and American banks were key gatekeepers channeling capital into Russia, they did so in a particular geopolitical context. The French and Russian authorities in particular cooperated to a significant degree in channeling French savings to Russian markets. The French, however, frequently failed to persuade Russia to direct industrial orders to French firms, which often lost out to their German rivals.

In this respect, China’s Belt and Road Initiative is markedly different from the Franco-Russian financial ties of the Belle Époque. Under the BRI, China extends loans largely to developing countries for infrastructure projects built primarily by Chinese workers employed by Chinese engineering firms, using mainly Chinese equipment and materials. At a time when Chinese economic growth is slowing and there are signs of excess capacity in areas such as the construction industry, the BRI holds significant promise for China, not least since it diversifies the country’s trade routes away from contested territory such as the South China Sea. The benefit to countries receiving BRI funds is less clear. While there is little doubt that infrastructure is being built, the utility of some projects is arguable; and crucially, there is little transparency with regard to the commercial terms of the deals, to say nothing of contracting processes.

Several cases of questionable China-related deals are already evident. Before the formal launch of the BRI in 2013, Sri Lanka infamously signed a deal for a Chinese port of dubious feasibility, under terms that saw Sri Lanka’s debt balloon. When a new government faced difficulties in making payments, the Chinese ultimately took control of the strategic asset via a 99-year lease. More recently, erstwhile Malaysian premier Najib Razak signed major Chinese investment deals under the BRI. His successor has attacked the deals as shady and wasteful, and has already announced the cancellation of at least $22bn of them.

As the Malaysian case shows, the Chinese government – like foreign investors in Tsarist Russia – is willing to sign deals with leaders of contested legitimacy. The latter, in turn, are incentivized to seek BRI funding given the relatively higher degree of scrutiny and conditionality imposed by more traditional lenders such as the World Bank or individual developed countries. As both the Malaysian and Russian cases show, however, such an approach carries the risk that new regimes – whether they arrive through revolution or the ballot box – can question, push to renegotiate, or outright repudiate debts contracted by their predecessors.

Have emerging markets evolved, or have they repeated cycles of boom and bust that are fundamentally the same, with only superficial changes in context? Are the mistakes of the past vis-à-vis emerging markets destined to be repeated?

It would be simplistic to say that history repeats itself in emerging markets, but at the same time, financial history can be useful in thinking about historical analogs to current market conditions and potential future scenarios. Of course, government and businesses in emerging markets have evolved both over the centuries, as well as in the last several decades that witnessed the growth of “emerging markets” as a specific institutional asset class. For instance, macroeconomic management has shifted dramatically over the last 20 years in markets from Argentina to Russia, not least through the abandonment of fixed exchange rate regimes that contributed to past crises. At the same time, macroeconomic prescriptions directed at emerging markets from institutions such as the IMF, academia, and the investment community have themselves changed as investors and economists learn and re-learn lessons from the major EM crises of recent years.

Emerging markets have changed in other respects, too. Tsarist Russia attracted investors in part due to its relatively large population and resource base. Today, Russia’s demographics are seen as a handicap by investors, as is the economy’s dependence on commodity exports. Of course, even high-growth Asian economies have become victims of their success, with improvements in living standards and life expectancies contributing to ageing populations in major emerging markets such as China and India.

Nevertheless, there are strong continuities. The political dimension in particular remains very real in emerging markets, as seen in the major market moves surrounding regime changes in places such as Argentina, Brazil, India, and Malaysia in recent years. In this respect, there are strong parallels between emerging markets today and in the past.

Hassan Malik is an investment strategist and financial historian. He earned a PhD at Harvard University and was a postdoctoral fellow at the European University Institute in Florence and the Institute for Advanced Study in Toulouse. He lives and works in London.


Kevin Mitchell: Wired that way – genes do shape behaviours but it’s complicated

Many of our psychological traits are innate in origin. There is overwhelming evidence from twin, family and general population studies that all manner of personality traits, as well as things such as intelligence, sexuality and risk of psychiatric disorders, are highly heritable. Put concretely, this means that a sizeable fraction of the population spread of values such as IQ scores or personality measures is attributable to genetic differences between people. The story of our lives most definitively does not start with a blank page.
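To make the variance idea concrete, here is a minimal sketch of the twin-study logic that such heritability estimates rest on. This is an illustration only, not taken from Mitchell’s article: the function name and the correlation values are hypothetical, and the formula (Falconer’s classic estimate) depends on simplifying assumptions about twins’ shared environments.

```python
# Illustrative sketch only: Falconer's classic twin-based heritability estimate.
# Assumes identical (MZ) twins share ~100% of segregating genetic variation,
# fraternal (DZ) twins ~50%, and that both twin types share environments equally.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Estimate heritability as h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

# Hypothetical twin correlations for an IQ-like trait:
h2 = falconer_heritability(r_mz=0.75, r_dz=0.45)
print(f"Estimated heritability: {h2:.2f}")  # 0.60, i.e. ~60% of the variance
```

On these made-up numbers, about 60 per cent of the trait’s population spread would be attributed to genetic differences, which is the sense of “highly heritable” used above.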

But exactly how does our genetic heritage influence our psychological traits? Are there direct links from molecules to minds? Are there dedicated genetic and neural modules underlying various cognitive functions? What does it mean to say we have found ‘genes for intelligence’, or extraversion, or schizophrenia? This commonly used ‘gene for X’ construction is unfortunate in suggesting that such genes have a dedicated function: that it is their purpose to cause X. This is not the case at all. Interestingly, the confusion arises from a conflation of two very different meanings of the word ‘gene’.

From the perspective of molecular biology, a gene is a stretch of DNA that codes for a specific protein. So there is a gene for the protein haemoglobin, which carries oxygen around in the blood, and a gene for insulin, which regulates our blood sugar, and genes for metabolic enzymes and neurotransmitter receptors and antibodies, and so on; we have a total of about 20,000 genes defined in this way. It is right to think of the purpose of these genes as encoding those proteins with those cellular or physiological functions.

But from the point of view of heredity, a gene is some physical unit that can be passed from parent to offspring that is associated with some trait or condition. There is a gene for sickle-cell anaemia, for example, that explains how the disease runs in families. The key idea linking these two different concepts of the gene is variation: the ‘gene’ for sickle-cell anaemia is really just a mutation or change in sequence in the stretch of DNA that codes for haemoglobin. That mutation does not have a purpose – it only has an effect.

So, when we talk about genes for intelligence, say, what we really mean is genetic variants that cause differences in intelligence. These might be having their effects in highly indirect ways. Though we all share a human genome, with a common plan for making a human body and a human brain, wired so as to confer our general human nature, genetic variation in that plan arises inevitably, as errors creep in each time DNA is copied to make new sperm and egg cells. The accumulated genetic variation leads to variation in how our brains develop and function, and ultimately to variation in our individual natures.

This is not metaphorical. We can directly see the effects of genetic variation on our brains. Neuroimaging technologies reveal extensive individual differences in the size of various parts of the brain, including functionally defined areas of the cerebral cortex. They reveal how these areas are laid out and interconnected, and the pathways by which they are activated and communicate with each other under different conditions. All these parameters are at least partly heritable – some highly so.

That said, the relationship between these kinds of neural properties and psychological traits is far from simple. There is a long history of searching for correlations between isolated parameters of brain structure – or function – and specific behavioural traits, and certainly no shortage of apparently positive associations in the published literature. But for the most part, these have not held up to further scrutiny.

It turns out that the brain is simply not so modular: even quite specific cognitive functions rely not on isolated areas but on interconnected brain subsystems. And the high-level properties that we recognise as stable psychological traits cannot even be linked to the functioning of specific subsystems, but emerge instead from the interplay between them.

Intelligence, for example, is not linked to any localised brain parameter. It correlates instead with overall brain size and with global parameters of white matter connectivity and the efficiency of brain networks. There is no one bit of the brain that you do your thinking with. Rather than being tied to the function of one component, intelligence seems to reflect instead the interactions between many different components – more like the way we think of the overall performance of a car than, say, horsepower or braking efficiency.

This lack of discrete modularity is also true at the genetic level. A large number of genetic variants that are common in the population have now been associated with intelligence. Each of these by itself has only a tiny effect, but collectively they account for about 10 per cent of the variance in intelligence across the studied population. Remarkably, many of the genes affected by these genetic variants encode proteins with functions in brain development. This didn’t have to be the case – it might have turned out that intelligence was linked to some specific neurotransmitter pathway, or to the metabolic efficiency of neurons or some other direct molecular parameter. Instead, it appears to reflect much more generally how well the brain is put together.
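As a rough illustration of how many tiny effects can jointly account for a figure like 10 per cent of variance, here is a hedged simulation sketch. It is a toy example, not an analysis from the article: the sample size, number of variants, allele frequency, and the 10 per cent target are all assumed for the demonstration.

```python
# Illustrative sketch only: an additive polygenic model in which each variant
# has a tiny effect but all variants together explain ~10% of trait variance.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_variants = 10_000, 1_000

# Genotypes: 0, 1, or 2 copies of each variant (allele frequency 0.5 assumed).
genotypes = rng.binomial(2, 0.5, size=(n_people, n_variants)).astype(float)

# Tiny per-variant effects, with the genetic component rescaled so that it
# contributes the assumed 10% of total trait variance.
effects = rng.normal(0.0, 1.0, n_variants)
genetic = genotypes @ effects
genetic = (genetic - genetic.mean()) / genetic.std() * np.sqrt(0.10)
environment = rng.normal(0.0, np.sqrt(0.90), n_people)  # everything non-genetic
trait = genetic + environment

print(f"Variance explained by all variants jointly: "
      f"{np.var(genetic) / np.var(trait):.1%}")  # ~10%
```

Each simulated variant contributes almost nothing on its own; only in aggregate do the thousand variants approach the 10 per cent figure, mirroring the indirect, many-small-effects picture described above.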

The effects of genetic variation on other cognitive and behavioural traits are similarly indirect and emergent. They are also, typically, not very specific. The vast majority of the genes that direct the processes of neural development are multitaskers: they are involved in diverse cellular processes in many different brain regions. In addition, because cellular systems are all highly interdependent, any given cellular process will also be affected indirectly by genetic variation affecting many other proteins with diverse functions. The effects of any individual genetic variant are thus rarely restricted to just one part of the brain or one cognitive function or one psychological trait.

What all this means is that we should not expect the discovery of genetic variants affecting a given psychological trait to directly highlight the hypothetical molecular underpinnings of the affected cognitive functions. In fact, it is an error to think of cognitive functions or mental states as having molecular underpinnings – they have neural underpinnings.

The relationship between our genotypes and our psychological traits, while substantial, is highly indirect and emergent. It involves the interplay of the effects of thousands of genetic variants, realised through the complex processes of development, ultimately giving rise to variation in many parameters of brain structure and function, which, collectively, impinge on the high-level cognitive and behavioural functions that underpin individual differences in our psychology.

And that’s just the way things are. Nature is under no obligation to make things simple for us. When we open the lid of the black box, we should not expect to see lots of neatly separated smaller black boxes inside – it’s a mess in there.

Innate: How the Wiring of our Brains Shapes Who We Are by Kevin Mitchell is published by Princeton University Press.

This article was originally published at Aeon and has been republished under Creative Commons.

Daniel Rodgers on As a City on a Hill

“For we must consider that we shall be as a city upon a hill,” John Winthrop warned his fellow Puritans at New England’s founding in 1630. More than three centuries later, Ronald Reagan remade that passage into a timeless celebration of American promise. How were Winthrop’s long-forgotten words reinvented as a central statement of American identity and exceptionalism? In As a City on a Hill, leading American intellectual historian Daniel Rodgers tells the surprising story of one of the most celebrated documents in the canon of the American idea. In doing so, he brings to life the ideas Winthrop’s text carried in its own time and the sharply different yearnings that have been attributed to it since.

How did you come to write this book? 

Like many book projects, this one began with a sense of surprise. “We shall be as a city on a hill” has been part of the core rhetoric of American nationalism since the 1980s, when Ronald Reagan began using it as a signature phrase in his speeches. In modern times, it is virtually impossible to discuss the “American creed” and the main themes in American civic culture without it. Like other teachers of American history, I had taught the Puritan text from which Reagan had taken the phrase to hundreds of students. “A Model of Christian Charity,” John Winthrop had titled his “lay sermon” in 1630. Here, I said, with the confidence of repeating a rock-solid certainty, lay the origins of the idea of special, world-historical destiny that had propelled American history from its very beginnings.

But I was wrong. The closer I looked at the text that speechwriters, op-ed contributors, preachers, historians, political scientists, and so many others thought they knew so well, the more I began to realize that the story of Winthrop’s “Model of Christian Charity” held a string of surprises. Rather than running as a continuous thread through American history, Winthrop’s text had almost immediately dropped out of sight, where it stayed, unread and unimportant, for generations. When historians and social commentators revived it two and a half centuries after its writing, they did so in the act of making it into a radically different document than it had been at its origins. Winthrop had placed a plea for charity and intense mutual obligations, not greatness, at the heart of his “Model.” How had this core meaning been lost? How had Winthrop’s sense of the acute vulnerability of his project been replaced by confidence that the United States had a unique and unstoppable mission to be a model to the world? How had this story of forgetting and remembering, erasure and revision, reuse and contention actually unfolded? It was when these puzzles began to accumulate in my mind that I realized this book about the continuous reshaping of the past needed to be written.

What exactly did John Winthrop mean by “a city on a hill,” then?

The chasm between Winthrop’s use of those words and what they were claimed to mean when his “Model of Christian Charity” burst into public notice in the mid twentieth century is immense. On the eve of the Puritan settlement of New England, Winthrop meant the phrase “we shall be as a city on a hill” as a warning. As he used the words, a “city on a hill” was a city exposed to the “eyes of all people;” it was a place of high conspicuousness. To live there was to live under the critical scrutiny of a God who might, in a moment, make it a “story and a byword through the world.” There was nothing comforting about it.

At its best and most demanding, Winthrop’s “Model” had urged, the mission of the Puritan project in America was to realize a mutual “charity” deeper than any modern society had yet achieved. It was to be a community where the temptations of unrestrained commerce and self-interest would be held in check by an ethic of love and mutuality. He and his fellow New Englanders fell short of that ideal, as the book’s sketches of some of the early New England recipients of Puritan public charity show. But Winthrop’s “city on a hill” promised, nonetheless, a radically different future than capitalist America was to realize.

When Winthrop’s phrase was reinjected into politics at the end of the twentieth century, it stood not for mutual love and obligation, not for the rules of fair lending and public responsibility for the poor, but for the cornucopia of goods and liberties that the United States was destined by history to spread to the rest of the world. It stood for the uniqueness of the United States among all other nations. It radiated the power of modern American capitalism. It reassured Americans in a globalizing world that their mission was timeless. How had a warning morphed into a conviction of enduring greatness? How had a vision of a charitable society been reimagined as a celebration of American material abundance? How could a phrase be refilled with such radically different contents and yet, in the end, be made to appear as if it had been a stable foundation stone of the American “idea” from the beginning? The task of tracing that story across four centuries, in and beyond the United States, has been the challenge and the exhilaration of the book’s writing.

Many historians spell out powerful straight-line stories of America. You have said that you are more interested in the unexpected that lies, half-hidden, beneath the overly familiar stories we tell about our past.

All good history writing needs to hold both those impulses in play, but the past is often very different than the version that has been straightened out and encapsulated in public memory. Winthrop and his contemporaries were not the Founders of America as our linearized historical narratives routinely describe them. At the outset, they were English folk in flight from their worldly, commercial, and libertine culture. They had to be made post facto into founders of a nation they never envisioned. The “city on a hill” phrase did not have a continuous presence in American political rhetoric, as we conventionally assume; and it was in no way unique to Americans. You’ll find far more references to the phrase among the founders of Liberia (for good reason; they knew the world’s first black republic was going to be the object of intense critical scrutiny) than among the eighteenth-century founders of the United States (who rarely used it at all). Although we conventionally describe a sense of uniquely high moral responsibility for the world as distinctive to Americans, it was not exceptional to them; they shared that sentiment with the peoples of almost all the great powers at the turn of the twentieth century. We associate the American sense of mission with the enduring force of the Protestant heritage in the American past. But among many contemporary evangelical Protestants the relationship of the “city on a hill” phrase to the nation of the United States is much more vexed and troubled than straight-line history imagines it.

Part of the challenge of writing history is a willingness to take seriously these elements of strangeness, these crooked, disorienting departures from the expected and to follow them to the surprises toward which they lead. The other part is to puzzle out the processes by which this never-linear history, so full of close calls and contingencies, gets ironed out in the stories we tell about ourselves: how, under the pressures of nationalism, our history is made into something easier to swallow and more reassuring to live with.

Is the story of Winthrop’s sermon unique or are there similar examples of the invention of foundational documents in American history?

No other text that we now think to be as fundamental to our statement of who we are went missing for as many centuries as John Winthrop’s “Model of Christian Charity.” But many of the symbols of modern nationalism are much more recent than we imagine, and the meanings Americans have invested in them have changed almost as remarkably as in the case of Winthrop’s text. The Declaration of Independence is a critically important example. The Declaration that we know now, with its promise of equal rights and liberties to all, wasn’t a foundational document in its own time. At many July 4 ceremonies in the generation after 1776, those opening lines of the Declaration were not read at all. As the Declaration’s preamble began to be revived, contests over its meanings revived as well. The Declaration didn’t begin to be imagined as carrying a fundamental criticism of slavery until abolitionists read it again with a radically new moral urgency in the 1840s. It wasn’t imagined to carry the full panoply of human rights that we now associate with it until the mid-twentieth century. It was a document continuously remade by those who used it. Those struggles and those reworkings—not the document itself—form our national story.

John Winthrop’s “city on a hill” sermon, written in a moment of high anxiety, eclipsed by hundreds of other patriotic texts as the nation took shape, its core theme of mutuality forgotten and misremembered and the rest embraced as if it had been part of the unitary American consciousness from the beginning, is a story of the same sort. Its story is a history of struggles to remake and remember a civic culture. We live within these struggles now and within some of the terms that Winthrop himself wrestled with. As they are reminded of that, I hope readers will see themselves—as well as an unexpected America—in this book’s pages.

Daniel T. Rodgers is the Henry Charles Lea Professor of History Emeritus at Princeton University. His books include Age of Fracture, winner of the Bancroft Prize; Atlantic Crossings; Contested Truths; and The Work Ethic in Industrial America. He lives in Princeton, New Jersey.

François-Xavier Fauvelle on The Golden Rhinoceros

From the birth of Islam in the seventh century to the voyages of European exploration in the fifteenth, Africa was at the center of a vibrant exchange of goods and ideas. It was an African golden age in which places like Ghana, Nubia, and Zimbabwe became the crossroads of civilizations, and where African royals, thinkers, and artists played celebrated roles in the globalized world of the Middle Ages. The Golden Rhinoceros brings this unsung era marvelously to life, taking readers from the Sahara and the Nile River Valley to the Ethiopian highlands and southern Africa.

How did this book come about?

This book came about for two reasons. The first is scholarly. As an historian and archaeologist, I have worked in several regions of Africa (the Horn of Africa, South Africa, and countries on both sides of the western Sahara) and have been lucky enough to visit archaeological sites in other places. My research has made me understand that despite the profound cultural differences between these regions, there existed a point of convergence: their participation in a global system of exchange during the Middle Ages. This phenomenon had similar and synchronous effects on several African societies, particularly their participation in religious, economic, political and architectural “conversations” with other powers of the time, notably within the Islamic world. The second rationale behind this book is civic. French president Nicolas Sarkozy’s speech in Dakar in 2007, in which he claimed that “the African has not fully entered into history,” made me understand that there was a severe shortage of works on African history that were both serious and accessible. Some Africanist historians took it upon themselves to respond to this scandalous speech. For my part, what I found scandalous was not that this speech could be delivered, but that it was audible in our society, that there was room for it to be heard. For me, the blame lies with scholars rather than politicians. The Golden Rhinoceros attempts to address this by making what we know about medieval African history available to a large audience.

Is there a method to how you structured the book? 

I have always been very sensitive to the argument of American historian Hayden White. He believed that historians generally narrate history in a conservative way. In my opinion, one of the conservative ways of writing the history of ancient Africa is to write it so that it conceals the characteristics of African societies and the available documentation in order to imitate the history of medieval and modern Europe. I wrote The Golden Rhinoceros to respond to a particular challenge presented by ancient African history: the fragmentary character of the written and archaeological documentation. Thus, this book is organized into small chapters that seek to embrace the fragmentary nature of the documentation by opening “windows,” but without covering up the lacunas, without leading the reader to think it is possible to tell this history in a linear fashion. I have also sought to lay out two different levels of reading: each chapter is followed by a short bibliographic essay that tells another story, that of the documentation.

How did the experiences of ordinary Africans of the Middle Ages differ from their counterparts in Europe?

This is a difficult question to answer because the written sources, primarily Arabic, which were produced outside of African societies, tell us mostly about capital cities, political elites, diplomatic relations, and the buying and selling of luxury merchandise. I have focused the book on these aspects, as they allow us to better observe the agency of African societies. Nevertheless, this approach sometimes opens small windows onto the lives of ordinary people. Take for example this request which was formulated before Sultan Sulayman of Mâli in 1352: a Muslim cleric had come from a village and presented himself before the sultan.  He said that locusts had spoken to him, saying that God had sent them to destroy the harvest because of the oppression reigning in the land. We have here a window, very small but very illuminating, onto the political order and the language in which recriminations regarding power in the medieval kingdom of Mâli were expressed.

What would you like readers to take away from this book?

I would like readers to understand that African societies were not “tribes” frozen forever in their landscape; that their social organization changed over time; that they participated in global exchange; that they created institutions and cities; that they adopted and adapted forms of religion coming initially from the outside, such as Christianity and Islam. That’s the first take away. There is a second: It’s one thing to understand that African societies have a history, it’s another to realize that medieval African societies were the contemporaries of the Islamic, European, Indian or Chinese societies of the period, and that they participated in a larger conversation. It’s why I speak of a medieval Africa that should be seen as part of a global Middle Ages that contained other provinces. For me, from the point of view of an Africanist, the goal is not, in the words of Dipesh Chakrabarty, “to provincialize Europe,” but to conceive of a multi-provincial world in which Africa has its place. Finally, a third take away: I would like for The Golden Rhinoceros to contribute to putting our knowledge of the history of Africa into the current conversation about history; to have it participate in the shared conceptions of world civilizations; and to influence teaching and discussion on the historical trajectories of societies and the methods of the historical discipline.

What did Africa offer during the Middle Ages that other regions did not?

Several very strong singularities should be highlighted. One is that forms of centralized power and the accumulation of prestige, sophisticated systems of exchange, and a cultural and material finesse existed without being accompanied by the widespread use of writing (except in Ethiopia). Another is that although the regions of Africa under discussion were not conquered by Arab armies in the seventh century, many of their societies, in any case their elites, adopted Islam because it allowed them to access a political, commercial, juridical and intellectual language common to the whole Islamic world (the Maghreb, Egypt, the Arabian Peninsula, and Persia). These political elites, as in Mâli for example, invented ways for Islam and local religions to coexist. This coexistence is also found at the level of the linguistic, economic, and technological diversity of African societies: it’s a characteristic of the longue durée in African history, which distinguishes Africa from other regions of the world that became culturally homogeneous to a higher degree when they were integrated into centralized political formations. But far be it from me to promote an angelic vision of African history: we must not forget that several of these societies (the Ethiopian kingdoms, for example) raided their neighbors to export slaves to the Islamic world.

Why do you think the history of medieval Africa has been neglected?

This is a complex question. The Arab authors of the Middle Ages had no problem admiring the political sophistication of the African kingdoms (such as al-Bakrî, who wrote approvingly of Ghâna in the eleventh century) or investigating their history (such as Ibn Khaldûn did for Mâli at the end of the fourteenth century), although the Islamic societies to which they belonged imported massive numbers of black slaves from these regions. In contrast, the slave trade practiced by the Europeans and their American colonies was accompanied by a monstrous ideology that not only negated the humanity of the captives, but also the singularity, and thus the historicity, of their societies. This negation has stealthily managed to install itself in modern mentalities. It lives on in multiple forms, whether it’s coldly saying that Africans have no history, or shutting away African art behind museum showcases with “ethnic” labels which lead one to think that objects produced by Africans reflect unchanging African “souls.” History is a remedy against such beliefs.

François-Xavier Fauvelle is senior fellow at the National Center for Scientific Research (CNRS) in Toulouse, France, and one of the world’s leading historians of ancient Africa. The author and editor of numerous books, he has conducted archaeological digs in South Africa, Ethiopia, and Morocco.