Browse Our Literature Catalog 2019

Our new Literature catalog includes one of Jane Austen’s most charming youthful “novels”-in-miniature, a look at how New York’s Lower East Side inspired new ways of seeing America, a compelling history of the national conflicts that resulted from efforts to produce the first definitive American dictionary of English, and much more.

If you’ll be at MLA 2019 in Chicago this weekend, stop by Booths 220-222 to see our full range of recent literature titles.

Most people think Jane Austen wrote only six novels. Fortunately for us, she wrote several others, though very short ones, while still a young girl.

Austen was only twelve or thirteen when she wrote The Beautifull Cassandra, an irreverent and humorous little masterpiece. Weighing in at 465 occasionally misspelled words, it is a complete and perfect novel-in-miniature, made up of a dedication to her older sister Cassandra and twelve chapters, each consisting of a sentence or two. This charming edition features elegant and edgy watercolor drawings by Leon Steinmetz and is edited by leading Austen scholar Claudia L. Johnson.

New York City’s Lower East Side, long viewed as the space of what Jacob Riis notoriously called the “other half,” was also a crucible for experimentation in photography, film, literature, and visual technologies. This book takes an unprecedented look at the practices of observation that emerged from this critical site of encounter, showing how they have informed literary and everyday narratives of America, its citizens, and its possible futures.

How the Other Half Looks reveals how the Lower East Side has inspired new ways of looking—and looking back—that have shaped literary and popular expression as well as American modernity.

In The Dictionary Wars, Peter Martin recounts the patriotic fervor in the early American republic to produce a definitive national dictionary that would rival Samuel Johnson’s 1755 Dictionary of the English Language. But what began as a cultural war of independence from Britain devolved into a battle among lexicographers, authors, scholars, and publishers, all vying for dictionary supremacy and shattering forever the dream of a unified American language.

Gift Guide: Biographies and Memoirs!

Not sure what to give the reader who’s read it all? Biographies, with their fascinating protagonists, historical analyses, and stranger-than-fiction narratives, make great gifts for lovers of nonfiction and fiction alike! These biographies and memoirs provide glimpses into the lives of people both famous and forgotten:

The radical saint: Walatta-Petros

Walatta-Petros was an Ethiopian saint who lived from 1592 to 1642 and led a successful nonviolent movement to preserve African Christian beliefs in the face of European protocolonialism. Written by her disciple Galawdewos in 1672, after Walatta-Petros’s death, and translated and edited by Wendy Laura Belcher and Michael Kleiner, The Life of Walatta-Petros praises her as a friend of women, a devoted reader, a skilled preacher, and a radical leader, providing a rare picture of the experiences and thoughts of Africans—especially women—before the modern era.

This is the oldest-known book-length biography of an African woman written by Africans before the nineteenth century, and one of the earliest stories of African resistance to European influence. This concise edition, which omits the notes and scholarly apparatus of the hardcover, features a new introduction aimed at students and general readers.


The forgotten mathematician: Fibonacci

The medieval mathematician Leonardo of Pisa, popularly known as Fibonacci, is most famous for the Fibonacci numbers—which, it so happens, he didn’t invent. But Fibonacci’s greatest contribution was as an expositor of mathematical ideas at a level ordinary people could understand. In 1202, his book Liber abbaci—the “Book of Calculation”—introduced modern arithmetic to the Western world. Yet Fibonacci was long forgotten after his death.

Finding Fibonacci is Keith Devlin’s compelling firsthand account of his ten-year quest to tell Fibonacci’s story. Devlin, a math expositor himself, kept a diary of the undertaking, which he draws on here to describe the project’s highs and lows, its false starts and disappointments, the tragedies and unexpected turns, some hilarious episodes, and the occasional lucky breaks.


The college president: Hanna Gray

Hanna Holborn Gray has lived her entire life in the world of higher education. The daughter of academics, she fled Hitler’s Germany with her parents in the 1930s, emigrating to New Haven, where her father was a professor at Yale University. She has studied and taught at some of the world’s most prestigious universities. She was the first woman to serve as provost of Yale. In 1978, she became the first woman president of a major research university when she was appointed to lead the University of Chicago, a position she held for fifteen years. In 1991, Gray was awarded the Presidential Medal of Freedom, the nation’s highest civilian honor, in recognition of her extraordinary contributions to education.

Gray’s memoir An Academic Life is a candid self-portrait by one of academia’s most respected trailblazers.


The medieval historian: Ibn Khaldun

Ibn Khaldun (1332–1406) is generally regarded as the greatest intellectual ever to have appeared in the Arab world—a genius who ranks as one of the world’s great minds. Yet the author of the Muqaddima, the most important study of history ever produced in the Islamic world, is not as well known as he should be, and his ideas are widely misunderstood. In this groundbreaking intellectual biography, Robert Irwin presents an Ibn Khaldun who was a creature of his time—a devout Sufi mystic who was obsessed with the occult and futurology and who lived in a world decimated by the Black Death.

Ibn Khaldun was a major political player in the tumultuous Islamic courts of North Africa and Muslim Spain, as well as a teacher and writer. Irwin shows how Ibn Khaldun’s life and thought fit into historical and intellectual context, including medieval Islamic theology, philosophy, politics, literature, economics, law, and tribal life.


The novelist and philosopher: Iris Murdoch

Iris Murdoch was an acclaimed novelist and groundbreaking philosopher whose life reflected her unconventional beliefs and values. Living on Paper—the first major collection of Murdoch’s most compelling and interesting personal letters—gives, for the first time, a rounded self-portrait of one of the twentieth century’s greatest writers and thinkers. With more than 760 letters, fewer than forty of which have been published before, the book provides a unique chronicle of Murdoch’s life from her days as a schoolgirl to her last years.

The letters show a great mind at work—struggling with philosophical problems, trying to bring a difficult novel together, exploring spirituality, and responding pointedly to world events. We witness Murdoch’s emotional hunger, her tendency to live on the edge of what was socially acceptable, and her irreverence and sharp sense of humor. Direct and intimate, these letters bring us closer than ever before to Iris Murdoch as a person.

Jason Brennan on When All Else Fails

The economist Albert O. Hirschman famously argued that citizens of democracies have only three possible responses to injustice or wrongdoing by their governments: we may leave, complain, or comply. But in When All Else Fails, Jason Brennan argues that there is a fourth option. When governments violate our rights, we may resist. We may even have a moral duty to do so. The result is a provocative challenge to long-held beliefs about how citizens may respond when government officials behave unjustly or abuse their power.

What led you to write this book?

Almost daily for the past year, I have come across news stories about police officers using excessive violence against civilians, or about people being arrested and having their lives ruined over things that shouldn’t be crimes in the first place. I watched the Black Lives Matter protests and started reading histories of armed resistance. I watched as president after president killed innocent civilians while pursuing the “War on Terror.” I see people’s lives destroyed by the “War on Drugs,” which continues on the same course even though we have strong evidence it makes things worse, not better. Every day, government agents acting ex officio are committing severe injustices. 

I found that contemporary philosophy was largely impotent to analyze or deal with these problems. Most political philosophy is about trying to construct a theory of an ideal, perfectly just society, which means philosophers usually imagine away the hard problems rather than consider how to deal with those problems. Philosophers often try to justify the government’s right to commit injustice, but they rely upon irrelevant or incoherent models of what governments and their agents are like. For example, Suzanne Dovi’s theory of political representation is grounded in a false theory of voter behavior, while John Rawls’s argument for government simultaneously assumes people are too selfish to pay for public goods, and government agents are too angelic to abuse their power. I saw an opening not only to do original philosophy, but to do work that bears on the pressing events of our times.

You can see that in the book. The “thought experiments” I use are all based on actual cases, including police officers beating up black men who did nothing more than roll slightly past a stop sign; officers shooting unarmed, subdued men; governments spying on and wiretapping ordinary citizens; drone strikes on innocent civilians; throwing people in jail for smoking marijuana or snorting cocaine; judges having to enforce absurd sentences or unjust laws; and so on.

Can you give a summary of your argument?

The thesis is very simple: the conditions under which you may exercise the right of self-defense or the right to defend others against civilians and government agents are the same. If it is permissible to defend yourself or others against a civilian committing an act, then it is permissible to defend yourself or others against a government agent committing that same act. For instance, if I wanted to lock you in my basement for a year for smoking pot, you’d feel no compunction in defending yourself against me. My thesis is that you should treat government agents the same way.

My main argument is also simple: Both laypeople and philosophers have offered a few dozen arguments trying to defend the opposite conclusion, the view that government agents have a kind of special immunity against defensive resistance. But upon closer examination, we’ll see that each of these arguments is bad. So, we should conclude instead that our rights to defend ourselves or others against injustice do not simply disappear by government fiat. On closer inspection, there turns out to be no significant moral difference between the Commonwealth of Virginia imprisoning you for owning pot and me imprisoning you in my basement for the same thing.

To be clear, I am not arguing that you may resist government whenever you disagree with a law. Just as I reject voluntarism on the part of government—I don’t think governments can simply decide right and wrong—so I reject voluntarism on the part of individuals. Rather, I’m arguing that you may resist when governments in fact violate people’s rights or in fact cause unjust harm.

Some will no doubt complain this thesis is dangerous. In some ways it is, and I take care to highlight how to be careful about it in the book. But on the other hand, the opposite thesis—that we must defer to government injustice—is no doubt even more dangerous. People tend to be deferential and conformist. Most people will stand by and do nothing while armed officers send people to death camps. Stanley Milgram showed most people will electrocute another person to death because a man in a white lab coat told them to. If anything, defenders of the other side—of the view that we should defer to government injustice—have a duty to be cautious pushing their dangerous view.

Can you talk a bit about the meaning behind the title? What exactly has to fail in order to justify the actions you describe?

Usually, lying, stealing, destroying property, hurting others, or killing others is wrong. However, you may sometimes perform such actions in self-defense or in defense of others. The basic principle of defense, codified in both common law and commonsense morality, is this: you may use a defensive action (such as sabotage, subterfuge, deceit, or violence) against someone else when they are initiating a severe enough injustice or harm, but only if it is necessary to defend yourself. Here, “necessary” means that you cannot use violence if a nonviolent means of defense is equally effective; you cannot use deceit if a non-deceitful means of defense is equally effective. So, the title is meant to signal that defensive actions—such as deceit or violence—are, if not quite last resorts, not first resorts either. 

What is the place of uncivil disobedience within a peaceful and successful polity?

What we call “civil disobedience” is a form of public protest. In civil disobedience, people publicly and explicitly break the law for the purpose of trying to have the law changed. They will often accept legal punishment, not necessarily because they think punishment is warranted and that even bad laws must be respected, but because it is strategic to do so to garner sympathy for their cause. Civil disobedience is about social change.

But self-defense is not about social change. If I kill a would-be mugger, I’m not trying to reduce crime or change gun policy. I’m trying to stop myself from being the victim of that particular injustice. Similarly, if you had been present and had acted in defense of Eric Garner, you would not necessarily have been trying to fix American policing—you would have just been trying to save Garner’s life. Defensive actions—or uncivil disobedience—are about stopping particular wrongdoers from committing particular harms or violating particular people’s rights. 

What are your thoughts on recent protests and movements such as Take a Knee, Me Too, and March for our Lives?

Globally, US policing and US criminal policy are outliers. American criminal justice is unusually punitive and harsh. We have 4.4% of the world’s population but around 25% of the world’s prisoners. We give longer, harsher sentences than illiberal countries such as Russia or China. Our police are unusually violent, even to the most privileged in our society. I applaud movements that bring attention to these facts.

It wasn’t always this way. In the 1960s, though the US had a higher than normal crime rate, its sentence lengths, imprisonment rate, and so on, were on the high end but similar to those of other liberal, rich, democratic countries. But starting in the 1970s, things got worse. 

Right now, Chris Surprenant and I are writing a book called Injustice for All explaining why this happened and offering some ideas about how to fix it. We argue that the problem is not explained by racism (as leftists argue), the War on Drugs (as libertarians argue), or crime and family collapse (as conservatives argue), though these things are each important factors. Rather, the US criminal justice system became dysfunctional because nearly every person involved—from voters to cops to judges to politicians—faces bad incentives created by bad rules.

Are there examples from history of individuals or groups following your philosophy with success?

Two recent books, Charles Cobb Jr.’s This Non-Violent Stuff’ll Get You Killed and Akinyele Omowale Umoja’s We Will Shoot Back, provide strong evidence that the later “nonviolent” phase of civil rights activism succeeded (as much as it did) only because in earlier phases, black Americans involved in protest armed themselves in self-defense. Once murderous mobs and law enforcement learned that these activists would fight back, they turned to less violent forms of oppression, and activists in turn began using the nonviolent tactics with which we are familiar.

Do you think there are changes that can be made that would lessen instances in which uncivil disobedience is justified?

A facile answer: all governments have to do is respect citizens’ rights.

More realistically: we need to train police differently, change recruitment tactics, and stop using SWAT teams so often. We should decriminalize many behaviors that are currently criminalized. We need to change tax codes so that poor localities are not dependent upon law enforcement issuing tickets to gain revenue. We need Congress to rein in the executive branch’s war and surveillance powers.

But even these kinds of ideas are too facile, because there is no willpower to make such improvements. Consider an example: violent crime in the US has been dropping since 1994 (and no, it’s not because we keep locking up all the violent criminals). Yet most Americans mistakenly believe, year after year, that crime is rising. They feel scared and vote for politicians who promise to be tough on crime. The politicians in turn support more confrontational, occupying-force style methods of policing. Here, we know what the problem is, but to fix the system we need to fix the voters, and we don’t know how to do that. To be clear, When All Else Fails is not a theory of social change, and not a prescription for fixing persistent or systematic social problems. As I often tell my political economy students, while we may know which institutions work better than others, no one yet has a good account of how to move from bad institutions to good.

Jason Brennan is the Robert J. and Elizabeth Flanagan Family Professor of Strategy, Economics, Ethics, and Public Policy at Georgetown University’s McDonough School of Business. His many books include Against Democracy and The Ethics of Voting.

Ethan Shagan on The Birth of Modern Belief

This landmark book traces the history of belief in the Christian West from the Middle Ages to the Enlightenment, revealing for the first time how a distinctively modern category of belief came into being. Ethan Shagan focuses not on what people believed, which is the normal concern of Reformation history, but on the more fundamental question of what people took belief to be. Brilliantly illuminating, The Birth of Modern Belief demonstrates how belief came to occupy such an ambivalent place in the modern world, becoming the essential category by which we express our judgments about science, society, and the sacred, but at the expense of the unique status religion once enjoyed.

What led you to write this book?

Good works of history often begin with a chance discovery that sticks like a splinter in the historian’s mind: something weird or surprising in the historical record that demands an explanation. In this case, that oddity was something I found in Martin Luther’s collected writings: his claim that most people do not believe that God exists. This struck me as utterly outlandish. Besides the fact that more or less everyone in sixteenth-century Europe believed in God, Luther also wrote elsewhere that atheism was virtually impossible because knowledge of God is imprinted on all human souls. So what on earth was going on? Upon further research, I found other versions of this same bizarre claim popping up elsewhere in the sixteenth century. John Calvin wrote in his Institutes of the Christian Religion that anyone who follows their own passions in defiance of heavenly judgment “denies that there is a God”—the translator of the modern English edition changed this passage to “virtually denies that there is a God,” presumably because he thought the original must have been some sort of mistake. The radical spiritualist Sebastian Franck claimed, far more drastically, that “there is not a single believer on earth!” These remarkable and unexpected ideas were not written in obscure places, nor were they written by unknown people. So why had no historian ever written about them before?

These discoveries set me on a journey that has lasted seven years. I started with the intuition that “belief” itself had changed its meaning over time. Thus, for instance, Luther could say that everyone knows God exists, but he could still argue that most people do not believe God exists, because he took “belief” to be a more difficult condition. But from there I had to figure out what preexisting, medieval understandings of belief Luther was rejecting. Then I had to figure out how the different factions in the Reformation interpreted belief. And then, most importantly, I set myself the task of figuring out how a modern understanding of “belief” emerged. Hence this became a book about the birth of modern belief: a whole new way of imagining the relationship between religion and other kinds of knowledge, which we take to be absolutely timeless and natural but was in fact an invention of the seventeenth century and a touchstone of the Enlightenment. 

Can you explain a bit about the book’s argument? What do you mean by a modern category of belief?

Belief has a history; the concept changes over time. We take it for granted that “belief” means private judgment or opinion. From that assumption, which we assume is timeless but is in fact profoundly modern, lots of other conclusions follow which seem equally unquestionable. For example, if belief is private judgment, then our beliefs might change over time in light of new evidence or further reflection. Likewise, if belief is opinion, then our belief on any particular issue might be probable rather than absolute: we might reasonably say we believe something if we think it’s likely, even if we’re uncertain. Most importantly, if belief is private judgment, then I might believe a religious doctrine in more or less the same sense that I believe that Lee Harvey Oswald acted alone, or that our sun is part of the Milky Way galaxy.

None of this would have been taken for granted in the Western tradition before the seventeenth century, and indeed a great deal of intellectual energy was poured into denying that any of it was true. Of course, people sometimes used the verb “believe” (credo in Latin, glauben in German, etc.) in a colloquial way—“I believe this peach is ripe,” or “I believe my husband loves me”—but a vast range of theology and philosophy was devoted to the proposition that this was totally different from belief in its proper, religious sense. To believe required an absolute, certain conviction, guaranteed to be true by reliable authority. Anything lesser or different could easily be denounced as unbelief, a failure of the mind and soul; anyone who believed wrongly, or insufficiently, or for the wrong reasons, or in the wrong way, might be taken not to believe at all. So my book is a history of how belief was freed from these constraints, creating the conditions in which religion could flourish in a secular age, but only at the cost of relinquishing the special status religion had previously enjoyed.

It seems intuitive that modern belief formed as a reaction against the Church, but how was it also a reaction against Luther and Calvinism?

Lots of people think that the Reformation produced religious liberty, because in the Reformation individuals—like Luther purportedly saying, “Here I stand, I can do no other”—insisted upon their own conscientious right to believe differently from the Roman Catholic Church. But this is quite wrong. Luther and his allies did indeed insist that their own beliefs were genuine, and that their own consciences were inviolable. But in the very act of making this claim for themselves, they insisted that all other beliefs were not simply false, they were not even beliefs at all. When early modern Protestants claimed the right to believe as they would, they were creating a new and exclusive category of belief to which others did not have access. So the Reformation did not inaugurate modern belief. Instead it produced a new kind of authoritarianism: whereas Catholics disciplined people to believe, Protestants accepted that belief was rare, and instead disciplined unbelievers. The reaction against these twin pillars of orthodoxy thus came from dissidents within both traditions. Modern belief emerged in fits and starts, not as a revolution against Christianity, but as a revolution from within Christianity by mutineers whose strained relationship to orthodoxy necessitated a more porous understanding of belief.

How does the modern idea of belief travel through later intellectual movements such as the Enlightenment? Did it undergo changes there as well?

This is really a book about the Enlightenment, as much or more than it’s a book about the Reformation, because it was in the Enlightenment that modern belief truly emerged as a powerful force in the world. But the Enlightenment you’ll find in these pages may not be the one you expect.

First, it is an Enlightenment that is inclusive of religion rather than against religion. I do not deny, of course, that there was a “radical Enlightenment” which attempted, often quite explicitly, to undermine the claims of organized Christianity. But by far the more significant project of the Enlightenment was to reestablish religion on a new basis, to render it not only compatible with reason but a partner in the task of criticism which was at the heart of eighteenth-century ideas. The Enlightenment thus pioneered a question which we take for granted today, but which had received remarkably little attention previously: on what grounds should I believe? There were many different answers in the Enlightenment—as there remain today—but the task of Enlightenment religion was to tear down the medieval architecture of the mind which had strictly separated belief, knowledge, and opinion, and had thus made the question itself virtually meaningless. Enlightenment Christianity established what the Reformation had not: the sovereignty of the believing subject.

Second, my Enlightenment is not about the triumph of reason, but rather the triumph of opinion. Modern critics of the Enlightenment, on both the Left and the Right, often denigrate Enlightenment reason—and not without reason, if you’ll pardon the pun—as a false universal which allowed a new orthodoxy to establish itself as the natural frame of all argument rather than a peculiar argument in its own right. But this understanding of the Enlightenment, which takes Immanuel Kant as its avatar, misses huge swathes of late-seventeenth and eighteenth-century thought which instead privileged opinion, a kind of judgment that was particular rather than universal. In this book, I want to resuscitate an Enlightenment that privileged autonomous judgment rather than judgment constrained by someone else’s reason, and thus led to new kinds of spiritualism as much as it led to new kinds of scientism. At its worst, this modern spirit of autonomy produces the world of “alternative facts” and “fake news”; but at its best, it produces the conditions of freedom that allow for peace in a diverse society.

What is the relationship between the history of belief and secularization?

Every page of this book is engaged at least obliquely with the secularization question, but one of my key points is that secularization is the wrong question.

Secularization assumes that the crucial development in modernity is the creation of spaces outside or apart from religion; in modernity, this argument goes, religion has been relegated to a separate, private sphere. But by contrast, what I find is that modernity’s encounter with religion is not about segregating belief from the world, but rather about the promiscuous opening of belief to the world. Belief becomes, in modernity, not the boundary separating religious claims from other kinds of knowledge, but rather the least common denominator of all knowledge. Here my favorite example is the claim of many modern Christians that scientific knowledge—like the theory of evolution, for instance—is just another form of belief. This claim would have been literally nonsensical before the seventeenth century, because the whole point of belief was to preserve a special prestige for Christianity: science was a different beast altogether, belonging to different mental faculties and defended in different ways. The fact that scientific theories can now be understood as beliefs suggests that instead of thinking about the rise of a modern secular, we instead need to think about what happened when the walls separating religious belief from other kinds of knowledge-claims were breached.

What do you hope readers will take away from reading this book?

That belief has proliferated rather than waned in modernity, but only because the definition of belief has changed in our society to make it compatible with diversity, democracy, and freedom of thought. The old world of belief—where it was structured by authority, and where it functioned as an axis of exclusion to preserve orthodoxy—is dead and buried, and we should be thankful for its demise rather than nostalgic for the oppressive unity it once provided.

Ethan H. Shagan is professor of history at the University of California, Berkeley. He is the author of The Rule of Moderation: Violence, Religion, and the Politics of Restraint in Early Modern England and Popular Politics and the English Reformation. He lives in Orinda, California.

Kieran Setiya: Idleness as Flourishing

This article was originally published by Public Books and is reprinted here with permission.

It is hard work to write a book, so there is unavoidable irony in fashioning a volume on the value of being idle. There is a paradox, too: to praise idleness is to suggest that there is some point to it, that wasting time is not a waste of time. Paradox infuses the experience of being idle. Rapturous relaxation can be difficult to distinguish from melancholy. When the academic year comes to an end, I find myself sprawled on the couch, re-watching old episodes of British comedy panel shows on a loop. I cannot tell if I am depressed or taking an indulgent break. As Samuel Johnson wrote: “Every man is, or hopes to be, an Idler” (The Idler, no. 1, April 15, 1758; reprinted in The Idler and The Adventurer, edited by W. J. Bate, John M. Bullitt, and L. F. Powell, Yale University Press, 1963, pp. 3–4). As he also wrote: “There are … miseries in idleness, which the Idler only can conceive” (The Idler, no. 3, April 29, 1758; in The Idler and The Adventurer, p. 11).

This year brings three new books in praise of wasting time: a manifesto by MIT professor Alan Lightman; a critical history by philosopher Brian O’Connor; and a memoir by essayist Patricia Hampl. Each author finds a way to write in the spirit of idleness. Yet none of them quite resolves our double vision. Even as they bring its value into focus, they never shake a shadow image of the shame in being idle.

Why idleness now? Because we are too busy, too frantic; because of the felt acceleration of time. Lightman supplies a measure. “Throughout history,” he writes, “the pace of life has always been fueled by the speed of communication.”

When the telegraph was invented in the nineteenth century, information could be transmitted at the rate of about four bits per second. By 1985, near the beginnings of the public Internet, the rate was about a thousand bits per second. Today, the rate is about one billion bits per second.

We are in principle accessible anywhere, at any time; we can be texted, emailed, tagged: “The world today is faster, more scheduled, more fragmented, less patient, louder, more wired, more public.” There is not enough downtime. So Lightman argues in his brisk, persuasive essay. His snapshots of the relevant social science portray the grim effects of over-connection in our digital age: young people are more stressed, more prone to depression, less creative, more lonely but never really alone. Our time is ruthlessly graphed into efficient units. The walking speed of pedestrians in 32 cities increased by 10 percent from 1995 to 2005.

With its brief chapters and bright illustrations, Lightman’s book is itself well-designed for the attention deficits of the internet era, perfect for the postliterate teenager or the busy executive with only an hour to spare. It makes an elegant case for downtime: unstructured and undistracted, time to experiment and introspect. For Lightman, this is the kind of time-wasting that is not a waste of time. It augments creativity, which draws on undirected or “divergent” thinking. It replenishes and repairs us. And it gives us space in which to find ourselves.

Lightman’s definition of “wasting time” as undirected introspection is deliberately tendentious. The phrase could just as well describe the smartphone addict playing Angry Birds. Ironically, one of the most intriguing studies in Lightman’s book concerns the positive impact of trivial games. Asked to come up with new business ideas, people who were forced to procrastinate with Minesweeper or Solitaire for several minutes were “noticeably more creative.” Lightman does not pause to ask whether this effect can be scaled up. (I pushed it pretty far myself in graduate school, with mixed results.) But he offers a suggestive catalog of artists and scientists whose best ideas arrived when they were staring at a wall.

Lightman ends with concrete, practical prescriptions: 10-minute silences during school days, “introspective” college courses that give students more time to reflect, electronics-free rooms at work, unplugged hours at home. The changes are not radical and leave intact the media ecology in which we are to live. “It is within the power of each of us as individuals,” Lightman writes, “to make changes in our way of living to restore our inner lives. … With a little determination, each of us can find a half hour a day to waste time.”

Perhaps it is modesty, or realism, that prevents Lightman from seeking social remedies for a social problem. In the short term, he suggests, we have to work on ourselves: a conservative therapy for what ails us. Lightman’s apology for wasting time is conservative in other ways, too. He celebrates not downtime itself but its instrumental value, its usefulness as a means to integrity and achievement. Lightman cites psychologist Abraham Maslow on two forms of creativity: the kind that involves an artistic escape from stress and the kind that fuels “‘self-actualization,’ the desire to become the best we can be.” For Lightman,

there is a kind of necessary homeostasis of the mind: not a static equilibrium but a dynamic equilibrium in which we are constantly examining, testing, and replenishing our mental system, constantly securing the mental membrane between ourselves and the external world, constantly reorganizing and affirming ourselves.

If this is wasting time, who has the energy for it?

Not Brian O’Connor, who makes bolder, larger claims on behalf of being idle. Idleness flouts the prevailing social order and the conception of autonomy as arduous self-fashioning that Lightman and Maslow share. O’Connor traces the exhausting project of self-constitution from Kant and Hegel through Karl Marx. What Lightman depicts as the ultimate purpose of wasting time, O’Connor sees as an alien imposition, an order issued without authority. Modern philosophy instructs us to make something of ourselves, but it has no right to tell us what to do, and its edicts are appropriated by societies that make exorbitant demands for work, tie recognition to material success, and exalt the individual at the cost of real community. For O’Connor, idleness is indifference to productive work and social prestige; it rejects the need for guiding purpose or self-formation. He adds to the acknowledged benefits of downtime its value as social critique.

Although O’Connor’s book has a guiding purpose, it nonetheless stays true to the ethos of idling. For the most part, O’Connor is content to answer the case against idleness made by its philosophical critics, not to argue for idleness itself. The burden of proof is placed on the opponents of being idle, who must work to convince the idler he is wrong. The idler’s objections are appropriately laconic.

O’Connor’s principal antagonist is Kant, who argues that we must make every choice as if we were legislating for all, and that we have a consequent duty to develop our talents. Scholars may query O’Connor’s interpretation of Kant as drawing on “that special feeling of worthiness” that comes from being useful to society. But even if he is wrong about this, O’Connor is right to find in Kant a vision of freedom as responsibility, of autonomy as work: the daunting project of determining how to be. For Kant, freedom requires one to live by principles one can will as laws for every rational being. One must bring this severe ambition to everything one does; only then is one entitled to be happy. “It is,” O’Connor writes, “a profound theoretical justification of an idea that has now become commonplace: that a life worth living is one marked by effort and achievement.” The idea that a good life calls for onerous self-creation fuels Nietzsche’s injunction to “become who you are” and Sartre’s existentialism.

Marx is a more difficult customer, since his emphasis on the alienation of labor under capitalism could easily be read as a critique of work. In fact, it is a call for the transformation of work into new, authentic forms. Marx’s idea of alienation was developed by Herbert Marcuse, the closest O’Connor gets to an intellectual ally. For Marcuse, alienation involves the internalization of goals that have nothing to do with what we really want. In order to function, contemporary society requires its members to be alienated in this way. What O’Connor finds suspicious in both Marx and Marcuse is the desire to solve the problems of alienation by changing the nature of work, rather than putting it in its place. Describing the conditions of work under communism, Marx writes: “What appears as a sacrifice of rest may also be called a sacrifice of idleness, of unfreedom, of unhappiness.” Marcuse strives instead for a synthesis of work and play.

O’Connor sees no hope of reconciling labor with leisure. Where Marx wants to “hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner,” O’Connor wonders why he can’t just take a nap.[3.Karl Marx and Friedrich Engels, The German Ideology, translated from the German by Salo Ryazanskaya, in Karl Marx: Selected Writings, 2nd ed., edited by David McClellan (Oxford University Press, 2000), p. 185.] Work needs to be transformed, but even after its transformation, it should not be our model of meaning in life and it cannot subsume the value of being idle. Idleness is freedom not just from alienated labor, but from the pressures of autonomy and authenticity. It is another mode of flourishing, against which the lure of striving and success should seem, at best, a lifestyle choice.

What O’Connor’s provocations miss is that for Kant, and for Sartre, the responsibility for oneself that defines autonomy is at the same time a responsibility to others. It is one thing to slack off when I could develop my talents; that is no one’s problem but my own. It is another to be idle in the face of urgent need, and so to be indifferent to suffering. John Berger wrote: “On this earth there is no happiness without a longing for justice.”[4.John Berger, Hold Everything Dear (Verso, 2007), p. 102.] It has been an aspiration of philosophers since Plato to show that this is true. An adequate defense of idleness would have to address that aspiration, to assuage the idler’s guilt. I may not owe it to myself to strain and struggle, but don’t I owe it to you?

Ironically, the work that most directly confronts the tension between idleness and ethical responsibility is neither a manifesto nor a monograph, but an essay in the spirit of Montaigne. Like Montaigne, Patricia Hampl is moved to reflect by grief and writes in conversation with someone she has lost. Like Montaigne, she rates description over narrative. And like Montaigne, she is willing to meander. Framed by a pilgrimage to Montaigne’s tower near Bordeaux, Hampl’s book does not arrive at his estate for more than two hundred pages and stops at its destination for a perfunctory eight. On the way, it pays visits to the homes of authors, saints, and scientists who embraced idleness by retiring from the world.

The most memorable are two Anglo-Irish women, Sarah Ponsonby and Lady Eleanor Butler, who eloped together unsuccessfully, disguised as men, in 1778. Returned to their homes, they wore their families down and were permitted to leave together two months later, setting up a cottage in Llangollen, Wales, where they lived on their limited family income, reading books, writing letters, and tending their garden, “famous for wishing to be left alone.” They were visited by celebrities from Shelley and Byron to the Duke of Wellington and Sir Walter Scott.

What the Ladies of Llangollen have in common with Montaigne is a strategy of “[retreat] during ages of political mayhem,” in their case the French Revolution, in his the Reformation. Today, many of us may also feel tempted to retreat. The way of life the Ladies called “our System,” with its monastic regularity and disdain for social expectations, is subversively attractive. Like Montaigne’s essays, it assures us that “the littleness of personhood is somewhere alive, taking its notes,” that it is okay to “enjoy yourself in the littleness of the moment” when the narrative of history goes awry. Withdrawal is not defeat. And if it is irresponsible to withdraw completely, doing so has a point. The limit cases of Montaigne or Ponsonby and Butler, whose idleness did not serve some further goal, show that wasting time is worthwhile in itself. This is what we see in the model their lives present even if, in the face of our obligations to others, it is not a model for us.

It may not even be a model for them. At the end of her book, Hampl quotes a passage from Montaigne: “We say: ‘I have done nothing today.’ What, have you not lived? That is not only the fundamental but the most illustrious of your occupations … He says this in his Essai titled—what else?—‘On Idleness.’” Except he doesn’t. The quotation is from the sprawling essay “Of Experience,” with which the Essays close. “Of Idleness” is an earlier piece, a distillation of self-doubt in which Montaigne indicts his enterprise: “The soul that has no fixed goal loses itself.” If he commits his extravagances to paper, he writes, it is in order “to make my mind ashamed of itself.”[5.Michel de Montaigne, “On Idleness,” The Complete Essays of Montaigne, translated from the French by Donald M. Frame (Stanford University Press, 1958), p. 21.]

Like Montaigne, who played a diffident but competent role in politics—he was mayor of Bordeaux—most of us forge a rotten compromise between idleness and industry. What else can we do? We see the flourishing of life in the little moments, as we see the scale of its shirked responsibilities. To manage our ambivalence is necessary work.

Kieran Setiya is professor of philosophy at the Massachusetts Institute of Technology. He is the author of Midlife: A Philosophical Guide, Reasons without Rationalism (Princeton) and Knowing Right from Wrong. He lives in Brookline, Massachusetts, with his wife and son.

Idleness: A Philosophical Essay by Brian O’Connor is available here.

Rebecca Bedell on Moved to Tears

In her new book Moved to Tears, Rebecca Bedell overturns received ideas about sentimental art, arguing that major American artists—from John Trumbull and Charles Willson Peale in the eighteenth century and Asher Durand and Winslow Homer in the nineteenth to Henry Ossawa Tanner and Frank Lloyd Wright in the early twentieth—produced what was understood in their time as sentimental art. This was art intended to develop empathetic bonds and to express or elicit social affections, including sympathy, compassion, nostalgia, and patriotism. In this Q&A, she discusses the ways sentimental art has been misunderstood, and why it is important today.

What is new in the book? What did you hope to accomplish?

I hope both to uproot the still tenacious modernist prejudice against sentimental art and to transform our understanding of it. So many art critics, art historians, artists, and others regard “sentimental art” as a synonym for “bad art.” I want to redefine and complicate ideas about sentimental art: what it looks like, who made it, the cultural work it does.

Isn’t there bad sentimental art?

Yes, of course. There’s also bad abstract art, bad Impressionist art, bad portraits—but we don’t dismiss those entire categories of art because of that.

I associate sentimental art with Victorian genre painting. Is that what you focus on?

No. I do not associate sentimental art with particular subject matter, nor do I locate it in the Victorian era alone. I’ve tried to suggest in the book the extent to which the sentimental pervaded artistic production (and reception) from the later eighteenth century onward. It touched nearly all categories of subject matter: portraits, history painting, religious imagery, landscape, and so on. It affected the creation not only of painting, sculpture, prints, and photography, but also architecture, landscape design, and public spectacles.

Who are the key figures in the book?

The artists I address range from John Trumbull and Charles Willson Peale in the late eighteenth century, to Andrew Jackson Downing, Thomas Cole, Winslow Homer, Mary Cassatt, John Singer Sargent, and others in the nineteenth, to Henry Ossawa Tanner and Frank Lloyd Wright in the early twentieth.

So, what is sentimental art?

Sentimental art has fundamentally to do with connectedness, with our connectedness to others, to place, to the conditions of our existence. Sentimental art aims to develop empathetic bonds and to represent and elicit what were called in the eighteenth century the “social affections,” those emotions that bind us together, including tenderness, affection, sympathy, compassion, and patriotism.

I see sentimental art as part of the broader “sentimental project,” as historians have termed it, launched from Great Britain in the eighteenth century. Its ambition was to transform individuals and society through the cultivation of sympathy. Abolitionism, penal reform, child labor laws, and societies for the prevention of cruelty to animals were all, in some measure, parts of the project.

In working on the book, did you come upon anything that surprised you?

I began the project by combing through eighteenth- and nineteenth-century books, newspapers, and magazines for the use of the word “sentimental” in relation to art. The first instance I found of this was a surprise to me. A writer for a Boston newspaper in the 1780s described John Trumbull’s Revolutionary War paintings as sentimental, and in a very positive way. That was my first hint that sentimental art’s early associations were not with the feminine and the domestic, but with the masculine, the public, and the political.

Where did the book begin? What launched you on this project?

As an art historian and teacher, I have been thinking about these issues and themes for a long time. But this project began in a big way for me during Barack Obama’s presidency, when he was selecting a new Supreme Court justice. He said that one of the qualities he valued in jurists was empathy. The backlash against that statement was so intense and powerful that it shocked me. To me, empathy, an ability to think oneself into the subject position of someone different from oneself, seems a critically important quality in a judge. Where did this angry, visceral reaction against the connective emotions of the sentimental come from?

At the same time, in my readings in my field of American art, I was continually coming upon statements such as, “Winslow Homer was never sentimental,” “John Singer Sargent’s paintings of children are never sentimental.” Yet their works—at least some of them—looked sentimental to me.  Why this need to deny the presence of the sentimental in the works of artists we admire?

All of this came together to launch me on this project. I had become conscious of a broad societal aversion to and rejection of the sentimental in both art and public life, and I wanted to understand it historically. What caused this aversion? Where did it come from? When did it begin?

Is sentimental art still being made today?

Certainly.  Steven Spielberg is one of the great sentimental filmmakers of our time. Ken Burns too. Much of the environmental art being created today is deeply concerned with our connectedness to the natural world. Some of the most powerful art associated with the Black Lives Matter movement, such as Carrie Mae Weems’s recent work, is, in my understanding of the term, sentimental. In fact, I think it is difficult to identify any artists whose work excludes the sentimental completely. Its emotions—compassion, sympathy, affection, pity, concern—are fundamental to our human identities. I don’t think they can ever be wholly suppressed, and indeed one of my discoveries in my research and writing is that the sentimental is at the core of much of the art we admire and enjoy the most.

Rebecca Bedell is associate professor of art and chair of the Art Department at Wellesley College. She is the author of The Anatomy of Nature: Geology and American Landscape Painting, 1825–1875 (Princeton). She lives in Cambridge, Massachusetts.

UPress Week Blog Tour: #TurnItUp Arts and Culture

Welcome to the University Press Week blog tour. We’re kicking off today by turning up the volume on arts and culture with these fantastic university press offerings from our colleagues: Duke University Press writes about how partnerships with museums have helped them build a strong art list; Athabasca University Press offers a playlist by author Mark A. McCutcheon of all the songs featured in his book, The Medium Is the Monster: Canadian Adaptations of Frankenstein and the Discourse of Technology; and Rutgers University Press dedicates a post to their book, Junctures in Women’s Leadership: The Arts by Judith Brodsky and Ferris Olin. Over at Yale University Press, you can read a piece by author Dominic Bradbury about how immigrants enrich a country’s art and architecture, then head over to University of Minnesota Press for a post about their author Adrienne Kennedy, who will be inducted into the Theater Hall of Fame today. Stay tuned for a great lineup of #TurnItUP posts throughout the week!

Jack Zipes: The Rise of Édouard Laboulaye from the Dead

I am not certain when the urge or itch began, but about ten years ago, when I founded the series of Oddly Modern Fairy Tales with Hanne Winarsky, then senior editor at Princeton University Press, I began to “rebel” against the classical well-known fairy tales, not to mention the insipid Disney fairy-tale films. I realized that they had become stale and commodified and had no historical relevance. The fairy tale is a mysterious hybrid genre and has secrets about our past to reveal if you value each tale’s historical idiosyncrasies. As a scholar of these tales, I realized you cannot deal with present socio-political-cultural conditions unless you have a firm grasp on historical transformation. Consequently, all my concerns as a scholar of folklore and fairy-tale studies, and also as a writer and translator of tales, made a huge U-turn. Indeed, I began to search and research the gaps of the past that we needed to fill and still need to fill to make the present more substantial and pave the way for a better future.

In the particular case of folk and fairy tales, this led me to discover and uncover highly significant writers and illustrators of fairy tales in the nineteenth and twentieth centuries. Since I have always been a library nerd, a used book pack rat, and a flea market junkie, it was not difficult for me to sniff out numerous neglected authors and their works. In the course of ten years, I have been fortunate not only to find amazing collections of fairy tales written by Kurt Schwitters, Béla Balázs, Naomi Mitchison, Walter de la Mare, and Lafcadio Hearn, but also numerous unusual fairy tales by British writers of the 1930s, workers’ tales of the early twentieth century, and “decadent” French fairy tales of the late nineteenth century. Moreover, the books in the series have been edited by superb scholars and writers such as Maria Tatar, Marina Warner, Philip Pullman, Gretchen Schulz, Lewis Seifert, and Michael Rosen. Thanks to these works – with more to come – we now know that the popular fairy tale did not end and will not end in a homogenized form of happily ever after. Rather than settling into a fraudulent happy ending, the fairy tale as a genre continues to startle us through diverse and extraordinary versions throughout the world.

The plans for the future include fabulous Japanese fairy tales by Lafcadio Hearn; Chinese stories of the early twentieth century, during the onset of communism; Jewish tales by Der Nister, a somewhat bizarre rabbi; radical fairy tales written by Hermynia zur Mühlen, an Austrian aristocrat turned communist; provocative and dazzling Italian fairy tales from the late nineteenth and early twentieth centuries; Lisa Tetzner’s fairy-tale novel Hans Sees the World, about a boy’s adventures during the 1929 depression; and Yuri Olesha’s Three Fat Men, which concerns an upside-down world in Russia during the 1930s.

What makes Édouard Laboulaye’s political fairy tales of the late nineteenth century significant for today and for history is that he was truly the foremost writer of political fairy tales in all of Europe. In fact, I know of no other writer or politician in the nineteenth century who used the fairy tale so deftly and ironically to oppose tyranny. In addition, Laboulaye was very much an internationalist. He knew many foreign languages and had an extraordinary knowledge of folk tales from oral traditions in Italy, Senegal, Egypt, Estonia, Russia, Germany, Iceland, and other countries, and he adapted them to sharpen their political implications. Furthermore, he was certainly a proto-feminist: almost all of his tales have feisty female protagonists who courageously oppose stupid fathers, unjust husbands, and corrupt male courts of power. The major tale in my current collection, “Slap-Bam, or The Art of Governing Men,” is a wonderfully humorous narrative that argues for the importance of women in shaping the politics of a country.

Is such relevance reflected, then, in the nature of our current study of folklore and fairy tales at universities? How is it possible for a writer like Édouard Laboulaye to escape the eyes of university students and their professors? Although political scientists in France are well aware of Laboulaye’s importance – a recent conference in France was dedicated to his work in jurisprudence and history – I have not read a single essay or book about his work in literature and folklore. Is this the fault of French literary scholars caught in the barbed wire and babble of French critical theory all over the world? Is this the fault of most universities in the world that do not have folklore programs, or which have eliminated them? I am not certain. But I have a certain urge and itch to find out why.

A. A. Long on How to Be Free: An Ancient Guide to the Stoic Life (according to Epictetus)

How to Be Free is a book for every place and occasion. I can say this without any pride or self-promotion because the ideas of the book are not my own but those of the ancient Stoic philosopher Epictetus, and they have stood the test of time. In fact his guide to life, which I translate and introduce here, is more relevant and needful today than at any period in its long and salutary history. I say this because the freedom that Epictetus promises and justifies—freedom to take charge of one’s own individual thoughts and actions—is under attack by market capitalism, commercial advertising, social media, and cyber aggression. By manipulating desires and infiltrating mindsets, these powerful forces are undermining autonomy and personal independence with disastrous results. They are a main cause of the anxiety and depression that oppress so many people, through the fear of falling short in health, wealth, personal success, relationships, appearance, and status.

Epictetus counters the pressures of the external environment by making a deceptively simple distinction—between things that are up to us (call them U things) and things that are not up to us (call them N things). U things comprise our will and our motivations, our likes and dislikes, our actions and reactions, our feelings and emotions—in other words the essential person that each of us is. N things comprise everything else—the state of the world, the people around us, our work and income, even our bodies because our limbs and physical wellbeing are not absolutely under our direct control. This is a stark distinction. Its value is to highlight the notion that what we want or do not want, what matters or does not matter to us, depends primarily on our own individual decisions, and not what is done to us by others. On this view, it is we ourselves, and not outside forces, that ultimately determine our happiness and unhappiness and condition our reactions.

The freedom that this book seeks to promote has two sides: one side is freedom to act without constraint by external forces, whether people or media pressures or mistaken impressions that we have to react in certain ways; the other side is freedom from disabling emotions and anxieties that inhibit the full exercise of our will and mental capacity. Along with freedom Epictetus emphasizes self-sufficiency and competing with oneself to be as good as possible in facing the challenges of life. Read this book as you approach a cold shower. You will feel great when it is over, toned up and ready for anything.

A. A. Long is professor emeritus of classics and affiliated professor of philosophy at the University of California, Berkeley. His many books include Epictetus: A Stoic and Socratic Guide to Life, Stoic Studies, and (with Margaret Graver) Seneca: Letters on Ethics. He lives in Kensington, California.

Philip Freeman: How to Be a Friend (according to Cicero)

In a world where social media, online relationships, and relentless self-absorption threaten the very idea of deep and lasting friendships, the search for true friends is more important than ever. In this short book, which is one of the greatest ever written on the subject, the famous Roman politician and philosopher Cicero offers a compelling guide to finding, keeping, and appreciating friends. With wit and wisdom, Cicero shows us not only how to build friendships but also why they must be a key part of our lives. For, as Cicero says, life without friends is not worth living. Translator Philip Freeman has taken the time to answer some questions about How to Be a Friend.

Who was Cicero?

A Roman lawyer, politician, and philosopher who lived in one of the most dangerous places and important times in human history—first-century BC Rome. He was friends and sometimes enemies with Julius Caesar and almost every other key player at the end of the Roman Republic. It was an age of war, revolution, and mass slaughter, yet also a time of amazing creativity. Cicero saw it all and lived long enough to write about it until Marc Antony finally had his head cut off.

What did he write about?

Practically everything. God, religion, sex, greed, growing old—you name it. He was also a key political philosopher. The American founding fathers were huge Cicero fans. In fact, the American government as found in the US Constitution is largely based on the writings of Cicero. But one of his best little works is about the subject of friendship.

Why should we care what Cicero says about friendship? I mean, he lived over two thousand years ago. Surely in an age of social media, all the rules have changed.

Friendship—like all the important things in life—doesn’t change at all as the centuries pass. How people make and communicate with friends may have shifted in some ways, but the crucial role of friendship in our lives never will. We all hunger for the ties we make with friends whether we’re in ancient Rome or a modern California suburb. Without some form of friendship in their lives, most people would wither away and die, spiritually if not physically. We are social creatures who desperately need meaningful connections with others. Cicero is right when he says that life without friends is simply not worth living.

Cicero talks about different kinds of friendships. What does he mean?

He says we all by necessity have different types of friendships, each good in its own way. There are friendships of utility such as those we have with our auto mechanic or dentist. You can have hundreds of these in your life. They are an essential part of living in any society in which you must interact with others. But you’re hopefully not going to tell your most intimate secrets to the guy who sells you bagels at the corner shop. Then there are friendships of pleasure, the dozen or more people you enjoy hanging out with at the local pub or in your neighborhood. Finally there are the deepest of friendships you have with only a handful of people—or maybe just one or two—friends you tell everything to and would take a bullet for if necessary. Friends of this last sort are what Cicero calls “another self.”

What’s the best way to tell if a person can be a true friend?

Cicero would say to look at whether they’re willing to be honest with you. Not honest in a hurtful way—plenty of people will do that—but honest because they care deeply about you. A true friend will tell you if a boyfriend you’re crazy about is bad news even if you don’t want to hear it. That kind of friend is willing to risk even the friendship for the sake of honesty. If you find friends like that, never let them go.

Can a bad person have friends?

A good way to answer this is to look at the extreme case of Voldemort in the Harry Potter books and movies. He’s a character totally focused on himself who cares nothing about others except how he can use them for his own purposes. Thankfully there are few Voldemorts in the real world, but I imagine all of us know people who seem to use others only for what they can get from them. These selfish sorts could have friendships of utility, maybe even of pleasure, but never true friendships.

Would Cicero be on Facebook?

I think he would love Facebook. He was an accomplished letter writer, the only social medium of the day. We actually have a collection of many of his letters, especially those he sent to his best friend Atticus who lived far away in Greece. But I think Cicero would draw an important distinction between posting photos of his cat to thousands of followers and intimate interactions with his closest friends, whether written or face-to-face. Cicero would probably say that the social media universe can be a good thing if used properly and terribly harmful to the soul if not.

Philip Freeman is the editor and translator of How to Grow Old, How to Win an Election, and How to Run a Country (all Princeton). He is the author of many books, including Searching for Sappho (Norton) and Oh My Gods: A Modern Retelling of Greek and Roman Myths (Simon & Schuster). He holds the Fletcher Jones Chair of Western Culture at Pepperdine University and lives in Malibu, California.

Green: Ten Facts You Didn’t Know about the Color Green

Green is the color of cash, and also of protecting the environment. A green light means go, but a green-tinged emoji means someone is about to be sick. Where did these cultural meanings come from, and how have they developed and shifted throughout history? Michel Pastoureau’s book Green: The History of a Color takes readers from ancient times to the present day, exploring the role of green in Western societies over thousands of years.

Green is just one title in Pastoureau’s acclaimed series on the history of colors in European society! This National Color Day, don’t miss Red, Blue, and Black.

How many of these facts about green did you know?

1. The ancient Egyptian god Ptah was depicted with a green face. In Egyptian painting, green was a beneficial color that protected against evil.

2. The Roman emperor Nero was known for the large amount of leeks he consumed, which was unusual for a high-ranking person at that time. Leeks were strongly associated with the color green, and even lent their name to one of the Greek words for the color, prasinos.

3. The Roman Empire’s chariot races featured two opposing stables: the Blues and the Greens. The Blues represented the Senate and the patrician class, while the Greens represented the people. Each stable was backed by a large, influential organization with a network of clientele and a lobby that extended far outside the racecourse.

4. The prophet Muhammad favored the color green. After becoming the dynastic color of the Fatimids, green came to be the sacred color of Islam as a whole.

5. During the Middle Ages, green was the color of hope for pregnant women in particular. Pregnant women in paintings were often shown wearing green dresses.

6. Possessing a green shield, tunic, or horse’s quarter sheet often meant that a knight was young and hotheaded. One well-known example of a “green knight” is found in the late fourteenth-century Sir Gawain and the Green Knight.

7. In Gothic stained-glass windows, green was the color of demons, sorcerers, dragons, and the Devil himself.

8. Dyeing in green was difficult during the Middle Ages. Green dyes from plants produced faint and unstable color that grew even more faded when mordant, or fixative, was applied. Because of this instability, green came to represent inconstancy, duplicity, and betrayal. Judas, for example, is often shown dressed in green.

9. Another obstacle to dyeing in green was the way the dyeing trades were organized. Professional dyers were licensed to dye only in certain colors. This made mixing colors—such as blue and yellow, which make green—next to impossible. Even dyers who broke the regulations and used both blue and yellow dyes had to possess the then-rare knowledge that blue and yellow combined make green. This combination may seem obvious to us now, but in pre-Newtonian color classifications, green was never located anywhere near yellow.

10. Schweinfurt green was a shade developed in Germany in 1814 and made from copper shavings dissolved in arsenic. It was used to make paint, dye, and painted paper. When exposed to humidity, the arsenic evaporates and can be toxic. According to some theories of Napoleon’s death, he was poisoned by his wallpaper.

Qualification, Exclusion, and the Art of Bill Traylor

by Leslie Umberger

Bill Traylor, regarded today as one of America’s most important artists, was born into an enslaved family in rural Alabama around 1853. Traylor and his family continued to work as farm laborers after Emancipation, work that Traylor himself spent some seven decades doing. In the late 1920s, Traylor moved by himself to Montgomery, Alabama. About a decade later, no longer able to take on heavy physical labor, he began to make drawings. What does it mean for Traylor, untrained as an artist, to now be held in such high esteem?

Certainly, part of what makes Traylor’s story so profound is that he chose to become an artist of his own volition; no one suggested he make drawings or showed him how to do it. In fact, in the days of slavery, literacy was strictly the privilege of whites. Reading and writing were regarded as tools of empowerment, and blacks seeking these tools were often harshly punished. Traylor never became literate, and in his time and place, the very act of taking up pencil and paper might have been viewed as an affront to white society—even if it was becoming increasingly common for African Americans to be both educated and successful.

So what Traylor did was radical in multiple ways. He was among the first generation of black people to become American citizens, and Traylor grappled with the meaning of that identity as he sat in the black business district of Montgomery in the 1930s and 1940s and watched a rising class of business owners and community leaders—finely dressed, educated black folks who were strong, creative, and were assertively shaping a cultural identity distinct from that of white America. Traylor created a record not just of his own selfhood, but also of the oral and vernacular culture that had shaped him.

Many terms are bandied about for untrained artists; we often hear them called self-taught, folk, visionary, or “outsider.” Traylor may not have conceptualized being an artist in a predetermined or conventional way, but the way we talk about him and his art matters. Traylor lived and worked quite literally in a different world than that of the mainstream fine arts. And as is true with any artist, the facts of his life provide meaningful contexts and deeply inform the work he made. It is highly significant that Traylor came through slavery and lived the rest of his days in the Jim Crow South—this life powerfully undergirds the entire body of work.

Still, when we speak of an artist as being successful or important only within a subcategory of art, we diminish an artist’s larger validity. To say, for example, that Traylor is among America’s “most important self-taught artists” is to qualify his importance, to send a signal that his work is ultimately lesser than that of trained, mainstream artists—that it exists in a subcategory without full rank. To call an artist an “outsider” is to note difference as the foremost framework. The term describes the artist, not the art, and ultimately functions as a euphemism for race, class, or social agency. Marketers often grab encompassing terms because they are easy, but “outsider” has always been a disparaging way of grouping individuals by difference, rather than seeking to foster a broader understanding of art and its diverse makers.

Understanding context in a deep way brings meaning to art that is unique and unaffiliated with the mainstream art world, yet it is key to remember that qualifiers always signal disparity. We recognize that it is demeaning and inappropriate to say, for example, that someone is “among the best female employees” or “among the best black experts,” but we have yet to fully extend this recognition to artists like Traylor. It has been clear for decades that Traylor is among the most important self-taught artists; his work fetches blue-chip prices and is recognized and collected the world over. Today we need to look at the magnitude of what he did against the larger backdrop of art in his nation. He is one of America’s most important artists—no qualifier needed. Between Worlds fleshes this out and proposes a different, more encompassing course that moves beyond an exclusionary past.

Exhibition Schedule
Smithsonian American Art Museum
September 28, 2018–March 17, 2019

Leslie Umberger is curator of folk and self-taught art at the Smithsonian American Art Museum. She is the curator of the exhibition Between Worlds: The Art of Bill Traylor and the author of the accompanying exhibition monograph.