Come visit us at BookExpo 2015: Booth #1538

It’s a big day for authors, booksellers, publishers, librarians, and readers alike. Book Expo America begins today at New York City’s Jacob K. Javits Center, where the main exhibit hall opens at 1 pm, and an assortment of conferences, author signings, and other special events will be taking place between today and Friday, May 29. We hope you’ll stop by and see Princeton University Press at booth #1538, and pick up our new Fall 2015 seasonal catalog (you can download it directly to your device here). We have quite a diverse and impressive lineup this season, with new books from Nobel Prize-winning economists George Akerlof and Robert Shiller, philosopher (and author of the #1 New York Times bestseller On Bullshit) Harry Frankfurt, economist Robert Gordon, interdisciplinary scholar Lynn Gamwell, architectural historian Neil Levine, and many more. We appreciate the dedicated work of the authors and staff who helped to make this list possible, and we can’t wait to share it with you.

You can find out more about purchasing tickets at the BEA website. Hope to see you there!

A Q&A with Richard Alba and Nancy Foner, authors of Strangers No More: Immigration and the Challenges of Integration in North America and Western Europe

With immigration at a record high, migrants and their children are a rapidly growing population whose integration needs have never been more important. Shedding new light on questions and concerns, Strangers No More is the first look at immigrant assimilation across six Western countries: Britain, France, Germany, the Netherlands, the United States and Canada. Recently the authors, Richard Alba and Nancy Foner, provided context for their book and answered some questions on immigration, including how individual nations are being transformed, why Islam proves a barrier for inclusion in Western Europe in particular, and what future trends to expect.

Why does understanding immigrant integration in Western Europe and America matter?

Put simply, it’s one of the key issues of the twenty-first century on both sides of the Atlantic.

What makes it so urgent? The numbers: Western European countries as well as the US and Canada have been faced with incorporating millions of immigrants whose cultures, languages, religions, and racial backgrounds differ from those of most long-established residents.

Future trends: The challenges of integrating immigrants and their children—so they can become full members of the societies where they live—are likely to become even more important in the coming decades in the face of (1) continued demand for new immigrant inflows and (2) demographic shifts in which the huge number of people of immigrant origin—immigrants as well as their children—will constitute a much larger share of the adult population.  Large portions of the immigrant-origin populations of these countries are going to come from the “low-status” groups—such as Turks in Germany, Pakistanis in Britain, and Mexicans in the U.S.—that are the focus of the book. There is no question that their opportunities are critical for the future.

Does any one country come out clearly ahead?

Basically, the answer is no. The book’s comparison of four European countries, Britain, France, Germany, and the Netherlands, and two in North America, the United States and Canada, shows that when it comes to the integration of low-status immigrants—in terms of jobs, income and poverty, residential segregation, electoral success, children’s education, intermarriage, and race and religion—there are no clear-cut winners and losers. Each society fails and succeeds in different ways. Nor is there a consistent North America–Europe divide: Canada and the United States, as well as countries within Europe, differ in the ways they’ve provided opportunities, and erected barriers, for immigrants.

So how is the United States doing?

In some ways the U.S. looks good compared to the continental European countries in the book. The U.S. has been quick (like Canada) to extend a national identity to immigrants and their children. Rates of intermarriage between those of immigrant origin and whites are relatively high. The U.S. has a pretty good record of electing immigrant-origin politicians, and is the only country to vote in the child of a non-Western immigrant to the highest national office.

In other ways, the U.S. has the highest bars to integration of all six countries. The rate of residential segregation experienced by many immigrant families stands out as extreme. The disadvantages immigrants and their children confront in terms of their economic status are greatest in the U.S., which has the most severe economic inequality. The U.S. also has the largest number—and proportion—of undocumented immigrants, who are denied basic rights and opportunities.

Aren’t all these countries being transformed by immigration?

Yes, they are. One could say that the face of the West is inevitably changing. During the next quarter century, a momentous transition to much greater diversity will take place everywhere. As the post-World War II baby-boom generations—made up largely of the native majority group and found throughout North America and Western Europe—retire from work and become less socially active in other ways, they are going to be replaced by groups of young adults who in some countries will be relatively few in number, and who everywhere will be more diverse and more likely to have grown up in immigrant homes.

The “mainstream” of these countries will change, too, in that the people who will occupy positions of authority and visibility will be much more diverse than in the past. We already see this occurring in the U.S., where younger workers in well-paid jobs are less likely to come from the non-Hispanic white group than their predecessors did. But there is a paradox. At the same time—and it is a cause for real concern—many young people of immigrant background are being left behind because of grossly unequal opportunities.

But why is Islam a much greater barrier to inclusion for immigrants and their children in Western Europe than it is in the United States?

One reason is basic demographics: a much larger proportion of immigrants in Western Europe are Muslim than in the U.S., where the great majority are Christian; Muslim immigrants in the U.S. also have a higher socioeconomic profile than those in Europe. Second, the way Christian religions in Europe have been institutionalized, and historically entangled with the state, has made it difficult for Islam to achieve equal treatment. In the U.S., the constitutional principles of religious freedom and separation of church and state have allowed Muslims more space to develop their own religious communities. Third, a secular mindset dominates in most Western European countries, in contrast to the high level of religiosity in the United States, so that claims based on religion, and Islam in particular, have much less acceptance and legitimacy in Europe.

What is the good news—and the more positive side of the story?

One positive is the growing success of immigrant minorities in winning local and national political office in all six countries. Children of immigrants are mixing and mingling with people in other groups, including long-established natives, in schools, neighborhoods, and workplaces. The emergence of super-diverse neighborhoods contributes to the sense that ethnic and racial diversity is a normal order of things.

Intermarriage rates are rising among some immigrant groups in all the countries, so that more family circles bring together people of immigrant origin and longer-established natives—and children of mixed backgrounds are increasingly common. In the U.S., one out of seven marriages now crosses the major lines of race or Hispanic ancestry; and most of these intermarriages involve individuals from immigrant backgrounds and whites. Everywhere at least some children of low-status immigrants are getting advanced academic credentials and good jobs. And while racial and religious divisions seem like intractable obstacles, over time the barriers may loosen and blur.

Richard Alba is Distinguished Professor of Sociology at the Graduate Center of the City University of New York. His books include Blurring the Color Line and Remaking the American Mainstream. Nancy Foner is Distinguished Professor of Sociology at Hunter College and the Graduate Center of the City University of New York. Her books include From Ellis Island to JFK and In a New Land.








#WinnerWednesday: Congratulations, Ellen Wu!

Ellen D. Wu – The Color of Success: Asian Americans and the Origins of the Model Minority

Finalist for the 2015 Theodore Saloutos Memorial Book Award, Immigration and Ethnic History Society

The Theodore Saloutos Memorial Book Award is given annually to the book judged best on any aspect of the immigration history of the United States. “‘Immigration history’ is defined as the movement of peoples from other countries to the United States, of the repatriation movements of immigrants, and of the consequences of these migrations, for both the United States and the countries of origin.” The Immigration and Ethnic History Society has complete information on this award here.

Wu has written on “the model minority myth” for the LA Times, and has answered questions about her book here. She also won the Immigration and Ethnic History Society’s Outstanding First Book Award this year. Congratulations, Ellen!


The Color of Success:
Asian Americans and the Origins of the Model Minority
Ellen D. Wu
Hardcover | $39.50 / £27.95 | ISBN: 9780691157825
376 pp. | 6 x 9 | 19 halftones
eBook | ISBN: 9781400848874
Endorsements | Table of Contents
“The Color of Success embodies exciting developments in Asian American history. Through the lens of racial liberalism and cultural diplomacy, Ellen Wu offers a historically grounded analysis of the Asian American model minority in the contexts of domestic race politics and geopolitics, and she unveils the complexities of wartime and postwar national inclusion.”
Eiichiro Azuma, University of Pennsylvania

Medieval Relativisms by John Marenbon

In a commencement speech at Dickinson College yesterday that focused on the virtues of free speech and free inquiry, Ian McEwan referenced the golden age of the pagan philosophers. But from the turn of the fifth century to the beginning of the eighteenth, Christian intellectuals were as fascinated as they were perplexed by the “Problem of Paganism,” or how to reconcile the fact that the great thinkers of antiquity, whose ideas formed the cornerstones of Greek and Roman civilization, were also pagans and, according to Christian teachings, damned. John Marenbon, author of the new book Pagans and Philosophers, has written a post explaining that relativism (the idea that there can be no objective right or wrong), is hardly a post-modern idea, but one that emerged in medieval times as a response to this tension.

Medieval Relativisms
By John Marenbon

Relativism is often thought to be a characteristically modern, or even post-modern, idea. Those who have looked more deeply add that there was an important strand of relativism in ancient philosophy, and they point (perhaps wrongly) to Montaigne’s remark, made late in the sixteenth century, that ‘we have no other criterion of truth or reason than the example and idea of the opinions and customs of the country where we are’ as signalling a revival of relativist thinking. But the Middle Ages are regarded as a time of uniformity, when a monolithic Christianity dominated the lives and thoughts of everyone, from scholars to peasants – a culture without room for relativism.

This stereotype is wrong. Medieval culture was not monolithic, because it was riven by a central tension. As medieval Christian thinkers knew, their civilization was based on the pagan culture of Greece and Rome. Pagan philosophers, such as Plato and Aristotle, were their intellectual guides, and figures from antiquity, such as the sternly upright Cato or Regulus, the general who kept the promise he had given to his enemies even at the cost of his life, were widely cited as moral exemplars. Yet, supposedly, Christian truth had replaced pagan ignorance, and without the guidance and grace provided for Christians alone, it was impossible to live a morally virtuous life. One approach to removing this tension was to argue that the pagans in question were not really pagans at all. Another approach, though, was to develop some variety of limited relativism.

One example of limited relativism is the view proposed by Boethius of Dacia, a Master in the University of Paris in the 1260s. Boethius was an Arts Master: his job was to teach a curriculum based on Aristotle. Boethius was impressed by Aristotelian science and wanted to remain true to it even on those points where it goes against Christian teaching. For example, Christians believe that the universe had a beginning, when God created it, but Aristotle thought that the universe was eternal – every change is preceded by another change, and so on, for ever. In Boethius’s view, the Christian view contradicts the very principles of Aristotelian natural science, and so an Arts Master like himself is required to declare ‘The world has no beginning’. But how can he do so, if he is also a Christian? Boethius solves the problem by relativizing what thinkers say within a particular discipline to the principles of that discipline. When the Arts Master, in the course of teaching natural science, says ‘The world has no beginning’, his sentence means: ‘The world has no beginning according to the principles of natural science’ – a statement which is consistent with declaring that, according to Christian belief, the world did have a beginning. Relativizing strategies were also used by theologians such as Henry of Ghent, Duns Scotus and William of Ockham to explain how some pagans can have even heroic virtue and yet be without the sort of virtue which good Christians alone can have.

These and other medieval relativisms were limited, in the sense that one reference frame, that of Christianity, was always acknowledged to be the superior one. But Boethius’s relativism pragmatically allowed a space for people to develop a purely rational scientific world-view on its own terms, and that of the theologians allowed them to praise and respect figures like Cato and Regulus, leaving aside the question of whether or not they are in Hell. Contemporary relativists often advocate an unlimited version of relativism, in which no reference frame is considered superior to another. But there are grave difficulties in making such relativism coherent. The less ambitious medieval approach might be the most sensible one.

John Marenbon is a senior research fellow at Trinity College, University of Cambridge, honorary professor of medieval philosophy at Cambridge, and a fellow of the British Academy. He is the author and editor of many books, including Abelard in Four Dimensions, The Oxford Handbook of Medieval Philosophy, The Cambridge Companion to Boethius, and Medieval Philosophy: An Historical and Philosophical Introduction.

#MammothMonday: PUP’s pups sound off on How to Clone a Mammoth

The idea of cloning a mammoth, the science of which is explored in evolutionary biologist and “ancient DNA expert” Beth Shapiro’s new book, How to Clone a Mammoth, is the subject of considerable debate. One can only imagine what the animal kingdom would think of such an undertaking, but wonder no more. PUP staffers were feeling “punny” enough to ask their best friends:


Chester reads shapiro

Chester can’t get past “ice age bones”.


Buddy reads shapiro

Buddy thinks passenger pigeons would be so much more civilized… and fun to chase.


Tux reads shapiro

Tux always wanted to be an evolutionary biologist…


Stella reads Shapiro

Stella thinks 240 pages on a glorified elephant is a little excessive. Take her for a walk.


Murphy reads shapiro

A mammoth weighs how much?! Don’t worry, Murphy. The tundra is a long way from New Jersey.


Glad we got that out of our systems. Check out a series of original videos on cloning from How to Clone a Mammoth author Beth Shapiro here.

Win a copy of Relativity: 100th Anniversary Edition by Albert Einstein through Corbis!

We are teaming up with Corbis Entertainment to offer this terrific giveaway through their official Albert Einstein Facebook page. Contest details are below, but please head over to the “official Facebook page of the world’s favorite genius” to enter!

Enter for a chance to win a FREE COPY of “Relativity: 100th Anniversary Edition” by Albert Einstein!

An interview with Josiah Ober, author of The Rise and Fall of Classical Greece

The period considered classical Greece (roughly the fifth through fourth centuries BC) had a profound effect on Western civilization, forming the foundations of politics and philosophy, as well as artistic and scientific thought. Why did Greece experience such economic and cultural growth—and why was it limited to this 200-year period? Josiah Ober, Professor of Political Science and Classics at Stanford University and author of The Rise and Fall of Classical Greece, took the time to explain the reasons behind Greece’s flourishing, and what its economic rise and political fall can tell us about our own world.

What was the rise of classical Greece and when and why did it happen?

JO: Basically, sustained economic growth led to the rise of Ancient Greek civilization.

At the Early Iron Age nadir, in ca. 1000 BCE, the Greek world was sparsely populated and consumption rates hovered near subsistence. Some 650 years later, in the age of Aristotle, the population of the Greek world had increased at least twenty-fold. During that same period, per capita consumption probably doubled.

That rate of growth is far short of modern rates, but it equals the growth rate of the two standout societies of early modern Europe: Holland and England in the 16th to 18th centuries. Historians had long thought that the Greek world was impoverished and its economy overall static – which of course made Greek culture (art, philosophy, drama, and so on) seem that much more “miraculous.” But, thanks to the recent availability and quantification of a huge mass of data, drawn from both documentary and archaeological sources, we can now trace the amazing growth of the Greek economy, both in its extent (how many people, how much urbanization, and so on), and in terms of per capita consumption (how well people lived).

So the rise of the Greek world was predicated on sustained economic growth, but why did the Greek economy grow so robustly for so long?

JO: In the 12th century BCE, the palace-centered civilization of Bronze Age Greece collapsed, utterly destroying political and social hierarchies. Surviving Greeks lived in tiny communities, where no one was rich or very powerful. As Greece slowly recovered, some communities rejected attempts by local elites to install themselves as rulers. Instead, ordinary men established fair rules (fair, that is, for themselves) and governed themselves collectively, as political equals. Women and slaves were, of course, a very different story. But because these emerging citizen-centered states often out-competed elite-dominated rivals, militarily and economically, citizenship proved to be adaptive. Because participatory citizenship was not scalable, Greek states stayed small as they became increasingly democratic. Under conditions of increasingly fair rules, individuals and states rationally invested in human capital, leading to increased specialization and exchange. The spread of fair rules and a shared culture across an expanding Greek world of independent city-states drove down transaction costs. Meanwhile, competition encouraged continuous institutional and technological innovation. The result was 700+ years of world-class efflorescence, marked by exceptional demographic and per capita growth, and by immensely influential ideas, literature, art, and science. But, unlike the more familiar story of ancient empires, no one was running the show: Greece remained a decentralized ecology of small states.

So what about the fall?

JO: There are two “falls” – one political and one economic. The economic fall is the decline of the Greek economy from its very high level in the age of Aristotle to a “premodern Greek normal” of low population and near-subsistence consumption levels with the disintegration of the Roman empire. That low normal had pertained before the rise of the city-state ecology. After the fall, it persisted until the 20th century. But we also need to explain an earlier political fall. Why, just when the ancient Greek economy was nearing its peak, were Philip II and Alexander (“the Great”) of Macedon able to conquer the Greek world? And then there is another puzzle: Why were so many Greek city-states able to maintain independence and flourishing economies in the face of Macedonian hegemony? The city-states were overtaken by the Macedonians in part because human-capital investments created a class of skilled and mobile experts in state finance and military organization. Hired Greek experts provided Philip and Alexander with the technical skills they needed to build a world-class army. But meanwhile, deep investments by city-states in infrastructure and training made fortified cities expensive to besiege. As a result, after the Macedonian conquest, royal taxes on Greek cities were negotiated rather than simply imposed. That ensured enough independence for the Greek cities to sustain economic growth until the Roman conquest.

What does the economic rise and political fall of classical Greece have to tell us about our own world?

JO: The new data allows us to test the robustness of contemporary theories of political and economic development. In the classical Greek world, political development was a primary driver of economic growth; democracy appears to be a cause rather than simply an effect of prosperity. The steep rise and long duration of the city-state ecology offers a challenge to neo-Hobbesian centralization theories of state formation, which hold that advanced economic and political development requires the consolidation of centralized state power. The comparatively low rate of ancient Greek income inequality, along with the high rate of economic growth, suggests that the negative correlation of sustained growth with extreme inequality, observed in some recent societies, is not a unique product of modernity. Finally, the history of the ancient Greek world can be read as a cautionary tale about the unanticipated consequences of growth and human capital investment: It reveals how innovative institutions and technologies, originally developed in the open-access, fair-rules context of democratic states, can be borrowed by ambitious autocrats and redeployed to further their own, non-democratic purposes.

How did you get interested in the topic of rise and fall – was it just a matter of “Edward Gibbon envy”?

JO: Gibbon is amazing, as a prose stylist and historian. But the origin of my project actually goes back to a quip by a senior colleague at the very beginning of my career: “The puzzle is not why the Greek world fell, it is why it lasted more than 20 minutes.” Twenty-five years ago (and fifteen years after my colleague’s quip), the historical sociologist W.G. Runciman claimed that classical Greece was “doomed to extinction” because the Greek city-states were, “without exception, far too democratic.” True enough: the classical Greek world eventually went extinct. But then, so did all other ancient societies, democratic or otherwise. The Greek city-state culture lasted for the better part of a millennium; much longer than most ancient empires. I’ve long felt that I owed my colleague a solution to his puzzle. This book is an attempt to pay that debt.

Josiah Ober is the Mitsotakis Professor of Political Science and Classics at Stanford University. His books include Democracy and Knowledge, Political Dissent in Democratic Athens, The Athenian Revolution, and Mass and Elite in Democratic Athens (all Princeton). He lives in Palo Alto, California.


A Q&A with Cormac Ó Gráda, author of Eating People is Wrong

Cormac Ó Gráda’s new collection of essays on famine—which range in focus from economic history to the psychological toll—begins with a taboo topic. Ó Gráda argues that cannibalism, while by no means a universal feature of these calamities, has probably occurred more frequently than previously recognized. Recently he answered some questions on his book, Eating People is Wrong, and Other Essays on Famine, Its Past, and Its Future, its somber title, and his early interest in the Great Irish Famine.

Why did you write this book?

CÓG: When Famine: A Short History (Princeton, 2009) came out, I wanted it to be my last book on the subject. So Eating People is Wrong was not a question of ‘what will I do next?’ I just realized a few years later that I still had ideas to contribute on topics that would make for a new, different kind of book on famine. These topics ranged from famine cannibalism to the Great Leap Forward, and from market failure to famine in the 21st century; the challenge was to merge the different perspectives that they offered into what would become this new book. The idyllic résidence I spent in the south of France courtesy of the Fondation des Treilles in the autumn of 2013 was when the different parts came together. By the end of that stay, I had a book draft ready.

What inspired you to get into your field?

CÓG: It is so long ago that I am bound to invent the answer… But I have always had an amateur interest in history—as lots of Irish people tend to have—whereas my academic training was in economics. Economic history seemed a good way of marrying the two, and that has been my chosen field since my time as a graduate student in the 1970s. I began as a kind of jack-of-all-trades economic historian of Ireland, focusing on topics as different as inheritance patterns and famine, or migration and banking. This work culminated in a big economic history of Ireland in 1994. My interest in the Great Irish Famine of the 1840s goes back to my teens, but that interest was sharpened after getting to know Joel Mokyr (also a PUP author) in the late 1970s. Economics taught me to think of the Irish in comparative terms, and that led eventually to the study of famines elsewhere. My books have all been solo efforts, but I have been very lucky and privileged to write papers with some great co-authors, and some of these papers influenced the books.

How did you come up with the title or jacket?

CÓG: The title is an ironic nod to Malcolm Bradbury’s eponymous novel (which most people seem ignorant of). A friend suggested it to me over a pint in a Dublin bar. One of the themes of the chapter on famine cannibalism, to which the title refers, is the need to realize that famines not only do terrible things to people, but that people do terrible things to one another in times of famine. Peter Dougherty and his team at PUP came up with the jacket. The image is graphic and somber without being sensationalist, which is what I had hoped for.

What is your next project?

CÓG: There is no single all-consuming project. A lot of my research in recent years has been collaborative work on British economic history with UCD colleague Morgan Kelly. So far the results of that work have appeared—when we are lucky—in academic journals rather than in books. We have plans to continue on this basis, but we are also involved in an interesting piece of research with Joel Mokyr on the origins of the Industrial Revolution, and that may eventually yield a monograph by the three of us. I also want to revise several unpublished papers in Irish economic history and to get them published singly or, perhaps, as a monograph. Finally, Guido Alfani of Bocconi University in Milan and I are editing a book on the history of famine in Europe. This is coming along well. The end product will consist of nine specialist country chapters, a cross-country analysis of the famines of World War II, and an overview by Alfani and me.

What are you currently reading?

CÓG: I am at page 630 (so another hundred or so pages to go) of Stephen Kotkin’s Stalin, vol. 1 (Penguin, 2014), which brings the story of Iosif Vissarionovich only as far as 1928. I have been interested in Soviet economic history since the late Alexander Erlich introduced me to the topic at Columbia in the 1970s, and this is what attracted me to Kotkin’s riveting tome—which, however, turns out to be rather uninterested in the economic issues! I am also reading Maureen Murphy’s Compassionate Stranger: Asenath Nicholson and the Great Irish Famine (Syracuse, 2015), an account of an eccentric but appealing American evangelist who toured Ireland, mostly on foot, in the years leading up to and during the Great Hunger. I was familiar with Nicholson’s own published accounts of her travels, but knew very little about her otherwise, so Murphy’s book is a revelation. My current bedtime reading is Henning Mankell’s The Man from Beijing (2010).

Cormac Ó Gráda is professor emeritus of economics at University College Dublin. His books include Famine: A Short History and Black ’47 and Beyond: The Great Irish Famine in History, Economy, and Memory (both Princeton).

Which of these 15 myths of digital-age English do you believe?

One Day in the Life of the English Language by Frank Cioffi, a new style guide that eschews memorization in favor of internalizing how sentences actually work, handily refutes these 15 myths of digital-age English. Think brevity is best? Swear by your default settings? Feel sure the internet is a “total latrine”? Try out this “True or False” test and see whether you’re the digital-age wordsmith you thought you were:

1. In the age of the tweet, short and concise is always the best.
True, true, short messages are often the best. But not always. Sometimes one needs to go on at some length. Sometimes it is necessary to provide a context, especially if one is trying to communicate more than just minimal information. And sometimes the very brevity or terseness of a tweet makes it impossible to understand.

2.  My word processing program doesn’t let me change margins, spacing, or other aspects of format.
Most word processing programs can be set up to accommodate any standard style; however, you need to use the program’s capabilities and not always accept default settings. In Microsoft Word, for example, many writers allow the program its silly default—to put an extra line space between paragraphs of the same format. This default should be turned off from the “Paragraph” menu.

3. My word processing program will highlight and automatically fix any errors I make.
These automatic correction programs are notoriously unreliable, as they often “fix” writing that is in fact correct. For example, at first I thought one of my students had subject-verb agreement problems; then I noted that the program tried to get me to introduce such errors into my own work. You, not the program, are the mind behind the words. Don’t rely on your program to fix everything. Let it check—but you check too.

4.  “Logical punctuation” is the best option in most situations.
This idea usually refers to putting punctuation either inside or outside of quotation marks. The logicality of doing so or not doing so has been questioned by many. It’s probably best to follow conventions of a given style, unless you are not working within any particular field. In that case, you can invent new rules; just don’t expect others to understand or follow them.

5. People don’t really read anymore; they merely “scan a page for information.”
Gary Shteyngart brings up this idea in his 2011 novel Super Sad True Love Story. It’s interesting and has some truth to it: I agree that many people don’t read with a lot of care or seek to understand and internalize the written ideas they encounter. But some do. Think of that “some” as your audience. At the same time, consider the needs of an audience that just “scans the page.” Ask yourself, “Does this page I’ve just written include information worth scanning?”

6.  Anyone can publish written material nowadays, so what’s the value of Standard Written English?
With the Internet, it’s true that anyone can publish now. And many self-publishing options are open to any writer seeking to get work in print. Simply publishing something is now less a guarantee of its excellence or importance than it once was, but if you strive to have your work read—by more than family and friends—it will have to respect some standard forms and conventions. Or to put it another way, no matter what your publishing goals, if you want people to read your work, you will have to write with a high level of competence and lucidity.

7.  People are much less precise and exact than they used to be, now that they have computers to rely on.
This is clearly not the case in all situations. In fact, people must be much more careful now with details such as spelling, especially when entering passwords or usernames. In many digital contexts, attentiveness to language accuracy is obligatory. If you are inattentive, you often can’t even use the computer or the program. If you don’t respect the syntax of a program, it just won’t run.

8.  “Talking street” is what most people want to do anyway.
I think that most people have to use multiple forms of English. They might speak one way to their family, one way to their friends, one way on their jobs, and another way, perhaps, when they need to write a paper for a college course they are taking. People can and should become multilingual.

9.  Most grammatical stuff is of minor importance—kind of too boring and persnickety to bother with.
I agree that there are more important things in the world, but I have been making the argument throughout this book that in fact these “minor” matters do seem to make a difference to some people—and a major difference to a small minority. And writ large, they make a big difference in our society. Admittedly, there is a persnickety quality to some of the material, but isn’t specialization all about being persnickety?

10.  Someone else can “wordsmith” my ideas; I just generate them.
The line between the idea and the expression of it is very fine; that is, how you say something is often inextricable from what you say. You need to take charge of not just coming up with a basic idea or notion but also of how that idea gets expressed. If you have a stake in how an idea exists in its final form, you should take great care with its exact verbal formulation.

11.  Since so many “styles” (MLA, APA, Chicago . . .) are available and used by various specialties, it’s pointless to worry about this kind of superficial overlay.
There are a lot of forms and styles, to be sure. But you need to find the form that’s conventional in your professional field and use that. If you don’t, you almost automatically label yourself an “outsider” to that field, or perhaps even an interloper. And sometimes, just abiding by the conventions of a style gains you credibility in and of itself, allows entrée into a field.

12.  There’s no possibility of an original idea anymore: it’s all been said.
One certainly feels as though this might be possible, considering the ever-expanding scope of the Internet and the existence of over seven billion human minds on the planet. However, each of us has his or her own individual experience—which is unique. And out of that, I feel, originality can emerge. You must really want that originality to emerge, though, and resist succumbing to the pressure of the multitude to simply conform to what’s standard, acceptable, predictable, dull.

13.  If something is published on the Internet, it’s true.
I know that no one really believes this. But I want to emphasize that a great deal of material on the Internet is simply false—posted by people who are not reliable, well-informed, or even honest. Much Internet material that claims to be true is in fact only a form of advertising. And finally, do keep in mind that almost anyone can create websites and post content, whether they are sane or insane, children or adults, good or evil, informed or misinformed.

14.  The Internet is a total latrine.
A few years ago, I heard a well-known public intellectual give a talk for which this was the thesis. And there are certainly many things on the Internet and about the Internet that bear out such a judgment. However, there are also some amazing things, which prompt me to say that the Internet is the greatest accumulation of information and knowledge in the history of humankind. But you need to learn how to use it efficiently and effectively, and sort the good from the bad.

15.  I can cut and paste my way through any college paper assignment.
There are many opportunities to create what looks like your own work—cutting and pasting here, auto-summarizing there, adding a few transitional sentences, and mashing it all together. I don’t recommend this kind of work; it doesn’t really benefit you to create it. You want to write papers of your own, ones that express your own ideas and that use your own language. The cut-and-pasters are ultimately sacrificing their humanity, as they become people of the machine. And when they’re caught, the penalties can be severe.

How did you do?

Frank L. Cioffi is professor of English at Baruch College, City University of New York, and has taught writing at Princeton and Indiana universities and at Bard and Scripps colleges. He is the author of The Imaginative Argument: A Practical Manifesto for Writers (Princeton), among other books.

Graphics by Chris Ferrante

Happy Birthday, Søren Kierkegaard

Introversion has been having a moment of late, and today happens to be the birthday of one of the world’s most famous—and brilliant—introverts. To quote the (excellent) copy for A Short Life of Kierkegaard by Walter Lowrie, Kierkegaard was “a small, insignificant-looking intellectual with absurdly long legs, a veritable Hans Christian Andersen caricature of a man.” In life, he often hid behind pseudonyms, and yet, he remains one of the most important thinkers of modern times. Read about Kierkegaard’s turbulent life in this classic biography (literary duel? Check. Tragic love affair? Check.) or sample The Seducer’s Diary, which John Updike called, “An intricate curiosity—a feverishly intellectual attempt to reconstruct an erotic failure as a pedagogic success, a wound masked as a boast.”

Happy Birthday, Søren Kierkegaard.

Read Chapter 1 of The Seducer’s Diary here.

Read the Introduction to A Short Life of Kierkegaard here.

An interview with Nancy Woloch, author of A Class by Herself

Nancy Woloch’s new book, A Class by Herself: Protective Laws for Women Workers, 1890s-1990s, looks at the historical influence of protective legislation for American women workers, which served as both a step toward modern labor standards and as a barrier to equal rights. Recently, Nancy took the time to answer some questions about the book, her reasons for writing it, and the modern-day legacies of this legislation, from pregnancy law to the grassroots movement to raise the minimum wage.

Why did you write this book?

NW: Conflict over protective laws for women workers pervades twentieth-century US women’s history. These laws were everywhere. Since the early 1900s, almost every state enacted some sort of women-only protective laws—maximum-hour laws, minimum wage laws, night work laws, factory safety laws. Wherever one turns, the laws spurred debate, in the courts and in the women’s movement. Long drawn to the history of these laws and to the arguments that they generated, I saw the opportunity to carve out a new narrative: to track the rise and fall of protective laws from their roots in progressive reform to their collapse in the wake of Title VII of the Civil Rights Act of 1964, and beyond. Here was a chance to fuse women’s history and legal history, to explore social feminism, to reconstruct a “constitutional conversation,” and to ferret around all the topics that protective laws touch — from transatlantic connection to social science surveys to the rise of equal rights. Above all, the subject is contentious. Essentially, activist women disrupted legal history twice, first to establish single-sex protective laws and then to overturn them. This was irresistible.

What is your book’s most important contribution?

NW: My book shows the double imprint that protective laws for women workers left on US history. The laws set precedents that led to the Fair Labor Standards Act of 1938 and to modern labor law, a momentous achievement; they also sustained a tradition of gendered law that abridged citizenship and impeded equality until late in the century.

Which groups of women activists first supported women-only protective laws?

NW: I focus on members of the National Consumers’ League, a pressure group formed in 1898 and led as of 1899 by reformer Florence Kelley. One of the most vibrant and successful reform organizations of the Progressive Era, the NCL enabled the campaign for protective laws to move forward. I also focus on the federal Women’s Bureau, started in 1920, which inherited the mission of the NCL: to preserve and promote protective laws. Other women’s associations, too, were involved; so were women labor leaders. But the NCL and the Women’s Bureau were most crucial. Women who promoted women-only protective laws endorsed a dual rationale: the laws would redress disadvantages that women faced in the labor force and provide “industrial equality”; they would also serve as an “entering wedge” to labor standards for all workers. The dual rationale persisted, with variations, for decades.

How did you come up with the title?

NW: “A Class by Herself” is a phrase used by Justice David J. Brewer in Muller v. Oregon, the landmark Supreme Court decision of 1908 that upheld a state ten-hour law for women workers in factories and laundries. Woman, Justice Brewer stated, “is properly placed in a class by herself, and legislation designed for her protection may be sustained, even when like legislation is not necessary for men and could not be sustained.” Two issues intersect in the Muller case: Can the state impose labor standards? Is classification by sex constitutional? The fusion of issues shapes my narrative.

The Muller case remains fascinating. I am stunned by the exceptional leverage that Florence Kelley grasped when she intervened in the final appeal of the case. I am struck by the link that Muller’s lawyers posited between employers’ interests and equal rights; by the fragile relationship between the famous Brandeis brief and the Brewer opinion; and by the way that Justice Brewer challenged Brandeis for dominance. I still ask myself: Who took advantage of whom? Looking back on Muller, I find an intriguing contrast between that case and the Supreme Court case that terminally rejected the Muller principle, UAW v. Johnson Controls (1991). This is when single-sex protective laws definitively expired. Johnson Controls also offers a counter-image of the 1908 case.

Did classification by sex ever help women workers?

NW: Yes, of course. Women-only state protective laws might provide benefits to women workers. In many instances, they provided shorter hours, higher wages, or better working conditions, just as reformers envisioned. But women-only laws always had built-in liabilities. Laws based on “difference” perpetuate difference. They entail hierarchy, stratification, and unequal power. They can quash opportunity, advancement, and aspiration. Once embedded in law, classification by sex might be adapted to any goal conjured up by lawmakers, or, as a critic in the 1920s pointed out, used to impose whatever restrictions “appeal to the caprice or prejudice of our legislators.”

What sort of challenges did you face as an author?

NW: Protective laws were tough customers. They fought back; they resisted generalization; they defied narrative. Part of the challenge was that I deal with a great mass of legislation (several hundred state laws), and each type of law followed its own trajectory. I also cover the laws and their ramifications over many decades. To estimate the impact of protective laws on women workers at any given time was a hazardous undertaking; one could not easily measure the negative effects, or what one critic called the “debit side.” Changing circumstances compound the problem; the effects of the laws were always in flux. Not least, protective laws generate controversy among historians; to tackle this subject is to stroll through a minefield. A special challenge: to cope with the end of protective laws in the 1960s and 1970s.

What was the biggest surprise you encountered in writing this book?

NW: The role of “surprise” itself was a surprise. Progressive reformers who promoted women-only labor laws in the early 1900s could not see around corners, anticipate shifts in the economy, or envision changes in the female work force. Nor could their successors or their opponents. Much of my narrative is a story of close calls and near misses, of false hopes and unexpected consequences, of accident and unpredictability. The theme of the unforeseen peaks with the addition of “sex” to Title VII of the Civil Rights bill of 1964; the impact of the amended Title VII on women-only protective laws was yet more of a surprise. I was surprised myself, as narrator, by the complexity of the downfall of protective laws. I was also surprised to discover the key role that “overtime” played in my story and the gradual mutation in its meaning over the decades.

Does your subject have present-day legacies?

NW: Definitely. In a sense, single-sex protective laws sank totally out of sight when they capsized in the 1970s. But in another sense, many facets of the history of protective laws reverberate; the echoes pervade current events. Labor standards are now a global issue, as illustrated in Bangladesh in 2012 and 2013. The fire in a garment factory on the outskirts of Dhaka that killed 117 workers, so reminiscent of the 1911 Triangle fire, and the yet more lethal collapse of an 8-story building, with garment production on its upper floors, underline the need for safety regulation everywhere. Closer to home, the drive to improve labor standards continues. Most recently, we have seen a grassroots movement to raise the minimum wage and efforts to revise federal law on the threshold for overtime. Reconciling work and parenthood impels discussion. Pregnancy law remains a challenge; enforcement of the Pregnancy Discrimination Act of 1978 has spurred more litigation than anyone expected. A recent case is Young v. United Parcel Service (2015). Beyond that, demands for compensated parental leave proliferate. President Obama’s proposal to fund parental leave, though unlikely to move forward right now, at least keeps the issue on the table. Finally, equal employment opportunity cases remain a challenge, from the Lilly Ledbetter case of 2007 to the dismissed Wal-Mart case of 2011. Title VII, which catalyzed the end of single-sex protective law, turns out to be a work in progress.

George Akerlof and Robert Shiller pose with their new book jacket

Nobel Prize winners Robert Shiller and George Akerlof got the chance to pose with the phenomenal cover for their forthcoming book, Phishing for Phools, the lead title on our Fall 2015 list (stay tuned for the posting of our new seasonal catalog!). The drawing on the cover is an original by New Yorker cartoonist Edward Koren, and the jacket design is by our own Jason Alejandro. You can catch George talking about the book, which is a fascinating look at the central role of manipulation in economics, at this lecture at Duke University.

Akerlof and Shiller