Keith Whittington: The kids are alright

It has rapidly become a common trope that the current crop of college students belongs to a generation of “snowflakes.” Unusually sensitive and unusually intolerant, the kids these days are seen by some as a worrying threat to the future of America’s liberal democracy. High-profile incidents on college campuses, like the shouting down of Charles Murray at Middlebury College and the rioting in the streets of Berkeley during an appearance by Milo Yiannopoulos, give vivid support to the meme. Some surveys of millennials’ attitudes about tolerance and free speech lend further credence to the snowflake characterization. When the Knight Foundation and Gallup asked college students whether diversity or free speech was more important, a slim majority chose diversity. When a Brookings Institution fellow asked college students whether it was acceptable to use force to silence a speaker making “hurtful” statements, a surprisingly large number said yes.

Should we be worried about the children? Perhaps not. Context matters, and some of the current hand-wringing over events on college campuses has tended to ignore the broader context. In particular, when told that the current generation of students do not seem fully supportive of free speech and tolerance of disagreement, we are rarely told in comparison to what. Compared to a perfect ideal of American values, the current generation of students might fall somewhat short—but so do the generations that preceded them. We aspire to realize our beliefs in tolerance and liberty, but we muddle through without a perfect commitment to our civil libertarian aspirations.

It would be a mistake to be overly complacent about American public support for civil liberties, including free speech, but we should also be cautious about rushing into excessive pessimism about the current generation of college students. It has been a routine finding in the public opinion literature going back decades that Americans express high levels of support for the freedom of speech in the abstract, but when asked about particular forms of controversial speech that support begins to melt away. In the middle of the twentieth century, for example, one study found that more than three-quarters of a sample of lawyers thought that university students should have the freedom to invite controversial speakers to campus, but less than half of the general public agreed. When asked if the government should be allowed to suppress speech that might incite an audience to violence, less than a fifth of the leaders of the American Civil Liberties Union said yes, but more than a third of the members of the ACLU were ok with it. In the 1950s, Americans said they supported free speech, but they also said the speech of Communists should be restricted. In the 1970s, Americans said they supported free speech, but they also said the speech of racists should be restricted. In the 2000s, Americans said they supported free speech, but they also said the speech of Muslims and atheists should be restricted.

Current American college students say that speakers with whom they strongly disagree should be allowed to speak on campus. But a majority of liberal college students change their minds when told that such a speaker might be racist, and more than a third of conservative college students change their minds when told that such a speaker might be “anti-American.” Fortunately, the evidence suggests that only a tiny minority of college students favor activists taking steps to disrupt speaking events on campus. Those numbers are not ideal, but it is important to bear in mind that the college-educated tend to be more tolerant of disagreeable speakers and ideas than the general public, and that is pretty much as true now as it has been in the past. Public support for the freedom of speech has not always stood firm, and campus debates over the scope of free speech are likely to have large consequences for how Americans think about these issues in the future.

We should draw some lessons from recent events and surveys, but the lesson should not be that current students are delicate snowflakes. First, we should recognize that the current generation of college students is not unique. They have their own distinctive concerns, interests, and experiences, but they are not dramatically less tolerant than those who came before them. Second, we should appreciate that tolerance of disagreement is something we as a country have to constantly strive for and not something that we can simply take for granted. It is easy to support freedom for others in the abstract, but it is often much more difficult to do so in the midst of particular controversies. The current group of college-age Americans struggles with that tension just as other Americans do and have before. Third, we should note that there is a vocal minority on and off college campuses who do in fact question liberal values of tolerance and free speech. They do so not because they are snowflakes but because they hold ideological commitments at odds with values that are deeply rooted in the American creed. Rather than magnifying their importance by making them the avatar of this generation, those who care about our democratic constitutional commitments should work to isolate them and show why theirs is not the best path forward and why diversity, tolerance, and free speech are compatible and mutually reinforcing values and not contrasting alternatives. It is an ongoing project we hold in common to understand and reaffirm the principles of free speech that underlie our political system. Today’s college students are not the only ones who could benefit from that lesson.

Keith E. Whittington is the William Nelson Cromwell Professor of Politics at Princeton University and a leading authority on American constitutional theory and law. He is the author of Speak Freely: Why Universities Must Defend Free Speech.

Everything to play for: Winston Churchill, the rise of Asia, and game changers

By Dr. John C. Hulsman

To know when game-changing events are actually happening in real time is to see history moving. It is an invaluable commandment in mastering political risk analysis. To do so, an analyst must adopt an almost Olympian view, seeing beyond the immediate to make sense of what is going on now by placing it into the broader tapestry of world history itself.

The rewards for this rare but necessary ability are legion, for it allows the policy-maker or analyst to make real sense of the present, assessing the true context of what is happening now and what is likely to happen in the future. It is jarring to compare the lacklustre abilities of today’s Western politicians—so far behind the curve in seeing the game-changing rise of Asia and the decline of the West as we enter a new multipolar age—with the phenomenal analytical abilities of earlier statesmen of vision, such as the querulous, needy, challenging, maddening, often wrongheaded but overwhelmingly talented greatest Prime Minister of England.

Churchill Rejoices over Pearl Harbor

In the hustle and bustle of the everyday world, recognizing game-changing events can prove exceedingly difficult. Being surrounded by monumental goings-on makes separating the merely important from the truly essential almost impossible. So it was in December 1941, undoubtedly the turning point of the Second World War. During that momentous month, the Red Army turned back the Nazi invasion at the very gates of Moscow, marking the first time Hitler’s war machine had met with a real setback. But for all that the Battle of Moscow mattered enormously, it did nothing to change the overall balance of forces fighting the war, with the outcome still sitting on a knife’s edge.

But half a world away, something else did. At 7:48 AM in Hawaii, on December 7, 1941, the Imperial Navy of the Empire of Japan, attacking without warning as it had done in the earlier Russo-Japanese War, unleashed itself against the American Pacific Fleet, serenely docked at Pearl Harbor that Sunday morning. The damage was immense. All eight American battleships docked at Pearl were struck, and four of them sunk. The Japanese attack destroyed 188 US aircraft, while 2,400 Americans were killed and 1,200 wounded. Japanese losses were negligible.

The Japanese attack on Pearl Harbor misfired spectacularly, changing the course of the war fundamentally, drawing America into the conflict as the decisive force which altered the correlation of power around the world. Stalin, with his back still to the wall in the snows of Russia, did not immediately grasp the game-changing significance of what had just happened any more than Franklin Roosevelt did, now grimly intent on surveying the wreckage of America’s Pacific Fleet and marshalling the American public for global war.

These were pressing times and it is entirely human and understandable that both Stalin and FDR had other more immediate concerns to worry about during those early December days. But Winston Churchill, the last of the Big Three, immediately latched onto the game-changing significance of what had just occurred. For the Prime Minister understood, even in the chaos of that moment, that the misguided Japanese attack had just won Britain and its allies the war and amounted to the game changer a hard-pressed London had been praying for.

In his history of World War II, Churchill wrote of that seminal day, ‘Being saturated and satiated with emotion and sensation, I went to bed and slept the sleep of the saved and thankful.’ The great British Prime Minister slept well that night because he understood the fluidity of geopolitics, how a single event can change the overall global balance of power overnight, if one can but see.

On December 11, 1941, compounding Tokyo’s incredible blunder, Germany suicidally declared war on America. Hitler, vastly underestimating the endless productive capacity of the United States, didn’t think the declaration mattered all that much. The miscalculation was to prove his doom, as the US largely bankrolled both its Russian and British allies, supplying them with both massive loans and a limitless supply of armaments and matériel. Because of Pearl Harbor and Hitler’s disastrous decision, America would eventually eradicate the dark night of Nazi barbarism. Churchill was right in seeing the full consequences of what was going on at that pivotal time. December 1941 saved the world.

The decline of the West and the rise of Asia is the headline of our times

In the crush of our 24-hour news cycle, it is all too easy—as it was during the stirring days of World War II—to miss the analytical forest for the trees. Confusing the interesting with the pivotal, the fascinating with the essential, remains an occupational hazard for both policy-makers and political risk analysts. But beneath the sensory overload of constant news, the headline of our own time is clear if, like Churchill, we can but see.

Our age is one where the world is moving from the easy dominance of America’s unipolar moment to a multipolar world of many powers. It is characterized by the end of 500-plus years of western dominance, as Asia (especially with the rise of China and then India) is where most of the world’s future growth will come from, as well as a great deal of its future political risk. The days of International Relations being largely centered on Transatlantic Relations are well and truly at an end, as an economically sclerotic and demographically crippled Europe recedes as a power, and even the United States (still by far the most powerful country in the world) sinks into relative decline.

To understand the world of the future requires a knowledge of Asia as well as Europe, of macroeconomics as well as military strategy, of countries the West has given precious little thought to, such as China, India, Indonesia, Turkey, Argentina, Brazil, South Africa, Saudi Arabia, and Mexico, as well as the usual suspects such as a declining Russia and Europe. International Relations has become truly ‘international’ again. And that, coupled with the decline of the West and the rise of Asia, is the undoubted headline of the age. Churchill, and all first-rate analysts who understand the absolute value of perceiving game-changing events, would surely have agreed.

Dr. John C. Hulsman is the President and Co-Founder of John C. Hulsman Enterprises, a prominent global political risk consulting firm. For three years, Hulsman was the Senior Columnist for City AM, the newspaper of the City of London. Hulsman is a Life Member of the Council on Foreign Relations, the pre-eminent foreign policy organisation. The author of all or part of 14 books, Hulsman has given over 1520 interviews, written over 650 articles, prepared over 1290 briefings, and delivered more than 510 speeches on foreign policy around the world. His most recent work is To Dare More Boldly: The Audacious Story of Political Risk.

Julian Zelizer on The Presidency of Barack Obama

Barack Obama’s election as the first African American president seemed to usher in a new era, and he took office in 2009 with great expectations. But by his second term, Republicans controlled Congress, and, after the 2016 presidential election, Obama’s legacy and the health of the Democratic Party itself appeared in doubt. In The Presidency of Barack Obama, Julian Zelizer gathers leading American historians to put President Obama and his administration into political and historical context. Engaging and deeply informed, The Presidency of Barack Obama is a must-read for anyone who wants to better understand Obama and the uncertain aftermath of his presidency.

What was your vision for this book? What kind of story are you trying to tell?

My goal with this book is to provide an original account of Barack Obama’s presidency that places it in broader historical context. Rather than grading or ranking the president, my hope is to bring together the nation’s finest historians to analyze the key issues of his presidency and connect them to a longer trajectory of political history. Among the issues we examined are health care, inequality, partisan polarization, energy, international relations, and race.

How did you approach compiling the essays that make up this book? What criteria did you use when choosing contributors?

The key criterion was to find historians who are comfortable writing for the general public and who are interested in the presidency—without necessarily thinking of the president as the center of their analysis. I wanted smart historians who can figure out how to connect the presidency to other elements of society—ranging from the news media to race relations to national security institutions.

What do you see as the future of Obama’s legacy?

Legacies change over time. There will be more appreciation of aspects of his presidency that are today considered less significant, but which in time will be understood to have a big impact. Our authors, for instance, reveal some of the policy accomplishments in areas like the environment and the economy that were underappreciated during the time he was in the White House.  In other ways, we will see how some parts of the presidency that at the time were considered “transformative” or “path-breaking”—such as his policies on counterterrorism—were in fact extensions and continuations of political developments from the era.

How did the political landscape of the country change during Obama’s tenure?

While the country gained many new government programs, from climate regulation to the ACA to the Iran nuclear deal, we also saw the hardening of a new generation of conservatives, further to the right in their policies and more aggressive, if not ruthless, in their political practices. Some of Obama’s biggest victories, such as the Affordable Care Act, pushed the Republican Party even further to the right and inspired it to be even more radical in its approach to legislative obstruction.

What do you hope readers will take away from reading this book?

I hope that they will have a better sense of where this presidency came from, some of the accomplishments that we saw during these eight years, and some of the ways that Obama was limited by the political and institutional context within which he governed. I want readers to get outside the world of journalistic accounts and come away understanding how Obama’s presidency was a product of the post-1960s era in political history.

Julian E. Zelizer is the Malcolm Stevenson Forbes, Class of 1941 Professor of History and Public Affairs at Princeton University and a CNN Political Analyst. He is the author and editor of eighteen books on American political history, has written hundreds of op-eds, and appears regularly on television as a news commentator.

Michael Brenner explains why a Jewish State is “not like any other state”

Is Israel a state like any other or is it unique? As Michael Brenner argues in In Search of Israel, the Zionists attempted to put an end to the millennia-old history of the Jews as the archetypical “other” by creating a Jewish state that would be just like any other state, but today, Israel is regarded as anything but a “normal” state. Instead of overcoming the Jewish fate of otherness, Israel has in fact become the “Jew among the nations.” Israel ranks as 148th of the 196 independent states in terms of geographical area, and as 97th in terms of population, which is somewhere between Belize and Djibouti. However, the international attention it attracts is exponentially greater than that of either. Considering only the volume of media attention it attracts, one might reasonably assume that the Jewish state is in the same league as the United States, Russia, and China. In the United States, Israel has figured more prominently over the last three decades than almost any other country in foreign policy debates; in polls across Europe, Israel is considered to be the greatest danger to world peace; and in Islamic societies it has become routine to burn Israeli flags and argue for Israel’s demise. No other country has been the target of as many UN resolutions as Israel. At the same time, many people around the world credit Israel with a unique role in the future course of world history. Evangelical Christians regard the Jewish state as a major player in their eschatological model of the world. Their convictions have influenced US policies in the Middle East and the opinions of some political leaders in other parts of the world.

Why does Israel attract so much attention?

The answer lies in history. Many people call Israel “the holy land” for a reason: it is here where the origins of their religions were shaped. The Jewish people too are regarded as special: they played a crucial role in the theological framework of the world’s dominant religions. In Christianity and in Islam, Jews were both seen as a people especially close to God and at the same time uniquely rejected by God. While over the last two hundred years these ideas have become secularized, many stereotypes have remained. That the Jews became victims of the most systematic genocide in modern history lent them yet another mark of uniqueness. After two thousand years in exile, the fact that Jews returned to their ancient homeland to build a sovereign state again surrounded the people and place with additional mystique.

Did the Zionists view themselves as unique?

The irony is that the Zionist movement was established at the end of the 19th century precisely in order to overcome this mark of difference and uniqueness. Many Zionists claimed that they just wanted to be like anyone else. Chaim Weizmann, longtime leader of the Zionist movement and Israel’s first president, was quoted as saying: “We just want to be another Albania,” meaning a small state that nobody really cares about. Even Israel’s founding document, the declaration of independence, says that Israel has the right to be “like all other nations.” But at the same time the notion of being different, perhaps being special, was internalized by Zionists as well. Many of its leaders argued that a Jewish state has a special responsibility. Even the most secular among them regarded Israel’s serving as “a light unto the nations” as a crucial part of a prophetic tradition.

Does this mean that Zionism was a religious movement?

Not at all. Most of its early leaders were strictly secular. Theodor Herzl, the founder of Zionism, knew no Hebrew and in fact very little about Jewish traditions. But he wanted to establish a model state for humanity, and saw the formation of Israel as an example for the liberation of African-Americans. Long before any other state granted voting rights to women, he let women be active participants in the Zionist congresses. He drew a flag for the future Jewish state that had seven stars, symbolizing a seven-hour workday for everyone. David Ben-Gurion, the first prime minister of Israel, was a Socialist and rejected organized religion. But just like Herzl, he believed in the mission of a model state that could spread the prophetic ideals of universal peace and equality among the nations.

Why then is Israel seen by many today not as a model state but as a pariah state?

Herzl discussed other potential destinations, such as Argentina and British East Africa, as a refuge for persecuted European Jews. But the only place Jews had an emotional connection with was the territory they had originated from. Over centuries, Jews prayed for their return to the land of Israel. But it was not an empty land. The Arab Palestinians soon developed their own ideas of nationhood and rejected the growing Jewish immigration. In the meantime, antisemitism increased in Europe and other countries closed their doors to Jewish refugees. The establishment of the State of Israel in 1948 came too late to save the lives of millions of Jews who perished in the Holocaust. But by then, most of the world recognized the Jews’ right to their own state in their ancient homeland, as reflected in the 1947 UN partition of Palestine into a Jewish and an Arab state. Yet the Arab world did not see why they should pay the price for the sins of the Europeans. The situation reflected the parable of a person (the Jews) jumping out of the window of a burning house (Europe) and hitting another person (the Palestinians) on the street in order to save his own life. The ongoing conflict of two peoples over the same land, combined with the special significance of this land in the eyes of the world, led to a situation where even outsiders have strong opinions. For Evangelical Christians, Israel fulfills a divine mission, while for others, especially in the Arab world, Israel is regarded as a foreign intruder in the tradition of the medieval Crusaders and modern Imperialists.

So, can Israel one day become just a “normal state?”

To begin with, let me qualify this question. The idea of a “normal state” is a fiction altogether. Every state sees itself as special. But it is true that some states receive more attention from the rest of the world than others. Can Israel just be another Albania in the eyes of the world, or be relegated in our attention to its place among the nations between Djibouti and Belize? I do not believe so. The history of Jerusalem is different from that of Tirana (Albania’s capital), and the Jews have attracted far more attention than nations of comparable size. Thus, Israel will most likely always remain in the limelight of media attention. However, let us not forget: The people in Israel live their everyday lives just like everywhere else. They worry about their jobs and about their sports teams, they want their children to be safe and successful in school, and they dream of a peaceful future. In this deeply personal sense, Israel has become a state just like any other.

Michael Brenner is the Seymour and Lilian Abensohn Chair in Israel Studies and director of the Center for Israel Studies at American University and Professor of Jewish History and Culture at Ludwig Maximilian University in Munich. His many books include A Short History of the Jews.

Jason Rosenhouse: Yummy, Delicious Pi!

Here is a classic bar bet for you: take a wine glass, the kind with a really long stem. Ask whoever is near you to guess whether the height of the glass or the circumference at the top is greater. Most people will choose the height. In fact, they will regard it as obvious that the height is greater. But they will be wrong! Unless it is a very oddly-shaped glass, the circumference will be significantly greater. (Of course, you will need a piece of string to convince your mark of that.) It is a remarkably effective optical illusion.

As we all learned in grade school, the circumference of a circle is pi times the diameter, and pi is just a little greater than three. So the circumference at the top will be a bit more than three times the diameter. Any glass taller than that would be unpleasant to drink from.

Apparently knowing something about pi can make you money. Who said math isn’t practical?

I remember being fascinated by pi as a kid. When my father—a chemical engineer—first told me about it, I asked him if there was also a number called cake. The number pi is typically defined as a sort of geometrical object: it is the ratio of the circumference of a circle to its diameter. We could also say that pi is the area of a circle whose diameter is one. Yet somehow it keeps appearing in the most unexpected of places.

For example, suppose you pick two whole numbers at random, by which I mean the usual numbers like 1, 2, 3, 4, and so on. Sometimes the two numbers will share a common factor, like 4 and 6, which share a common factor of 2. Other times the two numbers will share no common factor (other than 1), like 3 and 7. Pairs like the second are said to be relatively prime. It turns out the probability that a pair of randomly chosen numbers is relatively prime is 6 divided by pi squared. Not a circle in sight, yet there is pi!
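This claim is easy to check numerically. Below is a quick Monte Carlo sketch in Python; the function name, trial count, and sampling range are my own choices for illustration, not anything from the text:

```python
import math
import random

def coprime_fraction(trials=200_000, limit=10**6, seed=42):
    """Estimate the probability that two random whole numbers
    share no common factor other than 1 (are relatively prime)."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if math.gcd(rng.randint(1, limit), rng.randint(1, limit)) == 1
    )
    return hits / trials

print(coprime_fraction())  # close to 0.608
print(6 / math.pi**2)      # 6/pi^2 is about 0.6079
```

With a couple hundred thousand trials the estimate lands within a few thousandths of 6/π², even though no circle appears anywhere in the experiment.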

Or imagine that you have a very large sheet of notebook paper whose lines are one inch apart. Suppose you take a one-inch needle and drop it from a height onto the paper. The probability that the needle hits a line is 2 divided by pi. Only lines this time. Still no circles. This is called the Buffon needle problem, if you were curious.
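The needle drop can be simulated too. A minimal sketch, using the standard parametrization of Buffon's problem (the needle's center-to-line distance and its angle to the lines); the function name and drop count are illustrative:

```python
import math
import random

def buffon_hit_fraction(drops=200_000, seed=7):
    """Drop a needle of length 1 onto lines spaced 1 apart;
    return the fraction of drops that cross a line."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(drops):
        dist = rng.uniform(0, 0.5)           # center's distance to nearest line
        theta = rng.uniform(0, math.pi / 2)  # acute angle with the lines
        if dist <= 0.5 * math.sin(theta):    # needle reaches the line
            hits += 1
    return hits / drops

print(buffon_hit_fraction())  # close to 0.6366
print(2 / math.pi)            # 2/pi is about 0.6366
```

Here pi sneaks in through the angle of the needle, which is why an experiment with only straight lines still produces it.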

One of the first things you learn about pi is that it is an irrational number, which means it is an infinite, non-repeating decimal. My sixth grade math teacher told me it was just crazy that a number should behave like that, and that is why it is called irrational. You can imagine my disappointment when I later learned that it is irrational only in the sense that it cannot be expressed as a ratio of whole numbers. I like my teacher’s explanation better. You can find fractions that are good approximations, like 22/7 or 355/113, but approximations are not the real thing.

The fact that pi is an infinite, non-repeating decimal, and that it cannot be written simply in terms of whole numbers, makes it difficult to write down at all. That is why we just give it a name, pi, and call it a day. We could as easily have called it Harry the number if we wanted to, but perhaps that lacks gravitas.

Pi is one of the special numbers of mathematics. Another is e, which is typically defined in ways that require calculus and have nothing to do with circles. This is another of those strange, irrational numbers that seems to keep popping up in unexpected places. Still another is i, which is defined to be the square root of minus 1, a number so bizarre it is commonly said to be imaginary. And we certainly should not forget the two most special numbers of them all, by which I mean 1 and 0.

Perhaps having experienced social ostracism at the hands of more normal numbers, the five special numbers have gotten together to create one of the most remarkable equations in mathematics. It is called Euler’s identity, and says:
e^(iπ) + 1 = 0

It is remarkable that these five special numbers, defined in contexts entirely separate from one another, should play together so well. At the risk of seeming melodramatic, religions have started over less.

So take a moment this March 14 to give some thought to the most delicious number we have: pi. We will not have another perfect square day until May 5, 2025 (a date that will be written 5/5/25). And since e is 2.72 when rounded to two decimal places, we will never have an e day until February is granted 72 days. Or perhaps someday we will dramatically increase the size of the calendar, and then we will have e day on the second day of the twenty-seventh month.

But pi day comes every year. Enjoy it!

Jason Rosenhouse is a professor of mathematics at James Madison University in Harrisonburg, Virginia. He is the author or editor of six books, including The Monty Hall Problem: The Remarkable Story of Math’s Most Contentious Brainteaser, and Among the Creationists: Dispatches From the Anti-Evolutionist Frontline. His book Taking Sudoku Seriously, coauthored with Laura Taalman, received the 2012 PROSE Award from the Association of American Publishers for popular science and mathematics. With Jennifer Beineke, he is the editor of the Mathematics of Various Entertaining Subjects series, published by Princeton University Press and the Museum of Mathematics in New York. He is currently working on a book about logic puzzles, to be published by Princeton.

Jeffrey Bub & Tanya Bub: There are recipes for Pi. But quantum mechanics?

There’s a recipe for Pi; in fact, there are quite a few recipes. Here’s one that dates to the fifteenth century, discovered by the Indian mathematician and astronomer Nilakantha:
π = 3 + 4/(2·3·4) − 4/(4·5·6) + 4/(6·7·8) − 4/(8·9·10) + ⋯

For the trillions of decimal places to which the digits have been calculated, each digit in the decimal expansion of Pi occurs about one-tenth of the time, each pair of digits about one-hundredth of the time, and so on. It’s still a deep unsolved mathematical problem to prove that this is in fact a feature of Pi—that the digits will continue to be uniformly distributed in this sense as more and more digits are calculated—but the digits aren’t totally random, since there’s a recipe for calculating them.
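Nilakantha's recipe, π = 3 + 4/(2·3·4) − 4/(4·5·6) + 4/(6·7·8) − ⋯, is simple enough to run yourself. A short Python sketch (the function name and term count are illustrative choices):

```python
import math

def nilakantha(terms=1000):
    """Sum the Nilakantha series: pi = 3 + 4/(2*3*4) - 4/(4*5*6) + ..."""
    total = 3.0
    sign = 1.0
    for n in range(2, 2 * terms + 1, 2):   # n = 2, 4, 6, ...
        total += sign * 4.0 / (n * (n + 1) * (n + 2))
        sign = -sign                        # alternate the signs
    return total

print(nilakantha(10))   # already agrees with pi to three decimal places
print(math.pi)
```

Because the terms shrink like 1/n³, a few dozen terms already give several correct digits, which is part of what made this recipe remarkable for its time.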

Quantum mechanics supplies a recipe for calculating the probabilities of events, how likely it is for an event to happen, but the theory doesn’t say whether an individual event will definitely happen or not. So is quantum theory incomplete, as Einstein thought, in which case we should try to complete the theory by refining the recipe, or are the individual events really totally random?

Einstein didn’t like the idea that God plays dice with the universe, as he characterized the orthodox Copenhagen interpretation of quantum mechanics adopted by Niels Bohr, Werner Heisenberg, and colleagues. He wrote to his friend the physicist Max Born:

I find the idea quite intolerable that an electron exposed to radiation should choose of its own free will, not only its moment to jump off, but also its direction. In that case, I would rather be a cobbler, or even an employee in a gaming house, than a physicist.

But Einstein was wrong. Consider this puzzle. Could you rig pairs of coins according to some recipe so that if Alice and Bob, separated by any distance, each toss a coin from a rigged pair heads up, one coin lands heads and the other tails, but if they toss the coins any other way (both tails up, or one tails up and the other heads up), they land the same? It turns out that if each coin is designed to land in any way at all that does not depend on the paired coin or how the paired coin is tossed—if each coin has its own “being-thus,” as Einstein put it—you couldn’t get the correlation right for more than 75% of the tosses. This is a version of Bell’s theorem, proved by John Bell in 1964.
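The 75% limit can be checked by brute force. If each coin's landing is fixed in advance by its own "being-thus," a recipe amounts to choosing Alice's output for each of her two tosses and Bob's output for each of his, giving only 16 deterministic recipes to try (randomizing over them can do no better than the best one). A sketch, with the rule encoded as described in the puzzle:

```python
from itertools import product

# Inputs: 1 = coin tossed heads up, 0 = tails up.
# Winning rule from the puzzle: the coins must land differently
# exactly when both are tossed heads up, and the same otherwise.
def wins(a, b, x, y):
    lands_same = (a == b)
    return lands_same != (x == 1 and y == 1)

# A local deterministic recipe fixes Alice's output for each of her
# inputs, (a0, a1), and Bob's for each of his, (b0, b1).
best = 0
for a0, a1, b0, b1 in product([0, 1], repeat=4):
    score = sum(
        wins((a0, a1)[x], (b0, b1)[y], x, y)
        for x, y in product([0, 1], repeat=2)
    )
    best = max(best, score)

print(best / 4)  # 0.75: no recipe wins more than 3 of the 4 cases
```

Every one of the 16 recipes fails on at least one of the four ways of tossing the pair, which is exactly the 75% ceiling the text describes.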


What has this got to do with quantum randomness? The coin correlation is actually a “superquantum” correlation called a PR-correlation, after Sandu Popescu and Daniel Rohrlich who came up with the idea. Quantum particles aren’t correlated in quite this way, but measurements on pairs of photons in an “entangled” quantum state can produce a correlation that is close to the coin correlation. If Alice and Bob use entangled photons rather than coins, they could simulate the coin correlation with a success rate of about 85% by measuring the polarizations of the photons in certain directions.

Suppose Alice measures the polarizations of her photons in direction A = 0 or A′ = π/4 instead of tossing her coin tails up or heads up, and Bob measures in the direction B = π/8 or B′ = −π/8 instead of tossing his coin tails up or heads up. Then the angle between Alice’s measurement direction and Bob’s measurement direction is π/8, except when Alice measures in the direction A′ and Bob measures in the direction B′, in which case the angle is 3π/8. According to the quantum recipe for probabilities, the probability that the photon polarizations are the same when they are measured in directions π/8 apart is cos²(π/8), and the probability that the photon polarizations are different when they are measured in directions 3π/8 apart is sin²(3π/8) = cos²(π/8). So the probability that Alice and Bob get outcomes + or − corresponding to heads or tails that mimic the coin correlation is cos²(π/8), which is approximately 0.85.
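Those probabilities are easy to tabulate. A sketch (mine, following the quantum recipe quoted above, where polarizations measured at relative angle d agree with probability cos²(d) and disagree with probability sin²(d)):

```python
from math import cos, sin, pi

# Alice's two measurement directions and Bob's two, as given in the text.
A, A_prime = 0.0, pi / 4
B, B_prime = pi / 8, -pi / 8

def p_win(a, b, want_same):
    """Quantum recipe: polarizations measured at relative angle d are the
    same with probability cos^2(d) and different with probability sin^2(d)."""
    d = a - b
    return cos(d) ** 2 if want_same else sin(d) ** 2

# Only the (A', B') setting calls for opposite outcomes, mimicking the coins.
settings = [(A, B, True), (A, B_prime, True),
            (A_prime, B, True), (A_prime, B_prime, False)]
success = sum(p_win(a, b, same) for a, b, same in settings) / 4
print(round(success, 4))  # 0.8536
```

Averaged over the four equally likely measurement settings, the success rate is cos²(π/8) ≈ 0.8536, comfortably above the 75% any local recipe can manage.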

Bell’s theorem tells us that this pattern of measurement outcomes is closer to the coin correlation than any recipe of the “being-thus” sort could produce. So God does play dice, and events involving entangled quantum particles are indeed totally random!

Tanya Bub is founder of 48th Ave Productions, a web development company. She lives in Victoria, British Columbia. Jeffrey Bub is Distinguished University Professor in the Department of Philosophy and the Institute for Physical Science and Technology at the University of Maryland, where he is also a fellow of the Joint Center for Quantum Information and Computer Science. His books include Bananaworld: Quantum Mechanics for Primates. He lives in Washington, DC. They are the authors of Totally Random: Why Nobody Understands Quantum Mechanics (A Serious Comic on Entanglement).

Gaming out chess players: The Italian Renaissance and Vladimir Putin

By Dr. John C. Hulsman

If learning the precious truth that we can be the danger (see my Gibbon column of last week) is the first commandment of political risk analysis, gaming out chess players is surely another. Chess players—foreign policy actors playing the long game, possessing fixed, long-term strategic goals even as they use whatever tactical means come to hand to achieve them—are rare birds indeed. Patient, low-key, but implacable, chess players do that rarest of things: they actually think ahead and are not prisoners of short-term day-to-day events, instead conditioning all that they do in furtherance of their long-term strategy.

Chess players manage to cloak their dogged, disciplined strategies, hiding them in plain sight amid our frenetic 24-hour news cycle, from a world that does not generally follow such fixed principles and cannot really conceive of how others might be able to hold to a clear strategic line. In a world of tacticians, it is easy for a strategist to conceal themselves.

Pope Julius II as the true hero of The Prince

Following on from the Crusades, the western world entered a period of cultural and political regeneration we now call the Renaissance. As is true for most eras, it was more politically chaotic, brutal, and bloody than it seems in retrospect. In the confusing, uncertain milieu of early-sixteenth century Italy, a man arose who fit the tenor of his times.

Pope Julius II has been shamefully underrated by history, as his contemporary Niccolo Machiavelli—the author of The Prince, the bible of modern realpolitik—lionized failed Bond villain Cesare Borgia rather than the more successful pope. However, we have five centuries of distance from the swirling events of the Renaissance, allowing us to take up the more dispassionate, chess-playing view that Machiavelli urges on us. So let us here rewrite the ending of The Prince, this time using Julius II as the proper analytical hero of the piece.

Julius was born Giuliano Della Rovere around 1443. Like Cesare Borgia, his path to power was speeded along by close familial contacts to the papacy. Della Rovere was the much-loved nephew of Pope Sixtus IV, becoming his uncle’s de facto prime minister. Following on from the death of Sixtus, Della Rovere assumed that he would succeed him. However, he was beaten out by Cardinal Rodrigo Borgia, Cesare’s father, who assumed the title of Pope Alexander VI. So Della Rovere, in good chess player fashion, tried to undercut Alexander, knowing his time was coming.

When Alexander VI died in 1503 (and with the lightning-quick demise of his successor, Pope Pius III, in just 26 days), Della Rovere at last made his long-considered move. He deceived the supposedly worldly Cesare and ran rings around him diplomatically, securing the papal throne by means of bribery, both in money and in future promises. With Cesare throwing the powerful Borgia family’s crucial support behind him, the new papal conclave was one of the shortest in history, with Della Rovere winning on only the second ballot, taking all but two cardinals’ votes. He ascended to the papal throne at the end of 1503.

Now that Cesare had outlived his usefulness, Julius withdrew his promised political support from him in true Machiavellian fashion, seeing to it that the Borgias found it impossible to retain their political control over the papal states of central Italy. Julius rightly reasoned that to fail to eradicate the Borgia principality would have left the Vatican surrounded by Borgia possessions and at Cesare’s very limited mercy.

Without papal support of his own (the critical backing his father Alexander VI had provided), Cesare’s rule lasted merely a matter of months, with his lands reverting to Julius and the papacy itself. Julius had run rings around Machiavelli’s hero, fulfilling the chess-playing maxim that securing one’s political position leads to political stability and long-term rule. That, Niccolo, is what a real chess player looks like.

Making sense of Putin

However, chess players are not just relics of the byzantine Renaissance age. Russian President Vladimir Putin is a perfect modern-day example of a chess player, as all the many devious tactics he pursues ultimately amount to a very single-minded effort to restore Russian greatness, often by blunting the West’s drives into what he sees as Russia’s traditional sphere of influence in the countries surrounding it. In other words, the Russian strongman resembles another chess player, former French President Charles de Gaulle, in his single-minded efforts to restore pride and great power status to his humiliated country.

As such, Putin’s many gambits (theatrically opposing the US despite having a puny, corrupt economy the size of Texas; pursuing an aggressive, adventurist policy against the pro-Western government in Ukraine; intervening to decisive effect in the horrendous Syrian war) all serve one overarching strategic goal. They are designed to make the world (and even more the Russian people) change their perceptions of Russia as a declining, corrupt, demographically challenged former superpower (which it is), and instead see it as a rejuvenated global great power, one that is back at the geo-strategic top table.

Despite all facts to the contrary (and in the end, as was true for De Gaulle’s France, the facts just don’t bear out the incorrect perception that Russia will again be a superpower), Putin has been very successful in (wrongly) changing global perceptions of Russia’s place in the world. It is also the reason the current tsar has an 80% approval rating in his own country, as he has restored pride to his formerly humiliated countrymen. By knowing what ultimately motivates the chess-playing Putin, we in the West can do a far better job in assessing the entirely explicable tactical gambits emanating from the Kremlin.

The rewards for spotting the rare chess player

Despite the difficulty in spotting them, it is well worth the time trying to game out chess players, perhaps the rarest of creatures in global politics. For once they are analytically brought to ground, the fixed, rational patterns that chess players live by mean that a true analytical understanding of them is possible, as well as a far better understanding of the world in which they live.

Dr. John C. Hulsman is the President and Co-Founder of John C. Hulsman Enterprises, a successful global political risk consulting firm. For three years, Hulsman was the Senior Columnist for City AM, the newspaper of the City of London. Hulsman is a Life Member of the Council on Foreign Relations, the pre-eminent foreign policy organization. The author of all or part of 14 books, Hulsman has given over 1520 interviews, written over 650 articles, prepared over 1290 briefings, and delivered more than 510 speeches on foreign policy around the world. His most recent work is To Dare More Boldly: The Audacious Story of Political Risk.

Robert Irwin on Ibn Khaldun: An Intellectual Biography

Ibn Khaldun (1332–1406) is generally regarded as the greatest intellectual ever to have appeared in the Arab world—a genius who ranks as one of the world’s great minds. Yet the author of the Muqaddima, the most important study of history ever produced in the Islamic world, is not as well known as he should be, and his ideas are widely misunderstood. In this groundbreaking intellectual biography, Robert Irwin provides an engaging and authoritative account of Ibn Khaldun’s extraordinary life, times, writings, and ideas.

Who was Ibn Khaldun?
Wali al-Din Ibn Khaldun was born in 1332 in Tunis. In his youth he was tutored by some of the finest scholars of the age before going on to occupy high offices at various North African courts and at the court of Granada in Muslim Spain. He became, among other things, a diplomat and a specialist in negotiating with the Arab and Berber tribesmen of the North African interior, and on occasion he led the tribesmen in battle. Later he moved to Cairo, where he was to occupy various senior judicial and teaching posts under the Mamluk Sultans. In 1401 he had a famous meeting with the Turco-Mongol would-be world conqueror Timur (also known as Tamerlane) outside the walls of Damascus, which was under siege by Timur. Having escaped becoming Timur’s honored captive, he returned to Egypt. In 1406 he died and was buried in a Sufi cemetery in Cairo. Despite his active career in politics, law, diplomacy, and teaching, he is chiefly famous for his great book, the Muqaddima (the translation of which is currently published in three volumes by Princeton University Press, as well as in a single-volume abridgment).

Why is Ibn Khaldun’s Muqaddima so important?
This big book asked big questions. The Muqaddima started out as a study of the laws of history and it has gone on to win great praise from modern historians. Arnold Toynbee described it as ‘undoubtedly the greatest work of its kind that has ever been created by any mind in any time or place.’ Hugh Trevor-Roper agreed: ‘It is a wonderful experience to read those great volumes, as rich and various, as subtle, deep and formless as the Ocean, and to fish up from them ideas old and new.’ The Muqaddima has attracted similar praise from philosophers, sociologists, anthropologists, economists, and Islamicists.

Ibn Khaldun began by asking how historians make mistakes in their interpretation of events and what kinds of information historians should recognize as good or bad evidence. Then he set out to understand the origins of civilization and the causes of the rise and fall of dynasties. As he continued his investigations, his book broadened out to become what was effectively an encyclopedia of Muslim society and culture.

Given his importance, there are already quite a few books on Ibn Khaldun. What is new about yours?
There are indeed so many translations of Ibn Khaldun and books about him that something like half the history of Orientalism can be deduced from the contrasting readings of the Muqaddima produced by such scholars as Silvestre de Sacy, Quatremère, Von Kremer, Monteil, Gibb, Hodgson, Hourani and Gellner. Some of the books by my predecessors are pretty good and I owe debts to those who have gone before me. Nevertheless many of their readings of the Muqaddima have been selective and have stressed and, I think, overstressed the logicality of Ibn Khaldun’s admittedly powerful mind and in doing so they have neglected the inconsistencies, ambiguities, and eccentricities that make the Muqaddima such a fascinating text. Mine is the first book to focus closely on the importance of the occult in Ibn Khaldun’s thought and his intense interest in methods of predicting the future. It is also the first to bring out the importance of North African ruins and the moralizing messages that he took from them. Although he was an outstanding thinker, he was also a man of his time and there has been a tendency to underplay the North African and strictly Muslim context of the Muqaddima. I have also sought to bring out the distinctive quality of Ibn Khaldun’s writing by contrasting it with famous texts by Froissart, Machiavelli, Vico, Montesquieu, Spengler, and others.

His ideas have been described as anticipating those of Montesquieu, Comte, Darwin, Marx, and Toynbee, among others. So was he a ‘modern’ thinker?
As new disciplines evolved in the West in the nineteenth and twentieth centuries, their leading scholars frequently sought to create intellectual lineages for their chosen subjects and so Ibn Khaldun came to be hailed as ‘the world’s first anthropologist’ or ‘the first ever cultural historian’ or as a ‘proto-Marxist.’ Though there is some justice in such tributes, the quest for relevance can be a dangerous thing, as an overemphasis on similarities may conceal or distort past ways of thinking and living. As the novelist L.P. Hartley observed, ‘The past is a foreign country; they do things differently there.’ Ibn Khaldun’s remarkable ability to formulate general laws based on the close observation of discrete phenomena gives his thinking the delusive appearance of modernity, but he wrote in the service of fourteenth-century Islam. Moreover there is no evidence that he influenced Montesquieu, there is no continuity between Ibn Khaldun’s sociological formulations and those of Comte and there is no indication that Ibn Khaldun had anticipated Darwin’s ideas about the survival of the fittest.

Why did you write this book?
It feels as though I have been living with Ibn Khaldun since I first read the Muqaddima as a student in the 1960s. So it was high time that I took a close look at the assumptions and vocabulary that underpinned his thinking. To spend so much time with a polymathic genius has been both demanding and exhilarating. But there is also something else. As already noted, his Muqaddima is encyclopedic in scope. It not only covers history and philosophy, but also religion, social studies, administrative structures and title-holding, geography, economics, literature, pedagogy, jurisprudence, magic, treasure hunting, diet, dream interpretation, and much else. So a study of his masterpiece can serve as a panoptic guide to Muslim thought and life in the Middle Ages. There is nothing to match it either in the Islamic world or in medieval Christendom.

Robert Irwin is senior research associate at the School of Oriental and African Studies in London and a former lecturer at the University of St. Andrews, Scotland. His many books include Dangerous Knowledge: Orientalism and Its Discontents and Memoirs of a Dervish: Sufis, Mystics, and the Sixties, as well as seven novels. He is a Fellow of the Royal Society of Literature.

Omnia El Shakry: Genealogies of Female Writing


Throughout Women’s History Month, join Princeton University Press as we celebrate scholarship by and about women.

by Omnia El Shakry

In the wake of the tumultuous year for women that was 2017, many female scholars have been reflecting upon their experiences in the academy, ranging from sexual harassment to the everyday experiences of listening to colleagues mansplain or even intellectually demean women’s work. Indeed, I can vividly recall, as a young assistant professor, hearing a senior male colleague brush off what has now become a canonical text in the field of Middle East studies as “merely” an example of gender history, with no wider relevance to the region. Gender history rolled off his tongue with disdain and there was an assumption that it was distinct from real history.

Few now, however, would deign to publicly discount the role that female authors have played in the vitality of the field of Middle East studies. In recognition of this, the Middle East Studies Association of North America has inaugurated new book awards honoring the pioneering efforts of two women in the field, Nikki Keddie and Fatima Mernissi. I can still remember the first time I read Mernissi’s work while an undergraduate at the American University in Cairo. Ever since my freshman year, I had enrolled in Cultural Anthropology courses with Soraya Altorki—a pioneering anthropologist who had written about Arab Women in the Field and the challenges of studying one’s own society. In her courses, and elsewhere, I was introduced to Lila Abu-Lughod’s Veiled Sentiments, an ethnography of poetry and everyday discourse in a Bedouin community in Egypt’s Western Desert. Abu-Lughod’s narrative was sensitive to questions of positionality, a lesson she both drew from and imbued with feminism. A second piece of writing, an article by Stefania Pandolfo on the “Detours of Life” that interpreted the internal logic of imagining space and bodies in a Moroccan village, gave me a breathtaking view of ethnography, the heterogeneity of lifeworlds, and the work of symbolic interpretation.

In hindsight I can see that these early undergraduate experiences of reading, and studying with, female anthropologists profoundly impacted my own writing. Although I would eventually become a historian, I remained interested in the ethnographic question of encounters, and specifically of how knowledge is produced through encounters—whether the encounter between the colonizer and the colonized or between psychoanalysis and Islam. In my most recent book, The Arabic Freud: Psychoanalysis and Islam in Modern Egypt, I ask what it might mean to think of psychoanalysis and Islam together, not as a “problem” but as a creative encounter of ethical engagement. Rather than conceptualizing modern intellectual thought as something developed in Europe, merely to be diffused at its point of application elsewhere, I imagine psychoanalytic knowledge as something elaborated across the space of human difference.

There is yet another female figure who stands at the door of my entry into writing about the Middle East. My grandmother was a strong presence in my early college years. Every Friday afternoon I would head over to her apartment, just a quick walk away from my dorm in downtown Cairo. We would eat lunch, laugh and talk, and watch the subtitled American soap operas that were so popular back then. Since she could not read or write, we would engage in a collective work of translation while watching and I often found her retelling of the series to be far more imaginative than anything network television writers could ever have produced.

Writing for me is about the creative worlds of possibility and of human difference that exist both within, but also outside, of the written word. As historians when we write we are translating between the living and the dead, as much as between different life worlds, and we are often propelled by intergenerational and transgenerational bonds that include the written word, but also exceed it.

Omnia El Shakry is professor of history at the University of California, Davis. She is the author of The Arabic Freud: Psychoanalysis and Islam in Modern Egypt.

Katrina van Grouw on the difficulty of answering a simple question

Artist/scientist/author/illustrator… To me, names are important, and it’s vital to be described by one that fits. Ironically it seems to be my lot in life to evade classification.


“What do you do for a living?”

It’s a harmless enough question; one that ideally requires a short answer, like “astronaut” or “driving instructor”. And yet the closest thing to a concise answer that emerges from my ensuing stream of incoherent mumbling is: “I produce books.”

I produce books; beautiful books that communicate beautiful science to everyday people. (I’m actually a very good communicator, both in writing and in front of an audience, but the reason why this particular question always throws me off balance will hopefully become clear as you read on.) Each book takes multiple years to create. I work on them full-time, seven days a week; think about them every minute of every day, and dream about them at night. They’re my obsession, my passion, my entire reason to live.

You might be wondering how a single book can take so long, but these are rather original, large, illustrated books with around 400 drawings in each. I am author, illustrator, conceiver, and designer. For the anatomical illustrations, the mounted skeletons are invariably drawn from specimens that we’ve cleaned and articulated at home (Husband does all the preparation work, though we’re both adept at it), as very few museum specimens are sufficiently accurate or mounted in the required posture. So we need to obtain the specimens and do months of preparation before the illustration work can even begin.

Cattle lined up in a stall, in various stages of undress, seemed the best way to illustrate the result of “double muscling”, most obvious in the hindquarters of beef cattle. Images like this are only of real use as illustrations in a book, with the sole function of clarifying the text.

Although the drawings are, to many people, the main selling point, there’s a difference between “art books”—collections of an artist’s work on a loose theme—and illustrated books that are created to communicate a message. Mine are not art books, despite being very beautiful. My newest book, Unnatural Selection, in particular, is text-led, with the illustrations serving purely to elucidate the writing.

The sorts of images necessary to illustrate a book might also be very different from the pictures an artist will produce for their own sake. Many people assume that I produce tightly detailed anatomical drawings out of choice, as works of art in their own right, and some even assume I’m some sort of arty Goth chick who’s “into skeletons”. I’ll never forget the reaction of a lady at an art demonstration (I was using the opportunity to produce illustrations for The Unfeathered Bird) who stormed out in obvious disgust muttering, “The things people draw!”

I’ve only ever produced anatomical drawings as a means to an end—as a way of communicating (through my books’ illustrations), or of investigating the underlying structure of the animals that I picture, alive, in my personal artwork. In my previous incarnation, as a fine artist, my creations were very, very different—loose and dark and expressive—though also concerned with the underlying structure of things and inspired by similar subjects to my books. I was deeply engrossed in large drawings of towering sea cliffs and geological formations when Princeton University Press offered to publish The Unfeathered Bird, an idea I’d been incubating for nearly 20 years. The book was supposed to be a temporary diversion, but when the time came to return to my previous artwork I found that the moment had passed. I’d moved on.

There’s a difference between artwork produced for its own sake to hang on the wall, and drawings made exclusively as book illustrations to supplement text. My anatomical drawings were only ever intended for illustration, or as a way of understanding the structure of living animals.

People have mourned this departure from the picture-making art world without appreciating that it’s impossible to move backward, even if I’d wanted to. I’ve evolved in a new direction and discovered something that ticks all the boxes for me creatively and intellectually: books.

Books offer the potential to be far more than the sum of their parts. For me it’s the entire book —the interaction of text with images, the design, the way I choose to express myself, and most of all the concept —that’s the final work of art. I love the challenge of making decisions about the best arrangement of content, or the angle of approach, confident that the answer exists but having to reach it through months of independent thought. Producing books encompasses not just my drawing skills, but writing, research, communication and my intellect most of all, and tests me to my limits. I can think of nothing finer.

The line between art and illustration is a fine one. Many works of fine art can function superbly well as illustrations, and many illustrations are sublime works of art in their own right. The distinction is not in the creations but in the professions. Being an illustrator usually involves working to someone else’s brief and taking instructions from a non-illustrator about how the illustration should be done. Just the thought of it fills me with contempt! I have no imagination when it comes to commissioned work, no passion for other people’s projects, and no inclination to subject myself to other people’s will. The purpose of illustration is to illuminate text, so it’s something of an oxymoron to describe someone primarily as an illustrator when it’s their own text they’re illustrating. For these reasons, and because I’m exceedingly proud of my written work, I dislike being described as a natural history illustrator, preferring to think of myself as an author or as an author/illustrator.

One of the challenges I enjoy most is clarifying a scientific idea through cleverly conceived illustrations. These four Budgerigars (or is it just one?) are showing how pigment layers combine to produce colors.

Even this invites preconceptions, however. When people hear the word “author” they immediately think of fiction. And when the author is a woman, and also illustrates her own books, people think of children’s fiction. After that, explaining that you actually produce books about evolution and morphology for adults is just a confirmation of their automatic expectation that your books are dull, super-specialised, and only of interest to a very limited niche market. Their response is always the same, and if I had a pound for every time someone said this, I’d be very rich indeed:

“You’re not exactly J K Rowling, then.”

To be honest, there are actually very few full time non-fiction authors. Most other authors of evolution books are university professors or researchers who would definitely describe themselves as biologists first and foremost. For many, writing books is something that’s expected of them, as part of their job.

I’d dearly love to have been able to call myself a biologist. I am, however, entirely self-taught, so I don’t believe I deserve that title (and certainly not the title “anatomist”, which I have been called on occasion). Ironically, there are plenty of self-taught artists who claim the title “artist” as their own almost as readily as they pick up a pencil, but anyone without a relevant university degree is considered a fraud if they call themselves a scientist. Names are important, and it’s vital to be described by one that fits, although ironically it seems to be my lot in life to evade classification.

My desire for an academic education was held back – not by any lack of ability, but by a prodigious talent for drawing. The school I attended was a veritable nest of sirens – mesmerising, charismatic teachers who would lure talented and unsuspecting children into their inner sanctum and set about re-creating them in their own image. I’ll never forget the intoxicating evenings at the home of my art teacher, a particularly alluring and manipulative siren named Jill; mesmerised by her beauty, the way her long hair, released from its schoolroom bun, caught the glow of the firelight as we sat listening to Bob Dylan; enraptured by the music of her voice as she languidly spoke of art and poetry and literature, of all the things I must learn to love, and all the things I mustn’t waste my time on. When I finally awoke from the dream and remembered my passion for biology it was too late. It was only after every attempt to scrape into an academic science education had failed that I at last, very reluctantly, committed myself to a future as a fine artist.

Being self-taught isn’t necessarily a bad thing, however. It’s by having to read and reason alone that you learn to question and think, and to draw conclusions from first hand observation. Also, by struggling to learn scientific concepts for yourself you appreciate the parts that are difficult to grasp so you become naturally better able to communicate them to other people. I’m very proud of what I’ve achieved on my own and have absolutely no doubt that my books contain a far better scientific message as a result of taking this difficult path than they ever would have otherwise.

So much for “What do you do…” but now we get on to the second part of the question— “for a living?”

Most people judge success purely in terms of whether or not you make enough money to live on and, if so, how affluently you manage to live. Producing books is more of a life than a living. It’s not about making money; it’s about bringing something into the world that deserves to exist. Realistically, no-one can honestly claim to earn a sustained income from projects that take so long to complete, so you’re faced with the dilemma of whether to do other paid work—in which case the task will take even longer—or to accept the lack of income and all the feelings of worthlessness that come with it, for the sake of devoting yourself exclusively to that project. I now do the latter, though it wasn’t out of choice.

In fact my personal preference is to have a day job with nice people who say good morning and ask how my weekend was. I’ve endured my share of poverty over the years; I’ve burned the furniture to keep warm and once even masqueraded as a waitress in a busy pub so that I could eat the leftovers from people’s plates. However, it’s not for the money that I like to have a job; it’s mostly because I find I need the company and routine. Neither option is better or more worthy than the other; it’s simply a question of how you prefer to live.

I’ve tried various day jobs. At first I purposely selected the most menial jobs I could in a deliberate effort to keep “job” and “career” separate. The first was plucking chickens on an assembly line at an abattoir. This was followed by a succession of soul-destroying occupations: as a bird bander on a nature reserve for £90/week (that one even came with accommodation: a rat-infested caravan); data entry; photocopying; and, worst of all, being forgotten about altogether and paid to do nothing. Trust me—it’s not as good as it sounds.

Eventually my skills as a self-taught ornithologist and specimen preparator came to my rescue when a job arose as curator of the bird research collections at the British Natural History Museum. At the interview I talked enthusiastically about The Unfeathered Bird (still in its embryonic form) and showed photographs of skins and skeletons I’d prepared. Getting that job made me feel like the Ugly Duckling when it discovered it was a swan. You never saw anyone so happy. The job, I considered, was worth moving back south for, where properties are more expensive; worth downsizing to a tiny house and sacrificing my art studio and etching press. A few years later bad news followed good news on the same day like two barrels of a shotgun. I was invited to write a book (independently from the museum) about the history of bird art. And I was forbidden, by the head of department, from ever producing books in my spare time.

My husband now has “my” job. We’d job-shared in my final year, before I sacrificed the museum for my right to produce books, and fortunately he was able to take over my hours, so as a couple we suffered no loss of earnings. After the head of department had retired, I tried, and failed, to get another post at the museum, and had similar fortune elsewhere too, leaving me utterly broken.

By now you might be starting to understand why “What do you do for a living?” is such a difficult question for me. Book royalties come but once a year, and as a modern hard-working woman I feel a stigma in admitting that our household income is virtually all provided by my husband’s job. No-one’s interested in hearing that that job used to be my own. They fill in the gaps with preconceptions: “successful scientist husband (he must be a scientist as he works at the Natural History Museum) generously supporting his (artist) wife’s hobby.”

Many people mourn the fact that I no longer do pictures like this large seascape. But artistic development is a one way trip. For me now, producing books ticks all the creative and intellectual boxes.

I love writing for an audience, so when Princeton University Press asked me to write a blog post for International Women’s Day I agreed instantly, even though I didn’t know what on Earth I’d have to say. I’ve never had a proper career, and never had a family, so I wasn’t able to talk about equal pay, or maternity leave, or sexual harassment at work. So I started writing about myself instead, and discovered that I do have something to say.

Labels, judgements, and stereotypes; pink/blue; dolls/action men; art/science; it’s one thing to loathe preconceptions from others, but how many of us are aware of them in our own behaviour? Equality isn’t just in the hands of employers; it’s the responsibility of every single one of us—women as much as men. Once we start to accept that each and every one of us has a unique story to tell, we might be less inclined to make generalisations. And finally, what about the prejudices we level at ourselves? As a perfectly-balanced author/illustrator with a matching chip on each shoulder, I can see that change won’t happen overnight. But by challenging my own discomfort about gender expectations, what we do, and who earns the wages, I hope to someday manage to proudly look someone in the eye and say, “I produce books.”

Katrina van Grouw, author of The Unfeathered Bird (Princeton), inhabits that no-man’s-land midway between art and science. She holds degrees in fine art and natural history illustration and is a former curator of ornithological collections at a major national museum. She’s a self-taught scientist with a passion for evolutionary biology and its history.

Robert Wuthnow on The Left Behind

What is fueling rural America’s outrage toward the federal government? Why did rural Americans vote overwhelmingly for Donald Trump? And, beyond economic and demographic decline, is there a more nuanced explanation for the growing rural-urban divide? Drawing on more than a decade of research and hundreds of interviews, Robert Wuthnow brings us into America’s small towns, farms, and rural communities to paint a rich portrait of the moral order—the interactions, loyalties, obligations, and identities—underpinning this critical segment of the nation. Wuthnow demonstrates that to truly understand rural Americans’ anger, their culture must be explored more fully. Moving beyond simplistic depictions of the residents of America’s heartland, The Left Behind offers a clearer picture of how this important population will influence the nation’s political future.

You argue that rural America’s politics cannot be understood in terms of economic problems, but require a cultural explanation. What do you mean by that?

What I learned from the research over the past decade in which my research assistants and I interviewed hundreds of rural Americans is that their identity is deeply connected with their communities. We cannot understand rural Americans by thinking of them only as individuals. They have to be understood in terms of their communities. I call these moral communities because people feel obligated to them and take their cues about what is right and good from their neighbors. These moral communities define their way of life. But these ways of life are slipping away. Population is declining, schools are closing, jobs are disappearing, and young people are moving away. Even families who are doing well economically feel the changes. They are having to commute farther for work and to conduct business. The major forces shaping society are beyond their control. People feel threatened and misunderstood.

Are you suggesting that Donald Trump appealed to this sense of displacement? Did rural voters win him the election?

Many factors went into the 2016 presidential election. Political analysts are still sorting them out. Rural voters did opt for Trump in greater percentages than urban voters. My research was less concerned with the election, though, than with understanding at a deeper level what people in small towns and on farms value and why they feel threatened. You have to spend time in rural communities and talk at length with people to understand this. You can go out as a reporter and ask them about politics. But ordinarily they don’t talk that much about politics. They live from day to day going to work, taking their kids to school, maybe volunteering for a local church or club, and maybe helping their neighbors. They see problems, but basically like their communities and want them to stay strong. If you just see rural Americans as voters, you miss the warp and woof of their daily lives.

When they did talk about politics, the people you studied seemed to be totally alienated from Washington. What troubles them about the federal government?

They voiced two major complaints about Washington: the federal government is distant and at the same time it is intrusive. Washington’s distance is both geographic (in most cases) and cultural. It is perceived as catering to urban interests. Washington bureaucrats don’t seem to care about rural America or even inquire about its needs. They seem to look down on people in small towns. Washington nevertheless intrudes on daily life through taxes, regulations, and unfunded mandates. Besides that, Washington deviates from small town residents’ notions of common sense. It strikes them that big bureaucracy is inevitably inefficient and ineffective.

From what you’ve learned about rural Americans, would you think they now have buyer’s remorse and will vote Democratic next time?

Some may. Current trade policies have hurt farm families. Rural hospitals and small-town schools are struggling. Nothing is being done to promote jobs in rural areas or to address the opioid epidemic. Rural people are certainly aware that President Trump is very different from them in terms of origin, wealth, and values. But many rural Americans have been Republicans all their lives and are unlikely to change their affiliations. In those locations, the real contests happen in Republican primaries. Anger at Washington, as we know, can be directed at “establishment” Republicans as well as at Democrats.

You paint a largely sympathetic portrait of rural America, but you also say you heard things you disagreed with. Can you say something about that?

The most disturbing comments were ones with blatant racist overtones. These were not common but surfaced in reference to President Obama especially in the South. Comments about immigrants were more mixed than Trump’s anti-immigrant rhetoric might suggest. Farmers and construction companies often relied on immigrant labor. Sometimes small towns were happy to have newcomers and had done well adapting schools and service programs to immigrant families. Negative sentiment mostly focused on undocumented immigrants and Muslims.

How are rural churches faring these days?

Church-going still occurs at higher rates in rural communities than in cities. Depopulation has forced some congregations to merge or close. Clergy sometimes minister to congregations in several locations, much like circuit riders did in the nineteenth century. Membership may be declining and aging, but churches still provide vital community services, including assistance to the poor.

There are approximately 14,000 small towns in the United States and the rural population is estimated at somewhere between 30 and 50 million people. Surely you observed a great deal of variety.

Absolutely. The biggest differences are between towns of fewer than 5,000 people and towns with 10,000 to 25,000 people. While most of the smaller towns are declining, most of the larger ones are holding their own or growing. It also helps growth to be a county seat and located near an interstate highway. Towns with better climate and natural amenities are doing well too. Agriculture is the mainstay of small town America, but the most common jobs are often in social services. I was surprised at how many towns have small manufacturing plants. Many of these towns of course are struggling to prevent plant closings.

You grew up in a small town in Kansas. How did that experience affect your research? Did you find that things nowadays are dramatically different?

My hometown, like many small towns, is smaller than it was by about 50 percent. It is also more ethnically diverse. Farms are larger. People commute to other towns to work. Townspeople have had to work hard to keep the hospital open and build a new middle school. The ambience is a mixture of cautious optimism and concern. I found that in other places too. People are proud of their community. It’s home. But they worry. When a school closes or a large family moves away, it affects everyone. As one resident put it, “It tears at your gut.”

Robert Wuthnow is the Gerhard R. Andlinger ’52 Professor of Social Sciences at Princeton University. His many books include American Misfits and the Making of Middle-Class Respectability, Small-Town America, and Remaking the Heartland.

Nancy Woloch: The roots of International Women’s Day

International Women’s Day has roots on the left. The idea for such a day arose among socialist women in the US and Europe early in the 20th century. A New York City women’s socialist meeting of 1909 endorsed the plan. So did the International Socialist Women’s Conference that met in Copenhagen in August 1910 as part of the larger Internationalist Socialist Congress. The hundred delegates from seventeen nations who attended the women’s conference shaped a demanding agenda. In what manner would socialist women support woman suffrage? Might they join forces with “bourgeois” feminists to accept restricted forms of enfranchisement, as urged by British delegates? Or did the socialist campaign for woman suffrage involve “the political emancipation of the female sex for the proletarian class-struggle,” as claimed by German delegates? The Germans won that point. In other areas, the women delegates found more unity. Denouncing militarism, they spoke for peace. They urged international labor standards for women workers, such as the 8-hour day, limits on child labor, and paid support for pregnant workers and new mothers. Finally, they endorsed a day of activism around the globe to promote women’s emancipation, a counterpart to the May Day marches of socialists. “[W]omen of all nationalities have to organize a special Woman’s Day, which in first line has to promote woman suffrage propaganda,” wrote German socialist Clara Zetkin and her comrades. “This demand must be discussed in connection with the whole woman question according to the socialist conception of social things.” As of 1913, socialist women chose March 8th as the date for International Women’s Day.

Women activists of the 1960s in Chicago revived the socialist strategy to promote women’s emancipation. Adopted by the United Nations in 1975, International Women’s Day now sponsors less politicized and more broadly inclusive goals; proponents celebrate facets of women’s achievement and champion action to achieve gender equity. Over the decades, on March 8 of each year, events around the globe underscore common themes such as equal rights, women and peace, and opposition to violence against women. In the recent words of the UN Secretary-General, Antonio Guterres, the celebration of International Women’s Day seeks “to overcome entrenched prejudice, support engagement and activism, and promote gender equity and women’s empowerment.”

Workplace rights are key issues for advocates of International Women’s Day, just as they were for defenders of labor standards a century ago. The growth of labor standards—such as maximum-hour laws and minimum wage laws—is the subject of my book, A Class by Herself: Protective Laws for Women Workers, 1890s-1990s. With global roots and global impact, labor standards remain vital for women workers today. Women constitute almost half the workforce of the world and half of migrant workers, often the least protected of employees. Current concerns include the minimum wage, overtime pay, paid family leave, workplace safety, and opposition to sexual harassment. Labor organizers worldwide focus on job segregation, the gender wage gap, and the need for policies to integrate work and family. Celebrants of International Women’s Day share such goals and seek to uphold labor standards around the globe.

Nancy Woloch teaches history at Barnard College, Columbia University. She is the author of A Class by Herself: Protective Laws for Women Workers, 1890s–1990s.

Report of the Socialist Party Delegation and Proceedings of the International Socialist Congress at Copenhagen, 1910 (Chicago: H.G. Adair, 1910), pp. 19–23.
Temma Kaplan, “On the Socialist Origins of International Women’s Day,” Feminist Studies 11, no. 1 (Spring 1985), pp. 163–171.
Nancy Woloch, A Class by Herself: Protective Laws for Women Workers, 1890s–1990s (Princeton: Princeton University Press, 2015).