Jan Assmann on The Invention of Religion

The Book of Exodus may be the most consequential story ever told. But its spectacular moments of heaven-sent plagues and parting seas overshadow its true significance, says Jan Assmann, a leading historian of ancient religion. The story of Moses guiding the enslaved children of Israel out of captivity to become God’s chosen people is the foundation of an entirely new idea of religion, one that lives on today in many of the world’s faiths. The Invention of Religion sheds new light on ancient scriptures to show how Exodus has shaped fundamental understandings of monotheistic practice and belief. It is a powerful account of how ideas of faith, revelation, and covenant, first introduced in Exodus, shaped Judaism and were later adopted by Christianity and Islam to form the bedrock of the world’s Abrahamic religions.

The title of your book is The Invention of Religion. How is this to be understood? Aren’t there many religions? And have they all been invented?

This is correct. Primal, tribal, and ancient religions go back to time immemorial. We may call them “primary religions.” They are based on experience and are equivalent to general culture; there is no way to conceive of them as independent systems based on rules and values of their own. In my book, I am dealing with “secondary religion,” which does not go back to time immemorial but has a definite date in history when it was founded or “invented.” Religion in this new sense is based not on experience but on revelation; it is set off from the older primary religion and therefore from general culture, forming a system of its own. The first secondary religion is Second Temple Judaism as it developed during the Babylonian Exile and as it was established around 520 BCE. Rabbinic Judaism, Christianity, and Islam followed its model, as does our concept of “religion.”

If revelation is the distinctive feature of “secondary religion,” how do you explain that all religions know of ways by which the gods reveal their intentions to humankind, such as prodigies, oracles, dreams, etc.?

We must distinguish between occasional and singular revelations. Occasional revelations occur once in a while, refer to specific situations, and address specific recipients. Singular revelations occur once and for all time, encompass the entirety of human—individual, social, political—existence, and address a whole people or group of believers such as Jews, Christians, Muslims, etc. Revelation in this sense is an act of foundation, establishing a “covenant” between God and men. Whereas primary religions need rituals, attention, and diligence in order not to miss the divine intimations and to interpret them correctly—and this is exactly what the Latin term religio means according to Cicero—religions of the new type need memory, codification, canonization of the revealed texts, and faith in the revealed truth, i.e. the covenant. For this reason, Lactantius, a Christian, derived the word religio not from relegere, or “to diligently observe,” but from religari, or “to bind oneself.”

“Faith” is another category that one would assume to be necessary for all religions, not only for Second Temple Judaism and the religions based on or following this model.

In a general sense, yes. But religion based on revelation requires faith in a specific and much stronger sense. Faith in the general, weak sense is based on experience and evidence, i.e. immanent, this-worldly truth. Faith in the new, strong sense is based on revealed truth, which is transcendent and extramundane. This is a truth that cannot be verified by experience or research, but can only be attested by staying true to the covenant and its laws, even under conditions of suffering. The term “martyr” comes from the Greek martys, “witness,” and refers to one who by his violent death testifies to the truth of God’s covenant. Faith, truthfulness, and loyalty mean the same (emunah in Hebrew). This kind of faith does not exist in primary religions and is the exclusive innovation of Biblical monotheism in its post-exilic form of Second Temple Judaism.

The main topic of the book of Exodus, however, seems still to be the exodus of the Israelites from Egypt (yitzi’at mitzra’im) and not “revelation” for which there is not even a word in Hebrew.

That “revelation” is the main topic of the book of Exodus becomes clear from a careful thematic analysis. The book comprises three parts. Part one (chapters 1-15a) contains God’s revelation to Moses in the Burning Bush, the ten plagues, and the Miracle of the Sea, revealing his overwhelming power. Part two (chapters 15b-24) contains the revelation of the covenant and the closing ceremony. The last part (chapters 25-40) contains the revelation and construction of the Tabernacle, interrupted by the scene of the Golden Calf. Each of these parts contains scenes of revelation, which is thus shown to be the overarching theme. That there is no word for “revelation” in Hebrew is the reason why this new and revolutionary concept is unfolded in the form of a lengthy narrative.

Being an Egyptologist, what brought you to venture into the field of Biblical studies and how does your approach as an outsider differ from that of professional Biblical scholars who wrote on the book of Exodus?

My “Egyptological” approach to the Bible focuses on the triad of culture, identity, and memory that is typical of Cultural Studies, whereas the approach of Biblical Studies mostly focuses on textual criticism and the distinction of different layers of redaction and composition. According to the Bible, the Israelites fled from Egypt and not from any other country of the Ancient World. This fact alone constitutes a challenge for Egyptology. Egypt seems to stand for something that the Torah is opposing with particular vehemence. A closer reading of the book of Exodus reveals that it is not religion—the Egyptian cult—that is rejected, but the political system of sacral kingship, the king as god and the deification of the state. All the ancient oriental kingdoms share this idea to a greater or lesser degree, but Egypt is its most extreme realization. Egypt, therefore, represents the world which Israel was to exit—or to be liberated from—in order to enter a new paradigm for which Flavius Josephus coined the term “theocracy,” meaning “God is king” instead of “The king is god,” the principle of sacral kingship. This originally political idea gradually evolved into what we now understand as “religion.”

The Exodus of the Israelites from Egypt is commonly taken as a historical fact, unlike the events that you subsume under the concept “revelation” and consequently interpret as religious imagination.

Egyptology tells us that there is no archaeological, epigraphical, or literary evidence of any Hebrew mass emigration from Egypt in the Late Bronze Age, the narrated time. The book of Exodus is not a historical account but a foundational myth, though replete with historical reminiscences and experiences such as the expulsion of the Hyksos, the oppression of Palestine by Egyptian colonization, Solomon’s oppression of his people through heavy corvée—leading to the separation of the Northern Tribes—and finally the annihilation of the Northern Kingdom by the Assyrians in 722 BCE and of the Southern Kingdom by the Babylonians in 597/587 BCE. What is decisive is not the narrated time—“What may really have happened in the 13th century BCE?”—but the various times of narration when this myth was first formed and was eventually codified and canonized as the foundational story of Second Temple Judaism. As a foundational myth, the Exodus belongs within the same sphere of religious imagination as the scenes of revelation.

And Moses? The name, Egyptologists tell us, is Egyptian. This seems to be historical evidence after all.

This is true: Moses (Moshe) is an Egyptian name, meaning “born of,” like the Greek –genes; Hermogenes would be Thut-mose. There are many attempts at identifying Moses with Egyptian figures bearing the element –mose in their names, none of them convincing. Sigmund Freud made Moses a follower of Akhenaten, the heretic king, who after that king’s death emigrated from Egypt to Canaan and took the Hebrews along, because Akhenaten’s monotheistic cult of the Sun (Aten) was persecuted and abolished in Egypt. Some even identify Moses with Akhenaten. All this is pure fancy. There is not the least link between Akhenaten’s monotheism, which is just a new cosmology deriving all life and existence from the sun, and the religion founded and proclaimed by Moses, which has nothing to do with cosmology but is based on the political idea of covenant, an alliance between God and his people. The ideas of revelation, covenant, and faith have no counterparts in Egypt or in any other ancient religion.

In your book you characterize the new religion as a “monotheism of loyalty,” based on the distinction between loyalty and betrayal, and distinct from a “monotheism of truth,” based on the distinction between true and false, which is also typical of the new religion. How do these two forms of monotheism go together?

In my book Moses the Egyptian (1997), I defined Biblical monotheism as based on the distinction between true and false, which I dubbed the “Mosaic Distinction” and described as an innovation that “secondary religions” introduced into the ancient world, where this distinction between true God and false gods, true religion and false religions, was totally alien. After a close reading of the Torah I realized that this distinction only occurs with the later prophets (Deutero-Isaiah, Jeremiah, Ezekiel, and others), whereas the Torah, i.e. the books that are truly “Mosaic,” is about the distinction between loyalty and betrayal. This distinction is linked to the concept of a covenant between a “jealous” God, the liberator from Egyptian slavery who requires absolute fidelity, and his Chosen People, who must constantly be admonished not to “murmur” and not to turn to other gods. The ideas of covenant, loyalty, and faith always remain, even in Christianity and Islam, the cantus firmus in the polyphony of the sacred scriptures, and they merge perfectly well with the idea of the One true God, the creator of heaven and earth, which is to be found in the prophetic scriptures. The first, particularist distinction concerns the chosen people, whose gratitude and loyalty their liberator requires; the second, universalist distinction concerns the idea of God the creator, who cares for all human beings and all life on earth.

In some of your previous books you posited a connection between monotheism and violence as implied in the distinction between true and false in religion. Does this connection appear in a different light when the issue is not truth but loyalty?

All “secondary religions” are intolerant, because they arise in opposition to the primary religions before and around them. The “monotheism of truth,” therefore, is incompatible with religions excluded as “false.” This is a matter of logic and cognition. The “monotheism of loyalty,” on the other hand, based on the distinction between loyalty and betrayal, implies a form of violence that is mainly directed against members of one’s own group who are viewed as apostates or transgressors, as is shown by the “primal scene” of this form of violence, the scene of the Golden Calf. It is this form of violence with which we are mostly confronted today. It is only directed against outsiders if the distinction between inner and outer, apostates and strangers, is blurred and all human beings are requested to enter the covenant and obey its laws, as is the case with certain radical Islamist and evangelical groups.

Then intolerance and violence are necessary implications of the Exodus tradition?

Nothing could be more alien to the theology of Exodus. The distinction between Israel and the “nations” (goyîm) that is drawn here has no violent and antagonistic implications. That the nations observe other laws and worship other gods is perfectly in order, because they are not called into the covenant. The only exception is made for the “Canaanites,” the indigenous population of the Promised Land, who must be expelled and exterminated and who are obviously none other than those Hebrews who do not live according to the laws of the covenant. These “Canaanites,” however, are but a symbol for the “primary religion” that Second Temple Judaism, especially the puritan radicalism of the Deuteronomic tradition, is opposing. We must not forget, however, that it is not hatred and violence, but love that forms the center of the idea of covenant. The leading metaphor of the covenant is matrimonial love, and the “megillah” (scroll) that is read during the feast of Passover is the Song of Songs, a collection of fervent love songs. God’s “jealousy” is part of his love.

Jan Assmann is honorary professor of cultural studies at the University of Konstanz and professor emeritus of Egyptology at the University of Heidelberg, where he taught for nearly three decades. He is the author of many books on ancient history and religion, including From Akhenaten to Moses, Cultural Memory and Early Civilization, and Moses the Egyptian.

Observing Passover throughout history: A History of Judaism

This week, Jews all over the world are celebrating Passover, commemorating the exodus of the Israelites from enslavement in Egypt. What is the history of this ancient festival, and how has it been observed over the centuries? Martin Goodman’s A History of Judaism, a sweeping history of the religion over more than three millennia, includes fascinating glimpses of how Passover has evolved through the various strains, sects, and traditions of Judaism.

While the Second Temple stood, Passover (or Pesach in Hebrew) was one of three annual pilgrimage festivals. Every adult Jewish male was obligated to journey to the Temple for the festival. On the first night of Pesach, men, women, and children enjoyed a huge barbecue of roasted lamb along with a narration of the exodus story. For the following seven days, they abstained from leavened foods. Jews who couldn’t make it to the Temple ate roasted lamb and retold the exodus story at home. In the late fifth century BCE, the Jews of Elephantine, on the island of Yeb in the Nile river, received the following instructions in a letter from Jerusalem:

  • On the 14th day of the month of Nisan, observe the Passover at twilight.
  • Observe the Festival of Unleavened Bread from the 15th of Nisan to the 21st of Nisan, eating only unleavened bread for these seven days.
  • Do not work on the 15th or 21st of Nisan.
  • Do not drink any fermented beverages during this period.
  • Remove and seal up any leavened products, which must not be seen in the house from sunset on the 14th of Nisan until sunset on the 21st of Nisan.

-paraphrased from B. Porten, Archives from Elephantine: The Life of an Ancient Jewish Military Colony (Berkeley, 1968), 128-33

Over two thousand years after the Elephantine Jews received their instructions from Jerusalem, rabbis and students were still discussing the exact meaning of the festival’s proscriptions. In this passage, Aryeh Leib b. Asher Gunzberg, a Lithuanian rabbi who died in 1785, weighs in on a disagreement between the Talmud commentaries of Rashi and those of the tosafists, medieval commentators writing after Rashi:

“The Talmud says that the search for and removal of leavened matter on the eve of the Passover is merely a rabbinical prescription; for it is sufficient, according to the commands of the Torah, if merely in words or in thought the owner declares it to be destroyed and equal to the dust. Rashi says that the fact that such a declaration of the owner is sufficient is derived from an expression in Scripture. The tosafot, however, claim that this cannot be derived from the particular expression in Scripture, since the word there means ‘to remove’ and not ‘to declare destroyed’. The mere declaration that it is destroyed is sufficient for the reason that thereby the owner gives up his rights of ownership, and the leavened matter is regarded as having no owner, and as food for which no one is responsible, since at Passover only one’s own leavened food may not be kept, while that of strangers may be kept. Although the formula which is sufficient to declare the leavened matter as destroyed is not sufficient to declare one’s property as having no owner, yet, as R. Nissim Gerondi, adopting the view of the tosafot, explains, the right of ownership which one has in leavened matter on the eve of Passover, even in the forenoon, is a very slight one; for, beginning with noon, such food may not be enjoyed; hence all rights of ownership become illusory, and, in view of such slight right of ownership, a mere mental renunciation of this right suffices in order that the leavened matter be considered as without an owner. R. Aryeh Leib attempts to prove the correctness of this tosafistic opinion as elaborated by R. Nissim, and to prove at the same time the incorrectness of Rashi’s view, from a later talmudic passage which says that from the hour of noon of the eve [of Passover] to the conclusion of the feast the mere declaration of destruction does not free a person from the responsibility of having leavened matter in the house; for since he is absolutely forbidden to enjoy it, he has no claim to the ownership, which he renounces by such a declaration.”

-Excerpted and adapted from the article on pilpul by Alexander Kisch in I. Singer, ed., The Jewish Encyclopaedia, 12 vols. (New York, 1901-6), 10:42

More pragmatic concerns were also on the agenda for nineteenth-century thinkers. In a discussion unimaginable to their Second Temple forebears, Solomon Kluger of Brody and Joseph Saul Nathansohn of Lemberg clashed in 1859 over whether matzo-making machines were allowable. Even today, handmade is often preferred to machine-made matzo.

The millennia of discussion over Passover and its observance are reflected – and predicted – by this timeless story from the Mishnah:

“It is related of Rabbi Eliezer, Rabbi Joshua, Rabbi Elazar ben Azariah, Rabbi Akiva, and Rabbi Tarfon that they once met for the Seder in Bnei Brak and spoke about the Exodus from Egypt all night long, until their disciples came and said to them: ‘Masters! The time has come to say the morning Shema!’”

-Ch. Raphael, A Feast of History (London, 1972), 28 [229]

Forget speaking about the exodus all night long – we could speak about speaking about the exodus all night long! To learn more about the diversity of practices and opinions in Judaism through the ages, check out Martin Goodman’s A History of Judaism.

Matthew Salganik: Invisibilia, the Fragile Families Challenge, and Bit by Bit


This week’s episode of Invisibilia featured my research on the Fragile Families Challenge. The Challenge is a scientific mass collaboration that combines predictive modeling, causal inference, and in-depth interviews to yield insights that can improve the lives of disadvantaged children in the United States. Like many research projects, the Fragile Families Challenge emerged from a complex mix of inspirations. But, for me personally, a big part of the Fragile Families Challenge grew out of writing my new book Bit by Bit: Social Research in the Digital Age. In this post, I’ll describe how Bit by Bit helped give birth to the Fragile Families Challenge.

Bit by Bit is about social research in the age of big data. It is for social scientists who want to do more data science, data scientists who want to do more social science, and anyone interested in the combination of these two fields. Rather than being organized around specific data sources or machine learning methods, Bit by Bit progresses through four broad research designs: observing behavior, asking questions, running experiments, and creating mass collaboration. Each of these approaches requires a different relationship between researchers and participants, and each enables us to learn different things.

As I was working on Bit by Bit, many people seemed genuinely excited about most of the book—except the chapter on mass collaboration. When I talked about this chapter with colleagues and friends, I was often greeted with skepticism (or worse). Many of them felt that mass collaboration simply had no place in social research. In fact, at my book manuscript workshop—which was made up of people whom I deeply respected—the general consensus seemed to be that I should drop this chapter from Bit by Bit. But I felt strongly that it should be included, in part because it enabled researchers to do new and different kinds of things. The more time I spent defending the idea of mass collaboration for social research, the more I became convinced that it was really interesting, important, and exciting. So, once I finished up the manuscript for Bit by Bit, I set my sights on designing the mass collaboration that became the Fragile Families Challenge.

The Fragile Families Challenge, described in more detail at the project website and blog, should be seen as part of the larger landscape of mass collaboration research. Perhaps the most well known example of a mass collaboration solving a big intellectual problem is Wikipedia, where a mass collaboration of volunteers created a fantastic encyclopedia that is available to everyone.

Collaboration in research is nothing new, of course. What is new, however, is that the digital age enables collaboration with a much larger and more diverse set of people: the billions of people around the world with Internet access. I expect that these new mass collaborations will yield amazing results not just because of the number of people involved but also because of their diverse skills and perspectives. How can we incorporate everyone with an Internet connection into our research process? What could you do with 100 research assistants? What about 100,000 skilled collaborators?

As I write in Bit by Bit, I think it is helpful to roughly distinguish between three types of mass collaboration projects: human computation, open call, and distributed data collection. Human computation projects are ideally suited for easy-task-big-scale problems, such as labeling a million images. These are projects that in the past might have been performed by undergraduate research assistants. Contributions to human computation projects don’t require specialized skills, and the final output is typically an average of all of the contributions. A classic example of a human computation project is Galaxy Zoo, where a hundred thousand volunteers helped astronomers classify a million galaxies. Open call projects, on the other hand, are more suited for problems where you are looking for novel answers to clearly formulated questions. In the past, these are projects that might have involved asking colleagues. Contributions to open call projects come from people who may have specialized skills, and the final output is usually the best contribution. A classic example of an open call is the Netflix Prize, where thousands of scientists and hackers worked to develop new algorithms to predict customers’ ratings of movies. Finally, distributed data collection projects are ideally suited for large-scale data collection. These are projects that in the past might have been performed by undergraduate research assistants or survey research companies. Contributions to distributed data collection projects typically come from people who have access to locations that researchers do not, and the final product is a simple collection of the contributions. A classic example of a distributed data collection is eBird, in which hundreds of thousands of volunteers contribute reports about birds they see.

Given this way of organizing things, you can think of the Fragile Families Challenge as an open call project, and when designing the Challenge, I drew inspiration from the other open call projects that I wrote about, such as the Netflix Prize, Foldit, and Peer-to-Patent.

If you’d like to learn more about how mass collaboration can be used in social research, I’d recommend reading Chapter 5 of Bit by Bit or watching this talk I gave at Stanford in the Human-Computer Interaction Seminar. If you’d like to learn more about the Fragile Families Challenge, which is ongoing, I’d recommend our project website and blog.  Finally, if you are interested in social science in the age of big data, I’d recommend reading all of Bit by Bit: Social Research in the Digital Age.

Matthew J. Salganik is professor of sociology at Princeton University, where he is also affiliated with the Center for Information Technology Policy and the Center for Statistics and Machine Learning. His research has been funded by Microsoft, Facebook, and Google, and has been featured on NPR and in such publications as the New Yorker, the New York Times, and the Wall Street Journal.

Dr. John C. Hulsman: Will the US ever escape the Losing Gambler Syndrome in Afghanistan?

The Losing Gambler Syndrome is a fact of the human condition that casino magnates have come to well understand. When someone loses big at the tables, almost always they have an overwhelming urge to invest ever more resources to make good on their catastrophic losses, rarely bothering to think about the reasons for these losses in the first place. Dad cannot go back to Mom telling her he has lost the kids’ college fund at the roulette table, so he keeps playing . . . and keeps losing. The reason for his demise—the terrible odds—is never analytically addressed.

Policymakers are not immune to this folly, often doubling down on a bad assessment emotionally in order to wipe the slate clean of their intellectual mistakes. I saw this doleful analytical process up close and personal in Washington as the Iraq War slid toward the abyss; very often those policymakers urging ever-greater efforts in Iraq from the American people did so largely to make good on their already monumental strategic losses.

History’s graveyard is replete with losing gamblers

Anyone who has ever walked the mile and a half in that beautiful, tragic open field between Seminary and Cemetery Ridges at Gettysburg knows that the Confederate assault on the third day of the battle should never have been made. The simple reason for Pickett’s disastrous charge is that Robert E. Lee had emotionally invested too much at Gettysburg to easily turn back. The famed Confederate general was both desperate and overconfident, a fatal combination. Lee was held intellectual hostage by his tantalizing near success (and actual failure) on the second day of the battle, becoming an unwitting prisoner of the Losing Gambler Syndrome.

Likewise, as the years rolled by without the United States ever finding a political ally in South Vietnam with local political legitimacy, it never seems to have occurred to Lyndon Johnson that the lack of such a partner was a sure sign to get out, not to redouble his efforts.

When will they ever learn?

Tragically, the losing gambler’s curse continues today, with America’s seemingly endless war in Afghanistan being a textbook example. Within a few months of 9/11, American-led forces had routed the Taliban and dislodged al-Qaeda from its bases. But then the war goals fatefully shifted. To prevent al-Qaeda’s resurgence, the US ended up endlessly propping up weak, corrupt, unrepresentative governments in Kabul.

As these governments did not have sufficient organic political legitimacy, the US found itself mired in an unwinnable situation, as without Taliban involvement in the central government (the Taliban represent almost exclusively the interests of the Pashtun, the largest single tribe in the country) any local rule was bound to be seen as inherently unrepresentative. This political reality is at the base of the 16-year unwinnable war in Afghanistan.

Doubling down yet again

Yet President Trump’s ‘new plan’ (there have been an endless number of these over the past decade and a half) does nothing to deal with this central political conundrum. Despite saying during his campaign that the war in Afghanistan had been ‘a total disaster,’ the President was persuaded by his respected Secretary of Defense, James Mattis, and his National Security Adviser, H.R. McMaster, to increase American troop levels in-country to 16,000, ignoring the fact that during the Obama administration 100,000 American soldiers had been fighting there, all to no avail.

I suspect a key reason for this strange decision is that both Generals Mattis and McMaster served with distinction in Afghanistan. Like Lee, President Johnson, and the neo-conservatives huddled around George W. Bush, both have invested too much emotionally and practically to turn back, whatever the fearful odds.

So an unwinnable war is set to continue, as the unsolvable political reality at its base goes unremarked upon. The Losing Gambler Syndrome tells us that once resources and intellectual credibility have been expended, it is all too tempting, whether met with crisis or entranced by near-success, to keep doing what has been failing up until that point. It is entirely understandable to do this, but as Gettysburg, Vietnam, and Iraq show, practically disastrous. Policymakers must instead have the courage to look failure straight in the eye and make adjustments to mitigate its effects, rather than doubling down and inviting more.

Dr. John C. Hulsman is the president and cofounder of John C. Hulsman Enterprises, a successful global political risk consulting firm. For three years, Hulsman was the Senior Columnist for City AM, the newspaper of the City of London. Hulsman is a Life Member of the Council on Foreign Relations, the preeminent foreign policy organization. The author of all or part of 14 books, Hulsman has given over 1520 interviews, written over 650 articles, prepared over 1290 briefings, and delivered more than 510 speeches on foreign policy around the world. His most recent work is To Dare More Boldly: The Audacious Story of Political Risk.

Keith Whittington: The kids are alright

It has rapidly become a common trope that the current crop of college students belongs to a generation of “snowflakes.” Unusually sensitive, unusually intolerant, the kids these days are seen by some as a worrying threat to the future of America’s liberal democracy. High-profile incidents on college campuses like the shouting down of Charles Murray at Middlebury College and the rioting in the streets of Berkeley during an appearance by Milo Yiannopoulos give vivid support to the meme. Some surveys of the attitudes of millennials about tolerance and free speech lend further credence to the snowflake characterization. When the Knight Foundation and Gallup asked college students whether diversity or free speech was more important, a slim majority chose diversity. When a Brookings Institution fellow asked college students whether it was acceptable to use force to silence a speaker making “hurtful” statements, a surprisingly large number said yes.

Should we be worried about the children? Perhaps not. Context matters, and some of the current hand-wringing over events on college campuses has tended to ignore the broader context. In particular, when told that the current generation of students do not seem fully supportive of free speech and tolerance of disagreement, we are rarely told in comparison to what. Compared to a perfect ideal of American values, the current generation of students might fall somewhat short—but so do the generations that preceded them. We aspire to realize our beliefs in tolerance and liberty, but we muddle through without a perfect commitment to our civil libertarian aspirations.

It would be a mistake to be overly complacent about American public support for civil liberties, including free speech, but we should also be cautious about rushing into excessive pessimism about the current generation of college students. It has been a routine finding in the public opinion literature going back decades that Americans express high levels of support for the freedom of speech in the abstract, but when asked about particular forms of controversial speech that support begins to melt away. In the middle of the twentieth century, for example, one study found that more than three-quarters of a sample of lawyers thought that university students should have the freedom to invite controversial speakers to campus, but less than half of the general public agreed. When asked if the government should be allowed to suppress speech that might incite an audience to violence, less than a fifth of the leaders of the American Civil Liberties Union said yes, but more than a third of the members of the ACLU were ok with it. In the 1950s, Americans said they supported free speech, but they also said the speech of Communists should be restricted. In the 1970s, Americans said they supported free speech, but they also said the speech of racists should be restricted. In the 2000s, Americans said they supported free speech, but they also said the speech of Muslims and atheists should be restricted.

Current American college students say that speakers with whom they strongly disagree should be allowed to speak on campus. But a majority of liberal college students change their minds when they are told that such a speaker might be racist, and more than a third of conservative college students change their minds when they are told that such a speaker might be “anti-American.” Fortunately, the evidence suggests that only a tiny minority of college students favor activists taking steps to disrupt speaking events on campus. Those numbers are not ideal, but it is important to bear in mind that the college-educated tend to be more tolerant of disagreeable speakers and ideas than is the general public, and that is pretty much as true now as it has been in the past. Public support for the freedom of speech has not always stood firm, and campus debates over the scope of free speech are likely to have large consequences for how Americans think about these issues in the future.

We should draw some lessons from recent events and surveys, but the lesson should not be that current students are delicate snowflakes. First, we should recognize that the current generation of college students is not unique. They have their own distinctive concerns, interests, and experiences, but they are not dramatically less tolerant than those who came before them. Second, we should appreciate that tolerance of disagreement is something we as a country have to constantly strive for and not something that we can simply take for granted. It is easy to support freedom for others in the abstract, but it is often much more difficult to do so in the midst of particular controversies. The current group of college-age Americans struggles with that tension just as other Americans do and have before. Third, we should note that there is a vocal minority on and off college campuses who do in fact question liberal values of tolerance and free speech. They do so not because they are snowflakes but because they hold ideological commitments at odds with values that are deeply rooted in the American creed. Rather than magnifying their importance by making them the avatar of this generation, those who care about our democratic constitutional commitments should work to isolate them and show why theirs is not the best path forward and why diversity, tolerance, and free speech are compatible and mutually reinforcing values, not contrasting alternatives. It is an ongoing project we hold in common to understand and reaffirm the principles of free speech that underlie our political system. Today’s college students are not the only ones who could benefit from that lesson.

Keith E. Whittington is the William Nelson Cromwell Professor of Politics at Princeton University and a leading authority on American constitutional theory and law. He is the author of Speak Freely: Why Universities Must Defend Free Speech.

Everything to play for: Winston Churchill, the rise of Asia, and game changers

By Dr. John C. Hulsman

To know when game-changing events are actually happening in real time is to see history moving. It is an invaluable commandment in mastering political risk analysis. To do so, an analyst must adopt an almost Olympian view, seeing beyond the immediate to make sense of what is going on now by placing it into the broader tapestry of world history itself.

The rewards for this rare but necessary ability are legion, for it allows the policy-maker or analyst to make real sense of the present, assessing the true context of what is going on now and what is likely to happen in the future. It is jarring to compare the lacklustre abilities of today’s Western politicians—so far behind the curve in seeing the game-changing rise of Asia and the decline of the West as we enter a new multipolar age—to the phenomenal analytical abilities of earlier statesmen of vision, such as the querulous, needy, challenging, maddening, often wrongheaded but overwhelmingly talented greatest Prime Minister of Britain.

Churchill Rejoices over Pearl Harbor

In the hustle and bustle of the everyday world, recognizing game-changing events can prove exceedingly difficult. Being surrounded by monumental goings-on makes separating the very important from the essential almost impossible. So it was in December 1941, undoubtedly the turning point of the Second World War. During that momentous month, the Red Army turned back the Nazi invasion at the very gates of Moscow, marking the first time Hitler’s war machine had met with a real setback. But for all that the Battle of Moscow mattered enormously, it did nothing to change the overall balance of forces fighting the war, with the outcome still sitting on a knife’s edge.

But half a world away, something else did. At 7:48 AM in Hawaii, on December 7, 1941, the Imperial Navy of the Empire of Japan, attacking without warning as it had done in the earlier Russo-Japanese War, unleashed itself against the American Pacific Fleet, serenely docked at Pearl Harbor that Sunday morning. The damage was immense. All eight American battleships docked at Pearl were struck, and four of them were sunk. The Japanese attack destroyed 188 US aircraft, while 2,400 Americans were killed and 1,200 wounded. Japanese losses were negligible.

The Japanese attack on Pearl Harbor misfired spectacularly, changing the course of the war fundamentally, drawing America into the conflict as the decisive force which altered the correlation of power around the world. Stalin, with his back still to the wall in the snows of Russia, did not immediately grasp the game-changing significance of what had just happened any more than Franklin Roosevelt did, now grimly intent on surveying the wreckage of America’s Pacific Fleet and marshalling the American public for global war.

These were pressing times and it is entirely human and understandable that both Stalin and FDR had other more immediate concerns to worry about during those early December days. But Winston Churchill, the last of the Big Three, immediately latched onto the game-changing significance of what had just occurred. For the Prime Minister understood, even in the chaos of that moment, that the misguided Japanese attack had just won Britain and its allies the war and amounted to the game changer a hard-pressed London had been praying for.

In his history of World War II, Churchill wrote of that seminal day, ‘Being saturated and satiated with emotion and sensation, I went to bed and slept the sleep of the saved and thankful.’ The great British Prime Minister slept well that night because he understood the fluidity of geopolitics, how a single event can change the overall global balance of power overnight, if one can but see.

On December 11, 1941, compounding Tokyo’s incredible blunder, Germany suicidally declared war on America. Hitler, vastly underestimating the endless productive capacity of the United States, didn’t think the declaration mattered all that much. The miscalculation was to prove his doom, as the US largely bankrolled both its Russian and British allies, supplying them with both massive loans and a limitless supply of armaments and material. Because of Pearl Harbor and Hitler’s disastrous decision, America would eventually eradicate the dark night of Nazi barbarism. Churchill was right in seeing the full consequences of what was going on at that pivotal time. December 1941 saved the world.

The decline of the West and the rise of Asia is the headline of our times

In the crush of our 24-hour news cycle, it is all too easy—as it was during the stirring days of World War II—to miss the analytical forest for the trees. Confusing the interesting with the pivotal, the fascinating with the essential, remains an occupational hazard for both policy-makers and political risk analysts. But beneath the sensory overload of constant news, the headline of our own time is clear if, like Churchill, we can but see.

Our age is one where the world is moving from the easy dominance of America’s unipolar moment to a multipolar world of many powers. It is characterized by the end of 500-plus years of western dominance, as Asia (especially with the rise of China and then India) is where most of the world’s future growth will come from, as well as a great deal of its future political risk. The days of International Relations being largely centered on Transatlantic Relations are well and truly at an end, as an economically sclerotic and demographically crippled Europe recedes as a power, and even the United States (still by far the most powerful country in the world) sinks into relative decline.

To understand the world of the future requires a knowledge of Asia as well as Europe, of macroeconomics as well as military strategy, of countries the West has given precious little thought to, such as China, India, Indonesia, Turkey, Argentina, Brazil, South Africa, Saudi Arabia, and Mexico, as well as the usual suspects such as a declining Russia and Europe. International Relations has become truly ‘international’ again. And that, coupled with the decline of the West and the rise of Asia, is the undoubted headline of the age. Churchill, and all first-rate analysts who understand the absolute value of perceiving game-changing events, would surely have agreed.

Dr. John C. Hulsman is the President and Co-Founder of John C. Hulsman Enterprises, a prominent global political risk consulting firm. For three years, Hulsman was the Senior Columnist for City AM, the newspaper of the City of London. Hulsman is a Life Member of the Council on Foreign Relations, the pre-eminent foreign policy organisation. The author of all or part of 14 books, Hulsman has given over 1520 interviews, written over 650 articles, prepared over 1290 briefings, and delivered more than 510 speeches on foreign policy around the world. His most recent work is To Dare More Boldly: The Audacious Story of Political Risk.

Julian Zelizer on The Presidency of Barack Obama

Barack Obama’s election as the first African American president seemed to usher in a new era, and he took office in 2009 with great expectations. But by his second term, Republicans controlled Congress, and, after the 2016 presidential election, Obama’s legacy and the health of the Democratic Party itself appeared in doubt. In The Presidency of Barack Obama, Julian Zelizer gathers leading American historians to put President Obama and his administration into political and historical context. Engaging and deeply informed, The Presidency of Barack Obama is a must-read for anyone who wants to better understand Obama and the uncertain aftermath of his presidency.

What was your vision for this book? What kind of story are you trying to tell?

My goal with this book is to provide an original account of Barack Obama that places his presidency in broader historical context. Rather than grading or ranking the president, my hope is to bring together the nation’s finest historians to analyze the key issues of his presidency and connect them to a longer trajectory of political history. Some of the issues that we examined had to do with health care, inequality, partisan polarization, energy, international relations, and race.

How did you approach compiling the essays that make up this book? What criteria did you use when choosing contributors?

The key criterion was to find historians who are comfortable writing for the general public and who are interested in the presidency—without necessarily thinking of the president as the center of their analysis. I wanted smart historians who could figure out how to connect the presidency to other elements of society—ranging from the news media to race relations to national security institutions.

What do you see as the future of Obama’s legacy?

Legacies change over time. There will be more appreciation of aspects of his presidency that are today considered less significant, but which in time will be understood to have a big impact. Our authors, for instance, reveal some of the policy accomplishments in areas like the environment and the economy that were underappreciated during the time he was in the White House.  In other ways, we will see how some parts of the presidency that at the time were considered “transformative” or “path-breaking”—such as his policies on counterterrorism—were in fact extensions and continuations of political developments from the era.

How did the political landscape of the country change during Obama’s tenure?

While we gained many new government programs, from climate change policy to the ACA to the Iran Nuclear Deal, we also saw the hardening of a new generation of conservatives who were more rightward in their policies and more aggressive, if not ruthless, in their political practices. Some of his biggest victories, such as the Affordable Care Act, pushed the Republican Party even further to the right and inspired it to be even more radical in its approach to legislative obstruction.

What do you hope readers will take away from reading this book?

I hope that they will have a better sense of where this presidency came from, some of the accomplishments that we saw during these eight years, and some of the ways that Obama was limited by the political and institutional context within which he governed. I want readers to get outside the world of journalistic accounts and come away understanding how Obama’s presidency was a product of the post-1960s era in political history.

Julian E. Zelizer is the Malcolm Stevenson Forbes, Class of 1941 Professor of History and Public Affairs at Princeton University and a CNN Political Analyst. He is the author and editor of eighteen books on American political history, has written hundreds of op-eds, and appears regularly on television as a news commentator.

Michael Brenner explains why a Jewish State is “not like any other state”

Is Israel a state like any other or is it unique? As Michael Brenner argues in In Search of Israel, the Zionists attempted to put an end to the millennia-old history of the Jews as the archetypical “other” by creating a Jewish state that would be just like any other state, but today, Israel is regarded as anything but a “normal” state. Instead of overcoming the Jewish fate of otherness, Israel has in fact become the “Jew among the nations.” Israel ranks as 148th of the 196 independent states in terms of geographical area, and as 97th in terms of population, which places it somewhere between Belize and Djibouti. However, the international attention it attracts is exponentially greater than that of either. Considering only the volume of media attention it attracts, one might reasonably assume that the Jewish state is in the same league as the United States, Russia, and China. In the United States, Israel has figured more prominently over the last three decades than almost any other country in foreign policy debates; in polls across Europe, Israel is considered to be the greatest danger to world peace; and in Islamic societies it has become routine to burn Israeli flags and argue for Israel’s demise. No other country has been the target of as many UN resolutions as Israel. At the same time, many people around the world credit Israel with a unique role in the future course of world history. Evangelical Christians regard the Jewish state as a major player in their eschatological model of the world. Their convictions have influenced US policies in the Middle East and the opinions of some political leaders in other parts of the world.

Why does Israel attract so much attention?

The answer lies in history. Many people call Israel “the holy land” for a reason: it is here that the origins of their religions were shaped. The Jewish people too are regarded as special: they played a crucial role in the theological framework of the world’s dominant religions. In Christianity and in Islam, Jews were both seen as a people especially close to God and at the same time uniquely rejected by God. While over the last two hundred years these ideas have become secularized, many stereotypes have remained. That the Jews became victims of the most systematic genocide in modern history lent them yet another mark of uniqueness. After two thousand years in exile, the fact that Jews returned to their ancient homeland to build a sovereign state again surrounded the people and place with additional mystique.

Did the Zionists view themselves as unique?

The irony is that the Zionist movement was established at the end of the 19th century precisely in order to overcome this mark of difference and uniqueness. Many Zionists claimed that they just wanted to be like anyone else. Chaim Weizmann, longtime leader of the Zionist movement and Israel’s first president, was quoted as saying: “We just want to be another Albania,” meaning a small state that nobody really cares about. Even Israel’s founding document, the declaration of independence, says that Israel has the right to be “like all other nations.” But at the same time the notion of being different, perhaps being special, was internalized by Zionists as well. Many of the movement’s leaders argued that a Jewish state has a special responsibility. Even the most secular among them regarded Israel’s serving as “a light unto the nations” as a crucial part of a prophetic tradition.

Does this mean that Zionism was a religious movement?

Not at all. Most of its early leaders were strictly secular. Theodor Herzl, the founder of Zionism, knew no Hebrew and in fact very little about Jewish traditions. But he wanted to establish a model state for humanity, and saw the formation of Israel as an example for the liberation of African-Americans. Long before any other state granted voting rights to women, he let women be active participants in the Zionist congresses. He drew a flag for the future Jewish state that had seven stars, symbolizing a seven-hour workday for everyone. David Ben-Gurion, the first prime minister of Israel, was a Socialist and rejected organized religion. But just like Herzl, he believed in the mission of a model state that could spread the prophetic ideals of universal peace and equality among the nations.

Why then is Israel seen by many today not as a model state but as a pariah state?

Herzl discussed other potential destinations, such as Argentina and British East Africa, as a refuge for persecuted European Jews. But the only place Jews had an emotional connection with was the territory they had originated from. Over centuries, Jews prayed for their return to the land of Israel. But it was not an empty land. The Arab Palestinians soon developed their own ideas of nationhood and rejected the growing Jewish immigration. In the meantime, antisemitism increased in Europe and other countries closed their doors to Jewish refugees. The establishment of the State of Israel in 1948 came too late to save the lives of millions of Jews who perished in the Holocaust. But by then, most of the world recognized the Jews’ right to their own state in their ancient homeland, as reflected in the 1947 UN partition of Palestine into a Jewish and an Arab state. Yet the Arab world did not see why it should pay the price for the sins of the Europeans. The situation reflected the parable of a person (the Jews) jumping out of the window of a burning house (Europe) and hitting another person (the Palestinians) on the street in order to save his own life. The ongoing conflict of two peoples over the same land, combined with the special significance of this land in the eyes of the world, led to a situation where even outsiders have strong opinions. For Evangelical Christians, Israel fulfills a divine mission, while for others, especially in the Arab world, Israel is regarded as a foreign intruder in the tradition of the medieval Crusaders and modern Imperialists.

So, can Israel one day become just a “normal state?”

To begin with, let me qualify this question. The idea of a “normal state” is a fiction altogether. Every state sees itself as special. But it is true that some states receive more attention from the rest of the world than others. Can Israel just be another Albania in the eyes of the world, or be relegated in our attention to its place among the nations between Djibouti and Belize? I do not believe so. The history of Jerusalem is different from that of Tirana (Albania’s capital), and the Jews have attracted so much more attention than nations of comparable size. Thus, Israel will most likely always remain in the limelight of media attention. However, let us not forget: The people in Israel live their everyday lives just like everywhere else. They worry about their jobs and about their sports teams, they want their children to be safe and successful in school, and they dream of a peaceful future. In this deeply personal sense, Israel has become a state just like any other.

Michael Brenner is the Seymour and Lilian Abensohn Chair in Israel Studies and director of the Center for Israel Studies at American University and Professor of Jewish History and Culture at Ludwig Maximilian University in Munich. His many books include A Short History of the Jews.

Gaming out chess players: The Italian Renaissance and Vladimir Putin

By Dr. John C. Hulsman

If learning the precious truth that we can be the danger (see my Gibbon column of last week) is the first commandment of political risk analysis, gaming out chess players is surely another. Chess players—foreign policy actors playing the long game, possessing fixed, long-term strategic goals even as they use whatever tactical means come to hand to achieve them—are rare birds indeed. Patient, low-key, but implacable, chess players do that rarest of things: they actually think ahead and are not prisoners of short-term day-to-day events, instead conditioning all that they do in furtherance of their long-term strategy.

Chess players manage to cloak their dogged, disciplined strategies, hiding them in plain sight amid our frenetic 24-hour news cycle, from a world that does not generally follow such fixed principles and cannot really conceive of how others might be able to hold to a clear strategic line. In a world of tacticians, it is easy for a strategist to remain concealed.

Pope Julius II as the true hero of The Prince

Following on from the Crusades, the western world entered a period of cultural and political regeneration we now call the Renaissance. As is true for most eras, it was more politically chaotic, brutal, and bloody than it seems in retrospect. In the confusing, uncertain milieu of early-sixteenth century Italy, a man arose who fit the tenor of his times.

Pope Julius II has been shamefully underrated by history, as his contemporary Niccolo Machiavelli—the author of The Prince, the bible of modern realpolitik—instead lionized failed Bond villain Cesare Borgia rather than the more successful pope. However, we have five centuries of distance from the swirling events of the Renaissance, allowing us to take up the more dispassionate, chess-playing view that Machiavelli urges on us. So let us here re-write the ending of The Prince, this time using Julius II as the proper analytical hero of the piece.

Julius was born Giuliano Della Rovere around 1443. Like Cesare Borgia, his path to power was speeded along by close familial contacts to the papacy. Della Rovere was the much-loved nephew of Pope Sixtus IV, becoming his uncle’s de facto prime minister. Following on from the death of Sixtus, Della Rovere assumed that he would succeed him. However, he was beaten out by Cardinal Rodrigo Borgia, Cesare’s father, who assumed the title of Pope Alexander VI. So Della Rovere, in good chess player fashion, tried to undercut Alexander, knowing his time was coming.

When Alexander VI died in 1503 (and with the lightning-quick demise of his successor, Pope Pius III, after just 26 days), Della Rovere at last made his long-considered move. He deceived the supposedly worldly Cesare and ran rings around him diplomatically, securing the papal throne by means of bribery, both in terms of money and future promises. With Cesare throwing the powerful Borgia family's crucial support behind him, the new papal conclave was one of the shortest in history, with Della Rovere winning on only the second ballot, taking all but two cardinals' votes. He ascended to the papal throne at the end of 1503.

Now that Cesare had outlived his usefulness, Julius withdrew his promised political support from him in true Machiavellian fashion, seeing to it that the Borgias found it impossible to retain their political control over the papal states of central Italy. Julius rightly reasoned that to fail to eradicate the Borgia principality would have left the Vatican surrounded by Borgia possessions and at Cesare’s very limited mercy.

Without papal support—the critical backing his father Alexander VI had provided—Cesare's rule on his own lasted merely a matter of months, with his lands reverting to Julius and the papacy itself. Julius had run rings around Machiavelli's hero, fulfilling the chess-playing maxim that securing one's political position leads to political stability and long-term rule. That, Niccolo, is what a real chess player looks like.

Making sense of Putin

However, chess players are not just relics of the byzantine Renaissance age. Russian President Vladimir Putin is a perfect modern-day example of a chess player, as all the many devious tactics he pursues ultimately amount to a very single-minded effort to restore Russian greatness, often by blunting the West's drives into what he sees as Russia's traditional sphere of influence in the countries surrounding it. In other words, the Russian strongman resembles another chess player, former French President Charles de Gaulle, in his single-minded efforts to restore pride and great power status to his humiliated country.

As such, Putin's many gambits (theatrically opposing the US despite having a puny, corrupt economy the size of Texas's; pursuing an aggressive, adventurist policy against the pro-Western government in Ukraine; intervening to decisive effect in the horrendous Syrian war) all serve one overarching strategic goal. They are designed to make the world (and even more the Russian people) change their perceptions of Russia as a declining, corrupt, demographically challenged former superpower (which it is), and instead see it as a rejuvenated global great power, one that is back at the geo-strategic top table.

Despite all facts to the contrary (and in the end, as was true for de Gaulle's France, the facts just don't bear out the perception that Russia will again be a superpower), Putin has been very successful in (wrongly) changing global perceptions of Russia's place in the world. It is also the reason the current tsar enjoys an 80% approval rating in his own country, as he has restored pride to his formerly humiliated countrymen. By knowing what ultimately motivates the chess-playing Putin, we in the West can do a far better job of assessing the entirely explicable tactical gambits emanating from the Kremlin.

The rewards for spotting the rare chess player

Despite the difficulty in spotting them, it is well worth the time trying to game out chess players, perhaps the rarest of creatures in global politics. For once they are analytically brought to ground, the fixed, rational patterns that chess players live by mean that a true analytical understanding of them is possible, as well as a far better understanding of the world in which they live.

Dr. John C. Hulsman is the President and Co-Founder of John C. Hulsman Enterprises, a successful global political risk consulting firm. For three years, Hulsman was the Senior Columnist for City AM, the newspaper of the City of London. Hulsman is a Life Member of the Council on Foreign Relations, the pre-eminent foreign policy organization. The author of all or part of 14 books, Hulsman has given over 1520 interviews, written over 650 articles, prepared over 1290 briefings, and delivered more than 510 speeches on foreign policy around the world. His most recent work is To Dare More Boldly: The Audacious Story of Political Risk.

Robert Irwin on Ibn Khaldun: An Intellectual Biography

Ibn Khaldun (1332–1406) is generally regarded as the greatest intellectual ever to have appeared in the Arab world—a genius who ranks as one of the world's great minds. Yet the author of the Muqaddima, the most important study of history ever produced in the Islamic world, is not as well known as he should be, and his ideas are widely misunderstood. In this groundbreaking intellectual biography, Robert Irwin provides an engaging and authoritative account of Ibn Khaldun's extraordinary life, times, writings, and ideas.

Who was Ibn Khaldun?
Wali al-Din Ibn Khaldun was born in 1332 in Tunis. In his youth he was tutored by some of the finest scholars of the age before going on to occupy high offices at various North African courts and at the court of Granada in Muslim Spain. He became, among other things, a diplomat and a specialist in negotiating with the Arab and Berber tribesmen of the North African interior, and on occasion he led the tribesmen in battle. Later he moved to Cairo, where he was to occupy various senior judicial and teaching posts under the Mamluk Sultans. In 1401 he had a famous meeting with the Turco-Mongol would-be world conqueror Timur (also known as Tamerlane) outside the walls of Damascus, which was then under siege by Timur. Having escaped becoming Timur's honored captive, he returned to Egypt. In 1406 he died and was buried in a Sufi cemetery in Cairo. Despite his active career in politics, law, diplomacy, and teaching, he is chiefly famous for his great book, the Muqaddima (the translation of which is currently published in three volumes by Princeton University Press, as well as in a single-volume abridgment).

Why is Ibn Khaldun’s Muqaddima so important?
This big book asked big questions. The Muqaddima started out as a study of the laws of history and it has gone on to win great praise from modern historians. Arnold Toynbee described it as 'undoubtedly the greatest work of its kind that has ever been created by any mind in any time or place.' Hugh Trevor-Roper agreed: 'It is a wonderful experience to read those great volumes, as rich and various, as subtle, deep and formless as the Ocean, and to fish up from them ideas old and new.' The Muqaddima has attracted similar praise from philosophers, sociologists, anthropologists, economists, and Islamicists.

Ibn Khaldun began by asking how historians make mistakes in their interpretation of events and what kinds of information historians should recognize as good or bad evidence. Then he set out to understand the origins of civilization and the causes of the rise and fall of dynasties. As he continued his investigations, his book broadened out to become what was effectively an encyclopedia of Muslim society and culture.

Given his importance, there are already quite a few books on Ibn Khaldun. What is new about yours?
There are indeed so many translations of Ibn Khaldun and books about him that something like half the history of Orientalism can be deduced from the contrasting readings of the Muqaddima produced by such scholars as Silvestre de Sacy, Quatremère, Von Kremer, Monteil, Gibb, Hodgson, Hourani, and Gellner. Some of the books by my predecessors are pretty good and I owe debts to those who have gone before me. Nevertheless, many of their readings of the Muqaddima have been selective and have stressed and, I think, overstressed the logicality of Ibn Khaldun's admittedly powerful mind, and in doing so they have neglected the inconsistencies, ambiguities, and eccentricities that make the Muqaddima such a fascinating text. Mine is the first book to focus closely on the importance of the occult in Ibn Khaldun's thought and on his intense interest in methods of predicting the future. It is also the first to bring out the importance of North African ruins and the moralizing messages that he took from them. Although he was an outstanding thinker, he was also a man of his time, and there has been a tendency to underplay the North African and strictly Muslim context of the Muqaddima. I have also sought to bring out the distinctive quality of Ibn Khaldun's writing by contrasting it with famous texts by Froissart, Machiavelli, Vico, Montesquieu, Spengler, and others.

His ideas have been described as anticipating those of Montesquieu, Comte, Darwin, Marx, and Toynbee, among others. So was he a ‘modern’ thinker?
As new disciplines evolved in the West in the nineteenth and twentieth centuries, their leading scholars frequently sought to create intellectual lineages for their chosen subjects, and so Ibn Khaldun came to be hailed as 'the world's first anthropologist' or 'the first ever cultural historian' or as a 'proto-Marxist.' Though there is some justice in such tributes, the quest for relevance can be a dangerous thing, as an overemphasis on similarities may conceal or distort past ways of thinking and living. As the novelist L.P. Hartley observed, 'The past is a foreign country; they do things differently there.' Ibn Khaldun's remarkable ability to formulate general laws based on the close observation of discrete phenomena gives his thinking the delusive appearance of modernity, but he wrote in the service of fourteenth-century Islam. Moreover, there is no evidence that he influenced Montesquieu, no continuity between Ibn Khaldun's sociological formulations and those of Comte, and no indication that he anticipated Darwin's ideas about the survival of the fittest.

Why did you write this book?
It feels as though I have been living with Ibn Khaldun since I first read the Muqaddima as a student in the 1960s. So it was high time that I took a close look at the assumptions and vocabulary that underpinned his thinking. To spend so much time with a polymathic genius has been both demanding and exhilarating. But there is also something else. As already noted, his Muqaddima is encyclopedic in scope. It not only covers history and philosophy, but also religion, social studies, administrative structures and title-holding, geography, economics, literature, pedagogy, jurisprudence, magic, treasure hunting, diet, dream interpretation, and much else. So a study of his masterpiece can serve as a panoptic guide to Muslim thought and life in the Middle Ages. There is nothing to match it either in the Islamic world or in medieval Christendom.

Robert Irwin is senior research associate at the School of Oriental and African Studies in London and a former lecturer at the University of St. Andrews, Scotland. His many books include Dangerous Knowledge: Orientalism and Its Discontents and Memoirs of a Dervish: Sufis, Mystics, and the Sixties, as well as seven novels. He is a Fellow of the Royal Society of Literature.

Omnia El Shakry: Genealogies of Female Writing


Throughout Women’s History Month, join Princeton University Press as we celebrate scholarship by and about women.

by Omnia El Shakry

In the wake of the tumultuous year for women that was 2017, many female scholars have been reflecting upon their experiences in the academy, ranging from sexual harassment to the everyday experience of listening to colleagues mansplain or even intellectually demean women's work. Indeed, I can vividly recall, as a young assistant professor, hearing a senior male colleague brush off what has now become a canonical text in the field of Middle East studies as "merely" an example of gender history, with no wider relevance to the region. "Gender history" rolled off his tongue with disdain, and there was an assumption that it was distinct from real history.

Few now, however, would dare to publicly discount the role that female authors have played in the vitality of the field of Middle East studies. In recognition of this, the Middle East Studies Association of North America has inaugurated new book awards honoring the pioneering efforts of two women in the field, Nikki Keddie and Fatima Mernissi. I can still remember the first time I read Mernissi's work while an undergraduate at the American University in Cairo. Ever since my freshman year, I had enrolled in Cultural Anthropology courses with Soraya Altorki—a pioneering anthropologist who had written about Arab Women in the Field and the challenges of studying one's own society. In her courses, and elsewhere, I was introduced to Lila Abu-Lughod's Veiled Sentiments, an ethnography of poetry and everyday discourse in a Bedouin community in Egypt's Western desert. Abu-Lughod's narrative was sensitive to questions of positionality, a lesson she both drew from and imbued with feminism. A second piece of writing, this time an article by Stefania Pandolfo on "Detours of Life" that interpreted the internal logic of imagining space and bodies in a Moroccan village, gave me a breathtaking view of ethnography, the heterogeneity of lifeworlds, and the work of symbolic interpretation.

In hindsight I can see that these early undergraduate experiences of reading, and studying with, female anthropologists profoundly impacted my own writing. Although I would eventually become a historian, I remained interested in the ethnographic question of encounters, and specifically of how knowledge is produced through encounters—whether the encounter between the colonizer and the colonized or between psychoanalysis and Islam. In my most recent book, The Arabic Freud: Psychoanalysis and Islam in Modern Egypt, I ask what it might mean to think of psychoanalysis and Islam together, not as a "problem" but as a creative encounter of ethical engagement. Rather than conceptualizing modern intellectual thought as something developed in Europe, merely to be diffused at its point of application elsewhere, I imagine psychoanalytic knowledge as something elaborated across the space of human difference.

There is yet another female figure who stands at the door of my entry into writing about the Middle East. My grandmother was a strong presence in my early college years. Every Friday afternoon I would head over to her apartment, just a quick walk away from my dorm in downtown Cairo. We would eat lunch, laugh and talk, and watch the subtitled American soap operas that were so popular back then. Since she could not read or write, we would engage in a collective work of translation while watching, and I often found her retelling of the series to be far more imaginative than anything network television writers could ever have produced.

Writing for me is about the creative worlds of possibility and of human difference that exist both within and outside of the written word. As historians, when we write we are translating between the living and the dead, as much as between different life worlds, and we are often propelled by intergenerational and transgenerational bonds that include the written word, but also exceed it.

Omnia El Shakry is professor of history at the University of California, Davis. She is the author of The Arabic Freud: Psychoanalysis and Islam in Modern Egypt.

Christie Henry talks with Hanna Gray for International Women’s Day

This post is a transcribed excerpt from a forthcoming Open Stacks podcast interview.

I couldn't be more fortunate to be in the company of Hanna Gray, Professor Emeritus of History at the University of Chicago, and Jeff Deutsch, director of the Seminary Co-op. As a proud member of the University of Chicago diaspora, I am in awe and admiration of these two individuals, whose integrity and erudition animate the scholarly culture. We meet on the occasion of the imminent publication of Professor Gray's memoir, An Academic Life. Professor Gray and I overlapped briefly in 1993 as inhabitants of the 5801 Ellis Avenue Building, now Levi Hall. At the time, the University of Chicago Press occupied two floors of the building, and the University Administration was on the fifth floor. Two months after I joined the Press, Professor Gray stepped away from the presidency. But the resonance of her leadership endured for the entire 25 years I was on campus. She was the first European-born president and the first woman to lead the University of Chicago. As our paths intersect again, I now have the privilege of being the first woman to direct Princeton University Press, and in that capacity, also to be the publisher of Professor Gray's forthcoming memoir. I have savored reading the pages of this work and learning more about the fortitude and intelligence she used to shape experiences for so many of us at the University of Chicago and throughout the world.

Christie: We could use hours of conversation given that so many themes of our discussion—particularly the investment in thought and the benefits gained from communal thinking—are resonating beautifully. I wanted to ask you about the privilege and responsibilities of being first. You were the first European-born president of the University of Chicago as well as the first female provost at Yale and the first female president at Chicago. You talk about these opportunities that you have had as you being in the right place at the right time. And I think that that's often the way I have described my own narrative, as I too have been lucky to be in the right place at the right time. But if one of the responsibilities we carry is to try to create that right place and right time for others to enjoy these opportunities—and especially now as we're thinking about how to intentionally diversify the demographics of publishing and of the university—what were some of your experiences of creating those right places and right times? Consider this my plea for advice as to how to be intentional and less serendipitous in creating opportunities for others.

Hanna: I'm the first European-born president of the University of Chicago, but we haven't had a lot of presidents. So it's not the biggest deal, right? [laughs] I think my work at Yale was more complicated because it was a very early stage in the coeducation of Yale. Women wanted to be seen so much as integral parts of the university, but there were not a lot of women—to put it mildly—on the faculty.

The women surrounding the university wanted things to happen very quickly. And obviously my role was to be concerned for the whole university, not only for those who were women.

And at the same time, I felt that I could understand the situation of women much more than my male colleagues had over the years, and obviously a lot needed to be done at Yale. And so there was always this tension between my knowing that and working to address it. And the sense on the part of many women was that not enough was being done, because they hoped for almost overnight change, which is of course impossible. I mean, you know how appointments are made in institutions, and obviously as provost or president, as I was briefly, you can only do so much. It's not you who make the appointments. You can encourage appointments, you can allocate appointments, but you shouldn't have quota systems. Rather you have to wait until those opportunities come up and you have to prioritize and so on and so forth. It was very difficult for women who saw themselves as competent. Why was there not a position for them in the history of art, for an art historian so well trained and so ready to be a member of a good department? But there were no places. There were no positions in that area. Those kinds of issues were there all the time. And so the question of pace was a very big question, and I think I made a difference.

We made a slow difference, but that slow difference obviously was not satisfying to those who didn't benefit from it. And that is an issue that one confronts as one hopes to make a difference. Institutions that move slowly move slowly in part because that's their way. They don't know how to run. But they also move slowly because process is so important, and people need to feel things have been done fairly and appropriately and according to policies and rules that everybody understands and, one hopes, has been a part of shaping. Now when I came back to the University of Chicago, the situation was very different.

Chicago, of course, has always been a coeducational institution that had women on the faculty from day one. But the extraordinary thing about the University of Chicago, which speaks to the larger history of women in higher education in America, was that the percentage of women on the faculty when I became president was no larger than it had been on the opening day of the university. That was an extraordinary fact, and it was something I had seen in my own earlier time at the university, where I was, I think, one of the first women to be appointed to her husband's department.

There were some obstructions to women's progress within the university. There were some women on the faculty, of course, but none of them were in the sciences except for medicine. And even there, there weren't so many. And I think I was one of—I forget how many—five in the social sciences altogether. And then, at some point, one of only two tenured female faculty members. We did make steady progress because the institution had made, I think, an institutional determination that these figures were ridiculous and did not represent "our" institution, which prides itself on going against the tide. Chicago recognizes merit where merit is due, and it should certainly be doing just that. It wasn't always smooth progress and it certainly did not involve quotas of any kind, but we steadily did increase the number of women. And I think that having a woman president was a help in that respect. And I think once again, my responsibility was for the whole institution and for being sure that the appropriate appointments were made and other policies were followed. There was clearly some weight to that kind of encouragement. And you know, just the fact of being a woman made a difference.

Check this space later this month to listen to the complete interview on Open Stacks.