Cynthia Miller-Idriss on The Extreme Gone Mainstream

The past decade has witnessed a steady increase in far right politics, social movements, and extremist violence in Europe. Scholars and policymakers have struggled to understand the causes and dynamics that have made the far right so appealing to so many people—in other words, that have made the extreme more mainstream. In this book, Cynthia Miller-Idriss examines how extremist ideologies have entered mainstream German culture through commercialized products and clothing laced with extremist, anti-Semitic, racist, and nationalist coded symbols and references. Required reading for anyone concerned about the global resurgence of the far right, The Extreme Gone Mainstream reveals how style and aesthetic representation serve as one gateway into extremist scenes and subcultures by helping to strengthen racist and nationalist identification and by acting as conduits of resistance to mainstream society. Read on to learn more about how the extreme has gone mainstream.

Why did you write this book?
I stumbled across the new forms of commercialization analyzed in this book while I was sorting through photographers’ databases in search of a cover photo for my first book. I was immediately hooked—fascinated by how much had changed in German far right subculture since I had completed my prior fieldwork five years earlier. The skinhead aesthetic that had dominated the youth scene since the 1980s had all but disappeared, and was replaced with mainstream-style, high-quality commercial brands laced with far right ideology, symbols, and codes. I planned to write an article about it, but the project wouldn’t let me go. I literally found myself waking up in the middle of the night thinking about the codes, trying to disentangle their meanings and wondering whether youth even understood them. I felt compelled to understand it, and that’s what led to this book.

How does the coding work within the commercial products?
The brands and products encode historical and contemporary far right, nationalistic, racist, xenophobic, Islamophobic, and white supremacist references into iconography, textual phrases, colors, script, motifs, and product names within products that are essentially identical to other mainstream youth clothing styles. The code deployment carefully toes the line of legality in Germany, sometimes marketing directly to consumers’ awareness of legal bans of particular symbols and phrases.

How could a t-shirt enable extremist radicalization? That sounds like a stretch.
I get that question all the time. People are sometimes skeptical that clothing could be consequential for recruitment into or radicalization within far right subcultures. But I found that the iconography, symbols, and codes embedded in this clothing do exactly that. The clothes send messages about the ideal nation, set normative expectations for masculine behavior, disparage and dehumanize ethnic and religious minorities, and valorize violence as a means to achieve nationalist goals. They establish legitimacy within and provide access to far right scenes, signal political affiliation (allowing youth to find like-minded others), and act as icebreakers for conversations at school, in clubs, at parties, and in stadiums. They also act as conduits of resistance towards—and carriers of extremist ideas into—mainstream society, as youth wear the clothing in peer groups, around siblings, classmates, or work colleagues and help establish what is ‘cool’ or desired within and across subcultures. So yes: far right clothing and style can be a gateway to extremism.

Why do youth join extremist movements?
There is no single explanation for how youth become radicalized. But what I argue in this book is that extremism is not only driven by political or ideological motives but also by emotional impulses related to belonging to a group and resisting mainstream authorities. Traits and emotions like collective identity, belonging, heroism, loyalty, strength, trust, resistance, transgression, hatred, anger, rebellion, and violence were clearly evident in the messages sent through the iconography and symbols in clothing and products marketed to far right youth. I argue that youth are initially attracted to extremist scenes for their emotional resonance, rather than for ideological reasons alone. Ideological indoctrination and radicalization come later.

You argue that clothes can create identity. What do you mean?
Most people understand that clothes express identity—we all make choices about what we will wear that say something about our personality, even if it’s simply that we don’t care at all about fashion. But for many young people in particular, clothing, fashion, and style are deeply intertwined with their identities and their exploration of those identities, as they play around with various subcultural scenes: goths, punks, skaters and others are all immediately recognizable by their styles. But there has been less attention to the ways in which consumption and style not only reflect identity but also help to reinforce or even create it. Consumers can strengthen their eco-identities by “buying” green, for example, or their religious identities by buying halal or kosher food. Extremist style works the same way—I argue that the coded messages, symbols, and ideas being communicated not only reflect but also help to create and strengthen far right identity for these consumers.

Why should scholars read this even if they don’t work on the far right?
This book makes a critical intervention into mainstream sociological thought that should be interesting for a broad range of scholars, because the findings challenge conventional thinking about economic and material objects. While social scientists have long known that material objects hold symbolic power for communities (as Durkheim’s work on totemism showed, for example), economic objects have been locked into a narrower view as a result of the seminal influence of Karl Marx’s understanding of commoditization as exchange. I challenge the prevailing way of thinking by contending that commodities—economic objects—are not only the end results in an unequal production process, but are also cultural objects which carry emotion, convey meaning, and constitute identities. Consumer goods and material culture, I argue, are not only important for our understandings of inequality but also for how they may constitute identity and motivate political (including extremist) action.

You’re an American. Why do you study Germany?
I initially studied German as a way of connecting to my own lost family history—my great-great-grandfather emigrated from Germany in the late 1800s.  But once I lived in Germany, I became fascinated with how Germans confront the past and with the deep investments Germans make in combating the far right. But despite years of study, residence, and fieldwork in Germany, as an American I will always be an outsider in ways that inevitably impact my observations of cultural, social, and political phenomena. Any explanatory success I have in this book is due in large part to the formal and informal feedback and support I have received over the years from native German and European experts and colleagues. To my great surprise, they never blinked at this strange outsider who wanted to interrogate one of the darkest aspects of German history and contemporary youth subculture. It is my sincere hope that some of the findings in this book will be of some use to German scholars, activists, and educators who work to understand and combat far right violence every day.

 

Cynthia Miller-Idriss is associate professor of education and sociology and director of the International Training and Education Program at American University. Her books include Blood and Culture: Youth, Right-Wing Extremism, and National Belonging in Contemporary Germany.

Michael J. Ryan: A Taste for the Beautiful

Darwin developed the theory of sexual selection to explain why the animal world abounds in stunning beauty, from the brilliant colors of butterflies and fishes to the songs of birds and frogs. He argued that animals have “a taste for the beautiful” that drives their potential mates to evolve features that make them more sexually attractive and reproductively successful. But if Darwin explained why sexual beauty evolved in animals, he struggled to understand how. In A Taste for the Beautiful, Michael Ryan, one of the world’s leading authorities on animal behavior, tells the remarkable story of how he and other scientists have taken up where Darwin left off and transformed our understanding of sexual selection, shedding new light on human behavior in the process. Vividly written and filled with fascinating stories, A Taste for the Beautiful will change how you think about beauty and attraction. Read on to learn more about the evolution of beauty, why the sight of a peacock’s tail made Darwin sick, and why males tend to be the more “beautiful” in the animal kingdom.

What made you interested in the evolution of beauty?

For my master’s degree I was studying how male bullfrogs set up and defend territories. They have a pretty imposing call that has been described as ‘jug-a-rum;’ it is used to repel neighboring males and to attract females. In those days it was thought that animal sexual displays functioned only to identify the species of the signaler. For example, in the pond where I worked you could easily tell the difference between bullfrogs, leopard frogs, green frogs, and spring peepers by listening to their calls. Females do the same so they can end up mating with the correct species. Variation among the calls within a species was thought to be just noise, random variation that had little meaning to the females.

But sitting in this swamp night after night I was able to tell individual bullfrogs apart from one another and got used to seeing the same males with the loudest, deepest calls in the same parts of the pond. I began to wonder: if I could hear these differences, could the female bullfrogs? Could females decide whom to mate with based on a male’s call? And if some calls sounded more beautiful to me, did female frogs share my aesthetic?

I never got to answer these questions with the bullfrogs, but I decided to pursue this general question when I started work on my PhD at Cornell University.

Why did Darwin say that the sight of the peacock’s tail, an iconic example of sexual beauty, made him sick?

Darwin suffered all kinds of physical maladies, some probably brought on by his contraction of Chagas disease during his voyage on the Beagle. But this malady induced by the peacock’s tail probably resulted from cognitive dissonance. He had formulated a theory, natural selection, in which he was able to explain how animals evolve adaptations for survival. Alfred Russel Wallace developed a very similar theory. All seemed right with the world, at least for a while.

But then Darwin pointed out that many animal traits seem to hinder rather than promote survival. These included bright plumage and complex song in birds, flashing of fireflies, male fishes with swords, and of course the peacock with its magnificently long tail. All of these traits presented challenges to his theory of natural selection and the general idea of survival of the fittest. These sexy traits are ubiquitous throughout the animal kingdom but seem to harm rather than promote survival.

Sexual selection is Darwin’s theory that predicts the evolution of sexual beauty. How is this different from Darwin’s theory of natural selection?

The big difference between these two theories is that one focuses on survivorship while the other focuses on mating success. Both are important for promoting evolution, the disproportionate passage of genes from one generation to the next. An animal that survives for a long period of time but never reproduces is in a sense genetically dead. Animals that are extremely attractive but do not live long enough to reproduce are also at a genetic dead-end. It is the proper mix of survivorship and attractiveness that is most favored by selection. But the important point to realize is that natural selection and sexual selection are often opposed to one another: natural selection, for example, favoring shorter tails in peacocks and sexual selection favoring longer tails. What the bird ends up with is a compromise between these two opposing selection forces.

In most of evolutionary biology the emphasis is still more on survival than mating success. But sometimes I think that surviving is just nature’s clever trick to keep individuals around long enough so that they can reproduce.

Why is it that in many animals the males are the more beautiful sex?

In most animals there are many differences between males and females. But what is the defining character? What makes a male a male and a female a female? It is not the way they act, the way they look, the way they behave. It is not even defined by the individual’s sex organs, penis versus vagina in many types of animals.

The defining characteristic of the sexes is gamete size. Males have many small gametes and females have fewer large gametes. The maximum number of offspring that an animal can produce is limited by its number of gametes. Therefore, males could potentially father many more offspring than a female could mother.

But of course males need females to reproduce. So this sets up competition where the many gametes of the males are competing to hook up with the fewer gametes of the females. Thus in many species males are under selection to mate often, since they will never run out of gametes, while females are under selection to mate carefully and make good use of the fewer gametes they have. Thus males are competing for females, either through direct combat or by making themselves attractive to females, and females decide which males get to mate. The latter is the topic of this book.

Why is sexual beauty so dangerous?

The first step in communication is being noticed, standing out against the background. This is true whether animals communicate with sound, vision, or smells. It is especially true for sexual communication. The bind that males face is that they need to make themselves conspicuous to females, but their communication channel is not private; it is open to exploitation by eavesdroppers. These eavesdroppers can make a quick meal out of a sexually advertising male. One famous example, described in this book, involves the túngara frog and its nemesis, the frog-eating bat. Male túngara frogs add syllables, chucks, to their calls to increase their attractiveness to females. But it also makes them more attractive to bats, so as these males become more attractive they also become more likely to end up as a meal rather than a mate.

The túngara frog is only one example of the survival cost of attractive traits. When crickets call, for example, they can attract a parasitic fly. The fly lands on the calling male and her larvae crawl off of her onto the calling cricket. The larvae then burrow deep inside the cricket, where they will develop. As they develop they eat the male from the inside out, and their first meal is the male’s singing muscle. This mutes the male, so he will not attract other flies whose larvae would then become competitors.

Another cost of being attractive is tied up with the immune system. Many of the elaborate sexual traits of males develop in response to high levels of testosterone. Testosterone can have a negative effect on the immune system. So males with higher testosterone levels might produce more attractive ornaments, but these males are paying the cost with their ability to resist disease.

How did you come to discover that frog-eating bats are attracted to the calls of túngara frogs?

The credit for this initial discovery goes to Merlin Tuttle. Merlin is a well-known bat biologist, and he was on Barro Colorado Island (BCI) in Panama the year before I was. He captured a bat with a frog in its mouth. Merlin wondered how common this behavior was and whether the bats could hear the calls of the frogs and use those calls to find the frogs.

When Stan Rand and I discovered that túngara frogs become more attractive when they add chucks to their calls, we wondered why they didn’t produce chucks all the time. We were both convinced that there was some cost to producing chucks, and we both thought it was likely the ultimate cost imposed by a predator.

Merlin contacted Stan about collaborating on research with the frog-eating bat and frog calls, and Stan then introduced Merlin to me. The rest is history as this research has blossomed into a major research program for a number of people.

Is beauty really in the eye of the beholder?

Yes, but it is also in the ears, the noses, the toes and any other sense organ recruited to check out potential mates. All of these sense organs forward information to the brain where judgements about beauty are made. So it is more accurate to say beauty is in the brain of the beholder. It might be true that the brain is our most important sex organ, but the brain has other things on its mind besides sex. It evolves under selection to perform a number of functions, and adaptations in one function can lead to unintended consequences for another function. For example, studies of some fish show that the color sensitivity of the eyes evolves to facilitate the fish’s ability to find its prey. Once this happens though, males evolve courtship colors to which their females’ eyes are particularly sensitive. This is called sensory exploitation.

A corollary of ‘beauty is in the brain of the beholder’ is that choosers, usually females, define what is beautiful. Females are not under selection to discover which males are attractive; by choosing among them, they determine which males are attractive. They are in the driver’s seat when it comes to the evolution of beauty.

What is sensory exploitation?

We have probably all envisioned the perfect sexual partner. And in many cases those visions do not exist in reality. In a sense, the same might be true in animals. Females can have preferences for traits that do not exist. Or at least do not yet exist. When males evolve traits that elicit these otherwise hidden preferences this is called sensory exploitation. We can think of the evolution of sexual beauty as evolutionary attempts to probe the ‘preference landscape’ of the female. When a trait matches one of these previously unexpressed preferences, the male trait is immediately favored by sexual selection because it increases his mating success.

A good example of this occurs in a fish called the swordtail. In these fishes males have sword-like appendages protruding from their tails. Female swordtails prefer males with swords to those without swords, and males with longer swords to males with shorter swords. Swordtails are related to platyfish; the sword of swordtails evolved after the platyfish and swordtails split off from one another thousands of years ago. But when researchers attach a plastic sword to a male platyfish he becomes more attractive to female platyfish. These females have never seen a sworded male, but they have a preference for that trait nonetheless. Thus it appears that when the first male swordtail evolved a sword, the females already had a preference for this trait.

Do the girls really get prettier at closing time, as Mickey Gilley once sang?

They sure do, and so do the boys. A study showed that both men and women in a bar perceive members of the opposite sex as more attractive as closing time approaches. This classic study was repeated in Australia, where the researchers measured blood alcohol levels and showed that the ‘closing time’ effect was not due simply to drinking but to the approaching closing time of the bar.

The interpretation of these results is that if an individual wants to go home with a member of the opposite sex but none of the individuals present meet her or his expectations of beauty, the individual has two choices. They can lower their standards of beauty or they can deceive themselves and perceive the same individuals as more attractive. They seem to do the latter.

Although we do not know what goes on in an animal’s head, they show a similar pattern of behavior. Guppies and roaches are much more permissive in accepting otherwise unattractive mates as they get closer to the ultimate closing time, the end of their lives. In a similar example, early in the night female túngara frogs will reject certain calls that are usually unattractive, but later in the night when females become desperate to find a mate they become more than willing to be attracted to these same calls.  It is also noteworthy that middle-aged women think about sex more and have sex more often than do younger women.

Deception seems to be widespread in human courtship. What about animals?

Males have a number of tricks to deceive females for the purpose of mating. One example involves moths in which males make clicking sounds to court females. When males string together these clicks in rapid succession it sounds like the ‘feeding buzz’ of a bat, the sound a bat makes as it zeros in on its prey. At least this is what the female moths think. When they hear these clicks they freeze and the male moth is then able to mount the female and mate with her with little resistance as she appears to be scared to death, not of the male moth but of what she thinks is a bat homing in on her for the kill.

Other animals imitate food to attract a female’s attention. Male water mites beat their legs on the water surface, imitating vibrations caused by copepods, the main source of food for the water mites. When females approach the source of these vibrations they find a potential mate rather than a potential meal.

What about peer pressure? We know this plays a role in human interactions and influences our perceptions of beauty. What about animals? Can they be subject to peer pressure?

Suppose a woman looks at my picture and is asked to rate my attractiveness on a scale of 1 to 10. Another woman is asked to do the same, but in this picture I am standing next to an attractive woman. Almost certainly I will get a higher score the second time; my attractiveness increases although nothing about my looks has changed, only that I was consorting with a good-looking person. This is referred to as mate choice copying, and it is widespread in the animal kingdom.

Mate choice copying was first experimentally demonstrated in guppies. Female guppies prefer males who have more orange over those who have less orange. In a classic experiment, females were given a choice between a more and a less colorful male. They preferred the more colorful male and were then returned to the center of the tank for another experiment. In this instance they saw the less colorful and less preferred male courting a female. That female was removed, and the test female was tested for her preference for the same two males once again. Now the female changes her preference and prefers what previously had been the less preferred male. She too seems to be employing mate choice copying.

Many animals learn by observing others. Mate choice copying seems to be a type of observational learning that is common in many animals in many domains. It might suggest that we be careful with whom we hang out.

What is the link between sexual attraction in animals and pornography in humans?

Animal sexual beauty is often characterized by being extreme: long tails, complex songs, brilliant colors, and outrageous dances. The same is often true of sexual beauty in humans. Female supermodels, for example, tend to be much longer and thinner than most other women in the population, and male supermodels are super-buff—hardly normal. Furthermore, in animals we can create sexual traits that are more extreme than what exists in males of the population, and in experiments females often prefer these artificially exaggerated traits: even longer tails, more complex songs, and more brilliant colors than exhibited by their own males. These are called supernormal stimuli. Pornography also creates supernormal stimuli, not only in showcasing individuals with extreme traits but also in creating social settings that hardly exist in most societies; this manufactured social setting is sometimes referred to as Pornotopia.

Michael J. Ryan is the Clark Hubbs Regents Professor in Zoology at the University of Texas and a Senior Research Associate at the Smithsonian Tropical Research Institute in Panama. He is a leading researcher in the fields of sexual selection, mate choice, and animal communication. He lives in Austin, Texas.

 

Scott Cowen on Winnebagos on Wednesdays

In Winnebagos on Wednesdays: How Visionary Leadership Can Transform Higher Education, Scott Cowen, president emeritus of Tulane University, acknowledges the crisis in higher education but also presents reasons for optimism as courageous leaders find innovative strategies to solve the thorny problems they face. Telling stories of failure and triumph drawn from institutions all across the nation, Cowen takes the reader on a fascinating trip through varied terrain. Recently, Cowen answered some questions about his book and what he sees as a burgeoning opportunity to reshape higher education for the future.

What’s up with the title? I’m intrigued.

The title emanated from something that happened in the spring of my first year as president of Tulane, after an undefeated season for our football team. I made the coach an offer he couldn’t refuse—and he refused. He said he was leaving for Clemson, where the program was so spectacular that fans lined up their Winnebagos on Wednesdays in anticipation of Saturday games. That’s when I realized Tulane was, for various crazy reasons, in the entertainment business, and we weren’t on the A list. For me, the anecdote became a metaphor for all the absurdities and challenges confronting higher education, and started me thinking about how to stop the madness and tackle our problems.

Why did you write the book?

We’re obviously at a tipping point in higher education, with rising skepticism about its value and escalating demands for accountability, affordability, and access. It’s a moment to take stock, and I finally had time to do just that—reflect on my entire career and the lessons I’d learned. A key moment for me was Hurricane Katrina, when the survival of Tulane hung in the balance. I saw then that the two critical elements required to sustain and invigorate an institution are an inspired, distinctive mission and leaders who have the guts and determination to convert that mission into meaningful results. In the book, I tell stories about crisis points when leadership counted most: I parse some of the failures but also show innovative approaches that are being implemented right now, and that point the way to creative, practicable solutions. My aim was to get people to rethink the issues that plague all our schools—value and impact, diversity, financial sustainability, athletics, medicine, mission, governance, leadership—in order to improve our institutions and forge a path into the future.

What are some of the most important challenges facing higher education?

The list above itemizes many of them, but financial sustainability in particular cuts across all the issues. Right now, the cost of a college education is out of reach for too many, fueling perceptions of elitism and raising questions about value. In the book I describe the many efforts underway to cut costs, expand enrollments, develop innovative no-loan policies, and create new programs that enhance the “real world” value of a degree. But the ultimate challenge facing the sector is not the solving of any single conundrum. The fundamental task before us is to find the right people, the right governance, and the right mission if higher education is going to continue to be an engine of innovation and progress. A friend of mine from my days in business management used to say, “Success is all about the ‘who.’”  And that’s the bottom line: people make the organization. They establish the structures, define the mission, set the tone, and create the ethos that helps an institution thrive.

What is the most surprising news in the book?

The news comes from schools people don’t know much about. Looking at the usual suspects—the Harvards, Yales, and Stanfords—you won’t see large scale institutional transformation occurring; the more famous and successful an institution is, the more likely it is to stick with the tried-and-true—and, ironically, the more likely it is to be sowing the seeds of its own decline. The most promising changes are occurring at lesser-known schools, where innovations have dramatically heightened impact. For example, Xavier University, a small Catholic historically black school in New Orleans, led for decades by Norman Francis, has become the major pipeline for black doctors, scientists, and pharmacists in the U.S. Paul Quinn, a historically black college in Dallas led by Michael Sorrell, has reinvented itself as a hub of urban entrepreneurship while expanding its outreach to Latino students. Arizona State University, with Michael Crow at the helm, has increased its student body by 50% to 83,000, expanded African-American and Hispanic enrollment, consolidated departments into institutes with shared administrative costs, launched innovative online programs, and forged partnerships with other universities and private businesses. The University of Southern California, under Steve Sample’s leadership, has become a first-rate academic institution and an anchor for south central Los Angeles, with well-funded institutes, a dramatic increase in productive research, and a menu of cutting-edge interdisciplinary studies. Northeastern University, with its century-old co-op program—academic credits for experiential learning in a range of paid internships—has finally caught up with the current zeitgeist, becoming a magnet for students seeking preparation for the job market. All these schools, and many others, demonstrate that regional presence, pedagogic innovation, and a strong sense of mission create value and enhance impact.

How are the leaders we need tomorrow different from those of the past?

The scholar-president—typically a white male in his sixties—may fast be becoming a relic. Given current cultural shifts and upheavals, it’s clear that we need leaders who are more diverse on every measure—race, gender, geography, ideology, experience— reflecting the nation at large, and with the potential to be models and mentors on their campuses. In addition to personal histories, we should also be looking for traits like versatility and adaptability. In the coming era, university presidents will need to be agents of change, crafting new directions that keep pace with unfolding events; skillful executives who can steer complex multibillion-dollar organizations; astute assessors of talent; and inventors of creative solutions to the problems they will inevitably face. We are likely to see more “unfiltered” leaders, without the standard scholarly résumé, who promise to bring fresh perspectives drawn from the worlds of business, government, and the military, and more “blended” leaders, with one foot in the ivory tower and one in the outside world, who should be able to bring the two domains together in fruitful ways. But the sine qua non of an effective president is the quality of emotional intelligence: the ability to listen and empathize is an indispensable skill in the fractious times we live in. At this precarious moment, when we are facing a paradigm shift in priorities and possibilities, we need people at the helm who will preserve what is best from the past, invent novel approaches for the future, and embody the enduring values of civility, compassion, and integrity.

Scott Cowen is president emeritus and distinguished university chair of Tulane University. His books include The Inevitable City (St. Martin’s) and Innovation in Professional Education (Jossey-Bass). Cowen has written for such publications as the New York Times and the Wall Street Journal.

Jörg Rüpke on Pantheon: A New History of Roman Religion

In this ambitious and authoritative book, Jörg Rüpke provides a comprehensive and strikingly original narrative history of ancient Roman and Mediterranean religion over more than a millennium—from the late Bronze Age through the Roman imperial period and up to full-fledged Christianization. While focused primarily on the city of Rome, Pantheon fully integrates the many religious traditions found in the Mediterranean world, including Judaism and Christianity. This generously illustrated book is also distinguished by its unique emphasis on “lived religion,” a perspective that stresses how individuals’ experiences and practices transform religion into something different from its official form. The result is a radically new picture of both Roman religion and a crucial period in Western religion—one that influenced Judaism, Christianity, Islam, and even the modern idea of “religion” itself. With its unprecedented scope and innovative approach, Pantheon is an unparalleled account of ancient Roman and Mediterranean religion.

In a world where religion is changing its face in rapid and unexpected ways, how is Roman religion, two millennia older, similar?
Rome was perhaps the largest city of the world before the modern period. The religious practices and beliefs of a million people from all over Europe, West Asia, North Africa and occasionally beyond were as varied as religion is in today’s megacities. It is interesting to see how Roman lawmakers and judges dealt with such a situation. And it is even more interesting to see how ‘normal’ citizens understood and used such a religious pluralism. Different gods at every corner, shrines on walls, polemical graffiti, people earning their living by selling religious goods and services, shaven heads or loud music—there is more to discover and learn than the solemnity of the emperor having a bull killed on the Capitoline hill.

Why did you invent a fictitious figure at the start of your history?
Religion is about people claiming to have religious experiences and valuing religious knowledge. There is no religion if everybody thinks that their neighbors addressing a divine being is just ridiculous. But religious experiences or knowledge cannot be simply decreed. To understand the unbelievable dynamics of ancient religion—the invention of statues and monumental temples, the idea that gods would enjoy horse races or self-mutilation, and so on—a historian needs to get an idea of what went on in people’s heads. We will never know, but we can imagine. Rhea is an avatar who tells us what a woman at the beginning of the Iron Age might have thought. The basis for these thoughts is the archaeological record: traces of deposits, meals, tombs, hearths, and the like. I thought it would be more honest to invent such a speaker and her reflections instead of crediting an attested person without evidence that can be firmly ascribed to them.

How do Judaism and Christianity figure in your book?
I tell the story of nearly a millennium, from the 8th and 7th centuries BCE to the middle of the fourth century CE. From the Roman point of view, Jews show up in the second half of that period only, people calling themselves “Christians” even later, and Muslims are beyond the horizon. Apart from occasional troubleshooting in Judaea or Alexandria, it was only at the very end of antiquity that Jews and in particular Christians became important on a large scale. Before that they were simply a small minority. I tried to balance this. In terms of pages they are overrepresented. In terms of their significance they are massively underrepresented.

What is your favorite god from this large ancient pantheon?
I write about ancient religion, I don’t participate in it! But this was fascinating: ancient polytheism was not about a large number of gods or a clear division of labor. It was about empowering (nearly) everybody to arrange and sometimes create their own divine helpers and addressees. If I pray at the end of an interview to Mercury with his quick tongue, to violent Mars, and to Silvanus, lord of the endless woods, the interviewer should be careful…

Jörg Rüpke is vice-director and permanent fellow in religious studies at the Max Weber Center for Advanced Cultural and Sociological Studies at the University of Erfurt, Germany, and has been a visiting professor at the Collège de France, Princeton University, and the University of Chicago. His many books include On Roman Religion and From Jupiter to Christ.

Bryan Caplan on The Case against Education

Despite being immensely popular—and immensely lucrative—education is grossly overrated. In this explosive book, Bryan Caplan argues that the primary function of education is not to enhance students’ skill but to certify their intelligence, work ethic, and conformity—in other words, to signal the qualities of a good employee. Learn why students hunt for easy As and casually forget most of what they learn after the final exam, why decades of growing access to education have not resulted in better jobs for the average worker but instead in runaway credential inflation, how employers reward workers for costly schooling they rarely if ever use, and why cutting education spending is the best remedy. Romantic notions about education being “good for the soul” must yield to careful research and common sense—The Case against Education points the way.

The “signaling model of education” is the foundation of your argument. What is this model?

The standard view of education, often called the “human capital model,” says that education raises income by training students for their future jobs. The signaling model, in contrast, says that education raises income by certifying students for their future jobs. Doing well in school is a great way to convince employers that you’re smart, hard-working, and conformist. Once they’re convinced, career rewards naturally follow.

Could you give an analogy?

Sure. There are two ways to raise the value of a diamond. One is to hand it to an expert gem smith so he can beautifully cut the stone. The other is to hand it to a reputable appraiser with a high-powered eyepiece so he can certify the pre-existing excellence of the stone. The first story is like human capital; the second story is like signaling.

Is it really either/or?

Of course not. The human capital and signaling models both explain part of education’s career benefits. But I say signaling is at least half the story—and probably more.

And why should we care about signaling?

Key point: In the human capital model, students go to school and learn how to produce the extra income they’ll ultimately earn. They get more of the pie because they make the pie bigger. In the signaling model, in contrast, school raises students’ income without raising their productivity. They get more of the pie without making the pie bigger. How is that possible?  Because in the signaling model, education is redistributive; it’s a way to grab more for yourself at the expense of the rest of society.

Selfishly speaking, of course, it doesn’t really matter why education pays. But from a social point of view—a public policy point of view—it makes all the difference in the world. If the signaling model is right, education enriches the individual student but actually impoverishes society. Using education in order to spread prosperity is like telling the audience at a concert to stand up in order to see better. What works for the individual fails for the group.

But isn’t assessing workers’ quality socially valuable?

To some extent. But once workers have been ranked, giving everyone extra years of education is socially wasteful. Furthermore, since the status quo is supported by hundreds of billions of dollars of subsidies, we’re probably underusing alternative certification methods like apprenticeships, testing, boot camps, and so on.

In 2001, Michael Spence won a Nobel Prize for his work on educational signaling. Can the idea really be so neglected?  What is your value-added here?

Signaling enjoys high status in pure economic theory. But most empirical labor and education economists are dismissive. Either they ignore signaling, cursorily acknowledge it in a throw-away footnote, or hastily conclude it’s quantitatively trivial. My book argues that there’s overwhelming evidence that signaling is a mighty force in the real world. There’s strong evidence inside of economics—and even stronger evidence in educational psychology, sociology of education, and education research. And finally, signaling has abundant support from common sense.

You say that both common sense and academic research support signaling. What are the top common sense arguments?

First and foremost, there’s the chasm between what students study in school and what they actually use on the job. How many U.S. jobs actually tap workers’ knowledge of history, social science, literature, poetry, or foreign language?

Signaling also explains why students are far more concerned about grades than actual learning. They want “easy A’s”—not professors who teach lots of job skills. Signaling explains why cheating pays—a successful cheater profits by impersonating a good student. And signaling explains why students readily forget course material the day after the final exam. Once you’ve got the good signal on your transcript, you can usually safely forget whatever you learned.

And what are the top research arguments?

First, there’s the diploma or “sheepskin” effect. Fact: Graduation years are vastly more lucrative than intermediate years. This is hard for human capital to explain: Do schools wait until senior year to finally start teaching useful job skills?  But it flows naturally out of conformity signaling. If your society says you should complete a four-year degree, anyone who only does 3.9 years looks weird, and hence bad. It’s just like going shirtless to a job interview: Either you don’t understand the social convention, you don’t care about it, or you’re actively defying it.

Second, there’s credential inflation. Fact: Over the last century, employers have dramatically increased the amount of education you need to get any given job. In the modern U.S. economy, many waiters and bartenders have college degrees. This would have been almost unheard of seventy years ago. This is hard for human capital theory to explain. Why would employers pay extra for workers with superfluous credentials?  But it’s simple for signaling to explain: When overall education rises, you need more education to distinguish yourself—to convince employers you’re worth hiring and training.

Third, there’s the employer learning/statistical discrimination literature. That’s rather wonkish, so anyone curious should just read the discussion in my book.

Finally, there’s the contrast between personal and national payoffs for education. Fact: Researchers have never found a country where education fails to noticeably raise individuals’ income. But there’s a messy debate about the effect of education on nations’ income. Plenty of researchers find that raising a country’s national average education level has little or no economic benefit for the country as a whole—precisely as signaling predicts. While others find modest national payoffs, the average estimate of the social return for education is far below the average estimate of the selfish return.

Wait, what’s the difference between “selfish” and “social” returns?

The selfish return evaluates educational investment from the point of view of the individual student. The social return evaluates educational investment from the point of view of the whole society (including all benefits the individual student enjoys, of course). A standard example: If taxpayers provide free tuition, your selfish return will generally exceed the social return, because you invest only your time, while society invests your time plus taxpayers’ money. The primary wedge between selfish and social returns for me, however, is signaling: If education boosts your salary more than your productivity, the selfish return exceeds the social.
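To make that wedge concrete, here is a minimal sketch in Python using entirely made-up figures (none of these numbers come from the book): the student counts the full wage premium against only their own foregone earnings, while society counts only the productivity gain but also bears the taxpayer-funded tuition.

```python
# Hypothetical illustration of selfish vs. social returns to schooling.
# All figures are invented for the example; only the direction of the wedge matters.

def simple_return(gain, cost):
    """Return on investment as (gain - cost) / cost."""
    return (gain - cost) / cost

wage_premium = 400_000      # lifetime earnings boost the student captures
signaling_share = 0.5       # assumed fraction of the boost that is pure certification
productivity_gain = wage_premium * (1 - signaling_share)  # extra output society actually gets

students_time = 150_000     # foregone earnings while studying (borne by the student)
tuition = 100_000           # borne by taxpayers if tuition is "free" to the student

# Selfish return: full wage premium against only the student's own cost.
selfish = simple_return(wage_premium, students_time)

# Social return: only the productivity gain, against all costs combined.
social = simple_return(productivity_gain, students_time + tuition)

print(f"selfish return: {selfish:.0%}")  # 167% on these made-up numbers
print(f"social return:  {social:.0%}")   # -20% on these made-up numbers
```

On these invented numbers the degree looks excellent to the student and negative to society; the sketch is only meant to show why a subsidy plus a signaling component drives the two returns apart, not to estimate their actual size.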

Suppose you’re right. How should the education system be reformed?

Above all, we need far less education. And the cleanest way to get far less education is to sharply cut government education spending. Won’t this make education less accessible? Absolutely. But if I’m right, employers will no longer expect you to have the education you can no longer afford. In other words, spending cuts will cause credential deflation. You’ll once again be able to get low- and middle-skill jobs with a high school degree—or less.

We also need more vocational education, especially for early teens. Most researchers detect solid selfish payoffs. And if you take signaling seriously, the social advantages of teaching plumbing instead of poetry should be very large indeed.

If you’re right about signaling, should students drop out of school?

No. The whole point of the signaling model is that school is selfishly rewarding but socially wasteful. I do argue, though, that even in the current regime, weaker students’ odds of academic success are so slim they’d be better off just getting a job (and job experience) straight out of high school.

Aren’t you being too much of an economist?  Isn’t the real point of education to spread enlightenment and sustain civilization?

For an economist, I have broad interests. Ideas and culture are my life. But if you look at the data, there’s little sign that education causes much enlightenment or civic understanding. Even at top schools, most students are intellectually and culturally apathetic, and most professors are uninspiring.

Given today’s political climate, who do you think will be most receptive to your message?  The most hostile?

Support for education is bipartisan. Most people, regardless of party, favor more and better education. It’s no accident that both Bushes wanted to be known as “education presidents.” That said, I think my biggest supporters will be pragmatists and fiscal hawks. And my biggest opponents will be ideological fans of education and fiscal doves. Most progressives will probably dislike my book, but they really shouldn’t. If you care about social justice, you should be looking for reforms that help people get good jobs without fancy degrees.

You’re a full professor at George Mason and a Princeton Ph.D. How can you of all people possibly challenge the social value of education?

I see myself as a whistleblower. Personally, I’ve got nothing to complain about; the education system has given me a dream job for life. However, when I look around, I see a huge waste of students’ time and taxpayers’ money. If I don’t let them know their time and money’s being misspent, who will? And if I wasn’t a professor, who would take me seriously?

Bryan Caplan is professor of economics at George Mason University and a blogger at EconLog. He is the author of The Myth of the Rational Voter: Why Democracies Choose Bad Policies. He lives in Oakton, Virginia.

Tim Rogan: What’s Wrong with the Critique of Capitalism Now

What’s wrong with capitalism? Answers to that question today focus on material inequality. Led by economists and conducted in utilitarian terms, the critique of capitalism in the twenty-first century is primarily concerned with disparities in income and wealth. It was not always so. In The Moral Economists, Tim Rogan reconstructs another critical tradition, developed across the twentieth century in Britain, in which material deprivation was less important than moral or spiritual desolation. Examining the moral cornerstones of a twentieth-century critique of capitalism, The Moral Economists explains why this critique fell into disuse, and how it might be reformulated for the twenty-first century. Read on to learn more about these moral economists and their critiques of capitalism.

You begin by asking, ‘What’s wrong with capitalism?’ Shouldn’t we start by acknowledging capitalism’s great benefits?

Yes, absolutely. This was a plan for the reform of capitalism, not a prayer for its collapse or a pitch for its overthrow. These moral economists sought in some sense to save capitalism from certain of its enthusiasts—that has always been the project of the socialist tradition out of which these writers emerged. But our question about capitalism—as about every aspect of our social system, every means by which we reconcile individual preferences to arrive at collective decisions—should always be ‘What’s wrong with this?’, ‘How can we improve this?’, ‘What could we do better?’ And precisely how we ask those questions, the terms in which we conduct those debates, matters. My argument in this book is that our way of asking the question ‘What’s wrong with capitalism?’ has become too narrow, too focused on material inequality, insufficiently interested in some of the deeper problems of liberty and solidarity which the statistics recording disparities of wealth and income conceal.

Was this critique of capitalism also a critique of economics, and if so what do these critics add to the usual complaints against economics—about unrealistic assumptions, otherworldly models, indifference to historical developments such as financial crises, etc?

Yes, the moral economists were critical of economics. But although their criticisms might sound like variations on the familiar charge that economists make unreal assumptions about the capacities and proclivities of individual human beings, the moral economists’ challenge to mainstream economics was different. The most influential innovators in economics since the Second World War have been behavioral scientists pointing out that our capacity to make utilitarian calculations is not as high as economists once took it to be. Part of the success of this series of innovations is that the ideal of reducing every decision to a calculation of utility retains its allure, even as we come to realize how fallible our real-time calculations are. Behavioral economists have found our capacity to think like rational utilitarian agents wanting. But when did the capacity to think like a rational utilitarian agent become the measure of our humanity? This is the question moral economists have been asking since the 1920s. Initiated by historians determined to open up means of thinking outside economic orthodoxy, and since joined by mathematically-trained economists concerned to get a more realistic handle on the relationship between individual values and social choice, the moral economists’ enterprise promises a far more profound reconstitution of political economy than behavioral economics has ever contemplated.

Doesn’t the profile of these writers—dead, male, English, or Anglophile, writing about a variety of capitalism long since superseded—limit their contemporary relevance?

No. Their main concern was to discover and render articulate forms of social solidarity which the dominant economic discourse concealed. They found these on the outskirts of ‘Red Vienna’, on railroads under construction in post-war Yugoslavia, but most of all in the north of England. They believed that these inarticulate solidarities were what really held the country together—the secret ingredients of the English constitution. Though they belonged to a tradition of social thought in Britain that was skeptical towards Empire and supportive of the push for self-determination in India and elsewhere, they raised the prospect that the same dynamics had developed in countries to which British institutions had been exported—explaining the relative cohesion of Indian and Ghanaian democracies, for instance. More broadly E. P. Thompson in particular argued that factoring these incipient solidarities into constitutional thinking generated a more nuanced understanding of the rule of law than nineteenth-century liberalism entailed: in Thompson’s hand the rule of law became a more tensile creed, more capable of accommodating the personal particularities of the law’s subjects, more adept at mitigating the rigors of rational system to effect justice in specific cases. The profiles of the late-twentieth century commentators who continue the critical tradition Tawney, Polanyi and Thompson developed—especially Amartya Sen—underscore that tradition’s wider relevance.

Aren’t these writers simply nostalgists wishing we could return to a simpler way of life?

No. Tawney especially is often seen as remembering a time of social cohesion before the Reformation and before the advent of international trade and wishing for its return. This perception misunderstands his purpose.

Religion and the Rise of Capitalism draws sharp contrasts between two distinct iterations of European society – the late medieval and the modern. But this was a means of dramatizing a disparity between different societies developing in contemporary England—the society he encountered working at Toynbee Hall in London’s East End, where social atomization left people demoralized beyond relief, on the one hand; the society he encountered when he moved to Manchester to teach in provincial towns in Lancashire and Staffordshire, where life under capitalism was different, where the displacement of older solidarities was offset by the generation of new forms of cohesion, where many people were poor but where the social fabric was still intact.

The demoralized East End was the product of laissez faire capitalism—of the attempt to organize society on the basis that each individual was self-sufficient, profit-minded, unaffected by other human sentiments. The political crisis into which Britain was pitched in the late Edwardian period underlined how untenable this settlement was: without a sense of what more than the appetite for wealth motivated people, there could be no ‘background of mutual understanding’ against which to resolve disputes. At the same time the answer was not simply stronger government, a bigger state. The latent solidarities Tawney discovered in the north of England carried new possibilities: the facility of market exchange and the security of an effective state could be supplemented by informal solidarities making everyday life more human than the impersonal mechanisms of market and government allowed.

Polanyi and Thompson brought their historical settings forward into the nineteenth century, making their writings feel more contemporary. But they were both engaged in much the same exercise as Tawney—using history to dramatize disparities between different possibilities developing within contemporary society. They too had come into contact with forms of solidarity indicating that there was more than calculations of utility and the logic of state power at work in fostering social order. Polanyi and then especially Thompson advanced their common project significantly; it was Thompson who found a new terminology with which to describe these incipient solidarities. Tawney had talked of ‘tradition’ and ‘convention’ and ‘custom,’ and Polanyi had followed Tawney in this—refusing to associate himself with Ferdinand Tönnies’s concept of Gemeinschaft or Henry Maine’s system of ‘status’ when pressed to, but offering no cogent concept through which to reckon with these forms of solidarity himself. Thompson’s concept of the ‘moral economy’ made the kinds of solidarities upon which they had all focused more compelling.

Does subscribing to a moral critique of capitalism mean buying into one of the prescriptive belief systems out of which that critique materialized? Do you need to believe in God or Karl Marx in order to advance a moral critique of capitalism without embarrassment?

No. Part of the reason that this critique of capitalism went out of commission was that the belief systems which underpinned it—which, more specifically, provided the conceptions of what a person is which falsified reductive concepts of ‘economic man’—went into decline. Neither Tawney nor Thompson was able to adapt to the attenuation of Christian belief and Marxian conviction, respectively, from which their iterations of the critique had drawn strength. Polanyi’s case was different: he was able to move beyond both God and Marx, envisaging a basis upon which a moral critique of capitalism could be sustained without relying on either belief system. That basis was furnished by the writings of Adam Smith, which adumbrated an account of political economy that never doubted that economic transactions are embedded in moral worlds.

This was a very different understanding of Adam Smith’s significance from the one in which most people who know that name have been inculcated. But it is an account of Adam Smith’s significance which grows increasingly recognizable to us now—thanks to the work of Donald Winch, Emma Rothschild and Istvan Hont, among others, facilitated by the end of Cold War hostilities and the renewal of interest in alternatives to state- or market-based principles of social order.

In other words there are ways of re-integrating economics into the wider moral matrices of human society without reverting to a Christian or Marxian belief system. There is nothing extreme or zealous about insisting that the moral significance of economic transactions be recognized. What was zealous and extreme was the determination to divorce economics from broader moral considerations. This moral critique of capitalism represented a recognition that the time for such extremity and zeal had passed. As the critique fell into disuse in the 1970s and 1980s, some of that zeal returned, and the last two decades now look to have been a period of especially pronounced ‘economism.’ The relevance of these writings now, then, is that they help us to put the last two decades and the last two centuries in perspective, revealing just how risky the experiment has been and urging us to settle back now into a more sustainable pattern of economic thought.

You find that this moral critique of capitalism fell into disuse in the 1970s and 1980s. Bernie Sanders declared in April 2016 that instituting a ‘truly moral economy’ is ‘no longer beyond us.’ Was he right?

Yes and no. Sanders made this declaration at the Vatican, contemplating the great papal encyclicals Rerum Novarum and Centesimus Annus. The discrepancies between what Sanders said and what Popes Leo XIII and John Paul II before him said about capitalism are instructive. The encyclicals have always focused on the ignominy of approaching a person as a bundle of economic appetites, on the apostasy of abstracting everything else that makes us human out of our economic thinking. Sanders sought to accede to that tradition of social thought—a tradition long since expanded to encompass perspectives at variance with Catholic theology, to include accounts of what a person is which originate outside the Christian tradition. But Sanders’s speech issued no challenge to the reduction of persons to economic actors. In designating material inequality the ‘great issue of our time,’ Sanders reinforced that reductive tendency: the implication is that all we care about is the satisfaction of our material needs, as if redistribution alone would solve all our problems.

The suggestion in Sanders’s speech was that his specific stance in the utilitarian debate over how best to organize the economy has now taken on moral force. There is an ‘individualist’ position which favors free enterprise and tolerates inequality as incidental to the enlargement of aggregate utility, and there is a ‘collectivist’ stance which enlists the state to limit freedom to ensure that inequality does not grow too wide, seeing inequality as inimical to the maximizing of aggregate utility. The ‘collectivists’ are claiming the moral high ground. But all they are really proposing is a different means to the agreed end of maximizing overall prosperity. The basis for their ‘moral’ claims seems to be that they have more people on their side—a development which would make Nietzsche smile, and should give all of us pause. There are similar overtones to the rallying of progressive forces around Jeremy Corbyn in the UK.

The kind of ‘moral economy’ Sanders had in mind—a big government geared towards maximizing utility—is not what these moral economists would have regarded as a ‘truly moral economy’. The kinds of checks upon economic license they had in mind were more spontaneous and informal—emanating out of everyday interactions, materializing as strictures against certain kinds of commercial practice in common law, inarticulate notions of what is done and what is not done, general conceptions of fairness, broad-based vigilance against excess of power. This kind of moral economy has never been beyond us. The solidarities out of which it arises were never eradicated, and are constantly regenerating.

Tim Rogan is a fellow of St. Catharine’s College, Cambridge, where he teaches history. He is the author of The Moral Economists: R. H. Tawney, Karl Polanyi, E. P. Thompson, and the Critique of Capitalism.

Jerry Z. Muller on The Tyranny of Metrics

Today, organizations of all kinds are ruled by the belief that the path to success is quantifying human performance, publicizing the results, and dividing up the rewards based on the numbers. But in our zeal to instill the evaluation process with scientific rigor, we’ve gone from measuring performance to fixating on measuring itself. The result is a tyranny of metrics that threatens the quality of our lives and most important institutions. In this timely and powerful book, Jerry Muller uncovers the damage our obsession with metrics is causing—and shows how we can begin to fix the problem. Complete with a checklist of when and how to use metrics, The Tyranny of Metrics is an essential corrective to a rarely questioned trend that increasingly affects us all.

What’s the main idea?

We increasingly live in a culture of metric fixation: the belief in so many organizations that scientific management means replacing judgment based upon experience and talent with standardized measures of performance, and then rewarding or punishing individuals and organizations based upon those measures. The buzzwords of metric fixation are all around us: “metrics,” “accountability,” “assessment,” and “transparency.” Though often characterized as “best practice,” metric fixation is in fact often counterproductive, with costs to individual satisfaction with work, organizational effectiveness, and economic growth.

The Tyranny of Metrics treats metric fixation as the organizational equivalent of The Emperor’s New Clothes. It helps explain why metric fixation has become so popular, why it is so often counterproductive, and why some people have an interest in pushing it. It is a book that analyzes and critiques a dominant fashion in contemporary organizational culture, with an eye to making life in organizations more satisfying and productive.

Can you give a few examples of the “tyranny of metrics?”

Sure. In medicine, you have the phenomenon of “surgical report cards” that purport to show the success rates of surgeons who perform a particular procedure, such as cardiac operations. The scores are publicly reported. In an effort to raise their scores, surgeons were found to avoid operating on patients whose complicated circumstances made a successful operation less likely. So, the surgeons raised their scores. But some cardiac patients who might have benefited from an operation failed to get one—and died as a result. That’s what we call “creaming”—only dealing with cases most likely to be successful.

Then there is the phenomenon of goal diversion. A great deal of K-12 education has been distorted by the emphasis that teachers are forced to place on preparing students for standardized tests of English and math, where the results of the tests influence teacher retention or school closings. Teachers are instructed to focus class time on the elements of the subject that are tested (such as reading short prose passages), while ignoring those elements that are not (such as novels). Subjects that are not tested—including civics, art, and history—receive little attention.

Or, to take an example from the world of business. In 2011 the Wells Fargo bank set high quotas for its employees to sign up customers who were interested in one of its products (say, a deposit account) for additional services, such as overdraft coverage or credit cards. For the bank’s employees, failure to reach the quota meant working additional hours without pay and the threat of termination. The result: to reach their quotas, thousands of bankers resorted to low-level fraud, with disastrous effects for the bank. It was forced to pay a fortune in fines, and its stock price dropped.

Why is the book called The Tyranny of Metrics?

Because it helps explain and articulate the sense of frustration and oppression that people in a wide range of organizations feel at the diversion of their time and energy to performance measurement that is wasteful and counterproductive.

What sort of organizations does the book deal with?

There are chapters devoted to colleges and universities, K-12 education, medicine and health care, business and finance, non-profits and philanthropic organizations, policing, and the military. The goal is not to be definitive about any of these realms, but to explore instances in which metrics of measured performance have been functional or dysfunctional, and then to draw useful generalizations about the use and misuse of metrics.

What sort of a book is it? Does it belong to any particular discipline or political ideology?

It’s a work of synthesis, drawing on a wide range of studies and analyses from psychology, sociology, economics, political science, philosophy, organizational behavior, history, and other fields. But it’s written in jargon-free prose that doesn’t require prior knowledge of any of these fields. Princeton University Press has it classified under “Business,” “Public Policy,” and “Current Affairs.” That’s accurate enough, but it only begins to suggest the ubiquity of the cultural pattern that the book depicts, analyzes, and critiques. The book makes use of conservative, liberal, Marxist, and anarchist authors—some of whom have surprising areas of analytic convergence.

What’s the geographic scope of the book?

In the first instance, the United States. There is also a lot of attention to Great Britain, which in many respects was at the leading edge of metric fixation in the government’s treatment of higher education (from the “Teaching Quality Assessment” through the “Research Excellence Framework”), health care (the NHS) and policing, under the rubric of “New Public Management.” From the US and Great Britain, metric fixation—often carried by consultants touting “best practice”—has spread to Continental Europe, the Anglosphere, Asia, and especially China (where the quest for measured performance and university rankings is having a particularly pernicious effect on science and higher education).

Is the book simply a manifesto against performance measurement?

By no means. Drawing on a wide range of case studies from education to medicine to the military, the book shows how measured performance can be developed and used in positive ways.

Who do you hope will read the book?

Everyone who works in an organization, manages an organization, or supervises an organization, whether in the for-profit, non-profit, or government sector. Or anyone who wants to understand this dominant organizational culture and its intrinsic weaknesses.

Jerry Z. Muller is the author of many books, including Adam Smith in His Time and Ours and Capitalism and the Jews. His writing has appeared in the New York Times, the Wall Street Journal, the Times Literary Supplement, and Foreign Affairs, among other publications. He is professor of history at the Catholic University of America in Washington, D.C., and lives in Silver Spring, Maryland.

Jonathan Haskel & Stian Westlake on Capitalism without Capital

Early in the twenty-first century, a quiet revolution occurred. For the first time, the major developed economies began to invest more in intangible assets, like design, branding, R&D, and software, than in tangible assets, like machinery, buildings, and computers. For all sorts of businesses, from tech firms and pharma companies to coffee shops and gyms, the ability to deploy assets that one can neither see nor touch is increasingly the main source of long-term success. But this is not just a familiar story of the so-called new economy. Capitalism without Capital shows that the growing importance of intangible assets has also played a role in some of the big economic changes of the last decade.

What do you mean when you say we live in an age of Capitalism without Capital?

Our book is based on one big fact about the economy: that the nature of the investment that businesses do has fundamentally changed. Once businesses invested mainly in things you could touch or feel, like buildings, machinery, and vehicles. But more and more investment now goes into things you can’t touch or feel: things like research and development, design, organizational development—‘intangible’ investments. Today, in developed countries, businesses invest more each year in intangible assets than in tangible ones. But they’re often measured poorly or not at all in company accounts or national accounts. So there is still a lot of capital about, but it has done a sort of vanishing act, both physically and from the records that businesses and governments keep.

What difference does the rise of intangible investments make?

The rise of intangible investment matters because intangible assets tend to behave differently from tangible ones—they have different economic properties. In the book we call these properties the 4S’s—scalability, sunkenness, synergies, and spillovers. Intangibles can be used again and again, they’re hard to sell if a business fails, they’re especially good when you combine them, and the benefits of intangible investment often end up accruing to businesses other than the ones that make them. We argue that this change helps explain all sorts of important concerns people have about today’s economy, from why inequality has risen so much, to why productivity growth seems to have slowed down.

So is this another book about tech companies?

It’s much bigger than that. It’s true that some of the biggest tech companies have lots of very valuable intangibles, and few tangibles. Google’s search algorithms, software, and prodigious stores of data are intangibles; Apple’s design, brand, and supply chains are intangibles; Uber’s networks of drivers and users are intangible assets. Each of these intangibles is worth billions of dollars. But intangibles are everywhere. Even brick and mortar businesses like supermarkets or gyms rely on more and more intangible assets, such as software, codified operating procedures, or brands. And the rise of intangibles is a very long-term story: research by economists like Carol Corrado suggests that intangibles investment has been steadily growing since the early twentieth century, long before the first semiconductors, let alone the Internet.

Who will do well from this new intangible economy?

The intangible economy seems to be creating winners and losers. From a business point of view, we know that around the world, there’s a growing gap between the leading businesses in any given industry and the rest. We think this leader-laggard gap is partly caused by intangibles. Because intangibles are scalable and have synergies with one another, companies that have valuable intangibles will do better and better (and have more incentives to invest in more), while small and low performing companies won’t, and will lag ever further behind.

There is a personal dimension to this too. People who are good at combining ideas, and who are open to new ideas, will do better in an economy where there are lots of synergies between different assets. This will be a boon for educated, open-minded people, people with political, legal, and social connections, and for people who live in cities (where ideas tend to combine easily with one another). But others risk being left further behind.

Does this help explain the big political changes in recent years?

Yes—after the EU referendum in the UK and the 2016 presidential election in the US, a lot of pundits were asking why so many people in so-called “left behind” communities voted for Brexit or Donald Trump. Some people thought they did so for cultural reasons, while others argued the reasons were mainly economic. But we would argue that in an intangible economy these two reasons are linked: more connected, cosmopolitan places tend to do better economically in an intangible economy, while left-behind places suffer from an alienation that is both economic and cultural.

You mentioned that the rise of intangible investment might help explain why productivity growth is slowing. Why is that?

Many economists and policymakers worry about so-called secular stagnation: the puzzling fact that productivity growth and investment seem to have slowed down, especially since 2009, even though interest rates are low and corporate profits are high. We think the growing importance of intangibles can help explain this in a few ways.

  • There is certainly some under-measurement of investment going on—but as it happens this explains only a small part of the puzzle.
  • The rate of growth of intangible investment has slowed a bit since 2009. This seems to explain part of the slowdown in growth (and also helps explain why the slowdown has been mainly concentrated in total factor productivity).
  • The gap between leading firms (with lots of intangibles) and laggard firms (with few) may have created a scenario where a few firms are investing in a lot of intangibles (think Google and Facebook) but for most others, it’s not worth it, since their more powerful competitors are likely to get the spillover benefits.

Does the intangible economy have consequences for investors?

Yes! Company accounts generally don’t record intangibles (except, haphazardly, as “goodwill” after an acquisition). This means that, as intangible assets become more important, corporate balance sheets tell investors less and less about the true value of a company. Much of what equity analysts spend their days doing is, in practice, trying to value intangibles.

And there’s lots of value to be had here: research suggests that equity markets undervalue intangibles like organizational development, and encourage public companies to underinvest in intangibles like R&D. But informed investors can take advantage of this—which can benefit both their own returns and the performance of the economy.

Jonathan, you’re an academic, and Stian, you are a policymaker. How did you come to write this book together?

We started working together in 2009 on the Nesta Innovation Index, which applied some of the techniques Jonathan had worked on for measuring intangibles to build an innovation measure for the UK. The more we thought about it, the clearer it became that intangibles helped explain all sorts of things. Ryan Avent from The Economist asked us to write a piece for their blog about one of these puzzles, and we enjoyed doing that so much we thought we would try writing a book. One of the most fun parts of writing the book was being able to combine the insights from academic economic research on intangibles and innovation with practical insights from innovation policy.

Jonathan Haskel is professor of economics at Imperial College Business School. Stian Westlake is a senior fellow at Nesta, the UK’s national foundation for innovation. Haskel and Westlake are cowinners of the 2017 Indigo Prize.

Geoff Mulgan on Big Mind: How Collective Intelligence Can Change Our World

A new field of collective intelligence has emerged in the last few years, prompted by a wave of digital technologies that make it possible for organizations and societies to think at large scale. This “bigger mind”—human and machine capabilities working together—has the potential to solve the great challenges of our time. So why do smart technologies not automatically lead to smart results? Gathering insights from diverse fields, including philosophy, computer science, and biology, Big Mind reveals how collective intelligence can guide corporations, governments, universities, and societies to make the most of human brains and digital technologies. Highlighting differences between environments that stimulate intelligence and those that blunt it, Geoff Mulgan shows how human and machine intelligence could solve challenges in business, climate change, democracy, and public health. Read on to learn more about the ideas in Big Mind.

So what is collective intelligence?

My interest is in how thought happens at a large scale, involving many people and often many machines. Over the last few years many experiments have shown how thousands of people can collaborate online analyzing data or solving problems, and there’s been an explosion of new technologies to sense, analyze and predict. My focus is on how we use these new kinds of collective intelligence to solve problems like climate change or disease—and what risks we need to avoid. My claim is that every organization can work more successfully if it taps into a bigger mind—mobilizing more brains and computers to help it.

How is it different from artificial intelligence?

Artificial intelligence is going through another boom, embedded in everyday things like mobile phones and achieving remarkable breakthroughs in medicine or games. But for most things that really matter we need human intelligence as well as AI, and an overreliance on algorithms can have horrible effects, whether in financial markets or in politics.

What’s the problem?

The problem is that although there’s huge investment in artificial intelligence there’s been little progress in how intelligently our most important systems work—democracy and politics, business and the economy. You can see this in the most everyday aspect of collective intelligence—how we organize meetings, which ignores almost everything that’s known about how to make meetings effective.

What solutions do you recommend?

I show how you can make sense of the collective intelligence of the organizations you’re in—whether universities or businesses—and how they can become better. Much of this is about how we organize our information commons. I also show the importance of countering the many enemies of collective intelligence—distortions, lies, gaming, and trolls.

Is this new?

Many of the examples I look at are quite old—like the emergence of an international community of scientists in the 17th and 18th centuries, the Oxford English Dictionary which mobilized tens of thousands of volunteers in the 19th century, or NASA’s Apollo program which at its height employed over half a million people in more than 20,000 organizations. But the tools at our disposal are radically different—and more powerful than ever before.

Who do you hope will read the book?

I’m biased but think this is the most fascinating topic in the world today—how to think our way out of the many crises and pressures that surround us. But I hope it’s of particular interest to anyone involved in running organizations or trying to work on big problems.

Are you optimistic?

It’s easy to be depressed by the many examples of collective stupidity around us. But my instinct is to be optimistic that we’ll figure out how to make the smart machines we’ve created serve us well, and that we could be on the cusp of a dramatic enhancement of our shared intelligence. That’s a pretty exciting prospect, and much too important to be left in the hands of the geeks alone.

Geoff Mulgan is chief executive of Nesta, the UK’s National Endowment for Science, Technology and the Arts, and a senior visiting scholar at Harvard University’s Ash Center. He was the founder of the think tank Demos and director of the Prime Minister’s Strategy Unit and head of policy under Tony Blair. His books include The Locust and the Bee.

William A. P. Childs on Greek Art and Aesthetics in the Fourth Century B.C.

Greek Art and Aesthetics in the Fourth Century B.C. analyzes the broad character of art produced during this period, providing in-depth analysis of and commentary on many of its most notable examples of sculpture and painting. Taking into consideration developments in style and subject matter, and elucidating political, religious, and intellectual context, William A. P. Childs argues that Greek art in this era was a natural outgrowth of the high classical period and focused on developing the rudiments of individual expression that became the hallmark of the classical in the fifth century. Read on to learn more about fourth century B.C. Greek art:

Why the fourth century?

The fourth century BCE has been neglected in scholarly treatises, with a few recent exceptions: Blanche Brown, Anticlassicism in Greek Sculpture of the Fourth Century B.C., Monographs on Archaeology and the Fine Arts sponsored by the Archaeological Institute of America and the College Art Association of America 26 (New York, 1976); and Brunilde Ridgway, Fourth-Century Styles in Greek Sculpture, Wisconsin Studies in Classics (Madison, WI, 1997).

One reason is simply that taste has been antithetical to the character of the century. Thus literary critics disparaged the wild reassessments of mythology by Euripides at the end of the fifth century as well as his supposedly colloquial language, and treated the sophists as morally dishonest.

Socially the century was marked by continuous warfare and the rise of a new, rich elite. Individuals were as important as, or more important than, society/community; artists were thought to have individual styles that reflected their personal vision. This was thought to debase the grandness of the high classic and replace it with cheap sensationalism and a pluralism that defied straightforward categorization.

The age-old hostility to Persia was revived, it seems largely for political reasons, while Persian artistic influence permeated much of the ornament of the new, wealthy elite: mosaics, rich cloth, and metalwork. At the same time Persia was constantly meddling in Greek affairs, which produced a certain hypocritical political atmosphere.

And, finally, Philip of Macedon brought the whole democratic adventure of the fifth century to a close with the establishment of monarchy as the default political system, and Alexander brought the East into the new Hellenic or Hellenistic culture out of which Roman culture was to arise.

Clearly most of the past criticism is true; it is our response that has totally changed, one assumes, because our own period is in many respects very similar to the character of the fourth century.

What is the character of the art of the fourth century?

On the surface there is little change from the high classical style of the fifth century—the subject of art is primarily religion, in the form of votive reliefs and statues dedicated in sanctuaries. The art of vase-painting in Athens undergoes a slow decline in quality, with notable exceptions, and comes to an end as the century closes.

Though the function of art remains the same as previously, the physical appearance changes and changes again. At the end of the fifth century and into the first quarter of the fourth there is a nervous, linear style with strong erotic overtones. After about 370 the preference is for solidity and quiet poses. But what becomes apparent on closer examination is that there are multiple contemporary variations of the dominant stylistic structures. This has led to some difficulty in assigning convincing dates to individual works, though the difficulty is exaggerated. It is widely thought that the different stylistic variations are due to individual artists asserting their personal visions and interpretations of the human condition.

The literary sources, almost all of Roman date, do state that the famous artists, sculptors and painters, of the fourth century developed very individual styles that with training could be recognized in the works still extant. Since there are almost no original Greek statues preserved and no original panel paintings, it is difficult to evaluate these claims convincingly. But, since there are quite distinct groups of works that share broad stylistic similarities and these similarities agree to a large extent with the stylistic observations in the literary sources, it is at least possible to suggest that these styles are connected in some way with particular, named artists of the fourth century. However, rather than attributing works to the named artists, it seems wiser simply to identify the style and recognize that it conveys a particular character of the figure portrayed. This appears also applicable to vase-paintings that may reflect the styles of different panel painters. There are therefore Praxitelian and Skopaic sculptures and Parrhasian and Zeuxian paintings. Style conveys content.

The variety of styles as expressive tools indicates that there is a variety of content. A corollary of this fact is that the artist is presenting works that must be read by the viewer and therefore do not primarily represent social norms but are particular interpretations of both traditional and novel subjects: Aphrodite bathes, a satyr rests peacefully in the woods, and athletes clean themselves. In brief, the heroic and the divine are humanized and humans gain a psychological depth that allows portraits to suggest character.

Was the cultural response to these developments purely negative as most modern commentaries suggest?

The question of the reception of art and poetry in the Greek world, particularly of the archaic and classical periods, has occupied scholars for at least the last two hundred years. It has been amply documented that artisans and the people we consider artists were generally repudiated by the people composing the preserved texts of literature and historical commentary. For example, Plato is generally considered a conservative Philistine. Most modern commentators are appalled by his criticism of poetry and the plastic arts in all forms. Yet the English romantic poets of the late 18th and early 19th centuries thought Plato a kindred spirit. It was only in the late 19th and early 20th centuries that the negative assessment of Plato’s relation to poetry and art became authoritative. However one wishes to assess Plato’s own appreciation of poetry and art, it is eminently clear that he had an intimate knowledge of contemporary art. Equally, his criticism of people who praise art indicates that precisely what he criticizes is what Athenian society expected and praised. It does not require a large leap to surmise that Plato is the first art critic with a sophisticated, though somewhat disorganized, approach. His student, Aristotle, had the organization and perhaps a more nuanced view of art, but it may not be an exaggeration to suggest that Aristotle was not as sensitive to art as was his teacher.

The fact of the matter is that from Homer on, the descriptions of objects, though very rare, are uniformly very appreciative. For Homer the wonder of life-likeness is paramount, a quality that endures down to the fourth century despite the changing styles and patent abstractions before the fourth century. At least in the fourth century artists also became wealthy and must have managed large workshops. So the modern view that artisans/artists were considered inferior members of society appears to be a social evaluation by the wealthy and leisured.

In the fourth century BCE Greek artists embark on an inquiry into individual expression of profound insights into the human condition as well as social values. It is the conscious recognition of the varied expressive values of style that creates the modern concept of aesthetics and the artist.

William A.P. Childs is professor emeritus of classical art and archaeology at Princeton University.

Barry Eichengreen on How Global Currencies Work

At first glance, the modern history of the global economic system seems to support the long-held view that the leading world power’s currency—the British pound, the U.S. dollar, and perhaps someday the Chinese yuan—invariably dominates international trade and finance. In How Global Currencies Work, three noted economists provide a reassessment of this history and the theories behind the conventional wisdom. Read on to learn more about the two views of global currencies, changes in international monetary leadership, and more.

Your title refers to “two views” of global currencies. Can you explain?
We distinguish the “old view” and the “new view”—you can probably infer from the terminology to which view we personally incline. In the old view, one currency will tend to dominate as the vehicle for cross-border transactions at any point in time. In the past it was the British pound; more recently it has been the U.S. dollar; and in the future it may be the Chinese renminbi, these being the currencies of the leading international economies of the nineteenth, twentieth, and twenty-first centuries. The argument, grounded largely in theory, is that a single currency has tended to dominate, or will dominate, because it pays for investors and producers engaging in cross-border transactions to conform; specifically, it pays for them to do cross-border business in the same currency as their partners and competitors. This pattern reflects the convenience value of conformity—it reflects what economists refer to as “network externalities.” In this view, it pays to quote the prices of one’s exports in the same units in which they are quoted by other exporters; this makes it easy for customers to compare prices, enabling a newly competitive producer to break into international markets. It pays to denominate bonds marketed to foreign investors in the same currency as other international bonds, in this case to make it easier for investors to compare yields and maximize the demand for the bonds in question.

In what we call the new view, on the other hand, several national currencies can coexist—they can play consequential international roles at the same point in time. In the modern world, it is argued, network externalities are not all that strong. For one thing, interchangeability costs are low as a result of modern financial technology. The existence of deep and liquid markets allows investors and exporters to do business in a variety of different currencies and switch all but effortlessly between them—to sell one currency for another at negligible cost. The existence of hedging instruments allows those investors to insure themselves against financial risks—specifically, against the risk that prices will move in unexpected ways. Prices denominated in different currencies are easy to compare, since everyone now carries a currency converter in his or her pocket, in the form of a smartphone. These observations point to the conclusion, which is compelling in our view, that several national currencies can simultaneously serve as units of account, means of payment and stores of value for individuals, firms and governments engaged in cross-border transactions.

In our book we provide several kinds of evidence supporting the relevance of the new view, not just today but in the past as well. We suggest that the old view is an inaccurate characterization of not just the current state of affairs but, in fact, of the last century and more of international monetary history.

What exactly motivated you to write this book?
We were worried by the extent to which the old view, which pointed to a battle to the death for international monetary supremacy between the dollar and the renminbi, continues to dominate scholarly analysis and popular discourse. This misapprehension gives rise to concerns that we think are misplaced, and to policy recommendations that we think are misguided. Renminbi internationalization, the technical name for policies intended to foster use of China’s currency in cross-border transactions not just within China itself but among third countries as well, is not in fact an existential threat to the dollar’s international role. To the contrary, it is entirely consistent with continued international use of the greenback, or so our evidence suggests.

In addition, making a convincing case for the new view requires marshaling historical, institutional, and statistical material and analyzing the better part of a century. We thought this extensive body of evidence cried out for a book-length treatment.

To what revisions of received historical wisdom does your analysis point?
We use that historical, institutional and statistical analysis to show that the old view of single-currency dominance is inaccurate not just for today but also as a description of the situation in the first half of the twentieth century and even in the final decades of the nineteenth. In the 1920s and 1930s, the pound sterling and the dollar both in fact played consequential international roles. Under the pre-World War I gold standard, the same was true of sterling, the French franc and the German mark. Our reassessment of the historical record suggests that the coexistence of multiple international currencies, the state of affairs toward which we are currently moving, is not the exception but in fact the rule. There is nothing unprecedented or anomalous about it.

And, contrary to what is sometimes asserted, we show that there is no necessary association between international currency competition and financial instability. The classical gold standard was a prototypical multiple international and reserve currency system by our reading of the evidence. But, whatever its other defects, the gold standard system was a strikingly stable exchange-rate arrangement.

Finally, we show that, under certain circumstances at least, international monetary and financial leadership can be gained and lost quickly. This is contrary to the conventional wisdom that persistence and inertia are overwhelmingly strong in the monetary domain owing to the prevalence of network effects. It is contrary to the presumption that changes of the guard are relatively rare. It is similarly contrary to the presumption that, once an international currency, always an international currency.

So you argue, contrary to conventional wisdom, that changes in international monetary leadership can occur quickly under certain circumstances. But what circumstances exactly?
The rising currency has to confront and overcome economic and institutional challenges, while the incumbent has to find it hard to keep up. Consider the case of the U.S. dollar. As late as 1914 the dollar played essentially no international role, despite the fact that the U.S. had long since become the single largest economy. This initial position reflected the fact that although the U.S. had many of the economic preconditions in place—not only was it far and away the largest economy, but it was also the number-one exporter—it lacked the institutional prerequisites. Passage of the Federal Reserve Act in 1913 corrected this deficiency. The founding of the Fed created a lender and liquidity provider of last resort. And the Federal Reserve Act authorized U.S. banks to branch abroad, essentially for the first time. World War I, which disrupted London’s foreign financial relations, meanwhile created an opening, of which the U.S. took full advantage. Over the first post-Fed decade, the greenback quickly rose to international prominence. It came to be widely used internationally, fully matching the role of the incumbent international currency, the British pound sterling, already by the middle of the first post-World War I decade.

The shift to dollar dominance after World War II was equally swift. Again the stage was set by a combination of economic and institutional advances on the side of the rising power and difficulties for the incumbent. The U.S. emerged from World War II significantly strengthened economically, the UK significantly weakened. In terms of institutions, the U.S. responded to the unsettled monetary and financial circumstances of the immediate postwar period with the Marshall Plan and other initiatives extending the country’s international financial reach. The UK, meanwhile, was forced to resort to capital controls and stringent financial regulation, which limited sterling’s appeal.

What are the implications of your analysis for the future of the international monetary and financial system?
The implications depend on the policies adopted, prospectively, by the governments and central banks that are the issuers of the potential international currencies. Here we have in mind not just the dollar and the renminbi but also the euro, the Euro Area being the third economy, along with the U.S. and China, with the economic scale that is a prerequisite for being able to issue a true international currency. If all three issuers follow sound and stable policies, then there is no reason why their three currencies can’t share the international stage for the foreseeable future—in effect there’s no reason why they can’t share that stage indefinitely. The global economy will be better off with three sources of liquidity, compared to the current status quo, where it is all but wholly dependent on one.

In contrast, if one or more of the issuers in question follows erratic policies, investors will flee its currency, since in a world of multiple international and reserve currencies they will have alternatives—they will have somewhere to go. The result could then be sharp changes in exchange rates. The consequence could be high volatility that would wreak havoc with national and international financial markets. So while a world of multiple international currencies has benefits, it also entails risks. Policy choices—and politics—will determine whether the risks or benefits dominate in the end.

Barry Eichengreen is the George C. Pardee and Helen N. Pardee Professor of Economics and Political Science at the University of California, Berkeley. His books include Hall of Mirrors, Exorbitant Privilege, Globalizing Capital, and The European Economy since 1945. Arnaud Mehl is principal economist at the European Central Bank. Livia Chiţu is an economist at the European Central Bank.

Josephine Quinn: The Phoenicians never existed

The Phoenicians traveled the Mediterranean long before the Greeks and Romans, trading, establishing settlements, and refining the art of navigation. But who these legendary sailors really were has long remained a mystery. In Search of the Phoenicians by Josephine Quinn makes the startling claim that the “Phoenicians” never actually existed. Taking readers from the ancient world to today, this monumental book argues that the notion of these sailors as a coherent people with a shared identity, history, and culture is a product of modern nationalist ideologies—and a notion very much at odds with the ancient sources. Read on to learn more about the Phoenicians.

Who were the Phoenicians?

The Phoenicians were the merchants and long-distance mariners of the ancient Mediterranean. They came from a string of city-states on the coast of the Levant including the ports of Tyre, Sidon, Byblos, and Beirut, all in modern Lebanon, and spoke very similar dialects of a language very similar to Hebrew. Their hinterland was mountainous and land connections were difficult even between these neighboring cities themselves, so the Phoenicians were very much people of the sea. They had a particular genius for science and navigation, and as early as the ninth or tenth century BCE, their ships were sailing the full length of the Mediterranean and out through the straits of Gibraltar to do business on the Atlantic coast of Spain, attracted by the precious metals of the west. Levantine migrants and traders began to settle in the Western Mediterranean at least a century before Greeks followed suit, founding new towns in Spain, Sardinia, Sicily, and North Africa. Their biggest Western colony was at Carthage in modern Tunisia, a city which eventually eclipsed the homeland in importance, and under its brilliant general Hannibal vied with Rome for control of the Mediterranean: when Carthage was eventually destroyed by Roman troops in 146 BCE, it was said to be the wealthiest city in the world.

But doesn’t your book suggest that the Phoenicians didn’t even exist?

Not quite! The people we call Phoenician certainly existed as individuals, and they often have fascinating stories, from the Carthaginian noblewoman Sophonisba, who married not one but two warring African kings, to the philosopher Zeno of Kition on Cyprus, who moved to Athens and founded the Stoic school of philosophy. But one of the really intriguing things about them is how little we know about how they saw themselves—and my starting point in this book is that we have no evidence that they saw themselves as a distinct people or, as we might say, an ethnic group.

“Phoenician” is what the Greeks called these people, but we don’t find anyone using that label to describe themselves before late antiquity, and although scholars have sometimes argued that they called themselves “Canaanite,” a local term, one of the things I show in my book is how weak the evidence for that hypothesis really is. Of course, to say that they didn’t think of themselves as a distinct people just because we don’t have any evidence for them describing themselves as such is an argument from silence, and it could be disproved at any moment with the discovery of a new inscription. But in the meantime, my core argument is that when we don’t know whether people thought of themselves as a collective, we shouldn’t simply assume that they did on the basis of ancient or modern parallels, or because ethnic identity seems “natural.”

So how did the Phoenicians see themselves?

This is the question I’m most interested in. Although there is no surviving Phoenician literature that might help us understand the way these people saw the world, Phoenician inscriptions reveal all sorts of interesting and sometimes surprising things that people wanted to record for posterity. They certainly saw themselves as belonging to their own cities, like the Greeks: they were “Byblians,” or “Sidonians,” or “Sons of Tyre.” But one of the things that I suggest in my book is that in inscriptions they present themselves first and foremost in terms of family: where a Greek inscription might give someone’s own name and that of their father, a Phoenician one will often go back several generations—16 or 17 in some cases. And then Phoenician-speaking migrants develop new practices of identification, including regional ones. We see particularly close relationships developing between neighboring settlements in the diaspora, and between people who are from the same part of the homeland. But we also see new, Western identities developing—‘Sardinian,’ for instance—which bring together Phoenicians, Greeks, and the local population.

And I think we can get further by looking at the evidence for cultural practices that Phoenician speakers share—or don’t share. So child sacrifice rituals seem to be limited to a small number of Western settlements around Carthage, but the cult of the god Melqart, the chief civic deity of Tyre, is practiced by people of Levantine origin all over the Mediterranean. And on my interpretation, Melqart’s broad popularity is quite a late development—in the fifth or fourth century BCE—which would suggest that a sense of connectivity between Phoenician-speakers in the diaspora got stronger the longer people had been away from their homeland. But at the same time, the cult reached out to other Mediterranean populations, since Melqart was celebrated by Greeks (and later Romans) as the equivalent of their own god Herakles.

Politics played a part in the construction of identities as well, and this is particularly apparent in one episode where an attempt seems to have been made to impose the notion of ‘being Phoenician’ on other people. By the late fifth century BCE Carthage was the dominant power in the western Mediterranean, controlling trade routes and access to ports, taxing defeated enemies, and beginning to acquire overseas territory as well, at the expense of other Levantine diaspora settlements. And at pretty much exactly this time they begin to mint coinage, and their very first coins have an image of a palm tree—or, in Greek, a phoinix, which is also the Greek word for Phoenician. It’s hard to resist the impression that celebrating a common ‘Phoenician’ heritage or identity put a useful political spin on the realities of Carthaginian imperial control.

If there’s so little evidence for genuine Phoenician identity in the ancient world, where does the modern idea of “the Phoenicians” come from?

The name itself comes from the Greeks, as we’ve already said, but they didn’t use it to delineate a specific ethnic or cultural group: for them, “Phoenician” was often a pretty vague and general term for traders and sailors from the Levant; there wasn’t a lot of cultural or ethnic content to it. You don’t get the same kind of detailed ethnographic descriptions of Phoenicians as you do of, for instance, Egyptians and Greeks. And the Romans followed suit: in fact, their particular focus on Carthage meant that the Latin words for “Phoenician”—poenus and punicus—were often used to mean ‘North African’ in general.

It wasn’t until the modern period that the idea of the Phoenicians as a coherent ethnic group fully emerged, in late nineteenth century European histories of Phoenicia that relied heavily on new and specifically European ideas about nationalism and natural cultures. This is when we first find them described as a racial group, with an “ethnic character.” And these notions were picked up enthusiastically in early twentieth century Lebanon, where the idea that the Lebanese had formed a coherent nation since antiquity was an important plank of the intellectual justification for a new Lebanese state after the collapse of the Ottoman empire—another story I tell in the book.

A more recent example of this comes from Anthony D. Smith’s wonderful 1988 book, The Ethnic Origins of Nations, which argues that although true nations are a modern phenomenon, they have precursors in ancient and medieval ethno-cultural communities. Among his ancient examples are what he sees as ‘pan-Phoenician sentiments’ based on a common heritage of religion, language, art and literature, political institutions, dress, and forms of recreation. But my argument is that in the case of the Phoenicians at least we are not dealing with the ancient ethnic origins of modern nations, but with the modern nationalist origins of an ancient ethnicity.

Is there any truth to the stories that the ancient Phoenicians reached America?

I’m afraid not! It’s an old idea: in the early eighteenth century Daniel Defoe argued, not long after he published Robinson Crusoe, that the Carthaginians must have colonized America on the basis of the similarities he saw between them and the indigenous Americans, in particular in relation to “their idolatrous Customs, Sacrificings, Conjurings, and other barbarous usages in the Worship of their Gods.” But the only real evidence that has ever been proposed for this theory, an inscription “found” in Brazil in 1872, was immediately diagnosed by specialists as a fake.

The idea that Phoenicians got to Britain, and perhaps even Ireland, makes more sense. Cornish tin could certainly have been one attraction. There’s no strong evidence, however, for Phoenician settlement on either island, though the possibility captivated local intellectuals in the early modern period. One of the chapters I most enjoyed writing in this book is about the way that scholars in England concocted fantasies of Phoenician origins for their homeland, in part as a way of differentiating their own maritime power from the more territorial, and so “Roman,” French empire—at the same time as the Irish constructed a Phoenician past of their own that highlighted the similarity of their predicament under Britain’s imperial yoke to that of noble Carthage oppressed by brutal Rome.

These are of course just earlier stages in the same nationalist ‘invention of the Phoenicians’ that came to fruition in the nineteenth century histories we’ve already discussed: stories about Phoenicians helped the British and the Irish articulate their own national identities, which in turn further articulated the idea of the Phoenicians themselves.

Why did you write this book?

One reason was that I really wanted to write a book about the ancient Mediterranean that wasn’t limited to Greece and Rome—though plenty of Greeks and Romans snuck in! But there’s another reason as well: “identity” has been such a popular academic topic in recent decades, and I wanted to explore its limits and even limitations as an approach to the ancient world. There are lots of reasons to think that a focus on ethnic identity, and even self-identity more generally, is a relatively modern phenomenon, and that our ideas about the strength and prevalence of ancient ethnic sentiments might be skewed by a few dramatic but unusual examples in places like Israel and perhaps Greece. I wanted to look at a less well-known but perhaps more typical group, to see what happens if we investigate them not as “a people,” but simply as people.

 

Josephine Quinn is associate professor of ancient history at the University of Oxford and a fellow of Worcester College. She is the coeditor of The Hellenistic West and The Punic Mediterranean.