Bryan Wagner on a controversial folktale: The Tar Baby

Perhaps the best-known version of the tar baby story was published in 1880 by Joel Chandler Harris in Uncle Remus: His Songs and His Sayings, and popularized in Song of the South, the 1946 Disney movie. Other versions of the story, however, have surfaced in many other places throughout the world, including Nigeria, Brazil, Corsica, Jamaica, India, and the Philippines. The Tar Baby: A Global History by Bryan Wagner offers a fresh analysis of this deceptively simple story about a fox, a rabbit, and a doll made of tar and turpentine, tracing its history and its connections to slavery, colonialism, and global trade. Wagner explores how the tar baby story, thought to have originated in Africa, came to exist in hundreds of forms on five continents.

What is the tar baby story?

BW: There are hundreds of versions of the story, involving many characters and situations. It’s not possible to summarize the story in a way that can encompass all of its variants. The story does, however, follow a broad outline. I provide the following example in the book: “A rabbit and a wolf are neighbors. In the summer, the rabbit wastes his time singing songs, smoking cigarettes, and drinking wine, while the wolf stays busy working in his fields. The rabbit then steals from the wolf all winter. The next year, the wolf decides he will catch the rabbit by placing a tar baby, a lifelike figurine made from tar softened with turpentine, on the way to his fields. When the rabbit meets the tar baby in the road, and the tar baby does not reply to his greetings, the rabbit becomes angry and punches, kicks, and head-butts the tar baby until he is stuck at five points and left to the mercy of the wolf. The rabbit, however, is not trapped for long as he tricks the wolf into tossing him into the briar patch where he makes his escape.” In addition to this summary, I also provide an appendix with versions of the story transcribed in Nigeria, Tanzania, South Africa, the Cape Verde Islands, the Bahamas, Corsica, Brazil, Mexico, Colombia, the Philippines, and the United States. I also include a map of these stories representing when and where they were collected.

Why did you write a book about the tar baby story?

BW: The tar baby has some familiar associations. People think about the ways in which the term “tar baby” has been used as a racial slur. Or they think about it as a figure of speech referring to a situation that gets worse the harder you try to solve it. Or they think about the version of the story that was published by Joel Chandler Harris in Uncle Remus: His Songs and His Sayings (1881). Or they think about the adaptation of the Uncle Remus stories in the Walt Disney movie Song of the South (1946). Most people don’t know that the story of the tar baby was not invented by Harris. They don’t know that the story exists in hundreds of versions in the oral tradition that were collected on five continents in the late nineteenth and early twentieth centuries. Scholars during these decades were fascinated by the story. They wanted to know how the story came to exist in all of these far-flung places. Some people, including Harris, thought the tar baby story was a key example of the cultural tradition that slaves brought with them from Africa to the Americas. Others believed that the tar baby originated not in Africa but in India or France. Still others believed it was invented by American Indians and borrowed by African Americans. The argument was fierce, and the stakes were high. Did culture belong to a race of people? Or did it cross over racial lines? Did culture construct or transcend racial identity? These questions have stayed with us even as they have been applied to a wide range of examples. It is important to recognize that the tar baby was one of the earliest and most important cases through which these questions were formulated.

The tar baby story is important to ideas about culture and race. Is it also important for politics?

BW: Yes, that’s right. Increasingly over the twentieth century, scholars looked to trickster stories like the tar baby for evidence of how peasants and slaves reflected on the politics of everyday life.

Peasants and slaves told stories like the tar baby, it was argued, to share lessons about how to survive in a hostile world where the cards were stacked against you. These ideas were essential to intellectual movements like the new social history and certain strains of political anthropology. At the same time, other scholars have questioned this approach, arguing that it turns politics into the uninhibited pursuit of self-interest, failing to account for the importance of cooperation. I think that scholars have been right to bring these big questions about culture and politics to the story, but I also think that the answers they have discovered in the story have been insufficient. My book approaches the tar baby as a collective experiment in political philosophy. It argues that we need to understand the ways in which the story addresses universal problems—freedom and captivity, labor and value, crime and custom—if we are to gauge its powerful allure for the slaves, fugitives, emigrants, sailors, soldiers, and indentured workers who brought it all the way around the world.

What about the story’s longstanding association with racism? Is “tar baby” a racist term?

BW: That last one is a complex question, but the short answer is yes. Some people like William Safire and John McWhorter have argued that the racism associated with the term “tar baby” is a recent invention, and that the term’s original meaning is not about race. This is disproven by the fact that there are examples from the early nineteenth century where the term was already being used as a racial slur specifically directed at African American children. Harris published his first version of the tar baby story in the Atlanta Constitution at a time when the newspaper was using the term as a racial slur in its news articles. The term’s racism is not incidental to the story. This is also confirmed by the fact that illustrations from early versions of the story represent the tar baby as having phenotypically African facial features. In complex ways, the story is about the history of racism, and for this reason, I don’t think the term should be used in an offhand way as a figure of speech for an intractable situation. This usage is offensive not least for its willful ignorance of the long history of suffering and exploitation that the story attempts in its own way to comprehend.

Bryan Wagner is associate professor in the English Department at the University of California, Berkeley. He is the author of Disturbing the Peace: Black Culture and the Police Power after Slavery and The Tar Baby: A Global History.

Dennis Rasmussen: National Friendship Day

Today, August 6, is National Friendship Day. Rather than celebrate this Hallmark holiday by sending a slew of greeting cards, as its originators hoped, I propose to use it to raise and answer a fascinating but seldom-asked question: What was the greatest friendship in the history of philosophy?

I am convinced that the answer is clear, once the leading contenders have been considered: the greatest of all philosophical friendships was that of David Hume and Adam Smith. Hume is, after all, widely regarded as the most important philosopher ever to write in English, and Smith is almost certainly history’s most famous theorist of commercial society, or what we would now call capitalism. They are two of the most significant figures in the entire Western tradition, and they were best friends for most of their adult lives. My new book, The Infidel and the Professor, follows the course of Hume and Smith’s friendship from their first meeting in 1749 until Hume’s death more than a quarter of a century later, examining both their personal interactions and the impact that each had on the other’s outlook.

During the course of writing the book I frequently invited fellow political theorists, philosophers, and intellectual historians to nominate alternative friendships as the greatest in the history of philosophy. Most people’s first instinct was to say Socrates and Plato, but given the four-decade age disparity between them, their relationship was probably more one of teacher and student, or perhaps mentor and protégé, than one of equals, and in any case the record of their personal interactions is scant. Ditto for Plato and Aristotle. Locke and Newton admired one another, but could hardly be said to be close friends. Heidegger and Arendt had more of a (stormy) romantic relationship than a friendship, as did Sartre and de Beauvoir (with somewhat less drama). As for Montaigne and La Boétie, Lessing and Mendelssohn, Bentham and James Mill, Hegel and Schelling, Marx and Engels, and Whitehead and Russell, in each of these cases at least one member of the pair falls considerably below Hume and Smith in terms of impact and originality. Emerson and Thoreau approach closer to their level, if we choose to count them as philosophers rather than literary figures. The strongest contenders among philosophers are probably Erasmus and Thomas More, but in terms of influence and depth of thought most would give the clear nod to Hume and Smith.

Given their stature and influence it is remarkable that no book has heretofore been written on Hume and Smith’s personal or intellectual relationship. One likely reason for this is that friendships are more difficult to bring to life than feuds and quarrels: conflict makes for high drama, while camaraderie does not. It is perhaps not surprising, then, that there have been many books written on philosophical clashes—think of David Edmonds and John Eidinow’s Wittgenstein’s Poker and Rousseau’s Dog, Yuval Levin’s The Great Debate, Steven Nadler’s The Best of All Possible Worlds, Matthew Stewart’s The Courtier and the Heretic, and Robert Zaretsky and John Scott’s The Philosophers’ Quarrel, to name only a few recent titles—but far fewer on philosophical friendships. Even biographies of Hume tend to devote less attention to his long friendship with Smith than to his brief quarrel with Rousseau, which, sensational as it may have been, was not nearly as central to Hume’s life and thought.

The relative lack of attention paid to philosophical friendships, while understandable, is unfortunate. Friendship was understood to be a key component of philosophy and the philosophical life from the very beginning, as even a cursory reading of Plato or Aristotle should remind us. The latter famously claimed that friendship is the one good without which no one would choose to live even if he possessed all other goods, and Hume and Smith clearly concurred. Hume held that “friendship is the chief joy of human life,” and Smith proclaimed that the esteem and affection of one’s friends constitutes “the chief part of human happiness.” Indeed, Hume proposed a small thought experiment to prove Aristotle’s point. “Let all the powers and elements of nature conspire to serve and obey one man,” he suggests. “Let the sun rise and set at his command: The sea and rivers roll as he pleases, and the earth furnish spontaneously whatever may be useful or agreeable to him. He will still be miserable, till you give him some one person at least, with whom he may share his happiness, and whose esteem and friendship he may enjoy.”

Aristotle divides friendships into three types: those motivated by utility, those motivated by pleasure, and—the highest and rarest of the three—those motivated by virtue or excellence. Smith draws a similar distinction in his first book, The Theory of Moral Sentiments, though he insists that the latter alone “deserve the sacred and venerable name of friendship.” Smith’s relationship with Hume represents a nearly textbook model of this kind of friendship: a stable, enduring, reciprocal bond that arises not just from serving one another’s interests or from taking pleasure in one another’s company, but also from the shared pursuit of a noble end—in their case, philosophical understanding.

An examination of Hume and Smith’s personal and intellectual relationship thus allows for a different kind of reflection on friendship than is found in the works of Plato, Aristotle, Cicero, Montaigne, Bacon, and the like. Whereas these leading philosophers of friendship tend to analyze the concept in the abstract—the different forms that friendship takes, its roots in human nature, its relationship to self-interest, to romantic love, and to justice—a consideration of Hume and Smith allows us to see that rare thing, a philosophical friendship of the very highest level in action: a case study, as it were. As my book aims to show, it is a friendship very much worth celebrating.

Dennis C. Rasmussen is associate professor of political science at Tufts University. His books include The Pragmatic Enlightenment. He lives in Charlestown, Massachusetts.

Alexandra Logue: Are Faculty Missing in Action?

This post was originally published on the blog of Alexandra Logue.

Last fall, an article in Inside Higher Ed authored by Judith Shapiro, President of the Teagle Foundation and former President of Barnard College, made the following statement:

“For the most part, however, faculty members have simply been missing in action when it comes to dealing with campus upheavals around race and racism.”

I agree with this statement, but I would expand it to say that faculty members have frequently been missing in action with regard to all kinds of controversial issues.  At many (most?) institutions, faculty are rewarded with promotions, raises, and tenure first for their research (largely based on their individual efforts), second for their teaching (again, largely based on their individual efforts), and only third for their service, which would include working together with others to make their colleges congenial and productive places for the colleges’ diverse inhabitants.  The faculty who produce the most work of direct benefit to themselves are largely those faculty who keep to themselves, focus on their own work, and stay out of the way of college conflagrations.  Consistent with this statement, research has shown that faculty do not feel safe expressing views with which others may disagree until they have had the final promotion to full professor (not, as some people think, until they have tenure).

An example of these tendencies concerns credit transfer among the 19 undergraduate colleges of The City University of New York, where approximately 10,000 students transfer each fall. Credit transfer is a controversial subject, one reason being that whether the receiving college counts the credits can directly affect the college’s, as well as a department’s, funds, and whether faculty members have sufficient enrollment to teach certain courses. Although ensuring that credits transfer can benefit students, it can also mean depriving faculty and/or a college of something desirable to them. Thus it is no surprise that, although for over 40 years problems with credit transfer were seen as the worst problems facing CUNY students, and although the faculty issued some statements about those problems, the faculty took no action to solve them. When the central administration finally instituted a system (known as Pathways) that guaranteed credit transfer for some courses, and thus directly affected some faculty members’ courses, only then did some faculty spend significant amounts of time on the credit transfer issue, with most of those faculty objecting to Pathways, including by filing lawsuits against it. This prompted one CUNY Distinguished Professor, in his testimony at a public hearing on Pathways, to say to the faculty in the audience: “Where have you been? Where have you been for 40 years?”

Although there is nothing wrong with working hard to benefit oneself, we also need to provide clear incentives for faculty to work together for the benefit of students, as well as for the rest of the higher education community.

There is more about these issues in my forthcoming book Pathways to Reform:  Credits and Conflict at The City University of New York.

Alexandra W. Logue is a research professor at the Center for Advanced Study in Education at the Graduate Center, CUNY. From 2008 to 2014, she served as executive vice chancellor and university provost of the CUNY system. She is the author of The Psychology of Eating and Drinking and Self-Control: Waiting Until Tomorrow for What You Want Today. She lives in New York City.

Peter Ungar: It’s not that your teeth are too big: your jaw is too small

We hold in our mouths the legacy of our evolution. We rarely consider just how amazing our teeth are. They break food without themselves being broken, up to millions of times over the course of a lifetime; and they do it built from the very same raw materials as the foods they are breaking. Nature is truly an inspired engineer.

But our teeth are, at the same time, really messed up. Think about it. Do you have impacted wisdom teeth? Are your lower front teeth crooked or out of line? Do your uppers jut out over your lowers? Nearly all of us have to say ‘yes’ to at least one of these questions, unless we’ve had dental work. It’s as if our teeth are too big to fit properly in our jaws, and there isn’t enough room in the back or front for them all. It just doesn’t make sense that such an otherwise well-designed system would be so ill-fitting.

Other animals tend to have perfectly aligned teeth. Our distant hominin ancestors did too; and so do the few remaining peoples today who live a traditional hunting and gathering lifestyle. I am a dental anthropologist at the University of Arkansas, and I work with the Hadza foragers of Africa’s great rift valley in Tanzania. The first thing you notice when you look into a Hadza mouth is that they’ve got a lot of teeth. Most have 20 back teeth, whereas the rest of us tend to have 16 erupted and working. Hadza also typically have a tip-to-tip bite between the upper and lower front teeth; and the edges of their lowers align to form a perfect, flawless arch. In other words, the sizes of Hadza teeth and jaws match perfectly. The same goes for our fossil forebears and for our nearest living relatives, the monkeys and apes.

So why don’t our teeth fit properly in the jaw? The short answer is not that our teeth are too large, but that our jaws are too small to fit them in. Let me explain. Human teeth are covered with a hard cap of enamel that forms from the inside out. The cells that make the cap move outward toward the eventual surface as the tooth forms, leaving a trail of enamel behind. If you’ve ever wondered why your teeth can’t grow or repair themselves when they break or develop cavities, it’s because the cells that make enamel die and are shed when a tooth erupts. So the sizes and shapes of our teeth are genetically pre-programmed. They cannot change in response to conditions in the mouth.

But the jaw is a different story. Its size depends both on genetics and environment; and it grows longer with heavy use, particularly during childhood, because of the way bone responds to stress. The evolutionary biologist Daniel Lieberman at Harvard University conducted an elegant study in 2004 on hyraxes fed soft, cooked foods and tough, raw foods. Higher chewing strains resulted in more growth in the bone that anchors the teeth. He showed that the ultimate length of a jaw depends on the stress put on it during chewing.

Selection for jaw length is based on the growth expected, given a hard or tough diet. In this way, diet determines how well jaw length matches tooth size. It is a fine balancing act, and our species has had 200,000 years to get it right. The problem for us is that, for most of that time, our ancestors didn’t feed their children the kind of mush we feed ours today. Our teeth don’t fit because they evolved instead to match the longer jaw that would develop in a more challenging strain environment. Ours are too short because we don’t give them the workout nature expects us to.

There’s plenty of evidence for this. The dental anthropologist Robert Corruccini at Southern Illinois University has seen the effects by comparing urban dwellers and rural peoples in and around the city of Chandigarh in north India – soft breads and mashed lentils on the one hand, coarse millet and tough vegetables on the other. He has also seen it from one generation to the next in the Pima peoples of Arizona, following the opening of a commercial food-processing facility on the reservation. Diet makes a huge difference. I remember asking my wife not to cut our daughters’ meat into such small pieces when they were young. ‘Let them chew,’ I begged. She replied that she’d rather pay for braces than have them choke. I lost that argument.

Crowded, crooked, misaligned and impacted teeth are huge problems that have clear aesthetic consequences, but can also affect chewing and lead to decay. Half of us could benefit from orthodontic treatment. Those treatments often involve pulling out or carving down teeth to match tooth row with jaw length. But does this approach really make sense from an evolutionary perspective? Some clinicians think not. And one of my colleagues at Arkansas, the bioarchaeologist Jerry Rose, has joined forces with the local orthodontist Richard Roblee with this very question in mind. Their recommendation? That clinicians should focus more on growing jaws, especially for children. For adults, surgical options for stimulating bone growth are gaining momentum, too, and can lead to shorter treatment times.

As a final thought, tooth crowding isn’t the only problem that comes from a shorter jaw. Sleep apnea is another. A smaller mouth means less space for the tongue, so it can fall back more easily into the throat during sleep, potentially blocking the airway. It should come as no surprise that appliances and even surgery to pull the jaw forward are gaining traction in treating obstructive sleep apnea.

For better and for worse, we hold in our mouths the legacy of our evolution. We might be stuck with an oral environment that our ancestors never had to contend with, but recognising this can help us deal with it in better ways. Think about that the next time you smile and look in a mirror.

Evolution’s Bite: A Story of Teeth, Diet, and Human Origins by Peter Ungar is out now through Princeton University Press.

Peter S. Ungar is Distinguished Professor and director of the Environmental Dynamics Program at the University of Arkansas. He is the author of Teeth: A Very Short Introduction and Mammal Teeth: Origin, Evolution, and Diversity and the editor of Evolution of the Human Diet: The Known, the Unknown, and the Unknowable. He lives in Fayetteville, Arkansas.

This article was originally published at Aeon and has been republished under Creative Commons.

Landon R. Y. Storrs: What McCarthyism Can Teach Us about Trumpism

Since the election of President Donald Trump, public interest in “McCarthyism” has surged, and the focus has shifted from identifying individual casualties to understanding the structural factors that enable the rise of demagogues.

After The Second Red Scare was published in 2012, most responses I received from general readers were about the cases of individuals who had been investigated, or whom the inquirer guessed might have been investigated, under the federal employee loyalty program. That program, created by President Truman in 1947 in response to congressional conservatives’ charges that his administration harbored communist sympathizers, was the engine of the anticommunist crusade that became known as McCarthyism, and it was the central subject of my book. I was the first scholar to gain access to newly declassified records related to the loyalty program and thus the first to write a comprehensive history. The book argues that the program not only destroyed careers, it profoundly affected public policy in many fields.

Some queries came from relatives of civil servants whose lives had been damaged by charges of disloyalty. A typical example was the person who wanted to understand why, in the early 1950s, his parents abruptly moved the family away from Washington D.C. and until their deaths refused to explain why. Another interesting inquiry came from a New York Times reporter covering Bill de Blasio’s campaign for New York City mayor. My book referenced the loyalty case of Warren Wilhelm Sr., a World War II veteran and economist who left government service in 1953, became an alcoholic, was divorced by his wife, and eventually committed suicide. He never told his children about the excruciating loyalty investigation. His estranged son, born Warren Wilhelm Jr., legally adopted his childhood nickname, Bill, and his mother’s surname, de Blasio. I didn’t connect the case I’d found years earlier to the mayoral candidate until the journalist contacted me, at which point I shared my research. At that moment de Blasio’s opponents were attacking him for his own youthful leftism, so it was a powerful story, as I tried to convey in The Nation.

With Trump’s ascendance, media references to McCarthyism have proliferated, as commentators struggle to make sense of Trump’s tactics and supporters. Opinion writers note that Trump shares McCarthy’s predilections for bluffing and for fear-mongering—with terrorists, Muslims, and immigrants taking the place of communist spies. They also note that both men were deeply influenced by the disreputable lawyer Roy Cohn. Meanwhile, the president has tweeted that he himself is a victim of McCarthyism, and that the current investigations of him are “witch hunts”—leaving observers flummoxed, yet again, as to whether he is astonishingly ignorant or shamelessly misleading.

But the parallels between McCarthy’s era and our own run deeper than personalities. Although The Second Red Scare is about McCarthyism, it devotes little attention to McCarthy himself. The book is about how opponents of the New Deal exploited Americans’ fear of Soviet espionage in order to roll back public policies whose regulatory and redistributive effects conservatives abhorred. It shows that the federal employee loyalty program took shape long before the junior senator from Wisconsin seized the limelight in 1950 by charging that the State Department was riddled with communists.

By the late 1930s congressional conservatives of both parties were claiming that communists held influential jobs in key New Deal agencies—particularly those that most strongly challenged corporate prerogatives regarding labor and prices. The chair of the new Special House Committee to Investigate Un-American Activities, Martin Dies (a Texas Democrat who detested labor unions, immigrants, and black civil rights as much as communism), demanded that the U.S. Civil Service Commission (CSC) investigate employees at several agencies. When the CSC found little evidence to corroborate Dies’s allegations, he accused the CSC itself of harboring subversives. Similarly, when in 1950 the Tydings Committee found no evidence to support McCarthy’s claims about the State Department, McCarthy said the committee conducted a “whitewash.” President Trump too insists that anyone who disproves his claims is part of a conspiracy. One important difference is that Dies and McCarthy alleged a conspiracy against the United States, whereas Trump chiefly complains of conspiracies against himself—whether perpetrated by a “deep state” soft on terrorism and immigration or by a biased “liberal media.” The Roosevelt administration dismissed Dies as a crackpot, and during the Second World War, attacks on the loyalty of federal workers got little traction.

That changed in the face of postwar Soviet conduct, the nuclear threat, and revelations of Soviet espionage. In a futile effort to counter right-wing charges that he was “soft” on communism, President Truman expanded procedures for screening government employees, creating a loyalty program that greatly enhanced the power of the FBI and the Attorney General’s List of Subversive Organizations. State, local, and private employers followed suit. As a result, the threat of long-term unemployment forced much of the American workforce not only to avoid political dissent, but to shun any association that an anonymous informant might find suspect. Careers and families were destroyed. With regard to the U.S. civil service, the damage to morale and to effective policymaking lasted much longer than the loyalty program itself.

Public employees have long been vulnerable to political attacks. Proponents of limited government by definition dislike them, casting them as an affront to the (loaded) American ideals of rugged individualism and free markets. But hostility to government employees has been more broad-based at moments when severe national security threats come on top of widespread economic and social insecurity. The post-WWII decade represented such a moment. In the shadow of the Soviet and nuclear threats, women and African-Americans struggled to maintain the toeholds they had gained during the war, and some Americans resented new federal initiatives against employment discrimination. Resentment of the government’s expanding role was fanned by right-wing portrayals of government experts as condescending, morally degenerate “eggheads” who avoided the competitive marketplace by living off taxpayers.

Today, widespread insecurity in the face of terrorism, globalization, multiculturalism, and gender fluidity has made many Americans susceptible to the same sorts of reactionary populist rhetoric heard in McCarthy’s day. And again that rhetoric serves the objectives of those who would gut government, or redirect it to serve private rather than public interests.

The Trump administration calls for shrinking the federal workforce, but the real goal is a more friendly and pliable bureaucracy. Trump advisers complain that Washington agencies are filled with leftists. Trump transition teams requested names of employees who worked on gender equality at State and climate change initiatives at the EPA. Trump media allies such as Breitbart demanded the dismissal of Obama “holdovers.” Trump selected appointees based on their personal loyalty rather than qualifications and, when challenged, suggested that policy expertise hinders fresh thinking. In firing Acting Attorney General Sally Yates for declining to enforce his first “travel ban,” Trump said she was “weak” and had “betrayed” her department. Such statements, like Trump’s earlier claims that President Obama was a Kenyan-born Muslim, fit the textbook definition of McCarthyism: undermining political opponents by making unsubstantiated attacks on their loyalty to the United States. Even more alarming is Trump’s pattern of equating disloyalty to himself with disloyalty to the nation—the textbook definition of autocracy.

Might the demise of McCarthyism hold lessons about how Trumpism will end? The Second Red Scare wound down thanks to the courage of independent journalists, the decision after four long years of McCarthy’s fellow Republican senators to put country above party, and U.S. Supreme Court decisions in cases brought by brave defendants and lawyers. The power of each of those forces was contingent, of course, on the abilities of Americans to sort fact from fiction, to resist the politics of fear and resentment, and to vote.

Landon R. Y. Storrs is professor of history at the University of Iowa. She is the author of Civilizing Capitalism: The National Consumers’ League, Women’s Activism, and Labor Standards in the New Deal Era and The Second Red Scare and the Unmaking of the New Deal Left.

Steven Weitzman: The Origin of the Jews

The Jews have one of the longest continuously recorded histories of any people in the world, but what do we actually know about their origins? While many think the answer to this question can be found in the Bible, others look to archaeology or genetics. Some skeptics have even sought to debunk the very idea that the Jews have a common origin. In The Origin of the Jews: The Quest for Roots in a Rootless Age, Steven Weitzman takes a learned and lively look at what we know—or think we know—about where the Jews came from, when they arose, and how they came to be. Weitzman recently took the time to answer a few questions about his new book.

Isn’t the origin of the Jews well known? The story as I learned it begins with the Bible—with Abraham, Isaac and Jacob and with the story of the Exodus from Egypt. What is it that we do not understand about the origin of the Jews?

SW: Arguably, modernity was born of a recognition that things did not originate in the way the Bible claims. Over the course of the nineteenth and twentieth centuries, as the intellectual elite in Europe began to realize that the Bible could not be relied upon as an origin account, they turned to science, to critical historiography, to archaeology and to other scholarly methods to try to answer the question of where things and people come from. The results of their efforts include Darwin’s theory of evolution, the Big Bang theory and other enduring theories of origin, along with a lot of theories and ideas that have since been discredited. The same intellectual process unsettled how people accounted for the origin of the Jews. Scholars applied the tools that had been used to understand the origin of language, religion and culture to the Jews and in this way developed alternative accounts very different from or even opposed to the biblical account. This book tells the story of what scholars have learned in this way and wrestles with why, despite centuries of scholarship, the question of the origin of the Jews remains unsettled.

So what have scholars learned about the origin of the Jews?

SW: A lot and a little at the same time. There has been a tremendous amount of scholarship generated by the question. The Documentary Hypothesis, the famous theory that the Five Books of Moses reflects the work of different authors in different historical periods, was originally intended as an effort to explain how the people of the Old Testament became the Jews. Focusing on different textual sources, Assyriologists have uncovered evidence of a people in Canaan known as the Habiru that are believed to be the ancestors of the Hebrews, and others would trace the Jews’ origin to Egypt or see a role for Greek culture in their development. Every theory can cite facts to support its account; and some are quite pioneering in the methods they deploy, and yet even as someone conversant in this scholarship, I find that I myself cannot answer the question of what the origin of the Jews is. It is actually the difficulty of answering the question that fascinates me. From within my small field, I have always been drawn to questions that lie at the edge of or just beyond what scholars can know about the world, questions that appear to be just beyond reach, and the origin of the Jews represents one of those questions, lying inside and outside of history at the same time.

Can you explain more why the origin of the Jews is so hard to pin down?

SW: Partly the problem is a scarcity of evidence. If we are looking to prehistory to understand the origin of the Jews—prehistory in this context would refer to the period before we have written accounts of the Israelites—there just isn’t a lot of evidence to work with. We know that at some point a people called Israel emerged, but we have very little evidence that can help us understand that process—a lot of theories and educated guesses but not a lot of solid facts.
Origins are always elusive—they always seem to be buried, hidden or lost—and scholarship has really had to strain to find relevant evidence to base itself on.
But for me at least, the biggest challenge of all was the problem of pinning down what an origin is. The term covers a range of different ideas—continuity and novelty, ancestry and invention. An origin can refer to lineage, to whatever connects a thing to the past, but it can also refer to a rupture, the emergence of something fundamentally discontinuous with the past. I came to realize that one of the main reasons scholars explain the origin of the Jews so differently is that they begin from different conceptions of what an origin is. This project forced me to recognize that I didn’t understand what an origin is or sufficiently appreciate all the different assumptions, beliefs and questionable metaphors that lay hidden within that term.

Not only are there conceptual difficulties inherent in the search for Jewish origin, but there are political problems as well. The effort to answer the question of the origin of the Jews has had devastating consequences, as the Nazis demonstrated by using the scholarship of origin to rationalize violence against the Jews. Of course, more recently, the question has gotten caught up in the Israeli-Palestinian conflict as well, and is entangled in various intra-communal and interfaith debates about the nature of Jewish and Christian identity. There were many reasons to avoid this topic, intellectual, political and arguably even ethical, but not pursuing it also has its costs. There are lots of ideas circulating out there about how the Jews originated, along with a lot of misstatements, unexamined assumptions and confusion, and I felt it would be helpful to describe the challenges of this question, why it is difficult to address, what we know and don’t know, and what is at stake.

The book surveys several different approaches—various historical approaches, archaeology, social scientific approaches, even psychoanalysis has been used to address the question—but the research most likely to interest many contemporary readers comes from the field of genetics. What does DNA reveal about the origin of the Jews?

SW: First of all, I should say up front that I am not a geneticist and much of what I present in the book is based on what I learned from geneticist colleagues when I was a faculty member at Stanford or read at their suggestion. But we happen to be in a period when geneticists are making great strides in using DNA as a historical source, a way to understand the origin, migration history, and sexual and health history of different populations, and Jews have been intensively studied from this perspective. Even though the science was new to me, I felt I could not write a book on this subject without trying to engage this new research. As for what such research reveals, it offers a new way of investigating the ancestry of the Jews, the population(s) from whom they descend, and potentially sheds light on where that population lived, its size and demographic practice, and its mating practices. It can even help us to distinguish distinct histories for the male and female lineages of contemporary Jewish populations. All fascinating stuff, but does genetics represent the future of the quest to understand the origin of the Jews? The research is developing very rapidly. The data sets are expanding rapidly; the analysis is getting more nuanced; studies conducted a decade or two ago have already been significantly revised or superseded; and it is hard for non-geneticists to judge what is quality research and what is questionable. What is clear is that there has been criticism of such research from anthropologists and historians of science who detect hidden continuities with earlier now discredited race science and question how scientists interpret the data. I tried to tell both sides of this story, distilling the research but also giving voice to the critiques, and the book includes bibliographic guidance for those who want to judge the research for themselves.

Has this project gotten you to think about your own origin differently?

SW: Yes, but not in the way one might expect. Of course, as a Jew myself, the questions were not just intellectual but also personal and relational, bearing on how I thought about my own ancestry, my own sense of connection to my forebears, to other Jews, and to the land of Israel and to other peoples, but what I learned about the history of scholarship just didn’t reveal the clear insight one might have hoped for. To give a minor and amusing example, I recall being impressed by a genetic study which uncovered evidence of a surprising ancestry for Ashkenazic Levites. A Levite is a descendant from the tribe of Levi, a tribe with a special religious role, and I inherited such a status from my father. I never put any real stake in this part of my inheritance, but it was a point of connection to my father and his father, and I admit that I was intrigued when I read this study, which found that Ashkenazic Levite males have a different ancestry than that of other Ashkenazic Jews, perhaps descending from a convert with a different backstory than that of the other males in the tiny population from which today’s Ashkenazic Jews descend. But then a few years later, the same scientist published another study which undid that conclusion. So it goes with the research in general: it tells too many stories, or changes too much, or is too equivocal or uncertain in its results to demystify the origin of who I am. But on the other hand, I did learn a lot from this project about how—and why—I think about origins at all, and the mystery of who I am as a Jew—and of who we all are as human beings—runs much deeper for me now.

Steven Weitzman is the Abraham M. Ellis Professor of Hebrew and Semitic Languages and Literatures and Ella Darivoff Director of the Herbert D. Katz Center for Advanced Judaic Studies at the University of Pennsylvania. His books include Solomon: The Lure of Wisdom, Surviving Sacrilege: Cultural Persistence in Jewish Antiquity, and The Origin of the Jews: The Quest for Roots in a Rootless Age.

Anurag Agrawal: Needing and eating the milkweed

U.S. agriculture is based on ideas that make me scratch my head. We typically grow plants that are not native to North America, we grow them as annuals, and we usually only care about one product from the crop, like the tomatoes that give us ketchup and pizza.

And we don’t like weeds. Why would we? They take resources away from our crops, reduce yields more than insect pests or disease, are hard to get rid of, and might give you a rash. But there are few plants more useful, easier to cultivate, or more environmentally friendly than the milkweed. The milkweed takes its ill name from the sticky, rubbery latex that oozes out when you break the leaves; it is the monarch butterfly’s only food; and it is a native meadow plant. Milkweed has sometimes gotten a bad rap, and perhaps for good reason: it can be poisonous to livestock, it is hard to get rid of, and it does reduce crop yields. But what about milkweed as a crop?

Thomas Edison showed that milkweed’s milky latex could be used to make rubber. The oil pressed from the seed has industrial applications as a lubricant, and even value in the kitchen and as a skin balm. And as a specialty item, acclaimed for its hypoallergenic fibers, milkweed’s seed fluff, which carries milkweed seeds in the wind, is being used to stuff pillows and blankets. Perhaps more surprising, the same fluff is highly absorbent of oils, and is now being sold in kits to clean up oil tanker spills. The fibers from milkweed stems make excellent rope and were used by Native Americans for centuries. More than two hundred years ago, the French were using American milkweed fibers to make beautiful cloths, said to be more radiant and velvety than fine silk. And chemically, milkweeds have been used medicinally by Native Americans since the dawn of civilization, with potential for use in modern medicine. This is a diverse plant with a lot to offer. Why wouldn’t we cultivate this plant, not only for its stem fibers, seed oils, pillowy fluff, rubbery latex, and medicines, but also in support of the dwindling populations of monarch butterflies?

Ever since the four lowest years of monarch butterfly populations between 2012 and 2015, planting milkweeds for monarchs has been on the tips of a lot of tongues. For most insects that eat plants, however, their populations are not limited by the availability of leaves. Instead, their predators typically keep them in check, or, as in the case of monarchs, there may be constraints during other parts of their annual cycle. Monarchs travel through vast expanses from Mexico to Canada, tasting their way as they go. They tolerate poisons in the milkweed plant; indeed, they are dependent on milkweed as their only food source as caterpillars. Nearly all mating, egg-laying, and milkweed-eating occurs in the United States and Canada. And each autumn monarchs travel to Mexico, some 3,000 miles, fueled only by water and flower nectar.

All parts of the monarch’s unfathomable annual migratory cycle should be observed and studied. My own research has suggested that habitat destruction in the U.S., lack of flower resources, and logging at the overwintering sites in central Mexico are all contributing to the decline of monarch butterflies. Lack of milkweed does not seem to be causing the decline of monarchs. Nonetheless, planting native milkweeds can only help the cause of conserving monarch butterflies, but it is not the only answer. And of course we humans need our corn and soy, and we love our broccoli and strawberries, so is cultivating milkweed really something to consider?

We humans, with our highly sensitive palates, do the one thing that monarch butterflies don’t do. We cook. And the invention of cooking foods has been deemed one of the greatest advances in human evolution. Cooking certainly reduces the time spent chewing and digesting, and perhaps more importantly, cooking opens up much of the botanical world for human consumption, because heat can break down plant poisons.

Euell Gibbons, the famed proponent of wild plant edibles in the 1970s, was a huge advocate of eating milkweed. The shoots of new stems of the eastern “common milkweed” are my personal favorite. I simply pull them up when they are about 6-8 inches tall and eat them like asparagus. Gibbons recommended pouring boiling water over the vegetables in a pot, then heating only to regain the boil, and pouring off the water before sautéing. You can pick several times and the shoots keep coming. With some preparation, the other parts of the milkweed plant can be eaten too, and enjoyed like spinach, broccoli, and okra.

At the end of summer, many insects have enjoyed the benefits of eating milkweed, especially the monarch butterfly. Any boost we could give to the monarch population may help us preserve it in perpetuity. But the real value in cultivating milkweed as a crop is that it has a lot to offer, from medicines to fibers to oils. It is native and perennial, and can be grown locally and abundantly. Let’s give this weed a chance.

Anurag Agrawal is a professor in the Department of Ecology and Evolutionary Biology and the Department of Entomology at Cornell University. He is the author of Monarchs and Milkweed: A Migrating Butterfly, a Poisonous Plant, and Their Remarkable Story of Coevolution.

Yair Mintzker: The Many Deaths of Jew Süss

Joseph Süss Oppenheimer—”Jew Süss”—is one of the most iconic figures in the history of anti-Semitism. In 1733, Oppenheimer became the “court Jew” of Carl Alexander, the duke of the small German state of Württemberg. When Carl Alexander died unexpectedly, the Württemberg authorities arrested Oppenheimer, put him on trial, and condemned him to death for unspecified “misdeeds.” On February 4, 1738, Oppenheimer was hanged in front of a large crowd just outside Stuttgart. He is most often remembered today through several works of fiction, chief among them a vicious Nazi propaganda movie made in 1940 at the behest of Joseph Goebbels. The Many Deaths of Jew Süss by Yair Mintzker is a compelling new account of Oppenheimer’s notorious trial.

You have chosen a very intriguing title for your book—The Many Deaths of Jew Süss. Who was this “Jew Süss” and why did he die more than once?

YM: Jew Süss is the nickname of Joseph Süss Oppenheimer, one of the most iconic figures in the history of anti-Semitism. Originally from the Jewish community in Heidelberg, Germany, in 1732 Oppenheimer became the personal banker (“court Jew”) of Carl Alexander, duke of the small German state of Württemberg. When Carl Alexander died unexpectedly in 1737, the Württemberg authorities arrested Oppenheimer, put him on trial, and eventually hanged him in front of a large crowd just outside Stuttgart. He is most often remembered today through a vicious Nazi propaganda movie made about him at the behest of Joseph Goebbels.

Why is Oppenheimer such an iconic figure in the history of anti-Semitism?

YM: Though Oppenheimer was executed almost three centuries ago, his trial never quite ended. Even as the trial was unfolding, it was already clear that what was being placed in the scales of justice was not any of Oppenheimer’s alleged crimes. The verdict pronounced in his case conspicuously failed to provide any specific details about the reasons for the death sentence. The significance of the trial, and the reasons for Oppenheimer’s public notoriety ever since the eighteenth century, stem from the fact that Oppenheimer’s rise-and-fall story has been viewed by many as an allegory for the history of German Jewry in general. Here was a man who tried to fit in, and seemed to for a time, but was eventually rejected; a Jew who enjoyed much success but then fell from power and met a violent death. Thus, at every point in time when the status, culture, past and future of Germany’s Jews have hung in the balance, the story of this man has moved to center stage, where it was investigated, novelized, dramatized, and even set to music. It is no exaggeration to say that Jew Süss is to the German collective imagination what Shakespeare’s Shylock is to the English-speaking world.

Your book is about Oppenheimer’s original trial, not about how this famous court Jew was depicted later. Why do you claim that he died more than once?

YM: We need to take a step back and say something about the sources left by Oppenheimer’s trial. Today, in over one hundred cardboard boxes in the state archives in Stuttgart, one can read close to thirty thousand handwritten pages of documents from the time period of the trial. Among these pages are the materials collected by the inquisition committee assigned to the case; protocols of the interrogations of Oppenheimer himself, his alleged accomplices, and many witnesses; descriptions of conversations Oppenheimer had with visitors in his prison cell; and a great number of poems, pamphlets, and essays about Oppenheimer’s final months, days, hours, and even minutes. But here’s the rub: while the abundance of sources about Oppenheimer’s trial is remarkable, the sources themselves never tell the same story twice. They are full of doubts, uncertainties, and outright contradictions about who Oppenheimer was and what he did or did not do. Instead of reducing these diverse perspectives to just one plot line, I decided to explore in my book four different accounts of the trial, each from a different perspective. The result is a critical work of scholarship that uncovers mountains of new documents, but one that refuses to reduce the story of Jew Süss to only one narrative.

What are the four stories you tell in the book, then?

YM: I look at Oppenheimer’s life and death as told by four contemporaries: the leading inquisitor in Oppenheimer’s trial, the most important eyewitness to Oppenheimer’s final days, a fellow court Jew who was permitted to visit Oppenheimer on the eve of his execution, and one of Oppenheimer’s earliest biographers.

What do we learn from these stories?

YM: What emerges from these accounts, above and beyond everything else, is an unforgettable picture of Jew Süss in his final days. It is a lurid tale of greed, sex, violence, and disgrace, but one that we can fully comprehend only if we follow the life stories of the four narrators and understand what they were trying to achieve by writing about Oppenheimer in the first place.

Is the purpose of this book to show, by composing these conflicting accounts of Jew Süss, that the truth is always in the eye of the beholder, that everything is relative and that there is therefore no one, single truth?

YM: No. The realization that the world looks different from different perspectives cannot possibly be the bottom line of a good work of history. This is so not because it’s wrong, but because it’s obvious. What I was setting out to do in writing this book was different. I used the multi-perspectival nature of lived experience as my starting point, not as my destination; it was a belief that informed what I did rather than a conclusion toward which I was driving.

And the result?

YM: A moving, disturbing, and often outright profound account of Oppenheimer’s trial that is also an innovative work of history and an illuminating parable about Jewish life in the fraught transition to modernity.

Yair Mintzker is associate professor of history at Princeton University. He is the author of The Defortification of the German City, 1689–1866 and The Many Deaths of Jew Süss: The Notorious Trial and Execution of an Eighteenth-Century Court Jew.

Chris Chambers: The Seven Deadly Sins of Psychology

Psychological science has made extraordinary discoveries about the human mind, but can we trust everything its practitioners are telling us? In recent years, it has become increasingly apparent that a lot of research in psychology is based on weak evidence, questionable practices, and sometimes even fraud. The Seven Deadly Sins of Psychology by Chris Chambers diagnoses the ills besetting the discipline today and proposes sensible, practical solutions to ensure that it remains a legitimate and reliable science in the years ahead.

Why did you decide to write this book?

CC: Over the last fifteen years I’ve become increasingly fed up with the “academic game” in psychology, and I strongly believe we need to raise standards to make our research more transparent and reliable. As a psychologist myself, one of the key lessons I’ve learned is that there is a huge difference between how the public thinks science works and how it actually works. The public have this impression of scientists as objective truth seekers on a selfless mission to understand nature. That’s a noble picture but bears little resemblance to reality. Over time, the mission of psychological science has eroded from something that originally was probably quite close to that vision but has now become a contest for short-term prestige and career status, corrupted by biased research practices, bad incentives and occasionally even fraud.

Many psychologists struggle valiantly against the current system but they are swimming against a tide. I trained within that system. I understand how it works, how to use it, and how it can distort your thinking. After 10 years of “playing the game” I realized I didn’t like the kind of scientist I was turning into, so I decided to try and change the system and my own practices—not only to improve science but to help younger scientists avoid my predicament. At its heart this book lays out my view of how we can reinvigorate psychology by adopting an emerging philosophy called “open science.” Some people will agree with this solution. Many will not. But, above all, the debate is important to have.

It sounds like you’re quite skeptical about science generally.

CC: Even though I’m quite critical about psychology, the book shouldn’t be seen as anti-science—far from it. Science is without doubt the best way to discover the truth about the world and make rational decisions. But that doesn’t mean it can’t or shouldn’t be improved. We need to face the problems in psychology head-on and develop practical solutions. The stakes are high. If we succeed then psychology can lead the way in helping other sciences solve similar problems. If we fail then I believe psychology will fade into obscurity and become obsolete.

Would it matter if psychology disappeared? Is it really that important?

CC: Psychology is a huge part of our lives. We need it in every domain where it is important to understand human thought or behavior, from treating mental illness, to designing traffic signs, to addressing global problems like climate change, to understanding basic (but extraordinarily complex) mental functions such as how we see or hear. Understanding how our minds work is the ultimate journey of self-discovery and one of the fundamental sciences. And it’s precisely because the world needs robust psychological science that researchers have an ethical obligation to meet the high standards expected of us by the public.

Who do you think will find your book most useful?

CC: I have tried to tailor the content for a variety of different audiences, including anyone who is interested in psychology or how science works. Among non-scientists, I think the book may be especially valuable for journalists who report on psychological research, helping them overcome common pitfalls and identify the signs of bad or weak studies. At another level, I’ve written this as a call-to-arms for my fellow psychologists and scientists in closely aligned disciplines, because we need to act collectively in order to fix these problems. And the most important readers of all are the younger researchers and students who are coming up in the current academic system and will one day inherit psychological science. We need to get our house in order to prepare this generation for what lies ahead and help solve the difficulties we inherited.

So what exactly are the problems facing psychology research?

CC: I’ve identified seven major ills, which (a little flippantly, I admit) can be cast as seven deadly sins. In order they are Bias, Hidden Flexibility, Unreliability, Data Hoarding, Corruptibility, Internment, and Bean Counting. I won’t ruin the suspense by describing them in detail, but they all stem from the same root cause: we have allowed the incentives that drive individual scientists to fall out of step with what’s best for scientific advancement. When you combine this with the intense competition of academia, it creates a research culture that is biased, closed, fearful and poorly accountable—and just as a damp bathroom encourages mold, a closed research culture becomes the perfect environment for cultivating malpractice and fraud.

It all sounds pretty bad. Is psychology doomed?

CC: No. And I say this emphatically: there is still time to turn this around. Beneath all of these problems, psychology has a strong foundation; we’ve just forgotten about it in the rat race of modern academia. There is a growing movement to reform research practices in psychology, particularly among the younger generation. We can solve many problems by adopting open scientific practices—practices such as pre-registering study designs to reduce bias, making data and study materials as publicly available as possible, and changing the way we assess scientists for career advancement. Many of these problems are common to other fields in the life sciences and social sciences, which means that if we solve them in psychology we can solve them in those areas too. In short, it is time for psychology to grow up, step up, and take the lead.

How will we know when we’ve fixed the deadly sins?

CC: The main test is that our published results should become a lot more reliable and repeatable. As things currently stand, there is a high chance that any new result published in a psychology journal is a false discovery. So we’ll know we’ve cracked these problems when we can start to believe the published literature and truly rely on it. When this happens, and open practices become the norm, the closed practices and weak science that define our current culture will seem as primitive as alchemy.

Chris Chambers is professor of cognitive neuroscience in the School of Psychology at Cardiff University and a contributor to the Guardian science blog network. He is the author of The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice.

Rebecca Tansley & Craig Meade: The Pacific Ocean as you’ve never seen it before

The Pacific Ocean covers one-third of Earth’s surface—more than all of the planet’s landmasses combined. It contains half of the world’s water, hides its deepest places, and is home to some of the most dazzling creatures known to science. The companion book to the spectacular five-part series on PBS produced by Natural History New Zealand, Big Pacific breaks the boundaries between land and sea to present the Pacific Ocean and its inhabitants as you have never seen them before. Providing an unparalleled look at a diverse range of species, locations, and natural phenomena, Big Pacific is truly an epic excursion to one of the world’s last great frontiers. In our latest Q&A, author Rebecca Tansley and showrunner Craig Meade ask each other questions about the series, the book, and the majestic Pacific Ocean:

Questions from Rebecca to Craig

There have been a lot of documentaries made about the oceans and the animals that live in them. How did the Big Pacific idea come about and what new perspectives did you think this series could bring?

It started ten years ago in a late-night conversation in France with some of Japan’s best wildlife filmmakers. We realized that, after a thousand years of humanity dominated by the Atlantic and its people, the next thousand years would probably be owned by the Pacific. We conjectured that if we inverted the paradigm and considered the Pacific Ocean a continent, it would already hold many of the world’s major cities: Seattle, LA, Tokyo, Shanghai, Sydney, Taipei. So what are the natural values of this new continent, what does it say to us, and what does it mean to us? What are its emotional messages? Let’s put a flag in it, explore it, and see what we discover about it. So that night we started looking for the defining stories that we should tell of the Pacific Ocean.

The book sections match the episodes of the Big Pacific show – Passionate, Voracious, Mysterious, Violent. How did you come up with these themes and decide to structure the series around them?

To matter, stories must move us, trill our emotional strings. Usually these kinds of words are embedded in the undercurrent of the script. Hinted at. But the Pacific is big and bold, and we thought our statements about it should be too. It’s all those things: passionate, voracious, violent and mysterious, but it’s also many other things. So I don’t believe this journey to capture its multitude of faces is yet over. Please let me do the Ecstatic, Selfish and Uncertain shows one day as well!

I talked to crew members about some of the special moments in the series’ production, but which is the most special Big Pacific moment for you, on screen?

The yellow-eyed penguins in the Passionate episode. Fewer than 4,000 adults remain. They are a species that may have just a decade or two left, and the cinematographer captured their cold and lonely existence beautifully. It’s not a story of sorrow but one of the bird’s passionate relationship with its mate and family. Like a black-and-white waddling hobbit, he comes home from work and wanders through the mossy forest to the cave they all share. It’s an idyllic glimpse of natural New Zealand and a rare and wonderful animal few people are ever going to see. If they disappear for good from the wild, I’ve no doubt this story is the one they’ll play to teach kids what a yellow-eyed penguin was once like.

The Big Pacific series is highly entertaining but also packed with fascinating information – I learned a lot writing the book! In a world of increasing pressure on our natural environment, what is the role of natural history storytelling?

I think it’s increasingly important that we do not sugar-coat the truth. We mustn’t be the blind purveyors of a dream while a nightmare plays out in the natural world. So as filmmakers there’s always a tension in what we do. I actually want to bring you a dream so you know why we must protect what we have left in the wild world – but I mustn’t let that dream lie to you and hypnotize you into believing the dream is entirely real. Because in some cases the dream is already over. With stories like that of the yellow-eyed penguin, I find myself handling the material as though I am preserving something already lost; instead of revealing something new, I am working to faithfully capture the essence of what was.

Questions from Craig to Rebecca

The Pacific Ocean is many things to many people: a place, a home, a source of food, a gulf between land masses. How did writing the Big Pacific book change your sense of what the Pacific is to us?

I grew up with the Pacific literally at my front door and I’ve never been far from it for my entire life. It’s been my playground, my pastime and my place of solace. Because of this, for me as well as millions of other people like me, it’s hard to define just what the Pacific means – it just infuses our lives. This is one of the many reasons I was attracted to this project, because of the way it focuses not just on the Pacific’s natural history but on people’s relationship with it too. I hope that comes through in the book, because you can’t separate the animals or the people from the ocean they live in and around. We are, actually, in many ways defined by our place in or on the Pacific. Writing the book reinforced this view and gave me an opportunity to express it.

There are so many evocative images in the Big Pacific book. Is there one that you keep returning to? One animal that you want to meet?

Oh that’s a tough one, because I’m in love with so many of the animals and the images! I’ve always had a strong interest in whales so I find the images of the rare Blue whale captured by Big Pacific Director of Photography, the late and obviously very talented Bob Cranston, mesmerizing. But in the course of writing the book I discovered many other wonderful members of the Pacific community. Among them are the Wolf eels, whose dedication to their partner and to their brood is totally endearing. I love the images of the Firefly squid because they seem so ethereal and their lives are so fleeting, yet nature has nonetheless equipped them miraculously for their short, spectacular journey. Plus I can’t not mention the Chinese horseshoe crab, because they are such admirable survivors. I hope the whole world wakes up to the beauty and value of all the animals that live in and around not just the Pacific but all the planet’s oceans, and recognizes that they deserve their place in it for the future as much as we do.

Natural history stories at their heart are science stories – but with fur and scales. To be enjoyable and understandable they usually require simplification, but they still need to be highly accurate. That sounds like a complicated dance to perform when writing. Was it?

I’m a storyteller, not a scientist, but like a scientist I’m curious about the world. The process I used for Big Pacific worked well. First I read the (draft) series scripts and watched the Big Pacific footage. This meant I became intrigued with the animals first and foremost as characters, and was drawn into other aspects of the Pacific’s natural history – such as the Silver Dragon and the Ring of Fire – as stories. When I set about writing I drew on the science that was provided to me by Big Pacific researcher Nigel Dunstone. Then it was a matter of asking myself, what do I find interesting about that animal or story that others might also enjoy? What might people not know? What is dramatic about this story? Of course I also ensured I was covering off important information, such as environmental threats and conservation status, and everything I wrote was checked afterwards by Nigel and the Big Pacific team.

You’ve made some fantastic films between your writing jobs. Is it hard to transition from the spoken word to the written? Are they two different crafts?

Writing and filmmaking are related in that both entertain and organize information for an intended audience, but they do so in different ways and to a large extent employ different skill-sets. Obviously filmmaking is a collective pursuit that usually requires a team of people, whereas writing is a solitary craft. I enjoy both equally, and writing and directing my own films enables me to do both. I was fortunate enough to spend time with the Big Pacific team when I selected the images for the book, and I also interviewed others, so in this writing project I did get to collaborate. I would add that when I write I’m very conscious of rhythm – a quality that’s also important to aspects of filmmaking, such as narration and editing. I’m not really musical, but I like to think that I have a sense of linguistic rhythm and flow. Perhaps that’s why I studied languages for many years!

A documentary filmmaker herself, Rebecca Tansley has previously worked at NHNZ, the production company that made the Big Pacific series. In addition to writing and directing films, she has written two other internationally published books and contributed to national magazines and newspapers in her home country of New Zealand. Rebecca has degrees in languages, media production, and law.

Craig Meade and the production team at NHNZ are some of the most successful and prolific producers of natural history programs on the planet—more than 50 wildlife shows completed in just the last four years. But after 30 years of writing and directing, Craig still doesn’t class himself as a wildlife filmmaker—he’s a science guy who prefers mud, tents, and mosquitoes to laboratories. When he’s not making films, Craig is a deer hunter and an on-call firefighter.

Joel Brockner: Can Job Autonomy Be a Double-Edged Sword?

This post was originally published on the Psychology Today blog.

“You can arrive to work whenever convenient.”

“Work from home whenever you wish.”

“You can play music at work at any time.”

These are examples of actual workplace policies from prominent companies such as Aetna, American Express, Dell, Facebook, Google, IBM, and Zappos. They have joined the ranks of many organizations in giving employees greater job autonomy, that is, more freedom to decide when, where, and how to do their work. And why not? Research by organizational psychologists such as Richard Hackman and Greg Oldham, and by social psychologists such as Edward Deci and Richard Ryan, has shown that job autonomy can have many positive effects. The accumulated evidence is that employees who experience more autonomy are more motivated, creative, and satisfied with their jobs.

Against this backdrop of the generally favorable effects of job autonomy, recent research has shown that it may also have a dark side: unethical behavior. Jackson Lu, Yoav Vardi, Ely Weitz, and I discovered such results in a series of field and laboratory studies soon to be published in the Journal of Experimental Social Psychology. In field studies conducted in Israel, employees from a wide range of industries rated how much autonomy they had and how often they engaged in unethical behavior, such as misrepresenting their work hours or wasting work time on private phone calls. Those who had greater autonomy said that they engaged in more unethical behavior on the job. In laboratory experiments conducted in the United States, we found that it may not even be necessary for people to have actual autonomy for them to behave unethically; merely priming them with the idea of autonomy may do the trick.

In these studies participants were randomly assigned to conditions differing in how much the concept of autonomy was called to mind. This was done with a widely used sentence-unscrambling task in which people had to rearrange multiple series of words into grammatically correct sentences. For example, those in the high-autonomy condition were given words such as “have many as you as days wish you vacation may,” which could be rearranged to form the sentence, “You may have as many vacation days as you wish.” In contrast, those in the low-autonomy condition were given words such as “office in work you must the,” which could be rearranged to “You must work in the office.” After completing the sentence-unscrambling exercise, participants did another task in which they were told that the amount of money they earned depended on how well they performed. The activity was structured in a way that enabled us to tell whether participants lied about their performance. Those who had been primed to experience greater autonomy in the sentence-unscrambling task lied more. Job autonomy gives employees a sense of freedom, which usually has positive effects on their productivity and morale, but it can also lead them to feel that they can do whatever they want, including not adhering to rules of morality.

All behavior is a function of what people want to do (motivation) and what they are capable of doing (ability). Consider the unethical behavior elicited by high levels of autonomy. Having high autonomy may not have made people want to behave unethically. However, it may have enabled the unethical behavior by making it possible for people to engage in it. Indeed, the distinction between people wanting to behave unethically versus having the capability of doing so may help answer two important questions:

(1) What might mitigate the tendency for job autonomy to elicit unethical behavior?

(2) If job autonomy can lead to unethical behavior, should companies re-evaluate whether to give job autonomy to their employees? That is, can job autonomy be introduced in a way that maximizes its positive consequences (e.g., greater creativity) without introducing the negative effect of unethical behavior?

With respect to the first question, my hunch is that people who have job autonomy, and therefore are able to behave unethically, will not do so if they do not want to behave unethically. For example, people who are high on the dimension of moral identity, for whom behaving morally is central to how they define themselves, would be less likely to behave unethically even when a high degree of job autonomy enabled or made it possible for them to do so.

With respect to the second question, I am not recommending that companies abandon their efforts to provide employees with job autonomy. Our research suggests, rather, that the consequences of giving employees autonomy may not be uniformly favorable. Taking a more balanced view of how employees respond to job autonomy may shed light on how organizations can maximize its positive effects while minimizing the negative consequence of unethical behavior.

Whereas people generally value having autonomy, some people want it more than others. People who want autonomy a lot may be less likely to behave unethically when they experience autonomy. For one thing, they may be concerned that the autonomy they covet may be taken away if they take advantage of it by behaving unethically. This reasoning led us to do another study to evaluate when the potential downside of felt autonomy can be minimized while its positive effects are maintained. Once again, we primed people to experience varying degrees of job autonomy with the word-unscrambling exercise. Half of them then went on to do the task that measured their tendency to lie about their performance, whereas the other half completed an entirely different task, one measuring their creativity. Once again, those who worked on the task in which they could lie about their performance did so more when they were primed to experience greater autonomy. And, as has been found in previous research, those who did the creativity task performed better at it when they were primed to experience greater autonomy.

Regardless of whether they did the task that measured unethical behavior or the one that measured creativity, participants also indicated how much they generally valued having autonomy. Among those who valued having autonomy to a greater extent, (1) the positive relationship between experiencing job autonomy and behaving unethically diminished, whereas (2) the positive relationship between experiencing job autonomy and creativity was maintained. In other words, as long as people valued having autonomy, the experience of autonomy had the positive effect of enhancing creativity without introducing the dangerous side effect of unethical behavior. So, when organizations introduce job autonomy policies like those mentioned at the outset, they may gain greater overall benefits when they ensure that their employees value having autonomy. This may be achieved by selecting employees who value autonomy as well as by creating a corporate culture that emphasizes its importance. More generally, a key practical takeaway from our studies is that when unethical behavior is enabled, whether through job autonomy or other factors, it needs to be counterbalanced by conditions that make employees not want to go there.

Joel Brockner is the Phillip Hettleman Professor of Business at Columbia Business School. He is the author of The Process Matters: Engaging and Equipping People for Success.

Lawrence Baum: Ideology in the Supreme Court

When President Trump nominated Neil Gorsuch for a seat on the Supreme Court, Gorsuch was universally regarded as a conservative. Because of that perception, the Senate vote on his confirmation fell almost completely along party lines. Indeed, Court-watchers concluded that his record after he joined the Court late in its 2016-2017 Term was strongly conservative. But what does that mean? One possible answer is that he agreed most often with Clarence Thomas and Samuel Alito, the justices who were considered the most conservative before Gorsuch joined the Court. But that answer does not address the fundamental question: why are the positions that those three justices took on an array of legal questions considered conservative?

The most common explanation is that liberals and conservatives each start with broad values that they then apply in a logical way to the various issues that arise in the Supreme Court and elsewhere in government. But logic can go only so far to explain the ideological labels of various positions. It is not clear, for instance, why liberals are the strongest proponents of most individual rights that the Constitution protects while conservatives are the most supportive of gun rights. Further, perceptions of issues sometimes change over time, so that what was once considered the liberal position on an issue is no longer viewed that way.

Freedom of expression is a good example of these complexities. Beginning early in the twentieth century, strong support for freedom of speech and freedom of the press was regarded as a liberal position. In the Supreme Court, the justices who were most likely to support those First Amendment rights were its liberals. But in the 1990s that pattern began to change. Since then, when the Court is divided, conservative justices provide support for litigants who argue that their free expression rights have been violated as often as liberals do.

To explain that change, we need to go back to the period after World War I when freedom of expression was established as a liberal cause. At that time, the government policies that impinged the most on free speech were aimed at political groups on the left and at labor unions. Because liberals were more sympathetic than conservatives to those segments of society, it was natural that freedom of expression became identified as a liberal cause in the political world. In turn, liberal Supreme Court justices gave considerably more support to litigants with free expression claims than did their conservative colleagues across the range of cases that the Court decided.

In the second half of the twentieth century, people on the political left rethought some of their assumptions about legal protections for free expression. For instance, they began to question the value of protecting “hate speech” directed at vulnerable groups in society. And they were skeptical about First Amendment challenges to regulations of funding for political campaigns. Meanwhile conservatives started to see freedom of expression in a more positive light, as a protection against undue government interference with political and economic activity.

This change in thinking affected the Supreme Court in the 1990s and after. More free expression cases came to the Court from businesses and people with a conservative orientation, and a conservative-leaning Court was receptive to those cases. The Court now decides few cases involving speech by labor unions and people on the political left, and cases from businesses and political conservatives have become common. Liberal justices are more favorable than their conservative colleagues to free expression claims by people on the left and by individuals with no clear political orientation, but conservative justices provide more support to claims by businesses and conservatives. As a result, what had been a strong tendency for liberal justices to give the most support to freedom of expression across the cases that the Court decided has disappeared.

The sharp change in the Supreme Court’s ideological orientation in free speech cases is an exception to the general rule, but it underlines some important things about the meaning of ideology. The labeling of issue positions as conservative or liberal comes through the development of shared understandings among political elites, and those understandings do not necessarily follow from broad values. In considerable part, they reflect attitudes toward the people and groups that champion and benefit from particular positions. The impact of those attitudes is reflected in the ways that people respond to specific situations involving an issue: liberal and conservative justices, like their counterparts elsewhere in government and politics, are most favorable to free speech when that speech comes from segments of society with which they sympathize. When we think of Supreme Court justices and the positions they take as conservative or liberal, we need to keep in mind that, to a considerable degree, the labeling of positions in ideological terms is arbitrary. Justice Gorsuch’s early record on the Court surely is conservative—but in the way that conservative positions have come to be defined in the world of government and politics, definitions that are neither permanent nor inevitable.

Lawrence Baum is professor emeritus of political science at Ohio State University. His books include Judges and Their Audiences, The Puzzle of Judicial Behavior, Specializing the Courts, and Ideology in the Supreme Court.