Sarah Binder & Mark Spindel on The Myth of Independence

Born out of crisis a century ago, the Federal Reserve has become the most powerful macroeconomic policymaker and financial regulator in the world. The Myth of Independence traces the Fed’s transformation from a weak, secretive, and decentralized institution in 1913 to a remarkably transparent central bank a century later. Offering a unique account of Congress’s role in steering this evolution, Sarah Binder and Mark Spindel explore the Fed’s past, present, and future and challenge the myth of its independence.

Why did you write this book?

We were intrigued by the relationship between two powerful institutions that are typically studied in isolation: Congress, overtly political and increasingly polarized, and the Federal Reserve, allegedly independent, born of an earlier financial panic, and now the world’s most powerful economic policymaker. The economic conditions that created and sustain America’s century-old central bank have been well studied. Scholars and market participants have spent considerably less time analyzing the complex political forces that drove the Fed’s genesis and its rise to prominence. Our research challenges widely accepted notions of Fed independence, arguing instead that the Fed sets policy subject to political constraints. Its autonomy is conditioned on economic outcomes and robust political support. In the long shadow of the global financial crisis, our research pinpoints the interdependence of these two powerful policy-making institutions and their impact on contemporary monetary politics.

What does history teach us about contemporary monetary politics?

Probing the Fed’s history affords us a window onto the political and economic constraints under which the Fed makes monetary policy today. We draw two key conclusions about contemporary monetary policy from our study of the Fed’s development.

First, the history of the relationship between Congress and the Fed reveals a recurring cycle of economic crisis, political blame, and institutional reform. When the economy is performing well, Congress tends to look the other way, leaving the Fed to pursue its statutory mandate to boost jobs and limit inflation. When the economy sours, lawmakers react by blaming the Fed and then, counter-intuitively, often giving it more power. Legislative and central bank reactions in the wake of the most recent financial crisis fit this recurring theme: even after blaming the Fed, Congress further concentrated financial regulation in the Fed’s Board of Governors. Understanding the electoral dynamics that shape congressional reactions helps to explain the puzzling decision to empower the Fed in the wake of crisis.

Second, economists and central bankers often argue that the Fed has instrument independence but not goal independence: Congress stipulates the Fed’s mandate but leaves the central bank to choose the tools necessary to achieve it. Our historical analysis suggests instead that Congress shapes both the goals and the tools of monetary policy. Creating and clipping emergency lending power, imposing greater transparency, influencing the adoption of an inflation target—these and other legislative efforts directly shape the Fed’s conduct. Even today, monetary policy remains under siege, as lawmakers on the left and the right remain dissatisfied with the Fed’s performance in driving the nation’s economic recovery from the Great Recession.

What new light do you shed on the notion of central bank independence?

Placing the Fed within the broader political system changes our understanding of the nature and primacy of central bank independence.

First, economists prize central bank independence on the grounds that it keeps inflation low and stable. However, we show that ever since the Great Depression, congressional majorities have typically demanded that the Fed place equal weight on generating growth and controlling inflation—diminishing the importance of central bank autonomy to lawmakers. Moreover, we demonstrate that the seminal Treasury-Fed Accord of 1951—a deal that most argue cemented the Fed’s independence—tethered the Fed more closely to Congress even as it broke the Fed’s subordination to the Treasury.

Second, prescriptions for central bank independence notwithstanding, fully separating fiscal and monetary policy is complicated. During the Fed’s first half-century, fiscal policy was monetary policy. The Fed underwrote U.S. government borrowing, willingly or unwillingly enabling the spending objectives of the executive and legislative branches. Even after the 1951 Treasury-Fed Accord, macroeconomic outcomes have played a determinative role in shaping U.S. fiscal policy. And most recently, the Fed’s adoption of unconventional monetary policy in the wake of the financial crisis pushed interest rates to zero and ballooned the Fed’s balance sheet—leading many Fed critics to argue that the Fed had crossed the line into Congress’s fiscal domain. Importantly, even strict proponents of monetary independence recognize that exigent conditions often demand collaboration between the central bank and government, complicating monetary politics.

Third, the myth of Fed independence is convenient for elected officials eager to blame the Fed for poor economic outcomes. In fact, Congress and the Fed are interdependent: the Fed operates very much within the political structure in Washington. The Federal Reserve Act—the governing law—has been consistently reopened and revised, particularly after extraordinary economic challenges. Each time, Congress centralizes more control in the Fed’s Washington-based Board of Governors, in exchange for more central bank transparency and congressional accountability. Because Fed “independence” rests with Congress’s tolerance of the Fed’s policy performance, we argue that the Fed earns partial and contingent independence from Congress, and thus hardly any independence at all.

How does intense partisan polarization in Washington today affect the Fed?

In the aftermath of the global financial crisis, the Federal Reserve, like most national institutions, has been caught in the crosshairs of contemporary partisan polarization. Politicians of both parties call for changes to the governance and powers of the Fed. Most prominently, we see bipartisan efforts to audit Federal Open Market Committee (FOMC) decisions. On the right, a vocal GOP cohort demands an unwinding of the Fed’s big balance sheet and a more formulaic approach to monetary policy. On the left, Democrats want greater diversity on the rosters of the Fed’s regional reserve banks. With the 2016 elections delivering government control to Republicans, prospects for reopening the Federal Reserve Act are heightened.

Several vacancies on the Board of Governors give President Trump and Republican senators another opportunity to air grievances and exert control. Trump inherits a rare opportunity to nominate a majority of members to the FOMC, including the power to appoint a new chair in early 2018 should he wish to replace Janet Yellen. Will he turn to more traditional monetary “hawks,” who seek to roll back crisis-era policies, thus tightening monetary policy? Or will Trump bend towards a more ideologically dovish chair, trading some inflation for a pro-growth agenda?

Washington leaves a large—and politicized—mark on the Federal Reserve. The Myth of Independence seeks to place these overtly political decisions into broader historical perspective, exploring how the interdependence of Congress and the Federal Reserve shapes politics, the economy, and financial markets. As Ben Bernanke put it, “absent the support of some future White House, although it might be difficult to get passed and signed legislation that poses a serious challenge to the basic powers of the Fed, it unfortunately would not be impossible.”

Sarah Binder is professor of political science at George Washington University and senior fellow at the Brookings Institution. Her books include Advice and Dissent and Stalemate. Mark Spindel has spent his entire career in investment management at such organizations as Salomon Brothers, the World Bank, and Potomac River Capital, a Washington D.C.–based hedge fund he started in 2007.

Alexandra Logue: Not All Excess Credits Are The Students’ Fault

This post was originally published on Alexandra Logue’s blog

A recent article in Educational Evaluation and Policy Analysis reported on an investigation of policies punishing students for graduating with excess credits. Excess credit hours are credits that a student obtains beyond what is required for a degree, and many students graduate having taken several more courses than they needed.

To the extent that tuition does not cover the cost of instruction, and/or that financial aid is paying for these excess credits, someone other than the student—the college or the government—is paying for them. Graduating with excess credits also means that a student is occupying possibly scarce classroom seats longer than s/he needs to, and is not entering the work force with a degree, and paying more taxes, as soon as s/he could. Thus there are many reasons why colleges and/or governments might seek to decrease excess credits. The article considers cases in which states have imposed sanctions on students who graduate with excess credits, charging more for credits taken significantly above the number required for a degree. It shows that such policies, instead of resulting in students graduating sooner, have resulted in greater student debt. But the article does not identify the reasons why this may be the case. Perhaps one reason is that students do not have control over those excess credits.

For example, as described in my forthcoming book, Pathways to Reform: Credits and Conflict at The City University of New York, students may accumulate excess credits because of difficulties they have transferring their credits. When students transfer, there can be significant delays in having the credits that they obtained at their old institution evaluated by their new institution. At least at CUNY colleges, the evaluation process can take many months. During that period, a student either has to stop out of college or take a risk and enroll in courses that may or may not be needed for the student’s degree. Even when appropriate courses are taken, all too often credits that a student took at the old college as satisfying general education (core) requirements or major requirements become elective credits, or do not transfer at all. A student then has to repeat courses or take extra courses in order to satisfy all of the requirements at the new college. Given that a huge proportion of students now transfer, or try to transfer, their credits (49% of bachelor’s degree recipients have some credits from a community college, and over one-third of students in the US transfer within six years of starting college), a great number of credits are being lost.

Nevertheless, a 2010 study at CUNY found that a small proportion of the excess credits of its bachelor’s degree recipients was due to transfer—students who never transferred graduated with only one or two fewer excess credits, on average, than did students who did transfer.  Some transfer students may have taken fewer electives at their new colleges in order to have room in their programs to make up nontransferring credits from their old colleges, without adding many excess credits.

But does this mean that we should blame students for those excess credits and make them pay more for them?  Certainly some of the excess credits are due to students changing their majors late and/or to not paying attention to requirements and so taking courses that don’t allow them to finish their degrees, and there may even be some students who would rather keep taking courses than graduate.

But there are still other reasons that students may accumulate extra credits, reasons for which the locus of control is not the student.  Especially in financially strapped institutions, students may have been given bad or no guidance by an advisor.  In addition, students may have been required to take traditional remedial courses, which can result in a student acquiring many of what CUNY calls equated credits, on top of the required college-level credits (despite the fact that there are more effective ways to deliver remediation without the extra credits). Or a student may have taken extra courses that s/he didn’t need to graduate in order to continue to enroll full-time, so that the student could continue to be eligible for some types of financial aid and/or (in the past) health insurance. Students may also have made course-choice errors early in their college careers, when they were unaware of excess-credit tuition policies that would only have an effect years later.

The fact that the imposition of excess-credit tuition policies did not affect the number of excess credits accumulated, but instead increased student debt, by itself suggests that, to at least some degree, the excess credits are not something that students can easily avoid, and/or that there are counter-incentives operating that are even stronger than the excess tuition.

Before punishing students, or trying to control their behavior, we need to have a good deal of information about all of the different contingencies to which students are subject.  Students should complete their college’s requirements as efficiently as possible.  However, just because some students demonstrate delayed graduation behavior does not mean that they are the ones who are controlling that behavior.  Decreasing excess credits needs to be a more nuanced process, with contingencies and consequences tailored appropriately to those students who are abusing the system, and those who are not.

Alexandra W. Logue is a research professor at the Center for Advanced Study in Education at the Graduate Center, CUNY. From 2008 to 2014, she served as executive vice chancellor and university provost of the CUNY system. She is the author of Pathways to Reform: Credits and Conflict at The City University of New York.

Omnia El Shakry: Psychoanalysis and Islam

Omnia El Shakry’s new book, The Arabic Freud, is the first in-depth look at how postwar thinkers in Egypt mapped the intersections between Islamic discourses and psychoanalytic thought.

What are the very first things that pop into your mind when you hear the words “psychoanalysis” and “Islam” paired together?  For some of us, the connections might seem improbable or even impossible. And if we were to be brutally honest, the two terms might even evoke the specter of a so-called “clash of civilizations” between an enlightened, self-reflective West and a fanatical and irrational East.

It might surprise many of us to know, then, that Sigmund Freud, the founding figure of psychoanalysis, was ever-present in postwar Egypt, engaging the interest of academics, novelists, lawyers, teachers, and students alike. In 1946 Muhammad Fathi, a Professor of Criminal Psychology in Cairo, ardently defended the relevance of Freud’s theories of the unconscious for the courtroom, particularly for understanding the motives behind homicide. Readers of Nobel laureate Naguib Mahfouz’s 1948 The Mirage were introduced to the Oedipus complex, graphically portrayed in the novel, by immersing themselves in the world of its protagonist, pathologically and erotically fixated on his possessive mother. And by 1951 Freudian theories were so well known in Egypt that a secondary school philosophy teacher proposed prenuptial psychological exams in order to prevent unhappy marriages due to unresolved Oedipus complexes!

Scholars who have tackled the question of psychoanalysis and Islam have tended to focus on it as a problem, assuming that psychoanalysis and Islam have been “mutually ignorant” of each other, and they have placed Islam on the couch, as it were, alleging that it is resistant to the “secular” science of psychoanalysis. In my book, The Arabic Freud, I undo the terms of this debate and ask, instead, what it might mean to think of psychoanalysis and Islam together, not as a “problem,” but as a creative encounter of ethical engagement.

What I found was that postwar thinkers in Egypt saw no irreconcilable differences between psychoanalysis and Islam. And in fact, they frequently blended psychoanalytic theories with classical Islamic concepts. For example, when they translated Freud’s concept of the unconscious, the Arabic term used, “al-la-shuʿur,” was taken from the medieval mystical philosopher Ibn ʿArabi, renowned for his emphasis on the creative imagination within Islamic spirituality.

Islamic thinkers further emphasized similarities between Freud’s interpretation of dreams and Islamic dream interpretation, and they noted that the analyst-analysand (therapist-patient) relationship and the spiritual master-disciple relationship of Sufism (the phenomenon of mysticism in Islam) were nearly identical. In both instances, there was an intimate relationship in which the “patient” was meant to forage their unconscious with the help of their shaykh (spiritual guide) or analyst, as the case might be. Both Sufism and psychoanalysis, then, were characterized by a relationship between the self and the other that was mediated by the unconscious. Both traditions exhibited a concern for the relationship between what was hidden and what was shown in psychic and religious life, both demonstrated a preoccupation with eros and love, and both mobilized a highly specialized vocabulary of the self.

What, precisely, are we to make of this close connection between Islamic mysticism and psychoanalysis? On the one hand, it helps us identify something of a paradox within psychoanalysis, namely that for some, psychoanalysis represents a non-religious and even atheistic worldview. And there is ample evidence for this view within Freud’s own writings, which at times pathologized religion in texts such as The Future of an Illusion and Civilization and Its Discontents. At the same time, in Freud and Man’s Soul, Bruno Bettelheim argued that in the original German Freud’s language was full of references to the soul, going so far as to refer to psychoanalysts as “a profession of secular ministers of souls.” Similarly, psychoanalysis was translated into Arabic as “tahlil al-nafs”—the analysis of the nafs, which means soul, psyche, or self and has deeply religious connotations. In fact, throughout the twentieth century there have been psychoanalysts who have maintained a receptive attitude towards religion and mysticism, such as Marion Milner or Sudhir Kakar. What I take all of this to mean is that psychoanalysis as a tradition is open to multiple, oftentimes conflicting, interpretations, and we can take Freud’s own ambivalence towards religion, and towards mysticism in particular, as an invitation to rethink the relationship between psychoanalysis and religion.

What, then, if religious forms of knowledge, and the encounter between psychoanalysis and Islam more specifically, might lead us to new insights into the psyche, the self, and the soul? What would this mean for how we think about the role of religion and ethics in the making of the modern self? And what might it mean for how we think about the relationship between the West and the Islamic world?

Omnia El Shakry is Professor of History at the University of California, Davis. She is the author of The Great Social Laboratory: Subjects of Knowledge in Colonial and Postcolonial Egypt and the editor of Gender and Sexuality in Islam. Her new book, The Arabic Freud, is out this September.

Mitchell Cohen: The Politics of Opera

The Politics of Opera takes readers on a fascinating journey into the entwined development of opera and politics, from the Renaissance through the turn of the nineteenth century. What political backdrops have shaped opera? How has opera conveyed the political ideas of its times? Delving into European history and thought and an array of music by such greats as Lully, Rameau, and Mozart, Mitchell Cohen reveals how politics—through story lines, symbols, harmonies, and musical motifs—has played an operatic role both robust and sotto voce.

Politics is not usually the first thing most people think about when it comes to opera. Why did you write a book on politics and opera?

MC: It was natural. I have a passion for opera and I am a professor of political theory and co-edited Dissent, a political magazine. I began writing the book in order to explore the intersection of two apparently disparate domains. Moreover, if the relation between aesthetic ideas and political ideas interests you, opera provides a great terrain for exploration. Of course, not all operas are political, but more are—or have political implications—than many people realize. I should add: politics does not consume all there is to say about those operas that are political. The Politics of Opera is about how and when two domains come together, and I define politics broadly. In any event, there was also a selfish dimension to my project: I had to go to the opera for work. There are worse things to have to do.

Your book is unusual because of the time span you cover, roughly from the birth of opera through Mozart, some two hundred years. Why choose this period?

MC: Well, let’s start at the beginning. Modern politics—the modern state in Europe—was, broadly speaking, born at the time of the Renaissance. Opera emerged in the late Renaissance. In the last decades of the 16th century, humanist intellectuals in Florence debated “ancient” and “modern” music—they meant Greek antiquity and their own day. Galileo’s father was one of them. Their conversations led to experiments that, in turn, became opera at the turn of the 17th century. In roughly this era, in Italy and France, important debates occurred and books were published about politics and its nature, because politics itself was transforming. One might say that Machiavelli, decades earlier, began the discussion. Of course he didn’t write operas (he did write plays). The parallel between the development of a new form of politics and a new form of musical stage art intrigued me. But in Mozart’s day there was a massive political crack-up, the French Revolution—there was, then, great upheaval and great genius at the same time. That’s why I took the late 18th century as a natural historical border. The Politics of Opera seeks to sink operas into the political times in which they were first imagined, not to treat them as somehow standing outside their times. Another way of saying that is that if you want truly to grasp the politics of an opera, you must look deeply both into history and into the ideas that were current when it was written and composed. You have to know what was being argued about then, and not just impose your own contemporary preoccupations, although your own preoccupations may be enlightening too—so long as you keep an eye on the differences between your ideas and those found, say, in an opera by Monteverdi or Rameau or Mozart.

For whom are you writing?

MC: I try to write for a broad intelligent public and for scholars. I sought to make a contribution to our understanding of interesting, not-always-evident matters but in accessible ways. I hope that opera fans along with scholars and students of history, culture, music and politics will all be engaged by it. I hope they’ll learn something of what I learned in writing and researching it.

Your book’s prologue speaks of the itinerary of your explorations. What was the route?

MC: Italy, France, Vienna. Florence under the Medicis was the obvious place to begin because those humanists I mentioned were talking about relations between music, feelings, and ideas. The earliest opera for which we still have both the libretto and the music retold the story of Eurydice and Orpheus for a political event, the marriage in 1600 of Maria de’ Medici to King Henri IV of France in Florence (he didn’t show up but sent a stand-in!). But then there was a leap of musical imagination when, in Mantua just a few years later, Claudio Monteverdi began composing operas, first of all his remarkable Orfeo. I am always tempted to call him “the great Monteverdi,” and indeed he was the first great composer of opera, although he wrote many other wonderful compositions too. He would eventually be fired from Mantua’s ducal court, but then he received a much more prestigious position in Venice, a republic. Towards the end of his life he composed some amazing operas in collaboration with librettists who were close to power in Venice. These included the first directly political and historical opera, The Coronation of Poppea, in which the philosopher Seneca and the Roman emperor Nero quarrel over “reason” versus “emotion” in ruling. From Italy I went to France, more precisely to the birth of French opera thanks to Jean-Baptiste Lully during the reign of Louis XIV. Then I turned to the quarrel in the 18th century between a great composer and theorist of harmony, Jean-Philippe Rameau, and a popular but not-so-great composer of opera, Jean-Jacques Rousseau. Yes, the Rousseau, the famous political philosopher who advocated sovereignty of the people but who also aspired to be a composer. Poor Rameau! Poor Rousseau! Rameau was the great artist, and my book devotes considerable space to his opera Les Indes galantes, a remarkable work that in part reflects the Age of Exploration—what others would call the Age of Imperialism. But Rameau was not a spectacular writer, and Rousseau’s music, well, let’s just say you wouldn’t want to go too often to his best-known opera, Le Devin du Village (The Village Soothsayer). However, you really wouldn’t want to get into polemics with him, since he was a master of them.

From France I went on to Vienna, to Metastasio, the Imperial Poet of the Holy Roman Empire, whose librettos were set by many composers, including Vivaldi. For my purposes the most interesting of them was Cato in Utica, which is about the last Roman republican resistance to the rise of the Roman Empire—Cato versus Julius Caesar. Of course, the book must finally come to Mozart’s operas.

As I looked at all these operas I tried to contextualize them and also to show parallels with key political ideas and problems of the times—ideas and problems that are embedded in them. So readers will come across a number of important thinkers and writers—some well-known, some less-known today—weaving throughout the book. These range from Machiavelli and Tacitus to Jean Bodin, Diderot, Edmund Burke, Rousseau and others.

Was Mozart political?

MC: Mozart was, of course, a man of music before anything else. We should be forever grateful for that. The more you study him, the more amazing he becomes. He didn’t write on politics, but he certainly had problems with authority. His operas are filled with political themes and political issues of his time. He didn’t write his librettos, but he helped to shape them. I try in The Politics of Opera to give a close reading (and hearing) to the results. The book actually stretches a little beyond Mozart and rounds off by discussing a little-known work. The German poet Goethe wrote a sequel to The Magic Flute a few years after Mozart’s death. Goethe never finished it, and nobody was brave enough to write music for it. In it there is a regrouping of the forces of darkness. Led by the infamous Queen of the Night, they launch an assault against Sarastro’s enlightened realm—he is on a sabbatical—and against Tamino and Pamina. Goethe wrote it in the mid-1790s. It is easy to think of it in light of wars and politics in Europe just then. There is, of course, much more to be found in it too.

You certainly cover a lot of territory. How do you approach it all?

MC: By using insights drawn from many thinkers and varied methods—political, philosophical, musicological, historical—in different combinations. I don’t impose one model on everything. I prefer what I call a methodological medley. It seems to me a particularly fruitful way to be inter-disciplinary.

Mitchell Cohen is professor of political science at Baruch College and the Graduate School of the City University of New York and an editor emeritus of Dissent. His books include The Wager of Lucien Goldmann and The Politics of Opera: A History from Monteverdi to Mozart. He has been a National Endowment for the Humanities Fellow at the Institute for Advanced Study in Princeton and has written for many publications including the New York Times Sunday Book Review and the Times Literary Supplement (London).

Browse Our New Sociology 2017 Catalog

Our new Sociology catalog includes an essential guide to social science research in the digital age, an inside look at blue-collar trades turned hipster crafts, and an examination of the commercialization of far right culture in Germany.

If you’ll be at ASA 2017 in Montreal, please join us for wine and light refreshments:

Booth 721
3pm
Sunday, August 13th

Or stop by any time to see our full range of sociology titles and more.

Digital technology has the potential to revolutionize social research, data gathering, and analysis. In Bit by Bit, Matthew J. Salganik presents a comprehensive guide to the principles of social research in the digital age. Essential reading for anyone hoping to master the new techniques enabled by fast-developing digital technologies.

Bit by Bit, by Matthew J. Salganik

Richard E. Ocejo draws on multiple years of participant-observation in a fascinating look at four blue-collar trades that have acquired a new cachet in the modern urban economy: bartending, distilling, barbering, and butchering. Join him as he delves deep into the lives and culture of these Masters of Craft.

Recent years have seen a resurgence of far right politics in Europe, manifesting in the increasing presence of clothing and other products displaying overt or coded anti-Semitic, racist, and nationalist symbology. Cynthia Miller-Idriss examines the normalization and commercialization of far right ideology in The Extreme Gone Mainstream.

Alexandra Logue: Are Faculty Missing in Action?

This post was originally published on the blog of Alexandra Logue

Last fall, an article in Inside Higher Ed authored by Judith Shapiro, President of the Teagle Foundation and former President of Barnard College, made the following statement:

“For the most part, however, faculty members have simply been missing in action when it comes to dealing with campus upheavals around race and racism.”

I agree with this statement, but I would expand it to say that faculty members have frequently been missing in action with regard to all kinds of controversial issues.  At many (most?) institutions, faculty are rewarded with promotions, raises, and tenure first for their research (largely based on their individual efforts), second for their teaching (again, largely based on their individual efforts), and only third for their service, which would include working together with others to make their colleges congenial and productive places for the colleges’ diverse inhabitants.  The faculty who produce the most work of direct benefit to themselves are largely those faculty who keep to themselves, focus on their own work, and stay out of the way of college conflagrations.  Consistent with this statement, research has shown that faculty do not feel safe expressing views with which others may disagree until they have had the final promotion to full professor (not, as some people think, until they have tenure).

An example of these tendencies concerns credit transfer among the 19 undergraduate colleges of The City University of New York, where approximately 10,000 students transfer each fall alone.  Credit transfer is a controversial subject, one reason being that whether or not the receiving college counts the credits can directly affect the college’s, as well as a department’s, funds, and whether faculty members have sufficient enrollment to teach certain courses.  Although ensuring that credits transfer can benefit students, it can also mean depriving faculty and/or a college of something desirable to them.  Thus it is no surprise that, although for over 40 years problems with credit transfer were seen as the worst problems for CUNY students, and although the faculty issued some statements about those problems, the faculty took no actions to solve them.  When the central administration finally instituted a system (known as Pathways) that guaranteed credit transfer for some courses, thus directly affecting some faculty members’ courses, only then did some faculty spend significant amounts of time on the credit transfer issue, with most of those faculty objecting to Pathways, including by filing lawsuits against it.  This prompted one CUNY Distinguished Professor, in his testimony at a public hearing on Pathways, to say to the faculty in the audience: “Where have you been?  Where have you been for 40 years?”

Although there is nothing wrong with working hard to benefit oneself, we also need to provide clear incentives for faculty to work together for the benefit of students, as well as for the rest of the higher education community.

There is more about these issues in my forthcoming book, Pathways to Reform: Credits and Conflict at The City University of New York.

Alexandra W. Logue is a research professor at the Center for Advanced Study in Education at the Graduate Center, CUNY. From 2008 to 2014, she served as executive vice chancellor and university provost of the CUNY system. She is the author of The Psychology of Eating and Drinking and Self-Control: Waiting Until Tomorrow for What You Want Today. She lives in New York City.

Landon R. Y. Storrs: What McCarthyism Can Teach Us about Trumpism

Since the election of President Donald Trump, public interest in “McCarthyism” has surged, and the focus has shifted from identifying individual casualties to understanding the structural factors that enable the rise of demagogues.

After The Second Red Scare was published in 2012, most responses I received from general readers were about the cases of individuals who had been investigated, or whom the inquirer guessed might have been investigated, under the federal employee loyalty program. That program, created by President Truman in 1947 in response to congressional conservatives’ charges that his administration harbored communist sympathizers, was the engine of the anticommunist crusade that became known as McCarthyism, and it was the central subject of my book. I was the first scholar to gain access to newly declassified records related to the loyalty program and thus the first to write a comprehensive history. The book argues that the program not only destroyed careers but also profoundly affected public policy in many fields.

Some queries came from relatives of civil servants whose lives had been damaged by charges of disloyalty. A typical example was the person who wanted to understand why his parents, in the early 1950s, abruptly moved the family away from Washington, D.C., and until their deaths refused to explain why. Another interesting inquiry came from a New York Times reporter covering Bill de Blasio’s campaign for New York City mayor. My book referenced the loyalty case of Warren Wilhelm Sr., a World War II veteran and economist who left government service in 1953, became an alcoholic, was divorced by his wife, and eventually committed suicide. He never told his children about the excruciating loyalty investigation. His estranged son, born Warren Wilhelm Jr., legally adopted his childhood nickname, Bill, and his mother’s surname, de Blasio. I didn’t connect the case I’d found years earlier to the mayoral candidate until the journalist contacted me, at which point I shared my research. At that moment de Blasio’s opponents were attacking him for his own youthful leftism, so it was a powerful story, as I tried to convey in The Nation.

With Trump’s ascendance, media references to McCarthyism have proliferated, as commentators struggle to make sense of Trump’s tactics and supporters. Opinion writers note that Trump shares McCarthy’s predilections for bluffing and for fear-mongering—with terrorists, Muslims, and immigrants taking the place of communist spies. They also note that both men were deeply influenced by the disreputable lawyer Roy Cohn. Meanwhile, the president has tweeted that he himself is a victim of McCarthyism, and that the current investigations of him are “witch hunts”—leaving observers flummoxed, yet again, as to whether he is astonishingly ignorant or shamelessly misleading.

But the parallels between McCarthy’s era and our own run deeper than personalities. Although The Second Red Scare is about McCarthyism, it devotes little attention to McCarthy himself. The book is about how opponents of the New Deal exploited Americans’ fear of Soviet espionage in order to roll back public policies whose regulatory and redistributive effects conservatives abhorred. It shows that the federal employee loyalty program took shape long before the junior senator from Wisconsin seized the limelight in 1950 by charging that the State Department was riddled with communists.

By the late 1930s congressional conservatives of both parties were claiming that communists held influential jobs in key New Deal agencies—particularly those that most strongly challenged corporate prerogatives regarding labor and prices. The chair of the new Special House Committee to Investigate Un-American Activities, Martin Dies (a Texas Democrat who detested labor unions, immigrants, and black civil rights as much as communism), demanded that the U.S. Civil Service Commission (CSC) investigate employees at several agencies. When the CSC found little evidence to corroborate Dies’s allegations, he accused the CSC itself of harboring subversives. Similarly, when in 1950 the Tydings Committee found no evidence to support McCarthy’s claims about the State Department, McCarthy said the committee conducted a “whitewash.” President Trump too insists that anyone who disproves his claims is part of a conspiracy. One important difference is that Dies and McCarthy alleged a conspiracy against the United States, whereas Trump chiefly complains of conspiracies against himself—whether perpetrated by a “deep state” soft on terrorism and immigration or by a biased “liberal media.” The Roosevelt administration dismissed Dies as a crackpot, and during the Second World War, attacks on the loyalty of federal workers got little traction.

That changed in the face of postwar Soviet conduct, the nuclear threat, and revelations of Soviet espionage. In a futile effort to counter right-wing charges that he was “soft” on communism, President Truman expanded procedures for screening government employees, creating a loyalty program that greatly enhanced the power of the FBI and the Attorney General’s List of Subversive Organizations. State, local, and private employers followed suit. As a result, the threat of long-term unemployment forced much of the American workforce not only to avoid political dissent, but to shun any association that an anonymous informant might find suspect. Careers and families were destroyed. With regard to the U.S. civil service, the damage to morale and to effective policymaking lasted much longer than the loyalty program itself.

Public employees have long been vulnerable to political attacks. Proponents of limited government by definition dislike them, casting them as an affront to the (loaded) American ideals of rugged individualism and free markets. But hostility to government employees has been more broad-based at moments when severe national security threats come on top of widespread economic and social insecurity. The post-WWII decade represented such a moment. In the shadow of the Soviet and nuclear threats, women and African-Americans struggled to maintain the toeholds they had gained during the war, and some Americans resented new federal initiatives against employment discrimination. Resentment of the government’s expanding role was fanned by right-wing portrayals of government experts as condescending, morally degenerate “eggheads” who avoided the competitive marketplace by living off taxpayers.

Today, widespread insecurity in the face of terrorism, globalization, multiculturalism, and gender fluidity has made many Americans susceptible to the same sorts of reactionary populist rhetoric heard in McCarthy’s day. And again that rhetoric serves the objectives of those who would gut government, or redirect it to serve private rather than public interests.

The Trump administration calls for shrinking the federal workforce, but the real goal is a more friendly and pliable bureaucracy. Trump advisers complain that Washington agencies are filled with leftists. Trump transition teams requested names of employees who worked on gender equality at State and climate change initiatives at the EPA. Trump media allies such as Breitbart demanded the dismissal of Obama “holdovers.” Trump selected appointees based on their personal loyalty rather than qualifications and, when challenged, suggested that policy expertise hinders fresh thinking. In firing Acting Attorney General Sally Yates for declining to enforce his first “travel ban,” Trump said she was “weak” and had “betrayed” her department. Such statements, like Trump’s earlier claims that President Obama was a Kenyan-born Muslim, fit the textbook definition of McCarthyism: undermining political opponents by making unsubstantiated attacks on their loyalty to the United States. Even more alarming is Trump’s pattern of equating disloyalty to himself with disloyalty to the nation—the textbook definition of autocracy.

Might the demise of McCarthyism hold lessons about how Trumpism will end? The Second Red Scare wound down thanks to the courage of independent journalists, the decision after four long years of McCarthy’s fellow Republican senators to put country above party, and U.S. Supreme Court decisions in cases brought by brave defendants and lawyers. The power of each of those forces was contingent, of course, on the abilities of Americans to sort fact from fiction, to resist the politics of fear and resentment, and to vote.

Landon R. Y. Storrs is professor of history at the University of Iowa. She is the author of Civilizing Capitalism: The National Consumers’ League, Women’s Activism, and Labor Standards in the New Deal Era and The Second Red Scare and the Unmaking of the New Deal Left.

Steven Weitzman: The Origin of the Jews

The Jews have one of the longest continuously recorded histories of any people in the world, but what do we actually know about their origins? While many think the answer to this question can be found in the Bible, others look to archaeology or genetics. Some skeptics have even sought to debunk the very idea that the Jews have a common origin. In The Origin of the Jews: The Quest for Roots in a Rootless Age, Steven Weitzman takes a learned and lively look at what we know—or think we know—about where the Jews came from, when they arose, and how they came to be. Weitzman recently took the time to answer a few questions about his new book.

Isn’t the origin of the Jews well known? The story as I learned it begins with the Bible—with Abraham, Isaac and Jacob and with the story of the Exodus from Egypt. What is it that we do not understand about the origin of the Jews?

SW: Arguably, modernity was born of a recognition that things did not originate in the way the Bible claims. Over the course of the nineteenth and twentieth centuries, as the intellectual elite in Europe began to realize that the Bible could not be relied upon as an origin account, they turned to science, to critical historiography, to archaeology, and to other scholarly methods to try to answer the question of where things and people come from. The results of their efforts include Darwin’s theory of evolution, the Big Bang theory, and other enduring theories of origin, along with a lot of theories and ideas that have since been discredited. The same intellectual process unsettled how people accounted for the origin of the Jews. Scholars applied the tools that had been used to understand the origin of language, religion, and culture to the Jews, and in this way developed alternative accounts very different from or even opposed to the biblical account. This book tells the story of what scholars have learned in this way and wrestles with why, despite centuries of scholarship, the question of the origin of the Jews remains unsettled.

So what have scholars learned about the origin of the Jews?

SW: A lot and a little at the same time. There has been a tremendous amount of scholarship generated by the question. The Documentary Hypothesis, the famous theory that the Five Books of Moses reflect the work of different authors in different historical periods, was originally intended as an effort to explain how the people of the Old Testament became the Jews. Focusing on different textual sources, Assyriologists have uncovered evidence of a people in Canaan known as the Habiru, believed by some to be the ancestors of the Hebrews; others would trace the Jews’ origin to Egypt or see a role for Greek culture in their development. Every theory can cite facts to support its account, and some are quite pioneering in the methods they deploy. And yet, even as someone conversant in this scholarship, I find that I myself cannot answer the question of what the origin of the Jews is. It is actually the difficulty of answering the question that fascinates me. From within my small field, I have always been drawn to questions that lie at the edge of or just beyond what scholars can know about the world, questions that appear to be just beyond reach, and the origin of the Jews represents one of those questions, lying inside and outside of history at the same time.

Can you explain more why the origin of the Jews is so hard to pin down?

SW: Partly the problem is a scarcity of evidence. If we are looking to prehistory to understand the origin of the Jews—prehistory in this context would refer to the period before we have written accounts of the Israelites—there just isn’t a lot of evidence to work with. We know that at some point a people called Israel emerged, but we have very little evidence that can help us understand that process—a lot of theories and educated guesses, but not a lot of solid facts.

Origins are always elusive—they always seem to be buried, hidden, or lost—and scholarship has really had to strain to find relevant evidence to base itself on.

But for me at least, the biggest challenge of all was the problem of pinning down what an origin is. The term covers a range of different ideas—continuity and novelty, ancestry and invention. An origin can refer to lineage, to whatever connects a thing to the past, but it can also refer to a rupture, the emergence of something fundamentally discontinuous with the past. I came to realize that one of the main reasons scholars explain the origin of the Jews so differently is that they begin from different conceptions of what an origin is. This project forced me to recognize that I didn’t understand what an origin is or sufficiently appreciate all the different assumptions, beliefs, and questionable metaphors that lie hidden within that term.

Not only are there conceptual difficulties inherent in the search for Jewish origin, but there are political problems as well. The effort to answer the question of the origin of the Jews has had devastating consequences, as the Nazis demonstrated by using the scholarship of origin to rationalize violence against the Jews. More recently, the question has gotten caught up in the Israeli-Palestinian conflict, and it is entangled in various intra-communal and interfaith debates about the nature of Jewish and Christian identity. There were many reasons to avoid this topic (intellectual, political, and arguably even ethical), but not pursuing it also has its costs. There are lots of ideas circulating out there about how the Jews originated, along with a lot of misstatements, unexamined assumptions, and confusion, and I felt it would be helpful to describe the challenges of this question, why it is difficult to address, what we know and don’t know, and what is at stake.

The book surveys several different approaches—various historical approaches, archaeology, social scientific approaches, even psychoanalysis has been used to address the question—but the research most likely to interest many contemporary readers comes from the field of genetics. What does DNA reveal about the origin of the Jews?

SW: First of all, I should say up front that I am not a geneticist and much of what I present in the book is based on what I learned from geneticist colleagues when I was a faculty member at Stanford or read at their suggestion. But we happen to be in a period when geneticists are making great strides in using DNA as a historical source, a way to understand the origin, migration history, and sexual and health history of different populations, and Jews have been intensively studied from this perspective. Even though the science was new to me, I felt I could not write a book on this subject without trying to engage this new research. As for what such research reveals, it offers a new way of investigating the ancestry of the Jews, the population(s) from whom they descend, and potentially sheds light on where that population lived, its size and demographic practice, and its mating practices. It can even help us to distinguish distinct histories for the male and female lineages of contemporary Jewish populations. All fascinating stuff, but does genetics represent the future of the quest to understand the origin of the Jews? The research is developing very rapidly. The data sets are expanding rapidly; the analysis is getting more nuanced; studies conducted a decade or two ago have already been significantly revised or superseded; and it is hard for non-geneticists to judge what is quality research and what is questionable. What is clear is that there has been criticism of such research from anthropologists and historians of science who detect hidden continuities with earlier now discredited race science and question how scientists interpret the data. I tried to tell both sides of this story, distilling the research but also giving voice to the critiques, and the book includes bibliographic guidance for those who want to judge the research for themselves.

Has this project gotten you to think about your own origin differently?

SW: Yes, but not in the way one might expect. Of course, as a Jew myself, the questions were not just intellectual but also personal and relational, bearing on how I thought about my own ancestry, my own sense of connection to my forebears, to other Jews, and to the land of Israel and to other peoples, but what I learned about the history of scholarship just didn’t reveal the clear insight one might have hoped for. To give a minor and amusing example, I recall being impressed by a genetic study which uncovered evidence of a surprising ancestry for Ashkenazic Levites. A Levite is a descendant from the tribe of Levi, a tribe with a special religious role, and I inherited such a status from my father. I never put any real stake in this part of my inheritance, but it was a point of connection to my father and his father, and I admit that I was intrigued when I read this study, which found that Ashkenazic Levite males have a different ancestry than that of other Ashkenazic Jews, perhaps descending from a convert with a different backstory than that of the other males in the tiny population from which today’s Ashkenazic Jews descend. But then a few years later, the same scientist published another study which undid that conclusion. So it goes with the research in general: it tells too many stories, or changes too much, or is too equivocal or uncertain in its results to demystify the origin of who I am. But on the other hand, I did learn a lot from this project about how—and why—I think about origins at all, and the mystery of who I am as a Jew—and of who we all are as human beings—runs much deeper for me now.

Steven Weitzman is the Abraham M. Ellis Professor of Hebrew and Semitic Languages and Literatures and Ella Darivoff Director of the Herbert D. Katz Center for Advanced Judaic Studies at the University of Pennsylvania. His books include Solomon: The Lure of Wisdom, Surviving Sacrilege: Cultural Persistence in Jewish Antiquity, and The Origin of the Jews: The Quest for Roots in a Rootless Age.

Yair Mintzker: The Many Deaths of Jew Süss

Joseph Süss Oppenheimer—”Jew Süss”—is one of the most iconic figures in the history of anti-Semitism. In 1733, Oppenheimer became the “court Jew” of Carl Alexander, the duke of the small German state of Württemberg. When Carl Alexander died unexpectedly, the Württemberg authorities arrested Oppenheimer, put him on trial, and condemned him to death for unspecified “misdeeds.” On February 4, 1738, Oppenheimer was hanged in front of a large crowd just outside Stuttgart. He is most often remembered today through several works of fiction, chief among them a vicious Nazi propaganda movie made in 1940 at the behest of Joseph Goebbels. The Many Deaths of Jew Süss by Yair Mintzker is a compelling new account of Oppenheimer’s notorious trial.

You have chosen a very intriguing title for your book—The Many Deaths of Jew Süss. Who was this “Jew Süss” and why did he die more than once?

YM: Jew Süss is the nickname of Joseph Süss Oppenheimer, one of the most iconic figures in the history of anti-Semitism. Originally from the Jewish community in Heidelberg, Germany, in 1732 Oppenheimer became the personal banker (“court Jew”) of Carl Alexander, duke of the small German state of Württemberg. When Carl Alexander died unexpectedly in 1737, the Württemberg authorities arrested Oppenheimer, put him on trial, and eventually hanged him in front of a large crowd just outside Stuttgart. He is most often remembered today through a vicious Nazi propaganda movie made about him at the behest of Joseph Goebbels.

Why is Oppenheimer such an iconic figure in the history of anti-Semitism?

YM: Though Oppenheimer was executed almost three centuries ago, his trial never quite ended. Even as the trial was unfolding, it was already clear that what was being placed in the scales of justice was not any of Oppenheimer’s alleged crimes. The verdict pronounced in his case conspicuously failed to provide any specific details about the reasons for the death sentence. The significance of the trial, and the reasons for Oppenheimer’s public notoriety ever since the eighteenth century, stem from the fact that Oppenheimer’s rise-and-fall story has been viewed by many as an allegory for the history of German Jewry in general. Here was a man who tried to fit in, and seemed to for a time, but was eventually rejected; a Jew who enjoyed much success but then fell from power and met a violent death. Thus, at every point in time when the status, culture, past and future of Germany’s Jews have hung in the balance, the story of this man has moved to center stage, where it was investigated, novelized, dramatized, and even set to music. It is no exaggeration to say that Jew Süss is to the German collective imagination what Shakespeare’s Shylock is to the English-speaking world.

Your book is about Oppenheimer’s original trial, not about how this famous court Jew was depicted later. Why do you claim that he died more than once?

YM: We need to take a step back and say something about the sources left by Oppenheimer’s trial. Today, in over one hundred cardboard boxes in the state archives in Stuttgart, one can read close to thirty thousand handwritten pages of documents from the time period of the trial. Among these pages are the materials collected by the inquisition committee assigned to the case; protocols of the interrogations of Oppenheimer himself, his alleged accomplices, and many witnesses; descriptions of conversations Oppenheimer had with visitors in his prison cell; and a great number of poems, pamphlets, and essays about Oppenheimer’s final months, days, hours, and even minutes. But here’s the rub: while the abundance of sources about Oppenheimer’s trial is remarkable, the sources themselves never tell the same story twice. They are full of doubts, uncertainties, and outright contradictions about who Oppenheimer was and what he did or did not do. Instead of reducing these diverse perspectives to just one plot line, I decided to explore in my book four different accounts of the trial, each from a different perspective. The result is a critical work of scholarship that uncovers mountains of new documents, but one that refuses to reduce the story of Jew Süss to only one narrative.

What are the four stories you tell in the book, then?

YM: I look at Oppenheimer’s life and death as told by four contemporaries: the leading inquisitor in Oppenheimer’s trial, the most important eyewitness to Oppenheimer’s final days, a fellow court Jew who was permitted to visit Oppenheimer on the eve of his execution, and one of Oppenheimer’s earliest biographers.

What do we learn from these stories?

YM: What emerges from these accounts, above and beyond everything else, is an unforgettable picture of Jew Süss in his final days. It is a lurid tale of greed, sex, violence, and disgrace, but one that we can fully comprehend only if we follow the life stories of the four narrators and understand what they were trying to achieve by writing about Oppenheimer in the first place.

Is the purpose of this book to show, by composing these conflicting accounts of Jew Süss, that the truth is always in the eye of the beholder, that everything is relative and that there is therefore no one, single truth?

YM: No. The realization that the world looks different from different perspectives cannot possibly be the bottom line of a good work of history. This is so not because it’s wrong, but because it’s obvious. What I was setting out to do in writing this book was different. I used the multi-perspectival nature of lived experience as my starting point, not as my destination; it was a belief that informed what I did rather than a conclusion toward which I was driving.

And the result?

YM: A moving, disturbing, and often outright profound account of Oppenheimer’s trial that is also an innovative work of history and an illuminating parable about Jewish life in the fraught transition to modernity.

Yair Mintzker is associate professor of history at Princeton University. He is the author of The Defortification of the German City, 1689–1866 and The Many Deaths of Jew Süss: The Notorious Trial and Execution of an Eighteenth-Century Court Jew.

Joel Brockner: Can Job Autonomy Be a Double-Edged Sword?

This post was originally published on the Psychology Today blog.

“You can arrive at work whenever convenient.”

“Work from home whenever you wish.”

“You can play music at work at any time.”

These are examples of actual workplace policies from prominent companies such as Aetna, American Express, Dell, Facebook, Google, IBM, and Zappos. They have joined the ranks of many organizations in giving employees greater job autonomy, that is, more freedom to decide when, where, and how to do their work. And why not? Research by organizational psychologists such as Richard Hackman and Greg Oldham, and by social psychologists such as Edward Deci and Richard Ryan, has shown that job autonomy can have many positive effects. The accumulated evidence is that employees who experience more autonomy are more motivated, creative, and satisfied with their jobs.

Against this backdrop of the generally favorable effects of job autonomy, recent research has shown that it may also have a dark side: unethical behavior. Jackson Lu, Yoav Vardi, Ely Weitz, and I discovered such results in a series of field and laboratory studies soon to be published in the Journal of Experimental Social Psychology. In field studies conducted in Israel, employees from a wide range of industries rated how much autonomy they had and how often they engaged in unethical behavior, such as misrepresenting their work hours or wasting work time on private phone calls. Those who had greater autonomy said that they engaged in more unethical behavior on the job.

In laboratory experiments conducted in the United States, we found that it may not even be necessary for people to have actual autonomy for them to behave unethically; merely priming them with the idea of autonomy may do the trick. In these studies participants were randomly assigned to conditions differing in how much the concept of autonomy was called to mind. This was done with a widely used sentence-unscrambling task in which people had to rearrange multiple series of words into grammatically correct sentences. For example, those in the high-autonomy condition were given words such as “have many as you as days wish you vacation may,” which could be rearranged to form the sentence, “You may have as many vacation days as you wish.” In contrast, those in the low-autonomy condition were given words such as “office in work you must the,” which could be rearranged to, “You must work in the office.” After completing the sentence-unscrambling exercise, participants did another task in which they were told that the amount of money they earned depended on how well they performed. The activity was structured in a way that enabled us to tell whether participants lied about their performance. Those who were previously primed to experience greater autonomy in the sentence-unscrambling task lied more.

Job autonomy gives employees a sense of freedom, which usually has positive effects on their productivity and morale, but it can also lead them to feel that they can do whatever they want, including not adhering to rules of morality.

All behavior is a function of what people want to do (motivation) and what they are capable of doing (ability). Consider the unethical behavior elicited by high levels of autonomy. Having high autonomy may not have made people want to behave unethically. However, it may have enabled the unethical behavior by making it possible for people to engage in it. Indeed, the distinction between people wanting to behave unethically versus having the capability of doing so may help answer two important questions:

(1) What might mitigate the tendency for job autonomy to elicit unethical behavior?

(2) If job autonomy can lead to unethical behavior, should companies re-evaluate whether to give job autonomy to their employees? That is, can job autonomy be introduced in a way that maximizes its positive consequences (e.g., greater creativity) without introducing the negative effect of unethical behavior?

With respect to the first question, my hunch is that people who have job autonomy and therefore are able to behave unethically will not do so if they do not want to behave unethically. For example, people who are high on the dimension of moral identity, for whom behaving morally is central to how they define themselves, would be less likely to behave unethically even when a high degree of job autonomy made it possible for them to do so.

With respect to the second question, I am not recommending that companies abandon their efforts to provide employees with job autonomy. Our research suggests, rather, that the consequences of giving employees autonomy may not be uniformly favorable. Taking a more balanced view of how employees respond to job autonomy may shed light on how organizations can maximize its positive effects while minimizing the negative consequence of unethical behavior.

Whereas people generally value having autonomy, some want it more than others. People who strongly want autonomy may be less likely to behave unethically when they experience it. For one thing, they may be concerned that the autonomy they covet would be taken away if they took advantage of it by behaving unethically. This reasoning led us to do another study to evaluate when the potential downside of felt autonomy can be minimized while its positive effects are maintained. Once again, we primed people to experience varying degrees of job autonomy with the word-unscrambling exercise. Half of them then went on to do the task that measured their tendency to lie about their performance, whereas the other half completed an entirely different task, one measuring their creativity. Once again, those who worked on the task in which they could lie about their performance did so more when they were primed to experience greater autonomy. And, as has been found in previous research, those who did the creativity task performed better at it when they were primed to experience greater autonomy.

Regardless of whether they did the task that measured unethical behavior or creativity, participants also indicated how much they generally valued having autonomy. Among those who valued autonomy more, (1) the positive relationship between experiencing job autonomy and behaving unethically diminished, whereas (2) the positive relationship between experiencing job autonomy and creativity was maintained. In other words, as long as people valued having autonomy, the experience of autonomy had the positive effect of enhancing creativity without introducing the dangerous side effect of unethical behavior. So, when organizations introduce job autonomy policies like those mentioned at the outset, they may gain greater overall benefits when they ensure that their employees value having autonomy. This may be achieved by selecting employees who value autonomy as well as by creating a corporate culture that emphasizes its importance. More generally, a key practical takeaway from our studies is that when unethical behavior is enabled, whether through job autonomy or other factors, it needs to be counterbalanced by conditions that make employees not want to go there.

Joel Brockner is the Phillip Hettleman Professor of Business at Columbia Business School. He is the author of The Process Matters: Engaging and Equipping People for Success.

Lawrence Baum: Ideology in the Supreme Court

When President Trump nominated Neil Gorsuch for a seat on the Supreme Court, Gorsuch was universally regarded as a conservative. Because of that perception, the Senate vote on his confirmation fell almost completely along party lines. Indeed, Court-watchers concluded that his record after he joined the Court late in its 2016–2017 Term was strongly conservative. But what does that mean? One possible answer is that he agreed most often with Clarence Thomas and Samuel Alito, the justices who were considered the most conservative before Gorsuch joined the Court. But that answer does not address the fundamental question: why are the positions that those three justices took on an array of legal questions considered conservative?

The most common explanation is that liberals and conservatives each start with broad values that they then apply in a logical way to the various issues that arise in the Supreme Court and elsewhere in government. But logic can go only so far to explain the ideological labels of various positions. It is not clear, for instance, why liberals are the strongest proponents of most individual rights that the Constitution protects while conservatives are the most supportive of gun rights. Further, perceptions of issues sometimes change over time, so that what was once considered the liberal position on an issue is no longer viewed that way.

Freedom of expression is a good example of these complexities. Beginning early in the twentieth century, strong support for freedom of speech and freedom of the press was regarded as a liberal position. In the Supreme Court, the justices who were most likely to support those First Amendment rights were its liberals. But in the 1990s that pattern began to change. Since then, when the Court is divided, conservative justices are as likely as liberal justices to support litigants who argue that their free expression rights have been violated.

To explain that change, we need to go back to the period after World War I when freedom of expression was established as a liberal cause. At that time, the government policies that impinged the most on free speech were aimed at political groups on the left and at labor unions. Because liberals were more sympathetic than conservatives to those segments of society, it was natural that freedom of expression became identified as a liberal cause in the political world. In turn, liberal Supreme Court justices gave considerably more support to litigants with free expression claims than did their conservative colleagues across the range of cases that the Court decided.

In the second half of the twentieth century, people on the political left rethought some of their assumptions about legal protections for free expression. For instance, they began to question the value of protecting “hate speech” directed at vulnerable groups in society. And they were skeptical about First Amendment challenges to regulations of funding for political campaigns. Meanwhile, conservatives started to see freedom of expression in a more positive light, as a protection against undue government interference with political and economic activity.

This change in thinking affected the Supreme Court in the 1990s and after. More free expression cases came to the Court from businesses and people with a conservative orientation, and a conservative-leaning Court was receptive to those cases. The Court now decides few cases involving speech by labor unions and people on the political left, and cases from businesses and political conservatives have become common. Liberal justices are more favorable than their conservative colleagues to free expression claims by people on the left and by individuals with no clear political orientation, but conservative justices provide more support to claims by businesses and conservatives. As a result, what had been a strong tendency for liberal justices to give the most support to freedom of expression across the cases that the Court decided has disappeared.

The sharp change in the Supreme Court’s ideological orientation in free speech cases is an exception to the general rule, but it underlines some important things about the meaning of ideology. The labeling of issue positions as conservative or liberal comes through the development of shared understandings among political elites, and those understandings do not necessarily follow from broad values. In considerable part, they reflect attitudes toward the people and groups that champion and benefit from particular positions. The impact of those attitudes is reflected in the ways that people respond to specific situations involving an issue: liberal and conservative justices, like their counterparts elsewhere in government and politics, are most favorable to free speech when that speech comes from segments of society with which they sympathize. When we think of Supreme Court justices and the positions they take as conservative and liberal, we need to keep in mind that, to a considerable degree, the labeling of positions in ideological terms is arbitrary. Justice Gorsuch’s early record on the Court surely is conservative—but in the way that conservative positions have come to be defined in the world of government and politics, definitions that are neither permanent nor inevitable.

Lawrence Baum is professor emeritus of political science at Ohio State University. His books include Judges and Their Audiences, The Puzzle of Judicial Behavior, Specializing the Courts, and Ideology in the Supreme Court.

Rachel Schneider & Jonathan Morduch: Why do people make the financial decisions they make?

Deep within the American Dream lies the belief that hard work and steady saving will ensure a comfortable retirement and a better life for one’s children. But in a nation experiencing unprecedented prosperity, even for many families who seem to be doing everything right, this ideal is still out of reach. In The Financial Diaries, Jonathan Morduch and Rachel Schneider draw on the groundbreaking U.S. Financial Diaries, which follow the lives of 235 low- and middle-income families as they navigate through a year. Through the Diaries, Morduch and Schneider challenge popular assumptions about how Americans earn, spend, borrow, and save—and they identify the true causes of distress and inequality for many working Americans. Combining hard facts with personal stories, The Financial Diaries presents an unparalleled inside look at the economic stresses of today’s families and offers powerful, fresh ideas for solving them. The authors talk about the book, what was surprising as they conducted their study, and how their findings affect the conversation on inequality in a new Q&A:

Why did you write this book?
We have both spent our careers thinking about households and consumer finance, and our field has reams and reams of descriptive data about what people do—savings rates, the number of overdrafts, the size of their tax refunds. We have lots of financial information, but very little of the existing data helps us understand why—why people make the financial decisions they make, and why they get tripped up. So we decided to spend time with a group of families, get to know them very well, and track every dollar they earned, spent, borrowed, and shared over the course of one year. By collecting new and different kinds of information, we were able to understand a lot of the why, and gained a new view of what’s going on in America.

What did you learn about the financial lives of low- and moderate-income families in your year-long study?
We saw that the financial lives of a surprising number of families look very different from the standard story that most people expect. The first and most prominent thing we saw is how unsteady, how volatile, household income and expenses were for many. The average family in our study had more than five months a year when income was 25% above or below their average.
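For readers who want to see that measure spelled out, here is a minimal sketch, in Python, of how one might count such high-volatility months. The monthly figures and the function name are invented for illustration; they are not data or code from the study.

```python
# Minimal sketch of the volatility count described above.
# The monthly figures are hypothetical, not data from the
# U.S. Financial Diaries study.

def count_volatile_months(monthly_income, threshold=0.25):
    """Count months in which income is more than `threshold`
    (25% by default) above or below the annual monthly average."""
    average = sum(monthly_income) / len(monthly_income)
    return sum(1 for m in monthly_income if abs(m - average) > threshold * average)

# A hypothetical commission-based earner averaging $3,000 a month:
income = [1800, 2400, 4500, 3000, 2000, 4200, 3100, 1900, 3800, 2900, 3600, 2800]
print(count_volatile_months(income))  # prints 6: six months 25% above or below average
```

In this invented example, an earner whose annual income works out to a solid middle-class average still spends half the year far from that average, which is the pattern the study describes.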

That volatility made it hard to budget and save—and it meant that plans were often derailed. How people were doing had less to do with the income they expected to earn in total during the year and more to do with when that income hit their paychecks and how predictable it was. Spending emergencies added a layer of complexity. In other words, week-to-week and month-to-month cash flow problems dominated many families’ financial lives. Their main challenges weren’t resisting the temptation to overspend in the present or planning appropriately for the long term, but making sure they would have enough cash for the needs they knew were coming soon.

The resulting anxiety, frustration, and sense of financial insecurity affected even families that were technically classified as middle class.

How does this tie into the economic anxiety that fueled Trump’s election?
The families we talked to revealed deep anxieties that are part of a broader backdrop for understanding America today. That anxiety is part of what fueled Trump, but it also fueled Bernie Sanders and, to an extent, Hillary Clinton. A broad swath of the population feels, rightly, that the system just isn’t working for them.

For example, we met Becky and Jeremy, a couple with two kids who live in small-town Ohio, where Trump did well. Jeremy is a mechanic who fixes trucks on commission. Even though he works full-time, the size of his paychecks varies wildly depending on how many trucks come in each day. This volatility in their household income means that while they’re part of the middle class when you look at their annual income, they dipped below the poverty line six months out of the year.

One day we met with Becky, who was deciding whether or not to make their monthly mortgage payment a couple of weeks early. She had enough money on hand, but she was wavering between paying it now so she could rest easy knowing it was taken care of, and holding onto the money because she didn’t know what was going to happen in the next couple of weeks and was afraid she might need it for something else even more urgent. She was making decisions like this almost every day, which created not only anxiety but a sense of frustration about always feeling on the edge.

Ultimately, Jeremy decided to switch to a lower-paying job with a longer commute doing the exact same work—but now he’s paid a salary. They opted for stability over mobility. Becky and Jeremy helped us see how the economic anxiety people feel is not only about having enough money, but about the structure of their economic lives and the risk, volatility, and insecurity that have become commonplace in our economy.

One of the most interesting insights from your book is that while these families are struggling, they’re also working really hard and coming up with creative ways to cope. Can you share an example?
Janice, a casino worker in Mississippi, told us about a system she created with multiple bank accounts. She has one bank account close to home that she uses for bill paying. But she also has a credit union account where part of her paycheck is automatically deposited. That credit union is an hour away and has inconvenient hours, and when they sent her an ATM card, she cut it in half. She designed a level of inconvenience for that account on purpose, in order to make it harder to spend that money. She told us she will drive the hour to that faraway bank when she has a “really, really need”—an emergency or cost that is big enough that she’ll overcome the barriers she put up on purpose. One month, she went down there because her grandson needed school supplies, which was a “really, really need” for her. The rest of the time, it’s too far away to touch. And that’s exactly how she designed it.

We found so many other examples like this one, where people are trying to create the right mix of structure and flexibility in their financial lives. There’s a tension between the structure that helps you resist temptation and save, and the flexibility you need when life conspires against you. But we don’t have financial products, services, and ideas that are designed around this need and the actual challenges that families are facing. This is why Janice has all these different banks she uses for different purposes—to get that mix of structure and flexibility that traditional financial services do not provide.

How does this tie into the conversation we’ve been having about inequality over the last decade or so?
Income and wealth inequality are real. But those two inequalities of income and assets are hiding another really important inequality, which is about stability. What we learned in talking to families is that they’re not thinking about income and wealth inequality on a day-to-day basis—they’re worrying about whether they have enough money today, tomorrow, and next week. The problem is akin to what happens in businesses: a firm might be profitable on its income statement but still run out of cash and be unable to make payroll next week.

This same scenario is happening with the families we met. We saw situations where someone has enough income or is saving over time, but nonetheless, they can’t make ends meet right now. That instability is the hidden inequality that’s missing from our conversation about wealth and income inequality.

How much of this comes down to personal responsibility? Experts like Suze Orman and Dave Ramsey argue you can live on a shoestring if you’re just disciplined. Doesn’t that apply to these families?
The cornerstone of traditional personal finance advice from people like Orman and Ramsey is budgeting and discipline. But you can’t really do that without predictability and control.

We met one woman who is extremely disciplined about her budget, but the volatility of her income kept tripping her up. She is a tax preparer, which means she earns half her income in the first three months of the year. She has a spreadsheet where she tracks all her expenses, down to every taxi she thinks she might need to take. She budgets really explicitly, and when she spends a little more on food one week, she goes back, looks at her budget, and changes it for the next few weeks to compensate. Her system requires extreme focus and discipline, but it’s still not enough to make her feel financially secure. Traditional personal finance advice just isn’t workable for most families because it doesn’t start with the actual problems that families face.

What can the financial services industry do to better serve low- and moderate-income families?
The financial services industry has a big job in figuring out how to deal with cash flow volatility at the household level, because most of the products they have generated are based on an underlying belief that households have a regular and predictable income. So their challenge is to develop new products and services—and improve existing ones—that are designed to help people manage their ongoing cash flow needs and get the right money at the right time.

There are a few examples of innovative products that are trying to help households meet the challenges of volatility and instability. Even is a new company that helps people smooth out their income by helping them automatically save spikes, or get a short-term “boost” to cover dips. Digit analyzes earning and spending patterns to find times when someone has a little extra on hand and puts it aside, again automatically. Propel is looking to make it much easier and faster for people to get access to food stamps when they need them. There are a number of organizations trying to bring savings groups or lending circles—a way of saving and borrowing with friends and family that is common everywhere in the developing world—to more people in the United States.

There is lots of scope for innovation to meet the needs of households—the biggest challenge is seeing what those needs are, and how different they are from the standard way of thinking about financial lives and problems.

Jonathan Morduch is professor of public policy and economics at the New York University Wagner Graduate School of Public Service. He is the coauthor of Portfolios of the Poor (Princeton) and other books. Rachel Schneider is senior vice president at the Center for Financial Services Innovation, an organization dedicated to improving the financial health of Americans.