Ian Hurd: Good medicine for bad politics? Rethinking the international rule of law

When an international crisis erupts it is common to hear experts say that the situation will be improved if all parties stick to international law. From the Syrian war to Burma’s massacres to Guantanamo torture, faithful compliance with the law of nations is often prescribed as the best way forward. I wrote this book because I was curious about the idea that international law is good medicine for bad policies, a kind of non-political, universal good. International law often appears like a magical machine that takes in hot disagreements about how things should unfold and produces cool solutions that serve the interests of everyone. How to Do Things with International Law examines this idea with a degree of skepticism, holds it up against some empirical cases, and suggests more realistic ways of thinking about the dynamics between international politics and international law.

The standard model of international law is built on two components, one more institutional and the other more normative. On the one hand, international law is seen as providing a framework for the coexistence of governments. Laws on diplomatic immunity, non-interference across borders, and the peaceful settlement of disputes help organize inter-governmental relations and give a kind of civility to world politics. On this view, following the rules makes it possible for diplomacy and negotiation to happen. The second, normative strand adds substantive values such as a commitment to human rights, to the protection of refugees, and against nuclear proliferation. Here, following the rules is said to be important because it enhances human welfare and the other goals encoded by the law. The two strands agree that compliance with international rules is beneficial and that violations of the rules lead to international disorder at best—and violence and chaos at worst.

This represents what I see as a conventional view of the international rule of law. It is a commitment to the idea that governments should follow their legal obligations and that when they do the world is a better place. It is an ideology, in the sense noted by Shirley Scott.

My book explores the premise and the power of this ideology and its influence in global politics. I look at the presumptions that it rests on and the practices it makes possible. I see the power of international law on display in the ways that governments and others make use of legal resources and categories to understand, justify, and act in the world. This is a social power, built on the idea of the rule of law and employed by governments in the service of a wide array of goals.

The book does not aim to answer questions about why states comply with or flout the law. Instead, it asks what they do with the law – and why, and with what effects. As a methodology, this points toward looking for where international law appears in the strategies of governments. On substance, it suggests a close connection between international law and political power. International law has influence in certain situations, when powerful actors find it useful. For instance, the US gave legal arguments for why Russia’s annexation of Crimea was unlawful and therefore should not be accepted by other countries. In response, Russia gave legal arguments to sustain its behavior. Legal experts may well conclude that one side had the stronger legal argument; disagreements about interpretation and application are central to legal practices. But my curiosity comes from seeing both sides use legal arguments as political resources in defense of their preferred outcome.

The use of law to legitimize state policy is a central feature of contemporary international politics. And yet to some, the instrumental use of law reveals an inappropriate politicization of law, one that contradicts the very idea of the rule of law. I see it the other way around: the international rule of law is the instrumental use of law. The legalization of international politics gives legal rationalizations their political weight. That political weight makes them important sites of contestation. In a legalized world, it makes sense for actors to contest their actions in the language of law. To borrow Helen Kinsella’s example, the line between civilian and combatant in a war zone distinguishes those who may be killed from those who may not; the line is defined by the Geneva Conventions and other legal instruments and it is brought to life (and death) as governments interpret it in relation to those whom they wish to kill. Legal categories have political valence and this makes them important resources of power and thus worth fighting over. How else to make sense of the energy that governments put into shaping rules that reflect their interests?

Recognizing the close connection between international law and power politics opens a way to considering the political productivity of international law. Law is not only regulative and constraining; it is also empowering and permissive. By defining some acts as unlawful and others as lawful, it makes the former harder for governments to do (or more expensive) and the latter easier. The availability of a legal justification smooths the way for action just as much as its unavailability impedes it. If we look at one side of this balance, we see for instance that the UN Charter outlaws the use of force by governments and limits their autonomy with respect to going to war. On the other side, the Charter also authorizes them to go to war as needed for ‘self-defense’ against an armed attack. In ‘self-defense,’ the Charter creates a new legal resource with the capacity to differentiate between a lawful and an unlawful war. This is a powerful tool for governments, a means for legalizing their recourse to force, and they have used it with enthusiasm since 1945. The Charter produced something that previously didn’t exist and as a consequence changed how governments go to war, how they justify their wars, and how they think about their security in relation to external threats.

With the political productivity of international law in mind, the book shows that international law is inseparable from politics and thus from power. For powerful governments, international law puts an instrument in their tool-kit as they seek to influence what happens in the world. For the less powerful, it is a tool they too might take up when they can, but it may equally be a means of control whose influence they seek to escape.

There isn’t much evidence to back up the presumption that international law steers global affairs naturally toward better outcomes. How to Do Things with International Law is neither a celebration of international law nor an indictment. It offers instead a look into its practical politics, a messy world of power politics that is as full of interpretation, ambiguity, violence and contestation as any other corner of social life.

Ian Hurd is associate professor of political science at Northwestern University. He is the author of After Anarchy and How to Do Things with International Law.

Gary Saul Morson & Morton Schapiro: The Humanomics of Tax Reform

The Trump administration is now placing tax reform near the top of its legislative agenda. Perhaps they will garner the votes for tax reduction, but reform? Good luck.

It has been three decades since there has been meaningful tax reform in this country. In 1986, tax shelters were eliminated, the number of tax brackets went from 15 to 4 (with the highest marginal rate falling from 50% to 38.5%), and the standard deduction was increased, simplifying tax preparation and resulting in zero tax liability for millions of low-income families. At the same time, a large-scale expansion of the alternative minimum tax affected substantial numbers of the more affluent.

President Reagan insisted that the overall effect be neutral with regard to tax revenues. That demand made it possible to set aside the issue of whether government should be larger or smaller and instead focus on inefficiencies or inequities in how taxes were assessed. Two powerful Democrats, Dick Gephardt in the House and Bill Bradley in the Senate, were co-sponsors.

Economists might evaluate the merits of this monumental piece of legislation in terms of the incentives and disincentives it created, its ultimate impact on labor force participation, capital investment and the like, but there is another metric to consider – was it perceived to be fair? Accounts from that time imply that it was.

The notion of fairness is not generally in the wheelhouse of economics. But the humanities have much to say on that matter.

To begin with, literature teaches that fairness is one of those concepts that seem simple so long as one does not transcend one’s own habitual way of looking at things. As soon as one learns to see issues from other points of view, different conceptions of fairness become visible and simple questions become more complex. Great novels work by immersing the reader in one character’s perspective after another, so we learn to experience how different people – people as reasonable and decent as we ourselves are – might honestly come to see questions of fairness differently.

So, the first thing that literature would suggest is that, apart from the specific provisions of the 1986 tax reform, the fact that it was genuinely bipartisan was part of what made it fair. Bipartisanship meant the reform was not one side forcing its will on the other. Had the same reform been passed by one party, it would not have seemed so fair. Part of fairness is the perception of fairness, which suggests that the process, not just the result, was fair.

Fairness, of course, also pertains to the content of the reforms. What are the obligations of the rich to support needy families? Are there responsibilities of the poor to participate however they can in providing for their own transformation?

In Tolstoy’s novel Anna Karenina, two main characters, Levin and Stiva, go hunting with the young fop Vasenka, and as they encounter hard-working peasants, they start discussing the justice of economic inequality. Only foolish Vasenka can discuss the question disinterestedly, because it is, believe it or not, entirely new to him: “‘Yes, why is it we spend our time riding, drinking, shooting, doing nothing, while they are forever at work?’ said Vasenka, obviously for the first time in his life reflecting on the question, and consequently considering it with perfect sincerity.” Can it really be that an educated person has reached adulthood with this question never having occurred to him at all?

And yet, isn’t that the position economists find themselves in when they ignore fairness? When they treat tax reform, or any other issue, entirely in economic terms? Levin recognizes that there is something unfair about his wealth, but also recognizes that there is no obvious solution: it would do the peasants no good if he were to just give away his property. Should he make things more equal by making everyone worse off? On the contrary, his ability to make farmland more productive benefits the peasants, too. So, what, he asks, should be done?

Levin also knows that inequality is not only economic. If one experiences oneself as a lesser person because of social status, as many of the peasants do, that is itself a form of inequality entirely apart from wealth. In our society, we refer to participants in government as “taxpayers.” Does it follow, then, that exempting large numbers of people from taxation entirely demeans them – not least of all, in their own eyes? There may be no effective economic difference between a very small tax and none at all, but it may make a tremendous psychological difference. Isn’t the failure to take the psychological effect of tax rates seriously as disturbingly innocent as Vasenka’s question about inequality?

Combining a humanistic and an economic approach might not give us specific answers, but it does make questions of fairness, including symbolic effects, part of the discussion. And in a democracy, where popular acceptance of the rules as legitimate is crucial, that would be a step forward.

Gary Saul Morson is the Lawrence B. Dumas Professor of the Arts and Humanities and professor of Slavic languages and literatures at Northwestern University. His many books include Narrative and Freedom, “Anna Karenina” in Our Time, and The Words of Others: From Quotations to Culture. Morton Schapiro is the president of Northwestern University and a professor of economics. His many books include The Student Aid Game. Morson and Schapiro are also the editors of The Fabulous Future?: America and the World in 2040 and the authors of Cents and Sensibility: What Economics Can Learn from the Humanities.

Matthew Simonton: American Oligarchy

The 2016 election brought the burning issue of populism home to the United States. Donald Trump is, in many ways, part of a larger movement of populist politicians worldwide who have claimed to speak in the name of the “ordinary people.” (Marine Le Pen in France and Viktor Orbán in Hungary are other examples.) As with other populists, Trump’s presidency brings with it unsettling questions about illiberalism and ethno-nationalism. But in all the talk about “making America great again,” we are in danger of losing sight of a deeper problem, one which Trump will not change and in fact will likely exacerbate: the steady creep of oligarchy. The United States Constitution is enacted in the name of “We the People.” Abraham Lincoln famously described America’s political system in the Gettysburg Address as “government of the people, by the people, for the people.” Yet how much authority do ordinary citizens truly possess in today’s America? As the ancient Athenians would have put it, does the demos (people) in fact have kratos (power)?

Several indicators suggest that that power, if it ever was actually held by the people, is slipping away. Princeton University Press authors Larry Bartels and Martin Gilens have brought before our eyes hard truths about our “unequal democracy,” the fact that, too often, “affluence” brings “influence.” Gilens and the political scientist Benjamin I. Page demonstrated in an important article from 2014 that “economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens … have little or no independent influence.” Readers familiar with the findings of the economist Thomas Piketty have heard that the U.S. and other advanced capitalist economies are entering a new “Gilded Age” of wealth concentration. Can anything turn back inequality—what President Barack Obama called “the defining challenge of our time”—and the widening gap in political power and influence that comes with it?

The ancient Greeks had an answer to the problem of inequality, which they called demokratia. It is well known that Greek-style democracy was direct rather than representative, with citizens determining policy by majority vote in open-air assemblies. Yet democracy meant more than just meetings: political offices were distributed randomly, by lottery, on the assumption that every citizen was qualified (and in fact obligated) to participate in politics. Office-holders were also remunerated by the state, to ensure that poorer citizens who had to work for a living could still share in the constitution. Princeton author Josiah Ober has examined the ideology and practice of ancient democracy in multiple publications. In his latest work—similar in its conclusions to those of the ancient historian Alain Bresson—he has argued that democracies created fair rules and equal access to opportunity that secured citizen dignity and discouraged runaway economic inequality. Thus, as much as ancient democracies fall short of our contemporary standards (and they had grave faults in the areas of slave-holding and gender relations), they might constitute a model, however imperfect, for thinking about reducing both economic and political inequality.

On the other hand, many Greek city-states had a form of constitution based on diametrically opposed premises, and which encouraged opposite tendencies. This was oligarchia, the “rule of the few.” Ancient Greek oligarchs—members of the wealthy elite—most assuredly did not believe in citizen equality. Oligarchs thought that their greater wealth, which (by their lights, anyway) afforded them greater intelligence and virtue, made them uniquely qualified to rule. The non-elite, which then as today represented the poorer majority, had to be kept out of politics. (For a recent argument in favor of such an “oligarchy of the wise,” see Princeton author Jason Brennan’s Against Democracy.)

In my book Classical Greek Oligarchy: A Political History, I chart the rise of oligarchic thinking, showing that it emerged in conscious reaction to democracy, or the “power of the people.” Faced with the challenges democracy brought to their affluence and influence, oligarchs devised a new set of political institutions, which would ensure that the people could make no inroads into oligarchic privilege. This was not simply a matter of attaching property requirements to office-holding, although oligarchs certainly considered that essential. Oligarchies also stacked the judicial system in elites’ favor; sought to control the people’s speech, movement, and association; hoarded and manipulated information crucial to the city’s well-being; feathered their own nests with economic perquisites; and on occasion even resorted to extra-legal assassination to eliminate subversives. Oligarchies were, in short, authoritarian regimes. Engaging with contemporary scholarship in political science on authoritarianism, I show that ancient Greek oligarchies confronted the same basic problems that haunt modern authoritarians, and experimented with similar institutions for preserving their rule. In ways that have not been fully apparent until now, oligarchs and demos resemble today’s dictators and democrats.

As history shows us, inequality in one area (wealth) tends to convince elites that they have unequal abilities in another (politics). Yet in situations like that of Classical Greek oligarchy, when the wealthy obtain the unaccountable political power they desire, the result is not enlightened government but increased oppression. It would do citizens of modern democracies good to bear this in mind. In the United States, many are frustrated with politics, and with democracy in particular. Liberals worry about the supposed ignorance of the electorate. Conservatives want to restrict what majorities can legislate, especially in the area of economics. And the last election saw countless voters openly embrace a vision of America as headed by a billionaire strongman. In longing for a restriction on democracy, however—even if intended “only” for those with whom we disagree—we increase the likelihood of a more general oligarchic takeover. We play into oligarchs’ hands. If the Greek example is any indication, such short-term thinking would bode ill for the freedom of all citizens—and it would only make inequality worse.

Matthew Simonton is assistant professor of history in the School of Humanities, Arts, and Cultural Studies at Arizona State University. He received his PhD in classics from Stanford University. He is the author of Classical Greek Oligarchy: A Political History.

Jim Campbell: A new analysis in Polarized dispels election controversy

Overlooked “Unfavorability” Trends Raise Doubts that Comey Cost Clinton the Election

In her newly released What Happened and in interviews accompanying the book’s release, Hillary Clinton claims that former FBI Director James Comey’s late October re-opening of the investigation into the mishandling of national security emails was “the determining factor” in her 2016 presidential election loss. In the new afterword of the paperback edition of Polarized: Making Sense of a Divided America, I report evidence indicating that Comey’s letter did not cause Clinton’s loss.

The suspected Comey effect is tested by examining changes in Gallup’s unfavorability ratings of Clinton and Trump. The data show that the decline in Clinton’s poll lead over Trump in the last weeks of the campaign was not the result of voters becoming more negative about Clinton (as would be the case if they were moved by the Comey letter). It was the result of voters becoming less negative about Trump (a development with no plausible link to the Comey letter). Comey didn’t drive voters away from Clinton. Rather, “Never Trump” Republicans were grudgingly becoming “Reluctant Trump” voters.

This finding is consistent with the earlier finding of the American Association for Public Opinion Research’s (AAPOR) Ad Hoc Committee on 2016 Election Polling. The Committee found evidence that “Clinton’s support started to drop on October 24th or 25th,” perhaps even earlier. This was at least three or four days before Comey’s letter was released.

Read on for the relevant excerpt and details from the afterword of my forthcoming paperback edition of Polarized: Making Sense of a Divided America:

In the closing weeks of the campaign, with what they saw as a Clinton victory looming darkly over the horizon, many disgruntled conservative hold-outs came back to the Republican column. As they rationalized or reconsidered their choice, unfavorable opinions about Trump among Republicans declined (about 7 points). Even so, about a fifth of Trump’s voters admitted that they still held an unfavorable view of him. More than a quarter of Trump’s voters said their candidate lacked the temperament to be president. For many, “Never Trump” had become “Reluctantly Trump.” They held their noses and cast their votes. Between Trump and Clinton, about 85% of conservative votes went to Trump. Along with sour views of national conditions, polarization had offset or overridden the grave reservations many conservatives had about a Trump vote.

Widespread and intense polarized views, across the public and between the parties, shaped the 2016 election. On one side of the spectrum, polarization compelled liberals to overlook Clinton’s scandals and deficiencies as a candidate as well as a sputtering economy and unstable international conditions. On the other side, dissatisfaction with national conditions and polarization compelled conservatives to vote for a candidate many thought lacked the rudimentary leadership qualities needed in a president. Non-ideological centrists again were caught in the middle–by ideology, by the candidates’ considerable shortcomings, and by generally dreary views of national conditions. Their vote split favored Clinton over Trump (52% to 40%, with 8% going to minor party candidates), close to its two-party division in 2012. The three components of the vote (polarization, the candidates, and national conditions) left voters closely enough divided to make an electoral vote majority for Trump possible.

Although the above explanation of the election is supported by the evidence and fits established theory, two other controversial explanations have gained some currency. They trace Trump’s surprising victory to Russia’s meddling in the election (by hacking Democratic emails and releasing them through Wikileaks) and FBI Director Comey’s late October letter, re-opening the investigation into Clinton’s mishandling of confidential national security emails. Some, including Clinton herself, contend Wikileaks and Comey’s letter caused the collapse of Clinton’s lead over Trump in the closing weeks of the campaign.

The evidence says otherwise. Contrary to the speculation, neither Wikileaks nor Comey’s letter had anything to do with the shriveling of Clinton’s lead. If either had been responsible, they would also have caused more voters to view Clinton negatively–but opinions about her did not grow more negative. Unfavorable opinions of Clinton were remarkably steady. From August to late September, Hillary Clinton’s unfavorables in Gallup polls averaged 55%. Her unfavorables in the Gallup poll completed on the day Comey released his letter (October 28) stood at 55%. In the exit polls, after Wikileaks and after Comey’s letter, her unfavorables were unchanged at 55%. Opinions about Hillary Clinton, a figure in the political spotlight for a quarter century, had long been highly and solidly polarized. Nothing Wikileaks revealed or Comey said was going to change minds about her at that late stage of the game.

The race tightened in the last weeks of the campaign because Trump’s unfavorables declined (by about 5 points). They declined as some conservatives and moderates with qualms about Trump came to the unpleasant realization that voting for Trump was the only possible way they could help prevent Clinton’s election. Some dealt with the dissonance of voting for a candidate they disliked by rationalizing, reassessing, or otherwise softening their views of Trump, trying to convince themselves that maybe “the lesser of two evils” was not really so awful after all. In voting, as in everything else, people tend to postpone unpleasant decisions as long as they can and make them as painless to themselves as they can.

The decay of Clinton’s October poll lead was not about Russian and Wikileaks meddling in the election and not about Comey’s letter. It was about polarization, in conjunction with dissatisfaction about national conditions, belatedly overriding the serious concerns many voters had about Donald Trump as a potential president. Trump’s candidacy put polarization to the test. His election testified to how powerful polarization has become. The highly polarized views of Americans and the highly polarized positions of the parties were critical to how voters perceived and responded to the candidates’ shortcomings and the nation’s problems.

James E. Campbell is UB Distinguished Professor of Political Science at the University at Buffalo, State University of New York. His books include The American Campaign: U.S. Presidential Campaigns and the National Vote, The Presidential Pulse of Congressional Elections, and Polarized: Making Sense of a Divided America.

Jason Brennan: How Kneeling Athletes Reveal the True Nature of Politics

Much of Puerto Rico may be without power for six months. North Korea is increasingly belligerent. The world’s reaction to coming climate change ranges between empty symbolic gestures and nothing. A party just shy of fascist won 13% of the seats in the German federal election. The U.S. has been at war—and troops have been dying for frivolous reasons—for sixteen years. But what are Americans most outraged about? Whether football players kneeling during the National Anthem, in protest of police brutality toward blacks, is somehow wrongly disrespectful of a flag, “the troops!”, or America.

Both sides accuse the other of hypocrisy and bad faith. And both sides are mostly right. Hypocrisy and bad faith are the self-driving cars of politics. They get us where we want to go, without our having to drive.

What Christopher Achen and Larry Bartels (in the 2016 Princeton University Press book Democracy for Realists) call the folk theory of democracy goes roughly as follows: People know their interests. They then form preferences about what the government should do to promote these goals. They vote for parties and politicians who will best realize these goals. Then the government implements the goals of the majority. But the problem, Achen and Bartels argue, is that each part of that “folk theory” is false.

Instead, as economist Robin Hanson likes to say, politics is not about policy. The hidden, unconscious reason we form political beliefs is to help us form coalitions with other people. Most of us choose our particular political affiliations because people like us vote that way. We then join together with other supposedly like-minded people, creating an us versus a them. We are good and noble and can be trusted. They are stupid and evil and at fault for everything. We loudly denounce the other side in order to prove, in public, that we are especially good and pure, and so our fellow coalition members should reward us with praise and high status.

Our political tribalism spills over and corrupts our behavior outside of politics. Consider research by political scientists Shanto Iyengar and Sean Westwood. Iyengar and Westwood wanted to determine how much, if at all, political bias affects how people evaluate job candidates. They conducted an experiment in which they asked over 1,000 subjects to evaluate what the subjects were told were the résumés of graduating high school students. Iyengar and Westwood carefully crafted two basic résumés, one of which was clearly more impressive than the other. They randomly labeled the job candidates as Republican or Democrat, and randomly made the candidates stronger or weaker. At the same time, they also determined whether the subjects—the people evaluating the candidates—were strong or weak Republicans, independents, or strong or weak Democrats.

The results are depressing: 80.4% of Democratic subjects picked the Democratic job candidate, while 69.2% of Republican subjects picked the Republican job candidate. Even when the Republican job candidate was clearly stronger, Democrats still chose the Democratic candidate 70% of the time. In contrast, they found that “candidate qualification had no significant effect on winner selection.” In other words, the evaluators didn’t care about how qualified the candidates were; they just cared about what the job candidates’ politics were.

Legal theorist Cass Sunstein notes that in 1960, only about 4-5% of Republicans and Democrats said they would be “displeased” if their children married members of the opposite party. Now about 49% of Republicans and 33% of Democrats admit they would be displeased. The true numbers are probably higher—some people would be upset but won’t admit it on a survey. Explicit “partyism”—prejudice against people from a different political party—is now more common than explicit racism.

At least some people have honest, good faith disputes about how to realize shared moral values, or about just what morality and justice require. We should be able to maintain such disputes without seeing each other as enemies. Sure, some moral disagreements are beyond the pale. If someone advocates the genocidal slaughter of Jews, fine, they’re not a good person. But disagreements on whether the minimum wage does more harm than good are not grounds for mutual diffidence. But, as ample empirical research shows (you can read my Against Democracy for a review), we are biased to see political disputants as stupid and evil, rather than as having a reasonable disagreement. Indeed, as Diana Mutz (in her Hearing the Other Side) shows, people who can successfully articulate the other side’s point of view hardly participate in politics, while the nasty true-believers vote early and often.

It’s not a surprise people are so irrational and nasty about politics. The logic behind it is simple. Your individual vote counts for almost nothing. Even on the more optimistic models, you are about as likely to change an election as you are to win Powerball. Accordingly, it doesn’t matter if your political beliefs are true or false, reasonable or utterly absurd. When you cross the street, you form rational beliefs about traffic patterns—or you die. When you vote, though, you can afford to indulge your deepest prejudices at no cost. How we vote matters, but how any one of us votes does not.

Imagine a professor told her 1000-student class that in fifteen weeks, she would hold a final exam, worth 100% of their grade. Suppose she told them that in the name of equality, she would average all final exam grades together and give every student the same grade. Students wouldn’t study, and the average grade would be an F. In effect, this scenario is how democracy works, except that the class in the United States has 210 million people. The downside is not merely that we remain ignorant. Rather, the downside is that it liberates us to use our political beliefs for other purposes.

Politics makes us civic enemies. When we make something a political matter, we turn it into a zero-sum game where someone has to win and someone has to lose. Political decisions involve a constrained set of options. In politics, there are usually only a handful of viable choices. Political decisions are monopolistic: everyone has to accept the same decision. Political decisions are imposed involuntarily: you don’t really consent to the outcome of a democratic decision.

Now back to football players kneeling. My friends on the Right refuse to take the players at their word. The players say they’re protesting police brutality and other ways the U.S. mistreats its black populace. My friends on the Right scoff and say, no, really they just hate America and hate the troops. This reaction is wrong, but not surprising. Imputing evil motives to the other side is essential to politics. The Left does it all the time too. If, for example, some economists on the Right say they favor school vouchers as a means of improving school quality, the Left will just accuse them of hating the poor.

It’s worth noting that since 2009, the Pentagon has paid the NFL over $6 million to stage patriotic displays before games to help drive recruiting. The pre-game flag shows are literally propaganda in the narrowest sense of the word. Personally, I think participating in government-funded propaganda exercises is profoundly anti-American, while taking a knee and refusing to dance on command shows real respect for what the country supposedly stands for.

Jason Brennan is the Flanagan Family Chair of Strategy, Economics, Ethics, and Public Policy at the McDonough School of Business at Georgetown University. He is the author of The Ethics of Voting (Princeton) and Against Democracy. He writes regularly for Bleeding Heart Libertarians, a blog.

Landon R. Y. Storrs: What McCarthyism Can Teach Us about Trumpism

Since the election of President Donald Trump, public interest in “McCarthyism” has surged, and the focus has shifted from identifying individual casualties to understanding the structural factors that enable the rise of demagogues.

After The Second Red Scare was published in 2012, most responses I received from general readers were about the cases of individuals who had been investigated, or whom the inquirer guessed might have been investigated, under the federal employee loyalty program. That program, created by President Truman in 1947 in response to congressional conservatives’ charges that his administration harbored communist sympathizers, was the engine of the anticommunist crusade that became known as McCarthyism, and it was the central subject of my book. I was the first scholar to gain access to newly declassified records related to the loyalty program and thus the first to write a comprehensive history. The book argues that the program not only destroyed careers, it profoundly affected public policy in many fields.

Some queries came from relatives of civil servants whose lives had been damaged by charges of disloyalty. A typical example was the person who wanted to understand why, in the early 1950s, his parents abruptly moved the family away from Washington D.C. and until their deaths refused to explain why. Another interesting inquiry came from a New York Times reporter covering Bill de Blasio’s campaign for New York City mayor. My book referenced the loyalty case of Warren Wilhelm Sr., a World War II veteran and economist who left government service in 1953, became an alcoholic, was divorced by his wife, and eventually committed suicide. He never told his children about the excruciating loyalty investigation. His estranged son, born Warren Wilhelm Jr., legally adopted his childhood nickname, Bill, and his mother’s surname, de Blasio. I didn’t connect the case I’d found years earlier to the mayoral candidate until the journalist contacted me, at which point I shared my research. At that moment de Blasio’s opponents were attacking him for his own youthful leftism, so it was a powerful story, as I tried to convey in The Nation.

With Trump’s ascendance, media references to McCarthyism have proliferated, as commentators struggle to make sense of Trump’s tactics and supporters. Opinion writers note that Trump shares McCarthy’s predilections for bluffing and for fear-mongering—with terrorists, Muslims, and immigrants taking the place of communist spies. They also note that both men were deeply influenced by the disreputable lawyer Roy Cohn. Meanwhile, the president has tweeted that he himself is a victim of McCarthyism, and that the current investigations of him are “witch hunts”—leaving observers flummoxed, yet again, as to whether he is astonishingly ignorant or shamelessly misleading.

But the parallels between McCarthy’s era and our own run deeper than personalities. Although The Second Red Scare is about McCarthyism, it devotes little attention to McCarthy himself. The book is about how opponents of the New Deal exploited Americans’ fear of Soviet espionage in order to roll back public policies whose regulatory and redistributive effects conservatives abhorred. It shows that the federal employee loyalty program took shape long before the junior senator from Wisconsin seized the limelight in 1950 by charging that the State Department was riddled with communists.

By the late 1930s congressional conservatives of both parties were claiming that communists held influential jobs in key New Deal agencies—particularly those that most strongly challenged corporate prerogatives regarding labor and prices. The chair of the new Special House Committee to Investigate Un-American Activities, Martin Dies (a Texas Democrat who detested labor unions, immigrants, and black civil rights as much as communism), demanded that the U.S. Civil Service Commission (CSC) investigate employees at several agencies. When the CSC found little evidence to corroborate Dies’s allegations, he accused the CSC itself of harboring subversives. Similarly, when in 1950 the Tydings Committee found no evidence to support McCarthy’s claims about the State Department, McCarthy said the committee conducted a “whitewash.” President Trump too insists that anyone who disproves his claims is part of a conspiracy. One important difference is that Dies and McCarthy alleged a conspiracy against the United States, whereas Trump chiefly complains of conspiracies against himself—whether perpetrated by a “deep state” soft on terrorism and immigration or by a biased “liberal media.” The Roosevelt administration dismissed Dies as a crackpot, and during the Second World War, attacks on the loyalty of federal workers got little traction.

That changed in the face of postwar Soviet conduct, the nuclear threat, and revelations of Soviet espionage. In a futile effort to counter right-wing charges that he was “soft” on communism, President Truman expanded procedures for screening government employees, creating a loyalty program that greatly enhanced the power of the FBI and the Attorney General’s List of Subversive Organizations. State, local, and private employers followed suit. As a result, the threat of long-term unemployment forced much of the American workforce not only to avoid political dissent, but to shun any association that an anonymous informant might find suspect. Careers and families were destroyed. With regard to the U.S. civil service, the damage to morale and to effective policymaking lasted much longer than the loyalty program itself.

Public employees long have been vulnerable to political attacks. Proponents of limited government by definition dislike them, casting them as an affront to the (loaded) American ideals of rugged individualism and free markets. But hostility to government employees has been more broad-based at moments when severe national security threats come on top of widespread economic and social insecurity. The post-WWII decade represented such a moment. In the shadow of the Soviet and nuclear threats, women and African-Americans struggled to maintain the toeholds they had gained during the war, and some Americans resented new federal initiatives against employment discrimination. Resentment of the government’s expanding role was fanned by right-wing portrayals of government experts as condescending, morally degenerate “eggheads” who avoided the competitive marketplace by living off taxpayers.

Today, widespread insecurity in the face of terrorism, globalization, multiculturalism, and gender fluidity has made many Americans susceptible to the same sorts of reactionary populist rhetoric heard in McCarthy’s day. And again that rhetoric serves the objectives of those who would gut government, or redirect it to serve private rather than public interests.

The Trump administration calls for shrinking the federal workforce, but the real goal is a more friendly and pliable bureaucracy. Trump advisers complain that Washington agencies are filled with leftists. Trump transition teams requested names of employees who worked on gender equality at State and climate change initiatives at the EPA. Trump media allies such as Breitbart demanded the dismissal of Obama “holdovers.” Trump selected appointees based on their personal loyalty rather than qualifications and, when challenged, suggested that policy expertise hinders fresh thinking. In firing Acting Attorney General Sally Yates for declining to enforce his first “travel ban,” Trump said she was “weak” and had “betrayed” her department. Such statements, like Trump’s earlier claims that President Obama was a Kenyan-born Muslim, fit the textbook definition of McCarthyism: undermining political opponents by making unsubstantiated attacks on their loyalty to the United States. Even more alarming is Trump’s pattern of equating disloyalty to himself with disloyalty to the nation—the textbook definition of autocracy.

Might the demise of McCarthyism hold lessons about how Trumpism will end? The Second Red Scare wound down thanks to the courage of independent journalists, the decision after four long years of McCarthy’s fellow Republican senators to put country above party, and U.S. Supreme Court decisions in cases brought by brave defendants and lawyers. The power of each of those forces was contingent, of course, on the abilities of Americans to sort fact from fiction, to resist the politics of fear and resentment, and to vote.

Landon R. Y. Storrs is professor of history at the University of Iowa. She is the author of Civilizing Capitalism: The National Consumers’ League, Women’s Activism, and Labor Standards in the New Deal Era and The Second Red Scare and the Unmaking of the New Deal Left.

Lawrence Baum: Ideology in the Supreme Court

When President Trump nominated Neil Gorsuch for a seat on the Supreme Court, Gorsuch was universally regarded as a conservative. Because of that perception, the Senate vote on his confirmation fell almost completely along party lines. Indeed, Court-watchers concluded that his record after he joined the Court late in its 2016-2017 Term was strongly conservative. But what does that mean? One possible answer is that he agreed most often with Clarence Thomas and Samuel Alito, the justices who were considered the most conservative before Gorsuch joined the Court. But that answer does not address the fundamental question: why are the positions that those three justices took on an array of legal questions considered conservative?

The most common explanation is that liberals and conservatives each start with broad values that they then apply in a logical way to the various issues that arise in the Supreme Court and elsewhere in government. But logic can go only so far to explain the ideological labels of various positions. It is not clear, for instance, why liberals are the strongest proponents of most individual rights that the Constitution protects while conservatives are the most supportive of gun rights. Further, perceptions of issues sometimes change over time, so that what was once considered the liberal position on an issue is no longer viewed that way.

Freedom of expression is a good example of these complexities. Beginning early in the twentieth century, strong support for freedom of speech and freedom of the press was regarded as a liberal position. In the Supreme Court, the justices who were most likely to support those First Amendment rights were its liberals. But in the 1990s that pattern began to change. Since then, when the Court is divided, conservative justices provide support for litigants who argue that their free expression rights have been violated as often as liberals do.

To explain that change, we need to go back to the period after World War I when freedom of expression was established as a liberal cause. At that time, the government policies that impinged the most on free speech were aimed at political groups on the left and at labor unions. Because liberals were more sympathetic than conservatives to those segments of society, it was natural that freedom of expression became identified as a liberal cause in the political world. In turn, liberal Supreme Court justices gave considerably more support to litigants with free expression claims than did their conservative colleagues across the range of cases that the Court decided.

In the second half of the twentieth century, people on the political left rethought some of their assumptions about legal protections for free expression. For instance, they began to question the value of protecting “hate speech” directed at vulnerable groups in society. And they were skeptical about First Amendment challenges to regulations of funding for political campaigns. Meanwhile conservatives started to see freedom of expression in a more positive light, as a protection against undue government interference with political and economic activity.

This change in thinking affected the Supreme Court in the 1990s and after. More free expression cases came to the Court from businesses and people with a conservative orientation, and a conservative-leaning Court was receptive to those cases. The Court now decides few cases involving speech by labor unions and people on the political left, and cases from businesses and political conservatives have become common. Liberal justices are more favorable than their conservative colleagues to free expression claims by people on the left and by individuals with no clear political orientation, but conservative justices provide more support to claims by businesses and conservatives. As a result, what had been a strong tendency for liberal justices to give the most support to freedom of expression across the cases that the Court decided has disappeared.

The sharp change in the Supreme Court’s ideological orientation in free speech cases is an exception to the general rule, but it underlines some important things about the meaning of ideology. The labeling of issue positions as conservative or liberal comes through the development of shared understandings among political elites, and those understandings do not necessarily follow from broad values. In considerable part, they reflect attitudes toward the people and groups that champion and benefit from particular positions. The impact of those attitudes is reflected in the ways that people respond to specific situations involving an issue: liberal and conservative justices, like their counterparts elsewhere in government and politics, are most favorable to free speech when that speech comes from segments of society with which they sympathize. When we think of Supreme Court justices and the positions they take as conservative and liberal, we need to keep in mind that to a considerable degree, the labeling of positions in ideological terms is arbitrary. Justice Gorsuch’s early record on the Court surely is conservative—but in the way that conservative positions have come to be defined in the world of government and politics, definitions that are neither permanent nor inevitable.

Lawrence Baum is professor emeritus of political science at Ohio State University. His books include Judges and Their Audiences, The Puzzle of Judicial Behavior, Specializing the Courts, and Ideology in the Supreme Court.

Global Ottoman: The Cairo-Istanbul Axis

First published in Global Urban History as “Global Ottoman: The Cairo-Istanbul Axis” by Adam Mestyan. Republished with permission.

On a Sunday at the end of January 1863, groups of sheikhs, notables, merchants, consuls, and soldiers gathered in the Citadel of Cairo. They came to witness a crucial event: the reading aloud of the imperial firman that affirmed the governorship of Ismail Pasha over the rich province of Egypt. The firman was brought by the Ottoman sultan’s imperial envoy. After the announcement, which occurred, of course, in Ottoman Turkish, Ismail held a reception. Local Turkic notables and army leaders came to congratulate him and express their loyalty. A few months later, in April 1863, they received Sultan Abdülaziz in person in Alexandria—something that had not occurred since the Ottomans occupied Egypt in the sixteenth century. From Alexandria the sultan took the train to Cairo. It was the first time a caliph had traveled by rail.

The Fountain of the Valide (the mother of the khedive), between 1867 and 1890, by Maison Bonfils, Library of Congress.

But what did Ottoman mean exactly in Egypt? My forthcoming book, Arab Patriotism: The Ideology and Culture of Power in Late Ottoman Egypt, examines the significance and meaning of the Ottoman imperial context for the history of Egyptian nationalism. The book demonstrates the continuous negotiation between Turkic elites in Egypt and local intellectuals and notables bound by collective, albeit contested, notions of patriotism. There was an invisible compromise through the new representations and techniques of power, including the theater. This local instrumentalization and mixing of urban Muslim and European forms is the backstory to new political communities in the Middle East. Importantly, the Ottoman connection was an urban one: imperial elites were urban elites, and rural elites had to become urban in order to maximize their interests by the fin de siècle.

So, what does the Ottoman framework mean for urban historians of the Arab world and in particular of Egypt? Such a framework does two important things: as Ehud Toledano underlines, it points to the power delegated by the sultan to the governors; and it reveals that new (elite) consumption practices and technologies spread not only by direct contact between local and European actors but also by the imperial mediation of Istanbul, or vice versa, by local provincial mediation to the capital. Even in the nineteenth century, the Ottoman Empire still guaranteed a network of free and safe trade and movement between cities. These basic features of the Ottoman context remained in place even after the British occupation of Egypt, until 1914.

Such a perspective does not diminish the factual power of European empires and their military interventionism to protect their economic interests. On Barak has shown that the first train line in Egypt was crucial for British rule in India in the 1850s. European technologies and new sources of energy (coal, electricity) also helped the Ottomans to reach their far-flung domains quickly (remember, the sultan arrived in Cairo via train). These instances, however, do not mean that the sultan’s (or his representatives’) power was completely usurped by Europeans before the 1870s. “Bringing the Ottomans back in” destabilizes the bifurcated view of West and East by highlighting a plural system of power just before the scramble for Africa.

We should not read “Ottoman” as “old” in contrast to the European “new.” This was the exciting moment of the Tanzimat reforms in the Ottoman Empire, in which new technologies were “Ottomanized” to a certain extent, alongside legal changes. There was an Ottoman elite modernity, representing novelty to the provincial populations, which had myriad connections to bourgeois practices globally.

Interior of the Mosque of Silahdar Agha, picture by the author.

In the context of nineteenth-century Cairo, the most aesthetic representations of power were often formulated in terms of Ottoman sovereignty—a sovereignty that the pashas of Egypt wished to renounce but never did. Among these Ottoman features, one can see the mosque of Mehmed Ali in the Citadel. Doris Behrens-Abouseif reminds us that this mosque represents both Mehmed Ali’s power and Ottoman imperial aesthetics. It dominates the city’s skyline to this day. The pasha’s military elite brought late Ottoman baroque to Cairo: just look at the mosque of Sulayman Silahdar Agha in al-Mu‘izz Street and other smaller Ottoman mosques, schools, fountains, and palaces around the city. Though featuring local characteristics (“the Egyptian dialect of Ottoman architecture”), these structures are unmistakably Istanbulite and were built prior to or in the 1870s. A particularly interesting interplay between Ottoman and European aesthetics, hygienic considerations, and capitalism occurred in the creation of various public and private gardens in nineteenth-century Cairo and Alexandria. Buildings of power such as the Abdin Palace were likely conceived in competition with the Ottoman capital (where the Dolmabahçe Palace was new in the 1850s), restaging political representation in a “modern” architectural idiom.

Socially, there are some who understand the Ottoman presence in Cairo (and in Egypt in general) as “The Turks in Egypt,” to quote the useful but somewhat misleading publication title of Ekmeleddin Ihsanoğlu. There were certainly ethnic Turks in Ottoman Egypt. The local population looked at the quite cruel Ottoman military ruling class as the “Turks” (al-Atrāk). Yet, in nineteenth- and early twentieth-century Cairo, there were many Ottoman Armenian, Greek, Jewish, Albanian, Bulgarian, and Circassian families whose primary or secondary language was some version of Turkish and who had family or economic ties to various Ottoman cities in Asia and Europe. Their leading members rapidly transformed themselves into Franco- and Italophone “cultural creoles” (to use Julia Clancy-Smith’s expression), who forged new identities precisely by distancing themselves from their Ottoman past. Another identity strategy, not necessarily exclusive of the previous one, was nationalism.

Ismail Pasha, Library of Congress.

The Cairo Ottoman elite was connected to Istanbul and was part of the imperial order. There was an elite Ottoman global network. The rulers, their families, and various relatives lived in both cities (later in Paris and in Switzerland). Every day, orders, secret reports, gifts, and personal staff arrived from Istanbul in Alexandria to be transported to Cairo and vice versa. From the 1860s onwards, this political and leisure traffic was facilitated by the Aziziya steamship company. The landowning-ruling households developed significant economic investments in between both cities, not to mention the large yearly tribute Egypt paid to the sultanic treasury. Ottoman treaties applied to Egypt to a significant extent. In the 1860s, as Nicolas Michel argues, the Ottoman Empire not only remained skeptical of the Suez Canal, but actually intervened in its construction. Ismail Pasha himself is usually remembered for the Suez Canal opening ceremony and associated follies which led to foreign control of Egyptian finances. He was, however, also an Ottoman man, part of the imperial elite and intimately familiar with Istanbul where he lived and where he eventually died. His mother, Hoşyar, who maintained a full Ottoman cultural elite household, invested significantly in the Muslim landscape of Cairo. Last but not least, one should not forget that the Ottoman sultan’s firman was the legal basis of the pashas’ rule.

There was also an invisible Ottoman underworld in Egypt. Sufi orders, especially in Cairo, had spiritual and material significance in Istanbul. Musicians and entertainers travelled between the two rich centers and other cities. Religious endowments (sing. waqf) in Cairo were related to Istanbul in many ways, going back to the sixteenth century. Religious scholars often received a salary from the sultan or from the Sheikh-ül-Islam. Small merchant networks functioned fully between Egypt, the Syrian provinces, and Tunis. Shari‘a court cases from Cairo could sometimes even reach the Istanbul courts. The question of religious endowments in Egypt belonging to individuals in republican Turkey remained a complex problem until the 1950s. Merchants living in Istanbul and in other parts of the empire had significant investments in Cairo and vice versa. Likewise, criminals circulated (and sometimes escaped) between Istanbul and Cairo. Al-Qanun al-Sultani, the “Sultanic Law,” was the basis of the penal system in Egypt, though the governor claimed the right to impose the death penalty for himself in the 1850s. Political dissidents also commuted between the two cities (scholars are yet to properly explore the use of fin-de-siècle Cairo as an Ottoman hub of anti-Abdülhamidian propaganda). The khedives and the sultans used various figures in the capitals to keep each other at bay.

The often-romanticized bourgeois society of Alexandria was in large part a semi-Ottoman society, which had its less spectacular but perhaps even more powerful sister-groups in Cairo. The ministries in Cairo received the French, Arabic, and Turkish newspapers printed in Istanbul until the 1880s. Armenian refugees arrived in Egypt in large numbers. Egyptians were legally Ottoman citizens until the First World War.  Even in the 1920s there were (Ottoman) Turkish newspapers printed in Cairo.

Citadel of Cairo, between 1870 and 1890, by Antonio Beato, Library of Congress.

The British occupation did not reduce the Istanbul–Cairo traffic. Egypt and its capital Cairo remained both a cultural and a financial market for Ottomans—from the Ottoman musical theater brought by enterprising Turkish-speaking Armenians, to clothing and other products. Indeed, the occupation arguably boosted the symbolic Ottoman presence with the arrival of an Ottoman Imperial High Commissioner (the war hero Muhtar Pasha) in Cairo in 1885. Ottoman flags symbolized Egypt’s belonging and resistance in the 1890s. Some argue this was a mere instrumentalization of the Ottoman Empire because Egyptian anti-colonial mass nationalism had already bloomed, clamoring for an independent nation-state.

The urban nature of these dynamics and relations cannot be overemphasized. Rural Egyptians rarely identified with the empire. Outside the cities, “the Turks” meant taxation, conscription in the army, misuse of power, and pashas and beys with legal privileges. Until the nineteenth century, peasants could apply for justice to the distant sultan as his subjects by way of petitions, but the Mehmed Ali family, by assuming legislative powers, blocked this unique means of connection between the poorest and the highest.

The provincial system of the late Ottoman Empire was torn between centralization, local concerns, and integration into global infrastructures, as Johann Büssow, Christoph Herzog, On Barak, Toufoul Abou-Hodeib, and Till Grallert have recently shown for Ottoman Palestine, Iraq, Egypt, Lebanon, and Syria, respectively. Whether the processes of urbanization and capitalism were paired with imperial initiatives depended on local elite activity, as negotiations played out in the European imperial context.

Adam Mestyan is a historian of the Middle East. He is an assistant professor of history at Duke University and a Foreign Research Fellow (membre scientifique à titre étranger) at the French Institute of Oriental Archeology – Cairo. His first monograph, Arab Patriotism: The Ideology and Culture of Power in Late Ottoman Egypt, presents the essential backstory to the formation of the modern nation-state in the Middle East.

James Gibson: Voters Beware! TV ads may damage Supreme Court legitimacy

The right-wing Judicial Crisis Network has launched a $10 million advertising campaign to put public pressure on Democratic politicians who oppose President Trump’s nomination of Judge Neil Gorsuch to the U.S. Supreme Court.

While ideological fights over who controls the courts are nothing new, my research suggests that this use of political advertising to sway public opinion of a nominee may do real damage to the institutional legitimacy of the U.S. Supreme Court in the eyes of the American people.

In Citizens, Courts, and Confirmations, Gregory Caldeira and I focused on the 2006 nomination of Samuel Alito to the U.S. Supreme Court. During that confirmation battle, proponents and opponents of Alito’s confirmation ran intensely politicized television ads trying to shape public opinion on the nomination.

Using surveys of public opinion, we demonstrated that the ads spilled over to infect support for the Court as an institution, subtracting from its legitimacy. In order to understand how and why this happened, it’s important to consider what political scientists (including Caldeira and me) have discovered is the main source of the Court’s legitimacy.

Despite the arguments of some judges to the contrary, the American people do not believe that judges somehow mystically “find” the law. They realize, instead, that judges’ ideologies matter, that liberal and conservative judges make different decisions, and that they do so on the basis of honest intellectual differences. This philosophy is called “legal realism,” and it is widely embraced by the American people.

But there is a difference between honest ideological differences and the politicization of the courts. When people believe that a judge “is just another politician,” or that courts are filled with such judges, legitimacy suffers. The American people do not think highly of politicians. Politicians are seen as self-interested and insincere. That means one can rarely believe what politicians say because they so rarely say what they believe. It is not ideology that Americans oppose, but rather the insincere and strategic way that contemporary politics is fought.

Our analysis discovered that it is not damaging to the Court when Americans recognize that judges hold different ideologies and that those ideologies strongly influence their decisions. But when judges cross the line, when they engage in overly politicized behavior—either on the bench or off—then the Court’s legitimacy is threatened. Scalia’s intemperate language in his opinions is one such example of judges venturing into partisanship; so, too, is Ginsburg’s attempt to influence last year’s presidential election. Still, events like these do not widely penetrate the consciousness of the American people, and so in the end, they likely have small effects on institutional legitimacy.

The same cannot be said of televised advertisements. Millions of Americans are exposed to these churlish and politicized ads, and so they take their toll. The lesson of these ads is too often the same: The “Supreme Court is just another political institution,” worthy of no more esteem than the other institutions of government. As this belief becomes widespread, the institution of the Court is harmed.

Our analysis demonstrates that while Alito got his seat on the Supreme Court, the court he joined had a diminished supply of goodwill among the Court’s constituents, the American people. It also makes clear that the upcoming nomination fights have implications beyond who does and doesn’t get a seat on the bench. At stake is the very legitimacy of the U.S. Supreme Court.

James L. Gibson is the Sidney W. Souers Professor of Government at Washington University. He is the coauthor of Citizens, Courts, and Confirmations: Positivity Theory and the Judgments of the American People.

Rahul Sagar: Are There “Good” Leaks and “Bad” Leaks?

Washington is awash in leaks. Should these leaks be praised or should they be condemned, as the president argues? President Trump’s supporters may argue that his critics—Democrats in particular—praise or condemn leaks as it suits them. Consider the hypocrisy, they will say:

First, since Democrats criticized Wikileaks’ publication of John Podesta’s emails, shouldn’t they also criticize NSA and FBI employees who have disclosed information about contacts between Trump Administration officials and Russian officials? Second, if it was wrong for Edward Snowden to have disclosed communications intelligence, as many Democrats argued at the time, then shouldn’t they also think it is wrong for NSA and FBI employees to disclose communications intelligence about Russian contacts with the Trump Administration?

These questions aren’t trivial. So how to respond?

The answer to the first question hinges on what kind of leaks are in question—those that expose wrongful or unlawful activities as opposed to those that reveal private behavior or information. The former variety further the public interest because they bring to light information that citizens and overseers require in order to hold representatives to account. Leaks about contacts between Trump Administration officials and Russian officials clearly fall into this category. The latter variety may have only a faint connection to the public interest. It may be of some interest to have an unvarnished account of the private conduct of public officials, but this interest hardly seems weighty enough to justify the violation of a person’s privacy (especially when the violation is wholesale). Leaks about Podesta’s pizza orders and office politicking fall into the latter category.

The answer to the second question hinges on knowing when unauthorized disclosures are justified. The president’s supporters may argue that intelligence leaks are never justified because they are illegal. To this the press and First Amendment aficionados may respond that leaks are never unlawful. In their view, the Espionage Act, often used to prosecute leakers, was never meant to be used in this fashion. This response is untenable, but even supposing it were true, it is irrelevant. The Communications Intelligence Act (18 USC §798) plainly makes it unlawful—without exception—for persons to communicate or publish classified information “concerning” or “obtained by” the “processes of communication intelligence.”

So the president is right to say that government employees who have disclosed intercepts pertaining to Russian actions, and even the reporters and newspapers that have published these leaks, have broken the law. But must the law always be followed? Suppose you witness a hit-and-run. There are clear signs saying that you are not to stop or park along the road. You would of course nonetheless stop, on the reasonable calculation that disobedience is justified when a weighty interest is involved and there are no other means of aiding the victim. This is the position that intelligence officers sometimes find themselves in—only they can assist the victim, because only they are aware of the harm that has been done. Indeed, when the harm they are witnessing is sufficiently acute, government employees may not only be justified in breaking the law, they may even be obliged to do so.

This is not the end of the story, however. Much depends on how a government employee breaks the law. Let us return to the analogy. As you rush the victim to the hospital, are you morally obliged to stop at every red light along the way? It depends, surely, on how crowded the roads are, and how badly the victim is injured. If the roads are busy, jumping a light will likely endanger more lives than it will save. But if the roads are clear, and the victim is hemorrhaging, then it is ethical to run a red light. This is the standard that government employees and the press ought to hold themselves to. If they act rashly they will end up doing more harm than good. Arguably, this is why Snowden does not deserve a pardon—he disclosed classified information without much regard for consequences, seemingly driven by his own pet peeves. Did we really need to learn precisely how the United States spies on foreign powers, for instance? Far better, then, to act temperately—disclosing only as much information as is necessary to kick-start the processes of oversight and accountability. This may be where we are today. But it is not easy to be certain. Since ordinary citizens are not privy to the contents of the intercepts, we must hope that the government employees responsible have faithfully calculated that the cost of disclosing such intelligence is worth bearing because the danger confronting the nation is so great.

There are costs, to be clear. The recent disclosures are likely to have exposed sources and methods since Russian agents have presumably learnt that their communication channels are not secure. There are also political costs for the intelligence community, since the leaks can be—indeed are being—portrayed as an effort to subvert the president.

It now remains for Congress to credibly investigate the worrying claims that have been aired. Should the claims prove true, then we will be indebted to the individuals who have made these disclosures at great risk to themselves. Should the disclosures prove unfounded, however, then President Trump’s supporters will have reason to think that politically motivated insiders have engaged in sabotage, and recriminations may well follow. It is also worth pointing out that should Congress fail to conduct a credible investigation, then further disclosures may be justified. This would be not unlike how the driver in our analogy may drive the victim to a different hospital should the first one prove unwilling to attend to the emergency.

It cannot be said enough that with great power comes great responsibility. This aphorism applies as much to presidents as it does to the press. There are “good” and “bad” leaks. To make the distinction, officials, reporters, and citizens must think carefully about the what, when, and how of unauthorized disclosures.

Rahul Sagar is Global Network Associate Professor of Political Science at New York University in Abu Dhabi and Washington Square Fellow at New York University in New York. He is the author of Secrets and Leaks: The Dilemma of State Secrecy.

Leah Boustan: What Mid-Century White Flight Reveals about the Trump Electorate

In the months since Donald Trump’s surprise win of the U.S. presidency, two prevailing explanations for the electoral upset have emerged: Trump voters were swayed either by racism or by economic anxiety. Trump’s campaign embraced a series of racist stereotypes—Mexicans are criminals; blacks live in inner-city hellholes—but it also promised to bring back jobs to America’s declining manufacturing regions.

History suggests that the real story is probably a mix of these two explanations. Historical events that we have attributed to racism are often partially motivated by economic concerns. Looking back, we can see the reverse is also true; decisions perceived as strictly economic calculations can be tinged by racism.

One such example is white flight from central cities. In the mid-20th century, the share of white metropolitan households living in cities fell from 64 percent to 36 percent. White flight is typically attributed to racist attitudes of white residents who worried about a black family moving next door; Ta-Nehisi Coates refers to white suburbanization as a “triumph of racist social engineering.” But a closer reevaluation of this chapter in urban history reveals that white flight was motivated by both racism and economic anxiety.

In 1940, the majority of African Americans still lived in the rural South. At the time, even northern cities like Chicago and Detroit, which today have large black communities, were less than 10 percent black. Prompted in part by new factory positions opening during World War II, large waves of black migrants left the South.

Black migration definitely coincided with white relocation to the suburbs. But, many white suburban moves were unrelated to black arrivals, driven instead by rising incomes after the War, the baby boom, and new highway construction. Indeed, suburbanization was prevalent even in cities that received few black southerners, like Minneapolis-St. Paul. But there is a strong relationship between the number of black migrants to a northern city during this period and the number of whites who chose to relocate to the suburbs. For every black arrival, two whites left a typical city, a figure that puts a precise value on what contemporaries already knew: when black people move in, white people move out—à la the Younger family in A Raisin in the Sun.

Still, only a portion of white flight can be traced to the classic dynamic of racial turnover. Cities were simply too segregated by race for many urban whites to actually encounter black neighbors. In 1940, the average white urban household lived more than three miles away from a black enclave. Yet despite substantial distance from black neighborhoods within the city, many white families chose to relocate to the suburbs as black migrants arrived.

Why did white households flee black neighborhoods that were miles away? Changing city finances played a role. As southern black migrants settled in northern cities in large numbers, this lowered the average income of the urban population. Cities responded with a combination of higher property taxes and shifts in spending priorities. Indeed, some white households left cities to avoid this rising tax burden, an economically motivated choice for sure, but one that cannot be fully separated from race and racism.

We can learn a lot about the fiscal motivation behind white flight by focusing on the choices of white residents in neighborhoods on city-suburban borders. Peripheral urban neighborhoods shared the racial composition and housing stock of their suburban counterparts, and enjoyed the same local parks, bus lanes and shopping streets. Yet, by crossing to the suburban side of the border, families could buy into a different local electorate, one that was more racially homogenous and better-off, and thus able to afford quality public schools and lower property taxes. (As an aside, I personally lived in three of these border areas—Cambridge-Somerville, MA; Minneapolis-Edina, MN and Los Angeles-Beverly Hills, CA—and found crossing the border to be imperceptible on the ground.)

Houses on the suburban side of the border are always a little more expensive because they offer access to suburban schools and other public goods. Using data on 100 such neighborhoods, I found that this cross-border housing price gap grows by a few percentage points as black migrants flow into the city – even if new black arrivals live miles away. White households were willing to pay more for suburban houses not only to escape black neighbors but also to join a different tax base.

The debate about how Trump prevailed is currently a stalemate between those who point to real sources of economic anxiety and those who fall back on “it’s racism, stupid!” But casting blame on other racial groups during times of economic downturn is a tried-and-true political tool. Even if the major source of job loss in U.S. manufacturing has been automation, it is relatively easy to encourage voters to blame Chinese manipulation or greedy immigrants. Trying to separate racism from economic anxiety can obscure more than it reveals. History instead urges us to consider how economic concerns and racial animus intertwine.

Leah Platt Boustan is professor of economics at the University of California, Los Angeles, and a research associate at the National Bureau of Economic Research. She is the author of Competition in the Promised Land: Black Migrants in Northern Cities and Labor Markets.

Benjamin W. Goossen: How to Radicalize a Peaceful Minority

There is no better way to turn a religious minority against a nation than by maligning, detaining, and excluding them. While Donald Trump claims his ban on immigrants from seven predominantly-Muslim countries will make Americans safer, history suggests that nativist policies will backfire. Consider the case of perhaps the world’s least likely national security threat: pacifist Mennonites.

A poster for the 1941 Nazi propaganda film, “Village in the Red Storm,” depicting the suffering of German-speaking Mennonites in the Soviet Union, in which the protagonists valiantly give up their pacifism to fight for their race

Members of mistreated groups—whether Mennonites a century ago or Muslims today—can and sometimes do turn on hostile governments, often with alarming speed. At the beginning of the twentieth century, no one would have associated Mennonites, a small Christian group dedicated to nonviolence and charitable works, with hate speech or mass murder. At the time, most Mennonites lived peaceable existences in rural, German-speaking enclaves in Europe or North America.

When the First World War generated a global wave of anti-German and anti-pacifist sentiment, however, tens of thousands—especially those in Central and Eastern Europe—turned to militarist German nationalism.

The shift was as swift as it was shocking. “We have imbibed the notion of pacifism with our mothers’ milk,” a respected Russian Mennonite leader named Benjamin Unruh wrote in 1917. “It is a Mennonite dogma.” Yet by the Second World War, Unruh had become a prominent Nazi collaborator, aiding ethnic cleansing programs that deported Poles and murdered Jews to make way for “Aryan” Mennonites.

How could diehard pacifists turn their backs on the peaceful teachings of their faith?

Mennonites like Unruh, who had once considered violence an unforgivable sin, could be found in military units across Hitler’s empire, including on the killing fields of the Holocaust. Unruh’s own home community near Crimea—once a bastion of pacifist theology—became a model colony under Nazi occupation, generating propaganda for dispersion across the Third Reich and providing a pipeline for young men to join the radical Waffen-SS.

A flag raising ceremony in the Mennonite colony of Molotschna in Nazi-occupied Ukraine in 1942 on the occasion of a visit from Heinrich Himmler

Demonizing Muslim refugees today grants legitimacy to a violent fringe—one already on the lookout for recruits. These are the same tactics that, in the months before the Second World War, prompted a small number of disaffected Mennonites from places as diverse as Canada, Paraguay, Brazil, Poland, and the Netherlands—as well as my own hometown of Newton, Kansas—to travel to Germany to support Hitler’s war machine.

Most Mennonite congregations worldwide, even during the darkest days of the twentieth century, retained their pacifism. And today, the global church has taken steps to address its partial legacy of German racism. This history nevertheless demonstrates how individuals or communities can discard peace-loving traditions; by the height of Nazi expansion, one fourth of the world’s Mennonites lived in—and frequently praised—Hitler’s Germany.

Scapegoated by nativist politicians, Mennonites in Eastern Europe and sometimes beyond saw the Third Reich as a refuge from humiliation, deportation, torture, and travel bans. Despite the harrowing experiences of more than 100,000 Mennonites in the Soviet Union—where families faced civil war, famine, and ethnic cleansing—countries like the United States generally closed their borders to the destitute. Canada, which in 1917 had disenfranchised its entire Mennonite population, likewise banned refugees at various points during the 1920s and 1930s.

1930 propaganda image originally subtitled “A German Death Sentence” depicting the suffering of Mennonites and other German-speakers in the Soviet Union

Letters and diaries show how some pacifists, denigrated in the East and barred from the West, became radicalized. One man recalled the shame of imprisonment in communist Ukraine. “So, you’re a German?” a Bolshevik interrogator asked, before beating him senseless. Secret police particularly targeted Mennonites who had tried to emigrate, accusing them of “carrying out of counter-revolutionary fascist activities”—even though most initially had little enthusiasm for, let alone contact with, Nazi Germany.

“I was no enemy of the Soviets,” another victim of wrongful arrest reported, “but now that I’ve come to know them, you’ll find I’m a true enemy. Now I’m a Hitlerite, a fascist unto death.”

Targeting immigrants and refugees from war-torn Muslim countries gives terror groups like ISIS and al-Qaeda exactly what they want. Just as twentieth-century governments across Europe and the Americas needlessly alienated their Mennonite subjects and excluded Mennonite migrants, President Trump’s grandstanding harms those among the world’s least threatening and most vulnerable populations, in turn making all of us less safe. This is how to radicalize a peaceful minority.

Ben Goossen is a historian at Harvard University and the author of Chosen Nation: Mennonites and Germany in a Global Era, forthcoming in May from Princeton University Press.