Hilda Sabato: The dilemmas of political representation

Since the beginning of the twenty-first century, the word “populism” has gained increasing space in the media, initially associated with political events in Latin America. The term is far from new, but it has reappeared to label very different regimes—from that of Chávez and Maduro in Venezuela, to those of Morales in Bolivia, Correa in Ecuador, and the Kirchners in Argentina. Unlike the spread of populist regimes in the postwar era, however, this latest wave has reached well beyond that continent, to include political and ideological movements all over the world. And while the success of the former was often explained by resorting to the long history of caudillos in Spanish America, it is quite obvious that such an argument cannot be applied to this new spread of populism across the globe. Both moments, however, share some common features that may better account for the flourishing of populism than any reference to a past tradition of caudillismo.

The end of the twentieth century heralded an era of political change on a global scale. Some of the main institutions and practices that had long reigned unchallenged in Western democracies have come under heavy scrutiny. The key political actor of the past century, the party, is in peril of extinction—at best, it will survive in new formats. Analysts talk about the crisis of representation, while most individuals feel foreign to the men and women in government, who they sense operate as a closed caste rather than as representatives of the people. In the words of Federico Finchelstein, “Democracy is confronting challenges that are similar to those it encountered during the Great Depression….” In that context, therefore, “Populism offers authoritarian answers to the crisis of democratic representation.”[1]

We are then, once more, at a critical turn in the history of modern politics, as it has developed since the revolutions of the eighteenth century succeeded in introducing the sovereignty of the people as the founding principle of the polity and shattered the edifice of the ancien régime in several parts of Europe and the Americas. Within that framework, a key step in the actual organization of the new order was the adoption of representative forms of government. In contrast to former experiences of direct popular rule, the introduction of political representation in the late eighteenth century offered a theoretical and practical solution to the challenge of making operative the principle of popular sovereignty.

Yet such a step posed dilemmas that have persisted throughout the centuries. Thus, the tension between the belief that power should stem directly from the people (an association of equals) and any operation whereby a selected few are set apart to exert power in the name of the many has run through the entire history of self-government. Modern representation did not overcome this quandary, although it offered a partial solution by combining democratic and aristocratic means: elections by all to select the few. Yet the attribute of distinction that marks those few—however chosen—keeps challenging the principle of equality, a value reinforced with the consolidation of democracy in the twentieth century. Besides this conceptual conundrum, the actual relationship between the representatives and the represented has always been, and remains, a crucial matter in the political life of modern times.

A second dilemma involved in representative government has posed even more challenges to the functioning of the polity. At the beginning of this story, although representatives were chosen by individual citizens embedded in their actual social conditions, they embodied, above all, the political community (the nation) as an indivisible whole, thus materializing the unity of the people. For almost a century, this issue informed the public debates around the unanimity or the plurality of the polity, and permeated the discussions on the forms of representation, which found one of its more heated moments late in that period in the controversies around the figure of the political party. By the 1900s, however, parties had become key institutions in the prevailing paradigm of representation, so much so that they were usually considered inseparable from democracy as it consolidated during the twentieth century. But today that whole edifice is crumbling, a clear sign that the challenges and dilemmas of political representation persist.

Republics of the New World addresses these issues at the time when modern representation appeared as a viable solution to the difficulties of instituting forms of government based on the principle of popular sovereignty. It traces the conflict-ridden history of representative institutions and practices in an area of sustained experimentation in the ways of the republic: post-colonial Spanish America. Two hundred years later, political representation remains problematic, and some of the same questions posed by the founders of those republics keep coming up, defying our democratic era. Today, as in the past, the way out of the crisis is uncertain and depends upon our own choices. In this context, populism offers a particular response to this predicament, while other political proposals resist its authoritarian features and seek to address the current dilemmas by enhancing the pluralistic and egalitarian elements of our democratic traditions.

Hilda Sabato is head researcher at the National Scientific and Technical Research Council (CONICET) in Argentina and former professor of history at the University of Buenos Aires. She is the author of Republics of the New World: The Revolutionary Political Experiment in Nineteenth-Century Latin America.

[1] Federico Finchelstein, From Fascism to Populism in History, Oakland: University of California Press, 2017, p.29.

Andrew Scull: On the response to mass shootings

America’s right-wing politicians have developed a choreographed response to the horrors of mass shootings. In the aftermath of Wednesday’s massacre of the innocents, President Trump stuck resolutely to the script. Incredibly, he managed to avoid even mentioning the taboo word “guns.” In his official statement on this week’s awfulness, he offered prayers for the families of the victims—as though prayers would salve their wounds, or prevent the next outrage of this sort; they now fall thick and fast upon us. And he spouted banalities: “No child, no teacher, should ever be in danger in an American school.” That, of course, was teleprompter Trump. The real Trump, as always, had surfaced hours earlier on Twitter. How had such a tragedy come to pass? On cue, we get the canned answer: the issue was mental health: “So many signs that the Florida shooter was mentally disturbed.” Ladies and gentlemen, we have a mental health problem, don’t you see, not a gun problem.

Let us set aside the crass hypocrisy of those who have spent so much time attempting to destroy access to health care (including mental health care) for tens of millions of people, and who now bleat about the need to provide treatment for mental illness. Let us ignore the fact that President Trump, with a stroke of a pen, set aside regulations that made it a little more difficult for “deranged” people to obtain firearms. They have Second Amendment rights too, or so it would seem. Let us overlook the fact that in at least two of the recent mass shootings, the now-dead were worshipping, at the moment they were massacred, the very deity their survivors and the rest of us are invited to pray to. Let us leave all of that out of account. Do we really just have a mental health problem here, and would addressing that problem make a dent in the rash of mass killings?

Merely to pose the question is to suggest how fatuous this whole approach is. Pretend for a moment that all violence of this sort is the product of mental illness, not, as is often the case, the actions of evil, angry, or viciously prejudiced souls. Is there the least prospect that any conceivable investment in mental health care could anticipate and forestall gun massacres? Of course not. Nowhere in recorded history, on no continent, in no country, in no century, has any society succeeded in eliminating or even effectively addressing serious forms of mental illness. Improving the lot of those with serious mental illness is a highly desirable goal. Leaving the mentally disturbed to roam or rot on our sidewalks and in our “welfare” hotels, or using a revolving door to move them in and out of jail—the central elements of current mental health “policy”—constitutes a national disgrace. But alleviating that set of problems (as unlikely as that seems in the contemporary political climate) will have zero effect on gun violence and mass shootings.

Mental illness is a scourge that afflicts all civilized societies. The Bible tells us, “The poor ye shall always have with you.” The same, sadly, is true of mental illness. Mental distress and disturbance constitute one of the most profound sources of human suffering, and simultaneously pose one of the most serious challenges, of both a symbolic and practical sort, to the integrity of the social fabric. Whether one looks to classical Greece and Rome, to ancient Palestine or the Islamic civilization that ruled much of the Mediterranean for centuries, to the successive Chinese empires or to feudal and early modern Europe, everywhere people have wrestled with the problem of insanity, and with the need to take steps to protect themselves against the depredations of the minority of seriously mentally ill people who pose genuine threats of violence. None of these societies, or many more I could mention, ever saw the levels of carnage we Americans now accept as routine and inevitable.

Mental illness is an immutable feature of human existence. Its association with mass slaughter most assuredly has not been. Our ancestors were not so naïve as to deny that madness was associated with violence. The mentally ill, in the midst of their delusions, hallucinations, and fury, were sometimes capable of horrific acts: consider the portrait in Greek myth of Heracles dashing out the brains of his children, in his madness thinking them the offspring of his mortal enemy Eurystheus; Lucia di Lammermoor stabbing her husband on their wedding night; or Zola’s anti-hero of La Bête humaine, Jacques Lantier, driven by passions that escape the control of his reason, raping and killing the object of his desire. These and other fictional representations linking mental illness to animality and violence are plausible to those encountering them precisely because they match the assumptions and experience of the audiences toward whom they are directed. And real-life maddened murderers were to be found in all cultures across historical time. Such murders were one of the known possible consequences of a descent into insanity. But repeated episodes of mass killing by deranged individuals, occurring as a matter of routine? Nowhere in the historical record can precursors of the contemporary American experience be found. It is long past time to stop blaming an immutable feature of human culture—severe mental illness—for routine acts of deadly violence that are instead the product of a resolute refusal to face the consequences of unbridled access to a deadly form of modern technology.

Claims that the mowing down of unarmed innocents is a mental health problem cannot explain why, in that event, such massacres are exceedingly rare elsewhere in the contemporary world, while they are now routine in the United States. Mental illness, as I have stressed, is a universal feature of human existence. Mass shootings are not. Australia and Britain (to take but two examples) found themselves in the not-too-distant past having to cope with horrendous mass killings that involved guns. Both responded with sensible gun control policies, and have been largely spared a repetition of the horrors routinely visited upon innocent Americans. Our society’s “rational” response, by contrast, is to rush out and buy more guns, enriching those who profit from these deaths and ensuring more episodes of mass murder.

The problem in the United States is not crazy people. It is crazy gun laws.

Andrew Scull is Distinguished Professor of Sociology and Science Studies at the University of California, San Diego. He is the author of Masters of Bedlam: The Transformation of the Mad-Doctoring Trade and Madness in Civilization: A Cultural History of Insanity, from the Bible to Freud, from the Madhouse to Modern Medicine.

A. James McAdams: What South Korea can learn from Germany

When athletes from North and South Korea marched onto the field under the same flag in Pyeongchang on February 9, this was not the first time that two fiercely antagonistic states, one socialist and the other capitalist, jointly represented a divided nation at the Olympics. Three times, in the 1956, 1960, and 1964 Olympics, the teams from East and West Germany did the same. Over these years, however, East Germany had no choice in this arrangement. In accord with West German policy and with the IOC’s blessing, this show of unity was meant to prevent the East German regime from claiming to represent a separate sovereign entity apart from the old German nation. Only in 1968 did the IOC finally grant East Berlin’s wish to march independently under its own flag in Mexico City.

Yet the difference in these displays of political symbolism between hostile states is potentially misleading. It prevents us from recognizing how much the South Korean government can learn from the example of divided Germany. Only a year after East Berlin’s modest achievement in the 1968 Olympics, a new West German chancellor, Willy Brandt, took the first steps toward implementing a principle, “change through rapprochement,” that was based upon a simple idea: you can’t influence a state with which you have no relations. Although his government stubbornly refused to recognize its rival’s legitimacy, it did the next best thing from the perspective of its counterparts in East Berlin. It explicitly affirmed East Germany’s factual existence as a separate part of Germany. This concession paved the way for two decades of successful negotiations over practical improvements in the two states’ relations, including the reunification of families, greater opportunities for East German pensioners to visit the West, and increased trade. These ties did not precipitate Germany’s unification in 1990. But, they made the challenge of bringing together the two parts of the divided nation much easier.

In the same way, the two Korean teams’ show of unity at the Olympics could reasonably be defended as the logical first step in a similar direction. As South Korea’s new president Moon Jae-in enunciated in Berlin on July 6, Seoul is now prepared to treat Pyongyang as a serious negotiating partner precisely because the only alternative is total hostility. Bonn’s relationship with East Berlin was always difficult because of the communist regime’s ability to manipulate its citizens’ contacts with the West. Yet comparatively speaking, these trials are slight when they are viewed in light of the monumental challenge of dealing with a regime that has the power to monitor every bit of information that flows to its population. East Germans could regularly watch West German news on their television sets, but precious few North Koreans have access to foreign radio broadcasts of any kind, let alone cell phones or computers. Hence, even the smallest openings to the North are valuable. In this respect, expanded contacts between Korea’s divided states, even if they are small or merely symbolic, are arguably even more important than they were for the Germans. They represent the only way Moon’s government can hope to improve the lives of the people on the other side of his country’s border.

Germany’s example also suggests that improved relations between the Koreas could be strategically advantageous for Seoul. Once West Germany’s leaders proved their commitment to reducing tensions with East Berlin, it was much easier to present themselves as reliable, independent-minded interlocutors to governments throughout the eastern bloc, including the regime that ultimately decided the fate of East Germany, the Soviet Union. Similarly, Moon’s readiness to talk with the North could be a step toward an improved relationship with the country best positioned to influence Pyongyang—China. If the South Koreans are able to convince Beijing that their citizens were marching with their northern counterparts in Pyeongchang for specifically Korean reasons, and not as part of some coordinated policy with the United States, Seoul could provide the key to the stability on the Korean peninsula that the Chinese have been seeking.

Predictably, even the existence of these slight gestures between Seoul and Pyongyang has aggravated American policymakers who want to maintain a disciplined wall of hostility toward North Korea. Yet it is interesting to note that many of the same misgivings were present in Washington when Willy Brandt sought to open independent channels of communication with East Berlin. Henry Kissinger and other officials in the Nixon administration worried that the U.S. would lose control of its ability to define western policy toward the Soviet bloc. Yet despite these fears, Bonn eventually played an instrumental role in reducing the East-West tensions that stood in the way of realizing American interests amidst the unexpected fall of communist regimes in the late 1980s and early 1990s. Similarly, the enunciation of a South Korean version of “change through rapprochement” could be Washington’s best hope for ameliorating the threat that a totally isolated North Korea currently represents to global security.

A. James McAdams is the William M. Scholl Professor of International Affairs and director of the Nanovic Institute for European Studies at the University of Notre Dame. His many books include Germany Divided: From the Wall to Reunification and Vanguard of the Revolution: The Global Idea of the Communist Party. He lives in South Bend, Indiana.

John Tutino: Mexico, Mexicans, and the Challenge of Global Capitalism

This piece has been published in collaboration with the History News Network. 

Mexico and Mexicans are in the news these days. The Trump administration demands a wall to keep Mexicans out of “America,” insisting that undocumented immigrants cause unemployment, low wages, and worse north of the border. It presses a renegotiation of the North American Free Trade Agreement, claiming to defend U.S. workers from the pernicious impacts of a deal said to favor Mexico and its people. Meanwhile U.S. businesses (from autos to agriculture) work to keep the gains they have made in decades of profitable cross-border production and marketing. Their lobbying highlights the profits they make by employing Mexicans who earn little (at home and in the U.S.) and whose efforts subsidize U.S. businesses and consumers.

The integration of Mexico and the U.S., their workers and markets, is pivotal to U.S. power, yet problematic to many U.S. voters who feel disadvantaged in a world of globalizing capitalism and buy into stereotypes that proclaim invasive Mexicans the cause of so many problems. Analysts of diverse views, including many scholars, often imagine that this all began in the 1990s with NAFTA. A historical survey, however, shows that the integration of North America’s economies began with the U.S. taking rich lands from Texas to California by war in the 1840s, driving the border south to its current location. U.S. capitalists led a westward expansion and turned south to rule railroads, mining, petroleum, and more in Mexico before 1910—while Mexican migrants went north to build railroads, harvest crops, and supply cities in lands once Mexican. The revolution that followed in part reacted to U.S. economic power; its disruptions sent more Mexicans north to work. While Mexico struggled toward national development in the 1920s, displaced families still moved north. When depression stalled the U.S. economy in the 1930s, Mexicans (including many born U.S. citizens) were expelled south. When World War II stimulated both North American economies, the nations contracted to draw Mexican men north to work as braceros. Mexico’s “miracle” growth after 1950 relied on U.S. models, capital, and labor-saving technology—and never created enough work to curtail migrant flows. The Mexican oil boom of the 1970s tapped U.S. funds, aiming to bring down OPEC oil prices to favor U.S. hegemony in a Cold-War world. By the 1980s the U.S. gained cheaper oil, helping re-start its economy. In the same decade, falling oil prices set off a debt-fueled depression in Mexico that drove more people north. NAFTA, another Mexican collapse, and soaring migration followed in the 1990s. The history of life and work across the U.S.-Mexican border is long and complex. Through twists and turns it shaped modern Mexico while drawing profits, produce, and Mexicans to the U.S.

The Mexican Heartland takes a long view to explore how communities around Mexico City sustained, shaped, and at times challenged capitalism from its sixteenth-century origins to our globalizing times. From the 1550s they fed an economy that sent silver, then the world’s primary money, to fuel trades that linked China, South Asia, Europe, and Africa—before British America began. By the eighteenth century, Mexico City was the richest place in the Americas, financing mines and global trade, sustained by people living in landed communities and laboring at commercial estates. Its merchant-financiers and landed oligarchs were the richest men in the Americas while the coastal colonies of British America drew small profits sending tobacco to Europe and food to Caribbean plantations (the other American engines of early capitalism).

Then, imperial wars mixed with revolutionary risings to bring a world of change: North American merchants and slaveholders escaped British rule after 1776, founding the United States; slaves in Saint Domingue took arms, claimed freedom, destroyed sugar plantations, and ended French rule, making Haiti by 1804; insurgents north of Mexico City took down silver capitalism and Spain’s empire after 1810, founding Mexico in 1821. Amid those conflicts, Britain forged a new industrial world while the U.S. began a rise to continental hegemony, taking lands from native peoples and Mexico to expand cotton and slavery, gain gold and silver, and settle European migrants. Meanwhile, Mexicans struggled to make a nation in a reduced territory while searching for a new economy.

The Mexican Heartland explores how families built lives within capitalism before and after the U.S. rose to power. They sought the best they could get from economies made and remade to profit the few. Grounded in landed communities sanctioned by Spain’s empire, they provided produce and labor to carry silver capitalism. When nineteenth-century liberals denied community land rights, villagers pushed back in long struggles. When land became scarce as new machines curtailed work and income, they joined Zapata in revolution after 1910. They gained land, rebuilt communities, and carried a national development project. Then after 1950, medical capitalism delivered antibiotics that fueled a population explosion, while “green revolution” agriculture profited by expanding harvests even as it made work and income scarce. People without land or work thronged to burgeoning cities and across the border into the U.S., searching for new ways to survive, sustain families, and re-create communities.

Now, Mexicans’ continuing search for sustainable lives and sustaining communities is proclaimed an assault on U.S. power and prosperity. Such claims distract us from the myriad ways that Mexicans feed the profits of global corporations, the prosperity of the U.S. economy, and the comforts of many consumers. Mexicans’ efforts to sustain families and communities have long benefitted capitalism, even as they periodically challenged capitalists and their political allies to keep promises of shared prosperity. Yet many in the U.S. blame Mexico and Mexicans for the insecurities, inequities, and scarce opportunities that mark too many lives under urbanizing global capitalism.

Can a wall solve problems of dependence and insecurity pervasive on both sides of the border? Or would it lock in inequities and turn neighboring nations proclaiming shared democratic values into ever more coercive police states? Can we dream that those who proclaim the liberating good of democratic capitalism may allow people across North America to pursue secure sustenance, build sustaining communities, and moderate soaring inequities? Such questions define our times and will shape our future. The historic struggles of Mexican communities illuminate the challenges we face—and reveal the power of people who persevere.

John Tutino is professor of history and international affairs and director of the Americas Initiative at Georgetown University. His books include The Mexican Heartland: How Communities Shaped Capitalism, a Nation, and World History, 1500-2000 and From Insurrection to Revolution in Mexico: Social Bases of Agrarian Violence, 1750–1940.

Ya-Wen Lei: Ideological Struggles and China’s Contentious Public Sphere

This post has been republished by the Fairbank Center for Chinese Studies at Harvard University.


Ideology was a critical theme at China’s 19th Party Congress in October 2017. In his speech, President Xi Jinping emphasized China’s “cultural confidence” as well as “Chinese values.” Attempting to import any other kind of political regime, he argued, would fail to match China’s social, historical and cultural conditions. Interestingly, however, at the same time that he rejected foreign political models, Xi promoted China’s particular version of modernization as a valuable model for other countries.

At the domestic level, Xi stressed the importance of controlling ideology, regulating the internet, and actively attacking “false” views within China’s public sphere. For Xi, ideology is a powerful tool that can, at best, unify the Chinese people or, at worst, turn them against the Chinese state.

In fact, ideology has been a priority for Xi ever since he became General Secretary of the Chinese Communist Party in 2012. This focus is understandable, I argue, precisely given the rising influence of liberal ideology within China’s public sphere.

Let me illustrate this by discussing one example, explored in greater depth in my book, The Contentious Public Sphere: Law, Media, and Authoritarian Rule in China. In Chapter 5, I analyze the political orientation of the top 100 opinion leaders on Weibo—one of China’s most popular social media sites—and the connections among them in 2015.

I classified Weibo opinion leaders into the following categories: political liberals, political conservatives, and others. I defined political liberals as those who express support on Weibo for constitutionalism (government authority derives from and should be limited by the constitution) and universal values (e.g., human rights, freedom, justice, equality), and political conservatives as those who argue against those principles. I classified as “others” those who expressed no views either way. I looked at people’s views on constitutionalism and universal values because these are particularly contested and politicized ideas in China given their association with Western liberal democracy. These are, in short, ideas that would not be popular in China if ideology were functioning “properly” from the government perspective.

Despite the Chinese government’s ideological control and censorship, I found that 58% of the top 100 Weibo opinion leaders in 2015 were political liberals, while only 15% were political conservatives. My analysis looked specifically at January of 2015, after the Chinese government launched its “purge the internet” campaign in August 2013 and arrested several opinion leaders. This was also after the government’s effort to use Weibo to create more “positive energy.” Presumably, then, the percentage of political liberals among opinion leaders might well have been even higher before the Chinese government’s intensified crackdowns.

In the following graph, I map the connections among the top 100 Weibo opinion leaders using social network analysis. Blue, red, and white nodes represent political liberals, political conservatives, and others, respectively. The graph reveals the greater level of influence of political liberals in general online, the dense connections among liberals themselves, and their seemingly greater influence on those who may be “on the fence” politically or simply more cautious about expressing their views of constitutionalism and universal values online. Importantly, political liberals would not have become so popular and influential had it not been for the direct and indirect endorsement of Chinese citizens.


Figure: Top 100 Weibo opinion leaders. Note: An edge between two opinion leaders is directional, showing that one opinion leader follows the other on Weibo. Blue, red, and white nodes represent political liberals, political conservatives, and others, respectively. Squares, triangles, boxes, diamonds, and circles denote media professionals, lawyers and legal scholars, scholars in non-law disciplines, entrepreneurs, and others, respectively. Gray edges show “following” between people with different political orientations; black edges show “following” between people with the same political orientation.
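For readers who want a concrete sense of how a “following” network like the one in the figure can be represented and summarized, here is a minimal sketch in Python using the networkx library. It is not the author’s code or data; the node names, attribute values, and edges are hypothetical placeholders that simply mirror the figure’s conventions (directed “following” edges, political orientation, profession).

```python
# Hypothetical sketch of the directed "following" network described in the figure.
# Node names, orientations, and professions are placeholders, not real data.
import networkx as nx

G = nx.DiGraph()

# Each node is an opinion leader with a political orientation and a profession.
G.add_node("leader_A", orientation="liberal", profession="media")
G.add_node("leader_B", orientation="conservative", profession="lawyer")
G.add_node("leader_C", orientation="other", profession="entrepreneur")

# A directed edge (u, v) means that u follows v on Weibo.
G.add_edge("leader_A", "leader_C")
G.add_edge("leader_C", "leader_A")
G.add_edge("leader_B", "leader_A")

# Edges within one orientation vs. across orientations
# (analogous to the black vs. gray edges in the figure).
within = sum(
    1 for u, v in G.edges()
    if G.nodes[u]["orientation"] == G.nodes[v]["orientation"]
)
across = G.number_of_edges() - within

# In-degree (followers inside the network) as a rough proxy for influence.
in_degree = dict(G.in_degree())

print(f"edges within orientation: {within}, across orientations: {across}")
print("in-degree within the network:", in_degree)
```

Counting edges within versus across orientations corresponds to the figure’s black versus gray edges, and in-degree inside the top-100 network gives one rough proxy for an opinion leader’s influence among the other ninety-nine.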

In short, the graph reveals a situation that contrasts sharply with the Chinese public sphere the government would like to see. The dissemination of liberal discourse and ideology, as well as growing public criticism of social and political problems in China, has only heightened the Chinese state’s concerns regarding ideology.

So, is ideology even “working” in China—at least in the way Xi would like? If constitutionalism and universal values are Western views that need to be discouraged and even attacked as “false,” this map of online opinion leaders in China suggests the government has its work cut out for it. How this happened, how it has changed China’s public sphere, and whether and how the government might attempt to regain ideological control moving forward are all questions I explore further in my book, The Contentious Public Sphere: Law, Media, and Authoritarian Rule in China.

Ya-Wen Lei is an assistant professor in the Department of Sociology and an affiliate of the Fairbank Center for Chinese Studies at Harvard University. She is the author of The Contentious Public Sphere: Law, Media, and Authoritarian Rule in China.

Ian Hurd: Good medicine for bad politics? Rethinking the international rule of law

When an international crisis erupts it is common to hear experts say that the situation will be improved if all parties stick to international law. From the Syrian war to Burma’s massacres to Guantanamo torture, faithful compliance with the law of nations is often prescribed as the best way forward. I wrote this book because I was curious about the idea that international law is good medicine for bad policies, a kind of non-political, universal good. International law often appears like a magical machine that takes in hot disagreements about how things should unfold and produces cool solutions that serve the interests of everyone. How to Do Things with International Law examines this idea with a degree of skepticism, holds it up against some empirical cases, and suggests more realistic ways of thinking about the dynamics between international politics and international law.

The standard model of international law is built on two components, one more institutional and the other more normative. On the one hand, international law is seen as providing a framework for the coexistence of governments. Laws on diplomatic immunity, non-interference across borders, and the peaceful settlement of disputes help organize inter-governmental relations and give a kind of civility to world politics. On this view, following the rules makes it possible for diplomacy and negotiation to happen. The second, normative strand adds substantive values such as a commitment to human rights, to the protection of refugees, and against nuclear proliferation. Here, following the rules is said to be important because it enhances human welfare and the other goals encoded by the law. The two strands agree that compliance with international rules is beneficial and that violations of the rules lead to international disorder at best—and violence and chaos at worst.

This represents what I see as a conventional view of the international rule of law. It is a commitment to the idea that governments should follow their legal obligations and that when they do the world is a better place. It is an ideology, in the sense noted by Shirley Scott.

My book explores the premise and the power of this ideology and its influence in global politics. I look at the presumptions that it rests on and the practices it makes possible. I see the power of international law on display in the ways that governments and others make use of legal resources and categories to understand, justify, and act in the world. This is a social power, built on the idea of the rule of law and employed by governments in the service of a wide array of goals.

The book does not aim to answer questions about why states comply with or flout the law. Instead, it asks what they do with the law – and why, and with what effects. As a methodology, this points toward looking for where international law appears in the strategies of governments. On substance, it suggests a close connection between international law and political power. International law has influence in certain situations, when powerful actors find it useful. For instance, the US gave legal arguments for why Russia’s annexation of Crimea was unlawful and therefore should not be accepted by other countries. In response, Russia gave legal arguments to sustain its behavior. Legal experts may well conclude that one side had the stronger legal argument; disagreements about interpretation and application are central to legal practices. But my curiosity comes from seeing both sides use legal arguments as political resources in defense of their preferred outcome.

The use of law to legitimize state policy is a central feature of contemporary international politics. And yet to some, the instrumental use of law is said to reveal the inappropriate politicization of law, contradicting their idea of the rule of law itself. I see it the other way around: the international rule of law is the instrumental use of law. The legalization of international politics gives legal rationalizations their political weight. Their political weight makes them important sites of contestation. In a legalized world, it makes sense for actors to contest their actions in the language of law. To borrow Helen Kinsella’s example, the line between civilian and combatant in a war zone distinguishes those who should be killed from those who should not; the line is defined by the Geneva Conventions and other legal instruments and it is brought to life (and death) as governments interpret it in relation to those whom they wish to kill. Legal categories have political valence and this makes them important resources of power and thus worth fighting over. How else to make sense of the energy that governments put into shaping rules that reflect their interests?

Recognizing the close connection between international law and power politics opens a way to considering the political productivity of international law. Law is not only regulative and constraining; it is also empowering and permissive. By defining some acts as unlawful and others as lawful, it makes the former harder for governments to do (or more expensive) and the latter easier. The availability of a legal justification smooths the way for action just as much as its unavailability impedes it. If we look at one side of this balance, we see for instance that the UN Charter outlaws the use of force by governments and limits their autonomy with respect to going to war. On the other side, the Charter also authorizes them to go to war as needed for ‘self-defense’ against an armed attack. In ‘self-defense,’ the Charter creates a new legal resource with the capacity to differentiate between a lawful and an unlawful war. This is a powerful tool for governments, a means for legalizing their recourse to force, and they have used it with enthusiasm since 1945. The Charter produced something that previously didn’t exist and as a consequence changed how governments go to war, how they justify their wars, and how they think about their security in relation to external threats.

With the political productivity of international law in mind, the book shows that international law is inseparable from politics and thus from power. For powerful governments, international law puts an instrument in their tool-kit as they seek to influence what happens in the world, and for the less powerful it is a tool that they might also seek to take up when they can but may equally be a means of control whose influence they seek to escape.

There isn’t much evidence to back up the presumption that international law steers global affairs naturally toward better outcomes. How to Do Things with International Law is neither a celebration of international law nor an indictment. It offers instead a look into its practical politics, a messy world of power politics that is as full of interpretation, ambiguity, violence and contestation as any other corner of social life.

Ian Hurd is associate professor of political science at Northwestern University. He is the author of After Anarchy and How to Do Things with International Law.

Gary Saul Morson & Morton Schapiro: The Humanomics of Tax Reform

The Trump administration is now placing tax reform near the top of its legislative agenda. Perhaps they will garner the votes for tax reduction, but reform? Good luck.

It has been three decades since there has been meaningful tax reform in this country. In 1986, tax shelters were eliminated, the number of tax brackets went from 15 to 4, the highest marginal tax rate was reduced from 50% to 38.5%, and the standard deduction was increased, simplifying tax preparation and resulting in zero tax liability for millions of low-income families. At the same time, a large-scale expansion of the alternative minimum tax affected substantial numbers of the more affluent.

President Reagan insisted that the overall effect be neutral with regard to tax revenues. That demand made it possible to set aside the issue of whether government should be larger or smaller and instead focus on inefficiencies or inequities in how taxes were assessed. Two powerful Democrats, Dick Gephardt in the House and Bill Bradley in the Senate, were co-sponsors.

Economists might evaluate the merits of this monumental piece of legislation in terms of the incentives and disincentives it created, its ultimate impact on labor force participation, capital investment and the like, but there is another metric to be evaluated – was it perceived to be fair? Accounts from that day imply that it was.

The notion of fairness is not generally in the wheelhouse of economics. But the humanities have much to say on that matter.

To begin with, literature teaches that fairness is one of those concepts that seem simple so long as one does not transcend one’s own habitual way of looking at things. As soon as one learns to see issues from other points of view, different conceptions of fairness become visible and simple questions become more complex. Great novels work by immersing the reader in one character’s perspective after another, so we learn to experience how different people – people as reasonable and decent as we ourselves are – might honestly come to see questions of fairness differently.

So, the first thing that literature would suggest is that, apart from the specific provisions of the 1986 tax reform, the fact that it was genuinely bipartisan was part of what made it fair. Bipartisanship meant the reform was not one side forcing its will on the other. Had the same reform been passed by one party, it would not have seemed so fair. Part of fairness is the perception of fairness, which suggests that the process, not just the result, was fair.

Fairness, of course, also pertains to the content of the reforms. What are the obligations of the rich to support needy families? Are there responsibilities of the poor to participate however they can in providing for their own transformation?

In Tolstoy’s novel Anna Karenina, two main characters, Levin and Stiva, go hunting with the young fop, Vasenka, and as they encounter hard-working peasants, they start discussing the justice of economic inequality. Only foolish Vasenka can discuss the question disinterestedly, because it is, believe it or not, entirely new to him: “‘Yes, why is it we spend our time riding, drinking, shooting, doing nothing, while they are forever at work?’ said Vasenka, obviously for the first time in his life reflecting on the question, and consequently considering it with perfect sincerity.” Can it really be that an educated person has reached adulthood with this question never having occurred to him at all?

And yet, isn’t that the position economists find themselves in when they ignore fairness? When they treat tax reform, or any other issue, entirely in economic terms? Levin recognizes that there is something unfair about his wealth, but also recognizes that there is no obvious solution: it would do the peasants no good if he were to just give away his property. Should he make things more equal by making everyone worse off? On the contrary, his ability to make farmland more productive benefits the peasants, too. So, what, he asks, should be done?

Levin also knows that inequality is not only economic. If one experiences oneself as a lesser person because of social status, as many of the peasants do, that is itself a form of inequality entirely apart from wealth. In our society, we refer to participants in government as “taxpayers.” Does that then mean that to exempt large numbers of people entirely from any taxation demeans them – not least of all, in their own eyes? There may be no effective economic difference between a very small tax and none at all, but it may make a tremendous psychological difference. Isn’t the failure to take the psychological effect of tax rates seriously as disturbingly innocent as Vasenka’s question about inequality?

Combining a humanistic and an economic approach might not give us specific answers, but it does make questions of fairness, including symbolic effects, part of the question. And in a democracy, where popular acceptance of the rules as legitimate is crucial, that would be a step forward.

Gary Saul Morson is the Lawrence B. Dumas Professor of the Arts and Humanities and professor of Slavic languages and literatures at Northwestern University. His many books include Narrative and Freedom, “Anna Karenina” in Our Time, and The Words of Others: From Quotations to Culture. Morton Schapiro is the president of Northwestern University and a professor of economics. His many books include The Student Aid Game. Morson and Schapiro are also the editors of The Fabulous Future?: America and the World in 2040 and the authors of Cents and Sensibility: What Economics Can Learn from the Humanities.

Matthew Simonton: American Oligarchy

The 2016 election brought the burning issue of populism home to the United States. Donald Trump is, in many ways, part of a larger movement of populist politicians worldwide who have claimed to speak in the name of the “ordinary people.” (Marine Le Pen in France and Viktor Orbán in Hungary are other examples.) As with other populists, Trump’s presidency brings with it unsettling questions about illiberalism and ethno-nationalism. But in all the talk about “making America great again,” we are in danger of losing sight of a deeper problem, one which Trump will not change and in fact will likely exacerbate: the steady creep of oligarchy. The United States Constitution is enacted in the name of “We the People.” Abraham Lincoln famously described America’s political system in the Gettysburg Address as “government of the people, by the people, for the people.” Yet how much authority do ordinary citizens truly possess in today’s America? As the ancient Athenians would have put it, does the demos (people) in fact have kratos (power)?

Several indicators suggest that that power, if it ever was actually held by the people, is slipping away. Princeton University Press authors Larry Bartels and Martin Gilens have brought before our eyes hard truths about our “unequal democracy,” the fact that, too often, “affluence” brings “influence.” Gilens and the political scientist Benjamin I. Page demonstrated in an important article from 2014 that “economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens … have little or no independent influence.” Readers familiar with the findings of the economist Thomas Piketty have heard that the U.S. and other advanced capitalist economies are entering a new “Gilded Age” of wealth concentration. Can anything turn back inequality—what President Barack Obama called “the defining challenge of our time”—and the widening gap in political power and influence that comes with it?

The ancient Greeks had an answer to the problem of inequality, which they called demokratia. It is well known that Greek-style democracy was direct rather than representative, with citizens determining policy by majority vote in open-air assemblies. Yet democracy meant more than just meetings: political offices were distributed randomly, by lottery, on the assumption that every citizen was qualified (and in fact obligated) to participate in politics. Office-holders were also remunerated by the state, to ensure that poorer citizens who had to work for a living could still share in the constitution. Princeton author Josiah Ober has examined the ideology and practice of ancient democracy in multiple publications. In his latest work—similar in its conclusions to those of the ancient historian Alain Bresson—he has argued that democracies created fair rules and equal access to opportunity that secured citizen dignity and discouraged runaway economic inequality. Thus, as much as ancient democracies fall short of our contemporary standards (and they had grave faults in the areas of slave-holding and gender relations), they might constitute a model, however imperfect, for thinking about reducing both economic and political inequality.

On the other hand, many Greek city-states had a form of constitution based on diametrically opposed premises, and which encouraged opposite tendencies. This was oligarchia, the “rule of the few.” Ancient Greek oligarchs—members of the wealthy elite—most assuredly did not believe in citizen equality. Oligarchs thought that their greater wealth, which (by their lights, anyway) afforded them greater intelligence and virtue, made them uniquely qualified to rule. The non-elite, which then as today represented the poorer majority, had to be kept out of politics. (For a recent argument in favor of such an “oligarchy of the wise,” see Princeton author Jason Brennan’s Against Democracy.)

In my book Classical Greek Oligarchy: A Political History, I chart the rise of oligarchic thinking, showing that it emerged in conscious reaction to democracy, or the “power of the people.” Faced with the challenges democracy brought to their affluence and influence, oligarchs devised a new set of political institutions, which would ensure that the people could make no inroads into oligarchic privilege. This was not simply a matter of attaching property requirements to office-holding, although oligarchs certainly considered that essential. Oligarchies also stacked the judicial system in elites’ favor; sought to control the people’s speech, movement, and association; hoarded and manipulated information crucial to the city’s well-being; feathered their own nests with economic perquisites; and on occasion even resorted to extra-legal assassination to eliminate subversives. Oligarchies were, in short, authoritarian regimes. Engaging with contemporary scholarship in political science on authoritarianism, I show that ancient Greek oligarchies confronted the same basic problems that haunt modern authoritarians, and experimented with similar institutions for preserving their rule. In ways that have not been fully apparent until now, oligarchs and demos resemble today’s dictators and democrats.

As history shows us, inequality in one area (wealth) tends to convince elites that they have unequal abilities in another (politics). Yet in situations like that of Classical Greek oligarchy, when the wealthy obtain the unaccountable political power they desire, the result is not enlightened government but increased oppression. It would do citizens of modern democracies good to bear this in mind. In the United States, many are frustrated with politics, and with democracy in particular. Liberals worry about the supposed ignorance of the electorate. Conservatives want to restrict what majorities can legislate, especially in the area of economics. And the last election saw countless voters openly embrace a vision of America as headed by a billionaire strongman. In longing for a restriction on democracy, however—even if it is meant “only” for those with whom we disagree—we increase the likelihood of a more general oligarchic takeover. We play into oligarchs’ hands. If the Greek example is any indication, such short-term thinking would bode ill for the freedom of all citizens—and it would only make inequality worse.

Matthew Simonton is assistant professor of history in the School of Humanities, Arts, and Cultural Studies at Arizona State University. He received his PhD in classics from Stanford University. He is the author of Classical Greek Oligarchy: A Political History.

Jim Campbell: A new analysis in Polarized dispels election controversy

Overlooked “Unfavorability” Trends Raise Doubts that Comey Cost Clinton the Election

In her newly released What Happened and in interviews accompanying the book’s release, Hillary Clinton claims that former FBI Director James Comey’s late October re-opening of the investigation into the mishandling of national security emails was “the determining factor” in her 2016 presidential election loss. In the new afterword of the paperback edition of Polarized: Making Sense of a Divided America, I report evidence indicating that Comey’s letter did not cause Clinton’s loss. The suspected Comey effect is tested by examining changes in Gallup’s unfavorability ratings of Clinton and Trump. The data shows that the decline in Clinton’s poll lead over Trump in the last weeks of the campaign was not the result of voters becoming more negative about Clinton (as would be the case if they were moved by the Comey letter). It was the result of voters becoming less negative about Trump (a development with no plausible link to the Comey letter). Comey didn’t drive voters away from Clinton. Rather, “Never Trump” Republicans were grudgingly becoming “Reluctant Trump” voters.

This finding is consistent with the earlier finding of the American Association for Public Opinion Research’s (AAPOR) Ad Hoc Committee on 2016 Election Polling. The Committee found evidence that “Clinton’s support started to drop on October 24th or 25th,” perhaps even earlier. This was at least three or four days before Comey’s letter was released.

Read on for the relevant excerpt and details from the afterword of my forthcoming paperback edition of Polarized: Making Sense of a Divided America:

In the closing weeks of the campaign, with what they saw as a Clinton victory looming darkly over the horizon, many disgruntled conservative hold-outs came back to the Republican column. As they rationalized or reconsidered their choice, unfavorable opinions about Trump among Republicans declined (about 7 points). Even so, about a fifth of Trump’s voters admitted that they still held an unfavorable view of him. More than a quarter of Trump’s voters said their candidate lacked the temperament to be president. For many, “Never Trump” had become “Reluctantly Trump.” They held their noses and cast their votes. Between Trump and Clinton, about 85% of conservative votes went to Trump. Along with sour views of national conditions, polarization had offset or overridden the grave reservations many conservatives had about a Trump vote.

Widespread and intense polarized views, across the public and between the parties, shaped the 2016 election. On one side of the spectrum, polarization compelled liberals to overlook Clinton’s scandals and deficiencies as a candidate as well as a sputtering economy and unstable international conditions. On the other side, dissatisfaction with national conditions and polarization compelled conservatives to vote for a candidate many thought lacked the rudimentary leadership qualities needed in a president. Non-ideological centrists again were caught in the middle–by ideology, by the candidates’ considerable shortcomings, and by generally dreary views of national conditions. Their vote split favored Clinton over Trump (52% to 40%, with 8% going to minor party candidates), close to its two-party division in 2012. The three components of the vote (polarization, the candidates, and national conditions) left voters closely enough divided to make an electoral vote majority for Trump possible.

Although the above explanation of the election is supported by the evidence and fits established theory, two other controversial explanations have gained some currency. They trace Trump’s surprising victory to Russia’s meddling in the election (by hacking Democratic emails and releasing them through Wikileaks) and FBI Director Comey’s late October letter, re-opening the investigation into Clinton’s mishandling of confidential national security emails. Some, including Clinton herself, contend Wikileaks and Comey’s letter caused the collapse of Clinton’s lead over Trump in the closing weeks of the campaign.

The evidence says otherwise. Contrary to the speculation, neither Wikileaks nor Comey’s letter had anything to do with the shriveling of Clinton’s lead. If either had been responsible, they would also have caused more voters to view Clinton negatively–but opinions about her did not grow more negative. Unfavorable opinions of Clinton were remarkably steady. From August to late September, Hillary Clinton’s unfavorables in Gallup polls averaged 55%. Her unfavorables in the Gallup poll completed on the day Comey released his letter (October 28) stood at 55%. In the exit polls, after Wikileaks and after Comey’s letter, her unfavorables were unchanged at 55%. Opinions about Hillary Clinton, a figure in the political spotlight for a quarter century, had long been highly and solidly polarized. Nothing Wikileaks revealed or Comey said was going to change minds about her at that late stage of the game.

The race tightened in the last weeks of the campaign because Trump’s unfavorables declined (by about 5 points). They declined as some conservatives and moderates with qualms about Trump came to the unpleasant realization that voting for Trump was the only possible way they could help prevent Clinton’s election. Some dealt with the dissonance of voting for a candidate they disliked by rationalizing, reassessing, or otherwise softening their views of Trump, trying to convince themselves that maybe “the lesser of two evils” was not really so awful after all. In voting, as in everything else, people tend to postpone unpleasant decisions as long as they can and make them as painless to themselves as they can.

The decay of Clinton’s October poll lead was not about Russian and Wikileaks meddling in the election and not about Comey’s letter. It was about polarization, in conjunction with dissatisfaction about national conditions, belatedly overriding the serious concerns many voters had about Donald Trump as a potential president. Trump’s candidacy put polarization to the test. His election testified to how powerful polarization has become. The highly polarized views of Americans and the highly polarized positions of the parties were critical to how voters perceived and responded to the candidates’ shortcomings and the nation’s problems.

James E. Campbell is UB Distinguished Professor of Political Science at the University at Buffalo, State University of New York. His books include The American Campaign: U.S. Presidential Campaigns and the National Vote, The Presidential Pulse of Congressional Elections, and Polarized: Making Sense of a Divided America.

Jason Brennan: How Kneeling Athletes Reveal the True Nature of Politics

Much of Puerto Rico may be without power for six months. North Korea is increasingly belligerent. The world’s reaction to coming climate change ranges between empty symbolic gestures and nothing. A party just shy of fascist won 13% of the seats in the German federal election. The U.S. has been at war—and troops have been dying for frivolous reasons—for sixteen years. But what are Americans most outraged about? Whether football players kneeling during the National Anthem, in protest of police brutality toward blacks, is somehow wrongly disrespectful of a flag, “the troops!”, or America.

Both sides accuse the other side of hypocrisy and bad faith. And both sides are mostly right. Hypocrisy and bad faith are the self-driving cars of politics. They get us where we want to go, without our having to drive.

What Christopher Achen and Larry Bartels (in the 2016 Princeton University Press book Democracy for Realists) call the folk theory of democracy goes roughly as follows: People know their interests. They then form preferences about what the government should do to promote these goals. They vote for parties and politicians who will best realize these goals. Then the government implements the goals of the majority. But the problem, Achen and Bartels argue, is that each part of that “folk theory” is false.

Instead, as economist Robin Hanson likes to say, politics is not about policy. The hidden, unconscious reason we form political beliefs is to help us form coalitions with other people. Most of us choose our particular political affiliations because people like us vote that way. We then join together with other supposedly like-minded people, creating an us versus a them. We are good and noble and can be trusted. They are stupid and evil and at fault for everything. We loudly denounce the other side in order to prove, in public, that we are especially good and pure, and so our fellow coalition members should reward us with praise and high status.

Our political tribalism spills over and corrupts our behavior outside of politics. Consider research by political scientists Shanto Iyengar and Sean Westwood. Iyengar and Westwood wanted to determine how much, if at all, political bias affects how people evaluate job candidates. They conducted an experiment in which they asked over 1,000 subjects to evaluate what the subjects were told were the résumés of graduating high school students. Iyengar and Westwood carefully crafted two basic résumés, one of which was clearly more impressive than the other. They randomly labeled the candidates as Republican or Democrat, and randomly varied whether the Republican or the Democrat had the stronger résumé. At the same time, they also determined whether the subjects—the people evaluating the candidates—were strong or weak Republicans, independents, or strong or weak Democrats.

The results are depressing: 80.4% of Democratic subjects picked the Democratic job candidate, while 69.2% of Republican subjects picked the Republican job candidate. Even when the Republican job candidate was clearly stronger, Democrats still chose the Democratic candidate 70% of the time. In contrast, they found that “candidate qualification had no significant effect on winner selection.” In other words, the evaluators didn’t care about how qualified the candidates were; they just cared about what the job candidates’ politics were.

Legal theorist Cass Sunstein notes that in 1960, only about 4-5% of Republicans and Democrats said they would be “displeased” if their children married members of the opposite party. Now about 49% of Republicans and 33% of Democrats admit they would be displeased. The true numbers are probably higher—some people would be upset but won’t admit it on a survey. Explicit “partyism”—prejudice against people from a different political party—is now more common than explicit racism.

At least some people have honest, good-faith disputes about how to realize shared moral values, or about just what morality and justice require. We should be able to maintain such disputes without seeing each other as enemies. Sure, some moral disagreements are beyond the pale. If someone advocates the genocidal slaughter of Jews, fine, they’re not a good person. But disagreements on whether the minimum wage does more harm than good are not grounds for mutual diffidence. Yet, as ample empirical research shows (you can read my Against Democracy for a review), we are biased to see political disputants as stupid and evil, rather than as merely having a reasonable disagreement. Indeed, as Diana Mutz (in her Hearing the Other Side) shows, people who can successfully articulate the other side’s point of view hardly participate in politics, but the nasty true-believers vote early and often.

It’s not a surprise people are so irrational and nasty about politics. The logic behind it is simple. Your individual vote counts for almost nothing. Even on the more optimistic models, you are as likely to change an election as you are to win Powerball. Accordingly, it doesn’t matter if your political beliefs are true or false, reasonable or utterly absurd. When you cross the street, you form rational beliefs about traffic patterns—or you die. When you vote, though, you can afford to indulge your deepest prejudices with no cost. How we vote matters, but how any one of us votes does not.

Imagine a professor told her 1000-student class that in fifteen weeks, she would hold a final exam, worth 100% of their grade. Suppose she told them that in the name of equality, she would average all final exam grades together and give every student the same grade. Students wouldn’t study and the average grade would be an F. In effect, this scenario is how democracy works, except that we have a 210-million person class in the United States. The downside is not merely that we remain ignorant. Rather, the downside is that it liberates us to use our political beliefs for other purposes.

Politics makes us civic enemies. When we make something a political matter, we turn it into a zero-sum game where someone has to win and someone has to lose. Political decisions involve a constrained set of options. In politics, there are usually only a handful of viable choices. Political decisions are monopolistic: everyone has to accept the same decision. Political decisions are imposed involuntarily: you don’t really consent to the outcome of a democratic decision.

Now back to football players kneeling. My friends on the Right refuse to take the players at their word. The players say they’re protesting police brutality and other ways the U.S. mistreats its black populace. My friends on the Right scoff and say, no, really they just hate America and hate the troops. This reaction is wrong, but not surprising. Imputing evil motives to the other side is essential to politics. The Left does it all the time too. If, for example, some economists on the Right say they favor school vouchers as a means of improving school quality, the Left will just accuse them of hating the poor.

It’s worth noting that since 2009, the Pentagon has paid the NFL over $6 million to stage patriotic displays before games to help drive recruiting.[i] The pre-game flag shows are literally propaganda in the narrowest sense of the word. Personally, I think participating in government-funded propaganda exercises is profoundly anti-American, while taking a knee and refusing to dance on command shows real respect for what the country supposedly stands for.

Jason Brennan is the Flanagan Family Chair of Strategy, Economics, Ethics, and Public Policy at the McDonough School of Business at Georgetown University. He is the author of The Ethics of Voting (Princeton), and Against Democracy. He writes regularly for Bleeding Heart Libertarians, a blog.

Landon R. Y. Storrs: What McCarthyism Can Teach Us about Trumpism

Since the election of President Donald Trump, public interest in “McCarthyism” has surged, and the focus has shifted from identifying individual casualties to understanding the structural factors that enable the rise of demagogues.

After The Second Red Scare was published in 2012, most responses I received from general readers were about the cases of individuals who had been investigated, or whom the inquirer guessed might have been investigated, under the federal employee loyalty program. That program, created by President Truman in 1947 in response to congressional conservatives’ charges that his administration harbored communist sympathizers, was the engine of the anticommunist crusade that became known as McCarthyism, and it was the central subject of my book. I was the first scholar to gain access to newly declassified records related to the loyalty program and thus the first to write a comprehensive history. The book argues that the program not only destroyed careers but also profoundly affected public policy in many fields.

Some queries came from relatives of civil servants whose lives had been damaged by charges of disloyalty. A typical example was the person who wanted to understand why, in the early 1950s, his parents abruptly moved the family away from Washington D.C. and until their deaths refused to explain why. Another interesting inquiry came from a New York Times reporter covering Bill de Blasio’s campaign for New York City mayor. My book referenced the loyalty case of Warren Wilhelm Sr., a World War II veteran and economist who left government service in 1953, became an alcoholic, was divorced by his wife, and eventually committed suicide. He never told his children about the excruciating loyalty investigation. His estranged son, born Warren Wilhelm Jr., legally adopted his childhood nickname, Bill, and his mother’s surname, de Blasio. I didn’t connect the case I’d found years earlier to the mayoral candidate until the journalist contacted me, at which point I shared my research. At that moment de Blasio’s opponents were attacking him for his own youthful leftism, so it was a powerful story, as I tried to convey in The Nation.

With Trump’s ascendance, media references to McCarthyism have proliferated, as commentators struggle to make sense of Trump’s tactics and supporters. Opinion writers note that Trump shares McCarthy’s predilections for bluffing and for fear-mongering—with terrorists, Muslims, and immigrants taking the place of communist spies. They also note that both men were deeply influenced by the disreputable lawyer Roy Cohn. Meanwhile, the president has tweeted that he himself is a victim of McCarthyism, and that the current investigations of him are “witch hunts”—leaving observers flummoxed, yet again, as to whether he is astonishingly ignorant or shamelessly misleading.

But the parallels between McCarthy’s era and our own run deeper than personalities. Although The Second Red Scare is about McCarthyism, it devotes little attention to McCarthy himself. The book is about how opponents of the New Deal exploited Americans’ fear of Soviet espionage in order to roll back public policies whose regulatory and redistributive effects conservatives abhorred. It shows that the federal employee loyalty program took shape long before the junior senator from Wisconsin seized the limelight in 1950 by charging that the State Department was riddled with communists.

By the late 1930s congressional conservatives of both parties were claiming that communists held influential jobs in key New Deal agencies—particularly those that most strongly challenged corporate prerogatives regarding labor and prices. The chair of the new Special House Committee to Investigate Un-American Activities, Martin Dies (a Texas Democrat who detested labor unions, immigrants, and black civil rights as much as communism), demanded that the U.S. Civil Service Commission (CSC) investigate employees at several agencies. When the CSC found little evidence to corroborate Dies’s allegations, he accused the CSC itself of harboring subversives. Similarly, when in 1950 the Tydings Committee found no evidence to support McCarthy’s claims about the State Department, McCarthy said the committee conducted a “whitewash.” President Trump too insists that anyone who disproves his claims is part of a conspiracy. One important difference is that Dies and McCarthy alleged a conspiracy against the United States, whereas Trump chiefly complains of conspiracies against himself—whether perpetrated by a “deep state” soft on terrorism and immigration or by a biased “liberal media.” The Roosevelt administration dismissed Dies as a crackpot, and during the Second World War, attacks on the loyalty of federal workers got little traction.

That changed in the face of postwar Soviet conduct, the nuclear threat, and revelations of Soviet espionage. In a futile effort to counter right-wing charges that he was “soft” on communism, President Truman expanded procedures for screening government employees, creating a loyalty program that greatly enhanced the power of the FBI and the Attorney General’s List of Subversive Organizations. State, local, and private employers followed suit. As a result, the threat of long-term unemployment forced much of the American workforce not only to avoid political dissent, but to shun any association that an anonymous informant might find suspect. Careers and families were destroyed. With regard to the U.S. civil service, the damage to morale and to effective policymaking lasted much longer than the loyalty program itself.

Public employees long have been vulnerable to political attacks. Proponents of limited government by definition dislike them, casting them as an affront to the (loaded) American ideals of rugged individualism and free markets. But hostility to government employees has been more broad-based at moments when severe national security threats come on top of widespread economic and social insecurity. The post-WWII decade represented such a moment. In the shadow of the Soviet and nuclear threats, women and African-Americans struggled to maintain the toeholds they had gained during the war, and some Americans resented new federal initiatives against employment discrimination. Resentment of the government’s expanding role was fanned by right-wing portrayals of government experts as condescending, morally degenerate “eggheads” who avoided the competitive marketplace by living off taxpayers.

Today, widespread insecurity in the face of terrorism, globalization, multiculturalism, and gender fluidity has made many Americans susceptible to the same sorts of reactionary populist rhetoric heard in McCarthy’s day. And again that rhetoric serves the objectives of those who would gut government, or redirect it to serve private rather than public interests.

The Trump administration calls for shrinking the federal workforce, but the real goal is a more friendly and pliable bureaucracy. Trump advisers complain that Washington agencies are filled with leftists. Trump transition teams requested names of employees who worked on gender equality at State and climate change initiatives at the EPA. Trump media allies such as Breitbart demanded the dismissal of Obama “holdovers.” Trump selected appointees based on their personal loyalty rather than qualifications and, when challenged, suggested that policy expertise hinders fresh thinking. In firing Acting Attorney General Sally Yates for declining to enforce his first “travel ban,” Trump said she was “weak” and had “betrayed” her department. Such statements, like Trump’s earlier claims that President Obama was a Kenyan-born Muslim, fit the textbook definition of McCarthyism: undermining political opponents by making unsubstantiated attacks on their loyalty to the United States. Even more alarming is Trump’s pattern of equating disloyalty to himself with disloyalty to the nation—the textbook definition of autocracy.

Might the demise of McCarthyism hold lessons about how Trumpism will end? The Second Red Scare wound down thanks to the courage of independent journalists, the decision of McCarthy’s fellow Republican senators, after four long years, to put country above party, and U.S. Supreme Court decisions in cases brought by brave defendants and lawyers. The power of each of those forces was contingent, of course, on the abilities of Americans to sort fact from fiction, to resist the politics of fear and resentment, and to vote.

Landon R. Y. Storrs is professor of history at the University of Iowa. She is the author of Civilizing Capitalism: The National Consumers’ League, Women’s Activism, and Labor Standards in the New Deal Era and The Second Red Scare and the Unmaking of the New Deal Left.

Lawrence Baum: Ideology in the Supreme Court

When President Trump nominated Neil Gorsuch for a seat on the Supreme Court, Gorsuch was universally regarded as a conservative. Because of that perception, the Senate vote on his confirmation fell almost completely along party lines. Indeed, Court-watchers concluded that his record after he joined the Court late in its 2016-2017 Term was strongly conservative. But what does that mean? One possible answer is that he agreed most often with Clarence Thomas and Samuel Alito, the justices who were considered the most conservative before Gorsuch joined the Court. But that answer does not address the fundamental question: why are the positions that those three justices took on an array of legal questions considered conservative?

The most common explanation is that liberals and conservatives each start with broad values that they then apply in a logical way to the various issues that arise in the Supreme Court and elsewhere in government. But logic can go only so far to explain the ideological labels of various positions. It is not clear, for instance, why liberals are the strongest proponents of most individual rights that the Constitution protects while conservatives are the most supportive of gun rights. Further, perceptions of issues sometimes change over time, so that what was once considered the liberal position on an issue is no longer viewed that way.

Freedom of expression is a good example of these complexities. Beginning early in the twentieth century, strong support for freedom of speech and freedom of the press was regarded as a liberal position. In the Supreme Court, the justices who were most likely to support those First Amendment rights were its liberals. But in the 1990s that pattern began to change. Since then, when the Court is divided, conservative justices have been as likely as their liberal colleagues to side with litigants who argue that their free expression rights have been violated.

To explain that change, we need to go back to the period after World War I when freedom of expression was established as a liberal cause. At that time, the government policies that impinged the most on free speech were aimed at political groups on the left and at labor unions. Because liberals were more sympathetic than conservatives to those segments of society, it was natural that freedom of expression became identified as a liberal cause in the political world. In turn, liberal Supreme Court justices gave considerably more support to litigants with free expression claims than did their conservative colleagues across the range of cases that the Court decided.

In the second half of the twentieth century, people on the political left rethought some of their assumptions about legal protections for free expression. For instance, they began to question the value of protecting “hate speech” directed at vulnerable groups in society. And they were skeptical about First Amendment challenges to regulations of funding for political campaigns. Meanwhile conservatives started to see freedom of expression in a more positive light, as a protection against undue government interference with political and economic activity.

This change in thinking affected the Supreme Court in the 1990s and after. More free expression cases came to the Court from businesses and people with a conservative orientation, and a conservative-leaning Court was receptive to those cases. The Court now decides few cases involving speech by labor unions and people on the political left, and cases from businesses and political conservatives have become common. Liberal justices are more favorable than their conservative colleagues to free expression claims by people on the left and by individuals with no clear political orientation, but conservative justices provide more support to claims by businesses and conservatives. As a result, what had been a strong tendency for liberal justices to give the most support to freedom of expression across the cases that the Court decided has disappeared.

The sharp change in the Supreme Court’s ideological orientation in free speech cases is an exception to the general rule, but it underlines some important things about the meaning of ideology. The labeling of issue positions as conservative or liberal comes through the development of shared understandings among political elites, and those understandings do not necessarily follow from broad values. In considerable part, they reflect attitudes toward the people and groups that champion and benefit from particular positions. The impact of those attitudes is reflected in the ways that people respond to specific situations involving an issue: liberal and conservative justices, like their counterparts elsewhere in government and politics, are most favorable to free speech when that speech comes from segments of society with which they sympathize. When we think of Supreme Court justices and the positions they take as conservative and liberal, we need to keep in mind that, to a considerable degree, the labeling of positions in ideological terms is arbitrary. Justice Gorsuch’s early record on the Court surely is conservative—but in the way that conservative positions have come to be defined in the world of government and politics, definitions that are neither permanent nor inevitable.

Lawrence Baum is professor emeritus of political science at Ohio State University. His books include Judges and Their Audiences, The Puzzle of Judicial Behavior, Specializing the Courts, and Ideology in the Supreme Court.