Tim Rogan: What’s Wrong with the Critique of Capitalism Now

What’s wrong with capitalism? Answers to that question today focus on material inequality. Led by economists and conducted in utilitarian terms, the critique of capitalism in the twenty-first century is primarily concerned with disparities in income and wealth. It was not always so. In The Moral Economists, Tim Rogan reconstructs another critical tradition, developed across the twentieth century in Britain, in which material deprivation was less important than moral or spiritual desolation. Examining the moral cornerstones of a twentieth-century critique of capitalism, The Moral Economists explains why this critique fell into disuse, and how it might be reformulated for the twenty-first century. Read on to learn more about these moral economists and their critiques of capitalism.

You begin by asking, ‘What’s wrong with capitalism?’ Shouldn’t we start by acknowledging capitalism’s great benefits?

Yes, absolutely. This was a plan for the reform of capitalism, not a prayer for its collapse or a pitch for its overthrow. These moral economists sought in some sense to save capitalism from certain of its enthusiasts—that has always been the project of the socialist tradition out of which these writers emerged. But our question about capitalism—as about every aspect of our social system, every means by which we reconcile individual preferences to arrive at collective decisions—should always be ‘What’s wrong with this?’, ‘How can we improve this?’, ‘What could we do better?’ And precisely how we ask those questions, the terms in which we conduct those debates, matters. My argument in this book is that our way of asking the question ‘What’s wrong with capitalism?’ has become too narrow, too focused on material inequality, insufficiently interested in some of the deeper problems of liberty and solidarity which the statistics recording disparities of wealth and income conceal.

Was this critique of capitalism also a critique of economics, and if so what do these critics add to the usual complaints against economics—about unrealistic assumptions, otherworldly models, indifference to historical developments such as financial crises, etc?

Yes, the moral economists were critical of economics. But although their criticisms might sound like variations on the familiar charge that economists make unreal assumptions about the capacities and proclivities of individual human beings, the moral economists’ challenge to mainstream economics was different. The most influential innovators in economics since the Second World War have been behavioral scientists pointing out that our capacity to make utilitarian calculations is not as high as economists once took it to be. Part of what explains the success of this series of innovations is that the ideal of reducing every decision to a calculation of utility retains its allure, even as we come to realize how fallible our real-time calculations are. Behavioral economists have found our capacity to think like rational utilitarian agents wanting. But when did the capacity to think like a rational utilitarian agent become the measure of our humanity? This is the question moral economists have been asking since the 1920s. Initiated by historians determined to open up means of thinking outside economic orthodoxy, since joined by mathematically-trained economists concerned to get a more realistic handle on the relationship between individual values and social choice, the moral economists’ enterprise promises a far more profound reconstitution of political economy than behavioral economics has ever contemplated.

Doesn’t the profile of these writers—dead, male, English, or Anglophile, writing about a variety of capitalism long since superseded—limit their contemporary relevance?

No. Their main concern was to discover and render articulate forms of social solidarity which the dominant economic discourse concealed. They found these on the outskirts of ‘Red Vienna’, on railroads under construction in post-war Yugoslavia, but most of all in the north of England. They believed that these inarticulate solidarities were what really held the country together—the secret ingredients of the English constitution. Though they belonged to a tradition of social thought in Britain that was skeptical towards Empire and supportive of the push for self-determination in India and elsewhere, they raised the prospect that the same dynamics had developed in countries to which British institutions had been exported—explaining the relative cohesion of Indian and Ghanaian democracies, for instance. More broadly E. P. Thompson in particular argued that factoring these incipient solidarities into constitutional thinking generated a more nuanced understanding of the rule of law than nineteenth-century liberalism entailed: in Thompson’s hands the rule of law became a more tensile creed, more capable of accommodating the personal particularities of the law’s subjects, more adept at mitigating the rigors of rational system to effect justice in specific cases. The profiles of the late-twentieth-century commentators who continue the critical tradition Tawney, Polanyi and Thompson developed—especially Amartya Sen—underscore that tradition’s wider relevance.

Aren’t these writers simply nostalgists wishing we could return to a simpler way of life?

No. Tawney especially is often seen as remembering a time of social cohesion before the Reformation and before the advent of international trade and wishing for its return. This perception misunderstands his purpose.

Religion and the Rise of Capitalism draws sharp contrasts between two distinct iterations of European society—the late medieval and the modern. But this was a means of dramatizing a disparity between different societies developing in contemporary England—the society he encountered working at Toynbee Hall in London’s East End, where social atomization left people demoralized beyond relief, on the one hand; the society he encountered when he moved to Manchester to teach in provincial towns in Lancashire and Staffordshire, where life under capitalism was different, where the displacement of older solidarities was offset by the generation of new forms of cohesion, where many people were poor but where the social fabric was still intact, on the other.

The demoralized East End was the product of laissez faire capitalism—of the attempt to organize society on the basis that each individual was self-sufficient, profit-minded, unaffected by other human sentiments. The political crisis into which Britain was pitched in the late Edwardian period underlined how untenable this settlement was: without a sense of what more than the appetite for wealth motivated people, there could be no ‘background of mutual understanding’ against which to resolve disputes. At the same time the answer was not simply stronger government, a bigger state. The latent solidarities Tawney discovered in the north of England carried new possibilities: the facility of market exchange and the security of an effective state could be supplemented by informal solidarities making everyday life more human than the impersonal mechanisms of market and government allowed.

Polanyi and Thompson brought their historical settings forward into the nineteenth century, making their writings feel more contemporary. But they were both engaged in much the same exercise as Tawney—using history to dramatize disparities between different possibilities developing within contemporary society. They too had come into contact with forms of solidarity indicating that there was more than calculations of utility and the logic of state power at work in fostering social order. Polanyi and then especially Thompson advanced their common project significantly when Thompson found a new terminology with which to describe these incipient solidarities. Tawney had talked of ‘tradition’ and ‘convention’ and ‘custom,’ and Polanyi had followed Tawney in this—refusing to associate himself with Ferdinand Tonnies’s concept of Gemeinschaft and Henry Maine’s system of ‘status’ when pressed to, but offering no cogent concept through which to reckon with these forms of solidarity himself. Thompson’s concept of the ‘moral economy’ made the kinds of solidarities upon which they had all focused more compelling.

Does subscribing to a moral critique of capitalism mean buying into one of the prescriptive belief systems out of which that critique materialized? Do you need to believe in God or Karl Marx in order to advance a moral critique of capitalism without embarrassment?

No. Part of the reason that this critique of capitalism went out of commission was because the belief systems which underpinned it—which, more specifically, provided the conceptions of what a person is which falsified reductive concepts of ‘economic man’—went into decline. Neither Tawney nor Thompson was able to adapt to the attenuation of Christian belief and Marxian conviction respectively from which their iterations of the critique had drawn strength. Polanyi’s case was different: he was able to move beyond both God and Marx, envisaging a basis upon which a moral critique of capitalism could be sustained without relying on either belief system. That basis was furnished by the writings of Adam Smith, which adumbrated an account of political economy which never doubted but that economic transactions are embedded in moral worlds.

This was a very different understanding of Adam Smith’s significance from the one with which most people who know that name have been inculcated. But it is an account of Adam Smith’s significance which grows increasingly recognizable to us now—thanks to the work of Donald Winch, Emma Rothschild and Istvan Hont, among others, facilitated by the end of Cold War hostilities and the renewal of interest in alternatives to state- or market-based principles of social order.

In other words there are ways of re-integrating economics into the wider moral matrices of human society without reverting to a Christian or Marxian belief system. There is nothing extreme or zealous about insisting that the moral significance of economic transactions be recognized. What was zealous and extreme was the determination to divorce economics from broader moral considerations. This moral critique of capitalism represented a recognition that the time for such extremity and zeal had passed. As the critique fell into disuse in the 1970s and 1980s, some of that zeal returned, and the last two decades now look to have been a period of especially pronounced ‘economism.’ The relevance of these writings now, then, is that they help us to put the last two decades and the last two centuries in perspective, revealing just how risky the experiment has been, urging us to settle back in now to a more sustainable pattern of economic thought.

You find that this moral critique of capitalism fell into disuse in the 1970s and 1980s. Bernie Sanders declared in April 2016 that instituting a ‘truly moral economy’ is ‘no longer beyond us.’ Was he right?

Yes and no. Sanders made this declaration at the Vatican, contemplating the great papal encyclicals Rerum Novarum and Centesimus Annus. The discrepancies between what Sanders said and what Pope Leo XIII and Pope John Paul II before him said about capitalism are instructive. The encyclicals have always focussed on the ignominy of approaching a person as a bundle of economic appetites, on the apostasy of abstracting everything else that makes us human out of our economic thinking. Sanders sought to accede to that tradition of social thought—a tradition long since expanded to encompass perspectives at variance with Catholic theology, to include accounts of what a person is which originate outside the Christian tradition. But Sanders’s speech issued no challenge to the reduction of persons to economic actors. In designating material inequality the ‘great issue of our time,’ Sanders reinforced that reductive tendency: the implication is that all we care about is the satisfaction of our material needs, as if redistribution alone would solve all our problems.

The suggestion in Sanders’s speech was that his specific stance in the utilitarian debate over how best to organise the economy has now taken on moral force. There is an ‘individualist’ position which favors free enterprise and tolerates inequality as incidental to the enlargement of aggregate utility, and there is a ‘collectivist’ stance which enlists the state to limit freedom to ensure that inequality does not grow too wide, seeing inequality as inimical to the maximizing of aggregate utility. The ‘collectivists’ are claiming the moral high ground. But all they are really proposing is a different means to the agreed end of maximizing overall prosperity. The basis for their ‘moral’ claims seems to be that they have more people on their side—a development which would make Nietzsche smile, and should give all of us pause. There are similar overtones to the rallying of progressive forces around Jeremy Corbyn in the UK.

The kind of ‘moral economy’ Sanders had in mind—a big government geared towards maximizing utility—is not what these moral economists would have regarded as a ‘truly moral economy’. The kinds of checks upon economic license they had in mind were more spontaneous and informal—emanating out of everyday interactions, materializing as strictures against certain kinds of commercial practice in common law, inarticulate notions of what is done and what is not done, general conceptions of fairness, broad-based vigilance against excess of power. This kind of moral economy has never been beyond us. The solidarities out of which it arises were never eradicated, and are constantly regenerating.

Tim Rogan is a fellow of St. Catharine’s College, Cambridge, where he teaches history. He is the author of The Moral Economists: R. H. Tawney, Karl Polanyi, E. P. Thompson, and the Critique of Capitalism.

Jerry Z. Muller on The Tyranny of Metrics

Today, organizations of all kinds are ruled by the belief that the path to success is quantifying human performance, publicizing the results, and dividing up the rewards based on the numbers. But in our zeal to instill the evaluation process with scientific rigor, we’ve gone from measuring performance to fixating on measuring itself. The result is a tyranny of metrics that threatens the quality of our lives and most important institutions. In this timely and powerful book, Jerry Muller uncovers the damage our obsession with metrics is causing—and shows how we can begin to fix the problem. Complete with a checklist of when and how to use metrics, The Tyranny of Metrics is an essential corrective to a rarely questioned trend that increasingly affects us all.

What’s the main idea?

We increasingly live in a culture of metric fixation: the belief in so many organizations that scientific management means replacing judgment based upon experience and talent with standardized measures of performance, and then rewarding or punishing individuals and organizations based upon those measures. The buzzwords of metric fixation are all around us: “metrics,” “accountability,” “assessment,” and “transparency.” Though often characterized as “best practice,” metric fixation is in fact often counterproductive, with costs to individual satisfaction with work, organizational effectiveness, and economic growth.

The Tyranny of Metrics treats metric fixation as the organizational equivalent of The Emperor’s New Clothes. It helps explain why metric fixation has become so popular, why it is so often counterproductive, and why some people have an interest in pushing it. It is a book that analyzes and critiques a dominant fashion in contemporary organizational culture, with an eye to making life in organizations more satisfying and productive.

Can you give a few examples of the “tyranny of metrics?”

Sure. In medicine, you have the phenomenon of “surgical report cards” that purport to show the success rates of surgeons who perform a particular procedure, such as cardiac operations. The scores are publicly reported. In an effort to raise their scores, surgeons were found to avoid operating on patients whose complicated circumstances made a successful operation less likely. So, the surgeons raised their scores. But some cardiac patients who might have benefited from an operation failed to get one—and died as a result. That’s what we call “creaming”—only dealing with cases most likely to be successful.

Then there is the phenomenon of goal diversion. A great deal of K-12 education has been distorted by the emphasis that teachers are forced to place on preparing students for standardized tests of English and math, where the results of the tests influence teacher retention or school closings. Teachers are instructed to focus class time on the elements of the subject that are tested (such as reading short prose passages), while ignoring those elements that are not (such as novels). Subjects that are not tested—including civics, art, and history—receive little attention.

Or, to take an example from the world of business. In 2011 the Wells Fargo bank set high quotas for its employees to sign up customers who were interested in one of its products (say, a deposit account) for additional services, such as overdraft coverage or credit cards. For the bank’s employees, failure to reach the quota meant working additional hours without pay and the threat of termination. The result: to reach their quotas, thousands of bankers resorted to low-level fraud, with disastrous effects for the bank. It was forced to pay a fortune in fines, and its stock price dropped.

Why is the book called The Tyranny of Metrics?

Because it helps explain and articulate the sense of frustration and oppression that people in a wide range of organizations feel at the diversion of their time and energy to performance measurement that is wasteful and counterproductive.

What sort of organizations does the book deal with?

There are chapters devoted to colleges and universities, K-12 education, medicine and health care, business and finance, non-profits and philanthropic organizations, policing, and the military. The goal is not to be definitive about any of these realms, but to explore instances in which metrics of measured performance have been functional or dysfunctional, and then to draw useful generalizations about the use and misuse of metrics.

What sort of a book is it? Does it belong to any particular discipline or political ideology?

It’s a work of synthesis, drawing on a wide range of studies and analyses from psychology, sociology, economics, political science, philosophy, organizational behavior, history, and other fields. But it’s written in jargon-free prose that doesn’t require prior knowledge of any of these fields. Princeton University Press has it classified under “Business,” “Public Policy,” and “Current Affairs.” That’s accurate enough, but it only begins to suggest the ubiquity of the cultural pattern that the book depicts, analyzes, and critiques. The book makes use of conservative, liberal, Marxist, and anarchist authors—some of whom have surprising areas of analytic convergence.

What’s the geographic scope of the book?

In the first instance, the United States. There is also a lot of attention to Great Britain, which in many respects was at the leading edge of metric fixation in the government’s treatment of higher education (from the “Teaching Quality Assessment” through the “Research Excellence Framework”), health care (the NHS) and policing, under the rubric of “New Public Management.” From the US and Great Britain, metric fixation—often carried by consultants touting “best practice”—has spread to Continental Europe, the Anglosphere, Asia, and especially China (where the quest for measured performance and university rankings is having a particularly pernicious effect on science and higher education).

Is the book simply a manifesto against performance measurement?

By no means. Drawing on a wide range of case studies from education to medicine to the military, the book shows how measured performance can be developed and used in positive ways.

Who do you hope will read the book?

Everyone who works in an organization, manages an organization, or supervises an organization, whether in the for-profit, non-profit, or government sector. Or anyone who wants to understand this dominant organizational culture and its intrinsic weaknesses.

Jerry Z. Muller is the author of many books, including Adam Smith in His Time and Ours and Capitalism and the Jews. His writing has appeared in the New York Times, the Wall Street Journal, the Times Literary Supplement, and Foreign Affairs, among other publications. He is professor of history at the Catholic University of America in Washington, D.C., and lives in Silver Spring, Maryland.

Jonathan Haskel & Stian Westlake on Capitalism without Capital

Early in the twenty-first century, a quiet revolution occurred. For the first time, the major developed economies began to invest more in intangible assets, like design, branding, R&D, and software, than in tangible assets, like machinery, buildings, and computers. For all sorts of businesses, from tech firms and pharma companies to coffee shops and gyms, the ability to deploy assets that one can neither see nor touch is increasingly the main source of long-term success. But this is not just a familiar story of the so-called new economy. Capitalism without Capital shows that the growing importance of intangible assets has also played a role in some of the big economic changes of the last decade.

What do you mean when you say we live in an age of Capitalism without Capital?

Our book is based on one big fact about the economy: that the nature of the investment that businesses do has fundamentally changed. Once businesses invested mainly in things you could touch or feel, like buildings, machinery, and vehicles. But more and more investment now goes into things you can’t touch or feel: things like research and development, design, organizational development—‘intangible’ investments. Today, in developed countries, businesses invest more each year in intangible assets than in tangibles. But they’re often measured poorly or not at all in company accounts or national accounts. So there is still a lot of capital about, but it has done a sort of vanishing act, both physically and from the records that businesses and governments keep.

What difference does the rise of intangible investments make?

The rise of intangible investment matters because intangible assets tend to behave differently from tangible ones—they have different economic properties. In the book we call these properties the 4S’s—scalability, sunkenness, synergies, and spillovers. Intangibles can be used again and again, they’re hard to sell if a business fails, they’re especially good when you combine them, and the benefits of intangible investment often end up accruing to businesses other than the ones that make them. We argue that this change helps explain all sorts of important concerns people have about today’s economy, from why inequality has risen so much, to why productivity growth seems to have slowed down.

So is this another book about tech companies?

It’s much bigger than that. It’s true that some of the biggest tech companies have lots of very valuable intangibles, and few tangibles. Google’s search algorithms, software, and prodigious stores of data are intangibles; Apple’s design, brand, and supply chains are intangibles; Uber’s networks of drivers and users are intangible assets. Each of these intangibles is worth billions of dollars. But intangibles are everywhere. Even brick and mortar businesses like supermarkets or gyms rely on more and more intangible assets, such as software, codified operating procedures, or brands. And the rise of intangibles is a very long-term story: research by economists like Carol Corrado suggests that intangibles investment has been steadily growing since the early twentieth century, long before the first semiconductors, let alone the Internet.

Who will do well from this new intangible economy?

The intangible economy seems to be creating winners and losers. From a business point of view, we know that around the world, there’s a growing gap between the leading businesses in any given industry and the rest. We think this leader-laggard gap is partly caused by intangibles. Because intangibles are scalable and have synergies with one another, companies that have valuable intangibles will do better and better (and have more incentives to invest in more), while small and low performing companies won’t, and will lag ever further behind.

There is a personal dimension to this too. People who are good at combining ideas, and who are open to new ideas, will do better in an economy where there are lots of synergies between different assets. This will be a boon for educated, open-minded people, people with political, legal, and social connections, and for people who live in cities (where ideas tend to combine easily with one another). But others risk being left further behind.

Does this help explain the big political changes in recent years?

Yes—after the EU referendum in the UK and the 2016 presidential election in the US, a lot of pundits were asking why so many people in so-called “left behind” communities voted for Brexit or Donald Trump. Some people thought they did so for cultural reasons, others argued the reasons were mainly economic. But we would argue that in an intangible economy, these two reasons are linked: more connected, cosmopolitan places tend to do better economically in an intangible economy, while left-behind places suffer from an alienation that is both economic and cultural.

You mentioned that the rise of intangible investment might help explain why productivity growth is slowing. Why is that?

Many economists and policymakers worry about so-called secular stagnation: the puzzling fact that productivity growth and investment seem to have slowed down, even though interest rates are low and corporate profits are high, especially since 2009. We think the growing importance of intangibles can help explain this in a few ways.

  • There is certainly some under-measurement of investment going on—but as it happens this explains only a small part of the puzzle.
  • The rate of growth of intangible investment has slowed a bit since 2009. This seems to explain part of the slow-down in growth (and also helps explain why the slowdown has been mainly concentrated in total factor productivity).
  • The gap between leading firms (with lots of intangibles) and laggard firms (with few) may have created a scenario where a few firms are investing in a lot of intangibles (think Google and Facebook) but for most others, it’s not worth it, since their more powerful competitors are likely to get the spillover benefits.

Does the intangible economy have consequences for investors?

Yes! Company accounts generally don’t record intangibles (except, haphazardly, as “goodwill” after an acquisition). This means that, as intangible assets become more important, corporate balance sheets tell investors less and less about the true value of a company. Much of what equity analysts spend their days doing is, in practice, trying to value intangibles.

And there’s lots of value to be had here: research suggests that equity markets undervalue intangibles like organizational development, and encourage public companies to underinvest in intangibles like R&D. But informed investors can take advantage of this—which can benefit both their own returns and the performance of the economy.

Jonathan, you’re an academic, and Stian, you are a policymaker. How did you come to write this book together?

We started working together in 2009 on the Nesta Innovation Index, which applied some of the techniques that Jonathan had worked on to measure intangibles to build an innovation measurement for the UK. The more we thought about it, the clearer it became that intangibles helped explain all sorts of things. Ryan Avent from the Economist asked us to write a piece for their blog about one of these puzzles, and we enjoyed doing that so much we thought we would try writing a book. One of the most fun parts of writing the book was being able to combine the insights from academic economic research on intangibles and innovation with practical insights from innovation policy.

Jonathan Haskel is professor of economics at Imperial College Business School. Stian Westlake is a senior fellow at Nesta, the UK’s national foundation for innovation. Haskel and Westlake are cowinners of the 2017 Indigo Prize.

The Greatest Showman and the Deceptions of American Capitalism

by Edward J. Balleisen

Perhaps unsurprisingly, The Greatest Showman, the new cinematic musical about the nineteenth-century American impresario of entertainment P. T. Barnum, unabashedly takes liberties with the historical record. As reviewers have already documented (Richard Brody in the New Yorker, Bruce Chadwick for History News Network), it fabricates matters large and small, as is the wont of Hollywood screenwriters and directors who work on biopics, while ignoring a host of truthful vignettes that cry out for cinematic treatment. As a historian of business fraud, I found myself especially disappointed that the musical steered clear of many aspects of Barnum’s career that speak powerfully to elements of our own moment, including the rise of a Barnum-esque publicity hound and conductor of media misdirection from the White House, and the constant turmoil swirling over allegations of fake news. And yet, The Greatest Showman does get some of the larger implications of Barnum’s life right—especially his injection of a democratic style of hullabaloo into American capitalism.

A full inventory of the film’s flights of fancy would require catalogue length. But a sampling conveys the minimal concern for fidelity to historical detail. The movie portrays the young Barnum as the poorly-clad son of an impoverished Connecticut tailor, rather than the child of a respectable proprietor who had a number of well-to-do relatives and also owned a store and inn. It gives Barnum experiences that he never had (begging and stealing food as an orphaned New York City street urchin; clerking for an insurance company). It depicts his move into the world of entertainment as occurring sometime well after the establishment of the railroad, perhaps even after the Civil War, rather than in the 1830s.

The Greatest Showman ignores Barnum’s earliest promotions of lotteries, curiosities and hoaxes, including his cruel exhibition of the elderly African-American slave woman Joice Heth as supposedly the 161-year old former wet-nurse of George Washington, and his willingness to profit further after her death through a public autopsy, experiences that laid the groundwork for his management of the American Museum. The screenwriters (Bill Condon and Jenny Bicks) have Barnum buy the museum on a wholly fictional mix of frustration, fantasy, and fraud, made possible by his fraudulent provision of fake collateral to a New York City bank that lends him the necessary $10,000. Instead of coming to grips with the actual Barnum’s vociferous advocacy of temperance, the film conjures up a hard-drinking man who makes deals over whiskeys in saloons. Rather than showing how Barnum consistently found new performers over the years, it brings together the midget Charles Stratton (known on stage as Tom Thumb), the Siamese twins Chang and Eng, and the other members of the troupe within weeks of Barnum’s purchase of the American Museum.

The historical Barnum had a falling out with the famed Swedish singer Jenny Lind not because he refused her amorous advances in the middle of their American tour (the musical’s explanation), but because she tired of his relentless focus on maximizing the returns from her concerts. A key antagonist for Barnum in The Greatest Showman is one “Bennett,” portrayed as a stiff-collared, high-toned theatre critic of the New York Herald. The actual James Gordon Bennett was the publisher of that paper, who proved more than happy to go along with hoaxes and sensationalism himself, using both to help cement his newspaper’s position as the first penny newspaper that catered to the broad masses. The character of Barnum’s high society sidekick Philip Carlyle is entirely fictional, as is his relationship with Anne Wheeler, an African-American female trapeze artist. One last illustration—the film attributes the fire that destroyed Barnum’s New York City Museum to neighborhood toughs who did not like his business, rather than the actual arsonist, a Confederate sympathizer who wished toward the end of the Civil War to strike a blow against the Union.

Of course, by indulging a willingness to elide facts or push outright lies in the service of a hokey story, the makers of The Greatest Showman adopt Barnum’s own modus operandi as a purveyor of entertainment. And the movie does a creditable job of engaging with some of Barnum’s larger cultural significance—his recognition that publicity and HYPE of any kind was often a marketing asset; his understanding that the public would be forgiving of misrepresentations and humbug if they, on balance, enjoyed the eventual show; his embrace of difference and variation within the human condition as worthy of celebration (if also exploitation); his compulsion to expand operations to take advantage of new opportunities, even at the cost of incurring gargantuan debts; his relentless focus on the American mythos of democratic opportunity, whether through his own experience (as carefully narrated in his autobiographies) or those of the stars in his shows. As the film implies, there was indeed deep-seated antagonism to Barnum’s business practices and willingness to engage in fakery, though the complaints came overwhelmingly from pulpits and the pages of evangelical newspapers, rather than protesters who made their presence known outside the Museum. And Barnum did in fact seek to defuse those critiques through the promotion of respectable performers such as Jenny Lind, alongside his curiosities, penchant for misdirection, and outright fakery.

Nonetheless, The Greatest Showman also missed many opportunities to explore episodes in Barnum’s life that have renewed resonance in the early twenty-first century. One crucial theme here concerns Barnum’s engagement with American race relations, both as promoter and in his post-Civil War forays in Connecticut politics and public service. Barnum’s often dehumanizing treatment of people of color and his evolving political views on race will surely occasion much commentary amid the current dramatic growth in ethnocentric nationalism and racially-grounded politics, as in a recent Smithsonian Magazine piece by Jackie Mansky. Other contemporary developments that suggest the value of reconsidering Barnum’s historical significance, closer to my own expertise, include the reoccurrence of massive business frauds, the emergence of enduring conflict over the appropriate role of government in consumer and investor protection, and diminished faith in institutions of all sorts.

The musical, for example, overlooks Barnum’s own bankruptcy in 1855, brought about because of his misplaced faith in the promises of a clock manufacturer who was willing to relocate his operation to Barnum’s adopted home town of Bridgeport, Connecticut, as part of an industrial development scheme. Barnum freely endorsed the Jerome Clock Company’s loans, opening himself up to devastating losses when the company failed, losses made worse by the firm’s eventual forging of Barnum’s endorsement on many additional notes. Yet he also sidestepped the worst consequences of that failure by illegally transferring assets into his wife’s name, a move that greatly facilitated his ability to get back on his financial feet, and for which he never faced public condemnation or legal penalty. Barnum’s insolvency thus speaks to the reality that even the savviest operators can be victims of imposition; and that well-connected perpetrators of commercial deceit have often been able to sidestep the most damaging fallout from their actions.

Another fascinating episode that The Greatest Showman ignores is Barnum’s growing focus on debunking the deceit of other purveyors of rhetorical (or actual) snake oil. By the 1860s, the promoter sought to legitimize his own brand of hokum and bluster not only by adding unquestionably respectable acts to his museum and eventual circus, but also by exposing frauds in many sectors of American life. Compiled in his 1866 volume, Humbugs of the World, these endeavors targeted misrepresentations in retail trade, medicine, and religion (especially in the realm of spiritualism). Here Barnum intuited the great power associated with well-constructed strategies of deflection—that one could gain trust in part by setting oneself up as an arbiter of untrustworthiness. Perhaps there is no greater contemporary practitioner of this particular form of showmanship than the current occupant of the White House. Donald Trump has rarely hesitated to get out ahead of critiques of his own business and political practices by casting the first stones, as through his allegations of malfeasance by political opponents (the pleas during the 2016 general election campaign to investigate Hillary Clinton and “Lock Her Up”) or representatives of the media (the incessant allegations of FAKE NEWS). In addition to muddying factual waters, such strategies can shore up support among the faithful, sustaining the conviction that their champion is fighting the good fight, and could not possibly be engaging in duplicitous behavior of his own.

In the end, The Greatest Showman cares most about exploring fictionalized or wholly fictional romantic tensions—those between Barnum and his wife Charity and between Philip Carlyle and Anne Wheeler—as well as the degree to which Barnum lives up to his purported insistence on an inclusive respect for his socially marginalized performers. These choices constrain the musical’s capacity to engage deeply with Barnum’s historical significance as an entrepreneur who played an outsized role in creating modern mass entertainment. And so a multitude of opportunities go begging. Barnum’s many legacies, however, continue to reverberate in contemporary America, whether one focuses on the dynamics of social media saturation, the process of invented celebrity, the sources of abiding racial tensions, the implications of pervasive commercial dissembling, or the nature of popular skepticism about expert appraisals of reality. And so the ground remains open for cultural reinterpretations of the Great Showman’s life and times. If the twentieth century is any guide, we won’t have to wait too long for another cinematic treatment—every generation or so, some movie-maker finds the resources to put Barnum back on the screen.[1]

[1] Previous films include “The Mighty Barnum” (1934), “The Greatest Show on Earth” (1952), “Barnum” (1986), and “P. T. Barnum” (1999).

Edward J. Balleisen is professor of history and public policy and vice provost for Interdisciplinary Studies at Duke University. He is the author of Fraud: An American History from Barnum to Madoff. He lives in Durham, North Carolina.

Pariah Moonshine Part III: Pariah Groups, Prime Factorizations, and Points on Elliptic Curves

by Joshua Holden

This post originally appeared on The Aperiodical. We republish it here with permission. 

In Part I of this series of posts, I introduced the sporadic groups, finite groups of symmetries which aren’t the symmetries of any obvious categories of shapes. The sporadic groups in turn are classified into the Happy Family, headed by the Monster group, and the Pariahs. In Part II, I discussed Monstrous Moonshine, the connection between the Monster group and a type of function called a modular form. This in turn ties the Monster group, and with it the Happy Family, to elliptic curves, Fermat’s Last Theorem, and string theory, among other things. But until 2017, the Pariah groups remained stubbornly outside these connections.

In September 2017, John Duncan, Michael Mertens, and Ken Ono published a paper announcing a connection between the Pariah group known as the O’Nan group (after Michael O’Nan, who discovered it in 1976) and another modular form. Like Monstrous Moonshine, the new connection is through an infinite-dimensional shape which breaks up into finite-dimensional pieces. Also like Monstrous Moonshine, the modular form in question has a deep connection with elliptic curves. In this case, however, the connection is more subtle and leads through yet another set of important mathematical objects: the quadratic fields.

At play in the fields quadratic

What mathematicians call a field is a set of objects which are closed under addition, subtraction, multiplication, and division (except division by zero). The rational numbers form a field, and so do the real numbers and the complex numbers. The integers don’t form a field because they aren’t closed under division, and the positive real numbers don’t form a field because they aren’t closed under subtraction.  (It’s also possible to have fields of things that aren’t numbers, which are useful in lots of other situations; see Section 4.5 of The Mathematics of Secrets for a cryptographic example.)

A common way to make a new field is to take a known field and enlarge it a bit. For example, if you start with the real numbers and enlarge them by including the number i (the square root of -1), then you also have to include all of the imaginary numbers, which are multiples of i, and then all of the numbers which are real numbers plus imaginary numbers, which gets you the complex numbers. Or you could start with the rational numbers, include the square root of 2, and then you have to include the numbers that are rational multiples of the square root of 2, and then the numbers which are rational numbers plus the multiples of the square root of 2. Then you get to stop, because if you multiply two of those numbers you get

(a + b√2) × (c + d√2) = (ac + 2bd) + (ad + bc)√2

which is another number of the same form. Likewise, if you divide two numbers of this form, you can rationalize the denominator and get another number of the same form. We call the resulting field the rational numbers “adjoined with” the square root of 2. Fields which are obtained by starting with the rational numbers and adjoining the square root of a rational number (positive or negative) are called quadratic fields.

Identifying a quadratic field is almost, but not quite, as easy as identifying the square root you are adjoining. For instance, consider adjoining the square root of 8. The square root of 8 is twice the square root of 2, so if you adjoin the square root of 2 you get the square root of 8 for free. And since you can also divide by 2, if you adjoin the square root of 8 you get the square root of 2 for free. So these two square roots give you the same field. For technical reasons, a quadratic field is identified by taking all of the integers whose square roots would give you that field, and picking out the integer D with the smallest absolute value that can be written in the form b² – 4ac for integers a, b, and c. (This is the same b² – 4ac as in the quadratic formula.) This number D is called the fundamental discriminant of the field. So, for example, 8 is the fundamental discriminant of the quadratic field we’ve been talking about, not 2, because 8 = 4² – 4 × 2 × 1, but 2 can’t be written in that form.
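If you want to experiment with this yourself, here is a minimal Python sketch (my own illustration, not part of the original post; the function names are invented). It uses the standard shortcut that the numbers expressible as b² – 4ac are exactly those leaving remainder 0 or 1 on division by 4, so the fundamental discriminant is the squarefree part d of the number you adjoin when d leaves remainder 1, and 4d otherwise.

```python
# A minimal sketch (my own, not from the original post): the fundamental
# discriminant of the quadratic field obtained by adjoining sqrt(n).
# Numbers of the form b^2 - 4ac are exactly those that are 0 or 1 mod 4,
# so D is the squarefree part d of n if d is 1 mod 4, and 4d otherwise.

def squarefree_part(n):
    """Strip repeated prime factors: 8 -> 2, 12 -> 3, -20 -> -5."""
    d, f = n, 2
    while f * f <= abs(d):
        while d % (f * f) == 0:
            d //= f * f
        f += 1
    return d

def fundamental_discriminant(n):
    d = squarefree_part(n)
    return d if d % 4 == 1 else 4 * d

print(fundamental_discriminant(2))    # 8, as in the text
print(fundamental_discriminant(8))    # 8 again: adjoining sqrt(8) gives the same field
print(fundamental_discriminant(-5))   # -20, the discriminant that appears later in this post
```

Running it reproduces the examples here: adjoining the square root of 2 or of 8 gives D = 8, and adjoining the square root of -5 gives D = -20.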

Prime suspects

After addition, subtraction, multiplication, and division, one of the really important things you can do with rational numbers is factor their numerators and denominators into primes. In fact, you can do it uniquely, aside from the order of the factors. If you have a number in a quadratic field, you can still factor it into primes, but the factorization might not be unique. For example, in the rational numbers adjoined with the square root of negative 5 we have

6 = 2 × 3 = (1 + √–5) × (1 – √–5)

where 2, 3, 1 + √–5, and 1 – √–5 are all primes. You’ll have to trust me on that last part, since it’s not always obvious which numbers in a quadratic field are prime. Figures 1 and 2 show some small primes in the rational numbers adjoined with the square roots of negative 1 and negative 3, respectively, plotted as points in the complex plane.


Figure 1. Some small primes in the rational numbers adjoined with the square root of -1 (D = -4), plotted as points in the complex plane. (By Wikimedia Commons User Georg-Johann.)

 


Figure 2. Some small primes in the rational numbers adjoined with the square root of -3 (D = -3), plotted as points in the complex plane. (By Wikimedia Commons User Fropuff.)
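Before moving on, it is easy to check the two factorizations of 6 above exactly, by representing a + b√–5 as the integer pair (a, b). The few lines of Python below are my own illustration, not part of the original article.

```python
# A small exact check (my own illustration): represent a + b*sqrt(-5) as the
# integer pair (a, b) and multiply using (sqrt(-5))^2 = -5.

def mult(x, y):
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

print(mult((2, 0), (3, 0)))    # (6, 0): the factorization 2 * 3
print(mult((1, 1), (1, -1)))   # (6, 0): the factorization (1 + sqrt(-5))(1 - sqrt(-5))

# The norm N(a + b*sqrt(-5)) = a^2 + 5*b^2 is multiplicative, and no element
# has norm 2 or 3, which is why all four factors really are prime here.
def norm(x):
    a, b = x
    return a * a + 5 * b * b

print([norm(x) for x in [(2, 0), (3, 0), (1, 1), (1, -1)]])   # [4, 9, 6, 6]
```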

We express this by saying the rational numbers have unique factorization, but not all quadratic fields do. The question of which quadratic fields have unique factorization is an important open problem in general. For negative fundamental discriminants, we know that D = -3, -4, -7, -8, -11, -19, -43, -67, -163 give the only such quadratic fields; an equivalent form of this was conjectured by Gauss, but fully acceptable proofs were not given until 1966 by Alan Baker and 1967 by Harold Stark. For positive fundamental discriminants, Gauss conjectured that there were infinitely many quadratic fields with unique factorization, but this is still unproved.

Furthermore, Gauss identified a number, called the class number, which in some sense measures how far from unique factorization a field is. If the class number is 1, the field has unique factorization, otherwise not. The rational numbers adjoined with the square root of negative 5 (D = -20) have class number 2, and therefore do not have unique factorization. Gauss also conjectured that the class number of a quadratic field went to infinity as its discriminant went to negative infinity; this was proved by Hans Heilbronn in 1934.
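For readers who want to see where these class numbers come from: for negative fundamental discriminants they can be computed by brute force, by counting Gauss’s reduced binary quadratic forms ax² + bxy + cy² with b² – 4ac = D, since each class of forms contributes exactly one reduced form. The sketch below is my own illustration, not from the original post, and it reproduces the numbers quoted here.

```python
# A brute-force class number for a negative fundamental discriminant D
# (my own sketch): count reduced, primitive, positive-definite forms
# a*x^2 + b*x*y + c*y^2 with b^2 - 4*a*c = D.
from math import gcd, isqrt

def class_number(D):
    assert D < 0 and D % 4 in (0, 1)
    count = 0
    for a in range(1, isqrt(abs(D) // 3) + 1):
        for b in range(-a + 1, a + 1):          # reduced forms have -a < b <= a
            if (b * b - D) % (4 * a) != 0:
                continue
            c = (b * b - D) // (4 * a)
            if c < a:                            # reduced forms have a <= c
                continue
            if b < 0 and a == c:                 # skip the mirror of an already-counted form
                continue
            if gcd(gcd(a, abs(b)), c) == 1:      # primitive forms only
                count += 1
    return count

print(class_number(-20))   # 2, matching the example in the text
print([class_number(D) for D in (-3, -4, -7, -8, -11, -19, -43, -67, -163)])
# [1, 1, 1, 1, 1, 1, 1, 1, 1] -- exactly the discriminants with unique factorization
```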

Moonshine with class (numbers)

What about Moonshine? Duncan, Mertens, and Ono proved that the O’Nan group was associated with the modular form

F(z) = e^(-8πiz) + 2 + 26752 e^(6πiz) + 143376 e^(8πiz) + 8288256 e^(14πiz) + …

which has the property that the coefficient of e^(2|D|πiz) is related to the class number of the field with fundamental discriminant D < 0. Furthermore, looking at elements of the O’Nan group sometimes gives us very specific relationships between the coefficients and the class number. For example, the O’Nan group includes a symmetry which is like a 180 degree rotation, in that if you do it twice you get back to where you started. Using that symmetry, Duncan, Mertens, and Ono showed that for even D < -8, 16 always divides a(D)+24h(D), where a(D) is the coefficient of e^(2|D|πiz) and h(D) is the class number of the field with fundamental discriminant D. For the example D = -20 from above, a(D) = 798588584512 and h(D) = 2, and 16 does in fact divide 798588584512 + 48. Similarly, other elements of the O’Nan group show that 9 always divides a(D)+24h(D) if D = 3k+2 for some integer k, and that 5 and 7 always divide a(D)+24h(D) under other similar conditions on D. And 11 and 19 divide a(D)+24h(D) under (much) more complicated conditions related to points on an elliptic curve associated with each D, which brings us back nicely to the connection between Moonshine and elliptic curves.
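As a quick sanity check on that divisibility claim, plugging in the values quoted above for D = -20 takes only a couple of lines of Python (my own check, using the numbers stated in the text):

```python
# Checking 16 | a(D) + 24*h(D) for D = -20, with the values quoted in the text:
# a(-20) = 798588584512 (the coefficient of e^(40*pi*i*z)) and h(-20) = 2.
a_D, h_D = 798588584512, 2
print((a_D + 24 * h_D) % 16 == 0)   # True
```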

How much Moonshine is out there?

Monstrous Moonshine showed that the Monster, and therefore the Happy Family, was related to modular forms and elliptic curves, as well as string theory. O’Nan Moonshine brings in two more sporadic groups, the O’Nan group and its subgroup the “first Janko group”. (Figure 3 shows the connections between the sporadic groups. “M” is the Monster group, “O’N” is the O’Nan group, and “J1” is the first Janko group.) It also connects the sporadic groups not just to modular forms and elliptic curves, but also to quadratic fields, primes, and class numbers. Furthermore, the modular form used in Monstrous Moonshine is “weight 0”, meaning that k = 0 in the definition of a modular form given in Part II. That ties this modular form very closely to elliptic curves.


Figure 3. Connections between the sporadic groups. Lines indicate that the lower group is a subgroup or a quotient of a subgroup of the upper group. “M” is the Monster group and “O’N” is the O’Nan group; the groups connected below the Monster group are the rest of the Happy Family. (By Wikimedia Commons User Drschawrz.)

The modular form in O’Nan Moonshine is “weight 3/2”. Weight 3/2 modular forms are less closely tied to elliptic curves, but are tied to yet more ideas in mathematical physics, like higher-dimensional generalizations of strings called “branes” and functions that might count the number of states that a black hole can be in. That still leaves four more pariah groups, and the smart money predicts that Moonshine connections will be found for them, too. But will they come from weight 0 modular forms, weight 3/2 modular forms, or yet another type of modular form with yet more connections? Stay tuned! Maybe someday soon there will be a Part IV.

Joshua Holden is professor of mathematics at the Rose-Hulman Institute of Technology. He is the author of The Mathematics of Secrets: Cryptography from Caesar Ciphers to Digital Encryption.

Geoff Mulgan on Big Mind: How Collective Intelligence Can Change Our World

A new field of collective intelligence has emerged in the last few years, prompted by a wave of digital technologies that make it possible for organizations and societies to think at large scale. This “bigger mind”—human and machine capabilities working together—has the potential to solve the great challenges of our time. So why do smart technologies not automatically lead to smart results? Gathering insights from diverse fields, including philosophy, computer science, and biology, Big Mind reveals how collective intelligence can guide corporations, governments, universities, and societies to make the most of human brains and digital technologies. Highlighting differences between environments that stimulate intelligence and those that blunt it, Geoff Mulgan shows how human and machine intelligence could solve challenges in business, climate change, democracy, and public health. Read on to learn more about the ideas in Big Mind.

So what is collective intelligence?

My interest is in how thought happens at a large scale, involving many people and often many machines. Over the last few years many experiments have shown how thousands of people can collaborate online analyzing data or solving problems, and there’s been an explosion of new technologies to sense, analyze and predict. My focus is on how we use these new kinds of collective intelligence to solve problems like climate change or disease—and what risks we need to avoid. My claim is that every organization can work more successfully if it taps into a bigger mind—mobilizing more brains and computers to help it.

How is it different from artificial intelligence?

Artificial intelligence is going through another boom, embedded in everyday things like mobile phones and achieving remarkable breakthroughs in medicine or games. But for most things that really matter we need human intelligence as well as AI, and an overreliance on algorithms can have horrible effects, whether in financial markets or in politics.

What’s the problem?

The problem is that although there’s huge investment in artificial intelligence there’s been little progress in how intelligently our most important systems work—democracy and politics, business and the economy. You can see this in the most everyday aspect of collective intelligence—how we organize meetings, which ignores almost everything that’s known about how to make meetings effective.

What solutions do you recommend?

I show how you can make sense of the collective intelligence of the organizations you’re in—whether universities or businesses—and how to become better. Much of this is about how we organize our information commons. I also show the importance of countering the many enemies of collective intelligence—distortions, lies, gaming and trolls.

Is this new?

Many of the examples I look at are quite old—like the emergence of an international community of scientists in the 17th and 18th centuries, the Oxford English Dictionary which mobilized tens of thousands of volunteers in the 19th century, or NASA’s Apollo program which at its height employed over half a million people in more than 20,000 organizations. But the tools at our disposal are radically different—and more powerful than ever before.

Who do you hope will read the book?

I’m biased but think this is the most fascinating topic in the world today—how to think our way out of the many crises and pressures that surround us. But I hope it’s of particular interest to anyone involved in running organizations or trying to work on big problems.

Are you optimistic?

It’s easy to be depressed by the many examples of collective stupidity around us. But my instinct is to be optimistic that we’ll figure out how to make the smart machines we’ve created serve us well, and that we could be on the cusp of a dramatic enhancement of our shared intelligence. That’s a pretty exciting prospect, and much too important to be left in the hands of the geeks alone.

Geoff Mulgan is chief executive of Nesta, the UK’s National Endowment for Science, Technology and the Arts, and a senior visiting scholar at Harvard University’s Ash Center. He was the founder of the think tank Demos and director of the Prime Minister’s Strategy Unit and head of policy under Tony Blair. His books include The Locust and the Bee.

Éloi Laurent on Measuring Tomorrow

Never before in human history have we produced so much data, and this empirical revolution has shaped economic research and policy profoundly. But are we measuring, and thus managing, the right things—those that will help us solve the real social, economic, political, and environmental challenges of the twenty-first century? In Measuring Tomorrow, Éloi Laurent argues that we need to move away from narrowly useful metrics such as gross domestic product and instead use broader ones that aim at well-being, resilience, and sustainability. An essential resource for scholars, students, and policymakers, Measuring Tomorrow covers all aspects of well-being, and incorporates a broad range of data and fascinating case studies from around the world: not just the United States and Europe but also China, Africa, the Middle East, and India. Read on to learn more about how we can measure tomorrow.

Why should we go “beyond growth” in the 21st century to pay attention, as you advocate, to well-being, resilience and sustainability?

Because “growth,” that is growth of Gross Domestic Product or GDP, captures only a tiny fraction of what goes on in complex human societies: it tracks some but not all of economic well-being (saying nothing about fundamental issues such as income inequality), it does not account for most dimensions of well-being (think about the importance of health, education, or happiness for your own quality of life), and does not account at all for sustainability, which basically means well-being not just today but also tomorrow (imagine your quality of life in a world where the temperature would be 6 degrees higher). My point is that because well-being (human flourishing), resilience (resisting shocks) and sustainability (caring about the future) have been overlooked by mainstream economics in the last three decades, our economic world has been mismanaged and our prosperity is now threatened.

To put it differently, while policymakers govern with numbers and data, they are also governed by them, so those numbers had better be relevant and accurate. It turns out, and that’s a strong argument of the book, that GDP’s relevance is fast declining in the beginning of the twenty-first century for three major reasons. First, economic growth, so buoyant during the three decades following the Second World War, has gradually faded away in advanced and even developing economies and is therefore becoming an ever-more-elusive goal for policy. Second, both objective and subjective well-being—those things that make life worth living—are visibly more and more disconnected from economic growth. Finally, GDP and growth tell us nothing about the compatibility of our current well-being with the long-term viability of ecosystems, even though it is clearly the major challenge we and our descendants must face.

Since “growth” cannot help us understand, let alone solve, the two major crises of our time, the inequality crisis and the ecological crisis, we must rely on other compasses to find our way in this new century. In my view, the whole of economic activity, which is a subset of social cooperation, should be reoriented toward the well-being of citizens and the resilience and sustainability of societies. For that to happen, we need to put these three collective horizons at the center of our empirical world. Or rather, back at the center, because issues of well-being and sustainability have been around for quite a long time in economic analysis and were a central part of its philosophy until the end of the nineteenth century. But economics as we know it today has largely forgotten that these concerns were once at the core of its reflections.

Isn’t there a fundamental trade-off between well-being and sustainability? Can we really pursue those goals together?

That is a key question and the book makes the case that advances in human well-being are fully compatible with environmental sustainability and even that the two are, or at least can be, mutually reinforcing provided we think clearly about those notions. Well-being represents the many dimensions of human development and sustainability represents dynamic well-being. They are obviously related.

To use the words of Chinese Environment Minister Zhou Shengxian in 2011, “If our planet is wrecked and our health ravaged, what is the benefit of our development?” In other words, our economic and political systems exist only within a larger context, the biosphere, whose vitality is the source of their survival and perpetuation. If ecological crises are not measured, monitored, and mitigated, they will eventually wipe out human well-being.

Well-being without sustainability (and resilience, understood as short-term sustainability) is just an illusion. Our planet’s climate crisis has the potential to destroy the unprecedented contemporary progress in human health in a mere few decades. As Minister Zhou acknowledged, if China’s ecosystems collapse under the weight of hyper-growth, with no unpolluted water left to drink and no clean air to breathe, the hundreds of millions of people in that country who have escaped poverty since the 1980s will be thrown back into it, and worse. But, conversely, sustainability without well-being is just an ideal. Human behaviors and attitudes will become more sustainable not to “save the planet,” but to preserve well-being. Measuring well-being, resilience and sustainability makes their fundamental interdependence even clearer.

But do robust indicators of well-being and sustainability already exist? If so, what do they tell us about our world that conventional economic indicators cannot?

Plenty exist; the task now is to select the best and use them to change policy. This is really what the book is about. Think about health in the US. Simple metrics such as life expectancy or mortality rates tell a very different story about what has happened in the country over the last thirty years than growth does. Indeed, the healthcare reform initiated by Barack Obama in 2009 can be explained by the desire to amend a health system whose human and economic costs had become unbearable. The recent discovery by economists Angus Deaton and Anne Case of very high mortality rates among middle-aged whites in the United States, even as GDP was growing, is proof that health status must be studied and measured regardless of a nation’s perceived wealth. How is it that the richest country in the world in terms of average income per capita, a country that devotes more of its wealth to health than any other, ranks close to last among comparable countries in terms of health outcomes? Use different indicators, as I do in the chapter devoted to health, and the solution to the American health puzzle quickly becomes apparent: ballooning, inefficient private spending has produced a system whose costs are huge compared to its performance.

Or consider happiness in China, which has seen its per capita income grow exponentially since the early 1990s while happiness levels have either stagnated or dropped (depending on the survey), only to increase again in recent years when growth was much lower. If you look at China only through the lens of growth, you basically miss the whole story about people’s lives.

Paying attention to well-being can also help us understand why the Arab Spring erupted in Tunisia in 2011, a country where growth was strong and steady but where civil liberties and political rights clearly deteriorated before the revolution. The same is true for the quality of life in Europe and in my hometown of Paris, where air pollution has reached unbearable and life-threatening levels despite the appearance of considerable wealth. Measuring well-being and sustainability simply changes the way we see the world, and it should change the way we do policy.

What signs do you see that what you call the well-being and sustainability transition is under way?

In the last decade alone, scholars and policy makers have recognized in increasing numbers that standard economic indicators such as GDP not only create false expectations of perpetual societal growth but are also broken compasses for policy. And things are changing fast at all levels of governance: global, national, local.

The well-being and sustainability transition received international recognition in September 2015, when the United Nations embraced a “sustainable development goals” agenda in which GDP growth plays only a marginal role. In the US, scores of scholars and (some) policymakers increasingly realize the importance of paying attention to inequality rather than just growth. China’s leaders acknowledge that sustainability is a much better policy target than explosive economic expansion. Pope Francis is also a force for change when, in the encyclical Laudato si’, published in June 2015, he writes that “we are faced not with two separate crises, one environmental and the other social, but rather with one complex crisis which is both social and environmental,” and urges us to abandon growth as a collective horizon. Influential newspapers and magazines such as The Economist and the New York Times have recently run articles arguing that GDP should be dropped or at least complemented. Local transitions are happening all over the planet, from Copenhagen to Baltimore, from Chinese provinces to Indian states.

How should students, activists and policymakers engage in “Measuring tomorrow?”

The book serves as a practical guide to using indicators of well-being and sustainability to change our world. The basic course of action is to make visible what matters for humans and then make it count. Unmeasurability means invisibility: as the saying goes, “what is not measured is not managed.” Conversely, measuring is governing: indicators determine policies and actions. Measuring, done properly, can produce positive social meaning.

First, we need to engage in a transition in values to change behaviors and attitudes. We live in a world where many dimensions of human well-being already have a value and often a price; it is the pluralism of value that can protect those dimensions from the dictatorship of the single price. This does not mean that everything should be monetized or marketed, but understanding how what matters to humans can be accounted for is the first step toward valuing and taking care of what really counts.

Then we need to understand that the challenge is not just to interpret or even analyze this new economic world, but to change it. We thus need to understand how indicators of well-being and sustainability can become performative and not just descriptive. This can be done by integrating indicators in policy through representative democracy, regulatory democracy, and democratic activism. Applied carefully by private and public decision-makers, well-being and sustainability indicators can foster genuine progress.

Finally, we need to build tangible transitions at the local level. Well-being is best measured where it is actually experienced. Localities (cities, regions) are more agile than states, not to mention international institutions, and better able to put well-being indicators into motion and translate them into new policies. We can speak here, following the late Elinor Ostrom, of a “polycentric transition,” meaning that each level of government can seize the opportunity of the well-being and sustainability transition without waiting for the impetus to come from above.

As you can see, so much to learn, do and imagine!

Éloi Laurent is senior economist at the Sciences Po Centre for Economic Research (OFCE) in Paris. He also teaches at Stanford University and has been a visiting professor at Harvard University. He is the author or editor of fifteen books, including Measuring Tomorrow: Accounting for Well-Being, Resilience, and Sustainability in the Twenty-First Century.

 

William A. P. Childs on Greek Art and Aesthetics in the Fourth Century B.C.

Greek Art and Aesthetics in the Fourth Century B.C. analyzes the broad character of art produced during this period, providing in-depth analysis of and commentary on many of its most notable examples of sculpture and painting. Taking into consideration developments in style and subject matter, and elucidating political, religious, and intellectual context, William A. P. Childs argues that Greek art in this era was a natural outgrowth of the high classical period of the fifth century and focused on developing the rudiments of individual expression that became the hallmark of later Greek art. Read on to learn more about fourth-century B.C. Greek art:

Why the fourth century?

The fourth century BCE has been neglected in scholarly treatises, with a few recent exceptions: Blanche Brown, Anticlassicism in Greek Sculpture of the Fourth Century B.C., Monographs on Archaeology and the Fine Arts sponsored by the Archaeological Institute of America and the College Art Association of America 26 (New York, 1976); and Brunilde Ridgway, Fourth-Century Styles in Greek Sculpture, Wisconsin Studies in Classics (Madison, WI, 1997).

One reason is simply that modern taste has been antithetical to the character of the century. Thus literary critics disparaged Euripides’ wild reassessments of mythology at the end of the fifth century, as well as his supposedly colloquial language, and treated the sophists as morally dishonest.

Socially the century was marked by continuous warfare and the rise of a new, rich elite. Individuals were as important as, or more important than, society and community; artists were thought to have individual styles that reflected their personal visions. This was thought to debase the grandeur of the high classic, replacing it with cheap sensationalism and a pluralism that defied straightforward categorization.

The age-old hostility to Persia was revived, it seems largely for political reasons, even as Persian artistic influence permeated much of the ornament of the new, wealthy elite: mosaics, rich cloth, and metalwork. At the same time Persia was constantly meddling in Greek affairs, which produced a certain hypocritical political atmosphere.

And, finally, Philip of Macedon brought the whole democratic adventure of the fifth century to a close with the establishment of monarchy as the default political system, and Alexander brought the East into the new Hellenic or Hellenistic culture out of which Roman culture was to arise.

Clearly most of the past criticism is accurate; it is our response that has totally changed, presumably because our own period is in many respects very similar in character to the fourth century.

What is the character of the art of the fourth century?

On the surface there is little change from the high classical style of the fifth century—the subject of art is primarily religion, in the form of votive reliefs and statues dedicated in sanctuaries. The art of vase-painting in Athens undergoes a slow decline in quality, with notable exceptions, and comes to an end as the century closes.

Though the function of art remains the same as before, its physical appearance changes and changes again. At the end of the fifth century and into the first quarter of the fourth there is a nervous, linear style with strong erotic overtones. After about 370 the preference is for solidity and quiet poses. But what becomes apparent on closer examination is that there are multiple contemporary variations on the dominant stylistic structures. This has led to some difficulty in assigning convincing dates to individual works, though the difficulty has been exaggerated. It is widely thought that the different stylistic variations are due to individual artists asserting their personal visions and interpretations of the human condition.

The literary sources, almost all of Roman date, do state that the famous artists of the fourth century, sculptors and painters alike, developed very individual styles that with training could be recognized in the works still extant. Since almost no original Greek statues and no original panel paintings are preserved, it is difficult to evaluate these claims convincingly. But since there are quite distinct groups of works that share broad stylistic similarities, and these similarities agree to a large extent with the stylistic observations in the literary sources, it is at least possible to suggest that these styles are connected in some way with particular, named artists of the fourth century. However, rather than attributing works to the named artists, it seems wiser simply to identify the style and recognize that it conveys a particular character of the figure portrayed. The same approach applies to vase-paintings that may reflect the styles of different panel painters. There are therefore Praxitelian and Skopaic sculptures and Parrhasian and Zeuxian paintings. Style conveys content.

The variety of styles as expressive tools indicates a variety of content. A corollary is that the artist presents works that must be read by the viewer and therefore do not primarily represent social norms but are particular interpretations of both traditional and novel subjects: Aphrodite bathes, a satyr rests peacefully in the woods, athletes clean themselves. In brief, the heroic and the divine are humanized, and humans gain a psychological depth that allows portraits to suggest character.

Was the cultural response to these developments purely negative as most modern commentaries suggest?

The question of the reception of art and poetry in the Greek world, particularly of the archaic and classical periods, has occupied scholars for at least the last two hundred years. It has been amply documented that artisans, and the people we consider artists, were generally disparaged by those who composed the preserved texts of literature and historical commentary. For example, Plato is generally considered a conservative Philistine; most modern commentators are appalled by his criticism of poetry and the plastic arts in all their forms. Yet the English Romantic poets of the late 18th and early 19th centuries thought Plato a kindred spirit, and it was only in the late 19th and early 20th centuries that the negative assessment of Plato’s relation to poetry and art became authoritative. However one wishes to assess Plato’s own appreciation of poetry and art, it is eminently clear that he had an intimate knowledge of contemporary art. Equally, his criticism of people who praise art indicates that precisely what he criticizes is what Athenian society expected and praised. It does not require a large leap to surmise that Plato is the first art critic with a sophisticated, if somewhat disorganized, approach. His student Aristotle had the organization and perhaps a more nuanced view of art, but it is perhaps not an exaggeration to suggest that Aristotle was not as sensitive to art as his teacher.

The fact of the matter is that from Homer on, descriptions of art objects, though very rare, are uniformly appreciative. For Homer the wonder of life-likeness is paramount, a quality that endures down to the fourth century despite the changing styles and patent abstractions of the intervening centuries. At least in the fourth century, artists also became wealthy and must have managed large workshops. So the modern view that artisans and artists were considered inferior members of society appears to be a social evaluation made by the wealthy and leisured.

In the fourth century BCE Greek artists embark on an inquiry into the individual expression of profound insights into the human condition as well as social values. It is the conscious recognition of the varied expressive values of style that creates the modern concept of aesthetics and of the artist.

William A. P. Childs is professor emeritus of classical art and archaeology at Princeton University.

Kyle Harper: How climate change and disease helped the fall of Rome

At one time or another, every historian of Rome has been asked to say where we are, today, on Rome’s cycle of decline. Historians might squirm at such attempts to use the past but, even if history does not repeat itself, nor come packaged into moral lessons, it can deepen our sense of what it means to be human and how fragile our societies are.

In the middle of the second century, the Romans controlled a huge, geographically diverse part of the globe, from northern Britain to the edges of the Sahara, from the Atlantic to Mesopotamia. The generally prosperous population peaked at 75 million. Eventually, all free inhabitants of the empire came to enjoy the rights of Roman citizenship. Little wonder that the 18th-century English historian Edward Gibbon judged this age the ‘most happy’ in the history of our species – yet today we are more likely to see the advance of Roman civilisation as unwittingly planting the seeds of its own demise.

Five centuries later, the Roman empire was a small Byzantine rump-state controlled from Constantinople, its near-eastern provinces lost to Islamic invasions, its western lands covered by a patchwork of Germanic kingdoms. Trade receded, cities shrank, and technological advance halted. Despite the cultural vitality and spiritual legacy of these centuries, this period was marked by a declining population, political fragmentation, and lower levels of material complexity. When the historian Ian Morris at Stanford University created a universal social-development index, the fall of Rome emerged as the greatest setback in the history of human civilisation.

Explanations for a phenomenon of this magnitude abound: in 1984, the German classicist Alexander Demandt catalogued more than 200 hypotheses. Most scholars have looked to the internal political dynamics of the imperial system or the shifting geopolitical context of an empire whose neighbours gradually caught up in the sophistication of their military and political technologies. But new evidence has started to unveil the crucial role played by changes in the natural environment. The paradoxes of social development, and the inherent unpredictability of nature, worked in concert to bring about Rome’s demise.

Climate change did not begin with the exhaust fumes of industrialisation, but has been a permanent feature of human existence. Orbital mechanics (small variations in the tilt, spin and eccentricity of the Earth’s orbit) and solar cycles alter the amount and distribution of energy received from the Sun. And volcanic eruptions spew reflective sulphates into the atmosphere, sometimes with long-reaching effects. Modern, anthropogenic climate change is so perilous because it is happening quickly and in conjunction with so many other irreversible changes in the Earth’s biosphere. But climate change per se is nothing new.

The need to understand the natural context of modern climate change has been an unmitigated boon for historians. Earth scientists have scoured the planet for paleoclimate proxies, natural archives of the past environment. The effort to put climate change in the foreground of Roman history is motivated both by troves of new data and a heightened sensitivity to the importance of the physical environment. It turns out that climate had a major role in the rise and fall of Roman civilisation. The empire-builders benefitted from impeccable timing: the characteristic warm, wet and stable weather was conducive to economic productivity in an agrarian society. The benefits of economic growth supported the political and social bargains by which the Roman empire controlled its vast territory. The favourable climate, in ways subtle and profound, was baked into the empire’s innermost structure.

The end of this lucky climate regime did not immediately, or in any simple deterministic sense, spell the doom of Rome. Rather, a less favourable climate undermined its power just when the empire was imperilled by more dangerous enemies – Germans, Persians – from without. Climate instability peaked in the sixth century, during the reign of Justinian. Work by dendrochronologists and ice-core experts points to an enormous spasm of volcanic activity in the 530s and 540s CE, unlike anything else in the past few thousand years. This violent sequence of eruptions triggered what is now called the ‘Late Antique Little Ice Age’, when much colder temperatures endured for at least 150 years. This phase of climate deterioration had decisive effects in Rome’s unravelling. It was also intimately linked to a catastrophe of even greater moment: the outbreak of the first pandemic of bubonic plague.

Disruptions in the biological environment were even more consequential to Rome’s destiny. For all the empire’s precocious advances, life expectancies ranged in the mid-20s, with infectious diseases the leading cause of death. But the array of diseases that preyed upon Romans was not static and, here too, new sensibilities and technologies are radically changing the way we understand the dynamics of evolutionary history – both for our own species, and for our microbial allies and adversaries.

The highly urbanised, highly interconnected Roman empire was a boon to its microbial inhabitants. Humble gastro-enteric diseases such as Shigellosis and paratyphoid fevers spread via contamination of food and water, and flourished in densely packed cities. Where swamps were drained and highways laid, the potential of malaria was unlocked in its worst form – Plasmodium falciparum – a deadly mosquito-borne protozoon. The Romans also connected societies by land and by sea as never before, with the unintended consequence that germs moved as never before, too. Slow killers such as tuberculosis and leprosy enjoyed a heyday in the web of interconnected cities fostered by Roman development.

However, the decisive factor in Rome’s biological history was the arrival of new germs capable of causing pandemic events. The empire was rocked by three such intercontinental disease events. The Antonine plague coincided with the end of the optimal climate regime, and was probably the global debut of the smallpox virus. The empire recovered, but never regained its previous commanding dominance. Then, in the mid-third century, a mysterious affliction of unknown origin called the Plague of Cyprian sent the empire into a tailspin. Though it rebounded, the empire was profoundly altered – with a new kind of emperor, a new kind of money, a new kind of society, and soon a new religion known as Christianity. Most dramatically, in the sixth century a resurgent empire led by Justinian faced a pandemic of bubonic plague, a prelude to the medieval Black Death. The toll was unfathomable – maybe half the population was felled.

The plague of Justinian is a case study in the extraordinarily complex relationship between human and natural systems. The culprit, the Yersinia pestis bacterium, is not a particularly ancient nemesis; evolving just 4,000 years ago, almost certainly in central Asia, it was an evolutionary newborn when it caused the first plague pandemic. The disease is permanently present in colonies of social, burrowing rodents such as marmots or gerbils. However, the historic plague pandemics were colossal accidents, spillover events involving at least five different species: the bacterium, the reservoir rodent, the amplification host (the black rat, which lives close to humans), the fleas that spread the germ, and the people caught in the crossfire.

Genetic evidence suggests that the strain of Yersinia pestis that generated the plague of Justinian originated somewhere near western China. It first appeared on the southern shores of the Mediterranean and, in all likelihood, was smuggled in along the southern, seaborne trading networks that carried silk and spices to Roman consumers. It was an accident of early globalisation. Once the germ reached the seething colonies of commensal rodents, fattened on the empire’s giant stores of grain, the mortality was unstoppable.

The plague pandemic was an event of astonishing ecological complexity. It required purely chance conjunctions, especially if the initial outbreak beyond the reservoir rodents in central Asia was triggered by those massive volcanic eruptions in the years preceding it. It also involved the unintended consequences of the built human environment – such as the global trade networks that shuttled the germ onto Roman shores, or the proliferation of rats inside the empire. The pandemic baffles our distinctions between structure and chance, pattern and contingency. Therein lies one of the lessons of Rome. Humans shape nature – above all, the ecological conditions within which evolution plays out. But nature remains blind to our intentions, and other organisms and ecosystems do not obey our rules. Climate change and disease evolution have been the wild cards of human history.

Our world now is very different from ancient Rome. We have public health, germ theory and antibiotic pharmaceuticals. We will not be as helpless as the Romans, if we are wise enough to recognise the grave threats looming around us, and to use the tools at our disposal to mitigate them. But the centrality of nature in Rome’s fall gives us reason to reconsider the power of the physical and biological environment to tilt the fortunes of human societies. Perhaps we could come to see the Romans not so much as an ancient civilisation, standing across an impassable divide from our modern age, but rather as the makers of our world today. They built a civilisation where global networks, emerging infectious diseases and ecological instability were decisive forces in the fate of human societies. The Romans, too, thought they had the upper hand over the fickle and furious power of the natural environment. History warns us: they were wrong.

Kyle Harper is professor of classics and letters and senior vice president and provost at the University of Oklahoma. He is the author of The Fate of Rome, recently released, as well as Slavery in the Late Roman World, AD 275–425 and From Shame to Sin: The Christian Transformation of Sexual Morality in Late Antiquity. He lives in Norman, Oklahoma.

This article was originally published at Aeon and has been republished under Creative Commons.

Barry Eichengreen on How Global Currencies Work

At first glance, the modern history of the global economic system seems to support the long-held view that the leading world power’s currency—the British pound, the U.S. dollar, and perhaps someday the Chinese yuan—invariably dominates international trade and finance. In How Global Currencies Work, three noted economists provide a reassessment of this history and the theories behind the conventional wisdom. Read on to learn more about the two views of global currencies, changes in international monetary leadership, and more.

Your title refers to “two views” of global currencies. Can you explain?
We distinguish the “old view” and the “new view”—you can probably infer from the terminology which view we personally incline toward. In the old view, one currency will tend to dominate as the vehicle for cross-border transactions at any point in time. In the past it was the British pound; more recently it has been the U.S. dollar; and in the future it may be the Chinese renminbi, these being the currencies of the leading international economies of the nineteenth, twentieth, and twenty-first centuries. The argument, grounded largely in theory, is that a single currency has tended to dominate, or will dominate, because it pays for investors and producers engaging in cross-border transactions to do business in the same currency as their partners and competitors. This pattern reflects the convenience value of conformity—what economists refer to as “network externalities.” In this view, it pays to quote the prices of one’s exports in the same units in which they are quoted by other exporters; this makes it easy for customers to compare prices, enabling a newly competitive producer to break into international markets. It pays to denominate bonds marketed to foreign investors in the same currency as other international bonds, in this case to make it easier for investors to compare yields and so maximize the demand for the bonds in question.

In what we call the new view, on the other hand, several national currencies can coexist—they can play consequential international roles at the same point in time. In the modern world, it is argued, network externalities are not all that strong. For one thing, interchangeability costs are low as a result of modern financial technology. The existence of deep and liquid markets allows investors and exporters to do business in a variety of different currencies and switch all but effortlessly between them—to sell one currency for another at negligible cost. The existence of hedging instruments allows those investors to insure themselves against financial risks—specifically, against the risk that prices will move in unexpected ways. Prices denominated in different currencies are easy to compare, since everyone now carries a currency converter in his or her pocket, in the form of a smartphone. These observations point to the conclusion, which is compelling in our view, that several national currencies can simultaneously serve as units of account, means of payment and stores of value for individuals, firms and governments engaged in cross-border transactions.

In our book we provide several kinds of evidence supporting the relevance of the new view, not just today but in the past as well. We suggest that the old view is an inaccurate characterization of not just the current state of affairs but, in fact, of the last century and more of international monetary history.

What exactly motivated you to write this book?
We were worried by the extent to which the old view, which pointed to a battle to the death for international monetary supremacy between the dollar and the renminbi, continues to dominate scholarly analysis and popular discourse. This misapprehension gives rise to concerns that we think are misplaced, and to policy recommendations that we think are misguided. Renminbi internationalization, the technical name for policies intended to foster use of China’s currency in cross-border transactions not just within China itself but among third countries as well, is not in fact an existential threat to the dollar’s international role. To the contrary, it is entirely consistent with continued international use of the greenback, or so our evidence suggests.

In addition, making a convincing case for the new view requires marshaling historical, institutional and statistical material and analyzing the better part of a century of international monetary history. We thought this extensive body of evidence cried out for a book-length treatment.

To what revisions of received historical wisdom does your analysis point?
We use that historical, institutional and statistical analysis to show that the old view of single-currency dominance is inaccurate not just for today but also as a description of the situation in the first half of the twentieth century and even in the final decades of the nineteenth. In the 1920s and 1930s, the pound sterling and the dollar both in fact played consequential international roles. Under the pre-World War I gold standard, the same was true of sterling, the French franc and the German mark. Our reassessment of the historical record suggests that the coexistence of multiple international currencies, the state of affairs toward which we are currently moving, is not the exception but in fact the rule. There is nothing unprecedented or anomalous about it.

And, contrary to what is sometimes asserted, we show that there is no necessary association between international currency competition and financial instability. The classical gold standard was a prototypical multiple international and reserve currency system by our reading of the evidence. But, whatever its other defects, the gold standard system was a strikingly stable exchange-rate arrangement.

Finally, we show that, under certain circumstances at least, international monetary and financial leadership can be gained and lost quickly. This is contrary to the conventional wisdom that persistence and inertia are overwhelmingly strong in the monetary domain owing to the prevalence of network effects. It is contrary to the presumption that changes of the guard are relatively rare. It is similarly contrary to the presumption that, once an international currency, always an international currency.

So you argue, contrary to conventional wisdom, that changes in international monetary leadership can occur quickly under certain circumstances.  But what circumstances exactly?
The rising currency has to confront and overcome economic and institutional challenges, while the incumbent has to find it hard to keep up. Consider the case of the U.S. dollar. As late as 1914 the dollar played essentially no international role, despite the fact that the U.S. had long since become the single largest economy. This initial position reflected the fact that although the U.S. had many of the economic preconditions in place—not only was it far and away the largest economy but it was also the number-one exporter—it lacked the institutional prerequisites. Passage of the Federal Reserve Act in 1913 corrected this deficiency. The founding of the Fed created a lender and liquidity provider of last resort. And the Federal Reserve Act authorized U.S. banks to branch abroad, essentially for the first time. World War I, which disrupted London’s foreign financial relations, meanwhile created an opening, of which the U.S. took full advantage. Over the first post-Fed decade, the greenback quickly rose to international prominence. It came to be widely used internationally, fully matching the role of the incumbent international currency, the British pound sterling, by the middle of the first post-World War I decade.

The shift to dollar dominance after World War II was equally swift. Again the stage was set by a combination of economic and institutional advances on the side of the rising power and difficulties for the incumbent. The U.S. emerged from World War II significantly strengthened economically, the UK significantly weakened. In terms of institutions, the U.S. responded to the unsettled monetary and financial circumstances of the immediate postwar period with the Marshall Plan and other initiatives that extended the country’s international financial reach. The UK, meanwhile, was forced to resort to capital controls and stringent financial regulation, which limited sterling’s appeal.

What are the implications of your analysis for the future of the international monetary and financial system?
The implications depend on the policies adopted, prospectively, by the governments and central banks that issue the potential international currencies. Here we have in mind not just the dollar and the renminbi but also the euro, the Euro Area being the third economy, along with the U.S. and China, with the economic scale that is a prerequisite for issuing a true international currency. If all three issuers follow sound and stable policies, then there is no reason why their currencies can’t share the international stage for the foreseeable future—in effect, there’s no reason why they can’t share that stage indefinitely. The global economy will be better off with three sources of liquidity than with the current status quo, in which it is all but wholly dependent on one.

In contrast, if one or more of the issuers in question follows erratic policies, investors will flee its currency, since in a world of multiple international and reserve currencies they will have alternatives—they will have somewhere to go. The result could be sharp changes in exchange rates, and the consequence could be volatility high enough to wreak havoc with national and international financial markets. So while a world of multiple international currencies has benefits, it also entails risks. Policy choices—and politics—will determine whether the risks or the benefits dominate in the end.

Barry Eichengreen is the George C. Pardee and Helen N. Pardee Professor of Economics and Political Science at the University of California, Berkeley. His books include Hall of Mirrors, Exorbitant Privilege, Globalizing Capital, and The European Economy since 1945. Arnaud Mehl is principal economist at the European Central Bank. Livia Chiţu is an economist at the European Central Bank.

Josephine Quinn: The Phoenicians never existed

The Phoenicians traveled the Mediterranean long before the Greeks and Romans, trading, establishing settlements, and refining the art of navigation. But who these legendary sailors really were has long remained a mystery. In Search of the Phoenicians by Josephine Quinn makes the startling claim that the “Phoenicians” never actually existed. Taking readers from the ancient world to today, this monumental book argues that the notion of these sailors as a coherent people with a shared identity, history, and culture is a product of modern nationalist ideologies—and a notion very much at odds with the ancient sources. Read on to learn more about the Phoenicians.

Who were the Phoenicians?

The Phoenicians were the merchants and long-distance mariners of the ancient Mediterranean. They came from a string of city-states on the coast of the Levant including the ports of Tyre, Sidon, Byblos, and Beirut, all in modern Lebanon, and spoke very similar dialects of a language very similar to Hebrew. Their hinterland was mountainous and land connections were difficult even between these neighboring cities themselves, so the Phoenicians were very much people of the sea. They had a particular genius for science and navigation, and as early as the ninth or tenth century BCE, their ships were sailing the full length of the Mediterranean and out through the straits of Gibraltar to do business on the Atlantic coast of Spain, attracted by the precious metals of the west. Levantine migrants and traders began to settle in the Western Mediterranean at least a century before Greeks followed suit, founding new towns in Spain, Sardinia, Sicily, and North Africa. Their biggest Western colony was at Carthage in modern Tunisia, a city which eventually eclipsed the homeland in importance, and under its brilliant general Hannibal vied with Rome for control of the Mediterranean: when Carthage was eventually destroyed by Roman troops in 146 BCE, it was said to be the wealthiest city in the world.

But doesn’t your book suggest that the Phoenicians didn’t even exist?

Not quite! The people we call Phoenician certainly existed as individuals, and they often have fascinating stories, from the Carthaginian noblewoman Sophonisba, who married not one but two warring African kings, to the philosopher Zeno of Kition on Cyprus, who moved to Athens and founded the Stoic school of philosophy. But one of the really intriguing things about them is how little we know about how they saw themselves—and my starting point in this book is that we have no evidence that they saw themselves as a distinct people or, as we might say, an ethnic group.

“Phoenician” is what the Greeks called these people, but we don’t find anyone using that label to describe themselves before late antiquity, and although scholars have sometimes argued that they called themselves “Canaanite,” a local term, one of the things I show in my book is how weak the evidence for that hypothesis really is. Of course, to say that they didn’t think of themselves as a distinct people just because we have no evidence of them describing themselves as such is an argument from silence, and it could be disproved at any moment by the discovery of a new inscription. But in the meantime, my core argument is that when we don’t know whether people thought of themselves as a collective, we shouldn’t simply assume that they did on the basis of ancient or modern parallels, or because ethnic identity seems “natural.”

So how did the Phoenicians see themselves?

This is the question I’m most interested in. Although there is no surviving Phoenician literature that might help us understand the way these people saw the world, Phoenician inscriptions reveal all sorts of interesting and sometimes surprising things that people wanted to record for posterity. They certainly saw themselves as belonging to their own cities, like the Greeks: they were “Byblians,” or “Sidonians,” or “Sons of Tyre.” But one of the things that I suggest in my book is that in inscriptions they present themselves first and foremost in terms of family: where a Greek inscription might give someone’s own name and that of their father, a Phoenician one will often go back several generations—16 or 17 in some cases. And then Phoenician-speaking migrants develop new practices of identification, including regional ones. We see particularly close relationships developing between neighboring settlements in the diaspora, and between people who are from the same part of the homeland. But we also see new, Western identities developing—‘Sardinian,’ for instance—which bring together Phoenicians, Greeks, and the local population.

And I think we can get further by looking at the evidence for cultural practices that Phoenician speakers share—or don’t share. So child sacrifice rituals seem to be limited to a small number of Western settlements around Carthage, but the cult of the god Melqart, the chief civic deity of Tyre, is practiced by people of Levantine origin all over the Mediterranean. And on my interpretation, Melqart’s broad popularity is quite a late development—in the fifth or fourth century BCE—which would suggest that a sense of connectivity between Phoenician-speakers in the diaspora got stronger the longer people had been away from their homeland. But at the same time, the cult reached out to other Mediterranean populations, since Melqart was celebrated by Greeks (and later Romans) as the equivalent of their own god Herakles.

Politics played a part in the construction of identities as well, and this is particularly apparent in one episode where an attempt seems to have been made to impose the notion of ‘being Phoenician’ on other people. By the late fifth century BCE Carthage was the dominant power in the western Mediterranean, controlling trade routes and access to ports, taxing defeated enemies, and beginning to acquire overseas territory as well, at the expense of other Levantine diaspora settlements. And at pretty much exactly this time they begin to mint coinage, and their very first coins have an image of a palm tree—or, in Greek, a phoinix, which is also the Greek word for Phoenician. It’s hard to resist the impression that celebrating a common ‘Phoenician’ heritage or identity put a useful political spin on the realities of Carthaginian imperial control.

If there’s so little evidence for genuine Phoenician identity in the ancient world, where does the modern idea of “the Phoenicians” come from?

The name itself comes from the Greeks, as we’ve already said, but they didn’t use it to delineate a specific ethnic or cultural group: for them, “Phoenician” was often a pretty vague and general term for traders and sailors from the Levant, without much cultural or ethnic content. You don’t get the same kind of detailed ethnographic descriptions of Phoenicians as you do of, for instance, Egyptians and Greeks. And the Romans followed suit: in fact, their particular focus on Carthage meant that the Latin words for “Phoenician”—poenus and punicus—were often used to mean ‘North African’ in general.

It wasn’t until the modern period that the idea of the Phoenicians as a coherent ethnic group fully emerged, in late nineteenth century European histories of Phoenicia that relied heavily on new and specifically European ideas about nationalism and natural cultures. This is when we first find them described as a racial group, with an “ethnic character.” And these notions were picked up enthusiastically in early twentieth century Lebanon, where the idea that the Lebanese had formed a coherent nation since antiquity was an important plank of the intellectual justification for a new Lebanese state after the collapse of the Ottoman empire—another story I tell in the book.

A more recent example comes from Anthony D. Smith’s wonderful 1988 book, The Ethnic Origins of Nations, which argues that although true nations are a modern phenomenon, they have precursors in ancient and medieval ethno-cultural communities. Among his ancient examples are what he sees as ‘pan-Phoenician sentiments’ based on a common heritage of religion, language, art and literature, political institutions, dress, and forms of recreation. But my argument is that in the case of the Phoenicians, at least, we are dealing not with the ancient ethnic origins of modern nations, but with the modern nationalist origins of an ancient ethnicity.

Is there any truth to the stories that the ancient Phoenicians reached America?

I’m afraid not! It’s an old idea: in the early eighteenth century Daniel Defoe argued, not long after he published Robinson Crusoe, that the Carthaginians must have colonized America on the basis of the similarities he saw between them and the indigenous Americans, in particular in relation to “their idolatrous Customs, Sacrificings, Conjurings, and other barbarous usages in the Worship of their Gods.” But the only real evidence that has ever been proposed for this theory, an inscription “found” in Brazil in 1872, was immediately diagnosed by specialists as a fake.

The idea that Phoenicians got to Britain, and perhaps even Ireland, makes more sense; Cornish tin could certainly have been one attraction. There is no strong evidence, however, for Phoenician settlement on either island, though the possibility captivated local intellectuals in the early modern period. One of the chapters I most enjoyed writing in this book is about the way that scholars in England concocted fantasies of Phoenician origins for their homeland, in part as a way of differentiating their own maritime power from the more territorial, and so “Roman,” French empire—at the same time as the Irish constructed a Phoenician past of their own that highlighted the similarity of their predicament under Britain’s imperial yoke to that of noble Carthage oppressed by brutal Rome.

These are of course just earlier stages in the same nationalist ‘invention of the Phoenicians’ that came to fruition in the nineteenth century histories we’ve already discussed: stories about Phoenicians helped the British and the Irish articulate their own national identities, which in turn further articulated the idea of the Phoenicians themselves.

Why did you write this book?

One reason was I really wanted to write a book about the ancient Mediterranean that wasn’t limited to Greece and Rome—though plenty of Greeks and Romans snuck in! But there’s another reason as well: “identity” has been such a popular academic topic in recent decades, and I wanted to explore its limits and even limitations as an approach to the ancient world. There are lots of reasons to think that a focus on ethnic identity, and even self-identity more generally, is a relatively modern phenomenon, and that our ideas about the strength and prevalence of ancient ethnic sentiments might be skewed by a few dramatic but unusual examples in places like Israel and perhaps Greece. I wanted to look at a less well-known but perhaps more typical group, to see what happens if we investigate them not as “a people,” but simply as people.

 

Josephine Quinn is associate professor of ancient history at the University of Oxford and a fellow of Worcester College. She is the coeditor of The Hellenistic West and The Punic Mediterranean.

 

Miller Oberman: The Grave

The Unstill Ones: Poems by award-winning poet Miller Oberman is an exciting debut collection of original poems and translations from Old English. Check out the author’s translation of The Grave, followed by the poem in Old English and the author’s original poem of the same name. 

A translation of “The Grave”

“The Grave” in Old English

“The Grave” after

“The Grave,” found on folio 170r of MS Bodley 343, is sometimes referred to as the last poem written in Old English, and its final three lines were likely added on later, in Middle English, by a scribe medievalists refer to as “the tremulous hand of Worcester.” While it’s impossible to say whether the shaky writing belonged to “the tremulous hand,” or whether this is indeed the final Old English poem, I like to think both are true.

At a recent reading I heard audible nervous laughter from the audience as I read my translation of “The Grave,” which at first surprised me. I later wondered why it doesn’t happen every time—it’s truly a discomfiting piece of writing, an uncommonly embodied depiction of the physical experience of the grave itself, written from the perspective of within. The poem is haunting in its second-person address, as your own grave seems to speak to you: “now you are measured, and the dirt after that.” Simple, declarative, and nearly impossible to argue with, the poem induces the claustrophobia of burial, and the loss of the self and the world.

It’s been crucial for me to hear and say this poem aloud in Old English, to allow its language the life and breath of speech. My translation is fairly literal, but the third reading here, my response to the poem, or my “after” has a different spatial relationship to death, if not to the physicality of the grave. It’s hard to make an argument about “self” to a poem written, memorized, and copied down anonymously a thousand years ago, but the speaker of my poem argues that, even if each grave is inevitable, the sky itself and those who continue to live under it are changed.

Miller Oberman has received a number of awards for his poetry, including a Ruth Lilly Fellowship, a 92Y Discovery Prize, and Poetry magazine’s John Frederick Nims Memorial Prize for Translation. His work has appeared in Poetry, London Review of Books, the Nation, Boston Review, Tin House, and Harvard Review. He lives in Brooklyn, New York. He is the author of The Unstill Ones: Poems.