Elizabeth Currid-Halkett: Conspicuous consumption is over. It’s all about intangibles now

In 1899, the economist Thorstein Veblen observed that silver spoons and corsets were markers of elite social position. In Veblen’s now famous treatise The Theory of the Leisure Class, he coined the phrase ‘conspicuous consumption’ to denote the way that material objects were paraded as indicators of social position and status. More than 100 years later, conspicuous consumption is still part of the contemporary capitalist landscape, and yet today, luxury goods are significantly more accessible than in Veblen’s time. This deluge of accessible luxury is a function of the mass-production economy of the 20th century, the outsourcing of production to China, and the cultivation of emerging markets where labour and materials are cheap. At the same time, we’ve seen the arrival of a middle-class consumer market that demands more material goods at cheaper price points.

However, the democratisation of consumer goods has made them far less useful as a means of displaying status. In the face of rising social inequality, both the rich and the middle classes own fancy TVs and nice handbags. They both lease SUVs, take airplanes, and go on cruises. On the surface, the ostensible consumer objects favoured by these two groups no longer reside in two completely different universes.

Given that everyone can now buy designer handbags and new cars, the rich have taken to using much more tacit signifiers of their social position. Yes, oligarchs and the superrich still show off their wealth with yachts and Bentleys and gated mansions. But the dramatic changes in elite spending are driven by a well-to-do, educated elite, or what I call the ‘aspirational class’. This new elite cements its status through prizing knowledge and building cultural capital, not to mention the spending habits that go with it – preferring to spend on services, education and human-capital investments over purely material goods. These new status behaviours are what I call ‘inconspicuous consumption’. None of the consumer choices that the term covers are inherently obvious or ostensibly material but they are, without question, exclusionary.

The rise of the aspirational class and its consumer habits is perhaps most salient in the United States. The US Consumer Expenditure Survey data reveals that, since 2007, the country’s top 1 per cent (people earning upwards of $300,000 per year) are spending significantly less on material goods, while middle-income groups (earning approximately $70,000 per year) are spending the same, and their trend is upward. Eschewing an overt materialism, the rich are investing significantly more in education, retirement and health – all of which are immaterial, yet cost many times more than any handbag a middle-income consumer might buy. The top 1 per cent now devote the greatest share of their expenditures to inconspicuous consumption, with education forming a significant portion of this spend (accounting for almost 6 per cent of top 1 per cent household expenditures, compared with just over 1 per cent of middle-income spending). In fact, top 1 per cent spending on education has increased 3.5 times since 1996, while middle-income spending on education has remained flat over the same time period.

The vast chasm between middle-income and top 1 per cent spending on education in the US is particularly concerning because, unlike material goods, education has become more and more expensive in recent decades. Thus, there is a greater need to devote financial resources to education to be able to afford it at all. According to Consumer Expenditure Survey data from 2003-2013, the price of college tuition increased 80 per cent, while the cost of women’s apparel increased by just 6 per cent over the same period. Middle-class lack of investment in education doesn’t suggest a lack of prioritising so much as it reveals that, for those in the 40th to 60th percentiles of the income distribution, education is so cost-prohibitive that it’s almost not worth trying to save for.

While much inconspicuous consumption is extremely expensive, it shows itself through less expensive but equally pronounced signalling – from reading The Economist to buying pasture-raised eggs. Inconspicuous consumption, in other words, has become a shorthand through which the new elite signal their cultural capital to one another. In lockstep with the invoice for private preschool comes the knowledge that one should pack the lunchbox with quinoa crackers and organic fruit. One might think these culinary practices are a commonplace example of modern-day motherhood, but one only needs to step outside the upper-middle-class bubbles of the coastal cities of the US to observe very different lunch-bag norms, consisting of processed snacks and practically no fruit. Similarly, while time in Los Angeles, San Francisco and New York City might make one think that every American mother breastfeeds her child for a year, national statistics report that only 27 per cent of mothers fulfil this American Academy of Pediatrics goal (in Alabama, that figure hovers at 11 per cent).

Knowing these seemingly inexpensive social norms is itself a rite of passage into today’s aspirational class. And that rite is far from costless: The Economist subscription might set one back only $100, but the awareness to subscribe and be seen with it tucked in one’s bag is likely the iterative result of spending time in elite social milieus and expensive educational institutions that prize this publication and discuss its contents.

Perhaps most importantly, the new investment in inconspicuous consumption reproduces privilege in a way that previous conspicuous consumption could not. Knowing which New Yorker articles to reference or what small talk to engage in at the local farmers’ market enables and displays the acquisition of cultural capital, thereby providing entry into social networks that, in turn, help to pave the way to elite jobs, key social and professional contacts, and private schools. In short, inconspicuous consumption confers social mobility.

More profoundly, investment in education, healthcare and retirement has a notable impact on consumers’ quality of life, and also on the future life chances of the next generation. Today’s inconspicuous consumption is a far more pernicious form of status spending than the conspicuous consumption of Veblen’s time. Inconspicuous consumption – whether breastfeeding or education – is a means to a better quality of life and improved social mobility for one’s own children, whereas conspicuous consumption is merely an end in itself – simply ostentation. For today’s aspirational class, inconspicuous consumption choices secure and preserve social status, even if they do not necessarily display it.

Elizabeth Currid-Halkett is the James Irvine Chair in Urban and Regional Planning and professor of public policy at the Price School, University of Southern California. Her latest book is The Sum of Small Things: A Theory of the Aspirational Class (2017). She lives in Los Angeles.

This article was originally published at Aeon and has been republished under Creative Commons.

Joshua Holden: Quantum cryptography is unbreakable. So is human ingenuity

Two basic types of encryption schemes are used on the internet today. One, known as symmetric-key cryptography, follows the same pattern that people have been using to send secret messages for thousands of years. If Alice wants to send Bob a secret message, they start by getting together somewhere they can’t be overheard and agree on a secret key; later, when they are separated, they can use this key to send messages that Eve the eavesdropper can’t understand even if she overhears them. This is the sort of encryption used when you set up an online account with your neighbourhood bank; you and your bank already know private information about each other, and use that information to set up a secret password to protect your messages.
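
To see the shape of a symmetric scheme in miniature (one shared secret; the same operation both locks and unlocks), here is a deliberately toy sketch in Python. It is my own illustration rather than anything from the essay, and a repeating-key XOR like this is trivially breakable, so it stands in only for the idea of symmetric encryption, not for real ciphers such as AES.

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with the repeating key. Running the same
    # function twice with the same key undoes itself, which is the
    # defining symmetric property: one shared secret for both directions.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

shared_key = b"agreed face to face, never sent over the wire"
ciphertext = xor_cipher(b"meet me at noon", shared_key)

print(ciphertext)                          # gibberish without the key
print(xor_cipher(ciphertext, shared_key))  # b'meet me at noon' for Bob
```

The whole scheme lives or dies on the key exchange: if Eve ever learns the shared key, she can decrypt exactly as easily as Bob does.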

The second scheme is called public-key cryptography, and it was invented only in the 1970s. As the name suggests, these are systems where Alice and Bob agree on their key, or part of it, by exchanging only public information. This is incredibly useful in modern electronic commerce: if you want to send your credit card number safely over the internet to Amazon, for instance, you don’t want to have to drive to their headquarters to have a secret meeting first. Public-key systems rely on the fact that some mathematical processes seem to be easy to do, but difficult to undo. For example, for Alice to take two large whole numbers and multiply them is relatively easy; for Eve to take the result and recover the original numbers seems much harder.
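
The lopsidedness of that mathematical bet can be made concrete with a few lines of Python. The sketch below is my own illustration, not an example from the essay: multiplying two primes is effectively instant, while undoing the multiplication by the most naive method, trial division, is already noticeably slower even for comically small numbers, and becomes hopeless at the hundreds-of-digits sizes used in practice.

```python
import time

# Two small primes standing in for Alice's secrets. Real systems use
# primes hundreds of digits long; these are tiny so that the 'hard'
# direction finishes at all.
p, q = 1_000_003, 1_000_033

start = time.perf_counter()
n = p * q                                   # the easy direction
print(f"multiply: {n} in {time.perf_counter() - start:.6f} s")

def trial_factor(n: int):
    # The most naive attack Eve could mount: try every small divisor.
    if n % 2 == 0:
        return 2, n // 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return n, 1                             # n turned out to be prime

start = time.perf_counter()
print(f"factor:   {trial_factor(n)} in {time.perf_counter() - start:.6f} s")
```

Real attacks are far cleverer than trial division, but the best known classical factoring algorithms still scale badly enough that modestly longer keys put the problem out of practical reach.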

Public-key cryptography was invented by researchers at the Government Communications Headquarters (GCHQ) – the British equivalent (more or less) of the US National Security Agency (NSA) – who wanted to protect communications between a large number of people in a security organisation. Their work was classified, and the British government neither used it nor allowed it to be released to the public. The idea of electronic commerce apparently never occurred to them. A few years later, academic researchers at Stanford and MIT rediscovered public-key systems. This time they were thinking about the benefits that widespread cryptography could bring to everyday people, not least the ability to do business over computers.

Now cryptographers think that a new kind of computer based on quantum physics could make public-key cryptography insecure. Bits in a normal computer are either 0 or 1. Quantum physics allows bits to be in a superposition of 0 and 1, in the same way that Schrödinger’s cat can be in a superposition of alive and dead states. This sometimes lets quantum computers explore possibilities more quickly than normal computers. While no one has yet built a quantum computer capable of solving problems of nontrivial size (unless they kept it secret), over the past 20 years, researchers have started figuring out how to write programs for such computers and predict that, once built, quantum computers will quickly solve ‘hidden subgroup problems’. Since all public-key systems currently rely on variations of these problems, they could, in theory, be broken by a quantum computer.

Cryptographers aren’t just giving up, however. They’re exploring replacements for the current systems, in two principal ways. One deploys quantum-resistant ciphers, which are ways to encrypt messages using current computers but without involving hidden subgroup problems. Thus they seem to be safe against code-breakers using quantum computers. The other idea is to make truly quantum ciphers. These would ‘fight quantum with quantum’, using the same quantum physics that could allow us to build quantum computers to protect against quantum-computational attacks. Progress is being made in both areas, but both require more research, which is currently being done at universities and other institutions around the world.

Yet some government agencies still want to restrict or control research into cryptographic security. They argue that if everyone in the world has strong cryptography, then terrorists, kidnappers and child pornographers will be able to make plans that law enforcement and national security personnel can’t penetrate.

But that’s not really true. What is true is that pretty much anyone can get hold of software that, when used properly, is secure against any publicly known attacks. The key here is ‘when used properly’. In reality, hardly any system is always used properly. And when terrorists or criminals use a system incorrectly even once, that can allow an experienced codebreaker working for the government to read all the messages sent with that system. Law enforcement and national security personnel can put those messages together with information gathered in other ways – surveillance, confidential informants, analysis of metadata and transmission characteristics, etc – and still have a potent tool against wrongdoers.

In his essay ‘A Few Words on Secret Writing’ (1841), Edgar Allan Poe wrote: ‘[I]t may be roundly asserted that human ingenuity cannot concoct a cipher which human ingenuity cannot resolve.’ In theory, he has been proven wrong: when executed properly under the proper conditions, techniques such as quantum cryptography are secure against any possible attack by Eve. In real-life situations, however, Poe was undoubtedly right. Every time an ‘unbreakable’ system has been put into actual use, some sort of unexpected mischance eventually has given Eve an opportunity to break it. Conversely, whenever it has seemed that Eve has irretrievably gained the upper hand, Alice and Bob have found a clever way to get back in the game. I am convinced of one thing: if society does not give ‘human ingenuity’ as much room to flourish as we can manage, we will all be poorer for it.

Joshua Holden is professor of mathematics at the Rose-Hulman Institute of Technology and the author of The Mathematics of Secrets.

This article was originally published at Aeon and has been republished under Creative Commons.

Michael Strauss: Our universe is too vast for even the most imaginative sci-fi

As an astrophysicist, I am always struck by the fact that even the wildest science-fiction stories tend to be distinctly human in character. No matter how exotic the locale or how unusual the scientific concepts, most science fiction ends up being about quintessentially human (or human-like) interactions, problems, foibles and challenges. This is what we respond to; it is what we can best understand. In practice, this means that most science fiction takes place in relatively relatable settings, on a planet or spacecraft. The real challenge is to tie the story to human emotions, and human sizes and timescales, while still capturing the enormous scales of the Universe itself.

Just how large the Universe actually is never fails to boggle the mind. We say that the observable Universe extends for tens of billions of light years, but the only way to really comprehend this, as humans, is to break matters down into a series of steps, starting with our visceral understanding of the size of the Earth. A non-stop flight from Dubai to San Francisco covers a distance of about 8,000 miles – roughly equal to the diameter of the Earth. The Sun is much bigger; its diameter is just over 100 times Earth’s. And the distance between the Earth and the Sun is about 100 times larger than that, close to 100 million miles. This distance, the radius of the Earth’s orbit around the Sun, is a fundamental measure in astronomy: the Astronomical Unit, or AU. The spacecraft Voyager 1, for example, was launched in 1977 and, travelling at 11 miles per second, is now 137 AU from the Sun.

But the stars are far more distant than this. The nearest, Proxima Centauri, is about 270,000 AU, or 4.25 light years away. You would have to line up 30 million Suns to span the gap between the Sun and Proxima Centauri. The Vogons in Douglas Adams’s The Hitchhiker’s Guide to the Galaxy (1979) are shocked that humans have not travelled to the Proxima Centauri system to see the Earth’s demolition notice; the joke is just how impossibly large the distance is.
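
Those comparisons can be checked with back-of-the-envelope arithmetic. The short Python sketch below does so; the rounded constants for the Sun’s diameter and the length of a light year are standard textbook values I have supplied for illustration, not figures taken from the essay.

```python
# Rounded values, in miles; textbook figures supplied for illustration.
EARTH_DIAMETER = 8_000           # roughly a Dubai-San Francisco flight
SUN_DIAMETER = 864_000           # a little over 100 Earth diameters
AU = 93_000_000                  # Earth-Sun distance, about 100 Sun diameters
LIGHT_YEAR = 5.88e12             # distance light travels in one year

proxima_miles = 4.25 * LIGHT_YEAR                 # distance to Proxima Centauri

print(f"Sun / Earth diameter: {SUN_DIAMETER / EARTH_DIAMETER:.0f}x")   # ~100x
print(f"AU / Sun diameter:    {AU / SUN_DIAMETER:.0f}x")               # ~100x
print(f"Proxima in AU:        {proxima_miles / AU:,.0f}")              # ~270,000
print(f"Suns to span the gap: {proxima_miles / SUN_DIAMETER:.1e}")     # ~3e7, i.e. ~30 million
```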

Four light years turns out to be about the average distance between stars in the Milky Way Galaxy, of which the Sun is a member. That is a lot of empty space! The Milky Way contains about 300 billion stars, in a vast structure roughly 100,000 light years in diameter. One of the truly exciting discoveries of the past two decades is that our Sun is far from unique in hosting a retinue of planets: evidence shows that the majority of Sun-like stars in the Milky Way have planets orbiting them, many with a size and distance from their parent star allowing them to host life as we know it.

Yet getting to these planets is another matter entirely: Voyager 1 would arrive at Proxima Centauri in 75,000 years if it were travelling in the right direction – which it isn’t. Science-fiction writers use a variety of tricks to span these interstellar distances: putting their passengers into states of suspended animation during the long voyages, or travelling close to the speed of light (to take advantage of the time dilation predicted in Albert Einstein’s theory of special relativity). Or they invoke warp drives, wormholes or other as-yet undiscovered phenomena.
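
The 75,000-year figure can be sanity-checked the same way. Using the essay’s rounded 11 miles per second, the calculation below (again with an assumed textbook value for the light year) lands at roughly 72,000 years; Voyager 1’s actual speed is a little under 11 miles per second, which nudges the answer up toward the quoted figure.

```python
LIGHT_YEAR = 5.88e12                  # miles; textbook value, not from the essay
SECONDS_PER_YEAR = 3600 * 24 * 365.25

distance = 4.25 * LIGHT_YEAR          # Sun to Proxima Centauri
speed = 11                            # miles per second, as quoted for Voyager 1

print(f"{distance / speed / SECONDS_PER_YEAR:,.0f} years")   # ~72,000 years
```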

When astronomers made the first definitive measurements of the scale of our Galaxy a century ago, they were overwhelmed by the size of the Universe they had mapped. Initially, there was great skepticism that the so-called ‘spiral nebulae’ seen in deep photographs of the sky were in fact ‘island universes’ – structures as large as the Milky Way, but at much larger distances still. While the vast majority of science-fiction stories stay within our Milky Way, much of the story of the past 100 years of astronomy has been the discovery of just how much larger than that the Universe is. Our nearest galactic neighbour is about 2 million light years away, while the light from the most distant galaxies our telescopes can see has been travelling to us for most of the age of the Universe, about 13 billion years.

We discovered in the 1920s that the Universe has been expanding since the Big Bang. But about 20 years ago, astronomers found that this expansion was speeding up, driven by a force whose physical nature we do not understand, but to which we give the stop-gap name of ‘dark energy’. Dark energy operates on length- and time-scales of the Universe as a whole: how could we capture such a concept in a piece of fiction?

The story doesn’t stop there. We can’t see galaxies from those parts of the Universe for which there hasn’t been enough time since the Big Bang for the light to reach us. What lies beyond the observable bounds of the Universe? Our simplest cosmological models suggest that the Universe is uniform in its properties on the largest scales, and extends forever. A variant idea says that the Big Bang that birthed our Universe is only one of a (possibly infinite) number of such explosions, and that the resulting ‘multiverse’ has an extent utterly beyond our comprehension.

The US astronomer Neil deGrasse Tyson once said: ‘The Universe is under no obligation to make sense to you.’ Similarly, the wonders of the Universe are under no obligation to make it easy for science-fiction writers to tell stories about them. The Universe is mostly empty space, and the distances between stars in galaxies, and between galaxies in the Universe, are incomprehensibly vast on human scales. Capturing the true scale of the Universe, while somehow tying it to human endeavours and emotions, is a daunting challenge for any science-fiction writer. Olaf Stapledon took up that challenge in his novel Star Maker (1937), in which the stars and nebulae, and cosmos as a whole, are conscious. While we are humbled by our tiny size relative to the cosmos, our brains can none the less comprehend, to some extent, just how large the Universe we inhabit is. This is hopeful, since, as the astrobiologist Caleb Scharf of Columbia University has said: ‘In a finite world, a cosmic perspective isn’t a luxury, it is a necessity.’ Conveying this to the public is the real challenge faced by astronomers and science-fiction writers alike.

Michael A. Strauss is professor of astrophysics at Princeton University and coauthor, with J. Richard Gott and Neil deGrasse Tyson, of Welcome to the Universe: An Astrophysical Tour.

This article was originally published at Aeon and has been republished under Creative Commons.

Thomas W. Laqueur: Ghosts and ghouls haunt the living with a message about life

There is, it would seem, no greater chasm than that which divides the living from the dead. We who still dwell on the side of life know this as we relegate the inert bodies of those so recently just like ourselves to the elements from which they came: earth or fire – ashes to ashes; air in the towers of the Zoroastrians; very occasionally, water. We do not just toss bodies over walls, whatever we might believe (or not believe) about a soul or an afterlife. We do it with care and with rituals: funeral and mourning. We do it because it is what humans do and have always done; it represents our entry into culture from nature. We live and have always lived with our dead. To do otherwise would be to expel the dead from the community of the living, to expunge them from history.

But, at the same time as we honour our dead, we generally also want to keep a certain distance. We expect them to leave us alone in our world and remain safely in theirs. When they don’t, it is a sign that something has gone very wrong. King Creon argues in Sophocles’ tragedy Antigone that the rebel Polyneices should remain unburied as punishment for his crimes: ‘unwept, unsepulchered, a treasure to feast on for birds looking out for a dainty meal’. Had he had his way, the shade of Polyneices would undoubtedly have returned to berate the living for their scandalous neglect. Antigone’s voice is the one we – or, in any case, our better selves – hear. Care for the dead is among the ‘unwavering, unwritten customs of the gods … not some trifle of now or yesterday, but for all eternity’.

This brings us to Halloween, and to All Saints’ Day on 1 November, and All Souls’ the day after – the days when the boundaries between the living and the dead seem most likely to be breached. Why are these still the days of ghosts and goblins, ghouls and dancing skeletons?

Before we can answer, we need a taxonomy of the dead who have returned to our world: the revenants. Within this large family there are two genera: the fleshly and the ethereal. And within each genus there are many species. Among the fleshly, there are vampires, for example – archaeologists have dug up skeletons in Poland with bricks in their mouths put there, they think, by villagers determined to keep the vampires from coming back to devour them. Vampires seldom stray far from home, while the Norse draugr, a fleshly revenant, wanders far afield. A related Norse species, the haugbúar, stays near its barrow, complains about the other inhabitants and affects the weather. The very corporeal Chinese walking dead travel great distances to be buried in a geomantically auspicious spot.

Within the genus of the ethereal revenant there are also many species: those that come back very soon after death to chide their friends for not giving them proper obsequies; the shade of Patroclus appears to Achilles in the Iliad under just these circumstances. Or ghosts such as Hamlet’s father, in full armour – a touch of the material – coming back to tell his son he’d been murdered. There are ghosts that give off foul vapours, and ghosts that strike people (although how they do that since they have no bodies is unclear).

One thing can be said about the whole family of revenants: they are generally not a cheery lot. They come back because something is wrong: some debt from life needs to be repaid or vengeance taken; or their bodies were insufficiently cared for; or their souls were ill-remembered. Friendly ghosts such as the cartoon character Casper are an extreme rarity. In monotheistic religions, God tends to keep a close watch on the boundaries of the other world and ghosts are rare; he draws the dead to him. Monotheistic religions tend to discourage traffic with the dead, which is called necromancy, a dangerous kind of magic. In religions without one god in charge, the revenant tends to proliferate.

But nowhere do they ever seem to go away. Not in the Age of Reason: James Boswell in his Life of Samuel Johnson (1791) writes, ‘It is wonderful that 5,000 years have now elapsed … and still it is undecided whether or not there has ever been an instance of the spirit of any person appearing after death.’ All sorts of good arguments are against it, ‘but all belief is for it’. Not in the 19th century either: Jeremy Bentham, the most rational of men and enemy of superstition, could not rid himself of a belief in ghosts.

Even today, Halloween encourages us to remember in a fuzzy sort of way the medieval custom of praying for the souls of the dead by name and asking the saints to speed them toward salvation. Back then, it was an occasion for any souls unhappy with efforts to help them to come back and complain. It was a time when the boundaries between the living and the dead seemed more porous. Few of us today think we can do much for the souls of the dead or that there is much border-crossing. But the ghosts of old and even new species of revenant, such as zombies – a whole other story – are still resonant. In part, this is because the revenants have gone inward; our guilt toward the dead in general, or someone in particular whom we might have wronged, makes itself vividly manifest in our minds. It is real even if we know it is not real.

In part, it is because we are all in some way haunted by the dead who are still part of us and of our lives. It is also because mortality remains so deeply strange and unbearable. Sigmund Freud gets this right. Reason is of little help. After tens of thousands of years, there has been little progress. In ‘hardly any other sphere,’ he writes in The Uncanny (1919), ‘has our thinking and feeling changed so little since primitive times or the old been so well preserved, under a thin veneer, as in our relation to death.’

Finally, to return to where we began, we wish our fellow creatures a good death and a peaceful rest within the community of the living because we need them among us. They remain part of the world as we imagine it. To be human is to care for the dead. But we also wish the dead and dying well in order to maintain the chasm between our world and theirs. The dead are primally dangerous; we need them to stay where they are, safely quarantined, in a parallel universe to ours.

Thomas W. Laqueur is the Helen Fawcett Professor of History at the University of California, Berkeley. His books include Making Sex: Body and Gender from the Greeks to Freud and Solitary Sex: A Cultural History of Masturbation. He is a regular contributor to the London Review of Books.

This article was originally published at Aeon and has been republished under Creative Commons.

James Q. Whitman: Why the Nazis studied American race laws for inspiration

On 5 June 1934, about a year and a half after Adolf Hitler became Chancellor of the Reich, the leading lawyers of Nazi Germany gathered at a meeting to plan what would become the Nuremberg Laws, the centrepiece anti-Jewish legislation of the Nazi race regime. The meeting was an important one, and a stenographer was present to take down a verbatim transcript, to be preserved by the ever-diligent Nazi bureaucracy as a record of a crucial moment in the creation of the new race regime.

That transcript reveals a startling fact: the meeting involved lengthy discussions of the law of the United States of America. At its very opening, the Minister of Justice presented a memorandum on US race law and, as the meeting progressed, the participants turned to the US example repeatedly. They debated whether they should bring Jim Crow segregation to the Third Reich. They engaged in detailed discussion of the statutes from the 30 US states that criminalised racially mixed marriages. They reviewed how the various US states determined who counted as a ‘Negro’ or a ‘Mongol’, and weighed whether they should adopt US techniques in their own approach to determining who counted as a Jew. Throughout the meeting the most ardent supporters of the US model were the most radical Nazis in the room.

The record of that meeting is only one piece of evidence in an unexamined history that is sure to make Americans cringe. Throughout the early 1930s, the years of the making of the Nuremberg Laws, Nazi policymakers looked to US law for inspiration. Hitler himself, in Mein Kampf (1925), described the US as ‘the one state’ that had made progress toward the creation of a healthy racist society, and after the Nazis seized power in 1933 they continued to cite and ponder US models regularly. They saw many things to despise in US constitutional values, to be sure. But they also saw many things to admire in US white supremacy, and when the Nuremberg Laws were promulgated in 1935, it is almost certainly the case that they reflected direct US influence.

This story might seem incredible. Why would the Nazis have felt the need to take lessons in racism from anybody? Why, most especially, would they have looked to the US? Whatever its failings, after all, the US is the home of a great liberal and democratic tradition. Moreover, the Jews of the US – however many obstacles they might have confronted in the early 20th century – never faced state-sponsored persecution. And, in the end, Americans made immense sacrifices in the struggle to defeat Hitler.

But the reality is that, in the early 20th century, the US, with its vigorous and creative legal culture, led the world in racist lawmaking. That was not only true of the Jim Crow South. It was true on the national level as well. The US had race-based immigration law, admired by racists all over the world; and the Nazis, like their Right-wing European successors today (and so many US voters), were obsessed with the dangers posed by immigration.

The US stood alone in the world for the harshness of its anti-miscegenation laws, which not only prohibited racially mixed marriages, but also threatened mixed-race couples with severe criminal punishment. Again, this was not law confined to the South. It was found all over the US: Nazi lawyers carefully studied the statutes, not only of states such as Virginia, but also states such as Montana. It is true that the US did not persecute the Jews – or at least, as one Nazi lawyer remarked in 1936, it had not persecuted the Jews ‘so far’ – but it had created a host of forms of second-class citizenship for other minority groups, including Chinese, Japanese, Filipinos, Puerto Ricans and Native Americans, scattered all over the Union and its colonies. American forms of second-class citizenship were of great interest to Nazi policymakers as they set out to craft their own forms of second-class citizenship for German Jewry.

Not least, the US was the greatest economic and cultural power in the world after 1918 – dynamic, modern, wealthy. Hitler and other Nazis envied the US, and wanted to learn how the Americans did it; it’s no great surprise that they believed that what had made America great was American racism.

Of course, however ugly American race law might have been, there was no American model for Nazi extermination camps. The Nazis often expressed their admiration for the American conquest of the West, when, as Hitler declared, the settlers had ‘shot down the millions of Redskins to a few hundred thousand’. In any case extermination camps were not the issue during the early 1930s, when the Nuremberg Laws were framed. The Nazis were not yet contemplating mass murder. Their aim at the time was to compel the Jews by whatever means possible to flee Germany, in order to preserve the Third Reich as a pure ‘Aryan’ country.

And here they were indeed convinced that they could identify American models – and some strange American heroes. For a young Nazi lawyer named Heinrich Krieger, for example, who had studied at the University of Arkansas as an exchange student, and whose diligent research on US race law formed the basis for the work of the Nazi Ministry of Justice, the great American heroes were Thomas Jefferson and Abraham Lincoln. Did not Jefferson say, in 1821, that it is certain ‘that the two races, equally free, cannot live in the same government’? Did not Lincoln often declare, before 1864, that the only real hope of America lay in the resettlement of the black population somewhere else? For a Nazi who believed that Germany’s only hope lay in the forced emigration of the Jews, these could seem like shining examples.

None of this is entirely easy to talk about. It is hard to overcome our sense that if we influenced Nazism we have polluted ourselves in ways that can never be cleansed. Nevertheless the evidence is there, and we cannot read it out of either German or American history.

James Q. Whitman is the Ford Foundation Professor of Comparative and Foreign Law at Yale Law School. His books include Harsh Justice, The Origins of Reasonable Doubt, and The Verdict of Battle. He lives in New York City. His forthcoming book, Hitler’s American Model, is out in March from Princeton.

This article was originally published at Aeon and has been republished under Creative Commons.

Nancy Malkiel: Coeducation at university was – and is – no triumph of feminism

The 1960s witnessed a major shift in higher education in the Anglo-American world, which saw university life upended and reshaped in profoundly important ways: in the composition of student bodies and faculties; structures of governance; ways of doing institutional business; and relationships to the public issues of the day. Coeducation was one of those changes. But neither its causes nor its consequences were what one might expect.

Beginning in 1969, and mostly ending in 1974, there was a flood of decisions in favour of coeducation in the United States and the United Kingdom. Harvard, Yale and Princeton in the US; Churchill, Clare and King’s at Cambridge; Brasenose, Hertford, Jesus, St Catherine’s and Wadham at Oxford – many of the most traditional, elite and prestigious men’s colleges and universities suddenly welcomed women to their undergraduate student bodies.

However, as I argue in ‘Keep the Damned Women Out’: The Struggle for Coeducation (2016), this was not the result of women banding together to demand opportunity, press for access or win rights and privileges previously reserved for men. As appealing as it might be to imagine the coming of coeducation as one element in the full flowering of mid- to late-20th-century feminism, such a narrative would be at odds with the historical record. Coeducation resulted not from organised efforts by women activists, but from strategic decisions made by powerful men. Their purpose, in the main, was not to benefit college women, but to improve the opportunities and educational experiences of college men.

For one thing, coeducation was not on the feminist agenda in the 1960s and ’70s. The emerging women’s movement had other priorities. Some of these had to do with the rights and privileges of women in the public sphere: equal access to jobs; equal pay for equal work; legal prohibitions against discrimination on the basis of sex – the agenda, for example, of Betty Friedan and other founders of the National Organization for Women in 1966. Other priorities concerned the status of women in the private realm, striking at societal expectations about sex roles and conventional relationships between women and men. One of the movement’s earliest proponents, Gloria Steinem, spoke out about such feminist issues as abortion and the Equal Rights Amendment; and in 1971, speaking at commencement at her alma mater, Smith College, she said that Smith needed to remain a college for women. Steinem argued that remaining single-sex was a feminist act. Like Wellesley College, Smith was at the time considering a high-level report recommending coeducation. And like Wellesley, Smith – influenced in part by Steinem and the women’s movement – backed away from taking such a step.

Just as the drive for coeducation had nothing to do with the triumph of feminism, so it had little to do with a high-minded commitment to opening opportunities to women. The men who brought coeducation to previously all-male institutions were acting not on any moral imperative, but in their own institutional self-interest. Particularly in the US, elite institutions embarked on coeducation to shore up their applicant pools at a time when male students were making it plain that they wanted to go to school with women. Presidents such as Kingman Brewster Jr of Yale (1963-77) and Robert F Goheen of Princeton (1957-72) were forthright about their overriding interest: to enrol women students in order to recapture their hold on ‘the best boys’.

That the educational needs and interests of women were not uppermost on these men’s minds doubtless bears on the ways in which coeducation fell short of contributing to real equality between the sexes. That was true in the universities, where coeducation did not mean revolution. Contemporaries called the pioneering women students ‘honorary men’; they were included and assimilated, but they were expected to accept or embrace longstanding institutional traditions, not to upend them.

Nor did coeducation lead to a levelling of the playing field for men and women, during their college years or beyond. Coeducation did not resolve the perplexingly gendered behaviours and aspirations of female students. While women present credentials on entrance that match or exceed those of men, they still tend to shy away from studies in fields such as mathematics, physics, computer science and economics, where men dominate. Moreover, even in fields where women are well-represented, men, rather than women, achieve at the highest academic levels.

Women also make gendered choices about extracurricular pursuits: they typically undersell themselves, choosing to focus on the arts and community service, while declining to put themselves forward for major leadership positions in mainstream campus activities.

Just as importantly, sexual harassment and sexual assault are no more under control after more than four decades of coeducation than they were when men and women first started going to college together.

And women continue to face significant challenges in finding professional leadership opportunities and realising professional advancement. The handful of women CEOs in major corporations continue to be the exception, not the rule. Despite the fact that a second woman has now become prime minister of the UK and that a woman has for the first time won a major party nomination for president of the US, women are significantly underrepresented in the US Senate, the US House of Representatives, and the British Parliament. There continues to be a significant gender gap in salaries, from entry-level jobs to much higher-level positions. Achieving a manageable work-family balance is a persistent problem for women, with even the most highly educated female professionals facing pressure to step out of the labour force to raise children.

In short, coeducation has fallen well short of righting the fundamental gender-driven challenges that still bedevil our society. It has not succeeded (perhaps it could not have been expected to succeed) in accomplishing real equality for young women in colleges and universities, or in the worlds of work and family that follow.

Nancy Weiss Malkiel is professor emeritus of history at Princeton University, where she was the longest-serving dean of the college, overseeing the university’s undergraduate academic program for twenty-four years. Her books include Whitney M. Young, Jr., and the Struggle for Civil Rights and Farewell to the Party of Lincoln: Black Politics in the Age of FDR (both Princeton).

This article was originally published at Aeon and has been republished under Creative Commons.

Katharine Dow: Can surrogacy ever escape the taint of global exploitation?

Surrogate motherhood has a bad rep, as a murky business far removed from everyday experience – especially when it comes to prospective parents from the West procuring the gestational services of less privileged women in the global South. So while middle-class 30- and 40-somethings swap IVF anecdotes over the dinner table, and their younger female colleagues are encouraged by ‘hip’ employers to freeze their eggs as an insurance policy against both time and nature, surrogacy continues to induce a great deal of moral handwringing.

The Kim Cotton case in 1985 was the first attempt to arrange a commercial surrogacy agreement in the United Kingdom. It set the tone for what was to come. Cotton was paid £6,500 to have a baby for an anonymous Swedish couple, and her story provoked sensational press-fuelled panic. British legislators, too, saw surrogacy as likely to lead to exploitation, with poorer women coerced into acting as surrogates out of financial need, and with intended parents taken advantage of by unscrupulous surrogacy brokers. Their action was swift: within just months of the Cotton story breaking, a law was passed banning for-profit surrogacy in the UK.

With the growth of an international surrogacy industry over the past two decades, worries over surrogacy’s fundamentally exploitative character have only intensified. Worst-case scenarios such as the Baby Gammy case in 2014, involving an Australian couple and a Thai surrogate, suggest that surrogacy frequently is exploitative. But that’s less because paying someone to carry and bear a child on your behalf is inherently usurious than because the transaction takes place in a deeply unequal world. The Baby Gammy case was complicated by other unsavoury factors, since the child, born with Down’s Syndrome, seemed to be rejected by his intended parents because of his condition. Then it turned out that the intended father had a previous conviction for child sex offences, which rather overshadowed the potential exploitation experienced by Gammy’s surrogate – and now de facto – mother.

I am not arguing for a laissez-faire approach to regulating surrogacy, but for thinking more deeply about how surrogacy reflects the context in which it takes place.

We need to step back and think critically about what makes people so driven to have a biogenetically related child that they are prepared to procure the intimate bodily capacity of another, typically less privileged, person to achieve that. We should also listen to surrogates, and try to understand why they might judge surrogacy as their best option. Intended parents are not always uncaring nabobs, and surrogate mothers are not just naïve victims; but while the power dynamic between them is decidedly skewed, each is subject to particular cultural expectations, moral obligations and familial pressures.

As for the larger context, we increasingly outsource even the most intimate tasks to those whose labour is cheap, readily available and less regulated. If we think of surrogacy as a form of work, it doesn’t look that different from many other jobs in our increasingly casualised and precarious global economic context, like selling bodily substances and services for clinical trials, biomedical research or product testing, or working as domestic staff and carers.

And surrogacy is on the rise. Both in the UK and in the United States, where some states allow commercial surrogacy and command the highest fees in the world, increasing numbers of would-be parents are turning to the international surrogacy industry: 95 per cent of the 2,000 surrogate births to UK intended parents each year occur overseas. With the age at which women have their first child increasing, more women are finding it difficult to conceive; and there’s now greater access to fertility treatments for same-sex couples and single people. In addition, surrogacy has become the option of choice for gay couples, transgender people, and single men wanting a biogenetically related child.

For me, as someone who has studied surrogacy, the practice is problematic because it reveals some of our most taken-for-granted assumptions about the nature of family. It also tells us much about work, gender, and how the two are connected. This is why it is so challenging.

At a time when parent-child relationships often appear to be one of the few remaining havens in an increasingly heartless world, surrogacy suggests that there might not be a straightforward relationship between women’s reproductive biology, their capacity to produce children, and their desire to nurture. The usual debates that focus simply on whether or not surrogacy is exploitative sidestep some of these uncomfortable truths, and make it difficult to ask more complicated questions about the practice.

There is a parallel here with abortion debates. Trying to define and defend the sanctity of life is important, but this also obscures highly problematic issues, such as the gendered expectation that women should look after children; the fact that women typically bear responsibility for contraception (and the disproportionate consequences of not using it); the prevalence of non-consensual sex; and the pressure on women to produce children to meet familial obligations.

Surrogacy is a technology. And like any other technology we should not attribute to it magical properties that conceal its anthropogenic – that is, human-made – character. It’s all too easy to blame surrogacy or the specific individuals who participate in it rather than to ask why surrogacy might make sense as a way of having children at all. We should give credit to intended parents and surrogate mothers for having thought deeply about their decisions, and we should not hold them individually responsible for surrogacy’s ills.

Katharine Dow is a research associate in the Reproductive Sociology Research Group at the University of Cambridge. She is the author of Making a Good Life.

This article was originally published at Aeon and has been republished under Creative Commons.

Jason Brennan: The right to vote should be restricted to those with knowledge

Who should hold power: the few or the many? Concentrating power in the hands of a few – in monarchy, dictatorship or oligarchy – tends to result in power for personal benefit at the expense of others. Yet in spreading power among the many – as in a democracy – individual votes no longer matter, and so most voters remain ignorant, biased and misinformed.

We have a dilemma.

Republican, representative democracy tries to split the difference. Checks and balances, judicial reviews, bills of rights and elected representatives are all designed to hold leaders accountable to the people while also constraining the foolishness of the ignorant masses. Overall, these institutions work well: in general, people in democracies have the highest standards of living. But what if we could do better?

Consider an alternative political system called epistocracy. Epistocracies retain the same institutions as representative democracies, including liberal constitutional limits on power, bills of rights, checks and balances, elected representatives and judicial review. But while democracies give every citizen an equal right to vote, epistocracies apportion political power, by law, according to knowledge or competence.

The idea here is not that knowledgeable people deserve to rule – of course they don’t – but that the rest of us deserve not to be subjected to incompetently made political decisions. Political decisions are high stakes, and democracies entrust some of these high-stakes decisions to the ignorant and incompetent. Democracies tend to pass laws and policies that appeal to the median voter, yet the median voter would fail Econ, History, Sociology, and Poli Sci 101. Empirical work generally shows that voters would support different policies if they were better informed.

Voters tend to mean well, but voting well takes more than a kind heart. It requires tremendous social scientific knowledge: knowledge that most citizens lack. Most voters know nothing, but some know a great deal, and some know less than nothing. The goal of liberal republican epistocracy is to protect against democracy’s downsides, by reducing the power of the least-informed voters, or increasing the power of better-informed ones.

There are many ways of instituting epistocracy, some of which would work better than others. For instance, an epistocracy might deny citizens the franchise unless they can pass a test of basic political knowledge. They might give every citizen one vote, but grant additional votes to citizens who pass certain tests or obtain certain credentials. They might pass all laws through normal democratic means, but then permit bands of experts to veto badly designed legislation. For instance, a board of economic advisors might have the right to veto rent-control laws, just as the Supreme Court can veto laws that violate the Constitution.

Or, an epistocracy might allow every citizen to vote while also requiring them to take a test of basic political knowledge and to submit their demographic information. With such data, any statistician could calculate the public’s ‘enlightened preferences’, that is, what a demographically identical voting population would support if only it were better informed. An epistocracy might then instantiate the public’s enlightened preferences rather than their actual, unenlightened preferences.
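
To make that procedure concrete, here is a minimal sketch in Python using scikit-learn. Everything in it (the survey columns, the simulated data, the choice of a logistic regression) is an illustrative assumption of mine rather than a method specified in the essay: it fits support for a policy as a function of demographics plus a knowledge score, then asks what average support would be if every respondent kept their demographics but aced the knowledge test.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical survey: demographics, a 0-10 political-knowledge score, and
# whether the respondent supports some policy. The column names and the
# simulated relationship below are illustrative, not real data.
rng = np.random.default_rng(0)
n = 5_000
survey = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "income_k": rng.normal(60, 20, n).round(1),   # household income, $1,000s
    "female": rng.integers(0, 2, n),
    "knowledge": rng.integers(0, 11, n),
})
logit = (-1.0 + 0.02 * (survey["age"].to_numpy() - 50)
         + 0.25 * (survey["knowledge"].to_numpy() - 5))
survey["supports_policy"] = rng.random(n) < 1 / (1 + np.exp(-logit))

features = ["age", "income_k", "female", "knowledge"]
model = LogisticRegression(max_iter=1_000)
model.fit(survey[features], survey["supports_policy"])

# Actual preferences: average predicted support as things stand.
actual = model.predict_proba(survey[features])[:, 1].mean()

# 'Enlightened' preferences: identical demographics, perfect knowledge score.
informed = survey[features].copy()
informed["knowledge"] = 10
enlightened = model.predict_proba(informed)[:, 1].mean()

print(f"actual support:      {actual:.1%}")
print(f"enlightened support: {enlightened:.1%}")
```

On real survey data the modelling choices would matter a great deal; the sketch only shows the shape of the calculation.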

A major question is what counts (and who decides what counts) as political competence, or basic political knowledge. We don’t want self-interested politicians rigging a political competence exam in their own favour. One might use widely accepted pre-existing tests: the United States citizenship test, for example, or the same questions that the American National Election Studies have used for 60 years. These questions – who is the current president? Which item is the largest part of the federal budget? – are easily verifiable and uncontroversial, plus an ability to answer them correctly is strongly correlated with the kind of political knowledge that does matter in an election.

One common objection to epistocracy – at least among political philosophers – is that democracy is essential to expressing the idea that everyone is equal. On its face, this is a strange claim. Democracy is a political system, not a poem or a painting. Yet people treat the right to vote like a certificate of commendation, meant to show that society regards you as a full member of the national club. (That’s one reason we disenfranchise felons.) But we could instead view the franchise as no more significant than a plumbing or medical licence. The US government denies me such licences, but I don’t regard that as expressing that I’m inferior, all things considered, to others.

Others object that the equal right to vote is essential to make government respond to our interests. But the math doesn’t check out. In most major elections, I have as much chance of making a difference as I do of winning the lottery. How we vote matters, but how any one of us votes, or even whether one votes, makes no difference. It might be a disaster if Donald Trump wins the presidency, but it’s not a disaster for me to vote for him. As the political theorist Ben Saunders says: in a democracy, each person’s power is so small that insisting on equality is like arguing over the crumbs of a cake rather than an equal slice.

On the other hand, it’s true (at least right now) that certain demographic groups (such as rich white men) are more likely to pass a basic political knowledge test than others (such as poor black women). Hence the worry that epistocracies will favour the interests of some groups over others. But this worry might be overstated. Political scientists routinely find that so long as individual voters have a low chance of being decisive, they vote for what they perceive to be the common good rather than their self-interest. Further, it might well be that excluding or reducing the power of the least knowledgeable 75 per cent of white people produces better results for poor black women than democracy does.

Of course, any epistocratic system would face abuse. It’s easy to imagine all the things that might go wrong. But that’s also true of democracy. The more interesting question is which system, warts and all, would work best. In the end, it’s a mistake to picture epistocracy as being the rule of an elite band of technocrats or ‘philosopher kings’. Rather, the idea is: do what democracy does, but better. Democracy and epistocracy both spread power among the many, but epistocracy tries to make sure the informed many are not drowned out by the ignorant or misinformed many.

Jason Brennan is the Robert J. and Elizabeth Flanagan Family Associate Professor of Strategy, Economics, Ethics, and Public Policy at the McDonough School of Business at Georgetown University. He is the author of The Ethics of Voting (Princeton), Why Not Capitalism?, Libertarianism and most recently, Against Democracy. He is the coauthor of Markets without Limits, Compulsory Voting, and A Brief History of Liberty. He writes regularly for Bleeding Heart Libertarians, a blog.

This article was originally published at Aeon and has been republished under Creative Commons.

Nile Green: What happened when a Muslim student went to Cambridge in 1816

Two hundred years ago, there arrived in London the first group of Muslims ever to study in Europe. Dispatched by the Crown Prince of Iran, their mission was to survey the new sciences emerging from the industrial revolution.

As the six young Muslims settled into their London lodgings in the last months of 1815, they were filled with excitement at the new kind of society they saw around them. Crowds of men and women gathered nightly at the ‘spectacle-houses’, as they called the city’s theatres. London was buzzing with the final defeat of Napoleon at Waterloo a few months earlier, and the new sciences – or ulum-i jadid – that the students had been sent to discover seemed to be displayed everywhere, not least in the new steamboats that carried passengers along the Thames.

As the weeks turned into months, the six strangers began to realise the scale of their task. They had no recognisable qualifications, and no contacts among the then-small groves of academe: they didn’t even know the English language. At the time, there was no Persian-to-English dictionary to help them.

Hoping to learn English, and the Latin that they mistakenly took to still be Europe’s main language of science, the would-be students enlisted a clergyman by the name of Reverend John Bisset. An Oxford graduate, Bisset told them about England’s two ancient seats of learning. When two of the students were subsequently taken on by the mathematician and polymath Olinthus Gregory, further links were forged with the universities, since Gregory had spent several years as a successful bookseller in Cambridge. A plan was hatched to introduce at least one of the students, Mirza Salih, to a professor who might be amenable to helping a foreigner study informally at one of the Cambridge colleges.

This was long before Catholics were allowed to study at Britain’s universities, so the arrival in Cambridge of an Iranian Muslim (one who would go on to found the first newspaper in Iran) caused both a sensation and consternation.

The don who was selected to host Salih was a certain Samuel Lee of Queens’ College. Lee appears to have been an odd candidate to support the young taliban, as the students were called in Persian. A committed Evangelical, Lee was devoted to the cause of converting the world’s Muslims to Christianity. Along with other colleagues at Queens’, including the influential Venn family, he also had close ties to the Church Missionary Society. Founded in 1799, the Society was fast becoming the centre of the Cambridge missionary movement.

Yet it was precisely this agenda that made the young Muslim so attractive to Lee. The point was not so much that Salih’s conversion might bring one more soul to Christian salvation. Rather, it was that as an educated Persian-speaker, Salih might help the professor in his great task of translating the Bible into Persian, a language that was at the time also used across India, as well as what is today Iran. Lee jumped at the opportunity. And so it was that Salih was invited to Cambridge.

As his Persian diary reveals, Salih came to like the professor enormously. For though posterity would commemorate Lee as the distinguished Oxbridge Orientalist who rose to the grand status of Regius Professor of Hebrew, his upbringing was far humbler. Lee had been raised in a small Shropshire village in a family of carpenters and, in his teens, was apprenticed to a woodworker himself. On a research trip from California, I visited Lee’s home village of Longnor. It is still a remote place today, reached by single-lane tracks hidden in the hedgerows. At the local church, I was delighted to find the initials of his carpenter great-grandfather, Richard Lee, carved into the pews he had made for his fellow villagers.

Two hundred years ago, it was almost unknown for a country boy like Sam Lee to become a Cambridge professor, but he had a genius for languages that won him the patronage of a local gentleman. As a similarly ambitious young scholar on the make, Salih warmed to the self-made Lee, and in his Persian diary he recorded his life story with admiration.

Through Lee’s patronage, Salih was able to lodge at Queens’ College, and dine in the hall with dons such as William Mandell and Joseph Jee. At the time, the president of Queens’ was the natural philosopher Isaac Milner, as famous a conversationalist as he was a chemist. Salih certainly enjoyed the dinners at the high table, but his time in Cambridge was not all a Regency feast. He made study tours of the libraries that interested him, especially the Wren Library at Trinity College, which housed the statue of Sir Isaac Newton. In his diary, Salih called him ‘a philosopher who was both the eyes and the lantern of England’.

In return for having the closed world of the university opened to him, Salih helped Lee in his work on the Persian Bible. He even wrote a letter of recommendation when Lee was first nominated for the post of Regius Professor. The letter is still preserved in the university archives.

Between Salih’s diary, Lee’s letters and university documents, a rich picture emerges of the unlikely relationship formed between this foreign Muslim and what was then the most muscularly Christian of the Cambridge colleges.

The university was only one of many places that Salih and his fellow Muslim students visited during their four years in England, questing for the scientific fruits of the Enlightenment. The encounter between ‘Islam and the West’ is often told in terms of hostility and conflict, but Salih’s diary presents a quite different set of attitudes – cooperation, compassion and common humanity – and, in its record of an unexpected bond with the evangelical Lee, unlikely friendship. Written in England at the same time as the novels of Jane Austen, Salih’s diary is a forgotten testament and a salutary reminder of the humane encounter between Europeans and Muslims at the dawn of the modern era.

Nile Green is professor of history at UCLA. His many books include Sufism: A Global History and The Love of Strangers. He lives in Los Angeles.

This article was originally published at Aeon and has been republished under Creative Commons.

Sara Lewis: Why we must keep the fires of the magical firefly burning

Once upon a time, fireflies were abundant throughout Japan. For more than 1,000 years, these glowing harbingers of summer shone brilliantly through the fabric of Japanese culture, which celebrated them in poetry and art. During the late 1800s, hundreds of city dwellers journeyed to the countryside to view their dazzling displays. In 1902, Lafcadio Hearn, the acclaimed English-language author and interpreter of Japanese culture, described this summertime spectacle:

Myriads of fireflies dart from either bank, to meet and cling above the water. At moments they so swarm together as to form what appears to the eye like a luminous cloud, or like a great ball of sparks.

Sometimes, when many fireflies gathered together, they would all glow slowly on, then off again, in unison, as if the air itself were breathing. Yet within just a few decades, these beloved insects would be nearly extinguished from the Japanese countryside.

Best-known among Japan’s 50 different firefly species is the Genji firefly, Luciola cruciata. With its fast-flowing rivers and streams, Japan provides ideal habitats for this firefly, whose life cycle is intimately tied to water. Females lay their eggs along mossy riverbanks, and newly hatched larvae crawl down into the water. As juveniles, these aquatic fireflies spend several months underwater, feasting on freshwater snails. Eventually, the young fireflies crawl back onto land, before metamorphosing into the familiar adults. As forerunners of early summer, Genji fireflies’ lime-green lights float silently over the water, mysterious and otherworldly.

Why did they fade away into glowing ghosts? Although Japanese fireflies faced many hazards, perhaps the most destructive was overharvesting, followed by habitat degradation.

During the Meiji period (1868-1912), the popular summer pastime of firefly-watching segued into commercial firefly harvesting. Live fireflies were in vogue, and people were willing to pay good money to enjoy their luminous beauty closer to home. Setting up shop in prime firefly locations, firefly wholesalers hired dozens of local firefly hunters. A single skilled hunter could bag up to 3,000 wild fireflies per night, working sunset to sunrise. In the morning, shop owners carefully packaged up the night’s catch and dispatched cages full of live fireflies to clients in Osaka, Kyoto and Tokyo, where the insects were released into hotel, restaurant and private gardens so that city dwellers might enjoy their brightly glowing show.

Japanese fireflies, harvested for their beauty, were being loved to death. As the demand for live fireflies grew, wild populations began to decline. Apparently no one cared that, once harvested, adult fireflies survived only a week or two; when they died, they were just replaced with freshly harvested new ones. And apparently no one cared that firefly hunters indiscriminately harvested not just the males, but also the precious egg-laying females, thereby extinguishing the only hope fireflies had to replenish their own populations. At the same time, rapid industrialisation and urban development led to the degradation of the fireflies’ natural habitat, as industrial effluent, agricultural runoff and household sewage flowed freely into rivers. River pollution curtailed the survival of the aquatic juveniles, and killed off their snail prey.

By the early 1920s, people began to take notice that firefly populations around Japan were thinning out. In response, the Japanese government in 1924 established the first National Natural Monument, providing legal protection for the Genji firefly habitat. Local communities undertook municipal projects to clean up their rivers, while commercial harvesting of wild fireflies was regulated, then banned altogether. Numerous private citizens attempted to raise the aquatic fireflies in captivity, using trial and error to coddle them through each life stage. Once these artificial breeding programmes worked out how to rear large numbers of firefly larvae, the young fireflies were reintroduced into rivers to bolster dwindling natural populations. While Japanese fireflies have never been restored to their former glory, a predictably sad saga was transformed into a conservation success story by an impressive combination of national, local and private efforts. Now, Genji fireflies have become a symbol of national pride and Japanese environmentalism.

So what lessons can the United States take from this Japanese tale?

For nearly half a century, fireflies in the US were also harvested – this time, though, it was for their chemicals. Beginning around 1960, what’s now the Sigma-Aldrich chemical company of St Louis in Missouri extracted light-producing chemicals from fireflies that had been harvested from wild populations. Sigma enlisted thousands of collectors, across 25 Midwestern and Eastern states, to collect and process more than 3 million fireflies every summer. After extracting the firefly chemicals, the company marketed bioluminescence kits that were widely used in medical research and food-safety testing.

Fortunately for fireflies, in 1985 scientists developed synthetic versions of these chemicals that are both cheaper and more reliable, obviating the need to harvest wild fireflies. Yet near Oak Ridge, Tennessee, a mysterious gentleman named Dwight Sullivan was, even as recently as 2014, still paying collectors who harvested more than 40,000 wild fireflies that summer.

North America has more than 120 different firefly species. Some are abundant and widespread; others are rare, with restricted distributions. Surely we have plenty of fireflies! I suppose the Japanese thought so, too. And US fireflies are also threatened by habitat degradation, as well as by light pollution and pesticide use.

The first lesson is that fireflies are not an inexhaustible resource. We need to ban their commercial harvesting as an unjustifiable activity that exploits our shared natural heritage.

The second lesson is the need for habitat protection, specifically where species of particular cultural, ecological or economic interest live. Worldwide, that includes not just the Genji fireflies of Japan, but also the congregating mangrove fireflies of Thailand and Malaysia, and the winter fireflies of Taiwan. Right now, firefly ecotourism for two distinctive US species is escalating: the unique Blue Ghost fireflies (Phausis reticulata) found in DuPont State Forest in North Carolina, among other places, and the synchronous fireflies (Photinus carolinus) of the Great Smoky Mountains.

Japan has now designated 10 national monuments around the country that legally protect firefly habitats. In both Thailand and Malaysia, firefly sanctuaries have recently been established along mangrove rivers to safeguard prime sites for firefly ecotourism. In Taiwan, two firefly species enjoy legal habitat protection. In 2014, China set up a firefly preserve outside of Nanjing. Yet in the US, fireflies have no special protection.

If other countries can do it, why can’t the US proactively set aside just a few places where fireflies now thrive? Working with local, state and national conservation organisations, we can start by establishing firefly sanctuaries and management plans for existing ecotourism sites. We could also identify and protect biodiversity hotspots known to support many different firefly species.

We all dream about the kind of world that we want our children to inherit. Now is the time to work together to preserve for future generations these brilliant emissaries of nature’s magic.

Fireflies spark childhood memories, transform ordinary landscapes, and rekindle our sense of wonder (photo by Tsuneaki Hiramatsu).

Sara Lewis is a professor of biology at Tufts University. Her latest book is Silent Sparks: The Wondrous World of Fireflies (2016). She lives in Lincoln, Massachusetts.

This article was originally published at Aeon and has been republished under Creative Commons.

Leah Wright Rigueur: Black conservatives do not speak for the black majority

Published in association with Aeon Magazine, a Princeton University Press partner.

When black voices rally to validate and defend extremist ideas, political observers should watch with heavy skepticism. In April, the National Diversity Coalition for Donald Trump launched a campaign in support of the controversial presidential candidate. ‘This man is no more racist than Mickey Mouse is on the Moon!’ Bruce LeVell, the coalition’s co-founder and a businessman from Georgia, told The Washington Post. And what are we to make of the former Republican presidential candidate Ben Carson’s puzzling endorsement of Trump?

At a moment when black Americans, of all ideological persuasions, are deeply concerned with a status quo in the United States that allows racial inequality (and discrimination) to fester, black boosters for the party’s right wing have insisted that the ‘race issue’ is a distraction. Some even claim that black America will benefit from a Trump presidency. This kind of posturing might seem mystifying to some degree, but it is not new; there have always been black people willing to endorse the nation’s most extreme figures. The civil rights activist James Meredith worked for the Republican senator Jesse Helms in 1989, after all.

Employing black ‘surrogates’ or spokespeople for extremist candidates has become a way of validating non-traditional ideas as ‘authentic’, while at the same time invalidating accusations of racism. While the Democratic Party also has employed black voices in this manner (much to the distaste of its critics), the Republican Party’s use of conservative black voices is all the more fascinating because black conservatives’ beliefs are generally at odds with mainstream black opinion.

Egregious contemporary and historical examples abound. Consider the National Black Silent Majority Committee (BSMC), a black conservative organisation launched on 4 July 1970. Founded by Clay Claiborne (a former Republican National Committee staffer acquitted of defrauding black voters in the 1964 presidential election), the BSMC professed a faith in free-market enterprise and two-party competition, and adhered to a strict anti-communist, anti-welfare, anti-busing, pro-‘law and order’ agenda. Unlike other black Republican groups of the era, the BSMC articulated neither public nor private complaints about race and the Republican Party. Instead, the organisation exclusively blamed black people for the country’s problems with race. Upon the group’s founding, the civil rights activist Julian Bond called the BSMC a ‘trick’ to ‘subvert black political hopes on the altar of white supremacy and political expediency’.

The BSMC used Richard Nixon’s rhetoric of a forgotten class of Americans, claiming to speak for a majority of silent black Americans, ‘sick and tired of the agitation, shouting, burning and subversion carried out in their name by self-styled militant groups’. The organisation assembled a high-profile group of black men and women willing to endorse conservative values, including the national president of the Negro Elks fraternal order, the founders and publishers of the black newspapers the Atlanta Daily World and the Arizona Tribune (now The Informant), and dozens of black ministers from around the country. Black women also took on prominent roles as BSMC surrogates – an unusual occurrence, as black women were, and still are, the least likely of any demographic to support the Republican Party.

In 1972, for example, Mary Parrish was the star speaker of the BSMC’s 52-city ‘Black Youth Voter Crusade’. Parrish, a black Democrat-turned-Republican (who started her career campaigning for Congresswoman Shirley Chisholm), used her pulpit to claim that liberals had ‘politically enslaved’ black people, especially black women; the Republican Party, she insisted, without providing tangible examples, represented the best hope for the ‘continued advancement of black people’. Parrish’s unusual turn as the ‘face’ of the BSMC is not an isolated event. Today, black women are among the most high-profile of the Trump campaign’s spokespeople.

But such minority endorsements are sporadic, and rarely translate into partisan support. When the BSMC launched in 1970, more than 72 per cent of black Americans held unfavourable views of President Nixon. Currently, about 80 per cent of black people hold unfavourable views of Trump. For both the BSMC and Trump’s black surrogates, this disconnect is consistent with their resolute dismissal of issues related to racial and social inequality, and their harsh criticism of black people who reject the Republican nominee.

Back in the 1970s, the BSMC readily admitted that the vast majority of its supporters were white. As the historian Matt Lassiter has suggested, the Nixon White House ‘orchestrated’ the creation of the BSMC to provide a counter-narrative to black moderate, and militant, voices, which also appealed to ‘white voters who believed that the civil rights and antiwar movements had gone too far’.

My own research shows that the all-white National Republican Congressional Committee (NRCC) was also a heavy financial backer of the BSMC from the start, providing start-up funds, financing the group’s cross-country ‘Patriotism’ and ‘Anti-Busing’ crusades, regularly highlighting the BSMC’s adventures to the public, and arranging private meetings with influential white officials.

In an unintentionally ironic moment in 1970, the then South Carolina senator Strom Thurmond, a vocal cheerleader for the BSMC, declared that the organisation’s existence proved that plenty of black radicals were attempting to ‘speak for groups which they do not actually represent’. Indeed, by the mid-1970s, politicians actively used the BSMC to elicit broader political support for right-wing agendas largely rejected by black audiences, by suggesting that the group spoke for a black majority. The BSMC also provided a buffer against charges of racism, with white politicians arguing that their own policies couldn’t possibly be racist or discriminatory, since the BSMC endorsed them. In this way, the BSMC reassured white conservative voters uncomfortable with the social taboo of racism.

The BSMC is just one example of many organisations (and individuals) to emerge in the past few decades in support of ideas on the fringes of black political thought. As a result, black Republicans critical of their party’s position on race saw their influence within the party dwindle, as groups such as the BSMC saw their stock rise among the Republican Party’s right wing. New quantitative research suggests that little has changed; Republican politicians are more interested in championing right-wing black Republicans whose views on race fall outside mainstream black political thought than those whose race-conscious messages are more closely aligned with the attitudes of black people at large. For most black Republicans within the party, this sends a clear and troubling message – power for the party’s minorities often comes by way of endorsing right-wing extremism.

Thus Trump’s turn to minority (especially black) spokespeople should come as little surprise. But while race lends an air of legitimacy to extremist candidates, it rarely presents an accurate picture of black political opinion. If anything, when the extremists play the ‘race card’, genuine concern for racial issues is likely to be buried.

Leah Wright Rigueur is an assistant professor of public policy at the Harvard Kennedy School of Government. She is the author of The Loneliness of the Black Republican: Pragmatic Politics and the Pursuit of Power (2015).

Announcing a new partnership with Aeon Magazine

Princeton University Press is excited to announce a new partnership with Aeon Magazine. Since September 2012, Aeon has “been publishing some of the most profound and provocative thinking on the web. It asks the biggest questions and finds the freshest, most original answers, provided by world-leading authorities on science, philosophy and society.” Aeon’s publishing platform is an excellent place for showcasing our authors’ thought leadership. When one of their articles takes off, it does so in style: Robert Epstein’s recent essay about why the brain is not a computer received over 375,000 views in just four days. In addition, Aeon viewing statistics are counted by Altmetric, so they contribute to any measurement of academic impact.

Starting this June, Princeton University Press authors, past and present, will be contributing regular essays to Aeon’s Ideas section, and participating in various discussions on a new, dedicated partnership page showcasing PUP authors and their work. We will simultaneously feature these essays on the PUP blog – a growing outlet for intellectual discourse. Check out the inaugural essays in this partnership by philosopher Jason Stanley and political scientist Justin Smith here. We hope you’ll follow us on Aeon.

–Debra Liese, PUP Social Media Manager & blog editor