Opportunity costs: can carbon taxing become a positive-sum game?

Climate change, caused by human activity, is arguably the biggest single problem facing the world today, and it is deeply entangled with the question of how to lift billions of people out of poverty without destroying the global environment in the process. But climate change also represents a crisis for economists (I am one). Decades ago, economists developed solutions – or variants on the same solution – to the problem of pollution, the key being the imposition of a price on the generation of pollutants such as carbon dioxide (CO2). The idea was to make visible, and accountable, the true environmental costs of any production process. 

Carbon pricing could stabilise the global climate, and cap unwanted warming, at a fraction of the cost that we are likely to end up paying in other ways. And as emissions were rapidly reduced, we could save enough to compensate most of the ‘losers’, such as displaced coal miners: a positive-sum solution. Yet carbon pricing has been mostly spurned in favour of regulatory solutions that are significantly more costly. Why?

Environmental pollution is one of the most pervasive and intractable failures of market systems (and Soviet-style central planning). Almost every kind of economic activity produces harmful byproducts, which are costly to dispose of safely. The cheapest thing to do is to dump the wastes into waterways or the atmosphere. Under pure free-market conditions, that’s precisely what happens. Polluters pay nothing for dumping waste while society bears the cost.

Since most energy in modern societies comes from burning carbon-based fuels, solving this problem, whether through new technology or altered consumption patterns, will require changes in a vast range of economic activities. If these changes are to be achieved without reducing standards of living, or obstructing the efforts of less developed countries to lift themselves out of poverty, it is important to find a path to emissions reduction that minimises costs.

But since pollution costs aren’t properly represented in market prices, there’s little use in looking at the accounting costs that appear in corporate balance sheets, or the market-based costs that go into national accounting measures such as Gross Domestic Product (GDP). For economists, the right way to think is in terms of ‘opportunity cost’, which can be defined as follows: The opportunity cost of anything of value is what you must give up so that you can have it. So how should we think about the opportunity cost of CO2 emissions?

We could start with the costs imposed on the world’s population as a whole from climate change, and measure how this changes with additional emissions. But this is an impossibly difficult task. All we know about the costs of climate change is that they will be large, and possibly catastrophic. It’s better to think about carbon budgets. We have a good idea how much more CO2 the world can afford to emit while keeping the probability of dangerous climate change reasonably low. A typical estimate is 2,900 billion tonnes – of which 1,900 billion tonnes have already been emitted.
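To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The budget figures come from the paragraph above; the assumed current emission rate of roughly 40 billion tonnes of CO2 a year is an illustrative round number, not a figure from this article.

```python
# Rough carbon-budget arithmetic (illustrative figures only).
TOTAL_BUDGET_GT = 2900       # total CO2 budget, billions of tonnes (from the text)
ALREADY_EMITTED_GT = 1900    # already emitted, billions of tonnes (from the text)
ASSUMED_ANNUAL_GT = 40       # assumed current global emissions per year (illustrative)

remaining_gt = TOTAL_BUDGET_GT - ALREADY_EMITTED_GT
years_left = remaining_gt / ASSUMED_ANNUAL_GT

print(f"Remaining budget: {remaining_gt} billion tonnes")   # 1000
print(f"Years left at the assumed rate: {years_left:.0f}")  # ~25
```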

Within any given carbon budget, an additional tonne of CO2 emitted from one source requires a reduction of one tonne somewhere else. So, it is the cost of this offsetting reduction that determines the opportunity cost of the additional emission. The problem is that, as long as the CO2 generated ‘disappears’ into the atmosphere (and, eventually, the oceans), corporations and households do not bear the opportunity cost of the CO2 they emit.

In a properly working market economy, prices reflect opportunity costs (and vice versa). A price for CO2 emissions high enough to keep total emissions within the carbon budget would ensure that the opportunity cost of increasing emissions would be equal to the price. But how can this be brought about?

In the 1920s, the English economist Arthur Pigou suggested imposing taxes on firms generating pollution. This would make the (tax-inclusive) prices paid by those firms reflect social cost. An alternative approach, developed by the Nobel laureate Ronald Coase, stresses the role of property rights. Rather than setting a price for pollution, society decides how much pollution can be tolerated, and creates property rights (emissions permits) reflecting that decision. Companies that want to burn carbon must acquire emissions permits for the CO2 they produce. Whereas the carbon-tax approach determines a price and lets markets determine the volume of polluting activity, the property-rights approach sets the volume and lets the market determine the price.
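The symmetry between the two instruments can be made concrete with a toy model (a minimal sketch: the marginal-cost slope, the baseline emissions figure and the two helper functions below are invented for illustration and are not from this article). With a known marginal abatement cost curve, fixing the price implies a quantity of emissions, and fixing the quantity implies a price – the tax and the cap pick out the same point on the curve.

```python
# Toy price/quantity duality under an assumed linear marginal abatement cost curve.
# Firms are assumed to abate every tonne whose abatement cost is below the carbon price.

SLOPE = 0.5                  # $ per tonne of CO2, per tonne abated (invented)
BASELINE_EMISSIONS = 100.0   # tonnes emitted with no policy at all (invented)

def emissions_given_price(price: float) -> float:
    """Pigouvian tax: the state sets the price, the market chooses how much to abate."""
    abated = price / SLOPE                       # abate while marginal cost < price
    return max(BASELINE_EMISSIONS - abated, 0.0)

def price_given_cap(cap: float) -> float:
    """Cap-and-trade: the state sets the quantity, the permit market finds the price."""
    abated = BASELINE_EMISSIONS - cap
    return max(SLOPE * abated, 0.0)              # marginal cost of the last tonne abated

tax = 20.0                                       # $ per tonne
resulting_emissions = emissions_given_price(tax)           # 60.0 tonnes
implied_permit_price = price_given_cap(resulting_emissions)
print(resulting_emissions, implied_permit_price)           # 60.0 20.0 -- same point on the curve
```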

There is no necessary link between imposing a carbon tax and distributing the resulting payments. However, natural intuitions of justice suggest that the revenue from carbon pricing should go to those adversely affected. At a national level, the proceeds could be used to offset the costs borne by low-income households. More ambitiously, a truly just system of global property rights would give everyone equal rights, and require those who want to burn more than their share of carbon (mostly, the global rich) to buy rights from those who burn less.

This raises the question of whether emissions rights should be equalised going forward, or whether historical emissions should be taken into account, allowing poorer nations to ‘catch up’. This debate has been rendered largely irrelevant by dramatic drops in the price of renewable energy that have sidelined development strategies based on fossil fuels. The best solution seems to be ‘contract and converge’. That is, all nations should converge as fast as possible to an emissions level far below that of currently developed countries, then phase out emissions entirely.

Carbon taxes have already been introduced in various places, and proposed in many more, but have met with vigorous resistance nearly everywhere. Emissions-permit schemes have been somewhat more successful, notably in the European Union, but have not taken off in the way envisaged when the Kyoto Protocol was signed in 1997. This disappointing outcome requires explanation.

The ideas of Pigou and Coase provide a theoretically neat answer to the market-failure problem. Unfortunately, they run into the more fundamental problem of income distribution and property rights. If governments create emissions rights and auction them, they create public property out of a resource (the atmosphere) that was previously available for use (and misuse) free of charge. The same is true when a carbon tax is proposed.

Whether property rights are created explicitly, as in the Coase approach, or implicitly, through the carbon taxes advocated by Pigou, there will be losers as well as gainers from the resulting change in the distribution of property rights and, therefore, market income. Not surprisingly, those potential losers have resisted market-based policies of pollution control.

The strongest resistance arises when businesses that have previously dumped their waste into the atmosphere and waterways free of charge are forced to bear the opportunity costs of their actions by paying taxes or purchasing emissions rights. Such businesses can call on an array of lobbyists, think tanks and friendly politicians to defend their interests.

Faced with these difficulties, governments have often fallen back on simpler options, such as direct regulation and ad hoc interventions like feed-in tariffs and renewable-energy targets. These solutions are more costly and frequently more regressive, not least because the size of the cost burden, and the way it is distributed, are obscure and hard to understand. Yet the likely costs of climate change are so great that even second-best solutions such as direct regulation are preferable to doing nothing; and the delays caused by resistance from business, and from the ideologically driven science deniers in their pay, have been such that, in the short run, emergency interventions will be required.

Still, the need to respond to climate change is not going away any time soon, and the costs of regulatory solutions will continue to mount. If we are to stabilise the global climate without hampering efforts to end the scourge of global poverty, some form of carbon pricing is essential.

Economics in Two Lessons: Why Markets Work So Well, and Why They Can Fail So Badly by John Quiggin is forthcoming via Princeton University Press.

John Quiggin is the President’s Senior Fellow in Economics at the University of Queensland in Brisbane, Australia. His previous book, Zombie Economics: How Dead Ideas Still Walk among Us (Princeton), has been translated into eight languages. He has written for the New York Times and the Economist, among other publications, and is a frequent blogger for Crooked Timber and on his own website: www.johnquiggin.com. Twitter @JohnQuiggin

This article was originally published at Aeon and has been republished under Creative Commons.

Christian Sahner: Islam spread through the Christian world via the bedroom

There are few transformations in world history more profound than the conversion of the peoples of the Middle East to Islam. Starting in the early Middle Ages, the process stretched across centuries and was influenced by factors as varied as conquest, diplomacy, conviction, self-interest and coercion. There is one factor, however, that is largely forgotten but which played a fundamental role in the emergence of a distinctively Islamic society: mixed unions between Muslims and non-Muslims.  

For much of the early Islamic period, the mingling of Muslims and non-Muslims was largely predicated on a basic imbalance of power: Muslims formed an elite ruling minority, which tended to exploit the resources of the conquered peoples – reproductive and otherwise – to grow in size and put down roots within local populations. Seen in this light, forced conversion was far less a factor in long-term religious change than practices such as intermarriage and concubinage. 

The rules governing religiously mixed families crystallised fairly early, at least on the Muslim side. The Quran allows Muslim men to marry up to four women, including ‘People of the Book’, that is, Jews and Christians. Muslim women, however, were not permitted to marry non-Muslim men and, judging from the historical evidence, this prohibition seems to have stuck. Underlying the injunction was the understanding that marriage was a form of female enslavement: if a woman was bound to her husband as a slave is to her master, she could not be subordinate to an infidel.

Outside of marriage, the conquests of the seventh and eighth centuries saw massive numbers of slaves captured across North Africa, the Middle East and Central Asia. Female slaves of non-Muslim origin, at least, were often pressed into the sexual service of their Muslim masters, and many of these relationships produced children.

Muslim men were free to keep as many slaves as they wished, and sex with Jewish and Christian slave women was considered licit, while sex with Zoroastrians and others outside the ‘People of the Book’ was technically forbidden. After all, they were regarded as pagans, lacking a valid divine scripture that was equivalent to the Torah or the Gospel. But since so many slaves in the early period came from these ‘forbidden’ communities, Muslim jurists developed convenient workarounds. Some writers of the ninth century, for example, argued that Zoroastrian women could be induced or even forced to convert, and thus become available for sex.

Whether born of marriage or slavery, the children of religiously mixed unions were automatically considered Muslims. Sometimes Jewish or Christian men converted after already having started families: if their conversions occurred before their children attained the age of legal majority – seven or 10, depending on the school of Islamic law – the children had to follow their fathers’ faith. If the conversions occurred after, the children were free to choose. Even as fathers and children changed religion, mothers could continue as Jews and Christians, as was their right under Sharia law.

Mixed marriage and concubinage allowed Muslims – who constituted a tiny percentage of the population at the start of Islamic history – to quickly integrate with their subjects, legitimising their rule over newly conquered territories, and helping them grow in number. It also ensured that non-Muslim religions would quickly disappear from family trees. Indeed, given the rules governing the religious identity of children, mixed kinship groups probably lasted no longer than a generation or two. It was precisely this prospect of disappearing that prompted non-Muslim leaders – Jewish rabbis, Christian bishops and Zoroastrian priests – to inveigh against mixed marriage and codify laws aimed at discouraging it. Because Muslims were members of the elite, who enjoyed greater access to economic resources than non-Muslims, their fertility rates were probably higher.

Of course, theory and reality did not always line up, and religiously mixed families sometimes flouted the rules set by jurists. One of the richest bodies of evidence for such families is the biographies of Christian martyrs from the early Islamic period, a little-known group who constitute the subject of my book, Christian Martyrs under Islam (2018). Many of these martyrs were executed for crimes such as apostasy and blasphemy, and no small number of them came from religiously mixed unions.

A good example is Bacchus, a martyr killed in Palestine in 786 – about 150 years after the death of the Prophet Muhammad. Bacchus, whose biography was recorded in Greek, was born into a Christian family, but his father at some point converted to Islam, thereby changing his children’s status, too. This greatly distressed Bacchus’s mother, who prayed for her husband’s return, and in the meantime, seems to have exposed her Muslim children to Christian practices. Eventually, the father died, freeing Bacchus to become a Christian. He was then baptised and tonsured as a monk, enraging certain Muslim relatives who had him arrested and killed.

Similar examples come from Córdoba, the capital of Islamic Spain, where a group of 48 Christians were martyred between 850 and 859, and commemorated in a corpus of Latin texts. Several of the Córdoba martyrs were born into religiously mixed families, but with an interesting twist: a number of them lived publicly as Muslims but practised Christianity in secret. In most instances, this seems to have been done without the knowledge of their Muslim fathers, but in one unique case of two sisters, it allegedly occurred with the father’s consent. The idea that one would have a public legal identity as a Muslim but a private spiritual identity as a Christian produced a unique subculture of ‘crypto-Christianity’ in Córdoba. This seems to have spanned generations, fuelled by the tendency of some ‘crypto-Christians’ to seek out and marry others like them.

In the modern Middle East, intermarriage has become uncommon. One reason for this is the long-term success of Islamisation, such that there are simply fewer Jews and Christians around to marry. Another reason is that those Jewish and Christian communities that do exist today have survived partly by living in homogeneous environments without Muslims, or by establishing communal norms that strongly penalise marrying out. In contrast to today’s world, where the frontiers between communities can be sealed, the medieval Middle East was a world of surprisingly porous borders, especially when it came to the bedroom.

Christian Martyrs under Islam: Religious Violence and the Making of the Muslim World by Christian C Sahner is published via Princeton University Press.

This article was originally published at Aeon and has been republished under Creative Commons.

Jason Brennan: When the state is unjust, citizens may use justifiable violence

If you see police choking someone to death – such as Eric Garner, the 43-year-old black horticulturalist wrestled down on the streets of New York City in 2014 – you might choose to pepper-spray them and flee. You might even save an innocent life. But what ethical considerations justify such dangerous heroics? (After all, the cops might arrest or kill you.) More important: do we have the right to defend ourselves and others from government injustice when government agents are following an unjust law? I think the answer is yes. But that view needs defending. Under what circumstances might active self-defence, including possible violence, be justified, as opposed to the passive resistance of civil disobedience that Americans generally applaud?

Civil disobedience is a public act that aims to create social or legal change. Think of Henry David Thoreau’s arrest in 1846 for refusing to pay taxes to fund the colonial exploits of the United States, or Martin Luther King Jr courting the ire of the authorities in 1963 to shame white America into respecting black civil rights. In such cases, disobedient citizens visibly break the law and accept punishment, so as to draw attention to a cause. But justifiable resistance need not have a civic character. It need not aim at changing the law, reforming dysfunctional institutions or replacing bad leaders. Sometimes, it is simply about stopping an immediate injustice­. If you stop a mugging, you are trying to stop that mugging in that moment, not trying to end muggings everywhere. Indeed, had you pepper-sprayed the police officer Daniel Pantaleo while he choked Eric Garner, you’d have been trying to save Garner, not reform US policing.

Generally, we agree that it’s wrong to lie, cheat, steal, deceive, manipulate, destroy property or attack people. But few of us think that the prohibitions against such actions are absolute. Commonsense morality holds that such actions are permissible in self-defence or in defence of others (even if the law doesn’t always agree). You may lie to the murderer at the door. You may smash the windows of the would-be kidnapper’s car. You may kill the would-be rapist.

Here’s a philosophical exercise. Imagine a situation in which a civilian commits an injustice, the kind against which you believe it is permissible to use deception, subterfuge or violence to defend yourself or others. For instance, imagine your friend makes an improper stop at a red light, and his dad, in anger, yanks him out of the car, beats the hell out of him, and continues to strike the back of his skull even after your friend lies subdued and prostrate. May you use violence, if it’s necessary to stop the father? Now imagine the same scene, except this time the attacker is a police officer in Ohio, and the victim is Richard Hubbard III, who in 2017 experienced just such an attack as described. Does that change things? Must you let the police officer possibly kill Hubbard rather than intervene?

Most people answer yes, believing that we are forbidden from stopping government agents who violate our rights. I find this puzzling. On this view, my neighbours can eliminate our right of self-defence and our rights to defend others by granting someone an office or passing a bad law. On this view, our rights to life, liberty, due process and security of person can disappear by political fiat – or even when a cop has a bad day. In When All Else Fails: The Ethics of Resistance to State Injustice (2019), I argue instead that we may act defensively against government agents under the same conditions in which we may act defensively against civilians. In my view, civilian and government agents are on a par, and we have identical rights of self-defence (and defence of others) against both. We should presume, by default, that government agents have no special immunity against self-defence, unless we can discover good reason to think otherwise. But it turns out that the leading arguments for special immunity are weak.

Some people say we may not defend ourselves against government injustice because governments and their agents have ‘authority’. (By definition, a government has authority over you if, and only if, it can oblige you to obey by fiat: you have to do what it says because it says so.) But the authority argument doesn’t work. It’s one thing to say that you have a duty to pay your taxes, show up for jury duty, or follow the speed limit. It is quite another to show that you are specifically bound to allow a government and its agents to use excessive violence and ignore your rights to due process. A central idea in liberalism is that whatever authority governments have is limited.

Others say that we should resist government injustice, but only through peaceful methods. Indeed, we should, but that doesn’t differentiate between self-defence against civilians and self-defence against government agents. The common-law doctrine of self-defence is always governed by a necessity proviso: you may lie or use violence only if necessary, that is, only if peaceful actions are not as effective. But peaceful methods often fail to stop wrongdoing. Eric Garner peacefully complained: ‘I can’t breathe,’ until he drew his last breath.

Another argument is that we shouldn’t act as vigilantes. But invoking this point here misunderstands the antivigilante principle, which says that when there exists a workable public system of justice, you should defer to public agents trying, in good faith, to administer justice. So if cops attempt to stop a mugging, you shouldn’t insert yourself. But if they ignore or can’t stop a mugging, you may intervene. If the police themselves are the muggers – as in unjust civil forfeiture – the antivigilante principle does not forbid you from defending yourself. It insists you defer to more competent government agents when they administer justice, not that you must let them commit injustice.

Some people find my thesis too dangerous. They claim that it’s hard to know exactly when self-defence is justified; that people make mistakes, resisting when they should not. Perhaps. But that’s true of self-defence against civilians, too. No one says we lack a right of self-defence against each other because applying the principle is hard. Rather, some moral principles are hard to apply.

However, this objection gets the problem exactly backwards. In real life, people are too deferential and conformist in the face of government authority. They are all-too-willing to electrocute experimental subjects, gas Jews or bomb civilians when ordered to, and reluctant to stand up to political injustice. If anything, the dangerous thesis – the thesis that most people will mistakenly misapply – is that we should defer to government agents when they seem to act unjustly. Remember, self-defence against the state is about stopping an immediate injustice, not fixing broken rules.

Of course, strategic nonviolence is usually the most effective way to induce lasting social change. But we should not assume that strategic nonviolence of the sort that King practised always works alone. Two recent books – Charles Cobb Jr’s This Nonviolent Stuff’ll Get You Killed (2014) and Akinyele Omowale Umoja’s We Will Shoot Back (2013) – show that the later ‘nonviolent’ phase of US civil rights activism succeeded (in so far as it has) only because, in earlier phases, black people armed themselves and shot back in self-defence. Once murderous mobs and white police learned that black people would fight back, they turned to less violent forms of oppression, and black people in turn began using nonviolent tactics. Defensive subterfuge, deceit and violence are rarely first resorts, but that doesn’t mean they are never justified.

When All Else Fails: The Ethics of Resistance to State Injustice (2018) by Jason Brennan is published via Princeton University Press.

This article was originally published at Aeon and has been republished under Creative Commons.

Kevin Mitchell: Wired that way – genes do shape behaviours but it’s complicated

Many of our psychological traits are innate in origin. There is overwhelming evidence from twin, family and general population studies that all manner of personality traits, as well as things such as intelligence, sexuality and risk of psychiatric disorders, are highly heritable. Put concretely, this means that a sizeable fraction of the population spread of values such as IQ scores or personality measures is attributable to genetic differences between people. The story of our lives most definitely does not start with a blank page.
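A minimal simulation can illustrate what ‘a sizeable fraction of the population spread is attributable to genetic differences’ means in practice (all parameters below are invented for illustration): a trait is built from a genetic component plus everything else, and heritability is simply the share of the total variance contributed by the genetic part.

```python
# Heritability as a variance share: a toy illustration with invented parameters.
import random

random.seed(0)
N = 100_000

genetic = [random.gauss(0, 1.0) for _ in range(N)]       # genetic contribution
environment = [random.gauss(0, 1.0) for _ in range(N)]   # everything non-genetic
trait = [g + e for g, e in zip(genetic, environment)]     # the observed trait

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

heritability = variance(genetic) / variance(trait)
print(f"Share of trait variance that is genetic: {heritability:.2f}")  # ~0.5 here
```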

But exactly how does our genetic heritage influence our psychological traits? Are there direct links from molecules to minds? Are there dedicated genetic and neural modules underlying various cognitive functions? What does it mean to say we have found ‘genes for intelligence’, or extraversion, or schizophrenia? This commonly used ‘gene for X’ construction is unfortunate in suggesting that such genes have a dedicated function: that it is their purpose to cause X. This is not the case at all. Interestingly, the confusion arises from a conflation of two very different meanings of the word ‘gene’.

From the perspective of molecular biology, a gene is a stretch of DNA that codes for a specific protein. So there is a gene for the protein haemoglobin, which carries oxygen around in the blood, and a gene for insulin, which regulates our blood sugar, and genes for metabolic enzymes and neurotransmitter receptors and antibodies, and so on; we have a total of about 20,000 genes defined in this way. It is right to think of the purpose of these genes as encoding those proteins with those cellular or physiological functions.

But from the point of view of heredity, a gene is some physical unit that can be passed from parent to offspring that is associated with some trait or condition. There is a gene for sickle-cell anaemia, for example, that explains how the disease runs in families. The key idea linking these two different concepts of the gene is variation: the ‘gene’ for sickle-cell anaemia is really just a mutation or change in sequence in the stretch of DNA that codes for haemoglobin. That mutation does not have a purpose – it only has an effect.

So, when we talk about genes for intelligence, say, what we really mean is genetic variants that cause differences in intelligence. These might be having their effects in highly indirect ways. Though we all share a human genome, with a common plan for making a human body and a human brain, wired so as to confer our general human nature, genetic variation in that plan arises inevitably, as errors creep in each time DNA is copied to make new sperm and egg cells. The accumulated genetic variation leads to variation in how our brains develop and function, and ultimately to variation in our individual natures.

This is not metaphorical. We can directly see the effects of genetic variation on our brains. Neuroimaging technologies reveal extensive individual differences in the size of various parts of the brain, including functionally defined areas of the cerebral cortex. They reveal how these areas are laid out and interconnected, and the pathways by which they are activated and communicate with each other under different conditions. All these parameters are at least partly heritable – some highly so.

That said, the relationship between these kinds of neural properties and psychological traits is far from simple. There is a long history of searching for correlations between isolated parameters of brain structure – or function – and specific behavioural traits, and certainly no shortage of apparently positive associations in the published literature. But for the most part, these have not held up to further scrutiny.

It turns out that the brain is simply not so modular: even quite specific cognitive functions rely not on isolated areas but on interconnected brain subsystems. And the high-level properties that we recognise as stable psychological traits cannot even be linked to the functioning of specific subsystems, but emerge instead from the interplay between them.

Intelligence, for example, is not linked to any localised brain parameter. It correlates instead with overall brain size and with global parameters of white matter connectivity and the efficiency of brain networks. There is no one bit of the brain that you do your thinking with. Rather than being tied to the function of one component, intelligence seems to reflect instead the interactions between many different components – more like the way we think of the overall performance of a car than, say, horsepower or braking efficiency.

This lack of discrete modularity is also true at the genetic level. A large number of genetic variants that are common in the population have now been associated with intelligence. Each of these by itself has only a tiny effect, but collectively they account for about 10 per cent of the variance in intelligence across the studied population. Remarkably, many of the genes affected by these genetic variants encode proteins with functions in brain development. This didn’t have to be the case – it might have turned out that intelligence was linked to some specific neurotransmitter pathway, or to the metabolic efficiency of neurons or some other direct molecular parameter. Instead, it appears to reflect much more generally how well the brain is put together.
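As a hedged sketch of the kind of calculation behind that figure (the effect sizes, sample sizes and noise level below are invented, not taken from any real study), a ‘polygenic score’ simply sums thousands of tiny per-variant effects; even though each effect is real, the score explains only a modest fraction of the trait’s variance.

```python
# Toy polygenic score: many common variants, each with a tiny effect (invented numbers).
import random

random.seed(1)
N_PEOPLE, N_VARIANTS = 2_000, 1_000

# Each variant has a tiny effect; each person carries 0, 1 or 2 copies of it.
effects = [random.gauss(0, 0.03) for _ in range(N_VARIANTS)]
genotypes = [[random.randint(0, 2) for _ in range(N_VARIANTS)] for _ in range(N_PEOPLE)]

# Polygenic score = weighted sum of variant counts. The trait adds a large
# non-genetic (or unmeasured) component, so the score explains only part of it.
scores = [sum(e * g for e, g in zip(effects, person)) for person in genotypes]
traits = [s + random.gauss(0, 2.0) for s in scores]

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

explained = variance(scores) / variance(traits)
print(f"Fraction of trait variance explained by the score: {explained:.2f}")  # roughly 0.1
```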

The effects of genetic variation on other cognitive and behavioural traits are similarly indirect and emergent. They are also, typically, not very specific. The vast majority of the genes that direct the processes of neural development are multitaskers: they are involved in diverse cellular processes in many different brain regions. In addition, because cellular systems are all highly interdependent, any given cellular process will also be affected indirectly by genetic variation affecting many other proteins with diverse functions. The effects of any individual genetic variant are thus rarely restricted to just one part of the brain or one cognitive function or one psychological trait.

What all this means is that we should not expect the discovery of genetic variants affecting a given psychological trait to directly highlight the hypothetical molecular underpinnings of the affected cognitive functions. In fact, it is an error to think of cognitive functions or mental states as having molecular underpinnings – they have neural underpinnings.

The relationship between our genotypes and our psychological traits, while substantial, is highly indirect and emergent. It involves the interplay of the effects of thousands of genetic variants, realised through the complex processes of development, ultimately giving rise to variation in many parameters of brain structure and function, which, collectively, impinge on the high-level cognitive and behavioural functions that underpin individual differences in our psychology.

And that’s just the way things are. Nature is under no obligation to make things simple for us. When we open the lid of the black box, we should not expect to see lots of neatly separated smaller black boxes inside – it’s a mess in there.

Innate: How the Wiring of our Brains Shapes Who We Are by Kevin Mitchell is published via Princeton University Press.

This article was originally published at Aeon and has been republished under Creative Commons.

All woman: the utopian feminism of Charlotte Perkins Gilman

by Michael Robertson

This article was originally published at Aeon and has been republished under Creative Commons.

Charlotte Perkins Gilman is best known today for ‘The Yellow Wallpaper’ (1892), a widely anthologised short story that mixes Gothic conventions with feminist insights, and a chilling dissection of patriarchy that seems as if it might have been co-authored by Edgar Allan Poe and Gloria Steinem. Fewer people know that Gilman began her career as a speaker and writer on behalf of Nationalism, a short-lived political movement inspired by Edward Bellamy’s best-selling utopian novel Looking Backward: 2000-1887 (1888). She ended it as a writer of her own utopian fictions, including Herland (1915), a playful novel about an ideal all-female society.

What does Gilman’s utopian feminism have to say to us now, when the dystopian pessimism of Margaret Atwood’s The Handmaid’s Tale (1985) is resurgent?

As a young woman, Gilman was drawn to Bellamy’s utopian socialism because of his stance on women’s economic independence; in the society depicted in Looking Backward, every woman and man earns an ‘equal credit’. Bellamy was certain that, from this economic parity, gender equality would follow. Gilman took a different approach. She believed that the realisation of utopia depended on women’s ‘mother instinct’, and advocated what she called the ‘larger motherhood’. As she wrote in her Bellamyite poem ‘Mother to Child’ (1911):

For the sake of my child I must hasten to save
All the children on earth from the jail and the grave.

Her life’s work centred on the concept of what she called the ‘World’s Mother’ – the selfless, nurturing woman-spirit who loves, protects and teaches the entire human race.

During the first decade of the 20th century, following the collapse of Bellamy’s Nationalist movement, Gilman turned to utopian fiction, producing three novels, a novella, and a flock of short stories. All were variations on the same utopian blueprint: the ideal society could be achieved peacefully in a remarkably short time if only women were freed from conventional housework and childrearing (she envisioned a combination of communal living and professional childcare) in order to spread the self-sacrificing ethics of the larger motherhood.

In 1915, she broke this fictional mould with Herland, a utopian fantasy that combines the plot of Alfred, Lord Tennyson’s The Princess (1847) – the discovery of an all-female society – with the conventions of the masculine adventure tale. Three bold young men on a scientific expedition to a remote part of the globe hear tales of a land inhabited only by women, located in an inaccessible mountain range. The men obtain a biplane and pilot it into the mountains, where after landing they soon spy three beautiful young women and give chase. The athletic young women, sensibly attired in utopian bloomers, easily outrun the men, who are captured by a phalanx of unarmed but well-disciplined women who chloroform them and place them under house arrest in a guarded fortress.

At this point, the novel transitions into utopian exposition, with long disquisitions on Herland’s society. Gilman was remarkably indifferent to the typical concerns of utopian fiction: work, politics, government. Instead, she used her fantastical premise to focus on her own interests, such as animal rights. Herlanders have eliminated all domesticated animals because of the cruelty inherent in slaughtering them for food. They are appalled at the idea of separating cows from their calves. Any interference with the natural processes of mothering is abhorrent to them.

Mothering is at the centre of Herland society. The word ‘mother’ or its variants appears more than 150 times in the novel. The women of Herland reproduce parthenogenetically, bearing only daughters, who are raised communally: each child is regarded as the child of all. ‘We each have a million children to love and serve,’ one of the women explains. Gilman evidently felt no need to explain Herland’s economy because it seemed to her so obvious: these ‘natural cooperators’, whose ‘whole mental outlook’ is collective, have no use for the individualism and competitiveness inherent in capitalism. Instead, a motherly state meets every citizen’s basic needs.

Herland depends on Gilman’s interpretation of women’s ‘maternal instinct’, an idea she clung to despite her own disastrous experience as a mother. Following the birth of her only child, a daughter, when Gilman was 24, she was plunged into a horrendous depression, an episode that she drew on for ‘The Yellow Wallpaper’. When her daughter was three, Gilman separated from her husband; six years later, she divorced him and gave up custody of their child. Herland enabled her to reconcile the contradictions between her utopian celebration of the maternal spirit and her difficult personal experience. Although every woman in Herland is capable of parthenogenetic reproduction, only an elite is entrusted with rearing children, in a collectivised and professionalised fashion. Gilman’s interest in the topic blended her conviction that women, like men, owed it to the world to work outside the home with her self-exculpating belief that the raising of children is so vital to the future race that it must be entrusted to professionals. Gilman derided the smallness, the possessiveness of the average woman’s conception of motherhood: my children, my family, my home. Herlanders see every child as theirs, the entire population as one family, the nation as home. 

Herland dropped out of view soon after its publication. Gilman had serialised the novel in The Forerunner, her self-published magazine, which folded soon after, and it never came out in book form. The novel was resurrected in the late 1970s by the American scholar Ann J Lane, who edited a paperback edition. Initially, the novel was hailed as a rediscovered feminist classic. Later scholars were more critical. They singled out its gender essentialism, but also the eugenic regime that underlay Gilman’s utopianism: her obsession with improving the strategically undefined ‘race’. Drawing on Gilman’s other writings, they convincingly argued that white racism is central to her utopian project.

Four decades after its rediscovery, Herland no longer seems the purely playful, light-hearted speculative fiction it once did. Nor does its central theme of collective child-rearing seem that different from the gendered regimes animating The Handmaid’s Tale – which, with an unabashed sexist and racist in the White House, serves as a powerful cautionary tale for progressives. Dystopian fiction, however, lacks the visionary inspiration – what the German philosopher Ernst Bloch in the 1950s called ‘the principle of hope’ – that utopianism provides. 

Despite Herland’s time-bound shortcomings, we need its vision of a society without poverty and war, where every child is precious and inequalities of income, housing, education and justice are nonexistent. For all its faults, Herland remains an eloquent expression of the nonviolent democratic socialist imagination. As fully as any work in the utopian tradition, Herland reminds us of the truth of Oscar Wilde’s aphorism: ‘A map of the world that does not include Utopia is not worth even glancing at.’

Michael Robertson is professor of English at The College of New Jersey and the author of two award-winning books, Worshipping Walt: The Whitman Disciples and Stephen Crane, Journalism, and the Making of Modern American Literature. A former freelance journalist, he has written for the New York Times, the Village Voice, Columbia Journalism Review, and many other publications. Most recently, he is the author of The Last Utopians: Four Late Nineteenth-Century Visionaries and Their Legacy.

Keith Whittington: Campus protests should stop at the door of the classroom

by Keith Whittington

Protests are a time-honoured tradition on college campuses – memorably exemplified by the protests of 1968 by the grandparents of the current generation of students. They reflect the passionate energies of students discovering their own priorities and commitments, and finding their voice in national conversations. Protests spring from the stimulating intellectual environment and vigorous debate found on college campuses, where students are willing to think about more than just the upcoming party or how to grab the rungs on the career ladder.

Not that universities should encourage student protests, but neither should they try to quash them. What universities must insist on, however, is that student protests be compatible with the larger functioning of the university; they should not hinder the ability of anyone on campus to pursue their own activities or the central mission of the university in advancing and disseminating knowledge. There are a lot of people on a college campus, and university administrators need to coordinate their activities without getting in each other’s way. Protests are legitimate among those activities, but they do not take priority.

Students are not always inclined to respect those boundaries. Of late, student activists have found themselves provoked by disagreements with guest speakers whom faculty members have invited to speak to classes; by the subjects and readings that professors have assigned in their classes; even by the behaviour of professors themselves. Activists have found such controversies sufficient to justify disrupting classes in order to voice their objections. In doing so, they undermine the ability of other students to learn and to take full advantage of their own collegiate opportunities, as well as the ability of professors to exercise their academic freedom to teach unmolested.

Securing academic freedom in universities, so that professors can publish and teach the fruits of their expertise ‘without fear or favour’, as the American Association of University Professors’ (AAUP) Declaration of Principles put it in 1915, has been an ongoing struggle, largely against the corrupting influence of forces outside the university proper, be they wealthy benefactors, politicians or the general public. But the ability of a university teacher to communicate, in the words of the AAUP, to his students ‘the genuine and uncoloured product of his own study or that of fellow-specialists’ can as easily be threatened from within, by pressure from students or campus administrators. Students in the classroom deserve from the professor ‘the best of what he has and what he is’ – professional judgment, ‘intellectual integrity’, and an ‘independence of thought and utterance’. Universities are valuable, in part, because they serve as an ‘inviolable refuge’ from the tyranny of democracy that demands that everyone think alike, feel alike and speak alike. The university is ‘an intellectual experiment station, where new ideas may germinate and where their fruit, though still distasteful to the community as a whole, may be allowed to ripen’.

Student protestors who interfere with classroom teaching because a professor has departed from their preferred orthodoxy are as guilty of intruding on academic freedom and subverting the mission of the university as the corporate baron who seeks the dismissal of a disfavoured professor who has offended that baron’s economic or ideological interests.

In 2017, activists at Northwestern University in Illinois forced the cancellation of a sociology class because they objected to its students hearing from and interacting with an agent of the United States’ Immigration and Customs Enforcement. This January, activists at the University of Chicago launched a sit-in in the classes of a business school professor in an attempt to force him to disinvite the former White House aide Steve Bannon from speaking on campus. And in 2017, activists at Reed College in Portland, Oregon, engaged in an extended in-class protest of a core humanities course until the faculty agreed to shift its focus away from the origins of Western civilisation. By stopping professors from teaching their courses as they think best, and preventing other students from participating in such courses as they wish, activists assert their own superior authority to dictate the limits of academic freedom and to demarcate the boundaries of acceptable intellectual enquiry on campus.

To be sure, there are reasonable arguments to be had over the value of hosting in-class conversations with government agents, or re-structuring humanities courses to better reflect the history of the students taking them: some might say there were even better arguments to be made against inviting Bannon to campus. However, by protesting, instead of arguing, student activists risk having those arguments drowned in the wash of media publicity that invariably comes their way. They will be seen, to be sure, but they very likely will not be heard.

In practical terms, universities should insist on boundaries to how those debates are conducted, boundaries that draw the line at disruptions that impede both teaching and learning. Students concerned about the fossil-fuel industry should not be allowed to prevent other students from hearing their professors lecturing on petroleum engineering. Students who regard Marxism as a dangerous philosophy should not be allowed to disrupt sociology classes on Marxist theory. Campus protests are valuable as a means for calling attention to a cause and generating interest in a set of ideas. They are sometimes a necessary prelude to action. But they hamper rather than advance the mission of the university when they go beyond publicising issues to become instruments for denying others on campus the ability to pursue their own educational projects.

Academic freedom in universities has been hard-won, and so universities have an obligation to prevent protests from intruding into the classroom. University codes of conduct routinely try to strike just such a balance, by facilitating free-ranging discussion of any set of ideas or concerns that teachers or students might want to raise and explore, while prohibiting actions that infringe on the rights of others to use and enjoy university facilities and programmes. Teaching students is at the heart of what universities do. But teaching requires that students and their professors be able to gather together on campus unmolested by those who might object to what is being taught, how it is being taught, and by whom. Campus regulations should be designed and administered to protect that most basic educational function of the university.

This article was originally published at Aeon and has been republished under Creative Commons.

Keith E. Whittington is the William Nelson Cromwell Professor of Politics at Princeton University and a leading authority on American constitutional theory and law. He is the author of Speak Freely: Why Universities Must Defend Free Speech.

Dr. John C. Hulsman: Delphic priestesses were the world’s first political risk consultants

by Dr. John C. Hulsman

In 480 BCE, the citizens of Athens were in more trouble than it is possible for our modern minds to fathom. Xerxes, the seemingly omnipotent son of Darius the Great, had some unfinished business left to him by his father. A decade earlier, at the Battle of Marathon in August 490 BCE, the miraculous had happened: the underrated Athenian army had seen off Darius and his mighty Persian horde, saving the threatened city-state from certain destruction. Now Xerxes had invaded Greece again, to finish the work his father had started, and he’d assembled a vast army that the Greek historian Herodotus (typically exaggerating) put at 5 million but that – though modern scholars disagree on precise numbers – was more likely a still-overwhelming force of 360,000, together with a gigantic armada of 750 ships. Confronted with an insurmountable foe and almost certain destruction, the hard-pressed Athenian leadership requested the services of the world’s first political risk consultant.

Already, by 480 BCE, the Pythia of Delphi was an ancient institution. Now commonly known as the Oracle of Delphi – when, in ancient Greek, the oracles were the pronouncements that the Pythia dispensed – the Pythia were the senior priestesses of the Temple of Apollo, the Greek God of Prophecy. For more than 1,100 years (until 390 CE, when radical Christians chased the last Pythia out of Parnassus), they were viewed as the most authoritative soothsayers in Greece. Pilgrims descended from all over the ancient world to the temple on the slope of Mount Parnassus to have their questions about the future answered. From the small, enclosed chamber at the base of the shrine, the Pythia (there were three priestesses on call at any time) delivered her oracles in a frenzied state – the likely result of imbibing the hallucinogenic vapours rising from the clefts in the rock of Mount Parnassus, which we now know sits atop the intersection of two tectonic plates.

The Pythia would be sitting in a perforated cauldron astride a tripod. Pilgrims reported (and Plutarch, who for a time served as high priest at Delphi, assisting the Pythia in her mission, confirmed) that as she inhaled the strange vapours her hair would stand on end, her complexion altered, and she would often begin panting, her voice assuming an otherworldly tone. In classical days it was said that the Pythia spoke in rhyme, in pentameter or hexameter. To put it in modern terms, the Pythia was clearly as high as a kite. But let’s look at the Pythia afresh, for I would argue that the Temple at Delphi was effectively the world’s first political risk-consulting firm.

Since the height of the Persian Wars, political and business leaders have looked to outsiders blessed with seemingly magical knowledge to divine both the present and the future. While the tools of divination have obviously changed, the pressing need to establish rules of the road for managing risk in geopolitics has not. The question for political risk analysis remains the same as it was during the heyday of the Pythia: with superior knowledge (spiritual or intellectual), can we reliably do this?

The Pythia’s prognosticating advantages, not least her outsider status, curiously track the qualities that political risk firms look for in their best analysts today. In their isolation at Mount Parnassus, the Pythia were not in danger of elite capture, or of the curse of analytical groupthink that so often follows, in what they predicted. This is the curse that doomed so many modern-day analysts to be so very wrong about the Brexit vote because they didn’t bother to look outside the hermetically sealed elite shell of London; or the startling advent of Donald Trump (they never left the East Coast corridor). Physical, intellectual and emotional distance have great analytical value.

Yet despite being isolated, the Pythia had limited but regular contact with the elites of the day who made the arduous trek to visit them. Over time, the priestesses at the Temple of Apollo came to understand what it was their clients wished to know, and how to provide exactly what they lacked: independent, outside, authoritative advice. It should be noted that the Pythia were chosen from a group of highly educated women, well-acquainted with the world. It is this strange and unique mix of special knowledge, education, distance from (and yet connection to) the centres of corruption and power, that describes the ideal CV for political risk analysts today.

The Pythia offered practical counsel that could shape future actions, just as political risk analysts do today – though we’d use modern jargon and call it ‘policy’ in the public sphere, and ‘corporate strategy’ in the business world. It is amazing how good a political risk record the priestesses actually had. Between 535 and 615 of the oracles have survived to the present day, and well over half of these are said to be historically correct. (I can name a goodly number of modern firms that would kill for that record.)

There has always been a market to answer basic political risk questions: can the Persians be stopped, and if so how? Will the UK vote for Brexit? Will Trump become president? Then as now, those with a reputation for getting basic political risk questions right were venerated, just as those who failed were over time discredited. Crucially, on the biggest political risk question Delphi was ever presented with – Xerxes’ invasion – the Pythia came through with flying colours. In her peculiar poetic and riddling fashion she suggested a ploy to get the Athenians off the hook. She recounted that when Athena, Greek goddess of wisdom and the patron of her namesake city, implored her father Zeus to save Athens, he told her that he would grant them ‘a wall of wood that alone should be uncaptured, a boon to you and your children’.

Naturally, the Pythia’s mysterious oracular pronouncements required interpretation by the city leaders of Athens. One of them, Themistocles, argued that a wall of wood specifically referred to the Athenian navy, and persuaded the city’s leaders to adopt a maritime-first strategy against the Persians. This policy – concocted by the Pythia – led directly to the decisive naval Battle of Salamis, the turning point that brought to an end the Persian risk to Athens’s very survival. To put it mildly, the Pythia had proven to be well worth her political-risk fee – both the direct monetary payment customarily made to her by pilgrims, and the larger donations to the gods, which secured petitioners an advanced place in the line.

To Dare More Boldly: The Audacious Story of Political Risk by John C Hulsman is out now via Princeton University Press.

This article was originally published at Aeon and has been republished under Creative Commons.

Against metrics: how measuring performance by numbers backfires

by Jerry Muller

More and more companies, government agencies, educational institutions and philanthropic organisations are today in the grip of a new phenomenon. I’ve termed it ‘metric fixation’. The key components of metric fixation are the belief that it is possible – and desirable – to replace professional judgment (acquired through personal experience and talent) with numerical indicators of comparative performance based upon standardised data (metrics); and that the best way to motivate people within these organisations is by attaching rewards and penalties to their measured performance. 

The rewards can be monetary, in the form of pay for performance, say, or reputational, in the form of college rankings, hospital ratings, surgical report cards and so on. But the most dramatic negative effect of metric fixation is its propensity to incentivise gaming: that is, encouraging professionals to maximise the metrics in ways that are at odds with the larger purpose of the organisation. If the rate of major crimes in a district becomes the metric according to which police officers are promoted, then some officers will respond by simply not recording crimes or downgrading them from major offences to misdemeanours. Or take the case of surgeons. When the metrics of success and failure are made public – affecting their reputation and income – some surgeons will improve their metric scores by refusing to operate on patients with more complex problems, whose surgical outcomes are more likely to be negative. Who suffers? The patients who don’t get operated upon.

When reward is tied to measured performance, metric fixation invites just this sort of gaming. But metric fixation also leads to a variety of more subtle unintended negative consequences. These include goal displacement, which comes in many varieties: when performance is judged by a few measures, and the stakes are high (keeping one’s job, getting a pay rise or raising the stock price at the time that stock options are vested), people focus on satisfying those measures – often at the expense of other, more important organisational goals that are not measured. The best-known example is ‘teaching to the test’, a widespread phenomenon that has distorted primary and secondary education in the United States since the adoption of the No Child Left Behind Act of 2001.

Short-termism is another negative. Measured performance encourages what the US sociologist Robert K Merton in 1936 called ‘the imperious immediacy of interests … where the actor’s paramount concern with the foreseen immediate consequences excludes consideration of further or other consequences’. In short, advancing short-term goals at the expense of long-range considerations. This problem is endemic to publicly traded corporations that sacrifice long-term research and development, and the development of their staff, to the perceived imperatives of the quarterly report.

To the debit side of the ledger must also be added the transactional costs of metrics: the expenditure of employee time by those tasked with compiling and processing the metrics in the first place – not to mention the time required to actually read them. As the heterodox management consultants Yves Morieux and Peter Tollman note in Six Simple Rules (2014), employees end up working longer and harder at activities that add little to the real productiveness of their organisation, while sapping their enthusiasm. In an attempt to staunch the flow of faulty metrics through gaming, cheating and goal diversion, organisations often institute a cascade of rules, even as complying with them further slows down the institution’s functioning and diminishes its efficiency.

Contrary to commonsense belief, attempts to measure productivity through performance metrics discourage initiative, innovation and risk-taking. The intelligence analysts who ultimately located Osama bin Laden worked on the problem for years. If measured at any point, the productivity of those analysts would have been zero. Month after month, their failure rate was 100 per cent, until they achieved success. From the perspective of the superiors, allowing the analysts to work on the project for years involved a high degree of risk: the investment in time might not pan out. Yet really great achievements often depend on such risks.

The source of the trouble is that when people are judged by performance metrics they are incentivised to do what the metrics measure, and what the metrics measure will be some established goal. But that impedes innovation, which means doing something not yet established, indeed that hasn’t even been tried out. Innovation involves experimentation. And experimentation includes the possibility, perhaps probability, of failure. At the same time, rewarding individuals for measured performance diminishes a sense of common purpose, as well as the social relationships that motivate co-operation and effectiveness. Instead, such rewards promote competition.

Compelling people in an organisation to focus their efforts on a narrow range of measurable features degrades the experience of work. Subject to performance metrics, people are forced to focus on limited goals, imposed by others who might not understand the work that they do. Mental stimulation is dulled when people don’t decide the problems to be solved or how to solve them, and there is no excitement in venturing into the unknown because the unknown is beyond the measurable. The entrepreneurial element of human nature is stifled by metric fixation.

Organisations in thrall to metrics end up motivating those members of staff with greater initiative to move out of the mainstream, where the culture of accountable performance prevails. Teachers move out of public schools to private and charter schools. Engineers move out of large corporations to boutique firms. Enterprising government employees become consultants. There is a healthy element to this, of course. But surely the large-scale organisations of our society are the poorer for driving out staff most likely to innovate and initiate. The more that work becomes a matter of filling in the boxes by which performance is to be measured and rewarded, the more it will repel those who think outside the box.

Economists such as Dale Jorgenson of Harvard University, who specialise in measuring economic productivity, report that in recent years the only increase in total-factor productivity in the US economy has been in the information technology-producing industries. The question that ought to be asked next, then, is to what extent the culture of metrics – with its costs in employee time, morale and initiative, and its promotion of short-termism – has itself contributed to economic stagnation.

Jerry Z. Muller is the author of many books, including The Tyranny of Metrics. His writing has appeared in the New York Times, the Wall Street Journal, the Times Literary Supplement, and Foreign Affairs, among other publications. He is professor of history at the Catholic University of America in Washington, D.C., and lives in Silver Spring, Maryland.


This article was originally published at Aeon and has been republished under Creative Commons.

Gloria Origgi: Say goodbye to the information age: it’s all about reputation now

There is an underappreciated paradox of knowledge that plays a pivotal role in our advanced hyper-connected liberal democracies: the greater the amount of information that circulates, the more we rely on so-called reputational devices to evaluate it. What makes this paradoxical is that the vastly increased access to information and knowledge we have today does not empower us or make us more cognitively autonomous. Rather, it renders us more dependent on other people’s judgments and evaluations of the information with which we are faced.

We are experiencing a fundamental paradigm shift in our relationship to knowledge. From the ‘information age’, we are moving towards the ‘reputation age’, in which information will have value only if it is already filtered, evaluated and commented upon by others. Seen in this light, reputation has become a central pillar of collective intelligence today. It is the gatekeeper to knowledge, and the keys to the gate are held by others. The way in which the authority of knowledge is now constructed makes us reliant on what are the inevitably biased judgments of other people, most of whom we do not know.

Let me give some examples of this paradox. If you are asked why you believe that big changes in the climate are occurring and can dramatically harm future life on Earth, the most reasonable answer you’re likely to provide is that you trust the reputation of the sources of information to which you usually turn for acquiring information about the state of the planet. In the best-case scenario, you trust the reputation of scientific research and believe that peer-review is a reasonable way of sifting out ‘truths’ from false hypotheses and complete ‘bullshit’ about nature. In the average-case scenario, you trust newspapers, magazines or TV channels that endorse a political view which supports scientific research to summarise its findings for you. In this latter case, you are twice-removed from the sources: you trust other people’s trust in reputable science.

Or, take an even more uncontroversial truth that I have discussed at length elsewhere: one of the most notorious conspiracy theories is that no man stepped on the Moon in 1969, and that the entire Apollo programme (including six landings on the Moon between 1969 and 1972) was a staged fake. The initiator of this conspiracy theory was Bill Kaysing, who worked in publications at the Rocketdyne company – where Apollo’s Saturn V rocket engines were built. At his own expense, Kaysing published the book We Never Went to the Moon: America’s $30 Billion Swindle (1976). After publication, a movement of skeptics grew and started to collect evidence about the alleged hoax.

According to the Flat Earth Society, one of the groups that still denies the facts, the Moon landings were staged by Hollywood with the support of Walt Disney and under the artistic direction of Stanley Kubrick. Most of the ‘proofs’ they advance are based on a seemingly accurate analysis of the pictures of the various landings. The shadows’ angles are inconsistent with the light, the United States flag blows even if there is no wind on the Moon, the tracks of the steps are too precise and well-preserved for a soil in which there is no moisture. Also, is it not suspicious that a programme that involved more than 400,000 people for six years was shut down abruptly? And so on.

The great majority of the people we would consider reasonable and accountable (myself included) will dismiss these claims by laughing at the very absurdity of the hypothesis (although there have been serious and documented responses by NASA against these allegations). Yet, if I ask myself on what evidentiary basis I believe that there has been a Moon landing, I must admit that my evidence is quite poor, and that I have never invested a second trying to debunk the counter-evidence accumulated by these conspiracy theorists. What I personally know about the facts mixes confused childhood memories, black-and-white television news, and deference to what my parents told me about the landing in subsequent years. Still, the wholly secondhand and personally uncorroborated quality of this evidence does not make me hesitate about the truth of my beliefs on the matter.

My reasons for believing that the Moon landing took place go far beyond the evidence I can gather and double-check about the event itself. In those years, we trusted a democracy such as the US to have a justified reputation for sincerity. Without an evaluative judgment about the reliability of a certain source of information, that information is, for all practical purposes, useless.

The paradigm shift from the age of information to the age of reputation must be taken into account when we try to defend ourselves from ‘fake news’ and other misinformation and disinformation techniques that are proliferating through contemporary societies. What a mature citizen of the digital age should be competent at is not spotting and confirming the veracity of the news. Rather, she should be competent at reconstructing the reputational path of the piece of information in question, evaluating the intentions of those who circulated it, and figuring out the agendas of those authorities that lent it credibility.

Whenever we are at the point of accepting or rejecting new information, we should ask ourselves: Where does it come from? Does the source have a good reputation? Who are the authorities who believe it? What are my reasons for deferring to these authorities? Such questions will help us to get a better grip on reality than trying to check directly the reliability of the information at issue. In a hyper-specialised system of the production of knowledge, it makes no sense to try to investigate on our own, for example, the possible correlation between vaccines and autism. It would be a waste of time, and probably our conclusions would not be accurate. In the reputation age, our critical appraisals should be directed not at the content of information but rather at the social network of relations that has shaped that content and given it a certain deserved or undeserved ‘rank’ in our system of knowledge.

These new competences constitute a sort of second-order epistemology. They prepare us to question and assess the reputation of an information source, something that philosophers and teachers should be crafting for future generations.

According to Friedrich Hayek’s book Law, Legislation and Liberty (1973), ‘civilisation rests on the fact that we all benefit from knowledge which we do not possess’. A civilised cyber-world will be one where people know how to assess critically the reputation of information sources, and can empower their knowledge by learning how to gauge appropriately the social ‘rank’ of each bit of information that enters their cognitive field.

Gloria Origgi, a Paris-based philosopher, is a senior researcher at the Institut Jean Nicod at the National Center for Scientific Research. Her books include one on trust and another on the future of writing on the Internet. She maintains a blog in English, French, and Italian at gloriaoriggi.blogspot.com. Reputation: What it is and Why it Matters is available now.

This article was originally published at Aeon and has been republished under Creative Commons.

Rachel Sherman: How New York’s wealthy parents try to raise ‘unentitled’ kids

This article was originally published at Aeon and has been republished under Creative Commons.

Wealthy parents seem to have it made when it comes to raising their children. They can offer their kids the healthiest foods, the most attentive caregivers, the best teachers and the most enriching experiences, from international vacations to unpaid internships in competitive fields.

Yet these parents have a problem: how to give their kids these advantages while also setting limits. Almost all of the 50 affluent parents in and around New York City whom I interviewed for my book Uneasy Street: The Anxieties of Affluence (2017) expressed fears that children would be ‘entitled’ – a dirty word that meant, variously, lazy, materialistic, greedy, rude, selfish and self-satisfied. Instead, they strove to keep their children ‘grounded’ and ‘normal’. Of course, no parent wishes to raise spoiled children; but for those who face relatively few material limits, this possibility is distinctly heightened.

This struggle highlights two challenges that elite parents face in this particular historical moment: the stigma of wealth, and a competitive environment. For most of the 20th century, the United States had a quasi-aristocratic upper class, mainly white, Anglo-Saxon Protestant (WASP) families from old money, usually listed in the Social Register. Comfortable with their inherited advantages, and secure in their economic position, they openly viewed themselves as part of a better class of people. By sending their kids to elite schools and marrying them off to the children of families in the same community, they sought to reproduce their privilege.

But in the past few decades this homogenous ‘leisure class’ has declined, and the category of the ‘working wealthy’, especially in finance, has exploded. The ranks of high-earners have also partially diversified, opening up to people besides WASP men. This shift has led to a more competitive environment, especially in the realm of college admissions.

At the same time, a more egalitarian discourse has taken hold in the public sphere. As the sociologist Shamus Khan at Columbia University in New York argues in his book Privilege (2012), it is no longer legitimate for rich people to assume that they deserve their social position based simply on who they are. Instead, they must frame themselves as deserving on the basis of merit, particularly through hard work. At the same time, popular-culture images proliferate of wealthy people as greedy, lazy, shallow, materialistic or otherwise morally compromised.

Both competition and moral challenge have intensified since the 2008 economic crisis. Jobs for young people, even those with college educations, have become scarcer. The crisis has also made extreme inequality more visible, and exposed those at the top to harsher public critique.

In this climate, it is hard to feel that being wealthy is compatible with being morally worthy, and the wealthy themselves are well aware of the problem. The parents I talked with struggle over how to raise kids who deserve their privilege, encouraging them to become hard workers and disciplined consumers. They often talked about keeping kids ‘normal’, using language that invoked broad ‘middle-class’ American values. At the same time, they wanted to make sure that their children could prevail in increasingly competitive education and labour markets. This dilemma led to a profound tension between limiting and fostering privilege.

Parents’ educational decisions were especially marked by this conflict. Many supported the idea of public school in principle, but were anxious about large classes, lack of sports and arts programmes, and college prospects. Yet they worried that placing kids in elite private schools would distort their understanding of the world, exposing them only to extremely wealthy, ‘entitled’ peers. Justin, a finance entrepreneur, was conflicted about choosing private, saying: ‘I want the kids to be normal. I don’t want them to just be coddled, and be at a country club.’ Kevin, another wealthy father, preferred public school, wanting his young son not to live in an ‘elitist’ ‘narrow world’ in which ‘you only know a certain kind of people. Who are all complaining about their designers and their nannies.’

The question of paid work also brought up this quandary. All the parents I talked with wanted their kids to have a strong work ethic, with some worrying that their children would not be self-sufficient without it. But even those who could support their kids forever didn’t want to. Scott, for example, whose family wealth exceeds $50 million, was ‘terrified’ his kids would grow up to be ‘lazy jerks’. Parents also wanted to ensure children were not materialistic hyper-consumers. One father said of his son: ‘I want him to know limits.’ Parents tied consumption to the work ethic by requiring kids to do household chores. One mother with assets in the tens of millions had recently started requiring her six-year-old to do his own laundry in exchange for his activities and other privileges.

This mother, and many other parents of younger children, said they would insist that their kids work for pay during high school and college, in order to learn ‘the value of a dollar’. Commitment to children’s employment wavered, however, if parents saw having a job as incompatible with other ways of cultivating their capacities. Kate, who had grown up middle-class, said, of her own ‘crappy jobs’ growing up: ‘There’s some value to recognising this is what you have to do, and you get a paycheck, and that’s the money you have, and you budget it.’ But her partner Nadine, who had inherited wealth, contrasted her daughter’s possibly ‘researching harbour seals in Alaska’ to working for pay in a diner. She said: ‘Yes, you want them to learn the value of work, and getting paid for it, and all that stuff. And I don’t want my kids to be entitled. I don’t want them to be, like, silver spoon. But I also feel like life affords a lot of really exciting opportunities.’

The best way to help kids understand constraints, of course, is to impose them. But, despite feeling conflicted, these parents did not limit what their kids consumed in any significant way. Even parents who resisted private school tended to end up there. The limits they placed on consumption were marginal, constituting what the sociologist Allison Pugh in Longing and Belonging (2009) called ‘symbolic deprivation’. Facing competitive college admissions, none of the high-school-age kids of parents in my sample worked for pay; parents were more likely to describe their homework as their ‘job’.

Instead of limiting their privilege, parents tried to regulate children’s feelings about it. They wanted kids to appreciate their private education, comfortable homes, designer clothes, and (in some cases) their business-class or private travel. They emphasised that these privileges were ‘special’ or ‘a treat’. As Allison said, of her family’s two annual vacations: ‘You don’t want your kids to take these kinds of things for granted. … [They should know] most people don’t live this way. And that this is not the norm, and that you should feel this is special, and this is a treat.’

By the same token, they tried to find ways to help kids understand the ‘real world’ – to make sure they ‘understand the way everyone else lives’, in the words of one millionaire mother. Another mother fostered her son’s friendship with a middle-class family who lived in a modest apartment, because, she said: ‘I want to keep our feet in something that’s a little more normal’ than his private-school community.

Ideally, then, kids will be ‘normal’: hard workers and prudent consumers, who don’t see themselves as better than others. But at the same time, they will understand that they’re not normal, appreciating their privilege, without ever showing off. Egalitarian dispositions thereby legitimate unequal distributions, allowing children – and parents – to enjoy and reproduce their advantages without being morally compromised. These days, it seems, the rich can be entitled as long as they do not act or feel ‘entitled’. They can take it, as long as they don’t take it for granted.

Rachel Sherman is associate professor of sociology at the New School for Social Research and Eugene Lang College. She is the author of Class Acts: Service and Inequality in Luxury Hotels. Sherman lives in New York.

Scott Page: Why hiring the ‘best’ people produces the least creative results

This article was originally published at Aeon and has been republished under Creative Commons.

While in graduate school in mathematics at the University of Wisconsin-Madison, I took a logic course from David Griffeath. The class was fun. Griffeath brought a playfulness and openness to problems. Much to my delight, about a decade later, I ran into him at a conference on traffic models. During a presentation on computational models of traffic jams, his hand went up. I wondered what Griffeath – a mathematical logician – would have to say about traffic jams. He did not disappoint. Without even a hint of excitement in his voice, he said: ‘If you are modelling a traffic jam, you should just keep track of the non-cars.’

The collective response followed the familiar pattern when someone drops an unexpected but, once stated, obvious idea: a puzzled silence, giving way to a roomful of nodding heads and smiles. Nothing else needed to be said.

Griffeath had made a brilliant observation. During a traffic jam, most of the spaces on the road are filled with cars. Modelling each car takes up an enormous amount of memory. Keeping track of the empty spaces instead would use less memory – in fact almost none. Furthermore, the dynamics of the non-cars might be more amenable to analysis.
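To see the memory point concretely, here is a minimal Python sketch of my own – an illustrative rule-184-style ring-road model, not Griffeath’s actual one, with invented road length and density. On a nearly full road, the set of empty cells (the ‘non-cars’) is tiny compared with the set of cars, and a car advancing into a gap is the same event as that gap moving backwards, so the sparse set also carries the dynamics.

```python
# Hedged illustration (my own sketch, not Griffeath's model): on a jammed
# ring road, track the empty cells ('non-cars') instead of the cars.
import random

ROAD_LENGTH = 1_000_000   # hypothetical number of cells on the ring road
DENSITY = 0.95            # in a jam, almost every cell holds a car

random.seed(0)
cars = {i for i in range(ROAD_LENGTH) if random.random() < DENSITY}
non_cars = set(range(ROAD_LENGTH)) - cars

print(f"entries needed to track cars:     {len(cars):,}")       # roughly 950,000
print(f"entries needed to track non-cars: {len(non_cars):,}")   # roughly 50,000

def step(gaps):
    """One tick of a rule-184-style traffic model, written on the gaps: a gap
    moves one cell backwards exactly when the cell behind it holds a car
    (equivalently, that car moves forward into the gap)."""
    return {(g - 1) % ROAD_LENGTH if (g - 1) % ROAD_LENGTH not in gaps else g
            for g in gaps}

non_cars = step(non_cars)   # the whole jam advances by updating the small set
```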

Versions of this story occur routinely at academic conferences, in research laboratories or policy meetings, within design groups, and in strategic brainstorming sessions. They share three characteristics. First, the problems are complex: they concern high-dimensional contexts that are difficult to explain, engineer, evolve or predict. Second, the breakthrough ideas do not arise by magic, nor are they constructed anew from whole cloth. They take an existing idea, insight, trick or rule, and apply it in a novel way, or they combine ideas – like Apple’s breakthrough repurposing of the touchscreen technology. In Griffeath’s case, he applied a concept from information theory: minimum description length. Fewer words are required to say ‘No-L’ than to list ‘ABCDEFGHIJKMNOPQRSTUVWXYZ’. I should add that these new ideas typically produce modest gains. But, collectively, they can have large effects. Progress occurs as much through sequences of small steps as through giant leaps.

Third, these ideas are birthed in group settings. One person presents her perspective on a problem, describes an approach to finding a solution or identifies a sticking point, and a second person makes a suggestion or knows a workaround. The late computer scientist John Holland commonly asked: ‘Have you thought about this as a Markov process, with a set of states and transitions between those states?’ That query would force the presenter to define states. That simple act would often lead to an insight.

The burgeoning of teams – most academic research is now done in teams, as is most investing and even most songwriting (at least for the good songs) – tracks the growing complexity of our world. We used to build roads from A to B. Now we construct transportation infrastructure with environmental, social, economic and political impacts.

The complexity of modern problems often precludes any one person from fully understanding them. Factors contributing to rising obesity levels, for example, include transportation systems and infrastructure, media, convenience foods, changing social norms, human biology and psychological factors. Designing an aircraft carrier, to take another example, requires knowledge of nuclear engineering, naval architecture, metallurgy, hydrodynamics, information systems, military protocols, the exercise of modern warfare and, given the long building time, the ability to predict trends in weapon systems.

The multidimensional or layered character of complex problems also undermines the principle of meritocracy: the idea that the ‘best person’ should be hired. There is no best person. When putting together an oncological research team, a biotech company such as Gilead or Genentech would not construct a multiple-choice test and hire the top scorers, or hire people whose resumes score highest according to some performance criteria. Instead, they would seek diversity. They would build a team of people who bring diverse knowledge bases, tools and analytic skills. That team would more likely than not include mathematicians (though not logicians such as Griffeath). And the mathematicians would likely study dynamical systems and differential equations.

Believers in a meritocracy might grant that teams ought to be diverse but then argue that meritocratic principles should apply within each category. Thus the team should consist of the ‘best’ mathematicians, the ‘best’ oncologists, and the ‘best’ biostatisticians from within the pool.

That position suffers from a similar flaw. Even within a knowledge domain, no test or criterion applied to individuals will produce the best team. Each of these domains possesses such depth and breadth that no test can exist. Consider the field of neuroscience. Upwards of 50,000 papers were published last year covering various techniques, domains of enquiry and levels of analysis, ranging from molecules and synapses up through networks of neurons. Given that complexity, any attempt to rank a collection of neuroscientists from best to worst, as if they were competitors in the 50-metre butterfly, must fail. What could be true is that, given a specific task and the composition of a particular team, one scientist would be more likely to contribute than another. Optimal hiring depends on context. Optimal teams will be diverse.

Evidence for this claim can be seen in the way that papers and patents that combine diverse ideas tend to rank as high-impact. It can also be found in the structure of the so-called random decision forest, a state-of-the-art machine-learning algorithm. Random forests consist of ensembles of decision trees. If classifying pictures, each tree makes a vote: is that a picture of a fox or a dog? A weighted majority rules. Random forests can serve many ends. They can identify bank fraud and diseases, recommend ceiling fans and predict online dating behaviour.

When building a forest, you do not select the best trees, as they tend to make similar classifications. You want diversity. Programmers achieve that diversity by training each tree on different data, a technique known as bagging. They also boost the forest ‘cognitively’ by training trees on the hardest cases – those that the current forest gets wrong. This ensures even more diversity and more accurate forests.
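As a hedged companion to that description, here is a minimal Python sketch using scikit-learn. It builds a random forest by bagging and compares the ensemble’s vote with the single tree inside it that happens to score best on held-out data; the dataset, parameters and names are my own illustrative choices, and the ‘train on the hardest cases’ idea corresponds to boosting-style methods, which this sketch deliberately leaves out.

```python
# Hedged sketch: a bagged random forest versus the single 'best' tree inside it.
# Dataset and parameter values are illustrative, not taken from the essay.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each tree sees a different bootstrap sample of the training data (bagging),
# so the trees disagree -- and the forest's majority vote draws on that.
forest = RandomForestClassifier(n_estimators=200, bootstrap=True, random_state=0)
forest.fit(X_train, y_train)

ensemble_acc = accuracy_score(y_test, forest.predict(X_test))
best_tree_acc = max(accuracy_score(y_test, tree.predict(X_test))
                    for tree in forest.estimators_)

print(f"single best tree (picked with hindsight): {best_tree_acc:.3f}")
print(f"full diverse forest:                      {ensemble_acc:.3f}")
```

The comparison is deliberately generous to the ‘meritocratic’ side – the best tree is chosen with full knowledge of the test set – yet on data like this the diverse ensemble typically still comes out ahead, which is the essay’s point: the value sits in the disagreement among the bootstrapped trees, not in any one of them being the ‘best’.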

Yet the fallacy of meritocracy persists. Corporations, non-profits, governments, universities and even preschools test, score and hire the ‘best’. This all but guarantees not creating the best team. Ranking people by common criteria produces homogeneity. And when biases creep in, it results in people who look like those making the decisions. That’s not likely to lead to breakthroughs. As Astro Teller, CEO of X, the ‘moonshot factory’ at Alphabet, Google’s parent company, has said: ‘Having people who have different mental perspectives is what’s important. If you want to explore things you haven’t explored, having people who look just like you and think just like you is not the best way.’ We must see the forest.

Scott E. Page is the Leonid Hurwicz Collegiate Professor of Complex Systems, Political Science, and Economics at the University of Michigan and an external faculty member of the Santa Fe Institute. The recipient of a Guggenheim Fellowship and a member of the American Academy of Arts and Sciences, he is the author of The Diversity Bonus: How Great Teams Pay Off in the Knowledge Economy. He has been a featured speaker at Davos as well as at organizations such as Google, Bloomberg, BlackRock, Boeing, and NASA.

Kieran Setiya: How Schopenhauer’s thought can illuminate a midlife crisis

This article was originally published at Aeon and has been republished under Creative Commons.

Despite reflecting on the good life for more than 2,500 years, philosophers have not had much to say about middle age. For me, approaching 40 was a time of stereotypical crisis. Having jumped the hurdles of the academic career track, I knew I was lucky to be a tenured professor of philosophy. Yet stepping back from the busyness of life, the rush of things to do, I found myself wondering, what now? I felt a sense of repetition and futility, of projects completed just to be replaced by more. I would finish this article, teach this class, and then I would do it all again. It was not that everything seemed worthless. Even at my lowest ebb, I didn’t feel there was no point in what I was doing. Yet somehow the succession of activities, each one rational in itself, fell short.

I am not alone. Perhaps you have felt, too, an emptiness in the pursuit of worthy goals. This is one form of midlife crisis, at once familiar and philosophically puzzling. The paradox is that success can seem like failure. Like any paradox, it calls for philosophical treatment. If the emptiness of the midlife crisis is not the unqualified emptiness in which one sees no value in anything, what is it? What was wrong with my life?

In search of an answer, I turned to the 19th-century pessimist Arthur Schopenhauer. Schopenhauer is notorious for preaching the futility of desire. That getting what you want could fail to make you happy would not have surprised him at all. On the other hand, not having it is just as bad. For Schopenhauer, you are damned if you do and damned if you don’t. If you get what you want, your pursuit is over. You are aimless, flooded with a ‘fearful emptiness and boredom’, as he put it in The World as Will and Representation (1818). Life needs direction: desires, projects, goals that are so far unachieved. And yet this, too, is fatal. Because wanting what you do not have is suffering. In staving off the void by finding things to do, you have condemned yourself to misery. Life ‘swings like a pendulum to and fro between pain and boredom, and these two are in fact its ultimate constituents’.

Schopenhauer’s picture of human life might seem unduly bleak. Often enough, midlife brings with it failure or success in cherished projects: you have the job you worked for many years to get, the partner you hoped to meet, the family you meant to start – or else you don’t. Either way, you look for new directions. But the answer to achieving your goals, or giving them up, feels obvious: you simply make new ones. Nor is the pursuit of what you want pure agony. Revamping your ambitions can be fun.

Still, I think there is something right in Schopenhauer’s dismal conception of our relationship with our ends, and that it can illuminate the darkness of midlife. Taking up new projects, after all, simply obscures the problem. When you aim at a future goal, satisfaction is deferred: success has yet to come. But the moment you succeed, your achievement is in the past. Meanwhile, your engagement with projects subverts itself. In pursuing a goal, you either fail or, in succeeding, end its power to guide your life. No doubt you can formulate other plans. The problem is not that you will run out of projects (the aimless state of Schopenhauer’s boredom), it’s that your way of engaging with the ones that matter most to you is by trying to complete them and thus expel them from your life. When you pursue a goal, you exhaust your interaction with something good, as if you were to make friends for the sake of saying goodbye.

Hence one common figure of the midlife crisis: the striving high-achiever, obsessed with getting things done, who is haunted by the hollowness of everyday life. When you are obsessed with projects, ceaselessly replacing old with new, satisfaction is always in the future. Or the past. It is mortgaged, then archived, but never possessed. In pursuing goals, you aim at outcomes that preclude the possibility of that pursuit, extinguishing the sparks of meaning in your life.

The question is what to do about this. For Schopenhauer, there is no way out: what I am calling a midlife crisis is simply the human condition. But Schopenhauer was wrong. In order to see his mistake, we need to draw distinctions among the activities we value: between ones that aim at completion, and ones that don’t.

Adapting terminology from linguistics, we can say that ‘telic’ activities – from ‘telos’, the Greek word for purpose – are ones that aim at terminal states of completion and exhaustion. You teach a class, get married, start a family, earn a raise. Not all activities are like this, however. Others are ‘atelic’: there is no point of termination at which they aim, or final state in which they have been achieved and there is no more to do. Think of listening to music, parenting, or spending time with friends. They are things you can stop doing, but you cannot finish or complete them. Their temporality is not that of a project with an ultimate goal, but of a limitless process.

If the crisis diagnosed by Schopenhauer turns on excessive investment in projects, then the solution is to invest more fully in the process, giving meaning to your life through activities that have no terminal point: since they cannot be completed, your engagement with them is not exhaustive. It will not subvert itself. Nor does it invite the sense of frustration that Schopenhauer scorns in unsatisfied desire – the sense of being at a distance from one’s goal, so that fulfilment is always in the future or the past.

We should not give up on our worthwhile goals. Their achievement matters. But we should meditate, too, on the value of the process. It is no accident that the young and the old are generally more satisfied with life than those in middle age. Young adults have not embarked on life-defining projects; the aged have such accomplishments behind them. That makes it more natural for them to live in the present: to find value in atelic activities that are not exhausted by engagement or deferred to the future, but realised here and now. It is hard to resist the tyranny of projects in midlife, to find a balance between the telic and atelic. But if we hope to overcome the midlife crisis, to escape the gloom of emptiness and self-defeat, that is what we have to do.

Kieran Setiya is professor of philosophy at the Massachusetts Institute of Technology. He is the author of Midlife: A Philosophical Guide.