Say goodbye to the information age: it’s all about reputation now

There is an underappreciated paradox of knowledge that plays a pivotal role in our advanced hyper-connected liberal democracies: the greater the amount of information that circulates, the more we rely on so-called reputational devices to evaluate it. What makes this paradoxical is that the vastly increased access to information and knowledge we have today does not empower us or make us more cognitively autonomous. Rather, it renders us more dependent on other people’s judgments and evaluations of the information with which we are faced.

We are experiencing a fundamental paradigm shift in our relationship to knowledge. From the ‘information age’, we are moving towards the ‘reputation age’, in which information will have value only if it is already filtered, evaluated and commented upon by others. Seen in this light, reputation has become a central pillar of collective intelligence today. It is the gatekeeper to knowledge, and the keys to the gate are held by others. The way in which the authority of knowledge is now constructed makes us reliant on what are the inevitably biased judgments of other people, most of whom we do not know.

Let me give some examples of this paradox. If you are asked why you believe that big changes in the climate are occurring and can dramatically harm future life on Earth, the most reasonable answer you’re likely to provide is that you trust the reputation of the sources of information to which you usually turn for acquiring information about the state of the planet. In the best-case scenario, you trust the reputation of scientific research and believe that peer-review is a reasonable way of sifting out ‘truths’ from false hypotheses and complete ‘bullshit’ about nature. In the average-case scenario, you trust newspapers, magazines or TV channels that endorse a political view which supports scientific research to summarise its findings for you. In this latter case, you are twice-removed from the sources: you trust other people’s trust in reputable science.

Or, take an even more uncontroversial truth that I have discussed at length elsewhere: one of the most notorious conspiracy theories is that no man stepped on the Moon in 1969, and that the entire Apollo programme (including six landings on the Moon between 1969 and 1972) was a staged fake. The initiator of this conspiracy theory was Bill Kaysing, who worked in publications at the Rocketdyne company – where Apollo’s Saturn V rocket engines were built. At his own expense, Kaysing published the book We Never Went to the Moon: America’s $30 Billion Swindle (1976). After publication, a movement of skeptics grew and started to collect evidence about the alleged hoax.

According to the Flat Earth Society, one of the groups that still denies the facts, the Moon landings were staged by Hollywood with the support of Walt Disney and under the artistic direction of Stanley Kubrick. Most of the ‘proofs’ they advance are based on a seemingly accurate analysis of the pictures of the various landings. The shadows’ angles are inconsistent with the light, the United States flag blows even if there is no wind on the Moon, the tracks of the steps are too precise and well-preserved for a soil in which there is no moisture. Also, is it not suspicious that a programme that involved more than 400,000 people for six years was shut down abruptly? And so on.

The great majority of the people we would consider reasonable and accountable (myself included) will dismiss these claims by laughing at the very absurdity of the hypothesis (although there have been serious and documented responses by NASA against these allegations). Yet, if I ask myself on what evidentiary basis I believe that there has been a Moon landing, I must admit that my evidence is quite poor, and that I have never invested a second trying to debunk the counter-evidence accumulated by these conspiracy theorists. What I personally know about the facts mixes confused childhood memories, black-and-white television news, and deference to what my parents told me about the landing in subsequent years. Still, the wholly secondhand and personally uncorroborated quality of this evidence does not make me hesitate about the truth of my beliefs on the matter.

My reasons for believing that the Moon landing took place go far beyond the evidence I can gather and double-check about the event itself. In those years, we trusted a democracy such as the US to have a justified reputation for sincerity. Without an evaluative judgment about the reliability of a certain source of information, that information is, for all practical purposes, useless.

The paradigm shift from the age of information to the age of reputation must be taken into account when we try to defend ourselves from ‘fake news’ and other misinformation and disinformation techniques that are proliferating through contemporary societies. What a mature citizen of the digital age should be competent at is not spotting and confirming the veracity of the news. Rather, she should be competent at reconstructing the reputational path of the piece of information in question, evaluating the intentions of those who circulated it, and figuring out the agendas of those authorities that lent it credibility.

Whenever we are at the point of accepting or rejecting new information, we should ask ourselves: Where does it come from? Does the source have a good reputation? Who are the authorities who believe it? What are my reasons for deferring to these authorities? Such questions will help us to get a better grip on reality than trying to check directly the reliability of the information at issue. In a hyper-specialised system of the production of knowledge, it makes no sense to try to investigate on our own, for example, the possible correlation between vaccines and autism. It would be a waste of time, and probably our conclusions would not be accurate. In the reputation age, our critical appraisals should be directed not at the content of information but rather at the social network of relations that has shaped that content and given it a certain deserved or undeserved ‘rank’ in our system of knowledge.

These new competences constitute a sort of second-order epistemology. They prepare us to question and assess the reputation of an information source, something that philosophers and teachers should be crafting for future generations.

According to Friedrich Hayek’s book Law, Legislation and Liberty (1973), ‘civilisation rests on the fact that we all benefit from knowledge which we do not possess’. A civilised cyber-world will be one where people know how to assess critically the reputation of information sources, and can empower their knowledge by learning how to gauge appropriately the social ‘rank’ of each bit of information that enters their cognitive field.

Gloria Origgi, a Paris-based philosopher, is a senior researcher at the Institut Jean Nicod at the National Center for Scientific Research. Her books include one on trust and another on the future of writing on the Internet. She maintains a blog in English, French, and Italian at gloriaoriggi.blogspot.com. Reputation: What it is and Why it Matters is available now.

This article was originally published at Aeon and has been republished under Creative Commons.

Rachel Sherman: How New York’s wealthy parents try to raise ‘unentitled’ kids

This article was originally published at Aeon and has been republished under Creative Commons.

Wealthy parents seem to have it made when it comes to raising their children. They can offer their kids the healthiest foods, the most attentive caregivers, the best teachers and the most enriching experiences, from international vacations to unpaid internships in competitive fields.

Yet these parents have a problem: how to give their kids these advantages while also setting limits. Almost all of the 50 affluent parents in and around New York City that I interviewed for my book Uneasy Street: The Anxieties of Affluence (2017) expressed fears that children would be ‘entitled’ – a dirty word that meant, variously, lazy, materialistic, greedy, rude, selfish and self-satisfied. Instead, they strove to keep their children ‘grounded’ and ‘normal’. Of course, no parent wishes to raise spoiled children; but for those who face relatively few material limits, this possibility is distinctly heightened.

This struggle highlights two challenges that elite parents face in this particular historical moment: the stigma of wealth, and a competitive environment. For most of the 20th century, the United States had a quasi-aristocratic upper class, mainly white, Anglo-Saxon Protestant (WASP) families from old money, usually listed in the Social Register. Comfortable with their inherited advantages, and secure in their economic position, they openly viewed themselves as part of a better class of people. By sending their kids to elite schools and marrying them off to the children of families in the same community, they sought to reproduce their privilege.

But in the past few decades this homogenous ‘leisure class’ has declined, and the category of the ‘working wealthy’, especially in finance, has exploded. The ranks of high-earners have also partially diversified, opening up to people besides WASP men. This shift has led to a more competitive environment, especially in the realm of college admissions.

At the same time, a more egalitarian discourse has taken hold in the public sphere. As the sociologist Shamus Khan at Columbia University in New York argues in his book Privilege (2011), it is no longer legitimate for rich people to assume that they deserve their social position based simply on who they are. Instead, they must frame themselves as deserving on the basis of merit, particularly through hard work. Meanwhile, popular-culture images proliferate of wealthy people as greedy, lazy, shallow, materialistic or otherwise morally compromised.

Both competition and moral challenge have intensified since the 2008 economic crisis. Jobs for young people, even those with college educations, have become scarcer. The crisis has also made extreme inequality more visible, and exposed those at the top to harsher public critique.

In this climate, it is hard to feel that being wealthy is compatible with being morally worthy, and the wealthy themselves are well aware of the problem. The parents I talked with struggle over how to raise kids who deserve their privilege, encouraging them to become hard workers and disciplined consumers. They often talked about keeping kids ‘normal’, using language that invoked broad ‘middle-class’ American values. At the same time, they wanted to make sure that their children could prevail in increasingly competitive education and labour markets. This dilemma led to a profound tension between limiting and fostering privilege.

Parents’ educational decisions were especially marked by this conflict. Many supported the idea of public school in principle, but were anxious about large classes, lack of sports and arts programmes, and college prospects. Yet they worried that placing kids in elite private schools would distort their understanding of the world, exposing them only to extremely wealthy, ‘entitled’ peers. Justin, a finance entrepreneur, was conflicted about choosing private, saying: ‘I want the kids to be normal. I don’t want them to just be coddled, and be at a country club.’ Kevin, another wealthy father, preferred public school, wanting his young son not to live in an ‘elitist’ ‘narrow world’ in which ‘you only know a certain kind of people. Who are all complaining about their designers and their nannies.’

The question of paid work also brought up this quandary. All the parents I talked with wanted their kids to have a strong work ethic, with some worrying that their children would not be self-sufficient without it. But even those who could support their kids forever didn’t want to. Scott, for example, whose family wealth exceeds $50 million, was ‘terrified’ his kids would grow up to be ‘lazy jerks’. Parents also wanted to ensure children were not materialistic hyper-consumers. One father said of his son: ‘I want him to know limits.’ Parents tied consumption to the work ethic by requiring kids to do household chores. One mother with assets in the tens of millions had recently started requiring her six-year-old to do his own laundry in exchange for his activities and other privileges.

This mother, and many other parents of younger children, said they would insist that their kids work for pay during high school and college, in order to learn ‘the value of a dollar’. Commitment to children’s employment wavered, however, if parents saw having a job as incompatible with other ways of cultivating their capacities. Kate, who had grown up middle-class, said, of her own ‘crappy jobs’ growing up: ‘There’s some value to recognising this is what you have to do, and you get a paycheck, and that’s the money you have, and you budget it.’ But her partner Nadine, who had inherited wealth, contrasted her daughter’s possibly ‘researching harbour seals in Alaska’ to working for pay in a diner. She said: ‘Yes, you want them to learn the value of work, and getting paid for it, and all that stuff. And I don’t want my kids to be entitled. I don’t want them to be, like, silver spoon. But I also feel like life affords a lot of really exciting opportunities.’

The best way to help kids understand constraints, of course, is to impose them. But, despite feeling conflicted, these parents did not limit what their kids consumed in any significant way. Even parents who resisted private school tended to end up there. The limits they placed on consumption were marginal, constituting what the sociologist Allison Pugh in Longing and Belonging (2009) called ‘symbolic deprivation’. Facing competitive college admissions, none of the high-school-age kids of parents in my sample worked for pay; parents were more likely to describe their homework as their ‘job’.

Instead of limiting their privilege, parents tried to regulate children’s feelings about it. They wanted kids to appreciate their private education, comfortable homes, designer clothes, and (in some cases) their business-class or private travel. They emphasised that these privileges were ‘special’ or ‘a treat’. As Allison said, of her family’s two annual vacations: ‘You don’t want your kids to take these kinds of things for granted. … [They should know] most people don’t live this way. And that this is not the norm, and that you should feel this is special, and this is a treat.’

By the same token, they tried to find ways to help kids understand the ‘real world’ – to make sure they ‘understand the way everyone else lives’, in the words of one millionaire mother. Another mother fostered her son’s friendship with a middle-class family who lived in a modest apartment, because, she said: ‘I want to keep our feet in something that’s a little more normal’ than his private-school community.

Ideally, then, kids will be ‘normal’: hard workers and prudent consumers, who don’t see themselves as better than others. But at the same time, they will understand that they’re not normal, appreciating their privilege, without ever showing off. Egalitarian dispositions thereby legitimate unequal distributions, allowing children – and parents – to enjoy and reproduce their advantages without being morally compromised. These days, it seems, the rich can be entitled as long as they do not act or feel ‘entitled’. They can take it, as long as they don’t take it for granted.

Rachel Sherman is associate professor of sociology at the New School for Social Research and Eugene Lang College. She is the author of Class Acts: Service and Inequality in Luxury Hotels. She lives in New York.

Scott Page: Why hiring the ‘best’ people produces the least creative results

This article was originally published at Aeon and has been republished under Creative Commons.

While in graduate school in mathematics at the University of Wisconsin-Madison, I took a logic course from David Griffeath. The class was fun. Griffeath brought a playfulness and openness to problems. Much to my delight, about a decade later, I ran into him at a conference on traffic models. During a presentation on computational models of traffic jams, his hand went up. I wondered what Griffeath – a mathematical logician – would have to say about traffic jams. He did not disappoint. Without even a hint of excitement in his voice, he said: ‘If you are modelling a traffic jam, you should just keep track of the non-cars.’

The collective response followed the familiar pattern when someone drops an unexpected, but once stated, obvious idea: a puzzled silence, giving way to a roomful of nodding heads and smiles. Nothing else needed to be said.

Griffeath had made a brilliant observation. During a traffic jam, most of the spaces on the road are filled with cars. Modelling each car takes up an enormous amount of memory. Keeping track of the empty spaces instead would use less memory – in fact almost none. Furthermore, the dynamics of the non-cars might be more amenable to analysis.
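
To make the memory argument concrete, here is a minimal sketch in Python (my illustration, not Griffeath’s actual model; the road length and the handful of gaps are invented): in a jammed road, listing the few empty cells encodes the same configuration as listing every car, at a tiny fraction of the storage.

# A jammed one-lane road: almost every cell holds a car.
ROAD_CELLS = 1_000_000                    # cells on the road (illustrative)
gaps = {17, 40_210, 998_003}              # the only empty cells in the jam

cars = set(range(ROAD_CELLS)) - gaps      # car-centred representation
non_cars = gaps                           # Griffeath-style representation

print(len(cars))       # 999,997 positions to store
print(len(non_cars))   # 3 positions describe the same road state

The dynamics can be tracked the same way: a car advancing into a gap is simply that gap moving one cell back.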

Versions of this story occur routinely at academic conferences, in research laboratories or policy meetings, within design groups, and in strategic brainstorming sessions. They share three characteristics. First, the problems are complex: they concern high-dimensional contexts that are difficult to explain, engineer, evolve or predict. Second, the breakthrough ideas do not arise by magic, nor are they constructed anew from whole cloth. They take an existing idea, insight, trick or rule, and apply it in a novel way, or they combine ideas – like Apple’s breakthrough repurposing of the touchscreen technology. In Griffeath’s case, he applied a concept from information theory: minimum description length. Fewer words are required to say ‘No-L’ than to list ‘ABCDEFGHIJKMNOPQRSTUVWXYZ’. I should add that these new ideas typically produce modest gains. But, collectively, they can have large effects. Progress occurs as much through sequences of small steps as through giant leaps.

Third, these ideas are birthed in group settings. One person presents her perspective on a problem, describes an approach to finding a solution or identifies a sticking point, and a second person makes a suggestion or knows a workaround. The late computer scientist John Holland commonly asked: ‘Have you thought about this as a Markov process, with a set of states and transitions between those states?’ That query would force the presenter to define states. That simple act would often lead to an insight.

The burgeoning of teams – most academic research is now done in teams, as is most investing and even most songwriting (at least for the good songs) – tracks the growing complexity of our world. We used to build roads from A to B. Now we construct transportation infrastructure with environmental, social, economic and political impacts.

The complexity of modern problems often precludes any one person from fully understanding them. Factors contributing to rising obesity levels, for example, include transportation systems and infrastructure, media, convenience foods, changing social norms, human biology and psychological factors. Designing an aircraft carrier, to take another example, requires knowledge of nuclear engineering, naval architecture, metallurgy, hydrodynamics, information systems, military protocols, the exercise of modern warfare and, given the long building time, the ability to predict trends in weapon systems.

The multidimensional or layered character of complex problems also undermines the principle of meritocracy: the idea that the ‘best person’ should be hired. There is no best person. When putting together an oncological research team, a biotech company such as Gilead or Genentech would not construct a multiple-choice test and hire the top scorers, or hire people whose resumes score highest according to some performance criteria. Instead, they would seek diversity. They would build a team of people who bring diverse knowledge bases, tools and analytic skills. That team would more likely than not include mathematicians (though not logicians such as Griffeath). And the mathematicians would likely study dynamical systems and differential equations.

Believers in a meritocracy might grant that teams ought to be diverse but then argue that meritocratic principles should apply within each category. Thus the team should consist of the ‘best’ mathematicians, the ‘best’ oncologists, and the ‘best’ biostatisticians from within the pool.

That position suffers from a similar flaw. Even within a knowledge domain, no test or set of criteria applied to individuals will produce the best team. Each of these domains possesses such depth and breadth that no such test can exist. Consider the field of neuroscience. Upwards of 50,000 papers were published last year covering various techniques, domains of enquiry and levels of analysis, ranging from molecules and synapses up through networks of neurons. Given that complexity, any attempt to rank a collection of neuroscientists from best to worst, as if they were competitors in the 50-metre butterfly, must fail. What could be true is that given a specific task and the composition of a particular team, one scientist would be more likely to contribute than another. Optimal hiring depends on context. Optimal teams will be diverse.

Evidence for this claim can be seen in the way that papers and patents that combine diverse ideas tend to rank as high-impact. It can also be found in the structure of the so-called random decision forest, a state-of-the-art machine-learning algorithm. Random forests consist of ensembles of decision trees. If classifying pictures, each tree makes a vote: is that a picture of a fox or a dog? A weighted majority rules. Random forests can serve many ends. They can identify bank fraud and diseases, recommend ceiling fans and predict online dating behaviour.

When building a forest, you do not select the best trees, as they tend to make similar classifications. You want diversity. Programmers achieve that diversity by training each tree on different data, a technique known as bagging. They also boost the forest ‘cognitively’ by training trees on the hardest cases – those that the current forest gets wrong. This ensures even more diversity, and more accurate forests.
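
As a rough illustration of the point that the ensemble’s diversity, rather than any single ‘best’ tree, does the work, here is a minimal sketch in Python (my example, not the author’s; it assumes scikit-learn is installed and uses a synthetic dataset). The forest’s trees are diversified by bagging – each is fit to a bootstrap sample of the data and considers random subsets of the features – and the diverse ensemble typically generalises better than the lone deep tree.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data, split into train and test sets.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A single deep tree: the lone 'best' candidate.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A forest of 200 trees, each trained on a bootstrap sample (bagging)
# and on random feature subsets - which is what keeps the trees diverse.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print('single tree:', tree.score(X_test, y_test))
print('forest:     ', forest.score(X_test, y_test))

Training further trees specifically on the cases the current ensemble gets wrong – the ‘cognitive boost’ described above – is the related idea behind boosting methods, and is not shown in this sketch.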

Yet the fallacy of meritocracy persists. Corporations, non-profits, governments, universities and even preschools test, score and hire the ‘best’. This all but guarantees not creating the best team. Ranking people by common criteria produces homogeneity. And when biases creep in, it results in people who look like those making the decisions. That’s not likely to lead to breakthroughs. As Astro Teller, CEO of X, the ‘moonshot factory’ at Alphabet, Google’s parent company, has said: ‘Having people who have different mental perspectives is what’s important. If you want to explore things you haven’t explored, having people who look just like you and think just like you is not the best way.’ We must see the forest.

Scott E. Page is the Leonid Hurwicz Collegiate Professor of Complex Systems, Political Science, and Economics at the University of Michigan and an external faculty member of the Santa Fe Institute. The recipient of a Guggenheim Fellowship and a member of the American Academy of Arts and Sciences, he is the author of The Diversity Bonus: How Great Teams Pay Off in the Knowledge Economy. He has been a featured speaker at Davos as well as at organizations such as Google, Bloomberg, BlackRock, Boeing, and NASA.

Kieran Setiya: How Schopenhauer’s thought can illuminate a midlife crisis

This article was originally published at Aeon and has been republished under Creative Commons.

Despite reflecting on the good life for more than 2,500 years, philosophers have not had much to say about middle age. For me, approaching 40 was a time of stereotypical crisis. Having jumped the hurdles of the academic career track, I knew I was lucky to be a tenured professor of philosophy. Yet stepping back from the busyness of life, the rush of things to do, I found myself wondering, what now? I felt a sense of repetition and futility, of projects completed just to be replaced by more. I would finish this article, teach this class, and then I would do it all again. It was not that everything seemed worthless. Even at my lowest ebb, I didn’t feel there was no point in what I was doing. Yet somehow the succession of activities, each one rational in itself, fell short.

I am not alone. Perhaps you have felt, too, an emptiness in the pursuit of worthy goals. This is one form of midlife crisis, at once familiar and philosophically puzzling. The paradox is that success can seem like failure. Like any paradox, it calls for philosophical treatment. If the emptiness of the midlife crisis is not the unqualified emptiness in which one sees no value in anything, what is it? What was wrong with my life?

In search of an answer, I turned to the 19th-century pessimist Arthur Schopenhauer. Schopenhauer is notorious for preaching the futility of desire. That getting what you want could fail to make you happy would not have surprised him at all. On the other hand, not having it is just as bad. For Schopenhauer, you are damned if you do and damned if you don’t. If you get what you want, your pursuit is over. You are aimless, flooded with a ‘fearful emptiness and boredom’, as he put it in The World as Will and Representation (1818). Life needs direction: desires, projects, goals that are so far unachieved. And yet this, too, is fatal. Because wanting what you do not have is suffering. In staving off the void by finding things to do, you have condemned yourself to misery. Life ‘swings like a pendulum to and fro between pain and boredom, and these two are in fact its ultimate constituents’.

Schopenhauer’s picture of human life might seem unduly bleak. Often enough, midlife brings with it failure or success in cherished projects: you have the job you worked for many years to get, the partner you hoped to meet, the family you meant to start – or else you don’t. Either way, you look for new directions. But the answer to achieving your goals, or giving them up, feels obvious: you simply make new ones. Nor is the pursuit of what you want pure agony. Revamping your ambitions can be fun.

Still, I think there is something right in Schopenhauer’s dismal conception of our relationship with our ends, and that it can illuminate the darkness of midlife. Taking up new projects, after all, simply obscures the problem. When you aim at a future goal, satisfaction is deferred: success has yet to come. But the moment you succeed, your achievement is in the past. Meanwhile, your engagement with projects subverts itself. In pursuing a goal, you either fail or, in succeeding, end its power to guide your life. No doubt you can formulate other plans. The problem is not that you will run out of projects (the aimless state of Schopenhauer’s boredom), it’s that your way of engaging with the ones that matter most to you is by trying to complete them and thus expel them from your life. When you pursue a goal, you exhaust your interaction with something good, as if you were to make friends for the sake of saying goodbye.

Hence one common figure of the midlife crisis: the striving high-achiever, obsessed with getting things done, who is haunted by the hollowness of everyday life. When you are obsessed with projects, ceaselessly replacing old with new, satisfaction is always in the future. Or the past. It is mortgaged, then archived, but never possessed. In pursuing goals, you aim at outcomes that preclude the possibility of that pursuit, extinguishing the sparks of meaning in your life.

The question is what to do about this. For Schopenhauer, there is no way out: what I am calling a midlife crisis is simply the human condition. But Schopenhauer was wrong. In order to see his mistake, we need to draw distinctions among the activities we value: between ones that aim at completion, and ones that don’t.

Adapting terminology from linguistics, we can say that ‘telic’ activities – from ‘telos’, the Greek word for purpose – are ones that aim at terminal states of completion and exhaustion. You teach a class, get married, start a family, earn a raise. Not all activities are like this, however. Others are ‘atelic’: there is no point of termination at which they aim, or final state in which they have been achieved and there is no more to do. Think of listening to music, parenting, or spending time with friends. They are things you can stop doing, but you cannot finish or complete them. Their temporality is not that of a project with an ultimate goal, but of a limitless process.

If the crisis diagnosed by Schopenhauer turns on excessive investment in projects, then the solution is to invest more fully in the process, giving meaning to your life through activities that have no terminal point: since they cannot be completed, your engagement with them is not exhaustive. It will not subvert itself. Nor does it invite the sense of frustration that Schopenhauer scorns in unsatisfied desire – the sense of being at a distance from one’s goal, so that fulfilment is always in the future or the past.

We should not give up on our worthwhile goals. Their achievement matters. But we should meditate, too, on the value of the process. It is no accident that the young and the old are generally more satisfied with life than those in middle age. Young adults have not embarked on life-defining projects; the aged have such accomplishments behind them. That makes it more natural for them to live in the present: to find value in atelic activities that are not exhausted by engagement or deferred to the future, but realised here and now. It is hard to resist the tyranny of projects in midlife, to find a balance between the telic and atelic. But if we hope to overcome the midlife crisis, to escape the gloom of emptiness and self-defeat, that is what we have to do.

Kieran Setiya is professor of philosophy at the Massachusetts Institute of Technology. He is the author of Midlife: A Philosophical Guide.

Kyle Harper: How climate change and disease helped the fall of Rome

At some time or another, every historian of Rome has been asked to say where we are, today, on Rome’s cycle of decline. Historians might squirm at such attempts to use the past but, even if history does not repeat itself, nor come packaged into moral lessons, it can deepen our sense of what it means to be human and how fragile our societies are.

In the middle of the second century, the Romans controlled a huge, geographically diverse part of the globe, from northern Britain to the edges of the Sahara, from the Atlantic to Mesopotamia. The generally prosperous population peaked at 75 million. Eventually, all free inhabitants of the empire came to enjoy the rights of Roman citizenship. Little wonder that the 18th-century English historian Edward Gibbon judged this age the ‘most happy’ in the history of our species – yet today we are more likely to see the advance of Roman civilisation as unwittingly planting the seeds of its own demise.

Five centuries later, the Roman empire was a small Byzantine rump-state controlled from Constantinople, its near-eastern provinces lost to Islamic invasions, its western lands covered by a patchwork of Germanic kingdoms. Trade receded, cities shrank, and technological advance halted. Despite the cultural vitality and spiritual legacy of these centuries, this period was marked by a declining population, political fragmentation, and lower levels of material complexity. When the historian Ian Morris at Stanford University created a universal social-development index, the fall of Rome emerged as the greatest setback in the history of human civilisation.

Explanations for a phenomenon of this magnitude abound: in 1984, the German classicist Alexander Demandt catalogued more than 200 hypotheses. Most scholars have looked to the internal political dynamics of the imperial system or the shifting geopolitical context of an empire whose neighbours gradually caught up in the sophistication of their military and political technologies. But new evidence has started to unveil the crucial role played by changes in the natural environment. The paradoxes of social development, and the inherent unpredictability of nature, worked in concert to bring about Rome’s demise.

Climate change did not begin with the exhaust fumes of industrialisation, but has been a permanent feature of human existence. Orbital mechanics (small variations in the tilt, spin and eccentricity of the Earth’s orbit) and solar cycles alter the amount and distribution of energy received from the Sun. And volcanic eruptions spew reflective sulphates into the atmosphere, sometimes with long-reaching effects. Modern, anthropogenic climate change is so perilous because it is happening quickly and in conjunction with so many other irreversible changes in the Earth’s biosphere. But climate change per se is nothing new.

The need to understand the natural context of modern climate change has been an unmitigated boon for historians. Earth scientists have scoured the planet for paleoclimate proxies, natural archives of the past environment. The effort to put climate change in the foreground of Roman history is motivated both by troves of new data and a heightened sensitivity to the importance of the physical environment. It turns out that climate had a major role in the rise and fall of Roman civilisation. The empire-builders benefitted from impeccable timing: the characteristic warm, wet and stable weather was conducive to economic productivity in an agrarian society. The benefits of economic growth supported the political and social bargains by which the Roman empire controlled its vast territory. The favourable climate, in ways subtle and profound, was baked into the empire’s innermost structure.

The end of this lucky climate regime did not immediately, or in any simple deterministic sense, spell the doom of Rome. Rather, a less favourable climate undermined its power just when the empire was imperilled by more dangerous enemies – Germans, Persians – from without. Climate instability peaked in the sixth century, during the reign of Justinian. Work by dendrochronologists and ice-core experts points to an enormous spasm of volcanic activity in the 530s and 540s CE, unlike anything else in the past few thousand years. This violent sequence of eruptions triggered what is now called the ‘Late Antique Little Ice Age’, when much colder temperatures endured for at least 150 years. This phase of climate deterioration had decisive effects in Rome’s unravelling. It was also intimately linked to a catastrophe of even greater moment: the outbreak of the first pandemic of bubonic plague.

Disruptions in the biological environment were even more consequential to Rome’s destiny. For all the empire’s precocious advances, life expectancies ranged in the mid-20s, with infectious diseases the leading cause of death. But the array of diseases that preyed upon Romans was not static and, here too, new sensibilities and technologies are radically changing the way we understand the dynamics of evolutionary history – both for our own species, and for our microbial allies and adversaries.

The highly urbanised, highly interconnected Roman empire was a boon to its microbial inhabitants. Humble gastro-enteric diseases such as Shigellosis and paratyphoid fevers spread via contamination of food and water, and flourished in densely packed cities. Where swamps were drained and highways laid, the potential of malaria was unlocked in its worst form – Plasmodium falciparum – a deadly mosquito-borne protozoon. The Romans also connected societies by land and by sea as never before, with the unintended consequence that germs moved as never before, too. Slow killers such as tuberculosis and leprosy enjoyed a heyday in the web of interconnected cities fostered by Roman development.

However, the decisive factor in Rome’s biological history was the arrival of new germs capable of causing pandemic events. The empire was rocked by three such intercontinental disease events. The Antonine plague coincided with the end of the optimal climate regime, and was probably the global debut of the smallpox virus. The empire recovered, but never regained its previous commanding dominance. Then, in the mid-third century, a mysterious affliction of unknown origin called the Plague of Cyprian sent the empire into a tailspin. Though it rebounded, the empire was profoundly altered – with a new kind of emperor, a new kind of money, a new kind of society, and soon a new religion known as Christianity. Most dramatically, in the sixth century a resurgent empire led by Justinian faced a pandemic of bubonic plague, a prelude to the medieval Black Death. The toll was unfathomable – maybe half the population was felled.

The plague of Justinian is a case study in the extraordinarily complex relationship between human and natural systems. The culprit, the Yersinia pestis bacterium, is not a particularly ancient nemesis; evolving just 4,000 years ago, almost certainly in central Asia, it was an evolutionary newborn when it caused the first plague pandemic. The disease is permanently present in colonies of social, burrowing rodents such as marmots or gerbils. However, the historic plague pandemics were colossal accidents, spillover events involving at least five different species: the bacterium, the reservoir rodent, the amplification host (the black rat, which lives close to humans), the fleas that spread the germ, and the people caught in the crossfire.

Genetic evidence suggests that the strain of Yersinia pestis that generated the plague of Justinian originated somewhere near western China. It first appeared on the southern shores of the Mediterranean and, in all likelihood, was smuggled in along the southern, seaborne trading networks that carried silk and spices to Roman consumers. It was an accident of early globalisation. Once the germ reached the seething colonies of commensal rodents, fattened on the empire’s giant stores of grain, the mortality was unstoppable.

The plague pandemic was an event of astonishing ecological complexity. It required purely chance conjunctions, especially if the initial outbreak beyond the reservoir rodents in central Asia was triggered by those massive volcanic eruptions in the years preceding it. It also involved the unintended consequences of the built human environment – such as the global trade networks that shuttled the germ onto Roman shores, or the proliferation of rats inside the empire. The pandemic baffles our distinctions between structure and chance, pattern and contingency. Therein lies one of the lessons of Rome. Humans shape nature – above all, the ecological conditions within which evolution plays out. But nature remains blind to our intentions, and other organisms and ecosystems do not obey our rules. Climate change and disease evolution have been the wild cards of human history.

Our world now is very different from ancient Rome. We have public health, germ theory and antibiotic pharmaceuticals. We will not be as helpless as the Romans, if we are wise enough to recognise the grave threats looming around us, and to use the tools at our disposal to mitigate them. But the centrality of nature in Rome’s fall gives us reason to reconsider the power of the physical and biological environment to tilt the fortunes of human societies. Perhaps we could come to see the Romans not so much as an ancient civilisation, standing across an impassable divide from our modern age, but rather as the makers of our world today. They built a civilisation where global networks, emerging infectious diseases and ecological instability were decisive forces in the fate of human societies. The Romans, too, thought they had the upper hand over the fickle and furious power of the natural environment. History warns us: they were wrong.

Kyle Harper is professor of classics and letters and senior vice president and provost at the University of Oklahoma. He is the author of The Fate of Rome, recently released, as well as Slavery in the Late Roman World, AD 275–425 and From Shame to Sin: The Christian Transformation of Sexual Morality in Late Antiquity. He lives in Norman, Oklahoma.

This article was originally published at Aeon and has been republished under Creative Commons.

Dennis Rasmussen: He died as he lived – David Hume, philosopher and infidel

As the Scottish philosopher David Hume lay on his deathbed in the summer of 1776, his passing became a highly anticipated event. Few people in 18th-century Britain were as forthright in their lack of religious faith as Hume was, and his skepticism had earned him a lifetime of abuse and reproach from the pious, including a concerted effort to excommunicate him from the Church of Scotland. Now everyone wanted to know how the notorious infidel would face his end. Would he show remorse or perhaps even recant his skepticism? Would he die in a state of distress, having none of the usual consolations afforded by belief in an afterlife? In the event, Hume died as he had lived, with remarkable good humour and without religion.

The most famous depiction of Hume’s dying days, at least in our time, comes from James Boswell, who managed to contrive a visit with him on Sunday, 7 July 1776. As his account of their conversation makes plain, the purpose of Boswell’s visit was less to pay his respects to a dying man, or even to gratify a sense of morbid curiosity, than to try to fortify his own religious convictions by confirming that even Hume could not remain a sincere non-believer to the end. In this, he failed utterly.

‘Being too late for church,’ Boswell made his way to Hume’s house, where he was surprised to find him ‘placid and even cheerful … talking of different matters with a tranquility of mind and a clearness of head which few men possess at any time.’ Ever tactful, Boswell immediately brought up the subject of the afterlife, asking if there might not be a future state. Hume replied that ‘it was possible that a piece of coal put upon the fire would not burn; and he added that it was a most unreasonable fancy that we should exist for ever’. Boswell persisted, asking if he was not made uneasy by the thought of annihilation, to which Hume responded that he was no more perturbed by the idea of ceasing to exist than by the idea that he had not existed before he was born. What was more, Hume ‘said flatly that the morality of every religion was bad, and … that when he heard a man was religious, he concluded he was a rascal, though he had known some instances of very good men being religious.’

This interview might show Hume at his brashest, but in the 18th century it remained mostly confined to Boswell’s private notebooks. The most prominent and controversial public account of Hume’s final days came instead from an even more famous pen: that of Adam Smith, Hume’s closest friend. Smith composed a eulogy for Hume soon after the latter’s death in the form of a public letter to their mutual publisher, William Strahan. This letter was effectively the ‘authorised version’ of the story of Hume’s death, as it appeared (with Hume’s advance permission) as a companion piece to his short, posthumously published autobiography, My Own Life (1776).

Smith’s letter contains none of the open impiety that pervades Boswell’s interview, but it does chronicle – even flaunt – the equanimity of Hume’s last days, depicting the philosopher telling jokes, playing cards, and conversing cheerfully with his friends. It also emphasises the excellence of Hume’s character; indeed, Smith concluded the letter by declaring that his unbelieving friend approached ‘as nearly to the idea of a perfectly wise and virtuous man, as perhaps the nature of human frailty will permit’.

Though relatively little known today, in the 18th century Smith’s letter caused an uproar. He later proclaimed that it ‘brought upon me 10 times more abuse than the very violent attack I had made upon the whole commercial system of Great Britain’ – meaning, of course, The Wealth of Nations (1776). Throughout his life, Smith had generally gone to great lengths to avoid revealing much about his religious beliefs – or lack thereof – and to steer clear of confrontations with the devout, but his claim that an avowed skeptic such as Hume was a model of wisdom and virtue ‘gave very great offence’ and ‘shocked every sober Christian’ (as a contemporary commented).

Boswell himself deemed Smith’s letter a piece of ‘daring effrontery’ and an example of the ‘poisonous productions with which this age is infested’. Accordingly, he beseeched Samuel Johnson to ‘step forth’ to ‘knock Hume’s and Smith’s heads together, and make vain and ostentatious infidelity exceedingly ridiculous. Would it not,’ he pleaded, ‘be worth your while to crush such noxious weeds in the moral garden?’

Nor did the controversy subside quickly. Nearly a century later, one prolific author of religious tomes, John Lowrie, was still sufficiently incensed by Smith’s letter to proclaim that he knew ‘no more lamentable evidence of the weakness and folly of irreligion and infidelity’ in ‘all the range of English literature’.

In the 18th century, the idea that it was possible for a skeptic to die well, without undue hopes or fears, clearly haunted many people, including Boswell, who tried to call on Hume twice more after their 7 July conversation in order to press him further, but was turned away. Today, of course, non-believers are still regarded with suspicion and even hatred in some circles, but many die every day with little notice or comment about their lack of faith. It takes a particularly audacious and outspoken form of non-belief – more akin to the Hume of Boswell’s private interview than to the Hume of Smith’s public letter – to arouse much in the way of shock or resentment, of the kind that attended the death of Christopher Hitchens some years ago. (Indeed, there were a number of comparisons drawn between Hitchens and Hume at the time.) The fact that in the 18th century Smith endured vigorous and lasting abuse for merely reporting his friend’s calm and courageous end offers a stark reminder of just how far we have come in this regard.

Dennis C. Rasmussen is associate professor of political science at Tufts University. His books include The Infidel and the Professor and The Pragmatic Enlightenment. He lives in Charlestown, Massachusetts.

This article was originally published at Aeon and has been republished under Creative Commons.

Lewis Glinert: Language dreams – An ancient tongue awakens in a Jewish baby

In a Jewish section of Jerusalem, in 1885, a young couple, Eliezer and Devora Ben-Yehuda, were fearful for their child: they were rearing him in Hebrew, an unheard-of idea. They had taken in a wet-nurse, a dog and a cat; the nurse agreed to coo in Hebrew, while the dog and the cat – one male, the other female – would give the infant Itamar an opportunity to hear Hebrew adjectives and verbs inflected for gender. All other languages were to be silenced.

When Itamar turned three, however, he had still not uttered a word. Family friends protested. Surely this mother-tongue experiment would produce an imbecile. And then, the story goes, Itamar’s father marched in and, upon finding the boy’s mother singing him a lullaby in Russian, flew into a rage. But then he fell silent, as the child was screaming: ‘Abba, Abba!’ (Daddy, Daddy!) Frightened little Itamar had just begun the reawakening of Hebrew as a mother tongue.

This is how I heard the story (embroidered, no doubt, by time) when I interviewed Itamar’s last living sister, Dola, for my BBC documentary ‘Tongue of Tongues’ in 1989.

As a young man in Russia, Eliezer Ben-Yehuda (born Perlman) had a far more modest dream: Jewish cultural rebirth. Groups of eastern European Jews, intensively schooled in the Bible and the Talmud in the traditional religious way, were beginning to explore a new, secular Jewish identity, built on reimagining their past and at the same time forging a ‘modernised’ Hebrew to acquaint fellow Jews with contemporary arts and sciences. Hebrew novels started appearing in Warsaw and Odessa, along with periodicals, newspapers, textbooks and encyclopaedias. They variously called their project haskalah (‘enlightenment’) or tehiyah (‘reawakening’).

Cultural renaissance, of course, was a rallying cry across 19th-century Europe, driven by a romantic reverence for a simpler or more glorious national past and, especially after 1848, by tumultuous struggles for ethnic and linguistic self-determination. The driving forces and goals were various and complex. Some, such as ennui in the soulless big city or the mobilisation of the masses through literacy, were modern; others were rooted in old ethnic identities or a respect for the vernacular in the arts and religion. The words and ways of the peasantry had a particular ring of authenticity for many nationalistic intellectuals, often neurotically out of touch (as Elie Kedourie and Joshua Fishman have documented) with the masses they aspired to lead. These sophisticated intellectuals were equally enchanted by childhood and the child’s access to truth and simplicity, as celebrated by Jean-Jacques Rousseau, William Blake and William Wordsworth.

To the vast majority of Jews, Hebrew language and Hebrew culture felt passé – pious, outmoded, arcane. The future, as they saw it, lay with English, German and Russian, and with the education, earning power and passport to assimilation that these languages promised. Migration to the West was on many minds. The young Ben-Yehuda was well aware of this. If current trends continued, he believed that his generation might well be the last erudite enough to understand its Jewish literary heritage.

But what kind of cultural ‘liberation’ could Jewish nationalists hope for? The Jews had no territory of their own, and a Jewish state, even Jewish autonomy, seemed a fantasy. (Zionism as a mass movement was still a generation in the future.) Nor was there a Hebrew-speaking peasantry or a Hebrew folk heritage to turn to for authenticity, or so it seemed. Hebrew was incorrigibly adult, stuffy. There was Yiddish, of course, the vernacular of most European Jews in the 19th century, but they generally considered it undignified, comic, a language without a grammar, a mishmash.

Then, in 1878, as Europe was toasting Bulgaria’s triumph against the Ottomans, the 19-year-old Ben-Yehuda had his epiphany. As he recalled years later in his memoirs: ‘The heavens opened … and I heard a mighty voice within me calling: “The rebirth of the Jews and their language on ancestral soil!”’ What if Jews could build a modern way of life in the Holy Land – raising their children to speak the old language?

Ben-Yehuda wanted great literature to be preserved down the generations. But to speak in order to read? Today, it sounds back-to-front, but in the 19th century it would have seemed quite reasonable. The trouble was that no child had used Hebrew as a mother tongue in close to 2,000 years. Thinking logically, Ben-Yehuda reasoned that a new mother tongue would need a willing mother: and so he found one, in an intellectual young woman named Devora Jonas, raised like him in Yiddish and Russian, and with only the barest knowledge of Hebrew. (Intensive textual study was traditionally reserved for young men.) No matter – they would marry and she would learn. In 1881, the young couple set sail for the Holy Land, pledging to set up the first secular, ‘progressive’ household in the pious city of Jerusalem, and to communicate with each other (and eventually, their children) only in Hebrew.

Speaking Hebrew was actually nothing new in itself; it had long been a lingua franca between Yiddish-, Ladino- and Arabic-speaking Jewish traders (and refugees). The markets of the Holy Land had resonated with Hebrew for hundreds of years. But a pidgin is not a mother tongue. Ben-Yehuda was a born philologist; he plucked words from ancient texts and coined his own, hoping one day to launch Hebrew’s answer to the Oxford English Dictionary. The birth of Itamar gave him an opportunity to put his experiment with Hebrew to the test. Could they rear the boy in Hebrew? Could they shield him from hearing other tongues? And, just as critical, could the family be a model for others?

Devora’s limited Hebrew was presumably sufficient for a three-year-old, but, like immigrant mothers everywhere, she eventually learned fluent Hebrew from her children, thereby demonstrating the two-way validity of the model. Ben-Yehuda, however, won the acclaim. ‘Why does everyone call him the Father of Modern Hebrew?’ sniffed the author S Y Agnon. ‘The people needed a hero,’ a politician wryly quipped, ‘so we gave them one.’ Ben-Yehuda’s political vision and scholarly toil complemented the physical toil by which the Zionist pioneers made their return to the Holy Land sacred.

Many more pieces had to fall into place in subsequent years to turn a language of books into a stable mother tongue for an entire society – some carefully laid, others dropping from heaven. But amid the waves of revolutionary-minded migrants deeply schooled in traditional texts, the developing demographics, economics and institutions of a new nation, the nationalistic fervour, and a lot of sheer desperation, we should not forget Hebrew’s very special version of the romance of a child’s talk.

The Story of Hebrew by Lewis Glinert is out now with Princeton University Press.

This article was originally published at Aeon and has been republished under Creative Commons.

Peter Ungar: It’s not that your teeth are too big: your jaw is too small

We hold in our mouths the legacy of our evolution. We rarely consider just how amazing our teeth are. They break food without themselves being broken, up to millions of times over the course of a lifetime; and they do it built from the very same raw materials as the foods they are breaking. Nature is truly an inspired engineer.

But our teeth are, at the same time, really messed up. Think about it. Do you have impacted wisdom teeth? Are your lower front teeth crooked or out of line? Do your uppers jut out over your lowers? Nearly all of us have to say ‘yes’ to at least one of these questions, unless we’ve had dental work. It’s as if our teeth are too big to fit properly in our jaws, and there isn’t enough room in the back or front for them all. It just doesn’t make sense that such an otherwise well-designed system would be so ill-fitting.

Other animals tend to have perfectly aligned teeth. Our distant hominin ancestors did too; and so do the few remaining peoples today who live a traditional hunting and gathering lifestyle. I am a dental anthropologist at the University of Arkansas, and I work with the Hadza foragers of Africa’s Great Rift Valley in Tanzania. The first thing you notice when you look into a Hadza mouth is that they’ve got a lot of teeth. Most have 20 back teeth, whereas the rest of us tend to have 16 erupted and working. Hadza also typically have a tip-to-tip bite between the upper and lower front teeth; and the edges of their lowers align to form a perfect, flawless arch. In other words, the sizes of Hadza teeth and jaws match perfectly. The same goes for our fossil forebears and for our nearest living relatives, the monkeys and apes.

So why don’t our teeth fit properly in the jaw? The short answer is not that our teeth are too large, but that our jaws are too small to fit them in. Let me explain. Human teeth are covered with a hard cap of enamel that forms from the inside out. The cells that make the cap move outward toward the eventual surface as the tooth forms, leaving a trail of enamel behind. If you’ve ever wondered why your teeth can’t grow or repair themselves when they break or develop cavities, it’s because the cells that make enamel die and are shed when a tooth erupts. So the sizes and shapes of our teeth are genetically pre-programmed. They cannot change in response to conditions in the mouth.

But the jaw is a different story. Its size depends on both genetics and environment; and it grows longer with heavy use, particularly during childhood, because of the way bone responds to stress. The evolutionary biologist Daniel Lieberman at Harvard University showed this in an elegant 2004 study of hyraxes fed either soft, cooked foods or tough, raw foods: higher chewing strains resulted in more growth in the bone that anchors the teeth. The ultimate length of a jaw, in other words, depends on the stress put on it during chewing.

Selection for jaw length is based on the growth expected, given a hard or tough diet. In this way, diet determines how well jaw length matches tooth size. It is a fine balancing act, and our species has had 200,000 years to get it right. The problem for us is that, for most of that time, our ancestors didn’t feed their children the kind of mush we feed ours today. Our teeth don’t fit because they evolved instead to match the longer jaw that would develop in a more challenging strain environment. Ours are too short because we don’t give them the workout nature expects us to.

There’s plenty of evidence for this. The dental anthropologist Robert Corruccini at Southern Illinois University has seen the effects by comparing urban dwellers and rural peoples in and around the city of Chandigarh in north India – soft breads and mashed lentils on the one hand, coarse millet and tough vegetables on the other. He has also seen it from one generation to the next in the Pima peoples of Arizona, following the opening of a commercial food-processing facility on the reservation. Diet makes a huge difference. I remember asking my wife not to cut our daughters’ meat into such small pieces when they were young. ‘Let them chew,’ I begged. She replied that she’d rather pay for braces than have them choke. I lost that argument.

Crowded, crooked, misaligned and impacted teeth are huge problems that have clear aesthetic consequences, but can also affect chewing and lead to decay. Half of us could benefit from orthodontic treatment. Those treatments often involve pulling out or carving down teeth to match tooth row with jaw length. But does this approach really make sense from an evolutionary perspective? Some clinicians think not. And one of my colleagues at Arkansas, the bioarchaeologist Jerry Rose, has joined forces with the local orthodontist Richard Roblee with this very question in mind. Their recommendation? That clinicians should focus more on growing jaws, especially for children. For adults, surgical options for stimulating bone growth are gaining momentum, too, and can lead to shorter treatment times.

As a final thought, tooth crowding isn’t the only problem that comes from a shorter jaw. Sleep apnea is another. A smaller mouth means less space for the tongue, so it can fall back more easily into the throat during sleep, potentially blocking the airway. It should come as no surprise that appliances and even surgery to pull the jaw forward are gaining traction in treating obstructive sleep apnea.

For better and for worse, we hold in our mouths the legacy of our evolution. We might be stuck with an oral environment that our ancestors never had to contend with, but recognising this can help us deal with it in better ways. Think about that the next time you smile and look in a mirror.

Evolution’s Bite: A Story of Teeth, Diet, and Human Origins by Peter Ungar is out now through Princeton University Press.

Peter S. Ungar is Distinguished Professor and director of the Environmental Dynamics Program at the University of Arkansas. He is the author of Teeth: A Very Short Introduction and Mammal Teeth: Origin, Evolution, and Diversity and the editor of Evolution of the Human Diet: The Known, the Unknown, and the Unknowable. He lives in Fayetteville, Arkansas.

This article was originally published at Aeon and has been republished under Creative Commons.

Elizabeth Currid-Halkett: Conspicuous consumption is over. It’s all about intangibles now

In 1899, the economist Thorstein Veblen observed that silver spoons and corsets were markers of elite social position. In Veblen’s now famous treatise The Theory of the Leisure Class, he coined the phrase ‘conspicuous consumption’ to denote the way that material objects were paraded as indicators of social position and status. More than 100 years later, conspicuous consumption is still part of the contemporary capitalist landscape, and yet today, luxury goods are significantly more accessible than in Veblen’s time. This deluge of accessible luxury is a function of the mass-production economy of the 20th century, the outsourcing of production to China, and the cultivation of emerging markets where labour and materials are cheap. At the same time, we’ve seen the arrival of a middle-class consumer market that demands more material goods at cheaper price points.

However, the democratisation of consumer goods has made them far less useful as a means of displaying status. In the face of rising social inequality, both the rich and the middle classes own fancy TVs and nice handbags. They both lease SUVs, take airplanes, and go on cruises. On the surface, the ostensible consumer objects favoured by these two groups no longer reside in two completely different universes.

Given that everyone can now buy designer handbags and new cars, the rich have taken to using much more tacit signifiers of their social position. Yes, oligarchs and the superrich still show off their wealth with yachts and Bentleys and gated mansions. But the dramatic changes in elite spending are driven by a well-to-do, educated elite, or what I call the ‘aspirational class’. This new elite cements its status through prizing knowledge and building cultural capital, not to mention the spending habits that go with it – preferring to spend on services, education and human-capital investments over purely material goods. These new status behaviours are what I call ‘inconspicuous consumption’. None of the consumer choices that the term covers are inherently obvious or ostensibly material, but they are, without question, exclusionary.

The rise of the aspirational class and its consumer habits is perhaps most salient in the United States. The US Consumer Expenditure Survey data reveals that, since 2007, the country’s top 1 per cent (people earning upwards of $300,000 per year) are spending significantly less on material goods, while middle-income groups (earning approximately $70,000 per year) are spending the same, and their trend is upward. Eschewing an overt materialism, the rich are investing significantly more in education, retirement and health – all of which are immaterial, yet cost many times more than any handbag a middle-income consumer might buy. The top 1 per cent now devote the greatest share of their expenditures to inconspicuous consumption, with education forming a significant portion of this spend (accounting for almost 6 per cent of top 1 per cent household expenditures, compared with just over 1 per cent of middle-income spending). In fact, top 1 per cent spending on education has increased 3.5 times since 1996, while middle-income spending on education has remained flat over the same time period.

The vast chasm between middle-income and top 1 per cent spending on education in the US is particularly concerning because, unlike material goods, education has become more and more expensive in recent decades. Thus, there is a greater need to devote financial resources to education to be able to afford it at all. According to Consumer Expenditure Survey data from 2003-2013, the price of college tuition increased 80 per cent, while the cost of women’s apparel increased by just 6 per cent over the same period. Middle-class lack of investment in education doesn’t suggest a lack of prioritising as much as it reveals that, for those in the 40th-60th percentiles, education is so cost-prohibitive it’s almost not worth trying to save for.

While much inconspicuous consumption is extremely expensive, it shows itself through less expensive but equally pronounced signalling – from reading The Economist to buying pasture-raised eggs. Inconspicuous consumption, in other words, has become a shorthand through which the new elite signal their cultural capital to one another. In lockstep with the invoice for private preschool comes the knowledge that one should pack the lunchbox with quinoa crackers and organic fruit. One might think these culinary practices are a commonplace example of modern-day motherhood, but one only needs to step outside the upper-middle-class bubbles of the coastal cities of the US to observe very different lunch-bag norms, consisting of processed snacks and practically no fruit. Similarly, while time in Los Angeles, San Francisco and New York City might make one think that every American mother breastfeeds her child for a year, national statistics report that only 27 per cent of mothers fulfil this American Academy of Pediatrics goal (in Alabama, that figure hovers at 11 per cent).

Knowing these seemingly inexpensive social norms is itself a rite of passage into today’s aspirational class. And that rite is far from costless: The Economist subscription might set one back only $100, but the awareness to subscribe and be seen with it tucked in one’s bag is likely the iterative result of spending time in elite social milieus and expensive educational institutions that prize this publication and discuss its contents.

Perhaps most importantly, the new investment in inconspicuous consumption reproduces privilege in a way that previous conspicuous consumption could not. Knowing which New Yorker articles to reference or what small talk to engage in at the local farmers’ market enables and displays the acquisition of cultural capital, thereby providing entry into social networks that, in turn, help to pave the way to elite jobs, key social and professional contacts, and private schools. In short, inconspicuous consumption confers social mobility.

More profoundly, investment in education, healthcare and retirement has a notable impact on consumers’ quality of life, and also on the future life chances of the next generation. Today’s inconspicuous consumption is a far more pernicious form of status spending than the conspicuous consumption of Veblen’s time. Inconspicuous consumption – whether breastfeeding or education – is a means to a better quality of life and improved social mobility for one’s own children, whereas conspicuous consumption is merely an end in itself – simply ostentation. For today’s aspirational class, inconspicuous consumption choices secure and preserve social status, even if they do not necessarily display it.

Elizabeth Currid-Halkett is the James Irvine Chair in Urban and Regional Planning and professor of public policy at the Price School, University of Southern California. Her latest book is The Sum of Small Things: A Theory of the Aspirational Class (2017). She lives in Los Angeles.

This article was originally published at Aeon and has been republished under Creative Commons.

Joshua Holden: Quantum cryptography is unbreakable. So is human ingenuity

Two basic types of encryption schemes are used on the internet today. One, known as symmetric-key cryptography, follows the same pattern that people have been using to send secret messages for thousands of years. If Alice wants to send Bob a secret message, they start by getting together somewhere they can’t be overheard and agree on a secret key; later, when they are separated, they can use this key to send messages that Eve the eavesdropper can’t understand even if she overhears them. This is the sort of encryption used when you set up an online account with your neighbourhood bank; you and your bank already know private information about each other, and use that information to set up a secret password to protect your messages.
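
A toy sketch in Python makes the shared-key idea concrete. The snippet below is nothing more than a one-time-pad-style XOR scramble – an illustration, not a production cipher – but it shows the defining property of symmetric encryption: the very same key that scrambles a message also unscrambles it.

import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Combine each message byte with the corresponding key byte
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet me at noon"
shared_key = secrets.token_bytes(len(message))  # agreed in private, as long as the message

ciphertext = xor_bytes(message, shared_key)     # what Eve might intercept
recovered = xor_bytes(ciphertext, shared_key)   # Bob applies the same key again

assert recovered == message

Everything rests on the key: it must be random, kept secret and, for this crude scheme, never reused – which is exactly why Alice and Bob need that private meeting in the first place.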

The second scheme is called public-key cryptography, and it was invented only in the 1970s. As the name suggests, these are systems where Alice and Bob agree on their key, or part of it, by exchanging only public information. This is incredibly useful in modern electronic commerce: if you want to send your credit card number safely over the internet to Amazon, for instance, you don’t want to have to drive to their headquarters to have a secret meeting first. Public-key systems rely on the fact that some mathematical processes seem to be easy to do, but difficult to undo. For example, for Alice to take two large whole numbers and multiply them is relatively easy; for Eve to take the result and recover the original numbers seems much harder.
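
A few more lines of Python make the asymmetry concrete. The factor function below is an illustrative trial-division search, not anything used in real cryptography: undoing a single multiplication of the seven- and eight-digit primes chosen here already takes more than a million divisions, and the approach is hopeless against the several-hundred-digit products used in practice.

def factor(n: int):
    # Recover a factor pair of n by trial division - feasible only for small n
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n is prime

p, q = 1_299_709, 15_485_863  # two known primes (the 100,000th and the 1,000,000th)
n = p * q                     # the easy direction: one multiplication
print(n)
print(factor(n))              # the hard direction: a long search, even at this modest size

Real public-key systems such as RSA dress this up in modular arithmetic, but they lean on the same lopsidedness: multiplying is easy, undoing the multiplication is not.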

Public-key cryptography was invented by researchers at the Government Communications Headquarters (GCHQ) – the British equivalent (more or less) of the US National Security Agency (NSA) – who wanted to protect communications between a large number of people in a security organisation. Their work was classified, and the British government neither used it nor allowed it to be released to the public. The idea of electronic commerce apparently never occurred to them. A few years later, academic researchers at Stanford and MIT rediscovered public-key systems. This time they were thinking about the benefits that widespread cryptography could bring to everyday people, not least the ability to do business over computers.

Now cryptographers think that a new kind of computer based on quantum physics could make public-key cryptography insecure. Bits in a normal computer are either 0 or 1. Quantum physics allows bits to be in a superposition of 0 and 1, in the same way that Schrödinger’s cat can be in a superposition of alive and dead states. This sometimes lets quantum computers explore possibilities more quickly than normal computers. While no one has yet built a quantum computer capable of solving problems of nontrivial size (unless they kept it secret), over the past 20 years, researchers have started figuring out how to write programs for such computers and predict that, once built, quantum computers will quickly solve ‘hidden subgroup problems’. Since all public-key systems currently rely on variations of these problems, they could, in theory, be broken by a quantum computer.
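
For a concrete, if cartoonish, picture of a bit in superposition, the short sketch below treats a single qubit as a pair of amplitudes and samples measurement outcomes from their squared magnitudes. It merely simulates those statistics on an ordinary computer – it is not a quantum computer, and it captures none of the interference effects that give quantum algorithms their power.

import random

amp0 = amp1 = 2 ** -0.5                  # equal superposition of 0 and 1
p0, p1 = abs(amp0) ** 2, abs(amp1) ** 2  # squared magnitudes give the outcome probabilities

samples = [0 if random.random() < p0 else 1 for _ in range(10_000)]
print(sum(samples) / len(samples))       # close to 0.5: roughly half the measurements read 1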

Cryptographers aren’t just giving up, however. They’re exploring replacements for the current systems, in two principal ways. One deploys quantum-resistant ciphers, which are ways to encrypt messages using current computers but without involving hidden subgroup problems. Thus they seem to be safe against code-breakers using quantum computers. The other idea is to make truly quantum ciphers. These would ‘fight quantum with quantum’, using the same quantum physics that could allow us to build quantum computers to protect against quantum-computational attacks. Progress is being made in both areas, but both require more research, which is currently being done at universities and other institutions around the world.

Yet some government agencies still want to restrict or control research into cryptographic security. They argue that if everyone in the world has strong cryptography, then terrorists, kidnappers and child pornographers will be able to make plans that law enforcement and national security personnel can’t penetrate.

But that’s not really true. What is true is that pretty much anyone can get hold of software that, when used properly, is secure against any publicly known attacks. The key here is ‘when used properly’. In reality, hardly any system is always used properly. And when terrorists or criminals use a system incorrectly even once, that can allow an experienced codebreaker working for the government to read all the messages sent with that system. Law enforcement and national security personnel can put those messages together with information gathered in other ways – surveillance, confidential informants, analysis of metadata and transmission characteristics, etc – and still have a potent tool against wrongdoers.

In his essay ‘A Few Words on Secret Writing’ (1841), Edgar Allan Poe wrote: ‘[I]t may be roundly asserted that human ingenuity cannot concoct a cipher which human ingenuity cannot resolve.’ In theory, he has been proven wrong: when executed properly under the proper conditions, techniques such as quantum cryptography are secure against any possible attack by Eve. In real-life situations, however, Poe was undoubtedly right. Every time an ‘unbreakable’ system has been put into actual use, some sort of unexpected mischance eventually has given Eve an opportunity to break it. Conversely, whenever it has seemed that Eve has irretrievably gained the upper hand, Alice and Bob have found a clever way to get back in the game. I am convinced of one thing: if society does not give ‘human ingenuity’ as much room to flourish as we can manage, we will all be poorer for it.

Joshua Holden is professor of mathematics at the Rose-Hulman Institute of Technology and the author of The Mathematics of Secrets.

This article was originally published at Aeon and has been republished under Creative Commons.

Michael Strauss: Our universe is too vast for even the most imaginative sci-fi

As an astrophysicist, I am always struck by the fact that even the wildest science-fiction stories tend to be distinctly human in character. No matter how exotic the locale or how unusual the scientific concepts, most science fiction ends up being about quintessentially human (or human-like) interactions, problems, foibles and challenges. This is what we respond to; it is what we can best understand. In practice, this means that most science fiction takes place in relatively relatable settings, on a planet or spacecraft. The real challenge is to tie the story to human emotions, and human sizes and timescales, while still capturing the enormous scales of the Universe itself.

Just how large the Universe actually is never fails to boggle the mind. We say that the observable Universe extends for tens of billions of light years, but the only way to really comprehend this, as humans, is to break matters down into a series of steps, starting with our visceral understanding of the size of the Earth. A non-stop flight from Dubai to San Francisco covers a distance of about 8,000 miles – roughly equal to the diameter of the Earth. The Sun is much bigger; its diameter is just over 100 times Earth’s. And the distance between the Earth and the Sun is about 100 times larger than that, close to 100 million miles. This distance, the radius of the Earth’s orbit around the Sun, is a fundamental measure in astronomy: the Astronomical Unit, or AU. The spacecraft Voyager 1, for example, launched in 1977 and, travelling at 11 miles per second, is now 137 AU from the Sun.
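
The chain of scales can be checked with a few lines of rough arithmetic (in Python here); the figures below use the rounded numbers quoted above, so the results are approximate (the true Earth–Sun distance is closer to 93 million miles).

earth_diameter_miles = 8_000                      # roughly Dubai to San Francisco
sun_diameter_miles = 100 * earth_diameter_miles   # the Sun is ~100 Earths across
au_miles = 100 * sun_diameter_miles               # Earth-Sun distance: ~100 Sun-diameters

print(au_miles)                                   # 80,000,000 miles with these round figures
print(137 * au_miles / earth_diameter_miles)      # Voyager 1 sits ~1.4 million Earth-diameters out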

But the stars are far more distant than this. The nearest, Proxima Centauri, is about 270,000 AU, or 4.25 light years away. You would have to line up 30 million Suns to span the gap between the Sun and Proxima Centauri. The Vogons in Douglas Adams’s The Hitchhiker’s Guide to the Galaxy (1979) are shocked that humans have not travelled to the Proxima Centauri system to see the Earth’s demolition notice; the joke is just how impossibly large the distance is.
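
The ‘30 million Suns’ figure falls out of the same rounded numbers: one AU is roughly 100 Sun-diameters, and Proxima Centauri is roughly 270,000 AU away.

proxima_distance_au = 270_000
sun_diameters_per_au = 100        # ~93 million miles per AU over a ~864,000-mile Sun is ~108
suns_to_span = proxima_distance_au * sun_diameters_per_au
print(f"{suns_to_span:,}")        # 27,000,000 - in line with the quoted figure of 30 million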

Four light years turns out to be about the average distance between stars in the Milky Way Galaxy, of which the Sun is a member. That is a lot of empty space! The Milky Way contains about 300 billion stars, in a vast structure roughly 100,000 light years in diameter. One of the truly exciting discoveries of the past two decades is that our Sun is far from unique in hosting a retinue of planets: evidence shows that the majority of Sun-like stars in the Milky Way have planets orbiting them, many with a size and distance from their parent star allowing them to host life as we know it.

Yet getting to these planets is another matter entirely: Voyager 1 would arrive at Proxima Centauri in 75,000 years if it were travelling in the right direction – which it isn’t. Science-fiction writers use a variety of tricks to span these interstellar distances: putting their passengers into states of suspended animation during the long voyages, or travelling close to the speed of light (to take advantage of the time dilation predicted in Albert Einstein’s theory of special relativity). Or they invoke warp drives, wormholes or other as-yet undiscovered phenomena.
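
That 75,000-year estimate is easy to reproduce, assuming – counterfactually – a straight-line trip at a steady 11 miles per second.

light_speed_mps = 186_000                  # miles per second, rounded
seconds_per_year = 365.25 * 24 * 3600
distance_miles = 4.25 * light_speed_mps * seconds_per_year  # 4.25 light years, in miles

travel_years = distance_miles / 11 / seconds_per_year
print(round(travel_years))                 # about 72,000 years, in the ballpark of the quoted figure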

When astronomers made the first definitive measurements of the scale of our Galaxy a century ago, they were overwhelmed by the size of the Universe they had mapped. Initially, there was great skepticism that the so-called ‘spiral nebulae’ seen in deep photographs of the sky were in fact ‘island universes’ – structures as large as the Milky Way, but at much larger distances still. While the vast majority of science-fiction stories stay within our Milky Way, much of the story of the past 100 years of astronomy has been the discovery of just how much larger than that the Universe is. Our nearest galactic neighbour is about 2 million light years away, while the light from the most distant galaxies our telescopes can see has been travelling to us for most of the age of the Universe, about 13 billion years.

We discovered in the 1920s that the Universe has been expanding since the Big Bang. But about 20 years ago, astronomers found that this expansion was speeding up, driven by a force whose physical nature we do not understand, but to which we give the stop-gap name of ‘dark energy’. Dark energy operates on length- and time-scales of the Universe as a whole: how could we capture such a concept in a piece of fiction?

The story doesn’t stop there. We can’t see galaxies from those parts of the Universe for which there hasn’t been enough time since the Big Bang for the light to reach us. What lies beyond the observable bounds of the Universe? Our simplest cosmological models suggest that the Universe is uniform in its properties on the largest scales, and extends forever. A variant idea says that the Big Bang that birthed our Universe is only one of a (possibly infinite) number of such explosions, and that the resulting ‘multiverse’ has an extent utterly beyond our comprehension.

The US astronomer Neil deGrasse Tyson once said: ‘The Universe is under no obligation to make sense to you.’ Similarly, the wonders of the Universe are under no obligation to make it easy for science-fiction writers to tell stories about them. The Universe is mostly empty space, and the distances between stars in galaxies, and between galaxies in the Universe, are incomprehensibly vast on human scales. Capturing the true scale of the Universe, while somehow tying it to human endeavours and emotions, is a daunting challenge for any science-fiction writer. Olaf Stapledon took up that challenge in his novel Star Maker (1937), in which the stars and nebulae, and cosmos as a whole, are conscious. While we are humbled by our tiny size relative to the cosmos, our brains can none the less comprehend, to some extent, just how large the Universe we inhabit is. This is hopeful, since, as the astrobiologist Caleb Scharf of Columbia University has said: ‘In a finite world, a cosmic perspective isn’t a luxury, it is a necessity.’ Conveying this to the public is the real challenge faced by astronomers and science-fiction writers alike.

Michael A. Strauss is professor of astrophysics at Princeton University and coauthor, with Richard Gott and Neil deGrasse Tyson, of Welcome to the Universe: An Astrophysical Tour.

This article was originally published at Aeon and has been republished under Creative Commons.