Martin Rees on On the Future

Humanity has reached a critical moment. Our world is unsettled and rapidly changing, and we face existential risks over the next century. Various prospects for the future—good and bad—are possible. Yet our approach to the future is characterized by short-term thinking, polarizing debates, alarmist rhetoric, and pessimism. In this short, exhilarating book, renowned scientist and bestselling author Martin Rees argues that humanity’s future depends on our taking a very different approach to thinking about and planning for tomorrow. Rich with fascinating insights into cutting-edge science and technology, this book will captivate anyone who wants to understand the critical issues that will define the future of humanity on Earth and beyond.

Are you an optimist?

I am writing this book as a citizen, and as an anxious member of the human species. One of its unifying themes is that humanity’s flourishing depends on how wisely science and technology are deployed. Our lives, our health, and our environment can benefit still more from further advances in biotech, cybertech, robotics, and AI. There seems no scientific impediment to achieving a sustainable and secure world, where all enjoy a lifestyle better than those in the ‘west’ do today (albeit using less energy and eating less meat). To that extent, I am a techno-optimist. But what actually happens depends on politics and ethical choices.

Our ever more interconnected world is exposed to new vulnerabilities. Even within the next decade or two, robotics will disrupt working patterns, national economies, and international relations. A growing and more demanding population puts the natural environment under strain; people's actions could trigger dangerous climate change and mass extinctions if 'tipping points' are crossed—outcomes that would bequeath a depleted and impoverished world to future generations. But to reduce these risks, we need to enhance our understanding of nature and deploy appropriate technology (zero-carbon energy, for instance) more urgently. Risks and ethical dilemmas can be minimized by a culture of 'responsible innovation', especially in fields like biotech, advanced AI, and geoengineering; and we'll need to confront new ethical issues—'designer babies', blurring of the line between life and death, and so forth—guided by priorities and values that science itself can't provide.

Is there a moral imperative as well?

There has plainly been a welcome improvement in most people's lives and life-chances—in education, health, and lifespan. This is owed to technology. However, it's surely a depressing indictment of current morality that the gulf between the way the world is and the way it could be is wider than it ever was. The lives of medieval people may have been miserable compared to ours, but there was little that could have been done to improve them. In contrast, the plight of the 'bottom billion' in today's world could be transformed by redistributing the wealth of the thousand richest people on the planet. Failure to respond to this humanitarian imperative, which nations have the power to remedy, surely casts doubt on any claims of institutional moral progress. That's why I can't go along with the 'new optimists' who promote a rosy view of the future, enthusing about improvements in our moral sensitivities as well as in our material progress. I don't share their faith in markets and enlightenment.

A benign society should, at the very least, rest on trust between individuals and their institutions. I worry that we are moving further from this ideal, for two reasons: firstly, those we routinely have to deal with are increasingly remote and depersonalised; and secondly, modern life is more vulnerable to disruption—'hackers' or dissidents can trigger incidents that cascade globally. Such trends necessitate burgeoning security measures. These are already irritants in our everyday life—security guards, elaborate passwords, airport searches and so forth—and they are likely to become ever more vexatious. Innovations like blockchain could offer protocols that render the entire internet more secure. But their current applications—allowing an economy based on cryptocurrencies to function independently of traditional financial institutions—seem damaging rather than benign. It's depressing to realize how much of the economy is dedicated to activities that would be superfluous if we felt we could trust each other. (It would be a worthwhile exercise if some economist could quantify this.)

But what about politics? 

In an era where we are all becoming interconnected, where the disadvantaged are aware of their predicament, and where migration is easy, it's hard to be optimistic about a peaceful world if a chasm persists, as deep as it is in today's geopolitics, between the welfare levels and life-chances in different regions. It's especially disquieting if advances in genetics and medicine that can enhance human lives are available only to a privileged few, and portend more fundamental forms of inequality. Harmonious geopolitics would require a global distribution of wealth that's perceived as fair—with far less inequality between rich and poor nations. And even without being utopian, it's surely a moral imperative (as well as in the self-interest of fortunate nations) to push towards this goal. Sadly, we downplay what's happening even now in far-away countries. And we discount too heavily the problems we'll leave for new generations. Governments need to prioritise projects that are long-term in a political perspective, even if a mere instant in the history of our planet.

Will superintelligent AI out-think humans?

We are of course already being aided by computational power. In the 'virtual world' inside a computer, astronomers can mimic galaxy formation; meteorologists can simulate the atmosphere. As computer power grows, these 'virtual' experiments become more realistic and useful. And AI will make discoveries that have eluded unaided human brains. For example, there is a continuing quest to find the 'recipe' for a superconductor that works at ordinary room temperatures. This quest involves a lot of 'trial and error', because nobody fully understands what makes the electrical resistance disappear more readily in some materials than in others. But it's becoming possible to calculate the properties of materials so fast that millions of alternatives can be computed, far more quickly than actual experiments could be done. Suppose that a machine came up with a novel and successful recipe. It would have achieved something that would earn a scientist a Nobel prize. It would have behaved as though it had insight and imagination within its rather specialized universe—just as DeepMind's AlphaGo flummoxed and impressed human champions with some of its moves. Likewise, searches for the optimal chemical composition for new drugs will increasingly be done by computers rather than by real experiments.

Equally important is the capability to ‘crunch’ huge data-sets. As an example from genetics, qualities like intelligence and height are determined by combinations of genes. To identify these combinations would require a machine fast enough to scan huge samples of genomes to identify small correlations. Similar procedures are used by financial traders in seeking out market trends, and responding rapidly to them, so that their investors can top-slice funds from the rest of us.

Should humans spread beyond Earth?

The practical case for sending people into space gets weaker as robots improve. So the only crewed ventures (except for those motivated by national prestige) will be high-risk, cut-price, and privately sponsored—undertaken by thrill-seekers prepared even to accept one-way tickets. They're the people who will venture to Mars. But there won't be mass emigration: Mars is far less comfortable than the South Pole or the ocean bed. It's a dangerous delusion to think that space offers an escape from Earth's problems. We've got to solve these here. Coping with climate change may seem daunting, but it's a doddle compared to terraforming Mars. There's no 'Planet B' for ordinary risk-averse people.

But I think (and hope) that there will be bases on Mars by 2100. Moreover we (and our progeny here on Earth) should cheer on the brave adventurers who go there. The space environment is inherently hostile for humans, so, precisely because they will be ill-adapted to their new habitat, the pioneer explorers will have a more compelling incentive than those of us on Earth to redesign themselves. They’ll harness the super-powerful genetic and cyborg technologies that will be developed in coming decades. These techniques will, one hopes, be heavily regulated on Earth; but ‘settlers’ on Mars will be far beyond the clutches of the regulators. This might be the first step towards divergence into a new species. So it’s these spacefaring adventurers, not those of us comfortably adapted to life on Earth, who will spearhead the post-human era. If they become cyborgs, they won’t need an atmosphere, and may prefer zero-g—perhaps even spreading among the stars.

Is there ‘intelligence’ out there already?

Perhaps we’ll one day find evidence of alien intelligence. On the other hand, our Earth may be unique and the searches may fail. This would disappoint the searchers. But it would have an upside for humanity’s long-term resonance. Our solar system is barely middle aged and if humans avoid self-destruction within the next century, the post-human era beckons. Intelligence from Earth could spread through the entire Galaxy, evolving into a teeming complexity far beyond what we can even conceive. If so, our tiny planet—this pale blue dot floating in space—could be the most important place in the entire cosmos.

What about God?

I don’t believe in any religious dogmas, but I share a sense of mystery and wonder with many who do. And I deplore the so-called ‘new atheists’—small-time Bertrand Russells recycling his arguments—who attack religion. Hard-line atheists must surely be aware of ‘religious’ people who are manifestly neither unintelligent nor naïve, yet they make minimal attempts to understand them, attacking mainstream religion rather than striving for peaceful coexistence with it; in doing so they weaken the alliance against fundamentalism and fanaticism. They also weaken science. If a young Muslim or evangelical Christian is told at school that they can’t have their God and accept evolution, they will opt for their God and be lost to science. When so much divides us, and change is disturbingly fast, religion offers bonding within a community. And its heritage, linking its adherents with past generations, should strengthen our motivation not to leave a degraded world for generations yet to come.

Do scientists have special obligations?

It’s a main theme of my book that our entire future depends on making wise choices about how to apply science. These choices shouldn’t be made just by scientists: they matter to us all and should be the outcome of wide public debate. But for that to happen, we all need enough ‘feel’ for the key ideas of science, and enough numeracy to assess hazards, probabilities and risks—so as not to be bamboozled by experts, or credulous of populist sloganising. Moreover, quite apart from their practical use, these ideas should be part of our common culture. More than that, science is the one culture that’s truly global. It should transcend all barriers of nationality. And it should straddle all faiths too.

I think all scientists should divert some of their efforts towards public policy—and engage with government, business, and campaigning bodies. And of course the challenges are global. Coping with potential shortage of resources—and transitioning to low carbon energy—can’t be solved by each nation separately.

The trouble is that even the best politicians focus mainly on the urgent and parochial—and on getting reelected. This is an endemic frustration for those who’ve been official scientific advisors in governments. To attract politicians’ attention you must get headlined in the press, and fill their inboxes. So scientists can have more leverage indirectly—by campaigning, so that the public and the media amplify their voice. Rachel Carson and Carl Sagan, for instance, were preeminent exemplars of the concerned scientist—with immense influence through their writings, lectures, and campaigns, even before the age of social media and tweets.

Science is a universal culture, spanning all nations and faiths. So scientists confront fewer impediments in straddling political divides.

Does being an astronomer influence your attitude toward the future?

Yes, I think it makes me especially mindful of the long-term future. Let me explain this. The stupendous timespans of the evolutionary past are now part of common culture (maybe not in Kentucky, or in parts of the Muslim world). But most people still somehow think we humans are necessarily the culmination of the evolutionary tree. That hardly seems credible to an astronomer—indeed, we could still be nearer the beginning than the end. Our Sun formed 4.5 billion years ago, but it’s got 6 billion more before the fuel runs out. It will then flare up, engulfing the inner planets. And the expanding universe will continue—perhaps forever. Any creatures witnessing the Sun’s demise won’t be human—they could be as different from us as we are from slime mold. Posthuman evolution—here on Earth and far beyond—could be as prolonged as the evolution that’s led to us, and even more wonderful. And of course this evolution will be faster than Darwinian: it happens on a technological timescale, driven by advances in genetics and AI.

But (a final thought) even in the context of a timeline that extends billions of years into the future, as well as into the past, this century is special. It’s the first in which one species—ours—has our planet’s future in its hands. Our creative intelligence could inaugurate billions of years of posthuman evolution even more marvelous than what’s led to us. On the other hand, humans could trigger bio, cyber, or environmental catastrophes that foreclose all such potentialities. Our Earth, this ‘pale blue dot’ in the cosmos, is a special place. It may be a unique place. And we’re its stewards at an especially crucial era—the Anthropocene. That’s a key message for us all, whether or not we’re astronomers, and a motivation for my book.

Martin Rees is Astronomer Royal, and has been Master of Trinity College and Director of the Institute of Astronomy at Cambridge University. As a member of the UK’s House of Lords and former President of the Royal Society, he is much involved in international science and issues of technological risk. His books include Our Cosmic Habitat, Just Six Numbers, and Our Final Hour (published in the UK as Our Final Century). He lives in Cambridge, UK.