T. L. Taylor on Watch Me Play: Twitch and the Rise of Game Live Streaming

Every day thousands of people broadcast their gaming live to audiences over the internet using popular sites such as Twitch, which reaches more than one hundred million viewers a month. In these new platforms for interactive entertainment, big esports events featuring digital game competitors live stream globally, and audiences can interact with broadcasters—and each other—through chat in real time. What are the ramifications of this exploding online industry? Taking readers inside home studios and backstage at large esports events, Watch Me Play investigates the rise of game live streaming and how it is poised to alter how we understand media and audiences. The first book to explore the online phenomenon Twitch and live streaming games, Watch Me Play offers a vibrant look at the melding of private play and public entertainment.

What led you to write this book?

I was captivated by a live esports tournament broadcast I saw in 2012 and originally set out to write an article about how streaming was amplifying that industry well beyond its roots as a grassroots scene. As I started to research what was happening on Twitch, one of the main platforms for game broadcasting, I realized esports was only part of the story. Seeing so many people sharing their play and watching the cultural impact it was having, I quickly understood there was a much bigger research project at stake. What started as a small update on the esports story became a book not only about how people are transforming their private play into public entertainment but also about profound changes in media more broadly.

Has live streaming changed the culture around esports and gaming more generally?

Absolutely. It used to be a lot of work to be an esports fan. You had to know where to find recorded match videos, download special files to watch competitions, and follow all kinds of specialist sites. Live streaming has made it incredibly easy to watch esports events, and it’s not unusual for matches to be broadcast from around the world 24/7 on Twitch. It’s also made it much easier to keep up to date with your favorite teams and players, even watching their practices. Players who aspire to improve now have regular access to people they can try to learn from. Live streaming has helped expand and grow esports fandom. Beyond competitive gaming, live streaming has tapped into some of the pleasures sites like YouTube offered in terms of watching, and learning about, games. But it’s extended spectatorship to include real-time interaction between viewers and broadcasters, the growth of new gaming communities, a whole new infrastructure around regulation and monetization, and lots of fascinating experiments in sharing live gaming content.

How has the increasing popularity of Twitch impacted live streaming on the Internet overall?

Though the platform originally operated as a niche site catering to gamers, it has gained real momentum and broad attention in a relatively short amount of time. More people started watching, and broadcasting themselves, and really big productions caught the eye of those outside gaming. Live streaming taps into a longstanding pleasure in game culture—watching others play and sharing your own—but also syncs with broader changes around media distribution (think about the rise of Netflix and “cord cutting,” where people forgo cable television entirely) and the tremendous energy of user-generated content. The platform has also been very adept at transforming itself and now hosts not only gaming but all kinds of creative and “in real life” shows. And in a fascinating twist, traditional media has started folding itself back into Twitch. Just the other day I watched the Washington Post’s channel (https://www.twitch.tv/washingtonpost), where reporters were talking about the stories of the day and fielding questions in real time from the audience.

How is live streaming changing how we understand media and emerging technologies?

Live streaming offers us an opportunity to understand how various domains—the televisual, the internet, and gaming—can weave together on an emerging platform. It takes the notion of “social media” and “social TV” and extends it well beyond the typical conversations about spaces like Twitter or Facebook. Ultimately we need to do a better job understanding the links, amplifications, and interrelations between what we sometimes think of as disparate technologies and sites. The case of game live streaming gives us a path into thinking not only about changes in game culture, but also about new socio-technical platforms and network life.

How do you predict Twitch will grow and evolve in the coming years?

I always say I’m a sociologist and not a futurologist, so I’m hesitant to make any predictions. There are still too many contingencies (around everything from user practices to regulation and economics). What I will say is that while Twitch is itself a relatively new platform, it’s part of a much longer history of broadcasting on the internet going back to the earliest days of webcams in the 1990s, and it sits alongside a wide range of user-generated content that plays a huge role not only online but in traditional media. The themes of sharing yourself and your play, and the rise of co-creative media and alternative distribution practices, aren’t going away anytime soon.

What do you hope readers will take away from reading this book?

I hope readers will get a sense of the pleasures, and work, involved in game live streaming. Game live streamers who are broadcasting out of their homes give us insight into what it means to transform your private play into public entertainment. The book also tackles how live streaming is affecting other industries, not only esports but traditional media companies that are trying to understand—and catch up with—this slice of gaming. Finally, I hope readers will come to see how game live streaming offers a powerful case for thinking more broadly about things like regulation and governance—from community practices to law and corporate policy—on emerging internet platforms.

T. L. Taylor is professor of comparative media studies at the Massachusetts Institute of Technology. Her books include Raising the Stakes and Play between Worlds.

Martin Rees on On the Future

Humanity has reached a critical moment. Our world is unsettled and rapidly changing, and we face existential risks over the next century. Various prospects for the future—good and bad—are possible. Yet our approach to the future is characterized by short-term thinking, polarizing debates, alarmist rhetoric, and pessimism. In this short, exhilarating book, renowned scientist and bestselling author Martin Rees argues that humanity’s future depends on our taking a very different approach to thinking about and planning for tomorrow. Rich with fascinating insights into cutting-edge science and technology, this book will captivate anyone who wants to understand the critical issues that will define the future of humanity on Earth and beyond.

Are you an optimist?

I am writing this book as a citizen, and as an anxious member of the human species. One of its unifying themes is that humanity’s flourishing depends on how wisely science and technology are deployed. Our lives, our health, and our environment can benefit still more from further advances in biotech, cybertech, robotics, and AI. There seems no scientific impediment to achieving a sustainable and secure world, where all enjoy a lifestyle better than those in the ‘west’ do today (albeit using less energy and eating less meat). To that extent, I am a techno-optimist. But what actually happens depends on politics and ethical choices.

Our ever more interconnected world is exposed to new vulnerabilities. Even within the next decade or two, robotics will disrupt working patterns, national economies, and international relations. A growing and more demanding population puts the natural environment under strain; people’s actions could trigger dangerous climate change and mass extinctions if ‘tipping points’ are crossed—outcomes that would bequeath a depleted and impoverished world to future generations. But to reduce these risks, we need to enhance our understanding of nature and deploy appropriate technology (zero-carbon energy, for instance) more urgently. Risks and ethical dilemmas can be minimized by a culture of ‘responsible innovation’, especially in fields like biotech, advanced AI, and geoengineering; and we’ll need to confront new ethical issues—‘designer babies’, blurring of the line between life and death, and so forth—guided by priorities and values that science itself can’t provide.

Is there a moral imperative as well?

There has plainly been a welcome improvement in most people’s lives and life-chances—in education, health, and lifespan. This is owed to technology. However, it’s surely a depressing indictment of current morality that the gulf between the way the world is and the way it could be is wider than it ever was. The lives of medieval people may have been miserable compared to ours, but there was little that could have been done to improve them. In contrast, the plight of the ‘bottom billion’ in today’s world could be transformed by redistributing the wealth of the thousand richest people on the planet. Failure to respond to this humanitarian imperative, which nations have the power to remedy, surely casts doubt on any claims of institutional moral progress. That’s why I can’t go along with the ‘new optimists’ who promote a rosy view of the future, enthusing about improvements in our moral sensitivities as well as in our material progress. I don’t share their hope in markets and enlightenment.

A benign society should, at the very least, require trust between individuals and their institutions. I worry that we are moving further from this ideal for two reasons: firstly, those we routinely have to deal with are increasingly remote and depersonalised; and secondly, modern life is more vulnerable to disruption—‘hackers’ or dissidents can trigger incidents that cascade globally. Such trends necessitate burgeoning security measures. These are already irritants in our everyday life—security guards, elaborate passwords, airport searches and so forth—but they are likely to become ever more vexatious. Innovations like blockchain could offer protocols that render the entire internet more secure. But their current applications—allowing an economy based on cryptocurrencies to function independently of traditional financial institutions—seem damaging rather than benign. It’s depressing to realize how much of the economy is dedicated to activities that would be superfluous if we felt we could trust each other. (It would be a worthwhile exercise if some economist could quantify this.)

But what about politics? 

In an era where we are all becoming interconnected, where the disadvantaged are aware of their predicament, and where migration is easy, it’s hard to be optimistic about a peaceful world if a chasm persists, as deep as it is in today’s geopolitics, between the welfare levels and life-chances in different regions. It’s especially disquieting if advances in genetics and medicine that can enhance human lives are available to a privileged few, and portend more fundamental forms of inequality. Harmonious geopolitics would require a global distribution of wealth that’s perceived as fair—with far less inequality between rich and poor nations. And even without being utopian, it’s surely a moral imperative (as well as in the self-interest of fortunate nations) to push towards this goal. Sadly, we downplay what’s happening even now in far-away countries. And we discount too heavily the problems we’ll leave for new generations. Governments need to prioritise projects that are long-term in a political perspective, even if they span a mere instant in the history of our planet.

Will superintelligent AI out-think humans?

We are of course already being aided by computational power. In the ‘virtual world’ inside a computer astronomers can mimic galaxy formation; meteorologists can simulate the atmosphere. As computer power grows, these ‘virtual’ experiments become more realistic and useful. And AI will make discoveries that have eluded unaided human brains. For example, there is a continuing quest to find the ‘recipe’ for a superconductor that works at ordinary room temperatures. This quest involves a lot of ‘trial and error’, because nobody fully understands what makes the electrical resistance disappear more readily in some materials than in others. But it’s becoming possible to calculate the properties of materials so fast that millions of alternatives can be computed, far more quickly than actual experiments could be done. Suppose that a machine came up with a novel and successful recipe. It would have achieved something that would get a scientist a Nobel Prize. It would have behaved as though it had insight and imagination within its rather specialized universe—just as DeepMind’s AlphaGo flummoxed and impressed human champions with some of its moves. Likewise, searches for the optimal chemical composition for new drugs will increasingly be done by computers rather than by real experiments.
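As a rough illustration of that kind of brute-force computational search, here is a toy Python sketch (nothing from the book; the scoring function is invented, standing in for a real physics calculation):

    # Toy sketch: score every candidate 'recipe' with a cheap surrogate
    # model and keep the best. predicted_quality is invented for
    # illustration; a real search would call a physics code here.
    def predicted_quality(a, b, c):
        return -(a - 0.30) ** 2 - (b - 0.60) ** 2 - (c - 0.10) ** 2

    # All three-ingredient compositions in 1% steps that sum to 100%.
    candidates = [(a, b, 100 - a - b)
                  for a in range(101) for b in range(101 - a)]
    best = max(candidates,
               key=lambda r: predicted_quality(r[0] / 100, r[1] / 100, r[2] / 100))
    print('best composition (%):', best)  # about 5,000 candidates, scored instantly

Shrinking the step size multiplies the candidates into the millions, exactly the regime where computation outpaces bench experiments.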

Equally important is the capability to ‘crunch’ huge data-sets. As an example from genetics, qualities like intelligence and height are determined by combinations of genes. To identify these combinations would require a machine fast enough to scan huge samples of genomes for small correlations. Similar procedures are used by financial traders in seeking out market trends, and responding rapidly to them, so that their investors can top-slice funds from the rest of us.
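A toy version of such a scan, in Python with simulated data (everything here is invented for illustration; variant 17 is wired to influence the trait so the scan has something to find):

    # Toy sketch: scan simulated genomes for small correlations with a
    # trait. Real studies use millions of variants and far better stats.
    import random

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    random.seed(1)
    n_people, n_variants = 1000, 200
    genomes = [[random.randint(0, 2) for _ in range(n_variants)]
               for _ in range(n_people)]
    trait = [0.3 * g[17] + random.gauss(0, 1) for g in genomes]  # weak signal

    ranked = sorted(range(n_variants), reverse=True,
                    key=lambda v: abs(pearson([g[v] for g in genomes], trait)))
    print('strongest correlates:', ranked[:5])  # variant 17 should surface

The signal on any one variant is tiny; it is the sheer speed of checking every variant against every genome that makes it findable.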

Should humans spread beyond Earth?

The practical case for sending people into space gets weaker as robots improve. So the only manned ventures (except for those motivated by national prestige) will be high-risk, cut-price, and privately sponsored—undertaken by thrill-seekers prepared even to accept one-way tickets. They’re the people who will venture to Mars. But there won’t be mass emigration: Mars is far less comfortable than the South Pole or the ocean bed. It’s a dangerous delusion to think that space offers an escape from Earth’s problems. We’ve got to solve these here. Coping with climate change may seem daunting, but it’s a doddle compared to terraforming Mars. There’s no ‘Planet B’ for ordinary risk-averse people.

But I think (and hope) that there will be bases on Mars by 2100. Moreover we (and our progeny here on Earth) should cheer on the brave adventurers who go there. The space environment is inherently hostile for humans, so, precisely because they will be ill-adapted to their new habitat, the pioneer explorers will have a more compelling incentive than those of us on Earth to redesign themselves. They’ll harness the super-powerful genetic and cyborg technologies that will be developed in coming decades. These techniques will, one hopes, be heavily regulated on Earth; but ‘settlers’ on Mars will be far beyond the clutches of the regulators. This might be the first step towards divergence into a new species. So it’s these spacefaring adventurers, not those of us comfortably adapted to life on Earth, who will spearhead the post-human era. If they become cyborgs, they won’t need an atmosphere, and may prefer zero-g—perhaps even spreading among the stars.

Is there ‘intelligence’ out there already?

Perhaps we’ll one day find evidence of alien intelligence. On the other hand, our Earth may be unique and the searches may fail. This would disappoint the searchers. But it would have an upside for humanity’s long-term resonance. Our solar system is barely middle-aged, and if humans avoid self-destruction within the next century, the post-human era beckons. Intelligence from Earth could spread through the entire Galaxy, evolving into a teeming complexity far beyond what we can even conceive. If so, our tiny planet—this pale blue dot floating in space—could be the most important place in the entire cosmos.

What about God?

I don’t believe in any religious dogmas, but I share a sense of mystery and wonder with many who do. And I deplore the so-called ‘new atheists’—small-time Bertrand Russells recycling his arguments—who attack religion. Hard-line atheists must surely be aware of ‘religious’ people who are manifestly neither unintelligent nor naïve, though they make minimal attempts to understand them. By attacking mainstream religion, rather than striving for peaceful coexistence with it, they weaken the alliance against fundamentalism and fanaticism. They also weaken science. If a young Muslim or evangelical Christian is told at school that they can’t have their God and accept evolution, they will opt for their God and be lost to science. When so much divides us, and change is disturbingly fast, religion offers bonding within a community. And its heritage, linking its adherents with past generations, should strengthen our motivation not to leave a degraded world for generations yet to come.

Do scientists have special obligations?

It’s a main theme of my book that our entire future depends on making wise choices about how to apply science. These choices shouldn’t be made just by scientists: they matter to us all and should be the outcome of wide public debate. But for that to happen, we all need enough ‘feel’ for the key ideas of science, and enough numeracy to assess hazards, probabilities and risks—so as not to be bamboozled by experts, or credulous of populist sloganising. Moreover, quite apart from their practical use, these ideas should be part of our common culture. More than that, science is the one culture that’s truly global. It should transcend all barriers of nationality. And it should straddle all faiths too.

I think all scientists should divert some of their efforts towards public policy—and engage with government, business, and campaigning bodies. And of course the challenges are global: coping with potential shortages of resources—and transitioning to low-carbon energy—can’t be done by each nation separately.

The trouble is that even the best politicians focus mainly on the urgent and parochial—and on getting reelected. This is an endemic frustration for those who’ve been official scientific advisors in governments. To attract politicians’ attention you must make headlines in the press and fill their inboxes. So scientists can have more leverage indirectly—by campaigning, so that the public and the media amplify their voice. Rachel Carson and Carl Sagan, for instance, were preeminent exemplars of the concerned scientist—with immense influence through their writings, lectures, and campaigns, even before the age of social media and tweets.

Science is a universal culture, spanning all nations and faiths. So scientists confront fewer impediments in straddling political divides.

Does being an astronomer influence your attitude toward the future?

Yes, I think it makes me especially mindful of the long-term future. Let me explain this. The stupendous timespans of the evolutionary past are now part of common culture (maybe not in Kentucky, or in parts of the Muslim world). But most people still somehow think we humans are necessarily the culmination of the evolutionary tree. That hardly seems credible to an astronomer—indeed, we could still be nearer the beginning than the end. Our Sun formed 4.5 billion years ago, but it’s got 6 billion more before the fuel runs out. It will then flare up, engulfing the inner planets. And the expanding universe will continue—perhaps forever. Any creatures witnessing the Sun’s demise won’t be human—they could be as different from us as we are from slime mold. Posthuman evolution—here on Earth and far beyond—could be as prolonged as the evolution that’s led to us, and even more wonderful. And of course this evolution will be faster than Darwinian: it happens on a technological timescale, driven by advances in genetics and AI.

But (a final thought) even in the context of a timeline that extends billions of years into the future, as well as into the past, this century is special. It’s the first where one species—ours—has our planet’s future in its hands. Our creative intelligence could inaugurate billions of years of posthuman evolution even more marvelous than what’s led to us. On the other hand, humans could trigger bio, cyber, or environmental catastrophes that foreclose all such potentialities. Our Earth, this ‘pale blue dot’ in the cosmos, is a special place. It may be a unique place. And we’re its stewards at an especially crucial era—the Anthropocene. That’s a key message for us all, whether or not we’re astronomers, and a motivation for my book.

Martin Rees is Astronomer Royal, and has been Master of Trinity College and Director of the Institute of Astronomy at Cambridge University. As a member of the UK’s House of Lords and former President of the Royal Society, he is much involved in international science and issues of technological risk. His books include Our Cosmic Habitat, Just Six Numbers, and Our Final Hour (published in the UK as Our Final Century). He lives in Cambridge, UK.

Jonathan Haskel & Stian Westlake on Capitalism without Capital

Early in the twenty-first century, a quiet revolution occurred. For the first time, the major developed economies began to invest more in intangible assets, like design, branding, R&D, and software, than in tangible assets, like machinery, buildings, and computers. For all sorts of businesses, from tech firms and pharma companies to coffee shops and gyms, the ability to deploy assets that one can neither see nor touch is increasingly the main source of long-term success. But this is not just a familiar story of the so-called new economy. Capitalism without Capital shows that the growing importance of intangible assets has also played a role in some of the big economic changes of the last decade.

What do you mean when you say we live in an age of Capitalism without Capital?

Our book is based on one big fact about the economy: that the nature of the investment that businesses do has fundamentally changed. Once businesses invested mainly in things you could touch or feel, like buildings, machinery, and vehicles. But more and more investment now goes into things you can’t touch or feel: things like research and development, design, and organizational development—“intangible” investments. Today, in developed countries, businesses invest more each year in intangible assets than in tangibles. But they’re often measured poorly or not at all in company accounts or national accounts. So there is still a lot of capital about, but it has done a sort of vanishing act, both physically and from the records that businesses and governments keep.

What difference does the rise of intangible investments make?

The rise of intangible investment matters because intangible assets tend to behave differently from tangible ones—they have different economic properties. In the book we call these properties the 4S’s—scalability, sunkenness, synergies, and spillovers. Intangibles can be used again and again, they’re hard to sell if a business fails, they’re especially good when you combine them, and the benefits of intangible investment often end up accruing to businesses other than the ones that make them. We argue that this change helps explain all sorts of important concerns people have about today’s economy, from why inequality has risen so much, to why productivity growth seems to have slowed down.

So is this another book about tech companies?

It’s much bigger than that. It’s true that some of the biggest tech companies have lots of very valuable intangibles, and few tangibles. Google’s search algorithms, software, and prodigious stores of data are intangibles; Apple’s design, brand, and supply chains are intangibles; Uber’s networks of drivers and users are intangible assets. Each of these intangibles is worth billions of dollars. But intangibles are everywhere. Even brick-and-mortar businesses like supermarkets or gyms rely on more and more intangible assets, such as software, codified operating procedures, or brands. And the rise of intangibles is a very long-term story: research by economists like Carol Corrado suggests that intangible investment has been growing steadily since the early twentieth century, long before the first semiconductors, let alone the Internet.

Who will do well from this new intangible economy?

The intangible economy seems to be creating winners and losers. From a business point of view, we know that around the world, there’s a growing gap between the leading businesses in any given industry and the rest. We think this leader-laggard gap is partly caused by intangibles. Because intangibles are scalable and have synergies with one another, companies that have valuable intangibles will do better and better (and have more incentives to invest in more), while small and low performing companies won’t, and will lag ever further behind.

There is a personal dimension to this too. People who are good at combining ideas, and who are open to new ideas, will do better in an economy where there are lots of synergies between different assets. This will be a boon for educated, open-minded people, people with political, legal, and social connections, and for people who live in cities (where ideas tend to combine easily with one another). But others risk being left further behind.

Does this help explain the big political changes in recent years?

Yes—after the EU referendum in the UK and the 2016 presidential election in the US, a lot of pundits were asking why so many people in so-called “left behind” communities voted for Brexit or Donald Trump. Some people thought they did so for cultural reasons, others argued the reasons were mainly economic. But we would argue that in an intangible economy, these two reasons are linked: more connected, cosmopolitan places tend to do better economically in an intangible economy, while left-behind places suffer from an alienation that is both economic and cultural.

You mentioned that the rise of intangible investment might help explain why productivity growth is slowing. Why is that?

Many economists and policymakers worry about so-called secular stagnation: the puzzling fact that productivity growth and investment seem to have slowed down, even though interest rates are low and corporate profits are high, especially since 2009. We think the growing importance of intangibles can help explain this in a few ways:

  • There is certainly some under-measurement of investment going on—but as it happens this explains only a small part of the puzzle.
  • The rate of growth of intangible investment has slowed a bit since 2009. This seems to explain part of the slowdown in growth (and also helps explain why the slowdown has been mainly concentrated in total factor productivity).
  • The gap between leading firms (with lots of intangibles) and laggard firms (with few) may have created a scenario where a few firms are investing in a lot of intangibles (think Google and Facebook) but for most others, it’s not worth it, since their more powerful competitors are likely to get the spillover benefits.

Does the intangible economy have consequences for investors?

Yes! Company accounts generally don’t record intangibles (except, haphazardly, as “goodwill” after an acquisition). This means that, as intangible assets become more important, corporate balance sheets tell investors less and less about the true value of a company. Much of what equity analysts spend their days doing is, in practice, trying to value intangibles.

And there’s lots of value to be had here: research suggests that equity markets undervalue intangibles like organizational development, and encourage public companies to underinvest in intangibles like R&D. But informed investors can take advantage of this—which can benefit both their own returns and the performance of the economy.

Jonathan, you’re an academic, and Stian, you are a policymaker. How did you come to write this book together?

We started working together in 2009 on the Nesta Innovation Index, which applied some of the techniques that Jonathan had worked on to measure intangibles to build an innovation measure for the UK. The more we thought about it, the clearer it became that intangibles helped explain all sorts of things. Ryan Avent from The Economist asked us to write a piece for their blog about one of these puzzles, and we enjoyed doing that so much we thought we would try writing a book. One of the most fun parts of writing the book was being able to combine the insights from academic economic research on intangibles and innovation with practical insights from innovation policy.

Jonathan Haskel is professor of economics at Imperial College Business School. Stian Westlake is a senior fellow at Nesta, the UK’s national foundation for innovation. Haskel and Westlake are cowinners of the 2017 Indigo Prize.

Brian Kernighan on what we all need to know about computers

Laptops, tablets, cell phones, and smart watches: computers are inescapable. But even more are invisible, like those in appliances, cars, medical equipment, transportation systems, power grids, and weapons. We never see the myriad computers that quietly collect, share, and sometimes leak vast amounts of personal data about us, and often don’t consider the extent to which governments and companies increasingly monitor what we do. In Understanding the Digital World, Brian W. Kernighan explains, in clear terms, not only how computers and programming work, but also how computers influence our daily lives. Recently, Kernighan answered some questions about his new book.

Who is this book for? What kind of people are most likely to be interested?

BK: It’s a cliché, but it really is aimed at the proverbial “educated layman.” Everyone uses computers and phones for managing their lives and communicating with other people. So the book is for them. I do think that people who have some technical background will enjoy it, but will also find that it helps their less technical friends and family understand.

What’s the basic message of the book?

BK: Computers—laptops, desktops, tablets, phones, gadgets—are all around us. The Internet lets our computers communicate with us and with other computers all over the world. And there are billions of computers in infrastructure that we rely on without even realizing they exist. Computers and communications systems have changed our lives dramatically in the past couple of decades, and will continue to do so. So anyone who hopes to be at least somewhat informed ought to understand the basics of how such things work.

One major concern has been the enormous increase in surveillance and a corresponding reduction in our personal privacy. We are under continuous monitoring by government agencies like the NSA in the United States and similar ones in other countries. At the same time, commercial interests track everything we do online and with our phones. Some of this is acceptable, but in my opinion, it’s gone way too far. It’s vital that we understand better what is being done and how to reduce the tracking and spying. The more we understand about how these systems work, the more we can defend ourselves, while still taking advantage of the many benefits they provide. For example, it’s quite possible to explore interesting and useful web sites without being continuously tracked. You don’t have to reveal everything about yourself to social networks. But you have to know something about how to set up some defenses.

More generally, I’m trying to help the reader reach a better than superficial understanding of how computers work, what software is and how it’s created, and how the Internet and the Web operate. Going just a little deeper into these is totally within the grasp of anyone. The more you know, the better off you will be; knowing even a little about these topics will put you ahead of the large majority of people, and will protect you from any number of foolish behaviors.

Can you give us an example of how to defend ourselves against tracking by web sites?

BK: Whenever you visit a web site, a record is made of your visit, often by dozens of systems that are collecting information that can be used for targeted advertising. It’s easy to reduce this kind of tracking by turning off third-party cookies and by installing some ad-blocking software. You can still use the primary site, but you don’t give away much if anything to the trackers, so the spread of information about you is more limited.
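As a toy model of why third-party cookies enable this kind of tracking in the first place, here is a Python sketch (hypothetical site names; a simplification, not how any real ad network is built). A tracker embedded across many sites hands your browser one identifier and recognizes it everywhere; blocking that cookie is what breaks the cross-site link.

    # Toy sketch of cross-site tracking via a third-party cookie.
    import uuid

    class AdServer:
        """Stands in for a tracker embedded on many unrelated sites."""
        def __init__(self):
            self.profiles = {}  # cookie id -> list of sites visited

        def serve_ad(self, cookie, site):
            if cookie is None:                  # first sighting: mint an id
                cookie = str(uuid.uuid4())
            self.profiles.setdefault(cookie, []).append(site)
            return cookie                       # browser may store this

    def browse(sites, block_third_party_cookies):
        tracker, cookie = AdServer(), None
        for site in sites:
            returned = tracker.serve_ad(cookie, site)
            if not block_third_party_cookies:
                cookie = returned               # id persists across sites
        return tracker.profiles

    sites = ['news.example', 'shop.example', 'health.example']
    print(browse(sites, False))  # one id linked to all three visits
    print(browse(sites, True))   # three unlinked ids: no profile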

If I don’t care if companies know what sites I visit, why should I be worried?

BK: “I’ve got nothing to hide,” spoken by an individual, or “If you have nothing to hide, you have nothing to fear,” offered by a government, are pernicious ideas. They frame the discussion in such a way as to concede the point at the beginning. Of course you have nothing to hide. If that’s true, would you mind showing me your tax returns? How did you vote in the last election? What’s your salary? Could I have your social security number? Could you tell me who you’ve called in the past year? Of course not—most of your life is no one else’s business.

What’s the one thing that you would advise everyone to do right now to improve their online privacy and security?

BK: Just one thing? Learn more about how your computer and your phone work, how the Internet works, and how to use all of them wisely. But I would add some specific recommendations, all of which are easy and worthwhile. First, in your browser, install defensive extensions like AdBlock and Ghostery, and turn off third-party cookies. This will take you less than ten minutes and will cut your exposure by at least a factor of ten. Second, make sure that your computer is backed up all the time; this protects you against hardware failure and your own mistakes (both of which are not uncommon), and also against ransomware (though that is much less of a risk if you are alert and have turned on your defenses). Third, use different passwords for different sites; that way, if one account is compromised, others will not be. And don’t use your Facebook or Google account to log in to other sites; that increases your vulnerability and gives away information about you for minor convenience. Finally, be very wary about clicking on links in email that have even the faintest hint of something wrong. Phishing attacks are one of the most common ways that accounts are compromised and identities stolen.
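On the different-passwords point, a minimal sketch (mine, not Kernighan’s) of generating a strong, unique password per site with Python’s standard library; a real password manager would also encrypt and store them:

    # Minimal sketch: one strong random password per site. The secrets
    # module is the standard library's cryptographically secure source.
    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + '-_!@#'

    def new_password(length=16):
        return ''.join(secrets.choice(ALPHABET) for _ in range(length))

    for site in ['bank.example', 'mail.example', 'forum.example']:
        print(site, new_password())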

Brian W. Kernighan is a professor in the Department of Computer Science at Princeton University. He is the author of Understanding the Digital World: What You Need to Know about Computers, the Internet, Privacy, and Security, and the coauthor of ten other books, including the computing classic The C Programming Language (Prentice Hall).

David Alan Grier: The Light of Computation

by David Alan Grier

When one figure steps into the light, others can be seen in the reflected glow. The movie Hidden Figures has brought a little light to the contributions of NASA’s human computers. Women such as Katherine Goble Johnson and her colleagues of the West Area Computers supported the manned space program by doing hours of repetitive, detailed orbital calculations. These women were not the first mathematical workers to toil in the obscurity of organized scientific calculation. The history of organized computing groups can be traced back to the 18th century, when a French astronomer convinced three friends to help him calculate the date that Halley’s comet would return to view. Like Johnson, few human computers have received any recognition for their labors. For many, only their families appreciated the work that they did. For some, not even their closest relatives knew of their role in the scientific community.

My grandmother confessed her training as a human computer only at the very end of her life. At one dinner, she laid her fork on the table and expressed regret that she had never used calculus. Since none of us believed that she had gone to college, we dismissed the remark and moved the conversation in a different direction. Only after her passing did I find the college records that confirmed she had taken a degree in mathematics from the University of Michigan in 1921. The illumination from those records showed that she was not alone. Half of the twelve mathematics majors in her class were women. Five of those six had been employed as human computers or statistical clerks.

By 1921, organized human computing was fairly common in industrialized countries. The governments of the United States, Germany, France, Great Britain, Japan, and Russia supported groups that did calculations for nautical almanacs, national surveys, agricultural statistics, weapons testing, and weather prediction. The British Association for the Advancement of Science operated a computing group. So did the Harvard Observatory, Iowa State University, and Indiana University. One school, University College London, published a periodical for these groups, Tracts for Computers.

While many of these human computers were women, most were not. Computation was considered to be a form of clerical work, which was still a career dominated by men. However, human computers tended to be individuals who faced economic or social barriers to their careers. These barriers prevented them from becoming scientists or engineers in spite of their talents. In the book When Computers Were Human, I characterized them as “Blacks, women, Irish, Jews and the merely poor.” One of the most prominent computing groups of the 20th century, the Mathematical Tables Project, hired only the impoverished. It operated during the Great Depression and recruited its 450 computers from New York City’s unemployment rolls.

During its 10 years of operations, the Math Tables Project toiled in obscurity. Only a few members of the scientific community recognized its contributions. Hans Bethe asked the group to do the calculations for a paper that he was writing on the physics of the sun. The engineer Philip Morse brought problems from his colleagues at MIT. The pioneering computer scientist John von Neumann asked the group to test a new mathematical optimization technique after he was unable to test it on the new ENIAC computer. However, most scientists maintained a distance between themselves and the Mathematical Tables Project. One member of the Academy of Sciences explained his reservations about the Project with an argument that came to be known as the Computational Syllogism. Scientists, he argued, are successful people. The poor, he asserted, are not successful. Therefore, he concluded, the poor cannot be scientists and hence should not be employed in computation.

Like the human computers of NASA, the Mathematical Tables Project had a brief moment in the spotlight. In 1964, the leader of the Project, Gertrude Blanch, received a Federal Woman’s Award from President Lyndon Johnson for her contributions to the United States Government. Yet her light did not shine far enough to bring recognition to the 20 members of the Math Tables Project who published a book, later that year, on the methods of scientific computing. The volume became one of the best-selling scientific books in history. Nonetheless, few people knew that it was written by former human computers.

The attention to Katherine Goble Johnson is welcome because it reminds us that science is a community endeavor. When we recognize the authors of scientific articles, or applaud the distinguished men and women who receive Nobel Prizes (or, in the case of computer science, the Turing Award), we often fail to see the community members who were essential to the scientific work. At least in Hidden Figures, they receive a little of the reflected light.

David Alan Grier is the author of When Computers Were Human. He writes “Global Code” for Computer magazine and produces the podcast “How We Manage Stuff.” He can be reached at grier@gwu.edu.

Ben Peters: Announcing “555 Questions to Make Digital Keywords Harder”

This post appears concurrently at Culture Digitally.

I have relatives who joke that our family motto ought to be “if there’s a harder way, we’ll find it.” Like all jokes, this one rings true, at times painfully true. Everyone, of course, seeks convenience, and yet we so often discover the opposite—new hardness, challenges, problems—which can prove both uncomfortable and useful. Perhaps (if you’ll forgive the perverse suggestion!), critical digital teaching and scholarship should be harder as well.

How should we make digital technology criticism harder? How should critical engagement with tech discourse best carry on? What intellectual challenges does it currently face? What challenges must it face?

If you haven’t already seen it, Sara Watson released her new and significant report on the state of tech criticism last week. I am excited to announce the release of another kind of resource that just might help us keep after such questions—especially in our classrooms.

Please enjoy and share this freely downloadable, 35-page teaching resource now available on the Princeton University Press website:

“555 Questions to Make Digital Keywords Harder: A Teaching Resource for Digital Keywords: A Vocabulary of Information Society and Culture”

Use this document as you will. Many may use it to support preexisting courses; a bold few may organize critical responses to it. The questions that prompted its creation are straightforward: Is it possible to gather enough material to generate and sustain a semester of discussion in undergraduate and graduate courses based on or around the volume Digital Keywords: A Vocabulary of Information Society and Culture? Can this document, paired with that volume, sustain a stand-alone course? Whatever the answers, the document’s purpose is to complicate—not to simplify—keyword analysis for all. Keywords are supposed to be hard.

Each essay in the volume receives four sections of notes. (1) Background music suggests music that could be played in the classroom as students shuffle in and out of class; the music is meant to prompt students’ talking and thinking about the topic at hand. (2) What can we learn from the contributor listing? fosters the vital habit of learning to understand not only the reading content but also the author and his or her background. (3) Exercise suggests an activity to prompt discussion at the start of a lecture or seminar—and to be shared at the end of a class in order to encourage sustained thinking about a given keyword essay in the next class. Students may also be asked to bring prepared lists with them at the start of a class. Finally, (4) discussion prompts are meant to raise one thread of harder questions, not easy answers, for classroom debate. Most of these 555 questions are meant to model conversation pathways that elevate the theoretical stakes of thinking with and in language.

This document is in some ways an antidote to the editorial instinct to consolidate, polish, and finalize the topics raised in this volume. As the editor of this fine volume, I stand convinced that these twenty-five essays constitute state-of-the-art and definitive scholarly approaches to significant keywords. In fact it is because I am convinced of the volume’s virtues that I seek here to test them—and I know no better way to do that than to ask questions that unravel, challenge, and extend the threads of thought woven together in the essays themselves. I am sure I join my fellow contributors in inviting readers, students, and scholars to do the same with these essays.

“555 Questions” is also something of a methodological extension of Williams’s keywords project—that is, these 555 questions are meant not to provoke particular responses so much as, in admittedly sometimes slapdash and zigzag ways, to model the type of language-based discussion that all sensitive users of language may engage in on their own terms. In other words, most of the questions raised in these pages require little more than taking language and its consequences seriously—at least initially. I am sure I have not done so in these pages with any more fertility or force than others; nevertheless, I offer these pages as a working witness to the generative capabilities of language analysis to get along swimmingly with both the real-world empiricism of the social sciences and the textual commitments of the humanities. I have not questioned my own introduction to the volume, which I leave to others, although I’ll leave off with this quote from it:

“No one can escape keywords so deeply woven into the fabric of daily talk. Whatever our motivations we—as editor and contributors—have selected these keywords because we believe the world cannot proceed without them. We invite you to engage and to disagree. It is this ethic of critical inquiry we find most fruitful in Williams. Keyword analysis is bound to reward all those who take up Williams’s unmistakable invitation to all readers: Which words do unavoidably significant work in your life and the world, and why?”
