Public Thinker: T. L. Taylor On Gamergate, Live-Streaming, and Esports

This article was originally published by Public Books and is reprinted here with permission.

The qualitative sociologist T. L. Taylor is a professor of Comparative Media Studies at MIT and cofounder and director of research for AnyKey, an organization dedicated to supporting and developing fair and inclusive esports. She explores the interrelations of culture and technology in online leisure environments, writing in a clear style and with an evocative voice about gender, inclusivity, and diversity in those virtual spaces. Around this research she has built a career that has taken her from California to North Carolina to Denmark to Cambridge, brought her in front of audiences at the White House and the International Olympic Committee, and led her to speak to the New York Times, PBS, and the BBC as a gaming expert.

She is the author of three books and the coauthor of another. Her latest, Watch Me Play: Twitch and the Rise of Game Live Streaming, was published last fall. We spoke about that work, and in particular about online gaming culture, esports, and the economies of live-streaming, and put it in conversation with the Gamergate controversy, noting how the virtual worlds shaped by broader cultural currents might build a more welcoming and accessible future.


B. R. Cohen (BRC): Your research and teaching look at online gaming, esports, the sociology of virtual spaces, and the like. But I want to start with Gamergate. I should know what it is and understand its nuances, but maybe I don’t.

T. L. Taylor (TLT): Well, it began about five years ago, and you might think of it in two ways. First, Gamergate was targeted, systematic harassment of women in gaming, including developers, academics, and game critics. Although it was cloaked in the language of concern about “ethics in gaming,” it was essentially a targeted anti-feminist movement primarily against a host of women. But there’s the second way to think about it. We’ve now seen how its shape and method were a kind of template or dress rehearsal for the alt-right movement, which has been front and center in the last couple years.

BRC: Was that apparent at the time, or has it become clearer since?

TLT: Maybe a little of both. A number of commentaries have since connected what happened in Gamergate with patterns we now see with the alt-right. The forms of harassment are similar, as is the use of various online sites like 4Chan and Reddit.

BRC: Direct connections, too?

TLT: Yeah, definitely. Milo Yiannopoulos and Breitbart played a part in Gamergate. Brett Kavanaugh’s friend Mark Judge, and many alt-right guys, were involved in attacking women like Anita Sarkeesian, who is a leading voice on women and video games. She was viciously harassed. Her life was threatened, and she was doxxed. These Gamergate tactics are the bread and butter of what we see in the alt-right movement more generally. To be frank, I often say that—for good or ill—gaming is the canary in the coal mine for broader cultural, critical, and political issues. Gamergate is a profoundly unfortunate example. To call it misogyny would be an understatement.

BRC: This was in 2014?

TLT: Around then, yes. I should say, too, as someone who studies gaming culture, gender, and technology, a pattern often emerges here. You start seeing a reactionary response when you get a critical mass of women, people of color, or queer folks in a space expressing their own thoughts about their circumstances, pushing back on the culture, and not merely echoing whatever the dominant culture is saying. This is when you get people involved in things like Gamergate or the alt-right purportedly defending “ethics in games” when, in fact, they’re mostly just perpetuating hate and fear. So it was a really nasty time. The people who bore the brunt of it were developers and people like Sarkeesian.

BRC: As a scholar studying this phenomenon, how much did you get caught up in it?

TLT: I got tagged in briefly early on, but I think in part because of my name I’m often seen as a man online, so I was not targeted in the same way.

BRC: You pointed me to the Conference on Advances in Computer Entertainment Technology (ACE) just last fall to show that this is still going on.

TLT: In fact, there was a huge controversy and protest movement that eventually led to the conference being cancelled. The ACE conference chair had invited Steve Bannon as its keynote speaker. I mean, the ACE Twitter account previously had Ada Lovelace and all these amazing women in technology in its header image, and yet two years ago the conference chair behaved appallingly on Twitter toward women, particularly junior women scholars. And then he tried to bring Bannon to a conference he was chairing. Gamergate wasn’t some isolated aberration; it was a convergence of off-line misogyny with online platforms and gaming spaces. The alt-right dovetails into that all too well.

BRC: So Gamergate is about gender and technology, certainly, but more broadly it’s about how marginalized peoples use these games to connect with each other and are re-marginalized within these online communities.

TLT: It’s this strange unfortunate double side of game culture. Gaming and geek culture have historically been places where people who felt like outsiders found connection through geeky loves and pastimes, whether they are games, anime, or comics. But as is often the case with subcultures, they also have heavily policed themselves. They police the boundaries of what they are and who is allowed in. As gaming has become mainstream, the stakes in policing those boundaries seem to have gotten even higher for many people. The question of whether you’re a “real gamer” or a “real comics fan” becomes more intense. It’s happened in a number of related subcultures. We have Gamergate, yes, but both the comics and science fiction communities, for example, have had their own version of this.

BRC: How did you come to this topic, this field? These are all social spaces that I see a sociologist would study. How do you make sense of these gaming and esports cultures in your work?

TLT: Well, I studied sociology as an undergrad at Berkeley and as a graduate student at Brandeis. From early on I was drawn to qualitative work and ethnography in particular. I’m probably not an anthropologist, though, because I’m also drawn to thinking about institutions and organizations in particular ways. Not that anthropologists don’t do that, but sociologists do something slightly different. I ended up at Brandeis, because there were only a handful of places to do qualitative sociology in the US at the time.

BRC: Where did your interest in computers and gaming come from?

TLT: I should’ve also mentioned that I was a community college student before Berkeley, and I’m a first-generation college student from a working-class family. I didn’t grow up with a computer in my home. We didn’t even have an early Atari. I played video games at the arcade but that was about it. My undergraduate thesis was on consumption practices among young Cambodian refugees in San Francisco. It had nothing to do with technology. But in 1991 I went to graduate school, moving from California to Boston, and started using the internet mostly because it was available and I wanted to stay in touch with a few friends from undergrad. I started spending a lot of time online and ended up doing my dissertation on embodiment in early virtual environments. This was before Second Life. These were text-based worlds, multiuser dungeons. Did you ever get into these things?

BRC: I didn’t. I’m not sure why. I think SimCity was the height of it for me.

TLT: You missed out on a host of early text-based games. Zork was one, in which you look around the room, go left, go right, by typing the commands. I got interested in the multiplayer ones because you’d head into online text-based worlds full of random people, bringing to mind that old New Yorker “On the internet, nobody knows you’re a dog” cartoon. In that spirit, a good part of the conversation in the 1990s was about identity on the internet. Sherry Turkle was thinking about identity in new and important ways in Life on the Screen. I was her research assistant in the 1990s, which helped develop my thinking on it. I noticed, though, that there was a sense of a presence in these worlds, which got me thinking about embodiment in online spaces, not just about identities. That’s what I worked on.

BRC: I take it that EverQuest was an exemplar of these games?

TLT: Right, that is what’s known as an MMO or MMORPG, a massively multiplayer online role-playing game. EverQuest wasn’t the only one, but in the 1990s it was one of the big ones. Unlike all those text-based worlds we’d been hanging out in, EverQuest and other MMOs brought graphics. My first book [Play between Worlds] was about MMOs.

BRC: Last fall I spoke with Siva Vaidhyanathan, whose research on social media grew along with his own biography as someone coming of academic age in the 1990s, when the internet was taking its current form. It sounds like you had a similar trajectory, but how did you come to study that game?

TLT: By the end of my dissertation I was mostly tired of it, as grad students usually are. Some of the people I met doing my dissertation research started telling me about this game, EverQuest. I thought, “Oh, that sounds like a fun distraction,” so I started playing it. Pretty quickly I realized, “Oh, no, wait, wait, there’s a lot of fascinating stuff happening here.” That’s how I got into the game as a player, and that was the hook that got me studying it as a sociologist.

BRC: When you were in those virtual worlds thinking about identity and then embodiment, did gender dynamics stand out right away?

TLT: Yes, right away. They were clear and crystalized within the game spaces in particular. In my early work on embodiment, I wrote about gender and sexuality, but because game spaces so clearly represent the gender issues visually, they’re hard to miss. Or in the case of esports, they’re so egregious; it’s stark. You asked about gender dynamics but, honestly, it wasn’t until grad school that I had any kind of serious feminist awareness. My eyes were always focused on class and socioeconomic issues when I was younger, because of my own biography coming from a working-class family. So for me, socioeconomic class issues were the early hook, while the feminist and gender questions came later.

BRC: It’s difficult in the necessary discussions of intersectionality to think of socioeconomic factors as an intersection, too. So many things can intersect.

TLT: It’s funny, I teach a games and culture class in which we do sessions on gender and race. I try to model thinking on how various aspects of our identities and biographies interact and collide. I talk about how I am a woman, but I’m also from a working-class family—and a white one at that. It’s very hard to do it all, but thinking across these areas is key. And intersectionality, as a way of thinking about interlocking systems of oppression—particularly for people of color—is such an important concept to expose students to.

BRC: How do you approach it?

TLT: I think for me it’s about the sociological imagination, something that the sociologist C. Wright Mills talked about. When I started taking sociology classes, I was like, “Holy shit.” This idea helped me take what felt deeply personal, individual, and family-based and link it to a bigger conversation. That was the first critical intellectual intervention in my life.

BRC: Your work beyond the MIT classroom is in touch with the gaming world as well. You used the phrases “gamers,” “game space,” and “gaming space.” Are those common terms? You’ve got gamers; you’ve got fans, audiences, and markets; and the rise of professionalization comes up in your books. But what is your relationship with the gaming community?

TLT: That’s a tricky question. I’m a low theory person at best, which means I don’t have typological models in my head, so I use those terms a bit colloquially. There isn’t one single game community or one kind of person who is a gamer. Each of my projects tries in some way to show the heterogeneity of the gaming space.

BRC: I don’t know much about those gamer spaces, those social worlds. That’s probably obvious by now. A few years ago, I was playing a game with my kids, Game of War, which we all joined on our devices, made our avatars, and played and chatted with people from all over the world. It didn’t take long to learn about the ways that personalities stuck out in those games, the ways people played them—aggressively, congenially, or otherwise. This was my first experience seeing that this was an entire social system worth examining. But even that felt different than the trolls on Twitter or the comment threads on Facebook. How do the social networks in these games differ from other social media, from Twitter or Facebook? Is it a whole different beast?

TLT: I would say there are many things happening. For example, much of what I talk about in my new book on live-streaming, Watch Me Play, would look familiar to people who study social networks. And some things would look familiar to people who study precarious labor and the gig economy. The stuff that’s happening in gaming is not separate from those broader cultural trends and developments. But it’s even messier, because people very regularly use a variety of other social networking sites to facilitate their game play or live-streaming.

One of the things I talk about in the book is how people are using Twitch to live broadcast their game play to each other, but they’re also using Twitter to keep in contact with fans and audience members. So one consistent thread in my various studies of online gaming is this notion of the assemblage, an assemblage of sites and practices that people rely on to make up their gaming or online experience. You can’t just take the artifact of the game—the specific software or platform—and fixate on it and think you understand something meaningful about gaming. The assemblage notion extends to different actors, stakeholders, institutions, and platforms; they all have to come together to make a particular game or cultural activity around a game happen.

BRC: You’re marking the development of the combination of so many different networks that couldn’t have happened at any other time.

TLT: Exactly. And for me it’s also a bit of a methodological intervention. If you want to understand these spaces and experiences, you have to understand that people aren’t just Twitter users, they aren’t just television watchers, and so on. We have a range of things cobbled together to make up our leisure or recreational practices.

BRC: You’re being technically intersectional.

TLT: Yes, yes, I like that. I think it would be an analytic mistake to focus on individual artifacts, even if methodologically we sometimes have to home in on particular platforms. But your participants often lead you elsewhere. You miss the dynamic interplay and misunderstand a lot of the social practice if you don’t follow those other threads.

BRC: You also write about structural cultural differences across the world, so it isn’t just about the context of cross-platform gaming experiences at one point in time. It’s also about cultural differences. In preparing for this conversation I kept seeing references to South Korea as a pioneer in a lot of these areas, or to Europe and North America as different regions with similar technical things that play out differently.

TLT: That is the sociology side of me, to be honest. With esports, people will often say “Oh, if we could just be like South Korea.” I wrote about that in Raising the Stakes. At the time, South Korea had television stations broadcasting esports and esports teams and sponsors. The more I looked into it, the more I realized that we can’t be like South Korea. Their esports culture came from a set of government policies, technological infrastructure, and cultural patterns of use based on the way youth culture is organized. So if you build your model based on a particular piece of hardware, software, or infrastructure, you’ll likely miss how it’s developing in other places in completely different ways. It’s a bit “Science & Technology Studies 101” to say that cultural context shapes technologies, but with new fields arising and new social spaces like esports, I’ve found that we need to keep showing this.

BRC: There’s more to it than drag and drop. Do you still see that kind of a drag-and-drop version of technology transfer circulating in mainstream media?

TLT: Yeah, absolutely. And it’s funny because in the spaces that I study, whether it’s esports or live-streaming, people build elaborate imagined audience-use models in their heads. I think that’s a lovely model, but it depends on so many complex factors that the technological determinists fail to acknowledge. How does the harassment of women and girls or the regulation of their leisure in particular ways shape their participation in gaming? This is where the nastiness of gaming sometimes comes into play, where models circulate in game communities about what “real gaming” is and what “real gamers” look like. And those are often deeply out of touch with the complexity of context in which people game or how taste and preference develop.

BRC: How do your studies of gaming fit with media portrayals of online communities, esports, or otherwise? You just mentioned determinism, and I think there’s a tendency in the broader media to focus too much on causation and impact, which we probably see with all new technologies. They’ll say, for example, that gaming is causing a problem, gaming is causing a new market, gaming generates harassment, gaming provides new opportunities. Your research helps correct that, I think, by also talking about what leads to gaming, not just what gaming leads to. If people want to talk about how gaming is increasing cultural friction, as with the harassment or gender issues, it would seem that we should attend to its foundations beforehand and not just its outcomes.

TLT: That determinist impulse is so common. When I’m talking to press, I often get the “Where’s it going?” or “What’s next for esports?” questions. And I answer that I am not a futurologist; there’s too much contingency. For me, the most interesting parts of the story are all those contingencies. I’m drawn to skirmishes, gaps, breakdown moments, and the little stories about everything falling apart. Those help to highlight the stakes. None of that is terribly satisfying for people looking for causality models. Esports and live-streaming are closely tied to commercial interests and are in a hype bubble right now. And so I think when I get those questions these days, I just have to say that it’s tied up in pure financial speculation. It’s kind of awful what’s happening in that regard. A lot of people just want to make a lot of money by figuring out what the next thing is. I couldn’t care less about that. For me, those aren’t the most interesting questions.

BRC: Studying commercial spaces and entertainment technologies must bring its own difficulties as a scholar.

TLT: That’s true. Much of the stuff I study either has an inherent commercial element, or there’s somebody who comes along and wants to commercialize it. But I tend to focus on things that have arisen out of user desire and community practice. I think that’s what makes the hype stuff tricky. Even though we’re in an esports bubble right now, I don’t think this thing called esports will ever go away, because it comes from actual people and users building grassroots communities.

BRC: On that point, I want to get back to Twitch and the rise of game live-streaming. Twitch is one of the things that’s commercializing esports, I take it?

TLT: Absolutely. Live-streaming amplified broadcasting, which brought in a bigger audience. That, in turn, has caught the eye of commercial interests. I was just at TwitchCon. It’s now a huge convention, which I guess speaks to the growing phenomenon. It’s massive. Twitch is a video platform on which people stream and watch games. Game live-streaming on a site like Twitch taps into the long-standing pleasures people take in sharing their play with each other, whether that’s sitting on a sofa watching your friend play or making and uploading your own videos. Twitch found a way to build a platform around that user activity. They are, of course, trying to commercialize it. It comes from an authentic and true experience but is now part of a larger culture of monetization and platform economies. Those who are now trying to earn a living or make ends meet by streaming games are tied to gig economies and precarious labor.

BRC: It makes me realize that I didn’t find Dragon’s Lair in your index. That’s my go-to when you talk about spectator video games. I remember arcades in the 1980s, everybody crowded in to see. It was the only video game with a TV screen above it so others could watch. Everybody would huddle around.

TLT: Right, that old arcade game, exactly. That sense of spectating is an important part of gaming. Sure, sometimes we play alone, and nobody’s there to watch, but the pleasure of watching and being watched has always been a part of gaming. Esports and Twitch as a platform tapped into that for the digital age. I was trying to understand that space as a sociologist for this new book. I got into the project because I saw that people were trying to bring gaming to spectator audiences and doing so in all kinds of creative ways, jamming technologies together. Then Twitch came along as a platform and made it easy. Or easier, I should say. Part of this story was coming to understand the dynamics of live-streaming not necessarily as sports but as entertainment, as media entertainment.

BRC: So who is the audience for your work? You’ve published books with academic presses and written in an accessible voice about complicated social and technical issues. You also teach about these things at MIT. But you’re also working with, writing about, and writing for these dynamic communities that are still in the making.

TLT: I think the books have been picked up by nonacademics because they act as a kind of legitimizing artifact and help chronicle a history. With esports folks I think they felt like, “Oh my God, somebody is paying serious attention to us.” It was a totem of legitimation, which is gratifying. I honestly don’t expect nonacademics to read my books. I really don’t, but of course it’s rewarding when the communities I study pick them up.

BRC: You do more specific public-facing things, too, like AnyKey, which, and I’m quoting your mission statement here, aims “to help create fair and inclusive spaces” for marginalized communities online.

TLT: That’s right, AnyKey has been a more explicitly publicly engaged project. Public talks, stuff on YouTube, things like that. AnyKey is where I try to do most of the public-facing work. My work with the initiative has also involved doing shorter white papers meant to actually provide helpful guidelines or insights, because just trying to distill these complicated things is a monumental task.

BRC: What are the general basics of AnyKey?

TLT: It started a few years ago. This actually dovetails with our conversation about Gamergate. When Gamergate was happening, Intel sort of blew it on their first-pass response. They got a lot of heat at the time, but they actually learned a lesson and made a big announcement that they would be supporting a number of different diversity initiatives. They were going to start taking diversity and inclusion more seriously and dedicated a chunk of money to sponsoring various measures. Because of the esports work I had done, I knew people at the Electronic Sports League (ESL), and one of them who’d been hearing me talk about gender for many years came to me and said, “Do you think there’s something we could do? Should we try to get in on this Intel stuff?” ESL has been working with Intel for years on esports. I said “Sure, let’s try to do something.” We connected up with Morgan Romine, who has a PhD in anthropology and who I codirect AnyKey with, and pitched to Intel research-driven initiatives around diversity and inclusion in esports. It worked, and we got some sponsorship money.

BRC: What exactly do you do there?

TLT: We’ve tried to do a range of things so far. Like I said, it’s research driven so we do fieldwork studies, we do workshops in which we try to get a sense of the key issues by working with various stakeholders, and we spend a lot of time talking to lots of folks in the esports space about the challenges they are facing. I’m the director of research and Morgan, my cofounder and director of initiatives, is the one who spins up concrete projects based on our findings. It covers everything, from practical skills like how to moderate chats to more symbolic issues. As an example, one of the things we heard early on is that women who were active and thriving in the esports space all had had these formative moments in which they saw another woman doing it, being involved in esports in some way, and it gave them a sense of like, “Oh wait, I could do that.” That led us to produce a series of videos profiling women in the scene. It was a “if you could see it, you could be it” kind of thing.

BRC: A kind of social inoculation, exposing them to the possibility?

TLT: Yeah, I mean, it’s kind of amazing when you start talking to people who are really making it. I love it. I have always been very interested in the women who manage to stay in a space that is so hostile to them. I mean anywhere, in any forum, not just online. Like, how the hell are they doing that? What is going on? It was the same way with esports, leading us to think about what we can learn from the women who are there. There was this thing they had come across and someone else was doing it, playing in that space, and it became seared into their imagination that they could do it too. That doesn’t remove all of the barriers, not by a long shot, but that power of the symbolic was real. So we do studies as well as practical things.

BRC: Like the chat moderation guides?

TLT: Right, yes, and we put out other guidelines like that. We have one on gender-inclusive tournaments, for example. We often support women’s tournaments, but we want those tournaments to be trans inclusive. So we did a whole …

BRC: That’s a thing, gender-defined tournaments?

TLT: Yeah, yeah, and women’s tournaments in esports are tricky because I think most of us who support them see them as a stopgap. Ultimately, we don’t want a world in which men and women are playing on separate teams. There’s no good reason for that. But the harassment of women in this space is so strong that we tend to feel that if you don’t give them opportunities in women-only tournaments, they won’t get the experience. So we see women’s tournaments as necessary for now while working toward gender inclusivity more broadly in esports.

But even then, we were seeing tournaments happen that were women-only, but the language around them was not trans inclusive. That led us to put out a white paper covering a variety of issues like, for example, how to be gender inclusive when taking photos for your event, making sure that all the photos aren’t just of men. Even that degree of guidance was necessary. But also explaining to people how pronouns work and how to think about having trans inclusivity based on a “you are who you say you are” rule. It’s all in the research section of the AnyKey website.

One of the things we do with those best practices is simply to try to help people who want to make this space better and to give them language and frameworks. We just released another set of guidelines maybe a month ago on how to moderate your chat if you are streaming your esports tournament. Because the chat can be really awful if left unmoderated. And, again, a lot of people want it to be better but they don’t know where to start. So we put out these guidelines to help people.

BRC: Is this extracurricular for you? Or is it part of your job description?

TLT: Yeah, I don’t get paid for it. It’s extra. [Laughs] Public-facing work is such an interesting challenge, and this work with AnyKey has been one of the most challenging things I’ve ever done. We’re trying to take critical or feminist frameworks and interventions and make them accessible, spread them widely, and get them out of the classroom. It’s hard. I find a lot of people want things to be better, they want to do better, but they don’t have the tools or alternative language to get there. Once you give them that, they’re like, “Oh, okay, yeah, I can do that.”

Book cover of T. L. Taylor’s Watch Me Play: Twitch and the Rise of Game Live Streaming

This article was commissioned by B. R. Cohen.

Featured image: T. L. Taylor. Photograph by Bryce Vickmark

Ken Steiglitz: Garage Rock and the Unknowable

Here is the second post in a series by The Discrete Charm of the Machine author Ken Steiglitz. You can access the first post here.

I sat down to draft The Discrete Charm of the Machine with the goal of explaining, without math, how we arrived at today’s digital world. It is a quasi-chronological story; I take what I need, when I need it, from the space of ideas. I start at the simplest point, describing why noise is a constant threat to information and how using discrete values (usually zeros and ones) affords protection and a permanence not possible with information in analog (continuous) form. From there I sketch the important ideas of digital signal processing (for sound and pictures), coding theory (for nearly error-free communication), complexity theory (for computation), and so on—a fine arc, I think, from the boomy and very analog console radios of my childhood to my elegant little internet radio.
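To make the point about noise concrete, here is a minimal Python sketch of my own (an illustration under simple assumptions, not code from the book). An analog level picks up a little Gaussian noise at every relay stage and slowly drifts away from its original value; a binary level passing through the same stages can be re-thresholded (regenerated) at each step and comes out unchanged.

    import random

    def relay_analog(value, stages, noise=0.05):
        """Pass an analog level through repeated noisy relay stages.
        The noise accumulates, so the original level is gradually lost."""
        for _ in range(stages):
            value += random.gauss(0, noise)
        return value

    def relay_digital(bit, stages, noise=0.05):
        """Pass a binary level (0 or 1) through the same noisy stages,
        but regenerate it by thresholding at every stage."""
        level = float(bit)
        for _ in range(stages):
            level += random.gauss(0, noise)
            level = 1.0 if level > 0.5 else 0.0  # regeneration step
        return level

    random.seed(1)
    print("analog 0.7 after 100 stages:", round(relay_analog(0.7, 100), 3))
    print("digital 1 after 100 stages:", relay_digital(1, 100))

Run it a few times and the analog value typically wanders well away from 0.7, while the regenerated bit is still exactly 1.0.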

Yet the path through the book is not quite so breezy and trouble-free. In the final three chapters we encounter three mysteries, each progressively more fundamental and thorny. I hope your curiosity and sense of wonder will be piqued; there are ample references to further reading. Here are the problems in a nutshell:

  1. Is it no harder to find a solution to a problem than to merely check a solution? (Does P = NP?) This question comes up in studying the relative difficulty of solving problems with a computing machine. It is a mathematical question, and is still unresolved after almost 40 years of attack by computer scientists.
    As I discuss in the book, there are plenty of reasons to believe that P is not equal to NP, and most computer scientists come down on that side. But … no one knows for sure. (A toy sketch of this check-versus-solve gap appears just after this list.)
  2. Are the digital computers we use today as powerful—in a practical sense—as any we can build in this universe (the extended Church-Turing thesis)? This is a physics question, and for that reason is fundamentally different from the P=NP question. Its answer depends on how the universe works.
    The thesis is intimately tied to the problem of building machines that are essentially more powerful than today’s digital computers—the human brain is one popular candidate. The question runs deep: some believe there is magic to be found beyond the world of zeros and ones.
  3. Can a machine be conscious? Philosopher David Chalmers calls this the hard problem, and considers it “the biggest mystery.” It is not a question of mathematics, nor of physics, but of philosophy and cognitive science.
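As a concrete illustration of the first question, here is a toy Python sketch of my own (not from the book) built around the subset-sum problem. Checking a proposed answer takes a single pass over the numbers, while the obvious way to find an answer tries every subset; whether that exponential search can always be avoided is exactly what the P versus NP question asks.

    from itertools import combinations

    def check_solution(numbers, subset, target):
        """Verification is fast: one membership test and one sum."""
        return set(subset) <= set(numbers) and sum(subset) == target

    def find_solution(numbers, target):
        """Search is slow: brute force examines up to 2**n subsets."""
        for size in range(len(numbers) + 1):
            for subset in combinations(numbers, size):
                if sum(subset) == target:
                    return list(subset)
        return None

    numbers = [3, 34, 4, 12, 5, 2]
    candidate = find_solution(numbers, target=9)                      # exponential-time search
    print(candidate, check_solution(numbers, candidate, target=9))    # linear-time check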

I want to emphasize that this third question is not merely the modern equivalent of asking how many angels could dance on the point of a pin. The answer has most serious consequences for us humans: it determines how we should treat our android creations, the inevitable products of our present rush to artificial intelligence. If machines are capable of suffering, we have a moral responsibility to treat them compassionately.

My first reaction to the third question is that it is unanswerable. How can we know about the subjective mental life of anyone (or any thing) but ourselves? Philosopher Owen Flanagan called those who take this position mysterians, after the proto-punk band ? and the Mysterians. Michael Shermer joins this camp in his Scientific American column of July 1, 2018. I discuss the difficulty in the final chapter and remain agnostic—although I am hard-pressed even to imagine what form an answer would take.

I suggest, however, a pragmatic way around the big third question: Rather than risk harm, give the machines the benefit of the doubt. It is after all what we do for our fellow humans.

Ken Steiglitz is professor emeritus of computer science and senior scholar at Princeton University. His books include The Discrete Charm of the Machine, Combinatorial Optimization, A Digital Signal Processing Primer, and Snipers, Shills, and Sharks. He lives in Princeton, New Jersey.

 

Ken Steiglitz on The Discrete Charm of the Machine

A few short decades ago, we were informed by the smooth signals of analog television and radio; we communicated using our analog telephones; and we even computed with analog computers. Today our world is digital, built with zeros and ones. Why did this revolution occur? The Discrete Charm of the Machine explains, in an engaging and accessible manner, the varied physical and logical reasons behind this radical transformation. Ken Steiglitz examines why our information technology, the lifeblood of our civilization, became digital, and challenges us to think about where its future trajectory may lead.

What is the aim of the book?

The subtitle: To explain why the world became digital. Barely two generations ago our information machines—radio, TV, computers, telephones, phonographs, cameras—were analog. Information was represented by smoothly varying waves. Today all these devices are digital. Information is represented by bits, zeros and ones. We trace the reasons for this radical change, some based on fundamental physical principles, others on ideas from communication theory and computer science. At the end we arrive at the present age of the internet, dominated by digital communication, and finally greet the arrival of androids—the logical end of our current pursuit of artificial intelligence. 

What role did war play in this transformation?

Sadly, World War II was a major impetus to many of the developments leading to the digital world, mainly because of the need for better methods for decrypting intercepted secret messages and more powerful computation for building the atomic bomb. The following Cold War just increased the pressure. Business applications of computers and then, of course, the personal computer opened the floodgates for the machines that are today never far from our fingertips.

How did you come to study this subject?

I lived it. As an electrical engineering undergraduate I used both analog and digital computers. My first summer job was programming one of the few digital computers in Manhattan at the time, the IBM 704. In graduate school I wrote my dissertation on the relationship between analog and digital signal processing and my research for the next twenty years or so concentrated on digital signal processing: using computers to process sound and images in digital form.

What physical theory played—and continues to play—a key role in the revolution?

Quantum mechanics, without a doubt. The theory explains the essential nature of noise, which is the natural enemy of analog information; it makes possible the shrinkage and speedup of our electronics (Moore’s law); and it introduces the possibility of an entirely new kind of computer, the quantum computer, which can transcend the power of today’s conventional machines. Quantum mechanics shows that many aspects of the world are essentially discrete in nature, and the change from the classical physics of the nineteenth century to the quantum mechanics of the twentieth is mirrored in the development of our digital information machines.

What mathematical theory plays a key role in understanding the limitations of computers?

Complexity theory and the idea of an intractable problem, as developed by computer scientists. This theme is explored in Part III, first in terms of analog computers, then using Alan Turing’s abstraction of digital computation, which we now call the Turing machine. This leads to the formulation of the most important open question of computer science: does P equal NP? If P equals NP, it would mean that any problem whose solutions can be checked fast can also be solved fast. This seems like asking a lot and, in fact, most computer scientists believe that P does not equal NP. Problems as hard as any in NP are called NP-complete. The point is that NP-complete problems, like the famous traveling salesman problem, seem to be intrinsically difficult, and cracking any one of them cracks them all. Their essential difficulty manifests itself, mysteriously, in many different ways in the analog and digital worlds, suggesting, perhaps, that there is an underlying physical law at work.
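To make that gap concrete, here is a toy Python sketch of my own (not from the book) using the traveling salesman problem. Verifying that a proposed tour visits every city exactly once and fits a length budget takes polynomial time; the naive search below, by contrast, examines every possible ordering of the cities, and no essentially faster general method is known.

    from itertools import permutations

    def tour_length(tour, dist):
        """Total length of a closed tour through the cities in 'tour'."""
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def check_tour(tour, dist, budget):
        """Verification: every city appears exactly once and the tour fits the budget."""
        return sorted(tour) == list(range(len(dist))) and tour_length(tour, dist) <= budget

    def best_tour(dist):
        """Search: brute force over all (n-1)! orderings of the remaining cities."""
        rest = range(1, len(dist))
        return min(([0] + list(p) for p in permutations(rest)),
                   key=lambda t: tour_length(t, dist))

    # A tiny symmetric distance matrix for four cities.
    dist = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 3],
            [10, 4, 3, 0]]

    tour = best_tour(dist)
    print(tour, tour_length(tour, dist), check_tour(tour, dist, budget=20))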

What important open question about physics (not mathematics) speaks to the relative power of digital and analog computers?

The extended Church-Turing thesis states that any reasonable computer can be simulated efficiently by a Turing machine. Informally, it means that no computer, even an analog one, is more powerful (in an appropriately defined way) than the bare-bones, step-by-step, one-tape Turing machine. The question is open, but many computer scientists believe it to be true. This line of reasoning leads to an important conclusion: if the extended Church-Turing thesis is true, and if P is not equal to NP (which is widely believed), then the digital computer is all we need—Nature is not hiding any computational magic in the analog world.

What does all this have to do with artificial intelligence (AI)?

The brain uses information in both analog and digital form, and some have even suggested that it uses quantum computing. So, the argument goes, perhaps the brain has some special powers that cannot be captured by ordinary computers.

What does philosopher David Chalmers call the hard problem?

We finally reach—in the last chapter—the question of whether the androids we are building will ultimately be conscious. Chalmers calls this the hard problem, and some, including myself, think it unanswerable. An affirmative answer would have real and important consequences, despite the seemingly esoteric nature of the question. If machines can be conscious, and presumably also capable of suffering, then we have a moral responsibility to protect them, and—to put it in human terms—bring them up right. I propose that we must give the coming androids the benefit of the doubt; we owe them the same loving care that we as parents bestow on our biological offspring.

Where do we go from here?

A funny thing happens on the way from chapter 1 to chapter 12. I begin with the modest plan of describing, in the simplest way I can, the ideas behind the analog-to-digital revolution. We visit along the way some surprising tourist spots: the Antikythera mechanism, a 2,000-year-old analog computer built by the ancient Greeks; Jacquard’s embroidery machine with its breakthrough stored program; Ada Lovelace’s program for Babbage’s hypothetical computer, predating Alan Turing by a century; and B. F. Skinner’s pigeons trained in the manner of AI to be living smart bombs. We arrive at a collection of deep conjectures about the way the universe works and some challenging moral questions.

Ken Steiglitz is professor emeritus of computer science and senior scholar at Princeton University. His books include Combinatorial Optimization, A Digital Signal Processing Primer, and Snipers, Shills, and Sharks (Princeton). He lives in Princeton, New Jersey.

Joel Waldfogel on Digital Renaissance

The digital revolution poses a mortal threat to the major creative industries—music, publishing, television, and the movies. The ease with which digital files can be copied and distributed has unleashed a wave of piracy with disastrous effects on revenue. Cheap, easy self-publishing is eroding the position of these industries as gatekeepers and guardians of culture. Does this revolution herald the collapse of culture, as some commentators claim? Far from it. In Digital Renaissance, Joel Waldfogel argues that digital technology is enabling a new golden age of popular culture, a veritable digital renaissance.

Are we living in a digital renaissance? How can we tell?

We are absolutely experiencing a digital renaissance. There are a few big signs. The first is the explosion in the number of new cultural products being created. The number of new songs, books, movies, and television shows created and being made available to consumers has increased by large amounts. There has been a tripling in the number of new songs, and similar growth rates for other sectors.

Of course, quantity alone is not enough to qualify a renaissance. What makes the recent period a renaissance is that the recent crops of new products are appealing to consumers, compared with old products. By various measures, new music, television shows, books, and movies are really good compared with earlier vintages.

And finally, we know it’s a digital renaissance because the higher quality of the new vintages is driven by the products made possible by digitization, i.e., new technologies that make it possible for small-scale creators and intermediaries outside of the traditional mainstream to bring products to market. The fruits of the digital renaissance include the music on independent record labels, the self-published books, movies from independent filmmakers, and television shows distributed outside of the traditional distribution channels. Again, many of these new products are created and distributed without the support of the traditional cultural gatekeepers (major record labels and movie studios, traditional television networks, and major publishing houses).

What will happen to traditional gatekeeping? Is it going away or will we see the creation of new gatekeepers?

First, while lots of creation now happens outside of the traditional gatekeepers, those traditional gatekeepers still have an important role. Once an artist has demonstrated his or her commercial promise, the traditional players are well-placed to bring new works to a large audience. Quite often, an artist will become known using independent channels and will then get snapped up by a traditional player. This happened with the famous self-published Fifty Shades books, and it happens with many musical acts—think Arcade Fire—whom consumers first encounter on indie labels.

Even though digitization has allowed a lot of people to create their work and put it in front of potential audiences, consumers have limited attention and limited capacity to figure out which of the new products are good. This puts a lot of power in the hands of new kinds of gatekeepers, the people choosing the content at Netflix, or the people deciding which products to recommend at Spotify or Amazon.

Why has piracy been a bigger problem for some creative industries over others?

Music faced piracy first and had to “write the book” on how to respond. It’s hard to go first since there are few examples to follow. It took the music industry four years to respond to Napster, until the iTunes Music Store. For four years there were convenient ways to steal music digitally but no convenient way to buy it. In the meantime, many consumers had become accustomed to getting music without paying. Music also had the problem that digital music files are small enough to move quickly over the Internet, while movie files were initially too large.

The industries hit later were also able to learn from the experience of the music industry and responded more quickly. For example, roughly a year after the appearance of YouTube, the major television networks were making their shows available online free of charge.

Having convenient ways to buy digital products goes a long way toward stemming piracy. One of the first impacts of Spotify—the streaming music service—was to substantially reduce music piracy. More recently, Spotify (and other paid subscription services) have also reversed the long decline in music revenue.

What are your thoughts on copyright law in the United States? Does it need to be stricter? Better enforced?

The reason we have intellectual property rules, such as copyright laws, is to provide incentives for people to create. The goal is to make sure there is a steady supply of new products that consumers find appealing.

The digital era has ushered in a great deal of piracy and has therefore threatened the revenue of creators and intermediaries. If that’s all that digitization had done, then we would expect a drop off in creative activity. And we would need a stiffening of copyright enforcement just to keep creative incentives where they were.

Fortunately, even as it facilitated piracy, digitization also reduced the costs of creation and distribution. And the net effect of those two offsetting forces has been to unleash a large amount of good new creative production.

Many people in the creative industries would like to see stronger enforcement of intellectual property protections. They may be right, for a variety of reasons, including just respect for property rights. But the evidence in the book shows that we don’t need a strengthening of intellectual property rights in order to maintain the creative incentives that prevailed before Napster. We are, after all, experiencing a digital renaissance.

When representatives of creative industries lobby for stricter copyright protections, are their arguments sound? How should we assess the health of their industries?

The creative industries are really good at what they do, particularly in the US. And during the digital era many creators and intermediaries have felt real pain. U.S. recorded music revenue fell by more than half in the decade after Napster. Moreover, users in many countries have a blithe disregard for intellectual property. When the industry points out these facts, they are telling the truth.

But the pain of a particular industry is not as relevant to public policy as its output. If the creative industries could no longer cover costs of creating new products and new creative activity dried up, then we would require changes in public policy to keep the consuming public happy.

When the industries go before Congress for legislative assistance to protect their revenues, however, it should be to secure a steady supply of good new products, not to protect their revenues and incomes for their own sakes.

We should assess the intellectual property regime according to whether we are seeing a steady and robust supply of new products that consumers find appealing. And that we are.

How does the old adage “Nobody knows anything” come into play in this new era of digitization?

New products in most industries typically fail. Nowhere is this more common than in the creative industries, where roughly 90 percent of new products fail to generate enough revenue to cover their costs. This “hit or miss” aspect to creative production is what makes an explosion in new products so potentially valuable.

To see this, suppose that everyone knew everything, meaning that intermediaries could accurately predict which new products would find favor with consumers. Then a cost reduction giving rise to new products would bring forth the products that were not sufficiently promising to be worth delivering before. There would be some benefit to consumers, but it would be small.

Contrast that to the real world, in which we can’t really predict which products will be good before we spend the money to test them with consumers. In that—our real—world, a tripling in the number of new products brings with it lots of unsuccessful ones as well as some really successful ones that consumers find valuable.
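That logic can be made concrete with a small simulation of my own (a toy under stated assumptions, not data from the book): treat each product’s appeal as an unpredictable draw from a skewed distribution, so that nobody can tell the hits from the misses in advance. Tripling the number of draws mostly adds duds, but it also multiplies the number of products that clear a high bar and raises the appeal of the very best one, which is where the consumer benefit comes from.

    import random

    def simulate(n_products, hit_threshold=3.0, seed=0):
        """Draw unpredictable 'appeal' values for n products; report how many
        clear a high bar and how good the single best one is."""
        rng = random.Random(seed)
        appeal = [rng.lognormvariate(0, 1) for _ in range(n_products)]
        hits = sum(1 for a in appeal if a >= hit_threshold)
        return hits, max(appeal)

    for n in (100, 300):
        hits, best = simulate(n)
        print(f"{n:4d} products -> {hits:3d} hits above the bar, best appeal {best:.1f}")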

What are the potential pitfalls of digitization in the creative industries? What should we be wary of?

Two things come to mind. First, there is so far no evidence that the undermining of creative nurture by the traditional intermediaries—the publishers, movie studios, and record labels, for example—has undermined the quality of new products, at least in the sense of being appreciated by contemporary fans and consumers. But it will be interesting to see whether the fruits of this era are still appreciated 25, 50, or 100 years from now.

Second, the new digital economy is increasingly dominated by a small number of players. These include Google and Facebook, Apple and Amazon, and Netflix and Spotify. So far, most of what these players have done has helped to deliver the renaissance. But many of these players could become influential gatekeepers, with outsized influence on what succeeds. I don’t see any evidence of this yet, but it’s something we should be watchful about. What makes things worse is that most of these players keep their data secret, so it’s really hard to know what’s happening to the consumption of particular products. This, in turn, makes it hard to keep tabs on the health of the industries. The digital renaissance can continue only as long as a large swath of creators can continue to create, and audiences can discover the new works.

Joel Waldfogel holds the Frederick R. Kappel Chair at the University of Minnesota’s Carlson School of Management. His previous books include Scroogenomics: Why You Shouldn’t Buy Presents for the Holidays. He lives in Minneapolis.

Adrienne Mayor on Gods and Robots

The first robot to walk the earth was a bronze giant called Talos. This wondrous machine was created not by MIT Robotics Lab, but by Hephaestus, the Greek god of invention. More than 2,500 years ago, long before medieval automata, and centuries before technology made self-moving devices possible, Greek mythology was exploring ideas about creating artificial life—and grappling with still-unresolved ethical concerns about biotechne, “life through craft.” In the compelling, richly illustrated Gods and Robots, Adrienne Mayor tells the fascinating story of how ancient Greek, Roman, Indian, and Chinese myths envisioned artificial life, automata, self-moving devices, and human enhancements—and how these visions relate to and reflect the ancient invention of real animated machines.

Mayor answered some questions for us about robots, mythology, and her research.

Who first imagined the concept of robots? 

Most historians of science trace the first automatons to the Middle Ages. But I wondered, was it possible that ideas about creating artificial life were thinkable long before technology made such enterprises possible? Remarkably, as early as the time of Homer, ancient Greek myths were envisioning how to imitate, augment, and surpass nature, by means of biotechne, “life through craft”—what we now call biotechnology. Beings described as fabricated, “made, not born,” appeared in myths about Jason and the Argonauts, the sorceress Medea, the bronze robot Talos, the ingenious craftsman Daedalus, the fire-bringer Prometheus, and Pandora, the female android created by Hephaestus, god of invention. These vivid stories were ancient thought experiments set in an alternate world where technology was marvelously advanced.

What makes these ancient stories so compelling today?

Time-traveling back into the past more than two millennia to study what are essentially some of the first-ever science fiction stories by a pre-industrial society may seem strange. But I think the sophistication and relevance of these ancient dreams of technology might help us understand the timeless link between imagination and science. Some of the imaginary self-propelled devices and lifelike androids in the myths foreshadow some of today’s technological inventions of driverless cars, automated machines, and humanoid automatons. There are even mythic versions of artificial intelligence and ancient parallels to the modern “Uncanny Valley” effect—that eerie sensation when people encounter hyper-realistic robots. Notably, some of the doubts about creating artificial life expressed in antiquity anticipate our own practical and ethical dilemmas about AI and playing god by improving on nature. Taken together, the ancient narratives really represent a kind of “Mythology for the Age of Artificial Intelligence.”

Why were these perceptive myths about artificial life overlooked until now?

Historians of science tend to assume that automatons featured in classical myths were simply inert matter brought to life by a fiat or a magical spell, like Adam and Eve and Pygmalion’s ivory statue of Galatea. But many of the self-moving devices and automata described in myths were not merely lifeless things animated by magic or divine command. My book focuses on the myths of androids and automatons visualized as products of technology, designed and constructed with the same materials and methods that human artisans used to make tools, structures, and statues in antiquity, but with awesome results beyond what was technologically possible at the time. Some philosophers of science claim it was impossible in antiquity to imagine technology beyond what already existed, until mechanics was formalized as a discipline. But imagination has always driven innovation. Where science fiction goes, technology often follows. The last chapter of Gods and Robots traces the relationship between classical myths and real historical automata that began to proliferate in the Hellenistic era, when Alexandria in Egypt became the hub of learning and innovation and engineers designed self-moving machines and lifelike animated statues.

Modern sci-fi movies pop up in several chapters. How do they relate to ancient myths?

Some 2,500 years before movies were invented, ancient Greek vase painters illustrated popular stories of the bronze robot warrior Talos, the techno-wizard Medea, and the fembot Pandora dispatched to earth on an evil mission, in ways that seem very “cinematic.” Echoes of those classical myths reverberate in cult films like Metropolis (1927), Frankenstein (1931), Jason and the Argonauts (1963), Blade Runner (1982 and 2017), and recent sci-fi movies and TV shows such as Ex Machina and Westworld.

Movies and myths about imagined technology are cultural dreams. Like contemporary science fiction tales, the myths show how the power of imagination allows humans to ponder how artificial life might be created—if only one possessed sublime technology and genius. We can see “futuristic” thinking in the myths’ automated machines and tools, self-driving chariots, self-navigating ships, metal robots powered by special fluids, and AI servants made of gold and silver. Another similarity to sci-fi tales is that the myths warn about disturbing consequences of creating artificial life.

There are 75 extraordinary illustrations in Gods and Robots. Any ancient images that surprised you?

A small museum in Italy has an amazing Greek vase painted in the fifth century BC. It shows Medea and Jason using a tool to destroy the formidable bronze robot Talos. Here is proof that more than 2,500 years ago, an automaton was not only imagined as a machine with internal workings, but that its destruction required technology. Pandora appears on a magnificent amphora from the same time. The artist portrays Pandora as a life-sized doll about to be wound up, standing stiffly with a weird grin. The vase’s decorative border design is made up of Hephaestus’s tools to underscore her constructed nature. Another astonishing find was a group of carved cameos depicting the myth of Prometheus creating the first human beings. Instead of merely molding clay figures, Prometheus is shown using different tools to build the first human starting from the inside out, with the skeleton as the framework.

Have you come across any unexpected legends about automatons?

A little-known legend translated from Sanskrit claims that after his death, Buddha’s bodily remains were guarded by robotic warriors in a secret underground chamber in India.

Is there anything about ancient automatons that you would like to know more about?

It would be fascinating to gather automaton traditions from India, China, and Japan, to compare Eastern and Western perspectives on artificial life, AI, and robots.

Adrienne Mayor is the author, most recently, of The Amazons: Lives and Legends of Warrior Women across the Ancient World and The Poison King: The Life and Legend of Mithradates, Rome’s Deadliest Enemy, which was a finalist for the National Book Award (both Princeton). She is a research scholar in classics and the history of science at Stanford University and lives in Palo Alto, California.

PUP at New Scientist Live in London

New Scientist Live is an annual festival in London that attracts over 30,000 visitors across four days. Each year a huge hall in the ExCel Centre is transformed into a hub for all things science and technology, with talks running all day across six stages from some of the world's greatest minds in the field.

The festival is a great opportunity for Princeton University Press to get to know the readers of our science titles and see what they're engaged with at the moment. It's always surprising and humbling to see so many younger readers at New Scientist Live so invested in what we produce and in on-trend scientific topics. It reminded us that, although New Scientist Live showcases some of the greatest minds of our time, it is also the stomping ground for the minds of tomorrow!

Rees signs his first-ever copy of On the Future

This was Princeton University Press's second year at the festival and our best yet. We came armed with postcards, tote bags, lots of catalogues and copious amounts of badges, which were a hit with the visiting school groups. It was also a great year for book sales on our stand – we topped last year's sales by 11%, with The Little Book of Black Holes and The Little Book of String Theory among our bestsellers.

One of the highlights of New Scientist Live, as far as Princeton University Press was concerned, was a wonderful talk by Martin Rees, the Astronomer Royal and a member of the House of Lords, whose book On the Future: Prospects for Humanity is published imminently. Lord Rees spoke to a rapt crowd, many of whom had to stand at the back or sit on the floor, such was the talk's popularity. Rees discussed three themes from his book: biotechnology, AI, and space travel. We found the whole talk really interesting, and were particularly fascinated by Rees's forecast regarding the future of the human body in space. As Rees put it in his book:

The space environment is inherently hostile for humans. So, because they will be ill-adapted to their new habitat, the pioneer explorers will have a more compelling incentive than those of us on Earth to redesign themselves. They’ll harness the super-powerful genetic and cyborg technologies that will be developed in coming decades. These techniques will be, one hopes, heavily regulated on Earth, on prudential and ethical grounds, but ‘settlers’ on Mars will be far beyond the clutches of the regulators. We should wish them good luck in modifying their progeny to adapt to alien environments. This might be the first step towards divergence into a new species. Genetic modification would be supplemented by cyborg technology—indeed there may be a transition to fully inorganic intelligences. So, it’s these space-faring adventurers, not those of us comfortably adapted to life on Earth, who will spearhead the post human era.

Speaking about Stephen Hawking: also on stage was our author, Stuart Clark.

Martin Rees also participated in a panel event on the legacy of the late Professor Stephen Hawking. He was joined by Jennifer Ouellette, Marika Taylor, Tom Shakespeare, and our very own Stuart Clark. They discussed Hawking's work in furthering our understanding of space, his role in closing the gap between different scientific communities, and his advocacy for the disabled community. Martin Rees shared memories from his time with Hawking at Cambridge, and Marika Taylor recalled Hawking's love of nightclubs and salsa bars. It was a very moving occasion.

After a successful 2018 at New Scientist Live, we are looking forward to exhibiting at next year's festival and all the exciting new ideas it will put on show.

 

Brian Kernighan on what we all need to know about computers

Laptops, tablets, cell phones, and smart watches: computers are inescapable. But even more are invisible, like those in appliances, cars, medical equipment, transportation systems, power grids, and weapons. We never see the myriad computers that quietly collect, share, and sometimes leak vast amounts of personal data about us, and often don't consider the extent to which governments and companies increasingly monitor what we do. In Understanding the Digital World, Brian W. Kernighan explains, in clear terms, not only how computers and programming work, but also how computers influence our daily lives. Recently, Kernighan answered some questions about his new book.

Who is this book for? What kind of people are most likely to be interested?

BK: It's a cliché, but it really is aimed at the proverbial "educated layman." Everyone uses computers and phones for managing their lives and communicating with other people. So the book is for them. I do think that people who have some technical background will enjoy it, but they will also find that it helps their less technical friends and family understand.

What’s the basic message of the book?

BK: Computers—laptops, desktops, tablets, phones, gadgets—are all around us. The Internet lets our computers communicate with us and with other computers all over the world. And there are billions of computers in infrastructure that we rely on without even realizing its existence. Computers and communications systems have changed our lives dramatically in the past couple of decades, and will continue to do so. So anyone who hopes to be at least somewhat informed ought to understand the basics of how such things work.

One major concern has been the enormous increase in surveillance and a corresponding reduction in our personal privacy. We are under continuous monitoring by government agencies like the NSA in the United States and similar ones in other countries. At the same time, commercial interests track everything we do online and with our phones. Some of this is acceptable, but in my opinion, it's gone way too far. It's vital that we understand better what is being done and how to reduce the tracking and spying. The more we understand about how these systems work, the more we can defend ourselves, while still taking advantage of the many benefits they provide. For example, it's quite possible to explore interesting and useful web sites without being continuously tracked. You don't have to reveal everything about yourself to social networks. But you have to know something about how to set up some defenses.

More generally, I'm trying to help the reader to reach a better than superficial understanding of how computers work, what software is and how it's created, and how the Internet and the Web operate. Going just a little deeper into these is totally within the grasp of anyone. The more you know, the better off you will be; knowing even a little about these topics will put you ahead of the large majority of people, and will protect you from any number of foolish behaviors.

Can you give us an example of how to defend ourselves against tracking by web sites?

BK: Whenever you visit a web site, a record is made of your visit, often by dozens of systems that are collecting information that can be used for targeted advertising. It’s easy to reduce this kind of tracking by turning off third-party cookies and by installing some ad-blocking software. You can still use the primary site, but you don’t give away much if anything to the trackers, so the spread of information about you is more limited.

If I don’t care if companies know what sites I visit, why should I be worried?

BK: “I’ve got nothing to hide,” spoken by an individual, or “If you have nothing to hide, you have nothing to fear,” offered by a government, are pernicious ideas. They frame the discussion in such a way as to concede the point at the beginning. Of course you have nothing to hide. If that’s true, would you mind showing me your tax returns? How did you vote in the last election? What’s your salary? Could I have your social security number? Could you tell me who you’ve called in the past year? Of course not—most of your life is no one else’s business.

What’s the one thing that you would advise everyone to do right now to improve their online privacy and security?

BK: Just one thing? Learn more about how your computer and your phone work, how the Internet works, and how to use all of them wisely. But I would add some specific recommendations, all of which are easy and worthwhile. First, in your browser, install defensive extensions like AdBlock and Ghostery, and turn off third-party cookies. This will take you less than ten minutes and will cut your exposure by at least a factor of ten. Second, make sure that your computer is backed up all the time; this protects you against hardware failure and your own mistakes (both of which are not uncommon), and also against ransomware (though that is much less a risk if you are alert and have turned on your defenses). Third, use different passwords for different sites; that way, if one account is compromised, others will not be. And don't use your Facebook or Google account to log in to other sites; that increases your vulnerability and gives away information about you for minor convenience. Finally, be very wary about clicking on links in email that have even the faintest hint of something wrong. Phishing attacks are one of the most common ways that accounts are compromised and identities stolen.

Brian W. Kernighan is a professor in the Department of Computer Science at Princeton University. He is the coauthor of ten other books, including the computing classic The C Programming Language (Prentice Hall). He is the author of Understanding the Digital World: What You Need to Know about Computers, the Internet, Privacy, and Security.

Cipher challenge #3 from Joshua Holden: Binary ciphers

The Mathematics of Secrets by Joshua Holden takes readers on a tour of the mathematics behind cryptography. Most books about cryptography are organized historically, or around how codes and ciphers have been used in government and military intelligence or bank transactions. Holden instead focuses on how mathematical principles underpin the ways that different codes and ciphers operate. Discussing the majority of ancient and modern ciphers currently known, The Mathematics of Secrets sheds light on both code making and code breaking. Over the next few weeks, we’ll be running a series of cipher challenges from Joshua Holden. The last post was on subliminal channels. Today’s is on binary ciphers:

Binary numerals, as most people know, represent numbers using only the digits 0 and 1.  They are very common in modern ciphers due to their use in computers, and they frequently represent letters of the alphabet.  A numeral like 10010 could represent the (1 · 2⁴ + 0 · 2³ + 0 · 2² + 1 · 2 + 0)th = 18th letter of the alphabet, or r.  So the entire alphabet would be:

 plaintext:   a     b     c     d     e     f     g     h     i     j
ciphertext: 00001 00010 00011 00100 00101 00110 00111 01000 01001 01010

 plaintext:   k     l     m     n     o     p     q     r     s     t
ciphertext: 01011 01100 01101 01110 01111 10000 10001 10010 10011 10100

 plaintext:   u     v     w     x     y     z
ciphertext: 10101 10110 10111 11000 11001 11010
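
As a quick aside (my own illustration, not part of Holden's post), this table can be generated in a few lines of Python by writing each letter's position in the alphabet as a five-digit binary numeral:

    # Print each letter a-z alongside its position (1-26) as a 5-digit binary numeral.
    for position, letter in enumerate("abcdefghijklmnopqrstuvwxyz", start=1):
        print(letter, format(position, "05b"))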

The first use of a binary numeral system in cryptography, however, was well before the advent of digital computers. Sir Francis Bacon alluded to this cipher in 1605 in his work Of the Proficience and Advancement of Learning, Divine and Humane and published it in 1623 in the enlarged Latin version De Augmentis Scientiarum. In this system not only the meaning but the very existence of the message is hidden in an innocuous "covertext." We will give a modern English example.

Suppose we want to encrypt the word “not” into the covertext “I wrote Shakespeare.” First convert the plaintext into binary numerals:

   plaintext:   n      o     t
  ciphertext: 01110  01111 10100

Then stick the digits together into a string:

    011100111110100

Now we need what Bacon called a "biformed alphabet," that is, one where each letter can have a "0-form" and a "1-form." We will use roman letters for our 0-form and italic for our 1-form. Then for each letter of the covertext, if the corresponding digit in the ciphertext is 0, use the 0-form, and if the digit is 1 use the 1-form:

    0 11100 111110100xx
    I wrote Shakespeare.

Any leftover letters can be ignored, and we leave in spaces and punctuation to make the covertext look more realistic. Of course, it still looks odd with two different typefaces—Bacon’s examples were more subtle, although it’s a tricky business to get two alphabets that are similar enough to fool the casual observer but distinct enough to allow for accurate decryption.
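
Here is a rough Python sketch of the same scheme (my own illustration, not Holden's code). Since plain program output cannot mix two typefaces, it uses lowercase letters as the 0-form and capital letters as the 1-form:

    def to_bits(word):
        # Spell a word in the 5-digit binary alphabet shown above (a = 00001, ..., z = 11010).
        return "".join(format(ord(c) - ord("a") + 1, "05b") for c in word.lower())

    def bacon_encrypt(plaintext, covertext):
        # Hide the plaintext's digits in the covertext:
        # lowercase letter = 0-form, capital letter = 1-form.
        bits = to_bits(plaintext)
        result, i = [], 0
        for ch in covertext:
            if ch.isalpha() and i < len(bits):
                result.append(ch.upper() if bits[i] == "1" else ch.lower())
                i += 1
            else:
                result.append(ch)  # spaces, punctuation, and leftover letters pass through unchanged
        return "".join(result)

    print(bacon_encrypt("not", "I wrote Shakespeare."))  # prints: i WROte SHAKEsPeare.

Reading the letter cases back off the covertext and regrouping the digits five at a time recovers "not".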

Ciphers with binary numerals were reinvented many years later for use with the telegraph and then the printing telegraph, or teletypewriter. The first of these were technically not cryptographic since they were intended for convenience rather than secrecy. We could call them nonsecret ciphers, although for historical reasons they are usually called codes or sometimes encodings. The most well-known nonsecret encoding is probably the Morse code used for telegraphs and early radio, although Morse code does not use binary numerals. In 1833, Gauss, whom we met in Chapter 1, and the physicist Wilhelm Weber invented probably the first telegraph code, using essentially the same system of 5 binary digits as Bacon. Jean-Maurice-Émile Baudot used the same idea for his Baudot code when he invented his teletypewriter system in 1874. And the Baudot code is the one that Gilbert S. Vernam had in front of him in 1917 when his team at AT&T was asked to investigate the security of teletypewriter communications.

Vernam realized that he could take the string of binary digits produced by the Baudot code and encrypt it by combining each digit from the plaintext with a corresponding digit from the key according to the rules:

0 ⊕ 0 = 0
0 ⊕ 1 = 1
1 ⊕ 0 = 1
1 ⊕ 1 = 0

For example, the digits 10010, which ordinarily represent 18, and the digits 01110, which ordinarily represent 14, would be combined to get:

  1 0 0 1 0
⊕ 0 1 1 1 0
-----------
  1 1 1 0 0

This gives 11100, which ordinarily represents 28—not the usual sum of 18 and 14.

Some of the systems that AT&T was using were equipped to automatically send messages using a paper tape, which could be punched with holes in 5 columns—a hole indicated a 1 in the Baudot code and no hole indicated a 0. Vernam configured the teletypewriter to combine each digit represented by the plaintext tape with the corresponding digit from a second tape punched with key characters. The resulting ciphertext is sent over the telegraph lines as usual.

At the other end, Bob feeds an identical copy of the key tape through the same circuitry. Notice that doing the same operation twice gives you back the original value for each rule:

(0 ⊕ 0) ⊕ 0 = 0 ⊕ 0 = 0
(0 ⊕ 1) ⊕ 1 = 1 ⊕ 1 = 0
(1 ⊕ 0) ⊕ 0 = 1 ⊕ 0 = 1
(1 ⊕ 1) ⊕ 1 = 0 ⊕ 1 = 1

Thus the same operation at Bob’s end cancels out the key, and the teletypewriter can print the plaintext. Vernam’s invention and its further developments became extremely important in modern ciphers such as the ones in Sections 4.3 and 5.2 of The Mathematics of Secrets.
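
A short Python sketch (again mine, not from the book) shows the property Vernam exploited: combining with the key once encrypts, and combining with the same key again cancels it out:

    def xor_bits(a, b):
        # Combine two equal-length strings of 0s and 1s digit by digit using the rules above.
        return "".join("0" if x == y else "1" for x, y in zip(a, b))

    plaintext = "10010"   # 18 in the 5-digit alphabet, i.e. r
    key       = "01110"   # 14, i.e. n

    ciphertext = xor_bits(plaintext, key)
    print(ciphertext)                  # 11100
    print(xor_bits(ciphertext, key))   # 10010 -- the key cancels, recovering the plaintext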

But let’s finish this post by going back to Bacon’s cipher.  I’ve changed it up a little — the covertext below is made up of two different kinds of words, not two different kinds of letters.  Can you figure out the two different kinds and decipher the hidden message?

It’s very important always to understand that students and examiners of cryptography are often confused in considering our Francis Bacon and another Bacon: esteemed Roger. It is easy to address even issues as evidently confusing as one of this nature. It becomes clear when you observe they lived different eras.

Answer to Cipher Challenge #2: Subliminal Channels

Given the hints, a good first assumption is that the ciphertext numbers have to be combined in such a way as to get rid of all of the fractions and give a whole number between 1 and 52.  If you look carefully, you’ll see that 1/5 is always paired with 3/5, 2/5 with 1/5, 3/5 with 4/5, and 4/5 with 2/5.  In each case, twice the first one plus the second one gives you a whole number:

2 × (1/5) + 3/5 = 5/5 = 1
2 × (2/5) + 1/5 = 5/5 = 1
2 × (3/5) + 4/5 = 10/5 = 2
2 × (4/5) + 2/5 = 10/5 = 2

Also, twice the second one minus the first one gives you a whole number:

2 × (3/5) – 1/5 = 5/5 = 1
2 × (1/5) – 2/5 = 0/5 = 0
2 × (4/5) – 3/5 = 5/5 = 1
2 × (2/5) – 4/5 = 0/5 = 0

Applying

P1 = 2 × C1 + C2

to each pair of ciphertext numbers (C1, C2) gives the first plaintext:

39 31 45 45 27 33 31 40 47 39 28 31 44 41
 m  e  s  s  a  g  e  n  u  m  b  e  r  o
40 31 35 45 46 34 31 39 31 30 35 47 39
 n  e  i  s  t  h  e  m  e  d  i  u  m

And applying

P2 = 2 × C2 – C1

to the ciphertext gives the second plaintext:

20  8  5 19  5  3 15 14  4 16 12  1  9 14 
 t  h  e  s  e  c  o  n  d  p  l  a  i  n
20  5 24 20  9 19  1 20 12  1 18  7  5
 t  e  x  t  i  s  a  t  l  a  r  g  e

To deduce the encryption process, we have to solve our two equations for C1 and C2.  Subtracting the second equation from twice the first gives:

2 × P1 – P2 = 2 × (2 × C1 + C2) – (2 × C2 – C1) = 5 × C1

so

C1 = (2 × P1 – P2) / 5

Adding the first equation to twice the second gives:

P1 + 2 × P2 = (2 × C1 + C2) + 2 × (2 × C2 – C1) = 5 × C2

so

C2 = (P1 + 2 × P2) / 5
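
For readers who want to check the algebra, here is a brief Python sketch (my own, not Holden's) that rebuilds the ciphertext pairs from the two plaintext number streams above and verifies that both decryption rules recover them:

    from fractions import Fraction

    # The two plaintext number streams printed above.
    p1 = [39, 31, 45, 45, 27, 33, 31, 40, 47, 39, 28, 31, 44, 41,
          40, 31, 35, 45, 46, 34, 31, 39, 31, 30, 35, 47, 39]
    p2 = [20, 8, 5, 19, 5, 3, 15, 14, 4, 16, 12, 1, 9, 14,
          20, 5, 24, 20, 9, 19, 1, 20, 12, 1, 18, 7, 5]

    for a, b in zip(p1, p2):
        c1 = Fraction(2 * a - b, 5)   # C1 = (2*P1 - P2) / 5
        c2 = Fraction(a + 2 * b, 5)   # C2 = (P1 + 2*P2) / 5
        assert 2 * c1 + c2 == a       # first rule recovers P1
        assert 2 * c2 - c1 == b       # second rule recovers P2

    print("all", len(p1), "ciphertext pairs check out")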

Joshua Holden is professor of mathematics at the Rose-Hulman Institute of Technology.

Joel Mokyr: How the modern economy was born

Before 1800, the majority of people lived on the verge of subsistence. In A Culture of Growth: The Origins of the Modern Economy, esteemed historian Joel Mokyr explains why in the industrialized world such a standard of living has grown increasingly uncommon. Mokyr offers a groundbreaking view on a culture of growth specific to early modern Europe, showing how the European Enlightenment laid the foundations for the scientific advances and pioneering inventions that would instigate explosive technological and economic development. Recently, Mokyr took some time to answer questions about the book.


How would you sum up the book’s main points?

JM: Before 1800 the overwhelming majority of humankind was poor; today in the industrialized world, almost nobody lives at the verge of subsistence, and a majority of people in the world enjoy living standards that would have been unimaginable a few centuries ago. My book asks how and why that happened. The question of this Great Enrichment is central to economic history; the Nobel Prize-winning economist Robert Lucas once wrote that when we start thinking about it, it is hard to think of anything else.

Do we know how and where this started? 

JM: Yes, it started in Western Europe (primarily in Britain) in the last third of the eighteenth century through a set of technological innovations we now call the Industrial Revolution. From there it spread to the four corners of the world, although the success rate varied from place to place, and often the new techniques had to be adapted to local circumstances.

How is this book different from other work looking at this event? 

JM: The literature looking at the question of why this happened has advanced three types of explanations: geographical (looking at resources and natural endowments), political-institutional (focusing on the State and economic policies), or purely economic, through prices and incomes. My book examines culture: what people believed and valued, and how they learned to understand natural phenomena and regularities they could harness for their material improvement.

Whose culture mattered most here? 

JM: Good question! Technological progress and the growth of modern science were driven first and foremost by a small educated elite of literate people who had been trained in medicine, mathematics and what they called “natural philosophy.” The culture of the large majority of people, who were as yet uneducated and mostly illiterate, mattered less in the early stages, but became increasingly important at a later stage when mass education became the norm.

So what was it about these intellectuals that mattered most? 

JM: In my earlier work, especially The Enlightened Economy (2009), I pointed to what I called "the Industrial Enlightenment" as the central change that prepared the ground for modern economic growth. In the new book, I explain the origins of the Industrial Enlightenment. At some point, say around 1700, the consensus of intellectuals in Europe had become that material progress (what we were later to call "economic growth") was not only desirable but possible, and that increasing what they called "useful knowledge" (science and technology) was the way to bring it about. These intellectuals then carried out that program through continuous advances in science that eventually found a myriad of economic applications.

How and why did this change happen? 

JM: That is the main question this book focuses on and tries to answer. It describes and analyzes the cultural changes in the decades between Columbus and Newton, during what is sometimes known as "early modern Europe." It was an age of tremendous cultural changes, above all of course the Reformation and the Scientific Revolution. Equally important was the emergence of what is known as "the Baconian Program," in which Francis Bacon and his followers formulated the principles of what later became the Industrial Enlightenment. The success of these thinkers in persuading others of the validity of their notions of progress and of the importance of a research agenda that reflected real economic needs is at the heart of the story of how the Industrial Enlightenment emerged.

So why did this take place in this period and in Europe, and not somewhere else? 

JM: Europe in this age enjoyed an unusual structure that allowed new and fresh ideas to flourish as never before. On the one hand, it was politically and religiously fragmented into units that fiercely competed with one another. This created a competitive market both for and among intellectuals that stimulated intellectual innovation. It was a market for ideas that worked well, and in it the Baconian Program was an idea that succeeded, in part because it was attractive to many actors, but also because it was marketed effectively by cultural entrepreneurs. At the same time, political fragmentation coexisted with a unified and transnational institution (known at the time as the Republic of Letters) that connected European intellectuals through networks of correspondence and publications and created a pan-European competitive market in which new ideas circulated all over the Continent. In this sense, early modern Europe had the "best of all possible worlds": all the advantages of diversity and fragmentation, and yet a unified intellectual community.

Of all the new ideas, which ones were the most important? 

JM: Many new ideas played a role in the intellectual transformations that eventually led to the waves of technological progress we associate with modern growth. One of the most important was the decline in the blind veneration of ancient learning that was the hallmark of many other cultures. Shaking off the paralyzing grip of past learning is one of the central developments that counted in the cultural evolution in this period. The “classical canon” of Ptolemy and Aristotle was overthrown by rebels such as Copernicus and Galileo, and over time the intellectuals of this age became more assertive in their belief that they could outdo classical learning and that many of the conventional beliefs that had ruled the world of intellectuals in astronomy, medicine, and other fields were demonstrably wrong. Evidence and logic replaced ancient authority.

Was the success of the new ideas a foregone conclusion? 

JM: Not at all: there was fierce resistance to intellectual innovation by a variety of conservative powers, both religious and political. Many of the most original and creative people were persecuted. But in the end resistance failed, in large part because both people and books — and hence ideas — could move around in Europe to more liberal areas where they were more welcome.

Could an Industrial Enlightenment not have happened elsewhere, for example in China? 

JM: The book deals at length with the intellectual development of China. In many ways, China's economy in 1500 was as advanced and sophisticated as Europe's. But in China the kind of competitive pluralism and diversity that was the hallmark of Europe was absent, and even though we see attempts to introduce more progressive thinking in China, they never succeeded in overthrowing the conservative vested interests that controlled the world of intellectuals, above all the mandarin bureaucracy. Instead of the explosive growth seen in Europe, Chinese science and technology stagnated.

Does the book have any implications for our own time? 

JM: By focusing on the social and economic mechanisms that stimulated and encouraged technological innovation in the past, my book points to the kind of factors that will ensure future technological creativity. First and foremost, innovation requires the correct incentives. Intellectuals on the whole do not require vast riches, but they do need some measure of economic security and the opportunity to do their research in an environment of intellectual freedom in which successful innovation is respected and rewarded. Second, the freedom to innovate thrives in environments that are internationally competitive: just as much of the innovation of earlier times emerged from the rivalry between England, France, Spain, and the United Provinces, in the modern era the global competition between the United States, the EU, China, and so on will ensure continuous innovation. International competition and mobility ensure the intellectual freedom needed to propose new ideas. Finally, global institutions that share and distribute knowledge, as well as coordinate and govern intellectual communities of scientists and innovators across national boundaries and cultural divides, are critical for continued technological progress.

Joel Mokyr is the Robert H. Strotz Professor of Arts and Sciences and professor of economics and history at Northwestern University, and Sackler Professor at the Eitan Berglas School of Economics at the University of Tel Aviv, Israel. He is the recipient of the Heineken Prize for History and the International Balzan Prize for Economic History. Mokyr's other works include The Enlightened Economy and The Gifts of Athena: Historical Origins of the Knowledge Economy. His most recent book is A Culture of Growth: The Origins of the Modern Economy.

Katharine Dow: Can surrogacy ever escape the taint of global exploitation?

Surrogate motherhood has a bad rep, as a murky business far removed from everyday experience – especially when it comes to prospective parents from the West procuring the gestational services of less privileged women in the global South. So while middle-class 30- and 40-somethings swap IVF anecdotes over the dinner table, and their younger female colleagues are encouraged by 'hip' employers to freeze their eggs as an insurance policy against both time and nature, surrogacy continues to induce a great deal of moral handwringing.

The Kim Cotton case in 1985 was the first attempt to arrange a commercial surrogacy agreement in the United Kingdom. It set the tone for what was to come. Cotton was paid £6,500 to have a baby for an anonymous Swedish couple, and her story provoked sensational press-fuelled panic. British legislators, too, saw surrogacy as likely to lead to exploitation, with poorer women coerced into acting as surrogates out of financial need, and with intended parents taken advantage of by unscrupulous surrogacy brokers. Their action was swift: within just months of the Cotton story breaking, a law was passed banning for-profit surrogacy in the UK.

With the growth of an international surrogacy industry over the past two decades, worries over surrogacy’s fundamentally exploitative character have only intensified. Worst-case scenarios such as the Baby Gammy case in 2014, involving an Australian couple and a Thai surrogate, suggest that surrogacy frequently is exploitative. But that’s less because paying someone to carry and bear a child on your behalf is inherently usurious than because the transaction takes place in a deeply unequal world. The Baby Gammy case was complicated by other unsavoury factors, since the child, born with Down’s Syndrome, seemed to be rejected by his intended parents because of his condition. Then it turned out that the intended father had a previous conviction for child sex offences, which rather overshadowed the potential exploitation experienced by Gammy’s surrogate – and now de facto – mother.

I am not arguing for a laissez-faire approach to regulating surrogacy, but for thinking more deeply about how surrogacy reflects the context in which it takes place.

We need to step back and think critically about what makes people so driven to have a biogenetically related child that they are prepared to procure the intimate bodily capacity of another, typically less privileged, person to achieve that. We should also listen to surrogates, and try to understand why they might judge surrogacy as their best option. Intended parents are not always uncaring nabobs, and surrogate mothers are not just naïve victims; but while the power dynamic between them is decidedly skewed, each is subject to particular cultural expectations, moral obligations and familial pressures.

As for the larger context, we increasingly outsource even the most intimate tasks to those whose labour is cheap, readily available and less regulated. If we think of surrogacy as a form of work, it doesn’t look that different from many other jobs in our increasingly casualised and precarious global economic context, like selling bodily substances and services for clinical trials, biomedical research or product testing, or working as domestic staff and carers.

And surrogacy is on the rise. Both in the UK and in the United States, where some states allow commercial surrogacy and command the highest fees in the world, increasing numbers of would-be parents are turning to the international surrogacy industry: 95 per cent of the 2,000 surrogate births to UK intended parents each year occur overseas. With the age at which women have their first child increasing, more women are finding it difficult to conceive; and there's now greater access to fertility treatments for same-sex couples and single people. In addition, surrogacy has become the option of choice for gay couples, transgender people, and single men wanting a biogenetically related child.

For me, as someone who has studied surrogacy, the practice is problematic because it reveals some of our most taken-for-granted assumptions about the nature of family. It also tells us much about work, gender, and how the two are connected. This is why it is so challenging.

At a time when parent-child relationships often appear to be one of the few remaining havens in an increasingly heartless world, surrogacy suggests that there might not be a straightforward relationship between women’s reproductive biology, their capacity to produce children, and their desire to nurture. The usual debates that focus simply on whether or not surrogacy is exploitative sidestep some of these uncomfortable truths, and make it difficult to ask more complicated questions about the practice.

There is a parallel here with abortion debates. Trying to define and defend the sanctity of life is important, but this also obscures highly problematic issues, such as the gendered expectation that women should look after children; the fact that women typically bear responsibility for contraception (and the disproportionate consequences of not using it); the prevalence of non-consensual sex; and the pressure on women to produce children to meet familial obligations.

Surrogacy is a technology. And like any other technology we should not attribute to it magical properties that conceal its anthropogenic – that is, human-made – character. It's all too easy to blame surrogacy or the specific individuals who participate in it rather than to ask why surrogacy might make sense as a way of having children at all. We should give credit to intended parents and surrogate mothers for having thought deeply about their decisions, and we should not hold them individually responsible for surrogacy's ills.

Katharine Dow is a research associate in the Reproductive Sociology Research Group at the University of Cambridge. She is the author of Making a Good Life.

Want to join the discussion or follow us on Aeon? Head over to our partnership page.

This article was originally published at Aeon and has been republished under Creative Commons.

Digital Keyword: Culture

This post appears concurrently at Culture Digitally.

Culture is a keyword among keywords for Raymond Williams, who contributed to the founding of cultural studies in the 1960s and 1970s. It is among the most common ways to talk about how we talk. In the essay below, one of Williams’ most careful readers, Ted Striphas, offers a sensitive update to Williams and a wide-ranging intellectual history, describing how culture has coevolved with the digital turn since the end of World War II. No longer an antithesis to technology, culture has recently interpenetrated with the computational (e.g., digital humanities, culturomics, and big-data-driven cultural studies).

In fascinating conversation with Fred Turner's prototype and Limor Shifman's meme, in what sense do aspects of modern-day digital culture challenge and confirm Striphas' observation about the dynamism and adaptability of culture—or, in Williams' famous phrase, "one of the two or three most complicated words in the English language"?

Ted Striphas: Culture

 

This comment may have been adapted from the introduction to Benjamin Peters’ Digital Keywords: A Vocabulary of Information Society and Culture. 25% discount code in 2016: P06197

Digital Keyword: “Algorithm”

This post appears concurrently at Culture Digitally.

Tarleton Gillespie demystifies the many uses of the recent keyword algorithm, on loan from Arabic. It is at once a trick of the trade for software programmers, a synecdoche standing in for entire informational systems and their stakeholders in popular discourse, a talisman used by those stakeholders for evoking cultural authority and avoiding blame (e.g., to blame “Facebook’s algorithm” can implicitly shift responsibility away from the company that designed it), and shorthand for the broader sociocultural shift toward, as Gillespie argues, “the insertion of procedure into human knowledge and social experience.”

In rich conversation with Ted Striphas’ essay on culture and Stephanie Ricker Schulte’s essay on personalization, Gillespie clarifies and multiplies the ways the current media environment extends a larger bureaucratic revolution central to modernity.

Tarleton Gillespie: Algorithm

 

This comment may have been adapted from the introduction to Benjamin Peters’ Digital Keywords: A Vocabulary of Information Society and Culture. 25% discount code in 2016: P06197