Ken Steiglitz on The Discrete Charm of the Machine

A few short decades ago, we were informed by the smooth signals of analog television and radio; we communicated using our analog telephones; and we even computed with analog computers. Today our world is digital, built with zeros and ones. Why did this revolution occur? The Discrete Charm of the Machine explains, in an engaging and accessible manner, the varied physical and logical reasons behind this radical transformation. Ken Steiglitz examines why our information technology, the lifeblood of our civilization, became digital, and challenges us to think about where its future trajectory may lead.

What is the aim of the book?

The subtitle: To explain why the world became digital. Barely two generations ago our information machines—radio, TV, computers, telephones, phonographs, cameras—were analog. Information was represented by smoothly varying waves. Today all these devices are digital. Information is represented by bits, zeros and ones. We trace the reasons for this radical change, some based on fundamental physical principles, others on ideas from communication theory and computer science. At the end we arrive at the present age of the internet, dominated by digital communication, and finally greet the arrival of androids—the logical end of our current pursuit of artificial intelligence. 

What role did war play in this transformation?

Sadly, World War II was a major impetus to many of the developments leading to the digital world, mainly because of the need for better methods for decrypting intercepted secret messages and more powerful computation for building the atomic bomb. The following Cold War just increased the pressure. Business applications of computers and then, of course, the personal computer opened the floodgates for the machines that are today never far from our fingertips.

How did you come to study this subject?

I lived it. As an electrical engineering undergraduate I used both analog and digital computers. My first summer job was programming one of the few digital computers in Manhattan at the time, the IBM 704. In graduate school I wrote my dissertation on the relationship between analog and digital signal processing and my research for the next twenty years or so concentrated on digital signal processing: using computers to process sound and images in digital form.

What physical theory played—and continues to play—a key role in the revolution?

Quantum mechanics, without a doubt. The theory explains the essential nature of noise, which is the natural enemy of analog information; it makes possible the shrinkage and speedup of our electronics (Moore’s law); and it introduces the possibility of an entirely new kind of computer, the quantum computer, which can transcend the power of today’s conventional machines. Quantum mechanics shows that many aspects of the world are essentially discrete in nature, and the change from the classical physics of the nineteenth century to the quantum mechanics of the twentieth is mirrored in the development of our digital information machines.

What mathematical theory plays a key role in understanding the limitations of computers?

Complexity theory and the idea of an intractable problem, as developed by computer scientists. This theme is explored in Part III, first in terms of analog computers, then using Alan Turing's abstraction of digital computation, which we now call the Turing machine. This leads to the formulation of the most important open question of computer science: does P equal NP? If P equals NP, it would mean that any problem whose solutions can be checked fast can also be solved fast. This seems like asking a lot and, in fact, most computer scientists believe that P does not equal NP. Problems as hard as any in NP are called NP-complete. The point is that NP-complete problems, like the famous traveling salesman problem, seem to be intrinsically difficult, and cracking any one of them cracks them all. Their essential difficulty manifests itself, mysteriously, in many different ways in the analog and digital worlds, suggesting, perhaps, that there is an underlying physical law at work.
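To make the check-fast versus solve-fast distinction concrete, here is a minimal Python sketch (an editorial illustration, not from the book, with made-up distances): verifying that a proposed traveling-salesman tour beats a length bound takes only a quick scan, while the obvious way to find the best tour tries every ordering of the cities, which blows up factorially.

```python
# Illustration only: the "check" step is cheap, the "solve" step is brute force.
from itertools import permutations

def tour_length(tour, dist):
    """Total length of a closed tour, given a distance matrix."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def check(tour, dist, bound):
    """Fast: verify a proposed solution against a bound (polynomial time)."""
    return tour_length(tour, dist) <= bound

def solve(dist):
    """Slow: brute-force search over all tours (factorial time)."""
    cities = range(len(dist))
    return min(permutations(cities), key=lambda t: tour_length(t, dist))

# Four cities with symmetric, made-up distances.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]

best = solve(dist)                    # feasible only for tiny inputs
print(best, tour_length(best, dist))  # a shortest closed tour and its length
print(check(best, dist, bound=25))    # checking any candidate is quick: True
```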

What important open question about physics (not mathematics) speaks to the relative power of digital and analog computers?

The extended Church-Turing thesis states that any reasonable computer can be simulated efficiently by a Turing machine. Informally, it means that no computer, even if analog, is more powerful (in an appropriately defined way) than the bare-boned, step-by-step, one-tape Turing machine. The question is open, but many computer scientists believe it to be true. This line of reasoning leads to an important conclusion: if the extended Church-Turing thesis is true, and if P is not equal to NP (which is widely believed), then the digital computer is all we need—Nature is not hiding any computational magic in the analog world.

What does all this have to do with artificial intelligence (AI)?

The brain uses information in both analog and digital form, and some have even suggested that it uses quantum computing. So, the argument goes, perhaps the brain has some special powers that cannot be captured by ordinary computers.

What does philosopher David Chalmers call the hard problem?

We finally reach—in the last chapter—the question of whether the androids we are building will ultimately be conscious. Chalmers calls this the hard problem, and some, including myself, think it unanswerable. An affirmative answer would have real and important consequences, despite the seemingly esoteric nature of the question. If machines can be conscious, and presumably also capable of suffering, then we have a moral responsibility to protect them, and—to put it in human terms—bring them up right. I propose that we must give the coming androids the benefit of the doubt; we owe them the same loving care that we as parents bestow on our biological offspring.

Where do we go from here?

A funny thing happens on the way from chapter 1 to 12. I begin with the modest plan of describing, in the simplest way I can, the ideas behind the analog-to-digital revolution. We visit along the way some surprising tourist spots: the Antikythera mechanism, a 2,000-year-old analog computer built by the ancient Greeks; Jacquard's loom with its breakthrough stored program; Ada Lovelace's program for Babbage's hypothetical computer, predating Alan Turing by a century; and B. F. Skinner's pigeons trained in the manner of AI to be living smart bombs. We arrive at a collection of deep conjectures about the way the universe works and some challenging moral questions.

Ken Steiglitz is professor emeritus of computer science and senior scholar at Princeton University. His books include Combinatorial Optimization, A Digital Signal Processing Primer, and Snipers, Shills, and Sharks (Princeton). He lives in Princeton, New Jersey.

David Alan Grier: The Light of Computation

by David Alan Grier

When one figure steps into the light, others can be seen in the reflected glow. The movie Hidden Figures has brought a little light to the contributions of NASA's human computers. Women such as Katherine Goble Johnson and her colleagues of the West Area Computers supported the manned space program by doing hours of repetitive, detailed orbital calculations. These women were not the first mathematical workers to toil in the obscurity of organized scientific calculation. The history of organized computing groups can be traced back to the 18th century, when a French astronomer convinced three friends to help him calculate the date that Halley's comet would return to view. Like Johnson, most human computers received little recognition for their labors. For many, only their families appreciated the work that they did. For some, not even their closest relatives knew of their role in the scientific community.

My grandmother confessed her training as a human computer only at the very end of her life. At one dinner, she laid her fork on the table and expressed regret that she had never used calculus. Since none of us believed that she had gone to college, we dismissed the remark and moved the conversation in a different direction. Only after her passing did I find the college records that confirmed she had taken a degree in mathematics from the University of Michigan in 1921. The illumination from those records showed that she was not alone. Half of the twelve mathematics majors in her class were women. Five of those six had been employed as human computers or statistical clerks.

By 1921, organized human computing was fairly common in industrialized countries. The governments of the United States, Germany, France, Great Britain, Japan, and Russia supported groups that did calculations for nautical almanacs, national surveys, agricultural statistics, weapons testing, and weather prediction. The British Association for the Advancement of Science operated a computing group. So did the Harvard Observatory, Iowa State University, and Indiana University. One school, University College London, published a periodical for these groups, Tracts for Computers.

While many of these human computers were women, most were not. Computation was considered to be a form of clerical work, which was still a career dominated by men. However, human computers tended to be individuals who faced economic or social barriers to their careers. These barriers prevented them from becoming scientists or engineers in spite of their talents. In the book When Computers Were Human, I characterized them as “Blacks, women, Irish, Jews and the merely poor.” One of the most prominent computing groups of the 20th century, the Mathematical Tables Project, hired only the impoverished. It operated during the Great Depression and recruited its 450 computers from New York City’s unemployment rolls.

During its 10 years of operations, the Math Tables Project toiled in obscurity. Only a few members of the scientific community recognized its contributions. Hans Bethe asked the group to do the calculations for a paper that he was writing on the physics of the sun. The engineer Philip Morse brought problems from his colleagues at MIT. The pioneering computer scientist John von Neumann asked the group to test a new mathematical optimization technique after he was unable to test it on the new ENIAC computer. However, most scientists maintained a distance between themselves and the Mathematical Tables Project. One member of the Academy of Sciences explained his reservations about the Project with an argument that came to be known as the Computational Syllogism. Scientists, he argued, are successful people. The poor, he asserted, are not successful. Therefore, he concluded, the poor cannot be scientists and hence should not be employed in computation.

Like the human computers of NASA, the Mathematical Tables Project had a brief moment in the spotlight. In 1964, the leader of the Project, Gertrude Blanch, received a Federal Woman’s Award from President Lyndon Johnson for her contributions to the United States Government. Yet her light did not shine far enough to bring recognition to the 20 members of the Math Tables Project who published a book, later that year, on the methods of scientific computing. The volume became one of the best-selling scientific books in history. Nonetheless, few people knew that it was written by former human computers.

The attention to Katherine Goble Johnson is welcome because it reminds us that science is a community endeavor. When we recognize the authors of scientific articles, or applaud the distinguished men and women who receive Nobel Prizes (or, in the case of computer science, Turing Awards), we often fail to see the community members who were essential to the scientific work. At least in Hidden Figures, they receive a little of the reflected light.

David Alan Grier is the author of When Computers Were Human. He writes “Global Code” for Computer magazine and produces the podcast “How We Manage Stuff.” He can be reached at grier@gwu.edu.

Digital Keyword: Culture

This post appears concurrently at Culture Digitally.

Culture is a keyword among keywords for Raymond Williams, who contributed to the founding of cultural studies in the 1960s and 1970s. It is among the most common ways to talk about how we talk. In the essay below, one of Williams’ most careful readers, Ted Striphas, offers a sensitive update to Williams and a wide-ranging intellectual history, describing how culture has coevolved with the digital turn since the end of World War II. No longer an antithesis to technology, culture has recently interpenetrated with the computational (e.g., digital humanities, culturomics, and big-data-driven cultural studies).

In fascinating conversation with Fred Turner’s prototype and Limor Shifman’s meme, in what sense do aspects of modern-day digital culture challenge and confirm Striphas’ observation about the dynamism and adaptability of culture—or, in Williams’ famous phrase, “one of the two or three most complicated words in the English language”?

Ted Striphas: Culture

 

This comment may have been adapted from the introduction to Benjamin Peters’ Digital Keywords: A Vocabulary of Information Society and Culture. 25% discount code in 2016: P06197