Ken Steiglitz on The Discrete Charm of the Machine

A few short decades ago, we were informed by the smooth signals of analog television and radio; we communicated using our analog telephones; and we even computed with analog computers. Today our world is digital, built with zeros and ones. Why did this revolution occur? The Discrete Charm of the Machine explains, in an engaging and accessible manner, the varied physical and logical reasons behind this radical transformation. Ken Steiglitz examines why our information technology, the lifeblood of our civilization, became digital, and challenges us to think about where its future trajectory may lead.

What is the aim of the book?

The subtitle: To explain why the world became digital. Barely two generations ago our information machines—radio, TV, computers, telephones, phonographs, cameras—were analog. Information was represented by smoothly varying waves. Today all these devices are digital. Information is represented by bits, zeros and ones. We trace the reasons for this radical change, some based on fundamental physical principles, others on ideas from communication theory and computer science. At the end we arrive at the present age of the internet, dominated by digital communication, and finally greet the arrival of androids—the logical end of our current pursuit of artificial intelligence. 

What role did war play in this transformation?

Sadly, World War II was a major impetus to many of the developments leading to the digital world, mainly because of the need for better methods for decrypting intercepted secret messages and more powerful computation for building the atomic bomb. The following Cold War just increased the pressure. Business applications of computers and then, of course, the personal computer opened the floodgates for the machines that are today never far from our fingertips.

How did you come to study this subject?

I lived it. As an electrical engineering undergraduate I used both analog and digital computers. My first summer job was programming one of the few digital computers in Manhattan at the time, the IBM 704. In graduate school I wrote my dissertation on the relationship between analog and digital signal processing and my research for the next twenty years or so concentrated on digital signal processing: using computers to process sound and images in digital form.

What physical theory played—and continues to play—a key role in the revolution?

Quantum mechanics, without a doubt. The theory explains the essential nature of noise, which is the natural enemy of analog information; it makes possible the shrinkage and speedup of our electronics (Moore’s law); and it introduces the possibility of an entirely new kind of computer, the quantum computer, which can transcend the power of today’s conventional machines. Quantum mechanics shows that many aspects of the world are essentially discrete in nature, and the change from the classical physics of the nineteenth century to the quantum mechanics of the twentieth is mirrored in the development of our digital information machines.

What mathematical theory plays a key role in understanding the limitations of computers?

Complexity theory and the idea of an intractable problem, as developed by computer scientists. This theme is explored in Part III, first in terms of analog computers, then using Alan Turing’s abstraction of digital computation, which we now call the Turing machine. This leads to the formulation of the most important open question of computer science: does P equal NP? If P equaled NP, it would mean that any problem whose solutions can be checked quickly can also be solved quickly. This seems like asking a lot and, in fact, most computer scientists believe that P does not equal NP. Problems as hard as any in NP are called NP-complete. The point is that NP-complete problems, like the famous traveling salesman problem, seem to be intrinsically difficult, and cracking any one of them cracks them all. Their essential difficulty manifests itself, mysteriously, in many different ways in the analog and digital worlds, suggesting, perhaps, that there is an underlying physical law at work.
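The check-fast versus solve-fast asymmetry is easy to see in code. Here is a minimal Python sketch (the four-city distance table is invented purely for illustration): verifying a proposed traveling salesman tour takes one pass over the tour, while the naive search for a tour within a given budget tries every ordering, a number that grows factorially with the number of cities.

```python
import itertools

# Hypothetical distances between four cities, invented for illustration.
DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def tour_length(tour):
    """Length of the round trip that visits the cities in 'tour' order."""
    return sum(DIST[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def check(tour, budget):
    """Verifying a proposed answer is fast: one pass over the tour."""
    return sorted(tour) == list(range(len(DIST))) and tour_length(tour) <= budget

def solve(budget):
    """Finding an answer by brute force tries all (n-1)! orderings."""
    for perm in itertools.permutations(range(1, len(DIST))):
        tour = [0] + list(perm)
        if check(tour, budget):
            return tour
    return None

print(check([0, 1, 3, 2], 18))  # quick: is this claimed tour within budget?
print(solve(18))                # slow in general: factorial blow-up as n grows
```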

What important open question about physics (not mathematics) speaks to the relative power of digital and analog computers?

The extended Church-Turing thesis states that any reasonable computer can be simulated efficiently by a Turing machine. Informally, it means that no computer, even an analog one, is more powerful (in an appropriately defined way) than the bare-bones, step-by-step, one-tape Turing machine. The question is open, but many computer scientists believe it to be true. This line of reasoning leads to an important conclusion: if the extended Church-Turing thesis is true, and if P is not equal to NP (which is widely believed), then the digital computer is all we need—Nature is not hiding any computational magic in the analog world.

What does all this have to do with artificial intelligence (AI)?

The brain uses information in both analog and digital form, and some have even suggested that it uses quantum computing. So, the argument goes, perhaps the brain has some special powers that cannot be captured by ordinary computers.

What does philosopher David Chalmers call the hard problem?

We finally reach—in the last chapter—the question of whether the androids we are building will ultimately be conscious. Chalmers calls this the hard problem, and some, including myself, think it unanswerable. An affirmative answer would have real and important consequences, despite the seemingly esoteric nature of the question. If machines can be conscious, and presumably also capable of suffering, then we have a moral responsibility to protect them, and—to put it in human terms—bring them up right. I propose that we must give the coming androids the benefit of the doubt; we owe them the same loving care that we as parents bestow on our biological offspring.

Where do we go from here?

A funny thing happens on the way from chapter 1 to 12. I begin with the modest plan of describing, in the simplest way I can, the ideas behind the analog-to-digital revolution. We visit along the way some surprising tourist spots: the Antikythera mechanism, a 2,000-year-old analog computer built by the ancient Greeks; Jacquard’s embroidery machine with its breakthrough stored program; Ada Lovelace’s program for Babbage’s hypothetical computer, predating Alan Turing by a century; and B. F. Skinner’s pigeons trained in the manner of AI to be living smart bombs. We arrive at a collection of deep conjectures about the way the universe works and some challenging moral questions.

Ken Steiglitz is professor emeritus of computer science and senior scholar at Princeton University. His books include Combinatorial Optimization, A Digital Signal Processing Primer, and Snipers, Shills, and Sharks (Princeton). He lives in Princeton, New Jersey.

Joshua Holden: Quantum cryptography is unbreakable. So is human ingenuity

Two basic types of encryption schemes are used on the internet today. One, known as symmetric-key cryptography, follows the same pattern that people have been using to send secret messages for thousands of years. If Alice wants to send Bob a secret message, they start by getting together somewhere they can’t be overheard and agree on a secret key; later, when they are separated, they can use this key to send messages that Eve the eavesdropper can’t understand even if she overhears them. This is the sort of encryption used when you set up an online account with your neighbourhood bank; you and your bank already know private information about each other, and use that information to set up a secret password to protect your messages.
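To see the shared-secret pattern in code, here is a minimal Python sketch using a one-time pad as the toy cipher (real systems use ciphers such as AES, but the symmetric shape is the same): Alice and Bob agree on a random key in advance, and that single key both scrambles and unscrambles the message.

```python
import secrets

# Toy symmetric-key encryption (a one-time pad): Alice and Bob share a
# random secret key beforehand, and the same key encrypts and decrypts.

def xor_bytes(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet me at noon"
key = secrets.token_bytes(len(message))   # the pre-shared secret

ciphertext = xor_bytes(message, key)      # what Eve might intercept
recovered = xor_bytes(ciphertext, key)    # Bob undoes it with the same key

print(ciphertext.hex())                   # looks like random noise to Eve
print(recovered.decode())                 # -> "meet me at noon"
```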

The second scheme is called public-key cryptography, and it was invented only in the 1970s. As the name suggests, these are systems where Alice and Bob agree on their key, or part of it, by exchanging only public information. This is incredibly useful in modern electronic commerce: if you want to send your credit card number safely over the internet to Amazon, for instance, you don’t want to have to drive to their headquarters to have a secret meeting first. Public-key systems rely on the fact that some mathematical processes seem to be easy to do, but difficult to undo. For example, for Alice to take two large whole numbers and multiply them is relatively easy; for Eve to take the result and recover the original numbers seems much harder.
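A small Python experiment illustrates that easy-to-do, hard-to-undo asymmetry. Multiplying two primes is a single operation; undoing it by trial division takes on the order of the square root of the product. The two primes below are just convenient examples; real public-key systems use numbers hundreds of digits long, for which this brute-force search is hopeless.

```python
import math
import time

# Two well-known primes used purely as an example (the millionth and
# ten-millionth primes); real public-key systems use far larger numbers.
p, q = 15485863, 179424673

n = p * q                 # the easy direction: one multiplication

def trial_factor(n):
    """The hard direction: recover the factors by trial division (~sqrt(n) steps)."""
    if n % 2 == 0:
        return 2, n // 2
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return d, n // d
    return None           # n is prime

start = time.perf_counter()
print(trial_factor(n))    # -> (15485863, 179424673), after millions of divisions
print(f"{time.perf_counter() - start:.2f} seconds")
```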

Public-key cryptography was invented by researchers at the Government Communications Headquarters (GCHQ) – the British equivalent (more or less) of the US National Security Agency (NSA) – who wanted to protect communications between a large number of people in a security organisation. Their work was classified, and the British government neither used it nor allowed it to be released to the public. The idea of electronic commerce apparently never occurred to them. A few years later, academic researchers at Stanford and MIT rediscovered public-key systems. This time they were thinking about the benefits that widespread cryptography could bring to everyday people, not least the ability to do business over computers.

Now cryptographers think that a new kind of computer based on quantum physics could make public-key cryptography insecure. Bits in a normal computer are either 0 or 1. Quantum physics allows bits to be in a superposition of 0 and 1, in the same way that Schrödinger’s cat can be in a superposition of alive and dead states. This sometimes lets quantum computers explore possibilities more quickly than normal computers. While no one has yet built a quantum computer capable of solving problems of nontrivial size (unless they kept it secret), over the past 20 years, researchers have started figuring out how to write programs for such computers and predict that, once built, quantum computers will quickly solve ‘hidden subgroup problems’. Since all public-key systems currently rely on variations of these problems, they could, in theory, be broken by a quantum computer.
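The best-known hidden subgroup problem is the period finding at the heart of Shor’s factoring algorithm. The sketch below is a purely classical toy for small numbers: it shows how knowing the period r of a^x mod N yields the factors of N. The find_period search is exactly the step a quantum computer is expected to speed up exponentially; everything after it is ordinary arithmetic.

```python
import math

# Classical sketch of the arithmetic at the core of Shor's algorithm:
# to factor N, pick a base a and find the period r of f(x) = a**x % N.

def find_period(a, N):
    """Smallest r > 0 with a**r % N == 1 (brute force; the slow classical step)."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_period(N, a):
    g = math.gcd(a, N)
    if g != 1:
        return g, N // g                 # lucky guess: a shares a factor with N
    r = find_period(a, N)
    if r % 2 == 1:
        return None                      # odd period: try another base a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                      # unlucky base: try another a
    return math.gcd(y - 1, N), math.gcd(y + 1, N)

print(factor_via_period(15, 7))   # -> (3, 5)
print(factor_via_period(21, 2))   # -> (7, 3)
```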

Cryptographers aren’t just giving up, however. They’re exploring replacements for the current systems, in two principal ways. One deploys quantum-resistant ciphers, which are ways to encrypt messages using current computers but without involving hidden subgroup problems. Thus they seem to be safe against code-breakers using quantum computers. The other idea is to make truly quantum ciphers. These would ‘fight quantum with quantum’, using the same quantum physics that could allow us to build quantum computers to protect against quantum-computational attacks. Progress is being made in both areas, but both require more research, which is currently being done at universities and other institutions around the world.

Yet some government agencies still want to restrict or control research into cryptographic security. They argue that if everyone in the world has strong cryptography, then terrorists, kidnappers and child pornographers will be able to make plans that law enforcement and national security personnel can’t penetrate.

But that’s not really true. What is true is that pretty much anyone can get hold of software that, when used properly, is secure against any publicly known attacks. The key here is ‘when used properly’. In reality, hardly any system is always used properly. And when terrorists or criminals use a system incorrectly even once, that can allow an experienced codebreaker working for the government to read all the messages sent with that system. Law enforcement and national security personnel can put those messages together with information gathered in other ways – surveillance, confidential informants, analysis of metadata and transmission characteristics, etc – and still have a potent tool against wrongdoers.

In his essay ‘A Few Words on Secret Writing’ (1841), Edgar Allan Poe wrote: ‘[I]t may be roundly asserted that human ingenuity cannot concoct a cipher which human ingenuity cannot resolve.’ In theory, he has been proven wrong: when executed properly under the proper conditions, techniques such as quantum cryptography are secure against any possible attack by Eve. In real-life situations, however, Poe was undoubtedly right. Every time an ‘unbreakable’ system has been put into actual use, some sort of unexpected mischance eventually has given Eve an opportunity to break it. Conversely, whenever it has seemed that Eve has irretrievably gained the upper hand, Alice and Bob have found a clever way to get back in the game. I am convinced of one thing: if society does not give ‘human ingenuity’ as much room to flourish as we can manage, we will all be poorer for it.

Joshua Holden is professor of mathematics at the Rose-Hulman Institute of Technology and the author of The Mathematics of Secrets.

This article was originally published at Aeon and has been republished under Creative Commons.

David Alan Grier: The Light of Computation

by David Alan Grier

When one figure steps into the light, others can be seen in the reflected glow. The movie Hidden Figures has brought a little light to the contributions of NASA’s human computers. Women such as Katherine Goble Johnson and her colleagues of the West Area Computers supported the manned space program by doing hours of repetitive, detailed orbital calculations. These women were not the first mathematical workers to toil in the obscurity of organized scientific calculation. The history of organized computing groups can be traced back to the 18th century, when a French astronomer convinced three friends to help him calculate the date that Halley’s comet would return to view. Like Johnson, most human computers received little recognition for their labors. For many, only their families appreciated the work that they did. For some, not even their closest relatives knew of their role in the scientific community.

My grandmother confessed her training as a human computer only at the very end of her life. At one dinner, she laid her fork on the table and expressed regret that she had never used calculus. Since none of us believed that she had gone to college, we dismissed the remark and moved the conversation in a different direction. Only after her passing did I find the college records that confirmed she had taken a degree in mathematics from the University of Michigan in 1921. The illumination from those records showed that she was not alone. Half of the twelve mathematics majors in her class were women. Five of those six had been employed as human computers or statistical clerks.

By 1921, organized human computing was fairly common in industrialized countries. The governments of the United States, Germany, France, Great Britain, Japan, and Russia supported groups that did calculations for nautical almanacs, national surveys, agricultural statistics, weapons testing, and weather prediction. The British Association for the Advancement of Science operated a computing group. So did the Harvard Observatory, Iowa State University, and the University of Indiana. One school, University College London, published a periodical for these groups, Tracts for Computers.

While many of these human computers were women, most were not. Computation was considered to be a form of clerical work, which was still a career dominated by men. However, human computers tended to be individuals who faced economic or social barriers to their careers. These barriers prevented them from becoming scientists or engineers in spite of their talents. In the book When Computers Were Human, I characterized them as “Blacks, women, Irish, Jews and the merely poor.” One of the most prominent computing groups of the 20th century, the Mathematical Tables Project, hired only the impoverished. It operated during the Great Depression and recruited its 450 computers from New York City’s unemployment rolls.

During its 10 years of operations, the Math Tables Project toiled in obscurity. Only a few members of the scientific community recognized its contributions. Hans Bethe asked the group to do the calculations for a paper that he was writing on the physics of the sun. The engineer Philip Morse brought problems from his colleagues at MIT. The pioneering computer scientist John von Neumann asked the group to test a new mathematical optimization technique after he was unable to test it on the new ENIAC computer. However, most scientists maintained a distance between themselves and the Mathematical Tables Project. One member of the Academy of Sciences explained his reservations about the Project with an argument that came to be known as the Computational Syllogism. Scientists, he argued, are successful people. The poor, he asserted, are not successful. Therefore, he concluded, the poor cannot be scientists and hence should not be employed in computation.

Like the human computers of NASA, the Mathematical Tables Project had a brief moment in the spotlight. In 1964, the leader of the Project, Gertrude Blanch, received a Federal Woman’s Award from President Lyndon Johnson for her contributions to the United States Government. Yet her light did not shine far enough to bring recognition to the 20 members of the Math Tables Project who published a book, later that year, on the methods of scientific computing. The volume became one of the best-selling scientific books in history. Nonetheless, few people knew that it was written by former human computers.

The attention to Katherine Goble Johnson is welcome because it reminds us that science is a community endeavor. When we recognize the authors of scientific articles, or applaud the distinguished men and women who receive Nobel Prizes (or, in the case of computer science, Turing Awards), we often fail to see the community members who were essential to the scientific work. At least in Hidden Figures, they receive a little of the reflected light.

David Alan Grier is the author of When Computers Were Human. He writes “Global Code” for Computer magazine and produces the podcast “How We Manage Stuff.” He can be reached at grier@gwu.edu.