## Ken Steiglitz: It’s the Number of Zeroes that Counts

We present the third installment in a series by The Discrete Charm of the Machine author Ken Steiglitz. You can find the first post here and the second, here.

The scales of space and time in our universe; in everyday life we hang out very near the center of this picture: 1 meter and 1 second.

As we’ll see in The Discrete Charm, the computer world is full of very big and very small numbers. For example, if your smartphone’s memory has a capacity of 32 GBytes, it means it can hold 32 billion bytes, or 32,000,000,000 bytes. It’s awfully inconvenient and error-prone to count this many zeroes, and it can get much worse, so scientists, who are used to dealing with very large and small numbers, just count the number of zeroes. In this case the memory capacity is 3.2 × 10¹⁰ bytes. At the other extreme, pulses in an electronic circuit might occur at the rate of a billion per second, so the time between pulses is a billionth of a second, 0.000000001 seconds, a nanosecond, or 1 × 10⁻⁹ seconds. In the time-honored scientific lingo, a factor of 10 is an “order of magnitude,” and back-of-the-envelope estimates often ignore factors of 2 or 3. What’s a factor of 2 or 3 between friends? What matters is the number of zeroes. In the last example, a nanosecond is 9 orders of magnitude smaller than a second.
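Counting zeroes is just taking the base-10 logarithm, and a few lines of Python (purely illustrative; the variable names are mine, not from the post) show the arithmetic:

```python
import math

# Order of magnitude = number of zeroes = roughly the base-10 logarithm.
phone_memory_bytes = 32e9  # 32 GByte smartphone memory
nanosecond = 1e-9          # time between pulses at a billion per second

print(math.log10(phone_memory_bytes))  # ≈ 10.5, i.e. about 10 zeroes
print(math.log10(nanosecond))          # -9.0, i.e. 9 orders below a second

# "What's a factor of 2 or 3 between friends?" A factor of 3 shifts
# the logarithm by less than half an order of magnitude:
print(math.log10(3))                   # ≈ 0.48
```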

Such big and small numbers also come up in discussing the size of transistors, the number of them that fit on a chip, the speed of communication on the internet in bits per second, and so on. The figure shows the range of magnitudes we’re ever likely to encounter when we discuss the sizes of things and the time that things take. At the low extremes I indicate the size of an electron and the time between the crests of gamma-ray waves, just about the highest frequency we ever encounter. The electron is about 6 orders of magnitude smaller than a typical virus (and a single transistor on a chip); the frequency of gamma rays is about 10 orders of magnitude faster than a gigahertz computer chip.

To this computer scientist a machine like an automobile is pretty boring. It runs only one program, or maybe two if you count forward and reverse gear. With few exceptions it has four wheels, one engine, one steering wheel—and every car goes about as fast as any other, if it can move in traffic at all. I could take my father’s 1941 Plymouth out for a spin today and hardly anyone would notice. It cost about \$845 in 1941 (for a four-door sedan), or about \$14,000 in today’s dollars. In other words, in our order-of-magnitude world, it is a product that is practically frozen in time. On the other hand, my obsolete and clumsy laptop beats the first computer I ever used by 5 orders of magnitude in speed and memory, and 4 orders of magnitude in weight and volume. If you want to talk money, I remember paying about 50¢ a byte for extra memory for a small laboratory computer in 1971—8 orders of magnitude more expensive than today, or maybe 9 if you take inflation into account.

The number of zeroes is roughly the base-10 logarithm, and plots like the figure are said to have logarithmic scales. You can see them in the chapter on Moore’s law in The Discrete Charm, where I need them to get a manageable picture of just how much progress has been made in computer technology over the last few decades. The shrinkage in size and speedup has been, in fact, exponential with the years—which means constant-size hops in the figure, year by year. Anything less than exponential growth would slow to a crawl. This distinction between exponential and slower-than-exponential growth also plays a crucial role in studying the efficiency of computer algorithms, a favorite pursuit of theoretical computer scientists and a subject I take up towards the end of the book.
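That exponential growth means constant-size hops on a logarithmic scale is easy to see in a quick sketch (the doubling-every-year numbers below are invented for illustration, in the spirit of Moore’s law):

```python
import math

# An exponentially growing quantity: doubling every year from a base of 1000.
counts = [1000 * 2**year for year in range(10)]

# On a logarithmic scale, the hop between successive years is constant:
hops = [math.log10(b) - math.log10(a) for a, b in zip(counts, counts[1:])]
print(hops)  # every hop is log10(2) ≈ 0.301
```

Slower-than-exponential growth, by contrast, would produce hops that shrink year by year on the same plot.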

Counting zeroes lets us fit the whole universe on a page.

Ken Steiglitz is professor emeritus of computer science and senior scholar at Princeton University. His books include The Discrete Charm of the Machine, Combinatorial Optimization, A Digital Signal Processing Primer, and Snipers, Shills, and Sharks. He lives in Princeton, New Jersey.

## Ken Steiglitz: When Caruso’s Voice Became Immortal

We’re excited to introduce a new series from Ken Steiglitz, computer science professor at Princeton University and author of The Discrete Charm of the Machine, out now.

The first record to sell a million copies was Enrico Caruso’s 1904 recording of “Vesti la giubba.” There was nothing digital, or even electrical, about it; it was a strictly mechanical affair. In those days musicians would huddle around a horn which collected their sound waves, and that energy was coupled mechanically to a diaphragm and then to a needle that traced the waveforms on a wax or metal-foil cylinder or disc. For many years even the playback was completely mechanical, with a spring-wound motor and a reverse acoustical system that sent the waveform from what was often a 78 rpm shellac disc to a needle, diaphragm, and horn. Caruso almost single-handedly started a cultural revolution as the first recording star and became a household name—and millionaire (in 1904 dollars)—in the process. All without the benefit of electricity, and certainly purely analog from start to finish. Digital sound recording for the masses was 80 years in the future.

Enrico Caruso drew this self portrait on April 11, 1902 to commemorate his first recordings for RCA Victor. The process was completely analog and mechanical. As you can see, Caruso sang into a horn; there were no microphones. [Public domain, from Wikimedia Commons]

The 1904 Caruso recording I mentioned is perhaps the most famous single side ever made and is readily available online. It was a sensation, and music lovers who could afford it were happy to invest in the 78 rpm (or simply “78”) disc, not to mention the elaborate contraption that played it. In the early twentieth century a 78 cost about a dollar or so, but 1904 dollars were worth about 30 of today’s dollars, a steep price for 2 minutes and 28 seconds of sound full of hisses, pops, and crackles, and practically no bass or treble. In fact the disc surface noise in the versions you’re likely to hear today has been cleaned up and the sound quality greatly improved—by digital processing of course. But being able to hear Caruso in your living room was the sensation of the new century.

The poor sound quality of early recordings was not the worst of it. That could be fixed, and eventually it was. The long-playing stereo record (now usually called just “vinyl”) made the 1960s and 70s the golden age of high fidelity, and the audiophile was born. I especially remember, for example, the remarkable sound of the Mercury Living Presence and Deutsche Grammophon labels. The market for high-quality home equipment boomed, and it was easy to spend thousands of dollars on the latest high-tech gear. But all was not well. The pressure of the stylus, usually diamond, on the vinyl disc wore both disc and stylus. There is about half a mile of groove on an LP record, and the stylus that tracks it has a very sharp, very hard tip; records wear out. Not as quickly as the shellac discs of the 20s and 30s, but they wear out.

The noise problem for analog recordings is exacerbated when many tracks are combined, a standard practice in studio work in the recording industry. Sound in analog form is just inherently fragile; its quality deteriorates every time it is copied or played back on a turntable or past a tape head.

Everything changed in 1982 with the introduction of the compact disc (CD), which was digital. Each CD holds about 400 million samples of a 74-minute stereo sound waveform, each sample represented by a 2-byte number (a byte is 8 bits). In this world those 800 million bytes, or 6.4 billion bits (zeros or ones) can be stored and copied forever, absolutely perfectly. Those 6.4 billion bits are quite safe for as long as our civilization endures.
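The CD arithmetic above is easy to check. A back-of-the-envelope sketch in Python (assuming the standard CD rate of 44,100 samples per second per channel, which the post doesn’t state explicitly) lands close to the rounded figures quoted:

```python
# Audio CD: 74 minutes of stereo sound, 44,100 samples per second
# per channel, each sample a 2-byte (16-bit) number.
seconds = 74 * 60
samples_per_channel = seconds * 44_100
total_samples = samples_per_channel * 2  # two stereo channels
total_bytes = total_samples * 2          # 2 bytes per sample
total_bits = total_bytes * 8

print(f"{total_samples:,} samples")  # about 400 million
print(f"{total_bytes:,} bytes")      # about 800 million
print(f"{total_bits:,} bits")        # roughly the 6.4 billion quoted above
```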

There are 19th century tenors whose voices we will never hear. But Caruso, Corelli, Domingo, Pavarotti… their digital voices are truly immortal.


## Ken Steiglitz on The Discrete Charm of the Machine

A few short decades ago, we were informed by the smooth signals of analog television and radio; we communicated using our analog telephones; and we even computed with analog computers. Today our world is digital, built with zeros and ones. Why did this revolution occur? The Discrete Charm of the Machine explains, in an engaging and accessible manner, the varied physical and logical reasons behind this radical transformation. Ken Steiglitz examines why our information technology, the lifeblood of our civilization, became digital, and challenges us to think about where its future trajectory may lead.

What is the aim of the book?

The subtitle: To explain why the world became digital. Barely two generations ago our information machines—radio, TV, computers, telephones, phonographs, cameras—were analog. Information was represented by smoothly varying waves. Today all these devices are digital. Information is represented by bits, zeros and ones. We trace the reasons for this radical change, some based on fundamental physical principles, others on ideas from communication theory and computer science. At the end we arrive at the present age of the internet, dominated by digital communication, and finally greet the arrival of androids—the logical end of our current pursuit of artificial intelligence.

What role did war play in this transformation?

Sadly, World War II was a major impetus to many of the developments leading to the digital world, mainly because of the need for better methods for decrypting intercepted secret messages and more powerful computation for building the atomic bomb. The following Cold War just increased the pressure. Business applications of computers and then, of course, the personal computer opened the floodgates for the machines that are today never far from our fingertips.

How did you come to study this subject?

I lived it. As an electrical engineering undergraduate I used both analog and digital computers. My first summer job was programming one of the few digital computers in Manhattan at the time, the IBM 704. In graduate school I wrote my dissertation on the relationship between analog and digital signal processing and my research for the next twenty years or so concentrated on digital signal processing: using computers to process sound and images in digital form.

What physical theory played—and continues to play—a key role in the revolution?

Quantum mechanics, without a doubt. The theory explains the essential nature of noise, which is the natural enemy of analog information; it makes possible the shrinkage and speedup of our electronics (Moore’s law); and it introduces the possibility of an entirely new kind of computer, the quantum computer, which can transcend the power of today’s conventional machines. Quantum mechanics shows that many aspects of the world are essentially discrete in nature, and the change from the classical physics of the nineteenth century to the quantum mechanics of the twentieth is mirrored in the development of our digital information machines.

What mathematical theory plays a key role in understanding the limitations of computers?

Complexity theory and the idea of an intractable problem, as developed by computer scientists. This theme is explored in Part III, first in terms of analog computers, then using Alan Turing’s abstraction of digital computation, which we now call the Turing machine. This leads to the formulation of the most important open question of computer science: does P equal NP? If P equals NP it would mean that any problem where solutions can just be checked fast can be solved fast. This seems like asking a lot and, in fact, most computer scientists believe that P does not equal NP. Problems as hard as any in NP are called NP-complete. The point is that NP-complete problems, like the famous traveling salesman problem, seem to be intrinsically difficult, and cracking any one of them cracks them all. Their essential difficulty manifests itself, mysteriously, in many different ways in the analog and digital worlds, suggesting, perhaps, that there is an underlying physical law at work.
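The asymmetry between checking and solving is easy to see for the traveling salesman problem. A toy sketch in Python (the five-city distance matrix is invented for illustration): checking a proposed tour takes one pass over its edges, while the obvious way to solve the problem tries every possible tour.

```python
from itertools import permutations

# Made-up symmetric distances between 5 cities.
D = [[0, 2, 9, 10, 7],
     [2, 0, 6, 4, 3],
     [9, 6, 0, 8, 5],
     [10, 4, 8, 0, 6],
     [7, 3, 5, 6, 0]]

def tour_cost(tour):
    """Checking a tour is fast: one pass over its n edges."""
    return sum(D[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

# Solving is another matter: brute force examines (n-1)! tours,
# which grows exponentially (and worse) with the number of cities.
best = min(permutations(range(1, 5)), key=lambda p: tour_cost([0] + list(p)))
print([0] + list(best), tour_cost([0] + list(best)))
```

For 5 cities that is 24 tours; for 30 cities it is about 10³⁰, which is why anything but fast checking seems out of reach.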

What important open question about physics (not mathematics) speaks to the relative power of digital and analog computers?

The extended Church-Turing thesis states that any reasonable computer can be simulated efficiently by a Turing machine. Informally, it means that no computer, even if analog, is more powerful (in an appropriately defined way) than the bare-bones, step-by-step, one-tape Turing machine. The question is open, but many computer scientists believe it to be true. This line of reasoning leads to an important conclusion: if the extended Church-Turing thesis is true, and if P is not equal to NP (which is widely believed), then the digital computer is all we need—Nature is not hiding any computational magic in the analog world.

What does all this have to do with artificial intelligence (AI)?

The brain uses information in both analog and digital form, and some have even suggested that it uses quantum computing. So, the argument goes, perhaps the brain has some special powers that cannot be captured by ordinary computers.

What does philosopher David Chalmers call the hard problem?

We finally reach—in the last chapter—the question of whether the androids we are building will ultimately be conscious. Chalmers calls this the hard problem, and some, including myself, think it unanswerable. An affirmative answer would have real and important consequences, despite the seemingly esoteric nature of the question. If machines can be conscious, and presumably also capable of suffering, then we have a moral responsibility to protect them, and—to put it in human terms—bring them up right. I propose that we must give the coming androids the benefit of the doubt; we owe them the same loving care that we as parents bestow on our biological offspring.

Where do we go from here?

A funny thing happens on the way from chapter 1 to 12. I begin with the modest plan of describing, in the simplest way I can, the ideas behind the analog-to-digital revolution. We visit along the way some surprising tourist spots: the Antikythera mechanism, a 2,000-year-old analog computer built by the ancient Greeks; Jacquard’s embroidery machine with its breakthrough stored program; Ada Lovelace’s program for Babbage’s hypothetical computer, predating Alan Turing by a century; and B. F. Skinner’s pigeons trained in the manner of AI to be living smart bombs. We arrive at a collection of deep conjectures about the way the universe works and some challenging moral questions.

Ken Steiglitz is professor emeritus of computer science and senior scholar at Princeton University. His books include Combinatorial Optimization, A Digital Signal Processing Primer, and Snipers, Shills, and Sharks (Princeton). He lives in Princeton, New Jersey.