What is Calculus?

By Oscar Fernandez

This is the first of three short articles exploring calculus. This article briefly explores its origins. The second and third articles explore its substance and impact, respectively. They will be published in the coming weeks.

What is calculus? If you were watching Jeopardy on May 31, 2019, you were treated to one whimsical answer: “developed by 2 17th century thinkers & rivals, it’s used to calculate rates of change & to torment high school students.” Funny, Jeopardy. While that answer isn’t totally accurate, what I do like about it is its structure—history, substance, and impact. This tried-and-true framework for understanding new concepts marries context with content. In this three-part series on calculus I’ll give you a short introduction to calculus’ history, substance, and impact, providing you with a more fulfilling answer to the question “what is calculus?” First up: a short tour of the origins of calculus.

Three Big Problems That Drove the Development of Calculus

By the mid-1600s, scientists and mathematicians had spent millennia trying to solve what I’ll call the three Big Problems in mathematics: the instantaneous speed problem, the tangent line problem, and the area problem. The figure below illustrates these.

(Reprinted, with permission, from Calculus Simplified (Princeton University Press))

The instantaneous speed problem (a) popped up in many places, most notably in connection with Isaac Newton’s studies of gravity. You see, gravity continuously accelerates a falling object, changing its velocity from instant to instant. To fully understand gravity, then, requires an understanding of instantaneous velocity. This didn’t exist before calculus. The tangent line problem (b) arose mainly as a mathematical curiosity. The ancient Greeks knew how to calculate tangent lines to circles, but until calculus no one knew how to do that for other curves. The area problem (c) popped up in a variety of places. Ancient Egyptian tax collectors, for example, needed to know how to calculate the area of irregular shapes to accurately tax landowners. Many hundreds of years later, the ancient Greeks found formulas for the areas of certain shapes (e.g., circles), but no one knew how to find the area of an arbitrary shape.

From understanding gravity to calculating taxes to mathematical curiosities, the three Big Problems illustrate the broad origins of calculus. And for millennia they remained unsolved. What made them so hard was that they could not be solved with pre-calculus mathematics. For example, you’ve been taught that you need two points to calculate the slope of a line. But in the tangent line problem you’re only given one point (point P in (b)). How can one possibly calculate the slope of a line with just one point?! Similarly, we think of speed as “change in distance divided by change in time” (as in “the car zoomed by at 80 miles per hour”). That’s a problem for the instantaneous speed problem (a), because there’s zero change in time during an instant, making the denominator of “change in distance divided by change in time” zero. We can’t divide by zero, so again we’re stuck.
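The limit idea that eventually cracked the instantaneous speed problem can be previewed numerically. The sketch below (not from the article; a simple Python illustration) computes the average speed of a falling object over shorter and shorter time intervals and watches the averages settle on a single value:

```python
# Average speed of a falling object over shrinking time intervals.
# Under gravity (ignoring air resistance), distance fallen is d(t) = 4.9 * t**2 meters.

def distance(t):
    return 4.9 * t ** 2

t = 1.0  # the instant we care about, in seconds
for h in [1.0, 0.1, 0.01, 0.001, 0.0001]:
    avg_speed = (distance(t + h) - distance(t)) / h  # change in distance / change in time
    print(f"interval width {h}: average speed = {avg_speed:.4f} m/s")

# The averages settle on 9.8 m/s, the instantaneous speed at t = 1,
# even though we never actually divide by a zero-length time interval.
```

This is exactly the change in perspective hinted at above: instead of dividing by zero, watch what the averages approach.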

The Two Geniuses Who Figured Everything Out

It wasn’t until the mid-1600s that real progress on solving the three Big Problems was made. One thing the Jeopardy answer above got right was the allusion to the two 17th century thinkers credited with making the most progress: Isaac Newton and Gottfried Leibniz. You probably know a few things about Newton—you may have heard about Newton’s Three Laws of Motion, which form the foundation of much of physics—but you’ve likely heard little, if anything, about Leibniz. That’s because, in short, Newton used the power and influence he gained after making his many discoveries and advances public to discredit Leibniz’s role in the development of calculus. (Read more about the feud here.) Yet each of these great thinkers made important contributions to calculus. Their frameworks and approaches were very different, yet each provides tremendous insight into the mathematical foundations of calculus and how calculus works.

In the next post in this series we’ll dive into those foundations. We will discuss the ultimate foundation of calculus—limits—and the two pillars erected on that foundation—derivatives and integrals—that altogether constitute the mansion of calculus. And we will discover an amazing fact: all three of the Big Problems can be solved using THE SAME approach. As is true with so many thorny problems, we will see that all that was required was a change in perspective.

 

Calculus Simplified
By Oscar E. Fernandez

Calculus is a beautiful subject that most of us learn from professors, textbooks, or supplementary texts. Each of these resources has strengths but also weaknesses. In Calculus Simplified, Oscar Fernandez combines the strengths and omits the weaknesses, resulting in a “Goldilocks approach” to learning calculus: just the right level of detail, the right depth of insights, and the flexibility to customize your calculus adventure.

Fernandez begins by offering an intuitive introduction to the three key ideas in calculus—limits, derivatives, and integrals. The mathematical details of each of these pillars of calculus are then covered in subsequent chapters, which are organized into mini-lessons on topics found in a college-level calculus course. Each mini-lesson focuses first on developing the intuition behind calculus and then on conceptual and computational mastery. Nearly 200 solved examples and more than 300 exercises allow for ample opportunities to practice calculus. And additional resources—including video tutorials and interactive graphs—are available on the book’s website.

Calculus Simplified also gives you the option of personalizing your calculus journey. For example, you can learn all of calculus with zero knowledge of exponential, logarithmic, and trigonometric functions—these are discussed at the end of each mini-lesson. You can also opt for a more in-depth understanding of topics—chapter appendices provide additional insights and detail. Finally, an additional appendix explores more in-depth real-world applications of calculus.

Learning calculus should be an exciting voyage, not a daunting task. Calculus Simplified gives you the freedom to choose your calculus experience, and the right support to help you conquer the subject with confidence.

  • An accessible, intuitive introduction to first-semester calculus
  • Nearly 200 solved problems and more than 300 exercises (all with answers)
  • No prior knowledge of exponential, logarithmic, or trigonometric functions required
  • Additional online resources—video tutorials and supplementary exercises—provided

In Dialogue: Christopher Phillips and Tim Chartier on Sports & Statistics

Question: How would you describe the intersection between statistics and sports? How does one inform the other?

Christopher Phillips, author of Scouting and Scoring: Sports have undoubtedly become one of the most visible and important sites for the rise of data analytics and statistics. In some respects, sports seem to be an easy, even inevitable place to apply new statistical tools: most sports produce a lot of data across teams and seasons; games have fixed rules and clear measures of success (e.g., wins or points); players and teams have incentives to adjust in order to gain a competitive edge.

But as I discuss in my new book Scouting and Scoring: How We Know What We Know About Baseball, it is also easy to fall prey to myths about the use of statistics in sports. Though these myths apply across many sports, it is easiest to home in on baseball, as that has been one of the most consequential areas for statistics.

Perhaps the most persistent and pernicious myth is that data emerge naturally from sporting events. There is no doubt that new video-, Doppler-, and radar-based technologies, especially when combined with increasingly cheap computing power and storage capability, have dramatically expanded the amount of data that can be collected. But it takes a huge amount of labor to create, collect, clean, and curate data, even before anyone tries to analyze them. Moreover, some data, like errors in baseball, are inescapably the product of individual judgment that has to be standardized and monitored.

The second myth is that sport statistics emerged only recently, particularly after the rise of the electronic computer. In fact, statistical analysis in sports goes back decades: in baseball, playing statistics have been used to evaluate players for year-end awards and to negotiate contracts for as long as professional baseball has existed. (And statistics were collected and published for cricket decades before baseball’s rules were formalized.) As new methods of statistical analysis emerged in the early twentieth century in fields like psychology and physiology, some observers immediately tried to apply them to sports. In the 1910 book Touching Second, the authors promoted the use of data for shifting around fielders and for scouting prospects, two of the most important uses of statistical data in the modern era as well. There’s certainly been a flurry of new statistics over the last twenty years, but the general idea isn’t new—consider that Allen Guttmann’s half-century-old book From Ritual to Record highlights the “numeration of achievement” and the “quantification of the aesthetic” as defining features of modern sport.

Finally, it’s a myth that there is a fundamental divide between those who look at performance statistics (i.e., scorers) and those who evaluate bodies (i.e., scouts). The usual gloss is that scouts are holistic, subjective judges of quality whereas scorers are precise, objective measurers. In reality, baseball scouts have long used methods of quantification, whether for the pricing of amateur prospects, or for the grading of skills, or the creation of composite metrics like the Overall Future Potential that reduce a player to a single number. There’s a fairly good case to be made that scouts and other evaluators of talent are even more audacious quantifiers than scorers, in that the latter mainly analyze things that can be easily counted.

Tim Chartier, author of Math Bytes: Data surrounds us. The rate at which data is produced can make us seem like specks in the cavernous expanse of digital information. Each day, 3 billion photos and videos are shared on Snapchat. Every minute, 300 hours of video are uploaded to YouTube. Data is offering new possibilities for insight. Sports is an area where data has a traditional role and newfound possibilities, thanks in part to ever-larger datasets.

For years, a number of constants have defined baseball: the ball, the bat, the bases, and statistics like balls, strikes, hits, and outs. Statistics have simply been part of the game. You can find from the 1920 box score that Babe Ruth got 2 hits in 4 at-bats in his first game as a Yankee. While new metrics have emerged with analytical advances, the game has been well studied for some time. As Ford C. Frick stated in Games, Asterisks and People,

“Baseball is probably the world’s best documented sport.”

While this is true, the prevalence of data does not necessarily result in trusting the recommendations of those who study it. For example, manager Bobby Bragan stated, “Say you were standing with one foot in the oven and one foot in an ice bucket. According to the percentage people, you should be perfectly comfortable.” This underscores an important aspect of data and analytics. Data can inherently lead to insight, but it becomes actionable only when one trusts how accurately it reflects our world.

Other sports, while not as statistically robust as baseball, also have an influx of data. In basketball, cameras positioned in the rafters record the (x, y) position of every player on the court and the (x, y, z) position of the ball, every fraction of a second, throughout the entire game. As such, we can replay aspects of games via this data for years to come. And with such data comes new insight. For example, we know that Steph Curry, while averaging just over 34 minutes a game, runs, on average, just over 2.6 miles per game. He also runs almost a quarter of a mile more on offense than on defense.

While such data can be stunning in its size and detail, it also comes with challenges. How do you recognize a pick and roll versus an isolation play from what is essentially dots moving in a plane? Further, basketball, like football but unlike baseball, generally involves multiple players acting at once on each play. How much credit does each player get for a basket on offense? A player’s position may open up possibilities for scoring, even if that player never touched the ball. As such, metrics have been and continue to be created to better understand the game.

Sports are played with a combination of analytics, gut, and experience. The right combination depends on the sport, player, coach, and context. Nonetheless, data is here and will continue to give insight into the game.

Anna Frebel on women in science who paved the way

As a young girl growing up in Germany, I always felt drawn to the idea of discovery. Noticing my expanding interest in science, my mother cultivated my curiosity about the world and our place in the universe. She repeatedly gifted me biographies of women scientists who defied the odds to pioneer discoveries in their respective fields. Indeed, these stories of accomplishment and determination greatly fueled my desire to become an astronomer.

As I spent countless hours reading and exploring on my own, I would find myself alone but never lonely in my educational pursuits. Little did I know, this form of self-reliance would serve me well as I completed my advanced degrees and research into finding ancient stars to learn about the cosmic origin of the chemical elements — published in my book Searching for the Oldest Stars: Ancient Relics from the Early Universe.

These days, I fly to Chile to use large telescopes once or twice per year. This work means long hours spent in solitude carrying out our observations. It is usually then that I most strongly feel it again: a sense of fulfillment and pride in this discovery work which I was lucky to gain a long time ago by reading the life stories of women in science.

I fondly remember learning about the thrill of traveling across continents with inspiring naturalist and scientific illustrator Maria Sibylla Merian (1647-1717) as she researched and illustrated caterpillars and insects and their various life stages in the most detailed of ways. I met the fierce and gifted mathematician Sofia Kovalevskaya (1850-1891), who was the first woman to obtain a PhD in mathematics (coincidentally from the university in my hometown) and who later became the first woman mathematics professor in Sweden. One of the most profound role models remains two-time Nobel Prize winner Marie Curie (1867-1934), a remarkably persistent physicist and chemist who pioneered the study of radioactivity and discovered new chemical elements. Reading about her years of long work in the lab to eventually isolate a tenth of a gram of radium, I too could imagine becoming a scientist. Curie’s immense dedication to science and humanity encapsulated everything I wanted to do with my life. Finally, atomic physicist Lise Meitner (1878-1968) showed me how groundbreaking discoveries can be made when one dares to invoke unconventional ideas to explain experimental results. She realized that atomic nuclei cannot be arbitrarily large. If too heavy, they fission, break apart, and thus produce various heavy elements from the bottom half of the periodic table.

Throughout the years, these stories have stayed with me. Their impact and insight gave me comfort and guidance during the many phases of my academic and professional life. It was more than a question of gender. It was the confidence in knowing the women who came before me had created a path for the next generation to travel, myself included.

Some of these books have traveled with me as I moved from Germany to Australia to the US for my career and my path to professorship. In many ways, I’ve incorporated central aspects from the lives and research of these giants in science into my own work. Hence, these women remain in my heart and soul – and by knowing their stories, I never feel alone. From my perspective, reading biographies thus remains one of the most important forms of personal and professional mentorship and growth.

Recently, through a collaboration with STEM on Stage, I became a science adviser to the living history film “Humanity Needs Dreamers: A Visit With Marie Curie”. I also rekindled my love for these ladies and their stories by crafting a short play in which I portray Lise Meitner as she recalls her discovery of nuclear fission in 1938/39. The play “Pursuit of Discovery” is followed by a slide presentation about my research and how Meitner’s work provided the theoretical framework for my current studies into the formation of the heaviest elements in the periodic table.

I’m often asked about the challenges facing women in science. Although we have made significant progress, one of the main challenges is providing mentorship and role models. In astronomy, the number of senior level women remains small compared to our male counterparts. To help change this ratio, I’ve devoted time to help mentor undergraduate and graduate women in physics and astronomy.

Whether reading biographies of women in science, mentoring, or becoming Meitner on stage, it is important to give credit to those who paved the way for the next generation, and to highlight the amazing and inspiring accomplishments of women in science. As I write in my book, “we stand on the shoulders of giants.” And by knowing their stories, we can better know ourselves.

Anna Frebel is an Associate Professor in the Department of Physics at the Massachusetts Institute of Technology. She has received numerous international honors and awards for her discoveries and analyses of the oldest stars. She lives in Cambridge, Massachusetts.

 

 

Ken Steiglitz: Happy π Day!

As every grammar school student knows, π is the ratio of the circumference to the diameter of a circle. Its value is approximately 3.14…, and today is March 14th, so Happy π Day! The digits go on forever, and without a pattern. The number has many connections with computers, some obvious, some not so obvious, and I’ll mention a few.

The most obvious connection, I suppose, is that computers have allowed enthusiasts to find the value of π to great accuracy. But how accurately do we really need to know its value? Well, if we knew the diameter of the Earth precisely, knowing π to 14 or 15 decimal places would enable us to compute the length of the equator to within the width of a virus. This accuracy was achieved by the Persian mathematician Jamshīd al-Kāshī in the early 15th century. Of course humans let loose with digital computers can be counted on to go crazy; the current record is more than 22 trillion digits. (For a delightful and off-center account of the history of π, see A History of Pi, third edition, by Petr Beckmann, St. Martin’s Press, New York, 1971. The anti-Roman rant in chapter 5 alone is worth the price of admission.)
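That virus-width claim is easy to sanity-check. Here is a back-of-the-envelope sketch in Python (the diameter is the standard mean figure; the error bound assumes π truncated after 15 decimal places):

```python
# If pi is known to 15 decimal places, the truncation error is at most 10^-15.
# The error that propagates into the equator's length is diameter * error.
earth_diameter_m = 12_742_000          # mean diameter of the Earth, in meters
pi_error = 1e-15                       # worst-case error in a 15-decimal value of pi
circumference_error_m = earth_diameter_m * pi_error
print(circumference_error_m)           # about 1.3e-8 m, i.e. roughly 13 nanometers
```

Thirteen nanometers is indeed smaller than a typical virus, which runs from tens to hundreds of nanometers across.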

A photo of a European wildcat, Felis silvestris silvestris. The original photo is on the left. On the right is a version where the compression ratio gradually increases from right to left, thereby decreasing the image quality. The original photograph is by Michael Gäbler; it was modified by AzaToth to illustrate the effects of compression by JPEG. [Public domain, from Wikimedia Commons]

Don’t condemn the apparent absurdity of setting world records like this; the results can be useful. Running the programs on new hardware or software and comparing results is a good test for bugs. But more interesting is the question of just how the digits of π are distributed. Are they essentially random? Do any patterns appear? Is there a message from God hidden in this number that, after all, God created? Alas, so far no pattern has been found, and the digits appear to be “random” as far as statistical tests show. On the other hand, mathematicians have not been able to prove this one way or another.

Putting aside these more or less academic thoughts, the value of π is embedded deep in the code on your smartphone or computer and plays an important part in storing the images that people are constantly (it seems to me) scrolling through. Those images take up lots of space in memory, and they are often compressed by an algorithm like JPEG to economize on that storage. And that algorithm uses what are called “circular functions,” which, being based on the circle, depend for their very life on… π. The figure shows how the quality of an original image (left) degrades as it is compressed more and more, as shown on the right.
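For the curious, those “circular functions” enter through the discrete cosine transform (DCT). The sketch below is a bare one-dimensional DCT in Python, with made-up illustrative pixel values; real JPEG applies a normalized two-dimensional version to 8×8 pixel blocks:

```python
import math

# Unnormalized 1-D discrete cosine transform (DCT-II): coefficient k measures
# how much of a cosine of frequency k is present in the signal.
def dct(signal):
    n = len(signal)
    return [sum(signal[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n))
            for k in range(n)]

# A smooth run of pixel brightnesses: nearly all of its energy lands in the
# first few coefficients; the rest can be discarded, which is the compression.
pixels = [52, 55, 61, 66, 70, 61, 64, 73]
coeffs = dct(pixels)
print([round(c, 1) for c in coeffs])
```

The more aggressively the small high-frequency coefficients are thrown away, the smaller the file and the worse the image, which is exactly the trade-off visible in the wildcat figure.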

I’ll close with an example of an analog computer which we can use to find the value of π. The computer consists of a piece of paper that is ruled with parallel lines 3 inches (say) apart, and a needle 3 inches long. Toss the needle so that it has an equal chance of landing anywhere on the paper, and an equal chance of being at any angle. Then it turns out that the chance of the needle intersecting a line on the piece of paper is 2/π, so that by repeatedly tossing the needle and counting the number of times it does hit a line we can estimate the value of π. Of course to find the value of π to any decent accuracy we need to toss the needle an awfully large number of times. The problem of finding the probability of a needle tossed this way was posed and solved by Georges-Louis Leclerc, Comte de Buffon in 1777, and the setup is now called Buffon’s Needle. This is just one example of an analog computer, in contrast to our beloved digital computers, and you can find much more about them in The Discrete Charm of the Machine.
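The same experiment also runs nicely on a digital computer. Here is a small Monte Carlo simulation of Buffon’s Needle in Python (a sketch, with both the line spacing and the needle length set to 1):

```python
import math
import random

def estimate_pi(tosses, seed=0):
    """Toss a needle of length 1 onto lines spaced 1 apart; count crossings."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(tosses):
        center = rng.uniform(0, 0.5)          # distance from needle center to nearest line
        angle = rng.uniform(0, math.pi / 2)   # acute angle between needle and the lines
        if center <= 0.5 * math.sin(angle):   # the needle reaches across the line
            hits += 1
    # hits / tosses approximates 2/pi, so pi is approximately 2 * tosses / hits
    return 2 * tosses / hits

print(estimate_pi(200_000))  # close to 3.14159, but it converges slowly
```

As the text warns, the convergence is painfully slow: the error shrinks only with the square root of the number of tosses.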

Ken Steiglitz is professor emeritus of computer science and senior scholar at Princeton University. His books include The Discrete Charm of the Machine, Combinatorial Optimization, A Digital Signal Processing Primer, and Snipers, Shills, and Sharks (Princeton). He lives in Princeton, New Jersey.

Ken Steiglitz: It’s the Number of Zeroes that Counts

We present the third installment in a series by The Discrete Charm of the Machine author Ken Steiglitz. You can find the first post here and the second, here.

 

The scales of space and time in our universe; in everyday life we hang out very near the center of this picture: 1 meter and 1 second.

As we’ll see in The Discrete Charm, the computer world is full of very big and very small numbers. For example, if your smartphone’s memory has a capacity of 32 GBytes, it means it can hold 32 billion bytes, or 32000000000 bytes. It’s awfully inconvenient and error-prone to count this many zeros, and it can get much worse, so scientists, who are used to dealing with very large and small numbers, just count the number of zeros. In this case the memory capacity is 3.2 × 10^10 bytes. At the other extreme, pulses in an electronic circuit might occur at the rate of a billion per second, so the time between pulses is a billionth of a second, 0.000000001, a nanosecond, 1 × 10^−9 seconds. In the time-honored scientific lingo, a factor of 10 is an “order of magnitude,” and back-of-the-envelope estimates often ignore factors of 2 or 3. What’s a factor of 2 or 3 between friends? What matters is the number of zeroes. In the last example, a nanosecond is 9 orders of magnitude smaller than a second.
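In code, “counting the zeroes” is just the base-10 logarithm rounded down. A few lines of Python (a sketch, not from the book) make the bookkeeping concrete:

```python
import math

# The order of magnitude of a positive number is its count of zeroes,
# i.e. the base-10 logarithm rounded down to an integer.
def order_of_magnitude(x):
    return math.floor(math.log10(x))

print(order_of_magnitude(32_000_000_000))  # 10: a 32 GByte memory holds ~10^10 bytes
print(order_of_magnitude(1e-9))            # -9: a nanosecond

# A nanosecond is 9 orders of magnitude smaller than a second:
print(order_of_magnitude(1) - order_of_magnitude(1e-9))  # 9
```

Note how the factor of 3.2 disappears entirely, just as the back-of-the-envelope convention intends.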

Such big and small numbers also come up in discussing the size of transistors, the number of them that fit on a chip, the speed of communication on the internet in bits per second, and so on. The figure shows the range of magnitudes we’re ever likely to encounter when we discuss the sizes of things and the time that things take. At the low extremes I indicate the size of an electron and the time between the crests of gamma-ray waves, just about the highest frequency we ever encounter. The electron is about 6 orders of magnitude smaller than a typical virus (and a single transistor on a chip); the frequency of gamma rays is about 10 orders of magnitude faster than a gigahertz computer chip.

To this computer scientist a machine like an automobile is pretty boring. It runs only one program, or maybe two if you count forward and reverse gear. With few exceptions it has four wheels, one engine, one steering wheel—and every car goes about as fast as any other, if it can move in traffic at all. I could take my father’s 1941 Plymouth out for a spin today and hardly anyone would notice. It cost about $845 in 1941 (for a four-door sedan), or about $14,000 in today’s dollars. In other words, in our order-of-magnitude world, it is a product that is practically frozen in time. On the other hand, my obsolete and clumsy laptop beats the first computer I ever used by 5 orders of magnitude in speed and memory, and 4 orders of magnitude in weight and volume. If you want to talk money, I remember paying about 50¢ a byte for extra memory for a small laboratory computer in 1971—8 orders of magnitude more expensive than today, or maybe 9 if you take inflation into account.

The number of zeros is roughly the logarithm (base-10), and plots like the figure are said to have logarithmic scales. You can see them in the chapter on Moore’s law in The Discrete Charm, where I need them to get a manageable picture of just how much progress has been made in computer technology over the last few decades. The shrinkage in size and speedup has been, in fact, exponential with the years—which means constant-size hops in the figure, year by year. Anything less than exponential growth would slow to a crawl. This distinction between exponential and slower-than-exponential growth also plays a crucial role in studying the efficiency of computer algorithms, a favorite pursuit of theoretical computer scientists and a subject I take up towards the end of the book.

Counting zeroes lets us fit the whole universe on a page.

Ken Steiglitz is professor emeritus of computer science and senior scholar at Princeton University. His books include The Discrete Charm of the Machine, Combinatorial Optimization, A Digital Signal Processing Primer, and Snipers, Shills, and Sharks. He lives in Princeton, New Jersey.

Ken Steiglitz: Garage Rock and the Unknowable

Here is the second post in a series by The Discrete Charm of the Machine author Ken Steiglitz. You can access the first post here

I sat down to draft The Discrete Charm of the Machine with the goal of explaining, without math, how we arrived at today’s digital world. It is a quasi-chronological story; I take what I need, when I need it, from the space of ideas. I start at the simplest point, describing why noise is a constant threat to information and how using discrete values (usually zeros and ones) affords protection and a permanence not possible with information in analog (continuous) form. From there I sketch the important ideas of digital signal processing (for sound and pictures), coding theory (for nearly error-free communication), complexity theory (for computation), and so on—a fine arc, I think, from the boomy and very analog console radios of my childhood to my elegant little internet radio.

Yet the path through the book is not quite so breezy and trouble-free. In the final three chapters we encounter three mysteries, each progressively more fundamental and thorny. I hope your curiosity and sense of wonder will be piqued; there are ample references to further reading. Here are the problems in a nutshell:

  1. Is it no harder to find a solution to a problem than to merely check a solution? (Does P = NP?) This question comes up in studying the relative difficulty of solving problems with a computing machine. It is a mathematical question, and is still unresolved after almost 40 years of attack by computer scientists.
    As I discuss in the book, there are plenty of reasons to believe that P is not equal to NP and most computer scientists come down on that side. But … no one knows for sure.
  2. Are the digital computers we use today as powerful—in a practical sense—as any we can build in this universe (the extended Church-Turing thesis)? This is a physics question, and for that reason is fundamentally different from the P=NP question. Its answer depends on how the universe works.
    The thesis is intimately tied to the problem of building machines that are essentially more powerful than today’s digital computers—the human brain is one popular candidate. The question runs deep: some believe there is magic to be found beyond the world of zeros and ones.
  3. Can a machine be conscious? Philosopher David Chalmers calls this the hard problem, and considers it “the biggest mystery.” It is not a question of mathematics, nor of physics, but of philosophy and cognitive science.

I want to emphasize that this is not merely the modern equivalent of asking how many angels could dance on the point of a pin. The answer has most serious consequences for us humans: it determines how we should treat our android creations, the inevitable products of our present rush to artificial intelligence. If machines are capable of suffering we have a moral responsibility to treat them compassionately.

My first reaction to the third question is that it is unanswerable. How can we know about the subjective mental life of anyone (or any thing) but ourselves? Philosopher Owen Flanagan called those who take this position mysterians, after the proto-punk band ? and the Mysterians. Michael Shermer joins this camp in his Scientific American column of July 1, 2018. I discuss the difficulty in the final chapter and remain agnostic—although I am hard-pressed even to imagine what form an answer would take.

I suggest, however, a pragmatic way around the big third question: Rather than risk harm, give the machines the benefit of the doubt. It is after all what we do for our fellow humans.

Ken Steiglitz is professor emeritus of computer science and senior scholar at Princeton University. His books include The Discrete Charm of the Machine, Combinatorial Optimization, A Digital Signal Processing Primer, and Snipers, Shills, and Sharks. He lives in Princeton, New Jersey.

 

Ken Steiglitz: When Caruso’s Voice Became Immortal

We’re excited to introduce a new series from Ken Steiglitz, computer science professor at Princeton University and author of The Discrete Charm of the Machine, out now. 

The first record to sell a million copies was Enrico Caruso’s 1904 recording of “Vesti la giubba.” There was nothing digital, or even electrical, about it; it was a strictly mechanical affair. In those days musicians would huddle around a horn that collected their sound waves, and that energy was coupled mechanically to a diaphragm and then to a needle that traced the waveforms on a wax or metal-foil cylinder or disc. For many years even the playback was completely mechanical, with a spring-wound motor and a reverse acoustical system that sent the waveform from what was often a 78 rpm shellac disc to a needle, diaphragm, and horn. Caruso almost single-handedly started a cultural revolution as the first recording star and became a household name—and millionaire (in 1904 dollars)—in the process. All without the benefit of electricity, and certainly purely analog from start to finish. Digital sound recording for the masses was 80 years in the future.

Enrico Caruso drew this self portrait on April 11, 1902 to commemorate his first recordings for RCA Victor. The process was completely analog and mechanical. As you can see, Caruso sang into a horn; there were no microphones. [Public domain, from Wikimedia Commons]

The 1904 Caruso recording I mentioned is perhaps the most famous single side ever made and is readily available online. It was a sensation, and music lovers who could afford it were happy to invest in the 78 rpm (or simply “78”) disc, not to mention the elaborate contraption that played it. In the early twentieth century a 78 cost about a dollar or so, but a 1904 dollar was worth about 30 of today’s dollars, a steep price for 2 minutes and 28 seconds of sound full of hisses, pops, and crackles, and practically no bass or treble. In fact the disc surface noise in the versions you’re likely to hear today has been cleaned up and the sound quality greatly improved—by digital processing, of course. But being able to hear Caruso in your living room was the sensation of the new century.

The poor sound quality of early recordings was not the worst of it. That could be fixed, and eventually it was. The long-playing stereo record (now usually called just “vinyl”) made the 1960s and 70s the golden age of high fidelity, and the audiophile was born. I especially remember, for example, the remarkable sound of the Mercury Living Presence and Deutsche Grammophon labels. The market for high-quality home equipment boomed, and it was easy to spend thousands of dollars on the latest high-tech gear. But all was not well. The pressure of the stylus, usually diamond, on the vinyl disc wore both. There is about a half mile of groove on an LP record, and the stylus that tracks it has a very sharp, very hard tip; records wear out. Not as quickly as the shellac discs of the 20s and 30s, but they wear out.

The noise problem for analog recordings is exacerbated when many tracks are combined, a standard practice in studio work in the recording industry. Sound in analog form is just inherently fragile; its quality deteriorates every time it is copied or played back on a turntable or past a tape head.

Everything changed in 1982 with the introduction of the compact disc (CD), which was digital. Each CD holds about 400 million samples of a 74-minute stereo sound waveform, each sample represented by a 2-byte number (a byte is 8 bits). In digital form those 800 million bytes, or 6.4 billion bits (zeros or ones), can be stored and copied forever, absolutely perfectly. Those 6.4 billion bits are quite safe for as long as our civilization endures.
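Those figures are easy to check with back-of-the-envelope arithmetic. A quick sketch in Python; the 44,100-samples-per-second rate comes from the CD standard and is an assumption not stated in the text:

```python
# Rough check of the CD capacity figures quoted above.
# Assumed (not stated in the text): the CD standard's 44,100 samples
# per second per channel, 2 stereo channels, 2 bytes per sample.
SAMPLE_RATE = 44_100       # samples per second, per channel
CHANNELS = 2
BYTES_PER_SAMPLE = 2
MINUTES = 74

samples = SAMPLE_RATE * CHANNELS * MINUTES * 60
bytes_total = samples * BYTES_PER_SAMPLE
bits_total = bytes_total * 8

print(f"{samples / 1e6:.0f} million samples")    # 392 -- "about 400 million"
print(f"{bytes_total / 1e6:.0f} million bytes")  # 783 -- "about 800 million"
print(f"{bits_total / 1e9:.1f} billion bits")    # 6.3 -- "about 6.4 billion"
```

The small gaps between the exact products and the round numbers in the text are just the usual rounding in popular accounts.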

There are 19th century tenors whose voices we will never hear. But Caruso, Corelli, Domingo, Pavarotti… their digital voices are truly immortal.

Ken Steiglitz is professor emeritus of computer science and senior scholar at Princeton University. His books include The Discrete Charm of the Machine, Combinatorial Optimization, A Digital Signal Processing Primer, and Snipers, Shills, and Sharks. He lives in Princeton, New Jersey.

Browse our 2019 Mathematics Catalog

Our new Mathematics catalog includes an exploration of mathematical style through 99 different proofs of the same theorem; an outrageous graphic novel that investigates key concepts in mathematics; and a remarkable journey through hundreds of years to tell the story of how our understanding of calculus has evolved, how this has shaped the way it is taught in the classroom, and why calculus pedagogy needs to change.

If you’re attending the Joint Mathematics Meetings in Baltimore this week, you can stop by Booth 500 to check out our mathematics titles!

 

Integers and permutations—two of the most basic mathematical objects—are born of different fields and analyzed with different techniques. Yet when the Mathematical Sciences Investigation team of crack forensic mathematicians, led by Professor Gauss, begins its autopsies of the victims of two seemingly unrelated homicides, Arnie Integer and Daisy Permutation, they discover the most extraordinary similarities between the structures of each body. Prime Suspects is a graphic novel that takes you on a voyage of forensic discovery, exploring some of the most fundamental ideas in mathematics. Beautifully drawn and wittily and exquisitely detailed, it is a once-in-a-lifetime opportunity to experience mathematics like never before.

Ording 99 Variations on a Proof book cover

99 Variations on a Proof offers a multifaceted perspective on mathematics by demonstrating 99 different proofs of the same theorem. Each chapter solves an otherwise unremarkable equation in distinct historical, formal, and imaginative styles that range from Medieval, Topological, and Doggerel to Chromatic, Electrostatic, and Psychedelic. With a rare blend of humor and scholarly aplomb, Philip Ording weaves these variations into an accessible and wide-ranging narrative on the nature and practice of mathematics. Readers, no matter their level of expertise, will discover in these proofs and accompanying commentary surprising new aspects of the mathematical landscape.

 

Bressoud Calculus Reordered book cover

Exploring the motivations behind calculus’s discovery, Calculus Reordered highlights how this essential tool of mathematics came to be. David Bressoud explains why calculus is credited to Isaac Newton and Gottfried Leibniz in the seventeenth century, and how its current structure is based on developments that arose in the nineteenth century. Bressoud argues that a pedagogy informed by the historical development of calculus presents a sounder way for students to learn this fascinating area of mathematics.

Browse our 2019 Computer Science Catalog

Our new Computer Science catalog includes an introduction to computational complexity theory and its connections and interactions with mathematics; a book about the genesis of the digital idea and why it transformed civilization; and an intuitive approach to the mathematical foundation of computer science.

If you’re attending the Information Theory and Applications workshop in San Diego this week, you can stop by the PUP table to check out our computer science titles!

 

Mathematics and Computation provides a broad, conceptual overview of computational complexity theory—the mathematical study of efficient computation. Avi Wigderson illustrates the immense breadth of the field, its beauty and richness, and its diverse and growing interactions with other areas of mathematics. With important practical applications to computer science and industry, computational complexity theory has evolved into a highly interdisciplinary field that has shaped and will further shape science, technology, and society. 

 

Steiglitz Discrete Charm of the Machine book cover

A few short decades ago, we were informed by the smooth signals of analog television and radio; we communicated using our analog telephones; and we even computed with analog computers. Today our world is digital, built with zeros and ones. Why did this revolution occur? The Discrete Charm of the Machine explains, in an engaging and accessible manner, the varied physical and logical reasons behind this radical transformation, and challenges us to think about where its future trajectory may lead.

Lewis Zax Essential Discrete Mathematics for Computer Science

Discrete mathematics is the basis of much of computer science, from algorithms and automata theory to combinatorics and graph theory. This textbook covers the discrete mathematics that every computer science student needs to learn. Guiding students quickly through thirty-one short chapters that discuss one major topic each, Essential Discrete Mathematics for Computer Science can be tailored to fit the syllabi for a variety of courses. Fully illustrated in color, it aims to teach mathematical reasoning as well as concepts and skills by stressing the art of proof.

Calling Girls Who Love Math: Register for Girls’ Angle’s SUMIT 2019!

Get ready for a new mathematical adventure! SUMIT 2019 is coming April 6 and 7 with an all-new plot and math problems galore.

If you’re a 6th-11th grade girl who loves math, you’ll love SUMIT! There will be challenges for all levels and key leadership roles to fulfill. You’ll emerge with an even greater love of math, new friends, and lasting memories.

Princeton University Press has been a major sponsor of SUMIT since its inception in 2012, and is always proud to promote this magical escape-the-room-esque event where girls join forces to overcome challenges and become the heroines of an elaborate mathematical saga. The event offers one of the most memorable opportunities to do math while forming lasting friendships with like-minded peers. Together, girls build mathematical momentum and frequently surprise themselves with what they’re able to solve. All previous SUMITs have garnered overall ratings of 10 out of 10 by participants.

Created by Girls’ Angle, a nonprofit math club for girls, together with a team of college students, graduate students, and mathematicians, SUMIT 2019 takes place in Cambridge, MA.

Registration opens at 2 pm ET on Sunday, February 10 on a first-come-first-served basis and there are limited slots, so register quickly!

For more information, please visit http://girlsangle.org/page/SUMIT/SUMIT.html.

Edward Burger on Making Up Your Own Mind

We solve countless problems—big and small—every day. With so much practice, why do we often have trouble making simple decisions—much less arriving at optimal solutions to important questions? Are we doomed to this muddle—or is there a practical way to learn to think more effectively and creatively? In this enlightening, entertaining, and inspiring book, Edward Burger shows how we can become far better at solving real-world problems by learning creative puzzle-solving skills using simple, effective thinking techniques. Making Up Your Own Mind teaches these techniques—including how to ask good questions, fail and try again, and change your mind—and then helps you practice them with fun verbal and visual puzzles. A book about changing your mind and creating an even better version of yourself through mental play, Making Up Your Own Mind will delight and reward anyone who wants to learn how to find better solutions to life’s innumerable puzzles.

What are the practical applications of this book for someone who wants to improve their problem-solving skills?

The practicality goes back to the practical elements of one’s own education. Unfortunately, many today view “formal education” as the process of learning, but what they really mean is “knowing”—knowing the facts, dates, methodologies, templates, algorithms, and the like. Once students demonstrate that newly found knowledge by reproducing it back to the instructor on a paper or test, they quickly let it all go from their short-term memories and move on. Today this kind of “knowledge” can largely be found via any search engine on any smart device. So in our technological information age, what should “formal education” mean? Instead of focusing solely on “knowing,” it must also intentionally teach “growing”—growing the life of the mind. The practices offered in this volume attempt to do just that: offer readers a way to hone and grow their own thinking while sharpening their own minds. Those practices can then be directly applied to their everyday lives as they try to see the issues around them with greater clarity and creativity and make better decisions. The practical applications certainly include enhanced abilities to create better solutions to all the problems they encounter. But from my vantage point as an educator, the ultimate practical application is to help readers flourish and continue along a lifelong journey in which they become better versions of themselves tomorrow than they are today.

How has applying the problem-solving skills described in your book helped you in your everyday life?

In my leadership role as president of Southwestern University, I am constantly facing serious and complex challenges that need to be solved or opportunities to be seized. Those decisions require wisdom, creativity, and a focus on the macro issues while being mindful of the micro implications. Then action is required, along with careful follow-up on the consequences of those decisions moving forward. I use the practices of effective thinking outlined in this book—including my personal favorite, effective failure—in every aspect of my work as president, and I believe they have served me well. Effective failure, by the way, is the practice of intentionally not leaving a mistake or misstep until a new insight or deeper understanding is realized. It is not enough to say, “Oh, that didn’t work, I’ll try something else.” That’s tenacity, which is wonderful, but tenacity alone is ineffective failure. Before trying that something else, this book offers practical but mindful ways of using one’s own errors as wise guides to deeper understanding that naturally lead to what to consider next. I also believe that through these varied practices of thinking I continue to grow as an educator, as a leader, as a mathematician, and as an individual who has committed his professional life to trying to make the world better by inspiring others to be better.

Can we really train our brains to be better problem solvers?

Yes!

Would you care to elaborate on that last, one-word response?

Okay, okay—but I hope I earned some partial credit for being direct and to the point. Many believe that their minds are the way they are and cannot be changed. In fact, we are all works in progress and capable of change—not the disruptive change that makes us into someone we’re not, but rather incremental change that allows us to be better and better versions of ourselves as we grow and evolve. That change in mindset does not require us to “think harder” (as so many people tell us), but rather to “think differently” (which is not hard at all once we embrace different practices of thinking, analysis, and creativity). Just as we can improve our tennis game, our poker skills, and our playing of the violin, we can improve our thinking and our minds. This book offers practical and straightforward ways to embrace those enhanced practices, along with puzzles for practicing that art in an entertaining but thought-provoking way.

Why do you refer to “puzzle-solving” rather than the more typical phrase, “problem-solving?”

Because throughout our lives we all face challenges and conundrums that need to be confronted and resolved, as well as opportunities and possibilities that need to be either seized or avoided. Those negative challenges and possibilities are the problems in our lives. But everything we face—positive, negative, or otherwise—is a puzzle that life presents to us. Thus, I do not believe we should call mindful practices that empower us to find innovative or smart solutions “problem-solving.” We should call those practices that enhance our thinking about all the varied puzzles in our lives what they truly are: “puzzle-solving.” Finally, I believe we thrive within an optimistic perspective: no one likes problems, but most of us do enjoy puzzles.

How did this book come about?

As with most things, this project naturally evolved from a confluence of many previous experiences. My close collaborator, Michael Starbird, and I have been thinking about effective thinking, collaboratively and individually, for dozens of years. That effort resulted in our book, The 5 Elements of Effective Thinking (published by Princeton University Press and referenced in this latest work). Then, when I began my work as president of Southwestern University over five years ago, I wanted to offer a class that was not a “typical” mathematics course, but rather a class that would capture the curiosity of all students who wonder how they can amplify their own abilities to grow and think more effectively—originally, wisely, and creatively. So I created a course entitled Effective Thinking through Creative Puzzle-Solving, and I have been teaching it every year at Southwestern since 2016.

How did your students change through their “puzzle-solving” journey?

Of course that question is best answered by my students at Southwestern University, and I invite you to visit our campus and talk with them to learn more. From my perspective, I have enjoyed seeing them become more open-minded, think in more creative and original ways (“thinking outside the box”), practice a more mindful perspective, and make time for themselves to be contemplative and reflective. Also, I have them write a number of essays (which I personally grade), and over the course of our time together, I have seen their writing and overall communication improve. Obviously, I am very proud of my students.

Edward B. Burger is the president of Southwestern University, a mathematics professor, and a leading teacher on thinking, innovation, and creativity. He has written more than seventy research articles, video series, and books, including The 5 Elements of Effective Thinking (with Michael Starbird) (Princeton), and has delivered hundreds of addresses worldwide. He lives in Georgetown, Texas.

Brian Kernighan on Millions, Billions, Zillions

Numbers are often intimidating, confusing, and even deliberately deceptive—especially when they are really big. The media loves to report on millions, billions, and trillions, but frequently makes basic mistakes or presents such numbers in misleading ways. And misunderstanding numbers can have serious consequences, since they can deceive us in many of our most important decisions, including how to vote, what to buy, and whether to make a financial investment. In this short, accessible, enlightening, and entertaining book, leading computer scientist Brian Kernighan teaches anyone—even diehard math-phobes—how to demystify the numbers that assault us every day. Giving you the simple tools you need to avoid being fooled by dubious numbers, Millions, Billions, Zillions is an essential survival guide for a world drowning in big—and often bad—data.

Why is it so important to be able to spot “bad statistics?”

We use statistical estimates all the time to decide where to invest, or what to buy, or what politicians to believe. Does a college education pay off financially? Is marijuana safer than alcohol? What brands of cars are most reliable? Do guns make society more dangerous? We make major personal and societal decisions about such topics, based on numbers that might be wrong or biased or cherry-picked. The better the statistics, the more accurately we can make good decisions based on them.

Can you give a recent example of numbers being presented in the media in a misleading way?

“No safe level of alcohol, new study concludes.” There were quite a few variants of this headline in late August. There’s no doubt whatsoever that heavy drinking is bad for you, but this study was actually a meta-analysis that combined the results of nearly 700 studies covering millions of people. By combining results, it concluded that there was a tiny increase in risk in going from zero drinks a day to one drink, and more risk for higher numbers. But the result is based on correlation, not necessarily causation, and ignores potentially related factors like smoking, occupational hazards, and who knows what else. Fortunately, quite a few news stories pointed out flaws in the study’s conclusion. To quote from an excellent review in the New York Times, “[The study] found that, over all, harms increased with each additional drink per day, and that the overall harms were lowest at zero. That’s how you get the headlines.”

What is an example of how a person could spot potential errors in big numbers?

One of the most effective techniques for dealing with big numbers is to ask, “How would that affect me personally?” For example, a few months ago a news story said that a proposed bill in California would offer free medical care for every resident, at a cost of $330 million per year. The population of California is nearly 40 million, so each person’s share of the cost would be less than $10. Sounds like a real bargain, doesn’t it? But given what we know about the endlessly rising costs of health care, it can’t possibly be right. In fact, the story was subsequently corrected: the cost of the bill would be $330 billion, so each person’s share would be more like $10,000. Asking “What’s my share?” is a good way to assess big numbers.
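The “What’s my share?” check is simple enough to sketch in a few lines of Python (a minimal illustration using the population figure quoted above; the function name is mine, not Kernighan’s):

```python
# "What's my share?" -- divide a headline cost by the population it covers
# and ask whether the per-person figure is plausible.
CA_POPULATION = 40_000_000  # "nearly 40 million", as quoted above

def share_per_person(total_cost, population=CA_POPULATION):
    """Return each person's share of a total cost, in dollars."""
    return total_cost / population

print(share_per_person(330e6))  # $330 million -> $8.25 each: implausibly cheap
print(share_per_person(330e9))  # $330 billion -> $8,250 each: plausible for health care
```

The two calls reproduce the two versions of the story: the original (mistaken) $330 million and the corrected $330 billion.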

In your book you talk about Little’s Law. Can you please describe it and explain why it’s useful?

Little’s Law is a kind of conservation law that can help you assess the accuracy of statements like “every week, 10,000 Americans turn 65.” Little’s Law describes the relationship between the time period (every week), the number of things involved (10,000 Americans), and the event (turning 65). Suppose there are 320 million Americans, each of whom is born, lives to age 80, then dies. Then 4 million people are born each year, 4 million die, and in fact there are 4 million at any particular age. Now divide by 365 days in a year, to see that about 11,000 people turn 65 on any particular day. So the original statement can’t be right—it should have said “per day,” not “per week.” Of course this ignores birth rate, life expectancy, and immigration, but Little’s Law is plenty good enough for spotting significant errors, like using weeks instead of days.
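The round-number estimate above can be reproduced directly (a sketch using only the figures in the answer: a steady population of 320 million and a uniform 80-year lifespan):

```python
# Little's Law sanity check: in a steady-state population with uniform
# lifespans, the number of people reaching any given age per day is
# population / lifespan / 365.
POPULATION = 320_000_000   # "suppose there are 320 million Americans"
LIFESPAN_YEARS = 80

per_year = POPULATION / LIFESPAN_YEARS  # people reaching any given age each year
per_day = per_year / 365                # people turning 65 on any given day

print(round(per_year))  # 4,000,000 per year
print(round(per_day))   # ~11,000 per day -- so "10,000 per week" should be "per day"
```

As the answer notes, this ignores birth rates, life expectancy trends, and immigration, but it is plenty accurate enough to catch a weeks-versus-days error.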

Is presenting numbers in ways designed to mislead more prevalent in the era of “alternative facts” than in the past?

I don’t know whether deceptive presentations are more prevalent today than they might have been, say, 20 years ago, but it’s not hard to find presentations that could mislead someone who isn’t paying attention. The technology for producing deceptive graphs and charts is better than it used to be, and social media makes it all too easy to spread them rapidly and widely.

Brian W. Kernighan is professor of computer science at Princeton University. His many books include Understanding the Digital World: What You Need to Know about Computers, the Internet, Privacy, and Security. He lives in Princeton, New Jersey.