Marc Chamberland: Why π is important

On March 14, groups across the country will gather for Pi Day, a nerdy celebration of the number Pi, replete with fun facts about this mathematical constant, copious amounts of pie, and of course, recitations of the digits of Pi. But why do we care about so many digits of Pi? How big is the room you want to wallpaper anyway? In 1706, 100 digits of Pi were known, and by 2013 over 12 trillion digits had been computed. I’ll give you five reasons why someone might claim that many digits of Pi are important, but they’re not all good.

Reason 1
It provides accuracy for scientific measurements


This argument had merit when only a few digits were known, but today this reason is as empty as space. The observable universe is about 93 billion light years across, and the radius of a hydrogen atom is about 0.1 nanometers. So knowing Pi to 38 places is enough to tell you precisely how many hydrogen atoms you need to encircle the universe. For any mechanical calculation, 3.1415 is probably more than enough precision.
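You can sanity-check this claim with Python’s decimal module. The sketch below is my own illustration: it assumes roughly 9.4607 × 10¹⁵ meters per light year and takes the observable universe to be 93 billion light years across, then asks how far off the universe’s circumference would be if we used only 38 digits of Pi.

```python
from decimal import Decimal, getcontext

getcontext().prec = 60

# Pi to 50 digits (reference value) and Pi rounded to 38 significant digits
PI_50 = Decimal("3.14159265358979323846264338327950288419716939937510")
PI_38 = Decimal("3.1415926535897932384626433832795028842")

LIGHT_YEAR_M = Decimal("9.4607e15")        # meters per light year (approximate)
diameter = Decimal("93e9") * LIGHT_YEAR_M  # universe, 93 billion light years across
error = abs(diameter * (PI_50 - PI_38))    # resulting error in the circumference

hydrogen = Decimal("1e-10")                # ~0.1 nanometers, as in the article
print(error < hydrogen)                    # True: off by less than one atom
```

The error works out to a few trillionths of a meter, comfortably smaller than a hydrogen atom.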

Reason 2
It’s neat to see how far we can go


It’s true that great feats have been accomplished and discoveries made in the name of exploration. Ingenious techniques have been designed to crank out many digits of Pi, and some of these ideas have led to remarkable discoveries in computing. But while this “because it is there” approach is beguiling, just because we can explore some phenomenon doesn’t mean we’ll find something valuable. Curiosity is great, but harnessing that energy with insight will take you farther.

Reason 3
Computer Integrity


The digits of Pi help with testing and developing new algorithms. The Japanese mathematician Yasumasa Kanada used two different formulas to generate and check over one trillion digits of Pi. Getting agreement after all those arithmetic operations and data transfers is strong evidence that the computers are functioning error-free. A spin-off of the expansive Pi calculations has been the development of fast multiplication methods based on the Fast Fourier Transform, a ground-breaking tool used in digital signal processing.

Reason 4
It provides evidence that Pi is normal


A number is “normal” if any string of digits appears with the expected frequency. For example, you expect the digit 4 to appear 1/10 of the time, or the string 28 to appear 1/100 of the time. It is suspected that Pi is normal, and the first trillion digits offer evidence: each digit appears about 100 billion times. But proving that Pi is normal has been elusive. Why is the normality of numbers important? A normal number could be used as a random number generator. Computer simulations are a vital tool in modeling any dynamic phenomenon that involves randomness. Applications abound, including climate science, physiological drug testing, computational fluid dynamics, and financial forecasting. If easily calculated numbers such as Pi can be proven to be normal, these precisely defined numbers could be used, paradoxically, in the service of generating randomness.
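You can tally digit frequencies yourself. This sketch streams decimal digits of Pi using Gibbons’ spigot algorithm (a small classroom method, not the formulas used in record computations) and counts how often each digit appears among the first 1,000:

```python
from collections import Counter
from itertools import islice

def pi_digits():
    """Yield the decimal digits of Pi one at a time (Gibbons' spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4*q + r - t < n*t:
            yield n  # the next digit is certain; emit it and rescale
            q, r, n = 10*q, 10*(r - n*t), (10*(3*q + r)) // t - 10*n
        else:
            # not enough information yet; fold in the next term of the series
            q, r, t, k, n, l = (q*k, (2*q + r)*l, t*l, k + 1,
                                (q*(7*k + 2) + r*l) // (t*l), l + 2)

# Count each digit's frequency among the first 1,000 digits of Pi
counts = Counter(islice(pi_digits(), 1000))
for d in range(10):
    print(d, counts[d])
```

Each digit shows up roughly 100 times, just as normality would predict; of course, no finite sample can prove normality.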

Reason 5
It helps us understand the prime numbers


Pi is intimately connected to the prime numbers. There are formulas involving products of infinitely many numbers that connect the primes and Pi. The knowledge flows both ways: knowing many primes helps one calculate Pi, and knowing many digits of Pi allows one to generate many primes. The Riemann Hypothesis—an unsolved 150-year-old mathematical problem whose solution would earn the solver one million dollars—is intimately connected to both the primes and the number Pi.
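One classical example of such a formula, due to Euler, expresses π²/6 as a product with one factor for every prime:

π²/6 = (2²/(2² – 1)) × (3²/(3² – 1)) × (5²/(5² – 1)) × (7²/(7² – 1)) × ⋯

Truncating the product after many primes gives an approximation to Pi, which is exactly the kind of two-way traffic between the primes and Pi described above.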

And you thought that Pi was only good for circles.

Marc Chamberland is the Myra Steele Professor of Mathematics and Natural Science at Grinnell College. His research in several areas of mathematics, including studying Pi, has led to many publications and speaking engagements in various countries. His interest in popularizing mathematics resulted in the recent book Single Digits: In Praise of Small Numbers with Princeton University Press. He also maintains his YouTube channel Tipping Point Math that tries to make mathematics accessible to a general audience. He is currently working on a book about the number Pi.

David Alan Grier: The Light of Computation

by David Alan Grier

When one figure steps into the light, others can be seen in the reflected glow. The movie Hidden Figures has brought a little light to the contributions of NASA’s human computers. Women such as Katherine Goble Johnson and her colleagues of the West Area Computers supported the manned space program by doing hours of repetitive, detailed orbital calculations. These women were not the first mathematical workers to toil in the obscurity of organized scientific calculation. The history of organized computing groups can be traced back to the 18th century, when a French astronomer convinced three friends to help him calculate the date that Halley’s comet would return to view. Like Johnson, most human computers received little recognition for their labors. For many, only their families appreciated the work that they did. For some, not even their closest relatives knew of their role in the scientific community.

My grandmother confessed her training as a human computer only at the very end of her life. At one dinner, she laid her fork on the table and expressed regret that she had never used calculus. Since none of us believed that she had gone to college, we dismissed the remark and moved the conversation in a different direction. Only after her passing did I find the college records that confirmed she had taken a degree in mathematics from the University of Michigan in 1921. The illumination from those records showed that she was not alone. Half of the twelve mathematics majors in her class were women. Five of those six had been employed as human computers or statistical clerks.

By 1921, organized human computing was fairly common in industrialized countries. The governments of the United States, Germany, France, Great Britain, Japan, and Russia supported groups that did calculations for nautical almanacs, national surveys, agricultural statistics, weapons testing, and weather prediction. The British Association for the Advancement of Science operated a computing group. So did the Harvard Observatory, Iowa State University, and the University of Indiana. One school, University College London, published a periodical for these groups, Tracts for Computers.

While many of these human computers were women, most were not. Computation was considered to be a form of clerical work, which was still a career dominated by men. However, human computers tended to be individuals who faced economic or social barriers to their careers. These barriers prevented them from becoming scientists or engineers in spite of their talents. In the book When Computers Were Human, I characterized them as “Blacks, women, Irish, Jews and the merely poor.” One of the most prominent computing groups of the 20th century, the Mathematical Tables Project, hired only the impoverished. It operated during the Great Depression and recruited its 450 computers from New York City’s unemployment rolls.

During its 10 years of operations, the Math Tables Project toiled in obscurity. Only a few members of the scientific community recognized its contributions. Hans Bethe asked the group to do the calculations for a paper that he was writing on the physics of the sun. The engineer Philip Morse brought problems from his colleagues at MIT. The pioneering computer scientist John von Neumann asked the group to test a new mathematical optimization technique after he was unable to test it on the new ENIAC computer. However, most scientists maintained a distance between themselves and the Mathematical Tables Project. One member of the National Academy of Sciences explained his reservations about the Project with an argument that came to be known as the Computational Syllogism. Scientists, he argued, are successful people. The poor, he asserted, are not successful. Therefore, he concluded, the poor cannot be scientists and hence should not be employed in computation.

Like the human computers of NASA, the Mathematical Tables Project had a brief moment in the spotlight. In 1964, the leader of the Project, Gertrude Blanch, received a Federal Woman’s Award from President Lyndon Johnson for her contributions to the United States Government. Yet her light did not shine far enough to bring recognition to the 20 members of the Math Tables Project who published a book, later that year, on the methods of scientific computing. The volume became one of the best-selling scientific books in history. Nonetheless, few people knew that it was written by former human computers.

The attention to Katherine Goble Johnson is welcome because it reminds us that science is a community endeavor. When we recognize the authors of scientific articles, or applaud the distinguished men and women who receive Nobel Prizes (or in the case of computer science, Turing Awards), we often fail to see the community members who were essential to the scientific work. At least in Hidden Figures, they receive a little of the reflected light.

David Alan Grier is the author of When Computers Were Human. He writes “Global Code” for Computer magazine and produces the podcast “How We Manage Stuff.” He can be reached at grier@gwu.edu.

Cipher challenge #3 from Joshua Holden: Binary ciphers

The Mathematics of Secrets by Joshua Holden takes readers on a tour of the mathematics behind cryptography. Most books about cryptography are organized historically, or around how codes and ciphers have been used in government and military intelligence or bank transactions. Holden instead focuses on how mathematical principles underpin the ways that different codes and ciphers operate. Discussing the majority of ancient and modern ciphers currently known, The Mathematics of Secrets sheds light on both code making and code breaking. Over the next few weeks, we’ll be running a series of cipher challenges from Joshua Holden. The last post was on subliminal channels. Today’s is on binary ciphers:

Binary numerals, as most people know, represent numbers using only the digits 0 and 1.  They are very common in modern ciphers due to their use in computers, and they frequently represent letters of the alphabet.  A numeral like 10010 could represent the (1 · 2⁴ + 0 · 2³ + 0 · 2² + 1 · 2 + 0)th = 18th letter of the alphabet, or r.  So the entire alphabet would be:

 plaintext:   a     b     c     d     e     f     g     h     i     j
ciphertext: 00001 00010 00011 00100 00101 00110 00111 01000 01001 01010

 plaintext:   k     l     m     n     o     p     q     r     s     t
ciphertext: 01011 01100 01101 01110 01111 10000 10001 10010 10011 10100

 plaintext:   u     v     w     x     y     z
ciphertext: 10101 10110 10111 11000 11001 11010
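The whole table follows from the numbering a = 1 through z = 26; here is a small Python sketch (my own illustration) that generates the five-digit numeral for each letter of a word:

```python
def to_binary(word):
    # Letter k of the alphabet -> 5-digit binary numeral (a = 00001, ..., z = 11010)
    return [format(ord(c) - ord('a') + 1, '05b') for c in word.lower()]

print(to_binary("pi"))  # ['10000', '01001']
```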

The first use of a binary numeral system in cryptography, however, was well before the advent of digital computers. Sir Francis Bacon alluded to this cipher in 1605 in his work Of the Proficience and Advancement of Learning, Divine and Humane and published it in 1623 in the enlarged Latin version De Augmentis Scientiarum. In this system not only the meaning but the very existence of the message is hidden in an innocuous “covertext.” We will give a modern English example.

Suppose we want to encrypt the word “not” into the covertext “I wrote Shakespeare.” First convert the plaintext into binary numerals:

   plaintext:   n      o     t
  ciphertext: 01110  01111 10100

Then stick the digits together into a string:

    011100111110100

Now we need what Bacon called a “biformed alphabet,” that is, one where each letter can have a “0-form” and a “1-form.” We will use roman letters for our 0-form and italic for our 1-form. Then for each letter of the covertext, if the corresponding digit in the ciphertext is 0, use the 0-form, and if the digit is 1 use the 1-form:

    0 11100 111110100xx
    I wrote Shakespeare.

Any leftover letters can be ignored, and we leave in spaces and punctuation to make the covertext look more realistic. Of course, it still looks odd with two different typefaces—Bacon’s examples were more subtle, although it’s a tricky business to get two alphabets that are similar enough to fool the casual observer but distinct enough to allow for accurate decryption.
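To mechanize the hiding step, here is a sketch of my own (not Bacon’s) that substitutes lowercase for the 0-form and uppercase for the 1-form, since a terminal can’t show italics:

```python
def bacon_encrypt(plaintext, covertext):
    # 5-bit numeral per letter (a = 1, ..., z = 26), as in the table above
    bits = ''.join(format(ord(c) - ord('a') + 1, '05b') for c in plaintext)
    out, i = [], 0
    for ch in covertext:
        if ch.isalpha() and i < len(bits):
            # 0-form = lowercase, 1-form = uppercase; spaces/punctuation pass through
            out.append(ch.upper() if bits[i] == '1' else ch.lower())
            i += 1
        else:
            out.append(ch)  # leftover letters keep their original form
    return ''.join(out)

print(bacon_encrypt("not", "i wrote shakespeare."))  # i WROte SHAKEsPeare.
```

The capitalization pattern carries exactly the bit string 011100111110100 from the example above.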

Ciphers with binary numerals were reinvented many years later for use with the telegraph and then the printing telegraph, or teletypewriter. The first of these were technically not cryptographic since they were intended for convenience rather than secrecy. We could call them nonsecret ciphers, although for historical reasons they are usually called codes or sometimes encodings. The most well-known nonsecret encoding is probably the Morse code used for telegraphs and early radio, although Morse code does not use binary numerals. In 1833, Gauss, whom we met in Chapter 1, and the physicist Wilhelm Weber invented probably the first telegraph code, using essentially the same system of 5 binary digits as Bacon. Jean-Maurice-Émile Baudot used the same idea for his Baudot code when he invented his teletypewriter system in 1874. And the Baudot code is the one that Gilbert S. Vernam had in front of him in 1917 when his team at AT&T was asked to investigate the security of teletypewriter communications.

Vernam realized that he could take the string of binary digits produced by the Baudot code and encrypt it by combining each digit from the plaintext with a corresponding digit from the key according to the rules:

0 ⊕ 0 = 0
0 ⊕ 1 = 1
1 ⊕ 0 = 1
1 ⊕ 1 = 0

For example, the digits 10010, which ordinarily represent 18, and the digits 01110, which ordinarily represent 14, would be combined to get:

1 0 0 1 0
0 1 1 1 0


1 1 1 0 0

This gives 11100, which ordinarily represents 28—not the usual sum of 18 and 14.
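These rules are what we now call the exclusive-or (XOR) of the two digits, and they are easy to sketch in Python:

```python
def combine(a, b):
    # XOR corresponding binary digits: 1 when the digits differ, 0 when they match
    return ''.join('0' if x == y else '1' for x, y in zip(a, b))

cipher = combine("10010", "01110")
print(cipher)                    # 11100
print(combine(cipher, "01110"))  # 10010 -- combining with the key again undoes it
```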

Some of the systems that AT&T was using were equipped to automatically send messages using a paper tape, which could be punched with holes in 5 columns—a hole indicated a 1 in the Baudot code and no hole indicated a 0. Vernam configured the teletypewriter to combine each digit represented by the plaintext tape with the corresponding digit from a second tape punched with key characters. The resulting ciphertext was then sent over the telegraph lines as usual.

At the other end, Bob feeds an identical copy of the key tape through the same circuitry. Notice that doing the same operation twice gives you back the original value for each rule:

(0 ⊕ 0) ⊕ 0 = 0 ⊕ 0 = 0
(0 ⊕ 1) ⊕ 1 = 1 ⊕ 1 = 0
(1 ⊕ 0) ⊕ 0 = 1 ⊕ 0 = 1
(1 ⊕ 1) ⊕ 1 = 0 ⊕ 1 = 1

Thus the same operation at Bob’s end cancels out the key, and the teletypewriter can print the plaintext. Vernam’s invention and its further developments became extremely important in modern ciphers such as the ones in Sections 4.3 and 5.2 of The Mathematics of Secrets.

But let’s finish this post by going back to Bacon’s cipher.  I’ve changed it up a little — the covertext below is made up of two different kinds of words, not two different kinds of letters.  Can you figure out the two different kinds and decipher the hidden message?

It’s very important always to understand that students and examiners of cryptography are often confused in considering our Francis Bacon and another Bacon: esteemed Roger. It is easy to address even issues as evidently confusing as one of this nature. It becomes clear when you observe they lived different eras.

Answer to Cipher Challenge #2: Subliminal Channels

Given the hints, a good first assumption is that the ciphertext numbers have to be combined in such a way as to get rid of all of the fractions and give a whole number between 1 and 52.  If you look carefully, you’ll see that 1/5 is always paired with 3/5, 2/5 with 1/5, 3/5 with 4/5, and 4/5 with 2/5.  In each case, twice the first one plus the second one gives you a whole number:

2 × (1/5) + 3/5 = 5/5 = 1
2 × (2/5) + 1/5 = 5/5 = 1
2 × (3/5) + 4/5 = 10/5 = 2
2 × (4/5) + 2/5 = 10/5 = 2

Also, twice the second one minus the first one gives you a whole number:

2 × (3/5) – 1/5 = 5/5 = 1
2 × (1/5) – 2/5 = 0/5 = 0
2 × (4/5) – 3/5 = 5/5 = 1
2 × (2/5) – 4/5 = 0/5 = 0

Applying

P1 = 2 × C1 + C2

to the ciphertext gives the first plaintext:

39 31 45 45 27 33 31 40 47 39 28 31 44 41
 m  e  s  s  a  g  e  n  u  m  b  e  r  o
40 31 35 45 46 34 31 39 31 30 35 47 39
 n  e  i  s  t  h  e  m  e  d  i  u  m

And applying

P2 = 2 × C2 – C1

to the ciphertext gives the second plaintext:

20  8  5 19  5  3 15 14  4 16 12  1  9 14 
 t  h  e  s  e  c  o  n  d  p  l  a  i  n
20  5 24 20  9 19  1 20 12  1 18  7  5
 t  e  x  t  i  s  a  t  l  a  r  g  e
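To turn the rows of numbers into letters, each number is reduced to the range 1 through 26; the wrap-around (subtracting 26 when a number exceeds 26) is my inference from the tables above:

```python
def to_letters(nums):
    # Map each number to a letter of the alphabet, wrapping past 26 (a = 1, ..., z = 26)
    return ''.join(chr((n - 1) % 26 + ord('a')) for n in nums)

print(to_letters([39, 31, 45, 45, 27, 33, 31]))  # message
print(to_letters([20, 8, 5, 19, 5, 3, 15, 14, 4]))  # thesecond
```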

To deduce the encryption process, we have to solve our two equations for C1 and C2.  Subtracting the second equation from twice the first gives:

2 × P1 – P2 = 4 × C1 + 2 × C2 – (2 × C2 – C1) = 5 × C1

so

C1 = (2 × P1 – P2)/5

Adding the first equation to twice the second gives:

P1 + 2 × P2 = 2 × C1 + C2 + (4 × C2 – 2 × C1) = 5 × C2

so

C2 = (P1 + 2 × P2)/5

Joshua Holden is professor of mathematics at the Rose-Hulman Institute of Technology.

Browse Our Mathematics 2017 Catalog

Be among the first to browse our Mathematics 2017 Catalog:

If you are heading to the 2017 Joint Mathematics Meetings in Atlanta, Georgia from January 4 to January 7, come visit us at booth #143 to enter daily book raffles, challenge the SET grand master in a SET match, and receive a free copy of The Joy of SET if you win! Please visit our booth for the schedule.

Also, follow #JMM17 and @PrincetonUnivPress on Twitter for updates and information on our new and forthcoming titles throughout the meeting.

Fibonacci helped to revive the West as the cradle of science, technology, and commerce, yet he vanished from the pages of history. Finding Fibonacci is Keith Devlin’s compelling firsthand account of his ten-year quest to tell Fibonacci’s story.


This annual anthology brings together the year’s finest mathematics writing from around the world. Featuring promising new voices alongside some of the foremost names in the field, The Best Writing on Mathematics 2016 makes available to a wide audience many articles not easily found anywhere else—and you don’t need to be a mathematician to enjoy them.


In The Calculus of Happiness, Oscar Fernandez shows us that math yields powerful insights into health, wealth, and love. Using only high-school-level math, he guides us through several of the surprising results, including an easy rule of thumb for choosing foods that lower our risk for developing diabetes, simple “all-weather” investment portfolios with great returns, and math-backed strategies for achieving financial independence and searching for our soul mate.


If you would like updates of our new titles, subscribe to our newsletter.

Joshua Holden: The secrets behind secret messages

“Cryptography is all about secrets, and throughout most of its history the whole field has been shrouded in secrecy.  The result has been that just knowing about cryptography seems dangerous and even mystical.”

In The Mathematics of Secrets: Cryptography from Caesar Ciphers to Digital Encryption, Joshua Holden provides the mathematical principles behind ancient and modern cryptic codes and ciphers. Using famous ciphers such as the Caesar Cipher, Holden reveals the key mathematical idea behind each, showing how such ciphers are made and how they are broken. Holden recently took the time to answer questions about his book and cryptography.


There are lots of interesting things related to secret messages to talk about: history, sociology, politics, military studies, technology. Why should people be interested in the mathematics of cryptography?
 
JH: Modern cryptography is a science, and like all modern science it relies on mathematics.  If you want to really understand what modern cryptography can and can’t do you need to know something about that mathematical foundation. Otherwise you’re just taking someone’s word for whether messages are secure, and because of all those sociological and political factors that might not be a wise thing to do. Besides that, I think the particular kinds of mathematics used in cryptography are really pretty. 
 
What kinds of mathematics are used in modern cryptography? Do you have to have a Ph.D. in mathematics to understand it? 
 
JH: I once taught a class on cryptography in which I said that the prerequisite was high school algebra.  Probably I should have said that the prerequisite was high school algebra and a willingness to think hard about it.  Most (but not all) of the mathematics is of the sort often called “discrete.”  That means it deals with things you can count, like whole numbers and squares in a grid, and not with things like irrational numbers and curves in a plane.  There’s also a fair amount of statistics, especially in the codebreaking aspects of cryptography.  All of the mathematics in this book is accessible to college undergraduates and most of it is understandable by moderately advanced high school students who are willing to put in some time with it. 
 
What is one myth about cryptography that you would like to address? 
 
JH: Cryptography is all about secrets, and throughout most of its history the whole field has been shrouded in secrecy.  The result has been that just knowing about cryptography seems dangerous and even mystical. In the Renaissance it was associated with black magic and a famous book on cryptography was banned by the Catholic Church. At the same time, the Church was using cryptography to keep its own messages secret while revealing as little about its techniques as possible. Through most of history, in fact, cryptography was used largely by militaries and governments who felt that their methods should be hidden from the world at large. That began to be challenged in the 19th century when Auguste Kerckhoffs declared that a good cryptographic system should be secure with only the bare minimum of information kept secret. 
 
Nowadays we can relate this idea to the open-source software movement. When more people are allowed to hunt for “bugs” (that is, security failures) the quality of the overall system is likely to go up. Even governments are beginning to get on board with some of the systems they use, although most still keep their highest-level systems tightly classified. Some professional cryptographers still claim that the public can’t possibly understand enough modern cryptography to be useful. Instead of keeping their writings secret they deliberately make it hard for anyone outside the field to understand them. It’s true that a deep understanding of the field takes years of study, but I don’t believe that people should be discouraged from trying to understand the basics. 
 
I invented a secret code once that none of my friends could break. Is it worth any money? 
 
JH: Like many sorts of inventing, coming up with a cryptographic system looks easy at first.  Unlike most inventions, however, it’s not always obvious if a secret code doesn’t “work.” It’s easy to get into the mindset that there’s only one way to break a system, so all you have to do is test that way.  Professional codebreakers know that, on the contrary, there are no rules for what’s allowed in breaking codes. Often the methods used for codebreaking are totally unsuspected by the codemakers. My favorite involves putting a chip card, such as a credit card with a microchip, into a microwave oven and turning it on. Looking at the output of the card when bombarded by radiation could reveal information about the encrypted information on the card!
 
That being said, many cryptographic systems throughout history have indeed been invented by amateurs, and many systems invented by professionals turned out to be insecure, sometimes laughably so. The moral is, don’t rely on your own judgment, any more than you should in medical or legal matters. Get a second opinion from a professional you trust: your local university is a good place to start.
 
A lot of news reports lately are saying that new kinds of computers are about to break all of the cryptography used on the Internet. Other reports say that criminals and terrorists using unbreakable cryptography are about to take over the Internet. Are we in big trouble? 
 
JH: Probably not. As you might expect, both of these claims have an element of truth to them, and both of them are frequently blown way out of proportion. A lot of experts do expect that a new type of computer that uses quantum mechanics will “soon” become a reality, although there is some disagreement about what “soon” means. In August 2015 the U.S. National Security Agency announced that it was planning to introduce a new list of cryptography methods that would resist quantum computers but it has not announced a timetable for the introduction. Government agencies are concerned about protecting data that might have to remain secure for decades into the future, so the NSA is trying to prepare now for computers that could still be 10 or 20 years into the future. 
 
In the meantime, should we worry about bad guys with unbreakable cryptography? It’s true that pretty much anyone in the world can now get a hold of software that, when used properly, is secure against any publicly known attacks. The key here is “when used properly.” In addition to the things I mentioned above, professional codebreakers know that hardly any system is always used properly. And when a system is used improperly even once, that can give an experienced codebreaker the information they need to read all the messages sent with that system.  Law enforcement and national security personnel can put that together with information gathered in other ways (surveillance, confidential informants, analysis of metadata and transmission characteristics, etc.) and still have a potent tool against wrongdoers.
 
There are a lot of difficult political questions about whether we should try to restrict the availability of strong encryption. On the flip side, there are questions about how much information law enforcement and security agencies should be able to gather. My book doesn’t directly address those questions, but I hope that it gives readers the tools to understand the capabilities of codemakers and codebreakers. Without that you really can’t do the best job of answering those political questions.

Joshua Holden is professor of mathematics at the Rose-Hulman Institute of Technology in Terre Haute, IN. His most recent book is The Mathematics of Secrets: Cryptography from Caesar Ciphers to Digital Encryption.

This Halloween, a few books that won’t (shouldn’t!) die

If Halloween has you looking for a way to combine your love (or terror) of zombies and academic books, you’re in luck: Princeton University Press has quite a distinguished publishing history when it comes to the undead.

 

As you noticed if you follow us on Instagram, a few of our favorites have come back to haunt us this October morning. What is this motley crew of titles doing in a pile of withered leaves? Well, The Origins of Monsters offers a peek at the reasons behind the spread of monstrous imagery in ancient empires; Zombies and Calculus features a veritable course on how to use higher math skills to survive the zombie apocalypse; and International Politics and Zombies invites you to ponder how well-known theories from international relations might be applied to a war with zombies. Is neuroscience your thing? Do Zombies Dream of Undead Sheep? shows how zombism can be understood in terms of current knowledge regarding how the brain works. Or of course, you can take a trip to the graveyard of economic ideology with Zombie Economics, which was probably off marauding when this photo was snapped.

If you’re feeling more ascetic, Black: The History of a Color tells the social history of the color black—archetypal color of darkness and death—but also, Michel Pastoureau tells us, monastic virtue. A strikingly designed choice:

In the beginning was black, Michel Pastoureau tells us in Black: A History of a Color


 

Happy Halloween, bookworms.

Raffi Grinberg: Survival Techniques for Proof-Based Math

Real analysis is difficult. In addition to learning new material about real numbers, topology, and sequences, most students are also learning to read and write rigorous proofs for the first time. The Real Analysis Lifesaver by Raffi Grinberg is an innovative guide that helps students through their first real analysis course while giving them a solid foundation for further study. Below, Grinberg offers an introduction to proof-based math:

Raffi Grinberg is an entrepreneur and former management consultant. He graduated with honors from Princeton University with a degree in mathematics in 2012. He is the author of The Real Analysis Lifesaver: All the Tools You Need to Understand Proofs.

An interview with John Stillwell on Elements of Mathematics

Not all topics that are part of today’s elementary mathematics were always considered as such, and great mathematical advances and discoveries had to occur in order for certain subjects to become “elementary.” Elements of Mathematics: From Euclid to Gödel, by John Stillwell gives readers, from high school students to professional mathematicians, the highlights of elementary mathematics and glimpses of the parts of math beyond its boundaries.

You’ve been writing math books for a long time now. What do you think is special about this one?

JS: In some ways it is a synthesis of ideas that occur fleetingly in some of my previous books: the interplay between numbers, geometry, algebra, infinity, and logic. In all my books I try to show the interaction between different fields of mathematics, but this is one more unified than any of the others. It covers some fields I have not covered before, such as probability, but also makes many connections I have not made before. I would say that it is also more reflective and philosophical—it really sums up all my experience in mathematics.

Who do you expect will enjoy reading this book?

JS: Well I hope my previous readers will still be interested! But for anyone who has not read my previous work, this might be the best place to start. It should suit anyone who is broadly interested in math, from high school to professional level. For the high school students, the book is a guide to the math they will meet in the future—they may understand only parts of it, but I think it will plant seeds for their future mathematical development. For the professors—I believe there will be many parts that are new and enlightening, judging from the number of times I have often heard “I never knew that!” when speaking on parts of the book to academic audiences.

Does the “Elements” in the title indicate that this book is elementary?

JS: I have tried to make it as simple as possible but, as Einstein is supposed to have said, “not simpler”. So, even though it is mainly about elementary mathematics it is not entirely elementary. It can’t be, because I also want to describe the limits of elementary mathematics—where and why mathematics becomes difficult. To get a realistic appreciation of math, it helps to know that some difficulties are unavoidable. Of course, for mathematicians, the difficulty of math is a big attraction.

What is novel about your approach?

JS: It tries to say something precise and rigorous about the boundaries of elementary math. There is now a field called “reverse mathematics” which aims to find exactly the right axioms to prove important theorems. For example, it has been known for a long time—possibly since Euclid—that the parallel axiom is the “right” axiom to prove the Pythagorean theorem. Much more recently, reverse mathematics has found that certain assumptions about infinity are the right axioms to prove basic theorems of analysis. This research, which has only appeared in specialist publications until now, helps explain why infinity appears so often at the boundaries of elementary math.

Does your book have real world applications?

JS: Someone always asks that question. I would say that if even one person understands mathematics better because of my book, then that is a net benefit to the world. The modern world runs on mathematics, so understanding math is necessary for anyone who wants to understand the world.

John Stillwell is professor of mathematics at the University of San Francisco. His many books include Mathematics and Its History and Roads to Infinity. His most recent book is Elements of Mathematics: From Euclid to Gödel.

Even celebrities misquote Albert Einstein

Alice Calaprice is the editor of The Ultimate Quotable Einstein, a tome mentioned time and again in the media because famous folks continue to attribute words to Einstein that, realistically, he never actually said. Presidential candidates, reality stars, and more have used social media to make erroneous references to Einstein’s words, perhaps hoping to give their own a bit more credibility. From the Grapevine recently compiled the most recent misquotes of Albert Einstein by public figures and demonstrated how easy it is to use The Ultimate Quotable Einstein to refute those citations:

Albert Einstein was a wise man, even outside the science laboratory. He has inspired painters, young students and comic book creators. Even budding romantics take advice from him.

So it should come as no surprise, then, that so many people today quote Einstein. Or, to be more precise, misquote Einstein.

“I believe they quote Einstein because of his iconic image as a genius,” Alice Calaprice, an Einstein expert, tells From The Grapevine. “Who would know better and be a better authority than the alleged smartest person in the world?”

Read more here.


Nicholas J. Higham: The Top 10 Algorithms in Applied Mathematics


From “Computational Science” by David E. Keyes in Princeton Companion to Applied Mathematics

In the January/February 2000 issue of Computing in Science and Engineering, Jack Dongarra and Francis Sullivan chose the “10 algorithms with the greatest influence on the development and practice of science and engineering in the 20th century” and presented a group of articles on them that they had commissioned and edited. (A SIAM News article by Barry Cipra gives a summary for anyone who does not have access to the original articles.) This top ten list has attracted a lot of interest.

Sixteen years later, I thought it would be interesting to produce such a list in a different way and see how it compares with the original top ten. My unscientific—but well-defined—way of doing so is to determine which algorithms have the most page locators in the index of The Princeton Companion to Applied Mathematics (PCAM). This is a flawed measure for several reasons. First, the book focuses on applied mathematics, so some algorithms included in the original list may be outside its scope, though the book takes a broad view of the subject and includes many articles about applications and about topics on the interface with other areas. Second, the content is selective and the book does not attempt to cover all of applied mathematics. Third, the number of page locators is not necessarily a good measure of importance. However, the index was prepared by a professional indexer, so it should reflect the content of the book fairly objectively.

A problem facing anyone who compiles such a list is to define what is meant by “algorithm”. Where does one draw the line between an algorithm and a technique? For a simple example, is putting a rational function in partial fraction form an algorithm? In compiling the following list I have erred on the side of inclusion. This top ten list is in decreasing order of the number of page locators.

  1. Newton and quasi-Newton methods
  2. Matrix factorizations (LU, Cholesky, QR)
  3. Singular value decomposition, QR and QZ algorithms
  4. Monte Carlo methods
  5. Fast Fourier transform
  6. Krylov subspace methods (conjugate gradients, Lanczos, GMRES, MINRES)
  7. JPEG
  8. PageRank
  9. Simplex algorithm
  10. Kalman filter

Note that JPEG (1992) and PageRank (1998) were youngsters in 2000, but all the other algorithms date back at least to the 1960s.

By comparison, the 2000 list is, in chronological order (no other ordering was given):

  • Metropolis algorithm for Monte Carlo
  • Simplex method for linear programming
  • Krylov subspace iteration methods
  • The decompositional approach to matrix computations
  • The Fortran optimizing compiler
  • QR algorithm for computing eigenvalues
  • Quicksort algorithm for sorting
  • Fast Fourier transform
  • Integer relation detection
  • Fast multipole method

The two lists agree in 6 of their entries. The differences are:

  PCAM list                         2000 list
  Newton and quasi-Newton methods   The Fortran optimizing compiler
  JPEG                              Quicksort algorithm for sorting
  PageRank                          Integer relation detection
  Kalman filter                     Fast multipole method
Of those in the right-hand column, Fortran is in the index of PCAM and would have made the list, but so would C, MATLAB, etc., and I draw the line at including languages and compilers; the fast multipole method nearly made the PCAM table; and quicksort and integer relation detection both have one page locator in the PCAM index.

There is a remarkable agreement between the two lists! Dongarra and Sullivan say they knew that “whatever we came up with in the end, it would be controversial”. Their top ten has certainly stimulated some debate, but I don’t think it has been too controversial. This comparison suggests that Dongarra and Sullivan did a pretty good job, and one that has stood the test of time well.

Finally, I point readers to a talk Who invented the great numerical algorithms? by Nick Trefethen for a historical perspective on algorithms, including most of those mentioned above.

This post originally appeared on Higham’s popular website.

Higham jacketNicholas J. Higham is the Richardson Professor of Applied Mathematics at The University of Manchester. He most recently edited The Princeton Companion to Applied Mathematics.

Happy Birthday, Albert Einstein!

What a year. Einstein may have famously called his own birthday a natural disaster, but between the discovery of gravitational waves in February and the 100th anniversary of the general theory of relativity this past November, it’s been a big year for the renowned physicist and former Princeton resident. Throughout the day, PUP’s design blog will be celebrating with featured posts on our Einstein books and the stories behind them.

HappyBirthdayEinstein Graphic 3

Here are some of our favorite Einstein blog posts from the past year:

Was Einstein the First to Discover General Relativity? by Daniel Kennefick

Under the Spell of Relativity by Katherine Freese

Einstein: A Missionary of Science by Jürgen Renn

Me, Myself and Einstein by Jimena Canales

The Revelation of Relativity by Hanoch Gutfreund

A Mere Philosopher by Eoghan Barry

The Final Days of Albert Einstein by Debra Liese


Praeteritio and the quiet importance of Pi

by James D. Stein

Somewhere along my somewhat convoluted educational journey I encountered Latin rhetorical devices. At least one has become part of common usage–oxymoron, the apparent paradox created by juxtaposed words which seem to contradict each other; a classic example being ‘awfully good’. For some reason, one of the devices that has stuck with me over the years is praeteritio, in which emphasis is placed on a topic by saying that one is omitting it. For instance, you could say that when one forgets about 9/11, the Iraq War, Hurricane Katrina, and the Meltdown, George W. Bush’s presidency was smooth sailing.

I’ve always wanted to invent a word, like John Allen Paulos did with ‘innumeracy’, and πraeteritio is my leading candidate–it’s the fact that we call attention to the overwhelming importance of the number π by deliberately excluding it from the conversation. We do that in one of the most important formulas encountered by intermediate algebra and trigonometry students; s = rθ, the formula for the arc length s subtended by a central angle θ in a circle of radius r.

You don’t see π in this formula because π is so important, so natural, that mathematicians use radians as a measure of angle, and π is naturally incorporated into radian measure. Most angle measurement that we see in the real world is described in terms of degrees. A full circle is 360 degrees, a straight angle 180 degrees, a right angle 90 degrees, and so on. But the circumference of a circle of radius 1 is 2π, and so it occurred to Roger Cotes (who is he? I’d never heard of him) that using an angular measure in which there were 2π angle units in a full circle would eliminate the need for a ‘fudge factor’ in the formula for the arc length of a circle subtended by a central angle. For instance, if one measured the angle D in degrees, the formula for the arc length of a circle of radius r subtended by a central angle would be s = (π/180)rD, and who wants to memorize that? The word ‘radian’ first appeared in an examination at Queen’s College in Belfast, Ireland, given by James Thomson, whose better-known brother William would later be known as Lord Kelvin.
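The difference between the clean radian formula and the degree version with its fudge factor can be sketched in a few lines of Python (an illustration, not from the original post; the function names are my own):

```python
import math

def arc_length_radians(r, theta):
    """Arc length subtended by a central angle theta in radians: s = r * theta."""
    return r * theta

def arc_length_degrees(r, d):
    """The same arc length when the angle is measured in degrees: s = (pi/180) * r * d."""
    return (math.pi / 180) * r * d

# A right angle (90 degrees, or pi/2 radians) in a unit circle
# subtends an arc of length pi/2 either way.
assert math.isclose(arc_length_radians(1, math.pi / 2),
                    arc_length_degrees(1, 90))
```

With radians the formula carries no constant at all; π has been absorbed into the unit of measurement.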

The wisdom of this choice can be seen in its far-reaching consequences in the calculus of the trigonometric functions, and undoubtedly elsewhere. First semester calculus students learn that as long as one uses radian measure for angles, the derivative of sin x is cos x, and the derivative of cos x is – sin x. A standard problem in first-semester calculus, here left to the reader, is to compute what the derivative of sin x would be if the angle were measured in degrees rather than radians. Of course, the fudge factor π/180 would raise its ugly head, its square would appear in the formula for the second derivative of sin x, and instead of the elegant repeating pattern of the derivatives of sin x and cos x that are a highlight of the calculus of trigonometric functions, the ensuing formulas would be beyond ugly.
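The standard exercise mentioned above can be checked numerically; the following sketch (my own, not part of the original post) approximates the derivative with a central difference and confirms that the degree version of sin picks up the factor π/180:

```python
import math

def deriv(f, x, h=1e-6):
    # Central difference approximation to f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

# Angle in radians: the derivative of sin x is cos x, no constant needed.
x = 1.0
assert math.isclose(deriv(math.sin, x), math.cos(x), rel_tol=1e-6)

# Angle in degrees: the chain rule introduces the fudge factor pi/180.
def sin_deg(d):
    return math.sin(math.radians(d))

d = 30.0
assert math.isclose(deriv(sin_deg, d),
                    (math.pi / 180) * math.cos(math.radians(d)),
                    rel_tol=1e-6)
```

Each further derivative of sin_deg multiplies in another π/180, which is exactly the ugliness the radian convention avoids.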

One of the simplest known formulas for the computation of π is via the infinite series π/4 = 1 − 1/3 + 1/5 − 1/7 + ⋯

This deliciously elegant formula arises from integrating the geometric series with ratio −x^2 in the equation 1/(1 + x^2) = 1 − x^2 + x^4 − x^6 + ⋯

The integral of the left side is the inverse tangent function tan⁻¹ x, but only because we have been fortunate enough to emphasize the importance of π by utilizing an angle measurement system which is the essence of πraeteritio; the recognition of the importance of π by excluding it from the discussion.
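The series above is easy to try out; this small sketch (mine, not from the original post) sums its first terms and watches the partial sums creep toward π:

```python
import math

def leibniz_pi(n_terms):
    """Approximate pi using n_terms of the series pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
    partial = sum((-1) ** k / (2 * k + 1) for k in range(n_terms))
    return 4 * partial

# Convergence is famously slow: the error after n terms is on the order of 1/n,
# which is why this formula, elegant as it is, is not how trillions of digits
# are actually computed.
assert abs(leibniz_pi(100_000) - math.pi) < 1e-4
```

The slow convergence makes the formula a poor workhorse but a lovely demonstration of π appearing where it was never explicitly invited.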

So on π Day, let us take a moment to recognize not only the beauty of π when it makes all the memorable appearances which we know and love, but to acknowledge its supreme importance and value in those critical situations where, like a great character in a play, it exerts a profound dramatic influence even when offstage.

LA MathJames D. Stein is emeritus professor in the Department of Mathematics at California State University, Long Beach. His books include Cosmic Numbers (Basic) and How Math Explains the World (Smithsonian). His most recent book is L.A. Math: Romance, Crime, and Mathematics in the City of Angels.