A Word on Ranking from Team Sport Analyst Anjela Govan

Anjela Govan, an expert on the Markov method of rating and ranking, weighs in on the “science” behind getting a jump on the competition.

The topic of ranking (or the question “Who’s #1?”) is usually accompanied by the question “Who will win this game?” Granted, ranking does not apply only to sports, though sports is often viewed as less academic than most other applications. The question “Who will win?” essentially asks us to identify which team is stronger, better, of higher quality, or higher ranked. Recall that even though we often use the words “ranking” and “rating” interchangeably, they have different meanings. A rating summarizes the quality of a team according to some associated criteria. A ranking is simply an indication of a team’s place in a list, reflecting the relative importance of the teams to one another. The difficulty of this topic is determining what constitutes quality and importance for a particular set of teams. Ideally, to determine a rating we would know the characteristics of the perfect team and measure all the teams against it, thus arriving at an absolute measure: knowing how far any given team is from the ideal, we could compute absolute ratings. However, very few (if any) real-world applications allow this ideal method. What we are able to do is measure relative differences in quality between teams, thus arriving at ratings based on these relative measurements. Now we are waxing philosophical!

Back to game predictions: this question has two aspects, the first being “Which team in a given competition will win?” and the second being “By how much?” The first is easier to answer. Suppose we pick a method that produces rating scores, a favorite one from the great collection introduced by Dr. Langville and Dr. Meyer. For a game between teams A and B, to determine a winner we simply compare the teams’ rating scores, rA and rB: the better-rated team wins! Now for the “by how much,” referred to in chapter 9 of Who’s #1? as the point spread: there are many ways to estimate a point spread. One of the simplest approaches is to take the point spread to be proportional to the difference between the ratings of the teams. That is, the point spread for a game between team A and team B is α|rA − rB|, where α is some constant. In this simplistic point spread approach, the work is concentrated in estimating an appropriate constant α. This constant could be the same for all games and could be determined using the previous season: the point spreads from the previous season are known, and least squares is one way to approximate α. Another way is to customize α to pairs of teams; perhaps there is a trend between particular teams that can be observed across a number of seasons. The described method is simplistic, and perhaps it is evident why. For a more in-depth discussion, do consider the well-laid-out chapter on point spreads in Who’s #1?
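As a quick sketch of this simple approach, suppose last season’s rating gaps and observed point spreads are known. A one-parameter least-squares fit through the origin then gives α. The numbers below are made up for illustration; they are not real season data.

```python
# Estimating the spread constant alpha by least squares, assuming the
# model: spread ~ alpha * |r_A - r_B|.  The rating gaps and observed
# point spreads below are made-up numbers, not real season data.

def fit_alpha(rating_gaps, observed_spreads):
    """One-parameter least squares through the origin:
    alpha = sum(s_i * d_i) / sum(d_i ** 2)."""
    num = sum(s * d for s, d in zip(observed_spreads, rating_gaps))
    den = sum(d * d for d in rating_gaps)
    return num / den

def predict_spread(alpha, r_a, r_b):
    """Predicted point spread for a game between teams rated r_a and r_b."""
    return alpha * abs(r_a - r_b)

# Hypothetical data from a previous season: rating gaps |r_A - r_B|
# and the final margins of those games.
gaps = [0.10, 0.25, 0.05, 0.40, 0.15]
spreads = [3, 9, 1, 14, 5]

alpha = fit_alpha(gaps, spreads)
print(round(alpha, 2))                              # -> 34.76
print(round(predict_spread(alpha, 0.82, 0.61), 1))  # -> 7.3
```

A customized, per-pair α would simply repeat this fit using only the games between a particular pair of teams across several seasons.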



Using Darwin to Rank: Bracket Advice from Kathryn Pedings-Behling

Kathryn Pedings-Behling is a high school math teacher by day and a graduate student by night. She teaches math at the Charleston Charter School for Math and Science where she heads the math department and the AP math program. She graduated from the College of Charleston in 2008 with a B.S. in Mathematics where she met Dr. Amy Langville, who became her master’s thesis adviser. Kathryn will graduate with an M.S. in Mathematics from the College of Charleston in May. In her post below, she explains how Darwin comes into play when ranking sports teams.

Advice on filling out March Madness brackets from Kathryn Pedings-Behling


It sounds crazy, doesn’t it? What in the world does Darwin have to do with ranking sports teams? This was the same question I asked myself as I entered an independent study course with Dr. Amy Langville on the topic of evolutionary optimization. Okay, so now you are possibly seeing the connection: Darwin and evolution. But you are still stuck on this ranking thing. How is it related?

The process of evolutionary optimization takes Darwinian ideas of mating, mutating, fitness, and survival of the fittest and puts them in the context of ranking. I must admit that I am a total sucker for topics like this. The hardest part of making math your life is trying to have conversations with non-math folks about what it is that you are passionate about in this field. However, this was something that I knew “normal” people could wrap their head around. Any person who has been through an acceptable level of education knows the basics about evolution. The members of a population continue to mate and have offspring. Every now and then a mutation is thrown in. Only the fittest population members survive to then create more offspring. If you know these things, you can understand my ranking method! It’s that simple!

Before getting to the meat of the ranking method, there are a few important fundamental concepts. The members of your population are each a different possible ranking of your teams that is expressed mathematically as a permutation vector. You can mate these rankings by performing any of the rank aggregation methods found in the book Who’s #1?, such as average rank or Borda count. Mutating uses much simpler, random actions such as switching two teams or swapping the order of a group of teams, and is much more cost effective to perform on a large set of teams.
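A minimal sketch of mating and mutating permutation vectors might look like the following, using average-rank aggregation as the mating step (one of the rank aggregation methods discussed in Who’s #1?). The team labels are hypothetical.

```python
import random

# A ranking is a permutation vector: a list of team ids ordered best to
# worst.  "Mating" here uses average-rank aggregation; "mutating" swaps
# two randomly chosen teams.  The team labels are hypothetical.

def mate(ranking_a, ranking_b):
    """Offspring = teams sorted by their combined position in the two parents."""
    pos_a = {team: i for i, team in enumerate(ranking_a)}
    pos_b = {team: i for i, team in enumerate(ranking_b)}
    return sorted(ranking_a, key=lambda team: pos_a[team] + pos_b[team])

def mutate(ranking, rng=random):
    """Swap two random positions -- a cheap perturbation for large team sets."""
    child = ranking[:]
    i, j = rng.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

print(mate(["A", "B", "C", "D"], ["B", "A", "D", "C"]))  # -> ['A', 'B', 'C', 'D']
```

Note the cost asymmetry the post describes: mating touches every team’s position in both parents, while a mutation touches only two positions.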

How do you know if your mating and mutating have produced a good offspring? This leads to a much deeper question: how do you define what it means to have a quality ranking? Well, if I knew the exact answer to that question, I would win the ESPN March Madness Bracketology Competition every single year, significantly boosting my popularity as a high school math teacher. However, we believe that we have a way to measure the relative quality of a ranking.

To understand this measure (which in Darwinian terms is the fitness of our population members), you need a new linear algebra definition called the hillside form of a matrix. Hillside form is very easy to understand: the entries of each row increase from left to right, the entries of each column decrease from top to bottom, and the lower triangle of the matrix is filled with zeros (this last part of the definition is optional, depending on what type of data matrix you start with). Below is an example of a matrix in hillside form (the entries are illustrative):

    0  3  5  8
    0  0  4  6
    0  0  0  2
    0  0  0  0

Hillside form makes sense in the context of ranking sports teams because, in a perfect scenario, you want the first place team to beat the second place team by a little, the third place team by a little more, etc. Unfortunately, the reality is that no season is perfect so all we can do is find the ranking that symmetrically reorders the columns and rows of a data matrix to be as close to hillside form as possible. This is the motivating factor behind our ranking method.

Now you are ready for the steps of the evolutionary optimization ranking method:

  1. Collect the data matrix you will use for your ranking – I tended to use game score data to make my head-to-head matrix, but that doesn’t have to be the case.
  2. Gather your original population – This can be done randomly or using the rankings of other methods you can learn about in Who’s #1?.
  3. Using each of your original population members, symmetrically reorder your data matrix and count the number of violations of hillside form. The lower the number, the better the ranking!
  4. Now it’s time to make an offspring! Either mate or mutate members of your population.
  5. Repeat step three for that offspring.
  6. If your offspring has a lower number of violations to hillside form than the weakest member of your population, congratulations – it made the cut! Drop the weakest member from your population. However, if the offspring did not make the cut, it’s history.
  7. Repeat steps 4 – 6 until you have a satisfactory solution – we chose to continue gathering offspring until the change in our fitness values got very small. Just realize when you are setting your stopping criteria that the longer you allow your algorithm to run, the better your solution will be since this method is constantly finding a solution that is getting closer and closer to hillside form.
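The seven steps above can be sketched in code. This is a bare-bones illustration, not the authors’ implementation: it assumes a head-to-head matrix D where D[i][j] holds the points team i scored against team j, and it uses mutation only (step 4 also allows mating).

```python
import random

# A sketch of the seven steps, assuming a head-to-head matrix D where
# D[i][j] holds the points team i scored against team j (illustrative
# data, not a real season).  Fitness = number of hillside-form
# violations after symmetrically reordering D by a candidate ranking.

def violations(ranking, D):
    """Count hillside violations of D symmetrically reordered by `ranking`."""
    n = len(ranking)
    R = [[D[ranking[i]][ranking[j]] for j in range(n)] for i in range(n)]
    v = 0
    for i in range(n):
        for j in range(n):
            if j + 1 < n and R[i][j] > R[i][j + 1]:   # rows should increase
                v += 1
            if i + 1 < n and R[i][j] < R[i + 1][j]:   # columns should decrease
                v += 1
    return v

def mutate(ranking, rng):
    """Swap two random positions."""
    child = ranking[:]
    i, j = rng.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

def evolve(D, pop_size=20, steps=2000, seed=0):
    rng = random.Random(seed)
    n = len(D)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]  # step 2
    pop.sort(key=lambda r: violations(r, D))                  # step 3
    for _ in range(steps):                                    # steps 4-7
        child = mutate(rng.choice(pop), rng)                  # step 4 (mutation only)
        if violations(child, D) < violations(pop[-1], D):     # steps 5-6
            pop[-1] = child
            pop.sort(key=lambda r: violations(r, D))
    return pop[0]  # best ranking found so far

# Hypothetical 3-team season: team 0 beat team 1 by 3, team 2 by 5, etc.
D = [[0, 3, 5],
     [0, 0, 2],
     [0, 0, 0]]
print(evolve(D, pop_size=6, steps=300))
```

The stopping rule here is a fixed step count; the post’s suggestion of stopping when fitness stops improving would replace the `for _ in range(steps)` loop with a check on the change in the best violation count.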

There you have it! It’s really as easy as these seven steps. The thing I really love about this process is that it is so bare bones that you can pick any of those steps and make it as sophisticated as you like! Just remember two things before you start doing that: 1. Evolutionary optimization requires a certain amount of randomization in order to find “good” solutions. 2. The more sophisticated the method, the longer it will take to run.

Before ending this post, I need to share a deep, dark secret with you. I’m not a fan of sports. It’s true. Now, don’t get me wrong, I enjoy attending a live sporting event as much as the next person, but I actively dislike watching sports on television. I just can’t seem to get interested. However, what I am a fan of is ranking methods. So with that being said, don’t ask me if the bracket you’ve created is good because I have no idea, just be ready to talk to me about the math behind your ranking. That is something I can follow any day of the week.

Tim Chartier’s Bracketology 101 Webinar — the Math You Need to Pick the Winners

[Update for 2014 -- check out Tim's new tips: http://blog.press.princeton.edu/2014/03/17/top-tips-for-2014-march-madness-brackets-from-tim-chartier/]

Check out Tim Chartier’s webinars on the applications of linear algebra for more helpful hints on how to fill out your bracket.

Tim looks at how famous bracketologists have fared in the past (tip: Obama did OK, landing in the 80th percentile; Dwyane Wade, not so much, at the 43rd percentile) and provides a step-by-step analysis of the math behind the Colley method and others. He also guides viewers through a free Java program that is available for download on his professional web site at Davidson College.


Bracketology 101: http://davidson.mediasite.mcnc.org/mcnc/SilverlightPlayer/Default.aspx?peid=bbac5c0483ad4659bea8adfc7258e3051d

Advanced Bracketology: http://davidson.mediasite.mcnc.org/mcnc/SilverlightPlayer/Default.aspx?peid=600e3c4c27e741edbc34cbf89a510b7a1d

You can find a link to the Java code for ranking on Tim’s blog: http://sites.davidson.edu/mathmovement/bracketology-101/

Listen to Tim’s talk on March Mathness at the MAA Distinguished Lecture Series: http://www.maa.org/dist-lecture/past-lectures.html

Tim Chartier is an Associate Professor of mathematics at Davidson College. His ability to communicate math both in and beyond the classroom was recognized with the Henry L. Alder Award for Distinguished Teaching by a Beginning College or University Mathematics Faculty Member from the Mathematical Association of America. His research and scholarship were recognized with an Alfred P. Sloan Research Fellowship. Tim serves on the Editorial Board for Math Horizons, a mathematics magazine of the Mathematical Association of America. He also serves as chair of the Advisory Council for the Museum of Mathematics. Tim has been a resource for a variety of media inquiries, including fielding mathematical questions for the Sports Science program on ESPN. He is co-author with Anne Greenbaum of Numerical Methods: Design, Analysis and Computer Implementation of Algorithms. His latest book, Math Bytes, includes a section on bracketology as well as other fun math and computing endeavors.

March Mathness — The Massey Method

In this post for March Mathness, Kenneth Massey, whose popular ratings (http://masseyratings.com) help rank the BCS teams each year, offers an overview of what goes into filling out his brackets for March Madness.



Advice on filling out March Madness brackets from Kenneth Massey


I’m a college basketball fan, but to be honest, I don’t watch many games during the regular season. Since my personal expertise is a function of ESPN highlights and commentary, I’ve learned to trust the math more than my own feelings about a matchup.

All season I compile a monster list of the various computer rankings for college basketball: http://www.masseyratings.com/cb/compare.htm

By following the results, I have a pretty good idea about which teams are over/under rated by the media and which ones are coming on strong or fading into tournament time.

Once the pairings are announced, I usually fill out a bracket based on my limited first-hand knowledge of the teams, the impressions I have from following the rankings, and maybe some “gut” intuitions. For that particular bracket, I don’t do any additional analysis, or even look at the numbers–I just rely on what’s already accumulated in my brain.

Now let me describe how I fill out my more analytical brackets. I have two different strategies, but both of them start with estimating the probabilities that each particular team will advance past each round.
In this post, I will describe that process. In a later post, I’ll describe how I use those probabilities to actually fill out my bracket picks.

I’ve been doing computer ratings for years, and have experimented with many mathematical models, one of which is described in Who’s # 1?. The model I currently use, and post on masseyratings.com, is proprietary, but I will list some of the pertinent aspects of it.

1) Margin of victory matters, but in an intelligent way. There are diminishing returns for blowouts, and adjustments are made for the pace of the game. For example, 60-45 may be more impressive than 100-80.

2) Winning is rewarded, especially on the road. Even if the margin is small, a team gets a bump by winning games against good competition.

3) Schedule strength is implicit in all the equations. Everything is measured relative to the opponent, so there is higher reward and less risk for playing tough opponents.

4) The model has a decaying memory of early season games. The team in March is different from the team in November.

5) Games between mis-matched opponents are not as important as games between well-matched opponents. There is a lot more information in a #18 vs #23 matchup than there is when #18 plays #230.

6) My model produces offensive and defensive ratings for each team, as well as homefield advantage estimates. From these, it is possible to predict the distribution of final scores for a hypothetical matchup between any two teams.

After the ratings are computed, I use conditional probability to effectively account for every possible scenario of how the bracket could “play out”. For example, if team X makes it to the Sweet 16, who are they likely to face? According to the seedings, some teams have easier paths of advancement. I can compute the probability that each team advances past a given round, the expected number of rounds a team will win, and ultimately each team’s probability of winning the championship.
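The conditional-probability bookkeeping can be sketched as follows, assuming a standard single-elimination bracket of 2^k teams listed in bracket order and a matrix of head-to-head win probabilities. The probabilities here are illustrative placeholders, not output of the Massey model.

```python
# Conditional-probability bookkeeping for a single-elimination bracket,
# assuming 2**k teams listed in standard bracket order and a matrix
# p[i][j] = probability that team i beats team j.  The numbers below
# are illustrative, not output of the Massey model.

def advancement_probabilities(p):
    """history[r][i] = probability that team i wins its round-r game."""
    n = len(p)
    rounds = n.bit_length() - 1
    alive = [1.0] * n          # alive[i] = P(team i survived earlier rounds)
    history = []
    for r in range(rounds):
        size = 2 ** (r + 1)    # size of each sub-bracket this round
        nxt = [0.0] * n
        for i in range(n):
            block = i // size * size
            half = size // 2
            # Team i's possible opponents: the other half of its sub-bracket.
            if i < block + half:
                opponents = range(block + half, block + size)
            else:
                opponents = range(block, block + half)
            nxt[i] = alive[i] * sum(alive[j] * p[i][j] for j in opponents)
        history.append(nxt)
        alive = nxt
    return history

# Four hypothetical teams; note p[i][j] + p[j][i] = 1.
p = [
    [0.0, 0.8, 0.7, 0.9],
    [0.2, 0.0, 0.4, 0.6],
    [0.3, 0.6, 0.0, 0.7],
    [0.1, 0.4, 0.3, 0.0],
]
rounds = advancement_probabilities(p)
print([round(x, 3) for x in rounds[-1]])  # -> [0.608, 0.092, 0.252, 0.048]
```

The final round’s entries are each team’s championship probability, and they sum to 1; the expected number of rounds a team wins is just the sum of its entries across all rounds.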

The great thing about probabilities is that you are never “wrong.” For example, last year my calculations showed that UConn had an 86% chance of winning its first-round game, a 54% chance of advancing to the Sweet 16, a 29% chance of advancing to the Elite 8, a 12% chance of advancing to the Final 4, a 5% chance of playing in the championship game, and a 2.3% chance of winning it all.

By the nature of randomness, it is not really surprising that underdogs occasionally win. Even a dominant #1 overall seed rarely has more than a 25% chance of winning the entire tournament. That’s what makes the event so exciting–nobody knows what will happen.

After all the probabilities are computed, I proceed to fill in my picks. Don’t I just pick the teams with the highest probabilities? Not exactly. I’ll address that in a subsequent post.