Michigan #1 in Colley Rankings (W/L-based mathematical rankings)

Submitted by atticusb on October 11th, 2016 at 12:01 AM

Not sure how we feel about computer rankings, but the Colley Matrix (here) has Michigan at #1.  This ranking method considers only wins and losses and the win/loss records of opponents (and their opponents, etc., etc.).  See here for Wesley Colley's (the ranking's creator) comments on his methodology and its benefits.  Colley states:

[The rankings are] absolutely free from human influence or opinion, [account] for schedule strength, [ignore] runaway scores, and yet [produce] common sense results, which at the end of the season compare very favorably with human rankings (and other computer rankings). What else do you want? 

Well, for one thing, Michigan to be at #1 at the end of the season...

As a final aside, kudos to all our opponents to date... apparently they aren't as weak (ok, except for Rutgers, Colley agrees they suck) as might have been thought before the season started... Colley has our SOS as 10th overall.
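For anyone curious how "only wins and losses" can still account for schedule strength: Colley's published method boils down to a single linear solve, where each team's rating depends on its record and its opponents' ratings. Here's a minimal sketch in pure Python (my own reconstruction of the standard formulation, not Colley's code; the toy schedule and function names are mine):

```python
# Sketch of the Colley method: solve C r = b, where for each team i
#   C[i][i] = 2 + (games played by i)
#   C[i][j] = -(games played between teams i and j)
#   b[i]    = 1 + (wins_i - losses_i) / 2
# Margin of victory is ignored entirely; only W/L and who-played-whom matter.

def colley_ratings(games):
    """games: list of (winner, loser) pairs. Returns {team: rating}."""
    teams = sorted({t for g in games for t in g})
    idx = {t: k for k, t in enumerate(teams)}
    n = len(teams)
    C = [[0.0] * n for _ in range(n)]
    b = [1.0] * n
    for k in range(n):
        C[k][k] = 2.0                      # Laplace-style prior
    for winner, loser in games:
        wi, li = idx[winner], idx[loser]
        C[wi][wi] += 1.0                   # each game adds to both diagonals
        C[li][li] += 1.0
        C[wi][li] -= 1.0                   # and couples the two opponents
        C[li][wi] -= 1.0
        b[wi] += 0.5                       # +1/2 per win, -1/2 per loss
        b[li] -= 0.5
    return dict(zip(teams, solve(C, b)))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

if __name__ == "__main__":
    # Toy season: A beats B and C; B beats C.
    print(colley_ratings([("A", "B"), ("A", "C"), ("B", "C")]))
```

On the toy schedule above the solve gives A = 0.7, B = 0.5, C = 0.3, which matches common sense. The schedule-strength effect comes from the off-diagonal coupling: your rating is pulled up or down by who you actually played, not just by your record.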

Comments

atticusb

October 11th, 2016 at 12:03 AM ^

Ok, one more aside... Colley doesn't comment on whether the rankings are more or less *predictive* than the human polls... that would seem to me, at least, to be a critical element of "what else" one might want in a ranking system...

NOLA Blue

October 11th, 2016 at 2:30 PM ^

Looking at Week 6 of the previous 3 years (teams that actually ended up in the Top 4 after the conference championships are in bold)

2015 Top 8 in Week 6:  Utah, Florida, Tex A&M, Ohio St, TCU, Iowa, Clemson, Michigan

2014 Top 8 in Week 6:  Auburn, Miss St, Miss, Zona, Fla St, Notre Dame, Ga Tech, UCLA

2013 Top 8 in Week 6: Stanford, Fla St, Clemson, Bama, Oklahoma, Georgia, Mizzou, Ohio St

So based upon the previous 3 years... it appears landing in the Top 8 of Colley's Week 6 rankings gives a team an approx 16% chance of finishing in the Top 4 and making the Playoff.  Similarly, Colley only correctly predicts (in Week 6) a Top 4 finisher 16% of the time.

FLwolvfan22

October 11th, 2016 at 12:19 AM ^

Man, where have I been? Guess I better dust off that old bachelor of arts paper they gave me and display it proudly. They win out or one loss and PJ won't be there long.

MadMatt

October 11th, 2016 at 11:46 AM ^

I realize you are trying to be funny.  You are lampooning the caricature of an MSU fan in a way that you would not ever use to describe an actual person who attended Mich State.  But, I'm not sure I care for the term "half-breed."  It has a history that is not very pleasant.

Picktown GoBlue

October 11th, 2016 at 12:49 AM ^

Michigan is #1 in the Massey composite of a huge collection of rankings (they include AP and Coaches polls along with other computer rankings and lists) - link

There are 38 lists that have Michigan on top (out of 80).

MikeinTN

October 11th, 2016 at 1:01 AM ^

I have an Excel spreadsheet that I'm sure is quite rudimentary by comparison, but it's the same concept, although I only go one order of opponents deep. I don't feel like a three-way carousel of who's beaten whom adds statistical significance. I currently have Michigan ranked #2 behind Texas A&M, with SOS at #23.

J.

October 11th, 2016 at 9:15 AM ^

I've been anti-Colley since 2007.

In 2006, when his formula was part of the BCS, I implemented it locally in order to follow possible scenarios as the season wound down, up until Florida got its BS title game spot.  It was really very simple to duplicate, and while it definitely isn't predictive, it's fine as far as it goes.  At least it's transparent, which is more than I can say for any manner of selecting a college football playoff field / champion before or since.

However, due to the lack of connectedness of the graph, Colley ignored I-AA teams; games against them didn't count at all.  After the Horror, this was suddenly deemed to be a problem.  Therefore, in the middle of the season, he reworked his formula to count I-AA games, breaking them up into "groups" so that each "group" has about as many games vs. I-A opponents as the average I-A team itself does.

This turned a replicable, defensible mathematical formula into just another reverse-engineered popularity ranker -- he didn't like the results, so he changed the formula, and midseason, too.

In the end, it didn't matter; however, keep in mind that 2007 was the two-loss LSU "didn't lose in regulation" season.  Michigan had climbed back to #13 by the Wisconsin game, and they had the longest winning streak in the country.  If Henne hadn't been hurt, and a couple of other bounces had gone Michigan's way, they might have been playing for the title... or Colley's in-season shenanigans might have cost them a shot at the title.

I still check his rankings from time to time, but I don't respect them.

Alton

October 11th, 2016 at 10:10 AM ^

I have implemented Colley in the past for college hockey, lacrosse, and baseball.  I will say that if you just ignore games against lower-division teams, it does just fine picking teams at the end of the season, and I don't see why the NCAA doesn't use it instead of the RPI in those sports.  Since the RPI basically ignores games against lower-division teams, you can take the same approach with Colley too.

But I couldn't agree more with J here on the mid-season "emergency" kludge that he adopted in the App State season.  That was intellectually dishonest.

But that gets back to the problem of these "winner only" rating systems:  if you implement them for all 3 divisions at the same time, they do crazy things like put Mount Union in the top 20.  You can see that on a smaller scale with their treatment of Western Michigan right now--his system struggles to grasp that WMU is not really a top 10 team.  So you have to do a FBS-only system.  Which then has the problem of over-rating teams that lose to FCS schools, like Michigan did that one year.

Massey's system that incorporates scores into the ratings does just fine rating all 650+ college football teams at every level, but he has to limit his "winner only" system to FBS teams.  There are a few ways to address this, but changing the ratings in the middle of the season as a reaction to on-field results is not one of those ways.

LSAClassOf2000

October 11th, 2016 at 7:12 AM ^

I was always more of a Sagarin / Massey fan because I enjoy the predictive quality, but the schedule comparisons from Colley are sometimes interesting to me. That said, if you go back through the last couple weeks, Michigan's rise in Colley's rankings as the OOC ended and the Big Ten schedule began has been pretty steady - #10 to #4 to #2 to #1 over the last four games - which usually isn't easy in this matrix, considering who's typically at the top.

JTrain

October 11th, 2016 at 7:17 AM ^

Stupid question(s) here:
Should we be rooting for OSU to beat Wisconsin this weekend? With MSU and Iowa looking weaker than we anticipated this year, it seems like our strength of schedule takes a big hit.
Does winning the East (against a 1-loss OSU team) and then the championship still automatically put us in the final 4?
I know there are a lot of football games to be played yet, but with Washington in the top 4 of a lot of polls, Louisville looking strong, and Alabama and Clemson at the top... is it possible for us to win out and not be in the final 4 teams??

Sent from MGoBlog HD for iPhone & iPad

Michansas Wolverback

October 11th, 2016 at 7:27 AM ^

No way. A 13-0 UM is in the CFP no matter what.

Moreover, if UW beats OSU, we should expect them to win out, and we would get a 1-loss UW in the B1GCG rather than a lesser opponent. I think we want OSU to lose Saturday for sure. Just to show that they can be beaten, if nothing else.

Sent from MGoBlog HD for iPhone & iPad

DavidP814

October 11th, 2016 at 10:18 AM ^

The accuracy of all of the BCS computers took a dive when the powers that be compelled the participating models to remove margin of victory as a component.  Without that data point, the predictive powers of any computer rating system declines to the point it's virtually unusable.  At the time, Sagarin developed an alternate rating system for the BCS and kept his points rating as the "predictive" rating best equipped to project future results.

The newer models developed by Bill Connelly and ESPN, which calculate team strength at the drive or play level, are much more predictive than any of the BCS computers, particularly after margin of victory was removed.