Week 3 Rankings - The Dewey Method

Submitted by 909Dewey on
For anyone who cares, see my week three rankings below. For the last three years I have been ranking college football teams based on a methodology I developed to overcome what I feel is a shortcoming of the Colley Matrix, namely, treating all wins equally (or not properly crediting close games as such). I call my system "The Dewey Method."

In my system, all teams begin with the same rank and the same rank points, teams can only gain or lose rank points based on the results of games, and games against non-FBS opponents are not counted. The system caps the value of a runaway score and rewards the underdog in a hard-fought close game. For example, the Nebraska-Virginia Tech game was almost a draw in my system, while Tulsa still received credit for playing a good team even though Oklahoma shut them out. Also, a win over an inferior opponent does not necessarily result in an increase in rank points: yesterday, Notre Dame and Southern Miss both lost points after their wins against lower-ranked opponents.

Here is my Top 25 (rank points in parentheses):

1. Penn State (1.6487)
2. Alabama (1.5616)
3. Texas (1.5129)
4. LSU (1.5088)
5. Kansas (1.4988)
6. Clemson (1.4661)
7. UCLA (1.4535)
8. Michigan (1.4185)
9. Auburn (1.4101)
10. Iowa (1.3920)
11. Nebraska (1.3739)
12. Oklahoma (1.3711)
13. California (1.3700)
14. Boise St (1.3666)
15. Pittsburgh (1.3448)
16. Florida (1.3387)
17. Missouri (1.3255)
18. Cincinnati (1.3167)
19. Virginia Tech (1.2694)
20. Ohio State (1.2686)
21. Mississippi (1.2500)
22. Washington (1.2489)
23. Kentucky (1.2263)
24. Miami FL (1.2113)
25. Texas A&M (1.2053)

Notable omissions: Florida St at #26, Southern Cal at #31, North Carolina at #32, Texas Tech at #33.

I welcome any feedback.

-909
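For anyone curious about the mechanics, here is a rough sketch of the kind of point-exchange update I mean. The cap, the scale constant, and the formula itself are simplified stand-ins rather than my actual numbers, and the sketch skips some features of the real method (for instance, crediting a shut-out loser like Tulsa for facing a strong opponent).

```python
# Rough sketch of a Dewey-style point exchange (all constants and the formula
# itself are illustrative stand-ins, not the actual method).

CAP = 21       # hypothetical margin cap: points beyond this add nothing
SCALE = 0.05   # hypothetical constant controlling how many rank points move

def play_game(winner_pts, loser_pts, margin):
    """Return updated (winner_pts, loser_pts) after one FBS-vs-FBS game."""
    capped_margin = min(margin, CAP) / CAP    # runaway scores are capped at 1.0
    rating_gap = winner_pts - loser_pts       # positive when the favorite wins
    # An underdog winning a close game gains a lot; a favorite squeaking by a
    # much lower-rated team can actually lose rank points.
    exchange = SCALE * (capped_margin - rating_gap)
    return winner_pts + exchange, loser_pts - exchange

# Every team starts with the same rank points.
ratings = {"Virginia Tech": 1.0, "Nebraska": 1.0}
ratings["Virginia Tech"], ratings["Nebraska"] = play_game(
    ratings["Virginia Tech"], ratings["Nebraska"], margin=1)  # VT 16, Neb 15
print(ratings)  # nearly unchanged: a one-point game is almost a draw here
```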

chally

September 20th, 2009 at 9:35 AM ^

While I'm always in favor of new approaches to ranking teams, have you tested your rankings to see whether they're predictive of future outcomes? At least part of what mainstream rankings are attempting to do is establish a hierarchy of talent, such that the #1 team should usually beat the #10 team, etc. I think that such a hierarchy is an integral part of lending a rating system credibility.

I could establish a rankings system where each team gets a point for every letter in the name of the team that they beat, but obviously that would be arbitrary. Although factors like margin of victory and opponent winning percentage are more rationally related to the endeavor of ranking, the way they are weighted is no less arbitrary unless the weighting is testable against some motivating purpose.

All of this is a long way of me saying that I would like to embrace your ranking approach, but that I need some additional reason to believe that it is useful. I get skeptical when I see a team like UCLA so far above a team like Miami (FL), because I would expect Miami to be an 11-point favorite on a neutral field.
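For example, one crude test you could run (a sketch only; the data layout and names here are assumptions on my part, not anything from your write-up): keep the rank points your method produces entering each week, then count how often the team you had rated higher actually won.

```python
def predictive_accuracy(weekly_ratings, games):
    """weekly_ratings: {week: {team: rank_points}}, using the ratings a team
    carried *into* that week.  games: iterable of (week, winner, loser)
    FBS-vs-FBS results.  Returns the fraction of games the higher-rated
    team won."""
    hits = total = 0
    for week, winner, loser in games:
        ratings = weekly_ratings.get(week, {})
        if winner not in ratings or loser not in ratings:
            continue                      # skip teams not yet rated
        if ratings[winner] == ratings[loser]:
            continue                      # no prediction when exactly tied
        total += 1
        hits += ratings[winner] > ratings[loser]
    return hits / total if total else float("nan")
```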

909Dewey

September 20th, 2009 at 10:08 AM ^

Thanks for the feedback. I am trying for a fair reflection of what has happened on the field, and how the teams should be ranked accordingly, more than I am trying to predict outcomes. That being said, there are certainly anecdotal examples where these rankings lead to some insight. For example, last week Nebraska was my number one team. They were the underdog at Virginia Tech, which was not in my top 25, and they obviously played Va Tech very close on the road. Also, Florida was not in my top 25 last week and was only ahead of Tennessee by one one-hundredth of a rank point, so my rankings would have told you to take Tennessee and the points. Upsets are always going to happen - team A should always beat team B until they don't - and I am sure that any system is going to have a ceiling as far as accuracy there.

As far as UCLA and Miami FL, UCLA has three dominant performances while Miami has two closer wins. As with any "bias-free" system, the rankings in the earlier weeks are going to be more questionable than they will be later in the season. Here again, though, is a chance for predicting outcomes - per your hypothetical, maybe you should take UCLA and the 11 points on a neutral field.

-909

Lofter4

September 20th, 2009 at 9:42 AM ^

Under your system, where everyone starts the same, how is Miami (that Miami) #24 while many teams ahead of them (including Michigan) don't even have one win as impressive as two of Miami's so far (@FSU and Georgia Tech)? The way I understood it, Miami, with two big underdog wins, should be ranked much higher, in my e-opinion.

909Dewey

September 20th, 2009 at 10:28 AM ^

The main reason Miami FL seems a little "undervalued" in my system is that they have only played two games. Both of those games were close, and both of their opponents were similarly ranked in my system. A close game against even competition isn't going to get you too much in my system; I see this as a feature, not a bug. Also, there is very little separation amongst the teams after only three weeks, so the idea of an underdog is not very strong yet.

909Dewey

September 20th, 2009 at 10:50 AM ^

Florida had a dominant win against Troy and a decent win against Tennessee. PSU is benefiting from the lack of separation among the teams in early play - on game day, their opponents weren't considered "weak" yet in my system. This will be less of a factor as the season goes on. Or PSU might just be that much better than Florida; I am saying that, based on results so far, they are. (Also, there is the two-games-versus-three thing, since the Florida win over a non-FBS team doesn't count.)

OuldSod

September 20th, 2009 at 12:21 PM ^

To second someone else, you really do [at some point] need to retrodictively vet your method and see how well it predicts results (interestingly, BCS computer rankings typically have higher retrodictive accuracy, around 90%, but only about 75% predictive accuracy). I believe that for a computer ranking to be used in the BCS formula, it must meet a specified retrodictive accuracy. The criticism of computer rankings is that they use subjective weights on subjective criteria; that criticism is as vacuous as a donut shop after a visit from Charlie Weis. The weights are retrodictively determined using previous seasons to MAXIMIZE predictive ability BETTER than human polls. This study from Michigan Physics (http://arxiv.org/abs/physics/0505169) gives examples of this (and further references) for an A > B > C (but C beat A!) mathematical model. I think it's badass that you developed your own model, but by vetting it, you can only make it better.
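For what it's worth, the retrodictive version of the check is the flip side of the predictive one sketched above: score the season-ending ratings against every game already played, rather than the ratings entering each week. Here is a sketch under the same assumed data layout (ratings as a dict, games as (week, winner, loser) tuples; none of this comes from an actual BCS formula).

```python
def retrodictive_accuracy(final_ratings, games):
    """final_ratings: {team: rank_points} at the end of the season.
    games: iterable of (week, winner, loser) already played.
    Returns the fraction of those games the final ratings 'explain'."""
    hits = total = 0
    for _, winner, loser in games:
        if (winner in final_ratings and loser in final_ratings
                and final_ratings[winner] != final_ratings[loser]):
            total += 1
            hits += final_ratings[winner] > final_ratings[loser]
    return hits / total if total else float("nan")
```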