Bradley-Terry Statistical Rating (KRACH) for FBS Football
What do the numbers say?
The Bradley-Terry method applied to college football.
A couple of notes regarding the calculations. I use the Bradley-Terry method to determine the ratings. This is an iterative, statistical rating that computes a hypothetical round-robin winning percentage, as if all teams had played each other. Clearly, that's not the case in college football, and the method yields infinite ratings for undefeated teams. For the sake of comparison, this problem is 'solved' by adding a fictitious tie to each team's record.
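For the curious, the core of the calculation can be sketched in a few lines of Python. This is a minimal sketch, not the actual Perl script; in particular, the fictitious tie is modeled here as half a win and half a loss against a dummy opponent of fixed strength 1.0, which is one common convention but may differ from the script's.

```python
def krach(games, tol=1e-9, max_iter=10000):
    """Iterative Bradley-Terry ratings from a list of (winner, loser) games.

    Sketch only: the fictitious tie is half a win plus half a loss against
    a dummy opponent whose strength is pinned at 1.0.
    """
    teams = sorted({t for g in games for t in g})
    DUMMY = "__avg__"  # hypothetical stand-in opponent for the fictitious tie
    wins = {t: 0.5 for t in teams}            # half a win from the tie
    sched = {t: {DUMMY: 1.0} for t in teams}  # one fictitious game each
    for w, l in games:
        wins[w] += 1.0
        sched[w][l] = sched[w].get(l, 0.0) + 1.0
        sched[l][w] = sched[l].get(w, 0.0) + 1.0
    r = {t: 1.0 for t in teams}
    r[DUMMY] = 1.0  # held fixed; anchors the rating scale
    for _ in range(max_iter):
        # Fixed-point update: r_i = W_i / sum_j [ N_ij / (r_i + r_j) ]
        new = {t: wins[t] / sum(n / (r[t] + r[o]) for o, n in sched[t].items())
               for t in teams}
        done = max(abs(new[t] - r[t]) for t in teams) < tol
        r.update(new)
        if done:
            break
    return {t: r[t] for t in teams}
```

The fictitious tie is what keeps the denominator honest: every team has at least half a loss, so no rating runs off to infinity.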
- Game results are pulled from NCAAFootball.com.
- Blogpoll results are pulled from SBNation.
- There are a lot of explanatory notes and links; I put those at the end of the post so people who don't care about them can skip them and get right to the results. There is also a link to all my results.
- For brevity, I only list the top 20 here, along with Michigan's position for those who are interested.
- I release this after all the major polls come out to avoid 'influencing' anybody's vote.
To the numbers...
Through games of 2010.11.06
Auburn is pulling away as the number one team. Oregon has finally caught up with everybody else's ranking of them, thanks to strength of schedule, and Boise St. has begun to fall for the same reason. The bonus for being undefeated this far into the season is starting to be offset by perceived strength of schedule, and LSU is nipping at the Broncos' heels.
As a point of comparison, the rating lines up quite well with the blogpoll. There are two outliers: blogpollers really like Ohio State, and they really dislike Virginia Tech (see the discussion of limitations below).
There's a pretty significant drop-off from #5 LSU to #6 Stanford, and another notable one from #10 Wisconsin to #11 Utah. I think this supports the notion that a 16-team tournament would be enough to include all the top teams: if you're in the muddle around #16, there really isn't much to complain about if you're left out.
The conference breakdown in the top 10 and top 16 (non-BCS conferences in parentheses):
|Conference|Top 10|Top 16|
The top 10 has roughly equal representation from the BCS conferences. In the top 16, a bias toward the SEC emerges, with five teams, including those in spots 14-16. Not surprisingly, the Big East is absent, and the ACC is, well, underrepresented. TCU, Boise St., and Utah are ranked above all comers from those conferences, including Virginia Tech, who lost to Boise St. and, as we all know, James Madison. The nagging question remains: how do you compare the relative strengths of conferences when they don't play each other?
The next few weeks should be interesting.
Discussion of limitations
That said, there's always some resistance when I post this rating. It's one additional data point. It isn't even my opinion, and it doesn't mean your team is better or worse than you think it is. It attempts to look objectively at how teams would fare if they played every other team. There are some limitations, namely the infinite, incomparable results for undefeated teams. As with any statistical calculation, sample size matters: while there are only ~12 games per team, there are ~120 teams, and one could argue the merits of applying any statistical calculation to such a sample. It should also be pointed out that games against FCS teams are ignored. This is a double-edged sword: teams don't get credit for beating up on FCS opponents, but Virginia Tech effectively gets a pass for losing to James Madison.
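To see why undefeated teams produce infinite results, here's a small Python sketch using the standard Bradley-Terry fixed-point update, with illustrative team names and no fictitious ties. The iteration never settles: the undefeated team's rating keeps climbing relative to everyone else's, and the winless team is driven to zero.

```python
def bt_step(r, wins, sched):
    # One Bradley-Terry update: r_i = W_i / sum_j [ N_ij / (r_i + r_j) ]
    return {t: wins[t] / sum(n / (r[t] + r[o]) for o, n in sched[t].items())
            for t in r}

# Team A is undefeated (beat B and C); B beat C; no fictitious ties added.
wins  = {"A": 2.0, "B": 1.0, "C": 0.0}
sched = {"A": {"B": 1.0, "C": 1.0},
         "B": {"A": 1.0, "C": 1.0},
         "C": {"A": 1.0, "B": 1.0}}
r = {"A": 1.0, "B": 1.0, "C": 1.0}
ratios = []
for _ in range(50):
    r = bt_step(r, wins, sched)
    ratios.append(r["A"] / r["B"])
# ratios keeps growing with every iteration: there is no finite fixed
# point, so A's rating can't be meaningfully compared to anyone else's.
# Winless C, meanwhile, is pinned at exactly zero after the first step.
```

With the fictitious tie added, every team has a fractional win and loss, and the same iteration converges to finite, comparable ratings.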
Brian pointed out another interesting anomaly (it's the double star at the very bottom) in last year's end-of-season college hockey KRACH. A similar effect can be seen in this rating, as discussed above. So why does this happen? In college hockey, as in college football, there's little overlap between conferences, so teams tend to get compartmentalised.
As with any tool, it's only as good as its user; we can't blindly take the results as fact. One possible fix is to take the top 30 teams at the end of the season and run a KRACH on only those teams, though for any hypothetical tournament I would strongly support the inclusion of all conference champions.
What if I want to see the entire rating, and results for each week?
All the results are available if you'd like to see the numbers yourself. As I said last year, John Whelan freely gave me the Perl script in 1998 to calculate KRACH for ACHA club hockey teams, so I'm happy to share the script and input data if you don't want to write your own. And I am fallible: there's a lot of data to crunch, and I copied and pasted from the NCAA site, so there may be errors. If you find one, please bring it to my attention and I'll fix it posthaste.