TO THE HOT TAKE CANNON
For the last few years, I've blatantly stolen Seth's idea to use advanced metrics both to fill out my Bowl Pick'Em and to decide which games to watch. Two years ago, this approach got me 69% correct picks in my pool, but last year things were a bit rougher - an FEI-based pick'em got 54% correct, while a Sagarin PREDICTOR-based one got 57%. When something doesn't work, throw more data at it. So I put together a more elaborate spreadsheet (available here) that presents picks from several different advanced metrics: FEI, Colley, Massey (Power), and Sagarin (new, improved GOLDEN MEAN).
The methodology is straightforward - I compared all the teams using these metrics and used the difference between them to pick the winners and set the confidence in the picks. That is, a huge difference in the ratings of the teams suggests a lock, while a difference of zero is a push. In addition to looking at these metrics individually, I also put together a composite score by standardizing all the values and averaging them together. The list automatically sorts based on the system you choose, with locks at the top and coin-flips (and presumably more exciting games) at the bottom. Interestingly, the four different systems present three different potential national title winners, but none of them include OSU, so take some small pleasure in that.
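For the curious, the composite approach boils down to a few lines of arithmetic. Here's a minimal sketch in Python; the team names, ratings, and league-wide scale numbers are all made up for illustration, since the actual values live in the spreadsheet:

```python
# Hypothetical sketch of the composite-pick calculation: put each rating
# system on a common z-score scale, then average the standardized margins.
from statistics import mean

# Each system's rating for the two teams in a made-up bowl matchup.
ratings = {
    "FEI":     {"Team A": 0.21, "Team B": -0.05},
    "Colley":  {"Team A": 0.78, "Team B": 0.61},
    "Massey":  {"Team A": 8.4,  "Team B": 5.1},
    "Sagarin": {"Team A": 82.3, "Team B": 79.9},
}

# (mean, stdev) across all teams for each system - invented numbers here;
# in practice these come from the full ratings list.
scale = {
    "FEI":     (0.0, 0.15),
    "Colley":  (0.5, 0.2),
    "Massey":  (0.0, 6.0),
    "Sagarin": (75.0, 8.0),
}

def zscore(system, value):
    mu, sigma = scale[system]
    return (value - mu) / sigma

def composite_margin(team1, team2):
    """Average standardized rating difference across all systems.
    Positive -> team1 favored; magnitude = confidence (0 is a coin flip)."""
    diffs = [zscore(s, ratings[s][team1]) - zscore(s, ratings[s][team2])
             for s in ratings]
    return mean(diffs)

margin = composite_margin("Team A", "Team B")
pick = "Team A" if margin > 0 else "Team B"
```

Sorting all the games by the absolute value of this margin gives the locks-at-the-top, coin-flips-at-the-bottom ordering described above.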
Here is the table of composite picks:
| Bowl | Date | Projected Winner | Confidence - Watchability |
|---|---|---|---|
| New Mexico | 12/20/2014 | Utah St | 0.7864 |
| Advocare V100 Texas | 12/29/2014 | Arkansas | 0.7151 |
| Popeyes Bahamas | 12/24/2014 | W. Kentucky | 0.4061 |
| Heart of Dallas | 12/26/2014 | Louisiana Tech | 0.4038 |
| Idaho Potato | 12/20/2014 | Air Force | 0.2092 |
| Quick Lane | 12/26/2014 | N. Carolina | 0.2037 |
Good luck in your bowl pools and happy holidays.
Back a few years ago, Seth put together a really neat FEI-based analysis of the bowl games. He not only picked winners in the games, he also created a "watchability index," which looked at how "good" the games would be in terms of quality and how evenly matched the teams were. This allowed him to make the most of his limited CFB watching time over the holidays, an objective I shared. So, I stole his idea last year, put together a similar but much less sophisticated analysis, and got 69% of the picks correct while not having anyone too mad at me for wanting to watch a select set of games.
I did a similar analysis this year, while adding FEI* into the mix (full analysis here). In essence, I compared all the teams in terms of Sagarin and FEI, and used the difference between them to pick the winners and the confidence in the picks. That is, a huge difference in the ratings of the teams suggests a lock, while a difference of zero is a push.
Here's what I came up with for picks. The two pick columns show who Sagarin and FEI predict will win the games, and the confidence ranks represent the picks about which each metric is "most confident" - i.e., the biggest difference between the two teams, with higher numbers indicating more confidence. I then added the two confidence rankings together, which produces the following results. There were a few games where Sagarin's and FEI's predictions did not line up - those should be some of the more closely contested games. Good luck in your bowl pools and happy holidays.
| Bowl | Date | Sagarin Pick | Sagarin Conf. | FEI Pick | FEI Conf. | Total | Split |
|---|---|---|---|---|---|---|---|
| Pinstripe | 12/28/2013 | Notre Dame | 35 | Notre Dame | 35 | 70 | 0 |
| Holiday | 12/30/2013 | Arizona State | 33 | Arizona State | 33 | 66 | 0 |
| Beef 'O' Brady's | 12/23/2013 | East Carolina | 28 | East Carolina | 34 | 62 | 0 |
| GoDaddy | 1/5/2014 | Ball State | 27 | Ball State | 31 | 58 | 0 |
| Las Vegas | 12/21/2013 | Southern California | 26 | Southern California | 26 | 52 | 0 |
| Heart of Dallas | 1/1/2014 | North Texas | 16 | North Texas | 30 | 46 | 0 |
| BCS Championship | 1/6/2014 | Florida State | 34 | Florida State | 12 | 46 | 0 |
| New Mexico | 12/21/2013 | Washington State | 32 | Washington State | 9 | 41 | 0 |
| Little Caesars | 12/26/2013 | Bowling Green | 17 | Bowling Green | 16 | 33 | 0 |
| Chick-fil-A | 12/31/2013 | Texas A&M | 20 | Texas A&M | 13 | 33 | 0 |
| Orange | 1/3/2014 | Ohio State | 5 | Ohio State | 23 | 28 | 0 |
| Liberty | 12/31/2013 | Mississippi State | 15 | Mississippi State | 10 | 25 | 0 |
| Belk | 12/28/2013 | North Carolina | 11 | North Carolina | 4 | 15 | 0 |
| Capital One | 1/1/2014 | Wisconsin | 8 | South Carolina | 5 | 13 | 1 |
| Hawaii | 12/24/2013 | Oregon State | 10 | Boise State | 3 | 13 | 1 |
| Music City | 12/30/2013 | Georgia Tech | 2 | Mississippi | 8 | 10 | 1 |
| Buffalo Wild Wings | 12/28/2013 | Kansas State | 9 | PUSH | 1 | 10 | 1 |
| Poinsettia | 12/26/2013 | Utah State | 3 | Utah State | 6 | 9 | 0 |
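The rank-sum mechanics behind the ordering above are simple enough to sketch in a few lines of Python. This is an illustrative reconstruction, not the actual spreadsheet, using three rows of the data:

```python
# Sketch of the confidence-rank combination: each system's confidence rank
# is summed, and a "split" flag marks games where the two systems disagree.

# (bowl, sagarin_pick, sagarin_conf_rank, fei_pick, fei_conf_rank)
games = [
    ("Pinstripe", "Notre Dame", 35, "Notre Dame", 35),
    ("Capital One", "Wisconsin", 8, "South Carolina", 5),
    ("Poinsettia", "Utah State", 3, "Utah State", 6),
]

rows = []
for bowl, s_pick, s_rank, f_pick, f_rank in games:
    total = s_rank + f_rank            # combined confidence
    split = int(s_pick != f_pick)      # 1 = systems disagree -> closer game
    rows.append((bowl, total, split))

# Most-confident games (highest combined rank) first, as in the table.
rows.sort(key=lambda r: r[1], reverse=True)
```

Note that because the confidence values are ranks rather than rating margins, summing them weights the two systems equally regardless of how their raw scales differ.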
* I will admit that I am not 100% confident in the interpretation of FEI here. I have not yet been able to find a good explanation of what this represents beyond the standard explanation that Brian Fremeau provides on his website: "...the baseline possession efficiency expectations against which each team is measured." If you have a statistically minded explanation, I'd be very interested to hear it.
I just used the wayback machine to look at the Sagarin predictor ratings from early January (games through 1/3). Notice anything?
- Ohio State
Wichita State was 19, Michigan State was 23, Marquette was 54, FGCU was 127.
Marquette was a bad miss, but on the whole these were a better predictor of tournament results than the ratings in March. You'd have done pretty well, filling out a bracket based on these.
Sometime later I'll check whether this was true in prior years, but in the meantime I'll make a suggestion: is it possible that performance in non-conference games - against the full variety of styles of play that you'll eventually see in the tournament, and against teams that you aren't familiar with (nor they with you) - is a better predictor than conference play? It's been a while since we've seen M light up a zone like we just saw for a few minutes in the first half against Florida, because except for Northwestern, nobody in the B1G plays it.
According to the Sagarin ratings, which predict actual game outcomes:*
IN BB, UM is #1 in the B1G. In the nation, UM is #3, IND #4, and Ohio is not in the top 10.*
IN FB, on a neutral field UM would still be favored over Ohio (by 0.15 pts). Why? UM’s nonconference opponents included the two teams now favored to be in the national title game. By contrast, Ohio’s “marquee” nonconference game this year supposedly was Cal, which is now not even in the top 70 nationally. Moreover, Ohio’s other wins were often very narrow. In fact, even for the UM game, they were only +2 pts after subtracting 3pts for home field (not even considering the injury to UM’s starting QB).
What then should we make of Ohio’s claim that they could win the AP national title? That claim should be laughable to any educated voter. In fact, Ohio is not even ranked in the top 20 nationally by Sagarin (they are #24, whereas UM is #22). Also, Ohio will not be tested in a competitive bowl or conference championship. Indeed, in the latter, on a neutral field, they would be favored by less than half a point vs WISC, 2.5 pts vs NW and they would be underdogs to both NEB and UM.
Interestingly, ND's delusions of grandeur should also be tempered. Although human pollsters will no doubt put them #1, would ND actually be favored to beat all the other teams according to the unbiased Sagarin PREDICTOR ratings? No, not at this point. Why? ND had many narrow wins, even over marginal teams, and teams expected to be huge challenges - like USC - turned out not to be so great. Thus, by Sagarin's ratings, while ALA is #1 and Oregon #2, ND is only #3. Both ALA and OR would be favored over ND by large margins on a neutral field. In addition, unlike ALA but like Ohio, ND will not be tested in a conference championship game.
*I report only those ratings that predict actual game outcomes. For BB I take the average of ELO and PREDICTOR ratings. For FB, I report only PREDICTOR ratings (not the ELO-CHESS, which is used by the BCS but does not consider margin of victory or predict actual game outcomes).
I like that computers are part of the BCS rankings. They're objective. That doesn't mean they're smart, though, since any computer ranking is only as smart as the people writing the ranking algorithm.
I'll pick on Jeff Sagarin, since his rankings are the best known and he's the most extreme with the Big 12. Here are the Big 12 teams in his BCS rankings:
2. Oklahoma State (10-0; AP#2)
6. Oklahoma (8-1; AP#5)
7. Kansas State (8-2; AP#16)
9. Baylor (6-3; AP#25)
12. Texas A&M (5-5; no AP votes)
13. Texas (6-3; AP#31)
17. Missouri (5-5; no AP votes)
26. Texas Tech (5-5; no AP votes)
28. Iowa State (5-4; no AP votes)
66. Kansas (2-8; no AP votes)
For reference, he has Michigan at #27 and his top ranked Big Ten team, Michigan State, is #22. In other words, he believes that 70% of the Big 12 has had a more impressive season than every single Big Ten team. Every Big 12 team but Kansas has been better than Wisconsin.
It seems like these rankings are far too kind to teams struggling in conferences that perform well in a few nonconference games. The Big 12 performed relatively well, granted, but here are all of their games against AQ conference teams:
Wins (6): Arizona (OSU), Florida State (OU), Miami (K-State), UCLA (Texas), Iowa (ISU), UConn (ISU) [note: they also had a few solid wins against TCU, Tulsa, etc.]
Losses (3): Arkansas (TAM), Arizona State (Missouri), Georgia Tech (Kansas)
This is a trivial thing to get worked up about, but these rankings make a big difference in determining who plays in BCS games and the championship game. Problems here can have major consequences.
No, I don't believe Sagarin rigged his schedule ratings to help Oregon and prevent TCU from miraculously slipping by Oregon. But it is interesting to note that while I have heard plenty of talk about TCU and Boise St. lacking schedule strength, I hadn't really heard much regarding Oregon's.
Enter an unnamed MGoBlogger* (**edit**: named Drakeep) who pointed out that the Big Ten teams' schedules included an average of 7 winning opponents (while each SEC team faced an average of 5.8, and the PAC-10 something like 4...). This savvy blogger also pointed out that Oregon had only faced 3 teams with a winning record. I could barely believe it, and checked the stats myself. Such is true.
So I head over to Sagarin to see where exactly a schedule against 3 winning teams and a very much non-winning FCS school would rank. 20th. What was U of M's against 7 winning teams and a winning FCS school? 40th. Hmmm....
Next, I give Sagarin the benefit of the doubt and assume that although Oregon's opponents didn't all win a lot of games, the games they did win must have been meaningful. (In other words, Oregon's opponents must have combined to beat a lot of winning teams... as beating crappy teams and losing to good ones should not build a team's own strength.)
Oregon - Played 3 teams with winning records (out of 11, plus one losing FCS team). The 12 teams Oregon played combined for 12 victories over "winning FBS opponents" and 7 victories over "winning FCS opponents." That equates to Oregon's opponents each beating ONE winning team.
Mich - Played 7 teams with winning records (out of 11, plus one winning FCS team). The 12 teams Michigan played combined for 32 victories over "winning FBS opponents" and 7 victories over "winning FCS opponents." That equates to Michigan's opponents each beating 2.67 winning teams.
These statistics are not even close, on either the primary or secondary level. Yet, there it is: Oregon's SOS at 20 and Michigan's SOS at 40.
For another reference point: Mich St. played 5 teams with a winning record, and MSU's opponents combined to haul in 19 wins against "winning FBS opponents." They lie between Michigan and Oregon on both the primary and secondary levels, and have a SOS rated 65th.
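The two "levels" of comparison above reduce to simple per-opponent averages. Here's a quick sketch that reproduces the arithmetic, using the win counts quoted in this post:

```python
# Back-of-the-envelope check of the schedule comparison: primary level is
# the count of opponents with winning records; secondary level is the
# opponents' combined wins over winning FBS teams, averaged per opponent.

schedules = {
    # team: (opponents with winning records,
    #        opponents' combined wins over winning FBS teams,
    #        total opponents faced)
    "Oregon":   (3, 12, 12),
    "Michigan": (7, 32, 12),
    "Mich St":  (5, 19, 12),
}

def secondary_level(team):
    """Opponents' wins over winning FBS teams, per opponent faced."""
    _, opp_wins_vs_winners, n_opps = schedules[team]
    return opp_wins_vs_winners / n_opps

oregon = secondary_level("Oregon")      # 12/12 = 1.0 winning team beaten per opponent
michigan = secondary_level("Michigan")  # 32/12 = 2.67
msu = secondary_level("Mich St")        # 19/12 = 1.58
```

On both levels Michigan > MSU > Oregon, which is what makes the SOS ordering of 20th (Oregon), 40th (Michigan), and 65th (MSU) so hard to square.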
In conclusion, based on the ranking of Michigan and MSU schedules, Oregon's schedule should probably rate somewhere between 70 and 80. This has placed me in the odd position of questioning the legitimacy of Sagarin's rankings... if any mathematician out there can point out how strength of schedule might use something more meaningful and direct than opponent's wins and opponents' wins against winning teams to rank schedules, let me know. Until then, I'm going to have to believe that Sagarin is off his rocker.
*Unnamed MGoBlogger - my apologies, but I went in search of your forum and could no longer find it. If you (or anyone else) would care to link to your post, I will gladly edit the above content to include your name and a link.