biakabutuka explosion 3000
An idea has been nagging me for the last few weeks that goes like this: to say a team's goal is to win the game is needlessly over-specific. Any rational team’s goal is to have the lead all game. So every second you don’t have the lead is a failure to some degree. Not only that, but a measurable failure.
With this in mind, I was surprised that none of the computerized rankings seem to take lead time into account. Sagarin, Massey, Colley, Wolfe and Harris don’t mention it on their sites. This fed my curiosity about whether it’s any good as a metric. For the record, I didn’t set out to prove anything. Most of all, I just wanted to take a look at the season through a different lens. With that said, onto the…
I started with the 2012 per-drive data from cfbstats.com (H/T to mgouser TSS for pointing me there), then calculated the lead times in each game. Then I weighted those leads by the strength of the team the lead was held against. I used my own results from the first calculation as the team-strength metric, so my results were not skewed in the slightest by anyone else’s formula. Then I weighted those results one more time for good measure, so opponents’ opponents are weighted in. The only factor considered is the amount of time teams held the lead in games.
The Norm 1 (or normalized 1 time) ratings rank teams based on the amount of time they had a lead this season, nothing else. Norm 2 weights lead times against the Norm 1 rating of the opponent. Norm 3 weights lead times against the Norm 2 rating of the opponent.
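For concreteness, the three-pass weighting described above can be sketched in a few lines of Python. This is my own illustrative reconstruction, not the actual code: the team names, lead times, and function names are all made up, and real ratings would run over a full season of games.

```python
# Hypothetical sketch of the Norm 1/2/3 ratings described above.
# Assumes per-game lead times (in seconds) are already computed.
# All data and names here are illustrative, not the real formula's code.

GAME_SECONDS = 3600  # regulation time only; overtime is ignored per the notes

# lead_times[team] = list of (opponent, seconds_led) over the season
lead_times = {
    "Alabama":   [("Michigan", 3300), ("Texas A&M", 1200)],
    "Michigan":  [("Alabama", 0),     ("Texas A&M", 600)],
    "Texas A&M": [("Alabama", 1800),  ("Michigan", 2400)],
}

def norm1(leads):
    """Norm 1: share of available game time spent in the lead."""
    return {
        team: sum(sec for _, sec in games) / (GAME_SECONDS * len(games))
        for team, games in leads.items()
    }

def reweight(leads, opp_rating):
    """Weight each lead by the rating of the opponent it came against."""
    return {
        team: sum(sec * opp_rating[opp] for opp, sec in games)
        / (GAME_SECONDS * len(games))
        for team, games in leads.items()
    }

n1 = norm1(lead_times)          # Norm 1: raw lead time
n2 = reweight(lead_times, n1)   # Norm 2: weighted by opponent strength
n3 = reweight(lead_times, n2)   # Norm 3: weighted by opponents' opponents
```

Note how each pass just reuses the previous pass's ratings as the opponent weights, which is why no outside formula ever enters the picture.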
The list, in three parts:
Top teams in graph form:
Some important notes about the data and/or formula
- No 2-pt conversions or missed extra points are accounted for because the data I used doesn’t mention them. All touchdowns are assumed to be 7 points.
- After reconstructing the running score in each game, some of the final scores came out...off. Just a little bit. This is probably because of the previous bullet.
- Tie scores are ignored. I think it might be worth it to value them somehow, but I didn’t have time.
- Because of the last caveat, a constantly tied slugfest is worth less than a back-and-forth game. This should only affect the kinds of teams that get into these kinds of games, i.e. the middling ones, but it still bothers me.
- To add to the last point, I therefore believe the very best and very worst teams are ranked the most accurately.
- Overtime is ignored.
- Even with team weightings, you are rewarded slightly more for leading the whole game against #19 Utah State than for leading for half of the game against #1 Alabama.
- You are rewarded more for giving away a game where you led all the way than for being on the other side of that.
- Injuries that affect today’s team are not factored into yesterday’s results.
- A strategy to wear other teams out may arguably be lead-agnostic early in the game. However, Oregon and Alabama are the kings of this strategy—in radically opposite ways no less—and they are the top two teams rated. So there’s that.
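The running-score reconstruction behind several of these caveats can be sketched like this. Everything here is an illustrative assumption on my part: the field names don't match cfbstats.com's real schema, touchdowns are hard-coded to 7 points, ties earn nobody credit, and overtime is simply dropped, per the notes above.

```python
# Illustrative sketch of turning per-drive results into lead time,
# under the simplifications listed in the notes: every TD = 7 points
# (no 2-pt conversions or missed PATs), ties credit no one, no OT.
# Field names are my own, not cfbstats.com's actual schema.

POINTS = {"TD": 7, "FG": 3, "NONE": 0}

def lead_seconds(drives, game_seconds=3600):
    """Return (home_lead_sec, away_lead_sec) from a list of drives.

    Each drive is (side, result, end_sec) with end_sec counting up
    from kickoff; drives must be in chronological order.
    """
    home = away = 0
    home_lead = away_lead = 0
    prev = 0
    for side, result, end_sec in drives:
        # credit the interval since the last drive at the old score
        if home > away:
            home_lead += end_sec - prev
        elif away > home:
            away_lead += end_sec - prev
        # tie scores: nobody gets credit for this stretch
        if side == "home":
            home += POINTS[result]
        else:
            away += POINTS[result]
        prev = end_sec
    # tail of the game after the final scoring drive
    if home > away:
        home_lead += game_seconds - prev
    elif away > home:
        away_lead += game_seconds - prev
    return home_lead, away_lead
```

For example, a game where the home team scores a TD at 10:00, gives up a FG and a TD, then retakes the lead at 50:00 splits into 1800 seconds of home lead and 1200 of away lead, with the opening tied stretch counting for neither side.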
But anyway, onto…
Well, the results are unique, that's for sure. But they're not exactly out of left field, either. And some of them are downright acceptable.
- Michigan: I have to admit, part of the reason I did this was to prove that Michigan is better than their record. This may still be true, but not according to my formula. Why would this be? It's simple, really. I've given them a lot of credit for playing top teams, but they rarely led in these games. Deep down, what's the difference between losing all game and never showing up? In the case of the Alabama game, I can say: not much. Furthermore, their most dominant performances came against the worst opponents on their schedule. That shouldn't be a surprise, but if it’s true, neither should the fact that they are properly rated. I am disappoint.
- Oklahoma State: A 7-5 team that was competitive in every loss but one is my #6 team. I wonder if their fans and MSU’s fans have a support group, and if so, where would they find a couch.
- Ole Miss: I barely noticed this team this year. I wonder how their fans feel about their season. They were 6-6 but they may be in for a bounce next year if nobody leaves.
- Utah State: Holy crap did they ever have an under-the-radar season. But they do drop from #4 to #19 once you factor in strength of schedule. Let’s not play these guys, you guys.
Not Surprise Bullets
- Notre Dame is not the best team but they are good. They look better when the strength of opponent is factored in (#4 vs #8).
- Ohio State is not an elite team. That's probably partly why Michigan played them so close. Like Notre Dame, the strength of opponents they led against does bump them up quite a bit (from #26 to #13).
- Texas A&M beating Alabama is somewhat less of a surprise—they’re my #3 team.
- Florida is overrated, said everyone ever until they beat Florida State. But guess who else is overrated? Florida State (their line happens to be one of the most interesting ones, though).
- Michigan State is...marginally better than Michigan? Well, no one would be surprised if you had claimed this in August.
- Stanford beat Oregon, had a tougher schedule, and won the Pac 12. So why do a lot of people just assume that Oregon is the better team? These results might explain why. Oregon was actually a lot more dominant all season, all else being equal. I mean if you don’t count all the stuff that counts.
- Proving my assumptions about Notre Dame and Ohio State almost offsets the disappointment in not proving my assumptions about Michigan.
- The championship game should probably be Oregon-Alabama, just like a lot of people assumed for most of the season. Go BCS.
- In a 4-team playoff, Notre Dame and their undefeated record would deserve a shot. As would Texas A&M, owners of the best win by any team all season.
- These results would be considerably more controversial if Georgia had defeated Alabama, or Michigan had eked out a 2011-esque win against Notre Dame. But none of this happened and maybe there’s a lesson in that.
- I do think that completely removing wins and losses from the equation takes a little of the fun out of it. And it leads to teams with 6 wins being rated higher than BCS juggernauts…like Northern Illinois. But on the other hand, I don’t see why this metric couldn’t be used in unison with a few others in determining how dominant of a season a team had.
- Vegas, which you may know is in the business of predicting games, would no doubt make Bama less than a ten-point favorite against Oregon; ten is the current line against Notre Dame. Hey, if Vegas agrees with my relatively simple formula more than with the one the big boys use, maybe my poll is better.**
Phew, sorry for the long post. If anyone’s interested, I would consider running this against previous seasons, and hopefully writing a lot less. I would also consider tweaking the formula if the improvements are obvious and consistently better.
* I can’t think of a good name for this. “Lead metric”?
** for the record, Vegas does disagree with some of my rankings. For example, in the bowl games Vegas favors Miss State over Northwestern and Stanford over Wisconsin. Could be because the Big Ten sucked and I didn’t weight the data properly. Also, I already warned you about middling teams. Ctrl-F it.
I keep hearing a lot of things said about our dearly departed QBs of the last couple years, and some of the assumptions don't make sense to me. This is the way I hear the story told, without dates but in chronological order:
1. Threet chooses Georgia Tech and Mallett chooses Michigan
2. Threet chooses to transfer to Michigan
3. Mallett doesn't like Michigan and decides to transfer to Arkansas
4. Threet chooses to transfer away from Michigan
Tell me if this doesn't make more sense. Bear with me as this is intricate:
1. Mallett grew up a Razorback fan but he wouldn't start over Mustain (or he assumed as much), so he goes to the next-best statue-QB school available: Lloyd Carr's Michigan Wolverines
2. Threet grew up a Michigan fan, but knowing that Mallett would be there, he shied away
3. Mallett doesn't like Michigan and Michigan doesn't like Mallett. Even the football seems to hate Mallett, as it finds ways to slip out of his hands time and time again.
4. Mitch Mustain transfers to USC. Mallett transferring to Arkansas becomes a question of when, not if. Carr knows this.
5. Carr looks furiously for the best available option. He knows Threet would transfer if he knew that Mallett was leaving, so he tells him that. Threet can't tell anyone that he knows Mallett is transferring, though.
6. Michigan has a bad season, Carr is gone, Rodriguez is hired.
7. Threet sees he's not in our long-term plans anymore. His transfer is a question of when, not if. The wide-open starting job of last season kept him at bay for a year.
8. Rodriguez knows the situation and looks furiously for two QB recruits. He is telling them that Threet is as good as gone. (How do you get two comparably talented QBs to be part of the same class, with last year's starter returning? Perhaps the reason we kept losing commits was because they kept being assured Threet was transferring, but it wasn't happening.)
9. Threet transfers; hello starting freshman QB! (edit: notice that we had no more QB decommits as soon as the transfer was made public.)
The big difference in timeline 2 is that everyone's actions are reactions to something external rather than unprovoked. What do you think? Is there evidence that Threet really opted to transfer to a volatile coaching situation, knowing he would sit behind a 5-star QB for four years? That makes no sense to me, especially since he transferred again this offseason because he doesn't want to be a benchwarmer.