[Ed.: Bump. This makes sense to me: Michigan should mostly dump special teams once it gets across midfield.]
As Brian highlighted in the UMass round-up, forgoing the punt altogether might not be such a bad decision. He noted my earlier look at the topic, and I wanted to pull it back out, revisit it, and refine some of the work.
I looked at the years 2004-2009 and only at the top 20 rated offenses for each year. This study assumes that Michigan's offense this year will be of top 20 caliber, and that definition of greatness is broad enough to give a good sample size. I did not distinguish what type of offense (Texas Tech Air Raid vs. Georgia Tech triple option vs. spread and shred) was used to get into the top 20. I will detail more assumptions as they become applicable along the way. In place of fourth down conversion percentages I used third down conversion percentages, since that data pool is much larger and covers a wider variety of opponent levels. Since the thought process on third and fourth down is roughly the same in almost all (for now, anyway) situations, it seems reasonable to use the third down numbers.
Time for a you know what…
Assumptions: Top 20 offense, average defense, average punt game, average field goal kicker.
Based on these assumptions, except for long yardage, the punter should grab a seat once the offense crosses midfield. On your own side of the field the decision still makes sense starting around the 30 for shorter yardage situations and becomes more viable for longer yardage as you cross further down the field. Field goals become practical with 4+ yards to gain and only from about the 5-25 yard lines.
There are two big advantages a potent offense has that make 4th down tries more logical. The first is that it has more to gain by success: with a limited number of drives in a given game, why give one away for free? The second is that it is more likely to convert. Good offenses are more likely to be in better position on fourth down and more likely to make it. Here is a chart of great offenses' fourth down conversion rates compared with all offenses'. The right-hand column is the one used for the chart above.
[Table: fourth down conversion rates by yards to go, All Teams vs. Great Offenses]
It's not a huge advantage on any one given down, but top 20 offenses convert the same opportunities about 2-3 percentage points more often than the average offense. Note: the rate of conversion for great offenses was much higher in the original analysis, which is part of the reason this chart isn't quite as go-for-it as the original.
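The underlying tradeoff can be sketched as a simple expected-points comparison. This is a minimal sketch of the method, not the actual model behind the chart; every number below is an illustrative placeholder, not real data.

```python
# Hedged sketch of the go-for-it vs. punt comparison.
# All point values and probabilities are illustrative assumptions.

def ev_go(p_convert, ep_if_convert, opp_ep_if_fail):
    """Expected points of going for it on 4th down."""
    return p_convert * ep_if_convert - (1 - p_convert) * opp_ep_if_fail

def ev_punt(opp_ep_after_punt):
    """Expected points of punting: you score nothing, opponent starts deeper."""
    return -opp_ep_after_punt

# Hypothetical 4th-and-2 at midfield for a top-20 offense:
# assumed 60% conversion rate, 2.4 EP if converted,
# opponent worth 1.8 EP after a failure, 0.9 EP after a good punt.
go = ev_go(0.60, 2.4, 1.8)   # 0.6*2.4 - 0.4*1.8 = 0.72
punt = ev_punt(0.9)          # -0.9
print("go" if go > punt else "punt")
```

The better the offense, the higher both `p_convert` and `ep_if_convert`, which is exactly why the go-for-it region expands for top 20 offenses.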
But we don’t have an average <blank>
<blank> = Kicker
Let’s start with the kicking game, which is currently 5 points below average on the season and rated third worst in the country after the first three weeks.
Assumptions: Top 20 offense, average defense, average punt game, below average field goal kicker (FG make odds are reduced by 25% everywhere on the field).
The decisions near midfield obviously aren't changed, but now attempting a field goal on 4th and 5-9 from inside the 25 is no longer the most valuable option.
<blank> = Punter
I know it hasn't been the most Zoltanic of starts for Will Hagerup, but as long as he can hold onto the snap, there is no reason to adjust him to below average, even if he isn't yet an advantage.
<blank> = Defense
This is the one that seems a bit counterintuitive, and Brian and I disagree on it. I say that the strength or weakness of your defense is irrelevant to your offensive decision on whether or not to try a fourth down conversion. My belief that it is irrelevant is based on this chart.
Great defenses obviously give up fewer points than bad defenses, but the key point is that the difference between a great defense and a bad defense is consistent up and down the field. Giving the opponent a first down at midfield isn't a guarantee of a touchdown even against a bad defense, and pinning an opponent deep isn't a guarantee that a great defense will keep the other team off the board. In fact, the gap between the two is about .25 points per first-and-10 all the way from the 1 to the 90. If that gap is constant, then the ability of the defense is irrelevant to the offense's decision to go for it; for the defense's quality to matter, there would have to be evidence that the difference between a good defense and a bad defense changes at different points on the field.
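The cancellation claim can be checked with a toy model. Under the assumption that every decision branch eventually hands the opponent exactly one possession, and that a worse defense makes that possession worth a constant d extra points from anywhere, the d drops out of the go/punt comparison. All EP values here are made up for illustration.

```python
# Toy check: a uniform defensive handicap d cancels out of the decision,
# because every branch hands the opponent one possession worth d extra.

def ev_go(p, d):
    # Convert (p): score a TD (7), then the opponent gets a possession
    # worth a base 1.5 EP plus the handicap d.
    # Fail (1-p): opponent takes over at midfield, base 2.0 EP plus d.
    return p * (7 - (1.5 + d)) + (1 - p) * (0 - (2.0 + d))

def ev_punt(d):
    # Opponent takes over deep, base 0.8 EP plus d.
    return -(0.8 + d)

def margin(p, d):
    """How much better going for it is than punting."""
    return ev_go(p, d) - ev_punt(d)

# The margin is identical for an average defense (d=0) and a bad one (d=1):
print(round(margin(0.6, 0.0), 6), round(margin(0.6, 1.0), 6))
```

Note the hedge built into the model: if a bad defense were disproportionately worse on short fields, d would not be constant and the cancellation would fail, which is exactly the evidence the paragraph above says would be needed.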
So what does all this mean?
If Michigan maintains its feverish offensive pace this year and fails to find an adequate kicker, I think its decision set in all but late-game, score-specific situations should look something like this:
As I noted previously, if you buy into this mentality, it opens up another opportunity: changing your early down play calling. If your four-down strategy has changed, so should your down-by-down playcalling. It may become more viable to risk a wasted down on a deep ball, knowing you have an extra down in reserve; or it might make sense to keep the ball short in the air and on the ground, knowing that over four plays instead of three the likelihood of getting the yardage greatly increases, and to play for the shortest possible fourth down attempt if you don't convert before then.
Apparently the Big Ten Network's website is running a poll to determine which Big Ten team has the "best home-field advantage". Popularity contests do not good data sets make, so I figured I'd apply a lot of counting and a little math and see what I came up with.
- For each Big Ten team, I tallied up their total wins over the last 11* years, and separately tallied how many of those wins came at home.
- I ignored nonconference games. Those will naturally boost home winning percentages as you invite the baby seals to get clubbed at your house, and play home-and-homes against teams that might actually beat you.
- I wanted to compare how well a team did at home with how well it did on average, rather than just totaling home wins and saying "golly, Ohio State must have the best home field advantage because they won at home a lot". Well, unfortunately, they won on the road a lot too, so that doesn't tell you much.
- Of course, the inverse of saying a team has a "Strong home field advantage" would be to say that same team "Sucks on the road". I'm looking at you, Indiana.
*I had planned to look at the last 10 years, but I made my spreadsheet a bit too large and went on my merry way entering in data. I was all done by the time I realized my mistake, and I saw no reason to discard the 1999 season just because it was one more than I had planned to look at.
First, and just for the record, here's your overall Big Ten winning percentages for the last 11 years:
[Table: overall winning percentage and home wins for each Big Ten team over the last 11 years: Rank, Team, Winning %, Home Wins, Home Wins Rank]
Yeah, I know. I don't like it any more than you. Anyhow, as you can see, there's not a lot of difference between a team's overall rank and its rank in terms of raw number of home wins. A bad team is a bad team at home or on the road, and ditto for a good team.
Surely there must be something to the fearsome reputations of such locations as Beaver Stadium and the Horseshoe, though, right?
At first, I tried expressing home field advantage as the percentage increase of home winning percentage over total winning percentage. However, I found that this simply weighted the home success of bad teams much higher. Instead, I totaled the number of wins each team had at home, subtracted the number of wins each team had on the road, and averaged over 11 years to yield a number I'm calling the Expected Increase in Wins at Home (EIWH). In other words, every year each team plays 4 Big Ten home games and 4 Big Ten road games. How many more wins, on average, does a given team expect to claim at home than it will on the road? The results are as follows:
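The EIWH number is simple to compute. A minimal sketch of the calculation described above; the win totals used in the example are hypothetical, not the actual Big Ten data:

```python
def eiwh(home_wins, road_wins, years=11):
    """Expected Increase in Wins at Home:
    (conference home wins - conference road wins), averaged per season."""
    return (home_wins - road_wins) / years

# Hypothetical team: 30 home wins, 19 road wins over 11 Big Ten seasons.
print(round(eiwh(30, 19), 2))  # 1.0 extra win per year at home
```

Dividing the raw difference by years rather than taking a percentage increase avoids the problem noted above, where bad teams' home success gets weighted disproportionately.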
The results have some surprises. Iowa, a slightly-above-average team overall, earns an average of one more win at home than it does on the road, as does cellar-dwelling Indiana. Indiana has only won five Big Ten road games in the past 11 years. Iowa has a reputation as a tough place to play, especially at night, but the Indiana results are inexplicable.
On the other end of the spectrum, Illinois has earned only 16 of its 30 victories at home, which makes for an interesting contrast with Indiana in spite of the two schools' proximity at the bottom of the overall standings. Strangest of all, the feared Horseshoe in Columbus grants a very modest advantage to the hated Buckeyes. They have less of a home field advantage than such teams as Northwestern (a school which, from my personal experience, barely fills half its stadium with home fans) and Minnesota (who played in the sterile Metrodome for the entire period of this study).
What's the message here? It seems that the level of hype attached to particular stadiums has little relation to the advantage those stadiums grant to the team playing there.
Much has been made of the recent UM record. However, when statisticians seek a more reliable measure of a team's quality and the direction of a program, they look at the bigger picture by (1) comparing that season's record with records from other schools and (2) considering not a single year but groups of years (a moving average).
(1) I looked at the records of the two most recent coaches among our rivals. I found that ND had a three-win season, OSU had a four-win season, and MSU had three four-win seasons. Some of these occurred during coaching transitions, like UM's. But others had no such excuse.

http://cid-4bf9d75c782b05b1.skydrive.live.com/self.aspx/notre%20dame%20trends/ND+trends+vs+UM.jpg
(2) As in prior threads (see footnote*), I now report the analysis of the records of the ND coaches, based on the victories averaged over each of 4 successive seasons.**
Results: Under Lou Holtz, the trend was positive overall (an increase of .125 victories per year). Yet, much as occurred during LC's initial years, the gains all came early and were followed by a gradual decline. For each of the subsequent coaches at ND, the trend was consistently negative (a decrease in average victories of .25 per season for Davie, .25 per season for Willingham, and .10 per season for Weis). The trends move downward at a fairly uniform rate, starting from Holtz's peak.
1. The ND program is progressively deteriorating.
2. One wonders if the many coaching changes contributed to this. I have given mixed shades to the transition years, in which one coach has at least two years of the other one's players. From this, one wonders whether Willingham would have continued the upward trend if he had been kept and could have played his recruits during what were the first two years of Weis's tenure.
3. Since ND faces massive losses next year, including the OL, the RBs, and probably Clausen and Tate, in addition to a completely inexperienced backup QB who will be unable to practice while coming off ACL surgery next August, one must seriously wonder when—no, whether—the ND program will get back on track.
If UM uses ND as an example of what might happen to a program, the question for UM now is whether it will follow the pattern of Holtz, who began with a decline in average wins, similar to what is likely for RR (although Holtz did not have the big immediate dropoff in average wins from his predecessor, since that average was already quite low). The promising thing is that, unlike ND, UM has more, not fewer, starters coming back for the next two years. Clearly, it's way too early to tell, as Brian has intimated today, but I can't help worrying that we might end up like ND if we keep getting rid of coaches before they can build their programs.
* In two previous threads titled "Reasons for Hope" (for UM) and "Reasons for MSU Hopelessness." Another interesting and pertinent link from another poster is: http://mgoblog.com/diaries/what-two-losing-seasons-start-tenure-means

** Note that it's not a simple average. At the beginning of a coach's tenure, his record is shown as an average that includes the prior coach's average, which may be either better or worse than the current record. As such, the first two years of each coach's tenure are shown in mixed colors, as they reflect the recruits of the previous coach as well as the performance of the current coach. (Just ask yourself: if Bo were alive and took over the coaching job of the perennially cellar-dwelling Northwestern team of the '60s, would he be responsible for the first few years?)
In a previous thread titled "Reasons for Hope" (for UM), I looked at the trends in average victories from LC to RR (based on an average of four consecutive years). The conclusion was that RR, after the significant hemorrhage that occurred during his first year of surgery on the program, is close to stopping the slow bleeding that actually began after LC's first three years.
One critic objected in a heated manner to the methods I used. A few posters rebutted the critic, pointing out that his tunnel vision, focused only on the worst possible portion of UM's recent record, ignored the bigger picture. I will not speculate on the motivations for this tunnel vision. However, one supportive poster (whom I thank) suggested comparing the record of MSU with UM's. So, taking this excellent suggestion, I tried using similar methods to look at the trends in average victories at MSU under various head coaches.
I found that under Nick Saban, the trend in average victories was positive (with an increase of .06 victories per year, much as occurred during LC’s initial years). But after that, the trends were consistently negative (a decrease of .17 victories per year under Williams and Smith and a decrease of .25 victories per year under Mark Dantonio).* So, MSU declined at a pretty steady rate.
The only way for Dantonio to stop the bleeding and just stay even with the average victory total of his esteemed predecessor, John Smith, is to win 2 of the next 3 games. So this analysis does not support the often-voiced idea—some will call it wishful thinking—that MSU has turned the program around under MD.
To stop the bleeding (the decline in average), RR also needs to become bowl eligible (winning 2 of the next 4 games, including a bowl). To be fair, however, his task is much more formidable: UM's current average, which is at a low point for UM during this period (7.5 victories per season), is still 3 victories per season more than MSU's average (4.5).
Methods of Analysis (repeated)
I looked at the trends since Saban took over in 1995 (based on a moving average involving each four year period).
Total wins and the four-year moving average for successive seasons, 1995 to the present.
1995 6.5, 6, 7, 6 avg 6.25 Nick Saban trend +.06 per year
1996 6, 7, 6, 10 avg 7.25
1997 7, 6, 10, 5 avg 7.0
1998 6, 10, 5, 7 avg 7.0
1999 10, 5, 7, 4 avg 6.5
2000 5, 7, 4, 8 avg 6.0 Bobby Williams -.17 per year
2001 7, 4, 8, 5 avg 6.25
2002 4, 8, 5, 5 avg 5.5
2003 8, 5, 5, 4 avg 5.5 John Smith -.17 per year
2004 5, 5, 4, 7 avg 5.5
2005 5, 4, 7, 5 avg 5.5
2006 4, 7, 5, 4 avg 5.0
2007-08 5, 4 avg 4.5 Mark Dantonio -.25 per year (not including this year)
2007-09 5, 4, 6 avg 5.0 0.0 per year (assuming two more victories, for 6 total this year)
*Considering only his complete seasons; only if we assume he gets two more victories this year does he stay even with John Smith's average when Smith left.
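The moving average underlying the table is just the mean of each four-season window. A minimal sketch, using the 1996-2002 win totals from the table above (the first window, 6+7+6+10, averages to 7.25 as shown in the 1996 row):

```python
def moving_avg(wins, window=4):
    """Four-year moving average of season win totals."""
    return [sum(wins[i:i + window]) / window
            for i in range(len(wins) - window + 1)]

# MSU season win totals 1996-2002, taken from the table above.
wins = [6, 7, 6, 10, 5, 7, 4]
print(moving_avg(wins))  # [7.25, 7.0, 7.0, 6.5]
```

These match the 1996-1999 rows of the table; the per-coach "trend" figures are then slopes fit through each coach's stretch of averages.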
First off, I largely agree with ikestoys's diary (http://mgoblog.com/diaries/down-14-and-going-2). I have often thought that football is a game that rewards aggressive play calling, like going for two and going for it on fourth down more often, and fake punts from your own 20... Eh...
Anyway, I disagree with a couple of points ikestoys made, both explicitly and implicitly, and I thought I'd chuck 'em out here.
Trials are not independent
This point was made by a commenter in the original diary, but the basic idea of treating the different sorts of trials (going for 2, going for 1, overtime) as independent events (and therefore as amenable to the application of the mathematics of garden-variety probability theory) is flawed.
In football the outcome of one trial affects the probability of another trial even occurring, and not in predictable ways. Let's say UM had made the first two-point conversion. Would State have played their next drive differently than they did? Maybe, maybe not. Perhaps they would have come out throwing and scored a field goal to go up by nine. We have no way of knowing how things would have unfolded in that alternative universe.
Relative frequencies are not probabilities
Second, and another point made by a commenter, is that ikestoys treats relative frequencies (the proportion of successful two-point conversions) as the same thing as probabilities of success. They are not. That's like saying that because 1% of adults die of lung cancer, you have a 1 in 100 chance of dying of lung cancer. Do you smoke? If so, then your probability is surely higher. If not, it's lower. The point here is that the probability of success of a two point conversion depends on many factors, as various people have noted.
Because relative frequencies =/= probabilities, I thought it would be interesting to see how the probabilities of winning fared if you didn't assume the probability of a successful two-point conversion was 0.44. So, two graphs for your viewing pleasure. The y-axis is the probability of winning the game after all events have unfolded (post-touchdown try after TD 1, TD 2, and possibly overtime). The x-axis is the probability of success of the two-point conversion (I limited the range of this probability to between 20 and 80%).
Graph the first
In the first graph, I have plotted the cumulative probabilities of winning for two strategies: going for 2 after scoring a TD to be down by 8 (ikestoys's strategy--the black line), and going for 1 (RichRod's strategy--the blue line). The only thing I have allowed to vary is the probability of success of a two-point conversion (on the x-axis).
- Note that I have reproduced the probability ikestoys computed: the dashed red line intersects the black curve at about 57% when Pr(success) for a two-point conversion is 44%.
- Note also that despite ikestoys's implicit claim that going for two is always the better move, if the probability of success falls below 35.5%, it is better to go for 1, as RichRod did. I'm not suggesting that this is what the probability actually was, though people's comments about a dog-tired Tate, a driving rain, etc., make the idea not too farfetched.
There are two other variables in the process: the probability of a successful PAT (which I held constant at 0.95), and the probability of winning in OT. The latter probability doesn't change the black curve below much, so I left it at 50/50, as did ikestoys.
In the graph below, the three non-black curves represent three different probabilities of winning in overtime: 40% (orange), 50% (blue), and 60% (green).
The only thing to take away here is that if you believe your probability of winning in overtime is high (based on your style of play, being at home, etc.) and you believe your probability of a successful 2-point conversion is less than 44%(ish), then you should adopt RichRod's strategy. If you believe your chances of winning in OT are 50/50 and your chances of scoring on a two-pointer are greater than about 35%, then you should follow ikestoys's strategy.
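For the curious, the two curves can be reproduced with a small model. This is my reconstruction of the setup described above (PAT probability q, OT win probability ot), not the actual code behind the graphs:

```python
def win_go2_first(p, q=0.95, ot=0.5):
    """Down 8 after TD 1: go for 2. Make it (p): a later TD + PAT (q) wins.
    Miss it: a later 2-pointer (p) only ties, sending the game to OT (ot)."""
    return p * q + (1 - p) * p * ot

def win_kick_both(q=0.95, ot=0.5):
    """Kick both PATs (q each) to force OT (ot)."""
    return q * q * ot

# With a sure-thing PAT (q=1), p=0.44 reproduces ikestoys's 57%:
print(round(win_go2_first(0.44, q=1.0), 4))  # 0.5632

# Break-even p where the two strategies tie, with q=0.95 (bisection):
lo, hi = 0.2, 0.8
for _ in range(60):
    mid = (lo + hi) / 2
    if win_go2_first(mid) < win_kick_both():
        lo = mid
    else:
        hi = mid
print(round(lo, 3))  # about 0.355, the ~35.5% threshold noted above
```

Varying `ot` shifts the whole kick-both line up or down, which is what the three non-black curves in the second graph show.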
In conclusion (I know, finally)
Of course, coaches don't think this way in the heat of a game. Again, I basically agree with ikestoys, but the story is a bit more complex.
The situation: You are down 14 and probably only have 2 possessions left. Obviously, it will take two touchdowns to get back into the game. My question for you is, what combination of 2 point and 1 point conversions should you take to maximize your chance of winning the game?
Let's start off with a few assumptions. According to this Rivals article, the average 2pt conversion rate in the NFL is 44%. I'll assume that it's about the same for CFB and that our team's conversion rate will be about the same in whatever specific situation we're in. We'll assume that a PAT is a sure thing. We'll also assume that we have a 50-50 chance of winning in OT.
So working with these assumptions, what is the optimal combination of 1pt/2pt tries?
Kicking 1pt tries only
This one is easy. Assuming we get 2 TDs to come back, taking the 1pt each time ties the game and gives us a 50-50 chance to win in OT.
Kicking the 1pt, then going for 2
In this situation, we get the first TD and take the 1pt. On the second TD, we 'man up' and go FTW BABY! Our chance of winning equals the chance of converting, obviously, so 44%.
Going for 2 first
In this situation, we'll go for 2 after the first TD. If we convert, then we'll kick a 1pt try. If we do not convert, then we'll go for 2 again.
This is a slightly more complicated calculation, but here we go:
1.) 44% of the time we make the first 2pt conversion and go on to win the game.
2.) (.56)*(.56) = 31% of the time we miss both 2pt tries and lose despite making two TDs
3.) (.56)*(.44) = 25% of the time we miss the first but make the second 2pt. This ties the game and we go to overtime.
So what is our final equity? It is:
.44*1 + .31*0 + .25*(.5) = .57 or 57%
A quick explanation of this equation. We basically multiply the probability of an event by the outcome of the event. So 44% of the time we win (1), 31% of the time we lose (0) and 25% of the time we go to OT with a 50-50 shot (.5).
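The arithmetic above can be checked in a few lines, using the stated assumptions (p = 0.44 conversion rate, sure-thing PAT, 50/50 OT):

```python
p = 0.44                      # assumed 2pt conversion rate
win_now = p                   # make the first 2pt: a later TD + PAT wins
lose = (1 - p) * (1 - p)      # miss both 2pt tries: 0.3136 (~31%)
to_ot = (1 - p) * p           # miss the first, make the second: 0.2464 (~25%)

# Equity: probability of each outcome times its value (win=1, loss=0, OT=0.5).
equity = win_now * 1 + lose * 0 + to_ot * 0.5
print(round(equity, 4))  # 0.5632, i.e. the ~57% above
```

The rounded percentages in the write-up (31%, 25%, 57%) come straight from these products.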
Now why isn't this done in the real world? Well part of it is that some of our assumptions aren't known. However, mostly it is coaches covering their ass. No one gets criticized for taking the safe route to force OT, only to lose. If you go for 2 twice and don't make it, you'll be torn apart in the press. Not to mention that football coaches don't focus much of their time on equity calculations.
The common belief of kicking 1pt to tie or going FTW! at the end with a 2pt conversion is clearly wrong, even if it is most commonly done.