Michigan men represent excellence academically and athletically. At least that's what they represent if you believe the two statues above the doors to the Union. Milford men, on the other hand, are adept at being neither seen nor heard. Buster Bluth was a Milford man. The 2012-13 Michigan hockey team played like one.
The 2012-13 Michigan Wolverines took the ice in October ranked #3 in the country by USCHO.com and USA Today/USA Hockey Magazine. That preseason poll was the highlight of the season. Things went downhill quickly, and if you've been reading this blog for a while you'll remember that this team didn't do much to endear itself to the Michigan faithful. Now that we've had time to let the healing power of the basketball team's run to the title game and some football recruiting goodness soak in, I think it's time to go back and try to figure out what went wrong for the team that broke The Streak™.
For comparison, let's look at the stats of the 2011-12 Wolverines versus those of the 2012-13 squad. This idea was inspired by Ron Utah's excellent post comparing the 2011 and 2012 football teams. The 11-12 hockey team lost in the first round, so we aren't exactly starting with high expectations for success here. Shawn Hunwick, Luke Glendening and David Wohlberg were the most significant departures from the 11-12 team.
2011-12 Michigan Hockey: 24-13-4 overall, 15-9-4 conference
Home: 15-5-1, Away: 4-6-3, Neutral: 5-2-0
Faceoff W-L Pct.: .497 (opponents: .503)
2012-13 Michigan Hockey: 18-19-3 overall, 10-15-3 conference
Home: 10-8-1, Away: 5-8-2, Neutral: 3-3-0
Faceoff W-L Pct.: .514 (opponents: .486)
I highlighted the things that really stood out to me. Everything is open for interpretation, but let's start with the basics. The 11-12 team scored 43 more goals than they allowed, while the 12-13 team scored one fewer goal than they allowed. Ouch. If you're wondering how shot volume impacted things, it doesn't get any prettier. Michigan had very similar offensive output in 11-12 and 12-13; their total shots were about the same, and their scoring percentage was an identical 9.6%. The real fluctuation from year to year occurs when you look at the opponents' shots: 1242 allowed in 11-12 versus 1126 in 12-13. Even though the 11-12 team allowed more shots, opponents only scored on 7.2% of them, compared with 11.5% in 12-13.
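As a quick arithmetic check, the shot totals and percentages above imply the goals-against swing directly. A minimal Python sketch, with all numbers taken from the text:

```python
# Quick sanity check of the shot/percentage numbers above.
# Goals against ~= opponent shots x opponents' shooting percentage.
def goals_against(shots, shooting_pct):
    """Estimate goals allowed from shots faced and opponents' shooting %."""
    return shots * shooting_pct

ga_1112 = goals_against(1242, 0.072)  # 2011-12: more shots, low percentage
ga_1213 = goals_against(1126, 0.115)  # 2012-13: fewer shots, high percentage

print(round(ga_1112), round(ga_1213))  # ~89 vs ~129 goals against
```

Fewer shots allowed, yet roughly forty more goals against: the entire year-over-year swing is in save percentage, not shot volume.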
Special teams can't be used to explain away the year-to-year differences. Michigan actually scored more power play goals in 12-13 (31) than they did in 11-12 (23). Looking at it from the perspective of the penalty kill, Michigan allowed fewer power play goals in 12-13 (24) than they did in 11-12 (27). Michigan spent less time on the penalty kill in 12-13, but they also spent almost two minutes less per game on the power play that season. It appears as though Michigan was outmatched at even strength throughout the 12-13 season, so much so that they missed the tournament and won six fewer games.
What does it mean for next season?
I wish I knew. Steven Racine established himself as the starter going into 2013-14, and that's more than you can say for the 12-13 team. There are some good prospects coming in (highlighted by former US NTDP forward JT Compher), but is that enough to replace the mass exodus of point scoring that Michigan will suffer this offseason? It doesn't seem likely. Michigan loses AJ Treais' 31 points, Jacob Trouba's 29 points, and Kevin Lynch's 27 points. Those were three of Michigan's top six point-getters in 12-13. On the other hand, Michigan's problem in 12-13 was clearly one of defense, not offense, so anything is possible. All it takes is players who are willing and able to forecheck and backcheck, and hockey still lacks the sophisticated statistics that could capture those more esoteric elements of the game.
True Shooting % - 57.3% / 57.6%
I don't have a lot to say here except to note how RIDICULOUSLY identical the two teams' statistical profiles are. Rebounding and FT% are EXACT, 3P% is just one tenth off, and Steals and Assist rate are razor thin.
I'd like to THINK that playing a team from the SEC that puts up numbers identical to Michigan's should be highly advantageous for Michigan. I mean, by and large, they even play our game.
POSS - 64.7 / 62.7
FGA/Game - 58.2 / 54.3
3PA/Game - 19.7 / 21.8
OFF REB/Game - 10.7 / 10.7
It's unreal! It's a freakin' Mortal Kombat Mirror Match!
We will not be beaten at our own game.
In the last installment, I investigated one case of what sports commentators refer to as “momentum” (where a team that makes a successful play will continue to be successful): outcomes in overtime games. Looking through the CFBStats data from 2005-2011, I found that teams that came from behind to force overtime did not come out on top at an unusual rate, nor were their outcomes affected by other factors such as being the home team or coming back from large deficits. However, I was not entirely exhaustive in my analysis, and two commenters, SpyinColumbus and cgnost, pointed out that it might be interesting to see what role, if any, rankings might play in determining outcomes in overtime.
As it turned out, integrating Sagarin rankings into the CFBStats data was fairly straightforward, and I created a table that matches the CFBStats ID codes (which are the same as used in the NCAA data that CFBStats is built from) with the names that Sagarin uses in his published data. So if you are working with these two data sets and want to put them together, here is the file to integrate these data sources, which is covered by an ODC PDDL (public domain).
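The join described above can be sketched with pandas. The column names here (`ncaa_id`, `sagarin_name`, `predictor`) and the toy rows are illustrative assumptions, not the actual contents of the published file:

```python
import pandas as pd

# Hypothetical crosswalk mapping CFBStats/NCAA team ID codes to the team
# names Sagarin publishes. IDs and ratings below are placeholders.
crosswalk = pd.DataFrame({
    "ncaa_id": [418, 518],
    "sagarin_name": ["Michigan", "Ohio State"],
})
sagarin = pd.DataFrame({
    "sagarin_name": ["Michigan", "Ohio State"],
    "predictor": [88.1, 84.5],
})

# Attach Sagarin ratings to NCAA IDs so they can then be joined onto
# CFBStats game records keyed by the same ID codes.
ratings_by_id = crosswalk.merge(sagarin, on="sagarin_name", how="left")
print(ratings_by_id)
```

A `how="left"` join keeps every crosswalk row, so teams missing from Sagarin's list surface as NaN ratings rather than silently disappearing.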
With this in mind, the first order of business was to address the issue of what, if any, differences emerge in terms of Sagarin rankings in determining overtime outcomes compared to whether or not teams come from behind. In essence, do teams that come from behind beat their Sagarin predictions? If so, this might suggest that teams coming from behind are bringing some momentum into overtime.
Again, I am considering the set of 230 overtime games from 2005-2011 (dropping the 2005 Arkansas State-Florida Atlantic 0-0 EOR game). I will focus on Sagarin’s “PREDICTOR” model as he regards this as the most useful predictor of game outcomes, though I will also present some analysis using “RATING” and “ELO_CHESS.” PREDICTOR accounts for margin of victory, while ELO_CHESS only considers game outcomes (Sagarin describes it as more “politically correct”). RATING is a synthesis of the two. I also used the year-by-year home advantage values to adjust these ratings, including the 2011 addition of separate values for home advantage for each of the ranking systems. Neutral site games are not adjusted.
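The home-advantage adjustment works out to a simple additive tweak. A sketch, where the 2.5-point edge is a placeholder rather than Sagarin's actual published value:

```python
# Sketch of the home-advantage adjustment described above: the published
# home edge (a few points, varying by year and by ranking system) is added
# to the home team's rating; neutral-site games are left unadjusted.
def adjusted_margin(home_rating, away_rating, home_adv, neutral=False):
    """Predicted home-team margin under a PREDICTOR-style model."""
    edge = 0.0 if neutral else home_adv
    return (home_rating + edge) - away_rating

print(adjusted_margin(85.0, 83.0, 2.5))                # home team by 4.5
print(adjusted_margin(85.0, 83.0, 2.5, neutral=True))  # 2.0 on a neutral field
```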
One important limitation of this analysis is that, because historical week-by-week Sagarin rankings are not available to my knowledge, all of this analysis is based on his year-end rankings. Because end-of-year rankings are determined by performance in-season, this brings up considerable endogeneity issues that cannot be easily dismissed. The best way to address this would be with the week-by-week rankings, and so if anyone knows of historical data, please let me know and I will see if this changes the results in any meaningful way.
To characterize the results, the first analysis I considered with regard to the ranking was general prediction of overtime outcomes. Sagarin’s rankings use scales with higher values indicating a higher ranked team, and, at least with PREDICTOR, the expected margin of victory. To predict outcomes based on Sagarin’s rankings, I subtracted the PREDICTOR, RATING and ELO_CHESS values of the losing team from the winning team. Thus, positive values indicate that the higher ranked team won (a “normal” outcome) and negative values indicate an “upset.” Based on this, we see the following results for overtime games:
Sagarin’s hit rate for overtime games is about 57% at best and 54% at worst, depending on which of his models is being used. It is worth noting that among non-overtime games, his hit rate is much better (between 78.4% and 80.2% in games between 2005-2011), but this is not surprising because overtime games represent a small sample of games between more closely matched teams (average difference between teams for the ranking systems in non-overtime games is between 10.0 and 10.2 while for overtime games it is between .3 and 1.1). How do Sagarin’s rankings look when considering the way in which overtime is forced?
To do this analysis, I modified my measures somewhat to make the results more interpretable. Since I was focused on teams coming from behind, I subtracted the PREDICTOR rating of the leading team from that of the team that came from behind. This difference therefore represents Sagarin’s predicted outcome for the team coming from behind – if it is less than zero, then the team coming from behind would be predicted to lose the game, while if it is greater than zero, they would be predicted to win.
The overall average for the from-behind PREDICTOR difference score is -1.44, which is significantly different from zero (t(229) = -2.19, p < .05), indicating that, on average, teams coming from behind were predicted to lose. A logistic regression with the from-behind PREDICTOR difference score as the independent variable and the game’s outcome as the dependent variable revealed that these differences in PREDICTOR scores did not predict the games’ outcome (Exp(β) = 1.002, p > .85). To further clarify this relationship, I split the data into games where the team coming from behind was predicted to lose (that is, had a PREDICTOR score less than zero) and where these teams were predicted to win (PREDICTOR>0), and compared this to the games’ overall outcome:
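These two tests can be sketched on synthetic data. This is scipy only, with a point-biserial correlation standing in for the logistic regression reported above, and every number is made up rather than taken from the real 230-game sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic stand-in for the 230-game sample: difference scores centered
# slightly below zero, and overtime outcomes drawn independently of them,
# mirroring the null association reported in the text.
diff = rng.normal(loc=-1.4, scale=9.0, size=230)
won = rng.integers(0, 2, size=230)

# One-sample t-test: is the mean from-behind difference score below zero?
t_stat, t_p = stats.ttest_1samp(diff, 0.0)

# Point-biserial correlation as a lightweight substitute for the logistic
# regression: does the difference score track the binary outcome at all?
r, r_p = stats.pointbiserialr(won, diff)

print(f"mean diff = {diff.mean():.2f}, t = {t_stat:.2f}")
print(f"correlation with outcome: r = {r:.3f} (p = {r_p:.3f})")
```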
[2x2 table: actual outcome (from-behind loss / from-behind win) by prediction (from-behind predicted loss / from-behind predicted win)]
(χ2(1) = .49, p > .48)
What this tells us is that rankings and game outcomes are independent of one another. More directly, while teams coming from behind to tie the game up are more likely to have been predicted to lose, these predictions did not affect how they performed in overtime.
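The independence test itself can be reproduced in miniature. The cell counts below are hypothetical, chosen only to sum to the 230-game sample; the actual cells are not given in the text:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table in the shape of the one above.
# Rows: from-behind team predicted to lose / predicted to win.
# Columns: actual from-behind loss / actual from-behind win.
table = np.array([
    [70, 60],
    [50, 50],
])

# correction=False gives the plain Pearson chi-square, matching a
# chi2(1) report without Yates' continuity correction.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.2f}")
```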
In the context of momentum, this provides further evidence that coming from behind has no effect on game outcomes. Overall, Sagarin rankings are a barely weighted coin flip in overtime games, and how the teams became deadlocked in regulation does not affect this coin in any way.
Thanks, again, for reading, and to cgnost for prompting this analysis. In the next installment, I’ll continue the search for evidence of momentum in traditional defensive stops (those not ending in fumbles, interceptions or safeties), with a special focus on my favorite play in all of football: the goal line stand. Go blue.
Offense and defense rankings based on totals and straight averages can be misleading at times. If a team plays opponents with a strong rush offense but a weak pass offense, that team's pass defense stats might look better than they really should. This is something Michigan was being accused of, since many of our "bad" defensive games came against strong rushing teams (Alabama and Air Force).
One way to mitigate this "effect" is to skip the totals and averages and instead compare each game's output against the average output the opponent has produced against all of its opponents. This produces numbers that show how good your performance was compared to every other team your opponent has played, which is a more useful comparison than raw totals.
So, exactly how does it work?
Here are the stats for Michigan so far this year:
|Opponents||Rush Net Total||Pass Yds Total||Total Yds||Pts||Avg Rush Total||Avg Pass Total||Avg Total Offense||Avg Scoring Offense|
|Average All Opp||145.1||145.9||291.0||17.3||196.0||194.7||390.7||27.5|
|Opponents||Avg Rush Off Diff||Avg Pass Off Diff||Avg Total Off Diff||Avg Scoring Off Diff|
|Average All Opp||-24%||-24%||-26%||-39%|
The first four columns of stats are the actual stats from the game played against Michigan. The next four columns are that team's average output against all opponents this year. The last four columns (the second table) are the percentage differences between the actual game stats and the season-long averages.
As you can see from the table, Alabama produced their average offensive output against Michigan while Purdue and Illinois barely produced about half of their normal offensive output.
By averaging all of the averages, we find that our defense is reducing our opponents' normal offensive output by about 25%, while allowing only 61% of their normal scoring output.
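The differential computation itself is simple. A sketch using the "Average All Opp" row from the tables above (note that the table averages the per-game differentials, so its numbers can differ slightly from a differential computed on the averaged row, as here):

```python
# Sketch of the opponent-adjusted differential described above. For each
# stat, the output allowed is compared to the opponent's season average;
# values below zero mean the defense held them under their usual output.
def output_diff(game_stat, opponent_season_avg):
    """Percent difference vs. the opponent's average against everyone."""
    return (game_stat - opponent_season_avg) / opponent_season_avg

# Values from the 'Average All Opp' row of the tables above:
print(f"{output_diff(291.0, 390.7):.0%} total offense allowed")  # ~ -26%
```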
Sounds pretty good, but how does that compare to the rest of the NCAA?
I didn't have enough time to calculate the differential averages for every team in the NCAA, but I did run the analysis for the top 10 pass/rush/total defensive teams and all of the Big Ten (plus ND). I did not include stats against FCS opponents. Here it is, ranked by total offense differential.
A few things stand out:
- Alabama, LSU, and Florida St defense stand above the rest
- Michigan and Michigan St defenses stand above the rest of B1G
- Michigan is pretty good at both run and pass defense
- Ohio St pass defense is HORRIBLE!
- BYU defense is much better than I thought
- Many of the defenses are highly ranked in one category (pass or rush) only because they are so horrible at the other (I am looking at you, Arizona St, Stanford, Nebraska and Oregon St!)
- Notre Dame is living on borrowed time - their scoring differential is MUCH higher than the rest of their defensive differentials would indicate
I do believe converting straight-up numbers to percentages makes it much easier to compare pass vs. rush and to compare different teams. I hope most of you find this useful. If I get enough upvotes, I will do the same analysis for offense as well.
So with two games left, home contests against Nebraska and Ohio State, we know Michigan will finish the regular season 8-4 at worst and 10-2 at best.
I think most of us would have been happy with 8-2 after 10 games at the beginning of the season; my pre-season prediction was 8-4, so I have to tell myself I can't be very disappointed no matter how the season finishes.
There's no doubt, however, that Michigan has been a pretty bi-polar team this season. Impressive wins over some decent teams and a couple of poor performances in our losses leave many fans wondering how good this team really is. I think we'll find out for sure in the next few weeks, but who wants to wait that long? Here's a statistical breakdown of the season so far:
All stats are based on the last 9 games, the game against Western doesn't officially count.
Total Offense: Denard Robinson- 1,611 yds passing, 864yds rushing, 275 total YPG (24th Overall, 1st in B1G)
Passing YPG: Denard Robinson- 99/189, 179ypg, 13 TDS, 13 INTs (71st Overall, 5th in B1G)
Passing Efficiency: Denard Robinson- 132.92 rtng (57th overall, 5th in B1G)
Junior Hemingway- 27rec, 520yds, 19.3ypc, 1 TD (NR)
Jeremy Gallon- 23rec, 391yds, 17.0ypc, 2 TDs (NR)
Roy Roundtree- 14rec, 278yds, 19.9ypc, 2 TDs (NR)
Denard Robinson- 151car, 864yds, 12 TDs, 5.7YPC, 96.0YPG (32nd Overall, 5th in B1G)
Fitzgerald Toussaint- 114car, 673yds, 5 TDs, 5.9YPC, 84.1 YPG (48th Overall, 6th in B1G)
Passing: 200.4ypg, 15TDs, 14INTs (84th Overall, 7th in B1G)
Rushing: 235.9ypg, 22TDs (11th Overall, 2nd in B1G)
Total Offense: 436.3ypg, 6.48 yards per play (33rd Overall, 3rd in B1G)
Scoring: 32.3ppg, 38TDs, 8 FGs (37th Overall, 3rd in B1G)
Turnovers lost: 19, 14 INTs (111th), 5 fumbles lost (9th) (T-78th Overall, 11th in B1G)
Red Zone Offense: 44 drives, 27 TDs, 8 FGs, 80% (T-69th Overall, 7th in B1G)
Not exactly the powerhouse that we were last year, but we have the 5th and 6th best rushers in the Big Ten in Denard and Fitzgerald. Denard is obviously not much of a passing quarterback and he gets a lot of flak for it, but with his legs factored in he's still the most productive player in the Big Ten. Toussaint is looking like the running back of the future. Our lack of a passing game means we don't have any receivers who stand out nationally, with none in the top 100. Our turnovers have been brutal this season, with our 14 INTs landing us 111th in the country. After a great start to the season in the red zone, we've fallen to 80% in red zone scoring, putting us in the bottom half of the B1G.
All in all, not as impressive as many of us were hoping for, but plenty of glimmers of hope, the most productive player in the Big Ten, and a solid ground game make it a pretty decent season so far.
Offensive Grade: B
Passes Defended: JT Floyd- 6 PBUs, 2 INTs, .89 passes defended per game (T-78th Overall, 2nd in B1G)
Forced Fumbles: Thomas Gordon- 2FF (T-68th Overall, 4th in B1G)
Thomas Gordon- 4FR (T-2nd Overall, 1st in B1G)
Jake Ryan- 2 FR (T-31st Overall, T-4th in B1G)
Passing Defense: 191.3ypg, 6.47ypa, 9 TDs, 6 INTs (22nd Overall, 6th in B1G)
Rushing Defense: 130.9ypg, 4.01ypc, 9 TDs (41st Overall, 5th in B1G)
Total Defense: 322.2ypg, 5.18yds per play, 19TDs (17th Overall, 6th in B1G)
Scoring Defense: 19TDs, 4 FGs, 16.1ppg (7th Overall, 3rd in B1G)
Turnovers Forced: 20, 6 INTs (T-94th), 14 FR (T-5th) (T-28th Overall, 2nd in B1G)
Sacks: 19 sacks, 2.11 per game (44th Overall, 6th in B1G)
Red Zone Defense: 27 drives, 16 TDs, 2 FGs, 67% (1st Overall, 1st in B1G)
First of all, we have the best red zone defense in the country!? I would not have guessed that. Second of all, the Big Ten is a defensive juggernaut of a conference. When we're 22nd in the country in passing defense and that's only good for 6th in the Big Ten, that's pretty ridiculous. But seeing that we're 17th nationally in total defense and that five other Big Ten teams are still ahead of us (MSU, Wisky, PSU, Illinois and OSU) is just obscene. There's not even a major statistic that our defense is outside the top 50 in (we're also 39th in 3rd down defense and 20th in 4th down defense). I think if you told me our defense would be this good a year ago I would have slapped you. We're lacking in interceptions but dominating in fumble recoveries. I love Greg Mattison and I love this defense.
Defensive Grade: A-
Punting: Will Hagerup- 21 punts, 49 long, 35.8avg (NR)
Kicking: Brendan Gibbons- 8/11, 38 long, 37/37 XP (55th Overall, 6th in B1G)
Punt Returns: Jeremy Gallon- 14ret, 11.43ypr (18th Overall, 2nd in B1G)
Punt Returns: 16ret, 160yds, 10.0avg (39th Overall, 4th in B1G)
Punt Return D: 16ret, 142yds, 8.88ypr (78th Overall, 10th in B1G)
Net Punting: 33 punts, 37.73avg, 16ret, 8.8ypr, 32.82 net avg (112th Overall, 12th in B1G)
Kickoff Returns: 20ret, 388yds, 19.4ypr (102nd Overall, 10th in B1G)
Kickoff Return D: 37ret, 708yds, 19.1ypr (23rd Overall, 3rd in B1G)
Turnover Margin: 20 gained, 19 lost, +1 (51st Overall, 7th in B1G)
Penalties: 40 penalties, 39.22yds per game (T-12th Overall, 2nd in B1G)
Not really sure what to make of this. It's pretty disheartening to see that we're one of the worst net punting teams in the nation, one of the worst kick return teams in the nation, and one of the worst punt return defense teams in the nation. It is, however, encouraging to see Gallon among the top 20 punt returners in the country, and our penalties are under control. Gibbons is Gibbons, and 8/11 is pretty good compared to last year. Still, I feel like special teams aren't a priority on this team.
Special Teams Grade: C+
So our offense has been a little underwhelming, our defense has been an extremely pleasant surprise, and our special teams have been business as usual, sadly. But that's just what the numbers say. What do you say?
Due to all of the debate regarding the Wisconsin game and the quality of the 2010 offense in general, I've been thinking about stats a fair bit. I went looking for more information on how FEI is calculated, but I didn't find what I needed, so I emailed Brian Fremeau to see if he could provide some illumination (although I believe the actual formula he uses is proprietary, so I don't expect to learn too much).
The functional end result is that I've become curious about how people such as Brian Fremeau and others that create advanced stats based on play-by-play or drive-by-drive data are able to collect their data.
The NCAA team reports have game-by-game play-by-play data, but extracting the necessary information from them seems difficult since it's all text-based. I'm guessing it only looks complicated to me because I'm not a CS or CE person. But I'm still interested in how the data is extracted.
So, is there a better site than the NCAA team reports to get play-by-play data to extract and distill into the necessary components (pass, rush, yards, player(s), etc.), or is the NCAA site the best and it just takes some coding to make it work efficiently?
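For what it's worth, text-based play-by-play is usually attacked with pattern matching. Here is a minimal Python sketch; the line format and field names are assumptions for illustration, not the NCAA's actual format:

```python
import re

# Hypothetical play-by-play line format; real feeds vary by source and
# season, so these patterns would need tuning against actual data.
PLAY_RE = re.compile(
    r"(?P<player>[A-Z][\w.' -]+?) (?P<type>rush|pass)"
    r"(?: complete to [\w.' -]+?)? for (?P<yards>-?\d+) yards?"
)

def parse_play(line):
    """Extract player, play type, and yardage from one play-by-play line."""
    m = PLAY_RE.search(line)
    if m is None:
        return None  # non-play lines (clock, penalties, etc.) are skipped
    play = m.groupdict()
    play["yards"] = int(play["yards"])
    return play

print(parse_play("Denard Robinson rush for 41 yards to the MSU 9"))
print(parse_play("Denard Robinson pass complete to Junior Hemingway for 45 yards"))
```

From there it's just accumulation: sum yards by player and play type across games, and the per-drive and per-play splits that feed stats like FEI fall out of the same records.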
I wonder what kind of advanced stats the MGoCommunity could come up with, given access to years' worth of distilled data from every team in the country...