Quantifying Terrible and Other Stuff

Submitted by Undefeated dre… on August 17th, 2011 at 9:48 PM

[Ed-M: Wow! Bumped.]

Over the past 3 years across all of FBS, no defensive unit underperformed expectations more than Michigan's 2010 squad.

Your world is not rocked. I understand; the statement certainly feels true. But is it true? How about these? For the past 3-year period (2008-2010)…

  • No team's combined offensive and defensive units underperform expectations more than UCLA's
  • No team's combined offensive and defensive units outperform expectations more than Navy's
  • Besides Navy, the only teams whose offensive and defensive units are each in the top 10 in exceeding expectations are Boise State and TCU
  • Iowa is the best B1G team when it comes to outperforming expectations
  • Once we account for recruiting rankings, replacing a coordinator – either for offense or defense – has NO measurable, systematic impact on offensive or defensive performance in the following year
  • Firing a head coach tends to lead to a team doing worse than expected, defensively, in the following year and has no effect on the offense's performance

Be warned, this is another long diary. Instead of tl;dr'ing, feel free to skip to the closing two sections.


This post is a follow-up to "Canning a Coordinator, Revisited". Many thanks to the commenters on that post, including those asking how recruiting affects the models. The main data set uses FEI rankings, and changes in rankings, from 2007 through 2010. To that list I have added information on coach and coordinator changes (fired, promoted, etc.) and returning starters (courtesy Phil Steele). More detail on the data is available in the Canning a Coordinator diary.

A HUGE thank you to UpUpDownDown from Black Heart Gold Pants, whom you may recall from his opus on player development. Because I don't know PERL from Bruce Pearl, he was kind enough to send me the Rivals rankings he used for his post. Anything that's wrong in this diary is my fault; UUDD just helped a great deal by providing some data.

The Framework

It's probably impossible to read MGoBlog and not be aware of regression to the mean. The idea is that it's very hard to be extremely bad year over year (with the exception of Michigan's turnover margin <wince>), or extremely good year over year – there's a tendency to move toward the middle.

In the previous diary I put together a model to predict the change in a team's FEI ranking from one year to the next. The inputs were simple: the team's prior FEI ranking, the number of returning starters, and some information about coach/coordinator changes. The team's previous FEI ranking had a big influence. Roughly speaking, a highly ranked team was predicted to slide back 1 spot for every 2 spots of rank it held the previous year; likewise, a poorly ranked team was predicted to jump 1 spot for every 2 spots it sat below the top. This is what we mean by regression to the mean.

These models left more things unexplained than explained, but they showed some promise. One missing element was information about recruiting. All else being equal, we'd expect a team with highly rated recruits to perform better than teams with lower rated recruits.

Enter the Rivals recruiting rankings, courtesy of UpUpDownDown. I'm not interested in the debate between Scout, Rivals, ESPN, 24/7, etc. I just want some pretty good metric of recruiting strength, and Rivals recruiting rankings provides that.

I originally was going to look at recruiting rankings specific to offensive or defensive unit. However, many players are classified as the mysterious "Athlete", and they may end up on either offense or defense. Furthermore, players often change positions (Rivals has Ted Ginn going to OSU as a DB, for example). So instead I focused on overall team rankings.

The next question is how far back to look. I tested looking at the past year, past 2 years, … up to the past 5 years. Statistically, the previous 3-year period worked best. And it has the beauty of making sense – the previous three year period should be a good gauge of the team's overall recruiting strength entering the next season (4-year may make even more sense, but it didn't work as well statistically).
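To make the bookkeeping concrete, here's a minimal sketch of how that trailing 3-year class average could be computed. The team and the class ratings are made-up illustrations, not the actual Rivals numbers:

```python
def trailing_star_avg(class_stars, season, window=3):
    """Average class star rating over the `window` classes signed
    before `season` (e.g. the 2008-2010 classes for the 2011 season)."""
    years = [season - i for i in range(1, window + 1)]
    ratings = [class_stars[y] for y in years if y in class_stars]
    return sum(ratings) / len(ratings) if ratings else None

# Hypothetical class averages for one team
class_stars = {2008: 3.4, 2009: 3.6, 2010: 3.5}
print(trailing_star_avg(class_stars, 2011))  # mean of the 2008-2010 classes
```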

Put Some Meat on Those Bones!

We want to predict changes in FEI performance from one year to the next. A positive change in FEI rank is good; it means the team did much better than in the previous year. A negative change is bad. FEI ranks are specific to offense or defense, so we'll look at them separately. Using absolute FEI scores would actually improve our model's predictive accuracy, but absolute FEI scores are very hard to interpret in isolation. So we'll focus on ranks. Here's the story with the offense:

A reminder that R², or R-squared, is the percentage of variation in the data explained by the model. We're well under 50%, so it's not as if we've uncovered the secret to football analysis. But we have statistically significant results (see the 'confidence' column) that fit into a sensible and meaningful way of looking at the world.
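For anyone rusty on the metric, R-squared is just one minus the ratio of unexplained to total variation. A quick sketch (the numbers are toy data, not anything from the model):

```python
def r_squared(actual, predicted):
    """Share of variation in `actual` explained by `predicted` values."""
    mean = sum(actual) / len(actual)
    ss_tot = sum((a - mean) ** 2 for a in actual)                  # total variation
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))  # unexplained part
    return 1 - ss_res / ss_tot

print(r_squared([1, 2, 3, 4], [1.5, 1.5, 3.5, 3.5]))  # 0.8 on this toy data
```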

To interpret the effects:

  • The intercept/default basically means the model wants to subtract 105 points of ranking, all else being equal. So if you started at 1, you'd end up at 106. And if you started at 120, you'd end up at 225 (a difficult feat, given there are only 120 FBS teams). But don't fret – the other variables will correct for the default subtraction.
  • Similar to what we saw in the February diary, there's roughly a 2 for 1 tradeoff between last year's rank and the next year's rank. For every 2 points worse of rank the previous year, the offense is predicted to improve by 1 rank the following year. This is the essence of regression to the mean.
  • Every returning offensive starter adds about 3 points of improved rank the following year.
  • Except the quarterback, who adds about 13 points of improved rank (2.9 for being a returning offensive starter, plus a 10.1-point 'bonus' for being the quarterback).
  • Each star in average Rivals rating (for the team over the previous 3 years) boosts the offense's FEI rank by 16 points in the next year.
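As a sanity check, the bullets above can be wired into a one-line predictor. Note these are the rounded coefficients quoted in the text (the fitted model carries more decimals), so this sketch reproduces the worked examples only approximately:

```python
# Rounded coefficients from the offensive model described above
INTERCEPT = -105.0   # default change in rank, all else equal
PRIOR     = 0.5      # ~2-for-1 regression to the mean on last year's rank
STARTER   = 2.9      # per returning offensive starter
QB_BONUS  = 10.1     # extra improvement for a returning quarterback
STAR      = 16.0     # per star of 3-year average Rivals rating

def predicted_offense_rank(prior_rank, starters, returning_qb, stars):
    improvement = (INTERCEPT + PRIOR * prior_rank + STARTER * starters
                   + QB_BONUS * returning_qb + STAR * stars)
    # Unconstrained model: predictions can land outside ranks 1-120
    return prior_rank - improvement

# Best case: #1 last year, 11 starters incl. QB, 3.7 stars
print(round(predicted_offense_rank(1, 11, True, 3.7)))  # -> 4
```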

That's a little hairy, so let's look at some examples:


The 95th percentile for the 3-year Rivals recruiting stars is 3.65, which we'll round up to 3.7. The 5th percentile is 2.0. The first two rows show the extremes. The first is a best-case scenario: #1 in FEI rank the previous year, 11 returning starters including the QB, and a highly rated bunch of recruits. Predicted FEI rank the following year: 4th. The second row is a worst-case scenario: 120th in FEI rank the previous year, no returning starters, and a poorly recruited team at 2.0 stars. The predicted FEI rank the following year is 121st. Impossible, because there are only 120 teams, but intuitively valid.

The middle five rows are attempts to show some comparative effects. If the team finished at 60 in FEI, returns 6 starters including the QB, and has an average number of recruiting stars (2.6), it's predicted to finish… at 60. More face validity! Give that team a poorer recruiting profile and the predicted finish drops to 69; give it a sterling set of recruits and it's predicted to jump to 42. Give the team an average recruiting ranking and all starters returning and it'll jump to 45; average recruits and no returning starters and it'll drop to 87.

The last row should look a little familiar – it's Michigan's offense going into 2011. After finishing 2nd in FEI in 2010, with 9 returning starters and a returning QB, Michigan is predicted to finish 16th (taking out Stonum drops the finish to 19th).

What about coordinators or coaches? RichRod to Borges? Spread-ball to Man-ball? In the aggregate, there is NO relationship between coaching and coordinator changes and offensive performance. In other words, across 3 years of data and 120 FBS programs, there's no compelling evidence to say that a coach or coordinator change, as a rule, leads to a better or worse offensive performance. Of course, your mileage may vary in individual situations, and we'll get to that later in the article.


And Your Text-Heavy Analysis of the Defense?

For the offense there is no statistical relationship between coaching changes and the change in the team's FEI rank the following year. For the defense there is an effect – but only at the head coaching level. And perhaps counter to intuition, changing a coach leads to a worse than expected defensive performance the following season.

A few notes. While our R-squared is smaller, we still have statistically significant results. For the most part, the impacts/coefficients are similar to the offensive side. Again we see the roughly 2-for-1 relationship between the prior year's FEI rank and the following year's rank. Returning defensive starters are worth as much as returning offensive starters, though no single player is as critical as the quarterback is on offense. One star in average Rivals recruiting rating is worth about 16 spots of FEI rank, similar to the offense. This is more face validity for the model.

The big difference between the offensive and defensive models is the head-coach-fired factor. The model predicts, all else being equal, that a team that fires its head coach (and hence its DC) loses about 8 spots of FEI rank vs. a team that keeps its head coach. There is no measurable effect when only the DC is fired (this is a change from the model in the previous diary, caused by the addition of the Rivals rankings into the mix).

Once again examples may help explain the model.

A team that finishes first in FEI the previous year, returns 11 starters, and has a great recruiting record is predicted to finish 1st in FEI rank. A team that finishes 120th, returns no starters, fires its coach, and has a poor recruiting record is predicted to finish 125th (again, impossible because there are only 120 teams, but a pretty reasonable prediction for a non-constrained model). A team that finishes at 60 in FEI rank the previous year, returns 6 starters, keeps its coach, and has an average recruiting portfolio is predicted to finish 59th in FEI rank. Change that to a great recruiting record and the prediction jumps to 41; a terrible recruiting record drops it to 69. Return all 11 starters with an average recruiting record and the prediction is a jump to 45th place, return no starters and drop to 77.

Once again the last line is the prediction for Michigan. Regression to the mean, fairly good recruiting, and 9 returning starters peg us for 71st in defensive FEI in 2011. Interestingly, the model predicts that the defense would have been even better had Rodriguez remained the head coach (a predicted finish of 63rd). Again, mileage will vary in individual situations – I certainly don't think a GERG-led defense would do better than one with Mattison at the helm. But the fun thing about models is that they make predictions we can test, and perhaps improve upon going forward.


Enough About the Rules, What About the Exceptions?

We've covered the last two semi-provocative bullets from the introduction. What about the others? For those, we want to look at teams that overperform or underperform their prediction. Meaning, if a team was predicted to finish 40th in FEI but finishes 10th, that team has overperformed its prediction. If a team finishes 80th but was predicted to finish 50th, it has underperformed its prediction.
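The over/underperformance numbers in the rest of this diary are just residuals against the model's prediction, with the sign flipped so that positive means good. A sketch of that bookkeeping (not the author's actual code):

```python
def overperformance(predicted_rank, actual_rank):
    """Positive = beat the prediction (a lower actual rank number)."""
    return predicted_rank - actual_rank

def average_overperformance(seasons):
    """Mean overperformance across a program's (predicted, actual) seasons."""
    return sum(p - a for p, a in seasons) / len(seasons)

print(overperformance(40, 10))   # predicted 40th, finished 10th -> 30
print(overperformance(50, 80))   # predicted 50th, finished 80th -> -30
print(average_overperformance([(40, 10), (50, 80)]))  # -> 0.0
```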

So let's look at individual feats (and years) of predictive futility. On offense:


Top Underperforming Offensive Years (predicted vs. actual finish)

  • Texas 2010 (New OC)
  • Cal 2010 (New OC)
  • Baylor 2009
  • Auburn 2008 (New HC)
  • Washington 2008 (New HC)


Hey, Texas fans really did have a reason to grumble. Cal 'Herrmanned' their OC, while Auburn rolled the Chizik dice and came up Malzahn/Newton. Only Baylor kept the staff around after such a crap year. We'll hold off on Washington for a second.

Now, for the defensive side of the ball:


Top Underperforming Defensive Years (predicted vs. actual finish)

  • Michigan 2010 (New HC)
  • Florida State 2009 (New HC)
  • Washington 2008 (New HC)
  • Northwestern 2010
  • San Jose State 2009 (New HC)


GERRRRRRRRRRGGGGGGGGG!!!!!!!!!!! Yes, the squad that most underperformed the model's predictions is none other than our fair Michigan's beaver-abetted D from 2010. That's what we call external validity in the model-building business! Now, a caveat. As someone may have pointed out a time or two (or three), Michigan had decent recruiting classes, but keeping those players on the field and actually playing for Michigan was much more of a challenge. If we give Michigan an average of about 2.5 stars over the 3 years prior to 2010, instead of the 3.5 Rivals gives them, then GERG's performance isn't the worst in the 3-year period. It's just bad.

Florida State fans can commiserate, as their 2009 team put up an out-of-character defensive stinkbomb that helped usher in the end of the Bobby Bowden era. Four of the teams here got rid of their coaches after the season, with only Northwestern holding on to its entire staff.

And wow, does Washington in 2008 look like the worst team imaginable. Both offense and defense underperformed woefully, giving Tyrone Willingham the execrable exacta and bringing in Steve Sarkisian.

How about on the flowers and lollipops side? Offense:


Top Overperforming Offensive Years (predicted vs. actual finish)

  • East Carolina 2010 (First year under new HC)
  • Houston 2008 (First year under new HC)
  • Arkansas State 2010 (OC promoted to HC in 2011)


East Carolina and Houston were in their first years under a new HC (and yet, in the aggregate, new head coaches have no significant effect on offensive performance). And Hugh Freeze just got promoted to head coach after his job as OC at Arkansas State.


Top Overperforming Defensive Years (predicted vs. actual finish)

  • Navy 2009 (2nd year after HC change)
  • Stanford 2010 (HC and DC went to NFL)
  • Boise State 2008 (DC went to Tennessee in 2010)


With the exception of Stanford, the other schools on these top-performer lists are non-BCS schools, which correlates with a lack of Rivals recruiting stars – as a result, they're predicted to do less well. Harbaugh and Fangio took the money and ran (and for all Andrew Luck did for Stanford, the defense was equally outstanding last year).


Years Might Be Flukes; What About Trends?

One year is not definitive (except in the case of GERG, natch); in fact, a team that woefully underperformed the previous year could look great just by rebounding the following year.

With only 3 years of data to work with, it's hard to tease out trends. But I did look at which programs, overall, tended to overperform or underperform vs. expectations. Across 2008-2010, these are the teams that overall tended to do better (or worse) than the model predicts.

On offense, the top overperformer is Navy, and it's not even close. Over the three-year period they outperformed expectations by an average of 53 ranking spots per year. Expectations are low because of the low/nonexistent stars for their recruits, yet Ken Niumatalolo's squad keeps beating them.


Top Overperforming Programs 2008-2010 (Offense), by average overperformance

  • Navy (+53 spots per year)
  • North Carolina State







On the flipside, the top offensive underperformer is UCLA, and it's also not even close. Just try to Google Rick Neuheisel and not have it automatically add "hot seat".


Top Underperforming Programs 2008-2010 (Offense), by average underperformance

  • UCLA
  • New Mexico State
  • Washington State







As for defense, on the positive side there should be few surprises. Non-BCS schools (with lower expectations borne of lower recruit rankings) dominate with the usual suspects, including two military academies:


Top Overperforming Programs 2008-2010 (Defense), by average overperformance

  • Boise State
  • Air Force
  • Boston College







A note on the lone BCS team here: Boston College's Defensive FEI ranks the past 3 years: 4th, 5th, 15th. Wow.

And as for bad defense, do we even need to … GERRRRRRGGGGGGGGG!


Top Underperforming Programs 2008-2010 (Defense), by average underperformance

  • Michigan
  • Washington State
  • Fresno State



#1 with a bullet is the Wolverines, pushed along by a variety of factors including stuffed beavers, white hair, horrendous attrition, and injuries. Fresno State is a bit of a surprise given Pat Hill's pedigree.

And if we combine offensive and defensive data? We have this:


Top Overperforming Programs 2008-2010 (Combined), by average overperformance

  • Navy
  • Boise State
  • Air Force
  • Iowa





Again, we see smaller schools that don't get great recruits, Rivals-wise, and are run by what we see as great coaches. Iowa is exceptional by being the only BCS team in the top 5. (Note that Iowa also benefits from a horrendous 2007 FEI performance, especially on offense, that made their 2008 numbers exceptional).

And now for the dregs:


Top Underperforming Programs 2008-2010 (Combined), by average underperformance

  • UCLA
  • Washington State
  • Washington
  • Wyoming
  • Kansas









UCLA, Washington State, and Washington not only stink but have been a black hole for recruits over the past few years. Wyoming and Kansas, sure, but for the top 3 underperformers to be from the Pac-10?


What Does It All Mean?

We don't know exactly why teams outperform or underperform expectations. Reasons for outperformance could include luck, talent identification, talent development, PED-palooza, good coaching beyond just talent development, flawed recruiting rankings, flaws in FEI's system, and of course flaws in the predictive model. Reasons for underperformance include luck, attrition, injuries, bad coaching, flawed recruiting rankings, flaws in FEI's system, and again flaws in the predictive model.

Still, some things we can conclude:

  • Recruiting matters – on average, improving your class average by half a star will boost your FEI rank by 8 spots, all else being equal.
  • Returning starters matter – 11 returning starters with an average 3-year recruiting ranking of 2.5 are worth about the same as no returning starters with an average 3-year recruiting ranking of 4.5.
  • If the choice is between returning a quarterback and 6 other offensive players, or returning 10 offensive players but not the quarterback, take the quarterback.
  • For all the attention paid to coaching and coordinator changes, they have very little short-term impact on the team's fortunes, in the aggregate. Again, mileage will vary in individual situations, but as a rule a team's performance can best be predicted by how it did last year, the number of returning starters, and average recruiting rankings.
  • Ken Niumatalolo, Chris Petersen, Gary Patterson, Troy Calhoun, and Kirk Ferentz are pretty good coaches with pretty good systems in place. Data supports the conventional wisdom.
  • Other pretty good coaches don't show up as "overperformers" because their results are on par with what their recruiting predicts; the coaches above beat even the expectations set by their recruits and returning starters.
  • Rick Neuheisel really, really, really should be on a short leash. Barring great turnarounds, expect to see him and Paul Wulff on a Fox Sports West studio show in 2012.
  • While one year performance may be a fluke, heads still roll when teams have a bad year.
  • Greg Robinson, empirically, is a terrible defensive coordinator.


How About Some Provincialism?

The top schools in the B1G for outperforming expectations are Iowa, Nebraska, and Wisconsin. Whatever their methods, they have been successful at turning 3-star recruits into 5-star players. Over the past three years, the worst B1G team relative to expectations is… Michigan, and that's despite last year's offensive leap. 2008, for a variety of reasons (including Tacopants), was an offensive disaster for Michigan, and 2009 was still below the model's expectations. Minnesota and Illinois round out the B1G bottom 3. Ohio State is right in the middle, mainly because it recruits so well and performs up to those expectations.

Before you weep in your beer: In 2008, Ball State was in the +30s on both offensive and defensive overperformance. Perhaps a fluke, perhaps not, but as the Mathlete has shown, the team did well in Hoke's last year. FEI data only goes back to 2007, so we can't look at earlier Ball State seasons. Also note that Auburn's terrible offense in 2008 came after Al Borges was fired at the end of 2007 and a supposed improvement, in the guise of Tony Franklin, came in.

As for San Diego State, in 2009 the offense slightly underperformed and the defense slightly overperformed the model's expectations. But in 2010 the defense beat expectations by 37 ranking spots and the offense by 57 (the 8th-best overperformance among the team-seasons analyzed). These are good reasons to be optimistic about this year's Michigan team.

This piece is still a work in progress. I hope it provokes some thought, debate, and relief that we're not in Pullman. As ever, comments/feedback, especially of the constructive variety, are welcome. Go Blue!



August 17th, 2011 at 9:59 PM

and I'm going to come back to it later when I've got the chance. But one note:

"Iowa is the best B1G team when it comes to outperforming expectations"

This is the most obvious thing in the history of obvious things. Kirk Ferentz is the ancient Greek god of getting no name kids to play like NFL Hall of Famers.


August 17th, 2011 at 11:27 PM

Are there any similar teams in the last three years to where Michigan is expected to be? With an offense around sixteen and a defense in the seventy-one range?

Edit: I see a few from 2010 somewhat close to that:


Offense: 10 Defense: 69 12-1 and 29th in total FEI.


Offense: 9 Defense: 75 8-4 and 32nd in total FEI.



August 18th, 2011 at 12:38 AM

I get to "Rick Neuhe" before "hotseat" pops up.    I loled.   Hopefully we'll have outliers similar to SDSU's.   Contributions like this make this blog great.


August 18th, 2011 at 2:46 AM

I'm sure you have Michigan's predicted finishes, actual finishes, and combined numbers for the three years that you covered; is there a reason why you didn't show them?

You don’t seem comfortable with the fact that your test only looks back three years. You said you looked back four years but it did not work out statistically. What did you mean by that statement?

Thanks for the great work.

Undefeated dre…

August 18th, 2011 at 10:41 AM

Model's predicted and actuals for Michigan:

2008 Off: P61st A80th  2009 Off: P51st A64th   2010 Off: P45th A2nd

2008 Def: P25th A45th   2009 Def: P40th A70th  2010 Def: P46th A108th

Can't look back prior to 2008 because the first FEI numbers are for 2007. For the recruiting rankings, I looked at averaging across the past year, two years, three years, four years, and five years. Three year team averages worked best for prediction, and that makes some intuitive sense, so I used those.

To another comment on the R-squared -- it's not high, but it's a reasonable model. I'm sure the folks at FootballOutsiders have a better predictive model, but they haven't shared it publicly.


August 18th, 2011 at 8:48 AM

Very cool stuff. I love it! Do you have enough instances of coaching changes in the early years to look at 2- or 3-year effects? I realize this is going to crush your sample size, but it might be interesting given delays in the implementation of new schemes.



August 18th, 2011 at 8:33 AM

I am sorry man, but...

"Once we account for recruiting rankings, replacing a coordinator – either for offense or defense – has NO measurable impact on offensive or defensive performance in the following year"

I stopped reading after that.  We all know that is horseshit.  See - GERG.  Even though it was RR's will to run a 3-3-5, our defense was markedly worse under him than under Shafer.


August 18th, 2011 at 8:45 AM

His argument was probabilistic, not deterministic. ON AVERAGE replacing a coordinator doesn't make a difference. One outlying case never falsifies a probabilistic argument.

Also, the whole point of the second part of his piece is to highlight teams that don't fit this average pattern. You should read the rest. 


August 18th, 2011 at 9:02 AM

I am sorry.  I read that quote again, and it definitely did not include any words like "on average" or "maybe".  It seemed to be a pretty strong, definite quote to me.

I have learned in the last three years that this advanced metrics stuff should be taken with a dump truck of salt.  Just because it worked in baseball for Billy Beane for a few years doesn't mean it works in other sports.  Especially when you are talking about team stats, instead of individual stats.  There are just a TON of variables that aren't exactly measurable.

Maybe in 10 years we'll have all this stuff figured out.  But for now - I just don't think it works.  This season is as simple as: 

Young team gets older (+ wins), defensive coordinator upgrade / defensive coordinator running HIS defense (+ wins), and new head coach / new offensive system (- wins).

Right out of the gate it's looking like we'll be slightly better.  The X factor will be injuries.  If we come out healthier than we were last year, we'll win even more.  


August 18th, 2011 at 9:05 AM

You're right that this quote is wrong, but it doesn't mean the article is not probabilistic. This is a statistical model, so you can only draw probabilistic inference from it anyway. The author uses a lot of "on average" language elsewhere in the post, and it seems nitpicky to bash him for that one lazy language slip.

But yeah, you're right, the quote gives the wrong impression.


August 18th, 2011 at 8:49 AM

okay, real comment.  if you have interest in future projects, i put my vote in for testing Rivals' position ratings.  how many wins added (or FEI; i suppose that might be less variable) is a 4 star QB worth vs. a 3 star or 5 star?  or DT?  which positions do Rivals nail?  which do they struggle with?

Six Zero

August 18th, 2011 at 9:09 AM

After the past three years, I've become really suspicious of using anyone's previous experiences in past employment as a prediction of what will happen in a new environment.  *COUGH COUGH*  It's certainly not unfair to say 'oh, SECRETARY X typed 80 words per minute at her previous job, so we can expect 80 words per minute now that she works for us.'  Yes, that's typically how the real world works.

But when it comes to football, performance in one college town can be dependent on circumstances unique to that situation, or a lack of them in some cases.  Ann Arbor is not Morgantown, nor is it San Diego, or even Muncie, Indiana.  (hehe, Muncie is a fun word)  It's very tempting to think that what happened at West Virginia was a perfect-storm situation where everything aligned in just the right way for a certain coach, but that's no more fair than saying he was doomed to fail here.

What I like about this post is that Dream Season doesn't just focus necessarily on the past records of Brady Hoke, but also the performance of other teams that are also going through this mess.  This is not so much about what Hoke has done right elsewhere, as much as it is what went wrong here before that didn't go wrong in Hoke's previous stops.  We've never really had an unsuccessful hire before, certainly not in what we'd call the 'Modern Era of UM Football,' which has essentially been four coaches in forty years (Yes, the Mo thing happened, but was extracurricular to the entire business of football-- he was still very much a good 'hire' by the program).  Even worse, I think we can all agree that we've never really 'underperformed' before, at least not to this degree over three seasons.  So to look at how other teams have endured the same mess is an interesting take.

I guess the bottom line is, it's a clean slate, and we have very little to base predictions on.  It's exciting to judge the coaching staff on their performance in duties they have actually performed off the field so far (such as this thing they call recruiting), but it still gives us nothing to concretely predict performance from.  "The recruiting class is so good, we're definitely going 10-2 this year."  Right. 

I think we all expect the team to be better this year... no way but up from the bottom, right?  The issue is how far out of the hole we expect/want/wish them to crawl...


August 18th, 2011 at 10:00 AM

RR's West Virginia program was based on a number of things that we weren't sure would translate to Michigan, hence copious use of the Canham photo in '08.

For one, he had a DC who ran the defensive equivalent of the spread, and who like Rodriguez was one of the innovators of it.

On offense, WVU was based on scheme and getting recruits who had specific skills to fit that, because the players who are skilled in everything won't go to WVU. Consider:

WR Recruit 1: (4-star, No. 8 WR)

Speed: 4.4
Agility: B+
Height: 6'3
Weight: 200
Hands: A-

WR Recruit 2: (3-star, No. 31 WR)

Speed: 4.4
Agility: A-
Height: 5'8
Weight: 155
Hands: C

WR Recruit 3: (3-star, No. 30 WR)

Speed: 4.6
Agility: B-
Height: 6'0
Weight: 185
Hands: B-

Recruit 1 isn't going to WVa. He's a 4- or 5-star recruit heading to a bigger program. Recruit 3 is basically Recruit 1's skillset but downgraded to 2- or 3-star level. The secret of RR's system was that it could get Recruit 1's production by ignoring Recruit 3 and putting Recruit 2 in the slot position, where his height and weight and hands don't matter as much because he is primarily serving as a safety valve for when the linebackers cheat inside to stop the base play. RR could recruit that kind of guy because schools that need their receivers to be downfield threats aren't going to bother with that kid.

In extremis. Reality is more muddled.

The problem is Recruit 2 still is generally a liability if put in many situations, so you can only go far with a team full of Recruit 2's -- farther than expectations based on recruiting rankings, but not into the elite of college football.

I like to use the example of the Twins versus the Yankees. When you look at who gets the most performance out of the least amount of money (until this year), it's Twins by a long way. But the championship team isn't the one that can get the most wins per dollar, it's the team that gets the most wins. The gap between the team that can win with less and the team that wins with more is made up by an exponential increase in salaries.

In a way, transporting Rich Rodriguez to Michigan was like taking Glen Sather (the wizard who built the '80s Oilers) out of Edmonton and putting him in New York, where he made the Jagr-era Rangers home of the worst contracts in the NHL.

In RR's case, my fear was that there isn't such a thing as WR Recruit 4:

Speed: 4.0
Agility: Doesn't have bones
Height: 5'9
Weight: 165
Hands: B-

Once in a while there's a Devin Hester out there, but you have to offer him hookers to get him, amirite? By de-emphasizing several aspects of a player's game, RR was giving up on one of Michigan's biggest built-in advantages: the ability to recruit guys who don't have holes in their games. Meanwhile there isn't that much difference ("hands" is the answer) between Odoms and what RR was recruiting to West Virginia.

The opposite was true of the quarterbacks.

QB Recruit 1: (5-star, No 1 Pocket)

ThPower: A+
ThAccuracy: A
Height: 6'5
Speed: 5.2
Agility D

QB Recruit 2: (3-star, No. 10 Dual-Threat)

ThPower: C+
ThAccuracy: C
Height: 6'3
Speed: 4.7
Agility: B

QB Recruit 3: (3-star, No. 10 Pocket)

ThPower: B+
ThAccuracy: B+
Height: 6'3
Speed: 5.2
Agility: D

QB Recruit 4: (5-star, No. 1 Dual Threat)

ThPower: B+
ThAccuracy: B+
Height: 6'5
Speed: 4.6
Agility: B+

Unlike the ludicrous-legged hypothetical slot WR, that ludicrous-armed top Pro-Style guy actually exists -- there's on average one or two available a year.

In this case Recruit 1 is not "well rounded" -- rather, like WR Recruit 2, he is heavily weighted toward one category. The major schools go after those guys anyway because those are the skills the NFL emphasizes. RR again went where they ain't, but this time toward the more well-rounded type of player: Recruit 2. And here, coming to Michigan offered a big potential upgrade, because that high-rated, well-rounded guy does exist. Rodriguez missed out on the 2008 version of that (Pryor), but found a pair of single-attribute standouts in '09 and then got his Recruit 4 in 2010 in Gardner.


QB Recruit 5: (4-star, No. 6 ATH)

ThPower: B+
ThAccuracy: C+
Height: 6'0
Speed: 4.3
Agility: A+

QB Recruit 6: (4-star, No. 4 Dual-threat)

ThPower: B
ThAccuracy: A+
Height: 6'0
Speed: 4.6
Agility: B

Offensive line worked out similarly to QB: with the other big schools going for guys who max out the size and strength attributes, RR picked up some mighty great well-rounded players (or inherited them) who never would have played at West Virginia.

And it worked. His offense actually did shoot into the elite of the nation -- not because slot production was any greater than he could get at West Virginia, but because Michigan gave RR access to higher-rated quarterbacks and offensive linemen.

Except the defense sucked. Eh.


August 18th, 2011 at 11:42 AM ^

It took him too long to stop recruiting WR #2's all over the field, especially on defense, and start recruiting WR #1's to play those positions, because he could at Michigan. He did so at QB with Devin, moving into the QB #4 range, but at all the other positions (and almost entirely on defense) he wasn't getting the best guys. It wasn't that he ran different offensive and defensive systems; it was that it took him 3 years to start sniffing Michigan-level athletes, when everyone was dreaming of him exploding by using his system with Michigan-type players rather than WV-level players. Recruiting more WV-level players than Michigan is used to, in a tougher conference grind, hurt them, especially on defense.


August 18th, 2011 at 12:49 PM ^

Who among the defensive recruits was a slot bug type?

In the 2008 class, all RR added (not counting Carr's guys) was Taylor Hill. He was definitely a "West Virginia type" because he had previously committed to WVa, but I don't think his scouting profile made him any different from other 4-star linebackers.

That class of 24 players had 16 brought in to play offense, which made sense because the offense was decimated after the '07 class graduated/left early (this was before the attrition hit, other than Mallett).

In the 2009 class, Michigan was coming off a 3-9 season, so the level of recruit that could reasonably be expected to come was going to be a little lower. The defensive recruits (in order of Mountaineeriness) were:

  • Thomas Gordon - A smallish 3-star safety from Cass Tech who had played QB. Since Gordon was 5'11 and there isn't a guy under 6'0 in Hoke's 2012 class, there isn't a comparable. Brian compared him to Brandent Englemon. Turning a nobody QB into a DB is a very RR-at-WVa thing to do. For M comparables, it's really Englemon or nobody.
  • Adrian Witty - The Danny DeVito to Denard's Ahnold in Twins. Didn't qualify. Basically Quinton McCoy, but it is very RR to take a flier on a recruit from a swampy Florida hellhole that he wants to make into a pipeline.
  • Brandin Hawthorne - See above: Pahokee is across town from Apopka. Hawthorne was thought to be too small and doesn't really have a place outside a 3-3-5 Spur or special teams, but he was the glue of his hometown team, so very WVa.
  • Quinton Washington - Going to South Carolina and fending off SEC schools for Q is something Carr would do...for running backs. RR got him as the perfect pulling guard for the spread offense and had to fight off most of that region right down to the wire. No way he gets a guy like that at West Virginia. However, nobody imagines Carr or Hoke would have convinced Q to come play DT.
  • Mike Jones - Here we're on the edge of Mountaineeriness. Jones was a 3-star safety who hits, and likes to hit. Brian said Larry Foote, but Prescott Burgess lite works too. He's from Florida, so RR-ness. But how many Michigan linebackers can you name who were recruited as 6'2, 200-pound safeties? Hell, Brandon Smith was one the year before.
  • Vladimir Emilien - Your typical Ohio State 4-star safety until he went down with injury and Michigan stuck by him. Sticking with injured guys was a favorite WVa way to land higher-rated recruits, but it's also something Bo did a lot of in his day. Could go either way; shades toward a typical Michigan thing.
  • Cam Gordon - You can argue whether Michigan would take a pass on a hometown possession receiver who projects as a rangy linebacker, and whether such a guy would end up bothering us as a 5th-year senior at MSU. What you can't argue is that WVU had Cam-like guys, because they didn't -- this is a middle-of-the-Big-Ten recruit all the way.
  • Craig Roh - Going to Arizona to find a quirky, intelligent, great-technique (crab people!) DE whose dad reads MGoBlog is...not WVU, and not all that Michigan-ish either, except like Shawn Crable I guess. Roh is more the guy we wanted Carr to recruit and plan around. If he lived in Ohio, Hoke would be all over this, but if not...? Roh fits no previous M recruiting pattern except that of awesome.
  • Isaiah Bell - Another linebackerish safety, this one from Michigan's eastern Ohio stomping grounds. Not too different from Hoke's safeties from Ohio this year (except Bell has gained a lot of bad weight since then, which I don't expect from Wilson or Gant).
  • Anthony LaLota - A boom-or-bust 4-star tall kid with lots of potential but very little experience or size (he was 6'6, 230, but listed at 260), and a brain more suited to studying physics than "playing physical." Brian said Alain Kashama. Pat Massey's another. We could keep going. Point is, smart, raw, tall DEs from the East Coast are totally a Michigan thing more than a WV thing.
  • Justin Turner - Highly rated Midwest Ohio defensive back who wants to be Charles Woodson. We practically bred these guys.
  • William Campbell - Can't count him as RR-ian at all since he committed his sophomore (!) year to Carr, then led RR on a "will he or won't he" decommit chase his senior year.

Blue boy johnson

August 18th, 2011 at 2:25 PM ^

I think one of RR's biggest advantages is running a unique brand of the spread and being able, and fortunate enough, to recruit players of Denard's and Pat White's caliber, who don't fit as QBs in the vast majority of coaches' paradigms.

I recall Gary Danielson stating at the time of the RR hire, "Michigan doesn't need to do this," and he was right, and I am loath to admit it. In hindsight it isn't too difficult to apprehend what was clear to Danielson from the outset. Michigan was attracting NFL-caliber quarterbacks like no other school in the land, yet Martin, to the delight of many, decided to abandon one of Michigan's greatest strengths and blaze a new path. Bad idea.

Defense: I know many on here want to heap all the blame on Gerg, but I disagree; I don't think Gerg ever had a chance. I place the majority of the blame on RR. RR brought half his defense with him from WV, but it was the bad half. RR and his cronies couldn't work with Shafer any better than they could with Robinson. Gerg wasn't the constant in the horrible defenses of the RR era; RR and his hand-picked assistants were the constants of what were, by a landslide, the worst defenses in my memory of Michigan football. Gerg might be a terrible coach, or he could be a Super Bowl-winning defensive coordinator. For the sake of argument, let's say he is a terrible coach: team him with a bunch of recalcitrant, mediocre know-it-alls, force him into a system he knows not, and you have the recipe for disaster. Disaster is what RR got.

Mattison was asked a question about the 3-3-5 in his presser the other day and he basically said, "I don't know shit about it", obviously neither did Gerg.


August 18th, 2011 at 3:35 PM ^

Then in 2010 you can throw in the Ohio 3*'s that OSU didn't want-

Courtney Avery (may start, but not making anyone think he's going to be all conference)

Jibreel Black (maybe a situational surprise and positive)

Antonio Kinard

Davion Rogers

Jake Ryan (looking like a "gritty contributor")

The Talbotts (sad story)

Ray Vinopal

DJ Williamson

And guys like that from around the country-

Josh Furman

Carvin Johnson

Jordan Paskorz


And very WV-type guys on offense-

Drew Dileo

Jerald Robinson


That's a lot over a multi-year period. 2011 was a mess, but there were some more already coming before the change that could be mixed in, along with some guys who might finally, in his 3rd full class of recruiting, be more than a token blue chipper. But how FOR SURE some of these guys were coming after the season started tanking has been overstated.

It's not that you can't take sleepers; it's just that you can't take them in that number (unless your name is John Beilein and you're right 80% of the time) and expect to hang with the big boys. And it was starting to get excessive at the end of the Lloyd tenure, and became even greater after, rather than less so. You're supposed to have the excitement of a new coach and an exciting new system propel you to greater heights. As you pointed out, 3-9 kind of hurts that. Which is why putting in your system over winning as much as you can isn't necessarily a sound foundation, because you're better off with better players still partly learning the system than with lesser players who have it down cold. We don't need to get every player OSU wants in order to be good. We just need to get some/any players they want; and we hadn't done that, until this year, in quite a while.



August 18th, 2011 at 10:02 AM ^

I was thinking the same thing - I bought in a little too hard on all the predictions of future performance based on RR teams' past performance to get too wrapped up in this kind of stuff this time around. Still, it's encouraging to see that Hoke's teams have exceeded some expectations.

Great tag line too, by the way.  Apparently the magic happens just about everywhere for Mr. Omameh.

Blue in Yarmouth

August 18th, 2011 at 10:03 AM ^

Thanks for all the hard work; that must have taken a while. One thing I think may be unusual for our coaching change is the difference in schemes. I can see how, on average, a coaching change won't have any direct correlation, but when you are moving an offense from a spread to a pro-style, I have to think that will have an impact. Also, I am not sure there have been any instances where schools went from the absolute worst DC in the history of DCs to one of the best. This makes our circumstance different IMHO, and I think the change in coaching is going to impact our situation in a positive way (I say positive because I really don't think Borges is moving to the 3-yards-and-a-cloud-of-dust MANBALL).


August 18th, 2011 at 10:25 AM ^

...but as a statistician, there are some things right off the bat that stick out to me here: 1) recruiting rankings are biased, since they are subjective and have human error involved. Case in point is how an anonymous 2-star that Michigan or OSU or Texas, etc. gets automatically gets bumped up to a 3-star within a month of that commitment. An anonymous 2-star that Iowa or Navy or Boise St., etc. gets remains an anonymous 2-star. 2) With R^2 that low, it's hard to say these models are good predictors.

Having said that, it's good to see intuition (returning QB=good, GERG=bad) play out.


August 18th, 2011 at 11:12 AM ^

First of all... yer perty smart!

Second, this validates a lot of those "feelings" so many of us have... that the PAC10 looked scarier than they turned out to be and that Michigan was on the verge of having a really solid team, but just never seemed to make it happen (last few years at least) and that teams like Iowa somehow seem to perform solidly despite appearing to be fairly mediocre.

Thanks for the hard work!


August 18th, 2011 at 11:27 AM ^

I do wonder if there might be a way to mathematically correct for the bias that favors non-powerhouses in exceeding expectations? For example, instead of absolute change in rank, what about a percentage based change in rank? If Team A's expectation is 20th and they finish 5th, that's 75% they lopped off from their expectation. In comparison, if Team B's expectation is 100th and they finish 85th, that's only 15% lopped off but for the same absolute value in change. Maybe this won't work quite as simply as I've stated, but I'm at least trying to plant a seed for thought.
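That percent-of-expectation idea can be sketched in a few lines. This is a hypothetical helper, not anything from the diary's actual model; the ranks are just the numbers from the comment above:

```python
# Hypothetical metric: score a team's finish as the fraction of its
# expected rank it "lopped off" (positive = beat expectations).

def pct_exceeded(expected_rank, actual_rank):
    """Percent of the expected rank lopped off (negative if the team fell short)."""
    return (expected_rank - actual_rank) / expected_rank

# Team A: expected 20th, finished 5th  -> lopped off 75% of its expectation
# Team B: expected 100th, finished 85th -> only 15%, same absolute change
team_a = pct_exceeded(20, 5)    # 0.75
team_b = pct_exceeded(100, 85)  # 0.15
print(team_a, team_b)
```

One wrinkle with this metric: a team expected to finish 1st can never post a positive score, so it has its own bias against powerhouses at the very top.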


August 18th, 2011 at 11:30 AM ^

First off, that's an amazing piece of work. Well done. I can imagine the effort that went into it, so take these suggestions with a grain of salt.

What is the effect of changing the source data? Does using Rivals/Scout/ESPN's rankings change the outcome much? How about using a different metric other than FEI? FSH uses F+ or S&P+ rankings, and I'm sure there are other advanced metrics out there to pick from. Added bonus: some of those go back past 2007 to give a better range of rankings.

Funny enough, today's FootballStudyHall article is on underperforming recruits, and 2008 Michigan leads the list. Now I imagine that has something to do with the difference between recruited stars and stars that actually took the field, but it's still an interesting counterpoint to this article.

FWIW, last year the #71 team in defense according to the NCAA was Marshall. #71 in scoring defense was Idaho. FEI says #71 is Louisiana-Lafayette; S&P+ says #71 is Tulsa. So set your expectations appropriately. We won't be a smoking crater, but we won't be good.

restive neb

August 18th, 2011 at 11:47 AM ^

I suspect that it can be explained by the fact that there are two reasons for coaching changes: A highly successful coach is hired away by a better program, causing a school to look for a replacement; or a coach is fired for underperforming. In the former case, you'd expect a drop in performance, and in the latter case, you'd expect a bump.  When analyzed in aggregate, you'd expect that it would show little significance up or down.  Logically, we can infer an effect, but you'd have to break out the reasons for the coaching change into the two categories to test it statistically.
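A toy simulation shows how pooling the two kinds of coaching change can wash out to roughly zero even when each subgroup has a real effect. All numbers here are invented purely for illustration:

```python
# Invented example: "poached" changes hurt next-year performance,
# "fired" changes help, and pooling the two hides both effects.
import random
from statistics import mean

random.seed(1)
# Change in next-year rank position (negative = team got worse)
poached = [random.gauss(-10, 5) for _ in range(100)]  # coach hired away
fired   = [random.gauss(+10, 5) for _ in range(100)]  # coach fired

print(round(mean(poached), 1))          # clearly negative
print(round(mean(fired), 1))            # clearly positive
print(round(mean(poached + fired), 1))  # pooled: near zero
```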

Undefeated dre…

August 18th, 2011 at 1:22 PM ^

In the earlier "Canning a Coordinator" article I looked at the differences between coordinators who were fired vs. those who were promoted. There were some slight effects, but many of them disappear once you control for the previous FEI rank. So a coordinator does great, gets promoted to a better job or to head coach, and the team drops in FEI the following year; ergo, the coordinator was an irreplaceable genius. On the flip side, a team does terribly, the coordinator gets fired, and the next year the team takes a leap; the previous coordinator was therefore a doofus. But it turns out that regression to the mean explains a lot of the movement, and coaching changes don't do much -- in the aggregate. Without a doubt, individual situations can differ.

In that piece, I also looked at effects 2 and 3 years down the line and still couldn't find anything systematic.

Not on topic, but re: other posts mentioning that recruiting rankings are flawed in that a 2* to Navy who switches to Florida State is automatically a 3* -- yes, absolutely, and that's one reason why the non-BCS schools are more likely to exceed expectation. I was more pleased/interested to find a systematic effect for recruiting strength overall, and its size relative to returning starters.

restive neb

August 18th, 2011 at 1:51 PM ^

Just one follow-up... Were you able to test the significance of interactions? For this case, I'm thinking specifically of the interaction between the variables of the previous year's performance and the existence of a coaching change.

Thanks for the effort!

Eye of the Tiger

August 18th, 2011 at 1:34 PM ^

First off, I just want to say this is an excellent diary.

But I do have a few big questions.  

1. When you say "out/underperforms expectations" based on "the model's predictions," what model is that and how does it make predictions of performance?

2. As I'm sure you know, the requirements for statistical validity are an N > 200 and sufficiently random data.  While I'm sure you have enough cases to meet those requirements, there are randomness/distribution issues that derive from doing longitudinal data analysis with 3 time points.  That strikes me as too many for panel data analysis and too few for proper time series analysis, unless you have a specific way of dealing with it that I don't see.  What do you think?  How are you accounting for time effects?  


Sextus Empiricus

August 18th, 2011 at 2:18 PM ^

especially @ QB. I agree that it is useless to look at position vs. recruiting in general, but an analysis of QB would probably work. At this point I don't think you should change your approach; the take-homes aren't going to change that much.

Did you try other regressions than linear?  I don't think it would help - I'm just curious.

I would like to see your data...can you drop it somewhere? It would be great if Brian/Mathlete would support a data-drop folder for users, but FileDropper would work for now.

I would especially like to repeat the analysis for returning starter, QB, and recruiting class, as that is a building block for future models. The confidence intervals and outliers are especially of interest, to me at least. I would also like to see some of the correlations graphically if possible; it helps me grasp the data.

I've been looking at CC data. I would like to match this to your approach (I'm looking back to 2000, so FEI doesn't do much for me). I'm not getting exactly the same results, but I think this is a SISO issue with my dataset; I would have to look at that.

I like UpUpDownDown's piece on player development, but it's a short data window (back to 2007) that favors the Johnny-come-lately types (Iowa and Wisconsin). I think current recruits are getting pitched this take, however. It was a factor for Jordan Walsh, Darian Cooper, and Ray Hamilton (and Diamond and Johnson this year), perhaps.

Kevin Haynes @ Football Outsiders has a Resource Centric Model that uses APR as well. He found his correlates with brute force. I have not looked at that...but would like to know if there is much to it.

Undefeated dre…

August 18th, 2011 at 6:31 PM ^

Thanks, MGoCommunity, for the comments! I've replied to some directly. For the posts on the nerdier side:
1) In the article, expectations = the regression model's prediction. Variance from the prediction (i.e. the regression residual) is how we identify the biggest exceptions (or outliers).
2) Technically this data is 120 cross-sectional units (the FBS teams) across 3 sets of years (2007-2008, 2008-2009, and 2009-2010). This puts the data set into the cross-sectional time-series land, but using any of those techniques would have been a) more complicated b) frankly less in my wheelhouse and c) not really needed. I think treating each team/year as a separate unit of analysis is fine, and I didn't want to get into any complicated differencing, or functional forms that would start sucking up degrees of freedom, etc.
3) I did not look at other regression forms besides linear. Quadratic, cubic, etc. could have done better at predicting, as would a functional form that prevented predictions better than a rank of 1 or worse than a rank of 120. But all those methods lead to much more cumbersome ways of explaining/modeling effects. I wanted something simple and straightforward. And as noted, absolute FEI (vs. the rank) led to more explained variance, but I chose to use ranks because they were easier to explain/interpret.
4) My philosophy on model building is keep it simple, and don't do anything egregiously dumb. Just looking for a way to explain data in a theoretically valid way.
5) I've never heard of an N of 200 required for statistical significance (I've heard of an N of 30 before the Central Limit Theorem kicks in). In any case, we had over 200 cases. Were the error terms distributed normally and independently? I didn't check, but again I'm looking to explain a phenomenon, not build the most sophisticated model around.
6) I did test some interactions, mainly with recruiting rankings, but nothing came up significant. I didn't check interactions with previous year's performance and coaching changes, but I did test the effect of coaching changes on the predicted performance the following year. The (also very long) "Canning a Coordinator, Revisited" diary has a lot more detail on this.
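For point 1, a minimal sketch of "expectation = the regression's prediction, outlier = the residual," using invented rank data rather than the diary's actual FEI dataset:

```python
# Sketch: fit a simple least-squares line, then rank teams by residual.
# A negative residual means the actual rank beat the predicted rank.
from statistics import mean

def ols_fit(x, y):
    """Slope and intercept for simple least squares."""
    mx, my = mean(x), mean(y)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# Fake data: prior-year rank (predictor) vs. current-year rank (outcome)
prior  = [5, 20, 40, 60, 80, 100]
actual = [8, 15, 55, 50, 90, 95]

b, a = ols_fit(prior, actual)
residuals = [yi - (a + b * xi) for xi, yi in zip(prior, actual)]

# Most negative residual = biggest over-performer vs. expectation
best = min(range(len(residuals)), key=lambda i: residuals[i])
print(best, round(residuals[best], 1))
```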

Eye of the Tiger

August 19th, 2011 at 1:11 AM ^

If your N is below it, the standard errors get really large, which means your results are of limited reliability.  200 is thought to lend enough "power" (ie replicability) for reliable large-N analysis.  

It's fine to go down to 30 if you're doing double-blind experiments or something, but it's really poorly suited to this kind of large-N analysis.  But you're beyond N=200 anyways, so that's just an academic point...

I'm more concerned with how you deal with time in what appears to be panel data.  Might not be significant, but did you test for fixed effects by creating year variables...or use some other technique? 
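On the N point, the standard error of a mean shrinks smoothly as 1/sqrt(N), which is why there's no hard cliff at any particular sample size (sigma below is an arbitrary assumed population standard deviation):

```python
# Standard error of a sample mean: sigma / sqrt(N). Quadrupling N
# halves the standard error; nothing special happens at N = 200.
import math

sigma = 10.0  # assumed population standard deviation
for n in (50, 100, 200, 400):
    print(n, round(sigma / math.sqrt(n), 2))
```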

Undefeated dre…

August 19th, 2011 at 3:11 PM ^

If I run the model for each year separately, the results are similar, which is good. If I add intercept dummies for 2 of the 3 years into the main model, the dummy coefficients are insignificant. If I added dummies for each cross-section (each team) I'd lose lots of degrees of freedom, so I didn't do that. I did look at BCS vs. non-BCS and that came out non-significant.
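The year-dummy check described here amounts to adding indicator columns for 2 of the 3 year-pairs to the regression's design matrix. A minimal sketch with fabricated mini-data (any OLS routine could consume the result):

```python
# Dummy-code years, leaving the first year as the baseline so the
# intercept isn't collinear with the indicators.
years = [2008, 2008, 2009, 2009, 2010, 2010]
prior_rank = [10, 50, 20, 60, 30, 70]   # fabricated predictor values

design = [
    [1, x, int(yr == 2009), int(yr == 2010)]  # intercept, predictor, year dummies
    for x, yr in zip(prior_rank, years)
]
for row in design:
    print(row)
```

If the coefficients on the two dummy columns come back insignificant, as reported above, the year-to-year intercept shifts aren't doing any work in the model.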

I'm still surprised by the N=200 idea. Standard errors depend on lots of things, of which sample size is one. I'd never toss out a model with an N of 100, or even 50, if I was confident in the sampling and the results were substantively and statistically significant. Especially with so few predictors in the model.