Canning a Coordinator, Revisited

Submitted by Undefeated dre… on

"We just need a change in coordinator."

Whether the offending coordinator is Greg Robinson or Greg Davis, the refrain is familiar. There's anecdotal evidence to support the benefits of a coordinator change, most recently with Manny Diaz at Mississippi State or Dana Holgorsen at Oklahoma State. But I wanted to attempt a more systematic look at the effects of coordinator changes. This diary is an update of a post in December and incorporates responses to many thoughtful comments to that earlier version.

This is a loooong diary. There are a few steps to get to the end, and I know many tl;dr'ers are impatient. Putting the conclusions first seems a bit like putting the cart before the horse, but…. cart?

Cart

In a nutshell: firing a defensive coordinator tends to 'work' (to a degree), but firing an offensive coordinator does not. The number of returning starters turns out to be much more important than whether or not the coordinator was replaced. Because coordinators tend to be fired from poor-performing teams, often what we see as a positive boost from a coordinator change is simply regression to the mean.

As an added bonus, I stumbled upon these crude models to predict a change in a team's FEI rank vs. the prior year, where a positive change in rank is desirable (e.g. moving from a rank of 30 in the prior year to a rank of 10 in the following year is a +20 in rank):

Change in Offensive FEI Rank from Prior Year: +14 positions in rank if the starting QB returns, +3 for each additional returning offensive starter, and -0.5 for each spot above last place (120th) that the team finished in the previous year.

Change in Defensive FEI Rank from Prior Year: +3 for each returning defensive starter and -0.3 for each spot above last place (120th) that the team finished in the previous year.

These models certainly aren't going to put Football Outsiders out of business, but they're easy to use and somewhat intuitive. More detailed explanations below.
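If you want to play with these rules of thumb yourself, here's a minimal sketch in Python. The function names, argument names, and the clamping of predicted ranks to the 1-120 range are my own additions; the coefficients are just the rounded values quoted above.

```python
# A rough sketch of the two rules of thumb above. The coefficients are the
# rounded values from the text; everything else here is my own framing.

def predicted_offense_change(prev_rank, other_returning_starters, qb_returns):
    """Predicted change in Offensive FEI rank (positive = improvement).

    prev_rank: last season's Offensive FEI rank (1 = best, 120 = worst)
    other_returning_starters: returning offensive starters besides the QB
    qb_returns: True if the starting QB is back
    """
    change = 3 * other_returning_starters - 0.5 * (120 - prev_rank)
    if qb_returns:
        change += 14
    return change

def predicted_defense_change(prev_rank, returning_starters):
    """Predicted change in Defensive FEI rank (positive = improvement)."""
    return 3 * returning_starters - 0.3 * (120 - prev_rank)

def predicted_rank(prev_rank, change):
    """Turn a predicted change back into a rank, kept inside 1-120."""
    return min(120, max(1, round(prev_rank - change)))
```

For example, an offense that finished 30th and returns its quarterback plus seven other starters projects to 14 + 21 - 45 = -10, a slide to roughly 40th.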

The Data

Team performance is based on the Fremeau Efficiency Index (FEI) from FootballOutsiders.com. Their free published data only goes back to 2007. FEI is not perfect, but it's easily available and eliminates some of the noise in scoring or yardage data. Because the focus will be on change in FEI performance, there are three years of data available (2008 vs. 2007, 2009 vs. 2008, and 2010 vs. 2009).

To determine coordinator changes, I used Rivals.com's annual "coordinator carousel." I coded the coordinators into four categories – stayed, promoted, fired/demoted, and left/unknown. A coordinator was classified as 'promoted' if he got a coordinator job at a 'better' school (arbitrarily determined by me, based mainly on the conference of the school) or if he got a head coaching job at any school. A coordinator was classified as fired if he didn't get a new job, took the same job at a 'worse' school, or took a position job at any school. The 'unknowns' are mainly coordinators who went on to take a position in the NFL or the same position at a similar school – in many cases it's hard to determine if that's a promotion or a demotion. Nearly all Michigan fans believe Jim Herrmann's trip to the NFL was encouraged, but for some coaches a job in the NFL could be their desired career path. I did some Googlestalking to try to parse out which was which, but if I could find no definitive sentiment I just coded them into the 'unknown' bucket. The coding was a bit tedious and I would welcome anyone who wants to double-check or validate my coding.
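For anyone trying to replicate the coding, here is roughly how I'd restate the rules in code form. To be clear, the actual coding was done by hand (Googlestalking and all), not by a script, and the dictionary fields below are purely illustrative.

```python
# Illustrative restatement of the hand-coding rules above; the real coding was
# manual. 'new_job' is a hypothetical dict like
# {"role": "coordinator", "school_tier": "better"}  (tier judged mostly by conference).
STAYED, PROMOTED, FIRED, UNKNOWN = "stayed", "promoted", "fired/demoted", "left/unknown"

def code_departing_coordinator(new_job):
    if new_job is None:
        return FIRED                    # no new job found
    if new_job["role"] == "head coach":
        return PROMOTED                 # a head coaching job anywhere counts as a promotion
    if new_job["role"] == "position coach":
        return FIRED                    # a position job at any school
    if new_job["role"] == "coordinator":
        if new_job["school_tier"] == "better":
            return PROMOTED
        if new_job["school_tier"] == "worse":
            return FIRED
    return UNKNOWN                      # NFL moves, lateral moves, anything ambiguous
```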

An issue confounding coordinator changes is that they often come with a head coaching change as well. If a head coach comes in with a whole new staff and the FEI metrics improve, is that because of the head coach or the coordinators? So I separated out coordinators that came on board as part of a new coaching regime (e.g. Malzahn) and those that came on with an existing head coach (e.g. GERG).

With 120 FBS teams and 3 seasons, there are 360 total data points (technically 359 since Western Kentucky was new to the FBS in 2008). Here's a breakdown of what happened with the coordinators. Keep in mind that when a head coach changes, the coordinators also go.

[Table: what happened to the offensive and defensive coordinators across the 359 team-seasons]

And now please say goodbye to the 'unknown' category, because we'll be ignoring them for most of the rest of this piece.

The Exceptions: Best/Worst Firings

To judge the 'best' or 'worst' firings, I calculated the change in the team's Offensive or Defensive FEI rank from the season before the change to the season after the change. This is simplistic, to be sure, but it's also clean and easy to understand. I looked into using the actual FEI metric, but the metric values seem a bit more volatile than the rankings. And the ranks are frankly easier to deal with/explain. I also looked at change in performance 2 years after the change to account for more time for a coordinator's influence to take effect. If anything, the evidence is weaker 2 years down the line, so I focused on 1 year changes.
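For the curious, the change-in-rank calculation is simple enough to sketch in a few lines of pandas. The file name and column names (team, year, ofei_rank, dfei_rank) are hypothetical stand-ins for however you store the FEI data.

```python
import pandas as pd

# A minimal sketch of the year-over-year change calculation. File and column
# names are hypothetical.
fei = pd.read_csv("fei_ranks_2007_2010.csv")

prev = fei.rename(columns={"ofei_rank": "prev_ofei_rank",
                           "dfei_rank": "prev_dfei_rank"})
prev = prev.assign(year=prev["year"] + 1)      # line each season up with the next one

changes = fei.merge(prev, on=["team", "year"])
# Positive change = improvement (e.g. 30th in 2009 -> 10th in 2010 is +20).
changes["ofei_change"] = changes["prev_ofei_rank"] - changes["ofei_rank"]
changes["dfei_change"] = changes["prev_dfei_rank"] - changes["dfei_rank"]
```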

And here's the best and worst firings of coordinators, judged solely by the unit's movement in FEI in the season after the coordinator change.

[Table: best and worst coordinator firings, by change in FEI rank the following season]

For those curious:

  • Manny Diaz's job in 2010 with Mississippi State reflects the 6th best performance following a DC getting fired (+59 places in FEI rank)
  • Greg Robinson's first year with Michigan ranks as the 6th worst performance following a DC getting fired (-25 places)
  • The best change in DFEI rank for a DC who didn't get fired was North Carolina State in 2010, under Mike Archer (with a Jon TAHNOOTA boost?) (+70 places)
  • The best change in OFEI rank for an OC who didn't get fired was San Diego State in 2010, under hey, that's Al Borges! (+81 places) [Brian's piece on Borges had SDSU as improving only 67 spots in FEI rank from 2009 to 2010, but I think that was using pre-2010 bowl season FEI numbers]

One other curiosity – the top changes in performance after a coordinator was fired all occurred in 2010. This is strange. We know Fremeau Efficiency isn't perfect, but it uses the same methodology in each year. I can think of only two explanations: 1) it's a fluke, or 2) coaches/athletic directors are getting more astute about when to fire/not fire a coordinator. My guess is it's the former, but I'm open to other interpretations.

The Rule: General Trends with Coordinator Replacement

As we move from looking at particularly good or bad firings to looking at the overall picture, I need to make this point clear: A firing by itself does not cause an improvement in FEI performance. Obviously, the who's matter (Shafer to GERG, anyone?). And the aggregate averages we look at include outliers on either end (and those outliers are potentially the cases where the 'who' really does matter, for good or bad).

A couple hypotheses to test:

H1) Teams that fire their coordinators should improve more dramatically in FEI than teams that stand pat with their coordinators. Head coaches will typically fire a coordinator only if he is perceived to be underperforming with his unit, so a change of coordinator should mean more of a rebound in FEI than in normal circumstances.

H2) Teams that lose a coordinator due to a promotion should decline more in FEI than teams that stand pat with their coordinators. The thinking here is that a coordinator must have been overperforming with his unit to merit a promotion, so his departure will coincide with the unit declining more than normal.

For the same reasons discussed in the previous section, we'll evaluate the hypotheses based on the change in a unit's FEI rank from the previous season to the current season. Looking at our three years of data across 120 FBS teams, we get this:

[Chart: average change in FEI rank by coordinator status, all teams]

Offense first. Here we see little support for H1) – a team that fires its offensive coordinator improves by 2.3 positions in FEI rank, on average, but teams that stand pat improve by 1.1 positions in FEI rank – virtually no difference. The story is the same even if we look at performance two years out (not shown here). We do see some support for H2) – a team that loses its offensive coordinator to a promotion tends to decline in performance by 7.8 positions in FEI rank the following season.

For the defense, the pattern is reversed. We see strong support for H1) – a team that fires its defensive coordinator improves by nearly 12 positions in FEI rank, on average, while a team that stands pat with its coordinator has almost no change in FEI rank. But we don't see much support for H2) – a unit that loses a defensive coordinator to promotion drops by only 3.1 positions in FEI rank.
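In code terms, the comparison behind H1) and H2) is just a group average of the rank changes, split by what happened to the coordinator. This continues the hypothetical 'changes' table from the earlier sketch; the status columns and their labels are my own naming.

```python
# Average change in rank by what happened to the coordinator. The status
# columns (oc_status, dc_status) and values like 'stayed', 'promoted', 'fired',
# 'whole staff change' are hypothetical.
print(changes.groupby("oc_status")["ofei_change"].agg(["mean", "count"]))
print(changes.groupby("dc_status")["dfei_change"].agg(["mean", "count"]))

# The top-60 / bottom-60 comparisons in the next section are the same idea,
# just filtered on the prior year's rank first:
bottom_60 = changes[changes["prev_dfei_rank"] > 60]
print(bottom_60.groupby("dc_status")["dfei_change"].mean())
```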

By now you're thinking three things:

  1. Holy shit, this is too long!
  2. What about the players a coordinator inherits? Did the best players graduate, or did an inexperienced group get more seasoning? You can't look at coordinator performance without considering the players.
  3. What about regression to the mean, or the tendency of units that do incredibly well in one year to slip the next year, or for units that do incredibly poorly in one year to improve the next year – regardless of who is coaching/coordinating?

For 1), you're right, and we're not even halfway! For 2), you're right, but hold on a second. Let's look at 3).

First, we'll look at teams that finished in the top 60 in FEI in the previous season. We'd expect the teams to decline in performance, in aggregate, simply because of regression to the mean. But we can still evaluate our old friends H1) and H2).

[Chart: average change in FEI rank by coordinator status, teams in the top 60 of FEI the prior year]

And the verdict is… basically no support at all for H1), for either offense or defense. If a coordinator of a top 60 team is fired, the typical team performs about the same as the typical team where a coordinator stayed. We see some support for H2), but only for offenses; top 60 teams that lose an offensive coordinator to promotion tend to fall back even more than top 60 teams that stand pat. And oh by the way, top 60 teams that change their entire staffs tend to really drop off the next year.

So let's do the same for the bottom 60 teams in FEI.

[Chart: average change in FEI rank by coordinator status, teams in the bottom 60 of FEI the prior year]

For H1), we see strong support for the defensive side but no support for the offensive side. A team that fires its defensive coordinator tends to improve 24.4 positions in FEI rank, while a team that stands pat tends to improve only 9 positions in defensive FEI rank [Beating a dead horse note: I'm not saying that firing a defensive coordinator causes the FEI to improve, only that there's an association between the two. The real cause, most likely, is that the unit was underperforming badly vs. the head coach's expectations, which caused the coordinator to get fired]. On the offensive side, a team that fires its coordinator actually performs worse than a team that stands pat, on average.

H2) gets no support for either offense or defense. Sample sizes are small (because not many coordinators from teams in the bottom 60 in FEI get promoted), so mileage varies, but these teams didn't appear to have any difficulty replacing their promoted coordinators.

Double bar charts! What do they all mean?

More or less this:

  1. Among teams that finish in the bottom half of FEI rankings, those that fire a defensive coordinator outperform their stand pat counterparts by about 15 points in FEI rank, on average. This is the only time we see a substantial positive impact from canning a coordinator.
  2. Among teams that finish in the top half of FEI rankings, those that lose an offensive coordinator to a promotion underperform their stand pat counterparts by about 11 points in FEI rank, on average. So either offensive coordinators get out while the getting is good, or those promoted offensive coordinators really were/are offensive geniuses whose team suffers without them.

Adding Player Quality into the Mix

In the first version of this article I basically punted on trying to quantify the quality of the rosters. Obviously a coordinator inheriting a bunch of bad freshmen will not fare nearly as well as a coordinator inheriting a roster full of good upperclassmen. To quantify the quality of the roster, I'm using Phil Steele's favorite metric, returning starters. Returning starter data comes from Vegas Insider in 2008, Phil Steele's blog post in the Orlando Sentinel in 2009, and Phil Steele's own site for 2010. I decided against using recruiting rankings not just because it was more work (especially splitting out offense vs. defense), but also because teams tend not to vary too much in recruiting year over year (Scout's team per recruit averages correlate at .9 year over year), and when they do vary a lot it tends to coincide with either one or two fluky recruits or with a head coaching change, which I'm already incorporating.

So, do teams that return more starters tend to do better in FEI? Well, Phil Steele would cry if the data didn't support it. I split unit/years into rough thirds based on the number of returning starters, and ...

[Chart: average change in FEI rank by returning-starters third]

We see a clear effect, more so for the offense. A team in the top third of returning offensive starters tends to improve by 13 positions of FEI rank from one year to the next, while a team in the bottom third declines by 14 positions of FEI rank. The same story applies, albeit less dramatically, for defenses.
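The returning-starter split is equally simple to sketch, again using the hypothetical 'changes' table (ret_off_starters is an assumed column name):

```python
# Rough thirds by returning offensive starters. Ranking first avoids duplicate
# bin edges when lots of teams return the same number of starters.
changes["ret_off_third"] = pd.qcut(
    changes["ret_off_starters"].rank(method="first"), 3,
    labels=["bottom third", "middle third", "top third"])
print(changes.groupby("ret_off_third")["ofei_change"].mean())
```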

(Finally!) Putting It All Together

The main question is, once you account for returning starters, does the change in coordinators still matter? If you've made it this far, you're as tired of bar charts as I am. For this analysis I'm going to turn to the simple statistician's best friend, regression analysis. Regression analysis is great for having factors basically fight it out to see which is the better predictor of our target variable.

You can't do a regression without good prior beliefs, so I'll put 'em down here:

  1. More returning starters should lead to a positive change in FEI rank.
  2. Returning QB's likely matter more than other returning offensive starters, so we should look at them separately (just like Phil Steele does).
  3. All other things being equal, it's likely that a team at the bottom of the rankings will improve the next year, and that a team at the top of the rankings will decline the next year. This is the regression to the mean hypothesis, which implies the previous year's rank should have an impact on the change in FEI rank.
  4. If coordinator changes truly have an impact, it needs to show up after we account for returning starters and previous year's FEI rank.

[NOTE: the gurus at FootballOutsiders use both returning starters and recruiting rankings, along with a five year program success score, fluky turnover margins, etc. in their predictive models. Some information about what goes into their projection stew is available here and here.]

Predicting Change in Offensive FEI Rank

If we look at the offense, basically (1), (2), and (3) are confirmed and (4) is blown out of the water. Here's the full model:

[Table: full regression model for change in Offensive FEI rank]

A flag/dummy variable was used to test the effects of three of the four categories of coach changes (OC fired, OC promoted, and whole staff swept out – for statistical reasons, the intercept/'default' includes those situations where the coaching staff didn't change at all). I'll wait to interpret the impacts until we get to the reduced model. For now we'll just worry about our statistical confidence – higher is better. The typical rule of thumb is to look for 90% or 95% confidence (meaning that if the true effect were zero, we'd see an apparent effect this large in only about 1 in 10 or 1 in 20 samples). We have four variables with very high confidence, so they'll stay in the model. But the variables reflecting a coordinator change have a low confidence, meaning we can't reject the hypothesis that coordinator changes have no effect on change in FEI rank. In cruder terms – coordinator changes don't seem to matter on the offensive side of the ball.
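Here is roughly what that model looks like in code, using statsmodels' formula interface. The column names are hypothetical, and I'm not claiming this reproduces the exact fit above, just the structure: previous rank, returning starters, returning QB, and dummy variables for the coordinator-change categories with "no change" as the default.

```python
import statsmodels.formula.api as smf

# Sketch of the full offensive model. Column names are hypothetical; C(...)
# builds the flag/dummy variables for the coordinator-status categories, with
# 'stayed' (no staff change) as the omitted default, matching the setup above.
full_off = smf.ols(
    "ofei_change ~ prev_ofei_rank + ret_off_starters + qb_returns"
    " + C(oc_status, Treatment(reference='stayed'))",
    data=changes,
).fit()
print(full_off.summary())   # coefficients plus the confidence/p-value for each term
```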

And now the reduced model, dropping the irrelevant coordinator variables. The R-squared (overall measure of fit) is unchanged at 0.32.

[Table: reduced regression model for change in Offensive FEI rank]

Even though you read MGoBlog, you're probably more of a football fan than a stats nerd. So some interpretation:

  • The R-squared of 0.32 is pretty good, considering we don't have a lot of inputs in the model.
  • The intercept/default is the predicted change in Offensive FEI rank if all the other variables are at their minimum. In effect, if a team was first in Offensive FEI rank and had no returning starters, its predicted change in FEI rank would be to go from 1st to 59th – a true regression to the mean.
  • The Offensive FEI Rank coefficient basically means a team is expected to gain a half a point in rank for every point of rank it had in the previous year. If the 120th ranked team has no returning starters, its predicted change in rank is -58.1 + 120*(0.5), or +1.9. In other words, it would be predicted to finish 118th in FEI rank, or just about at rock bottom again.
  • Returning offensive starters is fairly straightforward – each returning starter is worth +3.2 points in Offensive FEI rank.
  • Phil Steele thinks a returning quarterback is special, and the data supports his claim. While a typical returning starter is worth +3.2 points, a returning quarterback gets a 10.8 point bonus, for a total impact of 14 ranking points. (A quick numeric check of these coefficients follows this list.)
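And the promised check, taking the fitted coefficients quoted above at face value (the exact way the intercept enters is my reading of the table, not something I re-ran):

```python
# Quick numeric check of the reduced offensive model, using the coefficients
# quoted in the bullets above.
def off_change(prev_rank, ret_starters, qb_returns):
    return -58.1 + 0.5 * prev_rank + 3.2 * ret_starters + 10.8 * qb_returns

print(off_change(120, 0, False))   # 1.9  -> the 120th-ranked, no-starters example
print(off_change(1, 0, False))     # -57.6 -> roughly the "1st falls to 59th" default
```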

To help get a handle on what the model is saying, below are some examples of hypothetical situations.

[Table: hypothetical examples from the offensive model]

The model's not perfect – it can't predict any team to be ranked above 13th or below 118th in the following year. And in R-squared terms, the majority of variance in the data is left unexplained. But all effects are statistically significant, and the model makes intuitive sense. You may recognize the last line of the table as Michigan's current situation. If Michigan behaves according to the model, the change in coaching will have no impact and Michigan will regress to the mean a bit, falling from 2nd to 20th. Of course no particular situation, including Michigan's, is guaranteed to perform as the model predicts, but it's a prediction we can test.

Predicting Change in Defensive FEI Rank

QB's don't play defense, and Phil Steele doesn't call out a particular defensive position as critical, so we're testing three hypotheses: that more returning starters = improvement in rank, that worse previous year's rank leads to better next year's rank, and that a coordinator change can have an impact on FEI performance. Table?

Table:

[Table: full regression model for change in Defensive FEI rank]

The story here is a bit more complicated. As with the offensive side, the previous year's rank has a clear impact, as does the number of returning starters. Also similar to the offensive side, the promotion of a coordinator has no statistically significant effect. But the other two flags have borderline 'significance'. Keeping the head coach but firing the DC is associated with a 7.5-position gain in FEI rank, while firing the DC *and* the head coach is associated with a 6.4-position decline. Because they are borderline, I'm going to leave them in and just drop the "DC promoted" variable from the final model, which is below.

[Table: reduced regression model for change in Defensive FEI rank]

Quick highlights:

  • R-squared of .23 is not as good as with the offensive side of the ball, perhaps because of the absence of a single key returning starter as with the QB on the offensive side. [Note: if we used actual FEI, not FEI rank, the R-squared is much stronger (about double) but with the same general results. However, interpreting FEI changes is not as transparent as interpreting FEI rank changes, so I'm sticking with rank changes here. On the offensive side, performance of the model is about the same whether we use FEI or FEI rank].
  • Interpretation of the intercept/'default' is the same as with the offense. Assuming all other variables are at minimum (i.e. previous FEI rank was 1st, 0 returning defensive starters, no DC change), then the team is predicted to drop to 40th in FEI the following year.
  • The intercept is smaller in the defensive model, but so is the 'reward' for having a bad FEI rank the previous year, so it balances out. A team that finished 120th in Defensive FEI in the previous year, returns no starters, and makes no coordinator change is predicted to change in rank by -39.2 + 120*(0.3), or -3.2 positions, to a rank of 123. OK, that's not possible (there are only 120 teams), but it's close enough and again shows some intuitive power of the model – take a crappy defense and return no starters, and it will remain crappy.
  • Remarkably, the worth of a returning defensive starter is virtually the same as the worth of a returning offensive starter.
  • In situations where the defense is so bad the coordinator is fired, teams tend to get a 7.3 point boost in FEI rank. But if the head coach is also swept out, teams tend to drop 6.6 points in FEI rank. (Both flags are sketched in the snippet just below.)
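Here's the defensive counterpart of the earlier offensive check, built from the numbers in the bullets above. The per-starter value isn't quoted exactly, so I'm approximating it at +3 per the "virtually the same as offense" note; treat it as rough.

```python
# Sketch of the reduced defensive model from the highlights above. The
# per-starter value of +3.0 is an approximation, not a quoted coefficient.
def def_change(prev_rank, ret_starters, coaching="no change"):
    """coaching: 'no change', 'dc fired' (HC stays), or 'whole staff change'."""
    change = -39.2 + 0.3 * prev_rank + 3.0 * ret_starters
    if coaching == "dc fired":
        change += 7.3
    elif coaching == "whole staff change":
        change -= 6.6
    return change

print(def_change(120, 0))   # -3.2 -> the "crappy defense stays crappy" example
```

Incidentally, the gap between the "DC fired" and "whole staff change" flags (7.3 + 6.6, about 14 positions) is presumably why the two Michigan hypotheticals below, 80th if only the DC had been fired versus 94th with the whole staff replaced, land about 14 spots apart.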

As before, below are some hypotheticals to help show how to interpret the model.

[Table: hypothetical examples from the defensive model]

Once again the last line is Michigan's predicted outcome for 2011 based on the model – a modest improvement to 94th in FEI. Note that if Rodriguez had stayed but Robinson was fired, the model would have predicted a defensive FEI rank of 80th in 2011. For the love of all that is holy this is just presented for interest's sake; we have no idea what would have been, and of course we have no idea of what will be until it actually happens. I for one certainly hope (and believe) Mattison's defensive unit will outperform the model's expectation.

Important Caveats

  • The model is not saying that defensive coordinators should be fired every year in order to get a 7+ point boost in FEI rank. Even though we're using regression, this is a descriptive model, not a normative one – it's only describing what has happened, not prescribing what should happen.
  • Both models have statistically significant results. But they're nowhere close to perfect predictors of performance – in statistical terms, a lot more variance is left unexplained than explained. So while we can use the model to make rough predictions, your mileage will definitely vary in individual situations.
  • I don't want to diminish the importance of individual coaches. Manny Diaz may bring golden blitzes of thunder and rage wherever he goes, and GERG may bring doom wherever he goes. It's just that in the aggregate, we don't see much evidence for clear changes in unit performance based on a coordinator switch. If the trends of 2010 continue, however, that may change.
  • Another way coordinators can impact the team long-term is with recruiting. If a coordinator happens to be a heckuva recruiter along with being a decent coach, that will likely pay dividends longer down the line. That's not investigated here.
  • The guys at FootballOutsiders do a much better job of prediction than this model. The only issue is that their models are both more complicated and less transparent. And I'm not sure if they've ever tested coordinator changes. In any case, this article from 2008 says their model had a correlation of .8 for predicting next year's FEI – which corresponds to an R-squared about double the models above.

Wrapping It All Up

For me, this analysis has three big surprises:

  1. No matter how we slice it, changing an offensive coordinator can't be tied to a systematic gain in offensive performance. Maybe coordinators are fired more for philosophical or chemistry differences than for performance-related issues.
  2. In the aggregate, firing a defensive coordinator does correspond with a boost in the unit's performance, but it's not huge – roughly equivalent to the benefit of having two more returning starters on defense.
  3. Roster quality, as measured by returning starters, has a clear positive impact on change in FEI rank, and it's almost exactly the same for returning offensive and defensive starters (QB's excluded).

Another surprise is that Al Borges, happily, pops up twice in the good column – after he left Auburn its offense went in the FEI tank, and in his last season at SDSU his offense improved more in FEI rank than any other unit that did not change coordinators in the three years of data I examined.

Things I'll think more about:

  • Whenever I hear an offensive coordinator is fired, my first reaction will be that it's a short-term desperation move. My crazy prediction: Mack Brown's removal of Greg Davis is an indicator that Mack's in his last few years of being a head coach.
  • Fremeau may be flawed. By all accounts Auburn's offense was in decline when Borges was fired in 2007, and yet its offense was ranked 24th – Fremeau may be 100% right, but in some cases the FEI is clearly contrary to conventional wisdom.
  • Was the spate of 'good' firings in 2010 a one-time fluke, or part of a trend? The only way to tell is with more years of data. Is it possible that coaches/AD's are getting better at knowing when to fire/not fire a coordinator, and whom to hire as a replacement?
  • It could be interesting to compare a unit's average recruiting rankings in the two years prior to a coordinator change with the two years after the change. I think the focus would have to be on a per recruit basis, not per class basis, to make up for uneven class sizes and roster needs.
  • [EDIT: new] Either now, or after a bit more tinkering with the model, we can go back and look at teams/units that overperformed (or underperformed) relative to the model's expectations. For instance, and not surprisingly, it looks like Michigan's 2010 defense underperformed the model's expectations by 40+ spots, while the offense overperformed the model's expectations by 50+ spots. In fact, GERG's D underperformed expectations 2 years running, which is more statistical evidence of his incompetence. And coaches/coordinators who systematically overperform expectations could be true geniuses/motivators. By the way, if you look at the teams that had a high net underperformance in a certain year, many were either in their first year under a new coach (Washington St. 2008, Kansas 2010) or had a head coaching change the following year (Washington 2008, San Jose State 2009). Need a little more time to think about/look into this one. A rough sketch of the residual check follows this list.
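As a sketch of how that residual check could work, continuing the hypothetical pandas/statsmodels setup from earlier (the same idea applies to the defensive model):

```python
import statsmodels.formula.api as smf

# Over/underperformance relative to the model: the residual is the actual
# change in rank minus the model's predicted change, so big positive residuals
# are overachievers and big negative ones are underachievers.
reduced_off = smf.ols(
    "ofei_change ~ prev_ofei_rank + ret_off_starters + qb_returns",
    data=changes).fit()
changes["off_residual"] = reduced_off.resid
print(changes.nlargest(10, "off_residual")[["team", "year", "off_residual"]])   # overachievers
print(changes.nsmallest(10, "off_residual")[["team", "year", "off_residual"]])  # underachievers
```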

This piece is still a work in progress, and there may be blind spots I haven't even considered. At the very least I'll try to update it next year with a new round of data. As ever, comments/feedback, especially of the constructive variety, are welcome. Go Blue!

Comments

CoachW

February 15th, 2011 at 11:52 PM ^

tl;dr

/s

Thanks for the effort and time it took to put this together.  Very insightful.


My summary:


Denard = Offense go boom.  In a good way, yes?  As in all over you opponent try?

Defense = It couldn't get any worse so let's get rid of the stuffed animals on the sidelines and then we can instill all the toughness and such.

In all honesty, I think this is one of the better posts in recent memory.  Thank you.

Undefeated dre…

February 16th, 2011 at 5:04 PM ^

A couple things to keep in mind:

1) The crude model for the offense doesn't "care" if it's Borges or Magee, or Hoke or Rodriguez, running the show. The prediction is a decline no matter what. The thing is, our offense was tremendous last year, at least in terms of FEI. The natural reaction is to say "well, it's going to be tremendous-er next year", but it doesn't have much more to go in the upward direction, and any number of things could lead to a downward dip. This kind of thing happens all the time in baseball – well if these guys just perform at last year's level, and these guys step up, we'll be unbeatable! But more often than not multiple players fail to perform to expectations. Of course, if we dip a few in FEI but go up a few in scoring offense, I'd be delighted.

2) Michigan is a very weird case to try to predict, because both of its units were far out into the tails of FEI. Out of all teams from 2008 to 2010, Michigan 2010 had the largest gap in FEI rank between the offensive and defensive units (106 places). Idaho 2009 had a 105 gap, and East Carolina 2010 had a gap of 100 (Stanford had a gap of 94 in 2009 – woo Stanford precedent!). Anyway, with a linear regression model and truncated data, predictive performance isn't always great at the limits of the data distribution (in this case, ranks near 120 or ranks near 1). So mileage will definitely vary with Michigan, especially when you add in all the idiosyncratic things about the Michigan roster and coaching staff.

BlueDragon

February 16th, 2011 at 12:25 AM ^

I really liked your post.  The stats were well-researched and the use of hypotheses was well thought-out and explored via the data.  I don't know as much about stats as I should as a member of this site, but reading up is always very enjoyable and informative.  +1.

Organizationally, I had just a few criticisms.  The references to the exceeding length are fine, but either condense the descriptions or find a way to re-write the connective text and explanations without the references.  It comes off as a little too self-deprecating for a diary of this scope.

There's an easy solution, though.  Don't forget that we're on the internet, not writing a paper.  So, we can insert funny images at well-chosen spots to break up the monotony and keep our audience's attention.  I suggest visiting Cracked.com and reading a few of their articles.  One of the things I really admire about that site is the way the written text and the photos/videos flow together.  Some of the truly inspired diaries on MGoBlog have made good use of this technique.  Offhand, Misopogon's Decimated Defense #1 is a good example because of the way he blows up the number 58 1/3%, which is the percentage of defensive recruits over the past five years who were still playing for Michigan as of October 29th, 2009.

http://mgoblog.com/diaries/decimated-defense  http://www.cracked.com/

Your Cart section was comparable to an expository paragraph.  I read it eagerly when I first clicked on your post, but when I reached the next section, I was a little disappointed.  Why?  The first question that popped into my mind after finishing the head of the diary was, "What happens if the coordinator left for greener pastures?"  Granted, the Michigan perspective as of this offseason is, "We fired everybody, and what happens next?"  However, when writing a diary of this scope, the emphasis on pure statistics not explicitly limited to Michigan or the Big Ten means that all four of the possible fates of the coordinator (staying, promotion, firing, new HC) should have been addressed, in abbreviated format, in the Cart paragraph.

Your title is also a little misleading and ties into my previous point about the Cart section.  I was left with the initial impression that this was a post purely about firing coordinators at one position or another.  However, you addressed the possibilities of a coordinator being promoted, or a coordinator being retained, which incidentally makes a good null hypothesis for H1 and H2 in The Rule: General Trends with Coordinator Replacement.  My point is, the title added to the confusion created by the organization of the Cart.

Otherwise, this is fine work.  I hope The Mathelete reads this, because stats like this are right up his alley.  I look forward to reading more of your work.

Undefeated dre…

February 16th, 2011 at 9:30 AM ^

I love those other diaries, and I think they're clever and well done. I may be too old a dog to learn a new trick -- my first papers were written with PC Write (you can all shudder now).

One quick note: I'm a big believer of replicability in research -- meaning, tell where you got your data, what assumptions you made, etc. This can make things tedious, for sure, but it's something I felt needed to be done (especially since footnotes aren't possible here -- which, given Brian's affinity for DFW, is unfortunate!). It's also entirely possible I'm a control freak.

ken725

February 16th, 2011 at 12:26 AM ^

Wow.  Very nice work.  

I think the situation at Michigan was somewhat different than most places.  It is rumored that Gerg was actually interviewing for the LB coach position and that RR liked him so much that he decided to hire him as DC.  We took Gerg, who ran the 4-3 defense and surrounded him with coaches who knew the 3-3-5 system that RR ran at WVU.  

I have no idea how you can account for that in your regression, but I think the 3-3-5 mindset and forcing Gerg to run that system along with other factors made our defense the way it was.


justingoblue

February 16th, 2011 at 1:02 AM ^

This probably had more research behind it than any of the diaries since the season ended. (Don't know if Mathlete has chimed in since then, but if not then definitely, if so then probably.)

Truly incredible, way to go.

Clayzer

February 16th, 2011 at 8:51 AM ^

Posts like this make me question whether or not I'm smart enough for this blog. Then I remember posters like The Knowledge exist and I feel slightly better about myself...but only slightly

TheMile

February 16th, 2011 at 9:06 AM ^

Great article.  My only critique is that the metric you analyze is FEI rank.  I think the raw FEI numbers would be a better metric to compare than the ranks.  It likely wouldn't change any of your conclusions, though, so it's a somewhat irrelevant point.

Undefeated dre…

February 16th, 2011 at 9:17 AM ^

Thanks! The issue is, an actual FEI value doesn't have too much intrinsic meaning to most people (including me). If you said team x was a .5 in FEI, I'd have no idea what it meant. I did run the models using FEI, not rank, and they performed similarly -- in fact there was a much higher R2 on the defensive side if FEI, not FEI rank, is used. But again, it's not as intuitive to use FEI.

Moe Greene

February 16th, 2011 at 9:54 AM ^

Effective immediately, I will replace all copy in my papers from "the variable was positive and statistically significant at a better than .05 level" with:


"Is it significant? HELLZ YEAH!"

stubob

February 16th, 2011 at 12:18 PM ^

Wow, very impressive. My interpretation is that the numbers confirm observation: that coaching changes just don't make a huge change in performance (aside from a dip during the changeover), and the number of experienced players has a larger role.

One part of the data that would be interesting is the distribution of FEI for coaching changes: do coaches leave good teams, or do bad teams fire coaches? Also, how do teams that don't have a titular coordinator fit in? If the HC is also the OC, does that affect the numbers at all?

Undefeated dre…

February 16th, 2011 at 1:12 PM ^

I do think (some) individual coaches matter, and there's always the effect on recruiting I'm not showing here. But yeah.

I can look into the dual-responsibility thing – is there a list of those situations? Historical?

On your other question – yes and yes. The table below has the unit's average FEI rank in the year that led to the coordinator staying/going/being promoted.

Average Team FEI Rank that led to:    Offense    Defense
Coordinator stayed                       59.2       58.8
Coordinator promoted                     44.3       40.8
Coordinator fired                        74.2       76.2
Head coach change                        67.8       64.9

WolverBean

February 16th, 2011 at 1:54 PM ^

Both models have statistically significant results. But they're nowhere close to perfect predictors of performance – in statistical terms, a lot more variance is left unexplained than explained.


Leaving room for injuries, team chemistry/leadership, weather, the particulars of matchups between teams, amazing individual performances, "momentum," the timing of turnovers, and the like.  In other words, all the things that make football so compelling.  I think if the model explained all the variance in the data, I'd actually be disappointed!

Great work.  I give this diary five "Tremendous"es out of five.

OneFootIn

February 16th, 2011 at 6:15 PM ^

I for one appreciate the methodical approach and thought you did a great job keeping it fun to read despite the length. I will never say no to silly pix along the way, but they aren't necessary. I am going to bookmark your post to compare actual outcomes with your predictions. It might help us improve the model to see how Michigan varies from predictions if we can identify some hypotheses about why the variation occurred. At least we know there will be no shortage of opinions about that...

Go Blue!

True Blue in CO

February 16th, 2011 at 8:57 PM ^

and waited to carefully read tonight. Great job and even though it is long, it was still an easy read if you take your time. It would be great to see an analysis of change from one year to the next by teams with both an above average offense and a way below average defense and what happened with or without a coaching change. This quality analysis will be tough for others to beat.

BlueHills

February 17th, 2011 at 8:58 PM ^

Wow, you showed an awful lot of patience putting this together, it's really impressive!

Of course, there are always individual exceptions to these kinds of things, so here's to hoping that M will improve despite the stat that shows that a staff change usually causes some pain in the first year!


m1jjb00

February 17th, 2011 at 9:27 PM ^

This was excellent.

Test whether there's a large difference in the standard deviation of the residuals in the cases of a coaching change.  It sounds like there is from your description.


Realus

February 18th, 2011 at 8:33 AM ^

It's diaries like this that take MGoBlog from being really really good to the best!

Thanks for all of the time and hard work.

And thanks to Brian for making all of this possible.  It's diaries like this that show that Brian is not only a really good analyst and writer, but a fantastic catalyst / blog manager as well.

Everyone Murders

February 21st, 2011 at 8:52 AM ^

"What about regression to the mean, or the tendency of units that do incredibly well in one year to slip the next year, or for units that do incredibly poorly in one year to improve the next year – regardless of who is coaching/coordinating?"

The second part of this sentence sums up my mindset going into the past two seasons.  Unfortunately, it didn't pan out for Michigan.

Great diary - makes it worth wading through some of the chaff (some of which chaffes me) when you come across a diary of this quality.  

zlionsfan

February 21st, 2011 at 1:57 PM ^

You're right, FEI is still a work in progress. FO's coverage of college football is still comparatively new, and unlike with the NFL, I don't know how far back they're going to be able to go. NCAA games have much less available information than NFL games do ...

I actually like the length of this diary. I think it's better to answer as many questions as you can in the body rather than (or in addition to) in the comments ... ideally you'd be able to use DHTML or AJAX or something to hide some of the details by default and let those who are interested read more, but I doubt that's possible in this setting. You'll probably get some tl;dr no matter what you write anyway ...

Hopefully Michigan's 2011 season will be one that sets the curve, so to speak, WRT DC changes. 108th to 94th would be a start, but I suspect (hope?) dumping GERG for Mattison will bring about more improvement.