Canning a Coordinator: Does It Work?

Submitted by Undefeated dre… on December 5th, 2010 at 4:01 PM

Yesterday I posted something that in retrospect was more on the self-indulgent than on the value-add side of things. To make it up to the MGoBlog community, I wanted to do some data crunching. The problem is that many posters on the site do the numbers thing very well, offering great original analysis that deserves as wide an audience as possible. So I tried to find a niche.

One meme that seemed worth analyzing was "we just need a change in coordinator". This premise is not particular to Rich Rodriguez, as Lloyd Carr, Jim Herrmann, and Ron English could attest. It's of course not even particular to Michigan (see Texas vs. Greg Davis). Fans think a change of coordinator will bring success – but most of the evidence is anecdotal (e.g. Gus Malzahn at Auburn, Manny Diaz at Mississippi State). Of course the success or failure of coordinator changes depends on who is hired, not just on the fact that someone was hired. But I wanted to attempt a more systematic look at the effects of coordinator changes on the performance of that coordinator's unit.

Before you type tl;dr, here's a quick summary of findings: changing the defensive coordinator of a poorly performing defensive unit tends to lead to a modest improvement in the next year. Changing an offensive coordinator is much more of a crapshoot with no clear trends, even if we look at performance two years after the change.


Background I Feel Obligated to Place Here But Feel Free to Skip

The available data is a bit sparse. For performance metrics, I wanted to use the Fremeau Efficiency Index (FEI) from Football Outsiders. That (publicly available) data only goes back to 2007. FEI is great because it eliminates a lot of noise in the performance data, especially strength of schedule and atypical end-of-half drives. For 2010 I'm using the FEI rankings as of the games of 11/27/10 – with so few games this weekend I didn't want to wait until mid-week to post this.

A bit more difficult was data on coordinator changes. In recent years Rivals has helpfully posted a "coordinator carousel" listing which coordinators left, where they went, and who replaced them. I took that data and coded the coordinators into four categories – stayed, promoted, fired/demoted, and unknown. A coordinator was classified as 'promoted' if he got a coordinator job at a 'better' school (arbitrarily determined by me, based mainly on the conference of the school) or if he got a head coaching job at any school. A coordinator was classified as 'fired' if he didn't get a new job, took the same job at a 'worse' school, or took a position job at any school. The 'unknowns' are mainly coordinators who went on to take a position in the NFL or the same position at a similar school – in many cases it's hard to determine if that's a promotion or a demotion. Nearly all Michigan fans believe Jim Herrmann's trip to the NFL was 'encouraged', but for some coaches a job in the NFL could be their desired career path. I did some Googlestalking to try to parse out which was which, but if I could find no definitive sentiment I just grouped them into a separate category. This coding was a bit tedious and I would welcome anyone who wants to double-check or validate my coding.
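For anyone double-checking the coding, the rules above can be sketched as a small function. This is a hypothetical helper, not the actual spreadsheet logic – in particular, the 'better'/'worse' school judgment was hand-coded in the original, so here it's just passed in as a tier number:

```python
# Hypothetical sketch of the coordinator-coding scheme described above.
# Lower tier number = 'better' school (hand-coded in the original analysis).

def code_coordinator(left_job, new_job_type=None,
                     new_school_tier=None, old_school_tier=None):
    """Classify a coordinator's offseason move.

    new_job_type: 'coordinator', 'head_coach', 'position', 'nfl', or None
    """
    if not left_job:
        return "stayed"
    if new_job_type == "head_coach":
        return "promoted"               # HC job anywhere counts as a promotion
    if new_job_type == "coordinator":
        if new_school_tier < old_school_tier:
            return "promoted"           # same job at a 'better' school
        if new_school_tier > old_school_tier:
            return "fired/demoted"      # same job at a 'worse' school
        return "unknown"                # lateral move: hard to call
    if new_job_type == "position":
        return "fired/demoted"          # position job anywhere = demotion
    if new_job_type == "nfl":
        return "unknown"                # career goal or encouraged exit?
    return "fired/demoted"              # no new job found
```

Usage would look like `code_coordinator(True, "coordinator", 1, 3)`, which returns `"promoted"` for a move up to a tier-1 school.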

An issue confounding the ability to test the effect of coordinator changes is that they often come with a head coaching change as well. If a head coach comes in with a whole new staff and the FEI metrics improve, is that because of the head coach or the coordinators? So I also separated out coordinators that came on board as part of a new coaching regime (e.g. Malzahn) and those that came on with an existing head coach (e.g. GERG).

Finally, we know that there's a ton of other factors that can affect unit performance besides a coordinator change, the primary one being the players on the field. Did the best players graduate, or did an inexperienced group get more seasoning? Was recruiting on the upswing or the downswing? Those factors are not addressed here (see, I told you I wasn't the Mathlete).


Actual Results: Aggregated

So, our dependent variable is the change in the team's FEI ranking for the unit from the last season under the old coordinator to the first season under the new coordinator. We have 359 records (120 FBS teams by 3 years, minus Western Kentucky in 2008, their first year as an FBS school). Here's a quick look at the aggregated results:
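Since FEI rank 1 is best, the dependent variable is simply the old rank minus the new rank, so a positive value means the unit climbed the rankings. A minimal sketch (helper names are mine, not from the original workbook):

```python
from statistics import mean

def rank_change(old_rank, new_rank):
    # FEI rank 1 is best, so improvement = old_rank - new_rank
    # (positive = unit moved up the rankings)
    return old_rank - new_rank

def mean_rank_change(records):
    # records: (old_rank, new_rank) pairs for one group of team-seasons,
    # e.g. all defenses whose coordinator was fired
    return mean(rank_change(old, new) for old, new in records)

# e.g. an offense going from 117th to 39th is a 78-spot jump:
assert rank_change(117, 39) == 78
```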


| Change in FEI Rank | No coordinator change | Coordinator change |
| --- | --- | --- |
| Offense | +1.3 | −2.3 |
| Defense | ≈0 | ≈0 |
In the aggregate, across all types of coordinator changes, offenses tend to improve by 1.3 spots in FEI rank if their coordinator doesn't change, and decline by 2.3 ranking spots if the coordinator does change. For defenses there is almost no effect, and no differentiation between units with a coordinator change vs. those without a coordinator change. This is almost the essence of a null relationship.


Actual Results: Broken Out by Type of Coordinator Change

But coordinator changes should make a difference, right? The only way to tease that out is to break out those coordinator changes by whether the coordinator was part of an all new staff, whether he was pushed out, or whether he was promoted to a better job elsewhere. If we do that, we start to see some more sensible relationships:

We're not talking huge sample sizes here (the smallest is 28 defenses where the old coordinator was promoted). But what we see makes some sense, at least for the defense. On average, if a defensive coordinator was fired, those teams' units improve in FEI rank by 11.2 spots the following year. In contrast, if the old defensive coordinator was promoted (because he was deemed to be good at his job), the defense declines by 4.3 spots in FEI rank.

What about the offense? Here, it's the same pattern but not quite as extreme. If the offensive coordinator is promoted, the team declines in FEI rank by 8.0 spots. If he's fired, the team improves slightly by 2.1 spots. [I did a quick check to see if performance for the offense improves markedly in year 2 after a coordinator change, and couldn't find any compelling evidence that it does.] Note that all-new staffs seem to be more harmful to the defense than to the offense. And, as we might expect, teams have a hard time replacing coordinators who were good enough to get promoted to a better opportunity.


Actual Results: Broken Out by Previous Season's Rank

But we have to be conscious of regression to the mean. Coordinators tend to be fired from poor-performing units. The terrible performance of that unit may be due to the coordinator, but it's also likely due to some outside factors, including luck. Just as it's hard to be the #1 team year in and year out, it's hard to be the #120 team year in and year out. So a team that finishes terribly in one year is likely to improve its performance the next year, even if the coordinator remains (if that puts me on record as saying that if GERG stays Michigan will finish better than 104th out of 120 teams in Defensive FEI in 2011, so be it).

The question is, once we control for the performance of the units, does a coordinator change seem as beneficial? In other words, if team A has a terrible offense and doesn't fire its coordinator, while team B has a terrible offense and does fire its coordinator, does team B tend to improve more than team A? With a fairly sparse data set we can't get too specific with our controls, so I simply sliced the data into thirds based on their previous year's rank for the offensive or defensive unit.
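The slicing itself is simple. As a sketch (the 40/80 boundaries match the top-40 / middle / bottom-40 split used in the charts; the helper name is mine):

```python
def rank_tercile(prev_rank):
    """Bucket a unit by its previous-year FEI rank (1 = best of 120 teams)."""
    if prev_rank <= 40:
        return "top 40"
    if prev_rank <= 80:
        return "middle 40"
    return "bottom 40"

# Michigan's 2010 defense, 104th in Defensive FEI, lands in the bottom third:
assert rank_tercile(104) == "bottom 40"
```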

If we look at the top 40 teams in FEI, all teams decline from one year to the next. This is the essence of regression to the mean (and competitive parity). We have to be very careful here because sample sizes are small (not many coordinators change if the team is performing relatively well, though tell that to Georgia's Willie Martinez). Among top 40 teams in FEI, the best thing is continuity of staff – and even there, teams are likely to lose 18 or 11 points of rank.

Now we see some more intuitive results. Presumably coaches are less apt to tolerate a bad performance if they don't think their coordinator is the right one for the job. Among teams in the middle of FEI performance, firing a coordinator tends to lead to a slight improvement in FEI rank over and above what is seen if the staff remains the same.  Defenses are up 6.7 spots in FEI rank vs. a 0.3 point decline if the staff doesn't change, and offenses are up slightly (+3.0 vs. +1.3 for same staff). This does not mean that all teams would be better off firing their coordinators -- just that there is some juice to the conventional wisdom that a canned coordinator can be replaced with a better alternative -- at least for the defensive side of the ball.

Note that replacing a promoted defensive coordinator has slightly more benefit than replacing a fired coordinator – not sure what to make of that, except that again sample sizes are small (8 teams over 3 years fell into this category – the bar with the 10.9).

Finally, what if your unit stinks, and ranks in the bottom 40 in FEI?

We have really small sample sizes for the "old coordinator promoted" groups, as you'd expect (why hire a coordinator from a bad team?), so interpret those results with caution. The main areas to examine are the far right and far left of the chart. First, the offense. If an offense is terrible but keeps its coordinator, it tends to improve 23 spots in the FEI rankings. If a terrible offense replaces its coordinator, the FEI rank only goes up 9 spots. Contrast that to the defense; keeping the DC of a terrible unit leads to a 12 spot increase in FEI rank, on average, while firing him leads to a 24 spot gain. You might argue that it takes longer for changes in offensive coordinators to show a benefit, but in the few cases where a new offensive coordinator lasts to year 2, there's no clear evidence for that claim.

Fun fact – note that replacing a staff entirely leads to about the same change as keeping the same staff. Two possible interpretations – any staff would do better after a terrible year, or athletic directors are incredibly foresighted and know which underperforming staffs to fire and which to retain. In any case, here we're seeing the conventional wisdom hold true for defenses (a canned coordinator's replacement outperforms his predecessor), while not so much for the offense. You could argue that the offenses among canned coordinators would have done worse if the coordinator had stayed, but given the improvements for the other types of offensive coordinator changes it's not the likeliest explanation.


Double Bar Charts! What Do They All Mean?

The gist of these charts is that firing an offensive coordinator seems to have no clear positive effect on the team's performance, either in year 1 or year 2. In fact, teams seem to be better off if they keep their offensive coordinator. It appears that swapping out an offensive coordinator could be an indicator of something seriously wrong with the program that a mere coordinator change can't fix (see Tommy Tuberville and Tony Franklin at Auburn).

On defense the story is different. Getting rid of an underperforming coordinator appears to pay clear dividends. These benefits aren't monumental – about 7-12 or so extra spots of FEI ranking over keeping the coordinator, but they're consistent.


What Does This Mean for Michigan?

Who knows? This analysis is aggregated, and mileage will seriously vary based on the team and the coaching staffs. On average, given Michigan's horrible defensive FEI in 2010, getting rid of GERG and replacing him with another coordinator would lead to a jump of about 24 points in FEI ranking, while leaving him as DC would lead to a jump of about 12 points in FEI ranking. Whatever happens at Michigan would vary from this weak 'prediction', but I put it out there in any case. The most optimistic scenario, as we'll see below, would be a jump of 75 spots in the FEI.

Bringing in a whole new coach is a different animal – and again shows the danger of looking at means instead of specific situations. But based on the averages we see, a new coach would lead to a 34 spot drop in offensive FEI rankings and a 14.5 spot bump in defensive FEI rankings, while keeping Rodriguez and canning GERG would lead to an 18 spot drop in offensive FEI rankings and a 24 spot bump in defensive FEI rank. Again, this is a very weak prediction based on aggregated data, and is not the main purpose of this diary. The purpose was to see if changing a coordinator is a cure-all, a band-aid, or an empty act of desperation. It appears that for offenses, a coordinator change is at best a band-aid, while for defenses it may be more of a cure.

If you're curious, here are the top 3 best and worst coordinator cannings where the head coach DID NOT change, based on changes in performance year-over-year, from the coaching offseasons after the 2007, 2008, and 2009 seasons:


Best Firings

Offense:

| Rank | Team | Offseason of change | Change in FEI Rank | Relevant Parties |
| --- | --- | --- | --- | --- |
| 1st | Arkansas State | — | +78 (from 117th to 39th) | Doug Ruse out, Hugh Freeze in |
| 2nd | Oklahoma State | — | +47 (from 61st to 14th) | Gunter Brewer demoted, Dana Holgorsen in |
| 3rd (tie) | — | — | +41 (from 53rd to 12th) | Chris Klenakis out, Chris Ault takes more control (possibly not a 'firing') |
| 3rd (tie) | — | — | +41 (from 84th to 43rd) | Chad Morris in as co-coordinator, Herb Hand demoted to co-coordinator |

Defense:

| Rank | Team | Offseason of change | Change in FEI Rank | Relevant Parties |
| --- | --- | --- | --- | --- |
| — | Florida International | — | +74 (from 111th to 37th) | Phil Galiano out, Geoff Collins in |
| — | — | — | +74 (from 96th to 22nd) | Buh/Lynn out, Vic Fangio in |
| — | Texas A&M | — | +73 (from 77th to 4th) | Joe Kines out, Tim DeRuyter in |

Worst Firings

Offense:

| Rank | Team | Offseason of change | Change in FEI Rank | Relevant Parties |
| --- | --- | --- | --- | --- |
| — | Auburn | — | −76 (from 24th to 100th) | Al Borges out, Tony Franklin in |
| — | Colorado State | — | −47 (from 70th to 117th) | Greg Peterson out, Pat Meyer in |
| — | — | — | −46 (from 19th to 65th) | Jim Michalczik out, Frank Cignetti in |

Defense:

| Rank | Team | Offseason of change | Change in FEI Rank | Relevant Parties |
| --- | --- | --- | --- | --- |
| — | — | — | −54 (from 58th to 112th) | Kent Baer out, Ed Donatell in |
| — | — | — | −47 (from 36th to 83rd) | Bronco Mendenhall out, Jaime Hill in |
| — | — | — | −37 (from 27th to 64th) | Willie Martinez out, Todd Grantham in |

For those curious, Michigan replacing Scott Shafer with GERG ranks as the 6th-worst firing of a sitting DC, looking strictly at one-year changes in unit performance.


Final Notes

This analysis is far from perfect, and I welcome any and all feedback. I am somewhat concerned that the 'best' firings all seem to be from the 2009-2010 season. Not sure if that's a function of using FEI data from before the end of the season, if head coaches are paying more attention to coordinator fit, if there are flaws in my coding, or if it's just a fluke.



December 5th, 2010 at 4:38 PM ^

I think something to take into account is just how much talent and timing can impact the degree of improvement/decline you see in such a short sample size (one year to the next).  Just in looking at Michigan's recent history, we've had two defensive coordinators walk into prime positions the first year they got promoted to the job. 

Herrmann inherited Charles Woodson and had one of the best defenses in college football history.  Herrmann won the Broyles Award and received universal praise.  A year later they lost Woodson and everyone thought he was a knucklehead as McNabb and the great Jarious Jackson torched the U-M defense early in the year.

Ron English was on everybody's short list for head coaching jobs after being handed the reins of a defense with three seniors who would go on to be NFL Pro Bowlers and 9 regulars who would go on to get drafted.  The defense was awesome and English was the man.  Fast forward to App. State and Oregon (after a large chunk of that NFL talent left campus) and English was an idiot.  He then signed on with Steve Kragthorpe's sinking ship at Louisville and in one year went from interviewing for the Arkansas job to getting the EMU job.

Undefeated dre…

December 5th, 2010 at 5:01 PM ^

It's just a question of quantifying it -- maybe using average recruiting ratings? Though with fairly small samples, we may be asking more of the data than it can provide.

I should add that assuming recruits/player quality are random noise on the offensive and defensive sides of the ball, we're still left with wondering why changes seem to 'work' for defenses but not for offenses.

Zone Left

December 5th, 2010 at 4:45 PM ^

Awesome diary.  Seriously, great work.  Can this get bumped to the front page?

Nothing too surprising.  Sticking to Michigan's situation, assuming the defensive staff is replaced, it makes sense that a change for a bad defense would help.  Presumably, a team that is that bad should regress to the mean some without help and that a new staff might be able to breathe some confidence into the younger players.


December 5th, 2010 at 4:56 PM ^

The low sample size and the relatively strong impact of outliers (defenses that improve by 74 levels in rank) make this a difficult analysis.  Did you happen to run any statistical tests to determine if the average changes in rank were, in fact, significant, or were within the realm of "statistical noise"?

Undefeated dre…

December 5th, 2010 at 5:18 PM ^

There's not a lot that can be done about it. This is the population of data that we have. This also is an analysis that hinges on outliers, so I'm not sure I want to throw them out. Finally, I feel a little weird treating the ranks as means for a t-test, but... at the overall level, the difference in mean FEI rank change for offensive coordinator firings vs. same coaches is not significant. The mean FEI rank change for defensive coordinator firings vs. same coaches is significant (barely, at 90%) for the overall data.

Once we start slicing the data into thirds, it's harder to get significance. If we look at the bottom 40 group, the difference in mean FEI rank change for firings vs. same coaches is significant at 90% confidence, with the implication that changing offensive coordinators when your offense stinks is NOT the right course of action. For defenses, the difference in mean FEI rank change for firings vs. same coaches doesn't meet 90% significance, but it does meet 80% significance, with the implication that changing coordinators when your defense stinks is supported.

In any case, this analysis was meant to be more descriptive than anything, and hopefully something we can add to over the years as we get more seasons of data.
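For anyone who wants to rerun the significance check, here's a minimal stdlib sketch of a two-sample t statistic (Welch's unequal-variance version; the original post doesn't specify which variant was used, and a fuller analysis would use scipy.stats.ttest_ind with equal_var=False to get a proper p-value):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic for lists of FEI rank changes.

    E.g. a = rank changes for fired-coordinator units,
         b = rank changes for same-staff units.
    """
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / sqrt(variance(a) / na + variance(b) / nb)

# Under a rough normal approximation, |t| > 1.645 corresponds to two-sided
# significance at the 90% level; a proper test uses the t distribution with
# Welch-Satterthwaite degrees of freedom.
```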


December 6th, 2010 at 11:07 AM ^

Thanks for the depth of the analysis - I appreciate the extra mile!

Unless I'm reading you incorrectly, there are a couple of things that stand out to me:

1. A team with a poor defense (bottom 40 FEI) gets, on average, a bump of 24 places in rank by firing its defensive coordinator (but keeping the head coach).  If that were to happen to Michigan next year, that would place Michigan's defense at 80 (104 this year).  That would still place it right at the margin of the bottom 40, and by the underlying logic of what you seem to suggest here (fire GR to maximize defensive improvement), Michigan should again fire its defensive coordinator next year, whoever it may be, and "expect" to improve its defensive rank by another 24 places.

2. A team with a poor defense (bottom 40 FEI) gets, on average, a larger bump in rank (27 places v. 24 places) by having its defensive coordinator promoted (hired away to a "better" school or becoming a head coach, etc.).  That seems counter-intuitive.  Again, confounding correlation with causation here, that implies Michigan would do better by convincing someone to hire away GR, or by promoting GR to a higher position, rather than simply firing him.  (Granted, the difference between "promoting" v. "firing" is probably not statistically significant.)

Undefeated dre…

December 6th, 2010 at 11:47 AM ^

It seems a little disingenuous to post first about statistical confidence and then to take a number that is explicitly stated as based on a tiny sample size (the promoted coordinators in the bottom 40 group) and build an argument.

The 'cause' of the coordinator change is either with the head coach or with the AD. In a world of perfect information and perfect foresight, the only coaches/coordinators who got canned would be the ones who would fail to perform at an expected level of performance given the players, facilities, competition, etc. Of course no one has perfect information or perfect foresight.

Another way to look at it is this -- if a coordinator is so bad as to be can-worthy (and is canned), we should expect performance to improve more than if a coordinator is not deemed can-worthy (and is not canned). Why? Because the canned coordinator was deemed as not doing as well as he could given the circumstances. The interesting thing, to me, is that we see this argument more or less hold for defensive coordinators, but not for offensive coordinators. This suggests, possibly, that there is more guesswork (or desperation?) involved in the decision to fire an offensive coordinator than a defensive one.


December 6th, 2010 at 12:00 PM ^

I guess another way to look at it is to see if a change in defensive coordinator results in a better improvement than the average.  Thus, assuming GR is fired, if his replacement only improves the defense by ~24 spots in FEI defensive ranking, that really isn't "better than average" improvement.  If that does happen, one could argue that Michigan should fire its defensive coordinator again next year.

And for the firing v. promoting, I would argue that the appropriate variable is simply whether there is a change in defensive coordinator, regardless of whether the existing coordinator is "fired" or "promoted."

Finally, while I appreciate the care you take with your words, most people on this board will read your OP and conflate correlation with causation, as in "all we have to do is fire GR and our defense will get better and all will be well next year..."

Undefeated dre…

December 6th, 2010 at 12:25 PM ^

I do think there's a difference, though, between fired and promoted. A coordinator is promoted b/c he's perceived (by the football community) to be doing a 'good' job, so it seems likely that an equivalent replacement would be difficult to find, which would lead to a decline in performance for that unit. With a firing, on the other hand, the coordinator is thought to be doing a 'bad' job, so we should expect a bump in performance if he is replaced.


December 6th, 2010 at 12:38 AM ^

At this point the defense is so bad we cannot test the hypothesis that a new DC will worsen its standing. There is just no further down to go. However, if there is any "there" there (i.e., any conclusion that can be reached from a small data pool with many uncontrolled variables), it appears we should replace Gerg and expect a modest improvement over what occurs with improved player experience alone. Of course this jibes with what I believe so it is undoubtedly FACT. Brilliant sir, as long as we agree.


December 6th, 2010 at 9:06 AM ^

We should hire Willie Martinez, formerly of Georgia, now a DB coach at Oklahoma?? The only reason I say this is because when someone brought up replacement DCs for Gerg I found his résumé to look very promising and put him in. Probably wouldn't be too hard to sway him from being a DB coach.


December 6th, 2010 at 9:51 AM ^

I believe this evaluation should have an impact in the equation as well. It appears to be of opinion that some coaches can relate better to the college program than the pros. Whether it is because of their relationship with the students, coaching styles, etc...., it seems that some coaches can't make the switch, no matter what direction it is. And it doesn't seem to matter what sport you refer to.


December 6th, 2010 at 10:33 AM ^

Am I understanding correctly that you crunched the numbers by individually calculating the results for every coordinator fired? And are these numbers for 2007-2010? That is a daunting amount of work.

Undefeated dre…

December 6th, 2010 at 10:43 AM ^

The FEI data was the easy part, since you can sort and match by school name, though for some reason Football Outsiders changes school name nomenclature just to bug you (e.g. Louisiana-Monroe vs. LA-Monroe, or something like that).

The harder part was the coordinators. I took the Rivals site info about coaching changes after the 2007, 2008, and 2009 seasons, and for each FBS school made a code for each coordinator -- stayed the same, left as part of a head coaching change, was seemingly 'promoted', was seemingly fired/demoted, or couldn't be categorized. The stayed-the-same codes were easy, as were the head coaching purge changes. The pain was, for all the other coordinators, trying to assess if they were promoted to a better opportunity vs. demoted/fired. Some of it is educated guesswork, which I think did OK -- though I'd love to have a second or third set of eyes.

I then created a data set that had as its units of analysis/rows the team and offseason (so Michigan would appear as Michigan 2007, Michigan 2008, Michigan 2009), and then had as columns of data the FEI numbers from the previous season, from the season after the change, and for the second season after the change (to try to assess if changes in offensive coordinators took more time to sink in -- and I couldn't find compelling evidence of it). As for the total amount of work, it was some, but I have to think it pales in comparison to what others have put into their diaries.

Thanks for reading!