2012 Opponents' Returning Starters: Why the Schedule Isn't as Tough as it Appears

Submitted by hart20 on January 10th, 2012 at 1:47 PM

Before we dive into the details, here are some things you should know so you can better understand what's below:

1. *: When you see an asterisk by a number, it means that I included in that number at least 1 Jr whom I project to leave or who has said he will leave. When you see a * by a name, it means that the player is a Jr projected to leave or already leaving.

2. 17 returning starters is the number you want if you want at least the same number of wins as the previous year, so the most starters you'd want to lose is 5. Once more than 5 starters leave, you expect to lose more games than the previous year. (Note: that number is from memory; I could not find the diary with that number, so if someone could link to it in the comments, that would be great.)

3. I used the depth chart from each team’s Rivals site and the NFL draft projections from http://www.cbssports.com/nfl/draft/prospectrankings. If a player was around the top 3 at his position, I projected him to leave unless, obviously, he’s already said that he’s coming back.

 

Now for a chart; keep in mind that the numbers are in terms of starters lost. Remember, losing more than 5 total starters is not a good thing. The chart breaks it down into starters lost on offense, on defense, and in total.

  Offense Defense Total 
Alabama 5* 8* 13*
Air Force 8 9 17
UMass 5 3 8
Notre Dame 4 6 10
Purdue 3 4 7
Illinois 4 4* 8*
MSU 6* 2* 8*
Nebraska 2 4 6
Minnesota 7 6 13
Northwestern 6 3 9
Iowa 4* 6 10*
Ohio State 5 0 5

 

Continue on for a more in-depth look at each team: starters lost by position, a list of each starter lost, some notes on backups, notes on new or departing coaches, and other general observations. After the draft, I will break down each team in terms of the percentage of yards, TDs, tackles, interceptions, forced fumbles, etc. lost with each starter.

 

**If I didn't include a departing starter or a leaving/new coach, or if someone I've included has said he's coming back to school, please let me know so that I can fix it.**

 

 

Alabama

Offense

5 Starters lost: 1 RB*, 2 WR, 1 TE, 1 C

*Trent Richardson, RB

Darius Hanks, WR

Marquis Maze, WR

Brad Smelley, TE

William Vlachos, C

Quick Notes: The replacement TE will be a true sophomore. Richardson is a top 5 pick and was a Heisman candidate; he'll be tough to replace.

Defense

8 Starters lost: 1 NT, 3 LB*, 2 CB*, 2 S*

Josh Chapman, NT

Jerrell Harris, LB

Courtney Upshaw, LB

*Dont’a Hightower, LB

*Dre Kirkpatrick, CB

DeQuan Menzie, CB

Mark Barron, S

*Robert Lester, S

Quick Notes: The talent Alabama will be losing is ridiculous. Everyone they’re losing will be playing in the NFL. At least half of them are ranked in the top 3 at their positions. Yeah, Alabama will replace that talent, but there is no way that the talent coming in is greater than the talent going out. 2 possible LB replacements will be JRs. The rest of the possible replacement LBs will be true sophomores.

Coaching

Offensive Coordinator is leaving.

Total

13 starters lost.

 

Air Force

Offense

8 Starters lost: 1 QB, 1 RB, 2 WR, 1 TE, 1 C, 1 RG, 1 RT

Tim Jefferson, QB

Asher Clark, RB

Jonathan Warzeka, WR

Zack Kauth, WR

Joshua Freeman, TE

Jeffrey Benson, C

A.J. Wallerstein, RG

Kevin Whitt, RT

Quick Notes: The backup QB is graduating too.

Defense

9 Starters lost: 2 DE, 1 NT, 3 LB, 2 CB, 1 S

Zach Payne, DE

Harry Kehs, DE

Ryan Gardner, NT

Jordan Waiwaiole, LB

Brady Amack, LB

Patrick Hennessey, LB

Anthony Wright, CB

Josh Hall, CB

Jon Davis, S

Coaching

No Changes. (Note: These guys are going on to do something greater than playing football, a thank you to all of them and to everyone else serving.)

Total

17 starters lost.

 

UMass

Offense

5 Starters lost: 1 RB, 3 WR, 1 TE

Jonathan Hernandez, RB

Tom Gilson, WR

Julian Talley, WR

Jesse Julmiste, WR

Emil Igwenagu, TE

Defense

3 Starters lost: 1 DT, 2 LB

James Gilchrist, DT

Tyler Holmes, LB

Shane Vivieros, LB

Coaching

New head coach (former ND OC). It's also their first year in the FBS.

Total

8 starters lost.

 

Notre Dame

Offense

4 starters lost: 1 WR, 1 OT, 1 OG, 1 C

Michael Floyd, WR

Taylor Dever, OT

Trevor Robinson, OG

Mike Golic Jr., C

Quick Notes: You all know about the talent of Michael Floyd, and no, I’m not talking about drunk driving. Losing him is a huge, huge, huge loss for ND, especially with the QB questions that they have. It looks like the replacement OT and OG could be true sophomores.

Defense

6 starters lost: 1 DE, 1 LB, 2 CB, 2 S

Ethan Johnson, DE

Darius Fleming, LB

Gary Gray, CB

Robert Blanton, CB

Harrison Smith, S

Jamoris Slaughter, S

Quick Notes: Manti Te’o is returning. That’s huge for ND.

Coaching

New Offensive Coordinator, RB coach, OL coach, and QB coach. Also Brian Kelly is still there.

Total

10 starters lost.

 

Purdue

Offense

3 Starters lost: 1 WR, 1 OT, 1 OG

Justin Siller, WR

Dennis Kelly, OT

Nick Mondek, OG

Defense

4 Starters lost: 1 DE, 1 LB, 2 S

Gerald Gooden, DE

Joe Holland, LB

Albert Evans, S

Logan Link, S

Coaching

No Changes

Total

7 starters lost.

 

Illinois

Offense

4 Starters lost: 1 RB, 1 WR, 1 OT, 1 OG

Jason Ford, RB

A.J. Jenkins, WR

Jeff Allen, OT

Jack Cornell, OG

Quick Notes: A.J. Jenkins is a pretty big loss for Illinois.

Defense

4 Starters lost: 1 DE*, 2 LB, 1 CB

*Whitney Mercilus, DE

Trulon Henry, LB

Ian Thomas, LB

Tavon Wilson, CB

Quick Notes: Mercilus is the #3 DE; if he leaves, which is likely, it'll be a huge loss for Illinois.

Coaching

You know the deal with Zook: he's gone, so there'll be a new HC and a new coaching staff.

Total

8 starters lost.

 

MSU

Offense

6 starters lost: 1 QB, 2 WR, 1 OG, 1 FB, 1 RB*

Kirk Cousins, QB

B.J. Cunningham, WR

Keshawn Martin, WR

Joel Foreman, OG

Todd Anderson, FB

*Edwin Baker, RB

Quick Notes: The replacement QB will be a Jr or Sr. B.J. Cunningham is a beast; losing him really hurts. His replacement will be Arnett, provided the NCAA grants a waiver, or a Jr or a few sophomores. Losing Baker is another hit to MSU's offense.

Defense

2 starters lost: 1 DT*, 1 S

*Jerel Worthy, DT

Trenton Robinson, S

Quick Notes: Worthy's backup is graduating too. Robinson's replacement will be a Jr or sophomore. The defense looks to be much the same as it was last year.

Coaching

As of the time of this writing, rumors of Narduzzi to Texas A&M continue to run rampant. Losing him would be huge.

Total

8 starters lost.

 

Nebraska

Offense

2 starters lost: 1 OT, 1 C

J. Hardrick, OT

Mike Caputo, C

Quick Notes: Offense looks to be as potent as you can be with a QB who throws like a girl.

Defense

4 starters lost: 1 DT, 1 LB, 1 CB, 1 S

Jared Crick, DT

Lavonte David, LB

Alfonzo Dennard, CB

Austin Cassidy, S

Quick Notes: Nebraska loses their top 3 players on D. Tough to replace losses like that.

Coaching

New DL coach.

Total

6 starters lost.

 

Minnesota

Offense

7 starters lost: 1 RB, 1 WR, 1 FB, 1 TE, 2 OG, 1 C

Duane Bennett, RB

Da’Jon McKnight, WR

Eric Lair, FB

Collin McGarry, TE

Chris Bunders, OG

Ryan Orton, OG

Ryan Wynn, C

Quick Notes: Ouch.

Defense

6 starters lost: 2 DT, 1 LB, 1 CB, 2 S

Anthony Jacobs, DT

Brandon Kirksey, DT

Gary Tinsley, LB

Troy Stoudermire, CB

Shady Salamon, S

Kim Royston, S

Quick Notes: Ouch again. Minnesota looks to stay Minnesota.

Coaching

No changes.

Total

13 starters lost.

 

Northwestern

Offense

6 starters lost: 1 QB, 2 RB, 1 WR, 1 OT, 1 OG

Dan Persa, QB

Jacob Schmidt, RB

Drake Dunsmore, RB

Jeremy Ebert, WR

Al Netter, OT

Ben Burkett, OG

Quick Notes: Losing Persa is pretty big but Colter filled in pretty well for him this year.

Defense

3 starters lost: 1 DT, 1 CB, 1 S

Jack DiNardo, DT

Jeravin Matthews, CB

Brian Peters, S

Coaching

No changes.

Total

9 starters lost.

 

Iowa

Offense

4 starters lost: 1 WR, 2 OT*, 1 OG

Marvin McNutt, WR

*Riley Reiff, OT

Markus Zusevics, OT

Adam Gettis, OG

Quick Notes: Losing McNutt is pretty big for Iowa.

Defense

6 starters lost: 1 DE, 2 DT, 1 LB, 1 CB, 1 S

Broderick Binns, DE

Mike Daniels, DT

Thomas Nardo, DT

Tyler Nielsen, LB

Shaun Prater, CB

Jordan Bernstine, S

Quick Notes: Defense looks to worsen with the loss of 6 starters and the DC.

Coaching

New DL coach. New DC.

Total

10 starters lost.

 

Ohio St.

Offense

5 starters lost: 1 RB, 1 WR, 2 OT, 1 C

Dan Herron, RB

DeVier Posey, WR

Mike Adams, OT

J.B. Shugarts, OT

Michael Brewster, C

Quick Notes: New offense, new schemes, new plays, etc. Remember the offense without Posey and Herron? It couldn't have scored against a middle school football team. Remember the offense with Herron back but Posey still out? Not much better. Now they'll have a new offense with 3 new starters on the OL and without their 2 biggest playmakers on offense.

Defense

0 starters lost.

Quick Notes: The defense that we owned this year looks to be the same next year.

Coaching

New HC, mostly new coaching staff.

Total

5 starters lost.

 

Quantifying Terrible and Other Stuff

Submitted by Undefeated dre… on August 17th, 2011 at 9:48 PM

[Ed-M: Wow! Bumped.]

Over the past 3 years across all of FBS, no defensive unit underperformed expectations more than Michigan's 2010 squad.

Your world is not rocked. I understand; the statement certainly feels true. But is it true? How about these? For the past 3-year period (2008-2010)…

  • No team's combined offensive and defensive units underperformed expectations more than UCLA's
  • No team's combined offensive and defensive units outperformed expectations more than Navy's
  • Besides Navy, the only teams whose offensive and defensive units are each in the top 10 in exceeding expectations are Boise State and TCU
  • Iowa is the best B1G team when it comes to outperforming expectations
  • Once we account for recruiting rankings, replacing a coordinator – either for offense or defense – has NO measurable, systematic impact on offensive or defensive performance in the following year
  • Firing a head coach tends to lead to a team doing worse than expected, defensively, in the following year and has no effect on the offense's performance

Be warned, this is another long diary. Instead of tl;dr'ing, feel free to skip to the closing two sections.

Background

This post is a follow-up to "Canning a Coordinator, Revisited". Many thanks to the comments on that post, including those asking about how recruiting affects the models. The main data set uses FEI rankings, and changes in rankings, from 2007 through 2010. To that list I have added information on coach and coordinator changes (fired, promoted, etc.) and returning starters (courtesy Phil Steele). More detail on the data is available in the Canning a Coordinator diary.

A HUGE thank you to UpUpDownDown from Black Heart Gold Pants, whom you may recall from his opus on player development. Because I don't know PERL from Bruce Pearl, he was kind enough to send me the Rivals rankings he used for his post. Anything that's wrong in this diary is my fault; UUDD just helped a great deal by providing some data.

The Framework

It's probably impossible to read MGoBlog and not be aware of regression to the mean. The idea is that it's very hard to be extremely bad year over year (with the exception of Michigan's turnover margin <wince>), or extremely good year over year – there's a tendency to move toward the middle.

In the previous diary I put together a model to predict the change in a team's FEI ranking from one year to the next. The inputs were simple: the team's prior FEI ranking, the number of returning starters, and some information about coach/coordinator changes. The team's previous FEI ranking had a big influence – roughly speaking, a team was predicted to slide back 1 spot in the rankings for every 2 spots of rank it held the previous year, and to jump 1 spot for every 2 spots of rank it sat below the top. This is what we mean by regression to the mean.

These models left more things unexplained than explained, but they showed some promise. One missing element was information about recruiting. All else being equal, we'd expect a team with highly rated recruits to perform better than teams with lower rated recruits.

Enter the Rivals recruiting rankings, courtesy of UpUpDownDown. I'm not interested in the debate between Scout, Rivals, ESPN, 24/7, etc. I just want some pretty good metric of recruiting strength, and Rivals recruiting rankings provides that.

I originally was going to look at recruiting rankings specific to offensive or defensive unit. However, many players are classified as the mysterious "Athlete", and they may end up on either offense or defense. Furthermore, players often change positions (Rivals has Ted Ginn going to OSU as a DB, for example). So instead I focused on overall team rankings.

The next question is how far back to look. I tested looking at the past year, past 2 years, … up to the past 5 years. Statistically, the previous 3-year period worked best. And it has the beauty of making sense – the previous three year period should be a good gauge of the team's overall recruiting strength entering the next season (4-year may make even more sense, but it didn't work as well statistically).
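For concreteness, here's a minimal sketch (Python/pandas, with hypothetical column names) of the trailing 3-year star average. Whether the window should include the incoming class or only earlier ones isn't spelled out above, so this version simply averages the three classes through the current year.

```python
# Minimal sketch of the 3-year recruiting window. The data frame and
# its columns ('team', 'year', 'avg_stars') are hypothetical stand-ins
# for the Rivals class ratings.
import pandas as pd

recruits = pd.DataFrame({
    'team':      ['Michigan'] * 4,
    'year':      [2007, 2008, 2009, 2010],
    'avg_stars': [3.6, 3.4, 3.3, 3.6],
})

recruits = recruits.sort_values(['team', 'year'])
# Mean star rating of the three classes through each year, per team.
recruits['stars_3yr'] = (recruits.groupby('team')['avg_stars']
                                 .transform(lambda s: s.rolling(3).mean()))
print(recruits)
```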

Put Some Meat on Those Bones!

We want to predict changes in FEI performance from one year to the next. A positive change in FEI rank is good; it means the team did much better than in the previous year. A negative change is bad. FEI ranks are specific to offense or defense, so we'll look at them separately. Using absolute FEI scores would actually improve our model's predictive accuracy, but absolute FEI scores are very hard to interpret in isolation. So we'll focus on ranks. Here's the story with the offense:

Reminder that R2, or R-squared, is the percent of variation in the data explained by the model. We're well under 50% so it's not like we've uncovered the secret to football analysis. But we have statistically significant results (interpreted by our 'confidence' column) that fit into a sensible and meaningful way of looking at the world.

To interpret the effects:

  • The intercept/default basically means the model wants to subtract 105 points of ranking, all else being equal. So if you started at 1, you'd end up at 106. And if you started at 120, you'd end up at 225 (a difficult feat, given there are only 120 FBS teams). But don't fret – the other variables will correct for the default subtraction.
  • Similar to what we saw in the February diary, there's roughly a 2 for 1 tradeoff between last year's rank and the next year's rank. For every 2 points worse of rank the previous year, the offense is predicted to improve by 1 rank the following year. This is the essence of regression to the mean.
  • Every returning offensive starter adds about 3 points of improved rank the following year.
  • Except the quarterback, who adds a total of 14 points of improved rank (2.9 for being a returning offensive starter, and 10.1 'bonus' points for being a quarterback).
  • Each star in average Rivals rating (for the team over the previous 3 years) boosts the offense's FEI rank by 16 points in the next year. (These effects are wired into the sketch just below.)
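Here is that model as a back-of-the-envelope calculator – a minimal sketch using the rounded coefficients quoted above, so its outputs will land near, but not exactly on, the example predictions that follow (the exact fitted values were in the regression table).

```python
# Rounded coefficients from the bullets above: -105 intercept, +0.5 per
# spot of prior rank, +2.9 per returning starter (a count that includes
# the QB), +10.1 QB bonus, +16 per average recruiting star. A positive
# 'change' is an improvement, i.e. a lower (better) rank number.

def predict_offense_rank(prior_rank, returning_starters, qb_returns, stars_3yr):
    change = (-105.0
              + 0.5 * prior_rank
              + 2.9 * returning_starters
              + (10.1 if qb_returns else 0.0)
              + 16.0 * stars_3yr)
    return prior_rank - change

# Best-case row below: #1 last year, 11 returning starters including
# the QB, 3.7 average stars -> roughly 4th the following year.
print(round(predict_offense_rank(1, 11, True, 3.7)))
```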

That's a little hairy, so let's look at some examples:


The 95th percentile for the 3-year Rivals recruiting stars is 3.65, which we'll round up to 3.7. The 5th percentile is 2.0. So the first row shows a best-case scenario: #1 in FEI rank the previous year, 11 returning starters including the QB, and a highly rated bunch of recruits. Predicted FEI rank the following year is 4th. The second row is a worst-case scenario – 120th in FEI rank the previous year, no returning starters, and a poorly recruited team at 2.0 stars. The predicted FEI rank the following year is 121st. Impossible, because there are only 120 teams, but intuitively valid.

The middle five rows are attempts to show some comparative effects. If the team finished at 60 in FEI, returns 6 starters including the QB, and has an average recruiting-star rating (2.6), it's predicted to finish… at 60. More face validity! Give that team a poorer recruiting profile and the predicted finish drops to 69; give it a sterling set of recruits and it's predicted to jump to 42. Give the team an average recruiting ranking and all starters returning and it'll jump to 45; average recruits and no returning starters and it'll drop to 87.

The last row should look a little familiar – it's Michigan's offense going into 2011. After finishing 2nd in FEI in 2010, with 9 returning starters and a returning QB, Michigan is predicted to finish 16th (taking out Stonum drops the finish to 19th).

What about coordinators or coaches? RichRod to Borges? Spread-ball to Man-ball? In the aggregate, there is NO relationship between coaching and coordinator changes and offensive performance. In other words, across 3 years of data and 120 FBS programs, there's no compelling evidence to say that a coach or coordinator change, as a rule, leads to a better or worse offensive performance. Of course, your mileage may vary in individual situations, and we'll get to that later in the article.

 

And Your Text-Heavy Analysis of the Defense?

For the offense there is no statistical relationship between coaching changes and the change in the team's FEI rank the following year. For the defense there is an effect – but only at the head coaching level. And perhaps counter to intuition, changing a coach leads to a worse than expected defensive performance the following season.

A few notes. While our R-squared is smaller, we still have statistically significant results. For the most part, the impacts/coefficients are similar to the offensive side. Again we see the roughly 2-for-1 relationship between the prior year's FEI rank and the following year's rank. Returning defensive starters are worth as much as returning offensive starters, though no single defensive player is as critical as the quarterback is on offense. One star in average Rivals recruiting rating is worth about 16 points in FEI rank, similar to the offense. This is more face validity for the model.

The big difference between the offensive and defensive models is the head-coach-fired factor. The model predicts, all else being equal, that a team that fires its head coach (and hence its DC) loses about 8 points in FEI rank vs. a team that keeps its head coach. There is no measurable effect when only the DC is fired (this is a change from the model in the previous diary, caused by the addition of the Rivals rankings into the mix).

Once again examples may help explain the model.

A team that finishes first in FEI the previous year, returns 11 starters, and has a great recruiting record is predicted to finish 1st in FEI rank. A team that finishes 120th, returns no starters, fires its coach, and has a poor recruiting record is predicted to finish 125th (again, impossible because there are only 120 teams, but a pretty reasonable prediction for a non-constrained model). A team that finishes at 60 in FEI rank the previous year, returns 6 starters, keeps its coach, and has an average recruiting portfolio is predicted to finish 59th in FEI rank. Change that to a great recruiting record and the prediction jumps to 41; a terrible recruiting record drops it to 69. Return all 11 starters with an average recruiting record and the prediction is a jump to 45th place, return no starters and drop to 77.

Once again the last line is the prediction for Michigan. Regression to the mean, fairly good recruiting, and 9 returning starters peg us for 71st in defensive FEI in 2011. Interestingly, the model predicts that the defense would have been even better if Rodriguez had remained the head coach (predicting a Michigan finish in 63rd position). Again, I repeat that mileage will vary for individual situations – I certainly don't think a GERG-led defense would do better than one with Mattison at the helm. But the fun thing about models is that they make predictions we can test, and perhaps improve upon going forward.

 

Enough About the Rules, What About the Exceptions?

We've covered the last two semi-provocative bullets from the introduction. What about the others? To answer that, we want to look at teams that overperform or underperform their prediction. Meaning, if a team was predicted to finish 40th in FEI but finishes 10th, that team has overperformed its prediction. If a team finishes 80th but was predicted to finish 50th, it has underperformed its prediction.
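In code terms the residual is trivial, but to fix the sign convention used for the rest of the piece, a two-line sketch:

```python
# Overperformance = predicted rank minus actual rank, so positive means
# the team finished better (a lower number) than predicted. The
# program-level tables later just average this residual across seasons.

def over_performance(predicted_rank, actual_rank):
    return predicted_rank - actual_rank

print(over_performance(40, 10))  # +30 -> overperformed its prediction
print(over_performance(50, 80))  # -30 -> underperformed its prediction
```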

So let's look at individual feats (and years) of predictive futility. On offense:

 

Top Underperforming Offensive Years

Team/Year        Predicted Finish  Actual Finish  Change
Texas 2010       33rd              104th          New OC
Cal 2010         39th              108th          New OC
Baylor 2009      39th              105th          None
Auburn 2008      36th              100th          New HC
Washington 2008  40th              98th           New HC

 

Hey, Texas fans really did have a reason to grumble. Cal 'Herrmanned' their OC, while Auburn rolled the Chizik dice and came up MalzahnNewton. Only Baylor kept the staff around after such a crap year. We'll hold off on Washington for a second.

Now, for the defensive side of the ball:

 

Top Underperforming Defensive Years

Team/Year            Predicted Finish  Actual Finish  Change
Michigan 2010        46th              108th          New HC
Florida State 2009   33rd              92nd           New HC
Washington 2008      55th              112th          New HC
Northwestern 2010    49th              100th          None
San Jose State 2009  55th              105th          New HC

 

GERRRRRRRRRRGGGGGGGGG!!!!!!!!!!! Yes, the squad that most underperformed the model's predictions is none other than our fair Michigan's beaver-abetted D from 2010. That's what we call external validity in the model building business! Now, a caveat. As someone may have pointed out a time or two (or three), Michigan had decent recruiting classes, but keeping those players on the field and actually playing for Michigan was much more of a challenge. If we give Michigan an average of about 2.5 stars over the 3 years prior to 2010 instead of the 3.5 Rivals gives them, then GERG's performance isn't the worst in the 3-year period. It's just bad.

Florida State fans can commiserate, as their 2009 team put up an out-of-character defensive stinkbomb that helped usher the end of the Bobby Bowden era. Four of the teams here got rid of their coaches after the season, with only Northwestern holding on to its entire staff.

And wow, does Washington in 2008 look like the worst team imaginable. Both offense and defense underperformed woefully, giving Tyrone Willingham the execrable exacta and bringing in Steve Sarkisian.

How about on the flowers and lollipops side? Offense:

 

Top Overperforming Offensive Years

Team/Year            Predicted Finish  Actual Finish  Note
East Carolina 2010   86th              17th           First year under new HC
Houston 2008         80th              13th           First year under new HC
Arkansas State 2010  101st             34th           OC promoted to HC in 2011

 

East Carolina and Houston were in their first years under a new HC (and yet, in the aggregate, new head coaches have no significant effect on offensive performance). And Hugh Freeze just got promoted to head coach after his job as OC at Arkansas State.

 

Top Overperforming Defensive Years

Team/Year         Predicted Finish  Actual Finish  Note
Navy 2009         95th              34th           2nd year after HC change
Stanford 2010     71st              11th           HC and DC went to NFL
Boise State 2008  73rd              14th           DC went to Tennessee in 2010

 

With the exception of Stanford, the other schools on these top-performer lists are non-BCS schools, which correlates with a lack of Rivals recruiting stars – as a result, they're predicted to do less well. Harbaugh and Fangio took the money and ran (and for all Andrew Luck did for Stanford, the defense was equally outstanding last year).

 

Years Might Be Flukes; What About Trends?

One year is not definitive (except in the case of GERG, natch); in fact, a team that woefully underperformed the previous year could look great just by rebounding the following year.

With only 3 years of data to work with, it's hard to tease out trends. But I did look at which programs, overall, tended to overperform or underperform vs. expectations. Across 2008-2010, these are the teams that overall tended to do better (or worse) than the model predicts.

On offense, the top overperformer is Navy and it's not even close. Over the three year period, they on average outperformed expectations by 53 ranking spots per year. Expectations are low because of the low/nonexistent stars for their recruits, yet Ken Niumatalolo's squad keeps outperforming them.

 

Top Overperforming Programs 2008-2010 (Offense)

Team                  Average Overperformance
Navy                  +53
Houston               +37
North Carolina State  +27
Nevada                +26
Stanford              +24

 

On the flipside, the top offensive underperformer is UCLA, and it's also not even close. Just try to Google Rick Neuheisel and not have it automatically add "hot seat".

 

Top Underperforming Programs 2008-2010 (Offense)

Team              Average Underperformance
UCLA              -45
New Mexico State  -31
Washington State  -30
Wyoming           -29
California        -27

 

As for defense, on the positive side there should be few surprises. Non-BCS schools (with lower expectations borne of lower recruit rankings) dominate with the usual suspects, including two military academies:

 

Top Overperforming Programs 2008-2010 (Defense)

Team            Average Overperformance
Boise State     +33
Air Force       +32
Boston College  +31
TCU             +29
Navy            +29

 

A note on the lone BCS team here: Boston College's Defensive FEI ranks the past 3 years: 4th, 5th, 15th. Wow.

And as for bad defense, do we even need to … GERRRRRRGGGGGGGGG!

 

Top Underperforming Programs 2008-2010 (Defense)

Team              Average Underperformance
Michigan          -37
Kansas            -29
Washington State  -28
UNLV              -25
Fresno State      -24

 

 #1 with a bullet is the Wolverines, pushed along by a variety of factors including stuffed beavers, white hair, horrendous attrition, and injuries. Fresno State is a bit of a surprise given Pat Hill's pedigree.

And if we combine offensive and defensive data? We have this:

 

Top Overperforming Programs 2008-2010 (Combined)

Team         Average Overperformance
Navy         +41
Boise State  +27
TCU          +26
Air Force    +25
Iowa         +22

 

Again, we see smaller schools that don't get great recruits, Rivals-wise, and are run by what we see as great coaches. Iowa is exceptional by being the only BCS team in the top 5. (Note that Iowa also benefits from a horrendous 2007 FEI performance, especially on offense, that made their 2008 numbers exceptional).

And now for the dregs:

 

Top Underperforming Programs 2008-2010 (Combined)

Team              Average Underperformance
UCLA              -30
Washington State  -29
Washington        -23
Wyoming           -22
Kansas            -21

 

UCLA, Washington State, and Washington not only stink but have been a black hole for recruits over the past few years. Wyoming and Kansas, sure, but for the top 3 underperformers to be from the Pac-10?

 

What Does It All Mean?

We don't know exactly why teams outperform or underperform expectations. Reasons for outperformance could include luck, talent identification, talent development, PED-palooza, good coaching beyond just talent development, flawed recruiting rankings, flaws in FEI's system, and of course flaws in the predictive model. Reasons for underperformance include luck, attrition, injuries, bad coaching, flawed recruiting rankings, flaws in FEI's system, and of course flaws in the predictive model.

Still, some things we can conclude:

  • Recruiting matters – on average, improving your class average by half a star will boost your FEI rank by 8 spots, all else being equal.
  • Returning starters matter – 11 returning starters with an average 3-year recruiting ranking of 2.5 are worth about the same as no returning starters with an average 3-year recruiting ranking of 4.5 (arithmetic checked in the snippet after this list).
  • If the choice is between returning a quarterback and 6 other offensive players, or returning 10 offensive players but not the quarterback, take the quarterback.
  • For all the attention paid to coaching and coordinator changes, they have very little short-term impact on the team's fortunes, in the aggregate. Again, mileage will vary in individual situations, but as a rule a team's performance can best be predicted by how it did last year, the number of returning starters, and average recruiting rankings.
  • Ken Niumatalolo, Chris Petersen, Gary Patterson, Troy Calhoun, and Kirk Ferentz are pretty good coaches with pretty good systems in place. The data supports the conventional wisdom.
  • Other pretty good coaches aren't "overperforming" because their performance is on par with what the recruiting record predicts; the coaches above outperform based on recruit and returning-starter expectations.
  • Rick Neuheisel really, really, really should be on a short leash. Barring great turnarounds, expect to see him and Paul Wulff on a Fox Sports West studio show in 2012.
  • While one year performance may be a fluke, heads still roll when teams have a bad year.
  • Greg Robinson, empirically, is a terrible defensive coordinator.
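The first two bullets fall straight out of the model coefficients (~16 rank spots per recruiting star, ~3 per returning starter); here's the arithmetic:

```python
# ~16 rank spots per recruiting star, ~3 per returning starter.
print(16 * 0.5)           # 8.0  -> half a star is worth ~8 spots
print(3 * 11 + 16 * 2.5)  # 73.0 -> 11 starters, 2.5-star classes
print(3 * 0 + 16 * 4.5)   # 72.0 -> 0 starters, 4.5-star classes
```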

 

How About Some Provincialism?

The top schools in the B1G for outperforming expectations are Iowa, Nebraska, and Wisconsin. Whatever their methods, they have been successful at turning 3-star recruits into 5-star players. Over the past three years, the worst B1G team relative to expectations is… Michigan, and that's despite last year's offensive leap. 2008, for a variety of reasons (including Tacopants), was an offensive disaster for Michigan, and 2009 was still below the model's expectations. Minnesota and Illinois round out the B1G bottom 3. Ohio State is right in the middle, mainly because it recruits so well and performs up to those expectations.

Before you weep in your beer: In 2008, Ball State was in the +30's on both offensive and defensive overperformance. Perhaps a fluke, perhaps not, but as the Mathlete has shown, the team did well in Hoke's last year. FEI data only goes back to 2007, so we can't look at previous Ball State seasons. Also note that Auburn's terrible offense in 2008 came after Al Borges was fired at the end of 2007 and a supposed improvement, in the guise of Tony Franklin, came in.

As for San Diego State, in 2009 the offense slightly underperformed and the defense slightly overperformed the model's expectations. But in 2010 the defense beat expectations by 37 ranking spots and the offense by 57 (the 8th best overperformance among the team-seasons analyzed). These are good reasons to be optimistic about this year's Michigan team.

This piece is still a work in progress. I hope it provokes some thought, debate, and relief that we're not in Pullman. As ever, comments/feedback, especially of the constructive variety, are welcome. Go Blue!

The Impact of Returning Starters from 2008-2011

Submitted by JohnnyV123 on June 25th, 2011 at 8:07 AM

This was inspired by the general consensus that the number of returning starters in college football matters, and by a diary by NOLA Blue discussing how Michigan would fare against its 2011-12 opponents based on returning starters. Some of the comments (including my own) criticized looking at pure numbers of returning starters rather than the actual players returning.

It got me thinking whether there was any predictability to be found in pure numbers of returning starters (from now on, RS) – whether a high RS count translated into more wins the next year and a low count into losses.

Using Phil Steele's lists of RS, I looked at the record for every team in a BCS conference plus Notre Dame in 2008-09, then listed how many starters each would return for the 2009-10 season, then added their record for the 2009-10 season, and noted the change in the number of wins between the two seasons. I repeated that for the 2009-10 season going into the 2010-11 season.

One important note is that I had to decide what to do when teams played a different number of games in consecutive seasons. For example, a team plays 13 games in season one and goes 10-3. The next year, in season two, the same team plays 14 games and goes 10-4. Technically, they won the same number of games in both years and the difference in wins is 0, but the team had an extra game to get 10 wins. I decided to handle this by using 0.5's. In this case I would give the team a win change of -0.5 for winning the same number of games but having an extra game to do it in. This also works to the benefit of some teams.
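Here's that bookkeeping as a minimal sketch. The text doesn't spell out every case, so the assumption here is that the half-win nudge applies whenever the game counts differ, in either direction.

```python
# Net win change with a half-win adjustment for unequal schedule
# lengths: an extra game docks the team 0.5, a shorter schedule
# credits it 0.5 (assumed reading of the rule described above).

def net_win_change(wins_y1, games_y1, wins_y2, games_y2):
    change = wins_y2 - wins_y1
    if games_y2 > games_y1:
        change -= 0.5
    elif games_y2 < games_y1:
        change += 0.5
    return change

# The example from the text: 10-3 in 13 games, then 10-4 in 14 games.
print(net_win_change(10, 13, 10, 14))  # -0.5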

Here is what I came up with:

Seasons 2008-09 to 2009-10

Team 2008-09 Record 2009 Returning Starters (* Denotes QB Return) 2009-10 Record Net Win Change
Florida State 9-4 13* 7-6 -2
Boston College 9-5 14 8-5 -0.5
Maryland 8-5 10* 2-10 -5.5
Wake Forest 8-5 14* 5-7 -2.5
Clemson 7-6 15 9-5 +1.5
NC State 6-7 14* 5-7 -0.5
Virginia Tech 10-4 16* 10-3 +0.5
Georgia Tech 9-4 20* 11-3 +1.5
North Carolina 8-5 15* 8-5 0
Miami 7-6 17 9-4 +2
Virginia 5-7 13* 3-9 -2
Duke 4-8 13* 5-7 +1
Cincinnati 11-3 10* 12-1 +1.5
Pittsburgh 9-4 15* 10-3 +1
West Virginia 9-4 12 9-4 0
Rutgers 8-5 15 9-4 +1
Connecticut 8-5 14* 8-5 0
USF 8-5 14* 8-5 0
Louisville 5-7 15 4-8 -1
Syracuse 3-9 14* 4-8 +1
Penn State 11-2 10* 11-2 0
Ohio State 10-3 12* 11-2 +1
Michigan State 9-4 17 6-7 -3
Iowa 9-4 16* 11-2 +2
Northwestern 9-4 14 8-5 -1
Minnesota 7-6 16* 6-7 -1
Wisconsin 7-6 13* 10-3 +3
Illinois 5-7 15* 3-9 -2
Purdue 4-8 14 5-7 +1
Michigan 3-9 16 5-7 +2
Indiana 3-9 16 4-8 +1
Missouri 10-4 10 8-5 -1.5
Nebraska 9-4 14 10-4 +0.5
Kansas 8-5 16* 5-7 -2.5
Colorado 5-7 14* 3-9 -2
Kansas State 5-7 14 6-6 +1
Iowa State 2-10 17* 7-6 +4.5
Texas 12-1 17* 13-1 +0.5
Oklahoma 12-2 15* 8-5 -3.5
Texas Tech 11-2 13 9-4 -2
Oklahoma State 9-4 14* 9-4 0
Baylor 4-8 18* 4-8 0
Texas A&M 4-8 17* 6-7 +1.5
USC 12-1 12 9-4 -3
Oregon 10-3 9* 10-3 0
Oregon State 9-4 12* 8-5 -1
California 9-4 17* 8-5 -1
Arizona 8-5 14 8-5 0
Arizona State 5-7 15 4-8 -1
Stanford 5-7 18* 8-5 +2.5
UCLA 4-8 17* 7-6 +2.5
Washington State 2-11 15* 1-11 -0.5
Washington 0-12 18* 5-7 +5
Florida 13-1 20* 13-1 0
Georgia 10-3 16 8-5 -2
South Carolina 7-6 12 7-6 0
Vanderbilt 7-6 19 2-10 -4.5
Tennessee 5-7 13* 7-6 +1.5
Kentucky 7-6 11* 7-6 0
Alabama 12-2 14 14-0 +2
Mississippi 9-4 17* 9-4 0
LSU 8-5 14* 9-4 +1
Arkansas 5-7 19 8-5 +2.5
Auburn 5-7 17* 8-5 +2.5
Mississippi State 4-8 10* 5-7 +1
Notre Dame 7-6 17* 6-6 -0.5

And here is a table of the number of RS and how many teams won more games, fewer games, or had no change:

Number of RS   No Change   + Wins   - Wins   Total of Teams in Each Group
9 0 0 0 1
9* 1 0 0
10 0 0 1 5
10* 1 2 1
11 0 0 0 1
11* 1 0 0
12 2 0 1 5
12* 0 1 1
13 0 0 1 6
13* 0 3 2
14 1 4 2 15
14* 3 2 3
15 0 2 2 9
15* 1 1 3
16 0 2 1 7
16* 0 2 2
17 0 1 1 10
17* 1 5 2
18 0 0 0 3
18* 1 2 0
19 0 1 1 2
19* 0 0 0
20 0 0 0 2
20* 1 1 0

Here is some info to take away from this.

When I refer to same, more, or less, I am talking about the number of wins between the two seasons.

Overall Win Amount: 13 same (19.69%), 29 more (43.93%), 24 less (36.36%) Total: 66 teams

RS with a QB win difference: 10 same (23.26%), 19 more (44.19%), 14 less (32.56%) Total: 43 teams

RS without a QB win difference: 3 same (13.04%), 10 more (43.48%), 10 less (43.48%) Total: 23 teams

9-13 RS: 5 same (27.78%), 6 more (33.33%), 7 less (38.89%) Total: 18 teams
14-16 RS: 5 same (16.13%), 13 more (41.94%), 13 less (41.94%) Total: 31 teams
17-20 RS: 3 same (17.65%), 10 more (58.82%), 4 less (23.53%) Total: 17 teams
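These tallies are easy to reproduce. A sketch using three rows pulled from the big table above (Michigan, Maryland, Iowa State); the real run would use all 66 teams:

```python
# Bucket teams by returning starters (RS) and count whether the next
# season brought the same, more, or fewer wins.
from collections import Counter

rows = [
    {'rs': 16, 'net': +2.0},   # Michigan
    {'rs': 10, 'net': -5.5},   # Maryland
    {'rs': 17, 'net': +4.5},   # Iowa State
]

def bucket(rs):
    if rs <= 13:
        return '9-13'
    return '14-16' if rs <= 16 else '17-20'

tally = Counter()
for r in rows:
    direction = 'same' if r['net'] == 0 else ('more' if r['net'] > 0 else 'less')
    tally[(bucket(r['rs']), direction)] += 1
print(tally)
```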

I figured teams would be more successful with a returning QB, and that is somewhat supported in these years, with 44.19% of those teams going on to a better record the next season. But the teams without a returning QB were equally likely to be more or less successful, showing that the lack of an experienced QB didn't significantly lessen the chances of improvement.

As the number of RS increased, more teams did improve, but I was surprised to see that not until a team returned 17 starters was it significantly more likely to. At 15 or 16 RS, it still seemed close to 50/50 whether to expect more or fewer wins.


Canning a Coordinator, Revisited

Submitted by Undefeated dre… on February 15th, 2011 at 10:37 PM

"We just need a change in coordinator."

Whether the offending coordinator is Greg Robinson or Greg Davis, the refrain is familiar. There's anecdotal evidence to support the benefits of a coordinator change, most recently with Manny Diaz at Mississippi State or Dana Holgorsen at Oklahoma State. But I wanted to attempt a more systematic look at the effects of coordinator changes. This diary is an update of a post in December and incorporates responses to many thoughtful comments to that earlier version.

This is a loooong diary. There are a few steps to get to the end, and I know many tl;dr'ers are impatient. Putting the conclusions first seems a bit like putting the cart before the horse, but…. cart?

Cart

In a nutshell: firing a defensive coordinator tends to 'work' (to a degree), but firing an offensive coordinator does not. The number of returning starters turns out to be much more important than whether or not the coordinator was replaced. Because coordinators tend to be fired from poor-performing teams, often what we see as a positive boost from a coordinator change is simply regression to the mean.

As an added bonus, I stumbled upon these crude models to predict a change in a team's FEI rank vs. the prior year, where a positive change in rank is desirable (e.g. moving from a rank of 30 in the prior year to a rank of 10 in the following year is a +20 in rank):

Change in Offensive FEI Rank from Prior Year: +14 positions in rank if the starting QB returns, +3 for each additional returning offensive starter, and -0.5 for each level above last that the team was ranked in the previous year.

Change in Defensive FEI Rank from Prior Year: +3 for each returning defensive starter and -0.3 for each level above last that the team was ranked in the previous year.

These models certainly aren't going to put Football Outsiders out of business, but they're easy to use and somewhat intuitive. More detailed explanations below.
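Translated into code (a sketch of the crude formulas above, not the fitted model itself), with "levels above last" read as 120 minus the prior rank in a 120-team field:

```python
# The two crude models above. A positive return value is a predicted
# gain in rank spots vs. the prior year.

def crude_offense_change(qb_returns, other_returning_starters, prior_rank):
    return ((14 if qb_returns else 0)
            + 3 * other_returning_starters
            - 0.5 * (120 - prior_rank))

def crude_defense_change(returning_starters, prior_rank):
    return 3 * returning_starters - 0.3 * (120 - prior_rank)

# A 60th-ranked offense returning its QB plus 6 other starters:
print(crude_offense_change(True, 6, 60))  # +2.0 -> roughly 58th next year
```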

The Data

Team performance is based on the Fremeau Efficiency Index (FEI) from FootballOutsiders.com. Their free published data only goes back to 2007. FEI is not perfect, but it's easily available and eliminates some of the noise in scoring or yardage data. Because the focus will be on change in FEI performance, there are three years of data available (2008 vs. 2007, 2009 vs. 2008, and 2010 vs. 2009).

To determine coordinator changes, I used Rivals.com's annual "coordinator carousel." I coded the coordinators into four categories – stayed, promoted, fired/demoted, and left/unknown. A coordinator was classified as 'promoted' if he got a coordinator job at a 'better' school (arbitrarily determined by me, based mainly on the conference of the school) or if he got a head coaching job at any school. A coordinator was classified as fired if he didn't get a new job, took the same job at a 'worse' school, or took a position job at any school. The 'unknowns' are mainly coordinators who went on to take a position in the NFL or the same position at a similar school – in many cases it's hard to determine if that's a promotion or a demotion. Nearly all Michigan fans believe Jim Herrmann's trip to the NFL was encouraged, but for some coaches a job in the NFL could be their desired career path. I did some Googlestalking to try to parse out which was which, but if I could find no definitive sentiment I just coded them into the 'unknown' bucket. The coding was a bit tedious and I would welcome anyone who wants to double-check or validate my coding.

An issue confounding coordinator changes is that they often come with a head coaching change as well. If a head coach comes in with a whole new staff and the FEI metrics improve, is that because of the head coach or the coordinators? So I separated out coordinators that came on board as part of a new coaching regime (e.g. Malzahn) and those that came on with an existing head coach (e.g. GERG).

With 120 FBS teams and 3 seasons, there are 360 total data points (technically 359 since Western Kentucky was new to the FBS in 2008). Here's a breakdown of what happened with the coordinators. Keep in mind that when a head coach changes, the coordinators also go.

[Image missing: breakdown of coordinator changes by category]

And now please say goodbye to the 'unknown' category, because we'll be ignoring them for most of the rest of this piece.

The Exceptions: Best/Worst Firings

To judge the 'best' or 'worst' firings, I calculated the change in the team's Offensive or Defensive FEI rank from the season before the change to the season after the change. This is simplistic, to be sure, but it's also clean and easy to understand. I looked into using the actual FEI metric, but the metric values seem a bit more volatile than the rankings. And the ranks are frankly easier to deal with/explain. I also looked at change in performance 2 years after the change to account for more time for a coordinator's influence to take effect. If anything, the evidence is weaker 2 years down the line, so I focused on 1 year changes.

And here's the best and worst firings of coordinators, judged solely by the unit's movement in FEI in the season after the coordinator change.

[Image missing: best and worst coordinator firings, by change in FEI rank]

For those curious:

  • Manny Diaz's job in 2010 with Mississippi State reflects the 6th best performance following a DC getting fired (+59 places in FEI rank)
  • Greg Robinson's first year with Michigan ranks as the 6th worst performance following a DC getting fired (-25 places)
  • The best change in DFEI rank for a DC who didn't get fired was North Carolina State in 2010, under Mike Archer (with a Jon TAHNOOTA boost?) (+70 places)
  • The best change in OFEI rank for an OC who didn't get fired was San Diego State in 2010, under hey, that's Al Borges! (+81 places) [Brian's piece on Borges had SDSU as improving only 67 spots in FEI rank from 2009 to 2010, but I think that was using pre-2010 bowl season FEI numbers]

One other curiosity – the top changes in performance after a coordinator was fired all occurred in 2010. This is strange. We know Fremeau Efficiency isn't perfect, but it uses the same methodology in each year. I can think of only two explanations: 1) it's a fluke, or 2) coaches/athletic directors are getting more astute about when to fire/not fire a coordinator. My guess is it's the former, but I'm open to other interpretations.

The Rule: General Trends with Coordinator Replacement

As we move from looking at particularly good or bad firings to looking at the overall picture, I need to make this point clear: A firing by itself does not cause an improvement in FEI performance. Obviously, the who's matter (Shafer to GERG, anyone?). And the aggregate averages we look at include outliers on either end (and those outliers are potentially the cases where the 'who' really does matter, for good or bad).

A couple hypotheses to test:

H1) Teams that fire their coordinators should improve more dramatically in FEI than teams that stand pat with their coordinators. Head coaches will typically fire a coordinator only if he is perceived to be underperforming with his unit, so a change of coordinator should mean more of a rebound in FEI than in normal circumstances.

H2) Teams that lose a coordinator due to a promotion should decline more in FEI than teams that stand pat with their coordinators. The thinking here is that a coordinator must have been overperforming with his unit to merit a promotion, so his departure will coincide with the unit declining more than normal.

For the same reasons discussed in the previous section, we'll evaluate the hypotheses based on the change in a unit's FEI rank from the previous season to the current season. And looking at our three years of data across 120 FBS teams, we get this:

[Image missing: average change in FEI rank by coordinator status, offense and defense]

Offense first. Here we see little support for H1) – a team that fires its offensive coordinator improves by 2.3 positions in FEI rank, on average, but teams that stand pat improve by 1.1 positions in FEI rank – virtually no difference. The story is the same even if we look at performance two years out (not shown here). We do see some support for H2) – a team that loses its offensive coordinator to a promotion tends to decline in performance by 7.8 positions in FEI rank the following season.

For the defense, the pattern is reversed. We see strong support for H1) – a team that fires its defensive coordinator improves by nearly 12 positions in FEI rank, on average, while a team that stands pat with its coordinator has almost no change in FEI rank. But we don't see much support for H2) – a unit that loses a defensive coordinator to promotion drops by only 3.1 positions in FEI rank.

By now you're thinking three things:

  1. Holy shit, this is too long!
  2. What about the players a coordinator inherits? Did the best players graduate, or did an inexperienced group get more seasoning? You can't look at coordinator performance without considering the players.
  3. What about regression to the mean, or the tendency of units that do incredibly well in one year to slip the next year, or for units that do incredibly poorly in one year to improve the next year – regardless of who is coaching/coordinating?

For 1), you're right, and we're not even halfway! For 2), you're right, but hold on a second. Let's look at 3).

First, we'll look at teams that finished in the top 60 in FEI in the previous season. We'd expect the teams to decline in performance, in aggregate, simply because of regression to the mean. But we can still evaluate our old friends H1) and H2).

[Image missing: change in FEI rank for top 60 teams, by coordinator status]

And the verdict is… basically no support at all for H1), for either offense or defense. If a coordinator of a top 60 team is fired, the typical team performs about the same as the typical team where a coordinator stayed. We see some support for H2), but only for offenses; top 60 teams that lose an offensive coordinator to promotion tend to fall back even more so than top 60 teams that stand pat. And oh by the way, top 60 teams that change their entire staffs tend to really drop off the next year.

So let's do the same for the bottom 60 teams in FEI.

[Image missing: change in FEI rank for bottom 60 teams, by coordinator status]

For H1), we see strong support for the defensive side but no support for the offensive side. A team that fires its defensive coordinator tends to improve 24.4 positions in FEI rank, while a team that stands pat tends to improve only 9 points in defensive FEI rank [Beating a dead horse note: I'm not saying that firing a defensive coordinator causes the FEI to improve, only that there's an association between the two. The real cause, most likely, is that the unit was underperforming badly vs. the head coach's expectations, which caused the coordinator to get fired]. On the offensive side, a team that fires its coordinator actually performs worse than a team that stands pat, on average.

H2) gets no support for either offense or defense. Sample sizes are small (because not many coordinators from teams in the bottom 60 in FEI get promoted), so mileage varies, but these teams didn't appear to have any difficulty replacing their promoted coordinators.

Double bar charts! What do they all mean?

More or less this:

  1. Among teams that finish in the bottom half of FEI rankings, those that fire a defensive coordinator outperform their stand pat counterparts by about 15 points in FEI rank, on average. This is the only time we see a substantial positive impact from canning a coordinator.
  2. Among teams that finish in the top half of FEI rankings, those that lose an offensive coordinator to a promotion underperform their stand pat counterparts by about 11 points in FEI rank, on average. So either offensive coordinators get out while the getting is good, or those promoted offensive coordinators really were/are offensive geniuses whose team suffers without them.

Adding Player Quality into the Mix

In the first version of this article I basically punted on trying to quantify the quality of the rosters. Obviously a coordinator inheriting a bunch of bad freshmen will not fare nearly as well as a coordinator inheriting a roster full of good upperclassmen. To quantify the quality of the roster, I'm using Phil Steele's favorite metric, returning starters. Returning starter data comes from Vegas Insider in 2008, Phil Steele's blog post in the Orlando Sentinel in 2009, and Phil Steele's own site for 2010. I decided against using recruiting rankings not just because it was more work (especially splitting out offense vs. defense), but also because teams tend not to vary too much in recruiting year over year (Scout's team per recruit averages correlate at .9 year over year), and when they do vary a lot it tends to coincide with either one or two fluky recruits or with a head coaching change, which I'm already incorporating.

So, do teams that return more starters tend to do better in FEI? Well, Phil Steele would cry if the data didn't support it. I split unit/years into rough thirds based on the number of returning starters, and ...

[Image missing: change in FEI rank by returning-starter tercile, offense and defense]

We see a clear effect, moreso for the offense. A team in the top third of returning offensive starters tends to improve by 13 positions of FEI rank from one year to the next, while a team in the bottom third declines by 14 positions of FEI rank. The same story applies, albeit less dramatically, for defenses.

(Finally!) Putting It All Together

The main question is, once you account for returning starters, does the change in coordinators still matter? If you've made it this far, you're as tired of bar charts as I am. For this analysis I'm going to turn to the simple statistician's best friend, regression analysis. Regression analysis is great for having factors basically fight it out to see which is the better predictor of our target variable.

You can't do a regression without good prior beliefs, so I'll put 'em down here:

  1. More returning starters should lead to a positive change in FEI rank.
  2. Returning QB's likely matter more than other returning offensive starters, so we should look at them separately (just like Phil Steele does).
  3. All other things being equal, it's likely a team at the bottom of the rankings will improve the next year, and likely a team at the top of the rankings to decline the next year. This is the regression to the mean hypothesis, which implies the previous year's rank should have an impact on the change in FEI rank.
  4. If coordinator changes truly have an impact, it needs to show up after we account for returning starters and previous year's FEI rank.

[NOTE: the gurus at FootballOutsiders use both returning starters and recruiting rankings, along with a five year program success score, fluky turnover margins, etc. in their predictive models. Some information about what goes into their projection stew is available here and here.]

Predicting Change in Offensive FEI Rank

If we look at the offense, basically (1), (2), and (3) are confirmed and (4) is blown out of the water. Here's the full model:

[Image missing: full regression model, change in offensive FEI rank]

A flag/dummy variable was used to test the effects of three of the four categories of coach changes (OC fired, OC promoted, and whole staff swept out – for statistical reasons, the intercept/'default' includes those situations where the coaching staff didn't change at all). I'll wait to interpret the impacts until we get to the reduced model. For now we'll just worry about our statistical confidence – higher is better. The typical rule of thumb is to look for 90% or 95% confidence (which means that in only 1 in 10 or 1 in 20 samples would we see nonzero effects when in fact there were zero effects). We have four variables with very high confidence, so they'll stay in the model. But the variables reflecting a coordinator change have a low confidence, meaning we can't reject the hypothesis that coordinator changes have no effect on change in FEI rank. In cruder terms – coordinator changes don't seem to matter on the offensive side of the ball.
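For the curious, here's a sketch of how a model with flag/dummy variables like this can be fit – synthetic data and hypothetical column names, not the actual data set:

```python
# Sketch of the regression described above, on synthetic data. The
# C(..., Treatment(...)) term expands the coordinator-change category
# into flag/dummy variables, with 'stayed' as the omitted default.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 359  # team-seasons, as in the text
df = pd.DataFrame({
    'prior_rank':   rng.integers(1, 121, n),
    'ret_starters': rng.integers(0, 12, n),
    'qb_returns':   rng.integers(0, 2, n),
    'oc_change':    rng.choice(['stayed', 'fired', 'promoted', 'swept'], n),
})
# Fake outcome built from roughly the reduced-model coefficients, plus noise.
df['rank_change'] = (-58.1 + 0.5 * df['prior_rank'] + 3.2 * df['ret_starters']
                     + 10.8 * df['qb_returns'] + rng.normal(0, 20, n))

model = smf.ols("rank_change ~ prior_rank + ret_starters + qb_returns"
                " + C(oc_change, Treatment(reference='stayed'))", data=df).fit()
print(model.summary())
```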

And now the reduced model, dropping the irrelevant coordinator variables. The R-squared (overall measure of fit) is unchanged at 0.32.

[Image missing: reduced regression model, change in offensive FEI rank]

Even though you read MGoBlog, you're probably more of a football fan than a stats nerd. So some interpretation:

  • The R-squared of 0.32 is pretty good, considering we don't have a lot of inputs in the model.
  • The intercept/default is the predicted change in Offensive FEI rank if all the other variables are at their minimum. In effect, if a team was first in Offensive FEI rank and had no returning starters, its predicted change in FEI rank would be to go from 1st to 59th – a true regression to the mean.
  • The Offensive FEI Rank coefficient basically means a team is expected to gain a half a point in rank for every point of rank it had in the previous year. If the 120th ranked team has no returning starters, its predicted change in rank is -58.1 + 120*(0.5), or +1.9. In other words, it would be predicted to finish 118th in FEI rank, or just about at rock bottom again.
  • Returning offensive starters is fairly straightforward – each returning starter is worth +3.2 points in Offensive FEI rank.
  • Phil Steele thinks a returning quarterback is special, and the data supports his claim. While a typical returning starter is worth +3.2 points, a returning quarterback gets a 10.8 point bonus, for a total impact of 14 ranking points.

To help get a handle on what the model is saying, below are some examples of hypothetical situations.

[Image missing: example predictions from the offensive model]

The model's not perfect – it can't predict any team to be ranked above 13th or below 118th in the following year. And in R-squared terms, the majority of variance in the data is left unexplained. But all effects are statistically significant, and the model makes intuitive sense. You may recognize the last line of the table as Michigan's current situation. If Michigan behaves according to the model, the change in coaching will have no impact and Michigan will regress to the mean a bit, falling from 2nd to 20th. Of course no particular situation, including Michigan's, is guaranteed to perform as the model predicts, but it's a prediction we can test.

Predicting Change in Defensive FEI Rank

QB's don't play defense, and Phil Steele doesn't call out a particular defensive position as critical, so we're testing three hypotheses: that more returning starters = improvement in rank, that worse previous year's rank leads to better next year's rank, and that a coordinator change can have an impact on FEI performance. Table?

Table:

[Image missing: full regression model, change in defensive FEI rank]

The story here is a bit more complicated. As with the offensive side, the previous year's rank has a clear impact, as does the number of returning starters. Also similar to the offensive side, the promotion of a coordinator has no statistically significant effect. But the other two flags have borderline 'significance'. Keeping the head coach but firing the DC leads to a 7.5 position gain in FEI rank, while firing the DC *and* the head coach leads to a 6.4 point decline in FEI rank. Because they are borderline, I'm going to leave them in and just drop the "DC promoted" variable from the final model, which is below.

[Image missing: final regression model, change in defensive FEI rank]

Quick highlights:

  • R-squared of .23 is not as good as with the offensive side of the ball, perhaps because of the absence of a single key returning starter as with the QB on the offensive side. [Note: if we used actual FEI, not FEI rank, the R-squared is much stronger (about double) but with the same general results. However, interpreting FEI changes is not as transparent as interpreting FEI rank changes, so I'm sticking with rank changes here. On the offensive side, performance of the model is about the same whether we use FEI or FEI rank].
  • Interpretation of the intercept/'default' is the same as with the offense. Assuming all other variables are at minimum (i.e. previous FEI rank was 1st, 0 returning defensive starters, no DC change), then the team is predicted to drop to 40th in FEI the following year.
  • The intercept is smaller in the defensive model, but so is the 'reward' for having a bad FEI rank the previous year, so it balances out. A team that finished 120th in Defensive FEI in the previous year, has no returning starters and no changes in coordinator, is predicted to change in rank -39.2 + 120*(0.3), or -3.2 points, to a rank of 123. OK, that's not possible (there's only 120 teams), but it's close enough and again shows some intuitive power of the model – take a crappy defense and return no starters, and it will remain crappy.
  • Remarkably, the worth of a returning defensive starter is virtually the same as the worth of a returning offensive starter.
  • In situations where the defense is so bad the coordinator is fired, teams tend to get a 7.3 point boost in FEI rank. But if the head coach is also swept out, teams tend to drop 6.6 points in FEI rank. (Both flags appear in the calculator sketched after this list.)
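As a sanity check, here's the final defensive model as a calculator, using the coefficients quoted in the highlights. The per-starter value is borrowed from the offensive model (~3.2), since the text says the two are virtually the same, so it reproduces the predictions below only to within a spot or so.

```python
# Final defensive model from the highlights above: -39.2 intercept,
# +0.3 per spot of prior rank, ~+3.2 per returning starter (assumed
# equal to the offensive value), +7.3 if only the DC was fired, -6.6
# if the whole staff was swept out. Set at most one of the two flags.

def predict_defense_rank(prior_rank, returning_starters,
                         dc_fired=False, staff_swept=False):
    change = (-39.2
              + 0.3 * prior_rank
              + 3.2 * returning_starters
              + (7.3 if dc_fired else 0.0)
              - (6.6 if staff_swept else 0.0))
    return prior_rank - change

# Michigan entering 2011: 108th in 2010, 9 returning starters, whole
# staff swept out -> lands within a spot of the diary's 94th.
print(round(predict_defense_rank(108, 9, staff_swept=True)))  # ~93
```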

As before, below are some hypotheticals to help show how to interpret the model.

[Image missing: example predictions from the defensive model]

Once again the last line is Michigan's predicted outcome for 2011 based on the model – a modest improvement to 94th in FEI. Note that if Rodriguez had stayed but Robinson was fired, the model would have predicted a defensive FEI rank of 80th in 2011. For the love of all that is holy this is just presented for interest's sake; we have no idea what would have been, and of course we have no idea of what will be until it actually happens. I for one certainly hope (and believe) Mattison's defensive unit will outperform the model's expectation.

Important Caveats

  • The model is not saying that defensive coordinators should be fired every year in order to get a 7+ point boost in FEI rank. Even though we're using regression, this is a descriptive model, not a normative one – it's only describing what has happened, not prescribing what should happen.
  • Both models have statistically significant results. But they're nowhere close to perfect predictors of performance – in statistical terms, a lot more variance is left unexplained than explained. So while we can use the model to make rough predictions, your mileage will definitely vary in individual situations.
  • I don't want to diminish the importance of individual coaches. Manny Diaz may bring golden blitzes of thunder and rage wherever he goes, and GERG may bring doom wherever he goes. It's just that in the aggregate, we don't see much evidence for clear changes in unit performance based on a coordinator switch. If the trends of 2010 continue, however, that may change.
  • Another way coordinators can impact the team long-term is with recruiting. If a coordinator happens to be a heckuva recruiter along with being a decent coach, that will likely pay dividends longer down the line. That's not investigated here.
  • The guys at FootballOutsiders do a much better job of prediction than this model. The only issue is that their models are both more complicated and less transparent. And I'm not sure if they've ever tested coordinator changes. In any case, this article from 2008 says their model had a correlation of .8 for predicting next year's FEI – which corresponds to an R-squared about double the models above.

Wrapping It All Up

For me, this analysis has three big surprises:

  1. No matter how we slice it, changing an offensive coordinator can't be tied to a systematic gain in offensive performance. Maybe coordinators are fired more for philosophical or chemistry differences than for performance-related issues.
  2. In the aggregate, firing a defensive coordinator does correspond with a boost in the unit's performance, but it's not huge – roughly equivalent to the benefit of having two more returning starters on defense.
  3. Roster quality, as measured by returning starters, has a clear positive impact on change in FEI rank, and it's almost exactly the same for returning offensive and defensive starters (QB's excluded).

Another surprise is that Al Borges, happily, pops up twice in the good column – after he left Auburn its offense went in the FEI tank, and in his last season at SDSU his offense improved more in FEI rank than any other unit that did not change coordinators in the three years of data I examined.

Things I'll think more about:

  • Whenever I hear an offensive coordinator is fired, my first reaction will be that it's a short-term desperation move. My crazy prediction: Mack Brown's removal of Greg Davis is an indicator that Mack's in his last few years of being a head coach.
  • Fremeau may be flawed. By all accounts Auburn's offense was in decline when Borges was fired in 2007, and yet its offense was ranked 24th – Fremeau may be 100% right, but in some cases the FEI is clearly contrary to conventional wisdom.
  • Was the spate of 'good' firings in 2010 a one-time fluke, or part of a trend? The only way to tell is with more years of data. Is it possible that coaches/AD's are getting better at knowing when to fire/not fire a coordinator, and who to hire as a replacement?
  • It could be interesting to look at average recruiting rankings of a unit in the two years prior to a coordinator change to the two years after a change. I think the focus would have to be on a per recruit basis, not per class basis, to make up for uneven class sizes and roster needs.
  • [EDIT: new] Either now, or after a bit more tinkering with the model, we can go back and look at teams/units that overperformed (or underperformed) relative to the model's expectations. For instance, and not surprisingly, it looks like Michigan's 2010 defense underperformed the model's expectations by 40+ spots, while the offense overperformed the model's expectations by 50+ spots. In fact, GERG's D underperformed expectations 2 years running, which is more statistical evidence of his incompetence. And coaches/coordinators who systematically overperform expectations could be true geniuses/motivators. By the way, if you look at the teams that had a high net underperformance in a certain year, many were either in their first year under a new coach (Washington St. 2008, Kansas 2010) or had a head coaching change the following year (Washington 2008, San Jose State 2009). Need a little more time to think about/look into this one.

This piece is still a work in progress, and there may be blind spots I haven't even considered. At the very least I'll try to update it next year with a new round of data. As ever, comments/feedback, especially of the constructive variety, are welcome. Go Blue!