
[Ed-M: Wow! Bumped.]

Over the past 3 years across all of FBS, no defensive unit underperformed expectations more than Michigan's 2010 squad.

Your world is not rocked. I understand; the statement certainly feels true. But is it true? How about these? For the past 3-year period (2008-2010)…

  • No team's combined offensive and defensive units underperformed expectations more than UCLA's
  • No team's combined offensive and defensive units outperformed expectations more than Navy's
  • Besides Navy, the only teams whose offensive and defensive units are each in the top 10 in exceeding expectations are Boise State and TCU
  • Iowa is the best B1G team when it comes to outperforming expectations
  • Once we account for recruiting rankings, replacing a coordinator – on either offense or defense – has NO measurable, systematic impact on offensive or defensive performance in the following year
  • Firing a head coach tends to lead to a team doing worse than expected, defensively, in the following year, and has no effect on the offense's performance

Be warned, this is another long diary. Instead of tl;dr'ing, feel free to skip to the closing two sections.

Background

This post is a follow-up to "Canning a Coordinator, Revisited". Many thanks for the comments on that post, including those asking how recruiting affects the models. The main data set uses FEI rankings, and changes in rankings, from 2007 through 2010. To that I have added information on coach and coordinator changes (fired, promoted, etc.) and returning starters (courtesy of Phil Steele). More detail on the data is available in the Canning a Coordinator diary.

A HUGE thank you to UpUpDownDown from Black Heart Gold Pants, whom you may recall from his opus on player development. Because I don't know Perl from Bruce Pearl, he was kind enough to send me the Rivals rankings he used for his post. Anything that's wrong in this diary is my fault; UUDD just helped a great deal by providing some data.

The Framework

It's probably impossible to read MGoBlog and not be aware of regression to the mean. The idea is that it's very hard to be extremely bad year over year (with the exception of Michigan's turnover margin <wince>), or extremely good year over year – there's a tendency to move toward the middle.

In the previous diary I put together a model to predict the change in a team's FEI ranking from one year to the next. The inputs were simple: the team's prior FEI ranking, the number of returning starters, and some information about coach/coordinator changes. The team's previous FEI ranking had a big influence – roughly speaking, a team near the top was predicted to slide back 1 spot of ranking for every 2 spots of rank it held the previous year, and a team near the bottom was predicted to climb at the same rate. This is what we mean by regression to the mean.
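For the code-inclined, here's a minimal sketch of that setup in Python, assuming one row per team-season in a pandas DataFrame. The file and column names are hypothetical stand-ins for the FEI, Phil Steele, and coaching-change data:

    # Minimal sketch of the model setup, assuming a pandas DataFrame with
    # one row per team-season. File and column names are hypothetical
    # stand-ins for the FEI / Phil Steele / coaching-change data.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("fei_team_seasons.csv")  # hypothetical file

    # Outcome: change in FEI rank, signed so positive = improvement
    df["rank_change"] = df["prior_fei_rank"] - df["fei_rank"]

    model = smf.ols(
        "rank_change ~ prior_fei_rank + returning_starters + hc_fired + oc_fired",
        data=df,
    ).fit()
    print(model.summary())  # coefficients, significance, R-squared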

These models left more things unexplained than explained, but they showed some promise. One missing element was information about recruiting. All else being equal, we'd expect a team with highly rated recruits to perform better than teams with lower rated recruits.

Enter the Rivals recruiting rankings, courtesy of UpUpDownDown. I'm not interested in the debate between Scout, Rivals, ESPN, 247Sports, etc. I just want a pretty good metric of recruiting strength, and the Rivals recruiting rankings provide that.

I originally was going to look at recruiting rankings specific to the offensive or defensive unit. However, many players are classified as the mysterious "Athlete", and they may end up on either offense or defense. Furthermore, players often change positions (Rivals has Ted Ginn going to OSU as a DB, for example). So instead I focused on overall team rankings.

The next question is how far back to look. I tested the past year, the past 2 years, and so on up to the past 5 years. Statistically, the previous 3-year period worked best. And it has the beauty of making sense – the previous three years of classes should be a good gauge of the team's overall recruiting strength entering the next season (a 4-year window might make even more sense, but it didn't work as well statistically).
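The window test is just a loop over candidate horizons, something like this (a sketch, continuing with the hypothetical df above and assuming rows are sorted by year within each team):

    # Sketch of the window test: average class stars over the past k years
    # (k = 1..5), keeping the window whose model fits best. Continues with
    # the hypothetical df above; assumes rows sorted by year within team.
    fits = {}
    for k in range(1, 6):
        df[f"stars_{k}yr"] = (
            df.groupby("team")["stars"]
              .transform(lambda s: s.rolling(k, min_periods=k).mean())
        )
        fit = smf.ols(
            f"rank_change ~ prior_fei_rank + returning_starters + stars_{k}yr",
            data=df,
        ).fit()
        fits[k] = fit.rsquared
    print(fits)  # the 3-year window fit best in the author's data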

Put Some Meat on Those Bones!

We want to predict changes in FEI performance from one year to the next. A positive change in FEI rank is good; it means the team did better than in the previous year. A negative change is bad. FEI ranks are specific to offense or defense, so we'll look at them separately. Using absolute FEI scores would actually improve the model's predictive accuracy, but absolute FEI scores are very hard to interpret in isolation, so we'll focus on ranks. Here's the story with the offense:

A reminder that R², or R-squared, is the percentage of variation in the data explained by the model. We're well under 50%, so it's not as if we've uncovered the secret to football analysis. But we have statistically significant results (see the 'confidence' column) that fit into a sensible and meaningful way of looking at the world.
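For reference, R² is just one minus the ratio of unexplained to total variation:

    # R-squared for reference: share of variance in the outcome that the
    # model explains (1.0 = perfect fit; 0.0 = no better than the mean).
    import numpy as np

    def r_squared(actual, predicted):
        actual, predicted = np.asarray(actual), np.asarray(predicted)
        ss_res = np.sum((actual - predicted) ** 2)      # unexplained variation
        ss_tot = np.sum((actual - actual.mean()) ** 2)  # total variation
        return 1.0 - ss_res / ss_tot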

To interpret the effects:

  • The intercept/default basically means the model wants to subtract 105 spots of ranking, all else being equal. So if you started at 1, you'd end up at 106. And if you started at 120, you'd end up at 225 (a difficult feat, given there are only 120 FBS teams). But don't fret – the other variables correct for the default subtraction.
  • Similar to what we saw in the February diary, there's roughly a 2-for-1 tradeoff between last year's rank and the next year's rank. For every 2 spots worse of rank the previous year, the offense is predicted to improve by 1 spot the following year. This is the essence of regression to the mean.
  • Every returning offensive starter adds about 3 spots of improved rank the following year.
  • Except the quarterback, who adds a total of about 13 spots of improved rank (2.9 for being a returning offensive starter, and 10.1 'bonus' points for being a quarterback).
  • Each star in average Rivals rating (for the team over the previous 3 years) boosts the offense's FEI rank by 16 spots the next year.
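Strung together, the bullets give a back-of-envelope formula. A sketch, with the caveat that these are rounded values from the bullets above; the 0.6 prior-rank multiplier is backed out from the worked examples below, and is what gets described as "roughly 2-for-1":

    # Back-of-envelope offensive model, rounded from the bullets above.
    # The 0.6 prior-rank multiplier is backed out from the worked examples
    # below; the post rounds it to "roughly 2-for-1."
    def predict_offense_rank(prior_rank, starters, qb_returns, stars_3yr):
        """Predicted offensive FEI rank next season (lower = better)."""
        change = (
            -105.0                           # intercept/default
            + 0.6 * prior_rank               # regression to the mean
            + 2.9 * starters                 # returning starters (QB included)
            + (10.1 if qb_returns else 0.0)  # quarterback bonus
            + 16.0 * stars_3yr               # avg Rivals stars, past 3 classes
        )
        return prior_rank - change           # positive change = improvement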

That's a little hairy, so let's look at some examples:


The 95th percentile for the 3-year Rivals recruiting stars is 3.65, which we'll round up to 3.7; the 5th percentile is 2.0. The first row is a best-case scenario: #1 in FEI rank the previous year, 11 returning starters including the QB, and a highly rated bunch of recruits. Predicted FEI rank the following year is 4th. The second row is a worst-case scenario: 120th in FEI rank the previous year, no returning starters, and a poorly recruited team at 2.0 stars. The predicted FEI rank the following year is 121st. Impossible, because there are only 120 teams, but intuitively valid.

The middle five rows are attempts to show some comparative effects. If the team finished at 60 in FEI, returns 6 starters including the QB, and has an average number of recruiting stars (2.6), it's predicted to finish… at 60. More face validity! Give that team a poorer recruiting profile and the predicted finish drops to 69; give it a sterling set of recruits and it's predicted to jump to 42. Give the team an average recruiting ranking and all starters returning and it'll jump to 45; average recruits and no returning starters and it'll drop to 87.
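Those middle rows fall right out of the sketch above:

    # The middle rows, reproduced from the sketch (values rounded):
    predict_offense_rank(60, 6, True, 2.6)    # ~60: average everything, stays put
    predict_offense_rank(60, 6, True, 2.0)    # ~69: poorer recruiting profile
    predict_offense_rank(60, 6, True, 3.7)    # ~42: sterling recruits
    predict_offense_rank(60, 11, True, 2.6)   # ~45: all starters back
    predict_offense_rank(60, 0, False, 2.6)   # ~87: no starters back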

The last row should look a little familiar – it's Michigan's offense going into 2011. After finishing 2nd in FEI in 2010, with 9 returning starters and a returning QB, Michigan is predicted to finish 16th (taking out Stonum drops the finish to 19th).

What about coordinators or coaches? RichRod to Borges? Spread-ball to Man-ball? In the aggregate, there is NO relationship between coaching and coordinator changes and offensive performance. In other words, across 3 years of data and 120 FBS programs, there's no compelling evidence to say that a coach or coordinator change, as a rule, leads to a better or worse offensive performance. Of course, your mileage may vary in individual situations, and we'll get to that later in the article.

And Your Text-Heavy Analysis of the Defense?

For the offense there is no statistical relationship between coaching changes and the change in the team's FEI rank the following year. For the defense there is an effect – but only at the head coaching level. And perhaps counter to intuition, changing the head coach leads to a worse-than-expected defensive performance the following season.

A few notes. While our R-squared is smaller, we still have statistically significant results. For the most part, the impacts/coefficients are similar to the offensive side. Again we see the roughly 2-for-1 relationship between the prior year's FEI rank and the following year's rank. Returning defensive starters are worth about as much as returning offensive starters, though no single player is as critical as the quarterback is on offense. One star in average Rivals recruiting rating is worth about 16 spots of FEI rank, similar to the offense. This is more face validity for the model.

The big difference between the offensive and defensive models is the head-coach-fired factor. The model predicts, all else being equal, that a team that fires its head coach (and hence its DC) loses about 8 spots of FEI rank vs. a team that keeps its head coach. There is no measurable effect when only a DC is fired (this is a change from the model in the previous diary, caused by the addition of the Rivals rankings into the mix).
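A defensive analogue of the earlier sketch, for the curious. Only the ~8-spot head-coach penalty, the ~16 points per star, and the starter value come straight from the discussion; the intercept and prior-rank multiplier below are backed out from the worked examples in the next paragraph, so treat the whole thing as approximate:

    # Defensive analogue. The ~8-spot head-coach penalty, ~16 points per
    # star, and ~2.9 per returning starter come from the discussion above;
    # the intercept and prior-rank multiplier are backed out from the
    # worked examples below, so treat all of it as approximate.
    def predict_defense_rank(prior_rank, starters, stars_3yr, hc_fired):
        """Predicted defensive FEI rank next season (lower = better)."""
        change = (
            -87.0                             # approximate intercept
            + 0.48 * prior_rank               # roughly 2-for-1 regression
            + 2.9 * starters                  # returning defensive starters
            + 16.0 * stars_3yr                # avg Rivals stars, past 3 classes
            - (8.0 if hc_fired else 0.0)      # head coach (and hence DC) fired
        )
        return prior_rank - change

    predict_defense_rank(60, 6, 2.6, False)   # ~59: average team, coach kept
    predict_defense_rank(120, 0, 2.0, True)   # ~125: the doomsday scenario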

Once again examples may help explain the model.

A team that finishes first in FEI the previous year, returns 11 starters, and has a great recruiting record is predicted to finish 1st in FEI rank. A team that finishes 120th, returns no starters, fires its coach, and has a poor recruiting record is predicted to finish 125th (again, impossible because there are only 120 teams, but a pretty reasonable prediction for a non-constrained model). A team that finishes at 60 in FEI rank the previous year, returns 6 starters, keeps its coach, and has an average recruiting portfolio is predicted to finish 59th in FEI rank. Change that to a great recruiting record and the prediction jumps to 41; a terrible recruiting record drops it to 69. Return all 11 starters with an average recruiting record and the prediction is a jump to 45th place, return no starters and drop to 77.

Once again the last line is the prediction for Michigan. Regression to the mean, fairly good recruiting, and 9 returning starters peg us for 71st in defensive FEI in 2011. Interestingly, the model predicts that the defense would have been even better had Rodriguez remained the head coach (a predicted finish in 63rd position). Again, mileage will vary in individual situations – I certainly don't think a GERG-led defense would do better than one with Mattison at the helm. But the fun thing about models is that they make predictions we can test, and perhaps improve upon going forward.

Enough About the Rules, What About the Exceptions?

We've covered the last two semi-provocative bullets from the introduction. What about the others? To answer that, we want to look at teams that overperform or underperform their prediction. That is, if a team was predicted to finish 40th in FEI but finishes 10th, it has overperformed its prediction. If a team was predicted to finish 50th but finishes 80th, it has underperformed its prediction.
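In code, the yardstick is as simple as it sounds:

    # The yardstick: predicted rank minus actual rank, so positive numbers
    # mean the team beat the model's prediction (lower rank = better).
    def overperformance(predicted_rank, actual_rank):
        return predicted_rank - actual_rank

    overperformance(40, 10)   # +30: overperformed
    overperformance(50, 80)   # -30: underperformed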

So let's look at individual feats (and years) of predictive futility. On offense:

Top Underperforming Offensive Years

  Team/Year          Predicted Finish   Actual Finish   Coaching Change
  Texas 2010         33rd               104th           New OC
  Cal 2010           39th               108th           New OC
  Baylor 2009        39th               105th           None
  Auburn 2008        36th               100th           New HC
  Washington 2008    40th               98th            New HC

Hey, Texas fans really did have a reason to grumble. Cal 'Herrmanned' their OC, while Auburn rolled the Chizik dice and came up Malzahn/Newton. Only Baylor kept the staff around after such a crap year. We'll hold off on Washington for a second.

Now, for the defensive side of the ball:

Top Underperforming Defensive Years

  Team/Year             Predicted Finish   Actual Finish   Coaching Change
  Michigan 2010         46th               108th           New HC
  Florida State 2009    33rd               92nd            New HC
  Washington 2008       55th               112th           New HC
  Northwestern 2010     49th               100th           None
  San Jose State 2009   55th               105th           New HC

GERRRRRRRRRRGGGGGGGGG!!!!!!!!!!! Yes, the squad that most underperformed the model's predictions is none other than our fair Michigan's beaver-abetted D from 2010. That's what we call external validity in the model-building business! Now, a caveat. As someone may have pointed out a time or two (or three), Michigan had decent recruiting classes, but keeping those players on the field and actually playing for Michigan was much more of a challenge. If we give Michigan an average of about 2.5 stars over the 3 years prior to 2010, instead of the 3.5 Rivals gives them, then GERG's performance isn't the worst in the 3-year period. It's just bad.

Florida State fans can commiserate, as their 2009 team put up an out-of-character defensive stinkbomb that helped usher out the Bobby Bowden era. Four of the five teams here got rid of their coaches after the season, with only Northwestern holding on to its entire staff.

And wow, does Washington in 2008 look like the worst team imaginable. Both offense and defense underperformed woefully, giving Tyrone Willingham the execrable exacta and bringing in Steve Sarkisian.

How about on the flowers and lollipops side? Offense:

Top Overperforming Offensive Years

  Team/Year             Predicted Finish   Actual Finish   Note
  East Carolina 2010    86th               17th            First year under new HC
  Houston 2008          80th               13th            First year under new HC
  Arkansas State 2010   101st              34th            OC promoted to HC in 2011

East Carolina and Houston were in their first years under a new HC (and yet, in the aggregate, new head coaches have no significant effect on offensive performance). And Hugh Freeze was just promoted to head coach after his stint as OC at Arkansas State.

Top Overperforming Defensive Years

  Team/Year          Predicted Finish   Actual Finish   Note
  Navy 2009          95th               34th            2nd year after HC change
  Stanford 2010      71st               11th            HC and DC went to NFL
  Boise State 2008   73rd               14th            DC went to Tennessee in 2010

With the exception of Stanford, the schools on these top-performer lists are non-BCS programs, which correlates with a lack of Rivals recruiting stars – as a result, they're predicted to do less well. Harbaugh and Fangio took the money and ran (and for all Andrew Luck did for Stanford, the defense was equally outstanding last year).

Years Might Be Flukes; What About Trends?

One year is not definitive (except in the case of GERG, natch); in fact, a team that woefully underperformed the previous year could look great just by rebounding the following year.

With only 3 years of data to work with, it's hard to tease out trends. But I did look at which programs, overall, tended to overperform or underperform vs. expectations. Across 2008-2010, these are the teams that overall tended to do better (or worse) than the model predicts.
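Mechanically, that's just averaging each program's yearly overperformance (again with hypothetical column names):

    # Program-level trend: average yearly overperformance per team across
    # 2008-2010. Column names are again hypothetical.
    df["overperf"] = df["predicted_rank"] - df["actual_rank"]
    trend = (
        df[df["year"].between(2008, 2010)]
          .groupby("team")["overperf"]
          .mean()
          .sort_values(ascending=False)
    )
    print(trend.head())   # ...and check the tail for the underperformers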

On offense, the top overperformer is Navy and it's not even close. Over the three-year period, they on average outperformed expectations by 53 ranking spots per year. Expectations are low because of the low/nonexistent stars for their recruits, yet Ken Niumatalolo's squad keeps outperforming them.

Top Overperforming Programs 2008-2010 (Offense)

  Team                    Average Overperformance
  Navy                    +53
  Houston                 +37
  North Carolina State    +27
  Nevada                  +26
  Stanford                +24

On the flipside, the top offensive underperformer is UCLA, and it's also not even close. Just try to Google Rick Neuheisel and not have it automatically add "hot seat".

Top Underperforming Programs 2008-2010 (Offense)

  Team                Average Underperformance
  UCLA                -45
  New Mexico State    -31
  Washington State    -30
  Wyoming             -29
  California          -27

As for defense, on the positive side there should be few surprises. Non-BCS schools (with lower expectations borne of lower recruit rankings) dominate with the usual suspects, including two military academies:

Top Overperforming Programs 2008-2010 (Defense)

  Team              Average Overperformance
  Boise State       +33
  Air Force         +32
  Boston College    +31
  TCU               +29
  Navy              +29

A note on the lone BCS team here: Boston College's defensive FEI ranks over the past 3 years are 4th, 5th, and 15th. Wow.

And as for bad defense, do we even need to … GERRRRRRGGGGGGGGG!

Top Underperforming Programs 2008-2010 (Defense)

  Team                Average Underperformance
  Michigan            -37
  Kansas              -29
  Washington State    -28
  UNLV                -25
  Fresno State        -24

#1 with a bullet is the Wolverines, pushed along by a variety of factors including stuffed beavers, white hair, horrendous attrition, and injuries. Fresno State is a bit of a surprise given Pat Hill's pedigree.

And if we combine offensive and defensive data? We have this:

Top Overperforming Programs 2008-2010 (Combined)

  Team           Average Overperformance
  Navy           +41
  Boise State    +27
  TCU            +26
  Air Force      +25
  Iowa           +22

Again, we see smaller schools that don't get great recruits, Rivals-wise, and are run by what we regard as great coaches. Iowa is the exception as the only BCS team in the top 5. (Note that Iowa also benefits from a horrendous 2007 FEI performance, especially on offense, which made its 2008 numbers look exceptional.)

And now for the dregs:

Top Underperforming Programs 2008-2010 (Combined)

  Team                Average Underperformance
  UCLA                -30
  Washington State    -29
  Washington          -23
  Wyoming             -22
  Kansas              -21

UCLA, Washington State, and Washington not only stink but have been a black hole for recruits over the past few years. Wyoming and Kansas, sure, but the top 3 underperformers all coming from the Pac-10?

What Does It All Mean?

We don't know exactly why teams outperform or underperform expectations. Reasons for outperformance could include luck, talent identification, talent development, PED-palooza, good coaching beyond just talent development, flawed recruiting rankings, flaws in FEI's system, and of course flaws in the predictive model. Reasons for underperformance include luck, attrition, injuries, bad coaching, flawed recruiting rankings, flaws in FEI's system, and of course flaws in the predictive model.

Still, some things we can conclude:

  • Recruiting matters – on average, improving your class average by half a star will boost your FEI rank by 8 spots, all else being equal (see the quick check after this list).
  • Returning starters matter – 11 returning starters with an average 3-year recruiting ranking of 2.5 are worth about the same as no returning starters with an average 3-year recruiting ranking of 4.5.
  • If the choice is between returning a quarterback and 6 other offensive players, or returning 10 offensive players but not the quarterback, take the quarterback.
  • For all the attention paid to coaching and coordinator changes, they have very little short-term impact on a team's fortunes, in the aggregate. Again, mileage will vary in individual situations, but as a rule a team's performance is best predicted by how it did last year, the number of returning starters, and its average recruiting rankings.
  • Ken Niumatalolo, Chris Petersen, Gary Patterson, Troy Calhoun, and Kirk Ferentz are pretty good coaches with pretty good systems in place. The data support the conventional wisdom.
  • Other pretty good coaches aren't "overperforming" because their performance is on par with what their recruiting record predicts; the coaches above beat even what their recruits and returning starters suggest.
  • Rick Neuheisel really, really, really should be on a short leash. Barring great turnarounds, expect to see him and Paul Wulff on a Fox Sports West studio show in 2012.
  • While one-year performance may be a fluke, heads still roll when teams have a bad year.
  • Greg Robinson, empirically, is a terrible defensive coordinator.
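And, as promised, the quick arithmetic behind the first two bullets:

    # Quick arithmetic behind the first two bullets (QB bonus ignored):
    0.5 * 16.0           # = 8.0 rank spots for half a star of class average
    11 * 2.9             # = 31.9 rank spots for 11 returning starters
    (4.5 - 2.5) * 16.0   # = 32.0 rank spots for two extra stars – a wash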

 

How About Some Provincialism?

The top schools in the B1G for outperforming expectations are Iowa, Nebraska, and Wisconsin. Whatever their methods, they have been successful at turning 3-star recruits into 5-star players. Over the past three years, the worst B1G team relative to expectations is… Michigan, and that's despite last year's offensive leap. 2008, for a variety of reasons (including Tacopants), was an offensive disaster for Michigan, and 2009 was still below the model's expectations. Minnesota and Illinois round out the B1G bottom 3. Ohio State is right in the middle, mainly because it recruits so well and performs up to those expectations.

Before you weep in your beer: in 2008, Ball State was in the +30s on both offensive and defensive overperformance. Perhaps a fluke, perhaps not, but as the Mathlete has shown, the team did well in Hoke's last year. FEI data only goes back to 2007, so we can't look at earlier Ball State seasons. Also note that Auburn's terrible offense in 2008 came after Al Borges was fired at the end of 2007 and a supposed upgrade, in the guise of Tony Franklin, came in.

As for San Diego State, in 2009 the offense slightly underperformed and the defense slightly overperformed the model's expectations. But in 2010 the defense beat expectations by 37 ranking spots and the offense by 57 (the 8th-best overperformance among all team-seasons analyzed). These are good reasons to be optimistic about this year's Michigan team.

This piece is still a work in progress. I hope it provokes some thought, debate, and relief that we're not in Pullman. As ever, comments/feedback, especially of the constructive variety, are welcome. Go Blue!