In football, the QB position is the linchpin of the whole offense. The quarterback touches the ball on every play, reads the defense, and chooses the best course of action based on what he sees in the moment. So, naturally, the outlook of an offense depends in large part on the outlook of the QB who will be flying the plane. The goal of this diary is to see if there are any reliable trends in how a generic QB progresses from one year to the next, and to investigate whether there are factors that can be identified and quantified that aid or hinder his on-field success. I was actually very surprised by how clear the data turned out to be.
To do this, I accumulated information for 226 quarterbacks who have played in BCS conferences since 2003. The pool was restricted to BCS schools so that some level of control was applied to the talent surrounding and opposing each quarterback, the presumption being that players in BCS conferences play with and against talent that is on par with their own.
If a player did not average at least 10 passing attempts per game played in a given year, that data point was not considered, because the rating is highly unreliable at such a small sample size. This shuts out some interesting pieces of data (Tim Tebow 2006) but improves the overall conclusions significantly. In Tebow’s case, his second year as a regular player was his first year as a regular passer, so his sophomore season was placed in the Year 1 group. A few other, more obscure anomalies were given the same treatment; the large number of data points makes their impact negligible.
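For the curious, the screening step is simple. Here's a minimal sketch in Python, assuming a hypothetical season-level table; the file name and the `games` and `pass_att` columns are my own invention, not the actual dataset:

```python
import pandas as pd

# Hypothetical file; one row per QB per season.
seasons = pd.read_csv("qb_seasons.csv")

# Keep only seasons with at least 10 pass attempts per game played.
seasons = seasons[seasons["pass_att"] / seasons["games"] >= 10]
```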
The metric I used for this study is NCAA Passer Rating. Unfortunately, passer rating isn’t perfect when it comes to evaluating QBs; there are many disses available on that topic (Advanced NFL Stats, Football Outsiders, Fifth Down), and I’ll leave the detailed explanations to the articles I’ve linked. Imperfect as it is, though, passer rating is still a familiar number for most football fans, and it provides significant and reasonable insight into the relative performance of QBs.
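For anyone who wants to play along at home, the NCAA formula is simple enough to compute directly. A quick sketch (the function name and example numbers are mine, not anything official):

```python
def ncaa_passer_rating(att, comp, yards, td, ints):
    """NCAA passing efficiency: (8.4*YDS + 330*TD + 100*COMP - 200*INT) / ATT."""
    return (8.4 * yards + 330 * td + 100 * comp - 200 * ints) / att

# Sanity check: 60% completions, 8 yards per attempt, and a 2:1
# TD:INT ratio lands in the low 140s.
print(ncaa_passer_rating(att=300, comp=180, yards=2400, td=20, ints=10))  # ~142.5
```

On with the show.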
The following chart shows the average NCAA QB Rating by year of experience for all QBs included in this study. The chart includes the standard error of the averages for those who know what that means (or are good guessers). The chart shows a couple of interesting things: more experience is better, which…duh, and the average QB rating seems to improve by approximately equal amounts going into year 2 and into year 3, but then tails off a little going into year 4.
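If you're following along with the hypothetical `seasons` frame from earlier, this chart boils down to a one-line aggregation (`exp_year` and `rating` are assumed column names):

```python
# Average rating, standard error of the mean, and sample size
# by year of experience as a starter.
by_year = seasons.groupby("exp_year")["rating"].agg(["mean", "sem", "count"])
print(by_year)
```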
Now, the second point goes against conventional wisdom somewhat; QBs are supposed to improve a lot more after their first year than after subsequent years. The fly in the ointment is that, in order to track improvement, the data should be evaluated as matched pairs. That means taking each specific QB’s improvement over his preceding year and then averaging those deltas to get the true average improvement from one year to the next (a quick code sketch follows). Doing that yields this chart.
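In code, the matched-pair version is a within-QB difference rather than a difference of group averages. Continuing the same hypothetical frame, with `qb_id` as an assumed identifier column:

```python
# Each QB's rating change from one season to the next.
seasons = seasons.sort_values(["qb_id", "exp_year"])
seasons["delta"] = seasons.groupby("qb_id")["rating"].diff()

# Average improvement going into years 2, 3, and 4.
print(seasons.groupby("exp_year")["delta"].agg(["mean", "sem", "count"]))
```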
This chart shows what we expect to see: the change after year 1 is much bigger than the changes after years 2 and 3. But now there’s an apparent negative improvement between years 3 and 4. What’s up with that?
Need … more … charts …
What I did here is plot average improvement versus the previous year’s rating. To clear out the inherent noise in the data, I lumped QB ratings near each other together (e.g., ratings from 115.0 to 124.9 are treated as 120, and so on). The trends are clear and strong, and they demonstrate that mean reversion is in full effect: the higher a QB’s rating is in a given year, the more likely he is to post a lower one the next year, and vice versa. It’s very difficult to have two really good or really bad years in a row (unless the QB is awesome or terrible).
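The bucketing is the only fiddly part. Here's one way to do it, continuing the sketch above (the `delta` column comes from the matched-pair snippet):

```python
# Bucket the prior year's rating to the nearest 10 (115.0-124.9 -> 120,
# 125.0-134.9 -> 130, etc.), then average the year-over-year change
# within each bucket.
seasons["prev_rating"] = seasons.groupby("qb_id")["rating"].shift()
seasons["bucket"] = ((seasons["prev_rating"] + 5) // 10) * 10
print(seasons.groupby("bucket")["delta"].agg(["mean", "count"]))
```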
We know from the first chart in the series that ratings go up as years of experience go up; hence, by the fourth year as a starter, the average QB is rated highly enough that mean reversion makes the net expected change negative. The guys above 130 are likely to fall back and the guys below 130 are likely to move up. This effect lets us infer an expected upper bound for a seasoned QB, probably in the 130 to 140 range. One possible explanation for this phenomenon is that a QB is unlikely to have the same group of players around him for all four years. The team around him might be out of phase with his development, and that will have an effect on the numbers he puts up.
The familiar example around here is Chad Henne. Chad had Braylon Edwards and a veteran offensive line in his first year, so any improvement he made between 2004 and 2005 was partially offset by the loss of Edwards and other changes around him. As the team around him developed alongside him, he saw a big jump in performance in his third year. Then, going into 2007, there were heavy losses on the offensive line in addition to Steve Breaston, and Henne’s numbers fell back to the 130-ish level. On the surface it looks like Henne never really improved, but in all likelihood his development made up for, and was masked by, the changes in the team around him. I think this is a more plausible explanation than “he was always sweet and he never got better.”
Finally, it’s worth taking a look at the dependence of first-year performance on seniority. The question: is it better to have a redshirt junior making his first start than a true freshman?
Once again I’ve plotted the averages with their corresponding standard errors and included the sample sizes along the axis for reference. The responsible conclusion is that seniority is not a significant factor in first-year success for redshirt sophomores or younger, while players older than that seem to perform better. However, since the averages overlap so much, especially between non-adjacent points, you could just as easily conclude that the trend is weak or that no trend exists at all. It seems that other factors, such as supporting cast and the overall talent of the player, matter more than the age of the QB when he makes his first collegiate start. The team thing is difficult to assess, but talent is easy; Rivals.com, be my guide.
Same thing as before: lumped averages with standard errors and sample sizes shown. This time, I think the trend is real because A) it makes sense and B) there is no overlap between 2-stars and 5-stars. Also, a 5-star QB is more likely to have a good team around him than a 2-star player is. All of these things support the trend despite the uncertainty in the data. There’s one more reason; let’s zoom in on the 5-stars, this time with a table.
| Player | School | Year | Rating | Extenuating Circumstances |
| --- | --- | --- | --- | --- |
| Reggie McNeal | Texas A&M | 2003 | 124.5 | |
| Trent Edwards | Stanford | 2003 | 79.5 | 4 new OL; 2 new WR; new RB |
| Kyle Wright | Miami (FL) | 2005 | 137.2 | |
| Marcus Vick | Virginia Tech | 2005 | 143.3 | |
| Anthony Morelli | Penn State | 2006 | 111.9 | 4 new OL |
| Matthew Stafford | Georgia | 2006 | 109.0 | 3 new OL; 2 new WR |
| Xavier Lee | Florida State | 2006 | 123.5 | |
| Jimmy Clausen | Notre Dame | 2007 | 103.9 | 3 new OL; 1 new WR |
| Tyrod Taylor | Virginia Tech | 2007 | 119.7 | |
| Terrelle Pryor | Ohio State | 2008 | 146.5 | |
When you strip out the four guys who had extenuating circumstances (Mallett stays in), the average is about 131. That’s approaching the theoretical upper limit right away, on average.
I’m currently working on a project that uses this information to see what we can expect from the QBs on our upcoming schedule. I’ll also use the dataset to try to tease out what we can expect from our own guys based on the QBs most similar to them.