like +1,000,000 points or something.
if you seek an image of the most Wisconsin OL ever, enter here
This is just great. Best diary since Misopogon's attrition series. Hopefully Brian frontpages this, or at least comments on it.
So we want stability, but not a Bobby or JoePa.
or Kinesiology, or Sports Management, or Human Resources or something for this... it's a friggin thesis!
Really excellent work, I thoroughly enjoyed reading through this. I would love to offer some constructive criticism here, but I can't think of anything.
Super cool as always. Because I'm lazy, I'm wondering about your thoughts on the causal mechanism linking 5-stars to better draft outcomes for 4-stars. As I see it, there could be at least four non-mutually exclusive possibilities.
1. You get better: 5-stars bring up the level of play of 4-stars (this seems to be your thought).
2. Multitude of sins: It's easier to be successful as a 4-star when you have a couple of 5-stars around you, even if you aren't actually any better than you would have been without the 5-stars around.
E.g. (and by analogy), the Lions O-line looks like world-beaters blocking for Barry; or, Emmitt Smith looks like a world-beater running behind the Dallas O-line. For a counter-example, ask Shawn Marion how his post-Steve Nash career has turned out...
3. Better teams: 5-stars make teams better, leading to more exposure for those teams' four-stars.
4. Reflected glory: Similar to (2) and (3) above. If you play with some 5-stars, you get "buzz" and are more likely to be drafted. Malcolm Gladwell likes this theory.
I appreciate your thoughts... I had considered the "multitude of sins" posit, and find it highly likely to be a contributor.
I emphasized players getting better because it is apparent that 2-4 star athletes are somewhat equal in the absence of 5-star influence. If the 5-stars were simply elevating the team's performance, the effects of posits 2-4 should be distributed equally among the 2-, 3-, and 4-stars.

Regarding #2: a 5-star No Fly Zone safety should make the entire secondary look better regardless of the secondary's star composition; thus, 3-stars and 4-stars both benefit. Regarding #3: an O-line posting an extremely dominant set of stats thanks to some guy named Jake Long blazing through all comers would bring analysts' attention to each component of the O-line, and eventually credit would be distributed as due... if 4-stars and 3-stars are assumed to be equal performers, then this advantage would also be spread without regard to stars. #4: see blend of #2 and #3. :^)
But what we see is a clear separation between 4-stars and 3-stars as the number of 5-stars is increased. Granted, in the "multitude of sins" posit, the effect would require that 4-stars and 3-stars actually both be on the field to benefit. If the starting spots are filled from a presumed hierarchy instead of an analysis of the best player available at that moment, such an effect might begin to bias towards benefiting 4-stars. Thus, I concur that this is very likely a contributing factor.
In the case of player development, we have to assess whether 4-stars are just physically more capable of improvement, or if there is a psychological component to feeling like they are closer to closing the gap between themselves and 5-star teammates (do 2-stars give up and just plan to come in and enjoy the ride?). I also find these to be likely scenarios.
In terms of Michigan, I think it is safe to say that 1) our 2-stars are not becoming Michigan Men to simply enjoy the ride and 2) that RR's style of coaching will mean that all players will have an opportunity to benefit from any added exposure 5-stars bring through success.
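An illustrative aside: the distributional argument above can be made concrete with a toy Monte Carlo. Everything below is my own sketch with made-up baseline rates and bump sizes, not the diary's data. The point: under a "team lift" story (posits 2-4), a 5-star bump hits 3-stars and 4-stars equally and the gap between them stays put; under a "you get better" story (posit 1), the bump concentrates on 4-stars and the gap widens.

```python
import random

random.seed(42)

# Hypothetical baseline draft odds by star level -- invented numbers,
# only the comparison logic matters.
BASE_RATE = {4: 0.20, 3: 0.18}

def simulate(n, stars, bump):
    """Observed draft rate for n recruits of a star level, with their
    baseline odds increased by `bump`."""
    p = BASE_RATE[stars] + bump
    return sum(random.random() < p for _ in range(n)) / n

N = 100_000  # large n so the observed rates are stable

# Posits 2-4 ("team lift"): the 5-star bump hits 3s and 4s equally,
# so the 4-star/3-star gap stays at the baseline gap.
lift_4 = simulate(N, 4, bump=0.05)
lift_3 = simulate(N, 3, bump=0.05)

# Posit 1 ("you get better"): only 4-stars improve, so the gap widens.
dev_4 = simulate(N, 4, bump=0.10)
dev_3 = simulate(N, 3, bump=0.0)

print(f"team lift  : 4-star {lift_4:.3f} vs 3-star {lift_3:.3f}")
print(f"development: 4-star {dev_4:.3f} vs 3-star {dev_3:.3f}")
```

Only the second scenario reproduces the clear 4-star/3-star separation the diary observes as 5-star counts rise, which is the crux of the argument.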
the only thing I would say is that the coaching stability relates only to the head coach. I'm guessing at all these programs there was some turnover among the position coaches as well as at the offensive and defensive coordinator positions.
It would be interesting to know a breakdown of the positions played by the drafted players. It wouldn't surprise me if, at the larger schools, a lot of the 2- and 3-star players getting drafted were kickers and/or punters, not necessarily offensive and defensive players.
Kickers/punters were not included in either the recruiting numbers or the draft numbers...
I agree that maintaining the same head coach does not mean a perfect system of stability; that is why I stated as much when discussing the decision to only include schools with no change at head coach:
"Thus, all players assessed at the school received strongly similar tutelage (not accounting for position-coach changes.)"
God bless you sir, for putting the 'college' back in 'college athletics'. Now I'm not wasting my time, I'm sharpening my brain.
Great work and thanks for putting it up. Someone should link this at the relevant Georgia sites, too. They might be interested.
One of the better stats posts I've ever seen. pure genius.
Another awesome post, great read.
...for the nice compliments. While I am glad that you have found this post useful, the value of the MGoBlog community's input on the posts leading to this one cannot be overlooked. I am continually impressed by the civility and constructiveness of MGoBlog's readership. Keep on keepin' on!
it looks like rivals finds the teams that have won the most of late, finds out who those coaches are recruiting, and which players they think are better than others. cross-list a number of coaches and you start to come up with relevant national rankings. but it's pretty obvious that they have a limited network that isn't actually able to evaluate the entirety of relevant players other than through inference.
if i'm right, it suggests that we can probably infer what the Rivals rankings will look like based on offers and past team winning percentage. for instance, teams that come out of nowhere and start winning a lot of games like Boise and TCU will eventually see their recruits rated better even if they aren't outcompeting national powers for recruits.
this would also mean that Rivals/Scout are merely reporting services and not doing any value added scouting of their own.
I suggested (i.e., it assumes Rivals' evaluations are not independent), but I can certainly see offer lists influencing rankings being a factor and having something of a similar effect on the data. Although it doesn't totally explain why you see the fall-off in the star rankings' predictive power at the highest and lowest levels.
independent evaluations. they just don't add any value. it's way more important that he knows what the million dollar coach thinks. and i assume these guys are not so full of themselves to think they know more than the coaches. in fact, i'm pretty sure they fawn over/are in awe of the coaches.
so we should expect to see a bias based on the degree of access. the farther from the network, the farther from being highly rated.
the question left is predictive power at the highest levels. there are a lot of various factors there and since they are successful in the extreme, it becomes difficult to pick out what exactly they're good at that makes the difference. i also wonder if the best coaches are the most sophisticated when it comes to dealing with the recruiting services. so they may be distorting their signal to some degree as well.
I love the data but I'm very skeptical of your conclusions.
Isn't this likely a function of the Rivals rankings being an imperfect measure of a recruit's value in a marketplace with unequal buying power?
Stated less obtusely, everyone's guess of who the best players are is different and imperfect, although there is still a good deal of overlap in opinion (compare Rivals rankings to other recruiting services, e.g.). If you give someone qualified to evaluate recruits the pick of the litter, the guys they select as the best may not always be the same, but whoever is selected is still highly likely to be a stud.
A recruiting service has the pick of the litter in the sense that they can assign a 5-star ranking to whomever they choose. A school obviously does not have the power to sign whomever it chooses, but a place like USC of late is the next closest thing. They have enjoyed so much recruiting power that they will generally limit their offers to those recruits they think very, very highly of (5-star caliber recruits according to their own evaluation).
Thus, for a school like USC, you would not necessarily expect a recruit's Rivals ranking to strongly correlate with the USC coaching staff's internal evaluation of the recruit, since they are generally only going after the guys they really like. So it is not terribly surprising when their Rivals 4-star guys pan out at a similar frequency to their Rivals 5-star guys...if the USC coaching staff didn't think they were a stud, they probably would not have recruited them.
As you move down the food chain, and look at schools that pull from both the elite and less elite ranks, the Rivals rankings start becoming a more likely predictor of the coaching staff's own evaluation of the player (if you accept the premise that the Rivals rankings are a rough proxy of a recruit's market value...i.e., the pool of 5-star players includes a higher concentration of good/more sought after recruits, and so on down the scale). That recruit your coaches are really excited about is more likely to be a 4-star or 5-star guy than a 3-star (although not always). That guy your coaches had to settle for is more likely a 3-star than a 4-star. Consistently, with these teams (teams capable of pulling the occasional 5-star), you start seeing more of a correlation in the mid-tier ranks between star ranking and likelihood of success.
However, once you approach the lower tier of schools (from a recruiting power perspective), you are dealing with teams that are, figuratively, picking through the left-over scraps from the more elite teams. Again, at that point, the predictive ability of the Rivals rankings begins to give way to the fact that a bunch of more powerful market participants doing their own evaluations chose to take a pass on your recruits. In other words, the better schools have already taken a close look through the Rivals 2-star bin and the Rivals 3-star bin and picked out the most worthy fruit from each. It should not surprise that what is left behind in each may not be significantly different from one another.
Anyways, this to me seems a lot more likely an explanation for the phenomenon reflected in the data above than the idea that the presence of "5 star" guys on your team has some mysterious effect on the abilities of their lower ranked peers.
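For what it's worth, the selection story above is easy to demonstrate in a toy model. This is purely my own illustrative sketch (the thresholds, noise levels, and the "drafted" bar are all invented, not anyone's real evaluation process): recruits get a noisy public grade, an elite program signs only those its sharper internal evaluation likes, and the star ranking's predictive power collapses within the signed group even though it works fine in the overall pool.

```python
import random

random.seed(1)

# Toy model: each recruit has a true ability; the public star grade is a
# noisy read on it, and an elite program's internal evaluation is a much
# less noisy read. All parameters are invented for illustration.
pool = []
for _ in range(200_000):
    ability = random.gauss(0, 1)
    grade = ability + random.gauss(0, 1)       # noisy public ranking
    internal = ability + random.gauss(0, 0.3)  # sharper coach evaluation
    pool.append((ability, grade, internal))

def star(grade):
    """Crude star buckets carved out of the public grade."""
    if grade > 2.0: return 5
    if grade > 1.0: return 4
    if grade > 0.0: return 3
    return 2

def success_rate(group):
    """Share of a group whose true ability clears a 'drafted' bar."""
    return sum(a > 1.5 for a, g, i in group) / len(group)

# Elite school signs only recruits its internal evaluation rates highly.
elite = [r for r in pool if r[2] > 1.2]

overall = {s: success_rate([r for r in pool if star(r[1]) == s]) for s in (3, 4)}
at_elite = {s: success_rate([r for r in elite if star(r[1]) == s]) for s in (3, 4)}

print("overall :", overall)    # 4-stars clearly outperform 3-stars
print("at elite:", at_elite)   # the gap largely flattens after selection
```

The elite program's 3-stars and 4-stars converge because both had to clear the same internal bar, which is essentially the "USC's 4-stars pan out like their 5-stars" observation above.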
Amazing post, by the way.
have similar understandings
i think this also implies that the talent of any individual player in a school's recruiting class is better predicted by the overall talent level of the class than by whatever star ranking he's been given.
so, star rankings are actually a lagging indicator
although you still have to acknowledge the correlations in the data for those not-quite-USC-level schools. It is still important to recognize that a coaching staff's prioritizing of particular recruits will frequently correlate with the star rankings. So when your team is pursuing a 4-star player and a 3-star player at the same position and signs the 4-star player, that's probably good news (i.e., that's more likely the guy the coaches really wanted), but not necessarily.
I agree with both you and Colin...
1 - it is entirely plausible that some schools get their top picks among the 4-stars they recruit as well as their top picks among the 3-stars they recruit, and thus their 4-stars and 3-stars perform at a higher level than the average of their ranks. I could also see these schools being the ones that receive a higher number of 5-stars, as that may be an overall measure of their recruiting draw.
2 - it is also plausible that the recruiting services rank recruits higher as they draw interest from specific schools who have had a lot of recent success... the more schools on that recruit's list who have had recent success, the higher they get ranked.
However, the design of this study was to look at players vs. players on the same team... in other words, both of these systemic biases you are talking about should be expected to affect the players of the same teams in an equivalent manner. If USC is more likely to get the players it wants, then that means both the 3-stars and 4-stars they signed came to them in higher regard... therefore, while both levels of recruits might be expected to behave at a higher success rate than their counterparts at another school, their correlation to each others' success should remain equivalent. Unless we assume that USC only got the 4-stars it really wanted, but for some reason was forced to accept the 3-stars it signed... I find this unlikely.
Also, I think a comparison of the 5-star mega-recruiters' data vs. the highly successful 5-star recruiters' data could hold some answers to this dilemma. If the teams who are more successful at recruiting 5-stars (by a factor of 2.5) are considered to be getting their "pick of the litter" among 4-stars and 3-stars, then comparing them to the next tier of teams should reveal a much higher level of success among the 4-star and 3-star data. However, what we see is a small bump in the 4-star data and a regression among their 3-stars. To me, this suggests an influence of psychology... be it the psychology of the coaches and who they give first crack at the field, or the psychology of players who either believe in their own skills, or don't.
Adding even more doubt to the argument: the behavior of 2-stars at the lower-tier schools vs. the behavior of 3-stars at the top-tier schools... Cal, Iowa, Oregon, and Va Tech are placing 2-stars into the draft at a rate of 12.2%. Georgia, tUOS and Penn St are placing 2-stars into the draft at a rate of 12.5%. If we deem Florida St, Oklahoma, Texas and USC to be the most successful recruiters based on their high numbers of 5-stars, and further posit that this means they are getting the best of the 4- and 3-stars, then why do their 3-stars only get drafted at a rate of 10.3%?
Again, I like your posits. But, this data was not designed to answer either of those questions; and as we can see, if we were to begin to make inferences from the data, the results do not support either posit.
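Tangentially, for anyone who wants to replicate these rate comparisons at home, the arithmetic is just drafted-over-signed per star tier. A quick sketch (the counts below are placeholders I made up, chosen only so the rates land near the figures quoted above; they are NOT the study's actual tallies):

```python
def draft_rate(drafted, signed):
    """Fraction of signed recruits at a star level who were drafted."""
    return drafted / signed if signed else 0.0

# Hypothetical tallies: {star_level: (drafted, signed)} -- placeholder
# numbers, not the diary's data.
mega_recruiters = {3: (12, 117), 2: (3, 40)}   # 5-star-heavy tier
lower_tier      = {3: (14, 110), 2: (10, 82)}  # Cal/Iowa/Oregon/VT-style tier

for label, group in (("mega ", mega_recruiters), ("lower", lower_tier)):
    for stars in sorted(group, reverse=True):
        drafted, signed = group[stars]
        print(f"{label} {stars}-stars: {draft_rate(drafted, signed):.1%}")
```

With counts like these, the mega-recruiters' 3-stars land near the 10.3% quoted above while the lower tier's 2-stars land near 12.2%, which is exactly the inversion the comment is pointing at.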