Rating the Raters: Reviewing Recruiting Services for Success and Bias

The Mathlete


Matthew Stafford (Rivals #6 overall, 2006) and Mitch Mustain (Rivals #10 overall, 2006)

With the ESPN150 hot off the presses yesterday, all four major sites now have an updated Top 150/247/250/300 list available for the 2013 class. I wanted to dive in and look at how each service has performed over the years. Does any site stand out as doing a better job of predicting success? Do allegations of regional bias play out for any specific service?

Rivals

Typically considered the gold standard (unless another site has the player you like rated higher), Rivals' online archives are available back to 2002. They have produced a ranked Top 250 since 2008 (Terrelle Pryor was the original #1 recruit) and an unranked Top 250 list in 2006-2007. Since 2002 they have classified between 25 and 33 players as five-stars and typically have about 300 four-stars per class.

Scout

Like Rivals, Scout's online archives go back to 2002. They're a bit more generous with the five-stars, extending the honor to exactly 50 high school seniors per class since 2008. The larger group of five-stars is offset by a smaller group of four-stars that rounds out the rest of the Scout 300 ranking, which has been available since 2005.

ESPN

The Worldwide Leader joined the recruiting party in 2006. No one is stingier with the fifth star than ESPN, offering up between 11 and 18 each season since they went crazy with 42 in their first year. The ESPN four-star threshold is also a bit tougher. Last year the number peaked at 238 but prior to that the total was closer to 200 per class.

247

The newest service is 247 Sports, which did a bare-bones review of the 2010 class before jumping in head first for the last two completed classes. Their best-of list ranks the top 247 players (just like their name, get it?) and is in line with Rivals in terms of the number of five- and four-star rated players. Their later entry into the group has allowed them to provide what is, in my opinion, the best website in terms of navigation and ease of use. For the most part they are excluded from these evaluations since the first class they fully rated were only freshmen last season.

Methodology

This is where it gets tricky. Do you evaluate on hits, misses or both? Based on the available, accessible information I decided that hits would be easier to quantify and are really what you want to know about a service: who does a better job of predicting future stars? I defined stars as players who earned all-conference or AP All-American honors at a BCS-conference school. First-team all-conference honors were weighted double and AP All-American honors were weighted triple. If a player earned awards in multiple years, he received the appropriate weight for each season.
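To make the weighting concrete, here's a minimal Python sketch. The baseline weight of 1 for a second-team all-conference season is my assumption (the post only fixes first team at double and AP All-American at triple), and the honor labels and function are hypothetical.

```python
# Hypothetical honor labels; the 2x/3x weights follow the post, and the
# baseline of 1 for a second-team all-conference season is assumed.
HONOR_WEIGHTS = {
    "second_team_all_conf": 1,
    "first_team_all_conf": 2,
    "ap_all_american": 3,
}

def star_value(honors_by_season):
    """Sum the weight of each season's highest honor for one player."""
    return sum(HONOR_WEIGHTS[h] for h in honors_by_season)

# A two-time first-team all-conference pick who added AP All-American
# honors in his final season carries a weight of 2 + 2 + 3 = 7.
assert star_value(["first_team_all_conf", "first_team_all_conf",
                   "ap_all_american"]) == 7
```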

Each recruit was given an overall ranking for each service in each season. Where a service published a formal ranking I used it, then applied my method to complete the rankings behind it, using star values and position rankings to approximate a national rank. All five-stars were ranked first, then four-stars and finally three-stars. Within each group, players were ranked in position order, with each position allocated slots based on the total quantity it contributed to the group. This way the #4 fullback wasn't rated the same as the #4 wide receiver. Kickers and punters were excluded.
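Here's a sketch of that fill-in step, consistent with the worked Ondre Pipkins example in the comments below: within a star tier, each position's players are spread evenly across the tier's span of slots, so a position with 9 players in the tier lands one roughly every eleventh slot while a position with 90 lands one nearly every slot. All field names are hypothetical.

```python
def spread_tier(tier_players, start, end):
    """Spread each position's players evenly across the slots [start, end].

    tier_players: dicts with 'name', 'position' and 'pos_rank' keys.
    Returns (fractional_slot, name) pairs sorted by slot; re-rank them
    sequentially afterward if integer national ranks are needed.
    """
    span = end - start + 1
    by_pos = {}
    for p in sorted(tier_players, key=lambda p: p["pos_rank"]):
        by_pos.setdefault(p["position"], []).append(p)
    scored = []
    for group in by_pos.values():
        step = span / len(group)  # e.g. 9 DTs over 99 slots: one per 11
        for i, p in enumerate(group):
            scored.append((start + (i + 0.5) * step, p["name"]))
    return sorted(scored)
```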

The square root of the rank was then used to further accentuate the differences at the top end of the rankings. Ranks were capped at 1000, and any unranked player was given that value. ESPN was evaluated solely on the 2006 and later classes.
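The cap and the square root come straight from the post; the function wrapping them is just illustrative:

```python
import math

def rank_score(rank=None, cap=1000):
    """Capped square-root score; unranked players take the cap."""
    return math.sqrt(min(rank, cap) if rank is not None else cap)

# The square root stretches the top of the board: #1 to #100 spans
# 9.0 points, while #901 to #1000 spans only about 1.6.
print(rank_score(1), rank_score(100))      # 1.0 10.0
print(rank_score(901), rank_score(1000))   # ~30.0 ~31.6
```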

Who Rates the Best (at Rating)

Overall, Rivals gave the best average ranking to future stars: the weighted average rank of a Rivals recruit who went on to earn post-season honors was #268. Scout wasn't far behind at #281, and ESPN lagged further back at #329. Here is how each service did by recruiting class (lower is better):

Class   Rivals   Scout   ESPN   247
2002    285      320     -      -
2003    296      313     -      -
2004    261      262     -      -
2005    216      241     -      -
2006    265      298     331    -
2007    328      318     386    -
2008    270      257     322    -
2009    241      249     246    -
2010    130      170     219    230
2011    24       73      43     21

Rivals dominated from 2002-2006 before Scout picked up a couple of seasons in 2007 and 2008. With plenty of eligibility left for the 2009 class it's still anyone's game there: to date the three services are neck-and-neck, with probably two-thirds of the results still outstanding. Rivals has jumped out to an early lead for the 2010 class, and the 2011 number is 60% Sammy Watkins and generally pointless at this point in its lifecycle. ESPN has failed to come close in any completed class.

Offensive Ratings: Weighted Average National Rank of Post-Season Honorees

Group Rivals Scout ESPN
OL 297 350 383
QB 166 153 135
RB 203 225 304
WR 266 244 296
All Offense 254 271 309

ESPN finally picks up a win in the tightly contested quarterback evaluations. As the lower numbers across the board show, picking future all-conference quarterbacks has proved one of the easier tasks for the rating services. ESPN's average rank of 135 puts them ahead of both Scout (153) and Rivals (166).

Scout does the best at wide receiver with Rivals a bit back. ESPN is not close to the leaders at any offensive position other than quarterback; their results for both offensive linemen and running backs are particularly lacking.

Defensive Ratings: Weighted Average National Rank of Post-Season Honorees

Group Rivals Scout ESPN
DL 213 238 271
LB 299 310 376
DB 301 303 365
All Defense 288 297 356

It's a clean sweep for Rivals on the defensive side of the ball. Scout is never far from, but still consistently behind, Rivals. ESPN is a distant third across all position groups, at least 20% higher than Rivals in every category and nearly 25% higher overall.

Conference Ratings: Weighted Average National Rank of Post-Season Honorees

Conference Rivals Scout ESPN
Big Ten 284 288 438
Big XII 335 347 467
Big East 491 402 485
SEC 175 206 157
Pac-10/12 255 260 418
ACC 293 323 288
AP All-Americans 215 237 269

The criticism that ESPN has an SEC bias and neglects the West Coast certainly shows up in the evaluations. The SEC is the one conference ESPN clearly wins, and the ACC is a narrow win. All the other conferences are just carnage for ESPN while Rivals again takes a majority of the wins. Scout is virtually tied for the Big Ten, Big XII and Pac-12 while lapping the field in the always crucial Big East rankings. It is difficult to tell whether ESPN's success is due to better rankings of players who ultimately land at ACC and SEC schools or whether they simply give a flat lift to those conferences. The fact that 1 in 3 players on the 2013 Top 150 is from Florida or Georgia probably indicates that at least some of the success comes from allocating preferential spots to players from the SEC footprint.

Regional Ratings

Here is how each service allocated its ranking slots across geographies. Higher rankings are weighted more heavily, and each player's home state was allocated to one of the five major conferences (sorry, Big East) based on its geography.

Footprint 247 ESPN Rivals Scout
ACC 13% 13% 13% 14%
Big Ten 14% 14% 16% 16%
Big XII 17% 19% 18% 19%
Pac-10/12 17% 14% 19% 18%
SEC 39% 40% 35% 33%
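A sketch of how a share table like this could be built. The post doesn't state the exact weight function, so the inverse-square-root weight below (the flip side of the square-root rank score used earlier) and the tiny state map are assumptions.

```python
import math
from collections import defaultdict

# Hypothetical state-to-footprint map; the real one covers every state.
STATE_TO_FOOTPRINT = {"GA": "SEC", "FL": "SEC", "OH": "Big Ten", "CA": "Pac-10/12"}

def footprint_shares(ranked_players):
    """ranked_players: (national_rank, home_state) pairs for one service."""
    weights = defaultdict(float)
    for rank, state in ranked_players:
        footprint = STATE_TO_FOOTPRINT.get(state)
        if footprint is not None:
            weights[footprint] += 1 / math.sqrt(rank)  # top ranks weigh more
    total = sum(weights.values())
    return {fp: round(100 * w / total) for fp, w in weights.items()}

print(footprint_shares([(1, "GA"), (2, "CA"), (3, "OH"), (4, "FL")]))
# {'SEC': 54, 'Pac-10/12': 25, 'Big Ten': 21}
```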

Players from ACC territory were the most consistently allocated across all four services. ESPN and 247 allocate fewer prime slots to the Big Ten than Rivals and Scout do. ESPN is a major outlier out west, as the Pac-12 footprint garners much lower rankings there than at any of the other three. The SEC footprint picks up about 40% of the weighted slots from 247 and ESPN versus 35% and below from Rivals and Scout. This gap has narrowed some in recent years, but there is still a strong bias from 247 and ESPN towards players from the SEC footprint.

Takeaways

In terms of ability to predict future success, Rivals stands out as the clear winner among the services. Scout is not significantly behind and has closed the gap in recent seasons. Rivals' predictions proved more accurate in five of the seven position groups and overall on both sides of the ball. ESPN is a distant third in almost every sub-category, the exception being quarterback, where they lead the most closely contested position group.

The services pair off in their regional biases. ESPN and 247 slant to the Southeast and Mid-Atlantic with a clear undervaluation of the West Coast. Rivals and Scout don't swing dramatically toward any one region but do give less value to the Southeast, with the extra spread across the rest of the country.

While Michigan's 11 [players in the...] Top 150 showing is absolutely a good thing, ESPN has proven to be the least reliable of the three established services. Historically, 30.5% of the weighted post-season honorees originally appeared in the ESPN 150, while Scout and Rivals have each had at least 34% in their Top 150s. The differences aren't massive and all sites have had their misses, but overall there is clear evidence that Rivals is the most consistently correct and that Scout is a strong #2. Although the individual players fluctuate, the overall ratings for Michigan's class to date are essentially identical across Rivals, Scout and ESPN, with 247 lower on it than any other service.

Bonus: Protecting Conference Turf

Moving from recruiting services to conferences, I wanted to see which conferences did the best job of keeping the best players from their region in-house. Each state was split between conferences based on the number of schools each has in that state. States without a BCS-conference team were excluded. This isn't perfect; there is no way Cincinnati gives the Big East the same share of Ohio that the Big Ten gets from its school in Ohio. But it treats each school among the Big Six as theoretically even and provides a good starting point.

Conference   Total FP Pts   Split FP Pts   Signed FP Pts   Signed/Total   Signed/Split
ACC          74,500         47,725         30,610          41%            64%
Big Ten      34,821         24,022         23,268          67%            97%
Big XII      47,118         46,558         33,195          70%            71%
Big East     61,033         27,376         11,706          19%            43%
Pac-10/12    37,287         37,287         30,063          81%            81%
SEC          82,902         47,610         49,365          60%            104%

*FP=Footprint

The Pts columns estimate the total value of the recruits within a footprint. The total includes points for all players in any state where the conference has a school. The split is an allocation based on the number of BCS-conference schools each conference has in that state. Not surprisingly, the conferences with the least geographic competition, the Big XII and the Pac-12, signed the highest percentages of their available recruits. After splitting up the states, the SEC actually signs more than its allocation of the footprint. The Big Ten is close behind but works from a much smaller pool. If the SEC is able to make gains in Texas (they currently have a 12% "share") with the addition of A&M, the talent gap between the SEC and the rest of the conferences could widen.
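The last two columns are simple ratios of the points columns; a quick check against the SEC row (the helper name is mine):

```python
def turf_rates(total_fp_pts, split_fp_pts, signed_fp_pts):
    """Share of a footprint's points a conference signed, measured against
    the whole pool and against its school-count allocation of it."""
    return signed_fp_pts / total_fp_pts, signed_fp_pts / split_fp_pts

signed_total, signed_split = turf_rates(82_902, 47_610, 49_365)  # SEC row
print(f"{signed_total:.0%}, {signed_split:.0%}")  # 60%, 104%
```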

Comments

Jivas

April 18th, 2012 at 12:29 PM

One suggestion: I think the best summary measure of rater performance (using data I believe to be readily available) would be mapping the various recruiting ratings into team wins (or similar measures of team performance*).  You did something earlier to evaluate how much recruiting rankings impact wins, using rankings from one of the sites.  If you repeat that analysis with each of the sites - for the periods when data is available - that "horse race" would be more informative than mapping the ratings into postseason honors.

Again, really cool stuff.  Not a criticism - the data here is limited - just a suggestion.

---

(* You're obviously going to deal with omitted variable issues here - e.g. coaching, player development, etc. - but you can't really escape that problem.  Besides, the residuals from that study would be an interesting measure of pure coach-em-up ability.)

Jon06

April 18th, 2012 at 12:32 PM

It'd be interesting to see how robustly popular conceptions about more specific biases among the recruiting services (the rumored Rivals [Scout?] Notre Dame preference, ESPN's UA game preference, etc.) are borne out by the data, but maybe it's too time-intensive, for too small a data set, to figure out how rankings move based on the relevant events?

The Mathlete

April 18th, 2012 at 12:48 PM

Since this was based mostly on all-conference honors, ND is largely absent. I did check the average ranking for each class, and Scout is right in line with Rivals for ND recruits over the last 11 classes. Nothing stands out for either one. It's possible they are both overestimating, but they are generally in line with each other.

MrVociferous

April 18th, 2012 at 12:53 PM

These numbers are confusing as hell.  Especially when it comes to conference rankings.  I have no idea what any of those numbers mean.

M-Wolverine

April 18th, 2012 at 12:55 PM

that I didn't understand....but if you only go off the hits, don't your chances of getting more hits increase with the number of star players you predict? I mean, if you get 50 out of 100 right, is that better than 45 out of 50? Also, I'm not even sure predicting stars is the best measure. Recognizing someone that everyone says is going to be a good player probably isn't that great an accomplishment. I bet a lot of armchair YouTube watchers could do the same, if not as well. To me, with all the access to film they have, the best measure of doing a good job would be hitting on the guys who no one else likes but who blow up and do well... or at least well beyond their expectations. A guy who's a 3* to everyone else who becomes all-everything is better than saying "Shane Morris should be good" and he's, well, good. I mean, if Conley is really good, it makes ESPN look smart.

I admit the fails are harder to pinpoint (what constitutes a fail?), and there are so many factors that contribute to it (are we expecting sites to review their football ability, or their will to succeed, or their character, or all of the above?).

Jeff M

April 18th, 2012 at 2:08 PM

Following on M-Wolverine's post, if you also analyzed a recruiting service that gave every player on every team's roster five stars, wouldn't they be up at the very top? While it might be hard to evaluate misses, do we know if Rivals or the other services "inflate" their ratings -- do they have significantly more five, four, or three stars?

You pointed out that ESPN rarely gives out a five star, which I think hurts them here -- they might have future All-American X as the 20th best player in the country, but give him four stars while Scout gives him five, and so on. You did point out that Rivals has fewer five stars, but more four stars, which in terms of volume would likely feed their numbers here.

I think there are two parts of looking at misses -- who are the award winners that came out of nowhere (relatively speaking), and who are the busts? If you're a four-star or five-star recruit, the expectation should be that you contend for all-conference honors by the time you're done, so maybe we could look at the batting average -- # of four- and five-stars that get an award / total four- and five-stars. We could also look at award winners with lower averages or higher deltas between services... I'm not sure exactly how we'd want to quantify that.

This is a really great post, Mathlete. I think it answers the question of "which rating services identify the most players who are future stars." But, as a follow-up it seems like we're interested in fleshing out rating success a bit more.

M-Wolverine

April 18th, 2012 at 2:19 PM

And wondering if there was some explanation already given in the methodology that I just didn't understand. I wasn't trying to throw the baby out with the bathwater, just seeing if there was some accounting in the data for what looks to me like basic logic but often is handled in the analysis in a way that didn't come across to me. Misopogon is able to point these out to me all the time, so I ask, figuring my data-crunching skills are rusty enough that there's something obvious I just didn't get.

The Mathlete

April 18th, 2012 at 2:26 PM

In terms of incentives to give more stars to stack the deck: that probably wasn't clear enough in my explanation. Everything is based on each site's national player ranking. After a site's Top X are listed, I take the remaining four-stars and rank them based on position ranks, and then take the three-stars and do the same. Each service has each of its players with a position rank (sometimes as many as 1,500) given a national rating. For example, last year Willie Henry was ranked #484 on Scout (#38 DT), #1008 on 247 (#75 DT) and #1153 on ESPN (#97 DT). With no position rank from Rivals he was considered unranked there. Even though Ondre Pipkins didn't make the ESPN Top 150, he was ranked #166 because he was the second highest rated DT that didn't make the Top 150. Hope this helps clear it up.

Jeff M

April 18th, 2012 at 2:41 PM

That does clear things up for me. One thing I'm just curious about is how you land at 166 for Pipkins, as opposed to 162, 169, etc. I might just be misunderstanding -- is that ESPN's actual ranking for him, or something you were deriving based on his position ranking?

The Mathlete

April 18th, 2012 at 3:06 PM

Something I derived. ESPN ranked the top 150, which included the top 14 DTs. That left 9 unranked four-star DTs, of which Pipkins was the second highest ranked. Those 9 players were evenly allocated among the rankings between #151 and #249 (there were 249 four- and five-stars). The remaining ranked three-stars were spread between #250 and #1196 based on their position ranking.
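For the curious, a hypothetical midpoint formula lands on that 166 almost exactly; the window and rounding conventions are my guesses:

```python
# 9 DTs spread evenly over the 99 slots after #150 is one DT per 11
# slots; the second DT sits mid-way through the second 11-slot window.
start, slots, n_dts = 150, 99, 9
second_dt = start + (2 - 0.5) * slots / n_dts  # 150 + 16.5 = 166.5
print(round(second_dt))  # 166 (Python rounds .5 to the nearest even)
```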

JohnnyV123

April 18th, 2012 at 1:32 PM

I love the problems you are trying to tackle in your blogs, Mathlete, but there is little room for me to commend or critique them because you are never completely clear on your methodology. That forces me to dismiss it as "oh, that might be true, but since I don't understand how he got the numbers I can't really say for sure," which is a shame.

Your whole methodology section is just begging for some examples so we can understand exactly how you got your numbers, because although it surely makes sense to you, since you did it, your explanation of it is confusing.

You use a method to approximate a ranking based on stars and the position rankings (I get that, but showing some of that might be nice too), but then what does it mean for a recruiting service if they picked a guy who became an AP All-American as their 100th ranked player?

That's all I'm looking for but I appreciate all the effort you are putting in.

mlax27

April 18th, 2012 at 1:30 PM

I think there is some bias in positions that can explain some things. An all-conference QB is probably more important to a team than an all-conference tight end or receiver, so naturally services are going to rate QBs the highest, which explains why the average rating is so much better than at other positions.

Additionally, the numbers at each position make a difference. There is 1 first-team all-conference QB per conference per year; counting only the BCS conferences, that's just 6 QBs each season, drawn from the last 4 years of recruiting classes. So it would be fairly easy to have ranked them all when each year you put 10-12 inside the top 150 or whatever the number is. On a related note, there are 5 times as many all-conference linemen, so to have ranked them all would take quite a bit more skill.

So I'm not sure it is actually easier to predict QB success with so many ranked highly. 

Daniel

April 18th, 2012 at 1:30 PM

is the fact that we have 11 recruits on their top 150 list (for now) an even more impressive indicator of this class's awesomeness or further evidence that the WWL has decided to adopt Michigan as a pet program?

Obviously, it's probably some combination of both, but I was wondering what you guys thought.

club_med

April 18th, 2012 at 1:47 PM

Maybe I didn't understand correctly, but from the methodology discussion, I understood that if a guy ends up being a two-time All-American/All-Conference pick, that gets counted twice, right? Presumably the likelihood of becoming a two-time award winner increases after you get your first award, so I wouldn't think it fair to treat winning an award as additive in terms of a dependent measure. Would your results change substantially if you only looked at whether or not they won an award at all?

By the same token, the analysis might be improved by using an ordered logit model for this data, because the distance between the value of a second-team all-conference and a first-team all-conference selection is not the same as that between first-team all-conference and All-American.

MosherJordan

April 18th, 2012 at 1:57 PM

With the launch of ESPN's nationsites (we miss you, TomVH!), my bold prediction is that ESPN's SEC bias will be replaced by a nationsite bias. Harder to test, but I would guess you'll see proportionately more ESPN 150 rankings given to schools that sport a nationsite than to schools that don't, especially if they're able to track how many Insider accounts link to specific nationsites. Gotta keep the paying customers happy.

fredro12

April 21st, 2012 at 10:01 PM

Back in 2003-2008 Rivals and Scout were the standard... ESPN has a stranglehold on the media and they rate the Southern kids higher, and why wouldn't they? The Southeast produces the best talent.