OfferScore (patent pending): A new way to rate recruits - needs input

Submitted by MaizeAndBlueWahoo on September 8th, 2009 at 9:48 PM
OK.  So when it comes to recruiting, Brian likes to qualify the guru rankings with the prospect's actual offer list.  I think most of us are on board with this idea, including myself.  If a prospect's rankings are meh-ish, but Ohio State and Florida have also extended offers, that's probably a guy we want.

But we like numbers also.  You can crunch numbers and analyze them. With that in mind I've come up with the OfferScore (I've put some effort into this and that calls for a cheesy name, deal with it) to help quantify the value of a prospect's offer list.  I had planned on deploying this on my own blog first with UVa's own prospects and then also doing a diary for Michigan's, but the idea needs at the very least some fine-tuning, so I turn to the audience of zillions as opposed to the audience of hundreds for assistance.

The basic methodology is this: Scout and Rivals have a mathematical formula for ranking each school's class from 1 to 120.  So:

- I averaged each school's Rivals class ranking for the years 2005-2009 (five years worth).

- I did the same for Scout's, then took the two averages and averaged again for a final list.  The result is a list from 1-120.  Each school's "value" is their ranking.

- Then, for an individual player, I took their five best offers and averaged them, leaving out the offer from the school they actually committed to.  (The reason for this is that if you were ever to use this method to rank classes against one another, schools like USC would have a built-in advantage over the rest of the conference even if that particular year they did nothing but poach FIU's recruits.  In other words, without that exclusion, a ranking would be heavily biased toward the already-highly-ranked schools.)

- Right now, the result for each player is a number between 1 and 120, with stars (because this is recruiting) semi-arbitrarily assigned as follows: ***** = 1-12, **** = 13-30, *** = 31-60, ** = 61-90, * = 91-120.  This also means that offers from the top 12 schools (USC, Georgia, Florida, LSU, OSU, Alabama, Michigan, OU, Texas, FSU, Miami, Auburn) are considered "five-star offers" though that really has no bearing on the final product.

Sample: Tate Forcier had like 33 offers. "Top five" non-Michigan offers were Florida (3rd), LSU (4th), Auburn (12th), Tennessee (14th), and Texas A&M (16th).  That averages out to 9.8, so that's his score, and it makes him a five-star recruit according to this.
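The steps above can be sketched in a few lines of Python (a toy illustration; the school ranks here are just the ones from the Forcier example, not the real 1-120 list):

```python
# Toy school ranks taken from the Forcier example above; the real system
# would use the full 1-120 list built from the 2005-2009 class rankings.
SCHOOL_RANK = {
    "Florida": 3, "LSU": 4, "Michigan": 7, "Auburn": 12,
    "Tennessee": 14, "Texas A&M": 16,
}

# Semi-arbitrary star cutoffs from the diary: 1-12 = 5*, 13-30 = 4*, etc.
STAR_BUCKETS = [(12, 5), (30, 4), (60, 3), (90, 2), (120, 1)]

def offer_score(offers, committed_to):
    """Average the five best offers, excluding the school committed to."""
    ranks = sorted(SCHOOL_RANK[s] for s in offers if s != committed_to)
    top = ranks[:5]
    if not top:              # only offer was the committed school -> "null"
        return None
    return sum(top) / len(top)

def stars(score):
    if score is None:
        return None
    for cutoff, n in STAR_BUCKETS:
        if score <= cutoff:
            return n

forcier = ["Michigan", "Florida", "LSU", "Auburn", "Tennessee", "Texas A&M"]
print(offer_score(forcier, "Michigan"))  # 9.8
print(stars(9.8))                        # 5
```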

I've done that for all the players in Michigan's 2009 (technically, 2009 should use the 2004-2008 classes to rank the schools, but I'm not going back and recalculating that) and 2010 recruiting classes.  Here's how that shakes out:

Name (Score)

2009

Five stars:

William Campbell (6)
Denard Robinson (6.4)
Je'Ron Stokes (9.2)
Tate Forcier (9.8)

Four stars:

Anthony LaLota (16.6)
Craig Roh (16.8)
Quinton Washington (18)
Taylor Lewan (22.8)
J.T. Turner (26.2)

Three stars:

Michael Schofield (33.6)
Brandin Hawthorne (33.8)
Mike Jones (39.4)
Vincent Smith (44.6)
Jeremy Gallon (46)
Fitzgerald Toussaint (47.6)

Two stars:

Isaiah Bell (68.67)
Cameron Gordon (73.4)
Thomas Gordon (81.25)

Null:

Brendan Gibbons
Teric Jones

2010

Five stars:

Marvin Robinson (3.2)

Four stars:

Christian Pace (23.2)
Devin Gardner (25.4)
Kenny Wilkins (27.8)
Terry Talbot (29.4)

Three stars:

Austin White (30.6)
Jeremy Jackson (32.2)
Ricardo Miller (33.2)
Stephen Hopkins (48.6)
Terrence Talbot (51.6)
Cornelius Jones (52.33)

Two stars:

Courtney Avery (62.4)
Drew Dileo (67.5)

Null:

Tony Drake
Antonio Kinard
Jerald Robinson
D.J. Williamson

As you can see, one of the major problems and something I'd like to find a way to work around is that guys who have no other offers don't get a score.  As a corollary, it tends to bias against guys who committed early in the process.  Another thing I've wondered is if that's the right way to display the results.  It seems to work OK, but who knows.

Also, it's a smidge unfair to guys who have not a hell of a lot of offers, and one especially "low-ranked" offer.  Best example in that list is Paskorz, who had four other offers: Minnesota, Pitt, UVA, and Bowling Green.  Nobody would call BGSU a major factor in his recruiting - it was Pitt and UVA that Michigan beat.  The BGSU offer is a major anchor - take it out and he goes from a very low three-star to a pretty high one.  Then again, you could say that if he was a better prospect, he'd have earned another good offer somewhere.

Obviously there are advantages to this, though.  It passes the smell test.  It wouldn't stand well on its own, but then again, neither do any of the guru rankings, really.  Ranking schools against each other like this would prevent things happening like Will Hagerup dragging someone's ranking down.  Whoever gets him is going to be dinged in the Rivals/Scout rankings because he's a punter and therefore not highly ranked.  I do think it's likely to be more predictive of college success than any individual guru site's ratings.  And I think it'd be a better way to rack and stack schools against each other in recruiting rankings, because it's less based on subjective measurements.  As a side bonus, it created a list of schools from most to least "valuable," which to me is worthwhile because it is purely reflective of the schools' relative attractiveness to recruits.

So, what I need is critique.  Good idea/stupid idea?  Where can it improve?

The only thing I can think of to change at this time of night is the star rating scale. The number of 5* players is significantly smaller than the number of 4* players, and there are fewer 4* than 3*... While I did notice that your 5* group was indeed the smallest, I think it was still too large. I would cut the 5* off somewhere between 6 and 8 to be more representative of the recruiting services.

As for a way to drop the anchors, just throw out the lowest ranked school as an outlier. While technically you should throw out the top school as well, then, I think this is justified because some of the lower ranked schools may have gotten into recruiting before the player became a big name recruit. Or they were the local school and were hoping to catch the home grown large fish. Or you're taking the top half of their offers, up to 5 schools. Or you just feel like dropping the low ranked school since it's your rating system and that's what you want to do. Or whatever.
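The anchor-dropping idea would be a one-line tweak to the averaging step. A sketch (the offer ranks here are made up, roughly a Paskorz-type list with one Bowling Green-ish outlier):

```python
def offer_score_trimmed(ranks):
    """Average up to five best offer ranks after dropping the single worst
    (highest-numbered) rank as an outlier.  `ranks` is the player's offer
    list with the committed school already removed."""
    ranks = sorted(ranks)
    if len(ranks) > 1:
        ranks = ranks[:-1]   # drop the anchor offer
    top = ranks[:5]
    return sum(top) / len(top)

# Four offers, one of them a big outlier: without trimming the average is
# 61.25; with the anchor dropped it's 45.0.
print(offer_score_trimmed([45, 40, 50, 110]))  # 45.0
```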

The problem with a smaller range for five-stars, 1-8, say, is that a score smaller than 3 is for all intents and purposes impossible. A score of 1 would require the recruit to have received only a USC offer to go with his Michigan one. Not gonna happen that way. Marvin Robinson actually has an astronomical score, likely the second-highest achievable, because it's the result of five of the six top schools offering him (only LSU didn't).

Besides, aesthetically I just like it better this way. If I were to match it up with the services there would be bajillions of three stars - like, 20-90 would be three-stars - and no one-stars because they reserve that for prospects they haven't evaluated.

It's kind of recursive. You use the Rivals/Scout star rankings of the players to determine the team scores, then use the team scores, with not 100% valid offer lists, to rate the players. I'm willing to bet USC is mostly 5 stars, because they have a USC offer as their number 1 offer.

He dropped the school to which they committed. So they don't have that offer on their list. So it's somewhat less recursive than it would have been had he kept the signing school on their list.

Hmm, I missed that. It's still somewhat of an echo chamber though. Recruits from the southeast tend to have Florida, LSU, Auburn, Georgia, etc offers. It's like how recruits in the midwest have more Big 10 offers.

Does it account for schools with offer cannons (UM, UF) vs very selective (OSU)?

No, it doesn't. Granted there is that possible issue of a "committable" offer vs. a "not really committable" one especially with the offer-cannon schools. I'd need a legion of TomVH's at my disposal to get that deep though.

Offer Cannons automatically make mediocre recruits appear more appealing than they would otherwise. I suppose that goes back to the outlier idea and maybe he should drop both the top and bottom offer as well as the committed school, THEN take the top half, up to 5 offers.

Sounds like it would increase the amount of effort somewhat.

It is a little bit recursive, something I thought about in the middle of doing it. I think it goes through enough number-churning to get rid of that problem, though. By the time you get to the end result, there isn't enough resemblance between a player's Rivals/Scout star rankings and their ranking here.

But I did specifically say that a recruit doesn't have the school they committed to counted, precisely because of the problem you mentioned.

I should also have mentioned that I used Rivals' offer list, which, while not 100% perfect, is much much more accurate than Scout's.

Would it be more beneficial to use the standings of the team from actual play? The team rankings are sometimes skewed in that they are based on the first 20 commits (I believe that's right). So teams with small classes are somewhat punished and teams with large classes don't necessarily see increased results.

My point is that an offer from a national championship team is worth a lot, but if that team only has a 15-person class, it won't be ranked that high, relatively speaking.

Does this make sense? I feel like I am not being clear.

EDIT: I would also add that I am not sure I would equate class ranking with quality of offer. For example, LSU won the BCS in 2007. However, that recruiting cycle (class of 2008) they finished 11th according to Rivals. It seems like an LSU offer that year would be worth more than 11th.

It's also possible I am completely misinterpreting your study.

I thought about using some form of actual results. AP rankings might be useful. Unfortunately they stop at about 38 or so depending on how many teams get votes. And then you'd need a way to quantify the rest - is a 6-6 season in the WAC better than in C-USA? I can see that being a major effort in and of itself.

Plus, there's "prestige" to be taken into account. I mean, we had a lousy-ass season last year but a Michigan offer still carries a lot of weight. FSU has been middling lately but damn if they're not tough as hell to recruit against. Boise State's been awesome lately but if a recruit gets Boise State and, say, Cal in the mail, Cal is where he'll probably end up. How do you quantify prestige?

I tossed all these questions around and then settled on the current system. My concerns were allayed when I found that the system churned out school rankings that do a pretty damn good job of combining prestige with on-field results. Georgia was kind of a head-scratcher at #2 along with South Carolina at #15, but otherwise the list turned out pretty damn close to how you might rank them subjectively. But when you recruit against South Carolina you recruit against Steve Spurrier, so even that makes some sense.

I do realize small class sizes hurt rankings, which is exactly why I went back five years to average it. That's long enough that schools will all have big and small classes and complete turnover occurs.

I completely agree with "me". You could do something like use Jeff Sagarin's ratings over the last 10 years to determine the value of an offer of each school. I'm sure over the last 10 years if you average it Miami and FSU will look better and most of the Mid-majors will look worse. I can't think of many teams that have been consistently great over the last 10 years and haven't seen a recruiting bump or more value from their offers as a result of it.

I was going to put something about Sagarin in there. I understand the OP's logic in his system, and by all means it's his deal, but I would have used real standings / results to determine the value of an offer. If you want to get outside of the box and really go somewhere with this (which I think it has a ton of potential to do), get away from Rivals and Scout and all of that and do it with real data. Prestige is something, but if you're trying to determine which class is most likely to produce a winning product on the field, you should trust the team's performance on the field to be the indicator of what's a good offer. I think Sagarin is a great idea. He has a lot of history and respect at ranking teams, and his stuff is always very reasonable.

One problem that I see with it is that your results are still based on what Rivals and Scout consider to be good recruits. The ranks of the schools is totally based on how Rivals and Scout rank the players that attend those schools. I would have to imagine that you have a problem with how those sites evaluate players (I know I do), which is probably why you want to make a new system. I doubt your results will be much different from Rivals and Scout, but I hope they are. It's nice to see someone thinking about recruiting in a different way.

One thing I can think of as a rebuttal to that is that players who are recruited to spread offense schools have frequently been of lower star rating than if they were being recruited to a pro-style school. This is an artifact of the ranking systems, not necessarily the quality of the player.

For a large number of players, it doesn't matter what the offensive style is; it's more about the skill. No school is going to turn down a Seantrel or a Shariff just because of their formations. Those athletes will be used in some productive way no matter what.

The result is that his system will get rid of the detrimental ratings players get for being 'spread type' players because they will then be grouped in with other high end players that are recruited to winning programs.

"One thing I can think of as a rebuttal to that is that players who are recruited to spread offense schools have frequently been of lower star rating than if they were being recruited to a pro-style school."

Florida?

I was trying to think of a coherent way to get the idea across that teams like Florida who recruit both 'spread type' and 'pro-style type' players get good athletes who are frequently ranked well (player star wise) regardless of the system they are fitting into.

I think his system will eliminate the downgrade that smaller, faster players get for the fact that they are not 'NFL type players.' I guess I'm thinking mostly of the slot ninja type that gets downrated for not being 'tall enough,' and the tweener DE/LB and LB/S types that may well have a future in the NFL due to their athleticism.

Or I could be completely confused.

Essentially, and I agree with this, we need rankings that don't have any component for NFL potential. But at the same time, players with NFL potential are likely to be better players on the whole. Jeremy Gallon was still a 4-star, but he was also a top-3 slot receiver.

It's like this to me: Forcier will never have Matt Barkley star rankings because the NFL doesn't need a 6'0" passer when they can get a 6'3" version of Forcier. Would you rather have Forcier or Barkley?

I guess what I'm saying is that we don't need another set of rankings to tell us the same thing: players top schools want are good. Freakishly talented players like D-Rob are good. M-Rob is good, but he doesn't have safety speed, or perfect OLB size. So, 5.8 on rivals. Makes sense. Ricardo Miller does not have elite ZOMG speed, so, 3star. I feel like our recruits are appropriately ranked within the confines of how Rivals and Scout operate. Someone like Ken Wilkins could well be a 5* in RR's system, maybe he's the kind of guy you use against Illinois, but not Wisconsin.

I look at it this way, by Rivals, anyone in the top 100 is probably unequivocally awesome and desirable. For 100-250, probably decently accurate. Past that, into 5.7 and below, go by offers. Or look at camp performances. Roh rocked face at the UA game. He rocked face against WMU.

I say we trust that the coaches know what they are doing and stop caring about the rankings. This is seriously just ONE slightly down year, and there is still plenty of time to close strong. Rich Rod drinks his coffee, he'll come through.

While I think that this is an interesting idea, I think that this model has several problems. The biggest of these is that recruits that commit early in the process often receive fewer offers, if other teams believe that the commitment is solid. This is especially problematic for players like Miller and Gardner, who are far less likely to receive offers from, for instance, USC, if they feel that they are unlikely to get the player and decide not to offer. Granted, this problem also exists in the Rivals and Scout ratings, but I don't think that this system really corrects for this. Secondly, these rankings cannot quantify the overall level of available talent at different positions from year to year. For instance, this year there is a relative lack of elite talent at QB and DT, so QBs and DTs are receiving offers from schools that they may not in an ordinary year. Finally, these rankings elevate the value of kickers and punters, as both positions typically max out at three stars (justly, in my opinion), yet a three star kicker can receive offers from teams such as Michigan and Ohio State, which can elevate their ranking to a four or (in rare cases) five star unreasonably.

See it as a position rank? Like the kickers that get recruited by OSU/FL/Mich/ND/LSU would be considered 5-star kickers, but not 5-star when it comes to overall.

When a player like Ricardo Miller gives an early/rock-solid verbal, do schools tend to withhold offers? If they do, this would affect your formula.

@Seth also: I do mention that issue in the diary. It is kind of a problem with the model that I'm not sure yet how to work around. It's possible you could dive a little deeper and guess which schools might have offered in the future, but that's another thing that needs a lot of TomVH's to do properly.

The issue is that two recruits with the exact same offers, say Michigan, Illinois, Tulsa, and Ole Miss, would have completely different grades if one committed to M and the other to Tulsa. Same offers, but one will likely be a mid three-star and the other maybe a four-star. Admittedly, I don't have the solution to this problem, just thought I'd point it out.

Y'know, I didn't think of that. But now that I give it some thought, it doesn't bother me too much. If Tulsa gets that player, their fans will be ecstatic. If we do, we'll be less so. Plus, ultimately you'd use this to determine how well schools did recruiting against each other, and Tulsa deserves a bigger boost for outrecruiting Michigan than Michigan does for outrecruiting Tulsa.

It's a weird artifact of the system, yeah, and I don't have a solution either. But I think it's one I'd be willing to live with.

With all recruits saying "I got an offer from __________ or _________," how will we know if it's an actual offer or a "hey, we're kinda interested"?

Also, I'm not sure how much of this you might find, but I can see where someone commits early and other schools don't find it worth the effort to recruit him hard because the recruit is a "lock" to go somewhere. This could end with someone being a great recruit, but lacking top-end offers.

And what about legal/behavioral problems that cause teams to not offer?

I really think that you are onto something good, just a few potential tweaks.

you're off to a great start. it requires some tweaking but the fact that your results roughly approximate scout and rivals ratings shows something.

Not to be that guy, but: Of course they approximate those ratings, they are based on them, and they wouldn't pass the smell test if they weren't similar.

I'm not sure why an alternate system is needed when 3rd party recruiting services' rankings are already an excellent predictor of collegiate success and NFL ability (which is almost always indicative of a good college player). Doc Saturday crunched the numbers, and the results showed a 5 star is way, way more likely to be an All-American than a 4 star, who in turn is much more likely than a 3 star, and so on. Adding to this, there is also a strong correlation between team recruiting rankings and program success- the top programs for recruiting are the top programs for winning (Miami/FSU are commonly brought up as a counter-argument, but their current struggles are blips in an otherwise steady pattern of excellence). Despite people's cherry-picked exceptions ("Callahan Bright sucked and Mike Hart was awesome, recruiting rankings are worthless!"), as a whole Scout/Rivals are very accurate. They are not perfect, and some systems value some types of players more than others, but as a whole it works.

http://rivals.yahoo.com/ncaa/football/blog/dr_saturday/post/Hug-your-fr…

...with that said, I think it's cool you're looking for new metrics. Just not sure they're going to do better than recruiting service rankings.

Not necessarily as a stand-alone talent evaluator, but something to compare other rankings to and find discrepancies. It could probably use more number crunching in a Minitab-esque program, and a better way to rank teams would be nice. The latter is tough... the easiest way to go more in depth would be to use Phil Steele's class rankings, as they are a composite of seven different recruiting sources and would probably be a better representation of reality.

Also, the difference year in and year out between different ranked classes is hard to predict. It might be much easier to separate schools into tiers, like 5* tier for the top X number of schools, 4* for the next X, so on so forth. The numbers should become easier to work with, and easier to understand for people unfamiliar with this system.

E.g., a player whose top 5 are all 5* programs would be a 5. A player who has one 5*, three 4*, and one 3* would be a 4. That gives an easier system to compare with other recruiting rankings, and makes it easier to see things that don't entirely fit. Like M-Rob: he would be a 5* here, so compared to his guru rankings we can see that coaches think he will be better than the recruiting services do.

My final point relates to that last one, as it's where I think this could be useful. It could help identify what coaches think about these players, and perhaps even identify trends where coaches are generally right or wrong, using hindsight of course.

For a general idea of the effect of early commitment on the offer sheet, you can probably graph the number of offers a student receives as a function of commitment time for students with comparable star ratings.

If this function is linear, then maybe a student who commits twice as early as another should receive a proportionally adjusted offer rating in your system.

The issues a lot of posters had got me wondering if somehow you could actually use some rating system vis a vis Rivals, Scouts and Jeff Sagarin's and maybe avg. them out. You might use different weights if you prefer one over the other and then maybe avg. out these sites rating with OfferScore ratings (again maybe put more weight on OfferScore). This way you would tackle a lot of issues raised here.

I do understand where this rating system is coming from and maybe you abhor the scouting sites but this method does eliminate a lot of issues.

I think the key, though, is finding a way to rank the schools well enough. If you have an accurate accounting of where each school ranks in recruiting prestige, you can better gauge the value of each offer.

I think using just the Rivals rankings doesn't tell the whole story.

For each year, I would have a formula that takes into account the following (weight each as you see fit)

• Average Rivals Ranking over 5 years
• Average Rivals Position Ranking over 5 years
• Last year's blog poll finish
• Average AP finish over 5 years
• All-time winning percentage
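If you did go that route, the combination could be a simple weighted sum with everything expressed on a rank-like scale. A sketch of the idea (all weights and input numbers are placeholders, not real data):

```python
def composite_rank(rivals_avg, rivals_pos_avg, blogpoll, ap_avg, alltime_pct,
                   weights=(0.35, 0.15, 0.15, 0.25, 0.10)):
    """Blend the five factors above into one rank-like number (lower = better).
    All-time winning percentage runs the other way (higher = better), so it's
    converted to a 1-120 rank-like scale first."""
    pct_as_rank = 1 + (1 - alltime_pct) * 119
    factors = (rivals_avg, rivals_pos_avg, blogpoll, ap_avg, pct_as_rank)
    return sum(w * f for w, f in zip(weights, factors))

# Hypothetical school: 10th in Rivals, 12th positionally, 8th in the blog
# poll, 9th in the AP, .730 all-time winning percentage.
print(round(composite_rank(10, 12, 8, 9, 0.73), 3))  # 12.063
```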

Also, you have a problem with there being some kids who accept and report like a gazillion offers, while others make it known only a few schools are ever in the running (so why waste an offer). If a kid has offers from the entire Sun Belt, plus Ohio State and USC, the average will fall around the Sun Belt.

Early commits are especially susceptible to this. Greg Brown, for example, got only a few offers before committing as a junior this week. He could end up a 5-star, yet your metric, excluding Michigan, ends up giving him the buzz of a typical Sparty (2.5 to 3 stars).

Positional rankings also cause a hassle. Offensive linemen get a lot of offers, because teams need a lot of offensive linemen. Punters, not so much.

Your answer, then, would be to limit the number of offers considered. Take the best 10 AND the best three. So a kid with a full spate of offers ends up being the OSU/USC/FLA average, plus the OSU/USC/FLA/ND/TEX/OK/ALA/PSU/MIA/FSU average.
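As a sketch, that two-window idea might look like this (the ranks below are illustrative, not real schools):

```python
def dual_window_score(ranks):
    """Average the top-3 average with the top-10 average, so a long sheet of
    mid-major offers can't swamp a couple of elite ones (and vice versa).
    `ranks` is the player's offer ranks, committed school already removed."""
    ranks = sorted(ranks)
    top3 = sum(ranks[:3]) / len(ranks[:3])
    top10 = sum(ranks[:10]) / len(ranks[:10])
    return (top3 + top10) / 2

# A "Sun Belt plus two elite offers" sheet: the top-3 window keeps the two
# elite offers from being buried by the other eight.
print(dual_window_score([3, 6, 60, 66, 72, 78, 84, 90, 96, 102]))
```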

Perhaps you could use the average final computer ranking, rather than the recruiting site rankings. This way, you use a conglomeration of unbiased statistical rankings and remove the recursive nature of your system. Also, to address prestige, you would go back further than four years (although you may want to weight more recent polls slightly higher). Finally, you also probably should devalue offers from schools that throw them out like candy (i.e. Michigan, Florida, etc.), because a good recruit offered by one of these schools supposedly will receive more good offers from other schools.

This does not take into account the problems stemming from early commits or from 'system' players. The only way I can think to solve that is to include the school that the player committed to. While, as noted in the OP, that will tend to favor the better teams, I think that the built in assumption that the better teams are getting better recruits is a valid one in the model, especially when the committed school would be worth a maximum of 20% of the OfferScore.

I like your idea, but baseball has been doing something similar for a long time. The stats are refined enough to know what SHOULD happen to a player coming from one division to another. For example, a player from Japan should be able to perform in the MLB at a level 98% as good as the Japanese stats.

It sounds nerdy, because it is. It is also very reliable.

They have stats for individual parks and moving from one level to the next.

Who is going to offer Devin Gardner now?

But maybe it isn't. I believe it was Lefty Driesell who said something to the effect of, "At least I know who the competition is," when talking about recruits who have "committed." So maybe Tress (or JoePa) comes in and tries to take him. (Pryor will graduate before Forcier and Robinson...there will be a logjam at QB...insert stupid buckeye argument here.) (Old man can argue...Clark is a senior and you can compete as a freshman...I don't know who PSU's backup is...)

I guess there is a potential, how secure is the commitment question to be asked. Is there a family legacy involved? This could be a case by case question to ask. However, the Sabermetrics approach can take some of this into account. I wonder if some of this has been done already...

It is an interesting idea. I wonder if there is a way you could get this into some sort of journal if you stuck to a rigorous mathematical formula. You might consider taking geography into account too as this basically leaves the talent evaluation to various football coaches, and a football coach is more likely to go to TX than to WY. (Also, a coach will send his best recruiter to various hotbeds rather than football purgatory.)

Though I find myself wondering, after reading some of the comments, if this method is trying to be a "master metric" attempting to include all other methods of player ranking, or simply another factor measuring a recruit in a specific way.

You'd probably have an easier go of it if you stuck to the latter of those two; I'd suggest cutting out everything that isn't specifically necessary to determine the strength of a recruit's offers.

Perhaps, and this might be preposterously difficult, instead of (or in addition to) taking into account someone else's method (i.e. Rivals or Scout) of ranking a school's recruiting classes, include a mix of a school's average season performance rankings.

Another factor to consider might be how stingy a particular school is at doling out offers, though I'm not sure if there is a readily available running total list for all schools.

Anyway, just some thoughts, well done overall though.

I didn't read through the comments, so this may have already been thrown out there, but to solve the few/no offer types, you could assume they are replacement level recruits for that program. For Michigan, this is pretty evidently 3 stars. If you wanted to tweak using the numbers you gathered, check out the distribution of Michigan recruits going back a couple years. Replacement level would be something like Average - 1 SD.
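Using the 2009 numbers from the diary, that replacement-level fill-in might look like the sketch below. Note the diary's scale runs low-is-good, so "average minus one standard deviation" in quality means *adding* one SD to the score:

```python
from statistics import mean, stdev

# Michigan's scored 2009 commits, straight from the list in the diary.
scores_2009 = [6, 6.4, 9.2, 9.8, 16.6, 16.8, 18, 22.8, 26.2,
               33.6, 33.8, 39.4, 44.6, 46, 47.6, 68.67, 73.4, 81.25]

# Replacement level: one standard deviation worse than the program's mean.
replacement = mean(scores_2009) + stdev(scores_2009)
print(round(replacement, 1))
```

With these numbers it comes out somewhere in the mid-50s on the 1-120 scale, which would slot the no-offer guys in as very low three-stars instead of nulls.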

Looking over the comments I see two main themes:

1) It biases against players who committed early and therefore didn't get as many offers as they might have.

This I knew would happen from the get-go, although I was surprised at how many there were that had only their Michigan offer.

A corollary is that information on the offers a player has is often sketchy. Admittedly that's true. I think they're solid enough though, at least on Rivals. Scout I don't trust.

A half-solution might be to take recruits who commit before June 1 (that is, the end of the May evaluation period) and add in the "interested" schools that Rivals and ESPN list. ESPN is actually usually pretty accurate early in the process. (Later on, they don't want to get their hands dirty unless the recruit is at their UA game.) Take Greg Brown: they list CMU, MSU, LSU, and Iowa. Rivals basically says the same. Those four schools would generate a pretty fair score, though now we get to the argument about CMU being an unfair anchor... eh, nothing's perfect. In any case, splitting between early and later commitments and using June 1 as the cutoff might help fix the bias.

2) The system used to rank the schools should be fixed.

I understand the semi-recursive nature of things and the reliance on the system I'm trying to improve on here. Unfortunately it would just take a lot of work to use anything any more complicated than that, and the ranking makes a lot of sense as it is, so the cost-to-benefit ratio doesn't add up. About the only equal-to-lesser work-intensive idea there is using Sagarin ratings. Misopogon, bless your heart you put a lot of thought into that and don't think I don't appreciate it, but I'd need a Cray to figure all that out.

The other thing is this: On-field success isn't the only thing that attracts recruits. I think using the polls would end up overrating the BYUs and Texas Techs of the world. Using the Rivals and Scout rankings allows this ranking to be based on attractiveness to recruits as closely as possible; much closer than any other system could without getting subjective.

Ultimately, the system is probably much better for comparing the classes of two schools than it is for evaluating individual players.  On average, it's fairly decent for individuals, but that's because it's really lousy for some (the zero-offer crowd) and actually pretty spectacular for others.  Take the Brothers Talbot.  Their guru rankings are very similar across the board.  OfferScore (Patent Pending) tells a very different story and says we should be genuinely excited about Terry.

Hey MBW - do you live in VA? I live in Richmond and would like to gather a group to see M recruiting prospect Aramide Olaniyan (Woodberry Forest) play. His team plays at Collegiate in Richmond in October. Any interest?