somehow we're only 124th
Putting The B And S In BCS
Note: UFR coming tomorrow. I have to grab the video myself and that's slowing things down quite a bit. Also, I hate doing this so end up doing other things. I'm almost done, though.
A mighty hat tip to Vijay of IBFC, who brought this to my attention.
I've defended the idea of computer rankings in this space before -- long story short, human pollsters would be just as flawed if they didn't know the scores of any of the games, too -- while knocking their current implementation in the BCS. What I didn't know is how reprehensible certain implementations are. Take, for example, Richard Billingsley of Billingsley report fame.
I like my computer rankings studious and complicated. Whisper math into my ear: regression! matrix! directed acyclic graph! Oh yeah, that's the stuff. I like to think of the creators of these things as four-eyed math PhDs with the complexion of cave fish and hentai addictions. In short, I want them to be smart. I want their writing to be an impenetrable mass of equations.
Believe it or not, the system is designed after our own United States Constitution. But don't hold that against it! Although at times I feel this system is just about as complicated as our Federal Government, there is one huge difference..... this one works!
Take my rankings, please!
This would be charmingly odd if there were some good old-fashioned impenetrable and rigorous equations. This is not the case. The Billingsley formula is a ridiculous hodgepodge of kludge factors combined with good old-fashioned human input.
Inanity #1: a team gets more credit for its ranking at the time the game was played than its actual ranking:
For many years I struggled with whether a team's SOS should be calculated by using a teams rating and rank on the day the game was played, or use an opponents most recent rating and rank. There are excellent arguments for both sides. Early on I used ONLY GAME DAY stats. I felt very strongly that if Georgia was ranked #1 when they played #5 Florida, the Gators should get credit for playing a #1 team, even if Georgia later fell to #10. THE MIND SET OF THE GAME, THE INTENSITY OF THE GAME, REVOLVED AROUND PLAYING A #1 TEAM. How can the mind set and intensity of a game be overlooked 4 weeks later? But critics will say "but what if Georgia fell to #50, do the Gators still get credit for playing a #1 team?" Very good point. It does happen. Rankings can fluctuate dramatically during the course of a season. Look at Alabama in 2000.
Several years ago I made a compromise that I think has worked exceptionally well. I use a combination of both, with percentages tilted slightly towards the game day rating and rank. This way both are taken into account. The current rankings are not totally discounted but more credit is given to the original "mind set and intensity" of the game.
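To make the quoted scheme concrete, here is a minimal sketch (in Python) of what Billingsley seems to be describing: opponent credit as a weighted blend of the rating the opponent had on game day and its current rating. The 60/40 split and the function name are my assumptions; all he says is that the percentages are "tilted slightly towards" game day.

```python
def opponent_credit(game_day_rating, current_rating, game_day_weight=0.6):
    """Blend the rating an opponent had on game day with its current
    rating. The 0.6 weight is an assumption; Billingsley only says the
    split is 'tilted slightly towards' the game-day figure."""
    current_weight = 1.0 - game_day_weight
    return game_day_weight * game_day_rating + current_weight * current_rating

# The Georgia example from the quote: the Gators beat a team rated #1
# (say, rating 100) on game day that has since fallen to #10 (say,
# rating 80). The credit barely moves off the game-day number.
print(opponent_credit(100, 80))  # 92.0
```

Which is the whole problem: whatever the exact percentages are, the blend guarantees that reality can never fully overwrite Billingsley's snapshot opinion.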
First of all, this is a man who thinks "mindset" is two words. Second... that's completely insane. USC gets little credit for playing Arkansas because Billingsley didn't think Arkansas was good going into the season. When reality disagrees with his rankings, he treats it as noise. If Notre Dame had proceeded to go 2-10 after Michigan waxed them, they'd still get a huge boost because ND was #2 when they played.
Inanity #2: arbitrarily rank everyone and then give that ranking real weight because you want the poll to look cool in week #2, when no one cares.
The Season Progression may need a little explanation. They are really a very simple, yet powerful set of rules. I want my poll to "look logical". In the first week of the season if Florida St. beats #107 No. Illinois, and Ball St. beats #58 Memphis, I don't want Ball St. ranked ahead of Florida St. just because they both have 1-0 records. That's not logical. We ALL KNOW Ball St. is not in the same league with Florida St., at least not at this juncture. Let them EARN IT first. Let them prove it over due course of time, then my poll will respond accordingly. That's what I mean by Season Progression. All of my teams start out with a rank, #1-#117, because they ARE NOT ALL EQUAL. We KNOW THAT from past experience, so why not use that experience to begin with. Some would say starting all teams equal, or all at 0, is the only FAIR thing to do. I say it's the most UNFAIR thing you can do, and besides its just plain illogical.
I think some would say that APING HUMAN POLLS is RIDICULOUS because if THAT'S WHAT WE WANTED TO DO we wouldn't PUT COMPUTER POLLS IN THERE IN THE FIRST PLACE.
Inanity #3: adamant opposition to having past seasons carry over except when he wants them to.
As I have said many, many, times, I am adamantly OPPOSED to PRESEASON POLLS. They do an incredible injustice to College Football. I could state COUNTLESS examples over the last 50 years of such injustices, but let's look at the most recent glaring example of Texas in 2001? How in this world did Texas deserve a Top 5 Pre Season ranking after having come off a 9-5 #29 campaign? Moving 26 places without ever playing a down of football? Based on what, a new hot quarterback? Give me a break. The sportswriters may as well hold a lottery in George W's 10 Gallon Hat. It would be just as accurate. Enough of that... don't get me started.
Wow. That actually sounds sane. Past seasons shouldn't affect this one at a--
I am convinced that carrying a team's RANK over from one season to the next, and then making the rules for the first few weeks of the season "more relaxed" is the best method to use. To accomplish this I created a different set of rules for the first 4 weeks of the season.
There's no ellipsis there! That thought follows the previous one without a word omitted! Aaaah!
Inanity #4: heavily weighting the last week.
Now, let's go one step further. I don't want a team jumping 60 places from #70 to #10 in November either. You just simply can't turn your season around in one game, even if you beat a #1 team. I want people to be able to look at my poll, look at the previous week's contests, and say, "oh, I can see how he did that". So there are specific rules in place that PREVENT those things from occurring. I guess you could say it "forces a team to progress through the season in a logical fashion". I don't believe a team should be #50 in week #8 and #1 in week #9. I wanted to create as much STABILITY as possible in the poll, especially in the Top 10. If a team moves up, I want a person to be able to see WHY, through looking AT THE MOST RECENT PERFORMANCE FIRST, then taking the other factors into account. Additionally, I feel very strongly that the most recent performance should carry a stronger weight. A team should be better in November than they are in September.
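The "stability" rules read like a hard cap on week-to-week movement. A minimal sketch of that idea, assuming a cap of ten spots per week; the cap size and the function name are invented for illustration, since Billingsley publishes no actual limits, only the claim that #70 can't become #10 in one week.

```python
def clamp_rank(previous_rank, computed_rank, max_jump=10):
    """Limit how far a team may move in one week. max_jump is
    hypothetical; the quote only rules out a 60-spot jump."""
    if computed_rank < previous_rank - max_jump:
        # Results would move the team up too fast; cap the climb.
        return previous_rank - max_jump
    if computed_rank > previous_rank + max_jump:
        # Cap the fall the same way.
        return previous_rank + max_jump
    return computed_rank

# A #70 team whose results would justify #10 only gets to #60,
# which is exactly the behavior the quote describes.
print(clamp_rank(70, 10))  # 60
```

Note what a cap like this does: the poll stops measuring what happened and starts measuring what Billingsley finds plausible.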
Note that this is exactly what human polls are often accused of: that when you lose is more important than strength of schedule or overall performance. Ask Louisville and West Virginia about this. The idea of a computer poll is to get rid of this bias.
Inanity #5: arbitrary bonuses.
A team's defensive performance is given a special look because in my mind winning the game it's self is a reward of offensive performance, but the defense often gets overlooked. Great teams are built on solid defense and I feel that should be rewarded, even if it is so very slightly. The reward is based on holding an opponent to less than a touchdown, on a scale of 0-6 points, a shutout getting the most benefit. Also, after all is said and done, a final look is made at a team's overall record, and a very small adjustment is made in that comparison. If a team has a winning record, even by just one game, say 3-2 on the season, they get some reward for that.
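For concreteness, a hedged sketch of the two adjustments in the quote: a 0-6 point defensive bonus for holding an opponent under a touchdown (shutout worth the most) and a small winning-record nudge. The linear scale and the size of the record bonus are my guesses; the quote gives only the endpoints.

```python
def defense_bonus(points_allowed):
    """0-6 bonus points for allowing fewer than 7 points, with a
    shutout worth the full 6. The linear scale is an assumption;
    only the 0-6 range comes from the quote."""
    if points_allowed >= 7:
        return 0
    return 6 - points_allowed

def record_bonus(wins, losses, bonus=0.5):
    """The 'very small adjustment' for a winning record; the 0.5
    value is invented for illustration."""
    return bonus if wins > losses else 0.0

print(defense_bonus(0))    # 6 (shutout)
print(defense_bonus(21))   # 0
print(record_bonus(3, 2))  # 0.5
```

Note the cliff at 7 points: one garbage-time touchdown against your third string wipes out the entire bonus.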
We come out in favor of polls using yardage, turnovers, and any other data available that they can make sense of. This is not that. This is a completely arbitrary bonus that you get for shutting out Northeast Kansas Tech or don't for letting your third-stringers give up a meaningless late touchdown. It's overvaluing an opponent because it has a 3-2 record instead of a 2-3 record.
Billingsley has set out to create a poll that LOOKS GOOD and in doing so he has put in all sorts of kludges. His poll is an attempt to massage computer data into mimicking human polls. It has nothing more rigorous than Billingsley's opinion behind it. It's a Rube Goldberg machine that's embarrassing, and it's part of the BCS.