Ed: I looked a long way back and am now convinced this topic
hasn’t been “diarized” recently. Many
apologies if it has; may there be some original ideas in my writing. This is all for fun and to keep my mind
occupied during the dark of the offseason.
Currently, the conventional wisdom among many college
football fans and pundits is that the SEC is the sport’s premier conference
with the Big 10, Pac 10, and Big 12 in some order behind the SEC and the ACC
and Big East rounding out the six BCS automatic qualifiers in terms of overall
strength. However, this was not always
the case. In the early portion of this
millennium, the Pac 10 was considered by many to be the weakest conference
while the Big 10 and SEC dueled at the top for perceived supremacy. Back when Woody and Bo stalked the sidelines,
many thought the Big 10 or the SWC was easily the best conference.
What I’d like to examine with this diary is which conference
was the best at various periods since the modern era of Michigan
football began, which I’ll define as Bo Schembechler’s arrival in 1969. If conference supremacy has changed, why has
this occurred? I’m not a subscriber to
the “fast athletes come from the south” school, and unless something IS found
in the water, I’ll likely never be convinced.
Unfortunately, neither of these questions is easy to answer.
I’m going to begin with the prevailing notion of conference
supremacy, and I plan to examine reasons things may have changed in a later diary. The best way to accomplish this analysis is
to create a statistical model capturing the strength of each team based on
their accomplishments, their opponents’ accomplishments, and their opponents’
opponents’ accomplishments, including margin of victory. My statistical prowess leads me to believe
that Jeff Sagarin needs to create this model for me. My repeated visits to his house have only
resulted in a restraining order, so until he creates or I can find a historical
statistical model, I have to use another means to ascertain dominance.
Sagarin’s approach (statistical modeling) is clearly an
excellent approach because it evaluates each team and provides some means to
objectively analyze each team’s performance.
Human polls are notoriously inaccurate because they are based on
preconceived notions and have obvious inertia: voters tend to keep highly
rated teams near the top even after losses, and are slow to move up teams
that start out rated lower. Furthermore, “name
brand” teams, such as Michigan and Ohio State, tend to start out rated more
highly, which biases the final polls. In other words, teams that start high tend to
stay high relative to teams with similar resumes that start lower. Unfortunately, I am not a mathematical genius,
so an in-depth statistical analysis is out.
Using team records for the analysis does not provide useful
data, for two reasons. First, because
every conference game must produce a win, a loss, or a tie for someone in the
conference, each conference will have an overall .500 record
in-conference. The Pac 10 plays a round
robin schedule, so its ten teams played nine games apiece this year: forty-five
games in total, yielding forty-five wins and forty-five losses. The
only win-percentage variation between conferences would come from out-of-conference games,
but without a way to objectively evaluate those opponents, I cannot use those
games to evaluate a conference’s strength. Second, records can be deceiving in
conference. The ACC has tended to have
many teams gravitate towards .500.
However, those .500 records could indicate many excellent teams knocking
each other off, or a protracted cripple fight.
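The round-robin arithmetic is worth a quick sanity check. Each game produces exactly one win and one loss, so league-wide wins always equal league-wide losses, which is why in-conference records are useless for comparing conferences. A minimal sketch:

```python
# Round-robin arithmetic: n teams each playing the other n-1 teams once
# produce n*(n-1)/2 games. Every game yields exactly one win and one
# loss, so a conference's combined in-conference record is always .500.

def round_robin_games(n_teams: int) -> int:
    """Total games in a single round robin among n_teams."""
    return n_teams * (n_teams - 1) // 2

# The ten-team Pac 10: nine games apiece, forty-five games in all,
# hence forty-five wins and forty-five losses league-wide.
games = round_robin_games(10)
print(games)          # 45
print(games * 2)      # 90 team-games (each game counted once per team)
```

Ten teams playing nine games each gives ninety team-games, but each game involves two teams, so only forty-five distinct games are played.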
Long story short: I understand the weaknesses of the analysis
to follow. Records without extensive statistical
analysis are not useful, so I’ve decided to use human polls, hoping that the
relatively large sample size (forty-plus years) and the use of end-of-season polls
will limit (although not eliminate) the human bias. The major weakness of this analysis is that a
Top 25 poll ignores mid- and lower-level teams.
Conferences are made of top-, mid-, and low-level teams, not just the top
teams present in annual rankings. I
played with using bowl results to include mid-level teams, but the bowl season
was too small until recently, and I believe bowl matchups are not inherently
even. A great example is 6-6 MSU facing
8-4 Texas Tech this year. Bowls are
based on conference contracts and grabs at TV ratings and ticket sales, not on
facilitating equal matchups between conferences.
My methodology follows:
1. I’m using the AP
Top 25 polls from 1969 through the final poll in 2009.
2. The #1 rated team
in the poll receives 1 point, #2 gets 2 points, and so on. Prior to 1989, the AP ranked 20 teams, and
has ranked 25 teams ever since. Because
conferences have unranked teams, they
must be counted in some way. I’ve
decided to count each unranked team as five points below the lowest ranked team
in the poll (25 points prior to 1989, 30 points thereafter). While this does inflate the rankings of
conferences with several truly crappy teams, I have no way to objectively
evaluate the quality of 1983 Northwestern versus 1983 Vanderbilt. I feel this method is relatively fair,
because each conference receives the same treatment.
I thought about rating 25% of unranked teams at 30, 50% at
50, and 25% at 75, but that seemed too complicated and error-prone.
3. I only used
current BCS conferences. It’s too
difficult to measure the smaller conferences because so few of their teams were
ranked over time. Each team counts
towards its current conference. The
SWC doesn’t exist today, so it is not the best conference anymore.
4. The conference
with the lowest average ranking wins that year.
The lowest average per decade wins the decade, and so on.
5. Thanks to http://homepages.cae.wisc.edu/~dwilson/rfsc/history/APpolls.txt
for the historical polls.
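The scoring scheme in the steps above can be sketched in a few lines. The poll snippet, team names, and conference roster below are hypothetical stand-ins, not the real 1969-2009 data:

```python
# Sketch of steps 2-4: each ranked team scores its poll position, each
# unranked team scores five points below the last ranked spot, and the
# conference with the lowest average wins the year. The poll and the
# roster here are hypothetical examples, not historical data.

POLL_DEPTH = 25            # the AP ranked 20 teams prior to 1989
UNRANKED = POLL_DEPTH + 5  # 30 points (25 for pre-1989 polls)

def conference_average(roster, poll):
    """Average points for one conference's teams in one final poll.

    roster: list of team names in the conference
    poll:   dict mapping ranked team -> poll position (1..POLL_DEPTH)
    """
    points = [poll.get(team, UNRANKED) for team in roster]
    return sum(points) / len(points)

# Hypothetical final poll: only two of this conference's four teams
# are ranked, so the other two each count for 30 points.
poll = {"Team A": 1, "Team B": 14}
roster = ["Team A", "Team B", "Team C", "Team D"]
print(conference_average(roster, poll))  # (1 + 14 + 30 + 30) / 4 = 18.75
```

Note how the unranked penalty dominates: a conference with one #1 team and three unranked teams averages worse than a conference with four teams ranked in the teens, which is the "several truly crappy teams" inflation mentioned in step 2.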
The decade by decade encapsulation follows:
Overall, it’s clear that human voters prefer the
accomplishments of the SEC to every other conference, particularly in the new millennium. The SEC’s 22.5 average ranking has been
equaled only seven times by every other conference COMBINED. The SEC has also had the lowest average in six
of the past nine years and in 15 years total, the most of any conference
(the Big 10 is second with 12).
I actually thought the Big 10 would come out stronger
earlier and weaker later on. However,
based on this analysis, the Big 10 is in a solid second place behind the SEC
today, and would be second overall if it weren’t for the teams that would
become the Big 12 having an excellent period from 1969-1978. The Big 10’s weakness was primarily due to
the bottom portion of the conference.
Indiana and Minnesota were only ranked twice, for example. The Big 10 rated highest from 1998-today five
times, second only to the SEC, but ranked behind both the SEC and Big 12 for
the first twenty years of my analysis.
The ACC is not a strong conference historically, but it
benefits the most from my methodology.
FSU and Miami (that Miami) were not part of the ACC for large portions
of their respective runs, but count towards the ACC’s point totals—which is why
the conference is in second place from 1989-1998.
Two very strong teams can move a conference’s ranking significantly.
The Big East is essentially a default. The conference didn’t even exist until the
1991 season, and two of its members, UConn and South Florida, didn’t play D-1A
until the 21st century. I
didn’t count them until they joined D-1A, for what it’s worth.
There is a basis for claims of West Coast bias, whether or not the
bias itself is valid. Despite USC’s strength over the
past decade, the Pac 10 only beat out the Big East in that span, and only beat
the Big East over the full period as well.
I spent way too much time on this project, and plan to
examine why there may have been any changes in a later diary. Please ask any questions in the comments and
I’ll try to see if the analysis has any answers.