An Analysis of Conference Strength Since 1969

Submitted by Zone Left on January 26th, 2010 at 2:57 PM

Ed: I looked a long way back and am now convinced this topic hasn’t been “diarized” recently.  Many apologies if it has; may there be some original ideas in my writing.  This is all for fun and to keep my mind occupied during the dark of the offseason.

Currently, the conventional wisdom among many college football fans and pundits is that the SEC is the sport’s premier conference, with the Big 10, Pac 10, and Big 12 behind it in some order, and the ACC and Big East rounding out the six BCS automatic qualifiers in terms of overall strength.  However, this was not always the case.  In the early portion of this millennium, the Pac 10 was considered by many to be the weakest conference, while the Big 10 and SEC dueled at the top for perceived supremacy.  Back when Woody and Bo stalked the sidelines, many thought the Big 10 or the SWC was easily the best conference.

What I’d like to examine with this diary is which conference was strongest at various periods since the modern era of Michigan football began, which I’ll date to Bo Schembechler’s arrival.  If conference supremacy has changed, why has this occurred?  I’m not a subscriber to the “fast athletes come from the south” school, and unless something IS found in the water, I’ll likely never be convinced.  Unfortunately, neither of these claims is easy to prove.

I’m going to begin with the prevailing notion of conference supremacy, and I plan to examine reasons things may have changed in a later diary.  The best way to accomplish this analysis would be a statistical model capturing the strength of each team based on its accomplishments, its opponents’ accomplishments, and its opponents’ opponents’ accomplishments, including margin of victory.  My statistical prowess leads me to believe that Jeff Sagarin needs to create this model for me.  My repeated visits to his house have only resulted in a restraining order, so until he creates one, or I can find a historical statistical model, I have to use another means to ascertain dominance.

Sagarin’s approach (statistical modeling) is clearly an excellent one because it evaluates each team and provides a means to objectively analyze each team’s performance.  Human polls are notoriously inaccurate because they are based on preconceived notions and have obvious inertia: voters tend to keep highly rated teams near the top after losses and are slow to move up teams that started out rated lower.  Furthermore, “name brand” teams, such as Michigan and Ohio State, tend to start out rated more highly, which biases the final polling.  In other words, teams that start high tend to stay high relative to teams with similar resumes that start lower.  Unfortunately, I am not a mathematical genius, so an in-depth statistical analysis is out.

Using team records for the analysis does not provide useful data for two reasons.  First, because every conference game must end in a win, a loss, or a tie between two conference members, each conference will have an overall .500 record in-conference.  The Pac 10 plays a round-robin schedule, so ten teams playing nine games each this year totaled forty-five games, with forty-five wins and forty-five losses.  The only win-percentage variation between conferences would come from out-of-conference games, but without a way to objectively evaluate those opponents, I cannot use those games to evaluate a conference’s strength.  Second, records can be deceiving in conference.  The ACC has tended to have many teams gravitate towards .500.  However, those .500 records could indicate many excellent teams knocking each other off, or a protracted cripple fight.
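The round-robin accounting above can be sketched in a few lines (a minimal illustration of the arithmetic, not tied to any real schedule):

```python
# Round-robin accounting: 10 teams each play 9 conference games.
teams = 10
games_per_team = teams - 1                  # 9 games in a full round robin

# Each game involves two teams, so the number of distinct games is
# 10 * 9 / 2 = 45, and every game produces exactly one win and one loss.
total_games = teams * games_per_team // 2
total_wins = total_games
total_losses = total_games

# The conference's overall in-conference record is therefore always .500.
win_pct = total_wins / (total_wins + total_losses)
print(total_games, win_pct)  # 45 0.5
```

This is why in-conference records alone can never separate one conference from another: the wins and losses cancel by construction.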

Long story, but I feel it necessary to point out that I understand the weaknesses of the analysis to follow.  Records without extensive statistical analysis are not useful, so I’ve decided to use human polls, hoping that the relatively large sample size (forty-plus years) and the use of end-of-season polls will limit (although not eliminate) the human bias.  The major weakness of this analysis is that a Top 25 poll ignores mid- and lower-level teams.  Conferences are made of top, mid, and low-level teams, not just the top teams present in annual rankings.  I played with using bowl results to include mid-level teams, but the bowl season was too small until recently, and I believe bowl matchups are not inherently even.  A great example is 6-6 MSU facing 8-4 Texas Tech this year.  Bowls are based on conference contracts and grabs at TV ratings and ticket sales, not on facilitating equal matchups between conferences.

My methodology follows: 

1.  I’m using the AP Top 25 polls from 1969 through the final poll in 2009. 

2.  The #1-rated team in the poll receives 1 point, #2 gets 2 points, and so on.  Prior to 1989, the AP ranked 20 teams; it has ranked 25 teams ever since.  Because conferences have unranked teams, those teams must be counted in some way.  I’ve decided to count each unranked team as five points below the lowest ranked spot in the poll (25 points prior to 1989, 30 points thereafter).  While this does inflate the rankings of conferences with several truly crappy teams, I have no way to objectively evaluate the quality of 1983 Northwestern versus 1983 Vanderbilt.  I feel this method is relatively fair, because each conference receives the same treatment.

I thought about rating 25% of unranked teams at 30, 50% at 50, and 25% at 75, but that seemed too complicated and error-prone.

3.  I only used current BCS conferences.  It’s too difficult to measure the smaller conferences because so few of their teams were ranked over time.  Each team counts toward its current conference.  The SWC doesn’t exist today, so it is not the best conference anymore.

4.  The conference with the lowest average ranking wins that year.  The lowest average over a decade wins the decade, and so on.

5.  Thanks to for the historical polls.
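The scoring steps above can be sketched in a few lines of Python.  The poll and conference roster below are hypothetical placeholders for illustration, not actual AP results:

```python
POLL_SIZE = 25                    # AP ranked 20 teams before 1989, 25 after
UNRANKED_SCORE = POLL_SIZE + 5    # unranked teams score 5 below the lowest rank

def conference_score(conference_teams, final_poll):
    """Average score for a conference's teams in one season's final poll.

    A ranked team scores its poll position (#1 = 1 point); an unranked
    team scores the fixed unranked penalty.  Lower averages are better.
    """
    rank = {team: position + 1 for position, team in enumerate(final_poll)}
    scores = [rank.get(team, UNRANKED_SCORE) for team in conference_teams]
    return sum(scores) / len(scores)

# Hypothetical two-team conference: one team ranked #1, one unranked.
poll = ["Tigers", "Wolves", "Bears"]                 # stand-in final poll, ranks 1-3
print(conference_score(["Tigers", "Eagles"], poll))  # (1 + 30) / 2 = 15.5
```

The conference with the lowest average in a given season wins that year; decade winners are just the lowest average across the decade’s seasons.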

The decade-by-decade encapsulation follows:

[Table not preserved in this copy: decade-by-decade average rankings for each conference, including the Big 10, Big 12, Big East, and Pac 10.]
Overall, it’s clear that human voters prefer the accomplishments of the SEC to those of every other conference, particularly in the new millennium.  Its 22.5 average ranking has been equaled only seven times by every other conference COMBINED.  The SEC has also had the lowest average in six of the past nine years and in 15 years total, the most of any conference (the Big 10 is second with 12).

I actually thought the Big 10 would come out stronger earlier and weaker later on.  However, based on this analysis, the Big 10 is in a solid second place behind the SEC today, and would be second overall if it weren’t for the teams that would become the Big 12 having an excellent run from 1969-1978.  The Big 10’s weakness was primarily due to the bottom portion of the conference; Indiana and Minnesota were only ranked twice, for example.  The Big 10 rated highest five times from 1998 to today, second only to the SEC, but ranked behind both the SEC and Big 12 for the first twenty years of my analysis.

The ACC is not a strong conference historically, but it benefits the most from my methodology.  FSU and Miami (that Miami) were not part of the ACC for large portions of their respective runs, but count towards the ACC’s point totals, which is why the ACC finished second from 1989-1998.  Two very strong teams move a conference’s ranking significantly.

The Big East is essentially there by default.  The conference didn’t even exist until the 1991 season, and two of its members, UConn and South Florida, didn’t play D-1A until the 21st century.  I didn’t count them until they joined D-1A, for what it’s worth.

West Coast bias has a basis, whether or not it is valid.  Despite USC’s strength over the past decade, the Pac 10 only finished ahead of the Big East, and that holds over the full period as well.

I spent way too much time on this project, and plan to examine why there may have been any changes in a later diary.  Please ask any questions in the comments and I’ll try to see if the analysis has any answers.



January 26th, 2010 at 3:52 PM ^

You say that Miami was not part of the ACC but counts towards the ACC's point totals. So are you counting Miami's time in the Big East as part of the ACC? Ditto with VaTech?

Are you factoring in that for a long time Arkansas was in the Southwest Conference (which is now mainly the Big 12) or are you counting all of Arkansas with the SEC?

I really don't see how this can be that accurate if the years that Miami and VaTech spent in the Big East are counted for the ACC, and Arkansas's time in the SWC is counted towards the SEC.

Also, are you factoring in Penn St for the Big Ten's totals all the way back to 1969? If so, the reality is that if you take Penn State's totals out of the Big Ten's for the years PSU was an independent, the Big Ten's totals will be even lower.

Zone Left

January 26th, 2010 at 4:18 PM ^

No, I'm counting teams only towards their current conference, which does hurt the Big East and helps the ACC. This took way too long anyways, and I felt like changing conferences would only induce more human (my) error.

Essentially, Miami is currently in the ACC, so all of their records, including independent and Big East time are counting towards the ACC.

I completely understand the issues with including teams like Miami in the ACC for every year and Penn State in the Big 10, but ultimately, this is just a fact-finding post. I want to postulate why the relative strength fluctuates based on population trends and other items.

Hope this helps.


January 26th, 2010 at 3:52 PM ^

All the major conferences seem to trend higher as time passes. A sign that the smaller conferences are starting to have more of an impact on the top 25.

Tha Stunna

January 26th, 2010 at 6:48 PM ^

I don't understand your methodology for the unranked teams. If they are all 30, why do you think unranked teams should be that good? They should be more like 60.

Zone Left

January 26th, 2010 at 7:21 PM ^

I get what you're saying. I thought about creating a spread: 25% of unranked teams at 30, 50% at 50, and 25% at 75, or something like that, but I didn't have anything outside of records to evaluate unranked teams, and I felt it would punish parity (a bunch of teams in the 20-40 range would be bad for rankings).

Again, I need some high quality stats to do a really good job.


January 27th, 2010 at 6:56 PM ^

Both Florida and Georgia LOST to Miami of Ohio in the Tangerine Bowl in the '70s. In fact, I think Ole Miss lost to them also.......and the other omission was that the SEC CHEATS THEIR BRAINS OUT...


January 30th, 2010 at 6:08 PM ^

Most conferences have become more competitive. I kind of miss the '70s, when there were quite a few Baby Seal U's in our own conference! We always used to get the 2 or 3 big blowout conference games against NW, IL, IN, WI, MN, and IA. Yawn!