
Advanced Statistical Rankings UK #66 & #70

Just throwing this out there. Most polls have UK ranked much higher, by at least 20 spots or so. These statistical analyses do not. Based on this information, UK is favored to win only 3 more games (EKU, Auburn, & Charlotte).
Upon further research, UK is ranked #70 in F/+. This ranking is another statistical model used to determine team strength. I will not try to explain the models; you can read about them yourself if you are interested. This is just something I wanted to talk about, to maybe see how accurate you feel these types of ranking systems are.

FYI, UK & remaining opponents (S&P+ / F/+):

UK: 66 / 70
EKU: N/A / N/A
Auburn: 58 / 43
Miss St: 37 / 25
Tenn: 23 / 24
UGA: 11 / 7
Vandy: 53 / 60
Char: 121 / 118
UofL: 28 / 27


http://www.footballstudyhall.com/pages/2015-kentucky-advanced-statistical-profile

http://www.footballoutsiders.com/stats/fplus2015
 
Really find it hard to believe that any statistical model has Vandy rated higher than UK. Loses some credibility imo. Same for UL - they are 1-3 and probably headed to 1-4 this weekend.
 
Really find it hard to believe that any statistical model has Vandy rated higher than UK.

Agree.
 
Well, that Football Study Hall one is terrible...won't even look at the other...

Louisville lost to #43 and #57, with their lone win over an N/A team...so let's rank them at #27.

Kentucky has wins over #106, #41 & #55 and a loss to #36...so #70 sounds about right for them...[laughing]
 
Below is the summary from the F/+ ranking web site.

Beginning with Football Outsiders Almanac 2009, Brian Fremeau and Bill Connelly, originators of Football Outsiders' two statistical approaches -- FEI and S&P+, respectively -- began to create a combined ranking that would serve as Football Outsiders' 'official' college football rankings.

The Fremeau Efficiency Index (FEI) considers each of the nearly 20,000 possessions every season in major college football. All drives are filtered to eliminate first-half clock-kills and end-of-game garbage drives and scores. A scoring rate analysis of the remaining possessions then determines the baseline possession efficiency expectations against which each team is measured. A team is rewarded for playing well against good teams, win or lose, and is punished more severely for playing poorly against bad teams than it is rewarded for playing well against bad teams.

The S&P+ Ratings are a college football ratings system derived from both play-by-play and drive data from all 800+ of a season's FBS college football games (and 140,000+ plays).

The components for S&P+ reflect the components of four of what Bill Connelly has deemed the Five Factors of college football: efficiency, explosiveness, field position, and finishing drives. (A fifth factor, turnovers, is informed marginally by sack rates, the only quality-based statistic that has a consistent relationship with turnover margins.)

NOTE: Because special teams ratings are integrated into Offensive and Defensive S&P+, and because Special Teams has its own ratings within FEI, creating Offensive, Defensive, and Special Teams F/+ ratings has become difficult since the most recent S&P+ redesign. For now, there will therefore be no unit ratings presented -- only overall F/+ ratings.


Therefore, just because team A beat team B while team B is ranked higher, that does not necessarily change the rankings. This must be noted. It is much more detailed than saying Vandy sucks because they lost to WKU while we beat teams with higher rankings. There are no human factors in these rankings, only numbers.
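
For anyone curious what "rewarded for playing well against good teams" looks like as math, here is a rough sketch of an opponent-adjusted efficiency number. The drive values are made up and this is not FEI's actual formula, just the general idea:

# Rough sketch of an opponent-adjusted efficiency rating (illustrative only;
# FEI's real weights and drive filters are not reproduced here).

def adjusted_efficiency(drives):
    """drives: list of (points_scored_on_drive, expected_points_vs_that_defense)."""
    total_actual = sum(actual for actual, expected in drives)
    total_expected = sum(expected for actual, expected in drives)
    # Positive means the offense did better than an average team would have
    # against the same defenses; negative means worse.
    return (total_actual - total_expected) / len(drives)

# Hypothetical example: the same raw output (3 points per drive) grades out
# very differently depending on the defenses it came against.
team_a = [(3, 2.5)] * 10   # 3 per drive vs. defenses that normally allow 2.5
team_b = [(3, 3.5)] * 10   # 3 per drive vs. defenses that normally allow 3.5
print(adjusted_efficiency(team_a))  #  0.5 per drive above expectation
print(adjusted_efficiency(team_b))  # -0.5 per drive below expectation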
 
Below is the summary from the F/+ ranking web site.

Beginning with Football Outsiders Almanac 2009, Brian Fremeau and Bill Connelly, originators of Football Outsiders' two statistical approaches -- FEI and S&P+, respectively -- began to create a combined ranking that would serve as Football Outsiders' 'official' college football rankings.

The Fremeau Efficiency Index (FEI) considers each of the nearly 20,000 possessions every season in major college football. All drives are filtered to eliminate first-half clock-kills and end-of-game garbage drives and scores. A scoring rate analysis of the remaining possessions then determines the baseline possession efficiency expectations against which each team is measured. A team is rewarded for playing well against good teams, win or lose, and is punished more severely for playing poorly against bad teams than it is rewarded for playing well against bad teams.

The S&P+ Ratings are a college football ratings system derived from both play-by-play and drive data from all 800+ of a season's FBS college football games (and 140,000+ plays).

The components for S&P+ reflect the components of four of what Bill Connelly has deemed the Five Factors of college football: efficiency, explosiveness, field position, and finishing drives. (A fifth factor, turnovers, is informed marginally by sack rates, the only quality-based statistic that has a consistent relationship with turnover margins.)

NOTE: Because special teams ratings are integrated into Offensive and Defensive S&P+, and because Special Teams has its own ratings within FEI, creating Offensive, Defensive, and Special Teams F/+ ratings has become difficult since the most recent S&P+ redesign. For now, there will therefore be no unit ratings presented -- only overall F/+ ratings.


Therefore, just because team A beat team B while team B is ranked higher, that does not necessarily change the rankings. This must be noted. It is much more detailed than saying Vandy sucks because they lost to WKU while we beat teams with higher rankings. There are no human factors in these rankings, only numbers.

Which basically means the rankings aren't an indicator of who is better at all...they may need to change their formula.
 
So now we have math geeks that rank football teams. I wonder how anyone ever figured out which team was better before all the analytics and other math equations got involved with sports. I give them credit, though; they have started a very lucrative industry and are capitalizing on it big time.
 
Tennessee's ranking has to be based on the first 3.5 quarters. They would be rated N/A for the last half of the 4th quarter.
 
If you actually read and understand the methodology, then you can understand why some teams are ranked higher than others even though you may think the lower-ranked team is the better team. As I said before, these rankings only use stats. There is no formula. Teams with better statistical offenses and defenses are ranked higher. It is that simple.
 
In fairness to the models, 30% of FEI's ratings incorporate preseason data still, and S&P+ still incorporates 50% as of last week. Obviously, UK has outperformed the preseason rankings so they'll have some lag in adjusting. All preseason data will be phased out by mid-October.

I can also explain Vandy's rating, I believe: They've played really well defensively against both highly-rated offenses UGA and Ole Miss. The models love both those teams, and because these are advanced metrics, Vandy gets a higher score due to playing well but losing against strong competition (versus playing poorly and winning against bad competition).
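
To picture the phase-out, here is a rough sketch with made-up numbers and a simple linear blend; the 30%/50% figures above are just the weights as of that week, and this is not the sites' actual weighting schedule:

# Illustrative blend of a preseason projection with observed play.
# The ratings below are invented; only the blending idea is the point.

def blended_rating(preseason, observed, preseason_weight):
    return preseason_weight * preseason + (1 - preseason_weight) * observed

# Hypothetical UK numbers: projected as a bottom-half team (-5.0) but playing
# like an average one (+2.0) so far.
print(blended_rating(-5.0, 2.0, 0.50))  # -1.5 while 50% preseason remains
print(blended_rating(-5.0, 2.0, 0.30))  # -0.1 at 30%
print(blended_rating(-5.0, 2.0, 0.00))  #  2.0 once preseason is fully phased out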
 
It's interesting how ESPN's advanced metric system, FPI, sees the remainder of UK's games compared to F/+. They have UK beating Vandy, Char, and UofL, but losing every other game with the exception of EKU. Auburn, though, is only a 53% favorite, so that's a close one.
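
For what it's worth, a 53% favorite usually just means the predicted margin is razor thin. FPI's actual math isn't in the post above, but here is a rough sketch of how a small margin maps to a win probability; the ~13-point standard deviation is a common rule of thumb, not ESPN's parameter:

from math import erf, sqrt

# Convert a predicted scoring margin into an approximate win probability
# using a normal approximation. Illustrative only; not ESPN's FPI formula.
def win_probability(predicted_margin, sd=13.0):
    return 0.5 * (1 + erf(predicted_margin / (sd * sqrt(2))))

# A ~1-point edge is roughly a 53% favorite; a 10-point edge is ~78%.
print(round(win_probability(1.0), 2))   # ~0.53
print(round(win_probability(10.0), 2))  # ~0.78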
 
In fairness to the models, 30% of FEI's ratings incorporate preseason data still, and S&P+ still incorporates 50% as of last week. Obviously, UK has outperformed the preseason rankings so they'll have some lag in adjusting. All preseason data will be phased out by mid-October.

I can also explain Vandy's rating, I believe: They've played really well defensively against both highly-rated offenses UGA and Ole Miss. The models love both those teams, and because these are advanced metrics, Vandy gets a higher score due to playing well but losing against strong competition (versus playing poorly and winning against bad competition).

If 30% is based on preseason data, then that would explain a lot...and really make things very inaccurate.
 
If you actually read and understand the methodology, then you can understand why some teams are ranked higher than others even though you may think the lower-ranked team is the better team. As I said before, these rankings only use stats. There is no formula. Teams with better statistical offenses and defenses are ranked higher. It is that simple.

Well when teams go head to head and the team ranked 20 and in some cases 40 spots lower wins, I don't think it is a very good reflection on your rankings...especially when after the game there isn't much change if any to those rankings between those teams.

Should they wait until about 8 weeks of games have been played to release these? If not, I don't see what good they bring this early...

UK is sandwiched between MTSU and GA Southern, while trailing Mizzou and SC by 20-30 spots...IU is 4-0 and sitting at 78 with a win over #37 WKU...
 
If you actually read and understand the methodology, then you can understand why some teams are ranked higher than others even though you may think the lower-ranked team is the better team. As I said before, these rankings only use stats. There is no formula. Teams with better statistical offenses and defenses are ranked higher. It is that simple.

Just another way to analyze past performance in units that don't necessarily coincide with the final score of the games. I would consider these types of data more in line with "power" ratings than team rankings. Power rating data can be useful in analyzing data and predicting trends, but there are usually reasons you win or lose frequently, and they're not always numerically measurable reasons.

Personally, I consider sites like this nothing more than window dressing. You can sometimes get some interesting data to look at, and they serve that purpose, I guess. But any computer ranking that isn't heavily weighted toward points scored and allowed, with a strength-of-opponent factor, is garbage to me.

At the end of the day, it really doesn't matter how you look, how big you are, how fast & well you do certain things, etc.; what matters is whether you scored more points than your opponent.
 
I prefer to look with my own two eyes to see how good a team is. You can use statistics to prove damn near anything, but it doesn't mean it's correct.

My eyes tell me that UK is one of the top 40 teams in the country. They also tell me that Vandy is not, and neither is Louisville.

Just because a Vandy or UL plays a much better team close, it does not in any way mean that they are close to those teams, and the fact that this model gives them bonus points for playing well against a good team is total garbage. How many good teams has UK been close to in the past, yet we were not a very good team? I'm sure we got dinged for not winning bigger against ULL, but we were up 33-10 at one point and never really in danger of losing. We proved we were A LOT better than ULL despite the score. Vandy is a lot worse than Ole Miss despite the score. UL is a lot worse than Clemson despite the score.

The fact that UL is #27 (1-3) and UK is #70 (3-1), with Vandy at #60 (1-3), tells me all I need to know about this garbage model.
 
I think I lost one of my comments, so let me repost.

I've followed the F+ guys for the last three seasons, and they tend to be pretty good predictors for how teams finish the season. A specific example, S&P+ (maybe FEI too) started ranking OSU as the best team in the country by either the first or second week in November, and started dropping Florida State.

There was a huge outcry because FSU fans are very vocal on SB Nation and Twitter (hence the #FSUTwitter). They repeatedly said, "This is BS. How can you rank OSU as the best team in the country when they haven't played anybody?"
 
I think I lost one of my comments, so let me repost.

I've followed the F+ guys for the last three seasons, and they tend to be pretty good predictors for how teams finish the season. A specific example, S&P+ (maybe FEI too) started ranking OSU as the best team in the country by either the first or second week in November, and started dropping Florida State.

There was a huge outcry because FSU fans are very vocal on SB Nation and Twitter (hence the #FSUTwitter). They repeatedly said, "This is BS. How can you rank OSU as the best team in the country when they haven't played anybody?"

Sounds like maybe we need to wait until around week 8-10 to really look at these...I love stats and don't doubt that these can really provide some insight...but so far, they just don't make sense or pass any eye or common-sense test regarding the rankings as they sit.
 
In remaining games, the more familiar (but methodologically totally different) Sagarin Rankings presently have UK ranked above only Charlotte and Vandy, even when the 2.65-point home advantage is factored in. However, the AU and UofL games are figured to be close (< 3 points).
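
For anyone unfamiliar with how that works, a Sagarin-style predicted margin is just the ratings difference plus the home bump. A quick sketch with made-up ratings; only the 2.65 home advantage comes from the ratings themselves:

# Predicted margin from power ratings, Sagarin-style. The team ratings are
# invented for illustration; 2.65 is the home-field advantage cited above.
HOME_ADVANTAGE = 2.65

def predicted_margin(home_rating, away_rating):
    return home_rating - away_rating + HOME_ADVANTAGE

# Hypothetical example: a home team rated 70.0 hosting a team rated 72.0
# is still a slight favorite (+0.65), i.e. one of those "< 3 points" games.
print(predicted_margin(70.0, 72.0))  # 0.65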

All of the math-based models will probably take another week or two (i.e., deeper into conference play) to gain more accuracy and credibility. There are too many easy early schedules. At this time, 14 of Sagarin's Top 25 have played a schedule ranked 64th or worse.

Peace
 
If you actually read and understand the methodology, then you can understand why some teams are ranked higher than others even though you may think the lower-ranked team is the better team. As I said before, these rankings only use stats. There is no formula. Teams with better statistical offenses and defenses are ranked higher. It is that simple.

But it must not account for the strength of a team's opponents and the ability to have a statistical field day against much weaker opponents - and most teams play very weak opponents in weeks 1-4.

UK hasn't really played any cupcakes yet (ULL is not a cupcake imo). Starts this weekend though.
 
Just another way to analyze past performance in units that don't necessarily coincide with the final score of the games. I would consider these types of data more in line with "power" ratings than team rankings. Power rating data can be useful in analyzing data and predicting trends, but there are usually reasons you win or lose frequently, and they're not always numerically measurable reasons.

Personally, I consider sites like this nothing more than window dressing. You can sometimes get some interesting data to look at, and they serve that purpose, I guess. But any computer ranking that isn't heavily weighted toward points scored and allowed, with a strength-of-opponent factor, is garbage to me.

At the end of the day, it really doesn't matter how you look, how big you are, how fast & well you do certain things, etc.; what matters is whether you scored more points than your opponent.
Good post, AJG. Fun to look at, but be cautious about it. It's tough to find evidence of statistically significant variables in something like a football game at the individual level of attention it needs, which these sites do not do, and for good reason. More games in the equation help, though.
 
Looking at specific statistics is always a little misleading. You can make a bad team look good and a good team look bad. For example, a team that wins a lot and plays with the lead will, most likely, run the ball more often and have lower pass yards. So they will be lower in passing statistics than the bad teams playing from behind, passing the ball trying to catch up.
 
Well when teams go head to head and the team ranked 20 and in some cases 40 spots lower wins, I don't think it is a very good reflection on your rankings...especially when after the game there isn't much change if any to those rankings between those teams.

Should they wait until about 8 weeks of games have been played to release these? If not, I don't see what good they bring this early...

UK is sandwiched between MTSU and GA Southern, while trailing Mizzou and SC by 20-30 spots...IU is 4-0 and sitting at 78 with a win over #37 WKU...

Computer rankings are almost entirely useless this early in a season. I guess they feel like they "should" put out rankings all season, but they're not particularly useful and they know it.
 
50% against the spread means nothing - only that it's no WORSE than random chance. And there are enough games early in the season with obviously unbalanced matchups that straight up picks are often not that hard.

Again, the models are still using preseason data to varying extents. That's a pretty good percentage considering it's ATS, and it's even more impressive that they can pick the winner heads up.

Which is really the point. In head-to-head match-ups, it's a reliable model with a strong track record. ATS is only good if you want to use the models for gambling purposes but there are far better measures out there.

I suggest reading some of the work. You might come away with a more nuanced view.
 
Worst system ever

 
If you actually read and understand the methodology then you can understand why teams are ranked higher than some even though you may think the lower ranked team is the better team. As I said before these rankings are only using stats. There is no formula. Teams with better statistical offenses and defenses are ranked higher. It is that simple.
Do they consider context? I.e., does it matter that UT gave up points at brutally crucial times? Or that UK kept teams out of the end zone with the game on the line? Or are all possessions evaluated with equal weight? Because that would be an error, IMO.
 
Does the fact that 3 starters were out for the ULL game and we played a lot of freshmen to get them some experience figure into the equation? I get that statistically it may be accurate, but like anything there will be outliers.

Got to think we are hurt by that score and Vandy is helped by playing a hungover Ole Miss team close. I saw one poll that had OSU ranked 7th... seriously? Too early for most polls to mean much. The eye test tells me we are probably around the 30th-best team.
 
In fairness to the models, 30% of FEI's ratings incorporate preseason data still, and S&P+ still incorporates 50% as of last week. Obviously, UK has outperformed the preseason rankings so they'll have some lag in adjusting. All preseason data will be phased out by mid-October.

I can also explain Vandy's rating, I believe: They've played really well defensively against both highly-rated offenses UGA and Ole Miss. The models love both those teams, and because these are advanced metrics, Vandy gets a higher score due to playing well but losing against strong competition (versus playing poorly and winning against bad competition).
which means absolutely nothing if you end up 2-10.
 
What does beating the spread have to do with how good a football team is? If you think this analysis is 'right on,' as you seem to, that means you believe that UK is the 70th-best team in the country, right? And you also believe that UL is the 27th-best team in the country.
 
Sorry for not following, but does that mean they get within 5-10 pts of the spread 58-64% of the time? I suspect that's not what it means because that's not very impressive.
 
Sorry for not following, but does that mean they get within 5-10 pts of the spread 58-64% of the time? I suspect that's not what it means because that's not very impressive.
Actually, I just looked closer, and I think it means that they do better against the spread the bigger the spread gets. Which would be useful for betting purposes but not much else. At least that's what I got from reading it.
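
If anyone wants to check that themselves, the usual way is to bucket games by spread size and compute the hit rate in each bucket. Here is a rough sketch; the game data is invented, not the sites' actual results:

# Sketch of measuring against-the-spread (ATS) accuracy by spread size.
# Spreads and margins below are made up and expressed as the home team's
# predicted/actual margin (positive = home team favored or won by that much).
games = [
    # (vegas_spread, model_margin, actual_margin)
    (-3.0, -6.0, -10.0),
    (7.0, 3.0, 10.0),
    (14.0, 20.0, 17.0),
    (21.0, 16.0, 13.0),
]

def ats_accuracy(games, min_spread, max_spread):
    bucket = [g for g in games if min_spread <= abs(g[0]) < max_spread]
    if not bucket:
        return None
    # The model's pick "covers" when the actual margin lands on the same
    # side of the Vegas spread as the model's predicted margin.
    hits = sum(1 for spread, model, actual in bucket
               if (model - spread) * (actual - spread) > 0)
    return hits / len(bucket)

print(ats_accuracy(games, 0, 10))    # single-digit spreads
print(ats_accuracy(games, 10, 100))  # double-digit spreads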
 