
Looking at KenPom, the Big10

Probably not. I’d say Zags -4
Hmm, you really think they're better than us on a neutral court? I don't. They've beaten some big names this year, but all of them are having suspect seasons. I hope we get them in the tournament.
 

I mean, they're one of the best three-point shooting teams in the country, and they have a front court of Petrusev and Tillie that can match up with any front court in the country.
 
Here is the problem with KenPom: your efficiencies are adjusted for your strength of schedule, as they should be, but where does that strength of schedule come from? So even though he says the preseason rankings go away, they are still there, hidden deep in the calculations. In other words, for strength of schedule to factor in, there has to be a starting point, a point that, although it diminishes as the season goes along, never truly disappears, because it was the starting point.

The very information KenPom provides basically tells you that. The preseason rankings factor themselves out around January, which happens to be the time that all the non-conference games are over, and as stated before, the conference schedule is a net zero. Even weighting for recency is a fallacy, as there is still an SOS factor in there that basically comes from early-season results and preseason rankings. It would be impossible to completely remove those items and reset, as again, you have to have a starting point.

It comes back to what I've always said about preseason rankings: I want to be number 1, because it is much harder to fall out of the top 25 than it is to climb into it.
 

Yeah, this isn't true. The preseason rankings aren't buried deep in there... they don't affect the current rankings at all.

The preseason rankings are just there so he can have some actual rankings that are in the general ballpark to start the season. As more data is accumulated, those preseason rankings are weighted less and less until they're removed.

KenPom's efficiency model is just a mathematical algorithm. Just imagine he didn't release his rankings until Feb 1st: he collected all the game data up to that point in the season, dumped it into his model, and released his rankings. That's what you're getting once the preseason rankings are removed.
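To illustrate the idea of a preseason prior that fades out as games accumulate, here is a toy sketch. KenPom's actual weighting scheme is not public in this detail; the linear fade and the `fade_games` cutoff are invented stand-ins for "around February 1st":

```python
def blended_rating(preseason, observed, games_played, fade_games=18):
    """Toy model: the preseason prior's weight fades linearly to zero.

    Once games_played reaches fade_games (an invented cutoff), the prior
    contributes nothing and the rating is purely this season's data.
    """
    weight = max(0.0, 1.0 - games_played / fade_games)
    return weight * preseason + (1.0 - weight) * observed

print(blended_rating(25.0, 5.0, 0))   # 25.0 -- all prior
print(blended_rating(25.0, 5.0, 9))   # 15.0 -- half and half
print(blended_rating(25.0, 5.0, 18))  # 5.0  -- prior fully removed
```

Any game count at or past the cutoff returns exactly the observed number, which is the claim being made: by that point the preseason value is not "hidden" anywhere, it has zero weight.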
 

I don't believe this is accurate.

While there's a starting point and that starting point may or may not be accurate, eventually the system corrects itself.

UK started the season ranked 2nd, but they would still be ranked 26th at this point whether they had started 2nd or 200th.
 
Looking at Purdue's record, I think I see the flaw in KenPom. Purdue is rated so highly based on winning several of their games by large margins, and KenPom loves those large margins. Kentucky hasn't won games by large margins, which is one reason the Cats are rated so poorly considering their record. So Purdue "must be better based on the numbers." Apparently, blowout losses don't hurt you in KenPom, though, because Purdue has multiple games they've lost by 10+ points (seven such losses). How many 10+ point losses does UK have? ZERO.

KenPom places great value on these large margins but doesn't place as much value on the most important number of all, which is a win of any margin. That's what this Kentucky team has done: they just win. They don't care if it's by 1 point or 10. That's something Purdue hasn't done so well... FOURTEEN TIMES.

Trust me on this: Purdue isn't a better team than UK, and it's not close. They are a case study in inconsistency. Isn't consistency what matters? What are the chances UK could win 6 games in a row by 2-10 points? High. What are the chances Purdue could? Zero. They haven't done it all season; they haven't won more than 3 games in a row all season. Look at how Purdue and UK are rated by Sagarin, a system that values winning more than margin: UK #6 and Purdue #32. That's about right.
Probably semantics, but KenPom doesn’t “place great value” on winning by a large margin. It just doesn’t cap margin of victory in either direction, and measures efficiency on a possession basis.

All KenPom is saying is: You’ve had the ball 2000 times this season - how many points did you score in 2000 possessions? That’s your offensive efficiency. Your opponents have possessed the ball 2000 times - how many points did they score? That’s your defensive efficiency.

It's more complex than that because there are strength-of-schedule adjustments to the efficiency numbers, but that's essentially it. Over hundreds and hundreds of possessions, Purdue and Kentucky have a similar adjusted scoring margin compared to their opponents. Purdue just happened to lose a bunch of close games that Kentucky also easily could have lost.
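The "same margin, different records" point can be made concrete with a toy example. The per-game margins below are invented, not the actual Purdue or Kentucky numbers; they just show how a margin-based rating sees two teams as equals while their win-loss records diverge:

```python
# Invented scoring margins for eight games each; both average +2.5.
kentucky_like = [2, 3, 1, 4, 2, 3, 1, 4]            # wins every close game
purdue_like = [15, -10, 15, -10, 15, -10, 15, -10]  # blowouts and losses

def record(margins):
    """Win-loss record from a list of scoring margins."""
    wins = sum(1 for m in margins if m > 0)
    return wins, len(margins) - wins

for name, margins in [("A", kentucky_like), ("B", purdue_like)]:
    print(name, record(margins), sum(margins) / len(margins))
# A (8, 0) 2.5  -- perfect record
# B (4, 4) 2.5  -- same average margin, four losses
```

A system ranking on average margin rates these two identically; a system ranking on wins does not, which is the whole disagreement in this thread.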
 
In the end what it's trying to do is put people on a level playing field.

How many losses would UK have if it were in the Big Ten? Maybe not 14 like Purdue, but we'd definitely have many more than the 5 we have.

The reason I am a big fan of Kenpom is because the system makes sense logically to me.

I mean, in the past we've always said, oh look, Team A scores 90 points a game, they must have a good offense, or Team B only allows 50 points a game, they must have a good defense. All KenPom is doing is introducing pace into the picture: realizing that 80 points scored in 80 possessions is very different from 80 points scored in 60 possessions. It's points per possession. That's not some made-up formula out of left field; it's actual results. If a team scores 60 points, they scored 60 points. If a team scores 1.25 points per possession, they scored 1.25 points per possession. It's clear cut.
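The pace adjustment described above can be shown with invented numbers: a team that scores more points per game can still be the less efficient offense once you divide by possessions.

```python
# Team X looks like the better offense on points per game alone...
x_ppg, x_pace = 90, 78   # 90 points per game over 78 possessions
y_ppg, y_pace = 70, 58   # 70 points per game over 58 possessions

x_ppp = x_ppg / x_pace   # ~1.154 points per possession
y_ppp = y_ppg / y_pace   # ~1.207 points per possession

# ...but per possession, the slower Team Y is actually more efficient.
print(round(x_ppp, 3), round(y_ppp, 3))
```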

IMO the only thing that can be argued is the SOS adjustment, but I'm unsure exactly what that argument could even be. Maybe the Big Ten got worse over the course of the year and the SEC got better; that's what you hear, but what evidence is there that this is occurring?

I have no idea about NET, but the methodology of KenPom is sound to me.
 

If y'all don't believe this is accurate, then you don't understand how the math works. I'm sure he is saying that they are essentially removed from the equation, but the problem is, they are still the starting point. In other words, you can't really know a team's strength of schedule without the assumptions of the preseason, as even 15 games in, each team has played only a small fraction of the Division 1 field. The only way to remove the preseason assumptions entirely would be for teams to play over 35-40% of the teams in Division 1, and even then the games would have to be spread out enough that there were enough data points between teams to accurately rank each team based solely on performance to that date. The issue is, no team does that, even in an entire season, as that would be close to 100 games. And not only that, the equation would have to be totally changed at that point, and if that were the case, you would see a lot of teams move significantly all at once.

Now, if we aren't going to use SOS at all, and aren't going to calculate an effective efficiency, then this could totally be done, and without any assumptions from the very first game. But that would also mean playing 10 terrible teams would likely launch you to the top, because your PPP on offense and defense would look very good, but it would be against terrible teams.
 

So basically what you are saying is teams have not played enough games in a given season to accurately represent SOS?

I guess I'm trying to figure out where exactly you are getting the 35-40% of all D1 teams needed to remove the preseason entirely. That doesn't jibe with Ken himself saying all preseason weight is removed around the beginning of February.

Obviously the more games played, the more information we have on the true talent level of teams and thus the more accurate an SOS rating and overall rating would be. Also obvious is that teams play around 30-35 games a season.

I find it highly unlikely a team would have to play 35-40% of D1 schools to get an accurate representation of SOS.

And I hate to keep harping on it, but the ratings take SOS into account and the predictions nearly mirror Vegas lines; I think that shows the adjustment is accurate.

All computer systems account for SOS in some way, so you're essentially arguing they are all off. Yet these systems come very close to predicting what Vegas predicts.

Also, the starting point has no bearing whatsoever: as the season goes on and the teams' ratings change, the SOS also changes.
 

Without getting into the actual statistics and giving a true number, I simply pulled 35-40% off the top of my head, so don't focus on that. That number may be 50%, or maybe even slightly below 30%, because, again, I didn't take the time to truly analyze what the percentage would need to be to put it in the 90th percentile, which is where you should be statistically.

Now to the other issue: if you remove the starting point, then you create a circular reference, which leads to an unstable condition. In other words, a team's efficiency would continually change, even without games being played. I'm sure he is using a recency factor that diminishes the effect of earlier games, and thus of the preseason rankings. And that equation may eventually, and probably does, completely remove the preseason assumptions on the surface (directly in the equation). The problem is, the efficiencies of the teams that you are basing the gains or losses on are still a product of the preseason assumptions, as those efficiencies are adjusted for SOS. So even though the equation itself may diminish the preseason assumptions, the SOS factor can't, or 15 games into the season there would be a complete reset, and the generated numbers would continually change and never be stable.

If the preseason assumptions were in fact correct, then there would be no movement in the rankings all year. The fact is, there is error in the preseason assumptions, and thus the rankings move, a lot, especially over the first few games. As the season goes on, there is less movement in the rankings, which in theory is the equation correcting the issues with the initial assumptions. The issue is that once the error is in there, you can diminish the effect of said error, but you can never totally remove it.

As for the betting lines and Vegas, the initial line is generally pretty close to what KenPom predicts, but the movement of the line is based on people making bets, some educated, some not. Vegas in an ideal world wants the same money on both sides of the line, because in that scenario, they can't lose any money, and will take the percentage on the winning side. So it makes sense that Vegas would set the line very close to what KenPom predicts, as that is one of the major tools that educated bettors will use.

The best example of the error of the computer rankings is Duke in 2015. Even though they were one of the top 3 teams in the country, and everybody knew it, their efficiency ratings showed that they were not a candidate for winning the title. However, after playing 6 games in the tournament, they suddenly had an increase in those ratings to a level that made them a candidate. Why is that? It is because they played 6 high-level games and had to be efficient in them in order to win, which changed their ratings. That is my point about the number of games played: by the time there is actually enough data to accurately measure the top teams, there is no use for the data, as the tournament is over and nobody cares.
 

As I said in the previous post: just imagine he didn't release his rankings until Feb 1st, and on Feb 1st all he did was use game data from the 20 or so games teams had already played. That's what his rankings are now. They're rankings purely generated by the data from games played this season.

SOS is determined by the quality of teams you’ve played based on their performance this season.
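One way to see why a starting point can wash out, at least when the schedule graph is connected, is a toy opponent-adjustment loop. This is a generic iterative-rating sketch, not KenPom's actual algorithm: each team's rating is repeatedly pulled toward the average of (opponent rating + scoring margin) over its games, and we run it from two very different initial guesses.

```python
def adjust(ratings, games, sweeps=500, damp=0.5):
    """Toy opponent adjustment: each team's rating moves toward the
    average of (opponent rating + scoring margin) over its games."""
    n = len(ratings)
    for _ in range(sweeps):
        total = [0.0] * n
        count = [0] * n
        for i, j, margin in games:   # margin = team i's points minus team j's
            total[i] += ratings[j] + margin
            count[i] += 1
            total[j] += ratings[i] - margin
            count[j] += 1
        new = [damp * ratings[k] + (1 - damp) * total[k] / count[k]
               for k in range(n)]
        mean = sum(new) / n
        ratings = [r - mean for r in new]  # center to pin down the scale
    return ratings

# Invented results for four teams that form a connected schedule.
games = [(0, 1, 10), (1, 2, 5), (2, 3, 8), (3, 0, -20), (0, 2, 12)]
from_zeros = adjust([0.0, 0.0, 0.0, 0.0], games)
from_weird = adjust([50.0, -30.0, 10.0, 200.0], games)
# Both starting points converge to the same ratings.
print(max(abs(a - b) for a, b in zip(from_zeros, from_weird)))
```

The caveat, and the legitimate kernel of the objection above, is connectivity: if the schedule graph were split into two groups that never play each other, the relative level of the two groups would be undetermined by game data alone, and only a prior could set it.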
 
Let me also hit this descriptively instead of mathematically. What you're saying doesn't make sense, because in a closed conference measurement, everything is zero-sum. Any time one team scores a lot of points, another team pays a price in its defensive efficiency. Any time one team prevents another team from scoring, it hurts the other team's offensive efficiency. Net efficiency, which determines KenPom ranking, is just the difference between the two.

In other words, a snake that eats its tail doesn’t get bigger.

The Worm Ouroboros
 
You’re gonna have to explain that. If the conference is “just a bunch of mediocre teams beating up on each other,” then the end result should be bad overall efficiency scores, not good ones.

In a universe with only two teams and no other games played, if Team A beats Team B 80-40 in a 60-possession game, Team A has excellent offensive and defensive efficiency (1.33 points scored per possession, 0.667 points per possession allowed). Team A would be far and away the highest-efficiency team on KenPom with those scores.

If the tables turn in the next game, and Team B wins the same game by the same margin, then the scores for both teams average out to 1 point per possession scored, 1 point per possession allowed, which would be very poor efficiency numbers (dropping both teams to about 170 overall on KenPom).

If Team B instead wins the second game 60-58, then the resulting efficiencies are:

Team A: 1.15 points scored per possession, .833 points allowed per possession (would still rank first overall on KenPom this year)

Team B: .833 points scored per possession, 1.15 points allowed per possession (would still rank almost dead last on KenPom, despite having just defeated an incredibly powerful team).

A conference beating up on itself does not lead to excellent efficiency scores for all of its teams, unless the teams in that conference torched all of their nonconference opponents.

So what is your exact theory, mathematically, about what is happening here?
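The arithmetic in the two-team example above checks out directly, assuming the second game is also 60 possessions per side (which is what the quoted 1.15 and .833 figures imply):

```python
# Game 1: Team A beats Team B 80-40; Game 2: Team B wins 60-58.
a_points, a_allowed = 80 + 58, 40 + 60
possessions = 60 + 60  # 60 possessions per game for each side

off_eff = a_points / possessions    # 1.15 points scored per possession
def_eff = a_allowed / possessions   # ~0.833 points allowed per possession
print(off_eff, round(def_eff, 3))
```

Team B's numbers are simply the mirror image (.833 scored, 1.15 allowed), which is the zero-sum point: every point one team scores appears in the other team's defensive column.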

Say What? This is so confusing. My brain is stuck on early morning status.
 
Well, the Big 10 did earn it by flat out dominating the non-conference season. The SEC needs to start taking November and December more seriously.
 

While I agree with you that the more games the better, and that you never get to 100% because you simply don't play that many games in a season, I don't really agree with this point.

Duke finished 3rd overall after all of the games. Pre-tournament, though, they were 6th. In a tournament where it's one-and-done, I think they were definitely a candidate for winning the title. Duke started the tournament with an efficiency margin of 29.30 and ended with 32.48.

I don't believe anyone is claiming that the systems get 100% of the way there, though. Even if one were able to do that, you'd still have luck to deal with.

But it's certainly close to accurate.
 

The main takeaway is that conference strength is basically determined in the non-conference schedule.

Teams playing one another in conference amount to a net zero. If one team scores 90, the other has given up 90. If one team scores 2 points per possession, the other has given up 2 points per possession.

So it's hard to assess whether a conference as a whole has gotten better or worse.
 

So with your idea, how would you determine the strength of schedule? You are saying it's based on teams' performance, but what is the reference? That is what I am saying: there aren't enough games played during the season to accurately define how good a team is without putting assumptions in. You would have to be able to link each team to every other team multiple times, with limited degrees of separation, for that to work. Because as you know, if Team A beats Team B by 15 points, and then Team B beats Team C by 10 points, that doesn't mean Team A is 25 points better than Team C. And that is a single degree of separation. You need a couple of other links with the same limited degree of separation to accurately define how good each team is. And that is just trying to accurately rank 3 teams; now imagine trying that for over 300 teams. There simply aren't enough games to do that. So how does he compensate? By taking in assumptions at the beginning: he essentially pre-ranks Teams A through C in this example. Now, to make it all converge, you add the SOS for each game, based on the initial rankings. As teams underperform or overperform, their efficiency adjusts based on the level of competition. Therefore, the initial rankings always play a part in the SOS factor, and the error of those assumptions never truly goes away. Now, if every team played enough games, you could link each team to every other team with limited degrees of separation, and the initial ranking could be completely removed, because there would be enough references between teams for that to be the case.

To put it another way: over the course of the conference schedule, there is enough evidence to rank the teams within a conference fairly accurately, though in a case like Purdue it would have issues, because Purdue's performances have been anything but consistent. Back to the topic: most if not all conference schedules have each team play a game directly against every other team. That is 0 degrees of separation. In addition, each team can also be linked to every other team through multiple single-degree-of-separation games. At that point, the data likely becomes pretty accurate (except in cases like Purdue) with respect to other teams within the conference. Now comes the issue: there aren't enough out-of-conference games to accurately define the ranking of college basketball as a whole. That is why people say you can't put a ton of weight on the challenges (ACC-Big Ten, SEC-Big 12), and those link the conferences through 10+ games. It would take a bare minimum of 2 games for each team against the other conference to even start to converge, unless of course you already had an initial ranking to go on; then you would use that, but you would also be stuck with its error until enough links were created to remove it. So, for this to happen, Kentucky would have to play multiple games against each conference. Just using the Big Ten, Pac-12, Big 12, ACC, and Big East, that would be 10 of Kentucky's 13 non-conference games. And the same would be true for every team in each conference. Now, how many teams do you think play 10 of their 13 or so non-conference games against teams from one of those conferences?

The initial point was that the initial assumptions never truly go away, and they can't as long as you are using an SOS factor, which is the only way to be very accurate at all. It generally works itself out, as teams play more and adjustments are made, but that is largely because the initial assumptions are pretty good. They can never be completely removed, though, due to the lack of links between all teams.
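The "degrees of separation" idea above is just path length in the graph of who has played whom, which is easy to sketch. The team names and schedule below are invented for illustration:

```python
from collections import deque

def separation(schedule, a, b):
    """Degrees of separation between two teams: 0 means they played
    head-to-head, 1 means they are linked through one intermediate team,
    and so on. Returns None if no chain of games connects them."""
    graph = {}
    for x, y in schedule:
        graph.setdefault(x, set()).add(y)
        graph.setdefault(y, set()).add(x)
    if a == b:
        return 0
    seen, queue = {a}, deque([(a, 0)])
    while queue:                       # breadth-first search by layers
        team, hops = queue.popleft()
        for nxt in graph.get(team, ()):
            if nxt == b:
                return hops
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return None

schedule = [("UK", "Louisville"), ("Louisville", "Duke"), ("Duke", "Kansas")]
print(separation(schedule, "UK", "Louisville"))  # 0 -- head-to-head
print(separation(schedule, "UK", "Kansas"))      # 2 -- two links removed
```

The `None` case is the crux of the argument: when two groups of teams share no chain of games at all, no amount of iteration can compare them from results alone, and the more links (and the shorter the chains), the more the comparison rests on data instead of assumptions.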
 
UK has a much better shot at winning 6 in a row than Purdue. Kentucky has won 6 in a row twice, including their current streak which includes 3 difficult true road wins.

I still say the Big Ten is an overrated conference. Now, if you're talking about raw number of good teams, they're the best. But if you're talking about teams that could make a Final Four or have a shot to win it all? I don't see even one.
I don't agree with this at all. Are there any great teams in the Big 10? No, but take a look around college basketball. The fact is that, 1 through 12, the conference is tough. Look at Ohio State and Michigan prior to conference play; they were considered 2 of the best teams in college basketball. What happened? They entered play in a conference that has 12 teams rated in the top 40 at KenPom. That is brutal. I love the Cats, but you are kidding yourself if you think we would be sitting at 2 conference losses, or anything even close, if we played in the Big 10 this year. As far as Big Ten teams good enough to win it all? I'm not saying they are favorites, but Maryland, Michigan, Michigan State, and Ohio State all could win 6 games in a row against good competition.
 

The reference to Duke was because they were outside the top 15 or 20 (whatever the cutoff was), likely in defensive efficiency. But after the tournament was over, they were inside the top 15 in both. The reason is that when you play 6 straight games against top competition (high SOS) and win those games (by being more efficient), your efficiency climbs, especially relative to the other top teams in the tournament, who are losing (less efficient). It has been discussed several times since then, and is why, even though the point is brought up, most know it is not correct with respect to the pre-tournament ratings. The idea, according to the "experts," was that something like 10 of the last 11 champions were top 15 in both offensive and defensive efficiency. And while that was statistically true, those were post-tournament numbers, so applying that logic to pre-tournament ratings doesn't work out. But it does generally work out post-tournament, for the reason described above.

The reason for the example was to point out that Duke wasn't likely that much better during the tournament than they were all year, but playing 6 straight games against high-level competition and winning will certainly increase those ratings. Considering that most teams play fewer than 6 high-level games pre-conference, the ratings can be somewhat skewed going into conference play. Duke's efficiency going into the tournament was likely driven more by the perceived level of their competition.
 

I'd add that Iowa, Penn State, Purdue, Illinois, and Indiana could make some noise in the dance too. The BIG proved their strength in the OOC when they beat up on the top teams in the other conferences. Any one of the top 8 teams in the BIG would be contending for the SEC title.
 
The reference to Duke was because they were outside the top 15 or 20 (whatever the cutoff was), likely in defensive efficiency. But after the tournament was over, they were inside the top 15 in both. The reason is that you play 6 straight games against top competition (high SOS), and by virtue of winning those games (being more efficient), your efficiency climbs, especially with respect to the other top teams in the tournament who are losing (less efficient). It has been discussed several times since then, and is why, even though the point is brought up, most know it is not correct with respect to the pre-tournament ratings. The idea, according to the "experts," was that something like 10 of the last 11 champions were top 15 in both offensive and defensive efficiency. And while that was statistically true, those were post-tournament numbers. So applying that logic to pre-tournament ratings doesn't work out. But it does generally work out post-tournament, for the reason described above.

The reason for the example was to point out that Duke wasn't likely that much better during the tournament than they were all year, but playing 6 straight games against high-level competition and winning will certainly increase those ratings. Considering that most teams play fewer than 6 high-level games pre-conference, it can make the ratings somewhat skewed going into conference play. Duke's efficiency going into the tournament was likely inflated more by the perceived level of their competition.

Well then those people were wrong for looking at it that way.
I mean, sure, it's good to be great in both offense and defense, but the most important number is the efficiency margin (the difference between the two numbers), which is what the system actually ranks by. You take it in combination, and overall they were 6th, which IMO definitely fits into the category of title contender.
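The efficiency-margin point can be shown with a quick sketch. All the team numbers below are invented for illustration; none come from actual KenPom data:

```python
# Invented adjusted-efficiency numbers, for illustration only.
teams = {
    "Gonzaga":  {"adjO": 121.3, "adjD": 94.5},
    "Kansas":   {"adjO": 115.8, "adjD": 90.1},
    "Kentucky": {"adjO": 110.2, "adjD": 92.4},
}

# Efficiency margin: points per 100 possessions scored minus allowed.
for t in teams.values():
    t["adjEM"] = t["adjO"] - t["adjD"]

# Rank by the margin, not by offense or defense alone: a team can trail
# in one category and still rank higher overall.
ranked = sorted(teams, key=lambda name: teams[name]["adjEM"], reverse=True)
print(ranked)  # ['Gonzaga', 'Kansas', 'Kentucky']
```

The idea is that a single combined number avoids the "top 15 in both" cutoff games that the punditry plays with separate offense and defense ranks.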
 
It seems tho as if you are arguing that because it cannot completely remove things and cannot completely define how good a team is, it shouldn't be used.

I look at it like this: even if it gets, what, 85-90% of the way there, that's pretty good. My feeling is the whole purpose of these systems is to estimate the true talent level of teams. Looking at it in this regard, the system works just fine.
 
It seems tho as if you are arguing that because it cannot completely remove things and cannot completely define how good a team is, it shouldn't be used.

I look at it like this: even if it gets, what, 85-90% of the way there, that's pretty good. My feeling is the whole purpose of these systems is to estimate the true talent level of teams. Looking at it in this regard, the system works just fine.

I don't disagree with that. The whole thing started with me basically saying that even though the initial rankings may be removed from the equation at some point during the season, they never truly go away, as they are the starting point. I'm not, nor have I ever, argued that they shouldn't be used. I simply argued that the initial rankings (assumptions) always play a bit of a factor in the ratings, even if they are no longer directly in the equation, because they are a starting place, which cannot be removed.

Now, that is not a huge issue, as the initial rankings (assumptions) are generally in the ballpark. That definitely helps.

At the end of the day, what was stated, maybe by you, is correct: the NET rating of a conference is set before conference play begins, so regardless of what happens during conference play, the rating of the conference essentially doesn't move. I, like most, believe that KenPom ratings are the best and are useful; they just aren't perfect. No system is. But that doesn't mean we should disregard the whole system because it isn't perfect. It has value.

Most UK fans are just upset because our rating in KenPom isn't where we think the team is playing. But with KenPom there is no bias; every team is rated the same way. Our main issue is that we aren't very consistent. We beat good teams, but let lesser teams hang around. For every good bump we get for a good performance, we get knocked down 2-3 times for bad performances. Are they really bad performances?? Maybe, maybe not. The SEC did a poor job, UK included, of getting the conference a high rating in non-conference play. Now the lower rating of good teams hurts your efficiency when they play you close. And because UK isn't highly rated, the other teams in conference don't get as much of a bump as they normally would for playing us close.

Two weeks from Sunday, teams will get seeded, and the champion will have to win 6 games (or more, I guess) in a row to get crowned. There will be outliers, but the committee's perception of the teams and KenPom will generally agree. And for the most part, they will be about as correct as they could hope to be.
 
I admit to wearing blue-tinted glasses. Perhaps it is the 20 years since a Big 10 team won a title, or perhaps it is the perception (I believe I have actually read this recently as true) that the Big 10 has historically under-performed in the tourney relative to seed more than any other conference (I think the SEC has historically over-performed relative to seed), BUT I haven't seen a Big 10 team I am scared of. Maybe this is their year. Maybe they "Buck" their 20-year trend. Maybe the overall maturity of those teams wins out this year. They certainly don't appear to be hurt by early departures on the level other conferences are. Maybe there is just no great conference this year. I dunno, but I'm not buying the Big 10 hype until it actually happens.
 
The whole thing started with me basically saying that even though the initial rankings may be removed from the equation at some point during the season, they never truly go away, as they are the starting point.

But this is still incorrect. At this point in the season, pre-season assumptions have nothing to do with the current ratings. They are not affecting the current ratings in even a tiny way. Their having been there at the beginning does not in any way affect ratings today.
 
Well, the Big 10 did earn it by flat out dominating the non-conference season. The SEC needs to start taking November and December more seriously.
Well, then the conference season is pretty useless to the rankings, because the die has been cast by the preseason... it becomes a self-fulfilling prophecy, because there aren't any "bad losses" in the BIG as they're just beating up on other behemoths.
 
No relation to this year.

Correct.

But they have a history.

Oh.

So even though he says that the preseason rankings go away, they are still there hidden deep in the calculations.

This is quite literally incorrect.

IMO the only thing that can be argued is the SOS adjustment. But I'm unsure exactly what that argument could even be. Well, maybe the Big Ten got worse over the course of the year and the SEC got better... I mean, that's what you hear, but what evidence is there that this is occurring?

This is the crux of the issue. There is no evidence because of how the schedule is set up, so conference strength tends to be sticky based on early season results. But this isn't a KenPom problem so much as a schedule problem (that and small sample size). Still, as you point out, KenPom is a terrific system that is prized for its accuracy.

If y'all don't believe this is accurate, then you don't understand how math works. I'm sure he is saying that they are essentially removed from the equation, but the problem is, they are still the starting point. In other words, you can't really know a team's strength of schedule without the assumptions of the preseason, as even 15 games in, each team has played less than five percent of the Division 1 teams.

You're 100% wrong for a variety of reasons. First, the author (Ken Pomeroy, a brilliant individual who assuredly knows more about this than anyone on this board) has explicitly stated that any preseason finger on the scale is entirely removed at a certain point in the season.

Second, you don't need to play very many teams to get a good start on strength of schedule because you also have efficiency (i.e. games are not just binary results) and the strength of schedule of your opponents' opponents (and their efficiency and their opponents' opponents, etc...). So at the point that KenPom removes preseason assumptions, the SOS adjustments are in decent shape already. (As an aside, your posts suggest that you see inputs as games rather than possessions, which is a mistake.)

Third, you also seem to ignore that SOS is not set in stone the moment that preseason assumptions are removed but constantly evolving with more and more data. As you posted elsewhere, a team's SOS can change even when they don't play a game (and even if their opponents do not play games). That's... how it works.

Fourth, you seem to assume that a system must get to X% accuracy or it's bunk. But any system is limited by its inputs. For college basketball SOS, one of those limitations is the lack of cross-conference games in the last 3 months of the season, leading to sticky conference SOS ratings (and the cascading effect on individual teams). This is a limitation on the system, but no reason to throw the whole thing out or to question the explicit statements of its author, a well-respected expert, on how he constructed his own program.

The initial point was that the initial assumptions never truly go away, and they can't as long as you are using a SOS factor, which is the only way to be very accurate at all. It generally works itself out, as teams play more and adjustments are made. But this is generally because the initial assumptions are pretty good. However, they can never be completely removed due to the lack of links between all teams.

You're really doubling down on something that is easily verified as untrue: "The pre-season ratings will be degraded as real data accumulates. Starting next Monday, the influence of the initial rating will be dropped gradually each day until it reaches zero on the morning of January 23." https://kenpom.com/blog/what-happens-to-preseason-ratings-when-its-not-preseason-anymore/. IDK, he's only a data scientist who has been doing this for over a decade, whose work forms the basis for betting lines (i.e. real world tests), and who programs the database. Here's another post on why preseason assumptions are used at all when there is actual data: https://kenpom.com/blog/preseason-ratings-why-weight/.

Think of it this way: the preseason assumptions are weight on the scale that is gradually removed until there is zero weight remaining. You're approaching this like a limit equation where a value approaches, but never reaches, zero. Not only is that incorrect, but it raises the question of who cares, because at some point any residual effect is so small as to be swallowed by the margin of error.
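The "weight on the scale" point can be sketched in a few lines. This is a toy illustration, not KenPom's actual formula; the cutoff date mirrors the blog post quoted above, and the ratings and decay schedule are invented:

```python
from datetime import date

def blended_rating(observed, preseason, today,
                   decay_start=date(2020, 11, 30),
                   decay_end=date(2021, 1, 23)):
    """Linearly shrink the preseason weight to exactly zero by decay_end."""
    if today >= decay_end:
        w = 0.0  # the prior contributes nothing from here on
    elif today <= decay_start:
        w = 1.0
    else:
        w = (decay_end - today).days / (decay_end - decay_start).days
    return w * preseason + (1 - w) * observed

# Mid-decay, the prior still pulls the rating toward it:
print(blended_rating(observed=112.0, preseason=104.0, today=date(2020, 12, 27)))  # 108.0
# On or after the cutoff, the result is the observed data, full stop:
print(blended_rating(observed=112.0, preseason=104.0, today=date(2021, 2, 1)))    # 112.0
```

The key property is that the weight reaches exactly zero rather than asymptotically approaching it, which is precisely the difference between the two readings of "removed" being argued over in this thread.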
 
Well, then the conference season is pretty useless to the rankings, because the die has been cast by the preseason... it becomes a self-fulfilling prophecy, because there aren't any "bad losses" in the BIG as they're just beating up on other behemoths.

To an extent, yes, conference SOS is somewhat sticky after there are no more (or very few) inter-conference games. But individual teams can still move dramatically.
 
The reference to Duke was because they were outside the top 15 or 20 (whatever the cutoff was), likely in defensive efficiency. But after the tournament was over, they were inside the top 15 in both. The reason is that you play 6 straight games against top competition (high SOS), and by virtue of winning those games (being more efficient), your efficiency climbs, especially with respect to the other top teams in the tournament who are losing (less efficient). It has been discussed several times since then, and is why, even though the point is brought up, most know it is not correct with respect to the pre-tournament ratings. The idea, according to the "experts," was that something like 10 of the last 11 champions were top 15 in both offensive and defensive efficiency. And while that was statistically true, those were post-tournament numbers. So applying that logic to pre-tournament ratings doesn't work out. But it does generally work out post-tournament, for the reason described above.

Yeah, the pre-tournament vs. post-tournament KenPom cutoffs (which make for great columns and okay analysis) have been hashed and rehashed over the years. Most of the punditry gets it right now (look to pre-tournament efficiency rather than post-tournament efficiency), but since past pre-tournament ratings are behind a paywall, the layperson still confuses the issue. And pundits still do dumb things like "top 20 in X" rather than relying on something like standard deviation or efficiency differential; the problem is that oftentimes there is little difference between, for example, #12 and #22, so relying on rank rather than a more direct measure leads to odd results.
 
Kentucky moved up 0 spots after that 9-point win on the road at A&M.

Started the day at 26; oh look at that, still 26.

Ranked 29th in off eff
Ranked 43rd in def eff
 
Yeah, you're not moving at all if the system says you win by 6 and you win by just 9.

We did pretty much as expected given our position in the system and Texas A&M.

At this rate tho I wouldn't even sweat it. The committee is going to give us a good seed because our resume will be good.
 
Correct.



Oh.



This is quite literally incorrect.



This is the crux of the issue. There is no evidence because of how the schedule is set up, so conference strength tends to be sticky based on early season results. But this isn't a KenPom problem so much as a schedule problem (that and small sample size). Still, as you point out, KenPom is a terrific system that is prized for its accuracy.



You're 100% wrong for a variety of reasons. First, the author (Ken Pomeroy, a brilliant individual who assuredly knows more about this than anyone on this board) has explicitly stated that any preseason finger on the scale is entirely removed at a certain point in the season.

Second, you don't need to play very many teams to get a good start on strength of schedule because you also have efficiency (i.e. games are not just binary results) and the strength of schedule of your opponents' opponents (and their efficiency and their opponents' opponents, etc...). So at the point that KenPom removes preseason assumptions, the SOS adjustments are in decent shape already. (As an aside, your posts suggest that you see inputs as games rather than possessions, which is a mistake.).

Third, you also seem to ignore that SOS is not set in stone the moment that preseason assumptions are removed but constantly evolving with more and more data. As you posted elsewhere, a team's SOS can change even when they don't play a game (and even if their opponents do not play games). That's... how it works.

Fourth, you seem to assume that a system must get to X% accuracy or its bunk. But any system is limited by its inputs. For college basketball SOS, one of those limitations is the lack of cross-conference games in the last 3 months of the season, leading to sticky conference SOS ratings (and the cascading effect on individual teams). This is a limitation on the system, but no reason to throw the whole thing out or to question the explicit statements of its author, a well-respected expert, on how he constructed his own program.



You're really doubling down on something that is easily verified as untrue: "The pre-season ratings will be degraded as real data accumulates. Starting next Monday, the influence of the initial rating will be dropped gradually each day until it reaches zero on the morning of January 23." https://kenpom.com/blog/what-happens-to-preseason-ratings-when-its-not-preseason-anymore/. IDK, he's only a data scientist who has been doing this for over a decade, whose work forms the basis for betting lines (i.e. real world tests), and who programs the database. Here's another post on why preseason assumptions are used at all when there is actual data: https://kenpom.com/blog/preseason-ratings-why-weight/.

Think of it this way: the preseason assumptions are weight on the scale that is gradually removed until there is zero weight remaining. You're approaching this like a limit equation where a value approaches, but never reaches, zero. Not only is that incorrect, but it begs the question of who cares because at some point any residual effect is so small as to be swallowed by the margin of error.

So let me walk you through this real slow. In game 1 you generate an efficiency which is weighted by assumptions. Let's just say that, in theory, at game 15 the preseason assumptions are removed from the equation. This is done by weighting recent games more than previous ones, with the equation actually stopping at 15 games for opponents, opponents' opponents, and so on. The problem is that the efficiency generated at game 14 is weighted by preseason assumptions. So even though the preseason rankings no longer affect your strength of schedule, they were the basis by which you arrived at the efficiency at game 15.

It is very similar to the NET saying they cap margins at 10 points, so it doesn't matter if you win by 10 or 25. The problem is, if you win by 25 instead of 10, your efficiency is higher, so basically the margin isn't really capped at 10. The margin factor is, but the efficiencies are not.
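The distinction drawn here (the margin component is capped, the efficiency component is not) can be sketched as follows. This is a simplification for illustration, not the NCAA's actual NET formula, and the scores are invented:

```python
# Sketch of the capped-margin vs. uncapped-efficiency distinction.

def capped_margin(points_for, points_against, cap=10):
    """Scoring margin clipped to +/- cap, as a NET-style margin component."""
    return max(min(points_for - points_against, cap), -cap)

def raw_efficiency(points_for, possessions):
    """Points per 100 possessions, computed from the full raw score."""
    return 100.0 * points_for / possessions

# A 10-point win and a 25-point win look identical to the capped margin...
print(capped_margin(80, 70), capped_margin(95, 70))    # 10 10
# ...but not to efficiency, so the bigger win still moves the rating.
print(raw_efficiency(80, 70), raw_efficiency(95, 70))  # ~114.3 vs ~135.7
```

In other words, capping one input does not cap every input derived from the same box score, which is the poster's point.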
 
So let me walk you through this real slow. In game 1 you generate an efficiency which is weighted by assumptions. Let's just say that, in theory, at game 15 the preseason assumptions are removed from the equation. This is done by weighting recent games more than previous ones, with the equation actually stopping at 15 games for opponents, opponents' opponents, and so on. The problem is that the efficiency generated at game 14 is weighted by preseason assumptions. So even though the preseason rankings no longer affect your strength of schedule, they were the basis by which you arrived at the efficiency at game 15.

It is very similar to the NET saying they cap margins at 10 points, so it doesn't matter if you win by 10 or 25. The problem is, if you win by 25 instead of 10, your efficiency is higher, so basically the margin isn't really capped at 10. The margin factor is, but the efficiencies are not.

Dude. You are wrong. Period. The only thing in the model now is what happened this year. There is ZERO EFFECT of preseason assumptions. They do not enter into the calculations whatsoever. The only data entered is what happened this year. There is no such thing as "they played in game 12 and they were assumed to be so strong at that point, factoring in preseason data." We beat a team in week 12, and that team's strength is 100% based on what that team did this year.
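A toy sketch of why a starting point can wash out completely: if the ratings are re-solved iteratively from this season's raw game results each time, two different starting points (a flat start versus a "preseason" guess) converge to the same answer. The algorithm and every number below are invented for illustration; this is not KenPom's actual method:

```python
# Toy illustration (NOT KenPom's actual algorithm): when ratings are
# re-solved from this season's raw results alone, the starting point
# washes out of the converged answer entirely.

AVG = 103.0  # assumed league-average points per 100 possessions

# Invented raw offensive efficiency each team produced in each game.
GAMES = {
    "A": [("B", 110.0), ("C", 95.0)],
    "B": [("A", 100.0), ("C", 105.0)],
    "C": [("A", 108.0), ("B", 99.0)],
}

def solve(start):
    """Iterate to a fixed point: credit raw output more when the opponent's
    current rating is high, then renormalize to keep the league average."""
    adj = dict(start)
    for _ in range(200):
        new = {t: sum(raw * adj[opp] / AVG for opp, raw in gs) / len(gs)
               for t, gs in GAMES.items()}
        scale = AVG * len(new) / sum(new.values())  # keep mean at AVG
        adj = {t: v * scale for t, v in new.items()}
    return adj

uniform = solve({"A": AVG, "B": AVG, "C": AVG})       # flat start
biased  = solve({"A": 120.0, "B": 90.0, "C": 103.0})  # a "preseason" guess
# Both starting points converge to the same ratings:
assert all(abs(uniform[t] - biased[t]) < 1e-6 for t in GAMES)
```

This is the behavior the reply describes: once the fixed point depends only on the raw game data, nothing about where the iteration started survives in the final numbers.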
 
Yeah, you're not moving at all if the system says you win by 6 and you win by just 9.

We did pretty much as expected given our position in the system and Texas A&M.

At this rate tho I wouldn't even sweat it. The committee is going to give us a good seed because our resume will be good.
It does matter, because the committee has said they look at it. It also matters because the committee is always looking for ANY reason to underrate UK. If they can offer any plausible reason for rating Kentucky so differently from its ranking, they will use it and do just that.

One final possibility for why the efficiency ratings don't tell the story on some teams, like UK this season, is that it is difficult to measure the quality of a team that just refuses to lose, even in games it could be in danger of losing. Back to my Purdue comparison (and someone mentioned it above as a detractor for this UK team): the Boilermakers have had some close games as well, but they've lost 14 of them. Still, since their efficiency numbers are better, they are better in KenPom. Are they better, though? Is a team that has blown out some good teams but also lost 14 games really better than a team that has lost only 5 games all season, despite playing in close games all season? KenPom can't measure heart and an unwillingness to lose. That's the best quality of this Kentucky team.
 