Ok, but do you really believe that? That's a lot against us on a neutral court.
Probably not. I’d say Zags -4
Hmm, you really think they're better than us on a neutral court? I don't. They've beaten some big names this year, but all of them are having suspect seasons. I hope we get them in the tournament.
Here is the problem with KenPom: your efficiencies are based on your strength of schedule, which they should be, but where does that strength of schedule come from? So even though he says that the preseason rankings go away, they are still there, hidden deep in the calculations. In other words, for strength of schedule to factor in, there has to be a starting point, one that diminishes as the season goes along but never truly disappears, because it was the starting point.
The very information KenPom provides basically tells you that. The preseason rankings factor themselves out around January, which happens to be when all the non-conference games are over, and as stated before, the conference schedule is a net zero. Even weighting for recency is a fallacy, as there is still an SOS factor in there that basically comes from early-season results and preseason rankings. It would be impossible to completely remove those items and reset because, again, you have to have a starting point.
It comes back to what I've always said about preseason rankings: I want to be number 1, because it is much harder to fall out of the top 25 than it is to climb into it.
Probably semantics, but KenPom doesn’t “place great value” on winning by a large margin. It just doesn’t cap margin of victory in either direction, and measures efficiency on a possession basis.

Looking at Purdue's record, I think I see the flaw in KenPom. Purdue is rated so highly based on winning by large margins in several of their games. KenPom loves those large margins. Kentucky hasn't won games by large margins, which is one reason the Cats are rated so poorly considering their record. So, Purdue "must be better based on the numbers." Apparently, blowout losses don't hurt you in KenPom, though, because Purdue has multiple games they've lost by 10+ points (they have 7 such losses). How many 10+ point losses does UK have? ZERO.
KenPom places great value on those large margins but doesn't place as much value on the most important number of all: a win, of any margin. That's what this Kentucky team has done; they just win. They don't care if it's by 1 point or 10. That's something Purdue hasn't done so well... FOURTEEN TIMES.
Trust me on this, Purdue isn't a better team than UK, and it's not close. They are a case study in inconsistency. Isn't consistency what matters? What are the chances UK could win 6 games in a row by 2-10 points? High. What are the chances Purdue could? Zero. They haven't done it all season; they haven't won more than 3 games in a row all season. Look at how Purdue and UK are rated by Sagarin, a system that values winning more than margin: UK #6 and Purdue #32. That's about right.
Yea, this isn’t true. The preseason rankings aren’t buried deep in there... they don’t affect the current rankings at all.
The preseason rankings are just there so he can have some actual rankings that are in the general ballpark to start the season. As more data accumulates, those preseason rankings are weighted less and less until they’re removed.
KenPom’s efficiency model is just a mathematical algorithm. Just imagine he didn’t release his rankings until Feb 1st: he collected all the game data up to that point in the season, dumped it into his model, and released his rankings. That’s what you’re getting once the preseason rankings are removed.
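The "just dump the game data in" idea can be sketched in Python. This is a toy opponent-adjustment model with made-up teams and scores, not KenPom's actual code; the point it illustrates is that two wildly different seed ratings (stand-ins for "preseason priors") iterate to the same final ratings.

```python
# Toy opponent-adjusted efficiency solver, in the spirit of (not identical to)
# KenPom's model. All teams, games, and numbers here are hypothetical.
# Each entry: (team, opponent, points scored by team, possessions).
games = [
    ("A", "B", 80, 60), ("B", "A", 40, 60),
    ("B", "C", 70, 65), ("C", "B", 65, 65),
    ("A", "C", 75, 62), ("C", "A", 55, 62),
]
teams = sorted({g[0] for g in games} | {g[1] for g in games})
league_ppp = sum(g[2] for g in games) / sum(g[3] for g in games)

def solve(seed, iters=2000, damp=0.5):
    """Iterate adjusted offense/defense to a fixed point from any seed."""
    off = dict(seed)  # adjusted points scored per possession
    dfn = dict(seed)  # adjusted points allowed per possession (lower = better)
    for _ in range(iters):
        up_off, up_dfn = {}, {}
        for t in teams:
            # scale each raw PPP by how strong the opponent's defense/offense is
            scored = [(p / n) * (league_ppp / dfn[o])
                      for tm, o, p, n in games if tm == t]
            allowed = [(p / n) * (league_ppp / off[o])
                       for o, tm, p, n in games if tm == t]
            up_off[t] = sum(scored) / len(scored)
            up_dfn[t] = sum(allowed) / len(allowed)
        # renormalize so the league averages stay anchored at league_ppp
        for d in (up_off, up_dfn):
            m = sum(d.values()) / len(d)
            for t in d:
                d[t] *= league_ppp / m
        off = {t: damp * off[t] + (1 - damp) * up_off[t] for t in teams}
        dfn = {t: damp * dfn[t] + (1 - damp) * up_dfn[t] for t in teams}
    return off, dfn

# Two very different "preseason priors" converge to the same ratings.
off_flat, dfn_flat = solve({t: 1.0 for t in teams})
off_skew, dfn_skew = solve({"A": 0.3, "B": 3.0, "C": 1.0})
```

With this schedule the final numbers depend only on the game data, not on the seed, which is the convergence argument being made in the thread.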
I don't believe this is accurate.
While there's a starting point, and that starting point may or may not be accurate, eventually the system corrects itself.
UK started the season ranked 2nd, but UK would still be ranked 26th at this point whether they started the season 2nd or 200th.
If y'all don't believe this is accurate, then you don't understand how math works. I'm sure he is saying that they are essentially removed from the equation, but the problem is, they are still the starting point. In other words, you can't really know a team's strength of schedule without the assumptions of the preseason, as even 15 games in, each team has played less than 5 percent of the Division 1 teams. The only way to remove the preseason assumptions entirely would be if teams played over 35-40% of the teams in Division 1, and even then the games would have to be specific enough that there were enough data points between teams to accurately rank each team based solely on performance to that date. The issue is, no team does that, even in an entire season, as that would be close to 100 games. And not only that, the equation would have to be totally changed at that point, and if that were the case, you would see a lot of teams move significantly all at once.
Now, if we aren't going to use SOS at all, and aren't going to calculate an effective efficiency, then this could totally be done, without any assumptions from the very first game. But that would also mean playing 10 terrible teams would likely launch you to the top, because your PPP on offense and defense would look very good, even though it came against terrible teams.
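That inflation effect is easy to show numerically. The sketch below uses entirely made-up numbers and a one-step opponent adjustment (a big simplification of what full rating systems do): a team fattening its raw PPP on weak defenses gets deflated once opponent quality is factored in.

```python
# Made-up illustration: raw PPP flatters a team that schedules weak defenses;
# a one-step opponent adjustment deflates it. Every number is hypothetical.
league_avg = 1.00  # assumed league-average points per possession

# (raw PPP in the game, opponent's typical PPP allowed). Team X piles up
# points on terrible defenses; Team Y posts humbler raw numbers against
# stingy ones.
x_games = [(1.25, 1.15), (1.30, 1.20), (1.20, 1.18)]
y_games = [(1.05, 0.90), (1.00, 0.88), (1.08, 0.92)]

def raw_ppp(gs):
    return sum(p for p, _ in gs) / len(gs)

def adj_ppp(gs):
    # scale each game's raw PPP down by how leaky the opposing defense is
    return sum(p * (league_avg / d) for p, d in gs) / len(gs)
```

Here X leads on raw PPP, but Y leads once the adjustment is applied, which is exactly the cupcake-schedule problem described above.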
So basically what you are saying is teams have not played enough games in a given season to accurately represent SOS?
I guess I'm trying to figure out where exactly you are getting the 35-40%-of-all-D1-teams figure for removing the preseason entirely. That doesn't jibe with Ken himself saying all previous weight is removed around the beginning of February.
Obviously the more games played, the more information we have on the true talent level of teams and thus the more accurate an SOS rating and overall rating would be. Also obvious is that teams play around 30-35 games a season.
I find it highly unlikely a team would have to play 35-40% of D1 schools to get an accurate representation of SOS.
And I hate to keep harping on it, but the ratings take SOS into account and the predictions nearly mirror Vegas lines; I think that shows the adjustment is accurate.
All computer systems account for SOS in some way, so you're essentially arguing they are all off. Yet these systems come close to predicting pretty much what Vegas would have predicted.
Also, the starting point has no bearing whatsoever: as the season goes on and the teams' ratings change, the SOS also changes.
Let me also hit this descriptively instead of mathematically. What you’re saying doesn’t make sense, because in a closed conference measurement, everything is zero-sum. Any time one team scores a lot of points, another team pays a price in its defensive efficiency. Any time one team prevents another team from scoring, it hurts the other team’s offensive efficiency. Net efficiency, which determines KenPom ranking, is just the difference between the two.
In other words, a snake that eats its tail doesn’t get bigger.
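The zero-sum claim can be verified numerically. A small Python sketch with a hypothetical closed round-robin (every score and possession count invented for illustration): since every point scored is a point allowed, the possession-weighted league total of net efficiency is exactly zero.

```python
# In a closed league every point one team scores is a point another allows,
# so the league as a whole cannot inflate its own efficiency.
# Hypothetical results: (home, away, home points, away points, possessions).
from collections import defaultdict

results = [
    ("A", "B", 80, 70, 68),
    ("B", "C", 66, 75, 64),
    ("A", "C", 71, 64, 70),
]

scored, allowed, poss = defaultdict(int), defaultdict(int), defaultdict(int)
for home, away, hp, ap, n in results:
    scored[home] += hp; allowed[home] += ap; poss[home] += n
    scored[away] += ap; allowed[away] += hp; poss[away] += n

# Net efficiency per team, and its possession-weighted league total.
net = {t: (scored[t] - allowed[t]) / poss[t] for t in scored}
weighted_total = sum(net[t] * poss[t] for t in net)  # zero by construction
```

Individual teams can look great or terrible, but the books always balance, which is the "snake eating its tail" point.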
You’re gonna have to explain that. If the conference is “just a bunch of mediocre teams beating up on each other,” then the end result should be bad overall efficiency scores, not good ones.
In a universe with only two teams and no games played, if Team A beats Team B 80-40 in a 60-possession game, Team A has excellent offensive and defensive efficiency (1.33 points scored per possession, .667 points per possession allowed). Team A would be far and away the highest efficiency team on KenPom with those scores.
If the tables turn in the next game, and Team B wins the same game by the same margin, then the scores for both teams average out to 1 point per possession scored, 1 point per possession allowed, which would be very poor efficiency numbers (dropping both teams to about 170 overall on KenPom).
If Team B instead wins the second game 60-58, then the resulting efficiencies are:
Team A: 1.15 points scored per possession, .833 points allowed per possession (would still rank first overall on KenPom this year)
Team B: .833 points scored per possession, 1.15 points allowed per possession (would still rank almost dead last on KenPom, despite having just defeated an incredibly powerful team).
A conference beating up on itself does not lead to excellent efficiency scores for all teams, unless the teams in that conference torched all of their nonconference opponents.
So what is your exact theory, mathematically, about what is happening here?
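The per-possession arithmetic in the two-team example above checks out; a quick Python recomputation of the 60-58 scenario:

```python
# Checking the arithmetic from the two-team example (60 possessions per game).
# Game 1: A beats B 80-40.  Game 2: B beats A 60-58.
def ppp(points, possessions):
    return points / possessions

a_scored  = (ppp(80, 60) + ppp(58, 60)) / 2  # A's points per possession
a_allowed = (ppp(40, 60) + ppp(60, 60)) / 2  # A's points allowed per possession
b_scored  = a_allowed  # B scored exactly what A allowed, and vice versa
b_allowed = a_scored
```

This reproduces the 1.15 / .833 figures quoted in the post.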
The best example of the error of the computer rankings is Duke in 2015. Even though they were one of the top 3 teams in the country, and everybody knew it, their efficiency ratings showed that they were not a candidate to win the title. However, after playing 6 games in the tournament, those ratings suddenly increased to a level that made them a candidate. Why is that? Because they played 6 high-level games and had to be efficient in them in order to win, which changed their ratings. That is my point about the number of games played. By the time there is actually enough data to accurately measure the top teams, there is no use for the data, as the tournament is over and nobody cares.
Say What? This is so confusing. My brain is stuck on early morning status.
As I said in the previous post, just imagine he didn’t release his rankings until Feb 1st, and on Feb 1st all he did was use game data from the 20 or so games teams had already played. That’s what his rankings are now: rankings purely generated by the data from games played this season.
SOS is determined by the quality of teams you’ve played based on their performance this season.
I don't agree with this at all. Are there any great teams in the Big 10? No, but take a look around college basketball. The fact is, 1-12 the conference is tough. Look at Ohio State and Michigan prior to conference play; they were considered 2 of the best in college basketball. What happened? They entered conference play in a conference that has 12 teams rated in the top 40 at KenPom. That is brutal. I love the Cats, but you are kidding yourself if you think we would be sitting at 2 conference losses or anything even close if we played in the Big 10 this year. As far as Big Ten teams good enough to win it all? Not saying they are favorites, but Maryland, Michigan, Michigan State, and Ohio State all could win 6 games in a row against good competition.

UK has a much better shot at winning 6 in a row than Purdue. Kentucky has won 6 in a row twice, including their current streak, which includes 3 difficult true road wins.
I still say the Big Ten is an overrated conference. Now, if you're talking about raw number of good teams, they're the best. But if you're talking about teams that could make a Final Four or have a shot to win it all? I don't see even one.
While I agree with you that the more games the better, and that you never get to 100% because you simply don't play that many games in a season, I don't really agree with this point.
Duke finished 3rd overall after all of the games. Pre-tournament, though, they were 6th. In a tournament where it's one-and-done, I think they were definitely a candidate to win the title. Duke started the tournament with an efficiency margin of 29.30 and ended with 32.48.
I don't believe anyone is claiming that the systems get 100% of the way there, though. Even if one were able to do that, you'd still have luck to deal with.
But it's certainly close to accurate.
The reference to Duke was because they were outside the top 15 or 20, whatever the cutoff was, in likely defensive efficiency. But after the tournament was over, they were inside the top 15 in both. The reason is that you play 6 straight games against top competition (high SOS), and by virtue of winning those games (being more efficient), your efficiency climbs, especially relative to the other top teams in the tournament, who are losing (less efficient). It has been discussed several times since then, and it is why, even though the point is brought up, most know it is not correct with respect to the pre-tournament ratings. The idea, according to the "experts," was that something like 10 of the last 11 champions were top 15 in both offensive and defensive efficiency. And while that was statistically true, those were post-tournament numbers, so applying that logic to pre-tournament ratings doesn't work out. But it does generally work out post-tournament, for the reason described above.
The reason for the example was to point out that Duke wasn't likely that much better during the tournament than they were all year, but playing and winning 6 straight games against high-level competition will certainly increase those ratings. Considering that most teams play fewer than 6 high-level games pre-conference, it can make the ratings somewhat skewed going into conference play. Duke's efficiency going into the tournament was likely driven more by the perceived strength of their competition's competition.
It seems, though, as if you are arguing that because it cannot completely remove things and cannot completely define how good a team is, it shouldn't be used.
I look at it like this: even if it gets, what, 85-90% of the way there, that's pretty good. My feeling is that the whole purpose of these systems is to estimate the true talent level of teams. Looked at in this regard, the system works just fine.
1987.

Kentucky would be a middle-of-the-pack team in the Big Ten.
The whole thing started with me basically saying that even though the initial rankings may be removed from the equation at some point during the season, they never truly go away, as they are the starting point.
Well, then the conference season is pretty useless to the rankings, because the die has been cast by the preseason... it becomes a self-fulfilling prophecy, because there aren't any "bad losses" in the Big Ten as they're just beating up on other behemoths.

Well, the Big 10 did earn it by flat-out dominating the non-conference season. The SEC needs to start taking November and December more seriously.
No relation to this year.
But they have a history.
IMO the only thing that can be argued is the SOS adjustment. But I'm unsure exactly what that argument could even be. Well, maybe the Big Ten got worse over the course of the year and the SEC got better... I mean, that's what you hear, but what evidence is there that this is occurring?
The initial point was that the initial assumptions never truly go away, and they can't as long as you are using an SOS factor, which is the only way to be very accurate at all. It generally works itself out as teams play more and adjustments are made, but that is largely because the initial assumptions are pretty good. However, they can never be completely removed, due to the lack of links between all teams.
I would not want to match up with Gonzaga. We don't shoot well enough for that one.

I mean, they’re one of the best three-point shooting teams in the country and have a frontcourt of Petrusev and Tillie who can match up with any frontcourt in the country.
Correct.
Oh.
This is quite literally incorrect.
This is the crux of the issue. There is no evidence because of how the schedule is set up, so conference strength tends to be sticky based on early season results. But this isn't a KenPom problem so much as a schedule problem (that and small sample size). Still, as you point out, KenPom is a terrific system that is prized for its accuracy.
You're 100% wrong for a variety of reasons. First, the author (Ken Pomeroy, a brilliant individual who assuredly knows more about this than anyone on this board) has explicitly stated that any preseason finger on the scale is entirely removed at a certain point in the season.
Second, you don't need to play very many teams to get a good start on strength of schedule, because you also have efficiency (i.e., games are not just binary results) and the strength of schedule of your opponents' opponents (and their efficiency, and their opponents' opponents, etc.). So at the point that KenPom removes preseason assumptions, the SOS adjustments are already in decent shape. (As an aside, your posts suggest that you see inputs as games rather than possessions, which is a mistake.)
Third, you also seem to ignore that SOS is not set in stone the moment that preseason assumptions are removed but constantly evolving with more and more data. As you posted elsewhere, a team's SOS can change even when they don't play a game (and even if their opponents do not play games). That's... how it works.
Fourth, you seem to assume that a system must get to X% accuracy or it's bunk. But any system is limited by its inputs. For college basketball SOS, one of those limitations is the lack of cross-conference games in the last 3 months of the season, leading to sticky conference SOS ratings (and a cascading effect on individual teams). This is a limitation on the system, but no reason to throw the whole thing out or to question the explicit statements of its author, a well-respected expert, on how he constructed his own program.
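The opponents'-opponents point can be made concrete with a toy schedule graph (entirely hypothetical: 60 teams, six 10-team conferences, one nonconference game chaining each conference to the next). Even though every team plays under 20% of the field, a breadth-first search shows every team sits within a few hops of every other, which is why the chain of comparisons reaches the whole field.

```python
# Sketch of why a short schedule can still connect the whole field.
from collections import deque

n_conf, conf_size = 6, 10
adj = {t: set() for t in range(n_conf * conf_size)}

def add_game(u, v):
    adj[u].add(v)
    adj[v].add(u)

for c in range(n_conf):
    base = c * conf_size
    for i in range(conf_size):
        for j in range(i + 1, conf_size):
            add_game(base + i, base + j)            # conference round-robin
    add_game(base, ((c + 1) % n_conf) * conf_size)  # one nonconference game

def bfs_ecc(src):
    """Return (eccentricity, number of teams reachable) from src."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values()), len(dist)

diameter = max(bfs_ecc(t)[0] for t in adj)  # worst-case hop count
```

In this toy layout every team reaches all 59 others within 5 hops, despite playing at most 11 of them directly.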
You're really doubling down on something that is easily verified as untrue: "The pre-season ratings will be degraded as real data accumulates. Starting next Monday, the influence of the initial rating will be dropped gradually each day until it reaches zero on the morning of January 23." https://kenpom.com/blog/what-happens-to-preseason-ratings-when-its-not-preseason-anymore/. IDK, he's only a data scientist who has been doing this for over a decade, whose work forms the basis for betting lines (i.e. real world tests), and who programs the database. Here's another post on why preseason assumptions are used at all when there is actual data: https://kenpom.com/blog/preseason-ratings-why-weight/.
Think of it this way: the preseason assumptions are weight on the scale that is gradually removed until there is zero weight remaining. You're approaching this like a limit equation where a value approaches, but never reaches, zero. Not only is that incorrect, but it begs the question of who cares because at some point any residual effect is so small as to be swallowed by the margin of error.
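That "weight on the scale" schedule can be sketched directly. The linear daily ramp below is an assumed stand-in, not Pomeroy's actual decay function, and the dates are hypothetical; it shows a weight that genuinely reaches zero on a fixed date rather than approaching zero as a limit.

```python
# A preseason weight "dropped gradually each day until it reaches zero" on a
# fixed date. Linear ramp and dates are assumptions for illustration only.
from datetime import date

SEASON_START = date(2019, 11, 5)  # hypothetical opening day
ZERO_DATE = date(2020, 1, 23)     # weight is exactly zero from here on

def preseason_weight(day):
    total = (ZERO_DATE - SEASON_START).days
    remaining = (ZERO_DATE - day).days
    return max(0.0, min(1.0, remaining / total))

def blended_rating(preseason, observed, day):
    # convex blend: all prior at the start, all observed data at the end
    w = preseason_weight(day)
    return w * preseason + (1 - w) * observed
```

After ZERO_DATE the blend returns the observed rating exactly, with no residual preseason term at all.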
So let me walk you through this real slow. In game 1 you generate an efficiency which is weighted by assumptions. Let's just say that, in theory, at game 15 the preseason assumptions are removed from the equation. This is done by weighting recent games more than previous ones, with the equation stopping at 15 games of opponents, opponents' opponents, and so on. The problem is that the efficiency generated at game 14 was weighted by preseason assumptions. So even though the preseason rankings no longer affect your strength of schedule, they were the basis by which you achieved the efficiency at game 15.
It is very similar to the NET saying they cap margins at 10 points, so it doesn't matter if you win by 10 or 25. The problem is, if you win by 25 instead of 10, your efficiency is higher, so basically the margin isn't really capped at 10. The margin factor is, but the efficiencies are not.
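A small Python sketch of that distinction, using simplified stand-ins rather than NET's actual formulas and hypothetical game numbers: the capped margin component treats a 10-point win and a 25-point win identically, while per-possession efficiency does not.

```python
# Capped margin vs. uncapped per-possession efficiency (simplified stand-ins,
# not NET's real formulas; all game numbers hypothetical).
def capped_margin(points_for, points_against, cap=10):
    return max(-cap, min(cap, points_for - points_against))

def net_efficiency(points_for, points_against, possessions):
    return (points_for - points_against) / possessions  # no cap anywhere

win_by_10 = (75, 65, 70)  # points for, points against, possessions
win_by_25 = (90, 65, 70)

m10 = capped_margin(win_by_10[0], win_by_10[1])
m25 = capped_margin(win_by_25[0], win_by_25[1])
e10 = net_efficiency(*win_by_10)
e25 = net_efficiency(*win_by_25)
```

Both wins produce the same capped margin, but the 25-point win still earns a higher efficiency figure, which is exactly the loophole described above.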
It does matter, because the committee has said they look at it. It also matters because the committee is always looking for ANY reason to underrate UK. If they can offer any plausible answer for why they rated Kentucky so differently from their ranking, they will use it and do just that.

Yeah, you're not moving at all if the system says you win by 6 and you win by just 9.
We did pretty much as expected given our position in the system and Texas A&M.
At this rate, though, I wouldn't even sweat it. The committee is going to give us a good seed because our resume will be good.