Maybe hate is a strong word.
In fact, Ken Pomeroy probably doesn't even know what Joe Scott did to him. But, rest assured, if Pomeroy knew, it would keep him up nights.
You see, Mr. Pomeroy has a system. Actually, to be fair, the system is the brainchild of a different brilliant basketball mind, Dean Oliver, but Pomeroy has been the statistician and messenger most effective at popularizing Oliver's work. Put simply, Oliver developed a method by which teams of all different compositions and strategies could be rated against each other on a level playing field, and, most importantly, team quality could be distributed down to the individual player level.
While the effectiveness of the latter is an oft-debated topic in the burgeoning "tempo-free" world of basketball analysis, the efficacy of the former (controlling for injuries) is quite strong.
The team-level outputs of the system are offensive and defensive ratings (merely the number of points scored or allowed per 100 possessions - the average usually hovers right at 100) and pace (the number of possessions a team normally uses in a game).
These three outputs can be compared across teams to develop single-game win probabilities and, through simulations, projected win-loss records and confidence intervals that are pretty good at forecasting the future. Pomeroy updates his site with team ratings and projections every day and individual player ratings every week, and recently these ratings and projections have been cited more and more by leading publications and basketball writers.
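For the curious, here is a rough Python sketch of how a pair of ratings might be turned into a projected score and win probability for a single game, in the Pythagorean spirit of Pomeroy's work. The exponent, the tempo averaging and the sample ratings below are placeholders of my own, not his published formulas.

```python
# Rough sketch: turn two teams' efficiency ratings and paces into a projected
# score and a win probability. The exponent (~11) and the averaging details
# are assumptions in the spirit of Pomeroy's method, not his exact formulas.

NATIONAL_AVG_EFF = 100.0   # points per 100 possessions, national average

def expected_score(adj_o, opp_adj_d, possessions):
    """Expected points: own offense scaled by opponent defense, times possessions."""
    pts_per_100 = adj_o * opp_adj_d / NATIONAL_AVG_EFF
    return pts_per_100 * possessions / 100.0

def game_projection(team_a, team_b, avg_tempo=67.0, exponent=11.0):
    """Projected score and Pythagorean-style win probability for team_a vs. team_b."""
    possessions = team_a["pace"] * team_b["pace"] / avg_tempo   # both paces interact
    pts_a = expected_score(team_a["adj_o"], team_b["adj_d"], possessions)
    pts_b = expected_score(team_b["adj_o"], team_a["adj_d"], possessions)
    win_prob = pts_a ** exponent / (pts_a ** exponent + pts_b ** exponent)
    return pts_a, pts_b, win_prob

# Made-up ratings for illustration only.
penn = {"adj_o": 108.0, "adj_d": 97.0, "pace": 68.0}
princeton = {"adj_o": 104.0, "adj_d": 95.0, "pace": 58.0}
pts_a, pts_b, wp = game_projection(penn, princeton)
print(round(pts_a), "-", round(pts_b), round(wp, 3))
```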
As with any such system, however, the theory is only as useful as it stands up in practice. And that is where Pomeroy might develop some contempt for Coach Scott.
JUST A TEST
The most obvious question that falls out of this discussion is just how accurate Pomeroy's ratings have been for the Ivy League in past years. Pomeroy's site only lists the ratings as of the end of each season back to 2003, which isn't exactly the information we need, since those ratings are biased by performance during the 14 games of league play.
Using Kenneth Massey's (what is it about basketball statisticians named Ken?) incredibly detailed ratings archives, we can access an accurate record of each team's Pomeroy rating at the start of the league season. While the rank alone doesn't give us the offensive and defensive ratings, or the pace, necessary to compute win percentages, a simple assumption that parity is reasonably consistent over time allows us to take a generic set of Pomeroy's college ratings and assign each Ivy team the generic offensive and defensive ratings for its rank.
Using these ratings and an average pace number for each team based on their season-ending pace since 2003, we can compute the conference win probabilities and rank probabilities and compare them with the actual observed finishes.
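In code, that bookkeeping might look roughly like the sketch below: look up a generic rating for each team's pre-league rank, then simulate the 14-game double round robin a few thousand times to get win distributions and title odds. The ratings table, the rank assignments and the omission of home court are all simplifications for illustration, not the study's actual inputs.

```python
import random

# Placeholder lookup: national rank -> generic (adj_o, adj_d). Not real values.
GENERIC_RATINGS = {
    60: (110.0, 95.0), 120: (106.0, 99.0), 180: (103.0, 102.0),
    240: (100.0, 105.0), 300: (96.0, 108.0),
}

def pythag_win_prob(a, b, exponent=11.0):
    """Single-game win probability from two (adj_o, adj_d) tuples; pace omitted."""
    pts_a = a[0] * b[1] / 100.0   # team A offense vs. team B defense
    pts_b = b[0] * a[1] / 100.0
    return pts_a ** exponent / (pts_a ** exponent + pts_b ** exponent)

def simulate_season(teams, schedule, n_sims=2000):
    """Win distributions and (shared) title counts over many simulated seasons."""
    win_counts = {t: [0] * 15 for t in teams}   # counts of 0..14-win seasons
    titles = {t: 0 for t in teams}
    for _ in range(n_sims):
        season = {t: 0 for t in teams}
        for home, away in schedule:             # home court ignored for simplicity
            p = pythag_win_prob(teams[home], teams[away])
            season[home if random.random() < p else away] += 1
        for t, w in season.items():
            win_counts[t][w] += 1
        best = max(season.values())
        for t, w in season.items():
            if w == best:
                titles[t] += 1                  # ties counted as shared titles
    return win_counts, titles

# Placeholder eight-team league; everyone plays everyone twice (14 games each).
teams = {name: GENERIC_RATINGS[rank] for name, rank in
         [("Penn", 60), ("Princeton", 120), ("Yale", 180), ("Brown", 180),
          ("Cornell", 240), ("Columbia", 240), ("Harvard", 240), ("Dartmouth", 300)]}
schedule = [(a, b) for a in teams for b in teams if a != b]

win_counts, titles = simulate_season(teams, schedule)
for t, dist in win_counts.items():
    expected = sum(w * n for w, n in enumerate(dist)) / sum(dist)
    print(t, round(expected, 1), "wins;", round(titles[t] / 2000, 3), "title odds")
```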
YOU'RE KILLING ME, JOE
Over the eight years involved in the study, Pomeroy nabbed the winner six out of eight times, while his projected second-place finisher took the title twice. The lowest probability team to win the title was Penn in 2005 (39%).
While picking the champion is nice (and in a one-bid league often all anyone cares about), that's only one data point in the overall study. The model should be tested against every team's finish, not just the winners.
Over the 64 observations in the analysis (8 teams, 8 years), a team finished within a game either way of its win projection 23 times (36%). Nineteen more (30%, total 66%) finished within two games of their win projection, and nine more finished within three games (14%, total 80%). The 95% confidence interval of team win projections over a 14-game league season usually tops out around 3.5 wins above or below the mean. Three more observations fall within that band (5%, total 85%).
That leaves 10 data points where the number of league wins fell outside the 95% confidence interval. Seven of those 10 fell within three standard deviations of the mean. Three didn't.
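Mechanically, that tally is nothing more than bucketing the absolute miss for each team-season, along these lines. The (projected, actual) pairs below are just a few of the examples cited in this piece, not the full 64, and the flat 3.5-win cutoff standing in for the 95% interval is my shorthand for the fuller calculation.

```python
# Bucket each team-season by how far actual league wins missed the projection.
observations = [(11.7, 6), (4.4, 10), (7.8, 2), (8.1, 4)]   # placeholder sample

bands = {"within 1": 0, "within 2": 0, "within 3": 0, "within 3.5 (95% CI)": 0, "outside": 0}
for projected, actual in observations:
    miss = abs(actual - projected)
    if miss <= 1:
        bands["within 1"] += 1
    elif miss <= 2:
        bands["within 2"] += 1
    elif miss <= 3:
        bands["within 3"] += 1
    elif miss <= 3.5:
        bands["within 3.5 (95% CI)"] += 1
    else:
        bands["outside"] += 1

for band, count in bands.items():
    print(f"{band}: {count} ({count / len(observations):.0%})")
```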
Princeton 2005. Princeton 2006. Princeton 2007.
In other words, the entire Joe Scott era at Princeton.
In 2005, as defending champions and coming off a good non-conference run, the Tigers were expected to win 11.7 games. They won 6. In 2006, after losing Judson Wallace and Will Venable and struggling through the non-conference slate at 2-11, Princeton was slated to win just 4.4 games. It won 10. In 2007, it followed up that campaign with a decent non-league performance and was projected at 7.8 league wins. It won 2.
There are some reasonable theories to explain how this was remotely possible. The snail's pace and ridiculous reliance on three-pointers pushed variance to its logical extreme, which manifested itself in highly volatile and unpredictable results.
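A toy simulation makes the variance point concrete: give the better team a fixed per-possession edge, then shrink the number of possessions and tilt its shot mix toward threes, and its win rate sags even though its expected margin per possession hasn't moved. Every number here is made up for illustration; this is not the model Princeton actually broke.

```python
import random

# Toy variance illustration: each possession is an all-or-nothing 2 or 3 with a
# fixed expected value, so more threes means the same mean but fatter tails.
def points(possessions, pps, three_rate):
    """Simulate one team's score at a given points-per-possession and shot mix."""
    total = 0
    for _ in range(possessions):
        if random.random() < three_rate:
            total += 3 if random.random() < pps / 3 else 0
        else:
            total += 2 if random.random() < pps / 2 else 0
    return total

def favorite_win_rate(possessions, fav_three_rate, fav_pps=1.10, dog_pps=1.00, n=5000):
    """How often the per-possession favorite wins; opponent keeps a conventional mix."""
    wins = 0
    for _ in range(n):
        fav = points(possessions, fav_pps, fav_three_rate)
        dog = points(possessions, dog_pps, 0.3)
        if fav > dog or (fav == dog and random.random() < 0.5):   # coin-flip "overtime"
            wins += 1
    return wins / n

# Win rate drops as possessions fall and the favorite leans harder on threes.
for pace, threes in [(75, 0.25), (50, 0.25), (50, 0.7)]:
    print(pace, "possessions,", threes, "three rate:", round(favorite_win_rate(pace, threes), 3))
```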
HARVARD, AND OTHER TEAMS THAT MISSED THE MARK
Enough picking on Scott's Princeton teams. What about the other seven observations that also fell outside of the 95% confidence interval?
The most common offender was Harvard, which by the way, has underachieved in league play (or over-achieved in non-conference play, depending on how you look at it) seven of the past eight seasons. Three of those were bad enough misses to fall outside the predicted range, including 2003 (8.1 expected, 4 actual), 2006 (10.1 expected, 5 actual) and 2008 (7.6 expected, 3 actual).
Two of those teams get a pass, based on the caveat that the model doesn't predict injuries or dismissals, given Pat Harvey leaving the team in 2003 and the Crimson's myriad frontcourt injuries in 2008.
The final four teams were Brown 2003, Dartmouth 2005 and 2009, and Penn 2008, and all four overachieved. Brown was the only one that overachieved in a way that got it into the title race, as its 7.3 expected wins became 12 actual wins - a close second to Penn, which it lost to by four and seven during the year. The other three data points had teams expected to win between 2.4 and 3.6 games come up with 7 and 8 wins - out-performances that had very little to do with the league title race.
WHAT DOES IT ALL MEAN???
From this analysis, it would be impossible to say that Pomeroy's projections are close to perfect. Given the latent injury/departure variable, that conclusion could have been expected.
Even removing the data points where roster change issues skewed the results, the predictive ability isn't perfectly sound (though, to be fair, it is very, very good).
One potential explanation could be a short-term serial correlation of performance that would correct itself over time, but not necessarily within the bounds of a 14-game sample. For instance, the 2006 Harvard team - one of the offenders listed above - started out 4-1 in the Ivies before losing a close game in Ithaca and collapsing in the final minute against Princeton. Essentially eliminated from the Ivy race at that point, the Crimson proceeded to lose by 13 at home to a Brown team it had beaten by 17 on the road three weeks earlier and by 27 at home to a Cornell team it had lost to by two a month earlier.
The result was an eight-game losing streak that had less to do with the Crimson's innate ability and more to do with a lack of desire to play defense after its NCAA hopes were dashed. If the 14-Game Tournament were a 140-Game Tournament, it's questionable whether Harvard would have fallen so far off what would have been its 101-win expectation.
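As a back-of-the-envelope check, treat each game as an independent win at Harvard's implied per-game probability (10.1 expected wins over 14 games). A shortfall that steep is merely unlikely in a 14-game sample but essentially impossible over 140 games, which is consistent with the idea that a short season can't wash out a correlated stretch of bad play. The independence assumption here is mine, not the model's.

```python
from math import comb

# Chance of winning at a 5-out-of-14 clip, given a per-game win probability
# implied by a 10.1-win projection over 14 games. Independence is assumed.
p = 10.1 / 14

def prob_at_most(wins, games, p):
    """P(total wins <= wins) for a binomial(games, p) season."""
    return sum(comb(games, k) * p**k * (1 - p)**(games - k) for k in range(wins + 1))

print(prob_at_most(5, 14, p))     # rare but real: on the order of one season in a couple hundred
print(prob_at_most(50, 140, p))   # the same winning clip over 140 games: essentially zero
```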
This effect is probably more pronounced in the Ivies, where there is no conference tournament to act as a second season and falling out of the regular-season race carries more finality, but it could exist in all conferences, though that question is outside the scope of this analysis.
None of these nits should distract from the general point that the Pomeroy odds seem to be solid predictors of the ultimate finish of each of the Ivy teams. While the confidence intervals may keep you from being able to separate No. 3 from No. 5 with much certainty, they seem to be narrow enough to distinguish No. 1 from, say, No. 5.
Unless, of course, No. 5 is coached by Joe Scott. In that case, who knows where it will end up.