ESPN predicts wins for every team

here are last year's pythagorean expectations for each team, compared to their actual win total.

Team W L PCT Pyth PCT Pyth Wins Diff
chi 50 16 0.758 0.810 53.5 -3.5
mia 46 20 0.697 0.738 48.7 -2.7
ind 42 24 0.636 0.638 42.1 -0.1
bos 39 27 0.591 0.612 40.4 -1.4
atl 40 26 0.606 0.644 42.5 -2.5
orl 37 29 0.561 0.535 35.3 1.7
nyk 36 30 0.545 0.630 41.6 -5.6
phi 35 31 0.530 0.681 44.9 -9.9
mil 31 35 0.47 0.513 33.8 -2.8
det 25 41 0.379 0.300 19.8 5.2
tor 23 43 0.348 0.357 23.5 -0.5
njn 22 44 0.333 0.263 17.4 4.6
cle 21 45 0.318 0.226 14.9 6.1
was 20 46 0.303 0.305 20.1 -0.1
cha 7 59 0.106 0.080 5.3 1.7
sas 50 16 0.758 0.766 50.6 -0.6
okc 47 19 0.712 0.736 48.6 -1.6
lal 41 25 0.621 0.560 36.9 4.1
mem 41 25 0.621 0.587 38.7 2.3
lac 40 26 0.606 0.606 40.0 0.0
den 38 28 0.576 0.614 40.6 -2.6
dal 36 30 0.545 0.543 35.8 0.2
uta 36 30 0.545 0.529 34.9 1.1
hou 34 32 0.515 0.508 33.6 0.4
pho 33 33 0.500 0.492 32.4 0.6
por 28 38 0.424 0.475 31.3 -3.3
min 26 40 0.394 0.409 27.0 -1.0
gsw 23 43 0.348 0.363 23.9 -0.9
sac 22 44 0.333 0.287 18.9 3.1
noh 21 45 0.318 0.335 22.1 -1.1
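
(for anyone curious, the last two columns are just arithmetic on the 66-game lockout schedule; here's chicago's row as a quick python sketch:)

```python
# 2011-12 was a lockout-shortened 66-game season
GAMES = 66

# chicago's row from the table: 50 actual wins, .810 pythagorean pct
pyth_wins = 0.810 * GAMES   # expected wins, shown as 53.5 in the table
diff = 50 - pyth_wins       # actual minus expected, shown as -3.5
```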

i know these tables never display right... but if you look at the number to the farthest right, you'll see that pythag expectations were right within a couple of wins for the vast majority of the league.

philly underperformed relative to their point differential by a whopping 9.9 games. new york finished 5.6 wins worse than expected. cleveland, jersey and detroit probably benefited from their decent closing stretches (and might partially account for philly falling so far short, since philly played east lottery teams in virtually every game down the stretch). the lakers outplayed their expected win total by 4.1 games, but they frequently do that because they have an incredibly clutch player who gives them a big advantage in close games.

the jazz, meanwhile, won one more game than their pythagorean expectation says they should have.
 
let me see last year's pre-season predictions and be the judge of that.

YOU want to be the judge of it? not a whole community of people with applied mathematics degrees who have put this and several other methods to the test by running them against 40 years' worth of basketball history to determine that this is one of the most accurate predictors?

ok.
 
have you even read the piece and looked at his methodology?

remember, this isn't what HE expects at all. this is an objective projection of individual performance that he's putting together into expected team offensive & defensive performance to arrive at expected wins. there is no part of his system that involves sitting back in his chair, stroking his beard and saying, "hmm, i wonder how many wins randy foye will provide the jazz this season."

Doesn't he have to account for expected minutes of the individuals somehow? Or am I missing something?
 
Doesn't he have to account for expected minutes of the individuals somehow? Or am I missing something?

not sure... if he does, it's behind the curtains somewhere. basically he's projecting win shares of each player, so i would assume he's doing that in a similar fashion to how hollinger projects each guy's PER based on past performance, similar players, etc.

but if you're suggesting that's the variable, you're right. if derrick favors breaks out and has an absolutely beastly season, or if mo williams tweaks his ankle and plays half as many minutes as expected... those are the types of things that would move the needle up or down.
 
here are last year's pythagorean expectations for each team, compared to their actual win total.



i know these tables never display right... but if you look at the number to the farthest right, you'll see that pythag expectations were right within a couple of wins for the vast majority of the league.

philly underperformed relative to their point differential by a whopping 9.9 games. new york finished 5.6 wins worse than expected. cleveland, jersey and detroit probably benefited from their decent closing stretches (and might partially account for philly falling so far short, since philly played east lottery teams in virtually every game down the stretch). the lakers outplayed their expected win total by 4.1 games, but they frequently do that because they have an incredibly clutch player who gives them a big advantage in close games.

the jazz, meanwhile, won one more game than their pythagorean expectation says they should have.

Correct me if I'm wrong, but the table you gave is expected wins based on 2011-12 season performance. It is not the kind of prediction that Wes asked for in which 2010-11 data was used to predict 2011-12 performance. Is that correct? If correct, of course the data are still useful for the reasons you gave.

However, do you know if this author put out a similar prediction last year based on the previous year's data? I think Wes is right, despite his disdain for the methodology, that being able to see such predictions would help us determine whether they were more accurate than, say, the average predictions of ESPN's 100+ "experts."
 
Correct me if I'm wrong, but the table you gave is expected wins based on 2011-12 season performance. It is not the kind of prediction that Wes asked for in which 2010-11 data was used to predict 2011-12 performance. Is that correct? If correct, of course the data are still useful for the reasons you gave.

what i'm comparing is how 2011-12 wins compared to 2011-12 pythagorean wins. 2011-12 isn't used to predict 2012-13; what he's using to project 2012-13 is an expected team offensive/defensive rating that he's arriving at by using systematic projections of individual player performance.

in other words, in any particular season, a team's winning percentage should be somewhat equal (within a standard deviation) to off^16.5/(off^16.5+def^16.5). but the predictions don't work in terms of using one season's off/def performance to predict the NEXT season's win total.
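
a minimal sketch of that formula in python (the 105/102 ratings below are made-up numbers for illustration, not any team's actual ratings):

```python
# pythagorean win expectation with the 16.5 exponent
def pyth_pct(off_rtg, def_rtg, exp=16.5):
    """expected winning percentage from offensive/defensive ratings
    (points scored/allowed per 100 possessions)."""
    return off_rtg ** exp / (off_rtg ** exp + def_rtg ** exp)

# hypothetical team: scores 105, allows 102 per 100 possessions
pct = pyth_pct(105.0, 102.0)   # ~0.617
wins = pct * 82                # ~50.6 over a full 82-game season
```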

However, do you know if this author put out a similar prediction last year based on the previous year's data? I think Wes is right, despite his disdain for the methodology, that being able to see such predictions would help us determine whether they were more accurate than, say, the average predictions of ESPN's 100+ "experts."

i couldn't find his 2011-12 projections, although i'm sure he did them somewhere. i found his 2010-11 projections here and added actual 2010-11 wins as a reference point:

team expected wins actual wins
atl 39.6 44
bos 53.7 56
cha 36.3 34
chi 38.3 62
cle 33.7 19
dal 49.0 57
den 50.1 50
det 34.6 30
gsw 35.9 36
hou 43.2 43
ind 35.4 37
lac 34.3 32
lal 55.1 57
mem 31.8 46
mia 61.6 58
mil 38.9 35
min 29.9 17
njn 34.9 24
nwo 50.1 46
nyk 40.5 42
okc 39.8 55
orl 49.4 52
phi 34.4 41
phx 37.9 40
por 44.1 48
sac 39.4 24
sas 45.9 61
tor 27.4 22
uta 42.1 39
was 30.0 23
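
a quick tally of the misses, with the rows copied straight from the table above:

```python
# (team, projected wins, actual wins) from the 2010-11 table above
rows = [
    ("atl", 39.6, 44), ("bos", 53.7, 56), ("cha", 36.3, 34),
    ("chi", 38.3, 62), ("cle", 33.7, 19), ("dal", 49.0, 57),
    ("den", 50.1, 50), ("det", 34.6, 30), ("gsw", 35.9, 36),
    ("hou", 43.2, 43), ("ind", 35.4, 37), ("lac", 34.3, 32),
    ("lal", 55.1, 57), ("mem", 31.8, 46), ("mia", 61.6, 58),
    ("mil", 38.9, 35), ("min", 29.9, 17), ("njn", 34.9, 24),
    ("nwo", 50.1, 46), ("nyk", 40.5, 42), ("okc", 39.8, 55),
    ("orl", 49.4, 52), ("phi", 34.4, 41), ("phx", 37.9, 40),
    ("por", 44.1, 48), ("sac", 39.4, 24), ("sas", 45.9, 61),
    ("tor", 27.4, 22), ("uta", 42.1, 39), ("was", 30.0, 23),
]

errors = [abs(proj - actual) for _, proj, actual in rows]
off_by_2_plus = sum(e >= 2 for e in errors)   # 24 teams missed by 2+ wins
mae = sum(errors) / len(errors)               # mean absolute error, ~6.5 wins
```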
 
Nerd,

Thanks for the clarification. I understand the pythagorean premise and tend to agree that it's a pretty interesting/useful stat in-season.

And thanks for providing that author's predictions from two seasons ago. Some appear to be pretty close. But there are some huge anomalies (SAS, SAC, OKC, NJN, MIN, MEM, CLE, CHI). I think that's Wes's point: that these pre-season win predictions aren't all that reliable. And I'd guess that there were probably some large anomalies from his win predictions last season as well. So whether his statistical prediction method is any better than the "experts'" collective hunches is still an open question.
 
Nerd,

Thanks for the clarification. I understand the pythagorean premise and tend to agree that it's a pretty interesting/useful stat in-season.

And thanks for providing that author's predictions from two seasons ago. Some appear to be pretty close. But there are some huge anomalies (SAS, SAC, OKC, NJN, MIN, MEM, CLE, CHI). I think that's Wes's point: that these pre-season win predictions aren't all that reliable. And I'd guess that there were probably some large anomalies from his win predictions last season as well. So whether his statistical prediction method is any better than the "experts'" collective hunches is still an open question.

yeah, i think the hard part is projecting what each team's offensive/defensive numbers are going to be. once you have those, the pythagorean method doesn't deviate from reality too much. but when you're using PROJECTED individual performance to arrive at those numbers and then using them to PROJECT a win total, your error margins compound.

that's why you'll see more deviations when you compare [pythag wins based on projected output] versus [pythag wins based on actual output].
 
having said that, it's not like his player projections are arbitrary, either, so this is probably as good a projection as we can get without injecting some subjectivity into the calculation. in other words, for the jazz to be better than 7th, it's going to be because somebody (favors or hayward?) outplayed expectations.
 
having said that, it's not like his player projections are arbitrary, either, so this is probably as good a projection as we can get without injecting some subjectivity into the calculation. in other words, for the jazz to be better than 7th, it's going to be because somebody (favors or hayward?) outplayed expectations.

In a strictly technical sense, you're right, I think. But just because the individual player projections (based on what? win shares, something else?) are statistical doesn't mean we can expect them to be accurate -- or at least more accurate than an educated non-statistics-based guess. I tend to think that these single-value individual player statistical projections from one year to the next, while not completely random, are hardly more accurate than other means of projecting player effectiveness -- especially when converting them over to a team success context.

The other problem, which I think Cowhide alluded to, is that this method of using individual projections in some kind of additive way to reach team projections cannot account for things like chemistry, lack of chemistry, good or poor coaching, a good or bad system for players' talents, etc. It's more than just injuries. And it's more than just that a player blows up and has a great or awful season. If the 2010-11 projections you provided are any guide, we can expect more than 25% of teams to over- or under-perform the projections by at least 10 games (a pretty large amount if you think about it), with a few projections off by quite a bit more. So while this particular statistically based set of projections is interesting and surely has some usefulness, I hardly see how it is going to be any better a guide to a team's season than a well-reasoned non-statistical projection.

(I should stress that I find stats compelling and interesting, but unless I'm missing something this particular type of projection just has too many questionable assumptions and omissions built in to take it very seriously.)
 
Another way to say this -- again, please tell me if I'm wrong -- is that this statistical method has no way to evaluate what's really most interesting about the offseason: how new additions/subtractions will fit with what already exists on the team. In a sense, we all know what a Kirilenko brings to Minnesota or what a Marvin Williams brings to Utah from their previous team statistically, even if we don't perform the calculations precisely. What we don't know is how well they'll contribute in ways that don't show in their previous individual statistics. This method, by definition -- unless I'm mistaken -- has to assume that those non-statistically-measured contributions don't exist.
 
Team W L PCT Pyth PCT Pyth Wins Diff
chi 50 16 0.758 0.810 53.5 -3.5
mia 46 20 0.697 0.738 48.7 -2.7
ind 42 24 0.636 0.638 42.1 -0.1
bos 39 27 0.591 0.612 40.4 -1.4
atl 40 26 0.606 0.644 42.5 -2.5
orl 37 29 0.561 0.535 35.3 1.7
nyk 36 30 0.545 0.630 41.6 -5.6
phi 35 31 0.530 0.681 44.9 -9.9
mil 31 35 0.47 0.513 33.8 -2.8
det 25 41 0.379 0.300 19.8 5.2
tor 23 43 0.348 0.357 23.5 -0.5
njn 22 44 0.333 0.263 17.4 4.6
cle 21 45 0.318 0.226 14.9 6.1
was 20 46 0.303 0.305 20.1 -0.1
cha 7 59 0.106 0.080 5.3 1.7
sas 50 16 0.758 0.766 50.6 -0.6
okc 47 19 0.712 0.736 48.6 -1.6
lal 41 25 0.621 0.560 36.9 4.1
mem 41 25 0.621 0.587 38.7 2.3
lac 40 26 0.606 0.606 40.0 0.0
den 38 28 0.576 0.614 40.6 -2.6
dal 36 30 0.545 0.543 35.8 0.2
uta 36 30 0.545 0.529 34.9 1.1
hou 34 32 0.515 0.508 33.6 0.4
pho 33 33 0.500 0.492 32.4 0.6
por 28 38 0.424 0.475 31.3 -3.3
min 26 40 0.394 0.409 27.0 -1.0
gsw 23 43 0.348 0.363 23.9 -0.9
sac 22 44 0.333 0.287 18.9 3.1
noh 21 45 0.318 0.335 22.1 -1.1

Strong enough case to ban them from the league.
 
We should just have LA play Miami and cancel the rest of the season, IMO.
We're all about instant gratification in this society. It's terribly annoying to have to suffer through an entire 82-game season just to arrive at the outcome everyone already expects.

Just glancing at the numbers nerd has listed comparing 2010-11 actual vs. predicted, it's clear the methodology works best for established teams. But I think a "seat of the pants" method could work just as well. I'll bet I could finish within 5 wins for 18 of the 30 NBA teams, just as this method did, simply based on their record from the previous season and factoring in roster changes. 12 of 30 were > 5 wins off. I'm not so sure this method is all that reliable.

Utah will finish with 47 wins. I'll bet I'm +/- 5 on that one and could pick most of the WC. The one I'd probably miss on is a team like Minny: complete unknown. And Houston. No one can pick wins for that team.
 
In a strictly technical sense, you're right, I think. But just because the individual player projections (based on what? win shares, something else?) are statistical doesn't mean we can expect them to be accurate -- or at least more accurate than an educated non-statistics-based guess. I tend to think that these single-value individual player statistical projections from one year to the next, while not completely random, are hardly more accurate than other means of projecting player effectiveness -- especially when converting them over to a team success context.

The other problem, which I think Cowhide alluded to, is that this method of using individual projections in some kind of additive way to reach team projections cannot account for things like chemistry, lack of chemistry, good or poor coaching, a good or bad system for players' talents, etc. It's more than just injuries. And it's more than just that a player blows up and has a great or awful season. If the 2010-11 projections you provided are any guide, we can expect more than 25% of teams to over- or under-perform the projections by at least 10 games (a pretty large amount if you think about it), with a few projections off by quite a bit more. So while this particular statistically based set of projections is interesting and surely has some usefulness, I hardly see how it is going to be any better a guide to a team's season than a well-reasoned non-statistical projection.

(I should stress that I find stats compelling and interesting, but unless I'm missing something this particular type of projection just has too many questionable assumptions and omissions built in to take it very seriously.)
you're absolutely right about all those variables, which is why nobody's come up with a foolproof way of projecting individual performance. injuries, extra motivation, chemistry, playing time, which unit you play with, etc. can all have a bearing, so from a statistical standpoint, the best thing you can do is come up with a system that takes a standard approach and applies it consistently, accepting that you'll have some deviation.

the way most stat guys look to do that is by looking for historical evidence. that's what hollinger does -- he compares each player to his entire database of past NBA players to find a group that is statistically similar in terms of age and production, and then asks that database, "what happened next for all THESE guys?" if the historical answer is that their PER goes down by an average of 10%, then hollinger's projected PER for the guy will be last year's PER minus 10%. again, not perfect, but it usually comes pretty close. he estimated a little low for hayward, paul and al last season, but for the rest of the roster he was really quite close. devin projected at 16.88 (actual 16.08), cj was at (12.46), roger at 7.76 (8.36), etc.
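
hollinger's comp database isn't public, but the adjustment he describes boils down to something like this (the player and the comp-group changes below are invented for illustration):

```python
# project next season's PER by applying the average year-over-year
# change observed in a group of statistically similar past players
def project_per(last_per, comp_changes):
    """comp_changes: fractional PER changes seen in the comp group,
    e.g. -0.10 means a comparable player's PER dropped 10%."""
    avg_change = sum(comp_changes) / len(comp_changes)
    return last_per * (1 + avg_change)

# hypothetical guard whose comps lost ~10% of their PER on average
projection = project_per(16.0, [-0.12, -0.08, -0.10])   # ~14.4
```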

doolittle uses SCHOENE, which is similar in that it finds players who are similar across 13 factors (height, weight, usage, shooting percentage, etc.). he puts together a list of (ideally) 50 or so players who are at least 90% similar and uses their historical development to put together a hypothetical statline for the guy. then he creates a defensive value for the guy by looking at on-court/off-court as well as positional net values -- neither of which are perfect, but there aren't a lot of good tools for measuring individual defensive contributions.

so now he has 12-15 guys' imaginary PER/WS numbers and imaginary defensive numbers and he aggregates them to come up with an estimate on how good each team's offense and defense will be.
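
roughly, that aggregation step might look like the sketch below: a big simplification of what doolittle actually does, with a made-up three-man rotation and invented ratings/minutes.

```python
# minutes-weighted aggregation of individual projections into team
# ratings, then a pythagorean win estimate from those ratings
def team_rating(players):
    """players: (projected_rating_per_100, projected_minutes) pairs."""
    total_minutes = sum(m for _, m in players)
    return sum(r * m for r, m in players) / total_minutes

def pyth_wins(off_rtg, def_rtg, games=82, exp=16.5):
    pct = off_rtg ** exp / (off_rtg ** exp + def_rtg ** exp)
    return pct * games

# hypothetical rotation: (rating per 100 possessions, season minutes)
off = team_rating([(108.0, 2400), (104.0, 2000), (101.0, 1600)])
dfn = team_rating([(103.0, 2400), (105.0, 2000), (106.0, 1600)])
wins = pyth_wins(off, dfn)   # ~42 expected wins
```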

as you can see, a lot of variables here, but it's what there is. so any projection of the jazz winning 43 games is probably based on favors posting a 17 or 18 PER again, hayward at 16ish, etc. if we blow up and win 50, it's probably because one of those guys defied his list of "comps" and made a step to stardom that his SCHOENE projection didn't foresee. i think most jazz fans would agree that our arrival at contender status probably depends on those two, anyway.

(sorry for nerding out, here. i find this type of stuff interesting, even though i recognize that every system has its flaws.)
 
what i'm comparing is how 2011-12 wins compared to 2011-12 pythagorean wins. 2011-12 isn't used to predict 2012-13; what he's using to project 2012-13 is an expected team offensive/defensive rating that he's arriving at by using systematic projections of individual player performance.

in other words, in any particular season, a team's winning percentage should be somewhat equal (within a standard deviation) to off^16.5/(off^16.5+def^16.5). but the predictions don't work in terms of using one season's off/def performance to predict the NEXT season's win total.



i couldn't find his 2011-12 projections, although i'm sure he did them somewhere. i found his 2010-11 projections here and added actual 2010-11 wins as a reference point:

team expected wins actual wins
atl 39.6 44
bos 53.7 56
cha 36.3 34
chi 38.3 62
cle 33.7 19
dal 49.0 57
den 50.1 50
det 34.6 30
gsw 35.9 36
hou 43.2 43
ind 35.4 37
lac 34.3 32
lal 55.1 57
mem 31.8 46
mia 61.6 58
mil 38.9 35
min 29.9 17
njn 34.9 24
nwo 50.1 46
nyk 40.5 42
okc 39.8 55
orl 49.4 52
phi 34.4 41
phx 37.9 40
por 44.1 48
sac 39.4 24
sas 45.9 61
tor 27.4 22
uta 42.1 39
was 30.0 23

And these geniuses of 40 years were off by 2.0+ wins for 24 of 30 teams? Off by 4.0+ for 15 teams? Laughable. I could predict these teams' win totals just as closely using my own basic knowledge and common sense. And that was exactly my point. All those methodologies mean **** for the most part.
 
We should just have LA play Miami and cancel the rest of the season, IMO.
We're all about instant gratification in this society. It's terribly annoying to have to suffer through an entire 82-game season just to arrive at the outcome everyone already expects.

eh, i don't think it's about instant gratification, i think it's about the month of august and a huge community of NBA fans needing to do something to while away the dog days of the offseason. ;)

you're right, established teams don't tend to deviate from the projection as often... usually if there's a huge deviation, you can kind of identify why... "so-and-so was injured," or "that team played better than its record for two weeks in april when it didn't matter." philly's a bit of an anomaly - 9.9 wins worse than their point diff would suggest - but then, hollinger told us all year that those guys had gotten lucky a lot early in the season and that they'd backslide in the standings, which they did.
 
And these geniuses of 40 years were off by 2.0+ wins for 24 of 30 teams? Off by 4.0+ for 15 teams? Laughable. I could predict these teams' win totals just as closely using my own basic knowledge and common sense. And that was exactly my point. All those methodologies mean **** for the most part.

Put your money where your mouth is.

This could be fun.
 
And these geniuses of 40 years were off by 2.0+ wins for 24 of 30 teams? Off by 4.0+ for 15 teams? Laughable. I could predict these teams' win totals just as closely using my own basic knowledge and common sense. And that was exactly my point. All those methodologies mean **** for the most part.

2-4 wins out of 82 games is not that big a variation. if i said, "the jazz will win somewhere between 48 and 52 games" that would sound like a fair projection, wouldn't it? any system has some room for deviation.

but also, remember: we're talking about two very different things....

1) pythag based on ACTUAL point differential correlates very strongly with actual winning percentage.
2) pythag based on ESTIMATED point differential (like the list you looked at) has more room for error.

i still think the overall system can probably outperform the subjective rankings more often than not... but it has some liabilities, sure. you might guess better than doolittle/pelton/hollinger where the jazz are concerned, but the chances of you guessing correctly on 30 teams year after year without falling victim to your own subjectivity are pretty slim... that's why systems like this are created: to account for as much variation as possible and get us closer than our preconceptions (usually) can.
 
2-4 wins out of 82 games is not that big a variation. if i said, "the jazz will win somewhere between 48 and 52 games" that would sound like a fair projection, wouldn't it? any system has some room for deviation.

A 4.0+ differential would be like saying the Jazz will win between 42 and 50 games (not 48 and 52)... just split the difference at 46 and you stay under that 4.0 margin.
 