ESPN predicts wins for every team

you might guess better than doolittle/pelton/hollinger where the jazz are concerned, but the chances of you guessing correctly on 30 teams year after year without falling victim to your own subjectivity are pretty slim... that's why systems like this are created: to account for as much variation as possible and get us closer than our preconceptions (usually) can.

That's why I'd like to see a comparison of ESPN's collective expert picks with the Doolittle/Pelton/Hollinger statistical picks. Can anyone find that data?

I suppose we could keep track of the Doolittle vs. experts picks this year, since they came out at basically the same time. Or we could have a contest where we make our own projections and see where we stack up compared to these other projections.
 
Umm. Pretty sure we're gonna win all the games.

Where the hell has the OPTIMISM gone on this board?
 
Eastern Conference team predictions
Miami 60.5
Hawks 48.7
Knicks 48.4
Celtics 46.9
Bulls 46.9
Pacers 43.7
Nets 42.9
Sixers 42.5
Raptors 41.8
Bucks 39.7
Pistons 32.9
Cavs 30.9
Wizards 30.2
Magic 29.2
Bobcats 15.7


Can someone please post all of their western conference ones? I'll then sit down before the season starts and share my own predictions.
 
I will...someone make the rules and I'll determine if they're legit. I'll stipulate up front that I stated "at as close a rate." I think if they're closer on say 17 teams and I'm closer on 13, we're basically even.

How about this common statistical measure for closeness: sum of squares of differences between predicted vs. true values?

It's based on the assumption that a few large differences between predicted and true values probably indicate a worse prediction than many small differences do. A lucky bounce here, a few injuries there can twist things a little without the prediction really having been poor. But if you have a few teams that you've missed by 15 games, there's something really wrong with the prediction.

For example, if you predict 47, 45, and 40 wins for UTA, MIN, and GS respectively and they end up with 50, 42, and 35, is that a better prediction than if you had predicted 49, 40, and 44? If you simply total up the absolute differences between predictions and actual wins, the two look about equally good (3 + 3 + 5 = 11 vs. 1 + 2 + 9 = 12). But sum of squares shows the first prediction is clearly better (9 + 9 + 25 = 43 vs. 1 + 4 + 81 = 86). In other words, you're rewarded for being close most of the time, without expecting complete precision, and penalized heavily for badly missing on a few teams.
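Here's a minimal sketch in Python (the team labels and win totals are just the example numbers above) showing how the two scoring rules separate those predictions:

```python
# quick sketch comparing total absolute error vs. sum of squared errors,
# using the UTA/MIN/GS numbers from the example above.

actual       = {"UTA": 50, "MIN": 42, "GS": 35}
prediction_a = {"UTA": 47, "MIN": 45, "GS": 40}
prediction_b = {"UTA": 49, "MIN": 40, "GS": 44}

def total_abs_error(pred, truth):
    """Sum of absolute differences between predicted and actual wins."""
    return sum(abs(pred[t] - truth[t]) for t in truth)

def sum_sq_error(pred, truth):
    """Sum of squared differences -- big misses get penalized much harder."""
    return sum((pred[t] - truth[t]) ** 2 for t in truth)

for name, pred in [("A", prediction_a), ("B", prediction_b)]:
    print(name, total_abs_error(pred, actual), sum_sq_error(pred, actual))
# A -> 11, 43; B -> 12, 86: nearly even on absolute error, but the
# 9-game miss on GS makes B much worse by sum of squares.
```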
 

I don't think that applies here. I would grade performance on a simpler formula. If one team is way off, it likely was because of a circumstance that could not have been predicted.
 
Get me their western conference predictions...I'm curious...

doolittle's west win projections based on pythagorean wins derived from projected off/def efficiency:

okc 57.9
lal 54.8
den 51.1
min 51.0
sas 50.7
lac 49.1
uta 42.9
mem 42.2
dal 38.2
gsw 34.7
sac 34.1
noh 33.2
por 33.1
hou 29.4
pho 27.8

if i had to guess, i'd say they're probably low on the spurs, jazz, mavs and maybe grizz and high on denver and minny... that said, their method has a lot more stuff factored in than my top-of-head analysis does.

good news: if doolittle is right, GSW will be sending us a top-10 pick.
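for anyone curious how "pythagorean wins from projected off/def efficiency" works, here's a rough sketch of the general idea; the exponent and the sample efficiencies are just illustrative assumptions, not doolittle's actual inputs:

```python
# rough sketch of pythagorean wins computed from projected offensive and
# defensive efficiency (points scored/allowed per 100 possessions).
# NOTE: the exponent and the sample efficiencies below are assumptions for
# illustration -- not doolittle's actual inputs.

EXPONENT = 14.0  # commonly cited NBA pythagorean exponent; his exact value isn't given

def pythagorean_wins(off_eff: float, def_eff: float, games: int = 82) -> float:
    """Expected wins from projected offensive/defensive efficiency."""
    win_pct = off_eff**EXPONENT / (off_eff**EXPONENT + def_eff**EXPONENT)
    return games * win_pct

# hypothetical team projected to score 106.0 and allow 103.0 per 100 possessions
print(round(pythagorean_wins(106.0, 103.0), 1))  # ~49 wins
```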
 

Thanks.
 
well it might feel that way... the pythagorean method and point differential have actually been pretty decent predictors of actual performance historically. there are obviously some things they can't/won't account for (injuries, a player taking a leap sooner than expected, a team that's extremely lucky/unlucky in close games), but overall, they have done a pretty decent job of predicting success.
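one way to check would be to compute pythagorean win estimates from past seasons' scoring data and compare them to teams' actual wins; a rough sketch (the file name and column names are hypothetical, plug in whatever historical data you have):

```python
# one way to check: for past seasons, compute each team's pythagorean win
# estimate from points scored/allowed and compare it to actual wins.
# the file name and column names are hypothetical.
import csv

EXPONENT = 14.0  # commonly cited NBA pythagorean exponent

def pythag_wins(pts_for: float, pts_against: float, games: float) -> float:
    pct = pts_for**EXPONENT / (pts_for**EXPONENT + pts_against**EXPONENT)
    return games * pct

abs_errors = []
with open("team_seasons.csv") as f:  # hypothetical columns: team, season, pts_for, pts_against, games, wins
    for row in csv.DictReader(f):
        est = pythag_wins(float(row["pts_for"]), float(row["pts_against"]), float(row["games"]))
        abs_errors.append(abs(est - float(row["wins"])))

print("mean absolute error (wins):", round(sum(abs_errors) / len(abs_errors), 2))
```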
Do you have any evidence that what you are saying is true? Do you have his predictions from previous years?
 