So this week is a double dose of the whackometer!
Here's week 7.
Remember that a number closer to 1.000 means that the judge agreed with the collective wisdom of the other judges, a number closer to -1.000 means that the judge disagreed with it, and a number close to 0.000 means that the judge neither agreed nor disagreed.

Name Week-7 Week-8
Jeff -0.024 0.313
Greg 0.694 0.513
Arun 0.826 0.384
JimD 0.500 0.019
Mike -0.267 0.640
In week 7, Jeff and Michael were the contrary ones, while Arun was in violent agreement with the other four judges.
However, in week 8, Michael agreed with the other judges the most, while Jim did not agree at all.
So now, the average correlations of all the judges across the eight weeks are...
Once again, all are close, and therefore, still no whacko!

Name Average
Jeff 0.239
Greg 0.350
Arun 0.393
JimD 0.372
Mike 0.250
1 comment:
I admit I haven't really examined the math behind this whackometer, just been following it every week with morbid curiosity :)
I have to say, though, the numbers this week definitely confuse me. I know when Greg and I compared our picks this week, we were marveling at how in sync our picks were (we had four of the five games the same, with only one place different on each of those games).
Yet the difference between Greg's correlation (0.513) and my own (0.384) is larger than the difference between mine and Jeff's (0.313), even though Jeff's picks and mine were far more different (he and I ranked only two of the five games in common, and fairly differently even on those two), which seems odd.
Now of course, this is a correlation to the overall results, not a head-to-head comparison, so I can see how averaging things out could produce a result like that. Still, it seems strange that such a disparity would occur when Greg and I were so similar.
A Greg vs Michael comparison seems like a similar example of this. They had 1st/2nd/4th all the same (which wound up being the Top Three games), yet their correlations are still quite different -- I suppose because Michael's third-place game got another third-place vote while Greg's didn't? That also seems like a rather large disparity to be caused by essentially one ranking, especially since neither of their third-place games came close to making the Top Three.
In general, I'm not really sure where I'm going with this analysis; these are just some random observations that seem strange to me. In my opinion, though, perhaps this model should be weighted more heavily toward how much people matched the Top Three games (or how much they matched the other judges on the Top Three games), since those are obviously the most important rankings. It seems that when two people pick different fifth-place games, and one of those games gets another fifth-place vote (for two points) while the other gets only the one point, it causes a bigger disparity in the correlation to the other judges than it really ought to, because it's a very small difference with no effect on the relevant GOTW standings.
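The commenter's suggestion above (weight Top Three agreement more heavily) could be formalized in many ways; here is one minimal sketch. The `TOP_WEIGHTS` values, the `weighted_agreement` function, and the game names are all hypothetical choices of mine, not anything from the original scoring system.

```python
# Hypothetical sketch of the commenter's idea: agreement on the Top Three
# ranks counts triple compared with agreement on fourth and fifth place.
# The weights, function, and example rankings are made up for illustration.

TOP_WEIGHTS = {1: 3, 2: 3, 3: 3, 4: 1, 5: 1}  # top-three ranks count triple

def weighted_agreement(a, b):
    """a, b: dicts mapping game -> rank (1 = best) from two judges.
    Returns a 0..1 score: the weighted fraction of games the two judges
    ranked identically, with top-three matches counting more."""
    total = sum(TOP_WEIGHTS[r] for r in a.values())
    matched = sum(TOP_WEIGHTS[r] for g, r in a.items() if b.get(g) == r)
    return matched / total

# Two judges who agree everywhere except their third-place pick:
greg = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}
mike = {"A": 1, "B": 2, "F": 3, "D": 4, "E": 5}
print(weighted_agreement(greg, mike))  # 8/11: a top-three miss costs 3 of 11
```

Under this weighting, disagreeing on a fifth-place game costs only 1 point out of 11, while disagreeing on a top-three game costs 3, which is the asymmetry the comment is asking for.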
I of course have no clue how to set something like that up, again just rambling here!