An interesting article:
http://www.kaggle.com/blog/wp-conten...kaggle_win.pdf
On a related issue, I am curious whether anyone has investigated the effects of more frequent rating updates. The article above primarily discusses the importance of not overfitting to test data; it should be clear that the more often the ratings are updated, the more "fitting" is taking place. While the effect is quite modest in any individual case, there may be a systemic impact.
Example: Players A and B are both very active, with Player A's rating averaging 1900 and Player B's averaging 1800 over the course of many months. On the occasion they meet, Player A is rated 1970 due to a good performance last week, while Player B has just come off a poor performance and is currently rated 1730. If monthly lists were used there would be only a 100-point difference, but with weekly lists the difference is a whopping 240 points.
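To put numbers on what that gap means, here is a quick sketch using the standard Elo expected-score formula E = 1/(1 + 10^(-d/400)) and an illustrative K-factor of 32 (the K value is my assumption, not something from the article; federations use different K rules):

```python
def expected_score(gap):
    """Standard Elo expected score for the higher-rated player,
    given the rating gap in points."""
    return 1.0 / (1.0 + 10 ** (-gap / 400.0))

K = 32  # illustrative K-factor; actual values vary by federation

for gap in (100, 240):
    e = expected_score(gap)
    # Points the higher-rated player gains from a win or loses from a loss
    print(f"gap {gap}: expected score {e:.3f}, "
          f"win +{K * (1 - e):.1f}, loss -{K * e:.1f}")
```

At a 100-point gap Player A's expected score is about 0.64; at 240 points it is about 0.80. So which list is in force changes not just the published gap but the stakes of the game itself: with weekly lists A gains far less for a win and risks far more for a loss.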
While I suspect there is little net difference for Players A and B over the course of months (it could, after all, have been the other way around, with Player A coming off a poor result and B a good one), it seems clear that the more frequent the updates, the greater the volatility of ratings in general.
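For anyone wanting to investigate the volatility question empirically, a toy simulation along these lines might be a starting point. Everything here is my own assumption (a player of fixed true strength playing evenly matched games, per-game Elo updates to a "live" rating, K = 24), not anything from the article; it simply compares the spread of the published values when the list is refreshed every 4 games ("weekly") versus every 16 ("monthly"):

```python
import random
import statistics

def simulate_published_ratings(update_every, total_games, k=24, seed=1):
    """Toy model: a player of fixed true strength 1800 plays evenly
    matched games. The live rating updates after every game, but the
    published list only refreshes every `update_every` games."""
    rng = random.Random(seed)
    live = 1800.0
    published = []
    for game in range(1, total_games + 1):
        # Standard Elo expected score vs an 1800-rated opponent
        expected = 1.0 / (1.0 + 10 ** ((1800.0 - live) / 400.0))
        score = 1.0 if rng.random() < 0.5 else 0.0  # evenly matched
        live += k * (score - expected)
        if game % update_every == 0:
            published.append(live)
    return published

weekly = simulate_published_ratings(update_every=4, total_games=416)
monthly = simulate_published_ratings(update_every=16, total_games=416)
print("weekly list spread: ", statistics.pstdev(weekly))
print("monthly list spread:", statistics.pstdev(monthly))
```

One caveat: in this simple stationary model both lists sample the same underlying random walk, so the comparison really tests whether publication frequency alone changes the measured spread. Capturing the effect in my example above would need the opponent pool's ratings to come from the published list too, which is a small extension.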