35 Suggested Items for the Annual General Meeting



Lyle Craver
04-01-2011, 01:38 AM
Suggestions are welcome here.

Paul Leblanc
04-04-2011, 09:52 AM
I hope to attend the AGM again this year. Last year I raised the issue of under-rated juniors, and the rating auditor agreed to examine it. There was considerable discussion in the subsequent online meeting but no resolution. I intend to formulate a motion to replace the existing bonus point system with one that gives larger rating point awards to exceptional performances and no rewards otherwise. I'm open to suggestions about the exact formula.

Fred McKim
04-04-2011, 09:57 AM
The current bonus point system is flawed: it favours high-rated players because the bonus is based on the percentage score achieved.

Young players might gain 60 points in a big Swiss yet receive no bonus points because they fall short of that percentage threshold.

Titled players, on the other hand, receive bonus points all the time because they are always within the performance threshold; they may only need to gain 20-30 points to be eligible.

I think in theory we could change the multiplier from 16 to 32, perhaps on a sliding scale for exceptional performances; a rough sketch of that idea follows.
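
Since the exact CFC bonus formula isn't spelled out in this thread, the Python sketch below is purely hypothetical: the linear ramp and the 400-point full_scale value are invented for illustration, not existing rules.

# Hypothetical sketch of the sliding-scale multiplier idea: instead of a
# separate percentage-based bonus, K grows from 16 toward 32 as a player's
# performance rating exceeds their pre-event rating.

def sliding_k(pre_rating, performance_rating, base_k=16, max_k=32, full_scale=400):
    # K ramps linearly from base_k to max_k as the performance rating
    # exceeds the pre-event rating by up to full_scale points (assumed).
    excess = max(0, performance_rating - pre_rating)
    frac = min(1.0, excess / full_scale)
    return base_k + (max_k - base_k) * frac

print(sliding_k(1400, 1700))  # under-rated junior, +300 performance -> 28.0
print(sliding_k(2400, 2450))  # titled player, small excess -> 18.0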

Garland Best
04-04-2011, 06:57 PM
I found this recent article on Chessbase enlightening: http://www.chessbase.com/newsdetail.asp?newsid=7114

It indicates that the FIDE rating system, with only minor tweaks (removal of the 400-point rule, changing the scale factor), serves as an accurate predictor of how likely a player rated X is to defeat a player rated Y. From a tournament director's point of view, this is its key function: it forms the basis for ranking players in Swiss-style tournaments.
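
For reference, the prediction in question is the familiar Elo expectancy curve. The logistic formula below is a common closed-form stand-in for FIDE's published conversion tables (which are lookup tables, not this exact formula), with the 400-point rule modelled as a simple cap on the rating difference:

# Standard Elo expectancy: the expected score of a player rated r_a
# against a player rated r_b.  The cap models the "400 point rule",
# which limits the rating difference used in the calculation.

def expected_score(r_a, r_b, cap=400):
    diff = max(-cap, min(cap, r_a - r_b))
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

print(round(expected_score(2000, 1800), 3))  # -> 0.76
print(round(expected_score(2400, 1700), 3))  # difference capped at 400 -> 0.909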

I therefore believe that the FIDE model is the one to follow. Any change that deviates from this model without careful analysis invites controversy. We have made too many ad hoc changes to the rating system without understanding their impact.

Paul Leblanc
04-08-2011, 09:57 AM
I read the article. It seems quite convincing. There was no reference to any formula or bonus point system to reward unusually strong performances. Are you aware if FIDE has such a system?

Garland Best
04-08-2011, 10:07 AM
I looked through the FIDE website but cannot find any published algorithm, just online calculators. It must be published somewhere, though, given that sites like Chessbase seem to be able to calculate the ratings in advance of their official publication.

Stuart Brammall
04-08-2011, 11:26 AM
Are you aware if FIDE has such a system?

There are no bonus points. FIDE's circumstances differ from ours, though, in that they have very few low-rated players; until recently the rating floor was 2000.

In fact, when calculating provisional ratings there is an almost anti-bonus-like mechanism: players with positive scores across their first nine games are not given their performance rating, but rather the average rating of their opposition plus 12.5 points per half point above an even score.

Scoring 8.5/9 against average 2100 opposition would only net you a provisional rating of 2200, even though your performance according to the expected results table was 2544, or rather 2500 once the 400-point rule is applied.
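
A minimal Python sketch of that mechanism, reconstructed from the description above rather than from FIDE's official wording, reproduces the 2200 figure:

# Provisional rating as described: average rating of the opposition plus
# 12.5 points per half point above an even (50%) score.

def provisional_rating(avg_opposition, points, games):
    half_points_above_even = 2 * points - games  # e.g. 8.5/9 -> 8 half points
    return avg_opposition + 12.5 * half_points_above_even

print(provisional_rating(2100, 8.5, 9))  # -> 2200.0, matching the example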

Fred McKim
04-08-2011, 11:49 AM
From the FIDE site:

8.56
K is the development coefficient.

K = 25 for a player new to the rating list, until he has completed events with at least 30 games.
K = 15 as long as a player's rating remains under 2400.
K = 10 once a player's published rating has reached 2400 and remains at that level subsequently, even if the rating drops below 2400.

For example, if you are above 2400 and beat somebody with the same rating, you get (1.0 - 0.5) * 10 = 5 rating points.


The corresponding CFC numbers are:
K = 16 (2200+)
K = 32 (everyone else)
K = 800/N (provisional players with N games played)

I think we could attempt to halt deflation by creating another K category, say K = 48 for players rated under X, where X is 1600, or perhaps for juniors under 1800 (sketched below).
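
A rough Python sketch of the full update with that extra tier; the 25-game provisional cutoff and the exact 1600 boundary are illustrative assumptions, not established CFC rules:

# Standard Elo update with the CFC K factors listed above, plus the
# proposed K = 48 tier.

def expected(r_player, r_opp):
    return 1.0 / (1.0 + 10.0 ** ((r_opp - r_player) / 400.0))

def k_factor(rating, games_played):
    if games_played < 25:                    # assumed provisional cutoff
        return 800.0 / max(games_played, 1)  # K = 800/N for provisionals
    if rating >= 2200:
        return 16
    if rating < 1600:
        return 48                            # proposed anti-deflation tier
    return 32

def new_rating(rating, games_played, r_opp, score):
    return rating + k_factor(rating, games_played) * (score - expected(rating, r_opp))

# The FIDE example above, translated to CFC numbers: a 2200+ player beating
# an equally rated opponent gains K * (1.0 - 0.5) = 16 * 0.5 = 8 points.
print(new_rating(2450, 100, 2450, 1.0))  # -> 2458.0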

Food for thought.

Paul Leblanc
04-09-2011, 07:09 PM
Changing K for U1600/U1800 is an intriguing idea. I see two challenges:

1. Most players U1600/U1800 are correctly rated; the new K factor would introduce rating volatility for them (see the sketch after this list).

2. It does not "refund" any of the rating points lost by players losing to under-rated opponents. Bill Doubleday recognized this as a priority in his search for a solution.
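
On the first point, here is a quick Monte Carlo sketch (all parameters illustrative, draws ignored for simplicity) of how a larger K makes the rating of a correctly rated player wander further from their true strength:

# Simulate a player whose rating starts equal to their true strength and
# measure how far the rating drifts after many games.

import random

def expected(r_a, r_b):
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def rms_drift(k, true_strength=1500, games=200, trials=500):
    total = 0.0
    for _ in range(trials):
        rating = true_strength
        for _ in range(games):
            opp = true_strength + random.gauss(0, 200)  # varied opposition
            score = 1.0 if random.random() < expected(true_strength, opp) else 0.0
            rating += k * (score - expected(rating, opp))
        total += (rating - true_strength) ** 2
    return (total / trials) ** 0.5  # RMS deviation from true strength

print(rms_drift(32))  # baseline drift with the current K
print(rms_drift(48))  # noticeably larger drift with the proposed K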

Stuart Brammall
04-09-2011, 07:53 PM
An interesting article:
http://www.kaggle.com/blog/wp-content/uploads/2011/02/kaggle_win.pdf

On a related issue, I am curious whether anyone has investigated the effects of having more frequent rating updates. The article above primarily discusses the importance of not overfitting to test data; it should be clear that the more often the ratings are updated, the more "fitting" is taking place. While the effects are quite modest in individual cases, there may be a systemic impact.

Example: Players A and B are both very active, with Player A's rating averaging 1900 and Player B's averaging 1800 over the course of many months. On the occasion they meet, Player A is rated 1970 due to a good performance last week, while Player B has just come off a poor performance and is currently rated 1730. If monthly lists were used there would only be a 100-point difference, but with weekly lists the difference is a whopping 240 points.

While I suspect there is little net difference for Players A and B over the course of months (it could, after all, have gone the other way, with Player A coming off a poor result and B a good one), it seems clear that the more frequent the updates, the greater the volatility of ratings in general.
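
To put numbers on that example, assuming the CFC K = 32 for under-2200 players, here is what each result would be worth to Player A under the two list frequencies:

# What is at stake for Player A against Player B, depending on whether the
# game is rated off the monthly-list gap (100) or the weekly-list gap (240).

def expected(r_a, r_b):
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

K = 32  # CFC K factor below 2200

for label, r_a, r_b in [("monthly list", 1900, 1800),
                        ("weekly list ", 1970, 1730)]:
    e = expected(r_a, r_b)
    print(label, "A wins: +%.1f" % (K * (1 - e)), " A loses: -%.1f" % (K * e))

# With the weekly list, a win is worth less and a loss costs more for A,
# even though neither player's underlying strength has changed.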

Fred McKim
04-09-2011, 10:33 PM
FIDE calculates ratings 6 times per year. All results for the two-month period are effectively pooled.

The CFC, in fact, rates after every event, even though the published ratings are only updated once a week.
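
The practical difference, sketched with illustrative numbers: pooling rates every game in the period off the pre-period rating, while per-event rating lets the rating move between events.

# Pooled (FIDE-style) vs sequential (CFC-style) rating of two results.

def expected(r_a, r_b):
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

K = 32
start = 1800
games = [(1900, 1.0), (1900, 1.0)]  # two wins against 1900-rated opposition

# Pooled: every game in the period is rated off the pre-period rating.
pooled = start + sum(K * (s - expected(start, opp)) for opp, s in games)

# Sequential: the rating is updated between games/events.
seq = start
for opp, s in games:
    seq += K * (s - expected(seq, opp))

print(round(pooled, 1), round(seq, 1))  # -> 1841.0 vs 1840.1 here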

Lyle Craver
04-12-2011, 06:03 PM
Six times a year? Well, it used to be twice a year, so we should give thanks.

Ironically enough, six times a year was what the CFC used to do back in the days when ratings were done on 3x5" index cards. In fairness to FIDE, they handle a LOT more players than we did "in the day".

Perhaps Francisco Cabanas (former Ratings Auditor and later CFC President) will care to hold forth on this subject?