Item # 17 - Modifications to the Can. Rating System / Report of the Rating Auditor

The introductory post is by Bill Doubleday, Rating Auditor:

Rating System Issues
The CFC rating system allows players to have a reasonable expectation of the outcome when they play. Underlying the system is the assumption that players have a level of competence which persists. Statistical analyses of individual ratings over decades usually show a rapid rise over the first few years, a slower rise to a peak that persists for decades, and then a slow decline.
Ratings are self-adjusting: competence higher than the current rating reflects is rewarded with a rating increase, and ratings that are too high decrease as a result of tournament results.
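The CFC's exact formula is not reproduced here, but a generic Elo-style update (with an assumed K-factor of 32) illustrates the self-adjustment:

```python
def expected_score(rating, opp_rating):
    """Expected score from the standard Elo logistic curve."""
    return 1 / (1 + 10 ** ((opp_rating - rating) / 400))

def update(rating, opp_rating, score, k=32):
    """Nudge the rating toward the level the result suggests."""
    return rating + k * (score - expected_score(rating, opp_rating))

# An underrated player who beats a 1600 gains far more than a
# correctly rated favourite would for the same win.
gain = update(1400, 1600, 1) - 1400
```

Winning as the underdog moves the rating up sharply, while losing as the favourite moves it down, so over time ratings track demonstrated competence.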
In recent years, chess has been promoted in schools, resulting in large numbers of young players learning the game. Typically they begin as weak players with ratings of 1000 or less. With coaching and extensive practice, their competence rises. Much of this practice does not involve CFC rated games. Thus, competence as shown in CFC tournaments sometimes rises rapidly and discontinuously.
If a player begins playing in CFC rated tournaments, a provisional rating is created, based on performance. After a rating is established, changes occur due to transfers of points to or from other players. For a player rated 1000 to rise to 2000, other players have to give up 1000 points. The opponents who lose these points may be playing as well as before, but they lose points to the improving newcomer. This process deflates ratings. Players losing points often complain. There have been examples of sudden losses of 50-100 points.
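The numbers below are hypothetical, but a short zero-sum sketch (again assuming an Elo-style update with K = 32) shows how one improving player drains points from everyone else:

```python
def expected(r_a, r_b):
    """Standard Elo expected score for a player rated r_a against r_b."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

K = 32
junior_rating, junior_true = 1000.0, 1800.0   # rated 1000, now playing at 1800
veterans = [1600.0] * 10                      # rated 1600 and playing at 1600

total_before = junior_rating + sum(veterans)
for i, v in enumerate(veterans):
    score = expected(junior_true, v)                # result reflects true strength
    delta = K * (score - expected(junior_rating, v))
    junior_rating += delta
    veterans[i] = v - delta                         # every point gained is a point lost
total_after = junior_rating + sum(veterans)
# total_before == total_after: the junior's rise is paid for entirely by
# the veterans, whose play has not weakened at all -- this is the deflation.
```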
In recent years the CFC has addressed this deflation by adding points to ratings, either as a one-time bonus reflecting activity or as participation points for each game played. This stopped and reversed the deflation, but did not address the root cause. Two equal players would see both their ratings increase after a drawn game even if there was no increase in skill. The effect was substantial, often 50 rating points per year or more.
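A toy version of the participation-point scheme (the flat 2-point per-game credit here is a made-up stand-in, not the CFC's actual figure) shows the inflationary effect:

```python
def update_with_bonus(rating, opp_rating, score, k=32, bonus=2):
    """Elo-style update plus a flat per-game participation credit.
    The 'bonus' value is hypothetical, not the CFC's actual number."""
    expected = 1 / (1 + 10 ** ((opp_rating - rating) / 400))
    return rating + k * (score - expected) + bonus

# Two equally rated players draw: pure Elo would leave both at 1600,
# but the participation credit lifts both ratings with no change in skill.
a_after = update_with_bonus(1600, 1600, 0.5)
b_after = update_with_bonus(1600, 1600, 0.5)
```

At 2 points a game, a player with 25 rated games a year would gain about 50 points of pure inflation, which is the order of magnitude described above.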
A number of solutions to this problem could be considered.
a. Start juniors with higher provisional ratings (say 1500). That way an increase to 2000 would only take 500 points from established players, reducing the deflation.
b. Instead of the redistribution of points as now occurs, opponents of rapidly rising players could have their ratings revised based on the rising player's performance rating rather than the outdated official rating. This would require a standard of what constitutes a rapid rise, to protect against random variations in performance.
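As a sketch of approach b: a performance rating can be computed with the common linear approximation, and the "rapid rise" test guarded by thresholds. The 300-point cutoff and 10-game minimum below are placeholders, exactly the kind of standard the proposed simulations would need to settle:

```python
def performance_rating(opp_ratings, total_score):
    """Linear-approximation performance rating: average opponent
    rating plus 800 * (score fraction - 0.5)."""
    avg = sum(opp_ratings) / len(opp_ratings)
    return avg + 800 * (total_score / len(opp_ratings) - 0.5)

RAPID_RISE_CUTOFF = 300   # hypothetical; to be tuned by simulation
MIN_GAMES = 10            # guard against random variation in results

def effective_rating(official, opp_ratings, total_score):
    """Rating to use when computing the opponents' changes: the
    performance rating if the player qualifies as rapidly rising,
    otherwise the official rating."""
    perf = performance_rating(opp_ratings, total_score)
    if len(opp_ratings) >= MIN_GAMES and perf - official >= RAPID_RISE_CUTOFF:
        return perf
    return official
```

For example, a 1000-rated player scoring 9/10 against 1500-average opposition has a performance rating of 1500 + 800 × 0.4 = 1820, and the opponents' rating changes would then be computed as if against an 1820 player.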
I suggested approach b at the AGM, but would like to do some analyses and simulations to refine it and verify how it would work. Roger Patterson has agreed to work with me on this.

There are other issues as well. The current software gives incorrect ratings for players above 2200 who lose enough points to fall below 2200 – a factor of 32 is used instead of 16 as described in the handbook. Also, the program is compiled and undocumented, which makes auditing more difficult.
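A guard of the kind the handbook seems to intend would select the K-factor from the pre-game rating, so that a loss dropping a player below 2200 is still scaled by 16. (The rule here – K = 16 at or above 2200, K = 32 below, boundary inclusive – is an assumption based on the description above.)

```python
def k_factor(pre_game_rating):
    """K-factor per the handbook rule as described: 16 for players
    rated 2200 or above, 32 otherwise. Choosing K from the PRE-game
    rating avoids the reported bug, where a player falling below
    2200 was scaled by 32 instead of 16."""
    return 16 if pre_game_rating >= 2200 else 32

def rating_change(pre_game_rating, opp_rating, score):
    expected = 1 / (1 + 10 ** ((opp_rating - pre_game_rating) / 400))
    return k_factor(pre_game_rating) * (score - expected)

# A 2210 player losing to a 2210 opponent should drop 16 * 0.5 = 8
# points (to 2202), not the buggy 32 * 0.5 = 16 points (to 2194).
```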

Active ratings are much lower than regular ratings. I suggested adjusting them to bring the average ratings to the same level, but there was a lack of interest at the AGM. Some people think they should be abolished. I see some value in having them, but is it worth the effort of maintaining two ratings?

Some food for thought.