
Thread: Item # 17 - Modifications to the Can. Rating System / Report of the Rating Auditor

  1. #21

    Default

    I don't think under-rated players are as much of a problem as people claim... does anyone have an example of a player who is grossly under-rated right now? Or any who have been under-rated for a significant period of time in the past?

    Inconsistency in play is an entirely different matter. If you wanted to correct for such things, you would need a system that maintains a record of a player's average performance over many events and then adjusts their K factor according to the standard deviation of those performances. Too messy to bother with: every player would need their own specific K.
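    For what it's worth, a minimal sketch of what that per-player K adjustment might look like (the base K, the caps and the example numbers below are all hypothetical, not anything the CFC actually uses):

    ```python
    # Hypothetical sketch: widen a player's K factor when their recent
    # performance ratings sit consistently far from their current rating,
    # relative to how noisy those performances are.
    import statistics

    BASE_K = 32             # assumed default K factor
    MIN_K, MAX_K = 16, 48   # assumed bounds so one wild event can't dominate

    def adjusted_k(performances, rating):
        """performances: per-event performance ratings for one player."""
        if len(performances) < 4:        # too little history, keep the default
            return BASE_K
        spread = statistics.stdev(performances)
        avg_gap = statistics.mean(performances) - rating
        k = BASE_K * (1 + abs(avg_gap) / max(spread, 1))
        return max(MIN_K, min(MAX_K, k))

    # Example: an 1800-rated player whose last four events were all ~2000+
    print(adjusted_k([2050, 1980, 2120, 2010], 1800))   # hits the 48 cap
    ```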

    So does anyone have an example of a player who has performed at, say, more than 350 points above their rating for their past 4-5 events?

  2. #22

    Default

    I just had an interesting idea---
    As many of you have no doubt gathered, I am in general opposed to accelerating the rating increase of "under-rated" players, mainly because I do not think they exist in great number, and any that do exist are likely counterbalanced by "over-rated" players.

    However, if you wish to accelerate the increase of players performing higher than their rating, does it not make sense to counterbalance this with an accelerated decrease for players performing proportionally lower than their current rating?

    I give an example: David Krupka, a gentlemanly Hart House alumnus who had not played for some years, began playing again at the Scarborough chess club in 2008. Starting rating 2262 (peak of 2307), current rating 1949. All those kids who beat him up in Scarborough certainly got a few more points out of him than they deserved, and worse still, beating up on him gave them exactly the kind of strangely high performance ratings everyone in this thread is worried about.

  3. #23
    Join Date
    Aug 2008
    Location
    Victoria BC
    Posts
    694

    Default

    No, there aren't that many under-rated juniors. But here are the ones out here whose ratings need adjusting: Tian Tian Geng, Jason Cao, Jeremy Hui, Ryan Lo, Loren Laceste. They don't all meet Stuart's criteria, but if you check them out on the ratings page (and ignore junior events) you will see what I mean.

  4. #24
    Join Date
    Sep 2008
    Location
    Charlottetown, PE
    Posts
    2,158
    Blog Entries
    11

    Default

    PEI case - the 4-year rise of Anthony Banks (age 17-21)

     #   Date      Start   Perf    New
     1   2005/07    1560   1646   1569
     2   2005/10    1569   1819   1629
         -- rating adjustment --
     3   2007/05    1648   1892   1717
     4   2007/07    1717   2104   1823
     5   2007/11    1823   1725   1825
     6   2008/05    1825   2173   1951
     7   2008/07    1951   1749   1931
     8   2008/10    1931   1900   1955
     9   2009/05    1955   1906   1979
    10   2009/07    1979   2014   1993
    11   2009/11    1993   2163   2046

    You cannot use these performance ratings at face value, as many games were against low-rated players. That is an increase of 486 points in 4 years: 29 from a rating adjustment and approximately 90 from participation points.

    So roughly 347 points gained (and removed from the stable pool) over approximately 55 games = about 6.3 points gained per game, or 150+ points stronger than his rating over this period.
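    For anyone checking the arithmetic, the per-game figure is just the net gain divided by the approximate game count:

    ```python
    # Quick check of the per-game figure above (all inputs are approximate).
    net_gain = 347      # total 486-point rise, less the ~29-point adjustment
                        # and the roughly 90+ participation points
    games = 55          # approximate games over the 4 years
    print(round(net_gain / games, 1))   # ~6.3 points gained per game
    ```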

  5. #25
    Join Date
    Aug 2008
    Location
    Almonte, ON
    Posts
    371

    Thumbs down

    Quote Originally Posted by Stuart Brammall
    They don't get taken away, they get re-distributed. If an old master, say 2250, has a bad event and loses to a bunch of kids, then at the next event, when he manages to dodge the kids, or when the kids have got their ratings up, he should still perform at 2250 and go back up, unless he was overrated to begin with.

    It is important to note that a rating never indicates a person's chess strength in absolute terms, only their strength relative to the pool. If the pool is getting stronger and you are not, your rating goes down even though your strength has not changed.
    The common pattern is that a junior enters the system, gains rating points rapidly and then loses interest as he/she enters high school or university, never to return. When such a player leaves (the theory goes), they remove from the pool the rating points gained from their opponents.
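    A minimal sketch of the zero-sum bookkeeping that theory rests on (players, ratings and the K factor here are made up, and the real CFC formula has extra wrinkles such as bonus and participation points):

    ```python
    # Standard Elo-style exchange between two players with the same K:
    # points are conserved, so whatever the improving junior gains, the rest
    # of the pool loses -- and if the junior then quits, those points are gone.
    def expected(r_a, r_b):
        return 1 / (1 + 10 ** ((r_b - r_a) / 400))

    def update(winner, loser, k=32):
        gain = k * (1 - expected(winner, loser))
        return winner + gain, loser - gain

    junior, veteran = 1500, 2250
    for _ in range(5):                  # the junior upsets the veteran 5 times
        junior, veteran = update(junior, veteran)

    print(round(junior), round(veteran), round(junior + veteran))  # sum stays 3750
    ```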

    All of the issues with ratings here revolve around some fundamental misconceptions.

    Misconception 1: That rating deflation is a clearly defined, measured concept. I have yet to see any meaningful statistical analysis of the rating pool of CFC players over the past 30 years, even though the data is readily accessible for that period. Show me the mean, standard deviation and quartile points of the distribution, and we can start saying whether inflation or deflation really exists.
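    The computation being asked for is not exotic; a minimal sketch, assuming the historical ratings could be exported one per line to a plain text file (the file name and format are hypothetical):

    ```python
    # Summary statistics for a rating list: one integer rating per line in a
    # plain text export (hypothetical file; requires Python 3.8+ for quantiles).
    import statistics

    with open("cfc_ratings_1980_2009.txt") as f:
        ratings = [int(line) for line in f if line.strip()]

    mean = statistics.mean(ratings)
    sd = statistics.stdev(ratings)
    q1, median, q3 = statistics.quantiles(ratings, n=4)   # quartile points

    print(f"n={len(ratings)}  mean={mean:.0f}  sd={sd:.0f}  "
          f"Q1={q1:.0f}  median={median:.0f}  Q3={q3:.0f}")
    ```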

    Misconception 2: That ratings form an absolute measure of chess skill. They are supposed to form a relative measure of chess skill (player A is rated 200 points higher than player B, so A should defeat B about 3 times out of 4). Unfortunately, since titles are directly tied to ratings, this misconception will continue to be propagated.
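    For reference, the standard Elo expected-score formula backs up that 3-out-of-4 figure:

    ```python
    # Expected score for a 200-point rating gap under the standard Elo formula.
    diff = 200
    expected = 1 / (1 + 10 ** (-diff / 400))
    print(round(expected, 2))   # 0.76, i.e. roughly 3 wins out of 4
    ```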

    So barring a radical change to how we rate people and give titles (sorry, won't happen), the only way to really resolve this debate is to statistically analyze the data gathered over the past 30 years to (a) determine how much the distribution has shifted over the years, and (b) evaluate proposals that minimize changes in the overall rating distribution while still being a valid predictor of results.

    In my mind such a study requires somebody with a strong statistical background, such as a university post-doctorate. Has anyone done such a study for the CFC in the past? What do we have as reference material? Can we approach a university about such a project?

    Without a proper analysis of the effect of changes to the rating system, any ad hoc change is reckless, to put it mildly. I will not support any changes to the rating system without a proper analysis of the consequences. The only things I can support are corrections to arithmetic errors in the calculations, such as the 2200 rating bug already mentioned, and the identification of manipulation of or malfeasance in the rating system.

  6. #26

    Default

    I took a look at their profiles and they just don't look that under-rated to me.
    They're all pretty inconsistent. In one of Geng's tournaments he loses every game and then beats Szalay.

    For the sake of comparison I offer my own rating profile... I play so much that my rating should be more accurate than most. I have a couple of performances over 2000 and one over 2200, similar-looking to these kids'.

  7. #27

    Default Provisional rating option

    Chris Mallon's suggestion is almost identical to my second option. The performance rating is almost identical to a provisional rating over the same number of games. The only difference is that the provisional rating uses the linear approximation: average opponent rating plus 400 × (wins − losses) / games played.
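    A rough sketch of the two calculations side by side, with made-up opponents and results (these are the generic formulas, not necessarily the exact CFC ones):

    ```python
    # Linear (provisional-style) vs. logistic performance rating on one
    # hypothetical set of games. Opponent ratings and the score are made up.
    def expected(r, opp):
        return 1 / (1 + 10 ** ((opp - r) / 400))

    def linear_perf(opponents, score):
        """Average opponent rating + 400 * (wins - losses) / games."""
        n = len(opponents)
        wins_minus_losses = 2 * score - n          # draws count as 0.5
        return sum(opponents) / n + 400 * wins_minus_losses / n

    def logistic_perf(opponents, score, lo=0.0, hi=3000.0):
        """Rating at which the total expected score equals the actual score."""
        for _ in range(60):                        # simple bisection
            mid = (lo + hi) / 2
            if sum(expected(mid, o) for o in opponents) < score:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    opponents = [1850, 1920, 2005, 1760, 1880]
    score = 3.5                                    # e.g. 3 wins, 1 draw, 1 loss
    print(round(linear_perf(opponents, score)), round(logistic_perf(opponents, score)))
    ```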

    I expect it would stabilize somewhere between 10 and 20 games, but I would need to do some simulations to be precise.

    I don't follow Stuart's reasoning. When you lose to an under-rated player you lose more points than you would if he had a proper rating. The difference can be large.
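    A quick illustration with made-up numbers under a generic Elo-style update (the K of 32 is assumed, not the actual CFC coefficient):

    ```python
    # Points a 2200-rated player loses for one loss, under a generic
    # Elo-style update with an assumed K of 32.
    def expected(r_a, r_b):
        return 1 / (1 + 10 ** ((r_b - r_a) / 400))

    K, me = 32, 2200
    for opp in (1600, 2000):   # listed (under-)rating vs. a more accurate one
        print(opp, round(K * expected(me, opp), 1))   # ~31.0 vs ~24.3 points lost
    ```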

    Bill Doubleday

  8. #28
    Join Date
    Aug 2008
    Location
    Kitchener, ON
    Posts
    2,235
    Blog Entries
    37

    Default

    A fully detailed investigation into the ratings history, charting any inflation or deflation and allowing for various external factors, should net the person doing the study either a large sum of money or a Master's degree, with the work counting as their thesis.

    Inflation or deflation is not by itself bad, when you get right down to it. What is bad is perception.

    People, rightly or wrongly, are attached to their ratings. If they believe either (a) that playing will cost them points they don't deserve to lose because there are lots of under-rated players, or (b) that their own rating is abnormally high and thus not really valid, they will be unhappy and quite possibly not play as often.

    Yes, B can happen, but there was that recent study that showed the average chessplayer believed themselves to be 100 points underrated (and on average, their ratings were perfectly accurate, the study concluded). So option B would only kick in if their rating climbed more than 100 points abnormally, on average.

  9. #29

    Default

    This is interesting... all the proposed mechanisms for identifying under-rated players have relied on performance, and yet here the performance is not indicative of the player's strength.

    He does have 2 performances that differ from his rating by more than 300 points, though. Is that enough to pick him out for a boost as large as 150 points?

  10. #30

    Default

    Quote Originally Posted by Christopher Mallon
    Yes, B can happen, but there was that recent study that showed the average chessplayer believed themselves to be 100 points underrated (and on average, their ratings were perfectly accurate, the study concluded). So option B would only kick in if their rating climbed more than 100 points abnormally, on average.
    Also, such a result shows positively that the number of over-rated players counterbalances any who might be under-rated.

