Page 1 of 2
Results 1 to 10 of 11

Thread: Participation Points - Inflation

  1. #1

    Participation Points - Inflation

    Hi,


    I've thought this for a little while now, and had the opportunity to discuss it with a strong local master at the Ontario Open. These participation points, which were put in place to offset potential junior invasion and combat minor deflation, are now doing damage in the wrong direction. Most of the 1900s I knew from 2 years ago are now 2000s, and most 2200s are now 2300s. 2300s still beat up on 2000s, the point difference remains more or less the same, and clearly we're not all getting so much better at chess.

    I am at an all-time personal high of 2116, and even though capable of performing at that level on occasion, I don't believe I went from 1900 all of a sudden to 2100 strength. I probably improved a little bit in the past 2 years, but don't expect it to be 200 points. The master who I conversed with similarly went from high 2200s to 2400, and claims he didn't improve at the game.

    Looking around me, everyone who's active enough seems to be going up significantly. Compared to USCF ratings, we're no longer deflated, and are maybe a bit on the inflated side. We used to be about 100 points higher than FIDE; now it's closer to 150 or 200, with players' CFC ratings running 150-200 points above their FIDE ratings.

    So why are we still doing this? It's nice to think we're all getting better, but that's just a comforting fiction, even if chess players hate to admit that they're not so skilled.

    A personal example: at the Pan-American Intercollegiate competitions that my colleagues from the University of Toronto and I have attended, we don't hold our own. To make matters worse, the TDs there (in the USA) insist on boosting our ratings by 50 points (using the deflation adjustment from 4-5 years ago). At the 3 Pan-Ams I've been to, 20 out of 24 players (8 each year) underperformed significantly. We can whine about the traveling & sleep deprivation and this and that, but 200-300 points?

    What will happen if I play a friendly match against people like Daniel Abrahams, Kit-Sun Ng, Haizhou Xu or Geordie Derraugh, friends and University colleagues with ratings similar to mine? We split 5-5 in a 10-game match, and we both go up 10 points. So here we are, boosting the pool by 20 points, 20 points that came out of nothing. Why, if my rating is 2116 and I perform at 2110 over a 10-game match, does my rating go up?

    Now take into account that at the average Hart House open tournament, 100 players play 5 games each. We've just injected 500 points into the rating pool, out of thin air, regardless of the results. All because we get participation points.
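    The arithmetic above can be sketched directly. This is a minimal sketch assuming a flat rate of one participation point per rated game, which is the rate the 10-game match and Hart House figures imply; the actual CFC bonus formula may differ.

    ```python
    # Total rating points injected into the pool by participation bonuses,
    # independent of results. The 1-point-per-game rate is an assumption
    # inferred from the examples in this post.

    def injected_points(num_players, games_each, points_per_game=1):
        """Rating points created out of thin air for one event."""
        return num_players * games_each * points_per_game

    print(injected_points(2, 10))   # the 5-5 friendly match: 20 points
    print(injected_points(100, 5))  # a typical Hart House open: 500 points
    ```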

    Every month that goes by, we're injecting far too many points into the system, way more than a flood of juniors could possibly chew off the "established" players.

    Clearly you'll have a whining festival if you start taking people's points away by fixing the system, because "my rating is accurate, I am *that* good". But please stop this participation-points nonsense; the more time goes by, the worse it'll be.

    Shouldn't the CFC's goal be to have accurate ratings comparable to FIDE (and, to a much lesser extent, USCF)?
    What is the purpose of all this? To have the most inflated rating system worldwide?


    Alex Ferreira

  2. #2
    Join Date
    Aug 2008
    Location
    Victoria BC
    Posts
    694

    Rating Inflation?

    In another discussion I recently commented on a different problem at the lower end of the rating system - underrated juniors. Sure enough, at this year's Keres two of our young Victoria players (Tian Tian Geng and Jason Cao) performed 448 and 469 points above their pre-event ratings. They had some unhappy opponents. It seems to me that it would make more sense to award bonus points for performance than participation points.
    I note that William Doubleday was elected rating auditor at the last AGM. What does he have to say about this?

  3. #3
    Join Date
    Aug 2008
    Posts
    1,746


    Quote Originally Posted by Alex Ferreira
    To have the most inflated rating system worldwide?
    The most inflated systems are on internet servers, and compared to them the CFC ratings are quite low.

    The attached picture shows the difference between CFC and FIDE ratings (for all CFC players with FIDE ratings, including players who have not played even in this century).



    x-axis - a player's # in a TOP list (1. Kovalyov 2. Bluvshtein 3. Spraggett, etc)
    Attached Images
    Last edited by Egidijus Zeromskis; 05-30-2010 at 01:38 AM.

  4. #4
    Join Date
    Sep 2008
    Location
    Charlottetown, PE
    Posts
    2,158
    Blog Entries
    11

    Another way of doing it

    Instead of awarding participation points, the correct method to battle inflation is to adjust a player's RAW performance rating by "correcting" for the potentially underrated players they have played against.

    Factors include whether the opponent is:
    1) A Junior (juniors are more underrated)
    2) At certain rating levels (lower rated players are more underrated)
    3) Exhibiting a big rating increase in this event (players with huge performance rating hikes are more underrated).
    4) Frequency of activity (this is harder to gauge). Places like Toronto would in reality get many more points per player than places where only a few tournaments are held per year.

    I suggested a system like this the last time we went to participation points, but there wasn't time to study this.

  5. #5
    Join Date
    Aug 2008
    Location
    Victoria BC
    Posts
    694


    Fred, you may be onto something but I'm having a little difficulty following your train of thought:

    "adjust a player's RAW performance rating based on "correcting" the potentially underated players they have played against."

    Maybe you could clarify this with an example.

  6. #6
    Join Date
    Aug 2008
    Location
    Kitchener, ON
    Posts
    2,236
    Blog Entries
    37


    I think what he means is basically: if someone is showing a massive rating gain, rate them first, and then rate everyone else based on that person's new rating.

  7. #7
    Join Date
    Sep 2008
    Location
    Charlottetown, PE
    Posts
    2,158
    Blog Entries
    11


    That's called feedback points and I think the USCF implemented this many years ago, although I don't know what model they use now.

    What I was suggesting goes a little further: your opponent's rating is adjusted (when calculating your performance rating) because of 1) a significant rating gain, 2) being a low-rated player (and thus more likely to be improving), and 3) being a junior (and thus almost always underrated).

    So, for example, say I am rated 2000 and I play a 1500 junior, an 1800 junior, a 1700 adult, and another 1700 adult who gains 200 points.

    1) The 1st junior is too far away for any feedback points.
    2) I would get X feedback points for the second junior because they are within 350 or 400 points.
    3) No feedback points for the third, static player.
    4) I would get X feedback points for playing this rapidly improving player.

    If somebody rated 1600 played the same group they'd get a different number of feedback points, and similarly someone rated 2200 would get different feedback points because of the 350-400 point rule.
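    A rough sketch of how such an opponent-by-opponent adjustment could look. The post leaves the actual award ("X feedback points") unspecified, so every number below (the 400-point window, the 1600 threshold, the +2/+1 weights, the 100-point "rapid improver" cutoff) is a hypothetical placeholder chosen only to reproduce the shape of the example above:

    ```python
    # Hypothetical feedback-point award for a single opponent; all thresholds
    # and weights are illustrative placeholders, not CFC rules.
    WINDOW = 400  # only opponents within roughly 350-400 points count

    def feedback_for_opponent(my_rating, opp_rating, opp_is_junior, opp_event_gain):
        if abs(my_rating - opp_rating) > WINDOW:
            return 0             # e.g. the 1500 junior vs. a 2000 player
        points = 0
        if opp_is_junior:
            points += 2          # juniors are almost always underrated
        if opp_rating < 1600:
            points += 1          # low-rated players are more likely improving
        if opp_event_gain >= 100:
            points += 2          # big rating gain in this event
        return points

    # The four opponents from the example, as seen by a 2000-rated player:
    for opp in [(1500, True, 0), (1800, True, 0), (1700, False, 0), (1700, False, 200)]:
        print(feedback_for_opponent(2000, *opp))  # 0, 2, 0, 2
    ```

    Because the award depends on your own rating through the window check, a 1600- or 2200-rated player facing the same group would score differently.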

    I also suggested using rating caps, so that you wouldn't get as many feedback points if you were at your rating cap.

    It all gets complicated, and it depends on the probability of your opponent being stronger than their rating, which you can't measure empirically very well.

    What Chris suggests could be much easier to implement. For example you get a feedback point for every 25 points an opponent within 350-400 rating points gains up to a maximum of 10 or 20.

    Let's consider a double round robin with 4 players all rated at, say, 2000 (just to keep it easy). One player gains 75 points, one gains 25 points, one loses 25 points, one loses 75 points.
    Player 1 gets 2 feedback points
    Player 2 gets 6 feedback points
    Player 3 gets 8 feedback points
    Player 4 gets 8 feedback points
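    The worked example above can be checked with a few lines of code. This is a sketch of the stated rule (one feedback point per 25 points of an opponent's positive event gain, counted per game, capped per event); the function and variable names are my own, and the 350-400 point window test is omitted since all four players are rated 2000:

    ```python
    def feedback_points(opponent_results, cap=20):
        """opponent_results: list of (opponent_event_gain, games_vs_opponent)."""
        earned = sum((gain // 25) * games
                     for gain, games in opponent_results
                     if gain > 0)              # only positive gains count
        return min(earned, cap)

    # Double round robin (2 games per pairing), four players rated 2000:
    gains = {"P1": 75, "P2": 25, "P3": -25, "P4": -75}
    for player in gains:
        opponents = [(g, 2) for other, g in gains.items() if other != player]
        print(player, feedback_points(opponents))  # 2, 6, 8, 8 as in the post
    ```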

    The only problem with this model is that if we ran the same tournament with four 1200-rated players they'd get the same feedback points, although it's my contention they'd be "learning more"; likewise for four juniors at 1600. This is why I think the number of feedback points needs to reflect the type of player.

    We could also institute yearly caps on feedback or participation points to help equalize the areas with inflation and those with few events.

  8. #8
    Join Date
    Jan 2009
    Location
    Tecumseh, ON
    Posts
    3,275
    Blog Entries
    1


    It seems to me that the whole point of participation points was to reward people for participating and to act as a counterbalance to the rating points being siphoned off by improving juniors.

    I know that the existence of participation points often influences my decision to play in tournaments where I might not otherwise have played. I seem to do a good job of redistributing any excess rating points I gain from participation points to younger players in Toronto and locally.

    I have not noticed any significant deviation in playing strength between U.S. and Canadian players with equivalent ratings at least in Michigan versus Ontario where I tend to play.

  9. #9


    Quote Originally Posted by Alex Ferreira
    Hi,


    I've thought this for a little while now, and had the opportunity to discuss it with a strong local master at the Ontario Open. These participation points, which were put in place to offset potential junior invasion and combat minor deflation, are now doing damage in the wrong direction. Most of the 1900s I knew from 2 years ago are now 2000s, and most 2200s are now 2300s. 2300s still beat up on 2000s, the point difference remains more or less the same, and clearly we're not all getting so much better at chess.

    ....
    Looking around me, everyone who's active enough seems to be going up significantly. When we compared to USCF ratings, we're no longer deflated, and are maybe a bit on the inflated front. We used to be inflated compared to FIDE by 100 points, and now it's closer to 150 or 200, where players' CFC ratings are 150-200 points higher than FIDE.


    ......


    Now take into account that at the average Hart House open tournament, 100 players play 5 games each. We've just injected 500 points into the rating pool, out of thin air, regardless of the results. All because we get participation points.

    Every month that goes by, we're injecting way too many points into the system, way more than a flood of juniors can possibly slowly chew off the "established" players.

    Alex Ferreira

    I can't say that I have noticed the same inflation in BC. Certainly, my rating has drifted down :-( .... organizing is bad for one's chess. But then, there are not so many rated tournaments here. The participation point model systemically inflates active rating pools relative to others and entrenches rating differences across the country.

    The point you raise illustrates the deficiencies of the motion adjusting the rating system passed by the governors. A ratings 'deflation' problem was alleged but never measured; a simplistic solution, based on no measurement or scientific thought, was voted on and implemented; and the results have never been measured or monitored to check whether the original allegation has been compensated for. (All of this was pointed out at the time.)

    Any change to the rating system should be well thought out and monitored on an ongoing basis. And the rating system should not be subject to 'democratic' changes: the governors should have the power only to direct general statements of principle, not to dictate specific formulae for the rating system.

    Having said all that, some measure of anti-deflationary process is probably required - as a rule, people start low, get better, then leave the system, taking their gain in strength (rating points) with them. Juniors are a particular problem, as their strength may change faster than the rating system is designed to handle.

  10. #10
    Join Date
    Aug 2008
    Posts
    1,564

    CFC Ratings Deflation

    Quote Originally Posted by roger patterson
    Having said all that, some measure of anti deflationary process is probably required - as a rule, people start low, get better, then leave the system - taking the gain in strength (rating points) with them. Juniors are a particular problem as their strength may change faster than the rate that the rating system is designed to deal with.
    Roger - you hit the nail on the head. Some anti-deflationary measures are required. The year prior to the most recent introduction of participation points, I plotted the trajectory based on the adult membership. It was a steady but slow drop; I can't recall the numbers.

    Alex - you say you are witnessing rating inflation. But if your target group is active players, like yourself, you should expect to see big increases.

    Fred - your feedback points idea is intriguing. It would be helpful to read the feedback from the US.

    Whatever the techniques employed, an annual monitoring of rating points should be done. Turn the PP system on or off each year. Keep it simple!

    The junior ratings problem is a more complex question. Sounds like a task for the rating auditor!

