Item # 17 - Modifications to the Can. Rating System / Report of the Rating Auditor



Bob Armstrong
10-01-2010, 08:09 AM
Item # 17 - Modifications to the Can. Rating System / Report of the Rating Auditor

The introductory post is by Bill Doubleday, Rating Auditor:

Rating System Issues
The CFC Rating system allows players to have a reasonable expectation of the outcome when they play. Underlying the system is the assumption that players have a level of competence which persists. Statistical analyses of individual ratings over decades usually show a rapid rise over the first few years and a slower rise to a peak which persists for decades followed by a slow decline.
Ratings are self-adjusting: higher competence than reflected in the current rating is rewarded with a rating increase, and ratings that are too high decrease as a result of tournament results.
In recent years, chess has been promoted in schools, resulting in large numbers of young players learning the game. Typically they begin as weak players with ratings of 1000 or less. With coaching and extensive practice, their competence rises. Much of this practice does not involve CFC rated games. Thus, competence as shown in CFC tournaments sometimes rises rapidly and discontinuously.
If a player begins playing in CFC rated tournaments, a provisional rating is created, based on performance. After a rating is established, changes occur due to transfers of points to or from other players. For a player rated 1000 to rise to 2000, other players have to give up 1000 points. The opponents who lose these points may be playing as well as before, but they lose points to the improving newcomer. This process deflates ratings. Players losing points often complain. There have been examples of sudden losses of 50-100 points.
In recent years the CFC has addressed this deflation by adding points to ratings, either as a one-time boon reflecting activity or as participation points for each game played. This stopped and reversed the deflation, but did not address the root cause. Two equal players would see both their ratings increase after a drawn game even if there was no increase in skill. The effect was substantial, often 50 rating points per year or more.
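The effect Bill describes (two equal players both gaining after a draw) is easy to sketch in a few lines. The per-game bonus value below is illustrative, not the actual CFC figure:

```python
def rating_after_draw_vs_equal(rating, k=32, participation_bonus=2.0):
    """Drawn game between two equally rated players.

    Pure Elo transfers nothing: expected score 0.5, actual score 0.5.
    A per-game participation bonus (the value here is an illustrative
    assumption) is added on top, so both players gain with no change
    in skill.
    """
    elo_change = k * (0.5 - 0.5)  # exactly zero for a draw between equals
    return rating + elo_change + participation_bonus

a = rating_after_draw_vs_equal(1600.0)
b = rating_after_draw_vs_equal(1600.0)
# Both players now sit at 1602: the pool mean rose by the bonus amount.
```

Over many rated games per year, these per-game additions compound into the tens of points of annual drift mentioned above.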
A number of solutions to this problem could be considered.
a. Start juniors with higher provisional ratings (say 1500). That way an increase to 2000 would only take 500 points from established players, reducing the deflation.
b. Instead of the redistribution of points as now occurs, opponents of rapidly rising players could have their rating changes computed against the improver's performance rating rather than the outdated official rating. This would require a standard for what constitutes a rapid rise, to protect against random variations in performance.
I suggested approach b at the AGM, but would like to do some analyses and simulations to refine it and verify how it would work. Roger Patterson has agreed to work with me on this.
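A minimal sketch of option b's bookkeeping, assuming standard Elo expectancy with K = 32; the ratings and the improver's performance figure are invented for illustration:

```python
def expected_score(own, opp):
    """Standard Elo expectancy for `own` against an opponent rated `opp`."""
    return 1.0 / (1.0 + 10.0 ** ((opp - own) / 400.0))

def rating_change(own, opp_rating, score, k=32):
    """Points won or lost from one game, rated against `opp_rating`."""
    return k * (score - expected_score(own, opp_rating))

official = 1200     # improver's outdated official rating (illustrative)
performance = 1700  # improver's recent performance rating (illustrative)

# A 1600-rated opponent loses to the improver.  Rated against the
# official number he drops about 29 points; rated against the
# performance number (option b) he drops only about 11.5.
loss_official = rating_change(1600, official, 0)
loss_option_b = rating_change(1600, performance, 0)
```

The improver's own gain would be computed as now; only the opponent's side of the ledger changes.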

There are other issues as well. The current software gives incorrect ratings for players above 2200 who lose enough points to fall below 2200 – a factor of 32 is used instead of the 16 described in the handbook. Also, the program is compiled and undocumented, which makes auditing more difficult.

Active ratings are much lower than regular ratings. I suggested adjusting them to bring the averages to the same level, but there was a lack of interest at the AGM. Some people think they should be abolished. I see some value in having them, but is it worth the effort to maintain two ratings?

Some food for thought.

Michael von Keitz
10-01-2010, 01:13 PM
Approach b sounds great, except for the fact that it may represent a significant time investment. What sort of time to completion would Bill be proposing?

I am among those who would be willing to see active ratings disappear. That said, I think a survey of our current members who possess active ratings is in order, as I cannot claim to represent that group.

Fred McKim
10-01-2010, 01:22 PM
Totally against touching Active ratings. Here in PEI we have 50% regular, 50% active. Leave participation points on the active rating system. Once a year, for players who have played, increase the active rating of anyone whose active rating is lower than their regular rating by half the difference, or something like that.

Have recommended applying participation/feedback points for playing juniors. Alternatively, instead of using the pre-event rating of a junior, use an average of the pre-event rating and performance rating for those whose Rp is more than 100 points above Ro.

All sorts of ideas.

Stuart Brammall
10-01-2010, 02:03 PM
What is the current problem with the system we have?

If it is fast-improving juniors who in your opinion are under-rated that you are trying to correct, it seems to me that any mechanism used to artificially increase their ratings will cause rating inflation.

This is my math intuition talking and I have not done any statistics to back it up, but it seems that if you increase a low-rated player you also increase all of his future opponents -- insofar as those he loses to will gain more, but perhaps more importantly, those he beats will lose less. Is this not inflating the pool as a whole?

If someone is really under-rated it does not take long for their rating to correct using the standard system -- and also, those he brings down on his way up should correct themselves quickly as well. Unless they were overrated. ;)

Fred McKim
10-01-2010, 02:08 PM
Sorry, but the rating system does not have a built-in way of re-introducing rating points taken from the pool by a player as they improve through the ranks.

Stuart Brammall
10-01-2010, 02:15 PM
They don't get taken away, they get redistributed. If an old master, say 2250, has a bad event and loses to a bunch of kids, then the next event, when he manages to dodge the kids, or when the kids have got their ratings up, he should still perform at 2250 and go back up, unless he was overrated to begin with.

It is important to note that a rating never indicates a person's absolute chess strength, only their strength relative to the pool. If the pool is getting stronger and you are not, you go down, even though your strength is unchanged.

Christopher Mallon
10-01-2010, 09:36 PM
I proposed this once a few years ago when I was on the ratings committee: what about simply saying that for anyone under 18, their rating acts as a provisional rating based on their most recent 24 games?

Yes, I can think of a couple of problems with that, but I think it's still worth discussing.

Fred McKim
10-01-2010, 09:50 PM
They don't get taken away, they get redistributed. If an old master, say 2250, has a bad event and loses to a bunch of kids, then the next event, when he manages to dodge the kids, or when the kids have got their ratings up, he should still perform at 2250 and go back up, unless he was overrated to begin with.

It is important to note that a rating never indicates a person's absolute chess strength, only their strength relative to the pool. If the pool is getting stronger and you are not, you go down, even though your strength is unchanged.

Ratings are meant to be stable, not fluctuating due to other people joining the pool. At least, that's how the CFC and FIDE systems work.

Rob Clark
10-01-2010, 10:44 PM
I agree that something needs to be done about the rating system. In regards to Stuart's post, I disagree. If the pool that the master is in gets stronger, then points will be taken from him and passed down to the up-and-coming juniors with established ratings. The only way for him to rectify this is to take points from another pool. However, coming from way up in Thunder Bay, where the nearest large city is 10 hours away, that's not always a possibility. This also assumes that the other pool's ratings have not been deflated as well, which over time they invariably will be. It is a problem, especially with how much emphasis is put on ratings, since this system will inevitably lead to deflation (especially in small rating pools with many juniors).

As for the active ratings, I think they serve a purpose. My club in Thunder Bay holds an active tournament every weekend with great results. Active chess is fun and it's great that it's far less of a time commitment than a regular tournament.

Paul Leblanc
10-02-2010, 12:16 PM
I have raised the issue of under-rated juniors several times, most recently after the Langley Open last month when five of the juniors had performance ratings 300 to 500 points above their current CFC ratings. There is a great deal of dissatisfaction among the experienced players who get crushed by 1300 rated grade 3 kids whose real playing strength is several hundred points higher.
Now that participation points have been eliminated (a good move), there will most certainly be deflation in the system, since players tend to come into competitive chess with a low initial rating and leave with a higher one. That leaves room to boost the ratings of under-rated juniors.
Chris Mallon's idea has a great deal of merit but I'd like to see a smaller number of games used to calculate the current rating. Some of the juniors out here are playing only 6-12 CFC rated games per year but are improving very rapidly through their participation in unrated junior events, online chess, coaching and casual play.

Christopher Mallon
10-02-2010, 12:34 PM
That was one of the problems I saw with it, Paul, but a way around that might be either a) the most recent 24 games or b) all games in the previous 12 or 18 months, whichever is the lower number of games.

Paul Leblanc
10-02-2010, 04:50 PM
Chris, I think you have found the solution. It is common sense and easy to apply. I hope the Rating Auditor is tuned in.

Bob Armstrong
10-02-2010, 05:15 PM
Hi Chris and Paul:

If you think you have an improvement to the system, I'd suggest you e-mail the proposition to Bill - he has not yet attended this meeting.

Bob

Fred McKim
10-02-2010, 05:41 PM
This is a variation on changing the k (memory) factor.

The basic principle is that the players likely to be improving fast, will have their ratings move faster.

For example, the junior gains 8 points from a game while the adult opponent loses only 4 points.

So an easier solution would be to base juniors' memory on 15 games instead of 25 games. That means each win against an evenly rated player would be worth 26.7 points instead of the present 16.

Once juniors got to, say, 2000, they could drop back to a normal k value.
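Fred's figures are consistent with K = 800/N for an N-game memory (an assumed relationship, but it reproduces both of his numbers):

```python
def k_from_memory(n_games):
    # Assumed relationship between the "memory" length and the K factor.
    return 800.0 / n_games

def gain_for_win_vs_equal(k):
    # Beating an evenly rated opponent: expected score 0.5, actual 1.0.
    return k * (1.0 - 0.5)

adult = gain_for_win_vs_equal(k_from_memory(25))   # 16.0 points
junior = gain_for_win_vs_equal(k_from_memory(15))  # ~26.7 points
```

So shortening the memory from 25 to 15 games is equivalent to raising K from 32 to about 53.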

Stuart Brammall
10-02-2010, 11:46 PM
As many of you are probably aware, there is currently a contest being held to see who can develop a rating system that out-predicts the Elo system -- perhaps changes to the rating system should wait for the results.

Bob Gillanders
10-03-2010, 01:53 AM
Okay, so the problem is that some juniors are underrated. And until their rating catches up to their real strength, their opponent is getting screwed.

How about this idea.

George (old guy) - rated 1800
Johnny (hot shot kid) - rated 1300 (but really much stronger)

George and Johnny play each other in tournament. Johnny wins!
Johnny's tournament performance rating = 1800

For their game:

George only loses 16 points, not 30 points, calculated using Johnny's performance rating rather than his pre-tournament rating.

Johnny gains 30 points! (same as now)
:D
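One side effect of Bob's scheme, sketched below under standard-Elo assumptions (K = 32): because George's loss and Johnny's gain are computed against different ratings, the game is no longer zero-sum and the pool as a whole gains points.

```python
K = 32

def change(own, opp, score):
    """Elo rating change for one game against an opponent rated `opp`."""
    expected = 1.0 / (1.0 + 10.0 ** ((opp - own) / 400.0))
    return K * (score - expected)

# George's loss is rated against Johnny's 1800 performance rating...
george = change(1800, 1800, 0)   # exactly -16
# ...but Johnny's gain is still rated against George's actual 1800 rating.
johnny = change(1300, 1800, 1)   # about +30

injected = george + johnny       # about +14 points enter the pool
```

Whether that injection is a bug or a deliberate anti-deflation feature is exactly the point debated later in this thread.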

Christopher Mallon
10-03-2010, 10:04 AM
Bob, that's similar to the other ideas proposed, except it's setting his provisional game limit to however many rounds were in the current tournament, sort of.

The problem is, what if Johnny is a typical youth player and is wildly inconsistent? Round 1 he crushes George, and in the following four rounds he plays well but ends up making big mistakes and losing -- bad timing/luck. His performance for the tournament: 900. So George loses 32 points now, and those who beat Johnny don't gain much at all, despite him actually being rated 1300.

Better to use a larger (but still recent) sample size.

On another note, should we consider following FIDE's lead and NOT rating games played against unrated players (and possibly provisional players too)? The game would be rated for one player (the unrated one) but not the other.

Paul Leblanc
10-03-2010, 12:14 PM
Chris, just speaking for myself I'd prefer to have all my games rated. I don't mind playing the hot-shot kid if I know that his recent provisional rating is going to be used to indicate how many rating points I'm going to win or lose.
The game doesn't seem serious if it's not rated for one player.

Christopher Mallon
10-03-2010, 12:16 PM
The problem is you don't know - provisionals are calculated first, so you actually don't know what his rating will be when your game gets rated. Also, what about unrated opponents?

Bob Gillanders
10-03-2010, 12:45 PM
Chris, going back to my example.

If the kid loses his other 4 games with blunders etc. and has a performance rating of 900, well then is he really stronger than his 1300 rating?

To calculate George's rating, I would use the higher of Johnny's current rating of 1300 and his performance rating of 900.
Either way, poor George loses 30 points. :(

After all, the system needs some credibility. You can't just assume every kid is better than his/her rating! :rolleyes:

Stuart Brammall
10-03-2010, 01:25 PM
I don't think under-rated players are as much of a problem as people claim... does anyone have an example of a player who is grossly under-rated right now? And any who have in the past been under-rated for a significant period of time?

Inconsistency in play is an entirely different matter -- if you wanted to try to correct for such things you would need a system which maintains a record of a player's average performance over many events and then adjusts their k factor according to the standard deviation of their performances. Too messy to bother with: all players would need their own specific K.

So does anyone have an example of a player who has performed at, say, more than 350 points above their rating for their past 4-5 events?

Stuart Brammall
10-03-2010, 09:13 PM
I just had an interesting idea---
As many of you have surely gathered, I am in general opposed to accelerating the rating increase of "under-rated" players, mainly because I do not think they exist in great number, and any that do are likely counterbalanced by "over-rated" players.

However, if you wish to accelerate the increase of players performing higher than their rating, does it not make sense to counterbalance this with an accelerated decrease for players performing proportionally lower than their current rating?

I give as an example David Krupka, a gentlemanly Hart House alum who had not played for some years and began playing again at the Scarborough chess club in 2008. Starting rating 2262 (peak of 2307), current rating 1949. All those kids who beat him up in Scarborough certainly got a few more points out of him than they deserved, and worse still -- beating up on him gave them the strangely high perfs of the kind everyone in this thread is worried about.

Paul Leblanc
10-03-2010, 09:21 PM
No, there aren't that many under-rated juniors. But here are the ones out here who need their ratings adjusted: Tian Tian Geng, Jason Cao, Jeremy Hui, Ryan Lo, Loren Laceste. They don't all meet Stuart's criteria, but if you check them out on the ratings page (and ignore junior events) you will see what I mean.

Fred McKim
10-03-2010, 09:30 PM
PEI Case - 4-year rise of Anthony Banks (age 17-21)

#   Date     Pre   Perf  Post
1   2005/07  1560  1646  1569
2   2005/10  1569  1819  1629
    (rating adjustment)
3   2007/05  1648  1892  1717
4   2007/07  1717  2104  1823
5   2007/11  1823  1725  1825
6   2008/05  1825  2173  1951
7   2008/07  1951  1749  1931
8   2008/10  1931  1900  1955
9   2009/05  1955  1906  1979
10  2009/07  1979  2014  1993
11  2009/11  1993  2163  2046

You cannot use these performance ratings directly, as many games were against low-rated players. Increase of 486 points in 4 years: 29 from the rating adjustment and approximately 90 from participation points.

So 347 points were gained (and removed from the stable pool) over approximately 55 games = 6.3 points gained per game, or 150+ points stronger than his rating over this period.

Garland Best
10-03-2010, 09:38 PM
They don't get taken away, they get redistributed. If an old master, say 2250, has a bad event and loses to a bunch of kids, then the next event, when he manages to dodge the kids, or when the kids have got their ratings up, he should still perform at 2250 and go back up, unless he was overrated to begin with.

It is important to note that a rating never indicates a person's absolute chess strength, only their strength relative to the pool. If the pool is getting stronger and you are not, you go down, even though your strength is unchanged.

The common pattern is that a junior enters the system, gains rating points rapidly, and then loses interest as he/she enters high school/university, never to return. When they leave (the theory goes), they remove from the pool the rating points gained from opponents.

All of the issues with ratings here revolve around some fundamental misconceptions.

Misconception 1: that rating deflation is a clearly defined, measured concept. I have yet to see any meaningful statistical analysis of the rating pool of CFC players over the past 30 years, even though the data is readily accessible for that period. Show me data on the mean, standard deviation and quartile points of the distribution, and we can start saying whether inflation or deflation really exists.

Misconception 2: that ratings form an absolute measure of chess skill. They are supposed to form a relative measure of chess skill (player A is rated 200 points higher than player B, so A should defeat B about 3 times in 4). Unfortunately, since titles are directly tied to ratings, this misconception will continue to be propagated.

So, barring a radical change to how we rate people and give titles (sorry, won't happen), the only way to really resolve this dynamic is to statistically analyze the data gathered over the past 30 years to (a) determine how much the distribution has shifted over the years, and (b) evaluate proposals that minimize changes in the overall rating distribution while still being a valid predictor of results.

In my mind such a study requires somebody with a strong statistical background, like a university postdoc. Has anyone done such a study for the CFC in the past? What do we have as reference material? Can we approach a university about such a project?

Without a proper analysis of the effect of changes to the rating system, any ad hoc change is reckless, to put it mildly. I will not support any changes to the rating system without a proper analysis of the consequences. Instead, all I can support are any corrections to arithmetic errors in the calculations, such as the 2200 rating bug already mentioned, and the identification of manipulation or malfeasance to the rating system.

Stuart Brammall
10-03-2010, 09:41 PM
I took a look at their profiles and they just don't look that under-rated to me.
They're all pretty inconsistent. In one of Geng's tournaments he loses every game, then beats Szalay.

For the sake of comparison I offer my own rating profile... Now, I play so much that my rating should be more accurate than most. I have a couple of perfs over 2000 and one over 2200, similar-looking to these kids'.

William G. Doubleday
10-03-2010, 09:47 PM
Chris Mallon's suggestion is almost identical to my second option. The performance rating is almost identical to a provisional rating over the same number of games. The only difference is that the provisional rating uses the linear approximation 400(Wins - Losses)/(games played).

I expect it would stabilize somewhere between 10 and 20 games, but I would need to do some simulations to be precise.
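The two formulas Bill compares can be sketched as follows. The linear version assumes the conventional form (average opponent rating plus 400(W − L)/N, matching the approximation he quotes); the logistic version solves for the rating whose Elo-expected score equals the actual score. The opponent ratings are invented for illustration.

```python
def perf_linear(opponents, wins, losses):
    # Conventional linear performance/provisional formula (assumed form):
    # average opponent rating + 400 * (W - L) / N.
    n = len(opponents)
    return sum(opponents) / n + 400.0 * (wins - losses) / n

def perf_logistic(opponents, total_score):
    # The rating whose total Elo-expected score equals the actual
    # score, found by bisection on the expectancy curve.
    lo, hi = 0.0, 4000.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        expected = sum(1.0 / (1.0 + 10.0 ** ((r - mid) / 400.0))
                       for r in opponents)
        if expected < total_score:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

opps = [1500, 1600, 1700, 1800, 1900]
linear = perf_linear(opps, wins=4, losses=1)      # 1700 + 240 = 1940
logistic = perf_logistic(opps, total_score=4.0)   # close, but not equal
```

For moderate score percentages the two track each other closely, which is why the linear approximation is good enough for provisional ratings.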

I don't follow Stuart's reasoning. When you lose to an under-rated player, you lose more points than you would if he had a proper rating. The difference can be large.

Bill Doubleday

Christopher Mallon
10-03-2010, 09:48 PM
A fully detailed investigation into the ratings history to chart any inflation/deflation allowing for various external factors should net the person doing the study either a large sum of money or a Master's degree, counting as their thesis.

Inflation or deflation is not by itself bad, when you get right down to it. What is bad is perception.

People, rightly or wrongly, are attached to their ratings. If they think either that a) playing will cost them points they don't deserve to lose because there are lots of under-rated players, or b) their rating is abnormally high and thus not really valid, they will be unhappy and quite possibly not play as often.

Yes, B can happen, but there was that recent study that showed the average chessplayer believed themselves to be 100 points underrated (and on average, their ratings were perfectly accurate, the study concluded). So option B would only kick in if their rating climbed more than 100 points abnormally, on average.

Stuart Brammall
10-03-2010, 10:06 PM
This is interesting... all the proposed mechanisms for identifying under-rated players have relied on performance, and yet here the performance is not indicative of the player's strength.

He does have 2 perfs that differ from his rating by more than 300 points, though -- is this enough to pick him out for a boost as high as 150 points?

Stuart Brammall
10-03-2010, 11:20 PM
Yes, B can happen, but there was that recent study that showed the average chessplayer believed themselves to be 100 points underrated (and on average, their ratings were perfectly accurate, the study concluded). So option B would only kick in if their rating climbed more than 100 points abnormally, on average.

Also, such a result shows positively that the number of over-rated players counterbalances any who might be under-rated.

Valer Eugen Demian
10-04-2010, 12:34 AM
I have raised the issue of under-rated juniors several times, most recently after the Langley Open last month when five of the juniors had performance ratings 300 to 500 points above their current CFC ratings. There is a great deal of dissatisfaction among the experienced players who get crushed by 1300 rated grade 3 kids whose real playing strength is several hundred points higher.
Now that participation points have been eliminated (a good move), there will most certainly be deflation in the system since players tend to come into competitive chess with an initial low rating and leave with a higher rating. That leaves room to boost up the ratings of under-rated juniors.
Chris Mallon's idea has a great deal of merit but I'd like to see a smaller number of games used to calculate the current rating. Some of the juniors out here are playing only 6-12 CFC rated games per year but are improving very rapidly through their participation in unrated junior events, online chess, coaching and casual play.

What is missing from Paul's analysis is what lies below the Langley Open. We have a few junior chess clubs here in BC, and the players he sees coming to that Open are the best of the crop! For each one of those juniors playing hundreds of points above their rating, there are at least 10 or more kids who are correctly rated!

I have a well established rating structure based on the class level and it starts from novice all the way to 2000. The system has been proven right for the past 9 years (we started it back in 2001); any modification in inflating the ratings of these juniors will be a big mistake!

A junior can achieve a provisional CFC rating of roughly 1300 after only one good tournament (of 5 games); most players in this situation regress (normally) during the following ones, and they learn not to overestimate their true knowledge. Let's not mess with what works!

Paul Leblanc
10-04-2010, 12:58 AM
I'm not proposing that we do anything to the 10 juniors who are correctly rated, I only want to fix the one who is greatly under-rated and playing in open tournaments getting people upset because they lose more rating points to him/her than they would if the rating was more accurate.

Valer Eugen Demian
10-04-2010, 01:14 AM
I'm not proposing that we do anything to the 10 juniors who are correctly rated, I only want to fix the one who is greatly under-rated and playing in open tournaments getting people upset because they lose more rating points to him/her than they would if the rating was more accurate.

So every time we send over our best, they would get preferential treatment? Instead, the solution has always been to increase the pool of players near the base of the pyramid. The more juniors are involved in CFC regular rated tournaments, the more accurately the top ones will be rated. I know it is hard and takes time... :)

Of course the top juniors rise fast; they are learning like sponges and improving quickly. This is not the same as your regular adult player participating for fun... ;)

Stuart Brammall
10-04-2010, 02:00 AM
Allow me to be more precise in my objection:
The rating pool has a specific average, equal to the sum of all the points divided by the number of players -- in theory these points form a normal distribution of sorts. Since points are normally won and lost along a linear scale -- that is to say, if someone wins 24 points for beating someone rated 200 points above them, the person they beat loses 24 points -- the average rating of the entire pool remains constant. Whenever someone is increased artificially, this increases the average rating of the pool, AKA inflation.
It is the inflation that I am objecting to.
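Stuart's conservation argument is easy to check numerically: with symmetric Elo updates, the pool total never changes, so only an asymmetric or artificial adjustment can move the average. A small simulation with invented ratings and random results:

```python
import random

def play_game(ratings, i, j, k=32):
    """Symmetric Elo update: whatever player i gains, player j loses."""
    exp_i = 1.0 / (1.0 + 10.0 ** ((ratings[j] - ratings[i]) / 400.0))
    score_i = 1.0 if random.random() < exp_i else 0.0
    delta = k * (score_i - exp_i)
    ratings[i] += delta
    ratings[j] -= delta

random.seed(1)
pool = [1400.0, 1600.0, 1800.0, 2000.0]
before = sum(pool)
for _ in range(1000):
    i, j = random.sample(range(len(pool)), 2)
    play_game(pool, i, j)
after = sum(pool)
# `before` and `after` agree to floating-point error; an artificial
# +100 boost to any one player would raise the pool average by 100/4.
```

Note this only shows conservation within a closed pool; it says nothing about players entering under-rated or leaving over-rated, which is the deflation mechanism others in the thread describe.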

Fred McKim
10-04-2010, 08:26 AM
Stuart: with all due respect, I think you need to go back and read Elo's book. The goal of almost any national rating pool is to stabilize the ratings of players who are neither improving nor declining in playing strength.

The two distributions are both normal distributions, just that one is shifted on the x axis from the other.

William G. Doubleday
10-05-2010, 09:27 PM
Hello All

Maybe I was not clear enough in describing my option 2. The rising player does not rise any faster, but his opponents lose points as if they were playing an opponent with the performance rating of the rapid improver. The rapid improver goes up without bringing his opponents down as much.

Regarding Stuart's inflation point. When a new player begins with a low rating, it brings down the average. This is deflation. When a junior rises to 2000 and stops playing, the average of the active players also goes down.

So far, I have not seen anything better than my option 2.

Bill Doubleday

Paul Leblanc
10-06-2010, 01:06 AM
Bill, are you convinced enough to put forward a motion? If so, I would be pleased to second it.

Paul Leblanc
10-07-2010, 03:26 PM
I had coffee with Roger Patterson this morning. He has indeed been doing quite a bit of analysis of the CFC rating data. Perhaps it would be premature to table a motion until the Ratings Auditor has had a chance to confer with Roger. I do feel, however, that we are generally on the right track with Option 2.

Egidijus Zeromskis
10-07-2010, 05:01 PM
Also, the program is compiled and undocumented, which makes auditing more difficult.

I think I've read that the source exists too.
(Or does it mean that the source is undocumented?)

Lyle Craver
10-07-2010, 05:48 PM
I'm not convinced that is true.

Pretty much anywhere you have a relatively small geographically isolated rating pool the kinds of things being discussed can happen.

An example was Winnipeg where their players nearly always gained rating points when playing outside their area. However usually when a Winnipeg player 'went on the road' it would be to Minneapolis rather than to the larger rating pools in southern Ontario or Alberta. There was even talk at one point of a 'rating boon' for Manitoba on account of this.

Then Winnipeg hosted a Canadian Open and local players were faced with numerous players from outside their area. Nearly all of them did better than expected based on their ratings (which to me demonstrates they were in fact under-rated with respect to the national average) and voila! No more talk of being under-rated or desire for a rating boon.

How one quantifies this sort of thing in a fair and consistent manner while maintaining the integrity of the system is unclear, but effectively the ONLY way to avoid such anomalies is to create a means for the whole pool to play each other -- and in a country the size of Canada, that is simply not going to happen.

Stuart Brammall
10-07-2010, 08:31 PM
I agree that the geography of Canada makes a consistent rating system difficult....
An anecdote: when I began playing chess at the Hart House club, there were a number of players there who began at the same time. We all played in the CUCC reserves in 2006-07 and received provisional ratings of between 1000 and 1300. We almost never played outside the club, and we never played against the established players in the club. After a year or two of Friday night blitz we were all much improved -- any one of us, on various occasions, would play an event elsewhere and post a 1700-1900 perf. Then the player who had played outside would generously donate his new wealth of rating points to the rest of us at the next in-house event... since we were all close to the same strength. Now (4 years later) most of us have caught up to the national pool (though there are still a few guys in the 1500s).

The problem in this case was demographic -- though outsiders said we were under-rated, in fact our ratings were accurate within the pool we played in. To us, it was everyone else who was overrated.

The problem of isolated rating pools is one which I do not think will be corrected by the means suggested in this thread. When I looked at the juniors Paul mentioned, I noticed that although they would have occasional perfs above their current rating, within their localized pool they still performed at their current rating.

It seems this issue, which stems from demographic/geographic factors, requires a similarly demographic solution... the trouble being that the only way to determine how deflated/inflated a pool is is by having its members play against outsiders -- which they don't do consistently enough to provide relevant data. :$

Christopher Mallon
10-07-2010, 08:58 PM
The best way we have to reduce the impact of regional pools is to promote major national events such as the Canadian Open, CYCC (and to a lesser extent, Closed and Junior type events). Also helping would be any "traveling matches" where one city sends a team to another city to play.

Fred McKim
10-07-2010, 09:11 PM
There have been several provincial adjustments made over the years, often after a number of players from an area have played in a Canadian Open or similar event. To the best of my recollection, the last such adjustment was to NL back in the '80s.

Valer Eugen Demian
10-08-2010, 06:40 PM
I agree that the geography of Canada makes a consistent rating system difficult....
An anecdote: when I began playing chess at the Hart House club, there were a number of players there who began at the same time. We all played in the CUCC reserves in 2006-07 and received provisional ratings of between 1000 and 1300. We almost never played outside the club, and we never played against the established players in the club. After a year or two of Friday night blitz we were all much improved -- any one of us, on various occasions, would play an event elsewhere and post a 1700-1900 perf. Then the player who had played outside would generously donate his new wealth of rating points to the rest of us at the next in-house event... since we were all close to the same strength. Now (4 years later) most of us have caught up to the national pool (though there are still a few guys in the 1500s).

The problem in this case was demographic -- though outsiders said we were under-rated, in fact our ratings were accurate within the pool we played in. To us, it was everyone else who was overrated.

The problem of isolated rating pools is one which I do not think will be corrected by the means suggested in this thread. When I looked at the juniors Paul mentioned, I noticed that although they would have occasional perfs above their current rating, within their localized pool they still performed at their current rating.

It seems this issue, which stems from demographic/geographic factors, requires a similarly demographic solution... the trouble being that the only way to determine how deflated/inflated a pool is is by having its members play against outsiders -- which they don't do consistently enough to provide relevant data. :$

The analysis is right on the money! All those juniors listed might or might not win a junior tournament against players of their own age in roughly the same rating pool... Now if I could only lure them back to our club to "donate" their latest rating-point gains :D

Lyle Craver
10-08-2010, 07:32 PM
On what do you base that comment and what are your criteria for making that conclusion?

Stuart Brammall
10-09-2010, 02:50 AM
I base it on the study Chris mentions.

Christopher Mallon
10-09-2010, 08:21 AM
The only study I mentioned was one that hasn't happened yet!

Fred McKim
10-09-2010, 01:35 PM
Back to feedback points:

These would be given to everyone who plays a rapidly improving junior -- defined as a junior who has gained 100 or more points in the past 12 months (or in the past 25 games or so).

We would just have to figure out the correct number of points to use. I picked 100 points because historically I've found most active juniors improve at about that rate.

Lyle Craver
10-09-2010, 05:30 PM
The main reason I hate facing 'up-and-coming juniors' across the board has nothing to do with my rating and everything to do with the fact that no one ever thinks you played well if you win, but you always 'played like a chimp' if you lose -- lots of blame but never any praise.

You pretty much have to play a Fischer - D Byrne game to get any credit! (I'm sure you all know the game I mean)

Les Bunning
10-09-2010, 11:44 PM
What system does the USCF use? They have more resources than we do. We should be using the same system as them.
Les Bunning