nuvigil
05-02-2018, 01:28 PM
Just dumping some conjecture establishing a theoretical basis for an empirical approach to map classification. No need to respond unless you find it interesting.
1.) Statistics Classification Model
Uses only statistics already available in surf-timer SQL
((# of attempts prior to completion / client rank) + (time in zone / client rank)) / 2
Client ranks are distributed. Exclusionary criteria drop maps that are newer than X days, have an insufficient client sample N, or show ST.D >= 1.5(Q3-Q1), i.e., dispersion exceeding 1.5× the interquartile range
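A minimal sketch of the formula and the exclusionary filter above. The function names and the concrete thresholds (`min_age_days`, `min_n`) are placeholders for the "X days" and "N" left open in the post, not actual surf-timer schema:

```python
from statistics import stdev, quantiles

def raw_score(attempts, zone_time, client_rank):
    """Average of the two rank-normalized statistics in the formula above."""
    return ((attempts / client_rank) + (zone_time / client_rank)) / 2

def map_eligible(samples, map_age_days, min_age_days=30, min_n=50):
    """Exclusionary criteria: map age, sample size N, and ST.D >= 1.5(Q3-Q1).

    min_age_days and min_n are placeholder values for "X days" and "N".
    """
    if map_age_days < min_age_days or len(samples) < min_n:
        return False
    q1, _, q3 = quantiles(samples, n=4)  # quartile cuts of the sample
    return stdev(samples) < 1.5 * (q3 - q1)
```

Maps failing `map_eligible` would simply be dropped before any scoring or z-scoring takes place.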
The remainder is a raw-score product, with advantages:
Absolute z-scoring allows quantitative map comparison
High positive distribution bias (should) correlate with the most competitive maps (the complement to this is the map with the lowest variance in its !pr distribution)
Map distribution curves may prove helpful for ambitious map creators
Bimodality as an indicator of skill bottlenecks. This can be extrapolated down to the stage level, although it probably isn't worth the effort. Conversely, it can be extrapolated globally (beyond KSF servers) to calculate map Standard Error (contingent on congruent server settings), giving a quantitative representation of human error
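One way to put a number on the bimodality idea is the sample bimodality coefficient, (skewness² + 1) / kurtosis; values above roughly 5/9 (the uniform distribution's value) hint at more than one mode. A sketch from plain central moments, not tied to any surf-timer data:

```python
def bimodality_coefficient(xs):
    """Sample bimodality coefficient: (skewness^2 + 1) / kurtosis.

    Values above ~5/9 hint at bimodality, e.g. a completion-time
    distribution split in two by a skill bottleneck.
    """
    n = len(xs)
    mean = sum(xs) / n
    devs = [x - mean for x in xs]
    m2 = sum(d ** 2 for d in devs) / n   # population variance
    m3 = sum(d ** 3 for d in devs) / n   # third central moment
    m4 = sum(d ** 4 for d in devs) / n   # fourth central moment
    skewness = m3 / m2 ** 1.5
    kurtosis = m4 / m2 ** 2
    return (skewness ** 2 + 1) / kurtosis
```

A symmetric two-cluster sample (half fast finishers, half slow) scores 1.0, well above the 5/9 cutoff. Caveat: heavily skewed unimodal data can also score high, so this is a screening heuristic, not a proper statistical test.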
Beyond human error, external variables are network ping and hardware input lag. At this point we're far from the existing subjective ranking system, though I'd be surprised to find much difference.
Raw scores can be converted into the existing 6-tier system; the difference is a rank-shifting phenomenon as new maps become eligible under the aforementioned exclusionary criteria. If the point system were ever based on empirical classification, this would disrupt the continuity of ranks/point awards.
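A minimal sketch of the raw-score-to-6-tier conversion using equal-frequency quantile cuts (the cut placement is my assumption; real tier boundaries could be drawn any number of ways):

```python
import bisect

def assign_tiers(raw_scores, tiers=6):
    """Bin raw scores into tiers 1..6 by equal-frequency quantile cuts.

    Re-running this after new maps become eligible moves the cut points,
    which is exactly the rank-shifting phenomenon described above.
    """
    ordered = sorted(raw_scores)
    n = len(ordered)
    cuts = [ordered[n * k // tiers] for k in range(1, tiers)]
    return {score: 1 + bisect.bisect_right(cuts, score) for score in raw_scores}
```

Because the cuts are recomputed from the whole eligible pool, adding one new map can silently demote or promote existing maps across tier boundaries.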
2.) Computational Path Analysis
A computational geometric analysis from raw map files outputting the upper and lower limits of the map completion envelope—there's a reason I call this all theoretical.
That's all. This is not a suggestion, just a theoretical discussion mainly for educational purposes.
When I wrote the formula in the OP, I remember hoping someone else would come along with a new or improved version.
I am trying to insulate the formula from map popularity. We all know certain maps just don't get played often, but it's important to preserve the validity of their rankings for objectivity. The impact of this is debatable but could be solved with weighted ranks.