
View Full Version : Theoretical Basis for Difficulty Classification



nuvigil
05-02-2018, 01:28 PM
Just dumping some conjecture establishing a theoretical basis for an empirical approach to map classification. No need to respond unless you find it interesting.

1.) Statistics Classification Model

Uses only statistics already available in surf-timer SQL


((# of attempts prior to completion / client rank) + (time in zone / client rank)) / 2


Client ranks follow a known distribution. Exclusionary criteria remove maps that are newer than X days, have an insufficient client sample N, or whose score distribution has ST.D >= 1.5(Q3-Q1)
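The formula and the exclusionary criteria above can be sketched in a few lines. This is a hypothetical illustration, not surftimer's actual schema: the field names, and the cutoffs for map age and sample size, are placeholder assumptions.

```python
def raw_score(attempts: int, zone_seconds: float, client_rank: int) -> float:
    """Average of the two rank-weighted components from the post:
    ((attempts / rank) + (zone time / rank)) / 2."""
    return (attempts / client_rank + zone_seconds / client_rank) / 2

def eligible(map_age_days: int, sample_n: int,
             stdev: float, q1: float, q3: float,
             min_age: int = 30, min_n: int = 50) -> bool:
    """Exclusionary criteria: drop maps that are too new, have too few
    clients, or are overdispersed (ST.D >= 1.5 * IQR). The min_age and
    min_n defaults are illustrative stand-ins for 'X days' and 'N'."""
    if map_age_days < min_age or sample_n < min_n:
        return False
    return stdev < 1.5 * (q3 - q1)
```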


The remainder yields a raw score, with several advantages:

Absolute z-scoring allows quantitative map comparison
High positive distribution bias (should) correlate with the most competitive maps (the complement to this is the map with the lowest variance in !pr distribution)
Map distribution curves may prove helpful for the ambitious map creators
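The z-scoring step that makes maps quantitatively comparable is simple: standardize each map's mean raw score against the pool of all eligible maps. A minimal stdlib sketch, assuming per-map mean raw scores have already been computed:

```python
from statistics import mean, stdev

def map_z_scores(map_means: dict[str, float]) -> dict[str, float]:
    """Z-score each map's mean raw score against all eligible maps,
    so any two maps can be compared on one scale."""
    mu = mean(map_means.values())
    sigma = stdev(map_means.values())
    return {name: (m - mu) / sigma for name, m in map_means.items()}
```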

Bimodality as an indicator of skill bottlenecks. This can be extrapolated down to the stage level, although it probably isn't worth the effort. Conversely, it can be extrapolated globally (beyond KSF servers) to calculate a map's Standard Error (contingency: congruent server settings), giving a quantitative representation of human error.
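One way to operationalize "bimodality as a bottleneck indicator" is Sarle's bimodality coefficient, computed from sample skewness and kurtosis; values above roughly 5/9 suggest a bimodal distribution. This is one possible detector among several (the post doesn't name a specific test):

```python
from statistics import mean

def bimodality_coefficient(xs: list[float]) -> float:
    """Sarle's bimodality coefficient. Values above ~5/9 (0.555)
    suggest bimodality, i.e. a possible skill bottleneck."""
    n = len(xs)
    m = mean(xs)
    devs = [x - m for x in xs]
    m2 = sum(d**2 for d in devs) / n
    m3 = sum(d**3 for d in devs) / n
    m4 = sum(d**4 for d in devs) / n
    g1 = m3 / m2**1.5           # sample skewness
    g2 = m4 / m2**2 - 3         # excess kurtosis
    correction = 3 * (n - 1)**2 / ((n - 2) * (n - 3))
    return (g1**2 + 1) / (g2 + correction)
```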


Beyond human error, external variables are network ping and hardware input lag. At this point we're far from the existing subjective ranking system, though I'd be surprised to find much difference.

Raw scores can be converted into the existing 6-tier system. The difference is a rank-shifting phenomenon: as new maps become eligible under the exclusionary criteria above, tier boundaries move. If the point system were ever based on empirical classification, this would disrupt the continuity of ranks/point awards.
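The tier conversion could be done by quantile: sort maps by score and split them into six equal bins. This sketch also makes the rank-shifting problem concrete, since adding a newly eligible map can move every boundary:

```python
def assign_tiers(scores: dict[str, float], tiers: int = 6) -> dict[str, int]:
    """Quantile-based tiering: maps sorted by raw score are split into
    `tiers` equal-sized bins, tier 1 = easiest, tier 6 = hardest."""
    ordered = sorted(scores, key=scores.get)
    return {name: min(i * tiers // len(ordered), tiers - 1) + 1
            for i, name in enumerate(ordered)}
```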

2.) Computational Path Analysis
A computational geometric analysis of raw map files that outputs the upper and lower limits of the map-completion envelope—there's a reason I call this all theoretical.

That's all. This is not a suggestion, just a theoretical discussion mainly for educational purposes.



When I wrote the formula in the OP, I remember hoping someone else would come along with a new or improved version.
I am trying to insulate the formula from map popularity: we all know certain maps just don't get played often, but it's important to preserve the validity of their rankings for objectivity. The impact of this is debatable, but it could be addressed with weighted ranks.

Stevo_97
05-02-2018, 06:32 PM
I see.

Silverthing
05-04-2018, 04:51 PM
*SPEC* [clefty] Professional S1 Bumper : what teh FUCK is that thread
*SPEC* [clefty] Professional S1 Bumper : hol shit

Tomos
05-04-2018, 04:53 PM
wow

evolv
05-05-2018, 05:20 PM
Couldn't this formula allow high-ranking players to skew the data by purposely staying within a zone for a long period of time, or by choosing to make a large number of attempts? Otherwise it would be nice to have a standardized score to compare the average difficulty for players on the server.

nuvigil
05-07-2018, 07:38 AM
It could be manipulated this way, but not easily if data points are only taken from PR information. To artificially sabotage the data you'd (theoretically) need a map you have not completed in order to deflate its rating. You'd probably also need multiple people adding negative skew to the dataset before risking hitting the variance limit on the distribution. But your point stands: knowing the specific criteria or formula variables exposes how to manipulate an algorithm.

buzuki
05-09-2018, 04:28 PM
haha mate my heads gone

Kiiru
05-11-2018, 10:03 AM
What?

nuvigil
05-12-2018, 03:25 AM
The idea here, for those who are confused, is about how maps (specifically surf maps) are classified into tiers. Maps are currently categorized, if I understand correctly, subjectively, based on the opinions of various experienced players when a new map arrives. This seems to work fine. It's not a very robust system, but I don't think anyone really fusses too much about the rankings. Sometimes I hear people say this t4 is really an upper t3, or that t3 should really be t4. But what does that really mean? How could you fairly rank, and then tier, every map in comparison with every other map?

I had all these thoughts about some fancy executable someone had created where you put the map in and it spits out a rating, along with every possible set of input combinations to complete a map, blah blah blah.

Instead I settled on a more passive approach. Surftimer keeps track of map attempt counts in prinfo, specifically # of attempts before first completion, in addition to pr times data. If you just took every player's first-completion attempt count and averaged it out per map, you could say that harder maps are those where players generally needed more attempts until first completion, and lower-tier maps are those needing fewer. The skill spread between players in surf is incredibly wide, but we can control for this by weighting that attempt value with the listed player rank.
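The passive approach above can be sketched as a one-function sketch. The record format is a placeholder assumption (prinfo's actual schema may differ); each record pairs a player's attempts-before-first-completion with their listed rank, and attempts are divided by rank as in the OP formula, so a top (rank 1) player's struggles count more than a rank-100 player's:

```python
from statistics import mean

def map_difficulty(records: list[tuple[int, int]]) -> float:
    """Mean of rank-weighted first-completion attempt counts.
    records: (attempts_before_first_completion, player_rank),
    where rank 1 is the best player."""
    return mean(attempts / rank for attempts, rank in records)
```

Note how the same 50 attempts contributes 50.0 from a rank-1 player but only 0.5 from a rank-100 player, which is the intended control for skill spread.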

I have no issue with the way things are, like I said, I was just interested in the problem and thought I'd share.

Ling_Ling
05-15-2018, 06:30 AM
This isnt regular virgin. This is advanced virgin.

skip tracer
05-16-2018, 10:32 AM
^ thank you for signing your post