I am solving a classification problem. I am training an unsupervised neural network to produce embeddings for a set of objects (using the skip-gram architecture).
The way I evaluate is: for each validation point, I find its k nearest neighbors among the training points, take a weighted sum of the neighbors' labels (with weights based on distance), and use that as the label estimate for the validation point.
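For concreteness, here is a minimal sketch of that evaluation, assuming inverse-distance weights and binary labels (the function names, the weighting scheme, and the rank-sum AUC computation are my own illustrative choices, not necessarily what was used):

```python
import numpy as np

def knn_weighted_scores(train_X, train_y, val_X, k):
    """For each validation point, return the inverse-distance-weighted
    average of the labels of its k nearest training neighbors."""
    scores = []
    for x in val_X:
        d = np.linalg.norm(train_X - x, axis=1)   # distances to all training points
        idx = np.argsort(d)[:k]                   # indices of the k nearest
        w = 1.0 / (d[idx] + 1e-8)                 # inverse-distance weights
        scores.append(np.dot(w, train_y[idx]) / w.sum())
    return np.array(scores)

def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) statistic; ties ignored."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # rank 1 = lowest score
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Tiny example: two well-separated clusters
train_X = np.array([[0.0, 0.0], [0.0, 0.1], [1.0, 1.0], [1.0, 0.9]])
train_y = np.array([0, 1, 1, 1])[:2].repeat(2)    # labels [0, 0, 1, 1]
val_X = np.array([[0.0, 0.05], [1.0, 0.95]])
val_y = np.array([0, 1])

scores = knn_weighted_scores(train_X, train_y, val_X, k=2)
print(auc(val_y, scores))
```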
Observation: as I increase the number of training epochs (model 1 - 600 epochs, model 2 - 1400 epochs, model 3 - 2000 epochs), my AUC improves at lower values of k, but saturates at roughly the same value for larger k.
What could be a possible explanation for this behavior?
