Keras: Weighted Binary Cross-Entropy

I tried implementing weighted binary cross-entropy with Keras, but I'm not sure the code is correct. The training results are confusing: after several epochs I get an accuracy of only ~0.15, which seems far too low (worse than a random guess).

In my data, about 11% of the labels are ones and 89% are zeros, so I set the weights to w_zero = 0.89 and w_one = 0.11.

My code is:

from keras import backend as K

def create_weighted_binary_crossentropy(zero_weight, one_weight):

    def weighted_binary_crossentropy(y_true, y_pred):

        # Original binary crossentropy (see losses.py):
        # K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)

        # Calculate the binary crossentropy
        b_ce = K.binary_crossentropy(y_true, y_pred)

        # Apply the weights
        weight_vector = y_true * one_weight + (1. - y_true) * zero_weight
        weighted_b_ce = weight_vector * b_ce

        # Return the mean error
        return K.mean(weighted_b_ce)

    return weighted_binary_crossentropy

Can someone see what is wrong?

Thanks!

2 answers

Usually the minority class should get the higher weight. It would be better to use one_weight=0.89, zero_weight=0.11 (by the way, you can also use class_weight={0: 0.11, 1: 0.89}, as suggested in the comment).

Under class imbalance, your model sees many more zeros than ones, so it learns to predict mostly zeros, because that alone already minimizes the training loss. That is also why you are seeing an accuracy close to the proportion 0.11: if you average the model's predictions, the result will be very close to zero.

The purpose of class weights is to change the loss function so that it can no longer be minimized by this "easy solution" (i.e., always predicting zeros), which is why the ones should get the higher weight.

Note that the best weights are not necessarily 0.89 and 0.11. Sometimes you may have to try something else, such as logarithms or square roots of the class proportions (or any weights satisfying one_weight > zero_weight), to make it work.
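To see concretely why the weight assignment matters, here is a small NumPy sketch (the toy data and function name are my own, not from the question) that evaluates the weighted binary cross-entropy for a lazy model that always predicts a low probability of class 1, once with the question's weights and once with the swapped weights:

```python
import numpy as np

# Hypothetical toy batch: roughly 11% ones, 89% zeros, and a "lazy"
# model that always predicts a low probability of class 1.
y_true = np.array([1., 0., 0., 0., 0., 0., 0., 0., 0.])
y_pred = np.full_like(y_true, 0.1)

def weighted_bce(y_true, y_pred, zero_weight, one_weight):
    # Element-wise binary cross-entropy
    bce = -(y_true * np.log(y_pred) + (1. - y_true) * np.log(1. - y_pred))
    # Scale each sample's loss by its class weight
    weights = y_true * one_weight + (1. - y_true) * zero_weight
    return np.mean(weights * bce)

# Weights as in the question (minority class down-weighted)
loss_question = weighted_bce(y_true, y_pred, zero_weight=0.89, one_weight=0.11)
# Weights as suggested above (minority class up-weighted)
loss_answer = weighted_bce(y_true, y_pred, zero_weight=0.11, one_weight=0.89)

print(loss_question, loss_answer)
```

With the swapped (correct) weights, the always-predict-zero strategy produces a noticeably larger loss, so the optimizer is actually pushed away from it.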


You can also pass class_weight when calling model.fit. In the dictionary {0: 0.11, 1: 0.89}, the key 0 is the class index, i.e., the label 0, and the value is its weight. From the Keras documentation ( https://keras.io/models/sequential/ ): class_weight: Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to "pay more attention" to samples from an under-represented class.
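As a sketch of what class_weight does under the hood: during training, each sample's loss contribution is scaled by the weight assigned to its label. The helper below is hypothetical (not a Keras API), written only to illustrate the mapping:

```python
# Hypothetical helper: turn a class_weight dict into per-sample
# loss weights, the way Keras applies it during training.
def sample_weights_from_class_weight(labels, class_weight):
    # Each sample's loss gets multiplied by the weight of its class index.
    return [class_weight[label] for label in labels]

labels = [0, 0, 1, 0, 1]
weights = sample_weights_from_class_weight(labels, {0: 0.11, 1: 0.89})
print(weights)  # [0.11, 0.11, 0.89, 0.11, 0.89]
```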


Source: https://habr.com/ru/post/1684931/

