How to use weighted categorical cross-entropy on FCN (U-Net) in Keras?

I built a Keras model for image segmentation (U-Net). However, in my samples, some misclassifications (areas) are not very important, while others are crucial, so I want to assign them a higher weight in the loss function. To complicate things further, I would like certain misclassifications (predicting class 1 where the truth is class 2) to carry a very high penalty, while the inverse error (class 2 instead of 1) should be penalized much less.

As I see it, I need the sum (over all pixels) of a weighted categorical cross-entropy, but the best I could find was this:

from itertools import product
from keras import backend as K

def w_categorical_crossentropy(y_true, y_pred, weights):
    # weights[c_t, c_p] = penalty for predicting class c_p when the truth is c_t
    nb_cl = len(weights)
    final_mask = K.zeros_like(y_pred[:, 0])
    # Turn the predicted class (argmax) into a one-hot mask
    y_pred_max = K.max(y_pred, axis=1)
    y_pred_max = K.reshape(y_pred_max, (K.shape(y_pred)[0], 1))
    y_pred_max_mat = K.cast(K.equal(y_pred, y_pred_max), K.floatx())
    # Accumulate each sample's weight from its (true, predicted) class pair
    for c_p, c_t in product(range(nb_cl), range(nb_cl)):
        final_mask += (weights[c_t, c_p] * y_pred_max_mat[:, c_p] * y_true[:, c_t])
    # Keras 2 argument order is (target, output)
    return K.categorical_crossentropy(y_true, y_pred) * final_mask

However, this code only works on a single prediction vector per sample, not on a full segmentation map, and I lack knowledge of Keras' internals (and the mathematical side is not much better). Does anyone know how I can adapt it, or, even better, is there a ready-made loss function that suits my use case?

I would appreciate some pointers.
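To make the intent concrete, here is a minimal NumPy sketch of the quantity I am after: a per-pixel cross-entropy scaled by a penalty matrix indexed by (true class, predicted class). The function name `w_categorical_crossentropy_np` and the layout are my own illustration, not an existing API; in a real Keras loss the `np` operations would have to be replaced with backend (`K.` or `tf.`) equivalents.

```python
import numpy as np

def w_categorical_crossentropy_np(y_true, y_pred, weights, eps=1e-7):
    """Weighted categorical cross-entropy, summed over all pixels.

    y_true : (H, W, C) one-hot ground-truth map
    y_pred : (H, W, C) softmax probability map
    weights: (C, C) penalty matrix, where weights[t, p] is the cost of
             predicting class p where the true class is t
    """
    true_cls = np.argmax(y_true, axis=-1)                  # (H, W)
    pred_cls = np.argmax(y_pred, axis=-1)                  # (H, W)
    pixel_w = weights[true_cls, pred_cls]                  # (H, W) per-pixel weight
    ce = -np.sum(y_true * np.log(y_pred + eps), axis=-1)   # (H, W) plain CE
    return np.sum(pixel_w * ce)
```

With `weights[2, 1]` set much larger than `weights[1, 2]`, a pixel predicted as class 1 where the truth is class 2 contributes far more to the loss than the inverse mistake, which is exactly the asymmetry described above.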

EDIT: My question is similar to How to make an exact categorical cross-entropy loss in Keras?, except that I would like to use **weighted** categorical cross-entropy.


Source: https://habr.com/ru/post/1677122/
