Keras class_weight in multi-label binary classification

I failed to use class_weight for my multi-label problem. That is, each label takes a value of 0 or 1, but there are many labels per input sample.

Code (with random data for MWE purposes):

    import tensorflow as tf
    from keras.models import Sequential, Model
    from keras.layers import Input, Concatenate, LSTM, Dense
    from keras import optimizers
    from keras.utils import to_categorical
    from keras import backend as K
    import numpy as np

    # from http://www.deepideas.net/unbalanced-classes-machine-learning/
    def sensitivity(y_true, y_pred):
        true_positives = tf.reduce_sum(tf.round(K.clip(y_true * y_pred, 0, 1)))
        possible_positives = tf.reduce_sum(tf.round(K.clip(y_true, 0, 1)))
        return true_positives / (possible_positives + K.epsilon())

    # from http://www.deepideas.net/unbalanced-classes-machine-learning/
    def specificity(y_true, y_pred):
        true_negatives = tf.reduce_sum(K.round(K.clip((1 - y_true) * (1 - y_pred), 0, 1)))
        possible_negatives = tf.reduce_sum(K.round(K.clip(1 - y_true, 0, 1)))
        return true_negatives / (possible_negatives + K.epsilon())

    def to_train(a_train, y_train):
        hours_np = [np.arange(a_train.shape[1])] * a_train.shape[0]
        train_hours = to_categorical(hours_np)
        n_samples = a_train.shape[0]
        n_classes = 4
        features_in = np.zeros((n_samples, n_classes))
        supp_feat = np.random.choice(n_classes, n_samples)
        features_in[np.arange(n_samples), supp_feat] = 1

        # This model has 3 separate inputs
        seq_model_in = Input(shape=(1,), batch_shape=(1, 1, a_train.shape[2]), name='seq_model_in')
        feat_in = Input(shape=(1,), batch_shape=(1, features_in.shape[1]), name='feat_in')
        feat_dense = Dense(1)(feat_in)
        hours_in = Input(shape=(1,), batch_shape=(1, 1, train_hours.shape[2]), name='hours_in')

        # Model intermediate layers
        t_concat = Concatenate(axis=-1)([seq_model_in, hours_in])
        lstm_layer = LSTM(1, batch_input_shape=(1, 1, (a_train.shape[2] + train_hours.shape[2])),
                          return_sequences=False, stateful=True)(t_concat)
        merged_after_lstm = Concatenate(axis=-1)([lstm_layer, feat_dense])  # may need another Dense() after
        dense_merged = Dense(a_train.shape[2], activation="sigmoid")(merged_after_lstm)

        # Define inputs and output to create the model, and compile
        model = Model(inputs=[seq_model_in, feat_in, hours_in], outputs=dense_merged)
        model.compile(loss='binary_crossentropy', optimizer='adam',
                      metrics=[sensitivity, specificity])

        class_weights = {0.: 1., 1.: 118.}
        seq_length = 23

        # TRAINING (based on http://philipperemy.imtqy.com/keras-stateful-lstm/)
        for epoch in range(2):
            for i in range(a_train.shape[0]):
                y_true_1 = np.expand_dims(y_train[i, :], axis=1)
                y_true = np.swapaxes(y_true_1, 0, 1)
                #print 'y_true', y_true.shape
                for j in range(seq_length - 1):
                    input_1 = np.expand_dims(np.expand_dims(a_train[i][j], axis=1), axis=1)
                    input_1 = np.reshape(input_1, (1, 1, a_train.shape[2]))
                    input_2 = np.expand_dims(np.array(features_in[i]), axis=1)
                    input_2 = np.swapaxes(input_2, 0, 1)
                    input_3 = np.expand_dims(np.array([train_hours[i][j]]), axis=1)
                    tr_loss, tr_sens, tr_spec = model.train_on_batch(
                        [input_1, input_2, input_3], y_true,
                        class_weight=class_weights)
                model.reset_states()
        return 0

    a_train = np.random.normal(size=(50, 24, 5625))
    y_train = a_train[:, -1, :]
    a_train = a_train[:, :-1, :]
    y_train[y_train > 0.] = 1.
    y_train[y_train < 0.] = 0.
    to_train(a_train, y_train)

The error I am getting is:

    ValueError: `class_weight` must contain all classes in the data. The classes set([330]) exist in the data but not in `class_weight`.

The value inside set([...]) changes on every run. But, as I said, there are only two classes in the data: 0 and 1; there are simply many labels per sample. For example, one target (y_train) looks like this:

 print y_train[0,:] #[ 0. 0. 1. ..., 0. 1. 0.] 

How can I use class_weight for a multi-label task in Keras?

2 answers

Yes. This is a known bug in Keras (issue #8011). Basically, the Keras code assumes one-hot encoding when it determines the number of classes, rather than handling multi-label (multi-hot) targets.

From keras/engine/training.py:

    # if 2nd dimension is greater than 1, it must be one-hot encoded,
    # so let just get the max index...
    if y.shape[1] > 1:
        y_classes = y.argmax(axis=1)
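
To see why the error names an arbitrary class like set([330]), here is a minimal sketch of what that check computes on a multi-label row (the label indices are made up for illustration):

    import numpy as np

    # A hypothetical multi-label target row: 5625 binary labels, three of them active
    y = np.zeros((1, 5625))
    y[0, [330, 1017, 4200]] = 1.

    # The Keras check above takes the argmax, i.e. the index of the FIRST
    # maximal element, and treats it as a class id
    y_classes = y.argmax(axis=1)
    print(set(y_classes))  # {330} -> "The classes set([330]) exist in the data..."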

I can't think of a better way around this than setting y_true[:, 1] = 1, that is, always "reserving" position 1 in y. This will make y_classes come out as 1 (which is a valid value in binary classification).

Why does it work? The code fails when y_true[i] holds a value like [0, 0, ..., 0, 1, ...] with some leading zeros. Keras (erroneously) infers the classes from the index of the maximum element, which turns out to be some j > 1 for which y[i][j] = 1. This makes the Keras engine think there are more than 2 classes, so the provided class_weights are incomplete. Setting y_true[i][1] = 1 ensures that j <= 1 (because np.argmax picks the smallest index among the maxima), which gets past Keras's sanity check.
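
A minimal sketch of the workaround, using the same made-up label indices as above: reserving position 1 pins argmax at 0 or 1, so the check only ever sees the keys you supplied in class_weights:

    import numpy as np

    y_true = np.zeros((1, 5625))
    y_true[0, [330, 1017]] = 1.  # the actual (hypothetical) labels
    y_true[0, 1] = 1.            # workaround: always "reserve" position 1

    # argmax picks the smallest index among the maxima, so j <= 1 from now on
    y_classes = y_true.argmax(axis=1)
    print(set(y_classes))  # {1}, a key of class_weights = {0.: 1., 1.: 118.}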


Alternatively, you can create a callback that appends the index of each active label to a list. For example:

    y = [[0, 1, 0, 1, 1], [0, 1, 1, 0, 0]]

will create the list category_list = [1, 3, 4, 1, 2],

where each occurrence of a label is counted in category_list.

Then you can use scikit-learn's class_weight utility:

    weighted_list = class_weight.compute_class_weight('balanced', np.unique(category_list), category_list)

Then just convert weighted_list to a dictionary for use in Keras.
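
A minimal end-to-end sketch of this approach, assuming scikit-learn (the callback wiring is omitted; only the weight computation and the dict conversion are shown):

    import numpy as np
    from sklearn.utils import class_weight

    y = [[0, 1, 0, 1, 1], [0, 1, 1, 0, 0]]

    # One entry per active label occurrence: [1, 3, 4, 1, 2]
    category_list = [idx for row in y for idx, v in enumerate(row) if v == 1]

    # 'balanced' weights: n_samples / (n_classes * count(class))
    weighted_list = class_weight.compute_class_weight(
        'balanced', classes=np.unique(category_list), y=category_list)

    # Convert to the dict form Keras expects for class_weight
    keras_class_weights = dict(zip(np.unique(category_list), weighted_list))
    print(keras_class_weights)  # e.g. {1: 0.625, 2: 1.25, 3: 1.25, 4: 1.25}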

