Keras implementation of a custom loss function requiring an internal layer's output as labels

In Keras, I want to set up a loss function that not only takes (y_true, y_pred) as input, but also needs to use the output of an internal network layer as the label for an output layer. The figure shows the network layout.

Here, the internal output is xn, which is a 1D signal. In the upper right corner is the output xn', which is the prediction of xn. In other words, xn is the label for xn'.

Meanwhile, [Ax, Ay] is the traditional y_true, and [Ax', Ay'] is y_pred.

I want to combine these two loss components into one and train the network jointly.

Any ideas or thoughts are greatly appreciated!

+7
3 answers

I figured out a solution. In case someone is looking for the same thing, I'm posting it here (based on the network described in this post):

The idea is to define a custom loss function and use it as the network's output. (Notation: A is the true value of variable A, and A' is its predicted value.)

    from keras.layers import Input, Dense, Lambda
    from keras.models import Model, Sequential
    from keras import backend as K

    def customized_loss(args):
        # A is from the training data; S is the internal state
        A, A_pred, S, S_pred = args
        # customize your own loss components
        loss1 = K.mean(K.square(A - A_pred), axis=-1)
        loss2 = K.mean(K.square(S - S_pred), axis=-1)
        # adjust the weight between the loss components
        return 0.5 * loss1 + 0.5 * loss2

    def model():
        # define other inputs
        A = Input(...)  # define input A
        # construct your model
        cnn_model = Sequential()
        ...
        # get the true internal state
        S = cnn_model(prev_layer_output0)
        # get the predicted internal state
        S_pred = Dense(...)(prev_layer_output1)
        # get the predicted A output
        A_pred = Dense(...)(prev_layer_output2)
        # wrap the customized loss in a Lambda layer and use it as the model output
        loss_out = Lambda(customized_loss, output_shape=(1,),
                          name='joint_loss')([A, A_pred, S, S_pred])
        model = Model(input=[...], output=[loss_out])
        return model

    def train():
        m = model()
        opt = 'adam'
        # the model output is already the loss, so compile with an identity loss
        m.compile(loss={'joint_loss': lambda y_true, y_pred: y_pred}, optimizer=opt)
        # train the model
        ...
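One detail worth spelling out (not in the original answer): because the model's only output is the loss itself, the identity loss passed to compile() never uses y_true, so you feed a dummy array as the labels. A minimal sketch, assuming the model takes the main input X plus the true labels A_labels as inputs; these names and num_samples are placeholders:

    import numpy as np

    m = model()
    m.compile(loss={'joint_loss': lambda y_true, y_pred: y_pred}, optimizer='adam')
    # y_true is ignored by the identity loss, so a dummy array of the right shape suffices
    dummy_y = np.zeros((num_samples, 1))
    m.fit([X, A_labels], dummy_y, batch_size=32)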
+10

First of all, you should use the Functional API. Then define both the regular network output and the output of the internal layer as model outputs, combine them into a single output (by concatenation), and write a custom loss function that splits the combined output back into two parts and performs the loss calculations itself.

Something like this:

    def customLoss(y_true, y_pred):
        # split y_true / y_pred and compute the loss here
        ...

    internalLayer = Convolution2D(...)(inputs)  # or other layers
    tmpOut = Dense(...)(internalLayer)          # the regular network output
    mergedOut = merge([tmpOut, internalLayer], mode="concat", concat_axis=-1)
    fullModel = Model(input=inputs, output=mergedOut)
    fullModel.compile(loss=customLoss, optimizer="whatever")
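A sketch of what the loss body could look like, assuming mean-squared-error terms and that the external output occupies the first n_out columns of the concatenation (both assumptions are mine, not from the answer):

    from keras import backend as K

    n_out = 2  # width of the external output; placeholder value

    def customLoss(y_true, y_pred):
        # split the concatenated tensors back into external and internal parts
        out_true, int_true = y_true[:, :n_out], y_true[:, n_out:]
        out_pred, int_pred = y_pred[:, :n_out], y_pred[:, n_out:]
        loss1 = K.mean(K.square(out_true - out_pred), axis=-1)
        loss2 = K.mean(K.square(int_true - int_pred), axis=-1)
        # reweight the two components as needed
        return 0.5 * loss1 + 0.5 * loss2

Note that with this layout, the labels you feed to fit() must be the same concatenation of [external labels, internal labels].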
0

I have reservations about this implementation. The loss calculated on the merged layer propagates back through both merged branches. Generally, you would want it to propagate through only one of them.
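If you do want the joint loss to update only one branch, one option (my addition, not from the answers above) is to cut the gradient through the other branch with K.stop_gradient. For example, in the first answer's setup:

    from keras.layers import Lambda
    from keras import backend as K

    # block gradients through the true-internal-state branch so the joint
    # loss only trains the branch that predicts S
    S_blocked = Lambda(lambda t: K.stop_gradient(t))(S)
    loss_out = Lambda(customized_loss, output_shape=(1,),
                      name='joint_loss')([A, A_pred, S_blocked, S_pred])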

0

Source: https://habr.com/ru/post/1014123/
