How to do multi-task deep learning using TensorFlow

Does anyone know how to do multi-task deep learning with TensorFlow? That is, sharing the bottom layers while keeping the top layers separate. Could you share some sample code?

1 answer

Keras, running on top of TensorFlow, can do this easily. Its functional API was designed for exactly these use cases; take a look at the functional API guide. Here is a shared-LSTM example adapted from that guide:

import numpy as np
from tensorflow.keras.layers import Input, LSTM, Dense, concatenate
from tensorflow.keras.models import Model

# two input sequences (example shapes: 140 timesteps, 256 features each)
tweet_a = Input(shape=(140, 256))
tweet_b = Input(shape=(140, 256))

# this layer takes a matrix as input
# and returns a vector of size 64
shared_lstm = LSTM(64)

# when we reuse the same layer instance multiple times,
# the weights of the layer are also being reused
# (it is effectively *the same* layer)
encoded_a = shared_lstm(tweet_a)
encoded_b = shared_lstm(tweet_b)

# we can then concatenate the two vectors:
merged_vector = concatenate([encoded_a, encoded_b], axis=-1)

# and add a logistic regression on top
predictions = Dense(1, activation='sigmoid')(merged_vector)

# we define a trainable model linking the
# tweet inputs to the predictions
model = Model(inputs=[tweet_a, tweet_b], outputs=predictions)
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# data_a, data_b and labels are your training arrays
model.fit([data_a, data_b], labels, epochs=10)

When you train a Keras model with multiple outputs, you can define a loss function for each output, and Keras will optimize the (optionally weighted) sum of all losses, which is very useful for multi-task setups.
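To make that concrete for the original question, here is a minimal multi-task sketch: shared bottom layers feeding two task-specific heads, each with its own loss. The layer sizes, task names (`task_a`, `task_b`), input dimension, and loss weights are illustrative assumptions, not anything prescribed by Keras:

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# hypothetical input: 100-dimensional feature vectors
inputs = Input(shape=(100,))

# shared bottom layers (used by both tasks)
shared = Dense(64, activation='relu')(inputs)
shared = Dense(64, activation='relu')(shared)

# task-specific top layers (not shared)
out_a = Dense(1, activation='sigmoid', name='task_a')(shared)   # binary task
out_b = Dense(10, activation='softmax', name='task_b')(shared)  # 10-class task

model = Model(inputs=inputs, outputs=[out_a, out_b])

# one loss per output; Keras minimizes their weighted sum
model.compile(optimizer='rmsprop',
              loss={'task_a': 'binary_crossentropy',
                    'task_b': 'categorical_crossentropy'},
              loss_weights={'task_a': 1.0, 'task_b': 0.5})

# dummy data just to demonstrate a training step
x = np.random.random((32, 100))
y_a = np.random.randint(0, 2, size=(32, 1))
y_b = np.eye(10)[np.random.randint(0, 10, size=32)]
model.fit(x, {'task_a': y_a, 'task_b': y_b}, epochs=1, verbose=0)
```

Passing losses as a dict keyed by output-layer name keeps the mapping explicit when a model has several heads.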


Source: https://habr.com/ru/post/1239785/

