Keras LSTM state

I would like to run an LSTM in Keras and get both the output and the state. Something like this in TensorFlow:

with tf.variable_scope("RNN"):
    for time_step in range(num_steps):
        if time_step > 0:
            tf.get_variable_scope().reuse_variables()
        # the cell returns both the output and the updated state at each step
        (cell_output, state) = cell(inputs[:, time_step, :], state)
        outputs.append(cell_output)

Is there a way to do this in Keras, where I can get the last state and feed it back in with new inputs when the sequence is very long? I know about the stateful=True option, but I also want access to the states while training. I know the RNN uses scan rather than a for loop, but essentially I want to save the states after a run and then use them as the initial states of the LSTM on the next run. In short: get both the output and the state.
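
For reference, this is the stateful=True setup I mean (shapes here are made up just for illustration): a stateful layer needs a fixed batch_input_shape and carries its states from one batch to the next until reset_states() is called, but it still doesn't let me read or set those states during training.

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# illustrative shapes: 32 sequences per batch, 10 timesteps, 8 features
model = Sequential()
model.add(LSTM(20, batch_input_shape=(32, 10, 8), stateful=True))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

x = np.random.random((32, 10, 8))
y = np.random.random((32, 1))
model.train_on_batch(x, y)   # states are carried over to the next batch
model.reset_states()         # clear them between independent sequences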

1 answer

The LSTM layer in Keras (like the other recurrent layers) does not return its internal states; it only returns the output (or the sequence of outputs).

Keras has no built-in way to read the states during training. However, if you don't mind hacking Keras a little, there is a workaround: subclass the layer and expose the states yourself.

First, this is the call() method in keras/layers/recurrent.py, which is where Keras actually computes the states:

def call(self, x, mask=None):
    # input shape: (nb_samples, time (padded with zeros), input_dim)
    # note that the .build() method of subclasses MUST define
    # self.input_spec with a complete input shape.
    input_shape = self.input_spec[0].shape
    if K._BACKEND == 'tensorflow':
        if not input_shape[1]:
            raise Exception('When using TensorFlow, you should define '
                            'explicitly the number of timesteps of '
                            'your sequences.\n'
                            'If your first layer is an Embedding, '
                            'make sure to pass it an "input_length" '
                            'argument. Otherwise, make sure '
                            'the first layer has '
                            'an "input_shape" or "batch_input_shape" '
                            'argument, including the time axis. '
                            'Found input shape at layer ' + self.name +
                            ': ' + str(input_shape))
    if self.stateful:
        initial_states = self.states
    else:
        initial_states = self.get_initial_states(x)
    constants = self.get_constants(x)
    preprocessed_input = self.preprocess_input(x)

    last_output, outputs, states = K.rnn(self.step, preprocessed_input,
                                         initial_states,
                                         go_backwards=self.go_backwards,
                                         mask=mask,
                                         constants=constants,
                                         unroll=self.unroll,
                                         input_length=input_shape[1])
    if self.stateful:
        self.updates = []
        for i in range(len(states)):
            self.updates.append((self.states[i], states[i]))

    if self.return_sequences:
        return outputs
    else:
        return last_output

Second, subclass LSTM in your own script, copy the same call() method, and add one line to expose the states:

import keras.backend as K
from keras.layers import Input, LSTM

class MyLSTM(LSTM):
    def call(self, x, mask=None):
        # .... blablabla, exactly as above, right before the return

        # we add this line to get access to the states
        self.extra_output = states

        if self.return_sequences:
        # .... blablabla, exactly as above, to the end

    # you should copy **exactly the same code** from keras.layers.recurrent

I = Input(shape=(...))
lstm = MyLSTM(20)
output = lstm(I)  # calling the layer invokes `call()` and creates `lstm.extra_output`
extra_output = lstm.extra_output  # refer to the target tensors

calculate_function = K.function(inputs=[I], outputs=extra_output + [output])  # compute the states and the output **simultaneously**
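
To actually use it, call the compiled function on a batch of inputs. For an LSTM, extra_output is the pair [h, c] (hidden and cell state at the last timestep), so the function returns those two arrays followed by the layer output. A minimal usage sketch (the batch shape is made up for illustration and must match whatever you passed to Input(...) above):

import numpy as np

# illustrative batch: 32 sequences, 10 timesteps, 8 features
x_batch = np.random.random((32, 10, 8)).astype('float32')

h, c, out = calculate_function([x_batch])
print(h.shape, c.shape, out.shape)

# you can now keep h and c around and use them to seed the next run,
# e.g. by writing them into a stateful layer's state variables with K.set_value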

Source: https://habr.com/ru/post/1648052/

