According to the documentation for nn.LSTM, the output tensor has the shape (seq_len, batch, hidden_size * num_directions), so you can simply take the last element of the sequence like this:
import torch
import torch.nn as nn
from torch.autograd import Variable

rnn = nn.LSTM(10, 20, 2)                  # input_size=10, hidden_size=20, num_layers=2
input = Variable(torch.randn(5, 3, 10))   # (seq_len, batch, input_size)
h0 = Variable(torch.randn(2, 3, 20))      # (num_layers * num_directions, batch, hidden_size)
c0 = Variable(torch.randn(2, 3, 20))
output, (hn, cn) = rnn(input, (h0, c0))   # the second return value is the tuple (h_n, c_n)
print(output[-1])                         # last time step, shape (batch, hidden_size) = (3, 20)
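As a sanity check (a minimal sketch continuing the snippet above): for a unidirectional LSTM, the last time step of output coincides with the final hidden state of the top layer, so output[-1] and hn[-1] should hold the same values.

# For num_directions == 1, the last output equals the top layer's final hidden state.
print(torch.equal(output[-1].data, hn[-1].data))  # True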
Managing tensors and designing neural networks in PyTorch is far simpler than in Torch, so you rarely have to use containers. In fact, as pointed out in the PyTorch tutorial for former Torch users, PyTorch is built around Autograd, so you no longer need to worry about containers. However, if you want to reuse old Lua Torch code, you can take a look at the legacy package.
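To illustrate what "built around Autograd" means, here is a minimal sketch (not from the original answer): gradients are tracked through ordinary tensor operations, so you compose computations directly instead of assembling container modules.

import torch
from torch.autograd import Variable

# Gradients flow through plain tensor ops; no container module required.
x = Variable(torch.ones(2, 2), requires_grad=True)
y = (x * 3).sum()
y.backward()        # populates x.grad
print(x.grad)       # a 2x2 tensor of 3s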