I am trying to get a basic LSTM working in TensorFlow. I get the following error:
TypeError: 'Tensor' object is not iterable.
The offending line is:
rnn_outputs, final_state = tf.nn.dynamic_rnn(cell, x, sequence_length=seqlen,
                                             initial_state=init_state)
I am using TensorFlow 1.0.1 on Windows 7. My inputs and labels have the following shapes:
x_shape = (50, 40, 18), y_shape = (50, 40)
Where:
- batch size = 50
- sequence length = 40
- input vector length at each step = 18
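For reference, a dummy batch with these shapes can be built with NumPy (the array names and zero values are placeholders, not my real data):

```python
import numpy as np

batch_size, seq_len, input_dim = 50, 40, 18  # the shapes described above
x_batch = np.zeros((batch_size, seq_len, input_dim), dtype=np.float32)  # inputs
y_batch = np.zeros((batch_size, seq_len), dtype=np.float32)             # labels

print(x_batch.shape)  # (50, 40, 18)
print(y_batch.shape)  # (50, 40)
```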
I build my graph as follows:
import tensorflow as tf

def build_graph(learn_rate, seq_len, state_size=32, batch_size=5, num_classes=1):
    seqlen = tf.constant(seq_len, shape=[batch_size], dtype=tf.int32)
    x = tf.placeholder(tf.float32, [batch_size, None, 18])
    y = tf.placeholder(tf.float32, [batch_size, None])
    keep_prob = tf.constant(1.0)

    cell = tf.contrib.rnn.LSTMCell(state_size)
    init_state = tf.get_variable('init_state', [1, state_size],
                                 initializer=tf.constant_initializer(0.0))
    init_state = tf.tile(init_state, [batch_size, 1])
    rnn_outputs, final_state = tf.nn.dynamic_rnn(cell, x, sequence_length=seqlen,
                                                 initial_state=init_state)
    rnn_outputs = tf.nn.dropout(rnn_outputs, keep_prob)

    with tf.variable_scope('prediction'):
        W = tf.get_variable('W', [state_size, num_classes])
        b = tf.get_variable('b', [num_classes], initializer=tf.constant_initializer(0.0))
        preds = tf.tanh(tf.matmul(rnn_outputs, W) + b)

    loss = tf.square(tf.subtract(y, preds))
    train_step = tf.train.AdamOptimizer(learn_rate).minimize(loss)
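To show what I expect the init_state lines to do: tiling the [1, state_size] row with multiples [batch_size, 1] should give one copy of that row per batch element. The same operation in NumPy (only to illustrate the shapes, using the function's default sizes):

```python
import numpy as np

state_size, batch_size = 32, 5  # the defaults in build_graph above
init_state = np.zeros((1, state_size), dtype=np.float32)
tiled = np.tile(init_state, (batch_size, 1))  # repeat the row batch_size times
print(tiled.shape)  # (5, 32)
```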
Can someone tell me what I am missing?