Iterating over a tensor dimension with tf.range

I am trying to define an operation for a NN I am implementing, but to do so I need to iterate over one dimension of a tensor. Below is a small example that reproduces the problem.

X = tf.placeholder(tf.float32, shape=[None, 10])
idx = [[i] for i in tf.range(X.get_shape()[0])]

This results in the following error:

ValueError: Cannot convert an unknown Dimension to a Tensor: ?

When I use the same code but with tf.shape instead,

X = tf.placeholder(tf.float32, shape=[None, 10])
idx = [[i] for i in tf.range(tf.shape(X)[0])]

I get the following error:

TypeError: 'Tensor' object is not iterable.

The way I have implemented this NN, batch_size is not defined until the training function, which comes at the end of the code, and the graph is built before that point. So batch_size is not known where this issue occurs, and it cannot be hard-coded, because the training batch_size and the test-set batch_size are different.

Is there any way to get the value of this unknown dimension (batch_size) while the graph is being built, or some other part of the TensorFlow API that handles this?

For example, here is a simplified sketch of the function I am trying to write, with the batch size fed in through its own placeholder:

X = tf.placeholder(tf.float32, shape=[None, 10])
bs = tf.placeholder(tf.int32)

def My_Function(X):
    # Do some stuff to X
    idx = [[i] for i in tf.range(bs)]
    # return some tensor

A = tf.nn.relu(My_Function(X))

This raises the same error:

TypeError: 'Tensor' object is not iterable.

Have you looked at tf.map_fn? Perhaps something like this does what you want:

x = tf.placeholder(tf.float32, shape=[None, 10])
f = tf.map_fn(lambda y: y, x) # or perhaps something more useful than identity

I am not sure exactly what you want to do with each element, but tf.map_fn applies a function along the first (batch) dimension without you ever needing to know its size.
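For instance (not from the original answer, just a hedged illustration), summing each row without ever knowing the batch size could look like this:

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 10])
# tf.map_fn iterates over dimension 0, so each `row` has shape (10,)
row_sums = tf.map_fn(lambda row: tf.reduce_sum(row), x)

with tf.Session() as sess:
    print(sess.run(row_sums, {x: np.ones((3, 10))}))  # [10. 10. 10.]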

Also note that tf.range itself is fine with a dynamic size taken from tf.shape:

In [2]: import numpy as np
   ...: import tensorflow as tf
   ...: x = tf.placeholder(tf.float32, shape=[None, 10])
   ...: sess = tf.InteractiveSession()
   ...: sess.run(tf.range(tf.shape(x)[0]), {x: np.zeros((7,10))})
Out[2]: array([0, 1, 2, 3, 4, 5, 6])
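If the original goal was just to build the [[0], [1], ..., [batch_size - 1]] index column, one option (my sketch, not part of the original answer) is to skip the Python loop entirely and expand the range into a column tensor:

import numpy as np
import tensorflow as tf

X = tf.placeholder(tf.float32, shape=[None, 10])
# shape (batch_size, 1) at run time, no Python iteration needed
idx = tf.expand_dims(tf.range(tf.shape(X)[0]), 1)

with tf.Session() as sess:
    print(sess.run(idx, {X: np.zeros((4, 10))}))  # [[0] [1] [2] [3]]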

Thanks, tf.map_fn did the trick; see also 1735003.

In my case I used tf.map_fn to apply the output projection (weights['out'] and biases['out']) to every timestep of an LSTM's outputs:

import tensorflow as tf
from tensorflow.contrib import rnn

# time-major layout: dynamic_rnn with time_major=True expects (n_timesteps, batch, features)
x = tf.placeholder("float", [n_timesteps, None, features_dimension])

weights = {'out': tf.Variable(tf.zeros([N_HIDDEN_LSTM, labels_dimension]))}
biases = {'out': tf.Variable(tf.zeros([labels_dimension]))}

def LSTM_model(x, weights, biases):
    lstm_cell = rnn.LSTMCell(N_HIDDEN_LSTM)
    # outputs is a Tensor of shape (n_timesteps, n_observations, N_HIDDEN_LSTM)
    outputs, states = tf.nn.dynamic_rnn(lstm_cell, x, dtype=tf.float32, time_major=True)
    # Linear activation applied per timestep
    def pred_fn(current_output):
        return tf.matmul(current_output, weights['out']) + biases['out']
    # Use tf.map_fn to apply pred_fn to each tensor in outputs, along
    # dimension 0 (timestep dimension)
    pred = tf.map_fn(pred_fn, outputs)

    return pred
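A side note on the design choice: because the same linear layer is applied at every timestep, an equivalent formulation (my sketch, not from the original post, reusing the same names as above) replaces tf.map_fn with a single reshape and matmul:

def LSTM_model_reshape(x, weights, biases):
    lstm_cell = rnn.LSTMCell(N_HIDDEN_LSTM)
    outputs, states = tf.nn.dynamic_rnn(lstm_cell, x, dtype=tf.float32, time_major=True)
    # flatten (n_timesteps, batch) into one axis, apply the layer once, restore the shape
    dyn_shape = tf.shape(outputs)
    flat = tf.reshape(outputs, [-1, N_HIDDEN_LSTM])
    pred = tf.matmul(flat, weights['out']) + biases['out']
    return tf.reshape(pred, [dyn_shape[0], dyn_shape[1], labels_dimension])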

Source: https://habr.com/ru/post/1673638/

