I am wondering whether there are significant drawbacks (for example, regarding computational efficiency or memory) to creating a TensorFlow placeholder for variable-sized inputs, compared to one with a fixed size.
Say I'm doing mini-batch training and initialize the graph with a placeholder, assuming a fixed batch_size upfront:
tf.placeholder(..., shape=[batch_size, ...])
Alternatively, I can initialize the placeholder so that it accepts variable-sized inputs:
tf.placeholder(..., shape=[None, ...])
I am not that familiar with the low-level tensor implementation under the hood, but wouldn't the latter have to check sizes, allocate memory, and create new arrays at each iteration, to account for the case where my mini-batch size changes during training? So, depending on the implementation, wouldn't that be computationally wasteful if I actually work with a fixed batch size?
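To make the second case concrete, here is a minimal sketch (assuming the graph-mode API; in TF 2.x the same calls live under tf.compat.v1, while in TF 1.x this would just be import tensorflow as tf) where the shape=[None, ...] placeholder is fed batches of two different sizes across run() calls:

import numpy as np
import tensorflow.compat.v1 as tf  # assumption: TF 2.x with the v1 graph API
tf.disable_v2_behavior()

# Variable batch dimension: the first dim stays unknown until feed time.
x = tf.placeholder(tf.float32, shape=[None, 10])
y = tf.reduce_sum(x, axis=1)

with tf.Session() as sess:
    # Both feeds satisfy shape=[None, 10] despite different batch sizes.
    print(sess.run(y, feed_dict={x: np.ones((32, 10), np.float32)}).shape)  # (32,)
    print(sess.run(y, feed_dict={x: np.ones((64, 10), np.float32)}).shape)  # (64,)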