Efficient data feeding for reinforcement learning algorithms

I am currently implementing a deep double Q-learning algorithm in TensorFlow, with an experience replay buffer implemented on top of NumPy arrays. However, profiling shows that feeding data from NumPy arrays into the graph with feed_dict is very inefficient. The documentation points this out as well: https://www.tensorflow.org/performance/performance_guide .
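For context, here is a minimal sketch of the kind of setup described above; the buffer sizes, names, and the tiny network are illustrative assumptions, not taken from the original post:

```python
import numpy as np
import tensorflow as tf

# Hypothetical replay buffer backed by preallocated NumPy arrays
# (sizes and names here are illustrative assumptions).
buffer_size, state_dim, batch_size = 100000, 4, 32
states = np.zeros((buffer_size, state_dim), dtype=np.float32)
targets = np.zeros((buffer_size,), dtype=np.float32)

# The graph is fed through placeholders.
state_ph = tf.placeholder(tf.float32, [None, state_dim])
target_ph = tf.placeholder(tf.float32, [None])
q_value = tf.squeeze(tf.layers.dense(state_ph, 1), axis=1)
loss = tf.reduce_mean(tf.square(q_value - target_ph))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Each step copies a sampled minibatch from NumPy into the runtime
    # through feed_dict -- the slow path the performance guide warns about.
    idx = np.random.randint(0, buffer_size, size=batch_size)
    sess.run(train_op, feed_dict={state_ph: states[idx],
                                  target_ph: targets[idx]})
```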

Does anyone have a suggestion for how the feeding can be done more efficiently? With static datasets, feeding can be handled by input pipelines such as readers. However, the experience replay buffer changes over time, which makes that kind of feeding more difficult.

Any answers are much appreciated, thanks!

1 answer

The short answer is that there is no way to speed it up if your data is already minimal and every batch is unique. If your data contains redundancy or unnecessary parts that you can strip out, remove them before feeding the data to your model; but if you have already done that and each batch is unique, then there is nothing more (on the feeding side, at least) that you can do.

However, there are things you can do to improve the performance of your networks.

  • Your Q-learning algorithm can be improved according to this article, which, roughly speaking, says not to run the training phase until your network has accumulated an error beyond some threshold.
  • If you reuse or replay some training batches, you can use a staging/preload step to load the training data onto the GPU for fast replay (see the sketch after this list).
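For the second point, here is a minimal sketch of the GPU-preloading idea using `tf.contrib.staging.StagingArea` from TF 1.x; the batch shapes, the toy network, and the `sample_batch` helper are illustrative assumptions, not from the original answer:

```python
import numpy as np
import tensorflow as tf

batch_size, state_dim = 32, 4
state_ph = tf.placeholder(tf.float32, [batch_size, state_dim])
target_ph = tf.placeholder(tf.float32, [batch_size])

# A StagingArea keeps the next batch resident on the GPU, so the
# host-to-device copy can overlap with the previous training step.
with tf.device('/gpu:0'):
    stage = tf.contrib.staging.StagingArea(
        dtypes=[tf.float32, tf.float32],
        shapes=[[batch_size, state_dim], [batch_size]])
    stage_op = stage.put([state_ph, target_ph])
    staged_state, staged_target = stage.get()

    q_value = tf.squeeze(tf.layers.dense(staged_state, 1), axis=1)
    loss = tf.reduce_mean(tf.square(q_value - staged_target))
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

def sample_batch():
    # Stand-in for sampling from the replay buffer (illustrative only).
    return (np.random.rand(batch_size, state_dim).astype(np.float32),
            np.random.rand(batch_size).astype(np.float32))

# allow_soft_placement lets the sketch fall back to CPU if no GPU is present.
config = tf.ConfigProto(allow_soft_placement=True)
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    s, t = sample_batch()
    sess.run(stage_op, feed_dict={state_ph: s, target_ph: t})  # warm-up put
    for _ in range(100):
        s, t = sample_batch()
        # Train on the staged batch while putting the next one.
        sess.run([train_op, stage_op],
                 feed_dict={state_ph: s, target_ph: t})
```

Running the `put` for the next batch in the same `sess.run` call as the training step on the current batch lets the host-to-device copy overlap with GPU compute, instead of stalling every step on a feed_dict transfer.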
