The short answer is that there is no way to speed it up if your data is already minimal and unique. If your data contains redundancy or unnecessary decorations that can be removed, strip them out before feeding it to your model; but if you have already done that and each batch is unique, there is nothing more you can do on that front.
However, there are things you can do to improve the performance of your networks:
- Your Q-learning algorithm can be improved according to this article, which basically says that you should not end the training phase while your network's accumulated error is still above a threshold (see the first sketch after this list).
- If you reuse or replay some training sets, you can add a pre-loading step that transfers the training data to the GPU once, so repeated replays do not pay the host-to-device copy cost each time (see the second sketch below).
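
Here is a minimal sketch of the first idea: keep running training epochs until the accumulated error drops below a threshold. The names `q_net`, `optimizer`, `replay_batches`, `error_threshold`, and `max_epochs` are placeholders I introduced for illustration, not anything from the referenced article, and the exact stopping criterion it proposes may differ.

```python
import torch
import torch.nn as nn

def train_until_below_threshold(q_net, optimizer, replay_batches,
                                error_threshold=0.05, max_epochs=100):
    """Keep training while the accumulated error stays above the threshold.

    Hypothetical sketch: q_net maps state batches to Q-value predictions,
    replay_batches yields (states, targets) tensor pairs.
    """
    loss_fn = nn.MSELoss(reduction="sum")
    accumulated_error = float("inf")
    for epoch in range(max_epochs):
        accumulated_error = 0.0
        for states, targets in replay_batches:
            optimizer.zero_grad()
            loss = loss_fn(q_net(states), targets)
            loss.backward()
            optimizer.step()
            accumulated_error += loss.item()
        if accumulated_error <= error_threshold:
            break  # error is low enough; end the training phase
    return accumulated_error
```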
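
And a sketch of the second idea: if the same training set is replayed many times, move it to the GPU once up front instead of copying batches over on every pass. The tensor shapes and batch size here are placeholders, assuming a PyTorch setup.

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder replay data; substitute your own states and Q-value targets.
states = torch.rand(10000, 84)
targets = torch.rand(10000, 4)

# One upfront host-to-device transfer instead of one per epoch or batch.
states_gpu = states.to(device)
targets_gpu = targets.to(device)

dataset = TensorDataset(states_gpu, targets_gpu)
# num_workers must stay 0 since the tensors already live on the GPU.
replay_batches = DataLoader(dataset, batch_size=256, shuffle=True, num_workers=0)
```

This trades GPU memory for transfer time, so it only makes sense when the replayed set fits comfortably in device memory.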