Preventing overfitting in CNN convolutional layers

I am using TensorFlow to train a convolutional neural network (CNN) for a sign language application. The CNN has to classify 27 different labels, so unsurprisingly the main problem is overfitting. I have taken several steps to address this:

  • I have collected a large amount of high-quality training data (over 5000 samples per label).
  • I built a fairly sophisticated pre-processing stage to maximize invariance to things like lighting conditions.
  • I use dropout on the fully connected layers.
  • I apply L2 regularization to the fully connected parameters (a sketch of the dropout and L2 setup is shown after this list).
  • I did extensive hyperparameter optimization (as far as possible given hardware and time constraints) to find the simplest model that can achieve close to 0% loss on the training data.
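
For concreteness, here is a minimal sketch of the dropout-plus-L2 setup on the fully connected layers, written against the Keras API; the layer sizes, input shape, and regularization strength are illustrative placeholders rather than my actual model:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Illustrative model: regularization applied only to the fully connected
# part, as described in the list above. All sizes and rates are placeholders.
model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),  # L2 on FC weights
    layers.Dropout(0.5),                 # dropout on the FC layer only
    layers.Dense(27, activation="softmax"),  # 27 sign-language labels
])
```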

Unfortunately, even after all these steps, I can't get much better than about 3% test error. (That isn't terrible, but for the application to be viable I will need to improve it substantially.)

I suspect the source of the overfitting is the convolutional layers, since I am not taking any explicit steps to regularize them (besides keeping them as few as possible). But based on the examples provided with TensorFlow, it is not clear that regularization or dropout is typically applied to convolutional layers.
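
If one did want to try the same tools on the convolutional layers, a minimal sketch might look like the following; the rates and penalty strength are guesses, not tuned values. SpatialDropout2D drops whole feature maps, which is often suggested as a better fit for convolutional activations than element-wise dropout:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Sketch: the same regularizers applied to a convolutional block.
# kernel_regularizer penalizes the convolution weights; SpatialDropout2D
# zeroes entire feature maps instead of individual activations.
conv_block = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-4)),
    layers.SpatialDropout2D(0.2),  # typically a lower rate than dense dropout
    layers.MaxPooling2D(),
])
```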

The only approach I have found online that explicitly deals with preventing overfitting in convolutional layers is a fairly new technique called Stochastic Pooling. Unfortunately, there does not seem to be a TensorFlow implementation of it, at least not yet.
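
That said, stochastic pooling looks expressible with stock TensorFlow ops, without a custom operator. Below is an untested sketch following Zeiler & Fergus (2013): during training, one activation per pooling window is sampled with probability proportional to its magnitude, and at test time the probability-weighted average is used instead. It assumes non-negative (post-ReLU) inputs, a statically known channel count, and stride equal to the window size:

```python
import tensorflow as tf

def stochastic_pool(x, pool_size=2, training=True):
    """Stochastic pooling over non-negative activations.

    x: float tensor of shape [batch, height, width, channels], e.g. post-ReLU.
    """
    b = tf.shape(x)[0]
    c = x.shape[-1]  # channel count must be statically known
    k = pool_size
    # Gather each k x k pooling window into a patch: [b, h', w', k*k*c].
    patches = tf.image.extract_patches(
        x, sizes=[1, k, k, 1], strides=[1, k, k, 1],
        rates=[1, 1, 1, 1], padding="VALID")
    h, w = tf.shape(patches)[1], tf.shape(patches)[2]
    # Put the k*k window values on their own axis, per channel.
    patches = tf.reshape(patches, [b, h, w, k * k, c])
    # Within-window probabilities proportional to activation magnitude.
    probs = patches / (tf.reduce_sum(patches, axis=3, keepdims=True) + 1e-9)
    if training:
        # Sample one activation per window according to probs.
        logits = tf.reshape(
            tf.transpose(tf.math.log(probs + 1e-9), [0, 1, 2, 4, 3]),
            [-1, k * k])
        idx = tf.random.categorical(logits, num_samples=1)[:, 0]
        vals = tf.reshape(
            tf.transpose(patches, [0, 1, 2, 4, 3]), [-1, k * k])
        picked = tf.gather(vals, idx, batch_dims=1)
        return tf.reshape(picked, [b, h, w, c])
    # At test time: probability-weighted average instead of sampling.
    return tf.reduce_sum(probs * patches, axis=3)
```

Usage would be something like `pooled = stochastic_pool(tf.nn.relu(features), pool_size=2, training=True)`, in place of a max-pooling call.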

In short, is there a recommended approach to preventing overfitting in convolutional layers that can be achieved in TensorFlow? Or will it be necessary to write a custom pooling operator to support the Stochastic Pooling approach?

Thanks for any guidance!

1 answer

How can I deal with overfitting?


The CNN has to classify 27 different labels, so unsurprisingly the main problem is overfitting.

I do not see how these are related. You can have hundreds of labels without any overfitting problems.

