How to train a network to learn a function using MatConvNet?

I am trying to understand how to use MatConvNet to learn a function from input/output pairs provided for training.

I want to approximate a function that maps a 486-dimensional input vector to a single output value. However, I can't get the network to train correctly, and I need help finding my mistake. The details of what I did are below; please let me know if any other information is needed.

Here is how my training data is laid out, as shown at the MATLAB command prompt (I have 1 million training samples):

>> imdb.images
ans = 
data: [4-D single]
label: [1x1000000 single]
set: [1x1000000 double]

>> size(imdb.images.data)
ans =
1 1 486 1000000
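
For completeness, this is roughly how I build the imdb structure (simplified sketch; X is my 486-by-N single matrix of inputs, Y the 1-by-N targets, and the 10% validation split is just an arbitrary choice):

N = size(X, 2) ;
imdb.images.data  = reshape(single(X), 1, 1, 486, N) ;   % 1 x 1 x 486 x N
imdb.images.label = single(Y) ;                          % 1 x N target values
imdb.images.set   = ones(1, N) ;                         % 1 = training
imdb.images.set(randperm(N, round(0.1*N))) = 2 ;         % 2 = validation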

I use a multilayer fully connected neural network with an input layer of 486 neurons, one hidden layer with 100 neurons, and an output layer with 1 neuron. Since each sample is stored as 1 x 1 x 486, I implement the fully connected layers as 1 x 1 convolutions.
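
A quick shape check with vl_nnconv on random data (not my actual weights) confirms that a 1 x 1 x 486 x 100 filter bank maps a 1 x 1 x 486 input to a 1 x 1 x 100 output:

x = randn(1, 1, 486, 5, 'single') ;            % 5 random samples
w = randn(1, 1, 486, 100, 'single') / 100 ;    % 486 -> 100 "fully connected" filters
b = zeros(1, 100, 'single') ;
y = vl_nnconv(x, w, b, 'stride', 1, 'pad', 0) ;
size(y)                                        % ans = 1 1 100 5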

Here are my training options and the network definition:

trainOpts.batchSize = 10000 ;
trainOpts.numEpochs = 100 ;
trainOpts.continue = false ;
trainOpts.gpus = [1];
trainOpts.learningRate = 0.001 ;
trainOpts.expDir = 'xyz' ;

f = 1/100; 
net.layers = {};
net.layers{end+1} = struct('type','conv', 'weights', {{f*randn(1,1,486,100,'single'), zeros(1,100,'single')}}, 'stride',1, 'pad',0);
net.layers{end+1} = struct('type','sigmoid'); 
net.layers{end+1} = struct('type','conv', 'weights', {{f*randn(1,1,100,1,'single'), zeros(1,1,'single')}}, 'stride',1, 'pad',0);
net.layers{end+1} = struct('type','sigmoid'); 
net.layers{end+1} = struct('type','nnL2');

"vl_nnL2" L2. vl_simplenn, https://github.com/vlfeat/matconvnet/issues/15. cnn_train :

I also added the following squared-error function to cnn_train, so that it reports the regression error instead of the default classification error:

function err = error_sqerror(opts, labels, res)
% Squared error between the network predictions and the target values.
predictions = gather(res(end-1).x) ;
if numel(labels) == size(predictions, 4)
  labels = reshape(labels, 1, 1, 1, []) ;
end
sqerr = (labels - predictions).^2 ;   % per-sample squared error
err = sum(squeeze(sqerr)) ;
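
To hook everything up, I point the training options at this error function and launch cnn_train with a simple getBatch. A sketch of what I mean (this assumes a version of cnn_train that exposes opts.errorFunction and opts.errorLabels; otherwise the error function has to be wired in by editing cnn_train directly):

trainOpts.errorFunction = @error_sqerror ;   % report squared error during training
trainOpts.errorLabels = {'sqerr'} ;

[net, info] = cnn_train(net, imdb, @getBatch, trainOpts) ;

function [im, labels] = getBatch(imdb, batch)
% Return a batch of 1 x 1 x 486 x batchSize inputs and their target values.
im = imdb.images.data(:,:,:,batch) ;
labels = imdb.images.label(1,batch) ;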

When I run training, this is the plot produced by cnn_train:

[training plot image]

What am I doing wrong, and how can I get the network to learn this function?


Source: https://habr.com/ru/post/1629454/

