I am somewhat new to machine learning in general, and I wanted to do a simple experiment to get to know neural network auto-encoders better: create an extremely simple auto-encoder that learns the identity function.
I use Keras to make life easier, so I did this first to make sure it works:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Initialize the single layer to the identity: weight matrix = I, biases = 0
weights = [np.diag(np.ones(84)), np.zeros(84)]
model = Sequential([Dense(84, input_dim=84, weights=weights)])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(X, X, epochs=10, batch_size=8, validation_split=0.3)
As expected, the loss is zero, both on the training data and in validation:
Epoch 1/10
97535/97535 [==============================] - loss: 0.0000e+00 - val_loss: 0.0000e+00
Epoch 2/10
97535/97535 [==============================] - loss: 0.0000e+00 - val_loss: 0.0000e+00
Then I tried to do the same, but without initializing the weights to the identity, expecting that after a while of training it would learn the identity function. It didn't. I let it run for 200 epochs several times in different configurations, playing with different optimizers and loss functions and adding L1 and L2 activity regularizers. The results vary, but the best I get is still really bad: nothing resembles the original data, it's merely in the same numeric range.
The data is simply numbers oscillating around 1.1. I don't know if an activation layer makes sense for this problem. Should I be using one?
If this "neural network" of one layer can't learn something as simple as the identity function, how can I expect it to learn anything more complicated? What am I doing wrong?
EDIT: To give some more context, here's a way to generate a dataset very similar to the one I used:
X = np.random.normal(1.1090579, 0.0012380764, (139336, 84))
I suspect the variations among the values might be too small. The loss ends up at decent values (around 1e-6), but that's not enough precision for the result to have a shape similar to the original data. Should I try to scale/normalize it somehow? Thanks for any advice!
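To put that 1e-6 in perspective: with this distribution, a model that simply outputs the overall mean for every value already achieves an MSE equal to the data variance, which is the same order of magnitude. A quick numpy check (smaller sample for speed):

```python
import numpy as np

sigma = 0.0012380764
X = np.random.normal(1.1090579, sigma, (10000, 84))

# A model that outputs the global mean for every entry already
# achieves an MSE equal to the data variance:
baseline_mse = np.mean((X - X.mean()) ** 2)
print(baseline_mse)  # ~ sigma**2, i.e. about 1.5e-6
```

So a trained loss of ~1e-6 is barely better than predicting a constant.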
UPDATE
In the end, as suggested, the problem was that the variations among the 84 values of each sample were too small, so the resulting prediction was actually quite good in absolute terms (i.e. for the loss function), but its variations were far off compared to the original data. I solved it by normalizing the 84 values of each sample around the sample's mean and dividing by its standard deviation, then using the corresponding mean and std at the other end to denormalize the predictions. This could probably be done in a few different ways, but I did it by including the normalization/denormalization in the model itself, using Lambda layers that operate on the tensors. That way all the data processing is incorporated into the model, which made it nicer to work with. Let me know if you'd like to see the actual code.
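For reference, the normalization/denormalization itself is just the following (a plain-numpy sketch; the same element-wise operations can be wrapped into the model so they act on tensors):

```python
import numpy as np

X = np.random.normal(1.1090579, 0.0012380764, (1000, 84))

# Normalize each sample (row) around its own mean and std...
mean = X.mean(axis=1, keepdims=True)
std = X.std(axis=1, keepdims=True)
X_norm = (X - mean) / std          # this is what the auto-encoder trains on
# ...the model would run on X_norm here...
X_denorm = X_norm * std + mean     # and this undoes it on the predictions

print(np.allclose(X_denorm, X))  # True: the round trip recovers the data
```

After normalization the values the network sees have unit scale, so the same ~1e-6 loss precision is now far below the size of the variations it has to reproduce.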