There is a way to see exactly how the values of all weights and biases change over time. You can use a Keras callback to record the weight values at the end of each training epoch. Given a model such as
```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(16, input_shape=train_inp_s.shape[1:]),
    Dense(12),
    Dense(6),
    Dense(1),
])
```
pass the callback to `fit` via the `callbacks` keyword argument:
```python
gw = GetWeights()
model.fit(X, y, validation_split=0.15, epochs=10, batch_size=100, callbacks=[gw])
```
where the callback is defined as
class GetWeights(Callback):
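The class body is truncated above. A minimal sketch consistent with the shapes reported below (stacking each epoch's arrays along a third axis with `np.dstack`) might look like this; the `tensorflow.keras` import path is an assumption:

```python
import numpy as np

try:
    from tensorflow.keras.callbacks import Callback
except ImportError:
    # Minimal stand-in so the sketch also runs without TensorFlow installed.
    class Callback:
        pass


class GetWeights(Callback):
    """Record every layer's weights and biases at the end of each epoch."""

    def __init__(self):
        super().__init__()
        self.weight_dict = {}

    def on_epoch_end(self, epoch, logs=None):
        for i, layer in enumerate(self.model.layers):
            w, b = layer.get_weights()
            w_key, b_key = 'w_%d' % (i + 1), 'b_%d' % (i + 1)
            if epoch == 0:
                # First epoch: store the raw arrays.
                self.weight_dict[w_key] = w
                self.weight_dict[b_key] = b
            else:
                # Later epochs: stack along a third (depth) axis.
                self.weight_dict[w_key] = np.dstack((self.weight_dict[w_key], w))
                self.weight_dict[b_key] = np.dstack((self.weight_dict[b_key], b))
```

Because `np.dstack` promotes a 1-D bias vector of length `n` to shape `(1, n, 1)`, the recorded bias arrays end up with a leading axis of size 1, matching the shapes printed below.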
This callback builds a dictionary containing all of the layers' weights and biases, keyed by layer number, so you can see how they change over time as you train your model. Note that the shape of each weight and bias array depends on the shape of the corresponding model layer. One weight array and one bias array are saved for each layer in your model, and the third axis (depth) shows their evolution over time.
Here we trained for 10 epochs with a model whose layers have 16, 12, 6, and 1 neurons:
```python
for key in gw.weight_dict:
    print(str(key) + ' shape: %s' % str(np.shape(gw.weight_dict[key])))
```

```
w_1 shape: (5, 16, 10)
b_1 shape: (1, 16, 10)
w_2 shape: (16, 12, 10)
b_2 shape: (1, 12, 10)
w_3 shape: (12, 6, 10)
b_3 shape: (1, 6, 10)
w_4 shape: (6, 1, 10)
b_4 shape: (1, 1, 10)
```
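Since the last axis indexes epochs, you can slice it to follow any individual weight through training. A small sketch (the dictionary here is a random stand-in for `gw.weight_dict`, and the indices are arbitrary):

```python
import numpy as np

# Stand-in for gw.weight_dict: a 5x16 weight matrix tracked over 10 epochs.
weight_dict = {'w_1': np.arange(5 * 16 * 10, dtype=float).reshape(5, 16, 10)}

# Slicing the last axis gives the trajectory of weight (0, 0) across epochs.
trajectory = weight_dict['w_1'][0, 0, :]
print(trajectory.shape)  # (10,)
```

This is convenient for plotting, e.g. passing `trajectory` to `matplotlib.pyplot.plot` to visualize convergence of a single parameter.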