According to the Keras documentation (https://keras.io/layers/convolutional/), the output tensor of Conv1D has shape (batch_size, new_steps, filters), while the input tensor has shape (batch_size, steps, input_dim). I don't understand how this can be, because it means that if you pass a 1D input of length 8000, with batch_size = 1 and steps = 1 (I have heard that steps means the number of channels in your input), then this layer will produce an output of shape (1, 1, X), where X is the number of filters in the Conv layer. But what happens to the input dimension? Since the X filters are applied across the entire input, shouldn't one of the output dimensions be 8000 (or slightly less, depending on the padding), giving something like (1, 8000, X)? I checked, and Conv2D layers do behave that way: their output shape is (samples, filters, new_rows, new_cols), where new_rows and new_cols are the dimensions of the input image, again adjusted based on padding. If Conv2D layers preserve their input dimensions, why don't Conv1D layers? Am I missing something here?
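To make the shape arithmetic concrete, here is a minimal numpy sketch (not Keras's actual code; the filter count and kernel size are illustrative) of how Conv1D maps an input shape (batch, steps, input_dim) to an output shape:

```python
import numpy as np

def conv1d_output_shape(batch, steps, input_dim, filters, kernel_size, padding="valid"):
    # 'valid' padding: the window must fit entirely inside the input,
    # so new_steps = steps - kernel_size + 1; 'same' keeps steps unchanged.
    new_steps = steps if padding == "same" else steps - kernel_size + 1
    # input_dim (the channel axis) is summed over by the convolution and
    # disappears; the filter count takes its place as the last axis.
    return (batch, new_steps, filters)

# A length-8000 single-channel signal: steps=8000, input_dim=1.
print(conv1d_output_shape(1, 8000, 1, filters=64, kernel_size=3))  # (1, 7998, 64)

# The shape from the question, steps=1 and input_dim=8000:
# only one window fits, so the output is (1, 1, 64).
print(conv1d_output_shape(1, 1, 8000, filters=64, kernel_size=1))  # (1, 1, 64)
```

So the length of the signal only survives in the output if it is placed on the steps axis, not on the input_dim axis.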
Background:
I'm trying to visualize the activation output of the 1st convolutional layer of my CNN, but most of the tools I found only seem to work for 2D convolutional layers, so I decided to write my own code. I understand reasonably well how it works; this is the code I have so far:
import keras
from keras import backend as K

# Outputs of every Activation layer, evaluated at test time (learning phase 0)
activation_output_tensors = [layer.output for layer in model.layers if type(layer) is keras.layers.Activation]
activation_comp_function = K.function([model.input, K.learning_phase()], activation_output_tensors)
activation_arrays = activation_comp_function([training_data[0,:-1], 0])
This code is based on julienr's first comment in this thread, with some changes for the current version of Keras. However, when I use it, all of the activation arrays come out with shape (1, 1, X)... I spent all of yesterday trying to understand why this is, and any help is welcome.
Answer: It turned out that the confusion was over the meaning of input_dimension. For an input of shape (X, Y), Conv1D convolves along the X axis (the "steps", of length X), treating Y as the number of input channels. As gionni pointed out, the length of the signal belongs on the "steps" axis, not on "input_dimension".
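A naive numpy implementation of a 'valid'-padded Conv1D (a sketch of the operation, not Keras's actual code) shows why the channel axis disappears from the output: each filter spans all input channels, so the channel dimension is summed away and replaced by the filter axis:

```python
import numpy as np

def naive_conv1d(x, kernels):
    # x: (batch, steps, input_dim); kernels: (kernel_size, input_dim, filters)
    batch, steps, input_dim = x.shape
    kernel_size, _, filters = kernels.shape
    new_steps = steps - kernel_size + 1          # 'valid' padding
    out = np.zeros((batch, new_steps, filters))
    for t in range(new_steps):
        window = x[:, t:t + kernel_size, :]      # (batch, kernel_size, input_dim)
        # Each filter dot-products the whole window, summing over both the
        # kernel positions and the channels:
        out[:, t, :] = np.tensordot(window, kernels, axes=([1, 2], [0, 1]))
    return out

x = np.random.randn(1, 8000, 1)                  # length-8000 signal, 1 channel
w = np.random.randn(3, 1, 64)                    # kernel_size=3, 64 filters
print(naive_conv1d(x, w).shape)                  # (1, 7998, 64)
```

With the signal on the steps axis, the activations keep their length (7998 here), which is what the visualization code above needs in order to produce something other than (1, 1, X) arrays.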