Why don't Keras Conv1D output tensors have an input dimension?

According to the Keras documentation (https://keras.io/layers/convolutional/), the shape of a Conv1D output tensor is (batch_size, new_steps, filters), while the shape of the input tensor is (batch_size, steps, input_dim). I don't understand how this can be, because it implies that if you pass a 1D input of length 8000, with batch_size = 1 and steps = 1 (I've heard that steps means the number of channels in your input), then this layer would produce an output of shape (1, 1, X), where X is the number of filters in the Conv layer. But what happens to the input dimension? Since the X filters in the layer are applied across the entire input dimension, shouldn't one of the output dimensions be 8000 (or less, depending on padding), i.e. something like (1, 1, 8000, X)? I checked, and Conv2D layers do behave this way: their output_shape is (samples, filters, new_rows, new_cols), where new_rows and new_cols are the dimensions of the input image, again adjusted based on padding. If Conv2D layers preserve their input dimensions, why don't Conv1D layers? Am I missing something here?
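For reference, a quick shape check (sketched here with tf.keras rather than the standalone keras used below; the layer parameters are mine) shows what actually happens with a length-8000 input: steps is the sequence length, not the number of channels, so the 8000 does survive into the output:

```python
import numpy as np
import tensorflow as tf

# a 1-D signal of length 8000 with a single channel:
# shape is (batch_size, steps, input_dim) = (1, 8000, 1)
x = np.zeros((1, 8000, 1), dtype="float32")

conv = tf.keras.layers.Conv1D(filters=16, kernel_size=3)
y = conv(x)

# with the default 'valid' padding, new_steps = 8000 - 3 + 1 = 7998
print(y.shape)  # -> (1, 7998, 16)
```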

Background info:

I'm trying to visualize the activations of the 1st convolutional layer of my CNN, but most of the tools I've found seem to only work for 2D convolutional layers, so I decided to write my own code for it. I have a decent grasp of how it works; here is the code I have so far:

# all the model activation layer output tensors
activation_output_tensors = [layer.output for layer in model.layers if type(layer) is keras.layers.Activation]

# make a function that computes activation layer outputs
activation_comp_function = K.function([model.input, K.learning_phase()], activation_output_tensors)

# 0 means learning phase = False (i.e. the model isn't learning right now)
activation_arrays = activation_comp_function([training_data[0,:-1], 0])

This code is based on julienr's first comment in this thread, with some changes for the current version of Keras. Sure enough, when I use it, all the activation arrays come out with shape (1, 1, X)... I spent all of yesterday trying to figure out why this is, to no avail. Any help is appreciated.

Edit: it turns out I had the meanings of steps and input_dimension mixed up. For an input of shape (X, Y), Conv1D treats X as the number of "steps" (the sequence length) and Y as the input_dimension (the number of channels). Thanks to gionni for pointing out that the "input_dimension" is "consumed" by the kernel.
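One way to see the "consumed" dimension directly is to inspect the kernel weights themselves (again a sketch with tf.keras; the sizes are mine): Conv1D stores its kernel with shape (kernel_size, input_dim, filters), so input_dim lives inside the kernel rather than in the output:

```python
import numpy as np
import tensorflow as tf

steps, input_dim, filters = 4, 5, 3
x = np.zeros((1, steps, input_dim), dtype="float32")

conv = tf.keras.layers.Conv1D(filters=filters, kernel_size=2)
_ = conv(x)  # call once so the layer builds its weights

# the kernel spans the whole input_dim axis: (kernel_size, input_dim, filters)
print(conv.kernel.shape)  # -> (2, 5, 3)
```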


It would behave the way you expect from a 2D convolution if the kernel had shape (kernel_size, 1); in fact, a Conv1D kernel has shape (kernel_size, input_dim).

That is why, in a 1D convolution, even with kernel_size = 1 the input_dim disappears from the output: at each step the kernel computes a dot product across all input_dim channels, producing a single number per filter.

In other words, the input dimension is "consumed" by the kernel (the kernel does not slide along it). If you think of input_dim as the channels of a 2D convolution, the same reasoning applies there too (the kernel spans all channels, which is why channels don't appear in the output shape either).

In fact, you can replicate a 1D convolution with a 2D one by setting kernel_size=(1D_kernel_size, input_dim). Here is an example:

from keras.layers import Conv1D, Conv2D
import keras.backend as K
import numpy as np

# create an input with 4 steps and 5 channels/input_dim
channels = 5
steps = 4
filters = 3
val = np.array([list(range(i * channels, (i + 1) * channels)) for i in range(1, steps + 1)])
val = np.expand_dims(val, axis=0)
x = K.variable(value=val)

# 1D convolution. Initialize the kernels to ones so that it is easier to compute the result by hand

conv1d = Conv1D(filters=filters, kernel_size=1, kernel_initializer='ones')(x)

# 2D convolution that replicates the 1D one

# need to add a dimension to your input since conv2d expects 4D inputs. I add it at axis 3 since my keras is set up with `channels_last`
val1 = np.expand_dims(val, axis=3)
x1 = K.variable(value=val1)

conv2d = Conv2D(filters=filters, kernel_size=(1, channels), kernel_initializer='ones')(x1)

# evaluate and print the outputs

print(K.eval(conv1d))
print(K.eval(conv2d))

If you compare the two printed outputs, you will see that they are identical, which confirms that the two operations are equivalent and that the input dimension is consumed by the kernel.
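For readers on a newer tf.keras (a sketch under that assumption, mirroring the same numbers as the standalone-keras example above), the equivalence can be checked eagerly without `K.variable`/`K.eval`:

```python
import numpy as np
import tensorflow as tf

channels, steps, filters = 5, 4, 3
# (1, 4, 5): batch of one sequence with 4 steps and 5 channels
val = np.array([[list(range(i * channels, (i + 1) * channels))
                 for i in range(1, steps + 1)]], dtype="float32")

conv1d = tf.keras.layers.Conv1D(filters, kernel_size=1,
                                kernel_initializer="ones")
conv2d = tf.keras.layers.Conv2D(filters, kernel_size=(1, channels),
                                kernel_initializer="ones")

out1 = conv1d(val).numpy()                   # shape (1, 4, 3)
out2 = conv2d(val[..., np.newaxis]).numpy()  # shape (1, 4, 1, 3)

# the 2D convolution collapses the channel axis to size 1; dropping it
# reproduces the 1D result exactly
print(np.allclose(out1, out2.squeeze(axis=2)))  # -> True
```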


Source: https://habr.com/ru/post/1681381/
