Conv1D on two-dimensional input

Can someone explain to me what happens when a Keras Conv1D layer is applied to a two-dimensional input? For instance:

    model = Sequential()
    model.add(Conv1D(input_shape=(9000, 2), kernel_size=200, strides=1, filters=20))

By switching the input shape between (9000, 1) and (9000, 2) and then calling model.summary(), I see that the output shape stays the same, but the number of parameters changes. Does this mean that a different filter is trained for each channel, and the results are summed/averaged over the second dimension before output? Or what?
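The parameter counts the questioner is seeing can be reproduced without Keras. A minimal sketch of the arithmetic (these helper names are mine, not part of any library), assuming "valid" padding and one bias per filter, which are the Keras Conv1D defaults:

```python
def conv1d_params(kernel_size, in_channels, filters):
    # Each filter has kernel_size * in_channels weights, plus one bias.
    return kernel_size * in_channels * filters + filters

def conv1d_output_length(steps, kernel_size, strides=1):
    # "valid" padding: only windows that fit entirely inside the sequence.
    return (steps - kernel_size) // strides + 1

# Both inputs give the same output length, (9000 - 200) // 1 + 1 = 8801,
# but doubling the channels doubles the weight count:
print(conv1d_params(200, 1, 20))        # 200*1*20 + 20 = 4020
print(conv1d_params(200, 2, 20))        # 200*2*20 + 20 = 8020
print(conv1d_output_length(9000, 200))  # 8801
```

So the output shape is (8801, 20) in both cases, while the parameter count roughly doubles, which matches what model.summary() reports.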

2 answers

In the documentation, you can read that the per-sample input must be 2D: (steps, channels).

Conv1D can be thought of as a time window sliding along a sequence of vectors. The kernel always spans the full length of those vectors (the second dimension of your input), and its length along the time axis is your window size.

So it is perfectly normal that your two networks have the same output shape, while the number of parameters is higher: the kernels are twice as large because of the second input dimension.
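The summing over channels that the answer describes can be made explicit. A minimal pure-NumPy sketch of a 1D convolution (not the Keras implementation; function and argument names are mine), with stride 1 and "valid" padding:

```python
import numpy as np

def conv1d(x, kernels, biases):
    # x: (steps, channels); kernels: (kernel_size, channels, filters)
    steps, channels = x.shape
    k, _, filters = kernels.shape
    out = np.empty((steps - k + 1, filters))
    for t in range(steps - k + 1):
        window = x[t:t + k]  # (k, channels)
        # Multiply and sum over BOTH the time window and the channel axis:
        # each output value collapses all channels into one number per filter.
        out[t] = np.tensordot(window, kernels, axes=([0, 1], [0, 1])) + biases
    return out

x = np.ones((10, 2))             # 10 steps, 2 channels
kernels = np.ones((3, 2, 4))     # kernel_size=3, 2 channels, 4 filters
out = conv1d(x, kernels, np.zeros(4))
print(out.shape)  # (8, 4): output length depends only on steps and kernel_size
```

With all-ones input and kernels, every output value is 3 * 2 = 6: three time steps times two channels, summed. Adding a channel adds weights (and terms to this sum) but leaves the output shape unchanged.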

Hope this helps :-)


Here is a clear illustration:

    kernel_size = (2,)

     -------------
    | 1 1 1 1 1 |   <---- kernel dim = kernel_size x 5
    | 2 2 2 2 2 |
     -------------
      3 3 3 3 3

     --------------------------
    | 1 1 1 1 1 1 1 1 1 1 1 1 |   <---- kernel dim = kernel_size x 12,
    | 2 2 2 2 2 2 2 2 2 2 2 2 |         i.e. more params!
     --------------------------
      3 3 3 3 3 3 3 3 3 3 3 3

But the output length depends only on the sequence length and the window, so if you then apply, say, MaxPool1D, the layer shapes from here on out are the same in both cases, thus same outputs!

Source: https://habr.com/ru/post/1266071/
