Keras: multiply the output layer by a scalar

I have an output layer that I want to multiply by a scalar. I can do this with a lambda layer, i.e.

sc_mult = Lambda(lambda x: x * 2)(layer)
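The constant-scalar case works because a plain Python scalar broadcasts against every element of the tensor, whatever its shape. A minimal numpy sketch of the same arithmetic (numpy follows the same elementwise broadcasting rules as the Theano backend; the shapes are the ones from the question, not anything Keras-specific):

```python
import numpy as np

# Hypothetical activations with shape (batch, filters, h, w),
# matching the sizes described in the question.
layer_out = np.ones((4, 32, 128, 128), dtype="float32")

# A plain scalar broadcasts to every element, which is why the
# constant-scalar Lambda needs no shape juggling.
scaled = layer_out * 2

print(scaled.shape)        # (4, 32, 128, 128)
print(scaled[0, 0, 0, 0])  # 2.0
```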

which works great. But if I want to use a different scalar for each example, I try to provide them as a second input, with shape (n_examples, 1)

input_scalar = Input(shape = (1L,))

so my lambda layer becomes

sc_mult = Lambda(lambda x: x * input_scalar)(layer)

But now it throws an error during training. Note: 32 is the batch size, and 128 is the spatial size of the input and output layers; the layer being multiplied by the scalar has shape (batch_size x 32 (filters on the previous layer) x 128 (spatial dim) x 128 (spatial dim)).

GpuElemwise. Input dimension mis-match. Input 5 (indices start at 0) has shape[2] == 32, but the output size on that axis is 128.

I assume that I am not feeding the correct shape through the input layer, but I cannot understand why.
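For what it's worth, a likely cause is broadcasting: elementwise broadcasting aligns trailing axes, so a (batch, 1) tensor lines its size-1 axis up against the last spatial dimension of the (batch, 32, 128, 128) tensor, and the batch axis then clashes with a spatial axis. Reshaping the per-example scalar to (batch, 1, 1, 1) makes the batch axes line up and lets the size-1 axes expand. A numpy sketch of both the failure and the fix (numpy's broadcasting rules match the backend's; the shapes are the ones from the question):

```python
import numpy as np

batch, filters, h, w = 4, 32, 128, 128  # sizes from the question
layer_out = np.ones((batch, filters, h, w), dtype="float32")
per_example = np.arange(batch, dtype="float32").reshape(batch, 1)

# Direct multiply fails: (4, 1) is aligned against the trailing
# (128, 128) axes, so the batch axis 4 clashes with 128.
try:
    layer_out * per_example
except ValueError:
    print("shapes (4,32,128,128) and (4,1) do not broadcast")

# Reshape so the batch axis lines up and the size-1 axes expand
# across the filter and spatial dimensions.
scaled = layer_out * per_example.reshape(batch, 1, 1, 1)

print(scaled.shape)        # (4, 32, 128, 128)
print(scaled[2, 0, 0, 0])  # 2.0  (example 2 scaled by 2)
```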


Source: https://habr.com/ru/post/1666766/
