Adding a variable into the dense layer of a Keras / TensorFlow CNN

I was wondering whether it is possible to add a variable to the dense layer of a convolutional neural network, i.e. alongside the connections coming from the previous convolutional layers there would be an additional set of features that could be used for discriminative purposes. If this is possible, can someone point me to an example or documentation explaining how to do it?

I'm hoping to use Keras, but I'm happy to use TensorFlow if Keras turns out to be too restrictive.

EDIT: The way I imagine this working is that I supply the neural network with a list containing the images and their associated feature sets (and, during training, the corresponding classifications).

EDIT2: The architecture I need looks something like this:

              ___________      _________      _________      _________     ________    ______
              | Conv    |     | Max    |     | Conv    |     | Max    |    |       |   |     |
    Image --> | Layer 1 | --> | Pool 1 | --> | Layer 2 | --> | Pool 2 | -->|       |   |     |
              |_________|     |________|     |_________|     |________|    | Dense |   | Out |
                                                                           | Layer |-->|_____|
   Other      ------------------------------------------------------------>|       |
   Data                                                                    |       |
                                                                           |_______|

Indeed, as @Marcin said, you can use a merge layer.

I advise you to use the functional API for this; if you're not familiar with it, have a look at the Keras documentation.

Here is your sketched model written with the Keras functional API:

from keras.layers import Input, Dense, Flatten, Conv2D, MaxPooling2D, concatenate
from keras.models import Model

# Example hyperparameters, just to make the snippet runnable
nb_filter1, nb_row1, nb_col1 = 32, 3, 3
nb_filter2, nb_row2, nb_col2 = 64, 3, 3
pool_1, pool_2 = 2, 2

# This is your image input definition. You have to specify a shape.
image_input = Input(shape=(32, 32, 3))
# Some more data input with 10 features (e.g.)
other_data_input = Input(shape=(10,))

# First convolution, with example parameters
conv1 = Conv2D(nb_filter1, (nb_row1, nb_col1), padding="same", activation="tanh")(image_input)
# MaxPool it
conv1 = MaxPooling2D(pool_size=(pool_1, pool_2))(conv1)
# Second convolution
conv2 = Conv2D(nb_filter2, (nb_row2, nb_col2), padding="same", activation="tanh")(conv1)
# MaxPool it
conv2 = MaxPooling2D(pool_size=(pool_1, pool_2))(conv2)
# Flatten the output to enable the merge with the other input
first_part_output = Flatten()(conv2)

# Merge the output of the convnet with your added features by concatenation
merged_model = concatenate([first_part_output, other_data_input])

# Predict on the output (say you want a binary classification)
predictions = Dense(1, activation='sigmoid')(merged_model)

# Now create the model with both inputs
model = Model(inputs=[image_input, other_data_input], outputs=predictions)
# See your model
model.summary()

# Compile it
model.compile(optimizer='adamax', loss='binary_crossentropy')

There you go :) It is fairly easy: define the two parts of the model, merge them, and give the Model both inputs. Hope this helps.
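A minimal sketch of how training and prediction could then look for this two-input model, assuming hypothetical NumPy arrays images, extra_features and labels (random toy data here) with matching first dimensions:

import numpy as np

# Hypothetical toy data: 100 RGB images of 32x32 plus 10 extra features each
images = np.random.rand(100, 32, 32, 3)
extra_features = np.random.rand(100, 10)
labels = np.random.randint(0, 2, size=(100, 1))

# The two inputs are supplied as a list, in the same order as in Model(inputs=[...])
model.fit([images, extra_features], labels, epochs=5, batch_size=16)

# Prediction works the same way
preds = model.predict([images, extra_features])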

Yes, this is possible. Assuming you already have the convolutional part of your model defined as convolution_model, you could do:

from keras.layers import Input, Flatten, concatenate

convolution_model = Flatten()(convolution_model)  # if it wasn't flattened before
static_features_input = Input(shape=(static_features_size,))
blended_features = concatenate([convolution_model, static_features_input])
# ... here you define the blending model with blended_features as input

Then you build the rest of the model on top of blended_features as usual.
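For illustration, here is a minimal self-contained sketch of such a blending model, with assumed shapes (32x32 RGB images, 10 static features) and a binary classification head; the convolutional part is reduced to a single placeholder Conv2D block standing in for your real convolution_model:

from keras.layers import Input, Dense, Flatten, Conv2D, MaxPooling2D, concatenate
from keras.models import Model

static_features_size = 10                 # assumed number of static features
image_input = Input(shape=(32, 32, 3))    # assumed image shape

# Placeholder convolutional part (your real convolution_model goes here)
convolution_model = Conv2D(16, (3, 3), padding='same', activation='relu')(image_input)
convolution_model = MaxPooling2D(pool_size=(2, 2))(convolution_model)
convolution_model = Flatten()(convolution_model)

# Static features and concatenation, as in the snippet above
static_features_input = Input(shape=(static_features_size,))
blended_features = concatenate([convolution_model, static_features_input])

# Blending model: a dense head on top of the concatenated features
output = Dense(1, activation='sigmoid')(blended_features)
blended_model = Model(inputs=[image_input, static_features_input], outputs=output)
blended_model.compile(optimizer='adam', loss='binary_crossentropy')
blended_model.summary()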


Source: https://habr.com/ru/post/1671250/

