Keras - How to create a shared Embedding() layer for each input neuron

I want to create a deep neural network in Keras where each element of the input layer is "encoded" using the same shared Embedding() layer before being fed into the deeper layers.

Each input will be a number that identifies the type of an object, and the network should learn an embedding that encapsulates some internal representation of "what kind of object it is."

So, if the input layer has dimension X and the embedding has dimension Y, the first hidden layer should consist of X * Y neurons (each input neuron is embedded).

Here is a small image that shows the network architecture I would like to create, where each input element is encoded using a 3-dimensional embedding.

How can I do this?

1 answer
 from keras.layers import Input, Embedding

 first_input = Input(shape=(your_shape_tuple))
 second_input = Input(shape=(your_shape_tuple))
 ...
 # Embedding needs both the vocabulary size and the output dimension
 embedding_layer = Embedding(input_dim=vocab_size, output_dim=embedding_size)

 first_input_encoded = embedding_layer(first_input)
 second_input_encoded = embedding_layer(second_input)
 ...
 # Rest of the model...

The embedding layer will have shared weights. If you have many inputs, you can keep the inputs and their encoded outputs in lists instead of separate variables.
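To make the weight sharing concrete, here is a minimal runnable sketch of the snippet above, with assumed toy sizes (vocab_size = 10 object types, a 3-dimensional embedding) and each input being a single object index. Because both branches go through the same layer, the same index always maps to the same vector:

```python
import numpy as np
from keras.layers import Input, Embedding
from keras.models import Model

# Assumed toy sizes for illustration
vocab_size = 10      # number of distinct object types
embedding_size = 3   # dimension of the learned representation

# Two scalar-index inputs sharing one Embedding layer
first_input = Input(shape=(1,))
second_input = Input(shape=(1,))
embedding_layer = Embedding(input_dim=vocab_size, output_dim=embedding_size)

first_encoded = embedding_layer(first_input)
second_encoded = embedding_layer(second_input)

model = Model(inputs=[first_input, second_input],
              outputs=[first_encoded, second_encoded])

# The same object index goes through the same weights, so both
# branches produce identical vectors for identical inputs.
a, b = model.predict([np.array([[2]]), np.array([[2]])], verbose=0)
```

Here `a` and `b` each have shape (1, 1, 3) and are equal, which is exactly the "shared weights" behavior: the layer object is created once and called twice.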

If all your inputs arrive in a single tensor instead, the way to do it is:

 from keras.layers import Input, Embedding

 # If your inputs are all fed in one numpy array:
 input_layer = Input(shape=(num_input_indices,))

 # Per sample, the output of this layer is a 2D tensor of shape
 # (num_input_indices, embedding_size)
 embedded_input = Embedding(input_dim=vocab_size, output_dim=embedding_size)(input_layer)
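To get the X * Y first hidden layer from the question, you can follow the Embedding with a Flatten before the first Dense layer. This is a sketch with assumed sizes (X = 4 input indices, Y = 3 embedding dimensions, vocab_size = 10, and an arbitrary hidden width of 8):

```python
import numpy as np
from keras.layers import Input, Embedding, Flatten, Dense
from keras.models import Model

# Assumed toy sizes: X = 4 input indices, Y = 3 embedding dimensions
num_input_indices = 4
embedding_size = 3
vocab_size = 10

input_layer = Input(shape=(num_input_indices,))
embedded = Embedding(input_dim=vocab_size, output_dim=embedding_size)(input_layer)

# Flatten the (num_input_indices, embedding_size) block into a single
# vector of X * Y = 12 values, which feeds the first hidden layer.
flat = Flatten()(embedded)
hidden = Dense(8, activation="relu")(flat)

model = Model(inputs=input_layer, outputs=hidden)
out = model.predict(np.zeros((2, num_input_indices)), verbose=0)
```

Every position in the input shares the same embedding weights here as well, since the whole batch of indices passes through one Embedding layer.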

Is this what you were looking for?


Source: https://habr.com/ru/post/1263989/

