I'm working on a sentence similarity problem in Keras (using the STS dataset) and I'm stuck merging the layers. The data consists of 1184 sentence pairs, each scored between 0 and 5. Below are the shapes of my numpy arrays. I've padded each sentence to 50 words and run them through an embedding layer, using 100-dimensional GloVe embeddings. When merging the two networks, I get an error:
Exception: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 arrays but instead got the following list of 2 arrays:
This is what my code looks like:
total training data = 1184
X1.shape = (1184, 50)
X2.shape = (1184, 50)
Y.shape = (1184, 1)
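For reference, this is roughly how the inputs are prepared. This is a minimal sketch, not my exact preprocessing code: the GloVe file path and the sentences1/sentences2 lists are placeholders.

import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

EMBEDDING_DIM = 100
MAX_SEQUENCE_LENGTH = 50

# One tokenizer fitted over both sides of every sentence pair
tokenizer = Tokenizer()
tokenizer.fit_on_texts(sentences1 + sentences2)
word_index = tokenizer.word_index

# Pad/truncate each sentence to 50 tokens
X1 = pad_sequences(tokenizer.texts_to_sequences(sentences1), maxlen=MAX_SEQUENCE_LENGTH)
X2 = pad_sequences(tokenizer.texts_to_sequences(sentences2), maxlen=MAX_SEQUENCE_LENGTH)

# Load the 100-dimensional GloVe vectors into a dict (path is a placeholder)
embeddings_index = {}
with open('glove.6B.100d.txt') as f:
    for line in f:
        values = line.split()
        embeddings_index[values[0]] = np.asarray(values[1:], dtype='float32')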
import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense, Merge

# Build the embedding matrix; words not found in the GloVe index stay all-zeros
embedding_matrix = np.zeros((len(word_index) + 1, EMBEDDING_DIM))
for word, i in word_index.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embedding_matrix[i] = embedding_vector

embedding_layer = Embedding(len(word_index) + 1,
                            EMBEDDING_DIM,
                            weights=[embedding_matrix],
                            input_length=50,
                            trainable=False)
# Branch for sentence 1
s1rnn = Sequential()
s1rnn.add(embedding_layer)
s1rnn.add(LSTM(128, input_shape=(100, 1)))
s1rnn.add(Dense(1))

# Branch for sentence 2 (sharing the same embedding layer)
s2rnn = Sequential()
s2rnn.add(embedding_layer)
s2rnn.add(LSTM(128, input_shape=(100, 1)))
s2rnn.add(Dense(1))

# Merge the two branches and regress the similarity score
model = Sequential()
model.add(Merge([s1rnn, s2rnn], mode='concat'))
model.add(Dense(1))

model.compile(loss='mean_squared_error', optimizer='RMSprop', metrics=['accuracy'])
model.fit([X1, X2], Y, batch_size=32, nb_epoch=100, validation_split=0.05)