How do you train a neural network to map from a vector representation back to a one-hot vector? The example that interests me is where the vectors are word2vec embeddings, and I would like to map them back to the individual words of the vocabulary used to train the embedding, so I assume this would be "vec2word"?
A bit more detail: if I understand correctly, a cluster of points in the embedding space corresponds to similar words. So if you pick a point from that cluster and use it as input to vec2word, should the output be a set of similar individual words?
I think I could do something like an encoder/decoder, but does it really need to be that complicated / use that many parameters?
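To make what I mean concrete, here is a rough sketch of the kind of lookup I imagine, using a toy embedding matrix and plain numpy (the vocabulary and vectors are placeholders I made up, not real word2vec output): find the vocabulary word(s) whose embeddings are most cosine-similar to the query vector.

```python
import numpy as np

# Toy placeholder embedding table: 4 words, 3-dimensional vectors.
# In practice this would be the trained word2vec embedding matrix.
vocab = ["king", "queen", "man", "woman"]
E = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.1, 0.9, 0.0],
    [0.0, 0.8, 0.2],
])

def vec2word(v, k=1):
    """Return the k vocabulary words whose embeddings are most
    cosine-similar to the query vector v."""
    sims = E @ v / (np.linalg.norm(E, axis=1) * np.linalg.norm(v) + 1e-9)
    return [vocab[i] for i in np.argsort(-sims)[:k]]

# A query vector near the "king"/"queen" cluster should return those words.
print(vec2word(np.array([0.85, 0.15, 0.05]), k=2))
```

Is this nearest-neighbor lookup essentially all that "vec2word" amounts to, or is there a reason to train an actual network for the inverse mapping?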
Here's the TensorFlow tutorial on how to train word2vec, but I can't find anything on doing the opposite. I'm happy to do this with any deep learning library, and it's fine if the method is approximate / probabilistic.
Thanks so much for your help, Ajay.