No. You can freely choose the length (dimensionality) of the vector.
Then what is the vector?
It is a distributed representation of the meaning of a word.
It can be hard to see how such a vector is trained, but the result of training is what matters, as shown below.
If a word has a vector representation like this:
[0.2 0.6 0.2]
then it is closer to [0.2 0.7 0.2] than to [0.7 0.2 0.5].
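As a quick check, "closer" here can be measured with plain Euclidean distance. This is a minimal Python sketch using the toy vectors above (the function name is my own, not from word2vec):

```python
import math

def euclidean(a, b):
    # straight-line distance between two vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

v    = [0.2, 0.6, 0.2]
near = [0.2, 0.7, 0.2]
far  = [0.7, 0.2, 0.5]

print(euclidean(v, near))  # ≈ 0.1
print(euclidean(v, far))   # ≈ 0.71
```

The first distance is much smaller, so `v` is closer to `near` than to `far`.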
Here is another example.
CRY [0.5 0.7 0.2]
HAPPY [-0.4 0.3 0.1]
SAD [0.4 0.6 0.2]
"CRY" is closer to "SAD" than to "HAPPY", because training methods (CBOW, Skip-gram, etc.) make the vectors of words closer when their meanings (or syntactic positions) are similar.
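To give an idea of what Skip-gram actually trains on: it predicts context words from each center word, so the training data is a set of (center, context) pairs taken from a window around each word. A minimal sketch (the helper name and the sample sentence are my own, not from the word2vec code):

```python
def skipgram_pairs(tokens, window=1):
    # enumerate (center, context) training pairs within a window
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

pairs = skipgram_pairs(["i", "cry", "because", "i", "am", "sad"])
print(pairs)
```

Words that appear in similar contexts end up with similar pairs, which is why their vectors are pushed close together during training.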
In practice, the exact result depends on many things: the choice of method matters, and so does a large amount of good data (corpora).
If you want to check the similarity of some words, first create the word vectors, then compute the cosine similarity between them.
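For example, cosine similarity can be computed with plain Python over the toy CRY/SAD/HAPPY vectors above (a minimal sketch, not word2vec itself):

```python
import math

def cosine(a, b):
    # cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

cry   = [0.5, 0.7, 0.2]
sad   = [0.4, 0.6, 0.2]
happy = [-0.4, 0.3, 0.1]

print(cosine(cry, sad))    # close to 1.0
print(cosine(cry, happy))  # much smaller
```

A cosine similarity near 1.0 means the vectors point in almost the same direction, so "CRY" and "SAD" come out as similar words.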
The paper ( https://arxiv.org/pdf/1301.3781.pdf ) explains some of these methods and lists their accuracy.
If you can read C code, it is useful to study the word2vec program ( https://code.google.com/archive/p/word2vec/ ). It implements both CBOW (Continuous Bag-Of-Words) and Skip-gram.
PS) Please correct my bad English. PS) Feel free to ask if you still have questions.