Clustering Doc2Vec Sentence Vectors

I have several documents, each containing several sentences. I want to use doc2vec to cluster the sentence vectors (e.g. with k-means) using sklearn.

The idea is that similar sentences end up grouped into the same cluster. However, it is not clear to me whether I should train on every single document separately and then run a clustering algorithm on the sentence vectors, or whether I could derive a sentence vector from doc2vec without training on every new sentence.
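
If the second option is possible, I imagine it would look roughly like this (a sketch assuming gensim's infer_vector() on an already trained model; I have not verified it on my data):

# Sketch: given an already trained Doc2Vec model, derive a vector
# for an unseen sentence without retraining the model
tokens = "this is a new sentence".lower().split()  # infer_vector() wants a token list, not a string
new_vector = model.infer_vector(tokens)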

Here is the code I have so far:

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Tag each sentence so that doc2vec learns one vector per sentence
sentenceLabeled = []
for sentenceID, sentence in enumerate(example_sentences):
    sentenceL = TaggedDocument(words=sentence.split(), tags=['SENT_%s' % sentenceID])
    sentenceLabeled.append(sentenceL)

model = Doc2Vec(size=300, window=10, min_count=0, workers=11,
                alpha=0.025, min_alpha=0.025)
model.build_vocab(sentenceLabeled)
for epoch in range(20):
    # recent gensim versions require total_examples and epochs here
    model.train(sentenceLabeled, total_examples=model.corpus_count, epochs=1)
    model.alpha -= 0.002           # decrease the learning rate
    model.min_alpha = model.alpha  # fix the learning rate, no decay
# matrix of the learned sentence vectors (model.dv.vectors in gensim 4.x)
textVect = model.docvecs.doctag_syn0

## K-means ##
from sklearn.cluster import KMeans

num_clusters = 3
km = KMeans(n_clusters=num_clusters)
km.fit(textVect)
clusters = km.labels_.tolist()

## Print Sentence Clusters ##
import pandas as pd

cluster_info = {'sentence': example_sentences, 'cluster': clusters}
sentenceDF = pd.DataFrame(cluster_info, index=clusters, columns=['sentence', 'cluster'])

for num in range(num_clusters):
    print()
    print("Sentence cluster %d: " % int(num + 1), end='')
    print()
    # .loc replaces the deprecated .ix indexer
    for sentence in sentenceDF.loc[num]['sentence'].values.tolist():
        print(' %s ' % sentence, end='')
        print()
    print()

Basically, what I am doing now is training on every tagged sentence from the documents. I wonder, though, whether this could be done in a simpler way.
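
For instance, I suspect a simpler variant would be to let Doc2Vec run all the epochs itself instead of my manual alpha-decay loop, something like this sketch (assuming a recent gensim version, where the parameter is vector_size rather than size):

from gensim.models.doc2vec import Doc2Vec

# Passing the corpus to the constructor builds the vocabulary and
# trains for all 20 epochs with the default learning-rate decay
model = Doc2Vec(sentenceLabeled, vector_size=300, window=10,
                min_count=0, workers=11, epochs=20)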


Comments:
  • Which mode did you train with (DM = 1)? And how do the resulting clusters look?
  • For visualizing the result, try t-SNE. Since it is slow on high-dimensional input, first reduce the vectors with PCA to about 50 components; both are available in sklearn (see the sketch below).
  • You can sanity-check the model with most_similar() and get vectors for new sentences with infer_vector(); running more than 1 inference pass helps. (Note that infer_vector() expects a list of tokens, not a string!)
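
A minimal sketch of the PCA + t-SNE suggestion above using sklearn (the 50 PCA components and 2-D output are just common defaults, and it assumes there are more than 50 sentences):

from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# t-SNE is slow on 300-dimensional vectors, so reduce with PCA first
reduced = PCA(n_components=50).fit_transform(textVect)
coords = TSNE(n_components=2).fit_transform(reduced)
# coords is an (n_sentences, 2) array; scatter-plot it colored by `clusters`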

Source: https://habr.com/ru/post/1675062/

