How to combine n-grams into one dictionary in Spark?

I am wondering whether there is a built-in Spark function for combining 1-gram, 2-gram, ..., n-gram features into a single vocabulary. Setting n=2 in NGram and then calling CountVectorizer yields a dictionary containing only 2-grams. What I really want is to combine all frequent 1-grams, 2-grams, etc. into one dictionary for my corpus.

1 answer

You can train separate NGram and CountVectorizer models and combine the resulting count vectors with a VectorAssembler.

from pyspark.ml.feature import NGram, CountVectorizer, VectorAssembler
from pyspark.ml import Pipeline


def build_ngrams(inputCol="tokens", n=3):

    # one NGram transformer per n (use the inputCol argument, not a hardcoded name)
    ngrams = [
        NGram(n=i, inputCol=inputCol, outputCol="{0}_grams".format(i))
        for i in range(1, n + 1)
    ]

    vectorizers = [
        CountVectorizer(inputCol="{0}_grams".format(i),
            outputCol="{0}_counts".format(i))
        for i in range(1, n + 1)
    ]

    assembler = [VectorAssembler(
        inputCols=["{0}_counts".format(i) for i in range(1, n + 1)],
        outputCol="features"
    )]

    return Pipeline(stages=ngrams + vectorizers + assembler)

Usage example:

df = spark.createDataFrame([
  (1, ["a", "b", "c", "d"]),
  (2, ["d", "e", "d"])
], ("id", "tokens"))

build_ngrams().fit(df).transform(df) 
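Conceptually, the vocabulary this pipeline builds is just the union of the per-n vocabularies, with each document counted against every n-gram order. A minimal plain-Python sketch of that idea (the helper names `ngrams` and `combined_vocabulary` are illustrative, not part of the Spark API):

```python
from collections import Counter

def ngrams(tokens, n):
    # contiguous n-grams of a token list, joined with spaces
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def combined_vocabulary(corpus, max_n=2):
    # count all 1-grams through max_n-grams across the corpus in one Counter,
    # mirroring what the assembled CountVectorizer vocabularies cover together
    counts = Counter()
    for tokens in corpus:
        for n in range(1, max_n + 1):
            counts.update(ngrams(tokens, n))
    return counts

corpus = [["a", "b", "c", "d"], ["d", "e", "d"]]
vocab = combined_vocabulary(corpus, max_n=2)
print(vocab["d"])    # "d" appears once in doc 1 and twice in doc 2
print("d e" in vocab)
```

In the Spark pipeline, the same combined vocabulary can be read back from the fitted CountVectorizerModel stages, in the same order VectorAssembler concatenates their count vectors.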

Source: https://habr.com/ru/post/1650602/

