I have a pandas DataFrame with one column, call it 'col'. Each entry in this column is a list of words: ['word1', 'word2', etc.]
How can I efficiently lemmatize all of these words using the nltk library?
import nltk
nltk.stem.WordNetLemmatizer().lemmatize('word')
I want to find the lemma of every word in every cell of one column of the pandas DataFrame.
My data looks something like this:
import pandas as pd
data = [[['walked','am','stressed','Fruit']],[['going','gone','walking','riding','running']]]
df = pd.DataFrame(data,columns=['col'])
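One way to sketch this is with `Series.apply` and a list comprehension over each cell. The tiny lemma table below is a stand-in so the example runs without downloading the WordNet corpus; in practice you would replace `lemmatize` with `nltk.stem.WordNetLemmatizer().lemmatize` after running `nltk.download('wordnet')`.

```python
import pandas as pd

# Stand-in lemma table; swap this out for the real WordNet lemmatizer
# (nltk.stem.WordNetLemmatizer().lemmatize) once 'wordnet' is downloaded.
_LEMMAS = {'walked': 'walk', 'am': 'be', 'going': 'go', 'gone': 'go',
           'walking': 'walk', 'riding': 'ride', 'running': 'run'}

def lemmatize(word):
    # Fall back to the lowercased word when no lemma is known.
    return _LEMMAS.get(word.lower(), word.lower())

data = [[['walked', 'am', 'stressed', 'Fruit']],
        [['going', 'gone', 'walking', 'riding', 'running']]]
df = pd.DataFrame(data, columns=['col'])

# Lemmatize every word inside every list-valued cell of the column.
df['lemmas'] = df['col'].apply(lambda words: [lemmatize(w) for w in words])
print(df['lemmas'].tolist())
# [['walk', 'be', 'stressed', 'fruit'], ['go', 'go', 'walk', 'ride', 'run']]
```

Note that the real `WordNetLemmatizer.lemmatize` defaults to treating words as nouns; passing `pos='v'` is often needed to reduce verb forms like 'walked' to 'walk'.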