You can use the DataFrame's apply method:
import pandas as pd
import nltk

df = pd.DataFrame({'sentences': [
    'This is a very good site. I will recommend it to others.',
    'Can you please give me a call at 9983938428. have issues with the listings.',
    'good work! keep it up',
]})

# Tokenize each sentence row-wise
df['tokenized_sents'] = df.apply(lambda row: nltk.word_tokenize(row['sentences']), axis=1)
Output:
>>> df
                                           sentences  \
0  This is a very good site. I will recommend it ...
1  Can you please give me a call at 9983938428. h...
2                              good work! keep it up

                                     tokenized_sents
0  [This, is, a, very, good, site, ., I, will, re...
1  [Can, you, please, give, me, a, call, at, 9983...
2                      [good, work, !, keep, it, up]
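Note that nltk.word_tokenize relies on NLTK's punkt tokenizer data; if it isn't installed, the apply call above raises a LookupError. A one-time download fixes that (recent NLTK releases may prompt you for punkt_tab instead):

import nltk
nltk.download('punkt')  # tokenizer models used by nltk.word_tokenize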
To get the number of tokens in each text, use apply with a lambda again:
df['sents_length'] = df.apply(lambda row: len(row['tokenized_sents']), axis=1)

>>> df
                                           sentences  \
0  This is a very good site. I will recommend it ...
1  Can you please give me a call at 9983938428. h...
2                              good work! keep it up

                                     tokenized_sents  sents_length
0  [This, is, a, very, good, site, ., I, will, re...            14
1  [Can, you, please, give, me, a, call, at, 9983...            15
2                      [good, work, !, keep, it, up]             6
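As a side note, since each step only reads a single column, you can operate on the Series directly instead of using a row-wise apply with axis=1; this avoids constructing a row object per record and is usually a bit faster:

# Equivalent column-wise version of the two steps above
df['tokenized_sents'] = df['sentences'].apply(nltk.word_tokenize)
df['sents_length'] = df['tokenized_sents'].str.len()  # element-wise length of each token list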