How to use NLTK word_tokenize on a pandas column of Twitter data?

This is the code I use for semantic analysis of Twitter data:

    import pandas as pd
    import datetime
    import numpy as np
    import re
    from nltk.tokenize import word_tokenize
    from nltk.corpus import stopwords
    from nltk.stem.wordnet import WordNetLemmatizer
    from nltk.stem.porter import PorterStemmer

    # load the raw tweets and join the first four columns into one text column
    df = pd.read_csv('twitDB.csv', header=None, sep=',', error_bad_lines=False, encoding='utf-8')
    hula = df[[0, 1, 2, 3]]
    hula = hula.fillna(0)
    hula['tweet'] = hula[0].astype(str) + hula[1].astype(str) + hula[2].astype(str) + hula[3].astype(str)
    hula["tweet"] = hula.tweet.str.lower()

    # basic cleaning: collapse whitespace and repeated dots, strip some punctuation
    ho = hula["tweet"]
    ho = ho.replace('\s+', ' ', regex=True)
    ho = ho.replace('\.+', '.', regex=True)
    special_char_list = [':', ';', '?', '}', ')', '{', '(']
    for special_char in special_char_list:
        ho = ho.replace(special_char, '')
    print(ho)

    # replace URLs, strip the '#' from hashtags, drop quotes
    ho = ho.replace('((www\.[\s]+)|(https?://[^\s]+))', 'URL', regex=True)
    ho = ho.replace(r'#([^\s]+)', r'\1', regex=True)
    ho = ho.replace('\'"', '', regex=True)

    lem = WordNetLemmatizer()
    stem = PorterStemmer()
    eng_stopwords = stopwords.words('english')

    # dump the whole Series to one big string, stem it, then tokenize
    ho = ho.to_frame(name=None)
    a = ho.to_string(buf=None, columns=None, col_space=None, header=True, index=True,
                     na_rep='NaN', formatters=None, float_format=None, sparsify=False,
                     index_names=True, justify=None, line_width=None, max_rows=None,
                     max_cols=None, show_dimensions=False)
    fg = stem.stem(a)
    wordList = word_tokenize(fg)
    wordList = [word for word in wordList if word not in eng_stopwords]
    print(wordList)

Input, i.e. the string a:

                                                    tweet
    0  1495596971.6034188::automotive auto ebc greens...
    1   1495596972.330948::new free stock photo of cit...

I receive the output (wordList) in this format:

 tweet 0 1495596971.6034188 : :automotive auto 

I want only the tweet text tokenized, as plain strings, without the index and header leaking into the tokens. How can I do that? If you have better code for semantic analysis of Twitter data, please share it with me.

1 answer

In short:

 df['Text'].apply(word_tokenize) 

Or, if you want to add another column to store the tokenized list for each row:

 df['tokenized_text'] = df['Text'].apply(word_tokenize) 
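
As a minimal, self-contained sketch of how this looks on data shaped like the question's (the column name 'tweet', the two sample rows, and the per-row stopword filtering are assumptions for illustration, not part of the original answer), tokenizing each row and then filtering stopwords row by row avoids the to_frame/to_string detour from the question entirely:

    import pandas as pd
    from nltk.corpus import stopwords
    from nltk.tokenize import word_tokenize

    # nltk.download('punkt') and nltk.download('stopwords') may be needed first
    eng_stopwords = set(stopwords.words('english'))

    # hypothetical stand-in for hula['tweet'] from the question
    df = pd.DataFrame({'tweet': ['automotive auto ebc greens',
                                 'new free stock photo of city']})

    # tokenize each tweet separately, then drop stopwords per row
    df['tokenized_text'] = df['tweet'].apply(word_tokenize)
    df['tokenized_text'] = df['tokenized_text'].apply(
        lambda tokens: [t for t in tokens if t not in eng_stopwords])

    print(df['tokenized_text'].tolist())
    # [['automotive', 'auto', 'ebc', 'greens'], ['new', 'free', 'stock', 'photo', 'city']]

Each cell of tokenized_text is then a plain Python list of token strings per tweet, which is the string-only output the question asks for.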

There are tokenizers written specifically for Twitter text, see http://www.nltk.org/api/nltk.tokenize.html#module-nltk.tokenize.casual

To use nltk.tokenize.TweetTokenizer:

    from nltk.tokenize import TweetTokenizer
    tt = TweetTokenizer()
    df['Text'].apply(tt.tokenize)
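
A quick example of what the casual tokenizer does with tweet-specific text (the sample string and the strip_handles/reduce_len options are purely illustrative assumptions): the default word_tokenize splits '#automotive' and URLs into several pieces, while TweetTokenizer keeps them whole.

    from nltk.tokenize import TweetTokenizer

    # strip_handles drops @mentions; reduce_len shortens long runs of repeated characters
    tt = TweetTokenizer(strip_handles=True, reduce_len=True)

    sample = "@someuser loving the new ebc greens!!! #automotive http://example.com"
    print(tt.tokenize(sample))
    # the @mention is dropped, while '#automotive' and the URL each remain a single token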


