Edit: after some prodding from @alexis, here is the best answer.
Sentence tokenization
This should provide you with a DataFrame with one row for each id and sentence:
    sentences = []
    for row in df.itertuples():
        for sentence in row[2].split('.'):
            if sentence != '':
                sentences.append((row[1], sentence))
    new_df = pandas.DataFrame(sentences, columns=['ID', 'SENTENCE'])
The output is as follows:

Splitting with split('.') is a quick way to break the lines into sentences, provided the sentences really are separated by periods and periods are not used for anything else (for example, to mark abbreviations); it also removes the periods in the process. This will fail if periods have multiple uses and/or not all sentence endings are marked with periods. A slower but much more reliable approach is to use, as you requested, sent_tokenize to break the lines into sentences:
    from nltk.tokenize import sent_tokenize

    sentences = []
    for row in df.itertuples():
        for sentence in sent_tokenize(row[2]):
            sentences.append((row[1], sentence))
    new_df = pandas.DataFrame(sentences, columns=['ID', 'SENTENCE'])
This produces the following output:

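To see where plain split('.') goes wrong, here is a minimal self-contained sketch (the sample text is invented for illustration):

```python
# Naive period-splitting treats the period in "Dr." as a sentence
# boundary, so the abbreviation is cut off as its own "sentence".
text = "Dr. Smith arrived. He was late."

naive = [s for s in text.split('.') if s != '']
print(naive)  # ['Dr', ' Smith arrived', ' He was late']
```

sent_tokenize is trained to keep "Dr." and similar abbreviations inside their sentence, which is why it is the more reliable choice.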
If you want to quickly strip the periods from these sentences, you can do something like:
    new_df['SENTENCE_noperiods'] = new_df.SENTENCE.apply(lambda x: x.strip('.'))
Which gives:

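The same strip can also be done with pandas' vectorized string accessor instead of apply with a lambda; a small self-contained sketch with invented data:

```python
import pandas as pd

new_df = pd.DataFrame({'ID': [1, 1],
                       'SENTENCE': ['First sentence.', 'Second sentence.']})

# .str.strip('.') removes leading/trailing periods across the whole column
new_df['SENTENCE_noperiods'] = new_df.SENTENCE.str.strip('.')
print(new_df.SENTENCE_noperiods.tolist())  # ['First sentence', 'Second sentence']
```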
You can also take an apply → map approach (df is your original table):
    df = df.join(df.TEXT.apply(sent_tokenize).rename('SENTENCES'))
Yielding:

Continuing:

    sentences = df.SENTENCES.apply(pandas.Series)
    sentences.columns = ['sentence {}'.format(n + 1) for n in sentences.columns]
This gives:

Since our indexes have not changed, we can join this back onto our original table:
    df = df.join(sentences)

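Here is a self-contained sketch of the whole expand-and-join step, with invented data standing in for the SENTENCES column (rows with fewer sentences get NaN in the extra columns):

```python
import pandas as pd

df = pd.DataFrame({'ID': [1, 2],
                   'SENTENCES': [['First.', 'Second.'], ['Only.']]})

# apply(pd.Series) turns each list into its own row of a new frame,
# padding shorter lists with NaN
sentences = df.SENTENCES.apply(pd.Series)
sentences.columns = ['sentence {}'.format(n + 1) for n in sentences.columns]

# indexes line up, so a plain join attaches the new columns
df = df.join(sentences)
print(df.columns.tolist())  # ['ID', 'SENTENCES', 'sentence 1', 'sentence 2']
```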
Word tokenization
Continuing with the df from above, we can extract the tokens of the first sentence in each row as follows:
    from nltk.tokenize import word_tokenize

    df['sent_1_words'] = df['sentence 1'].apply(word_tokenize)
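If NLTK is not available, a rough standard-library stand-in (simple_word_tokenize is a hypothetical helper, not NLTK's algorithm) can approximate word tokenization with a regex:

```python
import re
import pandas as pd

def simple_word_tokenize(text):
    # \w+ captures runs of word characters; [^\w\s] captures each
    # punctuation mark as its own token (much cruder than NLTK)
    return re.findall(r"\w+|[^\w\s]", text)

df = pd.DataFrame({'sentence 1': ['This is a sentence.']})
df['sent_1_words'] = df['sentence 1'].apply(simple_word_tokenize)
print(df.loc[0, 'sent_1_words'])  # ['This', 'is', 'a', 'sentence', '.']
```

NLTK's word_tokenize handles contractions and quote characters much better, so prefer it when the nltk data is installed.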
