There are a few answers that mention pandas.get_dummies as a method for this, but I believe the LabelEncoder approach is cleaner for implementing the model. Other similar answers suggest DictVectorizer, but converting the entire DataFrame to a dict and back is probably not a great idea.
Take the following training and test data:
```python
import bisect

from sklearn import preprocessing
import numpy as np
import pandas as pd

train = {'city': ['Buenos Aires', 'New York', 'Istambul', 'Buenos Aires', 'Paris', 'Paris'],
         'letters': ['a', 'b', 'c', 'd', 'a', 'b']}
train = pd.DataFrame(train)

test = {'city': ['Buenos Aires', 'New York', 'Istambul', 'Buenos Aires', 'Paris', 'Utila'],
        'letters': ['a', 'b', 'c', 'a', 'b', 'b']}
test = pd.DataFrame(test)
```
Utila is a less common city, and it appears only in the test set, not in the training data, so it simulates the new categories the model will encounter in production.
The trick is to map any unseen value to a placeholder label, '&lt;unknown&gt;', and add that label to the fitted LabelEncoder's classes. The same encoder object can then be reused in production.
```python
c = 'city'
le = preprocessing.LabelEncoder()
train[c] = le.fit_transform(train[c])

# Map values not seen during fit to the placeholder label
test[c] = test[c].map(lambda s: '<unknown>' if s not in le.classes_ else s)

# Insert the placeholder while keeping the class list sorted,
# then restore classes_ as an array so transform() keeps working
le_classes = le.classes_.tolist()
bisect.insort_left(le_classes, '<unknown>')
le.classes_ = np.array(le_classes)

test[c] = le.transform(test[c])
test
```

```
   city letters
0     1       a
1     3       b
2     2       c
3     1       a
4     4       b
5     0       b
```
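As a quick sanity check (not part of the original answer), inverse_transform decodes the integer codes back to strings, and Utila comes back as the placeholder:

```python
# Decode the integer codes back to labels; the unseen city 'Utila'
# now appears as the '<unknown>' placeholder
print(le.inverse_transform(test[c]))
# ['Buenos Aires' 'New York' 'Istambul' 'Buenos Aires' 'Paris' '<unknown>']
```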
To apply this to new data later, we need to save the le object for each column, which can easily be done with pickle.
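For example, a minimal sketch of saving and reloading the encoders (the dict layout and the file name encoders.pkl are just illustrative, not from the original answer):

```python
import pickle

# Keep one fitted LabelEncoder per categorical column
encoders = {'city': le}

with open('encoders.pkl', 'wb') as f:
    pickle.dump(encoders, f)

# Later, in production:
with open('encoders.pkl', 'rb') as f:
    encoders = pickle.load(f)

le_city = encoders['city']
new_data = pd.Series(['Paris', 'Utila'])
new_data = new_data.map(lambda s: '<unknown>' if s not in le_city.classes_ else s)
print(le_city.transform(new_data))  # [4 0]
```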
This answer is based on this question, which seemed to me not entirely clear, so I added this example.