LabelEncoder with never-seen-before values

If a sklearn.LabelEncoder has been fitted on a training set, it may break if it encounters new values when used on a test set.

The only solution I could come up with is to map everything new in the test set (i.e. anything not belonging to an existing class) to "<unknown>", and then explicitly add a corresponding class to the LabelEncoder afterwards:

# train and test are pandas.DataFrame's and c is whatever column
from sklearn.preprocessing import LabelEncoder
import numpy as np

le = LabelEncoder()
le.fit(train[c])
# map anything the encoder has not seen to '<unknown>'
test[c] = test[c].map(lambda s: '<unknown>' if s not in le.classes_ else s)
le.classes_ = np.append(le.classes_, '<unknown>')
train[c] = le.transform(train[c])
test[c] = le.transform(test[c])

This works, but is there a better solution?

Update

As @sapo_cosmico points out in a comment, the approach above seems to no longer work, presumably because of a change in the implementation of LabelEncoder.transform, which now appears to use np.searchsorted (I don't know whether that was the case before). So instead of appending the <unknown> class to the list of classes the LabelEncoder has already extracted, it needs to be inserted in sorted order:

import bisect
le_classes = le.classes_.tolist()
bisect.insort_left(le_classes, '<unknown>')
le.classes_ = le_classes

However, since this feels rather clunky overall, I'm sure there is a better approach.


I get the impression that what you've done is quite similar to what other people do when faced with this situation.

There's been some effort to add the ability to encode unseen labels to the LabelEncoder (see especially https://github.com/scikit-learn/scikit-learn/pull/3483 and https://github.com/scikit-learn/scikit-learn/pull/3599), but changing the existing behavior is actually more difficult than it seems at first glance.

For now it looks like handling "out-of-vocabulary" labels is left to individual users of scikit-learn.

I ended up switching to Pandas' get_dummies due to this problem of unseen data.

  • create the dummies on the training data
    dummy_train = pd.get_dummies(train)
  • create the dummies in the new (unseen data)
    dummy_new = pd.get_dummies(new_data)
  • re-index the new data to the columns of the training data, filling the missing values with 0
    dummy_new = dummy_new.reindex(columns=dummy_train.columns, fill_value=0)

Effectively, any new categorical values will not go into the classifier (their dummy columns are dropped by the reindex), but I think that should not cause problems, as it would not know what to do with them anyway. Putting the steps together gives the sketch below.
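A minimal self-contained sketch of the three steps above; the train/new_data frames and the 'status' column are made up for illustration:

import pandas as pd

# hypothetical training data and unseen data
train = pd.DataFrame({'status': ['new', 'used', 'used']})
new_data = pd.DataFrame({'status': ['used', 'refurbished']})  # 'refurbished' was never seen

dummy_train = pd.get_dummies(train)
dummy_new = pd.get_dummies(new_data)

# align the unseen data with the training columns; missing dummies become 0,
# and the extra 'status_refurbished' column is dropped
dummy_new = dummy_new.reindex(columns=dummy_train.columns, fill_value=0)
print(dummy_new)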

I know two devs who are working on building wrappers around sklearn transformers and pipelines. They have two robust encoder transformers (one dummy and one label encoder) that can handle unseen values. Here is the documentation for their skutil library. Search for skutil.preprocessing.OneHotCategoricalEncoder or skutil.preprocessing.SafeLabelEncoder. In their SafeLabelEncoder(), unseen values are auto-encoded to 999999.

I was trying to deal with this problem and found two handy ways to encode categorical data from train and test sets, with and without using LabelEncoder. New categories are filled with some known category "c" (like "other" or "missing"). The first method seems to work faster. Hope that will help you.

import pandas as pd
import time
from sklearn.preprocessing import LabelEncoder
from pandas.api.types import CategoricalDtype

df = pd.DataFrame()
df["a"] = ['a', 'b', 'c', 'd']
df["b"] = ['a', 'b', 'e', 'd']

# Method 1: LabelEncoder + map
t = time.perf_counter()  # time.clock() was removed in Python 3.8
le = LabelEncoder()
suf = "_le"
col = "a"
df[col + suf] = le.fit_transform(df[col])
dic = dict(zip(le.classes_, le.transform(le.classes_)))
col = 'b'
# unseen values map to NaN and are filled with the code of the known category "c"
df[col + suf] = df[col].map(dic).fillna(dic["c"]).astype(int)
print(time.perf_counter() - t)

# Method 2: pandas category
t = time.perf_counter()
df["d"] = df["a"].astype('category').cat.codes
dic = df["a"].astype('category').cat.categories.tolist()
# casting to a fixed set of categories turns unseen values into NaN, then fill with "c"
df['f'] = df['b'].astype(CategoricalDtype(categories=dic)).fillna("c").cat.codes
df.dtypes
print(time.perf_counter() - t)

If it is just about training and testing a model, why not just label-encode the entire dataset, and then use the classes generated by the encoder object?

from sklearn.preprocessing import LabelEncoder

encoder = LabelEncoder()
# fit on the full label column so every class is known (the returned codes are not needed here)
encoder.fit_transform(df["label"])
# train_y and test_y are the raw train/test splits of that same column
train_y = encoder.transform(train_y)
test_y = encoder.transform(test_y)

I recently ran into this problem and was able to come up with a pretty quick solution. My answer solves a little more than just this problem, but it will easily work for your issue too. (I think it's pretty cool.)

I am working with pandas data frames and originally used sklearn's LabelEncoder() to encode my data, which I would then pickle to use in other modules in my program.

However, the label encoder in sklearn's preprocessing does not have the ability to add new values to the encoding. I solved the problem of encoding multiple columns and saving the mapping values, as well as being able to add new values to the encoder, roughly as follows:

encoding_dict = dict()
for col in cols_to_encode:
    # get the unique values in the column to encode
    values = df[col].value_counts().index.tolist()

    # create a dictionary of values and corresponding numbers {value: number}
    dict_values = {value: count for value, count in zip(values, range(1, len(values) + 1))}

    # save the mapping for this column
    encoding_dict[col] = dict_values

    # replace the values with the corresponding number from the dictionary
    df[col] = df[col].map(lambda x: dict_values.get(x))

Then you can simply save the dictionary to a JSON file, pull it back later, and add any value you want by adding a new key and a corresponding integer value.
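A minimal sketch of that persistence step, assuming an encoding_dict built as above (the file name and the 'color' column are made up for illustration):

import json

# hypothetical mapping built by the loop above
encoding_dict = {'color': {'red': 1, 'green': 2, 'blue': 3}}

# persist the mapping
with open('encoding_dict.json', 'w') as f:
    json.dump(encoding_dict, f)

# later / in another module: load it and register a previously unseen value
with open('encoding_dict.json') as f:
    encoding_dict = json.load(f)
encoding_dict['color']['purple'] = max(encoding_dict['color'].values()) + 1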

I'll explain some reasoning behind using map() instead of replace(): I found that pandas' replace() function took over a minute to iterate through around 117,000 rows, while map() brought that time down to just over 100 ms.

TLDR: instead of using sklearn's preprocessing, just work with your dataframe by building a mapping dictionary and mapping the values yourself.

LabelEncoder is basically a dictionary. You can extract and use it for future encoding:

from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
le.fit(X)

le_dict = dict(zip(le.classes_, le.transform(le.classes_)))

Retrieve the label for a single new item; if the item is missing, fall back to an unknown value:

le_dict.get(new_item, '<Unknown>')

Retrieve labels for a DataFrame column:

df[your_col] = df[your_col].apply(lambda x: le_dict.get(x, <unknown_value>))
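For instance, a minimal sketch where unseen items get a dedicated integer code one past the known range (the fallback value is just an illustrative choice):

# give unseen items the next unused integer code, one past the largest known code
unknown_code = len(le_dict)
df[your_col] = df[your_col].apply(lambda x: le_dict.get(x, unknown_code))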

I have created a class to support this. If a new label comes in, it will be assigned to the unknown class.

from sklearn.preprocessing import LabelEncoder
import numpy as np


class LabelEncoderExt(object):
    def __init__(self):
        """
        It differs from LabelEncoder by handling new classes and providing a value for them [Unknown].
        Unknown is added in fit, and transform takes care of new items by giving them the unknown class id.
        """
        self.label_encoder = LabelEncoder()
        # self.classes_ = self.label_encoder.classes_

    def fit(self, data_list):
        """
        This will fit the encoder for all the unique values and introduce the unknown value
        :param data_list: A list of strings
        :return: self
        """
        self.label_encoder = self.label_encoder.fit(list(data_list) + ['Unknown'])
        self.classes_ = self.label_encoder.classes_
        return self

    def transform(self, data_list):
        """
        This will transform the data_list to an id list where new values get assigned to the Unknown class
        :param data_list:
        :return:
        """
        new_data_list = list(data_list)
        for unique_item in np.unique(data_list):
            if unique_item not in self.label_encoder.classes_:
                new_data_list = ['Unknown' if x == unique_item else x for x in new_data_list]

        return self.label_encoder.transform(new_data_list)

The sample usage:

country_list = ['Argentina', 'Australia', 'Canada', 'France', 'Italy', 'Spain', 'US', 'Canada', 'Argentina', 'US']


label_encoder = LabelEncoderExt()


label_encoder.fit(country_list)
print(label_encoder.classes_) # you can see new class called Unknown
print(label_encoder.transform(country_list))




new_country_list = ['Canada', 'France', 'Italy', 'Spain', 'US', 'India', 'Pakistan', 'South Africa']
print(label_encoder.transform(new_country_list))

I faced the same problem and realized that my encoder was somehow mixing up values across the columns of my DataFrame. If you run a single encoder over several columns, the numbers it assigns to labels can end up colliding when two different columns happen to contain similar values. What I did to solve the problem was to create a separate instance of LabelEncoder() for each column in my pandas DataFrame, and that gave me a nice result.

encoder1 = LabelEncoder()
encoder2 = LabelEncoder()
encoder3 = LabelEncoder()


df['col1'] = encoder1.fit_transform(list(df['col1'].values))
df['col2'] = encoder2.fit_transform(list(df['col2'].values))
df['col3'] = encoder3.fit_transform(list(df['col3'].values))

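A slightly more compact variant of the same idea is to keep one encoder per column in a dictionary, so each fitted encoder stays available for later use (the column names here are the ones from the snippet above):

from sklearn.preprocessing import LabelEncoder

encoders = {}
for col in ['col1', 'col2', 'col3']:
    encoders[col] = LabelEncoder()
    df[col] = encoders[col].fit_transform(df[col].values)

# later, e.g. to recover the original labels of col2:
# df['col2'] = encoders['col2'].inverse_transform(df['col2'])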

Here is an approach that uses a relatively recent pandas feature. The main motivation is that machine-learning packages like lightgbm can accept pandas category columns as features, which in some situations works better than one-hot encoding. In this example the transformer returns integer codes, but it could also keep the category dtype; unseen categorical values are replaced with -1.

from collections import defaultdict
from sklearn.base import BaseEstimator, TransformerMixin
from pandas.api.types import CategoricalDtype
import pandas as pd
import numpy as np


class PandasLabelEncoder(BaseEstimator, TransformerMixin):
    def __init__(self):
        self.label_dict = defaultdict(list)

    def fit(self, X):
        X = X.astype('category')
        cols = X.columns
        values = list(map(lambda col: X[col].cat.categories, cols))
        self.label_dict = dict(zip(cols, values))
        # return as category for xgboost or lightgbm
        return self

    def transform(self, X):
        # check for columns that were not seen during fit
        missing_col = set(X.columns) - set(self.label_dict.keys())
        if missing_col:
            raise ValueError('the column named {} is not in the label dictionary. Check your fitting data.'.format(missing_col))
        # unseen categories become code -1
        return X.apply(lambda x: x.astype('category')
                       .cat.set_categories(self.label_dict[x.name])
                       .cat.codes.astype('category')
                       .cat.set_categories(np.arange(len(self.label_dict[x.name]))))

    def inverse_transform(self, X):
        return X.apply(lambda x: pd.Categorical.from_codes(codes=x.values,
                                                           categories=self.label_dict[x.name]))


dff1 = pd.DataFrame({'One': list('ABCC'), 'Two': list('bccd')})
dff2 = pd.DataFrame({'One': list('ABCDE'), 'Two': list('debca')})

enc = PandasLabelEncoder()
enc.fit_transform(dff1)

   One Two
0    0   0
1    1   1
2    2   1
3    2   2

dff3 = enc.transform(dff2)
dff3

   One Two
0    0   2
1    1  -1
2    2   0
3   -1   1
4   -1  -1

enc.inverse_transform(dff3)

   One  Two
0    A    d
1    B  NaN
2    C    b
3  NaN    c
4  NaN  NaN

LabelEncoder() should be used only for encoding target labels. To encode categorical features, use OneHotEncoder(), which can handle unseen values: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html#sklearn.preprocessing.OneHotEncoder
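A small sketch of that, using handle_unknown='ignore' so that an unseen category encodes to an all-zeros row (the fruit values are made up for illustration):

import numpy as np
from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder(handle_unknown='ignore')
enc.fit(np.array(['apple', 'banana', 'cherry']).reshape(-1, 1))

# 'mango' was never seen during fit, so its row is all zeros
print(enc.transform(np.array(['banana', 'mango']).reshape(-1, 1)).toarray())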

If someone is still looking for it, here is my fix.

Say you have:

  • enc_list: a list of the variable names that have already been encoded
  • enc_map: a dictionary mapping each variable in enc_list to its fitted encoder
  • df: a dataframe containing values of a variable not present in enc_map

This will work assuming you already have a category "NA" or "Unknown" among the encoded values:

for l in enc_list:
    old_list = enc_map[l].classes_
    new_list = df[l].unique()
    na = [j for j in new_list if j not in old_list]
    df[l] = df[l].replace(na, 'NA')

As of scikit-learn 0.24.0 you shouldn't have to use LabelEncoder on your features at all; use OrdinalEncoder instead. LabelEncoder, as its name suggests, is meant for labels.

Since models will never predict a label that wasn't seen in their training data, LabelEncoder should never support an unknown label.

For features, though, it's different, since you can obviously encounter categories never seen in the training set. In version 0.24.0 scikit-learn introduced two new arguments to the OrdinalEncoder that allow it to encode unknown categories.

An example usage of OrdinalEncoder to encode features, converting unknown categories to the value -1:

import numpy as np
from sklearn.preprocessing import OrdinalEncoder

# Create encoder
ordinal_encoder = OrdinalEncoder(handle_unknown='use_encoded_value',
                                 unknown_value=-1)

# Fit on training data
ordinal_encoder.fit(np.array([1, 2, 3, 4, 5]).reshape(-1, 1))

# Transform; notice that 0 and 6 are values that were never seen before
ordinal_encoder.transform(np.array([0, 1, 2, 3, 4, 5, 6]).reshape(-1, 1))

Output:

array([[-1.],
       [ 0.],
       [ 1.],
       [ 2.],
       [ 3.],
       [ 4.],
       [-1.]])