How to one-hot-encode a pandas column containing a list of elements?

I would like to break down a pandas column consisting of a list of elements into as many columns as there are unique elements, i.e. one-hot-encode them (with a value of 1 meaning that a given element exists in a row and 0 meaning that it does not).

For example, taking the dataframe df

Col1   Col2         Col3
C      33     [Apple, Orange, Banana]
A      2.5    [Apple, Grape]
B      42     [Banana]

I would like to convert this to:

df

Col1   Col2   Apple   Orange   Banana   Grape
C      33     1        1        1       0
A      2.5    1        0        0       1
B      42     0        0        1       0

How can I achieve this with pandas/sklearn?


Use get_dummies:

df_out = df.assign(**pd.get_dummies(df.Col3.apply(lambda x:pd.Series(x)).stack().reset_index(level=1,drop=True)).sum(level=0))

Output:

  Col1  Col2                     Col3  Apple  Banana  Grape  Orange
0    C  33.0  [Apple, Orange, Banana]      1       1      0       1
1    A   2.5           [Apple, Grape]      1       0      1       0
2    B  42.0                 [Banana]      0       1      0       0

Clean up by dropping the original column:

df_out.drop('Col3',axis=1)

Output:

  Col1  Col2  Apple  Banana  Grape  Orange
0    C  33.0      1       1      0       1
1    A   2.5      1       0      1       0
2    B  42.0      0       1      0       0
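
If the chained expression is hard to parse, this sketch shows the intermediate steps; the groupby(level=0).sum() form is an assumed substitute for newer pandas versions where Series.sum(level=0) is no longer available:

# Expand each list into its own row; the outer index keeps the original row label
stacked = df.Col3.apply(lambda x: pd.Series(x)).stack().reset_index(level=1, drop=True)
# 0     Apple
# 0    Orange
# 0    Banana
# 1     Apple
# 1     Grape
# 2    Banana
# dtype: object

# One-hot encode the stacked values, then collapse back to one row per original label
pd.get_dummies(stacked).groupby(level=0).sum()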

You can loop through Col3 with apply and convert each element into a Series with the list as the index, which becomes the header in the resulting data frame:

pd.concat([
    df.drop("Col3", 1),
    df.Col3.apply(lambda x: pd.Series(1, x)).fillna(0)
], axis=1)


#Col1   Col2    Apple   Banana  Grape   Orange
#0  C   33.0      1.0      1.0    0.0     1.0
#1  A    2.5      1.0      0.0    1.0     0.0
#2  B   42.0      0.0      1.0    0.0     0.0
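
For a single row, the lambda builds a Series of ones indexed by that row's fruits; apply then stacks those per-row Series into a frame whose columns are the union of all fruits, and fillna(0) fills in the fruits a row does not contain. A minimal sketch for the first row:

pd.Series(1, ['Apple', 'Orange', 'Banana'])
# Apple     1
# Orange    1
# Banana    1
# dtype: int64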

You can get all the unique fruits in Col3 using a set comprehension as follows:

set(fruit for fruits in df.Col3 for fruit in fruits)

Using a dictionary comprehension, you can then go through each unique fruit and see if it is in the column.

>>> df[['Col1', 'Col2']].assign(**{fruit: [1 if fruit in cell else 0 for cell in df.Col3]
                                   for fruit in set(fruit for fruits in df.Col3
                                                    for fruit in fruits)})
Col1  Col2  Apple  Banana  Grape  Orange
0    C  33.0      1       1      0       1
1    A   2.5      1       0      1       0
2    B  42.0      0       1      0       0

Timings

dfs = pd.concat([df] * 1000)  # Use 3,000 rows in the dataframe.


# Solution 1 by @Alexander (me)
%%timeit -n 1000
dfs[['Col1', 'Col2']].assign(**{fruit: [1 if fruit in cell else 0 for cell in dfs.Col3]
                                for fruit in set(fruit for fruits in dfs.Col3 for fruit in fruits)})
# 10 loops, best of 3: 4.57 ms per loop


# Solution 2 by @Psidom
%%timeit -n 1000
pd.concat([
    dfs.drop("Col3", 1),
    dfs.Col3.apply(lambda x: pd.Series(1, x)).fillna(0)
], axis=1)
# 10 loops, best of 3: 748 ms per loop


# Solution 3 by @MaxU
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()


%%timeit -n 10
dfs.join(pd.DataFrame(mlb.fit_transform(dfs.Col3),
                      columns=mlb.classes_,
                      index=dfs.index))
# 10 loops, best of 3: 283 ms per loop


# Solution 4 by @ScottBoston
%%timeit -n 10
df_out = dfs.assign(**pd.get_dummies(dfs.Col3.apply(lambda x:pd.Series(x)).stack().reset_index(level=1,drop=True)).sum(level=0))
# 10 loops, best of 3: 512 ms per loop


But note: because pd.concat([df] * 1000) duplicates the index labels 0, 1 and 2, the sum(level=0) step in Solution 4 aggregates across every copy of each label, so the counts are inflated to 1000:
>>> print(df_out.head())
Col1  Col2                     Col3  Apple  Banana  Grape  Orange
0    C  33.0  [Apple, Orange, Banana]   1000    1000      0    1000
1    A   2.5           [Apple, Grape]   1000       0   1000       0
2    B  42.0                 [Banana]      0    1000      0       0
0    C  33.0  [Apple, Orange, Banana]   1000    1000      0    1000
1    A   2.5           [Apple, Grape]   1000       0   1000       0

We can also use sklearn.preprocessing.MultiLabelBinarizer:

Often we want to use a sparse DataFrame for real-world data in order to save a lot of RAM.

Sparse solution (for Pandas v0.25.0+)

from sklearn.preprocessing import MultiLabelBinarizer


mlb = MultiLabelBinarizer(sparse_output=True)


df = df.join(
    pd.DataFrame.sparse.from_spmatrix(
        mlb.fit_transform(df.pop('Col3')),
        index=df.index,
        columns=mlb.classes_))

Result:

In [38]: df
Out[38]:
Col1  Col2  Apple  Banana  Grape  Orange
0    C  33.0      1       1      0       1
1    A   2.5      1       0      1       0
2    B  42.0      0       1      0       0


In [39]: df.dtypes
Out[39]:
Col1                object
Col2               float64
Apple     Sparse[int32, 0]
Banana    Sparse[int32, 0]
Grape     Sparse[int32, 0]
Orange    Sparse[int32, 0]
dtype: object


In [40]: df.memory_usage()
Out[40]:
Index     128
Col1       24
Col2       24
Apple      16    #  <--- NOTE!
Banana     16    #  <--- NOTE!
Grape       8    #  <--- NOTE!
Orange      8    #  <--- NOTE!
dtype: int64
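
If you later need plain integer columns again (for example, for a library that does not handle sparse dtypes), the sparse columns can be converted back; a sketch assuming pandas 0.25+:

# Convert only the dummy columns back to dense integer columns
df[mlb.classes_] = df[mlb.classes_].sparse.to_dense()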

Dense solution

mlb = MultiLabelBinarizer()
df = df.join(pd.DataFrame(mlb.fit_transform(df.pop('Col3')),
                          columns=mlb.classes_,
                          index=df.index))

Result:

In [77]: df
Out[77]:
Col1  Col2  Apple  Banana  Grape  Orange
0    C  33.0      1       1      0       1
1    A   2.5      1       0      1       0
2    B  42.0      0       1      0       0

Option 1
Short Answer
pir_slow

df.drop('Col3', 1).join(df.Col3.str.join('|').str.get_dummies())


Col1  Col2  Apple  Banana  Grape  Orange
0    C  33.0      1       1      0       1
1    A   2.5      1       0      1       0
2    B  42.0      0       1      0       0
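
The .str accessor works here because Col3 holds lists of strings: .str.join('|') first collapses each list into one delimited string, and .str.get_dummies() then splits on its default '|' separator and one-hot encodes. A sketch of the intermediate Series:

df.Col3.str.join('|')
# 0    Apple|Orange|Banana
# 1            Apple|Grape
# 2                 Banana
# Name: Col3, dtype: object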

Option 2
Fast Answer
pir_fast

import numpy as np

v = df.Col3.values
l = [len(x) for x in v.tolist()]
f, u = pd.factorize(np.concatenate(v))
n, m = len(v), u.size
i = np.arange(n).repeat(l)


dummies = pd.DataFrame(
    np.bincount(i * m + f, minlength=n * m).reshape(n, m),
    df.index, u
)


df.drop('Col3', 1).join(dummies)


Col1  Col2  Apple  Orange  Banana  Grape
0    C  33.0      1       1       1      0
1    A   2.5      1       0       0      1
2    B  42.0      0       0       1      0
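
To see why the bincount trick works, here are the intermediate values the code above computes for the three example rows (shown as comments, for illustration):

# l         -> [3, 2, 1]                   number of fruits per row
# f         -> [0 1 2 0 3 2]               integer code of every fruit, row by row
# u         -> ['Apple' 'Orange' 'Banana' 'Grape']   unique fruits in order of first appearance
# i         -> [0 0 0 1 1 2]               row number each fruit came from
# i * m + f -> [0 1 2 4 7 10]              flat position in the n x m dummy grid
# np.bincount(i * m + f, minlength=12).reshape(3, 4)
#           -> [[1 1 1 0]
#               [1 0 0 1]
#               [0 0 1 0]]                 one row per record, one column per fruit in u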

Option 3
pir_alt1

df.drop('Col3', 1).join(
    pd.get_dummies(
        pd.DataFrame(df.Col3.tolist()).stack()
    ).astype(int).sum(level=0)
)


Col1  Col2  Apple  Orange  Banana  Grape
0    C  33.0      1       1       1      0
1    A   2.5      1       0       0      1
2    B  42.0      0       0       1      0

Timing Results
Code Below

[timing plot: runtime of each solution versus number of rows, produced by the code below]


from timeit import timeit  # used in the benchmark loop below


def maxu(df):
    mlb = MultiLabelBinarizer()
    d = pd.DataFrame(
        mlb.fit_transform(df.Col3.values),
        df.index, mlb.classes_
    )
    return df.drop('Col3', 1).join(d)


def bos(df):
    return df.drop('Col3', 1).assign(**pd.get_dummies(df.Col3.apply(lambda x: pd.Series(x)).stack().reset_index(level=1, drop=True)).sum(level=0))


def psi(df):
    return pd.concat([
        df.drop("Col3", 1),
        df.Col3.apply(lambda x: pd.Series(1, x)).fillna(0)
    ], axis=1)


def alex(df):
    return df[['Col1', 'Col2']].assign(**{fruit: [1 if fruit in cell else 0 for cell in df.Col3]
                                          for fruit in set(fruit for fruits in df.Col3
                                                           for fruit in fruits)})


def pir_slow(df):
    return df.drop('Col3', 1).join(df.Col3.str.join('|').str.get_dummies())


def pir_alt1(df):
    return df.drop('Col3', 1).join(pd.get_dummies(pd.DataFrame(df.Col3.tolist()).stack()).astype(int).sum(level=0))


def pir_fast(df):
    v = df.Col3.values
    l = [len(x) for x in v.tolist()]
    f, u = pd.factorize(np.concatenate(v))
    n, m = len(v), u.size
    i = np.arange(n).repeat(l)

    dummies = pd.DataFrame(
        np.bincount(i * m + f, minlength=n * m).reshape(n, m),
        df.index, u
    )

    return df.drop('Col3', 1).join(dummies)


results = pd.DataFrame(
    index=(1, 3, 10, 30, 100, 300, 1000, 3000),
    columns='maxu bos psi alex pir_slow pir_fast pir_alt1'.split()
)

for i in results.index:
    d = pd.concat([df] * i, ignore_index=True)
    for j in results.columns:
        stmt = '{}(d)'.format(j)
        setp = 'from __main__ import d, {}'.format(j)
        results.set_value(i, j, timeit(stmt, setp, number=10))

You can use explode (new in pandas 0.25.0) and crosstab:

s = df['Col3'].explode()
df[['Col1', 'Col2']].join(pd.crosstab(s.index, s))

or, using an assignment expression (Python 3.8+):

df[['Col1', 'Col2']].join(pd.crosstab((s:=df['Col3'].explode()).index, s))
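
explode turns every list element into its own row while keeping the original row label, so pd.crosstab(s.index, s) then counts how often each fruit appears per label. The intermediate Series looks like this:

s = df['Col3'].explode()
# 0     Apple
# 0    Orange
# 0    Banana
# 1     Apple
# 1     Grape
# 2    Banana
# Name: Col3, dtype: object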

Another approach, using the isin method:

from itertools import chain


lst = sorted(set(chain.from_iterable(df['Col3'])))
s = pd.Series(lst, index=lst)
df.join(df.pop('Col3').apply(lambda x: s.isin(x)).astype(int))

Output:

  Col1  Col2  Apple  Banana  Grape  Orange
0    C  33.0      1       1      0       1
1    A   2.5      1       0      1       0
2    B  42.0      0       1      0       0
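
For each row, s.isin(x) tests every unique fruit against that row's list, and astype(int) turns the booleans into 0/1; a sketch for the first row:

s.isin(['Apple', 'Orange', 'Banana']).astype(int)
# Apple     1
# Banana    1
# Grape     0
# Orange    1
# dtype: int64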