Binning a column with pandas

I have a dataframe column with numeric values:

df['percentage'].head()
46.5
44.2
100.0
42.12

I would like to bin the column using the following bins:

bins = [0, 1, 5, 10, 25, 50, 100]

How can I get the result as bins with their value counts?

[0, 1] bin amount
[1, 5] etc
[5, 10] etc
...

You can use pandas.cut:

bins = [0, 1, 5, 10, 25, 50, 100]
df['binned'] = pd.cut(df['percentage'], bins)
print (df)
percentage     binned
0       46.50   (25, 50]
1       44.20   (25, 50]
2      100.00  (50, 100]
3       42.12   (25, 50]

bins = [0, 1, 5, 10, 25, 50, 100]
labels = [1,2,3,4,5,6]
df['binned'] = pd.cut(df['percentage'], bins=bins, labels=labels)
print (df)
percentage binned
0       46.50      5
1       44.20      5
2      100.00      6
3       42.12      5

Or numpy.searchsorted:

bins = [0, 1, 5, 10, 25, 50, 100]
df['binned'] = np.searchsorted(bins, df['percentage'].values)
print (df)
percentage  binned
0       46.50       5
1       44.20       5
2      100.00       6
3       42.12       5
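Note that np.searchsorted places boundary values differently depending on its `side` parameter. A minimal sketch of the difference, using the same bins as above: with the default `side='left'`, a value equal to a bin edge gets the index of that edge, which matches pd.cut's right-closed intervals; with `side='right'`, an exact 100.0 is pushed past the last bin.

```python
import numpy as np

bins = [0, 1, 5, 10, 25, 50, 100]
values = np.array([46.5, 44.2, 100.0, 42.12])

# Default side='left': 100.0 lands on the index of the edge itself (6)
left = np.searchsorted(bins, values)
# left -> [5, 5, 6, 5]

# side='right': a value equal to an edge is placed after it,
# so an exact 100.0 falls past the last bin (index 7)
right = np.searchsorted(bins, values, side='right')
# right -> [5, 5, 7, 5]
```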

Then use value_counts, or groupby aggregated with size:

s = pd.cut(df['percentage'], bins=bins).value_counts()
print (s)
(25, 50]     3
(50, 100]    1
(10, 25]     0
(5, 10]      0
(1, 5]       0
(0, 1]       0
Name: percentage, dtype: int64
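One detail worth noting: value_counts orders the result by frequency, which is why the intervals above appear out of order. A small sketch (reusing the sample data) that restores interval order with sort_index():

```python
import pandas as pd

df = pd.DataFrame({'percentage': [46.5, 44.2, 100.0, 42.12]})
bins = [0, 1, 5, 10, 25, 50, 100]

# value_counts sorts by count; sort_index re-sorts by the interval order
s = pd.cut(df['percentage'], bins=bins).value_counts().sort_index()
print(s)
```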

s = df.groupby(pd.cut(df['percentage'], bins=bins)).size()
print (s)
percentage
(0, 1]       0
(1, 5]       0
(5, 10]      0
(10, 25]     0
(25, 50]     3
(50, 100]    1
dtype: int64

By default, cut returns a categorical.

Series methods like Series.value_counts() will use all categories, even if some categories are not present in the data.
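A small illustration of that categorical behaviour (a sketch, assuming a reasonably recent pandas version): all six intervals become categories even though only two occur in the data, and groupby's `observed` parameter controls whether the empty bins are kept.

```python
import pandas as pd

df = pd.DataFrame({'percentage': [46.5, 44.2, 100.0, 42.12]})
bins = [0, 1, 5, 10, 25, 50, 100]
binned = pd.cut(df['percentage'], bins=bins)

# pd.cut returns a categorical: all six intervals are categories,
# even though only two of them appear in the data
assert len(binned.cat.categories) == 6

# Pass observed=False explicitly to keep the empty bins in the result;
# newer pandas versions are moving the default toward observed=True,
# which would drop them
s = df.groupby(binned, observed=False).size()
print(s)
```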

Use the Numba module to speed it up.

On big datasets (more than 500k rows), pd.cut can be quite slow for binning data.

I wrote my own function in Numba with just-in-time compilation, which is roughly 6x faster than pd.cut:

from numba import njit
import numpy as np


@njit
def cut(arr):
    bins = np.empty(arr.shape[0])
    for idx, x in enumerate(arr):
        if (x >= 0) & (x < 1):
            bins[idx] = 1
        elif (x >= 1) & (x < 5):
            bins[idx] = 2
        elif (x >= 5) & (x < 10):
            bins[idx] = 3
        elif (x >= 10) & (x < 25):
            bins[idx] = 4
        elif (x >= 25) & (x < 50):
            bins[idx] = 5
        elif (x >= 50) & (x < 100):
            bins[idx] = 6
        else:
            bins[idx] = 7
    return bins


cut(df['percentage'].to_numpy())


# array([5., 5., 7., 5.])

Optional: you can also map the numeric bins to string labels:

a = cut(df['percentage'].to_numpy())


conversion_dict = {1: 'bin1',
                   2: 'bin2',
                   3: 'bin3',
                   4: 'bin4',
                   5: 'bin5',
                   6: 'bin6',
                   7: 'bin7'}


bins = list(map(conversion_dict.get, a))


# ['bin5', 'bin5', 'bin7', 'bin5']

Speed comparison:

# Create a dataframe of 8 million rows for testing
dfbig = pd.concat([df]*2000000, ignore_index=True)


dfbig.shape


# (8000000, 1)
%%timeit
cut(dfbig['percentage'].to_numpy())


# 38 ms ± 616 µs per loop (mean ± standard deviation of 7 runs, 10 loops each)
%%timeit
bins = [0, 1, 5, 10, 25, 50, 100]
labels = [1,2,3,4,5,6]
pd.cut(dfbig['percentage'], bins=bins, labels=labels)


# 215 ms ± 9.76 ms per loop (mean ± standard deviation of 7 runs, 10 loops each)

We can also use np.select:

bins = [0, 1, 5, 10, 25, 50, 100]
df['groups'] = np.select([df['percentage'].between(i, j, inclusive='right')
                          for i, j in zip(bins, bins[1:])],
                         [1, 2, 3, 4, 5, 6])

Output:

   percentage  groups
0       46.50       5
1       44.20       5
2      100.00       6
3       42.12       5
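One caveat with the np.select approach: a value that matches none of the conditions, such as an exact 0 with `inclusive='right'`, falls through to np.select's `default`, which is 0. A small sketch of that edge case:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'percentage': [0.0, 46.5]})
bins = [0, 1, 5, 10, 25, 50, 100]

conds = [df['percentage'].between(i, j, inclusive='right')
         for i, j in zip(bins, bins[1:])]

# With inclusive='right', an exact 0.0 satisfies no condition and
# receives np.select's default value of 0 instead of a bin number
groups = np.select(conds, [1, 2, 3, 4, 5, 6])
# groups -> [0, 5]
```

Pass an explicit `default` (e.g. `np.select(conds, labels, default=-1)`) if you want such values flagged rather than silently assigned 0.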