Pandas: create two new columns in a DataFrame with values computed from a pre-existing column

I am working with the pandas library and I want to add two new columns to a DataFrame df that has n columns (n > 0).
These new columns result from applying a function to one of the DataFrame's columns.

The function to apply looks like this:

def calculate(x):
    ...operate...
    return z, y

One way to create a new column for a function that returns only one value is:

df['new_col'] = df['column_A'].map(a_function)

So, what I want, and what I have tried unsuccessfully (*), is something like this:

(df['new_col_zetas'], df['new_col_ys']) = df['column_A'].map(calculate)

What is the best way to accomplish this? I scanned the documentation and found no clue.

* df['column_A'].map(calculate) returns a pandas Series, with each item containing a tuple z, y. Trying to assign this to two DataFrame columns produces a ValueError.
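A minimal sketch reproducing that ValueError (the body of calculate below is just an illustrative stand-in):

import pandas as pd

def calculate(x):
    # stand-in for the real function
    return x * 2, x * 3

df = pd.DataFrame({'column_A': [1, 2, 3]})
tuples = df['column_A'].map(calculate)   # Series of (z, y) tuples
# (df['z'], df['y']) = tuples            # ValueError: too many values to unpack (expected 2)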


I'd just use zip:

In [1]: from pandas import *


In [2]: def calculate(x):
...:     return x*2, x*3
...:


In [3]: df = DataFrame({'a': [1,2,3], 'b': [2,3,4]})


In [4]: df
Out[4]:
   a  b
0  1  2
1  2  3
2  3  4


In [5]: df["A1"], df["A2"] = zip(*df["a"].map(calculate))


In [6]: df
Out[6]:
   a  b  A1  A2
0  1  2   2   3
1  2  3   4   6
2  3  4   6   9
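What zip(*...) is doing here: map(calculate) yields a sequence of (z, y) pairs, and unpacking them into zip transposes the rows into two column-wise tuples. A minimal illustration:

pairs = [(2, 3), (4, 6), (6, 9)]   # what df["a"].map(calculate) produces
zs, ys = zip(*pairs)
zs   # (2, 4, 6)
ys   # (3, 6, 9)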

The top answer is flawed in my opinion. Hopefully, no one is mass importing all of pandas into their namespace with from pandas import *. Also, the map method should be reserved for cases where you pass it a dictionary or a Series. It can take a function, but that is what apply is for.

So, if you must use the above approach, I would write it like this

df["A1"], df["A2"] = zip(*df["a"].apply(calculate))

There's actually no reason to use zip here. You can simply do this:

df["A1"], df["A2"] = calculate(df['a'])

This second method is also much faster on larger DataFrames

import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1,2,3] * 100000, 'b': [2,3,4] * 100000})

DataFrame created with 300,000 rows

%timeit df["A1"], df["A2"] = calculate(df['a'])
2.65 ms ± 92.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)


%timeit df["A1"], df["A2"] = zip(*df["a"].apply(calculate))
159 ms ± 5.24 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

60x faster than zip


In general, avoid using apply

apply is generally not much faster than iterating over a Python list. Let's test the performance of a for-loop doing the same kind of work as above (here, squaring and cubing each value):

%%timeit
A1, A2 = [], []
for val in df['a']:
    A1.append(val**2)
    A2.append(val**3)

df['A1'] = A1
df['A2'] = A2


298 ms ± 7.14 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

So this is twice as slow, which isn't a terrible performance regression, but if we cythonize the above, we get much better performance. Assuming you are using IPython:

%load_ext cython


%%cython
cpdef power(vals):
    A1, A2 = [], []
    cdef double val
    for val in vals:
        A1.append(val**2)
        A2.append(val**3)
    return A1, A2


%timeit df['A1'], df['A2'] = power(df['a'])
72.7 ms ± 2.16 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Directly assigning without apply

You can get even greater speed improvements if you use the direct vectorized operations.

%timeit df['A1'], df['A2'] = df['a'] ** 2, df['a'] ** 3
5.13 ms ± 320 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

This takes advantage of NumPy's extremely fast vectorized operations instead of our loops. We now have a 30x speedup over the original.
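If you want to squeeze out a little more, the same arithmetic can be run on the underlying NumPy array, which skips some pandas overhead (an untimed sketch; .to_numpy() assumes pandas >= 0.24, use .values on older versions):

a = df['a'].to_numpy()   # raw NumPy array behind the Series
df['A1'], df['A2'] = a ** 2, a ** 3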


The simplest speed test with apply

The above example should clearly show how slow apply can be, but just so it's extra clear, let's look at the most basic example. Let's square a Series of 10 million numbers with and without apply.

def calc(x):
    return x ** 2

s = pd.Series(np.random.rand(10000000))


%timeit s.apply(calc)
3.3 s ± 57.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Without apply, it is 50x faster:

%timeit s ** 2
66 ms ± 2 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

You can also use pipe:

df['new_col1'], df['new_col2'] = df['column_A'].pipe(calculate)
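Here pipe just calls the function with the Series as its argument, so it carries the same requirement that calculate be vectorized:

# Series.pipe(f) is simply f(series), so the line above is equivalent to:
df['new_col1'], df['new_col2'] = calculate(df['column_A'])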