Pandas sum by groupby, but exclude certain columns

What is the best way to perform a groupby on a pandas DataFrame while excluding certain columns from that groupby? For example, I have the following DataFrame:

Code   Country      Item_Code   Item    Ele_Code    Unit    Y1961    Y1962   Y1963
2      Afghanistan  15          Wheat   5312        Ha      10       20      30
2      Afghanistan  25          Maize   5312        Ha      10       20      30
4      Angola       15          Wheat   7312        Ha      30       40      50
4      Angola       25          Maize   7312        Ha      30       40      50

I want to group by the Country and Item_Code columns and sum only the values in the Y1961, Y1962, and Y1963 columns. The resulting DataFrame should look like this:

Code   Country      Item_Code   Item    Ele_Code    Unit    Y1961    Y1962   Y1963
2      Afghanistan  15          C3      5312        Ha      20       40       60
4      Angola       25          C4      7312        Ha      60       80      100

Currently I am doing this:

df.groupby('Country').sum()

However, this also sums up the values in the Item_Code column. Is there any way to specify which columns to include in the sum() operation and which to exclude?


The agg function will do this for you. Pass it a dict mapping each column name to the aggregation you want for that column:

df.groupby(['Country', 'Item_Code']).agg({'Y1961': 'sum', 'Y1962': ['sum', 'mean']})  # two output columns from a single input column

This will display only the groupby columns and the specified aggregate columns. In this example I applied two agg functions to 'Y1962'.

To get exactly what you hoped to see, include the other columns in the groupby and apply sums to the Y columns in the frame:

df.groupby(['Code', 'Country', 'Item_Code', 'Item', 'Ele_Code', 'Unit']).agg({'Y1961': 'sum', 'Y1962': 'sum', 'Y1963': 'sum'})
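Putting the dict-based agg together with a minimal reconstruction of the question's frame (the data below is rebuilt from the sample table, not from a real dataset):

```python
import pandas as pd

# Minimal reconstruction of the frame shown in the question
df = pd.DataFrame({
    "Code": [2, 2, 4, 4],
    "Country": ["Afghanistan", "Afghanistan", "Angola", "Angola"],
    "Item_Code": [15, 25, 15, 25],
    "Item": ["Wheat", "Maize", "Wheat", "Maize"],
    "Ele_Code": [5312, 5312, 7312, 7312],
    "Unit": ["Ha", "Ha", "Ha", "Ha"],
    "Y1961": [10, 10, 30, 30],
    "Y1962": [20, 20, 40, 40],
    "Y1963": [30, 30, 50, 50],
})

# Only the columns named in the dict appear in the result;
# Item_Code is a grouping key, so it is not summed.
result = df.groupby(["Country", "Item_Code"]).agg(
    {"Y1961": "sum", "Y1962": "sum", "Y1963": "sum"}
)
print(result)
```

Columns such as Item and Unit are simply absent from `result`, which is the exclusion behaviour the question asks for.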

You can select the columns of a groupby:

In [11]: df.groupby(['Country', 'Item_Code'])[["Y1961", "Y1962", "Y1963"]].sum()
Out[11]:
                       Y1961  Y1962  Y1963
Country     Item_Code
Afghanistan 15            10     20     30
            25            10     20     30
Angola      15            30     40     50
            25            30     40     50

Note that the list passed must be a subset of the columns otherwise you'll see a KeyError.

If you are looking for a more general way to apply this to many columns, you can build a list of column names and pass it to the indexing operator of the grouped DataFrame. In your case, for example:

columns = ['Y' + str(year) for year in range(1967, 2011)]


df.groupby('Country')[columns].agg('sum')
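The same pattern end-to-end, on a small hypothetical frame with just three year columns for brevity:

```python
import pandas as pd

# Hypothetical frame with a few year columns
df = pd.DataFrame({
    "Country": ["Afghanistan", "Afghanistan", "Angola"],
    "Y1961": [10, 10, 30],
    "Y1962": [20, 20, 40],
    "Y1963": [30, 30, 50],
})

# Build the list of year-column names instead of typing them all out
columns = ["Y" + str(year) for year in range(1961, 1964)]

grouped = df.groupby("Country")[columns].agg("sum")
print(grouped)
```

This scales to any number of generated column names without changes to the groupby call.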

If you want to add a suffix/prefix to the aggregated column names, use add_suffix() / add_prefix().

df.groupby(["Code", "Country"])[["Y1961", "Y1962", "Y1963"]].sum().add_suffix("_total")
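A runnable sketch of the suffix idea, using a cut-down version of the question's data (one year column, invented here for brevity):

```python
import pandas as pd

df = pd.DataFrame({
    "Code": [2, 2],
    "Country": ["Afghanistan", "Afghanistan"],
    "Y1961": [10, 10],
})

# add_suffix renames every aggregated column, e.g. Y1961 -> Y1961_total
totals = df.groupby(["Code", "Country"])[["Y1961"]].sum().add_suffix("_total")
print(totals.columns.tolist())  # ['Y1961_total']
```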



If you want to retain Code and Country as columns after aggregation, set as_index=False in groupby() or use reset_index().

df.groupby(["Code", "Country"], as_index=False)[["Y1961", "Y1962", "Y1963"]].sum()
# df.groupby(["Code", "Country"])[["Y1961", "Y1962", "Y1963"]].sum().reset_index()
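To see the difference as_index makes, here is a small self-contained example (data invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "Code": [2, 2, 4],
    "Country": ["Afghanistan", "Afghanistan", "Angola"],
    "Y1961": [10, 10, 30],
})

# as_index=False keeps the grouping keys as ordinary columns
# instead of moving them into a MultiIndex
flat = df.groupby(["Code", "Country"], as_index=False)[["Y1961"]].sum()
print(flat.columns.tolist())  # ['Code', 'Country', 'Y1961']
```

With the default `as_index=True`, 'Code' and 'Country' would form the index and would not appear in `flat.columns`.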
