How to filter a pandas DataFrame with 'in' and 'not in', like in SQL

How can I achieve the equivalent of SQL's IN and NOT IN?

I have a list of the required values. Here's the scenario:

df = pd.DataFrame({'country': ['US', 'UK', 'Germany', 'China']})
countries_to_keep = ['UK', 'China']


# pseudo-code:
df[df['country'] not in countries_to_keep]

My current way of doing this is as follows:

df = pd.DataFrame({'country': ['US', 'UK', 'Germany', 'China']})
df2 = pd.DataFrame({'country': ['UK', 'China'], 'matched': True})


# IN
df.merge(df2, how='inner', on='country')


# NOT IN
not_in = df.merge(df2, how='left', on='country')
not_in = not_in[pd.isnull(not_in['matched'])]

But this seems like a horrible kludge. Can anyone improve on it?


You can use pd.Series.isin.

For "IN", use: something.isin(somewhere)

For "NOT IN": ~something.isin(somewhere)

As a worked example:

>>> df
country
0        US
1        UK
2   Germany
3     China
>>> countries_to_keep
['UK', 'China']
>>> df.country.isin(countries_to_keep)
0    False
1     True
2    False
3     True
Name: country, dtype: bool
>>> df[df.country.isin(countries_to_keep)]
country
1        UK
3     China
>>> df[~df.country.isin(countries_to_keep)]
country
0        US
2   Germany

I usually do generic filtering over rows like this:

criterion = lambda row: row['country'] not in countries_to_keep
not_in = df[df.apply(criterion, axis=1)]

I wanted to filter out rows in dfbc whose BUSINESS_ID also appears in the BUSINESS_ID column of dfProfilesBusIds:

dfbc = dfbc[~dfbc['BUSINESS_ID'].isin(dfProfilesBusIds['BUSINESS_ID'])]
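
dfbc and dfProfilesBusIds are not shown in this answer, so here is a minimal, self-contained sketch with made-up frames to show how this cross-DataFrame "not in" filter behaves:

import pandas as pd

# Hypothetical stand-ins for dfbc and dfProfilesBusIds (not the poster's data)
dfbc = pd.DataFrame({'BUSINESS_ID': [1, 2, 3, 4], 'value': ['a', 'b', 'c', 'd']})
dfProfilesBusIds = pd.DataFrame({'BUSINESS_ID': [2, 4]})

# Keep only the rows whose BUSINESS_ID does NOT appear in dfProfilesBusIds
dfbc = dfbc[~dfbc['BUSINESS_ID'].isin(dfProfilesBusIds['BUSINESS_ID'])]
print(dfbc)
#    BUSINESS_ID value
# 0            1     a
# 2            3     c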

An alternative solution that uses the .query() method:

In [5]: df.query("countries in @countries_to_keep")
Out[5]:
countries
1        UK
3     China


In [6]: df.query("countries not in @countries_to_keep")
Out[6]:
countries
0        US
2   Germany

df = pd.DataFrame({'countries': ['US', 'UK', 'Germany', 'China']})
countries = ['UK', 'China']

To implement 'in':

df[df.countries.isin(countries)]

To implement 'not in' (i.e. keep the rest of the countries):

df[df.countries.isin([x for x in np.unique(df.countries) if x not in countries])]  # requires numpy imported as np

How do I implement 'in' and 'not in' for a pandas DataFrame?

Pandas provides two methods, Series.isin and DataFrame.isin, for Series and DataFrames respectively.


Filter a DataFrame based on ONE column (this also applies to Series)

The most common case is to apply an isin condition on a specific column to filter rows in a DataFrame.

df = pd.DataFrame({'countries': ['US', 'UK', 'Germany', np.nan, 'China']})
df

countries
0        US
1        UK
2   Germany
3       NaN
4     China


c1 = ['UK', 'China']             # list
c2 = {'Germany'}                 # set
c3 = pd.Series(['China', 'US'])  # Series
c4 = np.array(['US', 'UK'])      # array

Series.isin accepts various types as inputs. The following are all valid ways of getting what you want:

df['countries'].isin(c1)


0    False
1     True
2    False
3    False
4     True
Name: countries, dtype: bool


# `in` operation
df[df['countries'].isin(c1)]


countries
1        UK
4     China


# `not in` operation
df[~df['countries'].isin(c1)]


countries
0        US
2   Germany
3       NaN

# Filter with `set` (tuples work too)
df[df['countries'].isin(c2)]


countries
2   Germany

# Filter with another Series
df[df['countries'].isin(c3)]


countries
0        US
4     China

# Filter with array
df[df['countries'].isin(c4)]


countries
0        US
1        UK

Filter on many columns

Sometimes, you will want to apply an "in" membership check with some search terms over multiple columns:

df2 = pd.DataFrame({
    'A': ['x', 'y', 'z', 'q'], 'B': ['w', 'a', np.nan, 'x'], 'C': np.arange(4)})
df2


A    B  C
0  x    w  0
1  y    a  1
2  z  NaN  2
3  q    x  3


c1 = ['x', 'w', 'p']

To apply the isin condition to both columns "A" and "B", use DataFrame.isin:

df2[['A', 'B']].isin(c1)


A      B
0   True   True
1  False  False
2  False  False
3  False   True

From this, to retain rows where at least one column is True, we can use any along the first axis:

df2[['A', 'B']].isin(c1).any(axis=1)


0     True
1    False
2    False
3     True
dtype: bool


df2[df2[['A', 'B']].isin(c1).any(axis=1)]


A  B  C
0  x  w  0
3  q  x  3

Note that if you want to search every column, just omit the column selection step and do:

df2.isin(c1).any(axis=1)

Similarly, to retain rows where ALL columns are True, use all in the same manner as before:

df2[df2[['A', 'B']].isin(c1).all(axis=1)]


A  B  C
0  x  w  0

Notable mentions: numpy.isin, query, list comprehensions (string data)

In addition to the methods described above, you can also use the NumPy equivalent: numpy.isin.

# `in` operation
df[np.isin(df['countries'], c1)]


countries
1        UK
4     China


# `not in` operation
df[np.isin(df['countries'], c1, invert=True)]


countries
0        US
2   Germany
3       NaN

Why is it worth considering? NumPy functions are usually a bit faster than their pandas equivalents because of lower overhead. Since this is an elementwise operation that does not depend on index alignment, there are very few situations where this method is not a suitable replacement for pandas' isin.
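
As a rough check of that claim, here is a minimal timing sketch on the countries example above (the repetition factor and number of runs are arbitrary; actual numbers will vary with machine, data size, and dtype):

import timeit

import numpy as np
import pandas as pd

df = pd.DataFrame({'countries': ['US', 'UK', 'Germany', np.nan, 'China'] * 100_000})
c1 = ['UK', 'China']

t_pandas = timeit.timeit(lambda: df['countries'].isin(c1), number=10)
t_numpy = timeit.timeit(lambda: np.isin(df['countries'], c1), number=10)
print(f"Series.isin: {t_pandas:.3f}s  np.isin: {t_numpy:.3f}s")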

Pandas routines are usually iterative when working with strings, because string operations are hard to vectorise. There is a lot of evidence to suggest that list comprehensions will be faster here. We resort to an in check now.

c1_set = set(c1) # Using `in` with `sets` is a constant time operation...
# This doesn't matter for pandas because the implementation differs.
# `in` operation
df[[x in c1_set for x in df['countries']]]


countries
1        UK
4     China


# `not in` operation
df[[x not in c1_set for x in df['countries']]]


countries
0        US
2   Germany
3       NaN

However, it is a lot clunkier to specify, so don't use it unless you know what you're doing.

Lastly, there's also DataFrame.query, which has been covered in this answer.

Collating possible solutions from the answers:

For IN: df[df['A'].isin([3, 6])]

For NOT IN:

  1. df[-df["A"].isin([3, 6])]

  2. df[~df["A"].isin([3, 6])]

  3. df[df["A"].isin([3, 6]) == False]

  4. df[np.logical_not(df["A"].isin([3, 6]))]
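
As a quick sanity check, here is a minimal sketch (the data is made up; the column name 'A' and the values 3 and 6 come from the list above) showing that variants 2-4 produce the same result. Variant 1, which uses unary minus on a boolean Series, may be rejected by newer pandas versions, so it is left out here:

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 3, 5, 6, 8]})

v2 = df[~df["A"].isin([3, 6])]
v3 = df[df["A"].isin([3, 6]) == False]
v4 = df[np.logical_not(df["A"].isin([3, 6]))]

assert v2.equals(v3) and v3.equals(v4)
print(v2)
#    A
# 0  1
# 2  5
# 4  8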

A trick, if you want to keep the order of your list:

df = pd.DataFrame({'country': ['US', 'UK', 'Germany', 'China']})
countries_to_keep = ['Germany', 'US']




ind = [df.index[df['country'] == i].tolist() for i in countries_to_keep]
flat_ind = [item for sublist in ind for item in sublist]


df.reindex(flat_ind)


country
2  Germany
0       US
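
An alternative sketch (not from the answer above, and assuming pandas >= 1.1 for the key argument of sort_values): filter with isin first, then sort the rows by their position in countries_to_keep:

import pandas as pd

df = pd.DataFrame({'country': ['US', 'UK', 'Germany', 'China']})
countries_to_keep = ['Germany', 'US']

# Map each value to its position in countries_to_keep and sort by it
order = {c: i for i, c in enumerate(countries_to_keep)}
kept = df[df['country'].isin(countries_to_keep)]
kept = kept.sort_values('country', key=lambda s: s.map(order))
print(kept)
#    country
# 2  Germany
# 0       US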

My 2c worth: I needed a combination of an "in" check and an if/else statement for a DataFrame, and this worked for me:

sale_method = pd.DataFrame(model_data["Sale Method"].str.upper())
sale_method["sale_classification"] = np.where(
    sale_method["Sale Method"].isin(["PRIVATE"]),
    "private",
    np.where(
        sale_method["Sale Method"].str.contains("AUCTION"), "auction", "other"
    ),
)
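
model_data is not shown in that answer. As a self-contained illustration of the same isin-plus-nested-np.where pattern, here is a sketch using the question's country data (the 'region' labels are made up):

import numpy as np
import pandas as pd

df = pd.DataFrame({'country': ['US', 'UK', 'Germany', 'China']})

# Classify each row: an 'in' check first, then a second nested condition
df['region'] = np.where(
    df['country'].isin(['UK', 'Germany']),
    'europe',
    np.where(df['country'] == 'US', 'north_america', 'other'),
)
print(df)
#    country         region
# 0       US  north_america
# 1       UK         europe
# 2  Germany         europe
# 3    China          other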

Why is nobody talking about the performance of the various filtering methods? In fact, this topic pops up here regularly (see the examples). I ran my own performance test on a large data set. It is very interesting and instructive.

df = pd.DataFrame({'animals': np.random.choice(['cat', 'dog', 'mouse', 'birds'], size=10**7),
                   'number': np.random.randint(0, 100, size=(10**7,))})


df.info()


<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10000000 entries, 0 to 9999999
Data columns (total 2 columns):
#   Column   Dtype
---  ------   -----
0   animals  object
1   number   int64
dtypes: int64(1), object(1)
memory usage: 152.6+ MB

%%timeit
# .isin() by one column
conditions = ['cat', 'dog']
df[df.animals.isin(conditions)]
367 ms ± 2.34 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%%timeit
# .query() by one column
conditions = ['cat', 'dog']
df.query('animals in @conditions')
395 ms ± 3.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%%timeit
# .loc[] with chained boolean conditions
df.loc[(df.animals=='cat')|(df.animals=='dog')]
987 ms ± 5.17 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%%timeit
# row-wise .apply()
df[df.apply(lambda x: x['animals'] in ['cat', 'dog'], axis=1)]
41.9 s ± 490 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%%timeit
# .loc[] on an index built from the column
new_df = df.set_index('animals')
new_df.loc[['cat', 'dog'], :]
3.64 s ± 62.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%%timeit
# .isin() on the index
new_df = df.set_index('animals')
new_df[new_df.index.isin(['cat', 'dog'])]
469 ms ± 8.98 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%%timeit
# inner merge with a Series of the wanted values
s = pd.Series(['cat', 'dog'], name='animals')
df.merge(s, on='animals', how='inner')
796 ms ± 30.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

So, the isin method turned out to be the fastest, and the apply() method the slowest, which is not surprising.

You can also use .isin() inside .query():

df.query('country.isin(@countries_to_keep).values')


# Or alternatively:
df.query('country.isin(["UK", "China"]).values')

To negate your query, use ~:

df.query('~country.isin(@countries_to_keep).values')

UPDATE:

Another way is to use comparison operators:

df.query('country == @countries_to_keep')


# Or alternatively:
df.query('country == ["UK", "China"]')

To negate the query, use !=:

df.query('country != @countries_to_keep')