Querying for NaN and other names in pandas

Suppose I have a DataFrame df where a column value holds some float values and some NaNs. How can I get the part of the DataFrame where value is NaN, using the query syntax?

For example, the following does not work:

df.query( '(value < 10) or (value == NaN)' )

I get name NaN is not defined (the same happens with df.query('value == NaN')).

In general, is there any way to use numerical names such as inf, nan, pi, e, and so on inside a query?


In general, you could use @local_variable_name, so something like

>>> pi = np.pi; nan = np.nan
>>> df = pd.DataFrame({"value": [3,4,9,10,11,np.nan,12]})
>>> df.query("(value < 10) and (value > @pi)")
value
1      4
2      9

would work, but nan isn't equal to itself, so value == NaN will always be false. One way to hack around this is to use that fact, and use value != value as an isnan check. We have

>>> df.query("(value < 10) or (value == @nan)")
value
0      3
1      4
2      9

but

>>> df.query("(value < 10) or (value != value)")
value
0      3
1      4
2      9
5    NaN
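
A self-contained version of the snippets above, runnable as a script (same data as in the question):

```python
import numpy as np
import pandas as pd

# Local variable referenced with the @ prefix inside query
pi = np.pi

df = pd.DataFrame({"value": [3, 4, 9, 10, 11, np.nan, 12]})

# @pi resolves to the local variable; plain "pi" would raise a NameError
result = df.query("(value < 10) and (value > @pi)")
print(result["value"].tolist())  # [4.0, 9.0]
```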

For rows where value is not null

df.query("value == value")

For rows where value is null

df.query("value != value")
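
A minimal sketch verifying both filters on the question's data (column name value assumed, as above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"value": [3, 4, 9, 10, 11, np.nan, 12]})

# NaN is the only float value that compares unequal to itself,
# so "value != value" selects exactly the NaN rows and
# "value == value" selects everything else.
nulls = df.query("value != value")
non_nulls = df.query("value == value")

print(len(nulls), len(non_nulls))  # 1 6
```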

According to this answer you can use:

df.query('value < 10 | value.isnull()', engine='python')

I verified that it works.
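
A runnable sketch of that query; engine="python" lets .query evaluate the .isnull() method call:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"value": [3, 4, 9, 10, 11, np.nan, 12]})

# engine="python" allows Series methods such as .isnull() inside query
result = df.query("value < 10 | value.isnull()", engine="python")
print(result)
```

This selects the three values below 10 plus the single NaN row, four rows in total.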

Pandas fills empty cells in a DataFrame with NumPy's nan values. As it turns out, this has some funny properties. For one, nothing is equal to this kind of null, even itself. As a result, you can't search for it by checking for any particular equality.

In : 'nan' == np.nan
Out: False


In : None == np.nan
Out: False


In : np.nan == np.nan
Out: False

However, because a cell containing a np.nan value will not be equal to anything, including another np.nan value, we can check to see if it is unequal to itself.

In : np.nan != np.nan
Out: True

You can take advantage of this with Pandas' query method by simply searching for cells where the value in a particular column is unequal to itself.

df.query('a != a')

or

df[df['a'] != df['a']]
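
A minimal sketch (with a hypothetical column a) confirming that the query form and the boolean-indexing form select the same rows as the idiomatic isna() mask:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, 3.0]})

# The self-inequality mask and the isna() mask pick out the same rows
by_inequality = df[df["a"] != df["a"]]
by_isna = df[df["a"].isna()]

print(by_inequality.equals(by_isna))  # True
```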

I think other answers will normally be better. In one case, my query had to go through eval (use eval very carefully) and the syntax below was useful. Requiring a number to be both less than and greater than or equal to excludes all numbers, leaving only null-like values.

df = pd.DataFrame({'value':[3,4,9,10,11,np.nan, 12]})


df.query("value < 10 or (~(value < 10) and ~(value >= 10))")
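
A quick check of this double-negation trick: every real number is either < 10 or >= 10, so only NaN fails both comparisons and survives the conjunction:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"value": [3, 4, 9, 10, 11, np.nan, 12]})

# Keeps values < 10 plus NaN, which satisfies neither comparison
result = df.query("value < 10 or (~(value < 10) and ~(value >= 10))")
print(result["value"].tolist())
```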

You can use the isna and notna Series methods, which is concise and readable.

import pandas as pd
import numpy as np


df = pd.DataFrame({'value': [3, 4, 9, 10, 11, np.nan, 12]})
available = df.query("value.notna()")
print(available)


#    value
# 0    3.0
# 1    4.0
# 2    9.0
# 3   10.0
# 4   11.0
# 6   12.0


not_available = df.query("value.isna()")
print(not_available)


#    value
# 5    NaN

If you have numexpr installed, you need to pass engine="python" to make this work with .query. numexpr is recommended by pandas to speed up the performance of .query on larger datasets.

available = df.query("value.notna()", engine="python")
print(available)

Alternatively, you can use the top-level pd.isna function by referencing it as a local variable with the @ prefix. Again, passing engine="python" is required when numexpr is present.

import pandas as pd
import numpy as np


df = pd.DataFrame({'value': [3, 4, 9, 10, 11, np.nan, 12]})
df.query("@pd.isna(value)")


#    value
# 5    NaN

df.query("value == 'NaN'") is sometimes suggested, but it compares against the string 'NaN', so it only matches cells that literally contain that string; it does not match real floating-point NaN values in a numeric column.
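
For illustration (column and data hypothetical), the string comparison matches only a literal 'NaN' string in an object-dtype column, while a real np.nan cell is left out:

```python
import numpy as np
import pandas as pd

# Object-dtype column mixing the string 'NaN' with a real NaN value
df = pd.DataFrame({"value": ["NaN", np.nan, "3"]})

# Matches only the row holding the literal string 'NaN'
matches = df.query("value == 'NaN'")
print(len(matches))  # 1
```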