'invalid value encountered in double_scalars' warning, possibly from numpy

When I run my code I occasionally get these warnings, always in groups of four. I tried to locate the source by placing debug messages before and after certain statements to pin down where they come from.

Warning: invalid value encountered in double_scalars
Warning: invalid value encountered in double_scalars
Warning: invalid value encountered in double_scalars
Warning: invalid value encountered in double_scalars

Is this a Numpy warning? What is a double scalar?

The Numpy functions I use are

min(), argmin(), mean() and random.randn()

I also use Matplotlib.


It looks like a floating-point calculation error. Check the numpy.seterr function to get more information about where it happens.
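A minimal sketch of how numpy.seterr can help locate the source: promoting the "invalid" warning to an exception makes the traceback point at the exact statement that produced the invalid value (the 0/0 division here is just a stand-in for whatever your code is doing):

```python
import numpy as np

# Promote "invalid value" warnings to exceptions so a traceback
# is raised at the statement that produced the NaN.
np.seterr(invalid="raise")

try:
    np.float64(0.0) / np.float64(0.0)  # 0/0 is an invalid operation
except FloatingPointError as e:
    print("caught:", e)
```

Once you have found the offending line, you can restore the default behavior with `np.seterr(invalid="warn")`.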

Sometimes NaNs or null values in data will generate this error with Numpy. If you are ingesting data from say, a CSV file or something like that, and then operating on the data using numpy arrays, the problem could have originated with your data ingest. You could try feeding your code a small set of data with known values, and see if you get the same result.

In my case, I found out it was division by zero.

Zero-size array passed to numpy.mean raises this warning (as indicated in several comments).

For some other candidates:

  • median also raises this warning on zero-sized array.

Other candidates do not raise this warning:

  • min and argmin both raise ValueError on an empty array
  • randn takes *args; calling randn(*[]) returns a single random number
  • std and var return nan on an empty array
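The behaviors listed above can be checked quickly; a small sketch (warnings are captured only to show that mean warns rather than raises):

```python
import warnings
import numpy as np

empty = np.array([])

# mean emits a RuntimeWarning and returns nan on an empty array
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    m = np.mean(empty)
print(np.isnan(m))       # True
print(len(caught) > 0)   # True: a RuntimeWarning was recorded

# min raises instead of warning
try:
    empty.min()
except ValueError:
    print("min raised ValueError")

# std returns nan (with a warning) rather than raising
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    print(np.isnan(np.std(empty)))  # True
```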

I ran into a similar problem: Invalid value encountered in ... After spending a lot of time trying to figure out what was causing it, I believe in my case it was due to NaN values in my dataframe. Check out working with missing data in pandas.

>>> None == None
True
>>> np.nan == np.nan
False

Because NaN does not equal even itself, it can slip past equality checks, and arithmetic operations like division and multiplication on NaN values trigger this warning.
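A small sketch of that propagation, and of np.isnan as the reliable way to detect NaN (since == cannot):

```python
import numpy as np

x = np.nan
print(x == x)       # False: NaN never equals itself
print(x * 2)        # nan: NaN propagates through arithmetic
print(np.isnan(x))  # True: use isnan, not ==, to find NaNs
```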

A couple of things you can do to avoid this problem:

  1. Use pd.set_option to limit the number of decimals considered in your analysis, so an infinitesimally small number does not trigger a similar problem: ('display.float_format', lambda x: '%.3f' % x).

  2. Use df.round() to round the numbers so pandas drops the remaining digits from the analysis. And most importantly,

  3. Set NaN to zero with df = df.fillna(0). Be careful: filling NaN with zero may not be appropriate for your data set, because it treats those records as zero, so the N in mean, std, etc. also changes.
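To illustrate the caveat in point 3 with a toy dataframe (the column name a is arbitrary): filling NaN with zero changes N and therefore the statistics:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, 3.0]})

print(df["a"].mean())            # 2.0      (NaN skipped, N = 2)
print(df["a"].fillna(0).mean())  # 1.333... (0 counted,  N = 3)
```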

Whenever you are working with CSV imports, try using df.dropna() to avoid all such warnings or errors.
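A sketch of that with an in-memory CSV (io.StringIO stands in for a real file) showing dropna discarding the incomplete rows:

```python
import io
import pandas as pd

csv = io.StringIO("x,y\n1,2\n,4\n5,\n")  # two rows have a missing cell
df = pd.read_csv(csv)

print(len(df))           # 3
print(len(df.dropna()))  # 1: only the fully populated row survives
```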

I encountered this while calculating np.var(np.array([])). np.var divides by the size of the array, which is zero in this case.
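Reproducing that case (the warning is suppressed here only so we can inspect the nan result):

```python
import warnings
import numpy as np

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    v = np.var(np.array([]))  # sum over 0 elements divided by size 0

print(np.isnan(v))  # True
```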

As soon as you perform an operation involving NaN ('not a number'), math.inf, division by zero, etc., you get this warning. Beware that the result of an operation with NaN etc. is also NaN. For example:

import math as m
print(1 + m.nan)

has the output

nan