I recommend using iloc in addition to John Galt's answer: it works even with an unsorted integer index, because .ix first looks at the index labels while .iloc indexes purely by position. (Note that .ix is deprecated in modern pandas.)
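For example, a minimal sketch (the column name 'num', the target in value, and the deliberately reversed index are assumptions for illustration):

import pandas as pd

df = pd.DataFrame({'num': [0, 2, 4, 6, 8]}, index=[4, 3, 2, 1, 0])
value = 3
# argsort yields positions, so iloc picks the right row even though
# the integer index labels are in reverse order
print(df.iloc[(df['num'] - value).abs().argsort()[:1]])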
If your series is already sorted, you could use something like this.
import pandas as pd

def closest(df, col, val, direction):
    # count of rows whose value is <= val
    n = len(df[df[col] <= val])
    if direction < 0:
        n -= 1  # step back to the largest value <= val
    if n < 0 or n >= len(df):
        print('err - value outside range')
        return None
    # .ix is deprecated; positional lookup with .iloc works on the sorted column
    return df[col].iloc[n]
df = pd.DataFrame(pd.Series(range(0, 10, 2)), columns=['num'])
for find in range(-1, 2):
    lc = closest(df, 'num', find, -1)
    hc = closest(df, 'num', find, 1)
    print('Closest to {} is {}, lower and {}, higher.'.format(find, lc, hc))
df:
   num
0    0
1    2
2    4
3    6
4    8

Output:
err - value outside range
Closest to -1 is None, lower and 0, higher.
Closest to 0 is 0, lower and 2, higher.
Closest to 1 is 0, lower and 2, higher.
Apart from not completely answering the question, an extra disadvantage of the other algorithms discussed here is that they have to sort the entire list, which results in a complexity of ~N log(N).
However, it is possible to achieve the same result in ~N. This approach separates the dataframe into two subsets, one smaller and one larger than the desired value. The lower neighbour is then the largest value in the smaller subset, and vice versa for the upper neighbour.
This approach is similar to using partition in pandas, which can be really useful when dealing with large datasets where complexity becomes an issue.
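A minimal sketch of this partitioning approach (the function name find_neighbours and the hard-coded column 'num' match the benchmark code below; the handling of exact matches and out-of-range values is a simplifying assumption):

import pandas as pd

def find_neighbours(df, value):
    # exact matches need no partitioning
    exactmatch = df[df['num'] == value]
    if not exactmatch.empty:
        return exactmatch.index
    # lower neighbour: largest value in the subset below the target;
    # upper neighbour: smallest value in the subset above it
    lower = df[df['num'] < value]['num'].idxmax()
    upper = df[df['num'] > value]['num'].idxmin()
    return [lower, upper]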
Comparing both strategies shows that for large N, the partitioning strategy is indeed faster. For small N, the sorting strategy will be more efficient, as it is implemented at a much lower level. It is also a one-liner, which might increase code readability.
The code to replicate this plot can be seen below:
from matplotlib import pyplot as plt
import pandas
import numpy
import timeit

value = 3
sizes = numpy.logspace(2, 5, num=50, dtype=int)
sort_results, partition_results = [], []
for size in sizes:
    df = pandas.DataFrame({"num": 100 * numpy.random.random(size)})
    sort_results.append(timeit.Timer(
        "df.iloc[(df['num'] - value).abs().argsort()[:2]].index",
        globals={'df': df, 'value': value}).autorange())
    partition_results.append(timeit.Timer(
        'find_neighbours(df, value)',
        globals={'find_neighbours': find_neighbours, 'df': df, 'value': value}).autorange())

sort_time = [time / amount for amount, time in sort_results]
partition_time = [time / amount for amount, time in partition_results]

plt.plot(sizes, sort_time)
plt.plot(sizes, partition_time)
plt.legend(['Sorting', 'Partitioning'])
plt.title('Comparison of strategies')
plt.xlabel('Size of Dataframe')
plt.ylabel('Time in s')
plt.savefig('speed_comparison.png')
If the series is already sorted, an efficient method of finding the indexes is by using bisect functions.
An example:
from bisect import bisect_left

idx = bisect_left(df['num'].values, 3)
Let's consider that the column col of the dataframe df is sorted.
In the case where the value val is in the column, bisect_left will return the precise index of the value in the list, and bisect_right will return the index of the next position. In the case where the value is not in the list, both bisect_left and bisect_right will return the same index: the position where the value should be inserted to keep the list sorted.
Hence, to answer the question, the following code gives the index of val in col if it is found, and the indexes of the closest values otherwise. This solution works even when the values in the list are not unique.
from bisect import bisect_left, bisect_right

def get_closests(df, col, val):
    lower_idx = bisect_left(df[col].values, val)
    higher_idx = bisect_right(df[col].values, val)
    if higher_idx == lower_idx:  # val is not in the list
        return lower_idx - 1, lower_idx
    else:  # val is in the list
        return lower_idx
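For example, a minimal usage sketch with a sorted 'num' column (the sample data is an assumption):

import pandas as pd

df = pd.DataFrame({'num': [0, 2, 4, 6, 8]})
print(get_closests(df, 'num', 4))   # -> 2, the exact index of the value 4
print(get_closests(df, 'num', 3))   # -> (1, 2), the indexes of 2 and 4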
Bisect algorithms are very efficient for finding the index of the specific value "val" in the dataframe column "col", or of its closest neighbours, but they require the list to be sorted.
You can use numpy.searchsorted. If your search column is not already sorted, you can make a DataFrame that is sorted and remember the mapping between them with Series.argsort. (This is better than the above methods if you plan on finding the closest value more than once.)
Once it's sorted, find the closest values for your inputs like this:
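For example, a minimal sketch (the column name 'num' and the neighbour-comparison step are assumptions; searchsorted only returns insertion points, so the two surrounding values must be compared to pick the closer one):

import numpy as np
import pandas as pd

df = pd.DataFrame({'num': [8, 2, 6, 0, 4]})
order = df['num'].values.argsort()           # mapping from sorted position to original row
sorted_vals = df['num'].values[order]

targets = np.array([1.0, 5.0])
pos = np.searchsorted(sorted_vals, targets)  # insertion points in the sorted values
pos = np.clip(pos, 1, len(sorted_vals) - 1)  # keep a neighbour on each side
# pick whichever neighbour is closer to each target
left, right = sorted_vals[pos - 1], sorted_vals[pos]
closest_pos = np.where(targets - left <= right - targets, pos - 1, pos)
print(df.index[order[closest_pos]])          # original row labels of the closest values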
There are a lot of answers here and many of them are quite good. None are accepted and @Zero's answer is currently the most highly rated. Another answer points out that it doesn't work when the index is not already sorted, but they recommend a solution that appears to be deprecated.
I found I could use the numpy version of argsort() on the values themselves in the following manner, which works even if the indexes are not sorted:
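A minimal sketch of that idea (the column name 'num', the target value, and the sample data are assumptions):

import pandas as pd

df = pd.DataFrame({'num': [0, 2, 4, 6, 8]}, index=[30, 10, 40, 20, 50])
value = 3
# argsort on the underlying numpy array yields positions, not labels,
# so iloc retrieves the right rows even with an unsorted index
print(df.iloc[(df['num'] - value).abs().values.argsort()[:2]])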
The most intuitive way I've found to solve this sort of problem is to use the partition approach suggested by @ivo-merchiers, but with nsmallest and nlargest. In addition to working on unsorted series, a benefit of this approach is that you can easily get several close values by setting k_matches to a number greater than 1.
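A minimal sketch of this approach (the function name get_closest and the concat of the two sides are assumptions):

import pandas as pd

def get_closest(df, col, val, k_matches=1):
    # partition into values below and at-or-above val, then take the
    # k largest of the lower side and the k smallest of the upper side
    lower = df[df[col] < val][col].nlargest(k_matches)
    upper = df[df[col] >= val][col].nsmallest(k_matches)
    return pd.concat([lower, upper])

print(get_closest(pd.DataFrame({'num': [8, 2, 6, 0, 4]}), 'num', 3))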
If you need to find the closest value to obj_num in the 'num' column, and there are multiple candidates, you can choose the best occurrence based on the values of columns other than 'num', for instance a second column 'num2'.
To do so, I would recommend creating a new column 'num_diff' and then using sort_values. Example: we want to choose the closest value to 3 in the 'num' column, and in case there are many occurrences, choose the smallest value in the 'num2' column. Code as below:
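A minimal sketch of this simple case (the sample data is an assumption):

import pandas as pd

df = pd.DataFrame({'num': [2, 4, 4, 6], 'num2': [7, 3, 1, 5]})
df['num_diff'] = (df['num'] - 3).abs()           # distance to the objective value
best_idx = df.sort_values(by=['num_diff', 'num2']).index[0]
print(df.loc[best_idx])                          # num=4, num2=1 wins the tie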
Here's a function to do the job using a dict of objective values and columns (it respects the order of the columns when sorting):
def colosest_row(df, obj):
    '''
    Sort df using the specific columns given as obj keys.
    If a key has None value:
        sort that column in ascending order.
    If a key has a float value:
        sort that column from the closest to the farthest value from obj[key].

    Arguments
    ---------
    df: pd.DataFrame
        contains at least obj keys in its columns.
    obj: dict
        dict of objective columns.

    Return
    ------
    index of the closest row to obj
    '''
    df_copy = df.loc[:, [*obj]].copy()
    special_cols = []
    obj_cols = []
    for key in obj:
        if obj[key] is None:
            obj_cols.append(key)
        else:
            special_cols.append(key)
            obj_cols.append(f'{key}_diff')
    # add a distance column for every column with an objective value
    for key in special_cols:
        df_copy[f'{key}_diff'] = (df[key] - obj[key]).abs()
    df_copy.sort_values(
        by=obj_cols,
        axis=0,
        ascending=True,
        inplace=True
    )
    return df_copy.index[0]
obj_num_idx = colosest_row(
    df=df,
    obj={
        "num": obj_num,
        "num2": None  # sort also by 'num2' in ascending order
    }
)