Find the index of the nearest point in x and y coordinate arrays

I have two 2D arrays: x_array contains positional information in the x-direction, and y_array contains positional information in the y-direction.

I then have a long list of x,y points.

For each point in the list, I need to find the array index of the location (specified in the arrays) that is closest to that point.

Based on this question, Find nearest value in numpy array, I have naively produced some code that works:

Namely:

import time
import numpy


def find_index_of_nearest_xy(y_array, x_array, y_point, x_point):
    distance = (y_array - y_point)**2 + (x_array - x_point)**2
    idy, idx = numpy.where(distance == distance.min())
    return idy[0], idx[0]


def do_all(y_array, x_array, points):
    store = []
    for i in range(points.shape[1]):
        store.append(find_index_of_nearest_xy(y_array, x_array, points[0, i], points[1, i]))
    return store




# Create some dummy data
y_array = numpy.random.random(10000).reshape(100,100)
x_array = numpy.random.random(10000).reshape(100,100)


points = numpy.random.random(10000).reshape(2,5000)


# Time how long it takes to run
start = time.time()
results = do_all(y_array, x_array, points)
end = time.time()
print('Completed in:', end - start)

I am doing this on a large dataset and would really like to speed it up a bit. Can anyone optimize this?

Thanks.


UPDATE: Solution following the suggestions of @silvado and @justin (below)

import scipy.spatial

# Shoe-horn existing data for entry into KDTree routines
combined_x_y_arrays = numpy.dstack([y_array.ravel(), x_array.ravel()])[0]
points_list = list(points.transpose())


def do_kdtree(combined_x_y_arrays, points):
    mytree = scipy.spatial.cKDTree(combined_x_y_arrays)
    dist, indexes = mytree.query(points)
    return indexes


start = time.time()
results2 = do_kdtree(combined_x_y_arrays,points_list)
end = time.time()
print('Completed in:', end - start)

The code above made my code (searching for 5000 points in 100x100 matrices) about 100 times faster. Interestingly, using scipy.spatial.KDTree (instead of scipy.spatial.cKDTree) gave timings comparable to my initial solution, so it is definitely worth using the cKDTree version...


If you can massage your data into the right format, a fast way to go is to use the methods in scipy.spatial.distance:

http://docs.scipy.org/doc/scipy/reference/spatial.distance.html

In particular pdist and cdist provide fast ways to calculate pairwise distances.
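As a rough sketch of the cdist approach (using a tiny made-up set of candidate locations and query points, not the asker's data), the whole brute-force search collapses into one vectorized call plus an argmin:

```python
import numpy as np
from scipy.spatial.distance import cdist

# Hypothetical small example: 4 candidate locations, 2 query points
locations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
queries = np.array([[0.1, 0.2], [0.9, 0.8]])

# cdist returns an (n_queries, n_locations) matrix of pairwise distances
d = cdist(queries, locations)

# Index of the closest candidate location for each query point
nearest = d.argmin(axis=1)
print(nearest)  # -> [0 3]
```

Note that this materializes the full n_queries × n_locations distance matrix, so it is simple and fast for moderate sizes but can exhaust memory when both sets are large.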

scipy.spatial also has a k-d tree implementation: scipy.spatial.KDTree.

The approach is generally to first use the point data to build up a k-d tree. The computational complexity of that is on the order of N log N, where N is the number of data points. Range queries and nearest neighbour searches can then be done with log N complexity. This is much more efficient than simply cycling through all points (complexity N).

Thus, if you have repeated range or nearest neighbor queries, a k-d tree is highly recommended.

Here is a scipy.spatial.KDTree example

In [1]: from scipy import spatial


In [2]: import numpy as np


In [3]: A = np.random.random((10,2))*100


In [4]: A
Out[4]:
array([[ 68.83402637,  38.07632221],
[ 76.84704074,  24.9395109 ],
[ 16.26715795,  98.52763827],
[ 70.99411985,  67.31740151],
[ 71.72452181,  24.13516764],
[ 17.22707611,  20.65425362],
[ 43.85122458,  21.50624882],
[ 76.71987125,  44.95031274],
[ 63.77341073,  78.87417774],
[  8.45828909,  30.18426696]])


In [5]: pt = [6, 30]  # <-- the point to find


In [6]: A[spatial.KDTree(A).query(pt)[1]] # <-- the nearest point
Out[6]: array([  8.45828909,  30.18426696])


#how it works!
In [7]: distance,index = spatial.KDTree(A).query(pt)


In [8]: distance # <-- The distances to the nearest neighbors
Out[8]: 2.4651855048258393


In [9]: index # <-- The locations of the neighbors
Out[9]: 9


#then
In [10]: A[index]
Out[10]: array([  8.45828909,  30.18426696])

Search methods have two phases:

  1. build a search structure, e.g. a KDTree, from the npt data points (your x y)
  2. lookup nq query points.

Different methods have different build times and different query times. Your choice will depend a lot on npt and nq:

  - scipy cdist has build time 0, but query time ~ npt * nq.
  - KDTree build times are complicated; lookups are very fast, ~ ln(npt) * nq.

On a regular (Manhattan) grid, you can do much better: see (ahem) find-nearest-value-in-numpy-array.

A little testbench: building a KDTree of 5000 × 5000 2d points takes about 30 seconds, then queries take microseconds; scipy cdist on 25 million × 20 points (all pairs, 4G) takes about 5 seconds, on my old iMac.
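To illustrate the regular-grid shortcut mentioned above: when the coordinates are axis-aligned and evenly spaced, no search structure is needed at all. This is a hypothetical sketch (the grid origin x0, y0 and spacings dx, dy are assumptions, not values from the thread) that just rounds each query to the nearest grid index:

```python
import numpy as np

# Assumed regular grid: x = x0 + i*dx, y = y0 + j*dy
x0, dx, nx = 0.0, 0.5, 10
y0, dy, ny = 0.0, 0.5, 10

def nearest_grid_index(xq, yq):
    # Round to the nearest cell and clip to the grid bounds
    i = int(np.clip(np.rint((xq - x0) / dx), 0, nx - 1))
    j = int(np.clip(np.rint((yq - y0) / dy), 0, ny - 1))
    return i, j

print(nearest_grid_index(1.2, 3.8))  # -> (2, 8)
```

Each lookup is O(1) arithmetic, so this beats any tree when the grid really is regular.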

I have been trying to follow along with this. I am new to Jupyter Notebooks, Python, and the various tools being discussed here, but I have managed to get some way down the road I'm travelling.

BURoute = pd.read_csv('C:/Users/andre/BUKP_1m.csv', header=None)
NGEPRoute = pd.read_csv('c:/Users/andre/N1-06.csv', header=None)

I create a combined XY array from my BURoute dataframe

combined_x_y_arrays = BURoute.iloc[:,[0,1]]

And I create the points with the following command

points = NGEPRoute.iloc[:,[0,1]]

I then do the KDTree magic

def do_kdtree(combined_x_y_arrays, points):
    mytree = scipy.spatial.cKDTree(combined_x_y_arrays)
    dist, indexes = mytree.query(points)
    return indexes


results2 = do_kdtree(combined_x_y_arrays, points)

This gives me an array of the indexes. I'm now trying to figure out how to calculate the distance between the points and the indexed points in the results array.
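On the distance question: cKDTree.query already returns the distances alongside the indexes, so one option is to return both instead of discarding dist. A small sketch (with a tiny made-up candidate/query set, not the CSV data above):

```python
import numpy as np
import scipy.spatial

def do_kdtree(combined_x_y_arrays, points):
    mytree = scipy.spatial.cKDTree(combined_x_y_arrays)
    # query returns both the Euclidean distances and the indexes
    dist, indexes = mytree.query(points)
    return dist, indexes

# Tiny hypothetical example
candidates = np.array([[0.0, 0.0], [3.0, 4.0]])
queries = np.array([[0.0, 1.0], [3.0, 0.0]])
dist, indexes = do_kdtree(candidates, queries)
print(indexes)  # -> [0 0]
print(dist)     # -> [1. 3.]
```

So there is no need to recompute anything: the first value returned by query is exactly the distance from each query point to its matched candidate.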