I am currently using Python / NumPy to process geographical / GPS data (loving it!), and I am faced with the repetitive task of calculating the distances between geographical points given as coordinate pairs pn = [lon, lat].
I have a function that I use as follows: dist = geodistance(p1, p2). It is an analogue of the Euclidean distance in linear algebra (vector subtraction / difference), but it operates in geodesic (spherical) space instead of rectangular Euclidean space.
In software, the Euclidean distance is given by
dist = ((p2[0] - p1[0])**2 + (p2[1] - p1[1])**2)**0.5
Mathematically, this is equivalent to the "idiomatic" (for lack of a better word) expression
dist = p2 - p1
I am currently getting my distance as follows:
p1 = [-51.598354, -29.953363]
p2 = [-51.598701, -29.953045]
dist = geodistance(p1, p2)
print dist
>> 44.3904032407
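For reference, here is a minimal sketch of what such a geodistance function might look like, assuming the haversine great-circle formula on a spherical Earth. The actual function in use may differ (e.g. an ellipsoidal model), so the result will be close to, but not exactly, 44.39:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in metres (an assumption)

def geodistance(p1, p2):
    """Great-circle distance in metres between two [lon, lat] points,
    computed with the haversine formula on a sphere."""
    lon1, lat1, lon2, lat2 = map(math.radians, p1 + p2)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
```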
I would like to do this:
print p2 - p1
And the ultimate goal:
track = numpy.array([[-51.203018, -29.996149],
                     [-51.203018, -29.99625 ],
                     [-51.20266 , -29.996229],
                     [-51.20229 , -29.996309],
                     [-51.201519, -29.99416 ]], dtype=fancy)
Similarly: if you subtract two datetime objects, the operation returns a timedelta object. I want to subtract two coordinates and get the geodesic distance as the result.
I guess subclassing would work, but perhaps a custom dtype (for example, a "subtype" of float32) could help, since I create the arrays from lists (that is how I read things from XML files).
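Following the datetime/timedelta analogy, one way to sketch the subclassing route is an ndarray subclass whose __sub__ returns geodesic distances instead of coordinate differences. This is only an illustrative sketch under my own assumptions: the name GeoPoints, the haversine formula, and the spherical-Earth radius are not from the question:

```python
import numpy as np

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in metres (an assumption)

class GeoPoints(np.ndarray):
    """A point [lon, lat] or an N x 2 array of such points, whose
    subtraction yields haversine distances in metres. Sketch only."""

    def __new__(cls, data):
        return np.asarray(data, dtype=np.float64).view(cls)

    def __sub__(self, other):
        # Transpose so that unpacking yields lon and lat columns
        # (or scalars, for a single [lon, lat] point).
        lon1, lat1 = np.radians(np.asarray(self)).T
        lon2, lat2 = np.radians(np.asarray(other)).T
        a = (np.sin((lat1 - lat2) / 2) ** 2
             + np.cos(lat1) * np.cos(lat2) * np.sin((lon1 - lon2) / 2) ** 2)
        return 2 * EARTH_RADIUS_M * np.arcsin(np.sqrt(a))
```

With this, p2 - p1 evaluates to a distance in metres, and track[1:] - track[:-1] yields the consecutive leg distances along a whole track in one vectorised step. Note the trade-off: overriding __sub__ this way sacrifices ordinary element-wise subtraction for these arrays.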
Thanks a lot!