I have a list of addresses for many people (1-8 addresses each), and I'm trying to determine the number of unique addresses that each person has.
Here is an example set of address data for one person:
addresses = [['PULMONARY MED ASSOC MED GROUP INC 1485 RIVER PARK DR STE 200', '95815'],
             ['1485 RIVER PARK DRIVE SUITE 200', '95815'],
             ['1485 RIVER PARK DR SUITE 200', '95815'],
             ['3637 MISSION AVE SUITE 7', '95608']]
I have an address parser that separates out the different parts of the address ("attn", house number, street name, PO Box, etc.) so that I can compare them separately (the code is found here).
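For reference, the parser returns a dict of the separate components; for the first address above it comes back roughly like this (illustrative only, simplified to the fields I actually compare below):

# Roughly what addr_parser.parse() returns for the first address (illustrative)
add1 = {'attn': 'PULMONARY MED ASSOC MED GROUP INC',
        'house': '1485',
        'street_name': 'RIVER PARK DR',
        'suite_num': '200'}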
As you can see from the above data, addresses 1-3 are probably the same, and address 4 is different.
I wrote the following method for calculating the distance between two addresses - there is no magic behind the weights, just my intuition about which parts should matter most:
import distance  # the PyPI "distance" package, which provides levenshtein()

def calcDistance(a1, a2, z1, z2, parser):
    z1 = str(z1)
    z2 = str(z2)
    add1 = parser.parse(a1)
    add2 = parser.parse(a2)

    # Zip codes: 0 if identical, otherwise edit distance
    zip_dist = 0 if z1 == z2 else distance.levenshtein(z1, z2)
    zip_weight = .4

    # For each parsed component, only compare it (and count its weight)
    # when both addresses actually have that component
    attn_dist = distance.levenshtein(add1['attn'], add2['attn']) if add1['attn'] and add2['attn'] else 0
    attn_weight = .1 if add1['attn'] and add2['attn'] else 0
    suite_dist = distance.levenshtein(add1['suite_num'], add2['suite_num']) if add1['suite_num'] and add2['suite_num'] else 0
    suite_weight = .1 if add1['suite_num'] and add2['suite_num'] else 0
    street_dist = distance.levenshtein(add1['street_name'], add2['street_name']) if add1['street_name'] and add2['street_name'] else 0
    street_weight = .3 if add1['street_name'] and add2['street_name'] else 0
    house_dist = distance.levenshtein(add1['house'], add2['house']) if add1['house'] and add2['house'] else 0
    house_weight = .1 if add1['house'] and add2['house'] else 0

    # Weighted average of the component distances
    weight = ((zip_dist * zip_weight + attn_dist * attn_weight + suite_dist * suite_weight
               + street_dist * street_weight + house_dist * house_weight)
              / (zip_weight + attn_weight + suite_weight + street_weight + house_weight))
    return weight
Using this, I compute a (negated) similarity matrix over all pairs, which is what affinity propagation expects; as hoped, addresses 1-3 come out as identical to each other and address 4 as different:
similarity = -1*np.array([[calcDistance(a1[0],a2[0],a1[1],a2[1],addr_parser) for a1 in addresses] for a2 in addresses])
print similarity
array([[-0. , -0. , -0. , -5.11111111],
[-0. , -0. , -0. , -5.11111111],
[-0. , -0. , -0. , -5.11111111],
[-5.11111111, -5.11111111, -5.11111111, -0. ]])
The problem is that the clustering doesn't come out the way I expect: even though addresses 1-3 are pairwise "identical" (similarity 0), affinity propagation splits them up and gives me 3 clusters instead of 2.
affprop = sklearn.cluster.AffinityPropagation(affinity="precomputed", damping=.5)
affprop.fit(similarity)
print affprop.labels_
array([0, 0, 1, 2], dtype=int64)
For comparison, I also tried DBSCAN on the same matrix:
dbscan = sklearn.cluster.DBSCAN(min_samples=1)
dbscan.fit(similarity)
print dbscan.labels_
array([0, 0, 0, 1], dtype=int64)
DBSCAN, on the other hand, consistently groups addresses 1-3 together and puts address 4 in its own cluster, which is what I expect.
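Once I have labels from either clusterer, the unique-address count I'm ultimately after is just the number of distinct labels, e.g.:

n_unique = len(set(dbscan.labels_))  # 2 for the example above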
Am I doing something wrong with affinity propagation, or should I just use DBSCAN?