You might also consider a trie, a DAWG, or a database; there are several Python implementations of each.
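If you do go the trie route, here is a minimal dict-of-dicts sketch for exact-word membership testing. The Trie class and its methods are illustrative, not from any particular package; libraries such as marisa-trie or datrie provide tuned implementations:

# Minimal dict-of-dicts trie for exact-word membership testing.
class Trie:
    def __init__(self):
        self.root = {}

    def add(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node['$'] = True  # sentinel marking the end of a complete word

    def __contains__(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return '$' in node

trie = Trie()
trie.add('hello')
print('hello' in trie)  # True
print('hell' in trie)   # False -- a prefix, not a stored word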
Here are some relative timings for you to consider, set vs. list:
import timeit
import random

with open('/usr/share/dict/words', 'r') as di:  # UNIX word list, ~250k unique words
    all_words_set = {line.strip() for line in di}

all_words_list = list(all_words_set)  # slightly faster if this list is sorted...

test_list = [random.choice(all_words_list) for i in range(10000)]
test_set = set(test_list)

def set_f():
    # set source, set membership testing
    count = 0
    for word in test_set:
        if word in all_words_set:
            count += 1
    return count

def list_f():
    # list source, list membership testing
    count = 0
    for word in test_list:
        if word in all_words_list:
            count += 1
    return count

def mix_f():
    # use list for source, set for membership testing
    count = 0
    for word in test_list:
        if word in all_words_set:
            count += 1
    return count

print("list:", timeit.Timer(list_f).timeit(1), "secs")
print("set:", timeit.Timer(set_f).timeit(1), "secs")
print("mixed:", timeit.Timer(mix_f).timeit(1), "secs")
This prints:
list: 47.4126560688 secs
set: 0.00277495384216 secs
mixed: 0.00166988372803 secs
That is, matching a set of 10,000 words against a set of 250,000 words is 17,085X faster than matching a list of 10,000 words against a list of the same 250,000 words. Using a list for the source and a set for membership testing is 28,392X faster than an unsorted list alone.
For membership testing, a list is O(n) per lookup, while sets and dicts are O(1).
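To see that scaling directly, here is a rough sketch (my own setup with synthetic integer data, not the word list above) timing a worst-case miss as n grows; list time should grow roughly linearly while set time stays nearly flat:

import timeit

for n in (10_000, 100_000, 1_000_000):
    data = list(range(n))
    as_set = set(data)
    miss = -1  # absent item: the list must scan all n elements
    t_list = timeit.timeit(lambda: miss in data, number=100)
    t_set = timeit.timeit(lambda: miss in as_set, number=100)
    print(f"n={n:>9,}  list: {t_list:.4f}s  set: {t_set:.6f}s")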
Conclusion: Use the best data structures for 600 million lines of text!