A more Pythonic approach would be to use a set and list comprehensions.
name1 = "naveen"; name2 = "darshana" name1_set=set(name1) name2_set=set(name2) clean1=[x for x in name1 if x not in name2_set] clean2=[x for x in name2 if x not in name1_set] clean1.extend(['0']*(len(name1)-len(clean1))) clean2.extend(['1']*(len(name2)-len(clean2))) print clean1,clean2
A set gives us O(1) membership tests, which makes the whole process O(N) instead of O(N^2).
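As a usage sketch (using Python 3 print syntax; the helper name remove_chars is mine, not part of the original), the same set-based filtering can be wrapped in a function:

def remove_chars(name, other, pad):
    # Build the set once so each membership test in the comprehension
    # is O(1) on average.
    other_set = set(other)
    clean = [x for x in name if x not in other_set]
    # Pad with the filler character so the result keeps the original length.
    clean.extend([pad] * (len(name) - len(clean)))
    return clean

print(remove_chars("naveen", "darshana", '0'))   # ['v', 'e', 'e', '0', '0', '0']
print(remove_chars("darshana", "naveen", '1'))   # ['d', 'r', 's', 'h', '1', '1', '1', '1']

Building the set up front is what keeps each lookup constant time; testing membership against the other string directly would bring back the O(N^2) behaviour.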
EDIT: In light of your later comment that the number of occurrences matters, here is an updated version that takes this into account.
name1 = "naveen"; name2 = "darshana" count1={} count2={} for x in name1: count1[x]=count1.get(x,0)+1 for x in name2: count2[x]=count2.get(x,0)+1 def remove_dups(name,count,null): clean=[] for x in name: if count.get(x,0): count[x]-=1 else: clean.append(x) clean.extend([null]*(len(name)-len(clean))) return clean clean1=remove_dups(name1,count2,'0') clean2=remove_dups(name2,count1,'1') print clean1,clean2
It uses a dict to count occurrences. Whenever a character is dropped, the corresponding counter for the other name is decremented, so duplicates are only removed as often as they occur in the other name. The complexity is still O(N).
This version prints ['v', 'e', 'e', 'n', '0', '0'] and ['d', 'r', 's', 'h', 'a', 'a', '1', '1']. Is that what you wanted?
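For reference, here is a minimal sketch of the same counting idea using collections.Counter instead of plain dicts (the function name remove_dups_counter is mine, and the print syntax is Python 3):

from collections import Counter

def remove_dups_counter(name, other_counts, pad):
    clean = []
    for x in name:
        if other_counts[x] > 0:
            # The character still occurs in the other name: consume one
            # occurrence and drop it from the result.
            other_counts[x] -= 1
        else:
            clean.append(x)
    clean.extend([pad] * (len(name) - len(clean)))
    return clean

name1, name2 = "naveen", "darshana"
print(remove_dups_counter(name1, Counter(name2), '0'))  # ['v', 'e', 'e', 'n', '0', '0']
print(remove_dups_counter(name2, Counter(name1), '1'))  # ['d', 'r', 's', 'h', 'a', 'a', '1', '1']

Counter returns 0 for missing keys, so the per-character check mirrors the count.get(x, 0) test above.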