I have a scenario in which a method returns results as an ArrayList in the form shown in the image below.

As a brief explanation of the picture: I get Result 1 as the first batch of objects, then Result 2, which contains Result 1 plus a new set of objects, and this continues.
Note: all these result fragments will contain duplicates, so I have to filter them out.
My goal is to build one single list from these pieces without any duplicates, keeping only one object per family (objects of the same family share a special identifier).
Below is the current code for the synchronized method that I call whenever I receive a piece of the result; on each update, this method is invoked with the new resultList:
    private synchronized void processRequestResult(QueryResult result) {
        ArrayList currArrayList = result.getResultsList();
        ArrayList tempArrayList = result.getResultsList();

        /*
         * Remove all elements in prevArrayList from currArrayList.
         *
         * As per the javadocs, this takes each record of currArrayList and compares it
         * with each record of prevArrayList; if it finds them equal, it removes the
         * record from currArrayList.
         *
         * The problem is that this is easily of n-squared complexity.
         */
        currArrayList.removeAll(prevArrayList);

        // Clone and keep the current list for dealing with the next result
        prevArrayList = (ArrayList) tempArrayList.clone();

        for (int i = 0; i < currArrayList.size(); i++) {
            Object resultObject = currArrayList.get(i);

            // Check whether we reached the max number of items to display in the list
            if (hashMap.size() >= MAX_RESULT_LIMIT) {
                // Stop my requests
                // Launch message
                break;
            }

            // Check whether it is of the same family or a duplicate
            if (resultObject instanceof X) {
                final Integer key = Integer.valueOf(((X) resultObject).familyID);
                hashMap.put(key, (X) resultObject);
            } else if (resultObject instanceof Y) {
                final Integer key = Integer.valueOf(((Y) resultObject).familyID);
                hashMap.put(key, (Y) resultObject);
            }
        }

        // Convert the HashMap values to an ArrayList
        allResultsList = new ArrayList(hashMap.values());

        // Update the change to the screen
    }
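To make the quadratic-cost comment above concrete, here is a minimal, self-contained sketch (the DeltaSketch class and the use of String elements are hypothetical stand-ins for my real result objects) of the kind of alternative I am considering: backing the previous results with a HashSet so each membership check is O(1), making the whole delta pass roughly linear instead of the O(n²) of ArrayList.removeAll(ArrayList).

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;

public class DeltaSketch {
    // Compute the delta: elements of curr that were not present in prev.
    // Copying prev into a HashSet makes each contains() check O(1) on average,
    // so this pass is O(n) overall rather than the O(n^2) of
    // currArrayList.removeAll(prevArrayList) on two ArrayLists.
    static List<String> delta(List<String> curr, List<String> prev) {
        HashSet<String> prevSet = new HashSet<>(prev);
        List<String> out = new ArrayList<>();
        for (String s : curr) {
            if (!prevSet.contains(s)) {
                out.add(s);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> prev = List.of("a", "b");
        List<String> curr = List.of("a", "b", "c", "d");
        System.out.println(delta(curr, prev)); // prints [c, d]
    }
}
```

This relies on the result objects having consistent equals() and hashCode() implementations, just as removeAll does.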
In theory, I should only need to analyze the delta objects in each new result. So I used the removeAll method on the ArrayList, and then checked for duplicates of the same family using a HashMap.
Please see my inline comments in the code; I would like some pointers on improving the performance of this process.
Update:
The special nature of these objects is that several of them can belong to the same family (identifier). Therefore, the final list should contain only one object from each family.
So I used a HashMap with familyID as the key.