The user uploads a huge file of 1 million words. I parse the file and put each line into a LinkedHashMap<Integer, String>.
I need O(1) key access and deletion. In addition, I need to preserve the access order, iterate from any position, and sort the entries.
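Roughly, the loading looks like this (a simplified sketch of my code; `LineLoader` and `loadLines` are just placeholder names, the real parsing is more involved):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;

public class LineLoader {
    // Simplified sketch: each line of the file is stored under its line number.
    static Map<Integer, String> loadLines(Path file) throws IOException {
        Map<Integer, String> lines = new LinkedHashMap<>();
        try (BufferedReader reader = Files.newBufferedReader(file, StandardCharsets.UTF_8)) {
            String line;
            int lineNumber = 0;
            while ((line = reader.readLine()) != null) {
                lines.put(lineNumber++, line);
            }
        }
        return lines;
    }
}
```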
The memory consumption is huge. I have enabled the String deduplication feature introduced in Java 8, but it turns out that the LinkedHashMap itself consumes most of the memory.
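For reference, this is how I enable deduplication (the feature appeared in JDK 8u20 and only works with the G1 collector, so both flags are required; the statistics flag is optional, and `myapp.jar` is just a placeholder):

```
java -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+PrintStringDeduplicationStatistics -jar myapp.jar
```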
I found that LinkedHashMap.Entry consumes 40 bytes, yet it holds only 2 extra pointers: one to the next record and one to the previous record. I thought a single pointer should be 32 or 64 bits. But if I divide 409,405,320 bytes by 6,823,422 records, I get 60 bytes per record.
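My rough per-record accounting (assuming a 64-bit HotSpot JVM with compressed oops, which is what the 40-byte figure suggests):

- LinkedHashMap.Entry: 12-byte object header + 4-byte int hash + 4-byte references to key, value, next, before and after, padded to 40 bytes
- the boxed Integer key: about 16 bytes
- the entry's slot in the backing Node[] table: roughly 4 to 10 bytes per entry, depending on how the power-of-two table size and the 0.75 load factor land

That already comes to roughly 60 bytes per record, before counting the String values themselves.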
It seems to me that I do not need the previous pointer; the next pointer alone should be sufficient to maintain the order. Why does LinkedHashMap consume so much memory, and how can I reduce the memory consumption?
