So, your thousands of massive objects have constructors, destructors, virtual functions and pointers. This means you cannot easily page them out to disk yourself. The OS, however, can do this for you, so your most practical approach is simply to add more physical memory, perhaps add swap space on SSDs, and use the 64-bit address space. (I don't know how much of that address space your OS actually supports, but apparently enough for your ~4G of objects.)
The second option is to find a way to simply use less memory. This might mean using a specialized allocator to reduce per-object overhead and fragmentation, or removing layers of indirection. You did not provide enough information about your data for me to make specific suggestions on this.
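To illustrate the allocator idea, here is a minimal sketch of a fixed-size pool allocator: same-sized objects are carved out of large slabs, which removes the per-allocation heap metadata that a general-purpose `new` pays for. All names here (`Pool`, `allocate`, `deallocate`) are illustrative, not any particular library's API.

```cpp
#include <cstddef>
#include <vector>

// Fixed-size pool: hands out blocks of objSize bytes from large slabs,
// linking free blocks through their own first bytes (an intrusive free list).
class Pool {
public:
    explicit Pool(std::size_t objSize, std::size_t objsPerSlab = 4096)
        : objSize_(objSize < sizeof(void*) ? sizeof(void*) : objSize),
          objsPerSlab_(objsPerSlab) {}

    void* allocate() {
        if (!freeList_) grow();
        void* p = freeList_;
        freeList_ = *static_cast<void**>(freeList_);  // pop head of free list
        return p;
    }

    void deallocate(void* p) {
        *static_cast<void**>(p) = freeList_;          // push onto free list
        freeList_ = p;
    }

private:
    void grow() {
        slabs_.emplace_back(objSize_ * objsPerSlab_); // one big slab allocation
        char* base = slabs_.back().data();
        for (std::size_t i = 0; i < objsPerSlab_; ++i)
            deallocate(base + i * objSize_);          // thread slab into free list
    }

    std::size_t objSize_, objsPerSlab_;
    void* freeList_ = nullptr;
    std::vector<std::vector<char>> slabs_;
};
```

A production pool would also need alignment guarantees and thread safety; the point is only that thousands of small allocations can share a handful of slab allocations.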
The third option, assuming you can fit your program in memory, is simply to speed up deserialization. Can you change the format to something you can parse more efficiently? Can you deserialize objects lazily, on demand?
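As an example of a format that parses efficiently: if the per-object data can be made trivially copyable, whole arrays of records can be serialized and deserialized with a single `memcpy` instead of field-by-field parsing. The `Record` layout below is hypothetical, assuming a same-architecture reader and writer (no endianness or padding concerns across machines).

```cpp
#include <cstdint>
#include <cstring>
#include <type_traits>
#include <vector>

// Hypothetical fixed-layout record: no pointers, no vtable.
struct Record {
    std::uint64_t id;
    double value;
};
static_assert(std::is_trivially_copyable<Record>::value,
              "raw byte copies are only safe for trivially copyable types");

// Serialize a whole array of records in one bulk copy.
std::vector<char> serialize(const std::vector<Record>& recs) {
    std::vector<char> buf(recs.size() * sizeof(Record));
    std::memcpy(buf.data(), recs.data(), buf.size());
    return buf;
}

// Deserialize in one bulk copy; cost is O(bytes), with no parsing at all.
std::vector<Record> deserialize(const std::vector<char>& buf) {
    std::vector<Record> recs(buf.size() / sizeof(Record));
    std::memcpy(recs.data(), buf.data(), buf.size());
    return recs;
}
```

With such a layout, "deserializing on demand" can even degenerate into pointing at the bytes directly in a memory-mapped file.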
The last option, and by far the most work, is to manage a swap file manually. As a first step, it would be wise to split your massive polymorphic classes into two parts: a polymorphic flyweight (with one instance per concrete subtype) and a flattened aggregate state structure. That aggregate is the part you can safely serialize and relocate within your address space.
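A minimal sketch of that split, with hypothetical names: the virtual behaviour lives in one shared flyweight per concrete subtype, while the per-object data is a flat, trivially copyable struct holding only a subtype index and payload.

```cpp
#include <cstdint>
#include <vector>

// Flat per-object state: no vtable, no pointers, safe to memcpy and relocate.
struct State {
    std::uint32_t typeId;  // index of this object's flyweight in the registry
    double payload;
};

// Flyweight: one instance per concrete subtype carries all the behaviour.
struct Behavior {
    virtual ~Behavior() = default;
    virtual double process(const State& s) const = 0;
};

struct Doubler : Behavior {
    double process(const State& s) const override { return s.payload * 2; }
};
struct Negator : Behavior {
    double process(const State& s) const override { return -s.payload; }
};

// Virtual dispatch goes through the registry, not through the object itself.
double dispatch(const std::vector<const Behavior*>& registry, const State& s) {
    return registry[s.typeId]->process(s);
}
```

Millions of `State` records then cost only their raw bytes, while the handful of `Behavior` flyweights stays resident permanently.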
Then you need the swapping machinery itself: memory mapping, some kind of cache that tracks which pages are currently resident, perhaps a smart pointer that replaces your raw pointers with a page id + offset and maps data in on demand, and so on. You have not provided enough information about your data structures and access patterns to make more detailed suggestions.
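Here is a sketch of that page-cache-plus-fat-pointer idea. To keep it self-contained, the swap file is simulated by an in-memory byte vector; in a real program `PageCache::pin` would `mmap()` or `read()` the page from disk. All names are hypothetical.

```cpp
#include <algorithm>
#include <cstddef>
#include <unordered_map>
#include <vector>

constexpr std::size_t kPageSize = 4096;

// Loads fixed-size pages on first access and keeps them resident.
// (A real cache would also evict and write back dirty pages.)
class PageCache {
public:
    explicit PageCache(std::vector<char> backing) : backing_(std::move(backing)) {}

    // Return a pointer into the resident copy of the page, faulting it in
    // from the backing store on first access.
    char* pin(std::size_t pageId) {
        auto it = resident_.find(pageId);
        if (it == resident_.end()) {
            std::vector<char> page(kPageSize);
            std::copy(backing_.begin() + pageId * kPageSize,
                      backing_.begin() + (pageId + 1) * kPageSize,
                      page.begin());  // stand-in for mmap()/read() of the page
            it = resident_.emplace(pageId, std::move(page)).first;
        }
        return it->second.data();
    }

private:
    std::vector<char> backing_;  // stand-in for the swap file
    std::unordered_map<std::size_t, std::vector<char>> resident_;
};

// The "smart pointer": a raw pointer replaced by page id + offset.
template <typename T>
struct SwapPtr {
    std::size_t pageId, offset;
    T* get(PageCache& cache) const {
        return reinterpret_cast<T*>(cache.pin(pageId) + offset);
    }
};
```

Note that this only works for the flat aggregate part of the split above; anything with a vtable or embedded raw pointers cannot be paged as bytes.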