I am working on a Java project that lets users parse multiple files, each potentially thousands of lines long. The parsed information is stored in objects, which are then added to a collection.
Since the GUI does not need all of these objects in memory at the same time, I am looking for an efficient way to load/unload data from the files so that data is only loaded into the collection when the user requests it.
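To make the idea concrete, here is a minimal sketch of what I have in mind; `LazyFileStore` and `Record` are hypothetical placeholders for my real parser and data objects:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/**
 * Sketch of on-demand loading: a file is parsed only when the user
 * first requests it, and its records can be dropped again to free memory.
 */
public class LazyFileStore {
    // Hypothetical stand-in for the analyzed-data object.
    public record Record(String line) {}

    private final Map<Path, List<Record>> loaded = new HashMap<>();

    /** Returns the records for a file, parsing it on first access. */
    public List<Record> get(Path file) throws IOException {
        List<Record> records = loaded.get(file);
        if (records == null) {
            records = parse(file);
            loaded.put(file, records);
        }
        return records;
    }

    /** Drops a file's records so the memory can be reclaimed. */
    public void unload(Path file) {
        loaded.remove(file);
    }

    private List<Record> parse(Path file) throws IOException {
        try (var lines = Files.lines(file)) {
            return lines.map(Record::new).collect(Collectors.toList());
        }
    }
}
```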
I am still just evaluating options. I have also been wondering, once a subset of the data has been loaded into the collection and presented in the GUI, what the best way is to bring back previously parsed data. Re-run the parser / data collection and repopulate the GUI? Or perhaps find a way to keep the collection in memory, or serialize/deserialize the collection itself?
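For the serialize/deserialize option, I picture something along these lines (assuming the data objects implement Serializable; names are placeholders):

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of caching a parsed collection on disk so it can be restored
 * later instead of re-running the parser over the source files.
 */
public class CollectionCache {
    public static <T extends Serializable> void save(List<T> data, File cacheFile)
            throws IOException {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream(cacheFile))) {
            out.writeObject(new ArrayList<>(data)); // ArrayList is Serializable
        }
    }

    @SuppressWarnings("unchecked")
    public static <T extends Serializable> List<T> load(File cacheFile)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream(cacheFile))) {
            return (List<T>) in.readObject();
        }
    }
}
```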
I know that loading/unloading subsets of the data can get complicated once some kind of filtering is applied. Say I filter on an identifier, so my new subset contains data from two previously parsed subsets. That is not a problem as long as I keep a master copy of all the data in memory.
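For example, with the master copy in memory, the cross-subset filter is a single stream operation (`Record` again being a stand-in for my real data object):

```java
import java.util.List;
import java.util.stream.Collectors;

/** Sketch of filtering across previously parsed subsets via a master copy. */
public class MasterCopyFilter {
    record Record(String id, String payload) {}

    public static void main(String[] args) {
        // Master copy aggregating records from two earlier subsets.
        List<Record> master = List.of(
            new Record("a", "from subset 1"),
            new Record("b", "from subset 1"),
            new Record("a", "from subset 2"));

        // New subset: every record with identifier "a", regardless of
        // which earlier subset it originally came from.
        List<Record> filtered = master.stream()
            .filter(r -> r.id().equals("a"))
            .collect(Collectors.toList());

        filtered.forEach(System.out::println);
    }
}
```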
I have read that Google Collections (now Guava) handles large amounts of data efficiently and offers methods that simplify many things, so it might offer an alternative that lets me keep the whole collection in memory. This is just hearsay on my part; which collection to use is a separate, complex question.
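As far as I can tell, it would for instance let me index the master copy by identifier in one call, something like this (untested sketch with the same placeholder `Record` type):

```java
import com.google.common.collect.ImmutableListMultimap;
import com.google.common.collect.Multimaps;
import java.util.List;

/** Sketch of grouping the master data by identifier with Guava. */
public class IndexExample {
    record Record(String id, String payload) {}

    public static void main(String[] args) {
        List<Record> master = List.of(
            new Record("a", "from subset 1"),
            new Record("b", "from subset 1"),
            new Record("a", "from subset 2"));

        // One pass over the master copy builds an id -> records index.
        ImmutableListMultimap<String, Record> byId =
            Multimaps.index(master, Record::id);

        // The filtered subset for "a" spans both earlier subsets.
        System.out.println(byId.get("a"));
    }
}
```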
What is the general recommendation for this kind of task? I would like to hear what others have done in similar scenarios.
If necessary, I can provide more detailed information.