Specification
I have a MongoDB database containing a collection of records, call them operations for simplicity. Some of these operations are running, and a running operation accumulates a series of events that arrive in real time.
I emit these events in real time through socket.io, and I also provide an API endpoint whose purpose is to return an up-to-date list of events.
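For context, here is a minimal sketch of the two delivery paths. The `Operation` model, its schema, and the route shape are hypothetical stand-ins, not my actual code:

```js
// Sketch only: "Operation" and its schema are illustrative placeholders.
const express = require('express');
const { Server } = require('socket.io');
const mongoose = require('mongoose');

const Operation = mongoose.model('Operation', new mongoose.Schema({
  status: String,
  events: [{ type: Object }],
}));

const app = express();
const server = app.listen(3000);
const io = new Server(server);

// Real-time path: push each incoming event to subscribers immediately.
function onIncomingEvent(operationId, event) {
  io.to(`operation:${operationId}`).emit('event', event);
}

// On-demand path: return the currently persisted list of events.
app.get('/operations/:id/events', async (req, res) => {
  const op = await Operation.findById(req.params.id);
  res.json(op ? op.events : []);
});
```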
Current situation
Since events arrive quickly (up to a thousand per second), calling .save() for every incoming event (I use Mongoose as the object mapper) seems suboptimal. Currently I throttle the .save() call so it executes only once every 2 seconds. Because of this, the event list returned by the API endpoint lags the real-time stream by somewhere between 0 and 2 seconds while the operation is running.
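A rough sketch of the throttling just described; the buffering details are my own assumptions, since the original batching code is not shown:

```js
// Sketch only: throttle .save() so each running operation's document
// is persisted at most once every 2 seconds, regardless of event rate.
const FLUSH_INTERVAL_MS = 2000;
const dirty = new Set(); // Mongoose documents with unsaved events

function recordEvent(operationDoc, event) {
  operationDoc.events.push(event); // mutate the in-memory document only
  dirty.add(operationDoc);
}

setInterval(() => {
  for (const doc of dirty) {
    dirty.delete(doc);
    // One .save() per dirty operation per interval, instead of one per event.
    doc.save().catch((err) => console.error('flush failed', err));
  }
}, FLUSH_INTERVAL_MS);
```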
Suggested Optimization
I am considering creating an in-memory "registry" that holds references to all running operations (hitting memory limits is hardly a concern, since there will be no more than 10 simultaneous operations in the foreseeable future).
Whenever a request arrives, the handler will first look in the "registry"; if the operation is found there, the latest version will be served from memory. If not, it will actually query the DB.
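A sketch of the proposed registry under the assumptions above (all names are illustrative, and `Operation` is the same hypothetical model as in the earlier sketch):

```js
const mongoose = require('mongoose');
const Operation = mongoose.models.Operation
  || mongoose.model('Operation', new mongoose.Schema({ events: [Object] }));

// Sketch only: in-memory registry of running operations.
// With at most ~10 concurrent operations, memory pressure is negligible.
const registry = new Map(); // operationId -> live Mongoose document

function startOperation(operationDoc) {
  registry.set(String(operationDoc._id), operationDoc);
}

function finishOperation(operationId) {
  registry.delete(String(operationId));
}

// Read path: prefer the registry (always current); fall back to the DB,
// which may be up to 2 seconds stale for operations not in the registry.
async function getEvents(operationId) {
  const live = registry.get(String(operationId));
  if (live) return live.events;
  const op = await Operation.findById(operationId);
  return op ? op.events : [];
}
```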
TL;DR: there is a gap between the real-time events and the on-demand events because the model.save() calls are throttled; the proposed optimization is to use in-memory storage for a specific subset of records.
Question
Is this an effective optimization, or am I missing the point of Mongoose and possibly overlooking other, more viable/relevant solutions?