From what I understand, when downloading files in chunks, the chunks are kept in memory so that the download can be resumed from that point in the event of a failure. However, I assume that in a multi-node environment this requires "sticky sessions", so that the same client is always routed to the same node (the one that holds the chunks in memory). Apart from this, we do not need sticky sessions anywhere else, so we would prefer to avoid them.
Is there any way (using, for example, Hazelcast or another in-memory data grid) to distribute the chunks across the nodes of the cluster, so that the download can be resumed even if the client connects to a different node? In case it matters, we use Spring Boot (the latest version).
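To illustrate the idea in the question, here is a minimal sketch of how chunks could be stored in a Hazelcast `IMap`, which is partitioned across the cluster so that every node sees the same data. The class name `ChunkStore`, the map name `download-chunks`, and the key format are my own assumptions, not anything prescribed by Hazelcast or Spring Boot:

```java
// Hypothetical sketch: keeping download chunks in a Hazelcast IMap so that
// any node in the cluster can resume a download started on another node.
// ChunkStore, "download-chunks", and the key format are illustrative names.
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class ChunkStore {

    private final IMap<String, byte[]> chunks;

    public ChunkStore(HazelcastInstance hz) {
        // The IMap's entries are distributed (and backed up) across the
        // cluster members, so a chunk written on node A is readable on node B.
        this.chunks = hz.getMap("download-chunks");
    }

    // Key combining the download id and the chunk index (assumed format).
    static String chunkKey(String downloadId, int index) {
        return downloadId + ":" + index;
    }

    public void saveChunk(String downloadId, int index, byte[] data) {
        chunks.put(chunkKey(downloadId, index), data);
    }

    public byte[] loadChunk(String downloadId, int index) {
        return chunks.get(chunkKey(downloadId, index)); // null if absent
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        ChunkStore store = new ChunkStore(hz);
        store.saveChunk("file-42", 0, new byte[]{1, 2, 3});
        System.out.println(store.loadChunk("file-42", 0).length);
        hz.shutdown();
    }
}
```

In a Spring Boot application the `HazelcastInstance` would normally be a bean rather than created by hand; Spring Boot auto-configures one when Hazelcast is on the classpath and a Hazelcast configuration is present. Note that large files held in the grid consume cluster heap, so depending on file sizes a shared filesystem or object store for the chunk data (with only metadata in the grid) may be a better fit.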