[Background]
I am working on a Java server-side process for signal analysis / image processing. The main server process accepts requests and input parameters (XML / images) from users, then distributes them to several processing engines. The processing engines are written in Java and make JNI calls to process the images / signals. Requests are transmitted to the engines via RMI.
The same request will be processed again with different input parameters, and the inputs are very large (image size: 1-2 MB), so we do not want to resend them to the processing engines each time. Instead, we cache the requests / inputs in the processing engine. The JNI object is stateful and is also kept in the processing engine, so the same request is always handled by the same engine.
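For context, the current per-engine design could be sketched roughly like this (all class and method names here are illustrative, not from the actual code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: each processing engine caches the large input
// and the stateful JNI wrapper per request, so repeated runs with new
// parameters reuse the cached data instead of resending 1-2 MB over RMI.
public class ProcessingEngine {
    // requestId -> cached image/XML bytes
    private final Map<String, byte[]> inputCache = new ConcurrentHashMap<>();
    // requestId -> stateful native processor (JNI wrapper)
    private final Map<String, NativeProcessor> processorCache = new ConcurrentHashMap<>();

    public byte[] process(String requestId, byte[] input, String params) {
        byte[] cached = inputCache.computeIfAbsent(requestId, id -> input);
        NativeProcessor proc =
            processorCache.computeIfAbsent(requestId, id -> new NativeProcessor(cached));
        return proc.run(params); // delegates to native code via JNI
    }
}

// Stand-in for the real JNI wrapper: holds state between calls.
class NativeProcessor {
    private final byte[] data;
    NativeProcessor(byte[] data) { this.data = data; }
    byte[] run(String params) { return data; } // real version would call a native method
}
```

Because both caches are local fields of one engine instance, every repeat request must be routed back to that same engine, which is what causes the load-balancing problem below.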
[Problem]
We cannot accurately predict real-time usage, and the workload distribution isn't even. Since the same request is always handled by the same processing engine, some engines may be overloaded while others sit idle.
[Requirements]
Here is what I want to achieve:
- Dynamic workload distribution: requests can be processed by different machines.
- Low-latency caching for large data (XML / images): instead of sending the input / data to an engine over RMI, we want to use a distributed cache.
- The JNI object should also be stored in the distributed cache and retain its state.
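One point worth noting about the last requirement: a typical JNI wrapper only holds a handle (a long pointer) into the local process's native memory, so serializing it into a distributed cache would copy the handle value but not the native state behind it. A hypothetical wrapper illustrates the problem:

```java
// Illustrative sketch: why a JNI wrapper usually cannot be cached remotely.
// The Java object only stores a pointer into this process's native heap;
// in another JVM (or after deserialization) that address is meaningless.
public class JniHandle implements java.io.Serializable {
    private final long nativePtr; // address in local native memory

    public JniHandle(long nativePtr) { this.nativePtr = nativePtr; }

    // After a round trip through a distributed cache, nativePtr keeps the
    // same numeric value, but it no longer points at valid memory in the
    // receiving process.
    public long pointerValue() { return nativePtr; }
}
```

So even with a distributed cache, the native state would likely have to be rebuilt on the receiving node, e.g. by re-initializing the JNI object from the cached input data.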
[Question]
- Is there a low-latency distributed cache for Java suitable for my requirements (especially caching a JNI object)?
- I have not used a distributed cache before and have looked at Terracotta. Any other recommendations?
Thanks for any input!
Alex