You would be much better off using a set of long-lived processes that pull your data off queues and send it back, rather than constantly spawning a new process for every event, especially from the host JVM with that huge heap.
Forking a 240 GB image is not free; it consumes an enormous amount of virtual resources, even if only for a split second. The OS does not know how long the newly forked process will be around, so it has to prepare as though the whole process will be long-lived, setting up a virtual clone of all 240 GB before the exec call wipes it out.
If instead you had a long-lived process that you could send objects to through some kind of queue mechanism (and there are many for Java, C, and so on), that would relieve you of the pressure of forking.
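As a rough illustration of the long-lived-helper idea (not from the original answer, and using a plain stdin/stdout pipe rather than a full queueing library), the JVM side could fork the helper a single time and then stream events to it. The path /usr/local/bin/process and the one-line-per-record protocol are assumptions for this sketch:

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;

public class LongLivedWorker {
    public static void main(String[] args) throws Exception {
        // Fork the external helper exactly once, instead of once per event.
        // Assumes the helper has (or can be given) a mode where it reads one
        // record per line from stdin and writes one result per line to stdout.
        Process worker = new ProcessBuilder("/usr/local/bin/process")
                .redirectErrorStream(true)
                .start();

        BufferedWriter toWorker = new BufferedWriter(
                new OutputStreamWriter(worker.getOutputStream(), StandardCharsets.UTF_8));
        BufferedReader fromWorker = new BufferedReader(
                new InputStreamReader(worker.getInputStream(), StandardCharsets.UTF_8));

        String[] events = {"event-1", "event-2", "event-3"};
        for (String event : events) {
            toWorker.write(event);
            toWorker.newLine();
            toWorker.flush();
            System.out.println("result: " + fromWorker.readLine());
        }

        toWorker.close();   // closing stdin signals EOF so the helper can exit cleanly
        worker.waitFor();
    }
}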
I do not know how you are currently transferring data from the JVM to the external program, but if your external program can work with stdin/stdout, then (assuming you are on unix) you can use inetd. You make a simple entry in the inetd configuration file for your process and assign it a port. Then you open a socket, pump your data into it, and read the result back from the socket. Inetd handles the network plumbing for you, and your program works with plain stdin and stdout. Keep in mind this means you will have an open socket on the network, which may or may not be acceptable from a security standpoint in your deployment. But it is quite simple to set up even legacy code to run behind a network service this way.
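Once inetd is set up, the JVM side only needs an ordinary socket. Here is a hedged sketch of that client (not from the original answer); the host, port, and the one-request-per-connection, EOF-terminated protocol are assumptions and would have to match your inetd and /etc/services entries:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class InetdClient {
    // Host and port are assumptions; they must match your inetd/services entry.
    private static final String HOST = "localhost";
    private static final int PORT = 9999;

    public static String process(String payload) throws IOException {
        try (Socket socket = new Socket(HOST, PORT)) {
            // Pour the data into the socket; inetd feeds it to the program's stdin.
            OutputStream out = socket.getOutputStream();
            out.write(payload.getBytes(StandardCharsets.UTF_8));
            socket.shutdownOutput();   // EOF tells the wrapped program that input is done

            // Read the result back; inetd wires the program's stdout to this socket.
            ByteArrayOutputStream result = new ByteArrayOutputStream();
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                result.write(buf, 0, n);
            }
            return result.toString("UTF-8");
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(process("some input data\n"));
    }
}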
You can use a simple shell script as the wrapper, for example:
#!/bin/sh
# inetd hands us the request on stdin and sends our stdout back over the socket
infile=/tmp/$$.in
outfile=/tmp/$$.out
cat > $infile                                        # capture the incoming data
/usr/local/bin/process -input $infile -output $outfile
cat $outfile                                         # send the result back
rm $infile $outfile
This is not the highest-performing server on the planet, built to handle a billion transactions, but it is a lot faster than forking 240 GB over and over again.