OutOfMemoryError when running multi-threaded code

I am working on a project in which I will have different bundles. For example, suppose I have 5 bundles, and each of these bundles has a process method.

Currently, I am calling the process method of all 5 bundles in parallel, using the multi-threaded code below.

But every time I run the following multithreaded code, it throws an OutOfMemoryError. If I instead run it sequentially, calling the process method of each bundle one by one, I get no such exception.

Below is the code -

    public void callBundles(final Map<String, Object> eventData) {
        // Three threads: one thread for the database writer, two threads for the plugin processors
        final ExecutorService executor = Executors.newFixedThreadPool(3);
        final Map<String, String> outputs = (Map<String, String>) eventData.get(Constants.EVENT_HOLDER);

        for (final BundleRegistration.BundlesHolderEntry entry : BundleRegistration.getInstance()) {
            executor.submit(new Runnable() {
                public void run() {
                    try {
                        final Map<String, String> response = entry.getPlugin().process(outputs);
                        // process the response and update database.
                        System.out.println(response);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
        }
    }

The exception below is what I get whenever I run the multi-threaded code.

    JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
    JVMDUMP032I JVM requested Heap dump using 'S:\GitViews\Stream\goldseye\heapdump.20130904.175256.12608.0001.phd' in response to an event
    JVMDUMP010I Heap dump written to S:\GitViews\Stream\goldseye\heapdump.20130904.175256.12608.0001.phd
    JVMDUMP032I JVM requested Java dump using 'S:\GitViews\Stream\goldseye\javacore.20130904.175256.12608.0002.txt' in response to an event
    UTE430: can't allocate buffer
    UTE437: Unable to load formatStrings for j9mm
    JVMDUMP010I Java dump written to S:\GitViews\Stream\goldseye\javacore.20130904.175256.12608.0002.txt
    JVMDUMP032I JVM requested Snap dump using 'S:\GitViews\Stream\goldseye\Snap.20130904.175256.12608.0003.trc' in response to an event
    UTE001: Error starting trace thread for "Snap Dump Thread": -1
    JVMDUMP010I Snap dump written to S:\GitViews\Stream\goldseye\Snap.20130904.175256.12608.0003.trc
    JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
    ERROR: Bundle BullseyeModellingFramework [1] EventDispatcher: Error during dispatch. (java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830, errno 12)
    java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830, errno 12
    JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
    JVMDUMP032I JVM requested Heap dump using 'S:\GitViews\Stream\goldseye\heapdump.20130904.175302.12608.0004.phd' in response to an event
    JVMDUMP010I Heap dump written to S:\GitViews\Stream\goldseye\heapdump.20130904.175302.12608.0004.phd
    JVMDUMP032I JVM requested Java dump using 'S:\GitViews\Stream\goldseye\javacore.20130904.175302.12608.0005.txt' in response to an event

I am using JDK1.6.0_26 as the installed JRE in my eclipse.

2 answers

The main problem is that you are not really using the thread pool correctly. If all your process threads have the same priority, there is no reason not to create one large thread pool and submit all your Runnable tasks to it. Note: "large" in this case is determined through experimentation and profiling: tune it until your speed and memory behavior are what you expect.

Here is an example of what I am describing:

    // Using 10000 purely as a concrete example - you should determine the correct number
    public static final int LARGE_NUMBER_OF_THREADS = 10000;

    // Elsewhere in the code, you define a static thread pool
    public static final ExecutorService EXECUTOR =
            Executors.newFixedThreadPool(LARGE_NUMBER_OF_THREADS);

    public void callBundles(final Map<String, Object> eventData) {
        final Map<String, String> outputs = (Map<String, String>) eventData.get(Constants.EVENT_HOLDER);

        for (final BundleRegistration.BundlesHolderEntry entry : BundleRegistration.getInstance()) {
            // "Three threads: one thread for the database writer,
            // two threads for the plugin processors"
            // so you'll need to repeat this future = EXECUTOR.submit() pattern two more times
            Future<?> processFuture = EXECUTOR.submit(new Runnable() {
                public void run() {
                    final Map<String, String> response = entry.getPlugin().process(outputs);
                    // process the response and update database.
                    System.out.println(response);
                }
            });

            // Note: I'm catching the exception out here instead of inside the task.
            // This also allows me to force an order on the three component threads.
            try {
                processFuture.get();
            } catch (Exception e) {
                System.err.println("Should really do something more useful");
                e.printStackTrace();
            }

            // If you wanted to ensure that the three component tasks run in order,
            // you could call future = EXECUTOR.submit(); future.get(); for each one of them
        }
    }

For completeness, you could also use a cached thread pool to avoid re-creating short-lived threads. However, since you are already worried about memory consumption, a fixed pool may be the safer choice.
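A minimal sketch of the cached-pool alternative mentioned above (the task bodies are illustrative placeholders, not the asker's plugin code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CachedPoolSketch {
    // A cached pool reuses idle worker threads and discards workers
    // idle for more than 60 seconds, so bursts of short tasks do not
    // pay the thread-creation cost every time.
    private static final ExecutorService EXECUTOR = Executors.newCachedThreadPool();

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 5; i++) {
            final int id = i;
            EXECUTOR.submit(new Runnable() {
                public void run() {
                    System.out.println("task " + id + " on " + Thread.currentThread().getName());
                }
            });
        }
        // Still shut the pool down when done - see the second answer below.
        EXECUTOR.shutdown();
        EXECUTOR.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```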

When you upgrade to Java 7, you may find that Fork/Join is a better fit than the series of Futures. Use whatever best suits your needs.
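To make the Fork/Join suggestion concrete, here is a standalone Java 7 sketch (the array-summing task is a hypothetical example, unrelated to the asker's bundles) showing the divide-and-conquer pattern with `ForkJoinPool` and `RecursiveTask`:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Hypothetical task: sums an int array by recursively splitting the range.
class SumTask extends RecursiveTask<Long> {
    private final int[] data;
    private final int lo, hi;

    SumTask(int[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    protected Long compute() {
        if (hi - lo <= 1000) {              // small enough: compute directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(data, lo, mid);
        left.fork();                        // schedule the left half asynchronously
        long right = new SumTask(data, mid, hi).compute(); // do the right half here
        return left.join() + right;         // wait for the left half and combine
    }
}

public class ForkJoinSketch {
    public static void main(String[] args) {
        int[] data = new int[10000];
        for (int i = 0; i < data.length; i++) data[i] = 1;
        long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum); // prints 10000
    }
}
```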


Each call to callBundles() creates a new executor, and with it new threads. Each thread has its own stack space! So, once the JVM is started, the first call creates three threads with about 3 MB of stack in total (1024 KB is the default stack size on a 64-bit JVM), the next call another 3 MB, and so on: 1000 calls per second means roughly 3 GB per second of stack memory!
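A small sketch of the leak described above: each call site creates a fresh fixed pool, whose worker threads stay alive because the pool is never shut down (the numbers printed depend on the JVM, so only the growth matters):

```java
import java.util.concurrent.Executors;

public class PoolLeakSketch {
    public static void main(String[] args) throws InterruptedException {
        int before = Thread.activeCount();
        for (int i = 0; i < 10; i++) {
            // Anti-pattern: a new pool per call, never shut down.
            // Each pool keeps at least one core worker thread alive forever.
            Executors.newFixedThreadPool(3).submit(new Runnable() {
                public void run() { /* trivial task */ }
            });
        }
        Thread.sleep(200); // give the workers time to start
        System.out.println("extra live threads: " + (Thread.activeCount() - before));
    }
}
```

Scaled up to thousands of calls, this is exactly the "Failed to create a thread" OutOfMemoryError in the question.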

The second problem is that you never call shutdown() on the executor services you create, so the threads keep running until the garbage collector finalizes the executor ( finalize() calls shutdown() ). But the GC never reclaims stack memory, so if stack memory is the problem and the heap is not full, the GC will never help!
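The standard shutdown pattern looks like this (a minimal sketch; the task body is a placeholder):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownSketch {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(3);
        try {
            executor.submit(new Runnable() {
                public void run() { System.out.println("working"); }
            });
        } finally {
            executor.shutdown();            // stop accepting new tasks
            if (!executor.awaitTermination(10, TimeUnit.SECONDS)) {
                executor.shutdownNow();     // interrupt tasks that didn't finish in time
            }
        }
    }
}
```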

You need to use a single ExecutorService, say with 10 to 30 threads, or a custom ThreadPoolExecutor with 3-30 cached threads and a LinkedBlockingQueue . Call shutdown() on the service before your application stops, if possible.
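A sketch of the custom ThreadPoolExecutor configuration suggested above (the 3/30/60s values come from this answer; note one caveat in the comments):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfigSketch {
    public static void main(String[] args) {
        // Core of 3 threads, nominally growable to 30; idle non-core threads
        // are reclaimed after 60 seconds.
        // Caveat: with an *unbounded* LinkedBlockingQueue the pool never
        // actually grows past its core size, because tasks queue instead of
        // triggering new threads; use a bounded queue if growth matters.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                3, 30, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());

        System.out.println(pool.getCorePoolSize());    // prints 3
        System.out.println(pool.getMaximumPoolSize()); // prints 30
        pool.shutdown();
    }
}
```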

Check the physical RAM, the load, and the response time of your application to tune the heap size, the maximum number of threads, and the keep-alive time of threads in the pool. Also look at the other blocking parts of the code (the size of the database connection pool, ...) and at the number of processors/cores of your server. A starting point for the thread pool size is the number of processors/cores plus one; use more if there is a lot of blocking I/O.


Source: https://habr.com/ru/post/1500634/
