What is the recommended way to spawn threads from a servlet in Tomcat

Possible duplicate! I am using Tomcat as my server and want to know the best way to run threads in a servlet with deterministic results. I am launching some lengthy updates from a servlet action and would like the request to complete while the updates continue in the background. Instead of adding messaging middleware such as RabbitMQ, I thought I could spawn a thread that would run in the background and finish in due time. I read in other SO threads that threads spawned this way get killed by the server so that it can manage its resources well.

Is there a recommended way to spawn threads / background jobs when using Tomcat? I also use Spring MVC for the application.

+46
java multithreading spring-mvc tomcat servlets
Sep 19 '10
6 answers

Your safest bet is to use an application-wide thread pool with a maximum number of threads, so that tasks are queued when necessary. ExecutorService is very useful for this.

On application startup or servlet initialization, use the Executors class:

 executor = Executors.newFixedThreadPool(10); // Max 10 threads. 

Then during the servlet's service (you can ignore the result if you are not interested in it):

 Future<ReturnType> result = executor.submit(new CallableTask()); 

Finally, during application shutdown or servlet destruction:

 executor.shutdownNow(); // Returns the list of tasks that never started, in case you want to handle them. 
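
For completeness, a minimal sketch putting the three steps together in a plain servlet (the class and the update task are made up for illustration; in a Spring application you would typically do the same from a bean's lifecycle callbacks instead):

 import java.io.IOException;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import javax.servlet.ServletException;
 import javax.servlet.http.HttpServlet;
 import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;

 public class UpdateServlet extends HttpServlet {

     private ExecutorService executor;

     @Override
     public void init() throws ServletException {
         // Bounded pool: at most 10 background updates run concurrently,
         // the rest wait in the executor's internal queue.
         executor = Executors.newFixedThreadPool(10);
     }

     @Override
     protected void doPost(HttpServletRequest req, HttpServletResponse resp)
             throws ServletException, IOException {
         // Hand the lengthy update off to the pool and return immediately.
         final String id = req.getParameter("id");
         executor.submit(new Runnable() {
             @Override
             public void run() {
                 // ... perform the long-running update for "id" ...
             }
         });
         resp.setStatus(HttpServletResponse.SC_ACCEPTED);
     }

     @Override
     public void destroy() {
         // Stop accepting work and interrupt running tasks on shutdown,
         // so Tomcat is not left with leaked threads.
         executor.shutdownNow();
     }
 }
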
+41
Sep 19 '10 at 14:21

Perhaps you could use a CommonJ WorkManager (JSR 237) implementation, for example Foo-CommonJ:

CommonJ - JSR 237 Timer and WorkManager

Foo-CommonJ is a JSR 237 Timer and WorkManager implementation. It is intended for use in containers that do not come with their own implementation - mainly plain servlet containers such as Tomcat. It can also be used in full-blown Java EE application servers that do not have a WorkManager API or that have a non-standard API, such as JBoss.

Why use a WorkManager?

A common use case is that a servlet or JSP needs to aggregate data from multiple sources and display them on one page. Doing your own threading in a managed environment such as a J2EE container is inappropriate and should never be done in application-level code. In this case, the WorkManager API can be used to retrieve the data in parallel.

Install / Deploy CommonJ

Deploying JNDI resources is vendor-specific. This implementation comes with a Factory class that implements the javax.naming.spi.ObjectFactory interface, which makes it easy to deploy in the most popular containers. It is also available as a JBoss service. more...
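
For context, a rough sketch of what using the CommonJ API looks like once a WorkManager has been bound in JNDI (the JNDI name wm/default and the surrounding class are assumptions for illustration, not part of the quoted project):

 import javax.naming.InitialContext;
 import commonj.work.Work;
 import commonj.work.WorkManager;

 public class ParallelFetch {

     public void fetchInBackground() throws Exception {
         // The container (or Foo-CommonJ) binds the WorkManager in JNDI.
         WorkManager workManager =
                 (WorkManager) new InitialContext().lookup("java:comp/env/wm/default");

         // schedule() returns immediately; the work runs on a container-managed thread.
         workManager.schedule(new Work() {
             public void run() {
                 // ... fetch data from one source ...
             }
             public void release() {
                 // Called when the container asks the work to stop.
             }
             public boolean isDaemon() {
                 return false; // a short-lived unit of work
             }
         });
     }
 }
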

Update: To clarify, here is what the Concurrency Utilities for Java EE preview (apparently the successor of JSR-236 and JSR-237) writes about unmanaged threads:

2.1 Container-Managed vs. Unmanaged Threads

Java EE application servers require resource management in order to centralize administration and protect application components from consuming unneeded resources. This can be achieved through the pooling of resources and managing a resource's lifecycle. Using Java SE concurrency utilities such as the java.util.concurrency API, java.lang.Thread and java.util.Timer in a server application component such as a servlet or EJB is problematic because the container and server have no knowledge of these resources.

By extending the java.util.concurrent API, application servers and Java EE containers can become aware of the resources being used and provide the proper execution context for the asynchronous operations to run in.

This is largely achieved by providing managed versions of the predominant java.util.concurrent.ExecutorService interfaces.

So, nothing new here IMO; the "old" problem remains the same: unmanaged threads are still unmanaged threads:

  • They are unknown to the application server and do not have access to Java EE contextual information.
  • They can use resources behind the application server's back, and without any administrative ability to control their number and resource usage, this can affect the application server's ability to recover resources after a failure or to shut down gracefully.
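
For illustration, here is roughly how those managed versions are consumed on a server that implements the final spec (JSR 236, the javax.enterprise.concurrent package in Java EE 7). This does not apply to plain Tomcat, which ships no ManagedExecutorService; the servlet and method names below are made up for the sketch:

 import javax.annotation.Resource;
 import javax.enterprise.concurrent.ManagedExecutorService;
 import javax.servlet.http.HttpServlet;

 public class ReportServlet extends HttpServlet {

     // Injected by the container; threads come from a container-managed pool
     // and run with the proper Java EE context (naming, security, class loader).
     @Resource
     private ManagedExecutorService managedExecutor;

     public void triggerReport(final String reportId) {
         managedExecutor.submit(new Runnable() {
             @Override
             public void run() {
                 // ... long-running work on a container-managed thread ...
             }
         });
     }
 }
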


+30
Sep 19 '10 at 14:53

Spring supports asynchronous tasks (in your case, long-running ones) through Spring scheduling. Instead of using Java threads directly, I suggest using it with Quartz.
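
This answer points to Quartz; as a simpler illustration of handing work to Spring's task-execution abstraction instead of raw threads, here is a sketch using ThreadPoolTaskExecutor (the class and method names are made up, and the executor would normally be declared as a Spring bean and injected rather than built inline):

 import org.springframework.core.task.TaskExecutor;
 import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

 public class UpdateService {

     private final TaskExecutor taskExecutor;

     public UpdateService() {
         // Bounded pool configured in code only to keep the sketch self-contained.
         ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
         executor.setCorePoolSize(5);
         executor.setMaxPoolSize(10);
         executor.setQueueCapacity(100); // excess tasks wait instead of spawning more threads
         executor.initialize();
         this.taskExecutor = executor;
     }

     public void runInBackground(final String id) {
         taskExecutor.execute(new Runnable() {
             @Override
             public void run() {
                 // ... long-running update for "id" ...
             }
         });
     }
 }
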


+7
Sep 19 '10 at 14:29

Strictly speaking, you are not allowed to spawn threads according to the Java EE specification. I would also consider the possibility of a denial-of-service attack (deliberate or otherwise) if several requests come in at once.

A middleware solution would certainly be more robust and standards-compliant.

+4
Sep 19 '10 at 13:53

I know this is an old question, but people keep asking about it, and keep trying to do this (explicitly spawning threads while processing a servlet request) all the time... It is a very flawed approach - for more than one reason... Simply stating that Java EE containers frown upon such practice is not enough, although it is generally true...

Most importantly, you can never predict how many concurrent requests the servlet will receive at any given time. A web application, a servlet, is by definition meant to process multiple requests at a given endpoint at a time. If you program your request-processing logic to explicitly launch a certain number of concurrent threads, you risk an all but inevitable situation of running out of available threads and choking your application. Your task executor is always configured with a thread pool limited to a finite, reasonable size. Most often it is no larger than 10-20 (you do not want too many threads executing your logic - depending on the nature of the task, the resources they compete for, the number of processors on your server, etc.).

Say your request handler (for example, an MVC controller method) invokes one or more @Async-annotated methods (in which case Spring abstracts the task executor and makes things easier for you) or uses the task executor explicitly. As the code executes, it starts grabbing the available threads from the pool. That is fine if you always process one request at a time with no immediate follow-up requests. (In that case, you are probably trying to use the wrong technology to solve your problem.) However, if it is a web application exposed to arbitrary (or even known) clients who may hammer the endpoint with requests, you will quickly deplete the thread pool, and requests will start piling up, waiting for threads to become available. For that reason alone, you should realize that you may be on the wrong path if you are considering such a design.
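
A small, self-contained sketch of the exhaustion scenario described above (pool size, queue size and timings are arbitrary): with 10 workers and a queue of 100, every request in a burst beyond the 110th is rejected outright - and with an unbounded queue it would simply pile up in memory instead:

 import java.util.concurrent.ArrayBlockingQueue;
 import java.util.concurrent.RejectedExecutionException;
 import java.util.concurrent.ThreadPoolExecutor;
 import java.util.concurrent.TimeUnit;

 public class PoolExhaustionDemo {

     public static void main(String[] args) {
         // 10 worker threads, at most 100 tasks waiting in the queue.
         ThreadPoolExecutor executor = new ThreadPoolExecutor(
                 10, 10, 0L, TimeUnit.MILLISECONDS,
                 new ArrayBlockingQueue<Runnable>(100));

         // Simulate a burst of 200 requests, each submitting a slow task.
         for (int i = 0; i < 200; i++) {
             try {
                 executor.execute(new Runnable() {
                     @Override
                     public void run() {
                         try {
                             Thread.sleep(5000); // slow "update"
                         } catch (InterruptedException e) {
                             Thread.currentThread().interrupt();
                         }
                     }
                 });
             } catch (RejectedExecutionException e) {
                 // Pool and queue are full: further requests fail immediately.
                 System.out.println("Task " + i + " rejected - pool exhausted");
             }
         }
         executor.shutdown();
     }
 }
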

A better solution may be to stage the data to be processed asynchronously (in a queue or any other type of temporary/intermediate data store) and return the response. Have an external, independent application, or even several instances of it (deployed outside your web container), poll the staging endpoint(s) and process the data in the background, possibly using a finite number of concurrent threads. Not only will such a solution give you the advantage of asynchronous/concurrent processing, it will also scale, because you can run as many instances of such a poller as you need, and they can be distributed, all pointing at the staging endpoint. HTH
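
A minimal sketch of the second half of that design: an independent poller process, deployed outside the web container, that drains a staging store with a small fixed pool. The StagingStore interface and its methods are hypothetical placeholders for whatever queue or table you actually use:

 import java.util.List;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.TimeUnit;

 public class StagingPoller {

     // Hypothetical abstraction over the intermediate store (DB table, queue, ...).
     public interface StagingStore {
         List<String> claimPendingItems(int max);
         void markDone(String item);
     }

     private final StagingStore store;
     private final ExecutorService workers = Executors.newFixedThreadPool(5);

     public StagingPoller(StagingStore store) {
         this.store = store;
     }

     public void runForever() throws InterruptedException {
         while (!Thread.currentThread().isInterrupted()) {
             // Claim a small batch and hand each item to the worker pool.
             for (final String item : store.claimPendingItems(10)) {
                 workers.submit(new Runnable() {
                     @Override
                     public void run() {
                         // ... process the item ...
                         store.markDone(item);
                     }
                 });
             }
             TimeUnit.SECONDS.sleep(5); // poll interval
         }
     }
 }
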

+4
Apr 30 '15 at 17:10

Since Spring 3, you can use the @Async annotation:

 @Service public class smg { ... @Async public void getCounter() {...} } 

with <context:component-scan base-package="ch.test.mytest"/> and <task:annotation-driven/> in the context file

Refer to this guide: http://spring.io/blog/2010/01/05/task-scheduling-simplifications-in-spring-3-0/

Works great for me on Tomcat7, and you don't need to manage the thread pool.
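
For completeness, a fuller sketch of such a service (the class and method names are illustrative; the XML above, or @EnableAsync on a @Configuration class in later Spring versions, must be in place for @Async to be honored):

 import java.util.concurrent.Future;
 import org.springframework.scheduling.annotation.Async;
 import org.springframework.scheduling.annotation.AsyncResult;
 import org.springframework.stereotype.Service;

 @Service
 public class CounterService {

     // Runs on a thread from Spring's task executor, not on the request thread.
     @Async
     public void recalculateInBackground(long id) {
         // ... lengthy update ...
     }

     // A result can be handed back as a Future if the caller needs it later.
     @Async
     public Future<Integer> getCounter(long id) {
         int value = 42; // ... compute the real value ...
         return new AsyncResult<Integer>(value);
     }
 }
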

+2
Oct 18 '13 at 7:07


