Can new web containers with Servlet 3 extend BlazeDS max # concurrent users?

BlazeDS is implemented as a servlet and is therefore limited to roughly hundreds of concurrent users.

I wonder whether newer web containers that support Servlet 3 (Tomcat 7, GlassFish / Grizzly, Jetty, etc.) can be used to create NIO endpoints and increase the number of concurrent users into the thousands.

Is this a valid and practical solution? Does anyone do this in production?

Something like a mature version of this: http://flex.sys-con.com/node/720304 If it was so important back then, why has there been no effort to implement NIO endpoints now that Servlet 3 is widely available? (Mind you, I'm new here, so feel free to state the obvious if I missed something.)

Benefits of NIO: http://www.javalobby.org/java/forums/t92965.html

If not, is a load balancer in front of several application servers, each running its own BlazeDS instance, the recommended solution (short of switching to LCDS, etc.)?

1 answer

GraniteDS and Asynchronous Servlets

GraniteDS is, as far as I know, the only solution that implements asynchronous servlets for real-time messaging, i.e. data push. This feature is available not only in Servlet 3 containers (Tomcat 7, JBoss 7, Jetty 8, GlassFish 3, etc.) but also in older or other containers with proprietary asynchronous support (for example Tomcat 6 / CometProcessor, WebLogic 9+ / AbstractAsyncServlet, etc.)
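To make the mechanism concrete, here is a minimal sketch of a comet-style endpoint on the standard Servlet 3 API (this is not GraniteDS's actual code; the class name, URL pattern, and timeout are illustrative assumptions):

```java
import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch: park each poll request without blocking a container
// thread, then complete it later when a message is published.
@WebServlet(urlPatterns = "/messages", asyncSupported = true)
public class CometMessagingServlet extends HttpServlet {

    private final Queue<AsyncContext> waiting = new ConcurrentLinkedQueue<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // startAsync() releases the container thread; the response stays open.
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30_000); // let the client re-poll if no data arrives
        waiting.add(ctx);
    }

    // Called by the application whenever there is data to push.
    public void publish(String message) throws IOException {
        AsyncContext ctx;
        while ((ctx = waiting.poll()) != null) {
            ctx.getResponse().getWriter().write(message);
            ctx.complete(); // hand the response back to the container
        }
    }
}
```

The key point is `req.startAsync()`: the container thread returns to the pool immediately, while the pending response is held only as an `AsyncContext` object until `complete()` is called.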

Other solutions either lack this feature (BlazeDS) or use RTMP (LCDS, WebORB, and the latest Clear Toolkit). I can't say much about the RTMP implementations, but BlazeDS clearly lacks a scalable real-time messaging implementation because it uses only the synchronous servlet model.

If you need to handle many thousands of concurrent users, you can even set up a cluster of GraniteDS servers to further improve scalability and reliability (see this video for an example).

Asynchronous Servlet Performance

The performance of asynchronous servlets versus classic synchronous servlets has been benchmarked several times, with impressive results. See, for example, this Jetty blog post:

With a server that supports neither NIO nor continuations, this benchmark would require about 11,000 threads to handle 10,000 concurrent users. Jetty handles that number of connections with only 250 threads.

Classic synchronous model:

  • 10,000 concurrent users → 11,000 server threads.
  • ≈ 1.1 threads per user.

Comet asynchronous model:

  • 10,000 concurrent users → 250 server threads.
  • ≈ 0.025 threads per user.

A ratio of this magnitude can roughly be expected from other asynchronous implementations (not just Jetty), and using Flex / AMF3 instead of plain-text HTTP requests should not significantly change the result.

Why asynchronous servlets?

The classic (synchronous) servlet model is acceptable when each request is processed immediately:

request -> immediate processing -> response 

The problem with data push is that HTTP has no true server "push": the server cannot initiate a call to the client to send data; it can only respond to a request. That's why Comet implementations rely on a different model:

  request -> wait for available data -> response 

In synchronous servlet processing, each request is handled by one dedicated server thread. In the context of data push, however, that thread mostly just waits for data to become available, doing nothing while still consuming significant server resources.

The whole point of asynchronous processing is to let the servlet container reuse these (mostly idle) threads to handle other incoming requests, so you can expect dramatic scalability improvements when your application needs real-time messaging features.
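The thread economics can be demonstrated in plain Java, without a servlet container. In this analogy (the class and values are illustrative, not from the answer), 10,000 waiting "clients" are held as pending `CompletableFuture` objects on the heap rather than as 10,000 blocked threads, and a single publisher completes them all:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Plain-Java analogy of asynchronous push: pending responses are data,
// not blocked threads, and are completed later by one publisher.
public class PushAnalogy {

    public static void main(String[] args) {
        int clients = 10_000;
        List<CompletableFuture<String>> pending = new ArrayList<>();

        // 10,000 waiting "clients": no thread blocks here; each pending
        // response is just an object waiting to be completed.
        for (int i = 0; i < clients; i++) {
            pending.add(new CompletableFuture<>());
        }

        // One publisher "pushes" an event, completing every pending response.
        for (CompletableFuture<String> f : pending) {
            f.complete("event-1");
        }

        long done = pending.stream().filter(CompletableFuture::isDone).count();
        System.out.println(done);
    }
}
```

This is exactly the inversion asynchronous servlets perform: the cost of a waiting client becomes a small object instead of a whole thread stack.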

You can find many other resources on the Internet explaining this mechanism; just Google for Comet.


Source: https://habr.com/ru/post/1394507/

