How to configure the Play Framework application using the appropriate threads?

I am working with Play Framework (Scala) version 2.3. From the docs:

You can't magically turn synchronous IO into asynchronous by wrapping it in a Future. If you can't change the application's architecture to avoid blocking operations, at some point that operation will have to be executed, and that thread is going to block. So in addition to enclosing the operation in a Future, it's necessary to configure it to run in a separate execution context that has been configured with enough threads to deal with the expected concurrency.

This confuses me a bit about how to configure my web app. In particular, since my application has many blocking calls (a combination of JDBC calls and calls to third-party services through their blocking SDKs), what is the strategy for setting up execution contexts and determining how many threads to provide? Do I need a separate execution context at all? Why can't I just configure the default pool to have enough threads (and if I do, why would I still need to wrap the calls in Futures)?

I know this will ultimately depend on the specifics of my application, but I am looking for some guidance on strategy and approach. Non-blocking operations are preached everywhere in the Play docs, but in reality a typical web application that hits a SQL database makes a lot of blocking calls, and reading the docs I got the impression that such an application will not perform optimally with the default configuration.

2 answers

[...] what is the strategy for setting up execution contexts and determining the number of threads to provide

Well, this is the difficult part, and it depends on your individual requirements.

  • First, you should choose one of the basic profiles from the docs (purely asynchronous, highly synchronous, or many specific thread pools).
  • Second, fine-tune your settings by profiling and load-testing your application.

Do I need a separate execution context?

Not necessarily. But it makes sense to use separate execution contexts if you want to run your blocking IO calls in parallel rather than sequentially (so that database call B does not have to wait for call A to complete).
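To illustrate the point about parallelism, here is a minimal sketch in plain Scala (no Play dependency): a dedicated fixed-size pool for blocking calls, with two simulated JDBC lookups running concurrently instead of one after the other. The names `blockingEC` and `slowCall`, and the pool size of 10, are illustrative assumptions, not anything from Play itself.

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

object ParallelBlocking {
  // Hypothetical dedicated pool for blocking IO; size it to the
  // concurrency you expect (here: up to 10 simultaneous blocking calls).
  implicit val blockingEC: ExecutionContext =
    ExecutionContext.fromExecutor(Executors.newFixedThreadPool(10))

  // Stand-in for a blocking JDBC call.
  def slowCall(id: Int): Int = { Thread.sleep(100); id * 2 }

  def both(): Int = {
    // Each Future occupies one pool thread while it blocks, but
    // calls A and B run concurrently instead of sequentially.
    val a = Future(slowCall(1))
    val b = Future(slowCall(2))
    Await.result(a.zip(b).map { case (x, y) => x + y }, 2.seconds)
  }
}
```

With a single thread, the two 100 ms calls would take ~200 ms back to back; on the dedicated pool they overlap and finish in roughly 100 ms.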

Why can't I just configure the default pool to have enough threads (and if I do, why would I still need to wrap the calls in Futures)?

You can. Check the docs:

play {
  akka {
    akka.loggers = ["akka.event.slf4j.Slf4jLogger"]
    loglevel = WARNING
    actor {
      default-dispatcher = {
        fork-join-executor {
          parallelism-min = 300
          parallelism-max = 300
        }
      }
    }
  }
}

With this approach, you essentially turn Play into a one-thread-per-request model. That is not the idea behind Play, but if you do a lot of blocking I/O calls, it is the easiest approach. In this case, you do not need to wrap database calls in a Future.

That said, you basically have three options:

  • Use only (IO-)technologies whose API calls are non-blocking and asynchronous. This lets you keep a small default execution context, which matches the nature of Play.
  • Turn Play into a one-thread-per-request framework by dramatically enlarging the default execution context. There is no need for Futures; just make your blocking database calls as usual.
  • Create specific execution contexts for your blocking IO calls and gain fine-grained control over what you do.
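For the third option, the Play 2.3 docs show how to declare extra dispatchers in application.conf and look them up through Akka. A sketch, where the context name `contexts.blocking-io` and the parallelism value are illustrative, not prescribed:

```
contexts {
  blocking-io {
    fork-join-executor {
      parallelism-max = 50
    }
  }
}
```

You would then obtain it in code with `val blockingIoContext: ExecutionContext = Akka.system.dispatchers.lookup("contexts.blocking-io")` and pass it (explicitly or implicitly) to the Futures that wrap your blocking calls, leaving the default context free for request handling.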

First, before diving in and refactoring the application, you should determine whether this is really a problem for you. Run some load tests (Gatling is excellent) and do some profiling with something like JProfiler. If you can live with the current performance, then happy days.

The ideal is to use a reactive driver that gives you back a Future, which you can then map and return from your controller. Unfortunately, async support is still an open ticket for Slick. Interaction with REST APIs can be made reactive using the Play WS library, but if you have to go through a library supplied by a third party, then you are stuck.

So, assuming none of those options is feasible and you do need to improve performance, the question is: what does the approach suggested by Play buy you? I think what they are getting at is that it is useful to split your threads into those that will block and those that can work asynchronously.

If, for example, only some of your requests are long and blocking, then with a single thread pool you run the risk of all the threads being tied up in blocking operations. Your controller would then be unable to handle any new requests, regardless of whether those requests need to call the blocking service. If you can allocate enough threads that this never happens, then there is no problem.

If, on the other hand, you are pushing your thread limit, then by using two pools you can keep your fast, non-blocking requests executing quickly. You would have one pool servicing requests in your controller and calling services that return Futures. Some of those Futures would actually be doing their work on a separate thread pool, but only for the blocking operations. Any part of your application that can be made reactive then benefits your controller, because the controller is isolated from the blocking operations.
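The two-pool split described above can be sketched in plain Scala as follows. The object and method names (`TwoPools`, `findUser`, `greet`), the pool sizes, and the 50 ms delay are illustrative assumptions standing in for Play's default context and a blocking JDBC lookup.

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

object TwoPools {
  // Stand-in for Play's small default request-handling context.
  implicit val defaultEC: ExecutionContext =
    ExecutionContext.fromExecutor(Executors.newFixedThreadPool(2))

  // Larger pool reserved exclusively for blocking calls.
  val blockingEC: ExecutionContext =
    ExecutionContext.fromExecutor(Executors.newFixedThreadPool(20))

  // Service layer: the blocking lookup runs on blockingEC,
  // so it never occupies a default-context thread.
  def findUser(id: Long): Future[String] =
    Future { Thread.sleep(50); s"user-$id" }(blockingEC)

  // "Controller": composes the result on the default context
  // without ever blocking it.
  def greet(id: Long): Future[String] =
    findUser(id).map(name => s"Hello, $name")
}
```

Even if all 20 blocking threads are busy, the two default-context threads stay free to accept and compose new requests, which is exactly the isolation the answer describes.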


Source: https://habr.com/ru/post/976190/
