The web server built into Flask is not intended for production use, precisely for the reasons you list: it is single-threaded and easily blocked if any request stalls for a non-trivial amount of time. The Flask documentation lists several options for deploying in a production environment: mod_wsgi, gunicorn, uWSGI, etc. All of these deployment options provide mechanisms for handling concurrency, whether through threads, processes, or non-blocking I/O. Note, however, that if you are performing CPU-bound operations, the only option that gives true concurrency is to use multiple processes.
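What all of those servers have in common is that they speak WSGI, the same interface Flask itself implements. As a minimal sketch (the names here are illustrative, not from any particular framework), a WSGI application is just a callable:

```python
# A minimal WSGI application. Any of the servers mentioned above
# (gunicorn, mod_wsgi, uWSGI) can host a callable with this signature,
# e.g. with gunicorn: gunicorn -w 4 myapp:application
def application(environ, start_response):
    # environ is a dict describing the request; start_response sets the
    # status line and response headers before the body is returned.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from a WSGI app\n']
```

A Flask application object is itself a WSGI callable, which is why the same code can be moved between any of these servers unchanged.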
If you want to use Tornado, you need to write the application in the Tornado style. Since its architecture is based on explicit asynchronous I/O, you cannot use its asynchronous features if you deploy it as a WSGI application. The Tornado style essentially means using non-blocking APIs for all I/O operations and pushing any lengthy CPU-bound operations into subprocesses. The Tornado documentation describes how to make asynchronous I/O calls, but here is a basic sketch of how it works:
```python
from tornado import gen
from tornado.httpclient import AsyncHTTPClient

@gen.coroutine
def fetch_coroutine(url):
    http_client = AsyncHTTPClient()
    response = yield http_client.fetch(url)
    return response.body
```
The response = yield http_client.fetch(url) call is effectively asynchronous: it returns control to the Tornado event loop when the request begins and resumes after the response is received. This lets you run multiple asynchronous HTTP requests at the same time, all in a single thread. Note, however, that anything you do inside fetch_coroutine that is not asynchronous I/O blocks the event loop, and no other requests can be processed while that code runs.
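Modern Tornado runs on the standard library's asyncio event loop, so the same yield-to-the-loop behavior can be sketched with pure stdlib code (the URLs and fake_fetch helper below are illustrative stand-ins for real network calls):

```python
import asyncio

# Each "await" hands control back to the event loop, so both simulated
# requests are in flight concurrently within a single thread.
async def fake_fetch(url, delay):
    await asyncio.sleep(delay)          # stands in for network I/O
    return 'body of %s' % url

async def main():
    # gather() runs both coroutines concurrently; total wall time is
    # roughly max(delay), not the sum of the delays.
    return await asyncio.gather(
        fake_fetch('http://a.example', 0.05),
        fake_fetch('http://b.example', 0.05),
    )

results = asyncio.run(main())
```

Replace asyncio.sleep with real CPU work in either coroutine and the concurrency disappears, which is exactly the event-loop-blocking problem described above.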
To handle long-running CPU-bound operations, you need to hand the work off to a subprocess to avoid blocking the event loop. In Python, this usually means using multiprocessing or concurrent.futures. I would look at this question for more information on how best to integrate those libraries with Tornado. Note that you do not want a process pool larger than the number of processors in the system, so consider how many simultaneous CPU-bound operations you expect to run at any one time when figuring out how to scale beyond the limits of a single machine.
The Tornado documentation also has a section on running behind a load balancer. They recommend NGINX for this purpose.
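The usual shape of that setup is several Tornado processes on different local ports with NGINX proxying across them; a rough config sketch (ports and names here are illustrative, not from the Tornado docs) might look like:

```nginx
# Illustrative only: four Tornado instances behind an NGINX reverse proxy.
upstream tornado_backends {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}

server {
    listen 80;
    location / {
        proxy_pass http://tornado_backends;
    }
}
```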