What is the most scalable way to handle multiple large file downloads in a Python web application?

We have a web application that serves file downloads for some features. The files themselves aren't very large (mostly text documents and the like), but they are much larger than a typical web request, and they tend to tie up our threaded servers (Zope 2, fronted by an Apache proxy).

Mostly I'm still in the brainstorming phase, trying to understand the general techniques available. Some of my ideas are:

  • Using an asynchronous Python server such as Tornado, Diesel, or Gunicorn.
  • Writing something in Twisted to handle it.
  • Just letting nginx handle the actual file downloads (see the sketch after this list).
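To make the nginx idea concrete, here is a minimal sketch of the X-Accel-Redirect pattern. The location name, file path, and WSGI framing are assumptions for illustration only: the Python app just authorizes the request and sets a header, and nginx streams the file itself, so no Python worker stays tied up for the duration of the download.

```python
# Sketch of handing a download off to nginx via X-Accel-Redirect.
# Names and paths are placeholders. Assumes an internal-only nginx location, e.g.:
#   location /protected/ { internal; alias /srv/app-files/; }

def application(environ, start_response):
    filename = environ.get("PATH_INFO", "/").lstrip("/")  # e.g. "report.doc"

    # ... authentication / authorization checks would go here ...

    start_response("200 OK", [
        ("Content-Type", "application/octet-stream"),
        ("Content-Disposition", 'attachment; filename="%s"' % filename),
        # nginx intercepts this header, serves /srv/app-files/<filename> itself,
        # and the Python worker is released immediately.
        ("X-Accel-Redirect", "/protected/" + filename),
    ])
    return [b""]
```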

It is surprisingly hard to find information on which approach to take. I'm sure there are many details that would go into an actual decision, but I'm more concerned with how to approach that decision than anything else. Can someone give me some advice on how to think about this?

+3
2 answers

If you're open to adding Django to your environment, its file upload handling has built-in support for processing uploads in multiple chunks.

Take a look at the "File Uploads" section of the Django documentation.
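As a rough illustration (the field name and destination path below are made up), Django's UploadedFile objects expose a chunks() iterator, so a view can write a large upload to disk piece by piece instead of holding the whole thing in memory:

```python
# Hypothetical Django view; "document" and the /tmp path are placeholders.
import os

from django.http import HttpResponse

def handle_upload(request):
    uploaded = request.FILES["document"]
    destination_path = os.path.join("/tmp", uploaded.name)
    with open(destination_path, "wb") as destination:
        for chunk in uploaded.chunks():  # reads the upload in pieces (64 KB by default)
            destination.write(chunk)
    return HttpResponse("upload stored")
```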

+1

Another option is to offload the file serving entirely to a dedicated storage service such as Amazon S3 or Rackspace, and have the application only hand out links to it.
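One common way to do that is to keep the files in S3 and have the application hand out short-lived signed URLs, so the storage service serves the bytes directly. A minimal sketch using boto3 follows; the bucket name, key, and expiry are placeholder assumptions.

```python
# Sketch: generate a time-limited download URL for an object in S3.
# "my-app-files" and the example key are placeholders.
import boto3

s3 = boto3.client("s3")

def signed_download_url(key, expires_in=3600):
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-app-files", "Key": key},
        ExpiresIn=expires_in,
    )

# The web app would simply redirect the client to signed_download_url("reports/q3.doc").
```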

0

Source: https://habr.com/ru/post/1746428/

