We have a web application where some parts involve file uploads. The uploads are not very large (mostly text documents and the like), but they are much larger than your typical web request, and they tend to tie up our threaded servers (Zope 2, sitting behind an Apache proxy).
Mostly I'm in the brainstorming phase, trying to understand the general techniques. Some of my ideas are:
- Using an asynchronous Python server such as Tornado, Diesel, or gunicorn (see the Tornado sketch after this list).
- Writing something in Twisted to handle it (also sketched below).
- Just using nginx to handle the actual file uploads (an example config follows).
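
For the first option, here is a minimal sketch of what a Tornado upload handler might look like. The URL path, port, and destination directory are placeholders, and it assumes a standard multipart/form-data POST. The point is that Tornado's event loop reads the request asynchronously, so a slow client never monopolizes a thread:

```python
import os
import tornado.ioloop
import tornado.web

class UploadHandler(tornado.web.RequestHandler):
    def post(self):
        # Tornado parses multipart/form-data bodies into self.request.files:
        # a dict mapping form field names to lists of uploaded files, each
        # with 'filename', 'body', and 'content_type' keys.
        for field_name, files in self.request.files.items():
            for f in files:
                # basename() guards against path traversal in the filename.
                dest = os.path.join('/tmp', os.path.basename(f['filename']))
                with open(dest, 'wb') as out:
                    out.write(f['body'])
        self.write('ok')

application = tornado.web.Application([
    (r'/upload', UploadHandler),
])

if __name__ == '__main__':
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
```

One caveat: Tornado buffers the entire request body in memory before `post()` runs, so this frees up threads but not RAM. For "mostly text documents" that is fine; for truly huge files you would want a streaming approach.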
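For the Twisted idea, a sketch along these lines (the port, resource layout, and output path are made up for illustration). Twisted's web server reads the request body asynchronously before `render_POST` is called, spooling large bodies to a temporary file, so again no thread sits idle on a slow upload:

```python
from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.server import Site

class UploadResource(Resource):
    isLeaf = True

    def render_POST(self, request):
        # request.content is a file-like object holding the raw body;
        # Twisted has already read it off the wire asynchronously.
        body = request.content.read()
        with open('/tmp/upload.bin', 'wb') as out:
            out.write(body)
        return b'stored %d bytes' % len(body)

reactor.listenTCP(8080, Site(UploadResource()))
reactor.run()
```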
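For the nginx route, it is worth knowing that nginx buffers request bodies before proxying, so even plain reverse proxying already shields the backend: nginx absorbs the slow upload, then replays it to Zope in one quick burst. A hypothetical snippet (the paths, size limits, and backend address are assumptions):

```nginx
server {
    listen 80;

    location /upload {
        # Allow bodies bigger than the 1m default.
        client_max_body_size 50m;
        # Spool large bodies to disk instead of RAM.
        client_body_buffer_size 128k;
        client_body_temp_path /var/spool/nginx/client_temp;

        # The Zope thread is occupied only for the fast local hop,
        # not for the duration of the client's upload.
        proxy_pass http://127.0.0.1:8080;
    }
}
```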
It is surprisingly hard to find information on which approach to take. I am sure there are many details that would go into an actual decision, but I am more worried about how to go about making that decision than about anything else. Can someone give me some tips on how to approach it?