How can I serve a large file using Pylons?

I am writing a Pylons-based download gateway. The gateway client will access the files by ID:

/file_gw/download/1 

Internally, the file itself is accessible via HTTP from the internal file server:

 http://internal-srv/path/to/file_1.content 

Files can be quite large, so I want to stream the contents rather than read a whole file into memory. I store metadata about a file in a StoredFile model object:

    class StoredFile(Base):
        id = Column(Integer, primary_key=True)
        name = Column(String)
        size = Column(Integer)
        content_type = Column(String)
        url = Column(String)

Given this, what is the best (that is: the most architecturally sound, best-performing, etc.) way to write my file_gw controller?

+4
3 answers

One thing you want to avoid is loading the entire file into memory before returning the first byte to the client. In WSGI you can return an iterator as the response body; there is an example in the WebOb documentation that you should be able to adapt to your controller. After all, Pylons uses WebOb.

The overall effect is that the client starts receiving data as soon as the first chunk is ready, rather than waiting for the whole file to be fetched.
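A minimal sketch of that iterator approach. The chunking generator is the core idea; the commented-out controller around it (the `FileGwController` class, the `Session` query, and the `urllib2.urlopen` call) is an assumption about your setup, not code from the question:

```python
import io

CHUNK_SIZE = 64 * 1024  # 64 KiB per chunk; tune as needed

def iter_chunks(fileobj, chunk_size=CHUNK_SIZE):
    # Yield the upstream body in fixed-size chunks so the whole
    # file is never held in memory at once. WSGI will iterate this
    # and flush each chunk to the client as it is produced.
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Hypothetical Pylons controller using it:
#
# class FileGwController(BaseController):
#     def download(self, id):
#         stored = Session.query(StoredFile).get(id)
#         upstream = urllib2.urlopen(stored.url)
#         response.content_type = stored.content_type
#         response.content_length = stored.size
#         return iter_chunks(upstream)

# Quick demonstration with an in-memory file standing in for the
# upstream HTTP response object:
chunks = list(iter_chunks(io.BytesIO(b"x" * 100), chunk_size=40))
print([len(c) for c in chunks])  # [40, 40, 20]
```

Setting `Content-Length` from the stored metadata matters here: with a streamed body the framework cannot compute it itself, and without it the client cannot show download progress.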

You could also look at GridFS for MongoDB; it is a good way to get a distributed file store optimized for write-once, read-many file workloads.

A combination of these two things would be a good start if you have to do it yourself.

+2

I would consider using nginx with X-Sendfile (http://wiki.nginx.org/XSendfile) or an equivalent.
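With this approach the controller only authorizes the request and emits an `X-Accel-Redirect` header; nginx then serves the file body itself from an internal location. A small sketch, assuming a hypothetical `/protected/` internal location in your nginx config (the helper name and prefix are illustrative, not from the answer):

```python
def xaccel_headers(filename, content_type, internal_prefix="/protected/"):
    # Build the headers that hand the actual file transfer off to
    # nginx: X-Accel-Redirect points at an internal-only location
    # that nginx maps to the real file (or proxies to internal-srv).
    return {
        "X-Accel-Redirect": internal_prefix + filename,
        "Content-Type": content_type,
        "Content-Disposition": 'attachment; filename="%s"' % filename,
    }

# In the controller you would copy these onto the Pylons response
# and return an empty body; nginx replaces the body with the file.
print(xaccel_headers("file_1.content", "application/octet-stream"))
```

The matching nginx side would be an `internal;` location block mapping `/protected/` to the file storage, so clients cannot request those paths directly.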

+1

The most architecturally sound approach is to store the files on Amazon S3 and have the controller redirect the client to S3 for the download.
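In that design the controller does nothing but look up the file's S3 location and issue an HTTP redirect, so no file bytes ever pass through your application. A sketch of the URL-building part (the helper name and bucket layout are hypothetical; for private buckets you would instead generate a time-limited presigned URL, e.g. with boto):

```python
def s3_object_url(bucket, key, region="us-east-1"):
    # Hypothetical helper: public URL of an S3 object, given the
    # bucket name and object key. The us-east-1 endpoint omits the
    # region component.
    if region == "us-east-1":
        return "https://%s.s3.amazonaws.com/%s" % (bucket, key)
    return "https://%s.s3.%s.amazonaws.com/%s" % (bucket, region, key)

# In the controller, roughly:
#     stored = Session.query(StoredFile).get(id)
#     redirect(s3_object_url("my-files-bucket", stored.name))
print(s3_object_url("my-files-bucket", "file_1.content"))
```

The trade-off: you give up the gateway's ability to post-process or meter the transfer, in exchange for S3 handling bandwidth, range requests, and durability for you.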

0

Source: https://habr.com/ru/post/1333220/