Mounting an S3 bucket on EC2 and using the mount point transparently

I have a webapp (call it myapp.com) that allows users to upload files. The webapp will be deployed on an Amazon EC2 instance. I would like these files to be served back to webapp consumers through an S3-backed domain (e.g. uploads.myapp.com).

When a user uploads a file, I can easily drop it into a folder named "site_uploads" on the local EC2 instance. However, since my EC2 instance has limited storage, with a lot of uploads the EC2 filesystem will fill up quickly.

It would be great if the EC2 instance could mount the S3 bucket as the "site_upload" directory. That way, uploads to the EC2 "site_upload" directory would automatically end up at uploads.myapp.com (and my webapp could use template tags to make sure the links for this uploaded content use that S3-backed domain). It would also give me scalable file serving, since file requests hit S3 rather than the EC2 instance. In addition, it would make it easy for my webapp to scale/resize images that appear to be local in "site_upload" but actually live on S3.

I have looked at s3fs, but judging by the comments, it doesn't seem like a fully baked solution. I am looking for a free solution.

FYI, the webapp is written in Django, not that it changes things much.
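Since Django is in play, the "template tags" part of the setup is mostly a settings question. A minimal sketch, assuming the S3-backed mount lives at /mnt/site_uploads and the bucket is served as uploads.myapp.com (the names proposed above are the only confirmed ones; the paths are illustrative):

```python
# settings.py (sketch; paths and domain are assumptions based on the question)
MEDIA_ROOT = "/mnt/site_uploads"          # writes land on the S3-backed mount
MEDIA_URL = "http://uploads.myapp.com/"   # links are served straight from S3
```

Templates then build links as `{{ MEDIA_URL }}{{ some_file }}`, so every uploaded file is referenced via the S3 domain rather than the EC2 host.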

+4
5 answers

I don't use EC2, but I have my S3 bucket permanently mounted on my Linux server. I did it with Jungle Disk. It's not a free solution, but it's very inexpensive.

First I set up Jungle Disk as usual. Then make sure FUSE is installed. Basically you just need to create a configuration file with your private keys, etc. Then just add a line to your fstab, something like this:

jungledisk /path/to/mount/at fuse noauto,allow_other,config=/path/to/jungledisk/config/file.xml 0 0 

Then just mount it and you're good to go.

+5

For uploads, your users can upload directly to S3, as described here.

This way you don't need to mount S3 at all.

When serving files, you can also do it from S3 directly by marking the files as public. I would use a hostname like "files.mydomain.com" or "images.mydomain.com" pointing at S3.
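The direct-to-S3 upload works by having the server sign a policy document that the browser includes in an HTML form posted straight to the bucket, so the file never touches EC2. A minimal sketch of that signing step (classic browser-based POST, Signature V2 era; the bucket name, key prefix, and secret key are placeholders):

```python
import base64
import hashlib
import hmac
import json

AWS_SECRET_KEY = "EXAMPLE_SECRET"          # placeholder, never a real key
policy = {
    "expiration": "2030-01-01T00:00:00Z",  # placeholder expiry
    "conditions": [
        {"bucket": "uploads.myapp.com"},               # assumed bucket name
        ["starts-with", "$key", "site_uploads/"],      # assumed key prefix
        {"acl": "public-read"},
    ],
}
# Base64-encode the JSON policy, then sign it with HMAC-SHA1
policy_b64 = base64.b64encode(json.dumps(policy).encode()).decode()
signature = base64.b64encode(
    hmac.new(AWS_SECRET_KEY.encode(), policy_b64.encode(), hashlib.sha1).digest()
).decode()
# policy_b64 and signature go into hidden fields of the browser's upload form
```

The webapp only renders the form; S3 validates the signature and accepts the upload itself.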

+2

I'm using s3fs, but there are no packaged distributions available. I've made my build available for anyone who wants an easier route.

The configuration documentation wasn't available, so I poked around until I got this in my fstab:

s3fs#{{bucket name}} {{/path/to/mount/point}} fuse allow_other,accessKeyId={{key}},secretAccessKey={{secret key}} 0 0


+2

This is a little snippet I use on an Ubuntu system. I haven't tested it elsewhere, so it will obviously need to be adapted for a Windows (M$) system. You will also need s3-simple-fuse installed. If you end up putting your work in the cloud, I would recommend Fabric to execute the same command remotely.

import os
import subprocess

'''
Note: this is for Linux with s3cmd and libfuse2 installed.
Run 'fusermount -u mount_directory' to unmount.
'''

def mountS3(aws_access_key_id, aws_secret_access_key, targetDir, bucketName=None):
    if bucketName is None:
        bucketName = 's3Bucket'
    mountDir = os.path.join(targetDir, bucketName)
    if not os.path.isdir(mountDir):
        os.mkdir(mountDir)  # original had os.path.mkdir, which doesn't exist
    subprocess.call(
        's3-simple-fuse %s -o AWS_ACCESS_KEY_ID=%s,AWS_SECRET_ACCESS_KEY=%s,bucket=%s'
        % (mountDir, aws_access_key_id, aws_secret_access_key, bucketName),
        shell=True)  # shell=True because the command is passed as one string
+2

I would suggest using a separately mounted EBS volume. I tried to do the same thing for some movie files. Access to S3 was slow, and S3 has some limitations, such as not being able to rename files, no real directory structure, etc.

You can configure EBS volumes in a RAID5 configuration and add space as needed.
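For the RAID5 suggestion, a sketch of the commands involved, kept in the document's own language as data plus a subprocess call; device names and the mount point are illustrative, and the commands would need to run as root on the instance:

```python
# Sketch: building a RAID5 array from four attached EBS volumes with mdadm,
# then formatting and mounting it. All device paths are assumptions.
devices = ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi"]
mdadm_cmd = ["mdadm", "--create", "/dev/md0", "--level=5",
             "--raid-devices=%d" % len(devices)] + devices
mkfs_cmd = ["mkfs.ext3", "/dev/md0"]
mount_cmd = ["mount", "/dev/md0", "/mnt/site_uploads"]
# As root: for cmd in (mdadm_cmd, mkfs_cmd, mount_cmd): subprocess.check_call(cmd)
```

Growing the array later (attach a new EBS volume, `mdadm --add`, then grow the filesystem) is what gives you the "add space as needed" property.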

0

Source: https://habr.com/ru/post/1285818/
