When you "capture an image", you will at least have to write it to a temporary file locally. If you read the remote file with fopen() or curl, you still need some way to push that stream on to Amazon, and I am not aware of any way to build a stream that connects the remote file directly to S3. In fact, this is impossible in principle: S3 cannot run scripts, and the image source cannot run one for you either, so your own server has to sit in the middle.
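As a minimal sketch of that first step, here is how the remote file might be pulled down to a local temporary file; fetch_to_temp_file and its $source_url parameter are hypothetical names for illustration, not part of my actual script:

    <?php
    // Stream a remote image into a local temporary file so it can
    // then be pushed to S3 from our own server.
    function fetch_to_temp_file( $source_url ) {
        $temp_path = tempnam( sys_get_temp_dir(), 'img' );

        $ch = curl_init( $source_url );
        $fp = fopen( $temp_path, 'wb' );
        curl_setopt( $ch, CURLOPT_FILE, $fp );            // write the body straight to disk,
                                                          // so the whole image is never in memory
        curl_setopt( $ch, CURLOPT_FOLLOWLOCATION, true ); // follow redirects
        $ok = curl_exec( $ch );
        curl_close( $ch );
        fclose( $fp );

        if ( ! $ok ) {
            unlink( $temp_path ); // clean up on failure
            return false;
        }
        return $temp_path;
    }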
You could run the image through a buffer of some kind if you want to minimize how much of it sits in memory at once (the curl sketch above already streams to disk in small chunks), but writing it to a temporary file should be the easiest option. If you run out of space because you have so many users on the system, you either move to a bigger server or add another server behind the load balancer. Personally, I use the Amazon S3 PHP class on my system and move the image from the local temporary file directly to S3 using a script like this:
    function upload_image( $image_data ) {
        //
        // Write the image to S3 (AWS SDK for PHP 1.x)
        //
        $s3 = new AmazonS3();

        $bucket   = ACTIVITY_S3_BUCKET;
        $image_id = uniqid( 'myscript' );
        $path     = ACTIVITY_S3_FOLDER . '/' . $image_id;

        $response = $s3->create_object( $bucket, $path, array(
            'body' => $image_data,
            'acl'  => AmazonS3::ACL_PUBLIC
        ) );

        if ( ! $response->isOK() ) {
            return false; // the upload failed
        }

        // Public URL of the uploaded object, if you want to return
        // that instead of the id
        $image_url = 'https://s3.amazonaws.com/' . ACTIVITY_S3_BUCKET . '/' . $path;

        return $image_id;
    }
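To give an idea of how this fits together, here is a hypothetical call site; $temp_path is assumed to come from whatever wrote the temporary file:

    // Read the temporary file, hand its contents to S3,
    // then delete the local copy.
    $image_data = file_get_contents( $temp_path );
    $image_id   = upload_image( $image_data );
    unlink( $temp_path );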
Obviously this is not a bullet-proof script, but I figured I would simply describe the path I took myself. I should add, since your main concern here seems to be computing power: have a look at this interesting post on resizing images in the cloud: http://www.nslms.com/2010/11/01/how-to-resize-billions-of-images-in-the-cloud/
Update: according to this answer ("How to resize images outside the server"), "PHP / GD allows you to send the jpeg directly to the HTTP response".
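A minimal sketch of that technique, assuming $image_data already holds the captured bytes and an arbitrary 200-pixel target width:

    // Resize with GD and emit the jpeg straight into the HTTP
    // response; no temporary file is involved at this stage.
    $src   = imagecreatefromstring( $image_data );
    $new_w = 200;
    $new_h = intval( imagesy( $src ) * $new_w / imagesx( $src ) );
    $dst   = imagecreatetruecolor( $new_w, $new_h );
    imagecopyresampled( $dst, $src, 0, 0, 0, 0,
                        $new_w, $new_h, imagesx( $src ), imagesy( $src ) );
    header( 'Content-Type: image/jpeg' );
    imagejpeg( $dst ); // omitting the filename writes directly to the output
    imagedestroy( $src );
    imagedestroy( $dst );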