Heroku memory error with PHP when reading a large file from S3

I am using the AWS SDK for PHP 2.3.2 to pull a large file (~4 GB) from S3 using its stream wrapper, which should let me use fopen / fwrite to write the file to disk instead of buffering it into memory.

Here's a link:

http://docs.aws.amazon.com/aws-sdk-php-2/guide/latest/service-s3.html#downloading-data

Here is my code:

public function download()
{
    $client = S3Client::factory(array(
        'key'    => getenv('S3_KEY'),
        'secret' => getenv('S3_SECRET')
    ));
    $bucket = getenv('S3_BUCKET');
    $client->registerStreamWrapper();

    try {
        error_log("calling download");
        // Open a stream in read-only mode
        if ($stream = fopen('s3://' . $bucket . '/tmp/' . $this->getOwner()->filename, 'r')) {
            if (($fp = @fopen($this->getOwner()->path . '/' . $this->getOwner()->filename, 'w')) !== false) {
                // While the stream is still open, read 1024 bytes at a time and write them to disk
                while (!feof($stream)) {
                    fwrite($fp, fread($stream, 1024));
                }
                fclose($fp);
            }
            // Be sure to close the stream resource when you're done with it
            fclose($stream);
        }
    } catch (\Exception $e) {
        // The snippet was cut off before the catch block; minimal handling shown here
        error_log($e->getMessage());
    }
}

The file downloads, but I constantly get error messages from Heroku:

2013-08-22T19:57:59.537740+00:00 heroku[run.9336]: Process running mem=515M(100.6%)
2013-08-22T19:57:59.537972+00:00 heroku[run.9336]: Error R14 (Memory quota exceeded)

This makes me think it is buffering into memory anyway. I tried profiling it with https://github.com/arnaud-lb/php-memory-profiler but got a segfault.

I also tried downloading the file using cURL with the CURLOPT_FILE option to write directly to disk, and I still run out of memory. The odd thing is that, according to top, my PHP instance uses 223M of memory, not even half of the 512M allowed.
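For illustration, the CURLOPT_FILE approach looks roughly like this (a minimal sketch, not the exact code used; $url and $dest are placeholder names):

 // Sketch: stream the HTTP response body straight to a file handle instead of memory.
 // $url and $dest are placeholders, not names from the original code.
 $url  = 'https://test.s3.amazonaws.com/file.zip';
 $dest = '/tmp/file.zip';

 $fp = fopen($dest, 'w');                 // open the target file for writing
 $ch = curl_init($url);
 curl_setopt($ch, CURLOPT_FILE, $fp);     // write the response directly to $fp
 curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
 curl_exec($ch);
 curl_close($ch);
 fclose($fp);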

Does anyone have any ideas? I am running this from the PHP 5.4.17 CLI for testing.

1 answer

Have you tried 2x dynos? They have 1 GB of memory.

You could also try downloading the file by running the curl command from PHP. It is not the cleanest way, but it will be much faster and more reliable, and it is easy to use.

 exec("curl -O http://test.s3.amazonaws.com/file.zip", $output); 

This example is for a public URL. If you do not want to make your S3 files public, you can always create a signed URL and use it in combination with the curl command.
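Something along these lines should work with the SDK version mentioned in the question (a sketch only; the key and output path are placeholder values, and getObjectUrl is the SDK 2 helper for generating pre-signed URLs):

 // Sketch: create a pre-signed URL with the AWS SDK for PHP 2 and hand it to curl.
 // Assumes "use Aws\S3\S3Client;" as in the question's code; key and paths are placeholders.
 $client = S3Client::factory(array(
     'key'    => getenv('S3_KEY'),
     'secret' => getenv('S3_SECRET')
 ));
 $url = $client->getObjectUrl(getenv('S3_BUCKET'), 'tmp/file.zip', '+10 minutes');
 exec('curl -o /tmp/file.zip ' . escapeshellarg($url), $output);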

