RequestTimeout when uploading to S3 using PHP

I am having problems uploading files to S3 from one of our servers. We use S3 to store our backups, and all of our servers run Ubuntu 8.04 with PHP 5.2.4 and libcurl 7.18.0. Whenever I try to upload a file, Amazon returns a RequestTimeout error. I know our current version of libcurl has a bug that prevents uploads of more than 200 MB, which is why we split our backups into smaller files.

We have servers hosted on Amazon EC2 and servers hosted in clients' "private clouds" (VMware ESX hosts behind the company's firewall). The specific server I'm having problems with sits in a client's private cloud.

We are using the Amazon S3 PHP class from http://undesigned.org.za/2007/10/22/amazon-s3-php-class . I have tried files of 200 MB, 100 MB and 50 MB, all with the same result. We upload the files with the following:

    $s3 = new S3($access_key, $secret_key, false);
    $success = $s3->putObjectFile($local_path, $bucket_name, $remote_name, S3::ACL_PRIVATE);

I tried setting curl_setopt($curl, CURLOPT_NOPROGRESS, false); so I could watch the progress bar while it uploads the file. The first time I ran it with this option it worked, but every subsequent run failed. It seems to upload at about 3 Mbps for 5-10 seconds and then drops to 0. After 20 seconds sitting at 0, Amazon returns the error "RequestTimeout - Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed."

I tried updating the S3 class to the latest version from GitHub, but that made no difference. I also found the Amazon S3 Stream Wrapper class and tried the following code:

    include 'gs3.php';

    define('S3_KEY', 'ACCESSKEYGOESHERE');
    define('S3_PRIVATE', 'SECRETKEYGOESHERE');

    $local = fopen('/path/to/backup_id.tar.gz.0000', 'r');
    $remote = fopen('s3://bucket-name/customer/backup_id.tar.gz.0000', 'w+r');

    $count = 0;
    while (!feof($local)) {
        $result = fwrite($remote, fread($local, (1024 * 1024)));
        if ($result === false) {
            fwrite(STDOUT, $count++.': Unable to write!'."\n");
        } else {
            fwrite(STDOUT, $count++.': Wrote '.$result.' bytes'."\n");
        }
    }

    fclose($local);
    fclose($remote);

This code reads the file 1 MB at a time and writes it to S3. For a 50 MB file I get "1: Wrote 1048576 bytes" 49 times (the first number increments each time, of course), but on the last iteration of the loop I get the error "Notice: fputs(): send of 8192 bytes failed with errno=11 Resource temporarily unavailable in /path/to/http.php on line 230".
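For reference, errno 11 is EAGAIN, which means the socket buffer was temporarily full rather than the connection being dead. A hypothetical retry helper along the following lines (write_fully() is not part of gs3.php, just a sketch) could be wrapped around the fwrite() call above, although it would only paper over the underlying stall:

    // Hypothetical helper (not part of the stream wrapper class): retry short or
    // failed writes, since errno 11 (EAGAIN) only means the socket buffer was
    // full at that moment, not that the connection is gone.
    function write_fully($remote, $data, $maxRetries = 10)
    {
        $written  = 0;
        $attempts = 0;
        $length   = strlen($data);
        while ($written < $length) {
            $n = fwrite($remote, substr($data, $written));
            if ($n === false || $n === 0) {
                if (++$attempts > $maxRetries) {
                    return false;      // repeated failures - give up on this chunk
                }
                usleep(250000);        // back off for 0.25 s and let the socket drain
                continue;
            }
            $attempts = 0;
            $written += $n;
        }
        return $written;
    }

    // Usage inside the loop above, instead of the bare fwrite():
    // $result = write_fully($remote, fread($local, 1024 * 1024));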

My first thought was that this is a network problem. We called the client, explained the problem and asked them to look at their firewall to see if it was dropping anything. According to their network administrator, the traffic is flowing fine.

I am at a loss as to what to try next. In the meantime I have been running the backups manually, transferring them to another machine with SCP and uploading them from there. This is obviously not ideal, and any help would be greatly appreciated.

Update - 06/23/2011

I have tried many of the options below, but they all give the same result. I also found that even trying to scp the file from the server in question to another server stalls almost immediately and eventually dies. However, I can scp the same file from another machine without any problem. This makes me even more convinced that it is a network problem on the client's side; any further suggestions would be very helpful.

+6
5 answers

This problem occurs because you are trying to upload the same file again. Example:

    $s3 = new S3('XXX', 'YYYY', false);
    $s3->putObjectFile('file.jpg', 'bucket-name', 'file.jpg');
    $s3->putObjectFile('file.jpg', 'bucket-name', 'newname-file.jpg');

To fix this, simply copy the file and give it a new name, and then upload it normally.

Example:

    $s3 = new S3('XXX', 'YYYY', false);
    $s3->putObjectFile('file.jpg', 'bucket-name', 'file.jpg');

    // now copy file.jpg to newname-file.jpg and upload that instead
    copy('file.jpg', 'newname-file.jpg');

    $s3->putObjectFile('newname-file.jpg', 'bucket-name', 'newname-file.jpg');
+4

I solved this problem in a different way. In my case the issue was that the filesize() function was returning a stale, cached size value. The fix is simply to call clearstatcache() before the upload.
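A minimal sketch of that fix, assuming the same undesigned.org.za S3 class used in the question (paths and variable names are placeholders):

    require_once 'S3.php'; // adjust the path to wherever the undesigned.org.za class lives

    // Clear PHP's stat cache so filesize() inside the S3 class sees the real,
    // current size of the file before the upload starts.
    clearstatcache();

    $s3 = new S3($access_key, $secret_key, false);
    $success = $s3->putObjectFile($local_path, $bucket_name, $remote_name, S3::ACL_PRIVATE);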

+2

I have repeatedly experienced this same problem.

I now have many scripts that constantly upload files to S3.

The best solution I can offer is to use the Zend libraries (either the stream wrapper or the direct S3 API).

http://framework.zend.com/manual/en/zend.service.amazon.s3.html
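For reference, here is a rough sketch of the direct Zend_Service_Amazon_S3 API from ZF1 (method names are taken from the manual linked above; the keys, bucket and paths are placeholders, so check them against your framework version):

    require_once 'Zend/Service/Amazon/S3.php';

    $s3 = new Zend_Service_Amazon_S3('ACCESSKEYGOESHERE', 'SECRETKEYGOESHERE');

    // putFile() streams the local file up to the given "bucket/object" path.
    $s3->putFile('/path/to/backup_id.tar.gz.0000',
                 'bucket-name/customer/backup_id.tar.gz.0000');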

Since the latest version of Zend Framework, I have not seen any problems with timeouts. But if you find that you still have problems, a simple tweak will do the trick.

Just open Zend/Http/Client.php and change the "timeout" value in the $config array. At the time of writing it was on line 114. Prior to the latest version I was running with 120 seconds, but now everything works fine with the 10-second timeout.
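If you would rather not edit the library file itself, the timeout can probably also be raised at runtime through the shared HTTP client. This relies on an assumption about the ZF1 plumbing (that Zend_Service_Amazon_S3 sends its requests through the static client returned by getHttpClient()), so verify it against your version:

    require_once 'Zend/Service/Amazon/S3.php';

    // Assumption: the S3 service class uses the shared static Zend_Http_Client,
    // so raising its 'timeout' config (in seconds) here avoids patching
    // Zend/Http/Client.php directly.
    Zend_Service_Amazon_S3::getHttpClient()->setConfig(array('timeout' => 120));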

Hope this helps!

+1

There are many possible solutions. I had this exact problem, but I did not want to write code and debug the issue.

Initially I was looking for a way to mount an S3 bucket on a Linux machine, and I found something interesting:

s3fs - http://code.google.com/p/s3fs/wiki/InstallationNotes - this worked for me. It uses FUSE + rsync to sync the files to S3, keeps a copy of all the file names on the local system and makes them look like ordinary files and folders.

This saved us a bunch of time, plus there is no headache of writing code to transfer files.

Later, when I looked to see whether there were other options, I found a Ruby script that works from the CLI and can help you manage your S3 account:

s3cmd - http://s3tools.org/s3cmd - this looks pretty straightforward.

[UPDATE] Another CLI tool I found - s3sync.

s3sync - https://forums.aws.amazon.com/thread.jspa?threadID=11975&start=0&tstart=0 - found in the Amazon AWS community.

I don't see much difference between the two, so if you are not worried about disk space I would pick s3fs over s3cmd. Having the files on a local disk makes things more convenient, and you can see the files on the disk.

Hope this helps.

+1

You should take a look at the AWS SDK for PHP. This is the AWS PHP library, formerly known as Tarzan and CloudFusion.

http://aws.amazon.com/sdkforphp/

The S3 class included in it is solid. We use it to upload multi-GB files all the time.
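For reference, a rough sketch of an upload with the 1.x-era SDK; the option names follow the documentation of the time and the credentials normally come from the SDK's config.inc.php, so treat the details as assumptions to check against your version:

    require_once 'sdk.class.php';

    // Credentials are read from the SDK's config.inc.php (or can be passed in,
    // depending on the SDK version).
    $s3 = new AmazonS3();

    $response = $s3->create_object('bucket-name', 'customer/backup_id.tar.gz.0000', array(
        'fileUpload' => '/path/to/backup_id.tar.gz.0000', // stream the file from disk
        'acl'        => AmazonS3::ACL_PRIVATE,
    ));

    var_dump($response->isOK()); // true when the PUT succeeded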

+1

Source: https://habr.com/ru/post/888641/

