My API server has very limited disk space (500 MB) and memory (1 GB). One of the API endpoints it exposes fetches a file: the user calls the API and passes a download URL.
My server's goal is then to upload this file to Amazon S3. Unfortunately, I cannot ask the user to upload the file directly to S3 (that is part of the requirements).
The problem is that these are sometimes huge files (10 GB), so saving them to disk and then uploading to S3 is not an option (500 MB disk limit).
My question is: how can I "pipe" a file from the input URL to S3 using curl on Linux?
Note: I have tried transferring it in different ways, but either the tool first tries to buffer the whole file and fails, or I hit a memory error and curl crashes. I assume the download is much faster than the upload, so the pipe buffer in memory grows and explodes (1 GB of memory on the server) when I receive 10 GB files.
Is there a way to achieve what I'm trying to do using curl and piping?
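For reference, here is a minimal sketch of one streaming approach, assuming the AWS CLI is installed and configured with credentials: `aws s3 cp` accepts `-` as the source, reading from stdin and performing a multipart upload in bounded-size chunks, so neither the file nor the pipe contents need to fit on disk or in memory. The bucket name, key, and URL below are placeholders.

```shell
#!/bin/sh
# Stream a remote file straight into S3 without touching local disk.
# curl writes the body to stdout; aws s3 cp reads stdin ("-") and
# uploads it in multipart chunks. The --expected-size hint (bytes)
# lets the CLI pick a chunk size large enough for files over ~50 GB
# of default multipart limits; it is optional for a 10 GB file.
curl --fail --location --silent --show-error "https://example.com/big-file.bin" \
  | aws s3 cp - "s3://my-bucket/big-file.bin" --expected-size 10737418240
```

Because the pipe between the two processes is a fixed-size kernel buffer, a slow upload simply back-pressures curl rather than accumulating data in memory: when the buffer is full, curl's writes block until `aws s3 cp` drains it.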
Thanks, Jack