I finally found the answer to my main question: what actually happens during the 3-minute wait after the upload finishes and before my action gets called?
All of this is explained very clearly in this post: Rails Path - Uploading Files
"When the browser downloads the file, it encodes the contents in the format" multipart mime "(the same format used when sending an email attachment). For your application to be able to do something with this file, rails must cancel this encoding. To do this "You need to read the huge request body and match each line with a few regular expressions. It can be incredibly slow and use a huge amount of processor and memory."
I tried the modporter Apache module mentioned in the post. The only problem is that the module and its companion Rails plugin were written about four years ago, their website no longer works, and there is almost no documentation.
With modporter, I wanted to point PorterDir at my NFS-mounted directory, hoping it would land uploaded files directly on the NAS without an extra copy out of a temporary directory. I never got that far, though: the module seemed to ignore the PorterDir I specified and handed my action a completely different path. Worse, the path it returned did not even exist, so I had no idea what was actually happening to my uploads.
My workaround
I had to solve the problem quickly, so for now I went with a somewhat hacky solution: writing my own JavaScript / Ruby code to handle the uploaded files in chunks.
JS example:
var MAX_CHUNK_SIZE = 20000000; // chunk size in bytes (~20 MB)

window.FileUploader = function (opts) {
  var file = opts.file;
  var url = opts.url;
  var current_byte = 0;
  var success_callback = opts.success;
  var progress_callback = opts.progress;
  var percent_complete = 0;
  var paused = false;
  var upload_id = null;

  this.start = this.resume = function () {
    paused = false;
    upload();
  };

  this.pause = function () {
    paused = true;
  };

  function upload() {
    if (paused) return; // stop sending chunks until resume() is called

    var chunk = file.slice(current_byte, current_byte + MAX_CHUNK_SIZE);
    var fd = new FormData();
    fd.append('chunk', chunk);
    fd.append('filename', file.name);
    fd.append('total_size', file.size);
    fd.append('start_byte', current_byte);
    if (upload_id) fd.append('upload_id', upload_id); // identify the partial file on later chunks

    $.ajax(url, {
      type: 'post',
      data: fd,
      processData: false,  // required so jQuery passes the FormData through untouched
      contentType: false,
      success: function (data) {
        current_byte = data.next_byte;
        upload_id = data.upload_id;

        if (data.path) {
          // the server has the whole file; hand the temporary path back to the caller
          success_callback(data.path);
        } else {
          percent_complete = Math.round(current_byte / file.size * 100);
          if (percent_complete > 100) percent_complete = 100;
          progress_callback(percent_complete); // update some UI element to give the user feedback
          upload(); // send the next chunk
        }
      }
    });
  }
};
(forgive any syntax errors; I'm typing this from memory)
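To give an idea of how it gets wired up, usage looks roughly like this; the input id, endpoint URL, and progress element are just placeholders, not my actual markup:

// Hypothetical wiring: grab the selected file and start the chunked upload.
var input = document.getElementById('file_field');   // placeholder input id
var uploader = new FileUploader({
  file: input.files[0],
  url: '/uploads/chunk',                              // placeholder chunk-endpoint route
  progress: function (percent) {
    $('#progress').text(percent + '%');               // placeholder progress element
  },
  success: function (path) {
    console.log('file assembled on server at ' + path);
  }
});
uploader.start();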
On the server side, I created a new route for receiving the file chunks. When the first chunk is sent, I generate an upload_id based on the file name / size and check whether I already have a partial file left over from an interrupted upload. If so, I return the next start byte I need, along with the id. If not, I save the first chunk and return the id.
Each subsequent chunk gets appended to the partial file until its size matches the size of the original file. At that point, the server responds with the temporary path to the file.
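Roughly, the chunk endpoint could look like the sketch below; the controller name, parameter names, and staging directory are assumptions for illustration, not my exact code:

# Hypothetical Rails controller action for the chunk endpoint (names are illustrative).
class ChunkedUploadsController < ApplicationController
  UPLOAD_DIR = Rails.root.join('tmp', 'chunked_uploads') # assumed staging directory

  def create
    filename   = params[:filename]
    total_size = params[:total_size].to_i
    start_byte = params[:start_byte].to_i
    # Derive a stable id from the file name and size so an interrupted upload can resume.
    upload_id  = params[:upload_id] || Digest::SHA1.hexdigest("#{filename}-#{total_size}")
    partial    = UPLOAD_DIR.join(upload_id)

    FileUtils.mkdir_p(UPLOAD_DIR)
    current_size = File.exist?(partial) ? File.size(partial) : 0

    if start_byte == current_size
      # The chunk lines up with what is already on disk, so append it.
      File.open(partial, 'ab') { |f| f.write(params[:chunk].read) }
      current_size = File.size(partial)
    end
    # Otherwise skip the write and just tell the client where to resume from.

    if current_size >= total_size
      # Whole file received: hand the temporary path back to the client.
      render json: { upload_id: upload_id, path: partial.to_s }
    else
      render json: { upload_id: upload_id, next_byte: current_size }
    end
  end
end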
The JavaScript then removes the file input from the form, replaces it with a hidden input whose value is the path returned by the server, and submits the form.
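In the success callback, that swap might look roughly like this (the form id and field name are placeholders):

// Hypothetical success callback: swap the file field for a hidden path field and submit.
function onUploadComplete(path) {
  var form = $('#upload_form');                  // placeholder form id
  form.find('input[type=file]').remove();       // drop the original file input
  $('<input>', {
    type: 'hidden',
    name: 'document[uploaded_file_path]',        // placeholder field name
    value: path
  }).appendTo(form);
  form.submit();
}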
Finally, back on the server side, I handle moving / renaming the file and saving its final path on my model.
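That last step can be as simple as the following sketch; the model, attribute, and destination directory are again just stand-ins:

# Hypothetical form-handling action: move the assembled file into permanent storage
# and record its final path on the model (names are illustrative).
def create
  # Path returned by the chunk endpoint; a real app should validate it before trusting it.
  tmp_path   = params[:document][:uploaded_file_path]
  final_dir  = Pathname.new('/mnt/nas/uploads')            # assumed NFS-mounted destination
  final_path = final_dir.join(File.basename(tmp_path))

  FileUtils.mkdir_p(final_dir)
  FileUtils.mv(tmp_path, final_path)                       # move instead of re-copying the bytes

  @document = Document.new(file_path: final_path.to_s)     # assumed model and attribute
  if @document.save
    redirect_to @document
  else
    render :new
  end
end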
Phew