Marc's answer is definitely the right call for long-term storage of results. Depending on your I/O and reliability needs, you can also set up one server as an NFS server and have the other nodes mount it remotely.
Typically the NFS server would be your "master node", and it can serve both binaries and configuration. Workers would periodically re-scan the directories exported from the master to pick up new binaries or configuration. If you don't need a lot of disk I/O (you mentioned neural modeling, so I'm presuming the dataset fits in memory and you only write out final results), it may be acceptably fast to simply write your output to NFS on the master node and then have the master node back the results up to somewhere like GCS.
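As a concrete illustration of that last step, here is a minimal sketch of a backup loop that could run on the master node, assuming the results live in an NFS-exported directory such as /srv/results and a GCS bucket named my-sim-results already exists (both names are hypothetical, and it uses the google-cloud-storage Python client with default GCE credentials):

```python
import time
from pathlib import Path

from google.cloud import storage

RESULTS_DIR = Path("/srv/results")   # hypothetical directory exported to workers via NFS
BUCKET_NAME = "my-sim-results"       # hypothetical GCS bucket for backups
SYNC_INTERVAL_SECONDS = 300          # back up every five minutes


def backup_results(client: storage.Client) -> None:
    """Upload any result file that is not yet present in the bucket."""
    bucket = client.bucket(BUCKET_NAME)
    for path in RESULTS_DIR.rglob("*"):
        if not path.is_file():
            continue
        blob = bucket.blob(str(path.relative_to(RESULTS_DIR)))
        if not blob.exists():
            blob.upload_from_filename(str(path))


if __name__ == "__main__":
    client = storage.Client()
    while True:
        backup_results(client)
        time.sleep(SYNC_INTERVAL_SECONDS)
```

You could just as easily run this from cron instead of a long-lived loop; the point is only that the workers keep writing plain files to NFS and a single process on the master handles the GCS upload.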
The main advantage of NFS over GCS is that NFS gives you familiar filesystem semantics, which helps if you're using third-party software that expects to read files from a filesystem. It's fairly easy to periodically sync files from GCS down to local storage as well, but that requires running an extra agent on the host.
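For reference, that "extra agent" can be as simple as the sketch below, which just shells out to gsutil rsync on a timer; the bucket URI and local directory are hypothetical:

```python
import subprocess
import time

BUCKET_URI = "gs://my-sim-inputs"   # hypothetical bucket holding binaries/config
LOCAL_DIR = "/var/local/sync"       # hypothetical local mirror directory
POLL_INTERVAL_SECONDS = 60


while True:
    # Mirror new or changed objects from the bucket into the local directory.
    subprocess.run(
        ["gsutil", "-m", "rsync", "-r", BUCKET_URI, LOCAL_DIR],
        check=True,
    )
    time.sleep(POLL_INTERVAL_SECONDS)
```

It works, but it's one more thing to install, monitor, and restart on every host, which is exactly the overhead the NFS approach avoids.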
The drawbacks of setting up NFS are that you probably need to keep UIDs in sync across hosts, NFS can be a security hole (I would only expose NFS on my private network, never to anything outside 10/8), and it requires installing extra packages on both client and server to set up the shares. In addition, NFS is only as reliable as the machine hosting it, whereas object stores like GCS or S3 are implemented with redundant servers and possibly even geographic diversity.