It sounds like you are looking for a shared file server such as NFS. You can run an NFS server on one GCE instance to serve data to the other compute nodes. The Linux Documentation Project has reasonable guidance on setting this up.
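As a rough sketch of the NFS route (the export path, hostname, and network range below are placeholders; adapt them to your project, and note the package names assume a Debian/Ubuntu image):

```shell
# On the GCE instance acting as the file server:
sudo apt-get install -y nfs-kernel-server

# Export a directory to the internal network (range is a placeholder):
echo '/srv/shared 10.240.0.0/16(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra    # re-export everything listed in /etc/exports

# On each compute node (nfs-server is the server's internal hostname):
sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/shared
sudo mount -t nfs nfs-server:/srv/shared /mnt/shared
```

If you go this way, you will probably also want an `/etc/fstab` entry on each node so the mount survives reboots.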
Another option is to use an object store such as Google Cloud Storage, which lets you store blobs of binary data under different names (a bit like a cloud file system). If your software needs to access the data through standard file system calls, a FUSE file system such as s3fuse can expose a Google Cloud Storage bucket as a set of files and directories on each machine.
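For the GCS route, a minimal sketch (the bucket name and mount point are placeholders, and the exact s3fuse invocation varies by version, so treat the mount lines as illustrative and check the s3fuse documentation):

```shell
# Copy objects in and out of a bucket directly with gsutil:
gsutil cp ./results.dat gs://my-bucket/results.dat
gsutil cp gs://my-bucket/results.dat ./results-copy.dat

# Or mount the bucket via FUSE so ordinary file commands work.
# (Invocation is illustrative; s3fuse reads the bucket and
# credentials from its configuration file.)
mkdir -p /mnt/gcs
s3fuse /mnt/gcs
ls /mnt/gcs
```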
How to choose between the two options:
- If you are already using NFS, it may be more convenient to keep the same setup in place. If not, I would suggest trying s3fuse and GCS.
- If you run your own NFS server, you are responsible for any backups and so on that you may need. Google Cloud Storage is replicated across multiple sites, so even if there is an outage at one site, you can still read and write your data.
- FUSE file systems such as s3fuse typically support read and write operations, but do not support the more complex locking behavior that NFS provides.
- You may be charged for the number of reads and writes you make against data stored in GCS. (I don't remember for sure, but I believe network traffic to/from GCS from GCE is free.) If you run your own NFS server, you will pay for the running instance and the persistent disk, as well as for the read and write operations to that disk.
You might also be interested in this other Stack Overflow question, which covers some of the same ground: Storage options for diskless servers