Sharing persistent volumes among pods in Kubernetes / OpenShift

This may be a dumb question, but I have not found much online and want to clarify this.

Given two deployments, A and B, each with a different container image:

  • They are deployed as two different pods (different rc, svc, etc.) in the Kubernetes / OpenShift cluster.
  • Both of them need to mount the same volume in order to read the same files (leaving locking aside for now), or at least the same directory structure within that volume.
  • This volume is mounted via a PVC (Persistent Volume Claim) backed by a PV (Persistent Volume) configured against an NFS share; see the sketch after this list.
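For reference, a minimal sketch of what such a PV and PVC could look like is below (the names, NFS server, and export path are placeholders, not values from an actual cluster):

 apiVersion: v1
 kind: PersistentVolume
 metadata:
   name: pv-nfs                  # placeholder name
 spec:
   capacity:
     storage: 1Gi
   accessModes:
     - ReadWriteMany
   nfs:
     server: nfs1.example.com    # placeholder NFS server
     path: /exports/shared       # placeholder export path
 ---
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: nfs-claim               # placeholder name
 spec:
   accessModes:
     - ReadWriteMany
   resources:
     requests:
       storage: 1Gi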

Can someone confirm whether this is possible? That is, two different pods attached to the same volume via the same PVC, so that both of them read from the same volume.

Hope this makes sense ...

+7
2 answers

TL;DR: You can share a PV and PVC within the same project/namespace for shared volumes (NFS, Gluster, etc.). You can also access a shared volume from several projects/namespaces, but that requires a PV and PVC per project, because a PV is bound to a single project/namespace once claimed, and a PVC is project/namespace-scoped.

Below I have tried to illustrate the current behavior and how PVs and PVCs are scoped in OpenShift. These are simple examples using NFS as the persistent storage layer.

accessModes are, at this point, just labels; they have no real functionality in terms of access control to the PV. Some examples of this are shown below.
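For instance, the accessModes stanza in a PV spec currently documents intent rather than enforcing it; a fragment like the following does not stop several pods in the bound project from mounting the volume, as Example 1 shows:

   accessModes:
     - ReadWriteOnce   # effectively a label today; no access control is enforced from this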

A PV is global in the sense that it can be seen/accessed by any project/namespace. HOWEVER, once it is bound to a project, it can then only be accessed by pods from that same project/namespace.

A PVC is project/namespace-scoped (so if you have multiple projects, you will need a new PV and PVC for each project in order to connect to the shared NFS volume; you cannot reuse the PV from the first project).

Example 1:
I have two different pods running in the default project/namespace; both access the same PV and the exported NFS share. Both mount it and run fine.

 [root@k8dev nfs_error]# oc get pv
 NAME      LABELS    CAPACITY   ACCESSMODES   STATUS    CLAIM               REASON    AGE
 pv-nfs    <none>    1Gi        RWO           Bound     default/nfs-claim             3m

 [root@k8dev nfs_error]# oc get pods    <--- running from the DEFAULT project, no issues connecting to the PV
 NAME              READY     STATUS    RESTARTS   AGE
 nfs-bb-pod2-pvc   1/1       Running   0          11m
 nfs-bb-pod3-pvc   1/1       Running   0          10m
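The claim and pods behind this output would look roughly like the following (a sketch; the container image, command, and mount path are assumptions, only the claim and pod names come from the output above):

 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: nfs-claim
   namespace: default
 spec:
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
       storage: 1Gi
 ---
 apiVersion: v1
 kind: Pod
 metadata:
   name: nfs-bb-pod2-pvc
   namespace: default
 spec:
   containers:
     - name: busybox
       image: busybox                    # assumed image
       command: ["sleep", "3600"]        # assumed command to keep the pod running
       volumeMounts:
         - name: nfsvol
           mountPath: /usr/share/busybox # assumed mount path
   volumes:
     - name: nfsvol
       persistentVolumeClaim:
         claimName: nfs-claim            # the second pod references this same claim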

Example 2:
I have two different pods running in the default project/namespace, and I try to create another pod that uses the same PV, but from a new project called testproject, to access the same NFS export. The third pod, from the new testproject, will not be able to bind to the PV, since it is already bound by the default project.

 [root@k8dev nfs_error]# oc get pv
 NAME      LABELS    CAPACITY   ACCESSMODES   STATUS    CLAIM               REASON    AGE
 pv-nfs    <none>    1Gi        RWO           Bound     default/nfs-claim             3m

 [root@k8dev nfs_error]# oc get pods    <--- running from the DEFAULT project, no issues connecting to the PV
 NAME              READY     STATUS    RESTARTS   AGE
 nfs-bb-pod2-pvc   1/1       Running   0          11m
 nfs-bb-pod3-pvc   1/1       Running   0          10m

** Create a new claim against the existing PV from another project (testproject), and the PVC will fail:

 [root@k8dev nfs_error]# oc get pvc
 NAME        LABELS    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
 nfs-claim   <none>    Pending                                      2s

** nfs-claim will never bind to the pv-nfs PV, because it cannot see it from its current project/namespace scope.
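The failing claim is just an ordinary PVC created in the other project; a sketch, assuming the same claim definition as in the default project:

 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: nfs-claim
   namespace: testproject   # same definition, different project/namespace
 spec:
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
       storage: 1Gi
 # this claim stays Pending, because pv-nfs is already Bound to default/nfs-claim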

Example 3:

I have two different pods running in the default project, and then I create another PV, PVC, and pod from testproject. Both projects are able to access the same NFS export, but I need a PV and PVC in each of the projects.

 [root@k8dev nfs_error]# oc get pv
 NAME      LABELS    CAPACITY   ACCESSMODES   STATUS    CLAIM                    REASON    AGE
 pv-nfs    <none>    1Gi        RWX           Bound     default/nfs-claim                  14m
 pv-nfs2   <none>    1Gi        RWX           Bound     testproject/nfs-claim2             9m

 [root@k8dev nfs_error]# oc get pods --all-namespaces
 NAMESPACE     NAME              READY     STATUS    RESTARTS   AGE
 default       nfs-bb-pod2-pvc   1/1       Running   0          11m
 default       nfs-bb-pod3-pvc   1/1       Running   0          11m
 testproject   nfs-bb-pod4-pvc   1/1       Running   0          15s

** Now I have three pods accessing the same shared NFS volume across two projects, but I needed two PVs, since each is bound to a single project, and two PVCs, one for each project and the NFS PV it is trying to access.
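Concretely, the second project needs its own PV and PVC pair pointing at the same export; a sketch, reusing the NFS server and path from Example 4 as placeholders:

 apiVersion: v1
 kind: PersistentVolume
 metadata:
   name: pv-nfs2
 spec:
   capacity:
     storage: 1Gi
   accessModes:
     - ReadWriteMany
   nfs:
     server: nfs1.rhs    # same NFS export as pv-nfs (placeholder values)
     path: /opt/data5
 ---
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: nfs-claim2
   namespace: testproject
 spec:
   accessModes:
     - ReadWriteMany
   resources:
     requests:
       storage: 1Gi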

Example 4:

If I bypass PVs and PVCs, I can mount NFS shares directly from any project using the nfs volume plugin directly:

 volumes:
   - name: nfsvol
     nfs:
       path: /opt/data5
       server: nfs1.rhs
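In context, a complete pod using that volume directly could look like this (a sketch; the pod name, image, command, and mount path are assumptions):

 apiVersion: v1
 kind: Pod
 metadata:
   name: nfs-direct-pod           # hypothetical name
 spec:
   containers:
     - name: busybox
       image: busybox             # assumed image
       command: ["sleep", "3600"]
       volumeMounts:
         - name: nfsvol
           mountPath: /mnt/nfs    # assumed mount path
   volumes:
     - name: nfsvol
       nfs:
         path: /opt/data5
         server: nfs1.rhs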

Volume security is another layer on top of this. Using supplemental groups (for shared storage such as NFS, Gluster, etc.), administrators and developers can further manage and control access to the shared NFS export.
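Building on the pod sketch above, supplemental groups are set in the pod-level securityContext; the GID below is a placeholder and has to match the group that owns the export on the NFS server:

 spec:
   securityContext:
     supplementalGroups: [5555]   # placeholder GID that owns the NFS export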

Hope that helps

+11

AFAIK, binding a PV multiple times is not supported. You can use the volume source (NFS in your case) directly for your use case.

0

Source: https://habr.com/ru/post/1242875/

