TL;DR: You can share a PV and PVC within a single project/namespace for shared volumes (NFS, Gluster, etc.), and you can also reach a shared volume from several projects/namespaces, but that requires a PV and PVC per project, because a PV can bind to only one claim and a PVC is scoped to a single project/namespace.
Below I try to illustrate the current behavior and how PVs and PVCs are scoped by OpenShift. These are simple examples using NFS as the persistent storage tier.
accessModes are, at this moment, just labels; they have no real functionality in terms of access control to the PV. Below are some examples to show this.
A PV is global in the sense that it can be viewed/accessed by any project/namespace. HOWEVER, once it is bound to a claim, it can then only be accessed by containers from that claim's project/namespace.
A PVC is project/namespace-scoped (so if you have multiple projects, you will need a new PV and PVC in each project to connect to the shared NFS volume - you cannot reuse the PV from the first project).
Example 1:
I have two distinct pods running in the default project/namespace, both accessing the same PV and the same NFS export. Both mount and run fine.
[ root@k8dev nfs_error]
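As a rough sketch, the PV, PVC, and pod definitions for this setup could look like the following. The NFS server and path are taken from Example 4, the PV/PVC names match the later `oc get pv` output, and the pod name matches the later `oc get pods` output; the image, command, and mount path are assumptions.

```yaml
# Cluster-scoped PV backed by the NFS export
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /opt/data5
    server: nfs1.rhs
---
# Namespace-scoped claim, created in the default project
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
# Pod in the default project mounting the volume through the claim
apiVersion: v1
kind: Pod
metadata:
  name: nfs-bb-pod2-pvc
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: nfsvol
      mountPath: /mnt/nfs    # assumed mount path
  volumes:
  - name: nfsvol
    persistentVolumeClaim:
      claimName: nfs-claim
```

The second pod would be identical apart from its name; both reference the same claim, which is allowed because they live in the same project/namespace.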
Example 2:
I have 2 distinct pods running in the default project/namespace, and I try to create another pod, using the same PV but from a new project called testproject, to access the same NFS export. The third pod from the new testproject will not be able to bind to the PV, since it is already bound by the default project.
[ root@k8dev nfs_error]
** Create a new claim against the existing PV from the other project (testproject), and the PVC will fail
[ root@k8dev nfs_error]
** nfs-claim will never bind to the pv-nfs PV, because it cannot see it from its current project/namespace scope
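For illustration, the failing claim could look like the sketch below (the spec mirrors the one in the default project; size and access mode are assumptions). Created inside testproject, it stays Pending because pv-nfs is already Bound to default/nfs-claim:

```yaml
# Claim created in testproject, e.g. with:
#   oc create -f nfs-claim.yaml -n testproject
# It remains Pending: the only matching PV is already Bound elsewhere.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```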
Example 3:
I have 2 distinct pods running in the default project, and then I create another PV, PVC and pod from testproject. Both projects can access the same NFS export, but I need a PV and PVC in each of the projects.
[root@k8dev nfs_error]# oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS    CLAIM                    REASON    AGE
pv-nfs    <none>    1Gi        RWX           Bound     default/nfs-claim                  14m
pv-nfs2   <none>    1Gi        RWX           Bound     testproject/nfs-claim2             9m

[root@k8dev nfs_error]# oc get pods --all-namespaces
NAMESPACE     NAME              READY     STATUS    RESTARTS   AGE
default       nfs-bb-pod2-pvc   1/1       Running   0          11m
default       nfs-bb-pod3-pvc   1/1       Running   0          11m
testproject   nfs-bb-pod4-pvc   1/1       Running   0          15s
** Now I have three pods using the same shared NFS volume across two projects, but I needed two PVs, because each is bound to a single project, and two PVCs, one for each project and the NFS PV it accesses.
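The second PV/PVC pair could be sketched as below. pv-nfs2 points at the same NFS export as pv-nfs (server and path are assumptions carried over from Example 4); nfs-claim2 is created inside testproject:

```yaml
# Second PV for the same NFS export, needed because pv-nfs is
# already bound to a claim in the default project
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs2
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /opt/data5
    server: nfs1.rhs
---
# Claim created in testproject; it binds to pv-nfs2
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim2
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```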
Example 4:
If I bypass PV and PVC altogether, I can mount NFS shared volumes directly from any project by using the nfs volume plugin in the pod spec
volumes:
- name: nfsvol
  nfs:
    path: /opt/data5
    server: nfs1.rhs
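A complete pod spec using this direct mount might look like the sketch below; the pod name, image, command, and mount path are assumptions, while the nfs volume section is the snippet above:

```yaml
# Pod mounting the NFS export directly, with no PV or PVC involved;
# this works from any project/namespace
apiVersion: v1
kind: Pod
metadata:
  name: nfs-direct-pod    # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: nfsvol
      mountPath: /mnt/nfs    # assumed mount path
  volumes:
  - name: nfsvol
    nfs:
      path: /opt/data5
      server: nfs1.rhs
```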
Volume security is another layer on top of this, using supplemental groups (for shared storage, i.e. NFS, Gluster, etc.); with it, administrators and developers gain additional ways to manage and control access to the shared NFS volume.
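As a hedged sketch: if the NFS export were group-owned by GID 5555 (an assumed value), pods could be granted access through supplemental groups in the pod security context, with OpenShift's security context constraints governing which group IDs a project may request:

```yaml
# Pod whose processes run with an extra group matching the
# (assumed) group ownership of the NFS export
apiVersion: v1
kind: Pod
metadata:
  name: nfs-sgid-pod    # hypothetical name
spec:
  securityContext:
    supplementalGroups: [5555]    # assumed GID of the NFS export
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: nfsvol
      mountPath: /mnt/nfs    # assumed mount path
  volumes:
  - name: nfsvol
    nfs:
      path: /opt/data5
      server: nfs1.rhs
```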
Hope that helps