vSphere CSI Driver - Deployment with Topology
Note: Volume Topology and Availability Zone feature is beta in vSphere CSI Driver.
When you deploy CSI in a vSphere environment that includes multiple data centers or host clusters, you can use zoning.
Zoning enables orchestration systems, like Kubernetes, to integrate with vSphere storage resources that are not equally available to all nodes. As a result, the orchestration system can make intelligent decisions when dynamically provisioning volumes, and avoid situations such as those where a pod cannot start because the storage resource it needs is not accessible.
Set Up Zones in the vSphere CNS Environment
Depending on your vSphere storage environment, you can use different deployment scenarios for zones. For example, you can have zones per host cluster, per data center, or have a combination of both.
In the following example, the vCenter Server environment includes three clusters with node VMs located on all three clusters.
The sample workflow creates zones per cluster and per data center.
Procedure
- Create Zones Using vSphere Tags
- You can use vSphere tags to label zones in your vSphere environment.
- Enable Zones for the vSphere CSI Driver
- Install the vSphere CSI driver using the zone and region entries.
Create Zones Using vSphere Tags
You can use vSphere tags to label zones in your vSphere environment.
The task assumes that your vCenter Server environment includes three clusters, cluster1, cluster2, and cluster3, with the node VMs on all three clusters. In the task, you create two tag categories, k8s-zone and k8s-region. You tag the clusters as three zones, zone-a, zone-b, and zone-c, and mark the data center as a region, region-1.
Prerequisites
Make sure that you have appropriate tagging privileges that control your ability to work with tags. See vSphere Tagging Privileges in the vSphere Security documentation.
Note: Ancestors of node VMs, such as the host, cluster, and data center, must have the ReadOnly role set for the vSphere user configured for the CSI driver and CCM. This is required to allow reading tags and categories when preparing each node's topology.
Procedure
In the vSphere Client, create two tag categories, k8s-zone and k8s-region.
For information, see Create, Edit, or Delete a Tag Category in the vCenter Server and Host Management documentation.
In each category, create appropriate zone tags.
For information on creating tags, see Create, Edit, or Delete a Tag in the vCenter Server and Host Management documentation.
| Category | Tags |
| --- | --- |
| k8s-zone | zone-a, zone-b, zone-c |
| k8s-region | region-1 |

Apply the corresponding tags to the data center and clusters as indicated in the following table.
For information, see Assign or Remove a Tag in the vCenter Server and Host Management documentation.
| vSphere Object | Tag |
| --- | --- |
| datacenter | region-1 |
| cluster1 | zone-a |
| cluster2 | zone-b |
| cluster3 | zone-c |
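If you prefer to script the tagging rather than use the vSphere Client, the same categories, tags, and assignments can be created with the `govc` CLI. This is a sketch, not part of the official procedure; the inventory paths (`/my-datacenter`, `/my-datacenter/host/cluster1`, and so on) are placeholders for your environment:

```shell
# Create the two tag categories
govc tags.category.create k8s-zone
govc tags.category.create k8s-region

# Create the tags in each category
govc tags.create -c k8s-zone zone-a
govc tags.create -c k8s-zone zone-b
govc tags.create -c k8s-zone zone-c
govc tags.create -c k8s-region region-1

# Attach the tags to the data center and clusters
# (replace the inventory paths with your own)
govc tags.attach -c k8s-region region-1 /my-datacenter
govc tags.attach -c k8s-zone zone-a /my-datacenter/host/cluster1
govc tags.attach -c k8s-zone zone-b /my-datacenter/host/cluster2
govc tags.attach -c k8s-zone zone-c /my-datacenter/host/cluster3
```

`govc` reads the vCenter endpoint and credentials from the `GOVC_URL`, `GOVC_USERNAME`, and `GOVC_PASSWORD` environment variables, so these commands assume an authenticated session against the vCenter Server described above.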
Enable Zones for the vSphere CSI Driver
Install the vSphere CSI driver using the zone and region entries.
Procedure
In the vSphere config secret file, add entries for region and zone:

```
[Labels]
region = k8s-region
zone = k8s-zone
```
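For context, a minimal `csi-vsphere.conf` (the file packaged into the vSphere config secret) with the topology entries added might look like the following sketch; the vCenter address, credentials, cluster ID, and datacenter name are placeholders for your environment:

```ini
[Global]
cluster-id = "my-k8s-cluster"

[VirtualCenter "10.0.0.1"]
user = "administrator@vsphere.local"
password = "password"
datacenters = "my-datacenter"

[Labels]
region = k8s-region
zone = k8s-zone
```

The `region` and `zone` values name the tag *categories* created earlier (k8s-region, k8s-zone), not the individual tags; the driver resolves each node's actual zone and region by reading the tags attached to the node's ancestors.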
Make sure `external-provisioner` is deployed with the arguments `--feature-gates=Topology=true` and `--strict-topology`. Uncomment the lines in the controller deployment yaml marked with `needed only for topology aware setup`:

- https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/v2.2.0/manifests/v2.2.0/deploy/vsphere-csi-controller-deployment.yaml#L160-L161

Make sure the secret is mounted on all workload nodes as well. This is required so that each node can discover its topology. Uncomment the lines in the node daemonset yaml marked with `needed only for topology aware setup`:

- https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/v2.2.0/manifests/v2.2.0/deploy/vsphere-csi-node-ds.yaml#L67-L68
- https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/v2.2.0/manifests/v2.2.0/deploy/vsphere-csi-node-ds.yaml#L84-L86
- https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/v2.2.0/manifests/v2.2.0/deploy/vsphere-csi-node-ds.yaml#L121-L123
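After uncommenting, the `csi-provisioner` sidecar in the controller deployment should carry the topology arguments. The fragment below is a sketch of what that container spec looks like; the exact argument order, image tag, and surrounding fields follow the v2.2.0 manifest and may differ in your copy:

```yaml
      containers:
        - name: csi-provisioner
          args:
            - "--v=4"
            - "--timeout=300s"
            - "--csi-address=$(ADDRESS)"
            - "--leader-election"
            # needed only for topology aware setup
            - "--feature-gates=Topology=true"
            - "--strict-topology"
```

With `--strict-topology`, the provisioner passes only the selected node's topology segments to `CreateVolume`, rather than the aggregated topology of the whole cluster, which is what the vSphere CSI driver expects.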
After the installation, verify that all `csinodes` objects have `topologyKeys`:

```
kubectl get csinodes -o jsonpath='{range .items[*]}{.metadata.name} {.spec}{"\n"}{end}'
k8s-node1 map[drivers:[map[name:csi.vsphere.vmware.com nodeID:k8s-node1 topologyKeys:[failure-domain.beta.kubernetes.io/region failure-domain.beta.kubernetes.io/zone]]]]
k8s-node2 map[drivers:[map[name:csi.vsphere.vmware.com nodeID:k8s-node2 topologyKeys:[failure-domain.beta.kubernetes.io/region failure-domain.beta.kubernetes.io/zone]]]]
k8s-node3 map[drivers:[map[name:csi.vsphere.vmware.com nodeID:k8s-node3 topologyKeys:[failure-domain.beta.kubernetes.io/region failure-domain.beta.kubernetes.io/zone]]]]
k8s-node4 map[drivers:[map[name:csi.vsphere.vmware.com nodeID:k8s-node4 topologyKeys:[failure-domain.beta.kubernetes.io/region failure-domain.beta.kubernetes.io/zone]]]]
k8s-node5 map[drivers:[map[name:csi.vsphere.vmware.com nodeID:k8s-node5 topologyKeys:[failure-domain.beta.kubernetes.io/region failure-domain.beta.kubernetes.io/zone]]]]
```
Verify that the labels `failure-domain.beta.kubernetes.io/region` and `failure-domain.beta.kubernetes.io/zone` are applied to all nodes:

```
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone -L failure-domain.beta.kubernetes.io/region
NAME         STATUS   ROLES    AGE   VERSION   ZONE     REGION
k8s-master   Ready    master   32m   v1.19.0   zone-a   region-1
k8s-node1    Ready    <none>   18m   v1.19.0   zone-a   region-1
k8s-node2    Ready    <none>   18m   v1.19.0   zone-b   region-1
k8s-node3    Ready    <none>   18m   v1.19.0   zone-b   region-1
k8s-node4    Ready    <none>   18m   v1.19.0   zone-c   region-1
k8s-node5    Ready    <none>   18m   v1.19.0   zone-c   region-1
```
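Once the topology is in place, you can use it when provisioning. As an illustrative sketch (the class name is a placeholder, not part of the procedure), a StorageClass can restrict dynamically provisioned volumes to a particular zone and region via `allowedTopologies`:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-sc   # placeholder name
provisioner: csi.vsphere.vmware.com
# Delay binding until a pod is scheduled, so the volume is
# created in a zone accessible to the chosen node.
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
          - zone-a
      - key: failure-domain.beta.kubernetes.io/region
        values:
          - region-1
```

The topology keys here match the `topologyKeys` reported by the `csinodes` objects above; with `WaitForFirstConsumer`, the scheduler picks the node first and the driver then provisions the volume on a datastore reachable from that node's zone.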