
The Kubernetes Secrets Store CSI Driver integrates secrets stores with Kubernetes through a Container Storage Interface (CSI) volume. Integrating the Secrets Store CSI Driver with TKG on Azure allows you to mount secrets, keys, and certificates as a volume, with the data surfaced in the container's file system. In this blog, I will walk through the step-by-step process of managing secrets this way.
Note: Talk to the respective vendors before you decide to use this in production environments. Feel free to experiment with it in a test environment, and let me know if it works for you.
Prerequisites
- az CLI is configured and a valid subscription is available
- kubectl and helm are installed and configured
- A TKG workload cluster is configured and you have administrative access
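Before starting, the prerequisites above can be confirmed with a small pre-flight script. This is just a convenience sketch; the `check_tools` helper and the tool list are my own additions based on this post, not part of any official tooling.

```shell
#!/bin/sh
# Pre-flight check: confirm the CLIs used in this walkthrough are on PATH.
# Prints any missing tool and returns non-zero if at least one is absent.
check_tools() {
    missing=0
    for tool in "$@"; do
        if ! command -v "$tool" >/dev/null 2>&1; then
            echo "missing: $tool"
            missing=1
        fi
    done
    return $missing
}

check_tools az kubectl helm || echo "Install the missing tools before continuing."
```

Cluster access and the Azure subscription still need to be verified separately (for example with `kubectl cluster-info` and `az account show`).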
Steps
Install the Secrets Store CSI Driver on a TKG Workload Cluster
- Run the following helm commands to add the repo and install the chart
$ helm repo add csi-secrets-store-provider-azure https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/charts
"csi-secrets-store-provider-azure" has been added to your repositories
$ helm install csi csi-secrets-store-provider-azure/csi-secrets-store-provider-azure --namespace kube-system
NAME: csi
LAST DEPLOYED: Sun Feb 27 12:31:14 2022
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
Note: It’s recommended to install the Secrets Store CSI Driver and the Azure Key Vault provider in the kube-system namespace, but this is not mandatory.
- Verify the installed components on the TKG workload cluster by running the commands below.
$ kubectl get pods -l app=secrets-store-csi-driver -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
secrets-store-csi-driver-27db8   3/3     Running   0          30s
secrets-store-csi-driver-ffp62   3/3     Running   0          30s
$ kubectl get pods -l app=csi-secrets-store-provider-azure -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
csi-csi-secrets-store-provider-azure-2wzsr   1/1     Running   0          40s
csi-csi-secrets-store-provider-azure-pmb4p   1/1     Running   0          40s
Create an Azure Key Vault
$ az keyvault create -n tkgvault -g tkgworkloadcluster -l westus2
The above command creates an Azure Key Vault named “tkgvault” in the resource group “tkgworkloadcluster” in the “West US 2” region. You can update these parameters as per your requirements.
Create a Secret in Azure Key Vault
$ az keyvault secret set --vault-name tkgvault -n tkgsecret --value tkgclusterdemo
The above command creates a secret in the key vault; you specify the name and value for the secret. Once created, you can also verify it on the Azure portal.

Create a Service Principal to access the Key Vault instance
I will reuse the service principal that we created during the TKG cluster deployment on Azure. If you prefer, you can follow the Azure documentation and create a new one instead.
Once the service principal is in place, proceed with the next steps.
Provide the identity to access Azure Key Vault
$ az keyvault set-policy -n tkgvault --secret-permissions get --spn <client-id>
$ kubectl create secret generic secrets-store-creds --from-literal clientid=<client-id> --from-literal clientsecret=<client-secret>
$ kubectl label secret secrets-store-creds secrets-store.csi.k8s.io/used=true
secret/secrets-store-creds labeled
Note: Replace the placeholder values (<client-id>, <client-secret>) before running the commands. FYI, when you register an app before the TKG deployment, it internally creates a service principal (SPN); you can obtain the client-id and client-secret from the registered Azure app.
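Keep in mind that the `kubectl create secret` command above stores the clientid and clientsecret values base64-encoded, not encrypted. A quick local illustration of that encoding (the value here is made up for demonstration):

```shell
# Kubernetes Secrets store their data base64-encoded, not encrypted.
# Simulate what `kubectl create secret generic ... --from-literal` stores:
client_secret='example-client-secret'            # made-up placeholder value
encoded=$(printf '%s' "$client_secret" | base64)
echo "encoded: $encoded"
# Decoding recovers the original, which is why Secret objects need RBAC protection:
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "decoded: $decoded"
```

On the cluster, the same decoding applies to the stored credential, e.g. `kubectl get secret secrets-store-creds -o jsonpath='{.data.clientsecret}' | base64 -d`.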
Create and apply a SecretProviderClass object
Here is a sample YAML that you can adapt and use to create this resource.
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: tkgvault                  # name of this SecretProviderClass resource
  namespace: default
spec:
  provider: azure
  parameters:
    keyvaultName: "tkgvault"      # the name of the Azure Key Vault
    useVMManagedIdentity: "false"
    userAssignedIdentityID: ""    # leave empty when using a service principal
    cloudName: ""                 # [OPTIONAL] if not provided, defaults to AzurePublicCloud
    objects: |
      array:
        - |
          objectName: tkgsecret   # in this example, 'tkgsecret'
          objectType: secret      # object types: secret, key or cert
          objectVersion: ""       # [OPTIONAL] object version, defaults to latest if empty
    tenantId: "<tenant-id>"       # the tenant ID containing the Azure Key Vault instance
Save the yaml file and apply it on your TKG workload cluster.
$ kubectl apply -f scs.yaml
secretproviderclass.secrets-store.csi.x-k8s.io/tkgvault created
Validate the custom resource.
$ kubectl get SecretProviderClass -n default
NAME AGE
tkgvault 25s
Create a POD and validate the Secret
You can use the sample YAML file below for testing.
kind: Pod
apiVersion: v1
metadata:
  name: busybox-secrets-store-inline
spec:
  containers:
    - name: busybox
      image: k8s.gcr.io/e2e-test-images/busybox:1.29
      command:
        - "/bin/sleep"
        - "10000"
      volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
  volumes:
    - name: secrets-store-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "tkgvault"
        nodePublishSecretRef:         # only required when using service principal mode
          name: secrets-store-creds
Save the YAML and apply it. The spec defines only one container, but since I have Tanzu Service Mesh (TSM) in place, a sidecar is injected automatically, which is why the pod reports 2/2 containers below.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox-secrets-store-inline 0/2 PodInitializing 0 3s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox-secrets-store-inline 2/2 Running 0 7s
Validating the Secret Value
Run the following kubectl commands to validate the secret.
$ kubectl exec busybox-secrets-store-inline -- ls /mnt/secrets-store/
tkgsecret
$ kubectl exec busybox-secrets-store-inline -- cat /mnt/secrets-store/tkgsecret
tkgclusterdemo
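If your SecretProviderClass maps more than one object, a small helper run inside the pod can print every mounted secret at once. This is a convenience sketch of my own; the mount path comes from the pod spec above.

```shell
#!/bin/sh
# Print every file the CSI driver mounted under the given directory,
# one "name=value" pair per line.
dump_mounted_secrets() {
    dir="$1"
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        printf '%s=%s\n' "$(basename "$f")" "$(cat "$f")"
    done
}

# Inside the pod this would be:
#   dump_mounted_secrets /mnt/secrets-store
# which, for this walkthrough, prints: tkgsecret=tkgclusterdemo
```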
We have successfully validated the secret value.