Part 1 - Kubernetes Vault Integration - Secrets Store CSI Driver
Introduction
Our aim is to enable a k3s Kubernetes cluster to access HashiCorp Vault secrets. We’ll explore two ways to do this and break down how each one works:
- Secrets Store CSI Driver with the Vault CSI provider
- Part 2 - External Secrets Operator (ESO) + Reloader
We will set up the Secrets Store CSI Driver (which mounts secrets into pods as files) along with the Vault CSI provider (which communicates with HashiCorp Vault) and a SecretProviderClass.
To validate the implementation, we’ll create a sample Deployment that uses environment variables populated from Kubernetes Secrets, with the values securely retrieved from Vault via the Secrets Store CSI Driver.
Key Sections Covered:
- Installing and configuring HashiCorp Vault on a VM.
- Enabling Kubernetes authentication for Vault.
- Installing the Vault CSI Driver on k3s.
- Demonstrating secret usage.
- Validating dynamic secret updates with pod restart behavior.
Prerequisites
Before we begin, ensure you have the following:
- vm1 with a single-node k3s cluster.
- vm2 to host the HashiCorp Vault server.
- Network connectivity between vm1 and vm2; both VMs must be able to communicate with each other.
- kubectl configured to interact with the k3s cluster (from vm1 or a local machine).
- SSH access to both the k3s VM (vm1) and the Vault VM (vm2).
Step 1: Install HashiCorp Vault on vm2
1.1. Download and Install Vault on vm2:
SSH into the dedicated Vault VM (vm2) and run the following commands:
# Update package list
sudo apt update
# Install unzip (if not already installed)
sudo apt install unzip
# Download Vault (check HashiCorp website for the latest version, e.g., 1.16.0)
wget https://releases.hashicorp.com/vault/1.16.0/vault_1.16.0_linux_amd64.zip
# Unzip the downloaded file
unzip vault_1.16.0_linux_amd64.zip
# Move the Vault binary to a directory in PATH
sudo mv vault /usr/local/bin/
# Verify installation
vault --version
1.2. Configure Vault (Basic Server Mode) on vm2:
Create a directory for Vault configuration and data on vm2:
sudo mkdir -p /etc/vault.d
sudo mkdir -p /var/lib/vault/data
# Create a 'vault' system user and group if they don't exist
sudo useradd --system --home /etc/vault.d --shell /bin/false vault
sudo chown -R vault:vault /var/lib/vault/data
Create a basic Vault configuration file (/etc/vault.d/vault.hcl) on vm2:
storage "file" {
path = "/var/lib/vault/data"
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_disable = "true" # For simplicity in development. ENABLE TLS in production for secure communication!
}
api_addr = "http://<YOUR_VAULT_VM_IP>:8200"
cluster_addr = "http://<YOUR_VAULT_VM_IP>:8201"
ui = true
Replace <YOUR_VAULT_VM_IP> with the actual IP address of the dedicated Vault VM (vm2).
1.3. Create a Systemd Service for Vault on vm2:
Create a service file (/etc/systemd/system/vault.service) on vm2:
[Unit]
Description="HashiCorp Vault - A tool for managing secrets"
Documentation=https://www.vaultproject.io/docs/
Requires=network-online.target
After=network-online.target
[Service]
User=vault
Group=vault
ProtectSystem=full
ProtectHome=read-only
PrivateTmp=yes
PrivateDevices=yes
SecureBits=keep-caps
AmbientCapabilities=CAP_IPC_LOCK
Capabilities=CAP_IPC_LOCK+ep
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=/usr/local/bin/vault server -config=/etc/vault.d/vault.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGINT
Restart=on-failure
RestartSec=5
TimeoutStopSec=30
StartLimitInterval=60s
StartLimitBurst=3
[Install]
WantedBy=multi-user.target
Reload systemd, then enable and start Vault on vm2:
sudo systemctl daemon-reload
sudo systemctl enable vault
sudo systemctl start vault
sudo systemctl status vault
1.4. Initialize and Unseal Vault on vm2:
From vm2 (where Vault is installed), set the VAULT_ADDR environment variable:
export VAULT_ADDR='http://<YOUR_VAULT_VM_IP>:8200'
Initialize Vault (this will generate unseal keys and a root token):
vault operator init -key-shares=1 -key-threshold=1
IMPORTANT: Save the unseal key and root token securely. You’ll need them to unseal Vault after restarts.
Unseal Vault using the generated unseal key:
vault operator unseal <YOUR_UNSEAL_KEY>
Log in to Vault using the root token:
vault login <YOUR_ROOT_TOKEN>
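After a restart, it is easy to forget whether Vault still needs unsealing. A small sketch of how you might check: it parses the "sealed" field from `vault status -format=json` output with plain sed (no jq dependency). The JSON here is a hard-coded sample standing in for the real command's output.

```shell
# Sample of what `vault status -format=json` returns (stand-in values)
status_json='{"type":"shamir","sealed":false,"t":1,"n":1,"version":"1.16.0"}'

# Pull out the boolean "sealed" field without jq
sealed=$(printf '%s' "$status_json" | sed -n 's/.*"sealed":\([a-z]*\).*/\1/p')

if [ "$sealed" = "true" ]; then
  echo "Vault is sealed - run: vault operator unseal <YOUR_UNSEAL_KEY>"
else
  echo "Vault is unsealed and ready"
fi
```

On the real server you would replace the hard-coded sample with `status_json=$(vault status -format=json)`.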
Step 2: Configure Kubernetes Authentication for Vault (on vm2)
This allows Kubernetes service accounts from vm1 to authenticate with Vault running on vm2. All vault commands in this step should be executed on vm2.
2.1. Enable Kubernetes Auth Method:
vault auth enable kubernetes
2.2. Configure Kubernetes Auth Method:
You’ll need the Kubernetes host, CA certificate, and a service account token from the k3s cluster (vm1).
Execute the following kubectl commands on vm1 (the k3s VM) to get the necessary details:
# 1. Create a dedicated ServiceAccount for Vault to use for token review
kubectl create serviceaccount vault-auth-reviewer
# 2. Create a Secret of type kubernetes.io/service-account-token that binds to the service account.
# This explicitly creates a token that Vault can use for its token reviewer JWT.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: vault-auth-reviewer-token
  annotations:
    kubernetes.io/service-account.name: vault-auth-reviewer
type: kubernetes.io/service-account-token
EOF
# Wait a moment for the token to be populated in the secret
echo "Waiting for token secret to be populated... (approx. 5 seconds)"
sleep 5
# 3. Get the Kubernetes API server address (as per successful commands)
KUBERNETES_PORT_443_TCP_ADDR=$(kubectl get endpoints kubernetes -o jsonpath='{.subsets[0].addresses[0].ip}')
KUBERNETES_PORT_443_TCP_PORT=$(kubectl get endpoints kubernetes -o jsonpath='{.subsets[0].ports[0].port}')
KUBERNETES_HOST="https://${KUBERNETES_PORT_443_TCP_ADDR}:${KUBERNETES_PORT_443_TCP_PORT}"
echo "KUBERNETES_HOST: ${KUBERNETES_HOST}"
# 4. Get the Kubernetes CA certificate, decode it to PEM, and save it to a
#    temporary file on vm1. The decoded PEM is the format Vault's
#    kubernetes_ca_cert parameter expects.
kubectl get secret vault-auth-reviewer-token -o jsonpath='{.data.ca\.crt}' | base64 --decode > /tmp/k8s_ca_cert_b64.txt
echo "Kubernetes CA certificate (PEM) saved to /tmp/k8s_ca_cert_b64.txt"
# 5. Get the decoded Service Account token and save it to a temporary file on vm1.
# This is the format Vault's token_reviewer_jwt parameter expects.
kubectl get secret vault-auth-reviewer-token -o jsonpath='{.data.token}' | base64 --decode > /tmp/k8s_token_decoded.txt
echo "Decoded Service Account token saved to /tmp/k8s_token_decoded.txt"
# 6. Copy both files from vm1 to vm2:
#    scp /tmp/k8s_ca_cert_b64.txt /tmp/k8s_token_decoded.txt <VM2_USER>@<VM2_IP>:/tmp/
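If you want to sanity-check the service account token before handing it to Vault, you can decode its claims locally. A JWT is three base64url segments joined by dots, and the second segment is the JSON payload. The sketch below builds a dummy token here for illustration (it is not a real credential) and then decodes the payload; on vm1 you would feed it the contents of /tmp/k8s_token_decoded.txt instead.

```shell
# Dummy claims standing in for a real service account token payload
payload_json='{"iss":"kubernetes/serviceaccount","kubernetes.io/serviceaccount/service-account.name":"vault-auth-reviewer"}'

# Build a dummy JWT: header.payload.signature, payload base64url-encoded
# (base64url = base64 with '/'->'_', '+'->'-', and '=' padding stripped)
dummy_jwt="header.$(printf '%s' "$payload_json" | base64 | tr -d '\n=' | tr '/+' '_-').signature"

# Decode the payload segment: undo base64url, restore '=' padding
seg=$(printf '%s' "$dummy_jwt" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
claims=$(printf '%s' "$seg" | base64 -d)
echo "$claims"
```

Seeing the expected service-account.name in the decoded claims confirms you exported the right token.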
Now, switch back to vm2 (the Vault VM) and configure Vault’s Kubernetes auth method using the values obtained from vm1:
# On vm2:
# First, ensure the files from vm1 have been copied to /tmp/ on vm2.
# Example: scp <VM1_USER>@<VM1_IP>:/tmp/k8s_ca_cert_b64.txt /tmp/
# scp <VM1_USER>@<VM1_IP>:/tmp/k8s_token_decoded.txt /tmp/
# Set the Kubernetes Host (use the exact value we got from vm1)
export KUBERNETES_HOST="https://<Kubernetes-VM-IP>:6443"
# Read the base64-encoded CA certificate from the copied file
KUBERNETES_CA_CERT=$(cat /tmp/k8s_ca_cert_b64.txt)
# Read the decoded token from the copied file
KUBERNETES_TOKEN=$(cat /tmp/k8s_token_decoded.txt)
# Configure Vault's Kubernetes auth method
vault write auth/kubernetes/config \
  token_reviewer_jwt="${KUBERNETES_TOKEN}" \
  kubernetes_host="${KUBERNETES_HOST}" \
  kubernetes_ca_cert="${KUBERNETES_CA_CERT}"
2.3. Create a Policy in Vault (on vm2):
This policy defines which secrets a Kubernetes service account can access. Let’s create a simple policy that allows reading secrets under secret/data/sockshop/*.
Create a file named sockshop-policy.hcl on vm2:
path "secret/data/sockshop/*" {
capabilities = ["read"]
}
Write the policy to Vault on vm2:
vault policy write sockshop-policy sockshop-policy.hcl
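Vault matches the policy path as a glob, so "secret/data/sockshop/*" covers every secret under that prefix and nothing else. As a quick illustration, the shell case statement below mimics that prefix check for two example request paths (the check_path helper is just for this sketch, not part of Vault).

```shell
# Mimic the glob match Vault performs for the sockshop-policy path rule
check_path() {
  case "$1" in
    secret/data/sockshop/*) echo "allow: $1" ;;
    *)                      echo "deny:  $1" ;;
  esac
}

check_path "secret/data/sockshop/database"   # covered by the policy
check_path "secret/data/payments/api-key"    # outside the policy
```

Note that the rule targets the data/ API prefix of the KV v2 engine, which is why it reads secret/data/sockshop/* rather than secret/sockshop/*.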
2.4. Create a Kubernetes Auth Role (on vm2):
This role binds a Kubernetes service account to a Vault policy.
vault write auth/kubernetes/role/sockshop-app \
  bound_service_account_names=default \
  bound_service_account_namespaces="*" \
  policies=sockshop-policy \
  ttl=24h
Here, we’re binding the default service account in all namespaces (on vm1) to the sockshop-policy (on vm2). Adjust as needed for your application’s service accounts and namespaces.
Step 3: Install the Vault CSI Driver on k3s (vm1)
The Secrets Store CSI Driver mounts secrets into pods as files through a CSI volume, and the Vault CSI provider fetches those secrets from Vault on the pod’s behalf. This step is performed on the k3s VM (vm1).
3.1. Install the Vault CSI Driver using Helm on vm1:
First, add the HashiCorp Helm repository on vm1:
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
Install the Vault CSI Driver on vm1, pointing it at the Vault server on vm2:
helm install vault hashicorp/vault \
  --set server.enabled=false \
  --set global.externalVaultAddr="http://<Vault-VM-IP>:8200" \
  --set csi.enabled=true \
  --set syncSecret.enabled=true
Replace <Vault-VM-IP> with the actual IP address of the dedicated Vault VM (vm2).
Step 4: Demonstrate Secret Usage (on vm1)
Now, let’s create a secret in Vault (on vm2) and demonstrate how to inject it into a k3s pod (on vm1).
4.1. Create a Secret in Vault (on vm2):
SSH into vm2 and ensure VAULT_ADDR is set.
Enable the KV secrets engine (if not already enabled):
vault secrets enable -path=secret kv-v2
Write a secret to Vault:
vault kv put secret/sockshop/database username="sockuser" password="sockpassword123"
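A frequent point of confusion with KV v2: the `vault kv` CLI takes the short path (secret/sockshop/database), but policies and the CSI provider must use the underlying API path, which inserts data/ after the mount point. A tiny sketch of that mapping:

```shell
# KV v2 path mapping for an engine mounted at "secret"
mount="secret"
rel="sockshop/database"

cli_path="${mount}/${rel}"        # what `vault kv put/get` takes
api_path="${mount}/data/${rel}"   # what the policy and SecretProviderClass use

echo "CLI path: ${cli_path}"
echo "API path: ${api_path}"
```

This is why the sockshop-policy and the secretPath entries later in this post both say secret/data/sockshop/database even though the write command above does not.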
-------Secrets Store CSI Driver with the Vault CSI provider-------
┌──────────────────────────────┐
│ Pod Start Requested │
└────────────┬─────────────────┘
│
▼
┌──────────────────────────────┐
│ Pod uses SecretProviderClass│
└────────────┬─────────────────┘
│
▼
┌──────────────────────────────┐
│ CSI Driver on Node Handles It│
└────────────┬─────────────────┘
│
▼
┌──────────────────────────────┐
│ Vault CSI Provider │
└────────────┬─────────────────┘
│
▼
┌──────────────────────────────┐
│ Auth to Vault using SA Token │
│ via Kubernetes Auth Role │
└────────────┬─────────────────┘
│
▼
┌──────────────────────────────┐
│ Fetch Secret from Vault │
│ e.g., secret/data/my-app │
└────────────┬─────────────────┘
│
▼
┌──────────────────────────────┐
│ Mount Secret to /mnt/secrets│
│ (tmpfs, in-memory volume) │
└────────────┬─────────────────┘
│
▼
┌──────────────────────────────┐
│ Pod reads secret from volume │
└────────────┬─────────────────┘
│
▼
┌──────────────────────────────┐
│ (Optional) Sync to K8s Secret│
└──────────────────────────────┘
4.2. Kubernetes Deployment (on vm1):
Modify the Sock Shop deployment (or any other deployment) on vm1:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-sockshop-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sockshop-nginx
  template:
    metadata:
      labels:
        app: sockshop-nginx
    spec:
      serviceAccountName: default
      containers:
        - name: your-app-container
          image: nginx:latest
          env:
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: sockshop-db-secrets
                  key: DB_USERNAME
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: sockshop-db-secrets
                  key: DB_PASSWORD
          volumeMounts:
            - name: secrets-store-inline
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: secrets-store-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "vault-db-creds"
---
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-db-creds
  namespace: default
spec:
  provider: vault
  secretObjects:
    - secretName: sockshop-db-secrets   # Kubernetes Secret name
      type: Opaque
      data:
        - objectName: username   # Will become DB_USERNAME
          key: DB_USERNAME
        - objectName: password   # Will become DB_PASSWORD
          key: DB_PASSWORD
  parameters:
    roleName: "sockshop-app"
    vaultAddress: "http://<Vault-VM-IP>:8200"   # Replace with Vault address if needed
    vaultSkipVerify: "true"
    objects: |
      - objectName: "username"
        secretPath: "secret/data/sockshop/database"
        secretKey: "username"
      - objectName: "password"
        secretPath: "secret/data/sockshop/database"
        secretKey: "password"
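The objectName entries in the SecretProviderClass do double duty: each one becomes a file name under /mnt/secrets-store, and (via secretObjects) a key in the synced Kubernetes Secret. As a quick sketch, the snippet below repeats the objects block in a shell variable and extracts the resulting file names with sed:

```shell
# The "objects" block from the SecretProviderClass above
objects='
- objectName: "username"
  secretPath: "secret/data/sockshop/database"
  secretKey: "username"
- objectName: "password"
  secretPath: "secret/data/sockshop/database"
  secretKey: "password"
'

# Each objectName becomes a file under /mnt/secrets-store
files=$(printf '%s\n' "$objects" | sed -n 's/.*objectName: "\(.*\)".*/\1/p')
echo "$files"
```

So after deployment you should expect exactly two files, username and password, in the mount path.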
Step 5: Verification
- Exec into the pod on vm1: kubectl exec -it <your-sockshop-pod-name> -- /bin/sh
- Navigate to /mnt/secrets-store (the mount path defined in the Deployment) and verify that username and password exist and contain the correct secret values.
Mounted secrets and environment variables after deployment:
ubuntu@k3s1 ~> kubectl exec -it your-sockshop-service-5bc8fc6c6c-9ms4n -- cat /mnt/secrets-store/..data/password
sockpassword123-1⏎
ubuntu@k3s1 ~> kubectl exec -it your-sockshop-service-5bc8fc6c6c-9ms4n -- env | grep -i db
DB_USERNAME=sockuser-1
DB_PASSWORD=sockpassword123-1
Now update the password in Vault on vm2, then read it again from the pod:
ubuntu@k3s1 ~> kubectl exec -it your-sockshop-service-5bc8fc6c6c-9ms4n -- cat /mnt/secrets-store/..data/password
sockpassword123-2⏎
ubuntu@k3s1 ~> kubectl exec -it your-sockshop-service-5bc8fc6c6c-9ms4n -- env | grep -i db
DB_USERNAME=sockuser-1
DB_PASSWORD=sockpassword123-1
This shows that the secret changed in the mount path but not in the environment variables. The pod must be restarted for the updated values to appear in the env variables.
ubuntu@k3s1 ~> kubectl get po
NAME READY STATUS RESTARTS AGE
your-sockshop-service-5bc8fc6c6c-9ms4n 1/1 Running 0 21h
ubuntu@k3s1 ~> kubectl delete pod/your-sockshop-service-5bc8fc6c6c-9ms4n
pod "your-sockshop-service-5bc8fc6c6c-9ms4n" deleted
ubuntu@k3s1 ~> kubectl get po
NAME READY STATUS RESTARTS AGE
your-sockshop-service-5bc8fc6c6c-7fw9z 1/1 Running 0 3s
ubuntu@k3s1 ~> kubectl exec -it your-sockshop-service-5bc8fc6c6c-7fw9z -- cat /mnt/secrets-store/..data/password
sockpassword123-2⏎
ubuntu@k3s1 ~> kubectl exec -it your-sockshop-service-5bc8fc6c6c-7fw9z -- env | grep -i db
DB_USERNAME=sockuser-1
DB_PASSWORD=sockpassword123-2
ubuntu@k3s1 ~>
✅ Result: After restarting the pod, the updated secret is reflected both in the mounted file and in the environment variables.
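The drift we just observed can be detected mechanically: compare the live mounted file (which the CSI driver refreshes) against the env var (frozen at container start). A sketch, with the two values hard-coded as stand-ins for the kubectl exec reads shown above:

```shell
# Stand-ins for the two reads done above with kubectl exec
mounted_value="sockpassword123-2"   # cat /mnt/secrets-store/..data/password
env_value="sockpassword123-1"       # the container's DB_PASSWORD

if [ "$mounted_value" != "$env_value" ]; then
  echo "secret drift detected - restart the pod"
  echo "e.g. kubectl rollout restart deployment/your-sockshop-service"
else
  echo "env vars are up to date"
fi
```

In practice, kubectl rollout restart is gentler than deleting the pod by hand, since it replaces pods through the Deployment's rolling-update machinery; Part 2 shows how Reloader automates this entirely.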