Kubernetes RBAC Best Practices: Least Privilege Done Right
Master Kubernetes RBAC with least-privilege roles, service account hardening, projected tokens, workload identity, and common RBAC mistakes to avoid.
RBAC (Role-Based Access Control) is the gatekeeper of your Kubernetes cluster. Every API call — from kubectl get pods to deploying a new service — passes through RBAC. Get it wrong, and a compromised service account token gives an attacker the keys to the entire cluster. Get it right, and even a full container compromise is contained to a single namespace with minimal permissions.
The Principle of Least Privilege
Least privilege means granting only the exact permissions needed for a workload to function, and nothing more. In Kubernetes, this translates to:
- No wildcards in roles (no `*` for verbs or resources)
- Namespace-scoped roles instead of cluster-wide roles
- One service account per application instead of sharing the default
- No `cluster-admin` bindings except for the absolute minimum of administrative accounts
- Short-lived tokens instead of long-lived static credentials
The consequences of getting this wrong are severe. A single cluster-admin binding on a compromised service account means the attacker has full control over every namespace, every secret, every node — game over.
RBAC Building Blocks
Kubernetes RBAC uses four resource types:
Role — Grants permissions within a single namespace:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "watch", "list"]
```
ClusterRole — Grants permissions on cluster-scoped resources (such as nodes) or, when bound with a ClusterRoleBinding, across all namespaces:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-viewer
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list"]
```
RoleBinding — Binds a Role to a user, group, or service account within a namespace:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: production
subjects:
- kind: ServiceAccount
  name: my-app-sa
  namespace: production
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```
ClusterRoleBinding — Binds a ClusterRole to subjects across the entire cluster.
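For completeness, a minimal ClusterRoleBinding sketch that grants the node-viewer ClusterRole from above to a monitoring service account; the `monitoring-sa` name and `monitoring` namespace are illustrative, not from the checklist:

```yaml
# Hypothetical example: bind the node-viewer ClusterRole cluster-wide
# to a dedicated monitoring service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view-nodes
subjects:
- kind: ServiceAccount
  name: monitoring-sa       # illustrative name
  namespace: monitoring     # illustrative namespace
roleRef:
  kind: ClusterRole
  name: node-viewer
  apiGroup: rbac.authorization.k8s.io
```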
The key distinction: always prefer Role + RoleBinding over ClusterRole + ClusterRoleBinding. Namespace-scoped permissions limit the blast radius of compromised credentials.
Designing Least-Privilege Roles
Start by asking: “What does this workload actually need to do?” Most applications need far less access than they’re given.
A typical read-only application role:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "watch", "list"]
```
Key principles:
- Specify exact resources — `["pods", "pods/log"]` instead of `["*"]`
- Specify exact verbs — `["get", "watch", "list"]` instead of `["*"]`
- Use `resourceNames` to restrict to specific named resources when possible:
```yaml
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
  resourceNames: ["app-config", "feature-flags"]
```
This restricts the service account to reading only two specific ConfigMaps by name — even if other ConfigMaps exist in the namespace.
Common role patterns by workload type:
| Workload Type | Typical Permissions |
|---|---|
| Web application | Read ConfigMaps, read Secrets (own app only) |
| Background worker | Read ConfigMaps, read/write Jobs |
| Operator/controller | Watch/list/create/update specific CRDs |
| CI/CD agent | Create/delete Deployments, get Pods, get logs |
| Monitoring agent | Read pods, services, endpoints (cluster-wide) |
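As one concrete instance of the table above, a CI/CD agent Role might look like the following sketch; the exact resource and verb lists are assumptions to tighten against what your pipeline actually does:

```yaml
# Hypothetical CI/CD agent role: manage Deployments, inspect Pods and logs, nothing more.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: cicd-deployer
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
```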
Service Account Hardening
Service accounts are the primary identity mechanism for pods. By default, every pod gets its service account's token mounted at /var/run/secrets/kubernetes.io/serviceaccount/ — even if the pod never needs API access. On older clusters, and wherever legacy token Secrets still exist, that token is long-lived.
Disable Automatic Token Mounting
Most application pods don’t need to talk to the Kubernetes API. Disable the automatic token mount:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-workload-sa
  namespace: production
automountServiceAccountToken: false
```
Also set it at the pod level for defense-in-depth:
```yaml
spec:
  serviceAccountName: app-workload-sa
  automountServiceAccountToken: false
```
This removes the most common credential theft vector — stolen service account tokens from compromised containers. Without a token, an attacker who gains shell access in a container cannot interact with the Kubernetes API.
Verify which pods are using the default service account:
```shell
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.spec.serviceAccountName}{"\n"}{end}' | grep "default"
```
Any pod using the default service account should be switched to a dedicated service account with automountServiceAccountToken: false.
One Service Account Per Application
Never share service accounts between applications. If multiple applications share the same service account and one is compromised, the attacker inherits the permissions of all applications.
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payment-service-sa  # Specific to this application
  namespace: production
automountServiceAccountToken: false
imagePullSecrets:
- name: registry-credentials
```
Configure imagePullSecrets
For private container registries, attach pull secrets to the service account rather than specifying them on every pod:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-workload-sa
  namespace: production
imagePullSecrets:
- name: registry-credentials
```
Projected Token Volumes and Token Expiry
When your pod genuinely needs Kubernetes API access, use projected token volumes instead of the default auto-mounted token. Projected tokens are:
- Time-limited — expire after a configurable duration (e.g., 1 hour), auto-rotated by the kubelet
- Audience-bound — only valid for a specific API server
- Not stored as Kubernetes Secrets — reducing the attack surface
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-api-access
  namespace: production
spec:
  serviceAccountName: app-workload-sa
  automountServiceAccountToken: false
  containers:
  - name: app
    image: registry.example.com/app:v1.2.3
    env:
    - name: KUBERNETES_TOKEN_PATH
      value: /var/run/secrets/tokens/api-token
    volumeMounts:
    - name: api-token
      mountPath: /var/run/secrets/tokens
      readOnly: true
  volumes:
  - name: api-token
    projected:
      defaultMode: 0440
      sources:
      - serviceAccountToken:
          expirationSeconds: 3600  # Expires after 1 hour
          audience: "https://kubernetes.default.svc"
          path: api-token
      - configMap:
          name: kube-root-ca.crt
          items:
          - key: ca.crt
            path: ca.crt
```
With a projected token, even if an attacker steals the token, it expires within the configured duration. The kubelet automatically rotates the token before expiry, so applications using client-go or similar libraries handle this transparently by re-reading the token file.
Compare this to legacy Secret-based service account tokens, which are long-lived and never expire — a stolen legacy token remains valid until the token Secret or the service account is deleted.
Workload Identity (GKE, EKS, AKS)
For pods that need to access cloud services (S3, Cloud SQL, Key Vault), workload identity eliminates the need for static cloud credentials inside the cluster.
AWS EKS: IAM Roles for Service Accounts (IRSA)
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-workload-sa
  namespace: production
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::123456789012:role/app-workload-role"
```
The EKS pod identity webhook injects an OIDC token that AWS STS exchanges for temporary IAM credentials. No AWS access keys are stored in the cluster.
GCP GKE: Workload Identity
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-workload-sa
  namespace: production
  annotations:
    iam.gke.io/gcp-service-account: "app-workload@my-project.iam.gserviceaccount.com"
```
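On GKE, the annotation alone is not enough: the Google service account must also allow the Kubernetes service account to impersonate it. A sketch of the required IAM binding, assuming the project, service account, and namespace names used above:

```shell
# Allow the KSA production/app-workload-sa to impersonate the GSA.
gcloud iam service-accounts add-iam-policy-binding \
  app-workload@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[production/app-workload-sa]"
```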
Azure AKS: Workload Identity (Federated)
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-workload-sa
  namespace: production
  annotations:
    azure.workload.identity/client-id: "12345678-1234-1234-1234-123456789012"
    azure.workload.identity/tenant-id: "12345678-1234-1234-1234-123456789012"
```
All three approaches share the same principle: the pod gets short-lived, automatically-rotated cloud credentials without any static secrets stored in Kubernetes. If the pod is compromised, the credentials expire quickly and the IAM role should have minimal permissions.
Common RBAC Mistakes and How to Fix Them
Mistake 1: Wildcard Permissions
```yaml
# BAD -- gives access to everything
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
```
Find wildcard roles:
```shell
kubectl get clusterroles -o json | jq '.items[] | select(.rules[]? | .verbs[]? == "*" or .resources[]? == "*") | .metadata.name'
```
Fix: Replace wildcards with explicit resource and verb lists.
Mistake 2: Unnecessary cluster-admin Bindings
```shell
# Find all cluster-admin bindings
kubectl get clusterrolebindings -o jsonpath='{range .items[?(@.roleRef.name=="cluster-admin")]}{.metadata.name}{"\t"}{.subjects}{"\n"}{end}'
```
Most organizations have far more cluster-admin bindings than they need. Common offenders:
- CI/CD service accounts (need only deploy access, not full admin)
- Monitoring tools (need read-only access, not full admin)
- Developer accounts (should have namespace-scoped access)
Fix: Delete unnecessary bindings and replace with namespace-scoped RoleBindings.
Mistake 4: Unrestricted “escalate” and “bind” Verbs
The escalate and bind verbs allow a user to grant themselves or others higher privileges than they currently have. This is a privilege escalation vector.
```shell
# Find roles with escalate or bind
kubectl get clusterroles -o json | jq '.items[] | select(.rules[]? | .verbs[]? == "escalate" or .verbs[]? == "bind") | .metadata.name'
```
These verbs should be restricted to a very small number of dedicated admin roles.
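When a controller genuinely needs `bind`, restrict it to specific named roles with `resourceNames`. A sketch, reusing the pod-reader Role from earlier:

```yaml
# Allow creating RoleBindings, but only binding the named pod-reader
# role -- not arbitrary (higher-privileged) roles.
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["rolebindings"]
  verbs: ["create"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles"]
  verbs: ["bind"]
  resourceNames: ["pod-reader"]
```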
Mistake 4: Not Rotating Kubeconfig Credentials
Long-lived kubeconfig files on developer laptops are a prime target. Check certificate expiry:
```shell
kubectl config view --raw -o jsonpath='{.users[*].user.client-certificate-data}' | base64 -d | openssl x509 -noout -enddate
```
Fix: Use short-lived tokens via OIDC provider integration or `kubectl create token <sa> --duration=1h`.
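As a concrete sketch (the service account name is illustrative), mint a one-hour token and use it for an ad-hoc session instead of a static kubeconfig credential:

```shell
# Mint a short-lived token for a service account (requires Kubernetes 1.24+)
TOKEN=$(kubectl create token my-app-sa -n production --duration=1h)

# Use the expiring token for a one-off session
kubectl --token="$TOKEN" get pods -n production
```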
Mistake 5: Sharing Default Service Accounts
Every namespace has a default service account. If you don’t specify a service account, pods use it. All pods in the same namespace sharing the default SA means a compromise of one application gives access to all others.
Fix: Create a dedicated service account for each application with automountServiceAccountToken: false.
Auditing RBAC with kubectl
Regular RBAC audits are essential. Here are the most useful commands:
```shell
# Who has cluster-admin?
kubectl get clusterrolebindings -o jsonpath='{range .items[?(@.roleRef.name=="cluster-admin")]}{.metadata.name}{"\t"}{.subjects}{"\n"}{end}'

# What can a specific service account do?
kubectl auth can-i --list --as=system:serviceaccount:production:my-app-sa

# Can this service account read secrets?
kubectl auth can-i get secrets --as=system:serviceaccount:production:my-app-sa -n production

# Find all roles with wildcard permissions
kubectl get clusterroles -o json | jq '.items[] | select(.rules[]? | .verbs[]? == "*") | .metadata.name'

# List all role bindings in a namespace
kubectl get rolebindings -n production -o wide

# Find pods using the default service account
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.spec.serviceAccountName}{"\n"}{end}' | grep "default"
```
Run these commands as part of a monthly security audit. Any new cluster-admin binding or wildcard role should be immediately investigated.
Checklist Items 14-18 Walkthrough
These correspond to the RBAC section of the K8s Security Pro checklist:
| # | Control | Severity | Quick Check |
|---|---|---|---|
| 14 | Audit ClusterRoles — remove unnecessary cluster-admin | Critical | `kubectl get clusterrolebindings \| grep cluster-admin` |
| 15 | Minimize wildcards — no `*` in verbs or resources | High | Use jq query above |
| 16 | Service account segregation — one SA per app | Medium | `kubectl get pods -A \| grep default` |
| 17 | Restrict “escalate” and “bind” verbs | Critical | Use jq query above |
| 18 | Rotate kubeconfig credentials — short TTLs | Medium | Check cert expiry with openssl |
Each control includes a verify command and a fix procedure. The full 50-point checklist with MITRE ATT&CK mappings and CIS Benchmark IDs is available in the K8s Security Pro template pack.
Next Steps
RBAC is one of the most powerful security controls in Kubernetes, but it’s also one of the most commonly misconfigured. Start by auditing your cluster-admin bindings, then work through each service account to ensure it has the minimum permissions required.
The templates referenced in this guide — least-privilege roles, secure service accounts with projected tokens, and workload identity configurations — are all included in the K8s Security Pro template pack. Get started with the free K8s Security Quick-Start Kit which includes the checklist and 5 essential security templates.
Related Templates
Implement what you’ve learned with these production-ready YAML templates:
- Template 04: Least Privilege RBAC — Namespace-scoped Role and RoleBinding with minimal read-only permissions.
- Template 05: Secure Service Account — Disabled auto-mount, projected tokens, and workload identity annotations.
Related Articles
- Kubernetes Network Policies: The Complete Guide to Zero Trust Networking — Pair RBAC controls with network segmentation for defense in depth.
- Kubernetes Pod Security Standards: From PSP to PSS Migration Guide — Enforce pod-level restrictions alongside your RBAC policies.