Kubernetes Pod Security Standards: From PSP to PSS Migration Guide
Complete guide to Kubernetes Pod Security Standards (PSS), namespace-level enforcement, seccomp profiles, and migrating from deprecated PodSecurityPolicy.
PodSecurityPolicy (PSP) was deprecated in Kubernetes 1.21 and removed in 1.25. If you’re still running PSPs or haven’t implemented their replacement, your cluster is missing a critical security layer. Pod Security Standards (PSS) are the built-in replacement, and they’re simpler, more reliable, and require zero additional components.
The End of PodSecurityPolicy
PodSecurityPolicy was the original mechanism for controlling pod security in Kubernetes. It served its purpose but had fundamental design problems:
- Confusing authorization model — PSPs were activated via RBAC, but the mapping between a user, the PSP, and the resulting pod security was unintuitive and error-prone
- Difficult to audit — Determining which PSP applied to a given pod required tracing through multiple RBAC bindings
- Mutation side effects — PSPs could silently mutate pod specs, making debugging difficult
- No dry-run mode — No way to test policies without enforcing them
The Kubernetes community removed PSPs entirely in 1.25 and replaced them with Pod Security Standards enforced by the Pod Security Admission controller — a built-in, no-install-required solution.
Timeline:
- Kubernetes 1.21: PSP deprecated
- Kubernetes 1.23: Pod Security Admission available as beta
- Kubernetes 1.25: PSP removed, Pod Security Admission stable (GA)
Pod Security Standards: Three Levels
PSS defines three security profiles, each more restrictive than the last:
Privileged
No restrictions. The pod can do anything — run as root, use host networking, mount any volume. This is the default behavior when no PSS labels are applied.
Use case: System-level infrastructure components only (CNI plugins, log collectors that need host access). Never for application workloads.
Baseline
Prevents known privilege escalation vectors while maintaining broad compatibility. Blocks the most dangerous configurations:
- No privileged containers
- No host namespaces (hostPID, hostIPC, hostNetwork)
- No hostPath volumes
- Limited host port ranges
- No `SYS_ADMIN` capability
- No unsafe sysctl settings
Use case: A reasonable starting point for workloads that can’t meet the restricted standard. Also useful as a warn level during migration.
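As a concrete illustration, a pod spec like the following hypothetical one would be rejected in a namespace enforcing baseline, because host namespaces and privileged containers are both blocked:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bad-pod          # hypothetical example, not a real workload
spec:
  hostNetwork: true      # baseline violation: host namespace
  containers:
  - name: shell
    image: busybox:1.36
    securityContext:
      privileged: true   # baseline violation: privileged container
```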
Restricted
The strictest standard. Enforces current best practices for pod hardening:
Everything in Baseline, plus:
- Must run as non-root (`runAsNonRoot: true`)
- Must drop all capabilities (`drop: ["ALL"]`)
- Must use a seccomp profile (`RuntimeDefault` or `Localhost`)
- Restricted volume types (no hostPath, no projected tokens unless explicitly configured)
- No privilege escalation (`allowPrivilegeEscalation: false`)
Use case: All application workloads in production. This should be your default target.
Namespace-Level Enforcement
PSS is enforced at the namespace level using labels. There are three modes:
| Mode | Behavior |
|---|---|
| `enforce` | Reject pods that violate the standard |
| `warn` | Allow pods but show a warning to the user |
| `audit` | Allow pods but log violations in the audit log |
You can mix modes and levels. The recommended approach:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.28
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: v1.28
```
This configuration:
- Rejects any pod that doesn’t meet the restricted standard
- Warns the user about violations (helpful for debugging)
- Pins the version to prevent unexpected behavior on cluster upgrades
Apply it:
```bash
kubectl label namespace production \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=v1.28 \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/warn-version=v1.28
```
Verify all namespaces have PSS labels:
```bash
kubectl get namespaces -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.pod-security\.kubernetes\.io/enforce}{"\n"}{end}'
```
Any namespace without an enforce label is running in privileged mode by default — no restrictions at all.
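To check a manifest against a namespace's labels before creating anything, a server-side dry run makes the API server evaluate admission, including Pod Security Admission, without persisting the object (the file name here is a placeholder):

```bash
# Admission runs server-side; PSS warnings and rejections appear inline,
# but nothing is created.
kubectl apply --dry-run=server -n production -f my-pod.yaml
```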
Pod Security Context Deep Dive
To pass the restricted PSS standard, your pods need a properly configured security context. Here’s a fully compliant deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hardened-app
  namespace: production
spec:
  selector:
    matchLabels:
      app: hardened-app
  template:
    metadata:
      labels:
        app: hardened-app
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 10001
        runAsGroup: 10001
        fsGroup: 10001
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: my-app
        image: my-app:1.0.0
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - ALL
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"
          limits:
            memory: "128Mi"
            cpu: "500m"
```
Let’s break down each setting:
Pod-level security context:
- `runAsNonRoot: true` — Prevents the container from starting as UID 0. If the container image's `USER` is root, the pod fails to start.
- `runAsUser: 10001` — Explicitly sets a high UID. Not UID 0, not a system user.
- `runAsGroup: 10001` — Sets the primary group ID.
- `fsGroup: 10001` — Ensures mounted volumes are readable by this group.
- `seccompProfile.type: RuntimeDefault` — Applies the container runtime's default seccomp profile, blocking ~44 dangerous syscalls.
Container-level security context:
- `allowPrivilegeEscalation: false` — Prevents setuid binaries (like sudo) from granting root.
- `readOnlyRootFilesystem: true` — Makes the container filesystem immutable. Attackers can't install tools, modify binaries, or write malware.
- `capabilities.drop: ["ALL"]` — Drops all Linux capabilities. Without this, containers get a default set that includes capabilities like `NET_RAW` (useful for network attacks).
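With the deployment above running, a quick sanity check is to confirm the effective identity inside the container (this assumes you have `kubectl exec` access to the `production` namespace):

```bash
# Expect a non-root identity (uid=10001, gid=10001) per the pod-level
# securityContext, not uid=0.
kubectl exec -n production deploy/hardened-app -- id
```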
If your app needs writable directories with `readOnlyRootFilesystem: true`, use `emptyDir` volume mounts:
```yaml
volumeMounts:
- name: tmp
  mountPath: /tmp
- name: cache
  mountPath: /var/cache
volumes:
- name: tmp
  emptyDir:
    sizeLimit: "64Mi"
- name: cache
  emptyDir:
    sizeLimit: "128Mi"
```
Seccomp Profiles: RuntimeDefault and Custom
Seccomp (Secure Computing Mode) restricts the Linux system calls a container can make. This is a critical defense layer — most container escapes rely on syscalls like unshare, mount, or ptrace that legitimate applications never need.
RuntimeDefault: The Minimum Baseline
The RuntimeDefault profile is provided by the container runtime (containerd, CRI-O) and blocks approximately 44 dangerous syscalls while maintaining compatibility with most applications.
```yaml
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
```
This is the absolute minimum for any production workload. It blocks:
- `unshare` — prevents creating new namespaces (container escape via CVE-2022-0185)
- `mount` / `umount2` — prevents filesystem manipulation
- `ptrace` — prevents process debugging and injection
- `reboot` — prevents system disruption
- `keyctl` — prevents kernel keyring manipulation
- And ~38 more dangerous syscalls
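To confirm a profile is actually in effect, you can read the seccomp field of PID 1's status file inside a running container (assuming the image ships `grep`). A value of `2` means filter mode (such as RuntimeDefault) is active; `0` means unconfined:

```bash
# "Seccomp: 2" indicates a seccomp filter is applied to the process
kubectl exec -n production deploy/hardened-app -- grep Seccomp /proc/1/status
```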
Custom Profiles: For Sensitive Workloads
For workloads handling secrets, PII, or financial data, a custom seccomp profile provides tighter restrictions. The custom profile uses a deny-by-default approach, explicitly allowing only the syscalls your application needs:
```yaml
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/k8s-security-strict.json
```
The custom profile must be deployed to every node at /var/lib/kubelet/seccomp/profiles/. For automated distribution, use the Security Profiles Operator (SPO):
```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
  name: k8s-security-strict
spec:
  defaultAction: "SCMP_ACT_ERRNO"
  architectures:
  - SCMP_ARCH_X86_64
  - SCMP_ARCH_AARCH64
  syscalls:
  - action: "SCMP_ACT_ALLOW"
    names:
    - read
    - write
    - close
    - openat
    - fstat
    - mmap
    - mprotect
    - munmap
    - brk
    - socket
    - connect
    - accept
    - sendto
    - recvfrom
    # ... (full list in template 09)
```
SPO automatically distributes the profile to all nodes and manages the lifecycle. Install it with:
```bash
kubectl apply -f https://github.com/kubernetes-sigs/security-profiles-operator/releases/latest/download/install.yaml
```
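Once SPO has reconciled the profile, pods reference it by a path relative to the kubelet seccomp root. By default SPO writes profiles under `operator/<namespace>/<profile-name>.json`; this is an assumption about SPO's default layout, so verify the exact value in the SeccompProfile's `status.localhostProfile` field before relying on it:

```yaml
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      # Path relative to /var/lib/kubelet/seccomp/. SPO publishes the
      # authoritative value in the profile's status.localhostProfile.
      localhostProfile: operator/production/k8s-security-strict.json
```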
Which Syscalls to Block and Why
| Blocked Category | Syscalls | Risk |
|---|---|---|
| Container escape | unshare, mount, pivot_root | CVE-2022-0185, filesystem manipulation |
| Process debugging | ptrace, process_vm_readv | Credential theft, code injection |
| Kernel modules | init_module, finit_module | Rootkit installation |
| Key management | keyctl, add_key | Kernel keyring theft |
| BPF programs | bpf | Monitoring evasion |
| System disruption | reboot, sethostname | Denial of service |
| Raw I/O | iopl, ioperm | Hardware-level attacks |
Migration Guide: PSP to PSS
If you’re migrating from PodSecurityPolicy to Pod Security Standards, follow this phased approach:
Phase 1: Audit Current State
```bash
# List all existing PSPs
kubectl get psp

# See which PSP each pod is using
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.metadata.annotations.kubernetes\.io/psp}{"\n"}{end}'
```
Map each PSP to the equivalent PSS level:
- PSPs that allow privileged, hostNetwork, hostPID -> `privileged`
- PSPs that block privileged but allow running as root -> `baseline`
- PSPs that require non-root, drop all capabilities, require seccomp -> `restricted`
Phase 2: Apply PSS in Warn/Audit Mode
Start with warn and audit modes to identify which pods would be rejected:
```bash
kubectl label namespace production \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=restricted
```
Then deploy or restart pods in the namespace. You’ll see warnings like:
```
Warning: would violate PodSecurity "restricted:latest":
allowPrivilegeEscalation != false
(container "app" must set securityContext.allowPrivilegeEscalation=false)
```
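You can also evaluate every existing pod in a namespace at once by applying the enforce label with a server-side dry run; the API server reports which running pods would violate the standard without actually changing the namespace:

```bash
# Lists existing pods that would violate restricted; the label is not
# persisted, so nothing is enforced yet.
kubectl label --dry-run=server --overwrite ns production \
  pod-security.kubernetes.io/enforce=restricted
```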
Phase 3: Fix Violations
For each warning, update the pod spec:
- `allowPrivilegeEscalation` — Add `allowPrivilegeEscalation: false` to every container
- `runAsNonRoot` — Set `runAsNonRoot: true` at the pod level
- `capabilities` — Add `capabilities: { drop: ["ALL"] }` to every container
- `seccompProfile` — Add `seccompProfile: { type: RuntimeDefault }` at the pod level
- `readOnlyRootFilesystem` — Set `readOnlyRootFilesystem: true` and add `emptyDir` mounts for writable paths
Phase 4: Enforce
Once all warnings are resolved:
```bash
kubectl label namespace production \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=v1.28
```
Phase 5: Remove PSPs
After all namespaces are running with PSS enforcement:
```bash
kubectl delete psp --all
```
Remove the PodSecurityPolicy admission controller from the API server configuration if you added it explicitly.
Baseline vs Restricted: When to Use Which
| Criterion | Baseline | Restricted |
|---|---|---|
| Container can run as root? | Yes | No |
| Requires seccomp profile? | No | Yes |
| Requires dropping all capabilities? | No | Yes |
| Requires `allowPrivilegeEscalation: false`? | No | Yes |
| Compatible with most Helm charts? | Yes | Often needs values overrides |
| Recommended for production? | Minimum acceptable | Yes, this is the target |
Use baseline when:
- You’re migrating from no policies and need a stepping stone
- Third-party Helm charts won’t run under restricted without significant customization
- System namespaces where components legitimately need some elevated access
Use restricted when:
- All application workloads in production
- Any namespace handling sensitive data
- Compliance requirements (CIS Benchmark, SOC2, PCI-DSS)
The goal is to get every application namespace to restricted. Use baseline as a temporary measure during migration, not as a permanent state.
Real-World Examples and Gotchas
Gotcha 1: Images That Run as Root
Many popular container images run as root by default (Nginx, PostgreSQL, Redis). You’ll need to either:
- Use a non-root variant: `nginx:1.25-alpine` with a custom config that binds to port 8080
- Override the user in the pod spec: `securityContext: { runAsUser: 1000 }`
- Use Bitnami images, which are designed to run as non-root
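As a concrete example of the first option, the upstream `nginxinc/nginx-unprivileged` image runs as a non-root user and listens on 8080 out of the box (the image tag here is illustrative, check the registry for current tags):

```yaml
containers:
- name: web
  image: nginxinc/nginx-unprivileged:1.25-alpine
  ports:
  - containerPort: 8080   # unprivileged port, no root required to bind
  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      drop: ["ALL"]
```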
Gotcha 2: readOnlyRootFilesystem Breaks Applications
Many applications write to the filesystem at runtime (logs, temp files, PID files). The fix is almost always the same: `emptyDir` volume mounts for each writable path:
```yaml
volumeMounts:
- name: tmp
  mountPath: /tmp
- name: var-run
  mountPath: /var/run
- name: var-log
  mountPath: /var/log
volumes:
- name: tmp
  emptyDir: {}
- name: var-run
  emptyDir: {}
- name: var-log
  emptyDir: {}
```
Gotcha 3: Helm Charts That Don’t Support Restricted Mode
Many Helm charts don’t set security contexts by default. Override them in your values.yaml:
```yaml
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 1000
  fsGroup: 1000
  seccompProfile:
    type: RuntimeDefault
containerSecurityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop:
    - ALL
```
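To find out whether a chart's rendered output passes before installing anything, you can pipe `helm template` into a server-side dry run against the target namespace (release and chart names are placeholders):

```bash
# Render the chart locally, then let the API server run admission on the
# result; PSS warnings surface without creating any resources.
helm template my-release ./my-chart -f values.yaml \
  | kubectl apply --dry-run=server -n production -f -
```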
Gotcha 4: PSS Labels Don’t Apply to Existing Pods
When you add enforce labels to a namespace, only new pods are checked. Existing pods continue to run. To check existing pods against the standard:
```bash
kubectl label namespace production pod-security.kubernetes.io/warn=restricted --overwrite

# Then trigger a rollout to recreate pods:
kubectl rollout restart deployment -n production
```
Gotcha 5: Init Containers Need Security Contexts Too
Init containers are validated against PSS just like regular containers. Don’t forget to add security contexts to them:
```yaml
initContainers:
- name: init-db
  image: busybox:1.36
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    allowPrivilegeEscalation: false
    capabilities:
      drop: ["ALL"]
    seccompProfile:
      type: RuntimeDefault
```
Enforcing Pod Security with Policy Engines
PSS provides namespace-level enforcement, but for more granular control, consider admission controllers like Kyverno or OPA/Gatekeeper:
Kyverno mutate policy — automatically injects security contexts on pods that don’t have them:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-security-context
spec:
  rules:
  - name: add-default-pod-security-context
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      patchStrategicMerge:
        spec:
          securityContext:
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
```
This acts as a safety net — even workloads deployed without explicit security configurations receive baseline security settings.
Conclusion
Pod Security Standards are simpler and more reliable than PodSecurityPolicy. The migration path is clear: audit your current state, apply PSS in warn mode, fix violations, then enforce. Target restricted for every application namespace.
The templates referenced in this guide — PSS namespace labels, hardened pod security contexts, seccomp profiles (RuntimeDefault, custom, and SPO), and Kyverno security policies — are all included in the K8s Security Pro template pack. Start with the free K8s Security Quick-Start Kit which includes the checklist and essential security templates.
Related Templates
Implement what you’ve learned with these production-ready YAML templates:
- Template 02: Restricted PSS Namespace — Namespace configured with the Restricted Pod Security Standard for maximum protection.
- Template 03: Hardened Pod Security Context — Deployment template with non-root, read-only filesystem, dropped capabilities, and seccomp.
- Template 09: Seccomp Profile — RuntimeDefault baseline, custom strict profile, and Security Profiles Operator managed profile.
Related Articles
- Kubernetes RBAC Best Practices: Least Privilege Done Right — Combine pod security with proper RBAC for comprehensive workload protection.
- Kubernetes Supply Chain Security: From Image Scanning to SLSA — Extend pod hardening with image verification and admission control policies.