Kubernetes Pod Security Policies were once a central part of Kubernetes Security and Pod Security hardening. They gave platform teams a way to control what a pod could do before it ever started, which mattered because containerized apps can become dangerous very quickly when they run with too much privilege. A single workload with host access, unsafe volume mounts, or privilege escalation enabled can turn a routine application issue into a node compromise.
This is not a theoretical problem. In real clusters, the gap between “works in development” and “safe in production” is often a few fields in a pod spec: securityContext, volume definitions, Linux capabilities, and namespace or host settings. Those details determine whether a compromised container stays contained or becomes a path to the host, other pods, or sensitive data. That is why Container Protection is not just about scanning images and securing registries. It also depends on how workloads are admitted and what they are allowed to request.
For DevOps and Cloud Native teams, the challenge is balance. Developers need enough flexibility to ship useful services. Platform and security teams need guardrails that prevent risky configurations from reaching the cluster. This article breaks down how Pod Security Policies worked, why they mattered, what controls they enforced, and how modern alternatives handle the same security intent. It also covers policy design, rollout practices, auditing, and migration strategies that apply directly to production environments managed by teams like Vision Training Systems.
Understanding Pod Security Policies
A Pod Security Policy was a Kubernetes admission control resource used to define what security settings a pod could request. It operated at admission time, which means Kubernetes checked the pod specification before allowing it to run. If the pod violated policy, the API server rejected it immediately. That made PSPs a preventive control, not a detective one.
PSPs could enforce controls such as whether a container could run as root, whether it could escalate privileges, which Linux capabilities were allowed, and whether certain volume types were permitted. They also controlled host-level settings like host networking, host PID, and host IPC. These are powerful switches. For example, allowing hostPath volumes can expose the underlying node filesystem if the policy is too loose.
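Those switches lived in the PodSecurityPolicy resource itself. The following is a sketch of a restrictive policy using the legacy `policy/v1beta1` API (removed in v1.25); the policy name and the group/UID ranges are illustrative, not a standard:

```yaml
# Illustrative restrictive PodSecurityPolicy (legacy API, removed in v1.25).
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-baseline          # example name
spec:
  privileged: false                  # no privileged containers
  allowPrivilegeEscalation: false    # block setuid-style escalation
  requiredDropCapabilities: ["ALL"]  # drop every Linux capability by default
  hostNetwork: false                 # no host namespace sharing
  hostPID: false
  hostIPC: false
  runAsUser:
    rule: MustRunAsNonRoot           # reject pods that run as UID 0
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges: [{min: 1, max: 65535}]   # example non-root GID range
  fsGroup:
    rule: MustRunAs
    ranges: [{min: 1, max: 65535}]
  volumes:                           # allow only low-risk volume types
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
```

Note what is absent: `hostPath` is not in the volumes list, so the policy denies it without naming it.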
PSPs did not operate in isolation. Kubernetes paired them with RBAC, service accounts, namespaces, and admission controllers. RBAC determined which users or service accounts could “use” a given policy, while the admission controller enforced the decision. In practice, a namespace might contain a workload that is allowed to use a restrictive policy, while a cluster-admin or platform service account could attach an exception-based policy for a specific system component. The policy decision was therefore both security-driven and identity-driven.
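The RBAC side of that pairing used the `use` verb on the `podsecuritypolicies` resource. A minimal sketch, assuming a hypothetical policy named `restricted-baseline` and an example namespace `web-apps`:

```yaml
# A ClusterRole granting "use" on one PSP, bound to a namespace's
# default service account. All names here are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-restricted-user
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["restricted-baseline"]   # the only PSP this role may use
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-restricted-binding
  namespace: web-apps                        # example application namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-restricted-user
subjects:
  - kind: ServiceAccount
    name: default
    namespace: web-apps
```

Because the binding is namespaced, the policy grant is scoped: workloads in `web-apps` can use the restrictive policy, and nothing else.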
According to the Kubernetes documentation, Pod Security Policy was deprecated in v1.21 and removed in v1.25, which is why many clusters now use Pod Security Admission or dedicated policy engines. That change matters because PSPs were powerful, but they also introduced operational complexity that made adoption a struggle for many teams.
Note
PSPs were never just “settings.” They were an admission-time authorization layer that tied workload identity, namespace design, and container risk into one control point.
Why Pod Security Matters for Containerized Apps
One overly permissive pod can expose an entire node. If a container runs as privileged, mounts the host filesystem, or has broad Linux capabilities, an attacker who breaks into that container may be able to inspect host files, tamper with logs, or pivot into neighboring workloads. That is why Container Protection starts at the pod boundary. If the boundary is weak, the rest of the stack has to absorb the risk.
Common attack paths are straightforward. A container running as root can overwrite files it should never touch. A pod with hostPath mounted to / can read sensitive node data. A container with CAP_SYS_ADMIN or broad privilege escalation settings can gain far more control than the application needs. Even a well-built image can become dangerous if the runtime privileges are wrong.
The business impact is larger than the technical issue. Misconfigured workloads can leak data, enable lateral movement, or become a stepping stone in a supply chain attack. If a compromised pod has access to secrets, service account tokens, or shared volumes, the attacker may move from one application to another with little resistance. That is why operational concerns like tenant isolation, reduced incident scope, and compliance evidence are tightly connected to pod-level controls.
Defense-in-depth is still required even when other layers are strong. A secure CI pipeline, trusted registry, and image scanning program help, but they do not stop a user from deploying a dangerous manifest. NIST’s security guidance consistently treats least privilege and boundary protection as core defensive principles. In Kubernetes, pod-level policy is where those principles become real.
“If a container can do everything the node can do, you no longer have container isolation. You have a smaller shell around a bigger problem.”
Core Policy Controls in a Pod Security Policy
The most important Pod Security Policy control is whether a container runs as non-root. Running as a non-root user reduces the blast radius of compromise. If an attacker breaks into the app, they inherit a limited account instead of an administrative one. That does not eliminate risk, but it makes privilege escalation much harder.
The allowPrivilegeEscalation setting is another critical control. Disabling it is a strong baseline because it prevents the process from gaining more privilege through setuid binaries or similar mechanisms. If a workload truly needs escalation, it should be an exception, not the default. The same principle applies to Linux capabilities. Most containers do not need them. Start by dropping all capabilities, then add only a specific exception if the workload has a documented requirement.
Host namespace controls matter just as much. Host networking, host PID, and host IPC all expand the container’s visibility into the node. For the majority of application workloads, these should be avoided entirely. If a service needs to bind to a node interface or inspect system processes, that requirement should be rare, explicit, and tightly scoped.
Volume restrictions are another major control surface. A policy should distinguish among benign ephemeral storage like emptyDir, persistent storage backed by approved storage classes, and risky mounts such as hostPath. Config-driven mounts like ConfigMaps and Secrets are usually safer than arbitrary filesystem access, but they still need careful handling because they can expose sensitive application configuration.
- Non-root execution lowers direct host and file permission risk.
- Disabled privilege escalation blocks common local escalation paths.
- Minimal Linux capabilities keep the process within its job.
- No host namespace sharing preserves node isolation.
- Restricted volume types prevent accidental host exposure.
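On the workload side, these controls translate into explicit securityContext fields in the pod spec. A minimal sketch, with placeholder names, image, and UID:

```yaml
# Pod spec satisfying the core controls: non-root, no escalation,
# zero capabilities, no host namespaces, ephemeral storage only.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-web                        # example name
spec:
  securityContext:
    runAsNonRoot: true                      # refuse to start as UID 0
    runAsUser: 10001                        # explicit non-root UID (example)
  containers:
    - name: app
      image: registry.example.com/web:1.0   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false     # no setuid-style escalation
        capabilities:
          drop: ["ALL"]                     # start from zero capabilities
      volumeMounts:
        - name: tmp
          mountPath: /tmp
  volumes:
    - name: tmp
      emptyDir: {}                          # scratch space instead of hostPath
```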
Key Takeaway
A strong Pod Security Policy is built from explicit denial. Deny root, deny escalation, deny host access, and allow only the smallest number of capabilities and volumes required for the workload.
Designing Secure Policies for Different Workloads
Policy design should begin with workload classification. A web application, a batch job, and a system daemon have different needs. A web app usually has no reason to run privileged or share the host network. A batch job may need write access to a persistent volume but still not need host access. A system component like a CNI plugin or storage driver may need broader rights, but those rights should be isolated to a very small set of service accounts and namespaces.
That is where the idea of baseline, restricted, and exception-based policies becomes useful. Baseline policies cover common safe settings for most application workloads. Restricted policies add tighter controls like non-root execution and no privilege escalation. Exception-based policies exist for special cases that must break the norm. The goal is not to create one giant rule set. The goal is to create a predictable policy ladder that teams can understand.
Namespace scoping is often the cleanest enforcement model. For example, all production web services can live in a namespace that only permits restricted workloads. Monitoring agents may live in a dedicated namespace with a separate policy. Service account mappings are also useful when a specific component, such as an ingress controller, needs access to a more permissive policy but should never be broadly available to application developers.
Reusable policy patterns matter. If every team writes its own exceptions from scratch, policy drift becomes inevitable. Standardize patterns for common workload types, document the rationale, and make the policy names clear. For Vision Training Systems clients, the practical test is simple: can a new team understand the allowed pod pattern in minutes, not hours?
| Workload Type | Typical Policy Shape |
| --- | --- |
| Web app | Restricted, non-root, no host access, minimal capabilities |
| Batch job | Restricted with approved persistent volume access |
| Monitoring agent | Exception policy, carefully scoped host reads only if required |
| Ingress controller | Scoped exception, documented service account, narrow host/network needs |
Implementing Pod Security Policies in a Cluster
Implementing PSPs started with enabling the admission controller and defining the policy resources themselves. Once the policy objects existed, RBAC decided who could use them. That meant users and service accounts needed both the right to create workloads and the right to “use” a specific Pod Security Policy. Without the RBAC binding, the pod would be rejected even if the manifest looked valid.
That interaction between admission and authorization is where many teams made mistakes. They built a policy, but they did not map it to the correct namespace or service account. The result was either a flood of denials or a permissive workaround that bypassed the original intent. The better approach is to test policy behavior in staging first, using sample pods that intentionally vary security settings. One pod should pass with a non-root configuration. Another should fail because it requests privilege escalation. Another should fail because it uses hostPath.
Validation should be hands-on. Use kubectl apply -f against test manifests, inspect admission errors, and confirm that the pod spec reflects the intended security context. Also document policy ownership. Every policy needs an owner, a change process, and a rollback path. Otherwise, an innocuous cluster upgrade or namespace migration can break production workloads without warning.
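That pass/fail validation can be driven by a pair of probe manifests applied with kubectl apply -f. Under a restrictive policy, the first pod below should be admitted and the second rejected at admission time; names and images are placeholders:

```yaml
# Probe pod 1: should be ADMITTED under a restrictive policy.
apiVersion: v1
kind: Pod
metadata:
  name: probe-should-pass
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001                        # example non-root UID
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        capabilities: {drop: ["ALL"]}
---
# Probe pod 2: should be REJECTED with an admission error.
apiVersion: v1
kind: Pod
metadata:
  name: probe-should-fail
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0
      securityContext:
        allowPrivilegeEscalation: true      # the intended violation
```

Keeping probes like these in the repository makes policy behavior testable after every cluster upgrade, not just at initial rollout.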
For official Kubernetes concepts, the Pod Security Admission documentation is now the better reference point for modern clusters. It explains how the replacement model works and why the ecosystem moved away from PSP. That history matters because many operational lessons from PSP still apply even when the mechanism changes.
Pro Tip
Before broad rollout, test one namespace with a strict policy and one with a documented exception policy. That exposes real developer friction before the policy is enforced everywhere.
Best Practices for Strong Pod-Level Security
Strong Pod Security starts with a few non-negotiable defaults. Use a read-only root filesystem whenever the application can tolerate it. If the container does not need to write to its own filesystem, there is no reason to permit write access. This makes certain persistence and tampering attacks harder.
Set explicit securityContext fields in the manifest. Do not rely on cluster defaults that may change across namespaces or environments. Define runAsUser, runAsNonRoot, and fsGroup when they matter to the application. That removes ambiguity and makes the pod’s intended privilege model visible to reviewers.
Drop all capabilities by default and add only the one or two the workload actually needs. Avoid wildcard allowances. They are convenient during development and painful during incident response. Pair that with minimal writable host mounts. If a pod must mount a path from the node, restrict it tightly and document why the mount is necessary.
Resource limits matter too. A noisy or compromised container can still cause outages through CPU or memory exhaustion. Add seccomp profiles where possible, and use AppArmor or SELinux to add another layer of syscall and file access control. The Kubernetes Pod Security Standards provide a useful model for choosing safe defaults, while CIS Benchmarks offer hardening guidance that maps well to host and runtime controls.
- Use read-only root filesystems unless write access is required.
- Declare security context fields explicitly in every production manifest.
- Prefer zero capabilities, then add exceptions with approval.
- Keep host mounts rare, narrow, and documented.
- Layer pod controls with seccomp, AppArmor, or SELinux.
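Combined, these defaults look like the following sketch; the name, image, UID, and resource values are examples, not recommendations:

```yaml
# Hardened defaults: read-only root, seccomp, explicit identity, limits.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-defaults                   # example name
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001                        # example UID
    fsGroup: 10001                          # example group for volume access
    seccompProfile:
      type: RuntimeDefault                  # default syscall filtering
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        readOnlyRootFilesystem: true        # no writes to the container FS
        allowPrivilegeEscalation: false
        capabilities: {drop: ["ALL"]}
      resources:
        requests: {cpu: 100m, memory: 128Mi}
        limits: {cpu: 500m, memory: 256Mi}  # caps noisy-neighbor damage
      volumeMounts:
        - {name: tmp, mountPath: /tmp}
  volumes:
    - {name: tmp, emptyDir: {}}             # writable scratch despite RO root
```

The emptyDir mount is the usual escape hatch for applications that need a writable /tmp while the rest of the filesystem stays read-only.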
Common Misconfigurations and How to Avoid Them
The most common misconfiguration is a policy that looks restrictive but effectively allows anything. If a Pod Security Policy permits privileged pods, host namespaces, broad capabilities, and unrestricted volumes all at once, it is not much of a policy. It is a rubber stamp. Teams often introduce this pattern to unblock deployment pressure, then forget to tighten it later.
Another frequent problem is allowing hostPath volumes without path restrictions. That can let pods read from or write to sensitive node directories. Similarly, allowing containers to run as root by default creates avoidable exposure. If a pod only needs to bind to a port or read application files, that should not require root. Map the application requirement to the smallest possible privilege set.
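If hostPath genuinely cannot be avoided, the legacy PSP API could at least constrain it with allowedHostPaths. A fragment sketch (the path is an example, and the other required PSP fields are omitted for brevity):

```yaml
# PSP fragment: if hostPath must be allowed, pin it to one read-only subtree.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: hostpath-log-reader        # example name
spec:
  # ...other restrictive fields (runAsUser, seLinux, fsGroup, etc.) omitted
  volumes:
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/var/log/app"   # example: only this subtree may be mounted
      readOnly: true               # and only read-only
```

An unscoped `hostPath` allowance, by contrast, is effectively node filesystem access for every pod the policy admits.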
Inconsistent defaults across namespaces or clusters are another source of trouble. A manifest that works in dev might fail in prod because one cluster enforces stricter admission rules. That inconsistency wastes time and encourages risky exceptions. Standardize the security baseline wherever possible, then allow documented exceptions only when justified.
Troubleshooting denials is straightforward if you know where to look. Check pod events, inspect the admission error message, and compare the rejected spec to a known-good manifest. Admission controller logs can reveal which rule triggered the denial. If the failure is unclear, compare namespace labels, service account bindings, and security context fields side by side.
Warning
If developers start bypassing policy by copying “working” manifests from old namespaces, the policy program is already drifting. Fix the standards, not just the symptoms.
Auditing, Monitoring, and Ongoing Governance
Security policy should never be a one-time setup. Workloads change. Teams change. Kubernetes versions change. A policy that was appropriate six months ago may now block a critical deployment or, worse, fail to stop a risky one because the application changed shape. Continuous review is the only practical answer.
Auditing should answer three questions: which workloads are using which policy, which exceptions are still active, and which denials indicate policy drift or developer friction. If a namespace has had the same exception for a year, the exception may no longer be justified. If admissions are being rejected repeatedly, the policy may be too strict or the application team may not understand the standard.
Policy-as-code belongs in version control. That gives teams change history, peer review, and rollback capability. It also supports compliance evidence. For organizations aligning to standards such as ISO/IEC 27001 or NIST guidance, the ability to show who changed a security policy, when, and why is not optional. It is evidence of governance.
Periodic assessments should also line up with cluster upgrades. Kubernetes policy behavior can change across versions, especially as deprecated features are removed. Review security posture after every major upgrade and after any namespace expansion. That is how you keep Cloud Native controls aligned with real operations rather than stale assumptions.
- Review policy exceptions on a fixed schedule.
- Track rejected admissions as a signal, not just an error.
- Store policy definitions in source control.
- Reassess policy after cluster upgrades and major app changes.
Migration Considerations and Modern Alternatives
Pod Security Policies were deprecated because Kubernetes needed a simpler, more maintainable model. That does not mean the security problem disappeared. It means the enforcement mechanism changed. New environments should not treat PSP as the starting point. They should use modern controls that preserve the same security intent without the complexity that made PSP harder to operate.
The most direct replacement is Pod Security Admission, which uses namespace labels to enforce predefined security standards. For custom needs, policy engines such as OPA Gatekeeper and Kyverno provide more expressive controls. They can validate manifests, enforce constraints, and apply organization-specific policy logic. That matters when a cluster needs both a simple baseline and a nuanced exception framework.
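With Pod Security Admission, enforcement is declared directly on the namespace. The label keys below are the real Pod Security Admission labels; the namespace name is an example:

```yaml
# Pod Security Admission: namespace labels select a predefined standard.
apiVersion: v1
kind: Namespace
metadata:
  name: web-apps                                     # example namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted   # reject violating pods
    pod-security.kubernetes.io/warn: restricted      # warn the client on apply
    pod-security.kubernetes.io/audit: restricted     # record in the audit log
```

Running warn and audit at a stricter level than enforce is a common rollout pattern: teams see what would break before the denial becomes real.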
Migration should focus on mapping intent, not copying YAML line for line. If a PSP prevented hostPath and required non-root execution, the modern policy should preserve those requirements in the new framework. If a PSP allowed a special service account to use a more permissive profile, the replacement should handle that through namespace labels, constraints, or narrowly scoped exceptions. The right question is not “How do we recreate PSP?” The right question is “How do we preserve the same security boundary using current Kubernetes features?”
Layered policy systems often work best. Use a default namespace-level standard for common workloads, then add a custom policy engine for edge cases. That gives you simplicity for most teams and flexibility for the few that genuinely need it. For Kubernetes Security, that layered design is usually easier to explain, audit, and maintain than a single monolithic rule set.
| Approach | Best Use |
| --- | --- |
| Pod Security Admission | Default baseline and restricted enforcement by namespace |
| OPA Gatekeeper | Custom constraints and enterprise policy logic |
| Kyverno | Kubernetes-native validation and mutation policies |
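For the custom-policy layer, a Kyverno validation rule can express the same non-root intent a PSP once enforced. This is a simplified sketch (the policy name is illustrative, and production policies typically also check container-level securityContext):

```yaml
# Illustrative Kyverno ClusterPolicy requiring pod-level runAsNonRoot.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-nonroot         # example name
spec:
  validationFailureAction: Enforce     # reject non-compliant pods at admission
  rules:
    - name: check-runasnonroot
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Pods must set securityContext.runAsNonRoot to true."
        pattern:
          spec:
            securityContext:
              runAsNonRoot: true       # the required field and value
```

Because the policy is a Kubernetes resource, it lives in version control alongside manifests, which keeps the governance practices described earlier intact.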
Conclusion
Pod-level policy is one of the most effective ways to reduce risk in Kubernetes. Whether you are dealing with Pod Security Policies in a legacy cluster or modern replacements in a current platform, the underlying goal is the same: stop unsafe workloads before they start. That means controlling root access, privilege escalation, host namespace use, volume mounts, and capabilities as part of a broader Container Protection strategy.
The practical lessons are consistent. Start with least privilege. Make security settings explicit in manifests. Separate baseline workloads from exception-based workloads. Test policies in staging before enforcement. Then keep auditing them. A secure cluster is not built by one control alone. It is built by policy, review, monitoring, and operational discipline working together.
If your team is still relying on informal conventions for Kubernetes Security, it is time to close the gap. Review your namespace standards, look for risky host access, and identify where developers are depending on defaults they do not fully understand. Vision Training Systems helps teams build practical security skills that translate into better cluster operations, better policy design, and fewer surprises in production. The next step is simple: evaluate your current pod posture, find the weak spots, and make the rules explicit before an attacker does it for you.