Deep Dive Into Kubernetes Pod Security Policies For Container Security

Vision Training Systems – On-demand IT Training

Kubernetes Pod Security is one of those topics that looks simple until a production incident forces the issue. A single overly permissive pod can expose the host network, mount sensitive paths, or run with capabilities that should never be available in a shared cluster. For teams running multi-tenant Kubernetes, that is not a theoretical risk. It is a direct Container Security problem that can affect every workload on the node.

Pod Security Policies, usually called PSPs, were Kubernetes’ original admission control mechanism for governing pod-level security settings. They were designed to stop unsafe pod specs before they were created. That made them an important part of early cluster hardening, especially where platform teams needed to enforce guardrails without manually reviewing every deployment.

This article breaks down what PSPs did, why they mattered, where they broke down operationally, and what replaced them. It also connects the old model to current DevOps Best Practices so you can modernize your policy approach without losing the security intent behind PSPs. If you manage shared clusters, regulated workloads, or internal platform services, the details here matter.

Understanding Pod Security Policies

Pod Security Policies were cluster-level admission controls that evaluated pod specifications before Kubernetes allowed them to run. In plain terms, PSPs acted like a gatekeeper. If a pod asked for privileges that were not explicitly allowed, the API server rejected it during admission. That made PSPs a preventative control, not a detective one.

PSPs governed a wide range of security settings. Administrators could control whether a pod could run in privileged mode, share the host network namespace, access the host PID or IPC namespaces, or mount unsafe volume types such as hostPath. They also controlled Linux security context settings like user IDs, group IDs, privilege escalation, capabilities, and filesystem access.

The authorization model was tightly coupled to RBAC. A user, group, or service account needed permission to “use” a PSP. That meant a pod could only be admitted if two things were true: the pod spec matched the policy rules, and the submitting identity was authorized to reference that policy. The Kubernetes admission flow checked the request before the pod object was persisted.

According to the official Kubernetes documentation, PSPs were part of the admission chain and were removed in later releases in favor of newer controls. That removal matters because it shows PSPs were never meant to be the end state. They were an early mechanism for enforcing Pod Security constraints, not a permanent architecture. See the Kubernetes documentation on Pod Security Policies.

  • Admission control: checked before pod creation.
  • Cluster-scoped: applied across namespaces based on permissions.
  • Policy + RBAC: both authorization and specification had to align.

Why Pod Security Policies Mattered For Container Security

PSPs mattered because they helped enforce least privilege at the pod boundary. Container breakout stories usually start with small mistakes: a container runs as root, gets extra Linux capabilities, mounts the host filesystem, or can reach services through the host network. A properly tuned PSP prevented many of those mistakes from entering the cluster in the first place.

One of the most important benefits was blocking root execution when workloads did not need it. If a policy required runAsNonRoot or restricted runAsUser values, the pod could not silently launch as UID 0. That reduced the blast radius of application bugs and made privilege escalation harder inside the container.

PSPs also limited access to kernel-adjacent features. Disallowing privileged mode, restricting capabilities like SYS_ADMIN, and blocking host namespace sharing reduced the chance that a container could inspect processes, open raw sockets, or tamper with the node. In shared clusters, those controls were critical for tenant isolation. They also supported compliance goals in environments governed by frameworks such as NIST Cybersecurity Framework or CIS Benchmarks.

“Container security starts with denying unnecessary power at admission time. If a pod never gets unsafe privileges, the platform does not have to recover from them later.”

Common real-world use cases included preventing developers from deploying debug containers with host access, stopping third-party images from mounting arbitrary paths, and reducing exposure from workloads that were copied from bare-metal deployment patterns into Kubernetes without proper hardening.

Key Controls Enforced By Pod Security Policies

PSPs were valuable because they gave administrators a detailed security checklist for pods. The most obvious controls were privileged and allowPrivilegeEscalation. A privileged container has near-host-level access, so most production clusters should block it except for narrow infrastructure use cases. allowPrivilegeEscalation determines whether a process can gain more privileges than its parent, which matters even when the container is not fully privileged.

Linux capability filtering was another major control. Containers start with a default capability set, but some workloads try to add dangerous ones. PSPs could allow only a narrow list and explicitly deny the rest. That is a practical way to preserve functionality without opening the door to capabilities that are effectively host-level shortcuts.

Identity settings were just as important. runAsUser, runAsGroup, fsGroup, and supplementalGroups helped define what the process could access inside mounted volumes and shared filesystems. If these are left loose, a pod can inherit identities that make it easier to read data it should not see.

Namespace and filesystem controls were equally important. PSPs could restrict hostNetwork, hostPID, hostIPC, hostPorts, and hostPath volumes. That prevented common misconfigurations such as exposing a debug service on a host port, reading the node filesystem, or viewing processes from the host namespace. Secure runtime controls like seccomp, AppArmor, and SELinux could also be referenced where supported. A read-only root filesystem and a whitelist of allowed volume types completed the picture.

Pro Tip

If a workload needs one exception, do not loosen the global policy for everything. Create a separate policy for that workload class and bind it to one service account only. That keeps Container Security controls understandable and auditable.

  • Privilege controls: privileged, privilege escalation, capabilities.
  • Identity controls: runAsUser, runAsGroup, fsGroup, supplementalGroups.
  • Host access controls: hostNetwork, hostPID, hostIPC, hostPorts, hostPath.
  • Runtime hardening: seccomp, AppArmor, SELinux, read-only filesystem.
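The controls above map directly onto fields of the old `policy/v1beta1` PodSecurityPolicy object. As a sketch only, a restricted policy combining them might have looked like the following; this API exists only on clusters before Kubernetes 1.25, and the policy name and ranges are illustrative:

```yaml
# Illustrative restricted PSP (policy/v1beta1, pre-1.25 clusters only).
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false                  # no privileged containers
  allowPrivilegeEscalation: false    # block gaining privileges beyond the parent
  requiredDropCapabilities:
    - ALL                            # drop every Linux capability by default
  hostNetwork: false                 # no host namespace sharing
  hostPID: false
  hostIPC: false
  runAsUser:
    rule: MustRunAsNonRoot           # reject pods that would run as UID 0
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  readOnlyRootFilesystem: true
  volumes:                           # allow only "safe" volume types
    - configMap
    - secret
    - emptyDir
    - projected
    - persistentVolumeClaim
```

Note that hostPath is simply absent from the volumes list: PSPs denied anything not explicitly allowed.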

How Pod Security Policies Were Configured

A PSP was defined in YAML as a Kubernetes policy object with fields that described allowed and disallowed pod behaviors. Administrators declared what the pod could request, then RBAC determined who could use that policy. The structure typically included rules for user identity, volumes, privilege settings, capability management, and namespace sharing options.

A common pattern was to create multiple PSPs for different workload classes. A restricted policy would block privileged mode, forbid host namespaces, require non-root execution, and allow only safe volume types. A baseline policy might permit a little more flexibility for legacy apps. A privileged policy was usually reserved for infrastructure components such as CNI plugins, storage drivers, or node agents.

The RBAC layer was what made PSPs practical and dangerous at the same time. If a service account was authorized to use multiple policies, Kubernetes admitted the pod under any policy that allowed it, preferring policies that did not need to mutate the pod spec and otherwise choosing the first match by name. That meant policy design had to be deliberate. If an admin accidentally granted broad “use” permissions, the safety model weakened quickly.

A strong pattern was to define a default restrictive PSP and then carve out narrow exceptions. For example, application namespaces could be bound to a non-root policy with no host access, while one monitoring namespace might get a special policy that allowed read-only hostPath mounts for node metrics. That is a good example of how Pod Security controls fit real operational needs without turning every exception into a platform-wide exception.
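The RBAC side of that carve-out pattern can be sketched as a ClusterRole granting the `use` verb on exactly one named policy, bound to a single service account rather than to a broad group. The policy, namespace, and account names below are hypothetical:

```yaml
# ClusterRole granting "use" on exactly one PSP (names are hypothetical).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-restricted-user
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["restricted"]    # only this policy may be referenced
    verbs: ["use"]
---
# Bind the role to one service account in one namespace, not cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-restricted-app
  namespace: team-a
subjects:
  - kind: ServiceAccount
    name: app-sa
    namespace: team-a
roleRef:
  kind: ClusterRole
  name: psp-restricted-user
  apiGroup: rbac.authorization.k8s.io
```

Binding at the namespace level like this keeps exceptions scoped: granting the same ClusterRole through a ClusterRoleBinding would have applied it to every namespace at once.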

Policy Type Typical Use
Restricted General application workloads
Baseline Moderately trusted legacy services
Privileged Infrastructure agents and node-level components

Common Use Cases And Real-World Examples

PSPs were often deployed first in application namespaces where platform teams wanted to block dangerous defaults. A team could require every pod to run as non-root, use a read-only root filesystem, and avoid host namespaces. That immediately improved the posture of web apps, APIs, and background workers without forcing developers to understand every node-level security detail.

Stateful workloads were trickier. Databases, message brokers, and storage clients sometimes needed looser permissions for persistent storage or specialized networking. A database operator might need access to a storage plugin or a specific volume type, while still not needing host PID or privileged mode. PSPs gave teams a way to scope those exceptions without abandoning the rest of the guardrails.

CI/CD runners were another common edge case. Build jobs sometimes needed extra filesystem access, package installation, or nested tooling. Monitoring agents also frequently needed read-only access to host metrics, logs, or sockets. Ingress controllers and service mesh components often needed broader network permissions than ordinary apps because they interacted directly with cluster traffic.

These cases show why PSPs were so useful in multi-tenant clusters. One tenant’s operational requirement did not have to become everybody’s default. That helped satisfy internal security reviews, external audit expectations, and platform governance objectives. For organizations mapping workloads to risk, the model aligned well with frameworks like NIST NICE and security governance guidance from COBIT.

Note

The best PSP deployments were not “set it and forget it.” They were built around workload classes, with clear owners for application, platform, and security teams. That ownership model reduced confusion when a pod was denied.

  • Application namespaces: strict non-root defaults.
  • Stateful services: selective volume and filesystem exceptions.
  • Platform services: specialized controls for agents, ingress, and storage.

Limitations And Operational Challenges

PSPs were powerful, but they were also hard to operate at scale. The biggest issue was complexity. A policy that was secure on paper could become fragile in practice if it had too many exceptions, too many bindings, or too many workload-specific edge cases. Platform teams often ended up managing a maze of policies and service account permissions.

Debugging was another pain point. When Kubernetes rejected a pod, the denial message was not always developer-friendly. The symptom might be a generic admission failure, but the underlying cause could be a single forbidden capability, a hostPath mount, or a UID mismatch. That made the feedback loop slow, especially for teams that did not understand pod security settings deeply.

The maintenance burden also grew over time. Applications changed. Images changed. Operators updated. New sidecars were added. Every change risked violating a PSP that had been written months earlier. The result was either frequent breakage or gradual policy drift, where admins loosened controls just to keep deployments moving. That is exactly the kind of outcome DevOps Best Practices are supposed to prevent.

There was also a user-experience problem. PSPs were not very intuitive for developers. The policy intent lived in cluster YAML and RBAC bindings rather than in a simple namespace-level model. That created inconsistent enforcement across teams and made it easy for one namespace to become an exception-heavy outlier. The controls were effective, but cumbersome.

For that reason, PSPs became a good example of a security control that was technically strong but operationally awkward. Organizations that relied on them heavily often needed significant platform engineering maturity just to keep policy changes from becoming an outage source.

Pod Security Policy Deprecation And What Replaced It

Pod Security Policies were deprecated in Kubernetes 1.21 and removed in Kubernetes 1.25. The replacement is the Pod Security Admission model, which uses namespace labels rather than per-policy objects and RBAC bindings. That is a big conceptual shift. Instead of authoring custom policy documents for every workload class, administrators label namespaces as privileged, baseline, or restricted.

The new model is simpler and more predictable. It gives built-in, versioned policy enforcement without the management overhead of PSP objects. It is also easier for development teams to understand because the security posture is visible at the namespace level. That does not eliminate the need for custom controls, but it removes a lot of policy plumbing.

Kubernetes documents Pod Security Admission as the built-in replacement for PSPs. For more complex requirements, organizations can layer tools such as OPA Gatekeeper, Kyverno, or custom admission webhooks. These tools fill the gap when a team needs policies beyond the built-in profile levels. That is often the case for regulatory constraints, image source restrictions, label enforcement, or application-specific exceptions.

The main migration idea is simple: use Pod Security Admission for baseline cluster protection, then add custom policy only where necessary. That model is easier to audit and align with Container Security goals than a large PSP catalog. See the Kubernetes documentation on Pod Security Admission.

  • PSP model: policy objects plus RBAC bindings.
  • Pod Security Admission: namespace labels with built-in profiles.
  • Advanced enforcement: OPA Gatekeeper, Kyverno, admission webhooks.
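In the replacement model, the same intent is expressed declaratively on the namespace itself. A namespace enforcing the restricted profile might be declared like this (the namespace name is illustrative; the `pod-security.kubernetes.io` labels are the documented Pod Security Admission labels):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    # Reject pods that violate the restricted profile.
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Surface violations without blocking, useful during rollout.
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

Because the labels live on the namespace, a developer can see the security posture with a single `kubectl get namespace team-a --show-labels`, with no RBAC archaeology required.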

Best Practices For Kubernetes Pod-Level Security Today

The best modern approach is layered. Start with Pod Security Admission as the baseline enforcement mechanism. Use restricted wherever possible, and only relax to baseline or privileged where a workload truly requires it. That keeps the default safe and makes exceptions visible.

Next, add custom policy engines when you need rules that the built-in admission model cannot express. Kyverno and OPA Gatekeeper are both useful for enforcing image registries, labeling standards, resource settings, and security context requirements. Use them selectively. The goal is not more policy. The goal is clearer policy.
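As a sketch of what a custom engine adds beyond the built-in profiles, a Kyverno rule pinning images to an approved registry might look roughly like the following; the registry URL and policy name are assumptions, not prescriptions:

```yaml
# Illustrative Kyverno policy; registry and names are placeholders.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries
spec:
  validationFailureAction: Enforce   # reject violations rather than just warn
  rules:
    - name: require-approved-registry
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Images must come from the approved internal registry."
        pattern:
          spec:
            containers:
              - image: "registry.example.com/*"   # assumed approved registry
```

This is exactly the kind of rule Pod Security Admission cannot express: the built-in profiles govern security context, not image provenance.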

Least privilege still matters at every layer. Limit service account permissions. Drop unnecessary Linux capabilities. Avoid host networking unless there is a clear need. Mount only the volumes required for the application. Prefer read-only root filesystems for immutable images. Those controls reduce the attack surface even if a pod is compromised.

Strong operational discipline matters too. Test policy changes in a staging cluster, run admission checks in CI/CD, and document namespace expectations for developers. Continuous auditing is not optional. If you do not measure policy drift, you will discover it during an incident. The OWASP Top 10 is a useful reminder that misconfiguration and access control mistakes remain common failure points.

Key Takeaway

Modern Kubernetes security works best when policy is visible, layered, and easy to validate. Built-in controls handle the common case; custom policy handles exceptions.

  • Use namespace labels as the first-line control.
  • Prefer default-deny behavior for privileged settings.
  • Integrate security checks into pull request and deployment workflows.
  • Publish a short policy guide for developers and platform teams.

Migration Strategy From Pod Security Policies

Migration starts with inventory. List every PSP in use, every namespace it applies to, and every service account that can reference it. Then map each policy to the actual workload behavior it allowed. That mapping is essential because many PSPs were written around old assumptions that no longer match the current application stack.

Identify workloads that rely on privileged access, host networking, special volume types, or custom capabilities. Some of those workloads may be candidates for redesign. Others may simply need a modern exception path. The point is to separate true infrastructure requirements from accidental privilege that crept in over time.

A phased approach works best: assess, classify, replace, test, and enforce. First, assess current policy usage and create a risk matrix. Next, classify workloads into restricted, baseline, or privileged groups. Then replace PSP logic with Pod Security Admission labels and custom admission rules where needed. After that, test in staging and use audit logs to identify denials before production rollout.

Validation should include dry runs, deployment pipeline checks, and cluster audit review. If a workload breaks, fix the workload or refine the policy intentionally. Do not weaken security controls just to reduce noise. For organizations following federal guidance, the migration should also align with CISA hardening recommendations and internal governance requirements.
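The audit-before-enforce step can be staged directly with namespace labels: run the target profile in audit and warn modes first, review the resulting audit log entries and client warnings, and only then switch on enforcement. The namespace name below is illustrative:

```shell
# Stage 1: surface would-be violations without blocking any pods.
kubectl label --overwrite namespace team-a \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=restricted

# Stage 2: once the audit log is clean, turn on enforcement.
kubectl label --overwrite namespace team-a \
  pod-security.kubernetes.io/enforce=restricted
```

Running both stages against staging first gives the dry-run signal described above before any production workload can be denied.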

Vision Training Systems recommends treating the migration as a platform program, not a one-time YAML conversion. That keeps security, operations, and application teams aligned on what the new policy model should protect.

  • Assess: inventory PSPs and bindings.
  • Classify: group workloads by required privilege.
  • Replace: move to namespace labels and custom rules.
  • Test: validate in staging and CI/CD.
  • Enforce: roll out with audit visibility and owner sign-off.

Conclusion

Pod Security Policies played a major role in shaping how teams think about Kubernetes hardening, Pod Security, and Container Security. They taught platform teams to think in terms of admission-time guardrails, least privilege, and workload classes instead of trusting every pod spec by default. That idea is still valid, even though the mechanism has changed.

The modern path is cleaner. Use Pod Security Admission for built-in cluster-wide enforcement, then add Kyverno, Gatekeeper, or admission webhooks where you need more granular controls. Keep the focus on practical guardrails: non-root execution, capability limits, filesystem restrictions, namespace isolation, and clear developer guidance. That is how security becomes repeatable instead of reactive.

If you are still carrying PSP-era policy logic in your environment, now is the time to review it. Inventory your workloads, map exceptions, and replace brittle policy sprawl with a simpler model that your teams can actually operate. That is the kind of modernization that improves both security and delivery speed.

Vision Training Systems helps IT teams build the skills needed to design, enforce, and migrate Kubernetes security controls with confidence. Evaluate your current pod security posture, identify the gaps, and move toward a policy model that is easier to manage and harder to bypass.

Common Questions For Quick Answers

What problem did Kubernetes Pod Security Policies address in container security?

Kubernetes Pod Security Policies were designed to control the security-sensitive settings that a pod could request at creation time. In practice, they helped cluster administrators prevent risky configurations such as privileged containers, host network access, hostPath mounts, added Linux capabilities, and unsafe privilege escalation. That made PSPs an important control for reducing the blast radius of a misconfigured or malicious workload.

From a container security perspective, PSPs acted as a guardrail between developer flexibility and operational safety. In multi-tenant Kubernetes environments, that guardrail mattered because one overly permissive pod could affect the node, neighboring workloads, or sensitive data mounted from the host. PSPs were especially useful for enforcing least privilege, which is still a core best practice even as the ecosystem has moved toward newer policy approaches.

Why are Pod Security Policies considered difficult to manage at scale?

Pod Security Policies were powerful, but they were also relatively complex to configure and maintain. A usable policy often needed to align with service account bindings, namespace design, volume requirements, securityContext settings, and workload-specific exceptions. That meant teams had to understand both the policy rules and the exact behavior of each application, which increased operational overhead.

At scale, the biggest challenge was policy sprawl. Different teams might need different levels of access, and administrators had to keep policies narrow enough to be secure while broad enough to avoid breaking legitimate workloads. Small changes, such as allowing a new volume type or a specific capability, could have cluster-wide effects if not carefully scoped. This is why many teams combined policy enforcement with clear standards for pod security, admission control, and workload review processes.

What are the most important security settings to restrict in Kubernetes pods?

The most important pod security settings to restrict are the ones that expand access beyond the container boundary. Common examples include privileged mode, hostNetwork, hostPID, hostIPC, hostPath volumes, added Linux capabilities, root execution (governed by runAsNonRoot and runAsUser), and allowPrivilegeEscalation. Each of these settings can increase the chance that a container can interfere with the node or access data it should not touch.

A strong container security posture usually starts with least privilege and only adds exceptions when there is a clear application need. Good practice also includes using read-only root filesystems where possible, dropping unnecessary capabilities, and setting explicit securityContext values for users and groups. In addition, teams should review whether workloads truly need to run as root, because many applications can function safely with a non-root runtime user and tighter filesystem permissions.
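Put together, those restrictions translate into an explicit securityContext on the pod spec itself. A hardened example might look like the following; the pod name, UID values, and image are placeholders:

```yaml
# Hardened pod sketch; names, UIDs, and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:
    runAsNonRoot: true          # refuse to start if the image runs as UID 0
    runAsUser: 10001            # explicit non-root user
    runAsGroup: 10001
    fsGroup: 10001              # group ownership for mounted volumes
    seccompProfile:
      type: RuntimeDefault      # apply the runtime's default seccomp filter
  containers:
    - name: app
      image: registry.example.com/app:1.0
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]         # drop every Linux capability
```

A pod shaped like this passes the restricted Pod Security Standards profile, which is a practical definition of "least privilege by default."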

How do pod security controls support a least-privilege Kubernetes strategy?

Pod security controls support least privilege by making insecure configurations difficult or impossible to deploy. Instead of trusting every workload to behave correctly, the cluster enforces a baseline that limits what pods can request. That reduces the chance that accidental misconfigurations, unsafe defaults, or compromised application images turn into security incidents.

In a least-privilege Kubernetes strategy, the goal is not to eliminate functionality, but to constrain it to the minimum required for the workload. That means permitting only the volumes, capabilities, and runtime permissions an application genuinely needs. When combined with namespace-level controls, image security practices, and runtime monitoring, pod security enforcement becomes part of a layered defense model that strengthens overall container security without relying on a single control.

What is the biggest misconception teams have about pod security enforcement?

A common misconception is that pod security enforcement alone is enough to secure a Kubernetes cluster. In reality, it is only one layer of defense. Even a well-tuned policy cannot fully protect against vulnerable application code, insecure images, exposed secrets, weak RBAC, or network paths that allow lateral movement after a compromise.

Another misconception is that tighter controls always mean slower delivery. Well-designed pod security standards can actually improve reliability by making runtime behavior more predictable and reducing emergency fixes after incidents. The key is to define clear security profiles, communicate them to application teams, and allow exceptions through a controlled process rather than ad hoc changes. That balance helps organizations maintain both developer velocity and strong container security.
