Securing a Kubernetes Cluster with PodSecurityPolicy

Quick Intro

Kubernetes is flexible enough to let us give pods certain host and kernel capabilities that are traditionally not accessible to them unless we explicitly ask for them.

In this article, I will walk you through two examples of these capabilities:

  • Making a worker node share its NIC directly with a pod
  • Giving a pod's containers privileged access to the worker node's kernel

I'll also show you how we can create a policy that prevents pods from being created with such capabilities, making your cluster more secure.

This is a short, hands-on walk-through, and you can follow along if you've got a running Kubernetes cluster.

Example 1: A pod that shares its worker node's NIC

Typically, a pod has its own IP address and never directly uses a node's IP address, right?

We can change that by setting hostNetwork to true in the pod's spec.

Let's create a regular pod first:
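(The pod name regular-pod and the busybox image are just my choices for this walk-through; any small image would do.)

    $ kubectl run regular-pod --image=busybox --restart=Never -- sleep 3600
    pod/regular-pod created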

Ok, now let me create another pod with hostNetwork added to its spec.

Let's generate a template and redirect it to pod-with-hostNetwork.yaml first:
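(A sketch of the command; on older kubectl versions the flag is plain --dry-run instead of --dry-run=client.)

    $ kubectl run pod-with-hostnetwork --image=busybox --restart=Never \
        --dry-run=client -o yaml > pod-with-hostNetwork.yaml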

Now, I've cleaned up the above template a bit and added hostNetwork:
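(Roughly what mine looks like; the image and sleep command carry over from the template choices above.)

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-hostnetwork
    spec:
      hostNetwork: true             # the pod uses the node's network namespace
      restartPolicy: Never
      containers:
      - name: pod-with-hostnetwork
        image: busybox
        command: ["sleep", "3600"]

And apply it:

    $ kubectl apply -f pod-with-hostNetwork.yaml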

Now, let's check the IP addresses of our nodes:
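The INTERNAL-IP column of the wide output is the one we care about:

    $ kubectl get nodes -o wide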

And the IP addresses of our pods:
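The wide output for pods likewise includes IP and NODE columns:

    $ kubectl get pods -o wide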

Notice that pod-with-hostnetwork shares the same IP address as its node (kube-worker2), while our regular pod does not.

Example 2: Running a pod with the same kernel privileges as processes outside the container

Typically, a container has limited access to the worker node's kernel.

For example, access to device drivers is limited.

Let's list what's visible in the /dev directory within our regular pod:
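(Assuming the regular-pod name from earlier.)

    $ kubectl exec regular-pod -- ls /dev
    $ kubectl exec regular-pod -- sh -c 'ls /dev | wc -l'
    # expect only a handful of generic entries: null, random, zero, pts, ...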

Right, so just a few files.

Now, I'll just use our previous YAML file as a template and set privileged to true:
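(A sketch of the resulting manifest, saved as pod-privileged.yaml; the file and pod names are my own choices.)

    apiVersion: v1
    kind: Pod
    metadata:
      name: privileged-pod
    spec:
      restartPolicy: Never
      containers:
      - name: privileged-pod
        image: busybox
        command: ["sleep", "3600"]
        securityContext:
          privileged: true          # container runs with full host privileges

    $ kubectl apply -f pod-privileged.yaml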

Cool, so now let's check the first 10 lines of /dev within our privileged pod's container and count the files again:
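(Using the privileged-pod name from the manifest above.)

    $ kubectl exec privileged-pod -- sh -c 'ls /dev | head -10'
    $ kubectl exec privileged-pod -- sh -c 'ls /dev | wc -l'
    # the count is far higher than in the regular pod: the container now sees
    # the node's real device files (disks, memory, kernel log, ...)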

We can see that our privileged pod now has much broader access to the node's kernel and devices than our regular pod.

This can be dangerous, right?

Here's my question: as Kubernetes admins, how can we limit or protect our cluster from accidental or deliberate (in the case of an attacker) use of such capabilities?

What if we don't need our pods to have access to these capabilities at all?

Creating a PodSecurityPolicy

Initially, in my lab I have no security policies defined:
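(psp is the built-in short name for podsecuritypolicies.)

    $ kubectl get psp
    No resources found.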

Here's the simplest YAML file I could create:
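(A sketch; the policy name restrict-pods and the file name psp.yaml are my own choices.)

    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: restrict-pods
    spec:
      privileged: false             # deny pods that set privileged: true
      hostNetwork: false            # deny pods that set hostNetwork: true
      runAsUser:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      seLinux:
        rule: RunAsAny

    $ kubectl apply -f psp.yaml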

I'm explicitly disallowing pods from being created with the hostNetwork and privileged capabilities I demonstrated earlier.

The other fields are mandatory, as I need to specify whether containers are allowed to run as any user (runAsUser), belong to any group (fsGroup) or any supplemental groups (supplementalGroups), and whether SELinux options are required to be configured or not (seLinux).

For learning purposes, focus on what we've already talked about here. Keep in mind we're explicitly denying any pod spec that sets hostNetwork or privileged to true, ok?

Now, let's delete our privileged pod and create it again to see if our policy is being enforced:
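(Re-using the names and manifest from Example 2.)

    $ kubectl delete pod privileged-pod
    $ kubectl apply -f pod-privileged.yaml
    pod/privileged-pod created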

WAIT A MINUTE! Nothing happened! Why is that?

That's because we need to enable the PodSecurityPolicy admission controller.

Should we have enabled it first, then? NO!

Make sure you create your policy first, just like we did, and ONLY then enable the controller.

The reason is that, with no policies defined, the controller defaults to deny-all and prevents pods from being created; in my lab, the kube-controller-manager and kube-scheduler pods went into a restart loop.

In the next section we will learn how to enable the controller.

Enabling PodSecurityPolicy

If you're on Google Cloud, you need to either re-create your cluster or update your existing one with the --enable-pod-security-policy flag, like this:
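(A sketch with my-cluster as a placeholder name; at the time of writing the feature sits behind the gcloud beta command group.)

    $ gcloud beta container clusters update my-cluster \
        --enable-pod-security-policy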

Notice there is no Pod Security Policy (PSP) by default on GCP:
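Same check as before:

    $ kubectl get psp
    No resources found.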

On AWS EKS, it is enabled by default and there is a default PSP running:
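Listing the policies on a fresh EKS cluster shows something along these lines; the default policy is named eks.privileged:

    $ kubectl get psp
    NAME             PRIV   CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
    eks.privileged   true   *      RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *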

The above policy has no restrictions, which is pretty much equivalent to running Kubernetes with the PodSecurityPolicy controller disabled.

If you've installed Kubernetes using kubeadm, in your lab or in the enterprise, you can manually edit the kube-apiserver's YAML manifest to enable PSP.

Note that the API server itself runs as a static pod in the kube-system namespace on the master node:
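(The master node name kube-master below is just a placeholder for your own.)

    $ kubectl get pods -n kube-system | grep apiserver
    kube-apiserver-kube-master   1/1   Running   ...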

We can edit its YAML manifest at /etc/kubernetes/manifests/kube-apiserver.yaml on the kube API master node and add PodSecurityPolicy to the container's command flags under --enable-admission-plugins.

Here's what it looks like when I first open the file with my vim editor:
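The relevant flag sits among the kube-apiserver command arguments (surrounding flags omitted):

    $ sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
    ...
        - --enable-admission-plugins=NodeRestriction
    ...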

There is just the NodeRestriction plugin enabled, and I can add PodSecurityPolicy like this:
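    ...
        - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
    ...

Once the file is saved, the kubelet notices the manifest change and restarts the kube-apiserver pod automatically, so there's nothing else to restart by hand.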

Testing Policy Enforcement

Let's check our pods again:
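Both pods are still there, by the way: PSP is an admission controller, so it only validates pods at creation time and leaves existing ones untouched.

    $ kubectl get pods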

Let's delete and re-create the privileged pod and pod-with-hostnetwork:
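(Re-using the earlier manifests; the exact error text may vary slightly by Kubernetes version, but it should look along these lines.)

    $ kubectl delete pod privileged-pod pod-with-hostnetwork
    $ kubectl apply -f pod-privileged.yaml
    Error from server (Forbidden): error when creating "pod-privileged.yaml":
    pods "privileged-pod" is forbidden: unable to validate against any pod
    security policy: [spec.containers[0].securityContext.privileged: Invalid
    value: true: Privileged containers are not allowed]
    $ kubectl apply -f pod-with-hostNetwork.yaml
    Error from server (Forbidden): error when creating "pod-with-hostNetwork.yaml":
    pods "pod-with-hostnetwork" is forbidden: unable to validate against any pod
    security policy: [spec.securityContext.hostNetwork: Invalid value: true:
    Host network is not allowed to be used]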

It works! Neither pod was allowed to be re-created, and our policy is enforced.

Published Oct 21, 2019
Version 1.0
