Quick Intro

Kubernetes is flexible enough to let pods share certain node and kernel capabilities that are normally not accessible to them unless we explicitly ask for them.

In this article, I will walk you through two examples of these capabilities:

  • Making a worker node share its NIC directly with a pod
  • Giving a pod's containers privileged access to the worker node's kernel

I'll also show you how we can create a policy that stops pods from being created with such capabilities, making your cluster more secure.

This is a short, hands-on walk-through, and you can follow along if you've got a running Kubernetes cluster.

Example 1: A pod that shares NIC with its worker node

Typically, a pod has its own IP address and never directly uses its node's IP address, right?

We can change that by setting hostNetwork to true in the pod's spec.

Let's create a regular pod first:

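Something along these lines will do; the pod name regular-pod and the nginx image are just the placeholders I'm using for this walk-through:

  kubectl run regular-pod --image=nginx --restart=Never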

Ok, now let me create another pod with hostNetwork added to its spec.

Let's redirect a template to pod-with-hostNetwork.yaml first:

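In practice, that means asking kubectl for the pod's YAML without actually creating the pod and redirecting the output to the file; nginx is again just my assumed image, and newer kubectl versions want --dry-run=client instead of the bare --dry-run:

  kubectl run pod-with-hostnetwork --image=nginx --restart=Never --dry-run -o yaml > pod-with-hostNetwork.yaml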

Now, I've cleaned up the above template a bit and added hostNetwork:

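The cleaned-up manifest ends up looking roughly like this; the container name and image are my assumptions, and the line that matters is hostNetwork: true under spec:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-hostnetwork
  spec:
    hostNetwork: true
    containers:
    - name: pod-with-hostnetwork
      image: nginx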

Now, let's check the IP address of my nodes:

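The wide output includes each node's internal IP:

  kubectl get nodes -o wide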

And the IP address of our pods:

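The same wide output for pods adds the IP and NODE columns:

  kubectl get pods -o wide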

Notice that pod-with-hostnetwork shares the same IP address as its node (kube-worker2), while our regular pod does not.

Example 2: Running a pod with the same kernel privileges as processes outside the container

Typically, a container has only limited access to the worker node's kernel.

For example, access to device drivers is limited.

Let's list the visible /dev directory within our regular pod:

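With my assumed pod name, listing the directory and counting its entries looks like this:

  kubectl exec regular-pod -- ls /dev
  kubectl exec regular-pod -- ls /dev | wc -l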

Right, so just a few files.

Now, I'll just use our previous YAML file as a template and set privileged to true:

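Here privileged goes under the container's securityContext; as before, the pod name privileged-pod and the image are my placeholders:

  apiVersion: v1
  kind: Pod
  metadata:
    name: privileged-pod
  spec:
    containers:
    - name: privileged-pod
      image: nginx
      securityContext:
        privileged: true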

Cool, so now let's check the first 10 lines of /dev within our privileged pod container and list the number of files again:

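Something like:

  kubectl exec privileged-pod -- ls /dev | head -10
  kubectl exec privileged-pod -- ls /dev | wc -l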

We can see that our privileged pod now has far broader access to the node's devices and kernel than our regular pod.

This can be dangerous, right?

Here's my question: as Kubernetes admins, how can we limit or protect our cluster from accidental or deliberate (in the case of an attacker) use of such capabilities?

What if we don't need our pods to have access to these capabilities at all?

Creating PodSecurityPolicy

Initially, in my lab I have no security policies defined:

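A quick check confirms it:

  kubectl get psp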

Here's the simplest YAML file I could create:

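It looks roughly like this; the policy name restrictive-psp is just my placeholder:

  apiVersion: policy/v1beta1
  kind: PodSecurityPolicy
  metadata:
    name: restrictive-psp
  spec:
    privileged: false
    hostNetwork: false
    runAsUser:
      rule: RunAsAny
    fsGroup:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny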

I'm explicitly disallowing pods from being created with the hostNetwork and privileged capabilities I demonstrated earlier.

The other fields are mandatory: I need to specify whether containers are allowed to run as any user (runAsUser), belong to any group (fsGroup) or any supplemental groups (supplementalGroups), and whether SELinux options need to be configured (seLinux).

For learning purposes, focus on what we've already talked about here. Keep in mind we're explicitly denying any pod that sets hostNetwork or privileged to true, ok?

Now, let's delete our privileged pod and create it again to see if our policy is being enforced:

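Assuming the privileged pod's manifest was saved as privileged-pod.yaml:

  kubectl delete pod privileged-pod
  kubectl apply -f privileged-pod.yaml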

WAIT A MINUTE! Nothing happened! Why is that?

That's because we need to enable the PodSecurityPolicy admission controller.

Should we have enabled it before creating our policy? NO!

Make sure you create your policy first just like we did and ONLY then enable the controller.

The reason is that, with the controller enabled and no policy in place, it defaults to denying all pods and prevents you from creating any; in my lab, the kube-controller-manager and kube-scheduler pods went into a restart loop.

In the next section we will learn how to enable the controller.

Enabling PodSecurityPolicy

If you're on Google Cloud, you need to either re-create your cluster or update your existing one with the --enable-pod-security-policy flag, like this:

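At the time of writing this is a beta gcloud flag, and the cluster name below is just a placeholder:

  gcloud beta container clusters update my-cluster --enable-pod-security-policy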

Notice that there is no PodSecurityPolicy (PSP) defined by default on GCP.


On AWS EKS, it is enabled by default and there is a default PSP running:

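The same kubectl get psp check shows it; on EKS the default policy is typically named eks.privileged, and describing it reveals how permissive it is:

  kubectl get psp
  kubectl describe psp eks.privileged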

That default policy has no restrictions, which is pretty much equivalent to running Kubernetes with the PodSecurityPolicy controller disabled.

If you've installed Kubernetes using kubeadm in your lab or enterprise environment, you can manually edit the kube-apiserver's manifest to enable PSP.

Note that the API server itself runs as a static pod in the kube-system namespace on your master node:

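You can see it alongside the other control-plane pods:

  kubectl get pods -n kube-system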

We can edit its manifest at /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and add PodSecurityPolicy to the --enable-admission-plugins flag in the container's command.

Here's what it looks like when I first open the file with my vim editor:

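The relevant part of the container's command section looks like this (other flags omitted):

  - --enable-admission-plugins=NodeRestriction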

There is just the NodeRestriction plugin enabled, and I can add PodSecurityPolicy like this:

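The modified line becomes:

  - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy

Once the file is saved, the kubelet should automatically re-create the kube-apiserver static pod with the new flag.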

Testing Policy Enforcement

Let's check our pods again:

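The pods we created earlier are unaffected, because the policy is only evaluated when a pod is created:

  kubectl get pods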

Let's delete and re-create the privileged pod and pod-with-hostnetwork:

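Again assuming the manifest file names from earlier:

  kubectl delete pod privileged-pod pod-with-hostnetwork
  kubectl apply -f privileged-pod.yaml
  kubectl apply -f pod-with-hostNetwork.yaml

This time both kubectl apply commands should come back with a Forbidden error along the lines of "unable to validate against any pod security policy".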

It works! Neither pod was allowed to be re-created, and our policy was enforced.
