
Quick Intro

Say we're creating a cloud-native application and we've noticed that some pods need to access the Kubernetes API.

If all our app needs is the pod's metadata, we can use the Downward API to retrieve such data and copy it to environment variables or to a dedicated downward API volume mounted to the container.

For everything else not 'supported' by the Downward API, we can configure our app to 'talk' directly to the Kube API server.

Can we allow pods to access only a specific API subset? 

Absolutely! One way to achieve this is to use Kubernetes RBAC!

It should be enabled by default, but it doesn't hurt to double-check with the following command:

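If the RBAC API group shows up, we're good (the exact versions listed vary with your cluster release):

    $ kubectl api-versions | grep rbac
    rbac.authorization.k8s.io/v1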

In this article, I'm going to explain how the Kube API authenticates pods and authorises their access to some or all of its resources.

Here's what we'll do:

  • Quickly explain what Service Accounts (SAs), Roles, Cluster Roles, Role Bindings and Cluster Role Bindings are
  • Lab test so our concepts become rock solid!

Ready? Let's go!

Service Accounts, Roles, Cluster Roles, Role Bindings and Cluster Role Bindings

Think of Service Accounts as the equivalent of user accounts, but for apps rather than for us (people).

It's the username/account of our app!

Roles are like pre-assigned sets of permissions attached to an account to restrict what it can do.

In the world of Kubernetes there are two kinds of roles: Roles and Cluster Roles.

The difference between the two is that roles are confined to a namespace, while cluster roles are cluster-wide.

The way we 'attach' a role/cluster role to a service account is by creating a role binding/cluster role binding.

To set it all up, we first create a Service Account and a Role (or a Cluster Role) separately like this:

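Using the names we'll stick with for the rest of the article (we'll run these for real in the lab below):

    $ kubectl create serviceaccount myapp1
    $ kubectl create role pod-reader --verb=get --verb=list --resource=pods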

Easy?

Lastly, we'd assign/attach the myapp1 Service Account (SA) to a pod.

Such a pod will end up with the permissions we set in the role bound to myapp1.

In the above example, the myapp1 service account would have permission to perform "get" and "list" operations (what kubectl get uses under the hood).

Let's go through a lab test to make it clearer.

Creating Service Account and Role

First, we create the Service Account (myapp1):

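A one-liner is enough:

    $ kubectl create serviceaccount myapp1
    serviceaccount/myapp1 created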

And a "namespaced" role (pod-reader) on default namespace:

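Note the resource name is plural and we pin the namespace explicitly:

    $ kubectl create role pod-reader --verb=get --verb=list --resource=pods --namespace=default
    role.rbac.authorization.k8s.io/pod-reader created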

FYI, for the automation folks out there, the YAML file equivalent of the above command would be the following:

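Save it as, say, pod-reader.yaml and apply it with kubectl apply -f pod-reader.yaml:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-reader
      namespace: default
    rules:
    - apiGroups: [""]          # "" means the core API group (where pods live)
      resources: ["pods"]
      verbs: ["get", "list"]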

Creating test pod

Let's create a pod that uses our newly created service account to authenticate to the Kube API:

This is the pod's YAML file:

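Here's a minimal sketch of it; the pod name and the container images are my illustrative choices, but the key bits are serviceAccountName and the proxy (helper) container we'll lean on shortly:

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp1-pod
    spec:
      serviceAccountName: myapp1      # authenticate to the Kube API as myapp1
      containers:
      - name: main                    # "our app" (any curl-capable image works)
        image: curlimages/curl
        command: ["sleep", "infinity"]
      - name: proxy                   # helper container that handles authentication
        image: bitnami/kubectl        # illustrative image that ships kubectl
        command: ["kubectl", "proxy", "--port=8001"]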

And this is the command to create the pod:

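Assuming we saved the manifest as myapp1-pod.yaml:

    $ kubectl create -f myapp1-pod.yaml
    pod/myapp1-pod created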

As we can see, this pod uses our newly created SA myapp1, but we're still not able to list any pods:

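Going through the kubectl proxy helper on localhost:8001 (names as per the sketch above), the API answers with a Forbidden status:

    $ kubectl exec -it myapp1-pod -c main -- curl -s http://localhost:8001/api/v1/namespaces/default/pods
    {
      "kind": "Status",
      "status": "Failure",
      "message": "pods is forbidden: User \"system:serviceaccount:default:myapp1\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
      "reason": "Forbidden",
      "code": 403
    }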

We're getting a 403 error, but that's because we haven't bound the role yet.

Role Binding

Now, let's bind myapp1 to our pod-reader role using the following command:

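Here was my first attempt (myapp1-binding is just the name I picked for the binding), which kubectl rejects because the SA reference is incomplete:

    $ kubectl create rolebinding myapp1-binding --role=pod-reader --serviceaccount=myapp1
    error: serviceaccount must be <namespace>:<name>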

Oops! We need to add the namespace too:

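The --serviceaccount flag expects <namespace>:<name>:

    $ kubectl create rolebinding myapp1-binding --role=pod-reader --serviceaccount=default:myapp1
    rolebinding.rbac.authorization.k8s.io/myapp1-binding created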

If our theory is correct, now that we've bound our role to the myapp1 SA, we should be able to list pods that belong to the default namespace:

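Same request as before (output trimmed):

    $ kubectl exec -it myapp1-pod -c main -- curl -s http://localhost:8001/api/v1/namespaces/default/pods
    {
      "kind": "PodList",
      "apiVersion": "v1",
      "items": [
        ...
    }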

It works!

BONUS: How Kubernetes Authentication works behind the scenes

Could we use just one container rather than two to perform the above tests?


The short answer is yes.

However, in order to "talk" to the Kube API, we need to go through an authentication phase.

Therefore, it is much more convenient to delegate the authentication part to a separate proxy (helper) container.

Just because a container is running in a Kubernetes environment, it doesn't necessarily mean it should be able to "talk" to the Kube API without authentication.

Let's create a single-container pod and I'll show you.

I'll use roughly the same config as in our previous test, just without the proxy container:

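Again, the pod name and image are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp1-single
    spec:
      serviceAccountName: myapp1
      containers:
      - name: main
        image: curlimages/curl
        command: ["sleep", "infinity"]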

In every container we run, we'll find a directory containing the Kube API server's CA certificate, along with a JWT token (RFC 7519) and the corresponding namespace:

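From a shell inside the container (kubectl exec -it myapp1-single -- sh), the directory lives at a well-known path:

    $ ls /var/run/secrets/kubernetes.io/serviceaccount/
    ca.crt     namespace  token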

As a side note, notice that the token matches the one from the myapp1 SA we assigned to this pod:

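Still inside the container (token trimmed; it's a standard JWT):

    $ cat /var/run/secrets/kubernetes.io/serviceaccount/token
    eyJhbGciOiJSUzI1NiIsImtpZCI6...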

In another tab, this is how (step by step) I retrieved the same token from the API server:

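On clusters of this vintage, every SA automatically gets a token secret; the generated suffix (xxxxx here) will differ on yours:

    $ kubectl get serviceaccount myapp1 -o jsonpath='{.secrets[0].name}'
    myapp1-token-xxxxx
    $ kubectl get secret myapp1-token-xxxxx -o jsonpath='{.data.token}' | base64 --decode
    eyJhbGciOiJSUzI1NiIsImtpZCI6...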

It's the same token, see?

Also, the Kube API's address can be resolved via its Kubernetes DNS name, and the same info is available in the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables:

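Back inside our container (the ClusterIP shown is a common default; yours may differ):

    $ env | grep KUBERNETES_SERVICE
    KUBERNETES_SERVICE_HOST=10.96.0.1
    KUBERNETES_SERVICE_PORT=443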

If we try to reach the Kube API directly, it fails:

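Without the CA certificate, curl can't verify the API server's certificate:

    $ curl https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api
    curl: (60) SSL certificate problem: unable to get local issuer certificate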

If we want to manually authenticate ourselves, we'd need to use the CA certificate + token we retrieved from the serviceaccount directory.

Let's make things easier and store the token's contents and the cert's file path in variables: 

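Both come from the same well-known directory:

    $ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    $ CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt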

Now we can make the request:

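CA for server verification, bearer token for authentication (output trimmed):

    $ curl --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
        https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/default/pods
    {
      "kind": "PodList",
      "apiVersion": "v1",
      ...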

It works!

The other option would be to install kubectl in our container and use kubectl proxy to do all the authentication for us.

As kubectl would be our proxy, we could issue our requests to localhost and kubectl would reach Kube API for us.

However, as containers within the same pod share the same network namespace, I find it more convenient (and easier) to run a second container as a proxy and let the first container do whatever we want it to do, without having to worry about authentication.

That's it for now.
