What is Kubernetes?

Kubernetes is a container-orchestration platform.

Its goal is to abstract away the complexity of running containerised applications in terms of networking, storage, scaling and more.

It also provides an extensible, declarative REST API to automate the process of hosting and exposing applications.

If that sounds confusing, think of it as the thing that abstracts your infrastructure.

We no longer have to worry about servers, only about how to deploy our application to Kubernetes.

This is what a Kubernetes cluster may look like:

It is composed of physical servers or virtual machines, known as nodes in the Kubernetes world.

We can add or remove nodes at will, and a single Kubernetes cluster can scale up to a staggering 5,000 nodes!

Master Nodes vs Worker Nodes

There are 2 kinds of nodes you should initially know about: master and worker nodes¹.

¹ OpenShift (an enterprise Kubernetes distribution) adds the notion of infrastructure nodes, which are meant to host shared services (e.g. routers, monitoring).

Master nodes manage the Kubernetes cluster using 4 main components: 

The Scheduler assigns pods to worker nodes.

The Controller Manager makes sure the cluster's actual state matches the desired state.

etcd is the key-value store where Kubernetes stores its objects and metadata.

The API Server validates objects before they are stored in etcd and is the central point of contact for creating and retrieving objects and for watching the state of objects and of the cluster in general.

A popular tool to "talk" to the API Server is kubectl. If you work with Kubernetes, you will definitely use kubectl².

² OpenShift has a similar tool called "oc".
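To make the declarative workflow concrete, here is a minimal sketch of an object definition that kubectl would submit to the API Server (the name and image are hypothetical placeholders):

```yaml
# hello-pod.yaml -- a minimal declarative object: we describe the desired
# state and the control-plane components converge the cluster towards it.
# Submitted with: kubectl apply -f hello-pod.yaml (or "oc apply" on OpenShift)
apiVersion: v1
kind: Pod
metadata:
  name: hello
  labels:
    app: hello
spec:
  containers:
    - name: web
      image: nginx:1.17   # any container image would do
      ports:
        - containerPort: 80
```

The API Server validates the object and persists it in etcd; the Scheduler then picks a worker node for the pod.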

Worker nodes communicate with the master node's API Server through the following components:

The kubelet runs on each worker node and watches the API Server to continuously monitor for pods that should be created, deleted or changed.

When we first add a node to the cluster, the kubelet is the daemon that registers the Node resource with the API Server.

kube-proxy makes sure client traffic is efficiently redirected to the correct pod at the networking level.

Redirection is accomplished by using either iptables rules or IPVS virtual servers.
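As a sketch of that choice, assuming a cluster whose kube-proxy is driven by a KubeProxyConfiguration file, switching from the default iptables mode to IPVS is a single field:

```yaml
# Fragment of a KubeProxyConfiguration selecting IPVS mode instead of
# the default iptables mode (how this file is wired up varies by installer).
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
```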

The container runtime, which actually runs the containers on each node, is usually Docker (other runtimes such as containerd are also supported).

Pods and Containers: Where does a Kubernetes application reside?

Not every application is a natural fit for the Kubernetes environment.

Developers have to design their applications as small, replicable components (also known as microservices) that are independent of one another.

Such components are hosted inside a pod.

Pods run on worker nodes:

Within a pod we find one or more containers, and that is where our application (or a small chunk of it) resides.

In the Appendix, I explain why Kubernetes uses pods instead of containers directly.

Understanding the pod's role in scalability

Pods are meant to be replicable, and applications should be designed accordingly to enable horizontal (auto)scaling.

That's one of the powers of Kubernetes! 

We have a cluster of nodes where chunks of our application (pods) can easily increase or decrease in number.

This is also the reason why our pods should be coded in a way that allows them to be replicable.

Imagine our application has a component called shopping-trolley and another one called check-out:

Our shopping-trolley pods may eventually become too overloaded and we might need more replicas to cope with the additional traffic/load.

Increasing or reducing the number of replicas is as easy as declaring the desired number, or letting the cluster auto-scale it for you.
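As a sketch (the Deployment and image below are hypothetical, reusing the shopping-trolley example), the replica count is a single declarative field:

```yaml
# Hypothetical Deployment keeping 3 replicas of the shopping-trolley pod.
# Scaling is just a field change, or equivalently:
#   kubectl scale deployment shopping-trolley --replicas=5
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shopping-trolley
spec:
  replicas: 3
  selector:
    matchLabels:
      app: shopping-trolley
  template:
    metadata:
      labels:
        app: shopping-trolley
    spec:
      containers:
        - name: shopping-trolley
          image: example/shopping-trolley:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```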

Replicas can also be scaled automatically based on metrics such as CPU utilisation or memory consumption.
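Kubernetes ships this as the HorizontalPodAutoscaler; a minimal sketch targeting the hypothetical Deployment above (the thresholds are arbitrary):

```yaml
# Hypothetical autoscaler: keep average CPU around 70%, within 2-10 replicas.
# Equivalent one-liner:
#   kubectl autoscale deployment shopping-trolley --min=2 --max=10 --cpu-percent=70
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: shopping-trolley
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: shopping-trolley
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```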

Before the rise of container orchestrators like Kubernetes, we would have to scale out the whole application stack, unnecessarily overloading servers.

Kubernetes, by contrast, allows us to scale out only the parts of the application that need it, reducing unnecessary server load and costs.

The other advantage is that we can upgrade parts of our application with zero downtime, without the overhead of re-deploying the whole application at once.

Services: how traffic reaches the application within a pod

The Scheduler spreads pods throughout the Kubernetes cluster.

However, it is usually a good idea to group pod replicas behind a single entry point, because a pod's IP address may change.

This is where a Kubernetes Service comes in:

A Service acts as the single point of access for a group of pods, with a fixed DNS name and port.

The way a Service works out which pods belong to it is through labels.

The Service carries a label selector, and pods with a matching label are grouped into the Service.
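As a minimal sketch, reusing the hypothetical shopping-trolley labels from the scaling example:

```yaml
# Hypothetical Service grouping every pod labelled app: shopping-trolley.
apiVersion: v1
kind: Service
metadata:
  name: shopping-trolley
spec:
  selector:
    app: shopping-trolley   # pods carrying this label back the Service
  ports:
    - port: 80          # fixed port clients connect to
      targetPort: 8080  # container port on the pods (assumed)
```

With no explicit type set, this Service defaults to ClusterIP, the first of the exposure modes described below.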

Understanding the 3 ways Services can be exposed

ClusterIP

Services can be exposed internally, for when one group of pods wants to communicate with another.

This is the default and is called the ClusterIP type.

A private IP address, reachable only within the Kubernetes cluster, is used as the single point of access for the group of pods.

NodePort

Services can be exposed externally by using a node's IP address and a dedicated port as the cluster's entry point for external client traffic.

This is called the NodePort type.
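A sketch of the same hypothetical Service exposed as a NodePort:

```yaml
# Hypothetical NodePort variant: in addition to the internal ClusterIP,
# the Service is reachable on every node's IP at a high port
# (default range 30000-32767).
apiVersion: v1
kind: Service
metadata:
  name: shopping-trolley
spec:
  type: NodePort
  selector:
    app: shopping-trolley
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080   # optional; if omitted, Kubernetes picks a free port
```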

If we use NodePort, external clients have to reach one of the nodes directly, so NodePort might not be suitable for most production environments.

If we need to load balance traffic among nodes, the next type is the solution.

LoadBalancer

This is another layer on top of NodePort that load balances traffic in a round-robin fashion across all nodes.
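Switching the hypothetical Service to this type is again a one-field change; on a supported cloud provider, an external load balancer is then provisioned automatically:

```yaml
# Hypothetical LoadBalancer variant: the cloud provider allocates a public
# IP and forwards external traffic to the Service's pods across the nodes.
apiVersion: v1
kind: Service
metadata:
  name: shopping-trolley
spec:
  type: LoadBalancer
  selector:
    app: shopping-trolley
  ports:
    - port: 80
      targetPort: 8080
```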

However, a LoadBalancer can only be tied to a single Service, i.e. if we have multiple Services, we need one LoadBalancer per Service, which could become quite costly.

If we want a single public IP address that directs external traffic to the right Service based on the URL, the next option is the solution.

Ingress Resource

An Ingress is not a Service type but a separate resource: it reads the HTTP Host header and URL path and forwards the connection to the matching Service.

An Ingress can therefore point to multiple Services based on the URL, using a single public IP address as the entry point.

This overcomes the one-LoadBalancer-per-Service limitation of the LoadBalancer type.
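A sketch of such an Ingress (the hostname and paths are assumed), routing the shopping-trolley and check-out Services behind one entry point; note that an ingress controller (e.g. NGINX) must be running in the cluster for this to take effect:

```yaml
# Hypothetical Ingress: one public entry point, two backend Services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop
spec:
  rules:
    - host: shop.example.com   # assumed hostname
      http:
        paths:
          - path: /trolley
            pathType: Prefix
            backend:
              service:
                name: shopping-trolley
                port:
                  number: 80
          - path: /check-out
            pathType: Prefix
            backend:
              service:
                name: check-out
                port:
                  number: 80
```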

Final Remarks

Kubernetes is a well-established DevOps tool, but it is a very extensive topic and is constantly evolving. For release updates, please watch the official Kubernetes blog. There are many Kubernetes objects that were not covered here but should be covered in a future article.

Appendix: Why Pods? Why not use containers directly?

Design

The underlying container technology is independent of Kubernetes.

Pods act as a layer of abstraction on top of it.

Thanks to this abstraction, Kubernetes does not have to adapt to each container technology (such as Docker or rkt) and avoids runtime lock-in; after all, each container runtime has its own strengths.

Application Requirements 

Within a pod, containers can share resources more easily.

For example, one container might perform the main task while another takes care of authentication.

Another example would be one container writing to a shared storage volume and another one reading from it to perform additional processing.

Containers in the same pod share the same network and IPC namespaces.
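As a sketch (names, images and commands are hypothetical), a two-container pod sharing a volume could look like this:

```yaml
# Hypothetical pod: a writer container appends to a shared emptyDir
# volume while a reader sidecar tails it. Sharing the pod's network
# namespace, the two could equally talk over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: writer-reader
spec:
  volumes:
    - name: shared-data
      emptyDir: {}   # scratch space that lives as long as the pod
  containers:
    - name: writer
      image: busybox:1.31
      command: ["sh", "-c", "while true; do date >> /data/log; sleep 5; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox:1.31
      command: ["sh", "-c", "touch /data/log; tail -f /data/log"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```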
