docker
Introducing the New Docker Compose Installation Option for F5 NGINX Instance Manager
F5 NGINX Instance Manager (NIM) is a centralized management solution designed to simplify the administration and monitoring of F5 NGINX instances across various environments, including on-premises, cloud, and hybrid infrastructures. It provides a single interface to efficiently oversee multiple NGINX instances, making it particularly useful for organizations using NGINX at scale. We're excited to introduce a new Docker Compose installation option for NGINX Instance Manager, designed to help you get up and running faster than ever before.

Key Features:

Quick and Easy Installation: With just a couple of steps, you can pull and deploy NGINX Instance Manager on any Docker host, without having to manually configure multiple components. The image is available in our container registry, so once you have a valid license to access it, getting up and running is as simple as pulling the container.

Fault-Tolerant and Resilient: This installation option is designed with fault tolerance in mind. Persistent storage ensures your data is safe even in the event of container restarts or crashes. Additionally, with a separate database container, your product's data is isolated, adding an extra layer of resilience and making it easier to manage backups and restores.

Seamless Upgrades: Upgrades are a breeze. You can update to the latest version of NGINX Instance Manager by simply updating the image tag in your Docker Compose file. This makes it easy to stay up to date with the latest features and improvements without worrying about downtime or complex upgrade processes.

Backup and Restore Options: To ensure your data is protected, this installation option comes with built-in backup and restore capabilities. Easily back up your data to a safe location and restore it in case of any issues.

Environment Configuration Flexibility: The Docker Compose setup allows you to define custom environment variables, giving you full control over configuration settings such as log levels, timeout values, and more.

Production-Ready: Designed for scalability and reliability, this installation method is ready for production environments. With proper resource allocation and tuning, you can deploy NGINX Instance Manager to handle heavy workloads while maintaining performance.

The following steps walk you through how to deploy and manage NGINX Instance Manager using Docker Compose.

What you need
- A working version of Docker
- Your NGINX subscription's JSON Web Token from MyF5
- This pre-configured docker-compose.yaml file: Download docker-compose.yaml file.

Step 1 - Set up Docker for the NGINX container registry
Log in to the Docker registry using the contents of the JSON Web Token file you downloaded from MyF5:

docker login private-registry.nginx.com --username=<JWT_CONTENTS> --password=none

Step 2 - Run "docker login" and then "docker compose up" in the directory where you downloaded docker-compose.yaml
Note: You can optionally set the administrator password for NGINX Instance Manager prior to running Docker Compose.

~$ docker login private-registry.nginx.com --username=<JWT_CONTENTS> --password=none
~$ echo "admin" > admin_password.txt
~$ docker compose up -d
[+] Running 6/6
 ✔ Network nim_clickhouse         Created   0.1s
 ✔ Network nim_external_network   Created   0.2s
 ✔ Network nim_default            Created   0.2s
 ✔ Container nim-precheck-1       Started   0.8s
 ✔ Container nim-clickhouse-1     Healthy   6.7s
 ✔ Container nim-nim-1            Started   7.4s
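If you're curious what's inside the file before downloading it, here is a rough, illustrative sketch of its shape, inferred from the startup output above. The image paths, tags, and service details below are placeholders, not the real file's values - always use the pre-configured file from the link:

services:
  precheck:
    image: private-registry.nginx.com/nim/precheck:latest    # placeholder image/tag
  clickhouse:
    image: clickhouse/clickhouse-server:latest               # placeholder; separate database container
    volumes:
      - clickhouse-data:/var/lib/clickhouse                  # persistent storage survives restarts
    # the real file also defines a healthcheck so 'service_healthy' below works
  nim:
    image: private-registry.nginx.com/nim/nim:latest         # placeholder image/tag
    depends_on:
      clickhouse:
        condition: service_healthy
    ports:
      - "443:443"                                            # UI/API endpoint used in Step 3
    secrets:
      - nim_admin_password
volumes:
  clickhouse-data:
secrets:
  nim_admin_password:
    file: ./admin_password.txt                               # optional admin password from Step 2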
Step 3 – Access NGINX Instance Manager
Go to the NGINX Instance Manager UI at https://<DOCKER_HOST>:443 and license the product using the same JSON Web Token you downloaded from MyF5 earlier.

Conclusion
With this new setup, you can install and run NGINX Instance Manager on any Docker host in just three steps, dramatically reducing setup time and simplifying deployment. Whether you are working in a development environment or deploying to production, the Docker Compose-based solution ensures a seamless and reliable experience. For more information on using the Docker Compose option with NGINX Instance Manager, such as running a backup and restore, using secrets, and more, please see the instructions here.

Creating a Docker Container to Run AS3 Declarations
This guide will take you through some very basic Docker, Python, and F5 AS3 configuration to create a single-function container that will update a pre-determined BIG-IP using an AS3 declaration stored on GitHub. While it's far from production ready, it might serve as a basis for more complex configurations, plus it illustrates nicely some technology you can use to automate BIG-IP configuration using AS3, Python and containers. I'm starting with a running BIG-IP - in this case a VE running on the Google Cloud Platform - with the AS3 worker installed and provisioned, plus a couple of webservers listening on different ports.

First we're going to need a host running Docker. Fire up an instance on the platform of your choice - in this example I'm using Ubuntu 18.04 LTS on the Google Cloud Platform; that's purely from familiarity, anything that can run Docker will do. The install process is well documented but looks a bit like this:

$ sudo apt-get update
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io

It's worth adding your user to the docker group to avoid repeatedly forgetting to type sudo (or is that just me?):

$ sudo usermod -aG docker $USER

Next, let's test it's all working:

$ docker run hello-world

Next, let's take a look at the AS3 declaration. As you might expect from me by now, it's the most basic version: a simple HTTP app with two pool members. The beauty of the AS3 model, of course, is that it doesn't matter how complex your declaration is - the implementation is always the same. So you could take a much more involved declaration and, just by changing the file the Python script uses, get a more complex configuration.

{
  "class": "AS3",
  "action": "deploy",
  "persist": true,
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.0.0",
    "id": "urn:uuid:33045210-3ab8-4636-9b2a-c98d22ab915d",
    "label": "Sample 1",
    "remark": "Simple HTTP Service with Round-Robin Load Balancing",
    "Sample_01": {
      "class": "Tenant",
      "A1": {
        "class": "Application",
        "template": "http",
        "serviceMain": {
          "class": "Service_HTTP",
          "virtualAddresses": [ "10.138.0.4" ],
          "pool": "web_pool"
        },
        "web_pool": {
          "class": "Pool",
          "monitors": [ "http" ],
          "members": [
            { "servicePort": 8080, "serverAddresses": [ "10.138.0.3" ] },
            { "servicePort": 8081, "serverAddresses": [ "10.138.0.3" ] }
          ]
        }
      }
    }
  }
}

Now we need some Python code to fire up our request. The code below is absolutely a minimum viable set that's been written for simplicity and clarity and does minimal error checking. There are more ways to improve it than lines of code in it, but it will get you started.
# Python code to run an AS3 declaration
import requests
import os
from requests.auth import HTTPBasicAuth

# Get rid of annoying insecure requests warning
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)

# Declaration location
GITDRC = 'https://raw.githubusercontent.com/RuncibleSpoon/as3/master/declarations/payload.json'
IP = '10.138.0.4'
PORT = '8443'
USER = os.environ['XUSER']
PASS = os.environ['XPASS']
URLBASE = 'https://' + IP + ':' + PORT
TESTPATH = '/mgmt/shared/appsvcs/info'
AS3PATH = '/mgmt/shared/appsvcs/declare'

print("########### Fetching Declaration ###########")
# Fetch the AS3 declaration from GitHub
d = requests.get(GITDRC)

# Check we have connectivity and AS3 is installed
print('########### Checking that AS3 is running on', IP, '#########')
url = URLBASE + TESTPATH
r = requests.get(url, auth=HTTPBasicAuth(USER, PASS), verify=False)
if r.status_code == 200:
    data = r.json()
    if data["version"]:
        print('AS3 version is', data["version"])
        print('########## Running Declaration #############')
        url = URLBASE + AS3PATH
        headers = { 'content-type': 'application/json', 'accept': 'application/json' }
        # POST the declaration we fetched earlier to the AS3 endpoint
        r = requests.post(url, auth=HTTPBasicAuth(USER, PASS), verify=False,
                          data=d.text, headers=headers)
        print('Status Code:', r.status_code, '\n', r.text)
else:
    print('AS3 test to', IP, 'failed:', r.text)

This simple Python code will pull down an AS3 declaration from GitHub using the 'requests' Python library and the GITDRC variable, connect to a specific BIG-IP, test that it's running AS3 (see here for AS3 setup instructions), and then apply the declaration. It will give you some tracing output, but that's about it. There are a couple of things to note about IPs, users, and passwords:

IP = '10.138.0.4'
PORT = '8443'
USER = os.environ['XUSER']
PASS = os.environ['XPASS']

As you can see, I've set the IP and port statically, and the username and password are pulled in from environment variables in the container. We'll talk more about the environment variables below, but this is more a way to illustrate your options than design advice. Now we need to build a container to run it in. Containers are relatively easy to build with just a Dockerfile and a few more files in a directory. Here's the Dockerfile:

FROM python:3
WORKDIR /usr/src/app
ARG Username=admin
ENV XUSER=$Username
ARG Password=admin
ENV XPASS=$Password
# The line below is not actually used - see comments - but it probably is a better way
ARG DecURL=https://raw.githubusercontent.com/RuncibleSpoon/as3/master/declarations/payload.json
ENV Declaration=$DecURL
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENTRYPOINT [ "python", "./as3.py" ]

You can see a couple of ARG and ENV statements; these simply set the environment variables that we're (somewhat arbitrarily) using in the Python script. Furthermore, we're going to override them in the build command later. It's worth noting this isn't a way to obfuscate passwords: they are exposed by a simple

$ docker image history

command that will expose all sorts of things about the build of the container, including the environment variables passed to it. This can be overcome by a multi-stage build, but proper secret management is something you should explore - comment below if you'd like some examples.
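Incidentally, if you'd rather not bake the credentials into the image at all, one hedged alternative (a sketch, not a full secret-management solution) is to drop the Username/Password ARG/ENV pairs from the Dockerfile and inject the variables only at run time; docker run -e sets environment variables without recording them in the image history:

$ docker build -t runciblespoon/as3_python:A .
$ docker run --tty --rm -e XUSER="$XUSER" -e XPASS="$XPASS" runciblespoon/as3_python:A

The Python script reads XUSER and XPASS from the environment either way, so no code changes are needed.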
What's this requirements.txt file mentioned in the Dockerfile? It's just a manifest for the install of the Python package we need:

# This file is used by pip to install required python packages
# Usage: pip install -r requirements.txt
# requests package
requests==2.21.0

With our Dockerfile, requirements.txt and as3.py files in a directory, we're ready to build a container. In this case I'm going to pass some environment variables into the build to be incorporated in the image, replacing the ones we have set in the Dockerfile:

$ export XUSER=admin
$ export XPASS=admin

Build the container (the -t flag names and tags your container, more of which later):

$ docker build -t runciblespoon/as3_python:A --build-arg Username=$XUSER --build-arg Password=$XPASS .

The first time you do this there will be some lag as files for the python:3 source container are downloaded and cached, but once it has run you should be able to see your image:

$ docker image list
REPOSITORY                 TAG      IMAGE ID       CREATED              SIZE
runciblespoon/as3_python   A        819dfeaad5eb   About a minute ago   936MB
python                     3        954987809e63   42 hours ago         929MB
hello-world                latest   fce289e99eb9   3 months ago         1.84kB

Now we are ready to run the container. Maybe a good time for a 'nothing up my sleeve' moment - here is the state of the BIG-IP beforehand.

Now let's run the container from our image. The --tty flag attaches a pseudo-terminal for output and --rm deletes the container afterwards:

$ docker run --tty --rm runciblespoon/as3_python:A
########### Fetching Declaration ###########
########### Checking that AS3 is running on 10.138.0.4 #########
AS3 version is 3.10.0
########## Running Declaration #############
Status Code: 200
{"results":[{"message":"success","lineCount":23,"code":200,"host":"localhost","tenant":"Sample_01","runTime":929}],"declaration":{"class":"ADC","schemaVersion":"3.0.0","id":"urn:uuid:33045210-3ab8-4636-9b2a-c98d22ab915d","label":"Sample 1","remark":"Simple HTTP Service with Round-Robin Load Balancing","Sample_01":{"class":"Tenant","A1":{"class":"Application","template":"http","serviceMain":{"class":"Service_HTTP","virtualAddresses":["10.138.0.4"],"pool":"web_pool"},"web_pool":{"class":"Pool","monitors":["http"],"members":[{"servicePort":8080,"serverAddresses":["10.138.0.3"]},{"servicePort":8081,"serverAddresses":["10.138.0.3"]}]}}},"updateMode":"selective","controls":{"archiveTimestamp":"2019-04-26T18:40:56.861Z"}}}

Success, by the looks of things. Let's check the BIG-IP: running our container has pulled down the AS3 declaration and applied it to the BIG-IP. This same container can now be run repeatedly, and only the AS3 declaration stored in git (or anywhere else your container can get it from) needs to change.

So now you have this container running locally, you might want to put it somewhere. Docker Hub is a good choice and lets you create one private repository for free. Remember this container image has credentials, so keep it safe and private. Now, the reason for the -t runciblespoon/as3_python:A flag earlier: my Docker Hub user is "runciblespoon" and my private repository is as3_python. So now all I need to do is log in to Docker Hub and push my image there:

$ docker login
$ docker push runciblespoon/as3_python:A

Now I can go to any other host that runs Docker, log in to Docker Hub, and run my container:

$ docker login
$ docker run --tty --rm runciblespoon/as3_python:A
Unable to find image 'runciblespoon/as3_python:A' locally
A: Pulling from runciblespoon/as3_python
...
########### Fetching Declaration ###########

Docker will pull down my container from my private repo and run it, using the AS3 declaration I've specified. If I want to change my config, I just change the declaration and run it again. Hopefully this article gives you a starting point to develop your own containers, Python scripts, or AS3 declarations. I'd be interested in what more you would like to see, so please ask away in the comments section.

F5 Kubernetes Container Integration
Two problems. First, finding docs to set up the f5 kube-proxy: the doc is missing from this link - http://clouddocs.f5.com/products/asp/v1.0/tbd - but I haven't gotten far enough to be able to test communication. Second, k8s-bigip-ctlr is not writing VIP or pool updates. I have k8s-bigip-ctlr and asp running.

$ kubectl get pods --namespace kube-system -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP             NODE
f5-asp-1d61j                                 1/1     Running   0          57m   10.20.30.168   ranchernode2.lax.verifi.com
f5-asp-9wmbw                                 1/1     Running   0          57m   10.20.30.162   ranchernode1.lax.verifi.com
heapster-818085469-4bnsg                     1/1     Running   7          25d   10.42.228.59   ranchernode1.lax.verifi.com
k8s-bigip-ctlr-deployment-1527378375-d1p8v   1/1     Running   0          41m   10.42.68.136   ranchernode2.lax.verifi.com
kube-dns-1208858260-ppgc0                    4/4     Running   8          25d   10.42.26.16    ranchernode1.lax.verifi.com
kubernetes-dashboard-2492700511-r20rw        1/1     Running   6          25d   10.42.29.28    ranchernode1.lax.verifi.com
monitoring-grafana-832403127-cq197           1/1     Running   7          25d   10.42.240.16   ranchernode1.lax.verifi.com
monitoring-influxdb-2441835288-p0sg1         1/1     Running   5          25d   10.42.86.70    ranchernode1.lax.verifi.com
tiller-deploy-3991468440-1x80g               1/1     Running   6          25d   10.42.6.76     ranchernode1.lax.verifi.com

I have tried with k8s-bigip-ctlr 1.0.0 (latest), which fails with different errors.

Creating a VIP with bigip-virtual-server_v0.1.0.json:
2017/06/27 22:50:13 [WARNING] Could not get config for ConfigMap: k8s.vs - minLength must be of an integer

Creating a Pool with bigip-virtual-server_v0.1.0.json:
2017/06/27 22:46:45 [WARNING] Could not get config for ConfigMap: k8s.pool - format must be a valid format

So I tried 1.1.0-beta.1 and it does produce something in the logs like it's working, but it doesn't write any changes to the F5. Using f5schemadb://bigip-virtual-server_v0.1.3.json with 1.1.0-beta.1 seems to get the farthest:

2017/06/27 22:58:19 [DEBUG] Delegating type *v1.ConfigMap to virtual server processors
2017/06/27 22:58:19 [DEBUG] Process ConfigMap watch - change type: Add name: hello-vs namespace: default
2017/06/27 22:58:19 [DEBUG] Add watch of namespace default and resource services, store exists:true
2017/06/27 22:58:19 [DEBUG] Looking for service "hello" in namespace "default" as specified by ConfigMap "hello-vs".
2017/06/27 22:58:19 [DEBUG] Requested service backend {ServiceName:hello ServicePort:80 Namespace:default} not of NodePort type
2017/06/27 22:58:19 [DEBUG] Updating ConfigMap {ServiceName:hello ServicePort:80 Namespace:default} annotation - status.virtual-server.f5.com/ip: 10.20.28.70
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) writing section name services
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) successfully wrote section (services)
2017/06/27 22:58:19 [INFO] Wrote 0 Virtual Server configs
2017/06/27 22:58:19 [DEBUG] Services: []
2017/06/27 22:58:19 [DEBUG] Delegating type *v1.ConfigMap to virtual server processors
2017/06/27 22:58:19 [DEBUG] Process ConfigMap watch - change type: Update name: hello-vs namespace: default
2017/06/27 22:58:19 [DEBUG] Add watch of namespace default and resource services, store exists:true
2017/06/27 22:58:19 [DEBUG] Looking for service "hello" in namespace "default" as specified by ConfigMap "hello-vs".
2017/06/27 22:58:19 [DEBUG] Requested service backend {ServiceName:hello ServicePort:80 Namespace:default} not of NodePort type
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) writing section name services
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) successfully wrote section (services)
2017/06/27 22:58:19 [INFO] Wrote 0 Virtual Server configs
2017/06/27 22:58:19 [DEBUG] Services: []

Config Map:

kind: ConfigMap
apiVersion: v1
metadata:
  name: hello-vs
  namespace: default
  labels:
    f5type: virtual-server
data:
  schema: "f5schemadb://bigip-virtual-server_v0.1.3.json"
  data: |-
    {
      "virtualServer": {
        "frontend": {
          "balance": "round-robin",
          "mode": "http",
          "partition": "kubernetes",
          "virtualAddress": {
            "bindAddr": "10.20.28.70",
            "port": 443
          }
        },
        "backend": {
          "serviceName": "hello",
          "servicePort": 80
        }
      }
    }
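An editorial aside on the "not of NodePort type" debug line above: the controller appears to skip writing the pool because the backend Service is not exposed as NodePort. Purely as an illustrative sketch (the service name and port come from the ConfigMap above; the selector label is assumed), a NodePort Service would look something like:

apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: default
spec:
  type: NodePort        # the type the controller's log says it expects
  selector:
    app: hello          # assumed label on the hello pods
  ports:
    - port: 80
      targetPort: 80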
Port Redirection Failure
I'm using a non-prod F5 running 12.1.2 Build 1.292.271. We have a cluster of nodes that serve up various apps on different ports:

/App1 - 80
/App2 - 81
/App3 - 82

I have configured a pool with members that have all service ports enabled, and a single VS with a VIP and a service port of 0. Here is my iRule:

when HTTP_REQUEST {
    switch -glob [HTTP::uri] {
        "/App1*" { set port 80 }
        "/App2*" { set port 81 }
        "/App3*" { set port 82 }
    }
}
when LB_SELECTED {
    pool [LB::server pool] member [LB::server addr] $port
}

When running statistics on the iRule, I get failures in the "LB_SELECTED" part; however, from my perspective this should be the correct syntax to change the service port on a pool. I would like some feedback on this configuration - can someone comment on it? Thanks.

Knowledge sharing: Containers, Kubernetes, Openshift, F5 Container Connector, NGINX Ingress
For anyone interested in the free training for "F5 Container Connector for Kubernetes" or "F5 OpenShift Container Integration" at LearnF5: for NGINX being installed in Kubernetes there is enough info, but for F5 Container Connector/Container Ingress Services there is not so much:

https://docs.nginx.com/nginx-ingress-controller/f5-ingresslink/
https://www.nginx.com/products/nginx-ingress-controller/
https://community.f5.com/t5/technical-articles/better-together-f5-container-ingress-services-and-nginx-plus/ta-p/280471

F5 DevCentral also has a YouTube channel with useful info:
https://www.youtube.com/c/devcentral

If you don't have good knowledge about containers and Kubernetes, first check the links below. For Docker containers, on YouTube you will find a lot of good training, for example:

you need to learn Kubernetes RIGHT NOW!! - YouTube
Docker Tutorial for Beginners [FULL COURSE in 3 Hours] - YouTube
Docker overview | Docker Documentation

The same is true for Kubernetes, and they have a free test lab on their site:

Learn Kubernetes Basics | Kubernetes
you need to learn Docker RIGHT NOW!! // Docker Containers 101 - YouTube

Red Hat has some free training, and IBM provides some free labs for Containers, Kubernetes, OpenShift, etc.:

Training and Certification (redhat.com)
IBM CloudLabs: Free, Interactive Kubernetes Tutorials | IBM
Red Hat OpenShift Tutorials | IBM

What is Kubernetes?
Kubernetes is a container-orchestration platform. Its goal is to abstract away the complexity of running containerised applications in terms of network, storage, scaling and more. It also provides a declarative REST API (which is extensible) in order to automate the process of application hosting and exposure. If that sounds confusing, think of it as the thing that abstracts your infrastructure: we no longer have to worry about servers, only about how to deploy our application to Kubernetes. This is how a Kubernetes cluster may look: it is comprised of a cluster of physical servers or virtual machines, known as nodes in the Kubernetes world. We can add or remove nodes at will, and Kubernetes can scale up to a staggering 5,000 nodes!

Master nodes vs Worker Nodes
There are 2 kinds of nodes you should initially know about: master and worker nodes¹.

¹ OpenShift (an enterprise fork of Kubernetes) adds the notion of infrastructure node. Infrastructure nodes are meant to host shared services (e.g. router nodes, monitoring, etc).

Master nodes manage the Kubernetes cluster using 4 main components:
- Scheduler: schedules pods to worker nodes.
- Controller manager: makes sure the cluster's actual state = desired state.
- ETCD: where Kubernetes stores its objects and metadata.
- API Server: validates objects before they're stored in ETCD and, of course, is the central point of contact for object creation, retrieval, and watching the state of objects and the cluster in general. A popular tool to "talk" to the API Server is kubectl. If you install Kubernetes, you will definitely use kubectl².

² OpenShift has a similar tool called "oc"

Worker nodes communicate with the master node's API Server in the following manner:
- Kubelet runs on each worker node and watches the API Server to continuously monitor for pods that should be created, deleted or changed. When we first add a node to the Kubernetes cluster, Kubelet is the daemon that registers the Node resource with the API Server.
- Kube-proxy makes sure client traffic is redirected to the correct pod, networking-wise, in an efficient manner. Redirection is accomplished by using either iptables rules or IPVS virtual servers.
- Container runtime is usually Docker.

Pods and Containers: Where does a Kubernetes application reside?
Not every application is compatible with the Kubernetes environment. Developers have to create their application in a specific way, with small replicable components (also known as micro-services) that are independent from other components. Such components are hosted inside of a pod. Pods run in worker nodes: within pods we can find one or more containers, and that's where our application (or a small chunk of our application) resides. In the Appendix section, I will explain why we use pods instead of containers directly.

Understanding Pod's scalability component
Pods are supposed to be replicable, so the application is designed in such a way as to enable horizontal (auto) scalability. That's one of the powers of Kubernetes! We have a cluster of nodes where chunks of our application (pods) can easily increase or decrease in number. This is also the reason why our pods should be coded in a way that allows them to be replicable.
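To make the replica idea concrete, here is a minimal sketch of a Deployment manifest (all names and the image are illustrative, not from the original article); the replicas field is the knob that scales the set of identical pods:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: shopping-trolley            # illustrative name
spec:
  replicas: 3                       # desired number of identical pods
  selector:
    matchLabels:
      app: shopping-trolley
  template:                         # pod template the replicas are stamped from
    metadata:
      labels:
        app: shopping-trolley
    spec:
      containers:
        - name: shopping-trolley
          image: example/shopping-trolley:1.0   # illustrative image
          ports:
            - containerPort: 8080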
Imagine our application has a component called shopping-trolley and another one called check-out: our shopping-trolley pods may eventually become too overloaded, and we might need more replicas to cope with the additional traffic/load. Increasing or reducing the number of replicas is as easy as writing down the number of replicas or letting cloud providers auto-scale it for you. Cloud providers also allow us to increase replicas based on CPU cycles, memory, etc. Before the rise of container orchestrators like Kubernetes, we would have to scale out the whole application stack, unnecessarily overloading servers. Kubernetes allows us to scale out only the parts of the application that need it, effectively reducing unnecessary load on servers and costs. The other advantage is that we can upgrade part of our application with zero downtime, without the overhead of re-deploying the whole application at once.

Services: how traffic reaches the Application within a Pod
The Scheduler spreads out pods throughout the Kubernetes cluster. However, it is usually a good idea to group pod replicas behind a single entry point for reachability purposes, as a pod's IP may change. This is where a Kubernetes Service comes in: Services act as the single point of access for a group of pods, with a fixed DNS name and port. The way Services work out which pods should belong to them is by the use of labels: there is a label selector on a Service, and pods with the same label are grouped into the Service.

Understanding the 3 ways Services can be exposed

ClusterIP
Services can be exposed internally, when one group of pods wants to communicate with another one. This is the default and is called the ClusterIP type. A private IP address, reachable only within the Kubernetes cluster, is used as the single point of access for the group of pods.

NodePort
Services can be exposed externally by using a node's public IP address and port as the cluster's entry point for clients' external traffic. This is called the NodePort type. If we use NodePort, external clients have to directly reach one of the nodes, so NodePort might not be suitable for most production environments. If we need to load balance traffic among nodes, the next type is the solution.

LoadBalancer
This is another layer on top of NodePort that load balances traffic in a round-robin fashion to all nodes. However, the LoadBalancer type can only be tied to a single Service, i.e. if we have multiple Services, we would need one LoadBalancer per Service, which could become quite costly. If we want to use a single public IP address to direct external traffic to the right Service based on URL, the next type is the solution.

Ingress Resource
The Ingress type reads the HTTP Host header and forwards the connection to a Service based on the URL/path. An Ingress can point to multiple Services based on URL, using a single public IP address as the entry point. This overcomes the limitation of one LoadBalancer per Service on the LoadBalancer type.
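To tie the label selector and the URL routing together, here is a minimal hedged sketch (all names are illustrative, reusing the shopping-trolley example) of a ClusterIP Service and an Ingress that routes a path to it:

apiVersion: v1
kind: Service
metadata:
  name: shopping-trolley
spec:
  # type defaults to ClusterIP; NodePort or LoadBalancer would be set here
  selector:
    app: shopping-trolley        # groups all pods carrying this label
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop
spec:
  rules:
    - http:
        paths:
          - path: /trolley       # URL path routed to the Service above
            pathType: Prefix
            backend:
              service:
                name: shopping-trolley
                port:
                  number: 80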
Final Remarks
Kubernetes is currently a well-established DevOps tool, but it is a very extensive topic and is constantly evolving. For release updates, please watch the Kubernetes official blog. There are many Kubernetes objects that were not covered here but should be covered in a future article.

Appendix: Why Pods? Why not use containers directly?

Design
The underlying container technology is independent from Kubernetes. Pods act as a layer of abstraction on top of it. With that in mind, Kubernetes doesn't really have to adapt to different container technologies (such as Docker, Rocket, etc.) and avoids runtime lock-in, i.e. each container runtime has its own strengths.

Application Requirements
Within pods, containers can potentially share resources more easily. For example, one container might run to perform a certain task and the other one to take care of authentication. Another example would be one container writing to a shared storage volume and another one reading from it to perform additional processing. Containers in the same pod share the same network and IPC namespaces.
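To illustrate that last point with a minimal sketch (the pod name, images and commands are made up for this example), a single pod can run a writer and a reader container sharing one volume:

apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
    - name: scratch
      emptyDir: {}                 # volume shared by both containers
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "while true; do date >> /data/log; sleep 5; done"]
      volumeMounts:
        - name: scratch
          mountPath: /data
    - name: reader
      image: busybox
      command: ["sh", "-c", "tail -f /data/log"]   # reads what the writer produces
      volumeMounts:
        - name: scratch
          mountPath: /data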
Exploring Kubernetes API using Wireshark part 2: Namespaces

Related Articles:
Exploring Kubernetes API using Wireshark part 1: Creating, Listing and Deleting Pods
Exploring Kubernetes API using Wireshark part 3: Python Client API

Quick Intro
Using the kubectl command is pretty useful. When you execute kubectl get pods, kubectl sends a GET request to /api/v1/namespaces/default/pods: the Kubernetes master node replies with a JSON file containing all pods (along with their info) that belong to the namespace 'default'. In this article, I'm going to explain what Kubernetes namespaces are by showing you real HTTP traffic reaching the Kubernetes master node. I've removed the TLS complexity by using a proxy, so we can just focus on the HTTP headers only.

Understanding namespaces
Initially, I'd say just memorise that /api/v1 is like the root directory of the Kubernetes master node's API, where the client is going to retrieve all sorts of information. Have you noticed the namespaces in /api/v1/namespaces/default/pods? default just happens to be the namespace that our pods listed here belong to. Think of namespaces for Kubernetes as virtual Kubernetes clusters, just like Virtual Machines (VMs) for an OS. We can have identical objects with the same name that belong to different namespaces and are therefore isolated from each other from the point of view of the API.

Creating a new custom namespace
I can create a new namespace using the kubectl command. I can then create the same identical pods from the default namespace in rodrigo's namespace. Let's see what happened under the hood when I typed the above command. When we create a new namespace, kubectl sends an HTTP POST request to the Kubernetes master node:

pcap: creating-rodrigo-namespace.pcap

The kubectl client sends a JSON file like this in the POST request. Then, the Kubernetes master responds with an HTTP 201 Created message and another JSON file with all the newly created namespace's info. I've described some of the JSON info that came back from the API just out of curiosity. Note that many different objects are 'namespaced', i.e. they belong to a namespace. Others, like nodes, are namespace-independent. I used pods as the example here to explain namespaces, as pods are the most popular and well-known object in the Kubernetes world.

Keeping 2 identical pods in 2 namespaces
Let me create a new NGINX pod in the new namespace. Oops! We need to specify that we're creating the same pod in the new namespace we've just created; otherwise it defaults to the default namespace, where the nginx pod already exists. It now worked. Let's list the pods from rodrigo's namespace with kubectl. When we capture the above request on Wireshark, we see that our GET request to the Kubernetes master now uses rodrigo's namespace, so we're listing pods from the rodrigo namespace only. We also have this same exact pod, using the same name, in the default namespace. Remember?

Deleting my custom namespace
Now, let's delete our namespace. And that's the API call under the hood - an HTTP DELETE request to the complete path of the namespace, just like we're deleting a folder:

pcap: deleting-namespace.pcap

Listing pods from all namespaces
If you're curious about how the URL looks when we list pods from all namespaces with kubectl, the answer is a GET request to /api/v1/pods. This request will list all pods from all namespaces.
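If you want to poke at these same URLs yourself without Wireshark, here is a quick hedged sketch in Python (assuming kubectl proxy is running on its default 127.0.0.1:8001, which strips away the TLS and auth concerns just like the proxy used in this article):

import requests

BASE = "http://127.0.0.1:8001"    # default kubectl proxy address

# Same URI we saw on the wire: pods scoped to the "rodrigo" namespace
r = requests.get(BASE + "/api/v1/namespaces/rodrigo/pods")
pod_list = r.json()
print(pod_list["kind"])           # "PodList"
for item in pod_list.get("items", []):
    print(item["metadata"]["name"], item["status"].get("phase"))

# And the namespace-independent listing across all namespaces
r = requests.get(BASE + "/api/v1/pods")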
Troubleshooting Namespaces
Remember I mentioned the finalizers attribute? When I was creating this article and tried to delete my custom namespace (rodrigo), it got stuck in the Terminating state. Initially I thought it was just Google Cloud slowness, but 40 minutes? That's a lot. So I suspected it could be because of the finalizers attribute, googled it, and found that it was a bug. Here's the solution:

1. Retrieve the namespace's JSON declaration to a temporary file.
2. Delete the kubernetes keyword from the finalizers attribute.
3. Send a PUT request to the API with the JSON file above.
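Stitched together as a sketch, those three steps look roughly like this (the namespace name comes from this article; the PUT goes through kubectl proxy on its default 127.0.0.1:8001, and the /finalize subresource is the commonly documented endpoint for this workaround):

$ kubectl get namespace rodrigo -o json > tmp.json
  (edit tmp.json and remove "kubernetes" from the spec.finalizers list)
$ curl -X PUT -H "Content-Type: application/json" --data-binary @tmp.json \
    http://127.0.0.1:8001/api/v1/namespaces/rodrigo/finalize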
Then, when I looked back, it was finally gone.

Exploring Kubernetes API using Wireshark part 1: Creating, Listing and Deleting Pods

Related Articles:
Exploring Kubernetes API using Wireshark part 2: Namespaces
Exploring Kubernetes API using Wireshark part 3: Python Client API

Quick Intro
This article answers the following question: what happens when we create, list and delete pods under the hood? More specifically, on the wire. I used three commands - one to create a pod, one to list pods, and one to delete the pod - and I'll show you on Wireshark the communication between the kubectl client and the master node (API) for each of them. I used a proxy so we don't have to worry about the TLS layer and can focus on HTTP only.

Creating NGINX pod
pcap: creating_pod.pcap (use the http filter on Wireshark)
Here's our YAML file. Here's how we create this pod, and here's what we see on Wireshark: behind the scenes, the kubectl command sent an HTTP POST with our YAML file converted to JSON, but notice the same things were sent (kind, apiVersion, metadata, spec). You can even expand it if you want to, but I didn't, to keep it short. Then, the Kubernetes master (API) responds with HTTP 201 Created to confirm our pod has been created. Notice that the master node replies with similar data plus an additional status field, because after a pod is created it's supposed to have a status too.

Listing Pods
pcap: listing_pods.pcap (use the http filter on Wireshark)
When we list pods, kubectl just sends an HTTP GET request instead of POST, because we don't need to submit any data apart from headers. This is the full GET request, and here's the HTTP 200 OK with the JSON file that contains all information about all pods from the default namespace. I just want to emphasise that when you list pods, the resource type that comes back is PodList, whereas when we created our pod it was just Pod. Remember? The other thing I'd like to point out is that all of your pods' information is listed under items. All kubectl does is display some of the API's info in a human-readable way.

Deleting NGINX pod
pcap: deleting_pod.pcap (use the http filter on Wireshark)
Behind the scenes, we're just sending an HTTP DELETE to the Kubernetes master. Also notice that the pod's name is included in the URI: /api/v1/namespaces/default/pods/nginx ← "nginx" is the pod's name. HTTP DELETE, just like HTTP GET, is pretty straightforward. Our master node replies with HTTP 200 OK, along with a JSON file containing all the info about the pod, including its termination. It's also good to emphasise here that when our pod is deleted, the master node returns a JSON file with all information available about the pod. I highlighted some interesting info: for example, the resource type is now just Pod (not PodList, as when we're just listing our pods).
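To reproduce these three calls without kubectl, here is a short hedged sketch (again assuming kubectl proxy on its default 127.0.0.1:8001; the pod spec is the minimal nginx example used in this series):

import requests

BASE = "http://127.0.0.1:8001"
PODS = BASE + "/api/v1/namespaces/default/pods"

# Create: HTTP POST of the pod spec as JSON (what kubectl did with our YAML)
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "nginx"},
    "spec": {"containers": [{"name": "nginx", "image": "nginx"}]},
}
r = requests.post(PODS, json=pod)
print(r.status_code)                      # expect 201 Created

# List: HTTP GET, which returns a PodList
print(requests.get(PODS).json()["kind"])  # "PodList"

# Delete: HTTP DELETE with the pod's name appended to the URI
r = requests.delete(PODS + "/nginx")
print(r.status_code)                      # expect 200 OK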