docker
Creating a Docker Container to Run AS3 Declarations
This guide will take you through some very basic Docker, Python, and F5 AS3 configuration to create a single-function container that will update a pre-determined BIG-IP using an AS3 declaration stored on GitHub. While it's far from production ready, it might serve as a basis for more complex configurations, plus it illustrates nicely some technology you can use to automate BIG-IP configuration using AS3, Python and containers.

I'm starting with a running BIG-IP - in this case a VE running on the Google Cloud Platform, with the AS3 worker installed and provisioned, plus a couple of webservers listening on different ports.

First we're going to need a host running Docker. Fire up an instance on the platform of your choice - in this example I'm using Ubuntu 18.04 LTS on the Google Cloud Platform, purely from familiarity - anything that can run Docker will do. The install process is well documented but looks a bit like this:

```
$ sudo apt-get update
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
```

It's worth adding your user to the docker group to avoid repeatedly forgetting to type sudo (or is that just me?):

```
$ sudo usermod -aG docker $USER
```

Next let's test it's all working:

```
$ docker run hello-world
```

Next let's take a look at the AS3 declaration. As you might expect from me by now, it's the most basic version - a simple HTTP app with two pool members. The beauty of the AS3 model, of course, is that it doesn't matter how complex your declaration is, the implementation is always the same. So you could take a much more involved declaration and, just by changing the file the Python script uses, get a more complex configuration.

```json
{
  "class": "AS3",
  "action": "deploy",
  "persist": true,
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.0.0",
    "id": "urn:uuid:33045210-3ab8-4636-9b2a-c98d22ab915d",
    "label": "Sample 1",
    "remark": "Simple HTTP Service with Round-Robin Load Balancing",
    "Sample_01": {
      "class": "Tenant",
      "A1": {
        "class": "Application",
        "template": "http",
        "serviceMain": {
          "class": "Service_HTTP",
          "virtualAddresses": [
            "10.138.0.4"
          ],
          "pool": "web_pool"
        },
        "web_pool": {
          "class": "Pool",
          "monitors": [
            "http"
          ],
          "members": [
            {
              "servicePort": 8080,
              "serverAddresses": [
                "10.138.0.3"
              ]
            },
            {
              "servicePort": 8081,
              "serverAddresses": [
                "10.138.0.3"
              ]
            }
          ]
        }
      }
    }
  }
}
```

Now we need some Python code to fire up our request. The code below is absolutely a minimum viable set that's been written for simplicity and clarity and does minimal error checking. There are more ways to improve it than lines of code in it, but it will get you started.
```python
# Python code to run an AS3 declaration
import requests
import os
from requests.auth import HTTPBasicAuth

# Get rid of annoying insecure requests warning
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)

# Declaration location
GITDRC = 'https://raw.githubusercontent.com/RuncibleSpoon/as3/master/declarations/payload.json'
IP = '10.138.0.4'
PORT = '8443'
USER = os.environ['XUSER']
PASS = os.environ['XPASS']
URLBASE = 'https://' + IP + ':' + PORT
TESTPATH = '/mgmt/shared/appsvcs/info'
AS3PATH = '/mgmt/shared/appsvcs/declare'

print("########### Fetching Declaration ###########")
d = requests.get(GITDRC)

# Check we have connectivity and AS3 is installed
print('########### Checking that AS3 is running on ', IP, ' #########')
url = URLBASE + TESTPATH
r = requests.get(url, auth=HTTPBasicAuth(USER, PASS), verify=False)
if r.status_code == 200:
    data = r.json()
    if data["version"]:
        print('AS3 version is ', data["version"])
        print('########## Running Declaration #############')
        url = URLBASE + AS3PATH
        headers = {
            'content-type': 'application/json',
            'accept': 'application/json'
        }
        r = requests.post(url, auth=HTTPBasicAuth(USER, PASS), verify=False,
                          data=d.text, headers=headers)
        print('Status Code:', r.status_code, '\n', r.text)
else:
    print('AS3 test to ', IP, 'failed: ', r.text)
```

This simple Python code will pull down an AS3 declaration from GitHub using the 'requests' Python library and the GITDRC variable, connect to a specific BIG-IP, test that it's running AS3 (see here for AS3 setup instructions), and then apply the declaration. It will give you some tracing output, but that's about it. There are a couple of things to note about IPs, users, and passwords:

```python
IP = '10.138.0.4'
PORT = '8443'
USER = os.environ['XUSER']
PASS = os.environ['XPASS']
```

As you can see, I've set the IP and port statically, and the username and password are pulled in from environment variables in the container. We'll talk more about the environment variables below, but this is more a way to illustrate your options than design advice.

Now we need to build a container to run it in. Containers are relatively easy to build with just a Dockerfile and a few more files in a directory. Here's the Dockerfile:

```dockerfile
FROM python:3

WORKDIR /usr/src/app

ARG Username=admin
ENV XUSER=$Username
ARG Password=admin
ENV XPASS=$Password

# The line below is not actually used - see comments - but it's probably a better way
ARG DecURL=https://raw.githubusercontent.com/RuncibleSpoon/as3/master/declarations/payload.json
ENV Declaration=$DecURL

COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

ENTRYPOINT [ "python", "./as3.py" ]
```

You can see a couple of ARG and ENV statements; these simply set the environment variables that we're (somewhat arbitrarily) using in the Python script. Furthermore, we're going to override them in the build command later. It's worth noting this isn't a way to obfuscate passwords: they are exposed by a simple `docker image history` command that will expose all sorts of things about the build of the container, including the environment variables passed to it. This can be overcome by a multi-stage build - but proper secret management is something you should explore - and comment below if you'd like some examples.
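Since the script reads XUSER and XPASS from the environment at run time, one alternative worth sketching - my suggestion, not part of the original build - is to skip baking credentials in at build time entirely and instead inject them when the container starts, so they never end up in an image layer:

```
$ docker run --tty --rm -e XUSER=admin -e XPASS=admin runciblespoon/as3_python:A
```

(The image name and tag here assume the image we build in the next step; -e simply overrides the ENV defaults from the Dockerfile.)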
What's this requirements.txt file mentioned in the Dockerfile? It's just a manifest for the install of the Python package we need:

```
# This file is used by pip to install required python packages
# Usage: pip install -r requirements.txt
# requests package
requests==2.21.0
```

With our Dockerfile, requirements.txt and as3.py files in a directory, we're ready to build a container. In this case I'm going to pass some environment variables into the build to be incorporated in the image, replacing the ones we have set in the Dockerfile:

```
$ export XUSER=admin
$ export XPASS=admin
```

Build the container (the -t flag names and tags your container, more of which later):

```
$ docker build -t runciblespoon/as3_python:A --build-arg Username=$XUSER --build-arg Password=$XPASS .
```

The first time you do this there will be some lag as files for the python:3 source container are downloaded and cached, but once it has run you should be able to see your image:

```
$ docker image list
REPOSITORY                 TAG      IMAGE ID       CREATED              SIZE
runciblespoon/as3_python   A        819dfeaad5eb   About a minute ago   936MB
python                     3        954987809e63   42 hours ago         929MB
hello-world                latest   fce289e99eb9   3 months ago         1.84kB
```

Now we are ready to run the container - maybe a good time for a 'nothing up my sleeve' moment: here is the state of the BIG-IP beforehand. Let's run the container from our image. The --tty flag attaches a pseudo terminal for output and --rm deletes the container afterwards:

```
$ docker run --tty --rm runciblespoon/as3_python:A
########### Fetching Declaration ###########
########### Checking that AS3 is running on 10.138.0.4 #########
AS3 version is 3.10.0
########## Running Declaration #############
Status Code: 200
{"results":[{"message":"success","lineCount":23,"code":200,"host":"localhost","tenant":"Sample_01","runTime":929}],"declaration":{"class":"ADC","schemaVersion":"3.0.0","id":"urn:uuid:33045210-3ab8-4636-9b2a-c98d22ab915d","label":"Sample 1","remark":"Simple HTTP Service with Round-Robin Load Balancing","Sample_01":{"class":"Tenant","A1":{"class":"Application","template":"http","serviceMain":{"class":"Service_HTTP","virtualAddresses":["10.138.0.4"],"pool":"web_pool"},"web_pool":{"class":"Pool","monitors":["http"],"members":[{"servicePort":8080,"serverAddresses":["10.138.0.3"]},{"servicePort":8081,"serverAddresses":["10.138.0.3"]}]}}},"updateMode":"selective","controls":{"archiveTimestamp":"2019-04-26T18:40:56.861Z"}}}
```

Success, by the looks of things. Let's check the BIG-IP: running our container has pulled down the AS3 declaration and applied it to the BIG-IP. This same container can now be run repeatedly - and only the AS3 declaration stored in git (or anywhere else your container can get it from) needs to change.

So now you have this container running locally, you might want to put it somewhere. Docker Hub is a good choice and lets you create one private repository for free. Remember this container image has credentials, so keep it safe and private. Now the reason for the -t runciblespoon/as3_python:A flag earlier: my Docker Hub user is "runciblespoon" and my private repository is as3_python. So now all I need to do is log in to Docker Hub and push my image there:

```
$ docker login
$ docker push runciblespoon/as3_python:A
```

Now I can go to any other host that runs Docker, log in to Docker Hub and run my container:

```
$ docker login
$ docker run --tty --rm runciblespoon/as3_python:A
Unable to find image 'runciblespoon/as3_python:A' locally
A: Pulling from runciblespoon/as3_python
...
########### Fetching Declaration ###########
```
Docker will pull down my container from my private repo and run it, using the AS3 declaration I've specified. If I want to change my config, I just change the declaration and run it again. Hopefully this article gives you a starting point to develop your own containers, Python scripts, or AS3 declarations. I'd be interested in what more you would like to see - please ask away in the comments section.

F5 Kubernetes Container Integration
Two problems. First, finding docs to set up the F5 kube-proxy: the doc is missing from this link - http://clouddocs.f5.com/products/asp/v1.0/tbd - though I haven't gotten far enough to be able to test communication yet. The second is that k8s-bigip-ctlr is not writing VIP or pool updates. I have k8s-bigip-ctlr and asp running:

```
$ kubectl get pods --namespace kube-system -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP             NODE
f5-asp-1d61j                                 1/1     Running   0          57m   10.20.30.168   ranchernode2.lax.verifi.com
f5-asp-9wmbw                                 1/1     Running   0          57m   10.20.30.162   ranchernode1.lax.verifi.com
heapster-818085469-4bnsg                     1/1     Running   7          25d   10.42.228.59   ranchernode1.lax.verifi.com
k8s-bigip-ctlr-deployment-1527378375-d1p8v   1/1     Running   0          41m   10.42.68.136   ranchernode2.lax.verifi.com
kube-dns-1208858260-ppgc0                    4/4     Running   8          25d   10.42.26.16    ranchernode1.lax.verifi.com
kubernetes-dashboard-2492700511-r20rw        1/1     Running   6          25d   10.42.29.28    ranchernode1.lax.verifi.com
monitoring-grafana-832403127-cq197           1/1     Running   7          25d   10.42.240.16   ranchernode1.lax.verifi.com
monitoring-influxdb-2441835288-p0sg1         1/1     Running   5          25d   10.42.86.70    ranchernode1.lax.verifi.com
tiller-deploy-3991468440-1x80g               1/1     Running   6          25d   10.42.6.76     ranchernode1.lax.verifi.com
```

I have tried with k8s-bigip-ctlr 1.0.0 (latest), which fails with different errors.

Creating a VIP with bigip-virtual-server_v0.1.0.json:

```
2017/06/27 22:50:13 [WARNING] Could not get config for ConfigMap: k8s.vs - minLength must be of an integer
```

Creating a pool with bigip-virtual-server_v0.1.0.json:

```
2017/06/27 22:46:45 [WARNING] Could not get config for ConfigMap: k8s.pool - format must be a valid format
```

So I tried 1.1.0-beta.1, and it does produce something in the logs like it's working, but it doesn't write any changes to the F5 (using f5schemadb bigip-virtual-server_v0.1.3.json). Here, using f5schemadb://bigip-virtual-server_v0.1.3.json with 1.1.0-beta.1 seems to get the farthest:

```
2017/06/27 22:58:19 [DEBUG] Delegating type *v1.ConfigMap to virtual server processors
2017/06/27 22:58:19 [DEBUG] Process ConfigMap watch - change type: Add name: hello-vs namespace: default
2017/06/27 22:58:19 [DEBUG] Add watch of namespace default and resource services, store exists:true
2017/06/27 22:58:19 [DEBUG] Looking for service "hello" in namespace "default" as specified by ConfigMap "hello-vs".
2017/06/27 22:58:19 [DEBUG] Requested service backend {ServiceName:hello ServicePort:80 Namespace:default} not of NodePort type
2017/06/27 22:58:19 [DEBUG] Updating ConfigMap {ServiceName:hello ServicePort:80 Namespace:default} annotation - status.virtual-server.f5.com/ip: 10.20.28.70
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) writing section name services
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) successfully wrote section (services)
2017/06/27 22:58:19 [INFO] Wrote 0 Virtual Server configs
2017/06/27 22:58:19 [DEBUG] Services: []
2017/06/27 22:58:19 [DEBUG] Delegating type *v1.ConfigMap to virtual server processors
2017/06/27 22:58:19 [DEBUG] Process ConfigMap watch - change type: Update name: hello-vs namespace: default
2017/06/27 22:58:19 [DEBUG] Add watch of namespace default and resource services, store exists:true
2017/06/27 22:58:19 [DEBUG] Looking for service "hello" in namespace "default" as specified by ConfigMap "hello-vs".
2017/06/27 22:58:19 [DEBUG] Requested service backend {ServiceName:hello ServicePort:80 Namespace:default} not of NodePort type
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) writing section name services
2017/06/27 22:58:19 [DEBUG] ConfigWriter (0xc42039b3b0) successfully wrote section (services)
2017/06/27 22:58:19 [INFO] Wrote 0 Virtual Server configs
2017/06/27 22:58:19 [DEBUG] Services: []
```

ConfigMap:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: hello-vs
  namespace: default
  labels:
    f5type: virtual-server
data:
  schema: "f5schemadb://bigip-virtual-server_v0.1.3.json"
  data: |-
    {
      "virtualServer": {
        "frontend": {
          "balance": "round-robin",
          "mode": "http",
          "partition": "kubernetes",
          "virtualAddress": {
            "bindAddr": "10.20.28.70",
            "port": 443
          }
        },
        "backend": {
          "serviceName": "hello",
          "servicePort": 80
        }
      }
    }
```

Port Redirection Failure
I'm using a non-prod F5 running 12.1.2 Build 1.292.271. We have a cluster of nodes that serve up various apps on different ports:

/App1 - 80
/App2 - 81
/App3 - 82

I have configured a pool with members that have all service ports enabled, plus a single VS with a VIP and a service port of 0. Here is my iRule:

```
when HTTP_REQUEST {
    switch -glob [HTTP::uri] {
        "/App1*" { set port 80 }
        "/App2*" { set port 81 }
        "/App3*" { set port 82 }
    }
}
when LB_SELECTED {
    pool [LB::server pool] member [LB::server addr] $port
}
```

When running statistics on the iRule I get failures in the LB_SELECTED event; however, from my perspective this should be the correct syntax to change the service port on a pool. I would like some feedback - can someone comment on this configuration? Thanks.

Knowledge sharing: Containers, Kubernetes, Openshift, F5 Container Connector, NGINX Ingress
For anyone interested in the free training for "F5 Container Connector for Kubernetes" or "F5 OpenShift Container Integration" at LearnF5: for NGINX being installed in Kubernetes there is enough info, but for F5 Container Connector/Container Ingress Services there is not so much:

https://docs.nginx.com/nginx-ingress-controller/f5-ingresslink/
https://www.nginx.com/products/nginx-ingress-controller/
https://community.f5.com/t5/technical-articles/better-together-f5-container-ingress-services-and-nginx-plus/ta-p/280471

F5 DevCentral also has a YouTube channel with useful info: https://www.youtube.com/c/devcentral

If you don't have good knowledge about containers and Kubernetes, first check the links below. For Docker containers, on YouTube you will find a lot of good training, for example:

you need to learn Kubernetes RIGHT NOW!! - YouTube
Docker Tutorial for Beginners [FULL COURSE in 3 Hours] - YouTube
Docker overview | Docker Documentation

The same is true for Kubernetes, and they have a free test lab on their site:

Learn Kubernetes Basics | Kubernetes
you need to learn Docker RIGHT NOW!! // Docker Containers 101 - YouTube

Red Hat has some free training, and IBM provides some free labs for Containers, Kubernetes, OpenShift etc.:

Training and Certification (redhat.com)
IBM CloudLabs: Free, Interactive Kubernetes Tutorials | IBM
Red Hat OpenShift Tutorials | IBM

What is Kubernetes?
Kubernetes is a container-orchestration platform. Its goal is to abstract away the complexity of running containerised applications in terms of network, storage, scaling and more. It also provides a declarative REST API (which is extensible) in order to automate the process of application hosting and exposure. If that sounds confusing, think of it as the thing that abstracts your infrastructure. We no longer have to worry about servers, but only about how to deploy our application to Kubernetes.

A Kubernetes cluster is comprised of a cluster of physical servers or virtual machines known as nodes in the Kubernetes world. We can add or remove nodes at will, and Kubernetes can scale up or down to a staggering 5,000 nodes!

Master nodes vs Worker Nodes

There are 2 kinds of nodes you should initially know about: master and worker nodes¹.

¹ OpenShift (an enterprise fork of Kubernetes) adds the notion of infrastructure node. Infrastructure nodes are meant to host shared services (e.g. router nodes, monitoring, etc).

Master nodes manage the Kubernetes cluster using 4 main components:

Scheduler schedules pods to worker nodes.
Controller manager makes sure the cluster's actual state = desired state.
ETCD is where Kubernetes stores its objects and metadata.
API Server validates objects before they're stored in ETCD and of course is the central point of contact for object creation and retrieval, and to watch the state of objects and the cluster in general. A popular tool to "talk" to the API Server is kubectl. If you install Kubernetes, you will definitely use kubectl².

² OpenShift has a similar tool called "oc".

Worker nodes communicate with the master node's API Server in the following manner:

Kubelet runs on each worker node and watches the API Server to continuously monitor for pods that should be created, deleted or changed. When we first add a node to the Kubernetes cluster, Kubelet is the daemon that registers the Node resource with the API Server.
Kube-proxy makes sure client traffic is redirected to the correct pod, networking-wise, in an efficient manner. Redirection is accomplished by using either iptables rules or IPVS virtual servers.
Container runtime is usually Docker.

Pods and Containers: Where does a Kubernetes application reside?

Not every application is compatible with a Kubernetes environment. Developers have to create their application in a specific way, with small replicable components (also known as micro-services) that are independent from other components. Such components are hosted inside of a pod. Pods run on worker nodes. Within pods we can find one or more containers, and that's where our application (or a small chunk of our application) resides. In the Appendix section, I will explain why we use pods instead of containers directly.

Understanding Pod's scalability component

Pods are supposed to be replicable, so applications are designed in such a way as to enable horizontal (auto) scalability. That's one of the powers of Kubernetes! We have a cluster of nodes where chunks of our application (pods) can easily increase or decrease in numbers. This is also the reason why our pods should be coded in a way that allows them to be replicable.

Imagine our application has a component called shopping-trolley and another one called check-out: our shopping-trolley pods may eventually become too overloaded and we might need more replicas to cope with the additional traffic/load. Increasing/reducing the number of replicas is as easy as writing down the number of replicas or letting cloud providers auto-scale it for you, as in the sketch below.
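As an illustration only (the image name and labels here are hypothetical, not from the article), a minimal Deployment manifest that asks Kubernetes to keep 3 replicas of a shopping-trolley pod might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shopping-trolley
spec:
  replicas: 3                  # scale out/in by changing this number
  selector:
    matchLabels:
      app: shopping-trolley    # manage pods carrying this label
  template:
    metadata:
      labels:
        app: shopping-trolley
    spec:
      containers:
      - name: shopping-trolley
        image: example/shopping-trolley:1.0   # hypothetical image
```

Changing replicas from 3 to 5 and re-applying the manifest is all it takes to scale out.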
Cloud providers also allow us to increase replicas based on CPU cycles, memory, etc. Before the rise of container orchestrators like Kubernetes, we would have to scale out the whole application stack, unnecessarily overloading servers. Kubernetes allows us to scale out only the parts of the application that need it, effectively reducing unnecessary server load and costs. The other advantage is that we can upgrade part of our application with zero downtime, without the overhead of re-deploying the whole application at once.

Services: how traffic reaches the Application within a Pod

The Scheduler spreads pods out throughout the Kubernetes cluster. However, it is usually a good idea to group pod replicas behind a single entry-point for reachability purposes, as a pod's IP may change. This is where a Kubernetes Service comes in: Services act as the single point of access for a group of pods, with a fixed DNS name and port. The way Services work out which pods belong to them is by the use of labels: there is a label selector on a Service, and pods with the same label are grouped into the Service, as in the sketch below.
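Again as a hedged illustration (reusing the hypothetical labels from the Deployment sketch above; the ports are assumptions), a ClusterIP Service selecting those pods could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: shopping-trolley
spec:
  selector:
    app: shopping-trolley   # pods carrying this label join the Service
  ports:
  - port: 80                # port the Service exposes
    targetPort: 8080        # port the container listens on (assumed)
  type: ClusterIP           # the default; see the exposure types below
```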
Understanding the 3 ways Services can be exposed

ClusterIP: Services can be exposed internally, when one group of pods wants to communicate with another. This is the default and is called the ClusterIP type. A private IP address, reachable only within the Kubernetes cluster, is used as the single point of access for the group of pods.

NodePort: Services can be exposed externally by using a node's public IP address and port as the cluster's entry point for clients' external traffic. This is called the NodePort type. If we use NodePort, external clients have to directly reach one of the nodes, so NodePort might not be suitable for most production environments. If we need to load balance traffic among nodes, the next type is the solution.

LoadBalancer: This is another layer on top of NodePort that load balances traffic in a round robin fashion to all nodes. However, the LoadBalancer type can only be tied to a single Service, i.e. if we have multiple Services, we would need one LoadBalancer per Service, which could become quite costly. If we want to use a single public IP address to direct external traffic to the right Service based on URL, the next type is the solution.

Ingress Resource: The Ingress type reads the HTTP Host header and forwards the connection to a Service based on the URL/path. An Ingress can point to multiple Services based on URL, using a single public IP address as entry point. This overcomes the limitation of one LoadBalancer per Service of the LoadBalancer type.

Final Remarks

Kubernetes is currently a well-established DevOps tool, but it is a very extensive topic and is constantly evolving. For release updates, please watch the official Kubernetes blog. There are many Kubernetes objects that were not covered here but should be covered in a future article.

Appendix: Why Pods? Why not use containers directly?

Design: The underlying container technology is independent from Kubernetes. Pods act as a layer of abstraction on top of it. With that in mind, Kubernetes doesn't really have to adapt to different container technologies (such as Docker, Rocket, etc) and avoids runtime lock-in, i.e. each container runtime has its own strengths.

Application Requirements: Within pods, containers can potentially share resources more easily. For example, one container might run to perform a certain task and another to take care of authentication. Another example would be one container writing to a shared storage volume and another one reading from it to perform additional processing. Containers in the same pod share the same network and IPC namespaces. A sketch of such a two-container pod follows below.
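To make the shared-volume idea concrete, here is a hedged sketch of a two-container pod sharing an emptyDir volume (all names, images and commands here are my own illustration, not from the article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: writer-reader          # hypothetical name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}               # scratch volume shared by both containers
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "while true; do date >> /data/log.txt; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 10; tail -f /data/log.txt"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```

Because both containers also share the pod's network namespace, they could equally talk to each other over localhost.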
Exploring Kubernetes API using Wireshark part 2: Namespaces

Related Articles:
Exploring Kubernetes API using Wireshark part 1: Creating, Listing and Deleting Pods
Exploring Kubernetes API using Wireshark part 3: Python Client API

Quick Intro

Using the kubectl command is pretty useful. When you execute a command such as kubectl get pods, kubectl sends a GET request to /api/v1/namespaces/default/pods, and the Kubernetes master node replies with a JSON file containing all the pods (along with their info) that belong to the namespace 'default'.

In this article, I'm going to explain what Kubernetes namespaces are by showing you real HTTP traffic reaching the Kubernetes master node. I've removed the TLS complexity by using a proxy, so we can focus on the HTTP headers only.

Understanding namespaces

Initially, I'd say just memorise that /api/v1 is like the root directory of the Kubernetes master node's API, where the client retrieves all sorts of information. Have you noticed the namespaces in /api/v1/namespaces/default/pods? default just happens to be the namespace that the pods listed here belong to. Think of namespaces for Kubernetes like virtual machines (VMs) for an OS: virtual Kubernetes clusters. We can have identical objects with the same name that belong to different namespaces and are therefore isolated from each other from the point of view of the API.

Creating a new custom namespace

I can create a new namespace (named rodrigo here) using the kubectl command. Let's see what happened under the hood when I typed that command. When we create a new namespace, kubectl sends an HTTP POST request to the Kubernetes master node (pcap: creating-rodrigo-namespace.pcap). The kubectl client sends a JSON file describing the namespace in the POST request, and the Kubernetes master responds with an HTTP 201 Created message and another JSON file with all the newly created namespace's info.

I've described some of the JSON info that came back from the API just out of curiosity. Note that many different objects are 'namespaced', i.e. they belong to a namespace, while others, like nodes, are namespace-independent. I used pods as the example here to explain namespaces, as pods are the most popular and well-known object in the Kubernetes world.

Keeping 2 identical pods in 2 namespaces

Let me create a new NGINX pod in the new namespace. Ops! We need to specify that we're creating the pod in the new namespace we've just created, otherwise the request defaults to the default namespace, where the nginx pod already exists. With the namespace specified, it worked.

Let's list only the pods from rodrigo's namespace with kubectl. When we capture the request on Wireshark, we see that our GET request to the Kubernetes master now uses rodrigo's namespace in the path, so we're listing pods from the rodrigo namespace only. We also still have the same pod, with the same name, in the default namespace. Remember?

Deleting my custom namespace

Now, let's delete our namespace. The API call under the hood is an HTTP DELETE request to the complete path of the namespace - just like we're deleting a folder (pcap: deleting-namespace.pcap).

Listing pods from all namespaces

If you're curious about how the URL looks when we list pods from all namespaces with kubectl (kubectl get pods --all-namespaces), the answer is /api/v1/pods: this request will list all pods from all namespaces. If you'd like to poke at these URLs yourself, there's a sketch right below.
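Here's a minimal sketch of those paths. My assumption here is that kubectl proxy is used to handle authentication and TLS, so plain curl works against localhost; the rodrigo namespace is the one created in this article:

```
$ kubectl proxy --port=8001 &
$ curl http://127.0.0.1:8001/api/v1/namespaces/default/pods   # pods in 'default' only
$ curl http://127.0.0.1:8001/api/v1/namespaces/rodrigo/pods   # pods in 'rodrigo' only
$ curl http://127.0.0.1:8001/api/v1/pods                      # pods across all namespaces
```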
Troubleshooting Namespaces

Remember I mentioned the finalizers attribute? When I was creating this article and tried to delete my custom namespace (rodrigo), it got stuck in the Terminating state. Initially I thought it was just Google Cloud slowness, but 40 minutes? That's a lot. So I suspected it could be because of the finalizers attribute, googled it, and found that it was a bug. Here's the solution:

Retrieve the namespace's JSON declaration to a temporary file.
Delete the kubernetes keyword from the finalizers attribute.
Now send a PUT request to the API with the JSON file above (a sketch of these steps follows below).
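A rough sketch of that workaround, assuming kubectl proxy is running as in the earlier sketch (the namespace name comes from this article; how you edit the JSON is up to you):

```
$ kubectl get namespace rodrigo -o json > tmp.json
# edit tmp.json and remove "kubernetes" from the spec.finalizers list
$ kubectl proxy --port=8001 &
$ curl -X PUT -H "Content-Type: application/json" --data-binary @tmp.json \
    http://127.0.0.1:8001/api/v1/namespaces/rodrigo/finalize
```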
Related Articles: Exploring Kubernetes API using Wireshark part 2: Namespaces Exploring Kubernetes API using Wireshark part 3: Python Client API Quick Intro This article answers the following question: What happens when we create, list and delete pods under the hood? More specifically on the wire. I used these 3 commands: I'll show you on Wireshark the communication between kubectl client and master node (API) for each of the above commands. I used a proxy so we don't have to worry about TLS layer and focus on HTTP only. Creating NGINX pod pcap:creating_pod.pcap (use http filter on Wireshark) Here's our YAML file: Here's how we create this pod: Here's what we see on Wireshark: Behind the scenes, kubectl command sent an HTTP POST with our YAML file converted to JSON but notice the same thing was sent (kind, apiVersion, metadata, spec): You can even expand it if you want to but I didn't to keep it short. Then, Kubernetes master (API) responds with HTTP 201 Created to confirm our pod has been created: Notice that master node replies with similar data with the additional status column because after pod is created it's supposed to have a status too. Listing Pods pcap:listing_pods.pcap (use http filter on Wireshark) When we list pods, kubectl just sends a HTTP GET request instead of POST because we don't need to submit any data apart from headers: This is the full GET request: And here's the HTTP 200 OK with JSON file that contains all information about all pods from default's namespace: I just wanted to emphasise that when you list a pod the resource type that comes back isPodListand when we created our pod it was justPod. Remember? The other thing I'd like to point out is that all of your pods' information should be listed underitems. Allkubectldoes is to display some of the API's info in a humanly readable way. Deleting NGINX pod pcap:deleting_pod.pcap (use http filter on Wireshark) Behind the scenes, we're just sending an HTTP DELETE to Kubernetes master: Also notice that the pod's name is also included in the URI: /api/v1/namespaces/default/pods/nginx← this is pods' name HTTP DELETEjust likeHTTP GETis pretty straightforward: Our master node replies with HTTP 200 OK as well as some json file with all the info about the pod, including about it's termination: It's also good to emphasise here that when our pod is deleted, master node returns JSON file with all information available about the pod. I highlighted some interesting info. For example, resource type is now just Pod (not PodList when we're just listing our pods).4.6KViews3likes0CommentsContainers: plug-and-play code in DevOps world - part 2
Containers: plug-and-play code in DevOps world - part 2

Related articles:
DevOps Explained to the Layman
Containers: plug-and-play code in DevOps world - part 1

Quick Intro

In part 1, I explained containers at a very high level and mentioned that Docker was the most popular container platform. I also added that containers are tiny isolated environments within the same Linux host, using the same Linux kernel, and that they're so lightweight they pack only the libraries and dependencies that are just enough to get your application running. This is good because even for very distinct applications that require a certain Linux distro to run, or different libraries, there should be no problem at all.

If you're a Linux guy like me, you'd probably want to know that the most popular container platform (Docker) uses dockerd as the front-line daemon:

```
root@albuquerque-docker:~# ps aux | grep dockerd
root       753  0.1  0.4 1865588 74692 ?       Ssl  Mar19   4:39 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
```

This is what we're going to do here:

Running my hello world app in the traditional way (just a simple hello world!)
Containerising my hello world app! (how to pack your application into a Docker image!)
Quick note about image layers (a brief note about Docker image layers!)
Running our containerised app (we run the same hello world app, but now within a Docker container)
Uploading my app to an online Registry and retrieving it (we upload our app to Docker Hub so we can pull it from anywhere)
How do we manage multiple container images? (what do we do if our application is so big that we've got lots of containers?)

Running my hello world app in the traditional way

There is no mystery in running an application (or a component of an application) the traditional way. We've got our physical or virtual machine with an OS installed and we just run it:

```
root@albuquerque-docker:~# cat hello.py
#!/usr/bin/python3
print('hello, Rodrigo!')
root@albuquerque-docker:~# ./hello.py
hello, Rodrigo!
```

Containerising my hello world app!

Here I'm going to show you how you can containerise your application, and it's best if you follow along with me. First, install Docker. Once installed, the command you'll use is always docker <something>, OK? In the DevOps world, things are usually done in a declarative manner, i.e. you tell Docker what you want to do and you don't worry much about the how.
With that in mind, by default we can tell Docker in its default configuration file (Dockerfile) about the application you'd like it to pack (pack = creating an image):

```dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get upgrade -y && apt-get install python3 -y
ADD hello.py /
CMD [ "./hello.py" ]
```

FROM: tells Docker what your base image is (don't worry, it automatically downloads the image from Docker Hub if the image is not installed locally)
RUN: any command typed in here is executed and becomes part of the base image
ADD: copies the source file (hello.py) to a directory you pick inside your container (/ in this case)
CMD: any command typed in here is executed after the container is already running

So, in the above configuration we're telling Docker to build an image that does the following:

Installs Ubuntu Linux as our base image (this is not the whole OS, just the bare minimum)
Updates and upgrades all installed packages, and installs python3
Adds our hello.py script from the current directory to the / directory inside the container
Runs it
Exits, because the only task it had (running our script) has been completed

Now we execute this command to build the image based on our Dockerfile. (Note: notice I didn't specify Dockerfile in the command below - that's because it's the default filename, so I just omitted it.)

```
root@albuquerque-docker:~# docker build -t hello-world-rodrigo .
Sending build context to Docker daemon  607MB
Step 1/4 : FROM ubuntu:latest
 ---> 94e814e2efa8
Step 2/4 : RUN apt-get update && apt-get upgrade -y && apt-get install python3 -y
 ---> Running in a63919569292
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
.
.
<omitted for brevity>
.
Reading state information...
Calculating upgrade...
The following packages will be upgraded:
  apt libapt-pkg5.0 libseccomp2 libsystemd0 libudev1
5 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 2268 kB of archives.
After this operation, 15.4 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libudev1 amd64 237-3ubuntu10.15 [54.2 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libapt-pkg5.0 amd64 1.6.10 [805 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libseccomp2 amd64 2.3.1-2.1ubuntu4.1 [39.1 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 apt amd64 1.6.10 [1165 kB]
Get:5 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libsystemd0 amd64 237-3ubuntu10.15 [205 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 2268 kB in 3s (862 kB/s)
(Reading database ... 4039 files and directories currently installed.)
Preparing to unpack .../libudev1_237-3ubuntu10.15_amd64.deb ...
.
.
<omitted for brevity>
.
Suggested packages:
  python3-doc python3-tk python3-venv python3.6-venv python3.6-doc binutils binfmt-support readline-doc
The following NEW packages will be installed:
  file libexpat1 libmagic-mgc libmagic1 libmpdec2 libpython3-stdlib libpython3.6-minimal libpython3.6-stdlib libreadline7 libsqlite3-0 libssl1.1 mime-support python3 python3-minimal python3.6 python3.6-minimal readline-common xz-utils
0 upgraded, 18 newly installed, 0 to remove and 0 not upgraded.
Need to get 6477 kB of archives.
After this operation, 33.5 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libssl1.1 amd64 1.1.0g-2ubuntu4.3 [1130 kB]
.
.
<omitted for brevity>
.
Setting up libpython3-stdlib:amd64 (3.6.7-1~18.04) ...
Setting up python3 (3.6.7-1~18.04) ...
running python rtupdate hooks for python3.6...
running python post-rtupdate hooks for python3.6...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Removing intermediate container a63919569292
 ---> 6d564b46521d
Step 3/4 : ADD hello.py /
 ---> a936bffc4f17
Step 4/4 : CMD [ "./hello.py" ]
 ---> Running in bea77d51f830
Removing intermediate container bea77d51f830
 ---> e6e4f99ed9f3
Successfully built e6e4f99ed9f3
Successfully tagged hello-world-rodrigo:latest
```

That's it. You've now packed your application into a Docker image! We can now list our images to confirm our image is there:

```
root@albuquerque-docker:~# docker images
REPOSITORY            TAG      IMAGE ID       CREATED         SIZE
hello-world-rodrigo   latest   e6e4f99ed9f3   2 minutes ago   155MB
ubuntu                latest   94e814e2efa8   2 minutes ago   88.9MB
```

Note that the Ubuntu image was also installed, as it is the base image our app runs on.

Quick note about image layers

Notice that Docker uses layers to be more efficient, and they're reused among containers on the same host:

```
root@albuquerque-docker:~# docker inspect hello-world-rodrigo | grep Layers -A 8
            "Layers": [
                "sha256:762d8e1a60542b83df67c13ec0d75517e5104dee84d8aa7fe5401113f89854d9",
                "sha256:e45cfbc98a505924878945fdb23138b8be5d2fbe8836c6a5ab1ac31afd28aa69",
                "sha256:d60e01b37e74f12aa90456c74e161f3a3e7c690b056c2974407c9e1f4c51d25b",
                "sha256:b57c79f4a9f3f7e87b38c17ab61a55428d3391e417acaa5f2f761c0e7e3af409",
                "sha256:51bedea20e25171f7a6fb32fdba24cce322be0d1a68eab7e149f5a7ee320290d",
                "sha256:b4cfcee2534584d181cbedbf25a5e9daa742a6306c207aec31fc3a8197606565"
            ]
        },
```

You can think of layers roughly like this: the first layer is the bare-bones base OS, for example; the second one would be a subsequent modification (e.g. installing python3), and so on. The idea is to share layers (read-only) among different containers so we don't need to create a copy of the same layer. Just make sure you understand that what we're sharing here is a read-only image. Anything you write on top of that, Docker creates another layer for! That's the magic!

Running our Containerised App

Lastly, we run our image with our hello world app:

```
root@albuquerque-docker:~# docker run hello-world-rodrigo
hello, Rodrigo!
root@albuquerque-docker:~#
```

As I said before, our container exited, and that's because the only task assigned to the container was to run our script, so by default it exits. We can confirm there is no container running:

```
root@albuquerque-docker:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
root@albuquerque-docker:~#
```

If you want to run it in daemon mode, there is an option called -d, but you're only supposed to use this option if you're really going to run a daemon. Let me use NGINX, because our hello-world image is not suitable for daemon mode:

```
root@albuquerque-docker:~# docker run -d nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
f7e2b70d04ae: Pull complete
08dd01e3f3ac: Pull complete
d9ef3a1eb792: Pull complete
Digest: sha256:98efe605f61725fd817ea69521b0eeb32bef007af0e3d0aeb6258c6e6fe7fc1a
Status: Downloaded newer image for nginx:latest
c97d363a1cc2bf578d62e57ec677bca69f27746974b9d5a49dccffd17dd75a1c
```

Yes, you can just issue the docker run command and it will download the image and run the container for you.
Let's just confirm our container didn't exit and is still there:

```
root@albuquerque-docker:~# docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS    NAMES
c97d363a1cc2   nginx   "nginx -g 'daemon of…"   5 seconds ago   Up 4 seconds   80/tcp   amazing_lalande
```

Let's confirm we can reach NGINX inside the container. First we check the container's locally assigned IP address:

```
root@albuquerque-docker:~# docker inspect c97d363a1cc2 | grep IPAdd
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.2",
                    "IPAddress": "172.17.0.2",
```

Now we confirm we have NGINX running inside the Docker container:

```
root@albuquerque-docker:~# curl http://172.17.0.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```

At the moment, my NGINX server is not reachable outside of my host machine (172.16.199.57 is the external IP address of our container's host machine):

```
rodrigo@ubuntu:~$ curl http://172.16.199.57
curl: (7) Failed to connect to 172.16.199.57 port 80: Connection refused
```

To solve this, just add the -p flag like this: -p <port host will listen on for external connections>:<port our container is listening on>. Let's delete our NGINX container first:

```
root@albuquerque-docker:~# docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS    NAMES
c97d363a1cc2   nginx   "nginx -g 'daemon of…"   8 minutes ago   Up 8 minutes   80/tcp   amazing_lalande
root@albuquerque-docker:~# docker rm c97d363a1cc2
Error response from daemon: You cannot remove a running container c97d363a1cc2bf578d62e57ec677bca69f27746974b9d5a49dccffd17dd75a1c. Stop the container before attempting removal or force remove
root@albuquerque-docker:~# docker stop c97d363a1cc2
c97d363a1cc2
root@albuquerque-docker:~# docker rm c97d363a1cc2
c97d363a1cc2
root@albuquerque-docker:~# docker run -d -p 80:80 nginx
a8b0454bae36e52f3bdafe4d21eea2f257895c9ea7ca93542b760d7ef89bdd7f
root@albuquerque-docker:~# docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS                NAMES
a8b0454bae36   nginx   "nginx -g 'daemon of…"   7 seconds ago   Up 5 seconds   0.0.0.0:80->80/tcp   thirsty_poitras
```

Now, let me reach it from an external host:

```
rodrigo@ubuntu:~$ curl http://172.16.199.57
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```

Uploading my App to an online Registry and Retrieving it

You can also upload your containerised application to an online registry such as Docker Hub, which lets you create one private repository for free, using the docker push command. We first need to create an account on Docker Hub and then a repository. Because my username is digofarias, my hello-world-rodrigo image will actually have to be named locally as digofarias/hello-world-rodrigo.
Let's list our images:

```
root@albuquerque-docker:~# docker images
REPOSITORY            TAG      IMAGE ID       CREATED         SIZE
ubuntu                latest   94e814e2efa8   8 minutes ago   88.9MB
hello-world-rodrigo   latest   e6e4f99ed9f3   8 minutes ago   155MB
```

If I upload the image this way it won't work, so I need to rename it to digofarias/hello-world-rodrigo like this:

```
root@albuquerque-docker:~# docker tag hello-world-rodrigo:latest digofarias/hello-world-rodrigo:latest
root@albuquerque-docker:~# docker images
REPOSITORY                       TAG      IMAGE ID       CREATED         SIZE
hello-world-rodrigo              latest   e6e4f99ed9f3   9 minutes ago   155MB
digofarias/hello-world-rodrigo   latest   e6e4f99ed9f3   9 minutes ago   155MB
```

We can now log in to our newly created account:

```
root@albuquerque-docker:~# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: digofarias
Password: **********
WARNING! Your password will be stored unencrypted in /home/rodrigo/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
```

Lastly, we push our code to Docker Hub:

```
root@albuquerque-docker:~# docker push digofarias/hello-world-rodrigo
The push refers to repository [docker.io/digofarias/hello-world-rodrigo]
b4cfcee25345: Pushed
51bedea20e25: Pushed
b57c79f4a9f3: Pushed
d60e01b37e74: Pushed
e45cfbc98a50: Pushed
762d8e1a6054: Pushed
latest: digest: sha256:b69a5fd119c8e9171665231a0c1b40ebe98fd79457ede93f45d63ec1b17e60b8 size: 1569
```

If you go to any other machine connected to the Internet with Docker installed, you can run my hello-world app:

```
root@albuquerque-docker:~# docker run digofarias/hello-world-rodrigo
hello, Rodrigo!
```

You don't need to worry about dependencies or anything else. If it worked properly on your machine, it should also work anywhere else, as the environment inside the container should be the same. In the real world, you'd probably be uploading just a component of your code, and your real application could be comprised of lots of containers that can potentially communicate with each other via an API.

How do we manage multiple container images?

Remember that in the real world we might need to create multiple components, each inside its own container, and we'll have an ecosystem of containers that eventually makes up our application or service. As I said in part 1, in order to manage this ecosystem we typically use a container orchestrator. Currently there are a couple of them, like Docker Swarm, but Kubernetes is the most popular one. Kubernetes is a topic for a whole new article (or many articles), but you typically declare your container images in a Kubernetes deployment file and it downloads, installs, runs and monitors the whole ecosystem (i.e. your application) for you. Just remember that a container is typically just one component of your application that communicates with other components/containers via an API, as in the short sketch below.
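As a hedged taste of what that looks like - assuming a Kubernetes cluster and kubectl are already set up, none of which is covered in this article - the imperative equivalents of a deployment file for the NGINX image we used earlier would be:

```
$ kubectl create deployment nginx --image=nginx                   # declare the image to run
$ kubectl scale deployment nginx --replicas=3                     # keep 3 copies running
$ kubectl expose deployment nginx --port=80 --type=LoadBalancer   # reach it from outside
```

Kubernetes then monitors those 3 replicas and recreates them if they die - that's the orchestration part.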