Restnoded keeps core dumping when running the AS3 container
Following the documentation, I ran the command below:

docker run --name as3_container --rm -d -p 8443:443 -p 8080:80 f5devcentral/f5-as3-container:latest

The restnoded process keeps restarting and leaving core dumps:

[root@k8s-node1 ~] docker exec -it 715941f037b07d01a5fbb3fe990540a6b73627d9d1ce9198614ed3dfe828888b /bin/bash
bash-4.4 cd /etc/service/restnoded/
bash-4.4 ls
core.13042 core.13189 core.13220 core.13240 core.13270 core.13291 core.13328 core.13349 core.13396 core.13420 core.13443 core.13503 core.13525 finish run
core.13170 core.13206 core.13230 core.13253 core.13280 core.13314 core.13339 core.13364 core.13410 core.13433 core.13491 core.13515 core.13542 log supervise

Logs of the container:

bash-4.4 cd /var/log
bash-4.4 ls -lrt
total 76
drwxrwxrwx 2 root root     6 Jul 25 16:14 restnoded
-rw-r--r-- 1 root root   136 Nov 28 02:16 restjavad.out
-rw-r--r-- 1 root root     0 Nov 28 02:16 restjavad.0.log.lck
-rw-r--r-- 1 root root  2479 Nov 28 02:16 restjavad-api-usage.json
-rw-r--r-- 1 root root 58719 Nov 28 02:16 restjavad.0.log
-rw-r--r-- 1 root root  1342 Nov 28 02:17 restjavad-gc.log.0.current
-rw-r--r-- 1 root root    79 Nov 28 02:17 restnoded.out
bash-4.4 tail -f restnoded.out
lowering process privileges to: root/root, groups:0,0,1,2,3,4,6,10,11,20,26,27
^C
bash-4.4 ls restnoded/
bash-4.4 tail -f restjavad-gc.log.0.current
2018-11-28T02:16:51.923+0000: 8.186: [GC 40516K->14918K(95040K), 0.0107790 secs]
2018-11-28T02:16:56.864+0000: 13.127: [GC 41158K->14976K(95040K), 0.0038870 secs]
2018-11-28T02:17:02.518+0000: 18.781: [GC 41216K->14982K(95040K), 0.0028530 secs]
2018-11-28T02:17:08.314+0000: 24.577: [GC 41222K->14957K(95040K), 0.0038880 secs]
2018-11-28T02:17:14.038+0000: 30.302: [GC 41197K->15010K(95040K), 0.0028630 secs]
2018-11-28T02:17:18.423+0000: 34.686: [GC 41250K->14592K(95040K), 0.0033240 secs]
2018-11-28T02:17:24.217+0000: 40.480: [GC 40832K->14486K(95040K), 0.0028170 secs]
2018-11-28T02:17:29.925+0000: 46.188: [GC 40726K->14569K(95040K), 0.0035090 secs]
2018-11-28T02:17:35.722+0000: 51.985: [GC 40809K->14492K(95040K), 0.0028700 secs]
2018-11-28T02:17:41.398+0000: 57.662: [GC 40732K->14561K(95040K), 0.0029170 secs]
2018-11-28T02:17:46.786+0000: 63.049: [GC 40801K->14345K(95040K), 0.0029630 secs]
2018-11-28T02:17:52.369+0000: 68.632: [GC 40585K->14375K(95040K), 0.0027940 secs]

Going back into the container and re-checking /var/log shows the same listing and the same restnoded.out and GC log output.
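Not part of the original post, but a quick way to triage one of those cores from inside the container is sketched below; it assumes the file and gdb utilities can be installed in the image, and the node binary path is a guess:

bash-4.4 file core.13042                              # confirm which binary produced the core
bash-4.4 gdb /usr/bin/node core.13042 -batch -ex bt   # print a backtrace (binary path assumed)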
Modern Application Architecture - Cloud-Native Architecture Platform - Part 1 of 3

Overview

In this multi-part series of articles, I will share how to leverage F5's BIG-IP (BIG-IP), Aspen Mesh service mesh and the NGINX ingress controller to create a cloud-agnostic, resilient and secure cloud-native architecture platform to support your cloud-native application requirements. Cloud-native is a term used to describe container-based environments. Microservices is an architectural pattern/approach/style where applications are structured as multiple loosely coupled, independent services delivered in a containerized form factor. Hence, for simplicity, in this series of articles, cloud-native architecture platform and microservices architecture platform (cPaaS) are used interchangeably.

Note: Although BIG-IP is not in the category of cloud-native apps (in comparison with F5's Service Proxy for Kubernetes (SPK), which is cloud-native), BIG-IP is currently feature rich and plays a key role in this reference architecture pattern. For existing customers who have BIG-IP, this could be a first step in an organic transition from existing BIG-IP to cloud-native SPK.

Part 1 – Cloud-Native Architecture Platform
- Formulate a cloud-agnostic architecture pattern.
- Architect/build a Kubernetes platform for development (based on k3d with k3s).
- Architect and integrate the key technologies for this pattern:
  - BIG-IP
  - Aspen Mesh Service Mesh + Jaeger distributed tracing
  - NGINX Ingress Controller
  - Container Ingress Services (CIS)
  - Application Services 3 (AS3)
  - Grafana/Prometheus monitoring

Part 2 – Traffic Management, Security and Observability
- Establish a common ingress/egress architecture pattern:
  - for HTTP-based applications (e.g., HTTP/HTTP2 web applications)
  - for non-HTTP (TCP/UDP) applications (e.g., MQTT)
- Uplift cloud-native app protection with a Web Application Firewall.
- Aspen Mesh Service Mesh: Bookinfo app, Httpbin app
- NGINX Ingress Controller: Grafana app
- Grafana and Prometheus monitoring for Aspen Mesh and NGINX

Part 3 – Unified Authentication (AuthN) and Authorization (AuthZ) for cloud-native apps
- OAuth authentication (Azure AD and Google)
- Legacy Windows AD authentication

Why a cloud-native architecture platform?

The proliferation of Internet-based applications and software, and their usage on mobile devices, has grown substantially over the years. It is no longer a prediction; it is a fact. According to the 2021 Global Digital suite of reports from "We Are Social" and "Hootsuite", there are over 5 billion unique mobile users and over 4 billion users actively connected to the Internet (https://www.cncf.io excluded; this also excludes connected devices such as the Internet of Things and the servers that power the Internet). With COVID-19 and the rise of 5G rollouts, edge and cloud computing, connected technologies became even more important and more a part of people's lives. As the saying goes, "Applications/software power the Internet, and the Internet is the backbone of the world economy."
Today, business leaders require their IT and digital transformation teams to be more innovative by supporting the creation of business-enabling applications. This means those teams are no longer responsible just for the availability of networks and servers, but also for building a robust platform to support software development and application delivery that is secure, reliable and innovative. To support that vision, organizations need a robust platform that supports and delivers an application portfolio able to support the business. Because a strong application portfolio is crucial to the success of the business and increases market value, the IT or digital transformation team may need to ask: "What can we do to embrace and support the proliferation of applications, empower those with creative leadership, foster innovative development, and ultimately help create market value?"

A robust and secure cloud-native platform for modern application architecture, with frictionless consumption of application services, is part of the requirement for success. As of this writing (April 2021), cloud-native/microservices architecture is the architecture pattern of choice for modern developers, and Kubernetes is the industry de facto standard for microservices/container orchestration.

What is the GOAL of this series of articles?

Strategize, formulate and build a common, resilient and scalable cloud-native reference architecture and Platform as a Service to handle modern application workloads. This architecture pattern is modular and cloud-agnostic, and delivers consistent security and application services. To establish the reference architecture, we leverage an open source upstream Kubernetes platform on a single Linux VM with a multitude of open source and commercial tools, and integrate it with F5's BIG-IP as the unified ingress/egress and unified access to cloud-native applications hosted on the following types of workloads:

- Service mesh workloads
- Non-service-mesh workloads
- TCP/UDP workloads

Note: We could leverage F5's Service Proxy for Kubernetes (SPK) as the unified ingress/egress; however, F5's BIG-IP is used in this article. You can skip the steps for building the Kubernetes cluster if you already have an existing multi-node Kubernetes cluster, minikube, or any public-cloud-hosted Kubernetes (e.g. EKS/AKS/GKE).

Requirements

- 1 x Ubuntu VM (ensure you have a working Ubuntu 20.x with Docker installed)
- vCPU: 8 (can run with 4 or 6 vCPU with reduced functionality)
- HDD: ideally 80G (only required for persistent storage; can run with 40G — update persistent volume sizes appropriately)

Modern Application Architecture (cPaaS) - Reference Architecture

BIG-IP - Service Proxy
- Central ingress and/or egress for cloud-native workloads.
- For applications deployed in service mesh namespaces, the F5 service proxy proxies ingress traffic to the Aspen Mesh ingressgateway.
- For applications deployed in non-service-mesh namespaces, the F5 service proxy proxies ingress traffic to the NGINX ingress controller.
- For applications that require bypassing the ingress (e.g. TCP/UDP apps), the F5 service proxy proxies directly to those pod IPs.
- The F5 service proxy provides centralized security protection by enforcing a Web Application and API Protection (WAAP) firewall policy on cloud-native workloads.
- The F5 service proxy provides SSL inspection (SSL bridge and/or offload) to the Aspen Mesh ingressgateway and/or the NGINX ingress controller.
- The F5 service proxy can be deployed to send traffic to multiple Kubernetes clusters - for inter- and/or intra-cluster resiliency.
- Global Server Load Balancing (F5 DNS) can be enabled on the F5 service proxy to provide geo-redundancy for multi-cloud workloads.
- The F5 service proxy acts as unified access management with F5's Access Policy Manager (APM). Cloud-native applications can delegate AuthN to the F5 service proxy (multiple AuthN mechanisms such as OIDC/OAuth/NTLM/SAML and so on) while the cloud-native application performs AuthZ.
- The F5 service proxy ingress only needs to be set up once. Cloud-native app FQDNs are all mapped to the same ingress.

Aspen Mesh Service Mesh
- Centralized ingress for service mesh namespaces.
- Enterprise-ready, hardened and fully supported Istio-based service mesh from F5.
- Provides all the capabilities delivered by Istio (Connect, Secure, Control and Observe).
- Provides traffic management and security for East-West communication.
- Reduces the operational complexity of managing a service mesh:
  - Aspen Mesh Rapid Resolve / MTTR (Mean Time To Resolution) - quickly detect and identify the causes of cluster and application errors.
  - Service and health indicator graphs for service visibility and observability.
  - Istio Vet
- Enhanced security:
  - Secure by default with a zero trust policy
  - Secure ingress
  - Enhanced RBAC
- Carrier-grade features:
  - Aspen Mesh Packet Inspector

NGINX Ingress Controller
- Centralized ingress for non-service-mesh namespaces.
- Works with both NGINX and NGINX Plus and supports the standard ingress features - content-based routing and TLS/SSL termination.
- Supports load balancing WebSocket, gRPC, TCP and UDP applications.

Container Ingress Services (CIS)
- Works with a container orchestration environment (e.g. Kubernetes) to dynamically create L4/L7 services on BIG-IP and load balance network traffic across those services. It monitors the orchestration API server (e.g. the lifecycle of Kubernetes pods) and dynamically updates the BIG-IP configuration based on changes made to the containerized application. In this setup, it monitors the Aspen Mesh ingressgateway, the NGINX ingress controller and the TCP/UDP-based apps, and dynamically updates the BIG-IP configuration.

AS3
- Application Services 3 Extension is a flexible, low-overhead mechanism for managing application-specific configuration on a BIG-IP system, leveraging a declarative model with a single JSON declaration (a minimal example declaration is sketched below).
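For illustration only (not from the original article), a minimal AS3 declaration POSTed to BIG-IP's AS3 endpoint might look like the sketch below; the credentials, management address, tenant name, virtual address and pool member IP are all placeholders:

curl -sku admin:admin -H "Content-Type: application/json" \
  -X POST https://<bigip-mgmt>/mgmt/shared/appsvcs/declare -d '{
  "class": "ADC",
  "schemaVersion": "3.0.0",
  "myTenant": {
    "class": "Tenant",
    "myApp": {
      "class": "Application",
      "template": "http",
      "serviceMain": {
        "class": "Service_HTTP",
        "virtualAddresses": ["10.10.2.100"],
        "pool": "myPool"
      },
      "myPool": {
        "class": "Pool",
        "members": [{ "servicePort": 80, "serverAddresses": ["10.53.68.200"] }]
      }
    }
  }
}'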
High Resiliency Cloud-Native Apps

The reference architecture above can be treated as an "atomic" unit or "repeatable pattern". This "atomic" unit can be deployed in multiple public clouds (e.g. EKS/AKS/GKE and so on) or private clouds, and multiple "atomic" units can be combined to form highly resilient service clusters. F5 DNS/GSLB can be deployed to monitor the health of each individual cloud-native app inside each "atomic" cluster and dynamically redirect users to a healthy app. Each cluster can run active-active, and applications can be distributed across both clusters. This is how applications achieve high resiliency with F5 DNS.

Multi-Cloud, Multi-Cluster Service Resiliency

This is a conceptual view of how an "atomic" unit / cPaaS can be deployed in multiple clouds, and how those clusters can be combined to form a service resiliency mesh by leveraging F5 DNS and F5 BIG-IP.

Note: The subsequent sections are a hands-on guide to building the reference architecture described above (the "atomic" unit), with the exception of the multi-cloud, multi-cluster service resiliency mesh. k3d + k3s are used for the sole purpose of development and testing.

Conceptual architecture for this setup

Note: The following instructions serve as a quick-start guide. Please refer to the respective installation guides for details. Scripts used in this setup can be found on GitHub.

Install Docker

sudo apt-get update
sudo apt-get -y install \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg-agent \
  software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
sudo apt-get update -y
sudo apt-get install docker-ce=5:19.03.15~3-0~ubuntu-focal docker-ce-cli=5:19.03.15~3-0~ubuntu-focal -y

fbchan@sky:~$ docker ps
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES

fbchan@sky:~$ sudo systemctl enable --now docker.service

Install Helm

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Install the calicoctl binary

curl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.15.0/calicoctl
chmod u+x calicoctl
sudo mv calicoctl /usr/local/bin/

Install the kubectl binary

curl -LO https://dl.k8s.io/release/v1.19.9/bin/linux/amd64/kubectl
chmod u+x kubectl
sudo mv kubectl /usr/local/bin

Install supporting tools

sudo apt install jq -y
sudo apt install net-tools -y

Install k9s

This component is optional. It is a terminal-based UI to interact with Kubernetes clusters.

wget https://github.com/derailed/k9s/releases/download/v0.24.2/k9s_Linux_x86_64.tar.gz
tar zxvf k9s_Linux_x86_64.tar.gz
sudo mv k9s /usr/local/bin/

Ensure the Linux volume group is expanded

Depending on your setup, a default Ubuntu 20.x VM may not have expanded the logical volume across all allocated disk space. The following expands it:

fbchan@sky:~$ sudo lvm
lvm> lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
  Size of logical volume ubuntu-vg/ubuntu-lv changed from 39.50 GiB (10112 extents) to <79.00 GiB (20223 extents).
  Logical volume ubuntu-vg/ubuntu-lv successfully resized.
lvm> quit
Exiting.
fbchan@sky:~$ sudo resize2fs /dev/ubuntu-vg/ubuntu-lv
resize2fs 1.45.5 (07-Jan-2020)
Filesystem at /dev/ubuntu-vg/ubuntu-lv is mounted on /; on-line resizing required
old_desc_blocks = 5, new_desc_blocks = 10
The filesystem on /dev/ubuntu-vg/ubuntu-lv is now 20708352 (4k) blocks long.

fbchan@sky:~$ df -kh
Filesystem                         Size  Used Avail Use% Mounted on
udev                               7.8G     0  7.8G   0% /dev
tmpfs                              1.6G  1.2M  1.6G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   78G  7.1G   67G  10% /
..

Disable the Ubuntu firewall

sudo ufw disable
sudo apt-get remove ufw -y

Ubuntu VM

fbchan@sky:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:6c:ab:0b brd ff:ff:ff:ff:ff:ff
    inet 10.10.2.10/24 brd 10.10.2.255 scope global ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe6c:ab0b/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:4c:15:2e:1e brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

fbchan@sky:~$ ip r
default via 10.10.2.1 dev ens160 proto static
10.10.2.0/24 dev ens160 proto kernel scope link src 10.10.2.10
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown

Install k3d + k3s
k3d in a nutshell: k3d is a lightweight wrapper to run k3s (Rancher Labs' minimal Kubernetes distribution) in Docker. k3d makes it very easy to create single- and multi-node k3s clusters in Docker, e.g. for local development on Kubernetes. Please refer to the k3d documentation for details.

Install k3d

wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v4.2.0 bash

Create the k3s cluster

- Spin up 1 x server/master and 3 x agent/worker nodes.
- Disable traefik and the service load balancer; we don't need them, since we are leveraging BIG-IP as the unified ingress/egress.
- Replace the default flannel CNI with the Calico CNI.
- Set up a TLS SAN certificate so that we can access the k3s API remotely.

k3d cluster create cpaas1 --image docker.io/rancher/k3s:v1.19.9-k3s1 \
  --k3s-server-arg "--disable=servicelb" \
  --k3s-server-arg "--disable=traefik" \
  --k3s-server-arg --tls-san="10.10.2.10" \
  --k3s-server-arg --tls-san="k3s.foobz.com.au" \
  --k3s-server-arg '--flannel-backend=none' \
  --volume "$(pwd)/calico-k3d.yaml:/var/lib/rancher/k3s/server/manifests/calico.yaml" \
  --no-lb --servers 1 --agents 3

### Run the above command or the cluster-create.sh script provided ###
######################################################################
fbchan@sky:~/Part-1$ ./cluster-create.sh
WARN[0000] No node filter specified
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-cpaas1'
INFO[0000] Created volume 'k3d-cpaas1-images'
INFO[0001] Creating node 'k3d-cpaas1-server-0'
INFO[0001] Creating node 'k3d-cpaas1-agent-0'
INFO[0001] Creating node 'k3d-cpaas1-agent-1'
INFO[0001] Creating node 'k3d-cpaas1-agent-2'
INFO[0001] Starting cluster 'cpaas1'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-cpaas1-server-0'
INFO[0014] Starting agents...
INFO[0014] Starting Node 'k3d-cpaas1-agent-0'
INFO[0024] Starting Node 'k3d-cpaas1-agent-1'
INFO[0034] Starting Node 'k3d-cpaas1-agent-2'
INFO[0045] Starting helpers...
INFO[0045] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0052] Successfully added host record to /etc/hosts in 4/4 nodes and to the CoreDNS ConfigMap
INFO[0052] Cluster 'cpaas1' created successfully!
INFO[0052] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0052] You can now use it like this:
kubectl config use-context k3d-cpaas1
kubectl cluster-info

### k3d spun up a multi-node Kubernetes cluster using Docker ###
################################################################
fbchan@sky:~/Part-1$ docker ps
CONTAINER ID  IMAGE                     COMMAND                 CREATED             STATUS             PORTS                    NAMES
2cf40dca2b0a  rancher/k3s:v1.19.9-k3s1  "/bin/k3s agent"        About a minute ago  Up 52 seconds                               k3d-cpaas1-agent-2
d5c49bb65b1a  rancher/k3s:v1.19.9-k3s1  "/bin/k3s agent"        About a minute ago  Up About a minute                           k3d-cpaas1-agent-1
6e5bb6119b61  rancher/k3s:v1.19.9-k3s1  "/bin/k3s agent"        About a minute ago  Up About a minute                           k3d-cpaas1-agent-0
ea154b36e00b  rancher/k3s:v1.19.9-k3s1  "/bin/k3s server --d…"  About a minute ago  Up About a minute  0.0.0.0:37371->6443/tcp  k3d-cpaas1-server-0

### All Kubernetes pods are in the Running state ###
####################################################
fbchan@sky:~/Part-1$ kubectl get pod -A
NAMESPACE    NAME                                      READY  STATUS   RESTARTS  AGE
kube-system  calico-node-95gqb                         1/1    Running  0         5m11s
kube-system  calico-node-fdg9f                         1/1    Running  0         5m11s
kube-system  calico-node-klwlq                         1/1    Running  0         5m6s
kube-system  local-path-provisioner-7ff9579c6-mf85f    1/1    Running  0         5m11s
kube-system  metrics-server-7b4f8b595-7z9vk            1/1    Running  0         5m11s
kube-system  coredns-66c464876b-hjblc                  1/1    Running  0         5m11s
kube-system  calico-node-shvs5                         1/1    Running  0         4m56s
kube-system  calico-kube-controllers-5dc5c9f744-7j6gb  1/1    Running  0         5m11s

Set up Calico on Kubernetes

For details please refer to another DevCentral article. Note: You do not need to set up Calico for Kubernetes in EKS, AKS (Azure CNI with advanced networking mode) or GKE deployments; the cloud provider's managed Kubernetes underlay provides the required connectivity from BIG-IP to the Kubernetes pods.

sudo mkdir /etc/calico
sudo vi /etc/calico/calicoctl.cfg

Content of calicoctl.cfg (replace /home/xxxx/.kube/config with the location of your kubeconfig file):

apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/home/xxxx/.kube/config"

Apply the BGP configuration and BGP peer manifests (example manifests are sketched below):

fbchan@sky:~/Part-1$ sudo calicoctl create -f 01-bgpconfig.yml
Successfully created 1 'BGPConfiguration' resource(s)
fbchan@sky:~/Part-1$ sudo calicoctl create -f 02-bgp-peer.yml
Successfully created 1 'BGPPeer' resource(s)
fbchan@sky:~/Part-1$ sudo calicoctl get node -o wide
NAME                  ASN      IPV4           IPV6
k3d-cpaas1-agent-1    (64512)  172.19.0.4/16
k3d-cpaas1-server-0   (64512)  172.19.0.2/16
k3d-cpaas1-agent-2    (64512)  172.19.0.5/16
k3d-cpaas1-agent-0    (64512)  172.19.0.3/16
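The contents of 01-bgpconfig.yml and 02-bgp-peer.yml are not shown in the original article. A plausible minimal reconstruction, assuming standard Calico BGPConfiguration/BGPPeer resources with the AS number used above, might look like the following; the peer IP is a placeholder for the BIG-IP address reachable from the Calico nodes:

cat > 01-bgpconfig.yml <<'EOF'
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: true
  asNumber: 64512
EOF

cat > 02-bgp-peer.yml <<'EOF'
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bigip-peer
spec:
  peerIP: 10.10.2.1          # placeholder: BIG-IP self-IP in your environment
  asNumber: 64512
EOF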
On BIG-IP: set up BGP peering with Calico

Ensure you have enabled advanced networking on BIG-IP (Network >> Route Domains >> 0; under "Dynamic Routing Protocol", enable BGP).

[root@mel-prod:Active:Standalone] config # imish
mel-prod.foobz.com.au[0]>en
mel-prod.foobz.com.au[0]#config t
Enter configuration commands, one per line.  End with CNTL/Z.
mel-prod.foobz.com.au[0](config)#router bgp 64512
mel-prod.foobz.com.au[0](config-router)#bgp graceful-restart restart-time 120
mel-prod.foobz.com.au[0](config-router)#neighbor calico-k8s peer-group
mel-prod.foobz.com.au[0](config-router)#neighbor calico-k8s remote-as 64512
mel-prod.foobz.com.au[0](config-router)#neighbor 172.19.0.2 peer-group calico-k8s
mel-prod.foobz.com.au[0](config-router)#neighbor 172.19.0.3 peer-group calico-k8s
mel-prod.foobz.com.au[0](config-router)#neighbor 172.19.0.4 peer-group calico-k8s
mel-prod.foobz.com.au[0](config-router)#neighbor 172.19.0.5 peer-group calico-k8s
mel-prod.foobz.com.au[0](config-router)#wr
Building configuration...
[OK]
mel-prod.foobz.com.au[0](config-router)#end
mel-prod.foobz.com.au[0]#show running-config
!
no service password-encryption
!
router bgp 64512
 bgp graceful-restart restart-time 120
 neighbor calico-k8s peer-group
 neighbor calico-k8s remote-as 64512
 neighbor 172.19.0.2 peer-group calico-k8s
 neighbor 172.19.0.3 peer-group calico-k8s
 neighbor 172.19.0.4 peer-group calico-k8s
 neighbor 172.19.0.5 peer-group calico-k8s
!
line con 0
 login
line vty 0 39
 login
!
end

Validate that the Calico pod networks are advertised to BIG-IP via BGP

The Calico pod network routes are advertised onto the BIG-IP routing table. Because BIG-IP routes every pod network to the single Ubuntu VM (10.10.2.10), we need to ensure the Ubuntu VM routes each pod network on to the right Docker container (agent/worker node). In an environment where each master/worker runs on a dedicated VM or physical host with its own IP, BIG-IP's BGP delivers traffic to the designated host directly; the following is only required for this setup, where all Kubernetes nodes run on the same VM.

Based on my environment, here are the additional routes I need to add on my Ubuntu VM:

fbchan@sky:~/Part-1$ sudo ip route add 10.53.68.192/26 via 172.19.0.4
fbchan@sky:~/Part-1$ sudo ip route add 10.53.86.64/26 via 172.19.0.3
fbchan@sky:~/Part-1$ sudo ip route add 10.53.115.0/26 via 172.19.0.5
fbchan@sky:~/Part-1$ sudo ip route add 10.53.194.192/26 via 172.19.0.2

If everything is working properly, you should be able to ping the Kubernetes pod IPs directly from BIG-IP. You can find those pod IPs via 'kubectl get pod -A -o wide'.

root@(mel-prod)(cfg-sync Standalone)(Active)(/Common)(tmos)# ping -c 2 10.53.86.66
PING 10.53.86.66 (10.53.86.66) 56(84) bytes of data.
64 bytes from 10.53.86.66: icmp_seq=1 ttl=62 time=1.59 ms
64 bytes from 10.53.86.66: icmp_seq=2 ttl=62 time=1.33 ms

--- 10.53.86.66 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 1.336/1.463/1.591/0.133 ms

root@(mel-prod)(cfg-sync Standalone)(Active)(/Common)(tmos)# ping -c 2 10.53.86.65
PING 10.53.86.65 (10.53.86.65) 56(84) bytes of data.
64 bytes from 10.53.86.65: icmp_seq=1 ttl=62 time=1.03 ms
64 bytes from 10.53.86.65: icmp_seq=2 ttl=62 time=24.5 ms

--- 10.53.86.65 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.036/12.786/24.537/11.751 ms

Note: Do not persist those Linux routes on the VM. The route distribution changes when you reboot or restart the VM, so you need to query the new route distribution and re-create the Linux routes after every reboot (a helper sketch follows below).
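The re-creation can be scripted. Below is a hypothetical helper (not from the original article) that rebuilds the per-node routes; it assumes Calico's BlockAffinity CRD is present and that the k3d node container names match the Kubernetes node names, as they do in this setup (jq was installed earlier):

for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  # k3d node container name == Kubernetes node name, so look up its Docker IP
  ip=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$node")
  # re-create a route for every Calico IPAM block affined to this node
  for cidr in $(kubectl get blockaffinities -o json \
      | jq -r ".items[] | select(.spec.node==\"$node\") | .spec.cidr"); do
    sudo ip route replace "$cidr" via "$ip"
  done
done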
Summary of Part 1

What we achieved so far:
- A basic understanding of why a cloud-native architecture platform is so important.
- Established a cloud-agnostic and cloud-native reference architecture, and an understanding of the key components and their roles.
- A working environment for the Part 2 series - Traffic Management, Security and Observability.

When cloud-native meets monolithic
According to CNCF's Cloud Native Survey 2020, published on 17 Nov 2020, container use in production jumped 300% from the first survey in 2016; in 2020 alone it increased to 92%, from 84% in 2019 (https://www.cncf.io/cncf-cloud-native-survey-2020). In addition, according to F5's 2020 State of Application Services Report (https://www.f5.com/state-of-application-services-report#get-the-report), 80% of organisations are executing on digital transformation, and those organisations are more likely to deploy modern app architectures and app services at a higher rate. So cloud-native modern application architecture is gaining great momentum, with industries and the majority of organisations embracing and pivoting toward cloud-native technologies. Cloud-native provides a multitude of benefits - which is not the subject of this article. F5's BIG-IP (a.k.a. classic BIG-IP) is not cloud-native. How can F5's classic BIG-IP be relevant in the cloud-native world? This article demonstrates how cloud-native meets - and needs - classic BIG-IP (monolithic).

FYI: F5's BIG-IP SPK (Service Proxy for Kubernetes) is BIG-IP delivered in a containerized form factor. It is cloud-native (https://www.f5.com/products/service-proxy-for-kubernetes). BIG-IP SPK will be discussed in a future article.

How do they need each other?

Typically, it takes years for an organisation embracing cloud-native to move onto fully cloud-native technologies and infrastructure. There are use cases where modern cloud-native applications need to integrate with traditional or existing monolithic applications, and modern apps living alongside traditional apps and infrastructure is common for most enterprises. F5's classic BIG-IP can bridge those gaps. This article covers use cases we solved in one customer environment, where we leveraged classic BIG-IP to bridge the gap between cloud-native and monolithic apps.

First, let's be clear on what cloud-native really means. To set the record straight, cloud-native doesn't just mean running workloads in the cloud, although that is partially true. There are many definitions and perspectives on what cloud-native really means. For the sake of this article, I will use the official CNCF (Cloud Native Computing Foundation) definition: "Cloud native technologies empower organisations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach. These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil."

My takeaways (characteristics of cloud-native):
- Scalable apps designed to run in dynamic environments (public, private and hybrid clouds).
- Typically delivered in the form of microservices/containers as loosely coupled systems.
- Easily adapted to and integrated into automation systems.
- CI/CD as part of the ecosystem - frequent release, patch and update cycles.
- Immutable ("cattle" service model) instead of mutable ("pets" service model).
- Declarative APIs.

Kubernetes is one example of cloud-native technologies.

What is the problem statement?

Uniquely identify apps/workloads/containers deployed on a Kubernetes platform, and apply appropriate external security controls (e.g. a network/application firewall) for containerized apps when they communicate with existing legacy applications deployed outside of Kubernetes (egress from Kubernetes).

What are the challenges?
- Containers egressing from Kubernetes are, by design, source-network-address-translated (SNAT) from the Kubernetes nodes. An external security control system such as a network firewall may not be able to identify the authorised source app, as it is hidden behind NATing.
- How do we ensure that only authorised apps deployed in the Kubernetes environment are allowed to access critical legacy apps (e.g. billing or financial systems) protected by a network/application firewall?
- In a multi-tenant environment with multiple Kubernetes namespaces, how do we ensure pods or namespaces have a unique identity and enforce access control to egress endpoints (outside of Kubernetes)? A unique workload identity is important for end-to-end correlation, audit and traceability.
- How do we provide an end-to-end, correlated view from source to target apps?

How F5 solved this with the classic BIG-IP ADC and Aspen Mesh Service Mesh

Architecture overview: this solution article is an extension of the original article "Expanding Service Mesh without Envoy", published by my colleague Eric Chen; for details please refer to https://aspenmesh.io/expanding-service-mesh-without-envoy/. Aspen Mesh, an innovation from F5, is an enterprise-ready service mesh built on Istio - a tested and hardened distribution of Istio, fully supported by F5. For details, please refer to https://aspenmesh.io. For the purposes of this solution, Aspen Mesh and Istio are used interchangeably.

Solution in a nutshell
- Each pod has its own workload identity, part of the native capabilities of Aspen Mesh (AM). The identity takes the form of a client certificate managed by AM (istiod/Citadel) and generated from an organisation intermediate CA loaded onto the Istio control plane.
- BIG-IP is onboarded with a workload identity (client certificate) signed by the same organisation intermediate CA (or root CA). This client certificate is NOT managed by AM.
- An F5 virtual server (VS) is configured with a client-side profile to perform mutual TLS (mTLS).
- The F5 VS is registered onto AM, so the F5 VS service can be discovered from the internal service registry.
- On egress, the pod performs mTLS with the F5 VS. As the F5 client certificate is issued from the same organisation intermediate CA, both parties negotiate, mutually trust each other and exchange mTLS keys.
- An optional iRule can be implemented on BIG-IP to inspect the pod identity (certificate SAN) upon successful mTLS and permit/reject the request.
- BIG-IP applies SNAT and presents a unique network identifier (e.g. an IP address) to the network firewall.

Environment
- BIG-IP LTM (v14.x)
- Aspen Mesh v1.16.x
- Kubernetes 1.18.x

Use case

Permit microservices apps (e.g. bookinfo) to use the organisation's forward proxy (tinyproxy) - which sits behind the enterprise network firewall - to reach the Internet, and reject all other microservices apps on the same Kubernetes cluster.

Classic BIG-IP

Only the vs_aspenmesh-bookinfo-proxy-srv-mtls-svc configuration is demonstrated; similar configuration can be applied to other VSs.

F5 virtual server configuration

F5's VS client-side SSL profile is configured with "Client Certificate = require", which requires pods deployed inside AM to present a valid, trusted client certificate. An optional iRule restricts access to pods from the bookinfo namespace; the iRule follows the profile sketch below.
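For reference (not shown in the original, which configured this via the GUI), a roughly equivalent client-ssl profile created via tmsh might look like the sketch below; the profile name, certificate/key names and CA bundle are placeholders, and exact syntax varies by TMOS version:

tmsh create ltm profile client-ssl aspenmesh_mtls_clientssl \
  cert bigip-spiffee-cert.crt key bigip-spiffee-key.key \
  ca-file aspenmesh-intermediate-ca.crt peer-cert-mode require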
The optional iRule irule_bookinfo_spiffee permits bookinfo apps and rejects all others:

when CLIENTSSL_CLIENTCERT {
    set client_cert [SSL::cert 0]
    #log local0. "Client cert extensions - [X509::extensions $client_cert]"
    # Split the X509::extensions output on each newline character and log the values
    foreach item [split [X509::extensions [SSL::cert 0]] \n] {
        log local0. "$item"
    }
    if {[SSL::cert 0] ne ""} {
        set santemp [findstr [X509::extensions [SSL::cert 0]] "Subject Alternative Name" 43 " "]
        set spiffe [findstr $santemp "URI" 4]
        log local0. "Source SPIFFE-->$spiffe"
        if { ($spiffe starts_with "spiffe://cluster.local/ns/bookinfo/") } {
            log local0. "Aspen Mesh mTLS: PERMITTED==>$spiffe"
            # Allow and SNAT from the defined SNAT pool
        } else {
            log local0. "Aspen Mesh mTLS: REJECTED==>$spiffe"
            reject
        }
    }
}

Note: As of Istio version 1.x, the client-side Envoy (Istio sidecar) starts an mTLS handshake with the server-side BIG-IP VS (F5's client-side profile). During the handshake, the client-side Envoy also performs a secure naming check to verify that the service account presented in the server certificate is authorised to run the target service. Only then do the client-side Envoy and the server-side BIG-IP establish an mTLS connection. Hence, the client certificate generated and loaded onto BIG-IP has to conform to the secure naming information, which maps server identities to service names. For details on secure naming, please refer to https://istio.io/latest/docs/concepts/security/#secure-naming

Example: generating a SPIFFE-friendly certificate

openssl req -new -out bookinfo.istio-spiffee-req.pem -subj "/C=AU/ST=Victoria/L=Melbourne/O=F5/OU=SE/CN=bookinfo.spiffie" -keyout bookinfo.istio-spiffee-key.pem -nodes

cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1=spiffe://cluster.local/ns/bookinfo/sa/default
EOF

openssl x509 -req -sha512 -days 365 \
  -extfile v3.ext \
  -CA ../ca1/ca-cert.pem -CAkey ../ca1/ca-key.pem -CAcreateserial \
  -in bookinfo.istio-spiffee-req.pem \
  -out bookinfo.istio-spiffee-cert.pem

where ca1 is the intermediate CA used for Aspen Mesh.
Aspen Mesh pods and services before registration of the F5 VS:

$ kubectl -n bookinfo get pod,svc
NAME                                               READY  STATUS   RESTARTS  AGE
pod/details-v1-78d78fbddf-4vmdr                    2/2    Running  0         4d1h
pod/productpage-v1-85b9bf9cd7-f6859                2/2    Running  0         4d1h
pod/ratings-v1-6c9dbf6b45-9ld6f                    2/2    Running  0         4d1h
pod/reviews-v1-564b97f875-bjx2r                    2/2    Running  0         4d1h
pod/reviews-v2-568c7c9d8f-zzn8r                    2/2    Running  0         4d1h
pod/reviews-v3-67b4988599-pdk25                    2/2    Running  0         4d1h
pod/traffic-generator-productpage-fc97f5595-pdhvv  2/2    Running  0         6d11h

NAME                                   TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
service/details                        ClusterIP  10.235.14.186  <none>       9080/TCP  6d11h
service/productpage                    ClusterIP  10.235.37.112  <none>       9080/TCP  6d11h
service/ratings                        ClusterIP  10.235.40.239  <none>       9080/TCP  6d11h
service/reviews                        ClusterIP  10.235.1.21    <none>       9080/TCP  6d11h
service/traffic-generator-productpage  ClusterIP  10.235.17.158  <none>       80/TCP    6d11h

Register bigip-proxy-svc onto Aspen Mesh:

$ istioctl register -n bookinfo bigip-proxy-svc 10.4.0.201 3128 --labels apps=bigip-proxy
2020-12-15T23:14:33.286854Z  warn  Got 'services "bigip-proxy-svc" not found' looking up svc 'bigip-proxy-svc' in namespace 'bookinfo', attempting to create it
2020-12-15T23:14:33.305890Z  warn  Got 'endpoints "bigip-proxy-svc" not found' looking up endpoints for 'bigip-proxy-svc' in namespace 'bookinfo', attempting to create them

$ kubectl -n bookinfo get svc
NAME                           TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
bigip-proxy-svc                ClusterIP  10.235.45.250  <none>       3128/TCP  26s
details                        ClusterIP  10.235.14.186  <none>       9080/TCP  6d11h
productpage                    ClusterIP  10.235.37.112  <none>       9080/TCP  6d11h
ratings                        ClusterIP  10.235.40.239  <none>       9080/TCP  6d11h
reviews                        ClusterIP  10.235.1.21    <none>       9080/TCP  6d11h
traffic-generator-productpage  ClusterIP  10.235.17.158  <none>       80/TCP    6d11h

$ kubectl -n bookinfo describe svc bigip-proxy-svc
Name:              bigip-proxy-svc
Namespace:         bookinfo
Labels:            apps=bigip-proxy
Annotations:       alpha.istio.io/kubernetes-serviceaccounts: default
Selector:          <none>
Type:              ClusterIP
IP:                10.235.45.250
Port:              3128  3128/TCP
TargetPort:        3128/TCP
Endpoints:         10.4.0.201:3128
Session Affinity:  None
Events:            <none>

To test egress from a bookinfo pod to the external forward proxy (tinyproxy), run curl against the Internet (www.f5.com), pointing at the bigip-proxy-svc registered on Aspen Mesh. The example below executes the curl binary inside the "traffic-generator-productpage" pod.
$ kubectl -n bookinfo exec -it $(kubectl -n bookinfo get pod -l app=traffic-generator-productpage -o jsonpath={.items..metadata.name}) -c traffic-generator -- curl -Ikx bigip-proxy-svc:3128 https://www.f5.com
HTTP/1.0 200 Connection established
Proxy-agent: tinyproxy/1.8.3

HTTP/1.1 200 OK
Content-Type: text/html;charset=utf-8
Content-Length: 132986
Connection: keep-alive
Accept-Ranges: bytes
Cache-Control: no-cache="set-cookie"
Content-Security-Policy: frame-ancestors 'self' *.cybersource.com *.salesforce.com *.force.com ; form-action *.cybersource.com *.salesforce.com *.force.com 'self'
Date: Wed, 16 Dec 2020 06:19:48 GMT
ETag: "2077a-5b68b3c0c5be0"
Last-Modified: Wed, 16 Dec 2020 02:00:07 GMT
Strict-Transport-Security: max-age=16070400;
X-Content-Type-Options: nosniff
X-Dispatcher: dispatcher1uswest2
X-Frame-Options: SAMEORIGIN
X-Vhost: publish
Via: 1.1 sin1-bit21, 1.1 24194e89802a1a492c5f1b22dc744e71.cloudfront.net (CloudFront)
Vary: Accept-Encoding
X-Cache: Hit from cloudfront
X-Amz-Cf-Pop: MEL50-C2
X-Amz-Cf-Id: 7gE6sEaBP9WonZ0KjngDsr90dahHWFyDG0MwbuGn91uF7EkEJ_wdrQ==
Age: 15713

The classic BIG-IP logs show it successfully authenticating bookinfo via mTLS and permitting access. The forward proxy (tinyproxy) logs show the source IP SNATed to the IP configured on the classic BIG-IP - an IP that is also allowed on the network firewall.

From another namespace (e.g. sm-apigw-a), an attempt to access bigip-proxy-svc is rejected by the classic BIG-IP. The example below executes the curl binary inside the "nettools" pod.

$ kubectl -n sm-apigw-a get pod
NAME                          READY  STATUS   RESTARTS  AGE
httpbin-api-78bdd794bd-hfwkj  2/2    Running  2         22d
nettools-9497dcc86-nhqmr      2/2    Running  2         22d
podinfo-bbb7bf7c-j6wcs        2/2    Running  2         22d
sm-apigw-a-85696f7455-rs9zh   3/3    Running  0         7d21h

$ kubectl -n sm-apigw-a exec -it $(kubectl -n sm-apigw-a get pod -l app=nettools -o jsonpath={.items..metadata.name}) -c nettools -- curl -kIx bigip-proxy-svc.bookinfo.svc.cluster.local:3128 https://devcentral.f5.com
curl: (56) Recv failure: Connection reset by peer
command terminated with exit code 56

The classic BIG-IP logs show it rejecting the sm-apigw-a namespace from using the bigip-proxy-svc service.

Summary

Aspen Mesh is a cloud-native, enterprise-ready Istio service mesh. Classic BIG-IP is a feature-rich application delivery controller (ADC). With Aspen Mesh, microservices securely authenticate to the classic BIG-IP with mTLS, and the classic BIG-IP can securely authenticate microservices apps and deliver application services based on your business and security requirements. This article addressed egress use cases. What about ingress to the Kubernetes cluster? How can classic BIG-IP, or the cloud-native SPK, work coherently with Aspen Mesh to provide secure and consistent multi-cloud, multi-cluster application delivery services to your Kubernetes environment? That will be shared in a future article. Stay tuned.
Can an F5 VIP and pool have a container member?

I have a container running on a server with port 80 (TCP) exposed. The container is up and running when you test it against the server IP and container port, for example 172.27.27.2:80. I would now like to point an F5 VIP at a pool containing the member 172.27.27.2:80. I don't want to set up anything fancy using the F5 Kubernetes integration; I want to configure this through the F5 as if it were a basic IIS site or Windows service. I cannot think of one, but is there a reason this is not possible? And is there a specific health monitor that should be used? At the moment the F5 keeps marking the member as offline.
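Not from the original thread, but a minimal sketch of the classic LTM configuration being described, assuming tmsh access; the VIP address is a placeholder. A plain http (or tcp) monitor is usually enough for an exposed container port - if the member is marked down, first verify that the BIG-IP self-IP can reach 172.27.27.2:80 directly:

tmsh create ltm pool docker_pool members add { 172.27.27.2:80 } monitor http
tmsh create ltm virtual docker_vs destination 192.0.2.10:80 ip-protocol tcp \
  profiles add { http } pool docker_pool source-address-translation { type automap }
tmsh save sys config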
Containers: plug-and-play code in DevOps world - part 2

Related articles: DevOps Explained to the Layman; Containers: plug-and-play code in DevOps world - part 1

Quick intro

In part 1, I explained containers at a very high level and mentioned that Docker is the most popular container platform. I also added that containers are tiny isolated environments within the same Linux host, using the same Linux kernel, and so lightweight that they pack only the libraries and dependencies needed to get your application running. This is good because even for very distinct applications that require a certain Linux distro or different libraries, there should be no problem at all. If you're a Linux guy like me, you'd probably want to know that the most popular container platform (Docker) uses dockerd as the front-line daemon:

root@albuquerque-docker:~# ps aux | grep dockerd
root  753  0.1  0.4 1865588 74692 ?  Ssl  Mar19  4:39 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

This is what we're going to do here:
- Running my hello world app the traditional way (just a simple hello world!)
- Containerising my hello world app! (how to pack your application into a Docker image!)
- Quick note about image layers (a brief note about Docker image layers!)
- Running our containerised app (we run the same hello world app, but now within a Docker container)
- Uploading my app to an online registry and retrieving it (we upload our app to Docker Hub so we can pull it from anywhere)
- How do we manage multiple container images? (what do we do if our application is so big that we've got lots of containers?)

Running my hello world app the traditional way

There is no mystery to running an application (or a component of an application) the traditional way. We've got our physical or virtual machine with an OS installed, and we just run it:

root@albuquerque-docker:~# cat hello.py
#!/usr/bin/python3
print('hello, Rodrigo!')
root@albuquerque-docker:~# ./hello.py
hello, Rodrigo!

Containerising my hello world app!

Here I'm going to show you how to containerise your application, and it's best if you follow along with me. First, install Docker. Once installed, the command you'll use is always docker <something>, OK? In the DevOps world, things are usually done in a declarative manner, i.e. you tell Docker what you want to do and you don't worry much about the how.
With that in mind, by default we tell Docker about the application we'd like it to pack (pack = create an image) in its default configuration file, the Dockerfile:

root@albuquerque-docker:~# cat Dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get upgrade -y && apt-get install python3 -y
ADD hello.py /
CMD [ "./hello.py" ]

- FROM: tells Docker what your base image is (don't worry, it automatically downloads the image from Docker Hub if the image is not installed locally)
- RUN: any command typed in here is executed and becomes part of the base image
- ADD: copies the source file (hello.py) to a directory you pick inside the container (/ in this case)
- CMD: the command typed in here is executed when the container starts

So, in the above configuration we're telling Docker to build an image that does the following:
- Installs Ubuntu Linux as our base image (not the whole OS, just the bare minimum)
- Updates and upgrades all installed packages and installs python3
- Adds our hello.py script from the current directory to the / directory inside the container
- Runs it
- Exits, because the only task it had (running our script) has been completed

Now we execute this command to build the image based on our Dockerfile:

Note: notice I didn't specify the Dockerfile in the command below. That's because it's the default filename, so I just omitted it.

root@albuquerque-docker:~# docker build -t hello-world-rodrigo .
Sending build context to Docker daemon  607MB
Step 1/4 : FROM ubuntu:latest
 ---> 94e814e2efa8
Step 2/4 : RUN apt-get update && apt-get upgrade -y && apt-get install python3 -y
 ---> Running in a63919569292
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
.
. <omitted for brevity>
.
Reading state information...
Calculating upgrade...
The following packages will be upgraded:
  apt libapt-pkg5.0 libseccomp2 libsystemd0 libudev1
5 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 2268 kB of archives.
After this operation, 15.4 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libudev1 amd64 237-3ubuntu10.15 [54.2 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libapt-pkg5.0 amd64 1.6.10 [805 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libseccomp2 amd64 2.3.1-2.1ubuntu4.1 [39.1 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 apt amd64 1.6.10 [1165 kB]
Get:5 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libsystemd0 amd64 237-3ubuntu10.15 [205 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 2268 kB in 3s (862 kB/s)
(Reading database ... 4039 files and directories currently installed.)
Preparing to unpack .../libudev1_237-3ubuntu10.15_amd64.deb ...
.
. <omitted for brevity>
.
Suggested packages:
  python3-doc python3-tk python3-venv python3.6-venv python3.6-doc binutils binfmt-support readline-doc
The following NEW packages will be installed:
  file libexpat1 libmagic-mgc libmagic1 libmpdec2 libpython3-stdlib libpython3.6-minimal libpython3.6-stdlib libreadline7 libsqlite3-0 libssl1.1 mime-support python3 python3-minimal python3.6 python3.6-minimal readline-common xz-utils
0 upgraded, 18 newly installed, 0 to remove and 0 not upgraded.
Need to get 6477 kB of archives.
After this operation, 33.5 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libssl1.1 amd64 1.1.0g-2ubuntu4.3 [1130 kB]
.
. <omitted for brevity>
.
Setting up libpython3-stdlib:amd64 (3.6.7-1~18.04) ...
Setting up python3 (3.6.7-1~18.04) ...
running python rtupdate hooks for python3.6...
running python post-rtupdate hooks for python3.6...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Removing intermediate container a63919569292
 ---> 6d564b46521d
Step 3/4 : ADD hello.py /
 ---> a936bffc4f17
Step 4/4 : CMD [ "./hello.py" ]
 ---> Running in bea77d51f830
Removing intermediate container bea77d51f830
 ---> e6e4f99ed9f3
Successfully built e6e4f99ed9f3
Successfully tagged hello-world-rodrigo:latest

That's it. You've now packed your application into a Docker image! We can list our images to confirm ours is there:

root@albuquerque-docker:~# docker images
REPOSITORY           TAG     IMAGE ID      CREATED        SIZE
hello-world-rodrigo  latest  e6e4f99ed9f3  2 minutes ago  155MB
ubuntu               latest  94e814e2efa8  2 minutes ago  88.9MB

Note that the Ubuntu image was also installed, as it is the base image our app runs on.

Quick note about image layers

Notice that Docker uses layers to be more efficient, and they're reused among containers on the same host:

root@albuquerque-docker:~# docker inspect hello-world-rodrigo | grep Layers -A 8
            "Layers": [
                "sha256:762d8e1a60542b83df67c13ec0d75517e5104dee84d8aa7fe5401113f89854d9",
                "sha256:e45cfbc98a505924878945fdb23138b8be5d2fbe8836c6a5ab1ac31afd28aa69",
                "sha256:d60e01b37e74f12aa90456c74e161f3a3e7c690b056c2974407c9e1f4c51d25b",
                "sha256:b57c79f4a9f3f7e87b38c17ab61a55428d3391e417acaa5f2f761c0e7e3af409",
                "sha256:51bedea20e25171f7a6fb32fdba24cce322be0d1a68eab7e149f5a7ee320290d",
                "sha256:b4cfcee2534584d181cbedbf25a5e9daa742a6306c207aec31fc3a8197606565"
            ]
        },

You can think of layers roughly like this: the first layer is the bare-bones base OS, the next one a subsequent modification (e.g. installing python3), and so on. The idea is to share layers (read-only) among different containers so we don't need a copy of the same layer for each one. Just make sure you understand that what we're sharing here is a read-only image. Anything you write on top of that, Docker creates another layer! That's the magic!
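For another view of those layers mapped back to the Dockerfile build steps (not shown in the original), you can run:

root@albuquerque-docker:~# docker history hello-world-rodrigo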
Let's just confirm our container didn't exit and it is still there: root@albuquerque-docker:~# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES c97d363a1cc2 nginx "nginx -g 'daemon of…" 5 seconds ago Up 4 seconds 80/tcp amazing_lalande Let's confirm we can reach NGINX inside the container. First we check container's locally assigned IP address: root@albuquerque-docker:~# docker inspect c97d363a1cc2 | grep IPAdd "SecondaryIPAddresses": null, "IPAddress": "172.17.0.2", "IPAddress": "172.17.0.2", Now we confirm we have NGINX running inside a docker container: root@albuquerque-docker:~# curl http://172.17.0.2 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> At the moment, my NGINX server is not reachable outside of my host machine (172.16.199.57 is the external IP address of our container's host machine). rodrigo@ubuntu:~$ curl http://172.16.199.57 curl: (7) Failed to connect to 172.16.199.57 port 80: Connection refused To solve this, just add -p flag like this: -p <port host will listen for external connections>:<port our container is listening> Let's delete our our NGINX container first: root@albuquerque-docker:~# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES c97d363a1cc2 nginx "nginx -g 'daemon of…" 8 minutes ago Up 8 minutes 80/tcp amazing_lalande root@albuquerque-docker:~# docker rm c97d363a1cc2 Error response from daemon: You cannot remove a running container c97d363a1cc2bf578d62e57ec677bca69f27746974b9d5a49dccffd17dd75a1c. Stop the container before attempting removal or force remove root@albuquerque-docker:~# docker stop c97d363a1cc2 c97d363a1cc2 root@albuquerque-docker:~# docker rm c97d363a1cc2 c97d363a1cc2 root@albuquerque-docker:~# docker run -d -p 80:80 nginx a8b0454bae36e52f3bdafe4d21eea2f257895c9ea7ca93542b760d7ef89bdd7f root@albuquerque-docker:~# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a8b0454bae36 nginx "nginx -g 'daemon of…" 7 seconds ago Up 5 seconds 0.0.0.0:80->80/tcp thirsty_poitras Now, let me reach it from an external host: rodrigo@ubuntu:~$ curl http://172.16.199.57 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> Uploading my App to online Registry and Retrieving it You can also upload your containerised application to an online registry such asdockerhubwith docker push command. We first need to create an accountdockerhuband then a repository: Because my username isdigofarias, my hello-world-rodrigo imagewill actually have to be named locally as digofarias/hello-world-rodrigo. 
Let's list our images:

root@albuquerque-docker:~# docker images
REPOSITORY           TAG     IMAGE ID      CREATED        SIZE
ubuntu               latest  94e814e2efa8  8 minutes ago  88.9MB
hello-world-rodrigo  latest  e6e4f99ed9f3  8 minutes ago  155MB

If I upload the image with this name it won't work, so I need to re-tag it as digofarias/hello-world-rodrigo like this:

root@albuquerque-docker:~# docker tag hello-world-rodrigo:latest digofarias/hello-world-rodrigo:latest
root@albuquerque-docker:~# docker images
REPOSITORY                      TAG     IMAGE ID      CREATED        SIZE
hello-world-rodrigo             latest  e6e4f99ed9f3  9 minutes ago  155MB
digofarias/hello-world-rodrigo  latest  e6e4f99ed9f3  9 minutes ago  155MB

We can now log in to our newly created account:

root@albuquerque-docker:~# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: digofarias
Password: **********
WARNING! Your password will be stored unencrypted in /home/rodrigo/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded

Lastly, we push our code to Docker Hub:

root@albuquerque-docker:~# docker push digofarias/hello-world-rodrigo
The push refers to repository [docker.io/digofarias/hello-world-rodrigo]
b4cfcee25345: Pushed
51bedea20e25: Pushed
b57c79f4a9f3: Pushed
d60e01b37e74: Pushed
e45cfbc98a50: Pushed
762d8e1a6054: Pushed
latest: digest: sha256:b69a5fd119c8e9171665231a0c1b40ebe98fd79457ede93f45d63ec1b17e60b8 size: 1569

If you go to any other machine connected to the Internet with Docker installed, you can run my hello-world app:

root@albuquerque-docker:~# docker run digofarias/hello-world-rodrigo
hello, Rodrigo!

You don't need to worry about dependencies or anything else. If it worked properly on your machine, it should also work anywhere else, as the environment inside the container should be the same. In the real world, you'd probably be uploading just one component of your code, and your real application could be comprised of lots of containers that potentially communicate with each other via an API.

How do we manage multiple container images?

Remember that in the real world we might need to create multiple components (each inside its own container), and we'll end up with an ecosystem of containers that make up our application or service. As I said in part 1, in order to manage this ecosystem we typically use a container orchestrator. Currently there are a couple of them, like Docker Swarm, but Kubernetes is the most popular one. Kubernetes is a topic for a whole new article (or many articles), but you typically declare your container images in a Kubernetes deployment file and it downloads, installs, runs and monitors the whole ecosystem (i.e. your application) for you. Just remember that a container is typically just one component of your application that communicates with other components/containers via an API.
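To make that concrete, here's a minimal sketch (not from the original article) of a Kubernetes Deployment that runs three replicas of the hello-world image we pushed earlier; since our image exits immediately, picture a long-running component like nginx in its place:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-rodrigo
spec:
  replicas: 3                      # Kubernetes keeps 3 copies running for us
  selector:
    matchLabels:
      app: hello-world-rodrigo
  template:
    metadata:
      labels:
        app: hello-world-rodrigo
    spec:
      containers:
      - name: hello
        image: digofarias/hello-world-rodrigo:latest
EOF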
Containers: plug-and-play code in DevOps world - part 1

Related articles:
DevOps Explained to the Layman
Containers: plug-and-play code in DevOps world - part 2

Quick Intro

In my previous article I mentioned that breaking down your code into small plug-and-play components is a best practice in a DevOps solution, and that this is called a Microservices Architecture. It makes your application less prone to errors, scalable and resilient, and it's very easy to replace one component with another one or with a different version. In the real world, Microservices Architecture became very popular because of containers. As a result, we're going to talk about containers. For the uninitiated, please know that currently the most popular container platform is called Docker, and you'll learn more about it in part 2 (another article). In part 1 (this article), I'm going to very briefly explain the following:

What was the problem before Containers?
How Containers solve them (here you'll briefly understand what containers are and what they do)
Where do Containers fit into a DevOps solution?
Hang on! What if my application has too many containers? (yes, that's the question I get from people smarter than me after I explain how containers work)

What was the problem before Containers?

Before containers, we mostly had software that was deployed in a monolithic way, i.e. in a larger chunk of code.

Changes

If you had to deploy changes, you'd have to repackage all the components. Changes can be due to a bug found later on, a new feature, etc. The bottom line is that whatever change is needed, it might not be straightforward.

Per-Component Scalability

If you need more memory or CPU, or need to scale a specific component of your application, you can probably keep adding more memory, CPU, etc, to your box/VM. The other option would be to add a BIG-IP and scale your monolithic application. However, this option may require changes in your code, which is not always possible from a developer's point of view. Also, what if you don't need to scale the whole application but just a single component?

Deployment Environment

I'd say this is more of an advantage of containers rather than a disadvantage of a monolithic environment. Without containers, chances are that your development environment might be slightly different to your staging or production environment. Even if the OS is the same, maybe the libraries are different or the OS version is different, and unexpected things might happen during the testing or deployment phase.

How Containers solve the above problems

For the uninitiated, think of containers as a Linux trick to isolate a specific component of your application using the same Linux kernel, without the need to use Virtual Machines. A container holds the component of your application you've just created along with all the dependencies and libraries it needs to run. An application (in the containers world) is typically comprised of one or, more frequently, many containers, and sometimes a LOT of containers. The most popular container platform at the moment is called Docker. Let's revisit how containers solve the problems pointed out earlier, but keep this in mind:

1 component for 1 container (typically)
Each container/component can (and usually does) speak to other components/containers, e.g. your shopping basket component might need to speak to the authentication container of your e-commerce website (see the sketch just below).
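Jumping ahead slightly to the Docker commands covered in part 2, here's a minimal sketch of two containers talking to each other. The network and container names are illustrative, and nginx is standing in for a real authentication service:

# Create a user-defined bridge network so containers can find each other by name
docker network create shop-net

# "Authentication" component (nginx standing in for a real auth service)
docker run -d --name auth --network shop-net nginx

# "Shopping basket" component calling the auth component over HTTP by container name
docker run --rm --network shop-net curlimages/curl http://auth

The point is simply that each component is addressable by name and speaks to its neighbours over an ordinary API, here plain HTTP.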
Changes

Say you're a developer and you've corrected a bug in Component 1 (C1)'s code, and you'd like to replace C1 with a newer version. All you need to do is repackage and restart C1, leaving the other components untouched. You can replace all C1 instances or just a few of them, in case you'd like to check how the new code behaves. You can then configure the API Gateway or load balancer to round-robin only a handful of requests to the newer version of the code, for example, until you're super confident.

Per-Component Scalability

What if we need to scale Component 4 (C4)? Just add one or more instances when you need them. In the example below, we've added another C4 instance to Server 3's Guest OS. If you no longer need the extra capacity, you can remove the additional container. Scalability is literally plug-and-play! (A concrete sketch follows at the end of this article.)

Deployment Environment

The magic in the containerisation process is that you not only pack your application, you pack its environment as well. This means that the environment (component-wise) in which you've been creating and testing your component should be the same one you deploy to production! This is a tremendous advantage over monolithic applications, as it's less prone to errors due to differences in OS version, libraries, etc.

Where do Containers fit into a DevOps Solution?

In a DevOps solution, we use an agile methodology where developers are typically encouraged to create small chunks of code anyway. They then merge them into the main application at least once a day, and typically many times a day. If the code is isolated in a single container, it is easier to troubleshoot or to find a bug. It also encourages the developers themselves to think of (and to see) the application broken down into organised chunks of functional components. Maintenance is also cleaner, and you can focus on a particular service/container, for example. This is so much cleaner and so plug-and-play!

Hang on! What if my application has too many containers?

That's right. You can create hundreds or thousands of components and pack each of them into a container. Are you going to manually deploy all these components? How do you deploy all the components that make up your application and still utilise the available resources efficiently? What if one container fails out of the blue? You probably need some form of health check to maintain a minimum number of containers running too, right? What if you suddenly need more resources and your containers are not enough? Are you going to manually add containers? That's where Kubernetes comes in: the most popular container orchestration solution. In the next article (part 2), I will introduce Docker from a more technical perspective.
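Before moving on, here's the per-component scalability idea above as a minimal sketch in plain Docker commands (the image name myshop/component4 and the container names are hypothetical):

# Scale out: run a second and a third instance of the C4 component's image
docker run -d --name c4-2 myshop/component4
docker run -d --name c4-3 myshop/component4

# Scale back in: stop and remove an instance when it's no longer needed
docker stop c4-3
docker rm c4-3

In practice an orchestrator does this for you. With Kubernetes, for example, scaling is a one-liner like kubectl scale deployment component4 --replicas=3 (again, the deployment name is hypothetical).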
F5 in Container Environments: BIG-IP and Beyond

Container systems like Docker are cool. From a tinkerer like me, who loves being able to create a working webserver with a single command and have it run anywhere, to serious compute projects using Docker integrated with tools like Jenkins and GitHub (allowing developers to write code, then build, test and deploy it automatically while they get on with the next revision), Docker has become an overnight (OK, well, three-year) success story. But the magic word there is integrated. Because what turns a cool container technology from an interesting tool into a central part of your IT infrastructure is the ability to take the value it brings (in this case lightweight, fast, run-anywhere execution environments) and make it work effectively with the other parts of the environment. To become a credible platform for enterprise IT, we need tools to orchestrate the lifecycle of containers and to manage features such as high availability and scheduling. Luckily we're now well supplied with tools like Mesos Marathon, Docker Swarm, Kubernetes, and a host of others. You also need the rest of the infrastructure to be as agile and integrated as the container management system. When you have the ability to spin up applications on demand, in seconds, you want the systems that manage application traffic to be part of the process, tightly coupled with the systems that are creating the applications and services. Which is where F5 comes in. We are committed to building services that integrate with the tools you use to manage the rest of your environment, so that you can rely on F5 to be protecting, accelerating and managing your application traffic with the same flexibility and agility that you need elsewhere. Our vision is of an architecture where F5 components subscribe to events from container management systems, then create the right application delivery services in the right place to service traffic to the new containers. This might be something simple, such as just adding a new container to an existing pool. It might mean creating a whole new configuration for a new application or service, with the right levels of security and control. It might even mean deploying a whole new platform to perform these services. Maybe it will deploy a BIG-IP Virtual Edition with all the features and functions you expect from F5. But perhaps we need something new: a lighter-weight platform that can deal well with East-West traffic in a microservices environment, while a BIG-IP manages the North-South client traffic and defends the perimeter? If you think this sounds interesting, then I'd encourage you to watch this space. If you think it sounds really interesting and you will be at DockerCon in Seattle during the week of June 20th, you should head to an evening panel discussion hosted by our friends at Skytap on June 21, where F5'er Shawn Wormke will be able to tell you (a little) more.