In this multi-part series of articles, I will share how to leverage F5's BIG-IP (BIG-IP), Aspen Mesh service mesh and NGINX ingress controller to create a cloud-agnostic, resilient and secure cloud-native architecture platform that supports your cloud-native application requirements. Cloud-native is a term used to describe container-based environments. Microservices is an architectural pattern/approach/style in which an application is structured as multiple loosely coupled, independent services delivered in a containerized form factor. Hence, for simplicity, in this series of articles, cloud-native architecture and microservices architecture platform (cPaaS) are used interchangeably.
Note:
Although BIG-IP is not in the category of cloud-native apps (in comparison with F5's Service Proxy for Kubernetes (SPK), which is cloud-native), BIG-IP is currently feature rich and plays a key role in this reference architecture pattern. For existing customers who have BIG-IP, this could be a first step in an organic transition from the existing BIG-IP to the cloud-native SPK.
The proliferation of Internet-based applications, software and mobile device usage has grown substantially over the years. It is no longer a prediction. It is a fact. According to the 2021 Global Digital suite of reports from "We Are Social" and "Hootsuite", there are over 5 billion unique mobile users and over 4 billion users actively connected to the Internet. This excludes connected devices such as the Internet of Things, the servers that power the Internet, and so on. With COVID-19 and the rise of 5G rollout, edge and cloud computing, connected technologies have become even more important and part of people's lives. As the saying goes, "Applications/software power the Internet, and the Internet is the backbone of the world economy."
Today, business leaders require their IT and digital transformation teams to be more innovative by supporting the creation of business-enabling applications. These teams are no longer just responsible for the availability of networks and servers, but also for building a robust platform to support software development and application delivery that is secure, reliable and innovative. Because a strong application portfolio is crucial for the success of the business and increases market value, IT or digital transformation teams may need to ask:
A robust and secure cloud-native platform for modern application architecture and frictionless consumption of application services are among the requirements for success. As of this writing (April 2021), cloud-native/microservices architecture is the architecture pattern of choice for modern developers, and Kubernetes is the industry de facto standard for microservices/container orchestration.
Strategize, formulate and build a common, resilient and scalable cloud-native reference architecture and Platform as a Service to handle modern application workloads. This architecture pattern is modular and cloud-agnostic, and delivers consistent security and application services.
To establish the reference architecture, we leverage an open source upstream Kubernetes platform on a single Linux VM together with a multitude of open source and commercial tools, and integrate it with F5's BIG-IP as the unified ingress/egress and unified access point to cloud-native applications hosted on the following types of workloads:
Note:
Requirements
The reference architecture above can be treated as an "atomic" unit or a "repeatable pattern". This "atomic" unit can be deployed in multiple public clouds (e.g. EKS, AKS, GKE, etc.) or private clouds. Multiple "atomic" units can be combined to form highly resilient service clusters.
F5 DNS/GSLB can be deployed to monitor the health of each individual cloud-native app inside each "atomic" cluster and dynamically redirect users to a healthy app. Each cluster can run active-active, and applications can be distributed across both clusters.
Conceptual view of how an "atomic" unit / cPaaS can be deployed in multiple clouds, and how these clusters can be combined to form a service resiliency mesh by leveraging F5 DNS and F5 BIG-IP.
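To make the DNS-based steering concrete, below is a minimal sketch of what the BIG-IP DNS (GTM) objects could look like in tmsh. This is not the configuration used in this series; all names and addresses are hypothetical, and monitors, datacenters and virtual servers must be adjusted to your environment.

# Sketch only: one datacenter/server per "atomic" cluster; names and IPs are hypothetical
tmsh create gtm datacenter dc_cloud_a
tmsh create gtm server bigip_cloud_a datacenter dc_cloud_a product bigip \
    addresses add { 203.0.113.10 } \
    virtual-servers add { vs_app { destination 203.0.113.10:443 } }
# Pool of virtual servers across clusters; GTM health monitors decide which members receive traffic
tmsh create gtm pool a pool_app members add { bigip_cloud_a:vs_app }
# The wide IP answers DNS queries with the address of a healthy member
tmsh create gtm wideip a app.example.com pools add { pool_app }

Repeating the datacenter/server/pool-member definitions for a second cluster gives the active-active distribution described above.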
Note: The subsequent sections are a hands-on guide to building the reference architecture described above (the "atomic" unit), with the exception of the multi-cloud, multi-cluster service resiliency mesh. K3D + K3S will be used solely for development and testing.
Note:
sudo apt-get update
sudo apt-get -y install \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg-agent \
  software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"

sudo apt-get update -y
sudo apt-get install docker-ce=5:19.03.15~3-0~ubuntu-focal docker-ce-cli=5:19.03.15~3-0~ubuntu-focal -y

fbchan@sky:~$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES

fbchan@sky:~$ sudo systemctl enable --now docker.service
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
curl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.15.0/calicoctl
chmod u+x calicoctl
sudo mv calicoctl /usr/local/bin/
curl -LO https://dl.k8s.io/release/v1.19.9/bin/linux/amd64/kubectl
chmod u+x kubectl
sudo mv kubectl /usr/local/bin
sudo apt install jq -y
sudo apt install net-tools -y
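Optionally (not part of the original steps), a quick sanity check that each CLI tool installed above is on the PATH and at the expected version:

helm version --short                 # Helm 3.x
calicoctl version                    # calicoctl v3.15.0
kubectl version --client --short     # kubectl v1.19.9
jq --version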
This component is optional. It is a terminal-based UI for interacting with Kubernetes clusters.
wget https://github.com/derailed/k9s/releases/download/v0.24.2/k9s_Linux_x86_64.tar.gz
tar zxvf k9s_Linux_x86_64.tar.gz
sudo mv k9s /usr/local/bin/
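Once installed, launching it is as simple as running k9s against the current kubeconfig context, for example:

k9s                  # browse the cluster using the current context
k9s -n kube-system   # optionally start scoped to a specific namespace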
Depending on your setup, by default your Ubuntu 20.x VM may not use all of the allocated volume. This step expands the filesystem to use all of the allocated disk space.
fbchan@sky:~$ sudo lvm
lvm> lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
  Size of logical volume ubuntu-vg/ubuntu-lv changed from 39.50 GiB (10112 extents) to <79.00 GiB (20223 extents).
  Logical volume ubuntu-vg/ubuntu-lv successfully resized.
lvm> quit
Exiting.

fbchan@sky:~$ sudo resize2fs /dev/ubuntu-vg/ubuntu-lv
resize2fs 1.45.5 (07-Jan-2020)
Filesystem at /dev/ubuntu-vg/ubuntu-lv is mounted on /; on-line resizing required
old_desc_blocks = 5, new_desc_blocks = 10
The filesystem on /dev/ubuntu-vg/ubuntu-lv is now 20708352 (4k) blocks long.

fbchan@sky:~$ df -kh
Filesystem                          Size  Used Avail Use% Mounted on
udev                                7.8G     0  7.8G   0% /dev
tmpfs                               1.6G  1.2M  1.6G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv    78G  7.1G   67G  10% /
..
sudo ufw disable
sudo apt-get remove ufw -y
fbchan@sky:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:6c:ab:0b brd ff:ff:ff:ff:ff:ff
    inet 10.10.2.10/24 brd 10.10.2.255 scope global ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe6c:ab0b/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:4c:15:2e:1e brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

fbchan@sky:~$ ip r
default via 10.10.2.1 dev ens160 proto static
10.10.2.0/24 dev ens160 proto kernel scope link src 10.10.2.10
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
K3D in a nutshell.
K3D is a lightweight wrapper to run k3s (Rancher Labs' minimal Kubernetes distribution) in Docker. K3D makes it very easy to create single- and multi-node K3S clusters in Docker, e.g. for local development on Kubernetes. For details, please refer here
Install k3d
wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v4.2.0 bash
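A quick check (an optional step, not from the original walkthrough) confirms the pinned release installed correctly:

k3d version
# Should report k3d v4.2.0 (the TAG pinned above) together with its bundled default k3s version.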
k3d cluster create cpaas1 --image docker.io/rancher/k3s:v1.19.9-k3s1 \
  --k3s-server-arg "--disable=servicelb" \
  --k3s-server-arg "--disable=traefik" \
  --k3s-server-arg --tls-san="10.10.2.10" \
  --k3s-server-arg --tls-san="k3s.foobz.com.au" \
  --k3s-server-arg '--flannel-backend=none' \
  --volume "$(pwd)/calico-k3d.yaml:/var/lib/rancher/k3s/server/manifests/calico.yaml" \
  --no-lb --servers 1 --agents 3

### Run above command or cluster-create.sh script provided ###
##############################################################
fbchan@sky:~/Part-1$ ./cluster-create.sh
WARN[0000] No node filter specified
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-cpaas1'
INFO[0000] Created volume 'k3d-cpaas1-images'
INFO[0001] Creating node 'k3d-cpaas1-server-0'
INFO[0001] Creating node 'k3d-cpaas1-agent-0'
INFO[0001] Creating node 'k3d-cpaas1-agent-1'
INFO[0001] Creating node 'k3d-cpaas1-agent-2'
INFO[0001] Starting cluster 'cpaas1'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-cpaas1-server-0'
INFO[0014] Starting agents...
INFO[0014] Starting Node 'k3d-cpaas1-agent-0'
INFO[0024] Starting Node 'k3d-cpaas1-agent-1'
INFO[0034] Starting Node 'k3d-cpaas1-agent-2'
INFO[0045] Starting helpers...
INFO[0045] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0052] Successfully added host record to /etc/hosts in 4/4 nodes and to the CoreDNS ConfigMap
INFO[0052] Cluster 'cpaas1' created successfully!
INFO[0052] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0052] You can now use it like this:
kubectl config use-context k3d-cpaas1
kubectl cluster-info

### Docker k3d spun up multi-node Kubernetes using docker ###
#############################################################
fbchan@sky:~/Part-1$ docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED              STATUS              PORTS                     NAMES
2cf40dca2b0a   rancher/k3s:v1.19.9-k3s1   "/bin/k3s agent"         About a minute ago   Up 52 seconds                                 k3d-cpaas1-agent-2
d5c49bb65b1a   rancher/k3s:v1.19.9-k3s1   "/bin/k3s agent"         About a minute ago   Up About a minute                             k3d-cpaas1-agent-1
6e5bb6119b61   rancher/k3s:v1.19.9-k3s1   "/bin/k3s agent"         About a minute ago   Up About a minute                             k3d-cpaas1-agent-0
ea154b36e00b   rancher/k3s:v1.19.9-k3s1   "/bin/k3s server --d…"   About a minute ago   Up About a minute   0.0.0.0:37371->6443/tcp   k3d-cpaas1-server-0

### All Kubernetes pods are in running states ###
#################################################
fbchan@sky:~/Part-1$ kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-node-95gqb                          1/1     Running   0          5m11s
kube-system   calico-node-fdg9f                          1/1     Running   0          5m11s
kube-system   calico-node-klwlq                          1/1     Running   0          5m6s
kube-system   local-path-provisioner-7ff9579c6-mf85f     1/1     Running   0          5m11s
kube-system   metrics-server-7b4f8b595-7z9vk             1/1     Running   0          5m11s
kube-system   coredns-66c464876b-hjblc                   1/1     Running   0          5m11s
kube-system   calico-node-shvs5                          1/1     Running   0          4m56s
kube-system   calico-kube-controllers-5dc5c9f744-7j6gb   1/1     Running   0          5m11s
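Optionally, confirm the node container IPs now, because these 172.19.0.x addresses are the same ones used later as BGP peers on the BIG-IP and as next hops for the static routes:

kubectl get nodes -o wide
# The INTERNAL-IP column shows each node's docker network address
# (172.19.0.2 through 172.19.0.5 in this walkthrough).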
For details, please refer to another DevCentral article.
Note:
You do not need to set up Calico for Kubernetes in EKS, AKS (Azure CNI with advanced networking mode) or GKE deployments. The cloud provider's managed Kubernetes underlay will provide the required connectivity from BIG-IP to Kubernetes pods.
sudo mkdir /etc/calico
sudo vi /etc/calico/calicoctl.cfg

Content of calicoctl.cfg (replace /home/xxxx/.kube/config with the location of your kubeconfig file):
---------------------------------------
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/home/xxxx/.kube/config"
---------------------------------------

fbchan@sky:~/Part-1$ sudo calicoctl create -f 01-bgpconfig.yml
Successfully created 1 'BGPConfiguration' resource(s)

fbchan@sky:~/Part-1$ sudo calicoctl create -f 02-bgp-peer.yml
Successfully created 1 'BGPPeer' resource(s)

fbchan@sky:~/Part-1$ sudo calicoctl get node -o wide
NAME                  ASN       IPV4            IPV6
k3d-cpaas1-agent-1    (64512)   172.19.0.4/16
k3d-cpaas1-server-0   (64512)   172.19.0.2/16
k3d-cpaas1-agent-2    (64512)   172.19.0.5/16
k3d-cpaas1-agent-0    (64512)   172.19.0.3/16
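The contents of 01-bgpconfig.yml and 02-bgp-peer.yml are not reproduced in this article; the sketch below shows what they might contain. The AS number (64512) matches the imish configuration in the next step, but the BGPPeer peerIP (the BIG-IP self IP) is an assumption here; substitute your own.

# 01-bgpconfig.yml (sketch): global BGP settings for Calico
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 64512
---
# 02-bgp-peer.yml (sketch): peer every node with the BIG-IP
# peerIP below is hypothetical; use your BIG-IP self IP
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bigip-peer
spec:
  peerIP: 10.10.2.1
  asNumber: 64512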
Setup BGP peering with Calico
Ensure you have enabled Advanced Networking on BIG-IP (Network >> Route Domains >> 0, under "Dynamic Routing Protocol", Enabled: BGP)
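If you prefer the CLI over the GUI, the equivalent can be done with tmsh (a sketch; verify against your TMOS version):

# Enable the BGP dynamic routing protocol on route domain 0
tmsh modify net route-domain 0 routing-protocol add { BGP }
tmsh save sys config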
[root@mel-prod:Active:Standalone] config # imish
mel-prod.foobz.com.au[0]>en
mel-prod.foobz.com.au[0]#config t
Enter configuration commands, one per line.  End with CNTL/Z.
mel-prod.foobz.com.au[0](config)#router bgp 64512
mel-prod.foobz.com.au[0](config-router)#bgp graceful-restart restart-time 120
mel-prod.foobz.com.au[0](config-router)#neighbor calico-k8s peer-group
mel-prod.foobz.com.au[0](config-router)#neighbor calico-k8s remote-as 64512
mel-prod.foobz.com.au[0](config-router)#neighbor 172.19.0.2 peer-group calico-k8s
mel-prod.foobz.com.au[0](config-router)#neighbor 172.19.0.3 peer-group calico-k8s
mel-prod.foobz.com.au[0](config-router)#neighbor 172.19.0.4 peer-group calico-k8s
mel-prod.foobz.com.au[0](config-router)#neighbor 172.19.0.5 peer-group calico-k8s
mel-prod.foobz.com.au[0](config-router)#wr
Building configuration...
[OK]
mel-prod.foobz.com.au[0](config-router)#end
mel-prod.foobz.com.au[0]#show running-config
!
no service password-encryption
!
router bgp 64512
 bgp graceful-restart restart-time 120
 neighbor calico-k8s peer-group
 neighbor calico-k8s remote-as 64512
 neighbor 172.19.0.2 peer-group calico-k8s
 neighbor 172.19.0.3 peer-group calico-k8s
 neighbor 172.19.0.4 peer-group calico-k8s
 neighbor 172.19.0.5 peer-group calico-k8s
!
line con 0
 login
line vty 0 39
 login
!
end
Validate Calico pod network advertised to BIG-IP via BGP
Calico pod network routes are advertised into the BIG-IP routing table.
Because BIG-IP routes every pod network to a single Ubuntu VM (10.10.2.10), we need to ensure that the Ubuntu VM routes those pod networks to the right docker container agent/worker nodes. In an environment where each master/worker runs on a dedicated VM or physical host with its own IP, BIG-IP BGP will send traffic to the designated host directly. Hence, the following is only required for this setup, where all Kubernetes nodes run on the same VM.
Based on my environment, here are the additional routes I needed to add on my Ubuntu VM.
fbchan@sky:~/Part-1$ sudo ip route add 10.53.68.192/26 via 172.19.0.4
fbchan@sky:~/Part-1$ sudo ip route add 10.53.86.64/26 via 172.19.0.3
fbchan@sky:~/Part-1$ sudo ip route add 10.53.115.0/26 via 172.19.0.5
fbchan@sky:~/Part-1$ sudo ip route add 10.53.194.192/26 via 172.19.0.2
If everything is working properly, you should be able to ping Kubernetes pod IPs directly from the BIG-IP. You can find those pod IPs via 'kubectl get pod -A -o wide'.
root@(mel-prod)(cfg-sync Standalone)(Active)(/Common)(tmos)# ping -c 2 10.53.86.66
PING 10.53.86.66 (10.53.86.66) 56(84) bytes of data.
64 bytes from 10.53.86.66: icmp_seq=1 ttl=62 time=1.59 ms
64 bytes from 10.53.86.66: icmp_seq=2 ttl=62 time=1.33 ms

--- 10.53.86.66 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 1.336/1.463/1.591/0.133 ms

root@(mel-prod)(cfg-sync Standalone)(Active)(/Common)(tmos)# ping -c 2 10.53.86.65
PING 10.53.86.65 (10.53.86.65) 56(84) bytes of data.
64 bytes from 10.53.86.65: icmp_seq=1 ttl=62 time=1.03 ms
64 bytes from 10.53.86.65: icmp_seq=2 ttl=62 time=24.5 ms

--- 10.53.86.65 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.036/12.786/24.537/11.751 ms
Note: Do not persist those Linux routes on the VM. The route distribution will change when you reboot or restart your VM. You are required to query the new route distribution and re-create the Linux routes whenever you reboot the VM.
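One possible way (an assumption on my part, not from the original steps) to re-derive the block-to-node mapping after a reboot is to read Calico's BlockAffinity objects together with the node internal IPs, then re-add the routes:

# Which pod CIDR block is affine to which node
kubectl get blockaffinities.crd.projectcalico.org \
  -o jsonpath='{range .items[*]}{.spec.node}{"\t"}{.spec.cidr}{"\n"}{end}'

# Map each node name to its container IP (INTERNAL-IP column)
kubectl get nodes -o wide

# Then re-create each route, e.g.:
# sudo ip route add <block-cidr> via <node-internal-ip>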
What we achieved so far: