How to secure egress with F5 Service Proxy for Kubernetes
Outline:
- Securing egress challenges
- How F5 can help
- Technical bit on how it works

Getting traffic into your clusters to your workloads is just a small part of the cluster admin's tasks, and there are many options available. Controlling the packets going out is harder and often ignored. This makes your clusters more vulnerable to security risks because they don't follow the same strict rules as your traditional networks. This article will dive deeper into how SPK can control traffic exiting your clusters, even when your application workload uses multus to attach additional external interfaces.

Secure Egress Challenges

By default, a pod deployed using the calico CNI will follow the default route to get out of the cluster. Traffic will look like it's coming from the worker host's external IP address on the management interface. While Kubernetes NetworkPolicies can be used for egress, it becomes painful to manage the lifecycle of hundreds or thousands of policies across all namespaces as the cluster grows. If you deploy a pod with multus interfaces, as commonly seen with telco applications, you add another way for that pod to bypass any NetworkPolicies applied within the cluster. What if there was a way to manage egress dynamically (as pods are spun up and down) and easily, so that the cluster admin could centrally configure and control traffic flowing out of the cluster?

How F5 can help

Service Proxy for Kubernetes (SPK) is a cloud-native application traffic management solution, designed for communication service provider (CoSP) 5G networks and other application workloads. With SPK and its Calico egress gateway feature, managing a pod's default calico network interface as well as any multus interfaces becomes easy and consistent with the CSRC daemonset. Kernel routes are automatically configured so that the pod's traffic will always be routed via the SPK pod, where you can apply consistent, namespace-aware network policies, source NAT translation, and other controls. If the "watched" application workload is deleted, the corresponding host rules also get removed.

Technical Overview

This section provides an overview of how to configure the above scenario.

Host Prerequisites

On the host, two macvlan bridge shims are created on physical interfaces: one for the application pod's calico traffic and one for the macvlan traffic, which will forward packets on to SPK. These interfaces allow connectivity to the SPK's "internal" and "external2" interfaces, respectively.

ip link add spk-shim link ens224 type macvlan mode bridge
ip addr add 10.1.30.244/24 dev shim1
ip link set shim1 up

ip link add spk-shim2 link ens256 type macvlan mode bridge
ip addr add 10.1.10.244/24 dev shim2
ip link set shim2 up

Application Prerequisites and Configuration

In the SPK controller values.yaml file, configure your application workload namespaces in the watchNamespace block.

watchNamespace:
  - "spk-apps"
  - "spk-apps-2"

Since we want SPK to do the source NAT for pod egress traffic, we create an IPPool with natOutgoing set to false. This IPPool will be used by the applications.

apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  name: app-ip-pool
spec:
  cidr: 10.124.0.0/16
  ipipMode: Always
  natOutgoing: false

Ensure that the application namespaces are annotated like below to use the IPPool.

kubectl annotate namespace spk-apps "cni.projectcalico.org/ipv4pools"=[\"app-ip-pool\"]
kubectl annotate namespace spk-apps-2 "cni.projectcalico.org/ipv4pools"=[\"app-ip-pool\"]

Deploy your application.
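The deployment shown next attaches a secondary macvlan interface through a NetworkAttachmentDefinition, which it references by name. That object is not included in this article, so the following is only a minimal sketch: the master interface (ens256) and the 10.1.10.0/24 subnet come from the host setup above, while the IPAM type, address range, and gateway are assumptions you would adjust for your environment.

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf-ens256-myapp1
  namespace: spk-apps
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "ens256",
    "mode": "bridge",
    "ipam": {
      "type": "host-local",
      "subnet": "10.1.10.0/24",
      "rangeStart": "10.1.10.170",
      "rangeEnd": "10.1.10.180",
      "gateway": "10.1.10.1"
    }
  }'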
See below for an example deployment manifest for the application. Note that I'm attaching a secondary macvlan interface, which is in addition to the default calico interface. It will get an IP address automatically as configured in the corresponding NetworkAttachmentDefinition. Note the specific labels used by SPK, which allow you to enable traffic routing to SPK on a per-application basis. Additionally, the enableSecureSPK=true label will instruct SPK to create additional listeners that will pick up traffic coming from the pod's secondary macvlan interface. (These listeners are shown later.)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  annotations:
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: '[ { "name": "macvlan-conf-ens256-myapp1" } ]'
      labels:
        app: nginx
        enableSecureSPK: "true"
        enablePseudoCNI: "true"
        secureSPKPort: "8050"
        secureSPKCNFPodIfName: "net1"
        secondaryCNINodeIfName: "spk-shim2"
        primaryCNINodeIfName: "spk-shim"
        secureSPKNetAttachDefName: "macvlan-conf-ens256"
        secureSPKEgressVlanName: "external"

SPK Configuration

Deploy the custom resource that will configure a listener that does two things:
- listen for traffic coming from the internal vlan, or the calico interface of targeted application pods
- SNAT the traffic so that the source IP is an IP address of SPK

apiVersion: "k8s.f5net.com/v1"
kind: F5SPKEgress
metadata:
  name: egress-crd
  namespace: ns-f5-spk
spec:
  #leave commented out for snat automap
  #egressSnatpool: "snatpool-1"
  dualStackEnabled: false
  maxTmmReplicas: 1
  vlans:
    vlanList: [internal]
    disableListedVlans: false

Next, we deploy the CSRC DaemonSet that dynamically creates the kernel rules and routes for us. Note that I am setting the daemonsetMode to "pseudoCNI", which means I want to route both primary (calico) and secondary interface traffic to SPK.

values-csrc.yaml

image:
  repository: gitlab.tky.lab:5050/registry/spk/200

# daemonset mode: regular, secureSPK, or pseudoCNI
#daemonsetMode: "regular"
daemonsetMode: "pseudoCNI"
ipFamily: "ipv4"

imageCredentials:
  name: f5-common-pull-creds

config:
  iptableid: 200
  interfacename: "spk-shim"
  #tmmpodinterfacename: "internal"

json:
  ipPoolCidrInfo:
    cidrList:
    - name: cluster-cidr0
      value: "172.21.107.192/26"
    - name: cluster-cidr1
      value: "172.21.66.0/26"
    - name: cluster-cidr2
      value: "10.124.18.192/26"
    - name: node-cidr0
      value: "10.1.11.0/24"
    - name: node-cidr1
      value: "10.1.10.0/24"
  ipPoolList:
  - name: default-ipv4-ippool
    value: "172.21.64.0/18"
  - name: spk-app1-pool
    value: "10.124.0.0/16"

Testing

You can then log onto the worker node that is hosting the applications and confirm that the routes and rules are created. Essentially, the rules make the calico interfaces use a custom route table that ensures the default route is via the SPK.

# ip rule
0:      from all lookup local
32254:  from all to 172.21.107.192/26 lookup main
32254:  from all to 172.21.66.0/26 lookup main
32254:  from all to 10.124.18.192/26 lookup main
32254:  from all to 172.28.15.0/24 lookup main
32254:  from all to 10.1.10.0/24 lookup main
32257:  from 10.124.18.207 lookup ns-f5-spkshim1ipv4257   <-- match on app pod1 calico IP
32257:  from 10.124.18.211 lookup ns-f5-spkshim1ipv4257   <-- match on app pod2 calico IP
32258:  from 10.1.10.171 lookup ns-f5-spkshim2ipv4258     <-- match on app pod1 macvlan IP
32258:  from 10.1.10.170 lookup ns-f5-spkshim2ipv4258     <-- match on app pod2 macvlan IP
32766:  from all lookup main
32767:  from all lookup default

# ip route show table ns-f5-spkshim1ipv4257
default via 10.1.30.242 dev shim1
10.1.30.242 via 10.1.30.242 dev shim1

# ip route show table ns-f5-spkshim2ipv4258
default via 10.1.10.160 dev shim2
10.1.10.160 via 10.1.10.160 dev shim2

If I then execute a curl command towards a server that exists in a network segment beyond SPK, the application pod hits the CSRC-configured ip rule and is forwarded to its new default gateway, which is SPK. Since SPK has Source NAT enabled, the "Client IP" from the server's perspective is the self IP of SPK. This means you can apply firewall policies to application workloads in a deterministic way, as well as have visibility into what kind of traffic is coming out of your clusters.

k exec -it nginx-7d7699f86c-hsx48 -n my-app1 -- curl 10.1.70.30
================================================
(ASCII art banner)
================================================
Node Name: F5 Docker vLab
Short Name: server.tky.f5se.com
Server IP: 10.1.70.30
Server Port: 80
Client IP: 10.1.30.242
Client Port: 59248
Client Protocol: HTTP
Request Method: GET
Request URI: /
host_header: 10.1.70.30
user-agent: curl/7.88.1

A simple tcpdump command run in the debug container of SPK confirms that the pod's calico interface IP (10.124.18.192) is the source IP of the incoming traffic on SPK, and after Source NAT is applied using the self IP of SPK (10.1.30.242), the packet is sent out towards the server.

/tcpdump -nni 0.0 tcp port 80
----snip----
12:34:51.964200 IP 10.124.18.192.48194 > 10.1.70.30.80: Flags [P.], seq 1:75, ack 1, win 225, options [nop,nop,TS val 4077628853 ecr 777672368], length 74: HTTP: GET / HTTP/1.1 in slot1/tmm0 lis=egress-ipv4 port=1.1 trunk=
----snip----
12:34:51.964233 IP 10.1.30.242.48194 > 10.1.70.30.80: Flags [P.], seq 1:75, ack 1, win 225, options [nop,nop,TS val 4077628853 ecr 777672368], length 74: HTTP: GET / HTTP/1.1 out slot1/tmm0 lis=egress-ipv4 port=1.1 trunk=

Let's take a look at egress application traffic that is using the secondary macvlan interface. In this case, I have not configured Source NAT, so SPK will forward the traffic out, retaining the original pod IP.

k exec -it nginx-7d7699f86c-g4hpv -n my-app1 -- curl 10.1.80.30
================================================
(ASCII art banner)
================================================
Node Name: F5 Docker vLab
Short Name: ue-client3
Server IP: 10.1.80.30
Server Port: 80
Client IP: 10.1.10.170
Client Port: 56436
Client Protocol: HTTP
Request Method: GET
Request URI: /
host_header: 10.1.80.30
user-agent: curl/7.88.1

Another tcpdump command run in the debug container of SPK shows that it receives the above GET request and sends it out without Source NAT in this case.
/tcpdump -nni 0.0 tcp port 80
----snip----
13:54:40.389281 IP 10.1.10.170.56436 > 10.1.80.30.80: Flags [P.], seq 1:75, ack 1, win 229, options [nop,nop,TS val 4087715696 ecr 61040149], length 74: HTTP: GET / HTTP/1.1 in slot1/tmm0 lis=secure-egress-ipv4-virtual-server port=1.2 trunk=
----snip----
13:54:40.389305 IP 10.1.10.170.56436 > 10.1.80.30.80: Flags [P.], seq 1:75, ack 1, win 229, options [nop,nop,TS val 4087715696 ecr 61040149], length 74: HTTP: GET / HTTP/1.1 out slot1/tmm0 lis=secure-egress-ipv4-virtual-server port=1.2 trunk=

You can use the familiar tmctl command inside the debug container of SPK to confirm the statistics for both listeners that process the pod's primary (egress-ipv4) and secondary (secure-egress-ipv4-virtual-server) interface egress traffic.

/tmctl -f /var/tmstat/blade/tmm0 virtual_server_stat -s name,clientside.bytes_in,clientside.bytes_out,no_staged_acl_match_accept -w 200
name                               clientside.bytes_in  clientside.bytes_out  no_staged_acl_match_accept
---------------------------------  -------------------  --------------------  --------------------------
secure-egress-ipv4-virtual-server                  394                   996                           1
egress-ipv4                                        394                  1011                           1

Now that you have egress traffic routed to the SPK data plane pods, you can use the F5-published custom resource definitions (CRDs) below to apply granular access control lists (ACLs) to meet your security requirements. The firewall configuration is defined as code (YAML manifests), so it natively integrates with Kubernetes and is portable across clusters.

- F5BigContextGlobal: CRD to define the default global firewall behavior and reference the firewall policy.
- F5BigFwPolicy: CRD to define your firewall rules.

In summary, the above diagrams and configuration snippets show how SPK can capture all egress traffic in a dynamic way so that you don't have to sacrifice security and control in your ever-changing Kubernetes clusters.

Modern Application Architecture - Cloud-Native Architecture Platform - Part 1 of 3
Overview

In this multi-part series of articles, I will share how to leverage F5's BIG-IP, Aspen Mesh service mesh and NGINX ingress controller to create a cloud-agnostic, resilient and secure cloud-native architecture platform to support your cloud-native application requirements. Cloud-native is a term used to describe container-based environments. Microservices is an architectural pattern/approach/style where applications are structured into multiple loosely coupled, independent services delivered in a containerized form factor. Hence, for simplicity, in this series of articles, cloud-native architecture and microservices architecture platform (cPaaS) are used interchangeably.

Note: Although BIG-IP is not in the category of cloud-native apps (in comparison with F5's Service Proxy for Kubernetes (SPK), which is cloud-native), BIG-IP is currently feature rich and plays a key role in this reference architecture pattern. For existing customers who have BIG-IP, this could be a first step for an organic transition from existing BIG-IP to cloud-native SPK.

Part 1 – Cloud-Native Architecture Platform
- Formulate a cloud-agnostic architecture pattern.
- Architect/build a Kubernetes platform for development (based on k3d with k3s).
- Architect and integrate key technologies for this pattern:
  - BIG-IP
  - Aspen Mesh Service Mesh + Jaeger distributed tracing
  - NGINX Ingress Controller
  - Container Ingress Services (CIS)
  - Application Services 3 (AS3)
  - Grafana/Prometheus monitoring

Part 2 – Traffic Management, Security and Observability
- Establish a common ingress/egress architecture pattern:
  - For HTTP-based applications (e.g. http/http2 web applications)
  - For non-HTTP (TCP/UDP) based applications (e.g. MQTT)
- Uplift cloud-native apps protection with a Web Application Firewall:
  - Aspen Mesh Service Mesh: Bookinfo apps, Httpbin apps
  - NGINX Ingress Controller: Grafana apps
- Grafana and Prometheus monitoring for Aspen Mesh and NGINX

Part 3 – Unified Authentication (AuthN) and Authorization (AuthZ) for cloud-native apps
- OAUTH authentication (Azure AD and Google)
- Legacy Windows AD authentication

Why cloud-native architecture platform?

The proliferation of Internet-based applications, software and usage on mobile devices has grown substantially over the years. It is no longer a prediction. It is a fact. According to the 2021 Global Digital suite of reports from "We Are Social" and "Hootsuite", there are over 5 billion unique mobile users and over 4 billion users actively connected to the Internet. This excludes connected devices such as the Internet of Things, the servers that power the Internet, and so on. With COVID-19 and the rise of 5G rollout, edge and cloud computing, connected technologies became even more important and part of people's lives. As the saying goes, "Application/Software powered the Internet and Internet is the backbone of the world economy".
Today, business leaders require their IT and digital transformation teams to be more innovative by supporting the creation of business-enabling applications, which means they are no longer just responsible for the availability of networks and servers, but also for building a robust platform to support software development and application delivery that is secure, reliable and innovative. To support that vision, organisations need a robust platform to support and deliver an application portfolio that is able to support the business. Because a strong application portfolio is crucial for business success and increased market value, IT or digital transformation teams may need to ask: "What can we do to embrace and support the proliferation of applications, empower those with creative leadership, foster innovative development, and ultimately help create market value?"

A robust and secure cloud-native platform for modern application architecture and frictionless consumption of application services are some of the requirements for success. As of this writing (April 2021), cloud-native/microservices architecture is the architecture pattern of choice for modern developers, and the Kubernetes platform is the industry de facto standard for microservices/container orchestration.

What is the GOAL in this series of articles?

Strategise, formulate and build a common, resilient and scalable cloud-native reference architecture and Platform as a Service to handle modern application workloads. This architecture pattern is modular and cloud-agnostic and delivers consistent security and application services. To establish the reference architecture, we leverage an open source upstream Kubernetes platform on a single Linux VM with a multitude of open source and commercial tools, and integrate that with F5's BIG-IP as the unified ingress/egress and unified access to cloud-native applications hosted on the following types of workload:

- Service Mesh workload
- Non-Service Mesh workload
- TCP/UDP workload

Note: We can leverage F5's Service Proxy for Kubernetes (SPK) as the unified ingress/egress. However, F5's BIG-IP will be used in this article. You can skip the steps for building the Kubernetes cluster if you already have an existing multi-node Kubernetes cluster, minikube or any public cloud hosted Kubernetes (e.g. EKS/AKS/GKE).

Requirement

- 1 x Ubuntu VM (ensure you have a working Ubuntu 20.x with docker installed)
- vCPU: 8 (can run with 4 or 6 vCPU with reduced functionality)
- HDD: Ideally 80G. (Only required for persistent storage. Can run with 40G; update persistent volume sizes appropriately.)

Modern Application Architecture (cPaaS) - Reference Architecture

BIG-IP - Service Proxy
- Central ingress and/or egress for cloud-native workloads.
- For applications deployed in service mesh namespaces, F5 Service Proxy proxies ingress traffic to the Aspen Mesh ingressgateway.
- For applications deployed in non-service mesh namespaces, F5 Service Proxy proxies ingress traffic to the NGINX ingress controller.
- For applications that require bypassing the ingress (e.g. TCP/UDP apps), F5 Service Proxy proxies directly to those pod IPs.
- F5 Service Proxy provides centralized security protection by enforcing Web Application and API Protection (WAAP) firewall policy on cloud-native workloads.
- F5 Service Proxy provides SSL inspection (SSL bridge and/or offload) to the Aspen Mesh ingressgateway and/or NGINX ingress controller.
- F5 Service Proxy can be deployed to send traffic to multiple Kubernetes clusters - for inter- and/or intra-cluster resiliency.
- Global Server Load Balancing (F5 DNS) can be enabled on F5 Service Proxy to provide geo-redundancy for multi-cloud workloads.
- F5 Service Proxy acts as the unified access management with F5's Access Policy Manager (APM). Cloud-native applications can delegate AuthN to F5 Service Proxy (multiple AuthN mechanisms such as OIDC/OAuth/NTLM/SAML, etc.) while the cloud-native application performs AuthZ.
- The F5 Service Proxy ingress only needs to be set up once. Cloud-native app FQDNs are all mapped to the same ingress.

Aspen Mesh Service Mesh
- Centralized ingress for service mesh namespaces.
- Enterprise-ready, hardened and fully supported Istio-based service mesh by F5.
- Provides all capabilities delivered by Istio (Connect, Secure, Control and Observe).
- Provides traffic management and security for East-West communication.
- Reduces the operational complexities of managing a service mesh:
  - Aspen Mesh Rapid Resolve / MTTR (Mean Time To Resolution) - quickly detect and identify causes of cluster and application errors.
  - Service and health indicator graph for service visibility and observability.
  - Istio Vet.
- Enhanced security:
  - Secure by Default with zero trust policy
  - Secure Ingress
  - Enhanced RBAC
- Carrier-grade features:
  - Aspen Mesh Packet Inspector

NGINX Ingress Controller
- Centralized ingress for non-service mesh namespaces.
- Works with both NGINX and NGINX Plus and supports the standard ingress features - content-based routing and TLS/SSL termination.
- Supports load balancing WebSocket, gRPC, TCP and UDP applications.

Container Ingress Services (CIS)
- Works with a container orchestration environment (e.g. Kubernetes) to dynamically create L4/L7 services on BIG-IP and load balance network traffic across those services. It monitors the orchestration API server (e.g. the lifecycle of Kubernetes pods) and dynamically updates the BIG-IP configuration based on changes made to the containerized application. In this setup, it monitors the Aspen Mesh ingressgateway, the NGINX ingress controller and TCP/UDP-based apps, and dynamically updates the BIG-IP configuration.

AS3
- Application Services 3 extension is a flexible, low-overhead mechanism for managing application-specific configuration on a BIG-IP system, leveraging a declarative model with a single JSON declaration.

High Resiliency Cloud-Native Apps

The reference architecture above can be treated as an "atomic" unit or a "repeatable pattern". This "atomic" unit can be deployed in multiple public clouds (e.g. EKS/AKS/GKE, etc.) or private clouds. Multiple "atomic" units can be combined to form highly resilient service clusters. F5 DNS/GSLB can be deployed to monitor the health of each individual cloud-native app inside each individual "atomic" cluster and dynamically redirect users to a healthy app. Each cluster can run active-active, and applications can be distributed to both clusters.

How applications achieve high resiliency with F5 DNS.

Multi-Cloud, Multi-Cluster Service Resiliency

Conceptual view of how an "atomic" unit / cPaaS can be deployed in multi-cloud, and how each of these clusters can be combined to form a service resiliency mesh by leveraging F5 DNS and F5 BIG-IP.

Note: The following sections are a hands-on guide to build the reference architecture described above (the "atomic" unit), with the exception of the multi-cloud, multi-cluster service resiliency mesh. K3D + K3S will be used for the sole purpose of development and testing. Conceptual architecture for this setup is shown above.

Note: The following instructions are intended as a quick start guide. Please refer to the respective installation guides for details.
Scripts used in this setup can be found on GitHub.

Install Docker

sudo apt-get update
sudo apt-get -y install \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg-agent \
  software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"

sudo apt-get update -y
sudo apt-get install docker-ce=5:19.03.15~3-0~ubuntu-focal docker-ce-cli=5:19.03.15~3-0~ubuntu-focal -y

fbchan@sky:~$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES

fbchan@sky:~$ sudo systemctl enable --now docker.service

Install Helm

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Install calico binary

curl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.15.0/calicoctl
chmod u+x calicoctl
sudo mv calicoctl /usr/local/bin/

Install kubectl binary

curl -LO https://dl.k8s.io/release/v1.19.9/bin/linux/amd64/kubectl
chmod u+x kubectl
sudo mv kubectl /usr/local/bin

Install supporting tools

sudo apt install jq -y
sudo apt install net-tools -y

Install k9s

This component is optional. It is a terminal-based UI to interact with Kubernetes clusters.

wget https://github.com/derailed/k9s/releases/download/v0.24.2/k9s_Linux_x86_64.tar.gz
tar zxvf k9s_Linux_x86_64.tar.gz
sudo mv k9s /usr/local/bin/

Ensure Linux volume group expanded

Depending on your setup, by default your Ubuntu 20.x VM may not use all of the allocated volume. This step expands the filesystem to all allocated disk space.

fbchan@sky:~$ sudo lvm
lvm> lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
  Size of logical volume ubuntu-vg/ubuntu-lv changed from 39.50 GiB (10112 extents) to <79.00 GiB (20223 extents).
  Logical volume ubuntu-vg/ubuntu-lv successfully resized.
lvm> quit
Exiting.

fbchan@sky:~$ sudo resize2fs /dev/ubuntu-vg/ubuntu-lv
resize2fs 1.45.5 (07-Jan-2020)
Filesystem at /dev/ubuntu-vg/ubuntu-lv is mounted on /; on-line resizing required
old_desc_blocks = 5, new_desc_blocks = 10
The filesystem on /dev/ubuntu-vg/ubuntu-lv is now 20708352 (4k) blocks long.

fbchan@sky:~$ df -kh
Filesystem                          Size  Used Avail Use% Mounted on
udev                                7.8G     0  7.8G   0% /dev
tmpfs                               1.6G  1.2M  1.6G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv    78G  7.1G   67G  10% /
..

Disable Ubuntu Firewall

sudo ufw disable
sudo apt-get remove ufw -y

Ubuntu VM

fbchan@sky:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:6c:ab:0b brd ff:ff:ff:ff:ff:ff
    inet 10.10.2.10/24 brd 10.10.2.255 scope global ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe6c:ab0b/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:4c:15:2e:1e brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

fbchan@sky:~$ ip r
default via 10.10.2.1 dev ens160 proto static
10.10.2.0/24 dev ens160 proto kernel scope link src 10.10.2.10
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown

Install k3d + k3s

K3D in a nutshell.
K3D is a lightweight wrapper to run k3s (Rancher Lab's minimal Kubernetes distribution) in docker. K3D makes it very easy to create single- and multi-node k3s clusters in docker, e.g. for local development on Kubernetes. For details, please refer to the k3d documentation.

Install k3d

wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v4.2.0 bash

Create k3s cluster

- Spin up 1 x server/master and 3 x agent/worker nodes.
- Disable traefik and the service load balancer, as we don't need them - we are leveraging BIG-IP as the unified ingress/egress.
- Replace the default flannel CNI with the calico CNI.
- Set up TLS SAN certificates so that we can access the k3s API remotely.

k3d cluster create cpaas1 --image docker.io/rancher/k3s:v1.19.9-k3s1 \
  --k3s-server-arg "--disable=servicelb" \
  --k3s-server-arg "--disable=traefik" \
  --k3s-server-arg --tls-san="10.10.2.10" \
  --k3s-server-arg --tls-san="k3s.foobz.com.au" \
  --k3s-server-arg '--flannel-backend=none' \
  --volume "$(pwd)/calico-k3d.yaml:/var/lib/rancher/k3s/server/manifests/calico.yaml" \
  --no-lb --servers 1 --agents 3

### Run above command or cluster-create.sh script provided ###
##############################################################
fbchan@sky:~/Part-1$ ./cluster-create.sh
WARN[0000] No node filter specified
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-cpaas1'
INFO[0000] Created volume 'k3d-cpaas1-images'
INFO[0001] Creating node 'k3d-cpaas1-server-0'
INFO[0001] Creating node 'k3d-cpaas1-agent-0'
INFO[0001] Creating node 'k3d-cpaas1-agent-1'
INFO[0001] Creating node 'k3d-cpaas1-agent-2'
INFO[0001] Starting cluster 'cpaas1'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-cpaas1-server-0'
INFO[0014] Starting agents...
INFO[0014] Starting Node 'k3d-cpaas1-agent-0'
INFO[0024] Starting Node 'k3d-cpaas1-agent-1'
INFO[0034] Starting Node 'k3d-cpaas1-agent-2'
INFO[0045] Starting helpers...
INFO[0045] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0052] Successfully added host record to /etc/hosts in 4/4 nodes and to the CoreDNS ConfigMap
INFO[0052] Cluster 'cpaas1' created successfully!
INFO[0052] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0052] You can now use it like this:
kubectl config use-context k3d-cpaas1
kubectl cluster-info

### Docker k3d spun up multi-node Kubernetes using docker ###
#############################################################
fbchan@sky:~/Part-1$ docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED              STATUS              PORTS                     NAMES
2cf40dca2b0a   rancher/k3s:v1.19.9-k3s1   "/bin/k3s agent"         About a minute ago   Up 52 seconds                                 k3d-cpaas1-agent-2
d5c49bb65b1a   rancher/k3s:v1.19.9-k3s1   "/bin/k3s agent"         About a minute ago   Up About a minute                             k3d-cpaas1-agent-1
6e5bb6119b61   rancher/k3s:v1.19.9-k3s1   "/bin/k3s agent"         About a minute ago   Up About a minute                             k3d-cpaas1-agent-0
ea154b36e00b   rancher/k3s:v1.19.9-k3s1   "/bin/k3s server --d…"   About a minute ago   Up About a minute   0.0.0.0:37371->6443/tcp   k3d-cpaas1-server-0

### All Kubernetes pods are in running states ###
#################################################
fbchan@sky:~/Part-1$ kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-node-95gqb                          1/1     Running   0          5m11s
kube-system   calico-node-fdg9f                          1/1     Running   0          5m11s
kube-system   calico-node-klwlq                          1/1     Running   0          5m6s
kube-system   local-path-provisioner-7ff9579c6-mf85f     1/1     Running   0          5m11s
kube-system   metrics-server-7b4f8b595-7z9vk             1/1     Running   0          5m11s
kube-system   coredns-66c464876b-hjblc                   1/1     Running   0          5m11s
kube-system   calico-node-shvs5                          1/1     Running   0          4m56s
kube-system   calico-kube-controllers-5dc5c9f744-7j6gb   1/1     Running   0          5m11s

Setup Calico on Kubernetes

For details, please refer to another DevCentral article. Note: You do not need to set up calico for Kubernetes in EKS, AKS (Azure CNI with advanced networking mode) or GKE deployments. The cloud provider managed Kubernetes underlay provides the required connectivity from BIG-IP to the Kubernetes pods.

sudo mkdir /etc/calico
sudo vi /etc/calico/calicoctl.cfg

Content of calicoctl.cfg (replace /home/xxxx/.kube/config with the location of your kubeconfig file):
---------------------------------------
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/home/xxxx/.kube/config"
--------------------------------------

fbchan@sky:~/Part-1$ sudo calicoctl create -f 01-bgpconfig.yml
Successfully created 1 'BGPConfiguration' resource(s)
fbchan@sky:~/Part-1$ sudo calicoctl create -f 02-bgp-peer.yml
Successfully created 1 'BGPPeer' resource(s)

fbchan@sky:~/Part-1$ sudo calicoctl get node -o wide
NAME                  ASN       IPV4            IPV6
k3d-cpaas1-agent-1    (64512)   172.19.0.4/16
k3d-cpaas1-server-0   (64512)   172.19.0.2/16
k3d-cpaas1-agent-2    (64512)   172.19.0.5/16
k3d-cpaas1-agent-0    (64512)   172.19.0.3/16
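The contents of 01-bgpconfig.yml and 02-bgp-peer.yml are not shown here (they are in the GitHub scripts). As a rough sketch only: the ASN (64512) matches the calicoctl output above, while the peerIP is an assumption - it should be the BIG-IP self IP that can reach the calico nodes in your environment.

# 01-bgpconfig.yml (sketch)
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: true
  asNumber: 64512

# 02-bgp-peer.yml (sketch)
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bigip-peer
spec:
  peerIP: 10.10.2.1   # assumption - use the BIG-IP self IP reachable from the k3d nodes
  asNumber: 64512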
On BIG-IP

Set up BGP peering with Calico. Ensure you have enabled Advanced Networking on BIG-IP (Network >> Route Domains >> 0, under "Dynamic Routing Protocol", Enabled: BGP).

[root@mel-prod:Active:Standalone] config #
[root@mel-prod:Active:Standalone] config # imish
mel-prod.foobz.com.au[0]>en
mel-prod.foobz.com.au[0]#config t
Enter configuration commands, one per line.  End with CNTL/Z.
mel-prod.foobz.com.au[0](config)#router bgp 64512
mel-prod.foobz.com.au[0](config-router)#bgp graceful-restart restart-time 120
mel-prod.foobz.com.au[0](config-router)#neighbor calico-k8s peer-group
mel-prod.foobz.com.au[0](config-router)#neighbor calico-k8s remote-as 64512
mel-prod.foobz.com.au[0](config-router)#neighbor 172.19.0.2 peer-group calico-k8s
mel-prod.foobz.com.au[0](config-router)#neighbor 172.19.0.3 peer-group calico-k8s
mel-prod.foobz.com.au[0](config-router)#neighbor 172.19.0.4 peer-group calico-k8s
mel-prod.foobz.com.au[0](config-router)#neighbor 172.19.0.5 peer-group calico-k8s
mel-prod.foobz.com.au[0](config-router)#wr
Building configuration...
[OK]
mel-prod.foobz.com.au[0](config-router)#end
mel-prod.foobz.com.au[0]#show running-config
!
no service password-encryption
!
router bgp 64512
 bgp graceful-restart restart-time 120
 neighbor calico-k8s peer-group
 neighbor calico-k8s remote-as 64512
 neighbor 172.19.0.2 peer-group calico-k8s
 neighbor 172.19.0.3 peer-group calico-k8s
 neighbor 172.19.0.4 peer-group calico-k8s
 neighbor 172.19.0.5 peer-group calico-k8s
!
line con 0
 login
line vty 0 39
 login
!
end

Validate Calico pod network advertised to BIG-IP via BGP

The Calico pod network routes are advertised onto the BIG-IP routing table. Because BIG-IP routes every pod network to a single Ubuntu VM (10.10.2.10), we need to ensure that the Ubuntu VM routes the respective pod networks to the right docker container agent/worker nodes. In an environment where each master/worker is a dedicated VM/physical host with a different IP, BIG-IP BGP will send traffic to the designated host. Hence, the following is only required for this setup, where all Kubernetes nodes run on the same VM. Based on my environment, here are the additional routes I need to add on my Ubuntu VM.

fbchan@sky:~/Part-1$ sudo ip route add 10.53.68.192/26 via 172.19.0.4
fbchan@sky:~/Part-1$ sudo ip route add 10.53.86.64/26 via 172.19.0.3
fbchan@sky:~/Part-1$ sudo ip route add 10.53.115.0/26 via 172.19.0.5
fbchan@sky:~/Part-1$ sudo ip route add 10.53.194.192/26 via 172.19.0.2

If everything is working properly, from BIG-IP you should be able to ping Kubernetes pod IPs directly. You can find those pod network IPs via 'kubectl get pod -A -o wide'.

root@(mel-prod)(cfg-sync Standalone)(Active)(/Common)(tmos)# ping -c 2 10.53.86.66
PING 10.53.86.66 (10.53.86.66) 56(84) bytes of data.
64 bytes from 10.53.86.66: icmp_seq=1 ttl=62 time=1.59 ms
64 bytes from 10.53.86.66: icmp_seq=2 ttl=62 time=1.33 ms

--- 10.53.86.66 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 1.336/1.463/1.591/0.133 ms

root@(mel-prod)(cfg-sync Standalone)(Active)(/Common)(tmos)# ping -c 2 10.53.86.65
PING 10.53.86.65 (10.53.86.65) 56(84) bytes of data.
64 bytes from 10.53.86.65: icmp_seq=1 ttl=62 time=1.03 ms
64 bytes from 10.53.86.65: icmp_seq=2 ttl=62 time=24.5 ms

--- 10.53.86.65 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.036/12.786/24.537/11.751 ms

Note: Do not persist these Linux routes on the VM. The route distribution will change when you reboot or restart your VM, so you need to query the new distribution and re-create the Linux routes after each reboot.
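Since these routes disappear after a reboot, a small helper script can re-apply them. This is an illustrative sketch only, using the CIDR-to-node mapping from my environment above; re-check the mapping with 'calicoctl get node -o wide' and 'kubectl get pod -A -o wide' after each restart, as the block assignments may change.

#!/usr/bin/env bash
# Re-create the pod-network routes after a VM restart (illustrative sketch only).
# The mapping below reflects the example environment and must be re-validated
# whenever the cluster or VM is restarted.
declare -A POD_ROUTES=(
  ["10.53.68.192/26"]="172.19.0.4"
  ["10.53.86.64/26"]="172.19.0.3"
  ["10.53.115.0/26"]="172.19.0.5"
  ["10.53.194.192/26"]="172.19.0.2"
)

for cidr in "${!POD_ROUTES[@]}"; do
  # 'replace' is idempotent: it adds the route, or updates it if it already exists
  sudo ip route replace "${cidr}" via "${POD_ROUTES[$cidr]}"
done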
Summary on Part-1

What we achieved so far:
- A basic understanding of why a cloud-native architecture platform is so important.
- Established a cloud-agnostic and cloud-native reference architecture and understood its key components and their roles.
- A working environment for our Part 2 series - Traffic Management, Security and Observability.

When cloud-native meets monolithic
According to the CNCF Cloud Native Survey 2020, published on 17 Nov 2020, containers in production jumped 300% from the first survey in 2016. In 2020 itself, usage increased to 92%, from 84% in 2019 (https://www.cncf.io/cncf-cloud-native-survey-2020). In addition, according to F5's 2020 State of Application Services Report (https://www.f5.com/state-of-application-services-report#get-the-report), 80% of organisations are executing on digital transformation, and these organisations are more likely to deploy modern app architectures and app services at a higher rate. So, cloud-native modern application architecture is gaining great momentum; industries and the majority of organisations are embracing and pivoting toward cloud-native technologies. Cloud-native provides a multitude of benefits - which is not the subject of this article.

F5's BIG-IP (a.k.a. classic BIG-IP) is not cloud-native. How can F5's classic BIG-IP be relevant in the cloud-native world? This article demonstrates how cloud-native meets and needs classic BIG-IP (monolithic).

FYI: F5's BIG-IP SPK (Service Proxy for Kubernetes) is BIG-IP delivered in a containerized form factor. It is cloud-native (https://www.f5.com/products/service-proxy-for-kubernetes). BIG-IP SPK will be discussed in a future article.

How do both of them need each other?

Typically, it takes years for an organisation embracing cloud-native to move to fully cloud-native technologies/infrastructure. There are use cases where modern cloud-native applications need to integrate with traditional or existing monolithic applications. Modern apps living alongside traditional apps or infrastructure is common for most enterprises. F5's classic BIG-IP can bridge those gaps. This article is about use cases that we solved in one of our customer environments, where we leveraged classic BIG-IP to bridge gaps between cloud-native and monolithic apps.

First, let's be clear on what cloud-native really means. To set the record straight, cloud-native doesn't just mean running workloads in the cloud, although that is partially true. There are many definitions and perspectives on what cloud-native really means. For the sake of this article, I will base it on the official definition of cloud-native from the CNCF (Cloud Native Computing Foundation): "Cloud native technologies empower organisations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach. These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil."

My takeaway (characteristics of cloud-native):
- Scalable apps designed to run in dynamic environments (public, private and hybrid clouds).
- Typically delivered in the form of microservices/containers as loosely coupled systems.
- Easily adapted to and integrated into automation systems.
- CI/CD is part of its ecosystem - frequent release, patch and update cycles.
- Immutable (cattle service model) instead of mutable (pets service model).
- Declarative APIs.

Kubernetes is one example of a cloud-native technology.

What is the Problem Statement?

Uniquely identify apps/workloads/containers deployed in a Kubernetes platform and apply appropriate external security controls (e.g.
a network/application firewall) to containerized apps when they communicate with existing legacy applications deployed outside of Kubernetes (egress from Kubernetes).

What are those challenges?

- Containers that egress off Kubernetes are, by design, source network address translated (SNAT) from the Kubernetes nodes. An external security control system such as a network firewall may not be able to identify the correct authorised source app, as it is hidden behind NATing.
- How to ensure that only authorised apps deployed in the Kubernetes environment are allowed to access critical legacy apps (e.g. billing or financial systems) protected by a network/application firewall?
- For a multi-tenant environment with multiple namespaces in Kubernetes, how to ensure pods or namespaces have a unique identity and enforce controlled access to egress endpoints (outside of Kubernetes)? Unique workload identity is important for end-to-end correlation, audit and traceability.
- How to provide an end-to-end, correlated view from source to target apps?

How F5 solved this with classic BIG-IP ADC and Aspen Mesh Service Mesh

Architecture Overview

This solution article is an extension of the original article "Expanding Service Mesh without Envoy", published by my colleague Eric Chen. For details of that article, please refer to https://aspenmesh.io/expanding-service-mesh-without-envoy/

Aspen Mesh, an innovation from F5, is an enterprise-ready service mesh built on Istio. It is a tested and hardened distribution of Istio with complete support by F5. For details, please refer to https://aspenmesh.io. For the purpose of this solution, Aspen Mesh and Istio will be used interchangeably.

Solution in a nutshell

- Each pod has its own workload identity, part of the native capabilities of Aspen Mesh (AM). The identity is in the form of a client certificate managed by AM (istiod/Citadel) and generated from an organisation intermediate CA loaded onto the Istio control plane.
- BIG-IP is on-boarded with a workload identity (client certificate) signed by the same organisation intermediate CA (or root CA). This client certificate is NOT managed by AM.
- An F5 Virtual Server (VS) is configured with a client-side profile to perform mutual TLS (mTLS).
- The F5 VS is registered onto AM, so the F5 VS service can be discovered from the internal service registry.
- On egress, the pod performs mTLS with the F5 VS. As the F5 client certificate is issued from the same organisation intermediate CA, both parties negotiate, mutually trust each other and exchange mTLS keys.
- An optional iRule can be implemented on BIG-IP to inspect the pod identity (certificate SAN) upon successful mTLS and permit/reject the request.
- BIG-IP implements SNAT and presents a unique network identifier (e.g. IP address) to the network firewall.

Environment

- BIG-IP LTM (v14.x)
- Aspen Mesh - v1.16.x
- Kubernetes 1.18.x

Use Case

Permit microservices apps (e.g. bookinfo) to use the organisation forward proxy (tinyproxy), which sits behind the enterprise network firewall, to get to the Internet, and reject all other microservices apps on the same Kubernetes cluster.

Classic BIG-IP

Only the vs_aspenmesh-bookinfo-proxy-srv-mtls-svc configuration will be demonstrated. Similar configuration can be applied to other VSs.

F5 Virtual Server configuration

F5's VS client profile configuration: "Client Certificate = require" requires pods deployed inside AM to present a valid, trusted client certificate. An optional iRule permits only pods from the bookinfo namespace.

Optional irule_bookinfo_spiffee to permit bookinfo apps and reject other apps:

when CLIENTSSL_CLIENTCERT {
    set client_cert [SSL::cert 0]
"Client cert extensions - [X509::extensions $client_cert]" #Split the X509::extensions output on each newline character and log the values foreach item [split [X509::extensions [SSL::cert 0]] \n] { log local0. "$item" } if {[SSL::cert 0] ne ""} { set santemp [findstr [X509::extensions [SSL::cert 0]] "Subject Alternative Name" 43 " "] set spiffe [findstr $santemp "URI" 4] log local0. "Source SPIFFEE-->$spiffe" if { ($spiffe starts_with "spiffe://cluster.local/ns/bookinfo/") } { log local0. "Aspen Mesh mTLS: PEMITTED==>$spiffe" # Allow and SNAT from defined SNAT Pool } else { log local0. "Aspen Mesh mTLS: REJECTED==>$spiffe" reject } } } Note: As of Istio version 1.xx, client-side envoy (istio sidecar) will start a mTLS handshake with server-side BIG-IP VS (F5's client side profile). During the handshake, the client-side envoy also does a secure naming check to verify that the service account presented in the server certificate is authorised to run the target service. Then only the client-side envoy and server-side BIG-IP will establish a mTLS connection. Hence, the client certificate generated loaded onto BIG-IP have to conform to the secure naming information, which maps the server identities to the service names. For details on secure naming, please refer to https://istio.io/latest/docs/concepts/security/#secure-naming Example to generate a SPIFFE friendly certificate openssl req -new -out bookinfo.istio-spiffee-req.pem -subj "/C=AU/ST=Victoria/L=Melbourne/O=F5/OU=SE/CN=bookinfo.spiffie" -keyout bookinfo.istio-spiffee-key.pem -nodes cat > v3.ext <<-EOF authorityKeyIdentifier=keyid,issuer basicConstraints=CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth subjectAltName = @alt_names [alt_names] DNS.1=spiffe://cluster.local/ns/bookinfo/sa/default EOF openssl x509 -req -sha512 -days 365 \ -extfile v3.ext \ -CA ../ca1/ca-cert.pem -CAkey ../ca1/ca-key.pem -CAcreateserial \ -in bookinfo.istio-spiffee-req.pem \ -out bookinfo.istio-spiffee-cert.pem where ca1 is the intermediate CA use for Aspen Mesh. 
Aspen Mesh Pods and Services before registration of F5 VS

$ kubectl -n bookinfo get pod,svc
NAME                                                READY   STATUS    RESTARTS   AGE
pod/details-v1-78d78fbddf-4vmdr                     2/2     Running   0          4d1h
pod/productpage-v1-85b9bf9cd7-f6859                 2/2     Running   0          4d1h
pod/ratings-v1-6c9dbf6b45-9ld6f                     2/2     Running   0          4d1h
pod/reviews-v1-564b97f875-bjx2r                     2/2     Running   0          4d1h
pod/reviews-v2-568c7c9d8f-zzn8r                     2/2     Running   0          4d1h
pod/reviews-v3-67b4988599-pdk25                     2/2     Running   0          4d1h
pod/traffic-generator-productpage-fc97f5595-pdhvv   2/2     Running   0          6d11h

NAME                                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/details                         ClusterIP   10.235.14.186   <none>        9080/TCP   6d11h
service/productpage                     ClusterIP   10.235.37.112   <none>        9080/TCP   6d11h
service/ratings                         ClusterIP   10.235.40.239   <none>        9080/TCP   6d11h
service/reviews                         ClusterIP   10.235.1.21     <none>        9080/TCP   6d11h
service/traffic-generator-productpage   ClusterIP   10.235.17.158   <none>        80/TCP     6d11h

Register bigip-proxy-svc onto Aspen Mesh

$ istioctl register -n bookinfo bigip-proxy-svc 10.4.0.201 3128 --labels apps=bigip-proxy
2020-12-15T23:14:33.286854Z  warn  Got 'services "bigip-proxy-svc" not found' looking up svc 'bigip-proxy-svc' in namespace 'bookinfo', attempting to create it
2020-12-15T23:14:33.305890Z  warn  Got 'endpoints "bigip-proxy-svc" not found' looking up endpoints for 'bigip-proxy-svc' in namespace 'bookinfo', attempting to create them

$ kubectl -n bookinfo get svc
NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
bigip-proxy-svc                 ClusterIP   10.235.45.250   <none>        3128/TCP   26s
details                         ClusterIP   10.235.14.186   <none>        9080/TCP   6d11h
productpage                     ClusterIP   10.235.37.112   <none>        9080/TCP   6d11h
ratings                         ClusterIP   10.235.40.239   <none>        9080/TCP   6d11h
reviews                         ClusterIP   10.235.1.21     <none>        9080/TCP   6d11h
traffic-generator-productpage   ClusterIP   10.235.17.158   <none>        80/TCP     6d11h

$ kubectl -n bookinfo describe svc bigip-proxy-svc
Name:              bigip-proxy-svc
Namespace:         bookinfo
Labels:            apps=bigip-proxy
Annotations:       alpha.istio.io/kubernetes-serviceaccounts: default
Selector:          <none>
Type:              ClusterIP
IP:                10.235.45.250
Port:              3128  3128/TCP
TargetPort:        3128/TCP
Endpoints:         10.4.0.201:3128
Session Affinity:  None
Events:            <none>
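As the warnings above indicate, istioctl register simply creates a Service and an Endpoints object pointing at the BIG-IP virtual server. If your istioctl build no longer ships the register command, an equivalent manifest is sketched below, assembled from the values shown above; verify the annotation and labels against your Aspen Mesh/Istio version before using it.

apiVersion: v1
kind: Service
metadata:
  name: bigip-proxy-svc
  namespace: bookinfo
  labels:
    apps: bigip-proxy
  annotations:
    alpha.istio.io/kubernetes-serviceaccounts: default
spec:
  ports:
  - name: tcp-3128
    port: 3128
    protocol: TCP
    targetPort: 3128
---
apiVersion: v1
kind: Endpoints
metadata:
  name: bigip-proxy-svc
  namespace: bookinfo
subsets:
- addresses:
  - ip: 10.4.0.201
  ports:
  - name: tcp-3128
    port: 3128
    protocol: TCP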
To test egress from a bookinfo pod to the external forward proxy (tinyproxy), run "curl" to the Internet (www.f5.com) pointing at the bigip-proxy-svc registered on Aspen Mesh. The example below executes the curl binary inside the "traffic-generator-productpage" pod.

$ kubectl -n bookinfo exec -it $(kubectl -n bookinfo get pod -l app=traffic-generator-productpage -o jsonpath={.items..metadata.name}) -c traffic-generator -- curl -Ikx bigip-proxy-svc:3128 https://www.f5.com
HTTP/1.0 200 Connection established
Proxy-agent: tinyproxy/1.8.3

HTTP/1.1 200 OK
Content-Type: text/html;charset=utf-8
Content-Length: 132986
Connection: keep-alive
Accept-Ranges: bytes
Cache-Control: no-cache="set-cookie"
Content-Security-Policy: frame-ancestors 'self' *.cybersource.com *.salesforce.com *.force.com ; form-action *.cybersource.com *.salesforce.com *.force.com 'self'
Date: Wed, 16 Dec 2020 06:19:48 GMT
ETag: "2077a-5b68b3c0c5be0"
Last-Modified: Wed, 16 Dec 2020 02:00:07 GMT
Strict-Transport-Security: max-age=16070400;
X-Content-Type-Options: nosniff
X-Dispatcher: dispatcher1uswest2
X-Frame-Options: SAMEORIGIN
X-Vhost: publish
Via: 1.1 sin1-bit21, 1.1 24194e89802a1a492c5f1b22dc744e71.cloudfront.net (CloudFront)
Vary: Accept-Encoding
X-Cache: Hit from cloudfront
X-Amz-Cf-Pop: MEL50-C2
X-Amz-Cf-Id: 7gE6sEaBP9WonZ0KjngDsr90dahHWFyDG0MwbuGn91uF7EkEJ_wdrQ==
Age: 15713

Logs shown on classic BIG-IP: classic BIG-IP successfully authenticates bookinfo with mTLS and permits access.

Logs shown on the forward proxy (tinyproxy): the source IP is SNATed to the IP configured on classic BIG-IP. That IP is also allowed on the network firewall.

From another namespace (e.g. sm-apigw-a), try to access bigip-proxy-svc. The attempt is rejected by classic BIG-IP. The example below executes the curl binary in the "nettools" pod.

$ kubectl -n sm-apigw-a get pod
NAME                           READY   STATUS    RESTARTS   AGE
httpbin-api-78bdd794bd-hfwkj   2/2     Running   2          22d
nettools-9497dcc86-nhqmr       2/2     Running   2          22d
podinfo-bbb7bf7c-j6wcs         2/2     Running   2          22d
sm-apigw-a-85696f7455-rs9zh    3/3     Running   0          7d21h

$ kubectl -n sm-apigw-a exec -it $(kubectl -n sm-apigw-a get pod -l app=nettools -o jsonpath={.items..metadata.name}) -c nettools -- curl -kIx bigip-proxy-svc.bookinfo.svc.cluster.local:3128 https://devcentral.f5.com
curl: (56) Recv failure: Connection reset by peer
command terminated with exit code 56

Classic BIG-IP logs: classic BIG-IP rejects the sm-apigw-a namespace from using the bigip-proxy-svc service.

Summary

Aspen Mesh is a cloud-native, enterprise-ready, Istio-based service mesh. Classic BIG-IP is a feature-rich application delivery controller (ADC). With Aspen Mesh, microservices are securely authenticated via mTLS with classic BIG-IP. Classic BIG-IP is able to securely authenticate microservices apps and deliver application services based on your business and security requirements.

This article addresses egress use cases. What about ingress to the Kubernetes cluster? How do classic BIG-IP or cloud-native SPK coherently work together with Aspen Mesh to provide secure and consistent multi-cloud, multi-cluster application delivery services to your Kubernetes environment? This will be shared in a future article. Stay tuned.