Securing and Scaling Hybrid Applications with F5 NGINX (Part 1)
If you are using Kubernetes in production, then you are likely using an ingress controller. The ingress controller is the core engine managing traffic entering and exiting the Kubernetes cluster. Because the ingress controller is a deployment running inside the cluster, how do you route traffic to the ingress controller? How do you route external traffic to internal Kubernetes Services?

Cloud providers offer a simple, convenient way to expose Kubernetes Services using an external load balancer. Simply deploy a managed Kubernetes service (EKS, GKE, AKS) and create a Kubernetes Service of type LoadBalancer. The cloud provider will host and deploy a load balancer that provides a public IP address, and external users can connect to Kubernetes Services using this public entry point. However, this integration only applies to managed Kubernetes services hosted by cloud providers. If you are deploying Kubernetes in private cloud or on-prem environments, you will need to deploy your own load balancer and integrate it with the Kubernetes cluster. Furthermore, Kubernetes load balancing integrations in the cloud are limited to TCP load balancing and generally lack visibility into metrics, logs, and traces.

We propose:

- A solution that applies regardless of the underlying infrastructure running your workloads
- Guidance around sizing to avoid bottlenecks from high traffic volumes
- Application delivery use cases that go beyond basic TCP/HTTP load balancing

In the solution depicted below, I deploy NGINX Plus as the external LB service for Kubernetes and route traffic to the NGINX Ingress Controller. The NGINX Ingress Controller then routes the traffic to the application backends. The NLK (NGINX Load Balancer for Kubernetes) deployment is a new controller by NGINX that monitors specified Kubernetes Services and sends API calls to manage the upstream endpoints of the NGINX external load balancer. In this article, I will deploy the components inside the Kubernetes cluster as well as NGINX Plus as the external load balancer.

Note: I like to deploy both the NLK and the Kubernetes cluster in the same subnet to avoid network issues. This is not a hard requirement.

Prerequisites

The blog assumes you have experience operating in Kubernetes environments. In addition, you have the following:

- Access to a Kubernetes environment: bare metal, Rancher Kubernetes Engine (RKE), VMware Tanzu Kubernetes (VTK), Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), Microsoft Azure Kubernetes Service (AKS), or Red Hat OpenShift
- NGINX Ingress Controller, deployed in the Kubernetes cluster. Installation instructions can be found in the documentation.
- NGINX Plus, deployed on a VM or bare metal with SSH access. This will be the external LB service for the Kubernetes cluster. Installation instructions can be found in the documentation. You must have a valid license for NGINX Plus; you can get started today by requesting a 30-day free trial.

Setting up the Kubernetes environment

I start by deploying the back-end applications. You can deploy your own applications, or you can deploy our basic café application as an example.

$ kubectl apply -f cafe.yaml

Now I will configure routes and TLS settings for the ingress controller:

$ kubectl apply -f cafe-secret.yaml
$ kubectl apply -f cafe-virtualserver.yaml
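If you are not using the sample manifests, a minimal VirtualServer for the café application might look like the sketch below. The host, secret, upstream, and service names are assumptions based on the standard café example; adjust them to your deployment.

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe-vs
spec:
  host: cafe.example.com
  tls:
    secret: cafe-secret
  upstreams:
    - name: tea
      service: tea-svc
      port: 80
    - name: coffee
      service: coffee-svc
      port: 80
  routes:
    - path: /tea
      action:
        pass: tea
    - path: /coffee
      action:
        pass: coffee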
To ensure the ingress rules are successfully applied, you can examine the output of kubectl get vs. The VirtualServer definition should be in the Valid state.

NAMESPACE   NAME      STATE   HOST               IP   PORTS
default     cafe-vs   Valid   cafe.example.com

Setting up NGINX Plus as the external LB

A fresh install of NGINX Plus provides the default.conf file in the /etc/nginx/conf.d directory. We will add two additional files into this directory. Simply copy the following files into your /etc/nginx/conf.d directory:

- dashboard.conf: enables the real-time monitoring dashboard for NGINX Plus
- kube_lb.conf: the NGINX configuration for the external load balancer for Kubernetes. You can change this configuration file to fit your requirements. In part 1 of this series, we enable basic routing and TLS for one cluster.

You will also need to generate a TLS certificate and key and place them in the /etc/ssl/nginx folder of the NGINX Plus instance. For the sake of this example, we will generate a self-signed certificate with openssl.

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout default.key -out default.crt -subj "/CN=NLK"

Note: Using self-signed certificates is for testing purposes only. In a live production environment, we recommend using a secure vault that holds your secrets and trusted CAs (Certificate Authorities).

Now I can validate the configuration and reload nginx for the changes to take effect.

$ nginx -t
$ nginx -s reload

I can now connect to the NGINX Plus dashboard by opening a browser and entering http://<external-ip-nginx>:9000/dashboard.html#upstreams

The HTTP upstream table should be empty, as we have not deployed the NLK Controller yet. We will do that in the next section.

Installing the NLK Controller

You can install the NLK Controller as a Kubernetes deployment that configures the upstream endpoints of the external load balancer using the NGINX Plus API. First, we create the NLK namespace:

$ kubectl create ns nlk

And apply the RBAC settings for the NLK deployment:

$ kubectl apply -f serviceaccount.yaml
$ kubectl apply -f clusterrole.yaml
$ kubectl apply -f clusterrolebinding.yaml
$ kubectl apply -f secret.yaml

The next step is to create a ConfigMap defining the API endpoint of the NGINX Plus external load balancer. The API endpoint is used by the NLK Controller to configure the NGINX Plus upstream endpoints. We simply modify the nginx-hosts field in the manifest from our GitHub repository to the IP address of the NGINX external load balancer:

nginx-hosts: http://<nginx-plus-external-ip>:9000/api

Apply the updated ConfigMap and deploy the NLK controller:

$ kubectl apply -f nkl-configmap.yaml
$ kubectl apply -f nkl-deployment.yaml

I can verify that the NLK controller deployment is running and the ConfigMap data is applied.

$ kubectl get pods -o wide -n nlk
$ kubectl describe cm nginx-config -n nlk

You should see the NLK deployment in the Running status, and the URL should be defined under nginx-hosts. The URL is the NGINX Plus API endpoint of the external load balancer.
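For reference, the applied ConfigMap should look roughly like the sketch below; the metadata names follow the repository defaults referenced above.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: nlk
data:
  nginx-hosts: "http://<nginx-plus-external-ip>:9000/api"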
Now that the NLK Controller is successfully deployed, the external load balancer is ready to route traffic to the cluster. The final step is deploying a Kubernetes Service of type NodePort to expose the Kubernetes cluster to NGINX Plus.

$ kubectl apply -f nodeport.yaml

There are a couple of things to note about the NodePort Service manifest. The fields on lines 7 and 14 are required for the NLK deployment to configure the external load balancer appropriately:

- The nginxinc.io/nlk-cluster1-https annotation (line 7)
- The port name (line 14), which must match the upstream block definition in the NGINX Plus configuration (see line 42 in kube_lb.conf) and be prefixed with nlk-

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  annotations:
    nginxinc.io/nlk-cluster1-https: "http" # Must be added
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 443
      protocol: TCP
      name: nlk-cluster1-https
  selector:
    app: nginx-ingress

Once the service is applied, you can note the assigned NodePort selecting the NGINX Ingress Controller deployment. In this example, that node port is 32222.

$ kubectl get svc -o wide -n nginx-ingress

NAME            TYPE       CLUSTER-IP   PORT(S)         SELECTOR
nginx-ingress   NodePort   x.x.x.x      443:32222/TCP   app=nginx-ingress

If I reconnect to my NGINX Plus dashboard, the upstream tab should be populated with the worker node IPs of the Kubernetes cluster, matching the node port of the nginx-ingress Service (32222). You can list the node IPs of your cluster to make sure they match the IPs in the dashboard upstream tab.

$ kubectl get nodes -o wide | awk '{print $6}'

INTERNAL-IP
10.224.0.6
10.224.0.5
10.224.0.4

Now I can connect to the Kubernetes application from our local machine. The hostname we used in our example (cafe.example.com) should resolve to the IP address of the NGINX Plus load balancer.

Wrapping it up

Most enterprises deploying Kubernetes in production will install an ingress controller; it is the de facto standard for application delivery in container orchestrators like Kubernetes. DevOps and NetOps engineers are now looking for guidance on how to scale out their Kubernetes footprint in the most efficient way possible. Because enterprises are embracing the hybrid approach, they will need to implement their own integrations outside of cloud providers. The solution we propose:

- Is applicable to hybrid environments (particularly on-prem)
- Provides sizing information to avoid bottlenecks from large traffic volumes
- Offers enterprise load balancing capabilities that stretch beyond a basic TCP LoadBalancer Service

In the next part of our series, I will dive into the third bullet point in much more detail and cover Zero Trust use cases with NGINX Plus, providing an extra layer of security in your hybrid model.
F5 BIG-IP per application Red Hat OpenShift cluster migrations

Overview

OpenShift migrations are typically done when it is desired to minimise disruption time when performing cluster upgrades. Disruptions can especially occur when performing big changes in the cluster, such as changing the CNI from OpenShiftSDN to OVNKubernetes. OpenShift cluster migrations are well covered for applications by using Red Hat's Migration Toolkit for Containers (MTC). The F5 BIG-IP has the role of network redirector indicated in the Network considerations chapter. The F5 BIG-IP can perform per-L7-route migration without service disruption, hence allowing migration or roll-back on a per-application basis, eliminating disruption and de-risking the maintenance window.

How it works

As mentioned above, the traffic redirection is done on a per-L7-route basis. This is true regardless of how these L7 routes are implemented: ingress controller, API manager, service mesh, or a combination of these. This L7 awareness is achieved by using F5 BIG-IP's Container Ingress Services (CIS) controller for Kubernetes/OpenShift and its multi-cluster functionality, which can expose in a single VIP the L7 routes of services hosted in multiple Kubernetes/OpenShift clusters. This is shown in the next picture.

For a migration operation, an independent blue/green strategy is used for each L7 route, where blue refers to the application in the older cluster and green refers to the application in the newer cluster. For each L7 route, a weight is specified for each blue or green backend (as in an A/B strategy). This is shown in the next picture.

In this example, the migration scenario uses OpenShift's default ingress controller (HAProxy) as an in-cluster ingress controller, where the Route CR is used to define the L7 routes. For each L7 route defined in the HAProxy tier, an L7 route is defined in the F5 BIG-IP tier. This 1:1 mapping is what allows the per-application granularity. The VirtualServer CR is used for the F5 BIG-IP. If desired, it is also possible to use Route resources for the F5 BIG-IP.

Next are shown the manifests required for the F5 BIG-IP for a given L7 route, in this case https://www.migration.com/shop (alias route-b):

apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  name: route-b
  namespace: openshift-ingress
  labels:
    f5cr: "true"
spec:
  host: www.migration.com
  virtualServerAddress: "10.1.10.106"
  hostGroup: migration.com
  tlsProfileName: reencrypt-tls
  profileMultiplex: "/Common/oneconnect-32"
  pools:
    - path: /shop
      service: router-default-route-b-ocp1
      servicePort: 443
      weight: 100
      alternateBackends:
        - service: router-default-route-b-ocp2
          weight: 0
      monitor:
        type: https
        name: /Common/www.migration.com-shop
        reference: bigip
---
apiVersion: v1
kind: Service
metadata:
  annotations:
  name: router-default-route-b-ocp1
  namespace: openshift-ingress
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  annotations:
  name: router-default-route-b-ocp2
  namespace: openshift-ingress
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
  type: NodePort

The CIS multi-cluster feature will search for the specified services in both clusters.
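For example, to start shifting this route to the new cluster, only the weights in the VirtualServer need to change; a sketch based on the manifest above, sending half the traffic to each cluster:

  pools:
    - path: /shop
      service: router-default-route-b-ocp1
      servicePort: 443
      weight: 50 # blue (older) cluster, previously 100
      alternateBackends:
        - service: router-default-route-b-ocp2
          weight: 50 # green (newer) cluster, previously 0

Setting the blue weight to 0 and the green weight to 100 completes the migration for this route only; all other routes are unaffected, which is what enables per-application migration and roll-back.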
It is up to DevOps to ensure that blue services are only present in the cluster designated as blue (in this case router-default-route-b-ocp1) and green services are only present in the cluster designated as green (in this case router-default-route-b-ocp2). It is important to remark that the Route manifests for HAProxy (or any other ingress solution used) don't require any modification. That is, this migration mechanism is transparent to the application developers.

Demo

You can see this feature in action in the next video. The manifests used in the demo can be found in the following GitHub repository: https://github.com/f5devcentral/f5-bd-cis-demo/tree/main/crds/demo-mc-twotier-haproxy-noshards

Next steps

Try it today! CIS is open-source software and is included in your support entitlement. If you want to learn more about CIS and the CIS multi-cluster features, the following blog articles are suggested:

- F5 BIG-IP deployment with OpenShift - platform and networking options
- F5 BIG-IP deployment with OpenShift - publishing application options
- F5 BIG-IP deployment with OpenShift - multi-cluster architectures
Taming your "Chaos Monkey" with F5 Distributed Cloud Platform

Overview

Recently, my family returned from a holiday trip to Japan. While the holiday itself was amazing, this article isn't about the experiences or the chaos my children caused; rather, it's about the significant role technology and applications played in enhancing our vacation and our lives in the digital world. Please also note that "application" in this context is used loosely to refer to the software, applications, AI apps, APIs, and systems that power the digital world.

Throughout our journey, we found ourselves heavily reliant on various applications, ranging from weather forecasts to navigation aids. We utilized weather apps to stay informed and dress appropriately, GPS apps to navigate bustling cities and public transportation, and mobile payment apps for seamless transactions. Social media platforms allowed us to update family and friends on our whereabouts, while continuous access to mobile internet (via 4G/5G connectivity) kept us tethered to the digital world. Additionally, we interacted with numerous indirect applications and systems, such as ordering food in cafes, various ticketing systems, or using Automated Teller Machines (ATMs) for cash withdrawals.

Reflecting on these travel experiences prompts consideration of the potential implications had these apps not existed or malfunctioned during our visit. While it might not have been catastrophic, it would have certainly detracted from the smoothness and enjoyment of our holiday. For instance, the failure of my mobile payment app could have hindered transactions. Worse, had a life-threatening event occurred while the network was down, accessing emergency services would have been impossible: a potentially catastrophic situation that I wouldn't want to imagine.

The crux of the matter is the paramount importance of ensuring that these applications remain always available, secure, and resilient. They have become integral to modern life, not just enhancing convenience but also playing a crucial role in safety and well-being. Therefore, efforts to maintain their reliability and functionality are imperative in navigating our increasingly digital world.

In our increasingly interconnected world, reliance on technology is already ubiquitous. The resilience of apps and systems is now a paramount concern for any organization, occupying the top priority in the minds of many executives (CxOs). When these systems fail, causing disruptions for customers or citizens, CxOs may find themselves compelled to respond publicly or even testify before various authorities, demonstrating their due diligence in managing and maintaining these critical assets. Hence, organizations need strategies to assess and analyze the failure modes and the impact of critical application failures.

Numerous methodical strategies exist to study and ensure the resilience of apps and systems, such as Failure Modes, Effects, and Criticality Analysis (FMECA), Failure Mode and Effects Analysis (FMEA), and Chaos Engineering. While the intricacies of these methodologies won't be covered in depth here, it's important to introduce them and highlight their shared objective: mitigating business and availability risks to prevent harm to the business when apps or systems encounter failures.

The focus of this article is to demonstrate how F5 Distributed Cloud (F5 XC) Secure Multi-Cloud Networking (MCN) for Kubernetes can address some of these failure scenarios, particularly through the lens of Chaos Engineering.
Chaos Engineering involves deliberately inducing failures in a controlled environment to test system resilience. In this demonstration, I'll leverage an open source Chaos Engineering platform to simulate failure scenarios within a running production system. I will use a sample financial application, Arcadia Finance, as our subject for chaos testing. This application consists of microservices distributed across heterogeneous Kubernetes environments, including Amazon EKS, Azure AKS, Google GKE, and Red Hat OpenShift Container Platform (OCP). F5 XC Mesh for Kubernetes can run on any of these Kubernetes platforms and itself forms a secure mesh fabric to orchestrate app connectivity, delivery, security, and observability between those heterogeneous container platforms.

Regardless of the specific strategy employed, the goal remains consistent: implementing risk prevention strategies to safeguard against the potential harm to the business caused by app or system failures. Please note that a full end-to-end demo video can be found at the end of this article. Below are some of the mentioned methodologies; please refer to the respective literature for details.

FMECA / FMEA

From "find failure and fix it" to "anticipate failure and prevent it". Extracted from https://www.getmaintainx.com/learning-center/what-is-fmeca-failure-mode-effects-and-critical-analysis/

FMECA is a risk assessment methodology in which you determine failure modes, assess their level of risk to your equipment or system, and rate the failure based on that level of risk. The U.S. military invented this FMECA analysis technique in the '40s and continues to use it even today under MIL-STD-1629A. FMECA is a commonly used technique for performing failure detection and criticality analysis on systems to improve their performance. In addition, it typically provides input for Maintainability Analysis and Logistics Support Analysis, both of which rely on FMECA data. With Industry 4.0, many industries are adopting a predictive maintenance strategy for their equipment. To prioritize failure modes and identify mechanical system and subsystem issues for predictive maintenance, FMECA is a widely used tool.

Chaos Engineering

Excerpt from https://www.gremlin.com/community/tutorials/chaos-engineering-the-history-principles-and-practice

Chaos Engineering is a disciplined approach to identifying failures before they become outages. By proactively testing how a system responds under stress, you can identify and fix failures before they end up in the news. Chaos Engineering lets you compare what you think will happen to what actually happens in your systems. You literally "break things on purpose" to learn how to build more resilient systems.

Note: Chaos Monkey serves as a critical tool in chaos engineering; it enables engineering teams to simulate failures across multiple configurations and monitor the system's behaviour in real time. It was a set of tools originally open-sourced by Netflix. In this demo, the open source Litmus platform will be used instead of Chaos Monkey.

Litmus Chaos Platform

Litmus is an open source Chaos Engineering platform that enables teams to identify weaknesses and potential outages in infrastructures by inducing chaos tests in a controlled way. It is a cloud-native Chaos Engineering framework with cross-cloud support, and a CNCF Incubating project with adoption across several organizations. Its mission is to help Kubernetes SREs and developers find weaknesses in both non-Kubernetes and Kubernetes platforms and applications by providing a complete Chaos Engineering framework and associated chaos experiments. Litmus adopts a "Kubernetes-native" approach, defining chaos intent declaratively via Kubernetes custom resources (CRs).
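To illustrate that declarative approach, a ChaosEngine CR injecting the 4-second network latency used later in this demo might look like the sketch below. The namespace, labels, and service account are assumptions; adjust them to your environment.

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: arcadia-frontend-latency
  namespace: arcadia
spec:
  appinfo:
    appns: arcadia
    applabel: app=frontend
    appkind: deployment
  engineState: active
  chaosServiceAccount: litmus-admin
  experiments:
    - name: pod-network-latency
      spec:
        components:
          env:
            - name: NETWORK_LATENCY
              value: "4000" # milliseconds, i.e. the 4s used in this demo
            - name: TOTAL_CHAOS_DURATION
              value: "120" # seconds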
The Litmus platform consists of a Control Plane, an Execution Plane, and the Chaos Fault flow. Please refer to the official documentation for details: https://docs.litmuschaos.io/docs/introduction/what-is-litmus

In this setup, the Chaos Center (the chaos control plane) is deployed on F5 XC App Stack, and the Chaos Execution Plane (the Litmus agents/infrastructure) is installed on the respective Kubernetes platforms. The Litmus agents communicate with the Chaos Center via the F5 XC secure mesh fabric. The traffic graph below shows the Litmus agents on the respective Kubernetes clusters communicating with the Chaos Center over websocket connections. These private connections are secured and protected by F5 XC.

Chaos Engineering Demo - High Level Demo Architecture

In this demo environment, Litmus agents are deployed on both Amazon EKS and Red Hat OCP. Arcadia Finance, comprising multiple microservices (applications and APIs), is distributed across heterogeneous container platforms. The demo will focus on two specific use cases:

- Use Case #1: Frontend Application Latency. Demonstrating network latency impacting the frontend application (EKS), resulting in unresponsive app behavior within critical timeframes.
- Use Case #2: Production Deployment Issues. Showcasing the deployment of an updated version of the money-transfer API container (OCP) that causes the money-transfer API pods to enter a CrashLoopBackOff state, hindering production functionality.

Litmus (open source) is capable of injecting more than 50 chaos experiments. For example, on Kubernetes: pod kill, pod delete, network latency, pod network disruption, node failure, and many more. Please refer to the Litmus documentation for the complete list of chaos experiments.

F5 Distributed Cloud Platform Customer Edge Sites

Arcadia Finance Sample Application Construct

Litmus Chaos Environment for this Demo

- Litmus agents installed, registered, and connected to the Litmus Chaos Center via the F5 XC mesh fabric
- Litmus Chaos Center installed on F5's App Stack Kubernetes
- Chaos experiment created for the Arcadia frontend
- 4s network latency injected into the Arcadia frontend
- Continuous probe (health check) of the frontend to ensure the application is still functioning and accessible

Without multi-cluster resiliency

Injected chaos network latency. Running the chaos experiment workflow. Logs shown on F5 XC before adding multi-cluster resiliency.

As shown, after the 4s latency is injected into the frontend (served from foobz-mesh-eks1), the user/probe is unable to reach the frontend, and subsequent requests return a 503 error, as no available site can handle the introduced network latency. The end user receives a 503 error: "application down". The chaos experiment completes with a failure and a Resilience Score of 0%.

End Result - Application unavailable. Resilience Score - 0%

With multi-cluster resiliency

Google Cloud GKE is introduced as part of a backup origin pool for the Arcadia frontend via a CI/CD pipeline. In the event that the frontend on EKS fails or is unable to handle traffic, traffic will be steered/redirected to GKE. A similar chaos experiment is run and completes successfully with a Resilience Score of 100%. The F5 XC request logs show traffic seamlessly transitioning from "foobz-mesh-eks1" to "foobz-mesh-gke1".

End Result - Application Always Available.
Resilience Score - 100%

A similar backup site ("foobz-mesh-aks1") is added via the CI/CD pipeline to the money-transfer apps/API to provide high redundancy. Rogue software is then deployed onto the money-transfer API pods, causing the money-transfer pods to enter a CrashLoopBackOff state. From the XC logs, you can see the money-transfer service served from foobz-ves-ocp-sg transition to foobz-mesh-aks1 seamlessly, while the refer-friend module remains on foobz-ves-ocp-sg, as the refer-friend apps/API are healthy there.

End-Result with F5 Distributed Cloud Mesh

Demo Video

Summary

F5 is delivering on its mission to make it significantly easier to secure, deliver, and optimize any app, any API, anywhere. We strive to bring a better digital world to life. Our teams empower organizations across the globe to create, secure, and run applications that enhance how we experience our evolving digital world.
Kubernetes architecture options with F5 Distributed Cloud Services

Summary

F5 Distributed Cloud Services (F5 XC) can both integrate with your existing Kubernetes (K8s) clusters and/or host a K8s workload itself. Within these distinctions, we have multiple architecture options. This article explores four major architectures in ascending order of sophistication and advantages.

- Architecture #1: External Load Balancer (Secure K8s Gateway)
- Architecture #2: CE as a pod (K8s site)
- Architecture #3: Managed Namespace (vK8s)
- Architecture #4: Managed K8s (mK8s)

Kubernetes Architecture Options

As K8s continues to grow, options for how we run K8s and integrate with existing K8s platforms continue to grow. F5 XC can both integrate with your existing K8s clusters and/or run a managed K8s platform itself. Multiple architectures exist within these offerings too, so I was thoroughly confused when I first heard about these possibilities. A colleague recently laid it out for me in a conversation: "Michael, listen up: XC can either integrate with your K8s platform, run inside your K8s platform, host virtual K8s (Namespace-aaS), or run a K8s platform in your environment." I replied, "That's great. Now I have a mental model for differentiating between architecture options." This article will overview these architectures and provide 101-level context: when, how, and why would you implement these options?

Side note 1: F5 XC concepts and terms

F5 XC is a global platform that can provide networking and app delivery services, as well as compute (K8s workloads). We call each of our global PoPs a Regional Edge (RE). REs are highly meshed to form the backbone of the global platform. They connect your sites, they can expose your services to the Internet, and they can run workloads. This platform is extensible into your data center by running one or more XC Nodes in your network, also called a Customer Edge (CE). A CE is a compute node in your network that registers to our global control plane and is then managed by a customer as SaaS. The registration of one or more CEs creates a customer site in F5 XC. A CE can run on a hypervisor (VMware/KVM/etc), a hyperscaler (AWS, Azure, GCP, etc), bare metal, or even as a K8s pod, and can be deployed in HA clusters. XC Mesh functionality provides connectivity between sites, security services, and observability. Optionally, in addition, XC App Stack functionality allows a large and arbitrary number of managed clusters to be logically grouped into a virtual site with a single K8s mgmt interface. So where Mesh services provide the networking, App Stack services provide the Kubernetes compute mgmt. Our first two architectures require Mesh services only, and our last two require App Stack.

Side note 2: Service-to-service communication

I'm often asked how to allow services between clusters to communicate with each other. This is possible and easy with XC. Each site can publish services to every other site, including K8s sites. This means that any K8s service can be reachable from other sites you choose. And this can be true in any of the architectures below, although more granular controls are possible with the more sophisticated architectures. I'll explore this common question more in a separate article.

Architecture 1: External Load Balancer (Secure K8s Gateway)

In a Secure Kubernetes Gateway architecture, you have integration with your existing K8s platform, using the XC node as the external load balancer for your K8s cluster. In this scenario, you create a ServiceAccount and kubeconfig file to configure XC.
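A minimal sketch of that setup is below. The names and the exact RBAC verbs/resources are assumptions for illustration; grant only what your discovery use case needs.

# create a dedicated service account for XC service discovery
kubectl create serviceaccount xc-discovery -n kube-system

# allow it to read the objects that discovery needs
kubectl create clusterrole xc-discovery --verb=get,list,watch \
  --resource=services,endpoints,pods,nodes
kubectl create clusterrolebinding xc-discovery \
  --clusterrole=xc-discovery \
  --serviceaccount=kube-system:xc-discovery

The service account's token and the cluster's CA certificate and API server address then go into the kubeconfig file you upload to XC.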
The XC node then performs service discovery against your K8s API server. I've covered this process in a previous article, but the advantage is that you can integrate with existing K8s platforms. This allows exposing both NodePort and ClusterIP services via the XC node. XC is not hosting any workloads in this architecture, but it is exposing your services to your local network, or remote sites, or the Internet. In the diagram above, I show a web application being accessed from a remote site (and/or the Internet), where the origin pool is a NodePort service discovered in a K8s cluster.

Architecture 2: Run a site within a K8s cluster (K8s site type)

Creating a K8s site is easy - just deploy a single manifest found here. This file deploys multiple resources in your cluster, and together these resources work to provide the services of a CE and create a customer site. I've heard this referred to as "running a CE inside of K8s" or "running your CE as a pod". However, when I say "CE node" I'm usually referring to a discrete compute node like a VM or piece of hardware; this architecture is actually a group of pods and related resources that run within K8s to create an XC customer site. With XC running inside your existing cluster, you can expose services within the cluster by DNS name because the site will resolve these from within the cluster. Your service can then be exposed anywhere by the F5 XC platform. This is similar to Architecture 1 above, but with this model, your site is simply a group of pods within K8s. An advantage here is the ability to expose services of other types (e.g. ClusterIP). A site deployed into a K8s cluster will only support Mesh functionality and does not support App Stack functionality (i.e., you cannot run a cluster within your cluster). In this architecture, XC acts as a K8s ingress controller with built-in application security. It also enables Mesh features, such as publishing of other sites' services on this site, and publishing of this site's discovered services on other sites.

Architecture 3: vK8s (Namespace-as-a-Service)

If the services you use include App Stack capabilities, then architectures #3 and #4 are possible for you. In these scenarios, our XC node actually runs your K8s workloads. We are no longer integrating XC with your existing K8s platform. XC is the platform.

A simple way to run K8s workloads is to use a virtual K8s (vK8s) architecture. This could be referred to as a "managed Namespace" because by creating a vK8s object in XC you get a single namespace in a virtual cluster. Your Namespace can be fully hosted (deployed to REs) or run on your VMs (CEs), or both. Your kubeconfig file will allow access to your Namespace via the hosted API server. Via your regular kubectl CLI (or via the web console) you can create/delete/manage K8s resources (Deployments, Services, Secrets, ServiceAccounts, etc) and view application resource metrics. This is great if you have workloads that you want to deploy to remote regions where you do not have infrastructure and would prefer to run in F5's REs, or if you have disparate clusters across multiple sites and you'd like to manage multiple K8s clusters via a single centralized, virtual cluster.
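Day-to-day use looks like any other cluster; a sketch, assuming you've downloaded the vK8s kubeconfig from the XC console:

$ export KUBECONFIG=./vk8s-kubeconfig.yaml
$ kubectl get deployments                # workloads in your managed Namespace
$ kubectl apply -f my-app-deployment.yaml # deployed to the REs and/or CEs you selected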
Best practice guard rails for vK8s

With a vK8s architecture, you don't have your own cluster, but rather a managed Namespace. So there are some restrictions (for example, you cannot run a container as root, bind to a privileged port, or bind to the host network). You also cannot create CRDs, ClusterRoles, PodSecurityPolicies, or Namespaces, so K8s operators are not supported. In short, you don't have a managed cluster, but a managed Namespace on a virtual cluster.

Architecture 4: mK8s (Managed K8s)

In a managed K8s (mK8s, also known as physical K8s or pK8s) deployment, we have an enterprise-level K8s distribution that is run at your site. This means you can use XC to deploy/manage/upgrade the K8s infrastructure, while you manage the Kubernetes resources. The benefits include what is typical for 3rd-party K8s mgmt solutions, but also some key differentiators:

- multi-cloud, with automation for Azure, AWS, and GCP environments
- consumed by you as SaaS
- enterprise-level traffic control natively
- allows a large and arbitrary number of managed clusters to be logically managed with a single K8s mgmt interface

You can enable kubectl access against your local cluster and disable the hosted API server, so your kubeconfig file can point to a global URL or a local endpoint on-prem. Another benefit of mK8s is that you are running a full K8s cluster at your site, not just a Namespace in a virtual cluster. The restrictions that apply to vK8s (see above) do not apply to mK8s, so you could run privileged pods if required, use Operators that make use of ClusterRoles and CRDs, and perform other tasks that require cluster-wide access.

Traffic management controls with mK8s

Because your workloads run in a cluster managed by XC, we can apply more sophisticated and native policies to K8s traffic than for non-managed clusters in the earlier architectures:

- Service isolation can be enforced within the cluster, so that pods in a given namespace cannot communicate with services outside of that namespace, by default.
- More service-to-service controls exist so that you can decide which services can reach other services with more granularity.
- Egress control can be natively enforced for outbound traffic from the cluster, by namespace, labels, IP ranges, or other methods. E.g.: Svc A can reach myapi.example.com but no other Internet service.
- WAF policies, bot defense, L3/4 policies, etc. (all of the policies that you have typically applied with network firewalls, WAFs, and so on) can be applied natively within the platform.

This architecture took me a long time to understand, and longer to fully appreciate. But once you have run your workloads natively on a managed K8s platform that is connected to a global backbone and capable of performing network and application delivery within the platform, the security and traffic mgmt benefits become very compelling.

Conclusion

As K8s continues to expand, management solutions for your clusters make it possible to secure your K8s services, whether they are managed by XC or exist in disparate clusters. With F5 XC as a global platform consumed as a service (not a discrete installation managed by you), the available architectures here are unique and therefore can accommodate the diverse (and changing!) ways we see K8s run today.

Related Articles

- Securely connecting Kubernetes Microservices with F5 Distributed Cloud
- Multi-cluster Multi-cloud Networking for K8s with F5 Distributed Cloud - Architecture Pattern
- Multiple Kubernetes Clusters and Path-Based Routing with F5 Distributed Cloud
Understanding Modern Application Architecture - Part 1

This is part 1 of a series. Here are the other parts:

- Understanding Modern Application Architecture - Part 2
- Understanding Modern Application Architecture - Part 3

Over the past decade, there has been a change taking place in how applications are built. As applications become more expansive in capabilities and more critical to how a business operates (or in many cases, the application is the business itself), a new style of architecture has allowed for increased scalability, portability, resiliency, and agility. To support the goals of a modern application, the surrounding infrastructure has had to evolve as well. Platforms like Kubernetes have played a big role in unlocking the potential of modern applications and are a new paradigm in themselves for how infrastructure is managed and served.

To help our community transition the skillset they've built to deal with monolithic applications, we've put together a series of videos to drive home concepts around modern applications. This article highlights some of the details found within the video series.

In these first three videos, we break down the definition of a modern application. One might think that by name alone, a modern application is simply an application that is current. But we're actually speaking in comparison to a monolithic application. Monolithic applications are made up of a single piece, or just a few pieces. They are rigid in how they are deployed and fragile in their dependencies. Modern applications instead incorporate microservices. Where a monolithic application might have all functions built into one broad, encompassing service, microservices break the service down into smaller functions that can be worked on separately.

A modern application also incorporates four main pillars:

- Scalability ensures that the application can handle the needs of a growing user base, both for surges as well as long-term growth.
- Portability ensures that the application can be transported from its underlying environment while still maintaining all of its functionality and management-plane capabilities.
- Resiliency ensures that failures within the system go unnoticed or pose minimal disruption to users of the application.
- Agility ensures that the application can accommodate rapid changes, whether to code or to infrastructure.

There are also six design principles of a modern application:

- Being agnostic allows the application the freedom to run on any platform.
- Leveraging open source software where it makes sense can often allow you to move quickly with an application, while being able to adopt commercial versions of that software later when full support is needed.
- Defining by code allows for more uniformity of configuration and a move away from rigid interfaces that require specialized knowledge.
- Automated CI/CD processes ensure the quick integration and deployment of code, so that improvements are constantly happening while any failures are minimized and contained.
- Secure development ensures that application security is integrated into the development process and code is tested thoroughly before being deployed into production.
- Distributed storage and infrastructure ensures that applications are not bound by any physical limitations and components can be located where they make the most sense.

These videos should help set the foundation for what a modern application is. The next videos in the series will start to define the fundamental technical components for the platforms that bring together a modern application.
Continued in Part 2
Egress control for Kubernetes using F5 Distributed Cloud Services

Summary

When using F5 Distributed Cloud Services (F5 XC) to manage your Kubernetes (K8s) workloads, egress firewalling based on K8s namespaces or labels is easy. While network firewalls have no visibility into which K8s workload initiated outbound traffic - and therefore cannot apply security policies based on workload - we can use a platform like F5 XC Managed Kubernetes (mK8s) to achieve this.

Introduction

Applying security policies to outbound traffic is common practice. Security teams inspect Internet-bound traffic in order to detect/prevent command & control traffic, allow select users to browse select portions of the Internet, or gain visibility into outbound traffic. Often the allow/deny decision is based on a combination of user, source IP, and destination website. Here's an awesome walkthrough of outbound inspection.

Typical outbound inspection performed by a network-based device.

Network-based firewalls cannot do the same for K8s workloads because pods are ephemeral. They can be short-lived, their IP addresses are temporary and reused, and all pods on the same node making outbound connections will have the same source IP on the external network. In short, a network device cannot distinguish traffic from one pod versus another.

Which microservice is making this outbound request? Should it be allowed?

Problem statement

In my cluster I have two apps, app1 and app2, in namespaces app1-ns and app2-ns. For HTTP traffic, I want:

- app1 to reach out to *.github.com but nothing else
- app2 to reach out to the REST API at api.weather.gov but nothing else, not even other subdomains of weather.gov

For non-HTTP traffic, I want:

- app1 to be able to reach a partner's public IP address on port 22
- app2 to reach Google's DNS server at 8.8.8.8 on port 25

I want no other traffic (TCP, UDP) to egress from my pods (i.e., HTTP or non-HTTP).

What about a Service Mesh?

A service mesh will control traffic within your K8s cluster, both East-West (between services) and North-South (traffic to/from the cluster). Indeed, egress control is a feature of some service meshes, and a service mesh is a good solution to this problem. Istio's egress control is a great starting point to read more about a service mesh with egress control. By using an egress gateway, Istio's sidecars will force traffic destined for a particular destination through a proxy, and this proxy can enforce Istio policies. This solves our problem, although I've heard customers voice reasonable concerns:

- What about non-HTTP traffic?
- What if the egress gateway is bypassed?
- Can our security team configure the mesh or configuration as code?
- A mesh may require additional learning and admin overhead
- A mesh is often managed by a different team than traditional security

What about a NetworkPolicy?

A NetworkPolicy is a K8s resource that can define networking rules, including allow/deny rules by namespace, labels, src/dest pods, destination domains, etc. However, NetworkPolicies must be enforced by the network plugin in your distribution (eg Calico), so they're not an option for everyone. They're also probably not a scalable solution when you consider the same concerns about service meshes above, but they are possible. Read more about NetworkPolicies for egress control with Istio, and check out this article from Monzo to see a potential solution involving NetworkPolicies.
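For instance, the non-HTTP rule for app2 in our problem statement could be written as the NetworkPolicy sketch below. Note that a plain NetworkPolicy matches IPs and ports, not domain names, and because it makes all other egress from the selected pods denied by default, you would also need a rule permitting DNS.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app2-egress
  namespace: app2-ns
spec:
  podSelector: {} # all pods in app2-ns
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 8.8.8.8/32
      ports:
        - protocol: TCP
          port: 25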
Other ideas

Read the article from Monzo linked above to see what others have done. You could watch (or serve) DNS requests from pods and then very quickly update outbound firewall rules to allow/disallow traffic to the IP address in the DNS response; NeuVector had a great article on this. You could also use a dedicated outbound proxy per domain name, as Monzo did, although this wouldn't scale to a large number, so some kind of exceptions would need to be made. I also read an interesting article on Falco, a tool that can monitor outbound connections from pods using eBPF. Generally speaking, these other ideas bring the same concerns to teams without mesh skills: K8s and mesh networking can be unfamiliar and difficult to operate.

Me and Kubernetes

Solving egress control with F5 XC Managed Kubernetes

Another way we can control outbound traffic specific to a K8s namespace or labels is by using a K8s distribution that includes these features. In a technical sense, this works just like a mesh: by injecting a sidecar container for security controls into pods, the platform can control networking. However, the mesh is not managed separately in this case. The security policies of the platform provide a GUI, easy reuse of policies, and generally an experience identical to that used for traditional egress control with the platform.

Solving for our problem statement

If I am using Virtual K8s (vK8s) or Managed K8s (mK8s), my pods are running on the F5 XC platform. These containers may be on-prem or in F5's PoPs, but the XC platform is natively aware of each pod. Here's how to solve our problem with XC when you have a managed K8s cluster.

1. First, we create a known key so we can have a label in XC to match a label we will apply to our K8s pods. I have created a known key egress-ruleset by following this how-to guide.

2. For HTTP and HTTPS traffic, create a forward proxy policy. Since we want rules to apply to pods based on their labels, choose "Custom Rule List" when creating rules.

- Rule 1: set the source to be anything with a known label of egress-ruleset=app1 and allow access to the TLS domain with suffix github.com.
- Rule 2: same as Rule 1, but allow access to the HTTP path with suffix github.com.
- Rules 3 and 4: the same, but where the source endpoint matches egress-ruleset=app2.
- Rule 5, the last, can be a Deny All rule.

3. For non-HTTP(S) traffic, create multiple firewall policies for traffic ingressing, egressing, or originating from an F5 gateway. I've recommended multiple policies because a policy applies to a group of endpoints defined by IP or label. I've used three policies in my examples (one for label egress-ruleset=app1, another for app2, and one for all endpoints). Use the policies to allow TCP traffic as desired.

4. Create and deploy a Managed K8s cluster and an App Stack site that is attached to this cluster. When creating the App Stack site, you can attach the policies you created in steps 2 and 3. You can have multiple policies layered in order, for policies of both types (forward proxy and firewall).

5. Deploy your K8s workload and label your pods with egress-ruleset and a value of app1 or app2 (see the label sketch after this list).

Finally, validate your policies are in effect by using kubectl exec against a pod running in your cluster. We have now demonstrated that outbound traffic from our pods is allowed only to destinations we have configured. We can now control outbound traffic specific to the microservice that is the source of the traffic.
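To make step 5 concrete, the label can be set in the Deployment's pod template; a sketch, with the rest of the Deployment omitted:

spec:
  template:
    metadata:
      labels:
        app: app1            # hypothetical app label
        egress-ruleset: app1 # matched by the XC policies above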
Application namespaces

Another way to solve this problem uses Namespaces only, and not labels. If you create your Application Namespace in the XC console (not a K8s Namespace) and deploy your workloads in the corresponding K8s namespace, you can use the built-in label of name.ves.io/namespace. This means you won't need to create your own label (step 1), but you will need to have a 1:1 relationship between K8s namespaces and Application Namespaces in XC. Plus, your granularity for endpoints is not fine-grained at the level of pod labels, but instead is at the namespace level.

Further Reading

Enterprise-level outbound firewalling from products like F5's SSLO will do more than simple egress control, such as selectively passing traffic to 3rd-party inspection devices. Egress control in XC does not integrate with other devices, but the security controls fit the nature of typical microservices. Still, we could layer simple outbound rules performed in K8s with enterprise-wide inspection rules performed by SSLO for further control of outbound traffic, including integration with 3rd-party devices. While this example used mK8s, I'll make note of another helpful article that explains how labels can be used for controlling network traffic when using Virtual K8s (vK8s).

Conclusion

Egress control for Kubernetes workloads, where security policy can be based on namespaces or labels, can be enforced with a service mesh that supports egress control, or with a managed K8s solution like F5 XC that integrates network security policies natively into the K8s networking layer. Consider practical concerns, like management overhead and existing skill sets, and reach out if I or another F5'er can help explain more about egress control using F5 XC! Finally, thank you to my colleague Steve Iannetta (@netta2) who helped me prepare this. Please do reach out if you want to do this yourself or have more in-depth K8s traffic management questions.
F5 Distributed Cloud - Regional Decryption with Virtual Sites

In this article we discuss how F5 Distributed Cloud can be configured to support regulatory demands for TLS termination of traffic in specific regions around the world. The article provides insight into the F5 Distributed Cloud global backbone and application delivery network (ADN). It goes on to examine how F5 Distributed Cloud is able to achieve these custom topologies in a multi-tenant architecture while adhering to the "rules of the internet" for route summarization. Read on to learn about the flexibility of F5's SaaS platform, providing application delivery and security solutions for your applications.
Service Discovery and authentication options for Kubernetes providers (EKS, AKS, GCP)

Summary

Hosted Kubernetes (K8s) providers use different services for authenticating to your hosted K8s cluster. A kubeconfig file will define your cluster's API server address and your authentication to this server. Uploading this kubeconfig file is how you enable service discovery from another platform like F5 Distributed Cloud (F5 XC).

What is Service Discovery in F5 XC?

Service Discovery in this case means something outside the cluster is querying the K8s API and learning the services and pods running inside the cluster. In our case, we want to send traffic into K8s via a platform that can publish our K8s services internally or publicly, and also apply security to the applications exposed.

How do I configure K8s Service Discovery?

Follow these instructions to configure K8s Service Discovery in XC. You will notice that a kubeconfig file is typically required to be uploaded to XC.

User authentication to AKS, EKS, and GKE

In K8s, User authentication happens outside of the cluster. There is no "User" resource in K8s - you cannot create a User with kubectl. Unlike ServiceAccounts (SAs), which are created inside K8s and whose authentication secrets exist inside K8s, a User is authenticated with a system outside of K8s. There are multiple authentication schemes, and certificates and OAuth are very common to see. For this article, I'll review the typical authentication providers for the major providers: Azure's AKS, AWS's EKS, and Google's GKE.

Azure AKS

AKS authentication (authn) starts at Azure Active Directory, and authorization (authz) can be applied at AAD or via K8s RBAC. For this article I am focused on authn (not authz). If you follow the official instructions to create an AKS cluster via the CLI, you will run the az aks get-credentials command to get the access credentials for an AKS cluster and merge them into your kubeconfig file. You will see that a client certificate and key are included in the User section of your kubeconfig file. This kubeconfig file can be uploaded to F5 XC for successful service discovery because it does not rely on any additional software on the kubectl client. (However, be cautious: don't share the kubeconfig file further, since it contains credentials to AKS. Microsoft provides instructions to use Azure role-based access control (Azure RBAC) to control access to these credentials. These Azure roles let you define who can retrieve the kubeconfig file, and what permissions they then have within the cluster.)

AWS EKS

"Typical" EKS kubeconfig

The AWS EKS cluster authentication process requires the use of an AWS IAM identity, and typically relies on additional software being installed on the kubectl client machine. Instead of credentials being configured directly in the kubeconfig file, either the aws cli or aws-iam-authenticator is used locally on the kubectl client to generate a token. This token is sent to the K8s master API server, which verifies it against AWS IAM (see steps 1-2 in the image below).

Use a service account for EKS access

The AWS user that created the cluster will have system:masters permissions to your cluster, but if you want to add another user to your cluster, you must edit the aws-auth ConfigMap within Kubernetes. AWS has instructions to do this so I won't repeat them here, but the implications should be clear for what we are trying to do: you don't want a 3rd party authenticating to EKS as your primary user account. If you created the EKS cluster with your primary account, you should create a different user for EKS access. Create an AWS IAM user for this purpose, and then add this user to the aws-auth ConfigMap within Kubernetes.
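The aws-auth ConfigMap entry for that IAM user might look like the sketch below. The user name and account ID are placeholders, and for service discovery you would typically map the user to a group bound to read-only RBAC rather than system:masters.

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::<account-id>:user/xc-discovery
      username: xc-discovery
      groups:
        - system:masters # prefer a least-privilege group in practice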
EKS requires authentication from the kubectl client. Instead of hard-coding credentials into the kubeconfig file, the kubeconfig file tells the client to run one of the two following commands, which rely on the existence of additional software:

aws-iam-authenticator token -i [clustername]

or

aws eks get-token --region [region] --cluster-name [cluster]

We can update our kubeconfig file so that it carries the credentials directly, allowing us to upload the file to F5 XC for service discovery.

Creating a kubeconfig file that uses aws cli or aws-iam-authenticator with provided credentials

In the case of F5 XC, we can see from the instructions that "..., you must add AWS credentials in the kubeconfig file for successful service discovery". This means that to upload your kubeconfig to XC you will need to configure it to use the aws-iam-authenticator option, and provide your IAM aws_access_key_id and aws_secret_access_key as env vars in your kubeconfig. Example kubeconfig:

.... <everything else in kubeconfig file> ...
users:
- name: arn:aws:eks:us-east-1:<my_aws_acct_number>:cluster/<my_eks_cluster_name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
        - token
        - -i
        - <my_eks_cluster_name>
      command: aws-iam-authenticator
      env:
        - name: AWS_ACCESS_KEY_ID
          value: "<my_aws_access_key_id>"
        - name: AWS_SECRET_ACCESS_KEY
          value: "<my_aws_secret_access_key>"

I can use the above kubeconfig file for successful service discovery in XC. I did make some notes:

- In my testing, the aws cli method did not work in XC. It appears that the XC node has aws-iam-authenticator installed, but not the aws cli.
- Notice the apiVersion above is v1alpha1. This works when uploaded to F5 XC, but when testing this file on my local workstation, I had to ensure my kubectl client was downgraded to v1.23.6 or lower. v1alpha1 has been deprecated in newer kubectl versions, but was required for me to authenticate with the aws-iam-authenticator method. This is not ideal. It appears that the XC node has a version of kubectl that allows for v1alpha1 auth, but we know this will be deprecated eventually.

These notes led me to ask: can I get a kubeconfig file to authenticate to AWS without 3rd-party tools? Read on for how to do this, but first I'll cover Google's GKE.

Google GKE

Similar to EKS, GKE uses the gcloud cli as a "credential helper" in the typical kubeconfig file for GKE cluster administration. This excellent article covers GKE authentication very well. This means we need to take a similar approach as we did for EKS if we want to authenticate to the GKE API server from a location without gcloud installed. For GKE, I followed an approach I learned from this blog post, but I'll summarize it for you now:

1. Create a kubeconfig file that contains your cluster API server address and CA certificate. These are safe to commit in git because they do not need to be private.
2. Create a Service Account in Google, generate a key for this service account, and save it in JSON format.
3. Export the variable GOOGLE_APPLICATION_CREDENTIALS so the value is the location of your .json file.
4. Use kubectl commands. These should work because kubectl as a client is aware of the env var above.
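Putting those steps together might look like the sketch below. The cluster name and file paths are placeholders, and the user section assumes the in-tree gcp auth provider, which reads Google Application Default Credentials:

apiVersion: v1
kind: Config
clusters:
- name: my-gke-cluster
  cluster:
    server: https://<gke-api-server-address>
    certificate-authority-data: <base64-encoded-ca-cert>
contexts:
- name: my-gke-cluster
  context:
    cluster: my-gke-cluster
    user: gke-service-account
current-context: my-gke-cluster
users:
- name: gke-service-account
  user:
    auth-provider:
      name: gcp # picks up GOOGLE_APPLICATION_CREDENTIALS via Application Default Credentials

Then on the kubectl client:

$ export GOOGLE_APPLICATION_CREDENTIALS=/path/to/google-sa-key.json
$ kubectl get services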
This article from Google explains the community's requirement that provider-specific code be removed from the OSS codebase in v1.25 (as of my writing, v1.24 is the most recent release). This requires an update to kubectl that will remove the existing functionality that allows kubectl to authenticate to GKE. Soon, you will need to use a new credential helper from Google called gke-gcloud-auth-plugin; in other words, you'll be relying on a different 3rd-party plugin for GKE authentication. So our question will come up again, as it did earlier: can't I just have a kubeconfig file that doesn't rely on a 3rd-party library?

Creating a kubeconfig file that does not rely on other software

What if we need to upload our kubeconfig to a machine that does not have the aws cli, aws-iam-authenticator, or any other 3rd-party credential helpers installed? Sometimes you just need a kubeconfig that doesn't rely on 3rd-party software or their IAM service, and this is what ServiceAccounts can achieve. I wrote an article with a script that you can use, focused on K8s SAs for service discovery with F5 XC. Since it's all documented in another article, I'll summarize the steps here:

1. Create a Service Account (SA) with permissions to discover services.
2. Export the secret for that SA to use as an auth token.
3. Create a kubeconfig file that targets your cluster and authenticates with your SA. This file can be uploaded to XC.

Summary

A kubeconfig file is relatively straightforward for most K8s admins. However, uploading a kubeconfig file into a hosted service, for the purpose of K8s service discovery, will sometimes require further analysis. Remember:

- Your kubeconfig file defines your authentication to the K8s API server, so use a service account (not your primary user account).
- If your kubeconfig file relies on a helper like the aws cli, aws-iam-authenticator, or gcloud, you can probably work around this by editing your kubeconfig file to use an SA and token instead. This may be required if you upload the kubeconfig file to a hosted platform like F5 XC.

All this might seem complex at first, but after service discovery is configured, the ability to expose your services anywhere - publicly or privately - is incredibly easy and powerful. Good luck and reach out with feedback!
Summary

A kubeconfig file is relatively straightforward for most k8s admins. However, uploading a kubeconfig file into a hosted service for the purpose of k8s service discovery will sometimes require further analysis. Remember:

Your kubeconfig file defines your authentication to the k8s API server, so use a service account (not your primary user account).
If your kubeconfig file relies on a helper like the aws cli, aws-iam-authenticator, or gcloud, you can probably work around this by editing your kubeconfig file to use a SA and token instead. This may be required if you upload the kubeconfig file to a hosted platform like F5 XC.

All this might seem complex at first, but after service discovery is configured, the ability to expose your services anywhere - publicly or privately - is incredibly easy and powerful. Good luck and reach out with feedback!

Calico, Kubernetes and BIG-IP

In order to follow along with this post you will need a couple of things. First, a working Kubernetes deployment; if you don't have one, following this link will get you up and running. The second thing you will need is a BIG-IP (don't have one? Click here); note that you will need the advanced routing modules to get BGP working. The last thing is a Container Connector; if you don't have one of those yet, you can pull it from Docker Hub.

Now, it's a lot of work to get all these pieces up and running. An easier solution is to automate it, as we do, using OpenStack and heat templates. This gets your services up and running quickly, and identical every time. Need to update? Change your heat template. Broke your stack? Use the heat template. And that's it.

Let's Get Started

Spin up a simple nginx service in your Kubernetes deployment so we can see all this magic happening.

# Run the Pods.
kubectl run nginx --replicas=2 --image=nginx

# Create the Service.
kubectl expose deployment nginx --port=80

# Run a Pod and try to access the `nginx` Service.
$ kubectl run access --rm -ti --image busybox /bin/sh
Waiting for pod policy-demo/access-472357175-y0m47 to be running, status is Pending, pod ready: false

If you don't see a command prompt, try pressing enter.

/ # wget -q nginx -O -

You should see a response from nginx. Great! Our Service is accessible. You can exit the Pod now.

Installing Calico

Let's install Calico next. Following along here, it's super easy to install, and we use the provided config map almost exactly; the only change we make is to the IP pool.

calicoctl can be run two ways. The first is through a Docker container, which allows most commands to work. This can be useful for a quick check, but it becomes cumbersome when running a lot of commands, and it doesn't allow you to run commands that need access to the PID namespace or files on the host without adding volume mounts.

To install calicoctl on the node:

wget https://github.com/projectcalico/calico-containers/releases/download/v1.0.1/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/bin

To run calicoctl through a Docker container:

docker run -i --rm --net=host calico/ctl:v1.0.1 version
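As a small convenience (my own aside, not from the original setup), the container method reads just like the native binary if you wrap it in a shell alias:

# Wrap the containerized calicoctl so commands look like the installed binary.
alias calicoctl="docker run -i --rm --net=host calico/ctl:v1.0.1"
calicoctl version

Keep in mind the limits mentioned above still apply: commands that need host files or the PID namespace require extra volume mounts or flags.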
Now, this is where things get a bit more interesting. We need to set up the BGP peering in Calico so it can advertise our endpoints to the BIG-IP. Take note of the asNumber Calico is using; you can set your own or use the default, then run the create command.

calicoctl config get asnumber

cat << EOF | calicoctl create -f -
apiVersion: v1
kind: bgpPeer
metadata:
  peerIP: 172.16.1.6
  scope: global
spec:
  asNumber: 64511
EOF

I'm using the typical 3-interface setup: a management interface, an external (web-facing) interface, and an internal interface (172.16.1.6 in my example) that connects to the Kubernetes cluster. So replace 172.16.1.6 with whatever IP address you've assigned to the BIG-IP's internal interface.

Verify it was set up correctly:

sudo calicoctl node status

You should receive output similar to this:

Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+------------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |   SINCE    |    INFO     |
+--------------+-------------------+-------+------------+-------------+
| 172.16.1.6   | node-to-node mesh | up    | 2017-01-30 | Established |
| 172.16.1.7   | node-to-node mesh | up    | 2017-01-30 | Established |
| 172.16.1.3   | global            | up    | 21:41:21   | Established |
+--------------+-------------------+-------+------------+-------------+

The "global" peer type will show "State: start" and "Info: Active" (or another status showing it isn't connected) until we get the BIG-IP configured to handle BGP. Once the BIG-IP has been configured, it should show "State: up" and "Info: Established", as you can see above.

If for some reason you need to remove the BIG-IP as a BGP peer, you can run:

sudo calicoctl delete bgppeer --scope=global 172.16.1.6

We aren't done in our Kubernetes deployment yet. It's time for the important piece... the Container Connector (known as the CC).

Container Connectors

To start, let's set up a secret to hold our BIG-IP information: username, password, and URL. If you need some help with secrets, check here. We set up our secrets through a yaml similar to the one below and apply it by running kubectl create -f secret.yaml

apiVersion: v1
items:
- apiVersion: v1
  data:
    password: xxx
    url: xxx
    username: xxx
  kind: Secret
  metadata:
    name: bigip-credentials
    namespace: kube-system
  type: Opaque
kind: List
metadata: {}

Something important to note about the secret: it has to be in the same namespace you deploy the CC in. If it's not, the CC won't be able to update your BIG-IP.
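One detail the yaml above glosses over: values under data: must be base64-encoded, so the xxx placeholders cannot be pasted in as plain text. As an alternative sketch (not part of the original walkthrough; values are placeholders), kubectl can create the same Secret imperatively and handle the encoding for you:

# Creates the same Opaque secret in kube-system; kubectl base64-encodes the literals.
kubectl create secret generic bigip-credentials \
  --namespace kube-system \
  --from-literal=username=<bigip_user> \
  --from-literal=password=<bigip_password> \
  --from-literal=url=<bigip_mgmt_url>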
Let's take a look at the config we are going to use to deploy the CC:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: f5-k8s-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      name: f5-k8s-controller
      labels:
        app: f5-k8s-controller
    spec:
      containers:
      - name: f5-k8s-controller
        # Specify the path to your image here
        image: "path/to/cc/image:latest"
        env:
        # Get sensitive values from the bigip-credentials secret
        - name: BIGIP_USERNAME
          valueFrom:
            secretKeyRef:
              name: bigip-credentials
              key: username
        - name: BIGIP_PASSWORD
          valueFrom:
            secretKeyRef:
              name: bigip-credentials
              key: password
        - name: BIGIP_URL
          valueFrom:
            secretKeyRef:
              name: bigip-credentials
              key: url
        command: ["/app/bin/f5-k8s-controller"]
        args:
        - "--bigip-url=$(BIGIP_URL)"
        - "--bigip-username=$(BIGIP_USERNAME)"
        - "--bigip-password=$(BIGIP_PASSWORD)"
        - "--namespace=default"
        - "--bigip-partition=k8s"
        - "--pool-member-type=cluster"

The important pieces of this config are the path to your CC image and the last three arguments we start the CC with. The namespace argument is the namespace you want the CC to watch for changes; our nginx service is in default, so we watch default. The bigip-partition is where the CC will create your virtual servers; this partition has to already exist on your BIG-IP, and it can NOT be your "Common" partition (the CC will take over the partition and manage everything inside it). And the last one, pool-member-type: we are using cluster because Calico allows us to see all endpoints (pods in the nginx service), which is the whole point of this post! You can also leave this argument off and it will default to a NodePort setup, but that won't let you take advantage of BIG-IP's advanced load balancing across all endpoints.

To deploy the CC we run kubectl create -f cc.yaml

And to verify everything looks good:

kubectl get deployment f5-k8s-controller --namespace kube-system

We are almost done setting up the CC, but we have one last piece: telling it what needs to be configured on the BIG-IP. To do that we use a ConfigMap, and run kubectl create -f vs-config.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: example-vs
  namespace: default
  labels:
    f5type: virtual-server
data:
  schema: "f5schemadb://bigip-virtual-server_v0.1.1.json"
  data: |
    {
      "virtualServer": {
        "frontend": {
          "balance": "round-robin",
          "mode": "http",
          "partition": "k8s",
          "virtualAddress": {
            "bindAddr": "172.17.0.1",
            "port": 80
          }
        },
        "backend": {
          "serviceName": "nginx",
          "servicePort": 80
        }
      }
    }

Let's talk about what we are seeing. The virtualServer section is what gets applied to the BIG-IP, concatenated with information from Kubernetes about the service endpoints. So frontend holds the BIG-IP configuration options and backend holds the Kubernetes options. Of note, the bindAddr is where your traffic comes in from the outside world; in our demo, 172.17.0.1 is my internet-facing IP address.

Awesome - if all went well, you should now be able to access the BIG-IP GUI and see your virtual server being configured automatically.
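If the virtual server does not show up, the CC's logs are the first place to look. This is a generic troubleshooting sketch (the resource names match the configs above, but the exact log output will depend on your CC version):

# Tail the controller logs to confirm it saw the ConfigMap and called the BIG-IP API.
kubectl logs deployment/f5-k8s-controller --namespace kube-system --tail=50

# Confirm the ConfigMap exists in the namespace the CC is watching.
kubectl get configmap example-vs --namespace default -o yaml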
If you want to see something really cool (and you do), run kubectl edit deployment nginx and under 'spec' update the 'replicas' count to 5, then go check the virtual server pool members on the BIG-IP. Cool, huh?

The BIG-IP can see our pool members, but it can't actually route traffic to them yet. We need to set up the BGP peering on our BIG-IP so it has a map to get traffic to the pool members.

BIG-IP BGP Peering

From the BIG-IP GUI, do these couple of steps to allow BGP peering to work:

Network >> Self IPs >> selfip.internal (or whatever you called your internal network)
Add a custom TCP port of 179 - this allows BGP peering through
Go to Network >> Route Domains >> 0 (again, if you called it something different, use that)
Under Dynamic Routing Protocols, move "BGP" to Enabled and push Update.

We are done with the GUI; let's SSH into our BIG-IP and keep moving. Explaining these commands is beyond the scope of this post, so if you are confused or want more info, check the docs here and here.

imish
enable
configure terminal
router bgp 64511
neighbor group1 peer-group
neighbor group1 remote-as 64511
neighbor 172.0.0.0 peer-group group1

You will need to add all the nodes you want to peer with from your Kubernetes deployment. For example, if you have 1 master and 2 workers, you need to add all 3.

Let's check some of our configuration outputs on the BIG-IP to verify we are seeing what's expected. From the BIG-IP, run:

ip route

There will be some output, but what we are looking for is something like:

10.4.0.1/26 via 172.16.1.9 dev internal proto zebra

where 10.4.0.1 is your pod IP and 172.16.1.9 is your node IP. This tells us that the BIG-IP has a route to our pods by going through our nodes!

If everything is looking good, it's time to test it out end to end. Curl your BIG-IP's external IP and you will get a response from your nginx pod - and that's it! You now have a working end-to-end setup using a BIG-IP for your load balancing, Calico advertising BGP routes to the endpoints, and Kubernetes taking care of your containers.

The Container Connector and other products for containerized environments, like the Application Service Proxy, are available for use today. For more information, check out our docs.
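As a parting smoke test (a sketch using the example addresses from this post; substitute your own), you can watch the whole chain react end to end:

# Scale the backend; the CC updates the BIG-IP pool members automatically.
kubectl scale deployment nginx --replicas=5

# Hit the virtual server's external address; each request is answered by an nginx pod.
curl http://172.17.0.1/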