Kubernetes architecture options with F5 Distributed Cloud Services
Summary
F5 Distributed Cloud Services (F5 XC) can integrate with your existing Kubernetes (K8s) clusters, host K8s workloads itself, or both. Within these distinctions, we have multiple architecture options. This article explores four major architectures in ascending order of sophistication and advantages:

- Architecture #1: External Load Balancer (Secure K8s Gateway)
- Architecture #2: CE as a pod (K8s site)
- Architecture #3: Managed Namespace (vK8s)
- Architecture #4: Managed K8s (mK8s)

Kubernetes Architecture Options
As K8s continues to grow, options for how we run K8s and integrate with existing K8s platforms continue to grow. F5 XC can integrate with your existing K8s clusters or run a managed K8s platform itself. Multiple architectures exist within these offerings too, so I was thoroughly confused when I first heard about these possibilities. A colleague recently laid it out for me in a conversation: "Michael, listen up: XC can either integrate with your K8s platform, run inside your K8s platform, host virtual K8s (Namespace-as-a-Service), or run a K8s platform in your environment." I replied, "That's great. Now I have a mental model for differentiating between architecture options." This article will overview these architectures and provide 101-level context: when, how, and why would you implement these options?

Side note 1: F5 XC concepts and terms
F5 XC is a global platform that can provide networking and app delivery services, as well as compute (K8s workloads). We call each of our global PoPs a Regional Edge (RE). REs are highly meshed to form the backbone of the global platform. They connect your sites, they can expose your services to the Internet, and they can run workloads. This platform is extensible into your data center by running one or more XC Nodes in your network, also called Customer Edges (CEs). A CE is a compute node in your network that registers to our global control plane and is then managed by the customer as SaaS. The registration of one or more CEs creates a customer site in F5 XC. A CE can run on a hypervisor (VMware/KVM/etc.), a hyperscaler (AWS, Azure, GCP, etc.), bare metal, or even as a K8s pod, and can be deployed in HA clusters. XC Mesh functionality provides connectivity between sites, security services, and observability. Optionally, in addition, XC App Stack functionality allows a large and arbitrary number of managed clusters to be logically grouped into a virtual site with a single K8s management interface. So where Mesh services provide the networking, App Stack services provide the Kubernetes compute management. Our first two architectures require Mesh services only, and our last two require App Stack.

Side note 2: Service-to-service communication
I'm often asked how to allow services in different clusters to communicate with each other. This is possible and easy with XC. Each site can publish services to every other site, including K8s sites. This means that any K8s service can be reachable from other sites you choose. This is true in any of the architectures below, although more granular controls are possible with the more sophisticated architectures. I'll explore this common question more in a separate article.

Architecture 1: External Load Balancer (Secure K8s Gateway)
In a Secure Kubernetes Gateway architecture, you integrate with your existing K8s platform, using the XC node as the external load balancer for your K8s cluster. In this scenario, you create a ServiceAccount and kubeconfig file to configure XC.
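As a rough sketch of what that ServiceAccount might look like, the manifest below grants read-only access to the objects a discovery integration typically needs. The names and exact RBAC rules here are illustrative assumptions on my part, not F5's documented requirements, so check the official XC service discovery guide for the authoritative list:

```yaml
# Hypothetical, minimal RBAC for external service discovery (names are
# illustrative; consult the F5 XC documentation for the exact rules).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: xc-discovery
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: xc-discovery-read
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "nodes", "pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: xc-discovery-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: xc-discovery-read
subjects:
- kind: ServiceAccount
  name: xc-discovery
  namespace: kube-system
```

A kubeconfig referencing this ServiceAccount's token is then uploaded to XC so that the node can query the K8s API server.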
The XC node then performs service discovery against your K8s API server. I've covered this process in a previous article, but the advantage is that you can integrate with existing K8s platforms. This allows exposing both NodePort and ClusterIP services via the XC node. XC is not hosting any workloads in this architecture, but it is exposing your services to your local network, remote sites, or the Internet. In the diagram above, I show a web application being accessed from a remote site (and/or the Internet) where the origin pool is a NodePort service discovered in a K8s cluster.

Architecture 2: Run a site within a K8s cluster (K8s site type)
Creating a K8s site is easy - just deploy a single manifest found here. This file deploys multiple resources in your cluster, and together these resources work to provide the services of a CE and create a customer site. I've heard this referred to as "running a CE inside of K8s" or "running your CE as a pod". However, when I say "CE node" I'm usually referring to a discrete compute node like a VM or piece of hardware; this architecture is actually a group of pods and related resources that run within K8s to create an XC customer site. With XC running inside your existing cluster, you can expose services within the cluster by DNS name because the site will resolve these from within the cluster. Your service can then be exposed anywhere by the F5 XC platform. This is similar to Architecture 1 above, but with this model, your site is simply a group of pods within K8s. An advantage here is the ability to expose services of other types (e.g., ClusterIP). A site deployed into a K8s cluster will only support Mesh functionality and does not support App Stack functionality (i.e., you cannot run a cluster within your cluster). In this architecture, XC acts as a K8s ingress controller with built-in application security. It also enables Mesh features, such as publishing other sites' services on this site, and publishing this site's discovered services on other sites.

Architecture 3: vK8s (Namespace-as-a-Service)
If the services you use include App Stack capabilities, then architectures #3 and #4 are possible for you. In these scenarios, the XC node actually runs your K8s workloads. We are no longer integrating XC with your existing K8s platform. XC is the platform. A simple way to run K8s workloads is to use a virtual K8s (vK8s) architecture. This could be referred to as a "managed Namespace" because by creating a vK8s object in XC you get a single namespace in a virtual cluster. Your Namespace can be fully hosted (deployed to REs), run on your VMs (CEs), or both. Your kubeconfig file will allow access to your Namespace via the hosted API server. Via your regular kubectl CLI (or via the web console) you can create/delete/manage K8s resources (Deployments, Services, Secrets, ServiceAccounts, etc.) and view application resource metrics. This is great if you have workloads that you want to deploy to remote regions where you do not have infrastructure and would prefer to run in F5's REs, or if you have disparate clusters across multiple sites and you'd like to manage them via a single centralized, virtual cluster.

Best practice guard rails for vK8s
With a vK8s architecture, you don't have your own cluster, but rather a managed Namespace. So there are some restrictions (for example, you cannot run a container as root, bind to a privileged port, or attach to the host network). You cannot create CRDs, ClusterRoles, PodSecurityPolicies, or Namespaces, so K8s Operators are not supported. In short, you don't have a managed cluster, but a managed Namespace on a virtual cluster.
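A workload that respects those guard rails looks like any other Deployment. The sketch below is an illustrative example of mine (the image and names are assumptions, not from an F5 document): it runs as a non-root user and binds to an unprivileged port, so it should fit within the vK8s restrictions described above:

```yaml
# Hypothetical Deployment suitable for a vK8s managed Namespace:
# non-root, no privileged ports, no host networking.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginxinc/nginx-unprivileged:stable  # listens on 8080, runs unprivileged
        ports:
        - containerPort: 8080
        securityContext:
          runAsNonRoot: true
          allowPrivilegeEscalation: false
```

You would apply this with your regular kubectl and the kubeconfig that XC generates for the vK8s Namespace.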
Architecture 4: mK8s (Managed K8s)
In a managed K8s (mK8s, also known as physical K8s or pK8s) deployment, we have an enterprise-level K8s distribution that is run at your site. This means you can use XC to deploy/manage/upgrade the K8s infrastructure, while you manage the Kubernetes resources. The benefits include what is typical for third-party K8s management solutions, but also some key differentiators:

- multi-cloud, with automation for Azure, AWS, and GCP environments
- consumed by you as SaaS
- enterprise-level traffic control
- natively allows a large and arbitrary number of managed clusters to be logically managed with a single K8s management interface

You can enable kubectl access against your local cluster and disable the hosted API server, so your kubeconfig file can point to a global URL or a local endpoint on-prem. Another benefit of mK8s is that you are running a full K8s cluster at your site, not just a Namespace in a virtual cluster. The restrictions that apply to vK8s (see above) do not apply to mK8s, so you could run privileged pods if required, use Operators that make use of ClusterRoles and CRDs, and perform other tasks that require cluster-wide access.

Traffic management controls with mK8s
Because your workloads run in a cluster managed by XC, we can apply more sophisticated and native policies to K8s traffic than with the non-managed clusters in earlier architectures:

- Service isolation can be enforced within the cluster, so that pods in a given namespace cannot communicate with services outside of that namespace, by default.
- More service-to-service controls exist so that you can decide which services can reach other services with more granularity.
- Egress control can be natively enforced for outbound traffic from the cluster, by namespace, labels, IP ranges, or other methods. E.g.: Svc A can reach myapi.example.com but no other Internet service.
- WAF policies, bot defense, L3/4 policies, etc. - all of the policies that you have typically applied with network firewalls, WAFs, etc. - can be applied natively within the platform.

This architecture took me a long time to understand, and longer to fully appreciate. But once you have run your workloads natively on a managed K8s platform that is connected to a global backbone and capable of performing network and application delivery within the platform, the security and traffic management benefits become very compelling.

Conclusion
As K8s continues to expand, management solutions make it possible to secure your K8s services, whether they are managed by XC or exist in disparate clusters. With F5 XC as a global platform consumed as a service - not a discrete installation managed by you - the available architectures are unique and can therefore accommodate the diverse (and changing!) ways we see K8s run today.

Related Articles
- Securely connecting Kubernetes Microservices with F5 Distributed Cloud
- Multi-cluster Multi-cloud Networking for K8s with F5 Distributed Cloud - Architecture Pattern
- Multiple Kubernetes Clusters and Path-Based Routing with F5 Distributed Cloud
F5 Distributed Cloud - Regional Decryption with Virtual Sites
In this article we discuss how F5 Distributed Cloud can be configured to support regulatory demands for TLS termination of traffic in specific regions around the world. The article provides insight into the F5 Distributed Cloud global backbone and application delivery network (ADN), and goes on to inspect how F5 Distributed Cloud achieves these custom topologies in a multi-tenant architecture while adhering to the "rules of the internet" for route summarization. Read on to learn about the flexibility of F5's SaaS platform, which provides application delivery and security solutions for your applications.
Better together - F5 Container Ingress Services and NGINX Plus Ingress Controller Integration

Introduction
F5 Container Ingress Services (CIS) can be integrated with the NGINX Plus Ingress Controller (NIC) within a Kubernetes (K8s) environment. The benefit is getting the best of both worlds: the BIG-IP provides comprehensive L4-L7 security services, while NGINX Plus serves as the de facto standard microservices solution. This architecture is depicted below. The integration is made fluid via the CIS, a K8s pod that listens to events in the cluster and dynamically populates the BIG-IP pool pointing to the NICs as they scale. A few components need to be stitched together to support this integration, each of which is discussed in detail in the following sections.

NGINX Plus Ingress Controller
Follow this guide (https://docs.nginx.com/nginx-ingress-controller/installation/building-ingress-controller-image/) to build the NIC image. The NIC can be deployed using the Manifests either as a DaemonSet or a Deployment; see https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/. A sample file deploying the NIC as a Deployment is shown below:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
      #annotations:
      #  prometheus.io/scrape: "true"
      #  prometheus.io/port: "9113"
    spec:
      serviceAccountName: nginx-ingress
      imagePullSecrets:
      - name: abgmbh.azurecr.io
      containers:
      - image: abgmbh.azurecr.io/nginx-plus-ingress:edge
        name: nginx-plus-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        #- name: prometheus
        #  containerPort: 9113
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 #nginx
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - -nginx-plus
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
        - -ingress-class=sock-shop
        #- -v=3 # Enables extensive logging. Useful for troubleshooting.
        #- -report-ingress-status
        #- -external-service=nginx-ingress
        #- -enable-leader-election
        #- -enable-prometheus-metrics
```

Notice the '-ingress-class=sock-shop' argument: it means the NIC will only handle Ingresses annotated with 'sock-shop'. The absence of this annotation would make the NIC the default for all Ingresses created. Below is the counterpart Ingress with the 'sock-shop' annotation:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sock-shop-ingress
  annotations:
    kubernetes.io/ingress.class: "sock-shop"
spec:
  tls:
  - hosts:
    - socks.ab.gmbh
    secretName: wildcard.ab.gmbh
  rules:
  - host: socks.ab.gmbh
    http:
      paths:
      - path: /
        backend:
          serviceName: front-end
          servicePort: 80
```

This Ingress says: if the hostname is socks.ab.gmbh and the path is '/', send traffic to a service named 'front-end', which is part of the socks application itself. The above concludes the Ingress configuration with the NIC.
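For completeness, the 'front-end' Service referenced by that Ingress would look something like the sketch below. This is an assumption of mine for illustration; the sock-shop demo application ships its own manifests, and the container port shown here may differ from the real one:

```yaml
# Hypothetical backing Service for the Ingress above (illustrative only).
apiVersion: v1
kind: Service
metadata:
  name: front-end
spec:
  ports:
  - port: 80
    targetPort: 8079  # port the front-end container is assumed to listen on
  selector:
    name: front-end
```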
F5 Container Ingress Services
The next step is to leverage the CIS to dynamically populate the BIG-IP pool with the NIC addresses. Follow this guide (https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-app-install.html) to deploy the CIS. A sample Deployment file is shown below:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr-deployment
  namespace: kube-system
spec:
  # DO NOT INCREASE REPLICA COUNT
  replicas: 1
  template:
    metadata:
      name: k8s-bigip-ctlr
      labels:
        app: k8s-bigip-ctlr
    spec:
      # Name of the Service Account bound to a Cluster Role with the required
      # permissions
      serviceAccountName: bigip-ctlr
      containers:
      - name: k8s-bigip-ctlr
        image: "f5networks/k8s-bigip-ctlr"
        env:
        - name: BIGIP_USERNAME
          valueFrom:
            secretKeyRef:
              # Replace with the name of the Secret containing your login
              # credentials
              name: bigip-login
              key: username
        - name: BIGIP_PASSWORD
          valueFrom:
            secretKeyRef:
              # Replace with the name of the Secret containing your login
              # credentials
              name: bigip-login
              key: password
        command: ["/app/bin/k8s-bigip-ctlr"]
        args: [
          # See the k8s-bigip-ctlr documentation for information about
          # all config options
          # https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest
          "--bigip-username=$(BIGIP_USERNAME)",
          "--bigip-password=$(BIGIP_PASSWORD)",
          "--bigip-url=https://x.x.x.x:8443",
          "--bigip-partition=k8s",
          "--pool-member-type=cluster",
          "--agent=as3",
          "--manage-ingress=false",
          "--insecure=true",
          "--as3-validation=true",
          "--node-poll-interval=30",
          "--verify-interval=30",
          "--log-level=INFO"
          ]
      imagePullSecrets:
        # Secret that gives access to a private docker registry
        - name: f5-docker-images
        # Secret containing the BIG-IP system login credentials
        - name: bigip-login
```

Notice the following arguments. They tell the CIS to consume AS3 declarations to configure the BIG-IP. (According to PM, CCCL (Common Controller Core Library), which is used to orchestrate F5 BIG-IP, is being removed this sprint for the CIS 2.0 release.) '--manage-ingress=false' means the CIS does nothing for Ingress resources defined within K8s, because the CIS is not the Ingress Controller; NGINX Plus is, as far as K8s is concerned. The CIS will create a partition named k8s_AS3 on the BIG-IP, which holds the L4-L7 configuration relating to the AS3 declaration. Best practice is also to manually create a partition named 'k8s' (in our example), where networking info will be stored (e.g., ARP, FDB).

```
"--bigip-url=https://x.x.x.x:8443",
"--bigip-partition=k8s",
"--pool-member-type=cluster",
"--agent=as3",
"--manage-ingress=false",
"--insecure=true",
"--as3-validation=true",
```

To apply AS3, the declaration is embedded within a ConfigMap applied to the CIS pod:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: as3-template
  namespace: kube-system
  labels:
    f5type: virtual-server
    as3: "true"
data:
  template: |
    {
      "class": "AS3",
      "action": "deploy",
      "persist": true,
      "declaration": {
        "class": "ADC",
        "id": "1847a369-5a25-4d1b-8cad-5740988d4423",
        "schemaVersion": "3.16.0",
        "Nginx_IC": {
          "class": "Tenant",
          "Nginx_IC_vs": {
            "class": "Application",
            "template": "https",
            "serviceMain": {
              "class": "Service_HTTPS",
              "virtualAddresses": [ "10.1.0.14" ],
              "virtualPort": 443,
              "redirect80": false,
              "serverTLS": { "bigip": "/Common/clientssl" },
              "clientTLS": { "bigip": "/Common/serverssl" },
              "pool": "Nginx_IC_pool"
            },
            "Nginx_IC_pool": {
              "class": "Pool",
              "monitors": [ "https" ],
              "members": [
                {
                  "servicePort": 443,
                  "shareNodes": true,
                  "serverAddresses": []
                }
              ]
            }
          }
        }
      }
    }
```

This declaration tells the BIG-IP to create a tenant called 'Nginx_IC', a virtual server named 'Nginx_IC_vs', and a pool named 'Nginx_IC_pool'. The CIS will dynamically update serverAddresses with the NIC addresses. Now, create a Service to expose the NICs:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  labels:
    cis.f5.com/as3-tenant: Nginx_IC
    cis.f5.com/as3-app: Nginx_IC_vs
    cis.f5.com/as3-pool: Nginx_IC_pool
spec:
  type: ClusterIP
  ports:
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: nginx-ingress
```

Notice the labels: they match the AS3 declaration, which allows the CIS to populate the NIC addresses into the correct pool. Also notice that the manifest kind is 'Service'; only a Service is created, not an Ingress, as far as K8s is concerned. On the BIG-IP, the corresponding tenant, virtual server, and pool are then created, with the NIC pods as pool members. Please note that this article is focused solely on the control plane, that is, how to get the CIS to populate the BIG-IP with the NIC addresses. The specific mechanisms to deliver packets from the BIG-IP to the NICs on the data plane are not discussed, as the data plane is decoupled from the control plane. For data plane specifics, please take a look at https://clouddocs.f5.com/containers/v2/. Hope this article helps to lift the veil on some integration mysteries.
Understanding Modern Application Architecture - Part 1

This is part 1 of a series. Here are the other parts:
- Understanding Modern Application Architecture - Part 2
- Understanding Modern Application Architecture - Part 3

Over the past decade, there has been a change taking place in how applications are built. As applications become more expansive in capabilities and more critical to how a business operates (or, in many cases, the application is the business itself), a new style of architecture has allowed for increased scalability, portability, resiliency, and agility. To support the goals of a modern application, the surrounding infrastructure has had to evolve as well. Platforms like Kubernetes have played a big role in unlocking the potential of modern applications and are a new paradigm in themselves for how infrastructure is managed and served. To help our community transition the skill set they've built dealing with monolithic applications, we've put together a series of videos to drive home concepts around modern applications. This article highlights some of the details found within the video series.

In these first three videos, we break down the definition of a modern application. One might think that, by name alone, a modern application is simply an application that is current. But we're actually speaking in comparison to a monolithic application. Monolithic applications are made up of a single piece, or just a few pieces. They are rigid in how they are deployed and fragile in their dependencies. Modern applications instead incorporate microservices. Where a monolithic application might have all functions built into one broad, encompassing service, microservices break the service down into smaller functions that can be worked on separately.

A modern application also incorporates four main pillars:
- Scalability ensures that the application can handle the needs of a growing user base, both for surges and for long-term growth.
- Portability ensures that the application can be transported from its underlying environment while still maintaining all of its functionality and management-plane capabilities.
- Resiliency ensures that failures within the system go unnoticed or pose minimal disruption to users of the application.
- Agility ensures that the application can accommodate rapid changes, whether to code or to infrastructure.

There are also six design principles of a modern application:
- Being agnostic allows the application the freedom to run on any platform.
- Leveraging open source software where it makes sense can often allow you to move quickly with an application, while later adopting commercial versions of that software when full support is needed.
- Defining by code allows for more uniform configuration and a move away from rigid interfaces that require specialized knowledge.
- Automated CI/CD processes ensure quick integration and deployment of code, so that improvements are constantly happening while any failures are minimized and contained.
- Secure development ensures that application security is integrated into the development process and that code is tested thoroughly before being deployed into production.
- Distributed storage and infrastructure ensures that applications are not bound by any physical limitations and that components can be located where they make the most sense.

These videos should help set the foundation for what a modern application is. The next videos in the series will start to define the fundamental technical components of the platforms that bring together a modern application.
Continued in Part 2
How to setup DSR in Kubernetes with BIG-IP

Using Direct Server Return (DSR) in Kubernetes can have benefits when you have workloads that require low latency and high throughput, and/or when you want to preserve the source IP address of the connection. The following will guide you through how to configure Kubernetes and BIG-IP to use DSR for traffic to a Kubernetes Pod.

Why DSR?
I'm not a huge fan of DSR. It's a weird way of load balancing: a client sends traffic to a load balancer (LB), the LB forwards to a backend server WITHOUT rewriting the destination address, and the backend server responds directly back to the client. It looks WEIRD! But there are some benefits: the backend server sees the original client IP address without the need for the LB to be in the return path of traffic, and the LB only has to handle one side of the connection. This is also the downside, because it's not straightforward to do any type of intelligent load balancing if you only see half the conversation. It also involves doing weird things on your backend servers, such as configuring loopback devices so that a server will answer for the traffic when it is received without creating an IP conflict on the network.

DSR in Kubernetes
The following uses IP Virtual Server (IPVS) to set up DSR in Kubernetes. IPVS has been supported in Kubernetes since 1.11. When using IPVS, it replaces iptables for kube-proxy (the internal LB). When you provision a LoadBalancer or NodePort service (methods to expose traffic outside the cluster), you can add "externalTrafficPolicy: Local" to enable DSR. This is mentioned in the Kubernetes documentation for GCP and Azure environments.

DSR in BIG-IP
On the BIG-IP, DSR is referred to as "nPath". K11116 discusses the steps involved in setting it up. The steps create a profile that disables destination address translation and allows the BIG-IP to not maintain the state of TCP connections (since it will only see half the conversation).

Putting the Pieces Together
To enable DSR from Kubernetes, the first step is to create a LoadBalancer service where you define the external LB IP address:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-frontend
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: LoadBalancer
  loadBalancerIP: 10.1.10.10
  externalTrafficPolicy: Local
  selector:
    run: my-frontend
```

After you create the service, you need to update it to add the following status (example in YAML format; this needs to be done via the API rather than kubectl):

```yaml
status:
  loadBalancer:
    ingress:
    - ip: 10.1.10.10
```
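One way to make that status update against the API is a PATCH to the Service's status subresource. This is a sketch of my own, not from the original walkthrough; it assumes you have a token with permission to patch services/status, and the API server address, secret name, and namespace are placeholders:

```
# Hypothetical status patch via the Kubernetes API (kubectl historically
# could not write a Service's status subresource directly).
TOKEN=$(kubectl -n default get secret <sa-secret> -o jsonpath='{.data.token}' | base64 -d)
curl -k -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/merge-patch+json" \
  https://<apiserver>:6443/api/v1/namespaces/default/services/my-frontend/status \
  -d '{"status":{"loadBalancer":{"ingress":[{"ip":"10.1.10.10"}]}}}'
```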
Once this is done, you can run "ipvsadm -ln" to verify that you now have an IPVS rule rewriting the destination address to the Pod IP addresses:

```
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
...
TCP  10.1.10.10:80 rr
  -> 10.233.90.25:80              Masq    1      0          0
  -> 10.233.90.28:80              Masq    1      0          0
...
```

You can verify that DSR is working by connecting to the external IP address and observing that the MAC address the traffic is sent to is different from the MAC address the reply is sent from:

```
$ sudo tcpdump -i eth1 -nnn -e host 10.1.10.10
...
01:30:02.579765 06:ba:49:38:53:f0 > 06:1f:8a:6c:8e:d2, ethertype IPv4 (0x0800), length 143: 10.1.10.100.37664 > 10.1.10.10.80: Flags [P.], seq 1:78, ack 1, win 229, options [nop,nop,TS val 3625903493 ecr 3191715024], length 77: HTTP: GET /txt HTTP/1.1
01:30:02.582457 06:d2:0a:b1:14:20 > 06:ba:49:38:53:f0, ethertype IPv4 (0x0800), length 66: 10.1.10.10.80 > 10.1.10.100.37664: Flags [.], ack 78, win 227, options [nop,nop,TS val 3191715027 ecr 3625903493], length 0
01:30:02.584176 06:d2:0a:b1:14:20 > 06:ba:49:38:53:f0, ethertype IPv4 (0x0800), length 692: 10.1.10.10.80 > 10.1.10.100.37664: Flags [P.], seq 1:627, ack 78, win 227, options [nop,nop,TS val 3191715028 ecr 3625903493], length 626: HTTP: HTTP/1.1 200 OK
...
```

Automate it
Using Container Ingress Services, we can automate this setup with the following AS3 declaration (provided for illustrative purposes):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: f5demo-as3-configmap
  namespace: default
  labels:
    f5type: virtual-server
    as3: "true"
data:
  template: |
    {
      "class": "AS3",
      "action": "deploy",
      "declaration": {
        "class": "ADC",
        "schemaVersion": "3.10.0",
        "id": "DSR Demo",
        "AS3": {
          "class": "Tenant",
          "MyApps": {
            "class": "Application",
            "template": "shared",
            "frontend_pool": {
              "class": "Pool",
              "monitors": [ "http" ],
              "members": [
                {
                  "servicePort": 80,
                  "serverAddresses": []
                }
              ]
            },
            "l2dsr_http": {
              "class": "Service_L4",
              "layer4": "tcp",
              "pool": "frontend_pool",
              "persistenceMethods": [],
              "sourcePortAction": "preserve-strict",
              "translateServerAddress": false,
              "translateServerPort": false,
              "profileL4": { "use": "fastl4_dsr" },
              "virtualAddresses": [ "10.1.10.10" ],
              "virtualPort": 80,
              "snat": "none"
            },
            "dsrhash": {
              "class": "Persist",
              "hashAlgorithm": "carp",
              "timeout": "indefinite",
              "persistenceMethod": "source-address"
            },
            "fastl4_dsr": {
              "class": "L4_Profile",
              "looseClose": true,
              "looseInitialization": true,
              "resetOnTimeout": false
            }
          }
        }
      }
    }
```

You can then have the BIG-IP automatically pick up the location of the pods by annotating the service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-frontend
  labels:
    run: my-frontend
    cis.f5.com/as3-tenant: AS3
    cis.f5.com/as3-app: MyApps
    cis.f5.com/as3-pool: frontend_pool
...
```

Not so weird?
DSR is a weird way to load balance traffic, but it can have some benefits. For a more exhaustive list of reasons not to do DSR, we can reach back to 2008 for the following gem from Lori MacVittie. What is old is new again!
Egress control for Kubernetes using F5 Distributed Cloud Services

Summary
When using F5 Distributed Cloud Services (F5 XC) to manage your Kubernetes (K8s) workloads, egress firewalling based on K8s namespaces or labels is easy. While network firewalls have no visibility into which K8s workload initiated outbound traffic, and therefore cannot apply security policies based on workload, we can use a platform like F5 XC Managed Kubernetes (mK8s) to achieve this.

Introduction
Applying security policies to outbound traffic is common practice. Security teams inspect Internet-bound traffic in order to detect or prevent command-and-control traffic, to allow select users to browse select portions of the Internet, or simply for visibility into outbound traffic. Often the allow/deny decision is based on a combination of user, source IP, and destination website. Here's an awesome walk-through of outbound inspection.

(Diagram: typical outbound inspection performed by a network-based device.)

Network-based firewalls cannot do the same for K8s workloads because pods are ephemeral. They can be short-lived, their IP addresses are temporary and reused, and all pods on the same node making outbound connections share the same source IP on the external network. In short, a network device cannot distinguish traffic from one pod versus another.

(Diagram: which microservice is making this outbound request? Should it be allowed?)

Problem statement
In my cluster I have two apps, app1 and app2, in namespaces app1-ns and app2-ns.

For HTTP traffic, I want:
- app1 to reach out to *.github.com but nothing else
- app2 to reach out to the REST API at api.weather.gov but nothing else, not even other subdomains of weather.gov

For non-HTTP traffic, I want:
- app1 to be able to reach a partner's public IP address on port 22
- app2 to reach Google's DNS server at 8.8.8.8 on port 25

I want no other traffic (TCP, UDP) to egress from my pods, whether HTTP or non-HTTP.

What about a service mesh?
A service mesh will control traffic within your K8s cluster, both east-west (between services) and north-south (traffic to/from the cluster). Indeed, egress control is a feature of some service meshes, and a service mesh is a good solution to this problem. Istio's egress control documentation is a great starting point for reading about a service mesh with egress control. By using an egress gateway, Istio's sidecars will force traffic destined for a particular destination through a proxy, and this proxy can enforce Istio policies. This solves our problem, although I've heard customers voice reasonable concerns:

- What about non-HTTP traffic?
- What if the egress gateway is bypassed?
- Can our security team configure the mesh, or manage its configuration as code?
- A mesh may require additional learning and administrative overhead.
- A mesh is often managed by a different team than traditional security.

What about a NetworkPolicy?
A NetworkPolicy is a K8s resource that can define networking rules, including allow/deny rules by namespace, labels, source/destination pods, destination domains, etc. However, NetworkPolicies must be enforced by the network plugin in your distribution (e.g., Calico), so they're not an option for everyone. They're also probably not a scalable solution when you consider the same concerns raised about service meshes above, but they are possible. Read more about NetworkPolicies for egress control with Istio, and check out this article from Monzo to see a potential solution involving NetworkPolicies.
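To make the NetworkPolicy option concrete, here is a sketch of what part of our problem statement could look like. Note that vanilla NetworkPolicies match IP blocks rather than domain names (FQDN matching requires CNI-specific extensions), so this only covers the non-HTTP rule for app2; the pod label is an assumption of mine:

```yaml
# Hypothetical egress policy: pods labeled app=app2 in app2-ns may reach
# 8.8.8.8 on TCP/25, plus DNS for name resolution; all other egress is denied.
# Requires a CNI that enforces NetworkPolicy (e.g., Calico).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app2-egress
  namespace: app2-ns
spec:
  podSelector:
    matchLabels:
      app: app2
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 8.8.8.8/32
    ports:
    - protocol: TCP
      port: 25
  - ports:                 # allow DNS so the pod can still resolve names
    - protocol: UDP
      port: 53
```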
Other ideas
Read the article from Monzo linked above to see what others have done. You could watch (or serve) DNS requests from pods and then very quickly update outbound firewall rules to allow or disallow traffic to the IP address in the DNS response; NeuVector had a great article on this. You could also use a dedicated outbound proxy per domain name, as Monzo did, although this wouldn't scale to a large number of domains, so some kind of exceptions would need to be made. I also read an interesting article on Falco, a tool that can monitor outbound connections from pods using eBPF. Generally speaking, these other ideas raise the same concerns for teams without mesh skills: K8s and mesh networking can be unfamiliar and difficult to operate.

Solving egress control with F5 XC Managed Kubernetes
Another way we can control outbound traffic specific to a K8s namespace or label is by using a K8s distribution that includes these features. In a technical sense, this works just like a mesh: by injecting a sidecar container for security controls into pods, the platform can control networking. However, the mesh is not managed separately in this case. The security policies of the platform provide a GUI, easy reuse of policies, and generally an experience identical to that used for traditional egress control with the platform.

Solving for our problem statement
If I am using Virtual K8s (vK8s) or Managed K8s (mK8s), my pods are running on the F5 XC platform. These containers may be on-prem or in F5's PoPs, but the XC platform is natively aware of each pod. Here's how to solve our problem with XC when you have a managed K8s cluster (a validation sketch follows these steps):

1. First, create a known key so we can have a label in XC that matches a label we will apply to our K8s pods. I created a known key egress-ruleset by following the how-to guide.
2. For HTTP and HTTPS traffic, create a forward proxy policy. Since we want rules to apply to pods based on their labels, choose "Custom Rule List" when creating rules. Rule 1: set the source to be anything with a known label of egress-ruleset=app1 and allow access to the TLS domain with suffix github.com. Rule 2: same as Rule 1, but allow access to the HTTP path with suffix github.com. Rules 3 and 4 are the same, but the source endpoint matches egress-ruleset=app2. Rule 5, the last, can be a Deny All rule.
3. For non-HTTP(S) traffic, create multiple firewall policies for traffic ingressing, egressing, or originating from an F5 Gateway. I recommend multiple policies because a policy applies to a group of endpoints defined by IP or label. I used three policies in my examples (one for the label egress-ruleset=app1, another for app2, and one for all endpoints). Use the policies to allow TCP traffic as desired.
4. Create and deploy a Managed K8s cluster and an App Stack site attached to this cluster. When creating the App Stack site, you can attach the policies you created in steps 2 and 3. You can have multiple policies layered in order, for policies of both types (forward proxy and firewall).
5. Deploy your K8s workload and label your pods with egress-ruleset and a value of app1 or app2. Finally, validate that your policies are in effect by using kubectl exec against a pod running in your cluster.

We have now demonstrated that outbound traffic from our pods is allowed only to the destinations we have configured. We can now control outbound traffic specific to the microservice that is the source of the traffic.
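As promised, step 5's validation might look like the following sketch. The deployment names and test URLs are assumptions for illustration; the expectation is that the allowed destinations succeed while everything else is blocked by the XC policies:

```
# Hypothetical spot checks from inside the pods (names/URLs illustrative).
# app1 may reach *.github.com ...
kubectl -n app1-ns exec deploy/app1 -- curl -s -o /dev/null -w "%{http_code}\n" https://api.github.com
# ... but nothing else:
kubectl -n app1-ns exec deploy/app1 -- curl -s -m 5 https://example.com || echo "blocked, as expected"
# app2 may reach api.weather.gov but not other weather.gov subdomains:
kubectl -n app2-ns exec deploy/app2 -- curl -s -o /dev/null -w "%{http_code}\n" https://api.weather.gov
kubectl -n app2-ns exec deploy/app2 -- curl -s -m 5 https://www.weather.gov || echo "blocked, as expected"
```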
Application namespaces
Another way to solve this problem uses Namespaces only, not labels. If you create your Application Namespace in the XC console (not a K8s Namespace) and deploy your workloads in the corresponding K8s namespace, you can use the built-in label name.ves.io/namespace. This means you won't need to create your own label (step 1), but you will need a 1:1 relationship between K8s namespaces and Application Namespaces in XC. Also, your endpoint granularity is no longer fine-grained at the level of pod labels, but instead sits at the namespace level.

Further Reading
Enterprise-level outbound firewalling from products like F5's SSLO will do more than simple egress control, such as selectively passing traffic to third-party inspection devices. Egress control in XC does not integrate with other devices, but its security controls fit the nature of typical microservices. Still, we could layer simple outbound rules performed in K8s with enterprise-wide inspection rules performed by SSLO for further control of outbound traffic, including integration with third-party devices. While this example used mK8s, I'll note another helpful article that explains how labels can be used for controlling network traffic when using Virtual K8s (vK8s).

Conclusion
Egress control for Kubernetes workloads, where security policy can be based on namespace or labels, can be enforced with a service mesh that supports egress control, or with a managed K8s solution like F5 XC that integrates network security policies natively into the K8s networking layer. Consider practical concerns, like management overhead and existing skill sets, and reach out if I or another F5'er can help explain more about egress control using F5 XC! Finally, thank you to my colleague Steve Iannetta (@netta2) who helped me prepare this. Please do reach out if you want to do this yourself or have more in-depth K8s traffic management questions.
Ubuntu Virtual Machine for NGINX Microservices March 2022 Labs
Since I didn't have access to the lab environment in UDF, I decided to set up and run my own environment in VMware Workstation so that I could run the Microservices March labs at my own pace. This guide should help anyone set up their own Ubuntu VM to run the labs in their environment.
3 Ways to use F5 BIG-IP with OpenShift 4
F5 BIG-IP can provide key infrastructure and application services in a Red Hat OpenShift 4 environment. Examples include providing core load balancing for the OpenShift API and Router, DNS services for the cluster, a supplement or replacement for the OpenShift Router, and security protection for the OpenShift management and application services.

#1. Core Services
OpenShift 4 requires a method to provide high availability to the OpenShift API (port 6443), MachineConfig (22623), and Router services (80/443). BIG-IP Local Traffic Manager (LTM) can provide these trusted services easily (a hypothetical AS3 sketch appears near the end of this article). OpenShift also requires several DNS records; the BIG-IP can provide accelerated responses as a DNS cache and/or provide Global Server Load Balancing of cluster DNS records.

Additional documentation about OpenShift 4 network requirements (Red Hat):
- Networking requirements for user-provisioned infrastructure

#2. OpenShift Router
Red Hat provides its own OpenShift Router for L7 load balancing, but the F5 BIG-IP can also provide these services using Container Ingress Services. Instead of deploying load balancing resources on the same nodes that are hosting OpenShift workloads, F5 BIG-IP provides these services outside of the cluster on either hardware or Virtual Edition platforms. Container Ingress Services can run either as an auxiliary router to the included router or as a replacement.

Additional articles related to Container Ingress Services:
- Using F5 BIG-IP Controller for OpenShift

#3. Security
F5 can help filter, authenticate, and validate requests that are going into or out of an OpenShift cluster. LTM can be used to host sensitive SSL resources outside of the cluster (including on a hardware HSM if necessary) and to filter requests (e.g., disallow requests to internal resources like the management console). Advanced Web Application Firewall (AWAF) policies can be deployed to stymie bad actors from reaching sensitive applications. Access Policy Manager can provide OpenID Connect services for the OpenShift management console and help provide identity services for applications and microservices running on OpenShift (e.g., converting a BasicAuth request into a JWT token for a microservice).

Additional documentation related to attaching a security policy to an OpenShift Route:
- AS3 Override
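To make #1 concrete, a minimal AS3 declaration for the OpenShift API virtual server might look like the sketch below. The addresses and object names are placeholders I invented for illustration, not values from this article; MachineConfig (22623) and the Router (80/443) would follow the same pattern with their own virtuals and pools:

```json
{
  "class": "AS3",
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.16.0",
    "ocp": {
      "class": "Tenant",
      "api": {
        "class": "Application",
        "template": "generic",
        "remark": "Illustrative only - addresses and names are placeholders",
        "ocp_api_vs": {
          "class": "Service_TCP",
          "virtualAddresses": ["192.0.2.10"],
          "virtualPort": 6443,
          "pool": "ocp_api_pool"
        },
        "ocp_api_pool": {
          "class": "Pool",
          "monitors": ["tcp"],
          "members": [{
            "servicePort": 6443,
            "serverAddresses": ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
          }]
        }
      }
    }
  }
}
```

You would POST a declaration like this to the BIG-IP's AS3 endpoint (/mgmt/shared/appsvcs/declare) before cluster bootstrap, since the API load balancer has to exist before the cluster does.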
Where Can I Try This?
The environment that was used to write this article and create the companion video can be found at https://github.com/f5devcentral/f5-k8s-demo/tree/ocp4/ocp4. For folks that are part of F5, you can access this in our Unified Demo Framework and can schedule labs with customers/partners (search for "OpenShift 4.3 with CIS"). I plan on publishing a version of this demo environment that can run natively in AWS. Check back to this article for any updates. Thanks!

Knowledge sharing: Containers, Kubernetes, Openshift, F5 Container Connector, NGINX Ingress
For anyone interested in the free training for "F5 Container Connector for Kubernetes" or "F5 OpenShift Container Integration" at LearnF5: for NGINX being installed in Kubernetes there is enough info, but for F5 Container Connector/Container Ingress Services there is not so much:
- https://docs.nginx.com/nginx-ingress-controller/f5-ingresslink/
- https://www.nginx.com/products/nginx-ingress-controller/
- https://community.f5.com/t5/technical-articles/better-together-f5-container-ingress-services-and-nginx-plus/ta-p/280471

F5 DevCentral also has a YouTube channel with useful info: https://www.youtube.com/c/devcentral

If you don't have good knowledge about containers and Kubernetes, first check the links below. For Docker containers, YouTube has a lot of good training, for example:
- you need to learn Docker RIGHT NOW!! // Docker Containers 101 - YouTube
- Docker Tutorial for Beginners [FULL COURSE in 3 Hours] - YouTube
- Docker overview | Docker Documentation

The same is true for Kubernetes, and they have a free test lab on their site:
- Learn Kubernetes Basics | Kubernetes
- you need to learn Kubernetes RIGHT NOW!! - YouTube

Red Hat has some free training, and IBM provides some free labs for containers, Kubernetes, OpenShift, etc.:
- Training and Certification (redhat.com)
- IBM CloudLabs: Free, Interactive Kubernetes Tutorials | IBM
- Red Hat OpenShift Tutorials | IBM