Kubernetes architecture options with F5 Distributed Cloud Services
Summary

F5 Distributed Cloud Services (F5 XC) can integrate with your existing Kubernetes (K8s) clusters and/or host K8s workloads itself. Within these distinctions, we have multiple architecture options. This article explores four major architectures in ascending order of sophistication and advantages.

Architecture #1: External Load Balancer (Secure K8s Gateway)
Architecture #2: CE as a pod (K8s site)
Architecture #3: Managed Namespace (vK8s)
Architecture #4: Managed K8s (mK8s)

Kubernetes Architecture Options

As K8s continues to grow, options for how we run K8s and integrate with existing K8s platforms continue to grow. F5 XC can both integrate with your existing K8s clusters and/or run a managed K8s platform itself. Multiple architectures exist within these offerings too, so I was thoroughly confused when I first heard about these possibilities. A colleague recently laid it out for me in a conversation: "Michael, listen up: XC can either integrate with your K8s platform, run inside your K8s platform, host virtual K8s (Namespace-aaS), or run a K8s platform in your environment." I replied, "That's great. Now I have a mental model for differentiating between architecture options." This article will overview these architectures and provide 101-level context: when, how, and why would you implement these options?

Side note 1: F5 XC concepts and terms

F5 XC is a global platform that can provide networking and app delivery services, as well as compute (K8s workloads). We call each of our global PoPs a Regional Edge (RE). REs are highly meshed to form the backbone of the global platform. They connect your sites, they can expose your services to the Internet, and they can run workloads. This platform is extensible into your data center by running one or more XC Nodes in your network, also called a Customer Edge (CE). A CE is a compute node in your network that registers to our global control plane and is then managed as SaaS. The registration of one or more CEs creates a customer site in F5 XC. A CE can run on a hypervisor (VMware/KVM/etc.), a hyperscaler (AWS, Azure, GCP, etc.), bare metal, or even as a K8s pod, and can be deployed in HA clusters. XC Mesh functionality provides connectivity between sites, security services, and observability. Optionally, in addition, XC App Stack functionality allows a large and arbitrary number of managed clusters to be logically grouped into a virtual site with a single K8s management interface. So where Mesh services provide the networking, App Stack services provide the Kubernetes compute management. Our first two architectures require Mesh services only, and our last two require App Stack.

Side note 2: Service-to-service communication

I'm often asked how to allow services in different clusters to communicate with each other. This is possible and easy with XC. Each site can publish services to every other site, including K8s sites. This means that any K8s service can be reachable from other sites you choose. This is true in any of the architectures below, although more granular controls are possible with the more sophisticated architectures. I'll explore this common question more in a separate article.

Architecture 1: External Load Balancer (Secure K8s Gateway)

In a Secure Kubernetes Gateway architecture, you integrate with your existing K8s platform, using the XC node as the external load balancer for your K8s cluster. In this scenario, you create a ServiceAccount and kubeconfig file to configure XC (a sketch follows below).
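As a rough, hypothetical sketch (the names and RBAC scope here are illustrative only; consult the XC docs for the exact permissions required), a read-only discovery ServiceAccount might be created like this:

# Hypothetical names; grant only the read access service discovery needs.
kubectl create serviceaccount xc-discovery -n kube-system
kubectl create clusterrole service-reader \
  --verb=get,list,watch --resource=services,endpoints,pods
kubectl create clusterrolebinding xc-discovery-read \
  --clusterrole=service-reader --serviceaccount=kube-system:xc-discovery

The kubeconfig you build around this account is what you upload to XC.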
The XC node then performs service discovery against your K8s API server. I've covered this process in a previous article, but the advantage is that you can integrate with existing K8s platforms. This allows exposing both NodePort and ClusterIP services via the XC node. XC is not hosting any workloads in this architecture, but it is exposing your services to your local network, remote sites, or the Internet. In the diagram above, I show a web application being accessed from a remote site (and/or the Internet) where the origin pool is a NodePort service discovered in a K8s cluster.

Architecture 2: Run a site within a K8s cluster (K8s site type)

Creating a K8s site is easy - just deploy a single manifest found here. This file deploys multiple resources in your cluster, and together these resources work to provide the services of a CE and create a customer site. I've heard this referred to as "running a CE inside of K8s" or "running your CE as a pod". However, when I say "CE node" I'm usually referring to a discrete compute node like a VM or piece of hardware; this architecture is actually a group of pods and related resources that run within K8s to create an XC customer site. With XC running inside your existing cluster, you can expose services within the cluster by DNS name because the site will resolve these from within the cluster. Your service can then be exposed anywhere by the F5 XC platform. This is similar to Architecture 1 above, but with this model, your site is simply a group of pods within K8s. An advantage here is the ability to expose services of other types (e.g. ClusterIP). A site deployed into a K8s cluster will only support Mesh functionality and does not support App Stack functionality (i.e., you cannot run a cluster within your cluster). In this architecture, XC acts as a K8s ingress controller with built-in application security. It also enables Mesh features, such as publishing of other sites' services on this site, and publishing of this site's discovered services on other sites.

Architecture 3: vK8s (Namespace-as-a-Service)

If the services you use include App Stack capabilities, then architectures #3 and #4 are possible for you. In these scenarios, the XC node actually runs your K8s workloads. We are no longer integrating XC with your existing K8s platform. XC is the platform. A simple way to run K8s workloads is to use a virtual K8s (vK8s) architecture. This could be referred to as a "managed Namespace" because by creating a vK8s object in XC you get a single namespace in a virtual cluster. Your Namespace can be fully hosted (deployed to REs), run on your VMs (CEs), or both. Your kubeconfig file will allow access to your Namespace via the hosted API server. Via your regular kubectl CLI (or via the web console) you can create/delete/manage K8s resources (Deployments, Services, Secrets, ServiceAccounts, etc.) and view application resource metrics, as sketched below.
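Day-to-day interaction is then plain kubectl pointed at the vK8s kubeconfig downloaded from the XC console - a minimal sketch, with a placeholder path and filenames:

# Hypothetical path; the vK8s kubeconfig comes from the XC console.
export KUBECONFIG=~/Downloads/vk8s-default.yaml
kubectl get deployments,services   # resources in your managed Namespace
kubectl apply -f my-app.yaml       # schedule workloads to the REs/CEs you selected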
This is great if you have workloads that you want to deploy to remote regions where you do not have infrastructure and would prefer to run in F5's REs, or if you have disparate clusters across multiple sites and you'd like to manage multiple K8s clusters via a single centralized, virtual cluster.

Best practice guard rails for vK8s

With a vK8s architecture, you don't have your own cluster, but rather a managed Namespace. So there are some restrictions (for example, you cannot run a container as root, bind to a privileged port, or attach to the host network). You cannot create CRDs, ClusterRoles, PodSecurityPolicies, or Namespaces, so K8s operators are not supported. In short, you don't have a managed cluster, but a managed Namespace on a virtual cluster.

Architecture 4: mK8s (Managed K8s)

In a managed K8s (mK8s, also known as physical K8s or pK8s) deployment, we have an enterprise-level K8s distribution that is run at your site. This means you can use XC to deploy/manage/upgrade the K8s infrastructure, while you manage the Kubernetes resources. The benefits include what is typical for third-party K8s management solutions, but also some key differentiators:

multi-cloud, with automation for Azure, AWS, and GCP environments
consumed by you as SaaS
enterprise-level traffic control
natively allows a large and arbitrary number of managed clusters to be logically managed with a single K8s management interface

You can enable kubectl access against your local cluster and disable the hosted API server, so your kubeconfig file can point to a global URL or a local endpoint on-prem. Another benefit of mK8s is that you are running a full K8s cluster at your site, not just a Namespace in a virtual cluster. The restrictions that apply to vK8s (see above) do not apply to mK8s, so you could run privileged pods if required, use Operators that make use of ClusterRoles and CRDs, and perform other tasks that require cluster-wide access.

Traffic management controls with mK8s

Because your workloads run in a cluster managed by XC, we can apply more sophisticated and native policies to K8s traffic than we can to the non-managed clusters in the earlier architectures:

Service isolation can be enforced within the cluster, so that pods in a given namespace cannot communicate with services outside of that namespace, by default.
More service-to-service controls exist so that you can decide which services can reach other services with more granularity.
Egress control can be natively enforced for outbound traffic from the cluster, by namespace, labels, IP ranges, or other methods. E.g.: Svc A can reach myapi.example.com but no other Internet service.
WAF policies, bot defense, L3/4 policies, etc. - all of the policies that you have typically applied with network firewalls, WAFs, and so on can be applied natively within the platform.

This architecture took me a long time to understand, and longer to fully appreciate. But once you have run your workloads natively on a managed K8s platform that is connected to a global backbone and capable of performing network and application delivery within the platform, the security and traffic management benefits become very compelling.

Conclusion

As K8s continues to expand, management solutions for your clusters make it possible to secure your K8s services, whether they are managed by XC or exist in disparate clusters. With F5 XC as a global platform consumed as a service - not a discrete installation managed by you - the available architectures here are unique and can accommodate the diverse (and changing!) ways we see K8s run today.

Related Articles

Securely connecting Kubernetes Microservices with F5 Distributed Cloud
Multi-cluster Multi-cloud Networking for K8s with F5 Distributed Cloud - Architecture Pattern
Multiple Kubernetes Clusters and Path-Based Routing with F5 Distributed Cloud
3 Ways to use F5 BIG-IP with OpenShift 4

F5 BIG-IP can provide key infrastructure and application services in a Red Hat OpenShift 4 environment. Examples include providing core load balancing for the OpenShift API and Router, DNS services for the cluster, a supplement or replacement for the OpenShift Router, and security protection for the OpenShift management and application services.

#1 Core Services

OpenShift 4 requires a method to provide high availability to the OpenShift API (port 6443), MachineConfig (port 22623), and Router services (ports 80/443). BIG-IP Local Traffic Manager (LTM) can provide these trusted services easily. OpenShift also requires several DNS records, for which the BIG-IP can provide accelerated responses as a DNS cache and/or provide Global Server Load Balancing of cluster DNS records. A rough sketch of the LTM side follows.
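As an illustration only (the addresses, object names, and pool members below are hypothetical, and your monitors will differ), the API virtual server might be built like this in tmsh, repeating the pattern for 22623 and 80/443:

# Hypothetical control-plane node addresses and VIP.
tmsh create ltm pool ocp_api_pool monitor tcp \
  members add { 10.1.1.11:6443 10.1.1.12:6443 10.1.1.13:6443 }
tmsh create ltm virtual ocp_api_vs destination 10.1.1.100:6443 \
  ip-protocol tcp profiles add { tcp } pool ocp_api_pool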
Additional documentation about OpenShift 4 Network Requirements (Red Hat): Networking Requirements for user-provisioned infrastructure

#2 OpenShift Router

Red Hat provides its own OpenShift Router for L7 load balancing, but the F5 BIG-IP can also provide these services using Container Ingress Services. Instead of deploying load balancing resources on the same nodes that are hosting OpenShift workloads, F5 BIG-IP provides these services outside of the cluster on either hardware or Virtual Edition platforms. Container Ingress Services can run either as an auxiliary router to the included router or as a replacement.

Additional articles that are related to Container Ingress Services
• Using F5 BIG-IP Controller for OpenShift

#3 Security

F5 can help filter, authenticate, and validate requests that are going into or out of an OpenShift cluster. LTM can be used to host sensitive SSL resources outside of the cluster (including on a hardware HSM if necessary) as well as to filter requests (i.e. disallow requests to internal resources like the management console). Advanced Web Application Firewall (AWAF) policies can be deployed to stymie bad actors from reaching sensitive applications. Access Policy Manager can provide OpenID Connect services for the OpenShift management console and help with providing identity services for applications and microservices that are running on OpenShift (i.e. converting a BasicAuth request into a JWT token for a microservice).

Additional documentation related to attaching a security policy to an OpenShift Route
• AS3 Override

Where Can I Try This?

The environment that was used to write this article and create the companion video can be found at: https://github.com/f5devcentral/f5-k8s-demo/tree/ocp4/ocp4. For folks that are part of F5, you can access this in our Unified Demo Framework and can schedule labs with customers/partners (search for "OpenShift 4.3 with CIS"). I plan on publishing a version of this demo environment that can run natively in AWS. Check back to this article for any updates. Thanks!

F5 Distributed Cloud - Regional Decryption with Virtual Sites

In this article we discuss how the F5 Distributed Cloud can be configured to support regulatory demands for TLS termination of traffic in specific regions around the world. The article provides insight into the F5 Distributed Cloud global backbone and application delivery network (ADN). It goes on to inspect how the F5 Distributed Cloud is able to achieve these custom topologies in a multi-tenant architecture while adhering to the "rules of the internet" for route summarization. Read on to learn about the flexibility of F5's SaaS platform providing application delivery and security solutions for your applications.

Better together - F5 Container Ingress Services and NGINX Plus Ingress Controller Integration
Introduction

The F5 Container Ingress Services (CIS) can be integrated with the NGINX Plus Ingress Controller (NIC) within a Kubernetes (K8s) environment. The benefit is getting the best of both worlds, with the BIG-IP providing comprehensive L4-L7 security services while leveraging NGINX Plus as the de facto standard microservices solution. This architecture is depicted below. The integration is made fluid via the CIS, a K8s pod that listens to events in the cluster and dynamically populates the BIG-IP pool pointing to the NICs as they scale. A few components need to be stitched together to support this integration, each of which is discussed in detail over the proceeding sections.

NGINX Plus Ingress Controller

Follow this (https://docs.nginx.com/nginx-ingress-controller/installation/building-ingress-controller-image/) to build the NIC image. The NIC can be deployed using the Manifests either as a Daemon-Set or a Service. See this (https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/). A sample Deployment file deploying the NIC as a Service is shown below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
      #annotations:
      #prometheus.io/scrape: "true"
      #prometheus.io/port: "9113"
    spec:
      serviceAccountName: nginx-ingress
      imagePullSecrets:
      - name: abgmbh.azurecr.io
      containers:
      - image: abgmbh.azurecr.io/nginx-plus-ingress:edge
        name: nginx-plus-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        #- name: prometheus
        #  containerPort: 9113
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 #nginx
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - -nginx-plus
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
        - -ingress-class=sock-shop
        #- -v=3 # Enables extensive logging. Useful for troubleshooting.
        #- -report-ingress-status
        #- -external-service=nginx-ingress
        #- -enable-leader-election
        #- -enable-prometheus-metrics

Notice the '- -ingress-class=sock-shop' argument: it means that the NIC will only work with an Ingress that is annotated with 'sock-shop'. The absence of this annotation makes the NIC the default for all Ingresses created. Below is the counterpart Ingress with the 'sock-shop' annotation:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sock-shop-ingress
  annotations:
    kubernetes.io/ingress.class: "sock-shop"
spec:
  tls:
  - hosts:
    - socks.ab.gmbh
    secretName: wildcard.ab.gmbh
  rules:
  - host: socks.ab.gmbh
    http:
      paths:
      - path: /
        backend:
          serviceName: front-end
          servicePort: 80

This Ingress says: if the hostname is socks.ab.gmbh and the path is '/', send traffic to a service named 'front-end', which is part of the socks application itself. The above concludes the Ingress configuration with the NIC.
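Before adding the BIG-IP in front, it's worth a quick smoke test that the NIC answers for the sock-shop host. A hypothetical check (the NIC/Service address is a placeholder):

# Confirm the NIC pods are running, then test the Ingress without touching DNS.
kubectl get pods -n nginx-ingress
curl -k --resolve socks.ab.gmbh:443:<NIC_SERVICE_IP> https://socks.ab.gmbh/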
F5 Container Ingress Services

The next step is to leverage the CIS to dynamically populate the BIG-IP pool with the NIC addresses. Follow this (https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-app-install.html) to deploy the CIS. A sample Deployment file is shown below:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr-deployment
  namespace: kube-system
spec:
  # DO NOT INCREASE REPLICA COUNT
  replicas: 1
  template:
    metadata:
      name: k8s-bigip-ctlr
      labels:
        app: k8s-bigip-ctlr
    spec:
      # Name of the Service Account bound to a Cluster Role with the required
      # permissions
      serviceAccountName: bigip-ctlr
      containers:
      - name: k8s-bigip-ctlr
        image: "f5networks/k8s-bigip-ctlr"
        env:
        - name: BIGIP_USERNAME
          valueFrom:
            secretKeyRef:
              # Replace with the name of the Secret containing your login
              # credentials
              name: bigip-login
              key: username
        - name: BIGIP_PASSWORD
          valueFrom:
            secretKeyRef:
              # Replace with the name of the Secret containing your login
              # credentials
              name: bigip-login
              key: password
        command: ["/app/bin/k8s-bigip-ctlr"]
        args: [
          # See the k8s-bigip-ctlr documentation for information about
          # all config options
          # https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest
          "--bigip-username=$(BIGIP_USERNAME)",
          "--bigip-password=$(BIGIP_PASSWORD)",
          "--bigip-url=https://x.x.x.x:8443",
          "--bigip-partition=k8s",
          "--pool-member-type=cluster",
          "--agent=as3",
          "--manage-ingress=false",
          "--insecure=true",
          "--as3-validation=true",
          "--node-poll-interval=30",
          "--verify-interval=30",
          "--log-level=INFO"
        ]
      imagePullSecrets:
      # Secret that gives access to a private docker registry
      - name: f5-docker-images
      # Secret containing the BIG-IP system login credentials
      - name: bigip-login

Notice the following arguments below. They tell the CIS to consume an AS3 declaration to configure the BIG-IP. According to PM, CCCL (Common Controller Core Library), used to orchestrate F5 BIG-IP, is being removed this sprint for the CIS 2.0 release. '--manage-ingress=false' means CIS does nothing for Ingress resources defined within K8s, because CIS is not the Ingress Controller here; NGINX Plus is, as far as K8s is concerned. The CIS will create a partition named k8s_AS3 on the BIG-IP, which is used to hold the L4-L7 configuration relating to the AS3 declaration. The best practice is also to manually create a partition named 'k8s' (in our example), where networking info will be stored (e.g., ARP, FDB).

"--bigip-url=https://x.x.x.x:8443",
"--bigip-partition=k8s",
"--pool-member-type=cluster",
"--agent=as3",
"--manage-ingress=false",
"--insecure=true",
"--as3-validation=true",

To apply AS3, the declaration is embedded within a ConfigMap applied to the CIS pod:

kind: ConfigMap
apiVersion: v1
metadata:
  name: as3-template
  namespace: kube-system
  labels:
    f5type: virtual-server
    as3: "true"
data:
  template: |
    {
      "class": "AS3",
      "action": "deploy",
      "persist": true,
      "declaration": {
        "class": "ADC",
        "id": "1847a369-5a25-4d1b-8cad-5740988d4423",
        "schemaVersion": "3.16.0",
        "Nginx_IC": {
          "class": "Tenant",
          "Nginx_IC_vs": {
            "class": "Application",
            "template": "https",
            "serviceMain": {
              "class": "Service_HTTPS",
              "virtualAddresses": [
                "10.1.0.14"
              ],
              "virtualPort": 443,
              "redirect80": false,
              "serverTLS": {
                "bigip": "/Common/clientssl"
              },
              "clientTLS": {
                "bigip": "/Common/serverssl"
              },
              "pool": "Nginx_IC_pool"
            },
            "Nginx_IC_pool": {
              "class": "Pool",
              "monitors": [
                "https"
              ],
              "members": [
                {
                  "servicePort": 443,
                  "shareNodes": true,
                  "serverAddresses": []
                }
              ]
            }
          }
        }
      }
    }

This tells the BIG-IP to create a tenant called 'Nginx_IC', a virtual named 'Nginx_IC_vs' and a pool named 'Nginx_IC_pool'. The CIS will update serverAddresses with the NIC addresses dynamically. Now, create a Service to expose the NICs.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  labels:
    cis.f5.com/as3-tenant: Nginx_IC
    cis.f5.com/as3-app: Nginx_IC_vs
    cis.f5.com/as3-pool: Nginx_IC_pool
spec:
  type: ClusterIP
  ports:
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: nginx-ingress

Notice the labels: they match the AS3 declaration, and this allows the CIS to populate the NICs' addresses into the correct pool. Also notice the kind of the manifest, 'Service'. This means only a Service is created, not an Ingress, as far as K8s is concerned. On the BIG-IP, the following should be created. The end product is below. Please note that this article is focused solely on the control plane, that is, how to get the CIS to populate the BIG-IP with the NICs' addresses. The specific mechanism to deliver packets from the BIG-IP to the NICs on the data plane is not discussed, as it is decoupled from the control plane. For data plane specifics, please take a look here (https://clouddocs.f5.com/containers/v2/). Hope this article helps to lift the veil on some integration mysteries.
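One way to watch the dynamic population in action is to scale the NIC and check the pool. The tmsh path below assumes AS3's /tenant/application object naming from the ConfigMap above and is illustrative only:

# Scale the NIC deployment; CIS should update the BIG-IP pool members.
kubectl scale deployment nginx-ingress -n nginx-ingress --replicas=4
# On the BIG-IP (hypothetical path per the AS3 tenant/app above):
tmsh list ltm pool /Nginx_IC/Nginx_IC_vs/Nginx_IC_pool members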
Exploring Kubernetes API using Wireshark part 1: Creating, Listing and Deleting Pods

Related Articles: Exploring Kubernetes API using Wireshark part 2: Namespaces; Exploring Kubernetes API using Wireshark part 3: Python Client API

Quick Intro

This article answers the following question: what happens when we create, list and delete pods under the hood? More specifically, on the wire. I used 3 commands (to create, list and delete a pod), and I'll show you on Wireshark the communication between the kubectl client and the master node (API) for each of them. I used a proxy so we don't have to worry about the TLS layer and can focus on HTTP only. When I say Kubernetes API, I'm talking about the API on the Kubernetes master node.

Creating NGINX pod

pcap: creating_pod.pcap (use the http filter on Wireshark). Here's our YAML file. Here's how we create this pod. Here's what we see on Wireshark: behind the scenes, the kubectl command sent an HTTP POST with our YAML file converted to JSON, but notice the same fields were sent (kind, apiVersion, metadata, spec). You can expand it if you want to, but I didn't, to keep it short. Then, the Kubernetes master (API) responds with HTTP 201 Created to confirm our pod has been created. Notice that the master node replies with similar data, plus an additional status field, because after a pod is created it's supposed to have a status too.

Listing Pods

pcap: listing_pods.pcap (use the http filter on Wireshark). When we list pods, kubectl just sends an HTTP GET request instead of POST, because we don't need to submit any data apart from headers. This is the full GET request. And here's the HTTP 200 OK with the JSON that contains all information about all pods from the default namespace. I just wanted to emphasise that when you list pods, the resource type that comes back is PodList, whereas when we created our pod it was just Pod. Remember? The other thing I'd like to point out is that all of your pods' information is listed under items. All kubectl does is display some of the API's info in a humanly readable way.

Deleting NGINX pod

pcap: deleting_pod.pcap (use the http filter on Wireshark). Behind the scenes, we're just sending an HTTP DELETE to the Kubernetes master. Also notice that the pod's name is included in the URI: /api/v1/namespaces/default/pods/nginx ← this is the pod's name. HTTP DELETE, just like HTTP GET, is pretty straightforward. Our master node replies with HTTP 200 OK as well as a JSON body with all the info about the pod, including about its termination. It's also good to emphasise here that when our pod is deleted, the master node returns a JSON body with all information available about the pod. I highlighted some interesting info. For example, the resource type is now just Pod (not PodList, as when we're listing our pods).
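If you'd like to reproduce those HTTP verbs yourself without a packet capture, kubectl proxy can authenticate for you and expose the same API on localhost. A quick sketch using the pod from this article:

kubectl proxy --port=8001 &
# Listing pods = HTTP GET (returns the PodList object shown above)
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods
# Deleting the pod = HTTP DELETE against the pod's URI
curl -X DELETE http://127.0.0.1:8001/api/v1/namespaces/default/pods/nginx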
Service Discovery and authentication options for Kubernetes providers (EKS, AKS, GCP)

Summary

Hosted Kubernetes (K8s) providers use different services for authenticating to your hosted K8s cluster. A kubeconfig file defines your cluster's API server address and your authentication to this server. Uploading this kubeconfig file is how you enable service discovery from another platform like F5 Distributed Cloud (F5 XC).

What is Service Discovery in F5 XC?

Service Discovery in this case means something outside the cluster is querying the K8s API and learning the services and pods running inside the cluster. In our case, we want to send traffic into K8s via a platform that can publish our K8s services internally or publicly, and also apply security to the applications exposed.

How do I configure K8s Service Discovery?

Follow these instructions to configure K8s Service Discovery in XC. You will notice that a kubeconfig file is typically required to be uploaded to XC.

User authentication to AKS, EKS, and GKE

In K8s, user authentication happens outside of the cluster. There is no "User" resource in K8s - you cannot create a User with kubectl. Unlike ServiceAccounts (SAs), which are created inside K8s and whose authentication secrets exist inside K8s, a User is authenticated against a system outside of K8s. There are multiple authentication schemes, and certificates and OAuth are very common. For this article, I'll review the typical authentication providers for the major offerings: Azure's AKS, AWS's EKS, and Google's GKE.

Azure AKS

AKS authentication (authn) starts at Azure Active Directory, and authorization (authz) can be applied at AAD or via K8s RBAC. For this article I am focused on authn (not authz). If you follow the official instructions to create an AKS cluster via CLI, you will run the az aks get-credentials command to get the access credentials for an AKS cluster and merge them into your kubeconfig file. You will see that a client certificate and key are included in the User section of your kubeconfig file. This kubeconfig file can be uploaded to F5 XC for successful service discovery because it does not rely on any additional software on the kubectl client. (However, be cautious. Don't share the kubeconfig file further, since it contains credentials to AKS. Microsoft provides instructions to use Azure role-based access control (Azure RBAC) to control access to these credentials. These Azure roles let you define who can retrieve the kubeconfig file, and what permissions they then have within the cluster.)

AWS EKS

"Typical" EKS kubeconfig

The AWS EKS cluster authentication process requires the use of an AWS IAM identity, and typically relies on additional software being installed on the kubectl client machine. Instead of credentials being configured directly in the kubeconfig file, either the aws cli or aws-iam-authenticator is used locally on the kubectl client to generate a token. This token is sent to the K8s master API server, which verifies it against AWS IAM (see steps 1-2 in the image below).

Use a service account for EKS access

The AWS user that created the cluster will have system:masters permissions on your cluster, but if you want to add another user, you must edit the aws-auth ConfigMap within Kubernetes. AWS has instructions to do this, so I won't repeat them here, but the implications should be clear for what we are trying to do: you don't want a 3rd party authenticating to EKS as your primary user account. If you created the EKS cluster with your primary account, you should create a different user for EKS access. Create an AWS IAM user for this purpose, and then add this user to the aws-auth ConfigMap within Kubernetes (sketched below).
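The mapping might look roughly like the following - the account ID and user name are placeholders, and in practice you would bind a narrower RBAC group than system:masters:

kubectl edit configmap aws-auth -n kube-system
# ...then add an entry like this under data.mapUsers:
#  - userarn: arn:aws:iam::111122223333:user/xc-discovery
#    username: xc-discovery
#    groups:
#      - system:masters   # prefer a more restrictive group for discovery-only access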
EKS requires authentication from the kubectl client. Instead of hard-coding credentials into the kubeconfig file, the kubeconfig file tells the client to run one of the two following commands, which rely on additional software being present:

aws-iam-authenticator token -i [clustername]
or
aws eks get-token --region [region] --cluster-name [cluster]

We can update our kubeconfig file so that it carries credentials directly, allowing us to upload the file to F5 XC for service discovery.

Creating a kubeconfig file that uses aws cli or aws-iam-authenticator with provided credentials

In the case of F5 XC, we can see from the instructions that "..., you must add AWS credentials in the kubeconfig file for successful service discovery". This means that to upload your kubeconfig to XC you will need to configure it to use the aws-iam-authenticator option, and provide your IAM aws_access_key_id and aws_secret_access_key as env vars in your kubeconfig. Example kubeconfig:

... <everything else in kubeconfig file> ...
users:
- name: arn:aws:eks:us-east-1:<my_aws_acct_number>:cluster/<my_eks_cluster_name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - <my_eks_cluster_name>
      command: aws-iam-authenticator
      env:
      - name: AWS_ACCESS_KEY_ID
        value: "<my_aws_access_key_id>"
      - name: AWS_SECRET_ACCESS_KEY
        value: "<my_aws_secret_access_key>"

I can use the above kubeconfig file for successful service discovery in XC. I did make some notes:

In my testing, the aws cli method did not work in XC. It appears that the XC node has aws-iam-authenticator installed, but not the aws cli.
Notice the apiVersion above is v1alpha1. This works when uploaded to F5 XC, but when testing this file on my local workstation, I had to ensure my kubectl client was downgraded to v1.23.6 or lower. v1alpha1 has been deprecated in newer kubectl versions, but was required for me to authenticate with the aws-iam-authenticator method. This is not ideal. It appears that the XC node has a version of kubectl that allows v1alpha1 auth, but we know this will be deprecated eventually.

These notes led me to ask: can I get a kubeconfig file that authenticates to AWS without 3rd-party tools? Read on for how to do this, but first I'll cover Google's GKE.

Google GKE

Similar to EKS, GKE uses the gcloud cli as a "credential helper" in the typical kubeconfig file for GKE cluster administration. This excellent article covers GKE authentication very well. This means we need to take a similar approach as we did for EKS if we want to authenticate to the GKE API server from a location without gcloud installed. For GKE, I followed an approach I learned from this blog post, which I'll summarize for you now:

Create a kubeconfig file that contains your cluster API server address and CA certificate. These are safe to commit in git because they do not need to be private.
Create a Service Account in Google, generate a key for this service account and save it in JSON format.
Export the variable GOOGLE_APPLICATION_CREDENTIALS so the value is the location of your .json file.
Use kubectl commands. These should work because the kubectl client is aware of the env var above. A sketch of these steps follows.
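A hypothetical sketch of steps 2-4 (the project, account name, role, and paths are placeholders; a narrower role may suit you better):

gcloud iam service-accounts create k8s-discovery
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:k8s-discovery@my-project.iam.gserviceaccount.com" \
  --role="roles/container.viewer"
gcloud iam service-accounts keys create ./key.json \
  --iam-account=k8s-discovery@my-project.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS=./key.json
kubectl get pods   # uses the cluster entry in your kubeconfig plus the key above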
This article from Google explains the community's requirement that provider-specific code be removed from the OSS codebase in v1.25 (as of my writing, v1.24 is the most recent release). This requires an update to kubectl that will remove the existing functionality allowing kubectl to authenticate to GKE. Soon, you will need to use a new credential helper from Google called gke-gcloud-auth-plugin; in other words, you'll be relying on a different 3rd-party plugin for GKE authentication. So our question comes up again, as it did earlier: can't I just have a kubeconfig file that doesn't rely on a 3rd-party library?

Creating a kubeconfig file that does not rely on other software

What if we need to upload our kubeconfig to a machine that does not have the aws cli, aws-iam-authenticator, or any other 3rd-party credential helpers installed? Sometimes you just need a kubeconfig that doesn't rely on 3rd-party software or their IAM service, and this is what ServiceAccounts can achieve. I wrote an article with a script that you can use, focused on K8s SAs for service discovery with F5 XC. Since it's all documented in another article, I'll summarize the steps here:

Create a Service Account (SA) with permissions to discover services.
Export the secret for that SA to use as an auth token.
Create a kubeconfig file that targets your cluster and authenticates with your SA.

This file can be uploaded to XC (a sketch of these steps follows below).
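As a sketch of those three steps (names are hypothetical, and the .secrets[0] token lookup applies to clusters from this era - v1.24+ no longer auto-creates SA secrets):

SA=xc-discovery; NS=kube-system
SECRET=$(kubectl get sa $SA -n $NS -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret $SECRET -n $NS -o jsonpath='{.data.token}' | base64 -d)
KC=./sd-kubeconfig
kubectl config set-cluster my-cluster --server=https://<API_SERVER> \
  --certificate-authority=./ca.crt --embed-certs=true --kubeconfig=$KC
kubectl config set-credentials $SA --token="$TOKEN" --kubeconfig=$KC
kubectl config set-context sd --cluster=my-cluster --user=$SA --kubeconfig=$KC
kubectl config use-context sd --kubeconfig=$KC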
Summary

A kubeconfig file is relatively straightforward for most K8s admins. However, uploading a kubeconfig file into a hosted service, for the purpose of K8s service discovery, will sometimes require further analysis. Remember: your kubeconfig file defines your authentication to the K8s API server, so use a service account (not your primary user account). If your kubeconfig file relies on a helper like the aws cli, aws-iam-authenticator, or gcloud, you can probably work around this by editing your kubeconfig file to use an SA and token instead. This may be required if you upload the kubeconfig file to a hosted platform like F5 XC. All this might seem complex at first, but after service discovery is configured, the ability to expose your services anywhere - publicly or privately - is incredibly easy and powerful. Good luck and reach out with feedback!

CIS and Kubernetes - Part 1: Install Kubernetes and Calico

Welcome to this series to see how to:

Install Kubernetes and Calico (Part 1)
Deploy F5 Container Ingress Services (F5 CIS) to tie applications lifecycle to our application services (Part 2)

Here is the setup of our lab environment:

BIG-IP Version: 15.0.1
Kubernetes components: Ubuntu 18.04 LTS

We consider that your BIG-IPs are already set up and running: licensed and set up as a cluster, with the networking setup already done.

Part 1: Install Kubernetes and Calico

Set up our systems before installing Kubernetes

Step 1: Update our systems and install Docker

To run containers in Pods, Kubernetes uses a container runtime. We will use Docker and follow the recommendation provided here. As root on ALL Kubernetes components (Master and Node):

# Install packages to allow apt to use a repository over HTTPS
apt-get -y update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
# Add Docker apt repository.
add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
# Install Docker CE.
apt-get -y update && apt-get install -y docker-ce=18.06.2~ce~3-0~ubuntu
# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart docker.
systemctl daemon-reload
systemctl restart docker

We may do a quick test to ensure Docker runs as expected:

docker run hello-world

Step 2: Setup Kubernetes tools (kubeadm, kubelet and kubectl)

To set up Kubernetes, we will leverage the following tools:

kubeadm: the command to bootstrap the cluster.
kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
kubectl: the command line util to talk to your cluster.

As root on ALL Kubernetes components (Master and Node):

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF | tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get -y update

We can review which versions of Kubernetes are supported with F5 Container Ingress Services here. At the time of this article, the latest supported version is v1.13.4. We'll make sure to install this specific version in our following step:

apt-get install -qy kubelet=1.13.4-00 kubeadm=1.13.4-00 kubectl=1.13.4-00 kubernetes-cni=0.6.0-00
apt-mark hold kubelet kubeadm kubectl

Install Kubernetes

Step 1: Setup Kubernetes with kubeadm

We will follow the steps provided in the documentation here. As root on the MASTER node (make sure to update the API server address to reflect your master node IP):

kubeadm init --apiserver-advertise-address=10.1.20.20 --pod-network-cidr=192.168.0.0/16

Note: SAVE somewhere the kubeadm join command. It is needed to "assimilate" the node later. In my example, it looks like the following (YOURS WILL BE DIFFERENT):

kubeadm join 10.1.20.20:6443 --token rlbc20.va65z7eauz89mmuv --discovery-token-ca-cert-hash sha256:42eca5bf49c645ff143f972f6bc88a59468a30276f907bf40da3bcf5127c0375

Now you should NOT be ROOT anymore. Go back to your non-root user.
Since I use Ubuntu, I'll use the default "ubuntu" user. Run the following commands as highlighted in the screenshot above:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 2: Install the networking component of Kubernetes

The last step is to set up the network for our K8s infrastructure. In our kubeadm init command, we used --pod-network-cidr=192.168.0.0/16 in order to be able to set up the network leveraging Calico, as documented here:

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

You may monitor the deployment by running the command:

kubectl get pods --all-namespaces

After some time (<1 min), everything should have a "Running" status. Make sure that CoreDNS also started properly. If everything is up and running, we have our master set up properly and can go to the node to set up K8s on it.

Step 3: Add the Node to our Kubernetes Cluster

Now that the master is set up properly, we can assimilate the node. You need to retrieve the "kubeadm join ..." command that you received at the end of the "kubeadm init ..." command. You must run the following command as ROOT on the Kubernetes NODE (remember that you got a different hash and token; the command below is an example):

kubeadm join 10.1.20.20:6443 --token rlbc20.va65z7eauz89mmuv --discovery-token-ca-cert-hash sha256:42eca5bf49c645ff143f972f6bc88a59468a30276f907bf40da3bcf5127c0375

We can check the status of our node by running the following command on our MASTER (ubuntu user):

kubectl get nodes

Both components should have a "Ready" status. The last step is to set up Calico between our BIG-IPs and our Kubernetes cluster.

Setup Calico

We need to set up Calico on our BIG-IPs and K8s components. We will set up our environment with the following AS number: 64512.

Step 1: BIG-IPs Calico setup

F5 has documented this procedure here. We will use our self IPs on the internal network. Therefore we need to make sure of the following:

The self IP has a port lockdown setting of "Allow All", or add a TCP custom port to the self IP: TCP port 179.
You need to allow BGP on the default route domain 0 on your BIG-IPs. Connect to the BIG-IP GUI and go into Network > Route Domains. Click on route domain "0", allow BGP, and click "Update".

Once this is done, connect via SSH and get into a bash shell on both BIG-IPs. Run the following commands:

#access the IMI Shell
imish
#Switch to enable mode
enable
#Enter configuration mode
config terminal
#Setup route bgp with AS Number 64512
router bgp 64512
#Create BGP Peer group
neighbor calico-k8s peer-group
#assign peer group as BGP neighbors
neighbor calico-k8s remote-as 64512
#we need to add all the peers: the other BIG-IP, our k8s components
neighbor 10.1.20.20 peer-group calico-k8s
neighbor 10.1.20.21 peer-group calico-k8s
#on BIG-IP1, run
neighbor 10.1.20.12 peer-group calico-k8s
#on BIG-IP2, run
neighbor 10.1.20.11 peer-group calico-k8s
#save configuration
write
#exit
end

You can review your setup with the command:

show ip bgp neighbors

Note: your other BIG-IP should be identified with a router ID and have a BGP state of "Active". The K8s nodes won't have a router ID yet, since BGP hasn't been set up on them. Keep your BIG-IP SSH sessions open.
We'll re-use the imish terminal once our K8s components have Calico set up.

Step 2: Kubernetes Calico setup

On the MASTER node (not as root), we need to retrieve the calicoctl binary:

curl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.10.0/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/local/bin

We need to set up calicoctl as explained here:

sudo mkdir /etc/calico

Create a file /etc/calico/calicoctl.cfg with your preferred editor (you'll need sudo privileges). This file should contain the following:

apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/home/ubuntu/config"

Note: you may have to change the path specified by the kubeconfig parameter based on the user you use to run kubectl commands.

To make sure that calicoctl is properly set up, run the command:

calicoctl get nodes

You should get a list of your Kubernetes nodes. Now we can work on our Calico/BGP configuration as documented here. On the MASTER node:

cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 64512
EOF

Note: because we set nodeToNodeMeshEnabled to true, the K8s nodes will receive the same config. We may now set up our BIG-IP BGP peers. Replace the peerIP values with the IPs of your BIG-IPs:

cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bgppeer-global-bigip1
spec:
  peerIP: 10.1.20.11
  asNumber: 64512
EOF

cat << EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: bgppeer-global-bigip2
spec:
  peerIP: 10.1.20.12
  asNumber: 64512
EOF

Review your setup with the command:

calicoctl get bgpPeer

If you go back to your BIG-IP SSH connections, you may check that your Kubernetes nodes now have a router ID in your BGP configuration:

imish
show ip bgp neighbors
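As an optional cross-check from the K8s side (assuming calicoctl was installed as shown above), you can also ask Calico for its own view of the BGP sessions:

sudo calicoctl node status
# Expect both BIG-IP self IPs listed as global peers in an "Established" state.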
Summary

So far we have:

Set up Kubernetes
Set up Calico between our BIG-IPs and our Kubernetes cluster

In the next article, we will set up F5 Container Ingress Services (F5 CIS).

Understanding Modern Application Architecture - Part 1

This is part 1 of a series. Here are the other parts:

Understanding Modern Application Architecture - Part 2
Understanding Modern Application Architecture - Part 3

Over the past decade, there has been a change taking place in how applications are built. As applications become more expansive in capabilities and more critical to how a business operates (or in many cases, the application is the business itself), a new style of architecture has allowed for increased scalability, portability, resiliency, and agility. To support the goals of a modern application, the surrounding infrastructure has had to evolve as well. Platforms like Kubernetes have played a big role in unlocking the potential of modern applications and are a new paradigm in themselves for how infrastructure is managed and served. To help our community transition the skillset they've built dealing with monolithic applications, we've put together a series of videos to drive home concepts around modern applications. This article highlights some of the details found within the video series.

In these first three videos, we break down the definition of a Modern Application. One might think that by name alone, a modern application is simply an application that is current. But we're actually speaking in comparison to a monolithic application. Monolithic applications are made up of a single piece, or just a few pieces. They are rigid in how they are deployed and fragile in their dependencies. Modern applications will instead incorporate microservices. Where a monolithic application might have all functions built into one broad encompassing service, microservices will break down the service into smaller functions that can be worked on separately.

A modern application will also incorporate four main pillars. Scalability ensures that the application can handle the needs of a growing user base, both for surges as well as long-term growth. Portability ensures that the application can be transported from its underlying environment while still maintaining all of its functionality and management-plane capabilities. Resiliency ensures that failures within the system go unnoticed or pose minimal disruption to users of the application. Agility ensures that the application can accommodate rapid changes, whether to code or to infrastructure.

There are also six design principles of a modern application. Being agnostic allows the application the freedom to run on any platform. Leveraging open source software where it makes sense can often allow you to move quickly with an application, while later adopting commercial versions of that software when full support is needed. Defining by code allows for more uniformity of configuration and moves away from rigid interfaces that require specialized knowledge. Automated CI/CD processes ensure the quick integration and deployment of code so that improvements are constantly happening while any failures are minimized and contained. Secure development ensures that application security is integrated into the development process and code is tested thoroughly before being deployed into production. Distributed storage and infrastructure ensures that applications are not bound by any physical limitations and components can be located where they make the most sense.

These videos should help set the foundation for what a modern application is. The next videos in the series will start to define the fundamental technical components for the platforms that bring together a modern application.
Continued in Part 2

Exploring Kubernetes API using Wireshark part 3: Python Client API
Quick Intro In this article, we continue our exploration of Kubernetes API but this time we're going to use Python along with Wireshark. I strongly advise you to go throughExploring Kubernetes API using Wireshark part 1: Creating, Listing and Deleting PodsandExploring Kubernetes API using Wireshark part 2: Namespacesfirst. We're not creating any app here. I'll just show you how you can explore Kubernetes API using Python's client API and Wireshark output retrieved fromkubectl get pods command: In case you want to follow along, I've set upgcloud toolas I'm using Google Cloud andGOOGLE_APPLICATION_CREDENTIALSenvironment variable on my Mac so I had no problem authenticating to Kubernetes API. I hope you understand that when I say Kubernetes API, I'm talking about the API on Kubernetes Master node on my Google Cloud's Kubernetes cluster. What I'm going to do here In this article, we're going to do the following (using Python): Authenticate to kube API Connect to /api/v1 and then move to/api/v1/namespaces/default/pods Add pod information to a variable named pods from variable pods, create a list of: pod names pod status At the end, we display the above info Authenticating/Authorising your code to contact Kube API Here we're loading auth and cluster info from kube-config file and storing into into our client API config: Connecting to /api/v1 We now store into v1 variable the API object. For the sake of simplicity, think of it as we were pointing to /api/v1/ folder and we haven't decided where to go yet: Connecting to /api/v1/namespaces/default/pods We now move to /api/v1/namespaces/default/pods which where we find information about pods that belong to default namespace (watch=False just means we will not be monitoring for changes, we're just retrieving it as one-off thing): Now, what we've stored in the above variable (ret) is the equivalent of moving to the root directory in our JSON tree (Object) in the output below: The output above was from kubectl get pods command which lists all pods from default namespace and that's equivalent to what we're doing here using Python's Kubernetes Client API. Python shows us similar options about where to go when I typeret.and hit tab: We've gotkind,apiVersion,metadataanditemsas options to move to butitemsis what we're looking for because it's supposed to store pod's information. So let's move to items and store it in another variable calledpodsto make it easier to remember that it actually holds all pods' information fromdefaultnamespace: Listing pods' names We're now in a comfortable place and ready to play! Let's keeppodsvariable as our placeholder! On Wireshark, pod's name is located inmetadata.name: \ So let's create another variable calledpods_namesto store all pods' names: Listing Pods' status (phase) What we're looking for is in status.phase: Let's store it instatusvariable in Python: Displaying the output We can now display the output using built-in zip() method: If you want something pretty you can use prettytable:3.2KViews1like0CommentsIntegrating with your IPv6 Kubernetes Cluster
Integrating with your IPv6 Kubernetes Cluster

Why it is important to focus on IPv6

IPv6 was first introduced as a standard in 1995, yet it is only in the last five years that adoption has accelerated due to the growing need to address the limitations of IPv4. In some countries, like Japan, the pool of IPv4 addresses has long been considered exhausted, and many utilize NAT architectures to buy time. With the explosion of cloud-based applications and IoT, not even Kubernetes can escape the call for IPv6 support and all the caveats that come with it. For network engineers, IPv6 is well known. However, one can argue that the container world is still very new to the protocol. Fortunately, this is changing, and arguably with more speed now that dual-stack IPv4/IPv6 networking has reached general availability (GA) in Kubernetes v1.23 and single-stack IPv6 alpha support arrived in Istio v1.9.

IPv6 changes everything in Kubernetes

When deploying IPv6-only clusters, your pod and service subnets will talk to each other on 128-bit addresses from a block of IPs you defined (default /122) and will therefore need NAT64 and DNS64 in place when calling IPv4-only services such as DockerHub, GitHub, and other package libraries sitting internally or on the Internet. This also applies to libraries required when installing prerequisites for K8s. Your CNI configuration, for example Calico, will also need specific variables set to enable IPv6: https://projectcalico.docs.tigera.io/networking/ipv6

Once you have your cluster set up, you can then focus on all the hard parts of IPv6 integration. For example, your BGP routers will need proper configuration for IPv6 peers, and everything will need to allow neighbor discovery (NDP) to succeed, or else you will fail to achieve the simplest of tests - the ICMP ping. All your containerized applications will also need to support communicating on IPv6 sockets. This seems to be the biggest challenge for applications still migrating from the IPv4 world. Fortunately, F5 has made the ingress part easier.

How can BIG-IP integrate with your IPv6-only cluster for reverse proxy and security services?

IPv6 support is available for many features of Container Ingress Services (CIS) as well as the F5 IPAM Controller to meet your needs for exposing your K8s workloads in an easy and secure way.

BIG-IP admin IP address

In versions prior to 2.6.0, you had to define a hostAliases block for your IPv6 BIG-IP URL (--bigip-url=bigip01):

hostAliases:
- hostnames:
  - bigip01
  ip: aaa1:bbb2:ccc3:ddd4::100

Now you can use either of the formats below:

#IPv6 URL with non-standard port
--bigip-url=https://[aaa1:bbb2:ccc3:ddd4::100]:8443
#IPv6 address
--bigip-url=[aaa1:bbb2:ccc3:ddd4::100]

VirtualServer, TransportServer CRD

When deploying your custom resource, you simply specify an IPv6 address (virtualServerAddress) that will be the external IP exposing your workloads:

apiVersion: "cis.f5.com/v1"
kind: TransportServer
metadata:
  name: simple-virtual-l4
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "aaa1:bbb2:ccc3:eee5::200"
  virtualServerPort: 80
  mode: performance
  snat: auto
  pool:
    service: nginx
    servicePort: 8080
    monitor:
      type: tcp
      interval: 10
      timeout: 10
---
apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  name: l7-virtual-http
  labels:
    f5cr: "true"
spec:
  # This is an insecure virtual, Please use TLSProfile to secure the virtual
  # check out tls examples to understand more.
  host: expo.example.com
  virtualServerAddress: "aaa1:bbb2:ccc3:eee5::201"
  virtualServerName: "l7-virtual-http"
  pools:
  - path: /expo
    service: svc-2
    servicePort: 80

Service Type LoadBalancer (with F5 IPAM Controller)

You can also simulate the one-click public cloud experience when exposing your workloads by simply specifying your service to be of type LoadBalancer. The listener IP allocation and configuration is automagically done for you with the help of the F5 IPAM Controller. In the F5 IPAM Controller deployment manifest, just specify your IPv6 range in the container args like below:

- '{"Dev":"2001:db8:3::7-2001:db8:3::9","Test":"2001:db8:4::7-2001:db8:4::9"}'

Then deploy your F5 IPAM Controller, and if you see addresses in the logs like below, you're good to go:

2022/03/08 15:34:31 [DEBUG] Created New IPAM Client
2022/03/08 15:34:31 [DEBUG] [STORE] Using IPAM DB file from mount path
2022/03/08 15:34:31 [DEBUG] [STORE] [ipaddress status ipam_label reference]
2022/03/08 15:34:31 [DEBUG] [STORE] 2001:db8:4::7 1 Test ZpTSxCKs0gigByk5
2022/03/08 15:34:31 [DEBUG] [STORE] 2001:db8:4::8 1 Test 650YpEeEBF2H88Z8
2022/03/08 15:34:31 [DEBUG] [STORE] 2001:db8:4::9 1 Test la9aJTZ5Ubqi/2zU

Note: for the CIS configuration, ensure that your CIS deployment manifest defines the parameters below, which are specific to the service type LoadBalancer and IPv6 use case:

"--custom-resource-mode=true",
"--ipam=true",
"--enable-ipv6=true",

Then you can deploy a simple Service resource like below:

apiVersion: v1
kind: Service
metadata:
  annotations:
    cis.f5.com/health: '{"interval": 5, "timeout": 10}'
    cis.f5.com/ipamLabel: Test
  labels:
    app: svc-f5-demo-lb1
  name: svc-f5-demo-lb1
  namespace: default
spec:
  ports:
  - name: svc-lb1-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: f5-demo
  type: LoadBalancer

At which point the magic happens, and you see that your "EXTERNAL-IP" is populated with an IP address allocated by the F5 IPAM Controller:

# kubectl get svc svc-f5-demo-lb1
NAME              TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
svc-f5-demo-lb1   LoadBalancer   172.19.2.55   2001:db8:4::4   80:32410/TCP   43m

On the BIG-IP, you will see the virtual server created as defined in your Service object.

Summary

Moving to a single-stack IPv6 Kubernetes cluster can be difficult and requires a thorough review of all your application components as well as the parts that integrate with your cluster. F5 has ensured that even with a pure IPv6 environment you are covered with an enterprise-grade ingress solution that not only simplifies exposing your workloads, but also provides many security options.