Red Hat (8 Topics)

Introducing F5 BIG-IP Next CNF Solutions for Red Hat OpenShift
5G and Red Hat OpenShift

5G standards have embraced Cloud-Native Network Functions (CNFs) for implementing network services in software as containers. This is a big change from previous Virtual Network Functions (VNFs) or Physical Network Functions (PNFs). The main characteristics of Cloud-Native Functions are:

- Implementation as containerized microservices
- Small performance footprint, with the ability to scale horizontally
- Independence from the guest operating system, since CNFs operate as containers
- Lifecycle manageable by Kubernetes

Overall, these provide a huge improvement in terms of flexibility, faster service delivery, resiliency, and, crucially, the use of Kubernetes as a unified orchestration layer. The latter is a drastic change from previous standards, where each vendor had its own orchestration. This unification around Kubernetes greatly simplifies network functions for operators, reducing the cost of deploying and maintaining networks. Additionally, embracing the container form factor allows Network Functions (NFs) to be deployed in new use cases, such as the far edge, thanks to their smaller footprint, while the same functions can also be deployed at large scale in a central data center because of their horizontal scalability. In this article we focus on Red Hat OpenShift, which is the market-leading and industry-reference implementation of Kubernetes for IT and Telco workloads.

Introduction to F5 BIG-IP Next CNF Solutions

F5 BIG-IP Next CNF Solutions is a suite of Kubernetes-native 5G Network Functions implemented as microservices. It shares the same Cloud Native Engine (CNE) as F5 BIG-IP Next SPK, introduced last year. The functionalities implemented by the CNF Solutions deal mainly with user plane data. User plane data has the particularity that the final destination of the traffic is not the Kubernetes cluster but rather an external endpoint, typically the Internet. In other words, the traffic gets into the Kubernetes cluster and is forwarded out of the cluster again. This is done using dedicated interfaces that are not used for the regular ingress and egress paths of a Kubernetes cluster. In this case, the main purpose of using Kubernetes is to make use of its orchestration, flexibility, and scalability.

The main functionalities implemented at the initial GA release of the CNF Solutions are:

- F5 Next Edge Firewall CNF, an IPv4/IPv6 firewall with the main focus on protecting the 5G core networks from external threats, including DDoS flood protection, IPS, and DNS protocol inspection.
- F5 Next CGNAT CNF, which offers large-scale NAT with the following features: NAPT, Port Block Allocation, Static NAT, Address Pooling Paired, and Endpoint Independent mapping modes; inbound NAT and hairpinning; egress path filtering and address exclusions; and ALG support for FTP/FTPS, TFTP, RTSP, and PPTP.
- F5 Next DNS CNF, which offers a transparent DNS resolver and caching services. Other remarkable features are zero rating and DNS64, which allows IPv6-only clients to connect to IPv4-only services via synthetic IPv6 addresses.
- F5 Next Policy Enforcer CNF, which provides traffic classification, steering and shaping, and TCP and video optimization. This product is launched as Early Access in February 2023 with basic functionalities; static TCP optimization is GA in the initial release.

Although the CGNAT (Carrier Grade NAT) and Policy Enforcer functionalities are specific to user plane use cases, the Edge Firewall and DNS functionalities have additional uses in other places of the network.
F5 and OpenShift

BIG-IP Next CNF Solutions fully supports Red Hat OpenShift Container Platform, which allows deployment in edge or core locations with unified management across the multiple deployments. OpenShift operators greatly facilitate the setup and tuning of telco-grade applications. These are:

- Node Tuning Operator, used to set up hugepages.
- CPU Manager and Topology Manager with NUMA awareness, which allow the data plane PODs to be scheduled within a NUMA domain that is aligned with the SR-IOV NICs they are attached to.

In an OpenShift platform all of this is set up transparently to the applications, and BIG-IP Next CNF Solutions only require being configured with an appropriate runtimeClass.

F5 BIG-IP Next CNF Solutions architecture

F5 BIG-IP Next CNF Solutions makes use of the widely trusted F5 BIG-IP Traffic Management Microkernel (TMM) data plane. This allows for a high-performance, dependable product from the start. The CNF functionalities come from a microservices re-architecture of the broadly used F5 BIG-IP VNFs. The diagram below illustrates how a microservices architecture is used. The data plane POD scales vertically from 1 to 16 cores and scales horizontally from 1 to 32 PODs, enabling it to handle millions of subscribers. NUMA nodes are supported.

The next diagram focuses on the data plane handling, which is the most relevant aspect for this CNF suite. Typically, each data plane POD has two IP addresses, one for each side of the N6 reference point. These could be named the radio and Internet sides, as shown in the diagram above. The left-side L3 hop must distribute the traffic amongst the left-side addresses of the CNF data plane. This left-side L3 hop can be a router with BGP ECMP (Equal Cost Multi-Path), an SDN, or any other mechanism which is able to:

- Distribute the subscribers across the data plane PODs, shown in [1] of the figure above.
- Keep these subscribers in the same PODs when there is a change in the number of active data plane PODs (scale-in, scale-out, maintenance, etc.), as shown in [2] in the figure above. This minimizes service disruption.

On the right side of the CNFs, the path towards the Internet, it is typical to implement NAT functionality to translate the telco's private addresses into public addresses. This is done with the BIG-IP Next CG-NAT CNF. This NAT makes the return traffic symmetrical by reaching the same POD which processed the outbound traffic. This is thanks to each POD owning part of this NAT space, as shown in [3] of the above figure. Each POD's NAT address space can be advertised via BGP. When not using NAT on the right side of the CNFs, it is required that the network is able to send the return traffic back to the same POD which is processing the connection. The traffic must be kept symmetrical at all times; this is typically done with an SDN.

Using F5 BIG-IP Next CNF Solutions

As expected in a fully integrated Kubernetes solution, both the installation and the configuration are done using the Kubernetes APIs. The installation is performed using Helm charts, and the configuration using Custom Resource Definitions (CRDs). Unlike ConfigMaps, CRDs allow for schema validation of the configurations before these are applied. Details of the CRDs can be found on the clouddocs site. Next, an overview of the most relevant CRDs is shown.

General network configuration

Deploying in Kubernetes automatically configures and assigns IP addresses to the CNF PODs. The data plane interfaces, however, require specific configuration.
The required steps are:

- Create Kubernetes SR-IOV network node policies and NetworkAttachment definitions, which expose SR-IOV VFs to the CNF data plane PODs (TMM). To make use of these SR-IOV VFs, they are referenced in the BIG-IP controller's Helm chart values file. This is described in the Networking Overview page. A sketch of these resources is shown at the end of this section.
- Define the L2 and L3 configuration of the exposed SR-IOV interfaces using the F5BigNetVlan CRD.
- If static routes need to be configured, these can be added using the F5BigNetStaticroute CRD.
- If BGP configuration needs to be added, this is configured in the BIG-IP controller's Helm chart values file. This is described in the BGP Overview page. It is expected that this will be configured using a CRD in the future.

Traffic management listener configuration

As with classic BIG-IP, once the CNFs are running and plumbed into the network, no traffic is processed by default. The traffic management functionalities implemented by BIG-IP Next CNF Solutions are the same as those of the analogous modules in classic BIG-IP, and the CRDs in BIG-IP Next used to configure these functionalities are conceptually similar too. Analogous to Virtual Servers in classic BIG-IP, BIG-IP Next CNF Solutions has a set of CRDs that create listeners of traffic to which traffic management policies are applied. This is mainly the F5BigContextSecure CRD, which allows specifying traffic selectors indicating the VLANs, source and destination prefixes, and ports where we want the policies to be applied.

There are specific CRDs for listeners of Application Level Gateways (ALGs) and protocol-specific solutions. These required several steps in classic BIG-IP: first creating the Virtual Server, then creating the profile, and finally applying it to the Virtual Server. In BIG-IP Next this is done in a single CRD. At the time of this writing, these CRDs are:

- F5BigZeroratingPolicy - Part of the Zero-Rating DNS solution; enables subscribers to bypass rate limits.
- F5BigDnsApp - High-performance DNS resolution, caching, and DNS64 translations.
- F5BigAlgFtp - File Transfer Protocol (FTP) application layer gateway services.
- F5BigAlgTftp - Trivial File Transfer Protocol (TFTP) application layer gateway services.
- F5BigAlgPptp - Point-to-Point Tunnelling Protocol (PPTP) application layer gateway services.
- F5BigAlgRtsp - Real Time Streaming Protocol (RTSP) application layer gateway services.

Traffic management profiles and policies configuration

Depending on the type of listener created, different types of profiles and policies can be attached to it. In the case of F5BigContextSecure, the following CRDs can be attached to define how traffic is processed:

- F5BigTcpSetting - TCP options to fine-tune how application traffic is managed.
- F5BigUdpSetting - UDP options to fine-tune how application traffic is managed.
- F5BigFastl4Setting - FastL4 options to fine-tune how application traffic is managed.

And the following policies for security and NAT:

- F5BigDdosPolicy - Denial of Service (DoS/DDoS) event detection and mitigation.
- F5BigFwPolicy - Granular stateful-flow filtering based on access control list (ACL) policies.
- F5BigIpsPolicy - Intelligent packet inspection protects applications from malignant network traffic.
- F5BigNatPolicy - Carrier-grade NAT (CG-NAT) using large-scale NAT (LSN) pools.

The ALG listeners require the use of F5BigNatPolicy and might make use of the F5BigFwPolicy CRD. These CRDs also have traffic selectors to allow further control over which traffic the policies should be applied to.
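Referring back to the first step above (exposing SR-IOV VFs to the TMM PODs), the following is a minimal sketch of the SR-IOV Network Operator resources involved. The resource names, node label, physical interface name (ens1f0), VF count, and the vfio-pci device type are illustrative assumptions; the exact values required by the CNF data plane are given in the Networking Overview page referenced above, and the NetworkAttachmentDefinition generated by the SriovNetwork is what gets referenced in the BIG-IP controller's Helm chart values file.

# Sketch only: creates VFs on a hypothetical worker pool and exposes them
# as an allocatable resource (openshift.io/cnfDataplaneLeft).
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: cnf-dataplane-left
  namespace: openshift-sriov-network-operator
spec:
  resourceName: cnfDataplaneLeft
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""    # assumed label for the CNF worker pool
  numVfs: 8
  nicSelector:
    pfNames: ["ens1f0"]                       # assumed physical function
  deviceType: vfio-pci                        # assumed; check the F5 docs for the required driver type
---
# Sketch only: generates a NetworkAttachmentDefinition in the CNF namespace
# that the TMM PODs can be attached to.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: cnf-dataplane-left
  namespace: openshift-sriov-network-operator
spec:
  resourceName: cnfDataplaneLeft
  networkNamespace: f5-cnf                    # assumed namespace where the CNF PODs run
  spoofChk: "off"
  trust: "on"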
Firewall Contexts

Firewall policies are applied to the listener with the best match. In addition to the F5BigFwPolicy that might be attached, a global firewall policy (hence effective in all listeners) can be configured before the listener-specific firewall policy is evaluated. This is done with the F5BigContextGlobal CRD, which can have an F5BigFwPolicy attached. F5BigContextGlobal also contains the default action to apply on traffic not matching any firewall rule in any context (e.g. Global Context, Secure Context, or another listener). This default action can be set to accept, reject, or drop, together with whether to log it. In summary, within a listener match, the firewall contexts are processed in this order:

1. ContextGlobal.
2. Matching ContextSecure or another listener context.
3. Default action as defined by ContextGlobal's default action.

Event Logging

Event logging at high speed is critical to provide visibility of what the CNFs are doing. For this, the following CRDs are implemented:

- F5BigLogProfile - Specifies subscriber connection information sent to remote logging servers.
- F5BigLogHslpub - Defines remote logging server endpoints for the F5BigLogProfile.

Demo

F5 BIG-IP Next CNF Solutions roadmap

What is being presented here is just the beginning of a journey. Telcos have embraced Kubernetes as their compute and orchestration layer. Because of this, BIG-IP Next CNF Solutions will eventually replace the analogous classic BIG-IP VNFs. Expect that in the upcoming months BIG-IP Next CNF Solutions will match, and eventually surpass, the features currently offered by the analogous VNFs.

Conclusion

This article introduces a fully re-architected, scalable solution for Red Hat OpenShift, mainly focused on the telco user plane. This new microservices architecture offers flexibility, faster service delivery, resiliency and, crucially, the use of Kubernetes. Kubernetes is becoming the unified orchestration layer for telcos, simplifying the infrastructure lifecycle and reducing costs. OpenShift represents the best-in-class Kubernetes platform thanks to its enterprise readiness and telco-specific features. The architecture of this solution, alongside the use of OpenShift, also extends network services use cases to the edge by allowing the deployment of Network Functions with a smaller footprint. Please check the official BIG-IP Next CNF Solutions documentation for more technical details and check www.f5.com for a high-level overview.

The BIG-IP and Ansible Sandbox, The Perfect Automation Playground!
Ever need a sandbox area to validate and test your F5 BIG-IP automation? Well, look no further! Ansible and F5 have worked together to create a deployable sandbox solution for all your testing needs within AWS.

Why do I need a sandbox environment?

In most use cases, playbooks are never perfect out of the box. We all need an area to test that code and run through many different scenarios to ensure that the code is fully functional, does error detection/correction, and doesn't create outages in an environment, all of which needs to be done before that code gets pushed to production. Setting up that kind of lab takes time to set up and configure, and it can even be problematic to reset back to a "factory default" configuration. Having the ability to reset the infrastructure to a clean state that also has the unique add-ons necessary for your testing makes the lab reusable for testing new or older builds of code.

So where do I start?

The best place to get started is the F5 Cloud Docs website: https://clouddocs.f5.com/training/automation-sandbox/ The website will help with the following:

- Information on setting up the prerequisites (installing Docker, setting up an IAM account in AWS, a Route 53 domain, etc.).
- Using the provided Dockerfile code to deploy a local container that will be the control node for deploying the AWS environment.
- Commands to download the Git repository for the Red Hat Ansible Automation Platform Workshops (https://github.com/ansible/workshops/).
- Commands to execute the code to deploy or destroy a sandbox Automation Workshop with an F5 configuration on AWS.
- Basic (101) and advanced (201) use cases.

From there the sky is the limit! The sandbox is the perfect automation playground to test any code you might have.

What will the Red Hat Ansible Automation Platform Workshops build with the F5 configuration?

As per the diagram below, the Ansible Automation Platform Workshops will build out and configure an AWS VPC, an Ansible host, a BIG-IP (Best licensed), and two web servers. The entire environment will be built and configured during the deployment. Each device within the VPC will also have public and private IP addresses for intranet access between the nodes and external internet accessibility for importing and testing playbooks and configurations. When all the code is run and the lab is built, it will provide the essential information on how to connect to the environment:

- Workshop Name - the unique identifier that was provided in the variables file (a sketch of this file appears at the end of this article).
- Instructor Inventory - the hostnames with the public and private IP addresses to connect to the environment with.
- Private Key - if you wish to use SSH authentication to the nodes.

Want to Learn More?

- Visit the F5 Automation Sandbox cloud docs website - https://clouddocs.f5.com/training/automation-sandbox/
- Visit the Red Hat Ansible Automation Platform Workshops - https://ansible.github.io/workshops/
- Visit the Red Hat Ansible Automation Platform Git Repository - https://github.com/ansible/workshops/
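As a concrete illustration of the variables file mentioned above, the workshops provisioner is driven by a small YAML file of extra vars. The sketch below is an assumption based on the upstream workshops documentation at the time these articles were written; the variable names and accepted values (in particular workshop_type: f5) can change between releases, so treat the F5 Automation Sandbox docs and the workshops README as the authoritative reference.

# Hypothetical extra_vars.yml for the workshops provisioner (names and values are assumptions)
ec2_region: us-east-1            # AWS region to deploy into
ec2_name_prefix: F5-SANDBOX      # unique workshop name, reported back as "Workshop Name"
admin_password: ChangeMe123!     # password for the lab nodes
workshop_type: f5                # selects the F5 BIG-IP flavour of the workshop
student_total: 1                 # number of student environments to build

The same file is passed to the provisioning and teardown playbooks, which is what makes it easy to rebuild the lab from a clean state.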
F5 Friday: Routing in Red Hat OpenShift Container Platform with F5 Just Got Easier

Scaling applications with container-based clusters is on the rise. Whether as part of a private cloud implementation or just part of an effort to modernize the delivery environment, we continue to see applications – traditional and microservices-based – being deployed and scaled within containerized environments with platforms like Red Hat OpenShift. And while the container platforms themselves do an excellent job of scaling apps and services out and back in by increasing and reducing the number of containers available to respond to requests, that doesn't address the question of how requests get to the cluster in the first place. The answer usually (almost always, but Heisenberg frowns on such a high degree of certainty) lies in routing, upstream from the container cluster.

As the OpenShift documentation states quite well, "The OpenShift Container Platform router is the ingress point for all external traffic destined for services in your OpenShift Container Platform installation." This is the general architectural pattern for all containerized app infrastructures, where an upstream proxy is responsible for scale by routing requests across a cluster. Some patterns insert yet another layer of local scalability within the cluster, but all require the services of an upstream proxy capable of routing requests to the apps distributed across the cluster. That generally requires some networking magic, which is often seen as one of the top barriers to container adoption. ClusterHQ's "Container Market Adoption Survey" report, released in mid-2016, found that networking ranked second, after storage, as one of the barriers to deploying containers, which is why it's important to simplify the required networking whenever possible.

Routing is one of the functions that necessarily relies on networking. In the case of OpenShift, that networking revolves around an SDN overlay network (VXLAN). Red Hat has natively supported F5 as a router (since OpenShift Container Platform 3.0.2), but in the past deployment has required a ramp node in the cluster to act as a gateway between the BIG-IP and a pod. This was necessary to enable VXLAN/VLAN bridging but added weight to the deployment in the form of additional components (the ramp node) and networking configuration. While a workable solution, the ramp node quickly becomes a bottleneck that can impede performance, making it less than ideal.

The release of OpenShift Container Platform 3.4 contained some key benefits via an updated integration of F5. This includes the removal of the need for a ramp node and the addition of the BIG-IP as a VXLAN VTEP endpoint. That means the BIG-IP now has direct access to the Pods running inside OpenShift. Now, when a container is launched or killed, rather than update the ramp node, the update goes directly to the F5 BIG-IP via our REST API. This updated integration results in a simpler architecture (and easier deployment) that improves performance of apps served from the cluster and scalability of the overall architecture. It also offers three key benefits. First, fewer hops are required; elimination of the ramp node means no ramp-node-induced bottlenecks. Second, the BIG-IP has direct access for health monitoring the pods. Lastly, the integration also allows for app/page routing (L7 policy steering) by inspecting HTTP headers. Until now, Red Hat has owned the integration of F5 into OpenShift, but with this latest release we've taken on responsibility for maintaining and supporting the code.
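To give a sense of what registering the BIG-IP as a VTEP involves on the OpenShift side, the SDN is typically told about the BIG-IP by creating a host subnet object for it, so the overlay knows where to send and expect VXLAN traffic. The sketch below is an assumption based on the general OpenShift SDN pattern rather than the exact steps of this release; the object name, the VTEP IP, and even the API version of HostSubnet vary by OpenShift version, so follow the official documentation referenced below.

# Sketch only: a HostSubnet entry representing the BIG-IP VTEP.
# The API version was plain "v1" on OpenShift 3.x and network.openshift.io/v1 later.
apiVersion: network.openshift.io/v1
kind: HostSubnet
metadata:
  name: f5-bigip-vtep
  annotations:
    pod.network.openshift.io/fixed-vnid-host: "0"   # place the BIG-IP in the default VNID
    pod.network.openshift.io/assign-subnet: "true"  # let the SDN assign it a pod subnet
host: f5-bigip-vtep
hostIP: 10.1.1.10                                   # assumed BIG-IP self IP reachable from the nodes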
The new integration is available now, and you can learn more about how to use an F5 BIG-IP as an OpenShift router in its documentation.

Deploying F5 Distributed Cloud Customer Edge in Red Hat OpenShift Virtualization
Introduction

Red Hat OpenShift Virtualization is a feature that brings virtual machine (VM) workloads into the Kubernetes platform, allowing them to run alongside containerized applications in a seamless, unified environment. Built on the open-source KubeVirt project, OpenShift Virtualization enables organizations to manage VMs using the same tools and workflows they use for containers.

Why OpenShift Virtualization?

Organizations today face critical needs such as:

- Rapid Migration: "I want to migrate ASAP" from traditional virtualization platforms to more modern solutions.
- Infrastructure Modernization: Transitioning legacy VM environments to leverage the benefits of hybrid and cloud-native architectures.
- Unified Management: Running VMs alongside containerized applications to simplify operations and enhance resource utilization.

OpenShift Virtualization addresses these challenges by consolidating legacy and cloud-native workloads onto a single platform. This consolidation simplifies management, enhances operational efficiency, and facilitates infrastructure modernization without disrupting existing services. Integrating F5 Distributed Cloud Customer Edge (XC CE) into OpenShift Virtualization further enhances this environment by providing advanced networking and security capabilities. This combination offers several benefits:

- Multi-Tenancy: Deploy multiple CE VMs, each dedicated to a specific tenant, enabling isolation and customization for different teams or departments within a secure, multi-tenant environment.
- Load Balancing: Efficiently manage and distribute application traffic to optimize performance and resource utilization.
- Enhanced Security: Implement advanced threat protection at the edge to strengthen your security posture against emerging threats.
- Microservices Management: Seamlessly integrate and manage microservices, enhancing agility and scalability.

This guide provides a step-by-step approach to deploying XC CE within OpenShift Virtualization, detailing the technical considerations and configurations required.

Technical Overview

Deploying XC CE within OpenShift Virtualization involves several key technical steps:

- Preparation:
  - Cluster Setup: Ensure an operational OpenShift cluster with OpenShift Virtualization installed.
  - Access Rights: Confirm administrative permissions to configure compute and network settings.
  - F5 XC Account: Obtain access to generate node tokens and download the XC CE images.
- Resource Optimization (a sketch of this configuration follows this overview):
  - Enable CPU Manager: Configure the CPU Manager to allocate CPU resources effectively.
  - Configure Topology Manager: Set the policy to single-numa-node for optimal NUMA performance.
- Network Configuration:
  - Open vSwitch (OVS) Bridges: Set up OVS bridges on worker nodes to handle networking for the virtual machines.
  - NetworkAttachmentDefinitions (NADs): Use Multus CNI to define how virtual machines attach to multiple networks, supporting both external and internal connectivity.
- Image Preparation:
  - Obtain XC CE Image: Download the XC CE image in qcow2 format suitable for KubeVirt.
  - Generate Node Token: Create a one-time node token from the F5 Distributed Cloud Console for node registration.
  - User Data Configuration: Prepare cloud-init user data with the node token and network settings to automate the VM initialization process.
- Deployment:
  - Create DataVolumes: Import the XC CE image into the cluster using the Containerized Data Importer (CDI).
  - Deploy VirtualMachine Resources: Apply manifests to deploy XC CE instances in OpenShift.
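As a sketch of the resource optimization step above, the CPU Manager and Topology Manager can be enabled on a worker pool with a KubeletConfig such as the following. This is a generic OpenShift pattern rather than the exact manifest from the attached guide, and the MachineConfigPool label is an assumption; the pool you target must carry the matching label.

# Sketch: static CPU manager plus single-numa-node topology manager on a labeled worker pool.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: cpumanager-enabled
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: cpumanager-enabled   # assumed label on the target MachineConfigPool
  kubeletConfig:
    cpuManagerPolicy: static
    cpuManagerReconcilePeriod: 5s
    topologyManagerPolicy: single-numa-node

For the CE virtual machines to actually receive exclusive, NUMA-aligned CPUs, their pods must request whole CPUs with equal requests and limits (Guaranteed QoS); otherwise the static CPU manager will not pin them.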
Network Configuration

Setting up the network involves creating Open vSwitch (OVS) bridges and defining NetworkAttachmentDefinitions (NADs) to enable multiple network interfaces for the virtual machines.

Open vSwitch (OVS) Bridges

Create a NodeNetworkConfigurationPolicy to define OVS bridges on all worker nodes:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ovs-vms
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ''
  desiredState:
    interfaces:
      - name: ovs-vms
        type: ovs-bridge
        state: up
        bridge:
          allow-extra-patch-ports: true
          options:
            stp: true
          port:
            - name: eno1
    ovn:
      bridge-mappings:
        - localnet: ce2-slo
          bridge: ovs-vms
          state: present

Replace eno1 with the appropriate physical network interface on your nodes. This policy sets up an OVS bridge named ovs-vms connected to the physical interface.

NetworkAttachmentDefinitions (NADs)

Define NADs using Multus CNI to attach networks to the virtual machines.

External Network (ce2-slo): Connects VMs to the physical network with a specific VLAN ID. This setup allows the VMs to communicate with external systems, services, or networks, which is essential for applications that require access to resources outside the cluster or need to expose services to external users.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ce2-slo
  namespace: f5-ce
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "ce2-slo",
      "type": "ovn-k8s-cni-overlay",
      "topology": "localnet",
      "netAttachDefName": "f5-ce/ce2-slo",
      "mtu": 1500,
      "vlanID": 3052,
      "ipam": {}
    }

Internal Network (ce2-sli): Provides an isolated Layer 2 network for internal communication. By setting the topology to "layer2", this network operates as an internal overlay network that is not directly connected to the physical network infrastructure. The mtu is set to 1400 bytes to accommodate any overhead introduced by encapsulation protocols used in the internal network overlay.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ce2-sli
  namespace: f5-ce
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "ce2-sli",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "netAttachDefName": "f5-ce/ce2-sli",
      "mtu": 1400,
      "ipam": {}
    }

VirtualMachine Configuration

Configuring the virtual machine involves preparing the image, creating cloud-init user data, and defining the VirtualMachine resource.

Image Preparation

- Obtain XC CE Image: Download the qcow2 image from the F5 Distributed Cloud Console.
- Generate Node Token: Acquire a one-time node token for node registration.

Cloud-Init User Data

Create a user-data configuration containing the node token and network settings:

#cloud-config
write_files:
  - path: /etc/vpm/user_data
    content: |
      token: <your-node-token>
      slo_ip: <IP>/<prefix>
      slo_gateway: <Gateway IP>
      slo_dns: <DNS IP>
    owner: root
    permissions: '0644'

Replace placeholders with actual network configurations. This file automates the VM's initial setup and registration.

VirtualMachine Resource Definition

Define the VirtualMachine resource, specifying CPU, memory, disks, network interfaces, and cloud-init configurations:

- Resources: Allocate sufficient CPU and memory.
- Disks: Reference the DataVolume containing the XC CE image.
- Interfaces: Attach NADs for network connectivity.
- Cloud-Init: Embed the user data for automatic configuration.
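The full manifest is in the attached guide; as a rough sketch of how these pieces fit together, a KubeVirt VirtualMachine for a CE instance could look like the following. Sizing, names, the management (pod network) interface, and the DataVolume name are assumptions for illustration only.

# Sketch only: a CE VM with a pod-network management interface plus the ce2-slo and
# ce2-sli secondary networks defined above. Adjust sizing and names to the F5 guide.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ce2
  namespace: f5-ce
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 8                     # illustrative sizing
        memory:
          guest: 16Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}           # pod network (assumed for management)
            - name: slo
              bridge: {}               # external side, ce2-slo NAD
            - name: sli
              bridge: {}               # internal side, ce2-sli NAD
      networks:
        - name: default
          pod: {}
        - name: slo
          multus:
            networkName: f5-ce/ce2-slo
        - name: sli
          multus:
            networkName: f5-ce/ce2-sli
      volumes:
        - name: rootdisk
          dataVolume:
            name: xc-ce-image          # DataVolume imported from the qcow2 image via CDI
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              # the /etc/vpm/user_data content with the node token goes here (see above)

Once applied, the VM boots from the imported image, cloud-init writes the node token and SLO settings, and the CE registers itself with the F5 Distributed Cloud Console.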
Conclusion

Deploying F5 Distributed Cloud CE in OpenShift Virtualization enables organizations to leverage advanced networking and security features within their existing Kubernetes infrastructure. This integration facilitates a more secure, efficient, and scalable environment for modern applications. For detailed deployment instructions and configuration examples, please refer to the attached PDF guide.

Related Articles:

- BIG-IP VE in Red Hat OpenShift Virtualization
- VMware to Red Hat OpenShift Virtualization Migration
- OpenShift Virtualization

Behind the Scenes: The F5 Private Cloud Solution Package for Red Hat OpenStack – Measuring Success
F5 and Red Hat have just released our Private Cloud Solution Package for OpenStack. There are all kinds of marketing pieces which describe this package, what it includes, how to buy it, and what the target market looks like. I wanted to take a few minutes and tell you why I think this is such a cool offering. But first, let me steal the description paragraph from the Deployment Guide:

F5 Private Cloud Solution Package for OpenStack

The F5 private cloud solution package for OpenStack provides joint certification and testing with Red Hat to orchestrate F5® BIG-IP® Application Delivery Controllers (ADCs) with OpenStack Networking services. The validated solutions and use cases are based on customer requirements utilizing BIG-IP ADC and OpenStack integrations. F5’s OpenStack LBaaSv2 integration provides under-the-cloud L4–L7 services for OpenStack Networking tenants. F5’s OpenStack orchestration (HEAT) templates provide over-the-cloud, single-tenant onboarding of BIG-IP virtual edition (VE) ADC clusters and F5 iApps® templating for application services deployment.

So why did we spend the time to develop this Private Cloud Solution Package for Red Hat OpenStack Platform v9? And why do I think it is valuable? Well, several different reasons.

First, if you are like me, six months ago I had no idea where to start. How do I build an OpenStack cloud so I can test and understand OpenStack? Do I build a Faux-penStack in a single VM on my laptop? Do I purchase a lab's worth of machines to build it out? Those were my initial questions, and they seem to be the questions a lot of enterprises are also facing. In a study commissioned by Suse, 50% of those who have started an OpenStack initiative have failed. The fact is, OpenStack is difficult. There are many, many options. There are many, many configuration files. Until you are grounded in what each does and how they interact, it all seems like a bunch of gibberish.

So, we first created this Private Cloud Solution Package with Red Hat to provide that starting point. It is intended to be the 'You Are Here' marker for a successful deployment of a real-world production cloud which meets the needs of an enterprise. The deployment guide marries the Red Hat install guide with specific instruction gained through setting up our Red Hat OpenStack Platform many times. The aim isn't to provide answers to questions about each configuration option, or to provide multiple paths with a decision tree for options which differ and many times conflict with each other. Our guidance is specific, and prescriptive. We wanted a documentation path that ensures a functioning cloud. Follow it step by step using the variable inputs we did and you will end up with a validated, known-good cloud. I hope you will find the effort we put into it to be of value.

John, Mark, Dave, and I (our “Pizza team”, as John called it—although we didn’t eat any pizza as we were all working remotely from one another) spent many hours getting to the point where we could create documentation that, when followed, creates a reproducible, functioning, and validated Red Hat OpenStack Platform v9 Overcloud with F5 LBaaS v2 functionality connected to a pair of F5’s new iSeries 5800 appliances. Get that: REPRODUCIBLE and VALIDATED. We can wipe our overcloud away and redeploy the overcloud in our environment in 44 minutes. That includes reinstalling the LBaaS components and validating that they are working using OpenStack community tests. VALIDATED.
That is the second reason we spent all this time creating this solution package. We wanted to define a way for our customers, the Red Hat and F5 support organizations, and our two awesome Professional Services organizations to KNOW that what has been built is validated and will function. To that end, as part of the package development we created a test and troubleshooting Docker container, and have released it on GitHub here. This container bundles up all the software requirements and specific software versions required to run the community OpenStack tempest tests against any newly installed Red Hat OpenStack Platform environment. These tests let you know definitively whether the cloud was built correctly.

We’ll run a set of tests against your cloud installation assuring networking is working before we install any F5 service components. We’ll run a set of tests after we install F5 service components assuring the proper functionality of the services we provide. We’ll leave you with the test results in a common testing format as documentation that your cloud tenants should be good to go. As we develop certified use cases of our technologies with our customers, we’ll write tests for those and run those too. Cool, huh? This is DevOps after all. It’s all about tested solutions. You don’t have to wait for our professional services to test your own cloud.

By default, the test client includes all of the community tempest tests needed to validate LBaaS v2 on Liberty Red Hat OSP v8 (just for fun and to demonstrate the extensibility of the toolset) and Mitaka Red Hat OSP v9. Not only does it include the tests and the toolsets to run them; John even created a script which will generate the required tempest configuration files on the fly. Simply provide your overcloudrc file and the environment will be interrogated and the proper settings will be added to the config files. Again, cool. Testing is king, and we’re doing our best to hand out crowns.

F5 BIG-IP per application Red Hat OpenShift cluster migrations
Overview

OpenShift migrations are typically done when it is desired to minimise disruption time during cluster upgrades. Disruptions can especially occur when performing big changes in the cluster, such as changing the CNI from OpenShiftSDN to OVNKubernetes. OpenShift cluster migrations are well covered for applications by using Red Hat's Migration Toolkit for Containers (MTC). The F5 BIG-IP has the role of network redirector indicated in the Network considerations chapter. The F5 BIG-IP can perform per-L7-route migration without service disruption, hence allowing migration or roll-back on a per-application basis, eliminating disruption and de-risking the maintenance window.

How it works

As mentioned above, the traffic redirection is done on a per-L7-route basis; this is true regardless of how these L7 routes are implemented: ingress controller, API manager, service mesh, or a combination of these. This L7 awareness is achieved by using F5 BIG-IP's Container Ingress Services (CIS) controller for Kubernetes/OpenShift and its multi-cluster functionality, which can expose, in a single VIP, L7 routes of services hosted in multiple Kubernetes/OpenShift clusters. This is shown in the next picture.

For a migration operation, a blue/green strategy is used independently for each L7 route, where blue refers to the application in the older cluster and green refers to the application in the newer cluster. For each L7 route, a weight is specified for each blue or green backend (as in an A/B strategy). This is shown in the next picture.

In this example, the migration scenario uses OpenShift's default ingress controller (HAProxy) as an in-cluster ingress controller, where the Route CR is used to indicate the L7 routes. For each L7 route defined in the HAProxy tier, an L7 route is defined in the F5 BIG-IP tier. This 1:1 mapping provides the per-application granularity. The VirtualServer CR is used for the F5 BIG-IP. If desired, it is also possible to use Route resources for the F5 BIG-IP. Next, the manifests required for the F5 BIG-IP for a given L7 route are shown, in this case https://www.migration.com/shop (alias route-b):

apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  name: route-b
  namespace: openshift-ingress
  labels:
    f5cr: "true"
spec:
  host: www.migration.com
  virtualServerAddress: "10.1.10.106"
  hostGroup: migration.com
  tlsProfileName: reencrypt-tls
  profileMultiplex: "/Common/oneconnect-32"
  pools:
    - path: /shop
      service: router-default-route-b-ocp1
      servicePort: 443
      weight: 100
      alternateBackends:
        - service: router-default-route-b-ocp2
          weight: 0
      monitor:
        type: https
        name: /Common/www.migration.com-shop
        reference: bigip
---
apiVersion: v1
kind: Service
metadata:
  annotations:
  name: router-default-route-b-ocp1
  namespace: openshift-ingress
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  annotations:
  name: router-default-route-b-ocp2
  namespace: openshift-ingress
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
  type: NodePort
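Once the green service has been validated in the new cluster, the cut-over for this route is just a weight change on the same VirtualServer. As a sketch derived from the manifest above (only the weight values differ), the fully migrated state would look like this:

  pools:
    - path: /shop
      service: router-default-route-b-ocp1      # blue: old cluster, now drained
      servicePort: 443
      weight: 0
      alternateBackends:
        - service: router-default-route-b-ocp2  # green: new cluster, receives all traffic
          weight: 100

Intermediate weights can be used to canary the new cluster for this route, and reverting the values rolls the route back, without touching the application or the HAProxy Route objects.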
The CIS multi-cluster feature will search for the specified services in both clusters. It is up to DevOps to ensure that blue services are only present in the cluster designated as blue (in this case router-default-route-b-ocp1) and green services are only present in the cluster designated as green (in this case router-default-route-b-ocp2). It is important to remark that the Route manifests for HAProxy (or any other ingress solution used) do not require any modification. That is, this migration mechanism is transparent to the application developers.

Demo

You can see this feature in action in the next video. The manifests used in the demo can be found in the following GitHub repository: https://github.com/f5devcentral/f5-bd-cis-demo/tree/main/crds/demo-mc-twotier-haproxy-noshards

Next steps

Try it today! CIS is open-source software and is included in your support entitlement. If you want to learn more about CIS and the CIS multi-cluster features, the following blog articles are suggested:

- F5 BIG-IP deployment with OpenShift - platform and networking options
- F5 BIG-IP deployment with OpenShift - publishing application options
- F5 BIG-IP deployment with OpenShift - multi-cluster architectures

Recommended Linux text books for F5 Administration using TMSH
Hi All, I have little experience with Linux and would like to build my understanding of it so that I can use the Traffic Management Shell (TMSH) with confidence. I would like to know which Linux textbooks you would recommend to get a basic to intermediate understanding of administering F5 appliances using TMSH.

F5 BIG-IP deployment with Red Hat OpenShift - keeping client IP addresses and egress flows
Controlling the egress traffic in OpenShift allows the BIG-IP to be used for several use cases:

- Keeping the source IP of the ingress clients
- Providing highly scalable SNAT for egress flows
- Providing security functionalities for egress flows