Introducing F5 BIG-IP Next CNF Solutions for Red Hat OpenShift
5G and Red Hat OpenShift

5G standards have embraced Cloud-Native Network Functions (CNFs) for implementing network services in software as containers. This is a big change from previous Virtual Network Functions (VNFs) or Physical Network Functions (PNFs). The main characteristics of Cloud-Native Functions are:

- Implementation as containerized microservices
- Small performance footprint, with the ability to scale horizontally
- Independence of guest operating system, since CNFs operate as containers
- Lifecycle manageable by Kubernetes

Overall, these provide a huge improvement in terms of flexibility, faster service delivery, resiliency, and, crucially, the use of Kubernetes as a unified orchestration layer. The latter is a drastic change from previous standards, where each vendor had its own orchestration. This unification around Kubernetes greatly simplifies network functions for operators, reducing the cost of deploying and maintaining networks. Additionally, embracing the container form factor allows Network Functions (NFs) to be deployed in new use cases such as the far edge, thanks to their smaller footprint, while the same functions can also be deployed at large scale in a central data center because of their horizontal scalability. In this article we focus on Red Hat OpenShift, the market-leading and industry-reference implementation of Kubernetes for IT and telco workloads.

Introduction to F5 BIG-IP Next CNF Solutions

F5 BIG-IP Next CNF Solutions is a suite of Kubernetes-native 5G Network Functions implemented as microservices. It shares the same Cloud Native Engine (CNE) as F5 BIG-IP Next SPK, introduced last year. The functionalities implemented by the CNF Solutions deal mainly with user plane data. User plane data has the particularity that the final destination of the traffic is not the Kubernetes cluster but rather an external endpoint, typically the Internet. In other words, the traffic enters the Kubernetes cluster and is forwarded out of the cluster again. This is done using dedicated interfaces that are not used for the regular ingress and egress paths of a Kubernetes cluster. In this case, the main purpose of using Kubernetes is to make use of its orchestration, flexibility, and scalability. The main functionalities implemented at the initial GA release of the CNF Solutions are:

- F5 Next Edge Firewall CNF: an IPv4/IPv6 firewall focused on protecting the 5G core networks from external threats, including DDoS flood protection and IPS DNS protocol inspection.
- F5 Next CGNAT CNF: large-scale NAT with the following features: NAPT, Port Block Allocation, Static NAT, Address Pooling Paired, and Endpoint Independent mapping modes; inbound NAT and hairpinning; egress path filtering and address exclusions; ALG support for FTP/FTPS, TFTP, RTSP, and PPTP.
- F5 Next DNS CNF: a transparent DNS resolver and caching service. Other remarkable features are zero rating and DNS64, which allows IPv6-only clients to connect to IPv4-only services via synthetic IPv6 addresses.
- F5 Next Policy Enforcer CNF: traffic classification, steering and shaping, and TCP and video optimization. This product launched as Early Access in February 2023 with basic functionality; static TCP optimization is GA in the initial release.

Although the CGNAT (Carrier-Grade NAT) and Policy Enforcer functionalities are specific to user plane use cases, the Edge Firewall and DNS functionalities have additional uses in other places of the network.
F5 and OpenShift

BIG-IP Next CNF Solutions fully supports Red Hat OpenShift Container Platform, which allows deployment in edge or core locations with unified management across the multiple deployments. OpenShift operators greatly facilitate the setup and tuning of telco-grade applications. These are:

- Node Tuning Operator, used to set up hugepages.
- CPU Manager and Topology Manager with NUMA awareness, which allow the data plane PODs to be scheduled within a NUMA domain that is aligned with the SR-IOV NICs they are attached to.

In an OpenShift platform all of this is set up transparently to the applications, and BIG-IP Next CNF Solutions only requires being configured with an appropriate runtimeClass.

F5 BIG-IP Next CNF Solutions architecture

F5 BIG-IP Next CNF Solutions makes use of the widely trusted F5 BIG-IP Traffic Management Microkernel (TMM) data plane. This allows for a high-performance, dependable product from the start. The CNF functionalities come from a microservices re-architecture of the broadly used F5 BIG-IP VNFs. The diagram below illustrates how the microservices architecture is used. The data plane POD scales vertically from 1 to 16 cores and horizontally from 1 to 32 PODs, enabling it to handle millions of subscribers. NUMA nodes are supported. The next diagram focuses on the data plane handling, which is the most relevant aspect for this CNF suite.

Typically, each data plane POD has two IP addresses, one for each side of the N6 reference point. These could be named the radio and Internet sides, as shown in the diagram above. The left-side L3 hop must distribute the traffic among the left-side addresses of the CNF data plane. This left-side L3 hop can be a router with BGP ECMP (Equal Cost Multi-Path), an SDN, or any other mechanism which is able to:

- Distribute the subscribers across the data plane PODs, shown in [1] of the figure above.
- Keep these subscribers in the same PODs when there is a change in the number of active data plane PODs (scale-in, scale-out, maintenance, etc.), as shown in [2] of the figure above. This minimizes service disruption.

On the right side of the CNFs, the path towards the Internet, it is typical to implement NAT functionality to translate the telco's private addresses into public addresses. This is done with the BIG-IP Next CG-NAT CNF. This NAT makes the return traffic symmetrical by reaching the same POD which processed the outbound traffic, thanks to each POD owning part of the NAT address space, as shown in [3] of the figure above. Each POD's NAT address space can be advertised via BGP. When not using NAT on the right side of the CNFs, the network must be able to send the return traffic back to the same POD which is processing the same connection. The traffic must be kept symmetrical at all times; this is typically done with an SDN.

Using F5 BIG-IP Next CNF Solutions

As expected in a fully integrated Kubernetes solution, both the installation and the configuration are done using the Kubernetes APIs. The installation is performed using Helm charts, and the configuration using Custom Resource Definitions (CRDs). Unlike ConfigMaps, CRDs allow schema validation of the configurations before they are applied. Details of the CRDs can be found in this clouddocs site. Next is an overview of the most relevant CRDs.

General network configuration

Deploying in Kubernetes automatically configures and assigns IP addresses to the CNF PODs. The data plane interfaces, however, require specific configuration.
The required steps are:

- Create Kubernetes NetworkNodePolicies and NetworkAttachment definitions, which expose SR-IOV VFs to the CNF data plane PODs (TMM). To make use of these SR-IOV VFs, they are referenced in the BIG-IP controller's Helm chart values file. This is described in the Networking Overview page. An illustrative example of these SR-IOV objects is shown at the end of this section.
- Define the L2 and L3 configuration of the exposed SR-IOV interfaces using the F5BigNetVlan CRD.
- If static routes need to be configured, these can be added using the F5BigNetStaticroute CRD.
- If BGP configuration needs to be added, this is configured in the BIG-IP controller's Helm chart values file. This is described in the BGP Overview page. It is expected that this will be configurable using a CRD in the future.

Traffic management listener configuration

As with classic BIG-IP, once the CNFs are running and plumbed into the network, no traffic is processed by default. The traffic management functionalities implemented by BIG-IP Next CNF Solutions are the same as those of the analogous modules in classic BIG-IP, and the CRDs in BIG-IP Next that configure these functionalities are conceptually similar too. Analogous to Virtual Servers in classic BIG-IP, BIG-IP Next CNF Solutions has a set of CRDs that create traffic listeners where traffic management policies are applied. This is mainly the F5BigContextSecure CRD, which allows specifying traffic selectors indicating the VLANs, source and destination prefixes, and ports where the policies should be applied. There are specific CRDs for listeners of Application Level Gateways (ALGs) and protocol-specific solutions. These required several steps in classic BIG-IP: first creating the Virtual Server, then creating the profile, and finally applying the profile to the Virtual Server. In BIG-IP Next this is done in a single CRD. At the time of this writing, these CRDs are:

- F5BigZeroratingPolicy - Part of the Zero-Rating DNS solution; enables subscribers to bypass rate limits.
- F5BigDnsApp - High-performance DNS resolution, caching, and DNS64 translations.
- F5BigAlgFtp - File Transfer Protocol (FTP) application layer gateway services.
- F5BigAlgTftp - Trivial File Transfer Protocol (TFTP) application layer gateway services.
- F5BigAlgPptp - Point-to-Point Tunnelling Protocol (PPTP) application layer gateway services.
- F5BigAlgRtsp - Real Time Streaming Protocol (RTSP) application layer gateway services.

Traffic management profiles and policies configuration

Depending on the type of listener created, different types of profiles and policies can be attached to it. In the case of F5BigContextSecure, the following CRDs can be attached to define how traffic is processed:

- F5BigTcpSetting - TCP options to fine-tune how application traffic is managed.
- F5BigUdpSetting - UDP options to fine-tune how application traffic is managed.
- F5BigFastl4Setting - FastL4 options to fine-tune how application traffic is managed.

and the following policies for security and NAT:

- F5BigDdosPolicy - Denial of Service (DoS/DDoS) event detection and mitigation.
- F5BigFwPolicy - Granular stateful-flow filtering based on access control list (ACL) policies.
- F5BigIpsPolicy - Intelligent packet inspection that protects applications from malicious network traffic.
- F5BigNatPolicy - Carrier-grade NAT (CG-NAT) using large-scale NAT (LSN) pools.

The ALG listeners require the use of F5BigNatPolicy and might make use of the F5BigFwPolicy CRDs. These CRDs also have traffic selectors to allow further control over which traffic the policies should be applied to.
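As a reference for the first step above (exposing SR-IOV VFs to the TMM data plane PODs), the sketch below shows what the OpenShift SR-IOV Network Operator objects could look like. This is a minimal, generic illustration rather than an excerpt from the F5 documentation: the object names, resourceName, physical interface (ens2f0), VF count, and namespaces are all assumptions that must be adapted to your environment and to the Networking Overview guidance.

```yaml
# Hypothetical SR-IOV policy: carves VFs out of a physical NIC and exposes them
# as an allocatable resource (openshift.io/cnf_dataplane_left) on matching nodes.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: cnf-dataplane-left                 # assumed name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: cnf_dataplane_left         # assumed resource name
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 8                                # assumed VF count
  nicSelector:
    pfNames: ["ens2f0"]                    # assumed physical interface
  deviceType: vfio-pci                     # user-space data planes typically bind VFs to vfio-pci
---
# Hypothetical network attachment: ties the allocatable resource above to a
# secondary network that the CNF data plane PODs can request.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: cnf-dataplane-left                 # assumed name
  annotations:
    k8s.v1.cni.cncf.io/resourceName: openshift.io/cnf_dataplane_left
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "sriov",
      "name": "cnf-dataplane-left",
      "ipam": {}
    }
```

The resulting resource name and network attachment are what the BIG-IP controller's Helm values would then reference for the TMM data plane interfaces, while the L2/L3 settings themselves are defined with the F5BigNetVlan CRD as described above.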
Firewall Contexts

Firewall policies are applied to the listener with the best match. In addition to the F5BigFwPolicy that might be attached, a global firewall policy (hence effective in all listeners) can be configured; it is evaluated before the listener-specific firewall policy. This is done with the F5BigContextGlobal CRD, which can have an F5BigFwPolicy attached. F5BigContextGlobal also contains the default action to apply to traffic that does not match any firewall rule in any context (e.g. the Global Context, a Secure Context, or another listener). This default action can be set to accept, reject, or drop, along with whether to log the default action. In summary, within a listener match, the firewall contexts are processed in this order:

1. ContextGlobal.
2. Matching ContextSecure or another listener context.
3. Default action as defined by ContextGlobal's default action.

Event Logging

Event logging at high speed is critical to provide visibility into what the CNFs are doing. For this, the following CRDs are implemented:

- F5BigLogProfile - Specifies subscriber connection information sent to remote logging servers.
- F5BigLogHslpub - Defines remote logging server endpoints for the F5BigLogProfile.

Demo

F5 BIG-IP Next CNF Solutions roadmap

What is being shown here is just the beginning of a journey. Telcos have embraced Kubernetes as the compute and orchestration layer. Because of this, BIG-IP Next CNF Solutions will eventually replace the analogous classic BIG-IP VNFs. Expect in the upcoming months that BIG-IP Next CNF Solutions will match and eventually surpass the features currently offered by the analogous VNFs.

Conclusion

This article introduces a fully re-architected, scalable solution for Red Hat OpenShift, mainly focused on the telco user plane. This new microservices architecture offers flexibility, faster service delivery, resiliency and, crucially, the use of Kubernetes. Kubernetes is becoming the unified orchestration layer for telcos, simplifying the infrastructure lifecycle and reducing costs. OpenShift represents the best-in-class Kubernetes platform thanks to its enterprise readiness and telco-specific features. The architecture of this solution, alongside the use of OpenShift, also extends network services use cases to the edge by allowing the deployment of Network Functions in a smaller footprint. Please check the official BIG-IP Next CNF Solutions documentation for more technical details and www.f5.com for a high-level overview.

Upcoming Action Required: F5 NGINX Plus R33 Release and Licensing Update
Hello community! The upcoming release of NGINX Plus R33 is scheduled for this quarter. This release brings changes to our licensing process, aligning it with industry best practices and the rest of the F5 licensing programs. These updates are designed to better serve our commercial customers by providing improved visibility into usage, streamlined license tracking, and enhanced customer service.

Key Changes in NGINX Plus R33

Release: Q4, 2024

- New Requirement: All commercial NGINX Plus instances will now require the placement of a JSON Web Token (JWT). This JWT file can be downloaded from your MyF5 account.
- License Validation: NGINX Plus instances will regularly validate their license status with the F5 licensing endpoint for connected customers. Offline environments can manage this through NGINX Instance Manager.
- Usage Reporting: NGINX Plus R33 introduces a new requirement for commercial product usage reporting. NGINX's adoption of F5's standardized approach ensures easier and more precise license and usage tracking. Once our customers are using R33 together with NGINX's management options, tasks such as usage reporting and renewals will be much more streamlined and straightforward. Additionally, NGINX instance visibility and management will be much easier.

Action Required

To ensure a smooth transition and uninterrupted service, please take the following steps:

- Install the JWT: Make sure to install the JWT on all your commercial NGINX Plus instances. This is crucial to avoid any interruptions.
- Additional Steps: Refer to our detailed guide for any other necessary steps. See here for additional required next steps.

IMPORTANT: Failure to follow these steps will result in NGINX Plus R33 and subsequent release instances not functioning.

Critical Notes

- JWT Requirement: JWT files are essential for the startup of NGINX Plus R33.
- NGINX Ingress Controller: Users of NGINX Ingress Controller should not upgrade to NGINX Plus R33 until the next version of the Ingress Controller is released.
- No Changes for Earlier Versions: If you are using a version of NGINX Plus prior to R33, no action is required.

Resources

We are preparing a range of resources to help you through this transition:

- Support Documentation: Comprehensive support documentation will be available upon the release of NGINX Plus R33.
- Demonstration Videos: We will also provide demonstration videos to guide you through the new processes upon the release of NGINX Plus R33.
- NGINX Documentation: For more detailed information, visit our NGINX documentation.

Need Assistance?

If you have any questions or concerns, please do not hesitate to reach out:

- F5 Representative: Contact your dedicated representative for personalized support.
- MyF5 Account: Support is readily available through your MyF5 account.

Stay tuned for more updates. Thank you for your continued partnership.

Announcing F5 NGINX Gateway Fabric 1.4.0 with IPv6 and TLS Passthrough
We announced the next release of F5 NGINX Gateway Fabric, version 1.4.0, which includes a number of smaller but very necessary features. This allows us to dedicate more time to advancing our non-functional testing framework and ensuring we maintain top performance across releases. Nevertheless, we have some great highlights in this release:

- IPv6 support
- TLS passthrough (via TLSRoute)
- Server zone metrics
- Ability to add custom pod annotations
- Plenty of bug fixes!

During this release cycle, we discovered a bug around our custom policies that occurred when you had the same path for more than one Route: the policy would not be applied to either Route. For this release, we've decided to enforce a restriction so that policies cannot be applied when two or more Routes share the same path. However, we are pursuing a long-term solution to lift this restriction on this edge case, as we understand that use cases that route based on header, query parameter, or other request attributes on the same path do exist.

IPv6 Support

While most Kubernetes clusters still utilize IPv4, we recognized that anyone employing an IPv6 cluster had no way to deploy NGINX Gateway Fabric. We therefore added dual IPv4/IPv6 networking to NGINX Gateway Fabric. This option is enabled by default, so you can simply install as normal on an IPv6 cluster.

TLS Passthrough

New with 1.4 is TLSRoute support. This Route type enables the TLS passthrough use case and is similar to setting up an HTTPRoute. It allows you to pass encrypted traffic through NGINX Gateway Fabric, where it is terminated by your backend application, ensuring end-to-end encryption. As most information passes straight through NGINX Gateway Fabric with this route, setup is easy. You can enable TLS passthrough for any application using our guide available here; an illustrative example is shown below.
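As a rough sketch of what such a configuration looks like, the manifests below combine a Gateway listener in Passthrough mode with a TLSRoute. They are based on the upstream Gateway API types rather than copied from the NGINX Gateway Fabric guide, and the gateway class, object names, hostname, and backend Service/port are assumptions.

```yaml
# Illustrative only: a Gateway listener that accepts TLS traffic without terminating it.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: gateway                      # assumed name
spec:
  gatewayClassName: nginx            # assumed GatewayClass
  listeners:
  - name: tls-passthrough
    port: 443
    protocol: TLS
    tls:
      mode: Passthrough              # the data plane does not decrypt this traffic
    allowedRoutes:
      kinds:
      - kind: TLSRoute
---
# Illustrative TLSRoute: matches the SNI hostname and forwards the still-encrypted
# stream to the backend, which terminates TLS itself.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: secure-app                   # assumed name
spec:
  parentRefs:
  - name: gateway
    sectionName: tls-passthrough
  hostnames:
  - "app.example.com"                # assumed hostname
  rules:
  - backendRefs:
    - name: secure-app               # assumed Service name
      port: 8443                     # assumed port where the application terminates TLS
```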
Non-Functional Testing

This release marks the completion of automating the non-functional testing that we execute before each release. If you are unfamiliar with these tests, our team runs NGINX Gateway Fabric through a series of scenarios, non-functional tests, to check whether our performance is regressing or improving compared with previous releases. As an infrastructure product that you rely on, it is our top priority to ensure that stability and performance are not compromised as new features are released. The results of all non-functional testing are available in the GitHub repository for anyone to see and should give you an idea of how well NGINX Gateway Fabric performs in general and across releases.

What's Next

NGINX Gateway Fabric 1.5.0 will bring NGINX code snippets to the Gateway API, with a first-class Upstream Settings policy to configure keepalive connections and NGINX zone size. If you are familiar with NGINX, or find that you need a feature that NGINX provides but that is not yet available via a Gateway API extension, you will be able to put an NGINX code snippet within a SnippetFilter to apply NGINX configuration to a Route rule. You will even be able to use this feature to load other modules NGINX provides and leverage the vast wealth of NGINX functionality. We will still be providing many NGINX features via first-class policies and filters, such as the Upstream Settings policy, as they allow us to handle much of the complexity of applying NGINX configuration across the Gateway API framework for you. The Upstream Settings policy, for example, can set upstream management directives that cannot be applied effectively via snippets. We will continue to deliver these custom policies and filters across all of our releases, in addition to new Gateway API resources and NGINX Gateway Fabric specific features. You can see a preview of the full snippet design here, though not all features may be implemented in one release cycle. For more information on our strategy towards first-class NGINX customization via Gateway API extensions, see our full enhancement proposal here.

Resources

For the complete changelog for NGINX Gateway Fabric 1.4.0, see the Release Notes. To try NGINX Gateway Fabric for Kubernetes with NGINX Plus, start your free 30-day trial today or contact us to discuss your use cases. If you would like to get involved, see what is coming next, or see the source code for NGINX Gateway Fabric, check out our repository on GitHub! We have weekly community meetings on Tuesdays at 9:30AM Pacific/12:30PM Eastern/5:30PM GMT. The meeting link, updates, agenda, and notes are on the NGINX Gateway Fabric Meeting Calendar. Links are also always available from our GitHub readme.

Streamlining BIG-IP Next Deployments: Automate with CI/CD Pipelines Using Terraform Cloud and GitHub
Automation is key to maintaining efficiency and consistency in today's fast-paced IT environment. In this article, I will demonstrate how to automate the deployment of BIG-IP Next configurations using Terraform Cloud and GitHub. By integrating AS3 JSON and Terraform configuration code, you can ensure that any changes made in your GitHub repository automatically trigger Terraform Cloud to deploy the updated configurations to your BIG-IP Next instance via the BIG-IP Next Central Manager.

Key Players:

- BIG-IP Next: Your powerful application delivery controller, offering advanced features for load balancing, security, and more.
- BIG-IP Next Central Manager: The brain of your BIG-IP Next deployment, orchestrating and managing all your BIG-IP instances.
- BIG-IP Next Terraform resources: A powerful interface allowing programmatic control over your BIG-IP configuration, simplifying automation.
- Terraform Cloud: A robust platform for infrastructure-as-code, providing version control, collaboration, and powerful automation tools.
- GitHub: A popular version control system for collaborative software development, where your Terraform configuration files will reside.
- Terraform Agent: A local agent installed on a dedicated VM in your private data center, acting as a bridge between Terraform Cloud and your BIG-IP Next instances.

The Workflow:

- Define your Infrastructure in GitHub: Using the Terraform resources documented at https://clouddocs.f5.com/products/orchestration/terraform/latest/BIG-IP-Next/big-ip-next-index.html#release-notes, you describe your desired BIG-IP Next configuration in code (e.g., creating virtual servers, pools, monitors, and other application services). Store your Terraform code in a GitHub repository.
- Configure Terraform Cloud: Set up a workspace in Terraform Cloud and link it to your GitHub repository. Configure a VCS trigger to automatically initiate a Terraform plan and apply it when changes are made to your code in GitHub.
- Install and Configure Terraform Agent: Set up a VM in your private data center running Ubuntu and install the Terraform Agent. Configure the agent to connect to your Terraform Cloud workspace.
- Automatic Configuration: When you push changes to your Terraform code in GitHub, Terraform Cloud detects the update, triggers a Terraform plan, and sends it to the Terraform Agent. The agent then communicates with your BIG-IP Next Central Manager to implement the necessary changes on your BIG-IP Next instances.

Benefits:

- Simplified Management: No more manual configuration and tedious updates! Terraform Cloud automates deployment, reducing errors and ensuring consistency across your BIG-IP Next environment.
- Increased Efficiency: Spend less time on repetitive tasks and focus on building and deploying applications faster.
- Collaboration and Version Control: Work collaboratively with your team, track changes, and easily revert to previous configurations using GitHub's robust version control capabilities.
- Scalability and Flexibility: Terraform Cloud seamlessly scales to manage large and complex environments, providing flexibility and adaptability for your growing needs.

Getting Started:

Set up GitHub Repository: Create a repository in GitHub and store your Terraform configuration files there. You can clone the example repository from https://github.com/f5bdscs/example-AS3.git and begin working on it.
```hcl
terraform {
  required_providers {
    bigipnext = {
      source  = "F5Networks/bigipnext"
      version = "1.2.0"
    }
  }
  cloud {
    organization = "39nX-example"
    workspaces {
      name = "39nX-example"
    }
  }
}

variable "host" {}
variable "username" {}
variable "password" {}

provider "bigipnext" {
  username = var.username
  password = var.password
  host     = var.host
}

resource "bigipnext_cm_as3_deploy" "test" {
  target_address = "10.1.1.10"
  as3_json       = file("as3.json")
}
```

Explanation:

- Terraform block: defines the required provider bigipnext with its source and version, and specifies the Terraform Cloud organization and workspace name.
- Variable declarations: host, username, and password are declared as input variables.
- Provider configuration: uses the input variables for username, password, and host.
- Resource definition: the bigipnext_cm_as3_deploy resource with target_address and the as3.json file.

Make sure to create and populate the as3.json file with the necessary AS3 declarations. Also, ensure you provide values for host, username, and password when running the Terraform commands.

```json
{
    "class": "ADC",
    "schemaVersion": "3.45.0",
    "id": "example-declaration-01",
    "label": "Sample 1",
    "remark": "Simple HTTP application with round robin pool",
    "next-cm-tenant01": {
        "class": "Tenant",
        "EXAMPLE_APP": {
            "class": "Application",
            "template": "http",
            "serviceMain": {
                "class": "Service_HTTP",
                "virtualAddresses": ["10.1.20.10"],
                "pool": "next-cm-pool01"
            },
            "next-cm-pool01": {
                "class": "Pool",
                "monitors": ["http"],
                "members": [
                    {
                        "servicePort": 8080,
                        "serverAddresses": ["10.1.20.4"]
                    }
                ]
            }
        }
    }
}
```

Configure Terraform Cloud: Create a workspace, link it to your GitHub repository, and set up a VCS trigger to activate plans and apply changes. Please follow the guide at https://developer.hashicorp.com/terraform/tutorials/cloud-get-started/cloud-vcs-change to integrate Terraform Cloud with your GitHub repository.

Install and Configure Terraform Agent: Set up a VM in your private data center, install the Terraform Agent, and configure it to connect to your Terraform Cloud workspace. Please follow the guide at https://developer.hashicorp.com/terraform/tutorials/cloud/cloud-agents to install the Terraform Cloud agent.

Deploy your configuration: Push your code to GitHub and watch as Terraform Cloud automatically updates your BIG-IP Next instances. You can watch the demonstration video here: https://youtu.be/0xEtj-jAepE
Traditionally, to advertise an application to the internet or to connect applications across multi-cloud environments enterprises must configure and manage multiple networking and security devices from different vendors in the DMZ of the data center. CE on F5 rSeries is a single vendor, converged solution for all enterprise multi-cloud application connectivity and security needs.1.1KViews2likes2CommentsAnnouncing F5 NGINX Gateway Fabric 1.3.0 with Tracing, GRPCRoute, and Client Settings
The release of NGINX Gateway Fabric version 1.3.0, introduces plenty of highly requested features and improvements. GRPCRoutes are now supported to manage gRPC traffic, similar to the handling of HTTPRoute. The update includes new custom policies like ClientSettingsPolicy for client request configurations and ObservabilityPolicy for enabling application tracing with OpenTelemetry support. The GRPCRoute allows for efficient routing, header modifications, traffic weighting, and error conversion from HTTP to gRPC. We will explain how to set up NGINX Gateway Fabric to manage gRPC traffic using a Gateway and a GRPCRoute, providing a detailed example of the setup. It also outlines how to enable tracing through the NginxProxy resource and ObservabilityPolicy, emphasizing a selective approach to tracing to avoid data overload. Additionally, the ClientSettingsPolicy allows for the customization of NGINX directives at the Gateway or Route level, giving users control over certain NGINX behaviors with the possibility of overriding Gateway defaults at the Route level. Looking ahead, the NGINX Gateway Fabric team plans to work on TLS Passthrough, IPv6, and improvements to the testing suite, while preparing for larger updates like NGINX directive customization and separation of data and control planes. Check the end of the article to see how to get involved in the development process through GitHub and participate in bi-weekly community meetings. Further resources and links are also provided within.200Views0likes0Comments