Announcing F5 NGINX Gateway Fabric 1.4.0 with IPv6 and TLS Passthrough
We have announced the next release of F5 NGINX Gateway Fabric, version 1.4.0, which includes a number of smaller but very necessary features. This allows us to dedicate more time to advancing our non-functional testing framework and ensuring we maintain top performance across releases. Highlights of this release include:

- IPv6 support
- TLS passthrough (via TLSRoute)
- Server zone metrics
- The ability to add custom pod annotations
- Plenty of bug fixes!

During this release cycle, we discovered a bug in our custom policies that occurred when more than one Route used the same path: the policy would not be applied to either Route. For this release, we have decided to enforce a restriction so that policies cannot be applied when two or more Routes share the same path. However, we are pursuing a long-term solution to lift this restriction, as we understand that use cases which route based on headers, query parameters, or other request attributes on the same path do exist.

IPv6 Support

While most Kubernetes clusters still use IPv4, we recognized that anyone running an IPv6 cluster had no way to deploy NGINX Gateway Fabric. We have therefore implemented dual IPv4/IPv6 networking for NGINX Gateway Fabric. This option is enabled by default, so you can simply install as normal on an IPv6 cluster.

TLS Passthrough

New with 1.4 is TLSRoute support. This Route type enables the TLS passthrough use case and is similar to setting up an HTTPRoute. It allows you to pass encrypted traffic through NGINX Gateway Fabric so that it is terminated by your backend application, ensuring end-to-end encryption. Because most traffic passes straight through NGINX Gateway Fabric with this Route type, setup is easy. You can enable TLS passthrough for any application using our guide available here.
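To give a sense of the shape of the resource, here is a minimal TLSRoute sketch, shown as a JSON manifest (kubectl accepts JSON as well as YAML). The Gateway name, listener section name, hostname, and backend Service below are illustrative placeholders rather than values from the guide, so treat this as a rough sketch and follow the linked guide for a supported end-to-end walkthrough.

{
  "apiVersion": "gateway.networking.k8s.io/v1alpha2",
  "kind": "TLSRoute",
  "metadata": {
    "name": "secure-app",
    "namespace": "default"
  },
  "spec": {
    "parentRefs": [
      { "name": "my-gateway", "sectionName": "tls" }
    ],
    "hostnames": [ "app.example.com" ],
    "rules": [
      {
        "backendRefs": [
          { "name": "secure-app", "port": 8443 }
        ]
      }
    ]
  }
}

The Gateway it attaches to needs a listener using the TLS protocol with passthrough mode, so that the encrypted stream is forwarded to the backend untouched.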
Non-Functional Testing

This release marks the completion of automating the non-functional testing that we execute before each release. If you are unfamiliar with these tests: our team runs NGINX Gateway Fabric through a series of scenarios, the non-functional tests, to check whether performance is regressing or improving relative to previous releases. As an infrastructure product that you rely on, it is our top priority to ensure that stability and performance are not compromised as new features are released. The results of all non-functional testing are available in the GitHub repository for anyone to see and should give you an idea of how well NGINX Gateway Fabric performs in general and across releases.

What's Next

NGINX Gateway Fabric 1.5.0 will bring NGINX code snippets to the Gateway API, along with a first-class Upstream Settings policy to configure keepalive connections and NGINX zone size. If you are familiar with NGINX, or find that you need an NGINX feature that is not yet available via a Gateway API extension, you will be able to put an NGINX code snippet within a SnippetFilter to apply NGINX configuration to a Route rule. You will even be able to use the feature to load other modules NGINX provides and leverage the vast wealth of NGINX functionality. We will still provide many NGINX features via first-class policies and filters, such as the Upstream Settings policy, as these allow us to handle much of the complexity of translating NGINX configuration to the Gateway API framework for you. The Upstream Settings policy, for example, can set upstream management directives that cannot be applied effectively via snippets. We will continue to deliver these custom policies and filters across all of our releases, in addition to new Gateway API resources and NGINX Gateway Fabric specific features. You can see a preview of the full snippet design here, though not all features may be implemented in one release cycle. For more information on our strategy toward first-class NGINX customization via Gateway API extensions, see our full enhancement proposal here.

Resources

For the complete changelog for NGINX Gateway Fabric 1.4.0, see the Release Notes. To try NGINX Gateway Fabric for Kubernetes with NGINX Plus, start your free 30-day trial today or contact us to discuss your use cases. If you would like to get involved, see what is coming next, or browse the source code for NGINX Gateway Fabric, check out our repository on GitHub! We have weekly community meetings on Tuesdays at 9:30AM Pacific/12:30PM Eastern/5:30PM GMT. The meeting link, updates, agenda, and notes are on the NGINX Gateway Fabric Meeting Calendar. Links are also always available from our GitHub readme.

Streamlining BIG-IP Next Deployments: Automate with CI/CD Pipelines Using Terraform Cloud and GitHub
Automation is key to maintaining efficiency and consistency in today's fast-paced IT environment. In this article, I will demonstrate how to automate the deployment of BIG-IP Next configurations using Terraform Cloud and GitHub. By integrating AS3 JSON and Terraform configuration code, you can ensure that any changes made in your GitHub repository automatically trigger Terraform Cloud to deploy the updated configurations to your BIG-IP Next instance via the BIG-IP Next Central Manager.

Key Players:

- BIG-IP Next: Your powerful application delivery controller, offering advanced features for load balancing, security, and more.
- BIG-IP Next Central Manager: The brain of your BIG-IP Next deployment, orchestrating and managing all your BIG-IP instances.
- BIG-IP Next Terraform resources: A powerful interface allowing programmatic control over your BIG-IP configuration, simplifying automation.
- Terraform Cloud: A robust platform for infrastructure as code, providing version control, collaboration, and powerful automation tools.
- GitHub: A popular version control system for collaborative software development, where your Terraform configuration files will reside.
- Terraform Agent: A local agent installed on a dedicated VM in your private data center that acts as a bridge between Terraform Cloud and your BIG-IP Next instances.

The Workflow:

1. Define your infrastructure in GitHub: Using the Terraform resources documented at https://clouddocs.f5.com/products/orchestration/terraform/latest/BIG-IP-Next/big-ip-next-index.html#release-notes, describe your desired BIG-IP Next configuration in code (e.g., creating virtual servers, pools, monitors, and other application services). Store your Terraform code in a GitHub repository.
2. Configure Terraform Cloud: Set up a workspace in Terraform Cloud and link it to your GitHub repository. Configure a VCS trigger to automatically initiate a Terraform plan and apply it when changes are made to your code in GitHub.
3. Install and configure the Terraform Agent: Set up a VM in your private data center running Ubuntu and install the Terraform Agent. Configure the agent to connect to your Terraform Cloud workspace.
4. Automatic configuration: When you push changes to your Terraform code in GitHub, Terraform Cloud detects the update, triggers a Terraform plan, and sends it to the Terraform Agent. The agent then communicates with your BIG-IP Next Central Manager to implement the necessary changes on your BIG-IP Next instances.

Benefits:

- Simplified management: No more manual configuration and tedious updates! Terraform Cloud automates deployment, reducing errors and ensuring consistency across your BIG-IP Next environment.
- Increased efficiency: Spend less time on repetitive tasks and focus on building and deploying applications faster.
- Collaboration and version control: Work collaboratively with your team, track changes, and easily revert to previous configurations using GitHub's robust version control capabilities.
- Scalability and flexibility: Terraform Cloud seamlessly scales to manage large and complex environments, providing flexibility and adaptability for your growing needs.

Getting Started:

1. Set up the GitHub repository: Create a repository in GitHub and store your Terraform configuration files there. You can clone the example repository from https://github.com/f5bdscs/example-AS3.git and begin working on it. The Terraform configuration in the example looks like this:
terraform {
  required_providers {
    bigipnext = {
      source  = "F5Networks/bigipnext"
      version = "1.2.0"
    }
  }
  cloud {
    organization = "39nX-example"
    workspaces {
      name = "39nX-example"
    }
  }
}

variable "host" {}
variable "username" {}
variable "password" {}

provider "bigipnext" {
  username = var.username
  password = var.password
  host     = var.host
}

resource "bigipnext_cm_as3_deploy" "test" {
  target_address = "10.1.1.10"
  as3_json       = file("as3.json")
}

Explanation:

- Terraform block: Defines the required provider bigipnext with its source and version, and specifies the Terraform Cloud organization and workspace name.
- Variable declarations: host, username, and password are declared as input variables.
- Provider configuration: Uses the input variables for username, password, and host.
- Resource definition: The bigipnext_cm_as3_deploy resource takes a target_address and the as3_json file.

Make sure to create and populate the as3.json file with the necessary AS3 declaration. Also, ensure you provide values for host, username, and password when running the Terraform commands.

{
  "class": "ADC",
  "schemaVersion": "3.45.0",
  "id": "example-declaration-01",
  "label": "Sample 1",
  "remark": "Simple HTTP application with round robin pool",
  "next-cm-tenant01": {
    "class": "Tenant",
    "EXAMPLE_APP": {
      "class": "Application",
      "template": "http",
      "serviceMain": {
        "class": "Service_HTTP",
        "virtualAddresses": [
          "10.1.20.10"
        ],
        "pool": "next-cm-pool01"
      },
      "next-cm-pool01": {
        "class": "Pool",
        "monitors": [
          "http"
        ],
        "members": [
          {
            "servicePort": 8080,
            "serverAddresses": [
              "10.1.20.4"
            ]
          }
        ]
      }
    }
  }
}

2. Configure Terraform Cloud: Create a workspace, link it to your GitHub repository, and set up a VCS trigger to activate plans and apply changes. Please follow the guide at https://developer.hashicorp.com/terraform/tutorials/cloud-get-started/cloud-vcs-change to integrate Terraform Cloud with your GitHub repository.

3. Install and configure the Terraform Agent: Set up a VM in your private data center, install the Terraform Agent, and configure it to connect to your Terraform Cloud workspace. Please follow the guide at https://developer.hashicorp.com/terraform/tutorials/cloud/cloud-agents to install the Terraform Cloud agent.

4. Deploy your configuration: Push your code to GitHub and watch as Terraform Cloud automatically updates your BIG-IP Next instances.

You can watch the demonstration video here: https://youtu.be/0xEtj-jAepE

F5 Distributed Cloud Customer Edge on F5 rSeries – Reference Architecture
Traditionally, to advertise an application to the internet or to connect applications across multi-cloud environments, enterprises must configure and manage multiple networking and security devices from different vendors in the DMZ of the data center. CE on F5 rSeries is a single-vendor, converged solution for all enterprise multi-cloud application connectivity and security needs.

Announcing F5 NGINX Gateway Fabric 1.3.0 with Tracing, GRPCRoute, and Client Settings
The release of NGINX Gateway Fabric version 1.3.0 introduces plenty of highly requested features and improvements. GRPCRoute is now supported for managing gRPC traffic, similar to the handling of HTTPRoute. The update includes new custom policies, such as ClientSettingsPolicy for client request configuration and ObservabilityPolicy for enabling application tracing with OpenTelemetry support. GRPCRoute allows for efficient routing, header modification, traffic weighting, and error conversion from HTTP to gRPC.

The article explains how to set up NGINX Gateway Fabric to manage gRPC traffic using a Gateway and a GRPCRoute, providing a detailed example of the setup. It also outlines how to enable tracing through the NginxProxy resource and the ObservabilityPolicy, emphasizing a selective approach to tracing to avoid data overload. Additionally, the ClientSettingsPolicy allows for the customization of NGINX directives at the Gateway or Route level, giving users control over certain NGINX behaviors, with the option to override Gateway defaults at the Route level.

Looking ahead, the NGINX Gateway Fabric team plans to work on TLS passthrough, IPv6, and improvements to the testing suite, while preparing for larger updates such as NGINX directive customization and separation of the data and control planes. Check the end of the article to see how to get involved in the development process through GitHub and how to participate in the bi-weekly community meetings. Further resources and links are also provided within.
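As a taste of what a GRPCRoute looks like, here is a minimal sketch expressed as a JSON manifest (kubectl accepts JSON as well as YAML). The Gateway name, listener section, hostname, gRPC service/method, and backend Service are illustrative placeholders, not values from the article, and depending on the Gateway API version installed in your cluster the apiVersion may be v1alpha2 rather than v1; the article's detailed example remains the reference.

{
  "apiVersion": "gateway.networking.k8s.io/v1",
  "kind": "GRPCRoute",
  "metadata": {
    "name": "greeter-route",
    "namespace": "default"
  },
  "spec": {
    "parentRefs": [
      { "name": "my-gateway", "sectionName": "http" }
    ],
    "hostnames": [ "grpc.example.com" ],
    "rules": [
      {
        "matches": [
          { "method": { "service": "helloworld.Greeter", "method": "SayHello" } }
        ],
        "backendRefs": [
          { "name": "greeter-backend", "port": 50051 }
        ]
      }
    ]
  }
}

The structure mirrors an HTTPRoute: parentRefs attach the Route to a Gateway listener, matches select traffic (here by gRPC service and method), and backendRefs send it to a Service.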
What is BIG-IP Next?

BIG-IP Next LTM and BIG-IP Next WAF hit general availability back in October, and we hit the road for a tour around North America for its arrival party! Those who attended one of our F5 Academy sessions got a deep-dive presentation into BIG-IP Next conceptually, and then a lab session to work through migrating workloads and deploying them. I got to attend four of the events and discuss with so many fantastic community members what's old, what's new, what's borrowed, what's blue...no wait--this is no wedding! But for those of us who've been around the block with BIG-IP for a while, if not married to the tech, we definitely have a relationship with it, for better and worse, right? And that's earned. So any time something new, or in our case "Next," comes around, there's risk and fear involved personally. But don't fret. Seriously. It's going to be different in a lot of ways, but it's going to be great. And there are a crap-ton (thank you Mark Rober!) of improvements that, once we all make it through the early stages, we'll embrace and wonder why we were even scared in the first place. So with all that said, will you come on the journey with me? In this first of many articles to come from me this year, I'll cover the high-level basics of what is so next about BIG-IP Next, and in future entries we'll be digging into the tech and learning together.

BIG-IP and BIG-IP Next Conceptually - A Comparison

BIG-IP has been around since before the turn of the century (which is almost old enough to rent a car here in the United States), and this year marks the 20 year anniversary of TMOS. That the traffic management microkernel (TMM) is still grokking like a boss all these years later is a testament to that early innovation! So whereas TMOS as a system is winding down, its heart, TMM, will go on (cue sappy Celine Dion ditty in 3, 2, 1...).

Let's take a look at what was and what is. With TMOS, the data plane and control plane compete for resources as it's one big system. With BIG-IP Next, the separation of duties is more explicit and intentionally designed to scale on the control plane. Also, the product modules are no longer either completely integrated in TMM or plugins to TMM, but rather, isolated to their own container structures. The image above might convey the idea that LTM or WAF or any of the other modules are single containers, but that's just shown that way for brevity. Each module is an array of containers. But don't let that scare you. The underlying Kubernetes architecture is an abstraction that you may--but certainly are not required to--care about. TMM continues to be its awesome TMM self.

The significant change operationally is how you interact with BIG-IP. With TMOS, historically you engage directly with each device, even if you have some other tools like BIG-IQ or third-party administration/automation platforms. With BIG-IP Next, everything is centralized on Central Manager, and the BIG-IP Next instances, whether they are running on rSeries, VELOS, or Virtual Edition, are just destinations for your workloads. In fact, outside of sidecar proxies for troubleshooting, instance logins won't even be supported! Yes, this is a paradigm shift. With BIG-IP Next, you will no longer be configuration-object focused. You will be application-focused. You'll still have the nerd-knobs to tweak and turn, but they'll be done within the context of an application declaration.

If you haven't started your automation journey yet, you might not be familiar with AS3. It's been out now for years and works with BIG-IP to deploy applications declaratively. Instead of following a long pre-flight checklist with 87 steps to go from nothing to a working application, you simply define the parameters of your application in a blob of JSON data and click the easy button. For BIG-IP Next, this is the way. Now, in the Central Manager GUI, you might interact with FAST templates that deliver a more traditional view into configuring applications, but the underlying configuration engine is all AS3. For more, I hosted a series of streams in December to introduce AS3 Foundations; I highly recommend you take the time to digest the basics.
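To make "a blob of JSON data" concrete, here is a minimal AS3-style sketch of a tenant with one HTTP virtual server and a two-member pool, patterned after the example used elsewhere in this feed. The tenant, application, addresses, and port are illustrative placeholders; see the AS3 Foundations series and the AS3 schema reference for the full picture.

{
  "class": "ADC",
  "schemaVersion": "3.45.0",
  "id": "example-declaration-02",
  "remark": "Minimal HTTP application",
  "Example_Tenant": {
    "class": "Tenant",
    "Example_App": {
      "class": "Application",
      "template": "http",
      "serviceMain": {
        "class": "Service_HTTP",
        "virtualAddresses": [ "192.0.2.10" ],
        "pool": "web_pool"
      },
      "web_pool": {
        "class": "Pool",
        "monitors": [ "http" ],
        "members": [
          {
            "servicePort": 80,
            "serverAddresses": [ "192.0.2.20", "192.0.2.21" ]
          }
        ]
      }
    }
  }
}

That one declaration replaces the virtual server, pool, member, and monitor objects you would otherwise create step by step.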
Benefits I'm Excited About

There are many, and you can read about them on the product page on F5.com. But here's my short list:

- API-first. Period. BIG-IP had APIs with iControl from the era before APIs were even cool, but they were not first-class citizens, and the resulting performance at scale requires effort to manage effectively. Not only performance, but feature parity among iControl REST, iControl SOAP, tmsh, and the GUI has been a challenge because of the way development occurred over time. Not so with BIG-IP Next. Everything is API-first, so all tooling is able to consume everything. This is huge!
- Migration assistance. Central Manager has the JOURNEYS tool on steroids built into the experience. Upload your UCS, evaluate your applications to see what can be migrated without updates, and deploy! It really is that easy. Sure, there's work to be done for applications that aren't fully compatible yet, but it's a great start. You can do this piece (and I recommend that you do) before you even think about deploying a single instance, just to learn what work you have ahead of you and what solutions you might need to adapt to be ready.
- Simplified patch/upgrade process. If you know, you know...patches are upgrades with BIG-IP, and not in place at that. This is drastically improved with BIG-IP Next! Because of the containerized nature of the system, individual containers can be targeted for patching and, depending on the container, may not even require a downtime consideration.
- Release cycle. A more frequent release cadence might terrify the customers among us who like to space out their upgrades to once every three years or so, but for the rest of us, feature delivery to the tune of weeks instead of twice per year is an exciting development (pun intended!)

Features I'm Excited About

- Versioning for iRules and policies. For those of us who write/manage these things, this is huge! Typically I'd version by including it in the title, and I know some who set release tags in repos. With Central Manager, it's built in, and you can deploy iRules and policies by version and do diffs in place. I'm super excited about this!
- Did I mention the API? On the API front...it's one API, for all functionality. No digging and scraping through the GUI, tmsh, iControl REST, iControl SOAP, or building out a node.js app to deploy a custom API endpoint with iControl LX, if even possible with some of the modules like APM or ASM. Nope, it's all there in one API. Glorious.
- Centralized dashboards. This one is for the Ops teams! Who among us has spent many a day building custom dashboards to consume stats from BIG-IPs across the org to have a single pane of glass to manage? I for one, and I'm thrilled to see system, application, and security data centralized for analysis and alerting.
- Log/metric streaming. And finally, logs and metrics!
Telemetry Streaming from the F5 Automation Toolchain doesn't come forward in BIG-IP Next, but the ideas behind it do. If you need your data elsewhere from Central Manager, you can set up remote logging with OpenTelemetry (see the link in the resources listed below for a first published example of this). There are some great features coming with DNS, Access, and all the other modules when they are released as well. I'll cover those when they hit general availability.

Let's Go!

In the coming weeks, I'll be releasing articles on installation and licensing walk-throughs for Central Manager and the instances, and content from our awesome group of authors is already starting to flow as well. Here are a few entries you can feast your eyes on, including an instance Proxmox installation:

- For the kubernetes crowd, BIG-IP Next CNF Solutions for RedHat Openshift
- Installing BIG-IP Next Instance on Proxmox
- Remote Logging with BIG-IP Next and OpenTelemetry

Are you ready? Grab a trial license from your MyF5 dashboard and get going! And make sure to join us in the BIG-IP Next Academy group here on DevCentral. The launch team is actively engaged there for Next-related questions/issues, so that's the place to be in your early journey! Also...if you want the ultimate jump-start for all things BIG-IP Next, join us at AppWorld 2024 in San Jose next month!

How To Run Ollama On F5 AppStack With An NVIDIA GPU In AWS
If you're just getting started with AI, you'll want to watch this one, as Michael Coleman shows Aubrey King, from DevCentral, how to run Ollama on F5 AppStack on an AWS instance with an NVIDIA Tesla T4 GPU. You'll get to see the install, what it looks like when a WAF finds a suspicious conversation, and even a quick peek at how Mistral handles a challenge differently than Gemma.