F5 NGINX Automation Examples [Part 1 - Deploy F5 NGINX Ingress Controller with App Protect V5]
Introduction

Welcome to our first article on F5 NGINX automation use cases, where we aim to provide deeper insights into the strategies and benefits of implementing NGINX solutions. This series uses the NGINX Automation Examples GitHub repo and a CI/CD platform to deploy NGINX solutions based on DevSecOps principles. Our focus is the integration of NGINX and Terraform, two powerful tools that enhance application delivery and support infrastructure as code. Stay tuned for additional use cases in upcoming content!

In this detailed example, we demonstrate how to deploy the F5 NGINX Ingress Controller with F5 NGINX App Protect version 5 in AWS, GCP, and Azure. We use Terraform to set up an AWS Elastic Kubernetes Service (EKS) cluster that hosts the Arcadia Finance test web application. The NGINX Ingress Controller manages this application in Kubernetes, with security provided by NGINX App Protect version 5. To streamline the deployment process, we integrate GitHub Actions for continuous integration and continuous deployment (CI/CD) and use an Amazon S3 bucket to store the state of our Terraform configurations (a minimal workflow sketch appears after the conclusion below).

Prerequisites:
- F5 NGINX One license
- AWS account - due to the assets being created, the free tier will not work
- GitHub account

Tools:
- Cloud provider: AWS
- Infrastructure as code: Terraform
- Infrastructure as code state: S3
- CI/CD: GitHub Actions

NGINX Ingress Controller: This solution provides comprehensive management for API gateways, load balancers, and Kubernetes Ingress controllers, enhancing security and visibility in hybrid and multicloud environments, particularly at the edge of Kubernetes clusters. Consolidating technology streamlines operations and reduces the complexity of using multiple tools.

NGINX App Protect WAF v5: A lightweight software security solution designed to deliver high performance and low latency. It supports platform-agnostic deployment, making it suitable for modern microservices and container-based applications. This version integrates both NGINX and Web Application Firewall (WAF) components within a single pod, making it particularly well suited for scalable, cloud-native environments.

Module 1: Deploy NGINX Ingress Controller with App Protect V5 in AWS Cloud
- Workflow guide: Deploy NGINX Ingress Controller with App Protect V5 in AWS Cloud
- [Image: Architecture diagram]

Module 2: Deploy NGINX Ingress Controller with App Protect V5 in GCP Cloud
- Workflow guide: Deploy NGINX Ingress Controller with App Protect V5 in GCP Cloud
- [Image: Architecture diagram]

Module 3: Deploy NGINX Ingress Controller with App Protect V5 in Azure
- Workflow guide: Deploy NGINX Ingress Controller with App Protect V5 in Azure
- [Image: Architecture diagram]

Conclusion

This article outlines deploying a robust security framework using the NGINX Ingress Controller and NGINX App Protect WAF version 5 for a sample web application hosted on AWS EKS. We leveraged the NGINX Automation Examples repository and integrated it into a CI/CD pipeline for streamlined deployment. Although the provided code and security configurations are foundational and may not cover every possible scenario, they serve as a valuable starting point for implementing the NGINX Ingress Controller and NGINX App Protect version 5 in your cloud environments.
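To make the CI/CD wiring described above concrete, here is a minimal GitHub Actions workflow sketch that runs Terraform with its state stored in S3. The secret names, region, and working directory are illustrative assumptions, not values from the examples repository; treat the repo's own workflows as authoritative.

```yaml
# Minimal sketch: run Terraform from GitHub Actions with S3-backed state.
# Secret names, region, and directory layout are assumptions for
# illustration; adapt them to the NGINX Automation Examples repo.
name: terraform-deploy
on:
  push:
    branches: [main]
jobs:
  terraform:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: us-east-1
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      # terraform init picks up the S3 backend declared in the repo's
      # Terraform files, so every pipeline run shares one state file.
      - name: Terraform init
        run: terraform init
        working-directory: ./eks-cluster
      - name: Terraform plan
        run: terraform plan -out=tfplan
        working-directory: ./eks-cluster
      - name: Terraform apply
        run: terraform apply -auto-approve tfplan
        working-directory: ./eks-cluster
```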
Modern Applications - Demystifying Ingress Solutions Flavors

In this article, we explore the different ingress services provided by F5 and how those solutions fit within your environment. With different ingress service flavors, you gain the ability to interact with your microservices at different points, allowing for flexible, secure deployment. The ingress services tools can be summarized into two main categories:

Management plane:
- NGINX One
- BIG-IP CIS

Traffic plane:
- NGINX Ingress Controller / Plus / App Protect / Service Mesh
- BIG-IP Next for Kubernetes
- Cloud Native Functions (CNFs)
- F5 Distributed Cloud Ingress Controller

Ingress solutions definitions

In this section we go quickly through the ingress services to understand the concept behind each one, and then move to the use-case comparison.

BIG-IP Next for Kubernetes

Kubernetes' native networking architecture does not inherently support multi-network integration or non-HTTP/HTTPS protocols, creating operational and security challenges for complex deployments. BIG-IP Next for Kubernetes addresses these limitations by centralizing ingress and egress traffic control, aligning with Kubernetes design principles to integrate with existing security frameworks and broader network infrastructure. This reduces operational overhead by consolidating cross-network traffic management into a unified ingress/egress point, eliminating the need for multiple external firewalls that traditionally require isolated configuration. The solution enables zero-trust security models through granular policy enforcement and provides robust threat mitigation, including DDoS protection, by replacing fragmented security measures with a centralized architecture. Additionally, BIG-IP Next supports 5G Core deployments by managing North/South traffic flows in containerized environments, facilitating use cases such as network slicing and multi-access edge computing (MEC). These capabilities enable dynamic resource allocation aligned with application-specific or customer-driven requirements, ensuring scalable, secure connectivity for next-generation 5G consumer and enterprise solutions while maintaining compatibility with existing network and security ecosystems.

Cloud Native Functions (CNFs)

While BIG-IP Next for Kubernetes provides advanced networking, traffic management, and security functionality, CNFs add further advanced services. VNFs and CNFs can be consolidated in the S/Gi-LAN or the N6 LAN in 5G networks. A consolidated approach results in simpler management and operation, reduced operational costs (up to 60% lower TCO), and more opportunities to monetize functions and services. Functions can include DNS, Edge Firewall, DDoS, Policy Enforcer, and more. BIG-IP Next CNFs provide scalable, automated, resilient, manageable, and observable cloud-native functions and applications. They support dynamic elasticity, occupy a smaller footprint with fast restart, and use continuous deployment and automation principles.

NGINX for Kubernetes / NGINX One

NGINX for Kubernetes is a versatile and cloud-native application delivery platform that aligns closely with DevOps and microservices principles. It is built around two primary models:

- NGINX Ingress Controller (OSS and Plus): Deployed directly inside Kubernetes clusters, it acts as the primary ingress gateway for HTTP/S, TCP, and UDP traffic. It supports Kubernetes-native CRDs and integrates easily with GitOps pipelines, service meshes (e.g., Istio, Linkerd), and modern observability stacks like Prometheus and OpenTelemetry (a configuration sketch follows below).
- NGINX One / NGINXaaS: This SaaS-delivered, managed service extends the NGINX experience by offloading the operational overhead, providing scalability, resilience, and simplified security configurations for Kubernetes environments across hybrid and multi-cloud platforms.

NGINX solutions prioritize lightweight deployment, fast performance, and API-driven automation. NGINX Plus variants offer extended features like advanced WAF (NGINX App Protect), JWT authentication, mTLS, session persistence, and detailed application-layer observability.
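To give a flavor of the Kubernetes-native CRD configuration mentioned above, here is a minimal NGINX Ingress Controller VirtualServer resource. The hostname, namespace, and service names are illustrative assumptions, not values from a real deployment.

```yaml
# Minimal NGINX Ingress Controller VirtualServer sketch. Host, service
# names, and ports are hypothetical examples.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
  namespace: cafe
spec:
  host: cafe.example.com
  upstreams:
    - name: tea
      service: tea-svc   # Kubernetes Service backing the upstream
      port: 80
  routes:
    - path: /tea
      action:
        pass: tea        # proxy matching requests to the tea upstream
```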
Some under-the-hood differences: BIG-IP Next for Kubernetes and CNFs use F5's own TMM to perform application delivery and security, while NGINX relies on the kernel to perform some network-level functions like NAT, iptables, and routing. So it is a matter of your environment's architecture whether to go with one or both options to enhance your application delivery and security experience.

BIG-IP Container Ingress Services (CIS)

BIG-IP CIS works on the management flow. The CIS service is deployed in the Kubernetes cluster, sending information on created Pods to an integrated BIG-IP external to the Kubernetes environment. This makes it possible to automatically create LTM pools and forward traffic based on pool member health. This service allows application teams to focus on microservice development while BIG-IP is updated automatically, allowing for easier configuration management (a CIS resource sketch follows the use-case lists below).

Use case categorization

Let's talk in use-case terms to make it more related to the field and our day-to-day work.

NGINX One
- Access to NGINX commercial products, support for open source, and the option to add WAF.
- Unified dashboard and APIs to discover and manage your NGINX instances.
- Identify and fix configuration errors quickly and easily with the NGINX One configuration recommendation engine.
- Quickly diagnose bottlenecks and act immediately with real-time performance monitoring across all NGINX instances.
- Enforce global security policies across diverse environments.
- Real-time vulnerability management identifies and addresses CVEs in NGINX instances.
- Visibility into compliance issues across diverse app ecosystems.
- Update groups of NGINX systems simultaneously with a single configuration file change.
- Unified view of your NGINX fleet for collaboration, performance tuning, and troubleshooting.
- Automate manual configuration and updating tasks for security and platform teams.

BIG-IP CIS
- Enable self-service ingress HTTP routing and app services selection by subscribing to events to automatically configure performance, routing, and security services on BIG-IP.
- Integrate with the BIG-IP platform to scale apps for availability and enable app services insertion.
- Integrate with the BIG-IP system and NGINX for ingress load balancing.
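To show what the CIS-driven model looks like in practice, here is a hedged sketch of a CIS VirtualServer custom resource; CIS watches resources like this and builds the corresponding virtual server and pool on the external BIG-IP. The address, host, and service names are illustrative assumptions.

```yaml
# Hedged sketch of a CIS VirtualServer custom resource. CIS translates
# this into an LTM virtual server and pool on the external BIG-IP.
# Address, host, and service names are hypothetical.
apiVersion: cis.f5.com/v1
kind: VirtualServer
metadata:
  name: app-vs
  namespace: default
  labels:
    f5cr: "true"           # label CIS uses to discover its custom resources
spec:
  host: app.example.com
  virtualServerAddress: 10.1.10.100
  pools:
    - path: /
      service: app-svc     # Kubernetes Service whose pods become pool members
      servicePort: 8080
```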
BIG-IP Next for Kubernetes
- Supports ingress and egress traffic management and routing for seamless integration with multiple networks.
- Enables support for 4G and 5G protocols that are not supported by Kubernetes, such as Diameter, SIP, GTP, SCTP, and more.
- Enables security services applied at ingress and egress, such as firewalling and DDoS protection.
- Topology hiding at ingress obscures the internal structure within the cluster.
- As a central point of control, per-subscriber traffic visibility at ingress and egress allows traceability for compliance tracking and billing.
- Support for multi-tenancy and network isolation for AI applications, enabling efficient deployment of multiple users and workloads on a single AI infrastructure.
- Optimize AI factory implementations with BIG-IP Next for Kubernetes on NVIDIA DPUs.

F5 Cloud Native Functions (CNFs)
- Add containerized services, for example Firewall, DDoS, and Intrusion Prevention System (IPS) technology based on F5 BIG-IP AFM.
- Ease IPv6 migration and improve network scalability and security with IPv4 address management.
- Deploy as part of a security strategy.
- Support DNS caching and DNS over HTTPS (DoH).
- Support advanced policy and traffic management use cases.
- Improve QoE and ARPU with tools like traffic classification, video management, and subscriber awareness.

NGINX Ingress Controller
- Provide L4-L7 NGINX services within the Kubernetes cluster.
- Manage user and service identities and authorize access and actions with HTTP Basic authentication, JSON Web Tokens (JWTs), OpenID Connect (OIDC), and role-based access control (RBAC).
- Secure incoming and outgoing communications through end-to-end encryption (SSL/TLS passthrough, TLS termination).
- Collect, monitor, and analyze data through prebuilt integrations with leading ecosystem tools, including OpenTelemetry, Grafana, Prometheus, and Jaeger.
- Easy integration with the Kubernetes Ingress API, Gateway API (experimental support), and Red Hat OpenShift Routes.

F5 Distributed Cloud Ingress Controller

The F5 XC Ingress Controller is supported only for Sites running Managed Kubernetes, also known as Physical K8s (PK8s). Deployment of the ingress controller is supported only using Helm. The Ingress Controller manages external access to HTTP services in a Kubernetes cluster using the F5 Distributed Cloud Services Platform. The ingress controller is a K8s deployment that configures the HTTP Load Balancer using the K8s ingress manifest file. The Ingress Controller automates the creation of the load balancer and other required objects such as the VIP, Layer 7 routes (path-based routing), advertise policy, and certificates (K8s secrets or automatic custom certificates).

Conclusion

As you can see, the diverse ingress controller tools give you flexibility, letting you tailor your architecture to organization requirements while maintaining application delivery and security practices across your application ecosystem.

Related Content and Technical Demos
- BIG-IP Next SPK: a Kubernetes native ingress and egress gateway for Telco workloads
- F5 BIG-IP Next CNF solutions suite of Kubernetes native 5G Network Functions
- Announcing F5 NGINX Ingress Controller v4.0.0 | DevCentral
- JWT authorization with NGINX Ingress Controller
- My first CRD deployment with CIS | DevCentral
- BIG-IP Next for Kubernetes
- BIG-IP Next for Kubernetes (LA)
- BIG-IP Next Cloud-Native Network Functions (CNFs)
- CNF Home
- F5 NGINX Ingress Controller
- Overview of F5 BIG-IP Container Ingress Services
- NGINX One
Deploying F5 Distributed Cloud Customer Edge in Red Hat OpenShift Virtualization

Introduction

Red Hat OpenShift Virtualization is a feature that brings virtual machine (VM) workloads into the Kubernetes platform, allowing them to run alongside containerized applications in a seamless, unified environment. Built on the open-source KubeVirt project, OpenShift Virtualization enables organizations to manage VMs using the same tools and workflows they use for containers.

Why OpenShift Virtualization?

Organizations today face critical needs such as:
- Rapid migration: "I want to migrate ASAP" from traditional virtualization platforms to more modern solutions.
- Infrastructure modernization: Transitioning legacy VM environments to leverage the benefits of hybrid and cloud-native architectures.
- Unified management: Running VMs alongside containerized applications to simplify operations and enhance resource utilization.

OpenShift Virtualization addresses these challenges by consolidating legacy and cloud-native workloads onto a single platform. This consolidation simplifies management, enhances operational efficiency, and facilitates infrastructure modernization without disrupting existing services. Integrating F5 Distributed Cloud Customer Edge (XC CE) into OpenShift Virtualization further enhances this environment by providing advanced networking and security capabilities. This combination offers several benefits:
- Multi-tenancy: Deploy multiple CE VMs, each dedicated to a specific tenant, enabling isolation and customization for different teams or departments within a secure, multi-tenant environment.
- Load balancing: Efficiently manage and distribute application traffic to optimize performance and resource utilization.
- Enhanced security: Implement advanced threat protection at the edge to strengthen your security posture against emerging threats.
- Microservices management: Seamlessly integrate and manage microservices, enhancing agility and scalability.

This guide provides a step-by-step approach to deploying XC CE within OpenShift Virtualization, detailing the technical considerations and configurations required.

Technical Overview

Deploying XC CE within OpenShift Virtualization involves several key technical steps.

Preparation
- Cluster setup: Ensure an operational OpenShift cluster with OpenShift Virtualization installed.
- Access rights: Confirm administrative permissions to configure compute and network settings.
- F5 XC account: Obtain access to generate node tokens and download the XC CE images.

Resource Optimization
- Enable CPU Manager: Configure the CPU Manager to allocate CPU resources effectively.
- Configure Topology Manager: Set the policy to single-numa-node for optimal NUMA performance.

Network Configuration
- Open vSwitch (OVS) bridges: Set up OVS bridges on worker nodes to handle networking for the virtual machines.
- NetworkAttachmentDefinitions (NADs): Use Multus CNI to define how virtual machines attach to multiple networks, supporting both external and internal connectivity.

Image Preparation
- Obtain the XC CE image: Download the XC CE image in qcow2 format suitable for KubeVirt.
- Generate a node token: Create a one-time node token from the F5 Distributed Cloud Console for node registration.
- User data configuration: Prepare cloud-init user data with the node token and network settings to automate the VM initialization process.

Deployment
- Create DataVolumes: Import the XC CE image into the cluster using the Containerized Data Importer (CDI); a DataVolume sketch follows this list.
- Deploy VirtualMachine resources: Apply manifests to deploy XC CE instances in OpenShift.
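For the DataVolume step, a minimal CDI manifest might look like the following; the image URL, name, and storage size are placeholder assumptions, not values from the guide.

```yaml
# Hedged sketch: import the XC CE qcow2 image with CDI. The URL,
# storage size, and name are placeholders.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: xc-ce-image
  namespace: f5-ce
spec:
  source:
    http:
      url: https://example.com/images/xc-ce.qcow2   # hypothetical location
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 80Gi
```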
Network Configuration

Setting up the network involves creating Open vSwitch (OVS) bridges and defining NetworkAttachmentDefinitions (NADs) to enable multiple network interfaces for the virtual machines.

Open vSwitch (OVS) Bridges

Create a NodeNetworkConfigurationPolicy to define OVS bridges on all worker nodes:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ovs-vms
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ''
  desiredState:
    interfaces:
      - name: ovs-vms
        type: ovs-bridge
        state: up
        bridge:
          allow-extra-patch-ports: true
          options:
            stp: true
          port:
            - name: eno1
    ovn:
      bridge-mappings:
        - localnet: ce2-slo
          bridge: ovs-vms
          state: present
```

Replace eno1 with the appropriate physical network interface on your nodes. This policy sets up an OVS bridge named ovs-vms connected to the physical interface.

NetworkAttachmentDefinitions (NADs)

Define NADs using Multus CNI to attach networks to the virtual machines.

External Network (ce2-slo): Connects VMs to the physical network with a specific VLAN ID. This setup allows the VMs to communicate with external systems, services, or networks, which is essential for applications that require access to resources outside the cluster or need to expose services to external users.

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ce2-slo
  namespace: f5-ce
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "ce2-slo",
      "type": "ovn-k8s-cni-overlay",
      "topology": "localnet",
      "netAttachDefName": "f5-ce/ce2-slo",
      "mtu": 1500,
      "vlanID": 3052,
      "ipam": {}
    }
```

Internal Network (ce2-sli): Provides an isolated Layer 2 network for internal communication. By setting the topology to "layer2", this network operates as an internal overlay network that is not directly connected to the physical network infrastructure. The MTU is set to 1400 bytes to accommodate any overhead introduced by encapsulation protocols used in the internal network overlay.

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ce2-sli
  namespace: f5-ce
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "ce2-sli",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "netAttachDefName": "f5-ce/ce2-sli",
      "mtu": 1400,
      "ipam": {}
    }
```

VirtualMachine Configuration

Configuring the virtual machine involves preparing the image, creating cloud-init user data, and defining the VirtualMachine resource.

Image Preparation
- Obtain the XC CE image: Download the qcow2 image from the F5 Distributed Cloud Console.
- Generate a node token: Acquire a one-time node token for node registration.

Cloud-Init User Data

Create a user-data configuration containing the node token and network settings:

```yaml
#cloud-config
write_files:
  - path: /etc/vpm/user_data
    content: |
      token: <your-node-token>
      slo_ip: <IP>/<prefix>
      slo_gateway: <Gateway IP>
      slo_dns: <DNS IP>
    owner: root
    permissions: '0644'
```

Replace the placeholders with your actual network configuration. This file automates the VM's initial setup and registration.

VirtualMachine Resource Definition

Define the VirtualMachine resource, specifying CPU, memory, disks, network interfaces, and cloud-init configuration (a sketch follows this list):
- Resources: Allocate sufficient CPU and memory.
- Disks: Reference the DataVolume containing the XC CE image.
- Interfaces: Attach NADs for network connectivity.
- Cloud-init: Embed the user data for automatic configuration.
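Putting those pieces together, here is a hedged KubeVirt VirtualMachine sketch for an XC CE instance. The VM name, DataVolume name, and sizing are assumptions; adjust them to the DataVolume and NADs created earlier and to F5's sizing guidance.

```yaml
# Illustrative KubeVirt VirtualMachine sketch for an XC CE instance.
# Names (xc-ce2, xc-ce-image) and resource sizing are assumptions.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: xc-ce2
  namespace: f5-ce
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 4
        resources:
          requests:
            memory: 16Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: slo
              bridge: {}
            - name: sli
              bridge: {}
      networks:
        # Attach the NADs defined in the Network Configuration section.
        - name: slo
          multus:
            networkName: f5-ce/ce2-slo
        - name: sli
          multus:
            networkName: f5-ce/ce2-sli
      volumes:
        - name: rootdisk
          dataVolume:
            name: xc-ce-image        # DataVolume holding the qcow2 image
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              write_files:
                - path: /etc/vpm/user_data
                  content: |
                    token: <your-node-token>
```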
Conclusion

Deploying F5 Distributed Cloud CE in OpenShift Virtualization enables organizations to leverage advanced networking and security features within their existing Kubernetes infrastructure. This integration facilitates a more secure, efficient, and scalable environment for modern applications. For detailed deployment instructions and configuration examples, please refer to the attached PDF guide.

Related Articles:
- BIG-IP VE in Red Hat OpenShift Virtualization
- VMware to Red Hat OpenShift Virtualization Migration
- OpenShift Virtualization
BIG-IP Next for Kubernetes addressing today's enterprise challenges

Enterprises have started adopting Kubernetes (K8s), not just cloud service providers, as it offers strategic advantages in agility, cost efficiency, security, and future-proofing:
- Cloud-native functions account for around 60% TCO savings.
- Easier to deploy, manage, maintain, and scale.
- Easier to add and roll out new services.

Kubernetes complexities

With the move from traditional application deployments to microservices and containerized services, some complexities were introduced.

Networking Challenges with Kubernetes Default Deployments

Kubernetes networking has several inherent challenges when using default configurations that can impact performance, security, and reliability in production environments.

Core Networking Challenges

Flat Network Model
- All pods can communicate with all other pods by default (east-west traffic).
- No network segmentation between applications.
- Potential security risks from excessive inter-pod communication.

Service Discovery Limitations
- DNS-based service discovery has caching behaviors that can delay updates.
- No built-in load balancing awareness (can route to unhealthy pods during updates).
- Limited traffic shaping capabilities (all requests treated equally).

Ingress Challenges
- No default ingress controller installed.
- Multiple ingress controllers can conflict if not properly configured.
- SSL/TLS termination requires manual certificate management.

Network Policy Absence
- No network policies applied by default (allow all traffic); see the sketch at the end of this section.
- Difficult to implement zero-trust networking principles.
- No default segmentation between namespaces.

DNS Issues
- CoreDNS default cache settings may not be optimal.
- Pod DNS policies may not match application requirements.
- NodeLocal DNS cache not enabled by default.

Load-Balancing Problems
- Service `ClusterIP` is the default (no external access).
- `NodePort` services can conflict on port allocations.
- Cloud provider load balancers can be expensive if overused.

CNI (Container Network Interface) Considerations
- Default CNI plugin may not support required features.
- Network performance varies significantly between CNI choices.
- IP address management challenges at scale.

Performance-Specific Issues

kube-proxy Inefficiencies
- Default iptables mode becomes slow with many services.
- IPVS (IP Virtual Server) mode requires explicit configuration.
- Service mesh sidecars can double latency.

Pod Network Overhead
- Additional hops for cross-node communication.
- Encapsulation overhead with some CNI plugins.
- No QoS guarantees for network traffic.

Multicluster Communication
- No default solution for cross-cluster networking.
- Complex to establish secure connections between clusters.
- Service discovery doesn't span clusters by default.

Security Challenges
- No default encryption between pods.
- No default authentication for service-to-service communication.
- All namespaces are network-accessible to each other by default.
- External traffic can bypass ingress controllers if misconfigured.

These challenges highlight why most production Kubernetes deployments require significant, complex customization beyond the default configuration. Figure 1 shows those workarounds being implemented and how complicated the setup becomes, with multiple add-ons required to overcome Kubernetes limitations.
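To illustrate the manual baseline work the network-policy gap implies, here is a minimal default-deny NetworkPolicy of the kind teams must add by hand in every namespace; the namespace name is hypothetical.

```yaml
# A minimal default-deny NetworkPolicy -- the per-namespace baseline
# teams must add themselves, since Kubernetes ships with none.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-app        # hypothetical namespace
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```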
In the following section, we explore how BIG-IP Next for Kubernetes simplifies and enhances application delivery and security within the Kubernetes environment.

BIG-IP Next for Kubernetes

Introducing BIG-IP Next for Kubernetes not only reduces complexity but also moves the main networking components to the TMM pods rather than relying on the host server. Think of where current network functions are applied: the host kernel. Whether you are doing NAT or firewalling services, this requires intervention by the host side, which undermines a zero-trust architecture, and traffic performance is limited by default kernel IP and routing capabilities.

Deployment overview

Among the features introduced in the 2.0.0 release:
- API GW CRs (Custom Resources).
- F5 IPAM Controller to manage IP addresses for Gateway resources.
- Seamless firewall policy integration in Gateway API.
- Ingress DDoS protection in Gateway API.
- Enforced access control for Debug and QKView APIs with Admin Token.

In this section, we explore the steps to deploy BIG-IP Next for Kubernetes in your environment.

Infrastructure

Use different flavors depending on your needs and lab type (demo or production); for labs, microk8s, k8s, or kind, for example.

BIG-IP Next for Kubernetes

helm and docker are required packages for this installation. Follow the installation guide; the current BIG-IP Next for Kubernetes 2.0.0 GA release is available. For the objective of this article, you may skip the NVIDIA DOCA portion (that is the focus of the coming article) and go directly to BIG-IP Next for Kubernetes.

Install additional CRDs

Once the licensing and core pods are ready, you can move on to adding additional CRDs (Custom Resource Definitions):
- BIG-IP Next for Kubernetes CRDs: BIG-IP Next for Kubernetes CRDs
- Custom CRDs: Install F5 Use case Custom Resource Definitions

Related Content
- BIG-IP Next for Kubernetes v2.0.0 Release Notes
- System Requirements
- BIG-IP Next for Kubernetes CRDs
- BIG-IP Next for Kubernetes
- BIG-IP Next SPK: a Kubernetes native ingress and egress gateway for Telco workloads
- F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs
- BIG-IP Next for Kubernetes running in Amazon EKS
F5 NGINX Gateway Fabric: Revolutionizing Kubernetes Traffic Management

F5 NGINX Gateway Fabric represents a significant advancement in managing Kubernetes traffic, addressing the limitations of traditional Ingress controllers and introducing a more structured approach to traffic management through the Gateway API.

- Challenges of traditional Ingress controllers: Traditional Kubernetes Ingress controllers often lead to configuration complexities and resource conflicts in multi-team environments, making management cumbersome and error-prone.
- Gateway API and role-oriented design: The Gateway API facilitates a role-oriented design that separates configurations between platform and development teams, enhancing stability and autonomy while preventing resource conflicts (see the sketch after this list).
- Advanced features for security and management: NGINX Gateway Fabric implements a role-based API model for multi-tenancy, standardized configuration management, and seamless observability integration, improving security and operational efficiency.
- Use cases and future outlook: The platform excels in API management, multi-team development environments, and advanced traffic management scenarios, positioning itself as a robust solution for modern cloud-native architectures.
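To make the role-oriented split concrete, here is a hedged Gateway API sketch of the pattern NGINX Gateway Fabric builds on: a platform team owns the Gateway, an application team owns the HTTPRoute. The namespaces, hostnames, and service names are illustrative assumptions.

```yaml
# Sketch of the Gateway API separation of duties: the platform team owns
# the Gateway; the app team attaches an HTTPRoute from its own namespace.
# Names, hostname, and ports are hypothetical.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: nginx            # NGINX Gateway Fabric's gateway class
  listeners:
    - name: http
      port: 80
      protocol: HTTP
      allowedRoutes:
        namespaces:
          from: All                  # app teams may attach routes
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: store-route
  namespace: store                   # owned by the application team
spec:
  parentRefs:
    - name: shared-gateway
      namespace: infra
  hostnames:
    - shop.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: store-svc
          port: 8080
```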
F5 rSeries: Next-Generation Fully Automatable Hardware

What is rSeries?

F5 rSeries is a rearchitected, next-generation hardware platform that scales application delivery performance and automates application services to address many of today's most critical business challenges. rSeries relies on a Kubernetes-based platform layer (F5OS) that is tightly integrated with F5 TMOS software. Moving to a microservice-based platform layer allows rSeries to provide additional functionality that was not possible in previous generations of F5 BIG-IP platforms. Customers do not need to learn Kubernetes but still get the benefits of it. Management of the hardware is still done via a familiar F5 CLI, web UI, or API. The added automation capabilities can greatly simplify the process of deploying F5 products. A significant amount of time and resources is saved due to automation, which translates to more time to perform critical tasks.

[Image: F5OS rSeries UI]

Why is this important?
- Get more done in less time by using a highly automatable hardware platform that can deploy software solutions in seconds, not minutes or hours.
- Increased performance improves ROI: the rSeries platform is a high-performance and highly scalable appliance with improved processing power.
- Running multiple versions on the same platform allows for more flexibility than previously possible.
- Pay-as-you-grow licensing options that unlock more CPU resources.

Key rSeries Use Cases

NetOps Automation
- Shorten time to market by automating network operations and offering cloud-like orchestration with full-stack programmability.
- Drive app development and delivery with self-service and faster response time.

Business Continuity
- Drive consistent policies across on-prem and public cloud, and across hardware- and software-based ADCs.
- Build resiliency with rSeries' superior performance and failover capabilities.
- Future-proof investments by running multiple versions of apps side by side; migrate applications at your own pace.

Cloud Migration On-Ramp
- Accelerate cloud strategy by adopting cloud operating models and on-demand scalability with rSeries, using it as an on-ramp to cloud.
- Dramatically reduce TCO with rSeries systems; extend commercial models to migrate from hardware to software or as applications move to cloud.

Automation Capabilities

Declarative APIs and integration with automation frameworks (Terraform, Ansible) greatly simplify operations and reduce overhead:
- AS3 (Application Services 3 Extension): A declarative API that simplifies the configuration of application services. With AS3, customers can deploy and manage configurations consistently across environments.
- Ansible automation: Prebuilt Ansible modules for rSeries enable automated provisioning, configuration, and updates, reducing manual effort and minimizing errors (a playbook sketch follows below).
- Terraform: Organizations leveraging infrastructure as code (IaC) can use Terraform to define and automate the deployment of rSeries appliances and associated configurations.

[Image: Example JSON file]
[Image: Running the automation playbook]
[Image: Playbook results]

More information on automation:
- Automating F5OS on rSeries
- GitHub Automation Repository
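As a taste of the Ansible workflow mentioned above, here is a hedged sketch of a play against F5OS, assuming the f5networks.f5os collection; the inventory group, connection variables, and module parameters are illustrative and should be checked against the collection documentation.

```yaml
# Hedged sketch: configure an rSeries appliance with the f5networks.f5os
# Ansible collection. Hostnames and parameters are placeholder assumptions.
- name: Configure an rSeries appliance
  hosts: f5os_appliances
  connection: httpapi
  gather_facts: false
  vars:
    ansible_network_os: f5networks.f5os.f5os
    ansible_httpapi_use_ssl: true
  tasks:
    - name: Create a VLAN for tenant traffic
      f5networks.f5os.f5os_vlan:
        name: tenant-vlan     # hypothetical VLAN name
        vlan_id: 100
        state: present
```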
Specialized Hardware Performance

rSeries offers more hardware-accelerated performance capabilities, with more FPGA chipsets that are more tightly integrated with TMOS. It also includes the latest Intel processing capabilities. This enhances the following:
- SSL and compression offload.
- L4 offload for higher performance and reduced load on software.
- Hardware-accelerated SYN flood protection.
- Hardware-based protection from more than 100 types of denial-of-service (DoS) attacks.
- Support for F5 Intelligence Services.

Migration Options (BIG-IP Journeys)

Use BIG-IP Journeys to easily migrate your existing configuration to rSeries. This covers the following:
- The entire L4-L7 configuration can be migrated.
- Individual applications can be migrated.
- BIG-IP tenant configuration can be migrated.
- Automatically identify and resolve migration issues.
- Convert UCS files into AS3 declarations if needed.
- Post-deployment diagnostics and health checks.

The Journeys tool, available on DevCentral's GitHub, facilitates the migration of legacy BIG-IP configurations to rSeries-compatible formats. Customers can convert UCS files, validate configurations, and highlight unsupported features during the migration process. Multi-tenancy capabilities in rSeries simplify the process of isolating workloads during and after migration.

GitHub repository for F5 Journeys

Conclusion

The F5 rSeries platform addresses the modern enterprise's need for high-performance, scalable, and efficient application delivery and security solutions. By combining cutting-edge hardware capabilities with robust automation tools and flexible migration options, rSeries empowers organizations to seamlessly transition from legacy platforms while unlocking new levels of performance and operational agility. Whether driven by the need for increased throughput or advanced multi-tenancy, the rSeries platform stands as a future-ready solution for securing and optimizing application delivery in an increasingly complex IT landscape.

Related Content
- Cloud Docs rSeries Guide
- F5 rSeries Appliance Datasheet
- F5 VELOS: A Next-Generation Fully Automatable Platform
- Demo Video
F5 VELOS: A Next-Generation Fully Automatable Platform

What is VELOS?

The F5 VELOS platform is the next generation of F5's chassis-based systems. VELOS can bridge traditional and modern application architectures by supporting a mix of traditional F5 BIG-IP tenants as well as next-generation BIG-IP Next tenants in the future. VELOS relies on a Kubernetes-based platform layer (F5OS) that is tightly integrated with F5 TMOS software. Moving to a microservice-based platform layer allows VELOS to provide additional functionality that was not possible in previous generations of F5 BIG-IP platforms. Customers do not need to learn Kubernetes but still get the benefits of it. Management of the chassis is still done via a familiar F5 CLI, web UI, or API. The added automation capabilities can greatly simplify the process of deploying F5 products. A significant amount of time and resources is saved due to automation, which translates to more time to perform critical tasks.

[Image: F5OS VELOS UI]

Why is VELOS important?
- Get more done in less time by using a highly automatable hardware platform that can deploy software solutions in seconds, not minutes or hours.
- Increased performance improves ROI: the VELOS platform is a high-performance and highly scalable chassis with improved processing power.
- Running multiple versions on the same platform allows for more flexibility than previously possible.
- Significantly reduce the TCO of previous-generation hardware by consolidating multiple platforms into one.

Key VELOS Use Cases

NetOps Automation
- Shorten time to market by automating network operations and offering cloud-like orchestration with full-stack programmability.
- Drive app development and delivery with self-service and faster response time.

Business Continuity
- Drive consistent policies across on-prem and public cloud, and across hardware- and software-based ADCs.
- Build resiliency with VELOS' superior platform redundancy and failover capabilities.
- Future-proof investments by running multiple versions of apps side by side; migrate applications at your own pace.

Cloud Migration On-Ramp
- Accelerate cloud strategy by adopting cloud operating models and on-demand scalability with VELOS, using it as an on-ramp to cloud.
- Dramatically reduce TCO with VELOS systems; extend commercial models to migrate from hardware to software or as applications move to cloud.

Automation Capabilities

Declarative APIs and integration with automation frameworks (Terraform, Ansible) greatly simplify operations and reduce overhead:
- AS3 (Application Services 3 Extension): A declarative API that simplifies the configuration of application services. With AS3, customers can deploy and manage configurations consistently across environments.
- Ansible automation: Prebuilt Ansible modules for VELOS enable automated provisioning, configuration, and updates, reducing manual effort and minimizing errors (a tenant-provisioning sketch follows below).
- Terraform: Organizations leveraging infrastructure as code (IaC) can use Terraform to define and automate the deployment of VELOS appliances and associated configurations.

[Image: Example JSON file]
[Image: Running the automation playbook]
[Image: Playbook results]

More information on automation:
- Automating F5OS on VELOS
- GitHub Automation Repository
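Complementing the rSeries VLAN example earlier, here is a hedged sketch of provisioning a BIG-IP tenant on a VELOS partition with the f5networks.f5os Ansible collection. The image name, addressing, sizing, and parameter set are assumptions for illustration; verify them against the collection documentation before use.

```yaml
# Hedged sketch: deploy a BIG-IP tenant on VELOS via the f5networks.f5os
# Ansible collection. Image name, addressing, and sizing are placeholders.
- name: Provision a BIG-IP tenant on a VELOS partition
  hosts: velos_partition
  connection: httpapi
  gather_facts: false
  vars:
    ansible_network_os: f5networks.f5os.f5os
    ansible_httpapi_use_ssl: true
  tasks:
    - name: Create and start the tenant
      f5networks.f5os.f5os_tenant:
        name: tenant1
        image_name: BIGIP-15.1.x.x.IMG   # hypothetical tenant image
        nodes: [1]                        # blade slot(s) hosting the tenant
        mgmt_ip: 192.0.2.10
        mgmt_prefix: 24
        mgmt_gateway: 192.0.2.1
        cpu_cores: 4
        memory: 14848                     # MiB, illustrative sizing
        running_state: deployed
        state: present
```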
Specialized Hardware Performance

VELOS offers more hardware-accelerated performance capabilities, with more FPGA chipsets that are more tightly integrated with TMOS. It also includes the latest Intel processing capabilities. This enhances the following:
- SSL and compression offload.
- L4 offload for higher performance and reduced load on software.
- Hardware-accelerated SYN flood protection.
- Hardware-based protection from more than 100 types of denial-of-service (DoS) attacks.
- Support for F5 Intelligence Services.

[Image: VELOS CX1610 chassis]
[Image: VELOS BX520 blade]

Migration Options (BIG-IP Journeys)

Use BIG-IP Journeys to easily migrate your existing configuration to VELOS. This covers the following:
- The entire L4-L7 configuration can be migrated.
- Individual applications can be migrated.
- BIG-IP tenant configuration can be migrated.
- Automatically identify and resolve migration issues.
- Convert UCS files into AS3 declarations if needed.
- Post-deployment diagnostics and health checks.

The Journeys Tool, available on DevCentral's GitHub, facilitates the migration of legacy BIG-IP configurations to VELOS-compatible formats. Customers can convert UCS files, validate configurations, and highlight unsupported features during the migration process. Multi-tenancy capabilities in VELOS simplify the process of isolating workloads during and after migration.

GitHub repository for F5 Journeys

Conclusion

The F5 VELOS platform addresses the modern enterprise's need for high-performance, scalable, and efficient application delivery and security solutions. By combining cutting-edge hardware capabilities with robust automation tools and flexible migration options, VELOS empowers organizations to seamlessly transition from legacy platforms while unlocking new levels of performance and operational agility. Whether driven by the need for increased throughput or advanced multi-tenancy, the VELOS platform stands as a future-ready solution for securing and optimizing application delivery in an increasingly complex IT landscape.

Related Content
- Cloud Docs VELOS Guide
- F5 VELOS Chassis System Datasheet
- F5 rSeries: Next-Generation Fully Automatable Hardware
- Demo Video
How to Elevate Application Performance and Availability in Azure with F5 NGINXaaS

In this article, we focus on how to optimize traffic distribution with F5 NGINXaaS. F5 NGINXaaS for Azure, an Application Delivery Controller as a Service (ADCaaS), helps organizations deliver outstanding digital experiences using adaptive load balancing with optimized traffic management. This ADCaaS also tailors customization through broad configuration control and reduces complexity through technology consolidation for cloud-native deployments.

Adaptive Load Balancing

Adaptive load balancing enables organizations to automatically distribute traffic across multiple backend services based on real-time demands. It can monitor traffic flow continuously and adjust dynamically based on response time or active health checks, ensuring consistent application connectivity. Whether operating in a multi-cluster Kubernetes environment or scaling applications during request spikes, F5 NGINXaaS for Azure can optimize traffic distribution while preserving smooth user experiences.

Active Health Checks

One of F5 NGINXaaS' built-in capabilities is active health checks, which proactively monitor the status of your backend services. After identifying an unresponsive instance, F5 NGINXaaS reroutes traffic to healthy instances transparently to end users.

Advanced Traffic Management Patterns

F5 NGINXaaS for Azure supports advanced traffic management patterns out of the box, empowering organizations to experiment, deploy, and test with minimal effort.

Blue-Green and Canary Deployments

With blue-green and canary deployment strategies, F5 NGINXaaS enables organizations to gradually route requests to new application versions, allowing staged releases for validation (a traffic-split sketch appears at the end of this section). In case of any issues with the new version, the changes can be easily rolled back to the previous working state, minimizing the risk of downtime.

A/B Testing

F5 NGINXaaS simplifies A/B testing by routing traffic to multiple variants of your application based on user segmentation, allowing your organization to gather insights, test hypotheses, and refine its offerings without impacting users.

Circuit Breaker and Rate Limiting

F5 NGINXaaS also includes essential capabilities for maintaining stability under fluctuating traffic demands. The circuit breaker pattern prevents cascading failures by isolating services that exhibit abnormal behavior, preserving overall application health. Rate limiting ensures your infrastructure isn't overwhelmed by excessive requests, protecting resources against malicious activities or unintentional spikes in traffic. These advanced connectivity patterns reduce deployment risks, enhance scalability, and ensure reliable user experiences.

Multi-Cluster Scalability

F5 NGINXaaS for Azure provides the ability to distribute traffic seamlessly across Kubernetes pods within an Azure Kubernetes Service (AKS) cluster and across multiple clusters. With the built-in Loadbalancer for Kubernetes feature, F5 NGINXaaS for Azure can dynamically build and maintain a list of load balancing targets (pods) in Kubernetes. It can also automatically update the F5 NGINXaaS configuration based on detected topology changes such as deployments of new pods or pod failures. This helps implement fine-grained, application-specific routing, security, and monitoring policies with detailed visibility into services running in AKS, ensuring a consistent user experience. In addition, F5 NGINXaaS' support for multi-cluster topologies enables advanced failover and disaster recovery scenarios, helping achieve business continuity.
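To make the traffic-splitting patterns above concrete, here is a hedged sketch using the NGINX Ingress Controller VirtualServer resource that features in the AKS labs below; NGINXaaS itself expresses the same pattern in its NGINX configuration with the split_clients directive. The hostname, service names, and the 90/10 split are illustrative assumptions.

```yaml
# Hedged sketch: weighted blue/green split with the NGINX Ingress
# Controller VirtualServer resource. Names, host, and weights are
# placeholder assumptions.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: app-split
  namespace: apps
spec:
  host: app.example.com
  upstreams:
    - name: blue
      service: app-blue-svc
      port: 80
    - name: green
      service: app-green-svc
      port: 80
  routes:
    - path: /
      splits:
        - weight: 90
          action:
            pass: blue    # current version keeps most traffic
        - weight: 10
          action:
            pass: green   # canary share for the new version
```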
How to Deploy and Configure F5 NGINXaaS

You can find F5 NGINXaaS on the Azure Marketplace. We have also curated a workshop consisting of self-paced lab exercises to help you get up and running quickly with F5 NGINXaaS. This workshop is designed for cloud and platform architects, DevOps, and SREs to learn more about how F5 NGINXaaS for Azure works: how it is configured, deployed, monitored, and managed. Using various Azure resources like virtual machines (VMs), containers, Azure Kubernetes Service (AKS) clusters, and Azure networking, you will deploy cloud applications and configure F5 NGINXaaS to deliver them in various real-world scenarios.

Load Balancing / Blue-Green / Split Clients / Multi-Cluster Load Balancing Lab

To learn more about and practice implementing load balancing and advanced traffic management, we recommend the following lab: NGINX Load Balancing / Blue-Green / Split Clients / Multi Cluster LB. In this lab, you will configure F5 NGINXaaS for proxying and load balancing across several different backend systems, including F5 NGINX Ingress Controllers in Azure Kubernetes Service (AKS) and a Windows virtual machine (VM). You will work with the F5 NGINXaaS configuration files to enable connectivity to your web applications running in Docker containers, VMs, and AKS pods. You will also optionally configure load balancing for a Redis in-memory cache running in the AKS cluster.

During the lab, you will:
- Configure F5 NGINXaaS to proxy and load balance across AKS workloads
- Configure F5 NGINXaaS to proxy to a Windows Server VM
- Test access to your F5 NGINXaaS configurations with curl and Chrome
- Inspect the HTTP content
- Run an HTTP load test on your systems
- Enable HTTP split clients for blue-green deployments and A/B testing
- Configure F5 NGINXaaS for Redis Cluster (optional)

Upon completion of this lab exercise, you will:
- Have hands-on experience configuring advanced load balancing with F5 NGINXaaS for Azure.
- Be able to configure F5 NGINXaaS to distribute traffic across different backend systems in advanced traffic-splitting scenarios, such as blue-green and canary deployments.
- Have gained practical knowledge of implementing multi-cluster load balancing for AKS using F5 NGINXaaS.

Conclusion

F5 NGINXaaS for Azure is a game-changer for organizations striving to enhance user experiences while simplifying operations. By offering adaptive load balancing, proactive health checks, advanced traffic management patterns like blue-green and canary deployments, and multi-cluster support, it aligns perfectly with the growing demands of cloud-native application development and deployment.