How to Elevate Application Performance and Availability in Azure with F5 NGINXaaS
In this article, we focus on how to optimize traffic distribution with F5 NGINXaaS. F5 NGINXaaS for Azure, an Application Delivery Controller as a Service (ADCaaS), helps organizations deliver outstanding digital experiences using adaptive load balancing with optimized traffic management. This ADCaaS also tailors customization through broad configuration control and reduces complexity through technology consolidation for cloud-native deployments.

Adaptive Load Balancing

Adaptive load balancing enables organizations to automatically distribute traffic across multiple backend services based on real-time demands. It continuously monitors traffic flow and adjusts dynamically based on response time or active health checks, ensuring consistent application connectivity. Whether operating in a multi-cluster Kubernetes environment or scaling applications during request spikes, NGINXaaS for Azure can optimize traffic distribution while preserving smooth user experiences.

Active Health Checks

One of F5 NGINXaaS' built-in capabilities is active health checks, which proactively monitor the status of your backend services. After identifying an unresponsive instance, F5 NGINXaaS reroutes traffic to healthy instances transparently to end users.

Advanced Traffic Management Patterns

F5 NGINXaaS for Azure supports advanced traffic management patterns out-of-the-box, empowering organizations to experiment, deploy, and test with minimal effort.

Blue-Green and Canary Deployments

With blue-green and canary deployment strategies, F5 NGINXaaS enables organizations to gradually route requests to new application versions, allowing staged releases for validation. If any issues are encountered with the new version, the changes can easily be rolled back to the previous working state, minimizing the risk of downtime.

A/B Testing

F5 NGINXaaS simplifies A/B testing by routing traffic to multiple variants of your application based on user segmentation, allowing your organization to gather insights, test hypotheses, and refine its offerings without impacting users.

Circuit Breaker and Rate Limiting

F5 NGINXaaS also includes essential capabilities for maintaining stability under fluctuating traffic demands. The circuit breaker pattern prevents cascading failures by isolating services that exhibit abnormal behavior, preserving overall application health. Rate limiting ensures your infrastructure isn't overwhelmed by excessive requests, protecting resources against malicious activities or unintentional spikes in traffic. These advanced connectivity patterns reduce deployment risks, enhance scalability, and ensure reliable user experiences.

Multi-Cluster Scalability

F5 NGINXaaS for Azure provides the ability to distribute traffic seamlessly across Kubernetes pods within an Azure Kubernetes Service (AKS) cluster and across multiple clusters. With the built-in Loadbalancer for Kubernetes feature, F5 NGINXaaS for Azure can dynamically build and maintain a list of load balancing targets (pods) in Kubernetes. It can also automatically update the F5 NGINXaaS configuration based on detected topology changes such as deployments of new pods or pod failures. This helps implement fine-grained, application-specific routing, security, and monitoring policies with detailed visibility into services running in AKS and ensures a consistent user experience. In addition, F5 NGINXaaS' support for multi-cluster topologies enables advanced failover and disaster recovery scenarios, helping achieve business continuity.
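The blue-green, canary, and A/B patterns above all come down to splitting traffic deterministically by client. NGINX implements this kind of split natively (for example with the split_clients directive); the plain-JavaScript sketch below only illustrates the underlying idea, and the "blue"/"green" upstream names and the 10% canary share are invented examples.

// Conceptual sketch only (not F5 code): hash a client identifier into a
// stable bucket and send a fixed share of clients to the new version.
const crypto = require('crypto');

function bucket(clientId) {
  // MD5 here is just a convenient stable hash, not a security measure.
  const hash = crypto.createHash('md5').update(clientId).digest();
  return hash.readUInt32BE(0) % 100; // bucket in the range 0-99
}

function chooseUpstream(clientId, greenPercent = 10) {
  return bucket(clientId) < greenPercent ? 'green' : 'blue';
}

// The same client always lands on the same version between requests.
console.log(chooseUpstream('203.0.113.7'));
console.log(chooseUpstream('203.0.113.7'));

Because the bucket is derived from a stable client identifier rather than chosen at random per request, a given user sees a consistent version for the duration of a rollout or test.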
How to Deploy and Configure F5 NGINXaaS

You can find F5 NGINXaaS on the Azure Marketplace. We have also curated a workshop consisting of self-paced lab exercises to help you get up and running quickly with F5 NGINXaaS. This workshop is designed for cloud and platform architects, DevOps, and SREs to learn more about how F5 NGINXaaS for Azure works - how it is configured, deployed, monitored, and managed. Using various Azure resources like virtual machines (VMs), containers, Azure Kubernetes Service (AKS) clusters, and Azure networking, you will deploy cloud applications and configure F5 NGINXaaS to deliver them using various real-world scenarios.

Load Balancing / Blue-Green / Split Clients / Multi-Cluster Load Balancing Lab

To learn more about and practice implementing load balancing and advanced traffic management, we recommend the following lab: NGINX Load Balancing / Blue-Green / Split Clients / Multi Cluster LB. In this lab, you will configure F5 NGINXaaS for proxying and load balancing across several different backend systems, including F5 NGINX Ingress Controllers in Azure Kubernetes Service (AKS) and a Windows virtual machine (VM). You will work with the F5 NGINXaaS configuration files to enable connectivity to your web applications running in Docker containers, VMs, and AKS pods. You will also optionally configure load balancing for a Redis in-memory cache running in the AKS cluster.

During the lab, you will:

- Configure F5 NGINXaaS to proxy and load balance across AKS workloads
- Configure F5 NGINXaaS to proxy to a Windows server VM
- Test access to your F5 NGINXaaS configurations with curl and Chrome
- Inspect the HTTP content
- Run an HTTP load test on your systems
- Enable HTTP Split Clients for blue/green deployments and A/B testing
- Configure F5 NGINXaaS for Redis Cluster (optional)

Upon completion of this lab exercise, you will:

- Have hands-on experience configuring advanced load balancing with F5 NGINXaaS for Azure.
- Be able to configure F5 NGINXaaS to distribute traffic across different backend systems in advanced traffic-splitting scenarios, such as blue-green and canary deployments.
- Have gained practical knowledge in implementing multi-cluster load balancing for AKS using F5 NGINXaaS.

Conclusion

F5 NGINXaaS for Azure is a game-changer for organizations striving to enhance user experiences while simplifying operations. By offering adaptive load balancing, proactive health checks, advanced traffic management patterns like blue-green and canary deployments, and multi-cluster support, it aligns perfectly with the growing demands of cloud-native application development and deployment.

Enhancing Application Delivery with F5 NGINXaaS for Azure: ADC-as-a-Service Redefined
Application Delivery Controllers as a Service (ADCaaS) play a critical role in delivering applications securely, efficiently, and seamlessly--from load balancing and traffic optimization to advanced observability and infrastructure automation. For organizations using Microsoft Azure, F5 NGINXaaS for Azure provides enhanced performance, availability, visibility, security, and automation capabilities. It offers a fully managed, highly scalable, and highly customizable ADCaaS solution in Azure designed to meet the needs of an organization's cloud-native applications.

What is F5 NGINXaaS for Azure?

F5 NGINXaaS for Azure is an ADCaaS solution from the partner ecosystem, jointly developed by F5 and Microsoft and natively integrated into the Azure cloud platform. F5 NGINXaaS is available from the Azure Marketplace, and it provides application delivery services that enhance application performance, availability, security, and observability in Azure's public cloud environment. By combining F5's application delivery expertise with the flexibility and scalability of Azure, F5 NGINXaaS bridges the gap between general-purpose ADCaaS capabilities and advanced, highly unique application requirements. Let's take a closer look at how NGINXaaS for Azure enhances your application delivery infrastructure.

Intelligent Load Balancing for Optimal Performance

One of the major strengths of F5 NGINXaaS for Azure lies in its intelligent, adaptive load balancing capabilities. For instance, the "least time" algorithm ensures that requests are routed to the fastest available backend servers, improving application response times and optimizing resource utilization. F5 NGINXaaS also excels in handling dynamic environments with zero-downtime reconfigurations. With support for dynamic configuration updates, application topology changes (such as adding or removing backend instances) are processed without interrupting traffic flow. This ensures a consistent, smooth user experience – even during sudden traffic surges or service failures. In addition, F5 NGINXaaS enables advanced traffic management scenarios and use cases--such as blue-green and canary deployments, A/B testing, and rate limiting--out-of-the-box. This ensures a seamless user experience during rollouts of new application releases or at extremely high request rates.

Enhanced Observability for Real-Time Insights

F5 NGINXaaS for Azure's observability features include more than 200 real-time metrics for in-depth monitoring and troubleshooting. These metrics provide granular insights into traffic patterns, application health, and user experience–all from a single interface, whether you use Azure Monitor or third-party integrations such as Grafana.

Programmability for Customized Application Delivery

Programmability is a key differentiator when choosing an ADCaaS, especially for organizations requiring highly tailored solutions. F5 NGINXaaS' programmable data plane with embedded NGINX JavaScript (njs) makes it possible to unlock virtually unlimited possibilities in customizing application behavior. Organizations can create bespoke configurations that adhere to unique business requirements, such as custom request routing rules or advanced security policies.
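As a rough, hypothetical illustration of that kind of custom routing rule (not taken from the product documentation), the sketch below picks a backend pool from a request header; in njs this logic would typically be attached with js_set and referenced from proxy_pass. The header name and pool names are made up.

// Hypothetical sketch of njs-style routing logic, written as plain JavaScript.
function chooseApiPool(headers) {
  // Send pilot users of the v2 API to a separate pool; everyone else to v1.
  return headers['x-api-version'] === '2' ? 'api_v2_pool' : 'api_v1_pool';
}

// Example request headers (header-name casing normalized for this sketch).
console.log(chooseApiPool({ 'x-api-version': '2' })); // api_v2_pool
console.log(chooseApiPool({}));                       // api_v1_pool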
Technology Consolidation and Azure Integration

F5 NGINXaaS for Azure offers a unified tool that supports both L4 and L7 traffic management, along with enhanced web application firewall (WAF) protection against the OWASP Top 10 security risks out-of-the-box using the supplied default policy. Beyond its integrated functionality, F5 NGINXaaS works seamlessly with core Azure services such as Azure Key Vault, Azure Monitor, and Microsoft Entra ID, simplifying the technology stack and optimizing operational workflows.

Use Cases: When and Where F5 NGINXaaS for Azure Excels

F5 NGINXaaS for Azure is ideal for medium to large-scale Azure cloud environments supporting scalable, dynamic cloud-native workloads. Its advanced capabilities make it an excellent choice for:

- High-Traffic Cloud-Native App Delivery: Applications requiring advanced load-balancing algorithms to handle many concurrent connections efficiently in an agile Azure cloud environment.
- Distributed Applications in Kubernetes: Azure Kubernetes Service (AKS) single- and multi-cluster deployments for enhanced performance, observability, and protection, as well as distributed load balancing, failover, and disaster recovery scenarios, ensuring a seamless user experience.
- API Communications: Scenarios requiring granular control over API traffic with programmability and integrated API security.

How to Deploy and Configure F5 NGINXaaS

You can find F5 NGINXaaS on the Azure Marketplace. We have also curated a workshop consisting of self-paced lab exercises to help you get up and running quickly with F5 NGINXaaS for Azure. This workshop is designed for cloud and platform architects, DevOps, and SREs to learn more about how F5 NGINXaaS for Azure works - how it is configured, deployed, monitored, and managed. Using various Azure resources like virtual machines (VMs), containers, Azure Kubernetes Service (AKS) clusters, and Azure networking, you will deploy cloud applications and configure F5 NGINXaaS to deliver them using various real-world scenarios.

Azure Network Build and NGINX for Azure Overview Lab

We recommend starting your F5 NGINXaaS journey with the following lab: Azure Network Build and NGINX for Azure Overview. In this lab, you will add and configure the Azure components needed for the workshop. This will require a few network resources, a Network Security Group, and a Public IP to allow incoming traffic to your F5 NGINX for Azure workshop resource. You will also deploy the F5 NGINXaaS resource and explore the product with a quick overview of what it is and how to deploy it.

During the lab, you will:

- Set up your Azure Resource Group for the workshop
- Set up your Azure Virtual Network, Subnets, and Network Security Group for inbound traffic
- Create a Public IP and user-assigned managed identity to access F5 NGINXaaS for Azure
- Deploy an F5 NGINXaaS resource
- Explore F5 NGINXaaS for Azure
- Create an initial F5 NGINXaaS configuration for testing
- Create a Log Analytics workspace to collect F5 NGINXaaS error and access logs

Upon completion of this lab exercise, you will develop a foundational understanding of what F5 NGINXaaS is, its capabilities, and how it integrates into Azure environments. You will also gain practical experience in configuring Azure resources for deploying F5 NGINXaaS, as well as creating a basic application delivery configuration.

Conclusion

F5 NGINXaaS for Azure takes application delivery to the next level. By unlocking advanced scenarios and use cases powered by flexible programmability, enhanced customization, better scaling, and improved observability technologies, F5 NGINXaaS offers a tailored ADCaaS solution that brings agility, flexibility, and reliability to your Azure-based applications.

Simplifying Application Health Monitoring with F5 BIG-IP
A simple agreement between BIG-IP administrators and application owners can foster smooth collaboration between teams. Application owners define their own simple or complex health monitors and agree to expose a conventional /health endpoint. When the /health endpoint responds with an HTTP 200, BIG-IP assumes the application is healthy based on the application owners' own criteria.

The Challenge of Health Monitoring in Modern Environments

F5 BIG-IP administrators in Network Operations (NetOps) teams often work with application teams because the BIG-IP acts as a full proxy, providing services like:

- TLS termination
- Load balancing
- Health monitoring

Health checks are crucial for effective load balancing. The BIG-IP uses them to determine where to send traffic among back-end application servers. However, health monitoring frequently causes friction between teams.

Problems with the Traditional Approach

Traditionally, BIG-IP administrators create and maintain health monitors ranging from simple ICMP pings to complex monitors that:

- Simulate user transactions
- Verify HTTP response codes
- Validate payload contents
- Track application dependencies

This leads to several issues:

- Knowledge Gap: NetOps may not fully grasp each application's intricacies.
- Change Management Overhead: Application updates require retesting monitors, causing delays.
- Production Risk: Monitors can break after application changes, incorrectly marking services as up/down.
- Team Friction: Troubleshooting failed health checks involves tedious back-and-forth between teams.

A Cloud-Native Solution

The cloud-native and microservices communities have patterns that elegantly solve these problems. One widely used pattern is the health endpoint, which adapts well to BIG-IP environments.

The /health Endpoint Convention

Cloud-native applications commonly expose dedicated health endpoints like /health, /healthy, or /ready. These return standard status codes reflecting the application's state. The /health endpoint provides a clear contract between NetOps and application teams for BIG-IP integration.

Implementing the Contract

This approach establishes a simple agreement:

Application Team Responsibilities:
- Implement /health to return HTTP 200 when the application is ready for traffic
- Define "healthy" based on application needs (database connectivity, dependencies, etc.)
- Maintain the health check logic as the application changes

BIG-IP Team Responsibilities:
- Configure an HTTP monitor targeting the /health endpoint
- Treat 200 as "healthy", anything else as "unhealthy"

Benefits of This Approach

- Aligned Expertise: Application teams define health based on their knowledge.
- Less Friction: BIG-IP configuration stays stable as applications evolve.
- Better Reliability: Health checks reflect true application health, including dependencies.
- Easier Troubleshooting: The /health endpoint can return detailed diagnostic info, which the BIG-IP ignores; it is used strictly for troubleshooting.
Implementation Examples

F5 BIG-IP Health Monitor Configuration

ltm monitor http /Common/app-health-monitor {
    defaults-from /Common/http
    destination *:*
    interval 5
    recv 200
    recv-disable none
    send "GET /health HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
    time-until-up 0
    timeout 16
}

Node.js Health Endpoint Implementation

const express = require('express');
const app = express();
const port = 3000;

app.get('/', (req, res) => {
  res.send('Application is running');
});

app.get('/health', async (req, res) => {
  try {
    const dbStatus = await checkDatabaseConnection();
    const serviceStatus = await checkDependentServices();

    if (dbStatus && serviceStatus) {
      return res.status(200).json({
        status: 'healthy',
        database: 'connected',
        services: 'available',
        timestamp: new Date().toISOString()
      });
    }

    res.status(503).json({
      status: 'unhealthy',
      database: dbStatus ? 'connected' : 'disconnected',
      services: serviceStatus ? 'available' : 'unavailable',
      timestamp: new Date().toISOString()
    });
  } catch (error) {
    res.status(500).json({
      status: 'error',
      message: error.message,
      timestamp: new Date().toISOString()
    });
  }
});

async function checkDatabaseConnection() {
  // Check real database connection
  return true;
}

async function checkDependentServices() {
  // Check required service connections
  return true;
}

app.listen(port, () => {
  console.log(`Application listening at http://localhost:${port}`);
});

Adopting this health check pattern can greatly reduce friction between NetOps and application teams while improving reliability. The simple contract of HTTP 200 for healthy provides the needed integration while letting each team focus on their expertise. For apps that can't implement a custom /health endpoint, BIG-IP admins can still use traditional ICMP or TCP port monitoring. However, these basic checks can't accurately reflect an app's true health and complex dependencies. This approach fosters collaboration and leverages the specialized knowledge of both network and application teams. The result is more reliable services and smoother operations.

New to the community
Hello all, I am a Network & Security freelancer by profession. I provide solutions for firewalls, routing/switching, and network design and implementation in general. Recently, I have started exploring F5 BIG-IP load balancers, mostly getting hands-on through lab practice using trial licenses. While exploring the BIG-IP LB, I came across this DevCentral community, and so I am here :) . It looks like my questions or queries posted here will be answered or solved. Today, I am looking for help with iRules and LTM policies. What are the differences? What are the use cases for each? Does one have advantages over the other? Looking forward to your answers.

Need to restrict access to URLs
Hello team, I have a new site, https://xyz.com, that needs to be published to the internet. We are planning to launch its services in phases. For the first phase, I have received a set of 29 URI paths (these are wildcard URI paths, e.g., https://xyz.com/asdf/xyz/morning*) that need to be accessible from internet public IPv4 and IPv6 addresses. Any URI path other than these 29 should be redirected to https://oldapplication.com when accessed from internet public IPv4 and IPv6 addresses. Access to https://xyz.com from internal organization private IPs should be allowed without any URI path restriction. Please advise how I can achieve the above requirement using an iRule, an LTM policy, or WAF. Thanks in advance.

K000136009 mount: /usr is busy
Hello Community, I've tried to follow the instructions in K000136009. It works except for point 4: the remount command shows "mount: /usr is busy". Is there any way to resolve the issue without a device reboot? After a reboot, the /usr partition is operating in read-only mode again. Many thanks, rschwarz

What is an Application Delivery Controller - Part I
A Little History

Application Delivery got its start in the form of network-based load balancing hardware. It is the essential foundation on which Application Delivery Controllers (ADCs) operate. The second iteration of purpose-built load balancing (following application-based proprietary systems) materialized in the form of network-based appliances. These are the true founding fathers of today's ADCs. Because these devices were application-neutral and resided outside of the application servers themselves, they could load balance using straightforward network techniques. In essence, these devices would present a "virtual server" address to the outside world, and when users attempted to connect, they would forward the connection to the most appropriate real server, performing bi-directional network address translation (NAT).

Figure 1: Network-based load balancing appliances.

With the advent of virtualization and cloud computing, the third iteration of ADCs arrived as software-delivered virtual editions intended to run on hypervisors. Virtual editions of application delivery services have the same breadth of features as those that run on purpose-built hardware and remove much of the complexity from moving application services between virtual, cloud, and hybrid environments. They allow organizations to quickly and easily spin up application services in private or public cloud environments.

Basic Application Delivery Terminology

It would certainly help if everyone used the same lexicon; unfortunately, every vendor of load balancing devices (and, in turn, ADCs) seems to use different terminology. With a little explanation, however, the confusion surrounding this issue can easily be alleviated.

Node, Host, Member, and Server

Most ADCs have the concept of a node, host, member, or server; some have all four, but they mean different things. There are two basic concepts that they all try to express. One concept—usually called a node or server—is the idea of the physical or virtual server itself that will receive traffic from the ADC. This is synonymous with the IP address of the physical server and, in the absence of an ADC, would be the IP address that the server name (for example, www.example.com) would resolve to. We will refer to this concept as the host. The second concept is a member (sometimes, unfortunately, also called a node by some manufacturers). A member is usually a little more defined than a server/node in that it includes the TCP port of the actual application that will be receiving traffic. For instance, a server named www.example.com may resolve to an address of 172.16.1.10, which represents the server/node, and may have an application (a web server) running on TCP port 80, making the member address 172.16.1.10:80. Simply put, the member includes the definition of the application port as well as the IP address of the physical server. We will refer to this as the service.

Why all the complication? Because the distinction between a physical server and the application services running on it allows the ADC to individually interact with the applications rather than the underlying hardware or hypervisor. A host (172.16.1.10) may have more than one service available (HTTP, FTP, DNS, and so on). By defining each application uniquely (172.16.1.10:80, 172.16.1.10:21, and 172.16.1.10:53), the ADC can apply unique load balancing and health monitoring based on the services instead of the host.
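To make the host/service distinction concrete, here is a small illustrative sketch (not vendor code) that models the terms above in plain JavaScript; the pool name and the virtual server's public address are made-up examples, while the host and service addresses come from the article.

// Illustrative data model only: a host is the machine's address; a service is
// host + port; a pool groups similar services; a virtual server is the
// externally visible IP:port that fronts a pool.
const host = { ip: '172.16.1.10' };

const services = [
  { ip: host.ip, port: 80 },  // web server on the host
  { ip: host.ip, port: 21 },  // FTP on the same host
  { ip: host.ip, port: 53 },  // DNS on the same host
];

// A pool collects "all similar services", possibly across many hosts.
const companyWebPool = {
  name: 'company-web-page',
  members: [ { ip: '172.16.1.10', port: 80 }, { ip: '172.16.1.11', port: 80 } ],
};

// The virtual server is what clients actually connect to; the ADC picks a
// pool member for each new connection (hypothetical public address).
const virtualServer = { ip: '192.0.2.10', port: 80, pool: companyWebPool };

console.log('services on', host.ip + ':', services.map(s => s.port).join(', '));
console.log(`${virtualServer.ip}:${virtualServer.port} ->`,
  virtualServer.pool.members.map(m => `${m.ip}:${m.port}`).join(', '));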
However, there are still times when being able to interact with the host (like low-level health monitoring or when taking a server offline for maintenance) is extremely convenient. Most load balancing-based technology uses some concept to represent the host, or physical server, and another to represent the services available on it—in this case, simply host and services.

Pool, Cluster, and Farm

Load balancing allows organizations to distribute inbound application traffic across multiple back-end destinations, including cloud deployments. It is therefore a necessity to have the concept of a collection of back-end destinations. Pools, as we will refer to them (also known as clusters or farms), are collections of similar services available on any number of hosts. For instance, all services that offer the company web page would be collected into a pool called "company web page" and all services that offer e-commerce services would be collected into a pool called "e-commerce." The key element here is that all systems have a collective object that refers to "all similar services" and makes it easier to work with them as a single unit. This collective object—a pool—is almost always made up of services, not hosts.

Virtual Server

Although not always the case, today the term virtual server often means a server hosting virtual machines. It is important to note that, like the definition of services, a virtual server usually includes the application port as well as the IP address. The term "virtual service" would be more in keeping with the IP:port convention; but because most vendors, ADC and cloud alike, use virtual server, this article uses virtual server as well.

Putting It All Together

Putting all of these concepts together makes up the basic steps in load balancing. The ADC presents virtual servers to the outside world. Each virtual server points to a pool of services that reside on one or more physical hosts.

Figure 2: Application Delivery comprises four basic concepts—virtual servers, pool/cluster, services, and hosts.

While the diagram above may not be representative of a real-world deployment, it does provide the elemental structure for continuing a discussion about application delivery basics.

ps

Next Steps

Read What is Load Balancing? if you haven't already and check out What is an Application Delivery Controller - Part II.

Using F5 Distributed Cloud private connectivity orchestration for secure multi-cloud infrastructure
Introduction

Enterprise businesses use modern apps that access services in many locations. Users running productivity apps, like Office365, must connect to services in the cloud from on-prem locations. To keep this running well, enterprises must provide connectivity that's fast, reliable, and private. Traditionally, it has taken many steps to create private connections to a public cloud subscription and route application-specific traffic to it. The F5 Distributed Cloud Platform orchestrates ExpressRoutes in Azure and Direct Connect services in AWS, eliminating many of the steps needed for routing end-to-end. Distributed Cloud private connectivity orchestration makes it easier than ever to connect and configure routing over existing private and dedicated circuits from on-prem locations to cloud services running in AWS and in Azure.

The illustration below outlines the basic components of an ExpressRoute service in Azure, but there's a lot more you'll need to know about just under the covers. Without orchestration, many steps are needed to enable routing between on-prem sites and Azure. This requires expert knowledge of Azure networking, numerous dependent resources to be built, and advanced routing protocol knowledge--specifically the Border Gateway Protocol (BGP).

- Extend the on-prem network to a colo provider
- Create and provision the ExpressRoute Circuit
- Create a Virtual Network Gateway
- Create a connection between the ExpressRoute Circuit & Virtual Network Gateway (VNG)
- Configure a Route Server to propagate routes between the VNG and on-prem
- Configure user-defined routes on each subnet on each VNet in Azure

Using Distributed Cloud to orchestrate ExpressRoutes in Azure and Direct Connect in AWS, the total number of steps is effectively reduced to just an essential few. Additional benefits include no longer needing to be an expert in Azure networking or in BGP routing, and you get the ability to control connectivity with intent-based policies natively built into the Distributed Cloud Platform. An example of an intent-based policy is to configure VNet tagging in Azure to use with a firewall policy that allows access only to specific apps or by select users. Additional policies that support tagging include Distributed Cloud WAAP and Distributed Cloud App Infrastructure Protection. The following details cover the key components needed to support direct connectivity and show how to create the services and deploy a privately routed app in Distributed Cloud.

Building ExpressRoute to Azure

- Extend the on-prem network to a colo provider
- Create and provision the ExpressRoute Circuit
- Enable the ExpressRoute orchestration feature on an Azure VNet Site configured in Distributed Cloud

To create an ExpressRoute orchestrated configuration in Distributed Cloud, navigate to Multi-Cloud Network Connect > Site Management > Azure VNET Sites > Add Azure VNET Site or Manage Configuration for an existing Site. Enter the required parameters, and when you reach "Ingress Gateway or Ingress/Egress Gateway", select "Ingress/Egress Gateway (Two Interface) …". Here you have the option to deploy on a Recommended Region or an Alternate Region. This selection depends entirely on your business' cloud deployment model. After choosing the model that best fits your environment, configure the number of Availability Zones for the Gateway and the subnets (new or existing) that it will join, and Apply the settings. Now scroll down to Advanced Options (enabling Advanced Fields) and select VNet type: Hub VNet.
Click "View Configuration" and select any existing VNets from your Azure subscription that should inherit orchestrated routing. Next, change the "Express Route Configuration" to Enabled to expand the dropdown and access the ExpressRoute Circuit and Virtual Network Gateway settings.

Under "* Connections", add the ExpressRoute Circuit configuration for your Azure subscription(s). The required fields are the Name and the Express Route Circuit, which is the Resource ID for the circuit in Azure. Note: When configuring more than one circuit, you may want to also configure the Routing Weight for circuit preference. When configuring an express route circuit from another subscription (not shown below), you'll also need an Authorization Key.

For ease of deployment, it's recommended to use the default values for the remaining fields, including the Gateway SKU, Subnet for Azure VNet Gateway, and Subnet for Azure Route Server, as well as the ASN Configuration for BGP between the Site and Azure Route Servers.

After the configuration is fully saved and deployed, with site status Applied on the Cloud Sites page, all resources in Azure will now be set to use the ExpressRoute Circuit(s) for all designated L3 routed traffic. Next, we'll configure the orchestration of Direct Connect in an AWS VPC connected site. AWS TGW connected sites are also supported.

Building Direct Connect for AWS

To create a Direct Connect orchestrated configuration in Distributed Cloud, navigate to Multi-Cloud Network Connect > Site Management > AWS VPC Sites > Add AWS VPC Site or Manage Configuration for an existing Site. Enter the required parameters, and when you reach the "Ingress Gateway or Ingress/Egress Gateway", choose the form factor that meets your deployment requirements. Scroll down to Advanced Configuration, enable Advanced Fields, and then Enable Direct Connect.

When configuring the Direct Connect connection feature, choose either Hosted VIF or Standard VIF mode. Use Hosted VIF when you're already using the Direct Connect connection for other purposes in AWS or when the VIF is in another AWS subscription. Otherwise, choosing Standard VIF allows Distributed Cloud to automatically create the VIF and the dependent services in AWS mentioned below to access the Direct Connect connection.

Standard VIF mode creates the following additional resources in AWS:

- Virtual Gateway (VGW): associating it to the VPC and enabling route propagation to inside route tables
- Direct Connect Gateway (DCG): associating it to the VGW

Note: In Standard VIF mode, at the end of the deployment admins may copy the Direct Connect Gateway ID and use it to create other VIFs. Admins may also copy the ASN. This is the AWS side of the ASN that's needed by network ops teams to configure BGP peering.

Note: In Hosted VIF mode, the independent site deployment is responsible for:

- Creating the VGW, associating it to the VPC, and enabling route propagation to inside route tables
- Creating the DCG and associating it to the VGW
- Accepting the Hosted VIF and linking it to the DCGW VIF
Adding Private Connectivity On-Prem The final part to this deployment is routing both the ExpressRoute and Direct Connect circuits to an on-prem site. Both circuits must terminate at a colo space, and then standard IT/NetOps teams handle the routing outside the realm of Distributed Cloud to the destination. Building a Distributed App w/ Private Connectivity With Distributed Cloud having orchestrated the routing to each site’s workload, and IT/NetOps configured routing on-prem, including propagating the on-prem routes on BGP, an app with components that work independently can now be accessed as one unified interface. An example of a distributed app that run perfectly in this environment is the demo app, Arcadia Finance. This app has four components: Main – Frontend Web interface API – An App module accessed by Main to support money transfers Refer-A-Friend (Not used) – An App module interface accessed by Main to invite friends Backend – A DB server that stores money transfer accounts used by the API module, stock portfolio positions used by the Main module, and email addresses saved by the Refer-A-Friend module. Functionally, the connection flow is as follows: Users access a VIP advertised by an F5 Global Network Regional Edge to the Internet User traffic is connected to the Main (frontend) app running in AWS via the F5 Global Network Main App connects to API in Azure to load the money-transfer side frame, and then to the Backend DB on-prem to load the stocks portfolio balances. These connections transit the private connectivity links created in this article. API App in Azure connects to the Backend DB on-prem to retrieve money transfer accounts. This connection transits the private connectivity links created in this article. To support this topology and configuration, the apps are divided and run as follows: AWS Frontend (nginx) Main (Web) Refer-A-Friend Azure API (App) On-Prem Backend (DB) To make the app reachable to users, use the Distributed Cloud console Sites Distributed Apps feature to create one HTTP Load Balancer with the VIP advertised to the Internet, and with the origin pool of the Frontend (nginx) app. Note: This step assumes that you have previously created a fully connected AWS CE Site with connectivity to your VPC’s and a Direct Connect circuit in the section above. Navigate to Multi-Cloud App Connect > Manage > Load Balancers > Origin Pools, create a new origin pool. 
In the pool creation menu, at the top, select "JSON" and change the format to YAML, then paste the following example, changing the specific values, such as the namespace, to match your environment:

metadata:
  name: mcn-aws-workload-pool
  namespace: mcn-privatelinks
  labels:
    ves.io/app_type: arcadia
  annotations: {}
  disable: false
spec:
  origin_servers:
    - private_ip:
        ip: 10.100.2.238
        site_locator:
          site:
            tenant: acmecorp-tnxbsial
            namespace: system
            name: soln-eng-aws-dc
            kind: site
        inside_network: {}
      labels: {}
  no_tls: {}
  port: 8000
  same_as_endpoint_port: {}
  loadbalancer_algorithm: LB_OVERRIDE
  endpoint_selection: LOCAL_PREFERRED

With the origin pool created, navigate to Distributed Apps > Manage > Load Balancers > HTTP Load Balancers, and add a new one with the following YAML provided as an example:

metadata:
  name: mcn-arcadia-frontend
  namespace: mcn-privatelinks
  labels:
    ves.io/app_type: arcadia
  annotations: {}
  disable: false
spec:
  domains:
    - mcn-arcadia-frontend.demo.internal
  http:
    dns_volterra_managed: false
    port: 80
  downstream_tls_certificate_expiration_timestamps: []
  advertise_on_public_default_vip: {}
  default_route_pools:
    - pool:
        tenant: acmecorp-tnxbsial
        namespace: mcn-privatelinks
        name: mcn-aws-workload-pool
        kind: origin_pool
      weight: 1
      priority: 1
      endpoint_subsets: {}

Internally verify end-to-end connectivity

Opening a command-line shell on the Frontend web app and using hping3 (a traceroute-style tool) together with curl reveals each hop identified as privately connected, along with connectivity established directly to the destination without an intermediary. The following IP addresses support a TCP connection from the Frontend (Web) app running in AWS to the API app running in Azure:

- 10.100.2.238 (Source): Frontend (Web)
- 172.18.0.1: Container host node
- 192.168.1.6: AWS Direct Connect Gateway
- 192.168.1.5: On-Prem router
- 192.168.1.22: Azure ExpressRoute Circuit endpoint
- 10.101.1.5 (Destination): App (API)

In the CLI output, note each hop and the value in the HTTP response header "Server":

root@1e40062cb314:/etc/nginx# hping3 -ST -p 8080 api
HPING api (eth2 10.101.1.5): S set, 40 headers + 0 data bytes
hop=1 TTL 0 during transit from ip=172.18.0.1 name=UNKNOWN
hop=1 hoprtt=7.7 ms
hop=2 TTL 0 during transit from ip=192.168.1.6 name=UNKNOWN
hop=2 hoprtt=31.4 ms
hop=3 TTL 0 during transit from ip=192.168.1.5 name=UNKNOWN
hop=3 hoprtt=67.3 ms
hop=4 TTL 0 during transit from ip=192.168.1.22 name=UNKNOWN
hop=4 hoprtt=67.2 ms
^C
--- api hping statistic ---
8 packets transmitted, 4 packets received, 50% packet loss
round-trip min/avg/max = 7.7/43.4/67.3 ms

root@1e40062cb314:/etc/nginx# curl -v api:8080
* Rebuilt URL to: api:8080/
* Hostname was NOT found in DNS cache
*   Trying 10.101.1.5...
* Connected to api (10.101.1.5) port 8080 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: api:8080
> Accept: */*
>
< HTTP/1.1 200 OK
* Server nginx/1.18.0 (Ubuntu) is not blacklisted
< Server: nginx/1.18.0 (Ubuntu)
< Date: Thu, 12 Jan 2023 18:59:32 GMT
< Content-Type: text/html
< Content-Length: 612
< Last-Modified: Fri, 11 Nov 2022 03:24:47 GMT
< Connection: keep-alive
< ETag: "636dc07f-264"
< Accept-Ranges: bytes

Conclusion

As more services continue to be deployed to and run in the cloud, dedicated, reliable, and secure private connectivity is increasingly required by enterprises. Establishing connectivity is not a rudimentary task and requires the assistance of many hands in different departments.
Distributed Cloud private connectivity orchestration helps streamline this process by eliminating many of the steps required in each cloud provider, including no longer requiring dedicated cloud and routing protocol experts just to configure these services manually. To see all of this in action and to see how all the parts come together, watch the following video, a companion to this article.

Visit the following resources for more information about this feature and other Distributed Cloud services:

- Multi-Cloud Network Connect Product Information
- Direct Connect orchestration for AWS TGW Sites
- Direct Connect orchestration for AWS VPC Sites
- ExpressRoutes orchestration for Azure VNet Sites
- YouTube Video

What is Load Balancing?
tl;dr - Load Balancing is the process of distributing data across disparate services to provide redundancy, reliability, and improve performance.

The entire intent of load balancing is to create a system that virtualizes the "service" from the physical servers that actually run that service. A more basic definition is to balance the load across a bunch of physical servers and make those servers look like one great big server to the outside world. There are many reasons to do this, but the primary drivers can be summarized as "scalability," "high availability," and "predictability."

Scalability is the capability of dynamically, or easily, adapting to increased load without impacting existing performance. Service virtualization presented an interesting opportunity for scalability; if the service, or the point of user contact, was separated from the actual servers, scaling of the application would simply mean adding more servers or cloud resources which would not be visible to the end user.

High Availability (HA) is the capability of a site to remain available and accessible even during the failure of one or more systems. Service virtualization also presented an opportunity for HA; if the point of user contact was separated from the actual servers, the failure of an individual server would not render the entire application unavailable.

Predictability is a little less clear as it represents pieces of HA as well as some lessons learned along the way. However, predictability can best be described as the capability of having confidence and control in how the services are being delivered and when they are being delivered in regards to availability, performance, and so on.

A Little Background

Back in the early days of the commercial Internet, many would-be dot-com millionaires discovered a serious problem in their plans. Mainframes didn't have web server software (not until the AS/400e, anyway) and even if they did, they couldn't afford them on their start-up budgets. What they could afford was standard, off-the-shelf server hardware from one of the ubiquitous PC manufacturers. The problem for most of them? There was no way that a single PC-based server was ever going to handle the amount of traffic their idea would generate and if it went down, they were offline and out of business. Fortunately, some of those folks actually had plans to make their millions by solving that particular problem; thus was born the load balancing market.

In the Beginning, There Was DNS

Before there were any commercially available, purpose-built load balancing devices, there were many attempts to utilize existing technology to achieve the goals of scalability and HA. The most prevalent, and still used, technology was DNS round-robin. Domain name system (DNS) is the service that translates human-readable names (www.example.com) into machine recognized IP addresses. DNS also provided a way in which each request for name resolution could be answered with multiple IP addresses in different order.

Figure 1: Basic DNS response for redundancy

The first time a user requested resolution for www.example.com, the DNS server would hand back multiple addresses (one for each server that hosted the application) in order, say 1, 2, and 3. The next time, the DNS server would give back the same addresses, but this time as 2, 3, and 1.
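The rotation just described can be sketched in a few lines (an illustration only, not how a DNS server is actually implemented; the addresses are placeholders):

// Rough sketch of DNS round-robin: each response returns the full address
// list, rotated by one position, so a different server appears first each time.
const addresses = ['172.16.1.10', '172.16.1.11', '172.16.1.12'];
let offset = 0;

function resolve(name) {
  const answer = addresses.slice(offset).concat(addresses.slice(0, offset));
  offset = (offset + 1) % addresses.length;
  return answer;
}

console.log(resolve('www.example.com')); // .10 first
console.log(resolve('www.example.com')); // .11 first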
This solution was simple and provided the basic characteristics of what customers were looking for by distributing users sequentially across multiple physical machines using the name as the virtualization point. From a scalability standpoint, this solution worked remarkably well; probably the reason why derivatives of this method are still in use today, particularly in regards to global load balancing or the distribution of load to different service points around the world. As the service needed to grow, all the business owner needed to do was add a new server, include its IP address in the DNS records, and voila, increased capacity. One note, however, is that DNS responses do have a maximum length that is typically allowed, so there is a potential to outgrow or scale beyond this solution.

This solution did little to improve HA. First off, DNS has no capability of knowing if the servers listed are actually working or not, so if a server became unavailable and a user tried to access it before the DNS administrators knew of the failure and removed it from the DNS list, they might get an IP address for a server that didn't work.

Proprietary Load Balancing in Software

One of the first purpose-built solutions to the load balancing problem was the development of load balancing capabilities built directly into the application software or the operating system (OS) of the application server. While there were as many different implementations as there were companies who developed them, most of the solutions revolved around basic network trickery. For example, one such solution had all of the servers in a cluster listen to a "cluster IP" in addition to their own physical IP address.

Figure 2: Proprietary cluster IP load balancing

When the user attempted to connect to the service, they connected to the cluster IP instead of to the physical IP of the server. Whichever server in the cluster responded to the connection request first would redirect them to a physical IP address (either their own or another system in the cluster) and the service session would start. One of the key benefits of this solution is that the application developers could use a variety of information to determine which physical IP address the client should connect to. For instance, they could have each server in the cluster maintain a count of how many sessions each clustered member was already servicing and have any new requests directed to the least utilized server.

Initially, the scalability of this solution was readily apparent. All you had to do was build a new server, add it to the cluster, and you grew the capacity of your application. Over time, however, the scalability of application-based load balancing came into question. Because the clustered members needed to stay in constant contact with each other concerning who the next connection should go to, the network traffic between the clustered members increased exponentially with each new server added to the cluster. The scalability was great as long as you didn't need to exceed a small number of servers.

HA was dramatically increased with these solutions. However, since each iteration of intelligence-enabling HA characteristics had a corresponding server and network utilization impact, this also limited scalability. The other negative HA impact was in the realm of reliability.

Network-Based Load Balancing Hardware

The second iteration of purpose-built load balancing came about as network-based appliances.
These are the true founding fathers of today's Application Delivery Controllers. Because these boxes were application-neutral and resided outside of the application servers themselves, they could achieve their load balancing using much more straightforward network techniques. In essence, these devices would present a virtual server address to the outside world, and when users attempted to connect, they would forward the connection to the most appropriate real server, performing bi-directional network address translation (NAT).

Figure 3: Load balancing with network-based hardware

The load balancer could control exactly which server received which connection and employed "health monitors" of increasing complexity to ensure that the application server (a real, physical server) was responding as needed; if not, it would automatically stop sending traffic to that server until it produced the desired response (indicating that the server was functioning properly). Although the health monitors were rarely as comprehensive as the ones built by the application developers themselves, the network-based hardware approach could provide at least basic load balancing services to nearly every application in a uniform, consistent manner—finally creating a truly virtualized service entry point unique to the application servers serving it.

Scalability with this solution was only limited by the throughput of the load balancing equipment and the networks attached to it. It was not uncommon for organizations replacing software-based load balancing with a hardware-based solution to see a dramatic drop in the utilization of their servers. HA was also dramatically reinforced with a hardware-based solution. Predictability was a core component added by the network-based load balancing hardware since it was much easier to predict where a new connection would be directed and much easier to manipulate.

The advent of the network-based load balancer ushered in a whole new era in the architecture of applications. HA discussions that once revolved around "uptime" quickly became arguments about the meaning of "available" (if a user has to wait 30 seconds for a response, is it available? What about one minute?). This is the basis from which Application Delivery Controllers (ADCs) originated.

The ADC

Simply put, ADCs are what all good load balancers grew up to be. While ADC conversations rarely mention load balancing, without the capabilities of the network-based hardware load balancer, they would be unable to affect application delivery at all. Today, we talk about security, availability, and performance, but the underlying load balancing technology is critical to the execution of all.

Next Steps

Ready to plunge into the next level of load balancing? Take a peek at these resources:

- Go Beyond POLB (Plain Old Load Balancing)
- The Cloud-Ready ADC
- BIG-IP Virtual Edition Products, The Virtual ADCs Your Application Delivery Network Has Been Missing
- Cloud Balancing: The Evolution of Global Server Load Balancing

Intro to Load Balancing for Developers – The Algorithms
If you're new to this series, you can find the complete list of articles in the series on my personal page here.

If you are writing applications to sit behind a load balancer, it behooves you to at least have a clue what the algorithm your load balancer uses is about. We're taking this week's installment to just chat about the most common algorithms and give a plain-programmer description of how they work. While the algorithm chosen has historically been beyond the developers' control, you're the one that has to deal with performance problems, so you should know what is happening in the application's ecosystem, not just in the application. Anything that can slow your application down or introduce errors is something worth having reviewed.

For algorithms supported by the BIG-IP, the text here is a paraphrased/modified version of the help text associated with the Pool Member tab of the BIG-IP UI. If they wrote a good description and all I needed to do was programmer-ize it, then I used it. For algorithms not supported by the BIG-IP I wrote from scratch. The Plain Programmer Descriptions are not intended to say anything about the way any particular dev team at F5 or any other company writes these algorithms; they're just an attempt to put the process into terms that are easier for someone with a programming background to understand. Hopefully a successful attempt.

Interestingly enough, I've pared down what BIG-IP supports to a subset. That means that F5 employees and aficionados will be going "But you didn't mention…!" and non-F5 employees will likely say "But there's the Chi-Squared Algorithm…!" (no, chi-squared is a theoretical distribution method I know of because it was presented as a proof for testing the randomness of a 20-sided die, ages ago in Dragon Magazine). So send me hate mail… I'm good. Unless you can say more than 2-5% of the world's load balancers are running the algorithm, I won't consider that I missed something important. The point is to give developers and software architects a familiarity with core algorithms, not to build the world's most complete lexicon of algorithms.

Random: This load balancing method randomly distributes load across the servers available, picking one via random number generation and sending the current connection to it. While it is available on many load balancing products, its usefulness is questionable except where uptime is concerned – and then only if you detect down machines.

Plain Programmer Description: The system builds an array of servers being load balanced, and uses the random number generator to determine who gets the next connection… Far from an elegant solution, and most often found in large software packages that have thrown load balancing in as a feature.

Round Robin: Round Robin passes each new connection request to the next server in line, eventually distributing connections evenly across the array of machines being load balanced. Round Robin works well in most configurations, but could be better if the equipment that you are load balancing is not roughly equal in processing speed, connection speed, and/or memory.
Plain Programmer Description: The system builds a standard circular queue and walks through it, sending one request to each machine before getting to the start of the queue and doing it again. While I've never seen the code (or actual load balancer code for any of these, for that matter), we've all written this queue with the modulus function before. In school, if nowhere else.

Weighted Round Robin (called Ratio on the BIG-IP): With this method, the number of connections that each machine receives over time is proportionate to a ratio weight you define for each machine. This is an improvement over Round Robin because you can say "Machine 3 can handle 2x the load of machines 1 and 2", and the load balancer will send two requests to machine #3 for each request to the others.

Plain Programmer Description: The simplest way to explain this one is that the system makes multiple entries in the Round Robin circular queue for servers with larger ratios. So if you set ratios at 3:2:1:1 for your four servers, that's what the queue would look like – 3 entries for the first server, two for the second, one each for the third and fourth. In this version, the weights are set when the load balancing is configured for your application and never change, so the system will just keep looping through that circular queue. Different vendors use different weighting systems – whole numbers, decimals that must total 1.0 (100%), etc. – but this is an implementation detail; they all end up in a circular-queue style layout with more entries for larger ratings.

Dynamic Round Robin (called Dynamic Ratio on the BIG-IP): This method is similar to Weighted Round Robin; however, weights are based on continuous monitoring of the servers and are therefore continually changing. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time. This Application Delivery Controller method is rarely available in a simple load balancer.

Plain Programmer Description: If you think of Weighted Round Robin where the circular queue is rebuilt with new (dynamic) weights whenever it has been fully traversed, you'll be dead-on.
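As a rough illustration of the ratio-queue idea above (not actual load balancer code), this sketch expands a set of made-up servers and the 3:2:1:1 ratios from the description into the circular queue and walks it with the modulus trick:

// Rough sketch of a weighted (ratio) round-robin queue -- not vendor code.
const servers = [
  { name: 'server1', ratio: 3 },
  { name: 'server2', ratio: 2 },
  { name: 'server3', ratio: 1 },
  { name: 'server4', ratio: 1 },
];

// Expand each server into "ratio" entries in a circular queue.
const queue = servers.flatMap(s => Array(s.ratio).fill(s.name));
let next = 0;

function pick() {
  const choice = queue[next];
  next = (next + 1) % queue.length; // the modulus trick mentioned earlier
  return choice;
}

// Seven connections traverse the queue once: server1 three times, server2
// twice, server3 and server4 once each.
for (let i = 0; i < 7; i++) console.log(pick());

Rebuilding the queue with fresh ratios each time it has been fully traversed is the Dynamic Ratio variation described above.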
Fastest: The Fastest method passes a new connection based on the fastest response time of all servers. This method may be particularly useful in environments where servers are distributed across different logical networks. On the BIG-IP, only servers that are active will be selected.

Plain Programmer Description: The load balancer looks at the response time of each attached server and chooses the one with the best response time. This is pretty straightforward, but can lead to congestion because response time right now won't necessarily be response time in one or two seconds. Since connections are generally going through the load balancer, this algorithm is a lot easier to implement than you might think, as long as the numbers are kept up to date whenever a response comes through.

These next three I use the BIG-IP name for. They are variants of a generalized algorithm sometimes called Long Term Resource Monitoring.

Least Connections: With this method, the system passes a new connection to the server that has the least number of current connections. Least Connections methods work best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time. This Application Delivery Controller method is rarely available in a simple load balancer.

Plain Programmer Description: This algorithm just keeps track of the number of connections attached to each server, and selects the one with the smallest number to receive the connection. Like Fastest, this can cause congestion when the connections are all of different durations – like if one is loading a plain HTML page and another is running a JSP with a ton of database lookups. Connection counting just doesn't account for that scenario very well.

Observed: The Observed method uses a combination of the logic used in the Least Connections and Fastest algorithms to load balance connections to servers being load-balanced. With this method, servers are ranked based on a combination of the number of current connections and the response time. Servers that have a better balance of fewest connections and fastest response time receive a greater proportion of the connections. This Application Delivery Controller method is rarely available in a simple load balancer.

Plain Programmer Description: This algorithm tries to merge Fastest and Least Connections, which makes it more appealing than either one alone. In this case, an array is built with the information indicated (how weighting is done will vary, and I don't know even for F5, let alone our competitors), and the element with the highest value is chosen to receive the connection. This somewhat counters the weaknesses of both of the original algorithms, but does not account for when a server is about to be overloaded – like when three requests to that query-heavy JSP have just been submitted, but not yet hit the heavy work.

Predictive: The Predictive method uses the ranking method used by the Observed method; however, with the Predictive method, the system analyzes the trend of the ranking over time, determining whether a server's performance is currently improving or declining. The servers in the specified pool with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. The Predictive method works well in any environment. This Application Delivery Controller method is rarely available in a simple load balancer.

Plain Programmer Description: This method attempts to fix the one problem with Observed by watching what is happening with the server. If its response time has started going downhill, it is less likely to receive the next connection. Again, no idea what the weightings are, but an array is built and the most desirable is chosen.

You can see with some of these algorithms that persistent connections would cause problems. Like Round Robin, if the connections persist to a server for as long as the user session is working, some servers will build a backlog of persistent connections that slow their response time. The Long Term Resource Monitoring algorithms are the best choice if you have a significant number of persistent connections. Fastest works okay in this scenario also if you don't have access to any of the dynamic solutions.
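As another rough, non-vendor sketch, here is a Least Connections pick alongside an Observed-style blended ranking; the 50/50 weighting and the numbers are invented, since, as noted above, real weighting schemes vary:

// Rough sketch only -- real products weight these factors differently.
// Connection counts and response times are invented example numbers.
const pool = [
  { name: 'server1', connections: 12, responseMs: 40 },
  { name: 'server2', connections: 7,  responseMs: 95 },
  { name: 'server3', connections: 9,  responseMs: 30 },
];

// Least Connections: pick the server currently holding the fewest connections.
function leastConnections(servers) {
  return servers.reduce((best, s) => (s.connections < best.connections ? s : best));
}

// Observed-style ranking: blend connection count and response time into one
// score (lower is better here); the 50/50 blend is an arbitrary assumption.
function observed(servers) {
  const score = s => 0.5 * s.connections + 0.5 * s.responseMs;
  return servers.reduce((best, s) => (score(s) < score(best) ? s : best));
}

console.log(leastConnections(pool).name); // server2 (fewest connections)
console.log(observed(pool).name);         // server3 (best blended score)

A Predictive-style variant would additionally track how each server's score is trending between samples and favor the ones that are improving.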
That's it for this week. Next week we'll start talking specifically about Application Delivery Controllers and what they offer – which is a whole lot – that can help your application in a variety of ways. Until then!

Don.