Azure
Get Started with BIG-IP and BIG-IQ Virtual Edition (VE) Trial
Welcome to the BIG-IP and BIG-IQ trials page! This will be your jumping-off point for setting up a trial version of BIG-IP VE or BIG-IQ VE in your environment. As you can see below, everything you’ll need is included and organized by operating environment — namely by public/private cloud or virtualization platform. To get started with your trial, use the software and documentation found in the links below. Upon requesting a trial, you should have received an email containing your license keys. Please bear in mind that it can take up to 30 minutes to receive your licenses. Don't have a trial license? Get one here. Or if you're ready to buy, contact us. Looking for other resources like tools or the compatibility matrix? See the Other Resources section below.

BIG-IP VE and BIG-IQ VE
When you sign up for the BIG-IP and BIG-IQ VE trial, you receive a set of license keys. Each key corresponds to a component listed below:
BIG-IQ Centralized Management (CM) — Manages the lifecycle of BIG-IP instances including analytics, licenses, configurations, and auto-scaling policies
BIG-IQ Data Collection Device (DCD) — Aggregates logs and analytics of traffic and BIG-IP instances to be used by BIG-IQ
BIG-IP Local Traffic Manager (LTM), Access (APM), Advanced WAF (ASM), Network Firewall (AFM), DNS — Keep your apps up and running with BIG-IP application delivery controllers. BIG-IP Local Traffic Manager (LTM) and BIG-IP DNS handle your application traffic and secure your infrastructure. You’ll get built-in security, traffic management, and performance application services, whether your applications live in a private data center or in the cloud.

Select the hypervisor or environment where you want to run VE:
AWS: CFT for single NIC deployment, CFT for three NIC deployment, BIG-IP VE images in the AWS Marketplace, BIG-IQ VE images in the AWS Marketplace, BIG-IP AWS documentation, BIG-IP video: Single NIC deploy in AWS, BIG-IQ AWS documentation, Setting up and Configuring a BIG-IQ Centralized Management Solution, BIG-IQ Centralized Management Trial Quick Start
Azure: Azure Resource Manager (ARM) template for single NIC deployment, Azure ARM template for three NIC deployment, BIG-IP VE images in the Azure Marketplace, BIG-IQ VE images in the Azure Marketplace, BIG-IQ Centralized Management Trial Quick Start, BIG-IP VE Azure documentation, Video: BIG-IP VE Single NIC deploy in Azure, BIG-IQ VE Azure documentation, Setting up and Configuring a BIG-IQ Centralized Management Solution
VMware/KVM/OpenStack: Download BIG-IP VE image, Download BIG-IQ VE image, BIG-IP VE Setup, BIG-IQ VE Setup, Setting up and Configuring a BIG-IQ Centralized Management Solution
Google Cloud: Google Deployment Manager template for single NIC deployment, Google Deployment Manager template for three NIC deployment, BIG-IP VE images in Google Cloud, Google Cloud Platform documentation, Video: Single NIC deploy in Google

Other Resources
AskF5, GitHub community (f5devcentral, f5networks)
Tools to automate your deployment: BIG-IQ Onboarding Tool, F5 Declarative Onboarding, F5 Application Services 3 Extension
Other Tools: F5 SDK (Python), F5 Application Services Templates (FAST), F5 Cloud Failover, F5 Telemetry Streaming
Find out which hypervisor versions are supported with each release of VE: BIG-IP Compatibility Matrix, BIG-IQ Compatibility Matrix
Do you have any comments or questions? Ask here.

Integrating the F5 BIG-IP with Azure Sentinel
So here’s the deal; I have a few F5 BIG-IP VEs deployed across the globe protecting my cloud-hosted applications. It sure would be nice if there was a way to send all that event and statistical data to my Azure Sentinel workspace. Well, guess what? There is a way and yes, it is nice. The Application Services 3 (AS3) extension is a relatively new mechanism for declaratively configuring application-specific resources on a BIG-IP system. This involves posting a JSON declaration to the system’s API endpoint, (https://<BIG-IP>/mgmt/shared/appsvcs/declare). Telemetry Streaming (TS) is an F5 iControl LX extension that, when installed on the BIG-IP, enables you to declaratively aggregate, normalize, and forward statistics and events from the BIG-IP. The control plane data can be streamed to an Azure Log Analytics workspace by posting a single TS JSON declaration to TS's API endpoint, (https://<BIG-IP>/mgmt/shared/telemetry/declare). As illustrated on the right, events/stats can be collected and aggregated from multiple BIG-IPs regardless of whether they reside in Azure, on-premises, or other public/private clouds. Let's take a quick look at how I set up my BIG-IP and Azure Sentinel. Since this post is not meant to be prescriptive guidance, I have included links to relevant guidance where appropriate. Okay, let's have some fun!

So I don't want to sound too biased here but, with that said, the F5 crew has put out some excellent guidance on Telemetry Streaming. The CloudDocs site, (see left) includes information for various cloud-related F5 technologies and integrations. Refer to the installation section for detailed guidance.

Install the Plug-in
The TS plug-in RPM can be downloaded from the GitHub repo, (https://github.com/F5Networks/f5-telemetry-streaming/releases). From the BIG-IP management GUI, I navigated to iApps –> Package Management LX and selected 'Import'. I selected 'Choose File', browsed to and selected the downloaded RPM. With the TS extension installed, I can now configure streaming via the newly created REST API endpoint. You may have noticed that I have previously installed the Application Services 3 (AS3) extension. AS3 is a powerful F5 extension that enables application-specific configuration of the BIG-IP via a declarative JSON REST interface.

Configure Logging Profiles and Streaming on BIG-IP
As I mentioned above, I could make use of the AS3 extension to configure my BIG-IP with the necessary logging resources. With AS3, I can post a single JSON declaration, (I used Postman to apply it) that configures event listeners for my various deployed modules. In my deployment, I'm currently using Local Traffic Manager and Advanced WAF. For my deployment, I went a little "old school" and configured the BIG-IP via the management GUI or the TMSH CLI. Regardless of the method you prefer, the installation instructions provide detailed guidance for each log configuration method.

LTM Logging
To enable LTM request logging, I ran the following two TMSH commands. Afterwards, I enabled request logging on the virtual server, (see below) to begin streaming data to Azure Log Analytics.
Create Listener Pool:
create ltm pool telemetry-local monitor tcp members replace-all-with { 10.8.3.10:6514 }

Create LTM Request Log Profile:
create ltm profile request-log telemetry request-log-pool telemetry-local request-log-protocol mds-tcp request-log-template event_source=\"request_logging\",hostname=\"$BIGIP_HOSTNAME\", client_ip=\"$CLIENT_IP\",server_ip=\"$SERVER_IP\", http_method=\"$HTTP_METHOD\", http_uri=\"$HTTP_URI\", virtual_name=\"$VIRTUAL_NAME\",event_timestamp=\"$DATE_HTTP\" request-logging enabled

ASM (Advanced WAF) Logging
To enable ASM event logging, I ran the following TMSH command. Afterwards, I simply needed to associate my security logging profiles with my application virtual servers, (see below).

Create Security Log Profile:
create security log profile telemetry application replace-all-with { telemetry { filter replace-all-with { request-type { values replace-all-with { all } } } logger-type remote remote-storage splunk servers replace-all-with { 255.255.255.254:6514 {} } } }

Streaming Data to Azure Log Analytics
With my BIG-IP configured for remote logging, I was now ready to configure my BIG-IPs to stream event data to my Azure Log Analytics workspace. This is accomplished by posting a JSON declaration to the TS API endpoint. The declaration includes settings specifying the workspace ID, access passphrase, polling interval, etc. (a hedged sketch of such a declaration appears at the end of this article). This information can be gathered from the Azure portal or via the Azure CLI. With the declaration applied to the BIG-IP, event/stat data now streams to my Azure workspace.

Utilize Azure Sentinel for Global Visibility and Analytics
With events and stats now streaming into my previously created OMS workspace from my BIG-IP(s), I can now start to visualize and work with the aggregated data. From the OMS workspace I can aggregate data from my BIG-IPs as well as other sources and perform complex queries. I can then take the results and use them to populate one or more custom dashboards, (see example below). Additionally, to get started quickly I can deploy a pre-defined dashboard directly out of the Azure OMS workspace. As of this post, F5 currently has a pre-canned dashboard for visualizing Advanced WAF and basic LTM event data, (see below).

Summary
Now I have a single pane of glass that can be pinned to my Azure portal for quick, near real-time visibility of my globally deployed application. Pretty cool, huh? Here's the overall order and some relevant links:
Setup Azure Sentinel and OMS Workspace
Install and Configure Telemetry Streaming onto the BIG-IP(s)
Configure logging on BIG-IP(s)

Additional Links
Video Walkthrough of Azure Sentinel Integration
F5 CloudDocs
Application Services 3 Extension
Telemetry Streaming User Guide
Azure Sentinel Overview
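The example declaration referenced above isn't reproduced here, so as a rough sketch only (assuming the Telemetry Streaming Azure_Log_Analytics consumer class, with placeholder credentials and workspace values that you would replace), the POST to the TS endpoint looks something like this. Verify the property names against the Telemetry Streaming schema for your TS version.

# Hypothetical values throughout: BIG-IP address, admin password, workspace ID, and shared key are placeholders.
curl -sku admin:MySecret -H "Content-Type: application/json" \
  -X POST https://<BIG-IP>/mgmt/shared/telemetry/declare \
  -d '{
    "class": "Telemetry",
    "My_System": { "class": "Telemetry_System", "systemPoller": { "interval": 60 } },
    "My_Listener": { "class": "Telemetry_Listener", "port": 6514 },
    "Azure_Consumer": {
      "class": "Telemetry_Consumer",
      "type": "Azure_Log_Analytics",
      "workspaceId": "<log-analytics-workspace-id>",
      "passphrase": { "cipherText": "<log-analytics-shared-key>" }
    }
  }'

Note that the listener port 6514 lines up with the pool member and remote-storage port used in the logging configuration above.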
Lightboard Lessons: BIG-IP Deployments in Azure Cloud

In this edition of Lightboard Lessons, I cover the deployment of a BIG-IP in Azure cloud. There are a few videos associated with this topic, and each video will address a specific use case. Topics will include the following: Azure Overview with BIG-IP, and BIG-IP High Availability Failover Methods.

Glossary:
ALB = Azure Load Balancer
ILB = Azure Internal Load Balancer
HA = High Availability
VE = Virtual Edition
NVA = Network Virtual Appliance
DSR = Direct Server Return
RT = Route Table
UDR = User Defined Route
WAF = Web Application Firewall

Azure Overview with BIG-IP
This overview covers the on-prem BIG-IP with a 3-NIC example setup. I then discuss the Azure cloud network and cloud components and how that relates to making a BIG-IP work in the Azure cloud. Things discussed include NICs, routes, network security groups, and IP configurations. The important thing to remember is that the cloud is not like on-prem regarding things like L2 and L3 networking components. This makes a difference as you assign NICs and IPs to each virtual machine in the cloud. Read more here on F5 CloudDocs for Azure BIG-IP Deployments.

BIG-IP HA and Failover Methods
The high availability section will review three different videos. These videos will discuss the failover methods for a BIG-IP cluster and how traffic can fail over to the second device upon a failover event. I also discuss IP addressing options for the BIG-IP VIP/listeners and "why".

Question: "Which F5 solution is right for me? Autoscaling or HA solutions?" Use these bullet points as guidance:
Auto Scale Solution: Ramp up/down time to consider as new instances come and go; dynamically adjust instance count based on CPU, memory, and throughput; no failover, all devices are Active/Active; self-healing upon device failure (thanks to cloud provider native features); instances are deployed with 1-NIC only.
HA Failover (non auto scale): No ramp up/down time since no additional devices are "auto" scaling; no dynamic scaling of the cluster, it will remain as two (2) instances; yes failover, UDRs and IP config will fail over to the other BIG-IP instance; no self-healing, manual maintenance is required by the user (similar to on-prem); instances can be deployed with multiple NICs if needed.

HA Using API for Failover
How do IP addresses and routes fail over to the other BIG-IP unit and still process traffic with no layer 2 (L2) networking? Easy, API calls to the cloud. When you deploy an HA pair of BIG-IP instances in the Azure cloud, the BIG-IP instances are onboarded with various cloud scripts. These scripts help facilitate the moving of cloud objects by detecting failover events, triggering API calls to Azure cloud, and thus moving cloud objects (ex. Azure IPs, Azure route next-hops). Traffic now processes successfully on the newly active BIG-IP instance.
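To make the "moving cloud objects" step more concrete: during a failover event, the onboarded scripts (or the Cloud Failover Extension) essentially issue Azure REST calls equivalent to CLI operations like the sketch below. The resource group, route table, route name, and self IP here are hypothetical placeholders, not values from the video.

# Hypothetical example: repoint a user-defined route's next hop at the newly active BIG-IP's self IP.
az network route-table route update \
  --resource-group myResourceGroup \
  --route-table-name myRouteTable \
  --name defaultRoute \
  --next-hop-ip-address 10.0.1.11

The same pattern applies to moving secondary private IPs and public IPs between NICs, which is why failover times depend on how quickly the Azure API processes these requests (see the API queue notes below).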
Benefits of Failover via API:
This is most similar to a traditional HA setup
No ALB or ILB required
VIPs, SNATs, Floating IPs, and managed routes (UDR) can fail over to the peer
SNAT pool can be used if port exhaustion is a concern
SNAT automap is optional (UDR routes needed if SNAT none)

Requirements for Failover via API:
Service Principal required with correct permissions
BIG-IP needs outbound internet access to Azure REST API on port 443
Multi-NIC required

Other things to know:
BIG-IP pair will be active/standby
Failover times are dependent on the Azure API queue (30-90 seconds, sometimes longer); I have experienced up to 20 minutes to fail over IPs (public IPs, private IPs); UDR route table entries typically take 5-10 seconds in my testing experience
BIG-IP listener: can be a secondary private IP associated with a NIC; can be an IP within a network prefix being routed to BIG-IP via UDR

Read about the F5 GitHub Azure Failover via API templates.

HA Using ALB for Failover
This type of BIG-IP deployment in Azure requires the use of an Azure load balancer. This ALB sits in a Tier 1 position and acts as a Layer 4 (L4) load balancer to the BIG-IP instances. The ALB performs health checks against the BIG-IP instances with configurable timers. This can result in a much faster failover time than the "HA via API" method, since the latter is dependent on the Azure API queue. In default mode, the ALB has Direct Server Return (DSR) disabled. This means the ALB will DNAT the destination IP requested by the client. This results in the BIG-IP VIP/listener IP listening on a wildcard 0.0.0.0/0 or the NIC subnet range like 10.1.1.0/24. Why? Because the ALB will send traffic to the BIG-IP instance on a private IP. This IP will be unique per BIG-IP instance and cannot "float" over without an API call. Remember, no L2...no ARP in the cloud. Rather than create two different listener IP objects for each app, you can simply use a network range listener or a wildcard. The video has a quick example of this using various ports like 0.0.0.0/0:443, 0.0.0.0/0:9443 (a tmsh sketch of a wildcard listener like this appears at the end of this article).

Benefits of Failover via LB:
3-NIC deployment supports sync-only Active/Active or sync-fail Active/Standby
Failover times depend on ALB health probe (ex. 5 sec)
Multiple traffic groups are supported

Requirements for Failover via LB:
ALB and/or ILB required
SNAT automap required

Other things to know:
BIG-IP pair will be active/standby or active/active depending on setup
ALB is for internet traffic
ILB is for internal traffic
ALB has DSR disabled by default
Failover times are much quicker than "HA via API"; times are dependent on Azure LB health probe timers; the Azure LB health probe can be tcp:80 for example (keep it simple)
Backend pool members for ALB are the BIG-IP secondary private IPs
BIG-IP listener: can be wildcard like 0.0.0.0/0; can be a network range associated with the NIC subnet like 10.1.1.0/24; can use different ports for different apps like 0.0.0.0/0:443, 0.0.0.0/0:9443

Read about the F5 GitHub Azure Failover via ALB templates.

HA Using ALB for Failover with DSR Enabled (Floating IP)
This is a quick follow-up video to the previous "HA via ALB". In this fourth video, I discuss the "HA via ALB" method again, but this time the ALB has DSR enabled. Whew! Lots of acronyms! When DSR is enabled, the ALB will send the traffic to the backend pool (aka BIG-IP instances) private IP without performing destination NAT (DNAT). This means...if the client requested 2.2.2.2, then the ALB will send a request to the backend pool (BIG-IP) on the same destination 2.2.2.2.
As a result, the BIG-IP VIP/listener will match the public IP on the ALB. This makes use of a floating IP.

Benefits of Failover via LB with ALB DSR Enabled:
Reduces configuration complexity between the ALB and BIG-IP
The IP you see on the ALB will be the same IP as the BIG-IP listener
Failover times depend on ALB health probe (ex. 5 sec)

Requirements for Failover via LB:
DSR enabled on the Azure ALB or ILB
ALB and/or ILB required
SNAT automap required
Dummy VIP "healthprobe" to check the status of BIG-IP on the individual self IP of each instance: create one "healthprobe" listener for each BIG-IP (total of 2); VIP listener IP #1 will be BIG-IP #1's self IP on the external network; VIP listener IP #2 will be BIG-IP #2's self IP on the external network; the VIP listener port can be 8888 for example (this should match on the ALB health probe side); attach an iRule to the listener for up/down status. Example iRule...
when HTTP_REQUEST { HTTP::respond 200 content "OK" }

Other things to know:
ALB is for internet traffic
ILB is for internal traffic
BIG-IP pair will operate as active/active
Failover times are much quicker than "HA via API"; times are dependent on Azure LB health probe timers
Backend pool members for ALB are the BIG-IP primary private IPs
BIG-IP listener: can be the same IP as the ALB public IP; can use different ports for different apps like 2.2.2.2:443, 2.2.2.2:8443

Read about the F5 GitHub Azure Failover via ALB templates. Also read about Azure LB and DSR.

Auto Scale BIG-IP with ALB
This type of BIG-IP deployment takes advantage of native cloud features by creating an auto scaling group of BIG-IP instances. Similar to the "HA via LB" method mentioned earlier, this deployment makes use of an ALB that sits in a Tier 1 position and acts as a Layer 4 (L4) load balancer to the BIG-IP instances. Azure auto scaling is accomplished by using Azure Virtual Machine Scale Sets that automatically increase or decrease the BIG-IP instance count.

Benefits of Auto Scale with LB:
Dynamically increase/decrease BIG-IP instance count based on CPU and throughput
If using F5 auto scale WAF templates, those come with pre-configured WAF policies
F5 devices will self-heal (the cloud VM scale set will replace damaged instances with new ones)

Requirements for Auto Scale with LB:
Service Principal required with correct permissions
BIG-IP needs outbound internet access to Azure REST API on port 443
ALB required
SNAT automap required

Other things to know:
BIG-IP cluster will be active/active
BIG-IP will be deployed with 1-NIC
BIG-IP onboarding time: the BIG-IP VE process takes about 3-8 minutes depending on instance type and modules; the Azure VM Scale Set is configured with a 10-minute window for scale up/down (ex. to prevent flapping); take these timers into account when looking at full readiness to accept traffic
BIG-IP listener: can be wildcard like 0.0.0.0/0; can use different ports for different apps like 0.0.0.0/0:443, 0.0.0.0/0:9443
Licensing: PAYG marketplace licensing can be used; BIG-IQ license manager can be used for BYOL licensing

Sorry, no video yet...a picture will have to do! Here's an example diagram of auto scale with ALB. Read about the F5 GitHub Azure Auto Scale via ALB templates.

Auto Scale BIG-IP with DNS
This type of BIG-IP deployment takes advantage of native cloud features by creating an auto scaling group of BIG-IP instances. Unlike "HA via LB" mentioned earlier or "Auto Scale with ALB", this deployment makes use of DNS to distribute traffic to the auto scaling BIG-IP instances.
This solution integrates with F5 BIG-IP DNS (formerly named GTM). And...since there is no ALB in front of the BIG-IP instances, you do not need SNAT automap on the BIG-IP listeners. In other words, if you have apps that need to see the real client IP and they are non-HTTP apps (can't pass an XFF header), then this is one method to consider.

Benefits of Auto Scale with DNS:
Dynamically increase/decrease BIG-IP instance count based on CPU and throughput
If using F5 auto scale WAF templates, those come with pre-configured WAF policies
F5 devices will self-heal (the cloud VM scale set will replace damaged instances with new ones)
ALB not required (cost savings)
SNAT automap not required

Requirements for Auto Scale with DNS:
Service Principal required with correct permissions
BIG-IP needs outbound internet access to Azure REST API on port 443
SNAT automap optional
BIG-IP DNS (aka GTM) needs connectivity to each BIG-IP auto scaled instance

Other things to know:
BIG-IP cluster will be active/active
BIG-IP will be deployed with 1-NIC
BIG-IP onboarding time: the BIG-IP VE process takes about 3-8 minutes depending on instance type and modules; the Azure VM Scale Set is configured with a 10-minute window for scale up/down (ex. to prevent flapping); take these timers into account when looking at full readiness to accept traffic
BIG-IP listener: can be wildcard like 0.0.0.0/0; can use different ports for different apps like 0.0.0.0/0:443, 0.0.0.0/0:9443
Licensing: PAYG marketplace licensing can be used; BIG-IQ license manager can be used for BYOL licensing

Sorry, no video yet...a picture will have to do! Here's an example diagram of auto scale with DNS. Read about the F5 GitHub Azure Auto Scale via DNS templates.

Summary
That's it for now! I hope you enjoyed the video series (here in full on YouTube) and quick explanation. Please leave a comment if this helped or if you have additional questions.

Additional Resources
F5 High Availability - Public Cloud Guidance
The Hitchhiker’s Guide to BIG-IP in Azure
The Hitchhiker’s Guide to BIG-IP in Azure – “Deployment Scenarios”
The Hitchhiker’s Guide to BIG-IP in Azure – “High Availability”
The Hitchhiker’s Guide to BIG-IP in Azure – “Life Cycle Management”
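As mentioned in the "HA Using ALB for Failover" section above, here is a hedged tmsh sketch of a wildcard listener that works behind an Azure LB with DSR disabled. The pool name, members, and port are hypothetical placeholders; adjust profiles and SNAT settings to match your design (SNAT automap is required for the ALB-based designs above).

# Hypothetical example: wildcard virtual server on port 443 behind an Azure LB (DSR disabled).
tmsh create ltm pool app_pool members add { 10.1.2.10:443 10.1.2.11:443 } monitor tcp
tmsh create ltm virtual app_443 destination 0.0.0.0:443 mask any ip-protocol tcp profiles add { tcp } pool app_pool source-address-translation { type automap }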
BIG-IP integration with Azure Gateway Load Balancer

Introduction
Microsoft just announced the Gateway Load Balancer. This is a new load balancer SKU to go along with the existing Basic and Standard types; however, it is aimed at sending traffic to transparent network devices for in-line traffic inspection. This article will overview the gateway load balancer and explain how F5 BIG-IP can integrate for transparent traffic inspection. Check out MS's blog too.

The Challenge
Many organizations face challenges with Azure network operations, like:
setting up High Availability (HA) can be a learning curve for admins of 3rd party appliances in Azure
using User Defined Routes (UDRs) to direct traffic to a network appliance can be difficult for some organizations, especially updating at scale
dealing with multiple VNets across multiple regions and routing traffic between them can add operational overhead
requiring Source NAT (SNAT) can be a deal-breaker for app owners, but architecting for flow symmetry without SNAT can require serious know-how
Finally, there's a very real skills gap between most Network/Security teams and Cloud teams. Often, a SecOps team member (eg, firewall admin) just wants to manage a firewall, without needing to know about Azure networking. Likewise, the CloudOps team wants to send traffic through the firewall without having to alert the SecOps team for every new application added. With Azure's Basic and Standard load balancer types, it is possible to achieve much of this and I've blogged about this before. But things get complicated quickly with advanced scenarios. The Gateway Load Balancer makes load balancing network appliances much easier.

No SNAT
It's important to remember that this is aimed at network devices that are inspecting but not proxying traffic. I.e., source and destination IP and port should be preserved when traffic traverses BIG-IP in this scenario, so SNAT is not supported. Think of transparent firewalling as a good use case here.

Enter Gateway Load Balancer
The gateway load balancer is aimed at load balancing firewalls, routers, or other network appliances. Some high-level benefits:
It's transparent. Source and destination IP addresses are unchanged when traffic traverses the gateway load balancer, via VXLAN tunnels to backend pool members. Your firewall admin can now set up policies without worrying about source or destination NATs.
Flow symmetry. The load balancer will ensure that response traffic hits the same device on the return path as it did on the request path. This means you can run multiple standalone or Active/Active appliances without using SNAT or UDRs to ensure flow symmetry.
UDRs are not required, because traffic reaches the gateway load balancer in one of two ways: either a regular, external load balancer is updated to send traffic via the gateway load balancer before forwarding to its backend pool, or, if the VM is not behind an Azure Load Balancer, the VM's NIC is updated so that traffic traverses the gateway load balancer before it receives traffic.
Traffic across VNets and regions is supported. You can target a gateway load balancer from a regular load balancer that is in a different VNet, or a different region. UDRs are not used, and VNet peering is not required for this traffic flow.
One last important point I'll make: this load balancer type is aimed at solving for North-South traffic, not East-West. I.e., your typical use case will be inbound traffic from the Internet to your app servers on your Azure VNet.
Put your app servers behind a public-facing load balancer, update that load balancer to reference the gateway load balancer, and you're good to go! No need to ask the firewall or F5 admin to configure a new application for you, or a new public IP on their device, etc.

Example scenario
In this scenario we have a web app that is internet-facing. We want to protect the web app from attacks by having the traffic traverse BIG-IP Advanced WAF (AWAF). The gateway load balancer is configured to allow all traffic inbound to the BIG-IP VEs, where security policies are applied. So we deploy a public-facing Azure Load Balancer, configure the load balancer with an attribute referencing the gateway load balancer, and watch as traffic from internet clients is routed via the Advanced WAF devices in transit to the app servers.

Demo
A basic demo of this is hosted on GitHub, if you would like to deploy this yourself or see more details. The BIG-IPs will require VXLAN tunnel configuration, VLAN groups, FDB entries, and ARP records. This is to integrate with the VXLAN requirements of the gateway load balancer. See the demo on GitHub here.
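As a rough illustration of the VXLAN piece mentioned above (the full configuration, including the VLAN group, FDB entries, and ARP records, lives in the GitHub demo), the tunnel setup on BIG-IP looks something like the sketch below. The object names, port, tunnel key, and IP addresses are hypothetical placeholders, not values taken from the demo.

# Hypothetical example: VXLAN profile and tunnel for traffic arriving from the Azure gateway load balancer.
tmsh create net tunnels vxlan vxlan-gwlb port 10800 flooding-type none
tmsh create net tunnels tunnel tunnel-ext profile vxlan-gwlb key 800 local-address 10.0.1.4 remote-address 10.0.1.100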
Create a BIG-IP HA Pair in Azure

Use an Azure ARM template to create a high availability (active-standby) pair of BIG-IP Virtual Edition instances in Microsoft Azure. When one BIG-IP VE goes standby, the other becomes active and the virtual server address is reassigned from one external NIC to the other. Today, let's walk through how to create a high availability pair of BIG-IP VE instances in Microsoft Azure. When we're done, we'll have an active-standby pair of BIG-IP VEs.

To start, go to the F5 Networks GitHub repository. Click F5-azure-arm-templates. Then go to Supported > failover. You have several options at this point. You can choose which templates to use based on your needs: failing over via API calls, via upstream load balancers, and NIC counts. Read each readme to determine your desired deployment strategy. If you already have your subnets and IP addresses defined you can use those, but to see how it works, let's deploy a new stack. Click new stack and scroll down to the Deploy button. If you have a trial or production license from F5, you can use the BYOL or BIG-IQ license server options, but in this case we're going to choose the PAYG option. Click Deploy and the template opens in the Azure portal. Now we simply fill out the fields. We'll create a new Resource Group and set a password for the BIG-IP VEs.

When you get to the questions:
The DNS label is used as part of the URL.
Instance Name is just the name of the VM in Azure.
Instance Type determines how much memory and CPU you'll have.
Image Name determines how many BIG-IP modules you can run (and you can choose the latest BIG-IP version).
Licensed Bandwidth determines the maximum throughput of the traffic going through BIG-IP.
Select the Number of External IP addresses (we'll start with one but can add more later). For instance, if you plan on running more than one application behind the BIG-IP, then you'll need the appropriate external IP addresses.
Vnet Address Prefix is for the address ranges of your subnets (we'll leave it at default).
The next 3 fields (Tenant ID, Client ID, Service Principal Secret) have to do with security. Rather than using your own credentials to modify resources in Azure, you can create an Active Directory application and assign permissions to it.
The last two fields also go together. Managed Routes let you route traffic from other external networks through the BIG-IPs. The Route Table Tag means that anytime this tag is found in the route table, routes that have this destination are updated so that the next hop is the IP address of the active BIG-IP VE. This is useful if you want all outbound traffic to go through the BIG-IP or if you want to send traffic from a bunch of different Vnets through the BIG-IP.
We'll leave the rest as default, but the Restricted Src Address is a good way to limit which IP addresses on my network are allowed to connect to the BIG-IP.

We'll agree to the terms and click Purchase. We're redirected to the Dashboard with the Deployment in Progress indicator. This takes about 15 minutes. Once finished, we'll go check all the resources in the Resource Group. Let's find out where the virtual server address is located, since this is associated with one of the external NICs, which have 'ext' in the name. Click the one you want. Then click IP Configuration under Settings. When you look at the IP Configuration for these NICs, whenever the NIC has two IP addresses, that's the NIC for the active BIG-IP. The Primary IP address is the BIG-IP Self IP and the Secondary IP is the virtual server address.
If we look at the other external NIC, we'll see that it only has one Self IP; that's the Primary, and it doesn't have the Secondary virtual server address. The virtual server address is assigned to the active BIG-IP. When we force the active BIG-IP to standby, the virtual server address is reassigned from one NIC to the other. To see this, we'll log into the BIG-IPs and on the active BIG-IP, we'll click Force to Standby, and the other BIG-IP becomes Active. When we go back to Azure, we can see that the virtual server IP is no longer associated with the external NIC. And if we wait a few minutes, we'll see that the address is now associated with the other NIC. So basically, BIG-IP HA in the Azure cloud works by reassigning the virtual server address from one BIG-IP to another. Thanks to our TechPubs group and check out the demo video. ps
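If you prefer the Azure CLI to the portal for checking which external NIC currently holds the secondary (virtual server) address, something like the following works; the resource group and NIC names are hypothetical placeholders for whatever your template created.

# Hypothetical example: list IP configurations on each external NIC; the NIC showing two private IPs belongs to the active BIG-IP.
az network nic ip-config list --resource-group myResourceGroup --nic-name bigip0-ext-nic --output table
az network nic ip-config list --resource-group myResourceGroup --nic-name bigip1-ext-nic --output table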
Practical considerations for using Azure internal load balancer and BIG-IP

Background
I recently had a scenario that required me to do some testing and I thought it would be a good opportunity to share. A user told me that he wants to put BIG-IP in Azure, but he has a few requirements:
He wants to use an Azure Load Balancer (ALB) to ensure HA for his BIG-IP pair. This makes failover times faster in Azure, compared to other options.
He does not want to use an external Load Balancer. He has internet-facing firewalls that will proxy inbound traffic to BIG-IP, so there is no need to expose BIG-IP to the internet. He needs internal BIG-IPs only to provide the app services he needs.
He does not want his traffic SNAT'd. He wants app servers to see the true client IP.
Ideally he does not want to automate an update of Azure routes at time of failover.
He would like to run his BIG-IP pair Active/Active, but could also run Active/Standby.

Quick side note: Why would we use ALBs when deploying BIG-IP? Isn't that like putting a load balancer in front of a load balancer? In this case we're using the ALB as a basic, Layer 3/4 traffic disaggregator to simply send traffic to multiple BIG-IP appliances and provide failover in the case of a VM-level failure. However, we want our traffic to traverse BIG-IPs when we need advanced app services: TLS handling, authentication, advanced health monitoring, sophisticated load balancing methods, etc.

Let's analyze this! Firstly, I put together a quick demo to easily show him how to deploy BIG-IP behind an ALB. My demo uses an external (internet-facing) ALB and an internal ALB. It is based on the official template provided by F5, but additionally deploys an app server and configures the BIG-IPs with a route and an AS3 declaration. If it wasn't for the internal-only part, we would have met his requirements with this setup.

Constraints
However, this user's case requires no internet-facing BIG-IP. And now we hit 2 problems:
Azure will not allow 2x internal LBs in the same Availability Set, or you'll get a conflict with the error NetworkInterfacesInAvailabilitySetUseMultipleLoadBalancersOfSameType. So the diagram above cannot be re-created with 2x internal Azure LBs.
When using only 1x internal LB, you "should only have one inbound rule if that rule loadbalances across all ports and protocols." Since we want at least 1 rule for all ports (for outbound traffic from the server), we cannot also have individual LB rules for our apps. Trying to do so will fail with the error message in quotes.

Alternatives
This leaves us with a few options. We can meet most of his requirements, but one that I have not been able to overcome is this: if your cluster of BIG-IPs is Active/Active, you will need to SourceNAT traffic to ensure the response traffic traverses the same BIG-IP. I'll discuss three options and how they meet the requirements.
Use a single Azure internal LB. At BIG-IP, SNAT the traffic to the web server, and send an XFF header in place of the true client IP. The default route can be the firewall, so server-initiated traffic like patch updates can still reach the internet. Can be Active/Active or Active/Standby, but you must SNAT if you do not want to be updating a UDR at time of failover.
Or, don't SNAT traffic, and the web server sees the true source IP. You will need a UDR (User Defined Route) to point the default route and any client subnets at the active BIG-IP. You will need to automatically update this UDR at time of failover (automated via F5's Cloud Failover Extension, or CFE). Can be Active/Standby only, as traffic will return following the default route.
Use a single Azure internal LB, but with multiple LB rules. In our example we'll have 2x Front End IPs configured on the Azure LB, and these will be in different internal subnets. Then we'll have 2x back end pools that consist of the external SelfIPs for one rule, and the internal SelfIPs for the other. Finally we'll have 2x LB rules. The rule for the "internal" side of the LB may listen on ALL ports (for outbound traffic) and the "external" side of the LB might listen on 80 and 443 only.

Advanced use cases (not pictured)
Single NIC. If you did not want to have a 3-NIC BIG-IP, it would be possible to achieve scenario C above with a single NIC or dual NIC VM: Use a 2-NIC BIG-IP (1 NIC for mgmt., 1 for dataplane). Put your F5 pair behind a single internal Azure LB with only 1 LB rule which has "HA" ports checked (all ports). We can then have the default route of the server subnet point to the Azure LB, which will be served by a VIP 0.0.0.0/0 on F5. Because this only allows you 1 LB rule on the Azure LB, enable DSR on the Azure LB rule. Designate an "alien subnet range" that doesn't exist in the VNET, but only on the BIG-IP. Create a route to this range, and point the next hop at the frontend IP on the only LB rule. Then have your FW send traffic to the actual VIP on F5 that's within the alien range (not the frontend IP), which will get forwarded to the Azure LB, and to F5. I have tested this but see no real advantage and prefer multi-NIC VMs.
Alien range. As mentioned above, an "alien IP range" - a subnet not within the VNET but configured only on the BIG-IP for VIPs - could exist. You can then create a UDR that points this "alien range" toward the FrontEnd IP on the Azure LB. An alien range may help you overcome the limit of internal IPs on Azure NICs, but with a limit of 256 private IPs per NIC, I have not seen a case requiring this. An alien range might also allow app teams to manage their own network without bothering network admins. I would not advise going "around" network teams however - cooperation is key. So I cannot find a great use for this in Azure, but I have written about how an alien range may help in AWS.
DSR. Direct Server Return in Azure LB means that the Azure LB will not perform Destination NAT on the traffic, and it will arrive at the backend pool member with the true destination IP address. This can be handy when you want to create 1 VIP per application in your BIG-IP cluster, and not worry about multiple VIPs per application, or VIPs with /30 masks, or VIPs that use Shared Address lists. However, given that we have multiple options to configure VIPs when Destination NAT is performed by the Azure LB (as it is, by default), I generally don't recommend DSR on Azure LB unless it's truly desired.

Personally, I'd recommend in this case to proceed with Option C (the single internal LB with multiple LB rules, described above). I'd also point out:
I believe operating in the cloud requires automation, so you should not shy away from automated updates to UDRs when required. Given these can be configured by tools like F5's Cloud Failover Extension (CFE), they are a part of mature cloud operations.
I personally try to architect to allow for changes later. Making SNAT a requirement may be a limitation for app owners later on, so I try not to end up in a scenario where we enforce SNAT.
I personally like to see outbound traffic traverse BIG-IP, and not just because it allows apps to see the true source IP. Outbound traffic can be analyzed, optimized, secured, etc - and a sophisticated device like BIG-IP is the place to do it.
Lastly, I recommend best practices we all should know:
Use template-based deployments for production so that you have Infrastructure as Code
Ideally keep your BIG-IP config off-box and deploy with AS3 templates
Get your app and dev teams configuring BIG-IP using declarative deployments to speed your deployment times

Conclusion
There are multiple ways to deploy BIG-IP when your requirements dictate an architecture that is not deployed by default. By keeping in mind my priorities of application services, operational support and high availability, you can decide on the best architecture for a given scenario. Thanks for reading, and let me know if you have any questions.

F5 High Availability - Public Cloud Guidance
This article will provide information about BIG-IP and NGINX high availability (HA) topics that should be considered when leveraging the public cloud. There are differences between on-prem and public cloud, such as cloud provider L2 networking. These differences lead to challenges in how you address HA, failover time, peer setup, scaling options, and application state.

Topics Covered:
Discuss and Define HA
Importance of Application Behavior and Traffic Sizing
HA Capabilities of BIG-IP and NGINX
Various HA Deployment Options (Active/Active, Active/Standby, auto scale)
Example Customer Scenario

What is High Availability?
High availability can mean many things to different people. Depending on the application and traffic requirements, HA requires dual data paths, redundant storage, redundant power, and compute. It means the ability to survive a failure, maintenance windows should be seamless to the user, and the user experience should never suffer...ever!
Reference: https://en.wikipedia.org/wiki/High_availability
So what should HA provide?
Synchronization of configuration data to peers (ex. config objects)
Synchronization of application session state (ex. persistence records)
Enable traffic to fail over to a peer
Locally, allow clusters of devices to act and appear as one unit
Globally, distribute traffic via DNS and routing

Importance of Application Behavior and Traffic Sizing
Let's look at a common use case... "gaming app, lots of persistent connections, client needs to hit same backend throughout entire game session"

Session State
The requirement for session state is common across applications using methods like HTTP cookies, F5 iRule persistence, JSessionID, IP affinity, or hash. The session type used by the application can help you decide what migration path is right for you. Is this an app more fitting for a lift-n-shift approach...Rehost? Can the app be redesigned to take advantage of all native IaaS and PaaS technologies...Refactor?
Reference: 6 R's of a Cloud Migration
Application session state allows the user to have a consistent and reliable experience
Auto scaling L7 proxies (BIG-IP or NGINX) keep track of session state
BIG-IP can only mirror session state to the next device in the cluster
NGINX can mirror state to all devices in the cluster (via zone sync)

Traffic Sizing
The cloud provider does a great job with things like scaling, but there are still cloud provider limits that affect sizing and machine instance types to keep in mind. BIG-IP and NGINX are considered network virtual appliances (NVA). They carry quota limits like other cloud objects.
Google GCP VPC Resource Limits
Azure VM Flow Limits
AWS Instance Types
Unfortunately, not all limits are documented. Key metrics for L7 proxies are typically SSL stats, throughput, connection type, and connection count. Collecting these application and traffic metrics can help identify the correct instance type. We have a list of the F5 supported BIG-IP VE platforms on F5 CloudDocs.

F5 Products and HA Capabilities
BIG-IP HA Capabilities
BIG-IP supports the following HA cluster configurations:
Active/Active - all devices processing traffic
Active/Standby - one device processes traffic, others wait in standby
Configuration sync to all devices in cluster
L3/L4 connection sharing to next device in cluster (ex. avoids re-login)
L5-L7 state sharing to next device in cluster (ex. IP persistence, SSL persistence, iRule UIE persistence)
Reference: BIG-IP High Availability Docs

NGINX HA Capabilities
NGINX supports the following HA cluster configurations:
Active/Active - all devices processing traffic
Active/Standby - one device processes traffic, others wait in standby
Configuration sync to all devices in cluster
Mirroring connections at L3/L4 not available
Mirroring session state to ALL devices in cluster using Zone Synchronization Module (NGINX Plus R15)
Reference: NGINX High Availability Docs

HA Methods for BIG-IP
In the following sections, I will illustrate 3 common deployment configurations for BIG-IP in public cloud:
HA for BIG-IP Design #1 - Active/Standby via API
HA for BIG-IP Design #2 - A/A or A/S via LB
HA for BIG-IP Design #3 - Regional Failover (multi region)

HA for BIG-IP Design #1 - Active/Standby via API (multi AZ)
This failover method uses API calls to communicate with the cloud provider and move objects (IP addresses, routes, etc) during failover events. The F5 Cloud Failover Extension (CFE) for BIG-IP is used to declaratively configure the HA settings.
Cloud provider load balancer is NOT required
Failover time can be SLOW!
Only one device actively used (other device sits idle)
Failover uses API calls to move cloud objects, times vary (see CFE Performance and Sizing)
Key Findings:
Google API failover times depend on number of forwarding rules
Azure API slow to disassociate/associate IPs to NICs (remapping)
Azure API fast when updating routes (UDR, user defined routes)
AWS reliable with API regarding IP moves and routes
Recommendations:
This design with multi AZ is preferred over single AZ
Recommend when a "traditional" HA cluster is required or for Lift-n-Shift...Rehost
For Azure (based on my testing)... recommend using Azure UDR versus IP failover when possible; look at the Failover via LB example instead for Azure; if the API method is required, look at DNS solutions to provide further redundancy

HA for BIG-IP Design #2 - A/A or A/S via LB (multi AZ)
Cloud LB health checks the BIG-IP for up/down status
Faster failover times (depends on cloud LB health timers)
Cloud LB allows A/A or A/S
Key difference:
Increased network/compute redundancy
Cloud load balancer required
Recommendations:
Use "failover via LB" if you require faster failover times
For Google (based on my testing)... recommend against "via LB" for IPSEC traffic (Google LB not supported); if load balancing IPSEC, then use "via API" or "via DNS" failover methods

HA for BIG-IP Design #3 - Regional Failover via DNS (multi AZ, multi region)
BIG-IP VE active/active in multiple regions
Traffic distributed to VEs by DNS/GSLB
DNS/GSLB intelligent health checks for the VEs
Key difference:
Cloud LB is not required
DNS logic required by clients
Orchestration required to manage configs across each BIG-IP
BIG-IP standalone devices (no DSC cluster limitations)
Recommendations:
Good for apps that handle DNS resolution well upon failover events
Recommend when cloud LB cannot handle a particular protocol
Recommend when customer is already using DNS to direct traffic
Recommend for applications that have been refactored to handle session state outside of BIG-IP
Recommend for customers with in-house skillset to orchestrate (Ansible, Terraform, etc)

HA Methods for NGINX
In the following sections, I will illustrate 2 common deployment configurations for NGINX in public cloud:
HA for NGINX Design #1 - Active/Standby via API
HA for NGINX Design #2 - Auto Scale Active/Active via LB

HA for NGINX Design #1 - Active/Standby via API (multi AZ)
NGINX Plus required
Cloud provider load balancer is NOT required
Only one device actively used (other device sits idle)
Only available in AWS currently
Recommendations:
Recommend when a "traditional" HA cluster is required or for Lift-n-Shift...Rehost
Reference: Active-Passive HA for NGINX Plus on AWS

HA for NGINX Design #2 - Auto Scale Active/Active via LB (multi AZ)
NGINX Plus required
Cloud LB health checks the NGINX instances
Faster failover times
Key difference:
Increased network/compute redundancy
Cloud load balancer required
Recommendations:
Recommended for apps fitting a migration type of Replatform or Refactor
Reference: Active-Active HA for NGINX Plus on AWS, Active-Active HA for NGINX Plus on Google

Pros & Cons: Public Cloud Scaling Options
Review this handy table to understand the high-level pros and cons of each deployment method.

Example Customer Scenario #1
As a means to make this topic a little more real, here is a common customer scenario that shows you the decisions that go into moving an application to the public cloud. Sometimes it's as easy as a lift-n-shift, other times you might need to do a little more work. In general, public cloud is not on-prem and things might need some tweaking. Hopefully this example will give you some pointers and guidance on your next app migration to the cloud.

Current Setup:
Gaming applications
F5 hardware BIG-IP VIPRIONs on-prem
Two data centers for HA redundancy
iRule heavy configuration (TLS encryption/decryption, payload inspections)
Session Persistence = iRule Universal Persistence (UIE), and other methods
Biggest app: 15K SSL TPS, 15Gbps throughput, 2 million concurrent connections, 300K HTTP req/sec (L7 with TLS)

Requirements for Successful Cloud Migration:
Support current traffic numbers
Support future target traffic growth
Must run in multiple geographic regions
Maintain session state
Must retain all iRules in use

Recommended Design for Cloud Phase #1:
Migration Type: Hybrid model, on-prem + cloud, and some Rehost
Platform: BIG-IP (retaining iRules means BIG-IP is required)
Licensing: High Performance BIG-IP; unlocks additional CPU cores past 8 (up to 24) for extra traffic and SSL processing
Instance type: check F5 supported BIG-IP VE platforms for accelerated networking (10Gb+)
HA method: Active/Standby and multi-region with DNS; iRule Universal persistence only mirrors to the next device, so keep the cluster size to 2; scale horizontally via additional HA clusters and DNS; clients pinned to a region via DNS (on-prem or public cloud); inside a region, the local proxy cluster shares state

This example comes up in customer conversations often. Based on customer requirements, in-house skillset, current operational model, and time frames, there is one option that is better than the rest. A second design phase lends itself to more of a Replatform or Refactor migration type. In that case, more options can be leveraged to take advantage of cloud-native features. For example, changing the application persistence type from iRule UIE to cookie would allow BIG-IP to avoid keeping track of state. Why? With cookies, the client keeps track of that session state. The client receives a cookie, passes the cookie to the L7 proxy on successive requests, the proxy checks the cookie value, and sends to the backend pool member. The requirement for the L7 proxy to share session state is now removed.

Example Customer Scenario #2
Here is another customer scenario.
This time the application is a full suite of multimedia content. In contrast to the first scenario, this one will illustrate the benefits of rearchitecting various components, allowing greater flexibility when leveraging the cloud. You still must factor in-house skill set, project time frames, and other important business (and application) requirements when deciding on the best migration type.

Current Setup:
Multimedia (Gaming, Movie, TV, Music) platform
BIG-IP VIPRIONs using vCMP on-prem
Two data centers for HA redundancy
iRule heavy (Security, Traffic Manipulation, Performance)
Biggest App: OAuth + Cassandra for token storage (entitlements)

Requirements for Successful Cloud Migration:
Support current traffic numbers
Elastic auto scale for seasonal growth (ex. holidays)
VPC peering with partners (must also bypass Web Application Firewall)
Must support current or similar traffic manipulation in the data plane
Compatibility with existing tooling used by the business

Recommended Design for Cloud Phase #1:
Migration Type: Repurchase, migrating BIG-IP to NGINX Plus
Platform: NGINX; iRules converted to JS or Lua
Licensing: NGINX Plus
Modules: GeoIP, Lua, JavaScript
HA method: N+1 autoscaling via native LB, active health checks

This is a great example of a Repurchase in which application characteristics can allow the various teams to explore alternative cloud migration approaches. In this scenario, it describes a phase one migration of converting BIG-IP devices to NGINX Plus devices. This example assumes the BIG-IP configurations can be somewhat easily converted to NGINX Plus, and it also assumes there is available skillset and project time allocated to properly rearchitect the application where needed.

Summary
OK! Brains are expanding...hopefully? We learned about high availability and what that means for applications and user experience. We touched on the importance of application behavior and traffic sizing. Then we explored the various F5 products, how they handle HA, and HA designs. These recommendations are based on my own lab testing and interactions with customers. Every scenario will carry its own requirements, and all options should be carefully considered when leveraging the public cloud. Finally, we looked at a customer scenario, discussed requirements, and proposed a design. Fun!

Resources
Read the following articles for more guidance specific to the various cloud providers.
Advanced Topologies and More on Highly Available Services
Lightboard Lessons - BIG-IP Deployments in Azure
Google and BIG-IP
Failing Faster in the Cloud
BIG-IP VE on Public Cloud
High-Availability Load Balancing with NGINX Plus on Google Cloud Platform
Using AWS Quick Starts to Deploy NGINX Plus
NGINX on Azure

Transparent Load Balancing in Azure
When deploying a BIG-IP load balancer in Azure you often need to apply both source and destination address translation to work with the Azure load balancers. The following is how to reduce the amount of address translation, to make it possible for a backend application to see the original client IP address and to make it easier to correlate logs to public IP addresses.

Deployment Scenario
The following example makes use of several Azure and BIG-IP features to reduce the amount of address translation that is required:
Use Azure "Floating IP" to preserve the destination IP address to the BIG-IP
Set SNAT (source network address translation) to None on the BIG-IP to preserve the source IP address
Make use of Azure "HA Ports" to send the return traffic from the backend application to the correct BIG-IP device

How the pieces fit together

Preserving Destination IP Address
When you use the Azure Load Balancer there is an option to configure a "Floating IP". Enabling this option causes the ALB to no longer rewrite the destination IP address to the private address of the BIG-IP.

Preserving the Client IP Address
The ALB will forward the connection with the original client IP address. The BIG-IP can be configured to also forward the client IP address by setting SNAT to None. This will cause the BIG-IP to still rewrite the destination IP to the pool member (otherwise how would the packet get there?), but preserve the source IP.

Getting the return traffic via ALB
Earlier we did not rewrite the client IP address; that creates a new problem: how do we respond to the original client with the correct source IP address (previously rewritten by the BIG-IP)? In Azure you can configure a route table (or UDR) to point to a private address on an internal Azure Load Balancer (no public IP address). The ALB can then be configured to forward the connection back to the BIG-IP. This is the Azure equivalent of pointing the default gateway of a backend server to the BIG-IP (a common "two-arm" deployment of BIG-IP on-prem).

Staying Active
A potential issue is that the Azure load balancer will send traffic to all the backends. To keep traffic symmetric (only flowing through a single device), we configure Azure health monitors to monitor a health probe on the BIG-IP that will only respond on the "active" device.

Unusual Traffic Flow
Taking a look at the final picture of traffic flow, we can see that we have created a meandering path of traffic. First stop, the external ALB; next stop, BIG-IP; on to the backend app; back to an internal ALB; return to the BIG-IP; and all the way back to the client. This "works" because the traffic always knows where to go next.

Should I be doing this?
This solution works well in situations where the backend application MUST see the original client IP address. Otherwise, it's a bit complicated and reduces you to an Active/Standby architecture instead of a more scalable Active/Active deployment. Alternate approaches would be to use an X-Forwarded-For header and/or Proxy Protocol. You can also do this architecture without using an Azure load balancer by using the Cloud Failover Extension to move Azure Public IPs and Route Tables via API calls. Using the Azure load balancer makes it simpler to do this type of topology. I hope this cleared things up for you.
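To show where the "Floating IP" and "HA Ports" options described above actually live, here is a rough Azure CLI sketch of a load-balancing rule on the internal ALB with both enabled (HA Ports is the protocol All / port 0 combination). The resource names are hypothetical placeholders, and the probe referenced here would be the BIG-IP health probe that only answers on the active device.

# Hypothetical example: internal LB rule with HA Ports (protocol All, port 0) and Floating IP enabled.
az network lb rule create \
  --resource-group myResourceGroup \
  --lb-name myInternalLB \
  --name haPortsRule \
  --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name bigipPool \
  --probe-name bigipActiveProbe \
  --floating-ip true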
Revoke and Reuse the F5 license

@here is there any way to revoke and reuse the F5 license? I know it's doable via BIG-IQ, but I'm wondering if it can be achieved via some other approach? We are looking at it from a destroy-and-rebuild type of Azure infrastructure standpoint. Thanks in advance for the help.

Exploring Kubernetes API using Wireshark part 1: Creating, Listing and Deleting Pods
Related Articles:
Exploring Kubernetes API using Wireshark part 2: Namespaces
Exploring Kubernetes API using Wireshark part 3: Python Client API

Quick Intro
This article answers the following question: What happens when we create, list and delete pods under the hood? More specifically, on the wire. I used these 3 commands (one each to create, list, and delete a pod). I'll show you on Wireshark the communication between the kubectl client and the master node (API) for each of the above commands. I used a proxy so we don't have to worry about the TLS layer and can focus on HTTP only.

Creating NGINX pod
pcap: creating_pod.pcap (use the http filter on Wireshark)
Here's our YAML file: Here's how we create this pod: Here's what we see on Wireshark: Behind the scenes, the kubectl command sent an HTTP POST with our YAML file converted to JSON, but notice the same thing was sent (kind, apiVersion, metadata, spec): You can even expand it if you want to, but I didn't to keep it short. Then, the Kubernetes master (API) responds with HTTP 201 Created to confirm our pod has been created: Notice that the master node replies with similar data plus an additional status field, because after the pod is created it's supposed to have a status too.

Listing Pods
pcap: listing_pods.pcap (use the http filter on Wireshark)
When we list pods, kubectl just sends an HTTP GET request instead of a POST because we don't need to submit any data apart from headers: This is the full GET request: And here's the HTTP 200 OK with a JSON file that contains all information about all pods from the default namespace: I just wanted to emphasise that when you list pods the resource type that comes back is PodList, and when we created our pod it was just Pod. Remember? The other thing I'd like to point out is that all of your pods' information should be listed under items. All kubectl does is display some of the API's info in a human-readable way.

Deleting NGINX pod
pcap: deleting_pod.pcap (use the http filter on Wireshark)
Behind the scenes, we're just sending an HTTP DELETE to the Kubernetes master: Notice that the pod's name is also included in the URI: /api/v1/namespaces/default/pods/nginx ← this is the pod's name. HTTP DELETE, just like HTTP GET, is pretty straightforward: Our master node replies with HTTP 200 OK as well as some JSON with all the info about the pod, including about its termination: It's also good to emphasise here that when our pod is deleted, the master node returns a JSON file with all information available about the pod. I highlighted some interesting info. For example, the resource type is now just Pod (not PodList as when we're listing our pods).
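If you want to poke at the same API calls yourself without dealing with the TLS layer, one way (an assumption on my part; the original post doesn't show its proxy setup) is to run kubectl proxy locally and point curl at the same API paths shown above:

# Start a local plaintext proxy to the Kubernetes API (listens on 127.0.0.1:8001 by default).
kubectl proxy --port=8001 &

# List pods in the default namespace (returns a PodList, as discussed above).
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods

# Delete the nginx pod (returns the Pod object, including its termination details).
curl -X DELETE http://127.0.0.1:8001/api/v1/namespaces/default/pods/nginx

You can capture this plaintext traffic on the loopback interface in Wireshark and apply the same http filter used for the pcaps above.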