Azure
Get Started with BIG-IP and BIG-IQ Virtual Edition (VE) Trial
Welcome to the BIG-IP and BIG-IQ trials page! This will be your jumping-off point for setting up a trial version of BIG-IP VE or BIG-IQ VE in your environment. As you can see below, everything you’ll need is included and organized by operating environment — namely by public/private cloud or virtualization platform. To get started with your trial, use the software and documentation found in the links below. Upon requesting a trial, you should have received an email containing your license keys. Please bear in mind that it can take up to 30 minutes to receive your licenses. Don't have a trial license? Get one here. Or if you're ready to buy, contact us. Looking for other resources like tools or the compatibility matrix? See Other Resources below.

BIG-IP VE and BIG-IQ VE
When you sign up for the BIG-IP and BIG-IQ VE trial, you receive a set of license keys. Each key corresponds to a component listed below:
- BIG-IQ Centralized Management (CM) — Manages the lifecycle of BIG-IP instances, including analytics, licenses, configurations, and auto-scaling policies
- BIG-IQ Data Collection Device (DCD) — Aggregates logs and analytics of traffic and BIG-IP instances to be used by BIG-IQ
- BIG-IP Local Traffic Manager (LTM), Access (APM), Advanced WAF (ASM), Network Firewall (AFM), DNS — Keep your apps up and running with BIG-IP application delivery controllers. BIG-IP Local Traffic Manager (LTM) and BIG-IP DNS handle your application traffic and secure your infrastructure. You’ll get built-in security, traffic management, and performance application services, whether your applications live in a private data center or in the cloud.

Select the hypervisor or environment where you want to run VE:

AWS
- CFT for single NIC deployment
- CFT for three NIC deployment
- BIG-IP VE images in the AWS Marketplace
- BIG-IQ VE images in the AWS Marketplace
- BIG-IP AWS documentation
- BIG-IP video: Single NIC deploy in AWS
- BIG-IQ AWS documentation
- Setting up and Configuring a BIG-IQ Centralized Management Solution
- BIG-IQ Centralized Management Trial Quick Start

Azure
- Azure Resource Manager (ARM) template for single NIC deployment
- Azure ARM template for three NIC deployment
- BIG-IP VE images in the Azure Marketplace
- BIG-IQ VE images in the Azure Marketplace
- BIG-IQ Centralized Management Trial Quick Start
- BIG-IP VE Azure documentation
- Video: BIG-IP VE Single NIC deploy in Azure
- BIG-IQ VE Azure documentation
- Setting up and Configuring a BIG-IQ Centralized Management Solution

VMware/KVM/OpenStack
- Download BIG-IP VE image
- Download BIG-IQ VE image
- BIG-IP VE Setup
- BIG-IQ VE Setup
- Setting up and Configuring a BIG-IQ Centralized Management Solution

Google Cloud
- Google Deployment Manager template for single NIC deployment
- Google Deployment Manager template for three NIC deployment
- BIG-IP VE images in Google Cloud
- Google Cloud Platform documentation
- Video: Single NIC deploy in Google

Other Resources
- AskF5
- GitHub community (f5devcentral, f5networks)
- Tools to automate your deployment: BIG-IQ Onboarding Tool, F5 Declarative Onboarding, F5 Application Services 3 Extension
- Other Tools: F5 SDK (Python), F5 Application Services Templates (FAST), F5 Cloud Failover, F5 Telemetry Streaming
- Find out which hypervisor versions are supported with each release of VE: BIG-IP Compatibility Matrix, BIG-IQ Compatibility Matrix

Do you have any comments or questions? Ask here.

Lightboard Lessons: BIG-IP Deployments in Azure Cloud
In this edition of Lightboard Lessons, I cover the deployment of a BIG-IP in Azure cloud. There are a few videos associated with this topic, and each video addresses a specific use case. Topics include the following:
- Azure Overview with BIG-IP
- BIG-IP High Availability Failover Methods

Glossary:
- ALB = Azure Load Balancer
- ILB = Azure Internal Load Balancer
- HA = High Availability
- VE = Virtual Edition
- NVA = Network Virtual Appliance
- DSR = Direct Server Return
- RT = Route Table
- UDR = User Defined Route
- WAF = Web Application Firewall

Azure Overview with BIG-IP
This overview covers the on-prem BIG-IP with a 3-NIC example setup. I then discuss the Azure cloud network and cloud components and how they relate to making a BIG-IP work in the Azure cloud. Things discussed include NICs, routes, network security groups, and IP configurations. The important thing to remember is that the cloud is not like on-prem regarding things like L2 and L3 networking components. This makes a difference as you assign NICs and IPs to each virtual machine in the cloud. Read more here on F5 CloudDocs for Azure BIG-IP Deployments.

BIG-IP HA and Failover Methods
The high availability section reviews three different videos. These videos discuss the failover methods for a BIG-IP cluster and how traffic can fail over to the second device upon a failover event. I also discuss IP addressing options for the BIG-IP VIP/listeners and "why".

Question: “Which F5 solution is right for me? Autoscaling or HA solutions?” Use these bullet points as guidance:

Auto Scale Solution
- Ramp up/down time to consider as new instances come and go
- Dynamically adjust instance count based on CPU, memory, and throughput
- No failover; all devices are Active/Active
- Self-healing upon device failure (thanks to cloud provider native features)
- Instances are deployed with 1 NIC only

HA Failover (non auto scale)
- No ramp up/down time since no additional devices are "auto" scaling
- No dynamic scaling of the cluster; it will remain as two (2) instances
- Yes failover; UDRs and IP config will fail over to the other BIG-IP instance
- No self-healing; manual maintenance is required by the user (similar to on-prem)
- Instances can be deployed with multiple NICs if needed

HA Using API for Failover
How do IP addresses and routes fail over to the other BIG-IP unit and still process traffic with no Layer 2 (L2) networking? Easy: API calls to the cloud. When you deploy an HA pair of BIG-IP instances in the Azure cloud, the BIG-IP instances are onboarded with various cloud scripts. These scripts help facilitate the moving of cloud objects by detecting failover events, triggering API calls to the Azure cloud, and thus moving cloud objects (ex. Azure IPs, Azure route next-hops). Traffic then processes successfully on the newly active BIG-IP instance (see the sketch below).
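To make this concrete, below is a minimal, hedged sketch (using the Azure CLI rather than the REST calls the F5 Cloud Failover Extension actually issues) of the two kinds of cloud object moves described above: re-pointing a UDR next hop at the newly active BIG-IP and re-homing a secondary (VIP) IP configuration onto the active unit's NIC. The resource group, route table, NIC names, and IPs are all hypothetical.

#Hypothetical names/IPs for illustration only
RG=myResourceGroup
ACTIVE_SELFIP=10.0.1.11   #external self IP of the newly active BIG-IP

#1) Re-point the UDR so the route's next hop is the active BIG-IP
az network route-table route update \
  --resource-group $RG \
  --route-table-name app-subnet-rt \
  --name default-route \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address $ACTIVE_SELFIP

#2) Move the secondary (VIP) IP configuration to the active unit's external NIC
az network nic ip-config delete \
  --resource-group $RG --nic-name bigip1-ext-nic --name vip-ipconfig
az network nic ip-config create \
  --resource-group $RG --nic-name bigip2-ext-nic --name vip-ipconfig \
  --private-ip-address 10.0.1.100

In practice the Cloud Failover Extension performs these moves for you when it detects a failover event; the sketch is only meant to show why failover time is tied to how quickly Azure processes route and IP updates.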
Benefits of Failover via API:
- This is most similar to a traditional HA setup
- No ALB or ILB required
- VIPs, SNATs, floating IPs, and managed routes (UDR) can fail over to the peer
- SNAT pool can be used if port exhaustion is a concern
- SNAT automap is optional (UDR routes needed if SNAT is none)

Requirements for Failover via API:
- Service Principal required with correct permissions
- BIG-IP needs outbound internet access to the Azure REST API on port 443
- Multi-NIC required

Other things to know:
- BIG-IP pair will be active/standby
- Failover times are dependent on the Azure API queue (30-90 seconds, sometimes longer)
  - I have experienced up to 20 minutes to fail over IPs (public IPs, private IPs)
  - UDR route table entries typically take 5-10 seconds in my testing experience
- BIG-IP listener can be
  - a secondary private IP associated with the NIC
  - an IP within a network prefix being routed to the BIG-IP via UDR

Read about the F5 GitHub Azure Failover via API templates.

HA Using ALB for Failover
This type of BIG-IP deployment in Azure requires the use of an Azure load balancer. The ALB sits in a Tier 1 position and acts as a Layer 4 (L4) load balancer to the BIG-IP instances. The ALB performs health checks against the BIG-IP instances with configurable timers. This can result in much faster failover than the "HA via API" method, which is dependent on the Azure API queue. In default mode, the ALB has Direct Server Return (DSR) disabled. This means the ALB will DNAT the destination IP requested by the client. This results in the BIG-IP VIP/listener IP listening on a wildcard 0.0.0.0/0 or the NIC subnet range like 10.1.1.0/24. Why? Because the ALB will send traffic to the BIG-IP instance on a private IP. This IP will be unique per BIG-IP instance and cannot "float" over without an API call. Remember, no L2...no ARP in the cloud. Rather than create two different listener IP objects for each app, you can simply use a network range listener or a wildcard. The video has a quick example of this using various ports like 0.0.0.0/0:443 and 0.0.0.0/0:9443.

Benefits of Failover via LB:
- 3-NIC deployment supports sync-only Active/Active or sync-fail Active/Standby
- Failover times depend on the ALB health probe (ex. 5 sec)
- Multiple traffic groups are supported

Requirements for Failover via LB:
- ALB and/or ILB required
- SNAT automap required

Other things to know:
- BIG-IP pair will be active/standby or active/active depending on setup
- ALB is for internet traffic; ILB is for internal traffic
- ALB has DSR disabled by default
- Failover times are much quicker than "HA via API"
  - Times are dependent on Azure LB health probe timers
  - Azure LB health probe can be tcp:80 for example (keep it simple)
- Backend pool members for the ALB are the BIG-IP secondary private IPs
- BIG-IP listener can be
  - a wildcard like 0.0.0.0/0
  - a network range associated with the NIC subnet like 10.1.1.0/24
  - different ports for different apps like 0.0.0.0/0:443, 0.0.0.0/0:9443

Read about the F5 GitHub Azure Failover via ALB templates.

HA Using ALB for Failover with DSR Enabled (Floating IP)
This is a quick follow-up video to the previous "HA via ALB". In this fourth video, I discuss the "HA via ALB" method again, but this time the ALB has DSR enabled. Whew! Lots of acronyms! When DSR is enabled, the ALB sends the traffic to the backend pool (aka BIG-IP instances) private IP without performing destination NAT (DNAT). This means that if the client requested 2.2.2.2, the ALB will send the request to the backend pool (BIG-IP) with the same destination 2.2.2.2.
As a result, the BIG-IP VIP/listener will match the public IP on the ALB. This makes use of a floating IP.

Benefits of Failover via LB with ALB DSR Enabled:
- Reduces configuration complexity between the ALB and BIG-IP
- The IP you see on the ALB will be the same IP as the BIG-IP listener
- Failover times depend on the ALB health probe (ex. 5 sec)

Requirements for Failover via LB:
- DSR enabled on the Azure ALB or ILB
- ALB and/or ILB required
- SNAT automap required
- Dummy VIP "healthprobe" to check the status of the BIG-IP on the individual self IP of each instance
  - Create one "healthprobe" listener for each BIG-IP (total of 2)
  - VIP listener IP #1 will be BIG-IP #1's self IP on the external network
  - VIP listener IP #2 will be BIG-IP #2's self IP on the external network
  - VIP listener port can be 8888 for example (this should match on the ALB health probe side)
  - Attach an iRule to the listener for up/down status

Example iRule...
when HTTP_REQUEST { HTTP::respond 200 content "OK" }

Other things to know:
- ALB is for internet traffic; ILB is for internal traffic
- BIG-IP pair will operate as active/active
- Failover times are much quicker than "HA via API"
  - Times are dependent on Azure LB health probe timers
- Backend pool members for the ALB are the BIG-IP primary private IPs
- BIG-IP listener can be
  - the same IP as the ALB public IP
  - different ports for different apps like 2.2.2.2:443, 2.2.2.2:8443

Read about the F5 GitHub Azure Failover via ALB templates. Also read about Azure LB and DSR.

Auto Scale BIG-IP with ALB
This type of BIG-IP deployment takes advantage of native cloud features by creating an auto scaling group of BIG-IP instances. Similar to the "HA via LB" method mentioned earlier, this deployment makes use of an ALB that sits in a Tier 1 position and acts as a Layer 4 (L4) load balancer to the BIG-IP instances. Azure auto scaling is accomplished by using Azure Virtual Machine Scale Sets that automatically increase or decrease the BIG-IP instance count.

Benefits of Auto Scale with LB:
- Dynamically increase/decrease BIG-IP instance count based on CPU and throughput
- If using F5 auto scale WAF templates, those come with pre-configured WAF policies
- F5 devices will self-heal (the cloud VM scale set will replace damaged instances with new ones)

Requirements for Auto Scale with LB:
- Service Principal required with correct permissions
- BIG-IP needs outbound internet access to the Azure REST API on port 443
- ALB required
- SNAT automap required

Other things to know:
- BIG-IP cluster will be active/active
- BIG-IP will be deployed with 1 NIC
- BIG-IP onboarding time
  - The BIG-IP VE process takes about 3-8 minutes depending on instance type and modules
  - Azure VM Scale Set is configured with a 10-minute window for scale up/down (ex. to prevent flapping)
  - Take these timers into account when looking at full readiness to accept traffic
- BIG-IP listener can be
  - a wildcard like 0.0.0.0/0
  - different ports for different apps like 0.0.0.0/0:443, 0.0.0.0/0:9443
- Licensing
  - PAYG marketplace licensing can be used
  - BIG-IQ license manager can be used for BYOL licensing

Sorry, no video yet...a picture will have to do! Here's an example diagram of auto scale with ALB. Read about the F5 GitHub Azure Auto Scale via ALB templates.

Auto Scale BIG-IP with DNS
This type of BIG-IP deployment takes advantage of native cloud features by creating an auto scaling group of BIG-IP instances. Unlike the "HA via LB" or "Auto Scale with ALB" methods mentioned earlier, this deployment uses DNS as the method to distribute traffic to the auto scaling BIG-IP instances.
This solution integrates with F5 BIG-IP DNS (formerly named GTM). And...since there is no ALB in front of the BIG-IP instances, you do not need SNAT automap on the BIG-IP listeners. In other words, if you have apps that need to see the real client IP and they are non-HTTP apps (can't pass an XFF header), then this is one method to consider.

Benefits of Auto Scale with DNS:
- Dynamically increase/decrease BIG-IP instance count based on CPU and throughput
- If using F5 auto scale WAF templates, those come with pre-configured WAF policies
- F5 devices will self-heal (the cloud VM scale set will replace damaged instances with new ones)
- ALB not required (cost savings)
- SNAT automap not required

Requirements for Auto Scale with DNS:
- Service Principal required with correct permissions
- BIG-IP needs outbound internet access to the Azure REST API on port 443
- SNAT automap optional
- BIG-IP DNS (aka GTM) needs connectivity to each BIG-IP auto scaled instance

Other things to know:
- BIG-IP cluster will be active/active
- BIG-IP will be deployed with 1 NIC
- BIG-IP onboarding time
  - The BIG-IP VE process takes about 3-8 minutes depending on instance type and modules
  - Azure VM Scale Set is configured with a 10-minute window for scale up/down (ex. to prevent flapping)
  - Take these timers into account when looking at full readiness to accept traffic
- BIG-IP listener can be
  - a wildcard like 0.0.0.0/0
  - different ports for different apps like 0.0.0.0/0:443, 0.0.0.0/0:9443
- Licensing
  - PAYG marketplace licensing can be used
  - BIG-IQ license manager can be used for BYOL licensing

Sorry, no video yet...a picture will have to do! Here's an example diagram of auto scale with DNS. Read about the F5 GitHub Azure Auto Scale via DNS templates.

Summary
That's it for now! I hope you enjoyed the video series (here in full on YouTube) and the quick explanation. Please leave a comment if this helped or if you have additional questions.

Additional Resources
- F5 High Availability - Public Cloud Guidance
- The Hitchhiker’s Guide to BIG-IP in Azure
- The Hitchhiker’s Guide to BIG-IP in Azure – “Deployment Scenarios”
- The Hitchhiker’s Guide to BIG-IP in Azure – “High Availability”
- The Hitchhiker’s Guide to BIG-IP in Azure – “Life Cycle Management”

F5 High Availability - Public Cloud Guidance
This article provides information about BIG-IP and NGINX high availability (HA) topics that should be considered when leveraging the public cloud. There are differences between on-prem and public cloud, such as cloud provider L2 networking. These differences lead to challenges in how you address HA, failover time, peer setup, scaling options, and application state.

Topics Covered:
- Discuss and Define HA
- Importance of Application Behavior and Traffic Sizing
- HA Capabilities of BIG-IP and NGINX
- Various HA Deployment Options (Active/Active, Active/Standby, auto scale)
- Example Customer Scenario

What is High Availability?
High availability can mean many things to different people. Depending on the application and traffic requirements, HA requires dual data paths, redundant storage, and redundant power and compute. It means the ability to survive a failure, maintenance windows that are seamless to the user, and a user experience that should never suffer...ever! Reference: https://en.wikipedia.org/wiki/High_availability

So what should HA provide?
- Synchronization of configuration data to peers (ex. config objects)
- Synchronization of application session state (ex. persistence records)
- Enable traffic to fail over to a peer
- Locally, allow clusters of devices to act and appear as one unit
- Globally, disburse traffic via DNS and routing

Importance of Application Behavior and Traffic Sizing
Let's look at a common use case... "gaming app, lots of persistent connections, client needs to hit the same backend throughout the entire game session"

Session State
The requirement for session state is common across applications, using methods like HTTP cookies, F5 iRule persistence, JSessionID, IP affinity, or hash. The session type used by the application can help you decide which migration path is right for you. Is this an app more fitting for a lift-n-shift approach...Rehost? Can the app be redesigned to take advantage of all native IaaS and PaaS technologies...Refactor? Reference: 6 R's of a Cloud Migration
- Application session state allows the user to have a consistent and reliable experience
- Auto scaling L7 proxies (BIG-IP or NGINX) keep track of session state
- BIG-IP can only mirror session state to the next device in the cluster
- NGINX can mirror state to all devices in the cluster (via zone sync)

Traffic Sizing
The cloud provider does a great job with things like scaling, but there are still cloud provider limits that affect sizing and machine instance types to keep in mind. BIG-IP and NGINX are considered network virtual appliances (NVA). They carry quota limits like other cloud objects.
- Google GCP VPC Resource Limits
- Azure VM Flow Limits
- AWS Instance Types

Unfortunately, not all limits are documented. Key metrics for L7 proxies are typically SSL stats, throughput, connection type, and connection count. Collecting these application and traffic metrics can help identify the correct instance type. We have a list of the F5 supported BIG-IP VE platforms on F5 CloudDocs.

F5 Products and HA Capabilities

BIG-IP HA Capabilities
BIG-IP supports the following HA cluster configurations:
- Active/Active - all devices processing traffic
- Active/Standby - one device processes traffic, others wait in standby
- Configuration sync to all devices in the cluster
- L3/L4 connection sharing to the next device in the cluster (ex. avoids re-login)
- L5-L7 state sharing to the next device in the cluster (ex.
IP persistence, SSL persistence, iRule UIE persistence)
Reference: BIG-IP High Availability Docs

NGINX HA Capabilities
NGINX supports the following HA cluster configurations:
- Active/Active - all devices processing traffic
- Active/Standby - one device processes traffic, others wait in standby
- Configuration sync to all devices in the cluster
- Mirroring connections at L3/L4 is not available
- Mirroring session state to ALL devices in the cluster using the Zone Synchronization Module (NGINX Plus R15)
Reference: NGINX High Availability Docs

HA Methods for BIG-IP
In the following sections, I will illustrate three common deployment configurations for BIG-IP in public cloud.
- HA for BIG-IP Design #1 - Active/Standby via API
- HA for BIG-IP Design #2 - A/A or A/S via LB
- HA for BIG-IP Design #3 - Regional Failover (multi region)

HA for BIG-IP Design #1 - Active/Standby via API (multi AZ)
This failover method uses API calls to communicate with the cloud provider and move objects (IP addresses, routes, etc.) during failover events. The F5 Cloud Failover Extension (CFE) for BIG-IP is used to declaratively configure the HA settings.
- Cloud provider load balancer is NOT required
- Failover time can be SLOW!
- Only one device is actively used (the other device sits idle)
- Failover uses API calls to move cloud objects; times vary (see CFE Performance and Sizing)

Key Findings:
- Google API failover times depend on the number of forwarding rules
- Azure API is slow to disassociate/associate IPs to NICs (remapping)
- Azure API is fast when updating routes (UDR, user defined routes)
- AWS is reliable with API regarding IP moves and routes

Recommendations:
- This design with multi AZ is preferred over single AZ
- Recommended when a "traditional" HA cluster is required or for Lift-n-Shift...Rehost
- For Azure (based on my testing)...
  - Recommend using Azure UDR versus IP failover when possible
  - Look at the Failover via LB example instead for Azure
  - If the API method is required, look at DNS solutions to provide further redundancy

HA for BIG-IP Design #2 - A/A or A/S via LB (multi AZ)
- Cloud LB health checks the BIG-IP for up/down status
- Faster failover times (depends on cloud LB health timers)
- Cloud LB allows A/A or A/S

Key difference:
- Increased network/compute redundancy
- Cloud load balancer required

Recommendations:
- Use "failover via LB" if you require faster failover times
- For Google (based on my testing)...
  - Recommend against "via LB" for IPSEC traffic (Google LB not supported)
  - If load balancing IPSEC, then use the "via API" or "via DNS" failover methods

HA for BIG-IP Design #3 - Regional Failover via DNS (multi AZ, multi region)
- BIG-IP VE active/active in multiple regions
- Traffic disbursed to VEs by DNS/GSLB
- DNS/GSLB intelligent health checks for the VEs

Key difference:
- Cloud LB is not required
- DNS logic required by clients
- Orchestration required to manage configs across each BIG-IP
- BIG-IP standalone devices (no DSC cluster limitations)

Recommendations:
- Good for apps that handle DNS resolution well upon failover events
- Recommended when the cloud LB cannot handle a particular protocol
- Recommended when the customer is already using DNS to direct traffic
- Recommended for applications that have been refactored to handle session state outside of BIG-IP
- Recommended for customers with the in-house skillset to orchestrate (Ansible, Terraform, etc.)

HA Methods for NGINX
In the following sections, I will illustrate two common deployment configurations for NGINX in public cloud.
- HA for NGINX Design #1 - Active/Standby via API
- HA for NGINX Design #2 - Auto Scale Active/Active via LB

HA for NGINX Design #1 - Active/Standby via API (multi AZ)
- NGINX Plus required
- Cloud provider load balancer is NOT required
- Only one device is actively used (the other device sits idle)
- Only available in AWS currently

Recommendations:
- Recommended when a "traditional" HA cluster is required or for Lift-n-Shift...Rehost
Reference: Active-Passive HA for NGINX Plus on AWS

HA for NGINX Design #2 - Auto Scale Active/Active via LB (multi AZ)
- NGINX Plus required
- Cloud LB health checks the NGINX instances
- Faster failover times

Key difference:
- Increased network/compute redundancy
- Cloud load balancer required

Recommendations:
- Recommended for apps fitting a migration type of Replatform or Refactor
Reference: Active-Active HA for NGINX Plus on AWS, Active-Active HA for NGINX Plus on Google

Pros & Cons: Public Cloud Scaling Options
Review this handy table to understand the high-level pros and cons of each deployment method.

Example Customer Scenario #1
As a means to make this topic a little more real, here is a common customer scenario that shows you the decisions that go into moving an application to the public cloud. Sometimes it's as easy as a lift-n-shift; other times you might need to do a little more work. In general, public cloud is not on-prem, and things might need some tweaking. Hopefully this example will give you some pointers and guidance on your next app migration to the cloud.

Current Setup:
- Gaming applications
- F5 hardware BIG-IP VIPRIONs on-prem
- Two data centers for HA redundancy
- iRule heavy configuration (TLS encryption/decryption, payload inspections)
- Session Persistence = iRule Universal Persistence (UIE), and other methods
- Biggest app: 15K SSL TPS, 15Gbps throughput, 2 million concurrent connections, 300K HTTP req/sec (L7 with TLS)

Requirements for Successful Cloud Migration:
- Support current traffic numbers
- Support future target traffic growth
- Must run in multiple geographic regions
- Maintain session state
- Must retain all iRules in use

Recommended Design for Cloud Phase #1:
- Migration Type: Hybrid model, on-prem + cloud, and some Rehost
- Platform: BIG-IP (retaining iRules means BIG-IP is required)
- Licensing: High Performance BIG-IP, which unlocks additional CPU cores past 8 (up to 24) for extra traffic and SSL processing
- Instance type: check the F5 supported BIG-IP VE platforms for accelerated networking (10Gb+)
- HA method: Active/Standby and multi-region with DNS
  - iRule Universal persistence only mirrors to the next device, so keep the cluster size to 2
  - Scale horizontally via additional HA clusters and DNS
  - Clients pinned to a region via DNS (on-prem or public cloud)
  - Inside a region, the local proxy cluster shares state

This example comes up in customer conversations often. Based on customer requirements, in-house skillset, current operational model, and time frames, there is one option that is better than the rest. A second design phase lends itself to more of a Replatform or Refactor migration type. In that case, more options can be leveraged to take advantage of cloud-native features. For example, changing the application persistence type from iRule UIE to cookie would allow BIG-IP to avoid keeping track of state. Why? With cookies, the client keeps track of that session state. The client receives a cookie, passes the cookie to the L7 proxy on successive requests, the proxy checks the cookie value, and sends the request to a backend pool member. The requirement for the L7 proxy to share session state is now removed.

Example Customer Scenario #2
Here is another customer scenario.
This time the application is a full suite of multimedia content. In contrast to the first scenario, this one illustrates the benefits of rearchitecting various components, allowing greater flexibility when leveraging the cloud. You still must factor in-house skill set, project time frames, and other important business (and application) requirements when deciding on the best migration type.

Current Setup:
- Multimedia (Gaming, Movie, TV, Music) platform
- BIG-IP VIPRIONs using vCMP on-prem
- Two data centers for HA redundancy
- iRule heavy (Security, Traffic Manipulation, Performance)
- Biggest app: OAuth + Cassandra for token storage (entitlements)

Requirements for Successful Cloud Migration:
- Support current traffic numbers
- Elastic auto scale for seasonal growth (ex. holidays)
- VPC peering with partners (must also bypass the Web Application Firewall)
- Must support current or similar traffic manipulation in the data plane
- Compatibility with existing tooling used by the business

Recommended Design for Cloud Phase #1:
- Migration Type: Repurchase, migrating BIG-IP to NGINX Plus
- Platform: NGINX, with iRules converted to JavaScript or Lua
- Licensing: NGINX Plus
- Modules: GeoIP, Lua, JavaScript
- HA method: N+1 autoscaling via native LB with active health checks

This is a great example of a Repurchase in which application characteristics allow the various teams to explore alternative cloud migration approaches. This scenario describes a phase one migration of converting BIG-IP devices to NGINX Plus devices. This example assumes the BIG-IP configurations can be somewhat easily converted to NGINX Plus, and it also assumes there is available skillset and project time allocated to properly rearchitect the application where needed.

Summary
OK! Brains are expanding...hopefully? We learned about high availability and what that means for applications and user experience. We touched on the importance of application behavior and traffic sizing. Then we explored the various F5 products, how they handle HA, and HA designs. These recommendations are based on my own lab testing and interactions with customers. Every scenario will carry its own requirements, and all options should be carefully considered when leveraging the public cloud. Finally, we looked at a customer scenario and discussed the requirements and design proposal. Fun!

Resources
Read the following articles for more guidance specific to the various cloud providers.
- Advanced Topologies and More on Highly Available Services
- Lightboard Lessons - BIG-IP Deployments in Azure
- Google and BIG-IP
- Failing Faster in the Cloud
- BIG-IP VE on Public Cloud
- High-Availability Load Balancing with NGINX Plus on Google Cloud Platform
- Using AWS Quick Starts to Deploy NGINX Plus
- NGINX on Azure

Practical considerations for using Azure internal load balancer and BIG-IP
Background
I recently had a scenario that required me to do some testing, and I thought it would be a good opportunity to share. A user told me that he wants to put BIG-IP in Azure, but he has a few requirements:
- He wants to use an Azure Load Balancer (ALB) to ensure HA for his BIG-IP pair. This makes failover times faster in Azure, compared to other options.
- He does not want to use an external load balancer. He has internet-facing firewalls that will proxy inbound traffic to BIG-IP, so there is no need to expose BIG-IP to the internet. He needs internal BIG-IPs only to provide the app services he needs.
- He does not want his traffic SNAT'd. He wants app servers to see the true client IP.
- Ideally he does not want to automate an update of Azure routes at time of failover.
- He would like to run his BIG-IP pair Active/Active, but could also run Active/Standby.

Quick side note: Why would we use ALBs when deploying BIG-IP? Isn't that like putting a load balancer in front of a load balancer? In this case we're using the ALB as a basic Layer 3/4 traffic disaggregator to simply send traffic to multiple BIG-IP appliances and provide failover in the case of a VM-level failure. However, we want our traffic to traverse the BIG-IPs when we need advanced app services: TLS handling, authentication, advanced health monitoring, sophisticated load balancing methods, etc.

Let's analyze this!
Firstly, I put together a quick demo to easily show him how to deploy BIG-IP behind an ALB. My demo uses an external (internet-facing) ALB and an internal ALB. It is based on the official template provided by F5, but additionally deploys an app server and configures the BIG-IPs with a route and an AS3 declaration. If it wasn't for the internal-only part, we would have met his requirements with this setup.

Constraints
However, this user's case requires no internet-facing BIG-IP. And now we hit two problems:
- Azure will not allow 2x internal LBs in the same Availability Set, or you'll get a conflict with the error NetworkInterfacesInAvailabilitySetUseMultipleLoadBalancersOfSameType. So the diagram above cannot be re-created with 2x internal Azure LBs.
- When using only 1x internal LB, you "should only have one inbound rule if that rule loadbalances across all ports and protocols." Since we want at least one rule for all ports (for outbound traffic from the server), we cannot also have individual LB rules for our apps. Trying to do so will fail with the error message in quotes.

Alternatives
This leaves us with a few options. We can meet most of his requirements, but one thing I have not been able to overcome is this: if your cluster of BIG-IPs is Active/Active, you will need to SourceNAT traffic to ensure the response traffic traverses the same BIG-IP. I'll discuss three options and how they meet the requirements.

A. Use a single Azure internal LB. At BIG-IP, SNAT the traffic to the web server, and send an XFF header in place of the true client IP. The default route can be the firewall, so server-initiated traffic like patch updates can still reach the internet. Can be Active/Active or Active/Standby, but you must SNAT if you do not want to be updating a UDR at time of failover.

B. Or, don't SNAT traffic, and the web server sees the true source IP. You will need a UDR (User Defined Route) to point the default route and any client subnets at the active BIG-IP. You will need to automatically update this UDR at time of failover (automated via F5's Cloud Failover Extension, or CFE). Can be Active/Standby only, as traffic will return following the default route.
C. Use a single Azure internal LB, but with multiple LB rules. In our example we'll have 2x front-end IPs configured on the Azure LB, and these will be in different internal subnets. Then we'll have 2x backend pools: the external self IPs for one rule, and the internal self IPs for the other. Finally, we'll have 2x LB rules. The rule for the "internal" side of the LB may listen on ALL ports (for outbound traffic), and the "external" side of the LB might listen on 80 and 443 only.

Advanced use cases (not pictured)

Single NIC. If you did not want to have a 3-NIC BIG-IP, it would be possible to achieve scenario C above with a single-NIC or dual-NIC VM: use a 2-NIC BIG-IP (1 NIC for mgmt., 1 for dataplane). Put your F5 pair behind a single internal Azure LB with only 1 LB rule which has "HA" ports checked (all ports). We can then have the default route of the server subnet point to the Azure LB, which will be served by a VIP 0.0.0.0/0 on the F5. Because this only allows you 1 LB rule on the Azure LB, enable DSR on the Azure LB rule. Designate an "alien subnet range" that doesn't exist in the VNET, but only on the BIG-IP. Create a route to this range, and point the next hop at the frontend IP on the only LB rule. Then have your FW send traffic to the actual VIP on the F5 that's within the alien range (not the frontend IP), which will get forwarded to the Azure LB, and on to the F5. I have tested this but see no real advantage and prefer multi-NIC VMs.

Alien range. As mentioned above, an "alien IP range" - a subnet not within the VNET but configured only on the BIG-IP for VIPs - could exist. You can then create a UDR that points this "alien range" toward the frontend IP on the Azure LB. An alien range may help you overcome the limit of internal IPs on Azure NICs, but with a limit of 256 private IPs per NIC, I have not seen a case requiring this. An alien range might also allow app teams to manage their own network without bothering network admins. I would not advise going "around" network teams, however - cooperation is key. So I cannot find a great use for this in Azure, but I have written about how an alien range may help in AWS.

DSR. Direct Server Return in Azure LB means that the Azure LB will not perform destination NAT on the traffic, and it will arrive at the backend pool member with the true destination IP address. This can be handy when you want to create one VIP per application in your BIG-IP cluster, and not worry about multiple VIPs per application, or VIPs with /30 masks, or VIPs that use Shared Address lists. However, given that we have multiple options to configure VIPs when destination NAT is performed by the Azure LB (as it is by default), I generally don't recommend DSR on Azure LB unless it's truly desired.

Personally, I'd recommend in this case to proceed with Option C above (a configuration sketch follows below). I'd also point out:
- I believe operating in the cloud requires automation, so you should not shy away from automated updates to UDRs when required. Given these can be configured by tools like F5's Cloud Failover Extension (CFE), they are a part of mature cloud operations.
- I personally try to architect to allow for changes later. Making SNAT a requirement may be a limitation for app owners later on, so I try not to end up in a scenario where we enforce SNAT.
- I personally like to see outbound traffic traverse BIG-IP, and not just because it allows apps to see the true source IP. Outbound traffic can be analyzed, optimized, secured, etc. - and a sophisticated device like BIG-IP is the place to do it.
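To make Option C a bit more concrete, here is a hedged Azure CLI sketch of a Standard-SKU internal load balancer with two frontend IPs in different subnets, two backend pools (external and internal self IPs), a per-port rule on the "external" side, and an HA-ports (all ports/protocols) rule on the "internal" side. All resource, VNet, and subnet names are hypothetical, and flag names may vary slightly between Azure CLI versions.

#Hypothetical resource names for illustration only
RG=myResourceGroup

#Internal Standard LB with the "external side" frontend/pool
az network lb create -g $RG -n bigip-internal-lb --sku Standard \
  --vnet-name my-vnet --subnet external-subnet \
  --frontend-ip-name fe-external --backend-pool-name pool-external

#Second frontend and pool for the "internal side"
az network lb frontend-ip create -g $RG --lb-name bigip-internal-lb \
  -n fe-internal --vnet-name my-vnet --subnet internal-subnet
az network lb address-pool create -g $RG --lb-name bigip-internal-lb -n pool-internal

#Simple TCP health probe answered by the BIG-IPs
az network lb probe create -g $RG --lb-name bigip-internal-lb -n tcp80-probe \
  --protocol tcp --port 80

#"External side" rule: 443 only (repeat for 80 if needed)
az network lb rule create -g $RG --lb-name bigip-internal-lb -n rule-443 \
  --protocol Tcp --frontend-port 443 --backend-port 443 \
  --frontend-ip-name fe-external --backend-pool-name pool-external \
  --probe-name tcp80-probe

#"Internal side" rule: HA ports (protocol All, port 0) for outbound/all-port traffic
az network lb rule create -g $RG --lb-name bigip-internal-lb -n rule-ha-ports \
  --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name fe-internal --backend-pool-name pool-internal \
  --probe-name tcp80-probe

The BIG-IP self IPs would then be added to the appropriate backend pools; enabling floating IP on a rule is what Azure calls DSR, if you later decide to go that route.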
Lastly, I recommend best practices we all should know:
- Use template-based deployments for production so that you have Infrastructure as Code
- Ideally keep your BIG-IP config off-box and deploy with AS3 templates
- Get your app and dev teams configuring BIG-IP using declarative deployments to speed your deployment times

Conclusion
There are multiple ways to deploy BIG-IP when your requirements dictate an architecture that is not deployed by default. By keeping in mind my priorities of application services, operational support, and high availability, you can decide on the best architecture for a given scenario. Thanks for reading, and let me know if you have any questions.

Using Cloud Templates to Change BIG-IP Versions - Azure
Introduction
This article will make use of F5 cloud templates on GitHub to modify the BIG-IP versions for your public cloud deployments in Azure. This is part of an article series, so please review the “Concepts” as well as other articles within the series.

Modifying BIG-IP Templates for Azure Cloud
This section will show you how to modify the BIG-IP version in Azure deployments. The template deployment service in Azure is called Azure Resource Manager (ARM). See the BIG-IP Cloud Templates for Azure on GitHub. There are a few methods I tested, and I’ll do a “How To” for each. Check the Appendix for additional examples.
- Use Latest Template Release (no edits required)
- Use Previous Template Release (no edits required)
- Edit Latest Template to Deploy BIG-IP Versions 12.x
- Use Latest Template to Deploy Custom BYOL Uploaded Image

Note: At the time of this article, the "Latest" template release version for F5 cloud templates in Azure is 9.4.0.0 and is found under Tag 9.4.0.0 on GitHub. See the Tag 9.4.0.0 Release Notes.

Option #1: Use Latest Template Release (no edits required)
This option lets you use templates without modification of code. Each release corresponds to a certain BIG-IP version (see the Azure ARM Template Matrix), and the template is hard-coded with the selection of one default BIG-IP version using the variable bigIpVersion. The latest template can deploy BIG-IP versions 15.1 (default), 14.1, and "latest", which is 16.0 at the time of this article. If one of these versions is your version of choice, great! This section is for you. Here is an example to deploy BIG-IP version 14.1.4.2.

Deploy BIG-IP with the latest template release:
- Find your favorite BIG-IP template for Azure. I’ll use the BIG-IP, standalone, 3nic, PAYG licensing (Tag 9.4.0.0)
- Review the entire README for installation instructions
- Download the template "azuredeploy.json" and parameters "azuredeploy.parameters.json"
- Edit the parameters file: bigIpVersion = 14.1.402000
- Populate all remaining parameters
- Save the file and deploy with your favorite method
- Azure will validate the template and launch a BIG-IP running 14.1.4.2

#Example deploying using Azure CLI
az group deployment create -n myDeployment -g myRG1 \
  --template-file azuredeploy.json \
  --parameters @azuredeploy.parameters.json \
  --parameters bigIpVersion=14.1.402000

Note: If you need to specify a more exact version or flavor of BIG-IP, try modifying the parameter customImageUrn. You can find available values in the Azure Offer List, so pick one that matches your deployment requirements and licensing model. For example, 1Gb Best PAYG 14.1.4.2 would be f5-networks:f5-big-ip-best:f5-bigip-virtual-edition-1g-best-hourly:14.1.402000.

Option #2: Use Previous Template Release (no edits required)
If you don’t mind a previous template release (fewer fixes/features), AND you still don’t want to tweak template code, AND you still need a different BIG-IP version, AND the BIG-IP version is listed in the matrix, then keep reading! Here is an example to deploy BIG-IP version 13.1.1.0.

Find a previous template release to deploy the BIG-IP version you desire:
- Decide what BIG-IP version you need (my example: 13.1.1.0)
- Check the Azure ARM Template Matrix for BIG-IP
- Scroll down the list and you’ll see template release v7.0.0.2; it allows “latest, 14.1.003000, 13.1.100000”
- Click the link to review the v7.0.0.2 template release notes

Deploy BIG-IP with a previous template release:
- Find your favorite BIG-IP template for Azure.
  I’ll use the BIG-IP, standalone, 3nic, PAYG licensing (Tag 7.0.0.2)
- Review the entire README for installation instructions
- Download the template "azuredeploy.json" and parameters "azuredeploy.parameters.json"
- Edit the parameters file: bigIpVersion = 13.1.100000
- Populate all remaining parameters
- Save the file and deploy with your favorite method
- Azure will validate the template and launch a BIG-IP running 13.1.1.0

#Example deploying using Azure CLI
az group deployment create -n myDeployment -g myRG1 \
  --template-file azuredeploy.json \
  --parameters @azuredeploy.parameters.json \
  --parameters bigIpVersion=13.1.100000

OK...we made it this far, but you still don’t see the BIG-IP version you need. Keep reading! In the next section, we’ll tweak some templates!

Option #3: Edit Latest Template to Deploy BIG-IP Versions 12.x
So far, I have addressed using the different templates to deploy various BIG-IP versions. The same concept can also work to deploy a version 12.x BIG-IP, assuming the image is still available in the marketplace. We’ll use the latest template version, make some minor edits to the code, and deploy the BIG-IP. First, we need to make sure the version number is available in the marketplace. Here is an example to deploy BIG-IP version 12.1.5.3.

Note: Version 12.x BIG-IP uses an older Azure WAAgent and requires a specific management route.
Note: Review the knowledge article "F5 support for GitHub software" for any questions pertaining to support of templates and modified templates.

Search for an Azure image via Azure CLI:
- Open your favorite terminal
- Enter a search filter. Use “big-ip” as the starting filter value: az vm image list -f big-ip --all
- Scroll through the list and find an available version for your specific BIG-IP flavor
  - My desired version example = 12.1.5.3
  - My desired “flavor” = Best (all modules), 1Gb throughput, PAYG (hourly)
- Copy the “version” value and save it for later (my example: 12.1.503000)

#Example search and results
az vm image list -f big-ip --all

#Output similar to this... (lots of images are listed)
{
  "offer": "f5-big-ip-best",
  "publisher": "f5-networks",
  "sku": "f5-bigip-virtual-edition-1g-best-hourly",
  "urn": "f5-networks:f5-big-ip-best:f5-bigip-virtual-edition-1g-best-hourly:12.1.503000",
  "version": "12.1.503000"
},

Deploy BIG-IP with the edited latest template release:
- Find your favorite BIG-IP template for Azure.
  I’ll use the BIG-IP, standalone, 3nic, PAYG licensing (Tag 9.4.0.0)
- Review the entire README for installation instructions
- Download the template "azuredeploy.json" and parameters "azuredeploy.parameters.json"
- Edit the template file (refer to the EXAMPLE EDITS code snippet below): replace the value of routeCmd with a management route for waagent
- Edit the parameters file: bigIpVersion = 12.1.503000
- Populate all remaining parameters
- Save the file and deploy with your favorite method
- Azure will validate the template and launch a BIG-IP running 12.1.5.3

#Example deploying using Azure CLI
az group deployment create -n myDeployment -g myRG1 \
  --template-file azuredeploy.json \
  --parameters @azuredeploy.parameters.json \
  --parameters bigIpVersion=12.1.503000

#Example Edits for Option #3: Edit Latest Template to Deploy BIG-IP Versions 12.x
##############
## routeCmd ##
##############
#original
"routeCmd": "route",
#after replacing with mgmt route for WA agent
"routeCmd": "[concat('tmsh create sys management-route waagent_route network 168.63.129.16/32 gateway ', variables('mgmtRouteGw'), '; tmsh save sys config')]",

Note: If you need to specify a more exact version or flavor of BIG-IP, try modifying the parameter customImageUrn. You can find available values in the Azure Offer List, so pick one that matches your deployment requirements and licensing model. For example, 1Gb Best PAYG 12.1.5.3 would be f5-networks:f5-big-ip-best:f5-bigip-virtual-edition-1g-best-hourly:12.1.503000.

Option #4: Use Latest Template to Deploy Custom BYOL Uploaded Image
The final Azure option allows you to upload or create your own BIG-IP images and reference those images in F5 cloud template deployments. There is an existing how-to doc on F5 CloudDocs explaining how to upload a VHD to your Azure environment. First review “When You Don’t Have Azure Marketplace Access”. I’ll walk through the high-level steps of the article below. Then we'll review the deploy steps, which require another JSON code edit.

Note: Custom images only allow BYOL licensing.
Note: Review the knowledge article "F5 support for GitHub software" for any questions pertaining to support of templates and modified templates.

Upload/Create Custom Image:
- Obtain a VHD image file for the BIG-IP version you desire (my example = 13.1.3.2)
  - Download the VHD file from https://downloads.f5.com and upload as-is
  - Or...you can use the F5 Image Generator tool to make your own custom image
- Untar the Azure-F5_Networks-xxxxxx.tar.gz file (this will result in a single VHD file)
- Create an Azure Storage account and container
  - Recommend creating a folder in your storage container like “f5-images”
- Upload the VHD file to Azure Storage (see the sketch below for one way to do this with the Azure CLI)
- Click the VHD file to review its properties
- Copy the URL and save it for later (my example: https://<storage-account>.blob.core.windows.net/f5-images/F5_Networks-13.1.3.2-0.0.4-BYOL-_.vhd)

The customImage parameter in the F5 cloud templates can reference the VHD URL value, or the templates can reference an Azure Image resourceID. Read the next “Optional” section to create an Azure Image. Otherwise, skip to “Deploy Custom BIG-IP Image with Latest Template”.

Note: If you don’t create an Azure Image resource prior to deployment, the templates will automatically create an Azure Image (source = VHD file) in the same resource group.
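The upload step above can be done in the portal or from the command line; here is a minimal, hedged Azure CLI sketch of creating the container and uploading the VHD. The storage account name and authentication mode are placeholders/assumptions; adjust them for your environment.

#Hypothetical storage account name; assumes you are logged in with rights to the account
az storage container create \
  --account-name mystorage123 \
  --name f5-images \
  --auth-mode login

az storage blob upload \
  --account-name mystorage123 \
  --container-name f5-images \
  --name F5_Networks-13.1.3.2-0.0.4-BYOL-_.vhd \
  --file ./F5_Networks-13.1.3.2-0.0.4-BYOL-_.vhd \
  --auth-mode login

Once uploaded, the blob URL (as shown in the example above) is what you either reference directly in customImage or feed to az image create in the next section.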
Optional - Create Azure Image from VHD File via Azure CLI:
- Create the Azure Image: az image create --name xxxx --resource-group xxxx --os-type Linux --source https://xxxx/file.vhd
- Copy the “id” value and save it for later (my example: /subscriptions/xxxx-xxxx-xxxx-xxxx-xxxx/resourceGroups/myRG1/providers/Microsoft.Compute/images/F5_Networks-13.1.3.2-0.0.4-BYOL-2slot)

#Example image create and results
az image create --name F5_Networks-13.1.3.2-0.0.4-BYOL-2slot \
  --resource-group myRG1 \
  --os-type Linux \
  --source https://mystorage123.blob.core.windows.net/f5-images/F5_Networks-13.1.3.2-0.0.4-BYOL-_.vhd

#Output similar to this...
{
  "hyperVgeneration": "V1",
  "id": "/subscriptions/xxxx/resourceGroups/myRG1/providers/Microsoft.Compute/images/F5_Networks-13.1.3.2-0.0.4-BYOL-2slot",
  ...

Deploy the custom BIG-IP image with the latest template release:
- Find your favorite BIG-IP template for Azure. I’ll use the BIG-IP, standalone, 3nic, BYOL licensing (Tag 9.4.0.0)
- Review the entire README for installation instructions
- Download the template "azuredeploy.json" and parameters "azuredeploy.parameters.json"
- Edit the parameters file:
  - bigIpVersion = 13.1.302000
  - customImage = /subscriptions/xxxx/resourceGroups/myRG1/providers/Microsoft.Compute/images/F5_Networks-13.1.3.2-0.0.4-BYOL-2slot
- Populate all remaining parameters
- Save the file and deploy with your favorite method
- Azure will validate the template and launch a BIG-IP running 13.1.3.2

#Example deploying using Azure CLI
az group deployment create -n myDeployment -g myRG1 \
  --template-file azuredeploy.json \
  --parameters @azuredeploy.parameters.json \
  --parameters bigIpVersion=13.1.302000

You can also use the customImage method to deploy a version 12.x image that you download from https://downloads.f5.com or create with the F5 Image Generator.
Note: If you decide to use a v12.x BIG-IP image, make sure to review the Appendix Option #4a section for an example of how to modify the routes in the routeCmd variable, as required by v12.x and the Azure WAAgent.

Summary
That is a wrap! There’s lots of info in this post, and I hope it makes your job easier in deciding which template to choose when deploying various versions of BIG-IP devices in the Azure public cloud.

Appendix

Option #4a: Edit Latest Template to Deploy Custom BYOL Uploaded Image 12.x
The same method in Option #4, “Use Latest Template to Deploy Custom Uploaded Image”, can also work to deploy a version 12.1.x BIG-IP that you download from F5 Downloads and upload as BYOL. I’ll summarize the steps here since the full steps are already listed in Option #4.

Note: Custom images only allow BYOL licensing.
Note: Version 12.1.x BIG-IP uses an older Azure WAAgent and requires a specific management route, as illustrated by the routeCmd.

- Download the BIG-IP v12.x version from the Azure folder on https://downloads.f5.com (my example LTM/DNS 1 boot slot = Azure-F5_Networks-BIGIP-12.1.5.30.0.5-size_45GB.vhd.tar.gz)
- Untar the file and upload the resulting VHD file to Azure Storage (my example = F5_Networks-BIGIP-12.1.5-30.0.5-size_45GB.vhd)
- Select the newly uploaded VHD file, review its properties, and save the VHD URL for later (my example = https://<storage-account>.blob.core.windows.net/f5-images/F5_Networks-BIGIP-12.1.5-30.0.5-size_45GB.vhd)
- Optionally...create an Azure Image from the VHD and save the resourceID for later (my example = /subscriptions/xxxx/resourceGroups/myRG1/providers/Microsoft.Compute/images/F5_Networks-12.1.5-30.0.5-BYOL-2slot)
- Find your favorite BIG-IP template for Azure.
  I’ll use the BIG-IP, standalone, 3nic, BYOL licensing (Tag 9.4.0.0)
- Review the entire README for installation instructions
- Download the template "azuredeploy.json" and parameters "azuredeploy.parameters.json"
- Edit the template file (refer to the EXAMPLE EDITS code snippet below): replace the value of routeCmd with a management route for waagent
- Edit the parameters file:
  - bigIpVersion = 12.1.503000
  - customImage = /subscriptions/xxxx/resourceGroups/myRG1/providers/Microsoft.Compute/images/F5_Networks-12.1.5-30.0.5-BYOL-2slot
- Populate all remaining parameters
- Save the file and deploy with your favorite method
- Azure will validate the template and launch a BIG-IP running 12.1.5.3

#Example deploying using Azure CLI
az group deployment create -n myDeployment -g myRG1 \
  --template-file azuredeploy.json \
  --parameters @azuredeploy.parameters.json \
  --parameters bigIpVersion=12.1.503000

#Example Edits for Option #4a: Edit Latest Template to Deploy Custom Uploaded Image 12.x
##############
## routeCmd ##
##############
#original
"routeCmd": "route",
#after replacing with mgmt route for WA agent
"routeCmd": "[concat('tmsh create sys management-route waagent_route network 168.63.129.16/32 gateway ', variables('mgmtRouteGw'), '; tmsh save sys config')]",

Integrating the F5 BIG-IP with Azure Sentinel
So here’s the deal; I have a few F5 BIG-IP VEs deployed across the globe protecting my cloud-hosted applications. It sure would be nice if there was a way to send all that event and statistical data to my Azure Sentinel workspace. Well, guess what? There is a way and yes, it is nice.

The Application Services 3 (AS3) extension is a relatively new mechanism for declaratively configuring application-specific resources on a BIG-IP system. This involves posting a JSON declaration to the system’s API endpoint (https://<BIG-IP>/mgmt/shared/appsvcs/declare).

Telemetry Streaming (TS) is an F5 iControl LX extension that, when installed on the BIG-IP, enables you to declaratively aggregate, normalize, and forward statistics and events from the BIG-IP. The control plane data can be streamed to the Azure Log Analytics application by posting a single TS JSON declaration to TS’s API endpoint (https://<BIG-IP>/mgmt/shared/telemetry/declare). As illustrated on the right, events/stats can be collected and aggregated from multiple BIG-IPs regardless of whether they reside in Azure, on-premises, or other public/private clouds.

Let’s take a quick look at how I set up my BIG-IP and Azure Sentinel. Since this post is not meant to be prescriptive guidance, I have included links to relevant guidance where appropriate. Okay, let’s have some fun!

So I don’t want to sound too biased here but, with that said, the F5 crew has put out some excellent guidance on Telemetry Streaming. The CloudDocs site (see left) includes information for various cloud-related F5 technologies and integrations. Refer to the installation section for detailed guidance.

Install the Plug-in
The TS plug-in RPM can be downloaded from the GitHub repo (https://github.com/F5Networks/f5-telemetry-streaming/releases). From the BIG-IP management GUI, I navigated to iApps –> Package Management LX and selected ‘Import’. I selected ‘Choose File’, then browsed to and selected the downloaded RPM. With the TS extension installed, I can now configure streaming via the newly created REST API endpoint. You may have noticed that I have previously installed the Application Services 3 (AS3) extension. AS3 is a powerful F5 extension that enables application-specific configuration of the BIG-IP via a declarative JSON REST interface.

Configure Logging Profiles and Streaming on BIG-IP
As I mentioned above, I could make use of the AS3 extension to configure my BIG-IP with the necessary logging resources. With AS3, I can post a single JSON declaration (I used Postman to apply it) that configures event listeners for my various deployed modules. In my deployment, I’m currently using Local Traffic Manager and Advanced WAF. For my deployment, I went a little “old school” and configured the BIG-IP via the management GUI or the TMSH CLI. Regardless of the method you prefer, the installation instructions provide detailed guidance for each log configuration method.

LTM Logging
To enable LTM request logging, I ran the following two TMSH commands. Afterwards, I enabled request logging on the virtual server (see below) to begin streaming data to Azure Log Analytics.
Create Listener Pool
create ltm pool telemetry-local monitor tcp members replace-all-with { 10.8.3.10:6514 }

Create LTM Request Log Profile
create ltm profile request-log telemetry request-log-pool telemetry-local request-log-protocol mds-tcp request-log-template event_source=\"request_logging\",hostname=\"$BIGIP_HOSTNAME\", client_ip=\"$CLIENT_IP\",server_ip=\"$SERVER_IP\", http_method=\"$HTTP_METHOD\", http_uri=\"$HTTP_URI\", virtual_name=\"$VIRTUAL_NAME\",event_timestamp=\"$DATE_HTTP\" request-logging enabled

ASM (Advanced WAF) Logging
To enable ASM event logging, I ran the following TMSH command. Afterwards, I simply needed to associate my security logging profiles with my application virtual servers (see below).

Create Security Log Profile
create security log profile telemetry application replace-all-with { telemetry { filter replace-all-with { request-type { values replace-all-with { all } } } logger-type remote remote-storage splunk servers replace-all-with { 255.255.255.254:6514 {} } } }

Streaming Data to Azure Log Analytics
With my BIG-IP configured for remote logging, I was now ready to configure my BIG-IPs to stream event data to my Azure Log Analytics workspace. This is accomplished by posting a JSON declaration to the TS API endpoint. The declaration (see the example below) includes settings specifying the workspace ID, access passphrase, polling interval, etc. This information can be gathered from the Azure portal or via the Azure CLI.
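The example declaration in the original post is shown as an image, so here is a minimal, hedged sketch of the kind of Telemetry Streaming declaration being described, posted with curl. The listener port, poller interval, workspace ID, and key are placeholders, and property names should be checked against the Telemetry Streaming documentation for the TS version you installed.

#Post a minimal TS declaration with an Azure Log Analytics consumer (values are placeholders)
curl -sk -u admin:MySecret \
  -H "Content-Type: application/json" \
  -X POST https://<BIG-IP>/mgmt/shared/telemetry/declare \
  -d '{
    "class": "Telemetry",
    "My_Listener": {
      "class": "Telemetry_Listener",
      "port": 6514
    },
    "My_Poller": {
      "class": "Telemetry_System_Poller",
      "interval": 60
    },
    "Azure_Consumer": {
      "class": "Telemetry_Consumer",
      "type": "Azure_Log_Analytics",
      "workspaceId": "<log-analytics-workspace-id>",
      "passphrase": {
        "cipherText": "<log-analytics-primary-key>"
      }
    }
  }'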
With the declaration applied to the BIG-IP, event/stat data now streams to my Azure workspace.

Utilize Azure Sentinel for Global Visibility and Analytics
With events and stats now streaming into my previously created OMS workspace from my BIG-IP(s), I can now start to visualize and work with the aggregated data. From the OMS workspace I can aggregate data from my BIG-IPs as well as other sources and perform complex queries. I can then take the results and use them to populate one or more custom dashboards (see example below). Additionally, to get started quickly I can deploy a pre-defined dashboard directly out of the Azure OMS workspace. As of this post, F5 currently has a pre-canned dashboard for visualizing Advanced WAF and basic LTM event data (see below).

Summary
Now I have a single pane of glass that can be pinned to my Azure portal for quick, near-real-time visibility of my globally deployed application. Pretty cool, huh? Here’s the overall order and some relevant links:
- Setup Azure Sentinel and OMS Workspace
- Install and Configure Telemetry Streaming onto the BIG-IP(s)
- Configure logging on BIG-IP(s)

Additional Links
- Video Walkthrough of Azure Sentinel Integration
- F5 CloudDocs
- Application Services 3 Extension
- Telemetry Streaming User Guide
- Azure Sentinel Overview

How does F5 AS3 really work under the hood?

Put it simply, AS3 is a way to configure a BIG-IP once the BIG-IP is already provisioned. Full stop! We can also use AS3 to maintain that configuration over time. The way it works is that we as a client send a JSON declaration via the REST API, and the AS3 engine works out how to configure the BIG-IP the way it's been declared. AS3's internal components (parser and auditor) are explained further ahead. For a non-DEV audience, AS3 is simply the name we give to an intelligent listener which acts as an interpreter that reads our declaration and translates it into the proper commands to be issued on the BIG-IP. The AS3 engine may or may not reside on the BIG-IP (more on that in the section entitled "The 3 ways of using AS3"). Yes, AS3 is declared in a structured JSON file, and there are many examples on how to configure your regular virtual server, profiles, pools, etc., on CloudDocs. AS3 uses common REST methods to communicate, such as GET, POST and DELETE, under the hood. For example, when we send our AS3 declaration to BIG-IP, we're sending an HTTP POST with the AS3 JSON file attached. AS3 is part of the Automation Toolchain, which includes Declarative Onboarding and Telemetry Streaming.

What AS3 is NOT
- Not a mechanism for Role-Based Access Control (RBAC): AS3 doesn't support RBAC in a way that you can allow one user to configure certain objects and another user to configure other objects. AS3 has to use an admin username/password with full access to BIG-IP resources.
- Not a GUI: There's currently no native GUI built on top of AS3.
- Not an orchestrator: AS3 won't and doesn't work out how to connect to different BIG-IPs and automatically figure out which box it needs to send which configuration to. All it does is receive a declaration, forward it on, and configure the BIG-IP.
- Not for converting BIG-IP configuration: We can't currently use AS3 to pull a BIG-IP configuration and generate an AS3 configuration, but I hope this functionality will be available in the future.
- Not for licensing or other onboarding functions: We can't use AS3 for things like configuring VLANs or NTP servers. We use AS3 to configure BIG-IP once it's already been initially provisioned. For BIG-IP's initial setup, we use Declarative Onboarding.

Why should we use AS3?
To configure and maintain BIG-IPs across multiple versions using the same automated workflow. A simple JSON declaration becomes the source of truth with AS3, where configuration edits should only be made to the declaration itself. If multiple BIG-IP boxes use the same configuration, a single AS3 declaration can be used to configure the entire fleet. It can also be easily integrated with external automation tools such as Ansible and Terraform.

What I find really REALLY cool about AS3
AS3 targets and supports BIG-IP version 12.1 and higher. Say we have an AS3 declaration that was previously used to configure a BIG-IP v12.1, right? Regardless of whether we're upgrading or moving config to another box, we can still use the same declaration to configure a BIG-IP v15.1 box in the same way. I'm not joking! Back in the F5 Engineering Services days, I still remember when I used to grab support tickets where the issue was a configuration from an earlier version that was incompatible with a newer version, e.g. a profile option was moved to a different profile, or a new feature was added that requires some other option to be selected, etc. This is supposed to be a thing of the past with AS3.

AS3 Key Features

Transactional
If you're a DBA, you've certainly heard of the term ACID (atomicity, consistency, isolation, and durability).
Let's say we send an AS3 declaration with 5 objects. AS3 will either apply the entire declaration or not apply it at all. What that means is that if there's a single error, AS3 will never apply part of the configuration and leave the BIG-IP in an unknown or inconsistent state. There's no in-between state: either everything gets configured or nothing at all. It's either PASS or FAIL.

Idempotent
Say we send a declaration where there's nothing to configure on the BIG-IP. In that case, AS3 comes back to the client and informs it that there's nothing to do. Essentially, AS3 won't remove the BIG-IP's entire config and then re-apply it. It is smart enough to determine what work it needs to do, and it will always do as little work as possible.

Bounded
AS3 enforces multi-tenancy by default, i.e. AS3 only creates objects in partitions (known as "tenants" in AS3 jargon) other than /Common. If we look at the AS3 declaration examples, we can see that a tenant (partition) is specified before we declare our config. AS3 does not create objects in the /Common partition. The exception is /Common/Shared, for objects that are supposed to be shared among multiple partitions/tenants. An example is when we create a pool member and a node gets automatically created on the BIG-IP. Such a node is created in the /Common/Shared partition because it might also be a pool member in another partition. Nevertheless, AS3's scope is, and must always be, bounded.

The 3 ways of using AS3

Using AS3 through BIG-IP
In this case, we install the AS3 RPM on each BIG-IP. The BIG-IP is the box that has the "AS3 listener" waiting for us to send our AS3 JSON config file. All we need to do is download the AS3 RPM and install it locally. There's a step-by-step guide for babies (with screenshots) here using BIG-IP's GUI. There's also a way to do it using curl if you're a geek like me, here (a minimal scripted example appears just before the "AS3 in Action" section below).

Using AS3 through BIG-IQ
In this case, we don't need to manually install the AS3 RPM on each BIG-IP box as in the previous option; BIG-IQ does it for us. BIG-IQ v6.1.0+ supports AS3, and we can send declarations directly through BIG-IQ. Apart from installing AS3, BIG-IQ also upgrades it on the target box (or boxes) if they're running an older version. Analytics and RBAC are also supported.

Using AS3 through a Docker container
This is where AS3 is completely detached from the BIG-IP. In the Docker container setup, the AS3 engine resides within a Docker container decoupled from the BIG-IP. Say your environment has Docker containers running, which is not uncommon nowadays. We can install AS3 in a Docker container and use that container as the entry point to send our AS3 declaration to the BIG-IP. We, as the client, send our AS3 JSON file to where the Docker container is running, and as long as the Docker container can reach our BIG-IP, it will connect and configure it. Notice that in this case our AS3 engine runs outside of the BIG-IP, so we don't have to install AS3 on our BIG-IP fleet. The Docker container communicates with the BIG-IP over iControl REST, sending tmsh commands directly.

AS3 Internal Components

The AS3 engine is comprised of an AS3 parser and an AS3 auditor:

AS3 Parser
This is the front-end part of AS3 that communicates with the client and is responsible for validating the client's declaration.

AS3 Auditor
After receiving the validated declaration from the AS3 parser, the AS3 auditor's job is to compare the desired (validated) declaration with the BIG-IP's current configuration. It then determines what needs to be added or removed and forwards the changes to the BIG-IP.
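Before looking at the end-to-end flow, here is a minimal sketch of what the client side of the first option above can look like when scripted. The management address, credentials, and application values are placeholders, and the declaration follows the simple HTTP service example from the AS3 documentation; /mgmt/shared/appsvcs/declare is the AS3 endpoint, but verify the schema details against the AS3 reference for the version you are running.

    # Minimal sketch: POST an AS3 declaration to a BIG-IP (pip install requests).
    # All addresses, credentials, and names below are placeholders, not a recommended design.
    import json
    import requests

    BIGIP = "https://192.0.2.10"        # placeholder management address
    AUTH = ("admin", "admin-password")  # AS3 requires an admin-level account

    declaration = {
        "class": "AS3",
        "action": "deploy",
        "declaration": {
            "class": "ADC",
            "schemaVersion": "3.0.0",
            "Sample_Tenant": {              # tenant = partition created on the BIG-IP
                "class": "Tenant",
                "Sample_App": {
                    "class": "Application",
                    "template": "http",
                    "serviceMain": {
                        "class": "Service_HTTP",
                        "virtualAddresses": ["10.0.1.10"],
                        "pool": "web_pool"
                    },
                    "web_pool": {
                        "class": "Pool",
                        "members": [{
                            "servicePort": 80,
                            "serverAddresses": ["10.0.2.10", "10.0.2.11"]
                        }]
                    }
                }
            }
        }
    }

    # BIG-IP management interfaces commonly use self-signed certificates, hence verify=False.
    response = requests.post(
        BIGIP + "/mgmt/shared/appsvcs/declare",
        auth=AUTH,
        json=declaration,
        verify=False,
    )
    print(response.status_code)
    print(json.dumps(response.json(), indent=2))

A successful deployment comes back with a 2xx status and a per-tenant result, while schema problems are rejected with a 4xx response and a message pointing at the offending property, which lines up with the parser/auditor behaviour described above.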
AS3 in Action

The way it works is that the client sends a declaration to the AS3 parser and the config validation process kicks in. If the declaration is not valid, an error is thrown back to the client along with an error code. If it is valid, the declaration is forwarded on to the AS3 auditor, which then compares the declaration with the BIG-IP's current config and determines what needs to change. Only the configuration changes are supposed to be sent to the BIG-IP, not the whole config. The AS3 auditor then converts the AS3 declaration into tmsh commands and sends the converted tmsh config to the BIG-IP via iControl REST. The BIG-IP then pushes the changes via tmsh commands and returns success or error to the AS3 auditor. If the changes are not successful, an error is returned all the way back to the client. Otherwise, a success code is returned to the client and the changes are applied to the BIG-IP. Here's the visual description of what I've just said:

Debugging AS3

AS3 schema validation errors are returned in the HTTP response with a message pointing to the specific error; this includes typos in property names and so on. Logs on the BIG-IP are stored in /var/log/restnoded/restnoded.log, and by default only errors are logged. The log level can be changed through the Controls object in the AS3 declaration itself (a small sketch follows the resources list below).

AS3 vs Declarative Onboarding

This is usually a source of confusion, so I'd like to clarify it a bit. AS3 is the way we configure a BIG-IP once it's already up and running. Declarative Onboarding (DO) is for the initial configuration of a BIG-IP, i.e. setting up the licence, users, DNS, NTP, and even provisioning modules. Just like AS3, DO is API-only, so there is no GUI on top of it. We can also have AS3 and DO on the same BIG-IP, so that's not a problem at all. Currently, there's no option to run DO in a container like AS3, so as far as I'm concerned, it's RPM-based only.

Resources

AS3: CloudDocs, GitHub Repo, Releases
Declarative Onboarding (DO): CloudDocs, GitHub Repo, Releases

I'd like to thank F5 Software Engineers Steven Chadwick and Garrett Dieckmann from the AS3 team for providing brilliant reference material.
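As a follow-up to the debugging notes above, here is roughly what raising the log level looks like inside a declaration. This is a sketch based on the Controls class in the AS3 reference; the tenant contents are omitted and the available properties can differ between AS3 versions, so check the schema you are running.

    {
        "class": "ADC",
        "schemaVersion": "3.0.0",
        "controls": {
            "class": "Controls",
            "logLevel": "debug",
            "trace": true
        },
        "Sample_Tenant": {
            "class": "Tenant"
        }
    }

With the log level raised, /var/log/restnoded/restnoded.log records far more than errors, which makes it much easier to see what the parser and auditor did with a given declaration.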
Creating a Credential in F5 Distributed Cloud for Azure

Configuring a cloud account credential for F5 Distributed Cloud to use with Azure, while a straightforward process, requires some nuance to get just right. This article illustrates each step of the way. "Nature is a mutable cloud, which is always and never the same." - Ralph Waldo Emerson. We might not wax that philosophically around here, but our heads are in the cloud nonetheless! Join the F5 Distributed Cloud user group today and learn more with your peers and other F5 experts.

Log in to the Azure portal at portal.azure.com.
Navigate to All Services > Azure AD > App registrations, then click "New registration". Enter the app name and choose who can access the API; single-tenant access is recommended.
Now click "Certificates & secrets", then "Client secrets (0)", and then "New client secret". Enter a name for the secret and choose the default expiration time of 6 months as a best practice.
Copy the secret and save it to enter later in the F5 Distributed Cloud Console.
In the app registration overview "Essentials" section, copy the Application (client) ID and Directory (tenant) ID. You'll need this information in the F5 Distributed Cloud Console further on in this guide.
Exit the app registration, and in the Azure Active Directory overview, save the Tenant ID to enter later in the F5 Distributed Cloud Console.
In the search box, type "Subscriptions" and open the subscription that you want services provisioned in; note its Subscription ID.
Click "Access control (IAM)", then "+ Add", then "Add role assignment".
Select the built-in role "Contributor", then click the "Members" tab.
Enter the name of the app registration created earlier, highlight the selection, then click "Select". The role assignment and member should appear. Now click "Review & assign".
Open the F5 Distributed Cloud Console, navigate to Cloud and Edge Sites > Site Management > Cloud Credentials, then click "Add Cloud Credentials".
Enter the following details, then click "Configure":
Name: azure-cred
Cloud Credential Type: Azure Client Secret for Service Principal
Client ID: the Application (client) ID copied earlier
Subscription ID: the ID of the subscription selected earlier
Tenant ID: the Directory (tenant) ID copied earlier
Paste in the private key using type "Text", with the client secret copied earlier. Click "Blindfold", and then click "Apply".
Click "Save and Exit".

Congrats! You've now configured a Cloud Credential for deploying services in Azure using the Distributed Cloud Service.
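As an aside, if you would rather script the Azure side of the steps above, the Azure CLI can create the app registration, service principal, client secret, and Contributor role assignment in a single command. This is an assumed shortcut rather than part of the original walkthrough; the service principal name is arbitrary, the subscription ID is a placeholder, and the secret's expiration should be reviewed separately since the CLI default may differ from the 6-month value chosen above.

    # Hypothetical equivalent of the portal steps (name and subscription ID are placeholders)
    az ad sp create-for-rbac \
      --name f5xc-azure-cred \
      --role Contributor \
      --scopes /subscriptions/<subscription-id>

The appId, password, and tenant values it prints map to the Client ID, client secret, and Tenant ID fields in the Distributed Cloud Console credential form.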
Go in Plain English: Setting things up and Hello World!

Related Articles:
Go In Plain English: Playing with Strings
Go In Plain English: Creating your First WebServer

Quick Intro
I like to keep things short, so just do what I do here and you will pick things up. In this article we'll cover the following:
Installation (1-liner)
Hello World!
The directory structure of Go
BONUS! VS Code for Golang

Installation
→ https://golang.org/doc/install
Note: If you've never programmed in Go, please go straight to the bonus section, install VS Code, and come back here!

Hello World!
At the moment, just keep in mind that you'll typically see package, import and func in a Go program. If you're just getting started, this is not the time to go really in-depth, but package main signals to Go that our piece of code is an executable. The import statement is where you tell Go which additional packages you'd like to import; in this case, we're importing the fmt package (the one that has the Printf method). Lastly, func main() is where you add your code, i.e. the entry point of our program. (A minimal version of the program appears at the end of this article.)

Running your Go code!
You can pick a folder of your choice to keep your Go apps in. In my case, I picked the /Users/albuquerque/Documents/go-apps/ directory. Inside of it, you create a src directory to keep your *.go source code in.

go run
When you type go run, you're telling Go that you don't care about the executable (for now), so Go compiles your code, keeps the executable in a temporary directory, and runs it for you.

go build
When you type go build, Go compiles the executable and saves it to your current directory.

go install
There is a variable called $GOPATH where we tell Go where to look for source files. Let's set it to our go-apps directory. When we type go install, Go creates a bin directory to keep the executable.

go doc
In case you want to know what a particular command does, just use the go doc command.

BONUS! VS Code for Golang
The program I use to code in Go and Python is VS Code. It has autocompletion and roughly everything you'd expect from a proper code editor. You can download it from here: https://code.visualstudio.com/download
You can choose themes too.
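For reference, a minimal hello-world program matching the description above could look like this; save it as, say, hello.go under your src directory (the filename and location are just a suggestion) and run it with go run hello.go:

    package main

    // fmt provides formatted I/O, including the Println and Printf functions.
    import "fmt"

    func main() {
        // Entry point of the executable.
        fmt.Println("Hello, World!")
    }

go run compiles and runs it from a temporary location, while go build leaves the compiled binary in the current directory, exactly as described above.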
Exploring Kubernetes API using Wireshark part 1: Creating, Listing and Deleting Pods

Related Articles:
Exploring Kubernetes API using Wireshark part 2: Namespaces
Exploring Kubernetes API using Wireshark part 3: Python Client API

Quick Intro
This article answers the following question: what happens when we create, list and delete pods under the hood? More specifically, on the wire. I used these 3 commands (a sketch of the manifest and commands appears at the end of this article): I'll show you on Wireshark the communication between the kubectl client and the master node (API) for each of the commands. I used a proxy so we don't have to worry about the TLS layer and can focus on HTTP only.

Creating NGINX pod
pcap: creating_pod.pcap (use the http filter on Wireshark)
Here's our YAML file, how we create this pod, and what we see on Wireshark: behind the scenes, the kubectl command sent an HTTP POST with our YAML file converted to JSON, but notice that the same things were sent (kind, apiVersion, metadata, spec). You can expand it if you want to, but I didn't, to keep it short. Then, the Kubernetes master (API) responds with HTTP 201 Created to confirm our pod has been created. Notice that the master node replies with similar data plus an additional status field, because once the pod is created it's supposed to have a status too.

Listing Pods
pcap: listing_pods.pcap (use the http filter on Wireshark)
When we list pods, kubectl just sends an HTTP GET request instead of a POST, because we don't need to submit any data apart from headers. Here's the full GET request, and here's the HTTP 200 OK with a JSON body that contains all the information about all pods in the default namespace. I just wanted to emphasise that when you list pods, the resource type that comes back is PodList, whereas when we created our pod it was just Pod. Remember? The other thing I'd like to point out is that all of your pods' information is listed under items. All kubectl does is display some of the API's info in a human-readable way.

Deleting NGINX pod
pcap: deleting_pod.pcap (use the http filter on Wireshark)
Behind the scenes, we're just sending an HTTP DELETE to the Kubernetes master. Notice that the pod's name is included in the URI:
/api/v1/namespaces/default/pods/nginx ← this is the pod's name
HTTP DELETE, just like HTTP GET, is pretty straightforward. Our master node replies with HTTP 200 OK as well as a JSON body with all the info about the pod, including its termination. It's also worth emphasising that when our pod is deleted, the master node returns a JSON file with all the information available about the pod. I highlighted some interesting info: for example, the resource type is now just Pod (not PodList, as when we're listing our pods).
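To make the walkthrough reproducible, here is a sketch of a manifest and the three commands along the lines the article describes. Treat the details as assumptions: the pod name (nginx) and image match the URIs seen in the captures, but the author's exact files and flags may have differed.

    # nginx.yaml - a minimal Pod manifest with the four top-level fields called out above
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx

    # The three operations traced in the captures (create, list, delete)
    kubectl create -f nginx.yaml   # HTTP POST to /api/v1/namespaces/default/pods
    kubectl get pods               # HTTP GET, returns a PodList
    kubectl delete pod nginx       # HTTP DELETE to /api/v1/namespaces/default/pods/nginx

The plain-HTTP captures are possible because the API is reached through a local proxy (e.g. kubectl proxy) rather than directly over TLS, as mentioned in the intro.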