BIG-IP to Azure Dynamic IPsec Tunneling
In one of my previous posts we took a look at configuring the BIG-IP to act as a site-to-site VPN tunnel endpoint for connecting on-premises environments with Azure. At the time, the BIG-IP only supported policy-based, (static-route) VPN tunnels. Now, with the latest release of the F5 BIG-IP OS, (version 12.x), both dynamic and static-based IPsec VPNs are supported. "But Greg, why do I care?", you may ask. Excellent question! For a good primer on the two versions of IPsec VPNs, check out this blog post from Russ Slaten. From a practical standpoint, if your organization needs to connect multiple endpoints, (including Multi-Site, Point-to-Site, and VNet-to-VNet), to their Azure environment, you must utilize a dynamic route-based VPN configuration. So with that said, let's take a look at a typical configuration setup.

Note: The following steps assume the BIG-IP has been initially configured with settings including, but not limited to, licensing, provisioning, and network configuration. Additionally, an iApp template is available here; the iApp will facilitate the deployment described below.

Setup – Configure each of the following objects in BIG-IP as illustrated below.

Step 1. Create IPsec Policy – The IPsec policy created here utilizes 'SHA-1' for authentication, 'AES-256' for encryption, and Diffie-Hellman (MODP1024) Perfect Forward Secrecy. However, you have various options with regards to levels and types of authentication/encryption; refer to Azure's page for requirements.

Step 2. Create Azure Traffic Selector – During the initial tunnel negotiation, the Azure VPN gateway will advertise '0.0.0.0/0' for both source and destination subnets regardless of the actual on-premises and Azure VNet address spaces. The BIG-IP traffic selector should match this to allow for Azure-initiated tunnels. The actual traffic direction, (routing) will be determined by the static route entries, (see Step 6 below).

Step 3.
Create Azure Peer – The Azure IKE peer utilizes IKEv2, 'SHA-1' for authentication, 'AES-256' for encryption, Diffie-Hellman (MODP1024) Perfect Forward Secrecy, and a preshared key.

Step 4. Create IPsec Tunnel Profile and Tunnel – This is where dynamic, (aka route-based) IPsec and policy-based IPsec diverge. Utilizing an IPsec tunnel interface allows us to create static routes with the tunnel endpoint as the next hop. This way, any traffic destined for the Azure side will be routed through the tunnel. By contrast, policy-based VPNs require a policy that explicitly states which traffic can use the VPN.

Step 5. Create Tunnel Endpoint Self-IP and IPsec Interface Self-IP – Note: Although required, the address assigned is not utilized by the Azure tunnel; the only requirement is that the subnet be unique.

Step 6. Create Route – A static route with the newly created tunnel as the next hop allows any traffic hitting the BIG-IP and destined for the specified subnet to be routed through the IPsec tunnel.

Step 7. Create a Forwarding Virtual Server – This simple forwarding virtual server listens for and directs traffic over the IPsec tunnel.

Additional Links:
CodeShare - IPSec Tunnel Endpoint iApp Download
Connecting to Windows Azure with the BIG-IP
About VPN devices for site-to-site virtual network connections
Configuring IPsec between a BIG-IP system and a third-party device
Windows Azure Virtual Networks
Static vs Dynamic Routing Gateways in Azure – Russ Slaten Blog Post

Getting In Shape For Summer With BIG-IP Per App Virtual Edition
What happens when you cross a developer with a fitness instructor? I can't think of a punch line that won't make you hate me. But there's really no joke here. Last January F5 released a new version of BIG-IP, and something interesting was lurking under the hood of those extra decimal places. Similar to my discussion of BIG-IP's SELinux updates, these features don't always get noticed, but it's important for you to know about them when making deployment decisions. Grab your protein shake and let's see what changed so far.

Dropping The Weight

It's a universal fact that storage admins eat their young, so the last thing you want to do is ask for excessive amounts of disk space. Our developers took that to heart and trimmed BIG-IP Virtual Edition down to nearly half of its predecessor. Don't believe me? Here is a side-by-side of a vanilla BIG-IP EC2 instance in AWS. Here's BIG-IP v12 after a nice holiday season of figgy pudding and candied yams. BIG-IP v13.1.0.2 kept its New Year's resolution intact and shed those gigs! How many BIG-IP Virtual Editions do you really have deployed, and is the storage savings worth it? Some of you can count on one hand; some of you need your hands and toes... or your coworkers' hands and toes too. If I deploy 5 BIG-IP v13.1.0.2 (or later) instances, I'll save roughly 240GB of storage compared to earlier versions. What if I deployed 25? Over 1TB of storage saved. But who's deploying 25 BIG-IPs at a time? Hold that thought.

Usain Bolt Has Nothing On Our Deploy Time

Cloud deployments expect application availability in minutes, not hours. F5's developers are always looking for more ways to speed up the time between pressing the deployment button and actually passing traffic between clients and applications. An internal team did exactly that, and here are some results of our initial tests in AWS. Mind you, these numbers will always fluctuate depending on how complex your automation is and how complicated you like to make your configurations.
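The disk-savings arithmetic earlier is easy to sketch. Note the ~48GB-per-instance figure is just inferred from the "5 instances saves roughly 240GB" claim above, not an official image-size delta:

```python
def storage_savings_gb(instances, per_instance_savings_gb=48):
    """Rough disk savings from the slimmed-down VE image.

    The 48GB default is an approximation derived from the article's
    "5 instances saves roughly 240GB" figure; actual image sizes vary
    by version and module provisioning.
    """
    return instances * per_instance_savings_gb

print(storage_savings_gb(5))   # roughly 240GB saved
print(storage_savings_gb(25))  # over 1TB saved
```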
Notes on cloud testing in AWS:
Region: Canada
Size: m4.xlarge - 4 vCPU / 16GB RAM
Disk: io1/gp2
Image Size: 41G

And from what I've seen, these numbers are only getting better. We have faster times from initial deployment to processing traffic. Now what? Hold that thought too.

Per App VE - Where The Work Pays Off

Public or private cloud, administrators still deploy BIG-IP Virtual Edition similar to how they deploy BIG-IP hardware: a monolithic device providing reliable application delivery controller and security services supporting hundreds or thousands of applications. This is still a popular method to install BIG-IP in traditional or hybrid data centers. Developers can still programmatically configure monolithic BIG-IP virtual instances; application services spin up and down while updating nodes and configurations on the BIG-IP via our REST interface. However, you'll always have applications that may not have access to the "corporate" BIG-IP infrastructure, or an application owner may need a unique instance to test a CI/CD process segregated away from production infrastructure. Or your teams just like their own application sandboxes. Enter BIG-IP Per App Virtual Edition (VE). BIG-IP Per App VE is a bandwidth- and CPU-licensed offering that creates a reduced-cost solution designed to provide Local Traffic Manager (LTM) and Web Application Firewall (WAF) features programmatically on a per-app basis. Combined with BIG-IQ as a full management solution for orchestration, or just using the BIG-IQ License Manager (free), you can deploy BIG-IP wherever developers and application teams need it. BIG-IQ is NOT needed to purchase BIG-IP Per App VE, but it makes licensing a lot of devices easier. What does the license provide and how do you provision? I'm glad you asked.
The BIG-IP Per App VE License:
1 virtual IP address
3 virtual servers, (a combination of virtual address and a listening port)
25 Mbps or 200 Mbps throughput (license dependent)
LTM or LTM with Advanced WAF
1 interface

BIG-IP Per App VE Instance Requirements:
vCPU: 2 minimum, 4 maximum
Memory: 4GB minimum, 16GB maximum
Disk: 40GB minimum, 82GB maximum

Remember you were holding two thoughts? The post-diet VE image available in v13.1.0.2 or later and the improved boot times? Suddenly this should start coming together for you. You can now deploy a realistically sized, full-featured security and ADC solution that deploys and processes traffic when your application needs it to and costs a lot less than the traditional "monolithic BIG-IP". Developers can now work with BIG-IP LTM and Advanced WAF services where they need them instead of being forced to subscribe to the stricter management that comes with larger consolidated deployments.

Where are we going?

BIG-IP Per App VE is the first step toward providing a more robust solution for continuous delivery/integration platforms, and it puts security and ADC features closer to the developers and application owners. It's hard to require developers to adopt a security position when traditional infrastructure creates roadblocks (ITIL, I'm looking at you). You'll still need those restricted systems for mission-critical applications where downtime breaks SLAs and contractual agreements. For everything else, BIG-IP Per App VE is your answer. F5 is busily working on building the flexibility of the Per App VE license into emerging products to make deployments and scalability a piece of cake. So I'll ask you again... hold that thought. ;)

Securing Azure Web Apps with the BIG-IP
With the recent release of the BIG-IP virtual edition for Azure, enterprises can now take advantage of F5's various services, (WAF, multi-factor authentication, endpoint inspection, etc.) to securely deploy their cloud-based applications. This includes support for applications that are deployed on: IaaS – enterprise-deployed and managed Azure virtual machines (Virtual Machines); and PaaS – Azure-managed application hosting platforms, (Web Apps). For those of us confused by IaaS and PaaS, here's a post from Robert Greiner that provides a great overview of Azure's related offerings.

While securing applications that utilize virtual machines and virtual network services is comparable to traditional data center deployments, securing Azure Web Apps with an upstream device such as the BIG-IP presents an "interesting" challenge. Specifically, Web Apps run on shared platforms, (PaaS) with little to no visibility into, or control of, the underlying infrastructure. Additionally, there is no guarantee that the underlying networking configuration, (routes, IP addressing, etc.) won't change periodically. Fortunately, with the advent of the App Service Environment, (ASE) enterprises now have a means to deploy Web Apps into an ASE and reap the benefits of a dedicated and isolated environment that makes use of Azure virtual networking services. Here's the cool part: with a little "creative networking" we can now utilize an upstream BIG-IP to secure the applications inside of the App Service Environment. But hey, that's just me talking, (um….typing). Let's take a look at an actual deployment scenario.

F5Demo Web Apps

In this scenario, we have two Azure Web Apps, (f5demo-1 and f5demo-2) that we want to secure with the BIG-IP Application Security Manager, (ASM). To accomplish this, we need to perform the following steps:

1. Deploy BIG-IP(s) VE for Azure – Refer to this previous article for deploying the BIG-IP into an Azure ARM environment.
This particular BIG-IP has a network security group, (NSG) configured to allow public access via HTTP, HTTPS, and SSH.

2. Create App Service Environment – Refer to the following documentation to create an App Service Environment and migrate the existing Azure Web App(s), or deploy a new Web App into the ASE. The ASE will be connected to a v1 virtual network and subnet. Connecting to a virtual network is critical, as it allows us to utilize Azure networking services. In the next step, we will be using an NSG associated with the subnet where the ASE resides. The NSG is used to restrict access to the Azure Web Apps, only allowing access from the BIG-IP virtual appliance.

3. Create a Network Security Group – Now that our Web Apps have been added into the ASE, we will use PowerShell to create an NSG and assign it to the subnet where the ASE resides. The script below creates the NSG with rules allowing HTTP and HTTPS connections from the upstream BIG-IP, as well as allowing management access. Refer to this article for additional information on controlling inbound access to the ASE.
#Create new NSG, (Network Security Group)
#Note: values in angle brackets are placeholders - substitute your own location,
#BIG-IP public IP address, virtual network name, and subnet name
New-AzureNetworkSecurityGroup -Name 'WebApp-NSG' -Location '<location>' -Label "Network security group for app service environment"

#Create inbound rules set
Get-AzureNetworkSecurityGroup -Name 'WebApp-NSG' | Set-AzureNetworkSecurityRule -Name "AzureMgmt" -Type Inbound -Priority 100 -Action Allow -SourceAddressPrefix 'INTERNET' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '454-455' -Protocol TCP
Get-AzureNetworkSecurityGroup -Name 'WebApp-NSG' | Set-AzureNetworkSecurityRule -Name "HTTP" -Type Inbound -Priority 200 -Action Allow -SourceAddressPrefix '<BIG-IP public IP>' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '80' -Protocol TCP
Get-AzureNetworkSecurityGroup -Name 'WebApp-NSG' | Set-AzureNetworkSecurityRule -Name "HTTPS" -Type Inbound -Priority 300 -Action Allow -SourceAddressPrefix '<BIG-IP public IP>' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '443' -Protocol TCP
Get-AzureNetworkSecurityGroup -Name 'WebApp-NSG' | Set-AzureNetworkSecurityRule -Name "RemoteDebuggingVS2012" -Type Inbound -Priority 600 -Action Allow -SourceAddressPrefix 'INTERNET' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '4016' -Protocol TCP
Get-AzureNetworkSecurityGroup -Name 'WebApp-NSG' | Set-AzureNetworkSecurityRule -Name "RemoteDebuggingVS2013" -Type Inbound -Priority 700 -Action Allow -SourceAddressPrefix 'INTERNET' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '4018' -Protocol TCP
Get-AzureNetworkSecurityGroup -Name 'WebApp-NSG' | Set-AzureNetworkSecurityRule -Name "RemoteDebuggingVS2015" -Type Inbound -Priority 800 -Action Allow -SourceAddressPrefix 'INTERNET' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '4020' -Protocol TCP

#Assign NSG to subnet where WebApp resides
Get-AzureNetworkSecurityGroup -Name "WebApp-NSG" | Set-AzureNetworkSecurityGroupToSubnet -VirtualNetworkName '<virtual network name>' -SubnetName '<subnet name>'

4. Configure BIG-IP – With the above NSG
configured and applied to the VNet/subnet, we configure the BIG-IP to act as a reverse proxy for web traffic. The configuration of the BIG-IP will depend on the type of application being published, (deployed) and the services being utilized, (ASM, APM, LTM, DNS, etc.). In this scenario, we are publishing two web applications and protecting them with F5's WAF module, (ASM). To accomplish this, at a minimum we will need to:

Create Application Member Pool(s) – We create two member pools, (f5demo1_pool & f5demo2_pool). Pool members are referenced using their respective FQDNs, (see below and right). Note: SSL Offload vs. Bridging – With the BIG-IP acting as a full proxy, we can either choose to offload SSL connections, (i.e. decrypt SSL connections and send traffic to the backend servers unencrypted) or bridge SSL connections, (decrypt and re-encrypt). However, since the data between the BIG-IP and the backend pool members travels across the public Internet, I would strongly recommend SSL bridging over SSL offload.

Create Virtual Server(s) – We will create a virtual server to receive external connections and proxy requests into the backend pool(s). Since the BIG-IP is currently limited to one external-facing public VIP, (an Azure limitation) we will utilize an iRule to direct incoming connections to the appropriate pool depending upon the HTTP request host header. In this scenario, the external FQDNs for our two Azure Web Apps, (f5demo-1 and f5demo-2) are site1.f5demo.net and site2.f5demo.net respectively. Therefore, to ensure application functionality, we utilize the iRule, (shown below) to perform:

Host header redirection – requests received on the external FQDN are translated to the Azure Web App FQDN prior to being transmitted to the web app.
Traffic steering – The host header received in the HTTP request is used to determine the appropriate member pool for incoming connections. This allows us to service two separate applications from one external-facing virtual server.
when HTTP_REQUEST {
    set host [string tolower [HTTP::host]]
    if { $host contains "site1" } {
        HTTP::host "f5demo-1.appserverenv1.p.azurewebsites.net"
        pool f5demo1_pool
    } elseif { $host contains "site2" } {
        HTTP::host "f5demo-2.appserverenv1.p.azurewebsites.net"
        pool f5demo2_pool
    }
}

Additional Links:
https://devcentral.f5.com/s/articles/big-ip-in-azure-are-you-serious
https://azure.microsoft.com/en-us/documentation/articles/app-service-value-prop-what-is/
https://azure.microsoft.com/en-us/blog/introducing-app-service-environment/
https://azure.microsoft.com/en-us/documentation/articles/app-service-web-how-to-create-an-app-service-environment/
https://azure.microsoft.com/en-us/documentation/articles/app-service-app-service-environment-control-inbound-traffic/
https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-nsg/

Bridging the Gap between Azure Classic and ARM
Or should I say "Tunneling the gap between…." Ha ha ha, hmm... never mind. So you just finished deploying your first BIG-IP from the Azure Marketplace. You can barely contain your excitement!!! That web app that's been floating up in the cloud with its backside twisting in the proverbial wind will now be snug and safe behind a BIG-IP with ASM. You'll finally get a good night's sleep tonight! A single tear drop falls from your eye. I know... sigh... yeah, I know.

This is where our little story takes a turn. You start to configure the BIG-IP when all of a sudden it hits you like a ton of server racks: the web app was deployed in Azure Classic. But your BIG-IP is deployed in ARM! Classic and ARM environments don't play well together! This time it's not a single tear drop.

Drama aside, the above scenario is becoming quite common. As enterprises start to migrate their workloads from Azure's legacy model, (Classic, aka v1) to the new model, (ARM, aka v2), providing inter-connectivity between legacy resources located on Classic VNets and newer resources deployed in ARM VNets will be critical. Fortunately, while not very "elegant", there is a solution, and that solution is VPN. Connecting resources located on a Classic VNet to resources on an ARM VNet can be achieved by creating an IPsec VPN tunnel between the two infrastructures; essentially the same process as connecting an Azure infrastructure to an on-premises data center. For more detail, check out the guidance provided by Telmo Sampaio. Warning: while conceptually accurate, the guidance provided in the aforementioned article is out-of-date. Specifically, the PowerShell cmdlets used have been deprecated. But hey, that's ok. I'm here to help.

Integrating an ARM BIG-IP with a Classic Application

In this post we'll walk through the process of creating a dynamic IPsec tunnel between a legacy Classic VNet, hosting a multi-tiered web application, and an ARM-based BIG-IP virtual ADC. The end result is illustrated below.
This process will enable the BIG-IP to provide services, (reverse proxy, WAF, etc.) to the legacy application. The procedure includes the following high-level tasks:

Create a Classic dynamic VPN gateway using the legacy management portal, (https://manage.windowsazure.com);
Create an ARM dynamic VPN gateway using Azure PowerShell; and
Establish a VPN tunnel between the two gateway endpoints.

Note: To level set, the following example assumes that both the Classic and ARM infrastructures, (VNets, VMs, etc.) have already been deployed and properly configured. Additionally, the user, (that's you), is assumed to have a basic knowledge of networking and VPN technologies. Refer to Azure guidance for detailed information related to Azure technologies, (VPNs, virtual machines, networking, etc.). * Graphic borrowed, and modified, from the article authored by Telmo Sampaio.

Update the Classic Environment's VNet

As shown at right, in the f5demo Azure Classic environment we have provisioned several virtual machines, all of which are connected to the virtual network 'F5DEMO_WEST_VN'. To enable connectivity to the ARM VNet, we will need to:

Create a local network configuration representing the virtual network on the other side of the IPsec tunnel;
Enable site-to-site connectivity; and
Create a dynamic VPN gateway.

The following steps will be completed using the legacy portal, https://manage.windowsazure.com.

1. Create a 'Local Network' in the Classic Environment

From the portal, select 'NEW' –> 'NETWORK SERVICES' –> 'VIRTUAL NETWORK' –> 'ADD LOCAL NETWORK';
Enter a name for the local network. This corresponds to the virtual network that will be located on the other side of the VPN tunnel;
Enter an IP address for the ARM VPN gateway endpoint. Since we have yet to create the ARM gateway endpoint, any properly formed address will be sufficient, (1.2.3.4 in the example).
In a later step we will return to this screen and update the address;
Click on the arrow to continue;
Enter the address space that corresponds to the ARM VNet address space. Note: The VNets must utilize unique address spaces. In our example, we are using an address space of 192.168.0.0/16 for the ARM VNet;
Click on the check mark to complete.

2. Enable Site-to-site Connectivity

Select 'NETWORKS' –> 'VIRTUAL NETWORKS' –> '<virtual network>' –> 'CONFIGURE';
As illustrated below, ensure the 'Connect to the local network' checkbox is checked and the newly created local network is selected from the drop-down.

3. Create VPN Gateway

Select 'DASHBOARD' –> 'CREATE GATEWAY', (see below). Be patient; the creation process may take several minutes.

4. Capture Gateway Address and Shared Key

Once the gateway has been created, make note of the gateway IP address. This will be referenced in a future step;
Additionally, make note of the shared key. Select 'MANAGE KEYS' from the bottom of the screen and select the 'copy' icon.

Create ARM VNet VPN Gateway

As illustrated at right, we have already deployed our BIG-IP into a new ARM environment, all nicely consolidated into a single Azure resource group. To create and configure the ARM VNet gateway, we must use Azure PowerShell. As illustrated in the aforementioned Azure guidance, we could make use of PowerShell and ARM templates to configure the ARM gateway. However, for one-time configurations such as this, I prefer to stick with straight PowerShell cmdlets when available. Mind you, this is just my preference. Regardless of which method you choose, all the necessary objects can be created relatively easily with a single script. Speaking of scripts, I have one for you.

1. Run PowerShell Script

Modify and execute the following PowerShell script, which creates and configures the various ARM objects including:

Gateway public IP address;
Gateway subnet;
Local network, (corresponding to the Classic VNet);
VPN gateway; and
Gateway connection.
Note: You will need to modify the 'Parameters' section with the appropriate values. This includes the gateway IP address and shared key previously captured.

## Connect to Azure Subscription
Login-AzureRmAccount
clear

#Parameters
########################################
$RGName = 'BIGIP-ASM'
$Location = 'West US'
$ARMVNET = 'BIGIP-ASM'
$ARMGWPrefix = '192.168.2.0/24'
$ClassicVNET = 'f5demo_west_vn'
$ClassicPrefix = '172.16.101.0/24'
$ClassicGWIP = '40.118.168.206'
$SharedKey = 'BwiNaTNleW5CGMJbXTbCOnoN2uwFINTT'
#########################################

#Create ARM gateway public IP
$ARMGWIP = New-AzureRmPublicIpAddress -Name ($ARMVNET + "-gw-IP") -ResourceGroupName $RGName -Location $Location -AllocationMethod Dynamic

#Create ARM gateway subnet and update virtual network configuration
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName $RGName -Name $ARMVNET
Add-AzureRmVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix $ARMGWPrefix -VirtualNetwork $vnet
$SubnetConfig = (Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name GatewaySubnet).Id
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet

#Create Classic local network gateway
$ClassicGW = New-AzureRmLocalNetworkGateway -Name ($ClassicVNET + "-ln") -ResourceGroupName $RGName -Location $Location -AddressPrefix $ClassicPrefix -GatewayIpAddress $ClassicGWIP
$ARMGWConfig = New-AzureRmVirtualNetworkGatewayIpConfig -Name ($ARMVNET + "-2-" + $ClassicVNET + "-gwconfig") -SubnetId $SubnetConfig -PublicIpAddressId $ARMGWIP.Id

#Create ARM network gateway
$ARMGW = New-AzureRmVirtualNetworkGateway -Name ($ARMVNET + "-2-" + $ClassicVNET + "-gw") -ResourceGroupName $RGName -Location $Location -IpConfigurations $ARMGWConfig -GatewayType VPN -VpnType RouteBased

#Create gateway connection
New-AzureRmVirtualNetworkGatewayConnection -Name ($ARMVNET + "-2-" + $ClassicVNET + "-connection") -ResourceGroupName $RGName -Location $Location -VirtualNetworkGateway1 $ARMGW -LocalNetworkGateway2 $ClassicGW -ConnectionType IPsec -SharedKey $SharedKey

Once the script has completed, (it may take several minutes) the previously noted objects are created and can be viewed in the ARM portal, (https://portal.azure.com). Guess what? We're just about done! Not too bad.

2. Capture Gateway Address

As mentioned previously, after completing the ARM gateway creation, make note of the ARM gateway IP address, (see below - 40.118.253.238 in our example).

Update the Classic Local Network Gateway Address

1. Update Local Network Address

To complete the configuration, we need to modify the previously created local network object in the Classic portal and enable the VPN. Using the legacy portal, https://manage.windowsazure.com, connect to the Classic environment.

Select 'NETWORKS' –> 'LOCAL NETWORKS' –> '<local network>';
Select 'EDIT' located at the bottom of the page;
Modify the VPN device address using the ARM gateway IP address previously noted;
Click on the arrow to continue;
Click on the check mark to complete the update.

2. Enable VPN Connection

Select 'VIRTUAL NETWORKS';
Click on the 'Connect' icon located at the bottom of the page to establish the VPN tunnel.

Process Complete!

Once successfully completed, the tunnel status can be viewed in both the Classic portal as well as the ARM portal, as shown below respectively. With the tunnel established, cross-communication between Classic and ARM infrastructure resources can occur.

Classic Portal View
ARM Portal View

Cloud bursting, the hybrid cloud, and why cloud-agnostic load balancers matter
Cloud Bursting and the Hybrid Cloud

When researching cloud bursting, there are many directions Google may take you. Perhaps you come across services for airplanes that attempt to turn cloudy wedding days into memorable events. Perhaps you'd rather opt for a service that helps your IT organization avoid rainy days. Enter cloud bursting... yes, the one involving computers and networks instead of airplanes. Cloud bursting is a term that has been around in the tech realm for quite a few years. It, in essence, is the ability to allocate resources across various public and private clouds as an organization's needs change. These needs could be economic drivers, such as Cloud 2 having lower cost than Cloud 1, or perhaps capacity drivers, where additional resources are needed during business hours to handle traffic. For intelligent applications, other interesting things are possible with cloud bursting where, for example, demand in a geographical region suddenly needs capacity that is not local to the primary, private cloud. Here, one can spin up resources to locally serve the demand and provide a better user experience. Nathan Pearce summarizes some of the aspects of cloud bursting in this minute-long video, which is a great resource to remind oneself of some of the nuances of this architecture. While cloud bursting is a term that is generally accepted by the industry as an "on-demand capacity burst," Lori MacVittie points out that this architectural solution eventually leads to a Hybrid Cloud, where multiple compute centers are employed to serve demand among both private-based resources and public-based resources, or clouds, all the time.
The primary driver for this: practically speaking, there are limitations around how fast data that is critical to one's application (think databases, for example) can be replicated across the internet to different data centers. Thus, the promises of "on-demand" cloud bursting scenarios may be short-lived, eventually leaning in favor of multiple "always-on compute capacity centers" as loads increase for a given application. In any case, it is important to understand that multiple locations, across multiple clouds, will ultimately be serving application content in the not-too-distant future.

An example hybrid cloud architecture where services are deployed across multiple clouds. The "application stack" remains the same, using LineRate in each cloud to balance the local application, while a BIG-IP Local Traffic Manager balances application requests across all of the clouds.

Advantages of cloud-agnostic Load Balancing

As one might conclude from the cloud bursting and hybrid cloud discussion above, having multiple clouds running an application creates a need for user requests to be distributed among the resources and for automated systems to be able to control application access and flow. In order to provide the best control over how one's application behaves, it is optimal to use a load balancer to serve requests. No DNS or network routing changes need to be made, and clients continue using the application as they always did as resources come online or go offline; many times, too, these load balancers offer advanced functionality alongside the load balancing service that provides additional value to the application. Having a load balancer that operates the same way no matter where it is deployed becomes important when resources are distributed among many locations. Understanding expectations around configuration, management, reporting, and behavior of a system limits issues for application deployments and discrepancies between how one platform behaves versus another.
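To make the "same behavior everywhere" point concrete, here is a minimal, hypothetical sketch of a cloud-agnostic pool model. This is not LineRate's actual API; every name below is invented for illustration. The idea is simply that the rendered load balancer configuration is identical in shape regardless of which provider hosts each member:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Member:
    host: str
    port: int
    cloud: str  # informational only - the LB config never depends on it

def render_pool(name, members):
    """Render one normalized pool definition.

    The output looks the same whether members run in AWS, Azure, or a
    private datacenter - the provider never leaks into the config.
    """
    lines = [f"pool {name}"]
    for m in sorted(members, key=lambda m: (m.host, m.port)):
        lines.append(f"  member {m.host}:{m.port}")
    return "\n".join(lines)

print(render_pool("app_pool", [
    Member("10.0.1.10", 8080, "private-dc"),
    Member("ec2-54-1-2-3.compute.amazonaws.com", 8080, "aws"),
    Member("myapp.azurewebsites.net", 443, "azure"),
]))
```

Because the pool definition carries no provider-specific fields, swapping a private-datacenter member for an Azure one is a data change, not a configuration-model change.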
With a load balancer like F5's LineRate product line, anyone can programmatically manage the servers providing an application to users. Leveraging this programmatic control, application providers have an easy way to spin capacity up and down in any arbitrary cloud, retain a familiar yet powerful feature set for their load balancer, redistribute resources for an application, and provide a seamless experience back to the user. No matter where the load balancer deployment is, LineRate can work hand-in-hand with any web service provider, whether considered a cloud or not. Your data, and perhaps more importantly your cost centers, are no longer locked down to one vendor or one location. With the right application logic paired with LineRate Precision's scripting engine, an application can dynamically react to take advantage of market pricing or general capacity needs. Consider the following scenarios where cloud-agnostic load balancers have advantages over vendor-specific ones:

Economic Drivers
Time-dependent instance pricing – Spot instances with much lower cost become available at night. Example: my startup's billing system can take advantage of better pricing per unit of work in the public cloud at night versus the private datacenter.
Multiple-vendor instance pricing – Cloud 2 just dropped their high-memory instance pricing lower than Cloud 1's, which is useful for your workload during normal business hours. Example: my application's primary workload is migrated to Cloud 2 with a simple config change.
Competition – Having multiple cloud deployments simultaneously increases competition, and thus your organization's negotiated pricing contracts become more attractive over time.

Computational Drivers
Traffic Spikes – Someone in marketing just tweeted about our new product. All of a sudden, the web servers that traditionally handled all the loads thrown at them just fine are getting slashdotted by people all around North America placing orders.
Instead of having humans react to the load and spin up new instances to handle it - or even worse, doing nothing - your LineRate system and application worked hand-in-hand to spin up a few instances in Microsoft Azure's Texas location and a few more in Amazon's Virginia region. This helps you distribute requests from geographically diverse locations: your existing datacenter in Oregon, the central-US Microsoft cloud, and the east-coast-based Amazon cloud. Orders continue to pour in without any system downtime or, worse, lost customers.

Compute Orchestration – A mission-critical application in your organization's private cloud unexpectedly needs extra compute power, but needs to stay internal for compliance reasons. Fortunately, your application can spin up public cloud instances and migrate traffic out of the private datacenter without affecting any users or data integrity. Your LineRate instance reaches out to Amazon to boot instances and migrate important data. More importantly, application developers and system administrators don't even realize the application has migrated, since everything behaves exactly the same in the cloud location. Once the cloud systems boot, alerts are made to F5's LTM and LineRate instances, which migrate traffic to the new servers, allowing the mission-critical app to compute away. You just saved the day!

The benefit of having a cloud-agnostic load balancing solution for connecting users with an organization's applications is not only a unified user experience, but also a powerful, unified way of controlling the application for its administrators. If all of a sudden an application needs to be moved from, say, a private datacenter with a 100 Mbps connection to a public cloud with a GigE connection, this can easily be done without having to relearn a new load balancing solution.
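The burst logic in the scenarios above boils down to a simple control decision: compare current demand against local capacity, and place any overflow in whichever remote cloud is currently cheapest. Here is a toy sketch of that decision; the cloud names and prices are made up for illustration, and a real deployment would drive this from the load balancer's REST API and live pricing data rather than hardcoded values:

```python
import math

def plan_burst(demand_rps, local_capacity_rps, per_instance_rps, cloud_prices):
    """Return (cloud, instance_count) covering overflow demand, or None.

    cloud_prices maps a cloud name to an hourly price per instance;
    the cheapest cloud wins. The instance count is rounded up so the
    overflow is fully covered.
    """
    overflow = demand_rps - local_capacity_rps
    if overflow <= 0:
        return None  # local capacity suffices - no burst needed
    cheapest = min(cloud_prices, key=cloud_prices.get)
    return cheapest, math.ceil(overflow / per_instance_rps)

# Hypothetical pricing snapshot: Azure is currently cheaper than AWS.
prices = {"aws-us-east": 0.12, "azure-central-us": 0.10}
print(plan_burst(demand_rps=900, local_capacity_rps=500,
                 per_instance_rps=150, cloud_prices=prices))
# -> ('azure-central-us', 3): 400 rps of overflow at 150 rps per instance
```

Wire a decision like this to instance-boot calls and the load balancer's member-management API, and the "marketing tweet" spike gets absorbed without a human in the loop.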
F5's LineRate product is available for bare-metal deployments on x86 hardware and virtual machine deployments, and has recently added an Amazon Machine Image (AMI). All of these deployment types leverage the same familiar, powerful tools that LineRate offers: lightweight and scalable load balancing, modern management through its intuitive GUI or the industry-standard CLI, and automated control via its comprehensive REST API. LineRate Point Load Balancer provides hardened, enterprise-grade load balancing and availability services, whereas LineRate Precision Load Balancer adds powerful Node.js programmability, enabling developers and DevOps teams to leverage thousands of Node.js modules to easily create custom controls for application network traffic. Learn about some of LineRate's advanced scripting and functionality here, or try it out for free to see if LineRate is the right cloud-agnostic load balancing solution for your organization.