Active/Active load balancing examples with F5 BIG-IP and Azure load balancer
Background

A couple of years ago I wrote an article about some practical considerations when using Azure Load Balancer. Over time it's been used by customers, so I thought I'd add a further article that specifically discusses Active/Active load balancing options. I'll use Azure's standard load balancer as an example, but you can apply this to other cloud providers; in fact, the customer I helped most recently with this very question was running in Google Cloud. This article focuses on using standard TCP load balancers in the cloud.

Why Active/Active?

Most customers run 2x BIG-IP's in an Active/Standby cluster on-premises, and it's extremely common to do the same in public cloud. Since simplicity and supportability are key to successful migration projects, it's often best to stick with architectures you know and can support. However, if you are confident in your cloud engineering skills, or if you want more than 2x BIG-IP's processing traffic, you may consider running them all Active. Of course, if your total throughput for N BIG-IP's exceeds the throughput that N-1 can support, the loss of a single VM will leave you with more traffic than the remaining device(s) can handle. I recommend choosing Active/Active only if you're confident in your purpose and skillset.

Let's define Active/Active

Sometimes this term is used with ambiguity. I'll cover three approaches using Azure load balancer, each slightly different:

- multiple standalone devices
- Sync-Only group using Traffic Group None
- Sync-Failover group using Traffic Group None

Each of these will use a standard TCP cloud load balancer. This article does not cover other ways to run multiple Active devices, which I've outlined at the end for completeness.

Multiple standalone appliances

This is a straightforward approach and an ideal target for cloud architectures. When multiple devices each receive and process traffic independently, the overhead work of disaggregating traffic and spreading it between the devices can be done by other solutions, like a cloud load balancer. (Other out-of-scope solutions could be ECMP, BGP, DNS load balancing, or gateway load balancers.) Scaling out horizontally can be a matter of simple automation, and there is no cluster configuration to maintain. The only limit to the number of BIG-IP's will be any limits of the cloud load balancer.

The main disadvantage of this approach is the fear of misconfiguration by human operators. Often a customer is not confident that they can configure two separate devices consistently over time. This is why automation for configuration management is ideal. In the real world, it's also a reason customers consider our next approach.

Clustering with a sync-only group

A Sync-Only device group allows us to sync some configuration data between devices, but not fail over configuration objects in floating traffic groups between devices, as we would in a Sync-Failover group. With this approach, we can sync traffic objects between devices, assign them to Traffic Group None, and both devices will be considered Active. Both devices will process traffic, but changes only need to be made to a single device in the group.
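In tmsh, the Sync-Only group and a synced partition pinned to Traffic Group None might look roughly like this. This is a sketch only: it assumes device trust is already established between two devices named bigip1.example.com and bigip2.example.com, and the exact folder property names and accepted values can vary by TMOS version.

# on one device: create the Sync-Only group and add both members
tmsh create cm device-group syncGroup devices add { bigip1.example.com bigip2.example.com } type sync-only auto-sync enabled

# create the partition that will hold the shared traffic objects
tmsh create auth partition app1

# associate the partition's folder with the Sync-Only group and keep it out of the
# floating traffic groups ("Traffic Group None" in the GUI); on some versions you may
# also need to clear the folder's inherited device-group/traffic-group settings first
tmsh modify sys folder /app1 device-group syncGroup traffic-group none

# push the initial sync from this device
tmsh run cm config-sync to-group syncGroup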
In the example pictured above:

- The 2x BIG-IP devices are in a Sync-Only group called syncGroup
- The /Common partition is not synced between devices
- The /app1 partition is synced between devices
- The /app1 partition has Traffic Group None selected
- The /app1 partition has the Sync-Only group syncGroup selected
- Both devices are Active and will process traffic received on Traffic Group None

The disadvantage to this approach is that you can create an invalid configuration by referring to objects that are not synced. For example, if Nodes are created in /Common, they will exist on the device on which they were created, but not on other devices. If a Pool in /app1 then references Nodes from /Common, the resulting configuration will be invalid for devices that do not have these Nodes configured.

Another consideration is that an operator must use and understand partitions. These are simple and should be embraced. However, not all customers understand the use of partitions, and many prefer to use /Common only, if possible.

The big advantage here is that changes only need to be made on a single device, and they will be replicated to other devices (up to 32 devices in a Sync-Only group). The risk of inconsistent configuration due to human error is reduced. Each device has a small green "Active" icon in the top left hand of the console, reminding operators that each device is Active and will process incoming traffic on Traffic Group None.

Failover clustering using Traffic Group None

Our third approach is very similar to our second approach. However, instead of a Sync-Only group, we will use a Sync-Failover group. A Sync-Failover group will sync all traffic objects in the default /Common partition, allowing us to keep all traffic objects in the default partition and avoid the use of additional partitions. This creates a traditional Active/Standby pair for a failover traffic group, and a Standby device will not respond to data plane traffic.

So how do we make this Active/Active? When we create our VIPs in Traffic Group None, all devices will process traffic received on these Virtual Servers. One device will show "Active" and the other "Standby" in its console, but this is only the status of the floating traffic group. We don't need to use the floating traffic group, and by using Traffic Group None we have an Active/Active configuration in terms of traffic flow.

The advantage here is similar to the previous example: human operators only need to configure objects on a single device, and all changes are synced between device group members (up to 8 in a Sync-Failover group). Another advantage is that you can use the /Common partition, which was not possible with the previous example.

The main disadvantage here is that the console will show the word "Active" on one device and "Standby" on the other, and this can confuse an operator that is familiar only with Active/Standby clusters using traffic groups for failover. While this third approach is very legitimate and technically sound, it's worth considering whether your daily operations and support teams have the knowledge to support it.

Other considerations

Source NAT (SNAT)

It is almost always a requirement that you SNAT traffic when using an Active/Active architecture, and this especially applies to the public cloud, where our options for other networking tricks are limited. If you have a requirement to see true source IP and need to use multiple devices in Active/Active fashion, consider using Azure or AWS Gateway Load Balancer options.
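To make the SNAT point concrete, here is what a VIP for either clustered approach might look like in tmsh, with SNAT automap and the virtual address kept on Traffic Group None. This is a sketch: the names and IPs are examples, for the Sync-Only approach these objects would live in /app1 rather than /Common, and the exact traffic-group value accepted on the virtual address may vary by version.

# pool and virtual server with SNAT automap, created once and then synced
tmsh create ltm pool pool_app1 monitor https members add { 10.1.2.10:443 10.1.2.11:443 }
tmsh create ltm virtual vs_app1 destination 10.1.1.100:443 ip-protocol tcp pool pool_app1 source-address-translation { type automap }

# keep the auto-created virtual address out of the floating traffic group so every
# device answers for it ("Traffic Group None"); some versions may expect
# traffic-group-local-only here instead of none
tmsh modify ltm virtual-address 10.1.1.100 traffic-group none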
Alternative solutions like NGINX and F5 Distributed Cloud may also be worth considering in high-value, hard-requirement situations.

Alternatives to a cloud load balancer

This article is not referring to F5 with Azure Gateway Load Balancer, or to F5 with AWS Gateway Load Balancer. Those gateway load balancer solutions are another way for customers to run appliances as multiple standalone devices in the cloud. However, they typically require routing, not proxying, the traffic (i.e., they don't allow destination NAT, which many customers intend with BIG-IP).

This article is also not referring to other ways you might achieve Active/Active architectures, such as DNS-based high availability or routing protocols like BGP or ECMP. Note that using multiple traffic groups to achieve Active/Active BIG-IP's - the traditional approach on-prem or in private cloud - is not practical in public cloud, as briefly outlined below.

Failover of traffic groups with Cloud Failover Extension (CFE)

One option for Active/Standby high availability of BIG-IP is to use the CFE, which can programmatically update IP addresses and routes in Azure at the time of device failure. Since CFE does not support Active/Active scenarios, it is appropriate only for failover of a single traffic group (i.e., Active/Standby).

Conclusion

Thanks for reading! In general I see that Active/Standby solutions work for many customers, but if you are confident in your skills and have a need for Active/Active F5 BIG-IP devices in the cloud, please reach out if you'd like me to walk you through these options and explore any other possibilities.

Related articles
- Practical Considerations using F5 BIG-IP and Azure Load Balancer
- Deploying F5 BIG-IP with Azure Cross-Region Load Balancer
BIG-IP VE on Google Cloud Platform

Hot off Cloud Month, let's look at how to deploy BIG-IP Virtual Edition on the Google Cloud Platform. This is a simple single-NIC, single-IP deployment, which means that both management traffic and data traffic go through the same NIC and are accessible with the same IP address. Before you can create this deployment, you need a license from F5; you can also get a trial license here. We're using BIG-IP VE version 13.0.0 HF2 EHF3 for this example.

Alright, let's get started. Open the console, go to Cloud Launcher and search for F5. Pick the version you want, then click Launch on Compute Engine. I'm going to change the name so the VM is easier to find; for everything else, I'll leave the defaults. Then, down under firewall, if these ports aren't already open on your network, you can open 22, which you need so you can use SSH to connect to the instance, and 8443, so you can use the BIG-IP Configuration utility (the web tool that you use to manage the BIG-IP). Now click Deploy. It takes just a few minutes to deploy.

Once deployed, you can connect straight from the Google console. This screen cap shows SSH, but if you use the browser window, you need to change the Linux username to admin in order to connect. Once done, you'll get that command line. If you choose the gcloud command line option and then run it in the gcloud shell, you need to put admin@ in front of the instance name in order to connect.

We like using PuTTY, so first we need to get the external IP address of the instance: look at the instance and copy the external IP. Then go into Metadata > SSH keys to confirm that the keys (added earlier) are there. Whichever keys you want to use to connect, you should put them here. BIG-IP VE grabs these keys every minute or so, so any of the non-expired keys in this list can access the instance. If you remove keys from this list, they'll be removed from BIG-IP and will no longer have access. You do have the option to edit the VM instance and block project-wide keys if you'd like. Because my keys are already in this list, I can open PuTTY now and specify my keys in order to connect.

The reason we're using SSH to connect is that you need to set an admin password that's used to connect to the BIG-IP Config utility. So I'm going to set the admin password here (and again, you can do these same steps no matter how you connect to the instance). Launch tmsh, then run:

modify auth password admin

And then save the change with:

save sys config

Now we can connect and log in to the BIG-IP Config utility using https, the external IP, and port 8443. Type admin and the password we just set, and then we can proceed with licensing and provisioning BIG-IP VE.

A few other notes:

- If you're used to creating a self IP and VLAN, you don't need to do that. In this single-NIC deployment, those things are taken care of for you.
- If you want to start sending traffic, just set up your pool and virtual server the way you normally would. Just make sure, if your app is using port 443 for example, that you add that firewall rule to your network or your instance.
- And finally, you most likely want to make your external IP address static, which you can do in the UI by choosing Networking, then External IP addresses, then Type.

If you need any help, here's the Google Cloud Platform/BIG-IP VE Setup Guide and/or watch the full video.

ps
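For those who prefer the command line, the firewall and static-IP steps described above might look roughly like this with gcloud. This is a sketch: the rule name, network, address name, IP, source range, and region are all examples, not values from the walkthrough.

# open SSH (22) and the BIG-IP Configuration utility (8443); restrict the source
# range to your admin network rather than leaving it wide open
gcloud compute firewall-rules create allow-bigip-mgmt \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22,tcp:8443 \
    --source-ranges=203.0.113.0/24

# promote the VM's ephemeral external IP to a static address
gcloud compute addresses create bigip-ve-ip \
    --addresses=34.86.10.20 \
    --region=us-east4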
Solving for true-source IP with global load balancers in Google Cloud

Background

Recently a customer approached us with requirements that may seem contradictory: true source IP persistence, global distribution of load balancers (LB's), TCP-only proxying, and alias IP ranges in Google Cloud. With the help of PROXY protocol support, we offered a straightforward solution that is worth documenting for others.

Customer requirements

- We have NGINX WAF running on VM instances in Google Cloud Platform (GCP)
- We want to expose these to the Internet with a global load balancer
- We must know the true source IP of clients when traffic reaches our WAF
- We do not want to use an application (HTTP/S) load balancer in Google; i.e., we do not want to perform TLS decryption with GCP or use HTTP/HTTPS load balancing. Therefore, we cannot use X-Forwarded-For headers to preserve true source IP
- Additionally, we'd like to use Cloud Armor. How can we add on a CDN/DDoS/etc. provider if needed?

Let's solve for these requirements by finding the load balancer to use, and then working out how to preserve and use true source IP.

Which load balancer type fits best?

This guide outlines our options for Google LB's. Because our requirements include global, TCP-only load balancing, we will choose the highlighted LB type of "Global external proxy Network Load Balancer".

Proxy vs Passthrough

Notice that global LB's proxy traffic. They do not preserve the source IP address as a passthrough LB does. Global IP addresses are advertised from multiple, globally-distributed front end locations using Anycast IP routing. Proxying from these locations allows traffic symmetry, but Source NAT causes loss of the original client IP address. I've added some comments into a Google diagram below to show our core problem here.

PROXY protocol support with Google load balancers

Google's TCP LB documentation outlines our challenge and solution: "By default, the target proxy does not preserve the original client IP address and port information. You can preserve this information by enabling the PROXY protocol on the target proxy." Without PROXY protocol support, we could only meet 2 out of 3 core requirements with any given load balancer type. PROXY protocol allows us to meet all 3 simultaneously.

Setting up our environment in Google

The script below configures a global TCP proxy network load balancer and associated objects. It is assumed that a VPC network, subnet, and VM instances exist already. The script assumes the VM's are F5 BIG-IP devices, although our demo will use Ubuntu VM's with NGINX installed. Both BIG-IP and NGINX can easily receive and parse PROXY protocol.

# GCP Environment Setup Guide for Global TCP Proxy LB with Proxy Protocol. Credit to Tony Marfil, @tmarfil

# Step 1: Prerequisites
# Before creating the network endpoint group, ensure the following GCP resources are already configured:
#
# - A VPC network named my-vpc.
# - A subnet within this network named outside.
# - Instances ubuntu1 and ubuntu2 should have alias IP addresses configured: 10.1.2.16 and 10.1.2.17, respectively, both using ports 80 and 443.
#
# Now, create a network endpoint group f5-neg1 in the us-east4-c zone with the default port 443.

gcloud compute network-endpoint-groups create f5-neg1 \
  --zone=us-east4-c \
  --network=my-vpc \
  --subnet=outside \
  --default-port=443

# Step 2: Update the Network Endpoint Group
#
# Add two instances with specified IPs to the f5-neg1 group.
gcloud compute network-endpoint-groups update f5-neg1 \
  --zone=us-east4-c \
  --add-endpoint 'instance=ubuntu1,ip=10.1.2.16,port=443' \
  --add-endpoint 'instance=ubuntu2,ip=10.1.2.17,port=443'

# Step 3: Create a Health Check
#
# Set up an HTTP health check f5-healthcheck1 that uses the serving port.

gcloud compute health-checks create http f5-healthcheck1 \
  --use-serving-port

# Step 4: Create a Backend Service
#
# Configure a global backend service f5-backendservice1 with TCP protocol and attach the earlier health check.

gcloud compute backend-services create f5-backendservice1 \
  --global \
  --health-checks=f5-healthcheck1 \
  --protocol=TCP

# Step 5: Add Backend to the Backend Service
#
# Link the network endpoint group f5-neg1 to the backend service.

gcloud compute backend-services add-backend f5-backendservice1 \
  --global \
  --network-endpoint-group=f5-neg1 \
  --network-endpoint-group-zone=us-east4-c \
  --balancing-mode=CONNECTION \
  --max-connections=1000

# Step 6: Create a Target TCP Proxy
#
# Create a global target TCP proxy f5-tcpproxy1 to handle routing to f5-backendservice1.

gcloud compute target-tcp-proxies create f5-tcpproxy1 \
  --backend-service=f5-backendservice1 \
  --proxy-header=PROXY_V1 \
  --global

# Step 7: Create a Forwarding Rule
#
# Establish global forwarding rules for TCP traffic on ports 80 & 443.

gcloud compute forwarding-rules create f5-tcp-forwardingrule1 \
  --ip-protocol TCP \
  --ports=80 \
  --global \
  --target-tcp-proxy=f5-tcpproxy1

gcloud compute forwarding-rules create f5-tcp-forwardingrule2 \
  --ip-protocol TCP \
  --ports=443 \
  --global \
  --target-tcp-proxy=f5-tcpproxy1

# Step 8: Create a Firewall Rule
#
# Allow ingress traffic on specific ports for health checks with the rule allow-lb-health-checks.

gcloud compute firewall-rules create allow-lb-health-checks \
  --direction=INGRESS \
  --priority=1000 \
  --network=my-vpc \
  --action=ALLOW \
  --rules=tcp:80,tcp:443,tcp:8080,icmp \
  --source-ranges=35.191.0.0/16,130.211.0.0/22 \
  --target-tags=allow-health-checks

# Step 9: Add Tags to Instances
#
# Tag instances ubuntu1 and ubuntu2 to include them in health checks.

gcloud compute instances add-tags ubuntu1 --tags=allow-health-checks --zone=us-east4-c
gcloud compute instances add-tags ubuntu2 --tags=allow-health-checks --zone=us-east4-c

## TO PULL THIS ALL DOWN: (uncomment the lines below)
# gcloud compute firewall-rules delete allow-lb-health-checks --quiet
# gcloud compute forwarding-rules delete f5-tcp-forwardingrule1 --global --quiet
# gcloud compute forwarding-rules delete f5-tcp-forwardingrule2 --global --quiet
# gcloud compute target-tcp-proxies delete f5-tcpproxy1 --global --quiet
# gcloud compute backend-services delete f5-backendservice1 --global --quiet
# gcloud compute health-checks delete f5-healthcheck1 --quiet
# gcloud compute network-endpoint-groups delete f5-neg1 --zone=us-east4-c --quiet
#
# Then delete your VM's and VPC network if desired.

Receiving PROXY protocol using NGINX

We now have 2x Ubuntu VM's running in GCP that will receive traffic when we target our global TCP proxy LB's IP address. Let's use NGINX to receive and parse the PROXY protocol traffic.
When proxying and "stripping" the PROXY protocol headers from traffic, NGINX can append an additional header containing the value of the source IP obtained from PROXY protocol:

server {
    listen 80 proxy_protocol;  # tell NGINX to expect traffic with PROXY protocol
    server_name customer1.my-f5.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header x-nginx-ip $server_addr;                          # append a header to pass the IP address of the NGINX server
        proxy_set_header x-proxy-protocol-source-ip $proxy_protocol_addr;  # append a header to pass the src IP address obtained from PROXY protocol
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;                           # append a header to pass the src IP of the connection between Google's front end LB and NGINX
        proxy_cache_bypass $http_upgrade;
    }
}

Displaying true source IP in our web app

You might notice above that NGINX is proxying to http://localhost:3000. I have a simple NodeJS app to display a page with HTTP headers:

const express = require('express');
const app = express();
const port = 3000;

// set the view engine to ejs
app.set('view engine', 'ejs');

app.get('/', (req, res) => {
  const proxy_protocol_addr = req.headers['x-proxy-protocol-source-ip'];
  const source_ip_addr = req.headers['x-real-ip'];
  const array_headers = JSON.stringify(req.headers, null, 2);
  const nginx_ip_addr = req.headers['x-nginx-ip'];
  res.render('index', {
    proxy_protocol_addr: proxy_protocol_addr,
    source_ip_addr: source_ip_addr,
    array_headers: array_headers,
    nginx_ip_addr: nginx_ip_addr
  });
})

app.listen(port, () => {
  console.log('Server is listening on port 3000');
})

For completeness, NodeJS is using the EJS template engine to build our page. The file views/index.ejs is here:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Demo App</title>
</head>
<body class="container">
  <main>
    <h2>Hello World!</h2>
    <p>True source IP (the value of <code>$proxy_protocol_addr</code>) is <b><%= typeof proxy_protocol_addr != 'undefined' ? proxy_protocol_addr : '' %></b></p>
    <p>IP address that NGINX received the connection from (the value of <code>$remote_addr</code>) is <b><%= typeof source_ip_addr != 'undefined' ? source_ip_addr : '' %></b></p>
    <p>IP address that NGINX is running on (the value of <code>$server_addr</code>) is <b><%= typeof nginx_ip_addr != 'undefined' ? nginx_ip_addr : '' %></b></p>
    <h3>Request Headers at the app:</h3>
    <pre><%= typeof array_headers != 'undefined' ? array_headers : '' %></pre>
  </main>
</body>
</html>

Cloud Armor

Cloud Armor is an easy add-on when using Google load balancers. If required, an admin can:

- Create a Cloud Armor security policy
- Add rules (for example, rate limiting) to this policy
- Attach the policy to a TCP load balancer

In this way "edge protection" is applied to your Google workloads with little effort.

Our end result

This small demo app shows that true source IP can be known to an application running on Google VM's when using the Global TCP Network Load Balancer. We've achieved this using PROXY protocol and NGINX. We've used NodeJS to display a web page with proxied header values. Thanks for reading. Please reach out with any questions!
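As a follow-up to the Cloud Armor steps above, attaching a basic policy to the backend service might look roughly like this. Treat it as a sketch: the policy name and source range are examples, and the rule actions and policy types that apply to a TCP proxy backend can differ from those used with HTTP(S) load balancers.

# create a security policy and a simple source-IP deny rule
gcloud compute security-policies create edge-policy \
    --description="edge protection for the TCP proxy LB"

# action semantics vary by LB type; rate-based rules are another option
gcloud compute security-policies rules create 1000 \
    --security-policy=edge-policy \
    --src-ip-ranges="198.51.100.0/24" \
    --action=deny-403

# attach the policy to the backend service created earlier
gcloud compute backend-services update f5-backendservice1 \
    --security-policy=edge-policy \
    --global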
BIG-IP VE on Google Cloud: Curl returns 404 - Public URI path not registered

Been scratching my head on this problem for some time now. To avoid testing on our physical production appliances, I recently deployed a BIG-IP VE on Google Cloud. When I try to test the REST API via some simple curl calls, I get the following:

curl -sku admin:admin https://vm-external-ip:8443/mgmt/tm/sys/software/volume

{"code":404,"message":"Public URI path not registered:/tm/sys/software/volume","referer":"xx.xx.xx.xx","restOperationId":6004830,"kind":":resterrorresponse"}

When running the same call on our existing physical appliance, it works as expected:

curl -sku admin:admin https://appliance-ip/mgmt/tm/sys/software/volume

{"kind":"tm:sys:software:volume:volumecollectionstate","selfLink":"https://localhost/mgmt/tm/sys/software/volume?ver=12.1.2","items":[{"kind":"tm:sys:software:volume:volumestate","name":"HD1.1",......}

Has anyone experienced this before? If so, is there something different about the deployment in the cloud that I missed? Thanks in advance!
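A few generic checks that can help narrow down this kind of 404. This is a sketch only, not a confirmed fix from the thread; the address and credentials are placeholders.

# does a simpler iControl REST endpoint respond on the same port?
curl -sku admin:admin https://vm-external-ip:8443/mgmt/tm/sys/version

# from an SSH session on the VE, check that the REST framework daemon is running
bigstart status restjavad

# and compare the same call locally on the instance
# (single-NIC VE deployments move the GUI/REST listener to port 8443)
curl -sku admin:admin https://localhost:8443/mgmt/tm/sys/software/volume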
Installing and running iControl extensions in isolated GCP VPCs

BIG-IP instances launched on Google Cloud Platform usually need access to the internet to retrieve extensions, install DO and AS3 declarations, and get to any other run-time assets pulled from public URLs during boot. This allows decoupling of BIG-IP releases from the libraries and extensions that enhance GCP deployments, and is generally a good thing.

What if the BIG-IP doesn't have access to the Internet?

Best practices for Google Cloud recommend that VMs are deployed with the minimal set of access requirements. For some that means egress to the internet is restricted too:

- BIG-IP VMs do not have public IP addresses.
- A NAT Gateway or NATing VM is not present in the VPC.
- Default VPC network routes to the internet have been removed.

If you have a private artifact repository available in the VPC, supporting libraries and onboarding resources could be added there and retrieved during initialization as needed, or you could create customized BIG-IP images that have the supporting libraries pre-installed (see BIG-IP image generator for details). Both of those methods solve the problem of installing run-time components without internet access, but Cloud Failover Extension, AS3 Service Discovery, and Telemetry Streaming must be able to make calls to GCP APIs, and GCP APIs are presented as endpoints on the public internet.

For example, Cloud Failover Extension will not function correctly out of the box when the BIG-IP instances are not able to reach the internet directly or via a NAT, because the extension must have access to Storage APIs for shared-state persistence and to Compute APIs to update network resources. If the BIG-IP is deployed without functioning routes to the internet, CFE cannot function as expected.

Figure 1: BIG-IP VMs (1) cannot reach public API endpoints (2) because routes to internet (3) are removed

Given that constraint, how can we make CFE work in truly isolated VPCs where internet access is prohibited?

Private Google Access

Enabling Private Google Access on each VPC subnet that may need to access Google Cloud APIs changes the underlying SDN so that the CIDRs for restricted.googleapis.com (or private.googleapis.com †) will be routed without going through the internet. When combined with a private DNS zone that shadows all googleapis.com lookups to use the chosen protected endpoint range, the VPC networks effectively have access to all GCP APIs. The steps to do so are simple:

1. Enable Private Google Access on each VPC subnet where a GCP API call may be sourced.
2. Create a Cloud DNS private zone for googleapis.com that contains two records:
   - a CNAME for *.googleapis.com that responds with restricted.googleapis.com
   - an A record for restricted.googleapis.com that resolves to each host in 199.36.153.4/30
3. Create a custom route on each VPC network for 199.36.153.4/30 with the next hop set to the internet gateway.

With this configuration in place, any VMs attached to the VPC networks associated with this private DNS zone will automatically try to use 199.36.153.4/30 endpoints for all GCP API calls without code changes, and the custom route will allow Private Google Access to function correctly.

Automating with Terraform and Google Cloud Foundation Toolkit ‡

While you can perform the steps to enable private API access manually, it is always better to have a repeatable and reusable approach that can be automated as part of your infrastructure provisioning.
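For reference, before reaching for automation, the three manual steps above might look roughly like the following with gcloud. This is a sketch; the subnet, region, network, and zone names are examples rather than values from the repo described below.

# step 1: enable Private Google Access on the subnet
gcloud compute networks subnets update outside \
    --region=us-east4 \
    --enable-private-ip-google-access

# step 2: private zone that shadows googleapis.com on the VPC network
gcloud dns managed-zones create googleapis \
    --description="restricted.googleapis.com override" \
    --dns-name=googleapis.com. \
    --visibility=private \
    --networks=my-vpc

gcloud dns record-sets transaction start --zone=googleapis
gcloud dns record-sets transaction add "restricted.googleapis.com." \
    --zone=googleapis --name="*.googleapis.com." --type=CNAME --ttl=300
gcloud dns record-sets transaction add 199.36.153.4 199.36.153.5 199.36.153.6 199.36.153.7 \
    --zone=googleapis --name="restricted.googleapis.com." --type=A --ttl=300
gcloud dns record-sets transaction execute --zone=googleapis

# step 3: custom route so 199.36.153.4/30 stays on Google's network
gcloud compute routes create restricted-apis \
    --network=my-vpc \
    --destination-range=199.36.153.4/30 \
    --next-hop-gateway=default-internet-gateway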
My tools of choice for infrastructure automation are Hashicorp's Terraform and Google's Cloud Foundation Toolkit, a set of Terraform modules that can create and configure GCP resources. By combining Google's modules with my own BIG-IP modules, we can build a repeatable solution for isolated VPC deployments; just change the variable definitions to deploy to development, testing/QA, and production.

Cloud Failover Example

Figure 2: Private Google Access (1), custom DNS (2), and custom routes (3) combine to enable API access (4) without public internet access

A fully-functional example that builds out the infrastructure shown in Figure 2 can be found in my GitHub repo f5-google-bigip-isolated-vpcs. When executed, Terraform will create three VPC networks that lack the default internet egress route, but have a custom route defined to allow traffic to the restricted.googleapis.com CIDR. A Cloud DNS private zone will be created to override wildcard googleapis.com lookups with restricted.googleapis.com, and the private zone will be enabled on all three VPC networks. A pair of BIG-IPs is instantiated with CFE enabled and configured to use a dedicated CFE bucket for state management. An IAP-enabled bastion host with tinyproxy allows SSH and GUI access to the BIG-IPs (see the repo's README for full details on how to connect).

Once logged in to the active BIG-IP, you can verify that the instances do not have access to the internet, and you can verify that CFE is functioning correctly by forcing the active instance to standby. Almost immediately you can see that the other BIG-IP instance has become the active instance.

Notes

† Private vs Restricted access: GCP supports two protected endpoint options, private and restricted. Both allow access to GCP API endpoints without traversing the public internet, but restricted is integrated with VPC Service Controls. If you need access to a GCP API that is unsupported by VPC Service Controls, you can choose private access and change steps 2 and 3 above to use private.googleapis.com and 199.36.153.8/30 instead.

‡ Prefer Google Deployment Manager? My colleague Gert Wolfis has written a similar article that focuses on using GDM templates for BIG-IP deployment. You can find his article at https://devcentral.f5.com/s/articles/Deploy-BIG-IP-on-GCP-with-GDM-without-Internet-access.
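If you want to spot-check the private-access plumbing from a shell inside one of these isolated VPCs (the bastion, or a BIG-IP bash prompt), something like the following should tell the story. This is a sketch using standard dig/curl tooling and no repo-specific names.

# googleapis.com lookups should resolve into 199.36.153.4/30 via the private zone
dig +short storage.googleapis.com

# a GCP API endpoint should answer (any HTTP status code proves reachability)
curl -s -o /dev/null -w '%{http_code}\n' https://storage.googleapis.com/

# general internet egress should fail, since the default route was removed
curl -s --max-time 5 https://example.com || echo "no internet egress (expected)"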
DevCentral Cloud Month - Week Two

What's this week about? You got a mini taste of DevCentral's Cloud Month last week, and in week two we really dig in. This week we're looking at Build and Deployment considerations for the Cloud, the first step in successfully deploying in a cloud infrastructure. Starting today, Suzanne and team show us how to deploy an application in AWS; on Wednesday, Greg, harking back to the Hitchhiker's Guide, explains Azure's architectural considerations; Marty uncovers Kubernetes concepts and how to deploy an application in Kubernetes this Thursday; and on #Flashback Friday, Lori takes us down memory lane wondering if SOA is still super. Filling my typical Tuesday spot, Hitesh reveals some foundational building blocks and the philosophy of F5's cloud/automated architectures. These will help get you off the ground and your head in the clouds, preferably Cloud Nine. Enjoy!

ps

Related:
- 5 steps to building a cloud-ready application architecture
- 5 Considerations When Building Cloud Cost Aware Architecture
- Considerations for Designing and Running an Application in the Cloud
- Great app migration takes enterprise "on-prem" applications to the Cloud
- F5 Multi-Cloud Solutions
Cloud Month on DevCentral

#DCCloud17 The term 'Cloud,' as in Cloud Computing, has been around for a while. Some insist Western Union invented the phrase in the 1960s; others point to a 1994 AT&T ad for the PersonaLink Services; and still others argue it was Amazon in 2006 or Google a few years later. And Gartner had cloud computing at the top of their Hype Cycle in 2009. No matter the birth year, cloud computing has become an integral part of an organization's infrastructure and is not going away anytime soon. A 2017 SolarWinds IT Trends report says 95% of businesses have migrated critical applications to the cloud, and F5's SOAD report notes that 20% of organizations will have over half their applications in the cloud this year. It is so critical that we've decided to dedicate the entire month of June to the Cloud.

We've planned a cool cloud encounter for you this month. We're lucky to have many of F5's cloud experts offering their 'how-to' expertise with multiple 4-part series. The idea is to take you through a typical F5 deployment for various cloud vendors throughout the month. Mondays, we've got Suzanne Selhorn & Thomas Stanley covering AWS; Wednesdays, Greg Coward will show how to deploy in Azure; Thursdays, Marty Scholes walks us through Google Cloud deployments, including Kubernetes. But wait, there's more! On Tuesdays, Hitesh Patel is doing a series on the F5 Cloud/Automation Architectures and how F5 plays in the Service Model, Deployment Model and Operational Model, no matter the cloud. And on F5 Friday #Flashback, starting tomorrow, we're excited to have Lori MacVittie revisit some 2008 #F5Friday cloud articles to see if anything has changed a decade later. Hint: It has…mostly. In addition, I'll offer my weekly take on the tasks & highlights of each week.

Below is the calendar for DevCentral's Cloud Month, and we'll be lighting up the links as they get published, so bookmark this page and visit daily! Incidentally, I wrote my first cloud-tagged article on DevCentral back in 2009. And if you missed it, Cloud Computing won the 2017 Preakness. Cloudy Skies Ahead!

June 2017 schedule (dates without entries omitted):

- Thursday, June 1: Cloud Month on DevCentral Calendar
- Friday, June 2: Flashback Friday: The Many Faces of Cloud (Lori MacVittie)
- Monday, June 5: Successfully Deploy Your Application in the AWS Public Cloud (Suzanne Selhorn)
- Tuesday, June 6: Cloud/Automated Systems need an Architecture (Hitesh Patel)
- Wednesday, June 7: The Hitchhiker's Guide to BIG-IP in Azure (Greg Coward)
- Thursday, June 8: Deploy an App into Kubernetes in less than 24 Minutes (Marty Scholes)
- Friday, June 9: Flashback Friday: The Death of SOA Has (Still) Been Greatly Exaggerated (Lori)
- Monday, June 12: Secure Your New AWS Application with an F5 Web Application Firewall (Suzanne)
- Tuesday, June 13: The Service Model for Cloud/Automated Systems Architecture (Hitesh); DCCloud17 X-tra! BIG-IP deployments using Ansible in private and public cloud
- Wednesday, June 14: The Hitchhiker's Guide to BIG-IP in Azure – 'Deployment Scenarios' (Greg); DCCloud17 X-tra! LBL Video: BIG-IP in the Public Cloud
- Thursday, June 15: Deploy an App into Kubernetes Even Faster (Than Last Week) (Marty)
- Friday, June 16: Flashback Friday: Cloud and Technical Data Integration Challenges Waning (Lori)
- Monday, June 19: Shed the Responsibility of WAF Management with F5 Cloud Interconnect (Suzanne)
- Tuesday, June 20: The Deployment Model for Cloud/Automated Systems Architecture (Hitesh)
- Wednesday, June 21: The Hitchhiker's Guide to BIG-IP in Azure – 'High Availability' (Greg); DCCloud17 X-tra! LBL Video: BIG-IP in the Private Cloud
- Thursday, June 22: Deploy an App into Kubernetes Using Advanced Application Services (Marty)
- Friday, June 23: Flashback Friday: Is Vertical Scalability Still Your Problem? (Lori)
- Monday, June 26: Get Back Speed and Agility of App Development in the Cloud with F5 Application Connector (Suzanne)
- Tuesday, June 27: The Operational Model for Cloud/Automated Systems Architecture (Hitesh)
- Wednesday, June 28: The Hitchhiker's Guide to BIG-IP in Azure – 'Life Cycle Management' (Greg)
- Thursday, June 29: What's Happening Inside My Kubernetes Cluster? (Marty)
- Friday, June 30: Cloud Month Wrap!

Titles subject to change...but not by much.

ps
DevCentral Cloud Month - Week Four

What's this week about? Ready for another week of Cloud Month on DevCentral? Suzanne, Hitesh, Greg, Marty and Lori are ready! Last week we looked at services, security, automation, migration, Ansible and other areas to focus on once you get your cloud running. We also had a cool Lightboard Lesson explaining BIG-IP in the public cloud. This week we go deeper into areas like high availability, scalability, responsibility, inter-connectivity, and the philosophy behind cloud deployment models.

Now that we're half-way through Cloud Month, I thought it'd be fun to share a little bit about our authors.

Suzanne Selhorn is a Sr. Technical Writer with our TechPubs team. Our Technical Communications team is responsible for many of the deployment guides you use and also creates some of the awesome step-by-step technical videos featured on DevCentral's YouTube channel. She and Thomas Stanley crafted the AWS series.

Hitesh Patel is a Sr. Solution Architect covering Cloud/DevOps. He's one of the smartest cloud cookies we've got and works with F5 customers to get a handle on their cloud deployments. He also loves karaoke.

Greg Coward is a Solution Architect on our Business Development team. The BizDev team works with our many technology partners building out joint solutions. Greg covers Microsoft and how BIG-IP plays in Azure, among other solutions.

Marty Scholes is an Applications Architect with our Solutions Marketing team. Traditionally, he writes whitepapers and technical articles and helps the Marketing team understand the technical nuances of various solutions, and this month he went deep into Google Cloud deployments.

Finally, someone you are probably already familiar with due to her extensive writing and expertise: F5's Principal Technical Evangelist, Lori MacVittie. User 38 on DevCentral, she is a subject matter expert on emerging technologies and how F5 fits with the internet craze these days. I've been fortunate to have known and worked with Lori since her early days at F5, when we were both trailblazing Technical Marketing Managers.

The DevCentral team truly appreciates their contributions to Cloud Month and encourages you to connect with them.

ps
DevCentral Cloud Month - Week Five

What's this week about? This is the final week of DevCentral's Cloud Month, so let's close out strong. Throughout the month, Suzanne, Hitesh, Greg, Marty and Lori have taken us on an interesting journey to share their unique cloud expertise. Last week we covered areas like high availability, scalability, responsibility, inter-connectivity and the philosophy behind cloud deployment models. We also got a nifty Lightboard Lesson covering BIG-IP in the private cloud.

This week's focus is on maintaining, managing and operating your cloud deployments. If you missed any of the previous articles, you can catch up with our Cloud Month calendar, and we'll wrap up DevCentral's Cloud Month on Friday. Thanks for taking the journey with us; we hope it was educational, informative and entertaining!

ps

Related:
- Cloud Month on DevCentral
- DevCentral Cloud Month - Week Two
- DevCentral Cloud Month - Week Three
- DevCentral Cloud Month - Week Four
DevCentral Cloud Month Wrap

Is it the end of June already? At least it ended on a Friday, and we can close out DevCentral's Cloud Month followed by the weekend! First, huge thanks to our Cloud Month authors: Suzanne, Hitesh, Greg, Marty and Lori. Each delivered an informative series (23 articles in all!) from their area of expertise, and the DevCentral team appreciates their involvement. We hope you enjoyed the content as much as we enjoyed putting it together. And with that, that's a wrap for DevCentral Cloud Month. You can check out the original day-by-day calendar, and below is each of the series if you missed anything. Thanks for coming by, and we'll see you in the community.

AWS - Suzanne & Thomas
- Successfully Deploy Your Application in the AWS Public Cloud
- Secure Your New AWS Application with an F5 Web Application Firewall
- Shed the Responsibility of WAF Management with F5 Cloud Interconnect
- Get Back Speed and Agility of App Development in the Cloud with F5 Application Connector

Cloud/Automated Systems – Hitesh
- Cloud/Automated Systems need an Architecture
- The Service Model for Cloud/Automated Systems Architecture
- The Deployment Model for Cloud/Automated Systems Architecture
- The Operational Model for Cloud/Automated Systems Architecture

Azure – Greg
- The Hitchhiker's Guide to BIG-IP in Azure
- The Hitchhiker's Guide to BIG-IP in Azure – 'Deployment Scenarios'
- The Hitchhiker's Guide to BIG-IP in Azure – 'High Availability'
- The Hitchhiker's Guide to BIG-IP in Azure – 'Life Cycle Management'

Google Cloud – Marty
- Deploy an App into Kubernetes in less than 24 Minutes
- Deploy an App into Kubernetes Even Faster (Than Last Week)
- Deploy an App into Kubernetes Using Advanced Application Services
- What's Happening Inside My Kubernetes Cluster?

F5 Friday #Flashback – Lori
- Flashback Friday: The Many Faces of Cloud
- Flashback Friday: The Death of SOA Has (Still) Been Greatly Exaggerated
- Flashback Friday: Cloud and Technical Data Integration Challenges Waning
- Flashback Friday: Is Vertical Scalability Still Your Problem?

Cloud Month Lightboard Lesson Videos – Jason
- Lightboard Lessons: BIG-IP in the Public Cloud
- Lightboard Lessons: BIG-IP in the private cloud
- #DCCloud17 X-Tra! BIG-IP deployments using Ansible in private and public cloud

The Weeks
- DevCentral Cloud Month - Week Two
- DevCentral Cloud Month - Week Three
- DevCentral Cloud Month - Week Four
- DevCentral Cloud Month - Week Five
- DevCentral Cloud Month Wrap

ps