cloud migration
F5 BIG-IP Instance Migration in AWS
The f5-aws-migrate.py script is a Python 2.7 script that automates the migration of an existing BIG-IP instance to a new instance using a different BIG-IP image in AWS while keeping all configurations identical. For a primer on F5 in AWS, check out these three excellent articles written by Chris Mutzel:

F5 in AWS Part 1 - AWS Networking Basics
F5 in AWS Part 2 - Running BIG-IP in an EC2 Virtual Private Cloud
F5 in AWS Part 3 - Advanced Topologies and More on Highly Available Services

As discussed in F5 in AWS Part 2, there are two ways you can run BIG-IP in AWS: subscription (hourly or annual) or bring your own license (BYOL). You might be running a BIG-IP instance in AWS on an hourly subscription and then decide to convert to an annual subscription, or you might decide to convert a subscription instance to a BYOL instance after obtaining an F5 software license. Prior to this script, achieving that conversion meant manually creating a new BIG-IP instance with either an annual subscription or an F5 software license.

You may also want to move BIG-IP instances for other reasons, for example to perform a complete mitigation on a BIG-IP instance impacted by CVE-2016-2084. According to the Security Advisory on AskF5, a new BIG-IP instance needs to be created to replace the vulnerable instance (SOL 11772107: BIG-IP and BIG-IQ cloud image vulnerability CVE-2016-2084 has more information on this vulnerability).

The challenge with any of these scenarios is ensuring the new BIG-IP instance has a configuration identical to the old instance being migrated. This involves two major tasks.

AWS configuration
The new instance must be configured with the same ENIs (Elastic Network Interfaces), Security Groups, and Tags as the old instance. All of these settings need to be gathered from the old instance for use in configuring the new instance. In order to reuse the ENIs from the old instance, the DeleteOnTermination setting on each ENI needs to be set to False. The old instance then needs to be terminated, allowing detachment of the ENIs. Finally, you would create the new instance from the desired BIG-IP image and manually configure it with the identical settings gathered from the old instance.

BIG-IP configuration
On the new BIG-IP instance, if using a BYOL license, you must complete the licensing process. The BIG-IP UCS (User Configuration Set; a backup) file saved from the old instance then needs to be restored onto the new instance.

The result is a new instance created from the selected BIG-IP image with a configuration identical to the old, terminated instance. Performing all of these steps manually is tedious and error prone, so F5 has created a Python 2.7 script, f5-aws-migrate.py, which automates the migration of one BIG-IP instance to another in AWS for the two types of BIG-IP images available on the AWS Marketplace. The script begins by gathering a BIG-IP UCS file and polling AWS to gather instance configuration details. It then terminates the original instance and launches a new, identical instance using the AMI image you specify. Finally, the script performs automated licensing and installs the UCS file from the original instance with a no-license flag to avoid overwriting the new license. The script can also perform the complete BIG-IP mitigation steps for CVE-2016-2084.
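The script linked below is the authoritative implementation; purely as an illustration of the AWS-side tasks it automates, here is a minimal sketch using boto3 (the real script targets Python 2.7 and is not reproduced here). The instance ID, AMI ID, and tag handling are placeholders and assumptions, and error handling is omitted.

```python
# Illustrative sketch (not the f5-aws-migrate.py script itself) of the
# AWS-side migration steps: keep the ENIs, capture instance settings,
# terminate the old instance, and launch the new image with the same ENIs.
import boto3

OLD_INSTANCE_ID = "i-0123456789abcdef0"   # placeholder
NEW_BIGIP_AMI = "ami-00000000"            # placeholder: target BIG-IP image

ec2 = boto3.client("ec2")

old = ec2.describe_instances(InstanceIds=[OLD_INSTANCE_ID])["Reservations"][0]["Instances"][0]

# Preserve the ENIs: make sure they survive termination of the old instance.
enis = []
for eni in old["NetworkInterfaces"]:
    ec2.modify_network_interface_attribute(
        NetworkInterfaceId=eni["NetworkInterfaceId"],
        Attachment={
            "AttachmentId": eni["Attachment"]["AttachmentId"],
            "DeleteOnTermination": False,
        },
    )
    enis.append({
        "NetworkInterfaceId": eni["NetworkInterfaceId"],
        "DeviceIndex": eni["Attachment"]["DeviceIndex"],
    })

# Capture the settings to carry over (security groups ride with the ENIs).
tags = [t for t in old.get("Tags", []) if not t["Key"].startswith("aws:")]
instance_type = old["InstanceType"]
availability_zone = old["Placement"]["AvailabilityZone"]

# Terminate the old instance so the ENIs detach and can be reattached.
ec2.terminate_instances(InstanceIds=[OLD_INSTANCE_ID])
ec2.get_waiter("instance_terminated").wait(InstanceIds=[OLD_INSTANCE_ID])

# Launch the replacement from the desired BIG-IP AMI with the same ENIs.
# (Key pair, BYOL licensing, and the UCS restore happen after boot.)
new = ec2.run_instances(
    ImageId=NEW_BIGIP_AMI,
    InstanceType=instance_type,
    MinCount=1,
    MaxCount=1,
    Placement={"AvailabilityZone": availability_zone},
    NetworkInterfaces=enis,
)["Instances"][0]

if tags:
    ec2.create_tags(Resources=[new["InstanceId"]], Tags=tags)
print("New instance:", new["InstanceId"])
```

The BIG-IP side (licensing, then restoring the saved UCS without overwriting the new license, along the lines of tmsh load sys ucs with the no-license option) happens after the new instance boots and is handled by the script itself.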
For more information, and to download the software, see the ReadMe file on our F5 DevCentral GitHub repository: https://github.com/f5devcentral/f5-aws-migrate

Deploying BIG-IP VE in VMware vCloud Director
Beginning with BIG-IP version 11.2, you may have noticed a new package in the Virtual Edition downloads folder for vCloud Director 1.5. VMware's vCloud Director is a software solution enabling enterprises to build multi-tenant private clouds. Each virtual datacenter has its own resource set of CPU, memory, and disk that the vDC owner can allocate as necessary. F5 DevCentral is now running in these virtual datacenter configurations (as announced June 13th, 2012), with full BIG-IP VE infrastructure in place. This article describes the deployment process to get BIG-IP VE installed and running in the vCloud Director environment.

Uploading the vCloud Image
The upload process is fairly simple, but it does take a while. First, after logging in to the vCloud interface, click catalogs, then select your private catalog. Once in the private catalog, click the upload button highlighted below. This will launch a pop-up. Make sure the vCloud zip file has been extracted; when the .ovf is selected in this screen, it will grab that as well as the disk file after clicking upload. Now get a cup of coffee. Or a lot of them, as this takes a while.

Deploying the BIG-IP VE OVF Template
Now that the image is in place, click on my cloud in the top navigation, select vApps, then select the plus sign, which will create a new vApp. (The BIG-IP can also be deployed into an existing vApp.) Select the BIG-IP VE template (bigip11_2 in the screenshot below) and click next. Give the vApp a name and click next. Accept the F5 EULA and click next. At this point, give the VM a full name and a computer name and click finish. I checked the network adapter box to show the network adapter type. It is not configurable at this point, and the flexible NIC is not the right one. After clicking finish, the system will create the vApp and build the VM, so maybe it's time for another cup of coffee.

Once the build is complete, click into the vapp_test vApp. Right-click on the testbigip-11-2 VM and select properties. Do NOT power on the VM yet! CPU and memory should not be altered: more CPU won't help TMM, since there is no CMP yet in the Virtual Edition and one extra CPU for system tasks is sufficient, and TMM can't schedule more than 4G of RAM either. Click "Show network adapter type" and again you'll notice the NICs are not correct. Delete all the network interfaces, then re-add, one at a time, as many NICs as your infrastructure requires (up to 10 in vCloud Director). To add a NIC, just click the add button, then select the network dropdown and select Add Network.

At this point, you'll need to already have a plan for your networking infrastructure. Organizational networks are usable in and between all vApps, whereas vApp networks are isolated to just that instance. I'll show organizational network configuration in this article. Click Organization network and then click next. Select the appropriate network and click next. I've selected the Management network. For the management NIC I'll leave the adapter type as E1000. The IP Mode is useful for systems where guest customization is enabled, but is still a required setting; I set it to Static-Manual and enter the self IP addresses assigned to those interfaces. This step is still required within the F5, as it will not auto-configure the vlans and self IPs for you. For the remaining NICs that you add, make sure to set the adapter type to VMXNET 3. Then click OK to apply the new NIC configurations.
*Note that adding more than 5 NICs in VE might cause the interfaces to re-order internally. If this happens, you'll need to map the MAC addresses in vCloud to the MAC addresses reported in tmsh and adjust your vlans accordingly.

Powering Up!
After the configuration is updated, right-click on the testbigip-11-2 VM and select power on. After the VM powers on, BIG-IP VE will boot. Login with the root/default credentials and type config at the prompt to set the management IP and netmask:

Select No on auto-configuration.
Set the IP address.
Set the netmask.
I selected No on the default route, but it might be necessary depending on the infrastructure you have in place.
Finally, accept the settings.

At this point, the system should be available on the management network. I have a linux box on that network as well so I can ssh into the BIG-IP VE to perform the licensing steps, as the vCloud Director console does not support copy/paste.
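To make the MAC-address mapping from the note above less tedious, something like the following can pull the interface list from the BIG-IP over SSH, the same path used here for licensing. This is only a sketch, not part of the original article: the management IP and credentials are placeholders, and the exact tmsh output and property syntax should be verified on your version.

```python
# Sketch: list BIG-IP VE interface MAC addresses over SSH so they can be
# matched against the MACs shown in vCloud Director. Assumes the paramiko
# package is installed and SSH access to the management IP works.
import paramiko

BIGIP_MGMT = "192.168.1.245"   # hypothetical management IP
USERNAME = "root"
PASSWORD = "default"           # change after first boot

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(BIGIP_MGMT, username=USERNAME, password=PASSWORD)

# all-properties includes mac-address; grep trims the output to the
# interface names and their MACs. Verify the tmsh syntax on your release.
cmd = "tmsh list net interface all-properties | grep -E 'net interface|mac-address'"
stdin, stdout, stderr = client.exec_command(cmd)
print(stdout.read().decode())

client.close()
```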
DevCentral Architecture

Everyone has surely (don't call me Shirley!) at least been exposed to THE CLOUD by now. Whether it's those "to the cloud!" commercials (I'll go with "interesting") or the nuts and bolts of hypervisors and programmatic interfaces for automation, the buzz has been around for a while. One of F5's own, cloud computing expert and blogger extraordinaire (among many other talents) Lori MacVittie, weighs in consistently on the happenings and positioning in the cloud computing space. F5 has some wicked smart talent with expertise in the cloud and dynamic datacenter spaces, and we make products perfectly positioned for both worlds. With the release of all our product modules on BIG-IP VE last year, the DevCentral team had the opportunity to elevate ourselves from evangelists of our great products to customers as well. And with that opportunity, we drove the DevCentral bull onward to our new virtual datacenters at Bluelock.

Proof of Concept
We talked to a couple different vendors during the selection period for a cloud provider. We selected Bluelock for two major reasons: first, their influential leadership in the cloud space by way of CTO Pat O'Day; second, their strong partnership with fellow partner VMware and their use of VMware's vCloud Director platform. This was a good fit for us, as our production BIG-IP VE products are built for the ESX hypervisors (and others in limited configurations; please reference the supported hypervisors matrix).

As part of the selection process, Bluelock set up a temporary virtual datacenter for us to experiment with. Our initial goal was just to get the application working with minimal infrastructure and test the application performance. The biggest concerns going in were related to database performance in a virtual server, as the DevCentral application platform, DotNetNuke, is heavy on queries. The most difficult part of getting the application up was getting files into the environment. Once we got the files in place and the BIG-IP VE licensed, we were up and running in less than a day. We took captures, analyzed stats, and with literally no tuning, the application was performing within 10% of our production baseline on dedicated server and infrastructure iron. It was an eye-opening success.

Preparations
Proof of concept done, contracts negotiated, and a few months down the road, we began preparing for the move. In the proof of concept, LTM was the sole product in use. In the existing production environment, however, we had LTM, ASM, GTM, and Web Accelerator. In the new production environment at Bluelock, we added APM for secure remote access to the environment, and WOM to secure the traffic between our two virtual datacenters. The list of moving parts:

Change to LTM VE from LTM (w/ ASM module); version upgrade
Change to GTM VE from GTM; version upgrade
Change to Edge Gateway VE from Web Accelerator; version upgrade
Introduce APM (via Edge Gateway VE)
Introduce WOM (via Edge Gateway VE)
Application server upgrade
New monitoring processes
iRules rewrites and updates to take advantage of new features and changes
DNS duties

There were many other things we addressed along the way, but these were the big ones. It wasn't just a physical-to-virtual change; the end result was a far different animal than we began with.

Networking
In the vCloud Director environment, there are a few different network types: external networks, organizational networks, and vApp networks, plus a few sub-types as well.
I'll leave it to the reader to study all the differences within the platform. We chose to utilize org networks so we could route between vApps with minimal to no additional configuration. We ended up with several networks defined for our organizations, including networks for public access, high availability, config sync, and mirroring, and others for internal routing purposes. The meat of the infrastructure is shown in the diagram below.

Client Flow
One of the design goals for the new environment was to optimize the flow of traffic through the infrastructure, as well as provide for multiple external paths in the event of network or device failures. Web Accelerator could have been licensed with ASM on the BIG-IP LTM VE, but the existing versions do not yet support CMP on VE, so we opted to keep them apart. SSL is terminated on the external vips, and then the ASM policy is applied to a non-routable vip on the LTM, utilizing iRules to implement the vip targeting vip solution (a sketch of this pattern follows the Conclusion below). This was done primarily to support the iRules we run for our application traffic without requiring the major modifications that would be necessary to run on a virtual server with a plugin (ASM, APM, WA, etc.) applied.

If you're wondering about the performance hit of the vip targeting vip solution (you know you are!), in our testing, as long as the front and backside vips are the same type, we found the difference between vip->vip and a single vip to be negligible, in most cases less than a tenth of a percentage point. YMMV depending on your scenario. You might also be wondering about terminating SSL on BIG-IP VE. There is no magic here; it's simple math. In our environment, the handshakes per second are not a concern for 2k keys on our implementation, but if you need dedicated compute power for SSL offload, you can still go with a hybrid deployment, using hardware up front for the heavy lifting and VE for the application intelligence.

Zero-Hands Expansion
One of the cooler things about living in a virtual datacenter is when it comes time for expansion. We originally deployed our secondary datacenter with less gear and availability, but additional production and development projects warranted more protection against failure. Rather than requiring a lengthy equipment procurement process, it took a few emails to quote the new resources, a few emails for approvals, and the time to plan the project and execute. The total hours required to convert our standalone infrastructure in our secondary vDC to an HA environment, and add an application server to boot, came in just under thirty, with no visits required to any datacenter with boxes, dollies, cables, screw drivers, and muscles in tow. Super slick.

Conclusion
F5 BIG-IP VE and the Bluelock Virtual Datacenter is a match made in application paradise. If you have any questions regarding the infrastructure, the deployment process, or why the sky is blue, please post below.
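For readers curious what the vip targeting vip pattern mentioned under Client Flow looks like in practice, here is a minimal sketch. It pushes a one-line iRule to a BIG-IP using the iControl REST API (available on releases newer than the versions described in this article); the management address, credentials, and virtual server names are hypothetical, and this is an illustration of the pattern rather than our production configuration.

```python
# Sketch: create a "vip targeting vip" iRule via iControl REST.
# The external virtual server terminates SSL, runs this iRule, and hands
# the traffic to an internal, non-routable virtual server that carries
# the ASM policy. Host, credentials, and object names are placeholders.
import requests

BIGIP = "https://10.0.0.245"          # hypothetical management address
AUTH = ("admin", "admin")             # replace with real credentials

# 'virtual' is the iRule command that targets another virtual server.
IRULE_BODY = """when HTTP_REQUEST {
    virtual /Common/vs_internal_asm
}"""

resp = requests.post(
    BIGIP + "/mgmt/tm/ltm/rule",
    json={"name": "irule_vip_target_asm", "apiAnonymous": IRULE_BODY},
    auth=AUTH,
    verify=False,                     # self-signed cert on the mgmt port
)
resp.raise_for_status()
print("Created iRule:", resp.json().get("fullPath"))
```

The iRule is then attached to the external, SSL-terminating virtual server, while the internal virtual server carries the ASM policy and the application pool. As noted above, keeping the front and backside vips the same type keeps the overhead of the extra hop negligible.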
Live Migration versus Pre-Positioning in the Cloud

The secret to live migration isn't just a fat, fast pipe; it's a dynamic infrastructure.

Very early on in the cloud computing hype cycle we posited about different use cases for the "cloud." One that remains intriguing, and increasingly possible thanks to a better understanding of the challenges associated with the process, is cloud bursting. The first time I wrote about cloud bursting and detailed the high-level process, the inevitable question that remained was, "Well, sure, but how did the application get into the cloud in the first place?" Back then there was no good answer, because no one had really figured it out yet. Since that time, however, many niche solutions have grown up that provide just that functionality, in addition to the ability to achieve such a "migration" using virtualization technologies. You just choose a cloud and click a button and voila!

Yeah. Right. It may look that easy, but under the covers there are a lot more details required than might at first meet the eye, especially when we're talking about live migration.

LIVE MIGRATION versus PRE-POSITIONING
Many architecture-based cloud bursting solutions require pre-positioning of the application. In other words, the application must have been transferred into the cloud before it was needed to fulfill additional capacity demands on applications experiencing suddenly high volume. It assumed, in a way, that operators were prescient and budgets were infinite. While it's true you only pay when an image is active in the cloud, there can be storage costs associated with pre-positioning, as well as the inevitable wait time between seeing the need and filling the need for additional capacity. That's because launching an instance in a cloud computing environment is never immediate; it takes time, sometimes as long as ten minutes or more. So either your operators must be able to see ten minutes into the future, or it's possible that the challenge for which you're implementing a cloud bursting strategy (handling overflow) won't be addressed by such an approach.

Enter live migration. Live migration of applications attempts to remove the issues inherent with pre-positioning (or no positioning at all) by migrating on demand to a cloud computing environment while maintaining availability of the application. That means the architecture must be capable of (a simplified sketch of this flow follows the list):

Transferring a very large virtual image across a constrained WAN connection in a relatively short period of time
Launching the cloud-hosted application
Recognizing the availability of the cloud-hosted application and somehow directing users to it
Siphoning users off (quiescing) the cloud-hosted application instance when demand decreases
Taking the cloud-hosted application down when no more users are connected to it

Reading between the lines you should see a common theme: collaboration. The ability to recognize and act on what are essentially "events" occurring in the process requires awareness of the process and a level of collaboration traditionally not found in infrastructure solutions.
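As a purely conceptual illustration of the list above, the control loop below sketches what an orchestration layer would have to coordinate across global and local application delivery services. Every function and threshold here is a hypothetical placeholder; the point is the sequence of events and the context that must be shared between components, not any particular API.

```python
# Conceptual sketch of a cloud-bursting control loop. All objects and
# methods are hypothetical placeholders standing in for real integrations
# with the hypervisor, the cloud provider, and global/local application
# delivery services. Not a working implementation.
import time

BURST_THRESHOLD = 0.80   # assumed: burst when local capacity exceeds 80%
DRAIN_THRESHOLD = 0.50   # assumed: retire the burst instance below 50%


def cloud_burst_loop(local_adc, global_adc, cloud):
    cloud_instance = None
    while True:
        utilization = local_adc.capacity_utilization()   # shared context

        if cloud_instance is None and utilization > BURST_THRESHOLD:
            # 1. Transfer the image and 2. launch the cloud-hosted app.
            image = local_adc.export_application_image()
            cloud_instance = cloud.launch(image)

            # 3. Wait until the instance reports healthy, then direct a
            #    share of users to it via global application delivery.
            while not cloud.is_healthy(cloud_instance):
                time.sleep(30)
            global_adc.add_site(cloud_instance.endpoint)

        elif cloud_instance is not None and utilization < DRAIN_THRESHOLD:
            # 4. Quiesce: stop sending new users to the cloud instance.
            global_adc.disable_site(cloud_instance.endpoint)

            # 5. Tear it down once the last user disconnects.
            if cloud.active_connections(cloud_instance) == 0:
                cloud.terminate(cloud_instance)
                cloud_instance = None

        time.sleep(60)
```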
CLOUD is an EXERCISE in INFRASTRUCTURE INTEGRATION
Sound familiar? It should. Live migration, and even the ability to leverage pre-positioned content in a cloud computing environment, is at its core an exercise in infrastructure integration. There must be collaboration and sharing of context, automation as well as orchestration of processes, to realize the benefits of applications deployed in "the cloud." Global application delivery services must be able to monitor and infer health at the site level, and in turn local application delivery services must monitor and infer the health and capacity of the application, if cloud bursting is to successfully support the resiliency and performance requirements of application stakeholders, i.e. the business.

The relationship between capacity, location, and performance of applications is well known. The problem is pulling together all the disparate variables from the client, application, and network components, which individually hold some of the necessary information, but not all of it. These variables comprise context, and it requires collaboration across all three "tiers" of an application interaction to determine on demand where any given request should be directed in order to meet service level expectations. That sharing, that collaboration, requires integration of the infrastructure components responsible for directing, routing, and delivering application data between clients and servers, especially when they may be located in physically diverse locations.

As customers begin to really explore how to integrate and leverage cloud computing resources and services with their existing architectures, it will become more and more apparent that at the heart of cloud computing is a collaborative and much more dynamic data center architecture. Without the ability not just to automate and orchestrate, but to integrate and collaborate across highly diverse environments, cloud computing, aside from SaaS, will not achieve the successes predicted for it.

Related posts:
Cloud is an Exercise in Infrastructure Integration
IT as a Service: A Stateless Infrastructure Architecture Model
Cloud is the How not the What
Cloud-Tiered Architectural Models are Bad Except When They Aren't
Cloud Chemistry 101
You Can't Have IT as a Service Until IT Has Infrastructure as a Service
Cloud Computing Making Waves
All Cloud Computing Posts on DevCentral