vmware
Secure and Seamless Cloud Application Migration with F5 Distributed Cloud and Nutanix
Introduction

F5 Distributed Cloud (XC) offers SaaS-based security, networking, and application management services for multicloud environments, on-premises infrastructures, and edge locations. F5 Distributed Cloud Services Customer Edge (CE) enhances these capabilities by integrating into a customer's environment, enabling centralized management via the F5 Distributed Cloud Console while being fully operated by the customer. A CE can be deployed in public clouds, on-premises, or at the edge.

Nutanix is a leading provider of Hyperconverged Infrastructure (HCI), which integrates storage, compute, networking, and virtualization into a unified, scalable, and easily managed solution. Nutanix Cloud Clusters (NC2) extend on-premises data centers to public clouds while maintaining the simplicity of the Nutanix software stack with a unified management console. NC2 runs AOS and AHV on public cloud instances, offering the same CLI, user interface, and APIs as on-premises environments.

This article explores how F5 Distributed Cloud and Nutanix collaborate to deliver secure and seamless application services across various types of cloud application migration. Whether you are migrating applications to the cloud, repatriating them from public clouds, or transitioning into a hybrid multicloud environment, F5 Distributed Cloud and Nutanix ensure optimal performance and security at all times.

Illustration

F5 Distributed Cloud App Connect securely connects distributed application services across hybrid and multicloud environments. It operates seamlessly with a platform of web application and API protection (WAAP) services, safeguarding apps and APIs against a wide range of threats through robust security policies, including an integrated WAF, DDoS protection, bot management, and other security tools. This enables the enforcement of consistent and comprehensive security policies across all applications without the need to configure individual custom policies for each app and environment. It also provides centralized observability, with clear insights into performance metrics, security posture, and operational status across all cloud platforms. In this section, we illustrate how to use F5 Distributed Cloud App Connect with Nutanix for different cloud application migration scenarios.

Cloud Migration

In our example, we have a VMware environment within a data center located in San Jose. Our goal is to migrate the on-premises application nutanix.f5-demo.com from the VMware environment to a multicloud environment by distributing the application workloads across Nutanix Cloud Clusters (NC2) on AWS and NC2 on Azure.

First, we deploy F5 Distributed Cloud Customer Edge (CE) and application workloads on NC2 on AWS as well as on NC2 on Azure. F5 Distributed Cloud App Connect addresses the issue of IP overlapping, enabling us to deploy application workloads using the same IP addresses as those in the VMware environment in the San Jose data center.

Next, we create origin pools on the F5 Distributed Cloud Console. In our example, we create two origin pools: nutanix-nc2-aws-pool for origin servers on NC2 on AWS and nutanix-nc2-azure-pool for origin servers on NC2 on Azure.
To ensure minimal disruption to the application service, we update the HTTP Load Balancer for nutanix.f5-demo.com to include both new origin pools and assign them a higher weight than the existing pool vmware-sj-pool, so that the origin servers on NC2 on AWS and on NC2 on Azure receive more traffic than the origin servers in the VMware environment in the San Jose data center. Note that the web application firewall (WAF) nutanix-demo is enabled.

Finally, we remove vmware-sj-pool to complete the cloud migration.

Cloud Repatriation

In this example, xc.f5-demo.com is deployed in a multicloud environment across AWS and Azure. Our objective is to migrate the application back from the public clouds to the Nutanix environment in the San Jose data center.

To begin, we deploy F5 Distributed Cloud Customer Edge (CE) and application workloads in Nutanix AHV. We deploy the application workloads using the same IP addresses as those in the public clouds, because IP overlapping is not a concern with F5 Distributed Cloud App Connect.

On the F5 Distributed Cloud Console, we create an origin pool nutanix-sj-pool with origin servers from the Nutanix environment in the San Jose data center. We then update the HTTP Load Balancer for xc.f5-demo.com to include the new origin pool and assign it a higher weight than both existing pools: xc-aws-pool with origin servers on AWS and xc-azure-pool with origin servers on Azure. As a result, the origin servers in the Nutanix environment in the San Jose data center receive more traffic than the origin servers in the other pools. To ensure all applications receive the same level of security protection, the web application firewall (WAF) nutanix-demo is also applied here.

To complete the cloud repatriation, we remove xc-aws-pool and xc-azure-pool. The application service experiences minimal disruption during and after the migration.

Hybrid Multicloud

Our goal in this example is to bring xc-nutanix.f5-demo.com into a hybrid multicloud environment, as it is presently deployed solely in the San Jose data center.

We first deploy F5 Distributed Cloud Customer Edge (CE) and application workloads on NC2 on AWS as well as on NC2 on Azure. On the F5 Distributed Cloud Console, we create an origin pool with origin servers from each of the F5 Distributed Cloud Customer Edge (CE) sites. Next, we update the HTTP Load Balancer for xc-nutanix.f5-demo.com so that it includes all origin pools: nutanix-sj-pool (Nutanix AHV in our San Jose data center), nutanix-nc2-aws-pool (NC2 on AWS), and nutanix-nc2-azure-pool (NC2 on Azure). Note that the web application firewall (WAF) nutanix-demo is applied here as well, so that we can ensure a consistent level of security protection across all applications no matter where they are deployed.

xc-nutanix.f5-demo.com is now in a hybrid multicloud environment. The F5 Distributed Cloud Console is the centralized console for configuration management and observability. It provides real-time metrics and analytics, which allows us to proactively monitor security events. Additionally, its integrated AI assistant delivers real-time insights and actionable recommendations on security events, improving our understanding of them and enabling more informed decision-making. This lets us swiftly detect and respond to emerging threats, thereby sustaining a robust security posture.
Conclusion

Cloud application migration can be complex and challenging. F5 Distributed Cloud and Nutanix collaborate to offer a secure and streamlined solution that minimizes risk and disruption during and after the migration process, including for organizations migrating from VMware environments. This ensures a seamless cloud application transition while maintaining business continuity throughout the entire process and beyond.

F5 BIG-IP VE and Application Workloads Migration From VMware to Nutanix
Introduction

Nutanix is a leading provider of Hyperconverged Infrastructure (HCI), which integrates storage, compute, networking, and virtualization into a unified, scalable, and easily managed solution. This article outlines the recommended procedure for migrating BIG-IP Virtual Edition (VE) and application workloads from VMware vSphere to Nutanix AHV while ensuring minimal disruption to application services. As always, it is advisable to schedule a maintenance window for any migration activities to mitigate risks and ensure smooth execution.

Migration Overview

Our goal is to migrate the VMware BIG-IP VEs and application workloads to Nutanix with minimal disruption to application services, while preserving the existing configuration, including the license, IP addresses, hostnames, and other settings. The recommended migration process can be summarized in five stages:

Stage 1 – Deploy a pair of BIG-IP VEs in Nutanix
Stage 2 – Migrate the Standby BIG-IP VE from VMware to Nutanix
Stage 3 – Fail over the Active BIG-IP VE from VMware to Nutanix
Stage 4 – Migrate application workloads from VMware to Nutanix
Stage 5 – Migrate the now-Standby BIG-IP VE from VMware to Nutanix

Migration Procedure

In our example topology, we have an existing VMware environment with a pair of BIG-IP VEs operating in High Availability (HA) mode - Active and Standby - along with application workloads. Each of our BIG-IP VEs is set up with four NICs, which is a typical configuration: one for management, one for internal, one for external, and one for high availability. We will provide a detailed step-by-step breakdown of the migration process using this topology.

Stage 1 – Deploy a pair of BIG-IP VEs in Nutanix

i) Create Nutanix BIGIP-1 and Nutanix BIGIP-2, ensuring that the host CPU and memory are consistent with VMware BIGIP-1 and VMware BIGIP-2.
ii) Keep both Nutanix BIGIP-1 and Nutanix BIGIP-2 powered down.

*Current BIG-IP State*: VMware BIGIP-1 (Active) and VMware BIGIP-2 (Standby)

Stage 2 – Migrate the Standby BIG-IP VE from VMware to Nutanix

i) Set VMware BIGIP-2 (Standby) to "Forced Offline", and then save a copy of the configuration.
ii) Save a copy of the license from "/config/bigip.license".
iii) Make sure the above files are saved in a location you can retrieve from later in the migration process.
iv) Revoke the license on VMware BIGIP-2 (Standby). Note: Please refer to the BIG-IQ documentation if the license was assigned using BIG-IQ.
v) Disconnect all interfaces on VMware BIGIP-2 (Standby). Note: Disconnecting all interfaces enables a quicker rollback should it become necessary, as opposed to powering down the system.
vi) Power on Nutanix BIGIP-2 and configure it with the same management IP as VMware BIGIP-2.
vii) License Nutanix BIGIP-2 with the license saved from VMware BIGIP-2 (Stage 2 ii). Note: Please refer to K91841023 if the VE is running in FIPS mode.
viii) Set Nutanix BIGIP-2 to "Forced Offline".
ix) Upload the saved UCS configuration (Stage 2 i) to Nutanix BIGIP-2, and then load it with "no-license". Note: Please refer to K9420 if the UCS file contains encrypted passwords or passphrases.
x) Check the log and wait until the message "Configuration load completed, device ready for online" is seen before proceeding; this can be done by opening a separate session to Nutanix BIGIP-2.
xi) Set Nutanix BIGIP-2 to "Online". Note: Before bringing Nutanix BIGIP-2 "Online", make sure it is deployed with the same number of NICs and that the interface-to-VLAN mapping is identical to VMware BIGIP-2.
For example, if interface 1.1 is mapped to VLAN X on VMware BIGIP-2, make sure interface 1.1 is mapped to VLAN X on Nutanix BIGIP-2 as well.
xii) Make sure Nutanix BIGIP-2 is "In Sync". Perform a Config-Sync using "run cm config-sync from-group <device-group-name>" if "(cfg-sync Changes Pending)" is seen.
xiii) BIGIP-2 is now migrated from VMware to Nutanix. Note: Because the BIG-IP VEs are running on different hypervisors, persistence mirroring and connection mirroring will not be operational during the migration. If enabled, a ".....notice DAG hash mismatch; discarding mirrored state" message may be seen during the migration and is expected.

*Current BIG-IP State*: VMware BIGIP-1 (Active) and Nutanix BIGIP-2 (Standby)

Stage 3 – Fail over the Active BIG-IP from VMware to Nutanix

i) Fail over VMware BIGIP-1 from Active to Standby.
ii) Nutanix BIGIP-2 is now the Active BIG-IP.

*Current BIG-IP State*: VMware BIGIP-1 (Standby) and Nutanix BIGIP-2 (Active)

Stage 4 – Migrate application workloads from VMware to Nutanix

i) Migrate the application workloads from VMware to Nutanix using Nutanix Move. Note: To minimize application service disruption, it is suggested to migrate the application workloads in groups instead of all at once, ensuring that at least one pool member remains active during the process. This is because Nutanix Move requires downtime to shut down the VM at the source (VMware), perform a final sync of data, and then start the VM at the destination (Nutanix).

*Current BIG-IP State*: VMware BIGIP-1 (Standby) and Nutanix BIGIP-2 (Active)

Stage 5 – Migrate the now-Standby BIG-IP VE from VMware to Nutanix

i) Set VMware BIGIP-1 to "Forced Offline", and then save a copy of the configuration.
ii) Save a copy of the license from "/config/bigip.license".
iii) Make sure the above files are saved in a location you can retrieve from later in the migration process.
iv) Revoke the license on VMware BIGIP-1 (Standby). Note: Please refer to the BIG-IQ documentation if the license was assigned using BIG-IQ.
v) Disconnect all interfaces on VMware BIGIP-1 (Standby). Note: Disconnecting all interfaces enables a quicker rollback should it become necessary, as opposed to powering down the system.
vi) Power on Nutanix BIGIP-1 and configure it with the same management IP as VMware BIGIP-1.
vii) License Nutanix BIGIP-1 with the license saved from VMware BIGIP-1 (Stage 5 ii). Note: Please refer to K91841023 if the VE is running in FIPS mode.
viii) Set Nutanix BIGIP-1 to "Forced Offline".
ix) Upload the saved UCS configuration (Stage 5 i) to Nutanix BIGIP-1, and then load it with "no-license". Note: Please refer to K9420 if the UCS file contains encrypted passwords or passphrases.
x) Check the log and wait until the message "<hostname>......Configuration load completed, device ready for online" is seen before proceeding; this can be done by opening a separate session to Nutanix BIGIP-1.
xi) Set Nutanix BIGIP-1 to "Online". Note: Before bringing Nutanix BIGIP-1 "Online", make sure it is deployed with the same number of NICs and that the interface-to-VLAN mapping is identical to VMware BIGIP-1. For example, if interface 1.1 is mapped to VLAN X on VMware BIGIP-1, make sure interface 1.1 is mapped to VLAN X on Nutanix BIGIP-1 as well.
xii) Make sure Nutanix BIGIP-1 is "In Sync". Perform a Config-Sync using "run cm config-sync from-group <device-group-name>" if "(cfg-sync Changes Pending)" is seen.
xiii) BIGIP-1 is now migrated from VMware to Nutanix.

Migration is now completed.

*Current BIG-IP State*: Nutanix BIGIP-1 (Standby) and Nutanix BIGIP-2 (Active)
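For those who prefer the CLI to the GUI, here is a minimal sketch of the tmsh equivalents for the key steps above, using BIGIP-2 as the example. The file names, backup host, and device group name are assumptions for illustration, and the licensing/revocation steps are left to the GUI or BIG-IQ as described above.

# On VMware BIGIP-2 (Standby): force offline and save the configuration (Stage 2 i/ii)
tmsh run /sys failover offline
tmsh save /sys ucs /var/tmp/vmware-bigip2.ucs
scp /var/tmp/vmware-bigip2.ucs admin@backup-host:/backups/    # off-box copy; backup-host is hypothetical
scp /config/bigip.license admin@backup-host:/backups/

# On Nutanix BIGIP-2, after licensing: force offline, restore the UCS, and watch the log (Stage 2 viii-xii)
tmsh run /sys failover offline
tmsh load /sys ucs /var/tmp/vmware-bigip2.ucs no-license
tail -f /var/log/ltm    # wait for "Configuration load completed, device ready for online"
tmsh run /sys failover online
tmsh run /cm config-sync from-group device_group_1    # device group name is an example

# Stage 3 i: on VMware BIGIP-1 (Active), force a failover so Nutanix BIGIP-2 becomes Active
tmsh run /sys failover standby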
Summary

The migration procedure outlined in this article is the recommended approach for migrating BIG-IP Virtual Edition (VE) and application workloads from VMware vSphere to Nutanix AHV. It ensures a successful migration during a scheduled maintenance window with minimal disruption to application services, enabling them to continue functioning smoothly during and after the migration.

References

Nutanix AHV: BIG-IP Virtual Edition Setup
https://clouddocs.f5.com/cloud/public/v1/nutanix/nutanix_setup.html
Nutanix Move User Guide
https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move-v5_5:top-overview-c.html
K7752: Licensing the BIG-IP system
https://my.f5.com/manage/s/article/K7752
K2595: Activating and installing a license file from the command line
https://my.f5.com/manage/s/article/K2595
K91841023: Overview of the FIPS 140 Level 1 Compliant Mode license for BIG-IP VE
https://my.f5.com/manage/s/article/K91841023
K9420: Installing UCS files containing encrypted passwords or passphrases
https://my.f5.com/manage/s/article/K9420
K13132: Backing up and restoring BIG-IP configuration files with a UCS archive
https://my.f5.com/manage/s/article/K13132
BIG-IQ Documentation - Manage Software Licenses for Devices
https://techdocs.f5.com/en-us/bigiq-7-0-0/managing-big-ip-ve-subscriptions-from-big-iq/manage-licenses-devices.html

Will BIG-IP ver 17.5.X be supported on VMware ESXi?
I am in the process of deploying a new BIG-IP LTM VE edition on the VMware ESXi platform. According to the F5 BIG-IP Virtual Edition Supported Platforms matrix page (https://clouddocs.f5.com/cloud/public/v1/matrix.html), only ver 17.1.X is supported on ESXi versions 8.0 U1-U3 and 7.0 U3.

Based on K5903: BIG-IP software support policy (https://my.f5.com/manage/s/article/K5903), ver 17.1.X will reach end of technical support by 31 March 2027.

BIG-IP ver 17.5.0 was just released last month (27 Feb 2025). Will it be supported on the ESXi platform in the next release?

Getting Started with BIG-IP Next: Creating Instances in Central Manager with the VMware vSphere Provider
You can create instances directly on F5 rSeries or VELOS hardware or on KVM or VMware hypervisors, and then onboard them in BIG-IP Next Central Manager (CM); some of this is already covered in other articles we've released. In this article, we'll highlight the capability within CM to create the instances directly:

Creating an instance template in a vCenter content library (must be licensed to work)
Creating the VMware vSphere provider in Central Manager
Creating the BIG-IP instance in Central Manager

The first two steps are necessary only the first time; only step three is required for future instances. Note: There is an intermittent issue with creating instances with the provider prior to v20.2.1, so make sure to install or upgrade to that version of Central Manager and that the instance template in your resource library is also updated.

Creating an instance template in a vCenter content library

The steps here are very similar to the walkthrough I did with ESXi, but because it's done in vCenter this time and there are a few additional steps, I recorded the process again. If you have already created a template for BIG-IP Next and have it listed in a content library, you can skip this section and move on to creating the provider.

Create the template

First, head to MyF5 and download version 20.2.1+ of BIG-IP Next instance Virtual Edition. In the vSphere client, right-click on the appropriate cluster, then select Deploy OVF Template. Select the image you just downloaded from MyF5 and click next. Set the name, select a compute location for the virtual machine, then click next. Now select a compute resource and then click next.

If your VMware environment is a lab like mine, you might not have set up all the intermediate and root certificates properly (a colleague shared this article for details). If that's the case, you can click ignore on the certificate not being trusted by the vSphere client and click next. For those that have properly prepared their environment, you shouldn't see this certificate trust issue at all and can click next as well.

Now select storage and click next. On the networks tab, select the network that would be appropriate for CM <-> instance management traffic communication. For me, that is VM Network. Then click next. After reviewing the details, click finish.

After the VM is created, select it in the left-hand navigation, then select edit settings from the actions menu. Here I dropped the CPU count to 2, the memory to 8GB, and added a second NIC on my vm_tagging network so I can tag all the other VLANs I might need. Click OK. Note: Once creating an instance in Central Manager with the provider, it didn't seem to matter that I had customized the template, so the last two steps may not be necessary. I still prefer to be explicit even if I have to redo this within CM.

Create the content library

Click the hamburger menu in the top left of the vSphere client and select content libraries. If you already have a content library that you want to add the template to, you can skip this step. Otherwise, click create. Set your content library name and select the server. I only have the one, so that was an easy choice! I kept the defaults here (local content library) and clicked next. You may have requirements for security policies on imported OVF library items, but I don't in my lab, so I opted out of that and clicked next. Select the datastore and click next. Review the content library settings and click finish.
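If you would rather script the vCenter preparation than click through the wizards, the same outcome can be sketched with VMware's govc CLI. This is a minimal sketch under my lab's assumptions - the OVA file name, VM/template names, library name, and datastore are placeholders, and it imports the OVA into the library directly rather than cloning the deployed VM:

# vCenter connection details (adjust for your environment)
export GOVC_URL=https://vcenter.example.local
export GOVC_USERNAME=administrator@vsphere.local
export GOVC_PASSWORD='********'
export GOVC_INSECURE=1    # only for labs with untrusted certificates

# Deploy the downloaded BIG-IP Next OVA as a VM (the template source)
govc import.ova -name=next-20.2.1-template -ds=TrueNAS-SSD BIG-IP-Next-20.2.1.ova

# Optional: trim resources and add the tagging NIC, mirroring the edit-settings step above
govc vm.change -vm next-20.2.1-template -c 2 -m 8192
govc vm.network.add -vm next-20.2.1-template -net vm_tagging

# Create a local content library backed by the same datastore
govc library.create -ds=TrueNAS-SSD BIG-IP-Next-Library

# Import the OVA into the library as the template item CM will consume
govc library.import BIG-IP-Next-Library BIG-IP-Next-20.2.1.ova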
Clone the template to the content library

Back in the vCenter inventory, right-click on the instance template and select clone, then clone as template to library. Name the template (it must be unique from the instance template you already created!) and select the destination, then click next. Select the library you created (or had already created previously) and click next. Now select the cluster and click next. Note: If you have multiple clusters, make sure to uniquely name the resource pool in the one you will assign BIG-IP Next resources to. Otherwise, you could face a provider conflict when Central Manager attempts to create the instance. Select storage and click next. Review the details and click finish. Watch your vCenter logs and you should get a completion message on creating the content library template.

Congratulations! All the prep work to get to Central Manager has been completed. Again, this process is a one-time (per instance version) requirement to prepare what the provider will work with from Central Manager, and even then, the content library steps aren't needed for future instance templates either. Onward!

Creating the VMware vSphere provider in Central Manager

Log in to Central Manager (make sure it is version 20.2.1+) and click on manage instances. Now click on providers in the left-hand navigation menu. Click on start adding providers. If you already have another provider, click add in the upper-right menu next to the delete button. Select VMware vSphere as the type, then name the provider, set the IP address or FQDN, and then click connect. Enter your credentials and click submit. A dialog might pop up with an authenticity warning. This is similar to the OVF import issue discussed earlier; a properly configured certificate chain on the vSphere server would eliminate this alert. If this is your lab, you can click accept here. You should now see a configured provider in the listing. And there we go! Central Manager is now in a state to create instances on your behalf.

Creating the BIG-IP instance in Central Manager

Log in to Central Manager if you have not, then click on manage instances. If you don't have any instances on your Central Manager, click start adding instances. Otherwise, click add in the upper-right section of the screen. Since we are asking the provider to create an instance on our behalf, select create a new instance. Review the list of what you'll need (and make it happen, Cap'n!) and then click next. Set the instance hostname and an optional description, select the VE Standalone instance template, and click start creating. Select the provider created in the previous section and then click next.

On this screen, all the information should be provided in the dropdowns in alignment with the vSphere environment and template created in the previous sections. My example is shown below. Notice that the cores and memory are still selectable even though I set those in the template I created. I broke the screen capture into two images here. On the first, set your instance management IP and mask (there is a task to combine these fields in a future release), your gateway address, and your networks. I thought the vSphere networks would populate the dropdown, but they do not, so make sure you accurately account for them. My management interfaces for all VMs are in the VM Network, and for this instance deployment I am using tagged VLANs on one virtual NIC in the vm_tagging network. Now, further down the same screen, set your DNS and NTP servers, then click next.
Click on the VLANs tab under networking, then click create twice for your external and internal traffic interfaces and fill out the appropriate details. For me, that is VLANs 30 and 40, respectively, in my vm_tagging network. Yours might look a little different here. Do not click next here; instead, click IP Addresses. Set the self IP address for each VLAN as appropriate, then click next. Set the management username and password for the instance - this is how Central Manager will connect to the instance. Click next. Review the details of the instance that the provider will create, then click deploy. After several minutes, you should have a healthy-looking instance in the my instances list. Congratulations!

Resources

Create BIG-IP Next instance template on VMware
How to: Create a BIG-IP Next instance in a VMware vSphere environment from Central Manager

Infrastructure as Code: Automating F5 Distributed Cloud CEs with Ansible
Introduction

Welcome to the first installment of our Infrastructure as Code (IaC) series, focusing on F5 products and Ansible. This series has been a long-standing desire of mine: to showcase the ability of IaC, using Ansible Automation Platform, to deliver Day 0 through Day 2 operations with multiple F5 virtualized platforms. Over time, I've encountered numerous financial clients expressing interest in this topic. For many of these clients, the prospect of leveraging IaC to redeploy an environment outweighs the traditional approach of performing upgrades. This series will hopefully provide insight, documentation, and code for anyone embarking on this journey.

Why Ansible Automation Platform?

Like most people, I started my journey with community editions of Ansible. As my coding became more complex, so did the need to ensure that my lab infrastructure adhered to the security guidelines required by my company (my goal being to mimic how customers would/should do things in real life). I began utilizing Ansible Automation Platform to ensure my credentials were protected, as well as to organize and share my code with the rest of my team (following the 'just in case you got hit by a bus' theory). Ansible Automation Platform utilizes execution environments (EE) to ensure code runs efficiently and cleanly every time. Now, I am also creating Execution Environments via GitHub with workflows and pushing them up to Quay.io (https://github.com/VDI-Tech-Guy/f5-execution-engines). Huge thanks to Colin McNaughton at Red Hat for making my life so much easier with building EEs!

Why deploy F5 Distributed Cloud on VMware vSphere?

As I mentioned before, I had the desire to build this Infrastructure as Code (IaC) project a while back, prior to the Broadcom acquisition of VMware. Being an ex-VMware employee, I had a lot of knowledge of virtualization platform infrastructure going into this project, so I started my focus on deploying on VMware vSphere. F5 Distributed Cloud can be deployed in any cloud, anywhere. However, I really wanted to focus on on-premises deployments because not every customer can afford the cloud. Moreover, there's always a back-and-forth battle between on-premises and the cloud, which has evolved into the Hybrid Cloud and the Multi-Cloud. I do intend to extend this series to the Multi-Cloud, but these initial deployments will be focused on VMware vSphere, as it is still utilized in many organizations across the globe.

Information about the Setup in the Demo Video

If you watch the video (down below) on how the deployment works, you can see I did a bunch of the pre-work prior to launching the deployment, in the git repository (link in Resources). Here are some of the prework items I did:

Had a fully functional Ansible Automation Platform 2.4+ environment set up and working (at the time, the controller version was 4.4.4)
Imported the Execution Environment into the Ansible Automation Platform Controller
Set up the Project to import the Playbooks from the Git Repository (in the Resources section below) and set the default Execution Environment
Set up the Demo Inventory (in our use case, we only needed the vCenter host)
Set up Network Credentials for the vCenter
Set up the Template and populated its Variables (note the API Key was hidden)
As mentioned in the video (below), the variables were populated for my environment; this contains all the information needed. I have provided a demo example in the git repository so anyone can adapt my settings to their environment, and the example includes comments about each field (or area of a field) and the purpose of the variable.

{
    "rhel_location": "https://vesio.blob.core.windows.net/releases/rhel/9/x86_64/images/vmware/rhel-9.2023.29-20231212012955-single-nic.ova",
    "xc_api_credential": "_____________________________________",
    "xc_namespace": "mmabis-automation",
    "xc_console_host": "f5-bd",
    "xc_user": "admin",
    "xc_pass": "Ansible123!",
    "vcenter_hostname": "{{ ansible_host }}",
    "vcenter_username": "{{ ansible_env.ANSIBLE_NET_USERNAME }}",
    "vcenter_password": "{{ ansible_env.ANSIBLE_NET_PASSWORD }}",
    "vcenter_validate_certs": false,
    "datacenter_name": "Apex",
    "cluster_name": "Worlds-Edge",
    "datastore": "TrueNAS-SSD",
    "dvs_switch_name": "DSC-DVS",
    "dns_name_servers": [
        "192.168.192.20",
        "192.168.192.1"
    ],
    "dns_name_search": [
        "dsc-services.local",
        "localdomain"
    ],
    "ntp_servers": [
        "0.pool.ntp.org",
        "1.pool.ntp.org",
        "2.pool.ntp.org"
    ],
    "domain_fqdn": "dsc-services.local",
    "DVS_Name": "{{dvs_switch_name}}",
    "Internal_Network": "DVS-Server-vLan",
    "External_Network": "DVS-DMZ-vLan",
    "resource_pool_name": "Lab-XC",
    "waiting_period": 2,
    "temp_download_location": "/tmp/xc-ova-download.ova",
    "xc_ova_builds": [
        {
            "hostname": "xc-automation-rhel-demo",
            "tmpl_name": "xc-automation-rhel-demo",
            "admin_password": "Ansible123!",
            "cluster_name": "xc-automation-cluster-rhel-demo",
            "dhcp": "no",
            "external_ip": "172.16.192.170",
            "external_ip_subnet_prefix": "24",
            "external_ip_gw": "172.16.192.1",
            "external_ip_route": "0.0.0.0/0",
            "internal_ip": "192.168.192.170",
            "internal_ip_subnet_prefix": "22",
            "internal_ip_gw": "192.168.192.1",
            "certified_hw": "vmware-regular-nic-voltmesh",
            "latitude": "39.51833126",
            "longitude": "-104.759496962",
            "build_count": 3,
            "nic_config": "rhel-multi"
        }
    ]
}

Launching the Code

With all of that prework handled, it was as easy as launching the code. There were a few caveats I learned over time when dealing with the automation that I wanted to share:

Never re-use a cluster name in F5 Distributed Cloud, especially if it was used with a different version of the CE (there were communication issues with the CEs and previous cluster information that was stored in the F5 Distributed Cloud Console).
The API credentials are system level when trying to accept registration or create the token for importing into the environment. This code is designed to check for "{{ xc-namespace}}-token"; if it exists, it will utilize the existing token, and if not, it will try to create it, so you need system-level permissions to do this.
Build Count should be 3 by default (it still needs to be defined) or an ODD number, based on recommendations I have heard from the F5 field.

If there are more caveats that I think of, I'll definitely edit the post and make sure it's up to date. When launching the code, I was able to get the lab to build up correctly multiple times. So please, if there is an issue or something I might not have documented well, feel free to let me know - and give it a shot for yourself!
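If you want to try the playbooks outside of Ansible Automation Platform first, here is a minimal sketch of a plain-CLI run. The inventory file, entry playbook name (site.yml), and variables file name are assumptions - check the repository for the actual names. The credential environment variables match the ones referenced in the variables file above.

# install the collection and library the VMware modules typically need (assumption: community.vmware is used)
ansible-galaxy collection install community.vmware
pip install pyvmomi

# the variables file pulls vCenter credentials from these environment variables
export ANSIBLE_NET_USERNAME=administrator@vsphere.local
export ANSIBLE_NET_PASSWORD='********'

# clone the repo and run the entry playbook with the demo variables file
git clone https://github.com/f5devcentral/f5-bd-ansible-day0-automation.git
cd f5-bd-ansible-day0-automation
ansible-playbook -i inventory site.yml -e @demo-vars.json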
YouTube Video now on DevCentral Channel

Resources

https://github.com/f5devcentral/f5-bd-ansible-day0-automation - the code utilized for this deployment
https://github.com/VDI-Tech-Guy/f5-execution-engines - building Execution Environments with GitHub and workflows

Conclusion

I do hope that this series will help everyone who wants to embrace IaC, and if you have any questions, feel free to reach out!

APM :: VMware View :: Blast HTML5
I'm trying to get APM functioning with the VMware View Blast client - and I am having quite the time. I have tried the iApp (1.5) but haven't been able to get that to function either. At the moment, I have a manual configuration based off the deployment guide. The deployment guide says to create a forwarding virtual server, and the iApp does the same thing; neither of which seems to be working for me.

So with the forwarding VS above created… I can log in fine, the webtop displays, the RDP link I have works great... the Blast HTML5 link... not so much. If I click on the VMware View desktop shown above, it brings me to the following: the error shown above is thrown around a lot by View, so it's hard to say what the real problem is. I've seen that error displayed for straight-up communications issues in the past… which I think this is.

If I do a tcpdump on the BIG-IP, I can see it trying to connect to 8443, but it cannot connect (SYNs… no SYN/ACKs).

11:27:30.022625 IP x.x.x.10.28862 > x.x.x.252.8443: Flags [S], seq 2246191783, win 4140, options [mss 1380,sackOK,eol], length 0 out slot1/tmm0 lis=/Common/xxxxxxxxxxxxxxxxxxxx-https

Source is the floating IP, destination is the VS. I know 8443 is listening on the VMware View server because I can connect to it locally. And I know the VMware View server knows how to get back to the F5 because it populates the webtop with my available desktop(s) shown above. I tried converting the forwarding VS to standard, assigned a pool, etc… and it still did the same thing: SYNs… no SYN/ACKs.

What might be telling, though, is the lis= above. It lists my main virtual server with the APM policy assigned. That makes me think… why is it trying to connect to that VS and not the forwarding VS? The forwarding virtual server is a better match, no? In any event, if the virtual server isn't listening on 8443, of course it won't reply back (my thought process anyway). So I figure… welp, why not just try an "any" port VS… yeah, not so much. If I manually remove the :0 and submit, it loads the same error about the certificate. Nothing shows up in tcpdump trying to connect to 8443 either - so, a step back.

If anybody happens to have any ideas for me, I would be really appreciative. Thanks!
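In case it helps anyone hitting the same wall, here is a minimal sketch of the checks I would run from the BIG-IP shell to confirm where the 8443 traffic is (or isn't) going. The View server IP and the interface/listener names are placeholders; adjust for your configuration.

# capture traffic to the View server's Blast port across all VLANs, showing which listener matched
tcpdump -nni 0.0:nnn host <view-server-ip> and port 8443

# test basic reachability of the Blast port from the BIG-IP itself (a TLS handshake attempt is enough)
curl -vk https://<view-server-ip>:8443/ --max-time 5

# list virtual servers and their destinations to see which listener should be matching 8443
tmsh list ltm virtual destination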
BIG-IP APM with Horizon 7.x HTML5 gets a Hotfix For Updated Code

This is a technical update on some new hotfixes that were rolled out to resolve issues with HTML5 connectivity between VMware Horizon 7.1/7.2 and BIG-IP Access Policy Manager.

What is VMware Horizon HTML Access?

VMware Horizon HTML Access provides the ability for employees to access applications and desktops via HTML5-compliant web browsers, without the need for additional plugins or native client installations. This method of access provides advantages to customers who have very strict software installation requirements and require access to their internal resources, as well as customers who utilize BYOD-based implementations. VMware Horizon HTML Access is an alternative way of accessing company internal resources without requiring a software installation.

What does the Hotfix do?

The hotfix is designed to allow the newer versions of the VMware Horizon HTML Access clients, which were upgraded with new URI information, to be accessible via APM. Without this hotfix, customers who upgrade to the Horizon 7.1/7.2 code may experience an issue where HTML5 will not connect to the VDI resource (blank or grey screen). The easiest way to determine whether you are affected by the issue is within the URL: if you do not see the string f5vdifwd within the URL, then you are most likely affected by this issue.

Here is an example of a working configuration. Notice the f5vdifwd string in the URL:

https://test.test.local/f5vdifwd/vmview/68a5058e-2911-4316-849b-3d55f5b5cafb/portal/webclient/index.html#/desktop

The Hotfix Information Details

Note that the fixes are incorporated into hotfixes. F5 recommends using the Hotfix builds over the iRules listed below. If the iRules are in place when upgrading to a build with the incorporated fix, make sure that the iRule is removed.

Version 12.1.2 HF1 Release Notes
Version 13.0 HF2 Release Notes

638780-3 Handle 302 redirects for VMware Horizon View HTML5 client
Component: Access Policy Manager
Symptoms: Starting from v4.4, the Horizon View HTML5 client uses a new URI for launching remote sessions, and supports a 302 redirect from the old URI for backward compatibility.
Conditions: APM webtop with a VMware View resource assigned; the HTML5 client installed on the backend is version 4.4 or later.
Impact: This fix allows VMware HTML5 clients v4.4 or later to work properly through APM.
Workaround for versions 11.6.x and 12.x

priority 2

when HTTP_REQUEST {
    regexp {(/f5vdifwd/vmview/[0-9a-f\-]{36})/} [HTTP::uri] vmview_html5_prefix dummy
}

when HTTP_RESPONSE {
    if { ([HTTP::status] == "302") && ([HTTP::header exists "Location"]) } {
        if { [info exists vmview_html5_prefix] } {
            set location [HTTP::header "Location"]
            set location_path [URI::path $location]
            if { $location_path starts_with "/portal/" } {
                set path_index [string first $location_path $location]
                set new_location [substr $location $path_index]
                regsub "/portal/" $new_location $vmview_html5_prefix new_location
                HTTP::header replace "Location" $new_location
            }
            unset vmview_html5_prefix
        }
    }
}

Workaround for version 13.0

priority 2

when HTTP_REQUEST {
    regexp {(/f5vdifwd/vmview/[0-9a-f\-]{36})/} [HTTP::uri] dummy vmview_html5_prefix
}

when HTTP_RESPONSE {
    if { ([HTTP::status] == "302") && ([HTTP::header exists "Location"]) } {
        if { [info exists vmview_html5_prefix] } {
            set location [HTTP::header "Location"]
            set location_path [URI::path $location]
            if { $location_path starts_with "/portal/" } {
                set path_index [string first $location_path $location]
                set new_location "$vmview_html5_prefix[substr $location $path_index]"
                HTTP::header replace "Location" $new_location
            }
            unset vmview_html5_prefix
        }
    }
}

BIG-IP : iRule return statement
From the docs:

Causes immediate exit from the currently executing event in the currently executing iRule. iRule processing is not aborted, and subsequent events will be triggered and evaluated. Note that return does not:
- cause an exit from the iRule altogether;
- prevent the same event from firing in another iRule; or
- prevent the same event with a higher priority value from firing in the same iRule.
To prevent further processing of an event in the current rule or other rules for the current TCP connection, you can use 'event EVENT_NAME disable'.

Here are my questions:
1. How can the same event exist more than once within a single iRule?
2. Does 'current TCP connection' refer to a session that is maintained across multiple request-response sequences from a given client browser? Or does each new request initiate a new TCP connection?
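To make the first question concrete, the same event can appear more than once within a single iRule by giving each block a different priority. A minimal sketch (the priorities, path, and log lines are just for illustration):

when HTTP_REQUEST priority 100 {
    # runs first (a lower priority value executes earlier; the default is 500)
    if { [HTTP::path] starts_with "/skip" } {
        # return exits only this event block ...
        return
    }
    log local0. "priority 100 block finished"
}

when HTTP_REQUEST priority 500 {
    # ... this block still fires for the same request, even after the return above
    log local0. "priority 500 block fired"

    # to stop HTTP_REQUEST from firing again on this connection (in this and other iRules),
    # you would use: event HTTP_REQUEST disable
}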
Checksums for F5 supported VMware vCenter cloud templates

Problem this snippet solves:

F5 Networks provides checksums for all of our supported VMware vCenter templates (for other cloud providers, see https://devcentral.f5.com/codeshare/checksums-for-f5-supported-cft-and-arm-templates-on-github-1014). See the README files on GitHub for information on individual templates. You can find the VMware templates in the appropriate supported directory on GitHub: https://github.com/F5Networks/f5-vmware-vcenter-templates/tree/master/supported

You can get a checksum for a particular template by running one of the following commands, depending on your operating system:

Linux: sha512sum <path_to_template>
Windows using CertUtil: CertUtil -hashfile <path_to_template> SHA512

You can compare the checksum produced by that command against the following list. To find your hash, copy the script-signature hash out of your template and search for it on this page. To find the script signature, click the link in the Solution File column (look closely at the path to find the template you are using) and search for script-signature. The hash immediately follows.

Release 1.4.0

Solution File: https://github.com/F5Networks/f5-vmware-vcenter-templates/tree/master/supported/failover/same-net/traditional/4nic/existing-stack/f5-existing-stack-failover-4nic-bigip.js
Hash: 38a48d1f93e91cafcbe3324f5705ea9573918f48e987f2ad127ff38b3de1a2bbe38225bb62be969bba4e6d655b40f8f5f0a4eec43f3b2999c03ddfc22a0b0744

Solution File: https://github.com/F5Networks/f5-vmware-vcenter-templates/tree/master/supported/standalone/n-nic/existing-stack/f5-existing-stack-nNic-bigip.js
Hash: 5b1ddbbe50a0986b4ef4a0d347f30d36e20cb36186f1741449b47cfe8651e57e13bd76ab21164c0b98730f202823b3aa747d232196d0cc32759f558124b2a973

Release 1.3.0

Solution File: https://github.com/F5Networks/f5-vmware-vcenter-templates/tree/master/supported/failover/same-net/traditional/4nic/existing-stack/f5-existing-stack-failover-4nic-bigip.js
Hash: 75613cb46c9639c9e5632dd6099fa424e1b92c6cfd1e4996fc16a1186a3b73e7e9ea8008ab1a27bddf8d0324260d3e2678ed4b1ff5575cfe7f0a492087998cec

Solution File: https://github.com/F5Networks/f5-vmware-vcenter-templates/tree/master/supported/standalone/n-nic/existing-stack/f5-existing-stack-nNic-bigip.js
Hash: b0dc9b2d814aff8598426b7b1c94d06bab58cc5d7d76b77d7df1251c782e534eb40655c8104911e9007ae9806daa0d02520c2c86816ec1f76721840bb3f4f324

Code:

You can get a checksum for a particular template by running one of the following commands, depending on your operating system:
Linux: sha512sum <path_to_template>
Windows using CertUtil: CertUtil -hashfile <path_to_template> SHA512
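For Linux users, here is a small helper sketch for automating the comparison described above. The template path is a placeholder, and the expected hash shown is the Release 1.4.0 failover template hash from the list above; substitute the value that matches your template.

#!/bin/bash
# Compare a downloaded template's SHA-512 checksum against the published value.
TEMPLATE="./f5-existing-stack-failover-4nic-bigip.js"   # path to your downloaded template
EXPECTED="38a48d1f93e91cafcbe3324f5705ea9573918f48e987f2ad127ff38b3de1a2bbe38225bb62be969bba4e6d655b40f8f5f0a4eec43f3b2999c03ddfc22a0b0744"

ACTUAL=$(sha512sum "$TEMPLATE" | awk '{print $1}')

if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "Checksum OK: $TEMPLATE"
else
    echo "Checksum MISMATCH for $TEMPLATE" >&2
    exit 1
fi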
irule class match each query param separately

Data Group dg1:

param1=p11&param2=p21 := host1
param1=p12&param2=p22 := host2

The problem is that some request URLs might list their query params in reverse order:

param2=p21&param1=p11

Therefore I need to match query params individually. So I have two problems to solve:

1. extract from [HTTP::query] the param segments for param1 and param2
2. determine if both param1_segment and param2_segment are found together in some key in dg1

So something like:

set param1 "param1"
set param2 "param2"
set param1_segment = [[HTTP:query] $param1]
set param2_segment = [[HTTP:query] $param2]
if { (class match $param1_segment&$param2_segment equals dg1) or (class match $param2_segment&$param1_segment equals dg1) } {

NOTE: I know the above is wrong in terms of both language elements and syntax. I'm just providing it to better describe the problem I need to solve. Because my use case might extend to 3 query params (in any order), it might be better to AND together a class match for each query param segment.
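One possible approach (a minimal sketch, untested): extract each parameter individually with URI::query, then rebuild the lookup key in a fixed order so the data group only needs one form of each entry. The event, variable names, and the log line are assumptions for illustration.

when HTTP_REQUEST {
    # Pull each parameter value out of the query string individually,
    # so the order they appear in the URL no longer matters.
    set p1 [URI::query [HTTP::uri] "param1"]
    set p2 [URI::query [HTTP::uri] "param2"]

    # Rebuild a canonical key in a fixed order to match the data group entries.
    set key "param1=${p1}&param2=${p2}"

    if { [class match $key equals dg1] } {
        # class lookup returns the value side of the entry, e.g. host1 or host2
        set target_host [class lookup $key dg1]
        log local0. "matched dg1 entry, target host: $target_host"
    }
}

Extending this to three parameters would then just mean appending the third name=value pair to the canonical key.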