F5 Scalable App Delivery & Security for Hybrid Environments
As enterprises modernize and expand their digital services, they increasingly deploy multiple instances of the same applications across diverse infrastructure environments, such as VMware, OpenShift, and Nutanix, to support distributed teams, regional data sovereignty, redundancy, or environment-specific compliance needs. These application instances often integrate into service chains that span clouds and data centers, introducing both scale and operational complexity. F5 Distributed Cloud provides a unified solution for secure, consistent application delivery and security across hybrid and multi-cloud environments. It enables organizations to add workloads seamlessly, whether for scaling, redundancy, or localization, without sacrificing visibility, security, or performance.
Secure and Seamless Cloud Application Migration with F5 Distributed Cloud and Nutanix

Introduction

F5 Distributed Cloud (XC) offers SaaS-based security, networking, and application management services for multicloud environments, on-premises infrastructures, and edge locations. F5 Distributed Cloud Services Customer Edge (CE) enhances these capabilities by integrating into a customer's environment, enabling centralized management via the F5 Distributed Cloud Console while being fully operated by the customer. A CE can be deployed in public clouds, on-premises, or at the edge.

Nutanix is a leading provider of Hyperconverged Infrastructure (HCI), which integrates storage, compute, networking, and virtualization into a unified, scalable, and easily managed solution. Nutanix Cloud Clusters (NC2) extend on-premises data centers to public clouds while maintaining the simplicity of the Nutanix software stack with a unified management console. NC2 runs AOS and AHV on public cloud instances, offering the same CLI, user interface, and APIs as on-premises environments.

This article explores how F5 Distributed Cloud and Nutanix work together to deliver secure and seamless application services across various types of cloud application migration. Whether migrating applications to the cloud, repatriating them from public clouds, or transitioning into a hybrid multicloud environment, F5 Distributed Cloud and Nutanix ensure optimal performance and security at all times.

Illustration

F5 Distributed Cloud App Connect securely connects distributed application services across hybrid and multicloud environments. It operates seamlessly with a platform of web application and API protection (WAAP) services, safeguarding apps and APIs against a wide range of threats through robust security policies, including an integrated WAF, DDoS protection, bot management, and other security tools. This enables the enforcement of consistent, comprehensive security policies across all applications without configuring individual custom policies for each app and environment. It also provides centralized observability, with clear insight into performance metrics, security posture, and operational status across all cloud platforms. In this section, we illustrate how to use F5 Distributed Cloud App Connect with Nutanix in different cloud application migration scenarios.

Cloud Migration

In our example, we have a VMware environment within a data center located in San Jose. Our goal is to migrate the on-premises application nutanix.f5-demo.com from the VMware environment to a multicloud environment by distributing the application workloads across Nutanix Cloud Clusters (NC2) on AWS and on Azure.

First, we deploy F5 Distributed Cloud Customer Edge (CE) and application workloads on NC2 on AWS as well as on NC2 on Azure. F5 Distributed Cloud App Connect addresses the issue of IP overlap, enabling us to deploy the application workloads using the same IP addresses as those in the VMware environment in the San Jose data center.

Next, we create origin pools on the F5 Distributed Cloud Console. In our example, we create two origin pools: nutanix-nc2-aws-pool for origin servers on NC2 on AWS and nutanix-nc2-azure-pool for origin servers on NC2 on Azure.
To minimize disruption to the application service, we update the HTTP Load Balancer for nutanix.f5-demo.com to include both new origin pools, assigning them a higher weight than the existing pool vmware-sj-pool. The origin servers on NC2 on AWS and on NC2 on Azure therefore receive more traffic than the origin servers in the VMware environment in the San Jose data center. Note that the web application firewall (WAF) nutanix-demo remains enabled.

Finally, we remove vmware-sj-pool to complete the cloud migration.

Cloud Repatriation

In this example, xc.f5-demo.com is deployed in a multicloud environment across AWS and Azure. Our objective is to migrate the application from the public clouds back to the Nutanix environment in the San Jose data center.

To begin, we deploy F5 Distributed Cloud Customer Edge (CE) and application workloads on Nutanix AHV. We deploy the application workloads using the same IP addresses as those in the public clouds, because IP overlap is not a concern with F5 Distributed Cloud App Connect.

On the F5 Distributed Cloud Console, we create an origin pool nutanix-sj-pool with origin servers from the Nutanix environment in the San Jose data center. We then update the HTTP Load Balancer for xc.f5-demo.com to include the new origin pool, assigning it a higher weight than both existing pools: xc-aws-pool (origin servers on AWS) and xc-azure-pool (origin servers on Azure). As a result, the origin servers in the Nutanix environment in the San Jose data center receive more traffic than the origin servers in the other pools. To ensure all applications receive the same level of protection, the web application firewall (WAF) nutanix-demo is applied here as well.

To complete the cloud repatriation, we remove xc-aws-pool and xc-azure-pool. The application service experiences minimal disruption during and after the migration.

Hybrid Multicloud

Our goal in this example is to bring xc-nutanix.f5-demo.com, presently deployed solely in the San Jose data center, into a hybrid multicloud environment.

We first deploy F5 Distributed Cloud Customer Edge (CE) and application workloads on NC2 on AWS as well as on NC2 on Azure. On the F5 Distributed Cloud Console, we create an origin pool with origin servers from each of the CE sites. Next, we update the HTTP Load Balancer for xc-nutanix.f5-demo.com so that it includes all origin pools: nutanix-sj-pool (Nutanix AHV in our San Jose data center), nutanix-nc2-aws-pool (NC2 on AWS), and nutanix-nc2-azure-pool (NC2 on Azure). The web application firewall (WAF) nutanix-demo is applied here as well, ensuring a consistent level of protection across all applications no matter where they are deployed. xc-nutanix.f5-demo.com is now in a hybrid multicloud environment.

F5 Distributed Cloud Console is the centralized console for configuration management and observability. It provides real-time metrics and analytics, allowing us to proactively monitor security events. Its integrated AI assistant delivers real-time insights and actionable recommendations, improving our understanding of security events and enabling more informed decision-making. This lets us swiftly detect and respond to emerging threats and sustain a robust security posture.
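The weighted traffic shift used in all three scenarios above is easy to reason about with a quick simulation. The following is a minimal sketch, not F5 Distributed Cloud code: the pool names come from the cloud migration example, and the 2/2/1 weights are assumed values for illustration.

import random
from collections import Counter

# Assumed weights: the two new Nutanix pools are weighted higher than
# the legacy VMware pool, mirroring the migration step described above.
pools = {
    "nutanix-nc2-aws-pool": 2,
    "nutanix-nc2-azure-pool": 2,
    "vmware-sj-pool": 1,
}

# Distribute 10,000 simulated requests by weighted random selection.
hits = Counter(random.choices(list(pools), weights=list(pools.values()), k=10_000))

for pool, count in hits.most_common():
    print(f"{pool}: {count} requests ({count / 100:.1f}%)")

With these weights, roughly 80% of requests land on the Nutanix pools; removing vmware-sj-pool entirely completes the cutover.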
Conclusion

Cloud application migration can be complex and challenging. F5 Distributed Cloud and Nutanix collaborate to offer a secure and streamlined solution that minimizes risk and disruption during and after the migration process, including for those migrating from VMware environments. This ensures a seamless cloud application transition while maintaining business continuity throughout the entire process and beyond.
F5 BIG-IP VE and Application Workloads Migration From VMware to Nutanix

Introduction

Nutanix is a leading provider of Hyperconverged Infrastructure (HCI), which integrates storage, compute, networking, and virtualization into a unified, scalable, and easily managed solution. This article outlines the recommended procedure for migrating BIG-IP Virtual Edition (VE) and application workloads from VMware vSphere to Nutanix AHV with minimal disruption to application services. As always, it is advisable to schedule a maintenance window for any migration activities to mitigate risk and ensure smooth execution.

Migration Overview

Our goal is to migrate the VMware BIG-IP VEs and application workloads to Nutanix with minimal disruption to application services, while preserving the existing configuration, including licenses, IP addresses, hostnames, and other settings. The recommended migration process can be summarized in five stages:

Stage 1 – Deploy a pair of BIG-IP VEs in Nutanix
Stage 2 – Migrate the Standby BIG-IP VE from VMware to Nutanix
Stage 3 – Fail over the Active BIG-IP VE from VMware to Nutanix
Stage 4 – Migrate application workloads from VMware to Nutanix
Stage 5 – Migrate the now-Standby BIG-IP VE from VMware to Nutanix

Migration Procedure

In our example topology, we have an existing VMware environment with a pair of BIG-IP VEs operating in High Availability (HA) mode (Active/Standby), along with application workloads. Each BIG-IP VE is set up with four NICs, which is a typical configuration: one for management, one for internal, one for external, and one for high availability. Below is a detailed step-by-step breakdown of the migration process using this topology.

Stage 1 – Deploy a pair of BIG-IP VEs in Nutanix

i) Create Nutanix BIGIP-1 and Nutanix BIGIP-2, ensuring that the host CPU and memory are consistent with VMware BIGIP-1 and VMware BIGIP-2.
ii) Keep both Nutanix BIGIP-1 and Nutanix BIGIP-2 powered down.

*Current BIG-IP State*: VMware BIGIP-1 (Active) and VMware BIGIP-2 (Standby)

Stage 2 – Migrate the Standby BIG-IP VE from VMware to Nutanix

i) Set VMware BIGIP-2 (Standby) to "Forced Offline", and then save a copy of the configuration.
ii) Save a copy of the license from "/config/bigip.license".
iii) Make sure the above files are saved to a location you can retrieve later in the migration process.
iv) Revoke the license on VMware BIGIP-2 (Standby). Note: Please refer to the BIG-IQ documentation if the license was assigned using BIG-IQ.
v) Disconnect all interfaces on VMware BIGIP-2 (Standby). Note: Disconnecting all interfaces enables a quicker rollback should it become necessary, as opposed to powering down the system.
vi) Power on Nutanix BIGIP-2 and configure it with the same management IP as VMware BIGIP-2.
vii) License Nutanix BIGIP-2 with the license saved from VMware BIGIP-2 (Stage 2ii). Note: Please refer to K91841023 if the VE is running in FIPS mode.
viii) Set Nutanix BIGIP-2 to "Forced Offline".
ix) Upload the saved UCS configuration (Stage 2i) to Nutanix BIGIP-2, and then load it with "no-license". Note: Please refer to K9420 if the UCS file contains an encrypted password or passphrase.
x) Check the log and wait until the message "Configuration load completed, device ready for online" is seen before proceeding; this can be done by opening a separate session to Nutanix BIGIP-2.
xi) Set Nutanix BIGIP-2 to "Online". Note: Before bringing Nutanix BIGIP-2 "Online", make sure it is deployed with the same number of NICs and that the interface-to-VLAN mapping is identical to VMware BIGIP-2.
For example, if interface 1.1 is mapped to VLAN X on VMware BIGIP-2, make sure interface 1.1 is mapped to VLAN X on Nutanix BIGIP-2 as well.
xii) Make sure Nutanix BIGIP-2 is "In Sync". Perform a Config-Sync using "run cm config-sync from-group <device-group-name>" if "(cfg-sync Changes Pending)" is seen.
xiii) BIGIP-2 is now migrated from VMware to Nutanix. Note: Because the BIG-IP VEs are running on different hypervisors, persistence mirroring and connection mirroring are not operational during the migration. If mirroring is enabled, a ".....notice DAG hash mismatch; discarding mirrored state" message may be seen during the migration; this is expected.

*Current BIG-IP State*: VMware BIGIP-1 (Active) and Nutanix BIGIP-2 (Standby)

Stage 3 – Fail over the Active BIG-IP from VMware to Nutanix

i) Fail over VMware BIGIP-1 from Active to Standby.
ii) Nutanix BIGIP-2 is now the Active BIG-IP.

*Current BIG-IP State*: VMware BIGIP-1 (Standby) and Nutanix BIGIP-2 (Active)

Stage 4 – Migrate application workloads from VMware to Nutanix

i) Migrate the application workloads from VMware to Nutanix using Nutanix Move. Note: To minimize application service disruption, it is suggested to migrate the application workloads in groups instead of all at once, ensuring that at least one pool member remains active throughout. This is because Nutanix Move requires downtime to shut down the VM at the source (VMware), perform a final data sync, and then start the VM at the destination (Nutanix).

*Current BIG-IP State*: VMware BIGIP-1 (Standby) and Nutanix BIGIP-2 (Active)

Stage 5 – Migrate the now-Standby BIG-IP VE from VMware to Nutanix

i) Set VMware BIGIP-1 to "Forced Offline", and then save a copy of the configuration.
ii) Save a copy of the license from "/config/bigip.license".
iii) Make sure the above files are saved to a location you can retrieve later in the migration process.
iv) Revoke the license on VMware BIGIP-1 (Standby). Note: Please refer to the BIG-IQ documentation if the license was assigned using BIG-IQ.
v) Disconnect all interfaces on VMware BIGIP-1 (Standby). Note: Disconnecting all interfaces enables a quicker rollback should it become necessary, as opposed to powering down the system.
vi) Power on Nutanix BIGIP-1 and configure it with the same management IP as VMware BIGIP-1.
vii) License Nutanix BIGIP-1 with the license saved from VMware BIGIP-1 (Stage 5ii). Note: Please refer to K91841023 if the VE is running in FIPS mode.
viii) Set Nutanix BIGIP-1 to "Forced Offline".
ix) Upload the saved UCS configuration (Stage 5i) to Nutanix BIGIP-1, and then load it with "no-license". Note: Please refer to K9420 if the UCS file contains an encrypted password or passphrase.
x) Check the log and wait until the message "<hostname>……Configuration load completed, device ready for online" is seen before proceeding; this can be done by opening a separate session to Nutanix BIGIP-1.
xi) Set Nutanix BIGIP-1 to "Online". Note: Before bringing Nutanix BIGIP-1 "Online", make sure it is deployed with the same number of NICs and that the interface-to-VLAN mapping is identical to VMware BIGIP-1. For example, if interface 1.1 is mapped to VLAN X on VMware BIGIP-1, make sure interface 1.1 is mapped to VLAN X on Nutanix BIGIP-1 as well.
xii) Make sure Nutanix BIGIP-1 is "In Sync". Perform a Config-Sync using "run cm config-sync from-group <device-group-name>" if "(cfg-sync Changes Pending)" is seen.
xiii) BIGIP-1 is now migrated from VMware to Nutanix.

Migration is now complete.

*Current BIG-IP State*: Nutanix BIGIP-1 (Standby) and Nutanix BIGIP-2 (Active)
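The save-a-copy steps at the start of Stages 2 and 5 can also be driven remotely. Below is a minimal sketch using BIG-IP's iControl REST API to trigger a UCS save, equivalent to running "tmsh save /sys ucs pre-migration.ucs" on the unit. The management address and credentials are placeholders, and certificate verification is disabled on the assumption of a lab-style self-signed certificate.

import requests

BIGIP_MGMT = "https://192.0.2.10"   # placeholder management address
AUTH = ("admin", "admin-password")  # placeholder credentials

# Ask the unit to save its running configuration to a named UCS archive.
resp = requests.post(
    f"{BIGIP_MGMT}/mgmt/tm/sys/ucs",
    json={"command": "save", "name": "pre-migration.ucs"},
    auth=AUTH,
    verify=False,  # lab only: self-signed management certificate
    timeout=120,   # UCS saves can take a while on busy units
)
resp.raise_for_status()
print(resp.json())

Remember to also copy the archive (and /config/bigip.license) off-box, per steps ii and iii above.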
Summary

The procedure outlined in this article is the recommended approach for migrating BIG-IP Virtual Edition (VE) and application workloads from VMware vSphere to Nutanix AHV. It ensures a successful migration during a scheduled maintenance window with minimal application service disruption, allowing services to continue functioning smoothly during and after the migration.

References

Nutanix AHV: BIG-IP Virtual Edition Setup - https://clouddocs.f5.com/cloud/public/v1/nutanix/nutanix_setup.html
Nutanix Move User Guide - https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move-v5_5:top-overview-c.html
K7752: Licensing the BIG-IP system - https://my.f5.com/manage/s/article/K7752
K2595: Activating and installing a license file from the command line - https://my.f5.com/manage/s/article/K2595
K91841023: Overview of the FIPS 140 Level 1 Compliant Mode license for BIG-IP VE - https://my.f5.com/manage/s/article/K91841023
K9420: Installing UCS files containing encrypted passwords or passphrases - https://my.f5.com/manage/s/article/K9420
K13132: Backing up and restoring BIG-IP configuration files with a UCS archive - https://my.f5.com/manage/s/article/K13132
BIG-IQ Documentation - Manage Software Licenses for Devices - https://techdocs.f5.com/en-us/bigiq-7-0-0/managing-big-ip-ve-subscriptions-from-big-iq/manage-licenses-devices.html
Infrastructure as Code: Automating F5 Distributed Cloud CEs with Ansible

Introduction

Welcome to the first installment of our Infrastructure as Code (IaC) series, focusing on F5 products and Ansible. This series has been a long-standing desire of mine: to showcase the ability of IaC, using Ansible Automation Platform, to deliver Day 0 through Day 2 operations with multiple F5 virtualized platforms. Over time, I've encountered numerous financial clients expressing interest in this topic. For many of these clients, the prospect of leveraging IaC to redeploy an environment outweighs the traditional approach of performing upgrades. This series will hopefully provide insight, documentation, and code for anyone embarking on this journey.

Why Ansible Automation Platform?

Like most people, I started my journey with community editions of Ansible. As my coding became more complex, so did the need to ensure that my lab infrastructure adhered to the best security guidelines required by my company (my goal being to mimic how customers would, and should, do things in real life). I began utilizing Ansible Automation Platform to ensure my credentials were protected, as well as to organize and share my code with the rest of my team (following the "just in case you got hit by a bus" theory). Ansible Automation Platform utilizes execution environments (EEs) to ensure code runs efficiently and cleanly every time. I am now also creating execution environments via GitHub with workflows and pushing them up to Quay.io (https://github.com/VDI-Tech-Guy/f5-execution-engines). Huge thanks to Colin McNaughton at Red Hat for making my life so much easier with building EEs!

Why deploy F5 Distributed Cloud on VMware vSphere?

As mentioned before, I had the desire to build this Infrastructure as Code (IaC) project a while back, prior to the Broadcom acquisition of VMware. Being an ex-VMware employee, I brought a lot of virtualization platform knowledge into this project, so I started with deployments on VMware vSphere. F5 Distributed Cloud can be deployed in any cloud, anywhere. However, I really wanted to focus on on-premises deployments, because not every customer can afford the cloud, and the long-running back-and-forth between on-premises and cloud has evolved into the hybrid cloud and the multi-cloud. I intend to extend this series to the multi-cloud, but these initial deployments focus on VMware vSphere, as it is still utilized in many organizations across the globe.

Information about the Setup in the Demo Video

If you watch the video (below) on how the deployment works, you can see I did a bunch of the pre-work prior to launching the deployment in the Git repository (link in Resources). Here are some of the pre-work items:

- Had a fully functional Ansible Automation Platform 2.4+ environment set up and working (at the time, the controller version was 4.4.4).
- Imported the execution environment into the Ansible Automation Platform controller.
- Set up the project to import the playbooks from the Git repository (link in the Resources section below) and configured the default execution environment.
- Set up the demo inventory (in our use case, we only needed the vCenter host).
- Set up network credentials for the vCenter.
- Set up the template and populated its variables (note: the API key was hidden).
As mentioned in the video (below), the variables were populated for my environment and contain all the required information. I have provided a demo example in the Git repository for anyone who wants to mimic my settings in their own environment; the example has comments about each field (or area of a field) and the purpose of the variable.

{
    "rhel_location": "https://vesio.blob.core.windows.net/releases/rhel/9/x86_64/images/vmware/rhel-9.2023.29-20231212012955-single-nic.ova",
    "xc_api_credential": "_____________________________________",
    "xc_namespace": "mmabis-automation",
    "xc_console_host": "f5-bd",
    "xc_user": "admin",
    "xc_pass": "Ansible123!",
    "vcenter_hostname": "{{ ansible_host }}",
    "vcenter_username": "{{ ansible_env.ANSIBLE_NET_USERNAME }}",
    "vcenter_password": "{{ ansible_env.ANSIBLE_NET_PASSWORD }}",
    "vcenter_validate_certs": false,
    "datacenter_name": "Apex",
    "cluster_name": "Worlds-Edge",
    "datastore": "TrueNAS-SSD",
    "dvs_switch_name": "DSC-DVS",
    "dns_name_servers": ["192.168.192.20", "192.168.192.1"],
    "dns_name_search": ["dsc-services.local", "localdomain"],
    "ntp_servers": ["0.pool.ntp.org", "1.pool.ntp.org", "2.pool.ntp.org"],
    "domain_fqdn": "dsc-services.local",
    "DVS_Name": "{{dvs_switch_name}}",
    "Internal_Network": "DVS-Server-vLan",
    "External_Network": "DVS-DMZ-vLan",
    "resource_pool_name": "Lab-XC",
    "waiting_period": 2,
    "temp_download_location": "/tmp/xc-ova-download.ova",
    "xc_ova_builds": [
        {
            "hostname": "xc-automation-rhel-demo",
            "tmpl_name": "xc-automation-rhel-demo",
            "admin_password": "Ansible123!",
            "cluster_name": "xc-automation-cluster-rhel-demo",
            "dhcp": "no",
            "external_ip": "172.16.192.170",
            "external_ip_subnet_prefix": "24",
            "external_ip_gw": "172.16.192.1",
            "external_ip_route": "0.0.0.0/0",
            "internal_ip": "192.168.192.170",
            "internal_ip_subnet_prefix": "22",
            "internal_ip_gw": "192.168.192.1",
            "certified_hw": "vmware-regular-nic-voltmesh",
            "latitude": "39.51833126",
            "longitude": "-104.759496962",
            "build_count": 3,
            "nic_config": "rhel-multi"
        }
    ]
}

Launching the Code

With all of that pre-work handled, it was as easy as launching the code. There are a few caveats I learned over time when dealing with the automation that I wanted to share:

- Never re-use a cluster name in F5 Distributed Cloud, especially if it was used with a different version of the CE (there were communication issues between the CEs and previous cluster information stored in the F5 Distributed Cloud Console).
- The API credentials are system-level when accepting registration or creating the token for importing into the environment. The code is designed to check for "{{ xc-namespace }}-token"; if it exists, it will utilize the existing token, and if not, it will try to create it, so you need system-level permissions to do this.
- Build count should be 3 by default (it still needs to be defined) or an odd number, based on recommendations I have heard from the F5 field.

If there are more that I think of, I will definitely edit the post and keep it up to date. When launching the code, I was able to get the lab to build correctly multiple times, so if there is an issue or something I might not have documented well, feel free to let me know, and give it a shot for yourself!
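For repeatable runs, the job template can also be launched without clicking through the Controller UI. Here is a minimal sketch against the Ansible Automation Platform Controller REST API; the Controller URL, OAuth token, and job template ID are placeholders for your environment.

import requests

CONTROLLER = "https://aap.example.com"  # placeholder Controller URL
TOKEN = "<oauth2-token>"                # placeholder OAuth2 token
TEMPLATE_ID = 42                        # placeholder job template ID

# Launch the job template that runs the CE deployment playbooks.
resp = requests.post(
    f"{CONTROLLER}/api/v2/job_templates/{TEMPLATE_ID}/launch/",
    headers={"Authorization": f"Bearer {TOKEN}"},
    verify=False,  # adjust to match your certificate setup
)
resp.raise_for_status()
print("Launched job id:", resp.json().get("job"))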
YouTube Video now on DevCentral Channel

Resources

https://github.com/f5devcentral/f5-bd-ansible-day0-automation - the code utilized for this deployment
https://github.com/VDI-Tech-Guy/f5-execution-engines - building execution environments with GitHub and workflows

Conclusion

I do hope that this series will help everyone who wants to embrace IaC, and if you have any questions feel free to reach out!
BIG-IP APM with Horizon 7.x HTML5 gets a Hotfix For Updated Code

A technical update on new hotfixes that were rolled out to resolve HTML5 connectivity issues between VMware Horizon 7.1/7.2 and BIG-IP Access Policy Manager.

What is VMware Horizon HTML Access?

VMware Horizon HTML Access provides the ability for employees to access applications and desktops via web browsers (HTML5 compliant) without the need for additional plugins or native client installations. This method of access provides advantages to customers who enforce very strict software installation requirements yet need access to internal resources, as well as customers with BYOD-based implementations. VMware Horizon HTML Access is an alternative way of accessing company internal resources without the requirement of software installation.

What does the Hotfix Do?

The hotfix is designed to allow the newer versions of the VMware Horizon HTML Access clients, which were upgraded with new URI information, to be accessible via APM. Without this hotfix, customers who upgrade to the Horizon 7.1/7.2 code may experience an issue where HTML5 will not connect to the VDI resource (blank or grey screen). The easiest way to determine whether you are affected is to inspect the URL: if you do not see the string f5vdifwd within the URL, then you are most likely affected. Here is an example of a working configuration; notice the f5vdifwd string in the URL:

https://test.test.local/f5vdifwd/vmview/68a5058e-2911-4316-849b-3d55f5b5cafb/portal/webclient/index.html#/desktop

The Hotfix Information Details

Note that the fixes are incorporated into hotfixes. F5 recommends using the hotfix builds over the iRules listed below. If the iRules are in place when upgrading to a build with the incorporated fix, make sure that the iRule is removed.

Version 12.1.2 HF1 Release Notes
Version 13.0 HF2 Release Notes

638780-3: Handle 302 redirects for VMware Horizon View HTML5 client
Component: Access Policy Manager
Symptoms: Starting from v4.4, the Horizon View HTML5 client uses a new URI for launching remote sessions, and supports a 302 redirect from the old URI for backward compatibility.
Conditions: APM webtop with a VMware View resource assigned; the HTML5 client installed on the backend is version 4.4 or later.
Impact: This fix allows VMware HTML5 clients v4.4 or later to work properly through APM.
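Before applying a workaround, you can sanity-check any captured launch URL for the f5vdifwd prefix programmatically. A minimal sketch, using the working example URL from above:

from urllib.parse import urlparse

# The working example from above; substitute the URL your browser shows
# when launching an HTML5 desktop session through APM.
url = ("https://test.test.local/f5vdifwd/vmview/"
       "68a5058e-2911-4316-849b-3d55f5b5cafb/portal/webclient/index.html")

if "/f5vdifwd/" in urlparse(url).path:
    print("URL contains /f5vdifwd/: APM proxying looks healthy")
else:
    print("No /f5vdifwd/ in URL: you are likely affected by this issue")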
Workaround for versions 11.6.x and 12.x

priority 2
when HTTP_REQUEST {
    regexp {(/f5vdifwd/vmview/[0-9a-f\-]{36})/} [HTTP::uri] vmview_html5_prefix dummy
}
when HTTP_RESPONSE {
    if { ([HTTP::status] == "302") && ([HTTP::header exists "Location"]) } {
        if { [info exists vmview_html5_prefix] } {
            set location [HTTP::header "Location"]
            set location_path [URI::path $location]
            if { $location_path starts_with "/portal/" } {
                set path_index [string first $location_path $location]
                set new_location [substr $location $path_index]
                regsub "/portal/" $new_location $vmview_html5_prefix new_location
                HTTP::header replace "Location" $new_location
            }
            unset vmview_html5_prefix
        }
    }
}

Workaround for version 13.0

priority 2
when HTTP_REQUEST {
    regexp {(/f5vdifwd/vmview/[0-9a-f\-]{36})/} [HTTP::uri] dummy vmview_html5_prefix
}
when HTTP_RESPONSE {
    if { ([HTTP::status] == "302") && ([HTTP::header exists "Location"]) } {
        if { [info exists vmview_html5_prefix] } {
            set location [HTTP::header "Location"]
            set location_path [URI::path $location]
            if { $location_path starts_with "/portal/" } {
                set path_index [string first $location_path $location]
                set new_location "$vmview_html5_prefix[substr $location $path_index]"
                HTTP::header replace "Location" $new_location
            }
            unset vmview_html5_prefix
        }
    }
}
Load Balancing VMware Identity Manager Integration Guide is now Ready!

This will be the first of many articles being released on new or updated documentation for deploying F5 LTM/APM/DNS with various VMware End-User Computing products. I am happy to announce that our first document, "Load Balancing VMware Identity Manager", is now available to the public!

What is VMware Identity Manager?

VMware Identity Manager combines applications and desktops in a single, aggregated workspace. Employees can then access the desktops and applications regardless of where they are based. With fewer management points and flexible access, Identity Manager reduces the complexity of IT administration.

What does this Integration Guide Detail?

This documentation focuses on deploying F5 LTM with VMware Identity Manager (on-premises) for a production deployment. Typically, the first VMware Identity Manager node is set up, configured, and placed behind the load balancer; this is the focus of the document. After that is completed, the first node is shut down and then cloned to the other two nodes, for a total of three nodes in the cluster; the document references other VMware documentation to complete this part.

Here is an example from the document that shows how to set up the advanced monitor used to identify whether a single node within the cluster is online. This monitor is an example of how F5 does more than simple load balancing. Most simple load balancers just check for an HTTPS header or ICMP (ping) response to decide whether a node is online. F5 worked together with VMware to identify the best way to determine whether a node within a cluster is in maintenance mode or offline due to other issues.

Create Monitor

The next task is to create the Identity Manager monitor for the BIG-IP appliance to validate when the web server is available. Use the following guidance to create a health monitor on the BIG-IP system:

1. Click Local Traffic.
2. Hover over Monitors.
3. Click the Add button (+) to the right of Monitors to create a new health monitor.

Monitor Configuration

Create a monitor with the following settings:

1. In the Name field, type a unique name such as WorkspaceOne-Monitor.
2. From the Type list, select HTTPS.
3. In the Send String field, type GET /SAAS/API/1.0/REST/system/health/heartbeat HTTP/1.1\r\nHost: \r\nConnection: Close\r\n\r\n
4. In the Receive String field, type ok$.
5. In the Receive Disable String field, type 404.
6. Click Finished.

(If you want to exercise this same heartbeat check by hand, see the probe sketch at the end of this article.)

You can now download the updated step-by-step guide for Load Balancing VMware Identity Manager:
https://f5.com/Portals/1/PDF/Partners/f5-big-ip-vmware-workspaceone-integration-guide.pdf

You can also read up on setting up a 3-node cluster with VMware Identity Manager:
https://communities.vmware.com/docs/DOC-33552 and http://pubs.vmware.com/identity-manager-28/index.jsp#com.vmware.wsp-install_28/GUID-A29C51E5-6FF5-4F7F-8FC2-1A0F687F6DC5.html

Special thanks to Dean Flaming and the VMware Identity Management team for all of their assistance putting this together!
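As noted in the monitor section above, you can exercise the heartbeat check by hand before wiring it into BIG-IP. This is a minimal sketch that mirrors the monitor's logic; the node address is a placeholder, and certificate verification is disabled on the assumption of a self-signed appliance certificate.

import re
import requests

NODE = "https://192.0.2.20"  # placeholder: one Identity Manager node

# Probe the same endpoint the monitor sends to, and apply the same
# receive logic: a body matching "ok$" means healthy, 404 means disabled.
resp = requests.get(
    f"{NODE}/SAAS/API/1.0/REST/system/health/heartbeat",
    verify=False,  # lab only: self-signed certificate
    timeout=5,
)

if resp.status_code == 404:
    print("404 returned: the monitor would mark this node disabled")
elif re.search(r"ok$", resp.text.strip()):
    print("Heartbeat matched ok$: the monitor would mark this node up")
else:
    print(f"Unexpected response ({resp.status_code}): {resp.text[:100]}")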
Inside Look - PCoIP Proxy for VMware Horizon View

I sit down with F5 Solution Architect Paul Pindell to get an inside look at BIG-IP's native support for VMware's PCoIP protocol. He reviews the architecture and business value, and gives a great demo on how to configure BIG-IP. BIG-IP APM offers full proxy support for PC-over-IP (PCoIP), a leading virtual desktop infrastructure (VDI) protocol. F5 is the first to provide this functionality, which allows organizations to simplify their VMware Horizon View architectures. Combining the PCoIP proxy with the power of the BIG-IP platform delivers hardened security and increased scalability for end-user computing. In addition to PCoIP, F5 supports a number of other VDI solutions, giving customers flexibility in designing and deploying their network infrastructure. ps

Related:
F5 Friday: Simple, Scalable and Secure PCoIP for VMware Horizon View
Solutions for VMware applications
F5's YouTube Channel
In 5 Minutes or Less Series (24 videos - over 2 hours of In 5 Fun)
Inside Look Series
Life@F5 Series
Horizon Blast Extreme UDP with BEAT Support Functionality in BIG-IP Access Manager 14.0!

Hey all, just wanted to provide an update on new features that were added to BIG-IP Access Manager (formerly APM) 14.0 for VMware Horizon. Listed below are the new features added to Access Manager for VMware Workspace ONE and VMware Horizon:

- APM supports the Blast Extreme protocol over TCP and UDP, and also supports Blast Extreme Adaptive Transport (BEAT) for desktops and applications.
- APM supports access to VMware Horizon desktops and applications using VMware Workspace ONE as an IdP. For more information, check out the integration guide at https://f5.com/Portals/1/PDF/Partners/apm-proxy-with-workspace-one-integration-guide.pdf

What is the VMware Horizon Blast Extreme TCP/UDP with BEAT Feature?

Since the release of Blast Extreme in Horizon 7, F5 has supported the TCP functionality of the Blast code, allowing the VMware Horizon native client and HTML5 clients to connect to desktops and apps. BIG-IP 14.0 now supports the UDP and BEAT functionality of the Blast Extreme code.

What is BEAT?

BEAT, or Blast Extreme Adaptive Transport, allows the Blast Extreme transport to switch between TCP and UDP based on the connected client's conditions. For example, when a client is connected over a mobile network, the connectivity is sometimes unstable (packet loss and/or high latency); with a typical TCP connection, lost packets are retransmitted over and over, creating lag from the user's desktop or app perspective in Horizon. BEAT was designed to adapt to these types of connections: it detects the packet loss and switches the connected client's transport from TCP to UDP, allowing the dropped packets to be lost while the session moves forward, giving the user a more seamless desktop experience. BEAT can also switch from UDP back to TCP, depending on the client's connectivity.

Is there an iApp to Enable Blast UDP?

Currently there is not an iApp for this functionality, and the existing iApp will only create the TCP functionality for the Blast Extreme protocol. F5 intends to release a build soon to resolve this issue; this article is being posted to help customers manually create the virtual server that enables the Blast Extreme UDP functionality (and thus BEAT) before the iApp fix is released. Note: this manual configuration will need to be removed when the iApp is later upgraded to support the feature.

Create a VDI Profile

Creating the VDI profile for Blast Extreme:

1. Navigate to Access --> Connectivity/VPN --> VDI/RDP --> VDI Profiles.
2. Create a new profile and name it whatever you want.
3. Change Parent Profile to "/Common/vdi".
4. In VMware View Settings, change from PCoIP to Blast Extreme.

Create a Virtual IP for the Blast Extreme UDP Port

1. Provide a unique name.
2. Match the Destination Address with the existing Horizon APM deployment.
3. Service Port: 8443.
4. Source Address Translation: Automap.
5. VDI Profile: select the previously created VDI profile.
6. Click Finished to create the VIP.

Validation/Testing

Once completed, you can test the connection. I recommend using the VMware Horizon Performance Tracker, as you can see the BEAT protocol in action, changing from TCP to UDP.
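Before testing with the Horizon client, it can help to confirm that the 8443 listener is reachable at all. The following is a rough reachability sketch; the gateway address is a placeholder, and note that UDP, being connectionless, can only be weakly probed this way (a clean send proves little, while an immediate error proves a problem).

import socket

GATEWAY = "192.0.2.30"  # placeholder: the Blast virtual server address
PORT = 8443             # Blast Extreme uses 8443 for both TCP and UDP

# TCP side: a plain connect test against the existing 8443 virtual server.
try:
    with socket.create_connection((GATEWAY, PORT), timeout=5):
        print(f"TCP {PORT} reachable")
except OSError as exc:
    print(f"TCP {PORT} failed: {exc}")

# UDP side: send a datagram at the new virtual server; the absence of an
# immediate ICMP-driven error is only a weak hint that the port is open.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2)
try:
    sock.sendto(b"\x00", (GATEWAY, PORT))
    print(f"UDP {PORT}: datagram sent without immediate error")
except OSError as exc:
    print(f"UDP {PORT} failed: {exc}")
finally:
    sock.close()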
VMware Fusion Custom Networking for BIG-IP VE Lab

I've used VMware Workstation on Windows and Fusion on OS X for quite some time, and I'm a big fan of both platforms. That said, the lack of a network settings editor built in to Fusion (I understand it's now available in the Pro version) can be more than a little frustrating, particularly if you want a custom experience. Why custom? Well, you just might want to do more with your BIG-IP VE lab license than connect to it and test iRules, and that might require more than a NIC or two. Custom network capabilities enable you to mock up complete environments rapidly.

If you are importing an existing VE instance (as I did) and you have not yet set up your custom networking, you will see that the NIC is unrecognized if it was previously configured for additional networking, as shown in the figure below. If you are installing a new VE instance, set up your networking first. This is accomplished by editing the networking file located in your /Library tree. You can edit this file by typing this command:

sudo vi /Library/Preferences/Vmware\ Fusion/networking

Once you are in edit mode, you can insert the text below between the VNET_1 and VNET_8 sections.

answer VNET_2_HOSTONLY_NETMASK 255.255.255.0
answer VNET_2_HOSTONLY_SUBNET 192.168.102.0
answer VNET_2_VIRTUAL_ADAPTER yes
answer VNET_2_VIRTUAL_ADAPTER_ADDR 192.168.102.1
answer VNET_3_HOSTONLY_NETMASK 255.255.255.0
answer VNET_3_HOSTONLY_SUBNET 192.168.103.0
answer VNET_3_VIRTUAL_ADAPTER yes
answer VNET_3_VIRTUAL_ADAPTER_ADDR 192.168.103.1
answer VNET_4_HOSTONLY_NETMASK 255.255.255.0
answer VNET_4_HOSTONLY_SUBNET 192.168.104.0
answer VNET_4_VIRTUAL_ADAPTER yes
answer VNET_4_VIRTUAL_ADAPTER_ADDR 192.168.104.1
answer VNET_5_HOSTONLY_NETMASK 255.255.255.0
answer VNET_5_HOSTONLY_SUBNET 192.168.105.0
answer VNET_5_VIRTUAL_ADAPTER yes
answer VNET_5_VIRTUAL_ADAPTER_ADDR 192.168.105.1
answer VNET_6_HOSTONLY_NETMASK 255.255.255.0
answer VNET_6_HOSTONLY_SUBNET 192.168.106.0
answer VNET_6_VIRTUAL_ADAPTER yes
answer VNET_6_VIRTUAL_ADAPTER_ADDR 192.168.106.1
answer VNET_7_HOSTONLY_NETMASK 255.255.255.0
answer VNET_7_HOSTONLY_SUBNET 192.168.107.0
answer VNET_7_VIRTUAL_ADAPTER yes
answer VNET_7_VIRTUAL_ADAPTER_ADDR 192.168.107.1

You don't have to add as many vnics as I did, and you can adjust the IP space and enable DHCP (check the VNET_1 config for details) in the networks as well, if desired. Save and restart Fusion, and you should be good to go. Looking back at my VE instance NIC2 config, it properly placed my NIC into vmnet2 (same as it was pre-import), but I now have the flexibility to change to any of these networks.
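If you would rather generate those stanzas than hand-type them, a few lines of scripting will do it. This sketch reproduces exactly the VNET_2 through VNET_7 block shown above; adjust the range and subnets to taste before pasting the output into the networking file.

# Emit the repeated VNET stanzas (vmnet2-vmnet7, 192.168.102-107.0/24)
# in the format the Fusion networking file expects.
for vnet in range(2, 8):
    subnet = f"192.168.10{vnet}"
    print(f"answer VNET_{vnet}_HOSTONLY_NETMASK 255.255.255.0")
    print(f"answer VNET_{vnet}_HOSTONLY_SUBNET {subnet}.0")
    print(f"answer VNET_{vnet}_VIRTUAL_ADAPTER yes")
    print(f"answer VNET_{vnet}_VIRTUAL_ADAPTER_ADDR {subnet}.1")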