migration
F5 migration
Hi, we are working on migrating an F5 from a VIPRION-hosted guest to a new r10900 tenant. Both the old and new systems are running a 17.x version, but the old F5 has a large amount of AS3-deployed configuration from BIG-IQ plus 200+ legacy VIPs. What is the best way to migrate this kind of mixed environment, and how can we make sure BIG-IQ still works correctly with the new target after the migration?
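A minimal sketch of pulling the AS3 declaration off the old guest via the AS3 REST endpoint, assuming admin credentials and reachable management addresses (both placeholders are hypothetical). For BIG-IQ-managed AS3 configuration, the declarations should normally be re-deployed from BIG-IQ itself so it remains the source of truth; the direct API calls below are shown for reference only:

```bash
# Export the complete AS3 declaration from the old VIPRION guest
# (<old-guest-mgmt>, <new-tenant-mgmt>, and credentials are placeholders)
curl -sku admin:'<password>' \
  https://<old-guest-mgmt>/mgmt/shared/appsvcs/declare > as3_backup.json

# After AS3 is installed on the new r10900 tenant, re-post the declaration
curl -sku admin:'<password>' -H "Content-Type: application/json" \
  -X POST -d @as3_backup.json \
  https://<new-tenant-mgmt>/mgmt/shared/appsvcs/declare
```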
Issue while migrating config from 4000s to r4600

Hi All, we are trying to migrate config from a 4000s to an r4600. We created a UCS on the 4000s, but while loading it on a tenant on the r4600 we got an error: "load sys partition all platform migrate" - failed -- 010713d0:3: Symmetric Unit key decrypt failure - decrypt failure, configuration loading error: high-config-load-failed. Before loading the UCS from the 4000s onto the tenant, we copied the master key to the new tenant and verified it as well. The command used to load the UCS: load sys ucs <file name> no-license platform-migrate. We didn't see any other error logs in /var/log/ltm. Could someone suggest how to resolve this issue? Please note we are using a CA-signed device certificate, not a self-signed certificate. Also, the management IP, trunk name, and number of trunk ports in the UCS are different from those on the tenant.
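For reference, a hedged sketch of the usual master-key copy procedure with the f5mku utility; a decrypt failure like the one above typically means the key on the tenant still does not match the key the UCS was encrypted with:

```bash
# On the source 4000s: display the current master key
f5mku -K

# On the target r4600 tenant: install the key from the source,
# then retry the UCS load
f5mku -r <master-key-value-from-source>
tmsh load /sys ucs <file name> no-license platform-migrate
```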
F5 BIG-IP VE and Application Workloads Migration From VMware to Nutanix

Introduction

Nutanix is a leading provider of Hyperconverged Infrastructure (HCI), which integrates storage, compute, networking, and virtualization into a unified, scalable, and easily managed solution. This article outlines the recommended procedure for migrating BIG-IP Virtual Edition (VE) and application workloads from VMware vSphere to Nutanix AHV, ensuring minimal disruption to application services. As always, it is advisable to schedule a maintenance window for any migration activities to mitigate risks and ensure smooth execution.

Migration Overview

Our goal is to migrate the VMware BIG-IP VEs and application workloads to Nutanix with minimal disruption to application services, while preserving the existing configuration, including licenses, IP addresses, hostnames, and other settings. The recommended migration process can be summarized in five stages:

Stage 1 – Deploy a pair of BIG-IP VEs in Nutanix
Stage 2 – Migrate the standby BIG-IP VE from VMware to Nutanix
Stage 3 – Fail over the active BIG-IP VE from VMware to Nutanix
Stage 4 – Migrate application workloads from VMware to Nutanix
Stage 5 – Migrate the now-standby BIG-IP VE from VMware to Nutanix

Migration Procedure

In our example topology, we have an existing VMware environment with a pair of BIG-IP VEs operating in High Availability (HA) mode - Active and Standby - along with application workloads. Each BIG-IP VE is set up with four NICs, which is a typical configuration: one for management, one for internal, one for external, and one for high availability. Below is a detailed step-by-step breakdown of the migration process using this topology.

Stage 1 – Deploy a pair of BIG-IP VEs in Nutanix

i) Create Nutanix BIGIP-1 and Nutanix BIGIP-2, ensuring that the host CPU and memory are consistent with VMware BIGIP-1 and VMware BIGIP-2.
ii) Keep both Nutanix BIGIP-1 and Nutanix BIGIP-2 powered down.

*Current BIG-IP State*: VMware BIGIP-1 (Active) and VMware BIGIP-2 (Standby)

Stage 2 – Migrate the standby BIG-IP VE from VMware to Nutanix

i) Set VMware BIGIP-2 (Standby) to "Forced Offline", and then save a copy of the configuration.
ii) Save a copy of the license from /config/bigip.license.
iii) Make sure the above files are saved to a location you can retrieve them from later in the migration process.
iv) Revoke the license on VMware BIGIP-2 (Standby). Note: Please refer to the BIG-IQ documentation if the license was assigned using BIG-IQ.
v) Disconnect all interfaces on VMware BIGIP-2 (Standby). Note: Disconnecting all interfaces enables a quicker rollback should it become necessary, as opposed to powering down the system.
vi) Power on Nutanix BIGIP-2 and configure it with the same management IP as VMware BIGIP-2.
vii) License Nutanix BIGIP-2 with the license saved from VMware BIGIP-2 (Stage 2ii). Note: Please refer to K91841023 if the VE is running in FIPS mode.
viii) Set Nutanix BIGIP-2 to "Forced Offline".
ix) Upload the saved UCS configuration (Stage 2i) to Nutanix BIGIP-2, and then load it with "no-license". Note: Please refer to K9420 if the UCS file contains an encrypted password or passphrase.
x) Check the log and wait until the message "Configuration load completed, device ready for online" is seen before proceeding; this can be done by opening a separate session to Nutanix BIGIP-2.
xi) Set Nutanix BIGIP-2 to "Online". Note: Before bringing Nutanix BIGIP-2 "Online", make sure it is deployed with the same number of NICs and that the interface-to-VLAN mapping is identical to VMware BIGIP-2.
For example, if interface 1.1 is mapped to VLAN X on VMware BIGIP-2, make sure interface 1.1 is mapped to VLAN X on Nutanix BIGIP-2 as well.
xii) Make sure Nutanix BIGIP-2 is "In Sync". Perform a config-sync using "run cm config-sync from-group <device-group-name>" if "(cfg-sync Changes Pending)" is seen.
xiii) BIGIP-2 is now migrated from VMware to Nutanix. Note: Because the BIG-IP VEs are running on different hypervisors during the migration, persistence mirroring and connection mirroring will not be operational. If mirroring is enabled, a ".....notice DAG hash mismatch; discarding mirrored state" message may be seen during the migration; this is expected.

*Current BIG-IP State*: VMware BIGIP-1 (Active) and Nutanix BIGIP-2 (Standby)

Stage 3 – Fail over the active BIG-IP VE from VMware to Nutanix

i) Fail over VMware BIGIP-1 from Active to Standby.
ii) Nutanix BIGIP-2 is now the active BIG-IP.

*Current BIG-IP State*: VMware BIGIP-1 (Standby) and Nutanix BIGIP-2 (Active)

Stage 4 – Migrate application workloads from VMware to Nutanix

i) Migrate the application workloads from VMware to Nutanix using Nutanix Move. Note: To minimize application service disruption, it is suggested to migrate the application workloads in groups instead of all at once, ensuring that at least one pool member remains active throughout. This is because Nutanix Move requires downtime to shut down the VM at the source (VMware), perform a final data sync, and then start the VM at the destination (Nutanix).

*Current BIG-IP State*: VMware BIGIP-1 (Standby) and Nutanix BIGIP-2 (Active)

Stage 5 – Migrate the now-standby BIG-IP VE from VMware to Nutanix

i) Set VMware BIGIP-1 (Standby) to "Forced Offline", and then save a copy of the configuration.
ii) Save a copy of the license from /config/bigip.license.
iii) Make sure the above files are saved to a location you can retrieve them from later in the migration process.
iv) Revoke the license on VMware BIGIP-1 (Standby). Note: Please refer to the BIG-IQ documentation if the license was assigned using BIG-IQ.
v) Disconnect all interfaces on VMware BIGIP-1 (Standby). Note: Disconnecting all interfaces enables a quicker rollback should it become necessary, as opposed to powering down the system.
vi) Power on Nutanix BIGIP-1 and configure it with the same management IP as VMware BIGIP-1.
vii) License Nutanix BIGIP-1 with the license saved from VMware BIGIP-1 (Stage 5ii). Note: Please refer to K91841023 if the VE is running in FIPS mode.
viii) Set Nutanix BIGIP-1 to "Forced Offline".
ix) Upload the saved UCS configuration (Stage 5i) to Nutanix BIGIP-1, and then load it with "no-license". Note: Please refer to K9420 if the UCS file contains an encrypted password or passphrase.
x) Check the log and wait until the message "<hostname>……Configuration load completed, device ready for online" is seen before proceeding; this can be done by opening a separate session to Nutanix BIGIP-1.
xi) Set Nutanix BIGIP-1 to "Online". Note: Before bringing Nutanix BIGIP-1 "Online", make sure it is deployed with the same number of NICs and that the interface-to-VLAN mapping is identical to VMware BIGIP-1. For example, if interface 1.1 is mapped to VLAN X on VMware BIGIP-1, make sure interface 1.1 is mapped to VLAN X on Nutanix BIGIP-1 as well.
xii) Make sure Nutanix BIGIP-1 is "In Sync". Perform a config-sync using "run cm config-sync from-group <device-group-name>" if "(cfg-sync Changes Pending)" is seen.
xiii) BIGIP-1 is now migrated from VMware to Nutanix.

The migration is now complete.

*Current BIG-IP State*: Nutanix BIGIP-1 (Standby) and Nutanix BIGIP-2 (Active)
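For convenience, the per-unit steps in Stages 2, 3, and 5 map to tmsh roughly as follows. This is a hedged sketch, assuming a directly licensed VE (not BIG-IQ-assigned) and an illustrative UCS file path:

```bash
# On the VMware unit being migrated:
tmsh run /sys failover offline                  # force offline
tmsh save /sys ucs /var/local/ucs/migrate.ucs   # save the configuration
cp /config/bigip.license /var/tmp/              # save the license
tmsh revoke /sys license                        # skip if BIG-IQ-assigned

# On the matching Nutanix unit, after licensing it with the saved license:
tmsh run /sys failover offline
tmsh load /sys ucs /var/local/ucs/migrate.ucs no-license
# wait for "Configuration load completed, device ready for online" in /var/log/ltm
tmsh run /sys failover online
tmsh run /cm config-sync from-group <device-group-name>   # if changes pending

# Stage 3 failover, run on the active VMware unit:
tmsh run /sys failover standby
```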
Summary

The migration procedure outlined in this article is the recommended way to migrate BIG-IP Virtual Edition (VE) and application workloads from VMware vSphere to Nutanix AHV. It ensures a successful migration during a scheduled maintenance window with minimal application service disruption, allowing applications to continue functioning smoothly during and after the migration.

References

Nutanix AHV: BIG-IP Virtual Edition Setup
https://clouddocs.f5.com/cloud/public/v1/nutanix/nutanix_setup.html
Nutanix Move User Guide
https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move-v5_5:top-overview-c.html
K7752: Licensing the BIG-IP system
https://my.f5.com/manage/s/article/K7752
K2595: Activating and installing a license file from the command line
https://my.f5.com/manage/s/article/K2595
K91841023: Overview of the FIPS 140 Level 1 Compliant Mode license for BIG-IP VE
https://my.f5.com/manage/s/article/K91841023
K9420: Installing UCS files containing encrypted passwords or passphrases
https://my.f5.com/manage/s/article/K9420
K13132: Backing up and restoring BIG-IP configuration files with a UCS archive
https://my.f5.com/manage/s/article/K13132
BIG-IQ Documentation - Manage Software Licenses for Devices
https://techdocs.f5.com/en-us/bigiq-7-0-0/managing-big-ip-ve-subscriptions-from-big-iq/manage-licenses-devices.html
Upgrading BIG-IP 2000S to R2600

Hi, I have a pair of BIG-IP 2000s licensed with LTM and need to upgrade the hardware to r2600. I have some backend nodes pointing to the F5s as their gateway. The 2000s appliances run code v15.1. Will it be doable to archive an SCF file from the old F5s, edit the names specific to the 2000s (such as the hostname and license references) to the new names for the r2600s, and load the file on the new F5s? I'm thinking of using different management IPs but keeping all other configuration (virtual servers, VLANs, IPs) as it is. Also, what about the license and certificate files? Thank you!
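A hedged sketch of the SCF round-trip described above (the file paths are illustrative). Note that licenses are per-device and will not carry over, and an SCF references - but does not contain - certificate and key files, so those need to be copied separately (or consider a UCS archive per K13132):

```bash
# On one of the 2000s units: save a single configuration file (SCF)
tmsh save /sys config file /var/local/scf/2000s.scf no-passphrase

# Copy it off-box, edit hostname/device names as needed, then on the r2600:
tmsh load /sys config file /var/local/scf/2000s.scf
```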
Migration from BIG-IP v15.1.10.2 to BIG-IP v17.1.1.3.0.76.5 on VE

Hi Community, I need your support. Our current VM is on BIG-IP v15.1.10.2 and we are deploying a new VM on BIG-IP v17.1.1.3.0.76.5. Could you please confirm whether there will be any configuration syntax changes, as we are planning to load the existing SCF file onto the new VM. Please comment on this.
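One low-risk way to answer this for a specific configuration is to dry-run the parse on the new VM before applying anything; a minimal sketch, assuming an illustrative file path:

```bash
# Parse-check the v15 SCF on the v17 VM without applying it
tmsh load /sys config verify file /var/local/scf/v15_backup.scf
```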
BIG-IP Configuration Conversion Scripts

Kirk Bauer, John Alam, and Pete White created a handful of Perl and/or Python scripts aimed at easing your migration from some of the "other guys" to BIG-IP. While they aren't going to map every nook and cranny of the configurations to a BIG-IP feature, they will get you well along the way, taking out as much of the human-error element as possible. Links to the codeshare articles below.

Cisco ACE (perl)
Cisco ACE via tmsh (perl)
Cisco ACE (python)
Cisco CSS (perl)
Cisco CSS via tmsh (perl)
Cisco CSM (perl)
Citrix Netscaler (perl)
Radware via tmsh (perl)
Radware (python)
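Whichever converter you use, the generated BIG-IP stanzas typically get merged into the target device with tmsh; a hedged sketch (the output filename is illustrative):

```bash
# Merge converted configuration stanzas into the running config
tmsh load /sys config merge file /var/tmp/converted_config.txt

# Or paste smaller stanzas interactively:
tmsh load /sys config from-terminal merge
```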
A Brief Introduction To External Application Verification Monitors

Background

EAV (External Application Verification) monitors are one of the most useful and extensible features of the BIG-IP product line. They give the end user the ability to utilize the underlying Linux operating system to perform complex and thorough service checks. Given a service that does not have a monitor provided, a lot of users will assign the closest related monitor and consider the solution complete. There are more than a few cases where a TCP or UDP monitor will mark a service "up" even while the service is unresponsive. EAVs give us the ability to dive much deeper than merely performing a 3-way handshake and neglecting the other layers of the application or service.

How EAVs Work

An EAV monitor is an executable script located on the BIG-IP's file system (usually under /usr/bin/monitors) that is executed at regular intervals by the bigd daemon and reports its status. One of the most common misconceptions (especially amongst those with *nix backgrounds) is that the exit status of the script dictates the fate of the pool member. The exit status has nothing to do with how bigd interprets the pool member's health. Any output to stdout (standard output) from the script will mark the pool member "up". This is a nuance that should receive special attention when architecting your next EAV. Analyze each line of your script and make sure nothing will inadvertently get directed to stdout during monitor execution. The most common example is when someone writes a script that echoes "up" when the checks execute correctly and "down" when they fail. The pool member will be enabled by the BIG-IP under both circumstances, rendering the monitor useless.

bigd automatically provides two arguments to the EAV's script upon execution: the node IP address and the node port number. The node IP address is provided with an IPv6 prefix that may need to be removed in order for the script to function correctly. You'll notice we remove the "::ffff:" prefix with a sed substitution in the example below. Other arguments can be provided to the script when configured in the UI (or on the command line). The user-provided arguments will have offsets of $3, $4, and so on.

Without further ado, let's take a look at a service-specific monitor that gives us a more complete view of the application's health.

An Example

I have seen on more than one occasion where a DNS pool member has successfully passed the TCP monitor, but the DNS service was unresponsive. As a result, a more invasive inspection is required to make sure that the DNS service is in fact serving valid responses. Let's take a look at an example:

```bash
#!/bin/bash
# $1 = node IP
# $2 = node port
# $3 = hostname to resolve

# Log the usage to /var/log/ltm and bail if the argument count is wrong
[[ $# != 3 ]] && logger -p local0.error -t ${0##*/} -- "usage: ${0##*/} <node IP> <node port> <hostname to resolve>" && exit 1

# Strip the IPv6 prefix that bigd prepends to the node IP
node_ip=$(echo $1 | sed 's/::ffff://')

# Query the pool member for an A record; we only care about the exit status
dig +short @$node_ip $3 IN A &> /dev/null

# Output to stdout only on success -- any stdout output marks the member up
[[ $? == 0 ]] && echo "UP"
```

We are using the dig (Domain Information Groper) command to query our DNS server for an A record. We use the exit status from dig to determine if the monitor will pass. Notice how the script will never output anything to stdout other than "UP" in the case of success. If there aren't enough arguments for the script to proceed, we output the usage to /var/log/ltm and exit. This is a very simple 13-line script, but an effective example.
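As a complement to the example, here is a hedged sketch of wiring such a script up from tmsh; the object, pool, and file names are hypothetical, and the interval/timeout values are only illustrative:

```bash
# Import the script into the BIG-IP filestore
tmsh create sys file external-monitor dns_eav_script source-path file:/var/tmp/dns_eav.sh

# Create an external monitor that runs it, passing the hostname as $3
tmsh create ltm monitor external dns_eav_monitor run dns_eav_script args "www.example.com" interval 10 timeout 31

# Attach the monitor to a pool
tmsh modify ltm pool dns_pool monitor dns_eav_monitor
```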
The Takeaways

The command should be as lightweight and efficient as possible.
If the same result can be accomplished with a built-in monitor, use it.
EAV monitors don't rely on the command's exit status, only standard output.
Send all error and informational messages to logger instead of stdout or stderr (standard error).
"UP" has no significance; it is just a series of characters sent to stdout. The monitor would still pass if the script echoed "DOWN".

Conclusion

When I first discovered EAV monitors, it opened up a whole realm of possibilities that I could not accomplish with built-in monitors. It gives you the ability to do more thorough checking as well as place logic in your monitors. While my example was a simple bash script, BIG-IP also ships with Perl and Python along with their standard libraries, which offer endless possibilities. In addition to using the built-in commands and libraries, it would be just as easy to write a monitor in a compiled language (C, C++, or whatever your flavor may be) and statically compile it before uploading it to the BIG-IP. If you are new to EAVs, I hope this gives you the tools to make your environments more robust and resilient. If you're more of a seasoned veteran, we'll have more fun examples in the near future.
Migration projects - how to avoid IP conflicts

Hi, I wonder if there is a smarter/easier way to avoid IP conflicts during migrations - in the phase when the production and the new service should listen on the same IP.

Scenario:
All VIPs are in the 192.168.1.0/24 subnet.
All traffic to the VIPs comes via an external router (no clients in the 192.168.1.0/24 subnet).
Production device IP: 192.168.1.254
Production VIP: 192.168.1.100
New device IP: 192.168.1.253
New VIP: 192.168.1.100
Traffic from any client except the test station (192.168.10.100) should hit the VIP on the production device.

BIG-IP setup:
Floating IP: 192.168.1.253
VIP: 192.168.1.100; ARP disabled

External router setup:
Route: from 192.168.10.100 to 192.168.1.100/32, gw 192.168.1.253

One important note: the virtual-address object for the VS has to be created in advance via tmsh, for example using tmsh load sys config from-terminal merge and a config similar to:

```
ltm virtual-address 192.168.1.100 {
    address 192.168.1.100
    arp disabled
    mask 255.255.255.255
    traffic-group traffic-group-1
}
```

or of course any other suitable way. The reason for this is simple - auto-created virtual-address objects (created when the VS is created) always have ARP enabled.

After finishing testing, all virtual-address objects can be updated with ARP enabled using a simple bash script like the one below:

```bash
#!/bin/bash
# $1 contains the source file with the VIPs to place in the array
# $2 contains "enabled" or "disabled" to turn ARP on or off for the VIPs
if [ -z "$1" ] || [ -z "$2" ]
then
    # bad arguments - quit
    echo "Syntax: vip_arp_enable-disable_from-file.sh <vip-list-file> <enabled|disabled>"
else
    mapfile -t myArray < "$1"
    count=0
    for vip in "${myArray[@]}"
    do
        echo "Vip is: $vip"
        tmsh modify ltm virtual-address $vip arp $2
        ((++count))
    done
    echo "$count processed"
fi
```

With the above script it's possible to both enable and disable ARP, given a file with the list of virtual-addresses to be processed.

Result:
Traffic from any client (except traffic sourced from 192.168.10.100) is simply sent to 192.168.1.100 based on the MAC in the ARP reply sent to the 192.168.1.0/24 subnet by the router. The BIG-IP never responds to ARP requests for 192.168.1.100 (ARP is disabled on this VIP).
Traffic sourced from 192.168.10.100 is sent to 192.168.1.253 (as the next hop, using the MAC of 192.168.1.253 as the target MAC and 192.168.1.100 as the target IP); internally the BIG-IP is then able to route this packet to the configured VS with 192.168.1.100.

Tested and working, but maybe not the optimal approach? What I am afraid of is whether the ARP cache on the router could be an issue - e.g., when production traffic is routed, the MAC of the production VIP is cached; then, when test traffic is processed (to the same IP as production), this cached entry would be used and the traffic would reach the production VIP instead of the test VIP. This never happened in my lab, but that's not 100% confirmation it would not fail.

The router was simulated using another BIG-IP with two VSs:
Wildcard (Forwarding IP) accepting traffic to subnet 192.168.1.0/24
Wildcard (Performance L4):
Source Address: 192.168.10.100/32
Destination Address/Mask: 192.168.1.0/24
All ports, all protocols
Address Translation and Port Translation: disabled
Pool with pool member: 192.168.1.253 (floating IP of the other BIG-IP)
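A quick way to sanity-check the cutover state is to list the ARP setting on the virtual-addresses; a minimal sketch, using the addresses from the scenario above:

```bash
# Show the arp property for all virtual-addresses
tmsh list ltm virtual-address arp

# After cutover, re-enable ARP on the migrated VIP
tmsh modify ltm virtual-address 192.168.1.100 arp enabled
```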