migration
Behavior of masterkey on rSeries
Is there any difference in how the master key is used on rSeries? Is it still separate/dedicated for F5OS and each of the tenants, or is there just ONE master key that needs to be adjusted at the F5OS level?

The reason I am asking: I want to load a bigip.conf file from an iSeries onto a tenant of an rSeries. I performed the procedure with the f5mku commands so that the new rSeries tenant has the same master key, and it is also displayed correctly. But when I try to load/verify the configuration (load sys config partition { xyz } verify) I still get the error message: "Decryption of the field (pvalue) for object (xxx 1 PASSWORD=) failed while loading configuration that is encrypted with a different master key."

Is there anything else I should double-check? Thank you! Regards, Stefan :)
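For reference, a minimal sketch of the key-copy-then-verify sequence described above, assuming the standard f5mku workflow (the key value is a placeholder and the partition name is the one from the post):

    # On the source iSeries unit: display its master key
    f5mku -K

    # On the target rSeries tenant: install the same master key, then save the config
    f5mku -r <key-value-from-source>
    tmsh save sys config

    # Re-run the verification load for the partition
    tmsh load sys config partition { xyz } verify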
Single LTM with multiple GTM domains

I am currently working on a datacenter migration; we are re-IP'ing everything and rebuilding all the network appliances. I am working out the best, least impactful way to migrate the GTM appliances to the new DCs. Here is the overall situation. Everything runs the same 15.x.x version, on a mix of rSeries hardware running VEs and iSeries hardware also running VEs.

Existing DCs:
- GTM domain with two GTMs in different DCs
- Multiple LTMs, all joined to the GTM

New DCs:
- Two GTMs in different DCs, blank configuration
- Multiple LTMs, all joined with the existing DC GTMs

I know that I can add the new GTMs to the existing DC GTM domain, let them sync up, and then update the NS records to migrate the DNS flows over to the new DC, but that also syncs over all the technical debt and limits my pre-testing abilities. I would like to set up a new GTM domain in the new DC, build some automation for the WideIP/pool creation, and manually review and rebuild all the necessary records in the new DC. My hangup is that this is ONLY possible if an LTM appliance can join multiple GTM domains.

Can a single LTM appliance join multiple GTM domains and report status to multiple appliances? I don't have an easy way to build a test environment with VEs and validate this, so I am hoping for some input from the community.
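If an LTM can in fact be registered with a second GTM domain in parallel (which is the open question here), the per-LTM steps on the new GTMs would look roughly like the sketch below; the datacenter name, server name, and address are hypothetical, and the exact bigip_add form varies by version:

    # On one of the new GTMs: define the datacenter and the LTM as a server object
    tmsh create gtm datacenter NEW-DC1
    tmsh create gtm server ltm-a datacenter NEW-DC1 product bigip \
        devices add { ltm-a { addresses add { 10.30.30.10 } } }

    # Exchange iQuery certificates with that LTM (prompts for credentials)
    bigip_add admin@10.30.30.10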
Issue with 2 parallel F5 clusters

Hello everybody, and first of all thank you for taking the time to read my issue! The issue I have is in regards to a migration.

We have a productive F5 BIG-IP cluster (Active/Standby), let's call it the "old F5", which has a lot of virtual servers in partitions, with specific pools and monitors for each application/service. This device also has 2 VLANs, internal (vlan11) and external (vlan10), and 2 interfaces in an LACP trunk that is tagged on both VLANs and connected with the same single leg to a Cisco APIC.

It has 2 self IP addresses (one for each VLAN):
10.10.10.1 - VLAN "external"
10.20.20.1 - VLAN "internal"
(numbers are just for example)

It also has 4 floating IP addresses (2 for each VLAN) with 2 traffic groups:
10.10.10.2 - VLAN external, traffic group 1
10.10.10.3 - VLAN external, traffic group 2
10.20.20.2 - VLAN internal, traffic group 1
10.20.20.3 - VLAN internal, traffic group 2

This device (cluster) has to be replaced by another F5 BIG-IP cluster (let's call it the "new F5"). This device is an identical copy of the old F5 (the config was taken from the old one and imported into the new one), meaning the same VLANs, monitors, pools, virtual server IP addresses, etc. At the moment it has its 2 interfaces disabled and a blackhole default reject route set up so that it does not interfere with the old F5, which is the productive one.

The idea is to configure the new F5 device with IP addresses from the same subnet (for example 10.10.10.5), disable all the virtual servers so it doesn't handle traffic (the nodes, monitors, and pools stay up on both devices), have the two F5 devices, old and new, running in parallel, and then move the virtual servers one by one by simply disabling a VS on the old F5 and enabling it on the new F5. At this point we also remove the blackhole route, configure the correct default static route (the same one that is on the old F5), and enable the interfaces.

This sounded and looked good: on the new F5 the nodes and pools are green and the virtual servers are disabled as expected. On the old productive F5 everything is up and green, BUT if I try to reach one of the virtual servers, either by the virtual IP address or by hostname, the attempt just times out without any response (if I telnet to the VS on port 443 it connects, meaning the old F5 accepts the traffic). I tried also disabling the nodes on the new F5, but the behaviour is the same; the only way to get things working again is to disable the interfaces on the new F5 and add the default reject blackhole route back.

This is not how I imagined it would work. In my mind I was expecting the old F5 to work as normal, and the new F5 device to see the nodes and pools as up (confirming good communication) but not handle any traffic for the virtual servers because they are disabled.

Does anyone have any idea what is causing this issue - why, when both F5 devices are up in parallel, connections to a virtual server through the old productive F5 time out even though that F5 sees both the pools and virtual servers as up and running? Thank you in advance!
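For what it's worth, here is a minimal sketch of the "keep the new cluster passive" step described above; the partition and address are placeholders, and note that virtual-address ARP is governed separately from virtual-server state, so it is worth checking whether the new unit answers ARP for the shared virtual addresses:

    # On the new F5: disable every virtual server in a given partition
    tmsh -c "cd /PartitionA; modify ltm virtual all disabled"

    # Check the ARP setting of the virtual addresses and, if needed,
    # turn it off per address while both clusters share the subnet
    tmsh list ltm virtual-address all-properties
    tmsh modify ltm virtual-address /PartitionA/10.10.10.50 arp disabled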
F5 migration

Hi,
We are working on migrating F5 from a Viprion-hosted guest to a new r10900 tenant. Both the old and the new system run version 17.x, but the old F5 has a huge amount of AS3-deployed configuration pushed from BIG-IQ, plus 200+ legacy VIPs.

What is the best way to migrate this kind of mixed environment, and how can we make sure BIG-IQ still works fine with the new target after the migration?
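One way to take stock of the AS3 side before the move is to pull the declarations straight from the AS3 REST endpoint; the addresses and credentials below are placeholders, this assumes AS3 is installed on the new r10900 tenant, and declarations that BIG-IQ owns would normally be re-deployed through BIG-IQ rather than posted directly to the device:

    # Export the current AS3 declaration from the source Viprion guest
    curl -sku admin:'<password>' https://<source-mgmt-ip>/mgmt/shared/appsvcs/declare > as3-declaration.json

    # Push it (or an edited copy) to the new tenant once AS3 is installed there
    curl -sku admin:'<password>' -X POST https://<tenant-mgmt-ip>/mgmt/shared/appsvcs/declare \
        -H "Content-Type: application/json" -d @as3-declaration.json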
Issue while migrating config from 4000s to r4600

Hi All,
We are trying to migrate the config from a 4000s to an r4600. We created a UCS on the 4000s, but while loading it on a tenant on the r4600 we got an error: "load sys partition all platform migrate" - failed -- 010713d0:3: Symmetric Unit key decrypt failure - decrypt failure, configuration loading error: high-config-load-failed.

Before loading the UCS from the 4000s onto the tenant, we copied the master key to the new tenant and verified it as well. The command used to load the UCS:
load sys ucs <file name> no-license platform-migrate

We didn't see any other error logs in /var/log/ltm. Could someone suggest how to resolve this issue? Please note we are using a CA-signed device certificate, not a self-signed certificate, for the device. Also, the management IP, trunk name, and number of trunk ports in the UCS are different from those on the tenant.
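For anyone following along, the sequence being attempted is essentially the one below (the file name is the placeholder from the post); the master key shown by f5mku -K should match on the source and the target tenant before the archive is loaded:

    # On the source 4000s: note the master key
    f5mku -K

    # On the target r4600 tenant: confirm the installed key now matches the source
    f5mku -K

    # Load the archive without its license, adjusting for the new platform
    tmsh load sys ucs <file name> no-license platform-migrate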
Introduction Nutanix is a leading provider of Hyperconverged Infrastructure (HCI), which integrates storage, compute, networking, and virtualization into a unified, scalable, and easily managed solution. This article will outlined the recommended procedure of migrating BIG-IP Virtual Edition (VE) and application workloads from VMware vSphere to Nutanix AHV, ensuring minimal disruption to application services. As always, it is advisable to schedule a maintenance window for any migration activities to mitigate risks and ensure smooth execution. Migration Overview Our goal is to migrate VMware BIG-IP VEs and application workloads to Nutanix with minimal disruption to application services, while preserving the existing configuration including license, IP addresses, hostnames, and other settings. The recommended migration process can be summarized in five stages: Stage 1 – Deploy a pair of BIG-IP VEs in Nutanix: Stage 2 – Migrate Standby BIG-IP VE from VMware to Nutanix: Stage 3 – Failover Active BIG-IP VE from VMware to Nutanix: Stage 4 – Migrate application workloads from VMware to Nutanix: Stage 5 – Migrate now Standby BIG-IP VE from VMware to Nutanix: Migration Procedure In our example topology, we have an existing VMware environment with a pair of BIG-IP VEs operating in High Availability (HA) mode - Active and Standby, along with application workloads. Each of our BIG-IP VEs is set up with four NICs, which is a typical configuration: one for management, one for internal, one for external, and one for high availability. We will provide a detailed step-by-step breakdown of the events during the migration process using this topology. Stage 1 – Deploy a pair of BIG-IP VEs in Nutanix i) Create Nutanix BIGIP-1 and Nutanix BIGIP-2 ensuring that the host CPU and memory are consistent with VMware BIGIP-1 and VMware BIGIP-2: ii) Keep both Nutanix BIGIP-1 and Nutanix BIGIP-2 powered down. *Current BIG-IP State*: VMware BIGIP-1 (Active) and VMware BIGIP-2 (Standby) Stage 2 – Migrate Standby BIG-IP VE from VMware to Nutanix i) Set VMware BIGIP-2 (Standby) to “Forced Offline”, and then save a copy of the configuration: ii) Save a copy of the license from “/config/bigip.license”. iii) Make sure above files are saved at a location we can retrieve later in the migration process. iv) Revoke the license on VMware BIGIP-2 (Standby): Note: Please refer to BIG-IQ documentation if the license was assigned using BIG-IQ. v) Disconnect all interfaces on VMware BIGIP-2 (Standby): Note: Disconnecting all interfaces enables a quicker rollback should it become necessary, as opposed to powering down the system. vi) Power on Nutanix BIGIP-2 and configure it with the same Management IP of VMware BIGIP-2: vii) License Nutanix BIGIP-2 with the saved license from VMware BIGIP-2 (Stage 2ii): Note: Please refer to K91841023 if the VE is running in FIPS mode. viii) Set Nutanix BIGIP-2 to “Forced Offline”: ix) Upload the saved UCS configuration (Stage 2i) to Nutanix BIGIP-2, and then load it with “no-license”: Note: Please refer K9420 to if the UCS file containing encrypted password or passphrase. x) Check the log and wait until the message “Configuration load completed, device ready for online” is seen before proceeding, which can be done by opening a separate session to Nutanix BIGIP-2: xi) Set Nutanix BIGIP-2 to “Online”: Note: Before bringing Nutanix BIGIP-2 "Online", make sure it is deployed with the same number of NICs, and interface-to-VLAN mapping is identical to VMware BIGIP-2. 
For example, if interface 1.1 is mapped to VLAN X on VMware BIGIP-2, make sure interface 1.1 is mapped to VLAN X on Nutanix BIGIP-2 as well.
xii) Make sure Nutanix BIGIP-2 is "In Sync". Perform a Config-Sync using "run cm config-sync from-group <device-group-name>" if "(cfg-sync Changes Pending)" is seen.
xiii) BIGIP-2 is now migrated from VMware to Nutanix.
Note: Because the BIG-IP VEs are running on different hypervisors, persistence mirroring and connection mirroring will not be operational during the migration. If enabled, a ".....notice DAG hash mismatch; discarding mirrored state" message may be seen during the migration and is expected.
*Current BIG-IP State*: VMware BIGIP-1 (Active) and Nutanix BIGIP-2 (Standby)

Stage 3 – Fail over the Active BIG-IP from VMware to Nutanix
i) Fail over VMware BIGIP-1 from Active to Standby.
ii) Nutanix BIGIP-2 is now the Active BIG-IP.
*Current BIG-IP State*: VMware BIGIP-1 (Standby) and Nutanix BIGIP-2 (Active)

Stage 4 – Migrate application workloads from VMware to Nutanix
i) Migrate the application workloads from VMware to Nutanix using Nutanix Move.
Note: To minimize application service disruption, it is suggested to migrate the application workloads in groups instead of all at once, ensuring that at least one pool member remains active during the process. This is because Nutanix Move requires downtime to shut down the VM at the source (VMware), perform a final sync of data, and then start the VM at the destination (Nutanix).
*Current BIG-IP State*: VMware BIGIP-1 (Standby) and Nutanix BIGIP-2 (Active)

Stage 5 – Migrate the now Standby BIG-IP VE from VMware to Nutanix
i) Set VMware BIGIP-1 to "Forced Offline", and then save a copy of the configuration.
ii) Save a copy of the license from "/config/bigip.license".
iii) Make sure the above files are saved at a location that can be retrieved later in the migration process.
iv) Revoke the license on VMware BIGIP-1 (Standby).
Note: Please refer to the BIG-IQ documentation if the license was assigned using BIG-IQ.
v) Disconnect all interfaces on VMware BIGIP-1 (Standby).
Note: Disconnecting all interfaces enables a quicker rollback should it become necessary, as opposed to powering down the system.
vi) Power on Nutanix BIGIP-1 and configure it with the same management IP as VMware BIGIP-1.
vii) License Nutanix BIGIP-1 with the license saved from VMware BIGIP-1 (Stage 5ii).
Note: Please refer to K91841023 if the VE is running in FIPS mode.
viii) Set Nutanix BIGIP-1 to "Forced Offline".
ix) Upload the saved UCS configuration (Stage 5i) to Nutanix BIGIP-1, and then load it with "no-license".
Note: Please refer to K9420 if the UCS file contains encrypted passwords or passphrases.
x) Check the log and wait until the message "<hostname>……Configuration load completed, device ready for online" is seen before proceeding; this can be done by opening a separate session to Nutanix BIGIP-1.
xi) Set Nutanix BIGIP-1 to "Online".
Note: Before bringing Nutanix BIGIP-1 "Online", make sure it is deployed with the same number of NICs and that the interface-to-VLAN mapping is identical to VMware BIGIP-1. For example, if interface 1.1 is mapped to VLAN X on VMware BIGIP-1, make sure interface 1.1 is mapped to VLAN X on Nutanix BIGIP-1 as well.
xii) Make sure Nutanix BIGIP-1 is "In Sync". Perform a Config-Sync using "run cm config-sync from-group <device-group-name>" if "(cfg-sync Changes Pending)" is seen.
xiii) BIGIP-1 is now migrated from VMware to Nutanix.

Migration is now complete.
*Current BIG-IP State*: Nutanix BIGIP-1 (Standby) and Nutanix BIGIP-2 (Active)
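For convenience, the per-unit sequence used in Stages 2 and 5 can be condensed roughly as follows, assuming the tmsh equivalents of the "Forced Offline"/"Online" GUI actions and a placeholder file name; licensing and interface handling are as described in the stages above:

    # On the VMware unit being migrated: force it offline and save its configuration
    tmsh run sys failover offline
    tmsh save sys ucs /var/local/ucs/vmware-bigip.ucs

    # On the replacement Nutanix unit (same management IP, licensed, forced offline):
    tmsh load sys ucs /var/local/ucs/vmware-bigip.ucs no-license

    # Once "Configuration load completed, device ready for online" is logged, release it
    tmsh run sys failover online

    # Sync the device group if changes are pending
    tmsh run cm config-sync from-group <device-group-name>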
Summary
The migration procedure outlined in this article is the recommended way to migrate BIG-IP Virtual Edition (VE) and application workloads from VMware vSphere to Nutanix AHV. It ensures a successful migration during a scheduled maintenance window with minimal application service disruption, enabling applications to continue functioning smoothly during and after the migration.

References
Nutanix AHV: BIG-IP Virtual Edition Setup - https://clouddocs.f5.com/cloud/public/v1/nutanix/nutanix_setup.html
Nutanix Move User Guide - https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move-v5_5:top-overview-c.html
K7752: Licensing the BIG-IP system - https://my.f5.com/manage/s/article/K7752
K2595: Activating and installing a license file from the command line - https://my.f5.com/manage/s/article/K2595
K91841023: Overview of the FIPS 140 Level 1 Compliant Mode license for BIG-IP VE - https://my.f5.com/manage/s/article/K91841023
K9420: Installing UCS files containing encrypted passwords or passphrases - https://my.f5.com/manage/s/article/K9420
K13132: Backing up and restoring BIG-IP configuration files with a UCS archive - https://my.f5.com/manage/s/article/K13132
BIG-IQ Documentation - Manage Software Licenses for Devices - https://techdocs.f5.com/en-us/bigiq-7-0-0/managing-big-ip-ve-subscriptions-from-big-iq/manage-licenses-devices.html
Upgrading BIGIP 2000S to R2600
Hi,
I have a pair of BIG-IP 2000s appliances licensed with LTM and need to upgrade the hardware to r2600. I have some backend nodes pointing to the F5s as their gateways. The 2000s appliances run code v15.1.

Would it be doable to archive an SCF file from the old F5s, edit some names related to the 2000s (such as the hostname and license) with the new names of the 2600s, and load the file on the new F5s? I'm thinking of using different management IPs but keeping all other configuration (virtual servers, VLANs, IPs) as it is. Also, what about the license and certificate files?

Thank you!
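If the SCF route is taken, the mechanics would look roughly like the sketch below (the file name is a placeholder); whether an SCF exported from a 2000s loads cleanly on an r2600 tenant is exactly the open question here, so treat this as a sketch rather than a validated procedure:

    # On the 2000s: export a single configuration file (SCF)
    tmsh save sys config file /var/local/scf/2000s-export.scf no-passphrase

    # Copy it off-box, edit the hostname/platform-specific references, then on the r2600 tenant:
    tmsh load sys config file /var/local/scf/2000s-export.scf
    tmsh save sys config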
Migration From BIG-IP V15.1.10.2 TO BIG IP-17.1.1.3.0.76.5 on VE

Hi Community,
We need your support. Our current VM is on BIG-IP v15.1.10.2 and we are deploying a new VM on BIG-IP 17.1.1.3.0.76.5. Could you please confirm whether there will be any configuration syntax changes, as we are planning to load the existing SCF file on the new VM? Please comment on this.
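One low-risk way to check this on the new VM itself is a verification pass before committing anything; the path below is a placeholder, and the verify option is assumed to be available for file loads on this version:

    # On the new 17.1.1 VE: check the 15.1 SCF for load errors without applying it
    tmsh load sys config file /var/local/scf/v15-export.scf verify

    # Only after a clean verify, apply it and save
    tmsh load sys config file /var/local/scf/v15-export.scf
    tmsh save sys config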
