15-Nov-2022 10:48
I'm planning to replace two legacy 4200V platform HA peers with a pair of VEs and am asking for guidance to ensure success. I have done several platform RMAs but have never replaced a platform with a VE. IP space constraints make adding new VEs to the sync group with unique self IPs problematic, so I plan to replace the platforms one at a time in a maintenance window. I'm considering doing this either by configuring the new VEs from a UCS archive with the 'platform-migrate' option (K82540512) or by manually matching the network configs and updating the VEs via ConfigSync. Here are the two options in detail. I'm open to suggestions, and any advice is greatly appreciated!
Complete:
- installed the same software version on both VE VMware guests
- configured mgmt with unique IPs
Option 1 - Load from UCS:
- set new VEs to FORCED OFFLINE state and disable interfaces other than mgmt (to ensure they do not advertise routes or participate in ConfigSync)
- create UCS files on platforms to be replaced
- load the UCS files on the VEs with the 'platform-migrate' option (K82540512); see the tmsh sketch after this list
- verify configs and add missing trunks, vlans, ConfigSync, Mirroring
- disable Automatic Sync
- shut down standby platform
- change mgmt IP and Host Name of VE to match standby
- enable interfaces
- manually initiate ConfigSync from Active platform to Standby VE
- verify ConfigSync, iQuery, Device Trust, and Logging
- promote VE from Standby to Active
- replace other platform the same way
- restore Automatic Sync
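For reference, a rough tmsh sketch of the UCS path above. This is a sketch only; the interface numbers, UCS file name, and device-group name 'sync-failover-dg' are placeholders, so verify against K82540512 and your own config before running anything:
    # On each new VE: force offline and disable the non-mgmt interfaces
    tmsh run /sys failover offline
    tmsh modify /net interface 1.1 disabled
    tmsh modify /net interface 1.2 disabled
    # On the platform being replaced: save a UCS archive
    tmsh save /sys ucs /var/local/ucs/pre-migration.ucs
    # Copy the UCS to the VE (scp), then load it with the platform-migrate option
    tmsh load /sys ucs /var/local/ucs/pre-migration.ucs platform-migrate
    # Later, from the Active platform: push the config to the standby VE manually
    tmsh run /cm config-sync to-group sync-failover-dg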
Concerns:
- VE becoming Active before intended (creating an Active/Active scenario)
- iQuery comms failing / lost device trust
- stale ARP entries for the old platforms remaining on network devices until the new VEs' MACs are learned (see the verification sketch after this list)
- UCS archive might carry over unnecessary legacy cruft (these platforms are very old)
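On the Active/Active and ARP concerns, a few verification commands that may help during the window. The gateway address is a placeholder; BIG-IP should also send gratuitous ARPs for floating addresses when a unit becomes Active:
    # Confirm the VE is still forced offline before enabling its interfaces
    tmsh show /sys failover
    # After the VE comes online, check that it has learned its L2 neighbors
    tmsh show /net arp
    # Ping the upstream gateway on each VLAN so adjacent switches relearn
    # the self-IP MAC addresses (gateway address is a placeholder)
    ping -c 3 10.10.10.1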
Option 2 - No UCS:
- set new VEs to FORCED OFFLINE state and disable interfaces other than mgmt (to ensure they do not advertise routes or participate in ConfigSync)
- manually add all VLANs and self IPs for the platform they will replace (see the tmsh sketch after this list)
- verify ConfigSync and Mirroring configs
- disable Automatic Sync on Active platform
- shut down standby platform
- change mgmt IP and Host Name of VE to match standby
- import device certificate
- enable interfaces
- manually initiate ConfigSync from Active platform to Standby VE
- verify ConfigSync, iQuery, Device Trust, and Logging
- promote VE from Standby to Active
- replace other platform the same way
- restore Automatic Sync
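A rough tmsh sketch of the manual build-out in Option 2. The VLAN names, tags, interfaces, addresses, and device-group name are placeholders for illustration only; match them to the platform being replaced:
    # On the new VE: recreate the VLANs and self IPs of the platform it replaces
    tmsh create /net vlan external interfaces add { 1.1 { tagged } } tag 100
    tmsh create /net vlan internal interfaces add { 1.2 { tagged } } tag 200
    tmsh create /net self selfip-external address 10.10.100.5/24 vlan external allow-service none
    tmsh create /net self selfip-internal address 10.10.200.5/24 vlan internal allow-service default
    # On the Active platform: switch the sync-failover device group to manual sync
    tmsh modify /cm device-group sync-failover-dg auto-sync disabled
    # After the VE has joined the device trust and device group, push the config
    tmsh run /cm config-sync to-group sync-failover-dg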
Concerns:
- VE becoming Active before intended (creating an Active/Active scenario)
- iQuery comms failing / lost device trust
- stale ARP entries for the old platforms remaining on network devices until the new VEs' MACs are learned
Thoughts, concerns, suggestions? Thanks in advance!
15-Nov-2022 14:30
I've gone with option 2 in the past and it's been successful.
Some additional tips:
I don't think you need to worry as long as you follow your game plan. You seem to have thought this through properly. Thumbs up!
16-Nov-2022 06:40
Thanks Patrik! Good ideas to ping/verify L2 and to run one VE as Active for a few days before replacing the other platform. I'm replacing two similar HA platform pairs in different DCs with VEs, and I will start with the DC that is lower profile and test thoroughly. I suggested shutting the platforms down instead of forcing them offline because I plan to use the same mgmt config as the platforms they are replacing. I'll probably actually leave the platforms up and shut all of their interfaces at the Nexus switches (mgmt, HA, trunk) so I can bring the platforms back up quickly if needed.
Just curious...have you noticed any performance issues switching from platform to VE? I don't think our load requirements are that high, so it looks like VE should work fine (~1K virtual servers, a max of about 50K active connections, 150 Mbps throughput, 250 APM access sessions/sec).
16-Nov-2022 08:44
"I'll probably actually leave the platforms up and shut all of their interfaces at the nexus switches (mgmt, HA, trunk) so I could bring platforms back up quickly, if needed." - exactly my point. 🙂
As for performance, you should be OK. I had to back out of a VE migration once because of performance, but that was a very traffic-intensive sportsbook betting application. Have you read the docs about VMware-specific settings (if you're using VMware)?
https://clouddocs.f5.com/cloud/public/v1/vmware/vmware_users.html
Mohamed's suggestion below about migrating the master key is not a bad idea (if you're not using the UCS path). IIRC it's used to encrypt sensitive items in the config, such as TLS key passphrases.
16-Nov-2022 09:01
Thanks again. I believe the F5-recommended sizing was used for our VE builds, but I'll review the guide with our VMware team.
16-Nov-2022 11:15
There are some other settings in the article besides sizing, such as LRO/GRO. Read the whole thing and you'll see. 🙂
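Not the procedure from the article, just a generic Linux sketch of how the offload settings it covers can be inspected; on a BIG-IP VE the dataplane NICs are owned by TMM, so follow the F5 doc for the supported way to change these (the interface name eth0 is a placeholder):
    # Show whether LRO/GRO are currently enabled on a guest interface
    ethtool -k eth0 | grep -E 'large-receive-offload|generic-receive-offload'
    # Example of turning them off for a quick test (the persistent, supported
    # method for VE is described in the F5 VMware doc linked above)
    ethtool -K eth0 lro off gro off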
15-Nov-2022 14:57
Hi @speachey ,
I like changes like these.
> Both options are good approaches for the migration, and you covered them well.
I want to add some points below:
Good luck with your migration; you will enjoy taking this on.
Regards
16-Nov-2022 07:18
Thanks for the response and details Mohamed! These load balancers are LTM + APM. I purchased and already received the LTM and APM VE licenses from F5, applied them to the VEs, and configured Resource Provisioning appropriately. Is there still a need to transfer master keys? I have never heard of master keys, so I need to understand/research that.
I mention exporting/importing device certificates because we use device certs signed by a CA rather than self-signed ones. It was an organizational requirement, and it complicates updating device trust every year between the LTMs and the separate BIG-IP DNS (formerly GTM) devices in other locations. I would not recommend using CA-signed device certs on F5 devices if you can avoid it! Would you still advise creating new/unique device certs on the VEs? I could go through the device trust dance if there is a compelling reason to do so.
You recommend hardware/platform over VE. I know the F5 hardware chipsets were always preferred over VE in the past, but everything I have read suggests the specs of current VM NICs are sufficient for our needs. I have never put production LTM+APM (SAML/SSO) services on VE, and I'm curious about the details of the problems you have had. Did increasing VM memory/CPU address your issues? I don't think our load requirements are that high, so it looks like VE should work fine (~1K virtual servers, a max of about 50K active connections, 150 Mbps throughput, 250 APM access sessions/sec). Are your connections and throughput much higher than ours?
Thanks again for taking the time to share your knowledge!
16-Nov-2022 07:53
I should have added that I dropped support for the platforms when I purchased the VE licenses...so the platform licenses are about to expire.
16-Nov-2022 09:11
Hi @speachey ,
> About the master key :
- It is the most important thing to handle before the migration. The key is used to encrypt the UCS / old appliance configuration, and it must be installed on the VEs before loading the UCS, or the load will definitely fail.
Please read this KB carefully:
https://techdocs.f5.com/en-us/bigip-13-1-0/big-ip-secure-vault-administration/Working-with-ucs-files...
and this one as well:
https://www.empirion.co.uk/f5/f5-master-key-rma-migration/
and take a look here also:
https://securityguy225.wordpress.com/2016/11/11/how-to-migrate-all-configuration-from-2-different-f5...
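For reference, a minimal sketch of the master-key transfer those links describe, as I understand the f5mku utility (double-check the exact procedure in the KB before relying on this):
    # On the old platform: display the current master key
    f5mku -K
    # On the new VE: install that key so items encrypted in the UCS/config
    # (e.g., TLS key passphrases) can be decrypted after the load
    f5mku -r <key-output-from-old-unit>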
____________________________________________________________________
> For certificates :
- Okay, very well. You can proceed without auto-generating a self-signed certificate, but if you get errors in Device Trust, it would likely be due to the certificate.
With a certificate signed by a CA, I expect that after the migration you will find both devices "In Sync" without issues.
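If you carry the existing CA-signed device certificate over to the VE instead of generating a new one, a minimal sketch of what that might look like (the paths are the usual device-cert locations per K6353, the management IP is a placeholder, and you should verify for your version and expect to re-verify device trust afterwards):
    # Copy the CA-signed device certificate and key from the old unit to the VE
    scp server.crt root@<ve-mgmt-ip>:/config/httpd/conf/ssl.crt/server.crt
    scp server.key root@<ve-mgmt-ip>:/config/httpd/conf/ssl.key/server.key
    # Restart httpd so the new device certificate takes effect
    tmsh restart /sys service httpd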
_____________________________________________________________________
> For physical vs. Virtual Editions :
I mentioned that as a concern about physical appliance capabilities, but it seems you have sized the VEs well for your needs.
- For internet service providers, it is mandatory to use very powerful physical appliances to process that huge quantity of traffic.
____________________________________________________________________
Regards,
and good luck with your migration.