Forum Discussion
Most stable current V11 release
I can't really comment too much on the latest v11.4 - we have various engineering hotfixes on our 11.1 and 11.2 versions, which are integrated LTM/GTM platforms. A lot of angst with those two editions in production. I don't believe there is any difference between the odd/even numbers. Certainly F5 recommended we move to 11.3 when we spoke to them recently, despite 11.4 being out.
On a side note, moving from 10.2 to 11 with LTM and GTM integration involves quite a lot of changes. We found it a fairly challenging upgrade.
- smp_86112, Aug 15, 2013
Cirrostratus
Re: the comment about the upgrade being challenging... I encountered the same thing moving from v9 to v10, and wrote a tech tip about it: https://devcentral.f5.com/s/articles/problems-overcome-during-a-major-ltm-software-hardware-upgrade I'm hoping someone will repay the favor. My environment is very, very large and complex, so learning from others' experience is very valuable to me, because I only get one shot at it. It has to work right... the first time.
- JG, Aug 21, 2013
Cumulonimbus
Same here. We were so fed up with 11.2 and its various engineering hotfixes that we jumped to 11.3 when it came out around last Christmas, bypassing 11.2.1 altogether, which didn't seem much different anyway. Now we are on various engineering hotfixes on 11.3. :-( I was tempted to jump to v11.4, but didn't because a bug fix we needed was missing. I also noticed that HF1 came out a day after 11.4 became available.
We had F5 people do the upgrade from 10.2.x to 11.2 for us last year, and it was a relief to see them having the angst rather than me. :-) So I can only give you my observations of the upgrade. First of all, the upgrade process did not successfully parse the 10.2.x conf to convert it to a v11 conf; it could not load the conf at all. Secondly, it seemed that one could not revert to a previous v10.2.x partition once v11.2 was installed: it went into a boot loop in our case. There was a clean re-installation in the end, and "/usr/libexec/bigpipe daol" was used to convert the v10.2.x conf to v11.2 after the upgrade.
v11 is very different from v10.2.x and has subdirectories for different partitions. There was a bug, later fixed in a hotfix, around loading the confs in these directories correctly, in the right order. I'd say v11.3, and perhaps v11.4 for that matter, might handle the upgrade better, as they have incorporated all those hotfixes from v11.2.x.
I still have a few v10.2.4 devices to upgrade, and what worries me is that the current root partition might not be big enough for an upgrade to v11.4.x. The root partition of v11.2 seems to default to 287M. A list of the default root partition sizes for the various releases would help with planning upgrades. If you have upgraded from v10.2.x to v11.3 or v11.4 successfully, it might be wise to take a backup, do a clean re-installation and reload the conf from the backup. You will get a larger root partition, which will set you up well for a future major upgrade.
-Jie
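A minimal sketch of the pre-upgrade housekeeping described above, run from the BIG-IP bash prompt. The archive name is made up for illustration, and the bigpipe daol converter is quoted from the post rather than something to rely on across versions:

```sh
# Check free space on the key filesystems before staging a new software
# volume -- the 287M root default mentioned above leaves little headroom
# on older installs.
df -h / /config /shared /var

# List installed software volumes and confirm which boot location is active.
tmsh show sys software status

# Take a full configuration archive before any clean re-installation, so the
# config can be reloaded afterwards. The name is illustrative; the archive
# lands in /var/local/ucs/ by default.
tmsh save sys ucs pre-v11-upgrade

# The converter referred to in the post, run after the upgrade to turn the
# 10.2.x conf into v11 format; behaviour differs between hotfix levels.
/usr/libexec/bigpipe daol
```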
- JG, Aug 22, 2013
Cumulonimbus
Just learnt from another thread that one can actually restore a v10 (v10.2.4?) UCS file on a v11 system. I suspect that was what was done in our last upgrade. For some systems, though, one might have to convert the v10 confs to v11 manually. This would make a v10-to-v11 upgrade much easier. If anybody has done it this way successfully, please let us know here.
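For what it's worth, a hedged sketch of what such a restore might look like from tmsh; the hostname and archive name are illustrative, and whether options such as no-license apply depends on your platform and target version, so check the release notes first:

```sh
# Copy the archive taken on the v10.2.x unit into the default UCS directory
# on the v11 box (hostname and filename are illustrative).
scp backup-10.2.4.ucs root@v11-bigip:/var/local/ucs/

# Restore the v10 archive on the v11 system; no-license keeps the licence
# already installed on the target unit (useful when the archive came from
# different hardware).
tmsh load sys ucs backup-10.2.4.ucs no-license

# Verify that the converted configuration loads cleanly without applying it.
tmsh load sys config verify
```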
- Neil_66348, Aug 23, 2013
Nimbostratus
We had lots of problems restoring the UCS configurations with a GTM/LTM combination. It resulted in one of the 3900s needing replacement: we forced the import to occur, which caused a device certificate issue that F5 couldn't resolve. The issue was only with GTM and the certificates needed for the sync groups. In fact, if memory serves, everything involving UCS restores and GTM ended in disaster somewhere, though that may have been down to our early 11.x versions.
Our approach for migrating approximately 250 virtual servers and hundreds of nodes with LTM and GTM was by hand, with careful dual running. The major downside of an LTM UCS restore is that the node names get chopped around and replaced with IPs. That isn't too bad in itself, but we have a lot of custom iControl code integrated into SCCM, which caused a lot of problems, and our engineers generally reference services by common naming groups. A restore of a large LTM config is just darn ugly as well.
Our steps, roughly:
• Manually migrate nodes/pools/monitors/iRules etc. to ensure correct naming and logical structure are maintained.
• Careful work with partitions; we ended up with duplicate partitions due to a GTM/LTM naming problem. They are case sensitive.
• Create test virtual servers to review iRule functionality.
• Create the virtual servers on shifted addresses: if live is e.g. 10.100.30.10, create the new one as 10.200.30.10, then just re-IP both VSs when ready to switch (see the sketch after this list). In fact we brought the new VS live on the same IP first, then disabled the original, followed by an ARP flush on the network core each time where we had an extended L2 involved. Yes, you get bleating about duplicate IPs etc., but service continuity was very important and it worked for us. If you are using proxy ARP, more care is needed with this approach.
• GTM we swapped in by performing monitoring overrides on the LTM side until we were fully operational. Only approximately 30 addresses on the subdomain delegation; a 4-node sync group with hot standby.
• All in all we shifted the whole lot over the course of two nights, with downtime per service of approximately 2 packets.
• Testing and a lot of coordination for each VS.
• Steer clear of analytics on heavy web services. Our setup does approximately 2.8 Gbit/s at peak; someone had the level set a bit too low and we ended up with a lot of data on one VS.
• I wish we'd done volume testing on anything with Web Accelerator, as so many errors occurred with the accursed early 11.x editions.
o Watch the HTTP error logs like a hawk, not just the IIS logs (we're a Wintel house).
By far, the "cowboy spurs" approach of duplicate IPs during the switchover is what enabled us to perform high-speed switching. The major downside is that if your application has to use sticky sessions you may be in a pickle. We didn't have this issue, as the farms were either stateless or used a session-state system on the backend for dynamic load shifting.
Overall we are very pleased with the 10.x to 11.x migration; performance was massively improved with the tuning on the HTTP and TCP profiles.
Thanks
Neil
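A rough tmsh sketch of the shifted-IP / re-IP cutover step described above. All object names, addresses and ports are made up for illustration, the old 10.2.x unit would use the equivalent commands of that release, and the ARP flush on the network core is whatever your switch vendor provides:

```sh
# On the new v11 unit: build the VS on a shifted address first
# (the live service is 10.100.30.10 in the example above).
tmsh create ltm virtual app_https_vs \
    destination 10.200.30.10:443 ip-protocol tcp \
    pool app_https_pool profiles add { tcp http }

# At cutover: move the new VS onto the live address...
tmsh modify ltm virtual app_https_vs destination 10.100.30.10:443

# ...and on the old unit disable the original VS so only one box keeps
# answering, then flush the ARP entry on the network core (vendor-specific
# CLI, not shown here).
tmsh modify ltm virtual app_https_vs disabled
```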
- JG, Aug 24, 2013
Cumulonimbus
Thanks for posting your valuable experience. Now I remember we also had names change after the upgrade.