Forum Discussion
Nick_T_68319
Nimbostratus
Mar 07, 2012
vCMP on Viprion 2400
I am retiring some of our 6900s and 3600s and migrating to a Viprion 2400. Does anyone use vCMP yet? If you virtualize the instances, is performance lower than running them unvirtualized? Has vCMP been out long enough that you would feel comfortable running production on it? I'm just looking for some actual real-life user feedback on it.
23 Replies
- Frank_47355
Nimbostratus
Hello everybody!
I have a question about creating a trunk on the VIPRION. I want to migrate three BIG-IP 6900s to the VIPRION.
Do I only have to configure, on the blade, one trunk for the internal VLAN and one trunk for the external VLAN?
Or do I have to configure three trunks, each with its own internal and external VLAN, to migrate the BIG-IP 6900s?
Currently there are three BIG-IP 6900s running in the network, and each F5 runs one service, so there are three services in total. As you know, each BIG-IP 6900 has one internal VLAN and one external VLAN.
So how should I configure this on one blade of the VIPRION 2400: only one trunk with its respective VLANs, or three trunks?
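For illustration, here is a rough sketch of what the single-trunk option might look like in tmsh (the interface numbers, VLAN names and tag IDs are hypothetical):
# One trunk shared by all services, with each 6900's VLANs carried as tagged VLANs on it.
create net trunk migration_trunk interfaces add { 1.1 1.2 } lacp enabled
create net vlan internal_app1 interfaces add { migration_trunk { tagged } } tag 101
create net vlan external_app1 interfaces add { migration_trunk { tagged } } tag 201
# ...repeated for the other two services' internal/external VLANs.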
Please, I need your help.
Thank you.
- brad_11480
Nimbostratus
The Viprion 2400 requires version 11.x to operate vCMP.
But then, on a blade, up to 4 guests can be set up, and each can have a different version of software.
Does it support guests using version 10.2.x, or must all guests be at least 11.x (or the version of the chassis)?
What are the limitations on the versions operating in the guest instances?
Thanks so much!
- Hamish
Cirrocumulus
Guests must be v11 as well. v10.x isn't supported as a guest. There's a SOL note with a matrix of supported versions between the vCMP host and the vCMP guest. Currently it's pretty unrestricted, but at some stage I guess you'll need to upgrade the host to get the latest guest version.
H
- Sec-Enabled_658
Cirrostratus
Wondering if someone can answer this question for me. I'm running vCMP on a Viprion 2400 with 1 blade. The blade is carved up into 4 vCMP guests. Doing some preliminary load testing, I'm seeing traffic speeds of around ~800 Mbps going through a VIP using a standard TCP profile on one of the vCMP guests. Are all 4 of those guests limited to ~2 Gbps throughput like a 3600/3900 would be?
- Hamish
Cirrocumulus
I suspect that the throughput will be pretty dependent on the type of VS you have... i.e. whether it's just FastL4 with full acceleration or a full bells-and-whistles thing with iRules... As soon as you drop the acceleration, your throughput will drop drastically.
H
- pete_71470
Cirrostratus
We're running single-blade 2400s, but without vCMP, and we don't see what you're seeing. During peak production traffic for a busy VIP (SSL offload client and server, X-Forwarded-For, several simple iRules) we see 1.3 Gbps client and 1.3 Gbps server traffic without any performance problems on the 2400 CPU/memory. The 1.3 Gbps peak is probably due to SNAT providing insufficient variation in trunk member assignment with upstream Cisco equipment. In our experience, the 2400 is quite a beast. For us, the 'hey, why is it slow when going through the F5 but not when going to the node directly' complaints are nearly always related to frame buffer saturation (drops) on Cisco equipment, leading to devices spinning their wheels waiting for retransmissions, reassembly, etc.
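For reference, a rough tmsh sketch of the kind of VIP described above, with hypothetical object names (the certificate, key and pool are assumed to already exist):
# SSL offload on the client side, re-encryption to the servers, and X-Forwarded-For insertion.
create ltm profile http http_xff defaults-from http insert-xforwarded-for enabled
create ltm profile client-ssl app_clientssl defaults-from clientssl cert app.crt key app.key
create ltm virtual app_vs destination 10.0.0.50:443 ip-protocol tcp profiles add { tcp http_xff app_clientssl serverssl } pool app_pool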
- Sec-Enabled_658
Cirrostratus
Update: I think it was the way the customer was doing the test. The customer was testing with iperf against an HTTP-profile VIP. Once the VIP was changed to Performance Layer 4, the speed shot up to ~2.6 Gbps.
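A rough tmsh sketch of that change, using hypothetical names (the original standard virtual with tcp/http profiles versus a Performance Layer 4 virtual using the FastL4 profile):
# Standard (full-proxy) virtual like the one used in the original test.
create ltm virtual test_vs_std destination 10.0.0.60:80 ip-protocol tcp profiles add { tcp http } pool test_pool
# Performance (Layer 4) equivalent; the FastL4 profile allows the traffic to be accelerated.
create ltm virtual test_vs_fastl4 destination 10.0.0.61:80 ip-protocol tcp profiles add { fastL4 } pool test_pool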
- Ian_S_37823
Nimbostratus
We have implemented a new vCMP instance on a 2400, and it is running successfully on 11.2.1. We are reasonably happy with v11 on a VIPRION, although I agree about the need to have more control over resource allocation. We are now migrating from 1500s and 6400s to vCMP and thinking about how to migrate the configuration from 9.4.8 to 11.2.1 (via a test F5 running 10.2.4). Does anyone have any experience/comments on the migration, especially how to handle the move from 'real' interfaces/VLANs to virtual interfaces and VLANs on vCMP?
- Hamish
Cirrocumulus
I rebuilt from scratch.
The VLANs all get configured on your vCMP host and assigned to guests, so that's a 'manual' process in itself. The self IPs in v11 now have names and not just IPs, so again that's preferably a 'manual' process too. Then there are the differences between the LTM configs, which you'll probably want to handle manually as well, plus differences in HA (active/standby with units versus v11 clustering with traffic-groups).
By 'manual', however, I DON'T mean type it all into the GUI. Take a list of the config from tmsh on the old BIG-IP and then either hand-edit the entries using vi/emacs/whatever or use a quick program to map what you want into your separate vCMP host and vCMP guest configs. Then merge the configs in.
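A rough sketch of what that looks like on the host and guest, with hypothetical names (exact vCMP guest properties vary by version, and image/core settings are omitted here):
# On the vCMP host: define the VLAN and hand it to a guest.
create net vlan external interfaces add { 1.1 { tagged } } tag 200
create vcmp guest guest1 management-ip 192.0.2.10/24 vlans add { external } state deployed
# Inside the guest: v11 self IPs are named objects that reference a VLAN.
create net self external_self address 10.10.10.5/24 vlan external
# Merge a hand-edited LTM config fragment into the guest's running config.
load sys config merge file /var/tmp/ltm_fragment.conf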
H
- brad_11480
Nimbostratus
You are doing two conversions, as you indicated: one is a platform change and the other is a software version change. Both have some challenges, but they are minor.
The bigger one is the interface changes, as the upgrade conversion utility doesn't handle platform differences; it simply checks and transforms the configuration into v11 form.
For the interfaces, go into the bigip_base.conf file and edit it. The VIPRION guests don't have the interface pieces other than the VLANs, so remove the interface statements and the state mirroring (it's better to set up HA after you have things running on the new platform). Remove the tag statements in the VLANs, and check that the VLAN names are consistent and match what you will be defining on the VIPRION host.
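As a rough illustration of that edit (v10-style bigip_base.conf syntax, hypothetical names and tag numbers), a VLAN stanza on the 6900 such as:
vlan external {
   tag 200
   interfaces 1.1
}
would be trimmed so that only the VLAN name remains, matching the VLAN presented by the VIPRION host:
vlan external { }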
As far as the configuration (bigip.conf) goes: if you use LTM partitions, I suggest moving everything into the Common partition. v11 changes the way the config is stored, and it also adds fully qualified references to objects, with the partition as the path. The Common partition is stored where you have always seen the bigip.conf, in /config; the other partitions are separated into their own bigip.conf files in a directory structure under /config.
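In other words, a v11 bigip.conf reference carries the partition path, roughly like this (hypothetical names):
ltm pool /Common/app_pool {
    members {
        /Common/10.1.1.10:80 { address 10.1.1.10 }
    }
}
ltm virtual /Common/app_vs {
    destination /Common/10.0.0.50:80
    pool /Common/app_pool
}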
If you have iRules that reference objects, in particular pool names, the names now include the path, so any starts_with or equals checks need to be changed/updated.
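For example, a comparison like this one (hypothetical pool name) has to be updated to include the /Common path:
when LB_SELECTED {
    # Pre-v11 this worked because pool names came back without a path:
    #   if { [LB::server pool] equals "app_pool" } { log local0. "app pool selected" }
    # On v11 the name includes the partition path, so the check becomes:
    if { [LB::server pool] equals "/Common/app_pool" } { log local0. "app pool selected" }
}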
Edit the source, which is in the /config/bigpipe folder, and run /usr/libexec/bigpipe load to reprocess the config files.
Errors can be found by looking at /var/log/ltm.
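If you prefer staying in tmsh, reloading the edited configuration and watching the log looks roughly like this (exact options vary by version):
# Load the configuration and watch for parser errors as it is processed.
tmsh load sys config
tail -f /var/log/ltm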
(All of the above is in an F5 document, but I don't recall which one it is.)