Forum Discussion
Why are ping latencies so high in VE LTM?
Some additional info. The server is a ProLiant DL380 Gen9 running Windows Server 2012 R2 as the Hyper-V host. Two of the physical interfaces are in a network team used just for guest machines, set up with the "Hyper-V Port" load-balancing mode (the other two are in a different team for host network traffic like VM replication, etc.).
Those two NICs go to separate Cisco switches in a stack, with LACP for the team. It works great for all the other guest VMs (and as mentioned, VMQ is disabled... I've seen the problems that caused in the past and learned my lesson). :)
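In case it helps, this is roughly how that guest-facing team and switch are built on the host. It's just a sketch from memory; the team, switch, and NIC names here are placeholders rather than the actual ones in use.

    # Build the LACP team used only for guest traffic, with Hyper-V Port load balancing
    New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "NIC3","NIC4" `
        -TeamingMode Lacp -LoadBalancingAlgorithm HyperVPort

    # Bind an external virtual switch to the team; the host doesn't share this switch
    New-VMSwitch -Name "GuestSwitch" -NetAdapterName "GuestTeam" -AllowManagementOS $false

    # VMQ stays off on the physical NICs (learned that lesson the hard way)
    Disable-NetAdapterVmq -Name "NIC3","NIC4"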
The BIG-IP VE is using a trial license. In a nutshell, one of our pair of LTM 1600s failed, and it's in a remote datacenter, so I have to either get remote hands or plan a site visit to swap it out under the support contract. This same failed 1600 had died once before and was replaced with a refurb unit, and now the refurb is dead. I think it's memory... on bootup I don't even see it POST when it powers on... the AOM goes through its thing and then just shows the unit waiting to POST. The same thing happened when the previous unit died, and I think it was memory then as well.
So, I thought I'd see if a physical/virtual cluster combo would work in the short term until I can get the unit RMA'd, just as a precaution in case the other one dies in the meantime.
I originally had only a single traffic group, but for testing I created a second traffic group once the new VE was joined, so I could fail over a non-critical set of VIPs. That's where I am now.
Hyper-V doesn't have the same sorts of options as you indicate VMware has. I did notice an option to enable MAC spoofing, so I went ahead and ticked that box since I'm using MAC masquerading. Otherwise I think Hyper-V drops traffic from a guest whose source MAC doesn't match the adapter's assigned address.
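For anyone following along, the checkbox I ticked maps to this setting (a sketch; "bigip-ve" is just a placeholder for the actual VM name):

    # Allow the guest to send frames with a source MAC other than the adapter's own,
    # which MAC masquerading relies on during failover
    Set-VMNetworkAdapter -VMName "bigip-ve" -MacAddressSpoofing On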
I tried switching the virtual network adapters to legacy NICs, but when I rebooted the virtual F5 it didn't recognize any NICs, so I switched back. I thought maybe it would use different drivers that might work better, but I'd probably have to redo some configuration or start from scratch, so I've shelved that experiment for now.
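For the record, the legacy-NIC test was roughly the following (a sketch assuming a Generation 1 VM; the VM, adapter, and switch names are placeholders, and I reverted this afterwards):

    # Swap the synthetic adapter for a legacy (emulated) one on a Gen 1 VM
    Remove-VMNetworkAdapter -VMName "bigip-ve" -Name "Network Adapter"
    Add-VMNetworkAdapter -VMName "bigip-ve" -Name "Legacy Adapter" -IsLegacy $true -SwitchName "GuestSwitch"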
The only module provisioned is LTM (set to "nominal"); everything else is "none", just like the physical box. Management is set to "small", but since I gave it 8 GB of RAM I could probably bump that to medium or large. I don't have a lot of objects though: 58 virtual servers, 41 pools, 8 nodes... "small" has worked fine so far.
Basically, this secondary traffic group has a set of virtual servers that aren't actively used except by the F5 monitors themselves (those particular virtual servers front a redundant set of websites at this facility). So when I say the CPU is low, I mean it's barely above zero when the VE is hosting that group. Memory usage is also very low, with several GB (6+) free for TMM.
All the "real" traffic on this cluster is on the other traffic group still pointed to the physical unit... leaving that alone until I get to the bottom of this virtual edition problem.
I haven't opened a support case... this is a trial VE after all, and hopefully it's only a temporary "just in case" arrangement for a month or so. :)