Forum Discussion
F5 LTM VIP/STP Problem
Over the weekend, we removed our Foundry switches and replaced them with Cisco 3560Es. Since the changeover we have had a few issues with no resolution to date. First, when we try to access the VIP for our webservers on the F5 LTM over HTTP/HTTPS, it does not respond. If we try to access the LTM's HTTPS management address, it does not respond either. However, we can access all servers by their physical addresses over HTTP/HTTPS. What is really weird is that we can ping both the VIP and the LTM address. We do not currently have an access list on any device denying this traffic. Also, when we removed a NIC from the team, we could reach the VIP and LTM over HTTP/HTTPS. The second issue is that spanning tree is blocking the redundant interfaces on our second switch. Not sure why this is happening if the LTM is in an Active/Standby state, and it must be noted that we are using STP passthrough.
Hopefully someone reading this has experienced this before or has an idea/suggestion for a resolution. We have opened a ticket with F5, but no resolution yet. We opened a case with Cisco TAC and they have reviewed the switch configuration and everything looks good.
25 Replies
- jfrizzell_43066
Nimbostratus
Hamish,
Thank you for replying to this post. The only EtherChannel configurations I have are between switches. The devices E-01/E-02 are Layer 3 EtherChannels to another network. In the diagram provided, we have a Layer 2 trunk between the two Catalyst 3560E-48 switches. We have no 802.3ad configuration on the F5 LTMs or the NIC teams; it is just a single cable to each switch for redundancy purposes.
I am not a fan of HP teaming either. I couldn't agree more with you about it, and I have seen a number of issues with it causing loops as you described. We do not have 802.3ad/LACP configured because we only have one NIC per switch and the 3560Es are not stackable; this is why we are using one NIC to each switch with TLB. What I did notice is that when we change the Load Balancing Method on the teaming group to Destination IP, it works. I'm really not sure why, and making that change on every server scares me without understanding the reason. No other Load Balancing Method works. I have also changed the 3560E load-balancing method, but it had no effect, so I have left it at src-dst-ip.
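For intuition on why the hash method matters here: both the switch and the NIC team pick an egress link by hashing packet fields, and different inputs (destination MAC vs. source/destination IP vs. TCP connection) can steer a given peer's traffic out a different NIC, whose MAC differs from the one in the peer's ARP table. A toy sketch of that selection logic (not Cisco's or HP's actual algorithm, just the idea):

```python
import ipaddress

def pick_link(src_ip: str, dst_ip: str, num_links: int) -> int:
    """Toy src-dst-ip hash: XOR the two addresses, then mod the link count.
    Real switches and NIC teams use vendor-specific hashes; this only
    illustrates how the chosen hash inputs decide which physical link
    (and therefore which source MAC) a frame leaves on."""
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return (s ^ d) % num_links

# The same src/dst IP pair always hashes to the same link...
link = pick_link("10.0.0.5", "10.0.0.100", 2)
# ...but a method keyed on different fields (e.g. destination MAC or TCP
# connection) can land traffic for the same peer on the other NIC, whose
# MAC the LTM never learned via ARP.
```

This would be consistent with Destination IP being the only method that works: it pins all frames toward a given peer to one NIC, while MAC- or connection-based methods can split a single peer's traffic across both NICs.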
The LTMs connect to ports 37 & 38 on both switches, and I have those set to autonegotiate speed/duplex.
I have two DNS servers, and one connects to SW1/SW2. They are on the same VLAN as all the other servers and the F5. The DNS servers can resolve internal/external names, including the VIPs. It's just that with the teamed NICs' Team Type Selection set to automatic/TLB and the Load Balancing Method set to Automatic, a host cannot reach the VIPs or the LTM web management. If I change the Load Balancing Method to Destination IP, I can access the VIPs/LTM web management.
Jeremy
Hamish wrote:
I think you just have (Had? If it's now working) a problem with HP NIC teaming (Because it's not really good with multiple switches), and trying to aggregate links without using signaling.
I agree and think that's what's going on here.. I'm a big fan of using the correct tool for the job, and it looks like you're trying to do more with less, which most likely isn't your fault. Teaming those NICs properly across switches requires some pricey switch hardware.. Or you could buy some old dusty Nortel boxes and try your hand at Split Multi-Link Trunking! Oh noooo.. just kidding... they were the first to put it on the market, but never really got it working correctly ;)
So this is working with "Destination IP" on your teaming? It sounds like it's the return traffic from the LTM that is not working, which makes some sense.. Do you know what method it was using under Automatic? That may shed some light.. - jfrizzell_43066
Nimbostratus
The default method on the NIC teams is either TCP Connection or Destination MAC. I say this because neither of those options works, yet the HP NIC Teaming document states on page 52: "Although in the current product automatic mode is identical to TCP Connection mode." - J_H__3680
Nimbostratus
We're seeing a similar issue here. It's random: often traffic will flow one way but not the other, and some virtual server IPs will be reachable while others will not. On the host, the VIP has the correct MAC in the ARP table, but traffic will not pass. So far all but one case have been corrected by changing the teaming type from Automatic to NFT. I was chalking it up to our extensive use of Route Domains, but it sounds like a larger issue? - Techgeeeg
Nimbostratus
Hi Jfrizzell,
Can you please share the configuration file for your LTM boxes.
Regards, - HDsup123_35917
Nimbostratus
I would like to report the same issues and add some of my information to this discussion.
Setup: 2x LTM 1600 in Active/Standby mode.
Firmware versions 9.x through 11.x.
I can confirm that the symptoms disappear when using a server or workstation with a single NIC, or when changing the load balancing from Automatic to Network Fault Tolerance (NFT) only.
To add: I have this issue with the VIPs, but also with the self IPs of the LTMs.
Also, the management interface is hooked up to a separate, flat (no routing) L2 network with its own IP range.
Even the management interfaces behave in the ways described in the above posts when accessed from a machine with dual NICs in the management network.
On servers with multiple NICs connected to one switch with port aggregation (LACP trunks), this issue does not occur.
I went through all the relevant ARP tables and I cannot find any mismatch in the IP-to-MAC mappings.
Regards,
Ton - HDsup123_35917
Nimbostratus
Opened a support case with F5.
Official statement:
In short: no, F5 does not support HP NIC teaming.
There is no standard that defines how NIC teaming should work, therefore every vendor's implementation can be different.
Because of the proprietary nature of HP NIC teaming, F5 also has no workaround.
Use 802.3ad/LACP, or nothing at all.
In my opinion this is a poorly documented missing feature in F5 products,
and my guess is lots of customers will run into this issue. - Hamish
Cirrocumulus
Mmm.. Looking at the whole picture, that's probably fair. Much as I like bashing a vendor for not doing what I want, I'm not sure how F5 would support a feature from another vendor that was subject to arbitrary revision and change and can be implemented in so many different ways and called the same thing.
HP Teaming can encompass
LACP (802.3ad), which IS supported.
Active/Standby (Which IS supported).
And several other ways. I think there's a couple of arbitrary pseudo load-balancing algorithms in there as well. They'd be the ones I'd steer clear of.
HOWEVER
if you limit your NIC teaming to LACP (802.3ad), I'm fairly confident the answer will come back that it is supported... because 802.3ad is a STANDARD, which "teaming" (being a feature comprised of both standards and proprietary configs) is not...
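For reference, a minimal sketch of what the switch side of an 802.3ad/LACP bundle might look like on a 3560E. This is a hypothetical example (interface, channel-group, and VLAN numbers are placeholders, not from the thread), and note the trade-off: since 3560Es are not stackable, both teamed NICs would have to land on the same switch, giving up the cross-switch redundancy the original design was after.

```
! Hypothetical: both server NICs on ONE 3560E, bundled via LACP
interface range GigabitEthernet0/37 - 38
 switchport mode access
 switchport access vlan 10
 channel-group 1 mode active    ! 802.3ad/LACP negotiation
!
interface Port-channel1
 switchport mode access
 switchport access vlan 10
```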
H - stevehuffy_1335
Nimbostratus
Found this thread useful, so posting our solution to it, not sure if there is some other way of doing it.
Our problem was HP blade servers configured with TLB teaming initiating connections to a VIP, where the F5 and the HP servers were on the same VLAN - sometimes it worked, sometimes it didn't. A packet capture showed the F5 sending traffic back to the source MAC in the request, rather than to the MAC in its ARP table.
Our solution: on the VIP, we set "Auto Last Hop" to "Disabled", which fixed our problem on that VLAN. It actually broke connections coming in via another VLAN through a firewall, so we just configured another VIP on that VLAN. We ended up with two VIPs with the same IP, different source VLANs, and different "Auto Last Hop" settings.
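On TMOS versions with tmsh, that two-VIP arrangement could be sketched roughly as follows. All object names, addresses, and VLAN names here are placeholders rather than anything from the thread, and exact syntax varies by version:

```
# Hypothetical tmsh sketch: two virtual servers, same IP:port, each enabled
# on a different source VLAN, differing only in the auto-lasthop setting.
create ltm virtual vs_web_servers destination 10.0.10.50:443 pool pool_web vlans-enabled vlans { vlan_servers } auto-lasthop disabled
create ltm virtual vs_web_firewall destination 10.0.10.50:443 pool pool_web vlans-enabled vlans { vlan_firewall } auto-lasthop default
```

Disabling Auto Last Hop makes the LTM route replies via its ARP/routing tables instead of returning them to the MAC that sent the request, which matches the packet-capture symptom described above.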
- Amitabha_118500
Nimbostratus
We ran into this exact same issue months ago and are running into it again now. We ended up disabling the fault tolerance on the server and didn't know what the root cause was. stevehuffy's comment explains it. Thanks A LOT. Namo Amituofo.....
