lacp
9 Topics

Packet loss between Fortigate and B2250
Hello, I am having an issue that I have not been able to resolve, so I'm hoping someone here can point me in the right direction. I have a Fortigate 3700D with 3x 40G interfaces aggregated, with 3 VLANs on the aggregate interface. On the other side I have 3x B2250s, where each blade has one 40G link from the Fortigate, configured as a trunk with the VLANs added.

When I ping the F5 locally from the Fortigate, and from the internet against a configured virtual IP, I get roughly 50% packet loss, for both ICMP and DNS lookups against the F5 DNS server. This was with the Fortigate set to the L4 (layer 4) hash algorithm and the F5 set to "Source/Destination IP address port". I then changed the Fortigate to the L3 (layer 3) algorithm, and the response rate for ICMP and DNS improved considerably, even though I would have assumed L4 is the correct match for the source/destination IP address and port setting on the F5 side, so I am not sure why it works better now.

While pings no longer drop as often, I still see a drop roughly every 7th or 8th attempt. When doing a tcpdump on the F5, I can see the ICMP requests stop being answered right after an ARP request is made from the Fortigate to the F5, as seen in the attached screenshot. Might this be due to the F5 blades using different MAC addresses, and the Fortigate being confused by that, even though I set it to hash on L3? Does anyone know, or can you point me in the right direction? Thanks in advance!
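While troubleshooting, the frame distribution hash on the BIG-IP side of the trunk can also be viewed and changed from tmsh rather than the GUI. A minimal sketch, assuming a trunk named fortigate_trunk (the trunk name is made up; src-dst-ipport is, as far as I know, the tmsh value behind the "Source/Destination IP address port" GUI option):

# view the current LACP and hash settings on the trunk (trunk name assumed)
tmsh list net trunk fortigate_trunk

# set the hash explicitly to source/destination IP address and port
tmsh modify net trunk fortigate_trunk distribution-hash src-dst-ipport

Comparing that value with whatever hashing the Fortigate is doing on its side at least rules out a mismatch in expectations, since each end hashes its own egress traffic independently.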
Interfaces up but trunk down

Hi all, we have a VIPRION C2400 with 1 blade installed, connecting to Nexus N5K switches and running 12.0.0. There are 3 LACP trunks with 2 interfaces each; they formed correctly and have been running for over a year. Recently 2 more blades were added to the VIPRION, each contributing 3 interfaces to the 3 LACP trunks. There was no problem adding the interfaces, and the VIPRION was not rebooted after adding them.

Today we rebooted the VIPRION. When the system came back up, all 3 trunks were marked down while the member interfaces were up. We double-checked with our network support and found that although the port channel on the switch side is connected, the newly added interfaces have been suspended ever since they were added.

We tried the following, all in vain:
- remove the newly added interfaces and restart the system
- change the production LACP mode from Active to Passive (according to https://support.f5.com/csp/article/K13142)

Since the port channel status on the switch is connected, there is nothing more we can do on the switch side. Here is the interface status:

--------------------------------------------------------------------------------
Port        Name         Status     Vlan    Duplex  Speed  Type
--------------------------------------------------------------------------------
Eth1/2      po1          connected  trunk   full    10G    1/10g   (existing)
Eth1/3      po1          connected  trunk   full    10G    1/10g   (existing)
Eth1/12     po1          suspnd     trunk   full    10G    1/10g   (new)
Eth1/13     po1          suspnd     trunk   full    10G    1/10g   (new)
Po1         viprion_ha   connected  trunk   full    10G

There are also some errors in /var/log/ltm, e.g.:

Jun 8 16:24:08 slot2/viprion1 notice sod[5410]: 010c0048:5: Bcm56xxd and lacpd connected - links up.
Jun 8 16:24:10 slot2/viprion1 err bcm56xxd[6953]: 012c0010:3: Trouble with packet send request from LACPD port 9
Jun 8 16:24:10 slot2/viprion1 err bcm56xxd[6953]: 012c0010:3: Trouble with packet send request from LACPD port 11
Jun 8 16:24:12 slot2/viprion1 err bcm56xxd[6953]: 012c0010:3: Trouble with packet send request from LACPD port 10
Jun 8 16:24:42 slot3/viprion1 info lacpd[6220]: 01160012:6: Link 3/1.3 Partner Out of Sync
Jun 8 16:24:42 slot1/viprion1 info lacpd[6102]: 01160012:6: Resuming log processing at this invocation; hels.
Jun 8 16:24:42 slot1/viprion1 info lacpd[6102]: 01160012:6: Link 1/1.11 Partner Out of Sync
Jun 8 16:24:42 slot1/viprion1 info lacpd[6102]: 01160012:6: Link 1/1.15 Partner Out of Sync
Jun 8 16:24:58 slot2/viprion1 info lacpd[7495]: 01160012:6: Link 2/1.1 Partner Out of Sync
Jun 8 16:24:58 slot2/viprion1 info lacpd[7495]: 01160012:6: Link 2/1.3 Partner Out of Sync
Jun 8 16:24:58 slot2/viprion1 info lacpd[7495]: 01160012:6: Link 2/1.2 Partner Out of Sync

We are out of ideas. Would anyone please help? Thanks a lot. Rgds
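For anyone comparing the two sides while troubleshooting a case like this, a minimal sketch of the commands I would start with; the BIG-IP trunk name is assumed, and the switch commands are the usual NX-OS ones rather than anything VIPRION-specific:

# On the VIPRION: trunk status, members and LACP settings (trunk name assumed)
tmsh show net trunk
tmsh list net trunk viprion_ha_trunk lacp lacp-mode lacp-timeout interfaces

# On the Nexus side: port-channel and LACP partner state for the suspended members
show port-channel summary
show lacp neighbor

The "Partner Out of Sync" log lines suggest the two ends disagree about the LACP partner state on the new links, so lining up the actor/partner details from both outputs is usually the quickest way to see which side is rejecting the membership.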
P2V migration while LACP configured for links

I am working on an F5 LTM migration where a pair of LTMs running 14.1.x will be migrated to 2 VMs. All configuration can be migrated via UCS loading, but LACP is configured on their network interfaces. KB K85674611 already outlines the issue with LACP on VMs and provides a workaround of removing LACP before generating the UCS. However, the pair of LTMs is running in production, and the customer is reluctant to change the F5 network configuration for fear of a service interruption. Is there any other method that allows loading an F5 UCS with LACP configuration onto VM appliances? Any ideas?
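For what it's worth, a minimal sketch of how the documented workaround could be applied with less exposure, assuming the LACP change is made only on the current standby unit during a change window (toggling LACP will still disrupt that unit's own links to the switch, which is why it should not be done on the active unit); the trunk and file names here are made up:

# on the standby unit: drop LACP from the trunk but keep the trunk, then save the UCS
tmsh modify net trunk my_trunk lacp disabled
tmsh save sys ucs pre_p2v_no_lacp.ucs
# re-enable LACP afterwards if the unit stays in production
tmsh modify net trunk my_trunk lacp enabled

# on the target VE, depending on version, the platform-migrate option to load sys ucs
# may also be worth testing in a lab first (I have not verified it handles LACP trunks)
tmsh load sys ucs pre_p2v_no_lacp.ucs platform-migrate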
BIG-IP L2 Virtual Wire LACP Mode Deployment with Gigamon Network Packet Broker

Introduction

This article is part of a series on deploying BIG-IPs with bypass switches and network packet brokers. These devices allow for the transparent integration of network security tools with little to no network redesign or configuration change. For more information about bypass switch devices, refer to https://en.wikipedia.org/wiki/Bypass_switch; for network packet brokers, refer to https://www.ixiacom.com/company/blog/network-packet-brokers-abcs-network-visibility and https://www.gigamon.com/campaigns/next-generation-network-packet-broker.html. The article series introduces network designs that forward traffic to the inline tools at layer 2 (L2).

This article covers the design and implementation of the Gigamon Bypass Switch / Network Packet Broker in conjunction with the BIG-IP i5800 appliance and Virtual Wire (vWire) in LACP mode. It covers the LACP mode deployment mentioned in https://devcentral.f5.com/s/articles/L2-Deployment-of-BIG-IP-with-Gigamon.

Network Topology

The diagrams below represent the actual lab network and show the deployment of BIG-IP with Gigamon.

Figure 1 - Topology with MLAG and LAG before deployment of Gigamon and BIG-IP
Figure 2 - Topology with MLAG and LAG after deployment of Gigamon and BIG-IP
Figure 3 - Connection between Gigamon and BIG-IP

Hardware Specification

Hardware used in this article:
BIG-IP i5800
GigaVUE-HC1
Arista DCS-7010T-48 (all four switches)

Note: All interfaces/ports are 1G speed.

Software Specification

Software used in this article:
BIG-IP 16.1.0
GigaVUE-OS 5.7.01
Arista 4.21.3F (North Switches)
Arista 4.19.2F (South Switches)

Switch Configuration

The switch configuration is the same as in the previous article https://devcentral.f5.com/s/articles/BIG-IP-L2-V-Wire-LACP-Passthorugh-Deployment-with-Gigamon

Note: In the above-mentioned configuration, the switch ports are configured as access ports allowing VLAN 120, so the BIG-IP receives untagged frames. To use tagged frames, configure the switch ports as trunk ports. In this article, the scenarios below are tested with tagged frames.

Gigamon Configuration

The Gigamon configuration is the same as in the previous article https://devcentral.f5.com/s/articles/BIG-IP-L2-vWire-LACP-Passthrough-Deployment-with-1-to-1-mapping-of-Gigamon-NPS

BIG-IP Configuration

The BIG-IP configuration is exactly the same as the configuration described in https://devcentral.f5.com/s/articles/L2-Deployment-of-BIG-IP-with-Gigamon. This article is specific to LACP mode; the trunk configuration with LACP enabled is shown below.

Figure 4 - Trunk configuration with LACP enabled

Note: For LACP mode, Propagate Virtual Wire Link Status should be disabled in the vWire configuration.

Scenarios

As per Figures 2 and 3, the setup is completely up and functional. With LACP mode configured on the BIG-IP, LACP is established between the switches and the BIG-IP (unlike passthrough mode, where LACP frames pass through the BIG-IP and LACP is established between the north and south switches). ICMP traffic is used to represent network traffic from the north switches to the south switches.

Scenario 1: Traffic flow through BIG-IP with North and South Switches configured in LACP active mode

The configuration above shows that all four switches are configured in LACP active mode.

Figure 5 - MLAG and LAG status after deployment of BIG-IP and Gigamon with switches configured in LACP ACTIVE mode

Figure 5 shows that port-channels 120 and 121 are active at both the North Switches and the South Switches, with MLAG configured at the North Switches and LAG configured at the South Switches.
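As an aside, the trunk configuration shown in Figure 4 can also be expressed in tmsh. A minimal sketch, using Left_Trunk1 with members 1.1 and 2.3 as described under Scenario 3 below; the right-side trunk name and its member interfaces are assumptions:

# left-side vWire trunk with LACP in active mode (members 1.1 and 2.3, as in this article)
tmsh create net trunk Left_Trunk1 interfaces add { 1.1 2.3 } lacp enabled lacp-mode active

# right-side trunk; the name and member interfaces here are assumed for illustration
tmsh create net trunk Right_Trunk1 interfaces add { 1.2 2.4 } lacp enabled lacp-mode active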
Figure 6 - ICMP traffic flow from client to server through BIG-IP

Figure 6 shows that ICMP is reachable from client to server through the BIG-IP. Here LACP is established between the switches and the BIG-IP, whereas in passthrough mode LACP is established between the switches.

Figure 7 - Actor ID of BIG-IP
Figure 8 - LACP neighbor details in switches

Figures 7 and 8 show that LACP is established between the switches and the BIG-IP.

Scenario 2: Traffic flow through BIG-IP with North and South Switches configured in LACP Passive mode

North Switch 1:
interface Ethernet36
   channel-group 120 mode passive
interface Ethernet37
   channel-group 121 mode passive

North Switch 2:
interface Ethernet37
   channel-group 120 mode passive
interface Ethernet36
   channel-group 121 mode passive

South Switch 1:
interface Ethernet36
   channel-group 120 mode passive
interface Ethernet37
   channel-group 120 mode passive

South Switch 2:
interface Ethernet36
   channel-group 121 mode passive
interface Ethernet37
   channel-group 121 mode passive

Figure 9 - MLAG and LAG status after deployment of BIG-IP and Gigamon with switches configured in LACP Passive mode

Figure 9 shows that port-channels 120 and 121 are active at both the North Switches and the South Switches, with MLAG configured at the North Switches and LAG configured at the South Switches.

Figure 10 - ICMP traffic flow from client to server through BIG-IP

Figure 10 shows that ICMP is reachable from client to server through the BIG-IP. The BIG-IP is configured with LACP in active mode and the switches with LACP in passive mode, so LACP is established successfully. This would not happen with the BIG-IP in passthrough mode: in that case both the north and south switches would be in LACP passive mode and LACP would not be established.

Scenario 3: Active BIG-IP link goes down in BIG-IP

Figure 10 shows that interface 1.1 of the BIG-IP is the active incoming interface and interface 1.2 is the active outgoing interface. Disabling BIG-IP interface 1.1 brings the active link down, as shown below.

Figure 11 - BIG-IP interface 1.1 disabled
Figure 12 - Trunk state after BIG-IP interface 1.1 disabled

Figure 12 shows that all the trunks are up even though interface 1.1 is down. As configured, Left_Trunk1 has 2 member interfaces, 1.1 and 2.3, and one of them is still up, so Left_Trunk1 remains active. In the previous article https://devcentral.f5.com/s/articles/BIG-IP-L2-V-Wire-LACP-Passthorugh-Deployment-with-Gigamon, individual trunks were configured and the status of Left_Trunk1 was down.

Figure 13 - MLAG and LAG status with interface 1.1 down

Figure 13 shows that port-channels 120 and 121 are active at both the North Switches and the South Switches. The switches are not aware of the link failure; it is handled by the Gigamon configuration.

Figure 14 - One of the Inline Tools goes down after link failure

Figure 14 shows that the Inline Tool connected to interface 1.1 of the BIG-IP goes down.

Figure 15 - Bypass enabled for specific flow

Figure 15 shows that the tool failure triggered bypass for the Inline-network pair Bypass1 (interfaces 1.1 and 1.2). If traffic hits interface 1.1, Gigamon sends it directly to interface 1.2, bypassing the BIG-IP.

Figure 16 - ICMP traffic flow from client to server bypassing BIG-IP

Figure 16 shows that the client reaches the server with no traffic passing through the BIG-IP, which means the traffic bypassed the BIG-IP.
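For anyone reproducing Scenario 3 from the command line rather than the GUI, a minimal tmsh sketch using the interface and trunk names from this article:

# disable the active incoming interface to simulate the link failure
tmsh modify net interface 1.1 disabled

# confirm the trunk stays up on its remaining member (2.3)
tmsh show net trunk Left_Trunk1
tmsh show net interface

# re-enable the interface afterwards
tmsh modify net interface 1.1 enabled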
Figure 17 - Port Statistics of Gigamon

Figure 17 shows that traffic reaching interface 1.1 of Gigamon is forwarded to interface 1.2. The traffic is not routed to the tool, because that specific Inline-network has bypass enabled. In the same scenario, traffic hitting any Gigamon interface other than 1.1 is still routed to the BIG-IP. Note that only one Inline-network pair has bypass enabled; the remaining 3 Inline-network pairs are still in the normal forwarding state.

Scenario 4: BIG-IP goes down and bypass enabled in Gigamon

Figure 18 - All the BIG-IP interfaces disabled
Figure 19 - Inline tool status after BIG-IP goes down

Figure 19 shows that all the Inline Tool pairs go down once the BIG-IP is down.

Figure 20 - Bypass enabled in Gigamon

Figure 20 shows bypass enabled in Gigamon, ensuring there is no network failure. ICMP traffic still flows between the Ubuntu client and the Ubuntu server, as shown below.

Figure 21 - ICMP traffic flow from client to server bypassing BIG-IP

Conclusion

This article covers a BIG-IP L2 Virtual Wire LACP mode deployment with Gigamon. Gigamon is configured with a one-to-one mapping between Inline-network and Inline-tool; no Inline-network group or Inline-tool group is configured in Gigamon. Observations from this deployment:

As a one-to-one mapping is configured between Inline-network and Inline-tool, no additional tag is inserted by Gigamon.
As there is no additional tag in the frames reaching the BIG-IP, this configuration works for both tagged and untagged packets.
If any Inline Tool link goes down, Gigamon handles the bypass; the switches remain unaware of the change.
If any Inline Tool pair goes down, the corresponding Inline-network enables bypass.
If traffic hits a bypass-enabled Inline-network, it bypasses the BIG-IP; if it hits an Inline-network in the normal forwarding state, it is forwarded to the BIG-IP.
If the BIG-IP goes down, Gigamon enables bypass and ensures there is no packet drop.
Propagate Virtual Wire Link Status should be disabled for LACP mode in the Virtual Wire configuration.
Big-IP enable LACP

Dear Tech Community, we are currently evaluating the BIG-IP LTM v15 trial product in our virtual lab environment. From a network perspective we are trying to enable LACP in the product, and for that I'm referring to this article: https://support.f5.com/csp/article/K13142. We are missing the check box described under Step 6 ("To enable LACP, select the LACP check box"). Any idea where to find the check box? Did we miss anything? We cannot find it as suggested under Network --> Trunk --> Trunk List --> Create. Please find attached a screenshot. Thank you in advance, Bujari
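For context, and hedging since I cannot see your screenshot: if this trial is BIG-IP VE (virtual edition), as far as I know the LACP option is simply not offered there. LACP/trunk aggregation is a hardware-platform feature, and for a VE the link aggregation is normally expected to be done at the hypervisor vSwitch level instead, which would explain the missing check box (KB K85674611, mentioned in another thread under this tag, touches on the same LACP-and-VM limitation). On a hardware platform, the same setting can be made from tmsh; a minimal sketch with an illustrative trunk name:

# create a trunk and enable LACP on it (hardware platforms; name is made up)
tmsh create net trunk test_trunk interfaces add { 1.1 1.2 } lacp enabled
tmsh list net trunk test_trunk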
F5 Viprion bit size hash bit size in hardware for LACP

Dear all, for optimal traffic distribution across the links inside a Trunk / LACP port channel, it is important to know what hash bit size is built into the F5 hardware. As explained in the link below, which compares 3-bit and 8-bit hashing, the former is optimized for 2, 4 or 8 links, while 8-bit hashing gives an almost equal link distribution when using 6 links inside the trunk.

https://www.packetmischief.ca/2012/07/24/doing-etherchannel-over-3-5-6-and-7-link-bundles/

I suppose the VIPRION hardware uses a 3-bit hash size, but can someone confirm? Is it perhaps adjustable, for example by using a database variable to switch between 3-bit and 8-bit? The following article explains the hashing methods but does not provide information about the bit size.

https://support.f5.com/csp/article/K13562
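To make the arithmetic behind the question explicit (this is just general hashing math, not a statement about what the VIPRION actually implements): a 3-bit hash yields 2^3 = 8 buckets, so spread over 6 links two links carry 2 buckets each (25% of flows each) and four links carry 1 bucket each (12.5% each); an 8-bit hash yields 2^8 = 256 buckets, and since 256 = 6 x 42 + 4, four links get 43 buckets and two get 42, which is close to an even 1/6 share per link.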
LACP on Viprion and Frame Distribution Hash

Hi, I am rather not a pro in this area, so excuse me if my questions do not make sense. I wonder what the relation is between the Frame Distribution Hash and VIPRION cluster distribution (see the end of the article "Overview of trunks on BIG-IP platforms"). I assume the flow is like this (the VIPRION uses vCMP, with multi-blade vGuests):

- The peer switch receives a packet destined for the VIPRION.
- The peer creates a hash (based on its frame distribution algorithm).
- The hash is assigned to a chosen interface in the trunk.
- All other packets with the same hash are sent via the same interface.
- The VIPRION receives the packet and assigns it to a TMM. This TMM can be running on a vGuest VM on a different blade than the interface that received the packet - if I am not wrong, this is possible?
- If the TMM is on another blade, the packet is sent over the backplane to the VM on that blade.
- According to the article, the response packet will be sent via an interface on the blade where the TMM actually processing the packet is running, not via the receiving interface.

If the above is correct, does that not create an issue on the peer side? Will it not expect the returning packet on the same trunk interface it used to send the packet? According to the mentioned article: "When frames are transmitted on a trunk, they are distributed across the working member links. The distribution function ensures that the frames belonging to a particular conversation are neither mis-ordered nor duplicated at the receiving end." What does "conversation" mean here?

- Only egress packets with the same hash value?
- Both ingress and egress packets with the same source IP:destination IP? I doubt it, but I'm not sure.

In the end I would just like to find the logic of LACP:

- On a given side (peer or VIPRION), LACP only assures that packets with a given hash entering the trunk will be sent via the same interface.
- Packets received from the trunk (even a response to a packet sent into the trunk) can arrive on a completely different interface - LACP does not care whether a single TCP connection or UDP flow uses the same interface in both directions.

If the above is true, then the choice of hashing algorithm can be different on both sides of the trunk - am I right? Piotr
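On that last point, each end of the link aggregation hashes only its own transmit traffic, so the two sides can indeed use different algorithms, and each can be checked independently. A minimal sketch; the trunk name is assumed, and the switch command shown is the usual Cisco NX-OS style one, assuming that is what the peer is:

# On the BIG-IP / VIPRION side: the configured frame distribution hash (trunk name assumed)
tmsh list net trunk my_trunk distribution-hash

# On a Nexus-style peer: the port-channel load-balancing method in use
show port-channel load-balance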
Trunk setup procedure - review?

I need to change a single interface over to a two-interface trunk group on a pair of 7200 chassis. Below is how I'm thinking it could be done, but I thought I'd post it here for the more experienced folks to look over and see if there's anything missing (or a better alternative). The switch config side of this is being handled by a fellow tech, and we've done this before on a 2200 (but as an initial config). So, anyway...

Scenario: A pair of active/active 7200 chassis with four vCMP guests each. Guests run in four active/standby pairs. Usually, all vCMP guests on one chassis are active and the other standby (no technical reason for doing so, it's just easier to remember who's active). Tagged interface 2.1 on each chassis is currently used for 19 VLANs. The plan is to create a trunk containing interfaces 2.1 and 2.2 (not in use) on each chassis. Do this first on the "standby" 7200 chassis (all VMs in standby). Once complete, force failover of all active VMs and then repeat on the other chassis. Force failover again (back to the original one) afterward to verify.

1. Create "Trunk01" and add interface 2.2. Move a VLAN over to it and verify that nodes in that VLAN recover on one or more VMs. Test a ping to a self IP, etc. Trunk01 will be used as a "tagged" interface.
2. Once the secondary link connectivity looks good, move the other VLANs over to Trunk01. Check to ensure nodes recover.
3. Once all VLANs have been moved from 2.1 to Trunk01, move 2.1 into the Trunk01 LAG with 2.2.
4. Force failover of the active VMs to the standby ones and repeat the procedure on the other chassis. Once complete, force failover back to verify.

Thanks!
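If it helps the review, a minimal tmsh sketch of steps 1-3 above as run on the vCMP host; the VLAN name is an example, and the remaining 18 VLANs would follow the same pattern:

# step 1: create the trunk with the unused interface, then move one VLAN onto it
tmsh create net trunk Trunk01 interfaces add { 2.2 }
tmsh modify net vlan example_vlan interfaces delete { 2.1 }
tmsh modify net vlan example_vlan interfaces add { Trunk01 { tagged } }

# step 3: once all VLANs are on Trunk01, add 2.1 to the LAG
tmsh modify net trunk Trunk01 interfaces add { 2.1 }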
F5 tagged vlan with copper and fiber?

Hi, I have a question regarding our network infrastructure. Currently we have a network switch behind the F5 which connects the F5 and the servers over normal 1Gbps ports. In the future we will have a new server and a new switch which supports only 10Gbps (but we are not replacing the old server; the new server resides in the same VLAN as the old server, and we will still be using the old server too). Can we configure VLAN tagging by assigning interface 1.x (copper) and 2.x (fiber) to the same VLAN and have it work properly? The logical diagram would be: the F5 has 2 cables, a copper cable connected to the old switch and a fiber cable connected to the new switch (same VLAN). Thank you
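Configuration-wise, a BIG-IP VLAN will accept both a 1.x copper member and a 2.x fiber member. A minimal sketch; the VLAN name and tag number are made up, and whether you want tagged or untagged members depends on how the switch ports are configured:

# one VLAN with a copper member (old switch) and a fiber member (new switch)
tmsh create net vlan servers_vlan tag 120 interfaces add { 1.1 { tagged } 2.1 { tagged } }
tmsh list net vlan servers_vlan

One thing to check with your network team: a VLAN with multiple member interfaces forwards L2 traffic between them, so with the two switches attached to the same VLAN through the BIG-IP it is worth confirming there is no loop between the old and new switch paths.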