lacp
7 Topics

Interfaces up but trunk down
Hi all,

We have a VIPRION C2400 with one blade installed, connected to Nexus N5K switches. There are 3 LACP trunks with 2 interfaces each; they formed correctly and have been running for over a year on version 12.0.0. Recently 2 more blades were added to the VIPRION, each contributing 3 interfaces to the 3 LACP trunks. There was no problem adding the interfaces, and the VIPRION was not rebooted after they were added.

Today we rebooted the VIPRION. When the system came back up, all 3 trunks were marked down while the member interfaces were up. We double-checked with our network support team and found that although the Port Channel on the switch side is connected, the newly added interfaces have been suspended ever since they were added. We tried the following actions, all in vain:

- removed the newly added interfaces and restarted the system
- changed the production LACP mode from Active to Passive (according to https://support.f5.com/csp/article/K13142)

Since the Port Channel status on the switch is connected, there is nothing more we can do on the switch side. Here is the interface status:

    --------------------------------------------------------------------------------
    Port      Name        Status     Vlan   Duplex  Speed  Type
    --------------------------------------------------------------------------------
    Eth1/2    po1         connected  trunk  full    10G    1/10g   (existing)
    Eth1/3    po1         connected  trunk  full    10G    1/10g   (existing)
    Eth1/12   po1         suspnd     trunk  full    10G    1/10g   (new)
    Eth1/13   po1         suspnd     trunk  full    10G    1/10g   (new)
    Po1       viprion_ha  connected  trunk  full    10G

There are also some errors in /var/log/ltm, e.g.:

    Jun 8 16:24:08 slot2/viprion1 notice sod[5410]: 010c0048:5: Bcm56xxd and lacpd connected - links up.
    Jun 8 16:24:10 slot2/viprion1 err bcm56xxd[6953]: 012c0010:3: Trouble with packet send request from LACPD port 9
    Jun 8 16:24:10 slot2/viprion1 err bcm56xxd[6953]: 012c0010:3: Trouble with packet send request from LACPD port 11
    Jun 8 16:24:12 slot2/viprion1 err bcm56xxd[6953]: 012c0010:3: Trouble with packet send request from LACPD port 10
    Jun 8 16:24:42 slot3/viprion1 info lacpd[6220]: 01160012:6: Link 3/1.3 Partner Out of Sync
    Jun 8 16:24:42 slot1/viprion1 info lacpd[6102]: 01160012:6: Resuming log processing at this invocation; held … messages.
    Jun 8 16:24:42 slot1/viprion1 info lacpd[6102]: 01160012:6: Link 1/1.11 Partner Out of Sync
    Jun 8 16:24:42 slot1/viprion1 info lacpd[6102]: 01160012:6: Link 1/1.15 Partner Out of Sync
    Jun 8 16:24:58 slot2/viprion1 info lacpd[7495]: 01160012:6: Link 2/1.1 Partner Out of Sync
    Jun 8 16:24:58 slot2/viprion1 info lacpd[7495]: 01160012:6: Link 2/1.3 Partner Out of Sync
    Jun 8 16:24:58 slot2/viprion1 info lacpd[7495]: 01160012:6: Link 2/1.2 Partner Out of Sync

We are out of ideas. Would anyone please help? Thanks a lot.

Regards
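For reference, a minimal sketch of the tmsh checks we can run on the BIG-IP side while troubleshooting (nothing here is specific to this config; the trunk and interface names in the output will be your own):

    # Show runtime status of all trunks, including which member links are active
    tmsh show net trunk

    # Show configured trunk properties (LACP enabled/mode/timeout, member interfaces)
    tmsh list net trunk all-properties

    # Check the physical state of the member interfaces on each blade
    tmsh show net interface
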
P2V migration while LACP configured for links

I am working on an F5 LTM migration, where a pair of LTMs running 14.1.x will be migrated to 2 VMs. All configs can be migrated by loading a UCS, but LACP is configured for their network interfaces. KB K85674611 already outlines the issue with LACP on VMs and provides a workaround of removing LACP before generating the UCS. However, this pair of LTMs is running in production, and the customer is reluctant to change the F5 network config for fear of a service interruption. Is there any other method that allows loading an F5 UCS with an LACP configuration onto VM appliances? Any ideas?
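One possibility, untested here and offered as a sketch rather than a supported procedure, is to strip the trunk stanzas from a copy of the UCS offline instead of touching the running config. A UCS file is a gzipped tar archive, so something along these lines might work; the file names inside the archive are assumptions based on a standard UCS layout:

    # Work on a copy of the UCS; never modify the original archive.
    mkdir /var/tmp/ucs-edit && cd /var/tmp/ucs-edit
    tar -xzf /var/tmp/original.ucs

    # Remove the "net trunk" stanzas (and any vlan references to them) from the
    # base networking config before repackaging. Edit by hand to be safe:
    vi config/bigip_base.conf

    # Repackage and load on the target VE
    tar -czf /var/tmp/edited.ucs *
    # on the VE (copy to /var/local/ucs first if your version requires it):
    # tmsh load sys ucs /var/tmp/edited.ucs

Depending on version, the platform-migrate option of "tmsh load sys ucs" may also be worth testing in a lab, though whether it handles trunk stanzas is something to verify before relying on it.
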
Big-IP enable LACP

Dear Tech Community, We are currently evaluating the BIG-IP LTM v15 trial product in our virtual lab environment. From a network perspective we are trying to enable LACP in the product, and for that I am referring to this article: https://support.f5.com/csp/article/K13142 We are missing the check box described under Step 6 ("To enable LACP, select the LACP check box"). Any idea where to find the check box? Did we miss anything? We cannot find it where suggested, under Network --> Trunk --> Trunk List --> Create. Please find attached a screenshot. Thank you in advance, Bujari
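If the GUI check box is not available, a tmsh equivalent may be worth trying; a minimal sketch, assuming 1.1 and 1.2 are the VE interfaces you want to bundle (placeholder names), and noting that LACP support on BIG-IP VE depends on version and hypervisor:

    # Create a trunk with LACP enabled in active mode
    tmsh create net trunk lab-trunk interfaces add { 1.1 1.2 } lacp enabled lacp-mode active

    # Confirm the LACP settings took effect
    tmsh list net trunk lab-trunk lacp lacp-mode
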
F5 Viprion hash bit size in hardware for LACP

Dear all, For optimal traffic distribution across the links inside a trunk / LACP port channel, it is important to know what hash bit size is built into the F5 hardware. As explained in the comparison below, a 3-bit hash is optimized for 2, 4, or 8 links, while an 8-bit hash gives an almost equal link distribution when using 6 links inside the trunk: https://www.packetmischief.ca/2012/07/24/doing-etherchannel-over-3-5-6-and-7-link-bundles/ I suppose the F5 VIPRION hardware uses a 3-bit hash size, but can someone confirm? Is it perhaps adjustable, for example via a database variable, to switch between 3-bit and 8-bit? The following article explains the hashing methods but does not provide information about bit size: https://support.f5.com/csp/article/K13562
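To illustrate why the bit size matters (purely arithmetic, nothing F5-specific): a 3-bit hash yields 8 buckets, and 8 buckets cannot be spread evenly over 6 links, so two links end up carrying double traffic. A quick shell sketch:

    # 3-bit hash -> 8 buckets mapped onto 6 links: links 0 and 1 get two
    # buckets each, i.e. roughly 2x the traffic of the other four links.
    for bucket in $(seq 0 7); do
      echo "bucket $bucket -> link $(( bucket % 6 ))"
    done

    # 8-bit hash -> 256 buckets: 256 = 6*42 + 4, so the worst imbalance is
    # 43 vs 42 buckets per link, which is nearly even.
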
LACP on Viprion and Frame Distribution Hash

Hi, I am not really a pro in this area, so excuse me if my questions do not make sense. I wonder what the relation is between the frame distribution hash and VIPRION cluster distribution (see the end of the article Overview of trunks on BIG-IP platforms). I assume the flow is like this (the VIPRION runs vCMP with multi-blade vGuests):

1. The peer switch receives a packet destined for the VIPRION.
2. The peer creates a hash (based on its frame distribution algorithm).
3. The hash is assigned to a chosen interface in the trunk.
4. All other packets with the same hash are sent via the same interface.
5. The VIPRION receives the packet and assigns it to a TMM - this TMM can be running on a vGuest VM on a different blade than the interface that received the packet - if I am not wrong, this is possible?
6. If the TMM is on another blade, the packet is sent over the backplane to the VM on that blade.
7. According to the article, the response packet will be sent via an interface on the blade where the TMM actually processing the packet is running, not via the receiving interface.

If the above is correct, does that not create an issue on the peer side? Will it not expect the returning packet on the same trunk interface it used to send the packet? According to the mentioned article: "When frames are transmitted on a trunk, they are distributed across the working member links. The distribution function ensures that the frames belonging to a particular conversation are neither mis-ordered nor duplicated at the receiving end." What does "conversation" mean here?

- Only egress packets with the same hash value?
- Both ingress and egress packets with the same source IP:destination IP? I doubt it, but I am not sure.

In the end I would just like to pin down the logic of LACP: on a given side (peer or VIPRION), LACP only ensures that packets with a given hash entering the trunk are sent via the same interface. Packets received from the trunk (even a response to a packet sent into the trunk) can arrive on a completely different interface - LACP does not care whether a given TCP connection or UDP flow uses the same interface in both directions. If that is true, then the choice of hashing algorithm can be different on the two sides of the trunk - am I right? Piotr
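For reference, the hash the BIG-IP side uses for its own egress distribution is configurable per trunk; a minimal sketch (the trunk name is a placeholder, and the available values can vary by version):

    # Inspect the current frame distribution hash on a trunk
    tmsh list net trunk my-trunk distribution-hash

    # Hash on source+destination IP and port for egress frames
    tmsh modify net trunk my-trunk distribution-hash src-dst-ipport

This only governs egress from the BIG-IP; the peer switch picks its own hash for traffic toward the BIG-IP, which is consistent with the two sides not needing to agree.
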
F5 tagged vlan with copper and fiber?

Hi, I have a question regarding network infrastructure. Currently we have a network switch behind the F5 which connects the F5 and the servers over normal 1 Gbps ports. In the future we will add a new server and a new switch which supports only 10 Gbps (but we are not replacing the old server; the new server resides in the same VLAN as the old server, and we will keep using the old server too). Can we configure VLAN tagging by assigning interface 1.x (copper) and 2.x (fiber) to the same VLAN, and will it work properly? The logical diagram would be: the F5 has 2 cables - a copper cable connected to the old switch and a fiber cable connected to the new switch (same VLAN). Thank you
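Configuration-wise nothing stops a VLAN from spanning both interface types; a minimal sketch, assuming 1.1 is the copper port, 2.1 the fiber port, and tag 100 (all placeholders):

    # Create one VLAN tagged on both a copper and a fiber interface
    tmsh create net vlan app-vlan tag 100 interfaces add { 1.1 { tagged } 2.1 { tagged } }

    # Verify both interfaces joined the VLAN
    tmsh list net vlan app-vlan

Note that carrying the same VLAN to two different switches raises a loop-prevention question (spanning tree) on the switch side, which is separate from the F5 config.
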
Trunk setup procedure - review?

I need to change a single interface over to a two-interface trunk group on a pair of 7200 chassis. Below is how I am thinking it could be done, but I thought I would post it here for the more experienced folks to look over and see if there is anything missing (or a better alternative). The switch config side of this is being handled by a fellow tech, and we have done this before on a 2200 (but as an initial config). So, anyway...

Scenario: a pair of active/active 7200 chassis with four vCMP guests each. Guests run in four active/standby pairs. Usually, all vCMP guests on one chassis are active and the other standby (no technical reason for doing so; it is just easier to remember which one is active). Tagged interface 2.1 on each chassis is currently used for 19 VLANs. The plan is to create a trunk containing interfaces 2.1 and 2.2 (not in use) on each chassis. Do this first on the "standby" 7200 chassis (all VMs in standby). Once complete, force failover of all active VMs and then repeat on the other chassis. Force failover again (back to the original one) afterward to verify. The steps (a tmsh sketch follows the list):

1. Create "Trunk01" and add interface 2.2.
2. Move a VLAN over to it and verify nodes in that VLAN recover on one or more VMs. Test a ping to a self IP, etc. Trunk01 will be used as a "tagged" interface.
3. Once the secondary link connectivity looks good, move the other VLANs over to Trunk01. Check to ensure nodes recover.
4. Once all VLANs have been moved from 2.1 to Trunk01, move 2.1 into the Trunk01 LAG with 2.2.
5. Force failover of the active VMs to the standby ones and repeat the procedure on the other chassis. Once complete, force failover back to verify.

Thanks!
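As a rough tmsh sketch of steps 1-4 (the VLAN name is a placeholder; run on the standby chassis first and verify between each step):

    # Step 1: create the trunk with only the unused interface
    tmsh create net trunk Trunk01 interfaces add { 2.2 }

    # Steps 2-3: move a VLAN off interface 2.1 and onto the trunk, tagged
    tmsh modify net vlan example-vlan interfaces delete { 2.1 } interfaces add { Trunk01 { tagged } }

    # Step 4: once every VLAN is off 2.1, add it to the trunk as the second member
    tmsh modify net trunk Trunk01 interfaces add { 2.1 }

    # Save the config once the chassis looks healthy
    tmsh save sys config
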