trunk
9 Topics

Trunk / vPC Port-Channel not working properly with Nexus 9K / 2K (FEX): Spanning-tree involved
Hello DevCentral, I'll present to you an odd behavior seen with two Nexus 9Ks (9.2.1) with Nexus 2Ks as FEX, to which two BIG-IP i4600s (12.1.4) are connected.

Our setup:
- The two BIG-IPs are configured in a device group.
- Each BIG-IP is connected to two Nexus 2Ks (FEX) in the same aggregate, using vPC technology on the Nexus side.
- The configuration matches this KB: https://support.f5.com/csp/article/K13142
- Spanning tree is disabled on the interfaces and the trunk on the BIG-IP.
- Flow control is disabled on the BIG-IP and the Nexus.
- The BIG-IPs are connected to multiple VLANs using the "Tagged Interfaces" option (802.1Q tag on packets).

Observations with this spanning-tree setup on the vPC configured on the Nexus:

    spanning-tree port type edge
    spanning-tree bpduguard enable

- Observation 1: When every interface is up, everything works properly.
- Observation 2: If I shut one or the other interface of Port-channel1 on the switch, everything is OK. If I shut both interfaces of Port-channel1, the aggregate is seen as "Down". If I "no shut" interface 1 of Port-channel1, the aggregate is rebuilt and works after a few seconds.
- Observation 3: If I shut one or the other interface of Port-channel1 on the switch, everything is OK. If I shut both interfaces of Port-channel1, the aggregate is seen as "Down". If I "no shut" interface 2 of Port-channel1, the aggregate is rebuilt but packets are not forwarded to/from this interface.

Observations with this spanning-tree setup on the vPC configured on the Nexus (notice the word "trunk" added):

    spanning-tree port type edge trunk
    spanning-tree bpduguard enable

- Observation 1: When every interface is up, everything works properly.
- Observation 2: If I shut one or the other interface of Port-channel1 on the switch, everything is OK. If I shut both interfaces of Port-channel1, the aggregate is seen as "Down". If I "no shut" interface 1 of Port-channel1, the aggregate is rebuilt and works after a few seconds.
- Observation 3: If I shut one or the other interface of Port-channel1 on the switch, everything is OK. If I shut both interfaces of Port-channel1, the aggregate is seen as "Down". If I "no shut" interface 2 of Port-channel1, the aggregate is rebuilt and works after a few seconds.

General observations:
- There is no error detected on the interfaces/port-channel on the Nexus.
- There is no error detected on the interfaces/port-channel on the BIG-IP.

Conclusion:
- "spanning-tree port type edge" is not working for this setup.
- "spanning-tree port type edge trunk" is working for this setup.

Question: Can someone explain what's happening here? Regards, my fellow companions.
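To make the two variants easier to compare, here is a minimal sketch of the vPC member port-channel configuration in the working case. The interface numbering, VLAN list, and vPC number are placeholders rather than our real values:

    interface port-channel1
      switchport mode trunk
      switchport trunk allowed vlan 10,20,30
      spanning-tree port type edge trunk
      spanning-tree bpduguard enable
      vpc 1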
LACP on Viprion and Frame Distribution Hash

Hi, I am rather not a pro in this area, so excuse me if my questions don't make sense. I wonder what the relation is between the frame distribution hash and VIPRION cluster distribution (see the end of the article "Overview of trunks on BIG-IP platforms"). I assume the flow is like this (the VIPRION uses vCMP and multi-blade vGuests):

- The peer switch receives a packet destined to the VIPRION.
- The peer creates a hash (based on its frame distribution algorithm).
- The hash is assigned to a chosen interface in the trunk.
- All other packets with the same hash are sent via the same interface.
- The VIPRION receives the packet and assigns it to a TMM — this TMM can be running on a vGuest VM on a different blade than the interface receiving the packet (if I am not wrong, this is possible?).
- If the TMM is on another blade, the packet is sent over the backplane to the VM on that blade.
- According to the article, the response packet will be sent via an interface on the blade where the TMM actually processing the packet is running, not via the receiving interface.

If the above is correct, does that not create an issue on the peer side? Will it not expect the returning packet on the same trunk interface it used to send the packet? According to the mentioned article: "When frames are transmitted on a trunk, they are distributed across the working member links. The distribution function ensures that the frames belonging to a particular conversation are neither mis-ordered nor duplicated at the receiving end."

What does "conversation" mean here?
- Only egress packets with the same hash value?
- Both ingress and egress packets with the same source IP:destination IP — I doubt it, but I'm not sure.

In the end I would just like to find the logic for LACP: on a given side (peer or VIPRION), LACP only ensures that packets with a given hash entering the trunk will be sent via the same interface. Packets received from the trunk (even if they are a response to a given packet sent into the trunk) can arrive on a completely different interface — LACP does not care whether one TCP connection or UDP flow runs via the same interface in both directions. If the above is true, then the choice of hashing algorithm can be different on both sides of the trunk — am I right?

Piotr
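In case it helps, the hash I am referring to on the BIG-IP side is the trunk's distribution-hash setting; a minimal tmsh sketch (the trunk name is a placeholder, and the available hash values should be checked for your version):

    tmsh list net trunk my_trunk distribution-hash
    tmsh modify net trunk my_trunk distribution-hash src-dst-ipport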
Trunk setup procedure - review?

I need to change a single interface over to a two-interface trunk group on a pair of 7200 chassis. Below is how I'm thinking it could be done, but I thought I'd post it here for the more experienced folks to look over and see if there's anything missing (or a better alternative). The switch config side of this is being handled by a fellow tech, and we've done this before on a 2200 (but as an initial config). So, anyway...

Scenario: A pair of active/active 7200 chassis with four vCMP guests each. Guests run in four active/standby pairs. Usually, all vCMP guests on one chassis are active and the other standby (no technical reason for doing so, it's just easier to remember who's active). Tagged interface 2.1 on each chassis is currently used for 19 VLANs. The plan is to create a trunk with interfaces 2.1 and 2.2 (not in use) on each chassis. Do this first on the "standby" 7200 chassis (all VMs in standby). Once complete, force failover of all active VMs and then repeat on the other chassis. Force failover again (back to the original one) afterward to verify.

- Create "Trunk01" and add interface 2.2.
- Move a VLAN over to it and verify nodes in that VLAN recover on one or more VMs. Test a ping to a self IP, etc. Trunk01 will be used as a "tagged" interface.
- Once the secondary link connectivity looks good, move the other VLANs over to Trunk01. Check to ensure nodes recover.
- Once all VLANs have been moved from 2.1 to Trunk01, move 2.1 into the Trunk01 LAG with 2.2.
- Force failover the active VMs to the standby ones and repeat the procedure on the other chassis. Once complete, force failover back to verify.

Thanks!
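For reference, the tmsh equivalent of the steps above would be roughly the following (the VLAN name is a placeholder, and LACP is assumed to match whatever the switch side is using):

    # create the trunk with the currently unused interface
    tmsh create net trunk Trunk01 interfaces add { 2.2 } lacp enabled
    # move one VLAN over to the trunk as a tagged VLAN and test, then repeat for the rest
    tmsh modify net vlan test_vlan interfaces replace-all-with { Trunk01 { tagged } }
    # once all 19 VLANs are on Trunk01, fold 2.1 into the LAG
    tmsh modify net trunk Trunk01 interfaces add { 2.1 }
    tmsh save sys config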
F5 tagged vlan with copper and fiber?

Hi, I have a question regarding network infrastructure. Currently we have a network switch behind the F5 that connects the F5 and the servers over normal 1 Gbps ports. In the future we will have a new server and a new switch that supports only 10 Gbps (but we are not replacing the old server; the new server resides in the same VLAN as the old server and we will still be using the old server too). Can we configure VLAN tagging by assigning interface 1.x (copper) and 2.x (fiber) to the same VLAN and have it work properly? The logical diagram would be: the F5 has two cables, a copper cable connected to the old switch and a fiber cable connected to the new switch (same VLAN). Thank you
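If it matters, what I have in mind on the BIG-IP side is roughly this (the VLAN name and tag are made up for the example; 1.1 is a copper port and 2.1 a fiber port):

    tmsh create net vlan servers_vlan tag 100 interfaces add { 1.1 { tagged } 2.1 { tagged } }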
VIPRION, vCMP, TMM and DAG - how it works

Hi, I tried to find a definitive answer in the KB as well as in posts on DC, but I am still not sure if I get things right.

Scenario:
- VIPRION with two blades
- Trunk created including one port from each blade — let's say it's used by the ext VLAN
- vGuest spanning two blades, set with 2 vCPUs per slot
- The vGuest will consist of two VMs (one per blade), each with 2 vCPUs — the vGuest's total vCPUs = 4

According to all I found, that will give 4 TMM processes — 1 per vCPU (treated as a core, I guess). I don't know how that translates to TMM instances — should I assume we will have 4 TMM instances as well?

First question: how does DAG perform distribution in relation to the vGuest setup — 1 vGuest (4 TMM processes) = 2 VMs (2 x 2 TMMs)? Is DAG treating the vGuest as one entity and distributing connections among all 4 TMMs, or just among the TMMs on a given VM? In other words, let's say a new connection was directed by LACP on the switch to the blade 1 interface. This is a new connection, so I assume DAG needs to assign it to a TMM process/instance. Will it consider only the TMMs running on the VM on blade 1, or all TMMs of the vGuest? If all, and a TMM on the blade 2 VM is selected, then I assume that the VM (or DAG and HSB) on blade 1 will use the chassis backplane to pass traffic to the TMM running on the VM on blade 2 — is that right? If so, will returning traffic be passed back via the backplane to the VM on blade 1 and then via the blade 1 interface to the switch, or will it go back directly via the interface on blade 2? If the second option is true, will it be an issue for LACP — traffic from the switch to the blade hashed (assuming a src/dst IP:port hash) to the blade 1 interface, and traffic from the VIPRION going back via the blade 2 interface link, even if the hash is the same? If DAG is distributing connections between all of the vGuest's TMMs, how does it decide which VM traffic should be sent to — checking load on the VMs, or round robin so each new TCP/UDP port hash is directed to a new TMM?

Piotr
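For what it's worth, this is how I have been checking the TMM layout and the trunk members while testing (assuming these are the right commands to look at):

    # on the vCMP guest: number of TMMs and where they run
    tmsh show sys tmm-info
    # on the VIPRION host: trunk members and per-link statistics
    tmsh show net trunk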
vCMP and Trunk configuration

I'm new to working with vCMP... I have two 52550v set up as vCMP hosts. Each host has two 10 Gb fiber connections that connect to the pair of core switches. I have created a trunk and added the interfaces. I have then created several VLANs (with tagging) and added the trunk to the VLANs. Then I added the VLANs to the vCMP guests. Now when I log into a vCMP guest, I see the VLANs, but when I try to set up HA using the VLANs, I get a message that the VLAN doesn't have an interface. When I look at the VLAN, the interface list is empty. I can see the trunk in the list, but I can't add it to the VLAN, which makes sense, because I should only add it to the VLAN from the host. Am I missing something? Is there something wrong with me using the same trunk for more than one VLAN?
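For completeness, the host-side configuration looks roughly like this (names, tags, and interfaces are examples, not my real ones):

    # on the vCMP host
    tmsh create net trunk core_trunk interfaces add { 1.1 1.2 } lacp enabled
    tmsh create net vlan ha_vlan tag 100 interfaces add { core_trunk { tagged } }
    tmsh create net vlan app_vlan tag 200 interfaces add { core_trunk { tagged } }
    # publish the host VLANs to the guest
    tmsh modify vcmp guest guest1 vlans add { ha_vlan app_vlan }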
MAC address masquerade configuration for multi-VLAN trunk interface

I've got a 2-device LTM cluster with a 2-port LACP-bundle trunk that has several VLANs on it, and I'm looking at deploying MAC masquerade. Currently, the LTM cluster does not have masquerade configured. I've been looking at https://support.f5.com/kb/en-us/solutions/public/13000/500/sol13502.html for configuration instructions. Do I only need to set a single virtual MAC, or do I need to specify a virtual MAC for each VLAN? If only one, will it iterate through virtual MACs for each VLAN like it does with the predefined MAC addresses? Or will it end up using the same MAC address for each VLAN?

For example, currently, the (anonymized) MAC address for the eth0 interface is:

    eth0 Link encap:Ethernet HWaddr DE:AD:BE:EF:00:01

But each VLAN IP interface has a VLAN-specific MAC address that's the same as the base eth0 MAC address with a different last byte. I.e.:

    MYVLAN1 Link encap:Ethernet HWaddr DE:AD:BE:EF:00:07
    MYVLAN2 Link encap:Ethernet HWaddr DE:AD:BE:EF:00:08
    MYVLAN3 Link encap:Ethernet HWaddr DE:AD:BE:EF:00:09
    MYVLAN4 Link encap:Ethernet HWaddr DE:AD:BE:EF:00:0A

If I configure my LTM, go to Device Management -> Traffic Groups -> traffic-group-1, and enter 2B:AD:BE:EF:00:01 in the "MAC Masquerade Address" field, will my interface MAC addresses be like this?

    eth0    Link encap:Ethernet HWaddr 2B:AD:BE:EF:00:01
    MYVLAN1 Link encap:Ethernet HWaddr 2B:AD:BE:EF:00:07
    MYVLAN2 Link encap:Ethernet HWaddr 2B:AD:BE:EF:00:08
    etc.

Or will each VLAN have the same virtual MAC, like this?

    eth0    Link encap:Ethernet HWaddr 2B:AD:BE:EF:00:01
    MYVLAN1 Link encap:Ethernet HWaddr 2B:AD:BE:EF:00:01
    MYVLAN2 Link encap:Ethernet HWaddr 2B:AD:BE:EF:00:01

Thanks!
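For context, the way I was planning to apply it from tmsh (the address below is just a locally administered unicast placeholder, and the exact property name should be double-checked against the sol13502 article for your version):

    tmsh modify cm traffic-group traffic-group-1 mac 02:ad:be:ef:00:01
    tmsh list cm traffic-group traffic-group-1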
LAG with Brocade switches

Hello, is anyone here using F5 BIG-IP LTM with Brocade Ethernet switches? Did you configure Brocade's LAG as a static or dynamic LAG? And what options did you use for the F5 trunk configuration (Link Selection Policy and Frame Distribution Hash)? Thank you.
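On the F5 side, the trunk options I am asking about are the ones set like this (trunk name and interfaces are placeholders; "lacp enabled" would correspond to a dynamic LAG on the Brocade):

    tmsh create net trunk brocade_trunk interfaces add { 1.1 1.2 } lacp enabled
    tmsh modify net trunk brocade_trunk link-select-policy auto distribution-hash src-dst-ipport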
EtherChannel Issue

Hi, I am having an issue with a four-port EtherChannel between a BIG-IP 2000 and a Cisco 3750. Looking at the interface stats on the F5, outbound traffic from the F5 back to the clients is well balanced across the four links. However, the inbound traffic from the switch only uses two of the available four. The load-balancing method on the Cisco is src-ip, and src-dst-ip on the F5. We have tried various permutations of the LB method, but are still only receiving traffic on the two ports. Could this be due to other settings in the EtherChannel setup? And if so, can anybody point me in the right direction please? We are currently only getting 2 Gbps up and 2 Gbps down. All four gig ports are up and set to full duplex.

Thanks
Nick
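For reference, this is what I am checking and changing on the Cisco side (the port-channel group number is a placeholder; on the 3750 the load-balance method is a global setting, not per port-channel):

    show etherchannel load-balance
    conf t
     port-channel load-balance src-dst-ip
    end
    show etherchannel 1 port-channel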