BIG-IP L2 Virtual Wire LACP Passthrough Deployment with IXIA Bypass Switch and Network Packet Broker (Single Service Chain - Active/Active)
Introduction

This article is part of a series on deploying BIG-IPs with bypass switches and network packet brokers. These devices allow for the transparent integration of network security tools with little to no network redesign and configuration change. For more information about bypass switch devices, refer to https://en.wikipedia.org/wiki/Bypass_switch; for network packet brokers, refer to https://www.ixiacom.com/company/blog/network-packet-brokers-abcs-network-visibility and https://www.gigamon.com/campaigns/next-generation-network-packet-broker.html. The article series introduces network designs that forward traffic to inline tools at layer 2 (L2).

F5's BIG-IP hardware appliances can be inserted in L2 networks. This can be achieved using either Virtual Wire (vWire) or by bridging two VLANs with a VLAN group. This article covers the design and implementation of the IXIA Bypass Switch and Network Packet Broker in conjunction with the BIG-IP i5800 appliance and Virtual Wire (vWire). For an architecture overview of the bypass switch and network packet broker, refer to https://devcentral.f5.com/s/articles/L2-Deployment-of-vCMP-guest-with-Ixia-network-packet-broker?tab=series&page=1. This article continues https://devcentral.f5.com/s/articles/BIG-IP-L2-Deployment-with-Bypasss-Network-Packet-Broker-and-LACP?tab=series&page=1 with the latest versions of the BIG-IP and IXIA devices, and explores additional combinations of BIG-IP and IXIA configuration.

Network Topology

The diagram below is a representation of the actual lab network and shows the deployment of BIG-IP with the IXIA Bypass Switch and Network Packet Broker.
Figure 1 - Deployment of BIG-IP with IXIA Bypass Switch and Network Packet Broker

Please refer to the Lab Overview section in https://devcentral.f5.com/s/articles/BIG-IP-L2-Deployment-with-Bypasss-Network-Packet-Broker-and-LACP?tab=series&page=1 for more insight into the lab topology and connections.

Hardware Specification

Hardware used in this article:

IXIA iBypass DUO (Bypass Switch)
IXIA Vision E40 (Network Packet Broker)
BIG-IP i5800
Arista DCS-7010T-48 (all four switches)

Software Specification

Software used in this article:

BIG-IP 16.1.0
IXIA iBypass DUO 1.4.1
IXIA Vision E40 5.9.1.8
Arista 4.21.3F (North Switches)
Arista 4.19.2F (South Switches)

Switch Configuration

LAG, or link aggregation, is a way of bonding multiple physical links into a combined logical link. MLAG, or multi-chassis link aggregation, extends this capability, allowing a downstream switch or host to connect to two switches configured as an MLAG domain. This provides redundancy by giving the downstream switch or host two uplink paths as well as full bandwidth utilization, since the MLAG domain appears as a single switch to Spanning Tree (STP). The Lab Overview section in https://devcentral.f5.com/s/articles/BIG-IP-L2-Deployment-with-Bypasss-Network-Packet-Broker-and-LACP?tab=series&page=1 shows MLAG configured on both switches. This article focuses on LACP deployment for tagged packets.
For more details on MLAG configuration, refer to https://eos.arista.com/mlag-basic-configuration/#Verify_MLAG_operation

Step Summary

Step 1: Configuration of MLAG peering between both North Switches
Step 2: Verify MLAG peering in North Switches
Step 3: Configuration of MLAG port-channels in North Switches
Step 4: Configuration of MLAG peering between both South Switches
Step 5: Verify MLAG peering in South Switches
Step 6: Configuration of MLAG port-channels in South Switches
Step 7: Verify port-channel status

Step 1: Configuration of MLAG peering between both North Switches

MLAG configuration in North Switch 1 and North Switch 2 is as follows.

North Switch 1:

Configure Port-Channel
interface Port-Channel10
   switchport mode trunk
   switchport trunk group m1peer

Configure VLAN
interface Vlan4094
   ip address 172.16.0.1/30

Configure MLAG
mlag configuration
   domain-id mlag1
   heartbeat-interval 2500
   local-interface Vlan4094
   peer-address 172.16.0.2
   peer-link Port-Channel10
   reload-delay 150

North Switch 2:

Configure Port-Channel
interface Port-Channel10
   switchport mode trunk
   switchport trunk group m1peer

Configure VLAN
interface Vlan4094
   ip address 172.16.0.2/30

Configure MLAG
mlag configuration
   domain-id mlag1
   heartbeat-interval 2500
   local-interface Vlan4094
   peer-address 172.16.0.1
   peer-link Port-Channel10
   reload-delay 150

Step 2: Verify MLAG peering in North Switches

North Switch 1:

North-1#show mlag
MLAG Configuration:
domain-id : mlag1
local-interface : Vlan4094
peer-address : 172.16.0.2
peer-link : Port-Channel10
peer-config : consistent

MLAG Status:
state : Active
negotiation status : Connected
peer-link status : Up
local-int status : Up
system-id : 2a:99:3a:23:94:c7
dual-primary detection : Disabled

MLAG Ports:
Disabled : 0
Configured : 0
Inactive : 6
Active-partial : 0
Active-full : 2

North Switch 2:

North-2#show mlag
MLAG Configuration:
domain-id : mlag1
local-interface : Vlan4094
peer-address : 172.16.0.1
peer-link : Port-Channel10
peer-config : consistent

MLAG Status:
state : Active
negotiation status : Connected
peer-link status : Up
local-int status : Up
system-id : 2a:99:3a:23:94:c7
dual-primary detection : Disabled

MLAG Ports:
Disabled : 0
Configured : 0
Inactive : 6
Active-partial : 0
Active-full : 2

Step 3: Configuration of MLAG port-channels in North Switches

North Switch 1:

interface Port-Channel513
   switchport trunk allowed vlan 513
   switchport mode trunk
   mlag 513
interface Ethernet50
   channel-group 513 mode active

North Switch 2:

interface Port-Channel513
   switchport trunk allowed vlan 513
   switchport mode trunk
   mlag 513
interface Ethernet50
   channel-group 513 mode active

Step 4: Configuration of MLAG peering between both South Switches

MLAG configuration in South Switch 1 and South Switch 2 is as follows.

South Switch 1:

Configure Port-Channel
interface Port-Channel10
   switchport mode trunk
   switchport trunk group m1peer

Configure VLAN
interface Vlan4094
   ip address 172.16.1.1/30

Configure MLAG
mlag configuration
   domain-id mlag1
   heartbeat-interval 2500
   local-interface Vlan4094
   peer-address 172.16.1.2
   peer-link Port-Channel10
   reload-delay 150

South Switch 2:

Configure Port-Channel
interface Port-Channel10
   switchport mode trunk
   switchport trunk group m1peer

Configure VLAN
interface Vlan4094
   ip address 172.16.1.2/30

Configure MLAG
mlag configuration
   domain-id mlag1
   heartbeat-interval 2500
   local-interface Vlan4094
   peer-address 172.16.1.1
   peer-link Port-Channel10
   reload-delay 150

Step 5: Verify MLAG peering in South Switches

South Switch 1:

South-1#show mlag
MLAG Configuration:
domain-id : mlag1
local-interface : Vlan4094
peer-address : 172.16.1.2
peer-link : Port-Channel10
peer-config : consistent

MLAG Status:
state : Active
negotiation status : Connected
peer-link status : Up
local-int status : Up
system-id : 2a:99:3a:48:78:d7

MLAG Ports:
Disabled : 0
Configured : 0
Inactive : 6
Active-partial : 0
Active-full : 2

South Switch 2:

South-2#show mlag
MLAG Configuration:
domain-id : mlag1
local-interface : Vlan4094
peer-address : 172.16.1.1
peer-link : Port-Channel10
peer-config : consistent

MLAG Status:
state : Active
negotiation status : Connected
peer-link status : Up
local-int status : Up
system-id : 2a:99:3a:48:78:d7

MLAG Ports:
Disabled : 0
Configured : 0
Inactive : 6
Active-partial : 0
Active-full : 2

Step 6: Configuration of MLAG port-channels in South Switches

South Switch 1:

interface Port-Channel513
   switchport trunk allowed vlan 513
   switchport mode trunk
   mlag 513
interface Ethernet50
   channel-group 513 mode active

South Switch 2:

interface Port-Channel513
   switchport trunk allowed vlan 513
   switchport mode trunk
   mlag 513
interface Ethernet50
   channel-group 513 mode active

LACP modes are as follows:

On
Active
Passive

LACP connection establishment will occur only for the following combinations:

Active in both North and South Switch
Active in North or South Switch and Passive in the other switch
On in both North and South Switch

Note: In this case, all the interfaces of both North and South Switches are configured with LACP mode Active.

Step 7: Verify port-channel status

North Switch 1:

North-1#show mlag interfaces detail
                                 local/remote
mlag   state        local  remote  oper   config   last change          changes
------ ------------ ------ ------- ------ -------- -------------------- -------
513    active-full  Po513  Po513   up/up  ena/ena  4 days, 0:34:28 ago  198

North Switch 2:

North-2#show mlag interfaces detail
                                 local/remote
mlag   state        local  remote  oper   config   last change          changes
------ ------------ ------ ------- ------ -------- -------------------- -------
513    active-full  Po513  Po513   up/up  ena/ena  4 days, 0:35:58 ago  198

South Switch 1:

South-1#show mlag interfaces detail
                                 local/remote
mlag   state        local  remote  oper   config   last change          changes
------ ------------ ------ ------- ------ -------- -------------------- -------
513    active-full  Po513  Po513   up/up  ena/ena  4 days, 0:36:04 ago  190

South Switch 2:

South-2#show mlag interfaces detail
                                 local/remote
mlag   state        local  remote  oper   config   last change          changes
------ ------------ ------ ------- ------ -------- -------------------- -------
513    active-full  Po513  Po513   up/up  ena/ena  4 days, 0:36:02 ago  192

Ixia iBypass Duo Configuration

For detailed insight, refer to the IXIA iBypass Duo Configuration section in https://devcentral.f5.com/s/articles/L2-Deployment-of-vCMP-guest-with-Ixia-network-packet-broker?page=1

Figure 2 - Configuration of iBypass Duo (Bypass Switch)

Heartbeat Configuration

Heartbeats are configured on both bypass switches to monitor the tools in their primary and secondary paths. If a tool failure is detected, the bypass switch forwards traffic to the secondary path. Heartbeats can be configured using multiple protocols; here, Bypass Switch 1 uses DNS and Bypass Switch 2 uses IPX for the heartbeat.

Figure 3 - Heartbeat Configuration of Bypass Switch 1 (DNS Heartbeat)

In this infrastructure, the VLAN ID is 513, represented as hex 0201.

Figure 4 - VLAN Representation in Heartbeat
Figure 5 - Heartbeat Configuration of Bypass Switch 1 (B Side)
Figure 6 - Heartbeat Configuration of Bypass Switch 2 (IPX Heartbeat)
Figure 7 - Heartbeat Configuration of Bypass Switch 2 (B Side)

IXIA Vision E40 Configuration

Create the following resources with the information provided:

Bypass Port Pairs
Inline Tool Pair
Service Chains

Figure 8 - Configuration of Vision E40 (NPB)

This article focuses on deployment of the Network Packet Broker with a single service chain, whereas the previous article is based on two service chains.

Figure 9 - Configuration of Tool Resources

In a single Tool Resource, two Inline Tool Pairs are configured, which allows both Bypass Port Pairs to be served by a single Service Chain.

Figure 10 - Configuration of VLAN Translation

Per the switch configuration, the source VLAN is 513; it is translated to 2001 and 2002 for Bypass 1 and Bypass 2 respectively.
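The hex encoding used in the heartbeat VLAN field can be checked quickly: 513 decimal is 0x0201, and the translated VLANs encode the same way. A small sketch (the helper name is ours, not from any IXIA tooling):

```python
def vlan_to_hex(vlan_id: int) -> str:
    """Return the 4-digit hex form of a 12-bit 802.1Q VLAN ID."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("802.1Q VLAN IDs are 1-4094")
    return f"{vlan_id:04x}"

print(vlan_to_hex(513))   # original VLAN, shown as 0201 in Figure 4
print(vlan_to_hex(2001))  # translated VLAN for Bypass 1
print(vlan_to_hex(2002))  # translated VLAN for Bypass 2
```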
For more insight into VLAN translation, refer to https://devcentral.f5.com/s/articles/L2-Deployment-of-vCMP-guest-with-Ixia-network-packet-broker?page=1

For tagged packets, VLAN translation should be enabled. LACP frames are untagged, so they must be bypassed and routed to the other port-channel. In this case LACP traffic will not reach the BIG-IP; instead it is routed directly from the NPB to the other pair of switches.

LACP Bypass Configuration

The network packet broker is configured to forward (or bypass) the LACP frames directly from the north switch to the south switch and vice versa. LACP frames bear the ethertype 8809 (in hex). This filter is configured during the Bypass Port Pair configuration.

Note: There are other methods to configure this filter, using service chains and filters, but this is the simplest for this deployment.

Figure 11 - Configuration to redirect LACP

BIG-IP Configuration

Step Summary

Step 1: Configure interfaces to support vWire
Step 2: Configure trunk in passthrough mode
Step 3: Configure Virtual Wire

Note: The steps mentioned above are specific to the topology in Figure 2. For more details on Virtual Wire (vWire), refer to https://devcentral.f5.com/s/articles/BIG-IP-vWire-Configuration?tab=series&page=1 and https://devcentral.f5.com/s/articles/vWire-Deployment-Configuration-and-Troubleshooting?tab=series&page=1

Step 1: Configure interfaces to support vWire

To configure interfaces to support vWire, do the following:

Log into the BIG-IP GUI
Select Network -> Interfaces -> Interface List
Select the specific interface and, under vWire configuration, select Virtual Wire as the Forwarding Mode

Figure 12 - Example GUI configuration of interface to support vWire

Step 2: Configure trunk in passthrough mode

To configure the trunk, do the following:

Log into the BIG-IP GUI
Select Network -> Trunks
Click Create to configure a new trunk
Disable LACP for LACP passthrough mode

Figure 13 - Configuration of North Trunk in Passthrough Mode
Figure 14 - Configuration of South Trunk in Passthrough Mode

Step 3: Configure Virtual Wire

To configure the Virtual Wire, do the following:

Log into the BIG-IP GUI
Select Network -> Virtual Wire
Click Create to configure the Virtual Wire

Figure 15 - Configuration of Virtual Wire

As VLAN 513 is translated to 2001 and 2002, the vWire is configured with explicit tagged VLANs. It is also recommended to have an untagged VLAN in the vWire to allow any untagged traffic.

Enable the multicast bridging sys db variable as below for LACP passthrough mode:

modify sys db l2.virtualwire.multicast.bridging value enable

Note: Make sure the sys db variable remains enabled after reboot and upgrade. For LACP mode, the multicast bridging sys db variable should be disabled.

Scenarios

As LACP passthrough mode is configured on the BIG-IP, LACP frames pass through the BIG-IP, and LACP is established between the North and South Switches. ICMP traffic is used to represent network traffic from the north switches to the south switches.

Scenario 1: Traffic flow through BIG-IP with North and South Switches configured in LACP active mode

The configurations above show that all four switches are configured in LACP active mode.

Figure 16 - MLAG after deployment of BIG-IP and IXIA with Switches configured in LACP ACTIVE mode

Figure 16 shows that port-channel 513 is active at both the North Switches and the South Switches.

Figure 17 - ICMP traffic flow from client to server through BIG-IP

Figure 17 shows ICMP is reachable from client to server through the BIG-IP. This verifies test case 1: LACP is established between the switches and traffic passes through the BIG-IP successfully.

Scenario 2: Active BIG-IP link goes down with link state propagation enabled in BIG-IP

Figure 15 shows Propagate Virtual Wire Link Status enabled in the BIG-IP. Figure 17 shows that interface 1.1 of the BIG-IP is the active incoming interface and interface 1.4 is the active outgoing interface.
Disabling BIG-IP interface 1.1 will take the active link down, as below.

Figure 18 - BIG-IP interface 1.1 disabled
Figure 19 - Trunk state after BIG-IP interface 1.1 disabled

Figure 19 shows that the trunks are up even though interface 1.1 is down. As configured, North_Trunk has two member interfaces, 1.1 and 1.3; one of them is still up, so North_Trunk remains active.

Figure 20 - MLAG status with interface 1.1 down and Link State Propagation enabled

Figure 20 shows that port-channel 513 is active at both the North Switches and the South Switches. The switches are unaware of the link failure; it is handled by the IXIA configuration.

Figure 21 - IXIA Bypass Switch after 1.1 interface of BIG-IP goes down

As shown in Figure 8, a single Service Chain is configured, which goes down only if both Inline Tool Port Pairs are down in the NPB. Bypass is enabled only if the Service Chain goes down in the NPB. Figure 21 shows that bypass is still not enabled in the IXIA Bypass Switch.

Figure 22 - Service Chain and Inline Tool Port Pair status in IXIA Vision E40 (NPB)

Figure 22 shows that the Service Chain is still up because BIG IP2 (Inline Tool Port Pair) is up while BIG IP1 is down. Figure 1 shows that P09 of the NPB is connected to interface 1.1 of the BIG-IP, which is down.

Figure 23 - ICMP traffic flow from client to server through BIG-IP

Figure 23 shows that traffic still flows through the BIG-IP even though interface 1.1 is down. The active incoming interface is now 1.3 and the active outgoing interface is 1.4. Low-bandwidth traffic is still allowed through the BIG-IP because bypass is not enabled; IXIA handles the rate-limiting process.

Scenario 3: North_Trunk goes down with link state propagation enabled in BIG-IP

Figure 24 - BIG-IP interface 1.1 and 1.3 disabled
Figure 25 - Trunk state after BIG-IP interface 1.1 and 1.3 disabled

Figure 15 shows that Propagate Virtual Wire Link State is enabled, and thus both trunks are down.
Figure 26 - IXIA Bypass Switch after 1.1 and 1.3 interfaces of BIG-IP go down
Figure 27 - ICMP traffic flow from client to server bypassing BIG-IP

Conclusion

This article covers BIG-IP L2 Virtual Wire Passthrough deployment with IXIA, with IXIA configured using a single Service Chain. Observations of this deployment are as follows:

VLAN Translation in the IXIA NPB converts the real VLAN ID (513) to the translated VLAN IDs (2001 and 2002)
The BIG-IP receives packets with the translated VLAN IDs (2001 and 2002)
VLAN Translation requires all packets to be tagged; untagged packets will be dropped. LACP frames are untagged, and thus a bypass is configured in the NPB for LACP.
Tool Sharing needs to be enabled to allow untagged packets, which adds an extra tag. This type of configuration and testing will be covered in upcoming articles.
With a single Service Chain, if any one of the Inline Tool Port Pairs goes down, low-bandwidth traffic is still allowed to pass through the BIG-IP (tool)
If any Inline Tool link goes down, IXIA decides whether to bypass or rate limit; the switches remain unaware of the changes.
With a single Service Chain, if the Tool Resource is configured with both Inline Tool Port Pairs in Active-Active state, load balancing occurs and both paths are active at a given point in time.
Multiple Service Chains in the IXIA NPB can be used instead of a single Service Chain to remove the rate-limiting process. This type of configuration and testing will be covered in upcoming articles.
If the BIG-IP goes down, IXIA enables bypass and ensures there is no packet drop.

BIG-IP L2 Virtual Wire LACP Passthrough Deployment with Gigamon Network Packet Broker - I
Introduction

This article is part of a series on deploying BIG-IPs with bypass switches and network packet brokers. These devices allow for the transparent integration of network security tools with little to no network redesign and configuration change. For more information about bypass switch devices, refer to https://en.wikipedia.org/wiki/Bypass_switch; for network packet brokers, refer to https://www.ixiacom.com/company/blog/network-packet-brokers-abcs-network-visibility and https://www.gigamon.com/campaigns/next-generation-network-packet-broker.html. The article series introduces network designs that forward traffic to inline tools at layer 2 (L2).

This article covers the design and implementation of the Gigamon Bypass Switch / Network Packet Broker in conjunction with the BIG-IP i5800 appliance and Virtual Wire (vWire) in LACP Passthrough Mode. It covers one of the variations mentioned in https://devcentral.f5.com/s/articles/L2-Deployment-of-BIG-IP-with-Gigamon.

Network Topology

The diagram below is a representation of the actual lab network and shows the deployment of BIG-IP with Gigamon.

Figure 1 - Topology with MLAG and LAG before deployment of Gigamon and BIG-IP
Figure 2 - Topology with MLAG and LAG after deployment of Gigamon and BIG-IP
Figure 3 - Connection between Gigamon and BIG-IP

Hardware Specification

Hardware used in this article:

BIG-IP i5800
GigaVUE-HC1
Arista DCS-7010T-48 (all four switches)

Note: All the interfaces/ports are 1G speed.

Software Specification

Software used in this article:

BIG-IP 16.1.0
GigaVUE-OS 5.7.01
Arista 4.21.3F (North Switches)
Arista 4.19.2F (South Switches)

Switch Configuration

LAG, or link aggregation, is a way of bonding multiple physical links into a combined logical link. MLAG, or multi-chassis link aggregation, extends this capability, allowing a downstream switch or host to connect to two switches configured as an MLAG domain.
This provides redundancy by giving the downstream switch or host two uplink paths as well as full bandwidth utilization, since the MLAG domain appears as a single switch to Spanning Tree (STP). Figure 1 shows MLAG configured at the North Switches and LAG configured at the South Switches. This article focuses on LACP deployment for untagged packets.

For more details on MLAG configuration, refer to https://eos.arista.com/mlag-basic-configuration/#Verify_MLAG_operation

Step Summary

Step 1: Configuration of MLAG peering between both switches
Step 2: Verify MLAG peering
Step 3: Configuration of MLAG port-channels
Step 4: Configuration of LAG port-channels
Step 5: Verify port-channel status

Step 1: Configuration of MLAG peering between both switches

MLAG configuration in North Switch 1 and North Switch 2 is as follows.

North Switch 1:

Configure Port-Channel
interface Port-Channel10
   switchport mode trunk
   switchport trunk group m1peer

Configure VLAN
interface Vlan4094
   ip address 172.16.0.1/30

Configure MLAG
mlag configuration
   domain-id mlag1
   heartbeat-interval 2500
   local-interface Vlan4094
   peer-address 172.16.0.2
   peer-link Port-Channel10
   reload-delay 150

North Switch 2:

Configure Port-Channel
interface Port-Channel10
   switchport mode trunk
   switchport trunk group m1peer

Configure VLAN
interface Vlan4094
   ip address 172.16.0.2/30

Configure MLAG
mlag configuration
   domain-id mlag1
   heartbeat-interval 2500
   local-interface Vlan4094
   peer-address 172.16.0.1
   peer-link Port-Channel10
   reload-delay 150

Step 2: Verify MLAG peering

North Switch 1:

North-1#show mlag
MLAG Configuration:
domain-id : mlag1
local-interface : Vlan4094
peer-address : 172.16.0.2
peer-link : Port-Channel10

MLAG Status:
state : Active
negotiation status : Connected
peer-link status : Up
local-int status : Up
system-id : 2a:99:3a:23:94:c7
dual-primary detection : Disabled

MLAG Ports:
Disabled : 0
Configured : 0
Inactive : 6
Active-partial : 0
Active-full : 2

North Switch 2:

North-2#show mlag
MLAG Configuration:
domain-id : mlag1
local-interface : Vlan4094
peer-address : 172.16.0.1
peer-link : Port-Channel10

MLAG Status:
state : Active
negotiation status : Connected
peer-link status : Up
local-int status : Up
system-id : 2a:99:3a:23:94:c7
dual-primary detection : Disabled

MLAG Ports:
Disabled : 0
Configured : 0
Inactive : 6
Active-partial : 0
Active-full : 2

Step 3: Configuration of MLAG port-channels

Figure 1 has two MLAG port-channels at the North Switches and two LAG port-channels at the South Switches. One port from each South Switch (South Switch 1 and South Switch 2) is connected to North Switch 1 and the other port is connected to North Switch 2. The two interfaces on the South Switches can be configured as a regular port-channel using LACP.

MLAG port-channel configuration is as follows.

North Switch 1:

interface Port-Channel120
   switchport access vlan 120
   mlag 120
interface Ethernet36
   channel-group 120 mode active
interface Port-Channel121
   switchport access vlan 120
   mlag 121
interface Ethernet37
   channel-group 121 mode active

North Switch 2:

interface Port-Channel120
   switchport access vlan 120
   mlag 120
interface Ethernet37
   channel-group 120 mode active
interface Port-Channel121
   switchport access vlan 120
   mlag 121
interface Ethernet36
   channel-group 121 mode active

Step 4: Configuration of LAG port-channels

The two interfaces on the South Switches can be configured as a regular port-channel using LACP.
South Switch 1:

interface Port-Channel120
   switchport access vlan 120
interface Ethernet36
   channel-group 120 mode active
interface Ethernet37
   channel-group 120 mode active

South Switch 2:

interface Port-Channel121
   switchport access vlan 121
interface Ethernet36
   channel-group 121 mode active
interface Ethernet37
   channel-group 121 mode active

LACP modes are as follows:

On
Active
Passive

LACP connection establishment will occur only for the following combinations:

Active in both North and South Switch
Active in North or South Switch and Passive in the other switch
On in both North and South Switch

Note: In this case, all the interfaces of both North and South Switches are configured with LACP mode Active.

Step 5: Verify port-channel status

North Switch 1:

North-1#show mlag interfaces detail
                                 local/remote
mlag   state        local  remote  oper   config   last change    changes
------ ------------ ------ ------- ------ -------- -------------- -------
120    active-full  Po120  Po120   up/up  ena/ena  0:00:00 ago    270
121    active-full  Po121  Po121   up/up  ena/ena  0:00:00 ago    238

North Switch 2:

North-2#show mlag interfaces detail
                                 local/remote
mlag   state        local  remote  oper   config   last change    changes
------ ------------ ------ ------- ------ -------- -------------- -------
120    active-full  Po120  Po120   up/up  ena/ena  0:01:34 ago    269
121    active-full  Po121  Po121   up/up  ena/ena  0:01:33 ago    235

South Switch 1:

South-1#show port-channel 120
Port Channel Port-Channel120:
   Active Ports: Ethernet36 Ethernet37

South Switch 2:

South-2#show port-channel 121
Port Channel Port-Channel121:
   Active Ports: Ethernet36 Ethernet37

Gigamon Configuration

In this article, Gigamon is configured using Inline Network Groups and Inline Tool Groups. For GUI and port configurations of Gigamon, refer to https://devcentral.f5.com/s/articles/L2-Deployment-of-BIG-IP-with-Gigamon.
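The LACP mode combinations listed in the steps above reduce to a simple rule: a channel forms when at least one negotiating end is Active, or when both ends are statically set to On. A small sketch of that decision (the function name is ours):

```python
def lacp_forms(north: str, south: str) -> bool:
    """Return True if a port-channel comes up for the given LACP modes,
    per the combinations listed above."""
    modes = {north.lower(), south.lower()}
    if modes == {"on"}:
        return True           # static "on" on both ends
    if "on" in modes:
        return False          # "on" cannot negotiate with an LACP end
    return "active" in modes  # at least one end must be Active

print(lacp_forms("active", "active"))    # True
print(lacp_forms("active", "passive"))   # True
print(lacp_forms("passive", "passive"))  # False: nobody initiates
```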
Find below the configuration of Gigamon from the command line.

Inline-network configurations:

inline-network alias Bypass1
   pair net-a 1/1/x1 and net-b 1/1/x2
   physical-bypass disable
   traffic-path to-inline-tool
   exit
inline-network alias Bypass2
   pair net-a 1/1/x3 and net-b 1/1/x4
   physical-bypass disable
   traffic-path to-inline-tool
   exit
inline-network alias Bypass3
   pair net-a 1/1/x5 and net-b 1/1/x6
   physical-bypass disable
   traffic-path to-inline-tool
   exit
inline-network alias Bypass4
   pair net-a 1/1/x7 and net-b 1/1/x8
   physical-bypass disable
   traffic-path to-inline-tool
   exit

Inline-network-group configuration:

inline-network-group alias Bypassgroup
   network-list Bypass1,Bypass2,Bypass3,Bypass4
   exit

Inline-tool configurations:

inline-tool alias BIGIP1
   pair tool-a 1/1/x9 and tool-b 1/1/x10
   enable
   shared true
   exit
inline-tool alias BIGIP2
   pair tool-a 1/1/x11 and tool-b 1/1/x12
   enable
   shared true
   exit
inline-tool alias BIGIP3
   pair tool-a 1/1/g1 and tool-b 1/1/g2
   enable
   shared true
   exit
inline-tool alias BIGIP4
   pair tool-a 1/1/g3 and tool-b 1/1/g4
   enable
   shared true
   exit

Inline-tool-group configuration:

inline-tool-group alias BIGIPgroup
   tool-list BIGIP1,BIGIP2,BIGIP3,BIGIP4
   enable
   exit

Traffic map connection configuration:

map-passall alias BIGIP_MAP
   roles replace admin to owner_roles
   to BIGIPgroup
   from Bypassgroup

Note: Gigamon configuration with an Inline Network Group and Inline Tool Group requires enabling Inline Tool Sharing mode, which inserts an additional tag on the tool side. As the BIG-IP supports single tagging, this configuration works only for untagged packets.

BIG-IP Configuration

In this article, the BIG-IP is configured in L2 mode with Virtual Wire, and trunks are configured for individual interfaces. For more details on group configuration of trunks and other configurations, refer to https://devcentral.f5.com/s/articles/L2-Deployment-of-BIG-IP-with-Gigamon.
Configuration of trunks for individual interfaces in LACP passthrough mode:

tmsh create net trunk Left_Trunk_1 interfaces add { 1.1 } qinq-ethertype 0x8100 link-select-policy auto
tmsh create net trunk Left_Trunk_2 interfaces add { 1.3 } qinq-ethertype 0x8100 link-select-policy auto
tmsh create net trunk Left_Trunk_3 interfaces add { 2.1 } qinq-ethertype 0x8100 link-select-policy auto
tmsh create net trunk Left_Trunk_4 interfaces add { 2.3 } qinq-ethertype 0x8100 link-select-policy auto
tmsh create net trunk Right_Trunk_1 interfaces add { 1.2 } qinq-ethertype 0x8100 link-select-policy auto
tmsh create net trunk Right_Trunk_2 interfaces add { 1.4 } qinq-ethertype 0x8100 link-select-policy auto
tmsh create net trunk Right_Trunk_3 interfaces add { 2.2 } qinq-ethertype 0x8100 link-select-policy auto
tmsh create net trunk Right_Trunk_4 interfaces add { 2.4 } qinq-ethertype 0x8100 link-select-policy auto

Figure 4 - Trunk configuration in GUI
Figure 5 - Configuration of Virtual Wire

Enable the multicast bridging sys db variable as below for LACP passthrough mode:

modify sys db l2.virtualwire.multicast.bridging value enable

Note: Make sure the sys db variable remains enabled after reboot and upgrade. For LACP mode, the multicast bridging sys db variable should be disabled.

Scenarios

As per Figures 2 and 3, the setup is completely up and functional. As LACP passthrough mode is configured on the BIG-IP, LACP frames pass through the BIG-IP, and LACP is established between the North and South Switches. ICMP traffic is used to represent network traffic from the north switches to the south switches.

Scenario 1: Traffic flow through BIG-IP with North and South Switches configured in LACP active mode

The configurations above show that all four switches are configured in LACP active mode.

Figure 6 - MLAG and LAG status after deployment of BIG-IP and Gigamon with Switches configured in LACP ACTIVE mode

Figure 6 shows that port-channels 120 and 121 are active at both the North Switches and the South Switches.
The configuration above shows MLAG configured at the North Switches and LAG configured at the South Switches.

Figure 7 - ICMP traffic flow from client to server through BIG-IP

Figure 7 shows ICMP is reachable from client to server through the BIG-IP. This verifies test case 1: LACP is established between the switches and traffic passes through the BIG-IP successfully.

Scenario 2: Active BIG-IP link goes down with link state propagation disabled in BIG-IP

Figure 5 shows Propagate Virtual Wire Link Status disabled in the BIG-IP. Figure 7 shows that interface 1.1 of the BIG-IP is the active incoming interface and interface 1.2 is the active outgoing interface.

Disabling BIG-IP interface 1.1 will take the active link down, as below.

Figure 8 - BIG-IP interface 1.1 disabled
Figure 9 - Trunk state after BIG-IP interface 1.1 disabled

Figure 9 shows only Left_Trunk_1 is down, which has interface 1.1 configured. As link state propagation is disabled in the Virtual Wire configuration, interface 1.2 and Right_Trunk_1 are still active.

Figure 10 - MLAG and LAG status with interface 1.1 down and Link State Propagation disabled

Figure 10 shows that port-channels 120 and 121 are active at both the North Switches and the South Switches. The switches are unaware of the link failure; it is handled by the Gigamon configuration. As Gigamon is configured with Inline Network Groups and Inline Tool Groups, bypass will be enabled only after all the active Inline Tools go down.

Figure 11 - One of the Inline Tools goes down after link failure

Figure 11 shows the Inline Tool connected to interface 1.1 of the BIG-IP going down. Low-bandwidth traffic is still allowed through the BIG-IP because bypass is not enabled; Gigamon handles the rate-limiting process.

Note: With one-to-one mapping in Gigamon instead of groups, bypass can be enabled specific to a link failure, which removes the need for rate limiting. This configuration and its scenarios will be covered in upcoming articles.
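The group behavior described above, where bypass engages only when every tool pair in the group is down, can be sketched as:

```python
def bypass_enabled(tool_states):
    """With an inline-tool group, traffic is bypassed only when every
    tool pair in the group is down (True means the tool pair is up)."""
    return not any(tool_states.values())

# Scenario 2: one tool pair down (the one behind interface 1.1),
# three still up -> the group stays up, no bypass.
tools = {"BIGIP1": False, "BIGIP2": True, "BIGIP3": True, "BIGIP4": True}
print(bypass_enabled(tools))                      # False: group still up
print(bypass_enabled({t: False for t in tools}))  # True: all pairs down
```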
Figure 12 - ICMP traffic flow from client to server through BIG-IP

Figure 12 shows ICMP traffic flowing through the BIG-IP; VirtualWire2 is now active, with interface 1.3 of the BIG-IP as the active incoming interface and interface 1.4 as the active outgoing interface.

Scenario 3: Active BIG-IP link goes down with link state propagation enabled in BIG-IP

Figure 13 - Virtual Wire configuration with Link State Propagation enabled

Figure 13 shows Propagate Virtual Wire Link Status enabled. As in Scenario 2, when the active link goes down, the other interfaces that are part of the Virtual Wire also go down. In this case, when interface 1.1 of the BIG-IP goes down, interface 1.2 automatically goes down as well, as both are part of the same Virtual Wire.

Figure 14 - BIG-IP interface 1.1 disabled
Figure 15 - Trunk state after BIG-IP interface 1.1 disabled

Figure 15 shows Right_Trunk_1 going down automatically, as 1.2 is the only interface in that trunk. As Gigamon handles all link-failure actions, there is no major difference with respect to the switches and Gigamon. All other observations are similar to Scenario 2, so there is no major difference in behavior with respect to Link State Propagation in this deployment.

Scenario 4: BIG-IP goes down and bypass enabled in Gigamon

Figure 16 - All the BIG-IP interfaces disabled
Figure 17 - Inline tool status after BIG-IP goes down

Figure 17 shows that all the Inline Tool pairs go down once the BIG-IP is down.

Figure 18 - Bypass enabled in Gigamon

Figure 18 shows bypass enabled in Gigamon, ensuring there is no network failure. ICMP traffic still flows between the Ubuntu client and Ubuntu server, as below.

Figure 19 - ICMP traffic flow from client to server bypassing BIG-IP

Conclusion

This article covers BIG-IP L2 Virtual Wire Passthrough deployment with Gigamon, with Gigamon configured using an Inline Network Group and an Inline Tool Group.
Observations of this deployment are as below:

Group configuration in Gigamon requires enabling Inline Tool Sharing mode, which inserts an additional tag. Because BIG-IP supports L2 mode with single tagging, this configuration works only for untagged packets.

Group configuration in Gigamon enables bypass only if all the active Inline Tool pairs go down. If any one of the Inline Tool pairs goes down, low-bandwidth traffic is still allowed to pass through BIG-IP (the tool).

If any Inline Tool link goes down, Gigamon decides whether to bypass or rate limit. The switches remain unaware of the changes.

One-to-one configuration of Gigamon can be used instead of group configuration to remove the rate-limit process. This type of configuration and testing will be covered in upcoming articles.

If BIG-IP goes down, Gigamon enables bypass and ensures there is no packet drop.

BIG-IP L2 Virtual Wire LACP Mode Deployment with Gigamon Network Packet Broker
Introduction

This article is part of a series on deploying BIG-IPs with bypass switches and network packet brokers. These devices allow for the transparent integration of network security tools with little to no network redesign and configuration change. For more information about bypass switch devices refer to https://en.wikipedia.org/wiki/Bypass_switch; for network packet brokers, refer to https://www.ixiacom.com/company/blog/network-packet-brokers-abcs-network-visibility and https://www.gigamon.com/campaigns/next-generation-network-packet-broker.html. The article series introduces network designs to forward traffic to the inline tools at layer 2 (L2).

This article covers the design and implementation of the Gigamon Bypass Switch / Network Packet Broker in conjunction with the BIG-IP i5800 appliance and Virtual Wire (vWire) in LACP mode. It covers the LACP mode deployment mentioned in the article https://devcentral.f5.com/s/articles/L2-Deployment-of-BIG-IP-with-Gigamon.

Network Topology

The diagram below is a representation of the actual lab network, showing the deployment of BIG-IP with Gigamon.

Figure 1 - Topology with MLAG and LAG before deployment of Gigamon and BIG-IP

Figure 2 - Topology with MLAG and LAG after deployment of Gigamon and BIG-IP

Figure 3 - Connection between Gigamon and BIG-IP

Hardware Specification

Hardware used in this article:

BIG-IP i5800
GigaVUE-HC1
Arista DCS-7010T-48 (all four switches)

Note: All the interfaces/ports are 1G speed.

Software Specification

Software used in this article:

BIG-IP 16.1.0
GigaVUE-OS 5.7.01
Arista 4.21.3F (North Switches)
Arista 4.19.2F (South Switches)

Switch Configuration

The switch configuration is the same as in the previous article https://devcentral.f5.com/s/articles/BIG-IP-L2-V-Wire-LACP-Passthorugh-Deployment-with-Gigamon

Note: In the above-mentioned configuration, the switch ports are configured as access ports to allow VLAN 120, so BIG-IP will receive untagged frames.
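The access-port note above can be sketched in Arista EOS as below. The interface, port-channel, and VLAN numbers are the lab's; adjust them to your topology.

```
interface Ethernet36
   ! Access port: frames leave the switch untagged on VLAN 120
   switchport mode access
   switchport access vlan 120
   ! Bundle the port into the LACP port-channel used in this lab
   channel-group 120 mode active
```

With `switchport mode access`, the switch strips the VLAN tag toward BIG-IP, which is why the group-mapping deployment in the earlier article only worked for untagged packets.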
To receive tagged frames instead, configure the switch ports as trunk ports. In this article, the scenarios below are tested with tagged frames.

Gigamon Configuration

The Gigamon configuration is the same as in the previous article https://devcentral.f5.com/s/articles/BIG-IP-L2-vWire-LACP-Passthrough-Deployment-with-1-to-1-mapping-of-Gigamon-NPS

BIG-IP Configuration

The BIG-IP configuration is exactly the same as the configuration mentioned in https://devcentral.f5.com/s/articles/L2-Deployment-of-BIG-IP-with-Gigamon. This article is specific to LACP mode; find below the trunk configuration with LACP mode enabled.

Figure 4 - Trunk configuration with LACP enabled

Note: For LACP mode, Propagate Virtual Wire Link Status should be disabled in the vWire configuration.

Scenarios

As per Figures 2 and 3, the setup is completely up and functional. Because LACP mode is configured in BIG-IP, BIG-IP participates in LACP, and LACP is established between the switches and BIG-IP. ICMP traffic is used to represent network traffic from the north switches to the south switches.

Scenario 1: Traffic flow through BIG-IP with North and South Switches configured in LACP active mode

The above configurations show that all four switches are configured in LACP active mode.

Figure 5 - MLAG and LAG status after deployment of BIG-IP and Gigamon with Switches configured in LACP ACTIVE mode

Figure 5 shows that port-channels 120 and 121 are active at both the North Switches and South Switches. The above configuration shows MLAG configured at the North Switches and LAG configured at the South Switches.

Figure 6 - ICMP traffic flow from client to server through BIG-IP

Figure 6 shows that ICMP is reachable from client to server through BIG-IP. Here LACP is established between the switches and BIG-IP, whereas in passthrough mode LACP is established between the switches themselves.

Figure 7 - Actor ID of BIG-IP

Figure 8 - LACP neighbor details in switches

Figures 7 and 8 show that LACP is established between the switches and BIG-IP.
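The trunk settings shown in Figure 4 can be sketched in tmsh as below. This is an illustrative sketch, not the article's exact listing: the trunk names and member interfaces are assumed from this lab's wiring.

```
# Create the vWire member trunks with LACP enabled in active mode
tmsh create net trunk Left_Trunk1 interfaces add { 1.1 2.3 } lacp enabled lacp-mode active
tmsh create net trunk Right_Trunk1 interfaces add { 1.2 2.4 } lacp enabled lacp-mode active

# Review the resulting trunk configuration
tmsh list net trunk
```

With `lacp enabled`, BIG-IP terminates LACP on each trunk itself rather than passing the LACPDUs through to the far-side switch.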
Scenario 2: Traffic flow through BIG-IP with North and South Switches configured in LACP passive mode

North Switch 1:
interface Ethernet36
channel-group 120 mode passive
interface Ethernet37
channel-group 121 mode passive

North Switch 2:
interface Ethernet37
channel-group 120 mode passive
interface Ethernet36
channel-group 121 mode passive

South Switch 1:
interface Ethernet36
channel-group 120 mode passive
interface Ethernet37
channel-group 120 mode passive

South Switch 2:
interface Ethernet36
channel-group 121 mode passive
interface Ethernet37
channel-group 121 mode passive

Figure 9 - MLAG and LAG status after deployment of BIG-IP and Gigamon with Switches configured in LACP Passive mode

Figure 9 shows that port-channels 120 and 121 are active at both the North Switches and South Switches. The above configuration shows MLAG configured at the North Switches and LAG configured at the South Switches.

Figure 10 - ICMP traffic flow from client to server through BIG-IP

Figure 10 shows that ICMP is reachable from client to server through BIG-IP. BIG-IP is configured with LACP in active mode and the switches with LACP in passive mode, so LACP is established successfully. This would not occur with BIG-IP in passthrough mode: both the North and South switches would be in LACP passive mode and LACP would not get established.

Scenario 3: Active BIG-IP link goes down in BIG-IP

Figure 10 shows that interface 1.1 of BIG-IP is the active incoming interface and interface 1.2 is the active outgoing interface. Disabling BIG-IP interface 1.1 brings the active link down, as below.

Figure 11 - BIG-IP interface 1.1 disabled

Figure 12 - Trunk state after BIG-IP interface 1.1 disabled

Figure 12 shows that all the trunks are up even though interface 1.1 is down. As per the configuration, Left_Trunk1 has two interfaces connected to it, 1.1 and 2.3, and one of them is still up, so the Left_Trunk1 status is active.
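The port-channel and LACP state captured in Figures 9 through 12 can be checked from the Arista CLI. For example:

```
! Verify port-channel 120/121 membership and state
show port-channel summary
! Verify LACP partner details (BIG-IP or the peer switch)
show lacp neighbor
! On the North switches, verify MLAG peering state
show mlag
```

In the passive-mode scenario, `show lacp neighbor` should list BIG-IP's actor ID as the partner, confirming that BIG-IP, not the far-side switch, is the LACP peer.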
In the previous article https://devcentral.f5.com/s/articles/BIG-IP-L2-V-Wire-LACP-Passthorugh-Deployment-with-Gigamon, individual trunks were configured and the status of Left_Trunk1 was down.

Figure 13 - MLAG and LAG status with interface 1.1 down

Figure 13 shows that port-channels 120 and 121 are active at both the North Switches and South Switches. This shows that the switches are not aware of the link failure; it is handled by the Gigamon configuration.

Figure 14 - One of the Inline Tools goes down after link failure

Figure 14 shows that the Inline Tool connected to interface 1.1 of BIG-IP goes down.

Figure 15 - Bypass enabled for specific flow

Figure 15 shows that the tool failure enabled bypass for Inline-network pair Bypass1 (interfaces 1.1 and 1.2). If traffic hits interface 1.1, Gigamon sends it directly to interface 1.2, so the traffic bypasses BIG-IP.

Figure 16 - ICMP traffic flow from client to server bypassing BIG-IP

Figure 16 shows that the client reaches the server with no traffic passing through BIG-IP, which means the traffic bypassed BIG-IP.

Figure 17 - Port Statistics of Gigamon

Figure 17 shows that traffic reaches interface 1.1 of Gigamon and is forwarded to interface 1.2. The traffic is not routed to the tool, as that specific Inline-Network has bypass enabled. In the same scenario, if traffic hits any interface of Gigamon other than 1.1, it is still routed to BIG-IP. Note that only one Inline-network pair enables bypass; the remaining three Inline-network pairs are still in the normal forwarding state.

Scenario 4: BIG-IP goes down and bypass enabled in Gigamon

Figure 18 - All the BIG-IP interfaces disabled

Figure 19 - Inline tool status after BIG-IP goes down

Figure 19 shows that all the Inline Tool pairs go down once BIG-IP is down.

Figure 20 - Bypass enabled in Gigamon

Figure 20 shows bypass enabled in Gigamon, ensuring there is no network failure.
ICMP traffic still flows between the Ubuntu client and Ubuntu server, as below.

Figure 21 - ICMP traffic flow from client to server bypassing BIG-IP

Conclusion

This article covered BIG-IP L2 Virtual Wire LACP mode deployment with Gigamon. Gigamon was configured with one-to-one mapping between Inline-network and Inline-tool; no Inline-network group or Inline-tool group was configured.

Observations of this deployment are as below:

As one-to-one mapping is configured between Inline-network and Inline-tool, no additional tag is inserted by Gigamon. Because there is no additional tag in the frames reaching BIG-IP, this configuration works for both tagged and untagged packets.

If any Inline Tool link goes down, Gigamon handles the bypass. The switches remain unaware of the changes.

If any of the Inline Tool pairs goes down, the specific Inline-network enables bypass. If traffic hits a bypass-enabled Inline-network, it bypasses BIG-IP; if traffic hits an Inline-Network in the normal forwarding state, it is forwarded to BIG-IP.

If BIG-IP goes down, Gigamon enables bypass and ensures there is no packet drop.

Propagate Virtual Wire Link Status should be disabled for LACP mode in the Virtual Wire configuration.

BIG-IP L2 Virtual Wire LACP Passthrough Deployment with Gigamon Network Packet Broker - II
Introduction

This article is part of a series on deploying BIG-IPs with bypass switches and network packet brokers. These devices allow for the transparent integration of network security tools with little to no network redesign and configuration change. For more information about bypass switch devices refer to https://en.wikipedia.org/wiki/Bypass_switch; for network packet brokers, refer to https://www.ixiacom.com/company/blog/network-packet-brokers-abcs-network-visibility and https://www.gigamon.com/campaigns/next-generation-network-packet-broker.html. The article series introduces network designs to forward traffic to the inline tools at layer 2 (L2).

This article covers the design and implementation of one-to-one mapping of the Gigamon Bypass Switch / Network Packet Broker in conjunction with the BIG-IP i5800 appliance and Virtual Wire (vWire) in LACP passthrough mode. The previous article https://devcentral.f5.com/s/articles/BIG-IP-L2-V-Wire-LACP-Passthorugh-Deployment-with-Gigamon focused on group mapping of Gigamon, which is ideal only for untagged packets. One-to-one mapping of Gigamon does not insert an additional tag, so it is suitable for both tagged and untagged packets.

Network Topology

The diagram below is a representation of the actual lab network, showing the deployment of BIG-IP with Gigamon.
Figure 1 - Topology with MLAG and LAG before deployment of Gigamon and BIG-IP

Figure 2 - Topology with MLAG and LAG after deployment of Gigamon and BIG-IP

Figure 3 - Connection between Gigamon and BIG-IP

Hardware Specification

Hardware used in this article:

BIG-IP i5800
GigaVUE-HC1
Arista DCS-7010T-48 (all four switches)

Note: All the interfaces/ports are 1G speed.

Software Specification

Software used in this article:

BIG-IP 16.1.0
GigaVUE-OS 5.7.01
Arista 4.21.3F (North Switches)
Arista 4.19.2F (South Switches)

Switch Configuration

The switch configuration is the same as in the previous article https://devcentral.f5.com/s/articles/BIG-IP-L2-V-Wire-LACP-Passthorugh-Deployment-with-Gigamon

Gigamon Configuration

In this article, Gigamon is configured without Inline Network Groups and Inline Tool Groups. For GUI and port configurations of Gigamon refer to https://devcentral.f5.com/s/articles/L2-Deployment-of-BIG-IP-with-Gigamon. Find below the configuration of Gigamon in the command line.

Inline-network configurations:

inline-network alias Bypass1
pair net-a 1/1/x1 and net-b 1/1/x2
physical-bypass disable
traffic-path to-inline-tool
exit
inline-network alias Bypass2
pair net-a 1/1/x3 and net-b 1/1/x4
physical-bypass disable
traffic-path to-inline-tool
exit
inline-network alias Bypass3
pair net-a 1/1/x5 and net-b 1/1/x6
physical-bypass disable
traffic-path to-inline-tool
exit
inline-network alias Bypass4
pair net-a 1/1/x7 and net-b 1/1/x8
physical-bypass disable
traffic-path to-inline-tool
exit

Inline-tool configurations:

inline-tool alias BIGIP1
pair tool-a 1/1/x9 and tool-b 1/1/x10
enable
exit
inline-tool alias BIGIP2
pair tool-a 1/1/x11 and tool-b 1/1/x12
enable
exit
inline-tool alias BIGIP3
pair tool-a 1/1/g1 and tool-b 1/1/g2
enable
exit
inline-tool alias BIGIP4
pair tool-a 1/1/g3 and tool-b 1/1/g4
enable
exit

Traffic map connection configuration:

map-passall alias lacp1bypass
roles replace admin to owner_roles
to BIGIP1
from Bypass1
exit
map-passall alias lacp4bypass
roles
replace admin to owner_roles
to BIGIP4
from Bypass4
exit
map-passall alias lacpbypass2
roles replace admin to owner_roles
to BIGIP2
from Bypass2
exit
map-passall alias lacp3bypass
roles replace admin to owner_roles
to BIGIP3
from Bypass3
exit

In this article, traffic flow maps are configured between individual Inline-network pairs and Inline-tool pairs, so traffic from a specific Inline-network is forwarded to a specific Inline-tool. If any Inline-tool goes down, the related Inline-network enables bypass for that specific flow.

Figure 4 - Example GUI configuration of Traffic Flow Map

BIG-IP Configuration

The BIG-IP configuration is exactly the same as the configuration mentioned in https://devcentral.f5.com/s/articles/L2-Deployment-of-BIG-IP-with-Gigamon

Scenarios

As per Figures 2 and 3, the setup is completely up and functional. Because LACP passthrough mode is configured in BIG-IP, LACP frames pass through BIG-IP and LACP is established between the North and South Switches. ICMP traffic is used to represent network traffic from the north switches to the south switches.

Scenario 1: Traffic flow through BIG-IP with North and South Switches configured in LACP active mode

The above configurations show that all four switches are configured in LACP active mode.

Figure 5 - MLAG and LAG status after deployment of BIG-IP and Gigamon with Switches configured in LACP ACTIVE mode

Figure 5 shows that port-channels 120 and 121 are active at both the North Switches and South Switches. The above configuration shows MLAG configured at the North Switches and LAG configured at the South Switches.

Figure 6 - ICMP traffic flow from client to server through BIG-IP

Figure 6 shows that ICMP is reachable from client to server through BIG-IP. This verifies test case 1: LACP is established between the switches and traffic passes through BIG-IP successfully.

Scenario 2: Active BIG-IP link goes down in BIG-IP

Figure 6 shows that interface 1.1 of BIG-IP is the active incoming interface and interface 1.2 is the active outgoing interface.
Disabling BIG-IP interface 1.1 brings the active link down, as below.

Figure 7 - BIG-IP interface 1.1 disabled

Figure 8 - Trunk state after BIG-IP interface 1.1 disabled

Figure 8 shows that all the trunks are up even though interface 1.1 is down. As per the configuration, Left_Trunk1 has two interfaces connected to it, 1.1 and 2.3, and one of them is still up, so the Left_Trunk1 status is active. In the previous article https://devcentral.f5.com/s/articles/BIG-IP-L2-V-Wire-LACP-Passthorugh-Deployment-with-Gigamon, individual trunks were configured and the status of Left_Trunk1 was down.

Figure 9 - MLAG and LAG status with interface 1.1 down

Figure 9 shows that port-channels 120 and 121 are active at both the North Switches and South Switches. This shows that the switches are not aware of the link failure; it is handled by the Gigamon configuration.

Figure 10 - One of the Inline Tools goes down after link failure

Figure 10 shows that the Inline Tool connected to interface 1.1 of BIG-IP goes down.

Figure 11 - Bypass enabled for specific flow

Figure 11 shows that the tool failure enabled bypass for Inline-network pair Bypass1 (interfaces 1.1 and 1.2). If traffic hits interface 1.1, Gigamon sends it directly to interface 1.2, so the traffic bypasses BIG-IP.

Figure 12 - ICMP traffic flow from client to server bypassing BIG-IP

Figure 12 shows that the client reaches the server with no traffic passing through BIG-IP, which means the traffic bypassed BIG-IP.

Figure 13 - Port Statistics of Gigamon

Figure 13 shows that traffic reaches interface 1.1 of Gigamon and is forwarded to interface 1.2. The traffic is not routed to the tool, as that specific Inline-Network has bypass enabled. In the same scenario, if traffic hits any interface of Gigamon other than 1.1, it is still routed to BIG-IP. Note that only one Inline-network pair enables bypass; the remaining three Inline-network pairs are still in the normal forwarding state.
Scenario 3: BIG-IP goes down and bypass enabled in Gigamon

Figure 14 - All the BIG-IP interfaces disabled

Figure 15 - Inline tool status after BIG-IP goes down

Figure 15 shows that all the Inline Tool pairs go down once BIG-IP is down.

Figure 16 - Bypass enabled in Gigamon

Figure 16 shows bypass enabled in Gigamon, ensuring there is no network failure. ICMP traffic still flows between the Ubuntu client and Ubuntu server, as below.

Figure 17 - ICMP traffic flow from client to server bypassing BIG-IP

Conclusion

This article covered BIG-IP L2 Virtual Wire Passthrough deployment with Gigamon. Gigamon was configured with one-to-one mapping between Inline-network and Inline-tool; no Inline-network group or Inline-tool group was configured.

Observations of this deployment are as below:

As one-to-one mapping is configured between Inline-network and Inline-tool, no additional tag is inserted by Gigamon. Because there is no additional tag in the frames reaching BIG-IP, this configuration works for both tagged and untagged packets.

If any Inline Tool link goes down, Gigamon handles the bypass. The switches remain unaware of the changes.

If any of the Inline Tool pairs goes down, the specific Inline-network enables bypass. If traffic hits a bypass-enabled Inline-network, it bypasses BIG-IP; if traffic hits an Inline-Network in the normal forwarding state, it is forwarded to BIG-IP.

If BIG-IP goes down, Gigamon enables bypass and ensures there is no packet drop.