BIG-IP L2 Virtual Wire LACP Passthrough Deployment with Gigamon Network Packet Broker - I

Introduction

This article is part of a series on deploying BIG-IPs with bypass switches and network packet brokers. These devices allow for the transparent integration of network security tools with little to no network redesign or configuration change. For more information about bypass switches, refer to https://en.wikipedia.org/wiki/Bypass_switch; for network packet brokers, refer to https://www.ixiacom.com/company/blog/network-packet-brokers-abcs-network-visibility and https://www.gigamon.com/campaigns/next-generation-network-packet-broker.html. The article series introduces network designs that forward traffic to the inline tools at layer 2 (L2).

This article covers the design and implementation of the Gigamon Bypass Switch / Network Packet Broker in conjunction with the BIG-IP i5800 appliance and Virtual Wire (vWire) with LACP Passthrough Mode. It covers one of the variations mentioned in the article https://devcentral.f5.com/s/articles/L2-Deployment-of-BIG-IP-with-Gigamon.

Network Topology

The diagrams below represent the actual lab network and show the deployment of the BIG-IP with Gigamon.

Figure 1 - Topology with MLAG and LAG before deployment of Gigamon and BIG-IP

Figure 2 - Topology with MLAG and LAG after deployment of Gigamon and BIG-IP

 

Figure 3 - Connection between Gigamon and BIG-IP

 

Hardware Specification

The hardware used in this article is as follows:

  • BIG-IP i5800
  • GigaVUE-HC1
  • Arista DCS-7010T-48 (all four switches)

Note: All interfaces/ports run at 1G speed.

Software Specification

The software used in this article is as follows:

  • BIG-IP 16.1.0
  • GigaVUE-OS 5.7.01
  • Arista 4.21.3F (North Switches)
  • Arista 4.19.2F (South Switches)

Switch Configuration

LAG, or link aggregation, is a way of bonding multiple physical links into a combined logical link. MLAG, or multi-chassis link aggregation, extends this capability by allowing a downstream switch or host to connect to two switches configured as an MLAG domain. This provides redundancy by giving the downstream switch or host two uplink paths, as well as full bandwidth utilization, since the MLAG domain appears as a single switch to Spanning Tree (STP).

Figure 1 shows MLAG configured on the North Switches and LAG configured on the South Switches. This article focuses on an LACP deployment for untagged packets. For more details on MLAG configuration, refer to https://eos.arista.com/mlag-basic-configuration/#Verify_MLAG_operation

Step Summary

Step 1: Configuration of MLAG peering between both switches

Step 2: Verify MLAG Peering

Step 3: Configuration of MLAG Port-Channels

Step 4: Configuration of LAG Port-Channels

Step 5: Verify Port-Channel Status

 

 

Step 1: Configuration of MLAG peering between both switches

The MLAG configurations on North Switch 1 and North Switch 2 are as follows:

North Switch 1:

  • Configure Port-Channel

interface Port-Channel10
  switchport mode trunk
  switchport trunk group m1peer

  • Configure VLAN

interface Vlan4094
  ip address 172.16.0.1/30

  • Configure MLAG

mlag configuration
  domain-id mlag1
  heartbeat-interval 2500
  local-interface Vlan4094
  peer-address 172.16.0.2
  peer-link Port-Channel10
  reload-delay 150

North Switch 2:

  • Configure Port-Channel

interface Port-Channel10
  switchport mode trunk
  switchport trunk group m1peer

  • Configure VLAN

interface Vlan4094
  ip address 172.16.0.2/30

  • Configure MLAG

mlag configuration
  domain-id mlag1
  heartbeat-interval 2500
  local-interface Vlan4094
  peer-address 172.16.0.1
  peer-link Port-Channel10
  reload-delay 150

 

Step 2: Verify MLAG Peering

North Switch 1:

North-1#show mlag
MLAG Configuration:
domain-id             :              mlag1
local-interface       :           Vlan4094
peer-address          :         172.16.0.2
peer-link             :     Port-Channel10

MLAG Status:
state                 :             Active
negotiation status    :          Connected
peer-link status      :                 Up
local-int status      :                 Up
system-id             :  2a:99:3a:23:94:c7
dual-primary detection :           Disabled

MLAG Ports:
Disabled              :                  0
Configured            :                  0
Inactive              :                  6
Active-partial        :                  0
Active-full           :                  2

North Switch 2:

North-2#show mlag
MLAG Configuration:
domain-id             :              mlag1
local-interface       :           Vlan4094
peer-address          :         172.16.0.1
peer-link             :     Port-Channel10

MLAG Status:
state                 :             Active
negotiation status    :          Connected
peer-link status      :                 Up
local-int status      :                 Up
system-id             :  2a:99:3a:23:94:c7
dual-primary detection :           Disabled

MLAG Ports:
Disabled              :                  0
Configured            :                  0
Inactive              :                  6
Active-partial        :                  0
Active-full           :                  2

 

Step 3: Configuration of MLAG Port-Channels

Figure 1 has two MLAG port-channels at the North Switches and two LAG port-channels at the South Switches. One port from each South Switch (South Switch 1 and South Switch 2) is connected to North Switch 1, and the other port is connected to North Switch 2.

The MLAG port-channel configurations are as follows:

North Switch 1:

interface Port-Channel120
  switchport access vlan 120
  mlag 120
interface Ethernet36
  channel-group 120 mode active
interface Port-Channel121
  switchport access vlan 120
  mlag 121
interface Ethernet37
  channel-group 121 mode active

North Switch 2:

interface Port-Channel120
  switchport access vlan 120
  mlag 120
interface Ethernet37
  channel-group 120 mode active
interface Port-Channel121
  switchport access vlan 120
  mlag 121
interface Ethernet36
  channel-group 121 mode active

 

Step 4: Configuration of LAG Port-Channels

The two interfaces on each South Switch are configured as a regular port-channel using LACP.

South Switch 1:

interface Port-Channel120
  switchport access vlan 120
interface Ethernet36
  channel-group 120 mode active
interface Ethernet37
  channel-group 120 mode active

South Switch 2:

interface Port-Channel121
  switchport access vlan 121
interface Ethernet36
  channel-group 121 mode active
interface Ethernet37
  channel-group 121 mode active

The LACP modes are as follows:

  1. On
  2. Active
  3. Passive

LACP connection establishment will occur only for the following configurations:

  • Active in both the North and South Switches
  • Active in the North or South Switch and Passive in the other switch
  • On in both the North and South Switches

Note: In this case, all the interfaces of both the North and South Switches are configured with LACP mode Active.
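
If the port-channels do not come up, the LACP negotiation can be checked from either side. A quick verification sketch using standard Arista EOS show commands (output omitted; the port-channel number follows the lab configuration above):

North-1#show lacp neighbor
South-1#show port-channel 120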

 

Step 5: Verify Port-Channel Status

North Switch 1:

North-1#show mlag interfaces detail
                                              local/remote
 mlag        state     local   remote    oper   config      last change  changes
------ ------------- ------- -------- ------- -------- ----------------- -------
  120   active-full    Po120    Po120   up/up   ena/ena      0:00:00 ago      270
  121   active-full    Po121    Po121   up/up   ena/ena      0:00:00 ago      238

North Switch 2:

North-2#show mlag interfaces detail
                                              local/remote
 mlag        state     local   remote    oper   config      last change  changes
------ ------------- ------- -------- ------- -------- ----------------- -------
  120   active-full    Po120    Po120   up/up   ena/ena      0:01:34 ago      269
  121   active-full    Po121    Po121   up/up   ena/ena      0:01:33 ago      235

South Switch 1:

South-1#show port-channel 120
Port Channel Port-Channel120:
 Active Ports: Ethernet36 Ethernet37

South Switch 2:

South-2#show port-channel 121
Port Channel Port-Channel121:
 Active Ports: Ethernet36 Ethernet37

Gigamon Configuration

In this article, Gigamon is configured using Inline Network Groups and Inline Tool Groups. For the GUI and port configurations of Gigamon, refer to https://devcentral.f5.com/s/articles/L2-Deployment-of-BIG-IP-with-Gigamon. The command-line configuration of Gigamon is shown below.

 

Inline-network configurations:

inline-network alias Bypass1
 pair net-a 1/1/x1 and net-b 1/1/x2
 physical-bypass disable
 traffic-path to-inline-tool
 exit
inline-network alias Bypass2
 pair net-a 1/1/x3 and net-b 1/1/x4
 physical-bypass disable
 traffic-path to-inline-tool
 exit
inline-network alias Bypass3
 pair net-a 1/1/x5 and net-b 1/1/x6
 physical-bypass disable
 traffic-path to-inline-tool
 exit
inline-network alias Bypass4
 pair net-a 1/1/x7 and net-b 1/1/x8
 physical-bypass disable
 traffic-path to-inline-tool
 exit

Inline-network-group configuration:

inline-network-group alias Bypassgroup
 network-list Bypass1,Bypass2,Bypass3,Bypass4
 exit

Inline-tool configurations:

inline-tool alias BIGIP1
 pair tool-a 1/1/x9 and tool-b 1/1/x10
 enable
 shared true
 exit
inline-tool alias BIGIP2
 pair tool-a 1/1/x11 and tool-b 1/1/x12
 enable
 shared true
 exit
inline-tool alias BIGIP3
 pair tool-a 1/1/g1 and tool-b 1/1/g2
 enable
 shared true
 exit
inline-tool alias BIGIP4
 pair tool-a 1/1/g3 and tool-b 1/1/g4
 enable
 shared true
 exit

Inline-tool-group configuration:

inline-tool-group alias BIGIPgroup
 tool-list BIGIP1,BIGIP2,BIGIP3,BIGIP4
 enable
 exit

Traffic map connection configuration:

map-passall alias BIGIP_MAP
 roles replace admin to owner_roles
 to BIGIPgroup
 from Bypassgroup

Note: A Gigamon configuration with an Inline Network Group and an Inline Tool Group requires Inline Tool Sharing mode to be enabled, which inserts an additional tag on the tool side. As the BIG-IP supports single tagging, this configuration works only for untagged packets.
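
Before moving on to the BIG-IP, the inline objects and the traffic map can be reviewed from the GigaVUE-OS CLI. A minimal verification sketch, assuming the aliases defined above (exact output varies by GigaVUE-OS version):

show inline-network-group alias Bypassgroup
show inline-tool-group alias BIGIPgroup
show map alias BIGIP_MAP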

BIG-IP Configuration

In this article, the BIG-IP is configured in L2 mode with Virtual Wire, and trunks are configured for individual interfaces. For more details on the group configuration of trunks and other configurations, refer to https://devcentral.f5.com/s/articles/L2-Deployment-of-BIG-IP-with-Gigamon.

 

Configuration of trunks for individual interfaces in LACP Passthrough mode:

tmsh create net trunk Left_Trunk_1 interfaces add { 1.1 } qinq-ethertype 0x8100 link-select-policy auto
tmsh create net trunk Left_Trunk_2 interfaces add { 1.3 } qinq-ethertype 0x8100 link-select-policy auto
tmsh create net trunk Left_Trunk_3 interfaces add { 2.1 } qinq-ethertype 0x8100 link-select-policy auto
tmsh create net trunk Left_Trunk_4 interfaces add { 2.3 } qinq-ethertype 0x8100 link-select-policy auto
tmsh create net trunk Right_Trunk_1 interfaces add { 1.2 } qinq-ethertype 0x8100 link-select-policy auto
tmsh create net trunk Right_Trunk_2 interfaces add { 1.4 } qinq-ethertype 0x8100 link-select-policy auto
tmsh create net trunk Right_Trunk_3 interfaces add { 2.2 } qinq-ethertype 0x8100 link-select-policy auto
tmsh create net trunk Right_Trunk_4 interfaces add { 2.4 } qinq-ethertype 0x8100 link-select-policy auto

Figure 4 - Trunk configuration in GUI
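
Before configuring the Virtual Wire, the trunks created above can be verified from tmsh. A quick check (names follow the trunks created above; output omitted):

tmsh list net trunk Left_Trunk_1
tmsh show net trunk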

Figure 5 - Configuration of Virtual Wire

Enable the multicast bridging sys db variable as shown below for LACP Passthrough mode:

tmsh modify sys db l2.virtualwire.multicast.bridging value enable

Note: Make sure the sys db variable is still enabled after a reboot or an upgrade. For LACP mode (as opposed to LACP Passthrough mode), the multicast bridging sys db variable should be disabled.
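
To confirm that the variable has the expected value (for example, after a reboot or an upgrade), it can be read back with tmsh:

tmsh list sys db l2.virtualwire.multicast.bridging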

Scenarios

As per Figures 2 and 3, the setup is completely up and functional. As LACP Passthrough mode is configured on the BIG-IP, LACP frames pass through the BIG-IP, and LACP is established between the North and South Switches. ICMP traffic is used to represent network traffic from the north switches to the south switches.
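
In this lab, the ICMP traffic is simply a continuous ping from the Ubuntu client toward the Ubuntu server; the address below is a placeholder, not a value taken from the lab setup:

ping <server-ip>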

 

Scenario 1: Traffic flow through BIG-IP with North and South Switches configured in LACP active mode

The configurations above show that all four switches are configured in LACP active mode.

Figure 6 - MLAG and LAG status after deployment of BIG-IP and Gigamon with Switches configured in LACP ACTIVE mode

Figure 6 shows that port-channels 120 and 121 are active on both the North Switches and South Switches. The configuration above shows MLAG configured on the North Switches and LAG configured on the South Switches.

Figure 7 - ICMP traffic flow from client to server through BIG-IP

Figure 7 shows that ICMP is reachable from the client to the server through the BIG-IP. This verifies Scenario 1: LACP is established between the switches and traffic passes through the BIG-IP successfully.

 

Scenario 2: Active BIG-IP link goes down with link state propagation disabled in BIG-IP

Figure 5 shows Propagate Virtual Wire Link Status disabled on the BIG-IP. Figure 7 shows that interface 1.1 of the BIG-IP is the active incoming interface and interface 1.2 is the active outgoing interface. Disabling BIG-IP interface 1.1 takes the active link down, as shown below.

Figure 8 - BIG-IP interface 1.1 disabled
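
For reference, the interface can also be disabled from the command line instead of the GUI action shown in Figure 8; a one-line tmsh equivalent:

tmsh modify net interface 1.1 disabled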

Figure 9 - Trunk state after BIG-IP interface 1.1 disabled

Figure 9 shows that only Left_Trunk_1, which has interface 1.1 configured, is down. As link state propagation is disabled in the Virtual Wire configuration, interface 1.2 and Right_Trunk_1 are still active.

Figure 10 - MLAG and LAG status with interface 1.1 down and Link State Propagation disabled

Figure 10 shows that port-channels 120 and 121 are active on both the North Switches and South Switches. This shows that the switches are not aware of the link failure; it is handled by the Gigamon configuration. As Gigamon is configured with Inline Network Groups and Inline Tool Groups, bypass will be enabled only after all the active Inline Tools go down.

Figure 11 - One of Inline Tool goes down after link failure

Figure 11 shows that the Inline Tool connected to interface 1.1 of the BIG-IP goes down. Traffic is still allowed through the BIG-IP at reduced bandwidth, as bypass is not enabled and Gigamon handles the rate-limiting process.
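
The tool state can also be confirmed from the GigaVUE-OS CLI; for example, assuming the inline-tool alias defined earlier (output omitted):

show inline-tool alias BIGIP1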

Note: With a one-to-one mapping in Gigamon instead of groups, bypass can be enabled for the specific failed link, which removes the need for rate limiting. This configuration and its scenarios will be covered in upcoming articles.

Figure 12 - ICMP traffic flow from client to server through BIG-IP

Figure 12 shows that ICMP traffic flows through the BIG-IP and that VirtualWire2 is now active: interface 1.3 of the BIG-IP is the active incoming interface and interface 1.4 is the active outgoing interface.

 

Scenario 3: Active BIG-IP link goes down with link state propagation enabled in BIG-IP

Figure 13 - Virtual Wire configuration with Link State Propagation enabled

Figure 13 shows Propagate Virtual Wire Link Status enabled. As in Scenario 2, the active link is taken down, but this time the other interfaces that are part of the Virtual Wire also go down. In this case, when interface 1.1 of the BIG-IP goes down, interface 1.2 automatically goes down as well, as both are part of the same Virtual Wire.

Figure 14 - BIG-IP interface 1.1 disabled

Figure 15 - Trunk state after BIG-IP interface 1.1 disabled

Figure 15 shows that Right_Trunk_1 goes down automatically, as 1.2 is the only interface in that trunk. Since Gigamon handles all link-failure actions, there is no major difference with respect to the switches and Gigamon. All other observations are similar to Scenario 2, so there is no major difference in behavior with respect to Link State Propagation in this deployment.

 

Scenario 4: BIG-IP goes down and bypass enabled in Gigamon

Figure 16 - All the BIG-IP interfaces disabled

Figure 17 - Inline tool status after BIG-IP goes down

Figure 17 shows that all the Inline Tool pairs go down once the BIG-IP is down.

Figure 18 - Bypass enabled in Gigamon

Figure 18 shows bypass enabled in Gigamon, which ensures there is no network failure. ICMP traffic still flows between the Ubuntu client and the Ubuntu server, as shown below.

Figure 19 - ICMP traffic flow from client to server bypassing BIG-IP

 

Conclusion

This article covers a BIG-IP L2 Virtual Wire LACP Passthrough deployment with Gigamon, with Gigamon configured using an Inline Network Group and an Inline Tool Group. The observations from this deployment are as follows:

  1. Group configuration in Gigamon requires Inline Tool Sharing mode to be enabled, which inserts an additional tag.
  2. BIG-IP supports L2 mode with single tagging, so this configuration works only for untagged packets.
  3. Group configuration in Gigamon enables bypass only if all the active Inline Tool pairs go down.
  4. If any of the Inline Tool pairs goes down, traffic is still allowed to pass through the BIG-IP (tool) at reduced bandwidth.
  5. If any Inline Tool link goes down, Gigamon handles whether to bypass or rate limit; the switches remain unaware of the changes.
  6. A one-to-one configuration of Gigamon can be used instead of a group configuration to remove the rate-limiting process. This type of configuration and testing will be covered in upcoming articles.
  7. If the BIG-IP goes down, Gigamon enables bypass and ensures there is no packet drop.