L2 Deployment of BIG-IP with Gigamon
Introduction
This article is part of a series on deploying BIG-IPs with bypass switches and network packet brokers. These devices allow for the transparent integration of network security tools with little to no network redesign and configuration change. For more information about bypass switch devices refer to https://en.wikipedia.org/wiki/Bypass_switch; for network packet brokers, refer to https://www.ixiacom.com/company/blog/network-packet-brokers-abcs-network-visibility and https://www.gigamon.com/campaigns/next-generation-network-packet-broker.html. The article series introduces network designs to forward traffic to the inline tools at layer 2 (L2).
F5’s BIG-IP hardware appliances can be inserted in L2 networks. This can be achieved using either Virtual Wire (vWire) or by bridging two VLANs with a VLAN group.
This document covers the design and implementation of the Gigamon Bypass Switch/Network Packet Broker in conjunction with the BIG-IP i5800 appliance and Virtual Wire (vWire).
This document focuses on the Gigamon Bypass Switch / Network Packet Broker. For an architectural overview of bypass switches and network packet brokers, refer to https://devcentral.f5.com/s/articles/L2-Deployment-of-vCMP-guest-with-Ixia-network-packet-broker?tab=series&page=1. Gigamon provides an internal bypass switch within the network packet broker device, whereas Ixia uses an external bypass switch.
Network Topology
The diagrams below represent the actual lab network and show the deployment of BIG-IP with Gigamon.
Figure 1 - Topology before deployment of Gigamon and BIG-IP
Figure 2 - Topology after deployment of Gigamon and BIG-IP
Figure 3 - Connection between Gigamon and BIG-IP
Hardware Specification
The hardware used in this article is:
- BIG-IP i5800
- GigaVUE-HC1
- Arista DCS-7010T-48 (all four switches)
Note: All interfaces/ports run at 1G speed
Software Specification
The software used in this article is:
- BIG-IP 16.1.0
- GigaVUE-OS 5.7.01
- Arista 4.21.3F (North Switches)
- Arista 4.19.2F (South Switches)
Gigamon Configuration
In this lab, the Gigamon is configured with two types of ports: Inline Network and Inline Tool.
Steps Summary
- Step 1 : Configure Port Type
- Step 2 : Configure Inline Network Bypass Pair
- Step 3 : Configure Inline Network Group (if applicable)
- Step 4 : Configure Inline Tool Pair
- Step 5 : Configure Inline Tool Group (if applicable)
- Step 6 : Configure Inline Traffic Flow Maps
Step 1 : Configure Port Type
The first step is to configure the port types. Figure 2 shows all the ports connected between the switches and the Gigamon. Ports connected to a switch should be configured as Inline Network ports. As per Figure 2, the Inline Network ports are:
Inline Network ports: 1/1/x1, 1/1/x2, 1/1/x3, 1/1/x4, 1/1/x5, 1/1/x6, 1/1/x7, 1/1/x8
Figure 3 shows all the ports connected between the BIG-IP and the Gigamon. Ports connected to the BIG-IP should be configured as Inline Tool ports. As per Figure 3, the Inline Tool ports are:
Inline Tool ports: 1/1/x9, 1/1/x10, 1/1/x11, 1/1/x12, 1/1/g1, 1/1/g2, 1/1/g3, 1/1/g4
To configure the Port Type, do the following
- Log into GigaVUE-HC1 GUI
- Select Ports -> select the specific port and set the Port Type to Inline Network or Inline Tool
Figure 4 - GUI configuration of Port Types
Equivalent commands for configuring an Inline Network port and related port settings:
port 1/1/x1 type inline-net
port 1/1/x1 alias N-SW1-36
port 1/1/x1 params admin enable autoneg enable
Equivalent commands for configuring an Inline Tool port and related port settings:
port 1/1/x9 type inline-tool
port 1/1/x9 alias BIGIP-1.1
port 1/1/x9 params admin enable autoneg enable
Step 2 : Configure Inline Network Bypass Pair
Figure 1 shows direct connections between the switches. An inline network bypass pair ensures the same connectivity through the Gigamon. An inline network is an arrangement of two ports of the inline-network type. The arrangement facilitates access to a bidirectional link between two networks (two far-end network devices) that need to be linked through an inline tool. As per Figure 2, the Inline Network bypass pairs are:
Inline Network bypass pair 1 : 1/1/x1 -> 1/1/x2
Inline Network bypass pair 2 : 1/1/x3 -> 1/1/x4
Inline Network bypass pair 3 : 1/1/x5 -> 1/1/x6
Inline Network bypass pair 4 : 1/1/x7 -> 1/1/x8
To configure the inline network bypass pair, do the following
- Log into GigaVUE-HC1 GUI
- Select Inline Bypass -> Inline Networks
Figure 5 - Example GUI configuration of Inline Network Bypass Pair
Equivalent commands for configuring an Inline Network Bypass Pair:
inline-network alias Bypass1
   pair net-a 1/1/x1 and net-b 1/1/x2
   physical-bypass disable
   traffic-path to-inline-tool
   exit
Step 3 : Configure Inline Network Group (if applicable)
An inline network group is an arrangement of multiple inline networks that share the same inline tool.
To configure the inline network bypass group, do the following
- Log into GigaVUE-HC1 GUI
- Select Inline Bypass -> Inline Network Groups
Figure 6 - Example GUI configuration of Inline Network Bypass Group
Equivalent commands for configuring an Inline Network Bypass Group:
inline-network-group alias Bypassgroup
   network-list Bypass1,Bypass2,Bypass3,Bypass4
   exit
Step 4 : Configure Inline Tool Pair
Figure 3 shows the connections between the BIG-IP and the Gigamon, which are in pairs. An inline tool consists of inline tool ports, always in pairs, running at the same speed, on the same medium. As per Figure 3, the Inline Tool pairs are:
Inline Tool pair 1 : 1/1/x9 -> 1/1/x10
Inline Tool pair 2 : 1/1/x11 -> 1/1/x12
Inline Tool pair 3 : 1/1/g1 -> 1/1/g2
Inline Tool pair 4 : 1/1/g3 -> 1/1/g4
To configure the inline tool pair, do the following
- Log into GigaVUE-HC1 GUI
- Select Inline Bypass -> Inline Tools
Figure 7 - Example GUI configuration of Inline Tool Pair
Equivalent commands for configuring an Inline Tool pair:
inline-tool alias BIGIP1
   pair tool-a 1/1/x9 and tool-b 1/1/x10
   enable
   shared true
   exit
Step 5 : Configure Inline Tool Group (if applicable)
An inline tool group is an arrangement of multiple inline tools; traffic is distributed across the tools in the group based on hardware-calculated hash values. For example, if one tool goes down, its traffic is redistributed to the remaining tools in the group using the same hashing.
To configure the inline tool group, do the following
- Log into GigaVUE-HC1 GUI
- Select Inline Bypass -> Inline Tool Groups
Figure 8 - Example GUI configuration of Inline Tool Group
Equivalent commands for configuring an Inline Tool Group:
inline-tool-group alias BIGIPgroup
   tool-list BIGIP1,BIGIP2,BIGIP3,BIGIP4
   enable
   exit
Step 6 : Configure Inline Traffic Flow Maps
Flow mapping takes traffic from a network TAP or a SPAN/mirror port and sends it through a set of user-defined map rules to the tools and applications that secure, monitor, and analyze the IT infrastructure. As per Figure 2, this is the high-level process for configuring traffic to flow from the inline network links to the inline tool group, allowing you to validate the deployment of the BIG-IP appliances within the group.
To configure the inline traffic flow map, do the following
- Log into GigaVUE-HC1 GUI
- Select Maps -> New
Figure 9 - Example GUI configuration of Flow Maps
Note: The above configuration allows all traffic from the Inline Network Group to flow through the Inline Tool Group
Equivalent command for configuring a PASS ALL Flow Map:
map-passall alias Map1
   to BIGIPgroup
   from Bypassgroup
   exit
Flow Maps can also be configured for specific traffic. For example, LACP traffic can bypass the BIG-IP while all other traffic passes through the BIG-IP. The commands below achieve this condition:
map alias inMap
   type inline byRule
   roles replace admin to owner_roles
   comment " "
   rule add pass ethertype 8809
   to bypass
   from Bypassgroup
   exit
map-scollector alias SCollector
   roles replace admin to owner_roles
   from Bypassgroup
   collector BIGIPgroup
   exit
Note: For more details on Gigamon, refer to https://docs.gigamon.com/pdfs/Content/Shared/5700-doclist.html
BIG-IP Configuration
In this series of BIG-IP and Gigamon deployments, the BIG-IP is configured in L2 mode with Virtual Wire (vWire).
Steps Summary
- Step 1 : Configure interfaces to support vWire
- Step 2 : Configure trunk in LACP mode or passthrough mode
- Step 3 : Configure Virtual Wire
Note: The steps mentioned above are specific to the topology in Figure 2. For more details on Virtual Wire (vWire), refer to https://devcentral.f5.com/s/articles/BIG-IP-vWire-Configuration?tab=series&page=1 and https://devcentral.f5.com/s/articles/vWire-Deployment-Configuration-and-Troubleshooting?tab=series&page=1
Step 1 : Configure interfaces to support vWire
To configure interfaces to support vWire, do the following
- Log into BIG-IP GUI
- Select Network -> Interfaces -> Interface List
- Select the specific interface and, under the vWire configuration, set the Forwarding Mode to Virtual Wire
Figure 10 - Example GUI configuration of interface to support vWire
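The same setting can also be applied from tmsh. Below is a minimal sketch, assuming the port-fwd-mode interface property available on vWire-capable platforms; interface numbers follow Figure 3.
# tmsh: set each vWire member interface to Virtual Wire forwarding mode
# (port-fwd-mode is an assumption -- verify with "tmsh list net interface 1.1 all-properties")
modify net interface 1.1 port-fwd-mode virtual-wire
modify net interface 2.3 port-fwd-mode virtual-wire
# Repeat for the remaining vWire interfaces (1.2, 1.3, 1.4, 2.1, 2.2, 2.4)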
Step 2 : Configure trunk in LACP mode or passthrough mode
To configure the trunks, do the following
- Log into BIG-IP GUI
- Select Network -> Trunks
- Click Create to configure a new trunk. Enable LACP for LACP mode, or disable LACP for LACP passthrough mode
Figure 11 - Example GUI configuration of Trunk in LACP Mode
Figure 12 - Example GUI configuration of Trunk in LACP Passthrough Mode
As per Figure 2, when configured in LACP mode, LACP is established between the BIG-IP and the switches. When configured in LACP passthrough mode, LACP is established between the North and South switches.
As per Figures 2 and 3, there will be four trunks configured as below:
Left_Trunk 1 : Interfaces 1.1 and 2.3
Left_Trunk 2 : Interfaces 1.3 and 2.1
Right_Trunk 1 : Interfaces 1.2 and 2.4
Right_Trunk 2 : Interfaces 1.4 and 2.2
The Left_Trunks provide connectivity between the BIG-IP and the North Switches; the Right_Trunks provide connectivity between the BIG-IP and the South Switches.
Note: Trunks can also be configured on individual interfaces when LACP passthrough is used, because LACP frames are not terminated at the BIG-IP.
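For reference, below is a minimal tmsh sketch of the trunk configuration; trunk names follow Figures 2 and 3, and the Left_Trunk 1 membership is the one inferred above, so adjust to your cabling.
# tmsh: LACP mode -- the BIG-IP terminates LACP with the adjacent switches
create net trunk Left_Trunk1 interfaces add { 1.1 2.3 } lacp enabled
# tmsh: LACP passthrough mode -- keep LACP disabled so LACPDUs pass through the vWire
create net trunk Left_Trunk2 interfaces add { 1.3 2.1 } lacp disabled
# Verify member interfaces and status after creation
show net trunk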
Step 3 : Configure Virtual Wire
To configure the Virtual Wire, do the following
- Log into BIG-IP GUI
- Select Network -> Virtual Wire
- Click Create to configure Virtual Wire
Figure 13 - Example GUI configuration of Virtual Wire
The above Virtual Wire configuration works for both tagged and untagged traffic. The topology in Figures 2 and 3 requires both Virtual Wires to be configured. This configuration works for both LACP mode and LACP passthrough mode. If each interface is configured with its own trunk in a passthrough deployment, then four separate Virtual Wires are configured.
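The VLAN side of a Virtual Wire can also be sketched in tmsh. The VLAN names and tag below are hypothetical, and the vWire object itself is created on the Virtual Wire page shown above; its tmsh form varies by TMOS version, so refer to the vWire articles linked earlier for the exact syntax.
# tmsh: one VLAN per side of the vWire, attached to the paired trunks
# (VLAN names and tag are hypothetical)
create net vlan vWire1_north interfaces add { Left_Trunk1 { untagged } } tag 100
create net vlan vWire1_south interfaces add { Right_Trunk1 { untagged } } tag 100
# The Virtual Wire object then bridges these two VLANs end to end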
Note: In this series, all the mentioned scenarios and configuration will be covered in upcoming articles.
Conclusion
This deployment ensures transparent integration of network security tools with little to no network redesign or configuration change. The merits of the above network deployment are:
- Increases the reliability of the production link
- Inline devices can be upgraded or replaced without loss of the link
- Traffic can be shared between multiple tools
- Specific traffic can be forwarded to customized tools
- Trusted traffic can be bypassed uninspected
- Carlos_Eduardo5 (Employee)
Hi,
I have a challenge over here: using QinQ in this deployment to send VLANs to the BIG-IP. I understand that QinQ is not supported in vWire mode. Is anyone else seeing this issue?
- C_Pandey (Employee)
Thanks Veera for publishing this article. Great content, and it really helped me a lot during the deployment.
- Samir_Kumar_Jha (Ret. Employee)
Thank you so much for publishing the L2-deployment article.
- Veeraraghavan_A (Employee)
Thanks for sharing your experience and details. This example of vWire configuration is specific to untagged packets, where we explicitly mention the untagged VLAN in the vWire configuration. The default vWire configuration will allow any tag, so that configuration allows both tagged and untagged packets. We have other articles in the pipeline covering various scenarios and configurations.
- Dojs (Cirrostratus)
Hi,
We are using something similar.
We had two environments, each with an HC2 and an i5800, and we needed some special configuration to work well with Gigamon. We hit the bug <https://cdn.f5.com/product/bugtracker/ID885961.html>; after applying that workaround, the Gigamon setup works well.