BIG-IP L2 Virtual Wire LACP Passthrough Deployment with IXIA Bypass Switch and Network Packet Broker (Multiple Service Chain)

Introduction

This article is part of a series on deploying BIG-IPs with bypass switches and network packet brokers. These devices allow network security tools to be integrated transparently, with little to no network redesign or configuration change. For more information about bypass switch devices, refer to https://en.wikipedia.org/wiki/Bypass_switch; for network packet brokers, refer to https://www.ixiacom.com/company/blog/network-packet-brokers-abcs-network-visibility and https://www.gigamon.com/campaigns/next-generation-network-packet-broker.html. The article series introduces network designs that forward traffic to inline tools at layer 2 (L2).

F5’s BIG-IP hardware appliances can be inserted in L2 networks. This can be achieved using either a Virtual Wire (vWire) or by bridging two VLANs with a VLAN group.

This document covers the design and implementation of the IXIA Bypass Switch/Network Packet Broker in conjunction with the BIG-IP i5800 appliance and Virtual Wire (vWire).

This document focuses on the IXIA Bypass Switch and Network Packet Broker. For an architecture overview of bypass switches and network packet brokers, refer to https://devcentral.f5.com/s/articles/L2-Deployment-of-vCMP-guest-with-Ixia-network-packet-broker?tab=series&page=1.

This article covers configuration and test scenarios in which the IXIA Network Packet Broker is configured with two Service Chains, whereas the articles below are specific to a single Service Chain:

https://devcentral.f5.com/s/articles/BIG-IP-L2-Virtual-Wire-LACP-Passthrough-Deployment-with-IXIA-Bypass-Switch-and-Network-Packet-Broker-I

https://devcentral.f5.com/s/articles/BIG-IP-L2-Virtual-Wire-LACP-Passthrough-Deployment-with-IXIA-Bypass-Switch-and-Network-Packet-Broker-II


Network Topology

The diagram below represents the actual lab network and shows the deployment of the BIG-IP with the IXIA Bypass Switch and Network Packet Broker.

Figure 1 - Deployment of BIG-IP with IXIA Bypass Switch and Network Packet Broker

Refer to the Lab Overview section in https://devcentral.f5.com/s/articles/BIG-IP-L2-Deployment-with-Bypasss-Network-Packet-Broker-and-LACP?tab=series&page=1 for more detail on the lab topology and connections.


Hardware Specification

The hardware used in this article is:

  • IXIA iBypass DUO (Bypass Switch)
  • IXIA Vision E40 (Network Packet Broker)
  • BIG-IP i5800
  • Arista DCS-7010T-48 (all four switches)


Software Specification

The software versions used in this article are:

  • BIG-IP 16.1.0
  • IXIA iBypass DUO 1.4.1
  • IXIA Vision E40 5.9.1.8
  • Arista 4.21.3F (North Switches)
  • Arista 4.19.2F (South Switches)


Switch and Ixia iBypass Duo Configuration

The switch and IXIA iBypass configurations are the same as in the following article:

https://devcentral.f5.com/s/articles/BIG-IP-L2-Virtual-Wire-LACP-Passthrough-Deployment-with-IXIA-Bypass-Switch-and-Network-Packet-Broker-I


IXIA Vision E40 Configuration

Most of the configuration is the same as in the following article:

https://devcentral.f5.com/s/articles/BIG-IP-L2-Virtual-Wire-LACP-Passthrough-Deployment-with-IXIA-Bypass-Switch-and-Network-Packet-Broker-I.

In this article, the IXIA NPB is configured with two Service Chains.

Create the following resources with the information provided.

Bypass Port Pairs


Tool Resources


Service Chains


Figure 2 - Final Configuration of IXIA NPB


BIG-IP Configuration

Most of the configuration is the same as in the following article:

https://devcentral.f5.com/s/articles/BIG-IP-L2-Virtual-Wire-LACP-Passthrough-Deployment-with-IXIA-Bypass-Switch-and-Network-Packet-Broker-I.

In this article, four trunks are configured, whereas the previous articles used two.


Figure 3 - Trunk Configuration in BIG-IP


The interfaces allocated to each trunk are listed below; a configuration sketch follows the list.

North_Trunk -> 1.1

North_Trunk1 -> 1.3

South_Trunk -> 1.2

South_Trunk1 -> 1.4
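
As a sketch, the four trunks above could be created from the BIG-IP CLI (tmsh) as shown below; the trunk names match Figure 3, and LACP is left at its default of disabled on each trunk, because in passthrough mode the switches, not the BIG-IP, run LACP. Exact syntax may vary by TMOS version.

    create net trunk North_Trunk interfaces add { 1.1 }
    create net trunk North_Trunk1 interfaces add { 1.3 }
    create net trunk South_Trunk interfaces add { 1.2 }
    create net trunk South_Trunk1 interfaces add { 1.4 }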


Figure 4 - vWire Configuration in BIG-IP


Scenarios

Because LACP passthrough mode is configured on the BIG-IP, LACP frames pass through it, and LACP is established directly between the North and South Switches. ICMP traffic is used to represent network traffic from the North Switches to the South Switches.


Scenario 1: Traffic flow through BIG-IP with North and South Switches configured in LACP active mode

The configurations above show that all four switches are configured in LACP active mode.
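
As a sketch, the switch-side LACP configuration would look something like the following on each Arista switch; the member interface number (Ethernet50) is assumed for illustration, while port-channel 513 is taken from the figures:

    interface Ethernet50
       channel-group 513 mode active
    interface Port-Channel513
       switchport mode trunk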


Figure 5 - MLAG after deployment of BIG-IP and IXIA with Switches configured in LACP ACTIVE mode

Figure 5 shows that port-channel 513 is active on both the North and South Switches.


Figure 6 - ICMP traffic flow from client to server through BIG-IP

Figure 6 shows that ICMP traffic reaches the server from the client through the BIG-IP. This verifies scenario 1: LACP is established between the switches, and traffic passes through the BIG-IP successfully.

In this case, the incoming request uses interface 1.1 and the outgoing request uses interface 1.2, whereas in the previous articles the incoming and outgoing requests used 1.1 and 1.4 by default. Figure 4 shows dedicated vWires configured, so traffic arriving on 1.1 is sent out 1.2, and traffic arriving on 1.3 is sent out 1.4.

Figure 4 also shows that the ICMP request uses VLAN 2001 and the ICMP reply uses VLAN 2002, which means the request traverses Service Chain 1 and the reply traverses Service Chain 2 in the NPB.


Scenario 2: Active BIG-IP link goes down with link state propagation enabled in BIG-IP

Figure 4 shows Propagate Virtual Wire Link Status enabled on the BIG-IP. Figure 6 shows that interface 1.1 of the BIG-IP is the active incoming interface and interface 1.2 is the active outgoing interface. Disabling BIG-IP interface 1.1 takes the active link down, as shown below.
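
As a sketch, the interface can be disabled, and later re-enabled, from tmsh (the same action is available in the GUI under Network > Interfaces):

    modify net interface 1.1 disabled
    modify net interface 1.1 enabled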


Figure 7 - BIG-IP interface 1.1 disabled


Figure 8 - Trunk state after BIG-IP interface 1.1 disabled


Figure 8 shows North_Trunk and South_Trunk down. North_Trunk is down because its only interface (1.1) is disabled. Because Link State Propagation is enabled and a dedicated vWire is configured, South_Trunk goes down as well.


Figure 9 - MLAG status with interface 1.1 down and Link State Propagation enabled

Figure 9 shows that port-channel 513 is still active on both the North and South Switches. The switches are unaware of the link failure; it is handled by the IXIA configuration.
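
As a sketch, the port-channel and MLAG state shown in Figures 5 and 9 can be checked from the Arista CLI with standard commands such as:

    show port-channel summary
    show mlag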


Figure 10 - IXIA Bypass Switch after 1.1 interface of BIG-IP goes down


Figure 10 shows that bypass is switched on in Bypass Switch 1. Because each Bypass Switch has a dedicated Service Chain in the NPB and a dedicated vWire in the BIG-IP, only Bypass Switch 1 moves to bypass mode. Figure 6 shows that the ICMP request uses Bypass 1 by default and the ICMP reply uses Bypass 2 by default, so the request bypasses the BIG-IP while the reply still passes through it.


Figure 11 - ICMP reply traffic flow from client to server through BIG-IP

Scenario 3: BIG-IP interfaces go down with link state propagation enabled in BIG-IP

In this scenario, both north-side interfaces of the BIG-IP (1.1 and 1.3) are disabled, taking both vWires down. With Link State Propagation enabled, all four trunks go down, both Bypass Switches turn on bypass mode, and traffic flows from client to server bypassing the BIG-IP entirely, as Figures 12 through 15 show.


Figure 12 - BIG-IP interface 1.1 and 1.3 disabled


Figure 13 - Trunk state after BIG-IP interface 1.1 and 1.3 disabled


Figure 14 - IXIA Bypass Switch after 1.1 and 1.3 interfaces of BIG-IP goes down


Figure 15 - ICMP traffic flow from client to server bypassing BIG-IP


Conclusion

This article covered a BIG-IP L2 Virtual Wire passthrough deployment with IXIA, with the IXIA NPB configured with multiple Service Chains. Observations from this deployment are as follows:

  1. VLAN translation in the IXIA NPB converts the real VLAN ID (513) to the translated VLAN IDs (2001 and 2002).
  2. The BIG-IP receives packets with the translated VLAN IDs (2001 and 2002).
  3. VLAN translation requires all packets to be tagged; untagged packets are dropped.
  4. LACP frames are untagged, so a bypass is configured in the NPB for LACP.
  5. Tool Sharing must be enabled to allow untagged packets, which adds an extra tag. That configuration and its testing will be covered in upcoming articles.
  6. With multiple Service Chains, if any one of the Inline Tool Port Pairs goes down, only the corresponding Bypass Switch in the iBypass DUO turns on bypass mode.
  7. If any Inline Tool link goes down, IXIA handles it; the switches remain unaware of the change.
  8. If the BIG-IP goes down, IXIA enables bypass and ensures there is no packet drop.


Published Dec 28, 2021
Version 1.0