Virtual-wire Configuration and Troubleshooting
A virtual wire (vWire) logically connects two interfaces or trunks, in any combination, to each other, enabling the BIG-IP system to forward traffic from one interface to the other, in either direction. This type of configuration is typically used for security monitoring, where the BIG-IP system inspects ingress packets without modifying them in any way. To deploy a BIG-IP system without making changes to other devices on your network, you can configure the system to operate strictly at Layer 2. By deploying a virtual wire configuration, you transparently add the device to the network without having to create self IP addresses or change the configuration of other network devices that the BIG-IP device is connected to.

Topology: Before vWire Deployment / After vWire Deployment (diagrams)

A few points about virtual wire configurations in general:
- vWire works in transparent mode, which means there is no packet modification.
- The system bridges both tagged and untagged packets.
- Neither VLANs nor MAC addresses change in symmetric mode.
- Propagate virtual wire link status: when enabled, the BIG-IP system changes the peer port state to down when the corresponding interface is disabled or down. If disabled, the BIG-IP system does not change the peer port state.

Configuring vWire in the BIG-IP UI
1. Navigate to Network >> Virtual Wire.
2. Select Create (upper right).
3. Enter the values for the interfaces added to the virtual wire.
4. Enter the VLAN information and click Add for every VLAN object created. Recommended: enable propagate virtual wire link status for detecting link failure.
5. Once all the selections are made and you are ready to implement, click "Commit Changes to System".
The resulting screen will look like the following, and the resulting VLAN configuration will look as follows (screenshots).
Note: Be sure to configure an untagged VLAN on the relevant virtual wire interface to enable the system to correctly handle untagged traffic. Note that many Layer 2 protocols, such as Spanning Tree Protocol (STP), employ untagged traffic in the form of BPDUs.

Configuring vWire in CLI mode
Configure interfaces to support virtual wire:
tmsh modify net interface 1.1 port-fwd-mode virtual-wire
tmsh modify net interface 1.2 port-fwd-mode virtual-wire
Create all-VLAN tag VLAN objects:
tmsh create net vlan Direct_all_vlan_4096_1 tag 4096 interfaces add { 1.1 { tagged } }
tmsh create net vlan Direct_all_vlan_4096_2 tag 4096 interfaces add { 1.2 { tagged } }
Create specific (802.1Q tag 512) VLAN objects:
tmsh create net vlan Direct_vlan_512_1 tag 512 interfaces add { 1.1 { tagged } }
tmsh create net vlan Direct_vlan_512_2 tag 512 interfaces add { 1.2 { tagged } }
Create VLAN groups:
tmsh create net vlan-group Direct_all_vlan members add { Direct_all_vlan_4096_1 Direct_all_vlan_4096_2 } mode virtual-wire
tmsh create net vlan-group Direct_vlan_512 members add { Direct_vlan_512_1 Direct_vlan_512_2 } mode virtual-wire
Save the configuration:
tmsh save sys config partitions all
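Before extending the configuration with trunks and LACP, it can help to confirm that the basic virtual wire objects are in place. A minimal verification sketch using the object names above (standard tmsh list/show commands; output formats vary by version):

tmsh list net interface 1.1 port-fwd-mode
tmsh list net interface 1.2 port-fwd-mode
tmsh list net vlan-group Direct_vlan_512
tmsh show net vlan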
vWire configuration with trunks, LACP, and LACP pass through
The LACP pass through feature tunnels LACP packets through trunks between switches. Configure an untagged VLAN on the virtual wire interface to tunnel LACP packets.
Note: Propagate virtual wire link status should be enabled for LACP pass through mode. LACP pass through and propagate virtual wire link status are supported from 16.1.x.

Configuring LACP pass through
Configure the interfaces to support virtual wire mode:
tmsh modify net interface 1.1 port-fwd-mode virtual-wire
tmsh modify net interface 2.1 port-fwd-mode virtual-wire
tmsh modify net interface 1.2 port-fwd-mode virtual-wire
tmsh modify net interface 2.2 port-fwd-mode virtual-wire
Configure the trunks:
tmsh create net trunk left_trunk_1 interfaces add { 1.1 2.1 } qinq-ethertype 0x8100 link-select-policy auto
tmsh create net trunk right_trunk_1 interfaces add { 1.2 2.2 } qinq-ethertype 0x8100 link-select-policy auto
Configure tagged and untagged VLAN interfaces:
tmsh create net vlan left_vlan_1_4k tag 4096 interfaces add { left_trunk_1 { tagged } }
tmsh create net vlan left_vlan_1 tag 31 interfaces add { left_trunk_1 { tagged } }
tmsh create net vlan left_vlan_333 tag 333 interfaces add { left_trunk_1 { untagged } }
tmsh create net vlan right_vlan_1_4k tag 4096 interfaces add { right_trunk_1 { tagged } }
tmsh create net vlan right_vlan_1 tag 31 interfaces add { right_trunk_1 { tagged } }
tmsh create net vlan right_vlan_333 tag 333 interfaces add { right_trunk_1 { untagged } }
Create the VLAN groups and enable propagate-linkstatus:
tmsh create net vlan-group vg_1_4k bridge-traffic enabled mode virtual-wire members add { left_vlan_1_4k right_vlan_1_4k } vwire-propagate-linkstatus enabled
tmsh create net vlan-group vg_untagged bridge-traffic enabled mode virtual-wire members add { left_vlan_333 right_vlan_333 } vwire-propagate-linkstatus enabled
tmsh create net vlan-group vg_1 bridge-traffic enabled mode virtual-wire members add { left_vlan_1 right_vlan_1 } vwire-propagate-linkstatus enabled

Configuring LACP (Active-Active) mode
Configure the interfaces to support virtual wire mode:
tmsh modify net interface 1.1 port-fwd-mode virtual-wire
tmsh modify net interface 1.2 port-fwd-mode virtual-wire
tmsh modify net interface 2.1 port-fwd-mode virtual-wire
tmsh modify net interface 2.2 port-fwd-mode virtual-wire
Configure the trunks in LACP active mode:
tmsh create net trunk left_trunk_1 interfaces add { 1.1 1.2 } qinq-ethertype 0x8100 link-select-policy auto lacp enabled lacp-mode active
tmsh create net trunk right_trunk_1 interfaces add { 2.1 2.2 } qinq-ethertype 0x8100 link-select-policy auto lacp enabled lacp-mode active
Configure tagged VLAN interfaces:
tmsh create net vlan left_vlan_1_4k tag 4096 interfaces add { left_trunk_1 { tagged } }
tmsh create net vlan left_vlan_1 tag 31 interfaces add { left_trunk_1 { tagged } }
tmsh create net vlan right_vlan_1_4k tag 4096 interfaces add { right_trunk_1 { tagged } }
tmsh create net vlan right_vlan_1 tag 31 interfaces add { right_trunk_1 { tagged } }
Create the VLAN groups and enable propagate-linkstatus:
tmsh create net vlan-group vg_1_4k bridge-traffic enabled mode virtual-wire members add { left_vlan_1_4k right_vlan_1_4k } vwire-propagate-linkstatus enabled
tmsh create net vlan-group vg_1 bridge-traffic enabled mode virtual-wire members add { left_vlan_1 right_vlan_1 } vwire-propagate-linkstatus enabled

DB variables for vWire:
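Individual DB variables can be inspected and changed from tmsh. As an illustration (vlangroup.forwarding.override is the variable used in the troubleshooting section below; other variable names and their defaults depend on the software version):

tmsh list sys db vlangroup.forwarding.override
tmsh modify sys db vlangroup.forwarding.override value enable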
Troubleshooting vWire
1. Verify that traffic is flowing through the default virtual server (_vlangroup).
Tcpdump command: tcpdump -nne -s0 -i 0.0:nnn
22:00:53.398116 00:00:00:00:01:31 > 33:33:00:00:00:05, ethertype 802.1Q (0x8100), length 139: vlan 31, p 0, ethertype IPv6, fe80::200:ff:fe00:131 > ff02::5: OSPFv3, Hello, length 40 out slot1/tmm9 lis=_vlangroup
22:00:53.481645 00:00:5e:00:01:01 > 01:00:5e:00:00:12, ethertype 802.1Q (0x8100), length 91: vlan 31, p 0, ethertype IPv4, 10.31.0.3 > 224.0.0.18: VRRPv3, Advertisement, vrid 1, prio 150, intvl 100cs, length 12 out slot1/tmm4 lis=_vlangroup
2. Now create a virtual server based on requirements (TCP, UDP, ICMP) with the virtual server name test, and verify traffic is hitting the virtual server.
Tcpdump command: tcpdump -nne -s0 -i 0.0:nnn
22:04:54.161197 3c:41:0e:9b:01:31 > 00:00:00:00:03:31, ethertype 802.1Q (0x8100), length 145: vlan 31, p 0, ethertype IPv4, 10.20.0.10 > 10.13.0.10: ICMP echo request, id 30442, seq 2, length 64 out slot4/tmm2 lis=/Common/test
22:05:14.126544 3c:41:0e:9b:01:31 > 00:00:00:00:03:31, ethertype 802.1Q (0x8100), length 121: vlan 31, p 0, ethertype IPv4, 10.20.0.10.41692 > 10.13.0.10.80: Flags [S], seq 2716535389, win 64240, options [mss 1460,sackOK,TS val 685348731 ecr 0,nop,wscale 7], length 0 out slot3/tmm8 lis=/Common/test
22:05:14.126945 3c:41:0e:9b:03:31 > 00:00:00:00:01:31, ethertype 802.1Q (0x8100), length 121: vlan 31, p 0, ethertype IPv4, 10.13.0.10.80 > 10.20.0.10.41692: Flags [S.], seq 1173350299, ack 2716535390, win 65160, options [mss 1460,sackOK,TS val 4074187325 ecr 685348731,nop,wscale 7], length 0 in slot3/tmm8 lis=/Common/test
3. Troubleshooting steps
- Take the tcpdump and check whether the traffic is hitting the virtual server or not.
- If traffic is dropped, enable "tmsh modify sys db vlangroup.forwarding.override value enable" with the destination as catch-all and check whether traffic is hitting _vlangroup and going out or not. If traffic goes through without any issue, then there is an issue with the created virtual server.
- If the problem persists even after enabling the vlangroup.forwarding.override db variable, collect the output of the commands below:
tmctl ifc_stats - displays interface traffic statistics
tmctl ip_stat - displays IP traffic statistics
tmctl ip6_stat - displays IPv6 traffic statistics
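As a concrete sketch of the vlangroup.forwarding.override check above (the client address 10.20.0.10 comes from the captures in step 2; reverting the variable with "disable" is assumed to mirror the enable syntax):

tmsh modify sys db vlangroup.forwarding.override value enable
tcpdump -nne -s0 -i 0.0:nnn host 10.20.0.10
tmsh modify sys db vlangroup.forwarding.override value disable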
vWire Behavior
This table describes how the BIG-IP system handles certain conditions when the relevant interfaces are configured to use a virtual wire. The table also shows what actions you can take, if possible.

Notable Effects-Caveats
- When deploying a pair of BIG-IPs in HA mode, the virtual wire configuration will create objects with different names on each BIG-IP. For example, the creation of vwire_lab01 will result in the creation of VLAN objects vwire_lab01_1_567 and vwire_lab01_2_567 on one BIG-IP, while the other BIG-IP will have vwire_lab01_1_000 and vwire_lab01_2_000 in its configuration. For modules like SSL Orchestrator, or in cases where a virtual server needs to be associated with a specific VLAN, the numbering is problematic. The administrator will not be able to associate the topology or virtual server with one VLAN object (vwire_lab01_2_567) on the first BIG-IP and the other VLAN object (vwire_lab01_2_000) on the peer BIG-IP. (This is not possible for a number of reasons, one of which is the way configurations are synchronized between BIG-IP devices.)
- Q-in-Q is not supported in a virtual wire configuration.
- The virtual wire feature is not supported on Virtual Clustered Multiprocessing (vCMP).
- Active/Active deployment is not supported.
- vWire is not supported on Virtual Edition (VE).

Conclusion
BIG-IP in virtual wire mode can be deployed in any network without any network design or configuration changes, as it works in L2 transparent mode.

L2 Transparency Caveats
There are a few caveats with respect to L2 transparency:
- OSPF neighborship gets stuck in the ExStart state.
- BGP neighborship won't come up with MD5 authentication.

OSPF neighborship stuck in ExStart state
In transparent mode, when a standard virtual server is configured, the virtual server processes the DBD packets; as a result the TTL value becomes zero and the OSPF neighborship gets stuck in the ExStart state. To solve this problem, configure a profile to preserve the TTL value and attach the profile to the virtual server. Below are the steps to configure the profile and the virtual server. The same steps apply to both vWire and VLAN group deployments.
- Create a profile to preserve TTL: click Create, enter the profile name TTL, and select preserve.
- Attach the profile under ipother.
After attaching the profile, the OSPF neighborship will come up.

BGP neighborship won't come up with MD5 authentication
In transparent mode, when a standard virtual server is configured, the virtual server processes the BGP packets and replies to the TCP connection without the MD5 option, so the BGP session won't come up between the two devices. To solve this problem, configure a profile to support MD5 authentication and attach the profile to the virtual server. Below are the steps to configure the profile and the virtual server. The same steps apply to both vWire and VLAN group deployments.
- Create a profile to support MD5 authentication: create a profile named md5, enable MD5 authentication, and provide the MD5 authentication password.
- Attach the MD5 profile to the virtual server.
After attaching the md5 profile, the BGP neighborship will come up.
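As a rough tmsh sketch of the same two fixes: the TCP profile's md5-signature options are standard tmsh options, while mapping the GUI "TTL" profile to a fastL4 ip-ttl-mode setting is an assumption to verify against your version and virtual server type; the virtual server name test and the passphrase are placeholders.

# MD5 support for BGP through a standard virtual server
tmsh create ltm profile tcp md5 md5-signature enabled md5-signature-passphrase <passphrase>
tmsh modify ltm virtual test profiles add { md5 }
# TTL preservation for fastL4-based virtual servers (assumed equivalent of the GUI step)
tmsh create ltm profile fastl4 TTL ip-ttl-mode preserve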
BIG-IP vWire Configuration

Introduction
The insertion of inline security and application delivery devices into an existing network infrastructure can require significant network re-design and architecture changes. Deploying tools that operate transparently at Layer 2 of the OSI model (L2) can greatly reduce the complexity and disruption associated with these implementations. F5's BIG-IP hardware appliances can be inserted as L2 devices in existing networks. This can be achieved using either virtual wire (vWire) or by bridging two Virtual LANs using a VLAN group. This article focuses on the configuration of vWire on a standalone BIG-IP with two physical interfaces. The two physical interfaces are bridged together and allow traffic through the BIG-IP, behaving like a wire.
Note: Virtual wire is available on BIG-IP hardware. For more information on F5 security and other modules and their configuration, please refer to www.f5.com to access user guides, recommended practices and other deployment documentation. The configuration of BIG-IP modules, such as those providing DDoS protection/mitigation or SSL visibility, is beyond the scope of this article and is the subject of other user guides.

Under the covers
Building virtual wires leverages the underlying configuration of two separate VLAN objects that are bridged using a VLAN group. For convenience, going forward, one will be called the "ingress VLAN object" and the other one the "egress VLAN object". This is significant because you will be able to use these objects in your configuration to set up listeners and associate them to either VLAN object.

Configuration Using the CLI
Overview:
1. Modify the two interfaces' mode to support virtual wire.
2. Create two VLAN objects using the interfaces selected above with VLAN id 4096 - this is the default "any" VLAN ID, which will accept and forward all 802.1Q tagged traffic.
3. Create two VLAN objects using the same interfaces with the desired VLAN id (512 is used as an example below).
4. Create VLAN groups to bridge the VLANs created above.
Sample Configuration: The sample below creates a virtual wire that will work with 802.1Q VLAN id 512.
Configure interfaces to support virtual wire:
root@(localhost)(cfg-sync Standalone)(Active)(/Common)(tmos)# modify net interface 1.1 port-fwd-mode virtual-wire
root@(localhost)(cfg-sync Standalone)(Active)(/Common)(tmos)# modify net interface 1.2 port-fwd-mode virtual-wire
Create all-VLAN tag VLAN objects:
root@(localhost)(cfg-sync Standalone)(Active)(/Common)(tmos)# create net vlan Direct_all_vlan_4096_1 tag 4096 interfaces add { 1.1 { tagged } }
root@(localhost)(cfg-sync Standalone)(Active)(/Common)(tmos)# create net vlan Direct_all_vlan_4096_2 tag 4096 interfaces add { 1.2 { tagged } }
Create specific (802.1Q tag 512) VLAN objects:
root@(localhost)(cfg-sync Standalone)(Active)(/Common)(tmos)# create net vlan Direct_vlan_512_1 tag 512 interfaces add { 1.1 { tagged } }
root@(localhost)(cfg-sync Standalone)(Active)(/Common)(tmos)# create net vlan Direct_vlan_512_2 tag 512 interfaces add { 1.2 { tagged } }
Create VLAN groups:
root@(localhost)(cfg-sync Standalone)(Active)(/Common)(tmos)# create net vlan-group Direct_all_vlan members add { Direct_all_vlan_4096_1 Direct_all_vlan_4096_2 } mode virtual-wire
root@(localhost)(cfg-sync Standalone)(Active)(/Common)(tmos)# create net vlan-group Direct_vlan_512 members add { Direct_vlan_512_1 Direct_vlan_512_2 } mode virtual-wire
Don't forget to save:
root@(localhost)(cfg-sync Standalone)(Active)(/Common)(tmos)# save sys config partitions all

Using the WebUI
Overview: There is a single interface to create and configure the necessary configuration objects.
1. Create a virtual wire with the desired interfaces.
2. Associate the VLANs that will be used by the BIG-IP function (e.g. SSL Orchestrator, Traffic Manager, etc.).
3. Apply the configuration.
Sample Configuration: From the BIG-IP WebUI (Network >> Virtual Wire):
1. Select Create (upper right).
2. Enter the values for the interfaces added to the virtual wire.
3. Enter the VLAN information and click Add for every VLAN object created.
4. Once all the selections are made and you are ready to implement, click "Commit Changes to System".
The resulting screen will look like the following, and the resulting VLAN configuration will look as follows (screenshots).
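With the virtual wire in place, the per-side VLAN objects can be used to attach listeners, as noted in the "Under the covers" section. A minimal sketch of a wildcard forwarding virtual server bound to one side of the wire (the virtual server name and the choice of a forwarding type are illustrative assumptions, not part of the original configuration):

tmsh create ltm virtual vwire_fwd_example destination 0.0.0.0:any mask any ip-forward vlans-enabled vlans add { Direct_vlan_512_1 }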
Notable Effects-Caveats
Virtual Wire Created Through WebUI
Configuring vWire via the WebUI will result in creating the aforementioned VLANs automatically. During the creation process, an identifier is appended to the VLAN object name. This identifier will vary from one BIG-IP to another. When deploying a pair of BIG-IPs in HA mode, the virtual wire configuration will create objects with different names on each BIG-IP. So, for example, the creation of vwire_lab01 will result in the creation of VLAN objects vwire_lab01_1_567 and vwire_lab01_2_567 on one BIG-IP, while the other BIG-IP will have vwire_lab01_1_000 and vwire_lab01_2_000 in its configuration. For modules like SSL Orchestrator, or in cases where a virtual server needs to be associated with a specific VLAN, the numbering is problematic. The administrator will not be able to associate the topology or virtual server with one VLAN object (vwire_lab01_2_567) on the first BIG-IP and the other VLAN object (vwire_lab01_2_000) on the peer BIG-IP. (This is not possible for a number of reasons, one of which is the way configurations are synchronized between BIG-IP devices.) This results in the necessary manual configuration using the procedure described above.
VLAN Objects Available for Configuration
After creating virtual wire objects, VLANs are available for you to configure the desired services. This includes BIG-IP LTM or SSL Orchestrator objects, allowing you to take different actions when traffic comes in one or the other "side" of the virtual wire. For example, you might want connections initiated from the LAN (in the picture above) to be decrypted for security inspection purposes, while having traffic coming in from the firewall passed through transparently.

Conclusion
Deploying the BIG-IP in virtual wire mode provides a great way to insert services into your network without affecting the rest of the network configuration, routing and forwarding. The flexibility of the BIG-IP allows you to control the traffic traversing the BIG-IP on whatever VLAN (tagged or not). I hope this has been useful.
L2 Deployment of vCMP guest with Ixia network packet broker

Introduction
The insertion of inline security devices into an existing network infrastructure can require significant network re-design and architecture changes. Deploying tools that operate transparently at Layer 2 of the OSI model (L2) can greatly reduce the complexity and disruption associated with these implementations. This type of insertion eliminates the need to make changes to the infrastructure and provides failsafe mechanisms to ensure business continuity should a security device fail. F5's BIG-IP hardware appliances can be inserted in L2 networks. This can be achieved using either virtual wire (vWire) or by bridging two Virtual LANs using a VLAN group.
This document covers the design and implementation of the Ixia bypass switch and Ixia packet broker in conjunction with the BIG-IP i5800 appliance configured with hardware virtualization (vCMP), VLAN groups and VLAN tagging (IEEE 802.1Q tagging). Emphasis is placed on the network insertion, integration and Layer 2 configuration. The configuration of BIG-IP modules, such as those providing DDoS protection/mitigation or SSL visibility, is beyond the scope of this document and is the subject of other deployment guides. For more information on F5 security modules and their configuration please refer to www.f5.com to access user guides, recommended practices and other deployment documentation.

Architecture Overview
Enterprise networks are built using various architectures depending on business objectives and budget requirements. As corporate security policies, regulations and requirements evolve, new security services need to be inserted into the existing infrastructure. These new services can be provided by tools such as intrusion detection and prevention systems (IDS/IPS), web application firewalls (WAF), denial of service protection (DoS), or data loss prevention devices (DLP). These are often implemented in the form of physical or virtual appliances requiring network-level integration.
Figure 1 - Bypass Switch Operation
This document focuses on using bypass switches as insertion points and network packet brokers to provide further flexibility. Bypass switches are passive networking devices that mimic the behavior of a straight piece of wire between devices while offering the flexibility to forward traffic to a security service. They offer the possibility of detecting service failure and bypassing the service completely should it become unavailable. This is illustrated in Figure 1. The bypass switch forwards traffic to the service during normal operation, and bypasses the tool in other circumstances (e.g. tool failure, maintenance, manual offline). Capabilities of the bypass switch can be enhanced with the use of network packet brokers.
Note: Going forward, "tool" or "security service" refers to the appliance providing a security service. In the example below, this is an F5 BIG-IP appliance providing DDoS protection.
Network packet brokers are similar to bypass switches in that they operate at L2, do not take part in the switching infrastructure signaling (STP, BPDUs, etc.), and are transparent to the rest of the network. They provide forwarding flexibility to integrate and forward traffic to more than one device and create a chain. These chains allow for the use of multiple security services tools.
Figure 2 provides a simplified example where the network packet broker is connected to two different tools/security services. Network packet brokers operate programmatically and are capable of conditionally forwarding traffic to tools. Administrators are able to create multiple service chains based on ingress conditions or traffic types. Another function of the network packet broker is to provide logical forwarding and encapsulation (Q-in-Q) functions without taking part in the Ethernet switching. This includes adding, removing, or replacing 802.1Q tags and conditional forwarding based on frame type, VLAN tags, etc.
Figure 2 - Network Packet Broker - Service Chain
When inserted into the network at L2, BIG-IP devices leveraging system-level virtualization (vCMP) require the use of VLAN groups. VLAN groups bridge two VLANs together. In this document, the VLANs utilized are tagged using 802.1Q. This means that the tagging used on traffic ingress is different from the tagging used on traffic egress, as shown in Figure 3.
From an enterprise network perspective, the infrastructure typically consists of border routers feeding into border switches. Firewalls connect into the border switches with their outside (unsecured/internet-facing) interfaces. They connect to the core switching mesh with their inside (protected, corporate and systems-facing) interfaces. Figure 3 below shows the insertion of the bypass switch in the infrastructure between the firewall and the core switching layer. A network packet broker is also inserted between the bypass switch and the security services.
Figure 3 - Service Chain Insertion
Note: the core switch and firewall configuration are not altered in any way.
Figure 4 describes how frames traverse the bypass switch, network packet broker and security device. It also shows the transformation of the frames in transit. The VLAN tags used in the diagram are provided for illustration purposes; network administrators may wish to use VLAN tags consistent with their environment.
Prior to the tool chain insertion, packets egress the core and ingress the firewall with a VLAN tag of 101. After the insertion, packets egress the core (blue path) tagged with 101 and ingress the Bypass 1 (BP1) switch (1). They are redirected to the network packet broker (PB1). On ingress to PB1 (2), an outer VLAN tag of 2001 is added. The VLAN tag is then changed to match the BIG-IP VLAN group tag of 4001 before egressing PB1 (3). An explanation of the network packet broker's use of VLAN tags and the VLAN ID replacement is covered in the next section. The packet is processed by BIG-IP 1 (4), which returns it to PB1 with a replaced outer VLAN of 2001 (5). PB1 removes the outer VLAN tag and sends it back to BP1 (6). BP1 forwards it to the north switch (1) with the original VLAN tag of 101. Path 2 (green) follows the same flow but on a different bypass switch, network packet broker and BIG-IP. Path 2 is assigned different outer VLAN tags (2003 and 4003) by the packet broker.
Figure 4 - South-North traffic flow
Heartbeats are configured on both bypass switches to monitor tools in their primary and secondary paths. If a tool failure is detected, the bypass switch forwards traffic to the secondary path. This is illustrated in Figure 4.5.
Figure 4.5 - Heartbeat
Network Packet Broker (NPB) VLAN Re-write
The network packet broker utilizes VLANs to keep track of flows from different paths in a tool-sharing configuration. A unique VLAN ID is configured for each path.
The tag is added on ingress and removed on egress. The VLAN tags enable the packet broker to keep track of flows in and out of the shared tool and return them to the correct path. If the flow entering the network packet broker already has a VLAN tag, then the packet broker must be configured to use Q-in-Q to add an outer tag.
In this document, the BIG-IP is deployed as a tool in the network packet broker service chain. The BIG-IP is running vCMP and is configured in VLAN group mode. In this mode, the BIG-IP requires two VLANs to operate, one facing north and the other facing south. As packets traverse the BIG-IP, the VLAN tag is changed. This presents a challenge for the network packet broker because it expects to receive the same unaltered packets that it sends to the inline tools; the network packet broker will drop the altered packets. To address this issue, additional configuration is required, using service chains, filters and hard loops.
Network Packet Broker VLAN Replacement
1. The frames ingress the network packet broker on port 2. An outer VLAN tag of 2001 is added to the frames by Service Chain 3 (SC3).
2. The frames are forwarded to port 17 and egress the network packet broker, which is externally patched to port 18.
3. Port 18 is internally linked to port 10 by a filter.
4. As traffic egresses port 10, a filter is applied to change the VLAN from 2001 to 4001.
5. The outer VLAN tag on the frames is changed from 4001 to 2001 as they traverse the BIG-IP. The frames egress port 2.1 on the BIG-IP and ingress the network packet broker on port 9.
6. The frames are sent through SC3, where the outer VLAN is stripped off, and egress on port 1.
7. Frames are forwarded back to the bypass switch.
The return traffic follows the same flow as described above but in reverse order. The only difference is that a different filter is applied to port 10 to replace the 4001 tag with 2001.
Figure 5 - Network Packet Broker VLAN Tag Replacement

Lab
The use case selected for this verified design is based on a customer design. The customer's requirements were that the BIG-IPs must be deployed in vCMP mode and in Layer 2, which limits the BIG-IP deployment to VLAN group mode. The design presented challenges and creative solutions to overcome them. The intention is not for the reader to replicate the design but to …
The focus of this lab is the L2 insertion point and the flow of traffic through the service chain. A pair of switches was used to represent the north and south ends of each path, a pair for blue and a pair for green. One physical bypass switch was configured with two logical bypass switches, and one physical network packet broker simulated two network packet brokers.
Lab Equipment List (Appliance / Version)
Figure 6 - Lab diagram
Lab Configuration
- Arista network switches
- Ixia Bypass switch
- Ixia Network Packet Broker
- F5 BIG-IP
- Test case
Arista Network Switches
Four Arista switches were used to generate the north-south traffic. A pair of switches represents Path 1 (blue), with a firewall to the north of the insertion and the core to the south. The second pair of switches represents Path 2 (green). A VLAN 101 and a VLAN interface 101 were created on each switch. Each VLAN interface was assigned an IP address in the 10.10.101.0/24 range.
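A minimal Arista EOS sketch consistent with that description might look like the following (the host address, the interface number and the trunk settings are assumptions, not taken from the lab):

vlan 101
interface Vlan101
   ip address 10.10.101.1/24
interface Ethernet1
   switchport mode trunk
   switchport trunk allowed vlan 101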
Ixia iBypass Duo Configuration
Steps Summary
Step 1. Power Fail State
Step 2. Enable Ports
Step 3. Bypass Switch
Step 4. Heartbeat
The initial setup of the iBypass Duo switch is covered in the Ixia iBypass Duo User's Guide; please visit the Ixia website to download a copy. This section covers the configuration of the bypass switch to forward traffic to the network packet broker (PB1); in the event PB1 fails, to forward traffic to the secondary network packet broker (PB2); and, as the last resort, to fail open and permit traffic to flow, bypassing the service chain.
Step 1. In the event of a power failure, the bypass switch is configured to fail open and permit the traffic to flow uninterrupted.
a. Click the CONFIGURATION (1) menu bar and select Chassis (2). Select Open (3) from the Power Fail State and click SAVE (4) on the menu bar.
Step 2. Enable Ports
a. Click the CONFIGURATION (1) menu bar and select Port (2).
b. Check the box (3) at the top of the column to select all ports and click Enable (4).
c. Click SAVE (5) on the menu bar.
Step 3. Configure Bypass Switch 1 and 2
a. Click Diagram (1) and click +Add Bypass Switch (2).
b. Select the Inline Network Links tab (1) and click Add Ports (2). From the pop-up window, select port A. The B side port will automatically be selected.
c. Select the Inline Tools tab (1) and click the + (2).
d. From the Edit Tool Connections window, on the A side (top), click Add Ports (1) and select port 1 from the pop-up window (2). Repeat and select port 5. On the B side (bottom), click Add Ports and select port 2 (3). Repeat and select port 6.
Note: The position of the ports is also the priority of the ports. In this example, ports 1 (A side) and 2 (B side) are the primary path.
e. Repeat steps a through d to create Bypass Switch 2 with Inline Network Links C and D and Inline Tools ports 7, 8 and 3, 4 as the secondary.
Step 4. Heartbeat configuration
a. From the Diagram view, click the Bypass Switch 1 menu square (1) and select Properties (2).
b. Click the Heartbeats tab (1), click Show (2) and populate the values (3); to edit a field, just click the field and type. Click OK and check the Enabled box (4).
c. Repeat steps a and b to create the heartbeats for the remaining interfaces. Ideally, heartbeats are configured to check both directions: from tool port 1 to tool port 2, and from tool port 2 to tool port 1. Repeat the steps to create the heartbeat for port 2 but reverse the MACs for SMAC and DMAC. Use a different set of MACs (e.g. 0050 c23c 6012 and 0050 c23c 6013) when configuring the heartbeat for tool ports 5 and 6.
This concludes the bypass switch configuration.

Network Packet Broker (NPB) Configuration
In this lab, the NPB is configured with three types of ports: Bypass, Inline Tool and Network.
Steps Summary
Step 1. Configure Bypass Port Pairs
Step 2. Create Inline Tool Resources Ports
Step 3. Create Service Chains
Step 4. Link the Bypass Pairs with the Service Chains
Step 5. Create Dynamic Filters
Step 6. Apply the Filters
Step 1. Configure Bypass Port Pairs (BPP)
Bypass ports are ports that send and receive traffic from the network side. In this lab, they are connected to the bypass switches.
a. Click the INLINE menu (1) and click Add Bypass Port Pair (2).
b. In the Add Bypass Port Pair window, enter a name (ByPass 1 Primary). To select the Side A Port, click the Select Port button (2) and, in the pop-up window, select a port (P01). Now select the Side B Port (P02) (3) and click OK. Repeat these steps to create the remaining BPPs:
- ByPass 1 Secondary with P05 (Side A) and P06 (Side B)
- ByPass 2 Primary with P07 (Side A) and P08 (Side B)
- ByPass 2 Secondary with P03 (Side A) and P04 (Side B)
Step 2. Create Inline Tool Resources Ports
Inline Tool Resources (ITR) are ports connected to tools, such as the BIG-IP. These ports are used in the service chain configuration to connect BPPs to ITRs.
a. Click the INLINE menu (1) and click Add Tool Resource (2).
b. Enter a name (BIG-IP 1) (1) and click the Inline Tool Ports tab (2).
c. To select the Side 1 Port, click the Select Port (1) button and select a port (P09) from the pop-up window. Do the same for the Side 2 port (P17) (2). Provide an Inline Tool Name (BIG-IP 1) (3) and click Create Port Pair (4). Repeat these steps to create ITR BIG-IP 2 using ports P13 and P21.
NOTE: The Side B port does not match the diagram due to the VLAN replacement explained previously.
Step 3. Create Service Chains
A service chain connects BPPs to the inline tools. It controls how traffic flows from the BPPs to the tools in the chain through the use of dynamic filters.
a. Click the INLINE menu (1) and click Add Service Chain (2).
b. In the pop-up window, enter a name (Service Chain 1) (1) and check the box to Enable Tool Sharing (2). Click Add (3) and, in the pop-up window, select Bypass 1 Primary and Bypass 2 Secondary. Once added, the BPPs are displayed in the window. Select each VLAN Id field and replace them with 2001 (4) and 2002 (5). Repeat these steps to create Service Chain 2, using BPPs Bypass 2 Primary and Bypass 1 Secondary with VLANs 2003 and 2004 respectively. Click the Inline Tool Resource tab (6) to add ITRs.
c. On the Inline Tool Resource tab, click Add and select the ITR (BIG-IP 1) from the pop-up window. Repeat these steps for Service Chain 2 and select BIG-IP 2.
d. The next step connects the network (BPPs) to the tools using the service chains. To connect the BPPs to the service chains, simply drag a line to link them. The lines in the red box are created manually. The lines in the blue box are automatically created to correlate with the links in the red box. This means traffic sent out BPP port A into the service chain is automatically returned to port B.
Step 4. Configure Filters
Filters are used to link ports, match traffic on VLAN criteria, and perform VLAN replacement.
a. Click the OBJECTS menu (1), select Dynamic Filters (2), click +Add (3) and select Dynamic Filters.
b. Enter a name (1).
c. On the Filter Criteria tab, select Pass by Criteria (1) and click VLAN (2). In the pop-up window, enter a VLAN ID (4001) and select Match Any (3).
d. On the Connections tab, click Add Ports (1) to add a network port. In the pop-up window, select a port (P10). Add a port for tools (P18) (2).
e. Skip the Access Control tab and select the VLAN Replacement tab. Check the Enable VLAN Replacement box and enter a VLAN ID (2001). Repeat these steps and create the remaining filters using the table below. NOTE: The filter name (Fx) does not need to match this table exactly.
Filters (table)
This concludes the network packet broker configuration.

BIG-IP Configuration
This section describes how to configure a vCMP BIG-IP device to utilize VLAN groups. As a reminder, a VLAN group is a configuration element that allows the bridging of VLANs. In vCMP, the hypervisor is called the vCMP host. Virtual machines running on the host are called guests. Lower-layer configuration for networking on vCMP is done at the host level. VLANs are then made available to the guest. The VLAN bridging is configured at the guest level.
In the setup described herein, the VLAN interfaces are tagged with two 802.1Q tags. Q-in-Q is used to provide inner and outer tagging. The following assumes that the BIG-IPs are up and running, and that they are upgraded, licensed and provisioned for vCMP. It is also assumed that all physical connectivity is completed as appropriate, following a design identifying port, VLAN tagging and other Ethernet media choices. Prior to proceeding you will need the following information for each BIG-IP that will be configured:
Configuration overview:
1. [vCMP host] Create the VLANs that will be bridged.
2. [vCMP host] Create the vCMP guest:
   a. Configure - define what version of software, size of VM, associated VLANs, etc.
   b. Provision - create the BIG-IP virtual machine or guest.
   c. Deploy - start the BIG-IP guest.
3. [vCMP guest] Bridge the VLAN group.
Create VLANs that will be bridged:
- Login to the vCMP host interface.
- Go to Network >> VLAN >> VLAN List.
- Select "Create".
- In the VLAN configuration panel:
  - Provide a name for the object.
  - Enter the Tag (this corresponds to the "outer" tag).
  - Select "Specify" in the Customer Tag dropdown.
  - Enter a value for the Customer Tag, a value between 1 and 4094 (this is the "inner" tag).
  - Select an interface to associate the VLAN to.
  - Select "Tagged" in the "Tagging" drop down.
  - Select "Double" in the "Tag Mode" drop down.
  - Click on the "Add" button in the Resources box.
  - Select "Finished" as shown in the figure below.
Repeat the steps above to create a second VLAN that will be added to the VLAN group. Once the above steps are completed, the VLAN WebUI should look like the figure shown.
Create vCMP Guest
- Login to the vCMP host interface.
- Go to vCMP >> Guest List.
- Select "Create…" (upper right-hand corner).
- Populate the following fields: Name, Host Name, Management Port, IP Address, Network Mask, Management Route, and VLAN List (ensure that the VLANs that need to be bridged are in the "selected" pane).
- Set the "Requested State" to "Deployed" (this will create a virtual BIG-IP).
- Click on "Finish" - the window should look like the following. Clicking on "Finish" will configure, provision and deploy the BIG-IP guest.
Bridge VLAN group
- Login to the vCMP guest interface.
- Go to Network >> VLANs >> VLAN Groups.
- Select "Create".
- In the configuration window as shown below:
  - Enter a unique name for the VLAN group object.
  - Select the VLANs that need to be bridged.
  - Keep the default configuration for the other settings.
  - Select "Finished".
Once created, traffic should be able to traverse the BIG-IP. This concludes the BIG-IP configuration.
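For reference, the guest-level bridging step has a direct tmsh equivalent on the vCMP guest, following the same vlan-group syntax used elsewhere in this series (the group and VLAN names below are placeholders for the two VLANs presented to the guest):

tmsh create net vlan-group vwire_bridge members add { ingress_vlan egress_vlan } bridge-traffic enabled
tmsh save sys config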
VLAN Group and Asymmetric Deployment

A VLAN group is a logical container that includes two or more distinct VLANs. VLAN groups are intended for load balancing traffic in a Layer 2 network, when you want to minimize the reconfiguration of hosts on that network. A VLAN group ensures that the BIG-IP system can process traffic between a client and server when the two hosts reside in the same address space but on two different VLANs.

Configuring a VLAN group in the UI
1. On the Main tab, click Network > VLANs.
2. Click Create.
3. Provide the following details on the VLAN creation page: VLAN name, VLAN tag, interface details, and tag type (tagged/untagged).
4. Similarly, create another VLAN as mentioned in step 3.
5. Click on VLAN Groups to create a VLAN group.
6. Select the VLAN interfaces and the appropriate mode from the Transparency Mode drop down.
Note: It is mandatory to create a self IP for opaque mode.

VLAN Group Modes
The BIG-IP system is capable of processing traffic using a combination of Layer 2 and Layer 3 forwarding, that is, switching and IP routing. When you set the transparency mode, you specify the type of forwarding that the BIG-IP system performs when forwarding a message to a host in a VLAN. The default setting is translucent, which means that the BIG-IP system uses a mix of Layer 2 and Layer 3 processing. The allowed modes are:
1. Transparent
2. Translucent
3. Opaque

Transparent mode
In transparent mode the original MAC address of the remote system is preserved across VLANs.
Configuring in CLI mode:
tmsh modify net interface 1/1.1 port-fwd-mode l3
tmsh modify net interface 2/1.2 port-fwd-mode l3
tmsh create net vlan left_vlan_1 tag 77 interfaces add { 1/1.1 { tagged } }
tmsh create net vlan right_vlan_1 tag 78 interfaces add { 2/1.2 { tagged } }
tmsh create net vlan-group vg_1 bridge-traffic enabled mode transparent members add { left_vlan_1 right_vlan_1 }
Sample ICMP packet capture on BIG-IP (tcpdump -nne -s0 -i 0.0:nn icmp):
22:34:43.858872 3c:41:0e:9b:36:e4 > 3c:41:0e:9b:1c:6a, ethertype 802.1Q (0x8100), length 189: vlan 81, p 0, ethertype IPv4, 10.0.81.1 > 10.0.81.2: ICMP echo request, id 207, seq 0, length 80 in slot4/tmm4 lis= port=1/1.1 trunk= flowtype=0 flowid=0 peerid=0 conflags=0 inslot=19 inport=4 haunit=0 priority=3
22:34:43.859194 3c:41:0e:9b:36:e4 > 3c:41:0e:9b:1c:6a, ethertype 802.1Q (0x8100), length 199: vlan 82, p 0, ethertype IPv4, 10.0.81.1 > 10.0.81.2: ICMP echo request, id 207, seq 0, length 80 out slot4/tmm9 lis=_vlangroup port=2/1.2 trunk= flowtype=132 flowid=560CED5A8500 peerid=560CED5A8400 conflags=100000E26 inslot=19 inport=4 haunit=1 priority=3
22:34:43.860821 3c:41:0e:9b:1c:6a > 3c:41:0e:9b:36:e4, ethertype 802.1Q (0x8100), length 189: vlan 82, p 0, ethertype IPv4, 10.0.81.2 > 10.0.81.1: ICMP echo reply, id 207, seq 0, length 80 in slot4/tmm4 lis= port=2/1.2 trunk= flowtype=0 flowid=0 peerid=0 conflags=0 inslot=19 inport=4 haunit=0 priority=3
22:34:43.860830 3c:41:0e:9b:1c:6a > 3c:41:0e:9b:36:e4, ethertype 802.1Q (0x8100), length 199: vlan 81, p 0, ethertype IPv4, 10.0.81.2 > 10.0.81.1: ICMP echo reply, id 207, seq 0, length 80 out slot4/tmm9 lis=_vlangroup port=1/1.1 trunk=
From the above packet capture we can see the MAC addresses are preserved: 3c:41:0e:9b:36:e4 > 3c:41:0e:9b:1c:6a.

Translucent mode
In translucent mode, the locally-unique bit is toggled in all packets forwarded across VLANs.
Configuring in CLI mode:
tmsh modify net interface 1/1.1 port-fwd-mode l3
tmsh modify net interface 2/1.2 port-fwd-mode l3
tmsh create net vlan left_vlan_1 tag 77 interfaces add { 1/1.1 { tagged } }
tmsh create net vlan right_vlan_1 tag 78 interfaces add { 2/1.2 { tagged } }
tmsh create net vlan-group vg_1 bridge-traffic enabled mode translucent members add { left_vlan_1 right_vlan_1 }
Sample ICMP packet capture on BIG-IP (tcpdump -nne -s0 -i 0.0:nn icmp):
22:46:40.143781 3c:41:0e:9b:36:e4 > 3e:41:0e:9b:1c:6a, ethertype 802.1Q (0x8100), length 189: vlan 81, p 0, ethertype IPv4, 10.0.81.1 > 10.0.81.2: ICMP echo request, id 208, seq 1, length 80 in slot4/tmm1 lis= port=1/1.1 trunk= flowtype=0 flowid=0 peerid=0 conflags=0 inslot=19 inport=17 haunit=0 priority=3
22:46:40.143859 3e:41:0e:9b:36:e4 > 3c:41:0e:9b:1c:6a, ethertype 802.1Q (0x8100), length 199: vlan 82, p 0, ethertype IPv4, 10.0.81.1 > 10.0.81.2: ICMP echo request, id 208, seq 1, length 80 out slot4/tmm6 lis=_vlangroup port=2/1.2 trunk= flowtype=132 flowid=56089F3A8300 peerid=56089F3A8200 conflags=100000E26 inslot=19 inport=17 haunit=1 priority=3
22:46:40.145613 3c:41:0e:9b:1c:6a > 3e:41:0e:9b:36:e4, ethertype 802.1Q (0x8100), length 189: vlan 82, p 0, ethertype IPv4, 10.0.81.2 > 10.0.81.1: ICMP echo reply, id 208, seq 1, length 80 in slot4/tmm1 lis= port=2/1.2 trunk= flowtype=0 flowid=0 peerid=0 conflags=0 inslot=19 inport=17 haunit=0 priority=3
22:46:40.145781 3e:41:0e:9b:1c:6a > 3c:41:0e:9b:36:e4, ethertype 802.1Q (0x8100), length 199: vlan 81, p 0, ethertype IPv4, 10.0.81.2 > 10.0.81.1: ICMP echo reply, id 208, seq 1, length 80 out slot4/tmm6 lis=_vlangroup port=1/1.1 trunk=
From the above packet capture we can see the locally-unique bit is toggled from 3c:41:0e:9b:36:e4 to 3e:41:0e:9b:36:e4.

Opaque mode
Opaque mode uses proxy ARP with Layer 3 forwarding. Proxy ARP occurs when one host responds to an ARP request on behalf of another host. In opaque mode we need to configure a self IP on the VLAN group to forward the traffic.
Configuring in CLI mode:
tmsh modify net interface 1/1.1 port-fwd-mode l3
tmsh modify net interface 2/1.2 port-fwd-mode l3
tmsh create net vlan left_vlan_1 tag 81 interfaces add { 1/1.1 { tagged } }
tmsh create net vlan right_vlan_1 tag 82 interfaces add { 2/1.2 { tagged } }
tmsh create net vlan-group vg_1 bridge-traffic enabled mode opaque members add { left_vlan_1 right_vlan_1 }
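Because opaque mode relies on proxy ARP, the self IP mentioned above is mandatory; a minimal sketch (the self IP name and address are assumptions, chosen from the same 10.0.81.0/24 range seen in the capture below):

tmsh create net self self_vg_1 address 10.0.81.10/24 vlan vg_1 allow-service none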
Sample ICMP packet capture on BIG-IP (tcpdump -nne -s0 -i 0.0:nn icmp):
listening on 0.0:nn, link-type EN10MB (Ethernet), capture size 65535 bytes
22:59:12.866402 3c:41:0e:9b:36:e4 > 02:23:e9:04:98:06, ethertype 802.1Q (0x8100), length 189: vlan 81, p 0, ethertype IPv4, 10.0.81.1 > 10.0.81.2: ICMP echo request, id 221, seq 0, length 80 in slot4/tmm2 lis= port=1/1.1 trunk= flowtype=0 flowid=0 peerid=0 conflags=0 inslot=19 inport=2 haunit=0 priority=3
22:59:12.866634 02:23:e9:04:98:06 > 3c:41:0e:9b:1c:6a, ethertype 802.1Q (0x8100), length 199: vlan 82, p 0, ethertype IPv4, 10.0.81.1 > 10.0.81.2: ICMP echo request, id 221, seq 0, length 80 out slot4/tmm3 lis=_vlangroup port=2/1.2 trunk= flowtype=132 flowid=5604511A8100 peerid=5604511A8000 conflags=E26 inslot=19 inport=2 haunit=1 priority=3
22:59:12.868114 3c:41:0e:9b:1c:6a > 02:23:e9:04:98:06, ethertype 802.1Q (0x8100), length 189: vlan 82, p 0, ethertype IPv4, 10.0.81.2 > 10.0.81.1: ICMP echo reply, id 221, seq 0, length 80 in slot4/tmm2 lis= port=2/1.2 trunk= flowtype=0 flowid=0 peerid=0 conflags=0 inslot=19 inport=2 haunit=0 priority=3
22:59:12.868266 02:23:e9:04:98:06 > 3c:41:0e:9b:36:e4, ethertype 802.1Q (0x8100), length 199: vlan 81, p 0, ethertype IPv4, 10.0.81.2 > 10.0.81.1: ICMP echo reply, id 221, seq 0, length 80 out slot4/tmm3 lis=_vlangroup port=1/1.1 trunk= flowtype=68 flowid=5604511A8000 peerid=5604511A8100 conflags=100200000000E26 inslot=19 inport=2 haunit=1 priority=3
From the above packet capture we can see the BIG-IP is doing proxy ARP, presenting its own MAC address 02:23:e9:04:98:06 to the neighboring switch.

VLAN Group Trunk
In a VLAN group deployment, trunks can be configured in two LACP modes on the BIG-IP with neighboring devices. The two LACP modes are:
Active: the BIG-IP periodically sends control packets regardless of whether the partner system has issued a request.
Passive: the BIG-IP sends control packets only when the partner system has issued a request.
Configuring trunks in active mode:
tmsh modify net interface 1.1 port-fwd-mode l3
tmsh modify net interface 1.2 port-fwd-mode l3
tmsh create net trunk left_trunk_1 interfaces add { 1.1 1.2 } qinq-ethertype 0x8100 link-select-policy auto lacp enabled lacp-mode active
tmsh create net vlan left_vlan_1 tag 31 interfaces add { left_trunk_1 { tagged } }
tmsh create net vlan left_vlan_2 tag 32 interfaces add { left_trunk_1 { tagged } }
tmsh modify net interface 2.1 port-fwd-mode l3
tmsh modify net interface 2.2 port-fwd-mode l3
tmsh create net trunk right_trunk_1 interfaces add { 2.1 2.2 } qinq-ethertype 0x8100 link-select-policy auto lacp enabled lacp-mode active
tmsh create net vlan right_vlan_1 tag 41 interfaces add { right_trunk_1 { tagged } }
tmsh create net vlan right_vlan_2 tag 42 interfaces add { right_trunk_1 { tagged } }
tmsh create net vlan-group vg_1 bridge-traffic enabled mode transparent members add { left_vlan_1 right_vlan_1 }
tmsh create net vlan-group vg_2 bridge-traffic enabled mode transparent members add { left_vlan_2 right_vlan_2 }
Configuring trunks in passive mode:
tmsh modify net interface 1.1 port-fwd-mode l3
tmsh modify net interface 1.2 port-fwd-mode l3
tmsh create net trunk left_trunk_1 interfaces add { 1.1 1.2 } qinq-ethertype 0x8100 link-select-policy auto lacp enabled lacp-mode passive
tmsh create net vlan left_vlan_1 tag 31 interfaces add { left_trunk_1 { tagged } }
tmsh create net vlan left_vlan_2 tag 32 interfaces add { left_trunk_1 { tagged } }
tmsh modify net interface 2.1 port-fwd-mode l3
tmsh modify net interface 2.2 port-fwd-mode l3
tmsh create net trunk right_trunk_1 interfaces add { 2.1 2.2 } qinq-ethertype 0x8100 link-select-policy auto lacp enabled lacp-mode passive
tmsh create net vlan right_vlan_1 tag 41 interfaces add { right_trunk_1 { tagged } }
tmsh create net vlan right_vlan_2 tag 42 interfaces add { right_trunk_1 { tagged } }
tmsh create net vlan-group vg_1 bridge-traffic enabled mode transparent members add { left_vlan_1 right_vlan_1 }
tmsh create net vlan-group vg_2 bridge-traffic enabled mode transparent members add { left_vlan_2 right_vlan_2 }
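Whichever LACP mode is used, trunk and LACP status can be confirmed from tmsh (standard show commands; output format varies by version):

tmsh show net trunk
tmsh show net trunk left_trunk_1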
VLAN Group Options
VLAN-based fail-safe
VLAN fail-safe is a feature you enable when you want to base redundant-system failover on VLAN-related events. To configure VLAN fail-safe, you specify a timeout value and the action that you want the system to take when the timeout period expires.
Configure in CLI:
tmsh create net vlan test1 { failsafe enabled failsafe-action reboot failsafe-timeout 90 }
Configure in UI:
1. Follow the steps mentioned above for creating a VLAN.
2. In the VLAN, click the advanced menu and enable the failsafe.
3. Also specify the failsafe timeout and failsafe action.
If a VLAN failsafe-enabled link goes down, the BIG-IP waits for the failsafe timeout and then reboots or restarts based on the failsafe action on an HA system. It is recommended to configure failsafe on an HA system.

Proxy Exclusion List
A host in a VLAN cannot normally communicate with a host in another VLAN. This rule applies to ARP requests as well. However, if you put the VLANs into a single VLAN group, the BIG-IP system can perform a proxy ARP request. The ARP request should be learned on the member VLAN and not on the VLAN group. A proxy ARP request is an ARP request that the BIG-IP system can send, on behalf of a host in a VLAN, to hosts in another VLAN. In some cases, you might not want a host to forward proxied ARP requests to a specific host, or to other hosts in the configuration. To exclude specific hosts from receiving forwarded proxied ARP requests, specify the IP addresses in the proxy exclusion list.
Configuring in CLI:
tmsh modify net vlan-group vlt1001 proxy-excludes add { 10.10.10.2 }
Configuring in UI:
1. Follow the steps mentioned above in the VLAN group configuration.
2. Click on the proxy exclusion list.
3. Click on the create button.
4. Add the IP address for which you want to block the ARP request.
5. The final config will look like below.

VLAN Group Asymmetric Deployment
When a VLAN group is deployed in an asymmetric way, network packets enter via VLAN A <-> VLAN B and return via VLAN D and VLAN C, unlike the symmetric path, in which packets come and go using VLAN A and VLAN B. For the traffic to work in an asymmetric path we need to disable the db variable connection.vlankeyed.
Disabling VLAN-keyed connections
With VLAN-keyed connections enabled, the VLAN for the ingress traffic must match the configured VLAN and be present in the BIG-IP connflow lookup table; otherwise, the connection will not be processed by the BIG-IP system. This behavior is different for egress traffic, as egress traffic may use an alternate VLAN. For example, when a client sends SYN packets to a virtual server address configured on VLAN A, and that virtual server address replies to the connection request with a SYN/ACK from VLAN B, the ACK from that client will be matched when arriving on VLAN A or VLAN B. The BIG-IP system will not process the client's ACK reply if the reply arrives on VLAN C or VLAN D. Disabling VLAN-keyed connections allows the BIG-IP system to accept asymmetrically routed connections across multiple VLANs.
To disable VLAN-keyed connections in the CLI:
tmsh modify sys db connection.vlankeyed value disable
To disable VLAN-keyed connections in the UI:
1. On the Main tab, click Configuration > Local Traffic.
2. Uncheck the VLAN-Keyed Connections setting to disable it.
Sample TCP packet capture on BIG-IP:
07:47:00.648140 3c:41:0e:9b:36:e4 > 3c:41:0e:9b:1c:6a, ethertype 802.1Q (0x8100), length 109: vlan 81, p 0, ethertype IPv4, 10.0.80.2.42568 > 10.0.90.2.80: Flags [S], seq 1701549128, win 64240, options [mss 1460,sackOK,TS val 1817130709 ecr 0,nop,wscale 7], length 0 in slot1/tmm4 lis= port=1/1.1 trunk=
07:47:00.648216 3c:41:0e:9b:36:e4 > 3c:41:0e:9b:1c:6a, ethertype 802.1Q (0x8100), length 121: vlan 82, p 0, ethertype IPv4, 10.0.80.2.42568 > 10.0.90.2.80: Flags [S], seq 1701549128, win 64240, options [mss 1460,sackOK,TS val 1817130709 ecr 0,nop,wscale 7], length 0 out slot1/tmm4 lis=/Common/test port=2/1.2 trunk=
07:47:00.648702 3c:41:0e:9b:1c:52 > 3c:41:0e:9b:36:d8, ethertype 802.1Q (0x8100), length 121: vlan 84, p 0, ethertype IPv4, 10.0.90.2.80 > 10.0.80.2.42568: Flags [S.], seq 41553624, ack 1701549129, win 65160, options [mss 1460,sackOK,TS val 2143595190 ecr 1817130709,nop,wscale 7], length 0 in slot1/tmm4 lis=/Common/test port=2/1.1 trunk=
07:47:00.648710 3c:41:0e:9b:1c:52 > 3c:41:0e:9b:36:d8, ethertype 802.1Q (0x8100), length 121: vlan 83, p 0, ethertype IPv4, 10.0.90.2.80 > 10.0.80.2.42568: Flags [S.], seq 41553624, ack 1701549129, win 65160, options [mss 1460,sackOK,TS val 2143595190 ecr 1817130709,nop,wscale 7], length 0 out slot1/tmm4 lis=/Common/test port=1/1.2 trunk=
07:47:00.648950 3c:41:0e:9b:36:e4 > 3c:41:0e:9b:1c:6a, ethertype 802.1Q (0x8100), length 113: vlan 81, p 0, ethertype IPv4, 10.0.80.2.42568 > 10.0.90.2.80: Flags [.], ack 1, win 502, options [nop,nop,TS val 1817130710 ecr 2143595190], length 0 in slot1/tmm4 lis=/Common/test port=1/1.1 trunk=
07:47:00.648957 3c:41:0e:9b:36:e4 > 3c:41:0e:9b:1c:6a, ethertype 802.1Q (0x8100), length 113: vlan 82, p 0, ethertype IPv4, 10.0.80.2.42568 > 10.0.90.2.80: Flags [.], ack 1, win 502, options [nop,nop,TS val 1817130710 ecr 2143595190], length 0 out slot1/tmm4 lis=/Common/test port=2/1.2 trunk=
07:47:00.649193 3c:41:0e:9b:36:e4 > 3c:41:0e:9b:1c:6a, ethertype 802.1Q (0x8100), length 186: vlan 81, p 0, ethertype IPv4, 10.0.80.2.42568 > 10.0.90.2.80: Flags [P.], seq 1:74, ack 1, win 502, options [nop,nop,TS val 1817130710 ecr 2143595190], length 73: HTTP: GET / HTTP/1.1 in slot1/tmm4 lis=/Common/test port=1/1.1 trunk=
07:47:00.649198 3c:41:0e:9b:36:e4 > 3c:41:0e:9b:1c:6a, ethertype 802.1Q (0x8100), length 186: vlan 82, p 0, ethertype IPv4, 10.0.80.2.42568 > 10.0.90.2.80: Flags [P.], seq 1:74, ack 1, win 502, options [nop,nop,TS val 1817130710 ecr 2143595190], length 73: HTTP: GET / HTTP/1.1 out slot1/tmm4 lis=/Common/test port=2/1.2 trunk=
07:47:00.649495 3c:41:0e:9b:1c:52 > 3c:41:0e:9b:36:d8, ethertype 802.1Q (0x8100), length 113: vlan 84, p 0, ethertype IPv4, 10.0.90.2.80 > 10.0.80.2.42568: Flags [.], ack 74, win 509, options [nop,nop,TS val 2143595190 ecr 1817130710], length 0 in slot1/tmm4 lis=/Common/test port=2/1.1 trunk=
07:47:00.649500 3c:41:0e:9b:1c:52 > 3c:41:0e:9b:36:d8, ethertype 802.1Q (0x8100), length 113: vlan 83, p 0, ethertype IPv4, 10.0.90.2.80 > 10.0.80.2.42568: Flags [.], ack 74, win 509, options [nop,nop,TS val 2143595190 ecr 1817130710], length 0 out slot1/tmm4 lis=/Common/test port=1/1.2 trunk=
07:47:00.653094 3c:41:0e:9b:1c:52 > 3c:41:0e:9b:36:d8, ethertype 802.1Q (0x8100), length 1561: vlan 84, p 0, ethertype IPv4, 10.0.90.2.80 > 10.0.80.2.42568: Flags [.], seq 1:1449, ack 74, win 509, options [nop,nop,TS val 2143595192 ecr 1817130710], length 1448: HTTP: HTTP/1.1 200 OK in slot1/tmm4 lis=/Common/test port=2/1.1 trunk=
07:47:00.653097 3c:41:0e:9b:1c:52 > 3c:41:0e:9b:36:d8, ethertype 802.1Q (0x8100), length 220: vlan 84, p 0, ethertype IPv4, 10.0.90.2.80 > 10.0.80.2.42568: Flags [P.], seq 1449:1556, ack 74, win 509, options [nop,nop,TS val 2143595192 ecr 1817130710], length 107: HTTP in slot1/tmm4 lis=/Common/test port=2/1.1 trunk=
From the above packet capture we can see the SYN packet arriving on VLAN 81 and leaving on VLAN 82, while the SYN/ACK returns on VLAN 84 and VLAN 83. The BIG-IP processes the packets with different VLANs matching the same connflow.

DB Variables
The table below describes VLAN group behavior with system database variables.

Troubleshooting
1. Verify that traffic is flowing through the default virtual server (_vlangroup).
Tcpdump command: tcpdump -nne -s0 -i 0.0:nnn
22:46:40.143781 3c:41:0e:9b:36:e4 > 3e:41:0e:9b:1c:6a, ethertype 802.1Q (0x8100), length 189: vlan 81, p 0, ethertype IPv4, 10.0.81.1 > 10.0.81.2: ICMP echo request, id 208, seq 1, length 80 in slot4/tmm1 lis= port=1/1.1 trunk= flowtype=0 flowid=0 peerid=0 conflags=0 inslot=19 inport=17 haunit=0 priority=3
22:46:40.143859 3e:41:0e:9b:36:e4 > 3c:41:0e:9b:1c:6a, ethertype 802.1Q (0x8100), length 199: vlan 82, p 0, ethertype IPv4, 10.0.81.1 > 10.0.81.2: ICMP echo request, id 208, seq 1, length 80 out slot4/tmm6 lis=_vlangroup port=2/1.2 trunk= flowtype=132 flowid=56089F3A8300 peerid=56089F3A8200 conflags=100000E26 inslot=19 inport=17 haunit=1 priority=3
22:46:40.145613 3c:41:0e:9b:1c:6a > 3e:41:0e:9b:36:e4, ethertype 802.1Q (0x8100), length 189: vlan 82, p 0, ethertype IPv4, 10.0.81.2 > 10.0.81.1: ICMP echo reply, id 208, seq 1, length 80 in slot4/tmm1 lis= port=2/1.2 trunk= flowtype=0 flowid=0 peerid=0 conflags=0 inslot=19 inport=17 haunit=0 priority=3
22:46:40.145781 3e:41:0e:9b:1c:6a > 3c:41:0e:9b:36:e4, ethertype 802.1Q (0x8100), length 199: vlan 81, p 0, ethertype IPv4, 10.0.81.2 > 10.0.81.1: ICMP echo reply, id 208, seq 1, length 80 out slot4/tmm6 lis=_vlangroup port=1/1.1 trunk=
2. Now create a virtual server based on requirements (TCP, UDP, ICMP) with the virtual server name test, and verify traffic is hitting the virtual server.
Tcpdump command: tcpdump -nne -s0 -i 0.0:nnn tcp
07:53:33.112175 3c:41:0e:9b:36:e4 > 3c:41:0e:9b:1c:6a, ethertype 802.1Q (0x8100), length 109: vlan 81, p 0, ethertype IPv4, 10.0.80.2.42570 > 10.0.90.2.80: Flags [S], seq 4156824303, win 64240, options [mss 1460,sackOK,TS val 1817523173 ecr 0,nop,wscale 7], length 0 in slot1/tmm6 lis= port=1/1.1 trunk=
07:53:33.112251 3c:41:0e:9b:36:e4 > 3c:41:0e:9b:1c:6a, ethertype 802.1Q (0x8100), length 121: vlan 82, p 0, ethertype IPv4, 10.0.80.2.42570 > 10.0.90.2.80: Flags [S], seq 4156824303, win 64240, options [mss 1460,sackOK,TS val 1817523173 ecr 0,nop,wscale 7], length 0 out slot1/tmm6 lis=/Common/test port=2/1.2 trunk=
07:53:33.112779 3c:41:0e:9b:1c:6a > 3c:41:0e:9b:36:e4, ethertype 802.1Q (0x8100), length 121: vlan 82, p 0, ethertype IPv4, 10.0.90.2.80 > 10.0.80.2.42570: Flags [S.], seq 1696348196, ack 4156824304, win 65160, options [mss 1460,sackOK,TS val 2143987653 ecr 1817523173,nop,wscale 7], length 0 in slot1/tmm6 lis=/Common/test port=2/1.2 trunk=
07:53:33.112786 3c:41:0e:9b:1c:6a > 3c:41:0e:9b:36:e4, ethertype 802.1Q (0x8100), length 121: vlan 81, p 0, ethertype IPv4, 10.0.90.2.80 > 10.0.80.2.42570: Flags [S.], seq 1696348196, ack 4156824304, win 65160, options [mss 1460,sackOK,TS val 2143987653 ecr 1817523173,nop,wscale 7], length 0 out slot1/tmm6 lis=/Common/test port=1/1.1 trunk=
3. Debugging steps
a. Take the tcpdump and check whether the traffic is hitting the virtual server or not.
b. If traffic is dropped, enable "tmsh modify sys db vlangroup.forwarding.override value enable" with the destination as catch-all and check whether traffic is hitting _vlangroup and going out or not. If traffic goes through without any issue, then there is an issue with the created virtual server.
Check that ARP entries are learned on the member VLANs d. If traffic is still dropped even after enabling the vlangroup.forwarding.override db variable, collect the output of the below commands: tmctl ifc_stats - Displays interface statistics tmctl ip_stat - Displays IP statistics tmctl ip6_stat - Displays IPv6 statistics Notable Effects / Caveats Active/Active deployment is not supported STP should be disabled with VLAN groups Asymmetric traffic is not supported in Translucent mode Conclusion A VLAN group is deployed to bridge two L2 network segments and to load balance traffic in Layer 2 networks.
L2 Deployment of BIG-IP with Gigamon
Introduction This article is part of a series on deploying BIG-IPs with bypass switches and network packet brokers. These devices allow for the transparent integration of network security tools with little to no network redesign and configuration change. For more information about bypass switch devices refer to https://en.wikipedia.org/wiki/Bypass_switch; for network packet brokers, refer to https://www.ixiacom.com/company/blog/network-packet-brokers-abcs-network-visibility and https://www.gigamon.com/campaigns/next-generation-network-packet-broker.html. The article series introduces network designs to forward traffic to the inline tools at layer 2 (L2). F5’s BIG-IP hardware appliances can be inserted in L2 networks. This can be achieved using either virtual Wire (vWire) or by bridging 2 Virtual LANs using a VLAN Groups. This document covers the design and implementation of the Gigamon Bypass Switch/Network Packet Broker in conjunction with the BIG-IP i5800 appliance and Virtual Wire (vWire). This document focuses on Gigamon Bypass Switch / Network Packet Broker. For more information about architecture overview of bypass switch and network packet broker refer to https://devcentral.f5.com/s/articles/L2-Deployment-of-vCMP-guest-with-Ixia-network-packet-broker?tab=series&page=1. Gigamon provides internal bypass switch within network packet broker device whereas Ixia has external bypass switch. Network Topology Below diagram is a representation of the actual lab network. This shows deployment of BIG-IP with Gigamon. Figure 1 - Topology before deployment of Gigamon and BIG-IP Figure 2 - Topology after deployment of Gigamon and BIG-IP Figure 3 - Connection between Gigamon and BIG-IP Hardware Specification Hardware used in this article are BIG-IP i5800 GigaVUE-HC1 Arista DCS-7010T-48 (all the four switches) Note: All the Interfaces/Ports are 1G speed Software Specification Software used in this article are BIG-IP 16.1.0 GigaVUE-OS 5.7.01 Arista 4.21.3F (North Switches) Arista 4.19.2F (South Switches) Gigamon Configuration In this lab, the Gigamon is configured with two type of ports, Inline Network and Inline Tool. Steps Summary Step 1 : Configure Port Type Step 2 : Configure Inline Network Bypass Pair Step 3 : Configure Inline Network Group (if applicable) Step 4 : Configure Inline Tool Pair Step 5 : Configure Inline Tool Group (if applicable) Step 6 : Configure Inline Traffic Flow Maps Step 1 : Configure Port Type First and Foremost step is to configure Ports. Figure 2 shows all the ports that are connected between Switches and Gigamon. Ports that are connected to switch should be configured as Inline Network Ports. As per Figure 2, find below Inline Network ports Inline Network ports: 1/1/x1, 1/1/x2, 1/1/x3, 1/1/x4, 1/1/x5. 1/1/x6, 1/1/x7, 1/1/x8 Figure 3 shows all the ports that are connected between BIG-IP and Gigamon. Ports that are connected to BIG-IP should be configured as Inline Tool Ports. 
As per Figure 3, find below Inline Tool ports Inline Tool ports: 1/1/x9, 1/1/x10, 1/1/x11, 1/1/x12, 1/1/g1, 1/1/g2, 1/1/g3, 1/1/g4 To configure Port Type, do the following Log into GigaVUE-HC1 GUI Select Ports -> Go to specific port and modify Port Type as Inline Network or Inline Tool Figure 4 - GUI configuration of Port Types Equivalent command for configuring Inline Network port and other port configuration port 1/1/x1 type inline-net port 1/1/x1 alias N-SW1-36 port 1/1/x1 params admin enable autoneg enable Equivalent command for configuring Inline Tool Port and other port configuration port 1/1/x9 type inline-tool port 1/1/x9 alias BIGIP-1.1 port 1/1/x9 params admin enable autoneg enable Step 2 : Configure Inline Network Bypass Pair Figure 1 shows direct connections between switches. An inline network bypass pair will ensure the same connections through Gigamon. An inline network is an arrangement of two ports of the inline-network type. The arrangement facilitates access to a bidirectional link between two networks (two far-end network devices) that need to be linked through an inline tool. As per Figure 2, find below Inline Network bypass pairs Inline Network bypass pair 1 : 1/1/x1 -> 1/1/x2 Inline Network bypass pair 2 : 1/1/x3 -> 1/1/x4 Inline Network bypass pair 3 : 1/1/x5 -> 1/1/x6 Inline Network bypass pair 4 : 1/1/x7 -> 1/1/x8 To configure the inline network bypass pair, do the following Log into GigaVUE-HC1 GUI Select Inline Bypass -> Inline Networks Figure 5 - Example GUI configuration of Inline Network Bypass Pair Equivalent command for configuring Inline Network Bypass Pair inline-network alias Bypass1 pair net-a 1/1/x1 and net-b 1/1/x2 physical-bypass disable traffic-path to-inline-tool Step 3 : Configure Inline Network Group An inline network group is an arrangement of multiple inline networks that share the same inline tool. To configure the inline network bypass group, do the following Log into GigaVUE-HC1 GUI Select Inline Bypass -> Inline Networks Groups Figure 6 - Example GUI configuration of Inline Network Bypass Group Equivalent command for configuring Inline Network Bypass Group inline-network-group alias Bypassgroup network-list Bypass1,Bypass2,Bypass3,Bypass4 Step 4 : Configure Inline Tool Pair Figure 3 shows connection between BIG-IP and Gigamon which will be in pairs. An inline tool consists of inline tool ports, always in pairs, running at the same speed, on the same medium. As per Figure 3, find below Inline Tool pairs. Inline Network bypass pair 1 : 1/1/x9 -> 1/1/x10 Inline Network bypass pair 2 : 1/1/x11 -> 1/1/x12 Inline Network bypass pair 3 : 1/1/g1 -> 1/1/g2 Inline Network bypass pair 4 : 1/1/g3 -> 1/1/g4 To configure the inline tool pair, do the following Log into GigaVUE-HC1 GUI Select Inline Bypass -> Inline Tools Figure 7 - Example GUI configuration of Inline Tool Pair Equivalent command for configuring Inline Tool pair inline-tool alias BIGIP1 pair tool-a 1/1/x9 and tool-b 1/1/x10 enable shared true Step 5 : Configure Inline Tool Group (if applicable) An inline tool group is an arrangement of multiple inline tools to which traffic is distributed to the inline tools based on hardware-calculated hash values. For example, if one tool goes down, traffic is redistributed to other tools in the group using hashing. 
To configure the inline tool group, do the following Log into GigaVUE-HC1 GUI Select Inline Bypass -> Inline Tool Groups Figure 8 - Example GUI configuration of Inline Tool Group Equivalent command for configuring Inline Tool Group inline-tool-group alias BIGIPgroup tool-list BIGIP1,BIGIP2,BIGIP3,BIGIP4 enable Step 6 : Configure Inline Traffic Flow Maps Flow mapping takes traffic from a network TAP or a SPAN/mirror port and sends it through a set of user-defined map rules to the tools and applications that secure, monitor and analyze IT infrastructure. As per Figure 2, it is the high-level process for configuring traffic to flow from the inline network links to the inline tool group, allowing you to test the deployment functionality of the BIG-IP appliances within the group. To configure the inline tool group, do the following Log into GigaVUE-HC1 GUI Select Maps -> New Figure 9 - Example GUI configuration of Flow Maps Note: Above configuration allows all traffic from Inline Network Group to flow through Inline Tool Group Equivalent command for configuring PASS ALL Flow Map map-passall alias Map1 to BIGIPgroup from Bypassgroup Flow Maps can be configured specific to certain traffic. For example, If LACP traffic should bypass BIG-IP and all other traffic should pass through BIG-IP. Find below command to achieve mentioned condition map alias inMap type inline byRule roles replace admin to owner_roles comment " " rule add pass ethertype 8809 to bypass from Bypassgroup exit map-scollector alias SCollector roles replace admin to owner_roles from Bypassgroup collector BIGIPgroup exit Note: For more details on Gigamon, refer https://docs.gigamon.com/pdfs/Content/Shared/5700-doclist.html BIG-IP Configuration In series of BIG-IP and Gigamon deployment, BIG-IP configured in L2 mode with Virtual Wire (vWire) Step Summary Step 1 : Configure interfaces to support vWire Step 2 : Configure trunk in LACP mode or passthrough mode Step 3 : Configure Virtual Wire Note: Steps mentioned above are specific to topology in Figure 2. For more details on Virtual Wire (vWire), refer https://devcentral.f5.com/s/articles/BIG-IP-vWire-Configuration?tab=series&page=1 and https://devcentral.f5.com/s/articles/vWire-Deployment-Configuration-and-Troubleshooting?tab=series&page=1 Step 1 : Configure interfaces to support vWire To configure interfaces to support vWire, do the following Log into BIG-IP GUI Select Network -> Interfaces -> Interface List Select Specific Interface and in vWire configuration, select Virtual Wire as Forwarding Mode Figure 10 - Example GUI configuration of interface to support vWire Step 2 : Configure trunk in LACP mode or passthrough mode To configure trunk, do the following Log into BIG-IP GUI Select Network -> Trunks Click Create to configure new Trunk. Enable LACP for LACP mode and disable LACP for LACP passthrough mode Figure 11 - Example GUI configuration of Trunk in LACP Mode Figure 12 - Example GUI configuration of Trunk in LACP Passthrough Mode As per Figure 2, when configured in LACP Mode, LACP will be established between BIG-IP and switches. When configured in LACP passthrough mode, LACP will be established between North and South Switches. As per Figure 2 and 3 , there will be four trunk configured as below, Left_Trunk 1 : Interfaces 1.1 and 2.3 Left_Trunk 2 : Interfaces 1.3 and 2.1 Right_Trunk 1 : Interfaces 1.2 and 2.4 Right_Trunk 2 : Interfaces 1.4 and 2.2 Left_Trunk ensure connectivity between BIG-IP and North Switches. 
Right_Trunk ensures connectivity between BIG-IP and the South Switches. Note: Trunks can be configured on individual interfaces when LACP passthrough is used, because LACP frames are not terminated at the BIG-IP. Step 3 : Configure Virtual Wire To configure the virtual wire, do the following Log into BIG-IP GUI Select Network -> Virtual Wire Click Create to configure Virtual Wire Figure 13 - Example GUI configuration of Virtual Wire The above Virtual Wire configuration works for both tagged and untagged traffic. The topology in Figures 2 and 3 requires both Virtual Wires to be configured. This configuration works for both LACP mode and LACP passthrough mode. If each interface is configured with its own trunk in a passthrough deployment, then four specific Virtual Wires are configured. Note: In this series, all the mentioned scenarios and configurations will be covered in upcoming articles. Conclusion This deployment ensures transparent integration of network security tools with little to no network redesign and configuration change. The merits of the above network deployment are: Increases reliability of the production link Inline devices can be upgraded or replaced without loss of the link Traffic can be shared between multiple tools Specific traffic can be forwarded to customized tools Trusted traffic can be bypassed un-inspected
BIG-IP L2 Deployment with Bypass, Network Packet Broker and LACP
Introduction This article is part of a series on deploying BIG-IPs with bypass switches and network packet brokers. These devices allow for the transparent integration of network security tools with little to no network redesign and configuration change.For more information about bypass switch devices refer to https://en.wikipedia.org/wiki/Bypass_switch; for network packet brokers, refer to https://www.ixiacom.com/company/blog/network-packet-brokers-abcs-network-visibility. The article series introduces network designs to forward traffic to the inline tools at layer 2 (L2).In this installment, we will cover the deployment of the bypass switch (BP), network packet broker (NPB) and BIG-IP in Virtual Wire (vWire) mode with LACP (ref. https://en.wikipedia.org/wiki/Link_aggregation). Design Overview The insertion of inline network tools at L2 reduces the complexity associated with these deployments because no configuration change is required for the routing or switching infrastructure. The Figure 1 below is an example of a L2 insertion. It shows switches to the north and south of the bypass switches, in other networks these devices may be routers, firewalls or any other device capable of using LACP to provide greater throughput and/or network resilience. In normal operation, traffic passing through the bypass switches is forwarded to the network packet brokers and to the BIG-IP on the primary path (solid lines). The BIG-IP is configured in vWire mode. The bypass switches monitor the tools’ availability using heartbeats. In the invent of a failure ofthe primary path/tool, the bypass switches will forward traffic using the secondary path (dotted lines). If both BIG-IP devices fail, it will enter bypass mode and permit traffic to flow directly from north to south. Figure 1. Topology Overview LACP Bypass A Link Aggregation Group (LAG) combines multiple physical ports together to make a single high-bandwidth connection by load balancing traffic over individual ports. It also offers the benefit of resiliency. As port(s) fail(s) within the aggregate, bandwidth is reduced but the connection remains up and passing traffic. Link Aggregation Control Protocol (LACP) provides a method to control the aggregation of multiple ports to form a LAG. Network devices configured with LACP ports send LACP frames to its peers to dynamically build LAGs. Common network designs leverage link aggregation using multiple chassis (MLAG) (aka Virtual Port Channel or VPC on Cisco devices).This allows for LAG to terminate to 2 or more devices.For more information about MLAG refer to https://en.wikipedia.org/wiki/MC-LAG. By default, the BIG-IP device participates in the LACP peering. It processes the LACP frames but does NOT forward them. This means the LAGs are formed between the switches and the BIG-IP and not between the north and south switches. This may not be suited for all deployments. In cases where LACP peering is required between the north and south switches, LACP packets need to bypass the inline tool (BIG-IP) and forward to the next hop unaltered. Figure 2 illustrates how the LACP traffic is handled by NPBs. LACP packets sent from the north switches are forwarded to the NPBs by the BP switches. The NPBs are configured to filter and bypass frames with Ethertype 8809 (LACP). The LACP packets are returned to BPs switches and forwarded to the south switches. LACP peering is established between the north and south switches. Figure 2. 
LACP Bypass Heartbeats Monitoring the paths and the tools is critical in minimizing service interruption. Heartbeats can be used to provide this monitoring function. In Figure 3, heartbeats are configured on BP1 to monitor the path from the BP to the tool. In normal operation, heartbeats are sent out on BP1 port 1 (top solid blue line) and received on BP1 port 2 (bottom solid blue line). Heartbeats are also configured to monitor the reverse path, sent from BP1 port 2 to BP1 port 1. This ensures the network connections are up and the tools are processing traffic initiated in both directions. If the heartbeats are not received on the primary path, BP1 will start forwarding traffic over the secondary path. If both paths are detected to be down, BP1 is configured to bypass the NPB and BIG-IP for all traffic.This means that all traffic is permitted to traverse the BP from north to south and vice versa. Heartbeats are configured on all four paths, see Figure 4. Figure 3. Heartbeat Path Figure 4. Heartbeats monitor paths and tools Lab Overview The following discusses the lab and setup that was used to validate this design. The objective of this lab is twofold (refer to refer to Figure 5 for details): Demonstrate tool failure detection by the BP switch using heartbeat LACP traffic bypass by the NPB. The focus is on the primary (active) path Note 1: In this environment, a single bypass switch is used to simulate two bypass switches. Note 2: This article is part of a series. The steps below are general configurations, for step-by-step instructions, please refer to the lab section of the article L2 Deployment of vCMP guest with Ixia network packet broker. Figure 5. Primary Path The lab equipment consists of following: L3 switches – A pair north and south of the insertion point, each pair is configured as MLAG peers. Each pair has a LACP LAG to connect to the other pair. Ixia iBypass Switch (BP) – Provides L2 insertion capabilities with fail to wire configured. Also configured to monitors paths and tools. Ixia Network Packet Broker - Configured to filter and bypass function. BIG-IP i5800 – To test traffic flow, the BIG-IP was configured to forward traffic, with no tool. It operates in vWire mode.s The Figure 6 below shows the lab configuration and cabling. Figure 6. Lab Configuration Ixia iBypass Switch Configuration The following shows the configuration of the Ixia BP. Bypass Switch Heartbeat Configuration The heartbeat configuration is identical to the one mentioned in the xxx guide with the exception of the VLAN ID. In this infrastructure, the VLAN ID is 513 and represented as hex 0201. Network Packet Broker Configuration Create the following resources with the information provided. Bypass Port Pairs Tool Resources Service Chains The final config should look like the following: LACP bypass Configuration The network packet broker is configured to forward (or bypass) the LACP frames directly from the north to the south switch and vice versa.LACP frames bear the ethertype 8809 (in hex). This filter is configured during the Bypass Port Pair configuration. Note: There are methods to configure this filter, with the use of service chains and filters but his is the simplest for this deployment. Big-IP Configuration Two vWire groups are created, one for each link of the LACP LAG. Testing Pings were used to represent network traffic from the north switches to the south switches. To simulate a tool failure, the vWire2 configuration was removed. 
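An alternative way to trigger the same failure path without deleting configuration is to administratively disable the BIG-IP interfaces that belong to the virtual wire under test, then re-enable them once the bypass behavior has been observed. A minimal sketch is shown below; the interface numbers 1.3 and 1.4 are illustrative placeholders for whichever interfaces make up vWire2 in your cabling:
tmsh modify net interface 1.3 disabled
tmsh modify net interface 1.4 disabled
# watch the BP dashboard for the heartbeat failure and bypass, then restore
tmsh modify net interface 1.3 enabled
tmsh modify net interface 1.4 enabled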
In this failure simulation, the interfaces associated with vWire2 remained up but the tool was not processing traffic, see Figure 7. The BP heartbeats detected the tool failure and put the ports in bypass mode. The north and south switches renegotiated the LACP LAG. In the process of renegotiating, approximately 200 ping packets were lost over a period of a few seconds. The failure and bypass mode are displayed on the BP dashboard in Figure 8. The LACP status of port 50 is shown in Figure 9 from the North Switch 2 CLI. Figure 7. Remove vWire2 configuration Figure 8. Bypass Mode Enabled Figure 9. North Switch 2 CLI Output for LACP Peer When the vWire2 configuration is restored, the BP detects that the tool has been restored and resumes the traffic flow. The traffic on half of the LAG is interrupted for 400 pings (a few seconds) during the renegotiation of the LAG (Figure 10). The BP dashboard (Figure 11) shows operations have returned to normal. Figure 10. vWire2 configuration restored. Figure 11. Bypass switch returns to normal operating state
BIG-IP L2 Virtual Wire LACP Passthrough Deployment with IXIA Bypass Switch and Network Packet Broker (Single Service Chain - Active / Active)
Introduction This article is part of a series on deploying BIG-IPs with bypass switches and network packet brokers. These devices allow for the transparent integration of network security tools with little to no network redesign and configuration change. For more information about bypass switch devices refer tohttps://en.wikipedia.org/wiki/Bypass_switch; for network packet brokers, refer tohttps://www.ixiacom.com/company/blog/network-packet-brokers-abcs-network-visibilityandhttps://www.gigamon.com/campaigns/next-generation-network-packet-broker.html. The article series introduces network designs to forward traffic to the inline tools at layer 2 (L2). F5’s BIG-IP hardware appliances can be inserted in L2 networks. This can be achieved using either virtual Wire (vWire) or by bridging 2 Virtual LANs using a VLAN Groups. This document covers the design and implementation of the IXIA Bypass Switch/Network Packet Broker in conjunction with the BIG-IP i5800 appliance and Virtual Wire (vWire). This document focus on IXIA Bypass Switch / Network Packet Broker. For more information about architecture overview of bypass switch and network packet broker refer tohttps://devcentral.f5.com/s/articles/L2-Deployment-of-vCMP-guest-with-Ixia-network-packet-broker?tab=series&page=1. This article is continuation of https://devcentral.f5.com/s/articles/BIG-IP-L2-Deployment-with-Bypasss-Network-Packet-Broker-and-LACP?tab=series&page=1 with latest versions of BIG-IP and IXIA Devices. Also focused on various combination of configurations in BIG-IP and IXIA devices. Network Topology Below diagram is a representation of the actual lab network. This shows deployment of BIG-IP with IXIA Bypass Switch and Network Packet Broker. Figure 1 - Deployment of BIG-IP with IXIA Bypass Switch and Network Packet Broker Please refer Lab Overview section in https://devcentral.f5.com/s/articles/BIG-IP-L2-Deployment-with-Bypasss-Network-Packet-Broker-and-LACP?tab=series&page=1 for more insights on lab topology and connections. Hardware Specification Hardware used in this article are IXIA iBypass DUO ( Bypass Switch) IXIA Vision E40 (Network Packet Broker) BIG-IP Arista DCS-7010T-48 (all the four switches) Software Specification Software used in this article are BIG-IP 16.1.0 IXIA iBypass DUO 1.4.1 IXIA Vision E40 5.9.1.8 Arista 4.21.3F (North Switches) Arista 4.19.2F (South Switches) Switch Configuration LAG or link aggregation is a way of bonding multiple physical links into a combined logical link. MLAG or multi-chassis link aggregation extends this capability allowing a downstream switch or host to connect to two switches configured as an MLAG domain. This provides redundancy by giving the downstream switch or host two uplink paths as well as full bandwidth utilization since the MLAG domain appears to be a single switch to Spanning Tree (STP). Lab Overview section in https://devcentral.f5.com/s/articles/BIG-IP-L2-Deployment-with-Bypasss-Network-Packet-Broker-and-LACP?tab=series&page=1shows MLAG configuring in both the switches. This article focus on LACP deployment for tagged packets. 
For more details on MLAG configuration, refer tohttps://eos.arista.com/mlag-basic-configuration/#Verify_MLAG_operation Step Summary Step 1 : Configuration of MLAG peering between both the North Switches Step 2 : Verify MLAG Peering in North Switches Step 3 : Configuration of MLAG Port-Channels in North Switches Step 4 : Configuration of MLAG peering between both the South Switches Step 5 : Verify MLAG Peering in South Switches Step 6 : Configuration of MLAG Port-Channels in South Switches Step 7 : Verify Port-Channel Status Step 1 : Configuration of MLAG peering between both the North Switches MLAG Configuration in North Switch1 and North Switch2 are as follows North Switch 1: Configure Port-Channel interface Port-Channel10 switchport mode trunk switchport trunk group m1peer Configure VLAN interface Vlan4094 ip address 172.16.0.1/30 Configure MLAG mlag configuration domain-id mlag1 heartbeat-interval 2500 local-interface Vlan4094 peer-address 172.16.0.2 peer-link Port-Channel10 reload-delay 150 North Switch 2: Configure Port-Channel interface Port-Channel10 switchport mode trunk switchport trunk group m1peer Configure VLAN interface Vlan4094 ip address 172.16.0.2/30 Configure MLAG mlag configuration domain-id mlag1 heartbeat-interval 2500 local-interface Vlan4094 peer-address 172.16.0.1 peer-link Port-Channel10 reload-delay 150 Step 2 : Verify MLAG Peering in North Switches North Switch 1: North-1#show mlag MLAG Configuration: domain-id:mlag1 local-interface:Vlan4094 peer-address:172.16.0.2 peer-link:Port-Channel10 peer-config : consistent MLAG Status: state:Active negotiation status:Connected peer-link status:Up local-int status:Up system-id:2a:99:3a:23:94:c7 dual-primary detection :Disabled MLAG Ports: Disabled:0 Configured:0 Inactive:6 Active-partial:0 Active-full:2 North Switch 2: North-2#show mlag MLAG Configuration: domain-id:mlag1 local-interface:Vlan4094 peer-address:172.16.0.1 peer-link:Port-Channel10 peer-config : consistent MLAG Status: state:Active negotiation status:Connected peer-link status:Up local-int status:Up system-id:2a:99:3a:23:94:c7 dual-primary detection :Disabled MLAG Ports: Disabled:0 Configured:0 Inactive:6 Active-partial:0 Active-full:2 Step 3 : Configuration of MLAG Port-Channels in North Switches North Switch 1: interface Port-Channel513 switchport trunk allowed vlan 513 switchport mode trunk mlag 513 interface Ethernet50 channel-group 513 mode active North Switch 2: interface Port-Channel513 switchport trunk allowed vlan 513 switchport mode trunk mlag 513 interface Ethernet50 channel-group 513 mode active Step 4 : Configuration of MLAG peering between both the South Switches MLAG Configuration in South Switch1 and South Switch2 are as follows South Switch 1: Configure Port-Channel interface Port-Channel10 switchport mode trunk switchport trunk group m1peer Configure VLAN interface Vlan4094 ip address 172.16.1.1/30 Configure MLAG mlag configuration domain-id mlag1 heartbeat-interval 2500 local-interface Vlan4094 peer-address 172.16.1.2 peer-link Port-Channel10 reload-delay 150 South Switch 2: Configure Port-Channel interface Port-Channel10 switchport mode trunk switchport trunk group m1peer Configure VLAN interface Vlan4094 ip address 172.16.1.2/30 Configure MLAG mlag configuration domain-id mlag1 heartbeat-interval 2500 local-interface Vlan4094 peer-address 172.16.1.1 peer-link Port-Channel10 reload-delay 150 Step 5 : Verify MLAG Peering in South Switches South Switch 1: South-1#show mlag MLAG Configuration: domain-id : mlag1 local-interface : Vlan4094 
peer-address : 172.16.1.2 peer-link : Port-Channel10 peer-config : consistent MLAG Status: state : Active negotiation status : Connected peer-link status : Up local-int status : Up system-id : 2a:99:3a:48:78:d7 MLAG Ports: Disabled : 0 Configured : 0 Inactive : 6 Active-partial : 0 Active-full : 2 South Switch 2: South-2#show mlag MLAG Configuration: domain-id : mlag1 local-interface : Vlan4094 peer-address : 172.16.1.1 peer-link : Port-Channel10 peer-config : consistent MLAG Status: state : Active negotiation status : Connected peer-link status : Up local-int status : Up system-id : 2a:99:3a:48:78:d7 MLAG Ports: Disabled : 0 Configured : 0 Inactive : 6 Active-partial : 0 Active-full : 2 Step 6 : Configuration of MLAG Port-Channels in South Switches South Switch 1: interface Port-Channel513 switchport trunk allowed vlan 513 switchport mode trunk mlag 513 interface Ethernet50 channel-group 513 mode active South Switch 2: interface Port-Channel513 switchport trunk allowed vlan 513 switchport mode trunk mlag 513 interface Ethernet50 channel-group 513 mode active LACP modes are as follows On Active Passive LACP Connection establishment will occur only for below configurations Active in both North and South Switch Active in North or South Switch and Passive in other switch On in both North and South Switch Note: In this case, all the interfaces of both North and South Switches are configured with LACP mode as Active. Step 7 : Verify Port-Channel Status North Switch 1: North-1#show mlag interfaces detail local/remote mlag state local remote oper config last change changes ---------- ----------------- ----------- ------------ --------------- ------------- --------------------------- ------- 513 active-full Po513 Po513 up/up ena/ena 4 days, 0:34:28 ago 198 North Switch 2: North-2#show mlag interfaces detail local/remote mlag state local remote oper config last change changes ---------- ----------------- ----------- ------------ --------------- ------------- --------------------------- ------- 513 active-full Po513 Po513 up/up ena/ena 4 days, 0:35:58 ago 198 South Switch 1: South-1#show mlag interfaces detail local/remote mlag state local remote oper config last change changes ---------- ----------------- ----------- ------------ --------------- ------------- --------------------------- ------- 513 active-full Po513 Po513 up/up ena/ena 4 days, 0:36:04 ago 190 South Switch 2: South-2#show mlag interfaces detail local/remote mlag state local remote oper config last change changes ---------- ----------------- ----------- ------------ --------------- ------------- --------------------------- ------- 513 active-full Po513 Po513 up/up ena/ena 4 days, 0:36:02 ago 192 Ixia iBypass Duo Configuration For detailed insight, refer to IXIA iBypass Duo Configuration section in https://devcentral.f5.com/s/articles/L2-Deployment-of-vCMP-guest-with-Ixia-network-packet-broker?page=1 Figure 2 - Configuration of iBypass Duo (Bypass Switch) Heartbeat Configuration Heartbeats are configured on both bypass switches to monitor tools in their primary path and secondary paths. If a tool failure is detected, the bypass switch forwards traffic to the secondary path. Heartbeat can be configured using multiple protocols, here Bypass switch 1 uses DNS and Bypass Switch 2 uses IPX for Heartbeat. Figure 3 - Heartbeat Configuration of Bypass Switch 1 ( DNS Heartbeat ) In this infrastructure, the VLAN ID is 513 and represented as hex 0201. 
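If a different VLAN ID is used in your environment, the hex value for the heartbeat payload can be derived with a quick shell one-liner (runnable from any Linux shell, including the BIG-IP bash prompt); for VLAN 513 it returns 0201, matching the value used in this lab:
printf '%04x\n' 513
0201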
Figure 4 - VLAN Representation in Heartbeat Figure 5 - Heartbeat Configuration of Bypass Switch 1 ( B Side ) Figure 6 - Heartbeat Configuration of Bypass Switch 2 ( IPX Heartbeat ) Figure 7 - Heartbeat Configuration of Bypass Switch 2 ( B Side ) IXIA Vision E40 Configuration Create the following resources with the information provided. Bypass Port Pairs Inline Tool Pair Service Chains Figure 8 - Configuration of Vision E40 ( NPB ) This articles focus on deployment of Network Packet Broker with single service chain whereas previous article is based on 2 service chain. Figure 9 - Configuration of Tool Resources In Single Tool Resource, 2 Inline Tool Pairs configured which allows to configure both the Bypass Port pair with single Service Chain. Figure 10 - Configuration of VLAN Translation From Switch Configuration, Source VLAN is 513 and it will be translated to 2001 and 2002 for Bypass 1 and Bypass 2 respectively. For more insights with respect to VLAN translation, refer https://devcentral.f5.com/s/articles/L2-Deployment-of-vCMP-guest-with-Ixia-network-packet-broker?page=1 For Tagged Packets, VLAN translation should be enabled. LACP frames will be untagged which should be bypassed and routed to other Port-Channel. In this case LACP traffic will not reach BIG-IP, instead it will get routed directly from NPB to other pair of switches. LACP bypass Configuration The network packet broker is configured to forward (or bypass) the LACP frames directly from the north to the south switch and vice versa.LACP frames bear the ethertype 8809 (in hex). This filter is configured during the Bypass Port Pair configuration. Note: There are methods to configure this filter, with the use of service chains and filters but this is the simplest for this deployment. Figure 11 - Configuration to redirect LACP BIG-IP Configuration Step Summary Step 1 : Configure interfaces to support vWire Step 2 : Configure trunk in passthrough mode Step 3 : Configure Virtual Wire Note: Steps mentioned above are specific to topology inFigure 2.For more details on Virtual Wire (vWire), referhttps://devcentral.f5.com/s/articles/BIG-IP-vWire-Configuration?tab=series&page=1andhttps://devcentral.f5.com/s/articles/vWire-Deployment-Configuration-and-Troubleshooting?tab=series&page=1 Step 1 : Configure interfaces to support vWire To configure interfaces to support vWire, do the following Log into BIG-IP GUI SelectNetwork -> Interfaces -> Interface List Select Specific Interface and in vWire configuration, select Virtual Wire as Forwarding Mode Figure 12 - Example GUI configuration of interface to support vWire Step 2 : Configure trunk in passthrough mode To configure trunk, do the following Log into BIG-IP GUI SelectNetwork -> Trunks ClickCreateto configure new Trunk. Disable LACP for LACP passthrough mode Figure 13 - Configuration of North Trunk in Passthrough Mode Figure 14 - Configuration of South Trunk in Passthrough Mode Step 3 : Configure Virtual Wire To configure trunk, do the following Log into BIG-IP GUI SelectNetwork -> Virtual Wire ClickCreateto configure Virtual Wire Figure 15 - Configuration of Virtual Wire As VLAN 513 is translated into 2001 and 2002, vWire configured with explicit tagged VLANs. It is also recommended to have untagged VLAN in vWire to allow any untagged traffic. Enable multicast bridging sys db variable as below for LACP passthrough mode modify sys db l2.virtualwire.multicast.bridging value enable Note: Make sure sys db variable enabled after reboot and upgrade. 
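A quick way to confirm the variable still holds the expected value after a reboot or an upgrade is to list it from tmsh; this sketch simply reuses the db key from the command above:
tmsh list sys db l2.virtualwire.multicast.bridging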
For LACP mode, multicast bridging sys db variable should be disabled. Scenarios As LACP passthrough mode configured in BIG-IP, LACP frames will passthrough BIG-IP. LACP will be established between North and South Switches.ICMP traffic is used to represent network traffic from the north switches to the south switches. Scenario 1: Traffic flow through BIG-IP with North and South Switches configured in LACP active mode Above configurations shows that all the four switches are configured with LACP active mode. Figure 16 - MLAG after deployment of BIG-IP and IXIA with Switches configured in LACP ACTIVE mode Figure 16shows that port-channels 513 is active at both North Switches and South Switches. Figure 17 - ICMP traffic flow from client to server through BIG-IP Figure 17shows ICMP is reachable from client to server through BIG-IP. This verifies test case 1, LACP getting established between Switches and traffic passthrough BIG-IP successfully. Scenario 2: Active BIG-IP link goes down with link state propagation enabled in BIG-IP Figure 15shows Propagate Virtual Wire Link Status enabled in BIG-IP. Figure 17shows that interface 1.1 of BIG-IP is active incoming interface and interface 1.4 of BIG-IP is active outgoing interface. Disabling BIG-IP interface 1.1 will make active link down as below Figure 18 - BIG-IP interface 1.1 disabled Figure 19 - Trunk state after BIG-IP interface 1.1 disabled Figure 19shows that the trunks are up even though interface 1.1 is down. As per configuration, North_Trunk has 2 interfaces connected to it 1.1 and 1.3 and one of the interface is still up, so North_Trunk status is active. Figure 20 - MLAG status with interface 1.1 down and Link State Propagation enabled Figure 20shows that port-channel 513 is active at both North Switches and South Switches. This shows that switches are not aware of link failure and it is been handled by IXIA configuration. Figure 21 - IXIA Bypass Switch after 1.1 interface of BIG-IP goes down As shown in Figure 8, Single Service Chain is configured and which will be down only if both Inline Tool Port pairs are down in NPB. So Bypass will be enabled only if Service Chain goes down in NPB. Figure 21 shows that still Bypass is not enabled in IXIA Bypass Switch. Figure 22 - Service Chain and Inline Tool Port Pair status in IXIA Vision E40 ( NPB ) Figure 22 shows that Service Chain is still up as BIG IP2 ( Inline Tool Port Pair ) is up whereas BIG IP1 is down. Figure 1 shows that P09 of NPB is connected 1.1 of BIG-IP which is down. Figure 23 - ICMP traffic flow from client to server through BIG-IP Figure 23 shows that still traffic flows through BIG-IP even though 1.1 interface of BIG-IP is down. Now active incoming interface is 1.3 and active outgoing interface is 1.4. Low bandwidth traffic is still allowed through BIG-IP as bypass not enabled and IXIA handles rate limit process. Scenario 3: When North_Trunk goes down with link state propagation enabled in BIG-IP Figure 24 - BIG-IP interface 1.1 and 1.3 disabled Figure 25 - Trunk state after BIG-IP interface 1.1 and 1.3 disabled Figure 15 shows that Propagate Virtual Wire Link State enabled and thus both the trunks are down. Figure 26 - IXIA Bypass Switch after 1.1 and 1.3 interfaces of BIG-IP goes down Figure 27 - ICMP traffic flow from client to server bypassing BIG-IP Conclusion This article covers BIG-IP L2 Virtual Wire Passthrough deployment with IXIA. IXIA configured using Single Service Chain. 
Observations of this deployment are as below: VLAN translation in the IXIA NPB converts the real VLAN ID (513) to the translated VLAN IDs (2001 and 2002). BIG-IP receives packets with the translated VLAN IDs (2001 and 2002). VLAN translation requires all packets to be tagged; untagged packets are dropped. LACP frames are untagged, so a bypass is configured in the NPB for LACP. Tool Sharing needs to be enabled to allow untagged packets, which adds an extra tag; this type of configuration and testing will be covered in upcoming articles. With a single service chain, if any one of the Inline Tool Port Pairs goes down, low-bandwidth traffic is still allowed to pass through the BIG-IP (tool). If any Inline Tool link goes down, IXIA decides whether to bypass or rate limit; the switches remain unaware of the change. With a single service chain, if the Tool Resource is configured with both Inline Tool Port Pairs in Active - Active state, traffic is load balanced and both paths are active at the same time. Multiple service chains in the IXIA NPB can be used instead of a single service chain to remove the rate-limit process; this type of configuration and testing will be covered in upcoming articles. If BIG-IP goes down, IXIA enables bypass and ensures there is no packet drop.
BIG-IP L2 Virtual Wire LACP Passthrough Deployment with Gigamon Network Packet Broker - I
Introduction This article is part of a series on deploying BIG-IPs with bypass switches and network packet brokers. These devices allow for the transparent integration of network security tools with little to no network redesign and configuration change. For more information about bypass switch devices refer tohttps://en.wikipedia.org/wiki/Bypass_switch; for network packet brokers, refer tohttps://www.ixiacom.com/company/blog/network-packet-brokers-abcs-network-visibilityandhttps://www.gigamon.com/campaigns/next-generation-network-packet-broker.html. The article series introduces network designs to forward traffic to the inline tools at layer 2 (L2). This article covers the design and implementation of the Gigamon Bypass Switch / Network Packet Broker in conjunction with the BIG-IP i5800 appliance and Virtual Wire (vWire) with LACP Passthrough Mode. This article covers one of the variation mentioned in article https://devcentral.f5.com/s/articles/L2-Deployment-of-BIG-IP-with-Gigamon. Network Topology Below diagram is a representation of the actual lab network. This shows deployment of BIG-IP with Gigamon. Figure 1 - Topology with MLAG and LAG before deployment of Gigamon and BIG-IP Figure 2 - Topology with MLAG and LAG after deployment of Gigamon and BIG-IP Figure 3 - Connection between Gigamon and BIG-IP Hardware Specification Hardware used in this article are BIG-IP i5800 GigaVUE-HC1 Arista DCS-7010T-48 (all the four switches) Note: All the Interfaces/Ports are 1G speed Software Specification Software used in this article are BIG-IP 16.1.0 GigaVUE-OS 5.7.01 Arista 4.21.3F (North Switches) Arista 4.19.2F (South Switches) Switch Configuration LAG or link aggregation is a way of bonding multiple physical links into a combined logical link. MLAG or multi-chassis link aggregation extends this capability allowing a downstream switch or host to connect to two switches configured as an MLAG domain. This provides redundancy by giving the downstream switch or host two uplink paths as well as full bandwidth utilization since the MLAG domain appears to be a single switch to Spanning Tree (STP). Figure 1, shows MLAG configured at North Switches and LAG configured at South Switches. This article focus on LACP deployment for untagged packets. 
For more details on MLAG configuration, refer to https://eos.arista.com/mlag-basic-configuration/#Verify_MLAG_operation Step Summary Step 1 : Configuration of MLAG peering between both switches Step 2 : Verify MLAG Peering Step 3 : Configuration of MLAG Port-Channels Step 4 : Configuration of LAG Port-Channels Step 5 : Verify Port-Channel Status Step 1 : Configuration of MLAG peering between both switches MLAG Configuration in North Switch1 and North Switch2 are as follows North Switch 1: Configure Port-Channel interface Port-Channel10 switchport mode trunk switchport trunk group m1peer Configure VLAN interface Vlan4094 ip address 172.16.0.1/30 Configure MLAG mlag configuration domain-id mlag1 heartbeat-interval 2500 local-interface Vlan4094 peer-address 172.16.0.2 peer-link Port-Channel10 reload-delay 150 North Switch 2: Configure Port-Channel interface Port-Channel10 switchport mode trunk switchport trunk group m1peer Configure VLAN interface Vlan4094 ip address 172.16.0.2/30 Configure MLAG mlag configuration domain-id mlag1 heartbeat-interval 2500 local-interface Vlan4094 peer-address 172.16.0.1 peer-link Port-Channel10 reload-delay 150 Step 2 : Verify MLAG Peering North Switch 1: North-1#show mlag MLAG Configuration: domain-id:mlag1 local-interface:Vlan4094 peer-address:172.16.0.2 peer-link:Port-Channel10 MLAG Status: state:Active negotiation status:Connected peer-link status:Up local-int status:Up system-id:2a:99:3a:23:94:c7 dual-primary detection :Disabled MLAG Ports: Disabled:0 Configured:0 Inactive:6 Active-partial:0 Active-full:2 North Switch 2: North-2#show mlag MLAG Configuration: domain-id:mlag1 local-interface:Vlan4094 peer-address:172.16.0.1 peer-link:Port-Channel10 MLAG Status: state:Active negotiation status:Connected peer-link status:Up local-int status:Up system-id:2a:99:3a:23:94:c7 dual-primary detection :Disabled MLAG Ports: Disabled:0 Configured:0 Inactive:6 Active-partial:0 Active-full:2 Step 3 : Configuration of MLAG Port-Channels Figure 1, has 2 MLAG Port-Channels at North Switches and 2 LAG Port-Channel at South Switches. One of the ports from both the South Switches (South Switch 1 and South Switch 2) are connected to North Switch 1 and the other port is connected to North Switch 2. The two interfaces on South Switches can be configured as a regular port-channel using LACP. MLAG Port-Channel Configuration are as follows North Switch 1: interface Port-Channel120 switchport access vlan 120 mlag 120 interface Ethernet36 channel-group 120 mode active interface Port-Channel121 switchport access vlan 120 mlag 121 interface Ethernet37 channel-group 121 mode active North Switch 2: interface Port-Channel120 switchport access vlan 120 mlag 120 interface Ethernet37 channel-group 120 mode active interface Port-Channel121 switchport access vlan 120 mlag 121 interface Ethernet36 channel-group 121 mode active Step 4 : Configuration of LAG Port-Channels The two interfaces on South Switches can be configured as a regular port-channel using LACP. 
South Switch 1: interface Port-Channel120 switchport access vlan 120 interface Ethernet36 channel-group 120 mode active interface Ethernet37 channel-group 120 mode active South Switch 2: interface Port-Channel121 switchport access vlan 121 interface Ethernet36 channel-group 121 mode active interface Ethernet37 channel-group 121 mode active LACP modes are as follows On Active Passive LACP Connection establishment will occur only for below configurations Active in both North and South Switch Active in North or South Switch and Passive in other switch On in both North and South Switch Note: In this case, all the interfaces of both North and South Switches are configured with LACP mode as Active Step 5 : Verify Port-Channel Status North Switch 1: North-1#show mlag interfaces detail local/remote mlagstatelocalremoteoperconfiglast changechanges ---------- ----------------- ----------- ------------ --------------- ------------- ---------------------------- ------- 120active-fullPo120Po120up/upena/ena0:00:00 ago270 121active-fullPo121Po121up/upena/ena0:00:00 ago238 North Switch 2: North-2#show mlag interfaces detail local/remote mlagstatelocalremoteoperconfiglast changechanges ---------- ----------------- ----------- ------------ --------------- ------------- ---------------------------- ------- 120active-fullPo120Po120up/upena/ena0:01:34 ago269 121active-fullPo121Po121up/upena/ena0:01:33 ago235 South Switch 1: South-1#show port-channel 120 Port Channel Port-Channel120: Active Ports: Ethernet36 Ethernet37 South Switch 2: South-2#show port-channel 121 Port Channel Port-Channel121: Active Ports: Ethernet36 Ethernet37 Gigamon Configuration In this article, Gigamon will be configured using Inline Network Groups and Inline Tools Groups. For GUI and Port configurations of Gigamon refer https://devcentral.f5.com/s/articles/L2-Deployment-of-BIG-IP-with-Gigamon. Find below configuration of Gigamon in Command line Inline-network configurations: inline-network alias Bypass1 pair net-a 1/1/x1 and net-b 1/1/x2 physical-bypass disable traffic-path to-inline-tool exit inline-network alias Bypass2 pair net-a 1/1/x3 and net-b 1/1/x4 physical-bypass disable traffic-path to-inline-tool exit inline-network alias Bypass3 pair net-a 1/1/x5 and net-b 1/1/x6 physical-bypass disable traffic-path to-inline-tool exit inline-network alias Bypass4 pair net-a 1/1/x7 and net-b 1/1/x8 physical-bypass disable traffic-path to-inline-tool exit Inline-network-group configuration: inline-network-group alias Bypassgroup network-list Bypass1,Bypass2,Bypass3,Bypass4 exit Inline-tool configurations: inline-tool alias BIGIP1 pair tool-a 1/1/x9 and tool-b 1/1/x10 enable shared true exit inline-tool alias BIGIP2 pair tool-a 1/1/x11 and tool-b 1/1/x12 enable shared true exit inline-tool alias BIGIP3 pair tool-a 1/1/g1 and tool-b 1/1/g2 enable shared true exit inline-tool alias BIGIP4 pair tool-a 1/1/g3 and tool-b 1/1/g4 enable shared true exit Inline-tool-group configuration: inline-tool-group alias BIGIPgroup tool-list BIGIP1,BIGIP2,BIGIP3,BIGIP4 enable exit Traffic map connection configuration: map-passall alias BIGIP_MAP roles replace admin to owner_roles to BIGIPgroup from Bypassgroup Note: Gigamon configuration with Inline network group and Inline tool group requires to enable Inline tool sharing mode which will insert additional tag on the tool side. As BIG-IP supports single tagging, this configuration works only for untagged packets. 
BIG-IP Configuration In this article, BIG-IP configured in L2 mode with Virtual Wire and trunks will be configured for individual interfaces. For more details on group configuration of trunk and other configurations, refer https://devcentral.f5.com/s/articles/L2-Deployment-of-BIG-IP-with-Gigamon. Configuration of trunk for individual interfaces in LACP passthrough Mode: tmsh create net trunk Left_Trunk_1 interfaces add { 1.1 } qinq-ethertype 0x8100 link-select-policy auto tmsh create net trunk Left_Trunk_2 interfaces add { 1.3 } qinq-ethertype 0x8100 link-select-policy auto tmsh create net trunk Left_Trunk_3 interfaces add { 2.1 } qinq-ethertype 0x8100 link-select-policy auto tmsh create net trunk Left_Trunk_4 interfaces add { 2.3 } qinq-ethertype 0x8100 link-select-policy auto tmsh create net trunk Right_Trunk_1 interfaces add { 1.2 } qinq-ethertype 0x8100 link-select-policy auto tmsh create net trunk Right_Trunk_2 interfaces add { 1.4 } qinq-ethertype 0x8100 link-select-policy auto tmsh create net trunk Right_Trunk_3 interfaces add { 2.2 } qinq-ethertype 0x8100 link-select-policy auto tmsh create net trunk Right_Trunk_4 interfaces add { 2.4 } qinq-ethertype 0x8100 link-select-policy auto Figure 4 - Trunk configuration in GUI Figure 5 - Configuration of Virtual Wire Enable multicast bridging sys db variable as below for LACP passthrough mode modify sys db l2.virtualwire.multicast.bridging value enable Note: Make sure sys db variable enabled after reboot and upgrade. For LACP mode, multicast bridging sys db variable should be disabled. Scenarios As per Figure 2 and 3, setup is completely up and functional. As LACP passthrough mode configured in BIG-IP, LACP frames will passthrough BIG-IP. LACP will be established between North and South Switches. ICMP traffic is used to represent network traffic from the north switches to the south switches. Scenario 1: Traffic flow through BIG-IP with North and South Switches configured in LACP active mode Above configurations shows that all the four switches are configured with LACP active mode. Figure 6 - MLAG and LAG status after deployment of BIG-IP and Gigamon with Switches configured in LACP ACTIVE mode Figure 6 shows that port-channels 120 and 121 are active at both North Switches and South Switches. Above configuration shows MLAG configured at North Switches and LAG configured at South Switches. Figure 7 - ICMP traffic flow from client to server through BIG-IP Figure 7 shows ICMP is reachable from client to server through BIG-IP. This verifies test case 1, LACP getting established between Switches and traffic passthrough BIG-IP successfully. Scenario 2: Active BIG-IP link goes down with link state propagation disabled in BIG-IP Figure 5 shows Propagate Virtual Wire Link Status disabled in BIG-IP. Figure 7 shows that interface 1.1 of BIG-IP is active incoming interface and interface 1.2 of BIG-IP is active outgoing interface. Disabling BIG-IP interface 1.1 will make active link down as below Figure 8 - BIG-IP interface 1.1 disabled Figure 9 - Trunk state after BIG-IP interface 1.1 disabled Figure 9 shows only Left_Trunk1 is down which has interface 1.1 configured. As link state propagation disabled in Virtual Wire configuration, interface 1.1 and Right_trunk1 are still active. Figure 10 - MLAG and LAG status with interface 1.1 down and Link State Propagation disabled Figure 10 shows that port-channels 120 and 121 are active at both North Switches and South Switches. 
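Before moving on to the scenarios, it can also be worth confirming from tmsh that the per-interface trunks created above are up; a minimal check might look like the following, where Left_Trunk_1 is one of the trunk names defined earlier in this section:
tmsh show net trunk
tmsh list net trunk Left_Trunk_1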
This shows that switches are not aware of link failure and it is been handled by Gigamon configuration. As Gigamon is configured with Inline Network Groups and Inline Tool Groups, bypass will be enabled only after all the active Inline Tool goes down. Figure 11 - One of Inline Tool goes down after link failure Figure 11 shows Inline Tool which is connected to interface 1.1 of BIG-IP goes down. Low bandwidth traffic is still allowed through BIG-IP as bypass not enabled and Gigamon handles rate limit process. Note: With one to one mapping of Gigamon instead of groups, bypass can be enabled specific to link failure and this removes the need of rate limit. This configuration and scenarios will be covered in upcoming articles. Figure 12 - ICMP traffic flow from client to server through BIG-IP Figure 12 shows ICMP traffic flows through BIG-IP and now VirtualWire2 is active. Figure 12 shows that interface 1.3 of BIG-IP is active incoming interface and interface 1.4 of BIG-IP is active outgoing interface. Scenario 3: Active BIG-IP link goes down with link state propagation enabled in BIG-IP Figure 13 - Virtual Wire configuration with Link State Propagation enabled Figure 13 shows Propagate Virtual Wire Link Status enabled. Similar to Scenario 2 when active goes down, other interfaces part of Virtual Wire will also goes down. In this case when 1.1 interface of BIG-IP goes down, 1.2 interface of BIG-IP will automatically goes down as both are part of same Virtual Wire. Figure 14 - BIG-IP interface 1.1 disabled Figure 15 - Trunk state after BIG-IP interface 1.1 disabled Figure 15 shows Right_Trunk1 goes down automatically, as 1.2 is the only interface part of the trunk. As Gigamon handles all link failure action, there is no major difference with respect to switches and Gigamon. All the other observations are similar to scenario2, so there is no major difference in behavior with respect to Link State Propagation in this deployment. Scenario 4: BIG-IP goes down and bypass enabled in Gigamon Figure 16 - All the BIG-IP interfaces disabled Figure 17 - Inline tool status after BIG-IP goes down Figure 17 shows that all the Inline Tool pair goes down once BIG-IP is down. Figure 18 - Bypass enabled in Gigamon Figure 18 shows bypass enabled in Gigamon and ensure there is no network failure. ICMP traffic still flows between ubuntu client and ubuntu server as below Figure 19 - ICMP traffic flow from client to server bypassing BIG-IP Conclusion This article covers BIG-IP L2 Virtual Wire Passthrough deployment with Gigamon. Gigamon configured using Inline Network Group and Inline Tool Group. Observations of this deployment are as below Group configuration in Gigamon requires to enable Inline Tool Sharing mode which inserts additional tag. BIG-IP supports L2 Mode with single tagging, this configurations will work only for untagged packets. Group configuration in Gigamon will enable Bypass only if all the active Inline Tool pairs goes down. If any of the Inline Tool Pairs goes down, low bandwidth traffic will be still allowed to pass through BIG-IP (tool) If any of the Inline Tool link goes down, Gigamon handles whether to bypass or rate limit. Switches will be still unware of the changes. One to one configuration of Gigamon can be used instead of Group configuration to remove rate limit process. This type of configuration and testing will be covered in upcoming articles. 
If BIG-IP goes down, Gigamon enables bypass and ensures there is no packet drop.
BIG-IP L2 Virtual Wire LACP Passthrough Deployment with IXIA Bypass Switch and Network Packet Broker (Single Service Chain - Active / Standby)
Introduction This article is part of a series on deploying BIG-IPs with bypass switches and network packet brokers. These devices allow for the transparent integration of network security tools with little to no network redesign and configuration change. For more information about bypass switch devices refer tohttps://en.wikipedia.org/wiki/Bypass_switch; for network packet brokers, refer tohttps://www.ixiacom.com/company/blog/network-packet-brokers-abcs-network-visibilityandhttps://www.gigamon.com/campaigns/next-generation-network-packet-broker.html. The article series introduces network designs to forward traffic to the inline tools at layer 2 (L2). F5’s BIG-IP hardware appliances can be inserted in L2 networks. This can be achieved using either virtual Wire (vWire) or by bridging 2 Virtual LANs using a VLAN Groups. This document covers the design and implementation of the IXIA Bypass Switch/Network Packet Broker in conjunction with the BIG-IP i5800 appliance and Virtual Wire (vWire). This document focuses on IXIA Bypass Switch / Network Packet Broker. For more information about architecture overview of bypass switch and network packet broker refer tohttps://devcentral.f5.com/s/articles/L2-Deployment-of-vCMP-guest-with-Ixia-network-packet-broker?tab=series&page=1. This article focuses on Active / Standby configuration of Inline Tool Port Pairs in IXIA NPB Network Topology Below diagram is a representation of the actual lab network. This shows deployment of BIG-IP with IXIA Bypass Switch and Network Packet Broker. Figure 1 - Deployment of BIG-IP with IXIA Bypass Switch and Network Packet Broker Please refer Lab Overview section inhttps://devcentral.f5.com/s/articles/BIG-IP-L2-Deployment-with-Bypasss-Network-Packet-Broker-and-LACP?tab=series&page=1for more insights on lab topology and connections. Hardware Specification Hardware used in this article are IXIA iBypass DUO ( Bypass Switch) IXIA Vision E40 (Network Packet Broker) BIG-IP Arista DCS-7010T-48 (all the four switches) Software Specification Software used in this article are BIG-IP 16.1.0 IXIA iBypass DUO 1.4.1 IXIA Vision E40 5.9.1.8 Arista 4.21.3F (North Switches) Arista 4.19.2F (South Switches) Switch and Ixia iBypass Duo Configuration Switch and IXIA iBypass configurations are same as mentioned in below article https://devcentral.f5.com/s/articles/BIG-IP-L2-Virtual-Wire-LACP-Passthrough-Deployment-with-IXIA-Bypass-Switch-and-Network-Packet-Broker-I IXIA Vision E40 Configuration Most of the configurations are same as mentioned in https://devcentral.f5.com/s/articles/BIG-IP-L2-Virtual-Wire-LACP-Passthrough-Deployment-with-IXIA-Bypass-Switch-and-Network-Packet-Broker-I. In this article Inline Tool Port pairs are configured as Active/ Standby in Tool Resources as below Figure 2 - Configuration of Tool Resources Here BIG IP1 Inline Tool Port Pair is Active and BIG IP2 Inline Tool Port Pair is Standby. Traffic will be passing through BIG IP1 Inline Tool Port Pair initially and once it is down then BIG IP2 will become active BIG-IP Configuration Most of the configurations are same as mentioned in https://devcentral.f5.com/s/articles/BIG-IP-L2-Virtual-Wire-LACP-Passthrough-Deployment-with-IXIA-Bypass-Switch-and-Network-Packet-Broker-I. In this article, vWire is configured with Links State Propagation disabled as below Figure 3 - Configuration of Virtual Wire Note: As we covered Propagate Virtual Wire Link Status enabled in previous article, here plan is to disable Propagate Virtual Wire Link Status and test the scenarios. 
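If you prefer to toggle this setting from the command line instead of the GUI, it can likely be changed on the vlan-group that backs the virtual wire; the sketch below is an assumption-based example, with vWire_1 standing in for whatever object name your virtual wire actually uses:
tmsh modify net vlan-group vWire_1 vwire-propagate-linkstatus disabled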
Both enabling and disabling Link State Propagation work with both the Active/Active and Active/Standby configurations of the Inline Tool Port Pairs in the NPB.

Scenarios

Because LACP passthrough mode is configured on the BIG-IP, LACP frames pass through the BIG-IP and LACP is established between the North and South Switches. ICMP traffic is used to represent network traffic from the North Switches to the South Switches.

Scenario 1: Traffic flow through BIG-IP with North and South Switches configured in LACP active mode

The switch configurations referenced above show that all four switches are configured in LACP active mode.

Figure 4 - MLAG after deployment of BIG-IP and IXIA with Switches configured in LACP ACTIVE mode

Figure 4 shows that port-channel 513 is active on both the North and South Switches.

Figure 5 - ICMP traffic flow from client to server through BIG-IP

Figure 5 shows that ICMP traffic is reachable from client to server through the BIG-IP. This verifies scenario 1: LACP is established between the switches and traffic passes through the BIG-IP successfully.

Scenario 2: Active BIG-IP link goes down with link state propagation disabled in BIG-IP

Figure 3 shows Propagate Virtual Wire Link Status disabled in the BIG-IP. Figure 5 shows that interface 1.1 of the BIG-IP is the active incoming interface and interface 1.4 is the active outgoing interface. Disabling BIG-IP interface 1.1 brings the active link down, as shown below.

Figure 6 - BIG-IP interface 1.1 disabled

Figure 7 - Trunk state after BIG-IP interface 1.1 disabled

Figure 7 shows that the trunks are up even though interface 1.1 is down. Per the configuration, North_Trunk has two member interfaces, 1.1 and 1.3; because one of them is still up, the North_Trunk status remains active.

Figure 8 - MLAG status with interface 1.1 down and Link State Propagation disabled

Figure 8 shows that port-channel 513 is active on both the North and South Switches. The switches are unaware of the link failure; it is handled by the IXIA configuration.

Figure 9 - IXIA Bypass Switch after interface 1.1 of BIG-IP goes down

Because a Single Service Chain is configured, the chain goes down only if both Inline Tool Port Pairs in the NPB are down, and bypass is enabled only when the Service Chain goes down. Figure 9 shows that bypass is still not enabled in the IXIA Bypass Switch.

Figure 10 - Service Chain and Inline Tool Port Pair status in IXIA Vision E40 (NPB)

Figure 10 shows that the Service Chain is still up because the BIG IP2 Inline Tool Port Pair is active while BIG IP1 is down. Figure 1 shows that P09 of the NPB is connected to interface 1.1 of the BIG-IP, which is down. Because the tool status of the active Inline Tool Port Pair is offline, the standby pair becomes active.

Figure 11 - ICMP traffic flow from client to server through BIG-IP

Figure 11 shows that traffic still flows through the BIG-IP even though interface 1.1 is down. The active incoming interface is now 1.3 and the active outgoing interface is 1.4. Low-bandwidth traffic is still allowed through the BIG-IP because bypass is not enabled and IXIA handles the rate-limiting process.

Scenario 3: North_Trunk goes down with link state propagation disabled in BIG-IP

Figure 12 - BIG-IP interfaces 1.1 and 1.3 disabled

Figure 13 - Trunk state after BIG-IP interfaces 1.1 and 1.3 disabled

Because Propagate Virtual Wire Link State is disabled, only North_Trunk goes down.

Figure 14 - IXIA Bypass Switch after interfaces 1.1 and 1.3 of BIG-IP go down

Figure 15 - ICMP traffic flow from client to server bypassing BIG-IP
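The interface failures used in these scenarios can also be reproduced and observed from tmsh. The commands below are a minimal sketch, assuming the same interface numbering as this lab:

Disable the active interface (scenario 2), or both North_Trunk members (scenario 3):
tmsh modify net interface 1.1 disabled
tmsh modify net interface 1.3 disabled

Check trunk and interface state on the BIG-IP:
tmsh show net trunk
tmsh show net interface

Re-enable the interfaces after the test:
tmsh modify net interface 1.1 enabled
tmsh modify net interface 1.3 enabled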
Conclusion

This article covers BIG-IP L2 Virtual Wire Passthrough deployment with IXIA, with IXIA configured to use a Single Service Chain and the Tool Resource configured with Active/Standby Inline Tool Port Pairs. Observations of this deployment are as follows:

VLAN Translation in the IXIA NPB converts the real VLAN ID (513) to the translated VLAN IDs (2001 and 2002).
BIG-IP receives packets with the translated VLAN IDs (2001 and 2002).
VLAN Translation requires all packets to be tagged; untagged packets are dropped. LACP frames are untagged, so a bypass is configured in the NPB for LACP. Tool Sharing must be enabled to allow untagged packets, which adds an extra tag. This type of configuration and testing will be covered in upcoming articles.
With a Single Service Chain, if any one of the Inline Tool Port Pairs goes down, low-bandwidth traffic is still allowed to pass through the BIG-IP (tool).
If any Inline Tool link goes down, IXIA decides whether to bypass or rate limit; the switches remain unaware of the change.
With a Single Service Chain, if the Tool Resource is configured with Inline Tool Port Pairs in Active/Standby state, the primary Port Pair is active; if the primary Port Pair goes down, the standby becomes active.
Multiple Service Chains in the IXIA NPB can be used instead of a Single Service Chain to remove the rate-limiting process. This type of configuration and testing will be covered in upcoming articles.
If BIG-IP goes down, IXIA enables bypass and ensures there is no packet drop.