Handle Over 100 Gbps With a Single BIG-IP Virtual Edition
Cloud computing is an inescapable term. The general public knows that their cat pictures, videos, and memes go to the cloud somehow. Companies and application developers go cloud-first, or often cloud-only, when they develop a new service; to them, the cloud is the set of resources and APIs offered by cloud providers. Large enterprises and service providers have a bifurcated view of cloud computing: they see a public cloud and a private cloud. A service provider might mandate that any new software or services run within its orchestrated virtualization or bare-metal environment, actively discouraging or simply disallowing new vendor-specific hardware purchases. This pushes traditional networking vendors to improve the efficiency of their software offerings and to take advantage of available server hardware opportunistically. Behold the early fruits of our labor.

100+ Gbps L4 From a Single VE?!

We introduced the high performance license option for BIG-IP Virtual Edition with BIG-IP v13.0.0 HF1. Rather than a throughput-capped license, you can purchase a license that is restricted only by the maximum number of vCPUs that can be assigned. This license allows you to optimize the utilization of the underlying hypervisor hardware. BIG-IP v13.0.0 HF1 introduced a limit of 16 vCPUs per VE; BIG-IP v13.1.0.1 raised the maximum to 24 vCPUs. Given that this is a non-trivial amount of computing capacity for a single VM, we decided to see what kind of performance could be obtained with the largest VE license on recent hypervisor hardware. The result is decidedly awesome. I want to show you precisely how we achieved 100+ Gbps in a single VE.

Test Harness Overview

The hypervisor for this test was KVM running on an enterprise-grade rack-mount server. The server had two sockets, each with an Intel processor providing 24 physical cores / 48 hyperthreads.
We exposed 3 x 40 Gbps interfaces from Intel XL710 NICs to the guest via SR-IOV; each NIC occupied a PCIe 3.0 x8 slot. There was no over-subscription of hypervisor resources. Support for "huge pages", or memory pages much larger than 4 KB, was enabled on the hypervisor. It is not a tuning requirement, but it proved beneficial on our hypervisor. See: Ubuntu community - using hugepages.

The VE ran BIG-IP v13.1.0.1 with 24 vCPUs and 48 GB of RAM in an "unpacked" configuration, meaning that we dedicated a single vCPU per physical core. This was done to prevent hyperthread contention within each physical core. Additionally, all of the physical cores were on the same socket, which eliminated inter-socket communication latency and bus limitations. The VE was provisioned with LTM only, and all test traffic used a single FastL4 virtual server. There were two logical VLANs, and the 3 x 40 Gbps interfaces were logically trunked. The VE had only two L3 presences: one for the client network and one for the server network. In direct terms, this is a single application deployment achieving 100+ Gbps with a single BIG-IP Virtual Edition.

Result

The network load was generated using Ixia IxLoad and Ixia hardware appliances. The traffic was legitimate HTTP traffic with full TCP handshakes and graceful TCP teardowns. A single 512 kB HTTP transaction was completed for every TCP connection. We describe this scenario as one request per connection, or 1-RPC. It's worth noting that 1-RPC is the worst case for an ADC. For every Ixia client connection:

- Three-way TCP handshake
- HTTP request (less than 200 B) delivered to Ixia servers
- HTTP response (512 kB, multiple packets) from Ixia servers
- Three-way TCP termination

The following plot shows the L7 throughput in Gbps during the "sustained" period of a test, meaning that the device is under constant load and new connections are established immediately after a previous connection is satisfied.
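To put the 1-RPC workload in perspective, the sustained L7 rate can be converted into a connection rate: each connection moves one 512 kB response, so the device must open and gracefully close tens of thousands of TCP connections per second. The sketch below is back-of-the-envelope arithmetic only (payload bits, ignoring TCP/IP/Ethernet overhead), using the roughly 108 Gbps sustained average reported for this test.

```python
# Rough sanity check on the 1-RPC result: how many connections per second
# must complete to sustain a given L7 rate when each connection carries a
# single fixed-size HTTP response? Payload only; protocol overhead ignored.

GBPS = 1e9  # bits per second in one Gbps


def connections_per_second(l7_gbps: float, response_kib: int) -> float:
    """Connections/s needed for a given L7 rate at one response per connection."""
    bits_per_connection = response_kib * 1024 * 8  # 512 KiB -> 4,194,304 bits
    return l7_gbps * GBPS / bits_per_connection


rate = connections_per_second(108, 512)
print(f"{rate:,.0f} connections/s")  # about 25.7k new connections per second
```

Every one of those connections also carries a full three-way handshake and graceful teardown, which is why 1-RPC is described above as the worst case for an ADC.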
If you work in the network testing world, you'll probably note how stupendously smooth this graph is. The average for the sustained period ends up around 108 Gbps. Note that, as hardware continues to improve, this performance will only go up.

Considerations

Technical forums love car analogies and initialisms, like "your mileage may vary" as YMMV. That caveat applies to the result described above. You should consider these factors when planning a high performance VE deployment:

- Physical hardware layout of the hypervisor - Non-uniform memory access (NUMA) architectures are ubiquitous in today's high density servers. In very simple terms, NUMA means that the physical locality of a computational core matters: all of the work for a given task should be confined to a single NUMA node when possible. The slot placement of physical NICs can be a factor as well. Your server vendor can guide you in understanding the physical layout of your hardware. Example: you have a hypervisor with two sockets, and each socket has 20c / 40t. You have 160 Gbps of connectivity to the hypervisor. The recommended deployment would be two 20 vCPU high performance VE guests, one per socket, with each receiving 80 Gbps of connectivity. Spanning a 24 vCPU guest across both sockets would result in more CPU load per unit of work done, as the guest would be communicating between both sockets rather than within a single socket.
- Driver support - The number of drivers that BIG-IP supports for SR-IOV access is growing; see: https://support.f5.com/csp/article/K17204. We also have driver support for VMXNET3, virtio, and OvS-DPDK via virtio. Experimentation and an understanding of the available hypervisor configurations will allow you to select the proper deployment.
- Know the workload - This result was generated with a pure L4 configuration using simple load balancing, with no L5-L7 inspection or policy enforcement.
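The NUMA sizing example above (two sockets of 20c/40t and 160 Gbps of connectivity) reduces to a simple planning rule: one "unpacked" guest per socket, one vCPU per physical core, and an even split of NIC bandwidth. The helper below is a hypothetical sketch of that rule, not an F5 sizing tool.

```python
# Illustrative sizing rule for the NUMA guidance above: deploy one VE guest
# per socket, give it one vCPU per physical core on that socket (unpacked,
# so hyperthread siblings stay idle), and split NIC bandwidth evenly.
# plan_ve_guests is a hypothetical helper name, not an F5 utility.

def plan_ve_guests(sockets: int, cores_per_socket: int, total_gbps: int):
    """One unpacked VE guest per socket: vCPUs == physical cores on that socket."""
    return [
        {"socket": s, "vcpus": cores_per_socket, "gbps": total_gbps // sockets}
        for s in range(sockets)
    ]


# Two sockets with 20 cores / 40 threads each and 160 Gbps of connectivity:
for guest in plan_ve_guests(sockets=2, cores_per_socket=20, total_gbps=160):
    print(guest)  # two 20-vCPU guests, 80 Gbps each
```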
The TMM CPU utilization was at maximum during this test. Additional inspection and manipulation of network traffic requires more CPU cycles per unit of work.

BIG-IP L2 Virtual Wire LACP Passthrough Deployment with IXIA Bypass Switch and Network Packet Broker (Multiple Service Chain)
Introduction

This article is part of a series on deploying BIG-IPs with bypass switches and network packet brokers. These devices allow for the transparent integration of network security tools with little to no network redesign and configuration change. For more information about bypass switch devices refer to https://en.wikipedia.org/wiki/Bypass_switch; for network packet brokers, refer to https://www.ixiacom.com/company/blog/network-packet-brokers-abcs-network-visibility and https://www.gigamon.com/campaigns/next-generation-network-packet-broker.html.

The article series introduces network designs that forward traffic to inline tools at layer 2 (L2). F5's BIG-IP hardware appliances can be inserted in L2 networks. This can be achieved using either virtual wire (vWire) or by bridging two VLANs using a VLAN group. This document covers the design and implementation of the IXIA bypass switch and network packet broker in conjunction with the BIG-IP i5800 appliance and virtual wire (vWire). For an architecture overview of bypass switches and network packet brokers, refer to https://devcentral.f5.com/s/articles/L2-Deployment-of-vCMP-guest-with-Ixia-network-packet-broker?tab=series&page=1.

This article covers the configuration and scenarios for an IXIA network packet broker configured with two service chains, whereas the articles below are specific to a single service chain:

https://devcentral.f5.com/s/articles/BIG-IP-L2-Virtual-Wire-LACP-Passthrough-Deployment-with-IXIA-Bypass-Switch-and-Network-Packet-Broker-I
https://devcentral.f5.com/s/articles/BIG-IP-L2-Virtual-Wire-LACP-Passthrough-Deployment-with-IXIA-Bypass-Switch-and-Network-Packet-Broker-II

Network Topology

The diagram below is a representation of the actual lab network. It shows the deployment of BIG-IP with the IXIA bypass switch and network packet broker.
Figure 1 - Deployment of BIG-IP with IXIA Bypass Switch and Network Packet Broker

Please refer to the Lab Overview section in https://devcentral.f5.com/s/articles/BIG-IP-L2-Deployment-with-Bypasss-Network-Packet-Broker-and-LACP?tab=series&page=1 for more insight into the lab topology and connections.

Hardware Specification

Hardware used in this article:

- IXIA iBypass DUO (Bypass Switch)
- IXIA Vision E40 (Network Packet Broker)
- BIG-IP
- Arista DCS-7010T-48 (all four switches)

Software Specification

Software used in this article:

- BIG-IP 16.1.0
- IXIA iBypass DUO 1.4.1
- IXIA Vision E40 5.9.1.8
- Arista 4.21.3F (North Switches)
- Arista 4.19.2F (South Switches)

Switch and Ixia iBypass Duo Configuration

The switch and IXIA iBypass configurations are the same as in the article below:

https://devcentral.f5.com/s/articles/BIG-IP-L2-Virtual-Wire-LACP-Passthrough-Deployment-with-IXIA-Bypass-Switch-and-Network-Packet-Broker-I

IXIA Vision E40 Configuration

Most of the configuration is the same as in https://devcentral.f5.com/s/articles/BIG-IP-L2-Virtual-Wire-LACP-Passthrough-Deployment-with-IXIA-Bypass-Switch-and-Network-Packet-Broker-I. In this article, the IXIA NPB is configured with two service chains. Create the following resources with the information provided:

- Bypass Port Pairs
- Tool Resources
- Service Chains

Figure 2 - Final Configuration of IXIA NPB

BIG-IP Configuration

Most of the configuration is the same as in https://devcentral.f5.com/s/articles/BIG-IP-L2-Virtual-Wire-LACP-Passthrough-Deployment-with-IXIA-Bypass-Switch-and-Network-Packet-Broker-I. In this article we have four trunks, whereas previous articles used two.

Figure 3 - Trunk Configuration in BIG-IP

The interfaces allocated to each trunk are:

- North Trunk -> 1.1
- North Trunk1 -> 1.3
- South Trunk -> 1.2
- South Trunk1 -> 1.4

Figure 4 - vWire Configuration in BIG-IP

Scenarios

Because LACP passthrough mode is configured on the BIG-IP, LACP frames pass through the BIG-IP.
LACP will be established between the North and South Switches. ICMP traffic is used to represent network traffic from the north switches to the south switches.

Scenario 1: Traffic flow through BIG-IP with North and South Switches configured in LACP active mode

The configurations above show that all four switches are configured in LACP active mode.

Figure 5 - MLAG after deployment of BIG-IP and IXIA with Switches configured in LACP ACTIVE mode

Figure 5 shows that port-channel 513 is active at both the North Switches and the South Switches.

Figure 6 - ICMP traffic flow from client to server through BIG-IP

Figure 6 shows that ICMP is reachable from client to server through the BIG-IP. This verifies test case 1: LACP is established between the switches and traffic passes through the BIG-IP successfully. In this case the incoming request uses interface 1.1 and the outgoing request uses interface 1.2, whereas in previous articles the incoming and outgoing requests used 1.1 and 1.4 by default. Figure 4 shows the dedicated vWire configuration, so traffic from 1.1 is sent to 1.2 and traffic from 1.3 is sent to 1.4. Figure 4 also shows that the ICMP request uses VLAN 2001 and the ICMP reply uses VLAN 2002, which means the request uses Service Chain 1 and the reply uses Service Chain 2 in the NPB.

Scenario 2: Active BIG-IP link goes down with link state propagation enabled in BIG-IP

Figure 4 shows Propagate Virtual Wire Link Status enabled in BIG-IP. Figure 6 shows that interface 1.1 of the BIG-IP is the active incoming interface and interface 1.2 is the active outgoing interface. Disabling BIG-IP interface 1.1 brings the active link down, as shown below.

Figure 7 - BIG-IP interface 1.1 disabled

Figure 8 - Trunk state after BIG-IP interface 1.1 disabled

Figure 8 shows North_Trunk and South_Trunk down: North_Trunk is down because its only interface (1.1) is disabled, and because link state propagation is enabled and a dedicated vWire is configured, South_Trunk is down as well.
Figure 9 - MLAG status with interface 1.1 down and Link State Propagation enabled

Figure 9 shows that port-channel 513 is active at both the North Switches and the South Switches. The switches are not aware of the link failure; it is handled by the IXIA configuration.

Figure 10 - IXIA Bypass Switch after 1.1 interface of BIG-IP goes down

Figure 10 shows that bypass is switched on in Bypass Switch 1. Because a dedicated service chain is configured in the NPB along with a dedicated vWire configuration in BIG-IP, Bypass Switch 1 moved to bypass mode. Figure 6 shows that the ICMP request uses Bypass 1 by default and the ICMP reply uses Bypass 2 by default, so the request bypasses the BIG-IP while the reply still passes through it.

Figure 11 - ICMP reply traffic flow from client to server through BIG-IP

Scenario 3: When BIG-IP interfaces go down with link state propagation enabled in BIG-IP

Figure 12 - BIG-IP interface 1.1 and 1.3 disabled

Figure 13 - Trunk state after BIG-IP interface 1.1 and 1.3 disabled

Figure 14 - IXIA Bypass Switch after 1.1 and 1.3 interfaces of BIG-IP go down

Figure 15 - ICMP traffic flow from client to server bypassing BIG-IP

Conclusion

This article covers a BIG-IP L2 virtual wire passthrough deployment with IXIA, with IXIA configured using multiple service chains. Observations of this deployment:

- VLAN translation in the IXIA NPB converts the real VLAN ID (513) to the translated VLAN IDs (2001 and 2002).
- BIG-IP receives packets with the translated VLAN IDs (2001 and 2002).
- VLAN translation requires all packets to be tagged; untagged packets are dropped. LACP frames are untagged, so a bypass is configured in the NPB for LACP. Tool sharing can be enabled to allow untagged packets, which adds an extra tag; this type of configuration and testing will be covered in upcoming articles.
- With multiple service chains, if any one of the inline tool port pairs goes down, the corresponding bypass switch turns on bypass mode in the iBypass DUO. If any inline tool link goes down, IXIA handles it.
The switches remain unaware of the changes. If the BIG-IP goes down, IXIA enables bypass and ensures there is no packet drop.

BIG-IP L2 Virtual Wire LACP Passthrough Deployment with IXIA Bypass Switch and Network Packet Broker (Single Service Chain - Active / Active)
Introduction

This article is part of a series on deploying BIG-IPs with bypass switches and network packet brokers. These devices allow for the transparent integration of network security tools with little to no network redesign and configuration change. For more information about bypass switch devices refer to https://en.wikipedia.org/wiki/Bypass_switch; for network packet brokers, refer to https://www.ixiacom.com/company/blog/network-packet-brokers-abcs-network-visibility and https://www.gigamon.com/campaigns/next-generation-network-packet-broker.html.

The article series introduces network designs that forward traffic to inline tools at layer 2 (L2). F5's BIG-IP hardware appliances can be inserted in L2 networks. This can be achieved using either virtual wire (vWire) or by bridging two VLANs using a VLAN group. This document covers the design and implementation of the IXIA bypass switch and network packet broker in conjunction with the BIG-IP i5800 appliance and virtual wire (vWire). For an architecture overview of bypass switches and network packet brokers, refer to https://devcentral.f5.com/s/articles/L2-Deployment-of-vCMP-guest-with-Ixia-network-packet-broker?tab=series&page=1.

This article is a continuation of https://devcentral.f5.com/s/articles/BIG-IP-L2-Deployment-with-Bypasss-Network-Packet-Broker-and-LACP?tab=series&page=1 with the latest versions of BIG-IP and IXIA devices, and it also explores various combinations of BIG-IP and IXIA configurations.

Network Topology

The diagram below is a representation of the actual lab network. It shows the deployment of BIG-IP with the IXIA bypass switch and network packet broker.
Figure 1 - Deployment of BIG-IP with IXIA Bypass Switch and Network Packet Broker

Please refer to the Lab Overview section in https://devcentral.f5.com/s/articles/BIG-IP-L2-Deployment-with-Bypasss-Network-Packet-Broker-and-LACP?tab=series&page=1 for more insight into the lab topology and connections.

Hardware Specification

Hardware used in this article:

- IXIA iBypass DUO (Bypass Switch)
- IXIA Vision E40 (Network Packet Broker)
- BIG-IP
- Arista DCS-7010T-48 (all four switches)

Software Specification

Software used in this article:

- BIG-IP 16.1.0
- IXIA iBypass DUO 1.4.1
- IXIA Vision E40 5.9.1.8
- Arista 4.21.3F (North Switches)
- Arista 4.19.2F (South Switches)

Switch Configuration

LAG, or link aggregation, is a way of bonding multiple physical links into a combined logical link. MLAG, or multi-chassis link aggregation, extends this capability by allowing a downstream switch or host to connect to two switches configured as an MLAG domain. This provides redundancy by giving the downstream switch or host two uplink paths, as well as full bandwidth utilization, since the MLAG domain appears to be a single switch to Spanning Tree (STP). The Lab Overview section in https://devcentral.f5.com/s/articles/BIG-IP-L2-Deployment-with-Bypasss-Network-Packet-Broker-and-LACP?tab=series&page=1 shows MLAG being configured on both switches. This article focuses on LACP deployment for tagged packets.
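A LAG keeps all packets of a given flow on a single member link (avoiding intra-flow reordering) by hashing the flow's identifying fields and picking a link modulo the link count. The sketch below is a conceptual illustration only; real switches such as the Aristas in this lab use vendor-specific hardware hash functions.

```python
# Conceptual sketch of LAG member-link selection: hash the flow's
# identifying fields, pick a link modulo the number of members.
# This is NOT Arista's actual hash algorithm, just an illustration
# of why one flow always rides one member link.
import zlib


def lag_member(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
               n_links: int) -> int:
    """Pick a member link index for a flow; same flow -> same link."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % n_links


# All packets of one flow map to the same link, so no reordering within a flow:
first = lag_member("10.0.0.1", "10.0.1.1", 33000, 80, n_links=2)
again = lag_member("10.0.0.1", "10.0.1.1", 33000, 80, n_links=2)
print(first == again)  # True
```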
For more details on MLAG configuration, refer to https://eos.arista.com/mlag-basic-configuration/#Verify_MLAG_operation

Step Summary

Step 1 : Configuration of MLAG peering between both the North Switches
Step 2 : Verify MLAG Peering in North Switches
Step 3 : Configuration of MLAG Port-Channels in North Switches
Step 4 : Configuration of MLAG peering between both the South Switches
Step 5 : Verify MLAG Peering in South Switches
Step 6 : Configuration of MLAG Port-Channels in South Switches
Step 7 : Verify Port-Channel Status

Step 1 : Configuration of MLAG peering between both the North Switches

The MLAG configurations for North Switch 1 and North Switch 2 are as follows.

North Switch 1:

Configure Port-Channel
interface Port-Channel10
   switchport mode trunk
   switchport trunk group m1peer

Configure VLAN
interface Vlan4094
   ip address 172.16.0.1/30

Configure MLAG
mlag configuration
   domain-id mlag1
   heartbeat-interval 2500
   local-interface Vlan4094
   peer-address 172.16.0.2
   peer-link Port-Channel10
   reload-delay 150

North Switch 2:

Configure Port-Channel
interface Port-Channel10
   switchport mode trunk
   switchport trunk group m1peer

Configure VLAN
interface Vlan4094
   ip address 172.16.0.2/30

Configure MLAG
mlag configuration
   domain-id mlag1
   heartbeat-interval 2500
   local-interface Vlan4094
   peer-address 172.16.0.1
   peer-link Port-Channel10
   reload-delay 150

Step 2 : Verify MLAG Peering in North Switches

North Switch 1:

North-1#show mlag
MLAG Configuration:
domain-id       : mlag1
local-interface : Vlan4094
peer-address    : 172.16.0.2
peer-link       : Port-Channel10
peer-config     : consistent

MLAG Status:
state                  : Active
negotiation status     : Connected
peer-link status       : Up
local-int status       : Up
system-id              : 2a:99:3a:23:94:c7
dual-primary detection : Disabled

MLAG Ports:
Disabled       : 0
Configured     : 0
Inactive       : 6
Active-partial : 0
Active-full    : 2

North Switch 2:

North-2#show mlag
MLAG Configuration:
domain-id       : mlag1
local-interface : Vlan4094
peer-address    : 172.16.0.1
peer-link       : Port-Channel10
peer-config     : consistent

MLAG Status:
state                  : Active
negotiation status     : Connected
peer-link status       : Up
local-int status       : Up
system-id              : 2a:99:3a:23:94:c7
dual-primary detection : Disabled

MLAG Ports:
Disabled       : 0
Configured     : 0
Inactive       : 6
Active-partial : 0
Active-full    : 2

Step 3 : Configuration of MLAG Port-Channels in North Switches

North Switch 1:

interface Port-Channel513
   switchport trunk allowed vlan 513
   switchport mode trunk
   mlag 513
interface Ethernet50
   channel-group 513 mode active

North Switch 2:

interface Port-Channel513
   switchport trunk allowed vlan 513
   switchport mode trunk
   mlag 513
interface Ethernet50
   channel-group 513 mode active

Step 4 : Configuration of MLAG peering between both the South Switches

The MLAG configurations for South Switch 1 and South Switch 2 are as follows.

South Switch 1:

Configure Port-Channel
interface Port-Channel10
   switchport mode trunk
   switchport trunk group m1peer

Configure VLAN
interface Vlan4094
   ip address 172.16.1.1/30

Configure MLAG
mlag configuration
   domain-id mlag1
   heartbeat-interval 2500
   local-interface Vlan4094
   peer-address 172.16.1.2
   peer-link Port-Channel10
   reload-delay 150

South Switch 2:

Configure Port-Channel
interface Port-Channel10
   switchport mode trunk
   switchport trunk group m1peer

Configure VLAN
interface Vlan4094
   ip address 172.16.1.2/30

Configure MLAG
mlag configuration
   domain-id mlag1
   heartbeat-interval 2500
   local-interface Vlan4094
   peer-address 172.16.1.1
   peer-link Port-Channel10
   reload-delay 150

Step 5 : Verify MLAG Peering in South Switches

South Switch 1:

South-1#show mlag
MLAG Configuration:
domain-id       : mlag1
local-interface : Vlan4094
peer-address    : 172.16.1.2
peer-link       : Port-Channel10
peer-config     : consistent

MLAG Status:
state              : Active
negotiation status : Connected
peer-link status   : Up
local-int status   : Up
system-id          : 2a:99:3a:48:78:d7

MLAG Ports:
Disabled       : 0
Configured     : 0
Inactive       : 6
Active-partial : 0
Active-full    : 2

South Switch 2:

South-2#show mlag
MLAG Configuration:
domain-id       : mlag1
local-interface : Vlan4094
peer-address    : 172.16.1.1
peer-link       : Port-Channel10
peer-config     : consistent

MLAG Status:
state              : Active
negotiation status : Connected
peer-link status   : Up
local-int status   : Up
system-id          : 2a:99:3a:48:78:d7

MLAG Ports:
Disabled       : 0
Configured     : 0
Inactive       : 6
Active-partial : 0
Active-full    : 2

Step 6 : Configuration of MLAG Port-Channels in South Switches

South Switch 1:

interface Port-Channel513
   switchport trunk allowed vlan 513
   switchport mode trunk
   mlag 513
interface Ethernet50
   channel-group 513 mode active

South Switch 2:

interface Port-Channel513
   switchport trunk allowed vlan 513
   switchport mode trunk
   mlag 513
interface Ethernet50
   channel-group 513 mode active

The LACP modes are:

- On
- Active
- Passive

An LACP connection will be established only for these combinations:

- Active on both the North and South Switches
- Active on the North or South Switch and Passive on the other
- On on both the North and South Switches

Note: In this case, all the interfaces of both the North and South Switches are configured with LACP mode Active.

Step 7 : Verify Port-Channel Status

North Switch 1:

North-1#show mlag interfaces detail
                                                       local/remote
 mlag   state        local   remote   oper    config   last change           changes
------ ------------ ------- -------- ------- -------- --------------------- -------
 513    active-full  Po513   Po513    up/up   ena/ena  4 days, 0:34:28 ago   198

North Switch 2:

North-2#show mlag interfaces detail
                                                       local/remote
 mlag   state        local   remote   oper    config   last change           changes
------ ------------ ------- -------- ------- -------- --------------------- -------
 513    active-full  Po513   Po513    up/up   ena/ena  4 days, 0:35:58 ago   198

South Switch 1:

South-1#show mlag interfaces detail
                                                       local/remote
 mlag   state        local   remote   oper    config   last change           changes
------ ------------ ------- -------- ------- -------- --------------------- -------
 513    active-full  Po513   Po513    up/up   ena/ena  4 days, 0:36:04 ago   190

South Switch 2:

South-2#show mlag interfaces detail
                                                       local/remote
 mlag   state        local   remote   oper    config   last change           changes
------ ------------ ------- -------- ------- -------- --------------------- -------
 513    active-full  Po513   Po513    up/up   ena/ena  4 days, 0:36:02 ago   192

Ixia iBypass Duo Configuration

For detailed insight, refer to the IXIA iBypass Duo Configuration section in https://devcentral.f5.com/s/articles/L2-Deployment-of-vCMP-guest-with-Ixia-network-packet-broker?page=1

Figure 2 - Configuration of iBypass Duo (Bypass Switch)

Heartbeat Configuration

Heartbeats are configured on both bypass switches to monitor tools in their primary and secondary paths. If a tool failure is detected, the bypass switch forwards traffic to the secondary path. Heartbeats can be configured using multiple protocols; here Bypass Switch 1 uses DNS and Bypass Switch 2 uses IPX for its heartbeat.

Figure 3 - Heartbeat Configuration of Bypass Switch 1 ( DNS Heartbeat )

In this infrastructure, the VLAN ID is 513 and is represented as hex 0201.

Figure 4 - VLAN Representation in Heartbeat

Figure 5 - Heartbeat Configuration of Bypass Switch 1 ( B Side )

Figure 6 - Heartbeat Configuration of Bypass Switch 2 ( IPX Heartbeat )

Figure 7 - Heartbeat Configuration of Bypass Switch 2 ( B Side )

IXIA Vision E40 Configuration

Create the following resources with the information provided:

- Bypass Port Pairs
- Inline Tool Pair
- Service Chains

Figure 8 - Configuration of Vision E40 ( NPB )

This article focuses on deployment of the network packet broker with a single service chain, whereas the previous article is based on two service chains.

Figure 9 - Configuration of Tool Resources

In the single tool resource, two inline tool pairs are configured, which allows both bypass port pairs to be served by a single service chain.

Figure 10 - Configuration of VLAN Translation

From the switch configuration, the source VLAN is 513; it is translated to 2001 for Bypass 1 and 2002 for Bypass 2.
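The heartbeat configuration above encodes the VLAN ID as a 16-bit hex value (VLAN 513 appears as 0201 in the heartbeat editor), and the packet broker then translates that VLAN per bypass pair. Both facts can be checked with a few lines; the `TRANSLATION` mapping is just a restatement of this lab's values.

```python
# The heartbeat payload encodes the 12-bit 802.1Q VLAN ID as a 16-bit
# hex field: VLAN 513 -> 0x0201, matching the "0201" shown in Figure 4.

def vlan_to_hex(vlan_id: int) -> str:
    """Render a VLAN ID as the 4-hex-digit value used in the heartbeat editor."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("802.1Q VLAN IDs are 1-4094")
    return f"{vlan_id:04x}"


print(vlan_to_hex(513))  # 0201

# VLAN translation applied by the NPB in this lab (source VLAN 513):
TRANSLATION = {("Bypass 1", 513): 2001, ("Bypass 2", 513): 2002}
```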
For more insight into VLAN translation, refer to https://devcentral.f5.com/s/articles/L2-Deployment-of-vCMP-guest-with-Ixia-network-packet-broker?page=1

For tagged packets, VLAN translation should be enabled. LACP frames are untagged, so they must be bypassed and routed to the other port-channel. In this case LACP traffic does not reach the BIG-IP; instead it is routed directly from the NPB to the other pair of switches.

LACP Bypass Configuration

The network packet broker is configured to forward (or bypass) the LACP frames directly from the north to the south switch and vice versa. LACP frames bear the ethertype 8809 (in hex). This filter is configured during the bypass port pair configuration.

Note: There are other methods to configure this filter, with the use of service chains and filters, but this is the simplest for this deployment.

Figure 11 - Configuration to redirect LACP

BIG-IP Configuration

Step Summary

Step 1 : Configure interfaces to support vWire
Step 2 : Configure trunk in passthrough mode
Step 3 : Configure Virtual Wire

Note: The steps mentioned above are specific to the topology in Figure 2. For more details on Virtual Wire (vWire), refer to https://devcentral.f5.com/s/articles/BIG-IP-vWire-Configuration?tab=series&page=1 and https://devcentral.f5.com/s/articles/vWire-Deployment-Configuration-and-Troubleshooting?tab=series&page=1

Step 1 : Configure interfaces to support vWire

To configure interfaces to support vWire, do the following:

1. Log into the BIG-IP GUI
2. Select Network -> Interfaces -> Interface List
3. Select the specific interface and, under the vWire configuration, select Virtual Wire as the Forwarding Mode

Figure 12 - Example GUI configuration of interface to support vWire

Step 2 : Configure trunk in passthrough mode

To configure a trunk, do the following:

1. Log into the BIG-IP GUI
2. Select Network -> Trunks
3. Click Create to configure a new trunk
Disable LACP on the trunk for LACP passthrough mode.

Figure 13 - Configuration of North Trunk in Passthrough Mode

Figure 14 - Configuration of South Trunk in Passthrough Mode

Step 3 : Configure Virtual Wire

To configure a virtual wire, do the following:

1. Log into the BIG-IP GUI
2. Select Network -> Virtual Wire
3. Click Create to configure the Virtual Wire

Figure 15 - Configuration of Virtual Wire

As VLAN 513 is translated into 2001 and 2002, the vWire is configured with explicit tagged VLANs. It is also recommended to have an untagged VLAN in the vWire to allow any untagged traffic. Enable the multicast bridging sys db variable as below for LACP passthrough mode:

modify sys db l2.virtualwire.multicast.bridging value enable

Note: Make sure the sys db variable remains enabled after reboot and upgrade. For LACP mode, the multicast bridging sys db variable should be disabled.

Scenarios

As LACP passthrough mode is configured on the BIG-IP, LACP frames pass through the BIG-IP. LACP will be established between the North and South Switches. ICMP traffic is used to represent network traffic from the north switches to the south switches.

Scenario 1: Traffic flow through BIG-IP with North and South Switches configured in LACP active mode

The configurations above show that all four switches are configured in LACP active mode.

Figure 16 - MLAG after deployment of BIG-IP and IXIA with Switches configured in LACP ACTIVE mode

Figure 16 shows that port-channel 513 is active at both the North Switches and the South Switches.

Figure 17 - ICMP traffic flow from client to server through BIG-IP

Figure 17 shows that ICMP is reachable from client to server through the BIG-IP. This verifies test case 1: LACP is established between the switches and traffic passes through the BIG-IP successfully.

Scenario 2: Active BIG-IP link goes down with link state propagation enabled in BIG-IP

Figure 15 shows Propagate Virtual Wire Link Status enabled in BIG-IP. Figure 17 shows that interface 1.1 of the BIG-IP is the active incoming interface and interface 1.4 is the active outgoing interface.
Disabling BIG-IP interface 1.1 brings the active link down, as shown below.

Figure 18 - BIG-IP interface 1.1 disabled

Figure 19 - Trunk state after BIG-IP interface 1.1 disabled

Figure 19 shows that the trunks are up even though interface 1.1 is down. As configured, North_Trunk has two member interfaces, 1.1 and 1.3, and one of them is still up, so North_Trunk remains active.

Figure 20 - MLAG status with interface 1.1 down and Link State Propagation enabled

Figure 20 shows that port-channel 513 is active at both the North Switches and the South Switches. The switches are not aware of the link failure; it is handled by the IXIA configuration.

Figure 21 - IXIA Bypass Switch after 1.1 interface of BIG-IP goes down

As shown in Figure 8, a single service chain is configured, and it will go down only if both inline tool port pairs in the NPB are down. Bypass will therefore be enabled only if the service chain goes down. Figure 21 shows that bypass is still not enabled in the IXIA bypass switch.

Figure 22 - Service Chain and Inline Tool Port Pair status in IXIA Vision E40 ( NPB )

Figure 22 shows that the service chain is still up because BIG IP2 (inline tool port pair) is up, whereas BIG IP1 is down. Figure 1 shows that P09 of the NPB is connected to interface 1.1 of the BIG-IP, which is down.

Figure 23 - ICMP traffic flow from client to server through BIG-IP

Figure 23 shows that traffic still flows through the BIG-IP even though interface 1.1 is down. The active incoming interface is now 1.3 and the active outgoing interface is 1.4. Low bandwidth traffic is still allowed through the BIG-IP because bypass is not enabled and IXIA handles the rate limiting.

Scenario 3: When North_Trunk goes down with link state propagation enabled in BIG-IP

Figure 24 - BIG-IP interface 1.1 and 1.3 disabled

Figure 25 - Trunk state after BIG-IP interface 1.1 and 1.3 disabled

Figure 15 shows that Propagate Virtual Wire Link State is enabled, and thus both trunks are down.
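The bypass behavior observed in scenarios 2 and 3 follows a simple rule: a service chain stays up while at least one of its inline tool port pairs is up, and the bypass switch engages only when the chain serving it goes down. The toy model below is only an illustration of that rule, not IXIA's implementation.

```python
# Toy model of the single-service-chain bypass decision seen in these
# scenarios: the chain is up while any inline tool port pair is up, and
# the bypass switch engages only when the whole chain is down.
# This is an illustration, not IXIA's actual logic.

def chain_up(tool_pairs_up):
    """A service chain is up if at least one of its inline tool port pairs is up."""
    return any(tool_pairs_up)


def bypass_engaged(tool_pairs_up):
    """The bypass switch turns on only when its service chain is down."""
    return not chain_up(tool_pairs_up)


# Scenario 2: interface 1.1 down -> one of two pairs down, chain still up:
print(bypass_engaged([False, True]))   # False: traffic still reaches the tool

# Scenario 3: interfaces 1.1 and 1.3 down -> both pairs down, bypass turns on:
print(bypass_engaged([False, False]))  # True: traffic bypasses the BIG-IP
```

With multiple service chains (as in the companion article), each bypass pair has its own single-pair chain, so one failed pair is enough to trip its bypass.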
Figure 26 - IXIA Bypass Switch after 1.1 and 1.3 interfaces of BIG-IP go down

Figure 27 - ICMP traffic flow from client to server bypassing BIG-IP

Conclusion

This article covers a BIG-IP L2 virtual wire passthrough deployment with IXIA, with IXIA configured using a single service chain. Observations of this deployment:

- VLAN translation in the IXIA NPB converts the real VLAN ID (513) to the translated VLAN IDs (2001 and 2002).
- BIG-IP receives packets with the translated VLAN IDs (2001 and 2002).
- VLAN translation requires all packets to be tagged; untagged packets are dropped. LACP frames are untagged, so a bypass is configured in the NPB for LACP. Tool sharing can be enabled to allow untagged packets, which adds an extra tag; this type of configuration and testing will be covered in upcoming articles.
- With a single service chain, if any one of the inline tool port pairs goes down, low bandwidth traffic is still allowed to pass through the BIG-IP (tool). If any inline tool link goes down, IXIA decides whether to bypass or rate limit; the switches remain unaware of the changes.
- With a single service chain, if the tool resource is configured with both inline tool port pairs in an Active - Active state, load balancing occurs and both paths are active at any point in time.
- Multiple service chains in the IXIA NPB can be used instead of a single service chain to remove the rate limiting; this type of configuration and testing will be covered in upcoming articles.
- If the BIG-IP goes down, IXIA enables bypass and ensures there is no packet drop.

L2 Deployment of vCMP guest with Ixia network packet broker
Introduction

The insertion of inline security devices into an existing network infrastructure can require significant network re-design and architecture changes. Deploying tools that operate transparently at Layer 2 of the OSI model (L2) can greatly reduce the complexity and disruption associated with these implementations. This type of insertion eliminates the need to make changes to the infrastructure and provides failsafe mechanisms to ensure business continuity should a security device fail. F5’s BIG-IP hardware appliances can be inserted in L2 networks. This can be achieved using either virtual wire (vWire) or by bridging two VLANs using a VLAN Group.

This document covers the design and implementation of the Ixia bypass switch and Ixia packet broker in conjunction with the BIG-IP i5800 appliance configured with hardware virtualization (vCMP), VLAN Groups and VLAN tagging (IEEE 802.1q tagging). Emphasis is placed on the network insertion, integration and Layer 2 configuration. The configuration of BIG-IP modules, such as those providing DDoS protection/mitigation or SSL visibility, is beyond the scope of this document and is the subject of other deployment guides. For more information on F5 security modules and their configuration, please refer to www.f5.com to access user guides, recommended practices and other deployment documentation.

Architecture Overview

Enterprise networks are built using various architectures depending on business objectives and budget requirements. As corporate security policies, regulations and requirements evolve, new security services need to be inserted into the existing infrastructure. These new services can be provided by tools such as intrusion detection and prevention systems (IDS/IPS), web application firewalls (WAF), denial of service protection (DoS), or data loss prevention devices (DLP). These are often implemented in the form of physical or virtual appliances requiring network-level integration.
Figure 1 - Bypass Switch Operation

This document focuses on using bypass switches as insertion points, and network packet brokers to provide further flexibility. Bypass switches are passive networking devices that mimic the behavior of a straight piece of wire between devices while offering the flexibility to forward traffic to a security service. They offer the possibility of detecting service failure and bypassing the service completely should it become unavailable. This is illustrated in Figure 1. The bypass switch forwards traffic to the service during normal operation, and bypasses the tool in other circumstances (e.g. tool failure, maintenance, manual offline). The capabilities of the bypass switch can be enhanced with the use of network packet brokers.

Note: Going forward, “tool” or “security service” refers to the appliance providing a security service. In the example below, this is an F5 BIG-IP appliance providing DDoS protection.

Network packet brokers are similar to bypass switches in that they operate at L2, do not take part in the switching infrastructure signaling (STP, BPDU, etc.) and are transparent to the rest of the network. They provide the forwarding flexibility to integrate and forward traffic to more than one device and create a chain. These chains allow for the use of multiple security service tools. Figure 2 provides a simplified example where the network packet broker is connected to 2 different tools/security services. Network packet brokers operate programmatically and can conditionally forward traffic to tools. Administrators are able to create multiple service chains based on ingress conditions or traffic types. Another function of the network packet broker is to provide logical forwarding and encapsulation (Q-in-Q) functions without taking part in the Ethernet switching. This includes adding, removing, or replacing 802.1q tags, and conditional forwarding based on frame type, VLAN tags, etc.
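The broker's tag handling (pushing an outer 802.1q tag on ingress via Q-in-Q, rewriting it per filter, and stripping it on egress) can be sketched as a toy model. The Python below is illustrative only, not vendor code; the frame structure and function names are invented for this example, and the VLAN IDs match the 101/2001/4001 values used later in this document:

```python
# Illustrative model of network-packet-broker tag handling (not vendor code).
# A frame is modeled as a dict carrying a stack of 802.1q tags; the
# outermost tag is the last element of the list.

def push_outer_tag(frame, vlan_id):
    """Q-in-Q: add an outer tag on ingress to the broker."""
    frame["tags"].append(vlan_id)
    return frame

def rewrite_outer_tag(frame, old_id, new_id):
    """Filter action: replace the outer tag if it matches."""
    if frame["tags"] and frame["tags"][-1] == old_id:
        frame["tags"][-1] = new_id
    return frame

def pop_outer_tag(frame):
    """Strip the outer tag on egress back toward the network."""
    if frame["tags"]:
        frame["tags"].pop()
    return frame

# A frame from the core arrives tagged 101; the broker adds outer tag 2001,
# rewrites it to 4001 for the tool, and the reverse path undoes both steps.
frame = {"payload": "icmp-echo", "tags": [101]}
push_outer_tag(frame, 2001)
rewrite_outer_tag(frame, 2001, 4001)
print(frame["tags"])   # [101, 4001] on the way to the tool
rewrite_outer_tag(frame, 4001, 2001)
pop_outer_tag(frame)
print(frame["tags"])   # [101] restored on the way back to the network
```

The original frame is untouched end to end; only the outer tag the broker itself added is manipulated, which is what keeps the insertion transparent to the switches.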
Figure 2 - Network Packet Broker - Service Chain

When inserted into the network at L2, BIG-IP devices leveraging system-level virtualization (vCMP) require the use of VLAN Groups. VLAN Groups bridge two VLANs together. In this document, the VLANs utilized are tagged using 802.1q. This means that the tagging used on traffic ingress is different from the tagging used on traffic egress, as shown in Figure 3.

From an enterprise network perspective, the infrastructure typically consists of border routers feeding into border switches. Firewalls connect into the border switches with their outside (unsecured/internet-facing) interfaces. They connect to the core switching mesh with their inside (protected, corporate and systems-facing) interfaces. Figure 3 below shows the insertion of the bypass switch in the infrastructure between the firewall and the core switching layer. A network packet broker is also inserted between the bypass switch and the security services.

Figure 3 - Service Chain Insertion

Note: the core switch and firewall configuration are not altered in any way.

Figure 4 describes how frames traverse the bypass switch, network packet broker and security device. It also shows the transformation of the frames in transit. The VLAN tags used in the diagram are provided for illustration purposes; network administrators may wish to use VLAN tags consistent with their environment. Prior to the tool chain insertion, packets egress the core and ingress the firewall with a VLAN tag of 101. After the insertion, packets egress the core (blue path) tagged with 101 and ingress the Bypass 1 (BP1) switch (1). They are redirected to the network packet broker (PB1). On ingress to the PB1 (2), an outer VLAN tag of 2001 is added. The VLAN tag is then changed to match the BIG-IP VLAN Group tag of 4001 before egressing the PB1 (3). An explanation of the network packet broker's use of VLAN tags and the VLAN ID replacement is covered in the next section.
BIG-IP 1 processes the packet (4) and returns it to the PB1 with a replaced outer VLAN of 2001 (5). The PB1 removes the outer VLAN tag and sends it back to BP1 (6). The BP1 forwards it to the north switch (1) with the original VLAN tag of 101. Path 2 (green) follows the same flow, but on a different bypass switch, network packet broker and BIG-IP. Path 2 is assigned different outer VLAN tags (2003 and 4003) by the packet broker.

Figure 4 - South-North traffic flow

Heartbeats are configured on both bypass switches to monitor the tools in their primary and secondary paths. If a tool failure is detected, the bypass switch forwards traffic to the secondary path. This is illustrated in Figure 4.5.

Figure 4.5 - Heartbeat

Network Packet Broker (NPB) VLAN Re-write

The network packet broker utilizes VLANs to keep track of flows from different paths in a tool-sharing configuration. A unique VLAN ID is configured for each path. The tag is added on ingress and removed on egress. The VLAN tags enable the packet broker to keep track of flows in and out of the shared tool and return them to the correct path. If the flow entering the network packet broker already has a VLAN tag, then the packet broker must be configured to use Q-in-Q to add an outer tag.

In this document, the BIG-IP is deployed as a tool in the network packet broker service chain. The BIG-IP is running vCMP and is configured in VLAN Group mode. In this mode, the BIG-IP requires two VLANs to operate, one facing north and the other facing south. As packets traverse the BIG-IP, the VLAN tag is changed. This presents a challenge for the network packet broker, because it expects to receive the same unaltered packets that it sends to the inline tools; it will drop the altered packets. To address this issue, additional configuration is required, using service chains, filters and hard loops.

Network Packet Broker VLAN Replacement

1. The frames ingress the network packet broker on port 2.
An outer VLAN tag of 2001 is added to the frames by Service Chain 3 (SC3).
2. The frames are forwarded out port 17 and egress the network packet broker, which is externally patched to port 18.
3. Port 18 is internally linked to port 10 by a filter.
4. As traffic egresses port 10, a filter is applied to change the VLAN from 2001 to 4001.
5. The outer VLAN tag on the frames is changed from 4001 to 2001 as they traverse the BIG-IP. The frames egress port 2.1 on the BIG-IP and ingress the network packet broker on port 9.
6. The frames are sent through SC3, where the outer VLAN is stripped off, and egress on port 1.
7. The frames are forwarded back to the bypass.

The return traffic follows the same flow as described above, but in reverse order. The only difference is that a different filter is applied to port 10 to replace the 4001 tag with 2001.

Figure 5 - Network Packet Broker VLAN Tag Replacement

Lab

The use case selected for this verified design is based on a customer design. The customer's requirements were that the BIG-IPs must be deployed in vCMP mode and at Layer 2, which limits the BIG-IP deployment to VLAN Group mode. The design presented challenges, and creative solutions to overcome them. The intention is not for the reader to replicate the design but to ….

The focus of this lab is the L2 insertion point and the flow of traffic through the service chain. Pairs of switches were used to represent the north and south ends of each path: a pair for blue and a pair for green. One physical bypass switch was configured with two logical bypass switches, and one physical network packet broker simulated two network packet brokers.

Lab Equipment List

Appliance Version

Figure 6 - Lab diagram

Lab Configuration
- Arista network switches
- Ixia Bypass switch
- Ixia Network Packet Broker
- F5 BIG-IP
- Test case

Arista Network Switches

Four Arista switches were used to generate the north-south traffic.
A pair of switches represents Path 1 (blue), with a firewall to the north of the insertion and the core to the south. The second pair of switches represents Path 2 (green). A VLAN 101 and a VLAN interface 101 were created on each switch. Each VLAN interface was assigned an IP address in the 10.10.101.0/24 range.

Ixia iBypass Duo Configuration
- Step 1. Power Fail State
- Step 2. Enable Ports
- Step 3. Bypass Switch
- Step 4. Heartbeat

The initial setup of the iBypass Duo switch is covered in the Ixia iBypass Duo User's Guide; please visit the Ixia website to download a copy. This section covers the configuration of the bypass switch to forward traffic to the network packet broker (PB1); in the event PB1 fails, to forward traffic to the secondary network packet broker (PB2); and, as a last resort, to fail open and permit traffic to flow, bypassing the service chain.

Step 1. In the event of a power failure, the bypass switch is configured to fail open and permit the traffic to flow uninterrupted.
a. Click the CONFIGURATION (1) menu bar and select Chassis (2). Select Open (3) from the Power Fail State and click SAVE (4) on the menu bar.

Step 2. Enable Ports
a. Click the CONFIGURATION (1) menu bar and select Port (2).
b. Check the box (3) at the top of the column to select all ports and click Enable (4).
c. Click SAVE (5) on the menu bar.

Step 3. Configure Bypass Switch 1 and 2
a. Click Diagram (1) and click +Add Bypass Switch (2).
b. Select the Inline Network Links tab (1) and click Add Ports (2). From the pop-up window, select port A. The B side port will automatically be selected.
c. Select the Inline Tools (1) tab and click the + (2).
d. From the Edit Tool Connections window, on the A side (top), click Add Ports (1) and select port 1 from the pop-up window (2). Repeat and select port 5. On the B side (bottom), click Add Ports and select port 2 (3). Repeat and select port 6.

Note: The position of the ports is also the priority of the ports.
In this example, ports 1 (A side) and 2 (B side) are the primary path.
e. Repeat steps a through d to create Bypass Switch 2 with Inline Network Links C and D, and Inline Tools ports 7,8 and 3,4 as the secondary.

Step 4. Heartbeat configuration
a. From the Diagram view, click the Bypass Switch 1 menu square (1) and select Properties (2).
b. Click the Heartbeats tab (1), click show (2) and populate the values (3). To edit a field, just click the field and type. Click OK and check the Enabled box (4).
c. Repeat steps a and b to create the heartbeats for the remaining interfaces. Ideally, heartbeats are configured to check both directions: from tool port 1 -> tool port 2, and from tool port 2 -> tool port 1. Repeat the steps to create the heartbeat for port 2, but reverse the MACs for SMAC and DMAC. Use a different set of MACs (e.g. 0050 c23c 6012 and 0050 c23c 6013) when configuring the heartbeat for tool ports 5 and 6.

This concludes the bypass switch configuration.

Network Packet Broker (NPB) Configuration

In this lab, the NPB is configured with three types of ports: Bypass, Inline Tool and Network.

Steps Summary
- Step 1. Configure Bypass Port Pairs
- Step 2. Create Inline Tool Resources Ports
- Step 3. Create Service Chains
- Step 4. Link the Bypass Pairs with the Service Chains
- Step 5. Create Dynamic Filters
- Step 6. Apply the Filters

Step 1. Configure Bypass Port Pairs (BPP)
Bypass ports are ports that send and receive traffic from the network side. In this lab, they are connected to the bypass switches.
a. Click the INLINE menu (1) and click Add Bypass Port Pair (2).
b. In the Add Bypass Port Pair window, enter a name (ByPass 1 Primary). To select the Side A Port, click the Select Port button (2). In the pop-up window, select a port (P01). Now select the Side B Port (P02) (3) and click OK. Repeat these steps to create the remaining BPPs:
- ByPass 1 Secondary with P05 (Side A) and P06 (Side B)
- ByPass 2 Primary with P07 (Side A) and P08 (Side B)
- ByPass 2 Secondary with P03 (Side A) and P04 (Side B)

Step 2. Create Inline Tool Resources Ports
Inline Tool Resources (ITR) are ports connected to tools, such as the BIG-IP. These ports are used in the service chain configuration to connect BPPs to ITRs.
a. Click the INLINE menu (1) and click Add Tool Resource (2).
b. Enter a name (BIG-IP 1) (1) and click the Inline Tool Ports tab (2).
c. To select the Side 1 Port, click the Select Port (1) button and select a port (P09) from the pop-up window. Do the same for the Side 2 port (P17) (2). Provide an Inline Tool Name (BIG-IP 1) (3) and click Create Port Pair (4). Repeat these steps to create ITR BIG-IP 2 using ports P13 and P21.

NOTE: The Side B port does not match the diagram due to the VLAN replacement explained previously.

Step 3. Create Service Chains
A Service Chain connects BPPs to the inline tools. It controls how traffic flows from the BPPs to the tools in the chain through the use of Dynamic Filters.
a. Click the INLINE menu (1) and click Add Service Chain (2).
b. In the pop-up window, enter a name (Service Chain 1) (1) and check the box to Enable Tool Sharing (2). Click Add (3) and, in the pop-up window, select Bypass 1 Primary and Bypass 2 Secondary. Once added, the BPPs are displayed in the window. Select each VLAN Id field and replace them with (4) 2001 and (5) 2002. Repeat these steps to create Service Chain 2, using BPPs Bypass 2 Primary and Bypass 1 Secondary with VLANs 2003 and 2004 respectively. Click the Inline Tool Resource tab (6) to add ITRs.
c. On the Inline Tool Resource tab, click Add and select the ITR (BIG-IP 1) from the pop-up window. Repeat these steps for Service Chain 2 and select BIG-IP 2.
d. The next step connects the network (BPPs) to the tools using the service chains. To connect the BPPs to the service chains, simply drag a line to link them. The lines in the red box are created manually.
The lines in the blue box are automatically created to correlate with the links in the red box. This means traffic sent out BPP port A, into the service chain, is automatically returned to port B.

Step 4. Configure Filters
Filters are used to link ports and to match and rewrite traffic.
a. Click the OBJECTS menu (1), select Dynamic Filters (2), click +Add (3) and select Dynamic Filters.
b. Enter a name (1).
c. On the Filter Criteria tab, select Pass by Criteria (1) and click VLAN (2). In the pop-up window, enter a VLAN ID (4001) and select Match Any (3).
d. On the Connections tab, click Add Ports (1) to add a network port. In the pop-up window, select a port (P10). Add a port for tools (P18) (2).
e. Skip the Access Control tab and select the VLAN Replacement tab. Check the Enable VLAN Replacement box and enter a VLAN ID (2001). Repeat these steps to create the remaining filters using the table below.

NOTE: The filter name (Fx) does not need to match this table exactly.

This concludes the network packet broker configuration.

Filters

BIG-IP Configuration

This section describes how to configure a vCMP BIG-IP device to utilize VLAN Groups. As a reminder, a VLAN Group is a configuration element that allows the bridging of VLANs. In vCMP, the hypervisor is called the vCMP host. Virtual machines running on the host are called guests. Lower-layer networking configuration on vCMP is done at the host level. VLANs are then made available to the guest, and the VLAN bridging is configured at the guest level. In the setup described herein, the VLAN interfaces are tagged with two 802.1q tags; Q-in-Q is used to provide inner and outer tagging.

The following assumes that the BIG-IPs are up and running, upgraded, licensed and provisioned for vCMP. It is also assumed that all physical connectivity is completed as appropriate, following a design identifying port, VLAN tagging and other Ethernet media choices.
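The behavioral quirk that the NPB filters compensate for can be modeled in a few lines. A VLAN Group bridges two VLANs at L2, so a frame that enters on one member VLAN egresses on the other with its tag changed. The sketch below is illustrative Python, not BIG-IP code; the class and field names are invented, and the 4001/2001 tags match the flow described earlier:

```python
# Toy model of a VLAN group: a frame entering on one member VLAN is
# bridged out on the other member, so its 802.1q tag is swapped.
# This tag change is exactly why the packet broker needs rewrite filters.

class VlanGroup:
    def __init__(self, vlan_a, vlan_b):
        # Each member VLAN maps to its bridged peer.
        self.peer = {vlan_a: vlan_b, vlan_b: vlan_a}

    def bridge(self, frame):
        """Return the frame as it egresses on the other member VLAN."""
        out = dict(frame)
        out["vlan"] = self.peer[frame["vlan"]]
        return out

vg = VlanGroup(4001, 2001)
ingress = {"vlan": 4001, "payload": "icmp-echo"}
egress = vg.bridge(ingress)
print(egress["vlan"])  # 2001: the tag no longer matches what the NPB sent out
```

Because the broker sees a tag it did not send, it would drop the frame; the port-10 filter rewrites the tag back before the frame re-enters the service chain.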
Prior to proceeding, you will need the following information for each BIG-IP that will be configured:

Configuration overview:
1. [vCMP host] Create the VLANs that will be bridged
2. [vCMP host] Create the vCMP guest:
   a. Configure – define the software version, size of the VM, associated VLANs, etc.
   b. Provision – create the BIG-IP virtual machine or guest
   c. Deploy – start the BIG-IP guest
3. [vCMP guest] Bridge the VLAN group

Create VLANs that will be bridged:
- Login to the vCMP host interface
- Go to Network >> VLAN >> VLAN List
- Select “Create”
- In the VLAN configuration panel:
  - Provide a name for the object
  - Enter the Tag (this corresponds to the “outer” tag)
  - Select “Specify” in the Customer Tag dropdown
  - Enter a value for the Customer Tag, between 1 and 4094 (this is the “inner” tag)
  - Select an interface to associate the VLAN to
  - Select “Tagged” in the “Tagging” drop down
  - Select “Double” in the “Tag Mode” drop down
  - Click on the “add” button in the Resources box
  - Select “Finished” as shown in the figure below

Repeat the steps above to create a second VLAN that will be added to the VLAN group. Once the above steps are completed, the VLAN webUI should look like:

Create vCMP Guest
- Login to the vCMP host interface
- Go to vCMP >> Guest List
- Select “Create…” (upper right-hand corner)
- Populate the following fields:
  - Name
  - Host Name
  - Management Port
  - IP Address
  - Network Mask
  - Management Route
  - VLAN List – ensure that the VLANs that need to be bridged are in the “selected” pane
- Set the “Requested State” to “Deployed” (this will create a virtual BIG-IP)
- Click on “Finish” – the window should look like the following:

Clicking on “Finish” will configure, provision and deploy the BIG-IP guest.

Bridge VLAN group
- Login to the vCMP guest interface
- Go to Network >> VLANs >> VLAN Groups
- Select “Create”
- In the configuration window, as shown below:
  - Enter a unique name for the VLAN group object
  - Select the VLANs that need to be bridged
  - Keep the default configuration for the other settings
  - Select “Finished”

Once created, traffic should be able to traverse the BIG-IP. This concludes the BIG-IP configuration.

Ixia Xcellon-Ultra XT-80 validates F5 Network's VIPRION 2400 SSL Performance
Courtesy of the IxiaTested YouTube Channel: Ryan Kearny, VP of Product Development at F5 Networks, explains how Ixia's Xcellon-Ultra XT80 high-density application performance platform is used to test and verify the performance limits of the VIPRION 2400.
ps

Resources:
- Interop 2011 - Find F5 Networks Booth 2027
- Interop 2011 - F5 in the Interop NOC
- Interop 2011 - VIPRION 2400 and vCMP
- Interop 2011 - IXIA and VIPRION 2400 Performance Test
- Interop 2011 - F5 in the Interop NOC Follow Up
- Interop 2011 - Wrapping It Up
- Interop 2011 - The Video Outtakes
- Interop 2011 - TMCNet Interview
- F5 YouTube Channel
- Ixia Website

Cloud Testing: The Next Generation
It seems only fair that as the Internet caused the problem, it should solve it.

One of the negatives of deploying an Internet-scale infrastructure and application is that until it’s put to the test, you can’t have 100 percent confidence that it will scale as expected. If you do, you probably shouldn’t. Applications and infrastructure that perform well – and correctly – at nominal scale may begin to act wonky as load increases. Dan Bartow, VP at SOASTA, says it is still often load balancing configuration errors, cropping up during testing, that impede scalability and performance under load. Choices regarding the load balancing algorithm have a direct impact on the way in which sites and applications scale – or fail to scale – and only under stress do infrastructure and applications begin to experience problems. The last time I ran a scalability and performance test on industry load balancers, that’s exactly what happened: what appeared to be a well-behaved load balancer under normal load turned into a temper-tantrum-throwing device under heavier load. The problem? A defect deep in the code that only appeared when the device’s session table was full. Considering the capability of such devices even then, that meant millions of connections had to be seen in a single session before the problem reared its ugly head. Today, load balancers are capable of not millions, but tens of millions of connections – scale that is difficult if not impossible for organizations to duplicate.

Cloud computing and virtualization bring new challenges to testing the scalability of an application deployment. Applications deployed in a cloud environment may be designed to auto-scale “infinitely,” which implies that testing such an application and its infrastructure requires the same capability in a testing solution. That’s no small trick.
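At small scale, the idea is simple: a load generator is just many concurrent clients driving a target while recording results, and the generator must itself scale at least as well as the system under test. A minimal single-machine sketch follows; the target here is a stub function standing in for an HTTP request, and nothing about it reflects any particular product:

```python
# Minimal load-generation sketch: N concurrent "clients" drive a target
# and per-request latencies are collected. A real Internet-scale test
# would distribute this across many machines; the target below is a stub.
import time
from concurrent.futures import ThreadPoolExecutor

def stub_target():
    """Stand-in for an HTTP request to the system under test."""
    time.sleep(0.001)
    return 200

def run_load(clients, requests_per_client):
    latencies = []
    def client():
        for _ in range(requests_per_client):
            start = time.perf_counter()
            status = stub_target()
            if status == 200:
                latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=clients) as pool:
        for _ in range(clients):
            pool.submit(client)
        # exiting the with-block waits for all clients to finish
    return latencies

lats = run_load(clients=20, requests_per_client=10)
print(len(lats), "successful requests")  # 200 successful requests
```

The gap between this sketch and a real test harness (millions of concurrent connections, realistic traffic mixes, geographically distributed generators) is exactly the capacity problem discussed above.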
Traditionally, organizations would leverage a load testing solution capable of generating enough clients and traffic to push an application and its infrastructure to the limits. But given increases in raw compute power and parallel improvements in the capacity and performance of infrastructure solutions, the cost of a solution capable of generating the necessary Internet-scale load is prohibitive. One of our internal performance management engineers applied some math and came up with a jaw-dropping investment:

In other words, enough hardware to test a top-of-the-line ADC [application delivery controller] would set you back a staggering $3 million. It should be clear that even buying equipment to test a fairly low-end ADC would be a big ticket item, likely costing quite a bit more than the device under test.

It seems fairly obvious that testing Internet-scale architectures is going to require Internet-scale load generation solutions, but without the Internet-scale cost. It’s only fair that if the scalability of the Internet is the cause of the problem, it should also provide the solution.

For Thirty Pieces of Silver My Product Can Beat Your Product
One of the side-effects of the rapid increase in compute power, combined with an explosion of Internet users, has been the need for organizations to grow their application infrastructures to support more and more load. That means higher capacity everything – from switches to routers to application delivery infrastructure to the applications themselves. Cloud computing has certainly stepped up to address this, providing the means by which organizations can efficiently and more cost-effectively increase capacity. Between cloud computing and increasing demands on applications, there is a need for organizations to invest in the infrastructure necessary to build out a new network – one that can handle the load and integrate into the broader ecosystem to enable automation and, ultimately, orchestration. Indeed, Denise Dubie of Network World pulled together data from analyst firms Gartner and Forrester, and the trend in IT spending shows that hardware is king this year.

"Computing hardware suffered the steepest spending decline of the four major IT spending category segments in 2009. However, it is now forecast to enjoy the joint strongest rebound in 2010," said George Shiffler, research director at Gartner, in a statement.

That is, of course, good news for hardware vendors. The bad news is that the perfect storm of increasing capacity needs, massively more powerful compute resources, and the death of objective third-party performance reviews results in a situation that forces would-be IT buyers to rely upon third parties to provide “real-world” performance data to assist in the evaluation of solutions. The ability – or willingness – of an organization to invest in the hardware or software solutions needed to generate the load necessary to simulate “real-world” traffic on any device is minimal, and unsurprisingly so. Performance testing products like those from Spirent and Ixia are not inexpensive, and the investment is hard to justify because it isn’t used very often.
But without such solutions it is nearly impossible for an organization to generate the kind of load necessary to really test out potential solutions. And organizations need to test them, because the solutions themselves are not inexpensive, and it’s perfectly understandable that an organization wants to make sure its investment is in a solution that performs as advertised. That means relying on third parties who have made the investment in performance testing solutions and can generate the kind of load necessary to test vendor claims. That’s bad news, because many third parties aren’t necessarily interested in getting at the truth; they’re interested in getting the check that’s cut at the end of the test. Because it’s the vendor cutting that check and not the customer, you can guess whose interests are best served by such testing.