L2 Deployment of a vCMP Guest with an Ixia Network Packet Broker
Introduction

Inserting inline security devices into an existing network infrastructure can require significant network re-design and architectural change. Deploying tools that operate transparently at Layer 2 of the OSI model (L2) greatly reduces the complexity and disruption associated with these implementations. This type of insertion eliminates the need to make changes to the infrastructure and provides failsafe mechanisms to ensure business continuity should a security device fail. F5's BIG-IP hardware appliances can be inserted in L2 networks using either virtual wire (vWire) or by bridging two VLANs with a VLAN group.

This document covers the design and implementation of the Ixia bypass switch and Ixia packet broker in conjunction with a BIG-IP i5800 appliance configured with hardware virtualization (vCMP), VLAN groups, and VLAN tagging (IEEE 802.1Q). The emphasis is on network insertion, integration, and Layer 2 configuration. The configuration of BIG-IP modules, such as those providing DDoS protection/mitigation or SSL visibility, is beyond the scope of this document and is the subject of other deployment guides. For more information on F5 security modules and their configuration, please refer to www.f5.com for user guides, recommended practices, and other deployment documentation.

Architecture Overview

Enterprise networks are built using various architectures depending on business objectives and budget requirements. As corporate security policies, regulations, and requirements evolve, new security services need to be inserted into the existing infrastructure. These services can be provided by tools such as intrusion detection and prevention systems (IDS/IPS), web application firewalls (WAF), denial-of-service (DoS) protection, or data loss prevention (DLP) devices. They are often implemented as physical or virtual appliances requiring network-level integration.

Figure 1 - Bypass Switch Operation

This document focuses on using bypass switches as insertion points, with network packet brokers providing further flexibility. Bypass switches are passive networking devices that mimic the behavior of a straight piece of wire between devices while offering the flexibility to forward traffic to a security service. They can detect a service failure and bypass the service completely should it become unavailable. This is illustrated in Figure 1: the bypass switch forwards traffic to the service during normal operation and bypasses the tool in other circumstances (e.g., tool failure, maintenance, or manual offline). The capabilities of the bypass switch can be enhanced with the use of network packet brokers.

Note: Going forward, "tool" or "security service" refers to the appliance providing a security service. In the example below, this is an F5 BIG-IP appliance providing DDoS protection.

Network packet brokers are similar to bypass switches in that they operate at L2, do not take part in the switching infrastructure's signaling (STP, BPDUs, etc.), and are transparent to the rest of the network. They provide the forwarding flexibility to integrate and forward traffic to more than one device, creating a chain. These chains allow multiple security service tools to be used.
Figure 2 provides a simplified example in which the network packet broker is connected to two different tools/security services. Network packet brokers operate programmatically and are capable of conditionally forwarding traffic to tools. Administrators can create multiple service chains based on ingress conditions or traffic types. Another function of the network packet broker is to provide logical forwarding and encapsulation (Q-in-Q) functions without taking part in Ethernet switching. This includes adding, removing, and replacing 802.1Q tags, as well as conditional forwarding based on frame type, VLAN tags, and so on.

Figure 2 - Network Packet Broker - Service Chain

When inserted into the network at L2, BIG-IP devices leveraging system-level virtualization (vCMP) require the use of VLAN groups. A VLAN group bridges two VLANs together. In this document, the VLANs are tagged using 802.1Q, which means the tagging used on traffic ingress differs from the tagging used on traffic egress, as shown in Figure 3.

From an enterprise network perspective, the infrastructure typically consists of border routers feeding into border switches. Firewalls connect to the border switches with their outside (unsecured, internet-facing) interfaces, and to the core switching mesh with their inside (protected, corporate- and systems-facing) interfaces. Figure 3 below shows the insertion of the bypass switch in the infrastructure between the firewall and the core switching layer. A network packet broker is also inserted between the bypass switch and the security services.

Figure 3 - Service Chain Insertion

Note: The core switch and firewall configurations are not altered in any way.

Figure 4 describes how frames traverse the bypass switch, network packet broker, and security device, and shows how the frames are transformed in transit. The VLAN tags used in the diagram are provided for illustration purposes; network administrators may wish to use VLAN tags consistent with their environment.

Prior to the tool chain insertion, packets egress the core and ingress the firewall with a VLAN tag of 101. After the insertion, packets egress the core (blue path) tagged with 101 and ingress bypass switch 1 (BP1) (1). They are redirected to the network packet broker (PB1). On ingress to PB1 (2), an outer VLAN tag of 2001 is added. The outer tag is then changed to match the BIG-IP VLAN group tag of 4001 before egressing PB1 (3). The network packet broker's use of VLAN tags and the VLAN ID replacement are covered in the next section. The packet is processed by BIG-IP 1 (4), which returns it to PB1 with a replaced outer VLAN tag of 2001 (5). PB1 removes the outer VLAN tag and sends the packet back to BP1 (6). BP1 forwards it to the north switch (1) with the original VLAN tag of 101. Path 2 (green) follows the same flow but uses a different bypass switch, network packet broker, and BIG-IP; it is assigned different outer VLAN tags (2003 and 4003) by its packet broker.

Figure 4 - South-North Traffic Flow

Heartbeats are configured on both bypass switches to monitor the tools in their primary and secondary paths. If a tool failure is detected, the bypass switch forwards traffic to the secondary path. This is illustrated in Figure 4.5.

Figure 4.5 - Heartbeat

Network Packet Broker (NPB) VLAN Re-write

The network packet broker utilizes VLANs to keep track of flows from different paths in a tool-sharing configuration. A unique VLAN ID is configured for each path.
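To make the Figure 4 narrative easier to follow, the Path 1 (blue) tag rewrites can be summarized hop by hop. The values below are the illustrative tags used throughout this document:

```
Hop (Path 1, south-to-north)               Outer tag       Inner tag
Core -> BP1                                none            101
BP1 -> PB1 (service chain ingress)         2001 (added)    101
PB1 -> BIG-IP 1 (after filter rewrite)     4001            101
BIG-IP 1 -> PB1 (VLAN group bridging)      2001            101
PB1 -> BP1 (outer tag stripped)            none            101
BP1 -> north switch / firewall             none            101
```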
The tag is added on ingress and removed on egress. These VLAN tags enable the packet broker to keep track of flows into and out of the shared tool and return them to the correct path. If a flow entering the network packet broker already has a VLAN tag, then the packet broker must be configured to use Q-in-Q to add an outer tag.

In this document, the BIG-IP is deployed as a tool in the network packet broker's service chain. The BIG-IP is running vCMP and is configured in VLAN group mode. In this mode, the BIG-IP requires two VLANs to operate: one facing north and the other facing south. As packets traverse the BIG-IP, the VLAN tag is changed. This presents a challenge for the network packet broker because it expects to receive the same unaltered packets that it sends to the inline tools; it will drop the altered packets. To address this issue, additional configuration is required, using service chains, filters, and hard loops.

Network Packet Broker VLAN Replacement

1. The frames ingress the network packet broker on port 2. An outer VLAN tag of 2001 is added to the frames by Service Chain 3 (SC3).
2. The frames are forwarded out of port 17 and egress the network packet broker, which is externally patched to port 18.
3. Port 18 is internally linked to port 10 by a filter.
4. As traffic egresses port 10, a filter is applied to change the VLAN from 2001 to 4001.
5. The outer VLAN tag on the frames is changed from 4001 to 2001 as they traverse the BIG-IP. The frames egress port 2.1 on the BIG-IP and ingress the network packet broker on port 9.
6. The frames are sent through SC3, where the outer VLAN tag is stripped off, and egress on port 1.
7. The frames are forwarded back to the bypass switch.

The return traffic follows the same flow as described above, but in reverse order. The only difference is that a different filter is applied to port 10 to replace the 4001 tag with 2001.

Figure 5 - Network Packet Broker VLAN Tag Replacement

Lab

The use case selected for this verified design is based on a customer design. The customer's requirements were that the BIG-IPs must be deployed in vCMP mode and at Layer 2, which limits the BIG-IP deployment to VLAN groups. The design presented challenges, along with creative solutions to overcome them. The intention is not for the reader to replicate the design but to …

The focus of this lab is the L2 insertion point and the flow of traffic through the service chain. Pairs of switches were used to represent the north and south ends of each path: one pair for blue and one pair for green. One physical bypass switch was configured with two logical bypass switches, and one physical network packet broker simulated two network packet brokers.

Lab Equipment List

Appliance    Version

Figure 6 - Lab Diagram

Lab Configuration

- Arista network switches
- Ixia Bypass Switch
- Ixia Network Packet Broker
- F5 BIG-IP
- Test case

Arista Network Switches

Four Arista switches were used to generate the north-south traffic. One pair of switches represents Path 1 (blue), with a firewall to the north of the insertion and the core to the south. The second pair of switches represents Path 2 (green). A VLAN 101 and a VLAN interface 101 were created on each switch, and each VLAN interface was assigned an IP address in the 10.10.101.0/24 range.

Ixia iBypass Duo Configuration

Steps summary:
Step 1. Power Fail State
Step 2. Enable Ports
Step 3. Bypass Switch
Step 4. Heartbeat

The initial setup of the iBypass Duo switch is covered in the Ixia iBypass Duo User's Guide; please visit the Ixia website to download a copy.
This section covers the configuration of the bypass switch to forward traffic to the network packet broker (PB1); in the event that PB1 fails, to forward traffic to the secondary network packet broker (PB2); and, as a last resort, to fail open and permit traffic to flow, bypassing the service chain.

Step 1. In the event of a power failure, the bypass switch is configured to fail open and permit traffic to flow uninterrupted.

a. Click the CONFIGURATION menu bar (1) and select Chassis (2). Select Open (3) from the Power Fail State and click SAVE (4) on the menu bar.

Step 2. Enable Ports

a. Click the CONFIGURATION menu bar (1) and select Port (2).
b. Check the box (3) at the top of the column to select all ports and click Enable (4).
c. Click SAVE (5) on the menu bar.

Step 3. Configure Bypass Switches 1 and 2

a. Click Diagram (1) and click +Add Bypass Switch (2).
b. Select the Inline Network Links tab (1) and click Add Ports (2). From the pop-up window, select port A; the B-side port is selected automatically.
c. Select the Inline Tools tab (1) and click the + (2).
d. From the Edit Tool Connections window, on the A side (top), click Add Ports (1) and select port 1 from the pop-up window (2); repeat and select port 5. On the B side (bottom), click Add Ports and select port 2 (3); repeat and select port 6.

Note: The position of the ports is also the priority of the ports. In this example, ports 1 (A side) and 2 (B side) are the primary path.

e. Repeat steps a through d to create Bypass Switch 2 with Inline Network Links C and D, and Inline Tools ports 7,8 and 3,4 as the secondary.

Step 4. Heartbeat Configuration

a. From the Diagram view, click the Bypass Switch 1 menu square (1) and select Properties (2).
b. Click the Heartbeats tab (1), click Show (2), and populate the values (3). To edit a field, simply click it and type. Click OK and check the Enabled box (4).
c. Repeat steps a and b to create the heartbeats for the remaining interfaces. Ideally, heartbeats are configured to check both directions: from tool port 1 -> tool port 2 and from tool port 2 -> tool port 1. Repeat the steps to create the heartbeat for port 2, but reverse the MACs for SMAC and DMAC. Use a different set of MACs (e.g., 0050 c23c 6012 and 0050 c23c 6013) when configuring the heartbeats for tool ports 5 and 6.

This concludes the bypass switch configuration.

Network Packet Broker (NPB) Configuration

In this lab, the NPB is configured with three types of ports: Bypass, Inline Tool, and Network.

Steps summary:
Step 1. Configure Bypass Port Pairs
Step 2. Create Inline Tool Resource Ports
Step 3. Create Service Chains
Step 4. Link the Bypass Pairs with the Service Chains
Step 5. Create Dynamic Filters
Step 6. Apply the Filters

Step 1. Configure Bypass Port Pairs (BPP)

Bypass ports are ports that send and receive traffic from the network side. In this lab, they are connected to the bypass switches.

a. Click the INLINE menu (1) and click Add Bypass Port Pair (2).
b. In the Add Bypass Port Pair window, enter a name (ByPass 1 Primary). To select the Side A port, click the Select Port button (2) and, in the pop-up window, select a port (P01). Now select the Side B port (P02) (3) and click OK. Repeat these steps to create the remaining BPPs:
- ByPass 1 Secondary with P05 (Side A) and P06 (Side B)
- ByPass 2 Primary with P07 (Side A) and P08 (Side B)
- ByPass 2 Secondary with P03 (Side A) and P04 (Side B)

Step 2. Create Inline Tool Resource Ports

Inline Tool Resources (ITRs) are ports connected to tools, such as the BIG-IP. These ports are used in the service chain configuration to connect BPPs to ITRs.

a. Click the INLINE menu (1) and click Add Tool Resource (2).
b. Enter a name (BIG-IP 1) (1) and click the Inline Tool Ports tab (2).
c. To select the Side 1 port, click the Select Port button (1) and select a port (P09) from the pop-up window. Do the same for the Side 2 port (P17) (2). Provide an Inline Tool Name (BIG-IP 1) (3) and click Create Port Pair (4). Repeat these steps to create ITR BIG-IP 2 using ports P13 and P21.

Note: The Side B ports do not match the diagram, due to the VLAN replacement explained previously.

Step 3. Create Service Chains

A service chain connects BPPs to the inline tools. It controls how traffic flows from the BPPs to the tools in the chain through the use of dynamic filters.

a. Click the INLINE menu (1) and click Add Service Chain (2).
b. In the pop-up window, enter a name (Service Chain 1) (1) and check the box to Enable Tool Sharing (2). Click Add (3) and, in the pop-up window, select Bypass 1 Primary and Bypass 2 Secondary. Once added, the BPPs are displayed in the window. Select each VLAN Id field and replace the values with 2001 (4) and 2002 (5). Repeat these steps to create Service Chain 2, using BPPs Bypass 2 Primary and Bypass 1 Secondary with VLANs 2003 and 2004 respectively. Click the Inline Tool Resource tab (6) to add ITRs.
c. On the Inline Tool Resource tab, click Add and select the ITR (BIG-IP 1) from the pop-up window. Repeat these steps for Service Chain 2 and select BIG-IP 2.
d. The next step connects the network (BPPs) to the tools using the service chains. To connect the BPPs to the service chains, simply drag a line to link them. The lines in the red box are created manually; the lines in the blue box are created automatically to correlate with the links in the red box. This means traffic sent out BPP port A into the service chain is automatically returned to port B.

Step 4. Configure Filters

Filters are used to link ports and to filter and modify traffic.

a. Click the OBJECTS menu (1), select Dynamic Filters (2), click +Add (3), and select Dynamic Filters.
b. Enter a name (1).
c. On the Filter Criteria tab, select Pass by Criteria (1) and click VLAN (2). In the pop-up window, enter a VLAN ID (4001) and select Match Any (3).
d. On the Connections tab, click Add Ports (1) to add a network port; in the pop-up window, select a port (P10). Add a port for tools (P18) (2).
e. Skip the Access Control tab and select the VLAN Replacement tab. Check the Enable VLAN Replacement box and enter a VLAN ID (2001).

Repeat these steps to create the remaining filters using the table below.

Note: The filter names (Fx) do not need to match this table exactly.

This concludes the network packet broker configuration.

Filters

BIG-IP Configuration

This section describes how to configure a vCMP BIG-IP device to utilize VLAN groups. As a reminder, a VLAN group is a configuration element that allows the bridging of VLANs. In vCMP, the hypervisor is called the vCMP host, and the virtual machines running on the host are called guests. Lower-layer networking configuration on vCMP is done at the host level; the VLANs are then made available to the guests, and the VLAN bridging is configured at the guest level.
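For readers who prefer the command line, the webUI procedure below maps roughly to the following tmsh commands. This is a minimal sketch only: object names, interface numbers, and management addresses are illustrative (the tags match Figure 4), and the exact keywords, particularly for the double-tagging mode shown as "Double" in the webUI, vary by TMOS version, so verify against the tmsh reference for your release.

```
# On the vCMP host: create the two Q-in-Q VLANs that will be bridged.
# 'tag' is the outer 802.1Q tag; 'customer-tag' is the inner tag.
tmsh create net vlan vlan_north tag 4001 customer-tag 101 \
    interfaces add { 1.1 { tagged } }
tmsh create net vlan vlan_south tag 2001 customer-tag 101 \
    interfaces add { 1.2 { tagged } }

# On the vCMP host: create the guest, hand it both VLANs, and deploy it.
tmsh create vcmp guest guest1 hostname guest1.example.com \
    management-ip 192.0.2.10/24 management-gw 192.0.2.1 \
    vlans add { vlan_north vlan_south } state deployed

# On the vCMP guest: bridge the two VLANs with a VLAN group.
tmsh create net vlan-group vg_bridge members add { vlan_north vlan_south }
```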
In the setup described herein, the VLAN interfaces are tagged with two 802.1Q tags; Q-in-Q is used to provide inner and outer tagging. The following assumes that the BIG-IPs are up and running, and that they are upgraded, licensed, and provisioned for vCMP. It is also assumed that all physical connectivity is complete, following a design that identifies ports, VLAN tagging, and other Ethernet media choices. Prior to proceeding, you will need the following information for each BIG-IP that will be configured:

Configuration overview:

1. [vCMP host] Create the VLANs that will be bridged.
2. [vCMP host] Create the vCMP guest:
   a. Configure – define the software version, the size of the VM, the associated VLANs, etc.
   b. Provision – create the BIG-IP virtual machine, or guest.
   c. Deploy – start the BIG-IP guest.
3. [vCMP guest] Bridge the VLANs with a VLAN group.

Create the VLANs that will be bridged:

- Log in to the vCMP host interface.
- Go to Network >> VLANs >> VLAN List.
- Select "Create".
- In the VLAN configuration panel:
  - Provide a name for the object.
  - Enter the Tag (this corresponds to the "outer" tag).
  - Select "Specify" in the Customer Tag dropdown.
  - Enter a value for the Customer Tag, between 1 and 4094 (this is the "inner" tag).
  - Select an interface to associate the VLAN with.
  - Select "Tagged" in the "Tagging" dropdown.
  - Select "Double" in the "Tag Mode" dropdown.
  - Click the "Add" button in the Resources box.
  - Select "Finished", as shown in the figure below.

Repeat the steps above to create the second VLAN that will be added to the VLAN group. Once the above steps are completed, the VLAN webUI should look like:

Create the vCMP guest:

- Log in to the vCMP host interface.
- Go to vCMP >> Guest List.
- Select "Create…" (upper right-hand corner).
- Populate the following fields:
  - Name
  - Host Name
  - Management Port
  - IP Address
  - Network Mask
  - Management Route
  - VLAN List – ensure that the VLANs that need to be bridged are in the "Selected" pane.
- Set the "Requested State" to "Deployed" (this will create a virtual BIG-IP).
- Click "Finish" – the window should look like the following:

Clicking "Finish" will configure, provision, and deploy the BIG-IP guest.

Bridge the VLANs with a VLAN group:

- Log in to the vCMP guest interface.
- Go to Network >> VLANs >> VLAN Groups.
- Select "Create".
- In the configuration window, as shown below:
  - Enter a unique name for the VLAN group object.
  - Select the VLANs that need to be bridged.
  - Keep the default configuration for the other settings.
  - Select "Finished".

Once created, traffic should be able to traverse the BIG-IP. This concludes the BIG-IP configuration.

The Top Ten Hardcore F5 Security Features in BIG-IP 11.5.0
…that went unsung at #RSAC 2014.

There's lots of new security stuff in BIG-IP that shouldn't be overlooked amidst all the press releases and hoopla at #RSAC 2014. Don't get me wrong, hoopla has its place: for example, the banking community is excited about the new anti-fraud thing we bought. And Pete Silva's video interview of Joel Moses about the new Secure Web Gateway forward proxy is great. But the features I'm talking about are too low-level to warrant a press release, interview, or media dinner. In a way they're even more important, because platform-level security features are often the basis for the higher-level software-defined application services that reside upon them. Just before the RSA 2014 conference, we upgraded the BIG-IP platform to version 11.5.0. The upgrade has hundreds of new features and bug fixes, but the following security features are particularly cool.

The Top 10 Hardcore F5 Security Features in BIG-IP 11.5.0

1. UDP flood protection in AFM – The new UDP flood protection in the Advanced Firewall Manager (AFM) module automatically detects and mitigates UDP floods. It even categorizes incoming UDP packets so that you don't end up rate-limiting legitimate DNS requests.

2. Full ECC and DSA support for client SSL profiles – on the same virtual server as RSA profiles! A single client SSL profile can now have up to three certificate/key pairs associated with it to support the full range of cipher suites now available. This is huge; people have been asking for it since before germs.

3. Heavy URL DDoS protection in ASM – Smart attackers may attempt to slow a website by repeatedly requesting heavy URLs, such as large media objects or slow database queries. The new Heavy URL DDoS feature of ASM identifies your vulnerable URLs and then defends them.

4. AES-GCM mode for TLS 1.2 – The crypto community has been waiting for GCM to become prevalent enough to start switching away from simple block and streaming ciphers. This is a big step toward enabling the whole world to be ready for GCM, and what we hope is a future reduction in TLS protocol weaknesses.

5. Improved whitelist and blacklist support in AFM – IP addresses that are blacklisted or whitelisted can be assigned to pre-existing or user-defined blacklist classes (called categories in tmsh), and firewall actions can be applied based on those categories. AFM can be configured to query dynamic lists of blacklist or whitelist addresses, called feeds, and update the configuration accordingly.

6. SafeNet Luna SA HSM integration – For the last few years we've been getting requests to integrate with networked hardware security modules (HSMs). We've been supporting nCipher (née Thales) HSMs, and now with 11.5.0 we're announcing our integration with Thales (née SafeNet). Hook your virtual BIG-IPs up to this and you have a pretty compelling security story.

F5 HSM Feature Comparison, 11.5.0:

```
Feature                   BIG-IP FIPS   nCipher (née Thales)   Thales (née SafeNet)
VIPRION                   ✔             ✔                      ✔
vCMP                                    ✔                      ✔
GTM/DNSSEC                ✔             ✔
PKCS#11                   N/A           ✔                      ✔
Virtual Edition                         ✔                      ✔
AWS CloudHSM                                                   ✔
FIPS 140-2 Level 2        ✔             ✔                      ✔
FIPS 140-2 Level 3                      ✔                      ✔
Perfect Forward Secrecy   ✔             ✔                      ✔
EAL4+                                   ✔                      ✔
Performance               9000 TPS      3000 TPS               1500 TPS
```

7. 45 hardware-level DDoS protections in AFM – The firewall team has added and refactored the network DDoS code to make the hardware vectors exactly match the software vectors. See the complete list of pathological packets that will be dropped before the CPU even sees them.

8. Full PKCS#12 support for key import –
The paranoid among us point to the Edward Snowden files and say they've never had more reason to be paranoid. For them, we're making it possible to import SSL keys directly to BIG-IP without them ever being available in the clear.

9. Appliance mode for vCMP guests – Appliance mode disables the root account and prevents access to the bash system shell. Appliance mode can now be configured on a guest-by-guest basis in multi-tenant environments where a particular guest virtual instance may be less trusted than others.

10. BER-encoding iRule commands – When I was a lazy software developer, one of my goals was to get through life without ever having to write an ASN.1 decoder. Guess what, someone has done just that for iRules! Check out the BER/DER iRule command reference. Honestly, this is kind of amazing.

These were just the top 10 – there are a ton more features in 11.5.0 (release notes). You can play with them all in your cloud with the virtual edition of BIG-IP – download it here and have fun!

F5 BIG-IP Platform Security
When creating any security-enabled network device, development teams must fully investigate the security of the device itself to ensure it cannot be compromised. A gate provides no security to a house if the gap between the bars is large enough to drive a truck through. Many highly effective exploits have breached the very software and hardware designed to protect against them. If attackers can breach the guards, then they don't need to worry about being stealthy: if they can compromise the box, then they can probably compromise the code.

F5 BIG-IP Application Delivery Controllers are positioned at strategic points of control to manage an organization's critical information flow. In the BIG-IP product family and the TMOS operating system, F5 has built and maintained a secure and robust application delivery platform, and has implemented many different checks and counter-checks to ensure a totally secure network environment. Application delivery security includes providing protection for the customer's Application Delivery Network (ADN), along with mandatory and routine checks of the stack source code to provide internal security, and it starts with a secure Application Delivery Controller.

The BIG-IP system and TMOS are designed so that the hardware and software work together to provide the highest level of security. While there are many factors in a truly secure system, two of the most important are design and coding. Sound security starts early in the product development process. Before writing a single line of code, F5 Product Development goes through a process called threat modeling: engineers evaluate each new feature to determine what vulnerabilities it might create or introduce to the system. F5's rule of thumb is that a vulnerability that takes one hour to fix in the design phase will take ten hours to fix in the coding phase and one thousand hours to fix after the product has shipped, so it is critical to catch vulnerabilities during the design phase. The sum of all these vulnerabilities is called the threat surface, which F5 strives to minimize. F5, like many companies that develop software, has invested heavily in training its internal development staff to write secure code. Security testing is time-consuming and a huge undertaking, but it's a critical part of meeting F5's stringent standards and its commitment to customers.

While by no means an exhaustive list, the BIG-IP system has a number of features that provide heightened and hardened security: Appliance mode, iApp templates, FIPS, and Secure Vault.

Appliance Mode

Beginning with version 10.2.1-HF3, the BIG-IP system can run in Appliance mode. Appliance mode is designed to meet the needs of customers in industries with especially sensitive data, such as healthcare and financial services, by limiting BIG-IP system administrative access to match that of a typical network appliance rather than a multi-user UNIX device. The optional Appliance mode "hardens" BIG-IP devices by removing the advanced shell (Bash) and root-level access. Administrative access is available through the TMSH (TMOS Shell) command-line interface and the GUI. When Appliance mode is licensed, any user that previously had access to the Bash shell will now only have access to TMSH. The root account's home directory (/root) file permissions have been tightened for numerous files and directories. By default, new files are now only user-readable and user-writable, and all directories are better secured.
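As an illustration of what "only user readable and writeable" means in POSIX terms (this is a conceptual sketch of the permission model, not the literal mechanism Appliance mode uses):

```
# A restrictive umask causes newly created files to default to mode 600
# (read/write for the owner only) and new directories to mode 700.
umask 077
touch /root/example.conf
ls -l /root/example.conf
# -rw------- 1 root root 0 Jan  1 00:00 /root/example.conf
```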
iApp Templates

Introduced in BIG-IP v11, F5 iApps is a powerful new set of features in the BIG-IP system. It provides a new way to architect application delivery in the data center, and it includes a holistic, application-centric view of how applications are managed and delivered inside, outside, and beyond the data center. iApps provide a framework that application, security, network, systems, and operations personnel can use to unify, simplify, and control the entire ADN with a contextual view and advanced statistics about the application services that support the business. iApps are designed to abstract the many individual components required to deliver an application by grouping these resources together in templates associated with applications; this alleviates the need for administrators to manage discrete components on the network. F5's new NIST 800-53 iApp template helps organizations become NIST-compliant: F5 has distilled the 240-plus pages of guidance from NIST into a template with the relevant BIG-IP configuration settings, saving organizations hours of management time and resources.

Federal Information Processing Standards (FIPS)

Developed by the National Institute of Standards and Technology (NIST), Federal Information Processing Standards are used by United States government agencies and government contractors in non-military computer systems. The FIPS 140 series comprises U.S. government computer security standards that define requirements for cryptographic modules, including both hardware and software components, for use by departments and agencies of the United States federal government. The requirements cover not only the cryptographic modules themselves but also their documentation. As of December 2006, the current version of the standard is FIPS 140-2.

A hardware security module (HSM) is a secure physical device designed to generate, store, and protect digital, high-value cryptographic keys. It is a secure crypto-processor that often comes in the form of a plug-in card (or other hardware) with tamper protection built in. HSMs also provide the infrastructure for finance, government, healthcare, and other industries to conform to industry-specific regulatory standards. FIPS 140 enforces stronger cryptographic algorithms, provides good physical security, and requires power-on self-tests to ensure a device is still in compliance before operating. FIPS 140-2 evaluation is required to sell products implementing cryptography to the federal government, and the financial industry increasingly specifies FIPS 140-2 as a procurement requirement.

The BIG-IP system includes a FIPS cryptographic/SSL accelerator: an HSM option specifically designed for processing SSL traffic in environments that require FIPS 140-1 Level 2-compliant solutions. Many BIG-IP devices are FIPS 140-2 Level 2-compliant. This security rating indicates that once sensitive data is imported into the HSM, it incorporates cryptographic techniques to ensure the data is not extractable in plain-text format, and it provides tamper-evident coatings or seals to deter physical tampering. The BIG-IP system includes the option to install a FIPS HSM (BIG-IP 6900, 8900, 11000, and 11050 devices). BIG-IP devices can be customized to include an integrated FIPS 140-2 Level 2-certified SSL accelerator.
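On FIPS-equipped platforms, keys can be generated directly inside the HSM so that they never exist in plain text outside it. A hedged tmsh sketch (the key name is illustrative, and the available options vary by TMOS version):

```
# Generate a 2048-bit private key inside the FIPS HSM rather than on disk.
tmsh create sys crypto key www.example.com.key key-size 2048 security-type fips
```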
Other solutions require a separate system or a FIPS-certified card for each web server, but the BIG-IP system's unique key management framework enables a highly scalable, secure infrastructure that can handle higher traffic levels and to which organizations can easily add new services. Additionally, the FIPS cryptographic/SSL accelerator uses smart cards to authenticate administrators, grant access rights, and share administrative responsibilities, providing a flexible and secure means of enforcing key management security.

Secure Vault

It is generally a good idea to protect SSL private keys with passphrases. With a passphrase, private key files are stored encrypted on non-volatile storage; if an attacker obtains an encrypted private key file, it will be useless without the passphrase. In PKI (public key infrastructure), the public key enables a client to validate the integrity of something signed with the private key, and hashing enables the client to validate that the content was not tampered with. Since the private key of the public/private key pair could be used to impersonate a valid signer, it is critical to keep those keys secure.

Secure Vault, a super-secure SSL-encrypted storage system introduced in BIG-IP version 9.4.5, allows passphrases to be stored in encrypted form on the file system. In BIG-IP version 11, companies now have the option of securing their cryptographic keys in hardware, such as a FIPS card, rather than encrypted on the BIG-IP hard drive. Secure Vault can also encrypt certificate passwords for enhanced certificate and key protection in environments where FIPS 140-2 hardware support is not required but additional physical and role-based protection is preferred. In the absence of hardware support like FIPS/SEEPROM (Serial (PC) Electrically Erasable Programmable Read-Only Memory), Secure Vault is implemented in software. Even if an attacker removed the hard disk from the system and painstakingly searched it, it would be nearly impossible to recover the contents due to Secure Vault's AES encryption.

Each BIG-IP device comes with a unit key and a master key. Upon first boot, the BIG-IP system automatically creates a master key for the purpose of encrypting, and therefore protecting, key passphrases. The master key encrypts SSL private keys, decrypts SSL key files, and synchronizes certificates between BIG-IP devices. Further increasing security, the master key is itself encrypted by the unit key, which is an AES-256 symmetric key. When stored on the system, the master key is always encrypted with a hardware key, never in the form of plain text. Master keys follow the configuration in an HA (high-availability) deployment, so all units share the same master key while each retains its own unit key. As of BIG-IP v11, the master key is synchronized using the secure channel established by the CMI infrastructure. Passphrases encrypted with a master key cannot be used on systems other than the units for which the master key was generated.

Secure Vault support has also been extended to vCMP guests. vCMP (Virtual Clustered Multiprocessing) enables multiple instances of BIG-IP software to run on one device. Each guest gets its own unit key and master key. The guest unit key is generated and stored at the host, thus enforcing the hardware support, and it is protected by the host master key, which is in turn protected by the host unit key in hardware.

Finally, F5 provides Application Delivery Network security to protect the most valuable application assets.
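Administrators interact with the master key through tmsh; for example, it can be set or re-keyed from a passphrase. A minimal sketch (confirm the exact syntax and behavior for your TMOS version):

```
# Set (or re-key) the master key from an administrator-supplied passphrase.
# Passphrases already encrypted under the old master key are re-encrypted.
tmsh modify sys crypto master-key prompt-for-password
```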
To provide organizations with reliable and secure access to corporate applications, F5 must carry the secure application paradigm all the way down to the core elements of the BIG-IP system. It's not enough to provide security for application transport; the transporting appliance must also provide a secure environment. F5 ensures BIG-IP device security through various features and a rigorous development process, a comprehensive process designed to keep customers' applications and data secure. The BIG-IP system can be run in Appliance mode to lock down configuration within the code itself, limiting access to certain shell functions; Secure Vault secures precious keys from tampering; and optional FIPS cards ensure organizations can meet or exceed particular security requirements. An ADN is only as secure as its weakest link; F5 ensures that BIG-IP Application Delivery Controllers are an extremely secure link in the ADN chain.

ps

Resources:
- F5 Security Solutions
- Security is our Job (Video)
- F5 BIG-IP Platform Security (Whitepaper)
- Security, not HSMs, in Droves
- Sometimes It Is About the Hardware
- Investing in security versus facing the consequences | Bloor Research White Paper
- Securing Your Enterprise Applications with the BIG-IP (Whitepaper)
- TMOS Secure Development and Implementation (Whitepaper)
- BIG-IP Hardware Updates – SlideShare Presentation
- Audio White Paper - Application Delivery Hardware A Critical Component
- F5 Introduces High-Performance Platforms to Help Organizations Optimize Application Delivery and Reduce Costs

F5 Friday: What's Inside an F5?
Is it Linux? Is it third-party? Is it proprietary? Isn't #vcmp just a #virtualization platform? Just what is inside an F5 BIG-IP that makes it go vroom?

Over the years I've seen some pretty wild claims about what, exactly, is "inside" a BIG-IP that makes it go. I've read articles that claim it's Linux, that it's based on Linux, that it's voodoo magic. I've heard competitors make up information about just about every F5 technology – TMOS, vCMP, iRules – that enables a BIG-IP to do what it does. There are two sources of the confusion with respect to what's really inside an F5 BIG-IP.

The first stems, I think, from the evolution of the BIG-IP. Once upon a time, BIG-IP was a true appliance: a pure software solution delivered pre-deployed on pretty standard hardware. But it's been many, many years since that was true, since before v9 was introduced back in 2004. BIG-IP version 9 was the beginning of BIG-IP as not a true appliance but a purpose-built networking device. Appliances deployed on off-the-shelf hardware generally leverage existing operating systems to manage operating system and even networking tasks (CPU scheduling, I/O, switching, etc.), but BIG-IP does not, because with version 9 the internal architecture of BIG-IP was redesigned from the ground up to include a variety of not-so-off-the-shelf components. Switch backplanes aren't commonly found in your white-box x86 server, after all, and common operating systems don't handle a bladed chassis. TMOS, the core of the BIG-IP system, is custom-built from the ground up. It had to be, to support the variety of hardware components included in the system: the FPGAs, the ASICs, the acceleration cards, the switching backplane. It had to be custom-built to enable BIG-IP to scale itself non-disruptively when it became available on a chassis-based hardware platform, and to allow advances in internal architecture, such as the virtualization of its compute and network resources a la vCMP, to come to fruition.

The second source of confusion with respect to the internal architecture of BIG-IP comes from the separation of operational and traffic management responsibilities. Operational management (administration, configuration, CLI and GUI) resides in its own internal container using off-the-shelf components and software. It's a box in a box, if you will. It doesn't make sense for us, or any vendor really, to recreate the environment necessary to support a web-based GUI or network access (SSH, etc.) for management purposes. That side of BIG-IP starts with a standard Linux core operating system, tweaked and modified as necessary to support things like TMSH (the TMOS Shell). That's all it does: monitoring and management. It generates pretty charts and collects statistics. It is the interface to the configuration of the BIG-IP. It's lights-out management. This "side" of BIG-IP has nothing to do with the actual flow of traffic through a BIG-IP aside from configuration. At run time, when traffic flows through a BIG-IP, it's all going through TMOS, the purpose-built and very custom system designed specifically to support application delivery services.

This very purposeful design and development of technology is too often mischaracterized, intentionally or unintentionally, as third-party or as just a modified existing kernel or virtualization platform. That's troubling, because it hampers the understanding of just what such technologies do and why they're so good at doing it.
Take vCMP, which has sometimes been maligned as little more than third-party virtualization. That's somewhat amusing, because vCMP isn't really virtualization in the sense we think about virtualization today. vCMP is designed to allow the resources for a guest instance to span one or multiple blades. It's an extension of multi-processing concepts as applied to virtual machines. If we analogized the technology to server virtualization, vCMP would be the ability to assign compute and network resources from server A to a virtual machine running on server B. Cloud computing providers cannot do this (today), and it's not something associated with today's cloud computing models; only grid computing comes close, and it still takes a workload-distributed view rather than a resource-distributed view.

vCMP stands for virtual CMP: clustered multi-processing. CMP was the foundational technology, introduced in BIG-IP version 9.4, that allowed TMOS to take advantage of multiple multi-core processors by instantiating one TMM (Traffic Management Microkernel) per core and then aggregating them, regardless of physical location on the BIG-IP, to appear as a single pool of resources. This allowed BIG-IP to scale much more effectively. Basically, we applied many of the same high-availability and load distribution techniques we use to ensure applications are fast and available to our own internal architecture. This allowed us to scale across blades, and it is the reason adding (or removing) blades in a VIPRION is non-disruptive. Then along came demand for multi-tenancy, resulting in virtual CMP. vCMP isn't the virtual machine; it's the technology that manages and provisions BIG-IP hardware resources across multiple instances of BIG-IP virtual machines, the vCMP guests, as we generally call them. What we do under the covers is more akin to an application (a vCMP guest) being comprised of multiple virtual machines (cores), with load balancing providing the mechanism by which resources are assigned (vCMP), than it is to simple virtualization.

So now you know a lot more about what's inside a BIG-IP and why we're able to do things with applications and traffic that no one else in the industry can: because we aren't relying on "standard" virtualization or operating systems. We purposefully design and develop the internal technology specifically for the task at hand, with an eye toward how best to provide a platform on which we can continue to develop technologies that are more efficient and adaptable.

F5 Monday? The Evolution To IT as a Service Continues … in the Network
#v11 #vcmp #scaleN #iApp It's time to bring the benefits of server virtualization, rapid provisioning, and efficient, flexible scalability models to the network.

Many of you know I'm a developer by trade and gained my networking stripes after joining Network Computing Magazine around the turn of the century. I focused heavily on application-centric solutions (sometimes much to my chagrin; consider evaluating ERP solutions for a moment and I'm sure you'll understand why), but I was also tasked with reviewing networking solutions. In particular, the realm of load balancing and application delivery fell squarely to me for most of my time with the magazine. Thus I've been following F5 for a lot longer than I've been on the dark-side team, and I have seen its solutions evolve from the most basic of network appliances (remember when SSL accelerators and load balancers were stand-alone?) to the highly complex and advanced application delivery service solutions offered today.

The last time F5 released a version this significant was v9, when it moved from a simple proxy-based appliance to a hardware-enabled, full-proxy architecture. It moved from being a solution to being a platform upon which we could build out future solutions more easily, and it empowered customers by providing both a service-enabled control plane, iControl, and a programmable framework, iRules, for better managing the diverse traffic needs of our very disparate vertical-industry customer base.

While version 10 was not a trivial release by any stretch of the imagination, it pales in comparison with the fundamental architectural changes and enhancements in v11. It isn't just about new features and functionality, though these are certainly included; it's about architecture (internal to BIG-IP and for data centers) and automation (internal to BIG-IP and for data centers). It's about breaking the traditional high-availability (scalability) paradigm by taking advantage of virtualization inside and out. It's about enabling a service-oriented view of policies and infrastructure that better fits within the cloud computing demesne. It's about laying the foundation of an optimized data center architecture based on strategic points of control throughout the network, points that enable increased efficiency and reliability while simultaneously bringing the benefits associated with a service-focused architecture (reusability, repeatability, and consistency) to the network.

I can't, without writing more than you really want to read on a Monday, describe everything that makes v11 such a significant release. With that in mind, let me just touch on what I think are the most game-changing pieces of the newest version of BIG-IP, and why.

iApp

iApp is evolutionary for F5, but likely the most revolutionary piece for the industry in general. iApp is a framework that moves the focus of BIG-IP configuration from network-oriented objects to application-centric views. But that's just the surface; underneath it is a framework, and that implies programmability and extensibility, and that's exactly what it offers.
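To make the framework idea concrete: an iApp template couples a Tcl "implementation" section, which builds configuration with tmsh:: commands, with an APL "presentation" section, which asks the administrator questions. The following is a greatly simplified, hypothetical sketch of that structure, not a production template; names and variables are illustrative, and the full APL syntax is covered in the iApps developer documentation:

```
sys application template simple_http_app {
    actions {
        definition {
            presentation {
                # APL: questions presented to the administrator
                section main {
                    string vs_address required
                    string member_address required
                }
                text {
                    main "Basic configuration"
                    main.vs_address "Virtual server address"
                    main.member_address "Pool member address"
                }
            }
            implementation {
                # Tcl: build the configuration from the answers above
                set app [tmsh::app_name]
                tmsh::create ltm pool ${app}_pool \
                    members add { $::main__member_address:80 }
                tmsh::create ltm virtual ${app}_vs \
                    destination $::main__vs_address:80 \
                    pool ${app}_pool
            }
        }
    }
}
```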
You might recall that for many years F5 has offered Application Ready Solutions: deployment guides developed by our engineers, often in concert with application partners like Microsoft, Oracle, SAP, and VMware, that detailed the optimal configuration of BIG-IP (including all relevant modules) for a specific application or solution – SharePoint, Oracle Database, VMware View, etc. These guides were step-by-step instructions administrators could follow to optimally configure BIG-IP. This certainly reduced the application deployment lifecycle, but it didn't address related challenges such as misconfiguration. As cloud computing and virtualization came along, it also failed to address the need to create consistent deployment configurations that were easily repeatable through automation. iApp not only addresses this challenge, it goes above and beyond by providing the framework through which F5 partners and customers can easily create their own iApp deployment packages and share them. So not only do you get repeatable application deployment architectures that can be shared across BIG-IP instances, you can also share them with others, or benefit from others' expertise. iApp is to infrastructure deployment what an EAR or an assembly is to application deployment: a mobile package of BIG-IP configuration. It's application-centric management from a pool of reusable application services.

Scale N

Traditional infrastructure high-availability models rely on an active-standby (or in some cases active-active) configuration: two (or more) devices, with one designated primary and the others secondary. Scale N breaks this paradigm and asks: why should we be so restricted? Cloud computing and virtualization benefits are premised on the ability to scale out and up on demand; why shouldn't infrastructure follow the same model? After all, the traditional paradigm requires fail-over by device, or instance: if you need to fail over, or simply move, one application, the entire device must be interrupted. With v11, F5 is introducing the concept of Device Service Clusters, which allow the targeted fail-over of application instances and achieve active-active-active-N. Basically, you can now scale out across a pool of CPUs or devices regardless of BIG-IP form factor or deployment platform, and manage them as one. This is possible by leveraging a previously introduced feature called vCMP. A configuration sketch follows the refresher below.

vCMP

I'll include a short refresher on vCMP because it's inherently important to the overall capabilities of Scale N and to conveying why iApp combined with Scale N is so significant. If you'll recall, vCMP stands for virtual Clustered Multi-Processing and is built on our previous CMP (Clustered Multi-Processing) capabilities. vCMP makes it possible to deploy individual BIG-IP "guest" instances that provide fault isolation, version independence, and on-demand hardware-layer scalability. This, combined with some secret sauce, is what makes Scale N possible, and it is ultimately part of what makes v11 so significant a departure from traditional infrastructure deployment and management models.
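As promised above, here is a taste of what a Device Service Cluster looks like operationally: it is expressed in configuration as a device group. A minimal tmsh sketch (device names are illustrative, and device trust must already be established between the members):

```
# Establish a sync-failover device group spanning two (or more) BIG-IPs,
# with automatic configuration synchronization between members.
tmsh create cm device-group dsc_group \
    devices add { bigip1.example.com bigip2.example.com } \
    type sync-failover auto-sync enabled
```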
Now, combine all three, add the ability to synchronize service configurations across instances via an automated policy synchronization capability (also new in v11), and you have capabilities in the application delivery infrastructure with benefits and abilities similar to those previously found only in the server/application virtualization infrastructure: automated, repeatable, manageable, scalable infrastructure services. It takes us several giant steps toward the realization of stateful failover, a goal that has long eluded traditional infrastructure-based scalability architectures.

v11 also includes a wealth of security-related features and enhancements, as well as global application delivery service support. It is, simply put, too big to contain in a single post, or even several. You'll want to explore the new options and new capabilities and consider how they fit into a strategic view of your data center architecture as you continue the path toward IT as a Service. This version supports that transformation with a very service- and application-centric view of infrastructure and application delivery services, and it enables even more collaboration between components and customers, collaboration that is necessary to automate, and ultimately liberate, the data center of the future with a more agile, dynamic architecture. So take some time to explore, ask questions, and find out more about v11. We are confident you'll find that the ability to move more fluidly toward an agile infrastructure, toward IT as a Service, will be served well by the strategic trifecta of iApp, Scale N, and vCMP.

v11 Resources
- iApp Wiki here on DevCentral
- F5 Delivers on Dynamic Data Center Vision with New Application Control Plane Architecture
- F5 iApp: Moving Application Delivery Beyond the Network
- Introducing v11: The Next Generation of Infrastructure
- BIG-IP v11 Information Page
- F5 Friday: The Gap That become a Chasm
- IT as a Service: A Stateless Infrastructure Architecture Model
- If a Network Can't Go Virtual Then Virtual Must Come to the Network
- You Can't Have IT as a Service Until IT Has Infrastructure as a Service
- Meet the Challenge of Consumerization by Managing Applications Instead of Clients
- This is Why We Can't Have Nice Things
- Intercloud: Are You Moving Applications or Architectures?
- The Consumerization of IT: The OpsStore
- What CIOs Can Learn from the Spartans

If a Network Can't Go Virtual Then Virtual Must Come to the Network
#vcmp #interop Whether it's a need to support cloud computing or to manage the myriad requirements from internal customers, the new network must go beyond multi-tenancy.

There has been a plethora of content lately discussing the need for virtual network appliances. It's only natural, after all, that once we managed to work out all the quirks and flaws of server and storage virtualization, we'd move on to the next layer of the data center: the network. What's being discovered as enterprises build out their own cloud computing or IT-as-a-Service environments is that multi-tenancy doesn't always go far enough to meet the needs of their various internal constituents. It's not just a matter of role-based administrative access or isolating configuration from one department to another; it goes deeper than that, to maintenance windows, fault isolation, and even the need for different versions of the same solution. What's necessary to address these diverse needs in a non-disruptive way is a virtualized approach. But just as valid are the arguments against moving some network-oriented solutions to a virtual form factor, as pointed out by Mike Brandenburg:

"For all of the compute power that a virtual environment can bring to bear on a workload, there are still many tasks that favor dedicated hardware. Network processes and tasks like SSL offload and network forensics have deferred pre-processing tasks, such as processing gigabytes of network packets, to discrete chipsets built into the hardware appliances. These chipsets take the burden off of the appliance's general CPU. Dedicated hardware remains first choice for these specific network tasks." -- Replacing hardware-based network appliances with virtual appliances, 28 April 2011

The reason analysts and the industry at large are embracing a virtualized network is to achieve the fault isolation, flexible provisioning, and improved efficiency of network components that lead to better returns on capital investments. But while some may point to opposing views, arguing that hardware with its multi-tenant capabilities isn't enough to meet the challenges of modern data center architectures, and conclude that the only viable solution is to take the network to a virtual form factor, counting on improvements in x86 processing power to counter the loss of performance, there is an alternative.

MULTI-TENANCY vs VIRTUALIZATION

The new network, the one capable of supporting diversity and volatility in a way that enables both applications and the business to scale efficiently, may in fact need to be virtualized. Multi-tenancy has thus far been considered the solution for addressing the diverse needs of applications, departments, organizations, and customers. But even though many network hardware solutions have gone "multi-tenant", there remain very real operational and business requirements that are not met by such an approach. This is due to the way multi-tenancy is implemented internally to a solution, as opposed to an internally virtualized network device. If we look at the way resource allocation and operating systems are partitioned in each model, we find that the results are very different. These differences lead to different benefits and different limitations. For example, because a multi-tenant model shares a single operating system, it requires less memory than a virtualized system with multiple operating system instances.
Another benefit of sharing an operating system is that the underlying resources are also shared, so partitions can expand until all resources on a system are utilized. But the downside to that flexibility is the inability to run different versions of the solution. A network device tightly couples its operating system and software together as a means to improve performance and ensure reliability, but that means every instance in a multi-tenant system must run the same operating system and solution version. This can be a drawback in situations where certain features or functions of a previous or newer version are required to meet business requirements or to solve a particular technical challenge.

Multi-tenant systems also offer a lesser degree of fault isolation than a virtualized system, because of the shared operating system and the tight coupling with the device software. An issue that causes the device to reset, reboot, or restart in a multi-tenant system must necessarily disrupt every instance on the device. Lastly, a multi-tenant solution with an expanding (flexible) resource allocation model, while appropriate and helpful in certain situations where unanticipated traffic may be encountered, can negatively impact performance: spikes in processing from one instance will impact all other instances due to the increased consumption of resources. This is detrimental to maintaining reliable and predictable performance, a necessity for the real-time traffic processing required by many vertical industries.

VIRTUALIZATION is a SOLUTION

Virtualization, however, addresses these limitations, and thus the concept of architectural multi-tenancy was put forth as an early solution. Architectural multi-tenancy combines the multi-tenant capabilities of existing network-based solutions with virtualized network-based solutions as an architectural remedy. This is still a completely valid means of achieving the required fault isolation and reliability of performance. But what we need to move forward, to continue evolving infrastructure to meet the rapidly changing needs and requirements of an increasingly dynamic data center, is a network-based solution that addresses these same concerns without compromising the benefits of tightly coupled hardware and software solutions, namely predictable performance and enterprise-class throughput; a solution that addresses the very real possibility of network sprawl, that unsustainable model of growth that has traditionally been addressed through consolidation. If a network can't go virtual, then virtual must come to the network…

- Replacing hardware-based network appliances with virtual appliances
- Creating a Hybrid ADN Architecture with both Virtual and Physical ADCs
- Multi-Tenancy Requires More Than Just Isolating Customers
- Data Center Feng Shui: Fault Tolerance and Fault Isolation
- Architectural Multi-tenancy
- The Question Shouldn't Be Where are the Network Virtual Appliances but Where is the Architecture?
- I CAN HAS DEFINISHUN of SoftADC and vADC?
- The Devil is in the Details
- Data Center Feng Shui: Architecting for Predictable Performance

Architecturally, Is There Such A Thing As Too Scalable?
We’ve all had that chilling moment when the gate attendant at the airport comes over the loudspeaker and, doing her best Charlie Brown’s Teacher imitation, announces “Jursim Puzzling vlordid Netting, gollink dummole Neptune.” (This flight is in an oversold situation, we’re looking for volunteers…). While we could discuss the causes of and solutions to this being an all-too-frequent event in the daily operation of airlines, for the purposes of this blog, let’s talk about the back end. The problem on the back end is, quite simply, that the plane cannot be expanded to handle the burden demanded of it. That makes perfect sense in an airplane; I for one would be slow to get onto an airplane that could be expanded like a pop-up camper. But it makes no sense whatsoever in an IT infrastructure. While a particular application might never need to expand, the overall architecture of the data center must meet shifting demands on a minute-by-minute basis and must be prepared to offer more power to an application that is currently overburdened.

There are many parts to making your architecture that dynamic. On the application side, developers (be they internal or external) need to have thought through the issues that scaling brings up. You will likely need a virtualization engine – you can do it without one, but that means leaving a lot of servers sitting idle all of the time, and in the 21st century we don’t tend to do a lot of that. You’ll also need a dynamic infrastructure. It does you no good to scale up a server unless the network, security, and optimization tools available can not only handle the additional load but adapt to the presence of a new server popping up or an existing one going away. In short, you want the functionality of a hardware ADC with an adaptability to match the VM capabilities of your organization. And as time goes on you will want the same functionality to extend to the cloud, because your ADC brings your data center policies to VMs running on your preferred cloud vendor.

But that’s the problem with a single-faceted deployment model where architecture is concerned. In the old world you wanted hardware ADCs to offload things like encryption and compression, so your servers could be servers and not bulk-processing engines. In the virtual world, some would tell you that you want virtual ADCs to maintain a level of adaptability that matches your virtualization environment. The problem is where to put all of these virtual machines. When you virtually offload encryption or compression to a VM on the same machine, you’re offloading nothing; you’re just shifting which VM makes the request of your hardware, and the burden is still there. Much like an airplane, it just doesn’t expand very well unless you dedicate hardware to the ADC, making it less of a bargain in terms of architecture and cost savings. We have talked in the past about the hybrid model, but this is where it shines. By maintaining a central, physical ADC at the WAN strategic point of control (that point between the world and your servers), you can then give a separate virtualized ADC to your virtualized environment – or a dozen virtualized ADCs – with computationally expensive operations like encryption and compression turned off, and route their traffic through the physical ADC for that processing. Your applications get the benefits of an ADC – from load balancing to security – from the virtual ADC, and the benefits of offloading the heavy-lifting items to the physical ADC.
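As a rough illustration of that division of labor, the sketch below models the placement decision. The function names and tier labels are invented for the example; this is a conceptual sketch of the hybrid pattern, not any product’s configuration.

```python
# Conceptual sketch of the hybrid ADC split: lightweight per-application
# work stays on virtual ADC instances near the workloads, while
# compute-heavy operations are steered to the shared physical ADC at the
# WAN strategic point of control. All names here are illustrative.

CPU_HEAVY = {"ssl_termination", "compression"}
LIGHTWEIGHT = {"load_balancing", "http_routing", "health_monitoring"}

def place(function: str) -> str:
    """Return the tier that should perform a given delivery function."""
    if function in CPU_HEAVY:
        return "physical ADC (hardware offload)"
    if function in LIGHTWEIGHT:
        return "virtual ADC (beside the application VMs)"
    return "unclassified: review before deployment"

for f in ("load_balancing", "ssl_termination", "compression"):
    print(f"{f:16} -> {place(f)}")
```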
By placing the physical ADC at the WAN strategic point of control, you can also place virtual ADCs at your cloud provider and either physical or virtual ADCs (depending upon throughput needs) at remote data centers. With the physical ADC coordinating the efforts of this network of ADCs, you can create centralized policies and profiles that are applied no matter where the final target ADC resides. And if your network grows, you can expand your physical ADC with a Virtual Clustered Multiprocessing system, so that scalability becomes an issue of yesteryear. More on that in a future blog, promise.

No, there is no such thing as too scalable from an architectural perspective. Putting the right tools into place means your options are practically unlimited as your traffic patterns grow and change, day by day, month by month, year by year. And with an ADC processing data as it passes through – before it ever reaches your servers – you are also able to offer a more secure environment, should the organization have that need. It’s a bright future, and the more technology moves forward, the brighter it seems to get. Soon you’ll be able to meet all of your employer’s IT architecture needs with the same speed and grace with which virtualization allowed you to spin up new servers on demand, without working weekends and without taking entire systems down to achieve upgrades. So ditch the airbus architecture, save your customers the “We are in an overutilized network situation…” horror, and usher in the age of adaptability. Your network, your staff, and the business will all thank you.

F5 Friday: The Data Center is Always Greener on the Other Side of the ADC
Organizations interested in greening their data centers (green as in cash as well as in grass) will benefit from the ability to reduce, reuse, and recycle in just 4U of rack space with a leaner, greener F5 VIPRION.

According to the latest data from the U.S. Energy Information Administration, the average cost of electricity for commercial use rose from 9.63 cents per kWh (Jan 2010) to 9.88 cents per kWh (Jan 2011). If you think that’s not significant, consider that the average cost of powering one device in the data center rose roughly 3% from 2010 to 2011 – about $5 per year for a 250W device (a quick sanity check of that math follows the resource list below). On a per-device basis that’s not so bad, but start multiplying it by the number of devices in an enterprise-class data center and it gets fairly significant fairly quickly – especially given that we haven’t started calculating the costs to cool those devices yet, either.

Resources:
Medium is the New Large in Enterprise
Sometimes It Is About the Hardware
VIPRION 2400 and vCMP Presentation
VIPRION Platform Resources
vCMP: License to Virtualize
Virtual Clustered Multiprocessing (vCMP)
The ROI of Application Delivery Controllers in Traditional and Virtualized Environments
If a Network Can’t Go Virtual Then Virtual Must Come to the Network
Data Center Feng Shui: Architecting for Predictable Performance
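For the curious, the arithmetic behind those figures works out as follows, assuming a 250W device drawing full power around the clock (an assumption made for simplicity; real duty cycles vary):

```python
# Back-of-the-envelope check of the power-cost figures above, assuming a
# 250 W device running 24x7 at the EIA commercial average rates quoted.

RATE_2010 = 0.0963          # $/kWh, Jan 2010
RATE_2011 = 0.0988          # $/kWh, Jan 2011
DEVICE_KW = 0.250           # one 250 W device
HOURS_PER_YEAR = 24 * 365   # 8,760 hours

annual_kwh = DEVICE_KW * HOURS_PER_YEAR        # 2,190 kWh per year
cost_2010 = annual_kwh * RATE_2010             # ~$210.90
cost_2011 = annual_kwh * RATE_2011             # ~$216.37

increase = cost_2011 - cost_2010
print(f"Annual cost increase per device: ${increase:.2f} "
      f"({increase / cost_2010:.1%})")         # ~$5.48, i.e. roughly 3%
```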
Ixia Xcellon-Ultra XT-80 validates F5 Networks’ VIPRION 2400 SSL Performance

Courtesy the Ixia Tested YouTube Channel: Ryan Kearny, VP of Product Development at F5 Networks, explains how Ixia’s Xcellon-Ultra XT-80 high-density application performance platform was used to test and verify the performance limits of the VIPRION 2400.
href="http://technorati.com/tags/internet" _fcksavedurl="http://technorati.com/tags/internet">internet, </a><a href="http://technorati.com/tags/big-ip" _fcksavedurl="http://technorati.com/tags/big-ip">big-ip</a>, <a href="http://technorati.com/tags/VIPRION" _fcksavedurl="http://technorati.com/tags/VIPRION">VIPRION</a>, <a href="http://technorati.com/tags/vCMP" _fcksavedurl="http://technorati.com/tags/vCMP">vCMP</a>, <a href="http://technorati.com/tags/ixia" _fcksavedurl="http://technorati.com/tags/ixia">ixia</a>, <a href="http://technorati.com/tags/performace" _fcksavedurl="http://technorati.com/tags/performace">performance</a>, <a href="http://technorati.com/tags/ssl%20tps" _fcksavedurl="http://technorati.com/tags/ssl%20tps">ssl tps</a>, <a href="http://technorati.com/tags/testing" _fcksavedurl="http://technorati.com/tags/testing">testing</a></p> <table border="0" cellspacing="0" cellpadding="2" width="380"><tbody> <tr> <td valign="top" width="200">Connect with Peter: </td> <td valign="top" width="178">Connect with F5: </td> </tr> <tr> <td valign="top" width="200"><a href="http://www.linkedin.com/pub/peter-silva/0/412/77a" _fcksavedurl="http://www.linkedin.com/pub/peter-silva/0/412/77a"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="o_linkedin[1]" border="0" alt="o_linkedin[1]" src="http://devcentral.f5.com/s/weblogs/images/devcentral_f5_com/weblogs/macvittie/1086440/o_linkedin.png" _fcksavedurl="http://devcentral.f5.com/s/weblogs/images/devcentral_f5_com/weblogs/macvittie/1086440/o_linkedin.png" width="24" height="24" /></a> <a href="http://devcentral.f5.com/s/weblogs/psilva/Rss.aspx" _fcksavedurl="http://devcentral.f5.com/s/weblogs/psilva/Rss.aspx"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="o_rss[1]" border="0" alt="o_rss[1]" src="http://devcentral.f5.com/s/weblogs/images/devcentral_f5_com/weblogs/macvittie/1086440/o_rss.png" _fcksavedurl="http://devcentral.f5.com/s/weblogs/images/devcentral_f5_com/weblogs/macvittie/1086440/o_rss.png" width="24" height="24" /></a> <a href="http://www.facebook.com/f5networksinc" _fcksavedurl="http://www.facebook.com/f5networksinc"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="o_facebook[1]" border="0" alt="o_facebook[1]" src="http://devcentral.f5.com/s/weblogs/images/devcentral_f5_com/weblogs/macvittie/1086440/o_facebook.png" _fcksavedurl="http://devcentral.f5.com/s/weblogs/images/devcentral_f5_com/weblogs/macvittie/1086440/o_facebook.png" width="24" height="24" /></a> <a href="http://twitter.com/psilvas" _fcksavedurl="http://twitter.com/psilvas"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="o_twitter[1]" border="0" alt="o_twitter[1]" src="http://devcentral.f5.com/s/weblogs/images/devcentral_f5_com/weblogs/macvittie/1086440/o_twitter.png" _fcksavedurl="http://devcentral.f5.com/s/weblogs/images/devcentral_f5_com/weblogs/macvittie/1086440/o_twitter.png" width="24" height="24" /></a> </td> <td valign="top" width="178"> <a href="http://www.facebook.com/f5networksinc" _fcksavedurl="http://www.facebook.com/f5networksinc"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="o_facebook[1]" border="0" alt="o_facebook[1]" src="http://devcentral.f5.com/s/weblogs/images/devcentral_f5_com/weblogs/macvittie/1086440/o_facebook.png" 
_fcksavedurl="http://devcentral.f5.com/s/weblogs/images/devcentral_f5_com/weblogs/macvittie/1086440/o_facebook.png" width="24" height="24" /></a> <a href="http://twitter.com/f5networks" _fcksavedurl="http://twitter.com/f5networks"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="o_twitter[1]" border="0" alt="o_twitter[1]" src="http://devcentral.f5.com/s/weblogs/images/devcentral_f5_com/weblogs/macvittie/1086440/o_twitter.png" _fcksavedurl="http://devcentral.f5.com/s/weblogs/images/devcentral_f5_com/weblogs/macvittie/1086440/o_twitter.png" width="24" height="24" /></a> <a href="http://www.slideshare.net/f5dotcom/" _fcksavedurl="http://www.slideshare.net/f5dotcom/"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="o_slideshare[1]" border="0" alt="o_slideshare[1]" src="http://devcentral.f5.com/s/weblogs/images/devcentral_f5_com/weblogs/macvittie/1086440/o_slideshare.png" _fcksavedurl="http://devcentral.f5.com/s/weblogs/images/devcentral_f5_com/weblogs/macvittie/1086440/o_slideshare.png" width="24" height="24" /></a> <a href="http://www.youtube.com/f5networksinc" _fcksavedurl="http://www.youtube.com/f5networksinc"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="o_youtube[1]" border="0" alt="o_youtube[1]" src="http://devcentral.f5.com/s/weblogs/images/devcentral_f5_com/weblogs/macvittie/1086440/o_youtube.png" _fcksavedurl="http://devcentral.f5.com/s/weblogs/images/devcentral_f5_com/weblogs/macvittie/1086440/o_youtube.png" width="24" height="24" /></a></td> </tr> </tbody></table></body></html> ps Resources: Interop 2011 - Find F5 Networks Booth 2027 Interop 2011 - F5 in the Interop NOC Interop 2011 - VIPRION 2400 and vCMP Interop 2011 - IXIA and VIPRION 2400 Performance Test Interop 2011 - F5 in the Interop NOC Follow Up Interop 2011 - Wrapping It Up Interop 2011 - The Video Outtakes Interop 2011 - TMCNet Interview F5 YouTube Channel Ixia Website235Views0likes0CommentsF5 Friday: Speeds, Feeds and Boats
#vcmp It’s great to be fast and furious, but if your infrastructure handles like a boat, you won’t be able to take advantage of its performance.

We recently joined the land of modernity when I had a wild urge to acquire a Wii. Any game system is pretty useless without games, so we got some of those too. One of them, of course, had to be Transformers: The Game because, well, our three-year-old thinks he is a Transformer and I was curious as to how well the game recreated the transformation process. The three-year-old obviously doesn’t have the dexterity (or patience) to play, but he loves to watch other people play, people like his older brother. The first time our oldest sat down and played, he noted that Bumblebee, in particular, handled like a “boat.” Oh, he’s a fast car all right, but making it around corners, tight curves, and objects is difficult because he’s not very agile when you get down to it. Jazz, for the record, handles much better. Handling is important, of course, because the faster you go, the more difficult it is to maneuver and drive accurately. Handling impacts the overall experience because constantly readjusting direction and speed to get through town makes it difficult to efficiently find and destroy the “evil forces of the Decepticons.” Now, while the infrastructure in which you’re considering investing may be fast and furious, with high speeds and fat feeds, the question you have to ask yourself is, “How does she handle? Is she agile, or is she a boat?” Because constantly readjusting policies and capacity and configuration can make it difficult to efficiently deliver applications.

VIPRION 2400: High Speed, Fat Feeds and Agile to Boot

This week at Interop F5 announced the newest member of our VIPRION family, the VIPRION 2400 – a.k.a. Victoria. At first glance you might think the VIPRION 2400 is little more than a scaled-down version of the VIPRION 4000, our flagship BIG-IP chassis-based application delivery controller. In many respects that’s true, but in many others it’s not. That’s because at the same time we also introduced a new technology called vCMP (Virtual Clustered Multiprocessing) that gives the platform some pretty awesome internal agility, which translates into operational and ultimately business agility. If the network can’t go virtual, then virtual must come to the network. It’s not just having a bladed, pay-as-you-grow system that makes VIPRION with vCMP agile; it’s the way in which you can provision and manage resources across blades, transparently, in a variety of different ways. If you’re an application-centric operations kind of group, you can manage and thus provision application delivery resources on VIPRION based on applications, not ports or IP addresses or blades. If you’re a web-site or domain-focused operations kind of group, manage and provision application delivery resources by VIP (Virtual IP address) instead. If you’re an application delivery kind of group, you may want to manage by module instead. It’s your operations, your way. What’s awesome about vCMP and the VIPRION platforms is the ability to provision and manage application delivery resources as a pool, regardless of where they’re located. Say you started with one blade in a VIPRION 2400 chassis and grew to need a second. There’s no disruption, no downtime, no changes to the network necessary. Slap in a second blade and the resources are immediately available to be provisioned and managed as though they were merely part of a larger pool.
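A minimal sketch of that pay-as-you-grow behavior appears below. It models blades joining a shared resource pool and guests drawing from it; the class, core counts, and guest names are invented for illustration and do not represent vCMP’s actual internals.

```python
# Illustrative model of a bladed, pay-as-you-grow resource pool: adding a
# blade enlarges the pool immediately, and guests are provisioned from it
# without disturbing guests that are already running. Hypothetical numbers;
# not a representation of vCMP's real scheduler.

class ResourcePool:
    def __init__(self):
        self.total_cores = 0
        self.allocations = {}  # guest name -> allocated cores

    def add_blade(self, cores):
        """A newly inserted blade's cores join the shared pool."""
        self.total_cores += cores

    def provision(self, guest, cores):
        """Carve out cores for a guest if the pool has headroom."""
        free = self.total_cores - sum(self.allocations.values())
        if cores > free:
            return False
        self.allocations[guest] = cores
        return True

pool = ResourcePool()
pool.add_blade(8)
print(pool.provision("dept-a", 4))  # True: 4 of 8 cores allocated
print(pool.provision("dept-b", 6))  # False: only 4 cores free
pool.add_blade(8)                   # slap in a second blade...
print(pool.provision("dept-b", 6))  # True: the pool simply grew
```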
Conversely, in the event of a blade failure, the resources are shifted to other available CPUs and memory across the system. Not only can you provision at the resource layer, but you can also split up those resources by creating virtual instances of BIG-IP right on the platform. Each “guest” on the VIPRION platform can be assigned its own resources, can be managed by a completely different group, and is for all intents and purposes an isolated, stand-alone version of BIG-IP. All without additional hardware, without topological disruption, and without the extra cables and switches that might be necessary to achieve such a feat using traditional application delivery systems.

VIPRION 2400 has the speeds and feeds necessary to support a growing mid-sized organization – mid-sized from a traffic-management perspective, not necessarily employee count. The increasing demands on even small and medium-sized businesses from new clients, video, and HTML5 are driving high volumes of traffic through architectures that are not necessarily prepared to handle the growth affordably or operationally. The VIPRION 2400 was designed to address that need: both to handle volume and to provide for growth over time, while being as flexible as possible to fit the myriad styles of architecture that exist in the real world. The explosion of virtualization inside the data centers of medium-sized businesses, too, is problematic. These organizations need a solution capable of supporting the security and delivery needs of virtualized desktops and applications in very flexible ways. VIPRION 2400 enables these organizations to take advantage of what has traditionally been a large-enterprise-class-only solution, and to implement modern architectures and network topologies that can greatly assist virtualization and cloud computing efforts by providing the foundation of a dynamic, agile infrastructure.

VIPRION 2400 RESOURCES
VIPRION 2400 and vCMP Presentation
VIPRION Platform Resources
F5 Introduces Midrange VIPRION Platform and Industry’s First Virtual Clustered Multiprocessing Technology
VIPRION 2400 - Quantum Performance
Virtual Clustered Multiprocessing (vCMP)
Medium is the New Large in Enterprise
Sometimes It Is About the Hardware

VIPRION and vCMP enable you to take advantage of more of the “50 Ways to Use Your BIG-IP System.” Share how you use your BIG-IP, get a free T-shirt, and maybe more!

Medium is the New Large in Enterprise
Sometimes It Is About the Hardware
If a Network Can’t Go Virtual Then Virtual Must Come to the Network
Data Center Feng Shui: Architecting for Predictable Performance
F5 Friday: Have You Ever Played WoW without a Good Graphics Card?
All F5 Friday Posts on DevCentral
Data Center Feng Shui: SSL
When Did Specialized Hardware Become a Dirty Word?