F5 Friday: Are You One of the 61 Percent?

That’s the share of respondents who “are still not fully confident in their network infrastructure’s preparedness” for cloud …

Throughout the evolution of computing, bus speed, i.e. the interconnects between components on the main board, has traditionally been one of the most limiting factors in computing performance. The same holds for the interconnects between disparate hardware resources such as SSL, video, and network cards.

Networks, which connect disparate compute resources across the data center (and indeed across the Internet), are interconnects themselves and carry with them the same limiting behavior.

I/O – whether network, storage, or image – has long been and remains one of the most impactful components of application performance. As applications become more complex in terms of the media served as well as the integration of externally hosted content, the amount of data – and thus bandwidth – required to maintain performance also continues to increase.

"We thought the main flow of traffic through the data center was from east to west; it turned out to be from north to south. We found lots of areas where we could improve," Leinwand [Zynga's infrastructure CTO] told the crowd.

Other bottlenecks were found in the networks to storage systems, Internet traffic moving through Web servers, firewalls' ability to process the streams of traffic, and load balancers' ability to keep up with constantly shifting demand.

-- Inside Zynga’s Big Move To Private Cloud

Virtualization, and by extension cloud computing, exacerbates the situation by increasing the density of applications requiring network access without simultaneously increasing network capacity. Servers used by enterprises and providers to build out cloud racks are often still of a class that supports only a few (typically four) network interfaces, and those are generally limited to 1GbE. A growing reliance on external storage to ensure persistence of data across more volatile virtual machines puts additional pressure on the network, particularly on shared networks such as those found in highly virtualized and cloud computing environments.
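To make that pressure concrete, a rough back-of-the-envelope calculation (the VM counts and NIC figures here are illustrative assumptions, not measured numbers) shows how the per-VM share of host bandwidth shrinks as density rises against fixed NIC capacity:

```python
# Illustrative math only (assumed figures, not benchmarks): per-VM share of
# host bandwidth when aggregate NIC capacity is divided across resident VMs.

def per_vm_gbps(nic_count: int, nic_gbps: float, vms_per_host: int) -> float:
    """Evenly divide a host's aggregate NIC capacity across its VMs."""
    return (nic_count * nic_gbps) / vms_per_host

# A legacy host: four 1GbE interfaces shared by 20 VMs.
legacy = per_vm_gbps(nic_count=4, nic_gbps=1.0, vms_per_host=20)

# The same VM density on a host refreshed with two 10GbE interfaces.
refreshed = per_vm_gbps(nic_count=2, nic_gbps=10.0, vms_per_host=20)

print(f"legacy: {legacy:.1f} Gbps/VM, refreshed: {refreshed:.1f} Gbps/VM")
# legacy: 0.2 Gbps/VM, refreshed: 1.0 Gbps/VM
```

Even before accounting for storage traffic sharing the same links, the legacy host leaves each VM with a fraction of a gigabit, which is exactly the capacity squeeze the refresh cycle needs to address.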

As infrastructure refresh cycles come into play, it’s time for organizations to upgrade server capacity in terms of both compute and the network. That means upgrading to modern 10GbE interfaces on servers and, in turn, upgrading network components to ensure the aggregated capacity of these servers can be efficiently managed by upstream devices.

That means components in the application delivery tier, like BIG-IP, need to beef up density in terms of sheer throughput as well.


F5 is excited to introduce the industry’s first 40GbE-capable application delivery controller, the VIPRION 4480. With layer 4 throughput of 320 Gbps and layer 7 throughput of 160 Gbps, the F5 VIPRION 4480 delivers revolutionary performance supporting a wide variety of deployment scenarios in the data center.

As an ICSA-certified network firewall, BIG-IP on the VIPRION 4480 supports 5.6 million connections per second – nearly sixteen times that of its closest competitor and well above the rates seen in the “largest DDoS attack of 2011.”  With the introduction of the VIPRION 4480, F5 is redefining application delivery and data center firewall performance and scalability, offering enterprises and service providers an effective means of consolidating infrastructure as well as laying the foundation for the high-bandwidth fabrics necessary for next-generation data centers.

The combined capacity and scalability features of BIG-IP on the VIPRION 4480 enable greater consolidation across data center services as well, bringing secure remote access, web application firewall, and data center firewall services together with dynamic and highly intelligent load balancing. This approach enables each service domain to scale independently and on demand, keeping applications available by ensuring all dependent services are available. Converged application delivery architectures also maintain critical context across functions while reducing the performance-impeding latency that results from chaining multiple point solutions. Consolidation likewise reduces the number of disparate policies that must be enforced, and with it the risk of a misconfiguration that could lead to a security breach.

Consolidation further provides IT with a consistent operational paradigm, ensuring disjointed management and automation technologies do not impair transformational efforts toward a more dynamic, on-demand data center.

The VIPRION 4480 is designed for the dynamic data center, as a platform on which organizations can scale and grow delivery services as they scale and grow their business and operations. It is fast, it is secure, and it is available – and it extends those characteristics to the data centers in which it is deployed and to the applications and services it is designed to deliver and secure.

VIPRION 4480 Resources:

Distributed Apache Killer
Why Layer 7 Load Balancing Doesn’t Suck
Threat Assessment: Terminal Services RDP Vulnerability
Cloud Bursting: Gateway Drug for Hybrid Cloud
Identity Gone Wild! Cloud Edition
Published Apr 06, 2012
Version 1.0


