Building an elastic environment requires elastic infrastructure
One of the reasons some folks push for infrastructure as virtual appliances is the on-demand nature of a virtualized environment. When network and application delivery infrastructure hits capacity in terms of throughput - regardless of the layer of the application stack at which it happens - it's frustrating to think you might need to upgrade the hardware rather than just add more compute power via a virtual image.

The truth is that this makes sense. The infrastructure supporting a virtualized environment should be elastic. It should be able to expand dynamically without requiring a new network architecture, a higher performing platform, or new configuration. You should be able to just add more compute resources and walk away. The good news is that this is possible today; it just requires that you carefully consider your choices in network and application network infrastructure when you build out your virtualized infrastructure.

ELASTIC APPLICATION DELIVERY INFRASTRUCTURE

Last year F5 introduced VIPRION, an elastic, dynamic application delivery platform capable of expanding capacity without requiring any changes to the infrastructure. VIPRION is a chassis-based, bladed application delivery controller, and its bladed system behaves much the same way a virtualized equivalent would. Say you start with one blade in the system, and soon after you discover you need more throughput and more processing power. Rather than bringing online a new virtual image of such an appliance to increase capacity, you add a blade to the system and voila! VIPRION immediately recognizes the blade and simply adds it to its pools of processing power and capacity. There's no need to reconfigure anything; VIPRION essentially treats each blade like a virtual image and automatically distributes requests and traffic across the network and application delivery capacity available on the blade.
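The blade-joins-the-pool behavior described above can be sketched as a simple capacity model. This is illustrative only; the class and attribute names below are hypothetical, not an actual VIPRION API:

```python
# Illustrative model of elastic, chassis-based capacity: inserting a
# blade grows the shared pools without any reconfiguration step.
# Class and attribute names are hypothetical, not a real VIPRION API.

class Blade:
    def __init__(self, throughput_gbps, cores):
        self.throughput_gbps = throughput_gbps
        self.cores = cores

class Chassis:
    def __init__(self):
        self.blades = []

    def insert_blade(self, blade):
        # No new architecture, no new config: the blade just joins the pool.
        self.blades.append(blade)

    @property
    def throughput_gbps(self):
        return sum(b.throughput_gbps for b in self.blades)

    @property
    def cores(self):
        return sum(b.cores for b in self.blades)

chassis = Chassis()
chassis.insert_blade(Blade(throughput_gbps=10, cores=8))
chassis.insert_blade(Blade(throughput_gbps=10, cores=8))   # capacity doubles
print(chassis.throughput_gbps, chassis.cores)              # 20 16
```

The point of the sketch is that capacity is an aggregate over whatever blades are present, so "scaling up" is just inserting hardware.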
Just like a virtual appliance model would, but without concern for the reliability and security of the platform.

Traditional application delivery controllers can also be scaled out horizontally to provide similar functionality and behavior. By deploying additional application delivery controllers in what is often called an active-active model, you can rapidly deploy and synchronize the configuration of the master system to add more throughput and capacity. Meshed deployments comprising more than a pair of application delivery controllers can also provide additional network compute resources beyond what is offered by a single system. The latter option (the traditional scaling model) requires more work to deploy than the former (VIPRION) simply because it requires additional hardware and all the overhead such a solution entails. The elastic option with bladed, chassis-based hardware is really the best option in terms of elasticity and the ability to grow on demand as your infrastructure needs increase over time.

ELASTIC STORAGE INFRASTRUCTURE

Often overlooked in the network diagrams detailing virtualized infrastructures is the storage layer. The increase in storage needs in a virtualized environment can be overwhelming, as there is a need to standardize the storage access layer so that virtual images of applications can be deployed in a common, unified way regardless of which server they need to execute on at any given time. This means a shared, unified storage layer on which to store images that are necessarily large. This unified storage layer must also be expandable: as the number of applications and associated images grows, storage needs increase. What's needed is a system in which additional storage can be added in a non-disruptive manner.
If you have to modify the automation and orchestration systems driving your virtualized environment when additional storage is added, you've lost some of the benefits of a virtualized storage infrastructure. F5's ARX series of storage virtualization devices provides that layer of unified storage infrastructure. By normalizing the namespaces through which files (images) are accessed, the systems driving a virtualized environment can be assured that images are available via the same access method regardless of where the file or image is physically located. Virtualized storage infrastructure systems are dynamic; additional storage can be added to the infrastructure and "plugged in" to the global namespace to increase the storage available in a non-disruptive manner.

An intelligent virtualized storage infrastructure can make use of the available storage still more efficient by tiering it. Images and files accessed more frequently can be stored on fast, tier-one storage so they load and execute more quickly, while less frequently accessed files and images can be moved to less expensive and perhaps less performant storage systems.

By deploying elastic application delivery network infrastructure instead of virtual appliances you maintain stability, reliability, security, and performance across your virtualized environment. Elastic application delivery network infrastructure is already dynamic, and offers a variety of options for integration into automation and orchestration systems via standards-based control planes, many of which are nearly turn-key solutions. The reasons why some folks might desire a virtual appliance model for their application delivery network infrastructure are valid. But the reality is that the elasticity and on-demand capacity offered by a virtual appliance are already available in proven, reliable hardware solutions today that do not require sacrificing performance, security, or flexibility.
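The tiering policy described above amounts to a threshold rule on access frequency. A minimal sketch, where the tier names and the access-count threshold are assumptions for illustration rather than ARX behavior:

```python
# Minimal sketch of frequency-based storage tiering: "hot" files land on
# fast tier-1 storage, "cold" files on cheaper tier-2 storage.
# The threshold and tier names are illustrative assumptions only.

HOT_ACCESS_THRESHOLD = 100   # accesses per day; arbitrary for this sketch

def place_file(accesses_per_day):
    """Pick a storage tier based on how often a file/image is accessed."""
    if accesses_per_day >= HOT_ACCESS_THRESHOLD:
        return "tier1-fast"
    return "tier2-economy"

print(place_file(2500))   # frequently loaded VM image
print(place_file(3))      # rarely used archive image
```

Because a global namespace hides the physical location, a migration between tiers driven by a rule like this is invisible to the systems accessing the files.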
Related articles by Zemanta:
How to instrument your Java EE applications for a virtualized environment
Storage Virtualization Fundamentals
Automating scalability and high availability services
Building a Cloudbursting Capable Infrastructure
EMC unveils Atmos cloud offering
Are you (and your infrastructure) ready for virtualization?

Lightboard Lessons: Intro to VIPRION
The F5 BIG-IP platform has tremendous flexibility, offering virtual editions and a line of appliances and chassis. In this episode of Lightboard Lessons, we'll introduce the chassis platform, which we call VIPRION. What else would you like us to cover on the VIPRION? Drop a comment below and we'll consider it for a future episode!

It's All About The 12s: B4450 vCMP Guest Support Adds New Guest Options
The recent release of version 13.0 enabled a capability on the B4450 which had been dormant until now: vCMP support. With 13.0, running up to 12 guests per blade is now supported, yielding 96 guests in an 8-slot chassis, the C4800 - a doubling of the guest density over the previous 4000-series blade, the B4300. While vCMP support was supposed to be part of the features enabled at launch, there was a need for more in-depth testing of vCMP on the B4450 now that a full chassis can support up to 96 guests. The increase in density also makes good use of the higher number of cores and the additional RAM available versus the B4300 series. Here's a comparison of the relative densities and some basic metrics:

| Blade | Cores | RAM | Max Guests | RAM per HT/vCPU | RAM per Guest (max density) |
|---|---|---|---|---|---|
| B2250 | 10 (20 HT) | 64 GB | 20 | 3.1 GB | 3.1 GB |
| B4300 | 12 (no HT) | 48 GB | 6 | 3.75 GB | 3.75 GB |
| B4340N | 12 (no HT) | 96 GB | 6 | 7.50 GB | 7.50 GB |
| B4450(N) | 24 (48 HT) | 256 GB | 12 | 5.25 GB | 21.1 GB |

Dividing the B4450 evenly creates 12 guests that can simultaneously support all of the modules that TMOS offers without running out of RAM, providing ample opportunity to consolidate older or smaller machines into a single guest or multiple guests to save on overall operational costs, such as rack space or cooling. For reference, here's how a B4450 vCMP guest compares to some of the older BIG-IP hardware platforms:

| Single B4450 Guest versus: | RAM increase | vCPU increase | Version Support | Adds FPGA-based DDoS Mitigation? | Adds SSD-based Storage? |
|---|---|---|---|---|---|
| BIG-IP 1600 | 5x | 2x | <12.1.x | Yes | Yes |
| BIG-IP 3600 | 2.5x | 2x | <12.1.x | Yes | Yes |
| BIG-IP 3900 | 2.5x | 1x | <12.1.x | Yes | Yes |
| BIG-IP 6900 | 2.5x | 1x | <12.1.x | Yes | Yes |
| BIG-IP 8900 | 1.5x | 1x | <12.1.x | Yes | Yes |
| BIG-IP 2200s | 2.5x | 2x | 13.0+ | Yes | Yes |
| BIG-IP 4200v | 1.3x | 1x | 13.0+ | Yes | Yes |

**These are just estimations, based on data sheet figures, and are rule of thumb only. Actual performance and consolidation capabilities should be based on sizing guides, which are accessible to F5 SEs and SAs for review with customers.**

Wait! There's More!
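The density figures above follow from simple arithmetic. Note that naive division of total RAM by guest count comes out slightly higher than the table's 21.1 GB, presumably because the data-sheet figure reserves some RAM for the hypervisor:

```python
# Guest-density arithmetic for the B4450, using the figures above.

GUESTS_PER_BLADE = 12    # vCMP guests supported per B4450 blade (13.0+)
SLOTS_C4800 = 8          # blade slots in a C4800 chassis
B4450_RAM_GB = 256       # total RAM on a B4450 blade

max_guests_per_chassis = GUESTS_PER_BLADE * SLOTS_C4800

# Naive per-guest RAM, ignoring any RAM reserved for the hypervisor/host.
naive_ram_per_guest = B4450_RAM_GB / GUESTS_PER_BLADE

print(max_guests_per_chassis)          # 96
print(round(naive_ram_per_guest, 1))   # 21.3 (table lists 21.1 GB after overhead)
```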
Enabling vCMP on the B4450 also comes with a bonus feature: support for guests running 12.1.2 HF1 or later on a 13.0+ hypervisor. Providing this guest support, along with the increased capabilities of the Platform Migration features that will be available in 12.1.3, means that migration and consolidation on a VIPRION with B4450 blades need not be a difficult shift between platforms. As noted above, many of the platforms that have hit or passed the five-year-old mark do not support anything beyond the 12.1.x code train. With support for 12.1.2 HF1+ releases as guests on the B4450, there's immediate access to a long-term stability release, lessening the impact in places where version certification takes a significant amount of time while allowing for implementation of advanced features such as per-guest DDoS protection or switch-level, per-guest network rate limiting.

Providing this support also maintains F5's history of delivering vCMP support for the latest release of the major version that accompanied the release of the platform. In this case, the B4450 was released with 12.1, so guest support was extended to 12.1.2 HF1. The related AskF5 articles are K14088: vCMP host and compatible guest version matrix and K14218: vCMP guest memory/CPU core allocation matrix.

F5 Friday: Are You One of the 61 Percent?
#centaur #40GBE #cloud That's those who "are still not fully confident in their network infrastructure's preparedness" for cloud ...

Throughout the evolution of computing, the bus speed of interconnects between various components on the main board has traditionally been one of the most limiting factors in computing performance. When considering interconnects between disparate hardware resources such as SSL, video, and network cards, the bus speed has been the most limiting factor. Networks, which connect disparate compute resources across the data center (and indeed across the Internet), are interconnects too, and carry with them the same limiting behavior. I/O - whether network, storage, or image - has long been, and still remains, one of the most impactful components of application performance. As applications continue to become more complex in terms of media served as well as integration of externally hosted content, the amount of data - and thus bandwidth - required to maintain performance also continues to increase.

"We thought the main flow of traffic through the data center was from east to west; it turned out to be from north to south. We found lots of areas where we could improve," Leinwand [Zynga's infrastructure CTO] told the crowd. Other bottlenecks were found in the networks to storage systems, Internet traffic moving through Web servers, firewalls' ability to process the streams of traffic, and load balancers' ability to keep up with constantly shifting demand. -- Inside Zynga's Big Move To Private Cloud

Virtualization, and by extension cloud computing, exacerbates the situation by increasing the density of applications requiring network access without simultaneously increasing network capacity. Servers used by enterprises and providers to build out cloud racks are often still of a class that can support only a few (typically four) network interfaces, and those are generally limited to 1 Gbps.
A growing reliance on external storage to ensure persistence of data across more volatile virtual machines puts additional pressure on the network, and particularly on shared networks such as those found in highly virtualized and cloud computing environments. As infrastructure refresh cycles begin to come into play, it's time for organizations to upgrade server capacity in terms of compute and in terms of the network. That means upgrading to modern 10 GbE interfaces on servers and, in turn, upgrading network components to ensure the aggregated capacity of these servers can be efficiently managed by upstream devices. That means components in the application delivery tier, like BIG-IP, need to beef up density in terms of sheer throughput as well.

FIRST ADC with 40GBE SUPPORT

F5 is excited to introduce the industry's first 40GbE-capable application delivery controller, the VIPRION 4480. With 320 Gbps of layer 4 throughput and 160 Gbps at layer 7, F5 VIPRION 4480 delivers revolutionary performance supporting a wide variety of deployment scenarios in the data center. As an ICSA-certified network firewall, BIG-IP on VIPRION 4480 supports 5.6 million connections per second - nearly sixteen times that of its closest competitor and well above rates seen by the "largest DDoS attack of 2011." With the introduction of the VIPRION 4480, F5 is redefining application delivery and data center firewall performance and scalability, offering enterprises and service providers an effective means of consolidating infrastructure as well as laying the foundation for the high-bandwidth fabrics necessary for next-generation data centers.

The combined capacity and scalability features of BIG-IP on VIPRION 4480 enable greater consolidation across data center services as well, bringing secure remote access and web application and data center firewall services together with dynamic and highly intelligent load balancing.
This approach enables each service domain to scale independently and on-demand, ensuring applications stay available by making sure all dependent services are available. Converged application delivery architectures also ensure critical context is maintained across functions while reducing the performance-impeding latency that results from chaining multiple point solutions, reducing the number of disparate policies that must be enforced as well as the risk of a misconfiguration that may lead to a security breach. Consolidation further provides a consistent operational paradigm under which IT can operate, ensuring disjointed management and automation technologies do not impair transformational efforts toward a more dynamic, on-demand data center.

The VIPRION 4480 is designed for the dynamic data center, as a platform on which organizations can scale and grow delivery services as they scale and grow their business and operations. It is fast, it is secure, and it is available - and it extends those characteristics to the data centers in which it is deployed and the applications and services it delivers and secures.

VIPRION 4480 Resources:
New VIPRION Solutions – SlideShare Presentation
VIPRION Overview – Datasheet
F5's VIPRION Solutions Help Service Providers and Enterprises Optimize Infrastructures and Reduce Costs
The Cost of Ignoring 'Non-Human' Visitors
Moore's (Traffic) Law
SuperSizing the Data Center: Big Data and Jumbo Frames
Desktop VDI May Be Ready for Prime Time but Is the Network?
Distributed Apache Killer
Why Layer 7 Load Balancing Doesn't Suck
Threat Assessment: Terminal Services RDP Vulnerability
Cloud Bursting: Gateway Drug for Hybrid Cloud
Identity Gone Wild! Cloud Edition

Ixia Xcellon-Ultra XT-80 Validates F5 Networks' VIPRION 2400 SSL Performance
Courtesy IxiaTested YouTube Channel

Ryan Kearny, VP of Product Development at F5 Networks, explains how Ixia's Xcellon-Ultra XT-80 high-density application performance platform is used to test and verify the performance limits of the VIPRION 2400.
ps

Resources:
Interop 2011 - Find F5 Networks Booth 2027
Interop 2011 - F5 in the Interop NOC
Interop 2011 - VIPRION 2400 and vCMP
Interop 2011 - IXIA and VIPRION 2400 Performance Test
Interop 2011 - F5 in the Interop NOC Follow Up
Interop 2011 - Wrapping It Up
Interop 2011 - The Video Outtakes
Interop 2011 - TMCNet Interview
F5 YouTube Channel
Ixia Website

Now Available: The C2200 Chassis, the Smallest Model in the Viprion Series and the Industry's Only Chassis-Based ADC
F5 Networks Japan K.K. today announced the C2200, a new compact two-slot chassis in the Viprion series that extends the benefits of the F5 Synthesis architectural model. Joining the existing mid-range C2400, the higher-end C4480, and the flagship C4800, the C2200 delivers the same functionality as its predecessors in a smaller, space-saving, and more affordably priced package.

Key points:
The smallest chassis in the Viprion series at 2RU (rack units)
Supports the latest mid-range blades, the B2150 and B2250
Holds up to two blades, allowing up to 40 vCMP virtual instances
Requires TMOS version 11.5.0 or later

For details, see the Viprion product page, which includes data sheets with full specifications and a platform comparison table.

The Viprion C2200 adds scalable processing power while preserving the ability to upgrade the system as user needs grow, delivering both the performance and the scalability of the application services that matter to the enterprise. Using F5's Virtual Clustered Multiprocessing (vCMP) technology, it efficiently consolidates application services and under-utilized application delivery controllers (ADCs) to provide the industry's highest-density multi-tenant solution.

Many existing deployments where large infrastructure growth is not expected run only one or two blades in a Viprion chassis that can hold four or eight. For customers planning around this smaller capacity, the C2200 offers the same expandability and virtualization solutions with a smaller footprint and a lower initial investment. Please consider the new Viprion C2200!

The C2200 is shipping now. For more information, contact F5 Networks Japan (https://interact.f5.com/JP-Contact.html) or your distributor.

The Real News is Not that Facebook Serves Up 1 Trillion Pages a Month…
It's how much load that really generates and how it scales to meet the challenge.

There's some debate whether Facebook really crossed the one trillion page views per month threshold. While one report says it did, another respected firm says it did not; that its monthly page views are a mere 467 billion. In the big scheme of things the discrepancy is somewhat irrelevant, as neither shows the true load on Facebook's infrastructure - which is a far more impressive set of numbers than its externally measured "page view" metric.

Mashable reported in "Facebook Surpasses 1 Trillion Pageviews per Month" that the social networking giant saw "approximately 870 million unique visitors in June and 860 million in July" and followed up with some per-visitor statistics, indicating "each visitor averaged approximately 1,160 page views in July and 40 per visit — enormous by any standard. Time spent on the site was around 25 minutes per user."

From an architectural standpoint it's not just about the page views. It's about requests and responses, many of which occur under the radar of the metrics and measurements typically gathered by external services like Google. Much of Facebook's interactive functionality is powered by AJAX, which is hidden "in" the page and thus obscured from external view, and a "page view" doesn't necessarily include a count of all the external objects (scripts, images, etc.) that comprise a "page". So while 1 trillion (or 467 billion, whichever you prefer) is impressive, consider that this is likely only a fraction of the actual requests and responses handled by Facebook's massive infrastructure on any given day. Let's examine what the actual requests and responses might mean in terms of load on Facebook's infrastructure, shall we?

SOME QUICK MATH

Loading up Facebook yields 125 requests to load various scripts, images, and content. That's a "page view".
Sitting on the page for a few minutes and watching Firebug's console, you'll note a request to update content occurs approximately every minute you are on a page. If we do the math - based on approximate page views per visitor, each of which incurs 125 GET requests - we arrive at an approximation of 19,468 RPS (requests per second). That's only an approximation, mind you, and it doesn't take into consideration the time factor, which incurs further AJAX-based requests to update content on a fairly regular basis. These also add to the overall load on Facebook's massive infrastructure. And that's before we start considering the impact of "unseen" integrated traffic via Facebook's API which, according to the most recently available data (2009), was adding 5 billion requests a day to that load. If you're wondering, that's an additional 57,870 requests per second, which gives us a more complete number of 77,338 requests per second.

SOURCE: 2009 Interop F5 Keynote

Let's take a moment to digest that, because that's a lot of load on a site - and I'm sure it still isn't taking everything into consideration. We also have to remember that the load at any given time could be higher - or lower - based on usage patterns. Averaging totals over a month and distilling down to a per-second figure is just that - a mathematical average. It doesn't take into consideration that peaks and valleys occur in usage throughout the day, and that Facebook may be averaging only a fraction of that load with spikes two and three times as high.

That realization should be a bit sobering, as we've seen recent DDoS attacks that have crippled and even toppled sites with less traffic than Facebook handles in any given minute of the day. The question is, how do they do it? How do they manage to keep the service up and available despite the overwhelming load and certainty of traffic spikes?
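The back-of-the-napkin arithmetic above is easy to reproduce. The 19,468 RPS page-view figure is the article's own estimate; the API figure follows directly from 5 billion requests a day:

```python
# Reproduction of the article's load estimate. The page-view figure
# (19,468 RPS) is the article's own approximation; the API figure is
# derived from the reported 5 billion API requests per day (2009).

SECONDS_PER_DAY = 24 * 60 * 60               # 86,400

page_view_rps = 19_468                        # article's page-view estimate
api_rps = 5_000_000_000 // SECONDS_PER_DAY    # API load: ~57,870 RPS

total_rps = page_view_rps + api_rps
print(api_rps, total_rps)                     # 57870 77338
```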
IT'S the ARCHITECTURE

Facebook itself does a great job of discussing exactly how it manages to sustain such load over time while simultaneously managing growth, and its secret generally revolves around architectural choices. Not just the "Facebook" application architecture, but its use of infrastructure architecture as well. That may not always be apparent from Facebook's engineering blog, which generally focuses on application and software architecture topics, but it is inherent in those architectural decisions. Take, for example, an engineer's discussion of Facebook's secrets to scaling to over 500 million users and beyond. The very first point made is to "scale horizontally":

This isn't at all novel but it's really important. If something is increasing exponentially, the only sensible way to deal with it is to get it spread across arbitrarily many machines. Remember, there are only three numbers in computer science: 0, 1, and n. (Scaling Facebook to 500 Million Users and Beyond (Facebook Engineering Blog))

Horizontal scalability is, of course, enabled via load balancing, which generally (but not always) implies infrastructure components that are critical to an overall growth and scalability strategy. The abstraction afforded by load balancing services also has the added benefit of enabling agile operations, as it becomes cost- and time-effective to add and remove (provision and decommission) compute resources as a means to meet scaling challenges on demand - a key component of cloud computing models. In other words, in addition to Facebook's attention to application architecture as a means to enable scalability, it also takes advantage of infrastructure components providing load balancing services to ensure that its massive load is distributed not just geographically but efficiently across its various clusters of application functionality.
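The scale-horizontally pattern described above - spread load across arbitrarily many interchangeable machines and grow by adding more - can be sketched with a simple round-robin distributor. The server names are hypothetical, and round-robin stands in for whatever distribution policy a real load balancer would use:

```python
from itertools import cycle

# Minimal sketch of horizontal scaling behind a load balancer: requests
# are spread across n interchangeable servers, and capacity grows by
# adding servers to the pool. Server names are hypothetical; round-robin
# is the simplest possible distribution policy.

class LoadBalancer:
    def __init__(self, servers):
        self.servers = list(servers)
        self._rotation = cycle(self.servers)

    def add_server(self, server):
        # Scaling out: provision another instance and re-seed the rotation.
        self.servers.append(server)
        self._rotation = cycle(self.servers)

    def route(self, request):
        # Pick the next server in rotation to handle this request.
        return next(self._rotation)

lb = LoadBalancer(["web1", "web2"])
lb.add_server("web3")                              # demand grew: add capacity
print([lb.route(f"req{i}") for i in range(3)])     # ['web1', 'web2', 'web3']
```

The abstraction is the point: clients see one endpoint, so servers can be provisioned and decommissioned behind it without anything upstream changing.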
It’s a collaborative architecture that spans infrastructure and application tiers, taking advantage of the speed and scalability benefits afforded by both approaches simultaneously. Yet Facebook is not shy about revealing its use of infrastructure as a means to scale and implement its architecture; you just have to dig around to find it. Consider as an example of a collaborative architecture the solution to some of the challenges Facebook has faced trying to scale out its database, particularly in the area of synchronization across data centers. This is a typical enterprise challenge made even more difficult by Facebook’s decision to separate “write” databases from “read” to enhance the scalability of its application architecture. The solution is found in something Facebook engineers call “Page Routing” but most of us in the industry call “Layer 7 Switching” or “Application Switching”: The problem thus boiled down to, when a user makes a request for a page, how do we decide if it is "safe" to send to Virginia or if it must be routed to California? This question turned out to have a relatively straightforward answer. One of the first servers a user request to Facebook hits is called a Load balancer; this machine's primary responsibility is picking a web server to handle the request but it also serves a number of other purposes: protecting against denial of service attacks and multiplexing user connections to name a few. This load balancer has the capability to run in Layer 7 mode where it can examine the URI a user is requesting and make routing decisions based on that information. This feature meant it was easy to tell the load balancer about our "safe" pages and it could decide whether to send the request to Virginia or California based on the page name and the user's location. 
(Scaling Out (Facebook Engineering Blog)) That’s the hallmark of the modern, agile data center and the core of cloud computing models: collaborative, dynamic infrastructure and applications leveraging technology to enable cost-efficient, scalable architectures able to maintain growth along with the business. SCALABILITY TODAY REQUIRES a COMPREHENSIVE ARCHITECTURAL STRATEGY Today’s architectures – both application and infrastructure – are necessarily growing more complex to meet the explosive growth of a variety of media and consumers. Applications alone cannot scale themselves out – there simply aren’t physical machines large enough to support the massive number of users and the load created by consumers’ nearly insatiable demand for online games, shopping, interaction, and news. Modern applications must be deployed and delivered collaboratively with infrastructure if they are to scale and support growth in an operationally and financially efficient manner. Facebook’s ability to grow and scale along with demand is enabled by its holistic, architectural approach, which leverages both modern application scalability patterns and infrastructure scalability patterns. Together, infrastructure and applications are enabling the social networking giant to continue growing steadily with very few hiccups along the way. Its approach is one well-suited to any organization wishing to scale efficiently over time with the least amount of disruption and with the speed of deployment today’s demanding business environments require.
Facebook Hits One Trillion Page Views? Nope.
Facebook Surpasses 1 Trillion Pageviews per Month
Scaling Out (Facebook Engineering Blog)
Scaling Facebook to 500 Million Users and Beyond (Facebook Engineering Blog)
WILS: Content (Application) Switching is like VLANs for HTTP
Layer 7 Switching + Load Balancing = Layer 7 Load Balancing
Infrastructure Scalability Pattern: Partition by Function or Type
Infrastructure Scalability Pattern: Sharding Sessions
Architecturally, Is There Such A Thing As Too Scalable?
Forget Hyper-Scale. Think Hyper-Local Scale.

Bare Metal Blog: Maximized Capacity
#f5 #BareMetalBlog Use the capacity you’ve already purchased – fill the chassis instead of buying new hardware. When you purchase a high-end storage array, it is not generally advised that you half-fill the racks, forget about them, and then purchase a new empty rack to fill the next time you need high-end storage. Knowing how the firmware is configured on a BIG-IP, we can chat about some of the more interesting aspects of hardware, firmware, and devices overall. One of the truly interesting bits to me is the proclivity of many organizations to purchase a high-end, bladed ADC system and not fill the chassis. I remember way back in the day when my cohorts in the networking and storage (FC SAN) spaces had to worry about how much a switch’s backplane could actually handle compared to the sum of the ports you could put on the front. But that isn’t as much a concern with ADCs (or much of anything else these days); purpose-built devices tend to have plenty of bandwidth on the backplane, and you don’t buy an ADC built on commoditized hardware and expect to drop blades into it. But there are a reasonably large number of organizations that have purchased a bladed ADC and then either done nothing more with it than the original purpose, or gone out and bought separate ADCs (normally from the same vendor!) to handle new tasks. And the question there is… are you really maximizing your infrastructure that way? We are going through a whole cycle of storage consolidation precisely because our storage needs were met in this manner: there was corporate storage, divisional storage, departmental storage, and in many places team storage, all growing at the same time with no oversight – lots of “we’re completely out of space” at one level while another was overflowing. We, as an industry, need to apply the lessons learned there to our usage of ADCs.
Sure, you might have purchased that bladed chassis for project X, but when you kick off project Y, do you really want to purchase all-new hardware, or could you drop a couple of new blades into the chassis and use any of the methods various vendors offer to partition those blades off as separate entities? Of course you could. And it would normally be more economically feasible. There are cases where a different route can be shown to cost less, but there are other considerations to be made: the power consumption of a whole new device versus a blade, the management of a whole new device versus another interface on an existing device, and of course the cost of setting up and configuring a whole new device versus utilizing – and modifying – the configuration already in place for the other blades. From the F5 perspective, we have attempted to minimize the pain of an all-new configuration with iApps, a very cool feature that handles the details for you – but iApps won’t do the initial configuration of a new device, while added blades pick up a lot of that configuration for you. Indeed, if you are just expanding the capacity of an existing environment, that is automatic. Only if you’re doing new things – new apps, new ADC functionality, new segmentation – do you have to do configuration. If you’re not an F5 customer, you can of course check with your vendor, but I’m willing to bet you’ll have less work putting blades into an existing chassis. And don’t forget that a modern ADC is capable of doing a lot of things. If you’re looking for new LAN/WAN/security/application delivery functionality, check into what can go into your ADC chassis while you’re looking at other options. It is entirely possible that the ADC you have could perform the functionality you need, with a single (or simplified) management interface, leaving staff more time to deal with the ten million other issues that come up in an IT shop in the course of a year.
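To make that comparison concrete, here is a back-of-the-envelope sketch. Every figure in it is a hypothetical placeholder – not vendor pricing – and the hourly rate is assumed; the point is simply that hardware price is only one term in the comparison:

```python
# Hypothetical, illustrative figures only -- substitute your own quotes.
blade = {
    "hardware": 20_000,    # incremental blade price
    "annual_power": 500,   # marginal power/cooling in an already-running chassis
    "setup_hours": 4,      # existing configuration largely carries over
}
new_device = {
    "hardware": 35_000,    # standalone appliance
    "annual_power": 2_000, # a whole new device to power and cool
    "setup_hours": 40,     # all-new configuration and integration
}

HOURLY_RATE = 100  # loaded cost of an engineer-hour (assumed)

def three_year_cost(item):
    # Hardware + three years of power/cooling + one-time setup labor.
    return (item["hardware"]
            + 3 * item["annual_power"]
            + item["setup_hours"] * HOURLY_RATE)

for name, item in (("blade", blade), ("new device", new_device)):
    print(f"{name}: ${three_year_cost(item):,}")
```

With these placeholder numbers the blade comes out well ahead, but the real takeaway is the shape of the formula: power, cooling, management overhead, and configuration labor all belong in the decision, not just the sticker price.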
If your ADC is underutilized, it might just be possible that you could start using it for the given purpose with nothing more than a license key, saving even more time – and likely some money too.
Technorati Tags: Bare Metal Blog, VIPRION, Chassis, F5 Networks, Don MacVittie

F5 Friday: The Data Center is Always Greener on the Other Side of the ADC
Organizations interested in greening their data centers (green as in cash as well as in grass) will benefit from the ability to reduce, reuse, and recycle in just 4U of rack space with a leaner, greener F5 VIPRION. According to the latest data from the U.S. Energy Information Administration, the average cost of electricity for commercial use rose from 9.63 cents per kWh (Jan 2010) to 9.88 cents per kWh (Jan 2011). If you think that’s not significant, consider that the average cost of powering one device in the data center increased by about 3% from 2010 to 2011 – roughly $5 per year for a 250W device. On a per-device basis that’s not so bad, but start multiplying by the number of devices in an enterprise-class data center and it gets fairly significant fairly quickly – especially given that we haven’t yet calculated the cost of cooling those devices, either.
Medium is the New Large in Enterprise
Sometimes It Is About the Hardware
VIPRION 2400 and vCMP Presentation
VIPRION Platform Resources
vCMP: License to Virtualize
Virtual Clustered Multiprocessing (vCMP)
The ROI of Application Delivery Controllers in Traditional and Virtualized Environments
If a Network Can’t Go Virtual Then Virtual Must Come to the Network
Data Center Feng Shui: Architecting for Predictable Performance

F5 Friday: Speeds, Feeds and Boats
#vcmp It’s great to be fast and furious, but if your infrastructure handles like a boat you won’t be able to take advantage of its performance. We recently joined the land of modernity when I had a wild urge to acquire a Wii. Any game system is pretty useless without games, so we got some of those too. One of them, of course, had to be Transformers: The Game because, well, our three-year-old thinks he is a Transformer and I was curious as to how well the game recreated the transformation process. The three-year-old obviously doesn’t have the dexterity (or patience) to play, but he loves to watch other people play – people like his older brother. The first time our oldest sat down and played, he noted that Bumblebee, in particular, handled like a “boat.” Oh, he’s a fast car all right, but making it around corners, tight curves, and objects is difficult because he’s not very agile when you get down to it. Jazz, for the record, handles much better. Handling is important, of course, because the faster you go the more difficult it is to maneuver and drive accurately. Handling affects the overall experience because constantly readjusting direction and speed to get through town makes it difficult to efficiently find and destroy the “evil forces of the Decepticons.” Now, while the infrastructure in which you’re considering investing may be fast and furious, with high speeds and fat feeds, the question you have to ask yourself is, “How does she handle? Is she agile, or is she a boat?” Because constantly readjusting policies and capacity and configuration can make it difficult to efficiently deliver applications.
VIPRION 2400: High Speed, Fat Feeds and Agile to Boot
This week at Interop F5 announced the newest member of our VIPRION family, the VIPRION 2400 – a.k.a. Victoria. At first glance you might think the VIPRION 2400 is little more than a scaled-down version of the VIPRION 4000, our flagship BIG-IP chassis-based application delivery controller.
In many respects that’s true, but in many others it’s not. That’s because at the same time we also introduced a new technology called vCMP (Virtual Clustered Multiprocessing) that gives the platform some pretty awesome internal agility, which translates into operational and ultimately business agility. If the network can’t go virtual, then virtual must come to the network. It’s not just having a bladed, pay-as-you-grow system that makes VIPRION with vCMP agile. It’s the way in which you can provision and manage resources across blades, transparently, in a variety of different ways. If you’re an application-centric operations kind of group, you can manage and thus provision application delivery resources on VIPRION based on applications, not ports or IP addresses or blades. If you’re a web-site or domain-focused operations kind of group, manage and provision application delivery resources by VIP (Virtual IP Address) instead. If you’re an application delivery kind of group, you may want to manage by module instead. It’s your operations, your way. What’s awesome about vCMP and the VIPRION platforms is the ability to provision and manage application delivery resources as a pool, regardless of where they’re located. Say you started with one blade in a VIPRION 2400 chassis and grew to need a second. There’s no disruption, no downtime, no changes to the network necessary. Slap in a second blade and its resources are immediately available to be provisioned and managed as though they were merely part of one large pool. Conversely, in the event of a blade failure, the resources are shifted to other available CPUs and memory across the system. Not only can you provision at the resource layer, but you can also split those resources up by creating virtual instances of BIG-IP right on the platform.
Each “guest” on the VIPRION platform can be assigned its own resources, be managed by a completely different group, and is for all intents and purposes an isolated, stand-alone instance of BIG-IP – without additional hardware, without topological disruption, and without all the extra cables and switches that might be necessary to achieve such a feat using traditional application delivery systems. VIPRION 2400 has the speeds and feeds necessary to support a growing mid-sized organization – mid-sized from a traffic management perspective, not necessarily employee count. The increasing demands on even small and medium-sized businesses from new clients, video, and HTML5 are driving high volumes of traffic through architectures that are not necessarily prepared to handle the growth affordably or operationally. The VIPRION 2400 was designed to address that need – both to handle volume and to provide for growth over time, while being as flexible as possible to fit the myriad styles of architecture that exist in the real world. The explosion of virtualization inside the data centers of medium-sized businesses, too, is problematic. These organizations need a solution capable of supporting the security and delivery needs of virtualized desktops and applications in very flexible ways. VIPRION 2400 enables these organizations to take advantage of what has traditionally been a large-enterprise-class-only solution and to implement modern architectures and network topologies that can greatly assist virtualization and cloud computing efforts by providing the foundation of a dynamic, agile infrastructure.
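Conceptually, the pooled-resource behavior described above can be modeled like the sketch below. This is an illustrative model only – not F5’s implementation or API – showing why adding a blade grows capacity without reconfiguration and why a blade failure simply shrinks the pool that guests draw from:

```python
class Chassis:
    """Illustrative model of pooled blade resources (not F5's implementation)."""

    def __init__(self):
        self.blades = {}  # blade name -> CPU cores it contributes to the pool

    def add_blade(self, name, cores):
        # A new blade's resources join the pool immediately; nothing
        # else in the system needs to be reconfigured.
        self.blades[name] = cores

    def fail_blade(self, name):
        # A failed blade's cores simply leave the pool; guests are
        # rebalanced over whatever capacity remains.
        self.blades.pop(name, None)

    @property
    def total_cores(self):
        # Guests see one aggregate pool, not individual blades.
        return sum(self.blades.values())

chassis = Chassis()
chassis.add_blade("slot-1", 8)
print(chassis.total_cores)      # 8
chassis.add_blade("slot-2", 8)  # slap in a second blade...
print(chassis.total_cores)      # 16 -- capacity grows with no disruption
chassis.fail_blade("slot-1")
print(chassis.total_cores)      # 8 -- remaining capacity absorbs the load
```

The key design point the model captures is indirection: because guests are bound to the pool rather than to specific hardware, both growth and failure become pool-membership events instead of network or configuration changes.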
VIPRION 2400 RESOURCES
VIPRION 2400 and vCMP Presentation
VIPRION Platform Resources
F5 Introduces Midrange VIPRION Platform and Industry’s First Virtual Clustered Multiprocessing Technology
VIPRION 2400 - Quantum Performance
Virtual Clustered Multiprocessing (vCMP)
Medium is the New Large in Enterprise
Sometimes It Is About the Hardware

VIPRION and vCMP enable you to take advantage of more of the “50 Ways to Use Your BIG-IP System.” Share how you use your BIG-IP, get a free T-Shirt, and maybe more!

Medium is the New Large in Enterprise
Sometimes It Is About the Hardware
If a Network Can’t Go Virtual Then Virtual Must Come to the Network
Data Center Feng Shui: Architecting for Predictable Performance
F5 Friday: Have You Ever Played WoW without a Good Graphics Card?
All F5 Friday Posts on DevCentral
Data Center Feng Shui: SSL
When Did Specialized Hardware Become a Dirty Word?