SPDY/HTTP2 Profile Impact on Variable Use
When using a SPDY/HTTP2 profile, TCL variables set before the HTTP_REQUEST event are not carried over to subsequent events. This is by design: the current iRule/TCL implementation is not capable of handling multiple concurrent events, which may be the case with SPDY/HTTP2 multiplexing. This is easily reproducible with this simple iRule:

```
when CLIENT_ACCEPTED {
    set default_pool [LB::server pool]
}
when HTTP_REQUEST {
    log local0. "HTTP_REQUEST EVENT Client [IP::client_addr]:[TCP::client_port] - $default_pool -"
    # Fails with: can't read "default_pool": no such variable
    #     while executing "log local0. "HTTP_REQUEST EVENT -- $default_pool --"
}
```

Storing values in a table works perfectly well and can be used to work around the issue, as shown in the iRule below:

```
when CLIENT_ACCEPTED {
    table set [IP::client_addr]:[TCP::client_port] [LB::server pool]
}
when HTTP_REQUEST {
    set default_pool [table lookup [IP::client_addr]:[TCP::client_port]]
    log local0. "POOL: |$default_pool|"
}
```

This information is also available in the wiki on the HTTP2 namespace command list page.
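One practical refinement, not from the original post: the per-connection table entries above will otherwise sit in the session table until the default timeout expires. A minimal sketch that ages them out explicitly — the 300-second idle timeout and the CLIENT_CLOSED cleanup are illustrative assumptions, not required by the workaround:

```
when CLIENT_ACCEPTED {
    # Key by client address:port; the trailing 300 is an idle timeout
    # so abandoned entries age out of the session table on their own.
    table set [IP::client_addr]:[TCP::client_port] [LB::server pool] 300
}
when HTTP_REQUEST {
    set default_pool [table lookup [IP::client_addr]:[TCP::client_port]]
    log local0. "POOL: |$default_pool|"
}
when CLIENT_CLOSED {
    # Drop the entry as soon as the TCP connection goes away.
    table delete [IP::client_addr]:[TCP::client_port]
}
```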
AAM Dynamic Policy Updates And Removal of SPDY in BIG-IP v13

February 22nd marked the downloadable release of F5 BIG-IP v13. With any major release, our user community wants to know what changes and features will present themselves post upgrade. For BIG-IP Application Acceleration Manager (AAM) users, there are two changes to expect. Under the hood, users will enjoy improved performance from Bandwidth Controller updates for dynamic policies. For those who deployed SPDY services, keep reading.

Dynamic Policy Updates

Maybe you implemented dynamic bandwidth policies and maaaayyybeeee you saw policies not always metering "fairly", causing throttling prior to peak utilization. We saw it too. The algorithms behind the fairness policies within Bandwidth Controller, and their surrounding policy management, are updated to properly maximize your available bandwidth. BIG-IP v13 modifies the existing static and common token bucket algorithms. For you this means:

- Better tracking of slow flows
- Better utilization of assigned max rates in all conditions
- Better control of max rate and token distributions across TMM instances and blades

These updates require no user intervention to implement. If you have existing policies already in use, you should see performance improvements immediately. There are two mechanisms for assigning policy-based flow rates in AAM, Policy Enforcement Manager and Access Policy Manager (and iRules if you're splitting hairs), but the actual changes for these improvements reside within Bandwidth Controller.

Say Goodbye to SPDY

Google announced removal of SPDY support in favor of HTTP/2 in February 2015. SPDY (and NPN) support were removed from Chrome in February 2016. All other major browsers followed suit or are planning to drop support shortly. F5 introduced HTTP/2 support in 11.6, and with version 13 we complete SPDY's removal as a functioning service.

[Screenshot: SPDY service profiles in v12]
[Screenshot: SPDY profile creation removed in v13]

If for some reason you had a SPDY-only virtual server/application, kudos for being that one person, and let's start making plans to migrate your SPDY gateway/services to HTTP/2 before your upgrade. SPDY functionality exists in LTM and AAM, and GUI removal applies to both. We don't want you to be surprised.

SPDY removal and Bandwidth Controller fairness algorithm updates are the two changes you'll see affect AAM (and other modules) in BIG-IP v13. One requires no intervention and provides performance increases; the other is a deprecated web standard that should already have both feet out your infrastructure door. Let us know your experiences with BIG-IP v13 and keep on IT'ing
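As an aside on the mechanics behind the Bandwidth Controller changes above: a token bucket admits bursts up to the bucket size, then throttles traffic to a steady refill rate. A toy Tcl sketch of the classic algorithm — the rate, burst size, and 0.1 s arrival spacing are made-up illustration values, not BIG-IP internals:

```tcl
# Minimal token-bucket sketch: refill at `rate` tokens/sec up to
# `burst`, spend one token per admitted unit of work.
proc bucket_allow {bucketVar rate burst now} {
    upvar $bucketVar b
    set elapsed [expr {$now - [dict get $b last]}]
    set tokens [expr {min([dict get $b tokens] + $elapsed * $rate, $burst)}]
    dict set b last $now
    if {$tokens >= 1.0} {
        dict set b tokens [expr {$tokens - 1.0}]
        return 1
    }
    dict set b tokens $tokens
    return 0
}

# Simulate 10 arrivals 0.1s apart against a 2 tokens/sec, burst-5 bucket:
# the initial burst is absorbed, then requests are throttled to the rate.
set b [dict create tokens 5.0 last 0.0]
for {set i 0} {$i < 10} {incr i} {
    set t [expr {$i * 0.1}]
    puts [format "t=%.1fs allowed=%d" $t [bucket_allow b 2.0 5.0 $t]]
}
```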
What are you waiting for?

The future of HTTP is here, or almost here. It has been 5 years since SPDY was first introduced as a better way to deliver web sites. A lot has happened since then:

- Chrome, Firefox, Opera and some IE installations support SPDY.
- SPDY evolved from v2 to v3 to v3.1.
- Sites like Google, Facebook, Twitter, and Wordpress, to name just a few, are available via SPDY.
- F5 announced availability of a SPDY Gateway.
- The IETF HTTP working group announced SPDY is the starting point for HTTP/2.
- And most recently, Apple announced that Safari 8, due out this fall, will support SPDY!

This means that all major browsers will support SPDY by the end of the year, and by then the IETF is scheduled to have the HTTP/2 draft finalized.

This week the IETF working group published the latest draft of the HTTP/2 spec. The hope is that this will be the version that becomes the proposed RFC. The Internet Explorer team posted a blog at the end of May indicating that they have HTTP/2 in development for a future version of IE; there is no commitment whether this will be in IE 12 or another version, but they are preparing for the shift. We at F5 have been following the evolution of the spec and developing prototypes based on the various interoperability drafts to make sure we are ready to implement an HTTP/2 gateway as soon as possible.

So what are you waiting for? Why are you not using SPDY on your site? Using SPDY today allows you to see how HTTP/2 may potentially impact your applications and infrastructure. HTTP/2 is not a new protocol: there are no changes to the HTTP semantics and it does not obsolete the existing HTTP/1.1 message syntax. If it's not a new protocol and it doesn't obsolete HTTP/1.1, what is HTTP/2 exactly? Per the draft's abstract:

This specification describes an optimized expression of the syntax of the Hypertext Transfer Protocol (HTTP). HTTP/2 enables a more efficient use of network resources and a reduced perception of latency by introducing header field compression and allowing multiple concurrent messages on the same connection. It also introduces unsolicited push of representations from servers to clients. This specification is an alternative to, but does not obsolete, the HTTP/1.1 message syntax. HTTP's existing semantics remain unchanged.

HTTP/2 allows communication to occur with less data transmitted over the network, with the ability to send multiple requests and responses across a single connection, out of order and interleaved - oh yeah, and all over SSL. Let's look at these in a little more detail.

Sending less data has always been a good thing, but just how much improvement can be achieved by compressing headers? It turns out quite a bit. Headers have a lot of repetitive information in them: the cookies, encoding types, and cache settings, to name just a few. With all this repetitive information, compression can really help. Looking at the amount of downloaded data for a web page delivered over HTTP and over SPDY, we can see just how much savings can be achieved. Below is a sample of 10 objects delivered over HTTP and SPDY; the byte savings result in a total savings of 1,762 bytes. That doesn't sound like much, but we're only talking about 10 objects. The average home page now has close to 100 objects on it, and I'm sure the total number of hits to your website is well over that number. If your website gets 1 million hits a day then, extrapolating this out, the savings become 168 MB; if the hits are closer to 10 million, the savings near 1.7 GB.
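To make the arithmetic explicit, here is a quick Tcl sketch of that extrapolation, using the measured total of 1,762 bytes saved across the 10 sampled objects in the table below. The numbers come straight from the article; the script itself is just illustration:

```tcl
# 1762 bytes saved across 10 objects ~= 176.2 bytes saved per request.
set savings_per_request [expr {1762.0 / 10}]
foreach hits {1000000 10000000} {
    set saved_mb [expr {$savings_per_request * $hits / 1048576.0}]
    puts [format "%d hits/day -> ~%.0f MB saved per day" $hits $saved_mb]
}
# Output: 1000000 hits/day -> ~168 MB saved per day
#         10000000 hits/day -> ~1681 MB (roughly 1.7 GB) saved per day
```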
Over the course of a month or a year these savings will start to add up.

Object                                                 HTTP   SPDY   Byte Savings
https://.../SitePages/Home.aspx                       29179  29149            30
https://.../_layouts/1033/core.js                     84457  84411            46
https://.../_layouts/sp.js                            71834  71751            83
https://.../_layouts/sp.ribbon.js                     57999  57827           172
https://.../_layouts/1033/init.js                     42055  41864           191
https://.../_layouts/images/fgimg.png                 20478  20250           228
https://.../_layouts/images/homepageSamplePhoto.jpg   16935  16704           231
https://.../ScriptResource.axd                        27854  27617           237
https://.../_layouts/images/favicon.ico                5794   5525           269
https://.../_layouts/blank.js                           496    221           275

SPDY performed header compression via deflate, which was discovered to be vulnerable to CRIME attacks. As a result, HTTP/2 uses HPACK header compression, an HTTP-header-specific compression scheme which is not vulnerable to CRIME.

The next element to examine is the ability to send multiple requests and responses across a single connection, out of order and interleaved. We all know that latency can have a big impact on page load times and the end user experience. This is why HTTP 1.1 allowed for keep-alives, eliminating the need to perform a three-way handshake for each and every request. After keep-alives came domain sharding, and browsers eventually changed the default behavior to allow more than 2 concurrent TCP connections. The downside of multiple TCP connections is having to conduct the three-way handshake multiple times; wouldn't things be easier if all requests could just be sent over a single TCP connection? This is what HTTP/2 provides - and not only that, the responses can be returned in a different order than that in which they were requested.

Now onto the SSL component. HTTP/2 requires strong crypto - 128-bit EC or 2048-bit RSA. This requirement will be enforced by browsers and cannot be disabled. With the ever-growing number of attacks, having SSL everywhere is a good thing, but there are performance and reporting ramifications to encrypting all data. Organizations that deploy solutions to monitor, classify and analyze Internet traffic may no longer be able to do so.

All the changes coming in HTTP/2 have the potential to impact how an application is rendered and how infrastructure components will react. What are the consequences of having all requests and responses transmitted over SSL? Can the network support 50 concurrent requests for objects? Does the page render properly for the end user if objects are received out of order? On the positive side, you could end up with improved page load times and a reduction in the amount of data transferred. Stop waiting and start enabling the future of the web today.
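To give a feel for why the header indexing mentioned above helps, here is a toy Tcl sketch of the idea: the first occurrence of a header field is sent as a full literal and remembered, while repeats collapse to a small index. This illustrates only the concept - it is not the real HPACK wire format, nor its static/dynamic table rules:

```tcl
# Toy "index previously seen header fields" encoder.
proc encode {tableVar headers} {
    upvar $tableVar table
    set out {}
    foreach {name value} $headers {
        set field "$name: $value"
        if {[dict exists $table $field]} {
            lappend out "idx:[dict get $table $field]"   ;# tiny on the wire
        } else {
            dict set table $field [dict size $table]
            lappend out $field                            ;# full literal, once
        }
    }
    return $out
}

set table [dict create]
set hdrs {host www.example.com user-agent Mozilla/5.0 cookie session=abc123}
set r1 [join [encode table $hdrs] " | "]
set r2 [join [encode table $hdrs] " | "]
puts "request 1: $r1"
puts "request 2: $r2"   ;# every field collapses to an index
```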
Goodbye SPDY, Hello HTTP/2

The Chrome team at Google recently announced that they will be removing SPDY support from Chrome in early 2016. SPDY is an application layer protocol designed to improve the way that data is sent from web servers to clients. Depending on who you read, performance benefits ranged from a 2X speed increase down to negligible. Since, as of March 2015, only 3.6% of websites were running SPDY, maybe the end of SPDY isn't such big news - especially since SPDY is being replaced by HTTP/2. The HTTP/2 protocol is on the way to standardization, has pretty much all of the benefits of SPDY, and will undoubtedly become the standard for web traffic moving forward.

All of this is great, unless you are the one who is tasked with reconfiguring or re-implementing your server estate to switch from SPDY to HTTP/2. Fortunately, for those F5 customers who implemented SPDY using the SPDY gateway feature in BIG-IP LTM, switching to HTTP/2 is easy. TMOS 11.6 ships with HTTP/2 support - we've cautiously labeled this as 'for testing'; after all, the protocol has only just been finalized. All you need to do is configure an HTTP/2 profile and apply it to your Virtual Server, and (where clients are HTTP/2 capable) you are serving HTTP/2 - it's about 10 minutes of work. This also gives you a way to offer HTTP/2 support without changing your HTTP/1.x backend, which is handy because, based on the evidence of IE6, old web browsers can take a decade or more to die. In fact, maybe I should get my patent lodged for software to connect aging browsers to the next generation of web infrastructure?
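For a sense of what those ten minutes look like, here is a minimal tmsh sketch. The profile and virtual server names are placeholders and exact options vary by TMOS version, so treat this as an outline rather than a recipe; note that browsers only negotiate HTTP/2 over TLS, so the virtual server also needs a client-ssl profile:

```
# Create an HTTP/2 profile with default settings
create ltm profile http2 http2_test

# Attach it to an existing HTTPS virtual server (one that already
# carries http and client-ssl profiles)
modify ltm virtual vs_www profiles add { http2_test }

# Confirm the profile list on the virtual server
list ltm virtual vs_www profiles
```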
F5 Friday: Should you stay with HTTP/1 or move to HTTP/2?

Application experience aficionados take note: you have choices now. No longer are you constrained to just HTTP/1 with a side option of WebSockets or SPDY. HTTP/2 is also an option, one that, like its SPDY predecessor, brings with it several enticing benefits but is not without obstacles. In fact, it is those obstacles that may hold back adoption, according to the IDG research "Making the Journey to HTTP/2". In the research, respondents indicated several potential barriers to adoption, including backward compatibility with HTTP/1 and the "low availability" of HTTP/2 services.

In what's sure to be noticed as a circular dependency, the "low availability" is likely due to the "lack of backward compatibility" barrier. Conversely, the lack of backward compatibility with HTTP/1 is likely to prevent the deployment of HTTP/2 services and cause low availability of HTTP/2 services. Which in turn, well, you get the picture.

This is not a phantom barrier. The web was built on HTTP/1, and incompatibility is harder to justify today than it was when we routinely browsed the web and were shut out of cool apps because we were using the "wrong" browser. The level of integration between apps and the reliance on many other APIs for functionality pose a difficult problem for would-be adopters of HTTP/2 looking for the improved performance and more efficient resource utilization it brings. But it doesn't have to. You can have your cake and eat it too, as the saying goes.

HTTP Gateways

What you want is something that sits in between all those users and your apps and speaks their language (protocol), whether it's version 1 or version 2. You want an intermediary that's smart enough to translate SPDY or HTTP/2 to HTTP/1 so you don't have to change your applications to gain the performance and security benefits, and without investing hundreds of hours in upgrading web infrastructure. What you want is an HTTP Gateway.

At this point in the post, you will be unsurprised to learn that F5 provides just such a thing. Try to act surprised, though, it'll make my day.

One of the benefits of growing up from a load balancing to an application delivery platform is that you have to be fluent in the languages (protocols) of applications. One of those languages is HTTP, and so it's no surprise that at the heart of F5 services is the ability to support all the various flavors of HTTP available today: HTTP/1, SPDY, HTTP/2 and HTTP/S (whether over TLS or SSL). But more than just speaking the right language is the ability to proxy for the application with the user. This means that F5 services (like SDAS) sit in between users and apps and can translate across flavors of HTTP. Is your mobile app speaking HTTP/2 or SPDY but your app infrastructure only knows HTTP/1? No problem. F5 can make that connection happen. That's because we're a full proxy, with absolute control over a dual-communication stack that lets us do one thing on the client side while doing another on the server side. We can secure the outside and speak plain text on the inside. We can transition security protocols, web protocols, and network protocols (think IPv4 to IPv6). That means you can get those performance and resource-utilization benefits without ripping and replacing your entire web application infrastructure. You don't have to reject users because they're not using the right browser protocol, and you don't have to worry about losing visibility because of an SSL/TLS requirement.
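The full-proxy idea in miniature: terminate the client's connection, open a completely separate server-side connection, and shuttle data between the two, so each side can be handled on its own terms. A toy Tcl sketch of that dual-connection pattern follows - the hostname and ports are placeholders, and no protocol translation is shown (a real gateway would reframe HTTP/2 to HTTP/1 between the two sockets):

```tcl
# Toy full proxy: two independent TCP connections per client session.
proc relay {from to} {
    set data [read $from 4096]
    if {[eof $from]} { catch {close $from}; catch {close $to}; return }
    puts -nonewline $to $data
    flush $to
}

proc accept {client addr port} {
    # Client-side connection is terminated here; a separate
    # server-side connection is opened toward the backend.
    set server [socket backend.example.com 8080]
    foreach s [list $client $server] {
        fconfigure $s -blocking 0 -translation binary
    }
    fileevent $client readable [list relay $client $server]
    fileevent $server readable [list relay $server $client]
}

socket -server accept 8000
vwait forever
```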
You can learn more about F5's HTTP/2 and SPDY Gateway capabilities by checking out these blogs:

- What are you waiting for?
- F5 Synthesis: Your gateway to the future (of HTTP)
- Velocity 2014 HTTP 2.0 Gateway (feat Parzych)
- F5 Synthesis Reference Architecture: Acceleration
Ready or not, HTTP 2.0 is here

From HTTP to SPDY, to HTTP 2.0. The introduction of a new protocol optimized for today's evolving web services opens the door to a world of possibilities. But are you ready to make this change?

While the breakneck speed with which applications, devices and servers are evolving is common knowledge, few are aware that the HTTP protocol, such a key part of the modern internet, has seen only a few changes of significance since its inception.

HTTP - behind the times?

According to the Internet Engineering Task Force (IETF), HTTP 1.0 was first officially introduced in 1996. The following version, HTTP 1.1 - the version most commonly used to this day - was officially released in 1997. 1997! And we're still using it! To give you some idea of how old this makes HTTP 1.1 in terms of the development of the internet: in 1997 there was no Google! There was no PayPal! It goes without saying, of course, that since 1997, both the internet and the amount of traffic it has to handle have grown enormously.

Standardization of HTTP 2.0 by the IETF is scheduled for 2015. It's unrealistic to expect most businesses to immediately deploy HTTP 2.0 in their organizations. However, seeing the clear benefits of HTTP 2.0 - enabling infrastructure to handle huge volumes of traffic to access applications and services - many organizations will soon implement this new protocol.

F5: Ready for SPDY, ready for HTTP 2.0

The initial draft of HTTP 2.0 was based on the SPDY protocol. When SPDY was first introduced some years ago, the first company to provide a solution enabling businesses to "switch over" to SPDY was F5; we introduced a SPDY Gateway then. SPDY Gateway allowed our enterprise customers to overcome the biggest hurdle in adopting the SPDY protocol on the server end, with the stable and secure ADC and gateway services of BIG-IP. Organizations have the option to upgrade servers to support the protocol, or to use the gateway to handle the connection. This gives businesses the time necessary to make a smooth and gradual transition from HTTP 1.0 to SPDY on the server side.

SPDY Gateway is just one example of the many ways an ADC can contribute to a flexible and agile infrastructure for businesses. One well-known ADC use case alongside the SPDY protocol is SSL termination. Similarly, in the transition to HTTP 2.0, an HTTP 2.0 gateway will help organizations plan their adoption and migration to accommodate the demands of HTTP 2.0.

Thinking ahead

F5 is always at the forefront of technology evolution. The HTTP 2.0 Gateway is just one example of F5's dedication to the evolution of Application Delivery technology, making our customers' transitions to new technology as simple as possible. Currently, F5's BIG-IP supports HTTP 2.0 under our early access program. We expect general availability after HTTP 2.0 standardization is completed in 2015.

For a Japanese version of this post, please go here.
A "Gateway" Approach to Early Adoption of the New Web Protocol HTTP 2.0

From HTTP to SPDY, and now to HTTP 2.0 - with standardization of the latest web protocol optimized for evolving web services on the horizon, a variety of preparations are already underway.

While applications, devices and cybersecurity evolve at a dizzying pace, the HTTP protocol that carries the bulk of Internet traffic has in fact changed very little since its first version - a reality that is not widely appreciated.

According to materials from the IETF, the Internet standards body, the original HTTP 1.0 was standardized in 1996, and its successor HTTP 1.1 in 1997. That is before the company Google existed, and before Japan's once-ubiquitous i-mode service launched - which should give you a sense of how long this specification has gone without major change. Since then, as everyone knows, the Internet has exploded in popularity and traffic volumes continue to grow.

The HTTP 2.0 specification is scheduled to be standardized by the IETF in 2015. Although rapid adoption across enterprises is unlikely, infrastructures serving applications and services under sustained, massive request volumes stand to benefit the most, so organizations are expected to have good reason to evaluate it proactively.

A few years ago, when adoption of SPDY - the predecessor of HTTP 2.0 - began, F5 Networks was the first to propose a migration solution. F5's secure and stable ADC services and the gateway capability of BIG-IP allowed the biggest barrier to adoption - the work of making servers support SPDY - to be consolidated at the gateway, enabling a smooth transition of in-house service infrastructure to SPDY.

This SPDY gateway concept is a good example of how an ADC can contribute, in many situations, to building infrastructure flexible enough to respond to rapidly changing business conditions. From the early days of terminating SSL connections on the ADC for performance, through SPDY, and now HTTP 2.0 - the technologies differ, but the underlying approach is consistent. When migration to a new system becomes urgent, large-scale changes to IT equipment tend to be costly in money, time and staffing. With the added value of an HTTP 2.0 gateway on BIG-IP, organizations can make their infrastructure HTTP 2.0-ready quickly and with a small investment, and gain controls such as managing maximum request counts and request prioritization.

F5 will continue to develop this added value, unique to ADCs, together with our users. Those using F5 solutions may find themselves already prepared for newly emerging technologies without even realizing it.

F5's BIG-IP offers early access support for the latest draft of HTTP 2.0, with General Availability planned once the standard is finalized.

For the English version of this article, please go here.
Multipath TCP (MPTCP)

#mobile #webperf #IOS7 Two! Two connections at the same time, ah ah ah...

Long ago, when the web was young, we (as in the industry) figured out how to multiplex TCP (and later HTTP, which we now call message steering) in order to dramatically improve application performance while simultaneously reducing load on servers. Yes, you could do more with less. It was all pretty exciting stuff.

Now, at long last, we're seeing the inverse come to life on the client side in the form of Multipath TCP (MPTCP) or, if you prefer a more technical-sounding term to confuse your friends and family: inverse multiplexing. While the geeks among us (you know who you are) have always known how to use both the wired and wireless interfaces on our clients, it's never been something that had real advantages when it comes to web performance. There was no standard way of using both connections at the same time and, really, there was very little advantage. Seriously, if you're wired up to at least a 100Mbps full-duplex LAN, do you really need a half-duplex wireless connection to improve your performance? No. But in the case of mobile devices, the answer is a resounding yes - yes I do. Because 2 halves make a whole, right? Okay, maybe not, but it's certainly a whole lot closer.

THE MAGIC of MULTIPLEXING

Most mobile devices enable you to connect over both a wireless radio (mobile) network and a wireless LAN network. Most of the time you're probably using both at the same time without conscious thought. It just does what it does, as long as it's configured and connected. What it doesn't do, however, is enable an application to use both connections at the same time to connect up to an application. Your application can often use either one, but it is limited to using just one at a time. Unless you're using an MPTCP-enabled device.

TCP is built on the notion of a single connection between 2 hosts. MPTCP discards that notion and enables a device to seamlessly switch between, and/or simultaneously send, a TCP connection over multiple interfaces. Basically, MPTCP splits up a TCP connection into subflows, and is able to (based on the device) dynamically route messages across either of those subflows. This is, you're thinking, perfect for HTTP exchanges, which often require a significant number of "sub-requests" for a client to retrieve all the objects required for a given web page/application. Exactly. You're sending (and one hopes receiving) data twice as fast. Which on a mobile device is likely to be very noticeable.

The problem is (and you knew there was one, didn't you?) that both the client and the host need to "speak" MPTCP to realize its potential benefits with respect to application performance. There aren't a whole lot of implementations at the moment, though one of those being iOS7 is certain impetus for hosts (the server side of all those apps) to get refitted for MPTCP. Of course, that's unlikely, isn't it? If you're in the cloud, are your hosts MPTCP ready? If you're not in the cloud, is your own infrastructure MPTCP ready?

Like SPDY before it (and there's an interesting scenario - running MPTCP over SPDY), these kinds of protocol enhancements require support on both the client and the server. And generally speaking, while organizations want to be able to leverage the improvements in performance (or efficiency or security), they can't justify a forklift upgrade and the ensuing disruption to get there. Further complicating potential adoption is limited support.
Though Apple certainly holds a significant share of the mobile market, it's not the only player, and MPTCP is only supported by iOS7 - an upgrade that hasn't exactly been cheered as the greatest thing since sliced bread by the market in general. Whether MPTCP will gain momentum as iOS7 continues to roll out and other players adopt (and that is not necessarily a given) will be determined not only by application developers desiring (or perhaps demanding) support, but by whether or not organizations are able to rapidly roll out support on their end without completely replacing their entire infrastructure.

A very good (and fairly technical) article on MPTCP and iOS7 is available from Ars Technica. And of course your day wouldn't be complete unless I pointed out the MPTCP RFC.
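To make the inverse-multiplexing idea concrete, here is a toy Tcl sketch: one message is split across two "subflows" and reassembled by sequence number regardless of the order in which the subflows deliver. Purely an illustration - real MPTCP does this in the kernel, with per-subflow TCP sequence spaces and path management:

```tcl
set message "GET /index.html HTTP/1.1"
set subflow(wifi) {}
set subflow(lte)  {}

# Sender: alternate chunks across the two paths, tagging each with a
# sequence number so the receiver can put them back in order.
set seq 0
foreach chunk [regexp -all -inline {.{1,4}} $message] {
    set path [expr {$seq % 2 ? "lte" : "wifi"}]
    lappend subflow($path) [list $seq $chunk]
    incr seq
}

# Receiver: merge whatever arrives from both paths (here deliberately
# out of order), then sort by sequence number to reassemble.
set received [concat $subflow(lte) $subflow(wifi)]
set reassembled ""
foreach item [lsort -integer -index 0 $received] {
    append reassembled [lindex $item 1]
}
puts $reassembled   ;# -> GET /index.html HTTP/1.1
```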
Architecting Scalable Infrastructures: CPS versus DPS

#webperf As we continue to find new ways to make connections more efficient, capacity planning must look to other metrics to ensure scalability without compromising performance.

Infrastructure metrics have always been focused on speeds and feeds. Throughput, packets per second, connections per second, etc... These metrics have been used to evaluate and compare network infrastructure for years, ultimately being used as a critical component in data center design. This makes sense. After all, it's not rocket science to figure out that a firewall capable of handling 10,000 connections per second (CPS) will overwhelm a next-hop device (load balancer, A/V scanner, etc...) only capable of 5,000 CPS. Or will it? The problem with old skool performance metrics is they focus on ingress, not egress capacity. With SDN pushing a new focus on both northbound and southbound capabilities, it makes sense to revisit the metrics upon which we evaluate infrastructure and design data centers.

CONNECTIONS versus DECISIONS

As we've progressed from focusing on packets to sessions, from IP addresses to users, from servers to applications, we've necessarily seen an evolution in the intelligence of network components. It's not just application delivery that's gotten smarter, it's everything. Security, access control, bandwidth management, even routing (think NAC), has become much more intelligent. But that intelligence comes at a price: processing. That processing turns into latency as each device takes a certain amount of time to inspect, evaluate and ultimately decide what to do with the data. And therein lies the key to our conundrum: it makes a decision. That decision might be routing-based or security-based or even logging-based. What the decision is is not as important as the fact that it must be made. SDN necessarily brings this key differentiator between legacy and next-generation infrastructure to the fore, as it's not just software-defined but software-deciding networking. When a switch doesn't know what to do with a packet in SDN, it asks the controller, which evaluates and makes a decision. The capacity of SDN - and of any modern infrastructure - is at least partially determined by how fast it can make decisions.

Examples of decisions:

- URI-based routing (load balancers, application delivery controllers)
- Virus scanning
- SPAM scanning
- Traffic anomaly scanning (IPS/IDS)
- SQLi / XSS inspection (web application firewalls)
- SYN flood protection (firewalls)
- BYOD policy enforcement (access control systems)
- Content scrubbing (web application firewalls)

The DPS capacity of a system is not the same as its connection capacity, which is merely the measure of how many new connections a second can be established (and in many cases how many connections can be simultaneously sustained). Such a measure merely determines how optimized the networking stack of any given solution might be, as connections - whether TCP or UDP or SMTP - are protocol oriented, and it is the networking stack that determines how well connections are managed. The CPS rate of any given device tells us nothing about how well it will actually perform its appointed tasks. That's what the Decisions Per Second (DPS) metric tells us.

CONSIDERING BOTH CPS and DPS

Reality is that most systems will have a higher CPS compared to their DPS. That's not necessarily bad, as evaluating data as it flows through a device requires processing, and processing necessarily takes time.
Using both CPS and DPS merely recognizes this truth and forces it to the fore, where it can be used to better design the network. A combined metric helps design the network by offering insight into the real capacity of a given device, rather than a marketing capacity. When we look only at CPS, for example, we might feel perfectly comfortable with a topological design with a flow of similar CPS capacities. But what we really want is to make sure that DPS -> CPS (and vice versa) capabilities are matched up correctly, lest we introduce more latency than is necessary into a given flow. What we don't want is to end up with a device with a high DPS rate feeding into a device with a lower CPS rate. We also don't want to design a flow in which DPS rates successively decline. Doing so means we're adding more and more latency into the equation.

The DPS rate is a much better indicator of capacity than CPS for designing high-performance networks because it is a realistic measure of performance, and yet a high DPS coupled with a low CPS would be disastrous. Luckily, it is almost always the case that a mismatch in CPS and DPS will favor CPS, with DPS being the lower of the two metrics in almost all cases. What we want to see is as close a CPS:DPS ratio as possible. The ideal is 1:1, of course, but given the nature of inspecting data it is unrealistic to expect such a tight ratio. Still, if the ratio becomes too high, it indicates a potential bottleneck in the network that must be addressed.

For example, assume an extreme case of a CPS:DPS of 2:1. The device can establish 10,000 CPS, but only process at a rate of 5,000 DPS, leading to increasing latency or other undesirable performance issues as connections queue up waiting to be processed. Obviously there's more at play than just new CPS and DPS (concurrent connection capability is also a factor), but the new CPS and DPS relationship is a good general indicator of potential issues.

Knowing the DPS of a device enables architects to properly scale out the infrastructure to remediate potential bottlenecks. This is particularly true when TCP multiplexing is in play, because it necessarily reduces CPS to the target systems but in no way impacts the DPS. On the ingress, too, are emerging protocols like SPDY that make more efficient use of TCP connections, making CPS an unreliable measure of capacity, especially if DPS is significantly lower than the CPS rating of the system. Relying upon CPS alone - particularly when using TCP connection management technologies - as a means to achieve scalability can negatively impact performance. Testing systems to understand their DPS rate is paramount to designing a scalable infrastructure with consistent performance.

Related reading:

- The Need for (HTML5) Speed
- SPDY versus HTML5 WebSockets
- Y U No Support SPDY Yet?
- Curing the Cloud Performance Arrhythmia
- F5 Friday: Performance, Throughput and DPS
- Data Center Feng Shui: Architecting for Predictable Performance
- On Cloud, Integration and Performance
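Returning to that 2:1 example, a quick Tcl sketch shows why the mismatch hurts: connections arrive at the CPS rate but are serviced at the DPS rate, so the backlog - and the wait it implies - grows every second. The numbers come from the example above; the linear model is a deliberate simplification:

```tcl
# New connections arrive at 10,000/sec; decisions complete at 5,000/sec.
set cps 10000.0
set dps 5000.0
set backlog 0.0
for {set sec 1} {$sec <= 5} {incr sec} {
    set backlog [expr {$backlog + $cps - $dps}]
    set wait_ms [expr {1000.0 * $backlog / $dps}]
    puts [format "after %ds: %.0f connections queued, ~%.0f ms added latency" \
        $sec $backlog $wait_ms]
}
```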
There is more to it than performance.

Did you ever notice that sometimes, "high efficiency" furnaces aren't? That some things the furnace just doesn't cover - like the quality of your ductwork, for example? The same is true of a "high performance" race car. Yes, it is VERY fast, assuming a nice long flat surface for it to drive on. Put it on a dirt road in the rainy season, and, well, it's just a gas hog. Or worse, a stuck car. I could continue the list. A "high energy" employee can be relatively useless if they are assigned tasks at which brainpower, not activity rate, determines success... But I'll leave it at those three; I think you get the idea.

The same is true of your network. Frankly, increasing your bandwidth in many scenarios will not yield the results you expected. Oh, it will improve traffic flow, and overall the performance of apps on the network will improve, but the question is "how much?" It would be reasonable - or at least not unreasonable - to expect that doubling Internet bandwidth should stave off problems until you double bandwidth usage. But oftentimes the problems are with the overloading apps we're placing upon the network. Sometimes, it's not the network at all.

Check the ecosystem, not just the network.

When I was the Storage and Servers Editor over at NWC, I reviewed a new (at the time) HP server that was loaded. It had a ton of memory, a ton of cores, and could make practically any application scream. It even had two gigabit NICs in it. But they weren't enough. While I had almost nothing bad to say about the server itself, I did observe in the summary of my article that the network was now officially the bottleneck. Since the machine had high-speed SAS disks, disk I/O was not as big a deal as it traditionally has been; high-speed cached memory meant memory I/O wasn't a problem at all; and multiple cores meant you could cram a ton of processing power in. But team those two NICs and you'd end up with slightly less than 2 Gigabits of network throughput. Assuming 100% saturation, that was really less than 250 Megabytes per second, and that counts both in and out. For query-intensive database applications or media streaming servers, that just wasn't keeping pace with the server. Now here we are, six or so years later, and similar servers are in use all over the globe... running VMs. Meaning that several copies of the OS are now carving up that throughput.

So start with your server. Check it first if the rest of the network is performing; it might just be the problem. And while we're talking about servers, the obvious one needs to be mentioned... Don't forget to check CPU usage. You just might need a bigger server or load balancing, or, these days, fewer virtuals on your server. Heck, as long as we're talking about servers, let's consider the app too. The last few years, for a variety of reasons, we've seen less focus on apps whose performance is sucktacular, but it still happens. Worth looking into if the server turns out to be the bottleneck.

Old gear is old.

I was working on a network that deployed an ancient Cisco switch. The entire network was 1 Gig, except for that single switch. But tracing wires showed that switch to lie between the Internet and the internal network. A simple error, easily fixed, but an easy error to make in a complex environment, and certainly one to be aware of. That switch was 10/100 only. We pulled it out of the network entirely, and performance instantly improved.

There's necessary traffic, and then there's...

Not all of the traffic on your network needs to be.
And all that does need to be doesn't have to be so bloated. Look for sources of UDP broadcasts. More often than you would think, applications broadcast traffic that you don't care about. Cut them off. For other traffic, well, there is Application Delivery Optimization. ADO improves application delivery through a variety of technical solutions, but we'll focus on those that make your network and your apps seem faster. You already know about them - compression, caching, image optimization... and in the case of back-end services, de-duplication. But have you considered what they do other than improve perceived or actual performance?

Free Bandwidth

Anything that reduces the size of application data leaving your network also reduces the burden on your Internet connection. This goes without saying, but as I alluded to above, we sometimes overlook the fact that it is not just application performance we're impacting, but the effectiveness of our connections - connections that grow more expensive by leaps and bounds each time we have to upgrade them. While improving application performance is absolutely a valid reason to seek out ADO, delaying or eliminating the need to upgrade your Internet connection(s) is another. Indeed, in many organizations it is far easier to do TCO justification based upon deferred upgrades than it is based upon "our application will be faster", while both are benefits of ADO.

New stuff!

And as time wears on, SPDY, IPv6, and a host of other technologies will be more readily available to help you improve your network. Meanwhile, check out gateways for these protocols to make the transition easier.

In Summation

There are a lot of reasons for apps not to perform, and there are a lot of benefits to ADO. I've listed some of the easier problems to ferret out; the deeper into your particular network you get, the harder it is to generalize problems. But these should give you a taste for the types of things to look for. And a bit more motivation to explore ADO. Of course I hope you choose F5 gear for ADO and overall application networking, but there are other options out there. I think. Maybe.
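As a rough feel for the "free bandwidth" point above, here is a quick Tcl check (requires Tcl 8.6 for the built-in zlib command) of how much a repetitive text payload shrinks under deflate. The payload is invented purely for illustration; real savings depend entirely on your content:

```tcl
# Repetitive text, like HTTP headers or HTML, compresses very well.
set payload [string repeat "user=alice; theme=dark; cache-control=no-cache; " 40]
set deflated [zlib deflate $payload]
puts [format "original: %d bytes, deflated: %d bytes (%.0f%% smaller)" \
    [string length $payload] [string length $deflated] \
    [expr {100.0 * (1.0 - [string length $deflated] / double([string length $payload]))}]]
```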