spdy
Goodbye SPDY, Hello HTTP/2
The Chrome team at Google recently announced that they will be removing SPDY support from Chrome in early 2016. SPDY is an application layer protocol designed to improve the way that data is sent from web servers to clients. Depending on whom you read, the performance benefits ranged from a 2x speed increase down to negligible. Given that, as of March 2015, only 3.6% of websites were running SPDY, maybe the end of SPDY isn't such big news, especially since SPDY is being replaced by HTTP/2. The HTTP/2 protocol is on the way to standardization, has pretty much all of the benefits of SPDY, and will undoubtedly become the standard for web traffic moving forward.

All of this is great, unless you are the one tasked with reconfiguring or re-implementing your server estate to switch from SPDY to HTTP/2. Fortunately, for those F5 customers who implemented SPDY using the SPDY gateway feature in BIG-IP LTM, switching to HTTP/2 is easy. TMOS 11.6 ships with HTTP/2 support (we've cautiously labeled this 'for testing', since the protocol has only just been finalized). All you need to do is configure an HTTP/2 profile and apply it to your virtual server, and, where clients are HTTP/2 capable, you are serving HTTP/2. It's about 10 minutes' work.
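For the curious, those 10 minutes look roughly like this in tmsh. This is a minimal sketch with hypothetical object names, and it assumes the virtual server already carries the HTTP and client-ssl profiles that HTTP/2 requires:

    # create an HTTP/2 profile with default settings, then attach it
    tmsh create ltm profile http2 my_http2
    tmsh modify ltm virtual my_https_vs profiles add { my_http2 }

HTTP/2-capable clients then negotiate the protocol during the TLS handshake, while everyone else keeps getting HTTP/1.x from the same virtual server.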
This also gives you a way to offer HTTP/2 support without changing your HTTP/1.x backend, which is handy because, based on the evidence of IE6, old web browsers can take a decade or more to die. In fact, maybe I should get my patent lodged for software to connect aging browsers to the next generation of web infrastructure?

Multipath TCP (MPTCP)

#mobile #webperf #IOS7 Two! Two connections at the same time, ah ah ah...

Long ago, when the web was young, we (as in the industry) figured out how to multiplex TCP (and later HTTP, which we now call message steering) in order to dramatically improve application performance while simultaneously reducing load on servers. Yes, you could do more with less. It was all pretty exciting stuff. Now, at long last, we're seeing the inverse come to life on the client side in the form of Multipath TCP (MPTCP) or, if you prefer a more technical-sounding term to confuse your friends and family, inverse multiplexing. While the geeks among us (you know who you are) have always known how to use both the wired and wireless interfaces on our clients, it's never been something that had real advantages when it comes to web performance. There was no standard way of using both connections at the same time and, really, there was very little advantage. Seriously, if you're wired up to at least a 100 Mbps full-duplex LAN, do you really need a half-duplex wireless connection to improve your performance? No. But in the case of mobile devices, the answer is a resounding yes. Yes I do. Because 2 halves make a whole, right? Okay, maybe not, but it's certainly a whole lot closer.

THE MAGIC of MULTIPLEXING

Most mobile devices enable you to connect over both the wireless radio (mobile) network and a wireless LAN. Most of the time you're probably using both at the same time without conscious thought. It just does what it does, as long as it's configured and connected. What it doesn't do, however, is let a single application use both connections at the same time. Your application can often use either one, but it is limited to using just one at a time. Unless you're using an MPTCP-enabled device.

TCP is built on the notion of a single connection between 2 hosts. MPTCP discards that notion and enables a device to seamlessly switch between, and/or simultaneously carry, a TCP connection over multiple interfaces. Basically, MPTCP splits up a TCP connection into subflows and is able (based on the device) to dynamically route messages across either of those subflows. This is, you're thinking, perfect for HTTP exchanges, which often require a significant number of "sub-requests" for a client to retrieve all the objects required for a given web page or application. Exactly. You're sending (and one hopes receiving) data twice as fast, which on a mobile device is likely to be very noticeable.

The problem is (and you knew there was one, didn't you?) that both the client and the host need to "speak" MPTCP to realize its potential benefits with respect to application performance. There aren't a whole lot of implementations at the moment, though one of them being iOS7 is certainly an impetus for hosts (the server side of all those apps) to get refitted for MPTCP. Of course, that's unlikely, isn't it? If you're in the cloud, are your hosts MPTCP ready? If you're not in the cloud, is your own infrastructure MPTCP ready? Like SPDY before it (and there's an interesting scenario: running SPDY over MPTCP), these kinds of protocol enhancements require support on both the client and the server, and generally speaking, while organizations want to be able to leverage the improvements in performance (or efficiency or security), they can't justify a forklift upgrade and the ensuing disruption to get there. Further complicating potential adoption is limited support.
Though Apple certainly holds a significant share of the mobile market, it's not the only player, and MPTCP is supported only by iOS7, an upgrade that hasn't exactly been cheered as the greatest thing since sliced bread by the market in general. Whether MPTCP will gain momentum as iOS7 continues to roll out and other players adopt it (and that is not necessarily a given) will be determined not only by application developers desiring (or perhaps demanding) support, but by whether or not organizations are able to rapidly roll out support on their end without completely replacing their entire infrastructure.
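On that last point, an intermediary can soften the landing: a full proxy can terminate MPTCP subflows from clients while speaking ordinary TCP to unmodified servers. As a hedged sketch only, assuming BIG-IP 11.5 or later (where the TCP profile gained an mptcp option; verify the attribute on your version) and a hypothetical profile name:

    # create a TCP profile with MPTCP enabled for client-side connections
    tmsh create ltm profile tcp mptcp_tcp defaults-from tcp mptcp enabled

You would then attach this as the client-side protocol profile on the virtual server fronting your mobile application; the servers behind it need no changes.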
A very good (and fairly technical) article on MPTCP and iOS7 is available from Ars Technica. And of course your day wouldn't be complete unless I pointed out the MPTCP RFC.

What are you waiting for?

The future of HTTP is here, or almost here. It has been 5 years since SPDY was first introduced as a better way to deliver web sites. A lot has happened since then:

- Chrome, Firefox, Opera and some IE installations support SPDY.
- SPDY evolved from v2 to v3 to v3.1.
- Sites like Google, Facebook, Twitter, and Wordpress, to name just a few, are available via SPDY.
- F5 announced availability of a SPDY Gateway.
- The IETF HTTP working group announced SPDY is the starting point for HTTP/2.
- And most recently, Apple announced that Safari 8, due out this fall, will support SPDY!

This means that by the end of the year all major browsers will support SPDY, and the IETF is scheduled to have the HTTP/2 draft finalized. This week the IETF working group published the latest draft of the HTTP/2 spec; the hope is that this will be the version that becomes the proposed RFC. The Internet Explorer team posted a blog at the end of May indicating that they have HTTP/2 in development for a future version of IE. There is no commitment as to whether this will be in IE 12 or another version, but they are preparing for the shift. We at F5 have been following the evolution of the spec and developing prototypes based on the various interoperability drafts to make sure we are ready as soon as possible to implement an HTTP/2 gateway.

So what are you waiting for? Why are you not using SPDY on your site? Using SPDY today allows you to see how HTTP/2 may potentially impact your applications and infrastructure. HTTP/2 is not a new protocol: there are no changes to the HTTP semantics, and it does not obsolete the existing HTTP/1.1 message syntax. If it's not a new protocol and it doesn't obsolete HTTP/1.1, what is HTTP/2 exactly? Per the draft's abstract:

This specification describes an optimized expression of the syntax of the Hypertext Transfer Protocol (HTTP). HTTP/2 enables a more efficient use of network resources and a reduced perception of latency by introducing header field compression and allowing multiple concurrent messages on the same connection. It also introduces unsolicited push of representations from servers to clients. This specification is an alternative to, but does not obsolete, the HTTP/1.1 message syntax. HTTP's existing semantics remain unchanged.

HTTP/2 allows communication to occur with less data transmitted over the network, and with the ability to send multiple requests and responses across a single connection, out of order and interleaved. Oh yeah, and all over SSL. Let's look at these in a little more detail.

Sending less data has always been a good thing, but just how much improvement can be achieved by compressing headers? It turns out quite a bit. Headers carry a lot of repetitive information: cookies, encoding types and cache settings, to name just a few. With all this repetitive information, compression can really help. Looking at the amount of downloaded data for a web page delivered over HTTP and over SPDY, we can see just how much can be saved. Below is a sample of 10 objects delivered over HTTP and SPDY; the per-object byte savings add up to a total of 1762 bytes. That doesn't sound like much, but we're only talking about 10 objects. The average home page now has close to 100 objects on it, and I'm sure the total number of hits to your website is well over that number. If your website gets 1 million hits a day, extrapolating this out the savings become about 168 MB; if the hits are closer to 10 million, the savings near 1.7 GB. Over the course of a month or a year, these savings will start to add up.

(all object sizes in bytes)
URL                                                    HTTP    SPDY   Byte Savings
https://.../SitePages/Home.aspx                       29179   29149             30
https://.../_layouts/1033/core.js                     84457   84411             46
https://.../_layouts/sp.js                            71834   71751             83
https://.../_layouts/sp.ribbon.js                     57999   57827            172
https://.../_layouts/1033/init.js                     42055   41864            191
https://.../_layouts/images/fgimg.png                 20478   20250            228
https://.../_layouts/images/homepageSamplePhoto.jpg   16935   16704            231
https://.../ScriptResource.axd                        27854   27617            237
https://.../_layouts/images/favicon.ico                5794    5525            269
https://.../_layouts/blank.js                           496     221            275
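Those extrapolations are just arithmetic, and worth sanity-checking. A quick back-of-the-envelope calculation (plain Tcl, runnable in tclsh):

    # 1762 bytes saved across 10 objects, so roughly 176 bytes per request
    set per_request [expr {1762.0 / 10}]
    foreach hits {1000000 10000000} {
        set mib [expr {$per_request * $hits / 1048576.0}]
        puts [format "%d hits/day -> %.0f MiB saved" $hits $mib]
    }
    # prints: 1000000 hits/day -> 168 MiB saved
    #         10000000 hits/day -> 1680 MiB saved (about 1.7 GB)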
SPDY performed header compression via deflate, which was discovered to be vulnerable to CRIME attacks. As a result, HTTP/2 uses HPACK, an HTTP-header-specific compression scheme that is not vulnerable to CRIME.

The next element to examine is the ability to send multiple requests and responses across a single connection, out of order and interleaved. We all know that latency can have a big impact on page load times and the end-user experience. This is why HTTP 1.1 allowed for keep-alives, eliminating the need to perform a three-way handshake for each and every request. After keep-alives came domain sharding, and browsers eventually changed their default behavior to allow more than 2 concurrent TCP connections. The downside of multiple TCP connections is having to conduct the three-way handshake multiple times; wouldn't things be easier if all requests could just be sent over a single TCP connection? This is what HTTP/2 provides, and on top of that, the responses can be returned in a different order than that in which they were requested.

Now onto the SSL component. HTTP/2 requires strong crypto: 128-bit EC or 2048-bit RSA. This requirement will be enforced by browsers and cannot be disabled. With the ever-growing number of attacks, having SSL everywhere is a good thing, but there are performance and reporting ramifications to encrypting all data. Organizations that deploy solutions to monitor, classify and analyze Internet traffic may no longer be able to do so.

All the changes coming in HTTP/2 have the potential to impact how an application is rendered and how infrastructure components will react. What are the consequences of having all requests and responses transmitted over SSL? Can the network support 50 concurrent requests for objects? Does the page render properly for the end user if objects are received out of order? On the positive side, you could end up with improved page load times and a reduction in the amount of data transferred. Stop waiting, and start enabling the future of the web today.
F5 Friday: Should you stay with HTTP/1 or move to HTTP/2?

Application experience aficionados take note: you have choices now. No longer are you constrained to just HTTP/1 with a side option of WebSockets or SPDY. HTTP/2 is also an option, one that, like its SPDY predecessor, brings with it several enticing benefits but is not without obstacles. In fact, it is those obstacles that may hold back adoption, according to the IDG research report "Making the Journey to HTTP/2". In the research, respondents indicated several potential barriers to adoption, including backward compatibility with HTTP/1 and the "low availability" of HTTP/2 services. In what's sure to be noticed as a circular dependency, the "low availability" is likely due to the "lack of backward compatibility" barrier. Conversely, the lack of backward compatibility with HTTP/1 is likely to prevent the deployment of HTTP/2 services and cause low availability of HTTP/2 services. Which in turn, well, you get the picture.

This is not a phantom barrier. The web was built on HTTP/1, and incompatibility is harder to justify today than it was when we routinely browsed the web and were shut out of cool apps because we were using the "wrong" browser. The level of integration between apps and the reliance on many other APIs for functionality pose a difficult problem for would-be adopters of HTTP/2 looking for the improved performance and efficiency of resource utilization it brings. But it doesn't have to. You can have your cake and eat it too, as the saying goes.

HTTP Gateways

What you want is something that sits in between all those users and your apps and speaks their language (protocol), whether it's version 1 or version 2. You want an intermediary that's smart enough to translate SPDY or HTTP/2 to HTTP/1, so you don't have to change your applications to gain the performance and security benefits, and don't have to invest hundreds of hours in upgrading web infrastructure. What you want is an HTTP gateway. At this point in the post, you will be unsurprised to learn that F5 provides just such a thing. Try to act surprised, though; it'll make my day.

One of the benefits of growing up from a load balancing platform into an application delivery platform is that you have to be fluent in the languages (protocols) of applications. One of those languages is HTTP, and so it's no surprise that at the heart of F5 services is the ability to support all the various flavors of HTTP available today: HTTP/1, SPDY, HTTP/2 and HTTP/S (whether over TLS or SSL). But more than just speaking the right language is the ability to proxy the application for the user. F5 services (like SDAS) sit in between users and apps and can translate across flavors of HTTP. Is your mobile app speaking HTTP/2 or SPDY while your app infrastructure only knows HTTP/1? No problem. F5 can make that connection happen. That's because we're a full proxy, with absolute control over a dual communication stack that lets us do one thing on the client side while doing another on the server side. We can secure the outside and speak plain text on the inside. We can transition security protocols, web protocols, and network protocols (think IPv4 to IPv6). That means you can get those performance and resource-utilization benefits without ripping and replacing your entire web application infrastructure. You don't have to reject users because they're not using the right browser protocol, and you don't have to worry about losing visibility because of an SSL/TLS requirement.
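To make that concrete, here is a minimal tmsh sketch of such a gateway: HTTP/2 on the client side, plain HTTP/1.1 to an unchanged pool on the server side. All names and addresses are illustrative, and clientssl stands in for a client-ssl profile already configured with your certificate and key:

    # HTTP/2 profile, a pool of HTTP/1.1 servers, and a TLS virtual server tying them together
    tmsh create ltm profile http2 http2_gw
    tmsh create ltm pool http1_pool members add { 10.0.0.10:80 10.0.0.11:80 }
    tmsh create ltm virtual www_gw_vs destination 203.0.113.10:443 ip-protocol tcp \
        profiles add { tcp http http2_gw clientssl } pool http1_pool

Clients that negotiate HTTP/2 during the TLS handshake get HTTP/2; everyone else falls back to HTTP/1.x, and the pool members never know the difference.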
You can learn more about F5's HTTP/2 and SPDY gateway capabilities by checking out these blogs:

- What are you waiting for?
- F5 Synthesis: Your gateway to the future (of HTTP)
- Velocity 2014 HTTP 2.0 Gateway (feat Parzych)
- F5 Synthesis Reference Architecture: Acceleration
SPDY/HTTP2 Profile Impact on Variable Use

When using a SPDY/HTTP2 profile, TCL variables set before the HTTP_REQUEST event are not carried over to further events. This is by design, as the current iRule/TCL implementation is not capable of handling multiple concurrent events, which may be the case with SPDY/HTTP2 multiplexing. This is easily reproducible with this simple iRule:

    when CLIENT_ACCEPTED {
        set default_pool [LB::server pool]
    }
    when HTTP_REQUEST {
        log local0. "HTTP_REQUEST EVENT Client [IP::client_addr]:[TCP::client_port] - $default_pool -"
        # fails with: can't read "default_pool": no such variable
        #     while executing "log local0. "HTTP_REQUEST EVENT -- $default_pool --"
    }

Storing values in the session table works perfectly well and can be used to work around the issue, as shown in the iRule below:

    when CLIENT_ACCEPTED {
        # key the entry by client address:port so each connection gets its own value
        table set [IP::client_addr]:[TCP::client_port] [LB::server pool]
    }
    when HTTP_REQUEST {
        set default_pool [table lookup [IP::client_addr]:[TCP::client_port]]
        log local0. "POOL: |$default_pool|"
    }
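One housekeeping note on the workaround (my addition, not part of the original post): entries created with table set persist until their idle timeout expires, so when keying by connection it can be worth deleting the entry explicitly once the connection goes away. A minimal sketch:

    when CLIENT_CLOSED {
        # remove this connection's entry rather than waiting for the
        # table's idle timeout to reclaim it
        table delete [IP::client_addr]:[TCP::client_port]
    }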
This information is also available in the wiki on the HTTP2 namespace command list page.

Architecting Scalable Infrastructures: CPS versus DPS

#webperf As we continue to find new ways to make connections more efficient, capacity planning must look to other metrics to ensure scalability without compromising performance.

Infrastructure metrics have always been focused on speeds and feeds. Throughput, packets per second, connections per second, etc… These metrics have been used to evaluate and compare network infrastructure for years, ultimately being used as a critical component in data center design. This makes sense. After all, it's not rocket science to figure out that a firewall capable of handling 10,000 connections per second (CPS) will overwhelm a next-hop (load balancer, A/V scanner, etc…) device only capable of 5,000 CPS. Or will it? The problem with old skool performance metrics is that they focus on ingress, not egress, capacity. With SDN pushing a new focus on both northbound and southbound capabilities, it makes sense to revisit the metrics upon which we evaluate infrastructure and design data centers.

CONNECTIONS versus DECISIONS

As we've progressed from focusing on packets to sessions, from IP addresses to users, from servers to applications, we've necessarily seen an evolution in the intelligence of network components. It's not just application delivery that's gotten smarter, it's everything. Security, access control, bandwidth management, even routing (think NAC), has become much more intelligent. But that intelligence comes at a price: processing. That processing turns into latency as each device takes a certain amount of time to inspect, evaluate and ultimately decide what to do with the data. And therein lies the key to our conundrum: it makes a decision. That decision might be routing-based or security-based or even logging-based. What the decision is is not as important as the fact that it must be made. SDN necessarily brings this key differentiator between legacy and next-generation infrastructure to the fore, as it's not just software-defined but software-deciding networking. When a switch doesn't know what to do with a packet in SDN, it asks the controller, which evaluates and makes a decision. The capacity of SDN – and of any modern infrastructure – is at least partially determined by how fast it can make decisions.

Examples of decisions:

- URI-based routing (load balancers, application delivery controllers)
- Virus scanning
- SPAM scanning
- Traffic anomaly scanning (IPS/IDS)
- SQLi / XSS inspection (web application firewalls)
- SYN flood protection (firewalls)
- BYOD policy enforcement (access control systems)
- Content scrubbing (web application firewalls)

The DPS capacity of a system is not the same as its connection capacity, which is merely a measure of how many new connections per second can be established (and in many cases how many connections can be simultaneously sustained). Such a measure merely determines how optimized the networking stack of any given solution might be, as connections – whether TCP or UDP or SMTP – are protocol-oriented, and it is the networking stack that determines how well connections are managed. The CPS rate of any given device tells us nothing about how well it will actually perform its appointed tasks. That's what the Decisions Per Second (DPS) metric tells us.

CONSIDERING BOTH CPS and DPS

The reality is that most systems will have a higher CPS than DPS. That's not necessarily bad, as evaluating data as it flows through a device requires processing, and processing necessarily takes time. Using both CPS and DPS merely recognizes this truth and forces it to the fore, where it can be used to better design the network. A combined metric helps design the network by offering insight into the real capacity of a given device, rather than a marketing capacity. When we look only at CPS, for example, we might feel perfectly comfortable with a topological design with a flow of similar CPS capacities. But what we really want is to make sure that DPS-to-CPS (and vice versa) capabilities are matched up correctly, lest we introduce more latency than is necessary into a given flow. What we don't want is to end up with a device with a high DPS rate feeding into a device with a lower CPS rate. We also don't want to design a flow in which DPS rates successively decline. Doing so means we're adding more and more latency into the equation.

The DPS rate is a much better indicator of capacity than CPS for designing high-performance networks because it is a realistic measure of performance, and yet a high DPS coupled with a low CPS would be disastrous. Luckily, it is almost always the case that a mismatch in CPS and DPS will favor CPS, with DPS being the lower of the two metrics in almost all cases. What we want to see is as close a CPS:DPS ratio as possible. The ideal is 1:1, of course, but given the nature of inspecting data it is unrealistic to expect such a tight ratio. Still, if the ratio becomes too high, it indicates a potential bottleneck in the network that must be addressed. For example, assume an extreme case of a CPS:DPS ratio of 2:1. The device can establish 10,000 CPS but only process at a rate of 5,000 DPS, leading to increasing latency or other undesirable performance issues as connections queue up waiting to be processed.
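The math behind that queue is unforgiving. A quick Tcl illustration of the 2:1 example:

    set cps 10000 ;# new connections arriving per second
    set dps 5000  ;# decisions the device can make per second
    set growth [expr {$cps - $dps}]
    puts "backlog grows by $growth connections per second"
    puts "after one minute: [expr {$growth * 60}] connections waiting"
    # prints: backlog grows by 5000 connections per second
    #         after one minute: 300000 connections waiting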
Obviously there's more at play than just new CPS and DPS (concurrent connection capability is also a factor), but the relationship between new CPS and DPS is a good general indicator of potential issues. Knowing the DPS of a device enables architects to properly scale out the infrastructure to remediate potential bottlenecks. This is particularly true when TCP multiplexing is in play, because it necessarily reduces CPS to the target systems but in no way impacts the DPS. On the ingress, too, emerging protocols like SPDY make more efficient use of TCP connections, making CPS an unreliable measure of capacity, especially if DPS is significantly lower than the CPS rating of the system. Relying upon CPS alone, particularly when using TCP connection management technologies, as a means to achieve scalability can negatively impact performance. Testing systems to understand their DPS rate is paramount to designing a scalable infrastructure with consistent performance.

Related reading:

- The Need for (HTML5) Speed
- SPDY versus HTML5 WebSockets
- Y U No Support SPDY Yet?
- Curing the Cloud Performance Arrhythmia
- F5 Friday: Performance, Throughput and DPS
- Data Center Feng Shui: Architecting for Predictable Performance
- On Cloud, Integration and Performance
A "Gateway" Approach to Early Adoption of the New Web Protocol, HTTP 2.0

From HTTP to SPDY, and finally to HTTP 2.0: with an eye on adopting the latest web protocols optimized for evolving web services, a variety of initiatives are under way.

Applications, devices and cybersecurity are evolving at a dizzying pace, but HTTP, the protocol that carries most of the Internet's traffic, has in fact changed remarkably little since its first version; this is a reality that is surprisingly little known.

According to the IETF, the Internet standards body, the original HTTP 1.0 was standardized in 1996, and its successor, HTTP 1.1, in 1997. That was before the company called Google was born, and before the launch of the i-mode service that swept Japan. It should give you a real sense of how long this specification has gone without major change. Since then, as everyone knows, the Internet has spread explosively and traffic volumes have kept on growing.

The HTTP 2.0 specification is scheduled to be standardized by the IETF in 2015. While rapid adoption across most enterprises is hard to imagine, the infrastructures behind applications and services that sustain enormous, continuous request volumes have the most to gain, and are expected to have good motivation to consider it seriously.

A few years ago, when adoption of SPDY, the predecessor of HTTP 2.0, began, F5 Networks was among the first to propose a migration solution. F5's secure, stable ADC services and the gateway capability of BIG-IP made a smooth transition possible by consolidating, at a single point, the biggest adoption barrier: the work of making servers SPDY-capable.

This "SPDY gateway" way of thinking is in fact a good example of how an ADC contributes, in many situations, to building infrastructure that adapts flexibly to a rapidly changing business environment. From SSL connection termination and performance improvement in the early days, to SPDY, and now HTTP 2.0, the technologies differ but the fundamental approach is consistent. When migration to a new system becomes urgent, large-scale changes to IT equipment tend to cost heavily in money, time and staff. With the added value of an HTTP 2.0 gateway on BIG-IP, you can make your infrastructure HTTP 2.0-capable quickly and with a small investment, and gain controls such as capping the number of requests and prioritizing them.

F5 will keep raising this ADC-specific added value together with its users. And users of F5 solutions may find that, without even noticing, they are already prepared for the new technologies yet to come.

F5's BIG-IP offers support for the latest HTTP 2.0 draft through an early access (early validation) program, with General Availability planned once the specification is standardized.

For the English version of this article, please go here.
Ready or not, HTTP 2.0 is here

From HTTP to SPDY, to HTTP 2.0. The introduction of a new protocol optimized for today's evolving web services opens the door to a world of possibilities. But are you ready to make this change? While the breakneck speed with which applications, devices and servers are evolving is common knowledge, few are aware that the HTTP protocol, such a key part of the modern internet, has seen only a few changes of significance since its inception.

HTTP – behind the times?

According to the Internet Engineering Task Force (IETF), HTTP 1.0 was first officially introduced in 1996. The following version, HTTP 1.1, the version most commonly used to this day, was officially released in 1997. 1997! And we're still using it! To give you some idea of how old this makes HTTP 1.1 in terms of the development of the internet: in 1997 there was no Google! There was no PayPal! It goes without saying, of course, that since 1997 both the internet and the amount of traffic it has to handle have grown enormously.

Standardization of HTTP 2.0 by the IETF is scheduled for 2015. It's unrealistic to expect most businesses to immediately deploy HTTP 2.0 in their organizations. However, seeing the clear benefits of HTTP 2.0, which enables infrastructure to handle the huge volumes of traffic accessing applications and services, many organizations will soon implement this new protocol.

F5: Ready for SPDY, ready for HTTP 2.0

The initial draft of HTTP 2.0 was based on the SPDY protocol. When SPDY was first introduced some years ago, the first company to provide a solution enabling businesses to "switch over" to SPDY was F5; we introduced a SPDY Gateway then. The SPDY Gateway allowed our enterprise customers to overcome the biggest hurdle in the transition to adopting the SPDY protocol on the server end, with the stable and secure ADC and gateway services of BIG-IP. Organizations have the option to upgrade servers to support the protocol, or to use the gateway to handle the connection. This gives businesses the time necessary to make a smooth and gradual transition from HTTP 1.0 to SPDY on the server side. One well-known ADC use case with the SPDY protocol is SSL termination. Similarly, in the transition to HTTP 2.0, an HTTP 2.0 gateway will help organizations plan their adoption and migration to accommodate the demands of HTTP 2.0.

Thinking ahead

F5 is always at the forefront of technology evolution. The HTTP 2.0 Gateway is just one example of F5's dedication to the evolution of application delivery technology, making our customers' transitions to new technology as simple as possible. Currently, F5's BIG-IP supports HTTP 2.0 under our early access program. We expect its general availability after HTTP 2.0 standardization is completed in 2015.

For a Japanese version of this post, please go here.
Y U No Support SPDY Yet?

#fasterapp #ado #interop Mega-sites like Twitter and popular browsers are all moving to support SPDY – but there's one small glitch in the game plan…

SPDY is gaining momentum as "big" sites begin to enable support for the would-be HTTP 2.0 protocol of choice. Most recently, Twitter announced its support for SPDY:

"Twitter has embraced Google's vision of a faster web and is now serving webpages over the SPDY protocol to browsers that support it. SPDY is still only available for about 40 percent of desktop users. But with large services like Twitter throwing their weight behind it, SPDY may well start to take the web by storm — the more websites that embrace SPDY the more likely it is that other browsers will add support for the faster protocol."
-- Twitter Catches the 'SPDY' Train

But even with existing support from Google (where the protocol originated) and Amazon, there's a speed bump on the road to fast for SPDY:

"There is not yet any spdy support for the images on the Akamai CDN that twitter uses, and that's obviously a big part of performance. But still, real deployed users of this are Twitter, Google Web, Firefox, Chrome, Silk, node etc. This really has momentum because it solves the right problems. Big pieces still left are a popular CDN, open standardization, a http load balancing appliance like F5 or citrix. And the wind is blowing the right way on all of those things. This is happening very fast."
-- Twitter, SPDY, and Firefox

The "big pieces missing" theme is one that's familiar at this point; many folks are citing the lack of SPDY support at various infrastructure layers (in particular the application delivery tier and load balancing services) as an impediment to embracing what is certainly the early frontrunner in the HTTP 2.0 war. While mod_spdy may address the basic requirement to support SPDY at the web server infrastructure layer, mod_spdy does not (and really cannot) address the lack of support in the rest of the infrastructure. Like the IPv4-to-IPv6 transition, the move to support SPDY is transitory in nature and counts on HTTP 2.0 adopting SPDY as part of its overhaul of the aging HTTP protocol. Thus, infrastructure must maintain a dual SPDY-HTTP stack, much in the same way it must support both IPv4 and IPv6 until adoption is complete, or a firm cutover date is arrived at (unlikely).

U No Support SPDY because SPDY Support is No Easy

The use of SPDY is transparent to the user, but for IT the process of supporting SPDY is no simple task. To understand the impact on infrastructure, first take a look at the typical steps a browser uses to switch from HTTP to SPDY:

1. The first request to a server is sent over HTTP.
2. The server-side endpoint (server or application delivery service), when it supports SPDY, can reply with the HTTP header 'Alternate-Protocol: 443:npn-spdy/2', indicating that the content can also be retrieved on this server on port 443, and that on that port there is a TLS 1.2 endpoint that understands NPN and can negotiate SPDY (a minimal iRule sketch of this step appears below).
3. Alternatively, the server can just 301 redirect the user to an https endpoint.
4. The client starts a TLS connection to the port described, and when the server-side endpoint indicates SPDY as part of the TLS NPN extension, it will use SPDY on that connection.

What this means is that infrastructure must be updated not just to handle SPDY – which has some unique behavior in its bi-directional messaging model – but also TLS 1.2 and NPN (Next Protocol Negotiation).
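On a BIG-IP fronting HTTP/1.x servers, the advertisement in step 2 can be sketched in an iRule. This is a hypothetical example only; the TLS endpoint on port 443 must actually negotiate spdy/2 via NPN for the advertisement to mean anything:

    when HTTP_RESPONSE {
        # advertise a SPDY endpoint on port 443 to capable clients
        HTTP::header insert Alternate-Protocol "443:npn-spdy/2"
    }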
This is no trivial task, mind you, and it’s not only relevant to infrastructure, it’s relevant to site administrators who want to enable SPDY support. Doing so has several pre-requisites that will make the task a challenging one, including supporting TLS 1.2 and the NPN extension and careful attention to links served, which must use HTTPS instead of HTTP. While you technically can force SPDY to be used without TLS, this is not something a typical user will (or in the case of many mobile platforms can) attempt. For testing purposes, running SPDY without TLS can be beneficial, but for general deployment? SSL Everywhere will become a reality. Doing so without negatively impacting performance, however, is going to be a challenge. Establishing a secure SSL connection requires 15x more processing power on the server than on the client. -- THC SSL DOS Tool Released The need to support secure connections becomes a big issue for sites desiring to support both HTTP and HTTPS without forcing SSL everywhere on every user, because there must be a way to rewrite links from HTTP to HTTPS (or vice-versa) without incurring a huge performance penalty. Network-scripting provides a solution to this quandary, with many options available – all carrying varying performance penalties from minimal to unacceptable. PROS and CONS Basically, there are a whole lot of pros to SPDY – and a whole lot of cons. Expect to see more sites – particularly social media focused sites like Facebook – continue to jump on the SPDY bandwagon. HTTP was not designed to support the real-time interaction inherent in social media today, and its bursty, synchronous nature often hinders the user-experience. SPDY has shown the highest benefits to just such sites – highly interactive applications with lots of small object transfers – so we anticipate a high rate of early adoption amongst those who depend on such applications to support its business model. Supporting SPDY will be much easier for organizations that take advantage of an intelligent intermediary. Such solutions will be able to provide support for both HTTP and SPDY simultaneously – including all the pre-requisite capabilities. Using an intermediary application delivery platform further alleviates the need to attempt to deploy pre-production quality modules on critical application server infrastructure, and reduces the burden on operations to manage certificate sprawl (and all the associated costs that go along with that). SPDY is likely to continue to gain more and more momentum, especially in the mobile device community. Unless Microsoft’s Speed+Mobility offers a compelling reason it should ascend to the top of the HTTP 2.0 stack over SPDY, it’s likely that supporting SPDY will become a “must do” instead of “might do” sooner rather than later. SPDY Momentum Fueled by Juggernauts Introducing mod_spdy, a SPDY module for the Apache HTTP server The HTTP 2.0 War has Just Begun The Chromium Projects : SPDY Oops! HTML5 Does It Again F5 Friday: Mitigating the THC SSL DoS Threat SPDY - without TLS? Web App Performance: Think 1990s. Google SPDY Protocol Would Require Mass Change in Infrastructure317Views0likes0CommentsSPDY versus HTML5 WebSockets
SPDY versus HTML5 WebSockets

#HTML5 #fasterapp #webperf #SPDY So much alike, yet so vastly different in their impact on the data center…

A recent post on the beginning of the HTTP 2.0 war garnered a very relevant question regarding WebSockets and where it fits in (what might shape up to be) an epic battle. The question, "Why not consider WebSockets here?", could be easily answered with two words: HTTP headers. It could also be answered with two other words: infrastructure impact. But I'm guessing Nagesh (and others) would like a bit more detail on that, so here comes the (computer) science.

Different Solutions Have Different Impacts

Due to a simple (and yet profound) difference between the two implementations, WebSockets is less likely to make an impact on the web (and yet more likely to make an impact inside data centers, but more on that another time). Nagesh is correct in that in almost all the important aspects, WebSockets and SPDY are identical (if not in implementation, then in effect):

- Both are asynchronous, which eliminates the overhead of "polling" generally used to simulate "real time" updates a la Web 2.0 applications.
- Both use only a single TCP connection. This also reduces overhead on servers (and infrastructure), which can translate into better performance for the end user.
- Both can make use of compression (although only via extensions in the case of WebSockets) to reduce the size of data transferred, resulting, one hopes, in better performance, particularly over more constrained mobile networks.
- Both protocols operate "outside" HTTP and use an upgrade mechanism to initiate. While WebSockets uses the HTTP Connection header to request an upgrade, SPDY uses Next Protocol Negotiation (a proposed enhancement to the TLS specification). This mechanism engenders better backwards compatibility across the web, allowing sites to support both next-generation web applications and traditional HTTP.

Both specifications are designed, as pointed out, to solve the same problems. And both do, in theory and in practice. The difference lies in the HTTP headers – or the lack thereof in the case of WebSockets.

"Once established, WebSocket data frames can be sent back and forth between the client and the server in full-duplex mode. Both text and binary frames can be sent full-duplex, in either direction at the same time. The data is minimally framed with just two bytes. In the case of text frames, each frame starts with a 0x00 byte, ends with a 0xFF byte, and contains UTF-8 data in between. WebSocket text frames use a terminator, while binary frames use a length prefix."
-- HTML5 Web Sockets: A Quantum Leap in Scalability for the Web

WebSockets does not use HTTP headers; SPDY does. This seemingly simple difference has an inversely proportional impact on supporting infrastructure.

The Impact on Infrastructure

The impact on infrastructure is why WebSockets may be more trouble than it's worth – at least when it comes to public-facing web applications. While both specifications will require gateway translation services until (if) they are fully adopted, WebSockets has a much harsher impact on the intervening infrastructure than SPDY does. WebSockets effectively blinds infrastructure. IDS, IPS, ADC, firewalls, anti-virus scanners – any service that relies upon HTTP headers to determine the specific content type or location (URI) of the object being requested – is unable to inspect or validate requests due to the lack of HTTP headers.
Now, SPDY doesn’t make it easy – HTTP request headers are compressed – but it doesn’t make it nearly as hard, because gzip is pretty well understood and even intermediate infrastructure can deflate and recompress with relative ease (and without needing special data, such as is the case with SSL/TLS and certificates). Let me stop for a moment and shamelessly quote myself from a blog on this very subject, “Oops! HTML5 Does it Again”: One of the things WebSockets does to dramatically improve performance is eliminate all those pesky HTTP headers. You know, things like CONTENT-TYPE. You know, the header that tells the endpoint what kind of content is being transferred, such as text/html and video/avi. One of the things anti-virus and malware scanning solutions are very good at is detecting anomalies in specific types of content. The problem is that without a MIME type, the ability to correctly identify a given object gets a bit iffy. Bits and bytes are bytes and bytes, and while you could certainly infer the type based on format “tells” within the actual data, how would you really know? Sure, the HTTP headers could by lying, but generally speaking the application serving the object doesn’t lie about the type of data and it is a rare vulnerability that attempts to manipulate that value. After all, you want a malicious payload delivered via a specific medium, because that’s the cornerstone upon which many exploits are based – execution of a specific operation against a specific manipulated payload. That means you really need the endpoint to believe the content is of the type it thinks it is. But couldn’t you just use the URL? Nope – there is no URL associated with objects via a WebSocket. There is also no standard application information that next-generation firewalls can use to differentiate the content; developers are free to innovate and create their own formats and micro-formats, and undoubtedly will. And trying to prevent its use is nigh-unto impossible because of the way in which the upgrade handshake is performed – it’s all over HTTP, and stays HTTP. One minute the session is talking understandable HTTP, the next they’re whispering in Lakota, a traditionally oral-only language which neatly illustrates the overarching point of this post thus far: there’s no way to confidently know what is being passed over a WebSocket unless you “speak” the language used, which you may or may not have access to. The result of all this confusion is that security software designed to scan for specific signatures or anomalies within specific types of content can’t. They can’t extract the object flowing through a WebSocket because there’s no indication of where it begins or ends, or even what it is. The loss of HTTP headers that indicate not only type but length is problematic for any software – or hardware for that matter – that uses the information contained within to extract and process the data. SPDY, however, does not eliminate these Very-Important-to-Infrastructure-Services HTTP headers, it merely compresses them. Which makes SPDY a much more compelling option than WebSockets. SPDY can be enabled for an entire data center via the use of a single component: a SPDY gateway. WebSockets ostensibly requires the upgrade or replacement of many more infrastructure services and introduces risks that may be unacceptable to many organizations. 
And thus my answer to the question "Why not consider WebSockets here?" is simply that while the end result (better performance) of implementing the two may be the same, WebSockets is unlikely to gain widespread acceptance as the protocol du jour for public-facing web applications due to the operational burden it imposes on the rest of the infrastructure. That doesn't mean it won't gain widespread acceptance inside the enterprise. But that's a topic for another day…

Related reading:

- HTML5 Web Sockets: A Quantum Leap in Scalability for the Web
- Oops! HTML5 Does it Again
- The HTTP 2.0 War has Just Begun
- Fire and Ice, Silk and Chrome, SPDY and HTTP
- Grokking the Goodness of MapReduce and SPDY
- Google SPDY Protocol Would Require Mass Change in Infrastructure