What Ops Needs to Know about HTTP/2
So HTTP/2 is official. That means all the talking is (finally) done and after 16 years of waiting, we've got ourselves a new lingua franca of the web.
Okay, maybe that's pushing it, but we do have a new standard to move to, one that offers improvements in resource management and performance that make it well worth a look.
For example, HTTP/2 is largely based on SPDY (that's Google's SPDY, for those who might have been heads down in the data center since 2009 and missed its introduction), which has proven, in the field, to offer some nice performance benefits. Since its introduction in 2009, SPDY has moved through several versions, culminating in the current (and, according to Google, final) version 3.1, and has shown real improvements in page load times, mostly due to a combination of reduced round trip times (RTT) and header compression. An IDG research paper, "Making the Journey to HTTP/2", notes that "According to Google, SPDY has cut load times for several of its most highly used services by up to 43 percent. Given how heavily based it is on SPDY, HTTP/2 should deliver similarly significant performance gains, resulting in faster transactions and easier access to mobile users, not to mention reduced need for bandwidth, servers, and network infrastructure."
But HTTP/2 is more than SPDY with a new name. There are a number of significant differences that, while not necessarily affecting applications themselves, definitely impact the folks who have to configure and manage the web and application servers on which those apps are deployed.
That means you, ops guy.
One of the biggest changes in HTTP/2 is that it is now binary on the wire instead of text. That's good news for transport efficiency, but bad news in that it's the primary reason HTTP/2 is incompatible with HTTP/1.1. Browsers will no doubt negotiate the right protocol for you, alleviating any concern that end users won't be able to access your apps if you move to the new standard, but it's problematic for inter-app integration; i.e., all those external services you might use to build your app or site. A client that assumes HTTP/1.1 will not be able to communicate with an HTTP/2-only endpoint, and vice versa.
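A quick way to see this for yourself is to check which protocol a given endpoint actually negotiates before you wire it into your app. Here's a minimal Go sketch (the golang.org/x/net/http2 package and the example.com URL are stand-ins for whatever client stack and service you actually use) that enables HTTP/2 on a standard transport and prints the protocol the server settled on:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	// Placeholder endpoint; substitute the service you integrate with.
	const url = "https://example.com/"

	// Enable HTTP/2 on a standard transport. The protocol is negotiated
	// via ALPN during the TLS handshake, so an HTTP/1.1-only server
	// still answers; it just answers in HTTP/1.1.
	tr := &http.Transport{TLSClientConfig: &tls.Config{}}
	if err := http2.ConfigureTransport(tr); err != nil {
		panic(err)
	}
	client := &http.Client{Transport: tr}

	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Prints "HTTP/2.0" if h2 was negotiated, "HTTP/1.1" otherwise.
	fmt.Println("negotiated protocol:", resp.Proto)
}
```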
Additionally, HTTP/2 introduces a dedicated header compression scheme, HPACK. While SPDY also supported header compression as a way to eliminate the overhead of redundant headers across the many requests per page (an average of 80), it relied on standard DEFLATE (RFC 1951) compression, which is vulnerable to CRIME (yes, I'm aware of the hilarious irony in that acronym, but it is what it is, right?).
Operational ramification: New header compression means that caches and upstream infrastructure that act on those headers will need to be able to speak HPACK.
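To make that concrete, here's a small sketch using Go's golang.org/x/net/http2/hpack package (the header names and values are made up) that encodes a handful of request headers and then decodes them again, which is exactly the bit of work any HPACK-aware cache or proxy has to be able to do:

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2/hpack"
)

func main() {
	// Encode a few typical (fabricated) request headers into HPACK.
	var buf bytes.Buffer
	enc := hpack.NewEncoder(&buf)
	fields := []hpack.HeaderField{
		{Name: ":method", Value: "GET"},
		{Name: ":path", Value: "/index.html"},
		{Name: "user-agent", Value: "example-client/1.0"},
		{Name: "accept-encoding", Value: "gzip"},
	}
	for _, f := range fields {
		if err := enc.WriteField(f); err != nil {
			panic(err)
		}
	}
	fmt.Printf("encoded %d header bytes\n", buf.Len())

	// Decode them back; the callback fires once per recovered header.
	dec := hpack.NewDecoder(4096, func(f hpack.HeaderField) {
		fmt.Printf("%s: %s\n", f.Name, f.Value)
	})
	if _, err := dec.Write(buf.Bytes()); err != nil {
		panic(err)
	}
}
```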
If you haven't been using SPDY, you may also not be aware of the changes to request management. HTTP/1.1 allowed multiple requests over the same (single) connection, but even that proved inadequate as page complexity (in terms of the number of objects to retrieve) increased. Browsers would therefore open 2, 3, or even 6 connections per domain in order to speed up page loads, which heavily impacted the capacity of a web/app server. If a web server could manage 5000 concurrent (TCP) connections, you had to divide that by the average number of connections opened per user to figure out concurrent user capacity. SPDY (and HTTP/2) instead use a single connection, multiplexing parallel request and response flows over it to maintain performance in the face of that complexity.
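If you want to watch that single-connection behavior happen, a rough Go sketch along these lines (again assuming golang.org/x/net/http2 and a placeholder URL) fires several concurrent requests at one host and uses net/http/httptrace to report whether each request reused the same underlying connection:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
	"net/http/httptrace"
	"sync"

	"golang.org/x/net/http2"
)

func main() {
	// Placeholder host; point this at an HTTP/2-capable service you control.
	const url = "https://example.com/"

	tr := &http.Transport{TLSClientConfig: &tls.Config{}}
	if err := http2.ConfigureTransport(tr); err != nil {
		panic(err)
	}
	client := &http.Client{Transport: tr}

	trace := &httptrace.ClientTrace{
		GotConn: func(info httptrace.GotConnInfo) {
			// reused=true means the request was multiplexed onto an
			// existing connection instead of opening a new socket.
			fmt.Printf("remote=%v reused=%v\n", info.Conn.RemoteAddr(), info.Reused)
		},
	}

	get := func() {
		req, _ := http.NewRequest("GET", url, nil)
		req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		io.Copy(ioutil.Discard, resp.Body)
		resp.Body.Close()
	}

	get() // the first request dials the one and only connection

	var wg sync.WaitGroup
	for i := 0; i < 6; i++ { // HTTP/1.1-era browsers would open ~6 sockets for this
		wg.Add(1)
		go func() {
			defer wg.Done()
			get()
		}()
	}
	wg.Wait()
}
```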
So that means a 1:1 ratio between users and (TCP) connections. But that doesn't necessarily mean capacity planning is simpler, as those connections are likely to be longer lived than many HTTP/1.1 connections. Idle timeout values in web servers may need to be adjusted, and capacity planning will need to take that into consideration.
Operational ramification: Idle timeout values and maximum connection limits may need to be adjusted to account for fewer, longer-lived connections.
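On the server side, those knobs live wherever you terminate HTTP/2. As a rough illustration (Go again, using golang.org/x/net/http2; the timeout and stream values and the cert/key paths are placeholders, not recommendations), the tuning might look something like this:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"

	"golang.org/x/net/http2"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "served over %s\n", r.Proto)
	})

	srv := &http.Server{
		Addr:    ":8443", // placeholder port
		Handler: mux,
		// Connection-level timeouts still matter; values are illustrative.
		ReadTimeout:  10 * time.Second,
		WriteTimeout: 30 * time.Second,
	}

	// HTTP/2-specific knobs: how many parallel streams a single connection
	// may carry, and how long an idle (but still open) connection is kept
	// before the server sends GOAWAY. Again, placeholder values.
	h2 := &http2.Server{
		MaxConcurrentStreams: 250,
		IdleTimeout:          5 * time.Minute,
	}
	if err := http2.ConfigureServer(srv, h2); err != nil {
		log.Fatal(err)
	}

	// Placeholder cert/key paths; browsers only speak HTTP/2 over TLS.
	log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
}
```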
Some of the other changes that may have an impact are related to security. In particular, there has been a lot of confusion over the requirement (or non-requirement) for security in HTTP/2. It turns out that the working group did not have consensus to require TLS or SSL in HTTP/2, and thus it remains optional.
The market, however, seems to have other ideas: browsers that currently support HTTP/2 do require TLS or SSL, and indications are that this is not likely to change. SSL Everywhere is the goal, after all, and browsers play a significant (and very authoritative) role in that effort.
With that said, TLS is optional in the HTTP/2 specification. But since most folks are supportive of SSL Everywhere, it is important to note that when connections are secured, HTTP/2 requires stronger cryptography (a sample configuration follows the list below):
- Ephemeral keys only
- Preferring AEAD modes like GCM
- Minimum key sizes: 128-bit EC, 2048-bit RSA
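Here's what honoring those requirements might look like at the web server layer, sketched in Go with crypto/tls and golang.org/x/net/http2 (the cipher list, port, and cert/key paths are illustrative only; check them against your own security policy):

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	// TLS settings in the spirit of HTTP/2's requirements: TLS 1.2+,
	// ephemeral (ECDHE) key exchange, AEAD (GCM) cipher suites.
	// Treat this as a sketch, not a vetted cipher policy.
	tlsCfg := &tls.Config{
		MinVersion: tls.VersionTLS12,
		CipherSuites: []uint16{
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
		},
		PreferServerCipherSuites: true,
	}

	srv := &http.Server{
		Addr: ":8443", // placeholder port
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello over " + r.Proto + "\n"))
		}),
		TLSConfig: tlsCfg,
	}
	if err := http2.ConfigureServer(srv, nil); err != nil {
		log.Fatal(err)
	}

	// Placeholder cert/key paths; the RSA key should be at least 2048 bits.
	log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
}
```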
This falls squarely in the lap of ops, as this level of support is generally configured and managed at the platform (web server) layer, not within the application. Because browsers are enforcing the use of secure connections, the implications for ops start reaching beyond the web server and into the upstream infrastructure.
Operational ramification: Upstream infrastructure (caches, load balancers, NGFWs, access management) will be blinded by encryption and unable to perform its functions.
Interestingly, HTTP/2 is already out there. The BitsUp blog noted, on the day the official HTTP/2 announcement was made, that "9% of all Firefox release channel HTTP transactions are already happening over HTTP/2. There are actually more HTTP/2 connections made than SPDY ones."
So this isn't just a might-be, could-be in the future. It's real.
Finally.
For a deeper dive into the history of HTTP and how the protocol has evolved over time, feel free to peruse this HTTP/2 presentation.