HTTP 2.0 changes everything

#HTTP #ietf #webperf #infosec

Despite the hype and drama surrounding the HTTP 2.0 effort, the latest version of the ubiquitous HTTP protocol is not just a marketing term. It's a real, live IETF standard that is scheduled to "go live" in November 2014.

And it changes everything.

The HTTP 2.0 specification includes a number of performance-enhancing changes, among them multiplexing and header compression. These are not minor updates to be overlooked; they significantly improve performance, particularly for clients connecting over a mobile network. Header compression, for example, reduces the overhead of transporting HTTP headers with each and every request and response. HTTP headers can amount to considerable overhead, particularly for requests comprised of little more than a URL or a few bytes of data.
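
To make the savings concrete, here is a rough sketch of the core idea behind HTTP 2.0's header compression (HPACK): headers that repeat across requests are replaced by small integer indexes into a shared table. This is an illustration of the concept only, not the real HPACK wire format; the table entries and sizes below are simplified.

```python
# Sketch of HPACK-style header indexing: common (name, value) pairs are
# replaced by a one-byte index into a table both sides share, so repeated
# requests carry a few bytes instead of full header text.
STATIC_TABLE = {
    (":method", "GET"): 2,
    (":scheme", "https"): 7,
    ("accept-encoding", "gzip, deflate"): 16,
}

def encode(headers):
    """Encode headers as table indexes where possible, literals otherwise."""
    out = []
    for name, value in headers:
        idx = STATIC_TABLE.get((name, value))
        if idx is not None:
            out.append(("idx", idx))          # one small integer on the wire
        else:
            out.append(("lit", name, value))  # full literal, as in HTTP 1.x
    return out

headers = [(":method", "GET"), (":scheme", "https"),
           ("accept-encoding", "gzip, deflate"), ("x-custom", "abc")]
encoded = encode(headers)

# Compare the literal byte count to the indexed form.
literal_size = sum(len(n) + len(v) for n, v in headers)
encoded_size = sum(1 if e[0] == "idx" else len(e[1]) + len(e[2])
                   for e in encoded)
print(literal_size, encoded_size)  # indexed form is a fraction of the size
```

The real HPACK adds a dynamic table and Huffman coding on top of this, but the win is the same: the second and every subsequent request pays almost nothing for headers it has already sent.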

Multiplexing has traditionally been a server-side technology, designated as an offload capability that optimizes both server resources and, in turn, performance. Enabling multiplexing on the client side, a la SPDY (which is actually the basis for HTTP 2.0 and is supported by 65% of browsers today) and the MPTCP protocol, brings the same benefits in terms of reduced resource consumption. It has the added benefit of improving performance by eliminating the overhead associated not just with opening new connections, but with maintaining the state of each of them.
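
A toy model helps show what multiplexing buys: several logical streams share one connection by interleaving frames tagged with a stream ID, instead of each request paying for its own TCP connection. The frame size and framing below are invented for illustration; real HTTP 2.0 frames carry typed headers, flags, and flow-control state.

```python
# Toy illustration of HTTP/2-style multiplexing: frames from multiple
# streams are interleaved on a single "wire" and reassembled by stream ID.
from collections import defaultdict
from itertools import zip_longest

def frames(stream_id, payload, size=4):
    """Split a stream's payload into (stream_id, chunk) frames."""
    return [(stream_id, payload[i:i + size])
            for i in range(0, len(payload), size)]

def multiplex(streams):
    """Interleave frames from all streams onto one shared connection."""
    wire = []
    for batch in zip_longest(*streams):
        wire.extend(f for f in batch if f is not None)
    return wire

def demultiplex(wire):
    """Reassemble each stream from the shared wire by its stream ID."""
    out = defaultdict(str)
    for stream_id, chunk in wire:
        out[stream_id] += chunk
    return dict(out)

s1 = frames(1, "GET /index.html")
s3 = frames(3, "GET /style.css")
wire = multiplex([s1, s3])
print(demultiplex(wire))  # each stream arrives intact despite interleaving
```

The point of the sketch is that neither stream blocks the other: a slow response on stream 1 no longer holds up stream 3 the way head-of-line blocking does on a shared HTTP 1.1 connection.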

These are not what changes everything, however. While these are needed improvements that will certainly benefit clients and applications able to take advantage of them (either natively or by employing an HTTP gateway), the real game changer in HTTP 2.0 is the mandatory use of SSL.

Yes, that's right. SSL is mandatory.

What does that mean?

For everyone on the data center side of this equation - whether that data center is a cloud or a traditional one - mandating SSL or TLS for HTTP will effectively blind most of the application data path.

This has always been true; enabling end-to-end SSL for web applications (which our - that's F5 - data shows account for 64% of all applications being delivered) has always meant restricting visibility into web traffic. After all, the purpose of transport layer security protocols like SSL and TLS is to protect data in flight from prying eyes. Those eyes include benevolent services like performance monitoring, IDS, IPS, DLP, web acceleration and any other service that relies on the ability to inspect data in flight.

This requirement for SSL or TLS means there will have to be some changes in the network architecture if you're going to move to HTTP 2.0 to take advantage of its performance benefits. Somehow you're going to have to figure out how to support a "MUST use TLS/SSL" requirement while still enabling monitoring, acceleration and security services - hopefully without requiring that every service in the application service conga line decrypt and re-encrypt the data.
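
One common shape for that architecture is "decrypt once at the edge": a gateway terminates TLS a single time, chains the cleartext through inspection and monitoring services, and re-encrypts toward the server. The sketch below models only the flow of that pattern; the encrypt/decrypt functions are trivial stand-ins, not real TLS, and the service names are hypothetical.

```python
# Conceptual sketch of a decrypt-once gateway: TLS is terminated a single
# time, cleartext passes through a chain of inspection services, and the
# traffic is re-encrypted before leaving - no per-service decrypt/re-encrypt.

def decrypt(data):   # stand-in for TLS decryption at the gateway
    return data[::-1]

def encrypt(data):   # stand-in for TLS re-encryption toward the server
    return data[::-1]

def ids_inspect(data, alerts):
    """Hypothetical IDS hook: flag suspicious payloads, pass data through."""
    if "attack" in data:
        alerts.append(data)
    return data

def monitor(data, metrics):
    """Hypothetical performance-monitoring hook: record size, pass through."""
    metrics.append(len(data))
    return data

def gateway(encrypted_request, services):
    """Terminate TLS once, run the cleartext service chain, re-encrypt."""
    clear = decrypt(encrypted_request)
    for service in services:
        clear = service(clear)
    return encrypt(clear)

alerts, metrics = [], []
services = [lambda d: ids_inspect(d, alerts),
            lambda d: monitor(d, metrics)]
out = gateway(encrypt("GET /login attack"), services)
print(alerts, metrics)
```

The design choice this illustrates is paying the cryptographic cost once at a trusted termination point, rather than once per service in the conga line - which is exactly the trade-off a mandatory-TLS HTTP 2.0 deployment forces you to make explicitly.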

While marketing made much of the "SSL Everywhere" movement and many organizations did, in fact, move to complying with the notion that every web interaction should be secured with SSL or TLS, not everyone was as dedicated to enforcing it on consumers and employees. Non-secured HTTP was still often allowed, despite the risks associated with it.

HTTP 2.0 will mean giving more than just lip service to security by requiring that organizations adopting the new protocol utterly and completely embrace it.

Published Jul 10, 2014
Version 1.0

6 Comments

  • Lori, Great article on the changes coming with mandatory SSL. You mention there's going to be some changes in the network architecture and then a way to figure out how to support the mandatory SSL while still enabling monitoring, acceleration and security services - hopefully without requiring that every service in the application service conga line decrypt and re-encrypt the data. With that, does F5 have any design documents for best practices to handle the decryption of SSL traffic for inspection/monitoring? Thx, Jeff
  • Hi Jeffrey, Thank you! That's a great question, and after doing some looking around that seems to be something we don't currently have. We do have a few iApp templates for specific applications with support for SSL, and an advanced design and deployment guide on monitoring SSL handshakes (https://devcentral.f5.com/wiki/AdvDesignConfig.HTTPSMonitor_SSL_Handshake.ashx). I'll suggest it for consideration as a reference architecture - it'd be a good one to more thoroughly document. Thanks! Lori
  • Hi Jeffrey, I've seen a few customers clone traffic and send it unencrypted to an IDS system, and I've also seen a couple of solutions where they will send unencrypted traffic through an IPS system that then flows back through the BIG-IP to be re-encrypted before sending on its way. Clever solutions, but if the requirement states no unencrypted traffic on the wire, period, well, both solutions fail that, and there isn't an official best practice stamp on them anyway. If interested: https://devcentral.f5.com/articles/divert-unencrypted-traffic-through-an-ips-with-local-traffic-manager
  • Thx to both Lori and Jason for the replies and links - this is indeed a more pressing problem that needs a solution, especially when the mantra is SSL everywhere, but from a security standpoint to capture and inspect everything as well. From some other reading, it appears SDN will allow the possibility to create a cloned flow, if you will, that is then sent off to the various security devices, but the issue there is you're inspecting traffic almost after the fact, and if something is discovered then it's too late. Jason, thx for the link on diverting the traffic through an IPS with the Local Traffic Manager. Bookmarked that solution!
  • I am thankful that F5 is helping to draw attention to 2.0. Vendor adoption (and F5 is usually quick on this) promotes mainstream use adoption. Unfortunately, I believe the title to be misleading. Because 2.0 uses SPDY as its foundation, the technical benefits are already available (i.e. 2.0 introduces much less than one would think in a "2.0" release, and possibly much less in comparison to the jump between 1.0 & 1.1). In fact, many of the HTTP 1.1 constructs such as headers, methods and such will continue to live on (arguably, if 1.1 multiplexing was easier to implement then there may have been no need for 2.0 at all... b/c most of the performance benefits come in request management & efficient use of existing TCP connections). What would be beneficial is to have an article that conveys the performance difference between 1.1 SSL connections & 2.0 SSL. Or something that dares to compare cleartext 1.1 to encrypted 2.0 for various genres of data transfer (e.g. web portal content vs. pure file transfer / download performance). best regards
  • Thanks Lori for giving me the heads-up on HTTP 2.0. As for troubleshooting, I find that F5 High Speed Logging has already solved the problem. Set up HSL (it's easy), incorporate it in your iRules, comment it out when you're not troubleshooting - much easier/faster than packet captures. We have SSL to the back-end servers (because 3rd party vendors can't code their products well) - once I set up HSL, life got much easier.