on 10-Jul-2014 05:28
#HTTP #ietf #webperf #infosec
Despite the hype and drama surrounding the HTTP 2.0 effort, the latest version of the ubiquitous HTTP protocol is not just a marketing term. It's a real, live IETF standard that is scheduled to "go live" in November (2014).
And it changes everything.
There are a lot of performance-enhancing changes in the HTTP 2.0 specification, including multiplexing and header compression. These should not be dismissed as minor updates; they significantly improve performance, particularly for clients connecting over a mobile network. Header compression, for example, reduces the need to transport full HTTP headers with each and every request - and response. HTTP headers can become quite the overhead, particularly for requests consisting of little more than a URL or a few bytes of data.
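To get a feel for why that matters, here's a rough back-of-the-envelope sketch in Python. The header values are made up and the indexing scheme is a simplification, not the actual HPACK algorithm in the spec, but it illustrates how quickly repeated headers add up versus referencing headers already sent:

```python
# Rough illustration (not HPACK itself) of why repeating headers hurts:
# the same cookie, user-agent, and accept headers ride along with every request.

REQUEST_HEADERS = {
    "host": "example.com",                       # hypothetical values
    "user-agent": "Mozilla/5.0 (mobile browser)",
    "accept": "text/html,application/xhtml+xml",
    "cookie": "session=" + "x" * 400,            # cookies are often the bulk of it
}

def http1_overhead(requests):
    """Every HTTP/1.x request re-sends the full header block."""
    per_request = sum(len(k) + len(v) + 4 for k, v in REQUEST_HEADERS.items())
    return per_request * requests

def indexed_overhead(requests):
    """With a shared header table (the idea behind HTTP 2.0 header compression),
    headers unchanged after the first request cost roughly one index byte each."""
    first = sum(len(k) + len(v) + 4 for k, v in REQUEST_HEADERS.items())
    repeats = (requests - 1) * len(REQUEST_HEADERS)  # ~1 byte per repeated header
    return first + repeats

if __name__ == "__main__":
    for n in (1, 10, 100):
        print(f"{n} requests: {http1_overhead(n)} bytes vs ~{indexed_overhead(n)} bytes")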
Multiplexing has traditionally been a server-side technology, deployed as an offload capability that optimizes both server resources and, in turn, performance. Enabling multiplexing on the client side, a la SPDY (which is actually the basis for HTTP 2.0 and is supported by 65% of browsers today) and MPTCP, brings the same benefits in terms of reduced resource consumption. It has the added benefit of improving performance by eliminating the overhead of not just opening new connections, but maintaining the state of each of them.
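If you've never pictured what multiplexing actually buys you on the wire, here's a toy sketch, plain Python with invented stream IDs and frame sizes rather than the real HTTP 2.0 framing layer, of interleaving several responses over a single connection instead of queuing them one behind the other:

```python
# A toy scheduler that interleaves frames from several logical streams onto one
# connection, round-robin -- roughly the effect multiplexing has on the wire.
# Stream IDs and frame sizes here are made up for illustration.

from collections import deque

def frame_stream(stream_id, payload, frame_size=16):
    """Split one response into small frames tagged with its stream id."""
    for offset in range(0, len(payload), frame_size):
        yield (stream_id, payload[offset:offset + frame_size])

def multiplex(responses):
    """Interleave frames from all streams so no single response blocks the rest
    (contrast with HTTP/1.x, where a connection serves one response at a time)."""
    queues = deque((sid, frame_stream(sid, body)) for sid, body in responses.items())
    while queues:
        stream_id, frames = queues.popleft()
        frame = next(frames, None)
        if frame is not None:
            yield frame
            queues.append((stream_id, frames))

if __name__ == "__main__":
    wire = multiplex({1: b"A" * 40, 3: b"B" * 40, 5: b"C" * 40})
    for stream_id, chunk in wire:
        print(stream_id, chunk)
```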
These are not what changes everything, however. While these are needed improvements and will certainly benefit clients and applications that can take advantage of them (either natively or by employing an HTTP gateway), the real game changer with HTTP 2.0 is the mandatory use of SSL.
Yes, that's right. SSL is mandatory.
For everyone on the data center side of this equation - whether that data center is a cloud or a traditional one - mandating SSL or TLS for HTTP will effectively blind most of the application data path.
This has always been true; enabling end-to-end SSL for web applications (which our data, that's F5's, shows account for 64% of all applications being delivered) has always meant restricting visibility into web traffic. After all, the purpose of transport layer security protocols like SSL and TLS is to protect data in flight from prying eyes. Those eyes include benevolent services like performance monitoring, IDS, IPS, DLP, web acceleration and any other service which relies on the ability to inspect data in flight.
This requirement for SSL or TLS means there are going to have to be some changes in the network architecture if you're going to move to HTTP 2.0 to take advantage of its performance benefits. Somehow you're going to have to figure out how to support a MUST use TLS/SSL requirement while still enabling monitoring, acceleration and security services - hopefully without requiring that every service in the application service conga line decrypt and re-encrypt the data.
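One common answer is to terminate SSL/TLS once at the edge and let the services behind it work on cleartext inside the data center. As a rough illustration only (placeholder certificate paths and a hypothetical backend address, nowhere near a production proxy), the idea looks something like this:

```python
# A minimal sketch of "decrypt once at the edge": a TLS-terminating intermediary
# accepts the client's encrypted connection, then hands cleartext to internal
# services (monitoring, IDS, acceleration) over the private network.
# Certificate paths and the backend address are placeholders, not real endpoints.

import socket
import ssl

BACKEND = ("10.0.0.10", 8080)          # hypothetical internal service
CERT, KEY = "edge.crt", "edge.key"     # hypothetical certificate/key pair

def serve(listen_port=443):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(CERT, KEY)

    with socket.create_server(("", listen_port)) as listener:
        with ctx.wrap_socket(listener, server_side=True) as tls_listener:
            conn, _addr = tls_listener.accept()      # TLS terminates here
            with conn, socket.create_connection(BACKEND) as upstream:
                data = conn.recv(65536)              # cleartext after decryption
                # ... inspection/monitoring services can see `data` here ...
                upstream.sendall(data)
                conn.sendall(upstream.recv(65536))

if __name__ == "__main__":
    serve()
```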
While marketing made much of the "SSL Everywhere" movement and many organizations did, in fact, move to complying with the notion that every web interaction should be secured with SSL or TLS, not everyone was as dedicated to enforcing it on consumers and employees. Non-secured HTTP was still often allowed, despite the risks associated with it.
HTTP 2.0 will mean giving more than just lip service to security by requiring that organizations adopting the new protocol utterly and completely embrace it.