HTTP/2 Protocol in Plain English using Wireshark
1. Quick Intro

Some people find it easier to do a "test drive" first to learn how a new protocol works in its simplest form and only then read the RFC. It turns out Wireshark is a perfect tool for me to do just that. It's a simple test and here's the topology: I'll just issue a HEAD request and later on a GET request, and we'll see what it looks like in Wireshark. For more info about the HTTP/2 profile and the HTTP/2 protocol itself, you can read the article I published on AskF5 and Jason's DevCentral article: What is HTTP Part X - HTTP/2.

2. Confirmation of which protocol will be used

The packet capture below was the result of the following curl command issued from my Ubuntu Linux client (I could've used a browser instead). Note: 10.199.3.44 is my virtual server with the HTTP/2 profile applied. Here's the packet capture (in case you want to follow along): http2-test-v1.zip. HTTP/2 is negotiated during the SSL handshake in the Application-Layer Protocol Negotiation (RFC 7301) SSL extension, like this: the client says which protocol(s) it supports and the server responds with the one it picked (in this case, HTTP/2!).

3. Negotiation of HTTP/2 Parameters

Think of it as something that has to take place, like Client Hello and Server Hello in SSL, for example. The server side (BIG-IP in this case) sends a SETTINGS frame, which counts as confirmation that HTTP/2 is being used, plus any flow control configuration we want our peer to honour. The client sends the Magic frame to confirm HTTP/2 is being used, and then SETTINGS with its requirements for the connection. Yes, the Magic frame is always the same. Still curious about the Magic frame? Read https://tools.ietf.org/html/rfc7540#section-3.5. Endpoints are also supposed to ACK receipt of the SETTINGS frame from the other peer, and the way they do it is by responding with another empty SETTINGS frame with the ACK flag set.

4. Exchanging data

Connection-wise we're all set now.
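The Magic preface and SETTINGS frames from section 3 are simple enough to build by hand. Here is a minimal Python sketch of their wire format as defined in RFC 7540 (illustrative code of mine, not anything from BIG-IP or the capture):

```python
import struct

# RFC 7540 section 3.5: the client connection preface ("Magic") is a
# fixed 24-byte sequence, always the same on every HTTP/2 connection.
MAGIC = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

def settings_frame(params=(), ack=False):
    """Build a SETTINGS frame: a 9-byte frame header plus 6 bytes per
    parameter. Header layout: 24-bit payload length, 8-bit type (0x4 for
    SETTINGS), 8-bit flags (0x1 = ACK), 31-bit stream id (0 for SETTINGS)."""
    payload = b"".join(struct.pack(">HI", ident, value) for ident, value in params)
    header = struct.pack(">I", len(payload))[1:]            # 3-byte length
    header += struct.pack(">BBI", 0x4, 0x1 if ack else 0x0, 0)
    return header + payload

# SETTINGS_INITIAL_WINDOW_SIZE (identifier 0x4), as advertised by the
# client in the capture discussed above
client_settings = settings_frame([(0x4, 1073741824)])
ack = settings_frame(ack=True)        # empty SETTINGS frame with ACK flag set

assert len(MAGIC) == 24
assert len(client_settings) == 9 + 6  # one parameter -> 6-byte payload
assert len(ack) == 9                  # the ACK carries no payload
```

This also shows why the SETTINGS ACK is "empty": it is just the 9-byte header with the ACK flag, exactly what Wireshark displays.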
For HTTP/2 GET/HEAD requests there is a specific frame type called HEADERS which, as the name implies, carries HTTP/2 header information. If there were payload, it would be carried inside a DATA frame, but as this is just a HEAD request, no DATA frame follows.

5. Appendix A - Other common frame types

5.1 WINDOW_UPDATE

There are other common frame types, and in my capture the one that came up was WINDOW_UPDATE. If you look at section 3 above, we see that the client advised BIG-IP that its Initial Window Size was 1073741824. WINDOW_UPDATE just adjusted this value to 1073676289. This is HTTP/2 flow control in action.

5.2 DATA

In another test (http2-v2.zip) I used an HTTP/2 GET request instead of HEAD and requested more data, which comes in through the DATA frame type. The End Stream flag is false in all DATA messages except for the last one: it signals whether there is more data to come and marks the last DATA frame.

5.3 GOAWAY

In a subsequent test (http2-connection-idletimeout-1.zip) I set Connection Idle Timeout in the HTTP/2 profile to 1 to force BIG-IP to send a GOAWAY frame to close down the connection after 1 second of idleness. After the last piece of data is sent by BIG-IP to the client (frame #39), BIG-IP waits 1 second and sends a GOAWAY frame, which initiates the shutdown of the HTTP/2 connection. GOAWAY messages always contain a Promised-Stream-ID, which tells the client the last Stream ID it processed. A new Stream ID is typically created for every new HTTP request (via a HEADERS message). We can see that a new HTTP request slipped in on frame #46 but was ignored, as the connection had already been closed on BIG-IP's side.

Introducing QUIC and HTTP/3
QUIC [1] is a new transport protocol that provides similar service guarantees to TCP, and then some, operating over a UDP substrate. It has important advantages over TCP:

- Streams: QUIC provides multiple reliable ordered byte streams, which has several advantages for user experience and loss response over the single stream in TCP. The stream concept was used in HTTP/2, but moving it into the transport further amplifies the benefits.
- Latency: QUIC can complete the transport and TLS handshakes in a single round trip. Under some conditions, it can complete the application handshake (e.g. HTTP requests) in a single round trip as well.
- Privacy and Security: QUIC always uses TLS 1.3, the latest standard in application security, and hides much more data about the connection from prying eyes. Moreover, it is much more resistant than TCP to various attacks on the protocol, because almost all of its packets are authenticated.
- Mobility: If put in the right sort of data center infrastructure, QUIC seamlessly adjusts to changes in IP address without losing connectivity. [2]
- Extensibility: Innovation in TCP is frequently hobbled by middleboxes peering into packets and dropping anything that seems non-standard. QUIC's encryption, authentication, and versioning should make it much easier to evolve the transport as the internet evolves.

Google started experimenting with early versions of QUIC in 2012, eventually deploying it on Chrome browsers, their mobile apps, and most of their server applications. Anyone using these tools together has been using QUIC for years! The Internet Engineering Task Force (IETF) has been working to standardize it since 2016, and we expect that work to complete in a series of Internet Request for Comments (RFC) standards documents in late 2020. The first application to take advantage of QUIC is HTTP. The HTTP/3 standard will publish at the same time as QUIC, and primarily revises HTTP/2 to move the stream multiplexing down into the transport.
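While the standard was in draft, HTTP/3 endpoints identified the draft they spoke by its TLS ALPN token (e.g. "h3-27" for draft-27). As a hedged illustration of the offer/select mechanics only, here is how a client might advertise several tokens with Python's ssl module; note that Python's ssl runs over TCP, so this sketches the ALPN negotiation, not QUIC itself, and the token list is my example, not a recommendation:

```python
import ssl

# ALPN tokens for HTTP/3 draft versions; "h3" became the final token
# once the RFCs published. The list below is purely illustrative.
DRAFT_TOKENS = ["h3-27", "h3-25", "h3-24"]

ctx = ssl.create_default_context()
# The client offers its supported protocols in preference order; the
# server selects one during the TLS handshake.
ctx.set_alpn_protocols(DRAFT_TOKENS + ["http/1.1"])

# After a connection is established, ssl_socket.selected_alpn_protocol()
# would report the token the server chose, or None if none matched.
```

This is the same negotiation machinery (ALPN, RFC 7301) used to select h2 for HTTP/2 over TLS.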
F5 has been tracking the development of the internet standard. In TMOS 15.1.0.1, we released client-side support for draft-24 of the standard. That is, BIG-IP can proxy your HTTP/1 and HTTP/2 servers so that they communicate with HTTP/3 clients. We rolled out support for draft-25 in 15.1.0.2 and draft-27 in 15.1.0.3. While earlier drafts are available in Chrome Canary and other experimental browser builds, draft-27 is expected to see wide deployment across the internet. While we won't support all drafts indefinitely going forward, our policy will be to support two drafts in any given maintenance release. For example, 15.1.0.2 supports both draft-24 and draft-25. If you're delivering HTTP applications, I hope you take a look at the cutting edge and give HTTP/3 a try! You can learn more about deploying HTTP/3 on BIG-IP on our support page at K60235402: Overview of the BIG-IP HTTP/3 and QUIC profiles.

-----
[1] Despite rumors to the contrary, QUIC is not an acronym.
[2] F5 doesn't yet support QUIC mobility features. We're still in the midst of rolling out improvements.

HTTP/2: what's new and what are the gains?
Since May 15, 2015, the date RFC 7540 was ratified, HTTP/2 has officially become the new standard of the Web. After 16 years of good and loyal service from HTTP/1.1, this update was eagerly awaited to meet the needs of today's web. Indeed, recent trends showed a continuous increase in the number and size of web requests. On average, a web page weighs 2 MB and contains 80 elements!

Load times: the eternal wait!

Who hasn't cursed web pages that take forever to load? The latest speed index showed an average load time of around 11 seconds for the 40 biggest web and e-commerce players in France. HTTP/1.1 is very sensitive to network latency, and it is precisely this latency, more than bandwidth, that drives load times. Another weakness of HTTP/1.1 affecting load time is that requests are processed sequentially, in FIFO order: the browser must wait for the web server's response before it can submit another request. Of course, pipelining partially improved things by allowing several requests to be issued at once, but it did not prevent a request requiring lengthy processing from blocking the queue (Head of Line Blocking). HTTP/1.1 also had an unfortunate tendency to consume TCP connections. While browsers are limited to 6-8 TCP connections per target hostname, one performance technique, sharding (splitting the web resource across several target hosts), made it possible to bypass the 6-connection limit by parallelizing the loading of a web page, but at the price of an explosion in the number of TCP connections. Today, a web page requires the browser to establish an average of 38 TCP connections!
What doesn't change with HTTP/2

HTTP/2 was designed not to impact the Web: it natively supports HTTP/1.1 clients and can proxy HTTP/1.1 servers. It is therefore fully backward-compatible with HTTP/1.1. This means that:
- The HTTP/1.1 methods remain the same: GET, POST, PUT, HEAD, TRACE, DELETE, CONNECT...
- The http:// and https:// URI schemes remain unchanged.
- Status codes, headers, and negotiation remain the same.

What changes

Compared to HTTP/1.1, HTTP/2 decouples the protocol itself from the TCP transport layer, introducing an abstraction layer whose purpose is to optimize the transport of HTTP semantics. The goal is to reduce the number of round trips and the size of exchanged headers, and to eliminate the Head of Line Blocking problem... in short, to make better use of the network. The other goal of HTTP/2 is to standardize the optional extensions of HTTP/1.1, such as pipelining, inlining, sharding, and concatenation. It must be said that the 4 standardized ways of parsing an HTTP/1.1 message, and the optional extensions (not always well or fully implemented), could cause interoperability problems. For example, pipelining is disabled by default in web browsers to guarantee broad interoperability. The approach chosen for HTTP/2 is the opposite of what was done for HTTP/1.1: there are no more options or extensions. The specifications are mandatory and the protocol is now binary (so no more telnet to port 80), and therefore more compact and easier to decode.

Multiplexing

This is the protocol's flagship new feature. HTTP/2 uses binary frames and allows requests to be sent and responses to be received in parallel and independently of one another, over a single TCP connection. A response that requires processing time no longer blocks the others.
This is done via "streams", which are sequences of bidirectional frames exchanged between the client and the server. An HTTP/2 connection can contain a hundred streams open over a single TCP connection, and each stream has a priority that can be changed dynamically by the web browser.

Header compression

With HTTP/1.1, it took on average 7 to 8 TCP exchanges to load the headers of a web page containing 80 objects. Since headers are often repetitive and increasingly large, compression is a good way to reduce the exchanges. For HTTP/2, HPACK was developed to be faster than deflate or gzip, to allow re-indexing by a proxy, and above all to be invulnerable to the CRIME and BREACH attacks that affected compression in TLS and SPDY. With HTTP/2, headers are grouped and served all at once via a specific frame and stream. For the HTTP payload, the protocol forbids the use of compression (including TLS compression) whenever confidentiality is required (secure channel) and the data source is not trusted.

Server Push

With HTTP/1.1, the browser retrieves data from the server by explicitly requesting the desired resource. As the HTTP resource loads, the browser parses the DOM and asks the server to provide the associated objects (CSS, JS, images, etc.). With Server Push, when the browser asks the server for the HTTP resource (for example /default.html), the server will of course serve that first request, but it will also preemptively push all the associated objects, which are placed in the browser's cache (the pushed objects must still be "cacheable"). When the browser has to render the page, since the objects are already in the cache, display time is faster.
Server Push is enabled by default on the server side, but it can be disabled and controlled by the browser. When the connection is established, if the browser sends the server the SETTINGS_ENABLE_PUSH parameter with the value 0, the server refrains from pushing objects and falls back to the old mode where the browser retrieves objects one by one (client pull). Even if Server Push is active between the server and the browser, the browser can also, at any time, send the server an RST_STREAM frame to tell it to stop pushing content.

TLS encryption

HTTP/2 provides for two transport modes: HTTP/2 over TLS (h2) for https:// URIs and HTTP/2 over TCP (h2c) for http:// URIs. For http:// URIs (cleartext on port 80), and after heated debate within the working group, the protocol provides a mechanism where the client sends the server a request with the Upgrade header and the h2c token. If the server speaks cleartext HTTP/2, it sends the client a "101 Switching Protocols" response and switches to HTTP/2. Even though the RFC does not require it, current HTTP/2 implementations in browsers and web servers run only over TLS (the exceptions are the curl and nghttp2 implementations). In h2 (encrypted) mode, HTTP/2 requires a TLS profile with the following characteristics:
- TLS version 1.2 at a minimum
- a blacklist of cipher suites that must not be used (in particular, non-ephemeral key exchange algorithms are excluded)
- the SNI and ALPN extensions must be used.

The ALPN protocol (Application-Layer Protocol Negotiation), standardized for HTTP/2, replaces SPDY's NPN and negotiates the protocol that will run on top of TLS. The difference between ALPN and NPN is that with ALPN, the server chooses the protocol from a list of protocols proposed by the client.
- TLS compression must be disabled (because of the BREACH and CRIME attacks)
- TLS renegotiation must be disabled
- key exchanges must use ephemeral cryptography: DHE with at least 2048 bits and ECDHE with at least 224 bits
- the common baseline to honour is TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 with the P-256 elliptic curve

HTTP/2 support on BIG-IP

F5 BIG-IP was the first ADC on the market to support HTTP/2; version 11.6, released in September 2014, already supported draft 14 in experimental mode. HTTP/2 in its final specification is supported in TMOS version 12.0.

Demonstration

A demo video of HTTP/2 on BIG-IP is worth more than a long speech. It shows a web page containing 100 objects taking 17 seconds to load over HTTP/1.1 and only 6 seconds over HTTP/2...

Using a BIG-IP EAV external monitor to monitor HTTP/2 h2c servers
Problem this snippet solves:

Introduction

Beginning in BIG-IP 14.1.0, F5 provides full-proxy (client and server side) support for the HTTP/2 protocol. HTTP/2 connections can run over HTTP without TLS in plaintext, or over HTTPS with TLS encryption. h2 is the protocol identifier for HTTP/2 with TLS, and h2c identifies HTTP/2 without TLS. Note: Modern browsers do not support unencrypted HTTP/2. Beginning in BIG-IP 15.1.0, F5 introduces two new HTTP/2 monitors, http2 and http2_head_f5. They monitor HTTP/2 over TLS but do not monitor h2c. This article describes how you can use Extended Application Verification (EAV), or external, monitors to monitor the h2c health of your pool members and nodes.

BIG-IP external monitors

The built-in BIG-IP http2 and http2_head_f5 monitors perform monitoring using HTTP/2 over TLS, so your h2c pool members, which serve content using HTTP/2 over TCP, will fail both monitor health checks. Instead, you can configure external monitors to do this. External monitors let you create custom scripts containing specific logic not available in built-in BIG-IP monitors to check the health of pool members and nodes. For a complete overview of EAV external monitors and the procedure to implement one, refer to K71282813: Overview of BIG-IP EAV external monitors. An important component of an external monitor is the script, which runs a command such as curl or netcat that interacts with the pool member. To monitor an h2c service, beginning in BIG-IP 14.1.0, you can use the nghttp command. nghttp differs from curl in how it negotiates and establishes HTTP/2, in the following way:

Upgrade header: curl negotiates HTTP/2 by sending an Upgrade header within an HTTP/1.1 connection and switching protocols to HTTP/2. The following is an example:

# curl --http2 -Ik http://192.0.2.5
HTTP/1.1 101 Switching Protocols
Upgrade: h2c
Connection: Upgrade
HTTP/2.0 200

Direct: nghttp negotiates HTTP/2 by sending HTTP/2 frames directly to the pool member.
The following example shows nghttp sending the initial HTTP/2 SETTINGS frame right after TCP is established:

# nghttp -nv http://192.0.2.5
[0.000] Connected
[0.001] send SETTINGS frame <length=12, flags=0x00, stream_id=0> (niv=2)

How to use this snippet:

The external monitor script

The external monitoring script in this article uses the nghttp command as follows:

nghttp -v http://${node_ip}:${2}${URI} 2> /dev/null | grep -E -i "${RECV}" > /dev/null

The server response is piped to grep with the ${RECV} variable. When grep is successful, it returns exit status code 0 and the h2c service of the server is marked up. Note: When a command in the script sends any data or output to stdout, the script exits and the external monitor marks the pool member up. For example, if you include an echo up command at the top of your script, the external monitor marks the pool member up and the rest of the code below the command does not run.

External script implementation

To implement an h2c external monitor, copy and paste the following code and follow the procedure in K71282813: Overview of BIG-IP EAV external monitors. You must define the RECV string in the Variables parameter of your BIG-IP external monitor in the Configuration utility. This is because, referring to the nghttp command described above, when RECV is undefined the grep command will always return status code 0, thereby erroneously marking the pool member up. Optionally, define the URI parameter as appropriate in your environment. For example, you can define URI as /index.html.

Code:

#!/bin/sh
#
# (c) Copyright 1996-2006, 2010-2013 F5 Networks, Inc.
#
# This software is confidential and may contain trade secrets that are the
# property of F5 Networks, Inc. No part of the software may be disclosed
# to other parties without the express written consent of F5 Networks, Inc.
# It is against the law to copy the software. No part of the software may
# be reproduced, transmitted, or distributed in any form or by any means,
# electronic or mechanical, including photocopying, recording, or information
# storage and retrieval systems, for any purpose without the express written
# permission of F5 Networks, Inc. Our services are only available for legal
# users of the program, for instance in the event that we extend our services
# by offering the updating of files via the Internet.
#
# @(#) $Id: //depot/maint/bigip16.0.0/tm_daemon/monitors/sample_monitor#1 $
#
# these arguments supplied automatically for all external pingers:
# $1 = IP (::ffff:nnn.nnn.nnn.nnn notation or hostname)
# $2 = port (decimal, host byte order)
# $3 and higher = additional arguments
#
# $MONITOR_NAME = name of the monitor
#
# In this sample script, $3 is the regular expression

# Name of the pidfile
pidfile="/var/run/$MONITOR_NAME.$1..$2.pid"

# Send signal to the process group to kill our former self and any children
# as external monitors are run with SIGHUP blocked
if [ -f $pidfile ]
then
    kill -9 -`cat $pidfile` > /dev/null 2>&1
fi
echo "$$" > $pidfile

# Remove the IPv6/IPv4 compatibility prefix
node_ip=`echo $1 | sed 's/::ffff://'`

# Using the nghttp utility to get data from the server.
# Search the data received for the expected expression.
nghttp -v http://${node_ip}:${2}${URI} 2> /dev/null | grep -E -i "${RECV}" > /dev/null
status=$?

if [ $status -eq 0 ]
then
    # Remove the pidfile before the script echoes anything to stdout and is killed by bigd
    rm -f $pidfile
    echo "up"
fi

# Remove the pidfile before the script ends
rm -f $pidfile

Mitigating recent HTTP/2 DoS vulnerabilities with BIG-IP
The F5 Networks Threat Research team has been looking into the HTTP/2 protocol in order to assess the potential risks and possible attack vectors. During the research, two previously unknown attack vectors that affect multiple implementations of the protocol were discovered.

SETTINGS frame abuse DoS

In our research, we found that attackers can abuse the HTTP/2 SETTINGS frame by continually sending large SETTINGS frames that contain repeating settings parameters with changing values. Although the HTTP/2 RFC explicitly warns implementations about this possible DoS vector, none of the implementations that were tested took it into consideration. This attack is mitigated by BIG-IP: BIG-IP acts as a full proxy and opens a new HTTP/2 connection to the application server, so SETTINGS frames received from the client are not forwarded to the application server. Moreover, BIG-IP terminates idle connections when the connection idle timeout configured in the HTTP/2 profile has been reached.

Figure 1: Connection idle timeout setting in BIG-IP HTTP/2 profile

Additional References:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11763
https://nvd.nist.gov/vuln/detail/CVE-2018-16844
https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/ADV190005
https://nvd.nist.gov/vuln/detail/CVE-2018-12545

DoS via Slow POST (Slowloris)

Although Apache HTTP Server has a module named "mod_reqtimeout", which is enabled by default since Apache httpd 2.2.15 and is supposed to stop all variants of the Slowloris attack, we decided to check how well it mitigates the same attack vector over HTTP/2. Surprisingly, it turned out that the attack slips under the radar of "mod_reqtimeout" and makes the Apache server unavailable to serve new requests. All Slowloris DoS variants are mitigated by BIG-IP, as BIG-IP will only open connections and forward the request to pool members once a complete and valid HTTP request has been received.
Since the requests transmitted during a Slowloris attack will never complete, no connections will be opened to the pool members. BIG-IP can also mitigate Slowloris attacks through the "Slow Flow Monitoring" setting in its eviction policy.

Figure 2: Mitigating Slowloris attacks via BIG-IP eviction policy

Additional References:
https://nvd.nist.gov/vuln/detail/CVE-2018-17189

Understanding HTTP/2 Profile's Frame Size option on BIG-IP
Quick Intro

The Overview of the BIG-IP HTTP/2 profile article on AskF5 I created a while ago describes all the HTTP/2 profile options, but sometimes we need to test things out ourselves to grasp them at a deeper level. In this article, I'm going to show how the Frame Size option sets only the maximum size of the HTTP/2 DATA message's payload in bytes, and what happens in Wireshark when we change this value. Think of it as a quick walkthrough to give us a deeper understanding of how HTTP/2 works as we go through.

The Topology

It's literally a client on 10.199.3.135 and a virtual server with HTTP + HTTP/2 profiles applied with the default settings.

Testing the Frame Size Option

Here I've tried to modify frame-size to an invalid value so we can see the valid range. Let's set the frame-size to 1024 bytes. I have curl installed on my client machine, and this is the command I used. If we just filter for http2 on Wireshark, we should see the negotiation phase (SETTINGS) as well as the request (GET) and response (200 OK) headers in their specific message type (HEADERS). However, our focus here is on the DATA message type, as seen below. I've now added a new column (Length) to include the length of DATA messages, so we can easily see how the Frame Size setting affects DATA length. Here's how we create such a filter (I've further renamed it to HTTP2 DATA Length, but you get the point). If we list only DATA messages, we can see that the payload of an HTTP/2 DATA message will not go beyond 1024 bytes. Wireshark confirms that the HTTP/2 headers + DATA payload of frame 26 is 1033 bytes (1024 bytes of payload plus the 9-byte frame header), while the DATA payload alone is 1024 bytes, as seen below. We can then confirm that only the payload counts toward the frame-size configuration on BIG-IP. I hope you enjoyed the above hands-on walkthrough.

What is HTTP Part X - HTTP/2
In the penultimate article in this What is HTTP? series we covered iRules and local traffic policies and the power they can unleash on your HTTP traffic. To date in this series, the content primarily focuses on HTTP/1.1, as that is still the predominant industry standard. But make no mistake, HTTP/2 is here and here to stay, garnering 30% of all website traffic and climbing steadily. In this article, we'll discuss the problems in HTTP/1.1 addressed in HTTP/2 and how BIG-IP supports the major update.

What's So Wrong with HTTP/1.1?

It's obviously a pretty good standard since it's lasted as long as it has, right? So what's the problem? Well, let's set security aside for this article, since the HTTP/2 committee pretty much punted on it anyway, and let's instead talk about performance. Keep in mind that the foundational constructs of the HTTP protocol come from the internet equivalent of the Jurassic age, where the primary function was to get and post text objects. As the functionality stretched from static sites to dynamic interactive and real-time applications, the underlying protocols didn't change much to support this departure. That said, the two big issues with HTTP/1.1 as far as performance goes are repetitive metadata and head of line blocking. HTTP was designed to be stateless. As such, all applicable metadata is sent on every request and response, which adds anywhere from a minimal to a grotesque amount of overhead.

Head of Line Blocking

For HTTP/1.1, this phenomenon occurs because each request needs a completed response before a client can make another request. Browser hacks to get around this problem involved increasing the number of TCP connections allowed to each host from one to two, and currently to six, as you can see in the image below. More connections, more objects, right? Well, yeah, but you still deal with the overhead of all those connections, and as the number of objects per page continues to grow, the scale doesn't make sense.
Other hacks on the server side include things like domain sharding, where you create the illusion of many hosts so the browser creates more connections. This still presents a scale problem eventually. Pipelining was a thing as well, allowing for parallel connections and the utopia of improved performance. But as it turns out, it was not a good thing at all, proving quite difficult to implement properly and brittle at that, resulting in a grand total of ZERO major browsers actually supporting it.

Radical Departures - The Big Changes in HTTP/2

HTTP/2 still has the same semantics as HTTP/1. It still has request/response, headers in key/value format, a body, etc. And the great thing for clients is that the browser handles the wire protocols, so there are no compatibility issues on that front. There are many improvements and feature enhancements in the HTTP/2 spec, but we'll focus here on a few of the major changes. John recorded a Lightboard Lesson a while back on HTTP/2 with an overview of more of the features not covered here.

From Text to Binary

With HTTP/2 comes a new binary framing layer, doing away with the text-based roots of HTTP. As I said, the semantics of HTTP are unchanged, but the way they are encapsulated and transferred between client and server changes significantly. Instead of a text message with headers and body in tow, there are clear delineations for headers and data, transferred in isolated binary-encoded frames (photo courtesy of Google). Client and server need to understand this new wire format in order to exchange messages, but the applications need not change to utilize the core HTTP/2 changes. For backwards compatibility, all client connections begin as HTTP/1 requests with an upgrade header indicating to the server that HTTP/2 is possible. If the server can handle it, a 101 response to switch protocols is issued by the server; if it can't, the header is simply ignored and the interaction will remain on HTTP/1.
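The binary framing layer is easy to see in code. Here is a small sketch of the fixed 9-octet header that every HTTP/2 frame starts with, per RFC 7540 section 4.1 (function names are mine, for illustration only):

```python
import struct

FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x4: "SETTINGS", 0x8: "WINDOW_UPDATE"}
END_STREAM = 0x1  # for DATA/HEADERS: "no more frames on this stream"

def encode_frame(ftype, flags, stream_id, payload):
    """9-byte header: 24-bit length, 8-bit type, 8-bit flags, 31-bit stream id.
    Client-initiated streams use odd ids; server-initiated streams use even."""
    header = struct.pack(">I", len(payload))[1:]             # 3-byte length
    header += struct.pack(">BBI", ftype, flags, stream_id & 0x7FFFFFFF)
    return header + payload

def decode_header(frame):
    """Pull the fields back out of the first 9 bytes of a frame."""
    length = int.from_bytes(frame[0:3], "big")
    ftype, flags, stream_id = struct.unpack(">BBI", frame[3:9])
    return length, FRAME_TYPES.get(ftype, "UNKNOWN"), flags, stream_id & 0x7FFFFFFF

# A DATA frame on stream 1 carrying the last chunk of a response body:
frame = encode_frame(0x0, END_STREAM, 1, b"hello")
length, name, flags, sid = decode_header(frame)
assert (length, name, sid) == (5, "DATA", 1)
assert flags & END_STREAM   # End Stream set: this is the final DATA frame
```

Because every frame is self-describing like this, headers and data can be interleaved on one connection, which is what makes the multiplexing described next possible.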
You'll note in the picture above that TLS is optional, and while that's true to the letter of the RFC law (see my punting on security comment earlier), the major browsers have not implemented it as optional, so if you want to use HTTP/2, you'll most likely need to do it with encryption.

Multiplexed Streams

HTTP/2 solves the HTTP/1.1 head of line problem by multiplexing requests over a single TCP connection. This allows clients to make multiple requests of the server without requiring a response to earlier requests. Responses can arrive in any order, as the streams all have identifiers (photo courtesy of Google). Compare the image below of an HTTP/2 request to the one from the HTTP/1.1 section above. Notice two things: 1) the reduction of TCP connections from six to one and 2) the concurrency of all the objects being requested. In the brief video below, I toggle back and forth between HTTP/1.1 and HTTP/2 requests at increasing latencies, thanks to a demo tool on golang.org, and show the associated reductions in page load experience as a result. Even at very low latency there is an incredible efficiency in making the switch to HTTP/2. This one change obviates the need for many of the hacks in place for HTTP/1.1 deployments. One thing to note on head of line blocking: TCP actually becomes a stumbling block for HTTP/2 due to its congestion control algorithms. If there is any packet loss in the TCP connection, the retransmit has to be processed before any of the other streams are managed, effectively halting all traffic on that connection. Protocols like QUIC are being developed to ride the UDP wave and overcome some of the limitations in TCP holding back even better performance in HTTP/2.

Header Compression

Given that headers and data are now isolated by frame types, the headers can now be compressed independently, and there is a new compression utility specifically for this called HPACK. This occurs at the connection level. The improvements are two-fold.
First, the header fields are encoded using Huffman coding, thus reducing their transfer size. Second, the client and server maintain an indexed table of previous headers. This table has static entries that are pre-defined for common HTTP headers, and dynamic entries added as headers are seen. Once dynamic entries are present in the table, the index for that dynamic entry will be passed instead of the header values themselves (photo courtesy of amphinicy.com).

BIG-IP Support

F5 introduced the HTTP/2 profile in 11.6 as an early access feature, but it hit general availability in 12.0. The BIG-IP implementation supports HTTP/2 as a gateway, meaning that all your clients can interact with the BIG-IP over HTTP/2, but server-side traffic remains HTTP/1.1. Applying the profile also requires the HTTP and clientssl profiles. If using the GUI to configure the virtual server, the HTTP/2 Profile field will be grayed out until you select an HTTP profile. It will let you try to save at that point even without a clientssl profile, but will complain when saving:

01070734:3: Configuration error: In Virtual Server (/Common/h2testvip) http2 specified activation mode requires a client ssl profile

As far as the profile itself is concerned, the fields available for configuration are shown in the image below. Most of the fields are pretty self-explanatory, but I'll discuss a few of them briefly.

Insert Header - this field allows you to configure a header to inform the HTTP/1.1 server on the back end that the front-end connection is HTTP/2.

Activation Modes - The options here are to restrict modes to ALPN only, which would then allow HTTP/1.1 or negotiate to HTTP/2, or Always, which tells BIG-IP that all connections will be HTTP/2.

Receive Window - We didn't cover the flow control functionality in HTTP/2, but this setting sets the level (HTTP/2 v3+) at which individual streams can be stalled.
Write Size - This is the size of the data frames in bytes that HTTP/2 will send in a single write operation. A larger size will improve network utilization at the expense of increased buffering of the data.

Header Table Size - This is the size of the indexed static/dynamic table that HPACK uses for header compression. A larger table size will improve compression, but at the expense of memory.

In this article, we covered the basics of the major benefits of HTTP/2. There are more optimizations and features to explore, such as server push, which is not yet supported by BIG-IP. You can read about many of those features in this very excellent article on Google's developers portal, where some of the images in this article came from.

Are the HTTP/2 profile defaults sound?
The current default for the HTTP/2 profile has a Concurrent Streams Per Connection default of 10. This seems a bit conservative. The IETF recommended that this value be no smaller than 100, so as not to unnecessarily limit parallelism: https://tools.ietf.org/html/rfc7540#section-6.5.2 Also, NGINX for example has a default of 128, while Citrix NetScaler has 100 as the default maximum number of concurrent HTTP/2 streams in a connection. So, should we tune this value up from 10 to, say, 100? What effects will that have on the appliance? Also, should we then tune any of the other default parameters for better performance?