http2
HTTP/2 Protocol in Plain English using Wireshark
1. Quick Intro

Some people find it easier to take a "test drive" first to learn how a new protocol works in its simplest form and only then read the RFC. Wireshark turns out to be the perfect tool for me to do just that. It's a simple test and here's the topology: I'll issue a HEAD request, and later on a GET request, and we'll see what it looks like in Wireshark. For more information about the HTTP/2 profile and the HTTP/2 protocol itself, you can read the article I published on AskF5 and Jason's DevCentral article: What is HTTP Part X - HTTP/2.

2. Confirmation of which protocol will be used

The packet capture below was the result of a curl command issued from my Ubuntu Linux client (I could have used a browser instead). Note: 10.199.3.44 is my virtual server with an HTTP/2 profile applied. Here's the packet capture in case you want to follow along: http2-test-v1.zip. HTTP/2 is negotiated during the SSL handshake in the Application Layer Protocol Negotiation (RFC 7301) SSL extension: the client says which protocol(s) it supports and the server responds with which one it picked (in this case it's HTTP/2!).

3. Negotiation of HTTP/2 Parameters

Think of this as something that has to take place, like Client Hello and Server Hello in SSL, for example. The server side (BIG-IP in this case) sends a SETTINGS frame, which counts as confirmation that HTTP/2 is being used, plus any flow-control configuration we want our peer to honour. The client sends the Magic frame to confirm HTTP/2 is being used and then SETTINGS with its requirements for the connection. Yes, the Magic frame is always the same. Still curious about the Magic frame? Read https://tools.ietf.org/html/rfc7540#section-3.5. Endpoints are also supposed to ACK the receipt of the SETTINGS frame from the other peer, and the way they do it is by responding with another empty SETTINGS frame with the ACK flag set.

4. Exchanging data

Connection-wise we're all set now. HTTP/2 GET/HEAD requests use a specific frame type called HEADERS which, as the name implies, carries HTTP/2 header information. If there were a payload it would be carried inside a DATA frame, but as this is just a HEAD request, no DATA frame follows.

5. Appendix A - Other common frame types

5.1 WINDOW_UPDATE

There are other common frame types, and the one that came up in my capture was WINDOW_UPDATE. In section 3 above we saw that the client advised BIG-IP that its Initial Window Size was 1073741824. WINDOW_UPDATE just adjusted this value to 1073676289. This is HTTP/2 flow control in action.

5.2 DATA

In another test (http2-v2.zip) I used an HTTP/2 GET request instead of HEAD and requested more data, which comes in through the DATA frame type. The End Stream flag is false in all DATA messages except the last one: a false flag signals that more data is coming, and a true flag marks the final DATA frame.

5.3 GOAWAY

In a subsequent test (http2-connection-idletimeout-1.zip) I set Connection Idle Timeout in the HTTP/2 profile to 1 to force BIG-IP to send a GOAWAY frame and close down the connection after 1 second of idleness. After the last piece of data is sent by BIG-IP to the client (frame #39), BIG-IP waits 1 second and sends the GOAWAY frame, which initiates the shutdown of the HTTP/2 connection. GOAWAY messages always carry a Last-Stream-ID field (which Wireshark may label Promised-Stream-ID), telling the client the last Stream ID the sender processed. A new Stream ID is typically created for every new HTTP request (via a HEADERS message). We can see that a new HTTP request slipped in on frame #46 but was ignored, as the connection had already been closed on BIG-IP's side.
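If you'd like to reproduce the Magic and SETTINGS exchange from section 3 without any appliance at all, the Python h2 library can generate the exact client preface bytes you see in Wireshark. This is a minimal sketch of my own for illustration; it was not part of the original test:

```python
# pip install h2 -- a minimal sketch of the client-side connection preface
import h2.connection

conn = h2.connection.H2Connection()
conn.initiate_connection()        # queues the Magic plus the initial SETTINGS
preface = conn.data_to_send()

print(preface[:24])               # b'PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n' -- the Magic
print(preface[24:].hex())         # the SETTINGS frame that follows it on the wire
```

The first 24 bytes are always identical, which is why the article notes the Magic frame is "always the same".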
Understanding HTTP/2 Activation Modes on BIG-IP

Introduction

Activation Modes specifies how the BIG-IP system negotiates the HTTP/2 protocol (the same setting is available in TMSH). In this article I go slightly deeper to explain how BIG-IP negotiates an HTTP/2 connection with client peers. Traditionally, HTTP/2 can be negotiated within an HTTP/1.1 connection or via the TLS extension Application Layer Protocol Negotiation (ALPN). Currently, the only supported method on BIG-IP is ALPN. There is another option on BIG-IP named always.

Application Layer Protocol Negotiation (ALPN)

ALPN requires a client-ssl profile applied to the Virtual Server. In ALPN, the client goes through the TLS handshake with BIG-IP and both inform each other about the L7 protocol they want to negotiate in the application_layer_protocol_negotiation extension, as seen on Wireshark. When the TLS handshake is finished you should see HTTP/2 messages, as long as traffic is decrypted, because HTTP/2 requires TLS.

Always

Always is just for debugging purposes and not for production, as it makes BIG-IP exchange HTTP/2 messages without the need for TLS. In the capture, BIG-IP exchanges HTTP/2 messages with the client immediately after the TCP handshake, i.e. no TLS required. When I say without the need for TLS, do not confuse this with the HTTP/1.1 UPGRADE mechanism. In a subsequent capture, I experimentally sent an HTTP/1.1 request with an Upgrade: h2c header using the nghttp tool from my client machine (nghttp http://10.199.3.44), which signals that we want to "talk" HTTP/2 to BIG-IP. BIG-IP replied with SETTINGS and GOAWAY, which are HTTP/2 messages. If BIG-IP supported the upgrade from HTTP/1.1 to HTTP/2, it should have responded with an HTTP/1.1 101 (Switching Protocols) message instead of HTTP/2 SETTINGS directly. This confirms that BIG-IP doesn't support the upgrade from HTTP/1.1 to HTTP/2.

Good bye and Thank you F5, my team and the whole community!

I'd like to take this opportunity to say that I'm leaving F5 for a new challenge, but I'm not leaving the F5 community. I'm truly grateful to be part of this vibrant community and I'd like to thank the whole of F5 and the DevCentral community members for making DevCentral great. A special thank you goes to my team mates Jason Rahm, John Wagnon, Leslie Hubertus, Lief Zimmerman, Chase Abbott, Peter Silva and my manager Tony Hynes. I learnt a lot from you, had lots of fun in our in-person meetings and will always be grateful for that. You'll be truly missed. I won't be posting articles but will still be in the forums, so feel free to drop me a message.
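P.S. If you want to watch the ALPN negotiation described above from the client side, Python's standard ssl module can show which protocol the server picked. This is a sketch of my own, assuming the article's lab virtual server (10.199.3.44) with a client-ssl profile applied and a self-signed certificate:

```python
# A minimal sketch of client-side ALPN, assuming the lab VIP from this article.
import socket, ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE             # lab only: self-signed certificate
ctx.set_alpn_protocols(["h2", "http/1.1"])  # the protocols the client offers

with socket.create_connection(("10.199.3.44", 443)) as sock:
    with ctx.wrap_socket(sock) as tls:
        # 'h2' here means the server selected HTTP/2 during the handshake
        print(tls.selected_alpn_protocol())
```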
What is HTTP Part X - HTTP/2

In the penultimate article in this What is HTTP? series we covered iRules and local traffic policies and the power they can unleash on your HTTP traffic. To date in this series, the content primarily focuses on HTTP/1.1, as that is still the predominant industry standard. But make no mistake, HTTP/2 is here and here to stay, garnering 30% of all website traffic and climbing steadily. In this article, we'll discuss the problems in HTTP/1.1 addressed in HTTP/2 and how BIG-IP supports the major update.

What's So Wrong with HTTP/1.1?

It's obviously a pretty good standard since it's lasted as long as it has, right? So what's the problem? Well, let's set security aside for this article, since the HTTP/2 committee pretty much punted on it anyway, and let's instead talk about performance. Keep in mind that the foundational constructs of the HTTP protocol come from the internet equivalent of the Jurassic age, where the primary function was to get and post text objects. As the functionality stretched from static sites to dynamic, interactive and real-time applications, the underlying protocols didn't change much to support this departure. That said, the two big issues with HTTP/1.1 as far as performance goes are repetitive metadata and head of line blocking. HTTP was designed to be stateless. As such, all applicable metadata is sent on every request and response, which adds anywhere from a minimal to a grotesque amount of overhead.

Head of Line Blocking

For HTTP/1.1, this phenomenon occurs because each request needs a completed response before a client can make another request. Browser hacks to get around this problem involved increasing the number of TCP connections allowed to each host from one to two, and currently to six, as you can see in the image below. More connections, more objects, right? Well, yeah, but you still deal with the overhead of all those connections, and as the number of objects per page continues to grow, the scale doesn't make sense. Other hacks on the server side include things like domain sharding, where you create the illusion of many hosts so the browser creates more connections. This still presents a scale problem eventually. Pipelining was a thing as well, allowing for parallel requests and the utopia of improved performance. But as it turns out, it was not a good thing at all, proving quite difficult to implement properly and brittle at that, resulting in a grand total of ZERO major browsers actually supporting it.

Radical Departures - The Big Changes in HTTP/2

HTTP/2 still has the same semantics as HTTP/1. It still has request/response, headers in key/value format, a body, etc. And the great thing for clients is that the browser handles the wire protocols, so there are no compatibility issues on that front. There are many improvements and feature enhancements in the HTTP/2 spec, but we'll focus here on a few of the major changes. John recorded a Lightboard Lesson a while back on HTTP/2 with an overview of more of the features not covered here.

From Text to Binary

With HTTP/2 comes a new binary framing layer, doing away with the text-based roots of HTTP. As I said, the semantics of HTTP are unchanged, but the way they are encapsulated and transferred between client and server changes significantly. Instead of a text message with headers and body in tow, there are clear delineations for headers and data, transferred in isolated binary-encoded frames (photo courtesy of Google).
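To make the framing concrete, here is a small sketch using the Python hyperframe library (my own illustrative choice, not something from the original article). Headers and body travel in separate, typed binary frames, each prefixed with a fixed 9-byte frame header; the header-block bytes below are a placeholder, since a real HEADERS frame carries an HPACK-encoded block:

```python
# pip install hyperframe -- headers and data as separate, typed binary frames
from hyperframe.frame import HeadersFrame, DataFrame

headers = HeadersFrame(stream_id=1, data=b"<hpack-encoded header block>")
headers.flags.add("END_HEADERS")

body = DataFrame(stream_id=1, data=b"hello")
body.flags.add("END_STREAM")          # no more frames on this stream

print(headers.serialize().hex())      # 9-byte frame header + header block
print(body.serialize().hex())         # 9-byte frame header + payload
```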
Client and server need to understand this new wire format in order to exchange messages, but the applications need not change to utilize the core HTTP/2 changes. For backwards compatibility, all client connections begin as HTTP/1 requests with an upgrade header indicating to the server that HTTP/2 is possible. If the server can handle it, a 101 response to switch protocols is issued by the server; if it can't, the header is simply ignored and the interaction will remain on HTTP/1. You'll note in the picture above that TLS is optional, and while that's true to the letter of the RFC law (see my punting-on-security comment earlier), the major browsers have not implemented it as optional, so if you want to use HTTP/2, you'll most likely need to do it with encryption.

Multiplexed Streams

HTTP/2 solves the HTTP/1.1 head of line problem by multiplexing requests over a single TCP connection. This allows clients to make multiple requests of the server without requiring a response to earlier requests. Responses can arrive in any order as the streams all have identifiers (photo courtesy of Google). Compare the image below of an HTTP/2 request to the one from the HTTP/1.1 section above. Notice two things: 1) the reduction of TCP connections from six to one and 2) the concurrency of all the objects being requested. In the brief video below, I toggle back and forth between HTTP/1.1 and HTTP/2 requests at increasing latencies, thanks to a demo tool on golang.org, and show the associated reductions in page load experience as a result. Even at very low latency there is an incredible efficiency in making the switch to HTTP/2. This one change obviates the need for many of the hacks in place for HTTP/1.1 deployments. One thing to note on head of line blocking: TCP actually becomes a stumbling block for HTTP/2 due to its congestion control algorithms. If there is any packet loss in the TCP connection, the retransmit has to be processed before any of the other streams are managed, effectively halting all traffic on that connection. Protocols like QUIC are being developed to ride the UDP wave and overcome some of the limitations in TCP holding back even better performance in HTTP/2.

Header Compression

Given that headers and data are now isolated by frame types, the headers can be compressed independently, and there is a new compression utility specifically for this called HPACK. This occurs at the connection level. The improvements are two-fold. First, the header fields are encoded using Huffman coding, thus reducing their transfer size. Second, the client and server maintain an indexed table of previous headers. This table has static entries that are pre-defined for common HTTP headers, and dynamic entries added as headers are seen. Once dynamic entries are present in the table, the index for that dynamic entry will be passed instead of the header values themselves (photo courtesy of amphinicy.com).

BIG-IP Support

F5 introduced the HTTP/2 profile in 11.6 as an early access feature, and it hit general availability in 12.0. The BIG-IP implementation supports HTTP/2 as a gateway, meaning that all your clients can interact with the BIG-IP over HTTP/2, but server-side traffic remains HTTP/1.1. Applying the profile also requires the HTTP and clientssl profiles. If using the GUI to configure the virtual server, the HTTP/2 Profile field will be grayed out until you select an HTTP profile.
It will let you try to save at that point even without a clientssl profile, but will complain when saving:

01070734:3: Configuration error: In Virtual Server (/Common/h2testvip) http2 specified activation mode requires a client ssl profile

As far as the profile itself is concerned, the fields available for configuration are shown in the image below. Most of the fields are pretty self-explanatory, but I'll discuss a few of them briefly.

Insert Header - this field allows you to configure a header to inform the HTTP/1.1 server on the back end that the front-end connection is HTTP/2.

Activation Modes - The options here are to restrict modes to ALPN only, which would then allow HTTP/1.1 or negotiate to HTTP/2, or Always, which tells BIG-IP that all connections will be HTTP/2.

Receive Window - We didn't cover the flow control functionality in HTTP/2, but this setting sets the level (HTTP/2 v3+) at which individual streams can be stalled.

Write Size - This is the size of the data frames in bytes that HTTP/2 will send in a single write operation. A larger size will improve network utilization at the expense of an increased data buffer.

Header Table Size - This is the size of the indexed static/dynamic table that HPACK uses for header compression. A larger table size will improve compression, but at the expense of memory.

In this article, we covered the basics of the major benefits of HTTP/2. There are more optimizations and features to explore, such as server push, which is not yet supported by BIG-IP. You can read about many of those features in this very excellent article on Google's developers portal, where some of the images in this article came from.
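Several of the profile fields above correspond directly to HTTP/2 SETTINGS parameters on the wire. As a rough illustration (a sketch of my own using the Python h2 library, not anything BIG-IP-specific), a peer advertises the same kinds of knobs like this:

```python
# pip install h2 -- a sketch of how profile-style knobs map to SETTINGS
import h2.connection
from h2.settings import SettingCodes

conn = h2.connection.H2Connection()
conn.initiate_connection()
conn.data_to_send()  # flush the connection preface

conn.update_settings({
    SettingCodes.HEADER_TABLE_SIZE: 4096,     # cf. Header Table Size
    SettingCodes.INITIAL_WINDOW_SIZE: 65535,  # cf. Receive Window, per stream
    SettingCodes.MAX_FRAME_SIZE: 16384,       # largest frame payload accepted
})
print(conn.data_to_send().hex())  # the SETTINGS frame carrying those values
```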
Multiplexing: TCP vs HTTP2

Can you use both? Of course you can! Here comes the (computer) science...

One of the big performance benefits of moving to HTTP/2 comes from its extensive use of multiplexing. For the uninitiated, multiplexing is the practice of reusing a single TCP connection for multiple HTTP requests and responses. See, in the old days (HTTP/1), a request/response pair required its own special TCP connection. That ultimately resulted in the TCP-connections-per-host limits imposed on browsers and, because web sites today comprise an average of 86 or more individual objects each needing its own request/response, slowed down transfers. HTTP/1.1 let us use "persistent" HTTP connections, which was the emergence of multiplexing (connections could be reused), but it was constrained by the synchronous (in order) requirement of HTTP itself. So you'd open 6 or 7 or 8 connections and then reuse them to get those 80+ objects. With HTTP/2 that's no longer the case. A single TCP connection is all that's required, because HTTP/2 leverages multiplexing and allows asynchronous (parallel) requests. Many request/response pairs can be transferred over that single connection in parallel, resulting in faster transfers and less networking overhead. Because as we all know by now, TCP's three-way handshake and windowing mechanisms (slow start, anyone?) can be a drag (literally) on app performance.

So the question is, now that we've got HTTP/2 and its multiplexing capabilities on the client side of the equation, do we still see a benefit from TCP multiplexing on the server side of the equation? Yes. Absolutely. The reason is operational, and directly related to the traditional transition that has to occur whenever there's a significant "upgrade" to a foundational protocol like HTTP. Remember, IPv6 has been available and ready to go for a decade and we're still not fully transitioned. Think about that for a minute when you consider how long the adoption curve for HTTP/2 is probably going to be. Part of the reason for this is that while many browsers already support HTTP/2, very few organizations have web or app servers that support it. That means that while they could support HTTP/2 on the client side, they can't on the server side. Even assuming the server side can support HTTP/2, there are business and architectural reasons why an organization might choose to delay migration, including licensing, support, and just the cost of the disruption to upgrade. So HTTP/2 winds up being a no-go. Orgs don't move to HTTP/2 on the client side, even though it has significant performance benefits for their increasingly mobile app user population, because they can't support it on the server side. But HTTP/2 gateways (proxies able to support HTTP/2 on the client side and HTTP/1 on the server side) exist. So it's a viable and less disruptive means of migrating to HTTP/2 on the client without having to go "all in" on the server side. But of course that means you're only getting half the benefits of multiplexing associated with HTTP/2. Unless, of course, you're using TCP multiplexing on the server side. What multiplexing offers for clients with HTTP/2, TCP multiplexing capabilities in load balancers and proxies offer for servers with HTTP/1. This is not a new capability.
It's been a core TCP optimization technique for, well, a long time, and it's heavily used to improve performance and reduce load on web/app servers (which means they have greater capacity, operate more efficiently, and improve the economy of scale of any app). On the server side, TCP multiplexing opens (and maintains) a TCP connection to each of the web/app servers it is virtualizing. When requests come in from clients, the requests are sent by the load balancer or proxy over an existing (open) connection to the appropriate app instance. That means the performance of the app is improved by the time otherwise required to open and ramp up a TCP connection. It also means that the intermediary (the load balancer or proxy) can take in multiple HTTP requests and effectively parallelize them (we call this content switching). In the world of HTTP/1, that means if the client opened six TCP connections and then sent 6 different HTTP requests, the intermediary could ostensibly send out all 6 over existing TCP connections to the appropriate web/app servers, thereby speeding up the responses and improving overall app performance. The same thing is true for HTTP/2. The difference is that with HTTP/2 those 6 different requests are coming in over the same TCP connection. But they're still coming in. That means a TCP-multiplexing-capable load balancer (or proxy) can parallelize those requests to the web/app servers and achieve gains in performance that are noticeable (in a good way) to the client. True, that gain may be measured in less than a second for most apps, but it means the user is receiving data faster. And users expect responses (like the whole page) in less than 3 seconds. Or 5, depending on whose study you're looking at. The father of user interface design, Jakob Nielsen, noted that users will notice a 1 second delay. And that was in 1993. I'm pretty sure my 7 year old notices sub-second delays, and is frustrated by them. The point being that every microsecond you can shave off the delivery process (receiving a request and sending a response) is going to improve engagement with users, both consumer and corporate. What HTTP/2 effectively does is provide similar TCP optimizations on the client side of the equation as TCP multiplexing offers on the server side. Thus, using both HTTP/2 and network-based TCP multiplexing is going to offer a bigger gain in performance than using either one alone. And if you couple HTTP/2 and TCP multiplexing with content switching, well... you're going to gain some more. So yes, go ahead. Multiplex on the app and the client side and reap the performance benefits.
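If you want to see the client side of that in action, here's a minimal sketch using Python's httpx (my own illustrative choice; any HTTP/2-capable client behaves similarly, and the target address is the lab VIP used elsewhere in this collection): six parallel requests, one TCP connection.

```python
# pip install 'httpx[http2]' -- six parallel requests over a single connection
import asyncio
import httpx

async def main() -> None:
    # verify=False only because the lab server uses a self-signed certificate
    async with httpx.AsyncClient(http2=True, verify=False) as client:
        responses = await asyncio.gather(
            *(client.get("https://10.199.3.44/") for _ in range(6))
        )
        for r in responses:
            print(r.http_version, r.status_code)  # e.g. HTTP/2 200

asyncio.run(main())
```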
How HTTP/2 Compression works under the hood

Introduction

This is HTTP/1.1 and HTTP/2 requests side-by-side as seen on Wireshark. At first glance, they look quite similar, and HTTP/1.1 even simpler. However, HTTP/2's entire header block occupied 37 bytes as opposed to 76 bytes for HTTP/1.1. This is due to HTTP/2 compression, and that's what we're going to explore in this article. First I'll introduce the 3 methods HTTP/2 uses for compression and then I'll show you my lab set-up. We'll then go through a real packet capture to understand how HTTP/2 compression (HPACK) works, covering as much detail as reasonably possible.

Lab Test Scenario

For this test, I sent 3 consecutive requests (using the same TCP connection) to download 3 identical 10 MB files, named first_req.img, second_req.img and third_req.img. I have a fairly simple lab set-up.

How HTTP/2 compression works

I'll use the first GET request above as the guinea pig here. We all know that a character is typically 1 byte, right? Notice that the whole of :method: GET and :scheme: https are only 1 byte each. That's because compression in HTTP/2 works by not sending headers and values when possible. HTTP/2 compresses headers using a static table, a dynamic table and Huffman encoding. We'll go through each of them now using examples.

Static Table

All the client sends, instead of :method: GET, is a 1-byte index represented by a decimal number. For example, the index that represents :method: GET is 2. HTTP/2 was implemented in such a way that when the receiver reads Index 2, it immediately understands that Index 2 means :method: GET. The same applies to :scheme: https, which is represented by Index 7. Also note that on Wireshark, when we see Indexed Header Field, it means the whole header + value is represented by an index. That's how HTTP/2 achieves "compression" under the hood, but there's more to it. The mapping between headers, or header + value pairs, is listed in a static table in Appendix A of RFC 7541 and contains 61 mappings. However, HTTP/2 also has a dynamic table to store values on the go, and that's what we're going to see in action now.

Dynamic Table

Let's now pick :authority: 10.199.3.44. This one is interesting because :authority: is in the static table but 10.199.3.44 (BIG-IP's HTTP/2 virtual server) isn't. So how does HTTP/2 solve this problem? Because :authority: (the header's name) is present in the static table, it is indexed anyway using Index 1. The value (10.199.3.44) is obviously not in the static table, so BIG-IP assigns a dynamic index, from Index 62 onwards, to the whole ":authority: 10.199.3.44" name + value pair (remember the static table has only 61 indexes!). How do we know BIG-IP assigned such a value? Because of the "Incremental Indexing" keyword. Also note that in this first request, :authority: 10.199.3.44 eats up 10 bytes (1 byte for :authority and 9 bytes for 10.199.3.44)! In the next request, we see not only that the whole :authority: 10.199.3.44 is now using a unique index (63) but that it's only eating up 1 byte this time. Note: the reason why :authority: 10.199.3.44 wasn't assigned Index 62 is just that accept: */* used it first. Normally, the first value uses Index 62, the second Index 63 and so on. Impressive, isn't it? This is the dynamic table in action.
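You can reproduce the shrinking-header effect outside Wireshark with the Python hpack library (my own illustrative choice; BIG-IP's HPACK implementation is internal to TMOS). Encoding the same header list twice on one "connection" shows the second block collapsing to a few index bytes once the dynamic table is populated:

```python
# pip install hpack -- static + dynamic table indexing in action
from hpack import Encoder

enc = Encoder()  # one Encoder == one connection's dynamic table
headers = [(":method", "GET"), (":scheme", "https"),
           (":authority", "10.199.3.44"), (":path", "/first_req.img")]

first = enc.encode(headers)
second = enc.encode(headers)  # dynamic table is now populated

print(len(first), len(second))  # the second block is far smaller
```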
Setting HTTP/2 Dynamic table size on BIG-IP

On BIG-IP, the default value for the dynamic table size is 4096 bytes, and the value is configurable via the GUI or tmsh. I'm now quoting the article I created for AskF5, Overview of the BIG-IP HTTP/2 profile, to expand on what Header Table Size is: "Specifies the maximum table size, in bytes, for the dynamic table of HTTP/2 header compression. The default value is 4096. Note: The HTTP/2 protocol compresses HTTP headers to save bandwidth and uses a static table with predefined values and a dynamic table with values that are likely to be reused again in the same HTTP/2 connection. The Header Table Size limits the number of entries of the HTTP/2 dynamic table as described in Section 4.2 of RFC 7541. When the limit is reached, old entries are evicted so that new entries can be added."

Huffman coding

The values that are not compressed using the static/dynamic table are still not sent directly in plain text. There is a best-effort compression method using Huffman encoding that achieves around 20-30% improvement over plain text. Remember the dynamic table example, where in the first request the :authority: header name was compressed using the static table (Index 1) but its value (10.199.3.44) wasn't? 10.199.3.44 wasn't sent in plain text either! That's right: it was encoded using the Huffman code from Appendix B of RFC 7541.

Appendix - Are there values that are not added to the Dynamic table?

Yes, implementations may decide not to add certain values in order to protect sensitive header fields. In our lab test above, we can see that :path: is indexed, but its value is not, AND the value is not added to the dynamic table. If /first_req.img had been added to the dynamic table, Wireshark's Representation field would read Literal Header Field with Incremental Indexing rather than Literal Header Field without Indexing. The other question I often get asked is about the Name Length and Value Length fields. More specifically, why do they differ from the actual value sent on the wire? Name Length and Value Length are just the size in bytes of the decompressed field as if it had been sent in plain text. For example, :path has 5 characters, and as a character's size is 1 byte, decompressed :path = 5 bytes. The same goes for /first_req.img (14 characters = 14 bytes). However, in reality, only Index 4 (which is 1 byte long) is sent to represent :path, and Huffman code is used to decrease the size of /first_req.img to 11 bytes instead of 14. That's about a 21% reduction in size compared to plain text.
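The "never indexed" representation mentioned above can also be produced explicitly. Here's a sketch with the Python hpack library (again my choice for illustration, and the authorization value is hypothetical) that keeps a sensitive header out of the dynamic table:

```python
# pip install hpack -- keep a sensitive value out of the dynamic table
from hpack import Encoder, NeverIndexedHeaderTuple

enc = Encoder()
block = enc.encode([NeverIndexedHeaderTuple("authorization", "Bearer s3cr3t")])
print(block.hex())  # encoded as "never indexed": still Huffman-coded,
                    # but never cached for reuse on this connection
```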
Microservices and HTTP/2

It's all about that architecture. There are a lot of things we do to improve the performance of web and mobile applications. We use caching. We use compression. We offload security (SSL and TLS) to a proxy with greater compute capacity. We apply image optimization and minification to content. We do all that because performance is king. Failure to perform can be, for many businesses, equivalent to an outage, with increased abandonment rates and angry customers taking to the Internet to express their extreme displeasure. The recently official HTTP/2 specification takes performance very seriously, and introduced a variety of key components designed specifically to address the need for speed. One of these was to base the newest version of the Internet's lingua franca on SPDY. One of the impacts of this decision is that connections between the client (whether tethered or mobile) and the app (whether in the cloud or on big-iron) are limited to just one. One TCP connection per app. That's a huge divergence from HTTP/1, where it was typical to open 2, 4 or 6 TCP connections per site in order to take advantage of broadband. And it worked, for the most part, because, well, broadband. So it wouldn't be a surprise if someone interpreted that one-connection-per-app limitation as a negative in terms of app performance.

There are, of course, a number of changes in the way HTTP/2 communicates over that single connection that ultimately should counteract any potential negative impact on performance from the reduction in TCP connections. The elimination of the overhead of multiple DNS lookups (not insignificant, by the way), as well as TCP-related impacts from slow start and session setup, along with a more forgiving exchange of frames under the covers, is certainly a boon in terms of application performance. The ability to just push multiple responses to the client without having to play the HTTP acknowledgement game is significant, in that it eliminates one of the biggest performance inhibitors of the web: latency arising from too many round trips. We've (as in the corporate We) seen gains of 2-3 times the performance of HTTP/1 with HTTP/2 during testing. And we aren't alone; there's plenty of performance testing going on out there, on the Internets, showing similar improvements. Which is why it's important (very important) that we not undo all the gains of HTTP/2 with an architecture that mimics the behavior (and performance) of HTTP/1.

Domain Sharding and Microservices

Before we jump into microservices, we should review domain sharding, because the concept is important when we look at how microservices are actually consumed and delivered from an HTTP point of view. Scalability patterns (i.e. architectures) include the notion of Y-axis scale, which is a sharding-based pattern. That is, it creates individual scalability domains (or clusters, if you prefer) based on some identifiable characteristic in the request. User identification (often extracted from an HTTP cookie) and URL are commonly used pieces of information upon which to shard requests and distribute them to achieve greater scalability. An incarnation of the Y-axis scaling pattern is domain sharding. Domain sharding, for the uninitiated, is the practice of distributing content to a variety of different host names within a domain. This technique was (and probably still is) very common for overcoming connection limitations imposed by HTTP/1 and its supporting browsers.
You can see evidence of domain sharding when a web site uses images.example.com, scripts.example.com and static.example.com to optimize page or application load time. Connection limitations were by host (origin server), not domain, so this technique was invaluable in achieving greater parallelization of data transfers that made it appear, at least, that pages were loading more quickly. Which made everyone happy. Until mobile came along. Then we suddenly began to realize the detrimental impact of introducing all that extra latency (every connection requires a DNS lookup and a TCP handshake, and suffers the performance impacts of TCP slow start) on a device with much more limited processing (and network) capability. I'm not going to detail the impact; if you want to read about it in more depth I recommend some material from Steve Souders and Tom Daly or Mobify on the subject. Suffice to say, domain sharding has an impact on mobile performance, and it is rarely a positive one.

You might think, well, HTTP/2 is coming and all that's behind us now. Except it isn't. Microservice architectures, in theory if not in practice, are ultimately a sharding-based application architecture that, if we're not careful, can translate into a domain-sharding-based network architecture that ultimately negates any of the performance gains realized by adopting HTTP/2. That means the architectural approach you (that's you, ops) adopt to delivering microservices can have a profound impact on the performance of applications composed from those services. The danger is not that each service will be its own (isolated and localized) "domain", because that's the whole point of microservices in the first place. The danger is that those isolated domains will be presented to the outside world as individual, isolated domains, each requiring its own personal, private connection by clients. Even if we assume there are load balancing services in front of each service (a good assumption at this point), that still means direct connections between the client and each of the services used by the client application, because the load balancing service acts as a virtual service but does not eliminate the isolation. Each one is still its own "domain" in the sense that it requires a separate, dedicated TCP connection. This is essentially the same thing as domain sharding, as each host requires its own IP address to which the client can connect, and its behavior is counterproductive to HTTP/2*.

What we need to do to continue the benefits of a single, optimized TCP connection while being able to shard the back end is to architect a different solution in the "big black box" that is the network. To be precise, we need to take advantage of the advanced capabilities of a proxy-based load balancing service rather than a simple load balancer.

An HTTP/2 Enabling Network Architecture for Microservices

That means we need to enable a single connection between the client and the server and then utilize capabilities like Y-axis sharding (content switching, L7 load balancing, etc.) in "the network" to maintain the performance benefits of HTTP/2 to the client while enabling all the operational and development benefits of a microservices architecture. What we can do is insert a layer 7 load balancer between the client and the local microservice load balancers, as sketched below.
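As a toy illustration of what that layer 7 tier does (entirely my own sketch, with hypothetical service names and addresses, not an F5 configuration), content switching boils down to mapping a request attribute such as the URL path to a service-specific pool:

```python
# A toy sketch of Y-axis content switching: route by URL path to a pool.
# All names and addresses here are hypothetical.
ROUTES = {
    "/users/":  ["10.0.1.10:8080", "10.0.1.11:8080"],  # user-service pool
    "/orders/": ["10.0.2.10:8080", "10.0.2.11:8080"],  # order-service pool
}
DEFAULT_POOL = ["10.0.9.10:8080"]

def pick_pool(path: str) -> list[str]:
    """Return the upstream pool for a request path (longest prefix wins)."""
    matches = [prefix for prefix in ROUTES if path.startswith(prefix)]
    return ROUTES[max(matches, key=len)] if matches else DEFAULT_POOL

print(pick_pool("/users/42"))   # -> user-service pool
print(pick_pool("/healthz"))    # -> default pool
```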
The connection on the client side remains a single connection in the manner specified (and preferred) by HTTP/2, and requires only a single DNS lookup and one TCP session start-up, incurring the penalties of TCP slow start only once. On the service side, the layer 7 load balancer also maintains persistent connections to the local, domain load balancing services, which further reduces the impact of session management on performance. Each of the local, domain load balancing services can be optimized to best distribute requests for its service. Each maintains its own algorithm and monitoring configuration, unique to the service, to ensure optimal performance. This architecture is only minimally different from the default, but the insertion of a layer 7 load balancer capable of routing application requests based on a variety of HTTP variables (such as the cookies used for persistence, a user ID extracted from them, or the unique verb or noun associated with a service from the URL of a RESTful API call) results in a network architecture that closely maintains the intention of HTTP/2 without requiring significant changes to a microservice-based application architecture. Essentially, we're combining X- and Y-axis scalability patterns to architect a collaborative operational architecture capable of scaling and supporting microservices without compromising on the technical aspects of HTTP/2 that were introduced to improve performance, particularly for mobile applications. Technically speaking we're still doing sharding, but we're doing it inside the network and without breaking the one-TCP-connection-per-app model specified by HTTP/2. Which means you get the best of both worlds: performance and efficiency.

Why DevOps Matters

The impact of new architectures, like microservices, on the network and the resources (infrastructure) that deliver those services is not always evident to developers or even ops. That's one of the reasons DevOps as a cultural force within IT is critical: it engenders a breaking down of the isolated silos between ops groups that exist (all four of them) and enables greater collaboration. That leads to more efficient deployment, yes, but also to more efficient implementations; implementations that don't cause performance problems requiring disruptive modification to applications or services. Collaboration in the design and architectural phases will go a long way towards improving not only the efficacy of the deployment pipeline but also the performance and efficiency of applications across the entire operational spectrum.

* It's not good for HTTP/1, either, as in this scenario there is essentially no difference** between HTTP/1 and HTTP/2.

** In terms of network impact. HTTP/2 still receives benefits from its native header compression and other performance features.
Goodbye SPDY, Hello HTTP/2

The Chrome team at Google recently announced that they will be removing SPDY support from Chrome in early 2016. SPDY is an application layer protocol designed to improve the way that data is sent from web servers to clients. Depending on who you read, performance benefits ranged from a 2X speed increase down to negligible. Now, since as of March 2015 only 3.6% of websites were running SPDY, maybe the end of SPDY isn't such big news, especially since SPDY is being replaced by HTTP/2. The HTTP/2 protocol is on the way to standardization, has pretty much all of the benefits of SPDY, and will undoubtedly become the standard for web traffic moving forward. All of this is great, unless you are the one tasked with reconfiguring or re-implementing your server estate to switch from SPDY to HTTP/2. Fortunately, for those F5 customers who implemented SPDY using the SPDY gateway feature in BIG-IP LTM, switching to HTTP/2 is easy. TMOS 11.6 ships with HTTP/2 support. We've cautiously labeled this as 'for testing'; after all, the protocol has only just been finalized. All you need to do is configure an HTTP/2 profile and apply it to your Virtual Server, and (where clients are HTTP/2 capable) you are serving HTTP/2. It's about 10 minutes of work. This also gives you a way to offer HTTP/2 support without changing your HTTP/1.x backend, which is handy because, based on the evidence of IE6, old web browsers can take a decade or more to die. In fact, maybe I should get my patent lodged for software to connect aging browsers to the next generation of web infrastructure?
Understanding how BIG-IP enforces TLS requirements on HTTP/2 Profile

Introduction

For those new to the HTTP/2 profile, RFC 7540 section 9.2.1 specifies TLS requirements for HTTP/2 connections. On BIG-IP, there's an option, enabled by default, that makes BIG-IP comply with the above RFC requirements. This setting dictates whether BIG-IP should enforce TLS configuration requirements during client SSL profile configuration. In this article, I will talk about those RFC requirements in the context of BIG-IP configuration.

BIG-IP requires Client SSL profile before adding HTTP/2 profile

BIG-IP does not allow us to add an HTTP/2 profile without adding a Client SSL profile first, as HTTP/2 requires TLS.

TLS Renegotiation must be disabled on Client SSL profile

The other requirement is that we must explicitly disable Renegotiation on the Client SSL profile. In the example, I first added a Client SSL profile (https-vip-client-ssl) to my virtual server (http_test) and then tried adding an HTTP/2 profile (custom_http2_profile); this fails because TLS Renegotiation is enabled on my Client SSL profile. After disabling TLS Renegotiation, I can safely add my HTTP/2 profile to the virtual server.

TLS Cipher Enforcement and TLS Compression

Do not use any of the cipher suites from Appendix A of RFC 7540: roughly, all ciphers that are not ephemeral, plus those using the CBC cipher mode. Ephemeral ciphers such as ECDHE are allowed. You don't need to worry about making any changes here, because BIG-IP will proactively either select the ciphers that are compatible with HTTP/2 from the cipher list (sent by the client in the Client Hello message) or trigger an error (INSUFFICIENT_SECURITY). However, it is worth pointing out that after the profile is applied to a virtual server, BIG-IP does not allow removing compatible ciphers from the cipher list. Regarding TLS compression, we do not support it anyway, so there is nothing to worry about there.

Final Remarks

I would personally leave the Enforce TLS Requirements setting enabled, both to comply with the RFC and for security reasons. For more details, please check the TLS requirements section of the RFC.
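From the client side, you can keep your offered cipher list inside what RFC 7540 Appendix A permits. Here's a small sketch with Python's ssl module (my own illustration, not a BIG-IP requirement), offering only ephemeral AEAD suites alongside ALPN h2:

```python
# Offer only ephemeral AEAD suites -- the kind HTTP/2 does not blacklist.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")  # no static RSA, no CBC mode
ctx.set_alpn_protocols(["h2"])

for cipher in ctx.get_ciphers():
    print(cipher["name"])  # every suite offered is HTTP/2-compatible
```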
QUIC [1] is a new transport protocol that provides similar service guarantees to TCP, and then some, operating over a UDP substrate. It has important advantages over TCP:

Streams: QUIC provides multiple reliable ordered byte streams, which has several advantages for user experience and loss response over the single stream in TCP. The stream concept was used in HTTP/2, but moving it into the transport further amplifies the benefits.

Latency: QUIC can complete the transport and TLS handshakes in a single round trip. Under some conditions, it can complete the application handshake (e.g. HTTP requests) in a single round trip as well.

Privacy and Security: QUIC always uses TLS 1.3, the latest standard in application security, and hides much more data about the connection from prying eyes. Moreover, it is much more resistant than TCP to various attacks on the protocol, because almost all of its packets are authenticated.

Mobility: If put in the right sort of data center infrastructure, QUIC seamlessly adjusts to changes in IP address without losing connectivity. [2]

Extensibility: Innovation in TCP is frequently hobbled by middleboxes peering into packets and dropping anything that seems non-standard. QUIC's encryption, authentication, and versioning should make it much easier to evolve the transport as the internet evolves.

Google started experimenting with early versions of QUIC in 2012, eventually deploying it on Chrome browsers, their mobile apps, and most of their server applications. Anyone using these tools together has been using QUIC for years! The Internet Engineering Task Force (IETF) has been working to standardize it since 2016, and we expect that work to complete in a series of Internet Requests for Comment (RFC) standards documents in late 2020.

The first application to take advantage of QUIC is HTTP. The HTTP/3 standard will publish at the same time as QUIC, and primarily revises HTTP/2 to move the stream multiplexing down into the transport. F5 has been tracking the development of the internet standard. In TMOS 15.1.0.1, we released client-side support for draft-24 of the standard. That is, BIG-IP can proxy your HTTP/1 and HTTP/2 servers so that they communicate with HTTP/3 clients. We rolled out support for draft-25 in 15.1.0.2 and draft-27 in 15.1.0.3. While earlier drafts are available in Chrome Canary and other experimental browser builds, draft-27 is expected to see wide deployment across the internet. While we won't support all drafts indefinitely going forward, our policy will be to support two drafts in any given maintenance release. For example, 15.1.0.2 supports both draft-24 and draft-25. If you're delivering HTTP applications, I hope you take a look at the cutting edge and give HTTP/3 a try! You can learn more about deploying HTTP/3 on BIG-IP on our support page at K60235402: Overview of the BIG-IP HTTP/3 and QUIC profiles.

-----

[1] Despite rumors to the contrary, QUIC is not an acronym.

[2] F5 doesn't yet support QUIC mobility features. We're still in the midst of rolling out improvements.
Understanding HTTP/2 Profile's Frame Size option on BIG-IP

Quick Intro

The Overview of the BIG-IP HTTP/2 profile article I created a while ago on AskF5 describes all the HTTP/2 profile options, but sometimes we need to test things ourselves to grasp them at a deeper level. In this article, I'm going to show how the Frame Size option sets only the maximum size, in bytes, of the HTTP/2 DATA message's payload, and what happens on Wireshark when we change this value. Think of it as a quick walkthrough that gives us a deeper understanding of how HTTP/2 works as we go.

The Topology

It's literally a client on 10.199.3.135 and a virtual server with HTTP + HTTP/2 profiles applied with the default settings.

Testing Frame Size Option

Here I've tried to set frame-size to an invalid value so we can see the valid range. Let's set the frame-size to 1024 bytes. I have curl installed on my client machine and used it to issue the request. If we just filter for http2 on Wireshark, we should see the negotiation phase (SETTINGS) as well as the request (GET) and response (200 OK) headers in their specific message type (HEADERS). However, our focus here is on the DATA message type. I've added a new column (Length) showing the length of DATA messages so we can easily see how the Frame Size setting affects DATA length, and renamed it to HTTP2 DATA Length, but you get the point. If we list only DATA messages, we can see that the payload of an HTTP/2 DATA message never goes beyond 1024 bytes. Wireshark confirms that the HTTP/2 header + DATA payload of frame 26 is 1033 bytes, but the DATA payload alone is 1024 bytes. We can then confirm that only the payload counts towards the frame-size configuration on BIG-IP. I hope you enjoyed the above hands-on walkthrough.
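As a closing addendum, the 1033 = 9 + 1024 arithmetic falls out of HTTP/2's fixed 9-byte frame header. Here's a sketch of my own with the Python hyperframe library (not part of the original test) showing how a sender honoring a 1024-byte maximum frame size slices a payload:

```python
# pip install hyperframe -- payload sliced to honor a 1024-byte frame size
from hyperframe.frame import DataFrame

MAX_FRAME_SIZE = 1024          # the profile's Frame Size setting in the test
payload = b"x" * 2500          # pretend response body for stream 1

frames = [
    DataFrame(stream_id=1, data=payload[i:i + MAX_FRAME_SIZE])
    for i in range(0, len(payload), MAX_FRAME_SIZE)
]
frames[-1].flags.add("END_STREAM")  # only the last DATA frame ends the stream

for f in frames:
    wire = f.serialize()
    print(len(f.data), len(wire))   # e.g. 1024 payload -> 1033 on the wire
```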