What is HTTP Part X - HTTP/2
In the penultimate article in this What is HTTP? series we covered iRules and local traffic policies and the power they can unleash on your HTTP traffic. To date in this series, the content has focused primarily on HTTP/1.1, as that is still the predominant industry standard. But make no mistake, HTTP/2 is here and here to stay, garnering 30% of all website traffic and climbing steadily. In this article, we'll discuss the problems in HTTP/1.1 addressed in HTTP/2 and how BIG-IP supports the major update.

What's So Wrong with HTTP/1.1?

It's obviously a pretty good standard since it's lasted as long as it has, right? So what's the problem? Well, let's set security aside for this article, since the HTTP/2 committee pretty much punted on it anyway, and let's instead talk about performance. Keep in mind that the foundational constructs of the HTTP protocol come from the internet equivalent of the Jurassic age, when the primary function was to get and post text objects. As the functionality stretched from static sites to dynamic, interactive, and real-time applications, the underlying protocols didn't change much to support this departure. That said, the two big issues with HTTP/1.1 as far as performance goes are repetitive metadata and head of line blocking. HTTP was designed to be stateless. As such, all applicable metadata is sent on every request and response, which adds anywhere from a minimal to a grotesque amount of overhead.

Head of Line Blocking

For HTTP/1.1, this phenomenon occurs because each request needs a completed response before a client can make another request. Browser hacks to get around this problem involved increasing the number of TCP connections allowed to each host from one to two, and currently to six, as you can see in the image below. More connections, more objects, right? Well, yeah, but you still deal with the overhead of all those connections, and as the number of objects per page continues to grow, the scale doesn't make sense.
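To put a rough number on the head-of-line blocking problem, here's a back-of-the-envelope sketch (in Python, with illustrative numbers of my own, not measurements) of how many serialized round trips a page needs when each connection can carry only one outstanding request at a time:

```python
# Toy model of HTTP/1.1 head-of-line blocking (illustrative numbers only).
# Each request on a connection must wait for the previous response, so with
# C connections the page needs ceil(N / C) serialized round trips.
import math

def page_load_rtts(num_objects: int, num_connections: int) -> int:
    """Round trips needed to fetch num_objects over num_connections,
    with one outstanding request per connection (HTTP/1.1, no pipelining)."""
    return math.ceil(num_objects / num_connections)

# 86 objects (an average page) over a single connection, versus the
# browsers' six-connection hack:
one_conn = page_load_rtts(86, 1)   # 86 serialized round trips
six_conn = page_load_rtts(86, 6)   # 15 serialized round trips
```

Even with six connections you still pay fifteen serialized round trips, plus the setup cost of every extra connection, which is why the hacks stop scaling as object counts grow.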
Other hacks on the server side include things like domain sharding, where you create the illusion of many hosts so the browser creates more connections. This still presents a scale problem eventually. Pipelining was a thing as well, allowing for parallel requests and the utopia of improved performance. But as it turns out, it was not a good thing at all, proving quite difficult to implement properly and brittle at that, resulting in a grand total of ZERO major browsers actually supporting it.

Radical Departures - The Big Changes in HTTP/2

HTTP/2 still has the same semantics as HTTP/1. It still has request/response, headers in key/value format, a body, etc. And the great thing for clients is the browser handles the wire protocols, so there are no compatibility issues on that front. There are many improvements and feature enhancements in the HTTP/2 spec, but we'll focus here on a few of the major changes. John recorded a Lightboard Lesson a while back on HTTP/2 with an overview of more of the features not covered here.

From Text to Binary

With HTTP/2 comes a new binary framing layer, doing away with the text-based roots of HTTP. As I said, the semantics of HTTP are unchanged, but the way they are encapsulated and transferred between client and server changes significantly. Instead of a text message with headers and body in tow, there are clear delineations for headers and data, transferred in isolated binary-encoded frames (photo courtesy of Google). Client and server need to understand this new wire format in order to exchange messages, but the applications need not change to utilize the core HTTP/2 changes. For backwards compatibility, all client connections begin as HTTP/1 requests with an upgrade header indicating to the server that HTTP/2 is possible. If the server can handle it, it issues a 101 response to switch protocols; if it can't, the header is simply ignored and the interaction remains on HTTP/1.
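That cleartext upgrade dance can be sketched in a few lines of Python. This is an illustrative reconstruction of the RFC 7540 section 3.2 mechanism, not any particular client's implementation; the function names and the example SETTINGS value are my own:

```python
import base64
import struct

def http2_settings_payload(settings: dict) -> bytes:
    """Serialize {identifier: value} pairs as an HTTP/2 SETTINGS payload:
    a 16-bit identifier followed by a 32-bit value per entry."""
    return b"".join(struct.pack(">HI", ident, value)
                    for ident, value in settings.items())

def h2c_upgrade_request(host: str, path: str = "/") -> bytes:
    """Build the HTTP/1.1 request that offers an upgrade to cleartext
    HTTP/2 (h2c). The HTTP2-Settings header carries the client's SETTINGS
    payload, base64url-encoded without padding."""
    # SETTINGS_MAX_CONCURRENT_STREAMS (0x3) = 100, an example value
    settings = http2_settings_payload({0x3: 100})
    token = base64.urlsafe_b64encode(settings).rstrip(b"=")
    return (b"GET " + path.encode() + b" HTTP/1.1\r\n"
            b"Host: " + host.encode() + b"\r\n"
            b"Connection: Upgrade, HTTP2-Settings\r\n"
            b"Upgrade: h2c\r\n"
            b"HTTP2-Settings: " + token + b"\r\n\r\n")
```

A server that understands the offer answers with 101 Switching Protocols and continues in HTTP/2; one that doesn't simply ignores the Upgrade header and answers in HTTP/1.1.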
You'll note in the picture above that TLS is optional, and while that's true to the letter of the RFC law (see my punting-on-security comment earlier), the major browsers have not implemented it as optional, so if you want to use HTTP/2, you'll most likely need to do it with encryption.

Multiplexed Streams

HTTP/2 solves the HTTP/1.1 head of line problem by multiplexing requests over a single TCP connection. This allows clients to make multiple requests of the server without requiring a response to earlier requests. Responses can arrive in any order as the streams all have identifiers (photo courtesy of Google). Compare the image below of an HTTP/2 request to the one from the HTTP/1.1 section above. Notice two things: 1) the reduction of TCP connections from six to one and 2) the concurrency of all the objects being requested. In the brief video below, I toggle back and forth between HTTP/1.1 and HTTP/2 requests at increasing latencies, thanks to a demo tool on golang.org, and show the associated reductions in page load experience as a result. Even at very low latency there is an incredible efficiency in making the switch to HTTP/2. This one change obviates the need for many of the hacks in place for HTTP/1.1 deployments. One thing to note on head of line blocking: TCP actually becomes a stumbling block for HTTP/2 due to its congestion control algorithms. If there is any packet loss in the TCP connection, the retransmit has to be processed before any of the other streams are managed, effectively halting all traffic on that connection. Protocols like QUIC are being developed to ride the UDP wave and overcome some of the limitations in TCP holding back even better performance in HTTP/2.

Header Compression

Given that headers and data are now isolated by frame types, the headers can be compressed independently, and there is a new compression utility specifically for this called HPACK. This occurs at the connection level. The improvements are two-fold.
First, the header fields are encoded using Huffman coding, thus reducing their transfer size. Second, the client and server maintain an indexed table of previous headers. This table has static entries that are pre-defined for common HTTP headers, and dynamic entries added as headers are seen. Once dynamic entries are present in the table, the index for that dynamic entry will be passed instead of the header values themselves (photo courtesy of amphinicy.com).

BIG-IP Support

F5 introduced the HTTP/2 profile in 11.6 as an early access feature, and it hit general availability in 12.0. The BIG-IP implementation supports HTTP/2 as a gateway, meaning that all your clients can interact with the BIG-IP over HTTP/2, but server-side traffic remains HTTP/1.1. Applying the profile also requires the HTTP and clientssl profiles. If using the GUI to configure the virtual server, the HTTP/2 Profile field will be grayed out until you select an HTTP profile. It will let you try to save at that point even without a clientssl profile, but will complain when saving:

01070734:3: Configuration error: In Virtual Server (/Common/h2testvip) http2 specified activation mode requires a client ssl profile

As far as the profile itself is concerned, the fields available for configuration are shown in the image below. Most of the fields are pretty self-explanatory, but I'll discuss a few of them briefly.

Insert Header - this field allows you to configure a header to inform the HTTP/1.1 server on the back end that the front-end connection is HTTP/2.

Activation Modes - The options here are to restrict modes to ALPN only, which will then negotiate either HTTP/1.1 or HTTP/2, or Always, which tells BIG-IP that all connections will be HTTP/2.

Receive Window - We didn't cover the flow control functionality in HTTP/2, but this setting sets the level (HTTP/2 v3+) at which individual streams can be stalled.
Write Size - This is the size of the data frames in bytes that HTTP/2 will send in a single write operation. A larger size will improve network utilization at the expense of an increased data buffer.

Header Table Size - This is the size of the indexed static/dynamic table that HPACK uses for header compression. A larger table size will improve compression, but at the expense of memory.

In this article, we covered the major benefits of HTTP/2. There are more optimizations and features to explore, such as server push, which is not yet supported by BIG-IP. You can read about many of those features in this excellent article on Google's developers portal, where some of the images in this article came from.

Understanding HTTP/2 Activation Modes on BIG-IP
Introduction

Activation modes specify how the BIG-IP system negotiates the HTTP/2 protocol: TMSH equivalent: In this article I go slightly deeper to explain how BIG-IP negotiates an HTTP/2 connection with client peers. Traditionally, HTTP/2 can be negotiated within an HTTP/1.1 connection or via the TLS extension Application Layer Protocol Negotiation (ALPN). Currently, the only supported method on BIG-IP is ALPN. There is another option on BIG-IP named always.

Application Layer Protocol Negotiation (ALPN)

ALPN requires a client-ssl profile applied to the Virtual Server: In ALPN, the client goes through the TLS handshake with BIG-IP and both inform each other about the L7 protocol they want to negotiate in the application_layer_protocol_negotiation extension, as seen on Wireshark below: When the TLS handshake is finished you should see HTTP/2 messages, as long as traffic is decrypted, because HTTP/2 requires TLS.

Always

Always is just for debugging purposes and not for production, as this makes BIG-IP exchange HTTP/2 messages without the need for TLS. In the capture below, BIG-IP exchanges HTTP/2 messages with the client immediately after the TCP handshake, i.e. no TLS required, like this: When I say without the need for TLS, do not confuse this with the HTTP/1.1 UPGRADE. In a subsequent capture, I experimentally sent an HTTP/1.1 request with an Upgrade: h2c header using the nghttp tool from my client machine (nghttp http://10.199.3.44), which signals we want to "talk" HTTP/2 to BIG-IP, and here's what happens: BIG-IP replied with SETTINGS and GOAWAY, which are HTTP/2 messages: If BIG-IP supported the UPGRADE from HTTP/1.1 to HTTP/2, it should have responded with an HTTP/1.1 101 (Switching Protocols) message instead of HTTP/2 SETTINGS directly, as seen above. This confirms BIG-IP doesn't support the upgrade from HTTP/1.1 to HTTP/2.

Good bye and Thank you F5, my team and the whole community!

I'd like to take this opportunity to say that I'm leaving F5 for a new challenge, but I'm not leaving the F5 community.
I'm truly grateful to be part of this vibrant community and I'd like to thank the whole of F5 and the DevCentral community members for making DevCentral great. However, a special thank you goes to my team mates Jason Rahm, John Wagnon, Leslie Hubertus, Lief Zimmerman, Chase Abbott, Peter Silva and my manager Tony Hynes. I learnt a lot from you, had lots of fun in our in-person meetings and will always be grateful for that. You'll be truly missed. I won't be posting articles but will still be in the forums, so feel free to drop me a message.

Multiplexing: TCP vs HTTP2
Can you use both? Of course you can! Here comes the (computer) science… One of the big performance benefits of moving to HTTP/2 comes from its extensive use of multiplexing. For the uninitiated, multiplexing is the practice of reusing a single TCP connection for multiple HTTP requests and responses. See, in the old days (HTTP/1), a request/response pair required its own special TCP connection. That ultimately resulted in the TCP connection-per-host limits imposed on browsers and, because web sites today are comprised of an average of 86 or more individual objects each needing its own request/response, slowed down transfers. HTTP/1.1 let us use "persistent" HTTP connections, which was the emergence of multiplexing (connections could be reused) but constrained by the synchronous (in order) requirement of HTTP itself. So you'd open 6 or 7 or 8 connections and then reuse them to get those 80+ objects. With HTTP/2 that's no longer the case. A single TCP connection is all that's required because HTTP/2 leverages multiplexing and allows asynchronous (parallel) requests. Many request/response pairs can be transferred over that single connection in parallel, resulting in faster transfers and less networking overhead. Because as we all know by now, TCP's three-way handshake and windowing mechanisms (slow start, anyone?) can be a drag (literally) on app performance. So the question is, now that we've got HTTP/2 and its multiplexing capabilities on the client side of the equation, do we still see a benefit from TCP multiplexing on the server side of the equation? Yes. Absolutely. The reason is operational, and directly related to the pretty traditional transition that has to occur whenever there's a significant "upgrade" to a foundational protocol like HTTP. Remember, IPv6 has been available and ready to go for a decade and we're still not fully transitioned.
Think about that for a minute when you consider how long the adoption curve for HTTP/2 is probably going to be. Part of the reason for this is that while many browsers already support HTTP/2, very few organizations have web or app servers that support HTTP/2. That means that while they could support HTTP/2 on the client side, they can't on the server side. Assuming the server side can support HTTP/2, there are then business and architectural reasons why an organization might choose to delay migration – including licensing, support, and just the cost of the disruption to upgrade. So HTTP/2 winds up being a no-go. Orgs don't move to HTTP/2 on the client side, even though it has significant performance benefits, especially for their increasingly mobile app user population, because they can't support it on the server side. But HTTP/2 gateways (proxies able to support HTTP/2 on the client side and HTTP/1 on the server side) exist. So there is a viable and less disruptive means of migrating to HTTP/2 on the client without having to go "all in" on the server side. But of course that means you're only getting half the benefits of multiplexing associated with HTTP/2. Unless, of course, you're using TCP multiplexing on the server side. What multiplexing offers for clients with HTTP/2, TCP multiplexing capabilities in load balancers and proxies offer for servers with HTTP/1. This is not a new capability. It's been a core TCP optimization technique for, well, a long time, and it's heavily used to improve performance and reduce load on web/app servers (which means they have greater capacity, operate more efficiently, and improve the economy of scale of any app). On the server side, TCP multiplexing opens (and maintains) a TCP connection to each of the web/app servers it is virtualizing. When requests come in from clients, the requests are sent by the load balancer or proxy over an existing (open) connection to the appropriate app instance.
That means the performance of the app is improved by the time saved in not having to open and ramp up a TCP connection. It also means that the intermediary (the load balancer or proxy) can take in multiple HTTP requests and effectively parallelize them (we call this content switching). In the world of HTTP/1, that means if the client opened six TCP connections and then sent 6 different HTTP requests, the intermediary could ostensibly send out all 6 over existing TCP connections to the appropriate web/app servers, thereby speeding up the responses and improving overall app performance. The same thing is true for HTTP/2. The difference is that with HTTP/2 those 6 different requests are coming in over the same TCP connection. But they're still coming in. That means a TCP multiplexing-capable load balancer (or proxy) can parallelize those requests to the web/app servers and achieve gains in performance that are noticeable (in a good way) to the client. True, that gain may be measured in less than a second for most apps, but that means the user is receiving data faster. And users expect responses (like the whole page) in less than 3 seconds. Or 5, depending on whose study you're looking at. The father of user interface design, Jakob Nielsen, noted that users will notice a 1 second delay. And that was in 1993. I'm pretty sure my 7 year old notices sub-second delays – and is frustrated by them. The point being that every micro-second you can shave off the delivery process (receiving a request and sending a response) is going to improve engagement with users – both consumer and corporate. What HTTP/2 effectively does is provide similar TCP optimizations on the client side of the equation as TCP multiplexing offers on the server side. Thus, using both HTTP/2 and network-based TCP multiplexing is going to offer a bigger gain in performance than using either one alone. And if you couple HTTP/2 and TCP multiplexing with content switching, well… you're going to gain some more.
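The server-side reuse described above can be modeled in a few lines. This is a toy sketch with hypothetical names, not BIG-IP's actual implementation; it just shows a proxy that keeps one warm connection per backend and reuses it for every request, so the handshake cost is paid once per backend rather than once per request:

```python
# Toy model of server-side TCP multiplexing: the proxy maintains one open
# connection per backend and reuses it, paying a TCP handshake only on the
# first request to each backend. All names are illustrative.
import itertools

class MultiplexingProxy:
    def __init__(self, backends):
        self.open_connections = {}          # backend -> "connection"
        self.handshakes = 0                 # TCP opens we actually paid for
        self._rr = itertools.cycle(backends)

    def _connection_for(self, backend):
        if backend not in self.open_connections:
            self.handshakes += 1            # simulate a TCP 3-way handshake
            self.open_connections[backend] = f"conn-to-{backend}"
        return self.open_connections[backend]

    def forward(self, request):
        backend = next(self._rr)            # simple round-robin selection
        return (self._connection_for(backend), request)

proxy = MultiplexingProxy(["app1", "app2"])
for i in range(100):
    proxy.forward(f"GET /obj{i}")
# 100 requests, but only 2 handshakes: one per backend.
```

Whether those 100 requests arrived over six HTTP/1.1 connections or one multiplexed HTTP/2 connection, the server side pays the same two handshakes.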
So yes, go ahead. Multiplex on the app and the client side and reap the performance benefits.

SPDY/HTTP2 Profile Impact on Variable Use
When using a SPDY/HTTP2 profile, TCL variables set before the HTTP_REQUEST event are not carried over to further events. This is by design, as the current iRule/TCL implementation is not capable of handling multiple concurrent events, which may be the case with SPDY/HTTP2 multiplexing. This is easily reproducible with this simple iRule:

when CLIENT_ACCEPTED {
    set default_pool [LB::server pool]
}
when HTTP_REQUEST {
    log local0. "HTTP_REQUEST EVENT Client [IP::client_addr]:[TCP::client_port] - $default_pool -"
}

which fails with:

can't read "default_pool": no such variable while executing "log local0. "HTTP_REQUEST EVENT -- $default_pool --""

Storing values in a table works perfectly well and can be used to work around the issue, as shown in the iRule below.

when CLIENT_ACCEPTED {
    table set [IP::client_addr]:[TCP::client_port] [LB::server pool]
}
when HTTP_REQUEST {
    set default_pool [table lookup [IP::client_addr]:[TCP::client_port]]
    log local0. "POOL: |$default_pool|"
}

This information is also available in the wiki on the HTTP2 namespace command list page.

Lightboard Lessons: HTTP/2
HTTP/2 is the new version (developed in 2015) of the HTTP protocol. It comes with several key enhancements over the older HTTP 1.0 and HTTP 1.1 versions. Check out the video below to learn about the new HTTP/2 protocol, why you should use it, and how the BIG-IP can help with your transition from older versions to the new HTTP/2.

Related Resources: Managing HTTP Traffic With The HTTP/2 Profile Goodbye SPDY, Hello HTTP/2

Mitigating recent HTTP/2 DoS vulnerabilities with BIG-IP
The F5 Networks Threat Research team has been looking into the HTTP/2 protocol in order to assess the potential risks and possible attack vectors. During the research, two previously unknown attack vectors that affect multiple implementations of the protocol were discovered.

SETTINGS frame abuse DoS

In our research we found that attackers can abuse the HTTP/2 SETTINGS frame by continually sending large SETTINGS frames that contain repeating settings parameters with changing values. Although the HTTP/2 RFC explicitly warns implementations about this possible DoS vector, none of the implementations that were tested took this into consideration. This attack is mitigated by BIG-IP: because BIG-IP acts as a full proxy and opens a new HTTP/2 connection to the application server, SETTINGS frames received from the client are not forwarded to the application server. Moreover, BIG-IP terminates idle connections when the connection idle timeout configured in the HTTP/2 profile has been reached.

Figure 1: Connection idle timeout setting in BIG-IP HTTP/2 profile

Additional References:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11763
https://nvd.nist.gov/vuln/detail/CVE-2018-16844
https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/ADV190005
https://nvd.nist.gov/vuln/detail/CVE-2018-12545

DoS via Slow POST (Slowloris)

Although the Apache HTTP server has a module named "mod_reqtimeout", which is enabled by default since Apache httpd 2.2.15 and is supposed to stop all variants of the Slowloris attack, we decided to check how well it mitigates the same attack vector over HTTP/2. Surprisingly, it turned out that the attack slips under the radar of "mod_reqtimeout" and makes the Apache server unavailable to serve new requests. All Slowloris DoS variants are mitigated by BIG-IP, as BIG-IP will only open connections and forward the request to pool members once a complete and valid HTTP request has been received.
Since the requests transmitted during a Slowloris attack will never complete, no connections will be opened to the pool members. BIG-IP can also mitigate Slowloris attacks via the "Slow Flow Monitoring" setting in its eviction policy.

Figure 2: Mitigating Slowloris attacks via BIG-IP eviction policy

Additional References
https://nvd.nist.gov/vuln/detail/CVE-2018-17189

Understanding how BIG-IP enforces TLS requirements on HTTP/2 Profile
Introduction

For those new to the HTTP/2 profile, RFC 7540 section 9.2.1 specifies TLS requirements for HTTP/2 connections. On BIG-IP, there's an option that is enabled by default which makes BIG-IP comply with the above RFC requirements: The above setting dictates whether BIG-IP should enforce TLS configuration requirements during client SSL profile configuration. In this article, I will talk about these RFC requirements in the context of BIG-IP configuration.

BIG-IP requires Client SSL profile before adding HTTP/2 profile

BIG-IP does not allow us to add an HTTP/2 profile without adding a Client SSL profile first, as HTTP/2 requires TLS:

TLS Renegotiation must be disabled on Client SSL profile

The other requirement is that we must explicitly disable Renegotiation on the Client SSL profile: In the above example, I first added a Client SSL profile (https-vip-client-ssl) to my virtual server (http_test) and then tried adding an HTTP/2 profile (custom_http2_profile), and it fails because TLS Renegotiation is enabled on my Client SSL profile. After disabling TLS Renegotiation, I can now safely add my HTTP/2 profile to the virtual server:

TLS Cipher Enforcement and TLS Compression

Do not use any of the cipher suites from Appendix A of RFC 7540: roughly, all ciphers that are not ephemeral, plus CBC-mode ciphers. Ephemeral ciphers such as ECDHE are allowed. You don't need to worry about making any changes here because BIG-IP will proactively either select the ciphers that are compatible with HTTP/2 from the cipher list (sent by the client in the Client Hello message) or trigger an error (INSUFFICIENT_SECURITY). However, it is worth pointing out that after a profile is applied to a virtual server, removing compatible ciphers from the cipher list is not allowed, as seen below: Regarding TLS compression, we do not support it anyway, so nothing to worry about.

Final Remarks

I would personally leave the Enforce TLS Requirements setting enabled, both to comply with the RFC and for security reasons.
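Going back to the cipher enforcement section above, here's a rough illustration of that filtering logic. The helper and the candidate list are my own inventions, not BIG-IP's actual code; the toy rule just captures the spirit of RFC 7540's requirements: ephemeral key exchange and an AEAD cipher mode, no CBC:

```python
# Toy filter in the spirit of RFC 7540 Appendix A: reject cipher suites
# that lack ephemeral key exchange (no forward secrecy) or use CBC mode.
# Illustrative only; real implementations check the actual blacklist.
def http2_compatible(cipher: str) -> bool:
    ephemeral = cipher.startswith(("TLS_ECDHE_", "TLS_DHE_"))
    aead = "_GCM_" in cipher or "_CHACHA20_" in cipher
    return ephemeral and aead

client_hello_ciphers = [
    "TLS_RSA_WITH_AES_128_CBC_SHA",            # blacklisted: static RSA + CBC
    "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",      # blacklisted: CBC mode
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",   # OK: ephemeral + AEAD
    "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",  # OK: ephemeral + AEAD
]
usable = [c for c in client_hello_ciphers if http2_compatible(c)]
# If 'usable' ended up empty, the server would fail the connection with
# an INSUFFICIENT_SECURITY error, as described above.
```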
For more details, please check the TLS requirements section in the RFC.

How HTTP/2 Compression works under the hood
Introduction

This is HTTP/1.1 and HTTP/2 requests side-by-side as seen on Wireshark: At first glance, they look quite similar, and HTTP/1.1 even simpler. However, the entire HTTP/2 header block occupied 37 bytes as opposed to 76 bytes for HTTP/1.1: This is due to HTTP/2 compression, and that's what we're going to explore in this article. First I'll introduce the 3 methods HTTP/2 uses for compression and then I'll show you my lab set up. We'll then go through a real packet capture to understand how HTTP/2 compression (HPACK) works, covering as much detail as reasonably possible.

Lab Test Scenario

For this test, I sent 3 consecutive requests (using the same TCP connection) to download 3x identical 10 MB files, named first_req.img, second_req.img and third_req.img: I have a fairly simple lab set up:

How HTTP/2 compression works

I'll use the first GET request above as the guinea pig here. We all know that a character is typically 1 byte, right? Notice that the whole of :method: GET and :scheme: https are only 1 byte each: That's because compression in HTTP/2 works by not sending headers and values when possible. HTTP/2 compresses headers using a static table, a dynamic table and Huffman encoding. We'll go through each of them now using examples.

Static Table

All the client sends, instead of :method: GET, is a 1-byte index represented by a decimal number. For example, the index that represents :method: GET is 2, as seen below: HTTP/2 was implemented in such a way that when the receiver reads Index 2, it immediately understands that Index 2 means :method: GET. The same applies to :scheme: https, which is represented by Index 7. Also note that on Wireshark, when we see Indexed Header Field, it means the whole header + value is represented by an index. That's how HTTP/2 achieves "compression" under the hood, but there's more to it. The mapping between headers or header + values is listed in a static table in Appendix A of RFC 7541 and contains 61 mappings.
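That 1-byte encoding is easy to reproduce. Here's a minimal sketch of HPACK's Indexed Header Field representation (RFC 7541 section 6.1): one octet with the high bit set and the table index in the low 7 bits. Only a few static table entries are shown, and it deliberately ignores the multi-byte integer encoding needed for larger indexes:

```python
# Toy HPACK "Indexed Header Field" codec: one byte, high bit set,
# static table index in the low 7 bits. Only a subset of the 61-entry
# static table from RFC 7541 Appendix A is reproduced here.
STATIC_TABLE = {
    2: (":method", "GET"),
    3: (":method", "POST"),
    7: (":scheme", "https"),
}

def encode_indexed(index: int) -> bytes:
    if not 1 <= index <= 127:
        raise ValueError("multi-byte integer encoding not implemented")
    return bytes([0x80 | index])

def decode_indexed(octet: int):
    assert octet & 0x80, "not an indexed header field"
    return STATIC_TABLE[octet & 0x7F]

# :method: GET compresses to the single byte 0x82, matching what
# Wireshark shows for Index 2:
assert encode_indexed(2) == b"\x82"
```

So the 0x82 and 0x87 bytes you see on the wire are literally index 2 and index 7 with the top bit set.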
However, HTTP/2 also has a dynamic table to store values on the go, and that's what we're going to see in action now.

Dynamic Table

Let's now pick :authority: 10.199.3.44. This one is interesting because :authority: is in the static table but 10.199.3.44 (BIG-IP's HTTP/2 virtual server) isn't. So, how does HTTP/2 solve this problem? Because :authority: (the header's name) is present in the static table, it indexes it anyway using Index 1. The value (10.199.3.44) is obviously not in the static table, but BIG-IP assigns a dynamic index value from Index 62 onwards to the whole ":authority: 10.199.3.44" name + value (remember the static table has only 61 indexes!): How do we know BIG-IP assigned such a value? Because of the "Incremental Indexing" keyword. Also note that in this first request, :authority: 10.199.3.44 eats up 10 bytes (1 byte for :authority and 9 bytes for 10.199.3.44)! In the next request, we not only see that the whole :authority: 10.199.3.44 is now using a unique Index (63) but it's only eating up 1 byte this time: Note: The reason why :authority: 10.199.3.44 wasn't assigned Index 62 is just because accept: */* used it first. Normally, the first value uses Index 62, the second Index 63 and so on. Impressive, isn't it? This is the dynamic table in action.

Setting HTTP/2 Dynamic table size on BIG-IP

On BIG-IP, the default value for the dynamic table size is 4096 bytes, and this value is configurable via the GUI: Or tmsh: I'm now quoting the article I created for AskF5, Overview of the BIG-IP HTTP/2 profile, to expand on what header table size is: "Specifies the maximum table size, in bytes, for the dynamic table of HTTP/2 header compression. The default value is 4096. Note: The HTTP/2 protocol compresses HTTP headers to save bandwidth and uses a static table with predefined values and a dynamic table with values that are likely to be reused again in the same HTTP/2 connection. The Header Table Size limits the number of entries of the HTTP/2 dynamic table as described in Section 4.2 of RFC 7541.
When the limit is reached, old entries are evicted so that new entries are added."

Huffman coding

The values that are not compressed using the static/dynamic table are still not sent directly in plain text. There is a best-effort compression method using Huffman encoding that achieves around 20-30% improvement over plain text. Remember the dynamic table, where in the first request the :authority: header name was compressed using the static table (Index 1) but its value (10.199.3.44) wasn't? 10.199.3.44 wasn't sent in plain text either! That's right. It was encoded using the Huffman code from Appendix B in RFC 7541.

Appendix - Are there values that are not added to Dynamic table?

Yes, implementations may decide not to add certain values to protect sensitive header fields. In our lab test above, we can see that :path: is indexed, but its value is not AND is not added to the dynamic table: If /first_req.img had been added to the dynamic table, Wireshark's Representation field would be Literal Header Field with Incremental Indexing rather than Literal Header Field without Indexing. The other question I often get asked is about the Name Length and Value Length fields. More specifically, why do they differ from the actual value sent on the wire? Name Length and Value Length are just the size in bytes of the decompressed field if it were sent in plain text. For example, :path has 5 characters, and as a character's size is 1 byte, the decompressed :path = 5 bytes. The same goes for /first_req.img (14 characters = 14 bytes). However, in reality, the HTTP/2 client sends only Index 4 (which is 1 byte long) to represent :path, and Huffman coding is used to decrease the size of /first_req.img to 11 bytes instead of 14 bytes. That's about a 21% reduction in size when compared to plain text.

Introducing QUIC and HTTP/3
QUIC [1] is a new transport protocol that provides similar service guarantees to TCP, and then some, operating over a UDP substrate. It has important advantages over TCP:

Streams: QUIC provides multiple reliable ordered byte streams, which has several advantages for user experience and loss response over the single stream in TCP. The stream concept was used in HTTP/2, but moving it into the transport further amplifies the benefits.

Latency: QUIC can complete the transport and TLS handshakes in a single round trip. Under some conditions, it can complete the application handshake (e.g. HTTP requests) in a single round-trip as well.

Privacy and Security: QUIC always uses TLS 1.3, the latest standard in application security, and hides much more data about the connection from prying eyes. Moreover, it is much more resistant than TCP to various attacks on the protocol, because almost all of its packets are authenticated.

Mobility: If put in the right sort of data center infrastructure, QUIC seamlessly adjusts to changes in IP address without losing connectivity. [2]

Extensibility: Innovation in TCP is frequently hobbled by middleboxes peering into packets and dropping anything that seems non-standard. QUIC's encryption, authentication, and versioning should make it much easier to evolve the transport as the internet evolves.

Google started experimenting with early versions of QUIC in 2012, eventually deploying it on Chrome browsers, their mobile apps, and most of their server applications. Anyone using these tools together has been using QUIC for years! The Internet Engineering Task Force (IETF) has been working to standardize it since 2016, and we expect that work to complete in a series of Internet Requests for Comment (RFCs) standards documents in late 2020. The first application to take advantage of QUIC is HTTP. The HTTP/3 standard will publish at the same time as QUIC, and primarily revises HTTP/2 to move the stream multiplexing down into the transport.
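The stream advantage is easy to illustrate. The sketch below (purely illustrative, not QUIC's actual wire format) shows how frames tagged with a stream ID and an offset can arrive interleaved and out of order, yet each stream reassembles independently, so delay or loss on one stream doesn't stall the others:

```python
# Sketch of stream demultiplexing in the spirit of QUIC: frames carrying
# (stream_id, offset, data) arrive interleaved and out of order, and each
# stream is reassembled on its own. Illustrative only.
def reassemble(frames):
    """Collect frames into per-stream byte strings, honoring offsets."""
    streams = {}
    for stream_id, offset, data in frames:
        streams.setdefault(stream_id, {})[offset] = data
    # Concatenate each stream's chunks in offset order.
    return {sid: b"".join(chunks[off] for off in sorted(chunks))
            for sid, chunks in streams.items()}

# Two streams, frames interleaved, and stream 4's first frame arriving last:
wire = [
    (0, 0, b"GET /inde"),
    (4, 7, b"yle.css"),
    (0, 9, b"x.html"),
    (4, 0, b"GET /st"),
]
```

A lost frame on stream 0 would leave stream 4 completely unaffected; in TCP, the whole connection would wait on the retransmit.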
F5 has been tracking the development of the internet standard. In TMOS 15.1.0.1, we released client-side support for draft-24 of the standard. That is, BIG-IP can proxy your HTTP/1 and HTTP/2 servers so that they communicate with HTTP/3 clients. We rolled out support for draft-25 in 15.1.0.2 and draft-27 in 15.1.0.3. While earlier drafts are available in Chrome Canary and other experimental browser builds, draft-27 is expected to see wide deployment across the internet. While we won't support all drafts indefinitely going forward, our policy will be to support two drafts in any given maintenance release. For example, 15.1.0.2 supports both draft-24 and draft-25. If you're delivering HTTP applications, I hope you take a look at the cutting edge and give HTTP/3 a try! You can learn more about deploying HTTP/3 on BIG-IP on our support page at K60235402: Overview of the BIG-IP HTTP/3 and QUIC profiles.

-----

[1] Despite rumors to the contrary, QUIC is not an acronym.

[2] F5 doesn't yet support QUIC mobility features. We're still in the midst of rolling out improvements.

Understanding HTTP/2 Profile's Frame Size option on BIG-IP
Quick Intro

The Overview of the BIG-IP HTTP/2 profile article on AskF5 that I created a while ago describes all the HTTP/2 profile options, but sometimes we need to test things out ourselves to grasp them at a deeper level. In this article, I'm going to show how the Frame Size option sets specifically only the maximum size of the HTTP/2 DATA message's payload in bytes, and what happens on Wireshark when we change this value. Think of it as a quick walkthrough to give us a deeper understanding of how HTTP/2 works as we go.

The Topology

It's literally a client on 10.199.3.135 and a virtual server with HTTP + HTTP/2 profiles applied with the default settings:

Testing Frame Size Option

Here I've tried to modify frame-size to an invalid value so we can see the valid range: Let's set the frame-size to 1024 bytes: I have curl installed on my client machine and this is the command I used: If we just filter for http2 on Wireshark, we should see the negotiation phase (SETTINGS) as well as request (GET) and response (200 OK) headers in their specific message type (HEADERS). However, our focus here is on the DATA message type, as seen below: I've now added a new column (Length) to include the length of DATA messages so we can easily see how the Frame Size setting affects DATA length. Here's how we create such a filter: I've further renamed it to HTTP2 DATA Length, but you get the point. If we list only DATA messages, we can see that the payload of the HTTP/2 DATA message type will not go beyond 1024 bytes: Wireshark confirms that the HTTP/2 header + DATA payload of frame 26 is 1033 bytes, but the DATA payload alone is 1024 bytes, as seen below: We can then confirm that only the payload counts toward the frame-size configuration on BIG-IP. I hope you enjoyed the above hands-on walkthrough.
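As a recap of what we observed on the wire, here's a toy Python sketch (the function and constant names are my own, not BIG-IP's) of how a response payload gets chopped into DATA frames once a 1024-byte frame size is in effect. Each frame carries a 9-byte header (24-bit length, 8-bit type, 8-bit flags, 31-bit stream ID), which is exactly why Wireshark showed 1033 bytes for a frame whose payload is 1024 bytes:

```python
# Split a payload into HTTP/2 DATA frames capped at max_frame_size bytes
# of payload each, per the RFC 7540 frame layout. Illustrative sketch.
import struct

DATA, END_STREAM = 0x0, 0x1

def data_frames(payload: bytes, stream_id: int, max_frame_size: int = 1024):
    frames = []
    for start in range(0, len(payload), max_frame_size):
        chunk = payload[start:start + max_frame_size]
        last = start + max_frame_size >= len(payload)
        header = struct.pack(">I", len(chunk))[1:]      # 24-bit length field
        header += struct.pack(">BBI", DATA,             # type, flags, stream
                              END_STREAM if last else 0,
                              stream_id & 0x7FFFFFFF)
        frames.append(header + chunk)
    return frames

frames = data_frames(b"x" * 2500, stream_id=1)
# 2500 bytes -> payloads of 1024, 1024 and 452; only the final frame
# carries the END_STREAM flag.
```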