http2
39 Topics

F5 AWAF with HTTP/2, MRF and Websocket profiles
Good day all, I have F5 BIG-IP AWAFs (version 16.1.4.3) and I am trying to configure HTTP/2 with MRF. My colleague and I discovered that WebSocket profiles on the virtual server don't play well when MRF is enabled. Is there a way to enable a "hybrid" configuration using WebSocket and HTTP/2 with MRF? I value and appreciate your time and energy and look forward to hearing from you. Thank you.
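For what it's worth, a full-proxy (MRF) HTTP/2 deployment is normally built from separate client-side and server-side http2 profiles plus the httprouter profile, and the websocket profile would sit on top of that. The sketch below is only my rough understanding, with hypothetical object names; the exact profile set varies by version, so verify it against the HTTP/2 full-proxy overview on AskF5 for 16.1 before trying it:

    # Rough sketch of an MRF (full-proxy) HTTP/2 virtual; names hypothetical.
    tmsh create ltm profile http2 h2_clientside
    tmsh create ltm profile http2 h2_serverside
    tmsh modify ltm virtual vs_awaf_app profiles add { h2_clientside { context clientside } h2_serverside { context serverside } httprouter }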
Problem with big packets using http2
Hi workmates, an application that passes through my F5 BIG-IP requires, for large POST requests, increasing the maximum header size from the default of 32k to 65k, and everything works perfectly, but only if I use HTTP/1.1. If I also enable the http2 profile, the packets are dropped by the F5. Do you know if it is possible to use packets bigger than 32k with HTTP/2? My F5 version is BIG-IP 15.1.6
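For the HTTP/1.1 side, the relevant knob is max-header-size on the HTTP profile. A minimal sketch, assuming a custom profile name; whether the http2 profile honors the same limit on your version is a separate question worth confirming with F5 support:

    # Raise the maximum accepted header size to 64k on the HTTP profile (value in bytes).
    tmsh modify ltm profile http my_http_profile max-header-size 65536
    tmsh save sys config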
Inconsistent forwarding of HTTP/2 connections with layered virtual
Hi, I'm using a layered virtual configuration:
Tier1: Virtual applying SNI routing (only an SSL persistence profile and an LTM policy, as described in https://www.devcentral.f5.com/kb/technicalarticles/sni-routing-with-big-ip/282018)
Tier2: Virtual applies SSL termination and delivers the actual application, with the required profiles, iRules, and so on. If required, an additional LTM policy is applied for URI-based routing and forwards to a Tier3 VS.
Tier3 (optional, if required): Virtual delivers specific applications, like microservices, usually no monolithic apps.
This configuration is very robust and I've been working with it successfully for years. Important: the Tier1 uses one single IP address and a single port, so all Tier2 and Tier3 virtuals MUST be externally available through the same IP address and port. Now I have to publish the first HTTP/2 applications over this concept and see strange behavior from the BIG-IP. A user requests www.example.com. IP and port point to the Tier1 virtual. The Tier1 LTM policy forwards the request, based on the SNI, to the Tier2 virtual "vs-int_www.example.com". Within www.example.com there are references to piwik.example.com, which is another Tier2 virtual behind my Tier1 virtual. The user requests piwik.example.com. IP and port point to the Tier1 virtual. The Tier1 LTM policy forwards the request to "vs-int_www.example.com" instead of "vs-int_piwik.example.com", probably not based on the SNI, but on the existing TCP connection. I'm afraid this behavior is a result of HTTP/2, especially because of the persistent TCP connection. I assume that because the connection ID (gathered from the browser devtools) for requests to www.example.com and piwik.example.com is identical. From the perspective of the browser I wouldn't expect such behavior, because the target hostname differs. I didn't configure HTTP/2 in full-proxy mode, as described in several articles; I've just enabled it on the client side. I would be very happy for any input on that. Thanks in advance!
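That symptom matches HTTP/2 connection coalescing: a browser will reuse an existing connection for a second hostname when it resolves to the same IP and the presented certificate also covers that name. One way to check whether the certificate served for each SNI covers both names (the IP and hostnames below are stand-ins for your own):

    # Compare the SANs presented for each SNI on the shared tier-1 IP.
    openssl s_client -connect 203.0.113.10:443 -servername www.example.com -alpn h2 </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName
    openssl s_client -connect 203.0.113.10:443 -servername piwik.example.com -alpn h2 </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName

If both names appear in one certificate's SAN list, the browser has every reason to coalesce, and the tier-1 policy will never see a second CLIENTHELLO for the reused connection.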
SSL Offload with HTTP/2.0
I need to configure SSL offload with HTTP/2.0. All the guidance I've read says we need to choose clientssl-secure as the client SSL profile - but how does that work when you're terminating the TLS session? How do we configure a certificate on the client side?
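One way this is commonly handled, as a sketch with hypothetical object and file names: clientssl-secure is a built-in parent profile, so you create a child client-ssl profile that carries your own certificate and key, and attach that child to the virtual alongside the http and http2 profiles:

    # Child client-ssl profile with your own cert/key, parented to clientssl-secure.
    tmsh create ltm profile client-ssl my_clientssl defaults-from clientssl-secure cert-key-chain add { my_chain { cert my_site.crt key my_site.key } }
    tmsh modify ltm virtual my_h2_vs profiles add { my_clientssl { context clientside } }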
What is HTTP Part X - HTTP/2
In the penultimate article in this What is HTTP? series we covered iRules and local traffic policies and the power they can unleash on your HTTP traffic. To date in this series, the content has primarily focused on HTTP/1.1, as that is still the predominant industry standard. But make no mistake, HTTP/2 is here and here to stay, garnering 30% of all website traffic and climbing steadily. In this article, we'll discuss the problems in HTTP/1.1 addressed in HTTP/2 and how BIG-IP supports the major update.
What's So Wrong with HTTP/1.1?
It's obviously a pretty good standard, since it's lasted as long as it has, right? So what's the problem? Well, let's set security aside for this article, since the HTTP/2 committee pretty much punted on it anyway, and let's instead talk about performance. Keep in mind that the foundational constructs of the HTTP protocol come from the internet equivalent of the Jurassic age, where the primary function was to get and post text objects. As the functionality stretched from static sites to dynamic, interactive and real-time applications, the underlying protocols didn't change much to support this departure. That said, the two big issues with HTTP/1.1 as far as performance goes are repetitive metadata and head of line blocking. HTTP was designed to be stateless. As such, all applicable metadata is sent on every request and response, which adds from minimal to a grotesque amount of overhead.
Head of Line Blocking
For HTTP/1.1, this phenomenon occurs because each request needs a completed response before a client can make another request. Browser hacks to get around this problem involved increasing the number of TCP connections allowed to each host from one to two, and currently to six, as you can see in the image below. More connections, more objects, right? Well yeah, but you still deal with the overhead of all those connections, and as the number of objects per page continues to grow, the scale doesn't make sense. Other hacks on the server side include things like domain sharding, where you create the illusion of many hosts so the browser creates more connections. This still presents a scale problem eventually. Pipelining was a thing as well, allowing for parallel connections and the utopia of improved performance. But as it turns out, it was not a good thing at all, proving quite difficult to implement properly and brittle at that, resulting in a grand total of ZERO major browsers actually supporting it.
Radical Departures - The Big Changes in HTTP/2
HTTP/2 still has the same semantics as HTTP/1. It still has request/response, headers in key/value format, a body, etc. And the great thing for clients is the browser handles the wire protocols, so there are no compatibility issues on that front. There are many improvements and feature enhancements in the HTTP/2 spec, but we'll focus here on a few of the major changes. John recorded a Lightboard Lesson a while back on HTTP/2 with an overview of more of the features not covered here.
From Text to Binary
With HTTP/2 comes a new binary framing layer, doing away with the text-based roots of HTTP. As I said, the semantics of HTTP are unchanged, but the way they are encapsulated and transferred between client and server changes significantly. Instead of a text message with headers and body in tow, there are clear delineations for headers and data, transferred in isolated binary-encoded frames (photo courtesy of Google).
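A quick aside: you can get a feel for the head-of-line cost described above from any command line. A minimal sketch, assuming a curl build with HTTP/2 support and substituting a site of your own; a single object won't show the multiplexing gains a full page load does, but it confirms which protocol is negotiated and lets you compare transfer times:

    # Fetch the same page over HTTP/1.1 and HTTP/2 and compare total transfer time.
    curl -so /dev/null --http1.1 -w 'HTTP/1.1 total: %{time_total}s\n' https://www.example.com/
    curl -so /dev/null --http2   -w 'HTTP/2   total: %{time_total}s\n' https://www.example.com/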
Client and server need to understand this new wire format in order to exchange messages, but the applications need not change to utilize the core HTTP/2 changes. For backwards compatibility, all client connections begin as HTTP/1 requests with an upgrade header indicating to the server that HTTP/2 is possible. If the server can handle it, a 101 response to switch protocols is issued by the server, and if it can't, the header is simply ignored and the interaction will remain on HTTP/1. You'll note in the picture above that TLS is optional, and while that's true to the letter of the RFC law (see my punting on security comment earlier), the major browsers have not implemented it as optional, so if you want to use HTTP/2, you'll most likely need to do it with encryption.
Multiplexed Streams
HTTP/2 solves the HTTP/1.1 head of line problem by multiplexing requests over a single TCP connection. This allows clients to make multiple requests of the server without requiring a response to earlier requests. Responses can arrive in any order as the streams all have identifiers (photo courtesy of Google). Compare the image below of an HTTP/2 request to the one from the HTTP/1.1 section above. Notice two things: 1) the reduction of TCP connections from six to one and 2) the concurrency of all the objects being requested. In the brief video below, I toggle back and forth between HTTP/1.1 and HTTP/2 requests at increasing latencies, thanks to a demo tool on golang.org, and show the associated reductions in page load experience as a result. Even at very low latency there is an incredible efficiency in making the switch to HTTP/2. This one change obviates the need for many of the hacks in place for HTTP/1.1 deployments. One thing to note on the head of line blocking: TCP actually becomes a stumbling block for HTTP/2 due to its congestion control algorithms. If there is any packet loss in the TCP connection, the retransmit has to be processed before any of the other streams are managed, effectively halting all traffic on that connection. Protocols like QUIC are being developed to ride the UDP wave and overcome some of the limitations in TCP holding back even better performance in HTTP/2.
Header Compression
Given that headers and data are now isolated by frame types, the headers can now be compressed independently, and there is a new compression utility specifically for this called HPACK. This occurs at the connection level. The improvements are two-fold. First, the header fields are encoded using Huffman coding, thus reducing their transfer size. Second, the client and server maintain a table of previous headers that is indexed. This table has static entries that are pre-defined for common HTTP headers, and dynamic entries added as headers are seen. Once dynamic entries are present in the table, the index for that dynamic entry will be passed instead of the header values themselves (photo courtesy of amphinicy.com).
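If you want to watch these frames and the HPACK-decoded headers on the wire yourself, the nghttp client prints them in verbose mode; any HTTP/2-capable HTTPS site works as a target:

    # -n discards the response body; -v prints each HTTP/2 frame sent and received,
    # including SETTINGS and HEADERS, with the decoded header fields.
    nghttp -nv https://www.example.com/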
BIG-IP Support
F5 introduced the HTTP/2 profile in 11.6 as an early access, but it hit general availability in 12.0. The BIG-IP implementation supports HTTP/2 as a gateway, meaning that all your clients can interact with the BIG-IP over HTTP/2, but server-side traffic remains HTTP/1.1. Applying the profile also requires the HTTP and clientssl profiles. If using the GUI to configure the virtual server, the HTTP/2 Profile field will be grayed out until you select an HTTP profile. It will let you try to save at that point even without a clientssl profile, but will complain when saving:
01070734:3: Configuration error: In Virtual Server (/Common/h2testvip) http2 specified activation mode requires a client ssl profile
As far as the profile itself is concerned, the fields available for configuration are shown in the image below. Most of the fields are pretty self-explanatory, but I'll discuss a few of them briefly.
Insert Header - this field allows you to configure a header to inform the HTTP/1.1 server on the back end that the front-end connection is HTTP/2.
Activation Modes - The options here are to restrict modes to ALPN only, which would then allow HTTP/1.1 or negotiate to HTTP/2, or Always, which tells BIG-IP that all connections will be HTTP/2.
Receive Window - We didn't cover the flow control functionality in HTTP/2, but this setting sets the level (HTTP/2 v3+) where individual streams can be stalled.
Write Size - This is the size of the data frames in bytes that HTTP/2 will send in a single write operation. A larger size will improve network utilization at the expense of an increased buffer of the data.
Header Table Size - This is the size of the indexed static/dynamic table that HPACK uses for header compression. A larger table size will improve compression, but at the expense of memory.
In this article, we covered the basics of the major benefits of HTTP/2. There are more optimizations and features to explore, such as server push, which is not yet supported by BIG-IP. You can read about many of those features in this very excellent article on Google's developers portal, where some of the images in this article came from.
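To put those profile fields in tmsh terms, here is a sketch of creating a profile with a few of them set and attaching it to the virtual from the error message above. The object and header names are hypothetical and the property spellings are to the best of my recollection, so check tab-completion in tmsh on your version:

    # Hypothetical http2 profile tuning a few of the fields discussed above.
    tmsh create ltm profile http2 my_http2 activation-modes { alpn } insert-header enabled insert-header-name X-HTTP2 header-table-size 4096
    tmsh modify ltm virtual h2testvip profiles add { http clientssl my_http2 }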
APM not ready for HTTP/2 ?
Hi all, I have a config here with APM and users log in to a full webtop. The version used is v13.1.0.1. Now, for a test, I changed the VS to support HTTP/2 and added an http2 profile to the VS. When we connect we get the following error in /var/log/ltm:
Jan 15 14:14:19 bigip1 err tmm1[12276]: 01220001:3: TCL error: /Common/_sys_APM_VDI_Helper - can't read "tmm_apm_client_type": no such variable while executing "if { ($tmm_apm_uri_path equals "/broker/xml") || ($tmm_apm_user_agent equals "VMware-client") } { set tmm_apm_client_type "view-xml" ..."
So is APM not HTTP/2 ready yet? Thanks for a reply, Peter
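Until APM's helper iRules handle HTTP/2 on that version, the pragmatic workaround suggested by the error is simply to take the http2 profile back off the APM virtual (virtual name hypothetical):

    # Remove the http2 profile so the APM webtop negotiates plain HTTP/1.1 again.
    tmsh modify ltm virtual vs_apm_webtop profiles delete { http2 }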
Understanding HTTP/2 Activation Modes on BIG-IP
Introduction
Activation modes specify how the BIG-IP system negotiates the HTTP/2 protocol, configurable in the GUI or its TMSH equivalent. In this article I go slightly deeper to explain how BIG-IP negotiates an HTTP/2 connection with client peers. Traditionally, HTTP/2 can be negotiated within an HTTP/1.1 connection or via the TLS extension Application Layer Protocol Negotiation (ALPN). Currently, the only supported method on BIG-IP is ALPN. There is another option on BIG-IP named always.
Application Layer Protocol Negotiation (ALPN)
ALPN requires a client-ssl profile applied to the virtual server. In ALPN, the client goes through the TLS handshake with BIG-IP and both inform each other about the L7 protocol they want to negotiate in the application_layer_protocol_negotiation extension, as seen on Wireshark. When the TLS handshake is finished you should see HTTP/2 messages, as long as traffic is decrypted, because HTTP/2 requires TLS.
Always
Always is just for debugging purposes and not for production, as this makes BIG-IP exchange HTTP/2 messages without the need for TLS. In the capture, BIG-IP exchanges HTTP/2 messages with the client immediately after the TCP handshake, i.e. no TLS required. When I say without the need for TLS, do not confuse this with an HTTP/1.1 UPGRADE. In a subsequent capture, I experimentally sent an HTTP/1.1 request with an Upgrade: h2c header using the nghttp tool from my client machine (nghttp http://10.199.3.44), which signals we want to "talk" HTTP/2 to BIG-IP. But BIG-IP replied with SETTINGS and GOAWAY, which are HTTP/2 messages. If BIG-IP supported the UPGRADE from HTTP/1.1 to HTTP/2, it should have responded with an HTTP/1.1 101 (Switching Protocols) message instead, and not HTTP/2 SETTINGS directly. This also confirms BIG-IP doesn't support the upgrade from HTTP/1.1 to HTTP/2.
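A quick way to confirm the ALPN behavior from any client machine (the IP below is a stand-in for your virtual server): ask for h2 via ALPN and see what the virtual negotiates.

    # Expect "ALPN protocol: h2" when the http2 and client-ssl profiles are attached.
    openssl s_client -connect 203.0.113.10:443 -alpn h2,http/1.1 </dev/null 2>/dev/null | grep -i alpn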
Good bye and Thank you F5, my team and the whole community!
I'd like to take this opportunity to say that I'm leaving F5 for a new challenge, but I'm not leaving the F5 community. I'm truly grateful to be part of this vibrant community and I'd like to thank the whole of F5 and the DevCentral community members for making DevCentral great. However, a special thank you goes to my team mates Jason Rahm, John Wagnon, Leslie Hubertus, Lief Zimmerman, Chase Abbott, Peter Silva and my manager Tony Hynes. I learnt a lot from you, had lots of fun in our in-person meetings and will always be grateful for that. You'll be truly missed. I won't be posting articles but will still be in the forums, so feel free to drop me a message.
Settings when configuring http/2 for the client side only
We have used the http/2 settings at https://my.f5.com/manage/s/article/K04412053 and our flow is: user mobile devices to BIG-IP is http/2; BIG-IP translates http/2 to http/1.1, then sends it to our back-end servers.
1. We have seen a lot of "Client connection closed" error messages after turning on http/2 and are trying to trace whether any http/2 settings need to be changed from the default settings at https://my.f5.com/manage/s/article/K04412053.
2. How does BIG-IP translate http/2 (received from user mobile devices) to http/1.1, and how can we check those settings to tweak them?
3. Anything else we should check for?
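For question 2, the translation is the http2 gateway profile itself; there is no separate translation object to tune. As a sketch of how to review what is actually in effect and confirm the server side really is HTTP/1.1 (profile name, pool-member address and port are hypothetical):

    # Show every setting on the http2 profile, including inherited defaults.
    tmsh list ltm profile http2 my_http2 all-properties
    # Watch server-side traffic to a pool member; you should see plain HTTP/1.1 requests.
    tcpdump -nni 0.0:nnn -s0 host 10.0.0.21 and port 80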
gRPC load balancing with F5 and nginx
I have a requirement of using gRPC through F5, using nginx at the server level, which will convert port 80 to the gRPC port (50001). The flow would be like: the client hits F5 over port 443, which invariably forwards the request to nginx over port 80, which converts it again to the designated gRPC port (50001). I enabled the HTTP/2 settings in F5 but the application is not responding. Is there any specific setting I need to do for gRPC at the F5 level? nginx is already configured to forward requests over port 80 to http2.
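One thing worth noting, based on the gateway behavior described elsewhere on this page: with only a client-side http2 profile, BIG-IP sends HTTP/1.1 toward the servers, and gRPC cannot run over HTTP/1.1, so the nginx hop on port 80 has to be reached with h2c (HTTP/2 cleartext) for the chain to work. A quick way to test whether nginx speaks h2c at all, bypassing the BIG-IP (hostname hypothetical; assumes a curl build with HTTP/2 support):

    # --http2-prior-knowledge sends HTTP/2 cleartext directly, with no Upgrade dance.
    curl -v --http2-prior-knowledge http://nginx-host:80/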
Multiplexing: TCP vs HTTP2
Can you use both? Of course you can! Here comes the (computer) science… One of the big performance benefits of moving to HTTP/2 comes from its extensive use of multiplexing. For the uninitiated, multiplexing is the practice of reusing a single TCP connection for multiple HTTP requests and responses. See, in the old days (HTTP/1), a request/response pair required its own special TCP connection. That ultimately resulted in the TCP connection per host limits imposed on browsers and, because web sites today are composed of an average of 86 or more individual objects, each needing its own request/response, slowed down transfers. HTTP/1.1 let us use "persistent" HTTP connections, which was the emergence of multiplexing (connections could be reused), but constrained by the synchronous (in order) requirement of HTTP itself. So you'd open 6 or 7 or 8 connections and then reuse them to get those 80+ objects. With HTTP/2 that's no longer the case. A single TCP connection is all that's required, because HTTP/2 leverages multiplexing and allows asynchronous (parallel) requests. Many request/response pairs can be transferred over that single connection in parallel, resulting in faster transfers and less networking overhead. Because as we all know by now, TCP's three-way handshake and windowing mechanisms (slow start, anyone?) can be a drag (literally) on app performance. So the question is, now that we've got HTTP/2 and its multiplexing capabilities on the client side of the equation, do we still see a benefit from TCP multiplexing on the server side of the equation? Yes. Absolutely. The reason for that is operational and directly related to a pretty traditional transition that has to occur whenever there's a significant "upgrade" to what is a foundational protocol like HTTP. Remember, IPv6 has been available and ready to go for a decade and we're still not fully transitioned. Think about that for a minute when you consider how long the adoption curve for HTTP/2 is probably going to be. Part of the reason for this is that while many browsers already support HTTP/2, very few organizations have web or app servers that support HTTP/2. That means that while they could support HTTP/2 on the client side, they can't on the server side. Assuming the server side can support HTTP/2, there are then business and architectural reasons why an organization might choose to delay migration, including licensing, support, and just the cost of the disruption to upgrade. So HTTP/2 winds up being a no-go. Orgs don't move to HTTP/2 on the client side even though it has significant performance benefits, especially for their increasingly mobile app user population, because they can't support it on the server side. But HTTP/2 gateways (proxies able to support HTTP/2 on the client side and HTTP/1 on the server side) exist. So it's a viable and less disruptive means of migrating to HTTP/2 on the client without having to go "all in" on the server side. But of course that means you're only getting half the benefits of multiplexing associated with HTTP/2. Unless, of course, you're using TCP multiplexing on the server side. What multiplexing offers for clients with HTTP/2, TCP multiplexing capabilities in load balancers and proxies offer for servers with HTTP/1. This is not a new capability.
It’s been a core TCP optimization technique for, well, a long time and it’s heavily used to improve both performance and reduce load on web/app servers (which means they have greater capacity, operate more efficiently, and improve the economy of scale of any app). On the server side, TCP multiplexing opens (and maintains) a TCP connection to each of the web/app servers it is virtualizing. When requests come in from clients the requests are sent by the load balancer or proxy over an existing (open) connection to the appropriate app instance. That means the performance of the app is improved by the time required to open and ramp up a TCP connection. It also means that the intermediary (the load balancer or proxy) can take in multiple HTTP requests and effectively parallelize them (we call this content switching). In the world of HTTP/1, that means if the client opened six TCP connections and then sent 6 different HTTP requests, the intermediary could ostensibly send out all 6 over existing TCP connections to the appropriate web/app servers, thereby speeding up the responses and improving overall app performance. The same thing is true for HTTP/2. The difference is that with HTTP/2 those 6 different requests are coming in over the same TCP connection. But they’re still coming in. That means a TCP multiplexing-capable load balancer (or proxy) can parallelize those requests to the web/app servers and achieve gains in performance that are noticeable (in a good way) to the client. True, that gain may be measured in less than a second for most apps, but that means the user is receiving data faster. And when users expect responses (like the whole page) in less than 3 seconds. Or 5 depending on whose study you’re looking at. The father of user interface design, Jakob Nielsen, noted that users will notice a 1 second delay. And that was in 1993. I’m pretty sure my 7 year old notices sub-second delays – and is frustrated by them. The point being that every micro-second you can shave off the delivery process (receiving a request and sending a response) is going to improve engagement with users – both consumer and corporate. What HTTP/2 effectively does is provide similar TCP optimizations on the client side of the equation as TCP multiplexing offers on the server side. Thus, using both HTTP/2 and network-based TCP multiplexing is going to offer a bigger gain in performance than using either one alone. And if you couple HTTP/2 and TCP multiplexing with content switching, well.. you’re going to gain some more. So yes, go ahead. Multiplex on the app and the client side and reap the performance benefits.2.2KViews0likes2Comments