F5 Distributed Cloud and Transfer Encoding: Chunking
My team recently came across an unusual request from an F5 Distributed Cloud customer: how do we support HTTP/API clients that can only send requests with chunked transfer encoding?

What even is chunking? What is Transfer Encoding? The key word is "encoding": HTTP uses a header to communicate what scheme encodes the data in a message body. Encodings can be used for functional purposes as well as for communication optimization. Transfer Encoding is most commonly leveraged for chunking, which takes a large piece of data and breaks it up into smaller pieces that are sent between two nodes along a path, transparently to the application sending and receiving messages. Those nodes are not necessarily the source and destination of the HTTP conversation, so proxies in between can transparently reassemble the chunks for different parts of the path. A chunked message does not use a Content-Length header. Contrast this with Content Encoding, which is more commonly used for compression of message bodies (although compression can be done with transfer encoding too) and requires the length to be defined. Proxies along the path are expected not to change these values, but that is not always the case.

In our customer scenario, the request was exactly that: the proxy (in this case Distributed Cloud) should accept chunked requests from the client and forward them to an HTTP/2 server (HTTP/2 does away with chunking completely). With Distributed Cloud, we fulfill this with three simple config elements:

1. The HTTP Load Balancer object is configured as an HTTP/1.1 virtual server.
2. The Origin is configured to use HTTP/2 (which defines Distributed Cloud's behavior as an HTTP client).
3. After applying the config, go back to the HTTP Load Balancer dialog, to the Other Settings section, and configure a Buffer Policy under Miscellaneous Options.

The value configured in that dialog (it is the only property aside from an enable checkbox) limits the request size to the specified number of bytes, but it has the added benefit of allowing the Distributed Cloud proxy to buffer the chunked request, convert it into a request with its length specified, and send it to the server over an HTTP/2 connection.

To test this, a simple cURL command with the header "Transfer-Encoding: chunked" and the -v flag can validate your config, e.g.:

```
curl -v --location 'https://[URL/PATH]:PORT' --header 'Transfer-Encoding: chunked' --data ''
```

With the -v (verbose) flag, the output will include the following:

```
* using HTTP/1.x
> POST [PATH] HTTP/1.1
> Host: [URL]
> User-Agent: curl/8.7.1
…
> Transfer-Encoding: chunked
…
```

Note the Transfer-Encoding: chunked line, which shows that chunking was used on the client-side connection. You can validate the server-side connection in the request logs in the Distributed Cloud dashboard by looking at the headers recorded in the event JSON:

```
"rsp_headers": "{\":status\":\"200\",\"connection\":\"close\",\"content-length\":\"26930\", [TRUNCATED]
```

This is a transfer-encoded, chunked client-side request being converted to a request with its length specified on the server side.

Special shoutout to fellow F5er Gowry Bhaagavathula for collaborating with me on getting this figured out!
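As a footnote for anyone who has never looked at chunked framing on the wire, here is a small plain-Tcl sketch (runnable with tclsh, no F5 APIs involved; the host, port, path, payload, and chunk size are placeholder values, and it speaks plain HTTP with no TLS) that hand-rolls the same kind of chunked POST that curl generates with the header above. It shows what the proxy has to buffer before it can put a length on the server-side request.

```
# Hand-rolled chunked POST over a raw TCP socket (plain Tcl sketch).
set host "example.test"   ;# placeholder: your load balancer's hostname
set port 80               ;# placeholder: plain HTTP only, no TLS in this sketch
set path "/upload"        ;# placeholder path

set body "This is a larger message split into smaller chunks."
set chunkSize 16          ;# arbitrary example chunk size

set sock [socket $host $port]
fconfigure $sock -translation binary

# Request line and headers: note Transfer-Encoding instead of Content-Length.
puts -nonewline $sock "POST $path HTTP/1.1\r\n"
puts -nonewline $sock "Host: $host\r\n"
puts -nonewline $sock "Transfer-Encoding: chunked\r\n"
puts -nonewline $sock "Connection: close\r\n\r\n"

# Each chunk is "<hex length>\r\n<data>\r\n"; a zero-length chunk ends the body.
for {set i 0} {$i < [string length $body]} {incr i $chunkSize} {
    set piece [string range $body $i [expr {$i + $chunkSize - 1}]]
    puts -nonewline $sock "[format %x [string length $piece]]\r\n$piece\r\n"
}
puts -nonewline $sock "0\r\n\r\n"
flush $sock

# Print whatever the proxy sends back, then clean up.
puts [read $sock]
close $sock
```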
Persistence: HTTP 200 OK to client hangs when server sends HTTP responses with Transfer-Encoding: chunked

All my problems come from needing an iRule to persist sessions based on a specific field that travels inside an HTTP message. First the client needs to do a Login, and from the response we persist the session_id:

HTTP POST
HTTP 200 OK (session_id)
HTTP GET (session_id)

With the following iRule I'm able to do that if the response comes with the Content-Length header. The problem is that we discovered that if the 200 OK from Login comes with Transfer-Encoding: chunked, the 200 OK is received by the F5 but the 200 OK that has to be sent to the client is not. The BIG-IP persists the connection, but the connection between the BIG-IP and the client hangs, and we do not send the 200 OK to the client until the client closes the connection (TCP); after 60 seconds we saw the FIN,ACK and then the BIG-IP sends the 200 OK to the client. 😞

```
when HTTP_REQUEST {
    log local0. "HTTP_REQUEST"
    if { [HTTP::header exists "Content-Length"] && [HTTP::header "Content-Length"] <= 1048576 } {
        set content_length [HTTP::header "Content-Length"]
    } else {
        set content_length 1048576
    }
    if { $content_length > 0 } {
        HTTP::collect $content_length
    }
}

when HTTP_REQUEST_DATA {
    set SessionId [findstr [HTTP::payload] "SessionId>" 10 "<"]
    if { not ([string length $SessionId] == 0) } {
        log local0. "Persist in HTTP_REQUEST_DATA for not login operations $SessionId"
        persist uie $SessionId 300
    }
}

when HTTP_RESPONSE {
    if { [HTTP::header exists "Content-Length"] && [HTTP::header "Content-Length"] <= 1048577 } {
        set content_length [HTTP::header "Content-Length"]
    } else {
        set content_length 1048577
    }
    if { $content_length > 0 } {
        HTTP::collect $content_length
    }
}

when HTTP_RESPONSE_DATA {
    set SessionId [findstr [HTTP::payload] "sessionId>" 10 "<"]
    if { [HTTP::payload] contains "Login" } {
        log local0. "Persist in HTTP_RESPONSE_DATA for login $SessionId"
        catch { persist add uie $SessionId 300 }
    }
}
```

This is the configuration of the rest of the elements:

```
ltm virtual /Common/VS_TEST {
    destination /Common/10.105.108.5:8998
    ip-protocol tcp
    mask 255.255.255.255
    persist {
        /Common/sessionid_profile {
            default yes
        }
    }
    pool /Common/OPCO1_INT_PROV_AGENT_Pool
    profiles {
        /Common/http { }
        /Common/oneconnect { }
        /Common/tcp { }
    }
    source 0.0.0.0/0
    source-address-translation {
        type automap
    }
    translate-address enabled
    translate-port enabled
}
```

I also tried changing the HTTP profile, but it didn't solve my problem.

Best regards and thanks in advance.

Victor Jori
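For context on why the response hangs: a chunked 200 OK carries no Content-Length, so the else branch in HTTP_RESPONSE asks HTTP::collect for 1048577 bytes that never arrive, and the response sits buffered until the TCP connection is torn down. Below is a hedged sketch (not a verified fix, and untested against this setup) of one way to guard that event for chunked responses; it assumes HTTP::collect can be called with no length argument, since the length is optional in the command syntax, so that the payload is collected as it arrives.

```
when HTTP_RESPONSE {
    if { [HTTP::header exists "Content-Length"] } {
        # Original behavior: collect based on the declared length, capped at ~1 MB.
        if { [HTTP::header "Content-Length"] <= 1048577 } {
            set content_length [HTTP::header "Content-Length"]
        } else {
            set content_length 1048577
        }
        if { $content_length > 0 } {
            HTTP::collect $content_length
        }
    } elseif { [HTTP::header exists "Transfer-Encoding"] } {
        # Chunked (length-less) response: there is no fixed byte count to wait
        # for, so collect without a length instead of blocking on bytes that
        # will never arrive.
        HTTP::collect
    }
}
```

Whether HTTP_RESPONSE_DATA then sees the whole chunked body in a single pass is worth verifying in testing before relying on it for the Login persistence logic.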