HTTP Request Unchunker
Problem this snippet solves:
This iRule will UNCHUNK a chunked HTTP request body. It reads through the payload a little at a time, only asking for as much as it knows for certain will be present, then replaces the payload with a single 'chunk' (having removed all the chunk sizes) and replaces the Transfer-Encoding header with a Content-Length header.
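As a quick illustration (this is just standard chunked encoding, not output from the rule): a chunked body such as

4\r\n
Wiki\r\n
5\r\n
pedia\r\n
0\r\n
\r\n

ends up as the nine bytes "Wikipedia", with a "Content-Length: 9" header in place of "Transfer-Encoding: chunked".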
The problem with a chunked request is that if you ask HTTP::collect for more data than is actually there, the connection hangs, so you have to creep through the payload.
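A minimal sketch of that creep-along collect pattern (illustrative only - the variable name wantedBytes and the 6-byte starting point are simply what the full rule below happens to use, and all of the chunk parsing is omitted):

when HTTP_REQUEST {
    if {[string tolower [HTTP::header Transfer-Encoding]] eq "chunked"} {
        # Ask only for bytes we know must exist (the first chunk-size line)
        set wantedBytes 6
        HTTP::collect $wantedBytes
    }
}
when HTTP_REQUEST_DATA {
    if {[string length [HTTP::payload]] < $wantedBytes} {
        # Short delivery - ask only for the missing bytes and wait for the next event
        HTTP::collect [expr {$wantedBytes - [string length [HTTP::payload]]}]
        return
    }
    # ...parse what has arrived, grow wantedBytes, collect again or HTTP::release...
}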
It's only partly tested (I had it working for all the payloads I sent), I think it could be radically improved, and all my debug logging is still in there. I'm submitting it here just to show it can be done, as quite a few people have asked about unchunking a chunked request body.
Also - apologies for the weird formatting of the rule in this wiki - it looks lovely in my iRule Editor!!
I've only tested on 11.1.
Primary commands: HTTP::collect, HTTP::payload
Code :
# The purpose of this iRule is to unchunk a chunked HTTP request.
#
# Version: 1.0
# It was working the last time I tested it, but there are a few issues which may cause problems under some circumstances.
# Some of the counters in the logs looked wrong, and the logic around the variables wantedBytes and/or payldPtr may be
# incorrect. It needs to be further tested/refined before it can be put in a prod environment.
#
# I had to stop testing temporarily as I managed to convince our platform guys to turn off chunking in the consuming service.

when HTTP_REQUEST {
    # Debug off (0), on (1)
    set debug 1
    if {[string tolower [HTTP::header Transfer-Encoding]] eq "chunked"} {
        if { $debug >= 1 } {log local0. "Collecting 6 bytes"}
        # Collect only the chunk size
        set payldPtr 0
        set newChunkLen 0
        # The problem with HTTP::collect and request chunking is that if you ask for more than the payload,
        # it hangs the connection, so you have to creep through the payload, only asking for what you know
        # should be there each time.
        # Start with 6 bytes, which is enough to read the length of the first chunk and the delimiters.
        set wantedBytes 6
        HTTP::collect 6
    }
}

when HTTP_REQUEST_DATA {
    if {[string length [HTTP::payload]] < $wantedBytes } {
        # It hasn't given us enough bytes - go and collect again - it always seems to oblige the second time
        if { $debug >= 1 } {log local0. "Collecting [expr {$wantedBytes - [string length [HTTP::payload]]}] bytes"}
        HTTP::collect [expr {$wantedBytes - [string length [HTTP::payload]]}]
        return
    } else {
        # It has collected at least what we asked for (up to the end of the next chunk length) - or possibly lots more
        if {$wantedBytes eq "6"} {
            # Get the length of the first chunk in hex
            set newChunkLenX [string trim [substr [HTTP::payload] $payldPtr "\x0d\x0a"]]
            if { $debug >= 1 } {log local0. "First chunk is $newChunkLenX"}
            # Convert to decimal
            set newChunkLen [expr 0x$newChunkLenX]
            # Move the payload pointer to the start of the next chunk - increment by the length of the chunk-size field plus the CR/LF delimiter
            incr payldPtr [expr {[string length $newChunkLenX] + 2}]
            if { $debug >= 1 } {log local0. "payldPtr $payldPtr, newChunkLen $newChunkLen"}
            # Increase wanted bytes to collect the whole of the next chunk
            incr wantedBytes [expr {$newChunkLen + 2 + [string length $newChunkLenX]}]
            if { $debug >= 1 } {log local0. "Wanted bytes $wantedBytes bytes"}
        }
        # Collect the latest complete chunk into bigChunk and get the length of the next incoming chunk.
        # I've only ever seen this loop execute one iteration at a time.
        while {([expr {$payldPtr + $newChunkLen + 6}] <= [string length [HTTP::payload]]) && ($newChunkLen ne 0)} {
            # Append this chunk's data to the new unchunked payload
            append bigChunk [string range [HTTP::payload] $payldPtr [expr {$payldPtr + $newChunkLen - 1}]]
            incr payldPtr [expr {$newChunkLen + 2}]
            # Get the length of the next chunk in hex
            set newChunkLenX [string trim [substr [HTTP::payload] $payldPtr "\x0d\x0a"]]
            if { $debug >= 1 } {log local0. "Next chunk is $newChunkLenX"}
            # Convert to decimal
            set newChunkLen [expr 0x$newChunkLenX]
            incr payldPtr [expr {[string length $newChunkLenX] + 2}]
            if {$newChunkLen ne "0"} {
                # Adjust wantedBytes to get enough bytes for the whole of the next chunk, plus the size of the
                # chunk following that, while compensating for a chunk-size field of fewer than 4 hex chars
                incr wantedBytes [expr {$newChunkLen + 2 + [string length $newChunkLenX]}]
                if { $debug >= 1 } {log local0. "Wanted bytes $wantedBytes bytes"}
            }
        }
        if {$newChunkLen ne "0"} {
            if {$wantedBytes > [string length [HTTP::payload]]} {
                if { $debug >= 1 } {log local0. "Collecting [expr {$wantedBytes - [string length [HTTP::payload]]}] bytes"}
                HTTP::collect [expr {$wantedBytes - [string length [HTTP::payload]]}]
                return
            }
        }
    }
    # Remove the Transfer-Encoding header which indicates 'chunked' format
    HTTP::header remove Transfer-Encoding
    # Replace the chunked payload with the unchunked payload
    HTTP::payload replace 0 [string length [HTTP::payload]] $bigChunk
    # Insert a Content-Length header
    HTTP::header insert Content-Length [string length $bigChunk]
    if { $debug >= 1 } {log local0. "Added Content-Length [string length $bigChunk]"}
    # Release the payload
    HTTP::release
}