HTTP Request Cloning via iRules, Part 1
One of the requests I've seen several times over the years is the ability to completely clone web requests across multiple servers. The idea is that you can take the HTTP traffic coming in bound for pool member A and distribute it, in its entirety, to pool member B, or perhaps members B-G... whatever your needs are. This can be helpful for many reasons: security auditing, test or dev harnesses, archival, etc. Whatever the reason, this has been a repeated question in the forums and in the field. While clone pool functionality works to some degree for this, it doesn't behave quite as desired, and it doesn't easily distribute to multiple additional members.
iRules, however, offers a solution.
The HSL feature in iRules, if you remember, allows you to specify a protocol and a destination, which can be a pool. That lets us treat this much like sideband connections in v11: by establishing a new connection and sending across the HTTP info as needed, we're able to clone the HTTP traffic in its entirety. Let's take a look at how this starts:
when CLIENT_ACCEPTED {
    # Open a new HSL connection if one is not available
    set hsl [HSL::open -proto TCP -pool http_clone_pool]
    log local0. "[IP::client_addr]:[TCP::client_port]: New hsl: $hsl"
}
As you can see, it's straightforward enough. Using the HSL::open command we set the protocol to TCP and the pool to whichever pool you'd like to clone your HTTP traffic to. Now that we know where and how we're sending the data, we need to figure out which data to send. The only trick with HTTP in this step is that GET and POST requests need to be handled differently. With a POST we need to collect the data being posted so that we can replay it to the new destination; with a GET we simply forward the headers of the request. Fortunately, determining which is which is a cakewalk in iRules, so it's just the collecting and forwarding we really need to worry about. This is the real "meat" of this iRule, and even that isn't difficult. It looks like:
when HTTP_REQUEST {
    # Insert an XFF header if one is not inserted already
    # so the client IP can be tracked for the duplicated traffic
    HTTP::header insert X-Forwarded-For [IP::client_addr]

    # Check for POST requests
    if { [HTTP::method] eq "POST" } {

        # Check for Content-Length between 1B and 1MB
        if { [HTTP::header Content-Length] >= 1 && [HTTP::header Content-Length] < 1048576 } {
            HTTP::collect [HTTP::header Content-Length]
        } elseif { [HTTP::header Content-Length] == 0 } {
            # POST with 0 Content-Length, so just send the headers
            HSL::send $hsl [HTTP::request]
            log local0. "[IP::client_addr]:[TCP::client_port]: Sending [HTTP::request]"
        }
    } else {
        # Request with no payload, so send just the HTTP headers to the clone pool
        HSL::send $hsl [HTTP::request]
        log local0. "[IP::client_addr]:[TCP::client_port]: Sending [HTTP::request]"
    }
}
As you can see, this is pretty standard iRules fare for the most part: HTTP::method, HTTP::header, HTTP::collect. Nothing shocking. The real trick is in the HSL::send command. Notice that it sends to $hsl? That's the connection we established earlier with the HSL::open command. Now that we have that handle available, we're able to easily forward traffic through it. So, as you can see, for POSTs with content attached we collect, and for anything else we forward along the headers alone. Note that nothing has been sent yet for the POSTs that have content attached; we've just entered a collect state, so the client will continue sending data and we'll store it. That data is then available in the HTTP_REQUEST_DATA event, and we can forward it along when that occurs. So for those particular requests an additional event will fire:
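To make the branching concrete, here's a rough model of the decision logic in plain Python (not iRules; the function and constant names are hypothetical, purely to illustrate which requests are sent immediately and which enter the collect state):

```python
MAX_COLLECT = 1048576  # 1 MB cap, matching the iRule's Content-Length check

def clone_decision(method: str, content_length: int) -> str:
    """Mirror the iRule's HTTP_REQUEST branching: send the headers
    right away, or collect the body first."""
    if method == "POST":
        if 1 <= content_length < MAX_COLLECT:
            return "collect"       # HTTP::collect; replayed in HTTP_REQUEST_DATA
        if content_length == 0:
            return "send_headers"  # POST with no body: HSL::send headers now
        return "skip"              # over the cap: neither branch in the iRule fires
    return "send_headers"          # GET/HEAD/etc.: forward the headers alone
```

Note that a POST at or above the 1 MB cap falls through both conditions in the iRule as written, so nothing is cloned for it.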
when HTTP_REQUEST_DATA {
    # The parser does not allow HTTP::request in this event, but it works
    set request_cmd "HTTP::request"
    log local0. "[IP::client_addr]:[TCP::client_port]: Collected [HTTP::payload length] bytes,\
        sending [expr {[string length [eval $request_cmd]] + [HTTP::payload length]}] bytes total"
    HSL::send $hsl "[eval $request_cmd][HTTP::payload]"
}
Once the HTTP_REQUEST_DATA event has fired, we know our collect has picked up the data we wanted; this event only fires after a successful HTTP::collect. At that point we're ready to forward along the POST and its accompanying data. After a little eval trickery to convince the parser to allow the HTTP::request command within the HTTP_REQUEST_DATA event (it doesn't think it should work, but it does... so we trick it), we're able to send along the original request and payload data without much hassle. Again, making use of the HSL::send command and the $hsl variable we set up at the beginning makes this process easy.
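The collect-and-replay cycle amounts to buffering the body until Content-Length bytes have arrived, then emitting the original request headers plus the payload in one shot. A rough sketch in plain Python (hypothetical names, not iRules):

```python
def replay_post(raw_request: bytes, body_chunks, content_length: int) -> bytes:
    """Buffer body chunks until content_length bytes have arrived, then build
    the equivalent of HSL::send $hsl "[HTTP::request][HTTP::payload]"."""
    collected = b""
    for chunk in body_chunks:  # stand-in for the data the collect state gathers
        collected += chunk
        if len(collected) >= content_length:
            break
    # Original headers followed by exactly content_length bytes of payload
    return raw_request + collected[:content_length]
```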
At this point you have a functioning iRule that will clone traffic inbound for your virtual to another pool of your choosing. You're probably asking yourself three questions.
1) Why hasn't this been written before?
2) Where is the version that allows forwarding to multiple other pools?!
3) Why HSL and not sideband connections?
Well, those answers are simple:
1) Because our good friend Hoolio hadn't written it yet! Aaron whipped this together and posted it. I got his okay to write it up and get it out there, so here it is. Keep in mind that this is VERY early in the testing stages and is prone to change/update. I'm sharing it here because I think it's awesome and don't want it to slip off into the night without being called out, but this is very much a use-at-your-own-risk sort of thing for now. I'll update with notes when more testing has been done. Also worth noting: this requires version 10.1 or newer to function.
2) It's coming, don't fret. That will be Part 2! You did notice the "Part 1" in the title, didn't you? We can't give it all away at once. Besides, that part is still under testing, and releasing it before it's ready wouldn't be prudent. Stay tuned.
3) I asked Aaron the exact same thing and here's what he said:
HSL automatically ACKs the server responses, but ignores the data. From limited testing of both HSL and sideband connections, HSL is also a lot more efficient in handling high connection rates. Also, HSL is available from 10.1 and sideband only on 11.x.
So there you have it. Sideband connections would work just fine, but HSL allows for a wider audience (10.1 and above), and offers a little added efficiency/ease of use in this particular case. Keep in mind that HSL won't handle many of the more complex scenarios that sideband connections will, hence the tradeoff, but in this particular case HSL seems to win out.
Keep an eye out for the next installment of this two-parter, wherein you'll see how to extend this model to work with multiple clone destinations. For now, any questions/comments are welcome. If you get this up and running in test/dev, please let us know as we'd love any feedback resulting from that testing. Many thanks to hoolio for his continued, outstanding contributions. With that, I'll leave you with the full version of the iRule:
when CLIENT_ACCEPTED {
    # Open a new HSL connection if one is not available
    set hsl [HSL::open -proto TCP -pool http_clone_pool]
    log local0. "[IP::client_addr]:[TCP::client_port]: New hsl: $hsl"
}
when HTTP_REQUEST {

    # Insert an XFF header if one is not inserted already
    # so the client IP can be tracked for the duplicated traffic
    HTTP::header insert X-Forwarded-For [IP::client_addr]

    # Check for POST requests
    if { [HTTP::method] eq "POST" } {

        # Check for Content-Length between 1B and 1MB
        if { [HTTP::header Content-Length] >= 1 && [HTTP::header Content-Length] < 1048576 } {
            HTTP::collect [HTTP::header Content-Length]
        } elseif { [HTTP::header Content-Length] == 0 } {
            # POST with 0 Content-Length, so just send the headers
            HSL::send $hsl [HTTP::request]
            log local0. "[IP::client_addr]:[TCP::client_port]: Sending [HTTP::request]"
        }
    } else {
        # Request with no payload, so send just the HTTP headers to the clone pool
        HSL::send $hsl [HTTP::request]
        log local0. "[IP::client_addr]:[TCP::client_port]: Sending [HTTP::request]"
    }
}
when HTTP_REQUEST_DATA {
    # The parser does not allow HTTP::request in this event, but it works
    set request_cmd "HTTP::request"
    log local0. "[IP::client_addr]:[TCP::client_port]: Collected [HTTP::payload length] bytes,\
        sending [expr {[string length [eval $request_cmd]] + [HTTP::payload length]}] bytes total"
    HSL::send $hsl "[eval $request_cmd][HTTP::payload]"
}
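If you stand this up in test/dev, a quick way to verify exactly what a clone pool member receives is to point the clone pool at a minimal TCP sink and inspect the raw bytes. This sketch (plain Python, a hypothetical helper for lab use only, not part of the iRule) captures one cloned connection:

```python
import socket
import threading

def start_clone_sink(host="127.0.0.1", port=0):
    """Listen for cloned traffic and capture the raw bytes of one connection.
    Returns (bound_port, captured_list, thread); port=0 picks a free port."""
    captured = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(5)
    bound_port = srv.getsockname()[1]

    def serve_one():
        conn, _addr = srv.accept()
        chunks = []
        while True:
            data = conn.recv(4096)
            if not data:  # peer closed; a real sink would loop on accept()
                break
            chunks.append(data)
        captured.append(b"".join(chunks))
        conn.close()
        srv.close()

    t = threading.Thread(target=serve_one, daemon=True)
    t.start()
    return bound_port, captured, t
```

Since HSL holds its connection open and streams multiple requests down it, a production-grade sink would keep accepting and parse request boundaries, but for eyeballing a cloned request or two this is enough.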
- rajeshramhit_11 (Nimbostratus): Excellent. This is something I have been looking for for weeks. Many thanks. Eagerly waiting for the second part of this...
- rob_carr (Cirrostratus): Did 'Part 2' ever get written? I've searched Colin's contributions and the phrase 'Part 2' and never found an obvious candidate.
- ePratik_284320 (Nimbostratus): Do we have a part 2 for this? For sending requests to multiple pool members...
- jasvinder_singh (Nimbostratus): I implemented the below in my lab environment. I am able to get the traffic at the test server but am getting java.io.StreamCorruptedException: invalid stream header: C2ACC3AD, which means the data is corrupted somewhere. Can somebody help me?
when HTTP_REQUEST {
    if {[HTTP::header exists Authorization]} {
        if { [HTTP::username] equals "xxxxx@xxxx" } {
            discard
        }
    }
    set hsl [HSL::open -proto UDP -pool pfl2_metrics]
    log local0. "[IP::client_addr]:[TCP::client_port]: New hsl: $hsl"
    switch -glob [HTTP::uri] {
        "/uri1" -
        "/uri1" {
            switch -glob [HTTP::uri] {
                "/controller/instance/*/metrics" -
                "/controller/instance//metrics" {
                    HTTP::header insert "backend" "$poolname"
                    if {[HTTP::method] eq "POST"} {
                        # Check for Content-Length between 1b and 1Mb
                        if { [HTTP::header Content-Length] >= 1 && [HTTP::header Content-Length] < 1048576 }{
                            HTTP::collect [HTTP::header Content-Length]
                        } elseif {[HTTP::header Content-Length] == 0}{
                            # POST with 0 content-length, so just send the headers
                            HSL::send $hsl [HTTP::request]
                            log local0. "[IP::client_addr]:[TCP::client_port]: Sending [HTTP::request]"
                        }
                    } else {
                        # Request with no payload, so send just the HTTP headers to the clone pool
                        HSL::send $hsl [HTTP::request]
                        log local0. "[IP::client_addr]:[TCP::client_port]: Sending [HTTP::request]"
                    }
                    # Debugging only
                    log local0. "Inserted header [HTTP::header "backend"]"
                    pool ${dis_cluster}
                }
                "URI2*" { pool ${poolname}_xxxx }
                "URI3*" { pool ${poolname}_xxxxx }
                "URI4*" -
                "URI5*" -
                "URI6*" -
                "URI7*" -
                "URI8*" { pool ${poolname}_xxxx }
                "/controller/sim/*/user*" { pool ${poolname} }
                default { pool ${poolname}_xxxxxxx }
            }
        }
        "URI9*" { pool ${poolname}_xxxxxxxx }
        "/URI10*" { pool ${poolname}_xxxxxxxxx }
        "cURI11*" { pool ${poolname}_xxxxxxxxxxx }
        default { pool ${poolname} }
    }
}
when HTTP_REQUEST_DATA {
    # The parser does not allow HTTP::request in this event, but it works
    set request_cmd "HTTP::request"
    log local0. "[IP::client_addr]:[TCP::client_port]: Collected [HTTP::payload length] bytes,\
        sending [expr {[string length [eval $request_cmd]] + [HTTP::payload length]}] bytes total"
    HSL::send $hsl "[eval $request_cmd][HTTP::payload]"
}
- shaggy (Nimbostratus): For those who waited for part 2 (multiple clone destinations), the associated codeshare by hoolio has both versions of the iRule: https://devcentral.f5.com/codeshare/http-request-cloning
- Chris_Jacobs_co (Nimbostratus): Why the limit of 1048576 on collecting?
- RiverFish (Altostratus): The problem I'm having is that if the VIP and clone pool are wildcard, the traffic sent to the clone pool has a destination of x.x.x.x.0, which doesn't work for the customer; they need the actual port.
- Keith_Hepner (Nimbostratus): Just trying to start using this. I tried the link for part 2, and with the cloud doc changes it doesn't work. Does anyone have a current link, please? I'm having a difficult time finding what I need to send traffic to multiple destinations. Thanks in advance!!