HTTP request cloning

Problem this snippet solves:

These iRules send a copy of HTTP request headers and payloads to one or more pool members.

These are the current iRule versions of the example from Colin's article.

Code:

##############
# First Rule #
##############
rule http_request_clone_one_pool {
    # Clone HTTP requests to one clone pool
    when RULE_INIT {
        # Log debug locally to /var/log/ltm? 1=yes, 0=no
        set static::hsl_debug 1

        # Pool name to clone requests to
        set static::hsl_pool "my_syslog_pool"
    }
    when CLIENT_ACCEPTED {

        if { [active_members $static::hsl_pool] == 0 } {
            log "[IP::client_addr]:[TCP::client_port]: [virtual name] $static::hsl_pool down, not logging"
            set bypass 1
            return
        } else {
            set bypass 0
        }

        # Open a new HSL connection if one is not available
        set hsl [HSL::open -proto TCP -pool $static::hsl_pool]
        if { $static::hsl_debug } { log local0. "[IP::client_addr]:[TCP::client_port]: New hsl handle: $hsl" }
    }
    when HTTP_REQUEST {

        # If the HSL pool is down, do not run any more code here
        if { $bypass } {
            return
        }

        # Insert an XFF header if one is not inserted already
        # so the client IP can be tracked for the duplicated traffic
        HTTP::header insert X-Forwarded-For [IP::client_addr]

        # Check for POST requests
        if { [HTTP::method] eq "POST" } {

            # Check for a Content-Length between 1 byte and 1 MB
            if { [HTTP::header Content-Length] >= 1 && [HTTP::header Content-Length] < 1048576 } {
                HTTP::collect [HTTP::header Content-Length]
            } elseif { [HTTP::header Content-Length] == 0 } {
                # POST with a 0 Content-Length, so just send the headers
                HSL::send $hsl "[HTTP::request]\n"
                if { $static::hsl_debug } { log local0. "[IP::client_addr]:[TCP::client_port]: Sending [HTTP::request]" }
            }
        } else {
            # Request with no payload, so send just the HTTP headers to the clone pool
            HSL::send $hsl "[HTTP::request]\n"
            if { $static::hsl_debug } { log local0. "[IP::client_addr]:[TCP::client_port]: Sending [HTTP::request]" }
        }
    }
    when HTTP_REQUEST_DATA {
        # The iRule parser does not allow HTTP::request in this event, but the command works when invoked indirectly
        set request_cmd "HTTP::request"
        if { $static::hsl_debug } { log local0. "[IP::client_addr]:[TCP::client_port]: Collected [HTTP::payload length] bytes,\
            sending [expr {[string length [eval $request_cmd]] + [HTTP::payload length]}] bytes total" }
        HSL::send $hsl "[eval $request_cmd][HTTP::payload]\n"
    }
}
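Each member of the clone pool only needs to accept a raw TCP connection and read the replayed request bytes that HSL::send writes. A minimal sketch of such a receiver in Python (a hypothetical companion script, not part of the original iRules; the listening port is an assumption):

```python
import socketserver

class CloneHandler(socketserver.StreamRequestHandler):
    """Accept one TCP connection from the BIG-IP and read the cloned request."""

    def handle(self):
        # HSL::send writes the full request (headers plus any collected
        # payload) followed by a newline; read until the peer closes.
        data = self.rfile.read()
        print(data.decode("latin-1", errors="replace"))

if __name__ == "__main__":
    # Listen on whatever port the clone pool member is configured with
    # (8514 here is an arbitrary assumption)
    with socketserver.TCPServer(("0.0.0.0", 8514), CloneHandler) as server:
        server.serve_forever()
```

Anything that accepts TCP and tolerates raw HTTP text works here; a plain syslog-over-TCP receiver sees the same bytes.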


###############
# Second Rule #
###############
rule http_request_clone_xnum_pools {
    # Clone HTTP requests to X clone pools
    when RULE_INIT {

        # Set up an array of pool names to clone the traffic to.
        # Each pool should be one server that will get a copy of each HTTP request
        set static::clone_pools(0) http_clone_pool1
        set static::clone_pools(1) http_clone_pool2
        set static::clone_pools(2) http_clone_pool3
        set static::clone_pools(3) http_clone_pool4

        # Log debug messages to /var/log/ltm? 0=no, 1=yes
        set static::clone_debug 1

        set static::pool_count [array size static::clone_pools]
        for { set i 0 } { $i < $static::pool_count } { incr i } {
            log local0. "Configured for cloning to pool $static::clone_pools($i)"
        }
    }
    when CLIENT_ACCEPTED {
        # Open a new HSL connection to each clone pool if one is not available
        for { set i 0 } { $i < $static::pool_count } { incr i } {
            set hsl($i) [HSL::open -proto TCP -pool $static::clone_pools($i)]
            if { $static::clone_debug } { log local0. "[IP::client_addr]:[TCP::client_port]: hsl handle ($i) for $static::clone_pools($i): $hsl($i)" }
        }
    }
    when HTTP_REQUEST {

        # Insert an XFF header if one is not inserted already
        # so the client IP can be tracked for the duplicated traffic
        HTTP::header insert X-Forwarded-For [IP::client_addr]

        # Check for POST requests
        if { [HTTP::method] eq "POST" } {

            # Check for a Content-Length between 1 byte and 1 MB
            if { [HTTP::header Content-Length] >= 1 && [HTTP::header Content-Length] < 1048576 } {
                HTTP::collect [HTTP::header Content-Length]
            } elseif { [HTTP::header Content-Length] == 0 } {
                # POST with a 0 Content-Length, so just send the headers
                for { set i 0 } { $i < $static::pool_count } { incr i } {
                    HSL::send $hsl($i) "[HTTP::request]\n"
                    if { $static::clone_debug } { log local0. "[IP::client_addr]:[TCP::client_port]: Sending to $static::clone_pools($i), request: [HTTP::request]" }
                }
            }
        } else {
            # Request with no payload, so send just the HTTP headers to the clone pools
            for { set i 0 } { $i < $static::pool_count } { incr i } {
                HSL::send $hsl($i) "[HTTP::request]\n"
                if { $static::clone_debug } { log local0. "[IP::client_addr]:[TCP::client_port]: Sending to $static::clone_pools($i), request: [HTTP::request]" }
            }
        }
    }
    when HTTP_REQUEST_DATA {
        # The iRule parser does not allow HTTP::request in this event, but the command works when invoked indirectly
        set request_cmd "HTTP::request"
        for { set i 0 } { $i < $static::pool_count } { incr i } {
            if { $static::clone_debug } { log local0. "[IP::client_addr]:[TCP::client_port]: Collected [HTTP::payload length] bytes,\
                sending [expr {[string length [eval $request_cmd]] + [HTTP::payload length]}] bytes total\
                to $static::clone_pools($i), request: [eval $request_cmd][HTTP::payload]" }
            HSL::send $hsl($i) "[eval $request_cmd][HTTP::payload]\n"
        }
    }
}
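Outside of iRules, the fan-out pattern in the second rule (one request copied to every configured destination) reduces to a loop over sockets. A rough Python equivalent, with hypothetical host/port pairs standing in for the clone pools:

```python
import socket

def clone_request(raw_request: bytes, destinations) -> int:
    """Send one raw HTTP request to every clone destination (host, port).

    Mirrors the iRule's loop over $static::clone_pools: each destination
    gets the full headers-plus-payload, with the trailing newline that
    HSL::send appends. Returns how many destinations were written to.
    """
    sent = 0
    for host, port in destinations:
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                sock.sendall(raw_request + b"\n")
                sent += 1
        except OSError:
            # A down clone member should not stop the copies to the others
            continue
    return sent

if __name__ == "__main__":
    request = b"GET / HTTP/1.1\r\nHost: example\r\n\r\n"
    # Hypothetical clone pool members
    pools = [("10.0.0.1", 8514), ("10.0.0.2", 8514)]
    print(clone_request(request, pools))
```

Unlike HSL, this sketch opens a connection per request; HSL keeps connections open per client connection and handles reconnects itself, which is the main reason the iRule approach scales better.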
Published Mar 18, 2015
Version 1.0
  • Hello Hoolio,

    Can we use the VS clone pool setting to get the same results? Please advise.

  • shaggy

    12.1.2 - HTTP::request in HTTP_REQUEST_DATA triggers "command is not valid in the current scope" when saving the iRule. I haven't tested, but creating a variable in HTTP_REQUEST and referencing it in the HTTP_REQUEST_DATA event should do the trick:

    when HTTP_REQUEST {
        ...
        set req_headers [HTTP::request]
    }
    when HTTP_REQUEST_DATA {
        for { set i 0 } { $i < $static::pool_count } { incr i } {
            if { $static::clone_debug } {
                log local0. "[IP::client_addr]:[TCP::client_port]: Collected [HTTP::payload length] bytes,\
                sending [expr {[string length $req_headers] + [HTTP::payload length]}] bytes total\
                to $static::clone_pools($i), request: $req_headers[HTTP::payload]"
            }
            HSL::send $hsl($i) "$req_headers[HTTP::payload]\n"
        }
    }
    
  • Does anyone know if this will handle requests larger than 1 MB? The iRule is coded for requests between 1 byte and 1 MB. I'm using this to replicate a web service that has images sent through it, and I need it to handle larger requests, at least 10 MB rather than just 1. Does anyone know of any problems with increasing this allowable size?

     

  • Many thanks, Aaron! I got it working without any issues on 11.6. Regards, Evelin