HTTP Request Queuing for WAM

Problem this snippet solves:

We run WAM on v10.2.4 and wanted concurrent requests for the same page object to be queued, preventing a request stampede to the backend whenever a page was freshly published or had expired. Unfortunately this functionality isn't available in WAM until v11.x, and our upgrade is a long way off. We also wanted to serve stale content if the refetch took too long (a configurable value), something I don't believe is possible even in v11 as part of request queuing. This required us to sandwich WAM between 2 virtuals so that we had a front-end iRule and a back-end one. When reading the rules, keep in mind there was a distinct delay between the front and back-end rules being invoked - between 5 and 20 (!) milliseconds (tested on an averagely loaded 6400) - which necessitated extra logic. Any comments welcome!!
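In plain terms, the idea is single-flight fetching with a stale fallback: the first request for a page goes to the origin, concurrent requests for the same page wait on an in-progress flag, and a waiter that times out falls back to whatever stale copy exists. A minimal Python sketch of that idea (illustrative only - all names here are mine, not iRule or WAM APIs):

```python
import threading

class RequestCoalescer:
    """First request for a key fetches from the origin; concurrent
    requests for the same key wait, and give up after a timeout so
    stale content can be served instead."""

    def __init__(self, timeout=0.2):
        self.timeout = timeout     # max time a queued request will wait
        self.in_progress = {}      # key -> threading.Event, the "table" flag
        self.lock = threading.Lock()

    def fetch(self, key, origin_fetch, stale_value=None):
        with self.lock:
            event = self.in_progress.get(key)
            if event is None:
                # No request in flight: this one becomes the leader
                event = threading.Event()
                self.in_progress[key] = event
                leader = True
            else:
                leader = False
        if leader:
            try:
                return origin_fetch(key)   # real backend fetch
            finally:
                with self.lock:
                    del self.in_progress[key]
                event.set()                # release any queued waiters
        # Queued request: wait for the leader, or time out and serve stale
        if event.wait(self.timeout):
            return origin_fetch(key)       # by now this would be a cache hit
        return stale_value                 # timed out: serve stale content
```

The iRules below implement the same pattern with the session table standing in for `in_progress` and a polling `after` loop standing in for `event.wait`.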

Code :

### Front-End iRule ###
# Request queuing iRule, Version 1.0
# March, 2014

# Created by IHeartF5 

# Purpose:
# This iRule checks a flag (in the session table) that indicates whether a request for the same page is already in progress.
# If it is, the request is placed into a holding pattern until either the response has returned and populated the cache
# (at which point the request is released and served from cache), or a timer expires, at which point the request is released anyway.
# Intended for apps where all pages are cached (such as CMS content).

# Configuration Requirements:
# Requires 2 virtual servers: vs_xxxxx, to which this iRule is attached, and vs_xxxxx_backend,
# to which ir_request_queuing_backend is attached.

when RULE_INIT {

    # The wait interval in ms
    set static::wait_ms 200
    # The low watermark for wait iterations - used for conditional GETs where stale content exists in WAM.
    # A low value results in more requests being served stale content; use 0 to achieve "serve stale on expiration" behaviour.
    set static::low_loop_cnt 20
    # The high watermark for wait iterations - used for non-conditional GETs where no stale content exists in WAM.
    # Must be higher than low_loop_cnt. Set too low, many concurrent requests are sent to the OWS; set too high, too many
    # requests may be queued concurrently. We would suggest 60 <= high_loop_cnt <= 300, but it depends on the application
    # and baseline testing of it.
    set static::high_loop_cnt 240
}

when CLIENT_ACCEPTED {

    ### Sanity check global variables
    if {$static::low_loop_cnt > $static::high_loop_cnt} {
        log local0. "CLIENT_ACCEPTED - low_loop_cnt ($static::low_loop_cnt) > high_loop_cnt ($static::high_loop_cnt) - rejecting"
        reject
    }
    set debug 0
}

when HTTP_REQUEST {

    # Can be used in prod for debugging on a per-connection basis, using a browser add-on like Firebug to add the headers
    if {[HTTP::header exists "X-tls-debug"]} {
        set debug 1
    }

    # log prefix for request tracking.
    # All debug logs will start with [xxxx] where xxxx is a random per-request identifier
    if {$debug} { 
        # per request identifier
        set prefix "\[[expr {int (rand() * 10000)}]\] "
        # this notifies backend VIP we are in debug mode and passes the connection prefix.
        HTTP::header replace "X-tls-debug" $prefix
     }

    # Check request type (file extension)
    switch [getfield [string tolower [URI::decode [URI::basename [HTTP::path]]]] "." 2] {

        "" -
        "html" {

            set table "[getfield [HTTP::host] : 1][string tolower [URI::decode [HTTP::path]]]"
            if {$debug} {log local0. "${prefix}$table [clock clicks -milliseconds]"}

            # Check to see if the nocache header is set for this page
            if {[HTTP::header exists X-tls-nocache]} {
                # Skip the page request queuing as this request will not be cached even when we do get a response
                virtual "[virtual]_backend"
                return
            } elseif {[table lookup "404$table"] ne ""} {
                # Return blank 404
                HTTP::respond 404 noserver
                return
            }

            # It's a cacheable page - check if there is already an in-progress request for this host/page combo
            set i 1
            while {[table lookup $table] ne ""} {
                if {$i == 1} {
                    # Mark (to the downstream virtual) this as a queued request by inserting a header
                    HTTP::header insert X-mcms-queued "yes"
                    if {[table lookup $table] eq "COND"} {
                        # Use the low loop watermark as this object is in cache, so stale content can be served
                        set loop_cnt $static::low_loop_cnt
                    } else {
                        # Use the high loop watermark as this object is not in cache, so stale content cannot be served
                        set loop_cnt $static::high_loop_cnt
                    }
                }
                # Limit the number of iterations so that we know whether we timed out (the request never returned)
                # or the request returned successfully and the table entry was deleted by the backend virtual
                if {$i > $loop_cnt} {
                    # Exceeded loop count
                    if {$debug} {log local0. "${prefix}Exceeded $loop_cnt, break out of loop"}
                    break
                }

                # Wait for $static::wait_ms before checking again -
                # hopefully by the time the request moves through, WAM will have a valid copy of the page cached
                after $static::wait_ms
                incr i
            }
            if {$debug && $i > 1} {log local0. "${prefix}Stopped waiting after $i loops"}

            if {[HTTP::header exists X-mcms-queued]} {
                # wait a few more milliseconds to ensure the response gets into the cache
                after 10
            }

            if {[table lookup "404$table"] ne ""} {
                # Return blank 404
                HTTP::respond 404 noserver
                return
            }
        }
    }

    # Choose downstream virtual (append backend to this virtual name) - WAM processing will take place before traffic hits this next virtual
    virtual "[virtual]_backend"
}
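With the defaults in RULE_INIT, the watermarks translate into the following worst-case queue times, and the table-entry expiry set by the back-end rule is deliberately one second longer than the longest possible front-end wait. A quick sanity check of the arithmetic (plain Python, values copied from the rule above):

```python
wait_ms = 200        # static::wait_ms
low_loop_cnt = 20    # static::low_loop_cnt (conditional GETs: stale copy exists)
high_loop_cnt = 240  # static::high_loop_cnt (non-conditional GETs: no stale copy)

# Longest a queued request can be held in the front-end while loop
low_wait_s = low_loop_cnt * wait_ms / 1000.0    # 4.0 seconds
high_wait_s = high_loop_cnt * wait_ms / 1000.0  # 48.0 seconds

# Table-entry expiry computed by the back-end iRule, so the in-progress
# flag always outlives the longest possible queued wait
expiry_s = int(high_loop_cnt * wait_ms / 1000) + 1  # 49 seconds

print(low_wait_s, high_wait_s, expiry_s)
```

If you tune the watermarks, re-check that the expiry still exceeds `high_loop_cnt * wait_ms`, otherwise the flag can expire while requests are still queued.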

### Back-End iRule ###
# Request queuing iRule, Version 1.0
# March, 2014

# Created by IHeartF5 

# Purpose:
# This iRule sets/deletes a flag (in the session table) to indicate when a page request is in progress.
# The flag is checked by an iRule attached to the 'front' VIP (the VIP with the WAM profile attached),
# which queues requests for the same page, holding them until this request has completed
# and populated the cache, after which it releases them.

# Configuration Requirements:
# Requires 2 virtual servers: vs_xxxxx, to which ir_request_queuing_frontend is attached, and vs_xxxxx_backend,
# to which this iRule is attached.

when HTTP_REQUEST {

     set debug 0

    if {[HTTP::header exists "X-tls-debug"] } {
        set prefix [HTTP::header "X-tls-debug"]
        set debug 1
    } 

    set fPage 0

    set uri [HTTP::uri]

    switch [getfield [string tolower [URI::decode [URI::basename [HTTP::path]]]] "." 2] {

        "" -
        "html" {
            if {$debug} {log local0. "${prefix}Page request [HTTP::uri] from [IP::client_addr] clock clicks [clock clicks -milliseconds]"}

            # It's a page!! Set a global flag (in the session table) indicating that there is currently a request for this page in progress.
            # This flag is checked by the front-end VIP, and subsequent requests are delayed for a short period to prevent request flooding to the backend
            set fPage 1
            # Set table name - this MUST match the name used in the iRule on the front-end VIP
            set table "[getfield [HTTP::host] : 1][string tolower [URI::decode [HTTP::path]]]"

            # Check if this is a conditional GET - this tells the front-end whether to wait a short time (and serve stale content) or a long time
            if {[HTTP::header exists "if-none-match"] || [HTTP::header exists "if-modified-since"]} {
                set req_stat "COND"
                if {[HTTP::header exists X-mcms-queued]} {
                    # This was a conditional queued request that was not served from cache - could mean the request is still in progress but exceeded high_loop_cnt
                    if {$debug} {log local0. "${prefix}Request timed out $table.....invoke stand-in functionality"}
                    HTTP::respond 500 noserver Retry-After 600
                    return
                } elseif {[table lookup $table] ne "" || [table lookup "200$table"] ne ""} {
                    # The in-progress flag is now set, or the just-returned flag is set - serve stale
                    if {$debug} {log local0. "${prefix}race condition $table.....invoke stand-in functionality"}
                    HTTP::respond 500 noserver Retry-After 600
                    return
                }
            } else {
                # Mark as a non-conditional GET
                set req_stat "N_COND"
            }

            # Create the table entry with a timeout long enough for queued requests to reach the high watermark
            table set $table $req_stat indefinite [expr {int($static::high_loop_cnt * $static::wait_ms / 1000) + 1}]

            if {$debug} {log local0. "${prefix}session key $table is set to '[table lookup $table]' with expiry [expr {int($static::high_loop_cnt * $static::wait_ms / 1000) + 1}]"}
        }
    }

when LB_FAILED {
    # Either the pool is down, or the selected pool member failed to complete the 3-way handshake
    if {$debug} {log local0. "${prefix}LB_FAILED"}

    # Return a 500 to trigger WAM caching 'stand-in' functionality
    HTTP::respond 500 noserver Retry-After 600
    return
}

when HTTP_RESPONSE {

    if {$fPage} {
        if {$debug} {log local0. "${prefix}URI: $uri Status: [HTTP::status] Content-Length [HTTP::header Content-Length] [IP::client_addr] [IP::server_addr]"}
        switch [HTTP::status] {
            "200" -
            "304" {
                # Place a very short-lived entry in the table so page requests for the same page do not get sent downstream
                table set "200$table" $req_stat indefinite 1
            }
            "404" {
                # Place a short-lived entry in the table so subsequent page requests for the same page do not get queued
                table set "404$table" $req_stat indefinite 2
            }
        }

        # It was a page request for which we have now received a response - delete the flag (table entry)
        # indicating that there is currently a request for this page in progress
        table delete $table
    }
}

### Front-End Virtual ###
ltm virtual vs_sites.com.au_http {
    destination 111.117.164.41:http
    http-class {
        wcl_wa_enable
    }
    ip-protocol tcp
    mask 255.255.255.255
    persist none
    pool pl_sites.com.au_http
   profiles {
        http_accel_sites.com.au { }
        oneconnect { }
        tcp-nonagle { }
    }
    rules {
        ir_request_queuing_frontend
    }
    snat automap
    vlans {
        bpweb_110_fwslb
    }
    vlans-enabled
}

### Back-End Virtual ###

ltm virtual vs_sites.com.au_http_backend {
    destination 192.168.0.7:http
    ip-protocol tcp
    mask 255.255.255.255
    persist none
    pool pl_mcms_sites.com.au_http
    profiles {
        http { }
        oneconnect { }
        tcp-lan-optimized {
            context clientside
        }
        tcp-nonagle {
            context serverside
        }
    }
     rules {
        ir_request_queuing_backend
    }
    snat automap
    vlans-enabled
}

Tested this on version:

10.2
Published Jan 30, 2015
Version 1.0
