CSV Tabular Data Sideband Importer

Problem this snippet solves:

This iRule adds the ability to import CSV-formatted tabular data into a table via an HTTP sideband connection.

The implementation is described in George Watkins' article: Populating Tables With CSV Data Via Sideband Connections

The iRule can be added to any virtual server that makes use of tables. If, during the CLIENT_ACCEPTED event, the iRule detects a missing or expired table, it will initiate an HTTP sideband connection to a server containing the data. The data will then be parsed and inserted into a table, and the connection will proceed as normal. No additional sideband connections will be made until the data expires, as defined in the settings at the top of the iRule. Here is a list of all the configurable options:

  • db_host - hostname of the HTTP server hosting the CSV-formatted data
  • db_path - path of the CSV file on the server
  • db_line_delimiter - defines the CSV file line delimiter; LF = Unix, CR = old pre-OS X Macs, CR+LF = Windows (Notepad)
  • db_cache_timeout - length of time (in seconds) that cached data from the table should be used before refreshing
  • dns_server - IP address of the DNS server, if using DNS resolution for db_host
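The refresh cycle described above (check the cached timestamp, take a lock so only one instance refreshes, fetch and parse the CSV, then reconcile old and new keys) can be sketched outside of iRules. The following is a minimal Python illustration of that logic, not F5 API code; the names `fetch_csv`, `cache`, and `state` are stand-ins for the sideband fetch and the two iRule tables:

```python
import time

CACHE_TIMEOUT = 3600  # seconds; mirrors db_cache_timeout

cache = {}                                   # stands in for the db_cache table
state = {"last_refresh": 0, "lock": False}   # stands in for the db_cache_state table

def fetch_csv():
    # Placeholder for the HTTP sideband fetch of the CSV body.
    return "# comment line\n/old-path,/new-path\n/a,/b\n"

def refresh_if_expired(now=None):
    now = now if now is not None else time.time()
    if now - state["last_refresh"] <= CACHE_TIMEOUT:
        return False                         # cache still fresh
    if state["lock"]:
        return False                         # another instance is already refreshing
    state["lock"] = True                     # mirrors the lock key in the state table
    try:
        body = fetch_csv()
        state["last_refresh"] = now
        new_keys = set()
        for line in body.split("\n"):
            # skip comment lines and anything without a key/value separator
            if "," in line and not line.startswith("#"):
                key, _, value = line.partition(",")  # split on the first comma only
                cache[key] = value
                new_keys.add(key)
        for old_key in list(cache):
            if old_key not in new_keys:
                del cache[old_key]           # key vanished from the new DB copy
    finally:
        state["lock"] = False                # mirrors deleting the lock key
    return True
```

The first call past the timeout performs a refresh; subsequent calls within the timeout window return immediately without touching the backing store.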

Code :

when RULE_INIT {
    # HTTP server holding the CSV-formatted database
    set static::db_host mydbhost.testnet.local

    # HTTP path for the CSV-formatted database
    set static::db_path "/redirects.csv"
    # CSV database line delimiter: CR = \r, LF = \n, CR+LF = \r\n
    set static::db_line_delimiter "\n"

    # DNS server used to resolve db_host via RESOLV::lookup below
    # NOTE: 192.0.2.53 is a placeholder; replace with your DNS server's IP
    set static::dns_server 192.0.2.53

    # Timeout for database cached in a table
    set static::db_cache_timeout 3600

}

when CLIENT_ACCEPTED {
    # table to cache the CSV-formatted database
    set db_cache_table "db_cache_[virtual]"

    # table to track when to refresh the database's contents
    set db_cache_state_table "db_cache_timeout_[virtual]"
    set last_refresh [table lookup -subtable $db_cache_state_table last_refresh]

    if { $last_refresh eq "" } { set last_refresh 0 }

    if { [expr [clock seconds]-$last_refresh] > $static::db_cache_timeout } { 
        set db_ip [lindex [RESOLV::lookup @$static::dns_server -a $static::db_host] 0]

        if { $db_ip ne "" } {
            if { [table lookup -subtable $db_cache_state_table lock] != 1 } {
                # lock table modifications so that multiple instances don't attempt to update the table
                table set -subtable $db_cache_state_table lock 1 $static::db_cache_timeout $static::db_cache_timeout

                log local0. "Locking table"

                # establish connection to server
                set conn [connect -timeout 1000 -idle 30 $db_ip:80]

                # build request to send to HTTP server hosting DB
                set request "GET $static::db_path HTTP/1.1\r\nHost: $static::db_host\r\n\r\n"

                # send request to server
                send -timeout 1000 -status send_status $conn $request

                # receive response and place in variable
                set db_contents [getfield [recv -timeout 1000 -status recv_info $conn] "\r\n\r\n" 2]

                if { $db_contents ne "" } {
                    # update last refresh time in timeout table
                    table set -subtable $db_cache_state_table last_refresh [clock seconds] indef indef

                    # grab a list of old keys so we can remove them from cache if not in new DB copy
                    set old_keys [table keys -subtable $db_cache_table]
                    set new_keys [list]

                    foreach field [split [string map [list $static::db_line_delimiter \uffff] $db_contents] \uffff] {
                        if { ($field contains ",") && !($field starts_with "#") } {
                            set sep_offset [string first "," $field]

                            set key [string range $field 0 [expr $sep_offset - 1]]
                            set value [string range $field [expr $sep_offset + 1] end]
                            lappend new_keys $key

                            # add key/value pairs to DB cache table
                            table set -subtable $db_cache_table $key $value indef indef

                            if { [lsearch $old_keys $key] >= 0 } {
                                log local0. "Updating \"$key\" = \"$value\" in DB cache table"
                            } else {
                                log local0. "Adding \"$key\" = \"$value\" to DB cache table"
                            }
                        }
                    }

                    foreach old_key $old_keys {
                        if { [lsearch $new_keys $old_key] < 0 } {
                            # remove any keys which don't exist in new DB copy
                            table delete -subtable $db_cache_table $old_key

                            log local0. "Deleting \"$old_key\" from DB cache table, key doesn't exist in new DB copy"
                        }
                    }
                }

                close $conn

                table delete -subtable $db_cache_state_table lock

                log local0. "Unlocking table"
            }
        } else {
            log local0. "Could not get valid IP for the DB server. Check the hostname and nameserver settings."
        }
    }
}

when HTTP_REQUEST {
    set redirect_path [table lookup -subtable $db_cache_table [string tolower [HTTP::path]]]

    if { $redirect_path ne "" } {
        HTTP::redirect http://[HTTP::host]$redirect_path
    }
}
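One detail in the parsing loop deserves a note: Tcl's `split` treats each character of its split-chars argument as a separate delimiter, so a two-character delimiter such as `\r\n` cannot be passed to it directly. The iRule therefore first maps the configured line delimiter to a single sentinel character (`\uffff`) and splits on that. A rough Python equivalent of the idiom for illustration (Python's `str.split` accepts multi-character separators natively, so the sentinel step is only there to mirror the Tcl approach):

```python
def split_records(body, delimiter):
    # Mirror the Tcl idiom:
    #   split [string map [list $delimiter \uffff] $body] \uffff
    # i.e. map the (possibly multi-character) delimiter to a single
    # sentinel character, then split on that sentinel.
    return body.replace(delimiter, "\uffff").split("\uffff")

# Windows-style CR+LF line endings, as selected by db_line_delimiter
records = split_records("/old,/new\r\n#comment\r\n/a,/b", "\r\n")
```

The comment-line and "contains a comma" checks in the iRule then filter out records like `#comment` before keys and values are extracted.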
Published Mar 17, 2015
Version 1.0

1 Comment

  • Because of some errors [use curly braces to avoid double substitution], like the ones in https://support.f5.com/csp/article/K57410758, I have replaced lines 29, 62 and 63.

    Old code:

        if { [expr [clock seconds]-$last_refresh] > $static::db_cache_timeout } {

        set key [string range $field 0 [expr $sep_offset - 1]]
        set value [string range $field [expr $sep_offset + 1] end]

    New code:

        set time_now [clock seconds]
        set test_var [expr {${time_now} - ${last_refresh}}]

        if { ${test_var} > ${static::db_cache_timeout} } {

        set key [string range $field 0 [expr {${sep_offset} - 1}]]
        set value [string range $field [expr {${sep_offset} + 1}] end]