URL Shortener
Problem this snippet solves:

The Small URL Generator takes a long URL, examines its length, and assigns it a variable-length key based on the original URL's length. The key is then stored in a subtable along with the original URL. When a user accesses the small URL (http://<host>/<key>), they are redirected to the original long URL. The Small URL Generator can also create custom URL keys.

Code :

when RULE_INIT {
    set static::small_url_timeout 86400
    set static::small_url_lifetime 86400
    set static::small_url_response_header "<!DOCTYPE HTML PUBLIC \"-//IETF//DTD HTML 2.0//EN\"><html><head> \
        <title>Small URL Generator</title></head><body><center><h1>Small URL Generator</h1> \
        "
    set static::small_url_response_footer "</center></body></html>"
}

when HTTP_REQUEST {
    if { ([HTTP::uri] starts_with "/create?") and ([HTTP::query] ne "") } {
        set url [URI::decode [string tolower [URI::query [HTTP::uri] url]]]
        set custom_url_key [string tolower [URI::query [HTTP::uri] custom_url_key]]
        if { $custom_url_key ne "" } {
            if { ([table lookup -subtable small_url $custom_url_key] ne "") } {
                HTTP::respond 200 content "$static::small_url_response_header <b><font color=\"ff0000\"> \
                    Error: the custom Small URL <a href=\"http://[HTTP::host]/$custom_url_key\"> \
                    http://[HTTP::host]/$custom_url_key</a> has already been taken. Please try again. \
                    </font></b> $static::small_url_response_footer"
            } else {
                set url_key $custom_url_key
                log local0. "Custom Small URL created for $url with custom key $url_key"
            }
        } else {
            switch -glob [string length $url] {
                {[1-9]}  { set url_key_length 3 }
                {1[0-9]} { set url_key_length 3 }
                {2[0-9]} { set url_key_length 4 }
                {3[0-9]} { set url_key_length 5 }
                default  { set url_key_length 6 }
            }
            set url_key [string tolower [scan [string map {/ "" + ""} [b64encode [md5 $url]]] "%${url_key_length}s"]]
        }
        if { ([table lookup -subtable small_url $url_key] eq "") } {
            table add -subtable small_url $url_key $url $static::small_url_timeout $static::small_url_lifetime
            log local0. "Small URL created for $url with key $url_key"
        } else {
            log local0. "Small URL for $url already exists with key $url_key"
        }
        HTTP::respond 200 content "$static::small_url_response_header The Small URL for \
            <a href=\"$url\">$url</a> is <a href=\"http://[HTTP::host]/$url_key\"> \
            http://[HTTP::host]/$url_key</a> $static::small_url_response_footer"
    } else {
        set url_key [string map {/ ""} [HTTP::path]]
        set url [table lookup -subtable small_url $url_key]
        if { [string length $url] != 0 } {
            log local0. "Found key $url_key, redirecting to $url"
            HTTP::redirect $url
        } else {
            HTTP::respond 200 content "$static::small_url_response_header <form action=\"/create\" \
                method=\"get\"><input type=\"text\" name=\"url\"> \
                <input type=\"submit\" value=\"make small!\"><h4>Make it custom! \
                (optional)</h4>http://[HTTP::host]/<input type=\"text\" name=\"custom_url_key\"></form> \
                $static::small_url_response_footer"
        }
    }
}

Tested this on version: 10.2
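To make the auto-generated key easier to follow, here is a small illustration of the derivation used above. This walk-through is an assumption for a hypothetical 25-character URL (which falls into the {2[0-9]} bucket, so url_key_length is 4); the actual key depends on the hash of your URL.

when HTTP_REQUEST {
    # Hypothetical example only: "http://example.com/a/b/c/" is 25 characters long,
    # so the switch above would set url_key_length to 4.
    set url "http://example.com/a/b/c/"
    # md5 produces the digest, b64encode makes it printable, string map strips "/" and "+",
    # scan %4s keeps the first 4 characters, and string tolower normalises the key.
    set url_key [string tolower [scan [string map {/ "" + ""} [b64encode [md5 $url]]] "%4s"]]
    log local0. "Small URL key for $url would be $url_key"
}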
Rewrite http:// to https:// in response content

Problem this snippet solves:

(Maybe I missed it, but) I didn't see a code share for using a STREAM profile to rewrite content from http to https. This share is just to make it easier to find a simple iRule that replaces http:// links in page content with https://. It's taken directly from the STREAM::expression wiki page.

How to use this snippet:

You'll need to assign a STREAM profile to your virtual server in order for this to work (just create an empty stream profile and assign it).

Code :

# Example which replaces http:// with https:// in response content
# Prevents server compression in responses
when HTTP_REQUEST {
    # Disable the stream filter for all requests
    STREAM::disable
    # LTM does not uncompress response content, so if the server has compression enabled
    # and it cannot be disabled on the server, we can prevent the server from
    # sending a compressed response by removing the compression offerings from the client
    HTTP::header remove "Accept-Encoding"
}
when HTTP_RESPONSE {
    # Check if response type is text
    if {[HTTP::header value Content-Type] contains "text"}{
        # Replace http:// with https://
        STREAM::expression {@http://@https://@}
        # Enable the stream filter for this response only
        STREAM::enable
    }
}

Tested this on version: 11.5
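If you only want to rewrite links that point at the site itself and leave third-party http:// links alone, a slightly narrower variation is sketched below. This is an untested assumption layered on top of the original share; note that HTTP::host is only readable at request time, so it is saved into a variable for use in the response.

when HTTP_REQUEST {
    STREAM::disable
    HTTP::header remove "Accept-Encoding"
    # Save the requested host; HTTP::host cannot be read in HTTP_RESPONSE
    set host [HTTP::host]
}
when HTTP_RESPONSE {
    if { [HTTP::header value Content-Type] contains "text" } {
        # Only rewrite absolute links that reference this host
        STREAM::expression "@http://$host@https://$host@"
        STREAM::enable
    }
}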
Simple iRulesLX JSON rewrite

Problem this snippet solves:

Like rewriting the Location field in a redirect, it's sometimes necessary to rewrite JSON data returned in an HTTP response. While it would be possible to do this with traditional iRules, the task is made simpler (and less risky) by using iRulesLX. In the following example, an HTTP response contains JSON data with fields containing external URLs. These need to be rewritten to an internal URL for the purpose of internal routing.

{ "firstUrl":"https://some.public.host.com/api/preauth/ABCDEFGHIJKLM", "secondUrl":"https://some.public.host.com/api/documents/{documentId}/discussion" }

The concept can be used to rewrite any JSON data; however, more complicated JSON containing arrays, for example, would need to be taken into consideration.

How to use this snippet:

Use the following iRule to call iRulesLX and pass the necessary parameters.

when CLIENT_ACCEPTED {
    set newHost "internal.host.local"
    set jsonKeys "firstUrl secondUrl"
    set rpcHandle [ILX::init "json-parse-plugin" "json-parse-extension"]
}
when HTTP_RESPONSE {
    if {[HTTP::header "Content-Type"] eq "application/json"} {
        HTTP::collect [HTTP::header "Content-Length"]
    }
}
when HTTP_RESPONSE_DATA {
    set payload [HTTP::payload]
    set result [ILX::call $rpcHandle "setInternalUrl" $payload $jsonKeys $newHost]
    HTTP::payload replace 0 [HTTP::header "Content-Length"] $result
}

When used in combination with the iRulesLX code below, the host portion of the URIs in the JSON data is rewritten and the modified payload is returned to the client:

{ "firstUrl":"https://internal.host.local/api/preauth/ABCDEFGHIJKLM", "secondUrl":"https://internal.host.local/api/documents/{documentId}/discussion" }

Code :

const f5 = require('f5-nodejs');
const url = require('url');
const ilx = new f5.ILXServer();

function setInternalUrl(req, res) {
    var json = JSON.parse(req.params()[0]);
    var jsonObj = req.params()[1].split(' ');
    var newHost = req.params()[2];
    for (var i = 0; i < jsonObj.length; i++) {
        if (typeof json[jsonObj[i]] == "string") {
            var oldUrl = url.parse(json[jsonObj[i]]);
            oldUrl.host = newHost;
            var newUrl = decodeURI(url.format(oldUrl));
            json[jsonObj[i]] = newUrl;
        } else {
            json = {"error":"unable to rewrite"};
        }
    }
    res.reply(JSON.stringify(json));
}

ilx.addMethod('setInternalUrl', setInternalUrl);
ilx.listen();

Tested this on version: 12.1
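A small defensive variation (a sketch, not part of the original share): wrapping the RPC in catch means a plugin error is logged and the payload is forwarded untouched rather than the connection failing mid-response.

when HTTP_RESPONSE_DATA {
    # Sketch: fall back to the original payload if the Node.js extension errors out
    if { [catch { ILX::call $rpcHandle "setInternalUrl" [HTTP::payload] $jsonKeys $newHost } result] } {
        log local0.error "ILX::call setInternalUrl failed: $result"
    } else {
        HTTP::payload replace 0 [HTTP::header "Content-Length"] $result
    }
}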
Disabling HTTP Processing For Unrecognized HTTP Methods

Problem this snippet solves:

The iRule below disables HTTP processing for requests using HTTP methods that are not recognized by the BIG-IP HTTP profile. For example, Web-based Distributed Authoring and Versioning (WebDAV) uses the following extended HTTP methods: PROPFIND, PROPPATCH, MKCOL, COPY, MOVE, LOCK, UNLOCK. Requests using one of these methods may provoke the behavior described in AskF5 SOL7581: https://support.f5.com/kb/en-us/solutions/public/7000/500/sol7581.html?sr=2105288. Unrecognized HTTP methods without a specified content-length or chunking header can cause the connection to stall. Use of these or other methods not described in RFC2616 (HTTP/1.1) may require an iRule similar to the following associated with the virtual server, which disables further HTTP processing when they are seen.

How to use this snippet:

Note: You may have to disable the "HTTP::enable" command with a comment if using the iRule on an APM-protected virtual service.

Code :

when CLIENT_ACCEPTED {
    # Enable HTTP processing for all requests by default
    HTTP::enable
}
when HTTP_REQUEST {
    # selectively disable HTTP processing for specific request methods
    switch [HTTP::method] {
        "MOVE" -
        "COPY" -
        "LOCK" -
        "UNLOCK" -
        "PROPFIND" -
        "PROPPATCH" -
        "MKCOL" {
            HTTP::disable
        }
    }
}
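If you would rather manage the method list outside the iRule, an equivalent data-group-driven sketch is shown below. It assumes you create a string data group (hypothetically named extended_methods here) containing PROPFIND, PROPPATCH, MKCOL, COPY, MOVE, LOCK and UNLOCK, so new methods can be added without editing the rule.

when HTTP_REQUEST {
    # extended_methods is a hypothetical string data group of method names to bypass
    if { [class match [HTTP::method] equals extended_methods] } {
        HTTP::disable
    }
}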
Cache Expire

Problem this snippet solves:

This iRule sets the caching headers Expires and Cache-Control on the response. The main use case is client-side caching of static resources such as images, stylesheets, javascripts etc. It will, by default, honor the headers set by the origin server, then give precedence to the resource's mime-type (over its file extension). Also, if the origin server supplies the mime-type of the resource, then file-extension entries are not considered.

Code :

# Expires iRule, Version 0.9.1
# August, 2012
# Created by Opher Shachar (contact me through devcentral.f5.com)
# (please see end of iRule for additional credits)
# Purpose:
#   This iRule sets caching headers on the response. Mostly the use case will be
#   to set client-side caching of static resources as images, stylesheets,
#   javascripts etc.
# Configuration Requirements:
#   Uses a datagroup named Expires of type string containing lines that
#   specify mime-types or extention for the name and, the number of seconds the
#   client should cache the resource for the value. ex.:
#   "image/" := "604800"
#   ".js" := "604800"
#   The Content-Type, if specified, takes precedence over the file extention.

when RULE_INIT {
    # Enable to debug Expires via log messages in /var/log/ltm
    # (2 = verbose, 1 = essential, 0 = none)
    set static::ExpiresDebug 0
    # Overwrite cache headers in response
    # (1 = yes, 0 = no)
    set static::ExpiresOverwrite 0
}

when CLIENT_ACCEPTED {
    # The name of the Data Group (aka class) we are going to use
    set vname [URI::basename [virtual name]]
    set vpath [URI::path [virtual name]]
    set Expires_clname "${vpath}Expires$vname"
    if {! [class exists $Expires_clname]} {
        log local0.notice "Data Group $Expires_clname not found."
    }
}

when HTTP_REQUEST {
    # The log prefix so you can find yourself in the log
    set Expires_lp "VS=[virtual name], URI=[HTTP::uri]"
    if {[class exists $Expires_clname]} {
        set period [string last . [HTTP::path]]
        if { $period >= 0 } {
            # Set the timeout based on the class entry if it exists for this request.
            set expire_content_timeout [class match -value [string tolower [getfield [string range [HTTP::path] $period end] ";" 1]] ends_with $Expires_clname]
            if { ($static::ExpiresDebug > 1) and ($expire_content_timeout ne "") } {
                log local0. "$Expires_lp: found file suffix based expiration: $expire_content_timeout."
            }
        } else {
            set expire_content_timeout ""
        }
    }
}

when HTTP_RESPONSE {
    if { [info exists expire_content_timeout] } {
        # if expire_content_timeout not set then no class was found
        if { [HTTP::header exists "Content-Type"] } {
            # Set the timeout based on the class entry if it exists for this mime type.
            # TODO: Allow globbing in matching mime-types
            set expire_content_timeout [class match -value [string tolower [HTTP::header "Content-Type"]] starts_with $Expires_clname]
            if { ($static::ExpiresDebug > 1) and ($expire_content_timeout ne "") } {
                log local0. "$Expires_lp: found mime type based expiration: $expire_content_timeout."
            }
        }
        if { $expire_content_timeout ne "" } {
            # either matched Content-Type or file extention
            if { $static::ExpiresOverwrite or not [HTTP::header exists "Expires"] } {
                HTTP::header replace "Expires" "[clock format [expr ([clock seconds]+$expire_content_timeout)] -format "%a, %d %h %Y %T GMT" -gmt true]"
                if { ($static::ExpiresDebug > 0) } {
                    log local0. "$Expires_lp: Set 'Expires' to '[clock format [expr ([clock seconds]+$expire_content_timeout)] -format "%a, %d %h %Y %T GMT" -gmt true]'."
                }
            } elseif { [HTTP::header exists "Expires"] } {
                set expire_content_timeout [expr [clock scan "[HTTP::header Expires]" -gmt true] - [clock seconds]]
                if { $expire_content_timeout < 0 } {
                    if { ($static::ExpiresDebug > 0) } {
                        log local0. "$Expires_lp: Found 'Expires' header either invalid or in the past."
                    }
                    return
                }
                if { ($static::ExpiresDebug > 0) } {
                    log local0. "$Expires_lp: Found 'Expires' header and calculated $expire_content_timeout seconds timeout."
                }
            }
            if { $static::ExpiresOverwrite or not [HTTP::header exists "Cache-Control"] } {
                HTTP::header replace "Cache-Control" "max-age=$expire_content_timeout, public"
                if { ($static::ExpiresDebug > 0) } {
                    log local0. "$Expires_lp: Set 'Cache-Control' to 'max-age=$expire_content_timeout, public'."
                }
            }
        }
    }
}
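One non-obvious detail worth calling out is that the data group name is built from the virtual server's own name. The worked example below is an assumption for a virtual server hypothetically named /Common/vs_www; check the names on your own box before creating the data group.

when CLIENT_ACCEPTED {
    # For a virtual server named /Common/vs_www (hypothetical example):
    #   [virtual name]                 -> "/Common/vs_www"
    #   [URI::basename [virtual name]] -> "vs_www"
    #   [URI::path [virtual name]]     -> "/Common/"
    # so the iRule expects a string data group called /Common/Expiresvs_www
    set vname [URI::basename [virtual name]]
    set vpath [URI::path [virtual name]]
    set Expires_clname "${vpath}Expires$vname"
}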
Session Limiting

Problem this snippet solves:

This iRule limits the number of concurrent sessions. The limitations of the session command in version 9.x force us to use a global array so we know the count of sessions. There is also a reaper function to ensure that stale sessions don't continue to eat up sessions. This iRule is tailored for an application which uses JSESSIONID cookies or tokens in the URI. If the application does not use a JSESSIONID, you'll need to remove the URI checking logic and customize the name of the application's session cookie. If the application doesn't use a session cookie, you could modify this iRule to set one in the response if the request doesn't contain one already.

Code :

#timing on
when RULE_INIT {
    #Defines session limit
    set ::limit 4
    #Defines Debug Level (0=no logging 1=logging)
    set ::session_debug 1
    #Defines Session Timeout
    set ::timeout 300
    #Defines the session array
    array set ::sessionar ""
    #Defines the lastrun global variable used by the reaping process
    set ::lastrun 0
    #Defines the timeout in seconds that the reaper is to run
    set ::sessreap 60
    #Defines the redirect webpage
    set ::busy_redirect "http://www.example.com/busypage.html"
}
when HTTP_REQUEST {
    #if {$::session_debug}{log local0. "Got an http request..."}
    #Sets the current time in seconds.
    set currtime [clock seconds]
    #if { $::session_debug }{ log local0. "Session List: [array names ::sessionar]"}
    #if { $::session_debug }{ log local0. "Value of lastrun: $::lastrun"}
    #if {$::session_debug}{log local0. "Value of currtime: $currtime"}
    if { [info exists ::sessionar(loggedoutcontinue)] }{
        unset ::sessionar(loggedoutcontinue)
    }
    #This is the reaping process. This checks upon every connection the amount of time since the last reap
    #and if that reap is greater than the value of the $::sessreap global it executes. The reap process will
    #remove sessions that have been inactive for 301 seconds or more and leave any sessions 300 or lower.
    set since [expr {$currtime - $::lastrun}]
    if {$::session_debug}{log local0. "Seconds since last reap: $since"}
    if { $since >= $::sessreap }{
        set ::lastrun $currtime
        if {$::session_debug}{log local0. "At least one minute has passed. Reaping Session Array"}
        foreach sesskey [array names ::sessionar] {
            #if {$::session_debug}{log local0. "SessionID: $sesskey"}
            if {$::session_debug}{log local0. "Value of $sesskey: $::sessionar($sesskey)"}
            set lastconn $::sessionar($sesskey)
            set elapsedtime [expr {$currtime - $lastconn}]
            if { $elapsedtime > $::timeout }{
                unset ::sessionar($sesskey)
                if { $::session_debug }{ log local0. "Session: $sesskey exceeded timeout. Removed from session table."}
            }
        }
    }
    #Since the array contains unique sessions the following variable provides for an accurate count
    #of the current sessions. The "array size" command gives us the number of elements within the array
    #in the form of an integer
    set currsess [array size ::sessionar]
    if {$::session_debug}{log local0. "Current Sessions: $currsess"}
    #Here we check that the HTTP URI starts with "/licensemgmt" as this rule only pertains to
    #the license management application
    if { [HTTP::uri] starts_with "/licensemgmt" } {
        if { $::session_debug }{ log local0. "URL received: [HTTP::uri]"}
        #reap away session on logout
        if { [HTTP::uri] contains "/invalidateSession.lic" } {
            if {$::session_debug}{log local0. "sessions before reaping: $currsess"}
            set sesscookie [URI::query [HTTP::uri] "ID"]
            unset ::sessionar($sesscookie)
            if { $::session_debug }{ log local0. "session reaped away due to logout: $sesscookie"}
            set currsess_new [array size ::sessionar]
            if {$::session_debug}{log local0. "sessions after reaping: $currsess_new"}
        }
        #Check for the existence of the session cookie and extract the unique value of said cookie
        if { [HTTP::cookie exists "JSESSIONID"] }{
            if { $::session_debug }{ log local0. "has cookie..."}
            set sesscookie [HTTP::cookie "JSESSIONID"]
            if { $::session_debug }{ log local0. "Value of JSESSIONID: $sesscookie"}
            #Check whether this cookie's value is contained as an element of the array. If it is,
            #the iRule updates the last accessed time which is the data of each element. This is
            #in clock seconds. If it doesn't exist we treat it as a new session. This involves a check
            #of whether the threshold has been reached and if so a redirect, otherwise we add the unique
            #id to the array with its time in clock seconds
            if { [info exists ::sessionar($sesscookie)] } {
                if { $::session_debug }{ log local0. "Session Already Exists"}
                set ::sessionar($sesscookie) $currtime
                return
            } else {
                if { $currsess >= $::limit }{
                    #if {$::session_debug}{log local0. "Redirected to: [HTTP::header "Host"]$::busy_redirect"}
                    #HTTP::busy_redirect [HTTP::header "Host"]$::busy_redirect
                    HTTP::busy_redirect $::busy_redirect
                    if {$::session_debug}{log local0. "Over Threshold and not an existing session"}
                    if {$::session_debug}{log local0. "List of Sessions:"}
                    foreach sesslist [array names ::sessionar] {
                        if {$::session_debug}{log local0. "[IP::client_addr]: $sesslist"}
                    }
                    STATS::incr throttle "Rejected Sessions"
                } else {
                    set ::sessionar($sesscookie) $currtime
                    STATS::incr throttle "Allowed Sessions"
                    return
                }
            }
        #If the client didn't have the JSESSIONID then we treat it as a new client and only allow through
        #if the threshold has not been met.
        } else {
            STATS::incr throttle "Total Sessions"
            if { $currsess <= $::limit }{
                STATS::incr "throttle" "Allowed Sessions"
            } else {
                if { $::session_debug }{ log local0. "[IP::client_addr] was denied. Over Threshold" }
                HTTP::busy_redirect $::busy_redirect
                STATS::incr "throttle" "Rejected Sessions"
            }
        }
    }
}

Tested this on version: 9.0
Clone Pool Based On Uri

Problem this snippet solves:

This iRule clones a connection to a second pool based on the request URI. In addition to using an iRule to choose which pool of servers a connection is sent to, you can also set a clone pool within a rule. This is a simple proof-of-concept rule for that purpose. In this rule, any traffic where the URI begins with "/clone_me" not only goes to the target pool real_pool but is also cloned to the pool clone_pool. Any other URI is only sent to the pool real_pool.

Code :

when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/clone_me" } {
        pool real_pool
        clone pool clone_pool
    } else {
        pool real_pool
    }
}

Tested this on version: 10.0
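If the set of URIs to mirror is likely to grow, a data-group-driven variation keeps the rule itself static. This is a sketch; clone_uris is a hypothetical string data group of URI prefixes you want mirrored.

when HTTP_REQUEST {
    pool real_pool
    # clone_uris is a hypothetical string data group holding URI prefixes to mirror
    if { [class match [HTTP::uri] starts_with clone_uris] } {
        clone pool clone_pool
    }
}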
Controlling Bots

Problem this snippet solves:

Webbots, you can't live with them, you can't live without them... This iRule determines if a webbot is accessing your systems and assigns them to a lower priority resource. The first example includes the bot list inside the rule and uses the switch statement to find a match.

Code :

when HTTP_REQUEST {
    switch -glob [string tolower [HTTP::header User-Agent]] {
        "*scooter*" -
        "*slurp*" -
        "*msnbot*" -
        "*fast-*" -
        "*teoma*" -
        "*googlebot*" {
            # Send bots to the bot pool
            pool slow_webbot_pool
        }
        default {
            # Send all other requests to a default pool
            pool default_pool
        }
    }
}

### or if you prefer data groups ###

---- String Class ----

class bots {
    "scooter"
    "slurp"
    "msnbot"
    "fast-"
    "teoma"
    "googlebot"
}

---- iRule ----

when HTTP_REQUEST {
    if { [matchclass [string tolower [HTTP::header User-Agent]] contains $::bots] } {
        pool slow_webbot_pool
    } else {
        pool default_pool
    }
}

Tested this on version: 10.0
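On v10 and later, the matchclass/$::bots form used above is deprecated; the data-group variant can be expressed with the class command instead. A hedged equivalent, assuming the same "bots" data group:

when HTTP_REQUEST {
    if { [class match [string tolower [HTTP::header User-Agent]] contains bots] } {
        pool slow_webbot_pool
    } else {
        pool default_pool
    }
}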
Persist On Last JSESSIONID in HTTP Parameter

Problem this snippet solves:

This iRule was written to persist on only the last jsessionid when multiple jsessionid HTTP parameters are present.

How to use this snippet:

I ran into a particular challenge where a customer was receiving a request where a jsessionid was expected to be set in an HTTP request parameter. For those of you who are not familiar, see where the parameters lie below:

<scheme>://<username>:<password>@<host>:<port>/<path>;<parameters>?<query>#<fragment>

REF: http://www.skorks.com/2010/05/what-every-developer-should-know-about-urls/

Here is an example of a full request:

http://www.example.com/my/resource.html;jsessionid=0123456789abcdef;jsessionid=0123456789abcdef?alice=mouse

In the above example, there are two jsessionid parameters, both of which are the same value. This is what was originally being used to persist:

"*jsessionid*" {
    #Parse the URI and look for jsessionid. Skip 11 characters and match up to the next "?"
    set session_id [findstr [HTTP::uri] jsessionid 11 "?"]
}

Code :

when CLIENT_ACCEPTED {
    set debug 1
}
when HTTP_REQUEST {
    set logTuple "[IP::client_addr]:[TCP::client_port] - [IP::local_addr]:[TCP::local_port]"
    set parameters [findstr [HTTP::path] ";" 1]
    set session_id ""
    foreach parameter [split $parameters ";"] {
        scan $parameter {%[^=]=%s} name value
        if {$debug}{log local0.debug "$logTuple :: Multiple JsessionID in: [HTTP::host][HTTP::uri]"}
        if {$name equals "jsessionid"} {set session_id $value}
    }
    if {$session_id ne ""}{
        #Persist on the parsed session ID for X seconds
        if {$debug}{log local0.debug "$logTuple :: Single JsessionID in: [HTTP::host][HTTP::uri]"}
        persist uie $session_id 86400
    }
}
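To make the parsing concrete, here is how the rule above handles the example request. This is a walk-through in comment form, not additional code to deploy.

# For GET /my/resource.html;jsessionid=0123456789abcdef;jsessionid=0123456789abcdef?alice=mouse
#   [HTTP::path]                 -> /my/resource.html;jsessionid=...;jsessionid=...  (query string excluded)
#   [findstr [HTTP::path] ";" 1] -> jsessionid=0123456789abcdef;jsessionid=0123456789abcdef
#   Each pass of the foreach runs: scan $parameter {%[^=]=%s} name value
#   and overwrites $session_id whenever $name is "jsessionid",
#   so the value of the LAST jsessionid parameter is what "persist uie" receives.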
Bot and Request Limiting iRule

Problem this snippet solves:

This iRule limits robots and what they can do. Furthermore, it restricts requests per minute and blacklists a client that goes above the limit. Note: Not CMP compatible.

Code :

when RULE_INIT {
    #Define blacklist timeout
    set ::bl_timeout 30
    #Define request per minute threshold
    set ::req_limit 5
    #Expiration for tracking IPs
    set ::expiration_time 300
    #Sets iRule Runlevel 0-log only 1 - Logging and Blocking
    set ::runlevel 1
}
when HTTP_REQUEST {
    #Captures User-Agent header to check for known robots
    set ua [string tolower [HTTP::header User-Agent]]
    log local0. "User Agent: $ua"
    #Checks to see if the connection is a known robot or requests the robot.txt file
    if { ([matchclass $ua contains $::RUA]) or ([string tolower [HTTP::uri]] contains "robot.txt") } {
        set robot 1
        log local0. "Robot Detected"
    } else {
        set robot 0
    }
    #Defines client_ip variable with the address of the client
    set client_ip [IP::client_addr]
    log local0. "Client IP: $client_ip"
    #Robot logic
    if { $robot > 0 }{
        set bl_check [session lookup uie blacklist_$client_ip]
        log local0. "Value of bl_check variable: $bl_check"
        set req_uri [string tolower [HTTP::uri]]
        log local0. "Request URI: $req_uri"
        #Checks to see if IP address is on blacklist
        if { $bl_check ne ""}{
            log local0.warn "Request Blocked: $client_ip Client on Blacklist [HTTP::request]"
            if { $::runlevel > 0 }{
                HTTP::respond 403
            }
        }
        #Checks to see if Robot is allowed and sets restrictions. Default is no access
        switch -glob $ua {
            "*slurp*" -
            "*yahooseeker*" -
            "*googlebot*" -
            "*msnbot*" -
            "*teoma*" -
            "*voyager*" {
                if { [matchclass $req_uri starts_with $::robot_block] }{
                    log local0.warn "Request Blocked: $client_ip Robot not following Robot.txt [HTTP::request]"
                    if { $::runlevel > 0 }{
                        HTTP::respond 403
                    }
                } else {
                    pool dave_pool
                }
            }
            default {
                log local0.warn "Request Blocked: $client_ip Unauthorized Robot [HTTP::request]"
                if { $::runlevel > 0 }{
                    HTTP::respond 403
                }
            }
        }
    }
    #Logic for non-robots. Checks to see if blacklisted
    set bl_check [session lookup uie blacklist_$client_ip]
    log local0. "Non-Robot bl_check: $bl_check"
    if { $bl_check ne "" }{
        log local0.warn "Request Blocked: $client_ip Client on Blacklist [HTTP::request]"
        log local0.warn "Session Record: $bl_check"
        if { $::runlevel > 0 }{
            HTTP::respond 403
        }
    }
    set curr_time [clock seconds]
    set timekey starttime_$client_ip
    set reqkey reqcount_$client_ip
    set request_count [session lookup uie $reqkey]
    log local0. "Request Count: $request_count"
    #If user uses search their request count is reset
    if { [HTTP::uri] starts_with "/search" }{
        session delete uie $reqkey
    }
    #Sets up new count for first time connections. If not a new connection, connection count is incremented
    #and the iRule checks to see if over the threshold
    if { $request_count eq "" } {
        log local0. "Request Count is 0"
        set request_count 1
        session add uie $reqkey $request_count $::expiration_time
        log local0. "Current Time: $curr_time"
        log local0. "Timekey Value: $timekey"
        log local0. "Reqkey value: $reqkey"
        session add uie $timekey [expr {$curr_time - 2}] [expr {$::expiration_time + 2}]
        log local0. "Request Count is now: $request_count"
    } else {
        set start_time [session lookup uie $timekey]
        log local0. "Start Time: $start_time"
        log local0. "Request Count (beyond first request): $request_count"
        incr request_count
        session add uie $reqkey $request_count $::expiration_time
        set elapsed_time [expr {$curr_time - $start_time}]
        log local0. "Elapsed Time: $elapsed_time"
        if {$elapsed_time < 60} {
            set elapsed_time 60
        }
        set curr_rate [expr {$request_count / ($elapsed_time/60)}]
        log local0. "Current Rate of Request for $client_ip: $curr_rate"
        if {$curr_rate > $::req_limit}{
            log local0.warn "Request Blocked: $client_ip Client over Threshold. Added to Blacklist [HTTP::request]"
            if { $::runlevel > 0 }{
                session add uie blacklist_$client_ip $::bl_timeout
                HTTP::respond 403
            }
        }
    }
}

Tested this on version: 9.0