Rewrite http:// to https:// in response content
Problem this snippet solves:

(Maybe I missed it, but) I didn't see a code share for using a STREAM profile to rewrite content from http to https. This share just makes it easier to find a simple iRule that replaces http:// links in page content with https://. It is taken directly from the STREAM::expression wiki page.

How to use this snippet:

You'll need to assign a STREAM profile to your virtual server for this to work (just create an empty stream profile and assign it).

Code:

    # Example which replaces http:// with https:// in response content
    # Prevents server compression in responses
    when HTTP_REQUEST {
        # Disable the stream filter for all requests
        STREAM::disable

        # LTM does not uncompress response content, so if the server has compression enabled
        # and it cannot be disabled on the server, we can prevent the server from
        # sending a compressed response by removing the compression offerings from the client
        HTTP::header remove "Accept-Encoding"
    }
    when HTTP_RESPONSE {
        # Check if the response type is text
        if { [HTTP::header value Content-Type] contains "text" } {
            # Replace http:// with https://
            STREAM::expression {@http://@https://@}

            # Enable the stream filter for this response only
            STREAM::enable
        }
    }

Tested this on version: 11.5
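As a quick off-box sanity check, the substitution the stream filter applies can be mimicked in a few lines of Python. This is only an illustrative stand-in (the real STREAM::expression works on the response stream in chunks, and the BIG-IP handles Content-Length for you):

```python
def rewrite_links(body: str) -> str:
    """Mimic STREAM::expression {@http://@https://@} on a text body."""
    return body.replace("http://", "https://")

# Sample response fragment before and after the rewrite
page = '<a href="http://www.example.com/home">home</a>'
rewritten = rewrite_links(page)
```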
Persist On Last JSESSIONID in HTTP Parameter

Problem this snippet solves:

This iRule was written with the goal of persisting on only the last jsessionid present when multiple jsessionid HTTP parameters appear in a request.

How to use this snippet:

I ran into a particular challenge where a customer was receiving requests in which a jsessionid was expected to be set as an HTTP request parameter. For those of you who are not familiar, here is where the parameters lie:

    <scheme>://<username>:<password>@<host>:<port>/<path>;<parameters>?<query>#<fragment>

REF: http://www.skorks.com/2010/05/what-every-developer-should-know-about-urls/

Here is an example of a full request:

    http://www.example.com/my/resource.html;jsessionid=0123456789abcdef;jsessionid=0123456789abcdef?alice=mouse

In the above example, there are two jsessionid parameters, both of which have the same value. This is what was being used to persist:

    "*jsessionid*" {
        # Parse the URI and look for jsessionid. Skip 11 characters and match up to the next "?"
        set session_id [findstr [HTTP::uri] jsessionid 11 "?"]
    }

Code:

    when CLIENT_ACCEPTED {
        set debug 1
    }
    when HTTP_REQUEST {
        set logTuple "[IP::client_addr]:[TCP::client_port] - [IP::local_addr]:[TCP::local_port]"
        set parameters [findstr [HTTP::path] ";" 1]
        set session_id ""
        foreach parameter [split $parameters ";"] {
            scan $parameter {%[^=]=%s} name value
            if {$debug}{log local0.debug "$logTuple :: Multiple JsessionID in: [HTTP::host][HTTP::uri]"}
            if {$name equals "jsessionid"} {set session_id $value}
        }
        if {$session_id ne ""}{
            # Persist on the parsed session ID for X seconds
            if {$debug}{log local0.debug "$logTuple :: Single JsessionID in: [HTTP::host][HTTP::uri]"}
            persist uie $session_id 86400
        }
    }
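The parameter-walking logic can be sketched outside Tcl as well. This Python stand-in (illustrative only, not part of the iRule) shows why the last jsessionid wins:

```python
def last_jsessionid(path: str) -> str:
    """Return the value of the last 'jsessionid' path parameter.

    Path parameters follow the first ';' in the path, e.g.
    /my/resource.html;jsessionid=abc;jsessionid=def
    """
    session_id = ""
    for parameter in path.split(";")[1:]:
        name, _, value = parameter.partition("=")
        if name == "jsessionid":
            session_id = value  # later parameters overwrite earlier ones
    return session_id

path = "/my/resource.html;jsessionid=0123456789abcdef;jsessionid=fedcba9876543210"
sid = last_jsessionid(path)
```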
Clacks-Over-HTTP Gateway

Problem this snippet solves:

This iRule implements draft-1 of the Clacks-over-HTTP protocol specification as developed by the Clacks Overhead WG. This is a partial implementation of the Internet of Roundworld portion of draft-1; it obeys the Accept-Clacks header in requests, but since all clients must accept Plain it ignores any Clacks-Encoding headers. Additionally, it is client-side only. Communication with Discworld is not yet implemented, as current releases of iRules do not include a computational demonology extension. Instead, a fixed list of overhead values is included. By default this is the names of Terry Pratchett and John Dearheart, coded to remain in the Clacks overhead forever (GNU).

Posted in memory of Sir Terry Pratchett, OBE. This IS an April 1 submission and should be taken with the implied grain of salt. That said, the iRule is functional and is a compliant implementation.

How to use this snippet:

Add to any virtual server with an HTTP profile. Header values can be modified by changing static::clacks. The format is as follows:

    set static::clacks "{message 1} {message 2}"

Each message must be wrapped in braces, then added to the quoted string. It is recommended that memorial messages start with GNU to ensure proper return to the Clacks networks of Discworld via gateway Towers and continued transmission.

Code:

    when RULE_INIT {
        # Man's not dead while his name is still spoken.
        set static::clacks "{GNU Terry Pratchett} {GNU John Dearheart}"
    }
    when CLIENT_ACCEPTED {
        set clacks_enabled 1
    }
    when HTTP_REQUEST {
        # Per spec, clients can refuse all Clacks responses with an Accept-Clacks header of "no".
        if {[HTTP::header Accept-Clacks] == "no"} {
            set clacks_enabled 0
        }
    }
    when HTTP_RESPONSE {
        if {$clacks_enabled} {
            foreach clack $static::clacks {
                HTTP::header insert Clacks $clack
            }
        }
    }
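The header logic is small enough to model off-box. This Python sketch of the per-request decision is purely illustrative (the function name and dict-based headers are my own, not a BIG-IP API):

```python
def clacks_headers(request_headers: dict,
                   messages=("GNU Terry Pratchett", "GNU John Dearheart")):
    """Return the Clacks headers to attach to a response,
    honouring an Accept-Clacks request header of "no"."""
    if request_headers.get("Accept-Clacks") == "no":
        return []
    return [("Clacks", message) for message in messages]
```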
Simple iRulesLX JSON rewrite

Problem this snippet solves:

Like rewriting the Location field in a redirect, it's sometimes required to rewrite JSON data returned in an HTTP response. While this would be possible with traditional iRules, the task is made simpler (and less risky) by using iRulesLX. In the following example, an HTTP response contains JSON data with fields containing external URLs. These need to be rewritten to an internal URL for the purpose of internal routing.

    {
        "firstUrl":"https://some.public.host.com/api/preauth/ABCDEFGHIJKLM",
        "secondUrl":"https://some.public.host.com/api/documents/{documentId}/discussion"
    }

The concept can be used to rewrite any JSON data; however, more complicated JSON containing arrays, for example, would need to be taken into consideration.

How to use this snippet:

Use the following iRule to call iRulesLX and pass the necessary parameters:

    when CLIENT_ACCEPTED {
        set newHost "internal.host.local"
        set jsonKeys "firstUrl secondUrl"
        set rpcHandle [ILX::init "json-parse-plugin" "json-parse-extension"]
    }
    when HTTP_RESPONSE {
        if {[HTTP::header "Content-Type"] eq "application/json"} {
            HTTP::collect [HTTP::header "Content-Length"]
        }
    }
    when HTTP_RESPONSE_DATA {
        set payload [HTTP::payload]
        set result [ILX::call $rpcHandle "setInternalUrl" $payload $jsonKeys $newHost]
        HTTP::payload replace 0 [HTTP::header "Content-Length"] $result
    }

When used in combination with the iRulesLX code below, the host portion of the URIs in the JSON data is rewritten by replacing the HTTP payload before the response is returned to the client:

    {
        "firstUrl":"https://internal.host.local/api/preauth/ABCDEFGHIJKLM",
        "secondUrl":"https://internal.host.local/api/documents/{documentId}/discussion"
    }

Code:

    const f5 = require('f5-nodejs');
    const url = require('url');
    const ilx = new f5.ILXServer();

    function setInternalUrl(req, res) {
        var json = JSON.parse(req.params()[0]);
        var jsonObj = req.params()[1].split(' ');
        var newHost = req.params()[2];
        for (var i = 0; i < jsonObj.length; i++) {
            if (typeof json[jsonObj[i]] == "string") {
                var oldUrl = url.parse(json[jsonObj[i]]);
                oldUrl.host = newHost;
                var newUrl = decodeURI(url.format(oldUrl));
                json[jsonObj[i]] = newUrl;
            } else {
                json = {"error":"unable to rewrite"};
            }
        }
        res.reply(JSON.stringify(json));
    }

    ilx.addMethod('setInternalUrl', setInternalUrl);
    ilx.listen();

Tested this on version: 12.1
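If you want to prototype the rewrite logic off-box before deploying the Node.js extension, here is a Python sketch of the same per-key host swap. The function and parameter names are illustrative, not the extension's API:

```python
import json
from urllib.parse import urlsplit, urlunsplit

def set_internal_host(payload: str, keys, new_host: str) -> str:
    """Rewrite the host of each URL-valued key in a JSON document,
    mirroring the setInternalUrl extension's behaviour."""
    doc = json.loads(payload)
    for key in keys:
        if not isinstance(doc.get(key), str):
            # Same fallback shape as the Node.js extension
            return json.dumps({"error": "unable to rewrite"})
        parts = urlsplit(doc[key])
        doc[key] = urlunsplit((parts.scheme, new_host, parts.path,
                               parts.query, parts.fragment))
    return json.dumps(doc)

payload = json.dumps({
    "firstUrl": "https://some.public.host.com/api/preauth/ABCDEFGHIJKLM",
    "secondUrl": "https://some.public.host.com/api/documents/{documentId}/discussion",
})
rewritten = set_internal_host(payload, ["firstUrl", "secondUrl"], "internal.host.local")
```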
URL Shortener

Problem this snippet solves:

The Small URL Generator takes a long URL, examines its length, and assigns it a variable-length key based on the original URL's length. The key is then stored in a subtable along with the original URL. When a user accesses the small URL (http://<host>/<key>), they are redirected to the original long URL. This Small URL Generator also has the ability to create custom URL keys.

Code:

    when RULE_INIT {
        set static::small_url_timeout 86400
        set static::small_url_lifetime 86400
        set static::small_url_response_header "<!DOCTYPE HTML PUBLIC \"-//IETF//DTD HTML 2.0//EN\"><html><head> \
            <title>Small URL Generator</title></head><body><center><h1>Small URL Generator</h1>"
        set static::small_url_response_footer "</center></body></html>"
    }
    when HTTP_REQUEST {
        if { ([HTTP::uri] starts_with "/create?") and ([HTTP::query] ne "") } {
            set url [URI::decode [string tolower [URI::query [HTTP::uri] url]]]
            set custom_url_key [string tolower [URI::query [HTTP::uri] custom_url_key]]
            if { $custom_url_key ne "" } {
                if { ([table lookup -subtable small_url $custom_url_key] ne "") } {
                    HTTP::respond 200 content "$static::small_url_response_header <b><font color=\"ff0000\"> \
                        Error: the custom Small URL <a href=\"http://[HTTP::host]/$custom_url_key\"> \
                        http://[HTTP::host]/$custom_url_key</a> has already been taken. Please try again. \
                        </font></b> $static::small_url_response_footer"
                    # Stop processing; the error page has already been sent and url_key was never set
                    return
                } else {
                    set url_key $custom_url_key
                    log local0. "Custom Small URL created for $url with custom key $url_key"
                }
            } else {
                switch -glob [string length $url] {
                    {[1-9]} { set url_key_length 3 }
                    {1[0-9]} { set url_key_length 3 }
                    {2[0-9]} { set url_key_length 4 }
                    {3[0-9]} { set url_key_length 5 }
                    default { set url_key_length 6 }
                }
                set url_key [string tolower [scan [string map {/ "" + ""} [b64encode [md5 $url]]] "%${url_key_length}s"]]
            }
            if { ([table lookup -subtable small_url $url_key] eq "") } {
                table add -subtable small_url $url_key $url $static::small_url_timeout $static::small_url_lifetime
                log local0. "Small URL created for $url with key $url_key"
            } else {
                log local0. "Small URL for $url already exists with key $url_key"
            }
            HTTP::respond 200 content "$static::small_url_response_header The Small URL for \
                <a href=\"$url\">$url</a> is <a href=\"http://[HTTP::host]/$url_key\"> \
                http://[HTTP::host]/$url_key</a> $static::small_url_response_footer"
        } else {
            set url_key [string map {/ ""} [HTTP::path]]
            set url [table lookup -subtable small_url $url_key]
            if { [string length $url] != 0 } {
                log local0. "Found key $url_key, redirecting to $url"
                HTTP::redirect $url
            } else {
                HTTP::respond 200 content "$static::small_url_response_header <form action=\"/create\" \
                    method=\"get\"><input type=\"text\" name=\"url\"> \
                    <input type=\"submit\" value=\"make small!\"><h4>Make it custom! \
                    (optional)</h4>http://[HTTP::host]/<input type=\"text\" name=\"custom_url_key\"></form> \
                    $static::small_url_response_footer"
            }
        }
    }

Tested this on version: 10.2
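The key-generation scheme is easy to study in isolation. This Python sketch mirrors the iRule's approach (Base64 of the MD5 digest with "/" and "+" stripped, lowercased, truncated to a length bucket). It is an illustration of the scheme, and exact key parity with the BIG-IP should be treated as an assumption rather than a guarantee:

```python
import base64
import hashlib

def url_key(url: str) -> str:
    """Derive a short key: a lowercase prefix of base64(md5(url))
    with '/' and '+' removed, 3-6 characters depending on URL length."""
    length = len(url)
    if length < 20:
        key_length = 3
    elif length < 30:
        key_length = 4
    elif length < 40:
        key_length = 5
    else:
        key_length = 6
    digest = base64.b64encode(hashlib.md5(url.encode()).digest()).decode()
    digest = digest.replace("/", "").replace("+", "")
    return digest[:key_length].lower()
```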
Disabling HTTP Processing For Unrecognized HTTP Methods

Problem this snippet solves:

The iRule below disables HTTP processing for requests using HTTP methods that are not recognized by the BIG-IP HTTP profile. For example, Web-based Distributed Authoring and Versioning (WebDAV) uses the following extended HTTP methods: PROPFIND, PROPPATCH, MKCOL, COPY, MOVE, LOCK, UNLOCK. Requests using one of these methods may provoke the behavior described in AskF5 SOL7581:

https://support.f5.com/kb/en-us/solutions/public/7000/500/sol7581.html?sr=2105288

Unrecognized HTTP methods without a specified Content-Length or chunking header can cause the connection to stall. Use of these or other methods not described in RFC 2616 (HTTP/1.1) may require an iRule similar to the following, associated with the virtual server, which disables further HTTP processing when they are seen.

How to use this snippet:

Note: You may have to disable the "HTTP::enable" command with a comment if using the iRule on an APM-protected virtual service.

Code:

    when CLIENT_ACCEPTED {
        # Enable HTTP processing for all requests by default
        HTTP::enable
    }
    when HTTP_REQUEST {
        # Selectively disable HTTP processing for specific request methods
        switch [HTTP::method] {
            "MOVE" -
            "COPY" -
            "LOCK" -
            "UNLOCK" -
            "PROPFIND" -
            "PROPPATCH" -
            "MKCOL" {
                HTTP::disable
            }
        }
    }
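The method check itself reduces to set membership. A minimal Python model of the per-request decision (the names here are mine, not a BIG-IP API):

```python
# Methods the HTTP profile does not recognize; requests using them
# should bypass HTTP processing on the virtual server.
WEBDAV_METHODS = {"MOVE", "COPY", "LOCK", "UNLOCK", "PROPFIND", "PROPPATCH", "MKCOL"}

def http_processing_enabled(method: str) -> bool:
    """Decide, per request method, whether HTTP processing stays enabled."""
    return method.upper() not in WEBDAV_METHODS
```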
Clone Pool Based On Uri

Problem this snippet solves:

This iRule clones a connection to a second pool based on the request URI. In addition to using an iRule to choose which pool of servers a connection is sent to, you can also set a clone pool within a rule. This is a simple proof-of-concept rule for that purpose. Any traffic whose URI begins with "/clone_me" is not only sent to the target pool real_pool but also cloned to the pool clone_pool. Any other URI is sent only to the pool real_pool.

Code:

    when HTTP_REQUEST {
        if { [HTTP::uri] starts_with "/clone_me" } {
            pool real_pool
            clone pool clone_pool
        } else {
            pool real_pool
        }
    }

Tested this on version: 10.0
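The branch can be expressed as a tiny pure function for testing the routing decision (pool names are taken from the iRule; the function itself is an illustration, not BIG-IP code):

```python
def select_pools(uri: str):
    """Return (pool, clone_pool) for a request URI, mirroring the iRule's branch.
    clone_pool is None when the traffic should not be cloned."""
    if uri.startswith("/clone_me"):
        return ("real_pool", "clone_pool")
    return ("real_pool", None)
```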
Session Limiting

Problem this snippet solves:

This iRule limits the number of sessions. The limits of the session command in version 9.x force us to use a global array so we know the count of sessions. There is also a reaper function to ensure that stale sessions don't continue to eat up session slots. This iRule is tailored for an application which uses JSESSIONID cookies or tokens in the URI. If the application does not use a JSESSIONID, you'll need to remove the URI-checking logic and customize the name of the application's session cookie. If the application doesn't use a session cookie, you could modify this iRule to set one in the response if the request doesn't contain one already.

Code:

    #timing on
    when RULE_INIT {
        # Defines session limit
        set ::limit 4
        # Defines debug level (0=no logging, 1=logging)
        set ::session_debug 1
        # Defines session timeout
        set ::timeout 300
        # Defines the session array
        array set ::sessionar ""
        # Defines the lastrun global variable used by the reaping process
        set ::lastrun 0
        # Defines the interval in seconds at which the reaper runs
        set ::sessreap 60
        # Defines the redirect webpage
        set ::busy_redirect "http://www.example.com/busypage.html"
    }
    when HTTP_REQUEST {
        #if {$::session_debug}{log local0. "Got an http request..."}
        # Sets the current time in seconds
        set currtime [clock seconds]
        #if { $::session_debug }{ log local0. "Session List: [array names ::sessionar]"}
        #if { $::session_debug }{ log local0. "Value of lastrun: $::lastrun"}
        #if {$::session_debug}{log local0. "Value of currtime: $currtime"}
        if { [info exists ::sessionar(loggedoutcontinue)] }{
            unset ::sessionar(loggedoutcontinue)
        }

        # This is the reaping process. Upon every connection it checks the amount of
        # time since the last reap and, if that is greater than the value of the
        # $::sessreap global, it executes. The reap process removes sessions that have
        # been inactive for 301 seconds or more and leaves any sessions at 300 or lower.
        set since [expr {$currtime - $::lastrun}]
        if {$::session_debug}{log local0. "Seconds since last reap: $since"}
        if { $since >= $::sessreap }{
            set ::lastrun $currtime
            if {$::session_debug}{log local0. "At least one minute has passed. Reaping Session Array"}
            foreach sesskey [array names ::sessionar] {
                #if {$::session_debug}{log local0. "SessionID: $sesskey"}
                if {$::session_debug}{log local0. "Value of $sesskey: $::sessionar($sesskey)"}
                set lastconn $::sessionar($sesskey)
                set elapsedtime [expr {$currtime - $lastconn}]
                if { $elapsedtime > $::timeout }{
                    unset ::sessionar($sesskey)
                    if { $::session_debug }{ log local0. "Session: $sesskey exceeded timeout. Removed from session table."}
                }
            }
        }

        # Since the array contains unique sessions, the following variable provides an
        # accurate count of the current sessions. The "array size" command gives us the
        # number of elements within the array as an integer.
        set currsess [array size ::sessionar]
        if {$::session_debug}{log local0. "Current Sessions: $currsess"}

        # Here we check that the HTTP URI starts with "/licensemgmt", as this rule
        # only pertains to the license management application.
        if { [HTTP::uri] starts_with "/licensemgmt" } {
            if { $::session_debug }{ log local0. "URL received: [HTTP::uri]"}

            # Reap away the session on logout
            if { [HTTP::uri] contains "/invalidateSession.lic" } {
                if {$::session_debug}{log local0. "sessions before reaping: $currsess"}
                set sesscookie [URI::query [HTTP::uri] "ID"]
                if { [info exists ::sessionar($sesscookie)] } {
                    unset ::sessionar($sesscookie)
                    if { $::session_debug }{ log local0. "session reaped away due to logout: $sesscookie"}
                }
                set currsess_new [array size ::sessionar]
                if {$::session_debug}{log local0. "sessions after reaping: $currsess_new"}
            }

            # Check for the existence of the JSESSIONID cookie and extract its unique value
            if { [HTTP::cookie exists "JSESSIONID"] }{
                if { $::session_debug }{ log local0. "has cookie..."}
                set sesscookie [HTTP::cookie "JSESSIONID"]
                if { $::session_debug }{ log local0. "Value of JSESSIONID: $sesscookie"}

                # Check whether this cookie's value is contained as an element of the
                # array. If it is, the iRule updates the last-accessed time, which is
                # the data of each element, in clock seconds. If it doesn't exist we
                # treat it as a new session. That involves a check of whether the
                # threshold has been reached and, if so, a redirect; otherwise we add
                # the unique id to the array with its time in clock seconds.
                if { [info exists ::sessionar($sesscookie)] } {
                    if { $::session_debug }{ log local0. "Session Already Exists"}
                    set ::sessionar($sesscookie) $currtime
                    return
                } else {
                    if { $currsess >= $::limit }{
                        #if {$::session_debug}{log local0. "Redirected to: [HTTP::header "Host"]$::busy_redirect"}
                        HTTP::redirect $::busy_redirect
                        if {$::session_debug}{log local0. "Over Threshold and not an existing session"}
                        if {$::session_debug}{log local0. "List of Sessions:"}
                        foreach sesslist [array names ::sessionar] {
                            if {$::session_debug}{log local0. "[IP::client_addr]: $sesslist"}
                        }
                        STATS::incr throttle "Rejected Sessions"
                    } else {
                        set ::sessionar($sesscookie) $currtime
                        STATS::incr throttle "Allowed Sessions"
                        return
                    }
                }
            # If the client didn't have the JSESSIONID, we treat it as a new client and
            # only allow it through if the threshold has not been met.
            } else {
                STATS::incr throttle "Total Sessions"
                if { $currsess <= $::limit }{
                    STATS::incr throttle "Allowed Sessions"
                } else {
                    if { $::session_debug }{ log local0. "[IP::client_addr] was denied. Over Threshold" }
                    HTTP::redirect $::busy_redirect
                    STATS::incr throttle "Rejected Sessions"
                }
            }
        }
    }

Tested this on version: 9.0
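The session-table mechanics (limit check, refresh, reaper) can be modeled off-box before committing to the iRule. The class name and interface in this Python sketch are my own illustration of the same bookkeeping:

```python
import time

class SessionLimiter:
    """Model of the iRule's global array: session id -> last-seen time,
    a hard session limit, and a reaper for idle entries."""

    def __init__(self, limit=4, timeout=300):
        self.limit = limit
        self.timeout = timeout
        self.sessions = {}

    def reap(self, now):
        """Drop sessions idle for longer than the timeout."""
        for sid, last_seen in list(self.sessions.items()):
            if now - last_seen > self.timeout:
                del self.sessions[sid]

    def admit(self, session_id, now=None):
        """Refresh an existing session, or admit a new one if under the limit.
        Returns False when the caller should redirect to the busy page."""
        now = time.time() if now is None else now
        if session_id in self.sessions:
            self.sessions[session_id] = now
            return True
        if len(self.sessions) >= self.limit:
            return False
        self.sessions[session_id] = now
        return True
```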
Cache Expire

Problem this snippet solves:

This iRule sets the caching headers Expires and Cache-Control on the response. The main use case is client-side caching of static resources such as images, stylesheets, JavaScript, etc. It will honor, by default, the headers set by the origin server, then give precedence to the resource's mime-type over its file extension. Also, if the origin server supplies the mime-type of the resource, file-extension entries are not considered.

Code:

    # Expires iRule, Version 0.9.1
    # August, 2012
    # Created by Opher Shachar (contact me through devcentral.f5.com)
    # (please see end of iRule for additional credits)
    # Purpose:
    #   This iRule sets caching headers on the response. Mostly the use case will be
    #   to set client-side caching of static resources as images, stylesheets,
    #   javascripts etc.
    # Configuration Requirements:
    #   Uses a datagroup named Expires of type string containing lines that
    #   specify mime-types or extensions for the name and, for the value, the number
    #   of seconds the client should cache the resource. ex.:
    #     "image/" := "604800"
    #     ".js" := "604800"
    #   The Content-Type, if specified, takes precedence over the file extension.

    when RULE_INIT {
        # Enable to debug Expires via log messages in /var/log/ltm
        # (2 = verbose, 1 = essential, 0 = none)
        set static::ExpiresDebug 0
        # Overwrite cache headers in response
        # (1 = yes, 0 = no)
        set static::ExpiresOverwrite 0
    }
    when CLIENT_ACCEPTED {
        # The name of the Data Group (aka class) we are going to use
        set vname [URI::basename [virtual name]]
        set vpath [URI::path [virtual name]]
        set Expires_clname "${vpath}Expires$vname"
        if {! [class exists $Expires_clname]} {
            log local0.notice "Data Group $Expires_clname not found."
        }
    }
    when HTTP_REQUEST {
        # The log prefix so you can find yourself in the log
        set Expires_lp "VS=[virtual name], URI=[HTTP::uri]"
        if {[class exists $Expires_clname]} {
            set period [string last . [HTTP::path]]
            if { $period >= 0 } {
                # Set the timeout based on the class entry if it exists for this request.
                set expire_content_timeout [class match -value [string tolower [getfield [string range [HTTP::path] $period end] ";" 1]] ends_with $Expires_clname]
                if { ($static::ExpiresDebug > 1) and ($expire_content_timeout ne "") } {
                    log local0. "$Expires_lp: found file suffix based expiration: $expire_content_timeout."
                }
            } else {
                set expire_content_timeout ""
            }
        }
    }
    when HTTP_RESPONSE {
        # if expire_content_timeout is not set then no class was found
        if { [info exists expire_content_timeout] } {
            if { [HTTP::header exists "Content-Type"] } {
                # Set the timeout based on the class entry if it exists for this mime type.
                # TODO: Allow globbing in matching mime-types
                set expire_content_timeout [class match -value [string tolower [HTTP::header "Content-Type"]] starts_with $Expires_clname]
                if { ($static::ExpiresDebug > 1) and ($expire_content_timeout ne "") } {
                    log local0. "$Expires_lp: found mime type based expiration: $expire_content_timeout."
                }
            }
            if { $expire_content_timeout ne "" } {
                # either matched Content-Type or file extension
                if { $static::ExpiresOverwrite or not [HTTP::header exists "Expires"] } {
                    HTTP::header replace "Expires" "[clock format [expr ([clock seconds]+$expire_content_timeout)] -format "%a, %d %h %Y %T GMT" -gmt true]"
                    if { ($static::ExpiresDebug > 0) } {
                        log local0. "$Expires_lp: Set 'Expires' to '[clock format [expr ([clock seconds]+$expire_content_timeout)] -format "%a, %d %h %Y %T GMT" -gmt true]'."
                    }
                } elseif { [HTTP::header exists "Expires"] } {
                    set expire_content_timeout [expr [clock scan "[HTTP::header Expires]" -gmt true] - [clock seconds]]
                    if { $expire_content_timeout < 0 } {
                        if { ($static::ExpiresDebug > 0) } {
                            log local0. "$Expires_lp: Found 'Expires' header either invalid or in the past."
                        }
                        return
                    }
                    if { ($static::ExpiresDebug > 0) } {
                        log local0. "$Expires_lp: Found 'Expires' header and calculated $expire_content_timeout seconds timeout."
                    }
                }
                if { $static::ExpiresOverwrite or not [HTTP::header exists "Cache-Control"] } {
                    HTTP::header replace "Cache-Control" "max-age=$expire_content_timeout, public"
                    if { ($static::ExpiresDebug > 0) } {
                        log local0. "$Expires_lp: Set 'Cache-Control' to 'max-age=$expire_content_timeout, public'."
                    }
                }
            }
        }
    }
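The header values the iRule emits can be previewed with a short Python helper that builds Expires (an HTTP-date, matching the iRule's clock format string) and Cache-Control from a max-age. The helper name is illustrative:

```python
from datetime import datetime, timedelta, timezone

def caching_headers(max_age: int, now: datetime = None) -> dict:
    """Build the Expires and Cache-Control values the iRule would set
    for a resource cacheable for max_age seconds."""
    now = now or datetime.now(timezone.utc)
    expires = (now + timedelta(seconds=max_age)).strftime("%a, %d %b %Y %H:%M:%S GMT")
    return {"Expires": expires, "Cache-Control": "max-age=%d, public" % max_age}
```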
Multiple HTTP Redirect w/ load balancing and monitoring request

Problem this snippet solves:

The goal is to evenly distribute HTTP requests, via redirect, to clients requesting a web server (i.e. URL).

How to use this snippet:

Hi F5 Crew, I am toying with a few HTTP::redirect and monitor combos. I need some help validating my creation. It works in my stand-alone VM Workstation setup. However, I am getting mixed results with 11.2.1 and above on several dev labs. Also, I substituted the URL/URI with node addresses (i.e. 10.2.0.22, etc., with no DNS set up), which works the same.

Code:

    when HTTP_REQUEST {
        if { [active_members MY_POOL] > 1 } {
            set rand [expr { rand() }]
            if { $rand < 0.50 } {
                HTTP::redirect http://www.codeme.org[HTTP::uri]
            } else {
                HTTP::redirect http://www.codeme2.org[HTTP::uri]
            }
        }
    }

Tested this on version: 12.1
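Since the split depends on rand(), it helps to factor the decision into a testable function. Here is a Python sketch of the 50/50 choice (hostnames come from the snippet; the rnd parameter is an illustrative hook for deterministic testing):

```python
import random

def choose_redirect(uri: str, active_members: int, rnd: float = None):
    """Pick a redirect target 50/50, or None when there is only one (or no)
    active member and the request should fall through to normal load balancing."""
    if active_members <= 1:
        return None
    r = random.random() if rnd is None else rnd
    if r < 0.5:
        return "http://www.codeme.org" + uri
    return "http://www.codeme2.org" + uri
```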