A Brief Introduction To External Application Verification Monitors
Background

EAVs (External Application Verification) monitors are one of the most useful and extensible features of the BIG-IP product line. They give the end user the ability to utilize the underlying Linux operating system to perform complex and thorough service checks. Given a service that does not have a monitor provided, a lot of users will assign the closest related monitor and consider the solution complete. There are more than a few cases where a TCP or UDP monitor will mark a service "up" even while the service is unresponsive. EAVs give us the ability to dive much deeper than merely performing a 3-way handshake and neglecting the other layers of the application or service.

How EAVs Work

An EAV monitor is an executable script located on the BIG-IP's file system (usually under /usr/bin/monitors) that is executed at regular intervals by the bigd daemon and reports its status. One of the most common misconceptions (especially amongst those with *nix backgrounds) is that the exit status of the script dictates the fate of the pool member. The exit status has nothing to do with how bigd interprets the pool member's health. Any output to stdout (standard output) from the script will mark the pool member "up". This is a nuance that should receive special attention when architecting your next EAV. Analyze each line of your script and make sure nothing will inadvertently get directed to stdout during monitor execution. The most common example is when someone writes a script that echoes "up" when the checks execute correctly and "down" when they fail. The pool member will be enabled by the BIG-IP under both circumstances, rendering a useless monitor.

Bigd automatically provides two arguments to the EAV's script upon execution: node IP address and node port number. The node IP address is provided with an IPv6 prefix that may need to be removed in order for the script to function correctly. You'll notice we remove the "::ffff:" prefix with a sed substitution in the example below. Other arguments can be provided to the script when configured in the UI (or command line). The user-provided arguments will have offsets of $3, $4, etc. Without further ado, let's take a look at a service-specific monitor that gives us a more complete view of the application's health.

An Example

I have seen, on more than one occasion, a DNS pool member successfully pass the TCP monitor while the DNS service was unresponsive. As a result, a more invasive inspection is required to make sure that the DNS service is in fact serving valid responses. Let's take a look at an example:

#!/bin/bash
# $1 = node IP
# $2 = node port
# $3 = hostname to resolve

[[ $# != 3 ]] && logger -p local0.error -t ${0##*/} -- "usage: ${0##*/} <node IP> <node port> <hostname to resolve>" && exit 1

node_ip=$(echo $1 | sed 's/::ffff://')

dig +short @$node_ip $3 IN A &> /dev/null

[[ $? == 0 ]] && echo "UP"

We are using the dig (Domain Information Groper) command to query our DNS server for an A record. We use the exit status from dig to determine if the monitor will pass. Notice how the script will never output anything to stdout other than "UP" in the case of success. If there aren't enough arguments for the script to proceed, we output the usage to /var/log/ltm and exit. This is a simple 13-line script, but an effective example.
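Wiring the script into LTM is the part left implicit above. Here is a minimal sketch, assuming the script is saved as /usr/bin/monitors/dns_check.sh and made executable; the monitor name, pool name, and hostname argument are hypothetical, and the tmsh property names are from memory, so verify them against your version:

chmod +x /usr/bin/monitors/dns_check.sh

# create the EAV monitor and point it at the script; the args value becomes $3 in the script
tmsh create ltm monitor external dns_eav run /usr/bin/monitors/dns_check.sh args "www.example.com" interval 10 timeout 31

# assign the monitor to the pool and save
tmsh modify ltm pool dns_pool monitor dns_eav
tmsh save sys config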
The Takeaways

- The command should be as lightweight and efficient as possible.
- If the same result can be accomplished with a built-in monitor, use it.
- EAV monitors don't rely on the command's exit status, only standard output.
- Send all error and informational messages to logger instead of stdout or stderr (standard error).
- "UP" has no significance; it is just a series of characters sent to stdout, and the monitor would still pass if the script echoed "DOWN".

Conclusion

When I first discovered EAV monitors, it opened up a whole realm of possibilities that I could not accomplish with built-in monitors. It gives you the ability to do more thorough checking as well as place logic in your monitors. While my example was a simple bash script, BIG-IP also ships with Perl and Python along with their standard libraries, which offer endless possibilities. In addition to using the built-in commands and libraries, it would be just as easy to write a monitor in a compiled language (C, C++, or whatever your flavor may be) and statically compile it before uploading it to the BIG-IP. If you are new to EAVs, I hope this gives you the tools to make your environments more robust and resilient. If you're more of a seasoned veteran, we'll have more fun examples in the near future.
iRules Data Group Formatting Rules

BIG-IP LTM supports internal and external classes (called Data Groups in the GUI) of address, string, and integer types. An internal class is stored in the bigip.conf file, whereas external classes are split between the bigip.conf and the file system (the class itself is defined in the bigip.conf file, but the values of the class are stored in the file system in a location of your choice, though /var/class is the location defined for synchronization in the cs.dat file).

Which flavor? It depends on the requirements. External classes are generally best suited for very large datasets or for datasets that require frequent updates, like blacklists. Formatting is slightly different depending on whether the class is internal or external, and is also different based on the class type: address, integer, or string. Below I'll show the formatting requirements for each scenario. If you are using the GUI to create key/value pairs in a class (and therefore deciding on an internal class), the formatting is handled for you. Note that with internal classes, the dataset is defined with the class, but with external classes, the class is defined with type, separator, and the filename where the dataset is stored. If there is no value for the type (internal or external), it is omitted with no separator.

Update: The following information is for v10 only. For v11+, please reference the v11 Data Group Formatting Rules.

Address Classes

internal class

class addr_testclass {
   {
      host 192.168.1.1
      host 192.168.1.2 { "host 2" }
      network 192.168.2.0/24
      network 192.168.3.0/24 { "network 2" }
   }
}

external class

class addr_testclass_ext {
   type ip
   filename "/var/class/addr_testclass.class"
   separator ":="
}

/var/class/addr_testclass.class

host 192.168.1.1,
host 192.168.1.2 := "host 2",
network 192.168.2.0/24,
network 192.168.3.0/24 := "network 2",

Note: You can also add network entries in the address type external file as shown immediately below, but when the class is updated, they will be converted to the CIDR format.

network 192.168.4.0 mask 255.255.255.0 := "network 3",
network 192.168.5.0 prefixlen 24 := "network 4",

Integer Classes

internal class

class int_testclass {
   {
      1 { "test 1" }
      2 { "test 2" }
   }
}

external class

class int_testclass_ext {
   type value
   filename "/var/class/int_testclass.class"
   separator ":="
}

/var/class/int_testclass.class

1 := "test 1",
2 := "test 2",

String Classes

With string classes, quotes are necessary on the types and values:

internal class

class str_testclass {
   {
      "str1" { "value 1" }
      "str2" { "value 2" }
   }
}

external class

class str_testclass_ext {
   type string
   filename "/var/class/str_testclass.class"
   separator ":="
}

/var/class/str_testclass.class

"str1" := "value 1",
"str2" := "value 2",

Working With External Files

Now that the formatting of the classes themselves is covered, I'd like to point out one more issue, and that's file formatting. If you're editing all the external classes by hand on the BIG-IP, you have nothing to worry about. However, if you edit them on an external system and copy them over, be careful about which editor you choose. The Unix/Linux line terminator is a line feed (0x0A), whereas the Windows default is a carriage return/line feed (0x0D0A) and Mac typically employs just a carriage return (0x0D). The file needs to be formatted in unix-style. I use gVim on my Windows laptop.
By default, it uses the dos-style, as evidenced in my hex readout in gVim below:

0000000: 6865 6c6c 6f2c 2077 6f72 6c64 0d0a 7468  hello, world..th
0000010: 6973 2069 7320 6120 6c69 6e65 2074 6572  is is a line ter
0000020: 6d69 6e61 746f 7220 7465 7374 0d0a 0d0a  minator test....
0000030: 0d0a                                     ..

Now, this is easily changed in gVim: "set fileformat=unix". After this setting, my line feeds are correct:

0000000: 6865 6c6c 6f2c 2077 6f72 6c64 0a74 6869  hello, world.thi
0000010: 7320 6973 2061 206c 696e 6520 7465 726d  s is a line term
0000020: 696e 6174 6f72 2074 6573 740a 0a0a       inator test...

The guidance here is: use a good editor (hint: Notepad and Word are not on that list!).
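If a file does arrive with Windows line endings, you don't have to fix it in the editor; a quick conversion on the BIG-IP itself works too. A small sketch, assuming the GNU sed that ships with the system (the file path is just an example, and dos2unix may or may not be installed):

# strip trailing carriage returns in place
sed -i 's/\r$//' /var/class/addr_testclass.class

# or, if the utility is available
dos2unix /var/class/addr_testclass.class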
Comparing iRule Control Statements

A while back, we put up an article titled "Ten Steps to iRules Optimization" in which we illustrated a few simple steps to making your iRules faster. Item #3 in that article was "Understanding Control Statements". I decided to put the findings to the test and build myself an iRule that performs built-in timing diagnostics for the various control statements and hopefully shed some more light on which command to choose for your situation.

The Commands

The control statements that we see used primarily in iRules are the following:

switch – The switch command allows you to compare a value to a list of matching values and execute a script when a match is found.

switch {
   "val1" { # do something }
   "val2" { # do something else }
   ...
}

switch -glob – An extension to the switch command that allows for wildcard matching.

switch -glob {
   "val1" { # do something }
   "val2" { # do something else }
   ...
}

if/elseif – A control statement that allows you to match arbitrary values together and execute a script when a match is found.

if { $val equals "foo" } {
   # do something
} elseif { $val equals "bar" } {
   # do something else
}
...

matchclass – Perform comparisons with the contents of a datagroup or TCL list.

set match [matchclass "val" equals $::list];

class match – A datagroup-only command to perform extended comparisons on group values (LTM v10.0 and above).

set match [class match "val" equals listname]

The Variables

Since the execution of a command is very fast and the lowest resolution timer we can get is in milliseconds, we are required to execute the commands many times sequentially to get a meaningful number. Thus we need to allow for multiple iterations of the operations for each of the tests. The total number of comparisons we are doing is also a factor in how the command performs. For a single comparison (or a data group with one value), the commands will likely be equivalent and it will be a matter of choice. So, we will allow for multiple listsize values to determine how many comparisons will occur. Where the comparison occurs in the logic flow is also important. For a 1000-entry if/elseif structure, comparing on the first match will be much faster than the last match. For this reason, I've used a modulus over the total listsize to generate the tests. For a list size of 100, we will test items 1, 19, 39, 59, and 79, which should give a good sampling across the various locations in the list where a match can occur.

Initialization

To test as many combinations as possible, and to avoid the pain of hard-coding a 10,000-line if/elseif statement, I've taken some shortcuts (as illustrated in the various test snippets). In the code below, we check for query string parameters and then set default values if they are not specified. A comparison list is then generated with the matchlist variable.
#--------------------------------------------------------------------------
# read in parameters
#--------------------------------------------------------------------------
set listsize [URI::query [HTTP::uri] "ls"];
set iterations [URI::query [HTTP::uri] "i"];
set graphwidth [URI::query [HTTP::uri] "gw"];
set graphheight [URI::query [HTTP::uri] "gh"];
set ymax [URI::query [HTTP::uri] "ym"];

#--------------------------------------------------------------------------
# set defaults
#--------------------------------------------------------------------------
if { ("" == $iterations) || ($iterations > 10000) } { set iterations 500; }
if { "" == $listsize } { set listsize 5000; }
if { "" == $graphwidth } { set graphwidth 300; }
if { "" == $graphheight } { set graphheight 200; }
if { "" == $ymax } { set ymax 500; }

set modulus [expr $listsize / 5];
set autosize 0;

#--------------------------------------------------------------------------
# build lookup list
#--------------------------------------------------------------------------
set matchlist "0";
for {set i 1} {$i < $listsize} {incr i} {
   lappend matchlist "$i";
}

set luri [string tolower [HTTP::path]]

The Main iRule Logic

The iRule has two main components. The first checks for the main page request of /calccommands. When that is given, it will generate an HTML page that embeds all the report graphs. The graph images are passed back into the second switch condition, where the specific test is performed and a redirect is given for the Google Bar chart.

switch -glob $luri {

   "/calccommands" {
      #----------------------------------------------------------------------
      # check for existence of class file. If it doesn't exist
      # print out a nice error message. Otherwise, generate a page of
      # embedded graphs that route back to this iRule for processing
      #----------------------------------------------------------------------
   }
   "/calccommands/*" {
      #----------------------------------------------------------------------
      # Time various commands (switch, switch -glob, if/elseif, matchclass,
      # class match) and generate redirect to a Google Bar Chart
      #----------------------------------------------------------------------
   }
}

Defining the Graphs

The first request to "/calccommands" tests for the existence of the class file for the test. A class must be defined named "calc_nnn" where nnn is the listsize. Values in the list should be 0 to nnn-1. I used a simple perl script to generate class files of size 100, 1000, 5000, and 10000. The catch command is used to test for the existence of the class file. If it does not exist, then an error message is presented to the user. If the class file exists, then an HTML page is generated with 5 charts, spreading the list indexes we are testing across the range from 1 to listsize. When the HTML response is built, it is sent back to the client with the HTTP::respond command.
1: "/calccommands/*" { 2: 3: #---------------------------------------------------------------------- 4: # Time various commands (switch, switch -glob, if/elseif, matchclass, 5: # class match) and generate redirect to a Google Bar Chart 6: #---------------------------------------------------------------------- 7: 8: set item [getfield $luri "/" 3] 9: set labels "|" 10: set values "" 11: 12: #---------------------------------------------------------------------- 13: # Switch 14: #---------------------------------------------------------------------- 15: 16: set expression "set t1 \[clock clicks -milliseconds\]; \n" 17: append expression "for { set y 0 } { \$y < $iterations } { incr y } { " 18: append expression "switch $item {" 19: foreach i $matchlist { 20: append expression "\"$i\" { } "; 21: } 22: append expression " } " 23: append expression " } \n" 24: append expression "set t2 \[clock clicks -milliseconds\]"; 25: 26: eval $expression; 27: 28: set duration [expr {$t2 - $t1}] 29: if { [expr {$duration < 0}] } { log local0. "NEGATIVE TIME ($item, matchclass: $t1 -> $t2"; } 30: append labels "s|"; 31: if { $values ne "" } { append values ","; } 32: append values "$duration"; 33: 34: if { $autosize && ($duration > $ymax) } { set ymax $duration } The Tests switch As I mentioned above, I cheated a bit with the generation of the test code. I found it too tedious to build a 10000 line iRule to test a 10000 switch statement, so I made use of the TCL eval command that allows you to execute a code block you define in a variable. The expression variable holds the section of code I am to execute for the given test. For the switch command, the first thing I do is take down the time before they start. Then a loop occurs for iterations times. This allows the clock counters to go high enough to build a useful report. Inside this look, I created a switch statement looking for the specified index item we are looking for in the list. Making the list content be strings of numbers made this very easy to do. The foreach item in the generated matchlist a switch comparison was added to the expression. Finally closing braces were added as was a final time. Then the switch expression was passed to the eval command which processed the code. Finally the duration was calculated by taking a difference in the two times and the labels and values variables were appended to with the results of the test. 1: "/calccommands/*" { 2: 3: #---------------------------------------------------------------------- 4: # Time various commands (switch, switch -glob, if/elseif, matchclass, 5: # class match) and generate redirect to a Google Bar Chart 6: #---------------------------------------------------------------------- 7: 8: set item [getfield $luri "/" 3] 9: set labels "|" 10: set values "" 11: 12: #---------------------------------------------------------------------- 13: # Switch 14: #---------------------------------------------------------------------- 15: 16: set expression "set t1 \[clock clicks -milliseconds\]; \n" 17: append expression "for { set y 0 } { \$y < $iterations } { incr y } { " 18: append expression "switch $item {" 19: foreach i $matchlist { 20: append expression "\"$i\" { } "; 21: } 22: append expression " } " 23: append expression " } \n" 24: append expression "set t2 \[clock clicks -milliseconds\]"; 25: 26: eval $expression; 27: 28: set duration [expr {$t2 - $t1}] 29: if { [expr {$duration < 0}] } { log local0. 
"NEGATIVE TIME ($item, matchclass: $t1 -> $t2"; } 30: append labels "s|"; 31: if { $values ne "" } { append values ","; } 32: append values "$duration"; 33: 34: if { $autosize && ($duration > $ymax) } { set ymax $duration } switch –glob Just for kicks, I wanted to see what adding “-glob” to the switch command would do. I didn’t make use of any wildcards in the comparisons. In fact, they were identical to the previous test. The only difference is the inclusion of the wildcard matching functionality. 1: #---------------------------------------------------------------------- 2: # Switch -glob 3: #---------------------------------------------------------------------- 4: 5: set expression "set t1 \[clock clicks -milliseconds\]; \n" 6: append expression "for { set y 0 } { \$y < $iterations } { incr y } { " 7: append expression "switch -glob $item {" 8: foreach i $matchlist { 9: append expression "\"$i\" { } "; 10: } 11: append expression " } " 12: append expression " } \n" 13: append expression "set t2 \[clock clicks -milliseconds\]"; 14: 15: eval $expression; 16: 17: set duration [expr {$t2 - $t1}] 18: if { [expr {$duration < 0}] } { log local0. "NEGATIVE TIME ($item, matchclass: $t1 -> $t2"; } 19: append labels "s-g|"; 20: if { $values ne "" } { append values ","; } 21: append values "$duration"; 22: 23: if { $autosize && ($duration > $ymax) } { set ymax $duration } If/elseif The if/elseif test was very similar to the switch command above. Timings were taken, but the only difference was the formation of the control statement. In this case, the first line used ‘if { $item eq \”$i\” } {}’ Subsequent entries prepended “else” to make the rest of the lines ‘elseif { $item eq \”$i\”} {}’. The evaluation of the expression was the same and the graph values were stored. 1: #---------------------------------------------------------------------- 2: # If/Elseif 3: #---------------------------------------------------------------------- 4: set z 0; 5: set y 0; 6: 7: set expression "set t1 \[clock clicks -milliseconds\]; \n" 8: append expression "for { set y 0 } { \$y < $iterations } { incr y } { " 9: foreach i $matchlist { 10: if { $z > 0 } { append expression "else"; } 11: append expression "if { $item eq \"$i\" } { } "; 12: incr z; 13: } 14: append expression " } \n"; 15: append expression "set t2 \[clock clicks -milliseconds\]"; 16: 17: eval $expression; 18: 19: set duration [expr {$t2 - $t1}] 20: if { [expr {$duration < 0}] } { log local0. "NEGATIVE TIME ($item, matchclass: $t1 -> $t2"; } 21: append labels "If|"; 22: if { $values ne "" } { append values ","; } 23: append values "$duration"; 24: 25: if { $autosize && ($duration > $ymax) } { set ymax $duration } matchclass My first attempt at this iRule, was to use matchclass against the generated matchlist variable. The results weren’t that good and we realized the matchclass’s benefits come when working with native classes, not TCL lists. I decided to keep this code the same, working on the auto-generated matchlist. The next test will illustrate the power of using native data groups (classes). 
   #----------------------------------------------------------------------
   # Matchclass on list
   #----------------------------------------------------------------------

   set expression "set t1 \[clock clicks -milliseconds\]; \n"
   append expression "for { set y 0 } { \$y < $iterations } { incr y } { "
   append expression "if { \[matchclass $item equals \$matchlist \] } { }"
   append expression " } \n";
   append expression "set t2 \[clock clicks -milliseconds\]";

   eval $expression;

   set duration [expr {$t2 - $t1}]
   if { [expr {$duration < 0}] } { log local0. "NEGATIVE TIME ($item, matchclass: $t1 -> $t2"; }
   append labels "mc|";
   if { $values ne "" } { append values ","; }
   append values "$duration";

   if { $autosize && ($duration > $ymax) } { set ymax $duration }

class match

In BIG-IP version 10.0, we introduced the new "class" command that gives high-performance searches into data groups. I decided to include the pre-configured classes for this test. Data groups named "calc_nnn" must exist, where nnn equals the listsize, and each must contain that many elements for the test to be valid.

   #----------------------------------------------------------------------
   # class match (with class)
   #----------------------------------------------------------------------

   set expression "set t1 \[clock clicks -milliseconds\]; \n"
   append expression "for { set y 0 } { \$y < $iterations } { incr y } { "
   append expression "if { \[class match $item equals calc_$listsize \] } { }"
   append expression " } \n";
   append expression "set t2 \[clock clicks -milliseconds\]";

   log local0. $expression;

   eval $expression;

   set duration [expr {$t2 - $t1}]
   if { [expr {$duration < 0}] } { log local0. "NEGATIVE TIME ($item, matchclass: $t1 -> $t2"; }
   append labels "c|";
   if { $values ne "" } { append values ","; }
   append values "$duration";

   if { $autosize && ($duration > $ymax) } { set ymax $duration }

Chart Generation

Once all of the tests have been run, the labels and values variables hold all the data for the reports. The image is served up with a simple HTTP::redirect to the appropriate Google chart server. I've made optimizations here to use the 0-9 prefix chart servers so that the browser can render the images quicker.

   #----------------------------------------------------------------------
   # build redirect for the google chart and issue a redirect
   #----------------------------------------------------------------------

   set mod [expr $item % 10]
   set newuri "http://${mod}.chart.apis.google.com/chart?chxl=0:${labels}&chxr=1,0,${ymax}&chxt=x,y"
   append newuri "&chbh=a&chs=${graphwidth}x${graphheight}&cht=bvg&chco=A2C180&chds=0,${ymax}&chd=t:${values}"
   append newuri "&chdl=(in+ms)&chtt=Perf+(${iterations}-${item}/${listsize})&chg=0,2&chm=D,0000FF,0,0,3,1"

   HTTP::redirect $newuri;

The Results

Several runs of the tests are shown below. In the first, the tests are run on a list of size 100 for 10000 iterations of each test. As you can see, for the first element in the matchlist the if/elseif command is the winner, slightly edging out switch and then matchclass and class. But as we start searching deeper into the list for comparisons, the if/elseif takes longer and longer depending on how far down the list you are checking. The other commands seem to grow in a linear fashion, with the only exception being the class command.
http://virtualserver/calccommands?ls=100&i=10000&ym=100

Next we move on to a slightly larger list with 1000 elements. For the first entry, we see that the if/elseif command takes the longest. The others are fairly equal. Again, as we move deeper into the list, the other commands grow somewhat linearly in their time, with the exception of class, which again stays consistent regardless of where in the list it looks.

http://virtualserver/calccommands?ls=1000&i=1000&ym=100

Finally, on a 5000-element list, if/elseif is the clear loser regardless of where you are matching. switch and matchclass (on a list) are somewhat on par with each other, but again the class command takes the lead.

http://virtualserver/calccommands?ls=5000&i=500&ym=100

Observations

Let's take the first few bullets in our optimization article and see if they match our observations here.

- Always think: "switch", "data group", and then "if/elseif" in that order. If you think in this order, then in most cases you will be better off.
- Use switch for < 100 comparisons. Switches are fastest for fewer than 100 comparisons, but if you are making changes to the list frequently, then a data group might be more manageable.
- Use data groups for > 100 comparisons. Not only will your iRule be easier to read by externalizing the comparisons, but it will be easier to maintain and update.
- Order your if/elseif's with the most frequent hits at the top. If/elseif's must perform the full comparison for each if and elseif. If you can move the most frequently hit match to the top, you'll have much less processing overhead. If you don't know which hits are the highest, think about creating a Statistics profile to dynamically store your hit counts and then modify your iRule accordingly.

I think this list should be changed to "always use class match," but that's not the best option for usability in some places. In situations where you are working with smaller lists of data, managing the comparisons inline will be more practical than having them in lots of little class files. Aside from that, I think, based on the graphs above, all of the techniques are on target.

Get The Code

You can view the entire iRule for this article in the DevCentral CodeShare under CommandPerformance.

Related Articles on DevCentral

- Pacfiles R iRules?
- iRules Home
- Mitigating Slow HTTP Post DDoS Attacks With iRules...
- DevCentral Weekly Podcast 158 - Ologizing Your iRules
- Fun with Hash Performance and Google Charts
- iRules-O-Rama
- redirect and sorry server, new to irules
- 5-Minute iRules: What's a URL?
- Network Optimization Won't Fix Application Performance in the Cloud
- Self-Contained BIG-IP Provisioning with iRules and pyControl ...
- stop processing further irules on condition match
- Like a Matrushka, WAN Optimization is Nested
- iRules: Rewriting URIs for Fun and Profit
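To make the takeaway concrete, here is a hedged sketch (not from the original article) of an if/elseif chain collapsed into a single class match lookup. The data group name, URIs, and pool names are made up, and it assumes a v10+ string data group whose values hold the pool names:

# Before: every request walks the chain until something matches
when HTTP_REQUEST {
   if { [HTTP::uri] starts_with "/app1" } { pool app1_pool }
   elseif { [HTTP::uri] starts_with "/app2" } { pool app2_pool }
   elseif { [HTTP::uri] starts_with "/app3" } { pool app3_pool }
}

# After: one data group lookup, regardless of how many entries exist
when HTTP_REQUEST {
   set target [class match -value [HTTP::uri] starts_with uri_pool_map]
   if { $target ne "" } { pool $target }
}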
Writing to and rotating custom log files

Sometimes I need to log information from iRules to debug something. So I add a simple log statement, like this:

when HTTP_REQUEST {
   if { [HTTP::uri] equals "/secure" } {
      log local0. "[IP::remote_addr] attempted to access /secure"
   }
}

This is fine, but it clutters up the /var/log/ltm log file. Ideally I want to log this information into a separate log file. To accomplish this, I first change the log statement to incorporate a custom string - I chose the string "##":

when HTTP_REQUEST {
   if { [HTTP::uri] equals "/secure" } {
      log local0. "##[IP::remote_addr] attempted to access /secure"
   }
}

Now I have to customize syslog to catch this string and send it somewhere other than /var/log/ltm. I do this by customizing syslog with an include statement:

tmsh modify sys syslog include '"
filter f_local0 {
   facility(local0) and not match(\": ##\");
};
filter f_local0_customlog {
   facility(local0) and match(\": ##\");
};
destination d_customlog {
   file(\"/var/log/customlog\" create_dirs(yes));
};
log {
   source(local);
   filter(f_local0_customlog);
   destination(d_customlog);
};
"'

save the configuration change:

tmsh save / sys config

and restart the syslog-ng service:

tmsh restart sys service syslog-ng

The included "f_local0" filter overrides the built-in "f_local0" syslog-ng filter, since the include statement will be the last one to load. The "not match" statement is regex which will prevent any statement containing a "##" string from being written to the /var/log/ltm log. The next filter, "f_local0_customlog", catches the "##" log statement, and the remaining include statements handle the job of sending it to a new destination, which is a file I chose to name "/var/log/customlog".

You may be asking yourself why I chose to match the string ": ##" instead of just "##". It turns out that specifying just "##" also catches AUDIT log entries which (in my configuration) are written every time an iRule containing the string "##" is modified. But only the log statement from the actual iRule itself will contain the ": ##" string. This slight tweak keeps those two entries separated from each other.

So now I have a way to force my iRule logging statements to a custom log file. This is great, but how do I incorporate this custom log file into the log rotation scheme like most other log files? The answer is with a logrotate include statement:

tmsh modify sys log-rotate syslog-include '"
/var/log/customlog {
   compress
   missingok
   notifempty
}
"'

and save the configuration change:

tmsh save / sys config

Logrotate is kicked off by cron, and the change should get picked up the next time it is scheduled to run. And that's it. I now have a way to force iRule log statements to a custom log file which is rotated just like every other log file. It's important to note that you must save the configuration with "tmsh save / sys config" whenever you execute an include statement. If you don't, your changes will be lost the next time your configuration is loaded. That's why I think this solution is so great - it's visible in the bigip_sys.conf file, not like customizing configuration files directly. And it's portable.
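A quick way to confirm the end-to-end behavior (my suggestion rather than part of the original write-up; the virtual server address is a placeholder) is to trigger the iRule and watch both files. The "##" entry should land only in the new log:

# from a client
curl -s http://10.0.0.10/secure > /dev/null

# on the BIG-IP
tail -f /var/log/customlog /var/log/ltm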
DNS Resolution via iRules: NAME::lookup vs RESOLV::lookup

There are so many things that you can do with iRules that it can be pretty staggering to narrow down what the "most useful" commands are, but if I were given that task and absolutely had to, I would say that DNS resolution ranks pretty high on the most powerful list. It is perhaps not as widely used as the HTTP or string commands, but when it does get used, it often solves problems that simply couldn't be solved any other way. Whether it's querying an address before routing traffic, securing access to an application to only particular segments, or simply using DNS as a high-speed DB to store whatever other information you want, the possibilities are broad and deep. For those of you that have been familiar with DNS queries via iRules for a while now, that should come as no surprise. What was a nice surprise, however, was when the name resolution game got changed as of BIG-IP v10.1. Sure, that was a while ago, but I still run into people from time to time who don't know the differences between the old and new methods, so I figured it was high time to put them down here.

There are two ways of performing DNS queries via a native iRule command: NAME::lookup and RESOLV::lookup. We'll take a look at both, but I'll clearly say it up front: RESOLV::lookup is the preferred method of resolution these days. I'm simply documenting both and comparing them to clear up questions and issues that might be faced by those migrating or that don't understand the differences and pros/cons. First the old way, NAME::lookup:

NAME::lookup

NAME::lookup <host>

This command was introduced in v9, so it's been around for years now. It still works, and many people, myself included, have used it to great success in the past. The command is simple, and takes either an IP or hostname as an argument, performing the appropriate lookup on that string as necessary. The major issues to note here, which have historically made this a bit cumbersome, are:

1. This command does not directly return a value

You might think that's a bit awkward: the command being executed to do a lookup doesn't actually return the value of the lookup. I wouldn't necessarily disagree anymore. At the time it was implemented, this was required, as there was no way to have the command suspend the connection as necessary, execute the query, and return with the value retrieved in an efficient, reliable manner. Thus, there are really two commands to deal with when looking at NAME::lookup: the lookup command, and the response command (NAME::response). To perform the lookup, you fire NAME::lookup; to retrieve the data, you call NAME::response.

2. It is non-blocking

What does non-blocking mean? Well, I'll cover this in more depth soon, but basically it means that it does not pause the connection in progress. That sounds fantastic, meaning that the current connection gets to fire through without being slowed down, but think of the situations in which you're likely going to be using these commands. The chances are that if you're performing a lookup it's for some sort of decision, and you want to know the result of that lookup before allowing the connection to finish processing anyway. That means that in most cases, we'd see people throwing in arbitrary collect statements to suspend the connection until the results of the lookup were returned anyway. This means you then have to manage the collect & release of the connection manually, etc. This isn't necessarily a show stopper, but it's definitely more to think about.
3. It requires a second event to retrieve the data being returned

Given that this is a non-blocking command, and as such doesn't return the value directly, an event was implemented that fires when the information from the lookup returns, so that the results may be processed. The NAME_RESOLVED event is where any processing of the DNS response is performed. It's here that you're able to call the NAME::response command, do comparisons as needed on the result, perform any routing or redirection desired, etc. And don't forget to release the connection (TCP::release, HTTP::release, etc.) if you suspended it with a collect. That's a lot of talking. Here's a snippet from the Wiki of what it looks like in practice:

when HTTP_REQUEST {
   # Hold HTTP data until hostname is resolved
   HTTP::collect

   # Start a name resolution on the hostname
   NAME::lookup [IP::client_addr]
}

when NAME_RESOLVED {
   log local0. "client name = >[NAME::response]<"

   # Release HTTP data once hostname is resolved
   HTTP::release
}

As you can see, while functional and powerful, that is not exactly the most straightforward method of doing a lookup. As such, there was motivation to change things, and change they did! In version 10.1 the RESOLV::lookup command was introduced, which changed the way in which we're able to perform lookups in iRules considerably. Not only ease of use, but flexibility as well. Let's take a look at that command:

RESOLV::lookup

RESOLV::lookup [@<IP>|@<virtual name>] [inet|inet6] [-a|-aaaa|-txt|-mx|-ptr] <name|ip>

While the syntax for the command might look a little bit more complex (it is), there's good reason for that, and I assure you that this newer, fancier command is much easier to use. First of all, this is now a blocking command, which means it will automatically handle the connection suspension for you. That's a good thing 95% of the time, and it makes for less code to write and manage, no connection management to have to build yourself, etc. All headache-removing bits. But this command didn't just get easier to use in 10.1, it got far more functionality as well. Given the extra flags you can provide, you are now able to specify a resolver of your choosing and multiple types of lookup records to retrieve, and it even offers IPv6 support. Now, to be fair, those command options have since been back-ported into the NAME::lookup command, but they originated here, so this is where I'm giving credit. What this leaves you with is a command that is not only easier to use, faster to code, and more feature rich, but less confusing to approach. I've heard people shying away from the NAME::lookup command because of the added event and the required suspension code, but hopefully this will show how easy it can be to use the powerful DNS features within iRules thanks to the RESOLV::lookup command. Oh, and did I mention it's faster? The execution is lighter within the iRule, so it takes fewer cycles to execute than its predecessor. As if there weren't enough reasons already to lean this direction. Let's take a look at doing the same thing as above in the code, but with this newer, lighter command:

when HTTP_REQUEST {
   log local0. "client name = > [RESOLV::lookup -ptr "[IP::client_addr]"]"
}

Well there you have it, a look at the old and the new. I know which one I'd rather (and currently do) use. The RESOLV::lookup command makes DNS queries approachable and lighter, as well as much more flexible.
If you got scared off the first time around by the NAME::lookup limitations or requirements, take a look again and see how things have changed. Even though it's not a brand new command, it's one that seems to get overlooked more than it should, so here's to many happy DNS queries in your iRuling future.
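Because the extra flags are easy to overlook, here is a hedged sketch built from the syntax line above; the resolver address, record types, and names are placeholders:

when HTTP_REQUEST {
   # A record lookup against a specific resolver (10.1.1.53 is an example)
   set addr [RESOLV::lookup @10.1.1.53 -a "www.example.com"]

   # MX lookup using the system's configured resolvers
   set mx [RESOLV::lookup -mx "example.com"]

   log local0. "a: $addr mx: $mx"
}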
iRules Concepts: Logging, a Deeper Understanding

Multiple times in recent iRules presentations, whether on the road or here within F5, questions have been raised when the topic of logging within iRules gets brought up. Specifically, people are curious about logging best practices, performance impact, when to log or not, and how to ensure they're not bogging down their device while still recording the necessary data. As such, it seems like a pertinent topic to demystify a bit by way of background and explanation. First, to understand the possible performance and, frankly, functional implications of logging from within an iRule, you need to understand a bit about the TMM and how it interacts with the *nix based host OS that maintains management-level services while TMM does the heavy lifting on traffic.

What's a TMM?

TMM stands for Traffic Management Microkernel. This is the core of our TMOS (Traffic Management Operating System) infrastructure. Whenever anyone talks about TMOS, they are talking about one or more TMMs running to perform the job of passing, inspecting and manipulating traffic, among other things. This is important to understand because the TMM performs a certain set of functions, but is not a complete OS in a traditional sense. Years ago, when engineering version 9 of BIG-IP, we needed a way to achieve not only high performance but also scalability, modularity and reliability while processing traffic. As such, TMM was born. Without going into nuts & bolts too deeply, the idea is that there is a custom, purpose-built kernel designed to do nothing but process network traffic as effectively and efficiently as possible. Complete with a custom TCP/IP stack, session management and much more, TMM handles all of the traffic being passed over the wire and through your BIG-IP. It is also where all of the processing surrounding said traffic is handled. Profile parsing, most module influences, iRules... the magic really happens within the TMM. This is relevant and important because everything that you do inside an iRule takes place within the TMM. It handles the event structure, commands, memory management... all of it. Whenever you're invoking an iRule you're calling on TMM to do the work and output the result, regardless of whether that's discarding an unsavory access attempt or writing new data to the session table.

So how does that affect logging?

So that's TMM, and understanding that is great, but how does it pertain to logging and performance? TMM is blazingly fast and does its job extremely well. It does not, however, do things like run sshd or Apache, or other generic "User" processes such as, say, syslogd. Those "User" processes aren't things that are directly in the path of traffic, generally speaking, and are handled by the host OS, the support system that TMM relies on to handle those sorts of things. The host OS operates with different memory and CPU restrictions, different performance limitations, and generally behaves as a completely different entity than the TMM(s) that are processing traffic. This being the case, things that involve processes in user space are generally not put in the traffic path, as doing so can, in some cases, slow down traffic processing given that the host OS doesn't run nearly as efficiently as TMM does. That ceases to be the case, however, when you put basic, on-box logging in place from within an iRule. When you set up a basic log statement within your iRule, it will send that information out to user land and feed it to syslogd, which is the logging engine configured on the system.
Syslogd is designed to log management events, errors on the system, etc., and does so quite well. It can, however, become overwhelmed if you flood it with requests. One of the easiest ways to do that is to, say, put 20 log statements in an iRule that then receives 50,000 requests per second. Don't misunderstand: production iRules making use of the basic log command are common and there is nothing necessarily wrong with them as a rule, but it's important to understand the number of log requests that will be sent to syslogd and gauge whether or not that is acceptable. There is no hard and fast rule for this, so I can't tell you "only ever log x requests per second on this device", because that is completely subjective based on box utilization, application needs and about 100 other variables.

Are there other options?

There are indeed other options, a couple of them in fact. Both of them, however, involve shipping logs to a remote syslog server. The most reliable, simplest way to increase your logging throughput while ensuring you don't impact the performance of your BIG-IP is to ship those logs off box to a system dedicated to handling that type of load. This isn't necessary for many deployments, as the logging rate required most times doesn't warrant it. When you need more logging headroom, though, this is the best way to go about it. For the purposes of this article I'll assume you're running on a recent version (11.0 or later), though some of these features were available much earlier (10.0 in some cases).

First up is a simple change to the log statement that will signal the LTM to send the log message directly off box rather than to the host OS to be processed by syslog. You can see the simple difference here:

General, default logging statement:

#log [-noname] <facility>.[<level>] <message>
#E.G:
log local0. "Default log messages look like this"

Remote, in-line logging statement:

#log [-noname] <remote_ip>[:<remote_port>] <facility>.[<level>] <message>
#E.G:
log 172.27.122.54 local0.info "In line remote logging looks like this"

As you can see, it's quite an easy change to make in your log statements and, in the case of environments that require a high rate of logging, it will have a significant impact on performance and logging overhead. The other option is the HSL commands, which allow you more granular control over exactly how and what you want to send to your remote systems, but make use of the exact same exit point from the BIG-IP to transmit the remote logging data. The syntax of the HSL (High Speed Logging) commands looks like this:

HSL::open -proto <UDP|TCP> -pool <poolname>
HSL::send <handle> <data>

And a simple example of them in an iRule, straight from the wiki, is:

when CLIENT_ACCEPTED {
   set hsl [HSL::open -proto UDP -pool syslog_server_pool]
}
when HTTP_REQUEST {
   # Log HTTP request as local7.info; see RFC 3164 Section 4.1.1 - "PRI Part" for more info
   HSL::send $hsl "<190> [IP::local_addr] [HTTP::uri]\n"
}

The above example shows a very simple request in which the outcome is easy to reproduce and doesn't stray far at all from the in-line option shown earlier, but the HSL commands offer drastically more possibilities. While they can be used simply, they are also a ticket to complete customization of log messages being sent off box wherever and however you want them to be.
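As one illustration of that customization, here is a hedged sketch (the pool name and message fields are placeholders) that builds a fuller RFC 3164-style line. The <190> in the wiki example is simply facility * 8 + severity: local7 (23) * 8 + info (6) = 190, so local0.notice would be 16 * 8 + 5 = 133:

when CLIENT_ACCEPTED {
   set hsl [HSL::open -proto UDP -pool syslog_server_pool]
}
when HTTP_REQUEST {
   # <133> = local0 (16) * 8 + notice (5)
   set stamp [clock format [clock seconds] -format {%b %e %H:%M:%S}]
   HSL::send $hsl "<133>$stamp [IP::local_addr] irule: [IP::client_addr] [HTTP::host][HTTP::uri]\n"
}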
There are many possibilities here and I won't try to enumerate them, but they are highly flexible commands and allow you much more granular control over off-box message shipping. If the simple, in-line option doesn't meet your needs, HSL almost certainly will.

Why is this so much faster?

What this small change signals to the LTM is that there is a remote logging system that this log message should be sent to, and rather than directing this message to the host OS it will instead send it out of a high-speed output specifically designed to efficiently send messages out of TMM without involving the host OS whatsoever. That last bit is very important. This is a way to allow TMM to directly fire messages out to your remote logging system, which allows for a much, much higher rate of logging while maintaining incredibly low performance implications. This means that rather than being affected by the host OS's performance (keep in mind that logging is one of the only ways that this happens within BIG-IP), TMM is able to run unhindered, dishing log messages out as fast as it needs to without anything slowing it down.

Does this mean I should never log locally?

No, not at all. Local logging is absolutely applicable and has its place in many deployments. Whether it's debugging or production logging, there is no issue with logging locally from within an iRule unless you require an extremely high rate of logging, either due to many log messages in a given iRule (or many iRules logging at once) combined with a high amount of request throughput. There's no golden rule as to the tipping point where you should move from local logging to remote, but it's not something that is run into lightly. If you're working in a high-throughput environment you have likely run into this type of issue elsewhere, and are already on the lookout for workarounds such as this.

As I've told multiple people recently at both User Groups and training sessions, logging in an iRule is nothing to be afraid of. There are limitations and best practices, as with most things, but there is no reason to avoid it whatsoever. Rather, it's best to understand your options, the needs of your environment, and how those two things coalesce. Once you have that understanding, it's smooth sailing. Hopefully this helps clarify how iRules logging works and allows you to pick the appropriate option.
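One related habit worth mentioning (a common pattern rather than anything specific to this article): gate local debug logging behind a flag so it costs almost nothing when you're not actively troubleshooting. The names below are arbitrary:

when RULE_INIT {
   # flip to 1 while troubleshooting, back to 0 for production
   set static::debug 0
}
when HTTP_REQUEST {
   if { $static::debug } {
      log local0.debug "[IP::client_addr] -> [HTTP::host][HTTP::uri]"
   }
}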
iRules IP Comparison Considerations with IP::addr Command

Anyone utilizing IP network comparisons in iRules is probably familiar with this syntax:

if { [IP::addr [IP::client_addr]/24 equals 10.10.20.0] } {
   # Do this
}

In fact, there are several methods for doing a comparison. Here are three functional equivalents that include the most common form shown above:

[IP::addr [IP::remote_addr]/24 equals 10.10.20.0]
[IP::addr [IP::remote_addr]/255.255.255.0 equals 10.10.20.0]
[IP::addr "[IP::remote_addr] mask 255.255.255.0" equals 10.10.20.0]

All three work, returning true if there is a match and false if not. These formats, however, are not ideal, as the command was never intended to work this way. What occurs when performing the comparison this way is that the system has to convert the internal IP address to string form, apply the network mask, then re-convert the result back into an IP network object for comparison to the last argument. While possible, it isn't as efficient, and technically it is an oversight in the syntax validation checking. It's not slated to be "fixed" at this point, but it could be in the future, so you should consider updating any iRules to one of these formats:

[IP::addr [IP::remote_addr] equals 10.10.20.0/24]
[IP::addr [IP::remote_addr] equals 10.10.20.0/255.255.255.0]
[IP::addr [IP::remote_addr] equals "10.10.20.0 mask 255.255.255.0"]

In these formats, the input address does not need to be masked and can be directly compared with the pre-parsed network specification. Finally, there is one more format that works:

[IP::addr [IP::addr [IP::remote_addr] mask 255.255.255.0] equals 10.10.20.0]

This also doesn't require any additional conversion, but the sheer volume of commands in that statement is an immediate indicator (to me, anyway) that it's probably not very efficient.

Performance

Before running each format through a simple test (ab -n 10000 -c 25 http://<vip>), I set up a control so I could isolate the cycles for each format:

when HTTP_REQUEST timing on {
   ### CONTROL ###
   if { 1 } { }
}

Since all the formats will return true if the match is found, subtracting out the average cycles from this iRule should give me a pretty accurate accounting of the cycles required for each specific format. So I can replace the "1" in the conditional with each format, as shown in this iRule:

when HTTP_REQUEST timing on {
   ### FUNCTIONAL EQUIVALENTS "RECOMMENDED" ###

   # Format #1 Cycles: 6839 - 1136 = 5703
   # if { [IP::addr [IP::remote_addr] equals 10.10.20.0/24] } { }

   # Format #2 Cycles: 6903 - 1136 = 5767
   # if { [IP::addr [IP::remote_addr] equals 10.10.20.0/255.255.255.0] } { }

   # Format #3 Cycles: 7290 - 1136 = 6154
   # if { [IP::addr [IP::remote_addr] equals "10.10.20.0 mask 255.255.255.0"] } { }

   ### FUNCTIONAL EQUIVALENTS "NOT RECOMMENDED" ###

   # Format #4 Cycles: 8500 - 1136 = 7364
   # if { [IP::addr [IP::remote_addr]/24 equals 10.10.20.0] } { }

   # Format #5 Cycles: 8543 - 1136 = 7407
   # if { [IP::addr [IP::remote_addr]/255.255.255.0 equals 10.10.20.0] } { }

   # Format #6 Cycles: 8827 - 1136 = 7691
   # if { [IP::addr "[IP::remote_addr] mask 255.255.255.0" equals 10.10.20.0] } { }

   ### ALTERNATE FORMAT ###

   # Format #7 Cycles: 9124 - 1136 = 7988
   # if { [IP::addr [IP::addr [IP::remote_addr] mask 255.255.255.0] equals 10.10.20.0] } { }

   ### CONTROL ###

   # Cycles: 1136
   # if { 1 } { }
}

You can see from the average cycles data I added to each format's comments above that the recommended formats are all faster than the formats that are not recommended.
What's interesting is the subtle difference between the "/" formats within each group of functional equivalents, and the significant outliers that the "<net> mask <mask>" formats turn out to be within their groups. Also of note is that the last format, while acceptable, is really inefficient and should probably be avoided. The following breaks down the increase in cycles for each format compared to the best performer, [IP::addr [IP::remote_addr] equals 10.10.20.0/24]:

- Format #2: +64 cycles
- Format #3: +451 cycles
- Format #4: +1661 cycles
- Format #5: +1704 cycles
- Format #6: +1988 cycles
- Format #7: +2285 cycles

Whereas the numbers of cycles required to execute this operation are all quite small, the difference beyond the first two similar formats is quite significant.

Conclusion

Many ways to skin a cat, some good, some not so good. Just to be clear, using the mask on the initial argument of the comparison should be avoided, and if you currently have iRules utilizing this format, it would be best to update them sooner rather than later. I'll be doing the same on all the DevCentral wikis, forums, articles, and blogs. If you find some references where we haven't made this distinction, please comment below.

Related Articles

- IP::addr and IPv6
- Why do we need to use [IP::addr [IP::client_addr]]?
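For reference, here is a hedged sketch of the recommended format in a complete (if contrived) rule; the subnet is a placeholder:

when CLIENT_ACCEPTED {
   # allow only clients on the example 10.10.20.0/24 network
   if { ![IP::addr [IP::client_addr] equals 10.10.20.0/24] } {
      reject
   }
}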
Problems Overcome During a Major LTM Software/Hardware Upgrade

I recently completed a successful major LTM hardware and software migration which accomplished two high-level goals:

- Software upgrade from v9.3.1HF8 to v10.1.0HF1
- Hardware platform migration from 6400 to 6900

I encountered several problems during the migration event that would have stopped me in my tracks had I not (in most cases) encountered them already during my testing. This is a list of those issues and what I did to address them. While I may not have all the documentation about these problems or even fully understand all the details, the bottom line is that these fixes worked. My hope is that someone else will benefit from them when it counts the most (and you know what I mean).

Problem #1 – Unable to Access the Configuration Utility (admin GUI)

The first issue I had to resolve was apparent immediately after the upgrade finished. When I tried to access the Configuration utility, I was denied:

Access forbidden!
You don't have permission to access the requested object.
Error 403

I happened to find the resolution in SOL7448: Restricting access to the Configuration utility by source IP address. The SOL refers to bigpipe commands, which is what I used initially:

bigpipe httpd allow all add
bigpipe save

Since then, I've developed the corresponding TMSH commands, which is F5's long-term direction toward managing the system:

tmsh modify sys httpd allow replace-all-with {all}
tmsh save / sys config

Problem #2 – Incompatible Profile

I encountered the second issue after the upgraded configuration was loaded for the first time:

[root@bigip2:INOPERATIVE] config # BIGpipe unknown operation error:
01070752:3: Virtual server vs_0_0_0_0_22 (forwarding type) has an incompatible profile.

By reviewing the /config/bigip.conf file, I found that my forwarding virtual servers had a TCP profile applied:

virtual vs_0_0_0_0_22 {
   destination any:22
   ip forward
   ip protocol tcp
   translate service disable
   profile custom_tcp
}

Apparently v9 did not care about this, but v10 would not load until I manually removed these TCP profile references from all of my forwarding virtual servers.

Problem #3 – BIGpipe parsing error

Then I encountered another problem while attempting to load the configuration for the first time:

BIGpipe parsing error (/config/bigip.conf Line 6870):
012e0022:3: The requested value (x.x.x.x:3d-nfsd {) is invalid (show | <pool member list> | none) [add | delete]) for 'members' in 'pool'

While examining this error, I noticed that the port number had been translated into a service name, "3d-nfsd". Fortunately, during my initial v10 research, I came across SOL11293 - The default /etc/services file in BIG-IP version 10.1.0 contains service names that may cause a configuration load failure. While I had added a step in my upgrade process to prevent the LTM from service translation, it was not scheduled until after the configuration had been successfully loaded on the new hardware.
Instead, I had to move this step up in the overall process flow:

bigpipe cli service number
b save

The corresponding TMSH commands are:

tmsh modify cli global-settings service number
tmsh save / sys config

Problem #4 – Command is not valid in current event context

This was the final error we encountered when trying to load the upgraded configuration:

BIGpipe rule creation error:
01070151:3: Rule [www.mycompany.com] error: line 28: [command is not valid in current event context (HTTP_RESPONSE)] [HTTP::host]

While reviewing the iRule, it was obvious that we had a statement which didn't make any sense, since there is no Host header in an HTTP response. Apparently it didn't bother v9, but v10 didn't like it:

when HTTP_RESPONSE {
   switch -glob [string tolower [HTTP::host]] {
      <do some stuff>
   }
}

We simply removed that event from the iRule.

Problem #5 – Failed Log Rotation

After I finished my first migration, I found myself in a situation where none of the logs in the /var/log directory were being rotated. The /var/log/secure log file held the best clue about the underlying issue:

warning crond[7634]: Deprecated pam_stack module called from service "crond"

I had to open a case with F5, who found that the PAM crond configuration file (/config/bigip/auth/pam.d/crond) had been pulled from the old unit:

#
# The PAM configuration file for the cron daemon
#
#
auth       sufficient pam_rootok.so
auth       required   pam_stack.so service=system-auth
auth       required   pam_env.so
account    required   pam_stack.so service=system-auth
session    required   pam_limits.so
#session   optional   pam_krb5.so

I had to update the file from a clean unit (which I was fortunate enough to have at my disposal):

#
# The PAM configuration file for the cron daemon
#
#
auth       sufficient pam_rootok.so
auth       required   pam_env.so
auth       include    system-auth
account    required   pam_access.so
account    sufficient pam_permit.so
account    include    system-auth
session    required   pam_loginuid.so
session    include    system-auth

and restart crond:

bigstart restart crond

or, in the v10 world:

tmsh restart sys service crond

Problem #6 – LTM/GTM SSL Communication Failure

This particular issue is the sole reason that my most recent migration process took 10 hours instead of four. Even if you do have a GTM, you are not likely to encounter it, since it was a result of our own configuration. But I thought I'd include it since it isn't something you'll see documented by F5. One of the steps in my migration plan was to validate successful LTM/GTM communication with iqdump. When I got to this point in the migration process, I found that iqdump was failing in both directions because of SSL certificate verification, despite having installed the new Trusted Server Certificate on the GTM and Trusted Device Certificates on both the LTM and GTM. After several hours of troubleshooting, I decided to perform a tcpdump to see if I could gain any insight based on what was happening on the wire. I didn't notice it at first, but when I looked at the trace again later I noticed the hostname on the certificate that the LTM was presenting was not correct. It was a very small detail that could have easily been missed, but it was the key to identifying the root cause. Having dealt with Device Certificates in the past, I knew that the Device Certificate file was /config/httpd/conf/ssl.crt/server.crt. When I looked in that directory on the filesystem, I found a number of certificates (and, correspondingly, private keys in /config/httpd/conf/ssl.key) that should not have been there.
Problem #6: LTM/GTM SSL Communication Failure

This particular issue is the sole reason that my most recent migration process took 10 hours instead of four. Even if you do have a GTM, you are not likely to encounter it, since it was a result of our own configuration. But I thought I'd include it since it isn't something you'll see documented by F5.

One of the steps in my migration plan was to validate successful LTM/GTM communication with iqdump. When I got to this point in the migration process, I found that iqdump was failing in both directions because of SSL certificate verification, despite having installed the new Trusted Server Certificate on the GTM and Trusted Device Certificates on both the LTM and GTM. After several hours of troubleshooting, I decided to perform a tcpdump to see if I could gain any insight based on what was happening on the wire. I didn't notice it at first, but when I looked at the trace again later I noticed the hostname on the certificate that the LTM was presenting was not correct. It was a very small detail that could have easily been missed, but it was the key to identifying the root cause.

Having dealt with Device Certificates in the past, I knew that the Device Certificate file was /config/httpd/conf/ssl.crt/server.crt. When I looked in that directory on the filesystem, I found a number of certificates (and, correspondingly, private keys in /config/httpd/conf/ssl.key) that should not have been there. I also found that these certificates and keys had been pulled from the configuration on the old hardware. So I removed the extraneous certificates and keys from these directories and restarted the httpd service ("bigstart restart httpd", or "tmsh restart sys service httpd"). After I did that, the LTM presented the correct Device Certificate and LTM/GTM communication was restored.

I'm still not sure to this day how those certificates got there in the first place...
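In hindsight, there is a quicker check than a packet trace for this kind of mismatch: ask httpd which certificate it is actually presenting and compare the subject against the device certificate on disk. The commands below are a sketch, and the management address is just a placeholder, so substitute your own:

# Show the subject, issuer, and validity dates of the certificate served on
# the management address (replace 192.168.1.245 with your management IP).
echo | openssl s_client -connect 192.168.1.245:443 2>/dev/null | openssl x509 -noout -subject -issuer -dates

# Compare against the device certificate file referenced by httpd.
openssl x509 -in /config/httpd/conf/ssl.crt/server.crt -noout -subject -dates

If the subjects don't match each other, or don't match the unit's hostname, you likely have stray certificates in play.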
Rewriting Redirects

While best practices for virtualized web applications may indicate that relative self-referencing links and redirects (those which don't include the protocol or the hostname) are preferable to absolute ones (those which do), many applications load balanced by our gear still send absolute self-references. This drives a fairly common requirement when proxying or virtualizing HTTP applications: to manipulate any redirects the servers may set such that they fully support the intended proxy or virtualization scheme. In some cases the requirement is as simple as changing "http://" to "https://" in every redirect the server sends because it is unaware of SSL offloading. Other applications or environments may require modifications to the host, URI, or other headers. LTM provides a couple of different ways to manage server-set redirects appropriately.

HTTP profile option: "Rewrite Redirects"

The LTM http profile contains the "Rewrite Redirects" option, which supports rewriting server-set redirects to the https protocol with a hostname matching the one requested by the client. The possible settings for the option are "All", "Matching", "Node", and "None".

Rewrite Redirects settings for the http profile:

All
   Effect: Rewrites all HTTP 301, 302, 303, 305, or 307 redirects
   Resulting redirect: https://<requested_hostname>/<requested_uri>
   Use case: Use "All" if all redirects are self-referencing and the application is intended to be secure throughout. You should also use "All" if your application is intended to be secure throughout, even if redirected to another hostname.

Matching
   Effect: Rewrites redirects when the request and the redirect are identical except for a trailing slash. See K14775.
   Resulting redirect: https://<requested_hostname>/<requested_uri>/
   Use case: Use "Matching" to rewrite only courtesy redirects intended to append a missing trailing slash to a directory request.

Node
   Effect: Rewrites all redirects containing pool member IP addresses instead of FQDN
   Resulting redirect: https://<vs_address>/<requested_uri>
   Use case: Use "Node" if your servers send redirects that include the server's own IP address instead of a hostname.

None
   Effect: No redirects are rewritten
   Resulting redirect: N/A
   Use case: Default setting

Note that all options will rewrite the specified redirects to HTTPS, so there must be an HTTPS virtual server enabled on the same address as the HTTP virtual server.
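If you'd rather set this from the command line than the GUI, something along these lines should work on recent TMOS versions. Treat it as a sketch: the profile name is just an example, and the property name (redirect-rewrite) and its accepted values may differ slightly between releases:

# Create an http profile that rewrites all server-set redirects, then verify the setting.
tmsh create ltm profile http http_rewrite_all defaults-from http redirect-rewrite all
tmsh list ltm profile http http_rewrite_all redirect-rewrite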
iRule Options

While these options cover a broad range of applications, they may not be granular enough to meet your needs. For example, you might only want to rewrite the hostname, not the protocol, to support HTTP-only proxying scenarios. You might need to temporarily work around product issues such as those noted in SOL8535/CR89873. In these cases, you can use an iRule that uses the HTTP::is_redirect command to identify server-set redirects and selectively rewrite any part of the Location header using the HTTP::header command with the "replace" option.

Here's an iRule that rewrites just one specific hostname to another, preserving the protocol scheme and URI as set by the server:

when HTTP_RESPONSE {
   if { [HTTP::is_redirect] }{
      HTTP::header replace Location [string map {"A.internal.com" "X.external.com"} [HTTP::header Location]]
   }
}

Here's one that rewrites both relative and absolute redirects to absolute HTTPS redirects, inserting the requested hostname when rewriting the relative redirect to absolute:

when HTTP_REQUEST {
   # save hostname for use in response
   set fqdn_name [HTTP::host]
}
when HTTP_RESPONSE {
   if { [HTTP::is_redirect] }{
      if { [HTTP::header Location] starts_with "/" }{
         HTTP::header replace Location "https://$fqdn_name[HTTP::header Location]"
      } else {
         HTTP::header replace Location "[string map {"http://" "https://"} [HTTP::header Location]]"
      }
   }
}

The string map example could quite easily be adjusted or extended to meet just about any redirect rewriting need you might encounter. (The string map command will accept multiple replacement pairs, which can come in handy if multiple hostnames or directory strings need to be rewritten -- in many cases you can perform the intended replacements with a single string map command.)

Taking it a step further

As I mentioned earlier, redirects are only one place server self-references may be found. If absolute self-referencing links are embedded in the HTTP payload, you may need to build and apply a stream profile to perform the appropriate replacements. An iRule could also be used for more complex payload replacements if necessary. For the ultimate in redirect rewriting and all other things HTTP proxy, I direct your attention to the legendary ProxyPass iRule contributed to the DevCentral codeshare by Kirk Bauer (thanks, Kirk, for a very comprehensive & instructive example!)
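To make the payload-rewriting idea a little more concrete, here is a minimal sketch using the stream filter from an iRule. It assumes a stream profile (it can be left blank) is assigned to the virtual server, which the STREAM:: commands require, and it reuses the hostnames from the earlier example:

when HTTP_REQUEST {
   # Nothing to rewrite in the request direction
   STREAM::disable
   # Ask the server for an uncompressed response so the stream filter can match
   HTTP::header remove "Accept-Encoding"
}
when HTTP_RESPONSE {
   # Only bother rewriting textual content; swap the absolute internal
   # self-reference for the external HTTPS hostname
   if { [HTTP::header value Content-Type] contains "text" } {
      STREAM::expression {@http://A.internal.com@https://X.external.com@}
      STREAM::enable
   }
}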
Hulk Smash Puny SSH!

Sorry, I've been monkeying around on the twitter birds, following one of my favorites, SecurityHulk. https://twitter.com/securityhulk Follow, read, laugh, cry, and ponder the Green Wisdom. Anywho, today's fun to be had is SSH. I'm going to skip the history lesson (here) and summarize. SSH gives us a secure connection into a location, providing confidentiality and integrity. Its primary usage is accessing a remote command line, but you can do so much more with it, from tunneling all web traffic through it (bypassing web filtering) to using X11 for a full graphical experience.

It's called Secure Shell, so that means it's impenetrable, right? Meh, just like any house, the front door can be solid steel, but if you leave the window open, you're hosed. So how can we be safe with SSH on the F5? Let's jump right into the good stuff. (The following is directed toward SSH on the F5 management interface. While these are good general rules, they are extremely applicable to the F5.)

1. Strong Passwords

Why are we still harping on this one?! 'Cause it still is an issue. Once attackers got into a company, bouncing between devices was easy because:

80% Use of weak administrative credentials (SRC: Trustwave GSR2012)

What's a strong password? Tough question. There are all sorts of guidelines for password strength. If you want to go off various entropy-bit standards, then a strong password will have 80 bits of entropy. Or you could require the output of /dev/urandom be used as a password:

dd if=/dev/urandom bs=1 count=15 | base64 -w 0

gives:

lhTiX6b6eNxoZXqugvTs

That's easy to remember, right? In the end, you have to decide what is secure to you. Personally, I recommend the following minimums:

Minimum total characters: 12
Alpha characters: 3
Special characters: 2
Numbers: 1

along with the typical reuse limits (cannot reuse the last 3 passwords), a rule that the password cannot contain the username, and a limited lifetime (6 months, 1 year max).

2. Management Interface SSH on Public Internet - why baD?!

The F5 platform is a critical piece of network infrastructure. Would you expose the management interface of your core routers and switches to the public internet? Please.. Please PLEASE answer no. If you did not answer no, I trust that you've run through a full Risk Analysis of it.

So then, what's a good way to handle the management access? A management domain. Basically, create a separate VLAN for management access. On that VLAN, you can put in a nice set of tight ACLs. Then, you can use the VLAN for all the management-type traffic you need to do. Check out the Ask F5 solution for some more direct data on the management port.

3. Authorized Keys

SSH on the F5 makes use of the authorized keys file, and you can too! The authorized keys file stores the public key of a public/private key pair. Once an admin puts their public key in the file, they can then log on to the system using the corresponding private key.

** IMPORTANT NOTE ** Make sure you've read the Security Advisory regarding an old F5 SSH issue

One thing to think about if you are making use of the authorized keys path is to ensure that it is managed and audited. How do you audit the authorized key files? It's just a string of random garbage, right? People always forget that at the end of the key, there is a comments section.
I recommend putting identifying information for the key in that section:

ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIEAqJ7hlsGK+h8+6f3k1hkEy0h9SfDfoihEJa/2uhcTMXg0b2E/ELnNjeHUC6sl0SR5qQRn2OH6F/3qawClcjrs625hK+wTAdGeg9r/0GCehnGqXR8xTAq1EqStIoLhBtzfe6l0krwsuHe3cHCW+PybFKXE7l5qBjUs8saE53o6mNc= rsa-key-20121010 bob is a monkey

This way, when I do the quarterly audits, I know that that key is for the monkey bob. If monkey bob's access ever needs to be pulled, I know which key is his.

I hope you found a shiny tidbit of information in my Hulk-inspired ramblings. Remember, SSH is our friend! We just want to be sure it's not getting friendly with the wrong people. :)
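If you want a starting point for that quarterly review, a one-liner along these lines prints the key type and the comment field for every entry so each key can be matched to an owner. It's a sketch: the path assumes root's authorized_keys file, so adjust it for other accounts, and entries with option prefixes may need extra handling:

# Field 1 is the key type, field 2 is the key material (skipped here),
# and fields 3 and up are the comment we care about.
grep -v '^#' /root/.ssh/authorized_keys | cut -d' ' -f1,3-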