Troubleshooting TLS Problems With ssldump
Introduction

Transport Layer Security (TLS) is used to secure network communications between two hosts. TLS largely replaced SSL (Secure Sockets Layer) starting in 1999, but many browsers still provide backwards compatibility for SSL version 3. TLS is the basis for securing all HTTPS communications on the Internet. BIG-IP provides the benefit of being able to offload the encryption and decryption of TLS traffic onto a purpose-specific ASIC. This provides performance benefits for the application servers, but it also provides an extra layer for troubleshooting when problems arise. It can be a daunting task to tackle a TLS issue with tcpdump alone. Luckily, there is a utility called ssldump. Ssldump looks for TLS packets and decodes the transactions, then outputs them to the console or to a file. It will display all the components of the handshake, and if a private key is provided it will also decrypt and display the application data. The ability to fully examine communications from the application layer down to the network layer in one place makes troubleshooting much easier.

Note: The user interface of the BIG-IP refers to everything as SSL with little mention of TLS. The actual protocol being negotiated in these examples is TLS version 1.0, which appears as "Version 3.1" in the handshakes. For more information on the major and minor versions of TLS, see the TLS record protocol section of the Wikipedia article.

Overview of ssldump

I will spare you the man page, but here are a few of the options we will be using to examine traffic in our examples:

ssldump -A -d -k <key file> -n -i <capture VLAN> <traffic expression>

-A  Print all fields
-d  Show application data when the private key is provided via -k
-k  Private key file, found in /config/ssl/ssl.key/; the key file name can be located under the client SSL profile
-n  Do not try to resolve PTR records for IP addresses
-i  The capture VLAN name; this is the ingress VLAN for the TLS traffic

The traffic expression is nearly identical to the tcpdump expression syntax. In these examples we will be looking for HTTPS traffic between two hosts (the client and the LTM virtual server). In this case, the expression will be "host <client IP> and host <virtual server IP> and port 443". More information on expression syntax can be found in the ssldump and tcpdump manual pages. (The ssldump manual page can be found by typing 'man ssldump' or online here: <http://www.rtfm.com/ssldump/Ssldump.html>)

A healthy TLS session

When we look at a healthy TLS session we can see what things should look like in an ideal situation. First the client establishes a TCP connection to the virtual server. Next, the client initiates the handshake with a ClientHello. Within the ClientHello are a number of parameters: version, available cipher suites, a random number, and compression methods if available. The server then responds with a ServerHello in which it selects the strongest cipher suite, the version, and possibly a compression method. After these parameters have been negotiated, the server will send its certificate, completing the ServerHello. Finally, the client will respond with the PreMasterSecret in the ClientKeyExchange and each side will send a one-byte ChangeCipherSpec agreeing on their symmetric key algorithm to finalize the handshake. The client and server can now exchange secure data via their TLS session until the connection is closed.
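For reference, here is a sketch of the kind of ssldump invocation used to produce the captures below. The key file name, VLAN name, and addresses are placeholders for illustration; substitute the values from your own client SSL profile and network:

# Run from the BIG-IP command line (names and addresses are hypothetical):
# /config/ssl/ssl.key/example.com.key is the key referenced by the client SSL profile,
# and "external" is the ingress VLAN for client traffic
ssldump -A -d -n -k /config/ssl/ssl.key/example.com.key -i external "host 10.0.0.10 and host 10.0.0.20 and port 443"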
If all goes well, this is what a "clean" TLS session should look like:

New TCP connection #1: 10.0.0.10(57677) <-> 10.0.0.20(443)
1 1  0.0011 (0.0011)  C>S  Handshake  ClientHello
        Version 3.1
        cipher suites
        TLS_DHE_RSA_WITH_AES_256_CBC_SHA
        [more cipher suites]
        TLS_RSA_EXPORT_WITH_RC4_40_MD5
        Unknown value 0xff
        compression methods
        unknown value
        NULL
1 2  0.0012 (0.0001)  S>C  Handshake  ServerHello
        Version 3.1
        session_id[0]=
        cipherSuite         TLS_RSA_WITH_AES_256_CBC_SHA
        compressionMethod   NULL
1 3  0.0012 (0.0000)  S>C  Handshake  Certificate
1 4  0.0012 (0.0000)  S>C  Handshake  ServerHelloDone
1 5  0.0022 (0.0010)  C>S  Handshake  ClientKeyExchange
1 6  0.0022 (0.0000)  C>S  ChangeCipherSpec
1 7  0.0022 (0.0000)  C>S  Handshake  Finished
1 8  0.0039 (0.0016)  S>C  ChangeCipherSpec
1 9  0.0039 (0.0000)  S>C  Handshake  Finished
1 10 0.0050 (0.0010)  C>S  application_data
1    0.0093 (0.0000)  S>C  TCP FIN
1    0.0093 (0.0000)  C>S  TCP FIN

Scenario 1: Virtual server missing a client SSL profile

The client SSL profile defines what certificate and private key to use, a key passphrase if needed, allowed ciphers, and a number of other options related to TLS communications. Without a client SSL profile, a virtual server has no knowledge of any of the parameters necessary to create a TLS session. After you've configured a few hundred HTTPS virtuals this configuration step becomes automatic, but most of us mortals have missed this step at one point or another and left ourselves scratching our heads. We'll set up a test virtual that has all the necessary configuration options for an HTTPS profile, except for the omission of the client SSL profile. The client will open a connection to the virtual on port 443, a TCP connection will be established, and the client will send a ClientHello. Normally the server would then respond with a ServerHello, but in this case there is no response, and after some period of time (5 minutes is the default timeout for the browser) the connection is closed. This is what the ssldump output looks like for a missing client SSL profile:

New TCP connection #1: 10.0.0.10(46226) <-> 10.0.0.20(443)
1 1  0.0011 (0.0011)  C>SV3.1(84)  Handshake  ClientHello
        Version 3.1
        random[32]=
          4c b6 3b 84 24 d7 93 7f 4b 09 fa f1 40 4f 04 6e
          af f7 92 e1 3b a7 3a c2 70 1d 34 dc 9d e5 1b c8
        cipher suites
        TLS_DHE_RSA_WITH_AES_256_CBC_SHA
        [a number of other cipher suites]
        TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
        TLS_RSA_EXPORT_WITH_RC4_40_MD5
        Unknown value 0xff
        compression methods
        unknown value
        NULL
1    299.9883 (299.9871)  C>S  TCP FIN
1    299.9883 (0.0000)    S>C  TCP FIN

Scenario 2: Client and server do not share a common cipher suite

This is a common scenario when really old browsers try to connect to servers with modern cipher suites. We have purposely configured our SSL profile to accept only one cipher suite (TLS_RSA_WITH_AES_256_CBC_SHA in this case). When we try to connect to the virtual offering only a 128-bit cipher suite (TLS_RSA_WITH_AES_128_CBC_SHA), the connection is immediately closed with no ServerHello from the virtual server. The differentiator here, while small, is the quick closure of the connection and the 'TCP FIN' that comes from the server. This is unlike the behavior of the missing SSL profile, because the server initiates the connection teardown and there is no connection timeout. The differences, while subtle, hint at the details of the problem:

New TCP connection #1: 10.0.0.10(49342) <-> 10.0.0.20(443)
1 1  0.0010 (0.0010)  C>SV3.1(48)  Handshake  ClientHello
        Version 3.1
        random[32]=
          4c b7 41 87 e3 74 88 ac 89 e7 39 2d 8c 27 0d c0
          6e 27 da ea 9f 57 7c ef 24 ed 21 df a6 26 20 83
        cipher suites
        TLS_RSA_WITH_AES_128_CBC_SHA
        Unknown value 0xff
        compression methods
        unknown value
        NULL
1    0.0011 (0.0000)  S>C  TCP FIN
1    0.0022 (0.0011)  C>S  TCP FIN
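When a handshake dies this way, it helps to compare the suites the client offered in its ClientHello against what the client SSL profile is willing to accept. A minimal sketch of that check from the BIG-IP shell, assuming a profile named my_clientssl (the profile name is hypothetical, and tmsh syntax varies slightly across TMOS versions):

# Show the cipher string configured on the client SSL profile
tmsh list ltm profile client-ssl my_clientssl ciphers

# Expand a cipher string into the individual suites TMM will actually accept;
# AES256-SHA is the OpenSSL-style name for TLS_RSA_WITH_AES_256_CBC_SHA
tmm --clientciphers 'AES256-SHA'

If none of the suites listed in the ClientHello appear in that expanded list, you have found your handshake failure.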
Conclusion

Troubleshooting TLS can be daunting at first, but an understanding of the TLS handshake can make troubleshooting much more approachable. We cannot exhibit every potential problem in this tech tip. However, we hope that walking through some of the more common examples will give you the tools necessary to troubleshoot other issues as they arise. Happy troubleshooting!
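One last client-side trick that pairs well with ssldump: you can reproduce a failure like Scenario 2 on demand by forcing a single cipher suite from the client with openssl s_client. The address below is a placeholder, and AES128-SHA is simply OpenSSL's name for TLS_RSA_WITH_AES_128_CBC_SHA:

# Offer only one suite; if the virtual's profile will not accept it,
# the handshake fails immediately, mirroring the quick server-side FIN above
openssl s_client -connect 10.0.0.20:443 -cipher AES128-SHA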
HTTP Basic Access Authentication iRule Style

I started working on an administrator control panel for my previous Small URL Generator Tech Tip (part 1 and part 2) and realized that we probably didn't want to make our Small URL statistics and controls viewable by everyone. That led me to make a decision as to how to secure this information. Why not the old, but venerable, HTTP basic access authentication? It's simple, albeit somewhat insecure, but it works well and is easy to deploy.

A little background on how this mechanism works: first, a user places a GET request for a page without providing authentication. The server then responds with a "401 Authorization Required" status code and the authentication type and realm specified by the WWW-Authenticate header. In our case we will be using basic access authentication and we've arbitrarily named our realm "Secured Area". The user will then provide a username and password. The client will then join the username and password with a colon and apply base-64 encoding to the concatenated string. The client then presents this to the server via the Authorization header. If the credentials are verified by the server, the client will then be granted access to the "Secured Area".

Authentication Required for "Secured Area"

This transaction normally takes place between the client and application server, but we can emulate this functionality using an iRule. The mechanism is rather simple and easy to implement. We need to look for the client Authorization header and, if present, verify the credentials. If the credentials are valid, grant access to the virtual server's content; if not, display the authentication box and then repeat the process. On the BIG-IP side, we are storing the username and MD5-digested password in a data group (which we aptly named authorized_users). While this is not as secure as a salted hash, it does provide some security beyond storing the credentials in plain text on our BIG-IP. Once we took these elements into consideration, this is the iRule we developed:

when HTTP_REQUEST {
    binary scan [md5 [HTTP::password]] H* password

    if { [class lookup [HTTP::username] $::authorized_users] equals $password } {
        log local0. "User [HTTP::username] has been authorized to access virtual server [virtual name]"

        # Insert iRule-based application code here if necessary
    } else {
        if { [string length [HTTP::password]] != 0 } {
            log local0. "User [HTTP::username] has been denied access to virtual server [virtual name]"
        }

        HTTP::respond 401 WWW-Authenticate "Basic realm=\"Secured Area\""
    }
}

There are a couple different ways this iRule could be implemented. It can be applied as-is directly to any HTTP virtual and begin protecting the virtual's contents. It can also be used to secure an iRule-based application, in which case the application code would need to be encapsulated where the comment is located in the above code. I will be publishing a more functional example of this in the near future, but you can start using it now if you have such a necessity.
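If you prefer the command line to the GUI for wiring this up, a minimal tmsh sketch follows; the virtual server and iRule names are hypothetical, and the exact syntax may differ slightly on older TMOS versions:

# Attach the saved authentication iRule to an existing HTTP(S) virtual server
tmsh modify ltm virtual vs_secure_app rules { basic_auth_rule }

# Confirm the assignment
tmsh list ltm virtual vs_secure_app rules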
Earlier we discussed the inherent insecurity of basic access authentication due to the username and password being transmitted in plain text. To combat this, you'll probably want to conduct this transaction over an SSL-secured connection. Additionally, you'll want to generate the passwords as MD5 hashes before adding them to your data group. Here are a few ways to accomplish this:

Bash

% echo -n "password" | md5sum
=> 5f4dcc3b5aa765d61d8327deb882cf99  -

Perl

% perl -e 'use Digest::MD5 "md5_hex"; print md5_hex("password")'
=> 5f4dcc3b5aa765d61d8327deb882cf99

Ruby (using irb)

irb(main):001:0> require "md5"
=> true
irb(main):002:0> MD5::md5("password")
=> #<MD5: 5f4dcc3b5aa765d61d8327deb882cf99>
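Once you have a digest, it needs to become the value of a record in the authorized_users data group the iRule reads. A hypothetical tmsh sketch follows; the user name is made up, the hash is the MD5 of "password" from the examples above, and data-group syntax varies a bit between TMOS versions (on older releases you may find it easier to add the record through the GUI instead):

tmsh create ltm data-group internal authorized_users type string records add { jdoe { data "5f4dcc3b5aa765d61d8327deb882cf99" } }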
While HTTP basic access authentication may not be the best authentication method for every case, it definitely has its advantages. It is easy to deploy (and even easier via an iRule), provides basic authentication without having to configure or depend on an external authentication service, and is supported by any browser developed in the last 15 years. Through the use of different data groups you can easily separate access to different virtuals or provide a simple SSO (Single Sign-On) solution for a number of virtual servers. We hope you find this tidbit of code to be useful in your environment. Stay tuned for more extensive examples and usages of this tech tip in the near future!

Writing to and rotating custom log files

Sometimes I need to log information from iRules to debug something. So I add a simple log statement, like this:

when HTTP_REQUEST {
    if { [HTTP::uri] equals "/secure" } {
        log local0. "[IP::remote_addr] attempted to access /secure"
    }
}

This is fine, but it clutters up the /var/log/ltm log file. Ideally I want to log this information into a separate log file. To accomplish this, I first change the log statement to incorporate a custom string - I chose the string "##":

when HTTP_REQUEST {
    if { [HTTP::uri] equals "/secure" } {
        log local0. "##[IP::remote_addr] attempted to access /secure"
    }
}

Now I have to customize syslog to catch this string and send it somewhere other than /var/log/ltm. I do this by customizing syslog with an include statement:

tmsh modify sys syslog include '"
filter f_local0 {
    facility(local0) and not match(\": ##\");
};
filter f_local0_customlog {
    facility(local0) and match(\": ##\");
};
destination d_customlog {
    file(\"/var/log/customlog\" create_dirs(yes));
};
log {
    source(local);
    filter(f_local0_customlog);
    destination(d_customlog);
};
"'

save the configuration change:

tmsh save / sys config

and restart the syslog-ng service:

tmsh restart sys service syslog-ng

The included "f_local0" filter overrides the built-in "f_local0" syslog-ng filter, since the include statement will be the last one to load. The "not match" statement is a regex which will prevent any statement containing a "##" string from being written to the /var/log/ltm log. The next filter, "f_local0_customlog", catches the "##" log statements, and the remaining include statements handle the job of sending them to a new destination, which is a file I chose to name "/var/log/customlog". You may be asking yourself why I chose to match the string ": ##" instead of just "##". It turns out that specifying just "##" also catches AUDIT log entries which (in my configuration) are written every time an iRule with the string "##" is modified. But only the log statement from the actual iRule itself will contain the ": ##" string. This slight tweak keeps those two entries separated from each other. So now I have a way to force my iRule logging statements to a custom log file. This is great, but how do I incorporate this custom log file into the log rotation scheme like most other log files? The answer is with a logrotate include statement:

tmsh modify sys log-rotate syslog-include '"
/var/log/customlog {
    compress
    missingok
    notifempty
}"'

and save the configuration change:

tmsh save / sys config

Logrotate is kicked off by cron, and the change should get picked up the next time it is scheduled to run. And that's it. I now have a way to force iRule log statements to a custom log file which is rotated just like every other log file. It's important to note that you must save the configuration with "tmsh save / sys config" whenever you execute an include statement. If you don't, your changes will be lost the next time your configuration is loaded. That's why I think this solution is so great - it's visible in the bigip_sys.conf file, not like customizing configuration files directly. And it's portable.
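For what it's worth, nothing about this technique is specific to HTTP_REQUEST - any iRule log statement carrying the "##" marker will be routed to the custom log by the syslog-ng filter above. A hypothetical example that records load-balancing decisions:

when LB_SELECTED {
    # The leading ## sends this entry to /var/log/customlog instead of /var/log/ltm
    log local0. "##[IP::client_addr] was sent to [LB::server addr]:[LB::server port]"
}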
Session Table Control With iRules

In my previous article titled Session Table Exporting With iRules, I posted an example iRule that will allow you to export your session table entries for archival purposes. If you are reading this and have no clue what the session table is, you'll want to read the series titled "The Table Command" where we walk through all the ins and outs of accessing the session data table. As I mentioned on a recent podcast, I've been planning on adding to that tech tip by including the ability to "import" data back into the session table. Well, this article includes that, and much more…

I started writing the code and found myself asking a bunch of "what-ifs". What if…

I could look at the session table data…
I could delete session table entries…
I could add session table entries…
I could delete session tables…

Instead of drawing this out into multiple tech tips, I decided to go ahead and push forward and include it all in this one. Previously Jason wrote the "Restful Access to BIG-IP subtables" article where he built a web service for access into the session tables. This is great for those programmer geeks, but I figured I'd take a different approach by building a full-blown GUI to interact with the session table, and this turned out to be pretty easy with iRules!

The iRule

The entire application was written in a single iRule. By assigning this iRule to a virtual server, you can now make a web request to "http://virtual_ip/subtables" and you will have access to the application. The virtual server can be a production virtual server hosting live applications or a secondary one with no active pools behind it. For all requests not going to "/subtables", it will ignore the request and pass control to additional iRules and then on to the assigned pool of servers.

The Application

The logic in this iRule is wrapped around an application name defined in the "APPNAME" variable. You can change this to whatever you want, but in my example I've used the name of "subtables". If the URI starts with "/subtables" then the below application logic is processed. Otherwise, the request is allowed to continue on to the backend pool.

when HTTP_REQUEST {
    set APPNAME "subtables";

    set luri [string tolower [HTTP::uri]]
    set app [getfield $luri "/" 2];
    set cmd [getfield $luri "/" 3];
    set tname [URI::decode [getfield [HTTP::uri] "/" 4]];
    set arg1 [URI::decode [getfield [HTTP::uri] "/" 5]];
    set arg2 [URI::decode [getfield [HTTP::uri] "/" 6]];
    set resp "";

    set send_response 1;

    if { $app equals $APPNAME } {
        # Application Logic
    }
}

Command Processing

A little way down in the application logic is the command processor. There are 4 public commands: edit, export, import, and delete. These will be described below. There are also a couple of hidden commands that you will have to dig through the source to look at.

#------------------------------------------------------------------------
# Process commands
#------------------------------------------------------------------------
switch $cmd {
    "edit" {
        # Process edit command
    }
    "export" {
        # Process export command
    }
    "import" {
        # Process import command
    }
    "delete" {
        # Process delete command
    }
}

Command: edit

The "edit" command will allow you to view the contents of a subtable. If no table name is specified, a form is generated prompting the user to enter a valid table name.
If a table name is supplied that currently doesn’t exist on the system, it will act like a create method allowing you to create a new subtable. After that request is made, the else logic is processed and the edit table is presented along with “X” links after each record to allow you to delete that record, and a final row of edit boxes allowing you to insert a new record into the table. 1: log local0. "SUBCOMMAND: edit"; 2: if { $tname eq "" } { 3: append resp $TABLENAME_FORM 4: } else { 5: append resp ""; 35: append resp "\n"; 36: 37: append resp "\n"; 38: append resp "'$tname' Table\n"; 39: append resp "Key Value\n"; 40: foreach key [table keys -subtable $tname] { 41: append resp "$key"; 42: append resp "[table lookup -subtable $tname $key]"; 43: append resp "\[X\]"; 44: append resp "\n"; 45: } 46: # Add insertion fields 47: append resp ""; 48: append resp ""; 49: append resp "\[+\]"; 50: append resp "\n"; 51: append resp ""; 55: } Command: export The export command was taken from my previous article on exporting the session table. It essentially iterates through all the keys in the subtable and queries their values inserting them into a comma separated list. The file is then returned via the HTTP::respond command by changing the Content-Type to “text/csv” and specifying a unique file name with the Content-Disposition header. 1: log local0. "SUBCOMMAND: export"; 2: if { $tname eq "" } { 3: append resp $TABLENAME_FORM 4: } else { 5: set csv "Table,Key,Value\n"; 6: foreach key [table keys -subtable $tname] { 7: append csv "${tname},${key},[table lookup -subtable $tname $key]\n"; 8: } 9: set filename [clock format [clock seconds] -format "%Y%m%d_%H%M%S_${tname}.csv"] 10: log local0. "Responding with filename $filename..."; 11: 12: set disp "attachment; filename=${filename}"; 13: HTTP::respond 200 Content $csv "Content-Type" "text/csv" "Content-Disposition" $disp; 14: return; 15: } Command: import This logic is the opposite action from the export command. For the first request to the app, the form is returned containing the fileinput HTML form input item. The browser will then submit a POST request that has to be parsed out. The HTTP::collect call is made on the entire POST body and processing continues in the HTTP_REQUEST_DATA event. 1: if { $tname eq "" } { 2: append resp $FILEINPUT_FORM; 3: } else { 4: append resp "SUBMITTED FILE..."; 5: if { [HTTP::header exists "Content-Length"] } { 6: HTTP::collect [HTTP::header "Content-Length"]; 7: set send_response 0; 8: } 9: } Processing the File Upload POST Data In the previous code block, the HTTP::collect call is made. When the data is buffered up, the HTTP_REQUEST_DATA event is triggered and we can perform the input. I included some validation in here to make sure this is for the current application request and the import command was requested. The Payload is then parsed into lines and the parsing fun begins. File Upload POST Request 1: POST /subtables/import/file HTTP/1.1 2: Host: bigip 3: User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12 4: Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 5: Accept-Language: en-us,en;q=0.5 6: Accept-Encoding: gzip,deflate 7: Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 8: Keep-Alive: 115 9: Connection: keep-alive 10: Referer: http://bigip/subtables/import/ 11: Cookie: ... 
12: Content-Type: multipart/form-data; boundary=---------------------------190192876111056 13: Content-Length: 260 14: 15: -----------------------------190192876111056 16: Content-Disposition: form-data; name="filedata"; filename="20101103_070007_Foo.csv" 17: Content-Type: application/vnd.ms-excel 18: 19: Table,Key,Value 20: Foo,1,Hi 21: Foo,2,There 22: 23: -----------------------------190192876111056-- First, let’s look at the post request and then hopefully the iRule logic will make more sense. First we have the “Content-Type” header that contains the boundary for the various form elements. Once we’ve got past all the headers (indicated by the first empty line), I’ll build a little state engine where I set whether I’m “in” or “out” of a boundary. From within the boundary, I check the Content-Disposition for the “filedata” parameter. This logic is then looped through until we get to the contents (again delimited by an empty line after the part headers. Each of those lines are then parsed and the “Table”, “Key”, and “Value” is extracted and the associated “table set” command is called to insert the value into the specified table. When the processing is complete, a status message is sent to the client indicating the number of records added. 1: when HTTP_REQUEST_DATA { 2: 3: log local0. "HTTP_REQUEST_DATA -> app $app"; 4: if { $app eq $APPNAME } { 5: switch $cmd { 6: "import" { 7: set payload [HTTP::payload] 8: 9: #------------------------------------------------------------------------ 10: # Extract Boundary from "Content-Type" header 11: #------------------------------------------------------------------------ 12: set ctype [HTTP::header "Content-Type"]; 13: set tokens [split $ctype ";"]; 14: set boundary ""; 15: foreach {token} $tokens { 16: set t2 [split [string trim $token] "="]; 17: set name [lindex $t2 0]; 18: set val [lindex $t2 1]; 19: if { $name eq "boundary" } { 20: set boundary $val; 21: } 22: } 23: 24: #------------------------------------------------------------------------ 25: # Process POST data 26: #------------------------------------------------------------------------ 27: set in_boundary 0; 28: set in_filedata 0; 29: set past_headers 0; 30: set process_data 0; 31: set num_lines 0; 32: if { "" ne $boundary } { 33: 34: log local0. "Boundary '$boundary'"; 35: set lines [split [HTTP::payload] "\n"] 36: foreach {line} $lines { 37: 38: set line [string trim $line]; 39: log local0. "LINE: '$line'"; 40: 41: if { $line contains $boundary } { 42: 43: if { $in_boundary == 0 } { 44: #---------------------------------------------------------------- 45: # entering boundary 46: #---------------------------------------------------------------- 47: log local0. "Entering boundary"; 48: set in_boundary 1; 49: set in_filedata 0; 50: set past_headers 0; 51: set process_data 0; 52: } else { 53: #---------------------------------------------------------------- 54: # exiting boundary 55: #---------------------------------------------------------------- 56: log local0. "Exiting boundary"; 57: set in_boundary 0; 58: set in_filedata 0; 59: set past_headers 0; 60: set process_data 0; 61: } 62: } else { 63: 64: #------------------------------------------------------------------ 65: # in boundary so check for file content 66: #------------------------------------------------------------------ 67: if { ($line starts_with "Content-Disposition: ") && 68: ($line contains "filedata") } { 69: log local0. "In Filedata"; 70: set in_filedata 1; 71: continue; 72: } elseif { $line eq "" } { 73: log local0. 
"Exiting headers"; 74: set past_headers 1; 75: } 76: } 77: 78: if { $in_filedata && $process_data } { 79: log local0. "Appending line"; 80: 81: if { ($num_lines > 0) && ($line ne "") } { 82: #---------------------------------------------------------------- 83: # Need to parse line and insert into table 84: # line is format : Name,Key,Value 85: #---------------------------------------------------------------- 86: set t [getfield $line "," 1]; 87: set k [getfield $line "," 2]; 88: set v [getfield $line "," 3] 89: 90: if { ($t ne "") && ($k ne "") && ($v ne "") } { 91: log local0. "Adding table '$t' entry '$k' => '$v'"; 92: table set -subtable $t $k $v indefinite indefinite 93: } 94: } 95: incr num_lines; 96: } 97: 98: if { $past_headers } { 99: log local0. "Begin processing data"; 100: set process_data 1; 101: } 102: } 103: } 104: incr num_lines -2; 105: append resp "Successfully imported $num_lines table records"; 106: append resp ""; 107: HTTP::respond 200 Content $resp; 108: } 109: } 110: } 111: } Command: delete And, as a final step, I included a “delete” command to allow you to delete all the records from the specified table. This is done by using the “table delete” command. 1: log local0. "SUBCOMMAND: delete"; 2: if { $tname eq "" } { 3: append resp $TABLENAME_FORM 4: } else { 5: table delete -subtable $tname -all; 6: append resp "Subtable $tname successfully deleted"; 7: } The Demo Below is a little walk through of the application to give you some context of what it looks like to the end user. The Code The full code for this article can be found in the iRules CodeShare under SessionTableControl.1.9KViews0likes1CommentAutomated Gomez Performance Monitoring
Gomez provides an on-demand platform that you use to optimize the performance, availability, and quality of your Web and mobile applications. It identifies business-impacting issues by testing and measuring Web applications from the "outside-in" — across your users, browsers, mobile devices, and geographies — using a global network of 100,000+ locations, including the industry's only true Last Mile. F5 and Gomez have partnered to provide users of both technologies an easy way to integrate the two together. In this article I will focus on the Web Performance Management component of the Gomez Platform. Key features of their Web Performance Management solution include:

- Rapidly detect and troubleshoot problems that directly impact your customers.
- Identify the root cause of issues using real-user passive monitoring.
- Ensure web application availability and performance for key user segments.
- Optimize web application performance.
- Improve web experiences across mobile, streaming, and web applications.
- Reduce downtime and response times with detailed object-level, page, connection, and host data across multiple browsers.
- Make smart technology investments and manage service providers effectively.
- Quantify the benefits of technology investments such as Content Delivery Networks (CDNs), virtualization, and infrastructure changes.
- Ensure service level agreement compliance.

All of these features are made possible by simply inserting a small section of client-side JavaScript code into your web pages. Now, for a small site with only a couple of pages, this is fairly simple to do, but for sites like DevCentral with many thousands of pages, it is very cumbersome and requires developer support for integration. In addition to that, application-level changes typically require a testing cycle before being deployed into production.

The Client-side Script

The Gomez script looks something like this:

<SCRIPT LANGUAGE="JavaScript"><!--
var gomez={gs: new Date().getTime(), acctId:'XXXXXX', pgId:'', grpId:''};
//--></SCRIPT>
<script src="/js/axfTag.js" type="text/javascript"></script>

The key configuration components are the following variables:

acctId – Your Gomez Application id, used to link client page requests to your Gomez account.
pgId – An optional page identifier that allows you to give a page, or set of pages, a unique user-friendly name in the reporting.
grpId – An optional group identifier that you can use to further identify your page views. This is highly useful in multi-datacenter deployments when you would like to differentiate the servers serving up your applications.

Enter iRules

With our embedded scripting technology iRules on the BIG-IP, deploying this client script code across all of your applications becomes very simple. Since all of your application traffic is traveling through the BIG-IP, it makes perfect sense to "inject" this code at the network layer. That is exactly what I'm going to do with this solution.

Prerequisites

Make sure your Virtual Server is configured as follows:

- Ensure that your Virtual Server has the default (or a custom) "stream" profile assigned in its configuration.
- Your Virtual Server must also have an associated HTTP Profile with the "Response Chunking" property set to "Rechunk".

Handling The Request

You'll notice I utilized the ability to serve up content directly from the iRule to eliminate the need to deploy the Gomez bootstrap code to a separate server.
Also, by implementing it this way instead of injecting the code directly into the page response, allows the browser to cache that data and reduce the size of your page request. 1: when HTTP_REQUEST { 2: 3: set GOMEZ_APP_ID "XXXXXX"; 4: set GOMEZ_DEBUG 0; 5: set gws_luri [string tolower [HTTP::uri]] 6: 7: if { $gws_luri eq "/js/axftag.js" } { 8: 9: # ----------------------------------------------------------------------- 10: # Serve up Gomez bootstrap javascript 11: # ----------------------------------------------------------------------- 12: HTTP::respond 200 content [string map {LBR \{ RBR \}} {/*Gomez ...}] "Content-Type" "application/x-javascript"; 13: 14: } else { 15: 16: # --------------------------------------------------------------------- 17: # Don't allow response data to be chunked 18: # --------------------------------------------------------------------- 19: if { [HTTP::version] eq "1.1" } { 20: # ------------------------------------------------------------------- 21: # Force downgrade to HTTP 1.0, but still allow keep-alive connections. 22: # Since HTTP 1.1 is keep-alive by default, and 1.0 isn't, 23: # we need make sure the headers reflect the keep-alive status. 24: # ------------------------------------------------------------------- 25: if { [HTTP::header is_keepalive] } { 26: HTTP::header replace "Connection" "Keep-Alive"; 27: } 28: } 29: } 30: } So, for request for the Gomez Bootstrap code (“/js/axftag.js”), the iRule will serve it up. Otherwise, it will allow the request to pass through to the backend server. Also, you will need to update your GOMEZ_APP_ID variable with your personal Application Id supplied to you from your Gomez account. Injecting The Client-Side JavaScript Now that the request has gone through to the backend application, we are ready to handle the response from the servers. We are going to utilize the Stream profile to do the actual content replacement. It’s fast and very easy to configure. Since we only want to inject the client code in valid web page responses, I’ve added a check for Content-Type “text/html” as well as a success HTTP Response code (200). Then we’ll create a variable for the Gomez client code with the embedded Application Id and set the Stream expression for the Stream Filter to insert it right before the end of the Head element (or Body element if the Head element doesn’t exist in the response). 1: when HTTP_RESPONSE { 2: 3: # ------------------------------------------------------------------------- 4: # Stream filter is disabled by default 5: # ------------------------------------------------------------------------- 6: STREAM::disable; 7: 8: # ------------------------------------------------------------------------- 9: # Only process stream replacement for a valid response and content type is html. 10: # ------------------------------------------------------------------------- 11: if { ([HTTP::header Content-Type] starts_with "text/html") && ([HTTP::status] == 200) } { 12: 13: set gomez_client [subst {<SCRIPT LANGUAGE="JavaScript"><!-- 14: ar gomez={gs: new Date().getTime(), acctId:'$::GOMEZ_APP_ID', pgId:'', grpId:''}; 15: /--></SCRIPT> 16: script src="/js/axfTag.js" type="text/javascript"></script> 17: ]; 18: 19: if { $::GOMEZ_DEBUG > 1 } { log local0. "Adding Gomez JavaScript"; } 20: if { $::GOMEZ_DEBUG > 2 } { log local0. 
"$gomez_client"; } 21: 22: # ----------------------------------------------------------------------- 23: # Set the stream replacement expression 24: # ----------------------------------------------------------------------- 25: set stream_expression "@</\[Hh]\[Ee]\[Aa]\[Dd]>@$gomez_client</head>@@<\[Bb]\[Oo]\[Dd]\[Yy]>@$gomez_client<body>@"; 26: STREAM::expression $stream_expression; 27: 28: if { $::GOMEZ_DEBUG > 2 } { log local0. "Current Stream Expression: $stream_expression"; } 29: 30: # ----------------------------------------------------------------------- 31: # Enable the stream filter for this request. 32: # ----------------------------------------------------------------------- 33: STREAM::enable; 34: 35: } else { 36: if { $::GOMEZ_DEBUG > 1 } { log local0. "Ignoring type [HTTP::header Content-Type], status=[HTTP::status]" } 37: } 38: I’ve included the STREAM_MATCHED event in the case that the Stream profile finds multiple matches. After the first match, the Stream profile is disabled so that only one replacement occurs. 1: when STREAM_MATCHED { 2: if { $::GOMEZ_DEBUG > 0 } { log local0. "URI: $gws_luri matched [STREAM::match]"; } 3: 4: # ------------------------------------------------------------------------- 5: # We've found a match, so disable the stream profile for all subsequent 6: # matches in the response. 7: # ------------------------------------------------------------------------- 8: STREAM::disable; 9: } Conclusion With this simple iRule, you can now inject the Gomez client side JavaScript code in all of your applications without effecting your application teams. In a future article, I will discuss how you can further customize this iRule by adding support for their Page and Group identifiers so if you want a little more granularity in the reporting, stay tuned. Download You can download the full script in the iRules CodeShare under GomezInjection1.3KViews0likes2CommentsSession Table Exporting With iRules
If you have been following the iRules Tech Tips here on DevCentral lately, you’ll see more and more use of the Session table for data storage and retrieval. Colin recently put up a few articles around building Heatmaps with iRules. In those examples, he uses the session table to store all his geolocation data for later reporting. Heatmaps, iRule Style – Part 1 Heatmaps, iRule Style – Part 2 Heatmaps, iRule Style – Part 3, URL Filtering Heatmaps, iRule Style – Part 4, Meaningful Numbers And George has been having fun with the session table as well in a few of his recent articles Referral Tracking With iRules Small URL Generator Part 1 Small URL Generator Part 2 Not sure how the table command works? There is a great series of articles on the Table Command giving in depth details on how it works and how to use it effectively. The Problem The Session table is a great and wonderful thing, but it does have one weakness – it resides in memory and is not persistent across server restarts. If you have a HA-pair of servers, the session data will be replicated across them, but there will be those edge cases where you need to take them both down and will lose anything you have stored in the session table. You may also want to analyze that data in an external program. There needs to be some way of archiving that data off of the BIG-IP! The Solution Well, now there is a way – thanks to iRules. I’m going to limit this article to exporting “subtable” data as most of the examples we are seeing now-a-days are focused on using subtables to segment the data within the session table. Also, it doesn’t hurt that there is a nifty “table keys” command to return all the entries in a specified subtable. The solution is actually quite simple and will qualify for Colin’s 20-Lines-Or-Less series. 1: when HTTP_REQUEST { 2: switch -glob [string tolower [HTTP::uri]] { 3: "/exporttable/*" { 4: set csv "Table,Key,Value\n"; 5: set tname [getfield [HTTP::uri] "/" 3] 6: foreach key [table keys -subtable $tname] { 7: set val [table lookup -subtable $tname $key]; 8: append csv "$tname,$key,$val\n"; 9: } 10: set filename [clock format [clock seconds] -format "%Y%m%d_%H%M%S_${tname}.csv"] 11: HTTP::respond 200 Content $csv \ 12: "Content-Type" "text/csv" \ 13: "Content-Disposition" "attachment: filename=${filename}"; 14: } 15: } 16: } The iRule looks for a request on the virtual server to the URL “http://hostname/exporttable/tablename” The sections of the URI is split apart and the tablename portion is removed with the getfield command. At this point, I call the “table keys ” sub-command to get a list of all the keys in that sub-table. A variable is created to store the resulting output. In this example, I opted to go with a simple Comma Separated Values (CSV) format but it would be trivial to convert this into XML or any other format you care to use. The list of keys is then iterated through with the “table lookup” sub-command and the resulting record is appended to the output. A unique file name is created with the TCL “clock” command to include the date and time along with the requested table name. Finally, the output is returned to the client with the correct Mime type of “text/csv” as well as a Content-Disposition header to tell the browser the file name as well as indicating that it should attempt to save it to disk. Fancying It Up A Bit I could have stopped there with the archiving, but I’m going to go a step further and add a user interface to this export iRule. 
1: when HTTP_REQUEST { 2: switch -glob [string tolower [HTTP::uri]] { 3: "/exporttable" { 4: HTTP::respond 200 Content { 5: <html><head><title>iRule Table Exporter</title></head> 6: <script language="JavaScript"><!-- 7: function SubmitForm() 8: { 9: var submit = false; 10: var value = document.getElementById("table_name"); 11: if ( null != value ) 12: { 13: if ( "" != value.value ) 14: { 15: document.export_table_form.action = "/exporttable/" + value.value; 16: submit = true; 17: } 18: else 19: { 20: window.alert("Please Enter a table name"); 21: value.focus(); 22: } 23: } 24: return submit; 25: } 26: //--></script> 27: <body><h1>iRule Table Exporter</h1> 28: <form method="get" name="export_table_form" action=""> 29: <table border='1'> 30: <tr> 31: <td>Table Name</td> 32: <td><input type="text" Id="table_name" value=""></td> 33: <td><input type="submit" value="Submit" onclick="javascript:return SubmitForm()"></td> 34: </tr> 35: </table> 36: </form> 37: } 38: } 39: }

Now, if the user requests the URL http://hostname/exporttable, a form will be displayed allowing the user to enter the requested table name and then click submit to request the downloaded archive. Since I wanted the format to be in the URL and not the querystring parameters, I had to do some JavaScript fun to manipulate the "action" for the HTML form (in case you were wondering what the SubmitForm function was for). Be on the lookout for an upcoming article where I'll illustrate how to reverse this article and import an archived file back into the session table.

The Full Example

You will have to add both sections above together to get the fully functional iRule, or you can check it out in the iRules CodeShare under SessionTableExport.
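If you would rather script the archive job than click through the form, the export URL works from any HTTP client. A hypothetical example with curl (the virtual server address and subtable name are placeholders):

# Save the "users" subtable to a local CSV file
curl -o users_backup.csv http://10.0.0.20/exporttable/users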
Getting Started With Ruby and iControl

Here on DevCentral we've released libraries for a number of the big languages from Java and Perl to PowerShell. Up until now there has not been much love for Ruby. Well, that's all about to change: enter the new iControl Ruby Library. This project is a work in progress and the library is a mere 48 lines long as of this initial release. There will be many more features along with more example code coming in the near future. The first set of installation instructions only covers the most recent Ubuntu release, Lucid 10.04, but should translate well to any distribution that has Ruby 1.8.6 or newer and RubyGems 1.2 or better. In addition to Ruby and Gems, you will also need the Ruby OpenSSL libraries and HTTPClient 2.1.5.2 or newer. Please feel free to test this code on as many other distributions and operating systems as you can and post your feedback in the iControl Ruby Library forum. We will do our best to get your change requests heard and rolled into the next release. Without further ado, let's get started.

Installing The Ruby iControl Libraries

This installation assumes you are starting with a fresh Ubuntu Lucid (10.04) install.

1. Install the Ruby, Ruby Gems, and Ruby OpenSSL libraries:
   apt-get install ruby rubygems libopenssl-ruby
2. Download the iControl Ruby Library Gem.
3. Install the iControl Gem:
   gem install f5-icontrol-10.2.0.gem
4. Run one of the example files (located in /var/lib/gems/1.8/gems/f5-icontrol-10.2.0.a.1/examples/ if installed as 'root'):
   ruby get_version.rb 192.168.1.245 admin admin
   => BIG-IP_v10.1.0

Installation Notes for Older Distributions

The age of the distribution does not matter nearly as much as the version of Ruby and RubyGems. If your Gems installation is too old you will get an "HTTP Response 302" and an error when trying to perform any remote actions. Ubuntu namely has not updated any of the RubyGems packages for Hardy (and older releases). As such you will see this error when trying to install the iControl Ruby Library, because Gems will try to remotely retrieve HTTPClient. If you are stuck using an older distribution we would suggest that you remove the old version of RubyGems and install a newer version (v1.4.1 as of this writing) manually. Instructions for manually installing RubyGems can be found on their download page. Alternatively, the HTTPClient gem could be retrieved manually and installed locally prior to the iControl Ruby Library.

Example Code

There are currently two pieces of sample code included with this release: create-http-virtual-and-pool.rb and get-version.rb. The 'create-http-virtual-and-pool.rb' script will create an HTTP pool with a number of members and an associated SNAT automapped HTTP virtual server. Take a look at this code if you are looking for reference on the syntax of complex types in iControl. We will be posting a full tech tip on understanding complex types in the near future, but this should get you started. The 'get-version.rb' script is rather simple and does exactly what it says: gets the version of the target BIG-IP. There will be many more pieces of example code coming shortly. More information on syntax and types can be found in the iControl SDK documentation.
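To give a feel for what driving the library looks like, here is a minimal sketch modeled on the bundled get_version.rb example. The constructor arguments and interface hash follow the pattern used in the gem's example scripts, but treat the exact class and method names as assumptions and verify them against the files in the examples/ directory:

require 'rubygems'
require 'f5-icontrol'

# Hypothetical address and credentials; 'System.SystemInfo' is one of the
# iControl interfaces defined by the WSDLs bundled with the gem.
bigip = F5::IControl.new('192.168.1.245', 'admin', 'admin',
                         ['System.SystemInfo']).get_interfaces

puts bigip['System.SystemInfo'].get_version   # => BIG-IP_v10.1.0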
Versioning

Contained within the Gem is the iControl Ruby Library and the WSDLs for the most recent iControl SDK (currently v10.2.0). In order to keep things consistent, the first three numbers in the version correspond to the iControl SDK version that provided the WSDLs. The next 'a' signifies that this is an 'alpha' release, which will be dropped in subsequent releases. Lastly, the final number signifies the build number supplied by our local repository. Eventually, when we deem the library stable, the version number will look something like v11.2.1.678, meaning that this future release was built using WSDLs from the version 11.2.1 iControl SDK and has a build number of 678.

Conclusion

Please keep in mind that this is our first 'stab' at an in-house iControl Ruby library and as such this is not by any means a finished product. There are a number of features we would like to add, but we wanted to start receiving your feedback as soon as possible. We feel that there is a lot of upward potential for this project and need your help and feedback to get it moving. We hope this will help all the Ruby shops out there finally start integrating iControl into their applications. Until next time, happy coding!

iControl 101 - #24 - Folders
Bucket

Way back in time (well, not so way back), configuration objects were stored in one location in the configuration. For the sake of this article, we'll call this the root "bucket". This worked fine for small organizations, but we found that as companies grew and, as a consequence, the number of applications they needed to support increased, it became more difficult to manage all the objects in a single "bucket".

vs_1
vs_2
pool_1
pool_2
monitor_1
monitor_2

Buckets

In BIG-IP version 9.4, we introduced the concept of "Partitions". With Partitions, you could create additional "buckets" of configuration objects. Each partition could contain objects and have its own set of authorization roles protecting them, allowing the administrator to group them together and allow each application team to manage those objects without exposing access to objects they weren't in control of. I discussed this interface in my article titled: iControl 101 - #08 - Partitions. A layout of the previously defined objects could now look like the following:

/APP1
  vs_1
  pool_1
  monitor_1
/APP2
  vs_2
  pool_2
  monitor_2

This still has the limitation that it's not easy to determine which groupings of objects (APP1, APP2, …) belong together. The next logical extension is to allow arbitrary "buckets" of objects, which is what I will talk about for the rest of this article.

Buckets of Buckets

BIG-IP version 11 introduced the concept of Folders. Here's an excerpt from the iControl SDK's reference page for the Folder interface:

A folder stores an arbitrary set of configuration objects. The system uses folders to control access to objects and to control synchronization of these objects within a device group. Folders are organized in a tree hierarchy, much like the folders or directories on a computer's file system. Objects stored in folders are referenced by the name of the individual object, preceded by its folder name, preceded by the names of any parent folders, up to the root folder (/), all separated by slashes (/), e.g., /george/server/virt-a. Note: methods to access the active folder for a session are found in the System::Session interface.

So, now we can have the objects look something like this:

/APPGROUP
  /APP1
    vs_1
    pool_1
    monitor_1
  /APP2
    vs_2
    pool_2
    monitor_2

Since I'm a console kind of guy at heart, and since folders are very similar to directories in a file system, I figured I'd write a companion sample for this article that emulated a directory shell, allowing you to navigate through, create, remove, and modify folders while illustrating how to use the various iControl methods for those tasks.

The Application: A Folder Shell

I wrote this application in PowerShell, but it could have just as easily been coded in Java, Perl, Python, .Net, or whatever you tend to use with your projects. Let's take a look at the implementation, the actions that it performs, and some sample output.

Initialization

This application will take only three parameters as input: the address of the BIG-IP, and the username and password for authentication. The main application loop consists of:

- Verifying the connection information is valid with the code in the Do-Initialize method.
- Setting the prompt to the user's current folder.
- Reading a command from the user and passing it to the Process-Input function described below.
param ( $bigip = $null, $uid = $null, $pwd = $null ) Set-PSDebug -strict; # Global Script variables $script:DEBUG = $false; $script:FOLDER = $null; $script:RECURSE = "STATE_DISABLED"; function Do-Initialize() { if ( (Get-PSSnapin | Where-Object { $_.Name -eq "iControlSnapIn"}) -eq $null ) { Add-PSSnapIn iControlSnapIn } $success = Initialize-F5.iControl -HostName $bigip -Username $uid -Password $pwd; return $success; } # Main Application Logic if ( ($bigip -eq $null) -or ($uid -eq $null) -or ($pwd -eq $null) ) { usage; } if ( Do-Initialize ) { $s = Get-RecursiveState; while(1) { $prompt = Get-Prompt; # Not using Read-Host here so we can support responses starting with ! $host.UI.Write($prompt); $i = $host.UI.ReadLine().Trim(); Process-Input $i; } } else { Write-Error "ERROR: iControl subsystem not initialized" } The Main Application Loop The main application loop passes commands to the Process-Input function. A wildcard match is performed against the passed in command and if there is a match, control is passed to the appropriate handler function. I won’t describe them all here but they should be fairly self explanatory. The main types of actions are described in the upcoming sections. function Process-Input() #---------------------------------------------------------------------------- { param($i); Debug-Message "< $($MyInvocation.MyCommand.Name) '$i' >"; if ( $i.Length -gt 0 ) { Debug-Message "CommandLine: '$i'..."; switch -Wildcard ($i.Trim().ToLower()) { "" { break; } "cd *" { Change-Folder (Get-Args $i); } "cd" { Get-CurrentFolder; } "d" { $script:DEBUG = -not $script:DEBUG; } "dir" { Get-ChildFolders | Sort-Object Folder; } "gd *" { Get-FolderDescription (Get-Args $i); } "h" { Show-Help; } "ls" { Get-ChildFolders | Sort-Object Folder; } "ls -l" { Get-ChildFolders -Long | Sort-Object Folder; } "ls -lr" { Get-ChildFolders -Recurse -Long | Sort-Object Folder; } "ls -r" { Get-ChildFolders -Recurse | Sort-Object Folder; } "md *" { Create-Folder (Get-Args $i); } "mkdir *" { Create-Folder (Get-Args $i); } "pwd" { Get-CurrentFolder; } "r" { Set-RecursiveState; } "r *" { Set-RecursiveState (Get-Args $i); } "rd *" { Remove-Folder (Get-Args $i); } "rmdir *" { Remove-Folder (Get-Args $i); } "sd *" { Set-FolderDescription (Get-Args $i); } "q" { exit; } "! *" { Execute-Command (Get-Args $i); } "$*" { Execute-Command $i.SubString(1); } ".." { Move-Up; } default { Show-Help; } } } } Querying The Current Folder The location of the users current, or “active”, folder is determined by calling the System.Session.get_active_folder() method. This is a server side variable that is stored during the lifetime of the iControl Portals instance and tied to the current authenticated user. This Get-CurentFolder function does some internal caching with the $script:FOLDER variable to avoid repetitive calls to the server. The caching can be overridden by passing in the “-Force” argument to the function causing a forced call to query the active folder. The value is then returned to the calling code. function Get-CurrentFolder() { param([switch]$Force = $false); if ( ($($script:FOLDER) -eq $null) -or ($Force) ) { $folder = (Get-F5.iControl).SystemSession.get_active_folder(); } else { $folder = $script:FOLDER; } $folder; } Listing the Child Folders The Get-ChildFolders function has several options you can use with it. If “-Recurse” is passed to the function, the recursive query state is set to STATE_ENABLED telling the server to return all child objects in all child folders relative to the currently active one. 
This is similar to a “dir /s” on Windows or “ls -r” on Unix. The second parameter is “-Long”. If this is passed in, then a “long” listing will be presented to the user containing the folder name, description, and device group. Folder descriptions are described below while I’m leaving device groups to be a task for the reader to follow up on. The function gathers the requested information and then packs the output into objects and returns them along the PowerShell pipeline to the calling code. function Get-ChildFolders() { param([switch]$Recurse = $false, [switch]$Long = $false); if ( $Recurse ) { $oldstate = (Get-F5.iControl).SystemSession.get_recursive_query_state(); (Get-F5.iControl).SystemSession.set_recursive_query_state("STATE_ENABLED"); } $folders = (Get-F5.iControl).ManagementFolder.get_list(); if ( $Recurse -and ($oldstate -ne "STATE_ENABLED") ) { (Get-F5.iControl).SystemSession.set_recursive_query_state($oldstate); } $descriptions = (Get-F5.iControl).ManagementFolder.get_description($folders); $groups = (Get-F5.iControl).ManagementFolder.get_device_group($folders); $curfolder = Get-CurrentFolder; if ( $curfolder -eq "/" ) { $curfolder = "ZZZZZZZZ"; } for($i=0;$i-lt$folders.length;$i++) { if ( $Long ) { $o = 1 | select "Folder", "Description", "DeviceGroup"; $o.Folder = $folders[$i].Replace($curfolder, ""); $o.Description = $descriptions[$i]; $o.DeviceGroup = $groups[$i]; } else { $o = 1 | select "Folder"; $o.Folder = $folders[$i].Replace($curfolder, ""); } $o; } } Changing The Current Folder The Change-Folder function emulates the “cd” or “chdir” functionality in command shells. The folder parameter can either be a fully qualified folder name (ie /PARTITION1/FOLDER1/SUBFOLDER), a relative child folder path (ie. FOLDER1/SUBFOLDER, SUBFOLDER, etc), or the special “..” folder which means to go up one level on the folder hierarchy. function Change-Folder() #---------------------------------------------------------------------------- { param($folder); Debug-Message "Setting active folder to '$folder'"; if ( $folder -eq ".." ) { Move-Up; } else { (Get-F5.iControl).SystemSession.set_active_folder($folder); } $f = Get-CurrentFolder -Force; } Moving Up A Folder I wrote a special function to move up one folder from the currently active folder. The Move-Up function parses the current folder and moves up a path using the PowerShell Split-Path cmdlet. If the path isn’t the top folder (meaning it has a parent), then the call to System.Session.set_active_folder() is made and the new current folder is queried and cached for future use. function Move-Up() { $folder = Get-CurrentFolder; $parent = (Split-Path $folder).Replace('\', '/'); if ( $parent.Length -gt 0 ) { Debug-Message "Setting active folder to '$parent'"; (Get-F5.iControl).SystemSession.set_active_folder($parent); } $f = Get-CurrentFolder -Force; } Creating And Removing Folders Navigating folders is fun, but it’s more fun to create and destroy them! The Create-Folder and Remove-Folder functions do just that. They call the Management.Folder.create() and Management.Folder.delete_folder() methods to do these actions. 
function Create-Folder() { param($folder); Debug-Message "Creating folder '$folder'"; (Get-F5.iControl).ManagementFolder.create($folder); Write-Host "Folder '$folder' successfully created!"; } function Remove-Folder() { param($folder); Debug-Message "Removing folder '$folder'"; (Get-F5.iControl).ManagementFolder.delete_folder($folder); Write-Host "Folder '$folder' successfully removed!"; } Folder Descriptions One great feature of folders is the ability to attach a description to it. A folder is just another type of object and it’s sometimes useful to be able to store some metadata in there with information that you can’t fit into the name. The Set-FolderDescription and Get-FolderDescription functions call the Management.Folder.set_description() and Management.Folder.get_description() iControl methods to, well, get and set the descriptions. function Set-FolderDescription() { param($cmd); $tokens = $cmd.Split(" "); if ( $tokens.Length -eq 1 ) { $f = $cmd; $d = ""; Debug-Message "Setting folder '$folder' description to '$d'"; (Get-F5.iControl).ManagementFolder.set_description($f, $d); Get-FolderDescription $f; } elseif ( $tokens.Length -gt 1 ) { # folder description goes here $f = $tokens[0]; $d = $tokens[1]; for($i=2; $i-lt$tokens.Length; $i++) { $d += " "; $d += $tokens[$i]; } Debug-Message "Setting folder '$f' description to '$d'"; (Get-F5.iControl).ManagementFolder.set_description($f, $d); Get-FolderDescription $f; } else { Show-Help; } } function Get-FolderDescription() { param($folder); Debug-Message "Retrieving folder description for '$folder'"; $descriptions = (Get-F5.iControl).ManagementFolder.get_description($folder); $descriptions[0]; } Controling The Recursive State For Queries I touched at the recursive query option above when I was showing how to query child folders. This recursive state not only applies to folder queries, but all queries across the iControl API! The Set-RecursiveState function sets this configuration variable to either STATE_ENABLED or STATE_DISABLED. If the recursive_query_state is set to STATE_DISABLED, then queries will only return objects in the current active folder. But, if it’s set to STATE_ENABLED, then it will return all child objects in all child folders. So, by setting this one value, you can determine whether you get just the objects (pools, virtuals, vlans, monitors, etc) in the current folder, or all of them in all child folders. Cool stuff! function Set-RecursiveState() { param($state = $null); $newState = "STATE_DISABLED"; if ( $state -eq $null ) { # toggle $oldState = (Get-F5.iControl).SystemSession.get_recursive_query_state(); if ( $oldState -eq "STATE_DISABLED" ) { $newState = "STATE_ENABLED"; } } else { # set if ( $state.ToLower().Contains("enable") ) { $newState = "STATE_ENABLED"; } } $script:RECURSE = $newState; (Get-F5.iControl).SystemSession.set_recursive_query_state($newState); Write-Host "Recursive State set to '$newState'"; } function Get-RecursiveState() { $oldState = (Get-F5.iControl).SystemSession.get_recursive_query_state(); $script:RECURSE = $oldState; $oldState; } Executing Arbitrary iControl Commands I threw this in to help illustrate how the recursive state described above works. By passing in the command “! LocalLBVirtualServer.get_list()”, that iControl call will be executed and the results passed to the output. By using the Invoke-Expression cmdlet, any iControl call can arbitrarily be made within the shell. Again, cool stuff! 
Executing Arbitrary iControl Commands

I threw this in to help illustrate how the recursive state described above works. By passing in the command “! LocalLBVirtualServer.get_list()”, that iControl call will be executed and the results passed to the output. By using the Invoke-Expression cmdlet, any iControl call can arbitrarily be made within the shell. Again, cool stuff!

function Execute-Command()
{
    param($cmd);
    $fullcmd = $cmd;
    if ( -not $fullcmd.ToLower().StartsWith("(get-f5.icontrol).") )
    {
        $fullcmd = "(Get-F5.iControl).$cmd";
    }
    Debug-Message "Executing command '$fullcmd'";
    Invoke-Expression $fullcmd;
}

A Demo Walkthrough
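To pull the pieces together, a session with these functions might look something like the following. This is a hypothetical transcript: the Finance folder and its contents are invented for illustration, and the output lines shown are simply the messages the functions above write with Write-Host.

PS> Create-Folder Finance
Folder 'Finance' successfully created!
PS> Set-FolderDescription "Finance Objects owned by the finance team"
Objects owned by the finance team
PS> Get-ChildFolders -Long | Format-Table Folder, Description
PS> Change-Folder Finance
PS> Execute-Command "LocalLBVirtualServer.get_list()"    # virtuals in the active folder only
PS> Set-RecursiveState enable
Recursive State set to 'STATE_ENABLED'
PS> Execute-Command "LocalLBVirtualServer.get_list()"    # now includes objects in child folders
PS> Move-Up
PS> Remove-Folder Finance
Folder 'Finance' successfully removed!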
Conclusion

Folders allow administrators more control over grouping their objects together, helping with the long-term manageability of those objects. This example illustrated how to use the iControl methods to interact with folders and, hopefully, in doing so, showed the ease of building powerful iControl solutions.

The Source Code

The full source can be found in the iControl Wiki in the CodeShare entry titled PowerShellManagementFolder.

Related Content on DevCentral

v11 - DevCentral - F5
DevCentral Groups - Microsoft PowerShell with iControl
PowerShell - DevCentral Wiki
Lori MacVittie - v11 ABLE Infrastructure: The Next Generation – Introducing v11
v11 iRules: Intro to Sideband Connections > DevCentral > Tech Tips ...
F5 Friday: You Will Appsolutely Love v11
PowerShell ABC's - A To Z
Webcast - BIG-IP v11 and Microsoft Technologies
FREE F5 Virtual Event - All About V11 and Microsoft - Thursday ...

Technorati Tags: powershell, folders, v11, automation, Joe Pruitt

Custom BIG-IP Object MetaData With Data Groups

An internal email came across a few weeks ago from DevCentral’s own L4L7 asking about putting extensions into the product to allow custom attributes to be assigned to objects within the system. The problem is pretty simple: you have a large configuration and you would like to “tag” an object so that you can easily categorize it later. A good example is assigning an owner to a virtual server, or maybe a timestamp of when the object was last updated. By doing this you not only have a way to remotely query objects of a certain type, but you could also use an iRule to act on that metadata and make real-time policy decisions. Instead of digging into how we would incorporate this into the internal schemas of the core configuration database, I figured I’d tackle the problem in a different way, one that doesn’t require an update to the product and will also work on previous releases. Of course iControl came to mind. I thought about how it could be done; several ideas came to mind but, as so often happens, the simplest solution satisfied all the requirements. I ultimately decided on using Data Groups as the data store since they can be accessed from both iControl (for remote configuration) and from iRules (for policy decisions).

Prerequisites

This article assumes you have the iControl snapin registered on your system and have initialized it with a connection to your BIG-IP. You will also need to “dot source” the script to load the functions into your current PowerShell runspace. This can be done with the following commands:

PS> Add-PSSnapin iControlSnapin
PS> Initialize-F5.iControl -Hostname bigip -Username user -Password pass
PS> . .\PsObjectMetaData.ps1

The DataGroup And Storage Format

Since I chose the storage to be a Data Group, we’ll have to allow the user to decide which Data Group to use. We will also have to include the code to create that Data Group if it doesn’t already exist. The Set-F5.MetaDataClass function sets the internal script variable to the specified Data Group name and then issues a call to the LocalLB.Class.create_string_class method for that new Data Group. If this fails because the Data Group already exists, the trap catches the exception and allows the script to proceed. For exceptions that do not include the string “already exists”, the exception will not be trapped and the error will be displayed to the console.

function Set-F5.MetaDataClass()
{
    param($Class = $null);
    if ( $null -eq $Class )
    {
        Write-Host "Usage: Set-F5.MetaDataClass -Class classname";
        return;
    }

    $StringClass = New-Object -TypeName iControl.LocalLBClassStringClass;
    $StringClass.name = $Class;
    $StringClass.members = (,$null);

    trap {
        if ( $_.Exception.Message.Contains("already exists") ) {
            # Class already exists, so eat the error.
            $script:DataGroup = $Class;
            continue;
        }
    }
    (Get-F5.iControl).LocalLBClass.create_string_class( (,$StringClass) );

    $script:DataGroup = $Class;
}

I’ve also included a Get-F5.MetaDataClass function to retrieve the script-level variable.

function Get-F5.MetaDataClass()
{
    $script:DataGroup;
}

The Data Format

Since I’m just using a simple String Data Group, I’ve decided to implement it as a character-delimited string. The fields in the data set are the following:

Product – The product the object is associated with (LTM, GTM, ASM, etc).
Type – The specific type of object (VIP, Pool, PoolMember, VLAN, WideIP, etc).
Name – The object name (Pool Name, VIP Name, WideIP Name, etc).
Key – The name of the metadata attribute (Owner, Location, etc).
Value – The value for the metadata (Joe, Seattle, etc).

I’ve included the New-F5.DataObject function to assist in managing these fields within a PowerShell object. This will be used in subsequent functions. It is also handy when returning a result set to PowerShell: by having an object representation, you can use the formatting and selection functions to choose how you want your results displayed.

function New-F5.DataObject()
{
    param(
        $Product = $null,
        $Type = $null,
        $Name = $null,
        $Key = $null,
        $Value = $null
    );
    if ( ($Product -eq $null) -or ($Type -eq $null) -or ($Name -eq $null) -or ($Key -eq $null) -or ($Value -eq $null) )
    {
        Write-Host "Usage: New-F5.DataObject -Product prod -Type type -Name objname -Key datakey -Value datavalue";
        return;
    }

    $o = 1 | select Product, Type, Name, Key, Value;
    $o.Product = $Product;
    $o.Type = $Type;
    $o.Name = $Name;
    $o.Key = $Key;
    $o.Value = $Value;
    $o;
}

Assign MetaData To An Object

The first thing you will likely want to do is assign some metadata to an object. This is done with the Set-F5.MetaData function. It takes the properties defined in the Data Format section above. The record string is built and the iControl LocalLB.Class.add_string_class_member method is called with the LocalLB.Class.StringClass structure containing the Data Group entry. As above, we eat the error if the entry already exists.

function Set-F5.MetaData()
{
    param(
        $Product = $null,
        $Type = $null,
        $Name = $null,
        $Key = $null,
        $Value = $null
    );
    if ( ($Product -eq $null)
        -or ($Type -eq $null)
        -or ($Name -eq $null)
        -or ($Key -eq $null)
        -or ($Value -eq $null) )
    {
        Write-Host "Usage: Set-F5.MetaData -Product prod -Type type -Name name -Key key -Value value";
        return;
    }

    $StringClass = New-Object -TypeName iControl.LocalLBClassStringClass;
    $StringClass.name = $script:Datagroup;
    $StringClass.members = ( ,"${Product}:${Type}:${Name}:${Key}:${Value}");

    trap {
        if ( $_.Exception.Message.Contains("already exists") ) {
            # Class member already exists, so eat the error.
            continue;
        }
    }
    (Get-F5.iControl).LocalLBClass.add_string_class_member( (,$StringClass) );
}
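For reference, each record created by Set-F5.MetaData ends up in the Data Group as a single colon-delimited string. For example, tagging pool OWA_Servers_1 with an Owner of Joe (as in the Example Usage section below) produces this class member:

LTM:Pool:OWA_Servers_1:Owner:Joe

The Get-F5.MetaData function in the next section splits records back apart using a $Separator variable. That variable isn’t shown in the excerpt here, but given the record format it is presumably just the same delimiter, defined in the full source along the lines of:

$script:Separator = ":";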
Get The MetaData Associated With An Object

After assigning some metadata to objects, you’ll want a way to query it back out. I implemented this with the Get-F5.MetaData function. It takes as input the five fields defined above, but in this case all of the parameters are optional. The function queries the Data Group and then filters the entries based on the parameters you passed in. This way you can query all “VIP” objects, or all objects owned by “Joe”, or any combination of the parameters. The New-F5.DataObject function is used here to create PowerShell objects for the specific records that match the query. As you’ll see below in the Example Usage section, this allows filtering and display formatting of the output.

function Get-F5.MetaData()
{
    param(
        $Product = $null,
        $Type = $null,
        $Name = $null,
        $Key = $null,
        $Value = $null
    );

    $Objs = @();
    # Build list

    $StringClassA = (Get-F5.iControl).LocalLBClass.get_string_class( (,$script:Datagroup) );
    $StringClass = $StringClassA[0];

    $classname = $StringClass.name;
    $members = $StringClass.members;

    for ($i = 0; $i -lt $members.Length; $i++)
    {
        $tokens = $members[$i].Split($Separator);
        if ( $tokens.Length -eq 5 )
        {
            $oProd  = $tokens[0];
            $oType  = $tokens[1];
            $oName  = $tokens[2];
            $oKey   = $tokens[3];
            $oValue = $tokens[4];

            $o = New-F5.DataObject -Product $oProd -Type $oType -Name $oName -Key $oKey -Value $oValue;

            $match = $true;

            # Process filter parameters
            if ( ($Product -ne $null) -and ($oProd  -notlike $Product) ) { $match = $false; }
            if ( ($Type    -ne $null) -and ($oType  -notlike $Type   ) ) { $match = $false; }
            if ( ($Name    -ne $null) -and ($oName  -notlike $Name   ) ) { $match = $false; }
            if ( ($Key     -ne $null) -and ($oKey   -notlike $Key    ) ) { $match = $false; }
            if ( ($Value   -ne $null) -and ($oValue -notlike $Value  ) ) { $match = $false; }

            if ( $match ) { $Objs += (,$o); }
        }
    }

    $Objs;
}
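Because Get-F5.MetaData emits real PowerShell objects, and because the filters use -like comparisons (so wildcards work), the results compose naturally with the rest of the pipeline. These are hypothetical examples built on the same sample data used in the Example Usage section below:

PS> Get-F5.MetaData -Name "OWA_Servers*" -Key Location
PS> Get-F5.MetaData -Key Owner | Group-Object Value | Select-Object Name, Count
PS> Get-F5.MetaData -Product LTM -Type Pool | Export-Csv -NoTypeInformation pool-metadata.csv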
Removing The MetaData From An Object

Getting and setting the metadata is all fine and dandy, but there will be times when you want to delete the metadata associated with an item. This can be done with the Remove-F5.MetaData function. For this example, all fields are required so that you don’t accidentally delete the metadata for all the objects in the system by omitting a certain parameter. It would be trivial to allow a more specific removal based on one or more of the supplied parameters; I’ll leave that as an exercise for the reader. The Remove-F5.MetaData function calls the iControl LocalLB.Class.delete_string_class_member method with the supplied StringClass definition. Errors for records that are not found are trapped, since you probably aren’t concerned about deleting entries that don’t exist. All other errors are presented to the console through the standard PowerShell error processing.

function Remove-F5.MetaData()
{
    param($Product = $null, $Type = $null, $Name = $null, $Key = $null, $Value = $null);
    if ( ($Product -eq $null)
        -or ($Type -eq $null)
        -or ($Name -eq $null)
        -or ($Key -eq $null)
        -or ($Value -eq $null) )
    {
        Write-Host "Usage: Remove-F5.MetaData -Product prod -Type type -Name name -Key key -Value value";
        return;
    }

    $StringClass = New-Object -TypeName iControl.LocalLBClassStringClass;
    $StringClass.name = $script:Datagroup;
    $StringClass.members = ( ,"${Product}:${Type}:${Name}:${Key}:${Value}");

    trap {
        if ( $_.Exception.Message.Contains("was not found") ) {
            # Class member doesn't exist, so eat the error.
            continue;
        }
    }
    (Get-F5.iControl).LocalLBClass.delete_string_class_member( (,$StringClass) );
}

Example Usage

PS> Add-PSSnapin iControlSnapin
PS> Initialize-F5.iControl -Hostname bigip -Username user -Password pass
PS> Set-F5.MetaDataClass -Class ObjectMetaData
PS> Set-F5.MetaData -Product LTM -Type Pool -Name OWA_Servers_1 -Key Owner -Value Joe
PS> Set-F5.MetaData -Product LTM -Type Pool -Name OWA_Servers_1 -Key Location -Value SEA
PS> Set-F5.MetaData -Product LTM -Type Pool -Name OWA_Servers_2 -Key Owner -Value Joe
PS> Set-F5.MetaData -Product LTM -Type Pool -Name OWA_Servers_2 -Key Location -Value LAS
PS> Set-F5.MetaData -Product GTM -Type Wideip -Name wip_OWA -Key Owner -Value Fred
PS> Set-F5.MetaData -Product GTM -Type Wideip -Name wip_OWA -Key Location -Value LAS

PS> Get-F5.MetaData | Format-Table

Product Type   Name          Key      Value
------- ----   ----          ---      -----
GTM     Wideip wip_OWA       Location LAS
GTM     Wideip wip_OWA       Owner    Fred
LTM     Pool   OWA_Servers_1 Location SEA
LTM     Pool   OWA_Servers_1 Owner    Joe
LTM     Pool   OWA_Servers_2 Location LAS
LTM     Pool   OWA_Servers_2 Owner    Joe

PS> Get-F5.MetaData -Key Owner -Value Joe | Format-Table

Product Type Name          Key   Value
------- ---- ----          ---   -----
LTM     Pool OWA_Servers_1 Owner Joe
LTM     Pool OWA_Servers_2 Owner Joe

PS> Get-F5.MetaData -Key Location -Value SEA | select Type, Name

Type Name
---- ----
Pool OWA_Servers_1

You can view the source library for this article in the iControl CodeShare under PsObjectMetaData.

Related Articles on DevCentral

Working with Datagroup members in Powershell - DevCentral - F5 ...
how to use data group - DevCentral - F5 DevCentral > Forums ...
How do I edit Datagroups via PowerShell - DevCentral - F5 ...
iControl 101 #13 - Data Groups > DevCentral > F5 DevCentral ...
Forcing a reload of External Data Groups within an iRule ...
Validating Data Group (Class) References > DevCentral > F5 ...
DevCentral Wiki: Power Shell
PowerShell - Getting Started > DevCentral > F5 DevCentral > Tech Tips
Authoring an F5 Management Pack PowerShell Agent Task in SCOM 2007 ...
Creating An iControl PowerShell Monitoring Dashboard With Google ...

Technorati Tags: PowerShell, DataGroup, Joe Pruitt
Perl: How to work with methods with multiple outbound parameters

I've seen this question asked in several forms, so I thought I'd try to shed some light on how to handle accessing methods with multiple outbound parameters using Perl's SOAP::Lite. Let's take a look at these function prototypes:

void func1(out string param1);
void func2(out string param1, out string [] param2);
string func3(out string param1, out string [] param2);
string[] func4(out string param1, out string [] param2);
string func5(out string[] param1, out string [][] param2);

I know there are more combinations than this, but this should cover most of the bases (other object types can take the place of string in these examples). I'll take each of these point by point and note how to access the returned parameters. Keep in mind that SOAP::Lite doesn't distinguish between a return value and an outbound parameter: it uses the FIRST returned value, whether that's the actual return value or the first outbound parameter, and then stuffs all subsequent parameters into its paramsout array.

1. void func1(out string param1);

Here we have no return value but a single outbound parameter. Since the outbound parameter is the first returned value, it will be stored in the result.

$soapResponse = $soap->func1();
$param1 = $soapResponse->result;   # Scalar value
print "param1 => $param1\n";

2. void func2(out string param1, out string[] param2)

This method has no return value but two outbound parameters, so param1 will be stored in the result and param2 will be the first element of the paramsout array.

$soapResponse = $soap->func2();
@params = $soapResponse->paramsout;
$param1 = $soapResponse->result;   # Scalar value
@param2 = @{$params[0]};           # Array
print "param1 => $param1\n";
print "param2 => {";
foreach $val (@param2) { print "$val, "; }
print "}\n";

3. string func3(out string param1, out string [] param2);

This method returns a string as a return value and has two outbound parameters. So, the return value will be in the result and params 1 and 2 will be stored in the paramsout array.

$soapResponse = $soap->func3();
@params = $soapResponse->paramsout;
$retval = $soapResponse->result;
$param1 = $params[0];              # Scalar value
@param2 = @{$params[1]};           # Array
print "retval => $retval\n";
print "param1 => $param1\n";
print "param2 => {";
foreach $val (@param2) { print "$val, "; }
print "}\n";

4. string[] func4(out string param1, out string [] param2);

This method is similar to number 3, but it returns an array, which will be stored in the result; again, params 1 and 2 will be stored in the paramsout array.

$soapResponse = $soap->func4();
@params = $soapResponse->paramsout;
@retval = @{$soapResponse->result};
$param1 = $params[0];              # Scalar value
@param2 = @{$params[1]};           # Array
print "retval => {";
foreach $val (@retval) { print "$val, "; }
print "}\n";
print "param1 => $param1\n";
print "param2 => {";
foreach $val (@param2) { print "$val, "; }
print "}\n";

5. string func5(out string[] param1, out string [][] param2);

Here we've introduced an array of arrays as an outbound parameter. The return value will again be returned in the result, with the parameters stored in the paramsout array.

$soapResponse = $soap->func5();
@params = $soapResponse->paramsout;
$retval = $soapResponse->result;
@param1 = @{$params[0]};           # Array
@param2 = @{$params[1]};           # Array of arrays
print "retval => $retval\n";
print "param1 => {";
foreach $val (@param1) { print "$val, "; }
print "}\n";
print "param2 => {\n";
for $i ( 0 .. $#param2 ) {
    print "  {";
    for $j ( 0 .. $#{$param2[$i]} ) { print "$param2[$i][$j], "; }
    print "}\n";
}
print "}\n";
You could have used nested foreach statements for the array of arrays as well, but this just shows another way to access the data by index.

-Joe

References

SOAP::Lite
SOAP::Lite Perldocs
Manipulating Arrays of Arrays in perl