iRules 101 - #12 - The Session Command
One of the things that makes iRules so incredibly powerful is the fact that it is a true scripting language, or at least based on one. The tools that Tcl brings to the table - regular expressions, string functions, even things as simple as storing, manipulating, and recalling variable data - set iRules apart from the rest of the crowd. They also make it possible to do some pretty impressive things with connection data, massaging and directing it the way you want.

Other articles in the series:

Getting Started with iRules: Intro to Programming with Tcl
Getting Started with iRules: Control Structures & Operators
Getting Started with iRules: Variables
Getting Started with iRules: Directing Traffic
Getting Started with iRules: Events & Priorities
Intermediate iRules: catch
Intermediate iRules: Data-Groups
Getting Started with iRules: Logging & Comments
Advanced iRules: Regular Expressions
iRules 101 - #12 - The Session Command
Intermediate iRules: Nested Conditionals
Intermediate iRules: Handling Strings
Intermediate iRules: Handling Lists
Advanced iRules: Scan
Advanced iRules: Binary Scan

Sometimes, though, a simple variable won't do. You've likely heard of global variables in one of the earlier 101 articles, read the warning there, and are looking for another option. So here you are: you have some data you need to store, and it needs to persist across multiple connections. You need it to be efficient and fast, and you don't want to do a whole lot of complex management of a data structure. One of the many ways you can store and access information in your iRule fits all of these requirements perfectly, little known as it may be: the session command.

There are three main permutations of the session command that you'll use when storing and referencing data within the session table:

session add: Stores the user's data under the specified key for the specified persistence mode
session lookup: Returns user data previously stored using session add
session delete: Removes user data previously stored using session add

A simple example of adding some information to the session table looks like:

when CLIENTSSL_CLIENTCERT {
   set ssl_cert [SSL::cert 0]
   session add ssl $ssl_cert 90
}

By using the session add command, you can manually place a specific piece of data into the LTM's session table. You can then look it up later, by unique key, with the session lookup command and use the data in a different section of your iRule, or in another connection altogether. This can be helpful in situations where data needs to be passed between iRules or events in ways that a simple variable would not normally allow.
For example, you can mine SSL data from the connection events and make it available later:

when CLIENTSSL_CLIENTCERT {
   # Set results in the session so they are available to other events
   session add ssl [SSL::sessionid] [list [X509::issuer] [X509::subject] [X509::version]] 180
}

when HTTP_REQUEST {
   # Retrieve certificate information from the session
   set sslList [session lookup ssl [SSL::sessionid]]
   set issuer [lindex $sslList 0]
   set subject [lindex $sslList 1]
   set version [lindex $sslList 2]
}

Because the session table is optimized and designed to handle every connection that comes into the LTM, it's very efficient and can handle quite a large number of items. Also note that, as above, you can pass structured information such as Tcl lists into the session table and they will remain intact. Keep in mind, though, that there is currently no way to count the number of entries in the table with a certain key, so you'll have to build your own processing logic for that, where necessary.

It's also important to note that there is more than one session table. If you look at the above example, you'll see that before we listed any key or data to be stored, we used the command session add ssl. Note the "ssl" portion of this command: it is a reference to which session table the data will be stored in. For our purposes here there are effectively two session tables: ssl and uie. Be sure you're accessing the same one in your session lookup section as in your session add section, or you'll never find the data you're after. This is pretty easy to keep straight once you see it:

session add uie ...
session lookup uie

Or:

session add ssl ...
session lookup ssl

You can find complete documentation on the session command in the iRules wiki, as well as some great examples that depict more advanced iRules making use of the session command to great success. Check out Codeshare for more examples.
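One thing the examples above never show is the delete permutation. Purely as a hedged illustration - the /logout trigger, the "seen" marker, and the 300-second timeout are assumptions for the sketch, not part of the original article - the full add/lookup/delete lifecycle against the uie table might look like:

when HTTP_REQUEST {
   # Mark a client the first time it arrives, keyed on client IP,
   # with a 300 second timeout
   if { [session lookup uie [IP::client_addr]] eq "" } {
      session add uie [IP::client_addr] "seen" 300
   }
   # Hypothetical logout path: remove the entry so the next request
   # from this client is treated as brand new
   if { [HTTP::uri] starts_with "/logout" } {
      session delete uie [IP::client_addr]
   }
}

The same pattern works against the ssl table; just keep the add, lookup, and delete calls pointed at the same table.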
Intermediate iRules: Nested Conditionals

Conditionals are a pretty standard tool in every programmer's toolbox. They are the structures that allow us to decide when we want certain actions to happen, based on, well, conditions that can be determined within our code. This concept is as old as compilers. Chances are, if you're writing code, you're going to be using a slew of these things, even in an event-based language like iRules. iRules is no different than any other programming/scripting language when it comes to conditionals; we have them. Sure, how they're implemented and what they look like change from language to language, but most of the same basic tools are there: if, else, switch, elseif, etc. Just about any example that you might run across on DevCentral is going to contain some of these being put to use. Learning which conditional to use in each situation is an integral part of learning how to code effectively.

Once you have that under control, however, there's still plenty more to learn. Now that you're comfortable using a single conditional, what about starting to combine them? There are many times when it makes more sense to use a pair or more of conditionals in place of a single conditional along with logical operators. For example:

if { [HTTP::host] eq "bob.com" and [HTTP::uri] starts_with "/uri1" } {
   pool pool1
} elseif { [HTTP::host] eq "bob.com" and [HTTP::uri] starts_with "/uri2" } {
   pool pool2
} elseif { [HTTP::host] eq "bob.com" and [HTTP::uri] starts_with "/uri3" } {
   pool pool3
}

This can be re-written to use a pair of conditionals instead, making it far more efficient. To do this, you take the comparison shared by every branch, perform it only once, and perform the other comparisons only if that first result comes back as desired. This is more easily described as nested conditionals, and it looks like this:

if { [HTTP::host] eq "bob.com" } {
   if { [HTTP::uri] starts_with "/uri1" } {
      pool pool1
   } elseif { [HTTP::uri] starts_with "/uri2" } {
      pool pool2
   } elseif { [HTTP::uri] starts_with "/uri3" } {
      pool pool3
   }
}

These two examples are logically equivalent, but the latter is far more efficient. In all the cases where the host is not equal to "bob.com", no other inspection needs to be done, whereas in the first example the host check runs up to three times, and the URI check runs every single time, even though the process could have stopped after the first failed host comparison.

While basic, this concept is important in coding generally. It becomes exponentially more important, as do almost all optimizations, when talking about programming in iRules. A script being executed on a server firing perhaps once per minute benefits from small optimizations. An iRule being executed somewhere in the order of 100,000 times per second benefits that much more.

A slightly more interesting example, perhaps, is performing the same logical nesting while using different operators. In this example we'll look at a series of if/elseif statements that already use nesting, and see how the switch command can optimize things even further. I've seen multiple examples of people shying away from switch when nesting their logic because it looks odd to them or they're not quite sure how it should be structured. Hopefully this will help clear things up.
First, the example using if statements:

when HTTP_REQUEST {
   if { [HTTP::host] eq "secure.domain.com" } {
      HTTP::header insert "Client-IP:[IP::client_addr]"
      pool sslServers
   } elseif { [HTTP::host] eq "www.domain.com" } {
      HTTP::header insert "Client-IP:[IP::client_addr]"
      pool httpServers
   } elseif { [HTTP::host] ends_with "domain.com" and [HTTP::uri] starts_with "/secure" } {
      HTTP::header insert "Client-IP:[IP::client_addr]"
      pool sslServers
   } elseif { [HTTP::host] ends_with "domain.com" and [HTTP::uri] starts_with "/login" } {
      HTTP::header insert "Client-IP:[IP::client_addr]"
      pool httpServers
   } elseif { [HTTP::host] eq "intranet.myhost.com" } {
      HTTP::header insert "Client-IP:[IP::client_addr]"
      pool internal
   }
}

As you can see, this is completely functional and would do the job just fine. There are definitely some improvements to be made, though. Let's try using a switch statement instead of several if comparisons for improved performance. To do that, we're going to nest an if inside a switch comparison. While this might be new to some, or look a bit odd if you're not used to it, it's completely valid and often the most efficient structure you're going to get. This is what the above code looks like cleaned up and put into a switch:

when HTTP_REQUEST {
   HTTP::header insert "Client-IP:[IP::client_addr]"
   switch -glob [HTTP::host] {
      "secure.domain.com" { pool sslServers }
      "www.domain.com" { pool httpServers }
      "*.domain.com" {
         if { [HTTP::uri] starts_with "/secure" } {
            pool sslServers
         } else {
            pool httpServers
         }
      }
      "intranet.myhost.com" { pool internal }
   }
}

As you can see, this is not only easier to read and maintain, but it will also prove more efficient. We've moved to the more efficient switch structure, we've gotten rid of the repeated host comparisons that were happening above with the /secure vs. /login URIs, and while I was at it I hoisted the header insert out of the individual cases, since it was happening in every case anyway. Hopefully the benefit this technique can offer is clear, and these examples did the topic some justice. With any luck, you'll nest those conditionals with confidence now.
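One last note on the switch example above: it has no default arm, so any host that matches none of the cases simply falls through to the virtual server's default pool. If you want to handle that case explicitly, a default arm does it. This is a minimal sketch, and the pool name defaultServers is an assumption for illustration:

when HTTP_REQUEST {
   switch -glob [HTTP::host] {
      "secure.domain.com" { pool sslServers }
      "*.domain.com" { pool httpServers }
      default {
         # Runs for any host the cases above did not match
         pool defaultServers
      }
   }
}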
iRule Security 101 - #1 - HTTP Version

When looking at securing your web application, there is a set of fairly standard attack patterns that application firewalls make use of to protect against the bad guys out there trying to exploit your website. A good reference for web application attacks is the Open Web Application Security Project (OWASP). In this series of blog posts, I'm going to highlight different attacks and how they can be defended against by using iRules.

In the first installment of this series I will show how to allow only valid HTTP requests to your application server. The most common HTTP versions out there are 1.0 and 1.1, although version 0.9 is still used in places. A common attempt to fool an application is to pass an invalid HTTP version, causing the server to misinterpret the request. The HTTP::version iRules command returns the request's version, and you can ensure that only valid requests are processed and allowed to your app servers with this iRule:

when RULE_INIT {
   set INFO 0
   set DEBUG 0
   #------------------------------------------------------------------------
   # HTTP Version
   #------------------------------------------------------------------------
   set sec_http_version_enabled 0
   set sec_http_version_block 1
   set sec_http_version_alert 1
   set sec_http_versions [list \
      "0.9" \
      "1.0" \
      "1.1" \
   ]
}
when HTTP_REQUEST {
   #============================================================================
   # HTTP Version
   #============================================================================
   if { $::INFO } { log local0. "ASSERTION: http_version" }
   if { $::sec_http_version_enabled } {
      if { $::DEBUG } { log local0. "  HTTP Version: [HTTP::version]" }
      if { ! [matchclass [HTTP::version] equals $::sec_http_versions] } {
         if { $::sec_http_version_alert } {
            log local0. "  SEC-ALERT: Invalid HTTP Version found: '[HTTP::version]'"
         }
         if { $::sec_http_version_block } {
            reject
         }
      } else {
         if { $::DEBUG } { log local0. "  PASSED" }
      }
   }
}

In the RULE_INIT event I've created a few global variables enabling one to turn the verification on or off. Without all the extra conditionals, the iRule can be stripped down to the following couple of lines:

when RULE_INIT {
   set sec_http_versions [list "0.9" "1.0" "1.1"]
}
when HTTP_REQUEST {
   if { ! [matchclass [HTTP::version] equals $::sec_http_versions] } {
      reject
   }
}

Stay tuned for the next installment of iRules Security 101, where I'll show how to validate HTTP methods.

-Joe
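A note for readers on newer code: on 10.x and later, the $:: globals above demote CMP processing, and the static:: namespace is the usual replacement. A hedged sketch of the same check in that style (mirroring the version-specific rewrite shown for the HTTP methods rule later in this series) might look like:

when RULE_INIT {
   # static:: variables are CMP-friendly, unlike $:: globals
   set static::sec_http_versions [list "0.9" "1.0" "1.1"]
}
when HTTP_REQUEST {
   # Reject any request whose version is not in the allowed list
   if { [lsearch -exact $static::sec_http_versions [HTTP::version]] == -1 } {
      reject
   }
}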
iControl 101 - #06 - File Transfer APIs

The main use we see for iControl applications is the automation of control features such as adding, removing, enabling, and disabling objects. Another key step in multi-device management is the automation of applying hotfixes and other software updates, as well as the downloading of configurations for archival and disaster recovery purposes. iControl has a set of methods that enable the uploading and downloading of files for these purposes. This article will discuss these "file transfer" APIs and how you can use them for various management purposes.

The File Transfer APIs

The API methods used to transfer files to and from the device can be found in the System::ConfigSync interface. You may ask: why are they in the ConfigSync interface? The answer is quite simple, actually. In our first version of iControl, we had the need to transfer configurations across devices in an HA pair as part of the configuration sync process, and in doing so we introduced the upload_configuration() and download_configuration() methods. When we later added more generic file transfer APIs, it seemed logical to place them next to the pre-existing configuration transfer methods. So, that's the reason...

The following methods are used to download content:

FileTransferContext System::ConfigSync::download_configuration(
   in String config_name,
   in long chunk_size,
   inout long file_offset
);
FileTransferContext System::ConfigSync::download_file(
   in String file_name,
   in long chunk_size,
   inout long file_offset
);

And the following two methods are used to upload content:

void System::ConfigSync::upload_configuration(
   in String config_name,
   in System::ConfigSync::FileTransferContext file_context
);
void System::ConfigSync::upload_file(
   in String file_name,
   in System::ConfigSync::FileTransferContext file_context
);

The above methods use the following enum and structure to control the transfer:

enum Common::FileChainType {
   FILE_UNDEFINED = 0,
   FILE_FIRST = 1,
   FILE_MIDDLE = 2,
   FILE_UNUSED = 3,
   FILE_LAST = 4,
   FILE_FIRST_AND_LAST
};
struct System::ConfigSync::FileTransferContext {
   char [] file_data,
   Common::FileChainType chain_type
};

Chunks

Due to SOAP's limitations with regard to payload size and the processing of large messages, we designed the file transfer APIs to work in "chunks". This means that for a multi-megabyte file, you use a loop in your code to send chunks of whatever size you wish, from 1 byte up to hundreds of kilobytes. We recommend not getting too extreme on either end of the chunk size range; typically 64 to 256 KB per chunk works well.

How to use the download methods

The process for downloading content is fairly straightforward. In this example, we'll use the download_configuration() method.

1. Call the download_configuration() method with a given configuration name (a list of existing configurations can be returned from ConfigSync::get_configuration_list()), the requested chunk_size (e.g. 64 KB), and a starting file_offset of 0.
2. The response comes back in the FileTransferContext with the data and the FileChainType describing whether this was the first chunk, a middle chunk, the last chunk, or the first and last chunk.
3. Take the encoded data stored in the FileTransferContext.file_data array and save it locally.
4. If the FileChainType is FILE_LAST or FILE_FIRST_AND_LAST, you are done.
5. Otherwise, use the incremented file_offset and go to step #1.
The following snippet of code, taken from the iControl SDK's ConfigSync C# sample application, illustrates how to deal with downloading a file of unknown size.

void handle_download(string config_name, string local_file)
{
   ConfigSync.SystemConfigSyncFileTransferContext ctx;
   long chunk_size = (64*1024);
   long file_offset = 0;
   bool bContinue = true;

   FileMode fm = FileMode.CreateNew;
   if ( File.Exists(local_file) )
   {
      fm = FileMode.Truncate;
   }
   FileStream fs = new FileStream(local_file, fm);
   BinaryWriter w = new BinaryWriter(fs);

   while ( bContinue )
   {
      ctx = ConfigSync.download_configuration(config_name, chunk_size, ref file_offset);

      // Append data to file
      w.Write(ctx.file_data, 0, ctx.file_data.Length);
      Console.WriteLine("Bytes Transferred: " + file_offset);

      if ( (CommonFileChainType.FILE_LAST == ctx.chain_type) ||
           (CommonFileChainType.FILE_FIRST_AND_LAST == ctx.chain_type) )
      {
         bContinue = false;
      }
   }
   w.Close();
}

How to use the upload methods

The upload methods work in a similar way, but in the opposite direction. To use the upload_configuration() method, you would use the following logic.

1. Determine the chunk size you are going to use (64-256 KB recommended) and fill the file_data array with the first chunk of data.
2. Set the FileChainType to FILE_FIRST_AND_LAST if all your data fits in the first chunk.
3. Set the FileChainType to FILE_FIRST if you fill up the data with your chunk size and there is more data to follow.
4. Make a call to upload_configuration() with the configuration name and the FileTransferContext containing the data and the FileChainType.
5. If the data has all been sent, stop processing.
6. Otherwise, if the remaining data will fit in the given chunk size, set the FileChainType to FILE_LAST; if not, set it to FILE_MIDDLE.
7. Fill the file_data array with the next chunk of data.
8. Go to step #4.

The following example, taken from the iControl SDK ConfigSync Perl sample, illustrates taking a local configuration file and uploading it to the device.

sub uploadConfiguration()
{
   my ($localFile, $configName) = (@_);

   $success = 0;
   $bContinue = 1;
   $chain_type = $FILE_FIRST;
   $preferred_chunk_size = 65536;
   $chunk_size = 65536;
   $total_bytes = 0;

   open(LOCAL_FILE, "<$localFile") or die("Can't open $localFile for input: $!");
   binmode(LOCAL_FILE);

   while (1 == $bContinue)
   {
      $file_data = "";
      $bytes_read = read(LOCAL_FILE, $file_data, $chunk_size);
      if ( $preferred_chunk_size != $bytes_read )
      {
         if ( $total_bytes == 0 )
         {
            $chain_type = $FILE_FIRST_AND_LAST;
         }
         else
         {
            $chain_type = $FILE_LAST;
         }
         $bContinue = 0;
      }
      $total_bytes += $bytes_read;

      $FileTransferContext =
      {
         file_data => SOAP::Data->type(base64 => $file_data),
         chain_type => $chain_type
      };

      $soap_response = $ConfigSync->upload_configuration
      (
         SOAP::Data->name(config_name => $configName),
         SOAP::Data->name(file_context => $FileTransferContext)
      );
      if ( $soap_response->fault )
      {
         print $soap_response->faultcode, " ", $soap_response->faultstring, "\n";
         $success = 0;
         $bContinue = 0;
      }
      else
      {
         print "Uploaded $total_bytes bytes\n";
         $success = 1;
      }
      $chain_type = $FILE_MIDDLE;
   }
   print "\n";
   close(LOCAL_FILE);
   return $success;
}

Other methods of data transfer

The two methods illustrated above are specific to system configurations. The more generic upload_file() and download_file() methods may be used for things like backing up other system files, as well as (but not limited to) uploading hotfixes to be installed later with the System::SoftwareManagement::install_hotfix() method.
The usage of those methods is identical to the configuration transfer methods, except that the config_name parameter is replaced with a fully qualified file system path. Note that there are some restrictions as to which file system locations are readable and writable - you wouldn't want to accidentally overwrite the system kernel with your latest hotfix, would you?

Conclusion

Hopefully this article gave you some insight into how to use the various file transfer APIs to manipulate file system content on your F5 devices.
iRule Security 101 - #04 - Masking Application Platform

In this session of iRules Security 101, I'll show you how to hide your backend server's application platform from the outside world. When you are browsing a website, you can often look at the URIs to determine what platform the application is running on. For instance, if you saw "http://www.foo.com/foo.asp", you would likely assume that the backend application was an ASP application running under Microsoft IIS. Likewise, if you saw "http://www.foo.com/foo.jsp", you would assume it was a Java server platform. In the world of exploits, wouldn't it be best to hide the platform your application is running on from the outside world? This article will illustrate one way to do so.

Other articles in the series:

iRule Security 101 – #1 – HTTP Version
iRule Security 101 – #02 – HTTP Methods and Cross Site Tracing
iRule Security 101 – #03 – HTML Comments
iRule Security 101 – #04 – Masking Application Platform
iRule Security 101 – #05 – Avoiding Path Traversal
iRule Security 101 – #06 – HTTP Referer
iRule Security 101 – #07 – FTP Proxy
iRule Security 101 – #08 – Limiting POST Data
iRule Security 101 – #09 – Command Execution

In this article, we will look at the URI extensions that your application uses. For this example, let's assume that your backend webserver is running an ASP application and you don't want the world to know it. We'll create a new "public" extension that the outside world will interact with. In this example, we'll use ".joe". Hey, I wrote the article, so I get to name the extension B-). Let's first look at the HTTP request:

when HTTP_REQUEST {
   # Don't allow data to be chunked.  This ensures we don't get a
   # string that is spread across two chunk boundaries.
   if { [HTTP::version] eq "1.1" } {
      if { [HTTP::header is_keepalive] } {
         HTTP::header replace "Connection" "Keep-Alive"
      }
      HTTP::version "1.0"
   }

   set orig_uri [HTTP::uri]
   log local0. "Old URI: $orig_uri"
   switch -glob $orig_uri {
      "*.html*" {
         log local0. "Found request to internal resource. Redirect to external resource"
         set new_uri [string map {".html" ".joe"} [HTTP::uri]]
         HTTP::redirect "http://[HTTP::host]$new_uri"
      }
      "*.jsp*" {
         log local0. "Found request to internal resource. Redirect to external resource"
         set new_uri [string map {".jsp" ".joe"} [HTTP::uri]]
         HTTP::redirect "http://[HTTP::host]$new_uri"
      }
      "*.asp*" {
         log local0. "Found request to internal resource. Redirect to external resource"
         set new_uri [string map {".asp" ".joe"} [HTTP::uri]]
         HTTP::redirect "http://[HTTP::host]$new_uri"
      }
      "*.joe*" {
         log local0. "Found external resource request, mapping URI to internal name"
         HTTP::uri [string map {".joe" ".asp"} [HTTP::uri]]
      }
   }
}

The HTTP_REQUEST event looks at the incoming URI, and for some common application extensions (.html, .jsp, and .asp) we redirect to the public-facing ".joe" URI. The reason for this: if we only masked the internal .asp extension, then requests for all other extensions would return 404s (file not found) from the server while .asp requests would redirect - a good giveaway that we are hiding the .asp extension. By redirecting multiple extensions, there is no indication what the true extension really is. Here are some example redirects this iRule would generate:

http://www.foo.com/index.html -> http://www.foo.com/index.joe
http://www.foo.com/default.asp -> http://www.foo.com/default.joe
http://www.foo.com/login.jsp -> http://www.foo.com/login.joe

The last case in the switch statement turns all .joe requests back into true .asp requests before passing the request along to the application server.
Here are some sample URL transformations that would be made:

http://www.foo.com/default.joe --> http://www.foo.com/default.asp
http://www.foo.com/login.joe?username=foobar --> http://www.foo.com/login.asp?username=foobar

Now that the request is taken care of, we should modify the responses so that all embedded URLs in the response content reflect the external extension.

when HTTP_RESPONSE {
   if { $orig_uri ends_with ".joe" } {
      # Ensure all of the HTTP response is collected
      if { [HTTP::header exists "Content-Length"] } {
         set content_length [HTTP::header "Content-Length"]
      } else {
         set content_length 1000000
      }
      if { $content_length > 0 } {
         HTTP::collect $content_length
      }
   }
   if { [HTTP::header exists "Server"] } {
      HTTP::header replace "Server" "Joe's Awesome App Server"
   }
}
when HTTP_RESPONSE_DATA {
   set new_payload [string map {".asp" ".joe"} [HTTP::payload]]
   HTTP::payload replace 0 [HTTP::payload length] $new_payload
}

In the HTTP_RESPONSE event, we check whether the original URI used our external extension. If so, we trigger collection of the payload from the backend server, perform a simple string replacement in the HTTP_RESPONSE_DATA event, and update the response payload with the HTTP::payload replace command (make sure you have rechunking enabled in your HTTP profile). Oh, and check out the last line in the HTTP_RESPONSE event: another giveaway of your server type is the "Server" HTTP response header, so I went ahead and modified it to remove any server-based identification.

There are several ways you could enhance this, such as adding support for default index pages or for response content coming from non-".joe" URI requests. You may also have more than one internal extension to hide (.dll clearly indicates a Windows machine). One could create multiple mappings (.joe-a -> .asp, .joe-b -> .dll, .joe-c -> .cgi, ...) and make the appropriate redirections in the request and modifications in the response, as sketched below. Keep in mind that there are many ways a server can identify itself to the outside world. This iRule doesn't protect against all types of server "signatures", but it gives you a good start.
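Here is that multiple-mapping idea as a minimal sketch. The extension pairs are the illustrative ones from the paragraph above, and this omits the redirect logic and the payload-collection plumbing already shown; it only demonstrates the two string map translations:

when HTTP_REQUEST {
   # Translate each public extension back to its real internal one
   HTTP::uri [string map {".joe-a" ".asp" ".joe-b" ".dll" ".joe-c" ".cgi"} [HTTP::uri]]
}
when HTTP_RESPONSE_DATA {
   # Hide the real extensions in the returned content (requires the
   # HTTP::collect call shown in the HTTP_RESPONSE event above)
   set new_payload [string map {".asp" ".joe-a" ".dll" ".joe-b" ".cgi" ".joe-c"} [HTTP::payload]]
   HTTP::payload replace 0 [HTTP::payload length] $new_payload
}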
iControl 101 - #11 - Performance Graphs

The BIG-IP stores a history of certain types of data for reporting purposes. The Performance item in the Overview tab shows the report graphs for this performance data. The management GUI gives you options to customize the graph interval, but it stops short of giving you access to the raw data behind the graphs. Never fear: the System.Statistics interface contains methods to query the report types and extract the CSV data behind them. You can even select the start and end times as well as the poll interval. This article will discuss the performance graph methods, how you can query the data behind them, and how to build a chart of your own.

Initialization

This article uses PowerShell and the iControl Cmdlets for PowerShell as the client environment for querying the data. The following setup is required for the examples contained in this article.

PS> Add-PSSnapIn iControlSnapIn
PS> Initialize-F5.iControl -Hostname bigip_address -Username bigip_user -Password bigip_pass
PS> $SystemStats = (Get-F5.iControl).SystemStatistics

Now that that is taken care of, let's dive right in. In the System.Statistics interface, there are two methods that get you to the performance graph data. The first is the get_performance_graph_list() method. This method takes no parameters and returns a list of structures containing each graph's name, title, and description.

PS> $SystemStats.get_performance_graph_list()

graph_name          graph_title                         graph_description
----------          -----------                         -----------------
memory              Memory Used                         Memory Used
activecons          Active Connections                  Active Connections
newcons             New Connections                     New Connections
throughput          Throughput                          Throughput
httprequests        HTTP Requests                       HTTP Requests
ramcache            RAM Cache Utilization               RAM Cache Utilization
detailactcons1      Active Connections                  Active Connections
detailactcons2      Active PVA Connections              Active PVA Connections
detailactcons3      Active SSL Connections              Active SSL Connections
detailnewcons1      Total New Connections               Total New Connections
detailnewcons2      New PVA Connections                 New PVA Connections
detailnewcons3      New ClientSSL Profile Connections   New ClientSSL Profile Connections
detailnewcons4      New Accepts/Connects                New Accepts/Connects
detailthroughput1   Client-side Throughput              Client-side Throughput
detailthroughput2   Server-side Throughput              Server-side Throughput
detailthroughput3   HTTP Compression Rate               HTTP Compression Rate
SSLTPSGraph         SSL Transactions/Sec                SSL Transactions/Sec
GTMGraph            GTM Performance                     GTM Requests and Resolutions
GTMrequests         GTM Requests                        GTM Requests
GTMresolutions      GTM Resolutions                     GTM Resolutions
GTMpersisted        GTM Resolutions Persisted           GTM Resolutions Persisted
GTMret2dns          GTM Resolutions Returned to DNS     GTM Resolutions Returned to DNS
detailcpu0          CPU Utilization                     CPU Usage
detailcpu1          CPU Utilization                     CPU Usage
CPU                 CPU Utilization                     CPU Usage
detailtmm0          TMM Utilization                     TMM Usage
TMM                 TMM Utilization                     TMM CPU Utilization

Creating a Report

Ok, so you've now got the graph names; it's time to move on to accessing the data. The method you'll want is the get_performance_graph_csv_statistics() method. This method takes an array of PerformanceStatisticQuery structures containing the query parameters and returns an array of PerformanceGraphDataCSV structures, one for each input query. The following code illustrates how to make a simple query. The object_name corresponds to the graph_name from the get_performance_graph_list() method. The start_time and end_time allow you to control the data range; values of 0 (the default) will return the entire result set.
If the user specifies a start_time, end_time, and interval that do not exactly match the corresponding values used within the database, the database will attempt to use the closest time or interval available. The actual values used are returned to the user on output.

For querying purposes, the start_time can be specified as:

0: by default, this means 24 hours ago.
N: where N represents the number of seconds since Jan 1, 1970.
-N: where -N represents the number of seconds before now; for example, -3600 means 3600 seconds ago (now - 3600 seconds).

The end_time can be specified as:

0: by default, this means now.
N: where N represents the number of seconds since Jan 1, 1970.
-N: where -N represents the number of seconds before now.

The interval is the suggested sampling interval in seconds; the default of 0 uses the system default. The maximum_rows value allows you to limit the returned rows. Rows are counted from the start_time, and if the number of rows exceeds maximum_rows, the data is truncated at the maximum_rows value. A value of 0 implies the default of all rows.

PS> # Allocate a new Query Object
PS> $Query = New-Object -TypeName iControl.SystemStatisticsPerformanceStatisticQuery
PS> $Query.object_name = "CPU"
PS> $Query.start_time = 0
PS> $Query.end_time = 0
PS> $Query.interval = 0
PS> $Query.maximum_rows = 0
PS> # Make the method call, passing in an array of size one with the specified query
PS> $ReportData = $SystemStats.get_performance_graph_csv_statistics( (,$Query) )
PS> # Look at the contents of the returned data
PS> $ReportData

object_name    : throughput
start_time     : 1208354160
end_time       : 1208440800
interval       : 240
statistic_data : {116, 105, 109, 101...}

Processing the Data

The statistic_data, much like the ConfigSync file transfer data, is transferred as a base64 encoded string, which translates to a byte array in .NET. We will need to convert this byte array into a string, and that can be done with the System.Text.ASCIIEncoding class.

PS> # Allocate a new encoder and turn the byte array into a string
PS> $ASCII = New-Object -TypeName System.Text.ASCIIEncoding
PS> $csvdata = $ASCII.GetString($ReportData[0].statistic_data)
PS> # Look at the resulting dataset
PS> $csvdata
timestamp,"CPU 0","CPU 1"
1208364000,4.3357230000e+00,0.0000000000e+00
1208364240,3.7098920000e+00,0.0000000000e+00
1208364480,3.7187980000e+00,0.0000000000e+00
1208364720,3.3311110000e+00,0.0000000000e+00
1208364960,3.5825310000e+00,0.0000000000e+00
1208365200,3.4826450000e+00,8.3330000000e-03
...

Building a Chart

You will see that the returned dataset is in the form of a comma separated value file. At this point you can take the data and import it into your favorite reporting package. But if you want a quick and dirty way to see it visually, you can use PowerShell to drive Excel into loading the data and generating a default report. The following code converts the CSV format into a tab-separated format, creates an instance of an Excel Application, loads the data, cleans it up, and inserts a default line graph based on the input data.

PS> # Replace commas with tabs in the report data and save to c:\temp\tabdata.txt
PS> $csvdata.Replace(",", "`t") > c:\temp\tabdata.txt
PS> # Allocate an Excel application object and make it visible.
PS> $e = New-Object -comobject "Excel.Application"
PS> $e.visible = $true
PS> # Load the tab delimited data into a workbook and get an instance of the worksheet it was inserted into.
PS> $wb = $e.Workbooks.Open("c:\temp\tabdata.txt")
PS> $ws = $wb.WorkSheets.Item(1)
PS> # Let's remove the first column of timestamps.  Ideally you'll want this to be the
PS> # horizontal axis, and I'll leave it up to you to figure that one out.
PS> $ws.Columns.Item(1).EntireColumn.Select()
PS> $ws.Columns.Item(1).Delete()
PS> # The last row of the data is filled with NaN to indicate the end of the result set.  Let's delete that row.
PS> $ws.Rows.Item($ws.UsedRange.Rows.Count).Select()
PS> $ws.Rows.Item($ws.UsedRange.Rows.Count).Delete()
PS> # Select all the data in the worksheet, create a chart, and change it to a line graph.
PS> $ws.UsedRange.Select()
PS> $chart = $e.Charts.Add()
PS> $chart.Type = 4

Conclusion

Now you should have everything you need to get access to the data behind the performance graphs. There are many ways you could take these examples, and I'll leave it to the chart gurus out there to figure out interesting ways to represent the data. If anyone does find some interesting ways to manipulate this data, please let me know!
iControl 101 - #13 - Data Groups

Data Groups can be useful when writing iRules. A data group is simply a group of related elements, such as a set of IP addresses, URI paths, or document extensions. When used in conjunction with the matchclass or findclass commands, it eliminates the need to list multiple values as arguments in an iRule expression. This article will discuss how to use the methods in the LocalLB::Class interface to manage the Data Groups used within iRules.

Terminology

The first thing you will notice is a mixing of terms: "Class" and "Data Group" are used interchangeably. Class was the original development term, and the marketing folks came up with Data Group later on, so you will see "Class" embedded in the core configuration and the iControl methods (thus the LocalLB::Class interface), while "Data Group" is how they are most often referenced in the administration GUI.

Data Groups come in four flavors: Address, Integer, String, and External. Address Data Groups consist of a list of IP addresses with optional netmasks and are useful for applying a policy based on an originating subnet. Integer Data Groups hold numeric integers and, to add more confusion, are referred to as "value" types in the API. String Data Groups can hold any valid ASCII string. All of these Data Group types have methods specific to their type (e.g. get_string_class_list(), add_address_class_member(), find_value_class_member()). External Data Groups are special in that they have one of the previous types, but there are no direct accessor methods to add or remove elements from the file. Their configuration consists of a file path and name, along with the type (Address, Integer, String), and you will have to use the ConfigSync file transfer APIs to manipulate External Data Groups remotely. External Data Groups are meant for very large lists of values that change frequently.

This article will focus on String Data Groups, but the usage for Address and Integer classes is similar in nature.

Initialization

This article uses PowerShell and the iControl Cmdlets for PowerShell as the client environment for querying the data. The following setup is required for the examples contained in this article.

PS> Add-PSSnapIn iControlSnapIn
PS> Initialize-F5.iControl -Hostname bigip_address -Username bigip_user -Password bigip_pass
PS> $Class = (Get-F5.iControl).LocalLBClass

Listing data groups

The first thing you'll want to do is determine which Data Groups exist. The get_string_class_list() method returns an array of strings containing the names of all of the existing String-based Data Groups.

PS> $Class.get_string_class_list()
test_list
test_class
carp
images

Creating a Data Group

You have to start from somewhere, so most likely you'll be creating a new Data Group to do your bidding. This example will create a data group of image extensions for use in a URI-based filtering iRule. The create_string_class() method takes as input an array of LocalLB::Class::StringClass structures, each containing the class name and an array of members. In this example, the string class "img_extensions" is created with the values ".jpg", ".gif", and ".png". Then the get_string_class_list() method is called to make sure the class was created, and the get_string_class() method is called to return the values passed in the create method.
PS> $StringClass = New-Object -typename iControl.LocalLBClassStringClass
PS> $StringClass.name = "img_extensions"
PS> $StringClass.members = (".jpg", ".gif", ".png")
PS> $Class.create_string_class(,$StringClass)
PS> $Class.get_string_class_list()
test_list
test_class
carp
images
img_extensions
PS> $Class.get_string_class((,"img_extensions"))

name                members
----                -------
img_extensions      {.gif, .jpg, .png}

Adding Data Group items

Once you have an existing Data Group, you will most likely want to add something to it. The add_string_class_member() method takes as input the same LocalLB::Class::StringClass structure containing the name of the Data Group and the list of new items to add to it. The following code adds two values, ".ico" and ".bmp", to the img_extensions Data Group and then queries the values of the Data Group to make sure the call succeeded.

PS> $StringClass.members = (".ico", ".bmp")
PS> $Class.add_string_class_member(,$StringClass)
PS> $Class.get_string_class((,"img_extensions"))

name                members
----                -------
img_extensions      {.bmp, .gif, .ico, .jpg...}

Removing Data Group items

If you can add items, you may very well want to delete items. That's where the delete_string_class_member() method comes in. Like the previous examples, it takes the LocalLB::Class::StringClass structure containing the name of the Data Group and the values you would like to remove. The following example removes the ".gif" and ".jpg" values and queries the current contents of the list.

PS> $StringClass.members = (".gif", ".jpg")
PS> $Class.delete_string_class_member(,$StringClass)
PS> $Class.get_string_class((,"img_extensions"))

name                members
----                -------
img_extensions      {.bmp, .ico, .png}

Deleting Data Groups

The interface wouldn't be complete if you couldn't delete the Data Groups you previously created. The delete_class() method takes as input a string array of class names. This example deletes the img_extensions Data Group and then calls the get_string_class_list() method to verify that it was deleted.

PS> $Class.delete_class(,"img_extensions")
PS> $Class.get_string_class_list()
ts_reputation
test_list
test_class
carp
images

Conclusion

That's about it! Just replace "string" with "address" or "value" in the above methods and you should be well on your way to building any type of Data Group you need for all your iRule needs.
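To close the loop on the iRules side, here is a hedged sketch of the img_extensions group created above actually being used with matchclass, as mentioned at the start of the article. The pool name image_servers is an assumption for illustration; on 9.x the data group is referenced as a $:: variable, while 10.x and later would use [class match [HTTP::path] ends_with img_extensions] instead:

when HTTP_REQUEST {
   # Send requests for known image extensions to a dedicated pool
   if { [matchclass [HTTP::path] ends_with $::img_extensions] } {
      pool image_servers
   }
}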
Getting Started with pyControl v2: Installing on Windows

It's true - pyControl v2 is officially out the door! Here are the installation instructions for Windows (captured below on XP Pro, but they work equally well on 7 Ultimate).

REQUIRED PACKAGES

1. Python 2.4/2.5/2.6 (use the 32-bit version)
2. Python setuptools for your Python version
3. Python SUDS SOAP library
4. pyControl version 2

WINDOWS XP PRO & 7 ULTIMATE INSTALLATION

1. Install Python and setuptools via the available executables.
2. Extract SUDS and pycontrol with 7-zip or a similar tool.
3. Enter the SUDS folder and enter "python setup.py install"
4. Enter the pycontrol folder (drill down to the setup.py script) and enter "python setup.py install"
5. Start python and run these commands to verify the install:

>>> import suds
>>> print suds.__version__
0.3.8
>>> print suds.__build__
(beta) R618-20091204
>>> import pycontrol.pycontrol
>>> print pycontrol.pycontrol.__version__
2.0
>>> print pycontrol.pycontrol.__revision__
r72
# Your revision number might be higher, which is OK.

Alternately, you can simply extract pycontrol.py from the bundle and place it somewhere on your path.
iRule Security 101 - #02 - HTTP Methods and Cross Site Tracing

In this installment of iRule Security 101, I'll refer to OWASP's data validation test "Testing for HTTP Methods and XST (Cross Site Tracing)" and illustrate how to use iRules to build a defense mechanism that blocks potentially dangerous HTTP commands (methods) and ensures that Cross Site Tracing is not possible.

Other articles in the series:

iRule Security 101 – #1 – HTTP Version
iRule Security 101 – #02 – HTTP Methods and Cross Site Tracing
iRule Security 101 – #03 – HTML Comments
iRule Security 101 – #04 – Masking Application Platform
iRule Security 101 – #05 – Avoiding Path Traversal
iRule Security 101 – #06 – HTTP Referer
iRule Security 101 – #07 – FTP Proxy
iRule Security 101 – #08 – Limiting POST Data
iRule Security 101 – #09 – Command Execution

GET and POST are the most used methods for requesting information from a web server, but the HTTP protocol allows several others, including HEAD, PUT, DELETE, TRACE, OPTIONS, and CONNECT. Some of these can cause potential security risks such as remote file access and unintended retrieval of information. TRACE in particular was designed to allow echoing of strings sent to the server and is mainly used for debugging purposes. But the use of this method has been shown to allow for an attack known as Cross Site Tracing, which was discovered by Jeremiah Grossman in his paper titled "Cross Site Tracing (XST)".

when RULE_INIT {
   set INFO 0
   set DEBUG 0
   #------------------------------------------------------------------------
   # HTTP Method
   #------------------------------------------------------------------------
   set sec_http_method_enabled 1
   set sec_http_method_block 1
   set sec_http_method_alert 1
   set sec_http_methods [list \
      "CONNECT" \
      "DELETE" \
      "HEAD" \
      "OPTIONS" \
      "PUT" \
      "TRACE" \
   ]
}
when HTTP_REQUEST {
   #============================================================================
   # HTTP Method
   #============================================================================
   if { $::INFO } { log local0. "ASSERTION: http_method" }
   if { $::sec_http_method_enabled } {
      if { $::DEBUG } { log local0. "  HTTP Method: [HTTP::method]" }
      if { [matchclass [HTTP::method] equals $::sec_http_methods] } {
         if { $::sec_http_method_alert } {
            log local0. "  SEC-ALERT: Invalid HTTP Method found: '[HTTP::method]'"
         }
         if { $::sec_http_method_block } {
            reject
         }
      } else {
         if { $::DEBUG } { log local0. "  PASSED" }
      }
   }
}

In the RULE_INIT event, I've created a few global variables enabling one to turn the verification on or off. This iRule will block all requests in the violation list, so before you deploy it, verify that your web applications don't make use of any of those methods. Without all the extra conditionals, the iRule can be stripped down to the following couple of lines:

when RULE_INIT {
   set sec_http_methods [list "CONNECT" "DELETE" "HEAD" "OPTIONS" "PUT" "TRACE"]
}
when HTTP_REQUEST {
   if { [matchclass [HTTP::method] equals $::sec_http_methods] } {
      reject
   }
}
For 10.x and later, you should use the class command instead of matchclass. Also, global variables should not be used on CMP systems. The iRules below work for later versions. Note, however, that the logic is reversed in these versions: you specify the methods you want to allow and block all others.

# Via a list in RULE_INIT
when RULE_INIT {
   set static::sec_http_methods [list "DELETE" "GET" "PATCH" "POST" "PUT"]
}
when HTTP_REQUEST {
   if { [lsearch -exact $static::sec_http_methods [HTTP::method]] == -1 } {
      reject
   }
}

# Via a data group called sec_http_methods
when HTTP_REQUEST {
   if { not [class match [HTTP::method] equals sec_http_methods] } {
      reject
   }
}

Also available in later versions, you can use an LTM Policy and forgo the iRule altogether:

ltm policy sec_http_method {
   controls { forwarding }
   last-modified 2016-09-21:10:17:19
   requires { http }
   rules {
      allow_methods {
         actions {
            0 {
               forward
               reset
            }
         }
         conditions {
            0 {
               http-method
               not
               values { GET POST PUT PATCH DELETE }
            }
         }
      }
   }
   status published
   strategy first-match
}

For more information on HTTP method attacks and Cross Site Tracing, take a look at OWASP's documentation on the topic.
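One more variation worth noting before moving on: reject silently drops the TCP connection, and some clients behave better with a well-formed HTTP error instead. A minimal sketch using HTTP::respond, reusing the static:: list from the example above (the Allow header contents are just the same illustrative method list):

when HTTP_REQUEST {
   if { [lsearch -exact $static::sec_http_methods [HTTP::method]] == -1 } {
      # Answer with a proper 405 and an Allow header instead of
      # resetting the connection
      HTTP::respond 405 content "Method Not Allowed" "Allow" "GET, POST, PUT, PATCH, DELETE"
   }
}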
iRule Security 101 - #07 - FTP Proxy

We get questions all the time about custom application protocols and how one would go about writing an iRule to "understand" what's going on with that protocol. In this article, I will look at the FTP protocol and show you how one could write the logic to understand that application flow and selectively turn on and off support for various commands within the protocol.

Other articles in the series:

iRule Security 101 – #1 – HTTP Version
iRule Security 101 – #02 – HTTP Methods and Cross Site Tracing
iRule Security 101 – #03 – HTML Comments
iRule Security 101 – #04 – Masking Application Platform
iRule Security 101 – #05 – Avoiding Path Traversal
iRule Security 101 – #06 – HTTP Referer
iRule Security 101 – #07 – FTP Proxy
iRule Security 101 – #08 – Limiting POST Data
iRule Security 101 – #09 – Command Execution

FTP

FTP, for those who don't know, stands for File Transfer Protocol. FTP is designed to allow for the remote uploading and downloading of documents. I'm not going to dig deep into the protocol in this document, but for those who want to explore further, it is defined in RFC 959. The basics of FTP are as follows. Requests are made as single-line commands formatted as:

COMMAND COMMAND_ARGS CRLF

Some FTP commands include USER, PASS, and ACCT for authentication, CWD for changing directories, LIST for requesting the contents of a directory, and QUIT for terminating a session. Responses to commands are made in two ways. Over the main "control" connection, the server will process the request and then return a response in this format:

CODE DESCRIPTION CRLF

where CODE is the status code defined for the given request command. These have some similarity to HTTP response codes (200 -> OK, 500 -> Error), but don't count on them being exactly the same in every situation. For commands that do not request content from the server (USER, PASS, CWD, etc.), the control connection is all that is used. But there are other commands that specifically request data from the server: RETR (downloading a file), STOR (uploading a file), and LIST (requesting a current directory listing) are examples of these. For such commands, the status is still returned on the control channel, but the data is passed back over a separate "data" channel that is configured by the client with either the PORT or PASV command.

Writing the Proxy

We'll start off the iRule with a set of global variables that are used across all connections. In this iRule we will only inspect the following FTP commands: USER, PASV, RETR, STOR, RNFR, RNTO, PORT, RMD, MKD, LIST, PWD, CWD, and DELE. This iRule can easily be expanded to include other commands in the FTP command set. In the RULE_INIT event we set some global variables that determine how we want the proxy to handle the specific commands. A value of 1 for a "block" option makes the iRule deny that command from reaching the backend FTP server; a value of 0 allows the command to pass through.
when RULE_INIT {
   set DEBUG 1
   #------------------------------------------------------------------------
   # FTP Commands
   #------------------------------------------------------------------------
   set sec_block_anonymous_ftp 1
   set sec_block_passive_ftp 0
   set sec_block_retr_cmd 0
   set sec_block_stor_cmd 0
   set sec_block_rename_cmd 0
   set sec_block_port_cmd 0
   set sec_block_rmd_cmd 0
   set sec_block_mkd_cmd 0
   set sec_block_list_cmd 0
   set sec_block_pwd_cmd 0
   set sec_block_cwd_cmd 0
   set sec_block_dele_cmd 1
}

Since we will not be relying on a BIG-IP profile to handle the application parsing, we'll be using the low-level TCP events to capture the requests and responses. When a client establishes a connection, the CLIENT_ACCEPTED event occurs; from within this event we trigger a collection of the TCP data so that we can inspect it in the CLIENT_DATA event.

when CLIENT_ACCEPTED {
   if { $::DEBUG } { log local0. "client accepted" }
   TCP::collect
   TCP::release
}

In the CLIENT_DATA event, we look at the request with the TCP::payload command. We then feed that value into a switch statement with options for each of the commands. For commands we want to disallow, we issue an FTP error response code with a description string, empty out the payload, and return from the iRule - thus preventing the command from ever reaching the server. For all other cases, we allow the TCP engine to continue its processing and then enter data collection mode again.

when CLIENT_DATA {
   if { $::DEBUG } { log local0. "----------------------------------------------------------" }
   if { $::DEBUG } { log local0. "payload [TCP::payload]" }

   set client_data [string trim [TCP::payload]]

   #---------------------------------------------------
   # Block or alert specific commands
   #---------------------------------------------------
   switch -glob $client_data {
      "USER anonymous*" -
      "USER ftp*" {
         if { $::DEBUG } { log local0. "LOG: Anonymous login detected" }
         if { $::sec_block_anonymous_ftp } {
            TCP::respond "530 Guest user not allowed\r\n"
            reject
         }
      }
      "PASV*" {
         if { $::DEBUG } { log local0. "LOG: passive request detected" }
         if { $::sec_block_passive_ftp } {
            TCP::respond "502 Passive commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "RETR*" {
         if { $::DEBUG } { log local0. "LOG: RETR request detected" }
         if { $::sec_block_retr_cmd } {
            TCP::respond "550 RETR commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "STOR*" {
         if { $::DEBUG } { log local0. "LOG: STOR request detected" }
         if { $::sec_block_stor_cmd } {
            TCP::respond "550 STOR commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "RNFR*" -
      "RNTO*" {
         if { $::DEBUG } { log local0. "LOG: RENAME request detected" }
         if { $::sec_block_rename_cmd } {
            TCP::respond "550 RENAME commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "PORT*" {
         if { $::DEBUG } { log local0. "LOG: PORT request detected" }
         if { $::sec_block_port_cmd } {
            TCP::respond "550 PORT commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "RMD*" {
         if { $::DEBUG } { log local0. "LOG: RMD request detected" }
         if { $::sec_block_rmd_cmd } {
            TCP::respond "550 RMD commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
"LOG: MKD request detected" } if { $::sec_block_mkd_cmd } { TCP::respond "550 MKD commands not allowed\r\n" TCP::payload replace 0 [string length $client_data] "" return } } "LIST*" { if { $::DEBUG } { log local0. "LOG: LIST request detected" } if { $::sec_block_list_cmd } { TCP::respond "550 LIST commands not allowed\r\n" TCP::payload replace 0 [string length $client_data] "" return } } "PWD*" { if { $::DEBUG } { log local0. "LOG: PWD request detected" } if { $::sec_block_pwd_cmd } { TCP::respond "550 PWD commands not allowed\r\n" TCP::payload replace 0 [string length $client_data] "" return } } "CWD*" { if { $::DEBUG } { log local0. "LOG: CWD request detected" } if { $::sec_block_cwd_cmd } { TCP::respond "550 CWD commands not allowed\r\n" TCP::payload replace 0 [string length $client_data] "" return } } "DELE*" { if { $::DEBUG } { log local0. "LOG: DELE request detected" } if { $::sec_block_dele_cmd } { TCP::respond "550 DELE commands not allowed\r\n" TCP::payload replace 0 [string length $client_data] "" return } } } TCP::release TCP::collect } Once a connection has been made to the backend server, the SERVER_CONNECTED event will be raised. In this event we will release the context and issue a collect to occur for the server data. The server data will then be returned, and optionally logged, in the SERVER_DATA event. when SERVER_CONNECTED { if { $::DEBUG } { log "server connected" } TCP::release TCP::collect } when SERVER_DATA { if { $::DEBUG } { log local0. "payload <[TCP::payload]>" } TCP::release TCP::collect } And finally when the client closes it's connection,. the CLIENT_CLOSED event will be fired and we will log the fact that the session is over. when CLIENT_CLOSED { if { $::DEBUG } { log local0. "client closed" } } Conclusion This article shows how one can use iRules to inspect, and optionally secure, an application based on command sets within that application. Not all application protocols behave like FTP (TELNET for instance sends one character at a time and it's up to the proxy to consecutively request more data until the request is complete). But this should give you the tools you need to start inspection on your TCP based application. Get the Flash Player to see this player.3.9KViews0likes5Comments