Intermediate iRules: Nested Conditionals
Conditionals are a pretty standard tool in every programmer's toolbox. They are the constructs that allow us to decide when we want certain actions to happen, based on, well, conditions that can be determined within our code. This concept is as old as compilers. Chances are, if you're writing code, you're going to be using a slew of these things, even in an event-based language like iRules. iRules is no different than any other programming/scripting language when it comes to conditionals; we have them. Sure, how they're implemented and what they look like change from language to language, but most of the same basic tools are there: if, else, switch, elseif, etc. Just about any example that you might run across on DevCentral is going to contain some example of these being put to use. Learning which conditional to use in each situation is an integral part of learning how to code effectively.

Once you have that under control, however, there's still plenty more to learn. Now that you're comfortable using a single conditional, what about starting to combine them? There are many times when it makes more sense to use a pair or more of conditionals in place of a single conditional along with logical operators. For example:

if { [HTTP::host] eq "bob.com" and [HTTP::uri] starts_with "/uri1" } {
   pool pool1
} elseif { [HTTP::host] eq "bob.com" and [HTTP::uri] starts_with "/uri2" } {
   pool pool2
} elseif { [HTTP::host] eq "bob.com" and [HTTP::uri] starts_with "/uri3" } {
   pool pool3
}

can be re-written to use a pair of conditionals instead, making it far more efficient. To do this, you take the common case shared among the comparisons and perform that comparison only once, then perform the other comparisons only if that result returns as desired. This is more easily described as nested conditionals, and it looks like this:

if { [HTTP::host] eq "bob.com" } {
   if { [HTTP::uri] starts_with "/uri1" } {
      pool pool1
   } elseif { [HTTP::uri] starts_with "/uri2" } {
      pool pool2
   } elseif { [HTTP::uri] starts_with "/uri3" } {
      pool pool3
   }
}

These two examples are logically equivalent, but the latter is far more efficient. This is because in all the cases where the host is not equal to "bob.com", no other inspection needs to be done, whereas in the first example you must perform the host check three times, as well as the URI check every single time, even though you could have stopped the process earlier.

While basic, this concept is important in general when coding. It becomes exponentially more important, as do almost all optimizations, when talking about programming in iRules. A script being executed on a server firing perhaps once per minute benefits from small optimizations. An iRule being executed somewhere in the order of 100,000 times per second benefits that much more.

A slightly more interesting example, perhaps, is performing the same logical nesting while using different operators. In this example we'll look at a series of if/elseif statements that are already using nesting, and take a look at how we might use the switch command to optimize things even further. I've seen multiple examples of people shying away from switch when nesting their logic because it looks odd to them or they're not quite sure how it should be structured. Hopefully this will help clear things up.
First, the example using if statements:

when HTTP_REQUEST {
   if { [HTTP::host] eq "secure.domain.com" } {
      HTTP::header insert "Client-IP:[IP::client_addr]"
      pool sslServers
   } elseif { [HTTP::host] eq "www.domain.com" } {
      HTTP::header insert "Client-IP:[IP::client_addr]"
      pool httpServers
   } elseif { [HTTP::host] ends_with "domain.com" and [HTTP::uri] starts_with "/secure" } {
      HTTP::header insert "Client-IP:[IP::client_addr]"
      pool sslServers
   } elseif { [HTTP::host] ends_with "domain.com" and [HTTP::uri] starts_with "/login" } {
      HTTP::header insert "Client-IP:[IP::client_addr]"
      pool httpServers
   } elseif { [HTTP::host] eq "intranet.myhost.com" } {
      HTTP::header insert "Client-IP:[IP::client_addr]"
      pool internal
   }
}

As you can see, this is completely functional and would do the job just fine. There are definitely some improvements that can be made, though. Let's try using a switch statement instead of several if comparisons for improved performance. To do that, we're going to have to use an if nested inside a switch comparison. While this might be new to some or look a bit odd if you're not used to it, it's completely valid and often the most efficient structure you're going to get. This is what the above code would look like cleaned up and put into a switch:

when HTTP_REQUEST {
   HTTP::header insert "Client-IP:[IP::client_addr]"
   switch -glob [HTTP::host] {
      "secure.domain.com" {
         pool sslServers
      }
      "www.domain.com" {
         pool httpServers
      }
      "*.domain.com" {
         if { [HTTP::uri] starts_with "/secure" } {
            pool sslServers
         } else {
            pool httpServers
         }
      }
      "intranet.myhost.com" {
         pool internal
      }
   }
}

As you can see, this is not only easier to read and maintain, but it will also prove to be more efficient. We've moved to the more efficient switch structure, we've gotten rid of the repeat host comparisons that were happening above with the /secure vs. /login URIs, and while I was at it I got rid of all those repeated header insertions, since that was happening in every case anyway. Hopefully the benefit this technique can offer is clear, and these examples did the topic some justice. With any luck, you'll nest those conditionals with confidence now.
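One refinement worth considering: the switch above silently takes no pool action when the host matches none of the listed patterns, and a default arm catches that case. The following is a minimal sketch reusing the pool names from the example above; routing unmatched hosts to httpServers is an assumption made for illustration, not something stated in the original article.

when HTTP_REQUEST {
   switch -glob [HTTP::host] {
      "secure.domain.com" { pool sslServers }
      "www.domain.com" { pool httpServers }
      default {
         # Hosts that match none of the patterns above land here
         pool httpServers
      }
   }
}

A default arm is also a convenient place to log or reject requests for hosts you do not expect to serve.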
iRule Security 101 - #07 - FTP Proxy

We get questions all the time about custom application protocols and how one would go about writing an iRule to "understand" what's going on with that protocol. In this article, I will look at the FTP protocol and show you how one could write the logic to understand that application flow and selectively turn on and off support for various commands within the protocol.

Other articles in the series:
- iRule Security 101 – #1 – HTTP Version
- iRule Security 101 – #02 – HTTP Methods and Cross Site Tracing
- iRule Security 101 – #03 – HTML Comments
- iRule Security 101 – #04 – Masking Application Platform
- iRule Security 101 – #05 – Avoiding Path Traversal
- iRule Security 101 – #06 – HTTP Referer
- iRule Security 101 – #07 – FTP Proxy
- iRule Security 101 – #08 – Limiting POST Data
- iRule Security 101 – #09 – Command Execution

FTP

FTP, for those who don't know, stands for File Transfer Protocol. FTP is designed to allow for the remote uploading and downloading of documents. I'm not going to dig deep into the protocol in this document, but for those who want to explore further, it is defined in RFC 959. The basics of FTP are as follows. Requests are made as single-line commands formatted as:

COMMAND COMMAND_ARGS CRLF

Some FTP commands include USER, PASS, and ACCT for authentication, CWD for changing directories, LIST for requesting the contents of a directory, and QUIT for terminating a session. Responses to commands are made in two ways. Over the main "control" connection, the server will process the request and then return a response in this format:

CODE DESCRIPTION CRLF

where CODE is the status code defined for the given request command. These have some similarity to HTTP response codes (200 -> OK, 500 -> Error), but don't count on them being exactly the same for each situation. For commands that do not request content from the server (USER, PASS, CWD, etc.), the control connection is all that is used. But there are other commands that specifically request data from the server. RETR (downloading a file), STOR (uploading a file), and LIST (requesting a current directory listing) are examples of these types of commands. For these commands, the status is still returned on the control channel, but the data is passed back in a separate "data" channel that is configured by the client with either the PORT or PASV commands.

Writing the Proxy

We'll start off the iRule with a set of global variables that are used across all connections. In this iRule we will only inspect the following FTP commands: USER, PASV, RETR, STOR, RNFR, RNTO, PORT, RMD, MKD, LIST, PWD, CWD, and DELE. This iRule can easily be expanded to include other commands in the FTP command set. In the RULE_INIT event we will set some global variables to determine how we want the proxy to handle the specific commands. A value of 1 for a "block" option will make the iRule deny that command from reaching the backend FTP server. Setting a value of 0 for the block flag will allow the command to pass through.
when RULE_INIT {
   set DEBUG 1

   #------------------------------------------------------------------------
   # FTP Commands
   #------------------------------------------------------------------------
   set sec_block_anonymous_ftp 1
   set sec_block_passive_ftp 0
   set sec_block_retr_cmd 0
   set sec_block_stor_cmd 0
   set sec_block_rename_cmd 0
   set sec_block_port_cmd 0
   set sec_block_rmd_cmd 0
   set sec_block_mkd_cmd 0
   set sec_block_list_cmd 0
   set sec_block_pwd_cmd 0
   set sec_block_cwd_cmd 0
   set sec_block_dele_cmd 1
}

Since we will not be relying on a BIG-IP profile to handle the application parsing, we'll be using the low level TCP events to capture the requests and responses. When a client establishes a connection, the CLIENT_ACCEPTED event will occur; from within this event we'll have to trigger a collection of the TCP data so that we can inspect it in the CLIENT_DATA event.

when CLIENT_ACCEPTED {
   if { $::DEBUG } { log local0. "client accepted" }
   TCP::collect
   TCP::release
}

In the CLIENT_DATA event, we will look at the request with the TCP::payload command. We will then feed that value into a switch statement with options for each of the commands. For commands that are found that we want to disallow, we will issue an FTP error response code with a description string, empty out the payload, and return from the iRule - thus breaking the connection. For all other cases, we allow the TCP engine to continue on with its processing and then enter into data collect mode again.

when CLIENT_DATA {
   if { $::DEBUG } { log local0. "----------------------------------------------------------" }
   if { $::DEBUG } { log local0. "payload [TCP::payload]" }

   set client_data [string trim [TCP::payload]]

   #---------------------------------------------------
   # Block or alert specific commands
   #---------------------------------------------------
   switch -glob $client_data {
      "USER anonymous*" -
      "USER ftp*" {
         if { $::DEBUG } { log local0. "LOG: Anonymous login detected" }
         if { $::sec_block_anonymous_ftp } {
            TCP::respond "530 Guest user not allowed\r\n"
            reject
         }
      }
      "PASV*" {
         if { $::DEBUG } { log local0. "LOG: passive request detected" }
         if { $::sec_block_passive_ftp } {
            TCP::respond "502 Passive commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "RETR*" {
         if { $::DEBUG } { log local0. "LOG: RETR request detected" }
         if { $::sec_block_retr_cmd } {
            TCP::respond "550 RETR commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "STOR*" {
         if { $::DEBUG } { log local0. "LOG: STOR request detected" }
         if { $::sec_block_stor_cmd } {
            TCP::respond "550 STOR commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "RNFR*" -
      "RNTO*" {
         if { $::DEBUG } { log local0. "LOG: RENAME request detected" }
         if { $::sec_block_rename_cmd } {
            TCP::respond "550 RENAME commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "PORT*" {
         if { $::DEBUG } { log local0. "LOG: PORT request detected" }
         if { $::sec_block_port_cmd } {
            TCP::respond "550 PORT commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "RMD*" {
         if { $::DEBUG } { log local0. "LOG: RMD request detected" }
         if { $::sec_block_rmd_cmd } {
            TCP::respond "550 RMD commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "MKD*" {
         if { $::DEBUG } { log local0. "LOG: MKD request detected" }
         if { $::sec_block_mkd_cmd } {
            TCP::respond "550 MKD commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "LIST*" {
         if { $::DEBUG } { log local0. "LOG: LIST request detected" }
         if { $::sec_block_list_cmd } {
            TCP::respond "550 LIST commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "PWD*" {
         if { $::DEBUG } { log local0. "LOG: PWD request detected" }
         if { $::sec_block_pwd_cmd } {
            TCP::respond "550 PWD commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "CWD*" {
         if { $::DEBUG } { log local0. "LOG: CWD request detected" }
         if { $::sec_block_cwd_cmd } {
            TCP::respond "550 CWD commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "DELE*" {
         if { $::DEBUG } { log local0. "LOG: DELE request detected" }
         if { $::sec_block_dele_cmd } {
            TCP::respond "550 DELE commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
   }
   TCP::release
   TCP::collect
}

Once a connection has been made to the backend server, the SERVER_CONNECTED event will be raised. In this event we will release the context and issue a collect for the server data. The server data will then be returned, and optionally logged, in the SERVER_DATA event.

when SERVER_CONNECTED {
   if { $::DEBUG } { log "server connected" }
   TCP::release
   TCP::collect
}

when SERVER_DATA {
   if { $::DEBUG } { log local0. "payload <[TCP::payload]>" }
   TCP::release
   TCP::collect
}

And finally, when the client closes its connection, the CLIENT_CLOSED event will be fired and we will log the fact that the session is over.

when CLIENT_CLOSED {
   if { $::DEBUG } { log local0. "client closed" }
}

Conclusion

This article shows how one can use iRules to inspect, and optionally secure, an application based on command sets within that application. Not all application protocols behave like FTP (TELNET, for instance, sends one character at a time and it's up to the proxy to consecutively request more data until the request is complete). But this should give you the tools you need to start inspection on your TCP-based application.
iRules 101 - #12 - The Session Command

One of the things that makes iRules so incredibly powerful is the fact that it is a true scripting language, or at least based on one. The fact that they give you the tools that TCL brings to the table - regular expressions, string functions, even things as simple as storing, manipulating and recalling variable data - sets iRules apart from the rest of the crowd. It also makes it possible to do some pretty impressive things with connection data and massaging/directing it the way you want it.

Other articles in the series:
- Getting Started with iRules: Intro to Programming with Tcl
- Getting Started with iRules: Control Structures & Operators
- Getting Started with iRules: Variables
- Getting Started with iRules: Directing Traffic
- Getting Started with iRules: Events & Priorities
- Intermediate iRules: catch
- Intermediate iRules: Data-Groups
- Getting Started with iRules: Logging & Comments
- Advanced iRules: Regular Expressions
- iRules 101 - #12 - The Session Command
- Intermediate iRules: Nested Conditionals
- Intermediate iRules: Handling Strings
- Intermediate iRules: Handling Lists
- Advanced iRules: Scan
- Advanced iRules: Binary Scan

Sometimes, though, a simple variable won't do. You've likely heard of global variables in one of the earlier 101 series and read the warning there, and are looking for another option. So here you are: you have some data you need to store, which needs to persist across multiple connections. You need it to be efficient and fast, and you don't want to have to do a whole lot of complex management of a data structure. One of the many ways that you can store and access information in your iRule fits all of these requirements perfectly, little known as it may be. For this scenario I'd recommend the session command. There are three main permutations of the session command that you'll be using when storing and referencing data within the session table. These are:

- session add: Stores user's data under the specified key for the specified persistence mode
- session lookup: Returns user data previously stored using session add
- session delete: Removes user data previously stored using session add

A simple example of adding some information to the session table would look like:

when CLIENTSSL_CLIENTCERT {
   set ssl_cert [SSL::cert 0]
   session add ssl $ssl_cert 90
}

By using the session add command, you can manually place a specific piece of data into the LTM's session table. You can then look it up later, by unique key, with the session lookup command and use the data in a different section of your iRule, or in another connection altogether. This can be helpful in situations where data needs to be passed between iRules or events in ways that it might not normally be when using a simple variable.
Such as mining SSL data from the connection events, as below:

when CLIENTSSL_CLIENTCERT {
   # Set results in the session so they are available to other events
   session add ssl [SSL::sessionid] [list [X509::issuer] [X509::subject] [X509::version]] 180
}

when HTTP_REQUEST {
   # Retrieve certificate information from the session
   set sslList [session lookup ssl [SSL::sessionid]]
   set issuer [lindex $sslList 0]
   set subject [lindex $sslList 1]
   set version [lindex $sslList 2]
}

Because the session table is optimized and designed to handle every connection that comes into the LTM, it's very efficient and can handle quite a large number of items. Also note that, as above, you can pass structured information such as TCL lists into the session table and they will remain intact. Keep in mind, though, that there is currently no way to count the number of entries in the table with a certain key, so you'll have to build all of your own processing logic for now, where necessary.

It's also important to note that there is more than one session table. If you look at the above example, you'll see that before we listed any key or data to be stored, we used the command session add ssl. Note the "ssl" portion of this command. This is a reference to which session table the data will be stored in. For our purposes here there are effectively two session tables: ssl and uie. Be sure you're accessing the same one in your session lookup section as you are in your session add section, or you'll never find the data you're after. This is pretty easy to keep straight once you see it. It looks like:

session add uie ...
session lookup uie

Or:

session add ssl ...
session lookup ssl

You can find complete documentation on the session command in the iRules wiki, as well as some great examples that depict some more advanced iRules making use of the session command to great success. Check out Codeshare for more examples.
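The examples above use session add and session lookup; session delete rounds out the set by removing an entry before its timeout expires. Here is a minimal sketch; the choice of the client address as the key and the /login and /logout URIs are assumptions for illustration, not part of the original examples.

when HTTP_REQUEST {
   if { [HTTP::uri] starts_with "/login" } {
      # Remember that this client authenticated, for 300 seconds
      session add uie [IP::client_addr] "authenticated" 300
   } elseif { [HTTP::uri] starts_with "/logout" } {
      # Remove the entry explicitly instead of waiting for the timeout
      session delete uie [IP::client_addr]
   }
}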
Getting Started with Bigsuds - a New Python Library for iControl

I imagine the progression for you, the reader, will be something like this in the first six- or seven-hundred milliseconds after reading the title: Oh cool! Wait, what? Don't we already have like two libraries for Python? Really, a third library for Python?

Yes. An emphatic yes. The first iteration of pycontrol (pc1) was based on the zsi library, which hasn't been updated in years and was abandoned with the development of the second iteration, pycontrol v2 (pc2), which switched to the active and well-maintained suds library. Bigsuds, like pycontrol v2, is also based on the suds library. So why bigsuds? There are several advantages to using the bigsuds library.

No need to specify which WSDLs to download

In pycontrol v2, any iControl interface you wish to work with must be specified when you instantiate the BIG-IP, as well as specifying the local directory or loading from URL for the WSDLs. In bigsuds, just specify the host, username, and password (username and password optional if using test box defaults of admin/admin) and you're good to go.

Currently in pycontrol v2:

>>> import pycontrol.pycontrol as pc
>>> b = pc.BIGIP(
...     hostname = '192.168.6.11',
...     username = 'admin',
...     password = 'admin',
...     fromurl = True,
...     wsdls = ['LocalLB.Pool'])
>>> b.LocalLB.Pool.get_list()
[/Common/p1, /Common/p2, /Common/p3, /Common/p5]

And here in bigsuds:

>>> import bigsuds
>>> b = bigsuds.BIGIP(hostname = '192.168.6.11')
>>> b.LocalLB.Pool.get_list()
['/Common/p1', '/Common/p2', '/Common/p3', '/Common/p5']
>>> b.GlobalLB.Pool.get_list()
['/Common/p2', '/Common/p1']

No need to define the typefactory for write operations

This was the most challenging aspect of pycontrol v2 for me personally. I would get it correct sometimes; often I'd bang my head against the wall wondering what little thing I missed to prevent success. The cool thing with bigsuds is that you are just passing lists for sequences and lists of dictionaries for structures. No object creation is necessary before making the iControl calls. It's a thing of beauty.

Creating a two member pool in pycontrol v2:

lbmeth = b.LocalLB.Pool.typefactory.create('LocalLB.LBMethod')

# This is basically a stub holder of member items that we need to wrap up.
mem_sequence = b.LocalLB.Pool.typefactory.create('Common.IPPortDefinitionSequence')

# Now we'll create some pool members.
mem1 = b.LocalLB.Pool.typefactory.create('Common.IPPortDefinition')
mem2 = b.LocalLB.Pool.typefactory.create('Common.IPPortDefinition')

# Note how this is 'pythonic' now. We set attributes against the objects, then
# pass them in.
mem1.address = '1.2.3.4'
mem1.port = 80
mem2.address = '1.2.3.4'
mem2.port = 81

# Create a 'sequence' of pool members.
mem_sequence.item = [mem1, mem2]

# Let's create our pool.
name = 'PC2' + str(int(time.time()))
b.LocalLB.Pool.create(pool_names = [name], lb_methods = \
    [lbmeth.LB_METHOD_ROUND_ROBIN], members = [mem_sequence])

In contrast, here is a two member pool in bigsuds:

>>> b.LocalLB.Pool.create_v2(['/Common/Pool1'],['LB_METHOD_ROUND_ROBIN'],[[{'port':80, 'address':'1.2.3.4'},{'port':81, 'address':'1.2.3.4'}]])

Notice above that I did not use the method parameters. They are not required in bigsuds, though you can certainly include them.
This could be written in the long form as:

>>> b.LocalLB.Pool.create_v2(pool_names = ['/Common/Pool1'], lb_methods = ['LB_METHOD_ROUND_ROBIN'], members = [[{'port':80, 'address':'1.2.3.4'},{'port':81, 'address':'1.2.3.4'}]])

Standard python data types are returned

There's no more dealing with data returned like this:

>>> p2.LocalLB.Pool.get_statistics(pool_names=['/Common/p2'])
(LocalLB.Pool.PoolStatistics){
   statistics[] = (LocalLB.Pool.PoolStatisticEntry){
      pool_name = "/Common/p2"
      statistics[] = (Common.Statistic){
         type = "STATISTIC_SERVER_SIDE_BYTES_IN"
         value = (Common.ULong64){
            high = 0
            low = 0
         }
         time_stamp = 0
      },
      (Common.Statistic){
         type = "STATISTIC_SERVER_SIDE_BYTES_OUT"
         value = (Common.ULong64){
            high = 0
            low = 0
         }
         time_stamp = 0
      },

Data is standard python types: strings, lists, dictionaries. That same data returned by bigsuds:

>>> b.LocalLB.Pool.get_statistics(['/Common/p1'])
{'statistics': [{'pool_name': '/Common/p1',
   'statistics': [{'time_stamp': 0,
      'type': 'STATISTIC_SERVER_SIDE_BYTES_IN',
      'value': {'high': 0, 'low': 0}},
      {'time_stamp': 0,
       'type': 'STATISTIC_SERVER_SIDE_BYTES_OUT',
       'value': {'high': 0, 'low': 0}}

Perhaps not as readable in this form as with pycontrol v2, but far easier to work with programmatically.

Better session and transaction support

George covered the benefits of sessions in his v11 iControl: Sessions article in fine detail, so I'll leave that to the reader. Regarding implementations, bigsuds handles sessions with a built-in utility called with_session_id. Example code:

>>> bigip2 = b.with_session_id()
>>> bigip2.System.Session.set_transaction_timeout(99)
>>> print b.System.Session.get_transaction_timeout()
5
>>> print bigip2.System.Session.get_transaction_timeout()
99

Also, bigsuds has built-in transaction utilities as well. In the sample code below, creating a new pool that is dependent on a non-existent pool being deleted results in an error as expected, but it also prevents the pool from the previous step from being created, as shown in the get_list method call.

>>> try:
...     with bigsuds.Transaction(bigip2):
...         bigip2.LocalLB.Pool.create_v2(['mypool'],['LB_METHOD_ROUND_ROBIN'],[[]])
...         bigip2.LocalLB.Pool.delete_pool(['nonexistent'])
... except bigsuds.OperationFailed, e:
...     print e
...
Server raised fault: 'Exception caught in System::urn:iControl:System/Session::submit_transaction()
Exception: Common::OperationFailed
primary_error_code : 16908342 (0x01020036)
secondary_error_code : 0
error_string : 01020036:3: The requested pool (/Common/nonexistent) was not found.'
>>> bigip2.LocalLB.Pool.get_list()
['/Common/Pool1', '/Common/p1', '/Common/p2', '/Common/p3', '/Common/p5', '/Common/Pool3', '/Common/Pool2']

F5 maintained

Community member L4L7, the author of the pycontrol v2 library, is no longer with F5 and just doesn't have the cycles to maintain the library going forward. Bigsuds author Garron Moore, however, works in house and will fix bugs and enhance as time allows. Note that all iControl libraries are considered experimental and are not officially supported by F5 Networks. Library maintainers for all the languages will do their best to fix bugs and introduce features as time allows. Source is provided, though, and bugs can and are encouraged to be fixed by the community!

Installing bigsuds

Make sure you have suds installed, grab a copy of bigsuds (you'll need to log in), and extract the contents.
You can use the easy setup tools to install it to python's site-packages library like this:

jrahm@jrahm-dev:/var/tmp$ tar xvfz bigsuds-1.0.tar.gz
bigsuds-1.0/
bigsuds-1.0/setup.py
bigsuds-1.0/bigsuds.egg-info/
bigsuds-1.0/bigsuds.egg-info/top_level.txt
bigsuds-1.0/bigsuds.egg-info/requires.txt
bigsuds-1.0/bigsuds.egg-info/SOURCES.txt
bigsuds-1.0/bigsuds.egg-info/dependency_links.txt
bigsuds-1.0/bigsuds.egg-info/PKG-INFO
bigsuds-1.0/setup.cfg
bigsuds-1.0/bigsuds.py
bigsuds-1.0/MANIFEST.in
bigsuds-1.0/PKG-INFO
jrahm@jrahm-dev:/var/tmp$ cd bigsuds-1.0/
jrahm@jrahm-dev:/var/tmp/bigsuds-1.0$ python setup.py install

Doing it that way, you can just enter the python shell (or run your script) with a simple 'import bigsuds' command. If you don't want to install it that way, you can just extract bigsuds.py from the download, drop it in a directory of your choice, and make a path reference in the shell or script:

>>> import bigsuds
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named bigsuds
>>> import sys
>>> sys.path.append(r'/home/jrahm/dev/bigsuds-1.0')
>>> import bigsuds
>>>

Conclusion

Garron Moore's bigsuds contribution is a great new library for python users. There is work to be done to convert your pycontrol v2 samples, but the flexibility and clarity in the new library makes it worth it, in this guy's humble opinion. A new page in the iControl wiki has been created for bigsuds developments. Please check it out, community! For now, I've converted a few scripts to bigsuds, linked in the aforementioned page as well as directly below:

- Get GTM Pool Status
- Get LTM Pool Status
- Get or Set GTM Pool TTL
- Create or Modify an LTM Pool
iRule Security 101 - #06 - HTTP Referer

In this article, I'm going to talk about the HTTP "Referer" header, how it's used, and how you can use iRules to ensure that an access request to a website is coming from where you want it to come from.

Other articles in the series:
- iRule Security 101 – #1 – HTTP Version
- iRule Security 101 – #02 – HTTP Methods and Cross Site Tracing
- iRule Security 101 – #03 – HTML Comments
- iRule Security 101 – #04 – Masking Application Platform
- iRule Security 101 – #05 – Avoiding Path Traversal
- iRule Security 101 – #06 – HTTP Referer
- iRule Security 101 – #07 – FTP Proxy
- iRule Security 101 – #08 – Limiting POST Data
- iRule Security 101 – #09 – Command Execution

First, let me say that I know that "Referer" is misspelled. For some reason, the authors of the HTTP specification (RFC 2616, section 14.36) didn't run a spell checker on the specification, and now that every browser and web server has implemented this with the wrong spelling, it's too late to change it. Take a look at the definition for it on dictionary.com and you'll see for yourself. Nothing like a dictionary 'dissing an Internet spec...

Once you can get past the misspelling, the HTTP "Referer" header is defined as follows (RFC 2616, section 14.36):

14.36 Referer

The Referer[sic] request-header field allows the client to specify, for the server's benefit, the address (URI) of the resource from which the Request-URI was obtained (the "referrer", although the header field is misspelled.) The Referer request-header allows a server to generate lists of back-links to resources for interest, logging, optimized caching, etc. It also allows obsolete or mistyped links to be traced for maintenance. The Referer field MUST NOT be sent if the Request-URI was obtained from a source that does not have its own URI, such as input from the user keyboard.

So basically, whenever you click on a link from a website, causing a new HTTP request to be made, the URI of the website you are on will be passed in the HTTP request in the form of an HTTP header with the name of "Referer" and a value containing the source URI.

Why is this important from a security perspective? I'll give just one example attack and a way to use Referer headers to help block against it. With the massive uptake of blogging by users on the Internet, comments are a useful way to get feedback on your ideas. Unfortunately blog spam, as it's called, has been on the rise. Several ways have been developed to protect against blog spam, including comment moderation, CAPTCHA (you know, when you type out the text that is displayed in randomly generated images), as well as online dynamic services such as Akismet that process the content of comments in a very similar way to common email SPAM services. CAPTCHA is the most common form of defense, but it is not foolproof and spammers have found ways to build programs to defeat this system.

So how does this fit with Referers? I'll get to that in just a minute... If you set a policy on your blog that the comment form can only be accessed by clicking on a feedback link on your blog, then you can make use of this fact by denying all requests that do not contain the URI of your blog post in the Referer header. Sure, there are ways to bypass this since HTTP headers are easily programmed into any HTTP client program. But there are also ways to trick the client into thinking that the post succeeded when it really didn't. Let's take a look at an example.
- http://www.mycoolblog.com/ - blog site in question
- http://www.mycoolblog.com/first_post - blog post page that is to be commented on
- http://www.mycoolblog.com/PostComment.aspx - comment post form

Legitimate commenters will first visit the blog post page and then fill in the comment information and submit it to the PostComment.aspx form. Spammers will try to bypass this step by pulling these pages into a client program, attempting to determine the CAPTCHA image's text, and then formulating an HTTP POST command directly to the PostComment.aspx page. By enforcing that a Referer header from the same blog site comes in the PostComment.aspx request, we can block out those spammers.

when HTTP_REQUEST {
   switch -glob [HTTP::header "Referer"] {
      "http://www.mycoolblog.com/*" {
         # Allow Request to go through...
      }
      "" {
         HTTP::respond 200 content ""
      }
      default {
         HTTP::redirect [HTTP::header "Referer"]
      }
   }
}

Basically, any request coming from http://www.mycoolblog.com will be allowed through. Any request with an empty Referer header will be immediately returned with an HTTP 200 response to trick the client into believing a successful attempt was made, and any other Referer will be redirected back to the referring site.

Caveats: This is by far not a universal blog spam solution, as each blogging engine handles comments differently. Some have a different URI for comment posting (as illustrated above) and others use POST data values on the same application page as the blog posting to indicate comment submissions. Also, it is easy for clients to spoof Referer values by manually adding the header in the requests. But it is a good start for those automated bots out there that are just searching for blogs to send their unwanted content to. Also, this solution does not cover Trackback/Pingback spam, as those are typically programmatic submissions from references in other blogs.

Conclusions: Blog spam was just an example of the type of application security issue that could be addressed by making use of the HTTP Referer header. Hopefully this article has provided some food for thought on how you can use the Referer header to your advantage in protecting your applications.
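If comments can legitimately arrive from more than one referring site, the hard-coded host in the switch can be swapped for a data group lookup. The following is a minimal sketch, assuming a string data group named allowed_referers exists and holds URL prefixes such as http://www.mycoolblog.com/; the data group name is an assumption for illustration.

when HTTP_REQUEST {
   if { [HTTP::uri] starts_with "/PostComment.aspx" } {
      if { [matchclass [HTTP::header "Referer"] starts_with $::allowed_referers] } {
         # Referer begins with one of the approved prefixes - allow it through
         return
      } elseif { [HTTP::header "Referer"] eq "" } {
         # No Referer at all - pretend the post succeeded
         HTTP::respond 200 content ""
      } else {
         # Anything else gets bounced back to where it claims to have come from
         HTTP::redirect [HTTP::header "Referer"]
      }
   }
}

The empty-Referer and redirect branches mirror the behavior of the iRule above; only the allow-list check changes.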
iControl 101 - #05 - Exceptions

When designing the iControl API, we had two choices with regard to API design. The first option was to build our methods to return status codes as return values and outbound data as "out" parameters. The other option was to make use of exception handling as a way to return non-success results, allowing the use of the return value for outbound data. This article will discuss why we went with the latter and how you can build exception handling into your client application to handle the cases where your method calls fail.

Camp 1: return codes

As I mentioned above, there are two camps for API design. The first are the ones that return status codes as return values and external error methods to return error details for a given error code. For you developers out there who still remember your "C" programming days, this may look familiar:

struct stat sb;
char dirname[] = "c:\\somefile.txt";
if ( 0 != stat(dirname, &sb) )
{
   printf("Problem with file '%s'; error: %s\n", dirname, strerror(errno));
}

You'll notice that the "stat" method to determine the file status returns an integer that is zero for success. When it's non-zero, a global variable (errno) is set indicating the error number, and the "strerror" method can then be called with that error number to determine the user-readable error string. There is a problem with this approach, as illustrated by the "semipredicate problem", in which users of the method need to write extra code to distinguish normal return values from erroneous ones.

Camp 2: Exceptions

The other option for status returns is to make use of exception handling. Exception handling makes use of the fact that when error conditions occur, the method call will not return via its standard return logic; rather, the information on the exception is stored and the call stack is unwound until a handler for that exception is found. This code sample in C# is an example of making use of exceptions to track errors:

try
{
   Microsoft.Win32.RegistryKey cu = Microsoft.Win32.Registry.CurrentUser;
   Microsoft.Win32.RegistryKey subKey = cu.OpenSubKey("some_bogus_path");
}
catch(Exception ex)
{
   Console.WriteLine("Exception: " + ex.Message.ToString());
}

iControl works with Exceptions

Luckily for us, the SOAP specification takes the exception model into account by providing an alternative to the SOAP Response. A SOAPFault can be used to return error information for those cases where the method calls cannot be completed due to invalid arguments or other system configuration issues. A SOAPFault for an invalid parameter to Networking::VLAN::get_vlan_id() looks like this:

<SOAP-ENV:Envelope
   xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns:xsd="http://www.w3.org/2001/XMLSchema">
   <SOAP-ENV:Body>
      <SOAP-ENV:Fault>
         <faultcode xsi:type="xsd:string">SOAP-ENV:Server</faultcode>
         <faultstring xsi:type="xsd:string">Exception caught in Networking::VLAN::get_vlan_id()
Exception: Common::OperationFailed
   primary_error_code   : 16908342 (0x01020036)
   secondary_error_code : 0
   error_string         : 01020036:3: The requested VLAN (foo) was not found.</faultstring>
      </SOAP-ENV:Fault>
   </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

The faultcode element indicates that the fault occurred on the server (i.e., it wasn't a client connectivity issue) and the faultstring contains the details. You may ask why we include all our fault data in the return string and not in the new SOAPFault elements defined in SOAP v1.2.
Well, when we first released our iControl interfaces, SOAP v1.0 was just coming out and they were not defined yet. At this point we cannot change our fault format for risk of breaking backward compatibility in existing iControl applications.

An added benefit of using exceptions is that it makes client code much cleaner as opposed to using "out" type parameters. Wouldn't you much rather your code look like this:

String [] pool_list = m_interfaces.LocalLBPool.get_list();

As opposed to this:

String [] pool_list = null;
int rc = m_interfaces.LocalLBPool.get_list(pool_list);

Types of Exceptions

If you look in the Common module in the SDK, you'll find a list of the exceptions supported in the iControl methods. The most common of them is "OperationFailed", but in some cases you'll see AccessDenied, InvalidArgument, InvalidUser, NoSuchInterface, NotImplemented, and OutOfMemory crop up. The SDK documentation for each method lists the exceptions that can be raised by that method if you need to narrow down what each method will give you.

Processing Faults

In almost all cases, it is sufficient to just know that an exception occurred. The use of the method will likely give you the reason for a possible fault. If you are trying to create a pool and it fails, odds are you passed in an existing pool name as an input parameter. But for those situations where you need to get detailed info on why an exception happened, how do you go about it? Given that the exceptions we return are all encoded as text in the faultstring field, it would be handy to have some tools to help you decipher that data. Good thing you are reading this tech tip! Here is a sample C# class to parse and identify iControl exceptions. This could easily be ported to another language of your choice.

using System;
using System.Collections.Generic;
using System.Text;

namespace iControlProgram
{
    public class ExceptionInfo
    {
        #region Private Member Variables
        private Exception m_ex = null;
        private Type m_exceptionType = null;
        private String m_message = null;
        private String m_location = null;
        private String m_exception = null;
        private long m_primaryErrorCode = -1;
        private String m_primaryErrorCodeHex = null;
        private long m_secondaryErrorCode = -1;
        private String m_errorString = null;
        private bool m_IsiControlException = false;
        #endregion

        #region Public Member Accessors
        public System.Type ExceptionType { get { return m_exceptionType; } set { m_exceptionType = value; } }
        public String Message { get { return m_message; } set { m_message = value; } }
        public String Location { get { return m_location; } set { m_location = value; } }
        public String Exception { get { return m_exception; } set { m_exception = value; } }
        public long PrimaryErrorCode { get { return m_primaryErrorCode; } set { m_primaryErrorCode = value; } }
        public String PrimaryErrorCodeHex { get { return m_primaryErrorCodeHex; } set { m_primaryErrorCodeHex = value; } }
        public long SecondaryErrorCode { get { return m_secondaryErrorCode; } set { m_secondaryErrorCode = value; } }
        public String ErrorString { get { return m_errorString; } set { m_errorString = value; } }
        public bool IsiControlException { get { return m_IsiControlException; } set { m_IsiControlException = value; } }
        #endregion

        #region Constructors
        public ExceptionInfo()
        {
        }

        public ExceptionInfo(Exception ex)
        {
            parse(ex);
        }
        #endregion

        #region Public Methods
        public void parse(Exception ex)
        {
            m_ex = ex;
            ExceptionType = ex.GetType();
            Message = ex.Message.ToString();

            System.IO.StringReader sr = new System.IO.StringReader(Message);
            String line = null;
            try
            {
                while (null != (line = sr.ReadLine().Trim()))
                {
                    if (line.StartsWith("Exception caught in"))
                    {
                        Location = line.Replace("Exception caught in ", "");
                    }
                    else if (line.StartsWith("Exception:"))
                    {
                        Exception = line.Replace("Exception: ", "");
                    }
                    else if (line.StartsWith("primary_error_code"))
                    {
                        line = line.Replace("primary_error_code : ", "");
                        String[] sSplit = line.Split(new char[] { ' ' });
                        PrimaryErrorCode = Convert.ToInt32(sSplit[0]);
                        PrimaryErrorCodeHex = sSplit[1];
                    }
                    else if (line.StartsWith("secondary_error_code"))
                    {
                        SecondaryErrorCode = Convert.ToInt32(line.Replace("secondary_error_code : ", ""));
                    }
                    else if (line.StartsWith("error_string"))
                    {
                        ErrorString = line.Replace("error_string : ", "");
                    }
                }

                IsiControlException = (null != Location) && (null != Exception);
            }
            catch (Exception)
            {
            }
        }
        #endregion
    }
}

And here's a usage of the above ExceptionInfo class in a snippet of code that is making use of the iControl Assembly for .NET:

try
{
    m_interfaces.NetworkingVLAN.get_vlan_id(new string[] { "foobar" });
}
catch (Exception ex)
{
    ExceptionInfo exi = new ExceptionInfo(ex);
    if (exi.IsiControlException)
    {
        Console.WriteLine("Exception: " + exi.Exception);
        Console.WriteLine("Location : " + exi.Location);
        Console.WriteLine("Primary Error : " + exi.PrimaryErrorCode + "(" + exi.PrimaryErrorCodeHex + ")");
        Console.WriteLine("Secondary Error : " + exi.SecondaryErrorCode);
        Console.WriteLine("Description : " + exi.ErrorString);
    }
}

Conclusion

The flexibility in our exception implementation in iControl, along with some utilities to help process that information, should put you well on your way to building a rock-solid iControl application.
iControl 101 - #13 - Data Groups

Data Groups can be useful when writing iRules. A data group is simply a group of related elements, such as a set of IP addresses, URI paths, or document extensions. When used in conjunction with the matchclass or findclass commands, you eliminate the need to list multiple values as arguments in an iRule expression. This article will discuss how to use the methods in the LocalLB::Class interface to manage the Data Groups for use within iRules.

Terminology

You will first notice a mixing up of terms. A "Class" and a "Data Group" can be used interchangeably. Class was the original development term and the marketing folks came up with Data Group later on, so you will see "Class" embedded in the core configuration and iControl methods (thus the LocalLB::Class interface), and "Data Groups" will most often be how they are referenced in the administration GUI.

Data Groups come in four flavors: Address, Integer, String, and External. Address Data Groups consist of a list of IP addresses with optional netmasks and are useful for applying a policy based on an originating subnet. Integer Data Groups hold numeric integers and, to add more confusion, are referred to as "value" types in the API. String Data Groups can hold any valid ASCII-based string. Each of these Data Group types has methods specific to that type (e.g. get_string_class_list(), add_address_class_member(), find_value_class_member()). External Data Groups are special in that they have one of the previous types but there are no direct accessor methods to add or remove elements from the file. The configuration consists of a file path and name, along with the type (Address, Integer, String). You will have to use the ConfigSync file transfer APIs to remotely manipulate External Data Groups. External Data Groups are meant for very large lists that change frequently.

This article will focus on String Data Groups, but the usage for Address and Integer classes will be similar in nature.

Initialization

This article uses PowerShell and the iControl Cmdlets for PowerShell as the client environment for querying the data. The following setup will be required for the examples contained in this article.

PS> Add-PSSnapIn iControlSnapIn
PS> Initialize-F5.iControl -Hostname bigip_address -Username bigip_user -Password bigip_pass
PS> $Class = (Get-F5.iControl).LocalLBClass

Listing data groups

The first thing you'll want to do is determine which Data Groups exist. The get_string_class_list() method will return an array of strings containing the names of all of the existing String-based Data Groups.

PS> $Class.get_string_class_list()
test_list
test_class
carp
images

Creating a Data Group

You have to start from somewhere, so most likely you'll be creating a new Data Group to do your bidding. This example will create a data group of image extensions for use in a URI-based filtering iRule. The create_string_class() method takes as input an array of LocalLB::Class::StringClass structures, each containing the class name and an array of members. In this example, the string class "img_extensions" is created with the values ".jpg", ".gif", and ".png". Then the get_string_class_list() method is called to make sure the class was created, and the get_string_class() method is called to return the values passed in the create method.
PS> $StringClass = New-Object -typename iControl.LocalLBClassStringClass
PS> $StringClass.name = "img_extensions"
PS> $StringClass.members = (".jpg", ".gif", ".png")
PS> $Class.create_string_class(,$StringClass)
PS> $Class.get_string_class_list()
test_list
test_class
carp
images
img_extensions
PS> $Class.get_string_class((,"img_extensions"))

name               members
----               -------
img_extensions     {.gif, .jpg, .png}

Adding Data Group items

Once you have an existing Data Group, you most likely will want to add something to it. The add_string_class_member() method will take as input the same LocalLB::Class::StringClass structure containing the name of the Data Group and the list of new items to add to it. The following code will add two values, ".ico" and ".bmp", to the img_extensions Data Group and then query the values of the Data Group to make sure the call succeeded.

PS> $StringClass.members = (".ico", ".bmp")
PS> $Class.add_string_class_member(,$StringClass)
PS> $Class.get_string_class((,"img_extensions"))

name               members
----               -------
img_extensions     {.bmp, .gif, .ico, .jpg...}

Removing Data Group Items

If you want to add items, you may very well want to delete items. That's where the delete_string_class_member() method comes in. Like the previous examples, it takes the LocalLB::Class::StringClass structure containing the name of the Data Group and the values you would like to remove. The following example removes the ".gif" and ".jpg" values and queries the current value of the list.

PS> $StringClass.members = (".gif", ".jpg")
PS> $Class.delete_string_class_member(,$StringClass)
PS> $Class.get_string_class((,"img_extensions"))

name               members
----               -------
img_extensions     {.bmp, .ico, .png}

Deleting Data Groups

The interface wouldn't be complete if you couldn't delete the Data Groups you previously created. The delete_class() method takes as input a string array of class names. This example will delete the img_extensions Data Group and then call the get_string_class_list() method to verify that it was deleted.

PS> $Class.delete_class(,"img_extensions")
PS> $Class.get_string_class_list()
ts_reputation
test_list
test_class
carp
images

Conclusion

That's about it! Just replace "string" with "address" or "value" in the above method names and you should be well on your way to building any type of Data Group you need for all your iRule needs.
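To tie this back to iRules: once a data group like img_extensions from the example above exists, an iRule can reference it with matchclass, as mentioned at the top of this article. Here is a minimal sketch, assuming the data group is visible to the iRule as $::img_extensions and that a pool named image_servers exists (both names are assumptions for illustration):

when HTTP_REQUEST {
   # Send requests for known image extensions to a dedicated pool
   if { [matchclass [HTTP::uri] ends_with $::img_extensions] } {
      pool image_servers
   }
}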
Getting Started with Splunk for F5

Pete Silva & Lori MacVittie both had blog posts last week featuring the F5 Application for Splunk, so I thought I'd take the opportunity to get Splunk installed and check it out. In this first part, I'll cover the installation process. This is one of the easiest installations I've ever written about--it's almost like I'm cheating or something.

Installing Splunk

My platform of choice for this article is Ubuntu, so I downloaded the 4.2.1 Debian package for 64-bit systems from the Splunk site. Installation is a one-step breeze:

dpkg -i /var/tmp/splunk-4.2.1-98165-linux-2.6-amd64.deb

After installation (defaulting to /opt/splunk), start the Splunk server:

/opt/splunk/bin/splunk start

I had to accept the license agreement during the startup process. Afterwards, I was instructed to point my browser to http://<server>:8000. I logged in with the default credentials (admin / changeme) and then was instructed to change my password, which I did (you can skip this step if you prefer). Pretty easy path to a completed installation. The browser should now be in the state shown below in Figure 1.

Installing Splunk for F5

Click on Manager in the upper right-hand corner of the screen, which should take you to the screen shown below in Figure 2. Next, click on Apps as shown below in Figure 3. At this point you have a choice. If you downloaded the Splunk for F5 app from splunkbase, you can click the "install app from file" button. I chose to install from the web, so I clicked the "find more apps online" button. This loaded a listing from splunkbase, with the Splunk for F5 app shown at the bottom of Figure 4 below. After clicking the "install Free" button, I had to enter my splunk.com credentials, then the application installed. Splunk requested a restart, so I restarted and then logged back in. My new session was returned to the online apps screen, so to get to my new F5 app, I clicked "back to search" in the upper left corner, which took me to the Search app home page. Finally, in the upper right corner I selected App and then clicked "Splunk for F5 Security". This resulted in the screen shown below in Figure 5. Success!

Now...what to do with it? How is this useful? Check back for part two next week... For some hints, check out the blogs I mentioned at the top of this article from Pete and Lori:

- Spelunking for Big Data
- Do You Splunk 2.0

Other Related Articles

- Do you Splunk?
- ASM & Splunk integration
- F5 Networks Partner Spotlight - Splunk
- f5 ltm dashboard in splunk
- Logging HTTP traffic to Splunk
- Client IP Logging with F5 & Splunk
iControl 101 - #11 - Performance Graphs

The BIG-IP stores history of certain types of data for reporting purposes. The Performance item in the Overview tab shows the report graphs for this performance data. The management GUI gives you the options to customize the graph interval, but it stops short of giving you access to the raw data behind the graphs. Never fear, the System.Statistics interface contains methods to query the report types and extract the CSV data behind them. You can even select the start and end times as well as the poll intervals. This article will discuss the performance graph methods and how you can query the data behind them and build a chart of your own.

Initialization

This article uses PowerShell and the iControl Cmdlets for PowerShell as the client environment for querying the data. The following setup will be required for the examples contained in this article.

PS> Add-PSSnapIn iControlSnapIn
PS> Initialize-F5.iControl -Hostname bigip_address -Username bigip_user -Password bigip_pass
PS> $SystemStats = (Get-F5.iControl).SystemStatistics

Now that that is taken care of, let's dive right in. In the System.Statistics interface, there are two methods that get you to the performance graph data. The first is the get_performance_graph_list() method. This method takes no parameters and returns a list of structures containing each graph name, title, and description.

PS> $SystemStats.get_performance_graph_list()

graph_name          graph_title                         graph_description
----------          -----------                         -----------------
memory              Memory Used                         Memory Used
activecons          Active Connections                  Active Connections
newcons             New Connections                     New Connections
throughput          Throughput                          Throughput
httprequests        HTTP Requests                       HTTP Requests
ramcache            RAM Cache Utilization               RAM Cache Utilization
detailactcons1      Active Connections                  Active Connections
detailactcons2      Active PVA Connections              Active PVA Connections
detailactcons3      Active SSL Connections              Active SSL Connections
detailnewcons1      Total New Connections               Total New Connections
detailnewcons2      New PVA Connections                 New PVA Connections
detailnewcons3      New ClientSSL Profile Connections   New ClientSSL Profile Connections
detailnewcons4      New Accepts/Connects                New Accepts/Connects
detailthroughput1   Client-side Throughput              Client-side Throughput
detailthroughput2   Server-side Throughput              Server-side Throughput
detailthroughput3   HTTP Compression Rate               HTTP Compression Rate
SSLTPSGraph         SSL Transactions/Sec                SSL Transactions/Sec
GTMGraph            GTM Performance                     GTM Requests and Resolutions
GTMrequests         GTM Requests                        GTM Requests
GTMresolutions      GTM Resolutions                     GTM Resolutions
GTMpersisted        GTM Resolutions Persisted           GTM Resolutions Persisted
GTMret2dns          GTM Resolutions Returned to DNS     GTM Resolutions Returned to DNS
detailcpu0          CPU Utilization                     CPU Usage
detailcpu1          CPU Utilization                     CPU Usage
CPU                 CPU Utilization                     CPU Usage
detailtmm0          TMM Utilization                     TMM Usage
TMM                 TMM Utilization                     TMM CPU Utilization

Creating a Report

Ok, so you've now got the graph names; it's time to move on to accessing the data. The method you'll want is the get_performance_graph_csv_statistics() method. This method takes an array of PerformanceStatisticQuery structures containing the query parameters and returns an array of PerformanceGraphDataCSV structures, one for each input query. The following code illustrates how to make a simple query. The object_name corresponds to the graph_name in the get_performance_graph_list() method. The start_time and end_time allow you to control what the data range is. Values of 0 (the default) will return the entire result set.
If the user specifies a start_time, end_time, and interval that do not exactly match the corresponding values used within the database, the database will attempt to use the closest time or interval as requested. The actual values used will be returned to the user on output.

For querying purposes, the start_time can be specified as:

- 0: in which case, by default, it means 24 hours ago.
- N: where N represents the number of seconds since Jan 1, 1970.
- -N: where -N represents the number of seconds before now; for example, -3600 means 3600 seconds ago, or now - 3600 seconds.

For querying purposes, the end_time can be specified as:

- 0: in which case, by default, it means now.
- N: where N represents the number of seconds since Jan 1, 1970.
- -N: where -N represents the number of seconds before now; for example, -3600 means 3600 seconds ago, or now - 3600 seconds.

The interval is the suggested sampling interval in seconds. The default of 0 uses the system default. The maximum_rows value allows you to limit the returned rows. Values are started at the start_time, and if the number of rows exceeds the value of maximum_rows, then the data is truncated at the maximum_rows value. A value of 0 implies the default of all rows.

PS> # Allocate a new Query Object
PS> $Query = New-Object -TypeName iControl.SystemStatisticsPerformanceStatisticQuery
PS> $Query.object_name = "CPU"
PS> $Query.start_time = 0
PS> $Query.end_time = 0
PS> $Query.interval = 0
PS> $Query.maximum_rows = 0
PS> # Make method call passing in an array of size one with the specified query
PS> $ReportData = $SystemStats.get_performance_graph_csv_statistics( (,$Query) )
PS> # Look at the contents of the returned data.
PS> $ReportData

object_name    : throughput
start_time     : 1208354160
end_time       : 1208440800
interval       : 240
statistic_data : {116, 105, 109, 101...}

Processing the Data

The statistic_data, much like the ConfigSync's file transfer data, is transferred as a base64 encoded string, which translates to a byte array in .NET. We will need to convert this byte array into a string, and that can be done with the System.Text.ASCIIEncoding class.

PS> # Allocate a new encoder and turn the byte array into a string
PS> $ASCII = New-Object -TypeName System.Text.ASCIIEncoding
PS> $csvdata = $ASCII.GetString($ReportData[0].statistic_data)
PS> # Look at the resulting dataset
PS> $csvdata
timestamp,"CPU 0","CPU 1"
1208364000,4.3357230000e+00,0.0000000000e+00
1208364240,3.7098920000e+00,0.0000000000e+00
1208364480,3.7187980000e+00,0.0000000000e+00
1208364720,3.3311110000e+00,0.0000000000e+00
1208364960,3.5825310000e+00,0.0000000000e+00
1208365200,3.4826450000e+00,8.3330000000e-03
...

Building a Chart

You will see that the returned dataset is in the form of a comma separated value file. At this point you can take this data and import it into your favorite reporting package. But if you want a quick and dirty way to see this visually, you can use PowerShell to drive Excel into loading the data and generating a default report. The following code converts the csv format into a tab separated format, creates an instance of an Excel Application, loads the data, cleans it up, and inserts a default line graph based on the input data.

PS> # Replace commas with tabs in the report data and save to c:\temp\tabdata.txt
PS> $csvdata.Replace(",", "`t") > c:\temp\tabdata.txt
PS> # Allocate an Excel application object and make it visible.
PS> $e = New-Object -comobject "Excel.Application"
PS> $e.visible = $true
PS> # Load the tab delimited data into a workbook and get an instance of the worksheet it was inserted into.
PS> $wb = $e.Workbooks.Open("c:\temp\tabdata.txt")
PS> $ws = $wb.WorkSheets.Item(1)
PS> # Let's remove the first column of timestamps. Ideally you'll want this to be the
PS> # horizontal axis and I'll leave it up to you to figure that one out.
PS> $ws.Columns.Item(1).EntireColumn.Select()
PS> $ws.Columns.Item(1).Delete()
PS> # The last row of the data is filled with NaN to indicate the end of the result set. Let's delete that row.
PS> $ws.Rows.Item($ws.UsedRange.Rows.Count).Select()
PS> $ws.Rows.Item($ws.UsedRange.Rows.Count).Delete()
PS> # Select all the data in the worksheet, create a chart, and change it to a line graph.
PS> $ws.UsedRange.Select()
PS> $chart = $e.Charts.Add()
PS> $chart.Type = 4

Conclusion

Now you should have everything you need to get access to the data behind the performance graphs. There are many ways you could take these examples, and I'll leave it to the chart gurus out there to figure out interesting ways to represent the data. If anyone does find some interesting ways to manipulate this data, please let me know!
iRules 101 - #1 - An Introduction to iRules

Introduction

iRules are a powerful and flexible feature of F5 BIG-IP devices, built on F5's unique TMOS architecture. iRules give you unmatched, direct control over traffic and the ability to manage the traffic of any IP application. iRules use a simple, easy-to-learn scripting syntax that lets you customize how you intercept, inspect, transform, and direct inbound and outbound application traffic.

In the later installments of this series, we will explore the TCL language, including its usage and structure, as well as the extensions iRules adds to TCL.

The components of an iRule

An iRule consists of one or more event declarations, along with the TCL code that is executed when each event is triggered. First, let's look at what an event is.

Events

Events are an F5 extension to the TCL programming language; F5 uses events to give iRules a modular structure. As a connection flows into TMOS and back out the other side, it passes through a series of internal states, and each of those states corresponds to an event in the iRules language. Some events, such as CLIENT_ACCEPTED, are triggered globally (meaning every connection triggers the event regardless of which profiles are applied to the virtual server), while others are triggered only when a particular profile is in use, such as HTTP_REQUEST, CLIENTSSL_CLIENTCERT, or RTSP_RESPONSE (meaning the event only fires when the virtual server has that specific profile applied). Events are declared with the F5-added "when" statement:

when EVENT_NAME {
   TCL-CODE
}

For the list of available events, see iRule Events on CloudDocs.

The main advantage of events is that they split an iRule into logical blocks that execute in a non-serial fashion. This means that when an event is triggered, only the code for that event is executed.

TCL

iRules use the TCL runtime engine to execute the script logic. The TCL language documentation can be found at http://tmml.sourceforge.net/doc/tcl/index.html. The TCL language can be broken down into operators and commands; essentially everything in TCL can be viewed as a command. We divide the various commands into three types: functions, statements, and commands.

Operators

An operator is a token that "operates" on other values. You can use operators to compare two values. In addition to TCL's built-in operators (==, <=, >=, ...), iRules adds operators such as "starts_with", "contains", and "ends_with" to assist with comparisons. You can see the complete list of F5-added operators in the iRules operators document.

Functions

A function is a command that performs an operation and usually returns a value. Functions such as "findclass" and "matchclass" help with data group access, while functions such as "findstr", "getfield", and "substr" are used to manipulate strings. You can see all of the F5-added functions in the iRules functions document.

Statements

A statement is typically a command that does not return a value. Essentially, a statement's job is to "do something". You can use TCL's "if" and "switch" statements to perform conditional logic, or you can use iRules-specific statements such as "log" to write a message to the system log, or "pool" to send traffic to a specific server based on the result of load balancing. You can see all of the iRules-specific statements at iRules Statements on CloudDocs.

Commands

Commands are pretty much all of the other control structures available in the TCL language. With commands you can do things like retrieve the URI of an HTTP request (HTTP::uri) or encrypt data with an AES key (AES::encrypt). You can see the list of extended commands at iRules Commands on CloudDocs.

In the iRules implementation, some TCL commands have been disabled. Essentially, any TCL command that could cause an unwanted interruption of traffic handling (file I/O, socket calls, procedures, ...) has been removed. The list of unavailable commands can be found in the documentation.

In addition to the standard TCL commands, F5 has added a number of commands of its own; some are global in scope (TCP::client_port, IP::addr, ...), and others are tied to a specific profile (HTTP::uri, SIP::call_id, ...). You can see the full list of commands in the command section of the iRules documentation on CloudDocs.

Putting it all together

The following iRule contains a number of events, operators, functions, statements, and commands.

when HTTP_REQUEST {
   if { [HTTP::uri] starts_with "/foobar" } {
      switch -glob [HTTP::uri] {
         "*[0-9].jpg" {
            pool numbers_pool
         }
         default {
            if { [string length [substr [HTTP::uri] 0 "?"]] > 0 } {
               HTTP::respond 200 content "<html><head><title>Where's the number?</title></head><body><h1>Where's the number?</h1></body></html>"
            }
         }
      }
   }
}

when HTTP_RESPONSE {
   if { [HTTP::header Content-Length] > 100 } {
      log local0. "too much data requested."
      drop
   }
}

Questions:

- Can you identify all of the operators, functions, statements, and commands?
- What does "default" mean in the switch statement?
- What happens if the Content-Length header in the HTTP response is empty?
- How does -glob in the switch command differ from a regular expression?
- When do you need to use braces with functions, statements, and commands?

Conclusion

In this article we described the basic building blocks of the iRules language in detail. In later articles in this series, we will dig deeper into the different components of iRules and how to use these features correctly.

Related links

- TCL - http://tmml.sourceforge.net/doc/tcl/index.html
- iRules Events - https://clouddocs.f5.com/api/irules/Events.html
- iRules Functions - https://clouddocs.f5.com/api/irules/Functions.html
- iRules Statements - https://clouddocs.f5.com/api/irules/Statements.html
- iRules Commands - https://clouddocs.f5.com/api/irules/Commands.html
- iRules Operators - https://clouddocs.f5.com/api/irules/Operators.html