Intermediate iRules: Nested Conditionals
Conditionals are a pretty standard tool in every programmer's toolbox. They are the functions that allow us to decide when we want certain actions to happen, based on, well, conditions that can be determined within our code. This concept is as old as compilers. Chances are, if you're writing code, you're going to be using a slew of these things, even in an event-based language like iRules. iRules is no different than any other programming/scripting language when it comes to conditionals; we have them. Sure, how they're implemented and what they look like changes from language to language, but most of the same basic tools are there: if, else, switch, elseif, etc. Just about any example that you might run across on DevCentral is going to contain some example of these being put to use. Learning which conditional to use in each situation is an integral part of learning how to code effectively. Once you have that under control, however, there's still plenty more to learn. Now that you're comfortable using a single conditional, what about starting to combine them? There are many times when it makes more sense to use a pair or more of conditionals in place of a single conditional along with logical operators. For example:

if { [HTTP::host] eq "bob.com" and [HTTP::uri] starts_with "/uri1" } {
   pool pool1
} elseif { [HTTP::host] eq "bob.com" and [HTTP::uri] starts_with "/uri2" } {
   pool pool2
} elseif { [HTTP::host] eq "bob.com" and [HTTP::uri] starts_with "/uri3" } {
   pool pool3
}

This can be re-written to use a pair of conditionals instead, making it far more efficient. To do this, you take the common case shared among the example strings and only perform that comparison once, and only perform the other comparisons if that result returns as desired.
This is more easily described as nested conditionals, and it looks like this:

if { [HTTP::host] eq "bob.com" } {
   if { [HTTP::uri] starts_with "/uri1" } {
      pool pool1
   } elseif { [HTTP::uri] starts_with "/uri2" } {
      pool pool2
   } elseif { [HTTP::uri] starts_with "/uri3" } {
      pool pool3
   }
}

These two examples are logically equivalent, but the latter is far more efficient. This is because in all the cases where the host is not equal to "bob.com", no other inspection needs to be done, whereas in the first example you must perform the host check three times, as well as the URI check every single time, even though the process could have stopped after the first failed host comparison. While basic, this concept is important in general when coding. It becomes exponentially more important, as do almost all optimizations, when talking about programming in iRules. A script being executed on a server firing perhaps once per minute benefits from small optimizations. An iRule being executed somewhere in the order of 100,000 times per second benefits that much more. A slightly more interesting example, perhaps, is performing the same logical nesting while using different operators. In this example we'll look at a series of if/elseif statements that are already using nesting, and take a look at how we might use the switch command to optimize things even further. I've seen multiple examples of people shying away from switch when nesting their logic because it looks odd to them or they're not quite sure how it should be structured. Hopefully this will help clear things up.
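Outside of Tcl, the same restructuring applies in any language. Here is a hedged Python sketch (the host, URI prefixes, and pool names are the hypothetical values from the example above) showing the shared host test hoisted so it runs only once:

```python
def pick_pool(host, uri):
    """Choose a pool name by nesting: test the shared host condition once,
    then test the URI prefixes only when the host matches."""
    if host == "bob.com":
        if uri.startswith("/uri1"):
            return "pool1"
        elif uri.startswith("/uri2"):
            return "pool2"
        elif uri.startswith("/uri3"):
            return "pool3"
    return None  # no match: fall through to the default behavior

print(pick_pool("bob.com", "/uri2/page"))    # -> pool2
print(pick_pool("other.com", "/uri2/page"))  # -> None
```

When the host doesn't match, none of the URI comparisons ever run, which is exactly the saving the nested iRule gets.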
First, the example using if statements:

when HTTP_REQUEST {
   if { [HTTP::host] eq "secure.domain.com" } {
      HTTP::header insert "Client-IP" [IP::client_addr]
      pool sslServers
   } elseif { [HTTP::host] eq "www.domain.com" } {
      HTTP::header insert "Client-IP" [IP::client_addr]
      pool httpServers
   } elseif { [HTTP::host] ends_with "domain.com" and [HTTP::uri] starts_with "/secure" } {
      HTTP::header insert "Client-IP" [IP::client_addr]
      pool sslServers
   } elseif { [HTTP::host] ends_with "domain.com" and [HTTP::uri] starts_with "/login" } {
      HTTP::header insert "Client-IP" [IP::client_addr]
      pool httpServers
   } elseif { [HTTP::host] eq "intranet.myhost.com" } {
      HTTP::header insert "Client-IP" [IP::client_addr]
      pool internal
   }
}

As you can see, this is completely functional and would do the job just fine. There are definitely some improvements that can be made, though. Let's try using a switch statement instead of several if comparisons for improved performance. To do that, we're going to have to nest an if inside a switch comparison. While this might be new to some, or look a bit odd if you're not used to it, it's completely valid and oftentimes as efficient as you're going to get. This is what the above code looks like cleaned up and put into a switch:

when HTTP_REQUEST {
   HTTP::header insert "Client-IP" [IP::client_addr]
   switch -glob [HTTP::host] {
      "secure.domain.com" { pool sslServers }
      "www.domain.com" { pool httpServers }
      "*.domain.com" {
         if { [HTTP::uri] starts_with "/secure" } {
            pool sslServers
         } else {
            pool httpServers
         }
      }
      "intranet.myhost.com" { pool internal }
   }
}

As you can see, this is not only easier to read and maintain, but it will also prove to be more efficient. We've moved to the more efficient switch structure, we've gotten rid of the repeated host comparisons that were happening above with the /secure vs. /login URIs, and while I was at it I got rid of the duplicated header insertion, since that was happening in every case anyway.
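For readers more comfortable in Python, the switch -glob dispatch can be approximated with the standard fnmatch module. This is an illustrative sketch only (the pool names mirror the example above; it is not iRules code), but it shows the same first-match, glob-pattern semantics:

```python
from fnmatch import fnmatch

def route(host, uri):
    """First-match glob dispatch, analogous to Tcl's 'switch -glob'."""
    rules = [
        ("secure.domain.com", lambda: "sslServers"),
        ("www.domain.com", lambda: "httpServers"),
        # nested condition: only consulted once the host pattern matches
        ("*.domain.com", lambda: "sslServers" if uri.startswith("/secure")
                                 else "httpServers"),
        ("intranet.myhost.com", lambda: "internal"),
    ]
    for pattern, handler in rules:
        if fnmatch(host, pattern):
            return handler()
    return None

print(route("app.domain.com", "/secure/login"))  # -> sslServers
```

Note that "secure.domain.com" also matches "*.domain.com", but the earlier, more specific pattern wins, just as the first matching body wins in Tcl's switch.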
Hopefully the benefit this technique can offer is clear, and these examples did the topic some justice. With any luck, you'll nest those conditionals with confidence now.

iRule Security 101 - #07 - FTP Proxy
We get questions all the time about custom application protocols and how one would go about writing an iRule to "understand" what's going on with that protocol. In this article, I will look at the FTP protocol and show you how one could write the logic to understand that application flow and selectively turn on and off support for various commands within the protocol. Other articles in the series: iRule Security 101 – #1 – HTTP Version iRule Security 101 – #02 – HTTP Methods and Cross Site Tracing iRule Security 101 – #03 – HTML Comments iRule Security 101 – #04 – Masking Application Platform iRule Security 101 – #05 – Avoiding Path Traversal iRule Security 101 – #06 – HTTP Referer iRule Security 101 – #07 – FTP Proxy iRule Security 101 – #08 – Limiting POST Data iRule Security 101 – #09 – Command Execution FTP FTP, for those who don't know, stands for File Transfer Protocol. FTP is designed to allow for the remote uploading and downloading of documents. I'm not going to dig deep into the protocol in this document, but for those who want to explore further, it is defined in RFC 959. The basics of FTP are as follows. Requests are made as single-line commands formatted as:

COMMAND COMMAND_ARGS CRLF

Some FTP commands include USER, PASS, & ACCT for authentication, CWD for changing directories, LIST for requesting the contents of a directory, and QUIT for terminating a session. Responses to commands are made in two ways. Over the main "control" connection, the server will process the request and then return a response in this format:

CODE DESCRIPTION CRLF

where CODE is the status code defined for the given request command. These have some similarity to HTTP response codes (200 -> OK, 500 -> Error), but don't count on them being exactly the same for each situation. For commands that do not request content from the server (USER, PASS, CWD, etc.), the control connection is all that is used. But there are other commands that specifically request data from the server.
RETR (downloading a file), STOR (uploading a file), and LIST (for requesting a current directory listing) are examples of these types of commands. For these commands, the status is still returned in the control channel, but the data is passed back in a separate "data" channel that is configured by the client with either the PORT or PASV commands. Writing the Proxy We'll start off the iRule with a set of global variables that are used across all connections. In this iRule we will only inspect the following FTP commands: USER, PASV, RETR, STOR, RNFR, RNTO, PORT, RMD, MKD, LIST, PWD, CWD, and DELE. This iRule can easily be expanded to include other commands in the FTP command set. In the RULE_INIT event we will set some global variables to determine how we want the proxy to handle the specific commands. A value of 1 for the "block" options will make the iRule deny those commands from reaching the backend FTP server. Setting a value of 0 for the block flag will allow the command to pass through.

when RULE_INIT {
   set DEBUG 1
   #------------------------------------------------------------------------
   # FTP Commands
   #------------------------------------------------------------------------
   set sec_block_anonymous_ftp 1
   set sec_block_passive_ftp 0
   set sec_block_retr_cmd 0
   set sec_block_stor_cmd 0
   set sec_block_rename_cmd 0
   set sec_block_port_cmd 0
   set sec_block_rmd_cmd 0
   set sec_block_mkd_cmd 0
   set sec_block_list_cmd 0
   set sec_block_pwd_cmd 0
   set sec_block_cwd_cmd 0
   set sec_block_dele_cmd 1
}

Since we will not be relying on a BIG-IP profile to handle the application parsing, we'll be using the low-level TCP events to capture the requests and responses. When a client establishes a connection, the CLIENT_ACCEPTED event will occur; from within this event we'll have to trigger a collection of the TCP data so that we can inspect it in the CLIENT_DATA event.

when CLIENT_ACCEPTED {
   if { $::DEBUG } { log local0.
"client accepted" }
   TCP::collect
   TCP::release
}

In the CLIENT_DATA event, we will look at the request with the TCP::payload command. We will then feed that value into a switch statement with options for each of the commands. For commands that we want to disallow, we will issue an FTP error response code with a description string, empty out the payload, and return from the iRule. For all other cases, we allow the TCP engine to continue on with its processing and then enter into data collect mode again.

when CLIENT_DATA {
   if { $::DEBUG } { log local0. "----------------------------------------------------------" }
   if { $::DEBUG } { log local0. "payload [TCP::payload]" }

   set client_data [string trim [TCP::payload]]

   #---------------------------------------------------
   # Block or alert specific commands
   #---------------------------------------------------
   switch -glob $client_data {
      "USER anonymous*" -
      "USER ftp*" {
         if { $::DEBUG } { log local0. "LOG: Anonymous login detected" }
         if { $::sec_block_anonymous_ftp } {
            TCP::respond "530 Guest user not allowed\r\n"
            reject
         }
      }
      "PASV*" {
         if { $::DEBUG } { log local0. "LOG: passive request detected" }
         if { $::sec_block_passive_ftp } {
            TCP::respond "502 Passive commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "RETR*" {
         if { $::DEBUG } { log local0. "LOG: RETR request detected" }
         if { $::sec_block_retr_cmd } {
            TCP::respond "550 RETR commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "STOR*" {
         if { $::DEBUG } { log local0. "LOG: STOR request detected" }
         if { $::sec_block_stor_cmd } {
            TCP::respond "550 STOR commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "RNFR*" -
      "RNTO*" {
         if { $::DEBUG } { log local0. "LOG: RENAME request detected" }
         if { $::sec_block_rename_cmd } {
            TCP::respond "550 RENAME commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "PORT*" {
         if { $::DEBUG } { log local0. "LOG: PORT request detected" }
         if { $::sec_block_port_cmd } {
            TCP::respond "550 PORT commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "RMD*" {
         if { $::DEBUG } { log local0. "LOG: RMD request detected" }
         if { $::sec_block_rmd_cmd } {
            TCP::respond "550 RMD commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "MKD*" {
         if { $::DEBUG } { log local0. "LOG: MKD request detected" }
         if { $::sec_block_mkd_cmd } {
            TCP::respond "550 MKD commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "LIST*" {
         if { $::DEBUG } { log local0. "LOG: LIST request detected" }
         if { $::sec_block_list_cmd } {
            TCP::respond "550 LIST commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "PWD*" {
         if { $::DEBUG } { log local0. "LOG: PWD request detected" }
         if { $::sec_block_pwd_cmd } {
            TCP::respond "550 PWD commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "CWD*" {
         if { $::DEBUG } { log local0. "LOG: CWD request detected" }
         if { $::sec_block_cwd_cmd } {
            TCP::respond "550 CWD commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
      "DELE*" {
         if { $::DEBUG } { log local0. "LOG: DELE request detected" }
         if { $::sec_block_dele_cmd } {
            TCP::respond "550 DELE commands not allowed\r\n"
            TCP::payload replace 0 [string length $client_data] ""
            return
         }
      }
   }
   TCP::release
   TCP::collect
}

Once a connection has been made to the backend server, the SERVER_CONNECTED event will be raised. In this event we will release the context and issue a collect for the server data. The server data will then be returned, and optionally logged, in the SERVER_DATA event.
when SERVER_CONNECTED {
   if { $::DEBUG } { log "server connected" }
   TCP::release
   TCP::collect
}

when SERVER_DATA {
   if { $::DEBUG } { log local0. "payload <[TCP::payload]>" }
   TCP::release
   TCP::collect
}

And finally, when the client closes its connection, the CLIENT_CLOSED event will be fired and we will log the fact that the session is over.

when CLIENT_CLOSED {
   if { $::DEBUG } { log local0. "client closed" }
}

Conclusion This article shows how one can use iRules to inspect, and optionally secure, an application based on command sets within that application. Not all application protocols behave like FTP (TELNET, for instance, sends one character at a time, and it's up to the proxy to consecutively request more data until the request is complete). But this should give you the tools you need to start inspection on your TCP-based application.

iRules 101 - #12 - The Session Command
One of the things that makes iRules so incredibly powerful is the fact that it is a true scripting language, or at least based on one. The fact that they give you the tools that TCL brings to the table - regular expressions, string functions, even things as simple as storing, manipulating and recalling variable data - sets iRules apart from the rest of the crowd. It also makes it possible to do some pretty impressive things with connection data and massaging/directing it the way you want it. Other articles in the series: Getting Started with iRules: Intro to Programming with Tcl | DevCentral Getting Started with iRules: Control Structures & Operators | DevCentral Getting Started with iRules: Variables | DevCentral Getting Started with iRules: Directing Traffic | DevCentral Getting Started with iRules: Events & Priorities | DevCentral Intermediate iRules: catch | DevCentral Intermediate iRules: Data-Groups | DevCentral Getting Started with iRules: Logging & Comments | DevCentral Advanced iRules: Regular Expressions | DevCentral Getting Started with iRules: Events & Priorities | DevCentral iRules 101 - #12 - The Session Command | DevCentral Intermediate iRules: Nested Conditionals | DevCentral Intermediate iRules: Handling Strings | DevCentral Intermediate iRules: Handling Lists | DevCentral Advanced iRules: Scan | DevCentral Advanced iRules: Binary Scan | DevCentral Sometimes, though, a simple variable won't do. You've likely heard of global variables in one of the earlier 101 series and read the warning there, and are looking for another option. So here you are, you have some data you need to store, which needs to persist across multiple connections. You need it to be efficient and fast, and you don't want to have to do a whole lot of complex management of a data structure. One of the many ways that you can store and access information in your iRule fits all of these things perfectly, little known as it may be. 
For this scenario I'd recommend the usage of the session command. There are three main permutations of the session command that you'll be using when storing and referencing data within the session table. These are:

session add: Stores user's data under the specified key for the specified persistence mode
session lookup: Returns user data previously stored using session add
session delete: Removes user data previously stored using session add

A simple example of adding some information to the session table would look like:

when CLIENTSSL_CLIENTCERT {
   set ssl_cert [SSL::cert 0]
   session add ssl $ssl_cert 90
}

By using the session add command, you can manually place a specific piece of data into the LTM's session table. You can then look it up later, by unique key, with the session lookup command and use the data in a different section of your iRule, or in another connection altogether. This can be helpful in different situations where data needs to be passed between iRules or events where a simple variable would not be available, such as mining SSL data from the connection events, as below:

when CLIENTSSL_CLIENTCERT {
   # Set results in the session so they are available to other events
   session add ssl [SSL::sessionid] [list [X509::issuer] [X509::subject] [X509::version]] 180
}

when HTTP_REQUEST {
   # Retrieve certificate information from the session
   set sslList [session lookup ssl [SSL::sessionid]]
   set issuer [lindex $sslList 0]
   set subject [lindex $sslList 1]
   set version [lindex $sslList 2]
}

Because the session table is optimized and designed to handle every connection that comes into the LTM, it's very efficient and can handle quite a large number of items. Also note that, as above, you can pass structured information such as Tcl lists into the session table and they will remain intact.
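To see the add/lookup/delete semantics in isolation, here is a hedged, in-memory Python sketch of a session-table-like structure. The SessionTable class and its methods are invented for illustration only; the real LTM session table is shared across connections and far more scalable:

```python
import time

class SessionTable:
    """Toy analog of 'session add/lookup/delete <mode> <key> ...' with a timeout."""

    def __init__(self):
        # separate named tables, like LTM's ssl and uie session tables
        self._tables = {"ssl": {}, "uie": {}}

    def add(self, mode, key, data, timeout):
        """Store data under key, expiring timeout seconds from now."""
        self._tables[mode][key] = (data, time.time() + timeout)

    def lookup(self, mode, key):
        """Return the stored data, or None if absent or expired."""
        entry = self._tables[mode].get(key)
        if entry is None:
            return None
        data, expires = entry
        if time.time() > expires:       # expired: behave as if never stored
            del self._tables[mode][key]
            return None
        return data

    def delete(self, mode, key):
        self._tables[mode].pop(key, None)

t = SessionTable()
t.add("ssl", "abcd1234", ["issuer", "subject", 3], 180)
print(t.lookup("ssl", "abcd1234"))  # -> ['issuer', 'subject', 3]
print(t.lookup("uie", "abcd1234"))  # -> None (different table, same key)
```

The last lookup also previews the point made below: the ssl and uie tables are distinct, so an add against one is invisible to a lookup against the other.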
Keep in mind, though, that there is currently no way to count the number of entries in the table with a certain key, so you'll have to build all of your own processing logic for now, where necessary. It's also important to note that there is more than one session table. If you look at the above example, you'll see that before we listed any key or data to be stored, we used the command session add ssl. Note the "ssl" portion of this command. This is a reference to which session table the data will be stored in. For our purposes here there are effectively two session tables: ssl and uie. Be sure you're accessing the same one in your session lookup section as you are in your session add section, or you'll never find the data you're after. This is pretty easy to keep straight, once you see it. It looks like:

session add uie ...
session lookup uie

Or:

session add ssl ...
session lookup ssl

You can find complete documentation on the session command in the iRules wiki, as well as some great examples that depict some more advanced iRules making use of the session command to great success. Check out Codeshare for more examples.

Getting Started with Bigsuds – a New Python Library for iControl
I imagine the progression for you, the reader, will be something like this in the first six- or seven-hundred milliseconds after reading the title: Oh cool! Wait, what? Don’t we already have like two libraries for python? Really, a third library for python? Yes. An emphatic yes. The first iteration of pycontrol (pc1) was based on the zsi library, which hasn’t been updated in years and was abandoned with the development of the second iteration, pycontrol v2 (pc2), which switched to the active and well-maintained suds library. Bigsuds, like pycontrol v2, is also based on the suds library. So why bigsuds? There are several advantages to using the bigsuds library. No need to specify which WSDLs to download In pycontrol v2, any iControl interface you wish to work with must be specified when you instantiate the BIG-IP, as well as specifying the local directory or loading from URL for the WSDLs. In bigsuds, just specify the host, username, and password (username and password optional if using test box defaults of admin/admin) and you’re good to go. Currently in pycontrol v2: >>> import pycontrol.pycontrol as pc >>> b = pc.BIGIP( ... hostname = '192.168.6.11', ... username = 'admin', ... password = 'admin', ... fromurl = True, ... wsdls = ['LocalLB.Pool']) >>> b.LocalLB.Pool.get_list() [/Common/p1, /Common/p2, /Common/p3, /Common/p5] And here in bigsuds: >>> import bigsuds >>> b = bigsuds.BIGIP(hostname = '192.168.6.11') >>> b.LocalLB.Pool.get_list() ['/Common/p1', '/Common/p2', '/Common/p3', '/Common/p5'] >>> b.GlobalLB.Pool.get_list() ['/Common/p2', '/Common/p1'] No need to define the typefactory for write operations. This was the most challenging aspect of pycontrol v2 for me personally. I would get them correct sometimes. Often I’d bang my head against the wall wondering what little thing I missed to prevent success. The cool thing with bigsuds is you are just passing lists for sequences and lists of dictionaries for structures. 
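To make that last point concrete, here is a small illustrative Python snippet (the member values are hypothetical) showing how what a suds typefactory would build as a 'Common.IPPortDefinitionSequence' is expressed in bigsuds as a plain list of dictionaries, which ordinary Python code can then manipulate:

```python
# A 'Common.IPPortDefinitionSequence' is just a list of dicts in bigsuds.
members = [
    {'address': '1.2.3.4', 'port': 80},   # each dict maps to a Common.IPPortDefinition
    {'address': '1.2.3.4', 'port': 81},
]

# Because these are native types, standard Python idioms apply directly:
addresses = [m['address'] for m in members]
print(addresses)  # -> ['1.2.3.4', '1.2.3.4']
```

No factory objects, no attribute stubs: the structure is visible in the literal itself.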
No object creation necessary before making the iControl calls. It's a thing of beauty. Creating a two member pool in pycontrol v2:

lbmeth = b.LocalLB.Pool.typefactory.create('LocalLB.LBMethod')
# This is basically a stub holder of member items that we need to wrap up.
mem_sequence = b.LocalLB.Pool.typefactory.create('Common.IPPortDefinitionSequence')
# Now we'll create some pool members.
mem1 = b.LocalLB.Pool.typefactory.create('Common.IPPortDefinition')
mem2 = b.LocalLB.Pool.typefactory.create('Common.IPPortDefinition')
# Note how this is 'pythonic' now. We set attributes against the objects, then
# pass them in.
mem1.address = '1.2.3.4'
mem1.port = 80
mem2.address = '1.2.3.4'
mem2.port = 81
# Create a 'sequence' of pool members.
mem_sequence.item = [mem1, mem2]
# Let's create our pool.
name = 'PC2' + str(int(time.time()))
b.LocalLB.Pool.create(pool_names = [name], lb_methods = \
    [lbmeth.LB_METHOD_ROUND_ROBIN], members = [mem_sequence])

In contrast, here is a two member pool in bigsuds:

>>> b.LocalLB.Pool.create_v2(['/Common/Pool1'], ['LB_METHOD_ROUND_ROBIN'], [[{'port':80, 'address':'1.2.3.4'}, {'port':81, 'address':'1.2.3.4'}]])

Notice above that I did not use the method parameters. They are not required in bigsuds, though you can certainly include them.
This could be written in the long form as: >>> b.LocalLB.Pool.create_v2(pool_names = ['/Common/Pool1'],lb_methods = ['LB_METHOD_ROUND_ROBIN'], members = [[{'port':80, 'address':'1.2.3.4'},{'port':81, 'address':'1.2.3.4'}]]) Standard python data types are returned There’s no more dealing with data returned like this: >>> p2.LocalLB.Pool.get_statistics(pool_names=['/Common/p2']) (LocalLB.Pool.PoolStatistics){ statistics[] = (LocalLB.Pool.PoolStatisticEntry){ pool_name = "/Common/p2" statistics[] = (Common.Statistic){ type = "STATISTIC_SERVER_SIDE_BYTES_IN" value = (Common.ULong64){ high = 0 low = 0 } time_stamp = 0 }, (Common.Statistic){ type = "STATISTIC_SERVER_SIDE_BYTES_OUT" value = (Common.ULong64){ high = 0 low = 0 } time_stamp = 0 }, Data is standard python types: strings, lists, dictionaries. That same data returned by bigsuds: >>> b.LocalLB.Pool.get_statistics(['/Common/p1']) {'statistics': [{'pool_name': '/Common/p1', 'statistics': [{'time_stamp': 0, 'type': 'STATISTIC_SERVER_SIDE_BYTES_IN', 'value': {'high': 0, 'low': 0}}, {'time_stamp': 0, 'type': 'STATISTIC_SERVER_SIDE_BYTES_OUT', 'value': {'high': 0, 'low': 0}} Perhaps not as readable in this form as with pycontrol v2, but far easier to work programmatically. Better session and transaction support George covered the benefits of sessions in his v11 iControl: Sessions article in fine detail, so I’ll leave that to the reader. Regarding implementations, bigsuds handles sessions with a built-in utility called with_session_id. Example code: >>> bigip2 = b.with_session_id() >>> bigip2.System.Session.set_transaction_timeout(99) >>> print b.System.Session.get_transaction_timeout() 5 >>> print bigip2.System.Session.get_transaction_timeout() 99 Also, with transactions, bigsuds has built-in transaction utilities as well. 
In the below sample code, creating a new pool that is dependent on a non-existent pool being deleted results in an error as expected, but also prevents the pool from the previous step from being created, as shown in the get_list method call. >>> try: ... with bigsuds.Transaction(bigip2): ... bigip2.LocalLB.Pool.create_v2(['mypool'],['LB_METHOD_ROUND_ROBIN'],[[]]) ... bigip2.LocalLB.Pool.delete_pool(['nonexistent']) ... except bigsuds.OperationFailed, e: ... print e ... Server raised fault: 'Exception caught in System::urn:iControl:System/Session::submit_transaction() Exception: Common::OperationFailed primary_error_code : 16908342 (0x01020036) secondary_error_code : 0 error_string : 01020036:3: The requested pool (/Common/nonexistent) was not found.' >>> bigip2.LocalLB.Pool.get_list() ['/Common/Pool1', '/Common/p1', '/Common/p2', '/Common/p3', '/Common/p5', '/Common/Pool3', '/Common/Pool2']

F5 maintained

Community member L4L7, the author of the pycontrol v2 library, is no longer with F5 and just doesn't have the cycles to maintain the library going forward. Bigsuds author Garron Moore, however, works in house and will fix bugs and enhance as time allows. Note that all iControl libraries are considered experimental and are not officially supported by F5 Networks. Library maintainers for all the languages will do their best to fix bugs and introduce features as time allows. Source is provided, though, and bugs can and are encouraged to be fixed by the community! Installing bigsuds Make sure you have suds installed and grab a copy of bigsuds (you'll need to log in) and extract the contents.
You can use the easy setup tools to install it to python's site-packages library like this:

jrahm@jrahm-dev:/var/tmp$ tar xvfz bigsuds-1.0.tar.gz
bigsuds-1.0/
bigsuds-1.0/setup.py
bigsuds-1.0/bigsuds.egg-info/
bigsuds-1.0/bigsuds.egg-info/top_level.txt
bigsuds-1.0/bigsuds.egg-info/requires.txt
bigsuds-1.0/bigsuds.egg-info/SOURCES.txt
bigsuds-1.0/bigsuds.egg-info/dependency_links.txt
bigsuds-1.0/bigsuds.egg-info/PKG-INFO
bigsuds-1.0/setup.cfg
bigsuds-1.0/bigsuds.py
bigsuds-1.0/MANIFEST.in
bigsuds-1.0/PKG-INFO
jrahm@jrahm-dev:/var/tmp$ cd bigsuds-1.0/
jrahm@jrahm-dev:/var/tmp/bigsuds-1.0$ python setup.py install

Doing it that way, you can just enter the python shell (or run your script) with a simple 'import bigsuds' command. If you don't want to install it that way, you can just extract bigsuds.py from the download, drop it in a directory of your choice, and add a path reference in the shell or script:

>>> import bigsuds
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named bigsuds
>>> import sys
>>> sys.path.append(r'/home/jrahm/dev/bigsuds-1.0')
>>> import bigsuds
>>>

Conclusion Garron Moore's bigsuds contribution is a great new library for python users. There is work to be done to convert your pycontrol v2 samples, but the flexibility and clarity in the new library make it worth it in this guy's humble opinion. A new page in the iControl wiki has been created for bigsuds developments. Please check it out, community! For now, I've converted a few scripts to bigsuds, linked in the aforementioned page as well as directly below:

Get GTM Pool Status
Get LTM Pool Status
Get or Set GTM Pool TTL
Create or Modify an LTM Pool

iRule Security 101 - #06 - HTTP Referer
In this article, I'm going to talk about the HTTP "Referer" header, how it's used, and how you can use iRules to ensure that an access request to a website is coming from where you want it to come from. Other articles in the series: iRule Security 101 – #1 – HTTP Version iRule Security 101 – #02 – HTTP Methods and Cross Site Tracing iRule Security 101 – #03 – HTML Comments iRule Security 101 – #04 – Masking Application Platform iRule Security 101 – #05 – Avoiding Path Traversal iRule Security 101 – #06 – HTTP Referer iRule Security 101 – #07 – FTP Proxy iRule Security 101 – #08 – Limiting POST Data iRule Security 101 – #09 – Command Execution First, let me say that I know that "Referer" is misspelled. For some reason, the authors of the HTTP specification (RFC 2616, section 14.36) didn't run a spell checker on the specification and now that every browser and web server has implemented this with the wrong spelling it's too late to change it. Take a look at the definition for it on dictionary.com and you'll see for yourself. Nothing like a dictionary 'dissing an Internet spec... Once you can get past the misspelling, the HTTP "Referer" header is defined as the following (RFC 2616, section 14.36) 14.36 Referer The Referer[sic] request-header field allows the client to specify, for the server's benefit, the address (URI) of the resource from which the Request-URI was obtained (the "referrer", although the header field is misspelled.) The Referer request-header allows a server to generate lists of back-links to resources for interest, logging, optimized caching, etc. It also allows obsolete or mistyped links to be traced for maintenance. The Referer field MUST NOT be sent if the Request-URI was obtained from a source that does not have its own URI, such as input from the user keyboard. 
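In other words, a server can inspect this header to verify where a request came from. The enforcement idea used later in this article is simple enough to sketch in a few lines of Python (a hedged illustration; the site name and the action labels are hypothetical, not an actual API):

```python
def check_referer(referer, allowed_prefix="http://www.mycoolblog.com/"):
    """Classify a request by its Referer header: same-site referers pass,
    empty referers get a fake success response, and anything else is
    redirected back to where it claims to have come from."""
    if referer.startswith(allowed_prefix):
        return "allow"
    if referer == "":
        return "fake-200"   # pretend success so simple bots move on
    return "redirect"       # bounce the client back to the referring site

print(check_referer("http://www.mycoolblog.com/first_post"))  # -> allow
print(check_referer(""))                                      # -> fake-200
print(check_referer("http://spam.example/"))                  # -> redirect
```

As the article notes, headers are trivially forged, so this is a filter for lazy automation, not a security boundary.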
So basically, whenever you click on a link from a website, causing a new HTTP request to be made, the URI of the website you are on will be passed in the HTTP request in the form of an HTTP header with the name of "Referer" and a value containing the source URI. Why is this important from a security perspective? I'll give just one example attack and a way to use Referer headers to help block against it. With the massive uptake of blogging by users on the Internet, comments are a useful way to get feedback on your ideas. Unfortunately blog spam, as it's called, has been on the rise. Several ways have been developed to protect against blog spam, including comment moderation, CAPTCHA (you know, when you type out the text that is displayed in randomly generated images), as well as online dynamic services such as Akismet that process the content of comments in a very similar way to common email SPAM services. CAPTCHA is the most common form of defense, but it is not foolproof, and spammers have found ways to build programs to defeat this system. So how does this fit with Referers? I'll get to that in just a minute... If you set a policy on your blog that the comment form can only be accessed by clicking on a feedback link on your blog, then you can make use of this fact by denying all requests that do not contain the URI of your blog post in the Referer header. Sure, there are ways to bypass this since HTTP headers are easily programmed into any HTTP client program. But there are also ways to trick the client into thinking that the post succeeded when it really didn't. Let's take a look at an example:

http://www.mycoolblog.com/ - blog site in question
http://www.mycoolblog.com/first_post - blog post page that is to be commented on
http://www.mycoolblog.com/PostComment.aspx - comment post form

Legitimate commenters will first visit the blog post page and then fill in the comment information and submit it to the PostComment.aspx form.
Spammers will try to bypass this step by pulling these pages into a client program, trying to determine the CAPTCHA image's text, and then formulating an HTTP POST request directly to the PostComment.aspx page. By enforcing that a Referer header from the same blog site comes in the PostComment.aspx request, we can block out those spammers.

when HTTP_REQUEST {
   switch -glob [HTTP::header "Referer"] {
      "http://www.mycoolblog.com/*" {
         # Allow request to go through...
      }
      "" {
         HTTP::respond 200 content ""
      }
      default {
         HTTP::redirect [HTTP::header "Referer"]
      }
   }
}

Basically, any request coming from http://www.mycoolblog.com will be allowed through. Any request with an empty Referer header will be immediately returned with an HTTP 200 response to trick the client into thinking a successful attempt was made, and any other Referers will be redirected back to the referral site. Caveats: This is by far not a universal blog spam solution, as each blogging engine handles comments differently. Some have a different URI for comment posting (as illustrated above) and others use POST data values on the same application page as the blog posting to indicate comment submissions. Also, it is easy for clients to spoof Referer values by manually adding the header in the requests. But it is a good start for those automated bots out there that are just searching for blogs to send their unwanted content to. Also, this solution does not address trackback/pingback spam, as those are typically programmatic submissions from references in other blogs. Conclusions: Blog spam was just an example of the type of application security issue that could be addressed by making use of the HTTP Referer header. Hopefully this article has provided some food for thought into how you can use the Referer header to your advantage in protecting your applications.

iControl 101 - #05 - Exceptions
When designing the iControl API, we had two choices with regards to API design. The first option was to have our methods return status codes as return values and pass outbound data through "out" parameters. The other option was to make use of exception handling as a way to return non-success results, freeing the return value for outbound data. This article will discuss why we went with the latter and how you can build exception handling into your client application to handle the cases where your method calls fail.

Camp 1: return codes

As I mentioned above, there are two camps for API design. The first are the ones that return status codes as return values and use external error methods to return error details for a given error code. For you developers out there who still remember your "C" programming days, this may look familiar:

```c
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>

struct stat sb;
char dirname[] = "c:\\somefile.txt";
if (0 != stat(dirname, &sb)) {
    printf("Problem with file '%s'; error: %s\n", dirname, strerror(errno));
}
```

You'll notice that the "stat" method to determine the file status returns an integer that is zero on success. When it's non-zero, a global variable (errno) is set indicating the error number, and the "strerror" method can then be called with that error number to retrieve a human-readable error string. There is a problem with this approach, as illustrated by the "semipredicate problem", in which users of the method need to write extra code to distinguish normal return values from erroneous ones.

Camp 2: Exceptions

The other option for status returns is to make use of exception handling. Exception handling relies on the fact that when an error condition occurs, the method call does not return via its standard return logic; instead, information on the exception is stored and the call stack is unwound until a handler for that exception is found.
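The contrast between the two camps can be sketched in a few lines of Python. This is an illustration of the pattern only, not iControl itself; the file-size lookup is just a stand-in operation that can fail:

```python
import os

# Camp 1: return-code style -- status in the return value, data in a
# second slot; callers must check the status before trusting the data.
def get_size_rc(path):
    try:
        return 0, os.path.getsize(path)
    except OSError:
        return -1, None

# Camp 2: exception style -- the return value is pure data; failures
# unwind to the nearest handler instead of overloading the return value.
def get_size_ex(path):
    return os.path.getsize(path)  # raises OSError on failure
```

With the exception style, the happy path reads naturally and error handling is consolidated in one place, which is exactly the property iControl takes advantage of.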
This code sample in C# is an example of making use of exceptions to track errors:

```csharp
try
{
    Microsoft.Win32.RegistryKey cu = Microsoft.Win32.Registry.CurrentUser;
    Microsoft.Win32.RegistryKey subKey = cu.OpenSubKey("some_bogus_path");
}
catch (Exception ex)
{
    Console.WriteLine("Exception: " + ex.Message.ToString());
}
```

iControl works with Exceptions

Luckily for us, the SOAP specification takes the exception model into account by defining an alternative to the SOAP response. A SOAPFault can be used to return error information for those cases where the method call cannot be completed due to invalid arguments or other system configuration issues. A SOAPFault for an invalid parameter to Networking::VLAN::get_vlan_id() looks like this:

```xml
<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <SOAP-ENV:Body>
    <SOAP-ENV:Fault>
      <faultcode xsi:type="xsd:string">SOAP-ENV:Server</faultcode>
      <faultstring xsi:type="xsd:string">Exception caught in Networking::VLAN::get_vlan_id()
Exception: Common::OperationFailed
        primary_error_code   : 16908342 (0x01020036)
        secondary_error_code : 0
        error_string         : 01020036:3: The requested VLAN (foo) was not found.</faultstring>
    </SOAP-ENV:Fault>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```

The faultcode element indicates that the fault occurred on the server (i.e., it wasn't a client connectivity issue) and the faultstring contains the details. You may ask why we include all our fault data in the return string and not in the new SOAPFault elements defined in SOAP v1.2. Well, when we first released our iControl interfaces, SOAP v1.0 was just coming out and they were not defined yet. At this point we cannot change our fault format for risk of breaking backward compatibility in existing iControl applications. An added benefit of using exceptions is that it makes client code much cleaner compared to using "out" type parameters.
Wouldn't you much rather have your code look like this:

```csharp
String [] pool_list = m_interfaces.LocalLBPool.get_list();
```

As opposed to this:

```csharp
String [] pool_list = null;
int rc = m_interfaces.LocalLBPool.get_list(pool_list);
```

Types of Exceptions

If you look in the Common module in the SDK, you'll find a list of the exceptions supported by the iControl methods. The most common of them is "OperationFailed", but in some cases you'll see AccessDenied, InvalidArgument, InvalidUser, NoSuchInterface, NotImplemented, and OutOfMemory crop up. The SDK documentation for each method lists the exceptions that can be raised if you need to narrow down what each method will give you.

Processing Faults

In almost all cases, it is sufficient to just know that an exception occurred. The use of the method will likely suggest the reason for a possible fault. If you are trying to create a pool and it fails, odds are you passed in an existing pool name as an input parameter. But for those situations where you need detailed info on why an exception happened, how do you go about it? Given that the exceptions we return are all encoded as text in the faultstring field, it would be handy to have some tools to help you decipher that data. Good thing you are reading this tech tip! Here is a sample C# class to parse and identify iControl exceptions. This could easily be ported to another language of your choice.
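As one example of such a port, here is a rough Python sketch of the same faultstring-parsing idea. This is my own illustration, not part of the iControl SDK; the field names simply follow the faultstring layout shown earlier:

```python
def parse_faultstring(faultstring):
    """Pull the interesting fields out of an iControl faultstring."""
    info = {}
    for line in faultstring.splitlines():
        line = line.strip()
        if line.startswith("Exception caught in"):
            info["location"] = line.replace("Exception caught in ", "")
        elif line.startswith("Exception:"):
            info["exception"] = line.replace("Exception: ", "")
        elif line.startswith("primary_error_code"):
            value = line.replace("primary_error_code : ", "")
            code, hexcode = value.split(" ", 1)
            info["primary_error_code"] = int(code)
            info["primary_error_code_hex"] = hexcode.strip()
        elif line.startswith("secondary_error_code"):
            info["secondary_error_code"] = int(
                line.replace("secondary_error_code : ", ""))
        elif line.startswith("error_string"):
            info["error_string"] = line.replace("error_string : ", "")
    return info
```

The C# class that follows performs the same line-by-line scan, with the results exposed as typed properties.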
```csharp
using System;
using System.Collections.Generic;
using System.Text;

namespace iControlProgram
{
    public class ExceptionInfo
    {
        #region Private Member Variables
        private Exception m_ex = null;
        private Type m_exceptionType = null;
        private String m_message = null;
        private String m_location = null;
        private String m_exception = null;
        private long m_primaryErrorCode = -1;
        private String m_primaryErrorCodeHex = null;
        private long m_secondaryErrorCode = -1;
        private String m_errorString = null;
        private bool m_IsiControlException = false;
        #endregion

        #region Public Member Accessors
        public System.Type ExceptionType
        {
            get { return m_exceptionType; }
            set { m_exceptionType = value; }
        }
        public String Message
        {
            get { return m_message; }
            set { m_message = value; }
        }
        public String Location
        {
            get { return m_location; }
            set { m_location = value; }
        }
        public String Exception
        {
            get { return m_exception; }
            set { m_exception = value; }
        }
        public long PrimaryErrorCode
        {
            get { return m_primaryErrorCode; }
            set { m_primaryErrorCode = value; }
        }
        public String PrimaryErrorCodeHex
        {
            get { return m_primaryErrorCodeHex; }
            set { m_primaryErrorCodeHex = value; }
        }
        public long SecondaryErrorCode
        {
            get { return m_secondaryErrorCode; }
            set { m_secondaryErrorCode = value; }
        }
        public String ErrorString
        {
            get { return m_errorString; }
            set { m_errorString = value; }
        }
        public bool IsiControlException
        {
            get { return m_IsiControlException; }
            set { m_IsiControlException = value; }
        }
        #endregion

        #region Constructors
        public ExceptionInfo() { }
        public ExceptionInfo(Exception ex) { parse(ex); }
        #endregion

        #region Public Methods
        public void parse(Exception ex)
        {
            m_ex = ex;
            ExceptionType = ex.GetType();
            Message = ex.Message.ToString();
            System.IO.StringReader sr = new System.IO.StringReader(Message);
            String line = null;
            try
            {
                while (null != (line = sr.ReadLine().Trim()))
                {
                    if (line.StartsWith("Exception caught in"))
                    {
                        Location = line.Replace("Exception caught in ", "");
                    }
                    else if (line.StartsWith("Exception:"))
                    {
                        Exception = line.Replace("Exception: ", "");
                    }
                    else if (line.StartsWith("primary_error_code"))
                    {
                        line = line.Replace("primary_error_code : ", "");
                        String[] sSplit = line.Split(new char[] { ' ' });
                        PrimaryErrorCode = Convert.ToInt32(sSplit[0]);
                        PrimaryErrorCodeHex = sSplit[1];
                    }
                    else if (line.StartsWith("secondary_error_code"))
                    {
                        SecondaryErrorCode = Convert.ToInt32(
                            line.Replace("secondary_error_code : ", ""));
                    }
                    else if (line.StartsWith("error_string"))
                    {
                        ErrorString = line.Replace("error_string : ", "");
                    }
                }
                IsiControlException = (null != Location) && (null != Exception);
            }
            catch (Exception) { }
        }
        #endregion
    }
}
```

And here's a usage of the above ExceptionInfo class in a snippet of code making use of the iControl Assembly for .NET:

```csharp
try
{
    m_interfaces.NetworkingVLAN.get_vlan_id(new string[] { "foobar" });
}
catch (Exception ex)
{
    ExceptionInfo exi = new ExceptionInfo(ex);
    if (exi.IsiControlException)
    {
        Console.WriteLine("Exception       : " + exi.Exception);
        Console.WriteLine("Location        : " + exi.Location);
        Console.WriteLine("Primary Error   : " + exi.PrimaryErrorCode
            + " (" + exi.PrimaryErrorCodeHex + ")");
        Console.WriteLine("Secondary Error : " + exi.SecondaryErrorCode);
        Console.WriteLine("Description     : " + exi.ErrorString);
    }
}
```

Conclusion

The flexibility of our exception implementation in iControl, along with some utilities to help process that information, should get you well on your way to building a rock-solid iControl application.

Getting Started with iApps: A Conceptual Overview
tl;dr - iApps provide admins and service desks a template solution for application deployment and management services.

Deploying and managing applications requires a lot of information across several disciplines. Architects have their holistic view of the application ecosystem and relevant lifecycles. Developers have their granular relationship with each application under their umbrella. Network admins make sure applications are behaving appropriately on the network instead of hijacking QoS classes or DNS. Then there are those missing details that no one wants to own until something breaks (looking at you, Java CA store). Originally F5 introduced deployment guides to help administrators understand the requirements and configurations needed to deploy popular applications behind BIG-IP. However, after the deployment was complete, those configurations were still managed through object types alone (e.g. virtual servers, pools, profiles, iRules, monitors). That can get quite tedious when you have hundreds of applications on a single BIG-IP stack. Someone somewhere said "Wouldn't it be nice if we could have an application-based view of all the different objects that help us deploy, manage, and secure each application"?

Enter iApps

Introduced in BIG-IP 11.0, iApps are a customizable framework for deploying and managing applications as services. Using out-of-the-box templates, administrators can deploy commonly-used applications such as Oracle, SAP, or Exchange by completing a series of questions that relate to their management and infrastructure needs. Rather than creating a bunch of virtual servers, followed by a handful of monitors, then a plethora of whatever, the responses to iApps questions create all of the BIG-IP objects needed to properly run your application. The iApps application service becomes the responsible manager of all virtual servers, monitors, policies, profiles, and iRules required to run it.
Consolidating these into a single view makes management and troubleshooting much easier to handle.

iApps Framework

iApps consist of two main elements: the template and the application services created by publishing a template. We'll dive into this in our next article, Getting Started With iApps: Components.

Templates: The base configuration object, which contains the layout and scripting used to configure and publish application instances. Some templates are prebuilt and included in BIG-IP, while others can be downloaded from DevCentral (not officially supported) or F5 support (certified and supported). Developer-oriented teams can also build custom templates for frequently used configurations or services.

Application Service: An application service is the result of using an iApps template to drive the configuration process. The administrator uses the configuration utility to create a new application service from the selected iApps template. Created objects are grouped into components of the application service and are managed accordingly.

The iApps Advantage

iApps are not for everyone. If you like keeping tribal control over your BIG-IP ecosystem or if you like naming virtual servers after your pets, iApps may not be for you. iApps do have an advantage if you want to templatize your deployment scenarios or wish to allow other administrators access to the services they manage. iApps reduce a lot of the mystique and intimidation a lengthy set of profiles, policies, and pools can sometimes cause the new or intermediate administrator. Above we show an example of building a highly available LDAP namespace for internal applications with the default built-in LDAP iApps template. By providing a certificate and answering a few questions, an LDAP environment is created for all of your internal directory authentication or lookup requirements. From there, modifying the configuration is as easy as selecting the Reconfigure tab in the existing application service.
Changing settings within iApps

Sometimes you just want a template to assist with application deployment, and from there you're perfectly fine managing the individual object types. The Component view will show you all objects affected by the application service, but if you try to apply a change directly, you'll receive an error. This is by design, because the iApps application service is the rightful owner of the system object, which shouldn't be edited directly. However, in cases where you don't need the iApp anymore or want more granular control over features the iApp may not expose, there is an option. Each application service published via iApps has a Properties tab which allows you to disable the Strict Updates method of management. Unchecked, each object is configurable on its own but will deviate from the template's last known state. Some administrators prefer to operate this way, only using the iApp as a deployment method, and that's perfectly fine. We're leaving your application management style and method up to you. As BIG-IP expands to cover more of the application landscape, people are increasingly taking advantage of more programmatic features, and iApps are no exception. Allowing our administrators to improve their ease of deployment and use is why iApps exist, and we'll continue to develop and improve these features. Our next article, Getting Started with iApps: Components, will dive into more detail on the properties required to create and manage iApps. Take the time to get to know iApps; they're your ally for keeping your applications in order.
In our last article, Getting Started with iApps: A Conceptual Overview, we breezed through what iApps are and how they can benefit the deployment and management of your nastiest applications. That's fine and dandy, but the real benefit of iApps comes through creation and customization of your very own iApps. To get you started, it's helpful to understand how iApps are put together, specifically the components involved. This article will get you familiar with the internal workings of iApps and, by doing so, lower the intimidation factor they sometimes cause.

The iApps Framework: Templates

iApp templates generate an application service from user answers related to their application requirements. Templates provide a procedural graphic interface and context-sensitive help for the administrator. A new deployment uses a single template to create an interface and guide the user through the configuration process, deploying the configuration when published. For commonly-deployed infrastructure, a well-defined template can be reused to create multiple application services. During deployment a template can:

Create new configuration objects
Reference existing BIG-IP configuration objects, i.e. profiles or monitors
Create additional configuration objects dynamically based on template requirements. For example, if an iApp creates a pool member with an IP address that does not already exist as a node, the BIG-IP will create the node automatically

Objects created by templates are identified as components of the application service, and can be viewed on the deployed application service's Components tab.
There are 5 sections that make up an iApp template:

Attributes - required BIG-IP modules; min/max BIG-IP version
Presentation - defines the user interface (the Application Presentation Language, APL, is used to build it)
Implementation - processing code; TCL and TMSH commands
Macro - creates an iRule dynamically
Help - HTML-based help tab and inline help (in the presentation section)

iApps Templates: Attributes

The template attributes defined by the developer designate what versions and modules the template needs to execute one or more of the commands defined in the presentation and implementation sections. These include:

System-supplied property: Indicates if the template is provided with the installed version of BIG-IP or if it was copied/imported from elsewhere. This is a read-only field.
Required BIG-IP Modules property: Sets the required modules that must be provisioned before the template can be used by an application service.
Minimum BIG-IP Version property: Displays the minimum version of BIG-IP the template supports. If the system does not meet the minimum requirement, the system posts an invalid-template alert and does not make that template available for deployment.
Maximum BIG-IP Version property: Displays the maximum version of software supported by the template. As with the minimum version above, BIG-IP will issue the same alert and mark the template unavailable for deployment.
Verification property: Indicates if the template is verified by F5 Support.

iApps Templates: Presentation

Using the Application Presentation Language (APL), the presentation section builds the user interface for the iApps template being deployed. The user is given a series of questions and options, and the answers provided determine the configuration objects created and/or referenced.
The APL describes what questions to ask, in what order to ask them, how the questions are presented (free form, drop down, lists...), and the names of the variables used to store configuration data prior to publishing the template. Below we can see the APL code defined in the F5 NIST SP800-53 RC4 template for the question related to access control.

```
sc10 "Idle Timeouts for Management Access -- AC-2(5), SC-10"
sc10.purpose ""
sc10.mins "How many minutes for each Idle Timeout value? "
sc10.mins_help ""
sc10.mins.timeout_gui "Management GUI"
sc10.mins.timeout_ssh "SSH"
sc10.mins.timeout_console "Console"
sc10.mins.timeout_tmsh "TMSH"

section sc10 {
    message purpose "Configure idle timeouts for management access facilities. For each facility the value zero selects a 12-hour timeout."
    row mins {
        string timeout_gui required display "small" validator "nonnegativenumber"
            default tcl {
                set tmp [tmsh::run_proc nist80053:my_item \
                    /sys httpd auth-pam-idle-timeout]
                return [expr {int(($tmp + 59) / 60)}]
            }
        string timeout_ssh required display "small" validator "nonnegativenumber"
            default tcl {
                set tmp [tmsh::run_proc nist80053:my_item \
                    /sys sshd inactivity-timeout]
                return [expr {int(($tmp + 59) / 60)}]
            }
        string timeout_console required display "small" validator "nonnegativenumber"
            default tcl {
                set tmp [tmsh::run_proc nist80053:my_item \
                    /sys global-settings console-inactivity-timeout]
                return [expr {int(($tmp + 59) / 60)}]
            }
        string timeout_tmsh required display "small" validator "nonnegativenumber"
            default tcl {
                set tmp [tmsh::run_proc nist80053:my_item \
                    /cli global-settings idle-timeout]
                return [expr {($tmp eq "disabled") ? 0 : $tmp}]
            }
    }
    optional (intro.help == "show") {
        message mins_help "For each field, type the number of minutes of idle time that should elapse before the session times out. Using a value of zero (0) sets the timeout to 12 hours (720 minutes)."
    }
}
```

iApps Template: Implementation

Using our ol' standby TCL, this section of the template is written in the TMSH scripting language. This is the programmatic heart of the iApp template; it builds the configuration needed to run your application as a unique service. We're using the same F5 NIST SP800-53 iApp template, and the implementation code shown below is responsible for converting the answers defined in the presentation section into BIG-IP usable commands.

```tcl
# various login timeout settings
if {[set tmp [expr {$::sc10__mins__timeout_gui * 60}]] == 0} {
    set tmp 43200
} elseif {$tmp < 120} {
    set tmp 120
}
iapp_conf "modify /sys httpd auth-pam-idle-timeout ${tmp}"
iapp_conf "modify /sys httpd auth-pam-dashboard-timeout on"

if {[set tmp [expr {$::sc10__mins__timeout_ssh * 60}]] == 0} {
    set tmp 43200
}
iapp_conf "modify /sys sshd inactivity-timeout ${tmp}"

if {[set tmp [expr {$::sc10__mins__timeout_console * 60}]] == 0} {
    set tmp 43200
}
iapp_conf "modify /sys global-settings console-inactivity-timeout ${tmp}"

if {[set tmp $::sc10__mins__timeout_tmsh] == 0} {
    set tmp 720
}
iapp_conf "modify /cli global-settings idle-timeout ${tmp}"
```

iApps Templates: Help

The help section is HTML and is used to give a larger overview of the iApp template being used. The snippet below shows inline help for specific questions the user might not be sure of. This can also be used to help define formatting for more complex questions, such as an LDAP baseDN or LDAP filter to be used.

```html
<h6>SC-7 - Boundary Protection</h6>
<p>This iApp lets you manage the IP subnets from which BIG-IP management may be accessed as well as services accessible on self IP's.</p>
<h6>SC-10 - Network Disconnect</h6>
<p>This iApp exposes several timeout settings for access to the system.</p>
<h6>SC-17 - Public Key Infrastructure Certificates</h6>
<p>This iApp does not manage TLS/SSL PKI certificates or cryptographic material as such. However, you can select the appropriate certificates and keys for single-ended and mutual authentication of connections to external authentication/directory services.</p>
```

The iApps Framework: Application Services

We've configured an iApps template and clicked Finished. This starts the deployment and creation process of converting your answers into a unique application service and, if we have good connections and all services are up, a functional application. A running application service can be administered by clicking Reconfigure after selecting it in the iApps Application Services menu. From there, the same questions previously answered during deployment are available for modification. This is useful when IPs or certificate profiles change and simple, quick updates are needed. The Application Service also provides a Components view that lists each object in use by the specific application service and, if applicable, its operational status. Below is a completed LDAP application service. Note the familiar red diamonds, because none of the LDAP servers referenced in this service are available. I'm a bad admin, I know. Also note that I created a CA and made wicked awesome certs just for this example (really I accidentally deleted my old virtual machine with the previous CA).

iApps Template: Availability and Support

Some of you who frequent DevCentral know there's a lot of information on iApps available in Codeshare and the Wiki (the wiki is managed by the iApps team, so make sure to check it out). However, DevCentral is not an official support channel, so here's a quick breakdown of where to get your variously supported/unsupported iApps:

iApps on the BIG-IP system: These are created and supported by F5. Enjoy.
iApps on F5 Support: If there is no RC status (release candidate) then they are supported by F5.
iApps on DevCentral's Codeshare: These are usually works-in-progress, release candidates, user submitted, or F5 submitted but not certified; none are officially supported. These are use-at-your-own-risk, but of course you have dev and test environments for just such occasions. You wouldn't DARE test out an unsupported iApp off DevCentral in your production environment, right???

That wasn't so bad, was it? Now that you have a better understanding of what goes into an iApp, they're not so intimidating. If you've done any iRule coding, you probably have an urge to start modifying one after finishing this article. Check out the different iApps out there on DevCentral and see if you have any upcoming deployments that may benefit from using iApps. Get your developers involved too, as iApps are a natural progression in your DevOp'ish plans. Tomorrow our very own Jason "Lord of the Dance and TCL" Rahm will cover modifying and creating iApps, so please stay tuned and get your dev hat ready.

iRule Security 101 - #08 - Limiting POST Data
With the increasing popularity of the internet for application delivery, user-supplied information is almost a given. The method of choice for uploaded content is HTML forms making use of the POST HTTP command. One can craft HTML forms to limit the length of input form fields, but there are tools available to "sniff" this POST data and generate requests that look for exploits in POST data limits. This article will describe how to interrogate an HTTP POST request and reject requests containing large HTTP POST data lengths.

Other articles in the series:

iRule Security 101 – #1 – HTTP Version
iRule Security 101 – #02 – HTTP Methods and Cross Site Tracing
iRule Security 101 – #03 – HTML Comments
iRule Security 101 – #04 – Masking Application Platform
iRule Security 101 – #05 – Avoiding Path Traversal
iRule Security 101 – #06 – HTTP Referer
iRule Security 101 – #07 – FTP Proxy
iRule Security 101 – #08 – Limiting POST Data
iRule Security 101 – #09 – Command Execution

Background

For those non-dev folks out there, whenever you type something into an edit box on a web page and click a "submit" type button, in most cases what's going on behind the scenes is an HTTP request made to the target server with the data packaged in an HTTP POST request. Here is an example of an HTML form that sends a single value named "username" to a web application with an HTTP POST command:

```html
<form name="loginForm" action="http://somesite.com/loginform.html" method="POST">
  UserName: <input type="text" name="username" maxlength="20"/>
  <input type="submit" value="Submit"/>
</form>
```

The generated HTTP request will look something like this if the user typed "Joe" into the username text box:

```
POST /loginform.html HTTP/1.1
Host: somesite.com
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)
Content-Type: application/x-www-form-urlencoded
Content-Length: 12

username=Joe
```

As you can see, the POST data contains the name=value pairs for all of the form elements.
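To see where that Content-Length of 12 comes from, here is a quick Python check. This is purely illustrative; a browser performs the same urlencoding of the form fields before sending the request:

```python
from urllib.parse import urlencode

# Reproduce the POST body a browser would send for the form above,
# and the Content-Length header value the server will see.
body = urlencode({"username": "Joe"})
content_length = len(body.encode("utf-8"))
```

Here `body` is "username=Joe" and `content_length` is 12, matching the request shown.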
The security issue

Let's assume the application processing this request assumed that since its HTML front end limited the text to 20 characters (as specified by the maxlength attribute in the input element), it only had to test for entries limited to 20 characters. Unit tests were run and passed, and the application was put live. Here comes the potential problem... Let's say hacker Fred comes in and thinks to himself: Hmmm, I wonder what happens if I send this application a multi-megabyte string for the username input value. Fred packages up his own request like this:

```
POST /loginform.html HTTP/1.1
Host: somesite.com
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)
Content-Type: application/x-www-form-urlencoded
Content-Length: 1000009

username=aaa...aaa
```

Where "aaa...aaa" is 1000000 characters long, and sends it to the application with the TCP client of his choice. Since the server application didn't test for large string sizes, odds are a memory error could occur, such as a buffer exploit, memory corruption, or something more extreme. Bad things can happen, such as server crashes and possibly data corruption and leakage of sensitive information.

The solution

iRules provide a very easy way to inspect the Content-Length of an HTTP request and block any requests that violate length constraints before the request even makes it to the application server. The following iRule selects an arbitrary limit of 1024 characters for the HTTP POST data:

```tcl
when RULE_INIT {
  set DEBUG 1
  set sec_http_max_post_data_length 1024
}
when HTTP_REQUEST {
  if { [HTTP::method] equals "POST" } {
    set len [HTTP::header "Content-Length"]
    if { [expr $len > $::sec_http_max_post_data_length] } {
      log local0. " SEC-ALERT: POST Length: uri=[HTTP::uri]; len=$len; max_len=$::sec_http_max_post_data_length"
      reject
    }
  }
}
```

If the Content-Length is larger than this limit, a message is sent to the system log and the connection is rejected.
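The decision the iRule makes can be modeled in a couple of lines of Python for illustration. The constant mirrors the rule's arbitrary 1024-character limit; this sketch runs nowhere near a BIG-IP, it just makes the logic explicit:

```python
# Limit mirrors the iRule's sec_http_max_post_data_length setting above
MAX_POST_DATA_LENGTH = 1024

def should_reject(method, content_length):
    """Return True when a request would be rejected by the iRule above."""
    return method == "POST" and int(content_length) > MAX_POST_DATA_LENGTH
```

Fred's 1000009-byte POST is rejected, while the legitimate 12-byte login request and any GET request pass through untouched.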
The application will never see the request, and you've just avoided a buffer exploit attempt.

Conclusion

This is not an end-all-be-all solution for form protection. In fact, there are entire products developed to perform deep field inspection of HTML forms. But this solution does provide an arbitrary first line of defense, or a quick way to protect against a newly discovered exploit until an application fix is made.

Getting Started with Splunk for F5
Pete Silva & Lori MacVittie both had blog posts last week featuring the F5 Application for Splunk, so I thought I'd take the opportunity to get Splunk installed and check it out. In this first part, I'll cover the installation process. This is one of the easiest installations I've ever written about--it's almost like I'm cheating or something.

Installing Splunk

My platform of choice for this article is Ubuntu, so I downloaded the 4.2.1 Debian package for 64-bit systems from the Splunk site. Installation is a one-step breeze:

```
dpkg -i /var/tmp/splunk-4.2.1-98165-linux-2.6-amd64.deb
```

After installation (defaulting to /opt/splunk), start the Splunk server:

```
/opt/splunk/bin/splunk start
```

I had to accept the license agreement during the startup process. Afterwards, I was instructed to point my browser to http://<server>:8000. I logged in with the default credentials (admin / changeme) and then was instructed to change my password, which I did (you can skip this step if you prefer). A pretty easy path to a completed installation. The browser should now be in the state shown below in Figure 1.

Installing Splunk for F5

Click on Manager in the upper right-hand corner of the screen, which should take you to the screen shown below in Figure 2. Next, click on Apps as shown below in Figure 3. At this point you have a choice. If you downloaded the Splunk for F5 app from splunkbase, you can click the "install app from file" button. I chose to install from the web, so I clicked the "find more apps online" button. This loaded a listing from splunkbase, with the Splunk for F5 app shown at the bottom of Figure 4 below. After clicking the "install Free" button, I had to enter my splunk.com credentials, and then the application installed. Splunk requested a restart, so I restarted and logged back in. My new session was returned to the online apps screen, so to get to my new F5 app, I clicked "back to search" in the upper left corner, which took me to the Search app home page.
Finally, in the upper right corner I selected App and then clicked "Splunk for F5 Security". This resulted in the screen shown below in Figure 5. Success! Now... what to do with it? How is this useful? Check back for part two next week... For some hints, check out the blogs I mentioned at the top of this article from Pete and Lori: Spelunking for Big Data, and Do You Splunk 2.0.

Other Related Articles

Do you Splunk?
ASM & Splunk integration
F5 Networks Partner Spotlight - Splunk
f5 ltm dashboard in splunk
Logging HTTP traffic to Splunk
Client IP Logging with F5 & Splunk