20 Lines or Less #73: VIPs, SMTP Blocking and Responses
What could you do with your code in 20 Lines or Less? That's the question I like to ask of the DevCentral community, and every time I go looking I find cool new examples that show just how flexible and powerful iRules can be without getting in over your head. This week, in what I believe to be 20LoL #73 (lost track... but we'll go with it), we've got a few more prime examples of iRules doing what iRules do best: solving problems. Whether you're trying to do some fancy footwork when it comes to routing traffic to multiple VIPs, or dealing with some SMTP requests you'd rather not have to, you'll find something handy in this week's 20 Lines or Less. Many thanks to the fine contributors of said examples, both F5ers and otherwise.

VIP redirect to another VIP
https://devcentral.f5.com/s/questions/vip-redirect-to-another-vip

A question that comes in fairly often is how to bounce traffic from one location to another. A very specific version of that question is "How do I get traffic from one of my VIPs to another?". This has been addressed in the Q&A section more than once, but I wanted to put it here in the 20LoL as well, as it seems to be a common theme. As Kevin Stewart nicely put it, there are basically two ways to do this. The first is a simple redirect, done either via the HTTP::redirect command or by responding with a 301. This tells the client to seek the resource they're requesting from a different host; all you have to do is supply the address of the VIP you want to bounce them to. The other, more direct fashion is to use the VIP-targeting-VIP function within LTM to make the destination an internal VIP. This looks a bit different and behaves a bit differently, but the client will never see the redirect, which can be handy at times. I've included Kevin's examples of each option here:

when HTTP_REQUEST {
    if { ...some condition... } {
        HTTP::redirect "https://somewhere.else.com"
    }
}

when HTTP_REQUEST {
    if { ...some condition... } {
        HTTP::respond 301 Location "https://somewhere.else.com"
    }
}

when HTTP_REQUEST {
    if { ...some condition... } {
        virtual internal-vs
    }
}

Block SMTP connections based on EHLO
https://devcentral.f5.com/s/questions/need-help-blocking-smtp-connections-based-off-ehlo-name

Pesky SMTP attackers getting you down? Coming from multiple different IP addresses? Looking for a way to stop them based on their connection strings? Well look no further, Cory's got your back. He shows us a simple way to check the EHLO info in an SMTP handshake to block unwanted bad guys from doing... bad guy things. Simple and clever and useful, very 20LoL-ish. Check it out.

when CLIENT_ACCEPTED {
    TCP::respond "220\r\n"
    TCP::collect
}

when CLIENT_DATA {
    set clientpayload [string tolower [TCP::payload]]
    if { $clientpayload contains "ehlo abcd-pc" } {
        reject
    }
}

No HTTP Response Fired?
https://devcentral.f5.com/s/questions/when-is-http_response-not-fired

This one is less a trick in code than a lesson in understanding how iRules play with other modules loaded on the LTM. A user was having some trouble with APM multi-domain and an iRule they wanted to use that fired on HTTP_RESPONSE. As Kevin so clearly explains, "The HTTP_RESPONSE is triggered for egress HTTP traffic through the box. The logon VIP in an APM multi-domain configuration doesn't trigger the HTTP_RESPONSE event because it handles all responses locally. Your best bet here, unfortunately, is to layer the APM logon VIP behind an LTM VIP that can see the HTTP response traffic from the APM VIP. You'd use a very simple iRule on the LTM VIP".
And here is said iRule, for all those who might run into a similar situation.

when HTTP_REQUEST {
    virtual [name of APM VIP]
}

when HTTP_RESPONSE {
    HTTP::header insert Strict-Transport-Security "max-age=31708800"
}
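One footnote: both the VIP-targeting example earlier and this layered APM arrangement assume the inner virtual server already exists. For reference, here's a minimal tmsh sketch of a plain internal virtual; the name, address, and pool are hypothetical, and exact syntax can vary by TMOS version:

tmsh create ltm virtual internal-vs destination 10.10.10.10:80 ip-protocol tcp profiles add { http } pool internal-pool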
The TAO of Tables - Part Three

This is a series of articles to introduce you to the many uses of tables. Previous installments: The TAO of Tables - Part One, The TAO of Tables - Part Two.

Last week we discussed how we could use tables to profile the execution of an iRule, so let's take it to the next level and profile the variables of an iRule. Say you have an iRule that has to run many iterations in testing and you want to make sure nothing is going awry. Wouldn't it be nice to be able to actually see what is being assigned to the variables in your iRule? Well, I will show you how you can... but first let's discuss variable scope.

As a general rule, when talking to people about variables I discuss scope and what it means to them. You write an iRule, time passes, another person writes an iRule performing some other function and attaches it to the same virtual. What happens if you both use the same variable name, such as count? Bad things, that's what, because variable scope is across all iRules attached to that virtual. You have contaminated each other's variable space. So where there is a likelihood of more than one iRule, I suggest authors come up with a project-related prefix to attach to their variable names. It can be something as simple as two characters: "p1_count". But it is enough to separate iRule variables into a project-related scope and prevent this kind of issue.

There are some other advantages to doing this as well. Imagine all your variables start with "p1_" except those which use random numbers to generate content. For those use something like "p1r_". We will get to why in a moment. Now we have a single common set of characters that link all your variables together. We can use this with a Tcl command called info to retrieve these variable names and use them in interesting ways...

when HTTP_REQUEST {
    foreach name [info locals p1_*] {
        table add -subtable $name [set $name] 0 indef 3600
        table add -subtable tracking $name 0 indef 3600
    }
}

This will create subtables based on the variable names. Each table entry will have a key that is the content of that variable. Since keys are unique, all the entries in a subtable will represent every unique value assigned to that variable over the last hour. Of course that timeframe can be adjusted by changing 3600 to something else, or even made indefinite. If you do make them indefinite, just make sure you add an iRule to delete the variable and tracking tables when you are finished, or they will sit in your F5 until it is rebooted, or forever in the case of an HA pair. We will get to that in another article (there is also a quick cleanup sketch at the end of this one). This iRule would be added after your main processing iRules to collect information on every unique value assigned to every single variable in your iRule solution.

How to retrieve this information now that we have stored it in a table? Attach the following iRule to any virtual to display a dump of the variable contents of your solution over the last hour.

when HTTP_REQUEST {
    if { [HTTP::uri] ne "/variables" } { return }
    set content "<html><head>Variable Dump</head><body>"
    foreach name [table keys -subtable tracking] {
        append content "<p>Variable: $name<br>"
        foreach key [table keys -subtable $name] {
            append content "$key<br>"
        }
    }
    append content "</body></html>"
    HTTP::respond 200 content $content
    event disable all
}

This will give you the variable dump shown below. When there is a lot of variable data it is not reasonable to check each and every unique value, but it's very useful for checking the pattern of variable content and looking for exceptions.
iRules ultimately are dealing with customer traffic, which can be unpredictable. This will allow you to skim through variable data looking for strange or unexpected content. I have used this to identify subtle iRule errors revealed only by strange data appearing in variable profiling.

Variable Dump

my_count
0 1 2 3 4 5 6 7 8 9 10

my_header
712 883 449 553 55 222 555

my_status
success: main code
success: alternate code
failure: no header
failure: no html

I hope by now you are starting to get an idea of what is possible with tables. The truth is you are only limited by what you can think up yourself. More on this next week! As always, please add comments or feedback below.
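As promised above, here is a minimal cleanup sketch built from the same table commands used in this article. Attach it to a virtual and request /cleanup to flush every variable subtable plus the tracking table itself; the URI and response text are placeholders of my choosing, not part of the original solution:

when HTTP_REQUEST {
    if { [HTTP::uri] ne "/cleanup" } { return }
    # remove every variable subtable recorded in the tracking table
    foreach name [table keys -subtable tracking] {
        table delete -subtable $name -all
    }
    # then remove the tracking table itself
    table delete -subtable tracking -all
    HTTP::respond 200 content "tables cleared"
    event disable all
}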
The TAO of Tables - Part Two

This is a series of articles to introduce you to the many uses of tables. Previous installment: The TAO of Tables - Part One.

Previously we talked about how tables can be used for counting. The next discussion in this series deals with structure and profiling of iRules. I encourage iRule authors to keep the logic flat. It's all well and good having beautiful indented arches of if, elseif, and else statements. The hard reality of iRules is we want to get in and get out fast. I encourage users to make use of the return command to provide early exits from their code. If we had the following:

if { [HTTP::basename] ends_with ".html" } {
    if { [HTTP::header exists x-myheader] } {
        if { [HTTP::header x-myheader] eq 1 } {
            # run my iRule code
        } else {
            # run my alternate code
        }
    }
}

It would become...

# no html
if { not ([HTTP::basename] ends_with ".html") } { return }

# no header
if { not ([HTTP::header exists x-myheader]) } { return }

if { [HTTP::header x-myheader] == 1 } {
    # run main iRule code
    return
}

# run alternate code

So in this case we have put the no-run conditionals at the front of the iRule, and the rest of the code is not executed unless it needs to be. While this is a simple case of making the code flat without any optimization, when you get to larger iRules you will have multiple no-run conditions which you can put up front to prevent the main code from ever executing. Testing would show you which are the most common, and they would be tested first. There are added benefits as well. It is easier to read this code; the decision logic is very simple: if you don't meet the conditions, then you're out!

But there is more to this, and here is where it gets really interesting. Now that you have discrete exit points using return, you can use them to begin profiling the iRule's behavior. Say for every exit point you set a variable which represents why the exit occurred.

when HTTP_REQUEST {
    if { not ([HTTP::basename] ends_with ".html") } {
        set status "failed:No html"
        return
    }
    if { not ([HTTP::header exists x-myheader]) } {
        set status "failed:No header"
        return
    }
    if { [HTTP::header x-myheader] == 1 } {
        # run my iRule code
        set status "success:Main"
        return
    }
    # run my alternate code
    set status "success:Alternate"
}

Why do all this? We can add another iRule which begins execution profiling. After the iRule above, add the following...

when HTTP_REQUEST {
    set lifetime 60
    set uid [expr {rand() * 10000}]
    table add -subtable [getfield $status ":" 1] $uid 1 indef $lifetime
    table add -subtable "$status" $uid 1 indef $lifetime
    table add -subtable tracking $status 1 indef 3600
}

First we create a unique identifier for this execution of the iRule called "uid". The first table command creates a subtable using the first part of the status string as the name. Since that is either "success" or "failed", there will be two such subtables. We add a unique entry using the "uid" as the key to one of those tables. This table entry effectively represents a single execution of your iRule. These entries have a lifetime of 60 seconds. The second and third table commands are related. The second creates unique entries in a subtable named from the entire status string, with a lifetime of 60 seconds. Since we do not know what the status strings may be in advance, the third table command records them in a tracking table. Now finally, add the following code to any virtual on the same F5.
when HTTP_REQUEST {
    if { [HTTP::uri] ne "/status" } { return }
    set content "iRule Status<p>"
    append content "iRule Success: [table keys -count -subtable "success"]<br>"
    append content "iRule Failure: [table keys -count -subtable "failed"]<p>"
    foreach name [table keys -subtable "tracking"] {
        append content "$name: [table keys -count -subtable $name]<br>"
    }
    HTTP::respond 200 content "<html><body>$content</body></html>"
    event disable all
}

Then navigate to /status on that virtual to get the execution profile of your iRule over the last minute. In this case 250 requests were sent through the iRule:

iRule Status
iRule Success: 234
iRule Failure: 16
failed:No header 1
failed:No html 15
success:Main 217
success:Alternate 20

So what happens here is we count the success and failed subtables and display the results. This will tell you how much traffic your iRule has successfully processed over the last minute. Then we display the count of each status subtable, and you now have the exact number of times your iRule exited at any given point in the last minute. From here you can do percentages (a quick sketch follows at the end of this article), and pretty much how you display this information is up to you. It is not just limited to iRule profiling. It could reflect useful information on any part of the information stream or the performance characteristics of your solution. You could even have an external monitoring system calling an XML-formatted version of the same information to track the effectiveness of your iRule.

I hope that you enjoyed this second installment, and next week we will talk about another kind of profiling. Please leave any comments you have below.
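As for the percentages mentioned above, here is one way it might look inside the display iRule; this is only a sketch of mine, reusing the same subtable names as the rest of this article:

set success [table keys -count -subtable "success"]
set failed [table keys -count -subtable "failed"]
set total [expr {$success + $failed}]
if { $total > 0 } {
    # report the success rate to one decimal place
    append content "Success rate: [format %.1f [expr {$success * 100.0 / $total}]]%<br>"
}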
v11 iControl: Transactions

Introduction

One of the most commonly requested features for iControl we've seen recently has been transaction support. It was implemented in TMSH for Version 10 and is now available for iControl in Version 11. Transactions are super handy, and anyone who has used them on other networking devices or databases can attest to their usefulness. There are many occasions where we want to make large sweeping changes, but want to interrupt the changes if any of them fails. This ensures that changes are made to the device cleanly, and that if one step fails, subsequent actions will not fail as a result and leave the device in an unusable state. Take for example a large web farm rollout: if we were building hundreds of virtual servers passing traffic to thousands of nodes, we probably would not want to do this manually, especially if we have to do it another 10 times at other datacenter locations. In a situation like this we would more than likely write an iControl script in the language of our choosing and loop through the creation of all these objects. This works very well until we get halfway through our deployment and hit a snag. Now the device is in a semi-configured state. We will either need to fix our script to pick up where we left off, or wipe the device and start over. Neither is an ideal situation. Wouldn't it be great if we could tell the BIG-IP, "give me all or nothing"?

Transaction Behaviors

Last week we talked about iControl sessions and their importance in separating concurrent requests from the same user. They were useful for setting variables (session timeout, query recursion, and active folder) on the BIG-IP, but didn't provide for much else. This is where transactions enter the conversation. Within an iControl session, transactions can be initiated and a number of requests queued prior to submitting the transaction for processing. There are a few transaction behaviors that you should be aware of before you start using them in all your iControl scripts. Transactions are initiated via the 'start_transaction' method. Any iControl requests that involve a configuration modification (adds, edits, deletes, etc.) will be queued by the BIG-IP. If the request is a query, however, the response will be returned immediately. If an invalid request (improper data structure, malformed SOAP request, etc.) is sent to the BIG-IP, the error will be returned immediately. If there is a modification request and the structure is valid, but there is an error in the content (pool doesn't exist, invalid profile, etc.), the error will not be returned until the transaction is submitted via the 'submit_transaction' method. Let's bulletize those behaviors:

- Only iControl requests that make modifications to the BIG-IP configuration are queued in transactions
- iControl queries (requests that do not make changes) are always served in real-time and not queued in transactions
- SOAP errors such as malformed requests will always be returned immediately
- iControl errors related to configuration changes will be returned only after the transaction is submitted
- Transactions will remain open until they are submitted or the session times out (default is 30 minutes)

Let's walk through a couple of scenarios to help us better visualize the behavior of transactions.

Creating And Deleting Pools Without Transactions

In this example we are going to forego transactions, create 3 pools, and immediately delete those 3 pools along with a random non-existent pool inserted in the middle of our delete requests.
The non-existent pool will not be found on the LTM, causing the iControl interface to throw an error and the script to exit.

#!/usr/bin/ruby

require 'rubygems'
require 'f5-icontrol'

# initiate iControl interfaces
bigip = F5::IControl.new('10.0.0.1', 'admin', 'admin', \
  ['System.Session', 'LocalLB.Pool']).get_interfaces

# an array of the pools we will be working with
pools = [ \
  { 'name' => 'my_http_pool_1', 'members' => [ '10.0.0.100', '10.0.0.101' ] }, \
  { 'name' => 'my_http_pool_2', 'members' => [ '10.0.0.102', '10.0.0.103', '10.0.0.104' ] }, \
  { 'name' => 'my_http_pool_3', 'members' => [ '10.0.0.105' ] } \
]

# create pools
pools.each do |pool|
  # assemble members array to reflect Common::AddressPort struct
  members = pool['members'].collect do |member|
    { 'address' => member, 'port' => 80 }
  end

  puts "Creating pool #{pool['name']}..."
  bigip['LocalLB.Pool'].create_v2( \
    [ pool['name'] ], \
    ['LB_METHOD_ROUND_ROBIN'], \
    [ members ] \
  )
end

# collect list of pools to delete, insert a random pool name that does not exist, and shuffle them
pool_names = pools.collect { |pool| pool['name'] }.push('random_pool_that_doesnt_exist').shuffle

# delete pools
pool_names.each do |pool_name|
  puts "Deleting pool #{pool_name}..."
  bigip['LocalLB.Pool'].delete_pool(pool_name.to_a)
end

Now, as we run our script, you'll see that it isn't so happy trying to remove that pool that doesn't exist. Notice that it is the 'delete_pool' method that is throwing the exception. Here's the output:

Creating pool my_http_pool_1...
Creating pool my_http_pool_2...
Creating pool my_http_pool_3...
Deleting pool my_http_pool_2...
Deleting pool random_pool_that_doesnt_exist...
: Exception caught in LocalLB::urn:iControl:LocalLB/Pool::delete_pool() (SOAP::FaultError)
Exception: Common::OperationFailed
  primary_error_code   : 16908342 (0x01020036)
  secondary_error_code : 0
  error_string         : 01020036:3: The requested pool (/Common/random_pool_that_doesnt_exist) was not found.
shell returned 1

If we now go and check the pools on our LTM, we'll see that we have two pools left, leaving us in an unfavorable half-broken state. This is where transactions shine: avoiding half-broken configurations.

Creating And Deleting Pools With One Transaction

Next we'll submit all of our changes as a single transaction. We'll again try to delete our 'random_pool_that_doesnt_exist', but this time the transaction should catch the error and prevent any of the changes from being submitted. While this may not be completely ideal, because our pools won't get created or deleted as we had hoped, it will prevent us from entering a bad configuration state. The end result will be that our configuration remains in the state it was in before we executed our iControl script. Here's the code:

#!/usr/bin/ruby

require 'rubygems'
require 'f5-icontrol'

# initiate iControl interfaces
bigip = F5::IControl.new('10.0.0.1', 'admin', 'admin', \
  ['System.Session', 'LocalLB.Pool']).get_interfaces

# an array of the pools we will be working with
pools = [ \
  { 'name' => 'my_http_pool_1', 'members' => [ '10.0.0.100', '10.0.0.101' ] }, \
  { 'name' => 'my_http_pool_2', 'members' => [ '10.0.0.102', '10.0.0.103', '10.0.0.104' ] }, \
  { 'name' => 'my_http_pool_3', 'members' => [ '10.0.0.105' ] } \
]

# start transaction
bigip['System.Session'].start_transaction

# create pools
pools.each do |pool|
  # assemble members array to reflect Common::AddressPort struct
  members = pool['members'].collect do |member|
    { 'address' => member, 'port' => 80 }
  end

  puts "Creating pool #{pool['name']}..."
  bigip['LocalLB.Pool'].create_v2( \
    [ pool['name'] ], \
    ['LB_METHOD_ROUND_ROBIN'], \
    [ members ] \
  )
end

# collect list of pools to delete, insert a random pool name that does not exist, and shuffle them
pool_names = pools.collect { |pool| pool['name'] }.push('random_pool_that_doesnt_exist').shuffle

# delete pools
pool_names.each do |pool_name|
  puts "Deleting pool #{pool_name}..."
  bigip['LocalLB.Pool'].delete_pool(pool_name.to_a)
end

# submit the transaction
bigip['System.Session'].submit_transaction

Notice which method throws the exception that causes our script to exit: System/Session::submit_transaction(). In the previous example the LocalLB/Pool::delete_pool() method was the culprit; now it's the transaction throwing the error. That's a good sign that it is doing its job. Here's the script's output:

Creating pool my_http_pool_1...
Creating pool my_http_pool_2...
Creating pool my_http_pool_3...
Deleting pool random_pool_that_doesnt_exist...
Deleting pool my_http_pool_1...
Deleting pool my_http_pool_2...
Deleting pool my_http_pool_3...
: Exception caught in System::urn:iControl:System/Session::submit_transaction() (SOAP::FaultError)
Exception: Common::OperationFailed
  primary_error_code   : 16908342 (0x01020036)
  secondary_error_code : 0
  error_string         : 01020036:3: The requested pool (/Common/random_pool_that_doesnt_exist) was not found.
shell returned 1

When we go and look at the pools on our LTM now, we'll notice that nothing was actually created or deleted, because our request to remove a pool that didn't exist failed.

Creating And Deleting Pools With Multiple Transactions

There may come a time when you want to submit different batches of changes as multiple transactions. With iControl it is as simple as submitting one and starting another. They will be executed linearly with the code. Just be careful that you don't start a transaction and inadvertently submit it prematurely elsewhere in your code. In this example we will combine all of our pool creation statements into one transaction and our deletions into another. The net result should be that we have 3 pools on the LTM at the end of the script's execution, as the 3 pools will be created without issue and the deletions will fail once again due to the non-existent pool.

#!/usr/bin/ruby

require 'rubygems'
require 'f5-icontrol'

# initiate iControl interfaces
bigip = F5::IControl.new('10.0.0.1', 'admin', 'admin', \
  ['System.Session', 'LocalLB.Pool']).get_interfaces

# an array of the pools we will be working with
pools = [ \
  { 'name' => 'my_http_pool_1', 'members' => [ '10.0.0.100', '10.0.0.101' ] }, \
  { 'name' => 'my_http_pool_2', 'members' => [ '10.0.0.102', '10.0.0.103', '10.0.0.104' ] }, \
  { 'name' => 'my_http_pool_3', 'members' => [ '10.0.0.105' ] } \
]

# start transaction for pool creations
bigip['System.Session'].start_transaction
puts "Starting pool creation transaction..."

# create pools
pools.each do |pool|
  # assemble members array to reflect Common::AddressPort struct
  members = pool['members'].collect do |member|
    { 'address' => member, 'port' => 80 }
  end

  puts "Creating pool #{pool['name']}..."
  bigip['LocalLB.Pool'].create_v2( \
    [ pool['name'] ], \
    ['LB_METHOD_ROUND_ROBIN'], \
    [ members ] \
  )
end

# submit the transaction for pool creations
bigip['System.Session'].submit_transaction
puts "Submitting pool creation transaction..."

# start transaction for pool deletions
bigip['System.Session'].start_transaction
puts "Starting pool deletion transaction..."
# collect list of pools to delete, insert a random pool name that does not exist, and shuffle them
pool_names = pools.collect { |pool| pool['name'] }.push('random_pool_that_doesnt_exist').shuffle

# delete pools
pool_names.each do |pool_name|
  puts "Deleting pool #{pool_name}..."
  bigip['LocalLB.Pool'].delete_pool(pool_name.to_a)
end

# submit the transaction for pool deletions
bigip['System.Session'].submit_transaction
puts "Submitting pool deletion transaction..."

Now when we look at the output from our script, we'll notice that there are two separate transactions occurring. The first executes without issue and creates the 3 pools on our LTM. The second transaction, however, fails due to trying to delete our now infamous 'random_pool_that_doesnt_exist'. Here's the output from our script:

Starting pool creation transaction...
Creating pool my_http_pool_1...
Creating pool my_http_pool_2...
Creating pool my_http_pool_3...
Submitting pool creation transaction...
Starting pool deletion transaction...
Deleting pool my_http_pool_3...
Deleting pool my_http_pool_2...
Deleting pool my_http_pool_1...
Deleting pool random_pool_that_doesnt_exist...
: Exception caught in System::urn:iControl:System/Session::submit_transaction() (SOAP::FaultError)
Exception: Common::OperationFailed
  primary_error_code   : 16908342 (0x01020036)
  secondary_error_code : 0
  error_string         : 01020036:3: The requested pool (/Common/random_pool_that_doesnt_exist) was not found.
shell returned 1

If we examine our LTM configuration now, we'll notice that there are three new pools configured. While our original instructions had been to create and then subsequently remove them all, this is not a complete failure. We were able to isolate those failures to a transaction and ensure that our LTM remained in a working state throughout the modifications we were making.

Conclusion

Transactions are one of the most exciting features of Version 11. They give developers and administrators a new level of control over their iControl applications. Making use of transactions can give your iControl applications a new layer of insulation against potential mission-critical mistakes. They take a minimal amount of time to implement and can save developers and engineers hours of headaches when things go haywire. Stay tuned for more Version 11 Tech Tips!
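One closing thought: in a real script you may not want a failed transaction to kill the whole run. Since a rolled-back transaction surfaces as a SOAP::FaultError (as seen in the output above), a simple rescue lets you log it and carry on. A sketch of mine, not part of the original scripts:

begin
  bigip['System.Session'].submit_transaction
rescue SOAP::FaultError => e
  # the whole transaction was rolled back; record why and continue
  puts "Transaction failed and was rolled back: #{e.message}"
end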
Multiple Certs, One VIP: TLS Server Name Indication via iRules

An age-old question that we've seen time and time again in the iRules forums here on DevCentral is "How can I use iRules to manage multiple SSL certs on one VIP?". The answer has historically been "I'm sorry, you can't." The reasoning is sound. One VIP, one cert, that's how it's always been. You can't do anything with the connection until the handshake is established and decryption is done on the LTM. We'd like to help, but we just really can't. That is... until now.

The TLS protocol has somewhat recently provided the ability to pass a "desired servername" as a value in the originating SSL handshake. Finally we have what we've been looking for: a way to add contextual server info during the handshake, thereby allowing us to say "cert x is for domain x" and "cert y is for domain y". Known to us mortals as "Server Name Indication" or SNI (hence the title), this functionality is paramount for a device like the LTM that can regularly benefit from hosting multiple certs on a single IP. We should be able to pull out this information and choose an appropriate SSL profile, with a cert that corresponds to the servername value that was sent. Now all we need is some logic to make this happen.

Lucky for us, one of the many bright minds in the DevCentral community has whipped up an iRule to show how you can finally tackle this challenge head on. Because Joel Moses, the shrewd mind and DevCentral MVP behind this example, has already done a solid write-up, I'll quote liberally from his fine work and add some additional context where fitting. Now on to the geekery:

First things first, you'll need to create a mapping of which servernames correlate to which certs (client SSL profiles in LTM's case). This could be done in any manner, really, but the most efficient, from both a resource and a management perspective, is to use a class. Classes, also known as DataGroups, are name->value pairs that will allow you to easily retrieve the data later in the iRule. Quoting Joel:

Create a string-type datagroup to be called "tls_servername". Each hostname that needs to be supported on the VIP must be input along with its matching clientssl profile. For example, for the site "testsite.site.com" with a ClientSSL profile named "clientssl_testsite", you should add the following values to the datagroup.

String: testsite.site.com
Value: clientssl_testsite

Once you've finished inputting the different server->profile pairs, you're ready to move on to pools. It's very likely that since you're now managing multiple domains on this VIP you'll also want to be able to handle multiple pools to match those domains. To do that you'll need a second mapping that ties each servername to the desired pool. This could again be done in any format you like, but since it's the most efficient option and we're already using it, classes make the most sense here. Quoting from Joel:

If you wish to switch pool context at the time the servername is detected in TLS, then you need to create a string-type datagroup called "tls_servername_pool". You will input each hostname to be supported by the VIP and the pool to direct the traffic towards. For the site "testsite.site.com" to be directed to the pool "testsite_pool_80", add the following to the datagroup:

String: testsite.site.com
Value: testsite_pool_80

If you don't, that's fine, but realize all traffic from each of these hosts will be routed to the default pool, which is very likely not what you want.
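If you would rather build those two data groups from the command line than the GUI, something along these lines should do it on TMOS v11 and later; treat the syntax as a sketch and adjust for your version (on v10 the bigpipe class equivalent applies):

tmsh create ltm data-group internal tls_servername type string records add { testsite.site.com { data clientssl_testsite } }
tmsh create ltm data-group internal tls_servername_pool type string records add { testsite.site.com { data testsite_pool_80 } }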
Now then, we have two classes set up to manage the mappings of servername->SSL profile and servername->pool; all we need is some app logic in line to do the management and provide each inbound request with the appropriate profile and cert. This is done, of course, via iRules. Joel has written up one heck of an iRule, which is available in the CodeShare (here) in its entirety along with his solid write-up, but I'll also include it here in-line, as is my habit. Effectively what's happening is the iRule is parsing through the data sent throughout the SSL handshake process and searching for the specific TLS servername extension, which is the bit that will allow us to do the profile-switching magic. He's written it up to fall back to the default client SSL profile and pool, so it's very important that both of these things exist on your VIP, or you may likely find yourself with unhappy users. One last caveat before the code: not all browsers support Server Name Indication, so be careful not to implement this unless you are very confident that most, if not all, users connecting to this VIP will support SNI. For more info on testing for SNI compatibility and a list of browsers that do and don't support it, click through to Joel's awesome CodeShare entry; I've already plagiarized enough. So finally, the code. Again, my hat is off to Joel Moses for this outstanding example of the power of iRules. Keep at it Joel, and thanks for sharing!

when CLIENT_ACCEPTED {
    if { [PROFILE::exists clientssl] } {

        # We have a clientssl profile attached to this VIP but we need
        # to find an SNI record in the client handshake. To do so, we'll
        # disable SSL processing and collect the initial TCP payload.

        set default_tls_pool [LB::server pool]
        set detect_handshake 1
        SSL::disable
        TCP::collect

    } else {

        # No clientssl profile means we're not going to work.

        log local0. "This iRule is applied to a VS that has no clientssl profile."
        set detect_handshake 0

    }
}

when CLIENT_DATA {

    if { ($detect_handshake) } {

        # If we're in a handshake detection, look for an SSL/TLS header.

        binary scan [TCP::payload] cSS tls_xacttype tls_version tls_recordlen

        # TLS is the only thing we want to process because it's the only
        # version that allows the servername extension to be present. When we
        # find a supported TLS version, we'll check to make sure we're getting
        # only a Client Hello transaction -- those are the only ones we can pull
        # the servername from prior to connection establishment.

        switch $tls_version {
            "769" -
            "770" -
            "771" {
                if { ($tls_xacttype == 22) } {
                    binary scan [TCP::payload] @5c tls_action
                    if { not (($tls_action == 1) && ([TCP::payload length] > $tls_recordlen)) } {
                        set detect_handshake 0
                    }
                }
            }
            default {
                set detect_handshake 0
            }
        }

        if { ($detect_handshake) } {

            # If we made it this far, we're still processing a TLS client hello.
            #
            # Skip the TLS header (43 bytes in) and process the record body. For TLS/1.0 we
            # expect this to contain only the session ID, cipher list, and compression
            # list. All but the cipher list will be null since we're handling a new transaction
            # (client hello) here. We have to determine how far out to parse the initial record
            # so we can find the TLS extensions if they exist.
            set record_offset 43
            binary scan [TCP::payload] @${record_offset}c tls_sessidlen
            set record_offset [expr {$record_offset + 1 + $tls_sessidlen}]
            binary scan [TCP::payload] @${record_offset}S tls_ciphlen
            set record_offset [expr {$record_offset + 2 + $tls_ciphlen}]
            binary scan [TCP::payload] @${record_offset}c tls_complen
            set record_offset [expr {$record_offset + 1 + $tls_complen}]

            # If we're in TLS and we've not parsed all the payload in the record
            # at this point, then we have TLS extensions to process. We will detect
            # the TLS extension package and parse each record individually.

            if { ([TCP::payload length] >= $record_offset) } {
                binary scan [TCP::payload] @${record_offset}S tls_extenlen
                set record_offset [expr {$record_offset + 2}]
                binary scan [TCP::payload] @${record_offset}a* tls_extensions

                # Loop through the TLS extension data looking for a type 00 extension
                # record. This is the IANA code for server_name in the TLS transaction.

                for { set x 0 } { $x < $tls_extenlen } { incr x 4 } {
                    set start [expr {$x}]
                    binary scan $tls_extensions @${start}SS etype elen
                    if { ($etype == "00") } {

                        # A servername record is present. Pull this value out of the packet data
                        # and save it for later use. We start 9 bytes into the record to bypass
                        # type, length, and SNI encoding header (which is itself 5 bytes long), and
                        # capture the servername text (minus the header).

                        set grabstart [expr {$start + 9}]
                        set grabend [expr {$elen - 5}]
                        binary scan $tls_extensions @${grabstart}A${grabend} tls_servername
                        set start [expr {$start + $elen}]
                    } else {

                        # Bypass all other TLS extensions.

                        set start [expr {$start + $elen}]
                    }
                    set x $start
                }

                # Check to see whether we got a servername indication from TLS. If so,
                # make the appropriate changes.

                if { [info exists tls_servername] } {

                    # Look for a matching servername in the Data Group and pool.

                    set ssl_profile [class match -value [string tolower $tls_servername] equals tls_servername]
                    set tls_pool [class match -value [string tolower $tls_servername] equals tls_servername_pool]

                    if { $ssl_profile == "" } {

                        # No match, so we allow this to fall through to the "default"
                        # clientssl profile.

                        SSL::enable
                    } else {

                        # A match was found in the Data Group, so we will change the SSL
                        # profile to the one we found. Hide this activity from the iRules
                        # parser.

                        set ssl_profile_enable "SSL::profile $ssl_profile"
                        catch { eval $ssl_profile_enable }
                        if { not ($tls_pool == "") } {
                            pool $tls_pool
                        } else {
                            pool $default_tls_pool
                        }
                        SSL::enable
                    }
                } else {

                    # No match because no SNI field was present. Fall through to the
                    # "default" SSL profile.

                    SSL::enable
                }

            } else {

                # We're not in a handshake. Keep on using the currently set SSL profile
                # for this transaction.

                SSL::enable
            }

            # Hold down any further processing and release the TCP session further
            # down the event loop.

            set detect_handshake 0
            TCP::release
        } else {

            # We've not been able to match an SNI field to an SSL profile. We will
            # fall back to the "default" SSL profile selected (this might lead to
            # certificate validation errors on non SNI-capable browsers).
            set detect_handshake 0
            SSL::enable
            TCP::release

        }
    }
}
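A quick way to exercise the rule once it's in place is OpenSSL's built-in client, which can send the SNI extension via the -servername flag on any reasonably modern build; the VIP address here is hypothetical:

openssl s_client -connect 10.10.10.10:443 -servername testsite.site.com

Watch which certificate comes back as you vary the servername against your datagroup entries.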
iCall Triggers - Invalidating Cache from iRules

iCall is BIG-IP's all-new (as of BIG-IP version 11.4) event-based automation system for the control plane. Previously, I wrote up the iCall system overview, as well as an article on the use of a periodic handler for automating backups. This article will feature the use of the triggered iCall handler to allow a user to submit an HTTP request to invalidate the cache served up for an application managed by the Application Acceleration Manager.

Starting at the End

Before we get to the solution, I'd like to address the use case for invalidating cache. In many cases, the team responsible for an application's health is not the network services team, which is the typical point of access to the BIG-IP. For large organizations with process overhead in generating tickets, invalidating cache can take time. A lot of time. So the request has come in quite frequently: "How can I invalidate cache remotely?" Or even more often, "Can I invalidate cache from an iRule?" Others have approached this via script, and it has been absolutely possible previously with iRules, albeit through very ugly and very-not-recommended ways. In the end, you just need to issue one TMSH command to invalidate the cache for a particular application:

tmsh::modify wam application content-expiration-time now

So how do we get a signal from iRules to instruct BIG-IP to run a TMSH command? This is where iCall trigger handlers come in. Before we hop back to the beginning and discuss the iRule, the process in short: an iRule sets an iStats key, an iStats trigger maps that key to a named event, and a triggered handler runs a TMSH script in response.

Back to the Beginning

The iStats interface was introduced in BIG-IP version 11 as a way to make data accessible to both the control and data planes. I'll use this to pass the data to the control plane. In this case, the only data I need to pass is to set a key. To set an iStats key, you need to specify:

- Class
- Object
- Measure type (counter, gauge, or string)
- Measure name

I'm not measuring anything, so I'll use a string starting with "WA policy string" followed by the name of the policy. You can be explicit or allow users to pass it in a query parameter as I'm doing in this iRule below:

when HTTP_REQUEST {
    if { [HTTP::path] eq "/invalidate" } {
        set wa_policy [URI::query [HTTP::uri] policy]
        if { $wa_policy ne "" } {
            ISTATS::set "WA policy string $wa_policy" 1
            HTTP::respond 200 content "App $wa_policy cache invalidated."
        } else {
            HTTP::respond 200 content "Please specify a policy /invalidate?policy=policy_name"
        }
    }
}

Setting the key this way will allow you to create as many triggers as you have policies. I'll leave it as an exercise for the reader to make that step more dynamic.

Setting the Trigger

With iStats-based triggers, you need linkage to bind the iStats key to an event name, wacache in my case. You can also set thresholds and durations, but again, since I am not measuring anything, that isn't necessary.

sys icall istats-trigger wacache_trigger_istats {
    event-name wacache
    istats-key "WA policy string wa_policy_name"
}

Creating the Script

The script is very simple. Clear the cache with the TMSH command, then remove the iStats key.

sys icall script wacache_script {
    app-service none
    definition {
        tmsh::modify wam application dc.wa_hero content-expiration-time now
        exec istats remove "WA policy string wa_policy_name"
    }
    description none
    events none
}

Creating the Handler

The handler is the glue that binds the event I created in the iStats trigger. When the handler sees an event named wacache, it'll execute the wacache_script iCall script.
sys icall handler triggered wacache_trigger_handler {
    script wacache_script
    subscriptions {
        messages {
            event-name wacache
        }
    }
}

Notes on Testing

Add this command to your arsenal: tmsh generate sys icall event <event-name> context none, where event-name in my case is wacache. This allows you to troubleshoot the handler and script without worrying about the trigger. Note that convergence happens automatically in production; by default, triggers are evaluated on their own schedule. And this one: tmsh modify sys db log.evrouted.level value Debug. Just note that the default is Notice, for when you're all done troubleshooting.
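Once everything is wired up, exercising the iRule from a client machine is a one-liner. Note the policy value has to match whatever name your trigger's istats-key carries (wa_policy_name in the configs above), and the virtual server address here is hypothetical:

curl "http://10.0.0.100/invalidate?policy=wa_policy_name"

A successful run returns the iRule's "App wa_policy_name cache invalidated." response.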
Client Cert Fingerprint Matching via iRules

Client cert authentication is not a new concept on DevCentral; it's something that has been covered before in the forums, wikis, and Tech Tips. Generally speaking it means that you're receiving a request from a client, and want to authenticate them, as is often the case. Rather than asking for a user ID and password, though, you're requesting a certificate that only authorized clients should have access to. In this way you're able to allow seamless access to a resource without forcing a challenge-response on the user or application, while still ensuring security is enforced. That's the short version.

So that's cert authentication, but what is a cert fingerprint? A cert fingerprint is exactly what it sounds like: a unique identifier for a particular certificate. In essence it's a shorter way to identify a given certificate without having the entirety of the cert. A fingerprint is created by taking a digest of the entire DER-encoded certificate and hashing it in MD5 or SHA-1 format. Fingerprints are often represented as hex strings to be more human-readable. The process looks something like this:

Command: openssl x509 -in cert.pem -noout -fingerprint
Output: 3D:95:34:51:24:66:33:B9:D2:40:99:C0:C1:17:0B:D1

This can be useful in many cases, especially when wanting to store a list of viable certificates without storing the entirety of the certs themselves. Say, for instance, you want to enable client cert authentication wherein a user connects to your application and both client and server present certificates. The authentication process would happen normally, and assuming everything checked out, access would be granted to the connecting user. What if, however, you only wanted to allow clients with a certain list of certs access? Sure, you could store the entire client certificate in a database somewhere and do a full comparison each time a request is made, but that's both a bit of a security issue, by having the individual client certificates stored in the auth DB itself, and a hassle. A simpler method for limiting which certs to allow would be to store the fingerprints instead. Since the fingerprints are unique to the certificates they represent, you can use them to enforce a limitation on which client certificates to allow access to given portions of your application.

Why do I bring this up? Obviously there's an iRule for that. Credit for this example goes to one of our outstanding Field Engineers out of Australia, Cameron Jenkins. Cameron ended up whipping together an iRule to solve this exact problem for a customer and was kind enough to share the info with us here at DevCentral. Below is a sanitized version of said iRule:

when CLIENTSSL_HANDSHAKE {
    set subject_dn [X509::subject [SSL::cert 0]]
    set cert_hash [X509::hash [SSL::cert 0]]
    set cSSLSubject [findstr $subject_dn "CN=" 0 ","]

    log local0. "Subject = $subject_dn, Hash = $cert_hash and $cSSLSubject"

    # Check if the client certificate contains the correct CN and thumbprint from the list
    set Expected_hash [class lookup $cSSLSubject mythumbprints]

    if { $Expected_hash != $cert_hash } {
        log local0. "Thumbprint presented doesn't match mythumbprints. Expected Hash = $Expected_hash, Hash received = $cert_hash"
        reject
    }
}

As you can see, the iRule is quite reasonable for performing such a complex task. Effectively what's happening here is we're storing the relevant data, the cert's subject and fingerprint (or hash, as it's referred to in our X509 commands), in local variables.
Then we're performing a class lookup against a data group that's filled with all of the valid fingerprints that we want to have access to our application. We're using the subject to perform the lookup, and the result will be what we expect the fingerprint of that certificate to be, based on the subject supplied. Then, if that expected hash doesn't match the actual hash presented by the client, we reject the connection, thereby enforcing access as desired.
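The article doesn't show how the mythumbprints data group gets built, so here's a hypothetical tmsh sketch; the CN key and its fingerprint value are made up (reusing the sample fingerprint from above), and note the key must match exactly what findstr extracts, including the "CN=" prefix:

tmsh create ltm data-group internal mythumbprints type string records add { "CN=client1.example.com" { data "3D:95:34:51:24:66:33:B9:D2:40:99:C0:C1:17:0B:D1" } }

Also make sure the fingerprint format you store matches what X509::hash actually returns on your TMOS version.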
iControl REST: Working with Pool Members

Since iControl REST is the new kid on the block, it's bound to start getting some of the same questions we've addressed with traditional iControl. One of these oft-asked and misunderstood questions is about enabling/disabling pool members. The original poster in this case was actually facing a syntax issue with the allowable state values in the JSON payload, but I figured I'd kill two birds with one stone here and address both concerns going forward. DevCentral member Rudi posted in Q&A asking for some assistance with disabling a pool member. He was able to change some properties on the pool member, but trying to change the state resulted in this error:

{"code":400,"message":"invalid property value \"state\":\"up\"","errorStack":[]}

The REST interface is complaining about an invalid property value, namely the "up" state. If you do a query against an "up" pool member, you can see that the state is "unchecked" instead of up.

{
    "state": "unchecked",
    "connectionLimit": 0,
    "address": "192.168.101.11",
    "selfLink": "https://localhost/mgmt/tm/ltm/pool/testpool/members/~Common~192.168.101.11:8000?ver=11.5.1",
    "generation": 63,
    "fullPath": "/Common/192.168.101.11:8000",
    "partition": "Common",
    "name": "192.168.101.11:8000",
    "kind": "tm:ltm:pool:members:membersstate",
    "dynamicRatio": 1,
    "inheritProfile": "enabled",
    "logging": "disabled",
    "monitor": "default",
    "priorityGroup": 0,
    "rateLimit": "disabled",
    "ratio": 1,
    "session": "user-enabled"
}

You might also note the session keyword in the pool member attributes. This is the key that controls the forced-offline behavior. The mappings of these two values (state and session) to the GUI state of a pool member are as follows:

GUI: Enabled
{"state": "unchecked", "session": "user-enabled"}

GUI: Disabled
{"state": "unchecked", "session": "user-disabled"}

GUI: Forced Offline
{"state": "user-down", "session": "user-disabled"}

So to change a value on a pool member, you need to use the PUT method and specify in the URL the pool name and the pool member:

curl -sk -u admin:admin https://192.168.6.5/mgmt/tm/ltm/pool/testpool/members/~Common~192.168.101.11:8000/ \
  -H "Content-Type: application/json" -X PUT -d '{"state": "user-down", "session": "user-disabled"}'

This results in a changed state and session for this pool member:

{
    "state": "user-down",
    "connectionLimit": 0,
    "address": "192.168.101.11",
    "selfLink": "https://localhost/mgmt/tm/ltm/pool/testpool/members/~Common~192.168.101.11:8000?ver=11.5.1",
    "generation": 63,
    "fullPath": "/Common/192.168.101.11:8000",
    "partition": "Common",
    "name": "192.168.101.11:8000",
    "kind": "tm:ltm:pool:members:membersstate",
    "dynamicRatio": 1,
    "inheritProfile": "enabled",
    "logging": "disabled",
    "monitor": "default",
    "priorityGroup": 0,
    "rateLimit": "disabled",
    "ratio": 1,
    "session": "user-disabled"
}

The best tip I can give for discovering the nuances of iControl REST is to query existing objects, change their default values around in the GUI, and re-query to see what the values are supposed to be. Happy coding!
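A quick postscript: going by the GUI-state mappings above, re-enabling that same member is just the inverse PUT (same hypothetical host and credentials as in the example):

curl -sk -u admin:admin https://192.168.6.5/mgmt/tm/ltm/pool/testpool/members/~Common~192.168.101.11:8000/ \
  -H "Content-Type: application/json" -X PUT -d '{"state": "unchecked", "session": "user-enabled"}'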
DNS Profile Benefits in iRules

I released an article a while back on the DNS services architecture now built in to BIG-IP, as well as a solution article that showed some fancy DNS tricks utilizing the architecture to black-hole malicious DNS requests. What might be lost in those articles is the difference the DNS profile makes when using iRules to return DNS responses. I was working on a little project earlier this week, and the VM I am hosting requires a single DNS response to a single question. The problem is that I don't have the particular FQDN defined in an external or internal name server. Adding the FQDN to either is problematic:

- Adding the FQDN to the external name server would require adding an internal view to BIND, which adds risk and complexity.
- Adding the FQDN to the internal name server would require adding external zones to my internal server, which adds unnecessary complexity.

So as I wasn't going down either of those roads, I had to find an alternate solution. Thankfully, I have BIG-IP VE at my disposal, and therefore, iRules. The DNS profile exposes the DNS:: namespace in iRules, and with it, native decodes for all the fields in requests and responses. The iRule, with the DNS namespace, is trivial:

when DNS_REQUEST {
    if { [IP::addr [IP::remote_addr] equals 192.168.1.0/24] && ([DNS::question name] equals "www.mytest.com") } {
        DNS::answer insert "[DNS::question name]. 111 [DNS::question class] [DNS::question type] 192.168.1.200"
        DNS::return
    } else {
        discard
    }
}

However, after trying to save the iRule, I realized I'm not licensed for DNS services on my BIG-IP VE, so that path wouldn't work. So I took a packet capture of some local DNS traffic on my desktop and started mapping the fields, preparing to settle in for some serious binary scan/format work, but then remembered there were already some iRules out in the CodeShare that I thought might get me started. Natty76's Fast DNS 2 seemed to fit the bill. So with just a little customization, I was up and running with no issues. But notice the amount of work required (both by the author and by system resources) to make this happen when compared with the above iRule.
when RULE_INIT priority 1 {
    # Domain Name = www mytest com
    set static::domain "www.mytest.com"
    # IP address in answer section (type A)
    set static::answer_string "192.168.1.200"
}

when RULE_INIT {
    # Header generation (in hexadecimal)
    # qr(1) opcode(0000) AA(1) TC(0) RD(1) RA(1) Z(000) RCODE(0000)
    set static::header "8580"
    # 1 question, X answer, 0 NS, 0 Addition
    set static::answer_record [format %04x [llength $static::answer_string]]
    set static::header "${static::header}0001${static::answer_record}00000000"
    # generate domain binary string
    set static::domainhex ""
    foreach static::d [split $static::domain "."] {
        set static::l [string length $static::d]
        scan $static::l %d static::h
        append static::domainhex [format %02x $static::h]
        foreach static::n [split $static::d ""] {
            scan $static::n %c static::h
            append static::domainhex [format %02x $static::h]
        }
    }
    set static::domainbin [binary format H* $static::domainhex]
    append static::domainhex 00
    set static::answerhead $static::domainhex
    # Type = A
    set static::answerhead "${static::answerhead}0001"
    # Class = IN
    set static::answerhead "${static::answerhead}0001"
    # TTL = 1 day
    set static::answerhead "${static::answerhead}00015180"
    # Data length = 4
    set static::answerhead "${static::answerhead}0004"
    set static::answer ""
    foreach static::a $static::answer_string {
        scan $static::a "%d.%d.%d.%d" a b c d
        append static::answer "${static::answerhead}[format %02x%02x%02x%02x $a $b $c $d]"
    }
}

when CLIENT_DATA {
    if { [IP::addr [IP::client_addr] equals 192.168.1.0/22] } {
        binary scan [UDP::payload] H4@12A*@12H* id dname question
        set dname [string tolower [getfield $dname \x00 1]]
        switch -glob $dname \
            $static::domainbin {
                #log local0. "match"
                set hex ${id}${static::header}${question}${static::answer}
                set payload [binary format H* $hex]
                # to drop only a packet and keep the UDP connection, use UDP::drop
                drop
                UDP::respond $payload
            } \
            default {
                #log local0. "does not match"
            }
    } else {
        discard
    }
}

No native decode means you have to do all the decoding work of the protocol yourself. I don't get to share "from the trenches" as much as I used to, but this was too good a demonstration to pass up.
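Either approach can be verified from a client in the permitted subnet with a quick dig; the listener/virtual address here is hypothetical:

dig @192.168.1.5 www.mytest.com A

You should see the single A record for 192.168.1.200 come back in the answer section.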
Automating Web App Deployments with Opscode Chef and iControl

Chef is a systems integration framework developed here in Seattle by Opscode. It provides a number of configuration management facilities for deploying systems rapidly and consistently. For instance, if you want 150 web servers configured identically (or even with variances), Chef can make that happen. It also curtails the urge to make "one-off" changes to individual hosts, or to skip checking those configuration changes into revision control. Chef will revert any changes made out-of-band upon its next convergence. As a former systems administrator with "OCD-like" tendencies, these features make me happy.

We were introduced to the folks at Opscode through a mutual friend and we got to chatting about their products and ours. Eventually the topic of Ruby emerged (Chef is built on Ruby). We started tossing around ideas about how to use Ruby to make Chef and BIG-IP a big happy family. What if we could use Chef to automatically add our web servers to an LTM pool as they are built? Well, that's exactly what we did. We wrote a Chef recipe to automatically add our nodes to our pool. We were able to combine this functionality with the Apache cookbook provided by the Opscode community and create a role that handles all these actions simultaneously. Combine this with your PXE installation and you've got a highly efficient system for building loads of web servers in a hurry.

Chef Basics

Chef consists of a number of different components, but we will reduce them collectively to the Chef server, the Chef client, and knife, the command-line tool. Chef also provides access to configurations via a management console (web interface) that offers all the functionality of knife in a GUI, but I prefer the command line, so that's what we'll be covering. Chef uses cookbooks of recipes to perform automated actions against its nodes (clients). The recipe houses the logic for how resources (pre-defined and user-defined) should perform actions against the nodes. A few of the more common resources are file, package, cron, execute, and Ruby block. We could define a resource for anything, though, and that is what makes Chef so powerful: its extensibility.

Using recipes and resources we can perform sweeping changes to systems, but they aren't very "personal" at this point. That is where attributes come into play. Attributes define the node-specific settings that "personalize" the recipe for that node or class of nodes. Once we have cookbooks to support our node configurations, we can group those recipes into roles. For instance, we might want to build a "base_server" role that should be applied to all of our servers regardless of their specialized purpose. This "base_server" role might include recipes for installing and configuring OpenSSH, NTP, and VIM. We would then create a "web_server" role that installs and configures Apache and Tomcat. Our "database_server" role would install MySQL and load my default database. If we wanted to take this a step further, we could organize these roles into environments, so that we could rapidly deploy development, staging, and production servers. This makes building up and tearing down environments very efficient. That was a very short introduction to Chef and its features. For more information on the basics of Chef, check out this Opscode wiki entry.

Chef meets F5's Ruby iControl Library

Now that we've got a fair number of web servers built with our "web_server" role, we need them to start serving traffic.
We could go to our LTM and add them all manually, but that wouldn't be any fun, would it? Wouldn't it be cool if we could somehow auto-populate our LTM pool with our new web servers? This is where things get cool. We created a Chef cookbook called "f5-node-initiator" that we can add to our server roles. Whenever a node receives this recipe, it will automatically install our f5-icontrol gem, copy an "f5-node-initiator" script to /usr/local/bin/, and add the node to the LTM pool defined in the attributes section of the server's role.

Chef Installation

The installation of the Chef server and its constituents is a topic beyond the scope of this article. The Opscode folks have assembled a great quick start guide, which they update regularly. We followed this guide and had no trouble getting things up and running. Using Ubuntu 10.04 LTS, the install was exceptionally easy using Aptitude (apt-get) to install the chef-server and chef packages on the Chef server and client(s), respectively. After installing the packages, we cloned the Chef sample repository, copied our user keys to the ~/.chef/ directory (covered in the quick start guide), created a knife.rb configuration file in .chef (also in the quick start guide), and finally filled in the values in ~/chef-repo/config/rake.rb. I would encourage everyone to read the quick start guide as well as a few others here and here.

Note: Our environment was Ubuntu 10.04 LTS servers running on a VMware ESXi box (Intel Core i7 with 8GB of RAM). This was more than enough to run a Chef server, 10 nodes, an F5 LTM VE instance, as well as a few other virtual machines.

The f5-node-initiator Cookbook

The "f5-node-initiator" cookbook (recipe can be used interchangeably here, as the cookbook only contains one recipe) is relatively simple compared to some of the examples I encountered while demoing Chef. Let's look at the directory structure:

f5-node-initiator (dir)
|--> attributes (dir)
     |--> default.rb - contains default attribute values
|--> files (dir)
     |--> default (dir)
          |--> f5-icontrol-10.2.0.2.gem - F5 Ruby iControl Library
          |--> f5-node-initiator - script to add nodes to BIG-IP pool; source in CodeShare
|--> recipes (dir)
     |--> default.rb - core logic of recipe
|--> metadata.rb - information about author, recipe version, recipe license, etc.
|--> README.rdoc - README document with description, requirements, usage, etc.

That's it. Our cookbook contains 4 directories and 6 files. If we did our job in creating this cookbook, you shouldn't need to modify anything within it. We should be able to change the default attributes in our role or our node definition to enact any changes to the defaults.

Installing the Cookbook

1. Download the f5-node-initiator cookbook: f5-node-initiator.tgz
2. Untar it into your chef-repo/cookbooks/ directory
   tar -C ~/chef-repo/cookbooks/. -zxvf f5-node-initiator.tgz
3. Add the new cookbook to your Git repository
   git commit -a -m "Adding f5-node-initiator cookbook"
4. Install the cookbook on the Chef server
   rake install
5. Ensure that the cookbook is installed and available on the Chef server
   knife cookbook list

The "web_server" Role

Once we have our cookbook uploaded to our server, we need to assign it to our "web_server" role in order to get it to do anything. In this example, we are going to install and configure Apache, mod_php for Apache, and the f5-node-initiator.
Here are the steps to create this role:

1. Create a file called "web_server.rb" in ~/chef-repo/roles/
   vi ~/chef-repo/roles/web_server.rb
2. Add the following contents to the "web_server.rb" role file

name "web_server"
description "Common web server configuration"
run_list(
  "recipe[apache2]",
  "recipe[f5-node-initiator]"
)
default_attributes(
  "bigip" => {
    "address" => "10.0.0.245",
    "user" => "admin",
    "pass" => "admin",
    "pool_name" => "chef_test_http_pool"
  }
)

Note: Don't forget to create the target HTTP pool on the LTM. If there isn't a pool to add the nodes to, the f5-node-initiator recipe will fail.

3. Add the role to your Git repository and commit it
   git add web_server.rb
   git commit -m "Adding web_server role"
4. Install the "web_server" role
   rake install

Applying the "web_server" role to a node

The f5-node-initiator cookbook is now in place and the recipe has been added to our "web_server" role. We will now take the role and apply it to our new node, which we'll call "web-001". At the conclusion of this section, if everything goes as planned, we should have a web server running Apache and serving traffic as a pool member of our LTM. Let's walk through the steps of adding the role to our node:

1. Add the "web_server" role to the node's run_list
   knife node run_list add web-001 "role[web_server]"
2. Manually kick off convergence on the node
   ssh root@web-001
   root@web-001:~# chef-client

   Note: Convergence happens automatically every 30 minutes by default, but it is best to test at least one node to ensure things are working as expected.
3. Watch the output to ensure that everything runs successfully

[Fri, 08 Jul 2011 11:17:21 -0700] INFO: Starting Chef Run (Version 0.9.16)
[Fri, 08 Jul 2011 11:17:24 -0700] INFO: Installing package[apache2] version 2.2.14-5ubuntu8.4
[Fri, 08 Jul 2011 11:17:29 -0700] INFO: Installing gem_package[f5-icontrol] version 10.2.0.2
[Fri, 08 Jul 2011 11:17:38 -0700] INFO: gem_package[f5-icontrol] sending run action to execute[f5-node-initiator] (immediate)
[Fri, 08 Jul 2011 11:17:40 -0700] INFO: Ran execute[f5-node-initiator] successfully
[Fri, 08 Jul 2011 11:17:40 -0700] INFO: Chef Run complete in 18.90055 seconds
[Fri, 08 Jul 2011 11:17:40 -0700] INFO: cleaning the checksum cache
[Fri, 08 Jul 2011 11:17:40 -0700] INFO: Running report handlers
[Fri, 08 Jul 2011 11:17:40 -0700] INFO: Report handlers complete

4. Verify that the node was added to the "chef_test_http_pool" on our LTM

Conclusion

This example used a web server, but the role and attributes could be easily modified to support any number of systems and protocols. If you can pass traffic through a BIG-IP and create a pool for it, then you should be able to use the f5-node-initiator cookbook to automate additions of those nodes to an LTM pool. Give it a shot with SMTP, SIP, etc. and let us know how it goes. Chef and iControl are both incredibly powerful and versatile tools. When combined, they can perform a number of labor-intensive tasks almost effortlessly. The initial configuration of Chef may seem like a lot of work, but it will save you work in the long run. Starting with a good foundation can make large projects later on seem much more approachable. Trust us, it is worth it. Until next time, keep automating!