tables
The TAO of Tables - Part Three
This is a series of articles to introduce you to the many uses of tables.

The TAO of Tables - Part One
The TAO of Tables - Part Two

Last week we discussed how we could use tables to profile the execution of an iRule, so let's take it to the next level and profile the variables of an iRule. Say you have an iRule that has to run many iterations in testing and you want to make sure nothing is going awry. Wouldn't it be nice to be able to actually see what is being assigned to the variables in your iRule? Well, I will show you how you can... but first let's discuss variable scope.

As a general rule, when I talk to people about variables I discuss scope and what it means to them. You write an iRule, time passes, another person writes an iRule performing some other function and attaches it to the same virtual. What happens if you both use the same variable name, such as count? Bad things, that's what, because variable scope spans all the iRules attached to that virtual. You have contaminated each other's variable space. So where there is a likelihood of more than one iRule, I suggest the authors agree on a project-related prefix for their variable names. It can be something as simple as two characters, for example "p1_count". But it is enough to separate iRule variables into a project-related scope and prevent this kind of issue.

There are some other advantages to doing this as well. Imagine all your variables start with "p1_" except those which use random numbers to generate content. For those, use something like "p1r_". We will get to why in a moment. Now we have a single common set of characters that link all your variables together. We can use this with a Tcl command called info to retrieve these variable names and use them in interesting ways...

when HTTP_REQUEST {
    foreach name [info locals p1_*] {
        table add -subtable $name [set $name] 0 indef 3600
        table add -subtable tracking $name 0 indef 3600
    }
}

This will create subtables named after the variables. Each table entry will have a key that is the content of that variable. Since keys are unique, the entries in each subtable will represent every unique value assigned to that variable over the last hour. Of course that timeframe can be adjusted by changing 3600 to something else, or even made indefinite. If you do make the entries indefinite, just make sure you add an iRule to delete the variable and tracking tables when you are finished, or the data will sit in your F5 until it is rebooted (or forever, in the case of an HA pair). We will get to that in another article. This iRule would be added after your main processing iRules to collect information on every unique value assigned to every single variable in your iRule solution.

How do we retrieve this information now that it is stored in a table? Attach the following iRule to any virtual to display a dump of the variable contents of your solution over the last hour.

when HTTP_REQUEST {
    if {[HTTP::uri] ne "/variables"} { return }
    set content "<html><head>Variable Dump</head><body>"
    foreach name [table keys -subtable tracking] {
        append content "<p>Variable: $name<br>"
        foreach key [table keys -subtable $name] {
            append content "$key<br>"
        }
    }
    append content "</body></html>"
    HTTP::respond 200 content $content
    event disable all
}

This will give you the variable dump shown below. When there is a lot of variable data it is not practical to check each and every unique value, but it is very useful for checking the pattern of a variable's content and looking for exceptions.
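The collection iRule above only records that a value occurred, not how often. If occurrence counts would also be useful, one possible variation (a sketch only, keeping the same subtable layout and the p1_ prefix convention) swaps the first table add for table incr and sets the timers separately, since table incr cannot set them itself:

when HTTP_REQUEST {
    foreach name [info locals p1_*] {
        # count how many times this exact value has been assigned to this variable
        table incr -subtable $name [set $name]
        table timeout -subtable $name [set $name] indef
        table lifetime -subtable $name [set $name] 3600
        # keep recording the variable name so the dump iRule can find the subtable
        table add -subtable tracking $name 0 indef 3600
    }
}

The dump iRule could then show the count next to each key with [table lookup -subtable $name $key].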
iRules ultimately deal with customer traffic, which can be unpredictable. This dump lets you skim through the variable data looking for strange or unexpected content. I have used this to identify subtle iRule errors that were only revealed by strange data appearing in the variable profile.

Variable Dump

my_count
0 1 2 3 4 5 6 7 8 9 10

my_header
712 883 449 553 55 222 555

my_status
success: main code
success: alternate code
failure: no header
failure: no html

I hope by now you are starting to get an idea of what is possible with tables. The truth is you are only limited by what you can think up yourself. More on this next week! As always, please add comments or feedback below.
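On the cleanup point raised above: if the entries are made indefinite, something has to remove them eventually. A minimal sketch of a cleanup iRule, assuming a hypothetical /variables/reset URI, which walks the tracking table to find every per-variable subtable and deletes them all:

when HTTP_REQUEST {
    if {[HTTP::uri] ne "/variables/reset"} { return }
    # each key in the tracking subtable is a variable name, which is also the name of its subtable
    foreach name [table keys -subtable tracking] {
        table delete -subtable $name -all
    }
    table delete -subtable tracking -all
    HTTP::respond 200 content "Variable profiling tables deleted." Content-Type "text/plain"
    event disable all
}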
The TAO of Tables - Part One

This is a series of articles to introduce you to the many uses of tables. Many developers have heard about them but few have had the opportunity to use them. In this series of articles I will take you on a journey from the very beginning to the complex and marvellous creations we can make using them. Their true power lies solely in your mind and how you might use them.

For instance, recently I was asked: how can I track the hosts connecting to my service and, if possible, the number of times they have connected?

table incr -subtable client_list [IP::client_addr]

That's it? One command! Yes. That's it. Let's break it down... in a subtable named "client_list", store entries whose key is the client's IP address and whose value is the number of times they have hit your virtual service.

But... hang on, are we talking connections here or requests? Ah well, that will depend on the iRule event you use. CLIENT_ACCEPTED will represent TCP connections, whereas HTTP_REQUEST will represent every single request. So let's go with HTTP_REQUEST, and this becomes:

when HTTP_REQUEST {
    table incr -subtable client_list [IP::client_addr]
}

So now we focus on HTTP requests; however, this will register all the elements on a page, such as images and CSS. If that is not what you are expecting then you need to add a filter so only HTML pages are captured. If your site uses aspx pages then check for that...

when HTTP_REQUEST {
    if { [URI::basename [HTTP::uri]] ends_with ".aspx" } {
        table incr -subtable client_list [IP::client_addr]
    }
}

This is not going to match "/". However, many sites these days will redirect "/" to the proper page name, and since you are here to measure HTML page calls and not redirects, you may not have to modify this further.

This looks good, but we have missed a few things. All table entries have a timeout and an optional expiry time; by default this is 120 seconds. We need to specify how long we want this information to be stored. In this case, since we want absolute page counts, we do not want the records to expire. Since we cannot set the timeout using the table incr command, we have to use another command.

when HTTP_REQUEST {
    if { [URI::basename [HTTP::uri]] ends_with ".aspx" } {
        table incr -subtable client_list [IP::client_addr]
        table timeout -subtable client_list [IP::client_addr] indef
    }
}

OK, we are progressing, but now we have introduced another problem to consider. By using indef, these table entries will never be removed unless we remove them ourselves or there is a box reset. While they do not take up a lot of memory, when you add something like this it is effectively a memory leak: it will reduce the memory available to the TMM kernel over time, so you should be careful to manage this usage. We will get to that later, but first, having this information stored in your F5 is great, but how do you get to it? Well, the simplest way is to display it! Remember that tables are global objects in memory, so you can use something like this on any virtual on the same F5 to display your results.

when HTTP_REQUEST {
    if { [HTTP::uri] ne "/status" } { return }
    set response "<html><head>Client Connections</head><body>"
    foreach ip [table keys -subtable client_list] {
        append response "$ip = [table lookup -subtable client_list $ip]<br>"
    }
    append response "</body></html>"
    HTTP::respond 200 content $response Content-Type "text/html"
}
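If the client list grows long, it can be easier to read with the busiest clients first. A possible variation on the display loop above (plain Tcl, no new table commands), which sorts by hit count before building the response:

set pairs [list]
foreach ip [table keys -subtable client_list] {
    set hits [table lookup -subtable client_list $ip]
    # the entry may expire or be deleted between the keys and lookup calls
    if { $hits eq "" } { continue }
    lappend pairs [list $ip $hits]
}
# highest hit count first
foreach pair [lsort -integer -decreasing -index 1 $pairs] {
    append response "[lindex $pair 0] = [lindex $pair 1]<br>"
}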
And if you want an XML response which you can parse into a database, then you can use something similar to the following.

when HTTP_REQUEST {
    if { [HTTP::uri] ne "/xml" } { return }
    set response "<clients>"
    foreach ip [table keys -subtable client_list] {
        append response "<$ip>[table lookup -subtable client_list $ip]</$ip>"
    }
    append response "</clients>"
    HTTP::respond 200 content $response Content-Type "application/xml"
}

So that's the solution. It is a very simple command, triggered in the right place at the right time, that will store a ton of useful information: the kind that can be used for developing firewall rules for your service, especially in a circumstance where you come across an existing service where the clients are unknown and auditing is required.

Now, I said we would get back to memory management. If you want to reset the solution, you can use the following, again on any virtual.

when HTTP_REQUEST {
    if { [HTTP::uri] ne "/reset" } { return }
    set response "<html><head>Client Connections</head><body>"
    table delete -subtable client_list -all
    append response "Table deleted.</body></html>"
    HTTP::respond 200 content $response Content-Type "text/html"
}

So the fundamental lessons from part one are that tables are global memory storage across the device, and that they can be used quite simply in powerful ways to produce detailed information about what is connecting to, or passing through, a virtual. I encourage readers to sit back and think of ways they might find storing information useful in their environment. I have kept this first article quite simple as an introduction. Next week we will show you some of the more funky uses of tables.

Kevin Davies
iRules for Breakfast ~ How many do you do?
kevin.davies@rededucation.com
The TAO of Tables - Part Two

This is a series of articles to introduce you to the many uses of tables.

The TAO of Tables - Part One

Previously we talked about how tables can be used for counting. The next discussion in this series deals with the structure and profiling of iRules. I encourage iRule authors to keep the logic flat. It is all well and good having beautiful indented arches of if, elseif and else statements, but the hard reality of iRules is that we want to get in and get out fast. I encourage users to make use of the return command to provide early exits from their code. If we had the following:

if { [URI::basename [HTTP::uri]] ends_with ".html" } {
    if { [HTTP::header exists x-myheader] } {
        if { [HTTP::header x-myheader] eq 1 } {
            # run my iRule code
        } else {
            # run my alternate code
        }
    }
}

It would become...

# no html
if { not ( [URI::basename [HTTP::uri]] ends_with ".html" ) } { return }

# no header
if { not ( [HTTP::header exists x-myheader] ) } { return }

if { [HTTP::header x-myheader] == 1 } {
    # run main iRule code
    return
}

# run alternate code

So in this case we have put the no-run conditionals at the front of the iRule, and the rest of the code is not executed unless it needs to be. While this is a simple case of making the code flat without any optimization, when you get to larger iRules you will have multiple no-run conditions which you can put up front to prevent the main code from ever executing. Testing would show you which are the most common, and those would be tested first. There are added benefits as well. This code is easier to read and the decision logic is very simple: if you don't meet the conditions, you're out!

But there is more to this, and here is where it gets really interesting. Now that you have discrete exit points using return, you can use them to begin profiling the iRule's behavior. Say that at every exit point you set a variable which represents why the exit occurred.

when HTTP_REQUEST {
    if { not ( [URI::basename [HTTP::uri]] ends_with ".html" ) } {
        set status "failure:No html"
        return
    }
    if { not ( [HTTP::header exists x-myheader] ) } {
        set status "failure:No header"
        return
    }
    if { [HTTP::header x-myheader] == 1 } {
        # run my iRule code
        set status "success:Main"
        return
    }
    # run my alternate code
    set status "success:Alternate"
}

Why do all this? We can add another iRule which begins execution profiling. After the iRule above, add the following...

when HTTP_REQUEST {
    set lifetime 60
    set uid [expr {rand() * 10000}]
    table add -subtable [getfield $status ":" 1] $uid 1 indef $lifetime
    table add -subtable "$status" $uid 1 indef $lifetime
    table add -subtable tracking $status 1 indef 3600
}

First we create a unique identifier for this execution of the iRule called "uid". The first table command creates a subtable using the first part of the status string as its name. Since that is either "success" or "failure", there will be two such subtables. We add a unique entry, using the "uid" as the key, to one of those tables; this table entry effectively represents a single execution of your iRule. These entries have a lifetime of 60 seconds. The second and third table commands are related. The second creates unique entries in a subtable named after the entire status string, again with a lifetime of 60 seconds. Since we do not know in advance what the status strings may be, the third table command records them in a tracking table.

Now finally, add the following code to any virtual on the same F5.
when HTTP_REQUEST {
    if { [HTTP::uri] ne "/status" } { return }
    set content "iRule Status<p>"
    append content "iRule Success: [table keys -count -subtable "success"]<br>"
    append content "iRule Failure: [table keys -count -subtable "failure"]<p>"
    foreach name [table keys -subtable "tracking"] {
        append content "$name: [table keys -count -subtable $name]<br>"
    }
    HTTP::respond 200 content "<html><body>$content</body></html>"
    event disable all
}

Then navigate to /status on that virtual to get an execution profile of your iRule over the last minute. In this case 250 requests were sent through the iRule:

iRule Status
iRule Success: 234
iRule Failure: 16

failure:No header 1
failure:No html 15
success:Main 217
success:Alternate 20

So what happens here is we count the success and failure subtables and display the results. This tells you how much traffic your iRule has successfully processed over the last minute. Then we display the count of each status subtable, and you now have the exact number of times your iRule exited at each point in the last minute. From here you can do percentages, and pretty much how you display this information is up to you. It is not just limited to iRule profiling; it could reflect useful information about any part of the information stream or the performance characteristics of your solution. You could even have an external monitoring system calling an XML-formatted version of the same information to track the effectiveness of your iRule; a sketch of such an endpoint is shown below. I hope that you enjoyed this second installment, and next week we will talk about another kind of profiling. Please leave any comments you have below.
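On the external-monitoring suggestion above, here is a minimal sketch of an XML-formatted version of the same status endpoint. The /status.xml URI and the element names are assumptions, not part of the original article; the status strings are emitted as attributes because values like "failure:No header" are not valid XML element names.

when HTTP_REQUEST {
    if { [HTTP::uri] ne "/status.xml" } { return }
    set content "<status>"
    append content "<success>[table keys -count -subtable success]</success>"
    append content "<failure>[table keys -count -subtable failure]</failure>"
    foreach name [table keys -subtable tracking] {
        # one element per recorded exit point, with its count over the last minute
        append content "<exit name=\"$name\">[table keys -count -subtable $name]</exit>"
    }
    append content "</status>"
    HTTP::respond 200 content $content Content-Type "application/xml"
    event disable all
}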
APM: How to keep Access Sessions in sync with a table

Hello, I'm trying to deploy connection-filter logic for users who have logged in through an APM policy. Let me introduce my setup. There are two virtual servers, and one of them has an APM policy; let's say this is the first virtual server. The other virtual server has no APM policy and should remain accessible only to users who have logged in through the APM policy on the first virtual server. It is a Performance (Layer 4) virtual server with non-HTTP traffic passing through it; let's call this one the second virtual server.

I'd like to allow people to connect to the second virtual server only if they have logged in successfully on the first virtual server. On the first virtual server's successful branch, I collect and store the source IP addresses in a table, and I use that table to check incoming requests on the second virtual server. This part is working, and I can safely allow or deny incoming connection requests that match the table.

But the session close event causes table entries to leak. When an APM session is closed for any reason, the system fires the ACCESS_SESSION_CLOSED event, and it looks like this event does not allow table-related commands such as table delete. How can I keep the records in sync between the table and the APM sessions? I mean, I want to be able to delete the related table record when a session is removed in APM. But how?
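For reference, a rough sketch of the setup described in the question. The event choice, the subtable name and the 3600-second lifetime are all assumptions, and this does not solve the ACCESS_SESSION_CLOSED limitation; a finite lifetime roughly matched to the APM session timeout is simply one way to stop stale entries from accumulating.

# first virtual server (with APM): record the client on the successful branch
when ACCESS_POLICY_COMPLETED {
    if { [ACCESS::policy result] eq "allow" } {
        # lifetime is a placeholder; align it with the session timeout configured on the access profile
        table add -subtable allowed_clients [IP::client_addr] 1 indef 3600
    }
}

# second virtual server (Performance L4): only admit clients present in the table
when CLIENT_ACCEPTED {
    if { [table lookup -subtable allowed_clients [IP::client_addr]] eq "" } {
        reject
    }
}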