Direct Access 2012 and f5
We are testing Direct Access 2012 and are planning to use the F5 to handle load balancing between two DA servers. I haven't found much information specific to using the F5, mainly this: http://www.f5.com/pdf/white-papers/...ess-tb.pdf The only real technical documentation is for the old Forefront UAG, back in 2009: https://devcentral.f5.com/tech-tips...bjKNinnbaM http://www.f5.com/pdf/deployment-guides/f5-uag-dg.pdf I would like verification that this older documentation is still relevant with DA 2012. Thanks, Mykel

iRules 101 - #12 - The Session Command
One of the things that makes iRules so incredibly powerful is the fact that it is a true scripting language, or at least based on one. The fact that it gives you the tools that Tcl brings to the table - regular expressions, string functions, even things as simple as storing, manipulating and recalling variable data - sets iRules apart from the rest of the crowd. It also makes it possible to do some pretty impressive things with connection data, massaging and directing it the way you want it.

Other articles in the series:

Getting Started with iRules: Intro to Programming with Tcl | DevCentral
Getting Started with iRules: Control Structures & Operators | DevCentral
Getting Started with iRules: Variables | DevCentral
Getting Started with iRules: Directing Traffic | DevCentral
Getting Started with iRules: Events & Priorities | DevCentral
Intermediate iRules: catch | DevCentral
Intermediate iRules: Data-Groups | DevCentral
Getting Started with iRules: Logging & Comments | DevCentral
Advanced iRules: Regular Expressions | DevCentral
iRules 101 - #12 - The Session Command | DevCentral
Intermediate iRules: Nested Conditionals | DevCentral
Intermediate iRules: Handling Strings | DevCentral
Intermediate iRules: Handling Lists | DevCentral
Advanced iRules: Scan | DevCentral
Advanced iRules: Binary Scan | DevCentral

Sometimes, though, a simple variable won't do. You've likely heard of global variables in one of the earlier 101 articles and read the warning there, and are looking for another option. So here you are: you have some data you need to store, which needs to persist across multiple connections. You need it to be efficient and fast, and you don't want to have to do a whole lot of complex management of a data structure. One of the many ways that you can store and access information in your iRule fits all of these requirements perfectly, little known as it may be. For this scenario I'd recommend the session command. There are three main permutations of the session command that you'll be using when storing and referencing data within the session table. These are:

session add: Stores the user's data under the specified key for the specified persistence mode
session lookup: Returns user data previously stored using session add
session delete: Removes user data previously stored using session add

A simple example of adding some information to the session table would look like:

when CLIENTSSL_CLIENTCERT {
    set ssl_cert [SSL::cert 0]
    session add ssl $ssl_cert 90
}

By using the session add command, you can manually place a specific piece of data into the LTM's session table. You can then look it up later, by unique key, with the session lookup command and use the data in a different section of your iRule, or in another connection altogether. This can be helpful in different situations where data needs to be passed between iRules or events that it might not normally be when using a simple variable.
Such as mining SSL data from the connection events, as below:

when CLIENTSSL_CLIENTCERT {
    # Set results in the session so they are available to other events
    session add ssl [SSL::sessionid] [list [X509::issuer] [X509::subject] [X509::version]] 180
}

when HTTP_REQUEST {
    # Retrieve certificate information from the session
    set sslList [session lookup ssl [SSL::sessionid]]
    set issuer [lindex $sslList 0]
    set subject [lindex $sslList 1]
    set version [lindex $sslList 2]
}

Because the session table is optimized and designed to handle every connection that comes into the LTM, it's very efficient and can handle quite a large number of items. Also note that, as above, you can pass structured information such as Tcl lists into the session table and they will remain intact. Keep in mind, though, that there is currently no way to count the number of entries in the table with a certain key, so you'll have to build all of your own processing logic for now, where necessary.

It's also important to note that there is more than one session table. If you look at the above example, you'll see that before we listed any key or data to be stored, we used the command session add ssl. Note the "ssl" portion of this command. This is a reference to which session table the data will be stored in. For our purposes here there are effectively two session tables: ssl and uie. Be sure you're accessing the same one in your session lookup section as you are in your session add section, or you'll never find the data you're after. This is pretty easy to keep straight once you see it. It looks like:

session add uie ... session lookup uie

Or:

session add ssl ... session lookup ssl

You can find complete documentation on the session command here, in the iRules wiki, as well as some great examples that depict some more advanced iRules making use of the session command to great success. Check out Codeshare for more examples.
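One permutation the examples above don't show is cleanup. Entries added with session add will age out when their timeout expires, but you can also remove them proactively with session delete. A minimal sketch, assuming you want to clear the stored certificate data when a client requests a logout URI (the "/logout" URI is just a placeholder):

when HTTP_REQUEST {
    # Hypothetical cleanup: drop this session's stored cert data on logout
    if { [HTTP::uri] starts_with "/logout" } {
        session delete ssl [SSL::sessionid]
    }
}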
Another FSE iRules Challenge, Even More Surprising Results

I have an awesome job. I get to play with cool technology, with good people, at an awesome company, and I actually don't get in trouble for doing so. I get to blend writing and talking and blathering on endlessly to anyone that will listen with completely geeking out and diving into the nuts and bolts of things to see what makes them work. This doesn't suck. One of the things that doesn't suck the most is getting to kick on the light bulb for people that haven't quite gotten their hands around our programmability technologies just yet. F5 is laden with opportunities to get your script on. From iRules to iControl to iCall and TMSH scripting, there is no shortage of opportunities to get down and dirty with some code. That being said, not everyone is up to speed on such things yet, and I take particular joy in being able to help them connect the wires, get the first flickers of that "Holy crap this stuff is cool!" halogen, and then go on their merry way. Lucky as I am, what with the awesome job and all, I get many opportunities for just such interactions. More seasoned readers may be familiar with the one on which today's post is focused, the FSE challenge. I haven't posted one of these in a little bit, so let's have a refresher from days of yore.

First off, what is an FSE? An FSE is a Field Sales Engineer. FSEs are the engineering lifeblood of the sales force here at F5. They're the ones out in the trenches dealing with customer requirements and issues, building real-world solutions, and generally doing all the cool stuff that I get to talk about theoretically, but in the real world. I've got mad respect for those FSEs that take their jobs seriously and learn how to build full-fledged F5 solutions that leverage our crazy broad product set and, you guessed it, our out-of-the-box tools like iControl and iRules. Those that choose to flex those muscles garner a special place in my encrypted little heart.

Next, what is with this challenge business? Every time we get a new batch of FSEs in at corporate for brainwashing err, training, we put them through what we lovingly refer to as a boot camp. This is, as you might expect given the name, a rapid way of getting folks up to speed on not only F5 technology but all of the surrounding whats-its and know-how that is expected of someone out in the field slinging our tech. This invariably includes a delve into iRules. There is formal training, of course, but the challenge is a different beast altogether. I effectively pose as a customer in the field with a complex (at least complex by beginners' standards) problem that needs solving. I present it to the batch of keyboard jockeys, give them time to ask questions, take notes, etc., then cut them loose. In their "free time" (see: sleepless hours well into the night) they get to hash out the solution to the problem. They are expected to write, test, and lightly document an iRules solution to eradicate the posed problem point by point. Points are awarded for effectiveness, efficiency, and exportability, meaning ease of use and hand-off. I come back a week later, after poring over the proffered code snippets, and announce the winners (top 3) based on said criteria, who are then awarded fabulous cash and prizes! (Bold + italics means it must be true, right? Even when it's not. Since it's not. At all.)

Lastly, what was the challenge?
People are always curious to hear what the actual challenge was when looking at the submissions, so here you go:

Scenario: A client has an https-based application that is undergoing upgrades and large changes, and they need to create business logic in the network layer to allow for a smooth transition and consistent user experience.

Desired Solution:

1. Ensure that all requests from the client to the BIG-IP are SSL encrypted
2. Ensure all traffic to the back end is plain-text
3. For all canonical names of domain.com (i.e. bob.domain.com, app1.domain.com, etc.) remove the canonical name and prepend it to the URI (i.e. bob.domain.com/my/app becomes domain.com/bob/my/app). Standard canonicals are excluded from this re-writing (mail, smtp, www, ns, ns1)
4. These host/URI changes must happen transparently to the clients accessing the application
5. Anyone accessing the application from the internal network (10.1.*) with an appropriate auth cookie (Name: X-Int-Auth, Value: True) bypasses the above logic and accesses the old structure
6. Log any request (IP of client and URI requested) to a canonical name x.domain.com that is non-standard (mail, smtp, www, ns, ns1) so that data can be collected as to when users have fully transitioned

So now that the table is set, on with the feast! This time around I had another killer host of entries in the hopper. There were people of all experience levels, from "newbie, never coded, what does this double equals thing do?" to one heck of a ringer, who would make himself known eventually, even though he flew under the radar at first, sly dog that he is. Out of the raft of valiant attempts and solid efforts, my arduous duty was to narrow it down to the top three and announce them to the group, and later (now) the world. Such was my task, and such was performed. I bring you this quarter's FSE iRules Challenge winners:

3rd Place – Benn Alp

Complete with ASCII art, Benn put in a heroic effort on this submission. He ended up with a heck of a lot of code, and most of it was extremely valid, which just goes to show that while he didn't have the most efficient solution, he stuck with things until he got where he wanted. I encourage less meandering approaches to coding, but I was impressed by the thought that went into this one and the potential that is apparent in his thought process, logic and effort. Way to go, Benn!

##############################################################################
# [ASCII art banner: "iRules Challenge"]
##############################################################################
#
# Benn Alp b.alp@f5.com
#
when RULE_INIT {
    #
    # CONFIGURABLE ITEMS
    #------------------------------------------------------------------------
    # Debug Logging. (Note: Irrespective of how this is set, logs to satisfy
    # requirement 6 will be sent - "6. Log any request (IP of client and URI
    # requested) to a canonical name x.domain.com that is non-standard
    # (mail, smtp, www, ns, ns1) so that data can be collected as to when
    # users have fully transitioned" - irrespective of this setting.
# 0 - Disabled
    # 1 - Enabled
    #------------------------------------------------------------------------
    set static::DebugLogging 1
    #------------------------------------------------------------------------
    # Behaviour when requirement 1 is violated for transformed apps (Ensure
    # that all requests from the client to the BIG-IP are SSL encrypted).
    # Legacy apps continue to work as per normal.
    # 0 - Reject
    # 1 - Redirect to https
    #------------------------------------------------------------------------
    set static::SSLBehaviour 1
    #------------------------------------------------------------------------
}
when HTTP_REQUEST {
    set Rewrite 0
    if { $static::DebugLogging } {
        log local0. "Trigger HTTP_REQUEST"
    }
    # Requirement 5 - Anyone accessing the application from the internal
    # network (10.1.x.x) with an appropriate auth cookie (Name: X-Int-Auth,
    # Value: True) bypasses the logic and accesses the old structure;
    # anybody using domain.com also bypasses/returns
    if { [IP::client_addr] starts_with "10.1." and [HTTP::header "X-Int-Auth"] equals "True" or [string tolower [HTTP::host]] equals "domain.com" } {
        if { $static::DebugLogging } {
            log local0. "Trigger return based on Requirement 5 or domain=domain.com"
        }
        return
    } else {
        if { [HTTP::host] contains ".domain.com" } {
            # Requirement 3.1 - Standard canonicals are excluded from this
            # re-writing (mail, smtp, www, ns, ns1), and
            # Requirement 6 - Log any request (IP of client and URI requested)
            # to a canonical name x.domain.com that is non-standard so that
            # data can be collected as to when users have fully transitioned.
            # Switch -exact was faster than data groups.
            switch -exact [string tolower [HTTP::host]] {
                "mail.domain.com" {
                    log local0. "Legacy Connection - USERIP [IP::client_addr] - mail.domain.com/[HTTP::uri]"
                }
                "smtp.domain.com" {
                    log local0. "Legacy Connection - USERIP [IP::client_addr] - smtp.domain.com/[HTTP::uri]"
                }
                "www.domain.com" {
                    log local0. "Legacy Connection - USERIP [IP::client_addr] - URI www.domain.com/[HTTP::uri]"
                }
                "ns.domain.com" {
                    log local0. "Legacy Connection - USERIP [IP::client_addr] - URI ns.domain.com/[HTTP::uri]"
                }
                "ns1.domain.com" {
                    log local0. "Legacy Connection - USERIP [IP::client_addr] - URI ns1.domain.com/[HTTP::uri]"
                }
                ...

And that's all I'm showing of Benn's solution. It goes on for a while, and was an awesome effort, but it's... rather long. ;)

2nd Place – Max Iftikhar

Max set a high bar indeed with his submission, which used the uber-efficient stream profile, a solid cut at response re-writing (one of the pitfalls of this particular challenge), and some handy-dandy string manipulation. This one was efficient, brief, and looked like it could have been the overall winner. All things being equal, in many other FSE classes this very well could have won, as it is a darn fine effort, and Max should hold his head high while coding. Unless of course he can't see the monitor, then hold it rather normally and just know you kicked some tail, Max.

when HTTP_REQUEST {
    if { [TCP::local_port] == 80 } {
        # redirect to https
        HTTP::redirect "https://[getfield [HTTP::host] ":" 1][HTTP::uri]"
    }
}
when HTTP_REQUEST {
    set rewrite 0
    set canonical [getfield [HTTP::host] "." 1]
    set host1 [HTTP::host]
    set host2 [getfield [HTTP::host] "$canonical" 1]
    set uri1 [HTTP::uri]
    set uri2 "/$canonical[HTTP::uri]"
    if { [IP::addr [IP::client_addr] equals 10.1.0.0/16] and [HTTP::cookie exists "X-Int-Auth"] } {
        pool http_pool
    } else {
        log local0. "Received request from [IP::client_addr] -> [HTTP::host][HTTP::uri]"
    }
    # Rewrite the Host header
    HTTP::header replace "Host" $host2
    # Make URI path start with /canonical if it doesn't already
    if { not ([HTTP::uri] starts_with "/$canonical") } {
        HTTP::uri [string map -nocase [list $uri1 $uri2] [HTTP::uri]]
        set rewrite 1
    }
}
when HTTP_RESPONSE {
    if { $rewrite } {
        # Check if response is a redirect
        if { [HTTP::is_redirect] and [HTTP::header Location] contains $host2 } {
            # Rewrite the redirect Location header value
            HTTP::header replace "Host" $host1
            HTTP::header replace Location [string map -nocase [list $uri1 $uri2] [HTTP::header Location]]
        }
        # Check if response payload type is text
        if { [HTTP::header value Content-Type] contains "text" } {
            # Set the replacement strings
            STREAM::expression "@$uri1@$uri2@"
            # Enable the stream filter for this response only
            STREAM::enable
        }
    }
}

Winner! – Joe Martin

Last but the exact inverse of least, our winner in fact, was Joe Martin. Joe seemed like a normal, average, everyday FSE challenge entrant upon first blush. He didn't even bother to out himself at the onset as having written iRules before when I asked for experience levels. Clever ploy, Joe, very clever. As I was later to find out, Joe seemed to in fact be a cyborg-robot-iRules-ninja-hacker-dinosaur sent back from the future to bust the curve for all FSE iRules Challengees everywhere. Seriously, this guy knew what he was doing. This iRule is pretty darn close to the code I would churn out to solve this particular problem and, not to self-aggrandize, but that's not such a bad thing coming from the guy judging the challenge, amirite? Upon presenting the results and having shaken the hand of the Cylon (No, Windows, you may not autocorrect Cylon to colon. Go away, I'm making jokes here.) in charge of iRules affairs himself, I asked Joe how many iRules he'd written before, because it was obvious that he had done so. Much to his credit he admitted to having written hundreds, which makes a whole heck of a lot of sense, and means I can sleep just a bit better at night without keeping a light on to watch out for those cyborg iRules ninja invaders. Big congrats to Joe for a darn fine hunk of codey bits.

when HTTP_REQUEST {
    set request_rewrite 0
    # Check to see if this is an internal developer request (internal IP and X-Auth header)
    if { ([HTTP::header X-Int-Auth] equals "True") && ([IP::addr [IP::client_addr] equals 10.1.0.0/16]) } {
        pool http_pool
    } else {
        set orig_host [string tolower [HTTP::host]]
        scan $orig_host {%[^.].%[^.].%s} host domain tld
        set new_host "$domain.$tld"
        # Make sure "host" portion of DNS name is not in exclusion data group "class_no-rewrite"
        if { ![class match $host equals class_no-rewrite] } {
            # Flag connection as "request rewritten", rewrite host header and URI, and log request info
            set request_rewrite 1
            HTTP::header replace Host $new_host
            HTTP::uri "/$host[HTTP::uri]"
"Request to $orig_host from [IP::client_addr] rewritten to [HTTP::uri]" pool http_pool } } } when HTTP_RESPONSE { #If the request was rewritten we need to rewrite Location headers and embedded URLs if {$request_rewrite} { if { [HTTP::is_redirect] } { HTTP::header replace Location [string map -nocase "$new_host $orig_host" [HTTP::header Location]] } else { STREAM::expression "@/$host/@/@ @http://$new_host@https://$orig_host@" STREAM::enable } } } All said and done it was another fine experience hosting the FSE iRules challenge. There was code, and fun, and fun code, and coding fun, and funny code, and…well you get the idea. I’m looking forward to the next crop and seeing what they’re capable of. I’ll be working on my cyborg detection methodologies in the meantime. Until then, remember kids: code hard. #Colin #iRules #iRulesChallenge #Cylons773Views0likes1CommentDev Setup Help
Hi, I'm looking for advice on setting up an F5 client to help debug a JavaScript error on a VPN client. I don't have vSphere, but I was able to convert the .OVA BIG-IP Next Central Manager image to a .vhdx and run it in Hyper-V Manager. I'm able to log into the VM and run the setup. I used all the defaults, and used the VM's IP address for the hostname. This allows me to log in to the UI and start the bootstrap process, which fails; additional attempts return a 500 error from the server. Is this something that I should be able to get working? Where can I view the server logs? Any recommendations would be appreciated. Best regards, Jonathan

Maintenance page - hosted on LTM or redirect with fallback host - or both?
I'm in the process of implementing an automated maintenance page that is displayed when I have a pool with no healthy members. Looking around, I see two distinct methods of doing this: utilizing the fallback host feature to redirect to another URL, or setting up a page to be hosted on the LTM and using an iRule with "[active_members [LB::server pool]] < 1" in it. Does anyone have any opinions on which one is preferred, and why? Currently, I'm using the fallback host method and redirecting to a page hosted on AWS. My setup includes about 70 virtual servers on a 3600 HA cluster - some are QA, some are non-HTTP. I will likely have the need for multiple versions of the maintenance page, depending on the site content it fronts. The one thing I do see as an advantage of the LTM-hosted option is that an iRule code example shows a refresh option being used to automatically pull up the healthy site when it becomes available. Thanks!! Chris
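For reference, the LTM-hosted approach described above might look something like this - a minimal sketch, assuming the virtual server has a default pool; the inline HTML and the Retry-After value are placeholders to adapt:

when HTTP_REQUEST {
    # If the default pool has no available members, serve an inline
    # maintenance page rather than letting the request fail
    if { [active_members [LB::server pool]] < 1 } {
        HTTP::respond 503 content "<html><body><h1>Down for maintenance</h1></body></html>" "Content-Type" "text/html" "Retry-After" "300"
    }
}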
Intermediate iRules: Nested Conditionals

Conditionals are a pretty standard tool in every programmer's toolbox. They are the constructs that allow us to decide when we want certain actions to happen, based on, well, conditions that can be determined within our code. This concept is as old as compilers. Chances are, if you're writing code, you're going to be using a slew of these things, even in an event-based language like iRules. iRules is no different than any other programming/scripting language when it comes to conditionals; we have them. Sure, how they're implemented and what they look like change from language to language, but most of the same basic tools are there: if, else, switch, elseif, etc. Just about any example that you might run across on DevCentral is going to contain some example of these being put to use. Learning which conditional to use in each situation is an integral part of learning how to code effectively. Once you have that under control, however, there's still plenty more to learn. Now that you're comfortable using a single conditional, what about starting to combine them? There are many times when it makes more sense to use a pair or more of conditionals in place of a single conditional along with logical operators. For example:

if { [HTTP::host] eq "bob.com" and [HTTP::uri] starts_with "/uri1" } {
    pool pool1
} elseif { [HTTP::host] eq "bob.com" and [HTTP::uri] starts_with "/uri2" } {
    pool pool2
} elseif { [HTTP::host] eq "bob.com" and [HTTP::uri] starts_with "/uri3" } {
    pool pool3
}

This can be re-written to use a pair of conditionals instead, making it far more efficient. To do this, you take the common case shared among the compared strings and perform that comparison only once, then perform the other comparisons only if that result returns as desired. This is more easily described as nested conditionals, and it looks like this:

if { [HTTP::host] eq "bob.com" } {
    if { [HTTP::uri] starts_with "/uri1" } {
        pool pool1
    } elseif { [HTTP::uri] starts_with "/uri2" } {
        pool pool2
    } elseif { [HTTP::uri] starts_with "/uri3" } {
        pool pool3
    }
}

These two examples are logically equivalent, but the latter is far more efficient. This is because in all the cases where the host is not equal to "bob.com", no other inspection needs to be done, whereas in the first example you must perform the host check three times, as well as the URI check every single time, regardless of the fact that you could have stopped the process earlier. While basic, this concept is important in general when coding. It becomes exponentially more important, as do almost all optimizations, when talking about programming in iRules. A script being executed on a server firing perhaps once per minute benefits from small optimizations. An iRule being executed somewhere in the order of 100,000 times per second benefits that much more. A slightly more interesting example, perhaps, is performing the same logical nesting while using different operators. In this example we'll look at a series of if/elseif statements that are already using nesting, and take a look at how we might use the switch command to even further optimize things. I've seen multiple examples of people shying away from switch when nesting their logic because it looks odd to them or they're not quite sure how it should be structured. Hopefully this will help clear things up.
First, the example using if statements:

when HTTP_REQUEST {
    if { [HTTP::host] eq "secure.domain.com" } {
        HTTP::header insert "Client-IP" [IP::client_addr]
        pool sslServers
    } elseif { [HTTP::host] eq "www.domain.com" } {
        HTTP::header insert "Client-IP" [IP::client_addr]
        pool httpServers
    } elseif { [HTTP::host] ends_with "domain.com" and [HTTP::uri] starts_with "/secure" } {
        HTTP::header insert "Client-IP" [IP::client_addr]
        pool sslServers
    } elseif { [HTTP::host] ends_with "domain.com" and [HTTP::uri] starts_with "/login" } {
        HTTP::header insert "Client-IP" [IP::client_addr]
        pool httpServers
    } elseif { [HTTP::host] eq "intranet.myhost.com" } {
        HTTP::header insert "Client-IP" [IP::client_addr]
        pool internal
    }
}

As you can see, this is completely functional and would do the job just fine. There are definitely some improvements that can be made, though. Let's try using a switch statement instead of several if comparisons for improved performance. To do that, we're going to have to use an if nested inside a switch comparison. While this might be new to some or look a bit odd if you're not used to it, it's completely valid and often the most efficient you're going to get. This is what the above code would look like cleaned up and put into a switch:

when HTTP_REQUEST {
    HTTP::header insert "Client-IP" [IP::client_addr]
    switch -glob [HTTP::host] {
        "secure.domain.com" {
            pool sslServers
        }
        "www.domain.com" {
            pool httpServers
        }
        "*.domain.com" {
            if { [HTTP::uri] starts_with "/secure" } {
                pool sslServers
            } else {
                pool httpServers
            }
        }
        "intranet.myhost.com" {
            pool internal
        }
    }
}

As you can see, this is not only easier to read and maintain, but it will also prove to be more efficient. We've moved to the more efficient switch structure, we've gotten rid of the repeated host comparisons that were happening above with the /secure vs /login URIs, and while I was at it I got rid of all those repeated header insertions, since that was happening in every case anyway. Hopefully the benefit this technique can offer is clear, and these examples did the topic some justice. With any luck, you'll nest those conditionals with confidence now.

Removing port from a redirect
Hi all, One of our web developers has asked me if we could strip off a port number in a redirect they are doing. I thought the following would do this, but it doesn't appear to work.

when HTTP_REPSONSE {
    if { [HTTP::is_redirect] } {
        if { [HTTP::header Location] contains "www.acme.com:10040" } {
            log "Original Location value: [HTTP::header Location]"
            HTTP::header replace Location [string map -nocase {www.acme.com:10400 www.acme.com} [HTTP::header value Location]]
            log "Updated Location value: [HTTP::header Location]"
            return
        }
    }
}

And here is what is written to the log:

Original Location value: www.acme.com:10040/secure/discussion-forum
Updated Location value: www.acme.com:10040/secure/discussion-forum

Note: the log actually includes http:// but if I enter a full URL in this new forum software it does odd things to it. Any help appreciated. Craig
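Two things stand out in the snippet as posted. The event name is misspelled (HTTP_REPSONSE rather than HTTP_RESPONSE), which would prevent the iRule from loading at all, though since the log lines do fire, that is probably just a typo in the post. The more likely culprit is the transposed port in the string map: the contains check looks for www.acme.com:10040, but the map searches for www.acme.com:10400, so nothing ever matches and nothing is replaced. A corrected sketch, assuming 10040 is the port to strip:

when HTTP_RESPONSE {
    if { [HTTP::is_redirect] } {
        if { [HTTP::header Location] contains "www.acme.com:10040" } {
            # Search string now matches the port actually present in the header
            HTTP::header replace Location [string map -nocase {www.acme.com:10040 www.acme.com} [HTTP::header Location]]
        }
    }
}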
5 Years Later: OpenAJAX Who?

Five years ago the OpenAjax Alliance was founded with the intention of providing interoperability between what was quickly becoming a morass of AJAX-based libraries and APIs. Where is it today, and why has it failed to achieve more prominence?

I stumbled recently over a nearly five-year-old article I wrote in 2006 for Network Computing on the OpenAjax initiative. Remember, AJAX and Web 2.0 were just coming of age then, and mentions of Web 2.0 or AJAX were much like those of "cloud" today. You couldn't turn around without hearing someone promoting their solution by associating it with Web 2.0 or AJAX. After reading the opening paragraph I remembered clearly writing the article and being skeptical, even then, of what impact such an alliance would have on the industry. Being a developer by trade I'm well aware of how impactful "standards" and "specifications" really are in the real world, but the problem - interoperability across a growing field of JavaScript libraries - seemed at the time real and imminent, so there was a need for someone to address it before it completely got out of hand.

With the OpenAjax Alliance comes the possibility for a unified language, as well as a set of APIs, on which developers could easily implement dynamic Web applications. A unified toolkit would offer consistency in a market that has myriad Ajax-based technologies in play, providing the enterprise with a broader pool of developers able to offer long-term support for applications and a stable base on which to build applications. As is the case with many fledgling technologies, one toolkit will become the standard - whether through a standards body or by de facto adoption - and Dojo is one of the favored entrants in the race to become that standard. -- AJAX-based Dojo Toolkit, Network Computing, Oct 2006

The goal was simple: interoperability. The way in which the alliance went about achieving that goal, however, may have something to do with its lackluster performance lo these past five years and its descent into obscurity.

5 YEAR ACCOMPLISHMENTS of the OPENAJAX ALLIANCE

The OpenAjax Alliance members have not been idle. They have published several very complete and well-defined specifications, including one "industry standard": OpenAjax Metadata.

OpenAjax Hub: The OpenAjax Hub is a set of standard JavaScript functionality defined by the OpenAjax Alliance that addresses key interoperability and security issues that arise when multiple Ajax libraries and/or components are used within the same web page. (OpenAjax Hub 2.0 Specification)

OpenAjax Metadata: OpenAjax Metadata represents a set of industry-standard metadata defined by the OpenAjax Alliance that enhances interoperability across Ajax toolkits and Ajax products. (OpenAjax Metadata 1.0 Specification) OpenAjax Metadata defines Ajax industry standards for an XML format that describes the JavaScript APIs and widgets found within Ajax toolkits. (OpenAjax Alliance Recent News)

It is interesting to see the calling out of XML as the format of choice in the OpenAjax Metadata (OAM) specification, given the recent rise to ascendancy of JSON as developers' preferred format for APIs. Granted, when the alliance was formed XML was all the rage and it was believed it would be the dominant format for quite some time, given the popularity of similar technological models such as SOA. But still - the reliance on XML while the plurality of developers race to JSON may provide some insight into why OpenAjax has received very little notice since its inception.
Ignoring the XML factor (which undoubtedly is a fairly impactful one), there is still the matter of how the alliance chose to address run-time interoperability with OpenAjax Hub (OAH) - a hub. A publish-subscribe hub, to be more precise, in which OAH mediates between various toolkits on the same page. Don summed it up nicely during a discussion on the topic: it's page-level integration. This is a very different approach to the problem than it first appeared the alliance would take. The article on the alliance and its intended purpose five years ago clearly indicates where I thought this was going - and where it should go: an industry-standard model and/or set of APIs to which other toolkit developers would design and write, such that the interface (the method calls) would be unified across all toolkits while the implementation would remain whatever the toolkit designers desired. I was clearly under the influence of SOA and its decouple-everything premise. Come to think of it, I still am, because interoperability assumes such a model - always has, likely always will. Even in the network, at the IP layer, we have standardized interfaces with vendor implementations being decoupled and completely different at the code base. An Ethernet header is always in a specified format, and it is that standardized interface that makes the Net go over, under, around and through the various routers and switches and components that make up the Internets with alacrity. Routing problems today are caused by human error in configuration, or by failure - never by incompatibility in form or function.

Neither specification has really taken that direction. OAM - as previously noted - standardizes on XML and is primarily used to describe APIs and components; it isn't an API or model itself. The Alliance wiki describes the specification: "The primary target consumers of OpenAjax Metadata 1.0 are software products, particularly Web page developer tools targeting Ajax developers." Very few software products have implemented support for OAM. IBM, a key player in the Alliance, leverages the OpenAjax Hub for secure mashup development and also implements OAM in several of its products, including Rational Application Developer (RAD) and IBM Mashup Center. Eclipse also includes support for OAM, as does Adobe Dreamweaver CS4. The IDE working group has developed an open source set of tools based on OAM, but what appears to be missing is adoption of OAM by the producers of favored toolkits such as jQuery, Prototype and MooTools. Doing so would certainly make development of AJAX-based applications within development environments much simpler and more consistent, but it does not appear to be gaining widespread support or mindshare despite IBM's efforts.

The focus of the OpenAjax interoperability efforts appears to be on a hub/integration method of interoperability, one that is certainly not in line with reality. While developers may at times combine JavaScript libraries to build the rich, interactive interfaces demanded by consumers of a Web 2.0 application, this is the exception and not the rule, and the pub/sub basis of OpenAjax, which implements a secondary event-driven framework, seems like overkill. Conflicts between libraries, performance issues with load times dragged down by the inclusion of multiple files, and simplicity tend to drive developers to a single library when possible (which is most of the time).
It appears, simply, that the OpenAjax Alliance - driven perhaps by active members for whom solutions providing integration and hub-based interoperability are typical (IBM, BEA (now Oracle), Microsoft and other enterprise heavyweights) - has chosen a target in another field, one on which developers today are just not playing. It appears OpenAjax tried to bring an enterprise application integration (EAI) solution to a problem that didn't - and likely won't ever - exist. So it's no surprise to discover that references to and activity from OpenAjax have been nearly zero since 2009. Given the statistics showing the rise of jQuery - both as a percentage of site usage and developer usage - to the top of the JavaScript library heap, it appears that at least the prediction that "one toolkit will become the standard - whether through a standards body or by de facto adoption" was accurate. Of course, since that's always the way it works in technology, it was kind of a sure bet, wasn't it?

WHY INFRASTRUCTURE SERVICE PROVIDERS and VENDORS CARE ABOUT DEVELOPER STANDARDS

You might notice in the list of members of the OpenAjax Alliance several infrastructure vendors - folks who produce application delivery controllers, switches and routers, and security-focused solutions. This is not uncommon, nor should it seem odd to the casual observer. All data flows, ultimately, through the network, and thus every component that might need to act in some way upon that data needs to be aware of and knowledgeable about the methods used by developers to perform such data exchanges. In the age of hyper-scalability and über security, it behooves infrastructure vendors - and increasingly cloud computing providers that offer infrastructure services - to be very aware of the methods and toolkits being used by developers to build applications. Applying security policies to JSON-encoded data, for example, requires very different techniques and skills than would be the case for XML-formatted data. AJAX-based applications, a.k.a. Web 2.0, require different scalability patterns to achieve maximum performance and utilization of resources than is the case for traditional form-based HTML applications. The type of content, as well as the usage patterns of applications, can dramatically impact the application delivery policies necessary to achieve operational and business objectives for those applications.

As developers standardize through selection and implementation of toolkits, vendors and providers can begin to focus solutions specifically on those choices. Templates and policies geared toward optimizing and accelerating jQuery, for example, are possible and probable. Being able to provide pre-developed and tested security profiles specifically for jQuery reduces the time to deploy such applications in a production environment by eliminating the test-and-tweak cycle that occurs when applications are tossed over the wall to operations by developers. For example, the jQuery.ajax() documentation states:

By default, Ajax requests are sent using the GET HTTP method. If the POST method is required, the method can be specified by setting a value for the type option. This option affects how the contents of the data option are sent to the server. POST data will always be transmitted to the server using UTF-8 charset, per the W3C XMLHTTPRequest standard. The data option can contain either a query string of the form key1=value1&key2=value2, or a map of the form {key1: 'value1', key2: 'value2'}.
If the latter form is used, the data is converted into a query string using jQuery.param() before it is sent. This processing can be circumvented by setting processData to false. The processing might be undesirable if you wish to send an XML object to the server; in this case, change the contentType option from application/x-www-form-urlencoded to a more appropriate MIME type.

Web application firewalls that may be configured to detect exploitation of such data - attempts at SQL injection, for example - must be able to parse this data in order to make a determination regarding the legitimacy of the input. Similarly, application delivery controllers and load balancing services configured to perform application-layer switching based on data values or submission URI will also need to be able to parse and act upon that data. That requires an understanding of how jQuery formats its data and what to expect, such that it can be parsed, interpreted and processed. By understanding jQuery - and other developer toolkits and standards used to exchange data - infrastructure service providers and vendors can more readily provide security and delivery policies tailored to those formats natively, which greatly reduces the impact of intermediate processing on performance while ensuring the secure, healthy delivery of applications.
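To make that concrete, here is a minimal iRule sketch of the kind of application-layer switching described above, parsing jQuery's default key1=value1&key2=value2 POST serialization. The pool names and the "action" parameter are hypothetical, purely for illustration:

when HTTP_REQUEST {
    # Buffer the POST body so the form-encoded payload can be inspected
    if { [HTTP::method] equals "POST" and [HTTP::header Content-Type] starts_with "application/x-www-form-urlencoded" } {
        HTTP::collect [HTTP::header Content-Length]
    }
}
when HTTP_REQUEST_DATA {
    # Walk the key=value pairs and switch pools on a hypothetical "action" key
    foreach pair [split [HTTP::payload] "&"] {
        if { [getfield $pair "=" 1] equals "action" and [getfield $pair "=" 2] equals "search" } {
            pool search_pool
        }
    }
    # Resume normal processing once inspection is done
    HTTP::release
}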
Controlling a Pool Members Ratio and Priority Group with iControl

A Little Background

A question came in through the iControl forums about controlling a pool member's ratio and priority programmatically. The issue really involves how the APIs use multi-dimensional arrays, but I thought it would be a good opportunity to talk about ratio and priority groups for those that don't understand how they work. In the first part of this article, I'll talk a little about what pool members are and how their ratio and priorities apply to how traffic is assigned to them in a load balancing setup. The details in this article are based on BIG-IP version 11.1, but the concepts can apply to previous versions as well.

Load Balancing

In its very basic form, a load balancing setup involves a virtual IP address (referred to as a VIP) that virtualizes a set of backend servers. The idea is that if your application gets very popular, you don't want to have to rely on a single server to handle the traffic. A VIP contains an object called a "pool", which is essentially a collection of servers that it can distribute traffic to. The method of distributing traffic is referred to as a "load balancing method". You may have heard the term "round robin" before. In this method, connections are passed one at a time from server to server. In most cases, though, this is not the best method due to characteristics of the application you are serving. Here is a list of the available load balancing methods in BIG-IP version 11.1.

Load Balancing Methods in BIG-IP version 11.1

Round Robin: Specifies that the system passes each new connection request to the next server in line, eventually distributing connections evenly across the array of machines being load balanced. This method works well in most configurations, especially if the equipment that you are load balancing is roughly equal in processing speed and memory.

Ratio (member): Specifies that the number of connections that each machine receives over time is proportionate to a ratio weight you define for each machine within the pool.

Least Connections (member): Specifies that the system passes a new connection to the node that has the least number of current connections in the pool. This method works best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time.

Observed (member): Specifies that the system ranks nodes based on the number of connections. Nodes that have a better balance of fewest connections receive a greater proportion of the connections. This method differs from Least Connections (member) in that the Least Connections method measures connections only at the moment of load balancing, while the Observed method tracks the number of Layer 4 connections to each node over time and creates a ratio for load balancing. This dynamic load balancing method works well in any environment, but may be particularly useful in environments where node performance varies significantly.

Predictive (member): Uses the ranking method used by the Observed (member) method, except that the system analyzes the trend of the ranking over time, determining whether a node's performance is improving or declining. The nodes in the pool with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. This dynamic load balancing method works well in any environment.
Ratio (node): Specifies that the number of connections that each machine receives over time is proportionate to a ratio weight you define for each machine across all pools of which the server is a member.

Least Connections (node): Specifies that the system passes a new connection to the node that has the least number of current connections out of all pools of which a node is a member. This method works best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current connections per node, or the fastest node response time.

Fastest (node): Specifies that the system passes a new connection based on the fastest response of all pools of which a server is a member. This method might be particularly useful in environments where nodes are distributed across different logical networks.

Observed (node): Specifies that the system ranks nodes based on the number of connections. Nodes that have a better balance of fewest connections receive a greater proportion of the connections. This method differs from Least Connections (node) in that the Least Connections method measures connections only at the moment of load balancing, while the Observed method tracks the number of Layer 4 connections to each node over time and creates a ratio for load balancing. This dynamic load balancing method works well in any environment, but may be particularly useful in environments where node performance varies significantly.

Predictive (node): Uses the ranking method used by the Observed (node) method, except that the system analyzes the trend of the ranking over time, determining whether a node's performance is improving or declining. The nodes in the pool with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. This dynamic load balancing method works well in any environment.

Dynamic Ratio (node): This method is similar to Ratio (node) mode, except that weights are based on continuous monitoring of the servers and are therefore continually changing. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current connections per node or the fastest node response time.

Fastest (application): Passes a new connection based on the fastest response of all currently active nodes in a pool. This method might be particularly useful in environments where nodes are distributed across different logical networks.

Least Sessions: Specifies that the system passes a new connection to the node that has the least number of current sessions. This method works best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current sessions.

Dynamic Ratio (member): This method is similar to Ratio (member) mode, except that weights are based on continuous monitoring of the servers and are therefore continually changing.
This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current connections per node or the fastest node response time.

L3 Address: This method functions in the same way as the Least Connections methods. We are deprecating it, so you should not use it.

Weighted Least Connections (member): Specifies that the system uses the value you specify in Connection Limit to establish a proportional algorithm for each pool member. The system bases the load balancing decision on that proportion and the number of current connections to that pool member. For example, member_a has 20 connections and its connection limit is 100, so it is at 20% of capacity. Similarly, member_b has 20 connections and its connection limit is 200, so it is at 10% of capacity. In this case, the system selects member_b. This algorithm requires all pool members to have a non-zero connection limit specified.

Weighted Least Connections (node): Specifies that the system uses the value you specify in the node's Connection Limit and the number of current connections to a node to establish a proportional algorithm. This algorithm requires all nodes used by pool members to have a non-zero connection limit specified.

Ratios

The ratio is used by the ratio-related load balancing methods to load balance connections. The ratio specifies the ratio weight to assign to the pool member. Valid values range from 1 through 100. The default is 1, which means that each pool member has an equal ratio proportion. So, if you have server1 with a ratio value of 10 and server2 with a ratio value of 1, server1 will be served 10 connections for every one that server2 receives. This can be useful when you have different classes of servers with different performance capabilities.

Priority Group

The priority group is a number that groups pool members together. The default is 0, meaning that the member has no priority. To specify a priority, you must activate priority group usage when you create a new pool or when adding or removing pool members. When activated, the system load balances traffic according to the priority group number assigned to the pool member. The higher the number, the higher the priority, so a member with a priority of 3 has higher priority than a member with a priority of 1. The easiest way to think of priority groups is as if you are creating mini-pools of servers within a single pool. You put members A, B, and C into priority group 5 and members D, E, and F into priority group 1. Members A, B, and C will be served traffic according to their ratios (assuming you have ratio load balancing configured). If all those servers have reached their thresholds, then traffic will be distributed to servers D, E, and F in priority group 1.

The default setting for priority group activation is Disabled. Once you enable this setting, you can specify pool member priority when you create a new pool or on a pool member's properties screen. The system treats same-priority pool members as a group. To enable priority group activation in the admin GUI, select Less than from the list, and in the Available Member(s) box, type a number from 0 to 65535 that represents the minimum number of members that must be available in one priority group before the system directs traffic to members in a lower priority group. When a sufficient number of members become available in the higher priority group, the system again directs traffic to the higher priority group.
Implementing in Code

The two methods to retrieve the priority and ratio values are very similar. They both take two parameters: a list of pools to query, and a 2-D array of members (a list of members for each pool passed in).

long [] [] get_member_priority(
    in String [] pool_names,
    in Common::AddressPort [] [] members
);
long [] [] get_member_ratio(
    in String [] pool_names,
    in Common::AddressPort [] [] members
);

The following PowerShell function (utilizing the iControl PowerShell Library) takes as input a pool and a single member. It then makes a call to query the ratio and priority for the specific member and writes them to the console.

function Get-PoolMemberDetails()
{
  param(
    $Pool = $null,
    $Member = $null
  );
  $AddrPort = Parse-AddressPort $Member;

  $RatioAofA = (Get-F5.iControl).LocalLBPool.get_member_ratio(
    @($Pool), @( @($AddrPort) ) );
  $PriorityAofA = (Get-F5.iControl).LocalLBPool.get_member_priority(
    @($Pool), @( @($AddrPort) ) );

  $ratio = $RatioAofA[0][0];
  $priority = $PriorityAofA[0][0];

  "Pool '$Pool' member '$Member' ratio '$ratio' priority '$priority'";
}

Setting the values with the set_member_priority and set_member_ratio methods takes the same first two parameters as the associated get_* methods, with a third parameter added for the priorities and ratios of the pool members.

set_member_priority(
    in String [] pool_names,
    in Common::AddressPort [] [] members,
    in long [] [] priorities
);
set_member_ratio(
    in String [] pool_names,
    in Common::AddressPort [] [] members,
    in long [] [] ratios
);

The following PowerShell function takes as input the Pool and Member with optional values for the Ratio and Priority. If either of those is set, the function calls the appropriate iControl method to set its value.

function Set-PoolMemberDetails()
{
  param(
    $Pool = $null,
    $Member = $null,
    $Ratio = $null,
    $Priority = $null
  );
  $AddrPort = Parse-AddressPort $Member;

  if ( $null -ne $Ratio )
  {
    (Get-F5.iControl).LocalLBPool.set_member_ratio(
      @($Pool), @( @($AddrPort) ), @($Ratio) );
  }
  if ( $null -ne $Priority )
  {
    (Get-F5.iControl).LocalLBPool.set_member_priority(
      @($Pool), @( @($AddrPort) ), @($Priority) );
  }
}

In case you were wondering how to create the Common::AddressPort structure for the $AddrPort variables in the above examples, here's a helper function I wrote to allocate the object and fill in its properties.

function Parse-AddressPort()
{
  param($Value);
  $tokens = $Value.Split(":");
  $r = New-Object iControl.CommonAddressPort;
  $r.address = $tokens[0];
  $r.port = $tokens[1];
  $r;
}

Download The Source

The full source for this example can be found in the iControl CodeShare under PowerShell PoolMember Ratio and Priority.
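Putting it all together, a hypothetical session using the functions above might look like this. The pool name and member address are placeholders, and an iControl connection is assumed to have already been established so that Get-F5.iControl returns a valid interface:

# Read the current ratio and priority for a member
Get-PoolMemberDetails -Pool "http_pool" -Member "10.10.10.1:80"

# Weight the member 10:1 and move it to priority group 2
Set-PoolMemberDetails -Pool "http_pool" -Member "10.10.10.1:80" -Ratio 10 -Priority 2

# Verify the change took effect
Get-PoolMemberDetails -Pool "http_pool" -Member "10.10.10.1:80"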