LTM Policy
Introduction

F5 Local Traffic Manager (LTM) has always given customers the ability to optimize their network deployments by providing tools that observe network traffic and let the administrator configure actions to take based on those observations. This is embodied in the fundamental concept of a virtual server, which groups traffic into pools based on observed IP addresses, ports, and DNS names, and extended by features like iRules, which provide a tremendous amount of flexibility and customizability.

For HTTP traffic, up until BIG-IP 11.4.0 the HTTP Class module gave an administrator the ability to match various parts of an HTTP transaction using regular expressions and to specify an associated action to take, such as inserting or removing a header, sending a redirect, or deciding to which VLAN or pool a request should be forwarded. This was a flexible approach, but regular expression processing can be performance intensive, serial evaluation can bog down as the number of conditions increases, and proper coverage sometimes required the administrator to configure a specific order of evaluation. With the growth of traffic on the internet, and the explosion of HTTP traffic in particular, organizations increasingly need more sophisticated tools that can observe traffic in greater depth and execute actions with good performance.

LTM Policy

LTM Policy first appeared in BIG-IP 11.4.0 as a flexible, high-performance replacement for HTTP Class, and additional capabilities and features have been added continuously since then. At its core, LTM Policy is a data-driven rules engine that is tightly integrated with the Traffic Management Microkernel (tmm). One of the big improvements brought by LTM Policy is the accelerated and unique way it evaluates conditions. When one or more policies are applied to a virtual server, they go through a compilation step that builds a combined, high-performance internal decision tree for all of the rules, conditions, and actions. This optimized representation of a virtual server's policies guarantees that every condition is evaluated only once, and it allows parallel evaluation of all conditions as well as other performance boosts, such as short-circuit evaluation. Another improvement is that conditions can observe attributes of both the request and the response, not just the request. And unlike HTTP Class, whose first-match-wins behavior could lead to ordering issues, LTM Policy can trigger on the first matching rule, on all matching rules, or on the most specific match, or execute a default action when no conditions match.

Policies

What is a policy? A policy is a collection of rules, and is associated with a matching strategy, the aspects the policy requires, and the aspects the policy controls. Every rule in a policy has a set of conditions and a set of actions, where either set may be empty. A minimal skeleton of this structure is sketched below.
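The following is a minimal sketch of what such a policy might look like in tmsh configuration form. The policy name, pool name, and URI path are hypothetical, and the exact stanza layout can vary slightly between TMOS versions.

ltm policy /Common/example-policy {
    requires { http }
    controls { forwarding }
    rules {
        rule-1 {
            conditions {
                0 { http-uri path starts-with values { /images/ } }
            }
            actions {
                0 { forward select pool /Common/image-pool }
            }
            ordinal 1
        }
    }
    strategy /Common/first-match
}

Here the policy requires the http aspect (so HTTP-specific conditions are available), controls the forwarding aspect (so forwarding actions are available), and uses a first-match strategy.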
Conditions

Conditions describe the comparisons that occur as traffic flows through a virtual server. The properties available to a condition depend on which aspects the policy requires (see the Conditions chart below). For example, if a policy requires the http aspect, then HTTP-specific entities such as headers, cookies, and the URI can be used in comparisons.

If the policy requires this aspect, then these operands are available (with some of the properties that are available for comparison in conditions):

none
  cpu-usage - 1, 5, 15 minute load average
tcp
  tcp (+ all above) - IP address, port, MSS
http
  geoip - geographic region associated with IP address
  http-uri - domain, path, query string
  http-method - HTTP method, e.g. GET, POST, etc.
  http-version - versions of the HTTP protocol
  http-status - numeric and text response status codes
  http-host - host and port value from the Host: header
  http-header - header name
  http-referer - all components of the Referer: URI
  http-cookie - cookie name
  http-set-cookie - all components of Set-Cookie
  http-basic-auth - username, password
  http-user-agent (+ all above) - browser type, version; device make, model
client-ssl
  client-ssl - protocol, cipher, cipher strength
ssl-persistence
  ssl-extension - server name, ALPN, NPN
  ssl-cert - common-name from certificate

Actions

Actions are commands which are executed when the associated conditions match. As with conditions, the actions available to a policy depend on which aspects the policy controls (see the Actions chart below). For example, if a policy controls the forwarding aspect, then forwarding-specific actions, such as selecting a pool, virtual server, or VLAN, are available.

A default rule is a rule which has no conditions - and is therefore considered to always be a match - plus one or more actions. A default rule is typically ordered so that it is the last rule evaluated. In policies with a first-match or best-match strategy (see below), the default rule is only run when no other rules match; policies with an all-match strategy will always execute the default rule's actions.

If the policy controls this aspect, then these targets are available, which enable you to specify some of these actions:

(none specified)
  ltm-policy - disable LTM Policy
  http - enable/disable HTTP filter
  http-uri - replace path, query string, or full URI
  http-host - replace Host: header
  http-header - insert/remove/replace HTTP header
  http-referer - insert/remove/replace Referer:
  http-cookie - insert/remove Cookie in request
  http-set-cookie - insert/remove Set-Cookie in response
  log - write to system logs
  tcl - evaluate Tcl expression
  tcp-nagle - enable/disable Nagle's algorithm
forwarding
  forward - pick pool, vlan, nexthop, rateclass
  http-reply - send redirect to client
caching
  cache - enable/disable caching
compression
  compress - enable/disable compression
  decompress - enable/disable decompression
classification
  pem - classify traffic category/application
request-adaptation
  request-adapt - enable/disable content adaptation through an internal virtual server
response-adaptation
  response-adapt - enable/disable content adaptation through an internal virtual server
server-ssl
  server-ssl - enable/disable server SSL
persistence
  persist - select persistence (e.g. cookie, source address, hash, etc.)
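Putting a condition operand and an action target together, the hedged sketch below inserts a request header when the User-Agent suggests a mobile browser. The policy name, header name, and match string are illustrative assumptions rather than output from a real system, and exact condition tokens can differ by version.

ltm policy /Common/tag-mobile {
    requires { http }
    rules {
        rule-1 {
            conditions {
                0 { http-user-agent contains values { Mobile } }
            }
            actions {
                0 { http-header insert name X-Device-Class value mobile }
            }
            ordinal 1
        }
    }
    strategy /Common/first-match
}

Because http-header appears under the "(none specified)" aspect in the chart above, the policy does not need a controls entry for this action.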
Strategy

All policies are associated with a strategy, which determines the behavior when multiple rules have matching conditions. As their titles suggest, the First Match strategy executes the actions for the first rule that matches, the All Match strategy executes the actions for all rules that match, and Best Match selects the rule with the most specific match. The most specific match is determined by comparing the rules for the number of conditions that matched, the longest matches, or the matches which are deemed to be more significant.

Multiple policies can be applied to a virtual server. The only restriction is that each aspect of the system (e.g. forwarding, caching; see the Actions chart) may be controlled by only one policy. This is a reasonable restriction that avoids ambiguous situations where multiple policies controlling the same aspect match but specify conflicting actions.

LTM Policy and iRules

iRules are an important and long-standing part of the BIG-IP architecture, and pervasive throughout the product. There is some overlap between what can be controlled by LTM Policy and iRules and, not surprisingly, most of the overlap is in the realm of HTTP traffic handling. Just about anything that is possible in LTM Policy can also be written as an iRule. LTM Policy is a structured, data-driven collection of rules. iRules and Tcl are more of a general-purpose programming language, which provides lots of power and flexibility but also requires some programming skill. Because policies are structured and can be created by populating tables in a web UI, they are more approachable for those with limited programming skills.

So, when to use LTM Policy and when to use iRules? As a general rule, where there is identical functionality, LTM Policy should be able to offer better performance. Situations where LTM Policy may be a better choice include:

- when rules need to span different events (e.g. a rule that considers both the request and the response)
- when dealing with HTTP headers and cookies (LTM Policy has more direct access to internal HTTP state)
- when there are a large number of conditions (pre-compiled internal decision trees can evaluate conditions in parallel)
- when conditions have a lot of commonality

For supported events (such as HTTP_REQUEST or HTTP_RESPONSE), LTM Policy evaluation occurs before iRule evaluation. This means that it is possible to write an iRule to override an LTM Policy decision.

LTM Policy leverages standard iRule functions

Beginning with releases in 2015, selected LTM Policy actions support Tcl command substitutions and the ability to call standard iRule commands. The intention is to empower the administrator with quick, read-only access to the runtime environment. For example, it is possible to specify an expression which includes data about the current connection, such as [HTTP::uri], which gets substituted at runtime with the URI value of the current request. Tcl support in LTM Policy is not intended as a hook for general-purpose programming, and it can result in an error when making calls which might have side effects or cause a processing delay. There is also a performance trade-off to consider, as Tcl's flexibility comes with a runtime cost. Below is a summary of actions which support Tcl expressions (target, action, parameter, note):

http-uri
  replace - value (full URI), path (URI path component), query string (URI query string component)
http-header
  insert - value (arbitrary HTTP header)
  replace - value
http-cookie
  insert - value (Cookie: header)
http-host
  replace - value (Host: header)
http-referer
  replace - value (Referer: header)
http-set-cookie
  insert - value, domain, path (Set-Cookie: header)
log
  message - write to syslog
tcl *
  setvar - expression (set a variable in the Tcl runtime environment)
http-reply *
  redirect - location (redirect the client to location)

* This action has supported Tcl expressions since BIG-IP 11.4.

While a comprehensive list of valid Tcl commands is beyond the scope of this document, it should be noted that not every Tcl command will be valid at any given time.
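As an illustration of the substitutions above, here is a hedged sketch of a policy whose single, condition-less rule inserts a request header whose value is computed at runtime. The policy and header names are hypothetical; the "tcl:" prefix follows the same convention used in the affiliate example later in this article, and [IP::client_addr] is a quick, read-only iRule command valid at request time.

ltm policy /Common/client-ip-header {
    requires { http }
    rules {
        rule-1 {
            actions {
                0 { http-header insert name X-Client-IP value "tcl:[IP::client_addr]" }
            }
            ordinal 1
        }
    }
    strategy /Common/first-match
}

A fast, read-only call like this is consistent with the guidance above; commands with side effects or delays should be avoided.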
Most standard iRule commands are associated with a tmm event, as are LTM Policy actions. For example, in an LTM Policy rule evaluated at request time, iRule commands which are valid in the context of the HTTP_REQUEST event will validate without error. A validation error will be raised if one attempts to use iRule commands that are not valid in the current event scope. For example, in an LTM Policy action associated with the request (i.e. HTTP_REQUEST) event context, specifying an expression like [HTTP::status], which is only valid in a response event context, will not pass the validation check.

iRules support LTM Policy

There are several iRule commands defined which can be used to access information about policies attached to the virtual server:

- POLICY::controls - returns details about the policy controls for the virtual server the iRule is enabled on
- POLICY::names - returns details about the policy names for the virtual server the iRule is enabled on
- POLICY::rules - returns the rules of the supplied policy that had actions executed
- POLICY::targets - returns or sets properties of the policy rule targets for the policies associated with the virtual server the iRule is enabled on
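To see these commands in context, here is a minimal iRule sketch that logs policy information for each request. The policy name /Common/affiliate is only a placeholder, and the argument forms are assumptions based on the descriptions above, so treat this as an outline rather than drop-in code.

when HTTP_REQUEST {
    # For supported events, LTM Policy evaluation runs before iRules,
    # so policy results are already available here.
    log local0. "Policies on this virtual: [POLICY::names]"

    # POLICY::rules takes a policy name and returns the rules of that
    # policy whose actions were executed (argument form assumed).
    log local0. "Executed rules: [POLICY::rules /Common/affiliate]"
}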
What can I do with it?

The sky's the limit. Here are some sample tasks and LTM Policies that could be used to implement them. Keep in mind that the policy definitions shown below, which at first glance appear more complicated than an equivalent iRule, are generated by a friendlier, web-based UI. The web UI allows the policy author to select valid options from menus and build up a policy with little worry about programming and proper syntax.

Task: If the system load average over the last minute is above 5, disable compression. (This example assumes compression is competing for CPU cycles, and would not apply to scenarios where hardware compression is available.) Demonstrates CPU load conditions and the ability to control compression.

Configuration:

ltm policy /Common/load-avg {
    controls { compression }
    requires { http }
    rules {
        rule-1 {
            actions {
                0 { compress disable }
            }
            conditions {
                0 { cpu-usage last-1min greater values { 5 } }
            }
            ordinal 1
        }
    }
    strategy /Common/first-match
}

Task: If the request comes from California, forward it to pool pool_ca, and if the request comes from Washington, direct it to pool_wa. Otherwise forward to my-default-pool. Demonstrates geo-IP conditions, actions to forward to a specific pool, and a default rule.

Configuration:

ltm policy /Common/policy-sa {
    controls { forwarding }
    requires { http }
    rules {
        defaultrule {
            actions {
                0 { forward select pool /Common/my-default-pool }
            }
            ordinal 3
        }
        rule-1 {
            actions {
                0 { forward select pool /Common/pool_ca }
            }
            conditions {
                0 { geoip region-name values { California } }
            }
            ordinal 1
        }
        rule-2 {
            actions {
                0 { forward select pool /Common/pool_wa }
            }
            conditions {
                0 { geoip region-name values { Washington } }
            }
            ordinal 2
        }
    }
    strategy /Common/first-match
}

Task: If the request was referred by my-affiliate.com and the response contains an image, set a cookie containing the current time. An example of a policy which spans both the request and the response, and uses Tcl command substitution for a value.

Configuration:

ltm policy /Common/affiliate {
    requires { http }
    rules {
        rule-1 {
            actions {
                0 { http-set-cookie response insert name MyAffiliateCookie value "tcl:[clock format [clock seconds] -format %H:%M:%S]" }
            }
            conditions {
                0 { http-referer contains values { my-affiliate.com } }
                1 { http-header response name Content-type starts-with values { image/ } }
            }
            ordinal 1
        }
    }
    strategy /Common/first-match
}

Some rules of thumb

While there are certainly exceptions to any rule, the following are some general usage guidelines:

- The maximum number of rules across active policies is limited by memory and CPU capacity, but more than a thousand is starting to be a lot.
- Using Tcl command substitutions in actions can have performance implications; the more Tcl, the greater the performance impact.
- Only use Tcl commands that read and quickly return data; avoid those that change internal state or cause any delays.

Conclusion

LTM Policy is a powerful, flexible, and high-performance tool that administrators can leverage for application deployment. Its table-driven user interface requires very little in the way of programming experience, and new capabilities have been added continuously with each release.

When Applications Drive the Network
#SDDC #SDN #context

When applications can dictate service invocation dynamically, then we'll have a truly dynamic network.

There's a lot of lip service given to the notion of applications defining the way the network behaves. That's primarily because it's recognized that today, at least, it's an application world. Applications are large and in charge in the eyes of the business who, after all, pays all the bills.

So it's really not surprising to hear a renewed focus on applications as the driving force behind the behavior of networks. The network has for too long been seen as little more than a big fat pipe, a mere transportation system for bits and bytes traversing client to application and back again. But that view undervalues the benefits of the network. The network can, after all, provide a variety of services, from the mundane to the spectacular, that aid in the delivery of applications.

Applications aren't always aware (for a variety of technical reasons we won't dive into today because, well, you'd need a fresh pot of coffee, trust me) of conditions in the data path - including those on the Internet - that might be adversely affecting performance. Worse, there's not a whole lot an application can do about it even if it knows. Many of the performance-enhancing options available to an application are configured on a "server"-wide basis (and I use the term "server" here very loosely to refer to an application or web server instance). Even though it might benefit Alice to turn off compression or turn on caching, it's not feasible to do so because it impacts Bob and Mary as well. And they might need compression but not caching. Or some other combination thereof.

That's where the "network" comes in, or more precisely where the application service network comes in. It does have visibility into the network, into the device layer (the client), and into the application layer, so it can determine which combination of services will result in the best possible performance for this user on this device at this time, given conditions across all networks. Yeah, it's pretty powerful when you think about it.

But what isn't always easy is actually providing a dynamic means of adjusting those services in real time. Generally speaking, you configure a set of services based on something akin to an 80/20 rule and shrug. Greater good and all that; the benefits to the many outweigh the negatives to a smaller population.

But what if it didn't have to be like that? What if you could actually, automatically, make those adjustments based on context and serve 100% of users with optimal performance? What if the network really were driven by applications and understood that for Bob, on his iPhone at the local Sbux, compression will help, so the network invokes that service - while at the same time Mary is on her PC at headquarters, where it won't, so the network doesn't invoke that service? Same application, different context. Same application, entirely different services applied on demand.

Not only is that optimal for the user, it's also great for the business stakeholder that's paying the bills. Because not only are they now paying for what they use, they're also paying only for what they need.

That's where we have to get: to a place where the network isn't just application-centric, or application-aware, but application-driven.
Where applications drive service invocation in real time, automatically, based on the unique context that surrounds each and every request that traverses the network. I'm not talking about pre-defined, application-specific policies full of if-then-else statements. I'm talking about an intermediary that's able to grab the context and evaluate it for any application, making intelligent decisions about which services to invoke based on whether or not they'll actually provide value in terms of enhanced security, improved performance, or higher reliability.

That's where the evolution of application delivery is going: to a world where applications are not just sitting in the passenger seat on the network, they're driving.