LTM Policy – Matching Strategies
Introduction

LTM Policy is a high-performance feature of the BIG-IP which allows administrators to inspect many aspects of the system and runtime traffic, and to take custom actions in response. As the name suggests, this is accomplished by creating policies, and unlike iRules, it does not require programming. Every policy is a collection of rules and is associated with a matching strategy. Every rule in a policy is like an if-then statement: it has a set of conditions and a set of actions, either of which may be empty, but not both. Conditions are the defined comparisons of runtime values against policy values. Actions are the commands which are executed when the conditions match. As an example, one could define a policy with a condition that inspects the HTTP Referer: header; if its hostname contains the string google.com, then take two actions: write a message to the system logs, and forward the connection to a certain pool.

LTM Policy provides three matching strategies, described below. Matching strategies come into play when a policy contains more than one rule, because different rules can match at the same time, and different behavior may be desired depending on the situation.

First Match

With a first-match strategy in effect, as soon as any of the rules match, the associated actions are executed and all further processing stops. This can be efficient, because once there is a match, no further effort is expended evaluating the conditions of the other rules. In the case that multiple rules match at the same time, the ordinal property of each rule is consulted. The ordinal value is used for ordering rules, and the lowest value wins.

All Match

The all-match strategy is perhaps the most straightforward. It directs the policy engine to keep evaluating rules as traffic flows, executing the associated actions as conditions are matched.

Best Match

The best-match strategy is interesting and needs a little more background to describe its capability and customizability.
The big idea behind best-match is to find the most specific match. When multiple rules match, the most specific match is deemed to be the one with the most conditions that matched, the longest matches, or the matches which are deemed to be more significant. In the case where multiple rules match and the rules contain the same number of conditions, the ultimate tiebreaker is to consult the Strategy List. The Strategy List is the official system ordering of conditions, defining which are to be considered more significant than others. It can be viewed in the GUI by visiting Local Traffic >> Policies >> Strategy List >> best-match, or via the tmsh command line at ltm policy-strategy. The conditions at the top of the table are considered more significant than those below, so the winning rule will be the one with the most significant conditions.

The Strategy List is customizable to individual customer needs. It is probably not all that common, but should the default hierarchy of conditions not match expectations for the situation, the table can be customized by moving conditions up and down relative to each other. Be aware that changes to the order affect all policies employing a best-match strategy, so consider the trade-offs of customizing the order for one policy versus potential side effects on other policies that use a best-match strategy.

LTM Policy
Introduction

F5 Local Traffic Manager (LTM) has always provided customers with the ability to optimize their network deployment by providing tools that can observe network traffic and allow the administrator to configure various actions to take based on those observations. This is embodied in the fundamental concept of a virtual server, which groups traffic into pools based on observed IP addresses, ports, and DNS names, and furthered by extensions like iRules, which provide a tremendous amount of flexibility and customizability.

For HTTP traffic up until BIG-IP 11.4.0, the HTTP Class module provided the ability for an administrator to match various parts of an HTTP transaction using regular expressions, and to specify an associated action to take, such as inserting or removing a header, sending a redirect, or deciding to which VLAN or pool a request should be forwarded. This was a flexible approach, but regular expression processing can be performance intensive, serial evaluation can get bogged down as the number of conditions increases, and sometimes proper coverage would require the administrator to configure a specific ordering of evaluation. With the growth of traffic on the internet, and the explosion of HTTP traffic in particular, organizations are increasingly in need of more sophisticated tools which can observe traffic in more depth and execute actions with good performance.

LTM Policy

LTM Policy first appeared in BIG-IP 11.4.0 as a flexible and high-performance replacement for HTTP Class. Additional capabilities and features have been continuously added since that time. At its core, LTM Policy is a data-driven rules engine which is tightly integrated with the Traffic Management Microkernel (tmm). One of the big improvements brought by LTM Policy is the accelerated and unique way that it can evaluate all conditions in parallel.
When one or more policies are applied to a virtual server, they go through a compilation step that builds a combined, high-performance internal decision tree for all of the rules, conditions, and actions. This optimized representation of a virtual server's policies guarantees that every condition is only evaluated once, and it allows for parallel evaluation of all conditions, as well as other performance boosts, such as short-circuit evaluation. Another improvement is that conditions can observe attributes from both the request and the response, not just the request. Unlike HTTP Class, whose first-match-wins behavior could lead to ordering issues, LTM Policy can trigger on the first matching condition, all matches, or the most specific match, or execute a default action when no conditions match.

Policies

What is a policy? A policy is a collection of rules, and is associated with a matching strategy, aspects the policy requires, and other aspects the policy controls. Every rule in a policy has a set of conditions and a set of actions, where either set may be empty.

Conditions

Conditions describe the comparisons that occur when traffic flows through a virtual server. The properties available to a condition depend on what aspect the policy requires. (See the Conditions chart below.) For example, if a policy requires the http aspect, then HTTP-specific entities like headers, cookies, and the URI can be used in comparisons.

  If the policy requires   Then these Operands    Some of the properties available
  this aspect:             are available:         for comparison in conditions:
  -----------------------  ---------------------  --------------------------------------------
  none                     cpu-usage              1, 5, 15 minute load average
  tcp                      tcp (+ all above)      IP address, port, MSS
  http                     geoip                  geographic region associated with IP address
                           http-uri               domain, path, query string
                           http-method            HTTP method, e.g. GET, POST, etc.
                           http-version           versions of HTTP protocol
                           http-status            numeric and text response status codes
                           http-host              host and port value from Host: header
                           http-header            header name
                           http-referer           all components of Referer: URI
                           http-cookie            cookie name
                           http-set-cookie        all components of Set-Cookie
                           http-basic-auth        username, password
  http-user-agent          (+ all above)          browser type, version; device make, model
  client-ssl               client-ssl             protocol, cipher, cipher strength
  ssl-persistence          ssl-extension          server name, alpn, npn
                           ssl-cert               common-name from cert

Actions

Actions are commands which are executed when the associated conditions match. As with conditions, the actions available to a policy depend on which aspects the policy controls. (See the Actions chart below.) For example, if a policy controls the forwarding aspect, then forwarding-specific actions, such as selecting a pool, virtual server, or VLAN, are available.

A default rule is a rule which has no conditions - and is therefore considered to always be a match - plus one or more actions. A default rule is typically ordered such that it is the last rule evaluated. In policies with a first-match or best-match strategy (see below), the default rule is only run when no other rules match; policies with an all-match strategy will always execute default rule actions.
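The default-rule idea can be made concrete with a short sketch. This first-match policy (the policy, rule, and pool names here are hypothetical; the condition and action syntax follows the examples later in this article) forwards /api requests to one pool and everything else to a default pool:

```
ltm policy /Common/default-rule-example {
    controls { forwarding }
    requires { http }
    rules {
        api-rule {
            actions {
                0 { forward select pool /Common/api_pool }
            }
            conditions {
                0 { http-uri path starts-with values { /api } }
            }
            ordinal 1
        }
        default-rule {
            actions {
                0 { forward select pool /Common/web_pool }
            }
            ordinal 2
        }
    }
    strategy /Common/first-match
}
```

Because default-rule has no conditions, it always matches; with the first-match strategy it fires only when no earlier rule matched, while under an all-match strategy its actions would execute on every request.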
  If the policy controls   Then these Targets   Which enables you to specify
  this aspect:             are available:       some of these Actions:
  -----------------------  -------------------  --------------------------------------------------------
  (none specified)         ltm-policy           disable LTM Policy
                           http                 enable/disable HTTP filter
                           http-uri             replace path, query string, or full URI
                           http-host            replace Host: header
                           http-header          insert/remove/replace HTTP header
                           http-referer         insert/remove/replace Referer:
                           http-cookie          insert/remove Cookie in request
                           http-set-cookie      insert/remove Set-Cookie in response
                           log                  write to system logs
                           tcl                  evaluate Tcl expression
                           tcp-nagle            enable/disable Nagle's algorithm
  forwarding               forward              pick pool, vlan, nexthop, rateclass
                           http-reply           send redirect to client
  caching                  cache                enable/disable caching
  compression              compress             enable/disable compression
                           decompress           enable/disable decompression
  classification           pem                  classify traffic category/application
  request-adaptation       request-adapt        enable/disable content adaptation through internal virtual server
  response-adaptation      response-adapt       enable/disable content adaptation through internal virtual server
  server-ssl               server-ssl           enable/disable server ssl
  persistence              persist              select persistence (e.g. cookie, source address, hash, etc.)

Strategy

All policies are associated with a strategy, which determines the behavior when multiple rules have matching conditions. As their titles suggest, the First Match strategy executes the actions for the first rule that matches, the All Match strategy executes the actions for all rules which match, and Best Match selects the rule which has the most specific match. The most specific match is determined by comparing the rules for the number of conditions that matched, the longest matches, or the matches which are deemed to be more significant.

Multiple policies can be applied to a virtual server. The only restriction is that each aspect of the system (e.g. forwarding, caching; see the Actions table) may only be controlled by one policy.
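As a sketch of that restriction (the virtual server and policy names here are hypothetical), a virtual server can reference two policies as long as each controls a different aspect - here one controls forwarding and the other compression:

```
ltm virtual /Common/vs_www {
    destination /Common/203.0.113.10:80
    ip-protocol tcp
    profiles {
        /Common/http { }
        /Common/tcp { }
    }
    policies {
        /Common/policy-forwarding { }
        /Common/policy-compression { }
    }
}
```

Adding a second policy that also controls forwarding would violate the one-policy-per-aspect restriction.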
This is a reasonable restriction to avoid ambiguous situations where multiple policies controlling the same aspect match but specify conflicting actions.

LTM Policy and iRules

iRules are an important and long-standing part of the BIG-IP architecture, and pervasive throughout the product. There is some overlap between what can be controlled by LTM Policy and iRules; not surprisingly, most of the overlap is in the realm of HTTP traffic handling, and just about anything that is possible in LTM Policy can also be written as an iRule. LTM Policy is a structured, data-driven collection of rules. iRules and Tcl are more of a general purpose programming language, which provides lots of power and flexibility but also requires some programming skills. Because policies are structured and can be created by populating tables in a web UI, they are more approachable for those with limited programming skills.

So, when to use LTM Policy and when to use iRules? As a general rule, where there is identical functionality, LTM Policy should be able to offer better performance. There are situations where LTM Policy may be a better choice:

- when rules need to span different events (e.g. a rule that considers both request and response)
- when dealing with HTTP headers and cookies (LTM Policy has more direct access to internal HTTP state)
- when there are a large number of conditions (pre-compiled internal decision trees can evaluate conditions in parallel)
- when conditions have a lot of commonality

For supported events (such as HTTP_REQUEST or HTTP_RESPONSE), LTM Policy evaluation occurs before iRule evaluation. This means that it is possible to write an iRule to override an LTM Policy decision.

LTM Policy leverages standard iRule functions

Beginning with releases in 2015, selected LTM Policy actions support Tcl command substitutions and the ability to call standard iRule commands. The intention is to empower the administrator with quick, read-only access to the runtime environment.
For example, it is possible to specify an expression which includes data about the current connection, such as [HTTP::uri], which gets substituted at runtime with the URI value of the current request. Tcl support in LTM Policy is not intended as a hook for general purpose programming, and it can result in an error when making calls which might have side effects or cause a processing delay. There is also a performance trade-off to consider, as Tcl's flexibility comes with a runtime cost. Below is a summary of actions which support Tcl expressions:

  Target            Action(s)          Parameter     Note
  ----------------  -----------------  ------------  ----------------------------------------
  http-uri          replace            value         full URI
                                       path          URI path component
                                       query string  URI query string component
  http-header       insert, replace    value         arbitrary HTTP header
  http-cookie       insert             value         Cookie: header
  http-host         replace            value         Host: header
  http-referer      replace            value         Referer: header
  http-set-cookie   insert             value         Set-Cookie: header
                                       domain
                                       path
  log                                  message       write to syslog
  tcl *             setvar             expression    set variable in Tcl runtime environment
  http-reply *      redirect           location      redirect client to location

  * This action has supported Tcl expressions since BIG-IP 11.4.

While a comprehensive list of valid Tcl commands is beyond the scope of this document, it should be noted that not every Tcl command will be valid at any given time. Most standard iRule commands are associated with a tmm event, as are LTM Policy actions. For example, in the LTM Policy request event, iRule commands which are valid in the context of the HTTP_REQUEST event will validate without error. A validation error will be raised if one attempts to use iRule commands that are not valid in the current event scope. For example, in an LTM Policy action associated with the request (i.e. HTTP_REQUEST) event context, specifying an expression like [HTTP::status], which is only valid in a response event context, will not pass the validation check.
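To tie the table above to a concrete case, here is a minimal sketch (the policy and header names are hypothetical) that uses Tcl command substitution in an http-header insert action; the [IP::client_addr] iRule command is valid in the request context, so it passes validation:

```
ltm policy /Common/tag-client {
    requires { http }
    rules {
        rule-1 {
            actions {
                0 { http-header insert name X-Client-IP value "tcl:[IP::client_addr]" }
            }
            ordinal 1
        }
    }
    strategy /Common/first-match
}
```

The tcl: prefix tells the policy engine to evaluate the rest of the value as a Tcl expression at runtime; since the rule has no conditions, it matches every request.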
iRules support LTM Policy

There are several iRule commands which can be used to access information about policies attached to the virtual server:

- POLICY::controls - returns details about the policy controls for the virtual server the iRule is enabled on
- POLICY::names - returns details about the policy names for the virtual server the iRule is enabled on
- POLICY::rules - returns the rules of the supplied policy that had actions executed
- POLICY::targets - returns or sets properties of the policy rule targets for the policies associated with the virtual server that the iRule is enabled on

What can I do with it?

Sky's the limit. Here are some sample tasks and LTM Policies that could be used to implement them. Keep in mind that the policy definitions shown below, which at first glance appear to be more complicated than an equivalent iRule, are generated by a friendlier, web-based UI. The web UI allows the policy author to select valid options from menus and build up a policy with little worry about programming and proper syntax.

Task: If the system load average over the last minute is above 5, then disable compression. (This example assumes compression is competing for CPU cycles, and would not apply to scenarios where hardware compression is available.) Demonstrates CPU load conditions and the ability to control compression.

ltm policy /Common/load-avg {
    controls { compression }
    requires { http }
    rules {
        rule-1 {
            actions {
                0 { compress disable }
            }
            conditions {
                0 { cpu-usage last-1min greater values { 5 } }
            }
            ordinal 1
        }
    }
    strategy /Common/first-match
}

Task: If a request is coming from California, forward it to pool pool_ca, and if the request comes from Washington, direct it to pool_wa. Otherwise forward to my-default-pool. Demonstrates geo-IP conditions, actions to forward to a specific pool, and a default rule.
ltm policy /Common/policy-sa {
    controls { forwarding }
    requires { http }
    rules {
        defaultrule {
            actions {
                0 { forward select pool /Common/my-default-pool }
            }
            ordinal 3
        }
        rule-1 {
            actions {
                0 { forward select pool /Common/pool_ca }
            }
            conditions {
                0 { geoip region-name values { California } }
            }
            ordinal 1
        }
        rule-2 {
            actions {
                0 { forward select pool /Common/pool_wa }
            }
            conditions {
                0 { geoip region-name values { Washington } }
            }
            ordinal 2
        }
    }
    strategy /Common/first-match
}

Task: If the request was referred by my-affiliate.com and the response contains an image, set a cookie containing the current time. An example of a policy which spans both request and response, and uses Tcl command substitution for a value.

ltm policy /Common/affiliate {
    requires { http }
    rules {
        rule-1 {
            actions {
                0 { http-set-cookie response insert name MyAffiliateCookie value "tcl:[clock format [clock seconds] -format %H:%M:%S]" }
            }
            conditions {
                0 { http-referer contains values { my-affiliate.com } }
                1 { http-header response name Content-type starts-with values { image/ } }
            }
            ordinal 1
        }
    }
    strategy /Common/first-match
}

Some rules of thumb

While there are certainly exceptions to any rule, the following are some general usage guidelines:

- The maximum number of rules across active policies is limited by memory and CPU capability, but more than a thousand is starting to be a lot.
- Using Tcl command substitutions in actions can have performance implications; the more Tcl, the more performance impact.
- Only use Tcl commands that read and quickly return data; avoid those that change internal state or cause any delays.

Conclusion

LTM Policy is a powerful, flexible, and high-performance tool that administrators can leverage for application deployment. Its table-driven user interface requires very little in the way of programming experience, and new capabilities have been added continuously with each release.

Selective Compression on BIG-IP
BIG-IP provides Local Traffic Policies that simplify the way in which you can manage traffic associated with a virtual server. You can associate a BIG-IP local traffic policy to support selective compression for types of content that can benefit from compression, like HTML, XML, and CSS stylesheets. These file types can realize performance improvements, especially across slow connections, by compressing them. You can easily configure your BIG-IP system to use a simple Local Traffic Policy that selectively compresses these file types. In order to use a policy, you will want to create and configure a draft policy, publish that policy, and then associate the policy with a virtual server in BIG-IP v12.

Alright, let's log into a BIG-IP. The first thing you'll need to do is create a draft policy. On the main menu select Local Traffic>Policies>Policy List and then the Create or + button. This takes us to the create policy config screen. We'll name the policy SelectiveCompression, add a description like 'This policy compresses file types,' and we'll leave the Strategy as the default of Execute First matching rule. This is so the policy uses the first rule that matches the request. Click Create Policy, which saves the policy to the policies list. When saved, the Rules search field appears but has no rules. Click Create under Rules.

This brings us to the Rules General Properties area of the policy. We'll give this rule a name (CompressFiles), and then the first settings we need to configure are the conditions that need to match the request. Click the + button to associate file types. We know that the files for compression are comprised of specific file types associated with a Content-Type HTTP header. We choose HTTP Header and select Content-Type in the Named field. Select 'begins with' next and type 'text/' for the condition, and compress at the 'response' time. We'll add another condition to manage CPU usage effectively.
So we click CPU Usage from the list, with a duration of 1 minute and a conditional operator of 'less than or equal to' 5 as the usage level, at response time. Next, under Do the following, click the create + button to create a new action when those conditions are met. Here, we'll enable compression at the response time. Click Save. Now the draft policy screen appears with the General Properties and a list of rules. Here we want to click Save Draft.

Now we need to publish the draft policy and associate it with a virtual server. Select the policy and click Publish. Next, on the main menu click Local Traffic>Virtual Servers>Virtual Server List and click the name of the virtual server you'd like to associate with the policy. On the menu bar click Resources, and for Policies click Manage. Move SelectiveCompression to the Enabled list and click Finished. The SelectiveCompression policy is now listed in the policies list and is associated with the chosen virtual server. The virtual server with the SelectiveCompression Local Traffic Policy will compress the file types you specified.

Congrats! You've now added a local traffic policy for selective compression! You can also watch the full video demo thanks to our TechPubs team. ps

LTM Policy Recipes
LTM Policy is the powerful and performant replacement for the HTTP Class, and first appeared in BIG-IP 11.4.0. This is intended to be a short article showing some practical recipes using LTM Policy. Please also check out another article with a more complete overview of LTM Policy. While there are iRules which can be used to address each of the following scenarios, LTM Policy is a particularly good tool when it comes to inspecting and modifying HTTP-specific quantities like URIs, headers, and cookies.

Preventing the Nimda worm

If the URL contains certain strings known to be associated with Nimda, then use a forwarding action to reset the connection. Demonstrates matching on the URI and terminating a connection.

ltm policy /Common/nimda-be-gone {
    controls { forwarding }
    requires { http }
    rules {
        rule-1 {
            actions {
                0 { forward reset }
            }
            conditions {
                0 { http-uri contains values { root.exe admin.dll cmd.exe } }
            }
            ordinal 1
        }
    }
    strategy /Common/first-match
}

Preventing spoofing of X-Forwarded-For

Bad actors may try to work around security by falsifying the IP address in a header and trying to pass it through the BIG-IP. Replace the X-Forwarded-For header in the request, if any, with the actual client IP address. Demonstrates header replacement, case-insensitive comparison of HTTP headers, and use of Tcl expressions to access internal state.

ltm policy /Common/xff {
    requires { http }
    rules {
        "remove existing" {
            actions {
                0 { http-header replace name X-foRWardED-for value tcl:[IP::client_addr] }
            }
            ordinal 2
        }
    }
    strategy /Common/first-match
}

Mitigating Shellshock

Shellshock refers to a class of exploits that take advantage of the bash shell via a specifically-crafted URL. Here is a policy which looks for the pattern "() {" in the URI. It is rare that this pattern of characters will occur in a URI, so false positives should be rare.
ltm policy /Common/shellshock {
    controls { forwarding }
    requires { http }
    rules {
        rule-1 {
            actions {
                0 { log write message "tcl:Shellshock detected from [IP::client_addr], blocked" }
                1 { forward reset }
            }
            conditions {
                0 { http-uri contains values { "() {" } }
            }
            ordinal 1
        }
    }
    strategy /Common/first-match
}

Selective compression

Certain types of content are good candidates for compression. For example, transfers of common text types like HTML, XML, and CSS stylesheets can show improved transfer time - especially across slow links - by compressing them. Here is a policy which demonstrates selective compression based on the Content-Type, taking into account the system load average. All text-type responses will be compressed, but if the 1 minute load average climbs above 5, then compression will not be enabled, to save CPU resources.

ltm policy /Common/blanket {
    controls { compression }
    requires { http }
    rules {
        rule-1 {
            conditions {
                0 { http-header response name Content-type starts-with values { text/ } }
                1 { cpu-usage response last-1min less-or-equal values { 5 } }
            }
            actions {
                0 { compress response enable }
            }
            ordinal 1
        }
    }
    strategy /Common/first-match
}

BYOD 2.0 – Moving Beyond MDM
#BYOD has quickly transformed IT, offering a revolutionary way to support the mobile workforce. The first wave of BYOD featured MDM solutions that controlled the entire device. In the next wave, BYOD 2.0, control applies only to those apps necessary for business, enforcing corporate policy while maintaining personal privacy. The #F5 Mobile App Manager is a complete mobile application management platform built for BYOD 2.0. ps

Related:
- F5's Feeling Alive with Newly Unveiled Mobile App Manager
- Inside Look - F5 Mobile App Manager (Video)
- BYOD - More Than an IT Issue (Video)
- Is BYO Already D?
- Will BYOL Cripple BYOD?
- Freedom vs. Control
- BYOD Uptake Has Only Just Begun
- BYOD Policies – More than an IT Issue Part 1: Liability
- BYOD Policies – More than an IT Issue Part 2: Device Choice
- BYOD Policies – More than an IT Issue Part 3: Economics
- BYOD Policies – More than an IT Issue Part 4: User Experience and Privacy
- BYOD Policies – More than an IT Issue Part 5: Trust Model

BYOD - More Than an IT Issue
I explain the various organizational entities that should be involved when creating a BYOD policy. ps

Related:
- F5's Feeling Alive with Newly Unveiled Mobile App Manager
- Inside Look - F5 Mobile App Manager
- Is BYO Already D?
- Will BYOL Cripple BYOD?
- Freedom vs. Control
- BYOD Uptake Has Only Just Begun
- BYOD Policies – More than an IT Issue Part 1: Liability
- BYOD Policies – More than an IT Issue Part 2: Device Choice
- BYOD Policies – More than an IT Issue Part 3: Economics
- BYOD Policies – More than an IT Issue Part 4: User Experience and Privacy
- BYOD Policies – More than an IT Issue Part 5: Trust Model

The Prosecution Calls Your Smartphone to the Stand
Or Bring-Your-Own-Defendant. A very real legal situation is brewing in the wake of the bring-your-own-device phenomenon: #eDiscovery. You might be familiar with some of the various legal or liability issues that should be addressed with a BYOD policy, like privacy, the loss of personal information, working overtime, or the fact that financial responsibility may dictate legal obligation. Now, technology law experts are saying that if your company is involved in litigation, criminal or civil, personal mobile devices that were used for work email or other company activity could be confiscated and examined for evidence as part of the investigation or discovery process.

So if you use your personal smartphone for work-related activities and your company is involved in a lawsuit, there may come a point where the court might subpoena your phone to see what relevant evidence it might contain. During litigation, the organization itself may have the legal obligation to sift through your mobile device for related information. If sued, companies are required to make a good-faith effort to retrieve data - wherever that may be. That includes your email, GPS history, text messages, cell phone records, social media accounts, pictures, and any other info that could be pertinent to the case. This is proprietary company-owned data that resides on my personally owned device. This is especially true if your corporate email co-mingles with your personal email - meaning delivered through the same email app or program. In fact, according to this article, a judge recently sanctioned a company for a discovery violation because it did not search the BYOD devices during discovery.

Some people seem to lose all sense of daily human functioning when social networks like Facebook, Twitter and others are unavailable for a short period of time. We've become so attached to our mobile devices, and they have become the center of our lives... imagine not having that pacifier for a few days.
OMG, I've time-traveled to the 1980s and have no way of announcing it to the world!! What am I going to do now that I can't re-tweet that funny cat picture! I'm so lost without you, oh electronic appendage.

As more organizations embrace or even require BYOD in the workplace, it becomes even more critical to be able to separate personal and work profiles. It is important that the corporate data and apps do not mingle with the already present personal data. Solutions like F5's Mobile App Manager provide a fully enclosed virtual enterprise workspace and create a secure footprint on the device for enterprise data and access only. MAM allows organizations to safely separate personal data and usage from corporate oversight, and controls how employees access key corporate information. ps

Related:
- Use your personal smartphone for work email? Your company might take it
- BYOD Lawsuits Loom as Work Gets Personal
- BYOD and Delta Airlines Privacy Lawsuit
- BYOD gets messy with AT&T class action lawsuit
- Is BYO Already D?
- BYOD Policies – More than an IT Issue Part 1: Liability
- BYOD Policies – More than an IT Issue Part 2: Device Choice
- BYOD Policies – More than an IT Issue Part 3: Economics
- BYOD Policies – More than an IT Issue Part 4: User Experience and Privacy
- BYOD Policies – More than an IT Issue Part 5: Trust Model
- BYOD 2.0: Moving Beyond MDM (pdf)
- Inside Look: F5 Mobile App Manager

A More Practical View of Cloud Brokers
#cloud The conventional view of cloud brokers misses the need to enforce policies and ensure compliance.

During a dinner at VMworld organized by Lilac Schoenbeck of BMC, we had the chance to chat up cloud and related issues with Kia Behnia, CTO at BMC. Discussion turned, naturally I think, to process. That could be because BMC is heavily invested in automating and orchestrating processes. Despite the nomenclature used (business process management), for IT this is a focus on operational process automation, though eventually IT will have to raise the bar and focus on the more businessy aspects of IT and operations.

Alex Williams postulated the decreasing need for IT in an increasingly cloudy world. On the surface this generally seems to be an accurate observation. After all, when business users can provision applications a la SaaS to serve their needs, do you really need IT? Even in cases where you're deploying a fairly simple web site, the process has become so abstracted as to comprise the push of a button, dragging some components after specifying a template, and voila! Web site deployed, no IT necessary.

While from a technical-difficulty perspective this may be true (and if we say it is, it is for only the smallest of organizations), there are many responsibilities of IT that are simply overlooked and, as we all know, underappreciated for what they provide, not the least of which is being able to understand the technical implications of regulations and requirements like HIPAA, PCI-DSS, and SOX - all of which have some technical aspect to them and need to be enforced, well, with technology.

See, choosing a cloud deployment environment is not just about "will this workload run in cloud X". It's far more complex than that, with many more variables that are often hidden from the end-user, a.k.a. the business peoples. Yes, cost is important. Yes, performance is important. And these are characteristics we may be able to gather with a cloud broker.
But what we can't know is whether or not a particular cloud will be able to enforce other policies: those handed down by governments around the globe and those put into writing by the organization itself. Imagine the horror of a CxO upon discovering that an errant employee with a credit card has just violated a regulation that will result in Severe Financial Penalties or worse, jail. These are serious issues that conventional views of cloud brokers simply do not take into account. It's one thing to violate an organizational policy regarding e-mailing confidential data to your Gmail account; it's quite another to violate one of the government regulations that govern not only data at rest but data in flight.

A PRACTICAL VIEW of CLOUD BROKERS

Thus, it seems a more practical view of cloud brokers is necessary: a view that enables such solutions to consider not only performance and price, but also the ability to adhere to and enforce corporate and regulatory policies. Such a data center-hosted cloud broker would be able to take these very important factors into consideration when making decisions regarding the optimal deployment environment for a given application. That may be a public cloud, it may be a private cloud, it may be a dynamic data center. The resulting decision (and options) are not nearly as important as the ability for IT to ensure that the technical aspects of policies are included in the decision-making process. And it must be IT that codifies those requirements into a policy that can be leveraged by the broker and ultimately the end user to help make deployment decisions. Business users, when faced with requirements for web application firewalls in PCI-DSS, for example, or with ensuring a default "deny all" policy on firewalls and routers, are unlikely to be able to evaluate public cloud offerings for the ability to meet such requirements. That's the role of IT, and even wearing rainbow-colored cloud glasses can't eliminate the very real and important role IT has to play here.
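To make the idea concrete, here is a minimal, hypothetical sketch of how such a policy-aware broker decision might work. All names, capability strings, and numbers below are invented for illustration (this is not any real broker product's API): compliance requirements codified by IT act as a hard filter, and only the clouds that pass are ranked on cost and performance.

```python
# Hypothetical policy-aware cloud broker sketch. Capability strings like
# "waf" and "default-deny-firewall" stand in for PCI-DSS-style controls.
from dataclasses import dataclass, field


@dataclass
class CloudOffering:
    name: str
    cost_per_hour: float
    avg_latency_ms: float
    capabilities: set = field(default_factory=set)


def select_deployment(offerings, required_capabilities, max_latency_ms):
    """Filter on policy compliance first, then rank survivors on cost."""
    compliant = [
        o for o in offerings
        if required_capabilities <= o.capabilities      # must enforce every control
        and o.avg_latency_ms <= max_latency_ms          # performance floor
    ]
    # Price only matters among clouds that can actually enforce policy.
    return min(compliant, key=lambda o: o.cost_per_hour, default=None)


# Requirements codified by IT, consumed by the broker on the user's behalf.
required = {"waf", "default-deny-firewall"}
clouds = [
    CloudOffering("public-a", 0.08, 40, {"waf"}),
    CloudOffering("public-b", 0.12, 55, {"waf", "default-deny-firewall"}),
    CloudOffering("private-dc", 0.20, 10, {"waf", "default-deny-firewall"}),
]
choice = select_deployment(clouds, required, max_latency_ms=60)
# "public-a" is cheapest but non-compliant, so it never enters the ranking.
```

The point of the sketch is the ordering of concerns: the cheapest cloud is excluded before price is ever compared, which is exactly the step a business user with a credit card skips.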
The role of IT may be changing, transforming, but it is in no way being eliminated or decreasing in importance. In fact, given the nature of today's environments and threat landscape, the importance of IT in helping to determine deployment locations that at a minimum meet organizational and regulatory requirements is paramount to enabling business users to have more control over their own destiny, as it were. So while cloud brokers currently appear to be external services, often provided by SIs with a vested interest in cloud migration and the services they bring to the table, ultimately these beasts will become enterprise-deployed services capable of making policy-based decisions that include the technical details and requirements of application deployment along with the more businessy details such as costs. The role of IT will never really be eliminated. It will morph, it will transform, it will expand and contract over time. But business and operational regulations cannot be encapsulated into policies without IT. And for those applications that cannot be deployed into public environments without violating those policies, there needs to be a controlled, local environment into which they can be deployed.

Related blogs and articles:
The Social Cloud - now, with appetizers
The Challenges of Cloud: Infrastructure Diaspora
The Next IT Killer Is… Not SDN
The Cloud Integration Stack
Curing the Cloud Performance Arrhythmia
F5 Friday: Avoiding the Operational Debt of Cloud
The Half-Proxy Cloud Access Broker
The Dynamic Data Center: Cloud's Overlooked Little Brother

Lori MacVittie is a Senior Technical Marketing Manager, responsible for education and evangelism across F5's entire product suite. Prior to joining F5, MacVittie was an award-winning technology editor at Network Computing Magazine. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
She is the author of XAML in a Nutshell and a co-author of The Cloud Security Rules.

What is a Strategic Point of Control Anyway?
From mammoth hunting to military maneuvers to the datacenter, the key to success is control. Recalling your elementary school lessons, you'll probably remember that mammoths were large and dangerous creatures, and like most animals they were quite deadly to primitive man. Yet man found a way to hunt them effectively and, we assume, with more than a small degree of success, as we are still here and, well, the mammoths aren't. [Image: Marx Cavemen. Photo and artwork: Fred R Hinojosa.] The theory of how man successfully hunted ginormous creatures like the mammoth goes something like this: a group of hunters would single out a mammoth and herd it toward a point at which the hunters would have an advantage – a narrow mountain pass, a clearing enclosed by large rock, etc… The qualifying criterion for the place in which the hunters would finally confront their next meal was that it afforded the hunters a strategic point of control over the mammoth's movement. The mammoth could not move away without either (a) climbing sheer rock walls or (b) being attacked by the hunters. By forcing mammoths into a confined space, the hunters controlled the environment and the mammoth's ability to flee, and thus a successful hunt was had by all. At least by all the hunters; the mammoths probably didn't find it successful at all. Whether you consider mammoth hunting or military maneuvers or strategy-based games (chess, checkers), one thing remains the same: a winning strategy almost always involves forcing the opposition into a situation over which you have control. That might be a mountain pass, or a densely wooded forest, or a bridge. The key is to force the entire complement of the opposition through an easily and tightly controlled path. Once they're on that path – and can't turn back – you can execute your plan of attack.
These easily and highly constrained paths are "strategic points of control." They are strategic because they are the points at which you are empowered to perform some action with a high degree of assurance of success. In data center architecture there are several "strategic points of control" at which security, optimization, and acceleration policies can be applied to inbound and outbound data. These strategic points of control are important to recognize, as they are the most efficient – and effective – points at which control can be exerted over the use of data center resources.

DATA CENTER STRATEGIC POINTS of CONTROL

In every data center architecture there are aggregation points. These are points (one or more components) through which all traffic is forced to flow, for one reason or another. For example, the most obvious strategic point of control within a data center is at its perimeter: the router and firewalls that control inbound access to resources and in some cases control outbound access as well. All data flows through this strategic point of control, and because it's at the perimeter of the data center it makes sense to implement broad resource access policies at this point. Similarly, strategic points of control occur internal to the data center at several "tiers" within the architecture. Several of these tiers are:

Storage virtualization provides a unified view of storage resources by virtualizing storage solutions (NAS, SAN, etc…). Because the storage virtualization tier manages all access to the resources it is managing, it is a strategic point of control at which optimization and security policies can be easily applied.

Application delivery / load balancing virtualizes application instances and ensures availability and scalability of an application. Because it is virtualizing the application, it becomes a point of aggregation through which all requests and responses for an application must flow. It is a strategic point of control for application security, optimization, and acceleration.

Network virtualization is emerging internal to the data center architecture as a means to provide inter-virtual-machine connectivity more efficiently than can be achieved through traditional network connectivity. Virtual switches often reside on a server on which multiple applications have been deployed within virtual machines. Traditionally it might be necessary for communication between those applications to physically exit and re-enter the server's network card. But by virtualizing the network at this tier, the physical traversal path is eliminated (and the associated latency, by the way) and more efficient inter-VM communication can be achieved. This is a strategic point of control at which access to applications at the network layer should be applied, especially in a public cloud environment where inter-organizational residency on the same physical machine is highly likely.

OLD SKOOL VIRTUALIZATION EVOLVES

You might have begun noticing a central theme to these strategic points of control: they are all points at which some kind of virtualization – and thus aggregation – occurs naturally in a data center architecture. This is the original (first) kind of virtualization: the presentation of many resources as a single resource, a la load balancing and other proxy-based solutions. When a one-to-many (1:M) virtualization solution is employed, it naturally becomes a strategic point of control by virtue of the fact that all "X" traffic must flow through that solution, and thus policies regarding access, security, logging, etc… can be applied in a single, centrally managed location. The key here is "strategic" and "control". The former relates to the ability to apply the latter over data at a single point in the data path. This kind of 1:M virtualization has been a part of data center architectures since the mid 1990s.
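The 1:M pattern can be sketched in a few lines. This is a hypothetical toy, not BIG-IP or any real proxy: one virtual endpoint fronts many backends, so a security policy and a logging policy are each enforced exactly once, at the choke point, instead of on every backend.

```python
# Toy 1:M aggregation point: every request crosses one function, so access
# control and logging are applied centrally. Names and the blocked network
# range are invented for illustration.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("aggregation-point")

BACKENDS = ["app-1", "app-2", "app-3"]   # the "many" behind one virtual endpoint
BLOCKED_NETWORKS = ("10.66.",)           # example access policy

_rr = 0  # round-robin cursor


def handle(client_ip, request):
    """The single choke point: policy checks, then load-balance to a backend."""
    global _rr
    if client_ip.startswith(BLOCKED_NETWORKS):   # security policy, applied once
        log.info("denied %s", client_ip)         # logging policy, applied once
        return (403, None)
    backend = BACKENDS[_rr % len(BACKENDS)]      # 1:M virtualization
    _rr += 1
    log.info("%s -> %s for %s", client_ip, backend, request)
    return (200, backend)
```

Because every request funnels through `handle`, changing the access or logging policy means editing one place, which is precisely what makes the aggregation point "strategic".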
It’s evolved to provide ever broader and deeper control over the data that must traverse these points of control by nature of network design. These points have become, over time, strategic in terms of the ability to consistently apply policies to data in as operationally efficient manner as possible. Thus have these virtualization layers become “strategic points of control”. And you thought the term was just another square on the buzz-word bingo card, didn’t you?1.2KViews0likes6CommentsPolicy: Not just QoS and Tiered Services.
With the development of the IP Multimedia Subsystem (IMS), the challenge of defining how the IMS infrastructure would deliver application services and control the user experience was answered with Policy. Policy is simply the application of business rules to define how a subscriber interacts with the network, applications and services. Since 3GPP included Policy in the IMS standards (3GPP TS 23.203), the market has viewed Policy as simply bandwidth management and subscriber tiered services. However, this view of Policy is a limited and incomplete implementation of Policy in a Communication Service Provider (CSP) network. In order to truly implement a comprehensive policy architecture, policy must be integrated into the design and implementation of all network services: creating rules to define how a subscriber connects to the network, authenticates, and has an IP address allocated, along with all the interactions with network support services such as IPv6 translation, DNS, NAT, security services, etc. This Policy definition is the only way to truly define the subscriber's interaction with services and applications. As CSPs transition to all-IP networks, maintaining the Quality of Experience (QoE) will determine the CSP's success against competition. The ultimate challenge in transitioning to these technologies is still providing at least the same QoE as the previous networks (3G and traditional circuit-switched voice) across all services. Since voice still has the largest impact on ARPU, delivering a quality VoIP solution (or VoLTE for wireless 4G) that is as stable and reliable as circuit-switched voice is essential for success. Comprehensive policy across all IP services in the network provides a level of management related to these new technologies and the subscriber experience.
IMS standards for Policy, specifically the Policy and Charging Rules Function's (PCRF) relationship with the Policy and Charging Enforcement Function (PCEF), take the first step in defining this policy architecture. The PCRF, by definition, defines the policy associated with the subscriber and sends policy updates to the PCEF, which shapes the traffic (via Quality of Service (QoS)) for that session. The PCRF makes these decisions based upon the subscriber's tier of service, network origin, application, service definition and network status information. This Policy step is crucial, but it is incomplete for Comprehensive Policy across the network. For Comprehensive Policy, all network services need to be Policy aware and be able to enforce policy according to the specific network service. For example, as a device connects to the IMS network, a DNS query is sent to determine the Call Session Control Function (CSCF) for the first SIP request. A standard DNS server will simply return the A or AAAA record (depending on whether this is an IPv4 or IPv6 network) that it has for the appropriate CSCF. However, Policy can be used to define how that DNS server determines which CSCF is returned based upon the network and subscriber. By defining this first interaction, the most available CSCF address can be returned to the device or, more specifically, a CSCF scheme can be defined based upon the location, network status, and subscriber. This is the first step in defining the experience that subscriber has with the IMS service. By defining Policy at the network services, the CSP takes control of the subscriber's interaction at every point on the network. This makes every network service a Policy enforcement point for the CSP's business plan. These policies can be either dynamic or static, depending on the service or technology being deployed.
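The policy-aware DNS idea might look like the following sketch. Everything here (hostnames, load values, tier names) is invented to illustrate returning a CSCF based on subscriber and network state rather than a static record; a real implementation would live inside or behind the DNS service itself.

```python
# Hypothetical policy-driven CSCF selection, in place of a static A/AAAA answer.
# Load values stand in for live network status fed into the policy decision.
CSCF_POOL = {
    "east": [{"host": "cscf1.east.example.net", "load": 0.9},
             {"host": "cscf2.east.example.net", "load": 0.3}],
    "west": [{"host": "cscf1.west.example.net", "load": 0.5}],
}


def resolve_cscf(subscriber_region, subscriber_tier):
    """Return the CSCF address that policy selects for this subscriber."""
    # Location policy: serve from the subscriber's region, fall back to east.
    candidates = CSCF_POOL.get(subscriber_region) or CSCF_POOL["east"]
    if subscriber_tier == "premium":
        # Tier policy: premium subscribers get the least-loaded CSCF.
        return min(candidates, key=lambda c: c["load"])["host"]
    # Default policy: the region's primary CSCF.
    return candidates[0]["host"]


resolve_cscf("east", "premium")   # least-loaded CSCF in the east region
resolve_cscf("east", "basic")     # the region's primary CSCF
```

The same query thus yields different answers for different subscribers, which is exactly the "first interaction" the text describes the CSP taking control of.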
Dynamic Policy allows for changes in the policy within a session without dropping the session to make the change. Static Policy is simply a set of rules that do not change mid-session. To provide for dynamic policy, a policy decision point is needed to pass policy changes to the policy enforcement point; this is the scheme that the PCRF and PCEF use to provide dynamic policy. However, using a combination of static and dynamic policy across all network services is the only way to offer comprehensive policy. As CSP technologies, applications and services evolve, the real challenge is maintaining ARPU and reducing, or managing, subscriber churn in order to maximize profit and stay competitive. The only way to achieve this is to maintain, and improve, the QoE as new applications and services are delivered to the subscriber. Understanding and managing the relationships between all services, the subscriber and the network is the only way to control the QoE. Comprehensive Policy across all network elements and services is the only way to manage these relationships between the subscriber and services.

Related Articles: New Service Provider Blog
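The dynamic-versus-static distinction can be illustrated with a toy model of the PCRF/PCEF split: a decision point pushes a QoS change into a live session at the enforcement point without tearing the session down. Class names and fields are invented for illustration and do not reflect the actual Diameter/Gx interfaces.

```python
# Toy model of dynamic policy: the enforcement point (PCEF-like role) accepts
# a mid-session rule change pushed from a decision point (PCRF-like role).
class Session:
    def __init__(self, subscriber, max_kbps):
        self.subscriber = subscriber
        self.max_kbps = max_kbps     # static rule, set once at session start
        self.active = True


class EnforcementPoint:
    """PCEF-like role: holds live sessions and enforces their QoS rules."""

    def __init__(self):
        self.sessions = {}

    def start(self, subscriber, max_kbps):
        self.sessions[subscriber] = Session(subscriber, max_kbps)

    def apply_update(self, subscriber, max_kbps):
        """Dynamic policy: change QoS mid-session; the session stays up."""
        session = self.sessions[subscriber]
        session.max_kbps = max_kbps  # no teardown, no re-attach


pcef = EnforcementPoint()
pcef.start("alice", max_kbps=1000)         # static policy applied at setup
pcef.apply_update("alice", max_kbps=5000)  # decision-point-driven change
```

Static policy is the `start` call; dynamic policy is `apply_update` arriving while `active` remains true, which is the behavior the session-preserving scheme above requires.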