What is an iApp?
iApp is a seriously cool, game-changing technology that was released in F5’s v11. There are so many benefits to our customers with this tool that I am going to break it down over a series of posts. Today we will focus on what it is.

Hopefully you are already familiar with the power of F5’s iRules technology. If not, here is a quick background. F5 products support a scripting language based on TCL. This language allows an administrator to tell their BIG-IP to intercept, inspect, transform, direct and track inbound or outbound application traffic. An iRule is the bit of code that contains the set of instructions the system uses to process data flowing through it, either in the header or payload of a packet. This technology allows our customers to solve real-time application issues, security vulnerabilities, and other problems that are unique to their environment or are time sensitive.

An iApp is like iRules, but for the management plane. Again, there is a scripting language with which administrators can build instructions the system will use. But instead of describing how to process traffic, in the case of an iApp it is used to describe the user interface and how the system will act on information gathered from the user. The bit of code that contains these instructions is referred to as an iApp or iApp template. A system administrator can use F5-provided iApp templates installed on their BIG-IP to configure a service for a new application. They will be presented with the text and input fields defined by the iApp author. Once complete, their answers are submitted, and the template implements the configuration. First an application service object (ASO) is created that ties together all the configuration objects which are created, like virtual servers and profiles. Each object created by the iApp is then marked with the ASO to identify its membership in the application for future management and reporting.

That about does it for what an iApp is… next up, how they can work for you.
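As a rough illustration (not from the original post), here is a minimal sketch of the shape of an iApp template: a presentation section defining the questions the administrator sees, and a TCL implementation that acts on the answers. The template, section, and variable names are hypothetical, the exact tmsh::create argument form varies by version, and a real template needs validation and many more options:

    sys application template simple_http_app {
        actions {
            definition {
                presentation {
                    # Defines the UI: one section with two questions
                    section pool {
                        string addr required
                        string port default "80"
                    }
                    text {
                        pool "Server Settings"
                        pool.addr "What is the IP address of the web server?"
                        pool.port "What port is the application listening on?"
                    }
                }
                implementation {
                    # TCL that runs when the answers are submitted. Objects
                    # created here are tied to the application service (ASO)
                    # automatically. Answers arrive as section__variable.
                    tmsh::create ltm pool ${tmsh::app_name}_pool \
                        members replace-all-with \{ $::pool__addr:$::pool__port \}
                    # 10.0.0.10:80 is a hypothetical VIP address
                    tmsh::create ltm virtual ${tmsh::app_name}_vs \
                        destination 10.0.0.10:80 pool ${tmsh::app_name}_pool
                }
            }
        }
    }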
BIG-IP Configuration Conversion Scripts

Kirk Bauer, John Alam, and Pete White created a handful of perl and/or python scripts aimed at easing your migration from some of the “other guys” to BIG-IP. While they aren’t going to map every nook and cranny of the configurations to a BIG-IP feature, they will get you well along the way, taking out as much of the human error element as possible. Links to the codeshare articles below.

Cisco ACE (perl)
Cisco ACE via tmsh (perl)
Cisco ACE (python)
Cisco CSS (perl)
Cisco CSS via tmsh (perl)
Cisco CSM (perl)
Citrix Netscaler (perl)
Radware via tmsh (perl)
Radware (python)

XML Threat Prevention
Where should security live? The divide between operations and application developers is pretty wide, especially when it comes to defining who should be ultimately responsible for application security. Mike Fratto and I have often had lively discussions (read: arguments) about whether security is the responsibility of the developer or the network and security administrators. It's wholly inappropriate to recreate any of these discussions here, as they often devolve to including the words your mother said not to use in public. 'Nuff said.

The truth is that when XML enters the picture, the responsibility for securing that traffic has to be borne by both the network/security administrators and the developers. While there is certainly good reason to expect that developers are doing simple security checks for buffer overflows, length restrictions on incoming data, and strong typing, the fact is that there are some attacks in XML that make it completely impractical to check for in the code. Let's take a couple of attack types as examples.

XML Entity Expansion

This attack is a million laughs, or at least a million or more bytes of memory. Applications need to parse XML in order to manipulate it, so the first thing that happens when XML hits an application is that it is parsed - before the developer even has a chance to check it. In an application server this is generally done before the arguments to the specific operation being invoked are marshaled - meaning it is the application server, not the developer, that is responsible for handling this type of attack. These messages can be used to force recursive entity expansion or other repeated processing that exhausts server resources. The most common example of this type of attack is the "billion laughs" attack, which is widely available. The CPU is monopolized while the entities are being expanded, and each entity takes up X amount of memory - eventually consuming all available resources and effectively preventing legitimate traffic from being processed. It's essentially a DoS (Denial of Service) attack. The payload looks something like this:

    <?xml version="1.0"?>
    <!DOCTYPE foo [
      <!ENTITY ha "Ha !">
      <!ENTITY ha2 "&ha; &ha;">
      <!ENTITY ha3 "&ha2; &ha2;">
      ...
      <!ENTITY ha128 "&ha127; &ha127;">
    ]>
    <foo>&ha128;</foo>

It is accepted that almost all traditional DoS attacks (ping of death, teardrop, etc...) should be handled by a perimeter security device - a firewall or an application delivery controller - so why should a DoS attack that is perpetrated through XML be any different? It shouldn't. This isn't a developer problem, it's a parser problem, and for the most part developers have little or no control over the behavior of the parser used by the application server. The application admin, however, can configure most modern parsers to prevent this type of attack, but that's assuming that their parser is modern and can be configured to handle it. Of course then you have to wonder what happens if that arbitrary limit inhibits processing of valid traffic? Yeah, it's a serious problem.

SQL Injection

SQL Injection is one of the most commonly perpetrated attacks via web-based applications. It consists basically of inserting SQL code into string-based fields which the attacker thinks (or knows) will be passed to a database as part of an SQL query. This type of attack can easily be accomplished via XML as well, simply by inserting the appropriate SQL code into a string element. Aha! The developer can stop this one, you're thinking. After all, the developer has the string and builds the SQL that will be executed, so he can just check for it before he builds the string and sends it off for execution.
While this is certainly true, there are myriad combinations of SQL commands that might induce the database to return more data than it should, or to return sensitive data not authorized to the user. This extensive list of commands and combinations of commands would need to be searched for in each and every parameter used to create an SQL query and on every call to the database. That's a lot of extra code and a lot of extra processing - which is going to slow down the application and impede performance. And when a new attack is discovered, each and every function and application needs to be updated, tested, and re-deployed. I'm fairly certain developers have better ways to spend their time than updating parameter checking in every function in every application they have in production. And we won't even talk about third-party applications and the dangers inherent in that scenario.

One of the goals of SOA is engendering reuse, and this is one of the best examples of taking advantage of reuse in order to ensure consistent behavior between applications and to reduce the lengthy development cycle required to update, test, and redeploy whenever a new attack is discovered. By placing the onus for keeping this kind of attack from reaching the server on an edge device such as F5's application firewall, updates to address new attacks are immediately applied to all applications, and there is no need to recode and redeploy applications.

Although there are some aspects of security that are certainly best left to the developer, there are other aspects of security that are better deployed in the network. It's the most effective plan in terms of effort, cost, and consistent behavior where applications are concerned.

Imbibing: Mountain Dew
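To give a flavor of what edge enforcement can look like, here is a minimal, hypothetical iRule sketch (not from the original post) that simply refuses XML POSTs carrying inline entity declarations - the crudest possible entity-expansion defense. A real deployment would rely on a full application firewall policy rather than a string match like this:

    when HTTP_REQUEST {
        # Only inspect requests that claim to carry XML
        if { [HTTP::method] eq "POST" &&
             [HTTP::header "Content-Type"] contains "xml" } {
            # Collect at most 1MB of payload for inspection;
            # Content-Length may be absent for chunked requests
            set len [HTTP::header "Content-Length"]
            if { $len eq "" || $len > 1048576 } { set len 1048576 }
            HTTP::collect $len
        }
    }
    when HTTP_REQUEST_DATA {
        # Inline entity declarations are the vehicle for expansion
        # attacks; legitimate clients rarely send them in a request body
        if { [string match -nocase "*<!entity*" [HTTP::payload]] } {
            HTTP::respond 413 content "XML entity declarations not allowed"
        } else {
            HTTP::release
        }
    }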
Stop Those XSS Cookie Bandits iRule Style

In a recent post, CodingHorror blogged about a story of one of his friends' attempts at writing his own HTML sanitizer for his website. I won't bother repeating the details, but it all boils down to the fact that his friend noticed users were logged into his website as him and hacking away with admin access. How did this happen? It turned out to be a Cross Site Scripting attack (XSS) that found its way around his HTML sanitizing routines. A user posted some content that included mangled JavaScript that made an external reference, sending all history and cookies of the current user's session to an alternate machine. CodingHorror recommended adding the HttpOnly attribute to Set-Cookie response headers to help protect these cookies from being able to make their way out to remote machines. Per his blog post:

HttpOnly restricts all access to document.cookie in IE7, Firefox 3, and Opera 9.5 (unsure about Safari)
HttpOnly removes cookie information from the response headers in XMLHttpObject.getAllResponseHeaders() in IE7. It should do the same thing in Firefox, but it doesn't, because there's a bug.
XMLHttpObjects may only be submitted to the domain they originated from, so there is no cross-domain posting of the cookies.

Whenever I hear about modifications made to backend servers, alarms start going off in my head and I get to thinking about how this can be accomplished on the network transparently. Well, if you happen to have a BIG-IP, then it's quite easy. A simple iRule can be constructed that will check all the response cookies and, if they do not already have the HttpOnly attribute, add it. I went one step further and added a check for the "Secure" attribute and added that one in as well for good measure.

    when HTTP_RESPONSE {
        foreach cookie [HTTP::cookie names] {
            set value [HTTP::cookie value $cookie];
            if { "" != $value } {
                set testvalue [string tolower $value]
                set valuelen [string length $value]
                #log local0. "Cookie found: $cookie = $value";
                switch -glob $testvalue {
                    "*;secure*" -
                    "*; secure*" { }
                    default { set value "$value; Secure"; }
                }
                switch -glob $testvalue {
                    "*;httponly*" -
                    "*; httponly*" { }
                    default { set value "$value; HttpOnly"; }
                }
                if { [string length $value] > $valuelen } {
                    #log local0. "Replacing cookie $cookie with $value"
                    HTTP::cookie value $cookie "${value}"
                }
            }
        }
    }

If you are only concerned with the Secure attribute, then you can always use the "HTTP::cookie secure" command, but as far as I can tell it won't include the HttpOnly attribute. So, if you determine that HttpOnly cookies are the way you want to go, you could manually configure these on all of your applications on your backend servers. Or... you could configure it in one place on the network. I think I prefer the second option.

-Joe
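A side note: more recent TMOS releases also expose a native HTTP::cookie httponly command alongside HTTP::cookie secure (an assumption worth verifying against the iRules documentation for your version), which would shrink the rule above to something like this sketch:

    when HTTP_RESPONSE {
        foreach cookie [HTTP::cookie names] {
            # Native attribute commands; verify availability on your version
            HTTP::cookie secure $cookie enable
            HTTP::cookie httponly $cookie enable
        }
    }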
2.5 bad ways to implement a server load balancing architecture

I'm in a bit of a mood after reading a Javaworld article on server load balancing that presents some fairly poor ideas on architectural implementations. It's not the concepts that are necessarily wrong; they will work. It's the architectures offered as a method of load balancing that made me do a double-take and say "What?"

I started reading this article because it was part 2 of a series on load balancing and this installment focused on application layer load balancing. You know, layer 7 load balancing. Something we at F5 just might know a thing or two about. But you never know where and from whom you'll learn something new, so I was eager to dive in and learn something. I learned something alright. I learned a couple of bad ways to implement a server load balancing architecture.

TWO LOAD BALANCERS?

The first indication I wasn't going to be pleased with these suggestions came with the description of a "popular" load-balancing architecture that included two load balancers: one for the transport layer (layer 4) and another for the application layer (layer 7).

In contrast to low-level load balancing solutions, application-level server load balancing operates with application knowledge. One popular load-balancing architecture, shown in Figure 1, includes both an application-level load balancer and a transport-level load balancer.

Even the most rudimentary, entry-level load balancers on the market today - software and hardware, free and commercial - can handle both transport and application layer load balancing. There is absolutely no need to deploy two separate load balancers to handle two different layers in the stack. This is a poor architecture, introducing unnecessary management and architectural complexity as well as additional points of failure into the network architecture. It's bad for performance because it introduces additional hops and points of inspection through which application messages must flow. To give the author credit, he does recognize this and offers up a second option to counter the negative impact of the "additional network hops."

One way to avoid additional network hops is to make use of the HTTP redirect directive. With the help of the redirect directive, the server reroutes a client to another location. Instead of returning the requested object, the server returns a redirect response such as 303.

I found it interesting that the author cited an HTTP response code of 303, which is rarely returned in conjunction with redirects. More often a 302 is used. But it is valid, if not a bit odd. That's not the real problem with this one, anyway. The author claims "The HTTP redirect approach has two weaknesses." That's true, it has two weaknesses - and a few more as well. He correctly identifies that this approach does nothing for availability and exposes the infrastructure, which is a security risk. But he fails to mention that using HTTP redirects introduces additional latency because it requires additional requests that must be made by the client (increasing network traffic), and that it is further incapable of providing any other advanced functionality at the load balancing point because it essentially turns the architecture into a variation of a DSR (direct server return) configuration.

THAT'S ONLY 2 BAD WAYS, WHERE'S THE .5?

The half bad way comes from the fact that the solutions are presented as a Java-based solution. They will work in the sense that they do what the author says they'll do, but they won't scale.
Consider this: the reason you're implementing load balancing is to scale, because one server can't handle the load. A solution that involves putting a single server - with the same limitations on connections and session tables - in front of two servers with essentially twice the capacity of the load balancer gains you nothing. The single server may be able to handle 1.5 times (if you're lucky) what the servers serving applications may be capable of, due to the fact that the burden of processing application requests has been offloaded to the application servers, but you're still limited in the number of concurrent users and connections you can handle because it's limited by the platform on which you are deploying the solution.

An application server acting as a cluster controller or load balancer simply doesn't scale as well as a purpose-built load balancing solution, because it isn't optimized to be a load balancer and its resource management is limited to that of a typical application server. That's true whether you're using a software solution like Apache mod_proxy_balancer or a hardware solution. So if you're implementing this type of a solution to scale an application, you aren't going to see the benefits you think you are, and in fact you may see a degradation of performance due to the introduction of additional hops, additional processing, and poorly designed network architectures.

I'm all for load balancing, obviously, but I'm also all for doing it the right way. And these solutions are just not the right way to implement a load balancing solution unless you're trying to learn the concepts involved or are in a computer science class in college. If you're going to do something, do it right. And doing it right means taking into consideration the goals of the solution you're trying to implement. The goals of a load balancing solution are to provide availability and scale, neither of which the solutions presented in this article will truly achieve.
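For what it's worth, the redirect approach criticized above boils down to something like this hypothetical iRule sketch (server names invented for illustration). Notice that once the 302 is sent, the client talks to the chosen server directly - which is exactly why the load balancing point can no longer provide availability, security, or any other advanced functionality:

    when RULE_INIT {
        # Hypothetical server list; the counter is per-TMM on a CMP
        # system, which is fine for illustration but not for production
        set static::servers { "http://app1.example.com" "http://app2.example.com" }
        set static::idx 0
    }
    when HTTP_REQUEST {
        # Crude round robin via redirect: pick the next server and send
        # the client there; we never see the rest of the conversation
        set static::idx [expr { ($static::idx + 1) % [llength $static::servers] }]
        HTTP::redirect "[lindex $static::servers $static::idx][HTTP::uri]"
    }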
Devops Proverb: Process Practice Makes Perfect

#devops Tools for automating - and optimizing - processes are a must-have for enabling continuous delivery of application deployments

Some idioms are cross-cultural and cross-temporal. They transcend cultures and time, remaining relevant no matter where or when they are spoken. These idioms are often referred to as proverbs, which carries with it a sense of enduring wisdom. One such idiom, "practice makes perfect", can be found in just about every culture in some form. In Chinese, for example, the idiom is apparently properly read as "familiarity through doing creates high proficiency", i.e. practice makes perfect.

This is a central tenet of devops, particularly where optimization of operational processes is concerned. The more often you execute a process, the more likely you are to get better at it and discover what activities (steps) within that process may need tweaking or changes or improvements. Ergo, optimization. This tenet grows out of the agile methodology adopted by devops: application release cycles should be nearly continuous, with both developers and operations iterating over the same process - develop, test, deploy - with a high level of frequency.

Eventually (one hopes) we achieve process perfection - or at least what we might call process perfection: repeatable, consistent deployment success. It is implied that in order to achieve this, many processes will be automated, once we have discovered and defined them in such a way as to enable them to be automated. But how does one automate a process such as an application release cycle? Business Process Management (BPM) works well for automating business workflows; such systems include adapters and plug-ins that allow communication between systems as well as people. But these systems are not designed for operations; there are no web server, database, or load balancer adapters for even the most widely adopted BPM systems. One such solution can be found in Electric Cloud with its recently announced ElectricDeploy.

Process Automation for Operations

ElectricDeploy is built upon a more well-known product from Electric Cloud (well, more well-known in developer circles, at least) known as ElectricCommander, a build-test-deploy application deployment system. Its interface presents applications in terms of tiers – but extends beyond the traditional three tiers associated with development to include infrastructure services such as – you guessed it – load balancers (yes, including BIG-IP) and virtual infrastructure. The view enables operators to create the tiers appropriate to applications and then orchestrate deployment processes through fairly predictable phases – test, QA, pre-production and production.

What's hawesome about the tool is the ability to control the process - to rollback, to restore, and even debug. The debugging capabilities enable operators to stop at specified tasks in order to examine output from systems, check log files, etc., to ensure the process is executing properly. While it's not able to perform "step into" debugging (stepping into the configuration of the load balancer, for example, and manually executing line by line changes), it can perform what developers know as "step over" debugging, which means you can step through a process at the highest layer and pause at break points, but you can't yet dive into the actual task.
Still, the ability to pause an executing process and examine output, as well as rollback or restore specific process versions (yes, it versions the processes as well, just as you'd expect), would certainly be a boon to operations in the quest to adopt tools and methodologies from development that can aid them in improving time and consistency of deployments. The tool also enables operations to determine what constitutes failure during a deployment. For example, you may want to stop and rollback the deployment when a server fails to launch if your deployment only comprises 2 or 3 servers, but when it comprises 1000s it may be acceptable that a few fail to launch. Success and failure of individual tasks as well as the overall process are defined by the organization and allow for flexibility. This is more than just automation, it's managed automation; it's agile in action; it's focusing on the processes, not the plumbing.

MANUAL still RULES

Electric Cloud recently (June 2012) conducted a survey on the "state of application deployments today" and found some not unexpected but still frustrating results, including that 75% of application deployments are still performed manually or with little to no automation. Automation may not be the goal of devops, but it is a tool enabling operations to achieve its goals, and thus it should be more broadly considered as standard operating procedure to automate as much of the deployment process as possible.

This is particularly true when operations fully adopts not only the premise of devops but the conclusion resulting from its agile roots. Tighter, faster, more frequent release cycles necessarily put an additional burden on operations to execute the same processes over and over again. Trying to manually accomplish this may be setting operations up for failure and leave operations focused more on simply going through the motions and getting the application into production successfully than on streamlining and optimizing the processes they are executing. Electric Cloud's ElectricDeploy is one of the ways in which process optimization can be achieved, and it justifies its purchase by operations by promising to enable better control over application deployment processes across development and infrastructure.

Devops is a Verb
1024 Words: The Devops Butterfly Effect
Devops is Not All About Automation
Application Security is a Stack
Capacity in the Cloud: Concurrency versus Connections
Ecosystems are Always in Flux
The Pythagorean Theorem of Operational Risk

Intro to Load Balancing for Developers – The Algorithms
If you’re new to this series, you can find the complete list of articles in the series on my personal page here.

If you are writing applications to sit behind a load balancer, it behooves you to at least have a clue what the algorithm your load balancer uses is about. We're taking this week's installment to just chat about the most common algorithms and give a plain-programmer description of how they work. While historically the algorithm chosen has been beyond the developers' control, you're the one that has to deal with performance problems, so you should know what is happening in the application's ecosystem, not just in the application. Anything that can slow your application down or introduce errors is something worth having reviewed.

For algorithms supported by the BIG-IP, the text here is paraphrased/modified versions of the help text associated with the Pool Member tab of the BIG-IP UI. If they wrote a good description and all I needed to do was programmer-ize it, then I used it. For algorithms not supported by the BIG-IP I wrote from scratch. Note that there are many, many more algorithms out there, but as you read through here you'll see why these (or minor variants of them) are the ones you'll see the most.

Plain Programmer Description: is not intended to say anything about the way any particular dev team at F5 or any other company writes these algorithms; it's just an attempt to put the process into terms that are easier for someone with a programming background to understand. Hopefully a successful attempt. Interestingly enough, I've pared down what BIG-IP supports to a subset. That means that F5 employees and aficionados will be going "But you didn't mention…!" and non-F5 employees will likely say "But there's the Chi-Squared Algorithm…!" (no, chi-squared is a theoretical distribution method I know of because it was presented as a proof for testing the randomness of a 20-sided die, ages ago in Dragon Magazine). The point being that I tried to stick to a group that builds on each other in some connected fashion. So send me hate mail… I'm good. Unless you can say more than 2-5% of the world's load balancers are running the algorithm, I won't consider that I missed something important. The point is to give developers and software architects a familiarity with core algorithms, not to build the world's most complete lexicon of algorithms.

Random: This load balancing method randomly distributes load across the servers available, picking one via random number generation and sending the current connection to it. While it is available on many load balancing products, its usefulness is questionable except where uptime is concerned - and then only if you detect down machines.

Plain Programmer Description: The system builds an array of servers being load balanced, and uses the random number generator to determine who gets the next connection… Far from an elegant solution, and most often found in large software packages that have thrown load balancing in as a feature.

Round Robin: Round Robin passes each new connection request to the next server in line, eventually distributing connections evenly across the array of machines being load balanced. Round Robin works well in most configurations, but could be better if the equipment that you are load balancing is not roughly equal in processing speed, connection speed, and/or memory.
Plain Programmer Description: The system builds a standard circular queue and walks through it, sending one request to each machine before getting to the start of the queue and doing it again. While I've never seen the code (or actual load balancer code for any of these, for that matter), we've all written this queue with the modulus function before. In school, if nowhere else.

Weighted Round Robin (called Ratio on the BIG-IP): With this method, the number of connections that each machine receives over time is proportionate to a ratio weight you define for each machine. This is an improvement over Round Robin because you can say "Machine 3 can handle 2x the load of machines 1 and 2", and the load balancer will send two requests to machine #3 for each request to the others.

Plain Programmer Description: The simplest way to explain this one is that the system makes multiple entries in the Round Robin circular queue for servers with larger ratios. So if you set ratios at 3:2:1:1 for your four servers, that's what the queue would look like - 3 entries for the first server, two for the second, one each for the third and fourth (a sketch of this repeated-entry queue appears at the end of this post). In this version, the weights are set when the load balancing is configured for your application and never change, so the system will just keep looping through that circular queue. Different vendors use different weighting systems - whole numbers, decimals that must total 1.0 (100%), etc. - but this is an implementation detail; they all end up in a circular queue style layout with more entries for larger ratings.

Dynamic Round Robin (called Dynamic Ratio on the BIG-IP): is similar to Weighted Round Robin; however, weights are based on continuous monitoring of the servers and are therefore continually changing. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time. This Application Delivery Controller method is rarely available in a simple load balancer.

Plain Programmer Description: If you think of Weighted Round Robin where the circular queue is rebuilt with new (dynamic) weights whenever it has been fully traversed, you'll be dead-on.

Fastest: The Fastest method passes a new connection based on the fastest response time of all servers. This method may be particularly useful in environments where servers are distributed across different logical networks. On the BIG-IP, only servers that are active will be selected.

Plain Programmer Description: The load balancer looks at the response time of each attached server and chooses the one with the best response time. This is pretty straight-forward, but can lead to congestion because response time right now won't necessarily be response time in 1 second or two seconds. Since connections are generally going through the load balancer, this algorithm is a lot easier to implement than you might think, as long as the numbers are kept up to date whenever a response comes through.

These next three I use the BIG-IP name for. They are variants of a generalized algorithm sometimes called Long Term Resource Monitoring.

Least Connections: With this method, the system passes a new connection to the server that has the least number of current connections. Least Connections methods work best in environments where the servers or other equipment you are load balancing have similar capabilities.
This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time. This Application Delivery Controller method is rarely available in a simple load balancer.

Plain Programmer Description: This algorithm just keeps track of the number of connections attached to each server, and selects the one with the smallest number to receive the connection. Like Fastest, this can cause congestion when the connections are all of different durations - like if one is loading a plain HTML page and another is running a JSP with a ton of database lookups. Connection counting just doesn't account for that scenario very well.

Observed: The Observed method uses a combination of the logic used in the Least Connections and Fastest algorithms to load balance connections to servers being load-balanced. With this method, servers are ranked based on a combination of the number of current connections and the response time. Servers that have a better balance of fewest connections and fastest response time receive a greater proportion of the connections. This Application Delivery Controller method is rarely available in a simple load balancer.

Plain Programmer Description: This algorithm tries to merge Fastest and Least Connections, which does make it more appealing than either one of the above alone. In this case, an array is built with the information indicated (how weighting is done will vary, and I don't know even for F5, let alone our competitors), and the element with the highest value is chosen to receive the connection. This somewhat counters the weaknesses of both of the original algorithms, but does not account for when a server is about to be overloaded - like when three requests to that query-heavy JSP have just been submitted, but not yet hit the heavy work.

Predictive: The Predictive method uses the ranking method used by the Observed method; however, with the Predictive method, the system analyzes the trend of the ranking over time, determining whether a server's performance is currently improving or declining. The servers in the specified pool with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. The Predictive methods work well in any environment. This Application Delivery Controller method is rarely available in a simple load balancer.

Plain Programmer Description: This method attempts to fix the one problem with Observed by watching what is happening with the server. If its response time has started going down, it is less likely to receive the packet. Again, no idea what the weightings are, but an array is built and the most desirable is chosen.

You can see with some of these algorithms that persistent connections would cause problems. Like Round Robin, if the connections persist to a server for as long as the user session is working, some servers will build a backlog of persistent connections that slow their response time. The Long Term Resource Monitoring algorithms are the best choice if you have a significant number of persistent connections. Fastest works okay in this scenario also if you don't have access to any of the dynamic solutions.

That's it for this week. Next week we'll start talking specifically about Application Delivery Controllers and what they offer - which is a whole lot - that can help your application in a variety of ways. Until then!
Don.
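As promised above, here is a plain TCL sketch (invented for illustration, not actual load balancer code) of the Weighted Round Robin repeated-entry circular queue:

    # Build the expanded circular queue from server/ratio pairs
    proc build_queue {ratios} {
        set queue {}
        foreach {server ratio} $ratios {
            # A server with ratio N gets N entries in the queue
            for {set i 0} {$i < $ratio} {incr i} {
                lappend queue $server
            }
        }
        return $queue
    }

    # Pick the next server, wrapping around with the modulus function
    proc pick {queue nextVar} {
        upvar $nextVar next
        set server [lindex $queue $next]
        set next [expr { ($next + 1) % [llength $queue] }]
        return $server
    }

    # Ratios of 3:2:1:1 yield a six-entry queue
    set queue [build_queue {server1 3 server2 2 server3 1 server4 1}]
    set next 0
    for {set i 0} {$i < 8} {incr i} {
        puts [pick $queue next]
    }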
Inside Look - PCoIP Proxy for VMware Horizon View

I sit down with F5 Solution Architect Paul Pindell to get an inside look at BIG-IP's native support for VMware's PCoIP protocol. He reviews the architecture and business value, and gives a great demo on how to configure BIG-IP. BIG-IP APM offers full proxy support for PC-over-IP (PCoIP), a leading virtual desktop infrastructure (VDI) protocol. F5 is the first to provide this functionality, which allows organizations to simplify their VMware Horizon View architectures. Combining PCoIP proxy with the power of the BIG-IP platform delivers hardened security and increased scalability for end-user computing. In addition to PCoIP, F5 supports a number of other VDI solutions, giving customers flexibility in designing and deploying their network infrastructure.

ps

Related:
F5 Friday: Simple, Scalable and Secure PCoIP for VMware Horizon View
Solutions for VMware applications
F5's YouTube Channel
In 5 Minutes or Less Series (24 videos - over 2 hours of In 5 Fun)
Inside Look Series
Life@F5 Series

The Drive Towards NFV: Creating Technologies to Meet Demand
It is interesting to see what is happening in the petroleum industry over time. I won't get into the political and social aspects of the industry, or this will become a 200-page dissertation. What is interesting to me is how the petroleum industry has developed new technologies and uses these technologies in creative ways to gain more value from the resources that are available. Drilling has gone from the simple act of poking a hole in the ground using a tool, to the use of drilling fluids, 'mud', to optimize the drilling performance for specific situations, to non-vertical directional drilling, where they can actually drill horizontally, and the use of fluids and gases to maximize the extraction of resources through a process called 'fracking'. What the petroleum industry has done is looked for and created technologies to extract further value from known resources that would not have been available with the tools previously at hand.

We see a similar evolution of the technologies used and value extracted by the communications service providers (CSPs). Looking back, CSPs usually delivered a single service, such as voice, over a dedicated physical infrastructure. Then it became important to deliver data services, and they added a parallel infrastructure to deliver that content. As costs started to become prohibitive to continue to support parallel content delivery models, the CSPs started looking for ways to use the physical infrastructure as the foundation and use other technologies to drive both voice and data to the customer. Frame relay (FR) and asynchronous transfer mode (ATM) technologies were created to allow for the separation of the traffic at layer 2 (the data link layer). The CSP was thus extracting more value from its physical infrastructure by delivering multiple services over it. Then the Internet came and things changed again. Customers wanted Internet access in addition to the voice and video services that they currently received. The CSPs evolved yet again, started looking at layer 3 (IP) differentiation, and laid this technology on their existing FR and ATM networks.

Today, mobile and fixed service providers are discovering that managing the network at the layer 3 level is no longer enough to deliver services to their customers, differentiate their offerings, and most importantly, support the revenue cost model as they continue to build and evolve their networks to new models such as 4G LTE wireless while customer usage patterns change. Voice services are not growing, while data services are increasing at an explosive rate. Also, the CSPs are finding that much of their legacy revenue streams are being diverted to over-the-top providers that deliver content from the Internet and do not deliver any revenue or value to the CSP.

There is Value in that Content

The CSPs are moving up the OSI network stack and looking to find value in the layer 4 through 7 content, delivering services that enhance specific types of content and allow subscribers to gain additional value through value added services (VAS) that can be targeted towards the subscribers and the content. This means that new technologies such as content inspection and traffic steering are necessary to leverage this function. Unfortunately, there is a non-trivial cost to giving the CSP the capability to deliver content- and subscriber-aware services: these services require significant memory and computing resources.
To offset these costs, as well as to introduce a more flexible, dynamic network infrastructure that is able to adapt to new services and evolving technologies, a consortium of CSPs has developed the Network Functions Virtualization (NFV) technology working group. As I mentioned in a previous blog, NFV is designed to virtualize network functions such as the MME, SBC, SGSN/GGSN, and DPI onto an open infrastructure using commercial, off-the-shelf (COTS) hardware. In addition, VAS solutions can leverage this architecture to enhance the customer experience. By using COTS hardware and virtual/software versions of these functions, the CSP gains a cost benefit and the network becomes more flexible and dynamic. It is also important to remember that one of the key components of the NFV standard is a mechanism to manage and orchestrate all of these virtualized elements while tying the network elements more closely to the business needs of the operator. Since the services are deployed in a flexible and dynamic way, it becomes possible to orchestrate the addition or removal of resources and services based on network analytics and policies. This flexibility allows operators to add and remove services and adjust capacity as needed, without the need for additional personnel and time for coordination. An agile infrastructure enables operators to roll out new services quicker to meet evolving market demands, and also to remove services which are not contributing to the company's bottom line or delivering a measurable benefit to the customer quality of experience, with minimal impact to the infrastructure or investment.

Technology to Extract Content and Value

Operators need to consider the four key elements to making the necessary application defined network (ADN) successful in an NFV-based architecture: Virtualization, Abstraction, Programmability, and Orchestration. Virtualization provides the foundation for that flexible infrastructure: it allows for the standardization of the hardware layer and is one of the key enablers for dynamic service provisioning. Abstraction is a key element because operators need to be able to tie their network services up into the application and business services they are offering to their customers, enabling their processes and the necessary orchestration. Programmability of the network elements and the NFV infrastructure ensures that the components being deployed can not only be customized and successfully integrated into the network ecosystem, but also adapted as the business needs and technology change. Orchestration is the last key element, and it is where operators will get some of their largest savings, by being able to introduce and remove services quicker and more broadly through automating service enablement on their network. This enables operators to adjust quicker to changing market needs while "doing more with less". As CSPs look to introduce NFV into their architectures, they need to consider these elements and look for vendors that can deliver these attributes. I will discuss each of these features in more detail in upcoming blog posts. We will look at how these features are necessary to deliver the NFV vision and what this means to the CSPs who are looking to leverage the technologies and architectures surrounding the drive towards NFV.
Ultimately, CSPs want an NFV orchestration system enabling the network to add and remove service capacity, on demand and without human intervention, as the traffic to those services ebbs and flows. This allows the operator to reduce their overall service footprint by re-using infrastructure for different services based upon their needs. F5 is combining these attributes in innovative ways to deliver solutions that enable CSPs to leverage the NFV design.

Demo of F5 utilizing NFV technologies to deliver an agile network architecture: Dynamic Service Availability through VAS bursting

SSL Renegotiation DOS attack – an iRule Countermeasure
The Attack

Earlier this year, a paper was posted to the IETF TLS working group outlining a very easy denial of service attack that a single client could use against a server.

http://www.ietf.org/mail-archive/web/tls/current/msg07553.html

The premise of the attack is simple: "An SSL/TLS handshake requires at least 10 times more processing power on the server than on the client". If a client machine and server machine were equal in RSA processing power, the client could overwhelm the server by sending ten times as many SSL handshake requests as the server could service. This vulnerability exists for all SSL negotiations; the only mitigation is the ratio between the two participants, which is why SSL acceleration is such a critical feature. Because BIG-IP uses state-of-the-art hardware crypto-processors, it is certainly not vulnerable to a single attack from a single client. However, it is quite conceivable that someone might very easily modify one of the botnet tools (such as the Low Orbit Ion Cannon that we saw used in the Wikileaks attacks) and thus the attack could become distributed.

The Tool

After the above paper was published, a French hacking group calling itself "The Hacker's Choice" (www.thc.org) published a simple yet effective proof-of-concept tool and released it at DC4420. The tool performs the attack against any server that allows SSL renegotiation.

http://www.thc.org/thc-ssl-dos/thc-ssl-dos-1.4.tar.gz

    % ./src/thc-ssl-dos 30.1.1.134 443
    [THC ASCII-art banner]
    http://www.thc.org
    Twitter @hackerschoice
    Greetingz: the french underground
    Handshakes 0 [0.00 h/s], 1 Conn, 0 Err
    Handshakes 417 [455.74 h/s], 37 Conn, 0 Err
    Handshakes 924 [515.36 h/s], 52 Conn, 0 Err
    Handshakes 1410 [486.44 h/s], 62 Conn, 0 Err
    Handshakes 1916 [504.41 h/s], 71 Conn, 0 Err
    ...

The tool itself is about 700 lines of readable C code. Actually, it looks better than your typical hack-tool, so I have to give "The Hacker's Choice" props on their craftsmanship. The attack tool ramps up to 400 open connections and attempts to do as many renegotiations on each connection as it can. On my dedicated test client, it comes out to 800 handshakes per second (or 2 per connection per second).

Moment of Irony

When you first run the tool against your BIG-IP virtual server, it might say "Server does not support SSL Renegotiation." That's because everyone, including F5, is still recovering from last year's SSL renegotiation vulnerability, and by default our recent versions disable SSL renegotiation. So in order to do any testing at all, you have to re-enable renegotiation. But this also means that by default, virtual servers (on 10.x) are already not vulnerable unless they've explicitly re-enabled renegotiation. The irony is that the last critical SSL vulnerability provides some protection against this new SSL vulnerability.

The iRule Countermeasure

Enter DevCentral. After setting up the attack lab, we asked Jason Rahm (blog) for his assistance. He put together a beautiful little iRule that elegantly defeats the attack. Its premise is simple: if a client connection attempts to renegotiate more than five times in any 60-second period, that client connection is silently dropped. By silently dropping the client connection, the iRule causes the attack tool to stall for long periods of time, fully negating the attack.
There should be no false positives dropped, either, as there are very few valid use cases for renegotiating more than once a minute.

The iRule

    when RULE_INIT {
        set static::maxquery 5
        set static::seconds 60
    }
    when CLIENT_ACCEPTED {
        set rand [expr { int(10000000 * rand()) }]
    }
    when CLIENTSSL_HANDSHAKE {
        set reqno [table incr "reqs$rand"]
        table set -subtable "reqrate:$rand" $reqno "ignored" indefinite $static::seconds
        if { [table keys -count -subtable "reqrate:$rand"] > $static::maxquery } {
            after 5000
            drop
        }
    }
    when CLIENT_CLOSED {
        table delete reqs$rand
        table delete -subtable reqrate:$rand -all
    }

With the iRule in place, you can see its effect within a few seconds of the test restarting.

    Handshakes 2000 [0.00 h/s], 400 Conn, 0 Err
    Handshakes 2000 [0.00 h/s], 400 Conn, 0 Err
    Handshakes 2000 [0.00 h/s], 400 Conn, 0 Err
    Handshakes 2000 [0.00 h/s], 400 Conn, 0 Err
    Handshakes 2000 [0.00 h/s], 400 Conn, 0 Err

The 400 connections each get their five renegotiations, and then the iRule waits five seconds (to ack any outstanding client data) before silently dropping the connection. The attack tool believes the connection is still open, so it stalls. Note that the test had to be restarted because the iRule doesn't apply to existing connections when it's attached to a virtual server. Take that into account if you are already under attack.

It's understandable if you are thinking "that's the coolest 20-line iRule I've ever seen, I wish I understood it better." Jason also provided a visual workflow to elucidate its mechanics.

[Figure: iRule DDOS countermeasure workflow]

Conclusion

At a meeting earlier this year here in Seattle, we were talking about the previous renegotiation flaw. The question was posed: "What is the next vulnerability that we're all going to slap our foreheads about?" This particular attack falls into that category. It's a simple attack against a known property of the protocol. Fortunately, BIG-IP can leverage its hardware offload or use countermeasures like this iRule to counter the attack. There are two take-aways here: first, even long-established and reviewed protocols like SSL/TLS can be used against you; and second, iRules are pretty sweet! And thanks again to Jason Rahm for his invaluable assistance!
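If you want to try the countermeasure yourself, remember that it only takes effect for connections established after it is attached. A hedged sketch of attaching it from tmsh, assuming you saved the iRule as ssl_renego_limit and your virtual server is named my_https_vs (both names are hypothetical):

    # Hypothetical names; substitute your own iRule and virtual server
    tmsh modify ltm virtual my_https_vs rules { ssl_renego_limit }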