rate shaping
I do not think that word means what you think it means
Greg Ferro over at My Etherealmind has a, for lack of a better word, interesting entry in his Network Dictionary on the term "Application Delivery Controller." He says:

Application Delivery Controller (ADC) - Historically known as a “load balancer”, until someone put a shiny chrome exhaust and new buttons on it and so it needed a new marketing name. However, the Web Application Firewall and Application Acceleration / Optimisation that are in most ADC are not really load balancing so maybe its alright. Feel free to call it a load balancer when the sales rep is on the ground, guaranteed to upset them.

I take issue with this definition primarily because an application delivery controller (ADC) differs from a load balancer in many ways, and most of them aren't just "shiny chrome exhaust and new buttons". He's right that web application firewalls and web application acceleration/optimization features are also included, but application delivery controllers do more than just load balancing these days. "Application delivery controller" is not just a "new marketing name"; it's a new name because "load balancing" doesn't properly describe the functionality of the products that fall under the ADC moniker today.

First, load balancing is not the same as layer 7 switching. The former is focused on distributing requests across a farm or pool of servers, whilst the latter is about directing requests based on application-layer data such as HTTP headers or application messages. An application delivery controller is capable of performing layer 7 switching; a simple load balancer is not. When the two are combined you get layer 7 load balancing, which is a very different beast from the simple load balancing offered in the past, and often offered today by application server clustering technologies, ESB (enterprise service bus) products, and solutions designed primarily for load balancing.
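The distinction is easy to see in a few lines of code. Below is a minimal, purely illustrative Python sketch (the pool names, addresses, and routing rules are invented for the example, not any product's API): a simple load balancer picks a server with no knowledge of the request, while layer 7 switching first selects a pool based on application-layer data, then balances within it.

```python
import itertools

# Hypothetical server pools; names and addresses are illustrative only.
POOLS = {
    "images": ["10.0.1.10", "10.0.1.11"],
    "api":    ["10.0.2.10", "10.0.2.11"],
    "web":    ["10.0.3.10", "10.0.3.11"],
}

_rr = itertools.cycle(POOLS["web"])

def l4_pick(_request):
    """Simple load balancing: round-robin across one pool.
    The content of the request is never inspected."""
    return next(_rr)

def l7_pick(request):
    """Layer 7 switching: choose a pool from application-layer data
    (here, the URL path and an HTTP header), then balance within it."""
    path = request.get("path", "/")
    content_type = request.get("headers", {}).get("Content-Type", "")
    if path.startswith("/images/"):
        pool = POOLS["images"]
    elif content_type.startswith("application/json"):
        pool = POOLS["api"]
    else:
        pool = POOLS["web"]
    # L7 switching + distribution within the pool = layer 7 load balancing.
    return pool[hash(path) % len(pool)]
```

The point of the sketch: `l4_pick` never looks past the connection, while `l7_pick` cannot work at all without parsing the application layer.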
Layer 7 load balancing is the purview of application delivery controllers, not load balancers, because it requires application fluency and run-time inspection of application messages - not packets, mind you, but messages. That's an important distinction, but one best left for another day. The core functionality of an application delivery controller is load balancing, as this is the primary mechanism through which high availability and failover are provided. But a simple load balancer does little more than take requests and distribute them based on simple algorithms; it does not augment the delivery of applications by offering additional features such as L7 rate shaping, application security, acceleration, message security, and dynamic inspection and manipulation of application data.

Second, a load balancer isn't a platform; an application delivery controller is. It's a platform to which tasks generally left to the application can be offloaded, such as cookie encryption and decryption, input validation, transformation of application messages, and exception handling. A load balancer can't dynamically determine the client's link speed, decide whether compression would improve or degrade performance, and then apply it (or not) based on that decision. A simple load balancer can't inspect application messages to determine whether one is a SOAP fault and, once it has determined that it is, execute logic that handles the exception.

An application delivery controller is the evolution of load balancing into something more: application delivery. If you really believe that an application delivery controller is just a marketing name for a load balancer, then you haven't looked into the differences, or into how an ADC can be an integral part of a secure, fast, and available application infrastructure in a way that load balancers never could.

Let me 'splain. No, there is too much. Let me sum up. A load balancer is a paper map.
An ADC is a Garmin or a TomTom.

QoS without Context: Good for the Network, Not So Good for the End User
#fasterapp #webperf #ado One of the most often mentioned uses of #OpenFlow revolves around QoS, which is good for network performance metrics but not necessarily good for application performance.

In addition to issues related to virtual machine provisioning and the dynamism inherent in cloud computing environments, QoS is likely the most often mentioned "use" of OpenFlow within a production network. Certainly quality of service issues are cropping up more and more as end-user performance climbs the priority stack for CIOs faced with increasing challenges to meet expectations. Remembering the history of QoS, however, one has to ask why anyone thinks it would solve performance issues any better today than it has in the past. Even if it becomes "flow-based" instead of "packet-based", is the end result really going to be a better, i.e. faster, end-user experience? Probably not. And here's why...

QoS techniques and technologies revolve around the ability to control the ingress and egress of packets. Whether QoS is applied at a flow or packet level is irrelevant; the underlying implementation comes down to prioritization of packets. This control over how packets flow through the network is available only via devices or systems that IT controls, i.e. within the confines of the data center. Furthermore, QoS is designed to address network-related issues such as congestion and packet loss that ultimately degrade the performance of a given application. Congestion and packet loss happen for a number of reasons, such as oversubscription of bandwidth, high utilization on systems (making them incapable of processing incoming packets efficiently), and misconfiguration of network devices.

Let's assume one (or more) of said conditions exists. Let's further stipulate that, using OpenFlow, the network has been dynamically adjusted to compensate. Can one assume from this that end-user performance has improved? Hint: the answer is no.
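To make concrete what "prioritization of packets" means, here is a toy egress scheduler in Python (the traffic classes and priority values are invented for illustration, not any device's configuration). Note what it can and cannot do: it reorders and delays the packets it sees, and nothing more - which is exactly the limit of QoS.

```python
import heapq
import itertools

# Illustrative DSCP-style classes: lower number = served first.
PRIORITY = {"voice": 0, "video": 1, "bulk": 2}

class PriorityScheduler:
    """Toy egress scheduler: packets dequeue strictly by class priority,
    FIFO within a class. It controls ordering on this device only; it
    has no visibility into conditions beyond it."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # preserves FIFO order within a class

    def enqueue(self, traffic_class, packet):
        # Unknown classes fall to the lowest (worst) priority.
        prio = PRIORITY.get(traffic_class, max(PRIORITY.values()))
        heapq.heappush(self._heap, (prio, next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None
```

Enqueue a bulk packet and then a voice packet, and the voice packet leaves first - but if the congestion is on the far side of the Internet, this reordering changes nothing the end user can perceive.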
The reason the answer is no is that network conditions are only one piece of the performance equation. QoS cannot address the other causes of poor performance: high load, poor external (Internet) network conditions, external bandwidth constraints, slow clients, mobile network issues, and inefficiencies in protocol processing*.

Ah, but what about using OpenFlow to dynamically adjust perimeter devices providing QoS on the egress-facing side of the network? Certain types of QoS - rate shaping in particular - can assist in improving protocol-related impediments. Bandwidth management (which has migrated from its own niche market to simply being attached to QoS) may also mitigate issues related to bandwidth-constrained connections, but only from the perspective of the pipe between the data center and the Internet. If the bandwidth constraint is on the other end of the pipe (the "last mile"), this technique will not improve performance, because the OpenFlow controller has no awareness of that constraint. In fact, an OpenFlow controller is going to be applying policies largely blind with respect to anything outside the data center.

ROOT CAUSES

When we start looking at the causes of poor end-user performance, we see that many of them lie outside the data center: the type of client, the client network, the type of content being delivered, the status of the server serving the content. All these factors make up context, and without visibility into that context it is impossible to redress the factors impeding performance. If you know the end-user device is a tablet and it's connecting over a mobile network, you know performance is most likely going to be improved by reducing the size of the content being delivered. Techniques like image optimization and minification, as well as caching, will improve performance. QoS techniques? Not so much, if at all. The problem is that QoS, like many attempts at improving performance, focuses on one small piece of the puzzle rather than on the whole.
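A context-aware policy, by contrast, starts from the client rather than the network. The sketch below is purely illustrative Python - the context keys and technique names are assumptions for the example, not any product's configuration - but it shows the shape of the decision: pick techniques based on who is asking and over what.

```python
def delivery_policy(context):
    """Choose delivery-optimization techniques from client context.
    The keys ('device', 'network', 'link_speed_kbps') and technique
    names are hypothetical, chosen only to illustrate the idea."""
    techniques = []
    if context.get("device") == "tablet" and context.get("network") == "mobile":
        # Constrained last mile: reduce the size of what is delivered.
        techniques += ["image-optimization", "minification", "caching"]
    if context.get("link_speed_kbps", 10_000) < 1_000:
        # Compression helps slow links, but its processing overhead can
        # outweigh the transfer savings on fast ones, so it is applied
        # only when the measured link speed warrants it.
        techniques.append("compression")
    return techniques
```

A fast desktop client gets no techniques at all, which is itself a context-driven decision; QoS alone has no way to express any of this.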
There is almost never a mention of the client-side factors, and not even so much as a head nod in the direction of the application, even though it's been shown that various applications have widely varying performance characteristics when delivered over different kinds of networks. Without context, QoS rarely achieves noticeable improvements in performance from the perspective of the end user. Context is necessary in order to apply the right techniques and policies at the right time to ensure optimal performance for the user, given the application being served. Context provides the insight, the visibility.

QoS, on its own, is not "the answer" to the problem of poorly performing applications (and it's even less useful in cloud computing environments, where the lack of control over the infrastructure required to implement it is problematic). It may be part of the solution, depending on what the problem may be, but it's just part of the solution. There are myriad techniques and technologies that can be applied to improve performance; success always depends primarily on applying the right solution at the right time. Doing that requires contextual awareness, as well as the ability to execute "any of the above" to redress the problems.

But would QoS improve, overall, the performance of applications? Sure - though perhaps only minimally and, to the end user, imperceptibly. The question becomes: is IT more concerned with metrics proving it is meeting SLAs for network delivery, or with improving the actual experience of the end user?

*Rate shaping (a QoS technique) does attempt to mitigate issues with TCP by manipulating window sizes and the timing of exchanges, which partially addresses this issue.
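The footnote's point can be made with back-of-the-envelope arithmetic: TCP can keep at most one receive window of data in flight per round trip, so a shaper that manipulates the advertised window caps a flow's throughput without dropping a single packet. A quick sketch (the window and RTT values are illustrative):

```python
def max_tcp_throughput_bps(window_bytes, rtt_seconds):
    """Upper bound on a single TCP flow's throughput: at most one
    receive window can be in flight per round trip, so the ceiling
    is window / RTT (converted here from bytes to bits)."""
    return window_bytes * 8 / rtt_seconds

# A default 64 KiB window over a 100 ms path caps the flow at
# roughly 5.2 Mbps, no matter how fat the pipe is:
full = max_tcp_throughput_bps(65_535, 0.100)

# A shaper advertising a halved window halves that ceiling,
# throttling the flow without any packet loss:
shaped = max_tcp_throughput_bps(32_768, 0.100)
```

This is why window manipulation is a gentler lever than dropping packets: the sender simply never has more data outstanding than the shaper permits.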
SDN, OpenFlow, and Infrastructure 2.0
OpenFlow Wiki: Quality of Service
The “All of the Above” Approach to Improving Application Performance
F5 Friday: F5 Application Delivery Optimization (ADO)
Capacity in the Cloud: Concurrency versus Connections
When Big Data Meets Cloud Meets Infrastructure
The HTTP 2.0 War has Just Begun

Rate Shaping: An Old Trick You Might Need. Soon.
It should be no surprise to anyone that the number of mobile devices is increasing at an astounding rate. In fact, according to Ericsson, mobile broadband subscriptions will double in 2011. Let's all just take a moment to ponder what that means for our worldwide infrastructure. Lots has been written about this topic from a theoretical viewpoint, but we're about to find out how flexible our infrastructures really are. If you have web servers or other resources on the Internet, some of those new mobile devices will be coming your way.

Let's take the worst-case scenario and assume that your mobile traffic will double. Since a successful company wants its customers to come interact with it more often, this is a really good thing. The problem, of course, is that too much of anything is bad. That rule applies to bandwidth consumption just as well as to everything else.

While there are a variety of topics wrapped up in this explosive growth, for this blog let's focus on one tiny bit of technology that has traditionally not seen a ton of uptake in the enterprise market, though service providers have made use of it: rate shaping. Yes, I know, you looked at it in 2000 and it wasn't ready; you looked at it in 2005 and it was better, but still not exactly what you were looking for. By 2010, you couldn't prioritize some traffic and not other traffic without causing pain to your users, and you could simply buy more bandwidth. Well, it's 2011. Can you potentially double your bandwidth?

Used in conjunction with other technologies like TCP optimization and compression, rate shaping can optimize your connection and reduce the amount of bandwidth you will require as the world gears up to multiple IPs per individual and true "always on" access, no matter which device is in your users' hands. Add in deduplication and other optimizations available in WAN Optimization Controllers, and you have all the data going in and out of your building highly optimized.
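If you want a feel for the mechanics, the classic building block of rate shaping is a token bucket per traffic class: a class may send only when it has accumulated enough tokens, refilled at its configured rate. The Python below is a toy sketch - the class names and rates are invented, and bear no relation to any product's actual configuration:

```python
import time

class TokenBucket:
    """Toy token-bucket shaper: tokens accrue at a fixed rate up to a
    burst capacity; sending a packet spends tokens equal to its size."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, size_bytes):
        now = time.monotonic()
        # Refill tokens for elapsed time, capped at burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True
        return False  # over the boundary: delay or drop, per policy

# Hypothetical per-class boundaries (rates in bytes/sec):
SHAPERS = {
    "public-video": TokenBucket(5_000_000, 500_000),  # guaranteed a generous share
    "bulk-backup":  TokenBucket(500_000, 50_000),     # throttled hard under load
}
```

The "turn traffic off completely when utilization gets too high" behavior mentioned below is just the degenerate case: set a class's rate to zero and every `allow()` call fails once its burst is spent.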
Of course rate shaping takes from one protocol or application to give to another, something my mother used to call "robbing Peter to pay Paul", but sometimes this is a viable answer, particularly if you have mission-critical traffic or something related to disaster recovery (like off-site backups or DC-to-DC replication) flowing over your Internet connection(s). Considering that all indications are your backend systems will be doing more over-the-Internet communications too, that's a whole lot of new traffic, and prioritization becomes more important. I'm not in a position to tell you how to prioritize your traffic, just that there's going to be more traffic, and you're going to want to prioritize it if you are successful enough to pull in your share.

And if you haven't looked into rate shaping in a while: in ADCs like F5's BIG-IP LTM, it has definitely grown up, offering you the ability to keep certain types of traffic within boundaries you define while turning other types of traffic off completely when utilization gets too high. It allows you to classify applications and protocols so that you can set policies based upon like or shared communication types. So you can say, for example, that your public-facing applications and videos must get bandwidth, while other protocols can lag or even get cut off when there is too much traffic on the WAN or public Internet connection. That's better than having all of your traffic time out, and certainly better than having customers drop connections. Think about it; the benefits might just outweigh the costs.

Related Articles and Blogs

Like a Matrushka, WAN Optimization is Nested.
How May I Speed and Secure Replication? Let Me Count The Ways
Load Balancers For Developers – ADCs WAN Optimization Functionality
Dear Data Center Guy
Alice In Wondercloud: The Bidirectional Rabbit Hole
Layer 4 versus Layer 7 Attack

DevCentral Top5 1/21/2011
Settling into the new year, there is goodness aplenty on DevCentral this week. Between group revamps, site improvements and maintenance (including trimming the page size down to a more svelte, downloadable size), and the general content screaming across the front page, there has been a lot to keep up on. Here are my Top 5 picks for the week to help you out:

Revisiting Hash Load Balancing and Persistence on BIG-IP LTM
http://bit.ly/h7etBK

As I may have hinted at last week, Jason had more tricks up his sleeve in regards to hash load balancing on the LTM. This week you get to see the full measure of his madness in all its graph-filled glory. What he's doing here is basically testing how even a distribution across the different members of a pool each type of hash will produce when used for load balancing. That's right, folks, he's finally answering the burning question: CRC32, MD5, SHA, or CARP? I know to some of you ungeeks out there this might not be as exciting, but to us card-carrying, calculator-slinging, code-dreaming geek types, this is wicked cool. To get a fair comparison, Jason whipped up a couple of scripts in Python and tmsh, an iRule for the testing results, and output some pretty graphs that show the results. Go take a read through and see if you can keep up with Jason's mad skills (here's a hint: most people can't), and get a look at the power of iRules and hash LB on the LTM.

10 Ways to HA (and counting): a treatise on BIG-IP high availability
http://bit.ly/hC2emp

Kevin Stuart, one of the engineers in the field here at F5, put up this interesting post on different ways that BIG-IP helps you maintain an HA environment. It caught my eye enough to make the Top5 because he goes through each of the "10"* (*note there are 14 in the "10 ways" list…) ways and explains them. It's an interesting walk through the many ways that we take for granted that these kinds of environments and systems can work to keep an application up and running.
Most of this stuff we don't even think about when relying on a device like LTM, but it's actually really cool to see things like session state sharing, session mirroring, and shared MAC addresses called out in a list of important features. It's doubly cool that he explains why. Take a read; this one's a light refresher course after the heavy science Jason dropped in the link above, but no less interesting.

Is There Such A Thing as a Safe Area of the Web?
http://bit.ly/gjqTSa

While Lori might not provide the answers to all of your security problems here, she certainly states a firm reminder, summed up in a sentence that I'll just quote, since I have no hope of saying it better myself: "Just because your house hasn't yet been broken into doesn't mean you stop locking your doors." In her post she addresses the question, posed by a member of the twitterverse, of whether they truly need anti-virus software anymore. They aren't getting any virus alerts, they're technologically savvy and careful what they browse and click, and they're up to date on all the latest patches… so is an active AV really needed? Well, no, I suppose technically it's not. Neither are seat-belts in a car, when you get right down to it, but I feel better knowing they're there if I need them unexpectedly. This one's a good reminder that even though you've gotten good at being as safe as you can on the web, there are many bad people doing many bad things, and you'd rather they didn't do them to your system. So stay safe, after you read the post of course.

Rate Shaping: An Old Trick You Might Need Soon.
http://bit.ly/epDN6U

Don put out an article this week that contains a single statistic that, if you stop and truly absorb it, is equal parts exciting and sobering: according to Ericsson, mobile broadband subscriptions will double in 2011.
While I take into account that this is a provider involved in the broadband world saying things about broadband subscriptions, I've heard similar things from many sources, and the numbers are all somewhere in that neighborhood. Think about that for a minute. Two times the mobile users, two times the mobile Internet traffic, two times the mobile user accounts, two times the authentications, SSL handshakes, etc. So either get busy doubling your infrastructure, or start thinking of ways to get the most out of what you already have. One of the many available avenues for doing just that is the subject of Don's post: rate shaping. This is an interesting read to get the ole' brain working on a problem that seems likely to rear its head as the year presses on and traffic grows.

20 Lines or Less #43 – Nesting, Rewriting Redirects and Auth
http://bit.ly/dKC0YU

Here we are again with the 20 Lines or Less. In case somehow you're not familiar with the format, the idea is to show off three examples of iRules doing the cool things that iRules can do, each in less than 21 lines of code. I love the series, and this week I've got more examples of iRules goodness for you to sample, share, implement, or twist to whatever your whims may be. This week I feature a couple of forum posts: one on properly nesting a switch inside an if, and one on how to properly rewrite redirects using a class and some string trickery, as well as a Tech Tip George wrote a while back on authentication via iRules that somehow never made it into the 20LoL. I hope you enjoy.

That does it for this week's Top5. Let me know if you've got any comments, questions, or otherwise. Otherwise, thanks for reading.

#Colin

How May I Speed and Secure Replication? Let Me Count the Ways.
Related Articles and Blogs:

SMB Storage Replication: Acceleration VS. More Bandwidth
Three Top Tips for Successful Business Continuity Planning
Data Replication for Backup Best Practices
Like a Matrushka, WAN Optimization is Nested
Load Balancing for Developers – ADC WAN Optimization Functionality