qos
API Request Throttling: A Better Option
This past week there's been some interesting commentary regarding Twitter's change to its API request throttling feature. Request throttling, often used to ensure QoS (Quality of Service) for a variety of network and application purposes, is Twitter's way of keeping the system from being so overwhelmed that it is forced to display the now (in)famous fail whale image. One of the things you can do with BIG-IP Local Traffic Manager (LTM) and iRules is request throttling.

Why would you want to let a mediating device like an application delivery controller handle request throttling? Because request throttling implemented on the server still requires the server to respond to the request, and the act of responding wastes some of the very resources you're trying to save by throttling in the first place. It's like taking two steps forward and one back. By letting the application delivery controller manage request throttling, you relieve the burden on the servers and free up resources so the servers can do what they're designed to do: serve content. Because an intermediary that is also a full proxy (like BIG-IP LTM) terminates the TCP connection on the client side, it never needs to bother the server when a client has exceeded its allotted requests.

Now you might be thinking that such a solution would be fine for an entire site, but Twitter (and others) throttle on a per-API-call basis, not site-wide; wouldn't a general solution stop people from even connecting to twitter.com? It depends on the implementation. In the case of BIG-IP and iRules, request throttling can be applied per virtual server (usually corresponding to a single "web site") or can get as granular as specific URIs. For a site with an API like Twitter's, the URIs generally correspond to its REST-based API calls. That means you can not only throttle requests in general, but also throttle based on specific API calls. If one API call is particularly resource-intensive, you can limit it more aggressively than those that are less resource-intensive: querying might be limited to 40 requests per hour while updating is limited to 30, or vice versa. The ability to inspect, detect, and direct messages lets you get as specific as you want - or need - according to the needs of your application and your specific architecture.

It gets really interesting when you consider that you could make further decisions based on parameters such as a specific user and the application function. Because an intelligent application delivery controller can inspect messages on both request and reply, you can use information returned from a specific request to control how future requests are handled, whether permanently or for a specified time interval. This kind of functionality is also excellent for service providers moving to tiered services, i.e. "premium (paid) services". By indicating the level of service that should be provided to a given user, usually by setting a cookie, BIG-IP can dynamically apply the appropriate request throttling to that user's service. The exciting part is that this can be done transparently, without modifying the application itself, which means changes in business models can be implemented faster and with less interruption.

As an example, here's a simple iRule that throttles HTTP requests to 3 per second per client.
Simple, effective, transparent to the servers. Thanks to our guys in the field for writing this one and sharing!

    when HTTP_REQUEST {
        # Note the current second; counting is done per second, per connection
        set cur_time [clock seconds]
        if { [HTTP::request_num] > 1 } {
            # Still within the same second as the start of the window?
            if { $cur_time == $start_time } {
                if { $reqs_sec > 3 } {
                    # Over the limit: respond directly from the BIG-IP so the
                    # server never sees the request
                    HTTP::respond 503 "Retry-After" "2"
                }
                incr reqs_sec
                return
            }
        }
        # First request on this connection, or a new second: reset the window
        set start_time $cur_time
        set reqs_sec 0
    }

It doesn't make sense to implement request throttling inside an application when the reason you're implementing it is that the servers are overwhelmed. Let an intermediary, an application delivery controller, do it for you.
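The tier-based throttling described above can be sketched in the same style. The following is purely illustrative and not part of the original example: the cookie name, the per-tier limits, and the use of the session table (available in TMOS v10 and later) for per-client counting are all assumptions.

    when HTTP_REQUEST {
        # Hypothetical tier cookie set by the application
        if { [HTTP::cookie value "service_tier"] eq "premium" } {
            set limit 10
        } else {
            set limit 3
        }
        # Fixed one-second window keyed on client IP and the current second;
        # each entry expires on its own shortly after its second ends
        set key "throttle:[IP::client_addr]:[clock seconds]"
        set count [table incr $key]
        table timeout $key 2
        if { $count > $limit } {
            HTTP::respond 503 "Retry-After" "1"
        }
    }

As with the original iRule, the decision is made, and the 503 served, entirely on the BIG-IP; the servers are never consulted.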
Load balance based on QoS marking or DSCP value (i.e. AF21)

Hi, I have an F5 acting as a gateway and load balancing outbound traffic to ISPs. Normally I have several pools and choose a pool by inspecting the source IP, e.g. if IP = a, use pool a. The question is: can the F5 choose a pool by inspecting a QoS marking (e.g. AF21)? Please note that it's just an L4 virtual server, since this is outbound internet load balancing and we don't do SSL offloading, so we can't inspect anything at L7. Thank you
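One possible approach, offered as a sketch rather than a verified answer: because the DSCP mark lives in the IP header, an iRule can read it even on a pure L4 virtual server via IP::tos, which returns the full ToS byte. AF21 is DSCP 18, which appears as ToS value 72 (18 shifted left two bits). The pool names here are placeholders.

    when CLIENT_ACCEPTED {
        # DSCP AF21 (18) shows up as ToS byte 72
        if { [IP::tos] == 72 } {
            pool pool_isp_a
        } else {
            pool pool_default
        }
    }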
Configuring QoS for priority queueing and bandwidth reservation

Hi folks, can someone explain how to set up QoS on a BIG-IP to get priority queueing and bandwidth reservation working for packets that are already QoS-tagged (L2 CoS, L3 DSCP)? The scenario is the following: network packets will be marked by the endpoints (or other devices) at Layer 2 (CoS) and Layer 3 (DSCP) according to the following scheme:

[table: CoS/DSCP marking scheme, not reproduced]

That being said, the packets that enter the BIG-IP already carry their QoS tags. The BIG-IP shall do two things with the packets:

1. The QoS tags must not be changed and must retain their original values.
2. For every application/traffic class, the BIG-IP shall reserve a specific bandwidth and assign a priority queue according to the QoS class.

On the switches and routers the following QoS policies have been implemented, which ideally should be the same for the BIG-IP:

[table: per-class QoS policies, not reproduced]

For the first point (leaving the QoS tags unchanged) I think the solution is to configure the UDP/TCP profiles with IP ToS/Link QoS: Pass Through. But how do I accomplish the second point (bandwidth reservation and priority queueing)? In the Network tab I found Class of Service and Rate Shaping; in the Acceleration tab there is Bandwidth Controllers. Which feature is the right one, and how does it need to be configured for this scenario? Thx a lot!
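For the second point, one direction worth exploring (a sketch under assumptions, not a verified configuration): rate classes created under Network > Rate Shaping carry both a base rate (the reservation) and a queue discipline, and an iRule can steer connections into them based on the incoming ToS byte. The class names below are hypothetical and would have to be created first; the Pass Through profile setting should leave the marks themselves untouched.

    when CLIENT_ACCEPTED {
        # IP::tos returns the full ToS byte (DSCP shifted left two bits):
        # EF = DSCP 46 -> 184, AF31 = DSCP 26 -> 104
        set tos [IP::tos]
        if { $tos == 184 } {
            rateclass rate_voice
        } elseif { $tos == 104 } {
            rateclass rate_video
        } else {
            rateclass rate_default
        }
    }

Bandwidth Controllers, the other feature mentioned, lean toward per-user or per-session limits (static or dynamic policies), whereas rate shaping classes map more directly to the per-traffic-class reservation and queueing described here.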
ToS getting reset to 0 when egress from LTM

Scenario: Microsoft Lync front-end servers. Their gateway is on the LTM. Client requests to the front end are balanced by virtual servers on a variety of ports. The Lync servers are configured to mark their SSL/TLS SIP traffic (port 5061) as AF31. From packet captures, I have found that the markings are intact while traversing the link to the LTM back-end network. However, when capturing on the front-end interfaces, the DSCP has been reset to 0. I have a TCP profile set on the virtual server listening on 5061, with the options enabled to pass through ToS and QoS. What else am I missing here? I found a bug that looks somewhat related, but I am not running a SIP profile and this traffic is TCP. http://support.f5.com/kb/en-us/solutions/public/14000/000/sol14019.html
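Not an answer to the root cause, but a low-risk way to narrow down where the mark disappears: a small diagnostic iRule that logs the ToS byte seen on each side of the full proxy. AF31 is DSCP 26, which should appear as ToS value 104.

    when CLIENT_ACCEPTED {
        # ToS byte on the client-side flow
        log local0. "client-side ToS: [IP::tos] (client [IP::client_addr])"
    }
    when SERVER_CONNECTED {
        # ToS byte on the server-side flow; comparing the two shows on
        # which side of the proxy the AF31 mark (ToS 104) is being lost
        log local0. "server-side ToS: [IP::tos] (server [IP::server_addr])"
    }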
Traffic Shaping outbound to the client from APM

So I know, and have implemented in a lab, traffic shaping from the VPN client to the APM server, which works quite well and should shape large file transfers inbound. Is there a way to do the opposite, i.e. traffic-shape toward the client so users don't go pulling down large files from the corporate network and effectively consume the internet bandwidth? I would like to say: user group A can have 5 Mbps, user group B can have 10 Mbps, etc. Cheers!
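One avenue worth testing, sketched under assumptions rather than offered as a confirmed recipe: attach a dynamic Bandwidth Controller policy per session once the access policy completes, choosing the policy by group membership. The BWC policy names and the session variable holding group membership are assumptions that would need to match your environment.

    when ACCESS_ACL_ALLOWED {
        # Hypothetical session variable; adjust to wherever your access
        # policy stores group membership (e.g. an AD memberOf lookup)
        set groups [ACCESS::session data get "session.ad.last.attr.memberOf"]
        if { $groups contains "GroupA" } {
            # bwc_5mbps / bwc_10mbps are hypothetical dynamic BWC policies
            BWC::policy attach bwc_5mbps
        } elseif { $groups contains "GroupB" } {
            BWC::policy attach bwc_10mbps
        }
    }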
QoS without Context: Good for the Network, Not So Good for the End user

#fasterapp #webperf #ado One of the most often mentioned uses of #OpenFlow revolves around QoS. Which is good for network performance metrics, but not necessarily good for application performance.

In addition to issues related to virtual machine provisioning and the dynamism inherent in cloud computing environments, QoS is likely the most often mentioned "use" of OpenFlow within a production network. Certainly quality of service issues are cropping up more and more as end-user performance climbs the priority stack for CIOs faced with increasing challenges to meet expectations. Remembering the history of QoS, however, one has to ask why anyone thinks it would solve performance issues any better today than it has in the past. Even if it becomes "flow"-based instead of packet-based, is the end result really going to be a better, i.e. faster, end-user experience? Probably not. And here's why.

QoS techniques and technologies revolve around the ability to control the ingress and egress of packets. Whether that control is applied at the flow or the packet level is irrelevant; the underlying implementation comes down to prioritization of packets. This control over how packets flow through the network is only available via devices or systems that IT controls, i.e. within the confines of the data center. Furthermore, QoS is designed to address network-related issues such as congestion and packet loss that ultimately degrade the performance of a given application. Congestion and packet loss happen for a number of reasons, such as oversubscription of bandwidth, high utilization on systems (making them incapable of processing incoming packets efficiently), and misconfiguration of network devices.

Let's assume one or more of those conditions exists. Let's further stipulate that, using OpenFlow, the network has been dynamically adjusted to compensate. Can one assume from this that end-user performance has improved? Hint: the answer is no. The reason is that network conditions are only one piece of the performance equation. QoS cannot address the other causes of poor performance: high load, poor external (Internet) network conditions, external bandwidth constraints, slow clients, mobile network issues, and inefficiencies in protocol processing*.

Ah, but what about using OpenFlow to dynamically adjust perimeter devices providing QoS on the egress-facing side of the network? Certain types of QoS - rate shaping in particular - can assist with protocol-related impediments. Bandwidth management, too (which has migrated from its own niche market to simply being attached to QoS), may mitigate issues related to bandwidth-constrained connections - but only from the perspective of the pipe between the data center and the Internet. If the bandwidth constraint is on the other end of the pipe (the "last mile"), this technique will not improve performance, because the OpenFlow controller has no awareness of that constraint. In fact, an OpenFlow controller is going to be applying policies largely blind with respect to anything outside the data center.

ROOT CAUSES

When we start looking at the causes of poor end-user performance, we see that many of them are outside the data center: the type of client, the client network, the type of content being delivered, the status of the server serving the content. All these factors make up context, and without visibility into context, it is impossible to redress the factors impeding performance.
If you know the end-user device is a tablet, and it's connecting over a mobile network, you know performance is most likely going to be improved by reducing the size of the content being delivered. Techniques like image optimization and minification, as well as caching, will improve performance. QoS techniques? Not so much, if at all.

The problem is that QoS, like many attempts at improving performance, focuses on one small piece of the puzzle rather than on the whole. There is almost never a mention of client-side factors, and not even so much as a head nod in the direction of the application, even though it's been shown that applications have widely varying performance characteristics when delivered over different kinds of networks. Without context, QoS rarely achieves noticeable improvements in performance from the perspective of the end user. Context is necessary in order to apply the right techniques and policies at the right time to ensure optimal performance for the user, given the application being served. Context provides the insight, the visibility. QoS on its own is not "the answer" to the problem of poorly performing applications (and it's even less useful in cloud computing environments, where the lack of control over the infrastructure required to implement it is problematic). It may be part of the solution, depending on what the problem is, but it's only part. There are myriad techniques and technologies that can be applied to improve performance; success always depends on applying the right solution at the right time. To do that requires contextual awareness - as well as the ability to execute "any of the above" to redress the problems.

Would QoS improve the overall performance of applications? Sure - though perhaps only minimally and, to the end user, imperceptibly. The question becomes: is IT more concerned with metrics proving it is meeting SLAs for network delivery, or with improving the actual experience of the end user?

*Rate shaping (a QoS technique) does attempt to mitigate issues with TCP by manipulating window sizes and the timing of exchanges, which partially addresses this issue.

SDN, OpenFlow, and Infrastructure 2.0
OpenFlow Wiki: Quality of Service
The "All of the Above" Approach to Improving Application Performance
F5 Friday: F5 Application Delivery Optimization (ADO)
Capacity in the Cloud: Concurrency versus Connections
When Big Data Meets Cloud Meets Infrastructure
The HTTP 2.0 War has Just Begun
Rate Shaping: An Old Trick You Might Need. Soon.

It should be no surprise to anyone that the number of mobile devices is increasing at an astounding rate. In fact, according to Ericsson, mobile broadband subscriptions will double in 2011. Let's all just take a moment to ponder what that means for our worldwide infrastructure. Lots has been written about this topic from a theoretical viewpoint, but we're about to find out how flexible our infrastructures really are.

If you have web servers or other resources on the Internet, some of those new mobile devices will be coming your way. Let's take the worst-case scenario and assume that your mobile traffic will double. Since a successful company wants its customers to come interact with it more often, this is a really good thing. The problem, of course, is that too much of anything is bad. That rule applies to bandwidth consumption just as well as to everything else.

While there are a variety of topics wrapped up in this explosive growth, for this blog let's focus on one tiny bit of technology that has traditionally not seen a ton of uptake in the enterprise market, though service providers have made use of it: rate shaping. Yes, I know - you looked at it in 2000 and it wasn't ready; you looked at it in 2005 and it was better, but still not exactly what you were looking for; by 2010, you couldn't prioritize some traffic and not other traffic without causing pain to your users, and you could simply buy more bandwidth. Well, it's 2011 - can you potentially double your bandwidth?

Used in conjunction with other technologies like TCP optimization and compression, rate shaping can optimize your connection and reduce the amount of bandwidth you will require as the world gears up to multiple IPs per individual and true "always on" access, no matter which device is in your users' hands. Add in deduplication and other optimizations available in WAN Optimization Controllers, and you have all the data going in and out of your building highly optimized.

Of course rate shaping takes from one protocol or application to give to another, something my mother used to call "robbing Peter to pay Paul", but sometimes this is a viable answer - particularly if you have mission-critical traffic or something related to disaster recovery (like off-site backups or DC-to-DC replication) flowing over your Internet connection(s). Considering that all indications are your back-end systems will be doing more over-the-Internet communication too, that's a whole lot of new traffic, and prioritization becomes more important. I'm not in a position to tell you how to prioritize your traffic, just that there's going to be more traffic, and you're going to want to prioritize it if you are successful enough to pull in your share.

And if you haven't looked into rate shaping in a while: in ADCs like F5's BIG-IP LTM, it has definitely grown up, offering you the ability to keep certain types of traffic within boundaries you define, while turning other types of traffic off completely when utilization gets too high. It allows you to classify applications and protocols so that you can set policies based on like or shared communication types. So you can say, for example, that your public-facing applications and videos must get bandwidth, while other protocols can lag or even get cut off when there is too much traffic on the WAN or public Internet connection. That's better than having all of your traffic time out, and certainly better than having customers drop connections.
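As a concrete, purely illustrative example of that kind of classification, here's a minimal sketch assuming rate classes named video_rate and default_rate have already been created under Network > Rate Shaping; the class names and the URI test are assumptions, not a prescribed configuration.

    when HTTP_REQUEST {
        # Keep public-facing video inside its own rate shaping class so it
        # can't starve the rest of the link; everything else gets the default
        if { [HTTP::uri] ends_with ".mp4" } {
            rateclass video_rate
        } else {
            rateclass default_rate
        }
    }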
Think about it: the benefits might just outweigh the costs.

Related Articles and Blogs
Like a Matrushka, WAN Optimization is Nested
How May I Speed and Secure Replication? Let Me Count The Ways
Load Balancers For Developers – ADCs WAN Optimization Functionality
Dear Data Center Guy
Alice In Wondercloud: The Bidirectional Rabbit Hole
Layer 4 versus Layer 7 Attack
Multiple Stream Protocols, eBooks, And You.

eBook readers are an astounding thing, if you really stop and think about it. Prior to their creation, how could you reasonably have hundreds or thousands of books, all the notes you took, the highlighting you wanted to do, and your current page in each book, all stored together in one easy-to-use place? We have a room that is a library, with shelf upon shelf of books, and more bookshelves throughout the house with more books. And do you think where you last left off in those books is remembered? Sure, some of them will retain bookmarks, but not automatically: you have to physically put the bookmark into the book and then hope that no one else messes with it. Lori and I have very similar tastes in reading and share almost 100% of the books in the house, which means inevitably someone's page or quote marker gets lost. Not with eBooks. We use Kindles, and all the books I read show up in her archive to read, and all the books she reads show up in mine. My notes are mine, her notes are hers. All at the same time. No confusion at all.

The revolution in reading that eBook readers have enabled is not on the "uber-fast" pace I would have expected, simply because of the cost of entry. Buy a book to read today for $8 USD, or scrounge $100 to $500 USD to purchase a reader? For lots of people tight on cash, there is no choice there. The big-name publishers haven't helped, either: I'm not going to pay full book price for a book I already own just for the right to put it on my eReader; I'll just pick up the paper copy, thanks. But adoption is still moving along at a rapid pace, because demand for one small tablet device that contains tons of books was unknown until it was real, and now that it's real, demand is growing.

The same is true for stream protocols, that is, protocols that bundle multiple streams together into a single connection. From Java to VDI, these protocols are growing because they encapsulate the entire communications thread and can optimize strictly based upon whatever it is they're transporting. In the case of SPDY (Google's multiplexing protocol, used by Amazon's Silk browser) or VDI, they're transporting an awful lot, often in two-way communications. And yet, like eBook readers, technology has come far enough that they do so pretty darned well. The real difference between these protocols and TCP or HTTP is that they allow multiple message streams within a single connection, always remembering where each is and detecting lost data and which streams it impacts... much like an eBook remembering your notes.

[Image: One corner of our library]

And they're growing in popularity. For Virtual Desktop Infrastructure, shared protocols are standard. For Amazon, SPDY capability is assumed on the server (or SPDY wouldn't be an option), though it won't be used if the client can't support it. For Java, support of the IETF Stream Control Transmission Protocol (SCTP) is completely optional... for the developer. Since these protocols don't impact the end user in any noticeable way, they will continue to gain popularity as a way to multiplex several related functions over a single connection. And you should be aware of that, because if you do any load balancing or own any tool that uses packet inspection, you'll want to check with your vendor about what they support or intend to support. It's passingly difficult, for example, to load balance SPDY unless the load balancer has special features to do so. The reason is simple: the current world of TCP and HTTP has a source and a target, but under SPDY you have a source and multiple targets.
If your device doesn’t know how to crack open SPDY and see what it’s trying to do, the device can’t very well route it to the best server to handle the request. That is true of all of the multiple stream protocols, and as they gain in popularity, or when you start supporting one on your servers, you’ll want to make sure your infrastructure can deal with them intelligently. Much like back seven or so years ago, when content based routing hit the “what about encryption?” snag, you will see similar issues pop up with these protocols. If you’re using QoS functionality, for example, what if you limited video bandwidth to make certain your remote backup could complete in a timely manner, but users are streaming video over SPDY? How do you account for that without limiting all of SPDY? Well you don’t, unless your device is smart enough to handle the protocol. That doesn’t even touch the potential for prioritization that SPDY allows… If your device can parse it. My Kindle currently holds more books than those shelves. So pay attention to what’s happening in the space – when you have time – and perk up your ears for the impact on your infrastructure if someone wants to bring a product in-house that utilizes one of these protocols. They’re very cool, but don’t get caught unaware. Of course now that I’ve equated them to eBook readers, perhaps you’ll think of them whenever you read . And just like my kindle holds as many books as we have in our large library (my Kindle is around 500 right now, no idea how many are in the library, but 500 is a big number), those Multiple Stream Protocols could hold more connections than your other servers are seeing. On the bright side, at least today, IT has to make a positive decision to use a product that requires these protocols, so you’ll get a chance to do some homework. Related Articles and Blogs F5 Friday: Doing VDI, Only Better Oops! HTML5 Does It Again The Conspecific Hybrid Cloud F5 and vCloud Director: A Yellow Bricks How-to SPDY Wikipedia entry SCTP RFC Microsoft RDP description186Views0likes0Comments