RTT (Round Trip Time): Aka – Why bandwidth doesn’t matter
A great post over on Ajaxian got me thinking today: why is it that whenever you hear people talking about speed on the internet, they use a single metric? Whether they’re discussing the connection in the datacenter, their residential DSL, or the wireless connection on their mobile device, everyone references the bandwidth of their connection when talking about speed. “Oh, I just got a 20 Mb/s connection at home, it’s blazing fast!” That’s all well and good, and 20 Mb/s is indeed a lot of throughput for a residential connection. Unfortunately for Joe Average, the vast majority of the population wouldn’t know what the heck to DO with 20 Mb/s of download speed, and even worse than that…they would likely see absolutely zero increase in performance while doing the one thing most people use their connection for the most: browsing the web.

No one ever seems to bother mentioning the true culprit behind slow (or fast) web browsing performance: latency, measured as RTT (Round Trip Time). This is the time it takes for your system to make a request to the server and receive a response back, i.e. one complete request loop. This is where the battle for speed when browsing the web is won and lost. A round trip is measured in milliseconds (ms) and represents how much time the trip from you to the server and back will take regardless of file size (this is important later in the discussion). Each connection you open with the server for an additional request must take at least this long; the time it takes to download each file is added on top of the RTT.

“Impossible!” you say. “Clearly going from a 10 Mb/s connection to the new, fast, fancy (expensive?) 20 Mb/s connection my provider is so proud to now be offering will double my speed on the web! 20 is twice as much as 10, you dullard!” Oh, how wrong you are, dear uneducated internet user.
Allow me to illuminate the situation via a brief discussion of what actually occurs when you browse the web. We’ll skip some of the fine-grained details and all the DNS bits, but here’s the general idea. Whenever you request a web page, you send a request to a server. That server, assuming you’re an allowed user, sends a response. Assuming you don’t get redirected and are actually served a page, the server sends you a generally simple HTML page: a single, small file containing the HTML code that tells your browser how to render the site. Your computer receives the file, and your browser goes to work doing exactly that, rendering the HTML.

Up to this point people tend to understand the process, at least in broad strokes. What happens next is what catches people out, I think. Once your browser starts rendering the HTML, it is not done loading the page or making requests to the server, not by a long shot. You still haven’t downloaded any of the images or scripts; the references to all of that are contained in the HTML. So as your browser renders the HTML for the given site, it begins sending requests to the server asking for those bits of content. It makes a new request for each and every image on the page, as well as any other file it needs (script files, CSS files, included HTML files, etc.).

Here are the two main points that need to be understood when discussing bandwidth vs. RTT in regards to page load times:
1.) The average web page has over 50 objects that must be downloaded to complete the rendering of a single page (reference: http://www.websiteoptimization.com/speed/tweak/average-web-page/).
2.) Browsers cannot (generally speaking) request all 50 objects at once. They will request between 2-6 (again, generally speaking) objects at a time, depending on browser configuration.
This means that to receive the objects necessary for an average web page, you will have to wait for around 25 round trips to occur, maybe even more. Assuming a reasonably low 150ms average RTT, that’s a full 3.75 seconds of page load time before counting the time to download a single file. That’s just the time it takes for the network communication to happen to and from the server.

Here’s where the bandwidth vs. RTT discussion takes a turn decidedly in favor of RTT. The file size of most of the files involved in web browsing is so minute that bandwidth really isn’t an issue. You’re talking about downloading 30-60 tiny files (60 KB-ish on average). Even on a 2 Mb/s connection, which would be considered extremely slow by today’s standards, each of these files would be downloaded in a tiny fraction of a second. Since you can’t download more than a few at a time, you couldn’t even make full use of a 2 Mb/s connection in most situations. So how do you expect going from 10 Mb/s to 20 Mb/s to actually increase the speed of browsing the web when you couldn’t even saturate a 2 Mb/s connection? The answer is: you shouldn’t. Sure, if you were downloading huge files then bandwidth would be king, but for many small files in series it does almost nothing. You still have to open 50 new connections, each of which has a built-in 150ms of latency that can’t be avoided before the download even begins. However, if you could lower your latency, the RTT from you to the server, from 150ms down to 50ms, suddenly you’re shaving a full 2.5 seconds off of the inherent delay you’re dealing with on each page load. Talk about snappier page loads…that’s a huge improvement.

Now of course I realize that there are lots of things in place to make latency and RTT less of an issue: advanced caching, pre-rendering of content where applicable so browsers don’t have to wait for ALL the content to finish downloading before the page starts rendering, etc.
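The arithmetic above is easy to put into a throwaway model. Here is a hedged back-of-the-envelope sketch; the object count, object size, parallelism, and RTT figures are the rough numbers from this post, not measurements, and real browsers pipeline and reuse connections in ways this ignores:

```python
def page_load_estimate(objects=50, rtt_ms=150, parallel=2,
                       obj_kb=60, bandwidth_mbps=10):
    """Toy page-load model: objects are fetched `parallel` at a time,
    and each batch of requests costs one round trip on top of the
    raw transfer time for the bytes themselves."""
    batches = -(-objects // parallel)                  # ceiling division
    latency_s = batches * rtt_ms / 1000.0              # time spent waiting on round trips
    transfer_s = objects * obj_kb * 8 / (bandwidth_mbps * 1000.0)
    return latency_s + transfer_s

# Doubling bandwidth barely moves the needle; cutting RTT does:
#   10 Mb/s, 150 ms RTT -> 3.75 s latency + 2.4 s transfer = 6.15 s
#   20 Mb/s, 150 ms RTT -> 3.75 s latency + 1.2 s transfer = 4.95 s
#   10 Mb/s,  50 ms RTT -> 1.25 s latency + 2.4 s transfer = 3.65 s
```

Even in this crude model, halving the transfer time saves 1.2 seconds while cutting RTT by two-thirds saves 2.5 seconds, and the RTT share only grows as objects shrink.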
Those are all great, and they help alleviate the pain of higher-latency connections, but the reality is that for today’s internet users, bandwidth is very rarely a concern when simply browsing the web. Adding more bandwidth will not, in almost all cases, increase the speed with which you can load websites. Bandwidth is king, of course, for multi-tasking on the web. If you’re the type to stream a video while downloading audio while uploading pictures while browsing the web while playing internet-based games while running a fully functioning (and legal) torrent server out of your house…well then…you might want to stock up on bandwidth. But don’t let yourself be fooled into thinking that paying for more bandwidth in and of itself will speed up internet browsing when that’s the only task you’re performing.

#Colin

In Replication, Speed isn’t the Only Issue
In the US, many people watch an entire season of NASCAR without ever really paying attention to the racing. They are fixated on seeing a crash, and at the speeds NASCAR races average – 81 mph on the most complex track to 188 mph on the least curvy – they’re likely to get what they’re watching for. But that misses the point of the races. The merging of man and machine to react at lightning speed to changes in the environment is what the races are about. Of course speed figures in, but it is not the only issue. Mechanical issues, and the dreaded “other driver,” are things every driver on the track must watch for. (Images from NASCAR.com)

I’ve been writing a whole lot about remote replication and keeping systems up-to-date over a limited WAN pipe, but in all of those posts I’ve only lightly touched upon some of the other very important issues, because first and foremost in most datacenter or storage managers’ minds is “how fast can I cram out a big update, and how fast can I restore if needed?” But the other issues are more in-your-face, so in the interest of not being lax, I’ll hit them a little more directly here.

The cost of remote replication and backups is bandwidth. Whether that bandwidth is taken in huge bursts or leached from your WAN connection in a steady stream is merely a question of implementation. You have X amount of data to move in Y amount of time before the systems on the other end are no longer current enough to be useful. Some systems copy changes as they occur (largely application-based replication that resembles, or is actually called, Continuous Data Protection); some systems (think traditional backups) run the transfers in one large lump at a given time of day. Both move roughly the same amount of data; the only variable is the level of impact on your WAN connection.
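That “X amount of data in Y amount of time” constraint reduces to simple arithmetic. A hedged sketch (the 500 GB nightly delta and 6-hour window are my illustrative numbers, not from this post):

```python
def required_mbps(data_gb, window_hours):
    """Minimum sustained link rate (Mb/s) needed to move `data_gb`
    of changed data within a replication window, ignoring protocol
    overhead, retransmits, and competing traffic."""
    megabits = data_gb * 8 * 1000          # decimal GB -> megabits
    return megabits / (window_hours * 3600)

# e.g. 500 GB of nightly deltas in a 6-hour backup window needs
# roughly 185 Mb/s of sustained throughput before any overhead,
# which is exactly why the size of the pipe becomes the cost driver.
```

Running the same numbers against a trickle-style CDP stream just spreads that load across 24 hours instead of 6, which is the whole burst-versus-stream trade-off above.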
There are good reasons to implement both: a small, steady stream of data is unlikely to clog your WAN connection and will keep you closest to up-to-date, while traditional backups can be scheduled so that at peak times they use no bandwidth whatsoever, utilizing the connection when there is not much other usage. Of course, your environment is not so simple. There is always other usage, and if you’re a global organization, “peak time” becomes “peak times” in a very real sense as the sun travels around the globe and different people come online at different times. This has implications for both types of remote replication, for even the CDP style uses bandwidth in bursty bits. When you hit a peak time, changes to databases and files also peak. This can effectively throttle your connection by increasing replication bandwidth at the same time that normal usage is increasing its own bandwidth needs.

The obvious answer to this dilemma is the same answer that is obvious for every “the pipe is full” problem: get a bigger connection. But we’ve gone over this one before; bigger connections are a monthly fee, and the larger you go, the larger the hike in price. In fact, because price grows nearly exponentially with capacity, so does the spike in your bill, and that’s something most of us can’t just shell out for. So the obvious answer is often a dead end. Not to mention that the smaller the city your datacenters are in, the harder it is to get more bandwidth in a single connection. This is improving in some places, but is still very much the truth in many smaller metropolitan areas.

So what is an IT admin to do? This is where WAN Optimization Controllers come into the game. Standard disclaimer: F5 plays in this space with our WOM module.
Many users approach WAN Optimization products from the perspective of cramming more through the pipe, which most are very good at. But often the need is not for a bigger pipe; it is for a more evenly utilized pipe, or one that can differentiate between the traffic that absolutely must get through (like replication and web store orders) and traffic that doesn’t have to (like YouTube streams to employees’ desks). If you could allocate bandwidth to data going through the pipe in such a way that you tagged and tracked the important data, you could reduce the chance that your backups are invalid due to network congestion, and improve the responsiveness of backups and other critical applications simply by rating them higher and allocating bandwidth to them. Add WAN Optimization-style on-the-fly compression and deduplication, and you’re sending less data over the pipe while dedicating bandwidth to it. Leaving more room for other applications while guaranteeing that critical ones get the time they need is a powerful combination.

Of course, bandwidth allocation requires both the science of a good, solid product and the art of knowing your own organization. Only you know what is critical and how much of your pipe that critical data needs. You can get help making these determinations, but in the end your staff has the knowledge necessary to make a go of it. But think about it: your replication taking 20-50% (or less; there are lots of variables in this number) of its current bandwidth requirements and being more reliable. Even if nothing in your organization runs one tiny smidgen faster (and that is highly unlikely if you’re using a WAN Optimization Controller), that’s a win in overall circuit usage. And that’s huge. Like I’ve said before, don’t buy a bigger pipe; use your connection more intelligently. Not all WAN Optimization products offer bandwidth allocation, so check with your vendor.
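The guarantee-plus-sharing idea described above can be sketched in a few lines. This is a toy model, not how WOM or any particular product implements it; the class names, guarantees, and weights are purely illustrative:

```python
def allocate(link_mbps, classes):
    """Toy bandwidth allocator: each traffic class (listed highest
    priority first) gets its guaranteed minimum, then any leftover
    capacity is shared in proportion to the class weights."""
    alloc, remaining = {}, link_mbps
    for name, guarantee, _ in classes:
        granted = min(guarantee, remaining)   # honor guarantees in priority order
        alloc[name] = granted
        remaining -= granted
    total_weight = sum(w for _, _, w in classes) or 1
    for name, _, weight in classes:
        alloc[name] += remaining * weight / total_weight
    return alloc

shares = allocate(100, [("replication", 40, 3),
                        ("web-orders", 30, 2),
                        ("best-effort", 0, 1)])
# replication keeps its 40 Mb/s floor and takes half the 30 Mb/s slack
```

The point of the sketch is the shape of the policy: critical traffic is protected by a floor, and congestion only squeezes the best-effort class.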
Or call us; we’ve got it all built in, because WOM runs on TMOS, and all LTM functionality comes with the package. Once you’ve cleared away the mechanical failures and the risks of collision, then, unlike a NASCAR driver, you can focus on speed without having to live with the risk. Maybe that’s why they’re famous and we’re geeks ;-). And no, sorry, I’m not a NASCAR fan. Just a geek with Google.

Related Articles and Blogs
NASCAR.com
Remote Backup and the Massive Failure
IT and Data: If Not Me Then Who? If Not Now, Then When?
How May I Speed and Secure Replication? Let Me Count the Ways.
Informatica: Data Integration In and For the Cloud

Automatically detecting client speed
We used to spend a lot of cycles worrying about detecting user agents (i.e., browsers) and redirecting clients to pages written specifically for each browser. You know, back when browser incompatibility was a way of life. Yesterday. Compatibility is still an issue, but most web developers either use third-party JavaScript libraries to handle detection and incompatibility issues or avoid the particular features that cause problems.

One thing still seen at times, however, is the “choose high bandwidth or low bandwidth” entry page, particularly on sites laden with streaming video and audio, whose playback is highly sensitive to the effects of jitter and thus needs a fatter pipe to stream over. Web site designers include the “choose your speed” page out of necessity, because they can’t reliably determine client speed. Invariably, some user on a poor connection is going to choose high bandwidth anyway, and then e-mail or call to complain about poor service. Because that’s how people are. So obviously we still have a need to detect client speed, but the code and method of doing so in the web application would be prohibitively complex and consume time and resources better spent elsewhere. But we’d still like to direct the client to the appropriate page without asking, because we’re nice that way, or more likely because we just want to avoid the phone call later. That would be a huge motivator for me, but I’m like that. I hate phones.

Whatever the reason, detecting client speed is valuable for directing users to appropriate content as well as driving other functionality, such as compression. Compression is itself a resource-consuming function, and applying it in some situations can actually degrade performance, effectively negating the improvement in response time gained by decreasing the size of the data to be transferred.
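Once the platform hands you a measured round-trip time, the steering decision itself is just a threshold, and the same measurement feeds connection-quality statistics. A hedged Python sketch of both ideas; the thresholds, bucket names, and page paths are illustrative assumptions, not from any F5 product:

```python
from collections import Counter

def uri_for_client(rtt_ms, slow_threshold_ms=1000):
    """Send clients with a high measured RTT to a lightweight page;
    return None to leave the requested URI untouched."""
    return "/slowsite.html" if rtt_ms >= slow_threshold_ms else None

def rtt_bucket(rtt_ms):
    """Coarse buckets for connection-quality statistics."""
    if rtt_ms < 100:
        return "fast"
    if rtt_ms < 1000:
        return "ok"
    return "slow"

# Tallying observed RTTs over time tells you whether a
# "choose your speed" page is even worth keeping:
stats = Counter(rtt_bucket(r) for r in [42, 95, 180, 640, 1500])
```

If the histogram shows nearly everyone lands in one bucket, you can drop the choice page entirely and serve a single experience.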
If you've got an intelligent application delivery platform in place, you can automatically determine client speed and direct requests based on that speed without needing to ask the client for input. Using iRules, just grab the round-trip time (or bandwidth) and rewrite the URI accordingly: when HTTP_REQUEST { if { [TCP::rtt] >= 1000 } { HTTP::uri "/slowsite.html" } } If you don't want to automatically direct the client, you could use this information to add a message to your normal "choose your bandwidth" page that lets the client know their connection isn't so great and perhaps they should choose the lower-bandwidth option. This is also good for collecting statistics, if you're interested, on the types of connections your customers and users are on. This can help you make a decision regarding whether you even need that choice page, and maybe lead to only supporting one option - making the development and maintenance of your site and video/audio all that much more streamlined.200Views0likes0CommentsThe Cloud and The Consumer: The Impact on Bandwidth and Broadband
Cloud-based services for all things digital will either drive – or die by – bandwidth.

Consumers, by definition, consume. In the realm of the Internet, they consume far more than they produce. Or so it’s been in the past. Broadband connectivity across all providers has long been offered as asymmetric network feeds because that mirrored reality: an HTTP request is significantly smaller than its corresponding response, and in general web-based activity is heavily biased toward fat download and thin upload speeds. The term “broadband” is really a misnomer, as it focuses only on the download speed and ignores the very narrow band of a typical consumer’s upload speed. Cloud computing, or to be more accurate, cloud-hosted services aimed at consumers, may very well change the status quo by necessity. As providers continue to push the notion of storing all things digital “in the cloud,” network providers must consider the impact on themselves, and on the satisfaction of their customer base with performance over their network services.

SPEED MATTERS

Today we’re hearing about the next evolutionary step in Internet connectivity services: wideband. It’s a magnitude faster than broadband (enabled by the DOCSIS 3.0 standard) and it’s being pushed heavily by cable companies. Those with an eye toward the value proposition will quickly note that the magnitude of growth is nearly entirely on download speeds, with very little attention to growth on the upstream side of the connection. A fairly standard “wideband” package from provider Time Warner Cable, for example, touts “50 Mbps down X 5 Mbps up.” (DSL Reports, 2011) Unfortunately, that’s not likely enough to satiate the increasing need for upstream bandwidth created by the “market” for sharing high-definition video, large data sets, real-time video conferencing (hang out in Google+, anyone?)
and the push to store all things digital in “the cloud.” It’s suggested that “these activities require between 10 and 100 mbps upload and download speed.” (2010 Report on Internet Speeds in All 50 States, Speed Matters)

Wideband is certainly a step in the right direction; Speed Matters also reported that in 2010:

The median download speed for the nation in 2010 was 3.0 megabits per second (mbps) and the median upload speed was 595 kilobits per second (kbps) (1,000 kilobits equal 1 megabit). These speeds are only slightly faster than the 2009 speedmatters.org results of 2.5 mbps download and 487 kbps upload. In other words, between 2009 and 2010, the median download speed increased by only 0.5 mbps (from 2.5 mbps to 3.0 mbps), and the average upload speed barely changed at all (from 487 kbps to 595 kbps).

You’ll note that upload speeds are still being reported in kbps, which even after conversion is significantly below the 10 Mbps threshold desired for today’s cloud and video-related activities. Even “wide”band offerings fall short of the suggested 10 Mbps upload speeds.

WHERE DOES THAT LEAVE US?

This leaves us in a situation in which either Internet providers must narrow the gap between up- and down-stream speeds, or cloud-based service providers may find their services failing to win adoption in the consumer market. Consumers, especially the up-and-coming “digital” generations, are impatient. Unwilling to wait more than a few seconds, they are quick to abandon services that do not meet their exacting view of how fast the Internet should be. Other options include a new focus for web and WAN optimization vendors: the client. Desktop and mobile clients for WAN optimization solutions that leverage deduplication and compression techniques to improve performance over bandwidth-constrained connections may be one option, and it may be the best option for cloud-based service providers to avoid the middleman and the increased costs he would likely charge to loosen bandwidth constraints.
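To see why those upload numbers matter for cloud storage, a quick hedged calculation helps; the 50 GB photo-and-video library is my example, not a figure from the report:

```python
def upload_hours(size_gb, uplink_mbps):
    """Hours to push `size_gb` to a cloud service at a sustained
    uplink rate, ignoring protocol overhead and provider throttling."""
    megabits = size_gb * 8 * 1000          # decimal GB -> megabits
    return megabits / uplink_mbps / 3600

# A 50 GB library at the 2010 median upload of 0.595 Mb/s takes
# roughly 187 hours (nearly 8 days of continuous uploading);
# at the suggested 10 Mb/s it drops to about 11 hours.
```

The same library downloads in well under an hour on a 50 Mbps wideband feed, which is the asymmetry problem in a nutshell.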
Another truth of consumerism is that while we want it faster, we don’t necessarily want to pay for the privilege. A client/server-based WAN optimization solution bypasses the Internet service provider, allowing the cloud-based service provider to deploy a server-side WAN optimization controller and a client-side WAN optimization endpoint, using deduplication and compression techniques to transfer data to and from the provider more effectively, with better performance and reliability. This isn’t as easy as it sounds, however, as it requires a non-trivial amount of work on the part of the provider to deploy and manage both the server- and client-side components. That said, the investment may be well worth the increased adoption among consumers, especially if the provider in question is banking on a cloud-based services offering as the core value proposition of its business.

F5 Friday: Protocols are from Venus. Data is from Mars.
Like Load Balancing WAN Optimization is a Feature of Application Delivery
The Application Delivery Deus Ex Machina
Optimize Prime: The Self-Optimizing Application Delivery Network
Top-to-Bottom is the New End-to-End
Some Services are More Equal than Others
HTTP Now Serving … Everything