web accelerator

The BYOD That is Real.
Not too long ago I wrote about VDI and BYOD, and how their hype cycles were impacting IT. In that article I was pretty dismissive of the corporate-wide democratization of IT through BYOD, and I stand by that. Internally, it is just not a realistic idea unless and until management toolsets converge. But that’s internally. Externally, we have a totally different world.

If you run a website-heavy business like banking or sales, you’re going to have to deal with the proliferation of Internet-enabled phones and tablets, because they will hit your websites, and customers will expect them to work. Some companies – media companies tend to do a lot of this, for example – will ask you to download their app to view web pages. That’s ridiculous; just display the page. But some companies – again, banks are a good example – have valid reasons to want customers to use an app to access their accounts. The upshot is that any given app will have to support at least two platforms today, and that guarantees nothing a year from now. But it does not change the fact that one way or another, you’re going to have to support these devices over the web.

There are plenty of companies out there trying to help you. Appcelerator, for example, offers a cross-platform development environment that translates JavaScript into native Objective-C or Java. There are UI design tools available on the web that can output both formats but are notoriously short on source code and custom graphics – still, good for prototyping. And these environments allow you to choose an HTML5 app, a native app, or a hybrid of the two, letting staff choose the best solution for the problem at hand.

And then there is the network. It is not just a case of delivering a different format to the device; it is a case of optimizing that content for delivery to devices with smaller memory space, slower networks, and slower CPU speeds. That’s worth thinking about. There’s also the security factor.
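One concrete piece of that factor – keeping credentials off the device – can start with something as small as how the session cookie is built. A hypothetical sketch (the cookie name and helper are illustrative, not any product’s API): omitting Expires/Max-Age makes the cookie session-only, so a lost phone doesn’t carry a long-lived login around.

```python
# Hypothetical sketch: a Set-Cookie value for a session token that the browser
# will NOT write to persistent storage. No Expires/Max-Age = session cookie;
# Secure and HttpOnly keep it off cleartext connections and away from scripts.
def session_cookie(token: str) -> str:
    return f"session={token}; Secure; HttpOnly; Path=/"

header = session_cookie("abc123")
print(header)  # session=abc123; Secure; HttpOnly; Path=/
```

This is one layer only, of course – the server still has to expire the session on its side.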
Mobile devices are far easier to misplace than a desktop, and customers are not likely to admit their device is stolen until they’ve looked everywhere they might have left it. In the case (again) of financial institutions, if credentials are cached on the device, this is a recipe for disaster. So it is not only picking a platform and an application style; it is coding to the unique characteristics of the mobile world.

Of course, optimization is best handled at the network layer by products like our WebAccelerator, because that is what they do, and they’re very good at optimizing content based upon the target platform. Security, as usual, must be handled in several places. Checking that the device is not in a strange location (as I talked about here) is a good start, but not allowing the username and password to be cached on the device is huge too.

So while you are casting a skeptical look at BYOD inside your organization, pay attention to customers’ device preferences. They’re hitting the web on mobile devices more and more each month, and their view of your organization will be hugely impacted by how your site and/or apps respond. So invest the time and money, and be there for them, so that they’ll come back to you. Or don’t. Your competitors would like that.

There is more to it than performance.
Did you ever notice that sometimes, “high efficiency” furnaces aren’t? That some things the furnace just doesn’t cover – like the quality of your ductwork, for example? The same is true of a “high performance” race car. Yes, it is VERY fast, assuming a nice long flat surface for it to drive on. Put it on a dirt road in the rainy season and, well, it’s just a gas hog. Or worse, a stuck car. I could continue the list – a “high energy” employee can be relatively useless if they are assigned tasks at which brainpower, not activity rate, determines success – but I’ll leave it at those three; I think you get the idea.

The same is true of your network. Frankly, increasing your bandwidth in many scenarios will not yield the results you expected. Oh, it will improve traffic flow, and overall the performance of apps on the network will improve; the question is “how much?” It would be reasonable – or at least not unreasonable – to expect that doubling Internet bandwidth should stave off problems until you double bandwidth usage. But oftentimes the problems are with the apps we’re overloading the network with. Sometimes, it’s not the network at all.

Check the ecosystem, not just the network. When I was the Storage and Servers Editor over at NWC, I reviewed a new (at the time) HP server that was loaded. It had a ton of memory, a ton of cores, and could make practically any application scream. It even had two gigabit NICs in it. But they weren’t enough. While I had almost nothing bad to say about the server itself, I did observe in the summary of my article that the network was now officially the bottleneck. Since the machine had high-speed SAS disks, disk I/O was not as big a deal as it traditionally has been; high-speed cached memory meant memory I/O wasn’t a problem at all; and multiple cores meant you could cram a ton of processing power in. But team those two NICs and you’d end up with slightly less than 2 Gigabits of network throughput.
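That ceiling is easy to put into numbers; a quick back-of-the-envelope check (the VM count below is my own illustrative assumption, not from the review):

```python
# Two teamed gigabit NICs, best case: 2 Gbit/s shared by all traffic, in and out.
link_gbps = 2.0
mbytes_per_sec = link_gbps * 1000 / 8   # bits to bytes, decimal megabytes
print(f"{mbytes_per_sec:.0f} MB/s total")          # 250 MB/s at 100% saturation

# Carve that up among VMs on the same host (count is an assumption):
vms = 8
print(f"{mbytes_per_sec / vms:.2f} MB/s per VM")   # 31.25 MB/s each
```

Real saturation never hits 100%, so the per-VM numbers in practice are lower still.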
Assuming 100% saturation, that was really less than 250 Megabytes per second, and that counts both in and out. For query-intensive database applications or media streaming servers, that just wasn’t keeping pace with the server. Now here we are, six or so years later, and similar servers are in use all over the globe… running VMs. Meaning that several copies of the OS are now carving up that throughput.

So start with your server. Check it first if the rest of the network is performing; it might just be the problem. And while we’re talking about servers, the obvious one needs to be mentioned: don’t forget to check CPU usage. You just might need a bigger server or load balancing – or, these days, fewer virtual machines on your server. Heck, as long as we’re talking about servers, let’s consider the app too. For a variety of reasons, the last few years have seen less focus on apps whose performance is sucktacular, but it still happens. Worth looking into if the server turns out to be the bottleneck.

Old gear is old. I was working on a network that deployed an ancient Cisco switch. The entire network was 1 Gig, except for that single switch. But tracing wires showed that switch to lie between the Internet and the internal network. A simple error, easily fixed, but an easy error to make in a complex environment, and certainly one to be aware of. That switch was 10/100 only. We pulled it out of the network entirely, and performance instantly improved.

There’s necessary traffic, and then there’s… Not all of the traffic on your network needs to be there. And all that does need to be there doesn’t have to be so bloated. Look for sources of UDP broadcasts. More often than you would think, applications broadcast things you don’t care about. Cut them off. For other traffic, well, there is Application Delivery Optimization. ADO is improving application delivery through a variety of technical solutions, but we’ll focus on those that make your network and your apps seem faster.
You already know about them – compression, caching, image optimization… and in the case of back-end services, de-duplication. But have you considered what they do other than improve perceived or actual performance?

Free Bandwidth
Anything that reduces the size of application data leaving your network also reduces the burden on your Internet connection. This goes without saying, but as I alluded to above, we sometimes overlook the fact that it is not just application performance we’re impacting, but the effectiveness of our connections – connections that grow more expensive by leaps and bounds each time we have to upgrade them. While improving application performance is absolutely a valid reason to seek out ADO, delaying or eliminating the need to upgrade your Internet connection(s) is another. Indeed, in many organizations it is far easier to do TCO justification based upon deferred upgrades than upon “our application will be faster,” though both are benefits of ADO.

New Stuff!
As time wears on, SPDY, IPv6, and a host of other technologies will be more readily available to help you improve your network. Meanwhile, check out gateways for these protocols to make the transition easier.

In Summation
There are a lot of reasons for apps not to perform, and there are a lot of benefits to ADO. I’ve listed some of the easier problems to ferret out; the deeper into your particular network you get, the harder it is to generalize problems. But these should give you a taste for the types of things to look for, and a bit more motivation to explore ADO. Of course I hope you choose F5 gear for ADO and overall application networking, but there are other options out there. I think. Maybe.

Mobile Apps. New Game, New (and Old) Rules
For my regular readers: sorry about the long break. I thought I’d start back with a hard look at a seemingly minor infrastructure element, and the history of repeating history in IT.

In the history of all things, technological and methodological improvements seem to dramatically change the rules, only in the fullness of time to fall back into the old set of rules with some adjustment for the new aspects. Military history has more of this type of “accommodation” than it has “revolutionary” changes. While many people see nuclear weapons as revolutionary, many of the world’s largest cities were devastated by aerial bombardment in the years immediately preceding the drop of the first nuclear weapon – Hamburg, Tokyo, Berlin, Osaka; the list goes on and on. Nuclear weapons were not required for the level of devastation that strategic planners felt necessary. This does not change the hazards of the atomic bomb itself, and I am not making light of those hazards, but from a strategic, war-winning viewpoint, it was not a revolutionary weapon. Though scientifically and societally the atomic bomb certainly had a major impact across the globe and across time, from a warfare viewpoint strategic bombing was already destroying military production capability by destroying cities; the atomic bomb was just more efficient.

The same is true of the invention of rifled cannons. With the increased range and accuracy of rifled guns, it was believed that the warship had met its match, and while protection of ships went through fundamental changes, in the end rifled cannons increased the range of engagement but did not significantly tip the balance of power. Though in the in-between times – from when rifled cannons became commonplace to when armor plating became strong enough – there was a protection problem for ships and crews.

And the most obvious example, the tank, forced military planners and strategists to rethink everything.
But in the end, World War II as a whole was decided in the same manner other continental or globe-spanning conflicts have been throughout history – with hordes of soldiers fighting over possession of land and destruction of the enemy. Tanks were a tool that often led to stunning victories, but in the cases of North Africa and Russia, it can be seen that many of those victories were illusory at best. Soldiers, well supplied and with sufficient morale, had to hold those gains, just like in any other war, or the gains were as vapor.

Technology – High Tech, as we like to call it – is the other area with stunning numbers of “this changes everything” comparisons that just don’t pan out the way the soothsayers claim they will. Largely this is because the changes are not so revolutionary from a technology perspective as evolutionary. The personal computer may have revolutionized a lot of things in the world – I did just hop out to Google, search for wartime pictures of Osaka, find one on Wikipedia, and insert it into my blog in less time than it would have taken me to write the National Archives requesting such a picture, after all – but since the revolution of the Internet we’ve had a string of “this changes everything” predictions that haven’t been true. I’ve mentioned some of them (like XML eliminating programmers) before; I’ll stick to ones that I haven’t mentioned by way of example.

SaaS is perhaps the best example that I haven’t touched on in my blog (to my memory at least). When SaaS came along, there would be no need for an IT department. None. They would be going away, because everything would be SaaS driven. Or at least made tiny. If there was an IT version of Mythbusters, they would have fun with that one, because now we have a (sometimes separate) staff responsible for maintaining the integration of SaaS offerings into our still-growing datacenters.

[Image: Osaka bomb damage – source: Wikipedia]

The newest version of the “everything is different!
Look how it’s changed!” mantra is cell network access to applications. People talk about how the old systems are not good enough and we must do things differently, etc. And as always, in some areas they are absolutely right. If you’ve ever hit a website that was designed without thought for a phone-sized screen, you know that applications need to take target screen size into account – something we haven’t had to worry about since shortly after the browser came along. But in terms of performance of applications on cellular clients, there is a lot we’ve done in the past that is relevant today.

Originally, a lot of technology on networks focused on improving performance. The thing is that the performance of a PC over a wired (or wireless) network has been up and down over the years as technology has shifted the sands under app developers’ feet. Network performance becomes the bottleneck and a lot of cool new stuff is created to get around that, only to find that now the CPU, or memory, or disk is the bottleneck, and resources are thrown that way to resolve problems. I would be the last to claim that cellular networks are the same as Ethernet or wireless Ethernet networks (I worked at the packet layer on CDMA smartphones long ago), but at a 50,000-foot view, they are “on the network” and they’re accessing applications served the same way as to any other client. While some of the performance issues with these devices are being addressed by new cellular standards, some of them are the same issues we’ve had with other clients in the past. Too many round trips, too much data for the connection available, repeated downloads of the same data… all of these things are relative. Of course they’re not the only problems, but they’re the ones we already have fixes for.
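One of those old fixes – skipping a repeated download when the client already holds the bytes – is plain HTTP revalidation. A minimal sketch (the helper names are illustrative, not any vendor’s API): the second request for unchanged content costs a short 304 exchange instead of a full transfer.

```python
import hashlib

# Minimal ETag revalidation sketch: hash the body to make a validator, and
# answer a matching If-None-Match with 304 Not Modified and an empty body.
def make_etag(body: bytes) -> str:
    return hashlib.sha256(body).hexdigest()[:16]

def respond(body: bytes, if_none_match):
    etag = make_etag(body)
    if if_none_match == etag:
        return 304, b"", etag          # client cache is still good
    return 200, body, etag             # full payload plus validator

logo = b"<svg>corporate logo</svg>"
status, payload, tag = respond(logo, None)    # first hit: 200 + bytes
status2, payload2, _ = respond(logo, tag)     # repeat hit: 304, empty body
print(status, len(payload), status2, len(payload2))  # 200 25 304 0
```

On a high-latency cellular link the 304 still costs a round trip, which is why client-side caching rules (next) matter even more than revalidation.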
Take NTLM authentication, for example: back when wireless networks were slow, companies like F5 came up with tools to either proxy for, or reduce the number of round trips required for, authentication to multiple servers or applications. Those tools are still around, and are even turned on in many devices being used today. Want to improve performance for an employee who works on remote devices? Check your installed products, and with your vendor, to find out if this type of functionality can be turned on.

How about image caching on the client? While less useful in the age of “Bring Your Own Device”, BYOD is not yet, and may never be, the standard. Setting image (or object) caching rules that make sense for the client on devices that IT controls can help a lot. Every time a user hits a webpage with the corporate logo on it, the image really doesn’t need to be downloaded again if it has been once. Lots of web app developers take care of this within the HTML of their pages, but some don’t, so again, see if you can manage this on the network somewhere. For F5 Application Acceleration products you can; I cannot speak for other vendors.

The list goes on and on. Anyone with five or ten years in the industry knows what hoops were jumped through the last time we went around this merry-go-round; use that knowledge while assessing other, newer technologies that will also help. The wheel doesn’t need to be reinvented, just reinforced – an evolutionary change from a wooden-spoked device to a steel rim, maybe with chrome. While everyone is holding out for broad 4G deployments to ease the cellular device performance issue, specialists in the field are already saying that the rate of adoption of new cellular devices indicates that 4G will be overburdened relatively quickly. So this problem isn’t going anywhere; time to look at solutions both old and new to make your applications perform on employee and customer cellular devices. F5 participates in the Application Acceleration market.
I do try to write my blogs such that it’s clear there are other options, but of course I think ours are the best. And there are a LOT more ways to accelerate applications than can fit into one blog, I assure you. A simple laundry list of tools, configuration options, and features available on F5 products alone is the topic for a tome, not a blog. Now for the subliminal messaging: buy our stuff, you’ll be happier. How was that? Italics and all. If you can flick a configuration switch on gear you’ve already paid for, do a little testing, and help employees and/or customers who are having performance problems quickly while other options are explored, then it is worth taking a few minutes to check into, right?

Related Articles and Blogs:
The Encrypted Elephant in the Cloud Room
Stripping EXIF From Images as a Security Measure
F5 Friday: Workload Optimization with F5 and IBM PureSystems
Secure VDI Sign On: From a Factor of Four, to One
The Four V’s of Big Data

Web App Performance: Think 1990s.
As I’ve mentioned before, I am intrigued by the never-ending cycle of repetition that High Tech seems to be trapped in. Mainframe -> Network -> Distributed -> Virtualized -> Cloud, which, while different, shares a lot of characteristics with a mainframe environment. The same is true with disks: after several completely different iterations, performance relative to CPUs and application needs is really not that different from 20 years ago. The big difference is that 20 years ago we as users had a lot more tolerance for delays than we do today. One of my co-workers was talking about an article he recently read that said users are now annoyed, literally, “in the blink of an eye” at page load times.

Right now, web applications are going through one of those phases in the performance space, and it’s something we need to be talking about. Not that delivery to the desktop is a problem – network speeds, application development improvements (both developers learning tricks and app tools getting better), and processing power have all combined to overcome performance issues in most applications. In fact, we’re kind of in a state of nirvana: unless you have a localized problem, application performance is pretty darned good. Doubt me? Consider even trying to use something like YouTube in the 90s. Yeah, that’s a good reminder of how far we’ve come.

But the world is evolving again. It’s no longer about web application performance to PCs, because right about the time a problem gets resolved in computer-land, someone changes the game. Now it’s about phones. To some extent it is about tablets, and they certainly need their love too, but when it comes to application delivery, it’s about phones, because they’re the slowest ship in the ocean. And according to a recent Gartner report, that won’t change soon. Gartner speculates that new phones are being added so fast that 4G will be overtaken relatively quickly, even though it is far and away better performance-wise than 3G.
And there’s always the latency that phones have, which at this point in history is much higher than wired connections – or even WLAN connections. The Louis CK video where he makes like a cell phone user going “it… it’s not working!” when their request doesn’t come back right away is funny because it is accurate. And that’s bad news for IT trying to deliver the corporate interface to these devices.

You need to make certain you have a method of delivering applications fast. Really fast. If the latency numbers are in the hundreds of milliseconds, then you have no time to waste – not with excess packets, not with stray requests. Yes, of course F5 offers solutions that will help you a lot; that’s the reason I am looking into this topic. But if you’re not an F5 customer, and for any reason can’t/won’t be, there are still things you can do – they’re just not quite as effective and take a lot more man-hours. Go back through your applications to reduce the amount of data being transferred to the client (HTML can be overly verbose, and it’s not the worst offender), create uber-reduced versions of images for display on a phone (or buy a tool that does this for you), and consider SPDY support, since Google is opening it to the world. No doubt there are other steps you can take. They’re not as thorough as purchasing a complete solution designed around application performance that supports cell phones, but these steps will certainly help, if you have the man-hours to implement them.

Note that only one in three human beings is considered online today. Imagine in five years what performance needs will be. I think that number is actually inflated: I personally own seven devices that get online, and more than one of them is turned on at a time… Considering that Lori has the same number, and that doesn’t count our servers, I’ll assume their math over-estimates the number of actual people online.
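Trimming what crosses the link is measurable even with nothing but the standard library; verbose, repetitive HTML compresses dramatically. A quick demonstration (the markup is a made-up stand-in, and the exact ratio will vary by page):

```python
import gzip

# A made-up, repetitive page: the kind of verbose markup described above.
page = ("<div class='row'><span class='cell'>item</span></div>\n" * 500).encode()
packed = gzip.compress(page)
print(f"{len(page)} bytes -> {len(packed)} bytes on the wire")
```

Real pages are less repetitive than this toy, so expect smaller (but still worthwhile) gains; images, already compressed, gain almost nothing from gzip, which is why they need separate reduction.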
Which means there’s a great big world out there waiting to receive the benefits of your optimized apps. If you can get them delivered in the blink of an eye.

Related Articles and Blogs:
March (Marketing) Madness: Consolidation versus Consolidation
March (Marketing) Madness: Feature Parity of Software with Hardware
March (Marketing) Madness: Load Balancing SQL
What banks can learn from Amazon
Mobile versus Mobile: 867-5309

F5 Friday: The Mobile Road is Uphill. Both Ways.
Mobile users feel the need… the need for spe- please wait. Loading…

We spent the week, like many other folks, at O’Reilly’s Velocity Conference 2011 – a conference dedicated to speed, of web sites, that is. This year the conference organizers added a new track called Mobile Performance. With the consumerization of IT ongoing, and the explosion of managed and unmanaged devices allowing ever-increasing amounts of time “connected” to enterprise applications and services, mobile performance – if it isn’t already – will surely become an issue in the next few years. The adoption of HTML5 as a standard platform across mobile and traditional devices is a boon – optimizing the performance of HTML-based applications is something F5 knows a thing or two about. After all, there are more than 50 ways to use your BIG-IP system, and many of them are ways to improve performance – often in ways you may not have considered before.

NARROWBAND IS THE NEW NORMAL

The number of people who are “always on” today is astounding, and most of them are always on thanks to rapid technological improvements in mobile devices. Phones and tablets are now commonplace just about anywhere you look, and “that guy” is ready to whip out his device and verify (or debunk) whatever debate may be ongoing in the vicinity. Unfortunately, the increase in use has also coincided with an increase in the amount of data being transferred, without a similar increase in the available bandwidth in which to do it. The attention on video these past few years – which is increasing, certainly, in both size and length – has overshadowed similarly astounding bloat in the size and complexity of web page composition. It is this combination – size and complexity – that is likely to cause even more performance woes for mobile users than video.

“A Google engineer used the Google bot to crawl and analyze the Web, and found that the average web page is 320K with 43.9 resources per page (Ramachandran 2010).
The average web page used 7.01 hosts per page, and 6.26 resources per host.” (Average Web Page Size Septuples Since 2003)

Certainly the increase in broadband usage – which has “more than kept pace with the increase in the size and complexity of the average web page” (Average Web Page Size Septuples Since 2003) – has mitigated most of the performance issues that might have arisen had we remained stuck in the modem age. But the fact is that mobile users are not so fortunate, and it is their last mile that we must now focus on, lest we lose their attention due to slow, unresponsive sites and applications. The consumerization of IT, too, means that enterprise applications are more and more being accessed via mobile devices – tablets, phones, etc. The result is the possibility not just of losing attention and a potential customer, but of losing productivity – a much more easily defined value that can be used to impart the potential severity of performance issues to those ultimately responsible for it.

ADDRESSING MOBILE PERFORMANCE

If you thought the need for application and network acceleration solutions was long over, due to the rise of broadband, you thought too quickly. Narrowband, i.e. mobile connectivity, is still in the early stages of growth and as such still exhibits the same restricted bandwidth characteristics as pre-broadband solutions such as ISDN and A/DSL. The users, however, are far beyond broadband and expect instantaneous responses regardless of access medium. Thus there is a need to return to (if you ever left) the use of web application acceleration techniques to redress performance issues as soon as possible. Caching and compression are but two of the most common acceleration techniques available, and F5 is no stranger to such solutions.
BIG-IP WebAccelerator implements both, along with other performance-enhancing features such as Intelligent Browser Referencing (IBR) and OneConnect, and can dramatically improve the performance of web applications by leveraging the browser to load those 6.26 resources per host more quickly while simultaneously eliminating most if not all of the overhead associated with TCP session management on the servers (TCP multiplexing). WebAccelerator – combined with some of the innate network protocol optimizations available in all F5 BIG-IP solutions due to their shared internal platform, TMOS – can do a lot to mitigate performance issues associated with narrowband mobile connections. The mobile performance problem isn’t new, after all, and thus these proven solutions should provide relief to end users in both the customer and employee communities who weary of waiting for the web.

HTML5 – the darling of the mobile world – will also have an impact on the usage patterns of web applications regardless of client device and network type. HTML5 inherently results in more requests and objects, and the adoption rate is fairly significant in the developer community. A recent Evans Data survey indicates increasing adoption rates: in 2010, 28% of developers were using HTML5 markup, with 48.9% planning on using it in the future.

More traffic. More users. More devices. More networks. More data. More connections. It’s time to start considering how to address mobile performance before it becomes an even steeper hill to climb.

The third greatest (useful) hack in the history of the Web
Achieving Scalability Through Fewer Resources
Long Live(d) AJAX
The Impact of AJAX on the Network
The AJAX Application Delivery Challenge
What is server offload and why do I need it?
3 Really good reasons you should use TCP multiplexing

As Network Speeds Increase, Focus Shifts
Someone said something interesting to me the other day, and they’re right: “At 10 Gig WAN connections with compression turned on, you’re not likely to fill the pipe; the key is to make certain you’re not the bottleneck.” (The “other day” is relative – I’ve been sitting on this post for a while.)

I saw this happen when 1 Gig LANs came about. Applications at the time were hard pressed to actually use up a Gigabit of bandwidth, so the focus became how slow the server and application were, whether the backplane on the switch was big enough to handle all that was plugged into it, etc. After this had gone on for a while, server hardware became so fast that we chucked application performance under the bus in most enterprises. And then those applications were running on the WAN, where we didn’t have really fast connections, and we started looking at optimizing those connections in lieu of optimizing the entire application.

But there is only so much that an application developer can do to speed network communications. Most of the work of network communications is out of their hands, and all they control is the amount of data they send over the pipe. Even then, if persistence is being maintained, even how much data they send may be dictated by the needs of the application. And if you are one of those organizations that has databases communicating over your WAN connection, that is completely outside the control of application developers. So the speed bottleneck became the WAN. For every problem in high tech there is a purchasable solution, though, and several companies (including F5) offer solutions for both WAN Acceleration and Application Acceleration.
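The opening remark about 10 Gig pipes is worth putting into numbers. With compression on, the effective application throughput exceeds the physical link, which is exactly why the pipe stops being the limiting factor (the ratio below is an assumption for illustration; real ratios depend entirely on the traffic mix):

```python
# Rough illustration of why a compressed 10 Gig WAN link is hard to fill.
# The 2.5:1 ratio is an assumed average, not a measured one.
wan_gbps = 10.0
compression_ratio = 2.5
effective_gbps = wan_gbps * compression_ratio
print(f"{effective_gbps:.0f} Gbps of application data through a {wan_gbps:.0f} Gbps pipe")
# Unless your servers can source more than that, they - not the WAN - are the bottleneck.
```

Already-compressed traffic (media, encrypted streams) gets a ratio near 1:1, so the blend matters.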
The cool thing about solutions like BIG-IP WebAccelerator, EDGE Gateway, and WOM is that they speed application performance (WebAccelerator for web-based applications, WOM for more back-end applications or remote offices) while reducing the amount of data being sent over the wire – without requiring work on the part of developers. As I’ve said before: if developers can focus on solving the business problems at hand and not the technical issues that sit in the background, they are more productive.

Now that WAN connections are growing again, you would think we would be poised to shift the focus back to some other piece of the huge performance puzzle, but this stuff doesn’t happen in a vacuum, and there are other pressures growing on your WAN connection that keep the focus squarely on how much data it can pass. Those pressures are multi-core, virtualization, and cloud. Multi-core increases the CPU cycles available to applications; to keep up, server vendors have been putting more NICs in every given server, increasing potential traffic on both the LAN and the WAN. With virtualization we have a ton more applications running on the network, and the comparative ease with which they can be brought online implies this trend will continue. Cloud not only does the same thing, but puts the instances on a remote network that requires trips back to your datacenter for integration and database access (yeah, there are exceptions – I would argue not many). Both of these trends mean that the size of your pipe out to the world is not only important but, because it is a monthly expense, must be maximized.

By putting in both WAN Optimization and Web Application Acceleration, you stand a chance of keeping your pipe from growing to the size of the Alaska pipeline, and that means savings for you on a monthly basis. You’ll also see that improved performance that is so elusive. Never mind that as soon as one bottleneck is cleared another will crop up; that comes with the territory.
By clearing this one you’ll have improved performance until you hit the next plateau, and you can then focus on settling it, secure in the knowledge that the WAN is not the bottleneck. And with most technologies – certainly with those offered by F5 – you’ll have the graphs and data to show that the WAN link isn’t the bottleneck. Meanwhile, your developers will be busy solving business problems, and all of those cores won’t go to waste.

[Photo of caribou walking alongside the Alaska pipeline, taken July 1998 by Stan Shebs]