application delivery optimization
Performance versus Presentation
#webperf #ado You remember the service, not the plating (unless you're a foodie)

One morning, while reading the Internet (yes, the entire Internet), I happened upon a rather snarky (and yes, I liked the tone and appreciated the honesty) blog on the value (or lack thereof) of A/B testing, "Most of your AB-tests will fail". The blog is really a discussion of the law of diminishing returns, and it notes that at some point the value of moving a button 2 pixels to the right is not worth the effort of the testing and analysis. When you combine the eventual statistical irrelevance of presentation tweaks with the very real impact of performance on conversion rates (negative or positive, depending on which direction performance moves), it becomes evident that at some point it is more valuable to focus on performance than on presentation.

If you think about it, most people remember service over plating at a restaurant. As long as the meal isn't dumped on a plate in a manner that's completely unappetizing, most people are happy as long as the service was good, i.e. it was delivered within their anticipated time frame. Even those of us who appreciate an aesthetically pleasing plate will amend a description of our dining experience with "but it took f-o-r-e-v-e-r" if the service was too slow. Service – performance – ends up qualifying even our dining experiences.

And really, how many people do you know who go around praising the color and font choices* on a website or application? How many gush over the painstakingly created icons or the layout that took months to decide upon? Now, how many do you hear complain about the performance? About how s-l-o-w the site was last night, or how lag caused their favorite character in their chosen MMORPG to die? See what I mean? Performance, not plating, is what users remember, and it's what they discuss.

Certainly a well-designed, easy to use (and navigate) application is desirable.
A poorly designed application can be as much of a turn-off as a meal dumped unceremoniously on a plate. But pretty only gets you so far; eventually performance is going to matter more than plating, and you need to be ready for that.

A/B testing (and other devops patterns) is a hot topic right now, especially given new tools and techniques that make it easy to conduct. But the aforementioned blog was correct: at some point, it's just not worth the effort any more. The math says that improving performance, not plating, at that point will impact conversion rates and increase revenue far more than moving a button or changing an image.

As more and more customers move to mobile means of interacting with applications and web sites, performance is going to become even more critical. Mobile devices come with a wide variety of innate issues that impede performance and cannot be addressed directly – after all, unless it's a corporate or corporate-managed device, you don't get to mess with the device. Instead, you'll need to leverage a variety of mobile acceleration techniques including minification, content inlining, compression, image optimization, and even SPDY support.

A/B testing is important in the early stages of design, no doubt about that. Usability is not something to be overlooked. But recognize the inflection point – the point at which tweaking is no longer returning value commensurate with the investment in time. Performance improvements, on the other hand, seem to defy the law of diminishing returns in study after study, and bring value to users and the bottom line alike. So don't get so wrapped up in how the application looks that you overlook how it performs.

*Except, of course, if you use Comic Sans. If you use Comic Sans you will be mocked, loudly and publicly, across the whole of the Internets no matter how fast your site is. Trust me.
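The diminishing-returns argument can be made concrete with the standard two-proportion power calculation: the smaller the lift you hope to detect, the traffic you must push through the test grows with the square of that shrinkage. This is a minimal sketch – the baseline conversion rate, confidence, and power figures are illustrative assumptions, not numbers from the blog in question:

```python
import math

def samples_per_arm(base_rate, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per test arm to detect an absolute
    `lift` over `base_rate` at 95% confidence and 80% power, using the
    standard two-proportion normal approximation."""
    p_bar = base_rate + lift / 2          # pooled conversion rate
    variance = 2 * p_bar * (1 - p_bar)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / lift ** 2)

# Baseline 5% conversion: each halving of the effect you hope to detect
# roughly quadruples the traffic the test has to consume.
for lift in (0.02, 0.01, 0.005):
    print(f"detect +{lift:.1%}: {samples_per_arm(0.05, lift):,} visitors per arm")
```

At some point the effect you are chasing is so small that the test itself costs more than the button move could ever return – which is exactly the inflection point described above.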
You can check out your application's performance using F5's FAST.

Bare Metal Blog: Testing for Numbers or Performance?
#BareMetalBlog What you test can say a lot about you #f5

Along the lines of the first blog in the testing portion of the Bare Metal Blog series, I'd like to talk a bit more about how the testing environment, the device configuration, and the payloads translate into test results.

One of the problems most advanced mass education systems run into is the question of standardized testing. While it is true that you cannot fix what you have not determined is broken, like most things involving people, testing students for specific areas of knowledge does rather guarantee that those doing the teaching will err on the side of preparing students to take the test rather than to succeed in life. The mere fact that there IS a test changes what is taught. It is of course possible to make this into a massively positive proposition by targeting the standardized tests at the most important things students need to learn, but for our discussion purposes, the result is the same – the students will be taught whatever is on that test first, and all else secondarily.

This is far too often true of vendor product testing also. The mere fact that there will be a test of the equipment – and most high-tech markets are highly competitive – makes things lean toward tweaking the device (or the test) to maximize test performance, in spite of what the real-world performance will be.

The most flagrant problem with testing today is a variant on an old theme. Way back when, testing the throughput of network switches made sense, and there was a lot of "packets per second" testing with no payload. That tests the ability of the switch to send packets to the right place, but does not at all test the device in a manner consistent with the real-world usage of switches. Today we have a whole slew of similar tests for ADCs. The purpose of an ADC is to load balance, optimize, and if needed secure the passage of packets.
Primarily this is application traffic, because they're Application Delivery Controllers. Application traffic being layer seven means that you need to do some layer seven decision-making if the device is to be tested the way it will run in the real world. If the packet is a layer seven packet but layer four switching is all that is performed on it, the test is useless for determining the actual capabilities of the device. And yet there is a lot of that type of testing going on out there right now.

It's time – way past time – to drive testing into the real world for ADCs. Layer seven decision-making is much more complex and requires a deep look at the packets in question, meaning that the results will not be nearly as pretty as simple layer four packet-switching numbers. You cannot do a direct comparison of all of the optional features of two different ADCs, simply because the breadth of optional functionality is so great once a solid ADC platform is deployed, but you can test the basic capabilities and responsiveness of the core products. And that is what we, as an industry, must begin to insist on.

I use one single oddity in ADC testing here, but every branch of high-tech testing I've been involved in over the years – security, network gear, storage, application – has similar "this is not good enough" testing that we need to demand be dropped in favor of solid testing that reflects a real-world device. Not your real-world device, unless you are running the test lab, but a real-world device that is seeing – and more importantly acting upon – data that the device will encounter in an actual network, doing the job it was designed for. As I mentioned in the last testing installment, you can make an ADC look astounding if your tests don't actually force it to do anything.

For our public testing, we have standards, and offer up our configuration and testing goals on DevCentral.
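The difference between a layer-four packet blaster and a layer-seven test shows up in the traffic itself: a real test has to vary the things the device actually inspects. A rough sketch of that idea – the paths, headers, and mix below are invented for illustration, not any standard test plan:

```python
import random

def build_request(host, path, headers=None):
    """Serialize a minimal HTTP/1.1 GET. A layer-four blaster would
    replay one fixed buffer forever; a layer-seven test must vary the
    parts of the request the ADC has to parse."""
    lines = [f"GET {path} HTTP/1.1", f"Host: {host}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    return ("\r\n".join(lines) + "\r\n\r\n").encode()

def l7_workload(host, n):
    """Yield n requests that exercise content switching: mixed URL
    paths, cookies, and encodings, forcing decisions above layer four."""
    paths = ["/", "/api/cart", "/static/logo.png", "/search?q=adc"]
    encodings = ["gzip", "identity"]
    rng = random.Random(42)  # seeded so test runs are repeatable
    for _ in range(n):
        yield build_request(host, rng.choice(paths), {
            "Accept-Encoding": rng.choice(encodings),
            "Cookie": f"session={rng.randrange(1_000_000)}",
        })

requests = list(l7_workload("example.test", 1000))
print(f"{len(set(requests))} distinct requests out of {len(requests)}")
```

A device that posts great numbers against one replayed buffer and mediocre numbers against a mix like this is telling you which test was honest.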
Whether you use it to validate the test results F5 uses, or to set up the tests in your own environment, talking publicly about how testing is performed is a big deal. Ask your vendor for configuration files and a testing plan when numbers are tossed at you; make certain you know what they're testing when they try to impress you with over-the-top performance numbers. In my career, I have seen cases where "double the performance of our nearest competitor" was used publicly and was as close to an outright lie as possible, since the test and configuration were different between the two products the test claimed to compare.

When you buy any form of datacenter equipment, you're going to be stuck with it for a good long while. Make certain you know how the testing that is informing your decision was performed, no matter who did the testing. Independent third-party testing sometimes isn't so independent, and knowing that can make you more cautious before hooking your company up with gear you'll have to live with.

Technorati Tags: Testing, Application Delivery Optimization, Bare Metal Blog, F5 Networks, Don MacVittie

Bare Metal Blog Series: Bare Metal Blog: Introduction to FPGAs | Bare Metal Blog: Introduction | Bare Metal Blog: Test for reality. | Bare Metal Blog: FPGAs The Benefits and Risks | Bare Metal Blog: FPGAs: Reaping the Benefits | F5 DevCentral

Of Escalators and Network Traffic
Escalators are an interesting first-world phenomenon. While not strictly necessary anywhere, they turn up all over in most first-world countries. The key to their popularity is, no doubt, the fact that they move traffic much more quickly than an elevator, and offer the option of walking to increase the speed to destination even more. One thing about escalators is that they're always either going up or down, in contrast to an elevator, which changes direction with each trip.

The same could be said of network traffic. It is definitely on the up escalator, with no signs of slackening. The increasing number of devices not just online, but accessing information both inside and outside the confines of the enterprise, has brought with it a large increase in traffic. Combine that with increases in new media both inside and outside the enterprise, and you have a spike in growth the likes of which the world may never see again. And we're in the middle of it. Just take a look at the graph of Internet usage portrayed in a bit of back-and-forth between Rob Beschizza of Boing Boing and Wired magazine. That graphic only goes to 2010, and you can clearly see that the traffic growth is phenomenal. (Side note: Mr. Beschizza's blog entry is worth reading, as he dissects arguments that the web is dead.)

As this increase impacts an organization, there is a series of steps that generally occurs on the path to Application Delivery Networking, and it's worth recapping here (the order can vary):

1. An application is not performing. Application load balancing is brought in to remedy the problem. This step may be repeated, with load balancing widely deployed, before...
2. Internet connections are overloaded. Link load balancing is brought in to remedy the problem.
3. Once the enterprise side is running acceptably, it turns out that wireless devices – particularly cell phones – are slow. Application acceleration is brought in to solve the problem.
4. Application security becomes an issue – either for purchased packages exposed to the world, or for internally developed code. A web application firewall is used to solve the problem.
5. Remote backups or replication start to slow the systems as more and more data is collected. WAN optimization is generally brought in to address the problem.
6. For storefronts and other security-enabled applications, encryption becomes a burden on CPUs – particularly in a virtualized environment. Encryption offloading is brought in to solve the problem.
7. Traffic management and access control quickly follow – addressed with management tools and SSL VPN.

That is where things generally sit right now. There are other bits, but most organizations haven't finished going this far, so we'll skip them for now. The problem that has even the most forward-thinking companies mostly paused here is complexity. There's a lot going on in your application network at this point, and the pause to regain control and insight is necessary.

An over-arching solution to the complexity these steps introduce – some way to control all of this burgeoning architecture from a central location – is, while not strictly necessary, a precursor to taking further advantage of the infrastructure available within the datacenter (notice that I have not discussed multi-datacenter or datacenter-to-cloud in this post). Some vendors – like F5 (just marketing here) – offer a platform that allows control of these knobs and features, while other organizations will have to look to products like Tivoli or OpenView to tie the parts together.

And while we're centralizing the management of the application infrastructure, it's time to consider that separate datacenter or the cloud as a future location to include in the mix. Can the toolset you're building look beyond the walls of the datacenter and meet your management and monitoring needs? Can it watch multiple cloud vendors?
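The first remediation step in that progression – application load balancing – is conceptually simple, even though production devices add health monitoring, persistence, and weighting on top of it. A toy sketch of the core idea, with hypothetical server names:

```python
class RoundRobinPool:
    """Minimal round-robin pool: spread requests across application
    servers, skipping any a health check has marked down."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.down = set()
        self._i = 0

    def mark_down(self, server):
        # In a real device, active health monitors drive this.
        self.down.add(server)

    def next_server(self):
        """Return the next healthy server in rotation."""
        for _ in range(len(self.servers)):
            server = self.servers[self._i]
            self._i = (self._i + 1) % len(self.servers)
            if server not in self.down:
                return server
        raise RuntimeError("no healthy servers in pool")

pool = RoundRobinPool(["app1:8080", "app2:8080", "app3:8080"])
pool.mark_down("app2:8080")
print([pool.next_server() for _ in range(4)])  # alternates app1, app3 while app2 is out
```

Every later step in the list exists because this one, on its own, solves only the "not enough server" problem – nothing about bandwidth, distance, or security.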
What metrics will you need, and can your tools get them today, or will you need more management tooling? All stuff to ask while taking that breather. There's a lot of change going on, and it's always a good idea to know where you're going in the long run while you're fighting fires in the short run. The cost of failing to ask these questions is limited capability to achieve goals in the future – i.e., more firefighting. And IT works hard enough; let's not make it harder than it needs to be.

And don't hesitate to call your sales rep. They want to give you information about products and try to convince you to buy theirs; it's what they do. While I can't speak for other companies, if you get on the phone with an F5 SE, you'll find that they know their stuff, and can offer help that ranges from defining future needs to meeting current ones.

To you IT pros, I say: keep making business run like they don't know you're there. And since they won't generally tell you, I'll say "thank you" for them. They have no idea how hard their life would be sans IT.

The BYOD That is Real.
Not too long ago I wrote about VDI and BYOD, and how their hype cycles were impacting IT. In that article I was pretty dismissive of the corporate-wide democratization of IT through BYOD, and I stand by that. Internally, it is just not a realistic idea unless and until management toolsets converge. But that's internally. Externally, we have a totally different world.

If you run a website-heavy business like banking or sales, you're going to have to deal with the proliferation of Internet-enabled phones and tablets, because they will hit your websites, and customers will expect them to work. Some companies – media companies tend to do a lot of this, for example – will ask you to download their app just to view web pages. That's ridiculous; just display the page. But some companies – again, banks are a good example – have valid reasons to want customers to use an app to access their accounts. The upshot is that any given app will have to support at least two platforms today, and that guarantees nothing a year from now. But it does not change the fact that one way or another, you're going to have to support these devices over the web.

There are plenty of companies out there trying to help you. Appcelerator offers a cross-platform development environment that translates from JavaScript into native Objective-C or Java, for example. There are UI design tools available on the web that can output both formats but are notoriously short on source code and custom graphics – still good for prototyping. And these environments allow you to choose an HTML5 app, a native app, or a hybrid of the two, allowing staff to choose the best solution for the problem at hand.

And then there is the network. It is not just a case of delivering a different format to the device; it is a case of optimizing that content for delivery to devices with smaller memory space, slower networks, and slower CPUs. That's worth thinking about.

There's also the security factor.
Mobile devices are far easier to misplace than a desktop, and customers are not likely to admit their device is stolen until they've looked everywhere they might have left it. In the case (again) of financial institutions, if credentials are cached on the device, this is a recipe for disaster. So it is not only picking a platform and an application style; it is coding to the unique characteristics of the mobile world.

Of course optimization is best handled at the network layer by products like our WebAccelerator, because it's what they do, and they're very good at optimizing content based upon the target platform. Security, as usual, must be handled in several places. Checking that the device is not in a strange location (as I talked about here) is a good start, but not allowing the username and password to be cached on the device is huge too.

So while you are casting a skeptical look at BYOD inside your organization, pay attention to customers' device preferences. They're hitting the web on mobile devices more and more each month, and their view of your organization will be hugely impacted by how your site and/or apps respond. So invest the time and money, be there for them, so that they'll come back to you. Or don't. Your competitors would like that.

Random Acts of Optimization.
When I first embarked on my application development career, I was a code optimization junkie. Really – making things faster, more efficient, the tightest they could get was a big deal to me. That routine you wrote to solve a one-off problem often becomes the core routine used by applications across the infrastructure, so writing tight code was (to me) important. The industry obviously didn't agree with me, since now we mostly run interpreted languages over the network, but that was then, this is now, and all that.

The thing is that performance still matters; it has just changed location. The overhead difference (in C/C++) between if() else and (x?y:z) is not so important anymore unless that particular instruction is executed a whole lot. The latency introduced to the network by all of those devices between server and client is far larger than the few clock cycles of difference between those two constructs. There are still applications where optimized code really makes a difference (mostly in embedded, where resources are scarcer than even the tablet space), but with ever-shrinking form factors and increasing resources, even those instances are going away slowly but surely. The only place I've heard of that really needs a high level of source optimization in recent months is high-speed transactions in the financial services sector.

Simply put, if your application is on the network, the organization will get more out of spending networking-staff man-hours improving network performance than spending developer man-hours doing the same. There is still a lot of app optimization that needs to go on – databases are a notorious area where a great DBA can make your application many times faster – but the network is impacting the application many times in its back-and-forth, and it is impacting all applications, including the ones you don't have the source to.
But there are a lot of pieces to application delivery optimization (ADO), and approaching it piecemeal has no better result than approaching application optimization piecemeal. Just because you put a load balancer in front of your server and fired up a few more VMs behind it to share the load does not mean that your application is optimized. In some instances that is the best solution, but in most cases a more thorough Application Delivery Network approach is required. Making the application more responsive by load balancing does not decrease the amount of data the application is sending over your Internet connection, does not optimize delivery of the application over the wire to make it faster on the client end, does not direct users to the geographically closest or least utilized datacenter/cloud, does not... well, do a lot of things. Exactly the same way that optimizing your application's code won't help a bit if the database is the slowest part of the application.

So I'll recommend a holistic approach (I hate that phrase, but how else do you politely say "look at every friggin' thing on your network"?) that focuses on application serving and application delivery. And if you're in multiple datacenters, with data having to traverse the Internet behind your application as well, then back-end optimizations too. It's not just about throwing more virtuals at the problem, and most of us know it at this point. The user experience is, in the end, what matters most, and there are plenty of places other than your app that can dog performance from the user perspective. Look into compression and caching, TCP optimizations, application-specific delivery tweaks, back-end optimizations, and in some cases code optimizations. Check the performance hit that every device between your servers and the Internet introduces. Optimize everything. And, like writing tight code, it will become just the way you do things.
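Of those techniques, compression is the easiest to see the value of in isolation. A quick sketch of the kind of saving it buys – the payload here is artificially repetitive, so treat the ratio as a best case rather than a typical one:

```python
import gzip

# A deliberately repetitive HTML body standing in for a typical page.
# Real markup compresses very well too, just less dramatically.
page = ("<html><body>"
        + "<div class='row'><span>item</span><span>price</span></div>" * 500
        + "</body></html>").encode()

compressed = gzip.compress(page, compresslevel=6)
saved = 1 - len(compressed) / len(page)
print(f"{len(page):,} bytes -> {len(compressed):,} bytes "
      f"({saved:.0%} less on the wire)")
```

Every byte that never leaves the datacenter is a byte your Internet connection, and your user's last mile, never has to carry.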
Once it is ingrained that you check all the places your application performance can suffer, it's less to worry about, because you'll configure things at deployment. Call it DevOps if you will, but make it part of your normal deployment model, and review things whenever there's a change. It's a complex beast, the enterprise network, and it's not getting less so.

Use templates (like F5's iApps) to provision the network bits correctly for you. Taking the F5 example, there is an iApp for deploying SharePoint and a different one for Exchange. They take care of the breadth of issues that can speed delivery of each application; you just answer a few questions. I am unaware of any of our competitors having a similar solution, but it is only a question of time, so if you're not an F5 customer, ask your sales representative what the timeline for delivery of similar functionality is. I'm not an expert on our competition – who knows, maybe they have rolled something out already. Even if not, you can make checklists much like F5 Application Guides and F5 Deployment Guides, then use them to train new employees and make certain you've set everything up to your liking.

Generally speaking, faster is better on any given network, so optimization is something you'll have to worry about eventually, even if you're not thinking about it today. Hope this helps a little in understanding that there's more to it than load balancing. But if not, at least I got to write about optimizing C source.

Related Articles and Blogs: F5 Friday: F5 Application Delivery Optimization (ADO) | The "All of the Above" Approach to Improving Application Performance | Interop 2012 - Application Delivery Optimization with F5's Lori ... | The Four V's of Big Data | DevCentral Interviews | Audio - Application Delivery Controllers | F5 News - Unified Application Delivery | Intercloud: The Evolution of Global Application Delivery | Audio White Paper - Application Delivery Hardware A Critical ... | Who owns application delivery meta-data in the cloud? | The Application Delivery Spell Book: Detect Invisible (Application ...

There is more to it than performance.
Did you ever notice that sometimes "high efficiency" furnaces aren't? That there are some things the furnace just doesn't cover – like the quality of your ductwork, for example? The same is true of a "high performance" race car. Yes, it is VERY fast, assuming a nice long flat surface for it to drive on. Put it on a dirt road in the rainy season and, well, it's just a gas hog. Or worse, a stuck car. I could continue the list – a "high energy" employee can be relatively useless if they are assigned tasks at which brainpower, not activity rate, determines success – but I'll leave it at those three; I think you get the idea.

The same is true of your network. Frankly, increasing your bandwidth in many scenarios will not yield the results you expected. Oh, it will improve traffic flow, and overall the performance of apps on the network will improve; the question is "how much?" It would be reasonable – or at least not unreasonable – to expect that doubling Internet bandwidth should stave off problems until you double bandwidth usage. But often the problems are with the apps we're placing upon the network, overloading it. Sometimes, it's not the network at all.

Check the ecosystem, not just the network. When I was the Storage and Servers Editor over at NWC, I reviewed a new (at the time) HP server that was loaded. It had a ton of memory, a ton of cores, and could make practically any application scream. It even had two gigabit NICs in it. But they weren't enough. While I had almost nothing bad to say about the server itself, I did observe in the summary of my article that the network was now officially the bottleneck. Since the machine had high-speed SAS disks, disk I/O was not as big a deal as it traditionally has been; high-speed cached memory meant memory I/O wasn't a problem at all; and multiple cores meant you could cram a ton of processing power in. But team those two NICs and you'd end up with slightly less than 2 gigabits of network throughput.
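The arithmetic behind that statement is worth doing explicitly. A sketch – the overhead percentage is a rough rule of thumb for Ethernet/IP/TCP framing, not a measured figure:

```python
def usable_mb_per_sec(gigabits_per_sec, overhead=0.07):
    """Rough conversion from raw link rate to application-visible
    throughput, shaving an assumed ~7% for framing and protocol
    headers before converting bits to megabytes."""
    bits = gigabits_per_sec * 1_000_000_000
    return bits * (1 - overhead) / 8 / 1_000_000

# Two teamed gigabit NICs, shared by every app (or VM) on the box:
print(f"{usable_mb_per_sec(2):.0f} MB/s total, counting both directions")
```

Divide by eight, subtract overhead, and "2 gigabits" suddenly looks a lot smaller than the spec sheet implied.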
Assuming 100% saturation, that was really less than 250 megabytes per second, and that counts both in and out. For query-intensive database applications or media-streaming servers, that just wasn't keeping pace with the server. Now here we are, six or so years later, and similar servers are in use all over the globe... running VMs. Meaning that several copies of the OS are now carving up that throughput.

So start with your server. Check it first if the rest of the network is performing; it might just be the problem. And while we're talking about servers, the obvious one needs to be mentioned: don't forget to check CPU usage. You just might need a bigger server, or load balancing, or – these days – fewer virtuals on your server. Heck, as long as we're talking about servers, let's consider the app too. For a variety of reasons, the last few years have seen less focus on apps whose performance is sucktacular, but it still happens. Worth looking into if the server turns out to be the bottleneck.

Old gear is old. I was working on a network that deployed an ancient Cisco switch. The entire network was 1 Gig, except for that single switch. Tracing wires showed that switch to lie between the Internet and the internal network. A simple error, easily fixed, but an easy error to have in a complex environment, and certainly one to be aware of. That switch was 10/100 only. We pulled it out of the network entirely, and performance instantly improved.

There's necessary traffic, and then there's... Not all of the traffic on your network needs to be there, and all that does need to be there doesn't have to be so bloated. Look for sources of UDP broadcasts; more often than you would think, applications broadcast things you don't care about. Cut them off. For other traffic, well, there is Application Delivery Optimization. ADO is improving application delivery through a variety of technical solutions, but we'll focus on those that make your network and your apps seem faster.
You already know about them – compression, caching, image optimization... and in the case of back-end services, de-duplication. But have you considered what they do other than improve perceived or actual performance?

Free Bandwidth

Anything that reduces the size of application data leaving your network also reduces the burden on your Internet connection. This goes without saying, but as I alluded to above, we sometimes overlook the fact that it is not just application performance we're impacting, but the effectiveness of our connections – connections that grow more expensive by leaps and bounds each time we have to upgrade them. While improving application performance is absolutely a valid reason to seek out ADO, delaying or eliminating the need to upgrade your Internet connection(s) is another. Indeed, in many organizations it is far easier to justify TCO based upon deferred upgrades than upon "our application will be faster", though both are benefits of ADO.

New Stuff!

As time wears on, SPDY, IPv6, and a host of other technologies will be more readily available to help you improve your network. Meanwhile, check out gateways for these protocols to make the transition easier.

In Summation

There are a lot of reasons for apps not to perform, and there are a lot of benefits to ADO. I've listed some of the easier problems to ferret out; the deeper into your particular network you get, the harder it is to generalize problems. But these should give you a taste for the types of things to look for. And a bit more motivation to explore ADO. Of course I hope you choose F5 gear for ADO and overall application networking, but there are other options out there. I think. Maybe.

F5 Friday. Speedy SPDY
#ADO #Stirling #fasterapp A SPDY implementation that is as fast and adaptable as needed.

(I originally wrote this more than a month ago. Coworkers have covered this topic extensively, but I thought I'd still post it for those who read my blog and missed it.)

Remember the days when Internet connections were inherently slow, and browser usage required extreme patience? For many people – from certain geographic regions to mobile phone Internet users – that world of waiting has come around again, and they're not as patient as people used to be, largely because instant communication has become the standard, so expectations have risen. As with all recurring themes, there are new solutions coming along to resolve these problems, and F5 is staying on top of them, helping IT to better serve the needs of the business, and the customer.

In November of 2009, Google announced the SPDY protocol to improve the performance of browser-server communications. Since then, implementations of SPDY have cropped up in both Chrome and Firefox, which according to w3schools.com comprise over 70% of the global browser market. The problem is that web server and web application server implementations lag far behind client adoption. While the default is for SPDY to drop to HTTP if either client or server lacks a SPDY implementation, there are clear-cut benefits to SPDY that IT is missing out on. This is the result of a convergence of issues that will eventually be resolved on their own – most notably, it is easy to get two open-source browsers to support your standard and attain market penetration, but much harder to convince tens of thousands of IT folks to disrupt their normal operations while implementing a standard that isn't strictly necessary for most of them. Eventually, SPDY support will come pre-packaged in most web servers, and if it is something your organization needs, those web servers will be the first choice for new projects.
Until then, clients with slow connections (including all mobile clients) will suffer longer delivery timeframes. What is required is a solution that allows for SPDY support without disrupting the flow of normal operations. Something that can be implemented quickly and easily, without the hassle of dropping web servers, installing modules, making configuration changes, etc. And of course that solution should be comprehensive enough to serve the most demanding environments. As of now, that requirement is fulfilled by F5. F5 WebAccelerator now supports SPDY as a proxy for all of the servers you choose to turn SPDY support on for. In the normal course of SPDY operations, the client and the server exchange information about whether they support SPDY or not, and if both do not, then HTTP is used for communication between the browser and the web server. BIG-IP WebAccelerator acts as a proxy for web servers. It terminates the connection, responds that the server behind it does indeed support SPDY, then translates requests from the browser into HTTP before passing them to the server, and responses from the server into SPDY before passing them to the client. The net result is that on the slowest part of the connection – the Internet and wireless device “last mile”, SPDY is being used, while there are zero changes to the application infrastructure. And because the BIG-IP product family specializes in configurations per-application, you can pick and choose which applications running behind a BIG-IP device actually support SPDY, should the need arise. Combined with the whole collection of other optimizations that WebAccelerator implements, the performance of web applications to any device can greatly benefit without retrofitting the entire network. The HTTP 2.0 War has Just Begun The Four V’s of Big Data The “All of the Above” Approach to Improving Application Performance Mobile Apps. 
New Communications = Multiplexification
I wrote a good while back about the need to translate the various storage protocols into one that could take root and simplify the lives of IT. None of those currently being hawked seems to be making huge inroads in the datacenter; all have some uses, but none is unifying. Those peddling the latest, greatest thing of course want to sell you on their protocol because they hope to be The One, but it's not about selling, it's about usefulness. At the time, FCoE was the new thing. I don't get much chance to follow storage like I used to, but I haven't heard of anything new since the furor over FCoE calmed down, so I presume the market still sits there, with NAS split between two protocols and block storage split between many.

There is a similar fragmentation trend going on in networking at the moment. There have always been a zillion transport standards, and as long as the upper layers can be uniform, working out how to fit your cool new satellite link into Ethernet is a simple problem from the IT perspective: either the vendor solves the issue or the product fails due to lack of usefulness. But the higher layers are starting to see fragmentation too, in the form of SPDY, Speed+Mobility, and the like. In both of these cases, HTTP is being supplanted by something that requires configuration differences and is not universally supported by clients. And yet the benefits are such that IT is paying attention. IPv6 is causing similar issues at the lower layers, and it is worth mentioning here for a reason. The key, as Lori and I have both written, is that IT cannot afford to rework everything at once to support these new standards, but feels an imperative (IP address space in the case of IPv6, web application performance in the case of the HTTP-layer changes) to implement them whenever possible. The best solution to these problems – where upgrading has its costs and failing to upgrade has other costs – is to implement a gateway.
F5's IPv6 Gateway is one such solution (other vendors have them too – I'll talk about the one I know here, but assume much of this applies to the others and verify with your vendor), and it's easy to talk about because it is being used in IT shops to do just that. With the gateway deployed in front of your datacenter, it translates between IPv6 and IPv4, meaning that the datacenter can be converted at a sane pace, and IPv4 support does not require a separate stack that must be maintained while client adoption catches up. If a connection comes in to the gateway from an IPv4 client and the server speaks IPv4, the connection is passed through. The same occurs if both client and server support IPv6. If the client and server are mismatched, the gateway translates between them. That means you get support the day the gateway is deployed, and over time you can convert your systems while maintaining support for all clients.

This type of solution works handily for protocols like SPDY too, offering the ability to claim a server supports SPDY when in fact it doesn't – the gateway does, and translates between SPDY and HTTP. Deploying a SPDY gateway gives instant SPDY support to the web (and application) servers behind it, buying IT time to reconfigure those web servers to actually support SPDY. SPDY accelerates everything on the client side, and HTTP is only used on the faster* server side, where the network is dedicated.

Faster has an asterisk by it, though. What if the app or web server is at a remote site? You're going right back out onto the Internet, using HTTP unoptimized. In those cases – and other cases where network response time is slow – something is needed on the back end to keep those performance gains without finding the next bottleneck as soon as the SPDY gateway is deployed. F5 uses several technologies to improve back-end communications performance, and other vendors have similar solutions (though ours are better – biased though I may be).
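The gateway's pass-through-versus-translate decision described above reduces to a comparison of the two sides' stacks. A toy sketch (my own naming, under the simplifying assumption that only the IP version matters):

```python
def gateway_action(client_ip_version: int, server_ip_version: int) -> str:
    """An IPv6 gateway's basic routing decision: connections where
    client and server already speak the same IP version pass straight
    through; mismatched versions are translated by the gateway."""
    if client_ip_version == server_ip_version:
        return "pass-through"
    return "translate"

# Legacy client to legacy server: untouched.
print(gateway_action(4, 4))   # -> pass-through
# New IPv6 client to a not-yet-converted IPv4 server: the gateway steps in.
print(gateway_action(6, 4))   # -> translate
```

This is why conversion can proceed at a sane pace: each server is translated for only as long as it remains unconverted, and drops to pass-through the day it isn't.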
For F5's part, secure tunnels, WAN optimization, and a very relevant feature of BIG-IP LTM called OneConnect all work together to minimize back-end traffic. OneConnect is a cool little feature that minimizes connections from the BIG-IP to the back-end servers by pooling and reusing them. This does several things, but importantly, it takes connection setup and teardown time out of the picture. So if a (non-SPDY) client makes four connections to get its data, the BIG-IP merges them with other requests to the same server and essentially multiplexes them. Funny thing is, this is one of the features of SPDY on the other side, with the primary difference that SPDY is client focused (it merges connections from the client) while OneConnect is server focused (it merges connections to the server). The client side is "all connections from this client," while the server side is "all connections to this server (regardless of client)," but otherwise they are very similar. This enters interesting territory, because now we're essentially multi-multi-plexing. But we're not. Here's the sequence of events, using only a couple of clients and a generic server/application farm:

1. SPDY comes into the gateway as a single stream from the client.
2. The gateway translates it into HTTP's multiple streams.
3. BIG-IP identifies the server the request is for.
4. If a connection exists to that server, BIG-IP passes the request through the existing connection.
5. When responses are sent, the process is handled in reverse: responses come in over OneConnect and go out SPDY encoded.

There is only a brief period of time where native HTTP is being communicated, and presumably the SPDY gateway and the BIG-IP are in very close proximity. The result is application communication that is optimized end-to-end, while the only changes to your application architecture are configuring the SPDY gateway and OneConnect.
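The server-side half of that sequence – many clients' requests riding a shared, reused connection to the same back-end server – can be illustrated with a toy pool. This is a deliberately simplified model of the idea, not the OneConnect implementation (class and attribute names are mine):

```python
from collections import defaultdict

class ReusePool:
    """Toy model of server-side connection reuse: requests from many
    clients bound for the same back-end server share a pooled
    connection instead of each paying TCP setup and teardown."""

    def __init__(self):
        self.pooled = defaultdict(int)     # server name -> pooled connections
        self.connections_created = 0       # total setup costs actually paid

    def send(self, client: str, server: str) -> None:
        # Reuse an existing pooled connection to this server if one is
        # open; otherwise open one and keep it for later requests.
        if self.pooled[server] == 0:
            self.pooled[server] = 1
            self.connections_created += 1
        # The request then rides the pooled connection; nothing is torn down.

pool = ReusePool()
for client in ("alice", "bob", "carol", "dave"):
    for _ in range(4):                     # four requests per client
        pool.send(client, "app-server-1")

# Sixteen requests, one back-end connection ever created:
print(pool.connections_created)   # -> 1
```

A real pool holds multiple concurrent connections per server and handles interleaved responses; the sketch only shows why setup/teardown time drops out of the picture.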
Not too bad for a problem that normally requires modification of every web and application server that will support SPDY. As alluded to above, if the application servers are remote from the SPDY gateway, the benefits are even more pronounced, simply due to latency on the back end. All the benefits of both SPDY and OneConnect, and you'll be done before lunch – far better than loading modules into every web server or upgrading every app server. Alternatively, you could continue to support only HTTP, but watching the list of clients that transparently support SPDY, the net result of doing so is very likely to be that customers gravitate to competitors whose websites seem faster.