Does Cloud Solve or Increase the 'Four Pillars' Problem?
It has long been said – often by this author – that there are four pillars to application performance: memory, CPU, network, and storage. As soon as you resolve one in response to application response times, another becomes the bottleneck, even if you are not hitting that bottleneck yet.

For a bit more detail: memory consumption matters because it drives swapping in modern operating systems. CPU utilization matters because, regardless of OS, there is a magic line after which performance degrades radically. Network throughput matters because applications have to communicate over the network, and blocking or not (almost all network code today is blocking), the information requested over the network is necessary and will eventually keep code from continuing to execute. Storage matters because IOPS count when writing to or reading from disk (or when the OS swaps memory out and back in).

These four have long been relatively easy to track, and the relationship is easy to spot: when you resolve one problem, one of the others becomes the "most dangerous" to application performance. Historically, though, you've always had access to the hardware. Even in highly virtualized environments, these items could be considered at both the host and guest level, because both individual VMs and the entire system matter.

When moving to the cloud, the four pillars become much less manageable. How much less depends a lot upon your cloud provider, and how you define "cloud". Put in simple terms, if you are suddenly struck blind, that does not change what's in front of you, only your ability to perceive it. In the PaaS world, you have only the tools the provider offers to measure these things, and are urged not to think of the impact that host machines may have on your app. But they do have an impact. In an IaaS world you have somewhat more insight, but as others have pointed out, less control than in your datacenter.

(Picture courtesy of Stanley Rabinowitz, Math Pro Press.)
In the SaaS world, assuming you include that in "cloud", you have zero control and very little insight. If your app is not performing, you'll have to talk to the vendor's staff to (hopefully) get them to resolve issues.

But is the problem any worse in the cloud than in the datacenter? I would have to argue no. Your ability to touch and feel the bits is reduced, but the actual problems are not. In a pure-play public cloud deployment, the performance of an application is heavily dependent upon your vendor, but the top-tier vendors (Amazon springs to mind) can spin up copies as needed to reduce workload. This is not a far cry from one common performance trick used in highly virtualized environments: bring up another VM on another server and add it to load balancing. If the app is poorly designed, the net result is not that you're buying servers to host instances; it is that you're buying instances directly.

This has implications for IT. The reduced up-front cost of using an inefficient app – no matter which of the four pillars it is inefficient in – means that IT shops are more likely to tolerate inefficiency, even though in the long run the cost of paying monthly may be far more than the cost of purchasing a new server, simply because the budget pain is reduced.

There are a lot of companies out there offering information about cloud deployments that can help you to see, if you feel blind. Fair disclosure: F5 is one of them, and I work for F5. That's all you're going to hear on that topic in this blog. While knowing does not always directly correlate to taking action, and there is some information that only the cloud provider could offer you, knowing where performance bottlenecks are does at least give some level of decision-making back to IT staff.
If an application is performing poorly, looking into what appears to be happening (you can tell network bandwidth, VM CPU usage, VM IOPS, etc., but not what's happening on the physical hardware) can inform decision-making about how to contain the OpEx costs of cloud.

Internal cloud is a much easier play: you still have access to all the information you had before cloud came along, and generally the investigation is similar to that used in a highly virtualized environment. From a troubleshooting-performance-problems perspective, it's much the same. The key with both virtualization and internal (private) clouds is that you're aiming for maximum utilization of resources, so you will have to watch for the bottlenecks more closely – you're "closer to the edge" of performance problems, because you designed it that way. A comprehensive logging and monitoring environment can go a long way in all cloud and virtualization environments toward staying on top of issues that crop up – particularly in a large datacenter with many apps running. And developer education on how not to be a resource hog is helpful for internally developed apps. For externally developed apps, the best you can do is ask for sizing information and then test those assumptions before buying.

Sometimes, cloud simply is the right choice. If network bandwidth is the prime limiting factor, and your organization can accept the perceived security/compliance risks, for example, the cloud is an easy solution – bandwidth in the cloud is either not limited, or limited only by your willingness to write a monthly check to cover usage. Either way, it's not an Internet connection upgrade, which can be dastardly expensive not just at install, but month after month.

Keep rocking it. Get the visibility you need, don't worry about what you don't need.

Related Articles and Blogs:

Don MacVittie – Load Balancing For Developers
Advanced Load Balancing For Developers. The Network Dev Tool
Load Balancers for Developers – ADCs Wan Optimization ...
Intro to Load Balancing for Developers – How they work
Intro to Load Balancing for Developers – The Gotchas
Intro to Load Balancing for Developers – The Algorithms
Load Balancing For Developers: Security and TCP Optimizations
Advanced Load Balancers for Developers: ADCs - The Code
Advanced Load Balancing For Developers: Virtual Benefits

Don MacVittie – ADCs for Developers
Devops Proverb: Process Practice Makes Perfect
Devops is Not All About Automation
1024 Words: Why Devops is Hard
Will DevOps Fork?
DevOps. It's in the Culture, Not Tech.

Lori MacVittie – Development and General
Devops: Controlling Application Release Cycles to Avoid the ...
An Aristotlean Approach to Devops and Infrastructure Integration
How to Build a Silo Faster: Not Enough Ops in your Devops

The BYOD That is Real.
Not too long ago I wrote about VDI and BYOD, and how their hype cycles were impacting IT. In that article I was pretty dismissive of the corporate-wide democratization of IT through BYOD, and I stand by that. Internally, it is just not a realistic idea unless and until management toolsets converge. But that's internally. Externally, we have a totally different world.

If you run a website-heavy business like banking or sales, you're going to have to deal with the proliferation of Internet-enabled phones and tablets, because they will hit your websites, and customers will expect them to work. Some companies – media companies tend to do a lot of this, for example – will ask you to download their app to view web pages. That's ridiculous; just display the page. But some companies – again, banks are a good example – have valid reasons to want customers to use an app to access their accounts. The upshot is that any given app will have to support at least two platforms today, and that guarantees nothing a year from now. But it does not change the fact that one way or another, you're going to have to support these devices over the web.

There are plenty of companies out there trying to help you. Appcelerator offers a cross-platform development environment that translates from JavaScript into native Objective-C or Java, for example. There are UI design tools available on the web that can output both formats, but they are notoriously short on source code and custom graphics – still good for prototyping. And the environments allow you to choose an HTML5 app, a native app, or a hybrid of the two, allowing staff to choose the best solution for the problem at hand.

And then there is the network. It is not just a case of delivering a different format to the device; it is a case of optimizing that content for delivery to devices with smaller memory space, slower networks, and slower CPU speeds. That's worth thinking about.

There's also the security factor.
Mobile devices are far easier to misplace than a desktop, and customers are not likely to admit their device is stolen until they've looked everywhere they might have left it. In the case (again) of financial institutions, if credentials are cached on the device, this is a recipe for disaster. So it is not only picking a platform and an application style; it is coding to the unique characteristics of the mobile world.

Of course, optimization is best handled at the network layer by products like our WebAccelerator, because it's what they do, and they're very good at optimizing content based upon the target platform. Security, as usual, must be handled in several places. Checking that the device is not in a strange location (as I talked about here) is a good start, but not allowing the username and password to be cached on the device is huge too.

So while you are casting a skeptical look at BYOD inside your organization, pay attention to customers' device preferences. They're hitting the web on mobile devices more and more each month, and their view of your organization will be hugely impacted by how your site and/or apps respond. So invest the time and money, be there for them, so that they'll come back to you. Or don't. Your competitors would like that.

Random Acts of Optimization.
When I first embarked on my application development career, I was a code-optimization junkie. Really – making things faster, more efficient, the tightest they could get was a big deal to me. That routine you wrote to solve a one-off problem often becomes the core routine used by applications across the infrastructure, so writing tight code was (to me) important. The industry obviously didn't agree with me, since now we run mostly interpreted languages over the network, but that was then, this is now, and all.

The thing is that performance still matters; it has just changed location. The amount of overhead in the difference (in C/C++) between if() else and (x?y:z) is not so important anymore, unless that particular instruction is being executed a whole lot. The latency introduced to the network by all of those devices between server and client is far larger than the few clock cycles' difference between these two constructs. There are still applications where optimized code really makes a difference (mostly in embedded, where all resources are less than even the tablet space), but with ever-shrinking form factors and increasing resources, even those instances are going away, slowly but surely. The only place I've heard of that really needs a high level of source optimization in recent months is high-speed transactions in the financial services sector.

Simply put, if your application is on the network, the organization will get more out of spending networking-staff man-hours improving network performance than spending developer man-hours doing the same. There is still a lot of app optimization that needs to go on – databases are a notorious area where a great DBA can make your application many times faster – but the network is impacting the application many times in its back-and-forth, and it is impacting all applications, including the ones you don't have the source to.
But there are a lot of pieces to application delivery optimization (ADO), and approaching it piecemeal has no better result than approaching application optimization piecemeal. Just because you put a load balancer in front of your server and fired up a few more VMs behind the load balancer to share the load does not mean that your application is optimized. In some instances, that is the best solution, but in most cases a more thorough Application Delivery Network approach is required. Making the application more responsive by load balancing does not decrease the amount of data the application is sending over your Internet connection, does not optimize delivery of the application over the wire to make it faster on the client end, does not direct users to the geographically closest or least utilized datacenter/cloud, does not… well, do a lot of things. Exactly the same as optimizing your application's code won't help a bit if the database is the slowest part of the application.

So I'll recommend a holistic approach (I hate that phrase, but how else do you politely say "look at every friggin' thing on your network"?) that focuses on application serving and application delivery. And if you're in multiple datacenters, with data having to traverse the Internet behind your application as well, then back-end optimizations too. It's not just about throwing more virtuals at the problem, and most of us know it at this point. The user experience is, in the end, what matters most, and there are plenty of places other than your app that can dog performance from the user perspective. Look into compression and caching, TCP optimizations, application-specific delivery tweaks, back-end optimizations, and in some cases, code optimizations. Check the performance hit that every device between your servers and the Internet introduces. Optimize everything. And, like writing tight code, it will become just the way you do things.
Once it is ingrained that you check all the places your application performance can suffer, it's less to worry about, because you'll configure things with deployment. Call it DevOps if you will, but make it part of your normal deployment model, and review things whenever there's a change. It's a complex beast, the enterprise network, and it's not getting less so. Use templates (like F5's iApps) to provision the network bits correctly for you. Taking the F5 example, there is an iApp for deploying SharePoint and a different one for Exchange. They take care of the breadth of issues that can speed delivery of each application; you just answer a few questions. I am unaware of any of our competitors having a similar solution, but it is only a question of time, so if you're not an F5 customer, ask your sales representative what the timeline for delivery of similar functionality is. I'm not an expert on our competition – who knows, maybe they have rolled something out already. Even if not, you can make checklists much like F5 Application Guides and F5 Deployment Guides, then use them to train new employees and make certain you've set everything up to your liking.

Generally speaking, faster is better on any given network, so optimization is something you'll have to worry about even if you're not thinking about it today. Hope this helps a little in understanding that there's more to it than load balancing. But if not, at least I got to write about optimizing C source.

Related Articles and Blogs:

F5 Friday: F5 Application Delivery Optimization (ADO)
The "All of the Above" Approach to Improving Application Performance
Interop 2012 - Application Delivery Optimization with F5's Lori ...
The Four V's of Big Data
DevCentral Interviews | Audio - Application Delivery Controllers
F5 News - Unified Application Delivery
Intercloud: The Evolution of Global Application Delivery
Audio White Paper - Application Delivery Hardware A Critical ...
Who owns application delivery meta-data in the cloud?
The Application Delivery Spell Book: Detect Invisible (Application ...

On the Trading Floor and in the QA Lab
#fsi problems are very public, but provide warning messages for all enterprises.

The recent troubles in High Frequency Trading (HFT) – the problems on the NASDAQ over the debut of Facebook, the Knight Trading $400 million USD loss, among others – are a clear warning bell to High Frequency Trading organizations. The warning comes in two parts: "Testing is not optional", and "Police yourselves or you will be policed". Systems glitches happen in every industry – we've all been victims of them – but Knight in particular has been held up as an example of rushing to market and causing financial harm. Not only did the investors in Knight itself lose most of their investment (between the impact on the stock price and the dilution of shares their big-bank bailout entailed, it is estimated that their investors lost 80% or more in a couple of weeks), but when the price of a stock fluctuates dramatically – particularly a big-name stock, which most of the hundred they were overtrading were – it creates winners and losers in the market. For every person that bought cheap and sold high, there was a seller at the low end and a buyer at the high end.

Regulatory agencies and a collection of university professors are reportedly looking into an ISO-9000-style quality control system for the HFT market. One more major glitch could be all it takes to send them to the drafting table, so it is time for the industry itself to band together and put controls in place, or allow others to dictate controls.

Quality assurance in every highly competitive industry has this problem. It is a cost center, and while everyone wants a quality product, many are annoyed by the "interference" QA brings into the software development process. This can be worse in a highly complex network like the one HFT or a large multi-national requires, because replicating the network for QA purposes can be a daunting project.
This is somewhat less relevant in smaller organizations, but certainly there are mid-sized companies with networks every bit as complex as large multi-nationals'. Luckily, we have reached a point in time where a QA environment can be quickly configured and reconfigured, where testing is more a focus on finding quality problems with the software than on configuring the environment – running cables, etc. – which has traditionally cost QA for networked applications a lot of time, or a lot of money maintaining a full copy of the production network.

From this point forward, I will mention F5 products by name. Please feel free to insert your favorite vendor's name if they have a comparable product. F5 is my employer, so I know what our gear's capabilities are; for competitors, that is less true, so call your sales folks and ask them if they support the functionality described. Wouldn't hurt to do that with F5 either – our salespeople know a ton, and can clarify anything that isn't clear in this blog.

In the 21st century, testing and virtualization go hand-in-hand. There are a couple of different forms of network virtualization that can help with testing, depending upon the needs of your testing team and your organization. I refer to them as QA testing and performance testing; think of them as "low-throughput testing" and "high-throughput testing". If you're not testing performance, you don't need to see a jillion connections a second, but you do need to see all of the things the application does, and make certain they match requirements (more often design, but that's a different blog post concerning what happens if the design doesn't adequately address requirements and testing is off of design…).

Quality Assurance

For low-throughput testing, virtualization has been king for a good long while, with only cloud even pretending to challenge the benefits of a virtualized environment. Since "cloud" in this case is simply IaaS running VMs, I see no difference for QA purposes.
This example could be in the cloud or on your desktop in VMs. Dropping a virtual Application Delivery Controller (vADC) into the VM environment will provide provisioning of networking objects in the same manner as is done in the production network. This is very useful for testing multiple-instance applications for friendliness. It doesn't take much of a mistake to turn the database into the bottleneck in a multiple-instance web application. Really – I've seen it happen. QA testing can catch this type of behavior without the throughput of a production network, if the network is designed to handle load-balanced copies of the application.

It is also very useful for security screening, assuming the vADC supports a Web Application Firewall (WAF), like the BIG-IP with its Application Security Manager. While testing security through a WAF is useful, the power of the WAF really comes into play when a security flaw is discovered in QA testing. Many times, that flaw can be compensated for with a WAF, and having one on the QA network allows staff to test with and without the WAF. Should the WAF provide cover for the vulnerability, an informed decision can then be made about whether the application deployment must be delayed for a bug fix, or whether the WAF will be allowed to handle protection of that particular vulnerability until the next scheduled update. In many cases, this saves both time and money.

In cases of heavy back-end transport impacting the performance of web applications – like mirroring database calls to a remote datacenter – the use of a WAN Optimization Manager can be evaluated in test to see if it helps performance, without making changes to the production network. Testing network object configurations is easier too.
If the test environment is set up to mirror the production network, the only difference being that testing is 100% virtualized, then the exact network objects – load balancing, WAN optimization, application acceleration, security, and WAF – can all be configured in QA test exactly as they will be configured in production. This allows for thorough testing of the entire infrastructure, not just the application being deployed.

Performance Testing

For high-throughput testing, the commodity hardware that runs VMs can be a limiting factor, in the sense that the throughput in test needs to match the expected usage of the application at peak times. For these scenarios, organizations with high-volume, mission-critical applications to test can run the same exact testing scenario using a hardware chassis capable of multi-tenancy. As always, I work for F5, so my experience is best couched in F5 terms. Our VIPRION systems are capable of running multiple different BIG-IP instances per blade. That means that in test, the exact same hardware that will be used in production can be used for performance evaluation. Everything said above about QA testing – WAF, application acceleration, testing for bottlenecks – applies. The biggest difference is that the tests are on a physical machine, which might make testing to the cloud more difficult, as the machine cannot be moved into the cloud environment. To resolve this particular issue, a hybrid model can be adopted: VIPRION on the datacenter side and BIG-IP VE on the cloud side, in the case of F5. Utilizing management tools like the iApps Analytics built into F5 Enterprise Manager (EM) allows testers to see which portion of the architecture is limiting performance, and saves man-hours searching out problems.

It's Still About The App and the Culture

In the end, the primary point of testing is to safeguard against coding errors that would cause real pain to the organization, and to get them fixed before the application goes live.
The inclusion of network resources in testing is a reflection of the growing complexity many web-based applications are experiencing in supporting infrastructure. Just as you wouldn't test a mainframe app on a PC, testing a networked app outside of the target environment is not conclusive.

But the story at Knight Trading does not appear to be one about testing, but rather culture. In a rush to meet an artificial deadline, they appear to have cut corners and rushed changes in the night before. You can't fix problems with testing if you aren't testing. Many IT shops need to take that to heart. The testers I have worked with over the years are astounding folks with a lot of smarts, but all suffer from the problem that their organization doesn't value testing at the level it does other IT functions. Dedicated testing time is often in short supply and the first thing to go when deadlines slip. Quite often, testers are developers who have the added responsibility of testing. But many of us have said over the years, and will continue to say: testing your own code is not a way to find bugs. Don't you think – really think – that if a developer thinks of it in testing, he/she probably thought of it during development? While those problems not thought of in development can certainly be caught, a fresh set of eyes setting up tests outside the context of developer assumptions is always a good idea.

And Yeah, It's Happened to Me

Early in my career, I was called upon to make a change to a software package used by some of the largest banks in the US. I ran the change out in a couple of hours, and I tested it; there was pressure to get it out the door, so the rock stars in our testing department didn't even know about the change, and we delivered it electronically to our largest banking customer – who was one of the orgs demanding the change. In their environment, the change overwrote the database our application used. It literally destroyed a year's worth of sensitive data.
Thankfully, they had followed our advice and backed up the database first. While they were restoring, I pawed through the change line-by-line and found that the error destroying their database was occurring in my code too, just not doing any real harm (it was writing over a different file on my system), so I hadn't noticed it. Testing would have saved all of us a ton of pain, because it would have put the app in a whole new environment. But this was a change "that they're demanding NOW!" The bank in question lost the better part of a day restoring things to normal, and my company took an integrity hit with several major customers.

I learned then that a few hours of testing of a change that took a few hours to write is worth the investment, no matter how much pressure there is to deliver. Since then, I have definitely pushed to have a test phase with individuals not involved with development running the app. And of course, the more urgent the change, the more I want at least one person to install and test outside of my dev machine. And you should too.

Related Articles and Blogs:

There is more to it than performance.
DEFCON 20 Highlights
How to Develop Next-Gen Project Managers

Speed Matters, but Dev Speed or App Speed?
In running, speed matters. But how the speed matters is very important, and what type of running is your forte should determine what you are involved in. As a teen, I was never a very good sprinter. I just didn't get up to speed fast enough, and was consistently overcome by more nimble opponents. But growing up on a beach was perfect conditioning for cross-country track. Running five miles in beach sand that gave way underfoot and drained your energy much faster than it allowed you to move forward was solid practice for running through the woods mile after mile. And I wasn't a bad runner – not a world champion, to be sure – but I won more often than I lost when the "track" was ten or fifteen miles through the woods.

The same is true of mobile apps, though most organizations don't seem to realize it yet. There are two types of mobile apps: those that are developed for sprinting, by getting them to market rapidly, and those that are developed for the long haul, by implementing solutions based around the platform in question. By "platform" in this case, I mean the core notions of "mobile" apps – wireless, limited resources, touch interfaces, and generally different use cases than a laptop or desktop machine. It is certainly a more rapid go-to-market plan to have an outsourcer of some kind dump your existing HTML into an "app", or develop a little HTML5 and wrap it in an "app", but I would argue that the goals of such an endeavor are short-term. Much like sprinting, you'll get there quickly, but then the race is over. How the judges (customers in this case) gauge the result is much more important.

There are three basic bits to judging mobile apps: ease of use, which is usually pretty good in a wrapped-HTML or "hybrid" app; security, which is usually pretty horrendous in a hybrid app; and performance, which is usually pretty horrendous in a hybrid app.
The security bit could be fixed with some serious security folks looking over the resultant application, but the performance issue is not so simple. You see, the performance of a hybrid application is a simple equation: speed of the original web content + overhead of a cell phone + overhead of the app wrapper around the HTML. Sure, you'll get faster development time wrapping HTML pages in an app, but you'll get worse long-term performance. It's the same issue you get when a sprinter tries to run cross-country: they rock for the first while, but burn out before the cross-country racers are up to speed. You can use tools like our Application Delivery Optimization (ADO) engine to make the wrapped app perform better, but that's not a panacea. Longer term, it will be necessary to develop a more targeted, comprehensive solution. Because when you need a little bit of data and could wrap display functionality around it on the client side, transferring that display functionality and then trying to make it work in a client is pure overhead – overhead that must be transmitted on a slower network, over what is increasingly a pay-as-you-go bandwidth model. Even if the application somehow performs adequately, apps that are bandwidth hogs are not going to be repaid with joy as increasing numbers of carriers drop unlimited bandwidth plans.

So before you shell out the money for an intermediate step, stop and consider your needs. Enterprises are being beaten about the head and shoulders with claims that if you don't have a mobile app, you're doomed. Think really carefully before you take the chicken-little mentality to heart. Are your customers demanding an app? If so, are they demanding it right this instant? If so, perhaps a hybrid app is a good option – if you're willing to spend whatever it costs to get it developed, only to rewrite the app native in six or ten months. Take a look at the Play store or the Apple store, and you'll see that just throwing an app out there is not enough.
You need to develop a method to let your customers know it's available, and it has to offer them… something. If you can't clearly define both of those requirements, then you can't clearly define what you need, and you should take a deep breath while considering your options.

Let's say you have a web-based calculator for mortgage interest rates. It is calling web services to do the interest-rate calculations. For not much more development time, it is possible to build a very sweet version of the same calculator in native mode for either iPhone or Android (depending upon your platform priorities, it could be either), with a larger up-front investment but less long-term investment, by re-using those web-service calls from within the mobile app. A little more money now, and no need to rewrite for better performance or to target mobile in the future? Take the little extra hit now and do it right. There are plenty of apps out there, and unless you can prove you're losing money every day over lack of a mobile app, no one will notice that your application came out a month or two later – but they will notice how cool it is.

While we're on the topic, I hate to burst any bubbles, but every single website doesn't need a dedicated app. We have to get over the hype and get to reality. Most people do not want 50 reader apps on their phone, each one just a simple hybrid shell to allow easier reading of a single website. They just don't. So consider whether you even need an app. Seriously. If the purpose of your app is to present your website in a different format, well, news flash: all mobile devices have this nifty little tool called a web browser that's pretty good at presenting your website.

Of course, when you do deploy apps, or even before you do, consider F5's ADO and security products. They do a lot with mobile that is specific to the mobile world. App development is no simple task, and good app development, like all good development, will cost you money.
Make the right choices and drive the best app you can out to your customers, because they’re not very forgiving of slow or buggy apps, and they’re completely unforgiving of apps that mess up their mobile devices. And maybe one day soon, if we’re lucky, we’ll have a development toolkit that works well and delivers something like this:

Related Articles and Blogs
- F5 Solutions for VMware View Mobile Secure Desktop
- Drama in the Cloud: Coming to a Security Theatre Near You
- Scary App Games. SSL without benefit.
- Will BYOL Cripple BYOD?
- Four Best Practices for Reducing Risk in the Cloud
- Birds on a Wire(less)
- 22 Beginner Travel Tips
- Dreaming of Work
- 20 Lines or Less #59: SSL Re-encryption, Mobile Browsing, and iFiles
- Scaling Web Security Operations with DAST and One-Click Virtual Patching
- BIG-IP Edge Client v1.0.4 for iOS

F5 Friday. Speedy SPDY
#ADO, #Stirling, #fasterapp – a SPDY implementation that is as fast and adaptable as needed.

**I originally wrote this more than a month ago. Coworkers have covered this topic extensively, but I thought I’d still get it posted for those who read my blog and missed it.

Remember the days when Internet connections were inherently slow, and browser usage required extreme patience? For many people – from certain geographic regions to mobile phone Internet users – that world of waiting has come around again, and they’re not as patient as people used to be, largely because instant communication has become the standard and expectations have risen with it. As with all recurring themes, new solutions are coming along to resolve these problems, and F5 is staying on top of them, helping IT better serve the needs of the business, and the customer.

In November of 2009, Google announced the SPDY protocol to improve the performance of browser-server communications. Since then, implementations of SPDY have cropped up in both Chrome and Firefox, which according to w3schools.com comprise over 70% of the global browser market. The problem is that web server and web application server implementations lag far behind client adoption. While the default is for SPDY to drop to HTTP if either client or server does not have a SPDY implementation, there are clear-cut benefits to SPDY that IT is missing out on. This is the result of a convergence of issues that will eventually be resolved on their own, most notably that it is easy to get two open source browsers to support your standard and attain market penetration, but much harder to convince tens of thousands of IT folks to disrupt their normal operations while implementing a standard that isn’t strictly necessary for most of them. Eventually, SPDY support will come pre-packaged in most web servers, and if it is something your organization needs, those web servers will be the first choice for new projects.
Until then, clients with slow connections (including all mobile clients) will suffer longer delivery timeframes. What is required is a solution that allows for SPDY support without disrupting the flow of normal operations; something that can be implemented quickly and easily, without the hassle of swapping web servers, installing modules, making configuration changes, etc. And of course that solution should be comprehensive enough to serve the most demanding environments. As of now, that requirement is fulfilled by F5. F5 WebAccelerator now supports SPDY as a proxy for all of the servers you choose to turn SPDY support on for.

In the normal course of SPDY operations, the client and the server exchange information about whether they support SPDY or not, and if either does not, then HTTP is used for communication between the browser and the web server. BIG-IP WebAccelerator acts as a proxy for web servers. It terminates the connection, responds that the server behind it does indeed support SPDY, then translates requests from the browser into HTTP before passing them to the server, and responses from the server into SPDY before passing them to the client. The net result is that on the slowest part of the connection – the Internet and wireless device “last mile” – SPDY is being used, while there are zero changes to the application infrastructure. And because the BIG-IP product family specializes in per-application configuration, you can pick and choose which applications running behind a BIG-IP device actually support SPDY, should the need arise. Combined with the whole collection of other optimizations that WebAccelerator implements, the performance of web applications to any device can greatly benefit without retrofitting the entire network.

Related Articles and Blogs
- The HTTP 2.0 War has Just Begun
- The Four V’s of Big Data
- The “All of the Above” Approach to Improving Application Performance
- Mobile Apps. New Game, New (and Old) Rules
- F5 Friday: Ops First Rule

New Communications = Multiplexification
I wrote a good while back about the need to translate all the various storage protocols into one that could take root and simplify the lives of IT. None of the ones currently being hawked seems to be making huge inroads in the datacenter; all have some uses, none is unifying. Those peddling the latest, greatest thing of course want to sell you on their protocol because they hope to be The One, but it’s not about selling, it’s about being useful. At the time, FCoE was the new thing. I don’t get much chance to follow storage like I used to, but I haven’t heard of anything new since the furor over FCoE started to calm down, so I presume the market is still sitting there, with NAS split between two protocols and block storage split between many.

A similar fragmentation trend is going on in networking at the moment. There have always been a zillion transport standards, and as long as the upper layers can be uniform, working out how to fit your cool new satellite link into Ethernet is a simple problem from the IT perspective. Either the vendor solves the issue or they fail due to lack of usefulness. But the higher layers are starting to see fragmentation too, in the form of SPDY, Speed + Mobility, etc. In both of these cases, HTTP is being supplanted by something that requires configuration differences and is not universally supported by clients. And yet the benefits are such that IT is paying attention. IPv6 is causing similar issues at the lower layers, and it is worth mentioning here for a reason. The key, as Lori and I have both written, is that IT cannot afford to rework everything at once to support these new standards, but feels an imperative (for IP address space with IPv6, for web app performance with the HTTP-layer changes) to implement them whenever possible.

The best solution to these problems – where upgrading has its costs and failing to upgrade has other costs – is to implement a gateway.
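In sketch form, a gateway’s whole job is a pass-through-or-translate decision. Here is an illustrative model in Python – the function and return values are hypothetical, not any vendor’s API:

```python
def handle_connection(client_ip_version: int, server_ip_version: int) -> str:
    """Decide what a protocol gateway does with an incoming connection.

    Matching versions on both sides pass straight through untouched;
    a mismatch is translated by the gateway so neither end notices.
    """
    if client_ip_version == server_ip_version:
        return "pass-through"
    return "translate v{} -> v{}".format(client_ip_version, server_ip_version)


# A v6 client hitting a v4-only server gets translated; matched pairs pass through.
print(handle_connection(6, 4))  # translate v6 -> v4
print(handle_connection(4, 4))  # pass-through
```

The value of the gateway is entirely in that second branch: clients convert on their own schedule, servers convert on yours, and the gateway papers over the gap in between.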
F5’s IPv6 Gateway is one solution (other vendors have them too; I’ll talk about the one I know here, but assume it applies to the others and verify that with your vendor) that’s easy to talk about, because it is being utilized in IT shops to do just that. With the gateway implemented, sitting in front of your DC, it translates between IPv6 and IPv4, meaning that the datacenter can be converted at a sane pace, and support for IPv4 is not a separate stack that must be maintained while client adoption catches up. If a connection comes in to the gateway and both client and server speak IPv4, the connection is passed through. The same occurs if both client and server support IPv6. If the client and server have a mismatch, the gateway translates between them. That means you get support the day a gateway is deployed, and over time you can transfer your systems while maintaining support for all clients.

This type of solution works handily for protocols like SPDY too, offering the ability to say a server supports SPDY when in fact it doesn’t; the gateway does, and it translates between SPDY and HTTP. Deploying a SPDY gateway gives instant SPDY support to the web (and application) servers behind it, buying IT time to reconfigure those web servers to actually support SPDY. SPDY accelerates everything on the client side, and HTTP is only used on the faster server side, where the network is dedicated.

Faster has an asterisk by it, though. What if the app or web server is at a remote site? You’re going right back out onto the Internet and using unoptimized HTTP. In those cases – and other cases where network response time is slow – something is needed on the backend to keep those performance gains without finding the next bottleneck as soon as the SPDY gateway is deployed. F5 uses several technologies to improve backend communications performance, and other vendors have similar solutions (though ours are better, biased though I may be).
For F5’s part, secure tunnels, WAN optimization, and a very relevant feature of BIG-IP LTM called OneConnect all work together to minimize backend traffic. OneConnect is a cool little feature that minimizes the connections from the BIG-IP to the backend server by pooling and reusing them. This process does several things, but importantly, it takes setup and teardown time for connections out of the picture. So if a (non-SPDY) client makes four connections to get its data, the BIG-IP merges them with other requests to the same server and essentially multiplexes them. Funny thing is, this is one of the features of SPDY on the other side, with the primary difference that SPDY is client focused (it merges connections from the client), and OneConnect is server focused (it merges connections to the server). The client side is “all connections from this client”, while the server side is “all connections to this server (regardless of client)”, but otherwise they are very similar. This enters interesting territory, because now we’re essentially multi-multi-plexing. But we’re not. Here’s the simple sequence of events, using only a couple of clients and a generic server/application farm:

1. SPDY comes into the gateway as a single stream from the client.
2. The gateway translates it into HTTP’s multiple streams.
3. BIG-IP identifies the server the request is for.
4. If a connection already exists to that server, BIG-IP passes the request through the existing connection.
5. When responses are sent, the process is handled in reverse: responses come in over OneConnect and go out SPDY-encoded.

There is only a brief period of time where native HTTP is being communicated, and presumably the SPDY gateway and the BIG-IP are in very close proximity. The result is application communications that are optimized end-to-end, but the only changes to your application architecture are configuring the SPDY Gateway and OneConnect.
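The server-side half of that picture – OneConnect-style connection reuse – can be modeled in a few lines. This is a toy sketch, not the BIG-IP implementation; the class and method names are made up for illustration:

```python
class BackendPool:
    """Toy model of server-side connection reuse: requests from any number
    of clients to the same server share one back-end connection, so TCP
    setup/teardown is paid once per server rather than once per request."""

    def __init__(self):
        self.connections = {}   # server name -> shared connection handle
        self.setups = 0         # count of real connection setups performed

    def send(self, client, server, request):
        conn = self.connections.get(server)
        if conn is None:                 # first request for this server...
            self.setups += 1             # ...pays the setup cost, exactly once
            conn = "conn:" + server
            self.connections[server] = conn
        return (conn, client, request)   # later requests ride the same connection


pool = BackendPool()
for client in ("phone", "laptop", "tablet"):
    pool.send(client, "app-server-1", "GET /")
print(pool.setups)  # 1 -- three clients, one back-end connection
```

SPDY does the mirror image of this on the client side, which is why the two stack so neatly in the sequence above.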
Not too bad for a problem that normally requires modification of each web and application server that will support SPDY. As alluded to above, if the application servers are remote from the SPDY Gateway, the benefits are even more pronounced, just due to latency on the back end. All the benefits of both SPDY and OneConnect, and you will be done before lunch. Far better than loading modules into every web server or upgrading every app server. Alternatively, you could continue to support only HTTP, but watching the list of clients that transparently support SPDY, the net result of doing so is very likely to be that customers gravitate to your competitors, whose websites seem to be faster.

Related Articles and Blogs
- The Four V’s of Big Data
- The “All of the Above” Approach to Improving Application Performance
- Google SPDY Accelerates Mobile Web

Mobile Apps. New Game, New (and Old) Rules
For my regular readers: sorry about the long break. I thought I’d start back with a hard look at some seemingly minor infrastructure elements, and the history of repeating history in IT.

In the history of all things, technological and methodological improvements seem to dramatically change the rules, only in the fullness of time to fall back into the old set of rules with some adjustment for the new aspects. Military history has more of this type of “accommodation” than it has “revolutionary” change. While many people see nuclear weapons as revolutionary, many of the world’s largest cities were devastated by aerial bombardment in the years immediately preceding the drop of the first nuclear weapon: Hamburg, Tokyo, Berlin, Osaka, the list goes on and on. Nuclear weapons were not required for the level of devastation that strategic planners felt necessary. This does not change the hazards of the atomic bomb itself, and I am not making light of those hazards, but from a strategic, war-winning viewpoint, it was not a revolutionary weapon. Though scientifically and societally the atomic bomb certainly had a major impact across the globe and across time, from a warfare viewpoint strategic bombing was already destroying military production capability by destroying cities; the atomic bomb was just more efficient.

The same is true of the invention of rifled cannons. With the increased range and accuracy of rifled guns, it was believed that the warship had met its match, and while protection of ships went through fundamental changes, in the end rifled cannons increased the range of engagement but did not significantly tip the balance of power. Though in the in-between times, from when rifled cannons became commonplace to when armor plating became strong enough, there was a protection problem for ships and crews.

And the most obvious example, the tank, forced military planners and strategists to rethink everything.
But in the end, World War II as a whole was decided in the same manner as other continental or globe-spanning conflicts throughout history: with hordes of soldiers fighting over possession of land and destruction of the enemy. Tanks were a tool that often led to stunning victories, but in the cases of North Africa and Russia, it can be seen that many of those victories were illusory at best. Soldiers, well supplied and with sufficient morale, had to hold those gains, just like in any other war, or the gains were as vapor.

Technology – High Tech as we like to call it – is the other area with stunning numbers of “this changes everything” comparisons that just don’t pan out the way the soothsayers claim, largely because the changes are not so much revolutionary from a technology perspective as evolutionary. The personal computer may have revolutionized a lot of things in the world – I did just hop out to Google, search for wartime pictures of Osaka, find one on Wikipedia, and insert it into my blog in less time than it would have taken me to write to the National Archives requesting such a picture, after all – but since the revolution of the Internet we’ve had a string of “this changes everything” predictions that haven’t come true. I’ve mentioned some of them (like XML eliminating programmers) before, so I’ll stick to ones that I haven’t mentioned, by way of example.

SaaS is perhaps the best example that I haven’t touched on in my blog (to my memory, at least). When SaaS came along, there would be no need for an IT department. None. It would be going away, because everything would be SaaS driven. Or at least it would be made tiny. If there were an IT version of Mythbusters, they would have fun with that one, because now we have a (sometimes separate) staff responsible for maintaining the integration of SaaS offerings into our still-growing datacenters.

Osaka Bomb Damage – source Wikipedia

The newest version of the “everything is different! Look how it’s changed!” mantra is cellular network access to applications. People talk about how the old systems are not good enough and we must do things differently, etc. And as always, in some areas they are absolutely right. If you’ve ever hit a website that was designed without thought for a phone-sized screen, you know that applications need to take target screen size into account, something we haven’t had to worry about since shortly after the browser came along. But in terms of performance of applications on cellular clients, there is a lot we’ve done in the past that is relevant today.

Originally, a lot of technology on networks focused on improving performance. The thing is that the performance of a PC over a wired (or wireless) network has been up and down over the years as technology has shifted the sands under app developers’ feet. Network performance becomes the bottleneck and a lot of cool new stuff is created to get around that, only to find that now the CPU, or memory, or disk is the bottleneck, and resources are thrown that way to resolve problems. I would be the last to claim that cellular networks are the same as Ethernet or wireless Ethernet networks (I worked at the packet layer on CDMA smartphones long ago), but at a 50,000 foot view, they are “on the network” and they’re accessing applications served the same way as for any other client. While some of the performance issues with these devices are being addressed by new cellular standards, some of them are the same issues we’ve had with other clients in the past. Too many round trips, too much data for the connection available, repeated downloads of the same data… Of course these aren’t the only problems, but they’re the ones we already have fixes for.
Take NTLM authentication, for example. Back when wireless networks were slow, companies like F5 came up with tools to either proxy for, or reduce the number of round trips required by, authentication to multiple servers or applications. Those tools are still around, and are even turned on in many devices being used today. Want to improve performance for an employee who works on remote devices? Check your installed products, and check with your vendor, to find out if this type of functionality can be turned on.

How about image caching on the client? While less useful in the age of “Bring Your Own Device”, BYOD is not yet, and may never be, the standard. Setting image (or object) caching rules that make sense for the client on devices that IT controls can help a lot. Every time a user hits a webpage with the corporate logo on it, the image really doesn’t need to be downloaded again if it has been once. Lots of web app developers take care of this within the HTML of their pages, but some don’t, so again, see if you can manage this on the network somewhere. For F5 Application Acceleration products you can; I cannot speak for other vendors.

The list goes on and on. Anyone with five or ten years in the industry knows what hoops were jumped through the last time we went around this merry-go-round; use that knowledge while assessing other, newer technologies that will also help. The wheel doesn’t need to be reinvented, just reinforced – an evolutionary change from wooden spokes to a steel rim, maybe with chrome. While everyone is holding out for broad 4G deployments to ease the cellular device performance issue, specialists in the field are already saying that the rate of adoption of new cellular devices indicates that 4G will be overburdened relatively quickly, so this problem isn’t going anywhere. Time to look at solutions both old and new to make your applications perform on employee and customer cellular devices.

F5 participates in the Application Acceleration market.
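That object-caching rule boils down to a validator check: re-fetch only when the cached copy no longer matches what the server would send. A hedged sketch (the URL and version strings here are made up; real HTTP uses ETag/If-None-Match headers for the same idea):

```python
def needs_download(url, cache, server_etag):
    """Re-fetch an object (say, the corporate logo) only when the cached
    copy's validator no longer matches the server's current version."""
    return cache.get(url) != server_etag


cache = {}
logo = "/images/corp-logo.png"

if needs_download(logo, cache, "v1"):
    cache[logo] = "v1"                     # first page view: one download
print(needs_download(logo, cache, "v1"))   # every later page: False -- cache hit
print(needs_download(logo, cache, "v2"))   # logo actually changed: True -- re-fetch
```

The payoff is the middle line: after the first hit, the logo costs zero bytes on the slow cellular link until it genuinely changes.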
I do try to write my blogs such that it’s clear there are other options, but of course I think ours are the best. And there are a LOT more ways to accelerate applications than can fit into one blog, I assure you; a simple laundry list of the tools, configuration options, and features available on F5 products alone is the topic for a tome, not a blog.

Now for the subliminal messaging: Buy our stuff, you’ll be happier. How was that? Italics and all.

If you can flick a configuration switch on gear you’ve already paid for, do a little testing, and quickly help employees and/or customers who are having performance problems while other options are explored, then it is worth taking a few minutes to check into, right?

Related Articles and Blogs:
- The Encrypted Elephant in the Cloud Room
- Stripping EXIF From Images as a Security Measure
- F5 Friday: Workload Optimization with F5 and IBM PureSystems
- Secure VDI Sign On: From a Factor of Four, to One
- The Four V’s of Big Data

Advanced Load Balancing For Developers. The Network Dev Tool
It has been a while since I wrote an installment of Load Balancing for Developers – too long, I think – but never fear, this is the granddaddy of Load Balancing for Developers blogs, covering a useful bit of information about Application Delivery Controllers that you might want to take advantage of. For those who have joined us since my last installment, feel free to check out the entire list of blog entries (along with related blog entries) here, though I assure you that this installment, like most of the others, does not require you to have read those that went before.

ZapNGo! is still a growing enterprise, now with several dozen complex applications and a high availability architecture that spans datacenters and the cloud. While the organization relies upon its web properties to generate revenue, those properties have been going along fine with your Application Delivery Controller (ADC) architecture. Now, though, you’re seeing a need to centralize administration of a whole lot of functions. What worked fine separately for one or two applications is no longer working so well now that you have several development teams and several dozen applications, and you need to find a way to bring the growing inter-relationships under control before maintenance and hidden dependencies swamp you in a cascading mess of disruption.

With maintenance taking a growing portion of your application development man-hours, and a reasonably well positioned test environment configured with a virtual ADC to mimic your production environment, all you need now is a way to cut those maintenance man-hours and reduce the amount of repetitive work required to create or update an application. Particularly to update an application, because that is a constant problem, while creating is less frequent. With many of the threats that could leave your ZapNGo application known as ZapNGone eliminated, now it is efficiencies you are after. And believe it or not, these too are available in an ADC.
Not all ADCs are created equal, but this discussion will stay on topics that most ADCs can handle, and I’ll mention it when I stray from generic into specific – which I will do in one case, because only one vendor supports one of the tools you can use. All of the others should be supported by whatever ADC vendor you have, though as always, check with your vendor directly first, since I’m not an expert in the inner workings of every one.

There is a lot that many organizations do for themselves, and the array of possibilities is long – from implementing load balancing in source code to security checks in the application, the boundaries of what is expected of developers are shaped by an organization, its history, and its chosen future direction. At ZapNGo, the team has implemented a virtual test environment that mirrors production as closely as possible, so that code can be implemented and tested in the way it will be used. They use an ADC for load balancing, so that they don’t have to rewrite the same code over and over, and they have a policy of utilizing a familiar subset of ADC functionality on all applications that face the public.

The company is successful and growing, but as always happens to companies in that situation, the pressures upon them are changing just by virtue of their growth. There are more new people who don’t yet have intimate knowledge of the code base, network topology, or security policies, whatever their area of expertise is. There are more lines of code to maintain, while new projects are being brought up at a more rapid pace and with higher priorities. (I’ve twice lived through the “Everything is high priority? Well this is highest priority!” syndrome while working in IT. Thankfully, most companies grow out of that fast when it’s pointed out that if everything is priority #1, nothing is.)
Timelines to complete projects – be they new development, bug fixes, or enhancements – are stretching longer and longer as the percentage of gurus in the company goes down and the complexity of the code, and of the architecture it runs on, goes up. So what is a development manager to do to increase productivity? Teaming newer developers with people who’ve been around since the beginning is helping, but those seasoned developers are a smaller and smaller percentage of the workforce, while the volume of work has slowly removed them from some of the many products now under management. Adopting coding standards and standardized libraries helps increase experience portability between projects, but doesn’t do enough.

Enter offloading to the ADC. Some things just don’t have to be done in code, and if they don’t have to be, at this stage in the company’s growth, IT management at ZapNGo (that’s you!) decides they won’t be. There just isn’t time for non-essential development anymore. Utilizing a policy management tool and/or an Application Firewall on the ADC can improve security without increasing the code base, for example, and that shaves hours off of maintenance projects while standardizing on one or a few implementations that are simply selected on the ADC. Implementing Web Application Acceleration protocols on the ADC means that less in-code optimization has to occur. Performance is no longer purely the role of developers (though it is still a concern; no Web Application Acceleration tool can make a loop that runs for five minutes run faster): they can let the Web Application Acceleration tool shrink the amount of data being sent to the users’ browser for them. Utilizing a WAN Optimization ADC tool can improve the performance of bulk copies or backups to a remote datacenter or cloud storage… The list goes on and on.
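Even the simplest of these offloads – shrinking the bytes before they cross the slow link – is easy to picture. Here the standard library’s zlib stands in for whatever the acceleration tool actually does on the wire:

```python
import zlib

# Repetitive markup (and most HTML is repetitive) compresses dramatically,
# which is one reason payload shrinking belongs on the ADC, not in app code.
page = ("<div class='row'><span>item</span></div>\n" * 500).encode()

wire_bytes = zlib.compress(page)
print(len(page), "bytes uncompressed")
print(len(wire_bytes), "bytes on the wire")
assert zlib.decompress(wire_bytes) == page   # the browser still sees the same page
```

The application never knows it happened: the ADC compresses on the way out, the browser decompresses on arrival, and no developer wrote a line of it.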
The key is that the ADC enables a lot of opportunities for App Dev to be more responsive to the needs of the organization by moving repetitive tasks to the ADC and standardizing them. And a heaping bonus is that it also does this for Operations, with a different subset of functionality, meaning one toolset gives both App Dev and Operations a bit more time out of their day for servicing important organizational needs. Some would say this is all part of DevOps, some would say it is not. I leave those discussions to others; all I care about is that it can make your apps more secure, fast, and available, while cutting down on workload. And if your ADC supports an SSL VPN, your developers can work from home when necessary. Or more likely, if your code is your IP, a subset of your developers can. That makes ZapNGo more responsive, easier to maintain, and more adaptable to the changes coming next week/month/year. That’s what ADCs do. And they’re pretty darned good at it.

That brings us to the one bit that I have to caveat as F5 only, and that is iApps. An iApp is a constructed configuration tool that asks a few questions and then deploys all the bits necessary to set up an ADC for a particular application. Why do I mention it here? Well, if you have dozens of applications with similar characteristics, you can create an iApp Template and use it to rapidly bring new applications, or new instances of applications, online. And since it is abstracted, these iApp Templates can be designed such that App Dev, or even the business owner, is able to operate them, meaning less time worrying about what network resources will be available and how they’re configured, and less time waiting for Operations to have time to implement them (in an advanced ADC that is being utilized to its maximum in a complex application environment, this can be hundreds of networking objects to configure, all encapsulated into a form). Less time on the project timeline, more time for the next project.
Or for the post-deployment party. One of the two. That’s it for the F5-only bit. And knowing that all of these items are standardized means fewer things to get mis-configured, and more surety that it will all work right the first time. As with all of these articles, that offers you the most important benefit… a good night’s sleep.

Like Cars on a Highway.
Every once in a while, as the number of people following me grows (thank you, each and every one), I like to revisit something that is fundamental to the high-tech industry but is often overlooked or not given the attention it deserves. This is one of those times, and the many-faceted nature of any application infrastructure is the topic. While much has changed since I last touched on this topic, much has not, leaving us at an odd inflection point.

When referring to movies that involve a lot of CGI, my oldest son called it “the valley of expectations”: that point where you know what you’d like to see and you’re so very close to it, but the current offerings fall flat. He specifically said that the Final Fantasy movie was just such a production. The movie came so close to realism that it was disappointing, because you could still tell the characters were all animations. I thought it was insightful, but still enjoyed the movie.

It is common to use the “weakest link in the chain” analogy whenever we discuss hardware, because you have parts sold by several vendors that include parts manufactured by several more vendors, making the entire infrastructure start to sound like the “weakest link” problem. That holds whether you’re discussing individual servers and their performance bottlenecks (which vary from year to year, depending upon what was most recently improved), or network infrastructures, which vary with a wide variety of factors, including those servers and their bottlenecks. I think a better analogy is a busy freeway. My reasoning is simple: you have to worry about the manufacture and operation of each vehicle (device) on the road, the road (wire) itself, interchanges, road conditions, and toll booths. There is a lot going on in your infrastructure, and “weakest link in the chain” is not a detailed enough comparison.
In fact, if you’re of a mathematical bent, the performance of your overall architecture can be summarized by a single equation over all of its parts, where n is the number of infrastructure elements required for the application to function correctly and deliver information to the end user. From databases to Internet connections to client bandwidth, it’s all jumbled up in there. Even such an equation isn’t perfect, simply because some performance degradation is so bad that it drags down the entire system, and other issues are not obvious until the worst offender is fixed. This is the case in the iterative improvement of servers… Today the memory is the bottleneck; once it is fixed, the next bottleneck is disk; once that is improved, the next bottleneck is network I/O… on and on it goes, and with each iteration we get faster overall servers.

And interestingly enough, security is very much the same equation, with the caveat that only a subset of infrastructure elements is likely to be looked at for security, just because not everything is exposed to the outside world – for example, the database only needs to be considered if you allow users to enter data into forms that will power a DB query directly.

So what is my point? Well, simply put: when you are budgeting, items that impact more than one element – from a security or performance perspective – or more than one application should be prioritized over things that are specific to one element or one application. The goal of improving the overall architecture should trump the needs of individual pieces or applications, because IT – indeed, the business – is built upon the overall application delivery architecture, not just a single application. Even though one application may indeed be more relevant to the business (I can’t imagine that eBay has any application more important than their web presence, for example, since it is their revenue generation tool), overall improvements will help that application and your other applications.
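For the mathematically inclined, one illustrative way to write that overall-architecture equation (an additive sketch of my own, not a canonical formula; t_i is the time infrastructure element i contributes to delivering the application) is:

```
T_total = \sum_{i=1}^{n} t_i
```

An additive model makes the budgeting point directly: shave time off an element that sits in every application’s path, and every application’s total drops at once.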
Of course you should fix any terribly glaring issue in either of these areas that is slowing the entire system down or compromising overall security, but you should also consider solutions that will net you more than a single-item fix. Yes, I think an advanced ADC product like F5’s BIG-IP is one of these multi-solution products, but the idea goes well beyond F5, into areas like SSDs for database caches and such. So keep it in mind. Sometimes the solution to making application X faster or more secure is to make the entire infrastructure faster or more secure. And if you look at it right, availability fits into this space too. Pretty easily, in fact.