Deduplication and Compression – Exactly the same, but different.
One day many years ago, Lori's and my oldest son held up two sheets of paper and said, "These two things are exactly the same, but different!" He's a very bright individual; he was just young and didn't yet see how incongruous the statement was. Being a fun-loving family that likes to tease each other on occasion, we of course have not let him live it down. It was honestly more than a decade ago, but all is fair – he doesn't let Lori live down something funny she did before he was born. It is all in good fun, of course. Why am I bringing up this family story? Because that phrase comes to mind when you start talking about deduplication and compression. Highly complementary and very similar, they are pretty much "exactly the same, but different." Since these technologies are both used heavily in WAN optimization, and are growing in use on storage products, the topic intrigued me.

To get this out of the way: at F5, compression is built into the BIG-IP family as a feature of the core BIG-IP LTM product, and deduplication is an added layer implemented over BIG-IP LTM on the BIG-IP WAN Optimization Module (WOM). Other vendors have similar but varied (there goes a variant of that phrase again) implementation details. Before we delve too deeply into this topic, what caught my attention and started me pondering the whys was that F5's deduplication is applied before compression, and reversing the order changes the performance characteristics. I love a good puzzle, and while the fact that one should come before the other was no surprise, I wanted to know why the order was what it was, and what the impact of reversing them in processing might be. So I started working to understand the implementation details of these two technologies – not just from an F5 perspective, though that is certainly where I started, but to understand how they interact and complement each other. While much of this discussion also applies to in-place compression and deduplication such as that used on many storage devices, some of it does not, so assume that I am talking about networking, specifically WAN networking, throughout this blog.

At the very highest level, deduplication and compression are the same thing: they both look for ways to shrink your dataset before passing it along. After that, it gets a bit more complex. If it were really that simple, we wouldn't call them two different things. Well, okay, we might – IT has a way of lumping competing standards, product categories, even jobs together under the same name – but they wouldn't warrant two different names in the same product, as F5 does with BIG-IP WOM. The difference is scope: compression applies transformations to the data to shrink it, and when it removes duplication it looks only for small groupings of repetitive byte patterns within a single stream, while deduplication looks for larger groupings of repetitive byte patterns repeated across all streams. The net result? The two are very complementary – but if you were to run compression before deduplication, compression would transform the data and absorb all the small repeating byte patterns, leaving deduplication nothing to find, making compression work harder while deduplication spins its wheels.
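To make that ordering concrete, here is a minimal Python sketch of a dedupe-then-compress pipeline. It is my own illustration, not F5's implementation; the 1 KB chunk size and the in-memory fingerprint cache are assumptions for the example.

```python
import hashlib
import zlib

CHUNK = 1024               # assumed minimum match size for this example
fingerprints = {}          # fingerprint -> chunk, shared across ALL streams

def dedupe(data: bytes):
    """Replace chunks seen before (on any stream) with short references."""
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        fp = hashlib.sha256(chunk).digest()
        if fp in fingerprints:
            yield ("ref", fp)            # duplicate: send only the 32-byte fingerprint
        else:
            fingerprints[fp] = chunk
            yield ("raw", chunk)         # novel bytes: pass along for compression

def wire_bytes(data: bytes) -> int:
    """Dedupe first, then compress only the novel chunks; return bytes on the wire."""
    total = 0
    for kind, payload in dedupe(data):
        total += 32 if kind == "ref" else len(zlib.compress(payload))
    return total

payload = b"some large, repetitive file content " * 1000
print(wire_bytes(payload))   # first transfer: novel chunks must be compressed and sent
print(wire_bytes(payload))   # repeat transfer: nearly all tiny references
```

Reverse the stages and the dedupe engine sees compressed output rather than the original bytes; the transformations disturb the large repeated runs the fingerprint cache keys on, which is exactly the wheel-spinning described above.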
There are other differences. Because deduplication deals with large runs of repetitive data (I believe that in BIG-IP the minimum size is over a K), it uses some form of cache to hold patterns that duplicates can be matched against, and the larger the cache, the more strings of bytes you have to compare to. This introduces some fun around where the cache should be stored. In memory is fast but limited in size; on flash disk is fast and larger, but expensive; and on spinning disk is slow but has a huge advantage in size. Good deduplication engines can support all three and thus are customizable to what your organization needs and can afford.

Some workloads just won't benefit from one, but will get a huge benefit from the other. The extremes are good examples of this phenomenon. If you have a lot of in-the-stream repetitive data that is too small for deduplication to pick up, and little or no cross-stream duplication, then deduplication will be of limited use to you, and the act of running through the dedupe engine might actually degrade performance a negligible amount – of course, everything is algorithm dependent, so depending upon your vendor it might degrade performance a large amount instead. At the other extreme, if you have a lot of large-byte-count duplication across streams but very little within a given stream, deduplication is going to save your day, while compression will, at best, offer you a little benefit.

So yes, they're exactly the same from the 50,000-foot view, but very, very different from the benefits and use cases view. And they're very complementary, giving you more bang for the buck.
Bare Metal Blog: Throughput Sometimes Has Meaning

#BareMetalBlog Knowing what to test is one third of the battle. Knowing how it was tested is another. Knowing what the results mean is the third.

That's some testing – real clear numbers.

In most countries, top speed is no longer the thing that auto manufacturers want to talk about. Top speed is great if you need it, but the vast bulk of us will never need it. Since the flow of traffic dictates that too much speed is hazardous on the vast bulk of roads, automobile manufacturers have correctly moved the conversation to other things – cup holders (did you know there is a magic number of them for female purchasers? Did you know people actually debate not the existence of such a number, but what it is?), USB/Bluetooth connectivity, backup cameras, etc. Safety and convenience features have supplanted top speed as the things to discuss. The same is true of networking gear. While I was at Network Computing, focus was shifting from "speeds and feeds," as the industry called it, to overall performance in a real enterprise environment. Not only was it getting increasingly difficult and expensive to push ever-larger switches until they could no longer handle the throughput, enterprise IT staff was more interested in what the capabilities of the box were than in how fast it could go. Capabilities is a vague term that I used on purpose. The definition is a moving target across both time and market, with a far different set of criteria for, say, an ADC versus a WAP.

There are times, however, when you really do want to know about the straight-up throughput, even if you know it is the equivalent of a professional driver on a closed course, and your network will never see the level of performance that is claimed for the device. There are actually several cases where you will want to know about the maximum performance of an ADC, using the tools I pay the most attention to at the moment as an example. WAN optimization is a good one. In WANOpt, the goal is to shrink the amount of data being transferred between two dedicated points to maximize effective throughput. When "maximize the amount of throughput" is in the description, speeds and feeds matter. WANOpt is a pretty interesting example too, because there's more to it than just "how much data did I send over the wire in that fifteen-minute window." It's more complex than that (isn't it always?). The best testing I've seen for WANOpt starts with "how many bytes were sent by the originating machine," then verifies that the same number of bytes were received by the WANOpt device, then measures how much is going out the Internet port of the WANOpt device – to measure compression levels and bandwidth usage – then measures the number of bytes the receiving machine at the remote location receives, to make sure it matches the originating machine. So even though I say "speeds and feeds matter," there is a caveat. You want to measure the latency introduced by compression and dedupe – and possibly by encryption, since WANOpt is almost always over the public Internet these days – along with throughput and bandwidth usage. All technically "speeds and feeds" numbers, but taken together they give you an overall picture of what good the WANOpt device is doing. There are scenarios where the "good" is astounding. I've seen numbers that range as high as 95x. If you're sending a ton of data over WANOpt connections, even 4x or 5x is a huge savings in connection upgrades; anything higher than that is astounding.
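Here is a small Python sketch of that four-point byte accounting – my own illustration with hypothetical numbers, not a test harness from any product:

```python
def wanopt_report(server_out, wanopt_in, wanopt_out, remote_in, seconds):
    """Four measurement points: origin server -> WANOpt -> Internet -> remote machine."""
    assert wanopt_in == server_out, "WANOpt must receive every byte the server sent"
    assert remote_in == server_out, "remote machine must receive exactly what was sent"
    reduction = server_out / wanopt_out        # e.g. 5.0 means a 5x wire savings
    goodput = remote_in / seconds              # end-to-end application bytes per second
    print(f"wire reduction: {reduction:.1f}x, goodput: {goodput / 1e6:.1f} MB/s")

# hypothetical fifteen-minute window: 90 GB sent, 18 GB actually crossed the WAN
wanopt_report(90e9, 90e9, 18e9, 90e9, seconds=900)   # wire reduction: 5.0x, goodput: 100.0 MB/s
```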
This is an (older) diagram of WAN optimization that I've marked up to show where the testing took place, because sometimes a picture is indeed worth a thousand words. And yeah, I used F5 gear for the example image… that really should not surprise you.

So basically, you count the bytes the server sends, the bytes the WANOpt device sends (which will be less for 99.99% of loads if compression and de-dupe are used), and the total number of bytes received by the target server. Then you know what percentage improvement you got out of the WANOpt device (by comparing server-out bytes to WANOpt-out bytes), that the WANOpt devices functioned as expected (server received bytes == server sent bytes), and what the overall throughput improvement was (server received bytes / time to transfer).

There are other scenarios where simple speeds and feeds matter, but fewer of them than there used to be, and the trend is continuing. When a device designed to improve application traffic is introduced, there are certainly few. The ability to handle a gazillion connections per second, which I've mentioned before, is a good guardian against DDoS attacks, but what those connections can do is a different question. Lots of devices in many networking market spaces show little or even no latency introduction on their glossy sales hand-outs, but make those devices do the job they're purchased for and see what the latency numbers look like. It can be ugly, or you could be pleasantly surprised, but you need to know. Because you're not going to use it in a pristine lab with perfect conditions; you're going to slap it into a network where all sorts of things are happening and it is expected to carry its load. So again, I'll wrap with the acknowledgement that you all are smart puppies and know where speeds and feeds matter; make sure you have realistic performance numbers for those cases too.

Technorati Tags: Testing, Application Delivery Controller, WAN Optimization, throughput, latency, compression, deduplication, Bare Metal Blog, F5 Networks, Don MacVittie

The Whole Bare Metal Blog series:
Bare Metal Blog: Introduction to FPGAs | F5 DevCentral
Bare Metal Blog: Testing for Numbers or Performance? | F5 ...
Bare Metal Blog: Test for reality. | F5 DevCentral
Bare Metal Blog: FPGAs The Benefits and Risks | F5 DevCentral
Bare Metal Blog: FPGAs: Reaping the Benefits | F5 DevCentral
Bare Metal Blog: Introduction | F5 DevCentral
Does Cloud Solve or Increase the 'Four Pillars' Problem?

It has long been said – often by this author – that there are four pillars to application performance:

· Memory
· CPU
· Network
· Storage

As soon as you resolve one in response to application response times, another becomes the bottleneck, even if you are not hitting that bottleneck yet. For a bit more detail: "memory consumption," because this impacts swapping in modern operating systems; "CPU utilization," because regardless of OS, there is a magic line after which performance degrades radically; "network throughput," because applications have to communicate over the network, and blocking or not (almost all coding for networks today is), the information requested over the network is necessary and will eventually block code from continuing to execute; and "storage," because IOPS matter when writing to or reading from disk (or when the OS swaps memory out and back in).

These four have long been relatively easy to track. The relationship is pretty easy to spot: when you resolve one problem, one of the others becomes the "most dangerous" to application performance. But historically, you've always had access to the hardware. Even in highly virtualized environments, these items could be considered at both the host and guest level – because both the individual VMs and the entire system matter.

When moving to the cloud, the four pillars become much less manageable. How much less depends a lot upon your cloud provider, and how you define "cloud." Put in simple terms, if you are suddenly struck blind, that does not change what's in front of you, only your ability to perceive it. In the PaaS world, you have only the tools the provider offers to measure these things, and are urged not to think of the impact that host machines may have on your app. But they do have an impact. In an IaaS world you have somewhat more insight, but as others have pointed out, less control than in your datacenter.

Picture courtesy of Stanley Rabinowitz, Math Pro Press.

In the SaaS world, assuming you include that in "cloud," you have zero control and very little insight. If your app is not performing, you'll have to talk to the vendor's staff to (hopefully) get them to resolve the issues. But is the problem any worse in the cloud than in the datacenter? I would have to argue no. Your ability to touch and feel the bits is reduced, but the actual problems are not. In a pure-play public cloud deployment, the performance of an application is heavily dependent upon your vendor, but the top-tier vendors (Amazon springs to mind) can spin up copies as needed to reduce workload. This is not a far cry from one common performance trick used in highly virtualized environments – bring up another VM on another server and add it to the load balancing pool. If the app is poorly designed, the net result is not that you're buying servers to host instances; it is instead that you're buying instances directly. This has implications for IT. The reduced up-front cost of using an inefficient app – no matter which of the four pillars it is inefficient in – means that IT shops are more likely to tolerate inefficiency, even though in the long run the cost of paying monthly may be far more than the cost of purchasing a new server was, simply because the budget pain is reduced.

There are a lot of companies out there offering insight into cloud deployments that can help you see, if you're feeling blind. Fair disclosure: F5 is one of them, and I work for F5. That's all you're going to hear on that topic in this blog.
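As a concrete illustration of what watching the pillars looks like from inside a guest – where the guest's view is all you get – here is a minimal Python sketch using the psutil library. It is my example, not a product feature, and the one-minute interval is an arbitrary choice:

```python
import time
import psutil  # pip install psutil

def sample_pillars(interval=60):
    """Periodically log the four pillars to spot which one is the current bottleneck."""
    net0, disk0 = psutil.net_io_counters(), psutil.disk_io_counters()
    while True:
        time.sleep(interval)
        net, disk = psutil.net_io_counters(), psutil.disk_io_counters()
        mbps = (net.bytes_sent + net.bytes_recv
                - net0.bytes_sent - net0.bytes_recv) / interval / 1e6
        iops = (disk.read_count + disk.write_count
                - disk0.read_count - disk0.write_count) / interval
        print(f"memory {psutil.virtual_memory().percent:.0f}%  "
              f"cpu {psutil.cpu_percent():.0f}%  "
              f"network {mbps:.1f} MB/s  storage {iops:.0f} IOPS")
        net0, disk0 = net, disk

# sample_pillars()  # inside a VM or cloud instance this sees the guest only –
#                   # the physical host remains invisible, as discussed above
```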
While knowing does not always directly correlate to taking action, and there is some information that only the cloud provider could offer you, knowing where performance bottlenecks are does at least give some level of decision-making back to IT staff. If an application is performing poorly, looking into what appears to be happening (you can tell network bandwidth, VM CPU usage, VM IOPS, etc., but not what's happening on the physical hardware) can inform decision-making about how to contain the OpEx costs of cloud.

Internal cloud is a much easier play: you still have access to all the information you had before cloud came along, and generally the investigation is similar to that used in a highly virtualized environment. From a troubleshooting-performance-problems perspective, it's much the same. The key with both virtualization and internal (private) clouds is that you're aiming for maximum utilization of resources, so you will have to watch for the bottlenecks more closely – you're "closer to the edge" of performance problems, because you designed it that way. A comprehensive logging and monitoring environment can go a long way in all cloud and virtualization environments toward keeping on top of issues that crop up – particularly in a large datacenter with many apps running. And developer education on how not to be a resource hog is helpful for internally developed apps. For externally developed apps, the best you can do is ask for sizing information and then test their assumptions before buying.

Sometimes, cloud simply is the right choice. If network bandwidth is the prime limiting factor, and your organization can accept the perceived security/compliance risks, for example, the cloud is an easy solution – bandwidth in the cloud is either not limited, or limited only by your willingness to write a monthly check to cover usage. Either way, it's not an Internet connection upgrade, which can be dastardly expensive not just at install, but month after month. Keep rocking it. Get the visibility you need; don't worry about what you don't need.

Related Articles and Blogs:
Don MacVittie - Load Balancing For Developers
Advanced Load Balancing For Developers. The Network Dev Tool
Load Balancers for Developers – ADCs Wan Optimization ...
Intro to Load Balancing for Developers – How they work
Intro to Load Balancing for Developers – The Gotchas
Intro to Load Balancing for Developers – The Algorithms
Load Balancing For Developers: Security and TCP Optimizations
Advanced Load Balancers for Developers: ADCs - The Code
Advanced Load Balancing For Developers: Virtual Benefits
Don MacVittie - ADCs for Developers
Devops Proverb: Process Practice Makes Perfect
Devops is Not All About Automation
1024 Words: Why Devops is Hard
Will DevOps Fork?
DevOps. It's in the Culture, Not Tech.
Lori MacVittie - Development and General
Devops: Controlling Application Release Cycles to Avoid the ...
An Aristotlean Approach to Devops and Infrastructure Integration
How to Build a Silo Faster: Not Enough Ops in your Devops
F5 ... Wednesday: Bye Bye Branch Office Blues

#virtualization #VDI Unifying desktop management across multiple branch offices is good for performance – and operational sanity.

When you walk into your local bank, or local retail outlet, or one of the Starbucks in Chicago O'Hare, it's easy to forget that these are more than your "local" outlets for that triple grande dry cappuccino or the latest in leg-warmer fashion (either I just dated myself or I'm incredibly aware of current fashion trends; you decide which). For IT, these branch offices are among the many end nodes on a corporate network diagram located at HQ (or the mother-ship, as some of us known as "remote workers" like to call it) that require care and feeding – remotely. The number of branch offices continues to expand and, regardless of how they're counted, runs into the millions. In a 2010 report, the Internet Research Group (IRG) noted:

Over the past ten years the number of branch office locations in the US has increased by over 21% from a base of about 1.4M branch locations to about 1.7M at the end of 2009.

While back in 2004, IDC research showed four million branch offices, as cited by Jim Metzler:

The fact that there are now roughly four million branch offices supported by US businesses gives evidence to the fact that branch offices are not going away. However, while many business leaders, including those in the banking industry, were wrong in their belief that branch offices were unnecessary, they were clearly right in their belief that branch offices are expensive. One of the reasons that branch offices are expensive is the sheer number of branch offices that need to be supported. For example, while a typical company may have only one or two central sites, they may well have tens, hundreds or even thousands of branch offices.
-- The New Branch Office Network - Ashton, Metzler & Associates

Discrepancies appear to derive from the definition of "branch office" – is it geographic or regional location that counts? Do all five Starbucks at O'Hare count as five separate branch offices or one? Regardless of how they're counted, the numbers are big, and growth rates say they're just going to get bigger. From an IT perspective – IT already has trouble scaling to keep up with corporate data center growth, let alone branch office growth – this spells trouble. Compliance, data protection, patches, upgrades, performance, even routine troubleshooting are all complicated enough without the added burden of accomplishing it all remotely. Maintaining data security, too, is a challenge when remote offices are involved.

It is just these challenges that VMware seeks to address with its latest Branch Office Desktop solution set, which lays out two models for distributing and managing virtual desktops (based on VMware View, of course) to help IT mitigate most, if not all, of the obstacles it finds most troubling about branch offices. But as with any distributed architecture constrained by bandwidth and technological limitations, there are areas that benefit from a boost from VMware partners. As a long-time strategic and technology partner, F5 brings its expertise in improving performance and solving unique architectural challenges to the VMware Branch Office Desktop (BOD) solution, resulting in LAN-like convenience and a unified namespace with consistent access policy enforcement from HQ to wherever branch offices might be located.
F5 Streamlines Deployments of Branch Office Desktops

KEY BENEFITS
· Local and global intelligent traffic management with single namespace and username persistence support
· Architectural freedom of combining Virtual Editions with physical appliances
· Optimized WAN connectivity between branches and primary data centers

Using BIG-IP Global Traffic Manager (GTM), a single namespace (for example, https://desktop.example.com) can be provided to all end users. BIG-IP GTM and BIG-IP Local Traffic Manager (LTM) work together to ensure that requests are sent to a user's preferred data center, regardless of the user's current location. BIG-IP Access Policy Manager (APM) validates the login information against the existing authentication and authorization mechanisms, such as Active Directory, RADIUS, HTTP, or LDAP. In addition, BIG-IP LTM works with the F5 iRules scripting language, which allows administrators to configure custom traffic rules. F5 Networks has tested and published an innovative iRule that maintains connection persistence based on the username, irrespective of the device or location. This means that a user can change devices or locations and log back in to be reconnected to a desktop identical to the one last used. By taking advantage of BIG-IP LTM to securely connect branch offices with corporate headquarters, users benefit from optimized WAN services that dramatically reduce transfer times and improve the performance of applications relying on data center-hosted resources.

Together, F5 and VMware can provide more efficient delivery of virtual desktops to the branch office without sacrificing performance, security, or the end-user experience.
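The username-persistence idea itself is simple enough to sketch. The published F5 implementation is an iRule in Tcl; the toy Python below is only my illustration of the concept, with invented names throughout:

```python
# Toy illustration of username-based persistence: the same user is always
# reconnected to the same desktop backend, regardless of device or location.
persistence = {}   # username -> backend (a real table would also carry an expiry)
backends = ["desktop-host-1", "desktop-host-2", "desktop-host-3"]

def pick_backend(username: str) -> str:
    if username not in persistence:
        # first login: simple round-robin stands in for a real least-load choice
        persistence[username] = backends[len(persistence) % len(backends)]
    return persistence[username]

print(pick_backend("jdoe"))   # from a phone:  desktop-host-1
print(pick_backend("jdoe"))   # from a laptop: desktop-host-1 again
```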
The BYOD That is Real.

Not too long ago I wrote about VDI and BYOD, and how their hype cycles were impacting IT. In that article I was pretty dismissive of the corporate-wide democratization of IT through BYOD, and I stand by that. Internally, it is just not a realistic idea unless and until management toolsets converge. But that's internally. Externally, we have a totally different world. If you run a website-heavy business like banking or sales, you're going to have to deal with the proliferation of Internet-enabled phones and tablets. Because they will hit your websites, and customers will expect them to work.

Some companies – media companies tend to do a lot of this, for example – will ask you to download their app to view web pages. That's ridiculous; just display the page. But some companies – again, banks are a good example – have valid reasons to want customers to use an app to access their accounts. The upshot is that any given app will have to support at least two platforms today, and that guarantees nothing a year from now. But it does not change the fact that, one way or another, you're going to have to support these devices over the web. There are plenty of companies out there trying to help you. Appcelerator offers a cross-platform development environment that translates from JavaScript into native Objective-C or Java, for example. There are UI design tools available on the web that can output both formats but are notoriously short on source code and custom graphics. Still, good for prototyping. And the environments allow you to choose an HTML5 app, a native app, or a hybrid of the two, allowing staff to choose the best solution for the problem at hand.

And then there is the network. It is not just a case of delivering a different format to the device; it is a case of optimizing that content for delivery to devices with smaller memory space, slower networks, and slower CPU speeds. That's worth thinking about. There's also the security factor. Mobile devices are far easier to misplace than a desktop, and customers are not likely to admit their device is stolen until they've looked everywhere they might have left it. In the case (again) of financial institutions, if credentials are cached on the device, this is a recipe for disaster. So it is not only about picking a platform and an application style; it is about coding to the unique characteristics of the mobile world. Of course, optimization is best handled at the network layer by products like our WebAccelerator, because that's what they do and they're very good at optimizing content based upon the target platform. Security, as usual, must be handled in several places. Checking that the device is not in a strange location (as I talked about here) is a good start, but not allowing username and password to be cached on the device is huge too.

So while you are casting a skeptical look at BYOD inside your organization, pay attention to customers' device preferences. They're hitting the web on mobile devices more and more each month, and their view of your organization will be hugely impacted by how your site and/or apps respond. So invest the time and money; be there for them, so that they'll come back to you. Or don't. Your competitors would like that.
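For the "strange location" check mentioned above, here is a toy Python sketch of one way such a test could work – my own illustration, with invented thresholds, not any product's algorithm:

```python
from math import asin, cos, radians, sin, sqrt

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371 * 2 * asin(sqrt(h))

def looks_suspicious(recent_locations, new_location, threshold_km=500):
    """True if this login comes from nowhere near any recent login location."""
    return all(distance_km(loc, new_location) > threshold_km for loc in recent_locations)

# a user who normally logs in around Chicago suddenly appears in Paris
print(looks_suspicious([(41.88, -87.63)], (48.86, 2.35)))   # True -> require step-up auth
```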
Random Acts of Optimization.

When I first embarked on my application development career, I was a code optimization junkie. Really – making things faster, more efficient, the tightest they could get was a big deal to me. That routine you wrote to solve a one-off problem often becomes the core routine used by applications across the infrastructure, so writing tight code was (to me) important. The industry obviously didn't agree with me, since now we run mostly interpreted languages over the network, but that was then, this is now, and all that.

The thing is that performance still matters; it has just changed location. The overhead difference (in C/C++) between if() else and (x?y:z) is not so important anymore unless that particular branch is executed a whole lot. The latency introduced to the network by all of those devices between server and client is far larger than the few clock cycles' difference between these two constructs. There are still applications where optimized code really makes a difference (mostly in embedded, where resources are scarcer than even in the tablet space), but with ever-shrinking form factors and increasing resources, even those instances are slowly but surely going away. The only place I've heard of that really needs a high level of source optimization in recent months is high-speed transactions in the financial services sector. Simply put, if your application is on the network, the organization will get more out of spending networking staff man-hours improving network performance than spending developer man-hours doing the same. There is still a lot of app optimization that needs to go on – databases are a notorious area where a great DBA can make your application many times faster – but the network impacts the application many times in its back-and-forth, and it impacts all applications, including the ones you don't have the source to.

But there are a lot of pieces to application delivery optimization (ADO), and approaching it piecemeal has no better result than approaching application optimization piecemeal. Just because you put a load balancer in front of your server and fired up a few more VMs behind the load balancer to share the load does not mean that your application is optimized. In some instances, that is the best solution, but in most cases a more thorough Application Delivery Network approach is required. Making the application more responsive by load balancing does not decrease the amount of data the application is sending over your Internet connection, does not optimize delivery of the application over the wire to make it faster on the client end, does not direct users to the geographically closest or least utilized datacenter/cloud, does not… well, do a lot of things. Just as optimizing your application's code won't help a bit if the database is the slowest part of the application. So I'll recommend a holistic approach (I hate that phrase, but how else do you politely say "look at every friggin' thing on your network"?) that focuses on both application serving and application delivery. And if you're in multiple datacenters, with data having to traverse the Internet behind your application as well, then back-end optimizations too. It's not just about throwing more virtuals at the problem, and most of us know it at this point. The user experience is, in the end, what matters most, and there are plenty of places other than your app that can dog performance from the user perspective.
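As one concrete example of an easy win outside the application code proper, here is a hedged sketch – my own illustration, standard library only – of a WSGI middleware that adds two of the tweaks discussed next, response compression and cache headers, without touching the app itself. It buffers the whole response, so treat it as a sketch rather than production code:

```python
import gzip

def compress_and_cache(app, max_age=300):
    """WSGI middleware: gzip responses and mark them cacheable downstream."""
    def wrapped(environ, start_response):
        # pass through untouched if the client didn't offer gzip
        if "gzip" not in environ.get("HTTP_ACCEPT_ENCODING", ""):
            return app(environ, start_response)
        captured = {}
        def capture(status, headers, exc_info=None):
            captured["status"], captured["headers"] = status, headers
        body = b"".join(app(environ, capture))       # buffer the full response
        compressed = gzip.compress(body)
        headers = [(k, v) for k, v in captured["headers"]
                   if k.lower() not in ("content-length", "content-encoding")]
        headers += [("Content-Encoding", "gzip"),
                    ("Content-Length", str(len(compressed))),
                    ("Cache-Control", f"public, max-age={max_age}")]
        start_response(captured["status"], headers)
        return [compressed]
    return wrapped

# usage: application = compress_and_cache(application, max_age=600)
```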
Look into compression and caching, TCP optimizations, application-specific delivery tweaks, back-end optimizations, and in some cases code optimizations. Check the performance hit that every device between your servers and the Internet introduces. Optimize everything. And, like writing tight code, it will become just the way you do things. Once it is ingrained that you check all the places your application performance can suffer, there's less to worry about, because you'll configure things at deployment. Call it DevOps if you will, but make it part of your normal deployment model and review things whenever there's a change.

It's a complex beast, the enterprise network, and it's not getting less so. Use templates (like F5's iApps) to provision the network bits correctly for you. Taking the F5 example, there is an iApp for deploying SharePoint and a different one for Exchange. They take care of the breadth of issues that can speed delivery of each application; you just answer a few questions. I am unaware of any of our competitors having a similar solution, but it is only a question of time, so if you're not an F5 customer, ask your sales representative what the timeline for delivery of similar functionality is. I'm not an expert on our competition – who knows, maybe they have rolled something out already. Even if not, you can make checklists much like F5 Application Guides and F5 Deployment Guides, then use them to train new employees and make certain you've set everything up to your liking.

Generally speaking, faster is better on any given network, so optimization is something you'll have to worry about even if you're not thinking about it today. Hope this helps a little in understanding that there's more to it than load balancing. But if not, at least I got to write about optimizing C source.

Related Articles and Blogs:
F5 Friday: F5 Application Delivery Optimization (ADO)
The "All of the Above" Approach to Improving Application Performance
Interop 2012 - Application Delivery Optimization with F5's Lori ...
The Four V's of Big Data
DevCentral Interviews | Audio - Application Delivery Controllers
F5 News - Unified Application Delivery
Intercloud: The Evolution of Global Application Delivery
Audio White Paper - Application Delivery Hardware A Critical ...
Who owns application delivery meta-data in the cloud?
The Application Delivery Spell Book: Detect Invisible (Application ...
On the Trading Floor and in the QA Lab

#fsi problems are very public, but provide warning messages for all enterprises.

The recent troubles in High Frequency Trading (HFT) – the problems on NASDAQ over the debut of Facebook and the Knight Trading $400 million USD loss, among others – are a clear warning bell to HFT organizations. The warning comes in two parts: "Testing is not optional," and "Police yourselves or you will be policed." Systems glitches happen in every industry – we've all been victims of them – but Knight in particular has been held up as an example of rushing to market and causing financial harm. Not only did the investors in Knight itself lose most of their investment (between the impact on the stock price and the dilution of shares their big-bank bailout entailed, it is estimated that their investors lost 80% or more in a couple of weeks), but when the price of a stock – particularly a big-name stock, which the hundred or so they were overtrading mostly were – fluctuates dramatically, it creates winners and losers in the market. For every person who bought cheap and sold high, there was a seller at the low end and a buyer at the high end. Regulatory agencies and a collection of university professors are reportedly looking into an ISO 9000-style quality control system for the HFT market. One more major glitch could be all it takes to send them to the drafting table, so it is time for the industry itself to band together and put controls in place, or allow others to dictate those controls.

Quality assurance in every highly competitive industry has this problem. QA is a cost center, and while everyone wants a quality product, many are annoyed by the "interference" QA brings into the software development process. This can be worse in a highly complex network like the ones HFT or a large multi-national requires, because replicating the network for QA purposes can be a daunting project. This is somewhat less relevant in smaller organizations, but there are certainly mid-sized companies with networks every bit as complex as large multi-nationals'. Luckily, we have reached a point in time where a QA environment can be quickly configured and reconfigured, and testing can focus on finding quality problems in the software rather than on configuring the environment – running cables and the like – which has traditionally cost QA for networked applications a lot of time, or a lot of money maintaining a full copy of the production network.

From this point forward, I will mention F5 products by name. Please feel free to insert your favorite vendor's name if they have a comparable product. F5 is my employer, so I know what our gear's capabilities are; for competitors that is less true, so call your sales folks and ask them if they support the functionality described. Wouldn't hurt to do that with F5 either – our sales people know a ton, and can clarify anything that isn't clear in this blog.

In the 21st century, testing and virtualization go hand in hand. There are a couple of different forms of network virtualization that can help with testing, depending upon the needs of your testing team and your organization. I refer to them as QA testing and performance testing; think of them as "low throughput testing" and "high throughput testing."
If you're not testing performance, you don't need to see a jillion connections a second, but you do need to see all of the things the application does, and make certain they match requirements (more often design, but that's a different blog post, concerning what happens if the design doesn't adequately address requirements and testing is done off of design…).

Quality Assurance

For low throughput testing, virtualization has been king for a good long while, with only cloud even pretending to challenge the benefits of a virtualized environment. Since "cloud" in this case is simply IaaS running VMs, I see no difference for QA purposes; this example could be in the cloud or on your desktop in VMs. Dropping a virtual Application Delivery Controller (vADC) into the VM environment provides provisioning of networking objects in the same manner as is done in the production network. This is very useful for testing multiple-instance applications for friendliness. It doesn't take much of a mistake to turn the database into the bottleneck in a multiple-instance web application. Really. I've seen it happen. QA testing can see this type of behavior without the throughput of a production network, if the network is designed to handle load-balanced copies of the application.

It is also very useful for security screening, assuming the vADC supports a Web Application Firewall (WAF), like BIG-IP with its Application Security Manager. While testing security through a WAF is useful, the power of the WAF really comes into play when a security flaw is discovered in QA testing. Many times, that flaw can be compensated for with a WAF, and having one on the QA network allows staff to test with and without the WAF. Should the WAF provide cover for the vulnerability, an informed decision can then be made about whether the application deployment must be delayed for a bug fix, or whether the WAF will be allowed to handle protection of that particular vulnerability until the next scheduled update. In many cases, this saves both time and money. In cases of heavy back-end transport impacting the performance of web applications – like mirroring database calls to a remote datacenter – the use of a WAN optimization manager can be evaluated in test, to see if it helps performance, without making changes to the production network.

Testing network object configurations is easier too. If the test environment is set up to mirror the production network, the only difference being that testing is 100% virtualized, then the exact network objects – load balancing, WAN optimization, application acceleration, security, and WAF – can all be configured in QA exactly as they will be configured in production. This allows for thorough testing of the entire infrastructure, not just the application being deployed.

Performance Testing

For high-throughput testing, the commodity hardware that runs VMs can be a limiting factor, in the sense that the throughput in test needs to match the expected usage of the application at peak times. For these scenarios, organizations with high-volume, mission-critical applications to test can run the exact same testing scenario using a hardware chassis capable of multi-tenancy. As always, I work for F5, so my experience is best couched in F5 terms: our VIPRION systems are capable of running multiple different BIG-IP instances per blade. That means that in test, the exact same hardware that will be used in production can be used for performance evaluation.
Everything said above about QA testing – WAF, application acceleration, testing for bottlenecks – applies. The biggest difference is that the tests are on a physical machine, which might make testing to the cloud more difficult, as the machine cannot be displaced to the cloud environment. To resolve this particular issue, the hybrid model can be adopted: VIPRION on the datacenter side and BIG-IP VE on the cloud side, in the case of F5. Utilizing management tools like the iApps Analytics built into F5 Enterprise Manager (EM) allows testers to see which portion of the architecture is limiting performance, and saves man-hours searching out problems.

It's Still About The App and the Culture

In the end, the primary point of testing is to safeguard against coding errors that would cause real pain to the organization, and to get them fixed before the application goes live. The inclusion of network resources in testing is a reflection of the growing complexity in the infrastructure supporting many web-based applications. Just as you wouldn't test a mainframe app on a PC, testing a networked app outside of the target environment is not conclusive. But the story at Knight Trading does not appear to be one about testing, but rather about culture. In a rush to meet an artificial deadline, they appear to have cut corners and rushed changes in the night before. You can't fix problems with testing if you aren't testing. Many IT shops need to take that to heart. The testers I have worked with over the years are astounding folks with a lot of smarts, but all suffer from the problem that their organization doesn't value testing at the level it does other IT functions. Dedicated testing time is often in short supply, and the first thing to go when deadlines slip. Quite often testers are developers who have the added responsibility of testing. But many of us have said over the years, and will continue to say: testing your own code is not a way to find bugs. Don't you think – really think – that if a developer thinks of it in testing, he or she probably thought of it during development? While problems not thought of in development can certainly be caught, a fresh set of eyes setting up tests outside the context of developer assumptions is always a good idea.

And Yeah, it's happened to me

Early in my career, I was called upon to make a change to a software package used by some of the largest banks in the US. I ran the change out in a couple of hours and tested it myself. There was pressure to get it out the door, so the rock stars in our testing department didn't even know about the change, and we delivered it electronically to our largest banking customer – who was one of the orgs demanding the change. In their environment, the change overwrote the database our application used. Literally destroyed a year's worth of sensitive data. Thankfully, they had followed our advice and backed up the database first. While they were restoring, I pawed through the change line by line and found that the error destroying their database was occurring in my code, just not doing any real harm on my system (it was writing over a different file there), so I didn't notice it. Testing would have saved all of us a ton of pain, because it would have put the app in a whole new environment. But this was a change "that they're demanding NOW!" The bank in question lost the better part of a day restoring things to normal, and my company took an integrity hit with several major customers.
I learned then that a few hours of testing of a change that took a few hours to write is worth the investment, no matter how much pressure there is to deliver. Since then, I have definitely pushed to have a test phase with individuals not involved with development running the app. And of course, the more urgent the change, the more I want at least one person to install and test outside of my dev machine. And you should too.

Related Articles and Blogs:
There is more to it than performance.
DEFCON 20 Highlights
How to Develop Next-Gen Project Managers
Speed Matters, but Dev Speed or App Speed?

In running, speed matters. But how the speed matters is very important, and what type of running is your forte should determine what you are involved in. As a teen, I was never a very good sprinter. I just didn't get up to speed fast enough, and was consistently overcome by more nimble opponents. But growing up on a beach was perfect conditioning for cross-country track. Running five miles in beach sand that gave way underfoot and drained your energy much faster than it allowed you to move forward was solid practice for running through the woods mile after mile. And I wasn't a bad runner – not a world champion, to be sure – but I won more often than I lost when the "track" was ten or fifteen miles through the woods.

The same is true of mobile apps, though most organizations don't seem to realize it yet. There are two types of mobile apps – those that are developed for sprinting, by getting them to market rapidly, and those that are developed for the long haul, by implementing solutions based around the platform in question. By "platform" in this case, I mean the core notions of "mobile" apps – wireless, limited resources, touch interfaces, and generally different use cases than a laptop or desktop machine. It is certainly a more rapid go-to-market plan to have an outsourcer of some kind dump your existing HTML into an "app," or develop a little HTML5 and wrap it in an "app," but I would argue that the goals of such an endeavor are short term. Much like sprinting, you'll get there quickly, but then the race is over. How the judges (customers, in this case) gauge the result is much more important.

There are three basic bits to judging mobile apps: ease of use, which is usually pretty good in a wrapped-HTML or "hybrid" app; security, which is usually pretty horrendous in a hybrid app; and performance, which is usually pretty horrendous in a hybrid app. The security bit could be fixed with some serious security folks looking over the resultant application, but the performance issue is not so simple. You see, the performance of a hybrid application is a simple equation: speed of the original web content + overhead of a cell phone + overhead of the app wrapper around the HTML. Sure, you'll get faster development time wrapping HTML pages in an app, but you'll get worse long-term performance. Kind of the same issue you get when a sprinter tries to run cross country: they rock for the first while, but burn out before the cross-country racers are up to speed.

You can use tools like our Application Delivery Optimization (ADO) engine to make the wrapped app perform better, but that's not a panacea; longer term, it will be necessary to develop a more targeted, comprehensive solution. When you need only a little bit of data and could wrap display functionality around it on the client side, transferring that display functionality over the wire and then trying to make it work in a client is pure overhead – overhead that must be transmitted on a slower network, over what is increasingly a pay-as-you-go bandwidth model. Even if the application somehow performs adequately, apps that are bandwidth hogs are not going to be repaid with joy as increasing numbers of carriers drop unlimited bandwidth plans.

So before you shell out the money for an intermediate step, stop and consider your needs. Enterprises are being beaten about the head and shoulders with claims that if you don't have a mobile app, you're doomed. Think really carefully before you take the chicken-little mentality to heart. Are your customers demanding an app?
If so, are they demanding it right this instant? If so, perhaps a hybrid app is a good option – if you're willing to spend whatever it costs to get it developed, only to rewrite the app as a native one in six or ten months. Take a look at the Play store or the Apple store, and you'll see that just throwing an app out there is not enough. You need to develop a method to let your customers know it's available, and it has to offer them… something. If you can't clearly define both of those requirements, then you can't clearly define what you need, and you should take a deep breath while considering your options.

Let's say you have a web-based calculator for mortgage interest rates, and it calls web services to do the interest rate calculations. For not much more development time, it is possible to build a very sweet version of the same calculator in native mode for either iPhone or Android (depending upon your platform priorities, could be either), with a larger up-front investment but less long-term investment, by re-using those web services calls from within the mobile app. A little more money now, and no need to rewrite for better performance or mobile targeting in the future? Take the little extra hit now and do it right. There are plenty of apps out there, and unless you can prove you're losing money every day over the lack of a mobile app, no one will notice that your application came out a month or two later – but they will notice how cool it is.

While we're on the topic, I hate to burst any bubbles, but not every single website needs a dedicated app. We have to get over the hype and get to reality. Most people do not want 50 reader apps on their phone, each one just a simple hybrid shell to allow easier reading of a single website. They just don't. So consider whether you even need an app. Seriously. If the purpose of your app is to present your website in a different format, well, news flash: all mobile devices have this nifty little tool called a web browser that's pretty good at presenting your website.

Of course, when you do deploy apps, or even before you do, consider F5's ADO and security products. They do a lot with mobile that is specific to the mobile world. App development is no simple task, and good app development, like all good development, will cost you money. Make the right choices and drive the best app you can out to your customers, because they're not very forgiving of slow or buggy apps, and they're completely unforgiving of apps that mess up their mobile devices. And maybe one day soon, if we're lucky, we'll have a development toolkit that works well and delivers all of the above.

Related Articles and Blogs:
F5 Solutions for VMware View Mobile Secure Desktop
Drama in the Cloud: Coming to a Security Theatre Near You
Scary App Games. SSL without benefit.
Will BYOL Cripple BYOD?
Four Best Practices for Reducing Risk in the Cloud
Birds on a Wire(less)
22 Beginner Travel Tips
Dreaming of Work
20 Lines or Less #59: SSL Re-encryption, Mobile Browsing, and iFiles
Scaling Web Security Operations with DAST and One-Click Virtual Patching
BIG-IP Edge Client v1.0.4 for iOS
F5 Friday. Speedy SPDY

#ADO #Stirling #fasterapp A SPDY implementation that is as fast and adaptable as needed.

(I originally wrote this more than a month ago. Coworkers have covered this topic extensively, but I thought I'd still get it posted for those who read my blog and missed it.)

Remember the days when Internet connections were inherently slow, and browser usage required extreme patience? For many people – from certain geographic regions to mobile phone Internet users – that world of waiting has come around again, and they're not as patient as people used to be, largely because instant communication has become the standard, so expectations have risen. As with all recurring themes, there are new solutions coming along to resolve these problems, and F5 is staying on top of them, helping IT better serve the needs of the business, and the customer.

In November of 2009, Google announced the SPDY protocol to improve the performance of browser-server communications. Since then, implementations of SPDY have cropped up in both Chrome and Firefox, which according to w3schools.com together comprise over 70% of the global browser market. The problem is that web server and web application server implementations lag far behind client adoption. While the default is for SPDY to fall back to HTTP if either client or server does not have a SPDY implementation, there are clear-cut benefits to SPDY that IT is missing out on. This is the result of a convergence of issues that will eventually be resolved on their own – most notably, that it is easy to get two open-source browsers to support your standard and attain market penetration, but much harder to convince tens of thousands of IT folks to disrupt their normal operations to implement a standard that isn't strictly necessary for most of them. Eventually, SPDY support will come pre-packaged in most web servers, and if it is something your organization needs, those web servers will be the first choice for new projects. Until then, clients with slow connections (including all mobile clients) will suffer longer delivery timeframes.

What is required is a solution that allows for SPDY support without disrupting the flow of normal operations – something that can be implemented quickly and easily, without the hassle of swapping web servers, installing modules, making configuration changes, etc. And of course, that solution should be comprehensive enough to serve the most demanding environments. As of now, that requirement is fulfilled by F5: F5 WebAccelerator now supports SPDY as a proxy for all of the servers you choose to turn SPDY support on for.

In the normal course of SPDY operations, the client and the server exchange information about whether they support SPDY, and if either does not, then HTTP is used for communication between the browser and the web server. BIG-IP WebAccelerator acts as a proxy for web servers: it terminates the connection, responds that the server behind it does indeed support SPDY, then translates requests from the browser into HTTP before passing them to the server, and translates responses from the server into SPDY before passing them to the client. The net result is that on the slowest part of the connection – the Internet and the wireless device "last mile" – SPDY is being used, while there are zero changes to the application infrastructure. And because the BIG-IP product family specializes in per-application configuration, you can pick and choose which applications running behind a BIG-IP device actually support SPDY, should the need arise.
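Schematically, that gateway pattern looks like the toy Python sketch below. This is my own illustration of the concept only – it says nothing about how WebAccelerator is implemented, and the dictionaries merely stand in for real SPDY framing:

```python
import urllib.request

def spdy_gateway(client_frame: dict) -> dict:
    """Toy protocol gateway: speak a SPDY-like framing to the client,
    plain HTTP to an unmodified web server behind the proxy."""
    stream_id = client_frame["stream_id"]     # SPDY multiplexes streams per connection
    # Translate the client's request into an ordinary HTTP request.
    with urllib.request.urlopen(client_frame["url"]) as resp:  # server needs no SPDY
        status, body = resp.status, resp.read()
    # Re-frame the server's answer for the client's protocol, tagged to its stream.
    return {"stream_id": stream_id, "status": status, "body": body}

# reply = spdy_gateway({"stream_id": 1, "url": "http://app.example.com/index.html"})
```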
Combined with the whole collection of other optimizations that WebAccelerator implements, the performance of web applications to any device can greatly benefit without retrofitting the entire network.

Related Articles and Blogs:
The HTTP 2.0 War has Just Begun
The Four V's of Big Data
The "All of the Above" Approach to Improving Application Performance
Mobile Apps. New Game, New (and Old) Rules
F5 Friday: Ops First Rule