WAN Optimization

Deduplication and Compression – Exactly the same, but different.
One day many years ago, Lori's and my oldest son held up two sheets of paper and said "These two things are exactly the same, but different!" Now, he's a very bright individual, he was just young, and didn't even get how incongruous the statement was. Being a fun-loving family that likes to tease each other on occasion, we of course have not yet let him live it down. It was honestly more than a decade ago, but all is fair; he doesn't let Lori live down something funny that she did before he was born. It is all in good fun of course.

Why am I bringing up this family story? Because that phrase does come to mind when you start talking about deduplication and compression. Highly complementary and very similar, they are pretty much "exactly the same, but different". Since these technologies are both used pretty heavily in WAN optimization, and are growing in use on storage products, this topic intrigued me. To get this out of the way: at F5, compression is built into the BIG-IP family as a feature of the core BIG-IP LTM product, and deduplication is an added layer implemented over BIG-IP LTM on BIG-IP WAN Optimization Module (WOM). Other vendors have similar but varied (there goes a variant of that phrase again) implementation details. Before we delve too deeply into this topic though, what caught my attention and started me pondering the whys of this topic was that F5's deduplication is applied before compression, and it seems that reversing the order changes performance characteristics. I love a good puzzle, and while the fact that one should come before the other was no surprise, I wanted to know why the order is what it is, and what the impact of reversing them in processing might be. So I started working to understand the details of implementation for these two technologies. Not to understand them from an F5 perspective, though that is certainly where I started, but to understand how they interact and complement each other.
While much of this discussion also applies to in-place compression and deduplication such as that used on many storage devices, some of it does not, so assume that I am talking about networking, specifically WAN networking, throughout this blog. At the very highest level, deduplication and compression are the same thing. They both look for ways to shrink your dataset before passing it along. After that, it gets a bit more complex. If it was really that simple, after all, we wouldn't call them two different things. Well, okay, we might; IT has a way of having competing standards, product categories, even jobs that we lump together with the same name. But still, they wouldn't warrant two different names in the same product like F5 does with BIG-IP WOM. The thing is that compression can do transformations to data to shrink it, and it also looks for small groupings of repetitive byte patterns and replaces them, while deduplication looks for larger groupings of repetitive byte patterns and replaces them. In the implementation you'll see on BIG-IP WOM, deduplication looks for larger byte patterns repeated across all streams, while compression applies transformations to the data, and when removing duplication only looks for smaller combinations in a single stream. The net result? The two are very complementary, but if you run compression before deduplication, compression will find a whole collection of small repeating byte patterns, and between that and the transformations, deduplication will find nothing, making compression work harder and deduplication spin its wheels. There are other differences – because deduplication deals with large runs of repetitive data (I believe that in BIG-IP the minimum size is over a K), it uses some form of caching to hold patterns that duplicates can match against, and the larger the cache, the more strings of bytes you have to compare to. This introduces some fun around where the cache should be stored.
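Before we get to where that cache lives, the ordering argument is easy to demonstrate. Below is a minimal Python sketch with invented chunk sizes and reference markers (this is not how BIG-IP WOM actually encodes anything): two streams share a large block, and deduplicating before compressing beats the reverse order.

```python
import hashlib
import zlib

CHUNK = 1024  # dedupe granularity; the post suggests BIG-IP's minimum is over 1K

def dedupe(data, cache):
    """Replace chunks seen before (on any stream) with short references."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).digest()
        if digest in cache:
            out.append(b"R" + digest[:8])  # 9-byte reference to a cached chunk
        else:
            cache[digest] = chunk
            out.append(b"L" + chunk)       # literal chunk, now cached
    return b"".join(out)

# Two streams that share a large block -- cross-stream duplication.
shared = bytes(range(256)) * 8                  # 2 KB block common to both
stream_a = shared + b"payload unique to A" * 40
stream_b = shared + b"payload unique to B" * 40

# Dedupe first, then compress: stream B's copy of `shared` collapses
# into two tiny references before compression ever runs.
cache = {}
dedupe_then_compress = sum(
    len(zlib.compress(dedupe(s, cache))) for s in (stream_a, stream_b))

# Compress first, then dedupe: compression transforms each stream's bytes
# independently, so the dedupe engine finds no matching chunks at all.
cache = {}
compress_then_dedupe = sum(
    len(dedupe(zlib.compress(s), cache)) for s in (stream_a, stream_b))

print(dedupe_then_compress < compress_then_dedupe)  # → True
```

Run it and the dedupe-then-compress total comes out smaller, because compressing first transforms the shared block into two different byte sequences that the dedupe cache can never match.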
In memory is fast, but limited in size; on flash disk it is fast and has a greater size, but is expensive; and on disk it is slow but has a huge advantage in size. Good deduplication engines can support all three and thus are customizable to what your organization needs and can afford. Some workloads just won't benefit from one, but will get a huge benefit from the other. The extremes are good examples of this phenomenon – if you have a lot of in-the-stream repetitive data that is too small for deduplication to pick up, and little or no cross-stream duplication, then deduplication will be of limited use to you, and the act of running through the dedupe engine might actually degrade performance a negligible amount – of course, everything is algorithm dependent, so depending upon your vendor it might degrade performance a large amount also. On the other extreme, if you have a lot of large byte-count duplication across streams, but very little within a given stream, deduplication is going to save your day, while compression will, at best, offer you a little benefit. So yes, they're exactly the same from the 50,000-foot view, but very, very different from the benefits and use cases view. And they're very complementary, giving you more bang for the buck.

Bare Metal Blog: Throughput Sometimes Has Meaning
#BareMetalBlog Knowing what to test is half the battle. Knowing how it was tested the other. Knowing what that means is the third. That’s some testing, real clear numbers. In most countries, top speed is no longer the thing that auto manufacturers want to talk about. Top speed is great if you need it, but for the vast bulk of us, we’ll never need it. Since the flow of traffic dictates that too much speed is hazardous on the vast bulk of roads, automobile manufacturers have correctly moved the conversation to other things – cup holders (did you know there is a magic number of them for female purchasers? Did you know people actually debate not the existence of such a number, but what it is?), USB/bluetooth connectivity, backup cameras, etc. Safety and convenience features have supplanted top speed as the things to discuss. The same is true of networking gear. While I was at Network Computing, focus was shifting from “speeds and feeds” as the industry called it, to overall performance in a real enterprise environment. Not only was it getting increasingly difficult and expensive to push ever-larger switches until they could no longer handle the throughput, enterprise IT staff was more interested in what the capabilities of the box were than how fast it could go. Capabilities is a vague term that I used on purpose. The definition is a moving target across both time and market, with a far different set of criteria for, say, an ADC versus a WAP. There are times, however, where you really do want to know about the straight-up throughput, even if you know it is the equivalent of a professional driver on a closed course, and your network will never see the level of performance that is claimed for the device. There are actually several cases where you will want to know about the maximum performance of an ADC, using the tools I pay the most attention to at the moment as an example. WAN optimization is a good one. 
In WANOpt, the goal is to shrink the amount of data being transferred between two dedicated points to try to maximize the amount of throughput. When "maximize the amount of throughput" is in the description, speeds and feeds matter. WANOpt is a pretty interesting example too, because there's more than just "how much data did I send over the wire in that fifteen-minute window". It's more complex than that (isn't it always?). The best testing I've seen for WANOpt starts with "how many bytes were sent by the originating machine", then measures that the same number of bytes were received by the WANOpt device, then measures how much is going out the Internet port of the WANOpt device – to measure compression levels and bandwidth usage – then measures the number of bytes the receiving machine at the remote location receives to make sure it matches the originating machine. So even though I say "speeds and feeds matter", there is a caveat. You want to measure latency introduced with compression and dedupe, and possibly with encryption since WANOpt is almost always over the public Internet these days, along with throughput and bandwidth usage. All technically "speeds and feeds" numbers, but taken together they give you an overall picture of what good the WANOpt device is doing. There are scenarios where the "good" is astounding. I've seen numbers that range as high as 95x the performance. If you're sending a ton of data over WANOpt connections, even 4x or 5x is a huge savings in connection upgrades; anything higher than that is astounding. This is an (older) diagram of WAN Optimization I've marked up to show where the testing took place, because sometimes a picture is indeed worth a thousand words. And yeah, I used F5 gear for the example image… That really should not surprise you.
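All of those measurement points boil down to a handful of byte counts and a timer. A hedged sketch of the arithmetic (function and variable names are mine, not from any test harness):

```python
def wanopt_report(server_sent, wanopt_sent, server_received, seconds):
    """Boil the four byte counts from the test above down to three numbers."""
    # Integrity: the far-end server must receive exactly what was sent.
    assert server_received == server_sent, "WANOpt mangled the data in flight"
    reduction = 1 - wanopt_sent / server_sent       # fraction kept off the WAN
    multiplier = server_sent / wanopt_sent          # the "4x, 5x ... 95x" figure
    goodput_mbps = server_sent * 8 / seconds / 1e6  # application-level throughput
    return reduction, multiplier, goodput_mbps

# 10 GB sent in a fifteen-minute window, 2 GB actually crossing the WAN:
r, x, mbps = wanopt_report(10_000_000_000, 2_000_000_000, 10_000_000_000, 900)
print(round(r, 3), x, round(mbps, 1))  # → 0.8 5.0 88.9
```

The integrity assertion is the step people skip most often, and it is the one that proves the WANOpt pair did its job without mangling the data.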
So basically, you count the bytes the server sends, the bytes the WANOpt device sends (which will be less for 99.99% of loads if compression and de-dupe are used), and the total number of bytes received by the target server. Then you know what percentage improvement you got out of the WANOpt device (by comparing server-out bytes to WANOpt-out bytes), that the WANOpt devices functioned as expected (server received bytes == server sent bytes), and what the overall throughput improvement was (server received bytes / time to transfer). There are other scenarios where simple speeds and feeds matter, but fewer of them than there used to be, and the trend is continuing. When a device designed to improve application traffic is introduced, there are certainly few. The ability to handle a gazillion connections per second that I've mentioned before is a good guardian against DDoS attacks, but what those connections can do is a different question. Lots of devices in many networking market spaces show little or even no latency introduction on their glossy sales hand-outs, but make those devices do the job they're purchased for and see what the latency numbers look like. It can be ugly, or you could be pleasantly surprised, but you need to know. Because you're not going to use it in a pristine lab with perfect conditions; you're going to slap it into a network where all sorts of things are happening and it is expected to carry its load. So again, I'll wrap with acknowledgement that you all are smart puppies and know where speeds and feeds matter – make sure you have realistic performance numbers for those cases too. Technorati Tags: Testing, Application Delivery Controller, WAN Optimization, throughput, latency, compression, deduplication, Bare Metal Blog, F5 Networks, Don MacVittie The Whole Bare Metal Blog series: Bare Metal Blog: Introduction to FPGAs | F5 DevCentral Bare Metal Blog: Testing for Numbers or Performance? | F5 ... Bare Metal Blog: Test for reality.
| F5 DevCentral Bare Metal Blog: FPGAs: The Benefits and Risks | F5 DevCentral Bare Metal Blog: FPGAs: Reaping the Benefits | F5 DevCentral Bare Metal Blog: Introduction | F5 DevCentral

F5 ... Wednesday: Bye Bye Branch Office Blues
#virtualization #VDI Unifying desktop management across multiple branch offices is good for performance – and operational sanity. When you walk into your local bank, or local retail outlet, or one of the Starbucks in Chicago O'Hare, it's easy to forget that these are more than your "local" outlets for that triple grande dry cappuccino or the latest in leg-warmer fashion (either I just dated myself or I'm incredibly aware of current fashion trends, you decide which). For IT, these branch offices are one of many end nodes on a corporate network diagram located at HQ (or the mother-ship, as some of us known as 'remote workers' like to call it) that require care and feeding – remotely. The number of branch offices continues to expand and, regardless of how they're counted, number in the millions. In a 2010 report, the Internet Research Group (IRG) noted: Over the past ten years the number of branch office locations in the US has increased by over 21% from a base of about 1.4M branch locations to about 1.7M at the end of 2009. While back in 2004, IDC research showed four million branch offices, as cited by Jim Metzler: The fact that there are now roughly four million branch offices supported by US businesses gives evidence to the fact that branch offices are not going away. However, while many business leaders, including those in the banking industry, were wrong in their belief that branch offices were unnecessary, they were clearly right in their belief that branch offices are expensive. One of the reasons that branch offices are expensive is the sheer number of branch offices that need to be supported. For example, while a typical company may have only one or two central sites, they may well have tens, hundreds or even thousands of branch offices. -- The New Branch Office Network - Ashton, Metzler & Associates Discrepancies appear to derive from the definition of "branch office" – is it geographic or regional location that counts? 
Do all five Starbucks at O'Hare count as five separate branch offices or one? Regardless how they're counted, the numbers are big and growth rates say it's just going to get bigger. From an IT perspective, which has trouble scaling to keep up with corporate data center growth let alone branch office growth, this spells trouble. Compliance, data protection, patches, upgrades, performance, even routine troubleshooting are all complicated enough without the added burden of accomplishing it all remotely. Maintaining data security, too, is a challenge when remote offices are involved. It is just these challenges that VMware seeks to address with its latest Branch Office Desktop solution set, which lays out two models for distributing and managing virtual desktops (based on VMware View, of course) to help IT mitigate if not all then most of the obstacles IT finds most troubling when it comes to branch office anything. But as with any distributed architecture constrained by bandwidth and technological limitations, there are areas that benefit from a boost from VMware partners. As a long-time strategic and technology partner, F5 brings its expertise in improving performance and solving unique architectural challenges to the VMware Branch Office Desktop (BOD) solution, resulting in LAN-like convenience and a unified namespace with consistent access policy enforcement from HQ to wherever branch offices might be located.

F5 Streamlines Deployments of Branch Office Desktops

KEY BENEFITS
· Local and global intelligent traffic management with single namespace and username persistence support
· Architectural freedom of combining Virtual Editions with Physical Appliances
· Optimized WAN connectivity between branches and primary data centers

Using BIG-IP Global Traffic Manager (GTM), a single namespace (for example, https://desktop.example.com) can be provided to all end users.
BIG-IP GTM and BIG-IP Local Traffic Manager (LTM) work together to ensure that requests are sent to a user's preferred data center, regardless of the user's current location. BIG-IP Access Policy Manager (APM) validates the login information against the existing authentication and authorization mechanisms such as Active Directory, RADIUS, HTTP, or LDAP. In addition, BIG-IP LTM works with the F5 iRules scripting language, which allows administrators to configure custom traffic rules. F5 Networks has tested and published an innovative iRule that maintains connection persistence based on the username, irrespective of the device or location. This means that a user can change devices or locations and log back in to be reconnected to a desktop identical to the one last used. By taking advantage of BIG-IP LTM to securely connect branch offices with corporate headquarters, users benefit from optimized WAN services that dramatically reduce transfer times and improve the performance of applications relying on data center-hosted resources. Together, F5 and VMware can provide more efficient delivery of virtual desktops to the branch office without sacrificing performance, security, or the end-user experience.

The Four V's of Big Data
#stirling #bigdata #ado #interop "Big data" focuses almost entirely on data at rest. But before it was at rest – it was transmitted over the network. That ultimately means trouble for application performance. The problem of "big data" is highly dependent upon to whom you are speaking. It could be an issue of security, of scale, of processing, of transferring from one place to another. What's rarely discussed as a problem is that all that data got where it is in the same way: over a network and via an application. What's also rarely discussed is how it was generated: by users. If the amount of data at rest is mind-boggling, consider the number of transactions and users that must be involved to create that data in the first place – and how that must impact the network. Which in turn, of course, impacts the users and applications creating it. It's a vicious cycle, when you stop and think about it. This cycle shows no end in sight. The amount of data being transferred over networks, according to Cisco, is only going to grow at a staggering rate – right along with the number of users and variety of devices generating that data. The impact on the network will be increasing amounts of congestion and latency, leading to poorer application performance and greater user frustration.

MITIGATING the RISKS of BIG DATA SIDE EFFECTS

Addressing that frustration and improving performance is critical to maintaining a vibrant and increasingly fickle user community. A Yotta blog detailing the business impact of site performance (compiled from a variety of sources) indicates a serious risk to the business. According to its compilation, a delay of 1 second in page load time results in:
· 7% Loss in Conversions
· 11% Fewer Pages Viewed
· 16% Decrease in Customer Satisfaction

This delay is particularly noticeable on mobile networks, where latency is high and bandwidth is low – a deadly combination for those trying to maintain service level agreements with respect to application performance.
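To put those percentages in business terms, a back-of-the-envelope calculation (the revenue figure is an invented example, not from the Yotta compilation):

```python
def one_second_delay_cost(monthly_revenue, conversion_loss_pct=7):
    """Revenue at risk from a 1-second page-load delay, using the 7%
    conversion-loss figure cited above as a linear approximation."""
    return monthly_revenue * conversion_loss_pct / 100

# A hypothetical site doing $250,000/month in converted sales:
print(one_second_delay_cost(250_000))  # → 17500.0
```

A single second of added latency on a modest site is real money, which is why performance now counts as part of availability.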
But users accessing sites over the LAN or Internet are hardly immune from the impact; the increasing pressure on networks inside and outside the data center inevitably results in failures to perform – and frustrated users who are as likely to abandon and never return as are mobile users. Thus, the importance of optimizing the delivery of applications amidst potentially difficult network conditions is rapidly growing. The definition of "available" is broadening and now includes performance as a key component. A user considers a site or application "available" if it responds within a specific time interval – and that time interval is steadily decreasing. Optimizing the delivery of applications while taking into consideration the network type and conditions is no easy task, and requires a level of intelligence (to apply the right optimization at the right time) that can only be achieved by a solution positioned in a strategic point of control – at the application delivery tier.

Application Delivery Optimization (ADO)

Application delivery optimization (ADO) is a comprehensive, strategic approach to addressing performance issues, period. It is not a focus on mobile, or on cloud, or on wireless networks. It is a strategy that employs visibility and intelligence at a strategic point of control in the data path, enabling solutions to apply the right type of optimization at the right time so that individual users get the best performance possible given their unique set of circumstances. The underpinnings of ADO are both technological and topological, leveraging location along with technologies like load balancing, caching, and protocols to improve performance on a per-session basis.
The difficulty in executing on an overarching, comprehensive ADO strategy is addressing the variables of myriad environments, networks, devices, and applications with the fewest number of components possible, so as not to compound the problems by introducing more latency due to additional processing and network traversal. A unified platform approach to ADO is necessary to ensure minimal impact from the solution on the results. ADO must therefore support topology and technology in such a way as to ensure the flexible application of any combination as may be required to mitigate performance problems on demand.

Topologies
· Symmetric Acceleration
· Front-End Optimization (Asymmetric Acceleration)

Lengthy debate has surrounded the advantages and disadvantages of symmetric and asymmetric optimization techniques. The reality is that both are beneficial to optimization efforts. Each approach has varying benefits in specific scenarios, as each approach focuses on specific problem areas within the application delivery chain. Neither is necessarily appropriate for every situation, nor will either one necessarily resolve performance issues in which the root cause lies outside the approach's intended domain expertise. A successful application delivery optimization strategy is to leverage both techniques when appropriate.

Technologies
· Protocol Optimization
· Load Balancing
· Offload
· Location

Whether the technology is new – SPDY – or old – hundreds of RFC standards improving on TCP – it is undeniable that technology implementation plays a significant role in improving application performance across a broad spectrum of networks, clients, and applications.
From improving upon the way in which existing protocols behave to implementing emerging protocols, from offloading computationally expensive processing to choosing the best location from which to serve a user, the technologies of ADO achieve the best results when applied intelligently and dynamically, taking into consideration real-time conditions across the user-network-server spectrum. ADO cannot effectively scale as a solution if it focuses on one or two of its component solutions. It must necessarily address what is a polyvariable problem with a polyvariable solution: one that can apply the right set of technological and topological solutions to the problem at hand. That requires a level of collaboration across ADO solutions that is almost impossible to achieve unless the solutions are tightly integrated. A holistic approach to ADO is the most operationally efficient and effective means of realizing performance gains in the face of increasingly hostile network conditions.

Mobile versus Mobile: 867-5309
Identity Gone Wild! Cloud Edition
Network versus Application Layer Prioritization
Performance in the Cloud: Business Jitter is Bad
The Three Axioms of Application Delivery
Fire and Ice, Silk and Chrome, SPDY and HTTP
The HTTP 2.0 War has Just Begun
Stripping EXIF From Images as a Security Measure

iDo Declare: iPhone with BIG-IP
Who would have imagined back in 1973 when Martin Cooper/Motorola dialed the first portable cellular phone call, that one day we'd be booking airline tickets, paying bills, taking pictures, watching movies, getting directions, emailing and getting work done on a little device the size of a deck of cards. As these 'cell-phones' have matured, they've also become an integral part of our lives on a daily basis. No longer are they strictly for emergency situations when you need to get help, now they are attached to our hip with an accompanying ear apparatus as if we've evolved with new bodily appendages. People have grown accustomed to being 'connected' everywhere. There have been mobile breakthroughs over the years, like having 3G/4G networks and Wi-Fi capability, but arguably one of the most talked about and coveted mobile devices in recent memory is the Apple iPhone. Ever since the launch of the iPhone in 2007, it has changed the way people perceive and use mobile devices. It's not just the tech-savvy that love the iPhone, it's Moms, Florists, Celebrities, Retailers and everyone in between that marvel at the useful ways iPhone can be used, and for their very own novel purpose. There are literally hundreds of thousands of apps available for iPhone, from the silly and mundane to banking and business. Browsing the web is a breeze with the iPhone with the ability to view apps in both portrait and landscape modes. The ability to zoom and 'pinch' with just your fingers made mobile browsing tolerable, even fun from an iPhone. Shopping from your cell phone is now as common as ordering a cup of coffee - often at the same time! iPhone developers are pushing the limits with augmented reality applications where you can point your iPhone into the sky and see the flight number, speed, destination and other such details as planes fly by. When the iPhone was first introduced and Apple started promoting it as a business capable device, it was missing a few important features. 
Many enterprises, and small businesses for that matter, use Microsoft products for their corporate software - Exchange for email, Word for documents, Excel for spreadsheets and PowerPoint for presentations. Those were, as expected, not available on the iPhone. As new generations of iPhones hit the market and iOS matured, things like iPhone Exchange ActiveSync became available and users could now configure their email to work with Exchange Server. Other office apps like Documents-to-Go make it possible for iPhone users to not only to view Microsoft Word and Excel documents, but they were able to create and edit them too. Today, there are business apps from Salesforce, SAP and Oracle along with business intelligence and HR apps. Companies can even lock down and locate a lost or stolen iPhone. Business users are increasingly looking to take advantage of Apple iOS devices in the corporate environment, and as such IT organizations are looking for ways to allow access without compromising security, or risking loss of endpoint control. IT departments who have been slow to accept the iPhone are now looking for a remote access solution to balance the need for mobile access and productivity with the ability to keep corporate resources secure. The F5 BIG-IP Edge Portal app for iOS devices streamlines secure mobile access to corporate web applications that reside behind BIG-IP Access Policy Manager, BIG-IP Edge Gateway and FirePass SSL VPN. Using the Edge Portal application, users can access internal web pages and web applications securely, while the new F5 BIG-IP Edge Client app offers complete network access connection to corporate resources from an iOS device; a complete VPN solution for both the iPhone and iPad. 
The BIG-IP Edge Portal App allows users to access internal web applications securely and offers the following features:
· User name/password authentication
· Client certificate support
· Saving credentials and sessions
· SSO capability with BIG-IP APM for various corporate web applications
· Saving local bookmarks and favorites
· Accessing bookmarks with keywords
· Embedded web viewer
· Display of all file types supported by native Mobile Safari

Assuming an iPhone is a trusted device and/or network access from an iPhone/iPad is allowed, the BIG-IP Edge Client app offers all the BIG-IP Edge Portal features listed above, plus the ability to create an encrypted, optimized SSL VPN tunnel to the corporate network. BIG-IP Edge Client offers a complete network access connection to corporate resources from an iOS device. With full VPN access, iPhone/iPad users can run applications such as RDP, SSH, Citrix, VMware View, VoIP/SIP, and other enterprise applications. The BIG-IP Edge Client app offers additional features such as Smart Reconnect, which enhances mobility when there are network outages, when users roam from one network to another (like going from a mobile to a Wi-Fi connection), or when a device comes out of hibernate/standby mode. Split tunneling mode is also supported, allowing users to access the Internet and internal resources simultaneously. BIG-IP Edge Client and Edge Portal work in tandem with BIG-IP Edge Gateway, BIG-IP APM and FirePass SSL VPN solutions to drive managed access to corporate resources and applications, and to centralize application access control for mobile users. Enabling access to corporate resources is key to user productivity, which is central to F5's dynamic services model that delivers on-demand IT.
ps

Resources:
F5 Announces Two BIG-IP Apps Now Available at the App Store
F5 BIG-IP Edge Client App
F5 BIG-IP Edge Portal App
F5 BIG-IP Edge Client Users Guide
iTunes App Store
Securing iPhone and iPad Access to Corporate Web Applications – F5 Technical Brief
Audio Tech Brief - Secure iPhone Access to Corporate Web Applications
Is the iPhone Finally Ready for Business Use?
iPhone in Business
The next IT challenge: Mobile device management
Use Your iPhone to See Where Planes are Headed

Advanced Load Balancing For Developers. The Network Dev Tool
It has been a while since I wrote an installment of Load Balancing for Developers, and now I think it has been too long, but never fear, this is the granddaddy of Load Balancing for Developers blogs, covering a useful bit of information about Application Delivery Controllers that you might want to take advantage of. For those who have joined us since my last installment, feel free to check out the entire list of blog entries (along with related blog entries) here, though I assure you that this installment, like most of the others, does not require you to have read those that went before. ZapNGo! is still a growing enterprise, now with several dozen complex applications and a high availability architecture that spans datacenters and the cloud. While the organization relies upon its web properties to generate revenue, those properties have been going along fine with your Application Delivery Controller (ADC) architecture. Now though, you're seeing a need to centralize administration of a whole lot of functions. What worked fine separately for one or two applications is no longer working so well now that you have several development teams and several dozen applications, and you need to find a way to bring the growing inter-relationships under control before maintenance and hidden dependencies swamp you in a cascading mess of disruption. With maintenance taking a growing portion of your application development man-hours, and a reasonably well positioned test environment configured with a virtual ADC to mimic your production environment, all you need now is a way to cut those maintenance man-hours and reduce the amount of repetitive work required to create or update an application. Particularly update an application, because that is a constant problem, where creating is less frequent. With many of the threats that would have made your ZapNGo application known as ZapNGone eliminated, now it is efficiencies you are after. And believe it or not, these too are available in an ADC.
Not all ADCs are created equal, but this discussion will stay on topics that most ADCs can handle, and I'll mention it when I stray from generic into specific – which I will do in one case, because only one vendor supports one of the tools you can use, but all of the others should be supported by whatever ADC vendor you have, though as always, check with your vendor directly first, since I'm not an expert in the inner workings of every one. There is a lot that many organizations do for themselves, and the array of possibilities is long – from implementing load balancing in source code to security checks in the application, the boundaries of what is expected of developers are shaped by an organization, its history, and its chosen future direction. At ZapNGo, the team has implemented a virtual test environment that mirrors production as closely as possible, so that code can be implemented and tested in the way it will be used. They use an ADC for load balancing, so that they don't have to rewrite the same code over and over, and they have a policy of utilizing a familiar subset of ADC functionality on all applications that face the public. The company is successful and growing, but as always happens to companies in that situation, the pressures upon them are changing just by virtue of their growth. There are more new people who don't yet have intimate knowledge of the code base, network topology, security policies, or whatever their area of expertise is. There are more lines of code to maintain, while new projects are being brought up at a more rapid pace and with higher priorities (I've twice lived through the "Everything is high priority? Well this is highest priority!" syndrome while working in IT. Thankfully, most companies grow out of that fast when it's pointed out that if everything is priority #1, nothing is).
Timelines to complete projects – be they new development, bug fixes, or enhancements – are stretching longer and longer as the percentage of gurus in the company goes down and the complexity of the code, and of the architecture it runs on, goes up. So what is a development manager to do to increase productivity? Teaming newer developers with people who've been around since the beginning helps, but those seasoned developers are a smaller and smaller percentage of the workforce, while the volume of work has slowly pulled them away from some of the many products now under management. Adopting coding standards and standardized libraries helps increase experience portability between projects, but doesn't do enough.

Enter offloading to the ADC. Some things just don't have to be done in code, and if they don't have to be, at this stage in the company's growth, IT management at ZapNGo (that's you!) decides they won't be. There just isn't time for non-essential development anymore. Utilizing a policy management tool and/or an Application Firewall on the ADC can improve security without increasing the code base, for example. That shaves hours off of maintenance projects, while standardizing on one or a few implementations that are simply selected on the ADC. Implementing Web Application Acceleration on the ADC means that less in-code optimization has to occur. Performance is no longer purely the role of developers (though of course it is still a concern: no Web Application Acceleration tool can make a loop that runs for five minutes run faster); the acceleration tool can shrink the amount of data being sent to the users' browser for you. Utilizing a WAN Optimization ADC tool can improve the performance of bulk copies or backups to a remote datacenter or cloud storage... The list goes on and on.
The key is that the ADC enables a lot of opportunities for App Dev to be more responsive to the needs of the organization by moving repetitive tasks to the ADC and standardizing them. And a heaping bonus is that it does the same for Operations with a different subset of functionality, meaning one toolset gives both App Dev and Operations a bit more time out of their day for servicing important organizational needs. Some would say this is all part of DevOps, some would say it is not. I leave those discussions to others; all I care is that it can make your apps more secure, fast, and available, while cutting down on workload. And if your ADC supports an SSL VPN, your developers can work from home when necessary. Or, more likely, if your code is your IP, a subset of your developers can. Making ZapNGo more responsive, easier to maintain, and more adaptable to the changes coming next week/month/year. That's what ADCs do. And they're pretty darned good at it.

That brings us to the one bit that I have to caveat with "F5 only", and that is iApps. An iApp is a constructed configuration tool that asks a few questions and then deploys all the bits necessary to set up an ADC for a particular application. Why do I mention it here? Well, if you have dozens of applications with similar characteristics, you can create an iApp Template and use it to rapidly bring new applications, or new instances of applications, online. And since it is abstracted, these iApp Templates can be designed such that App Dev, or even the business owner, is able to operate them. Meaning less time worrying about what network resources will be available, how they're configured, and waiting for Operations to have time to implement them (in an advanced ADC that is being utilized to its maximum in a complex application environment, this can be hundreds of networking objects to configure – all encapsulated into a form). Less time on the project timeline, more time for the next project.
Or for the post-deployment party. One of the two. That's it for the F5-only bit. And knowing that all of these items are standardized means fewer things to get misconfigured, and more surety that it will all work right the first time. As with all of these articles, that offers you the most important benefit... A good night's sleep.

Performance in the Cloud: Business Jitter is Bad
#fasterapp #ccevent While web applications aren't sensitive to jitter, business processes are.

One of the benefits of web applications is that they are generally transported via TCP, which is a connection-oriented protocol designed to assure delivery. TCP has a variety of native mechanisms through which delivery issues can be addressed – from window sizes to selective ACKs to idle time specification to ramp-up parameters. All these technical knobs and buttons serve as a way for operators and administrators to tweak the protocol, often at run time, to ensure the exchange of requests and responses upon which web applications rely. This is unlike UDP, which is more of a "fire and forget" protocol in which the server doesn't really care whether you receive the data or not.

Now, voice and streaming video and audio over the web have always leveraged UDP, and thus have always been highly sensitive to jitter. Jitter is, without getting into layer one (physical) jargon, an undesirable variation in the otherwise consistent delivery of packets. It causes the delay, and sometimes the outright loss, of packets, which users experience as pauses, skips, or jumps in multimedia content. While the same root causes of delay – network congestion, routing changes, timeout intervals – have an impact on TCP, they generally only delay the communication; other than an uncomfortable wait for the user, they do not negatively impact the content itself. The content is eventually delivered, because TCP guarantees that; UDP does not.

However, this does not mean there are no negative impacts (other than trying the patience of users) from the performance issues that may plague web applications, particularly those that are more and more often out there, in the nebulous "cloud". Delays are effectively business jitter, and they have a real impact on the ability of the business to perform its critical functions – and that includes generating revenue.
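For readers who want to anchor the term in something concrete, here is a minimal sketch, in Python, of the standard way jitter is estimated for RTP media streams (RFC 3550); the function name and sample delay values are mine, invented purely for illustration:

```python
# A sketch of the smoothed interarrival jitter estimator RTP receivers
# use (RFC 3550, section 6.4.1): each sample is the absolute change in
# one-way transit time between consecutive packets, folded into a
# running average with a gain of 1/16.

def interarrival_jitter(transit_times):
    """transit_times: one-way packet delays in seconds, in arrival order."""
    jitter = 0.0
    for prev, curr in zip(transit_times, transit_times[1:]):
        d = abs(curr - prev)
        jitter += (d - jitter) / 16.0
    return jitter

# Perfectly steady delivery has zero jitter; erratic delivery does not,
# even when the average delay is similar.
steady = [0.050, 0.050, 0.050, 0.050]
erratic = [0.050, 0.120, 0.030, 0.200]
print(interarrival_jitter(steady))   # 0.0
print(interarrival_jitter(erratic))
```

A voice or video receiver tracks exactly this kind of running estimate to size its de-jitter buffer; TCP applications never see it directly, because retransmission and in-order delivery convert the variation into plain latency instead.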
BUSINESS JITTER and the CLOUD

David Linthicum summed up the issue with performance of cloud-based applications well, and actually used the terminology "jitter" to describe the unpredictable pattern of delay:

    Are cloud services slow? Or fast? Both, it turns out -- and that reality could cause unexpected problems if you rely on public clouds for part of your IT services and infrastructure. When I log performance on cloud-based processes -- some that are I/O intensive, some that are not -- I get results that vary randomly throughout the day. In fact, they appear to have the pattern of a very jittery process. Clearly, the program or system is struggling to obtain virtual resources that, in turn, struggle to obtain physical resources. Also, I suspect this "jitter" is not at all random, but based on the number of other processes or users sharing the same resources at that time.

    -- David Linthicum, "Face the facts: Cloud performance isn't always stable"

But what the multitude of articles coming out over the past year or so with respect to performance of cloud services has largely ignored is the very real, and often measurable, impact on business processes. The jitter that occurs at the protocol and application layers trickles up to become jitter in the business process – a process that may be critical to servicing customers (and thus impacts satisfaction and brand) as well as to the bottom line. Unhappy customers forced to wait for "slow computers", as it is so often called by the technically less adept customer service representatives employed by many organizations, may take to the social media airwaves to express displeasure, cancel an order, or simply refuse to do business with the organization in the future based on delays experienced because of unpredictable cloud performance. Business jitter can also manifest as decreased business productivity measures, which, it turns out, can be quantified mathematically if you put your mind to it.
Understanding the variability of cloud performance is important for two reasons:

1. You need to understand the impact on the business and quantify it before embarking on any cloud initiative, so it can be factored into the overall cost-benefit analysis. It may be that the cost savings from public cloud are much greater than the potential loss of revenue and/or productivity, and thus the benefits of a cloud-based solution outweigh the risks.

2. Understanding the variability, and where it comes from, will help guide you to choosing not only the right provider, but the right solutions that may be able to normalize or mitigate the variability. If the primary source of business jitter is your WAN, for example, then choosing a provider that supports your ability to deploy WAN optimization solutions would be an appropriate strategy. Similarly, if the variability in performance stems from capacity issues, then choosing a provider that allows greater latitude in load balancing algorithms, or the deployment of a virtual (soft) ADC, would likely be the best strategy.

It seems clear from testing and empirical (as well as anecdotal) evidence that cloud performance is highly variable and, as David puts it, unstable. This should not necessarily be seen as a deterrent to adopting cloud services – unless your business is so highly sensitive to latency that even milliseconds can be financially damaging – but rather as a reality that factors into your decision-making process with respect to your choice of provider and the architecture of the solution you'll be deploying (or subscribing to, in the case of SaaS) in the cloud. Knowing is half the battle to leveraging cloud successfully. The other half is strategy and architecture.

I'll be at CloudConnect 2012 and we'll discuss the subject of cloud and performance a whole lot more at the show!

Sessions
- Is Features vs. Performance the New Cloud Battle Line?
- On the performance of clouds
- Face the facts: Cloud performance isn't always stable
- Data Center Feng Shui: Architecting for Predictable Performance
- A Formula for Quantifying Productivity of Web Applications
- Enterprise Apps are Not Written for Speed
- The Three Axioms of Application Delivery
- Virtualization and Cloud Computing: A Technological El Niño
Enterprise Apps are Not Written for Speed

#fasterapp #ccevent They're written for readability, for integration, for business function, and for long-term maintenance...

When I was first entering IT I had the good (or bad, depending on how you look at it) fortune to be involved in some of the first Internet-facing projects at a global transportation organization. We made mistakes, learned lessons, and eventually got down to the business of architecting a framework that would span the entire IT portfolio. One of the lessons I learned early on was that maintainability always won over performance, especially at the code level. Oh, some basic tenets of optimization in the code could be followed – choosing between while, for, and do..until loops based on performance-related concerns – but for the most part, many of the tricks used to improve performance were verboten, and some were banned solely for reasons of readability. The introduction of local scope around an if...then...else statement, for example, was required for readability, even though in terms of performance it introduces unnecessary clock ticks that under load can have a negative impact on overall capacity and response time. Microseconds of delay add up to seconds of delay, after all.

But coding standards in the enterprise lean heavily toward the reality that (1) code lives for a long time and (2) someone other than the original developer will likely be maintaining it. This means readability is paramount to ensuring the long-term success of any development project. Thus performance suffers, and "rewriting the application" is not an option: it's costly, and the changes necessary would likely conflict with the overriding need to ensure long-term maintainability. Even modern web-focused organizations like Twitter and Facebook have run into performance issues based on architectural decisions made early in the lifecycle.
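To make that trade-off concrete, here is a hypothetical illustration (not code from that project): the readable version routes every check through a named helper that documents the business rule, while the "optimized" version inlines the test to avoid a function call per item, the kind of micro-gain those standards told us to forgo.

```python
import timeit

# Readable: the named helper documents the business rule, at the cost of
# one function call per item.
def is_priority(order):
    return order["rush"] and order["paid"]

def count_readable(orders):
    return sum(1 for o in orders if is_priority(o))

# "Optimized": the same test inlined, saving the call overhead but
# burying the rule inside the loop.
def count_inlined(orders):
    return sum(1 for o in orders if o["rush"] and o["paid"])

orders = [{"rush": i % 2 == 0, "paid": i % 3 == 0} for i in range(10_000)]
assert count_readable(orders) == count_inlined(orders)

t_readable = timeit.timeit(lambda: count_readable(orders), number=50)
t_inlined = timeit.timeit(lambda: count_inlined(orders), number=50)
print(f"readable: {t_readable:.3f}s  inlined: {t_inlined:.3f}s")
```

The inlined form is usually measurably faster, and under load those microseconds multiply across every request; enterprise standards accept that cost because the helper survives maintenance by people who never met the original author.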
Many no doubt recall the often very technical discussions regarding Twitter's design and its interaction with its database as a source of performance woes, with hundreds of experts offering advice and criticism. Applications are not often designed with performance in mind. They are architected and designed to perform specific functions and tasks, usually business-related, and they are developed with long-term maintenance in mind. This leads to the problem of performance, which can rarely be addressed by the developers, due to the constraints placed upon them – not least of which may be an active and very vocal user base.

APPLICATION DELIVERY PUTS the FAST back in APPLICATIONS

This is a core reason the realm of application delivery exists: to compensate for issues within the application that cannot – for whatever reason – be addressed through modification of the application itself. Application acceleration, WAN optimization, and load balancing services combine to form a powerful tier of application delivery services within the data center through which performance-related issues can be addressed. This tier allows load balancing services, for example, to be leveraged as a means to scale out an application, which effectively results in similar (and often greater) performance gains as simply scaling up to redress inherent performance constraints within the application. Application acceleration techniques improve the delivery of application-related content and objects through caching, compression, transformation, and concatenation. And WAN optimization services address bandwidth constraints that may inhibit delivery of the application, especially for applications heavy on the data and content side. While developers could certainly modify applications to rearrange content or reduce the size of data being delivered, it is rarely practical or cost-effective to do so.
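As a rough sketch of why two of those acceleration techniques (concatenation and compression) pay off without any application changes, consider a handful of repetitive text assets; the asset contents here are made up purely for illustration:

```python
import gzip

# Three hypothetical text assets (e.g. scripts or stylesheets) that a
# browser would otherwise fetch as three separate requests.
assets = [b"function a(){return 1;}\n" * 50,
          b"function b(){return 2;}\n" * 50,
          b"function c(){return 3;}\n" * 50]

# Concatenation: one response, and one round trip, instead of three.
bundle = b"".join(assets)

# Compression: repetitive text shrinks dramatically on the wire.
compressed = gzip.compress(bundle)

print(len(bundle), len(compressed))         # bytes before and after
assert len(compressed) < len(bundle) // 10  # well over 90% smaller here
```

An ADC applies the same ideas transparently at the delivery tier, negotiating gzip/deflate with the browser and serving the combined, cached result, so developers never have to touch the assets themselves.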
Similarly, it is not cost-effective or practical to ask developers to remove processing bottlenecks when doing so would result in unreadable code. Enterprise applications are not written for speed, but that is exactly what is demanded of them by their users. Both needs must be met, and the introduction of an application delivery tier into the architecture can serve to provide the balance between performance and maintenance by applying acceleration services dynamically. In this way applications need not be modified, but performance and scale are greatly improved.

I'll be at CloudConnect 2012 and we'll discuss the subject of cloud and performance a whole lot more at the show!

Sessions
- From Point A to Point B.
- The Three Axioms of Application Delivery
- WILS: WPO versus FEO
- The Full-Proxy Data Center Architecture
- Even the best written code has a weakness
- At the Intersection of Cloud and Control…
- What is a Strategic Point of Control Anyway?
- The Battle of Economy of Scale versus Control and Flexibility
- What CIOs Can Learn from the Spartans

The Epic Failure of Stand-Alone WAN Optimization
#fasterapp #ccevent WAN optimization is not and cannot be separated from application delivery.

Yes, yes I did say that. There's a reason for that, and after more than a decade of watching the markets that tangentially revolve around making applications faster, I'm here to tell you it's a failure of monumental proportions. The very term "WAN Optimization" has always stuck in my craw (whatever and wherever that may be). That's because optimizing the WAN implies that you're making the WAN faster. The problem is that a WAN is either a dedicated link between two locations (old skool) or a connection to a remote site across the Internet (new skool). In the case of the former, there really isn't a whole lot you can do to that network to make it faster. In the case of the latter, there really isn't anything you can do to influence the networking components out there on the Internet (or "in the cloud", as some are wont to call it these days) to make it faster.

In the data center, optimizing the network has real meaning. It's about modifying routing and switching, moving segments and changing VLANs. It's about fatter connections and changing the way network traffic flows through the pipes. But when it comes to the WAN, you've got very little to tweak. This is the reason that when folks said "WAN Optimization", what they really meant was "we squish data down so there are fewer packets and thus it traverses the network faster." Wonderbar.

APPLICATIONS are not NETWORKS

What IT and business stakeholders really want – what they care about and what they're basing key performance indicators on – is faster applications. And if you focus on optimizing the network, well, that's not really the right approach. The right approach is to focus on the application, with its unique transport and application layer behaviors and what those imply in terms of performance. Then you focus on resolving any issues that arise because of that behavior.
Then, if you still need to, you can apply traditional techniques like compression and deduplication to reduce the size of the data and make the results of your initial efforts even better. But if you aren't focusing on those initial efforts first, you're doing it wrong.

If you profile an application and find that performance degradations are caused by excessive load on the server, optimizing the WAN is not really going to help. Reducing the load on the server, however, will. While it is certainly the case that improving transfer speeds over the WAN can alleviate some of the load by clearing out server queues faster, it won't solve the entire problem, because some of the degradation is caused by excessive context switching and the simple fact of compute time-sharing between threads and processes on a server. Optimizing the WAN isn't really going to address that, but reducing the number of threads and processes required will.

See, application delivery is about the big picture – and the focal point of that picture is the application. WAN optimization has a role to play in that picture, but it's a supporting role, just like network optimization and web acceleration. WAN optimization is not and cannot be separated from application delivery. It simply isn't a stand-alone technology – or at least not one that really enables organizations to realize real benefits with respect to performance and availability of applications. It's a piece of the larger picture that is application delivery.

For some applications – and uses – WAN optimization will provide the biggest gains in performance. Transferring large data sets, such as those related to backups and DR efforts, or virtual machines in long-distance migration efforts, needs the assistance of WAN optimization solutions. For others, however, such technology does very little to improve performance.
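To see why deduplication and compression complement rather than replace each other, here is a toy sketch of the idea: deduplication removes whole repeated blocks first, then compression squeezes the unique bytes that remain. (Fixed-size chunking keeps the example short; real WAN optimizers use content-defined, variable-size chunks and dictionaries shared between the two endpoints.)

```python
import gzip
import hashlib

def dedupe(data, chunk_size=1024):
    """Toy fixed-size-chunk deduplication: returns (unique chunk store,
    ordered list of chunk references)."""
    store, refs = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk   # first sighting: must cross the WAN
        refs.append(digest)         # repeats become tiny references
    return store, refs

# A backup-like stream with heavy repetition across the payload.
payload = (b"A" * 4096 + b"B" * 4096) * 8   # 64 KB, only 2 KB unique

store, refs = dedupe(payload)
unique_bytes = sum(len(c) for c in store.values())

# Dedupe first, then compress what little is left.
compressed = gzip.compress(b"".join(store.values()))
print(len(payload), unique_bytes, len(compressed))
```

Run in the other order, compression would first scramble the repeated-block boundaries that deduplication keys on, which is one intuition for why dedupe is typically applied before compression.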
That's why application delivery focuses on the whole, on the big picture – the application and the context in which it is accessed. Without a holistic approach to application delivery you may be making one piece of the puzzle better – or you may just be spinning your technological wheels.

From Point A to Point B.
The complexities of life often escape a young child. The Little Man asked me the other day why I had to go to work, which was both a compliment to wanting to spend time with me and an unintended backhand slap at Lori, who was going to hang out with him while I took care of business. The answer was the usual stuff: that working paid the bills, and that work has its own rewards... It did not include "and I like my job", though I do, simply because I didn't want to imply "more than hanging out with you" to a three-year-old. But children boil everything down to simplicity. The picture over there is of said son, wearing a Pickelhaube with a Transformers shirt and (yes, really) proclaiming he was an Autobot because of the helmet. We adults, on the other hand, tend to layer complexity upon complexity until we're not certain we're getting value anymore, but we're proud of whatever it is we have done/built/know.

IT is like that sometimes. What is "the network" – in tweet length, for example? Not only is the answer tough to cram into tweet length, it is tougher to cram into tweet length and make useful. It is even more difficult to cram it into tweet length and include all the various constituencies of IT in the answer. But it can be done, because a "network" is a simple concept. You're moving information from point A to point B. That's it. Everything else is layers we've added over it to make some aspect of that movement better, or to facilitate the movement of data. But in the end, it is just sending bytes over the wire.

If, for example, a business person with no IT background asks why a whole section of the corporate network is down, they don't care about routing tables or even DNS. They care about "The network broke, and those clients can't get to the datacenter.
The network is complex, but we'll have it up soon." If you're moving data over the WAN, another layer of complexity appears, because you want to move data over the WAN at a decent speed, but most applications aren't designed for network communication optimization. Instead they're designed to be very good at moving data, and they expect the network to worry about performance issues. But business users don't want to hear about compression, dedupe, SSL offload, or any of that when things go wrong. They want to hear "The copy of our data at the remote site is a little out of sync right now, but we're on it, and it will all be fine soon."

Want to secure the network? BAM! Another complex layer is tossed on top of that – but the point is that you don't just want to move data from point A to point B anymore, you want to move data from point A to point B securely. Again, if your ADS or LDAP system goes on the fritz, you're going to want to be able to tell people "Users can't log in right now; the servers that know who can do what are offline, but we'll have them back up soon," because users care that data isn't moving from point A to point B, not about whatever bugbear has cropped up with authentication or the network.

Want to give web users – internal and external – an enhanced experience while reducing the load on your servers? Another layer of complexity piles on as you use Web Application Optimization techniques. They work great – at least those by F5 do, since I've been a user of them – but they add a whole new layer of oddities. "No, the new logo isn't showing reliably, but the team is flushing the cache and/or changing the expiration date to get that fixed, and it'll be right soon" is what business users want to hear.

Load balancing to increase reliability and performance adds yet another layer of complexity to the overall system, a layer that has all of its own terminology.
But when load balancing goes wrong, "We misconfigured the Virtual IP, and the Pool it points to does not serve your app" isn't what the business person wants to hear. "Yes, we had an error, but your application should be back online soon" is the right answer.

Server virtualization doesn't directly add complexity to the network, but server sprawl certainly does, because now there are a lot more clients out there. One of the early problems with server sprawl, which now seems largely defeated, was "Where is that non-responding virtual running again?" But still, if the hardware that a user's VM is running on goes down, they want to hear "Yes, we had a hardware failure, but your application should be back up on another server soon, and we'll get the problem fixed and then move your application back."

Desktop virtualization adds both complexity and traffic to your network, but simplifies a whole array of things from desktop management to licensing. Still, when it is performing poorly, a business leader does not want to hear about oversubscription, congestion, the number of VMs per server, or anything else technical. They want to hear "Yes, we see that performance is down for those users. We've got a plan to fix it, and all should be back to normal soon."

The thing is, F5 sells tools to help with all of these issues. In fact, F5 sells a platform that you can customize to help with all of these issues... But notice that all the answers to the business are simple, and end with some variant of "back up soon"? We can supply you with tools to manage the "back up soon", or even make you able to say "There was a problem, but you shouldn't have noticed"; we cannot provide you with a tool to make everything simple. The business sometimes needs educating, but most of the time they just need less detail and more information. We've got a ton of cool stuff going on in IT these days, but sometimes the complexity masks the simplicity. Boil it down to the basics, and tackle real problems.
And enjoy talking simplicity for a change... Because the next round of Buzzword Bingo is on its way in 5,4,3,2,1…