Deduplication and Compression – Exactly the same, but different.
One day many years ago, Lori's and my oldest son held up two sheets of paper and said, "These two things are exactly the same, but different!" Now, he's a very bright individual; he was just young, and didn't even get how incongruous the statement was. Being a fun-loving family that likes to tease each other on occasion, we of course have not yet let him live it down. It was honestly more than a decade ago, but all is fair: he doesn't let Lori live down something funny that she did before he was born. It is all in good fun, of course.

Why am I bringing up this family story? Because that phrase comes to mind when you start talking about deduplication and compression. Highly complementary and very similar, they are pretty much "exactly the same, but different". Since these technologies are both used heavily in WAN optimization, and are growing in use on storage products, the topic intrigued me. To get this out of the way: at F5, compression is built into the BIG-IP family as a feature of the core BIG-IP LTM product, and deduplication is an added layer implemented over BIG-IP LTM on the BIG-IP WAN Optimization Module (WOM). Other vendors have similar but varied (there goes a variant of that phrase again) implementation details.

Before we delve too deeply into this topic, what caught my attention and started me pondering the whys was that F5's deduplication is applied before compression, and it seems that reversing the order changes the performance characteristics. I love a good puzzle, and while the fact that one should come before the other was no surprise, I wanted to know why the order was what it was, and what the impact of reversing them in processing might be. So I started working to understand the implementation details of these two technologies. Not to understand them from an F5 perspective, though that is certainly where I started, but to understand how they interact and complement each other. While much of this discussion also applies to in-place compression and deduplication such as that used on many storage devices, some of it does not, so assume that I am talking about networking, specifically WAN networking, throughout this blog.

At the very highest level, deduplication and compression are the same thing. They both look for ways to shrink your dataset before passing it along. After that, it gets a bit more complex. If it were really that simple, after all, we wouldn't call them two different things. Well, okay, we might; IT has a way of lumping competing standards, product categories, even jobs together under the same name. But still, they wouldn't warrant two different names in the same product, as F5 does with BIG-IP WOM. The thing is that compression can apply transformations to data to shrink it, and it also looks for small groupings of repetitive byte patterns and replaces them, while deduplication looks for larger groupings of repetitive byte patterns and replaces them. In the implementation you'll see on BIG-IP WOM, deduplication looks for larger byte patterns repeated across all streams, while compression applies transformations to the data and, when removing duplication, only looks for smaller combinations within a single stream. The net result? The two are very complementary, but if you run compression before deduplication, compression will find a whole collection of small repeating byte patterns, and between that and the transformations, deduplication will find nothing, making compression work harder and deduplication spin its wheels.
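To make that ordering argument concrete, here is a minimal Python sketch of the idea. It is my own illustration under simplified assumptions (fixed 1 KB chunks, an unbounded in-memory pattern cache, zlib standing in for the compression engine), not anything BIG-IP-specific:

```python
import hashlib
import zlib

CHUNK = 1024  # assume a ~1 KB minimum match size, as mentioned above

def dedupe(stream: bytes, seen: set) -> tuple[bytes, int]:
    """Replace chunks already seen (on any stream) with 9-byte references.
    Returns the reduced stream and the number of chunk hits."""
    out, hits = bytearray(), 0
    for i in range(0, len(stream), CHUNK):
        chunk = stream[i:i + CHUNK]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            hits += 1
            out += b"R" + digest[:8]                              # reference
        else:
            seen.add(digest)
            out += b"L" + len(chunk).to_bytes(2, "big") + chunk   # literal
    return bytes(out), hits

# Two "streams" that share most of their content, as cross-stream duplicates do.
base = b"GET /app/data HTTP/1.1\r\nHost: example.com\r\n" * 200
stream_a, stream_b = base + b"session=alpha", base + b"session=bravo"

# Dedupe first (pattern cache shared across streams), then compress the residue.
cache: set = set()
_, _ = dedupe(stream_a, cache)
resid_b, hits_b = dedupe(stream_b, cache)
print("dedupe -> compress: stream B matched", hits_b, "chunks;",
      "compression only has to chew on", len(resid_b), "bytes")

# Compress first: the compressed outputs share no large byte runs,
# so a dedupe pass across them matches nothing.
cache2: set = set()
_, _ = dedupe(zlib.compress(stream_a), cache2)
_, hits2 = dedupe(zlib.compress(stream_b), cache2)
print("compress -> dedupe: stream B matched", hits2, "chunks;",
      "compression had to chew on the full", len(stream_b), "bytes")
```

Run against two mostly identical streams, the dedupe-first pass reports the second stream matching almost every chunk and leaves compression only a few hundred bytes of residue to work on; the compress-first pass reports zero chunk matches and hands compression the full payload.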
There are other differences – because deduplication deals with large runs of repetitive data (I believe that in BIG-IP the minimum size is over a K), it uses some form of caching to hold patterns that duplicates can be matched against, and the larger the cache, the more strings of bytes you have to compare to. This introduces some fun around where the cache should be stored. In memory is fast, but limited in size; on flash disk is fast and has a greater size, but is expensive; and on disk is slow but has a huge advantage in size. Good deduplication engines can support all three and thus are customizable to what your organization needs and can afford.

Some workloads just won't benefit from one, but will get a huge benefit from the other. The extremes are good examples of this phenomenon. If you have a lot of in-the-stream repetitive data that is too small for deduplication to pick up, and little or no cross-stream duplication, then deduplication will be of limited use to you, and the act of running through the dedupe engine might actually degrade performance a negligible amount – of course, everything is algorithm dependent, so depending upon your vendor it might degrade performance a large amount as well. On the other extreme, if you have a lot of large-byte-count duplication across streams, but very little within a given stream, deduplication is going to save your day, while compression will, at best, offer you a little benefit.

So yes, they're exactly the same from the 50,000-foot view, but very, very different from the benefits and use-cases view. And they're very complementary, giving you more bang for the buck.
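As an aside on the cache-size point, the effect is easy to see with a toy bounded cache. This is a deliberately simplified sketch (an ordered dict standing in for the pattern store, made-up chunk counts), not a description of how any vendor actually sizes or stores its dictionary:

```python
from collections import OrderedDict
import hashlib

class PatternCache:
    """A bounded dedupe pattern cache. Sized for RAM it evicts old patterns
    quickly; sized for flash or disk it keeps more history and matches more
    duplicates. Eviction here is simple oldest-first."""

    def __init__(self, max_entries: int):
        self.max_entries = max_entries
        self.entries: OrderedDict[bytes, bool] = OrderedDict()
        self.hits = 0

    def offer(self, chunk: bytes) -> bool:
        """Return True if the chunk was already cached (send a reference),
        False if it is new (send the data and remember it)."""
        key = hashlib.sha256(chunk).digest()
        if key in self.entries:
            self.entries.move_to_end(key)     # keep recently matched patterns warm
            self.hits += 1
            return True
        self.entries[key] = True
        if len(self.entries) > self.max_entries:
            self.entries.popitem(last=False)  # evict the oldest pattern
        return False

# 500 distinct 1 KB chunks, each transmitted twice (the second pass repeats the first).
chunks = [hashlib.sha256(str(n).encode()).digest() * 32 for n in range(500)] * 2

small = PatternCache(max_entries=100)    # an "in memory" sized cache
large = PatternCache(max_entries=1000)   # an "on disk" sized cache
for c in chunks:
    small.offer(c)
    large.offer(c)

print("small cache hits:", small.hits)   # 0 -- the patterns age out before they repeat
print("large cache hits:", large.hits)   # 500 -- every repeated chunk is matched
```

Same traffic, same algorithm; the only variable is how much pattern history the cache can afford to keep.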
F5 ... Wednesday: Bye Bye Branch Office Blues

#virtualization #VDI Unifying desktop management across multiple branch offices is good for performance – and operational sanity.

When you walk into your local bank, or local retail outlet, or one of the Starbucks in Chicago O'Hare, it's easy to forget that these are more than your "local" outlets for that triple grande dry cappuccino or the latest in leg-warmer fashion (either I just dated myself or I'm incredibly aware of current fashion trends, you decide which). For IT, these branch offices are one of many end nodes on a corporate network diagram located at HQ (or the mother ship, as some of us known as "remote workers" like to call it) that require care and feeding – remotely.

The number of branch offices continues to expand and, regardless of how they're counted, they number in the millions. In a 2010 report, the Internet Research Group (IRG) noted:

"Over the past ten years the number of branch office locations in the US has increased by over 21% from a base of about 1.4M branch locations to about 1.7M at the end of 2009."

While back in 2004, IDC research showed four million branch offices, as cited by Jim Metzler:

"The fact that there are now roughly four million branch offices supported by US businesses gives evidence to the fact that branch offices are not going away. However, while many business leaders, including those in the banking industry, were wrong in their belief that branch offices were unnecessary, they were clearly right in their belief that branch offices are expensive. One of the reasons that branch offices are expensive is the sheer number of branch offices that need to be supported. For example, while a typical company may have only one or two central sites, they may well have tens, hundreds or even thousands of branch offices." – The New Branch Office Network, Ashton, Metzler & Associates

Discrepancies appear to derive from the definition of "branch office" – is it geographic or regional location that counts? Do all five Starbucks at O'Hare count as five separate branch offices or one? Regardless of how they're counted, the numbers are big, and growth rates say they're just going to get bigger. From an IT perspective, which has trouble scaling to keep up with corporate data center growth let alone branch office growth, this spells trouble. Compliance, data protection, patches, upgrades, performance, even routine troubleshooting are all complicated enough without the added burden of accomplishing it all remotely. Maintaining data security, too, is a challenge when remote offices are involved.

It is just these challenges that VMware seeks to address with its latest Branch Office Desktop solution set, which lays out two models for distributing and managing virtual desktops (based on VMware View, of course) to help IT mitigate if not all then most of the obstacles IT finds most troubling when it comes to branch office anything. But as with any distributed architecture constrained by bandwidth and technological limitations, there are areas that benefit from a boost from VMware partners. As a long-time strategic and technology partner, F5 brings its expertise in improving performance and solving unique architectural challenges to the VMware Branch Office Desktop (BOD) solution, resulting in LAN-like convenience and a unified namespace with consistent access policy enforcement from HQ to wherever branch offices might be located.
F5 Streamlines Deployments of Branch Office Desktops

KEY BENEFITS
· Local and global intelligent traffic management with single-namespace and username-persistence support
· Architectural freedom to combine Virtual Editions with physical appliances
· Optimized WAN connectivity between branches and primary data centers

Using BIG-IP Global Traffic Manager (GTM), a single namespace (for example, https://desktop.example.com) can be provided to all end users. BIG-IP GTM and BIG-IP Local Traffic Manager (LTM) work together to ensure that requests are sent to a user's preferred data center, regardless of the user's current location. BIG-IP Access Policy Manager (APM) validates the login information against the existing authentication and authorization mechanisms such as Active Directory, RADIUS, HTTP, or LDAP.

In addition, BIG-IP LTM works with the F5 iRules scripting language, which allows administrators to configure custom traffic rules. F5 Networks has tested and published an innovative iRule that maintains connection persistence based on the username, irrespective of the device or location. This means that a user can change devices or locations and log back in to be reconnected to a desktop identical to the one last used.

By taking advantage of BIG-IP LTM to securely connect branch offices with corporate headquarters, users benefit from optimized WAN services that dramatically reduce transfer times and improve the performance of applications relying on data center-hosted resources. Together, F5 and VMware provide more efficient delivery of virtual desktops to the branch office without sacrificing performance, security, or the end-user experience.
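The published iRule itself isn't reproduced here, but the idea behind username persistence is simple enough to sketch. The Python below is a conceptual stand-in only (hypothetical pool members and usernames, no BIG-IP APIs): a sticky map keyed on the login name, consulted before the normal load-balancing decision.

```python
import itertools

# Hypothetical View connection servers sitting behind the virtual server.
POOL = ["view-cs-1.example.com", "view-cs-2.example.com", "view-cs-3.example.com"]
round_robin = itertools.cycle(POOL)

# Persistence table keyed on username, not on client IP or cookie, so a user
# who switches from a branch thin client to a laptop still lands on the same desktop.
persistence: dict[str, str] = {}

def pick_member(username: str) -> str:
    member = persistence.get(username)
    if member is None:                  # first login: normal load balancing
        member = next(round_robin)
        persistence[username] = member  # remember the choice for next time
    return member

print(pick_member("astark"))   # e.g. view-cs-1.example.com
print(pick_member("bwayne"))   # e.g. view-cs-2.example.com
print(pick_member("astark"))   # same member as the first call, new device or not
```

On the BIG-IP, the equivalent lookup happens inside the iRule against a persistence table, which is what lets a user change devices or locations and still reconnect to the same desktop.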
Long Distance Live Partition Mobility – A tale of collaboration

F5 Networks and IBM continue their long tradition of collaboration with the latest supported solution: Long Distance Live Partition Mobility. F5 worked closely with IBM to further perfect IBM Virtual I/O technology to better support long distance mobility, proving again that when customers do business with F5 or IBM, they're getting a wealth of value-added benefits from the partnership.

WHAT IS IT?

Partition Mobility is IBM's PowerVM capability that allows for the transfer of active and inactive partitions from one V/IO server to another. This capability has been offered since POWER6 technology-based systems, and in our testing we were able to move a running machine between two datacenters separated by a simulated 1,000 kilometers. The details of how we achieved this, and a bit more on the solution, are below, after the diagram.

SOLUTION OVERVIEW

The basics of the solution are that during active migrations, a running partition is moved from a primary LPAR (pictured above on the left) to a failover LPAR (pictured above on the right). Applications can continue to handle their normal workloads during this process. The rest of the picture comprises these pieces: the IBM Hardware Management Console (HMC – pictured above at the top left), the IBM Integrated Virtualization Manager (IVM – not pictured), the shared storage (pictured in both data centers), and the F5 BIG-IP technology that enables this – specifically, F5 BIG-IP WAN Optimization Module to enable, secure, and accelerate the data transfers; F5 BIG-IP EtherIP to keep active client sessions connected during transfers; and finally F5 BIG-IP Global Traffic Manager (GTM – not pictured, and optional) to direct incoming traffic intelligently during failover events.

All of the details about IBM's Partition Mobility feature can be found here: http://www.redbooks.ibm.com/redbooks/pdfs/sg247460.pdf

Recommended reading from F5 about setting up these environments can be found here: http://www.f5.com/pdf/deployment-guides/f5-vmotion-flexcache-dg.pdf and http://www.f5.com/pdf/white-papers/cloud-vmotion-f5-wp.pdf

We've built this solution in the lab, but a deployment guide is still pending, so in the meantime, I hope that my VMotion deployment guide and the white paper on cloud migration will fill in any questions you have about the nuts and bolts of the deployment. The solutions are very similar while, of course, the underlying technologies are unique to each partner. You can of course email me with any questions you have as well, at n dot moshiri at f5.com.

WHAT IS REQUIRED?

The basics are as follows:
· AIX 7.1 is recommended
· VIOS version 2.2.2.0 (released August 2012) is recommended
· Storage with connectivity to both data centers (or tightly coupled replication)
· A latency (and distance) between the two data centers that can support the nature of the application running in the infrastructure
· And of course, network connectivity between the two data centers

I will throw in a quick word about VIOS release 2.2.2.0. During initial testing we discovered that IBM's mobility manager picked an arbitrary TCP port during migration events. For internal migrations this would pose little problem; however, in order to secure, optimize, and allow transmission over firewalls in the long distance scenario, arbitrary ports would simply not do. IBM stepped up and delivered. With version 2.2.2.0, a user-selectable port range allows migration events to happen in a much more controlled manner on the network.
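To see why a fixed, user-selectable range matters operationally, here is a small sketch (the addresses and port range below are entirely made up) that turns a configured mover port range into an explicit, finite set of firewall allowances between the two sites; with arbitrary ports there is no such finite list to write.

```python
# Hypothetical endpoints: the mover partitions on each VIO server.
SOURCE_MSP = "10.10.1.20"          # primary data center
TARGET_MSP = "10.20.1.20"          # failover data center
PORT_RANGE = range(32768, 32778)   # the user-selected migration port range

def firewall_rules(src: str, dst: str, ports: range) -> list[str]:
    """Produce allow rules only for the configured migration ports, in both
    directions, instead of opening the firewall to arbitrary high ports."""
    rules = []
    for port in ports:
        rules.append(f"allow tcp from {src} to {dst} port {port}")
        rules.append(f"allow tcp from {dst} to {src} port {port}")
    return rules

for rule in firewall_rules(SOURCE_MSP, TARGET_MSP, PORT_RANGE):
    print(rule)
```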
Bottom line, migration traffic between the two data centers needs to be secured and accelerated, and client traffic needs to stay up and know where to go. BIG-IP provides all of this functionality through Local Traffic Manager (LTM), WAN Optimization Module (WOM), and Global Traffic Manager (GTM). This can be an ideal solution for certain use cases. When examining these architectures, analyze:
· Network connectivity between the data centers
· Shared storage between the data centers
· What the workloads on the partitions are, and what their memory footprints are

Every Partition Mobility architecture will be different. Reach out to me or your F5 FSE, and definitely plan on several rounds of architectural review. F5 and IBM will be there to back you up.
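When you do that sizing exercise, a back-of-the-envelope estimate like the one below is a useful first pass at whether the link between data centers can move a partition's memory footprint in a tolerable window. Every number in it is a placeholder for your own environment, including the assumed 3:1 gain from WAN optimization.

```python
def migration_estimate(memory_gb: float, link_mbps: float,
                       efficiency: float = 0.7) -> float:
    """Rough seconds to copy a partition's memory footprint across the WAN.
    efficiency discounts protocol overhead, latency, and competing traffic;
    WAN optimization (compression/dedupe) effectively raises link_mbps."""
    bits = memory_gb * 8 * 1024**3
    return bits / (link_mbps * 1_000_000 * efficiency)

# Placeholder numbers: a 32 GB partition over a 1 Gbps inter-DC link.
raw = migration_estimate(32, 1000)
optimized = migration_estimate(32, 1000 * 3)   # assume ~3:1 reduction from WAN optimization
print(f"raw copy:        ~{raw / 60:.1f} minutes")
print(f"with 3:1 gains:  ~{optimized / 60:.1f} minutes")
```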
In the Cloud, It's the Little Things That Get You. Here are nine of them.

#F5 Eight things you need to consider very carefully when moving apps to the cloud.

Moving to a model that utilizes the cloud is a huge proposition. You can throw some applications out there without looking back – if they have no ties to the corporate datacenter and light security requirements, for example – but most applications require quite a bit of work to make them both mobile and stable. Just connections to the database raise all sorts of questions, and most enterprise-level applications require connections to DC databases. But these are all problems people are talking about. There are ways to resolve them, ugly though some may be. The problems that will get you are the ones no one is talking about. So of course, I'm happy to dive into the conversation with some things that would be keeping me awake were I still running a datacenter with a lot of interconnections and getting beat up with demands for cloudy applications.

1. The last year has proven that cloud services WILL go down; you can't plan like they won't, regardless of the hype. When they do, your databases must be 100% in synch, or business will be lost. 100%.
2. Your DNS infrastructure will need attention, possibly for the first time since you installed it. Serving up addresses from both local and cloud providers isn't so simple. Particularly during downtimes.
3. Security – both network and app – will have to be centralized. You can implement separate security procedures for each deployment environment, but you are only as strong as your weakest link, and your staff will have to remember which policies apply where if you go that route.
4. Failure plans will have to be flexible. What if part of your app goes down? What if the database is down, but the web pages are fine – except for that "failed to connect to database" error? No matter what the hype says, the more places you deploy, the more likely you are to have an outage. The IT manager's role is to minimize that increase.
5. After a failure, recovery plans will also need to be flexible. What if part of your app comes up before the rest? What if the database spins up, but is now out of synch with your backup or alternate database?
6. When (not if) a security breach occurs on a cloud-hosted server, how much responsibility does the cloud provider have to help you clean up? Sometimes it takes more than spinning down your server to clean up a mess, after all.
7. If you move mission-critical data to the cloud, how are you protecting it? Contrary to the wild claims of the clouderati, your data is in a location you do not have 100% visibility into, so you're going to have to take extra steps to protect it.
8. If you're opening connections back to the datacenter from the cloud, how are you protecting those connections? They're trusted server to trusted server, but "trusted" is now relative.

Of course there are solutions brewing for most of these problems. Here are the ones I am aware of. I guarantee that, since I do not "read all of the Internets" each day (Lori does), I'm missing some, but this can get you started.

1. Just include cloud in your DR plans: what will you do if service X disappears? Is the information on X available somewhere else? Can you move the app elsewhere and update DNS quickly enough? Global Server Load Balancing (GSLB) will help with this problem and others on the list – it will eliminate the DNS propagation lag at least. But beware, for many cloud vendors it is harder to do DR. Check what capabilities your provider supports.
2. There are tools available that just don't get their fair share of thunder, IMO – like Oracle GoldenGate – that replicate each SQL command to a remote database. These systems create a backup that exactly mirrors the original. As long as you don't get a database-modifying attack that looks valid to your security systems, these architectures and products are amazing.
3. People generally don't care where you host apps, as long as when they type in the URL or click on the link, it takes them to the correct location. Global DNS and GSLB will take care of this problem for you.
4. Get policy-based security that can be deployed anywhere, including the cloud, or, less attractively (and sometimes impractically), code security into the app so the security moves with it.
5. Application availability will have to go through another round like it did when we went distributed and then SOA. Apps will have to be developed with an eye to "is critical service X up?", where service X might well be in a completely different location from the app. If not, remedial steps will have to occur before the app can claim to be up. Or local load balancing can buffer you by making service X several different servers/virtuals.
6. What goes down (hopefully) must come back up. But the same safety steps implemented in #5 will cover #6 nicely, for the most part. Database consistency checks are the big exception; do those on recovery.
7. Negotiate this point if you can. Lots of cloud providers don't feel the need to negotiate anything, but asking the questions will give you more information. Perhaps take your business to someone who will guarantee full cooperation in fixing your problems.
8. If you actually move critical databases to the cloud, encrypt them. Yeah, I do know it's expensive in processing power, but they're outside the area you can 100% protect. So take the necessary step.
9. Secure tunnels are your friend. Really. Don't just open a hole in your firewall and let "trusted" servers in, because it is possible to masquerade as a trusted server. Create secure tunnels, and protect the keys.
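On point 8, "encrypt them" can be as simple as encrypting records before they ever leave your network, so the copy sitting at the provider is opaque without a key that stays home. A minimal sketch using the third-party cryptography package (pip install cryptography); key storage and rotation, the genuinely hard parts, are left out:

```python
from cryptography.fernet import Fernet   # pip install cryptography

# Generate the key once, keep it in your own key store, never in the cloud.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer=ACME;card=4111111111111111;limit=50000"

ciphertext = cipher.encrypt(record)      # this is what gets stored off-premises
print("stored in the cloud:", ciphertext[:40], b"...")

# Only holders of the key, back in your own datacenter, can read it again.
print("recovered locally: ", cipher.decrypt(ciphertext))
```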
That's it for now. The cloud has a lot of promise, but like everything else in mid hype cycle, you need to approach the soaring commentary with realistic expectations. Protect your data as if it is your personal charge, because it is; the cloud provider is not the one (or not the only one) who will be held accountable when things go awry. So use the cloud to keep doing what you do – making your organization hum with daily business – and avoid the pitfalls wherever possible.

In my next installment I'll be trying out the new footer Lori is using; looking forward to your feedback. And yes, I did put nine in the title to test the "put an odd-numbered list in, people love that" theory. I think y'all read my stuff because I'm hitting relatively close to the mark, but we'll see now, won't we?

There is more to it than performance.

Did you ever notice that sometimes, "high efficiency" furnaces aren't? That some things the furnace just doesn't cover – like the quality of your ductwork, for example? The same is true of a "high performance" race car. Yes, it is VERY fast, assuming a nice long flat surface for it to drive on. Put it on a dirt road in the rainy season and, well, it's just a gas hog. Or worse, a stuck car. I could continue the list. A "high energy" employee can be relatively useless if they are assigned tasks at which brainpower, not activity rate, determines success… But I'll leave it at those three; I think you get the idea.

The same is true of your network. Frankly, increasing your bandwidth in many scenarios will not yield the results you expected. Oh, it will improve traffic flow, and overall the performance of apps on the network will improve; the question is "how much?" It would be reasonable – or at least not unreasonable – to expect that doubling Internet bandwidth should stave off problems until you double bandwidth usage. But often the problems are with the apps we're placing upon the network, not the pipes themselves. Sometimes, it's not the network at all.

Check the ecosystem, not just the network. When I was the Storage and Servers Editor over at NWC, I reviewed a new (at the time) HP server that was loaded. It had a ton of memory, a ton of cores, and could make practically any application scream. It even had two gigabit NICs in it. But they weren't enough. While I had almost nothing bad to say about the server itself, I did observe in the summary of my article that the network was now officially the bottleneck. Since the machine had high-speed SAS disks, disk I/O was not as big a deal as it traditionally has been; high-speed cached memory meant memory I/O wasn't a problem at all; and multiple cores meant you could cram a ton of processing power in. But team those two NICs and you'd end up with slightly less than 2 gigabits of network throughput. Assuming 100% saturation, that was really less than 250 megabytes per second, and that counts both in and out. For query-intensive database applications or media streaming servers, that just wasn't keeping pace with the server. Now here we are, six or so years later, and similar servers are in use all over the globe… running VMs. Meaning that several copies of the OS are now carving up that throughput.

So start with your server. Check it first if the rest of the network is performing; it might just be the problem. And while we're talking about servers, the obvious one needs to be mentioned… Don't forget to check CPU usage. You just might need a bigger server or load balancing, or, these days, fewer virtuals on your server. Heck, as long as we're talking about servers, let's consider the app too. The last few years, for a variety of reasons, we've seen less focus on apps whose performance is sucktacular, but it still happens. Worth looking into if the server turns out to be the bottleneck.

Old gear is old. I was working on a network that deployed an ancient Cisco switch. The entire network was 1 Gig, except for that single switch. But tracing wires showed that switch to lie between the Internet and the internal network. A simple error, easily fixed, but an easy error to have in a complex environment, and certainly one to be aware of. That switch was 10/100 only. We pulled it out of the network entirely, and performance instantly improved.

There's necessary traffic, and then there's…
And all that does need to be there doesn't have to be so bloated. Look for sources of UDP broadcasts. More often than you would think, applications broadcast traffic that you don't care about. Cut them off. For other traffic, well, there is Application Delivery Optimization. ADO improves application delivery through a variety of technical solutions, but we'll focus on those that make your network and your apps seem faster. You already know about them – compression, caching, image optimization… and, in the case of back-end services, de-duplication. But have you considered what they do other than improve perceived or actual performance?

Free bandwidth. Anything that reduces the size of application data leaving your network also reduces the burden on your Internet connection. This goes without saying, but as I alluded to above, we sometimes overlook the fact that it is not just application performance we're impacting, but the effectiveness of our connections – connections that grow more expensive by leaps and bounds each time we have to upgrade them. While improving application performance is absolutely a valid reason to seek out ADO, delaying or eliminating the need to upgrade your Internet connection(s) is another. Indeed, in many organizations it is far easier to do TCO justification based upon deferred upgrades than it is based upon "our application will be faster", while both are benefits of ADO.

New stuff! As time wears on, SPDY, IPv6, and a host of other technologies will be more readily available to help you improve your network. Meanwhile, check out gateways for these protocols to make the transition easier.

In summation: there are a lot of reasons for apps not to perform, and there are a lot of benefits to ADO. I've listed some of the easier problems to ferret out; the deeper into your particular network you get, the harder it is to generalize problems. But these should give you a taste for the types of things to look for. And a bit more motivation to explore ADO. Of course I hope you choose F5 gear for ADO and overall application networking, but there are other options out there. I think. Maybe.
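As a postscript to the free-bandwidth point, the deferred-upgrade math is easy to sketch. The figures below (link size, utilization, share of compressible traffic, compression ratio) are placeholders; plug in numbers you have actually measured:

```python
def after_ado(utilization: float, compressible_share: float,
              compression_ratio: float) -> float:
    """Link utilization after ADO: the compressible share of the traffic
    shrinks by the average compression ratio, the rest passes untouched."""
    return utilization - utilization * compressible_share * compression_ratio

link_mbps, before = 100, 0.80     # a 100 Mbps link running at 80%
after = after_ado(before, compressible_share=0.70, compression_ratio=0.60)

print(f"before ADO: {before:.0%} ({before * link_mbps:.0f} Mbps)")
print(f"after ADO:  {after:.0%} ({after * link_mbps:.0f} Mbps) "
      f"-- roughly {(before - after) * link_mbps:.0f} Mbps of the pipe handed back")
```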
Advanced Load Balancing For Developers. The Network Dev Tool

It has been a while since I wrote an installment of Load Balancing for Developers, and now I think it has been too long, but never fear, this is the grand-daddy of Load Balancing for Developers blogs, covering a useful bit of information about Application Delivery Controllers that you might want to take advantage of. For those who have joined us since my last installment, feel free to check out the entire list of blog entries (along with related blog entries) here, though I assure you that this installment, like most of the others, does not require you to have read those that went before.

ZapNGo! is still a growing enterprise, now with several dozen complex applications and a high-availability architecture that spans datacenters and the cloud. While the organization relies upon its web properties to generate revenue, those properties have been going along fine with your Application Delivery Controller (ADC) architecture. Now, though, you're seeing a need to centralize administration of a whole lot of functions. What worked fine separately for one or two applications is no longer working so well now that you have several development teams and several dozen applications, and you need to find a way to bring the growing inter-relationships under control before maintenance and hidden dependencies swamp you in a cascading mess of disruption.

With maintenance taking a growing portion of your application development man-hours, and a reasonably well positioned test environment configured with a virtual ADC to mimic your production environment, all you need now is a way to cut those maintenance man-hours and reduce the amount of repetitive work required to create or update an application. Particularly update an application, because that is a constant problem, whereas creating is less frequent. With many of the threats that would have seen your ZapNGo application become known as ZapNGone now eliminated, it is efficiencies you are after. And believe it or not, these too are available in an ADC. Not all ADCs are created equal, but this discussion will stay on topics that most ADCs can handle, and I'll mention it when I stray from generic into specific – which I will do in one case, because only one vendor supports one of the tools you can use. All of the others should be supported by whatever ADC vendor you have, though as always, check with your vendor directly first, since I'm not an expert in the inner workings of every one.

There is a lot that many organizations do for themselves, and the array of possibilities is long – from implementing load balancing in source code to security checks in the application, the boundaries of what is expected of developers are shaped by an organization, its history, and its chosen future direction. At ZapNGo, the team has implemented a virtual test environment that mirrors production as closely as possible, so that code can be implemented and tested in the way it will be used. They use an ADC for load balancing, so that they don't have to rewrite the same code over and over, and they have a policy of utilizing a familiar subset of ADC functionality on all applications that face the public. The company is successful and growing, but as always happens in companies in that situation, the pressures upon them are changing just by virtue of their growth. There are more new people who don't yet have intimate knowledge of the code base, network topology, security policies, or whatever their area of expertise is.
There are more lines of code to maintain, while new projects are being brought up at a more rapid pace and with higher priorities (I've twice lived through the "Everything is high priority? Well this is highest priority!" syndrome while working in IT. Thankfully, most companies grow out of that fast when it's pointed out that if everything is priority #1, nothing is). Timelines to complete projects – be they new development, bug fixes, or enhancements – are stretching longer and longer as the percentage of gurus in the company goes down and the complexity of the code, and of the architecture it runs on, goes up.

So what is a development manager to do to increase productivity? Teaming newer developers with people who've been around since the beginning is helping, but those seasoned developers are a smaller and smaller percentage of the workforce, while the volume of work has slowly removed them from some of the many products now under management. Adopting coding standards and standardized libraries helps increase experience portability between projects, but doesn't do enough.

Enter offloading to the ADC. Some things just don't have to be done in code, and if they don't have to be, at this stage in the company's growth, IT management at ZapNGo (that's you!) decides they won't be. There just isn't time for non-essential development anymore. Utilizing a policy management tool and/or an Application Firewall on the ADC can improve security without increasing the code base, for example. And that shaves hours off of maintenance projects, while standardizing on one or a few implementations that are simply selected on the ADC. Implementing Web Application Acceleration protocols on the ADC means that less in-code optimization has to occur. Performance is no longer purely the role of developers (though of course it is still a concern – no Web Application Acceleration tool can make a loop that runs for five minutes run faster); they can let the Web Application Acceleration tool shrink the amount of data being sent to the users' browsers for them. Utilizing a WAN Optimization ADC tool to improve the performance of bulk copies or backups to a remote datacenter or cloud storage… The list goes on and on.

The key is that the ADC enables a lot of opportunities for App Dev to be more responsive to the needs of the organization by moving repetitive tasks to the ADC and standardizing them. And a heaping bonus is that it also does the same for Operations with a different subset of functionality, meaning one toolset gives both App Dev and Operations a bit more time out of their day for servicing important organizational needs. Some would say this is all part of DevOps, some would say it is not. I leave those discussions to others; all I care about is that it can make your apps more secure, fast, and available, while cutting down on workload. And if your ADC supports an SSL VPN, your developers can work from home when necessary. Or, more likely, if your code is your IP, a subset of your developers can. Making ZapNGo more responsive, easier to maintain, and more adaptable to the changes coming next week/month/year. That's what ADCs do. And they're pretty darned good at it.

That brings us to the one bit that I have to caveat with "F5 only", and that is iApps. An iApp is a constructed configuration tool that asks a few questions and then deploys all the bits necessary to set up an ADC for a particular application. Why do I mention it here?
Well, if you have dozens of applications with similar characteristics, you can create an iApp template and use it to rapidly bring new applications or new instances of applications online. And since it is abstracted, these iApp templates can be designed such that App Dev, or even the business owner, is able to operate them. Meaning less time worrying about what network resources will be available, how they're configured, and waiting for Operations to have time to implement them (in an advanced ADC that is being utilized to its maximum in a complex application environment, this can be hundreds of networking objects to configure – all encapsulated into a form). Less time on the project timeline, more time for the next project. Or for the post-deployment party. One of the two.

That's it for the F5-only bit. And knowing that all of these items are standardized means fewer things to get misconfigured, and more surety that it will all work right the first time. As with all of these articles, that offers you the most important benefit… a good night's sleep.
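iApp template syntax is F5-specific and not reproduced here, but the underlying idea ("answer a few questions, get the whole pile of objects, identically every time") is easy to illustrate. The toy Python below is an analogue only; the object names and fields are invented for the example and are not BIG-IP configuration:

```python
import json

def app_template(app: str, vip: str, port: int, servers: list[str]) -> dict:
    """Expand a handful of answers into the full set of delivery objects an
    application needs, so every app deployed this way looks the same."""
    return {
        "virtual_server": {"name": f"vs_{app}", "destination": f"{vip}:{port}",
                           "pool": f"pool_{app}",
                           "profiles": ["tcp", "http", "clientssl"]},
        "pool": {"name": f"pool_{app}", "lb_method": "least-connections",
                 "members": [f"{s}:{port}" for s in servers],
                 "monitor": f"mon_{app}"},
        "monitor": {"name": f"mon_{app}", "type": "http",
                    "send": f"GET /health HTTP/1.1\r\nHost: {app}\r\n\r\n"},
    }

# Three answers in, dozens of consistent settings out -- and nothing hand-typed.
print(json.dumps(app_template("zapngo_orders", "203.0.113.10", 443,
                              ["10.4.2.11", "10.4.2.12", "10.4.2.13"]), indent=2))
```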
From Point A to Point B.

The complexities of life often escape a young child. The Little Man asked me the other day why I had to go to work, which was both a compliment, in that he wanted to spend time with me, and an unintended backhanded slap at Lori, who was going to hang out with him while I took care of business. The answer was the usual stuff, that working paid the bills, and work has its own rewards… It did not include "and I like my job", though I do, simply because I didn't want to imply "more than hanging out with you" to a three-year-old. But children boil everything down to simplicity. The picture over there is said son, wearing a Pickelhaube with a Transformers shirt and (yes, really) proclaiming he was an Autobot because of the helmet.

We adults, on the other hand, tend to layer complexity upon complexity until we're not certain we're getting value anymore, but we're proud of whatever it is we have done/built/know. IT is like that sometimes. What is "the network" – in tweet length, for example? Not only is the answer tough to cram into tweet length, it is tougher to cram into tweet length and make useful. It is even more difficult to cram it into tweet length and include all the various constituencies of IT in the answer. But it can be done, because a "network" is a simple concept. You're moving information from point A to point B. That's it. Everything else is layers we've added over it to make some aspect of that movement better, or to facilitate the movement of data. But in the end, it is just sending bytes over the wire. If, for example, a business person with no IT background asks why a whole section of the corporate network is down, they don't care about routing tables or even DNS; they care about "The network broke, and those clients can't get to the datacenter. The network is complex, but we'll have it up soon."

If you're moving data over the WAN, it gets another layer of complexity – because you want to move data over the WAN at a decent speed, but most applications aren't designed for network communication optimization. Instead they're designed to be very good at moving data, and expect the network to worry about performance issues. But business users don't want to hear about compression, dedupe, SSL offload, or any of that when things go wrong; they want to hear "The copy of our data at the remote site is a little out of synch right now, but we're on it, and it will all be fine soon."

Want to secure the network? BAM! Another complex layer is tossed on top of that – but the point is that you don't just want to move data from point A to point B anymore, you want to move data from point A to point B securely. Again, if your ADS or LDAP system goes on the fritz, you're going to want to be able to tell people "Users can't log in right now, the servers that know who can do what are offline, but we'll have them back up soon," because users care that data isn't moving from point A to point B, not about whatever bugbear has cropped up with authentication or the network.

Want to give web users – internal and external – an enhanced experience while reducing the load on your servers? Another layer of complexity piles on as you use Web Application Optimization techniques. They work great – at least those by F5 do, since I've been a user of them – but they add a whole new layer of oddities. "No, the new logo isn't showing reliably, but the team is flushing the cache and/or changing the expiration date to get that fixed, and it'll be right soon" is what business users want to hear.
Load balancing to increase reliability and performance adds yet another layer of complexity to the overall system, a layer that has all of its own terminology. But when load balancing goes wrong, "We misconfigured the Virtual IP and the Pool it points to does not serve your app" isn't what the business person wants to hear. "Yes, we had an error, but your application should be back online soon" is the right answer.

Server virtualization doesn't directly add complexity to the network, but server sprawl certainly does, because now there are a lot more clients out there. One of the early problems with server sprawl that seems to be largely defeated was "where is that non-responding virtual running again?" But still, if the hardware goes down that a user's VM is running on, they want to hear "Yes, we had a hardware failure, but your application should be back up on another server soon, and we'll get the problem fixed, then move your application back."

Desktop virtualization adds both complexity and traffic to your network, but simplifies a whole array of things from desktop management to licensing. Still, when it is performing poorly, a business leader does not want to hear about oversubscription, congestion, the number of VMs per server, or anything else technical; they want to hear "Yes, we see that performance is down for those users. We've got a plan to fix it, and all should be back to normal soon."

The thing is, F5 sells tools to help with all of these issues. In fact, F5 sells a platform that you can customize to help with all of these issues… But notice that all the answers to the business are simple, and end with some variant of "back up soon"? We can supply you with tools to manage the "back up soon", or even make you able to say "there was a problem, but you shouldn't have noticed"; we cannot provide you with a tool to make everything simple. The business sometimes needs educating, but most of the time they just need less detail and more information.

We've got a ton of cool stuff going on in IT these days, but sometimes the complexity masks the simplicity. Boil it down to the basics, and tackle real problems. And enjoy talking simplicity for a change... because the next round of Buzzword Bingo is on its way in 5, 4, 3, 2, 1…
Forget Performance IN the Cloud, What About Performance TO the Cloud?

In an N-tiered architecture, the network connection between tiers becomes a truly important part of the overall application performance equation. This is a fact we have known for a couple of decades now. If your network performance is down for some reason (from mis-wiring to hardware misconfiguration to over-utilization), your application performance will, by definition, suffer. I ran a test once while writing for Network Computing where the last person to use the lab had programmed the ports I was using to shunt bandwidth over threshold X onto a different VLAN. It took weeks, and the help of one of the vendors under test – Juniper – to figure out exactly what was going wrong. Only after they observed that it was topping out and then dropping packets were we able to track down the one changed configuration setting. The funny thing is that this particular change would not have impacted the vast majority of testing that we did in the lab; I was just unlucky enough to pick the wrong ports for a test that purposefully overloaded the network. Until the Juniper crew said "no, that's not right, we've done this test before", I was blaming the products for the failures under high load… a symptom of the fact that when network performance degrades, systems appear to degrade. Thankfully, we caught the problem and I did not go to print with misinformation. But it does highlight the need for network performance to be top-notch if your application performance is to be top-notch.

And we're starting to head into uncharted territory where this fact is concerned. The problem is that enterprises don't generally throw a whole bunch of data over the WAN, and if they do, they have specially provisioned ways to do so – because they're going to a remote datacenter or partner network that is known, and data volumes can be calculated for. But as we approach cloud computing, we need to be more aware of the network performance aspects than we would be either in the datacenter or transferring between two datacenters.

The reasons for this are manifold. First, you don't have control of the connection to the cloud provider. You own one end, and with the right tools (skipping the plug for F5 WOM here, look it up if you have a need) you can "control" both ends of the connection, but you don't really have control of what is in between. The volume of traffic, the size of the pipes, and so on are all dictated by outside forces. For some DC-to-DC connections this is true also, but unlike cloud, DC-to-DC is point-to-point. If you are dealing with one of the major cloud vendors, you can't even be certain what country your app resides in at the moment, let alone the route to that application.

Some of this concern is automatically mitigated. If the platform provider is large enough to have datacenters in multiple countries, they have pipes bigger than New York sewers connected to those datacenters. Some of it can be mitigated by those same "right tools" that help at the ends of the connection, because they can create tunnels or point-to-point connections to handle communications, and offer bandwidth reduction capabilities to make certain you are only sending smaller amounts of data, reducing the impact of the network by requiring fewer round trips across it.

In cloud storage, you have a bigger issue, because the whole point is to send massive amounts of data across the WAN.
While the right products will reduce the footprint of that data with compression (skipping the plug for F5 ARX, look it up if you have a need), you are still sending a lot, or you wouldn't have to go to a cloud platform; you'd just store it locally. So the question becomes how to make certain that performance to the cloud storage vendor is optimal when you don't own both ends of the connection. That's a tricky question, because it is not just a question for today, it is a question forever. You see, the more you store in a cloud storage provider's space, the less likely you are to want to change providers. But the more business a cloud storage provider receives, the more companies are cramming huge volumes of data in and out of their DC – which could cause performance problems for you… unless you have SLAs that are rock-solid and that you're willing to enforce.

The long and the short of this post is some advice. Get tools to reduce the amount you're sending over the wire, and make certain you have an SLA that will cover you if your usage (or the usage of the provider's other customers) jumps. And yeah, if your vendor charges by the megabit or megabyte, check out some products that might reduce your throughput. I might have mentioned a couple here, but there are more out there.
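One more bit of arithmetic to underline why "fewer round trips" shows up alongside "fewer bytes" above. Over a WAN, latency can dominate bandwidth for chatty transfers; the numbers below are placeholders, but the shape of the result is the point:

```python
def transfer_seconds(payload_mb: float, round_trips: int,
                     bandwidth_mbps: float, rtt_ms: float) -> float:
    """Time spent pushing bytes plus time spent waiting on acknowledgements."""
    push = payload_mb * 8 / bandwidth_mbps     # serialization time on the link
    wait = round_trips * rtt_ms / 1000.0       # latency cost of chattiness
    return push + wait

# The same 200 MB copy to a cloud provider over a 100 Mbps link with 40 ms RTT.
chatty = transfer_seconds(200, round_trips=5000, bandwidth_mbps=100, rtt_ms=40)
batched = transfer_seconds(200, round_trips=50, bandwidth_mbps=100, rtt_ms=40)
compressed = transfer_seconds(200 * 0.5, round_trips=50, bandwidth_mbps=100, rtt_ms=40)

print(f"chatty protocol:          {chatty:6.1f} s")
print(f"fewer round trips:        {batched:6.1f} s")
print(f"plus 2:1 data reduction:  {compressed:6.1f} s")
```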
Toll Booths and Dams. And Strategic Points of Control

An interesting thing about toll booths: they provide a point at which all sorts of things can happen. When you are stopped to pay a toll, it smooths the flow of traffic by letting a finite number of vehicles through per minute, reducing congestion by naturally spacing things out. Dams are much the same, holding water back on a river and letting it flow through at a rate determined by the operators of the dam. The really interesting bit is the other things that these two points introduce. When necessary, toll booths have been used to find and stop suspected criminals. They have also been used as advertising and information transmission points. None of the above are things toll booths were created for. They were created to collect tolls. And yet, by nature of where they sit in the highway system, they can be utilized for much more. The same is true of a dam. Dams today almost always generate electricity. Often they function as bridges over the very water they're controlling. They control the migration of fish and operate as a check on predatory invasive species. Again, none of these things is the primary reason dams were originally invented, but the nature of their location allows them to be utilized effectively in all of these roles.

We've talked a bit about strategic points of control. They're much like toll booths and dams in the sense that their location makes them key to controlling a whole lot of traffic on your LAN. In the case of F5's defined strategic points of control, they all tie in to the history of F5's product lineup, much as a toll booth was originally there to collect tolls. F5 BIG-IP LTM sits at the network strategic point of control. Initially LTM was a load balancer, but by virtue of its location and the needs of customers it has grown into one of the most comprehensive Application Delivery Controllers on the market – everything from security to uptime monitoring is facilitated by LTM. F5 ARX is much the same: being at the file-based storage strategic point of control allows such things as directing some requests to cloud storage and others to storage by vendor A, while still others go to vendor B, and the remainder go to a Linux or Windows machine with a ton of free disk space on it. The WAN strategic point of control is where you can improve performance over the WAN via WOM, but it is also a place where you can extend LTM functionality to remote locations, including the cloud.

Budgets for most organizations are not growing, due to the state of the economy. Whether you're government, public, private, or small business, you've been doing more with less for so long that doing more with the same would be a nice change. If you're lucky, you'll see growth in IT budgeting due to the increasing needs of security and the growth of application footprints. Some others will see essentially flat budgets, and many – including most government IT orgs – will see shrinking budgets. While that is generally bad news, it does give you the opportunity to look around and figure out how to make more effective use of existing technology. Yes, I have said that before; because you're living that reality, it is worth repeating. Since I work for F5, here are a few examples, something I've not done before. From the network strategic point of control, we can help you with DNSSEC, AAA, application security, encryption, performance on several levels (from TCP optimizations to compression), HA, and even WAN optimization issues if needed.
From the storage strategic point of control we can help you harness cloud storage, implement tiering, and balance load across existing infrastructure to help stave off expensive new storage purchases. Backups and replication can be massively improved (both in terms of time and data transferred) from this location also.

We're not the only vendor that can help you out without having to build a whole new infrastructure. It might be worthwhile to have a vendor day, where you invite vendors in to give presentations about how they can help – larger companies and the federal government do this regularly, and you can do the same in a scaled-down manner. What salesperson is going to tell you "no, we don't want to come tell you how we can help you and sell you more stuff"? Really? Another option is, as I've said in the past, to make sure you know not just the functionality you are using, but the capabilities of the IT gear, software, and services that you already have in-house. Chances are there are cost savings to be had by using existing functionality of an existing product, with time being your only expense. That's not free, but it's about as close as IT gets.

So far we in IT have been lucky; the global recession hasn't hit our industry as hard as it has hit most, but it has constricted our ability to spend big, so little things like those above can make a huge difference. Since I am on a computer or Playbook for the better part of 16 hours a day, hitting websites maintained by people like you, I can happily say that you all rock. A highly complex, difficult-to-manage set of variables rarely produces a stable ecosystem like we have. No matter how good the technology, in the end it is people who did that, and who keep it that way. You all rock. And you never know, you might just find the AllSpark hidden in the basement ;-).
You Say Tomato, I Say Network Service Bus

It's interesting to watch the evolution of IT over time. I have repeatedly been told "you people, we were doing that with X back before you had a name for it!" And likely, the speaker is telling the truth, as far as it goes. Seriously, while the mechanisms may be different, putting a ton of commodity servers behind a load balancer and tweaking for performance looks an awful lot like having LPARs that can shrink and grow. You put "dynamic cloud" into the conversation and the similarities become more pronounced. The biggest difference is how much you're paying for hardware and licensing.

Back in the day, Enterprise Service Buses (ESBs) were all the rage, able to handle communications between a variety of application sources and route things to the correct destination in the correct format, even providing guaranteed delivery if you needed it for transactional services. I trained on several of these tools, most notably IBM MQSeries (now called IBM WebSphere MQ, surprised?) and MSMQ. I was briefed on a ton more during my time at Network Computing. In the end, they're simply message delivery and routing mechanisms that can translate along the way. Oh sure, with MQSeries Integrator you could include all sorts of other things like security callouts and such, but core functionality was restricted to message flow and delivery. While ESBs are still used today in highly mixed environments or highly complex application infrastructures, they're not deployed broadly in IT, largely because XML significantly reduced the need for the translation aspect, which was a primary use of them in the enterprise.

Today, technology is leading us to a parallel development that will likely turn out to be much more generically useful than ESBs. Others have referred to it by several names, but Network Service Bus is the closest I've seen in terms of accuracy, so I'll run with that term. This is routing, translation, and delivery across the network, from consumer to the correct service. The service is running on a server somewhere, but that's increasingly less relevant to the consumer application; that the request gets serviced is sufficient. Serviced in a timely and efficient manner is a big deal too. Translation while servicing is seeing a temporary (though not short, in my estimation) bump while IPv4 is slowly supplanted by IPv6, but it has other uses – like encrypted to unencrypted, for example.

The network of the future will use a few key strategic points of control – like the one between consumers and web servers – to handle routing to a service that is (a) active, (b) responsive, and (c) appropriate to the request. In the interim, while passing the request along, the strategic point of control will translate the incoming request into a format that the service expects, and if necessary will validate the user in the context of the service being requested and the username/platform/location the request is coming from. This offloads a lot from your apps and your servers. Encryption can be offloaded to the strategic point of control, freeing up a lot of CPU time, with traffic running unencrypted within your LAN while maintaining encryption on the public Internet.
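Boiled down, the routing decision at that strategic point of control is: of the services appropriate to this request, which are active, and which is responding best right now? A bare-bones sketch of that selection logic, with invented backends and health data (nothing here is an F5 API):

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    handles: set          # request types this service is appropriate for
    healthy: bool         # is it active?
    latency_ms: float     # is it responsive?

def route(request_type: str, backends: list) -> str:
    """Pick the healthiest, fastest backend that can serve this request type."""
    candidates = [b for b in backends if request_type in b.handles and b.healthy]
    if not candidates:
        raise RuntimeError(f"no active service for {request_type!r}")
    return min(candidates, key=lambda b: b.latency_ms).name

backends = [
    Backend("orders-dc",    {"orders", "catalog"}, healthy=True,  latency_ms=12),
    Backend("orders-cloud", {"orders"},            healthy=True,  latency_ms=48),
    Backend("catalog-dc",   {"catalog"},           healthy=False, latency_ms=9),
]
print(route("orders", backends))    # orders-dc: active, appropriate, and fastest
print(route("catalog", backends))   # orders-dc again, since catalog-dc is down
```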
IPv6 packets can be translated to IPv4 on the way in and back to IPv6 on the way out, so you don't have to switch everything in your datacenter over to IPv6 at once. Security checks can occur before the connection is allowed inside your LAN, and scalability gets a major upgrade because you now have a device in place that will route traffic according to the current back-end configuration. Adding and removing servers, upgrading apps – all benefit from the strategic point of control that allows you to maintain a given public IP while changing the machines that service requests as needed.

And then we factor in cloud computing. If all of this functionality – or at least a significant chunk of it – were available in the cloud, regardless of cloud vendor, then you could ship overflow traffic to the cloud. There are a lot of issues to deal with, like security, but they're manageable if you can handle all of the other service requests as if the cloud servers were part of your everyday infrastructure. That's a datacenter of the future. Let's call it a tomato. And in the end it makes your infrastructure more adaptable while giving you a point of control that you can harness to implement whatever monitoring or functionality you need. And if you have several of those points of control – one to globally load balance, one for storage, one in front of servers… then you are offering services that are highly adaptable to fluctuations in usage. Like having a tomato, right in the palm of your hands.

Completely irrelevant observation: the US Bureau of Labor Statistics (BLS) mentioned today that IT unemployment is at 3.3%. Now you have a bright spot in our economic doldrums.