Deduplication and Compression – Exactly the same, but different.
One day many years ago, Lori's and my oldest son held up two sheets of paper and said, "These two things are exactly the same, but different!" He's a very bright individual; he was just young, and didn't even get how incongruous the statement was. Being a fun-loving family that likes to tease each other on occasion, we of course have not yet let him live it down. It was honestly more than a decade ago, but all is fair – he doesn't let Lori live down something funny that she did before he was born. It is all in good fun, of course.

Why am I bringing up this family story? Because that phrase does come to mind when you start talking about deduplication and compression. Highly complementary and very similar, they are pretty much "exactly the same, but different". Since these technologies are both used pretty heavily in WAN Optimization, and are growing in use on storage products, this topic intrigued me. To get this out of the way: at F5, compression is built into the BIG-IP family as a feature of the core BIG-IP LTM product, and deduplication is an added layer implemented over BIG-IP LTM on BIG-IP WAN Optimization Module (WOM). Other vendors have similar but varied (there goes a variant of that phrase again) implementation details.

Before we delve too deeply into this topic, though, what caught my attention and started me pondering the whys was that F5's deduplication is applied before compression, and it seems that reversing the order changes performance characteristics. I love a good puzzle, and while the fact that one should come before the other was no surprise, I wanted to know why the order was what it was, and what the impact of reversing them in processing might be. So I started working to understand the details of implementation for these two technologies – not from an F5 perspective, though that is certainly where I started, but to understand how they interact and complement each other. While much of this discussion also applies to in-place compression and deduplication such as that used on many storage devices, some of it does not, so assume that I am talking about networking, specifically WAN networking, throughout this blog.

At the very highest level, deduplication and compression are the same thing. They both look for ways to shrink your dataset before passing it along. After that, it gets a bit more complex. If it was really that simple, after all, we wouldn't call them two different things. Well, okay, we might – IT has a way of lumping competing standards, product categories, even jobs together under the same name – but still, they wouldn't warrant two different names in the same product like F5 does with BIG-IP WOM. The thing is that compression can do transformations to data to shrink it, and it also looks for small groupings of repetitive byte patterns and replaces them, while deduplication looks for larger groupings of repetitive byte patterns and replaces them. In the implementation you'll see on BIG-IP WOM, deduplication looks for larger byte patterns repeated across all streams, while compression applies transformations to the data, and when removing duplication only looks for smaller combinations within a single stream. The net result? The two are very complementary, but if you run compression before deduplication, compression will find a whole collection of small repeating byte patterns, and between that and the transformations, deduplication will find nothing – compression works harder and deduplication spins its wheels.
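To make the ordering argument concrete, here is a minimal sketch – my own toy illustration, not how BIG-IP WOM (or any other product) actually implements either feature – of chunk-level deduplication with a cross-stream pattern cache, followed by per-stream compression:

```python
import hashlib
import os
import zlib

CHUNK = 1024  # toy chunk size; real engines use bigger (and often variable-size) chunks

seen = {}  # cross-stream pattern cache: chunk fingerprint -> reference id

def dedupe(stream: bytes) -> bytes:
    """Replace chunks already seen on ANY stream with short references."""
    out = []
    for i in range(0, len(stream), CHUNK):
        chunk = stream[i:i + CHUNK]
        fp = hashlib.sha256(chunk).digest()
        if fp in seen:
            out.append(b"R" + seen[fp].to_bytes(4, "big"))  # 5-byte reference
        else:
            seen[fp] = len(seen)
            out.append(b"D" + chunk)                        # literal chunk, now cached
    return b"".join(out)

def send(stream: bytes) -> int:
    """Dedupe first (large cross-stream repeats), then compress (small in-stream repeats)."""
    return len(zlib.compress(dedupe(stream)))

# Two streams sharing a large, incompressible block (think the same attachment sent twice).
shared = os.urandom(64 * 1024)
a = shared + b"report for branch office A\n" * 500
b = shared + b"report for branch office B\n" * 500

print("dedupe then compress:", send(a) + send(b), "bytes on the wire")
print("compress alone      :", len(zlib.compress(a)) + len(zlib.compress(b)), "bytes on the wire")
# Running compression *before* dedupe would transform the shared bytes per-stream,
# leaving a downstream dedupe pass little or nothing to match across streams.
```

Because the shared block is effectively random, compression alone barely touches it, while the cross-stream cache lets the second copy travel as a handful of tiny references – which is the whole reason dedupe goes first.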
There are other differences. Because deduplication deals with large runs of repetitive data (I believe that in BIG-IP the minimum size is over a K), it uses some form of caching to hold patterns that duplicates can be matched against, and the larger the cache, the more strings of bytes you have to compare against. This introduces some fun around where the cache should be stored. In memory is fast but limited in size; on flash disk it is fast and has greater capacity, but is expensive; and on disk it is slow but has a huge advantage in size. Good deduplication engines can support all three and thus are customizable to what your organization needs and can afford.

Some workloads just won't benefit from one, but will get a huge benefit from the other. The extremes are good examples of this phenomenon. If you have a lot of in-the-stream repetitive data that is too small for deduplication to pick up, and little or no cross-stream duplication, then deduplication will be of limited use to you, and the act of running through the dedupe engine might actually degrade performance a negligible amount – of course, everything is algorithm dependent, so depending upon your vendor it might degrade performance a large amount also. On the other extreme, if you have a lot of large-byte-count duplication across streams, but very little within a given stream, deduplication is going to save your day, while compression will, at best, offer you a little benefit.

So yes, they're exactly the same from the 50,000-foot view, but very, very different from the benefits and use cases view. And they're very complementary, giving you more bang for the buck.
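As a rough illustration of that cache trade-off – again my own sketch with made-up names, not F5's implementation – here is a pattern cache bounded by a byte budget, the way an in-memory tier would be:

```python
from collections import OrderedDict

class ChunkCache:
    """Toy in-memory pattern cache with a byte budget (the "RAM tier").

    A real engine might spill older patterns to flash or disk instead of
    discarding them; here we simply evict the least-recently-used entries.
    """

    def __init__(self, budget_bytes: int):
        self.budget = budget_bytes
        self.used = 0
        self.entries = OrderedDict()  # fingerprint -> cached chunk length

    def lookup(self, fingerprint: bytes) -> bool:
        if fingerprint in self.entries:
            self.entries.move_to_end(fingerprint)  # mark as recently used
            return True
        return False

    def insert(self, fingerprint: bytes, chunk_len: int) -> None:
        if fingerprint in self.entries:
            return
        self.entries[fingerprint] = chunk_len
        self.used += chunk_len
        while self.used > self.budget:             # evict until back under budget
            _, evicted_len = self.entries.popitem(last=False)
            self.used -= evicted_len

cache = ChunkCache(budget_bytes=256 * 1024 * 1024)  # a hypothetical 256 MB RAM tier
```

A larger budget keeps older patterns around longer, so more duplicates get matched; spilling evicted entries to flash or disk instead of dropping them is what buys the bigger tiers their reach.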
Bare Metal Blog: Throughput Sometimes Has Meaning

#BareMetalBlog Knowing what to test is half the battle. Knowing how it was tested is the other. Knowing what that means is the third.

That's some testing, real clear numbers.

In most countries, top speed is no longer the thing that auto manufacturers want to talk about. Top speed is great if you need it, but for the vast bulk of us, we'll never need it. Since the flow of traffic dictates that too much speed is hazardous on the vast bulk of roads, automobile manufacturers have correctly moved the conversation to other things – cup holders (did you know there is a magic number of them for female purchasers? Did you know people actually debate not the existence of such a number, but what it is?), USB/Bluetooth connectivity, backup cameras, etc. Safety and convenience features have supplanted top speed as the things to discuss.

The same is true of networking gear. While I was at Network Computing, focus was shifting from "speeds and feeds", as the industry called it, to overall performance in a real enterprise environment. Not only was it getting increasingly difficult and expensive to push ever-larger switches until they could no longer handle the throughput, but enterprise IT staff were more interested in what the capabilities of the box were than how fast it could go. Capabilities is a vague term that I used on purpose. The definition is a moving target across both time and market, with a far different set of criteria for, say, an ADC versus a WAP.

There are times, however, where you really do want to know about the straight-up throughput, even if you know it is the equivalent of a professional driver on a closed course, and your network will never see the level of performance that is claimed for the device. There are actually several cases where you will want to know about the maximum performance of an ADC, using the tools I pay the most attention to at the moment as an example. WAN optimization is a good one. In WANOpt, the goal is to shrink the amount of data being transferred between two dedicated points to try and maximize the amount of throughput. When "maximize the amount of throughput" is in the description, speeds and feeds matter.

WANOpt is a pretty interesting example too, because there's more than just "how much data did I send over the wire in that fifteen minute window". It's more complex than that (isn't it always?). The best testing I've seen for WANOpt starts with "how many bytes were sent by the originating machine", then measures that the same number of bytes were received by the WANOpt device, then measures how much is going out the Internet port of the WANOpt device – to measure compression levels and bandwidth usage – then measures the number of bytes the receiving machine at the remote location receives, to make sure it matches the originating machine. So even though I say "speeds and feeds matter", there is a caveat. You want to measure latency introduced with compression and dedupe (and possibly with encryption, since WANOpt is almost always over the public Internet these days), throughput, and bandwidth usage. All technically "speeds and feeds" numbers, but taken together they give you an overall picture of what good the WANOpt device is doing. There are scenarios where the "good" is astounding. I've seen numbers that range as high as 95x the performance. If you're sending a ton of data over WANOpt connections, even 4x or 5x is a huge savings in connection upgrades; anything higher than that is astounding.
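Boiled down to arithmetic, the byte accounting described above looks something like this – a hedged sketch with invented numbers, not results from any actual test run:

```python
def wanopt_report(server_sent: int, wanopt_in: int, wanopt_out: int,
                  remote_received: int, seconds: float) -> None:
    """Summarize one WANOpt test run from the four measurement points described above."""
    # Integrity: every byte the origin sent should arrive intact at the far end.
    assert wanopt_in == server_sent, "WANOpt device did not receive everything the server sent"
    assert remote_received == server_sent, "remote machine did not receive what the origin sent"

    reduction = server_sent / wanopt_out        # e.g. 5.0 means a 5x reduction on the wire
    wire_bw = wanopt_out * 8 / seconds          # bits/s actually crossing the WAN link
    effective = remote_received * 8 / seconds   # bits/s the applications experienced

    print(f"reduction ratio      : {reduction:.1f}x")
    print(f"WAN bandwidth used   : {wire_bw / 1e6:.1f} Mbit/s")
    print(f"effective throughput : {effective / 1e6:.1f} Mbit/s")

# Hypothetical fifteen-minute window: 90 GB sent by the origin, 18 GB on the wire.
wanopt_report(server_sent=90 * 10**9, wanopt_in=90 * 10**9,
              wanopt_out=18 * 10**9, remote_received=90 * 10**9, seconds=900)
```

The integrity checks are what catch a misbehaving device; the ratios are what end up on the glossy hand-out.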
This is an (older) diagram of WAN Optimization I've marked up to show where the testing took place, because sometimes a picture is indeed worth a thousand words. And yeah, I used F5 gear for the example image… that really should not surprise you.

So basically, you count the bytes the server sends, the bytes the WANOpt device sends (which will be less for 99.99% of loads if compression and de-dupe are used), and the total number of bytes received by the target server. Then you know what percentage improvement you got out of the WANOpt device (by comparing server out bytes to WANOpt out bytes), that the WANOpt devices functioned as expected (server received bytes == server sent bytes), and what the overall throughput improvement was (server received bytes / time to transfer).

There are other scenarios where simple speeds and feeds matter, but fewer of them than there used to be, and the trend is continuing. When a device designed to improve application traffic is introduced, there are certainly fewer still. The ability to handle a gazillion connections per second that I've mentioned before is a good guardian against DDoS attacks, but what those connections can do is a different question. Lots of devices in many networking market spaces show little or even no latency introduction on their glossy sales hand-outs, but make those devices do the job they're purchased for and see what the latency numbers look like. It can be ugly, or you could be pleasantly surprised, but you need to know. Because you're not going to use it in a pristine lab with perfect conditions; you're going to slap it into a network where all sorts of things are happening and it is expected to carry its load.

So again, I'll wrap with acknowledgement that you all are smart puppies and know where speeds and feeds matter; make sure you have realistic performance numbers for those cases too.

Technorati Tags: Testing, Application Delivery Controller, WAN Optimization, throughput, latency, compression, deduplication, Bare Metal Blog, F5 Networks, Don MacVittie

The Whole Bare Metal Blog series:
Bare Metal Blog: Introduction to FPGAs | F5 DevCentral
Bare Metal Blog: Testing for Numbers or Performance? | F5 ...
Bare Metal Blog: Test for reality. | F5 DevCentral
Bare Metal Blog: FPGAs The Benefits and Risks | F5 DevCentral
Bare Metal Blog: FPGAs: Reaping the Benefits | F5 DevCentral
Bare Metal Blog: Introduction | F5 DevCentral
Web App Performance: Think 1990s.

As I've mentioned before, I am intrigued by the never-ending cycle of repetition that High Tech seems to be trapped in. Mainframe->Network->Distributed->Virtualized->Cloud, which, while different, shares a lot of characteristics with a mainframe environment. The same is true with disks; after several completely different iterations, performance relative to CPUs and application needs is really not that different from 20 years ago. The big difference is that 20 years ago we as users had a lot more tolerance for delays than we do today. One of my co-workers was talking about an article he recently read that said users are now annoyed literally "in the blink of an eye" at page load times.

Right now, web applications are going through one of those phases in the performance space, and it's something we need to be talking about. Not that delivery to the desktop is a problem – network speeds, application development improvements (both developers learning tricks and app tools getting better), and processing power have all combined to overcome performance issues in most applications; in fact, we're kind of in a state of nirvana. Unless you have a localized problem, application performance is pretty darned good. Doubt me? Consider even trying to use something like YouTube in the 90s. Yeah, that's a good reminder of how far we've come.

But the world is evolving again. It's no longer about web application performance to PCs, because right about the time a problem gets resolved in computer-land, someone changes the game. Now it's about phones. To some extent it is about tablets, and they certainly need their love too, but when it comes to application delivery, it's about phones, because they're the slowest ship in the ocean. And according to a recent Gartner report, that won't change soon. Gartner speculates that new phones are being added so fast that 4G will be overtaken relatively quickly, even though it is far and away better performance-wise than 3G. And there's always the latency that phones have, which at this point in history is much more than wired connections – or even WLAN connections. The Louis CK video where he makes like a cell phone user going "it.. it's not working!" when their request doesn't come back right away is funny because it is accurate.

And that's bad news for IT trying to deliver the corporate interface to these devices. You need to make certain you have a method of delivering applications fast. Really fast. If the latency numbers are in the hundreds of milliseconds, then you have no time to waste – not with excess packets, not with stray requests. Yes, of course F5 offers solutions that will help you a lot – that's the reason I am looking into this topic – but if you're not an F5 customer, and for any reason can't/won't be, there are still things you can do; they're just not quite as effective and take a lot more man-hours. Go back through your applications to reduce the amount of data being transferred to the client (HTML can be overly verbose, and it's not the worst offender), create uber-reduced versions of images for display on a phone (or buy a tool that does this for you), and consider SPDY support, since Google is opening it to the world. No doubt there are other steps you can take. They're not as thorough as purchasing a complete solution designed around application performance that supports cell phones, but these steps will certainly help, if you have the man-hours to implement them.
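For the image step, one hedged way to script it yourself – the Pillow library and the file names here are my own assumptions, not anything the post prescribes – is a small resize-and-recompress helper:

```python
# Requires the third-party Pillow library (pip install Pillow).
from PIL import Image

def mobile_variant(src_path: str, dst_path: str, max_px: int = 480, quality: int = 60) -> None:
    """Write a small, heavily compressed copy of an image for phone-sized screens."""
    img = Image.open(src_path)
    img.thumbnail((max_px, max_px))   # scale down in place, preserving aspect ratio
    img.convert("RGB").save(dst_path, "JPEG", quality=quality, optimize=True)

# Hypothetical file names, just to show the call.
mobile_variant("hero-banner.png", "hero-banner-mobile.jpg")
```

For image-heavy pages, shrinking the pictures usually buys far more than trimming the HTML ever will.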
Note that only one in three human beings are considered online today. Imagine in five years what performance needs will be. I think that number is actually inflated. I personally own seven devices that get online, and more than one of them is turned on at a time… Considering that Lori has the same number, and that doesn't count our servers, I'll assume their math over-estimates the number of actual people online. Which means there's a great big world out there waiting to receive the benefits of your optimized apps. If you can get them delivered in the blink of an eye.

Related Articles and Blogs:
March (Marketing) Madness: Consolidation versus Consolidation
March (Marketing) Madness: Feature Parity of Software with Hardware
March (Marketing) Madness: Load Balancing SQL
What banks can learn from Amazon
Mobile versus Mobile: 867-5309
DevCentral Top5 02/04/2011

If your week has been anything like mine, then you've had plenty to keep you busy. While I'd like to think that your "busy" equates to as much time on DevCentral checking out the cool happenings while people get their geek on as mine does, I understand that's less than likely. Fortunately, though, there is a mechanism by which I can distribute said geeky goodness for your avid assimilation. I give to you, the DC Top 5:

iRuling the New FSE Crop
http://bit.ly/f1JIiM
Easily my favorite thing that happened this week was something I was fortunate enough to get to be a part of. A new crop of FSEs came through corporate this week to undergo a training boot camp that has been, from all accounts, a smashing success. A small part of this extensive readiness regimen was an iRules challenge issued unto the newly empowered engineers by yours truly. Through this means they were intended to learn about iRules, DevCentral, and the many resources available to them for researching and investigating any requirements and questions they may have. The results are in as of today and I have to say I'm duly impressed. I'll post the results next week but, for now, here's a taste of the challenge that was issued. Keep in mind these people range from a few weeks to maybe a couple of months' experience with F5, tops, let alone iRules or coding in general, so this was a tall order. The gauntlet was laid down and the engineers answered, and answered with vigor. Stay tuned for more to come.

Mitigate Java Vulnerabilities with iRules
http://bit.ly/gbnPOe
Jason put out a fantastic blog post this week showing how to thwart would-be Java-abusing villains by way of iRules fu. Naturally I was interested so I investigated further. It turns out there was a vuln that cropped up plenty last week dealing with a specific string (2.2250738585072012e-308) that has a nasty habit of making the Java runtime compiler go into an infinite loop and, eventually, pack up its toys and go home. This is, as Jason accurately portrayed, "Not good." Luckily though iRules is able to leap to the rescue once more, as is its nature. By digging through the HTTP::request variable, Jason was able to quickly and easily strip out any possibly harmful instances of this string in the request headers. For more details on the problem, the process and the solution, click the link and have a read. (A rough sketch of the same filtering idea, outside of iRules, appears at the end of this post.)

F5 Friday: 'IPv4 and IPv6 Can Coexist' or 'How to eat your cake and have it too'
http://bit.ly/ejYYSW
Whether it was the promise of eating cake or the timely topic of IPv4 trying to cling to its last moments of glory in a world hurtling quickly towards an IPv6 existence I don't know, but this one drew me in. Lori puts together an interesting discussion, as is often the case, in her foray into the "how can these two IP formats coexist" arena. With the reality of IPGeddon acting as the stick, the carrot of switching to an IPv6-compatible lifestyle seems mighty tasty for most businesses that want to continue being operational once the new order sets in. Time is quickly running out, as are the available IPv4 addresses, so the hour is nigh for decisions to be made. This is a look at one way in which you can exist in the brave new world of 128-bit addressing without having to reconfigure every system in your architecture. It's interesting, timely, and might just save you 128-bits worth of headaches.
Deduplication and Compression – Exactly the same, but different
http://bit.ly/h8q0OS
There's something that got passed over last week because of an absolute overabundance of goodness that I wanted to bring up this week, as I felt it warranted some further review and discussion. That is, Don's look at Deduplication and Compression. Taking the angle of the technologies being effectively the same is an interesting one to me. Certainly they aren't the same thing, right? Clearly one prevents data from being transmitted while the other minimizes the transmission necessary. That's different, right? Still though, as I was reading I couldn't help but find myself nodding in agreement as Don laid out the similarities. Honestly, they really do accomplish the same thing – that is, minimizing what must pass through your network – even though they achieve it by different means. So which should you use when? How do they play together? Which is more effective for your environment? All excellent questions, and precisely why this post found its way into the Top5. Go have a look for yourself.

Client Cert Fingerprint Matching via iRules
http://bit.ly/gY2M69
Continuing in the fine tradition of the outright thieving of other people's code to mold into fodder for my writing, this week I bring to you an awesome snippet from the land down under. Cameron Jenkins out of Australia was kind enough to share his iRule for Client Cert Fingerprint matching with the team. I immediately pounced on it as an opportunity to share another cool example of iRules doing what they do best: making stuff work. This iRule shows off an interesting way to compare cert fingerprints in an attempt to verify a cert's identity without needing to store the entirety of the cert and key. It's also useful for restricting access to a given list of certs. Very handy in some situations, and a wickedly simple iRule to achieve that level of functionality. Good on ya, Cameron, and thanks for sharing.

There you have it, another week, another 5 pieces of hawesome from DevCentral. See you next time, and happy weekend.

#Colin
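For readers without a BIG-IP in the path, the header-scrubbing idea Jason implemented as an iRule could be sketched, very roughly, as WSGI middleware in Python. This is my own hedged approximation of the concept, not the iRule from his post:

```python
# Not the iRule from Jason's post -- just a rough WSGI-middleware sketch of the same idea.
BAD = "2.2250738585072012e-308"  # the double literal that hangs vulnerable Java parsers

class StripMagicDouble:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # Request headers show up in WSGI as HTTP_* environ keys; scrub any header
        # (and the query string) that carries the dangerous literal.
        for key, value in list(environ.items()):
            if isinstance(value, str) and (key.startswith("HTTP_") or key == "QUERY_STRING"):
                if BAD in value:
                    environ[key] = value.replace(BAD, "")
        return self.app(environ, start_response)
```

Filtering like this is a stopgap in front of vulnerable parsers, not a replacement for patching them.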
Too Much Holiday Traffic? Imagine if it wasn't Deduped!

Lori, the Toddler, and I drove down to my mother's house in Cincinnati (about 9 hours away) for the Fourth of July weekend. Our youngest daughter drove her car with her sister, the sister's fiancé, and our grand-daughter. We stayed in touch via text message and drove through the night. What does all of this have to do with networking, you ask? Well, I was driving around Indianapolis, Indiana at about 1 AM and realized that there were an awful lot of cars on the road for the middle of the night – presumably holiday traffic – but things were moving along smoothly, even through the ever-present construction zones.

One of the things that a really smart member of our technical staff and I were discussing the other day was the difference between dedupe on-the-wire and in-situ, and how to illustrate it succinctly for those of you who are being barraged with storage dedupe (which is generally in-situ) and WAN Optimization, which is in-flight. And this is it. When you go to leave for Holiday (hat tip to our UK friends), you pile everyone into the car and hit the road. Think of the road as your Internet connection and each car as representative of some data; the car travels once down the road, with each of the individuals in the car being instances of the same data… So our car had three instances of MacVittie in it – Lori, The Toddler, and I. But when we got to our destination, the three instances re-appeared as we climbed out of the car. Sure, the three instances were stiff, sleep deprived, and cranky, but rehydration will do that. When our daughters arrived, all four of them (two daughters, a grand-daughter, and a fiancé) also climbed out of the vehicle to be separate entities again. Our oldest son had to work, and thus was never sent over the wire/roadway, and didn't get deduplicated.

So you all pile into one car, one car is sent over the roadway, and then you all show up at the end. It's an oversimplification, and probably the architect I was talking with is gritting his teeth, but here comes the example part. Imagine if each individual going on holiday took a different car? We'd have sent seven cars instead of two. Multiply that by the number of cars full of people on the road. Now imagine trying to drive through that. That's what in-flight dedupe does for you: fewer cars.

Your duplicated data transmission. (compliments of photocarsonline.com)

In storage dedupe – the primary place where deduplication is bandied about these days – five of us would have been eliminated from the world, leaving only two actual people and references to those two. Never again would the originals be seen (short of some work on your part anyway), just a number and a list of differences between the original and this "copy". On the road (and in on-the-fly compression such as that done with WAN Optimization), we all piled into the car, and we all piled out, completely rehydrated, all originals, with just a faint memory of being crammed together in a car. In on-the-fly deduplication, you have a box at either end (two of our LTM + WOM boxes, for example), and only during flight does it matter if one of those boxes disappears. In storage deduplication, you have to keep the vendor that did the dedupe. Though there are some implementations that can handle vendor change, I don't see them in use because they don't offer as much data for management purposes as proprietary solutions do. At the least, you have to keep them around until you've moved everything out of the dedupe engine's workspace.
In on-the-fly dedupe, you have but to shut down the pipe, change the boxes from vendor A to vendor B, and turn the connection back on. No loss, no worries, because only while in-flight (while in the car) is the data changed from the original. As to compression, we won't use the freeway analogy to talk about that ;-).

Your deduplicated pipe. (compliments of www.interstate-guide.com)

Related Articles and Blogs:
20 worst cities with traffic jams (first picture)
I-90 Interstate Guide (second picture)
Stop Repeating Yourself. Deduping WAN-Opt Style
Stop Repeating Yourself. Deduping WAN-Opt Style

Ever hang out with the person who just wants to make their point, and no matter what the conversation, says the same thing over and over in slightly different ways? Ever want to tell them they were doing their favorite cause/point/whatever a huge disservice by acting like a repetitive fool? That's what your data is doing when you send it across the WAN. Ever seen the data in a database file? Or in your corporate marketing documents? R E P E T I T I V E. And under a normal backup or replication scenario – or a remote office scenario – you are sending the same sequence of bytes over and over and over. Machines may be quad word these days, but your pipe is still measured in bits. That means even most of your large integers have 32 bits of redundant zeroes. Let's not talk about all the places your corporate logo is in files, or how many times the word "the" appears in your documents.

It is worth noting for those of you just delving into this topic that WAN deduplication shares some features and even technologies with storage deduplication, but because the WAN has to handle an essentially unlimited stream of data running through it, and it does not have to store that data and keep differentials or anything moving forward, it is a very different beast than disk-based deduplication. WAN deduplication is more along the lines of "fire and forget" (though forget is the wrong word, since it keeps duplicate info for future reference) than storage, which is "fire and remember exactly what we did".

Thankfully, your data doesn't have feelings, so we can offer a technological solution to its repetitive babbling. There are a growing number of products out there that tell your data "Hey! Say it once and move on!" These products either are, or implement, in-flight data deduplication. These devices require a system on each end – one to dedupe, one to rehydrate (a toy sketch of that round trip follows the related links below) – and there are a variety of options the developer can choose, along with a few that you can choose, to make the deduplication of higher or lower quality. Interestingly, some of these options are perfect for one customer's data set and not at all high-return for others. So I thought we'd talk through them generically, giving you an idea of what to ask your vendor when you consider deduplication as part of your WAN Optimization strategy.

Related Articles and Blogs:
WAN Optimization Continues to Evolve
Best Practices for Deploying WAN Optimization with Data Replication
Like a Matrushka, WAN Optimization is Nested
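Here is that dedupe/rehydrate pair as a toy sketch – my own illustration of the concept, with invented class and frame names, not how any vendor's engine (BIG-IP WOM included) actually works:

```python
import hashlib

class WanOptBox:
    """Toy sending/receiving pair: one side dedupes, the other rehydrates.

    Both ends keep a matching pattern cache; nothing about the original data has
    to be remembered once the transfer is done -- unlike storage deduplication.
    """

    def __init__(self, chunk: int = 1024):
        self.chunk = chunk
        self.cache = {}  # fingerprint -> chunk bytes

    def dedupe(self, data: bytes) -> list:
        frames = []
        for i in range(0, len(data), self.chunk):
            c = data[i:i + self.chunk]
            fp = hashlib.sha256(c).digest()
            if fp in self.cache:
                frames.append(("ref", fp))      # seen before: send only the fingerprint
            else:
                self.cache[fp] = c
                frames.append(("raw", c))       # first sighting: send the bytes, cache them
        return frames

    def rehydrate(self, frames: list) -> bytes:
        out = bytearray()
        for kind, payload in frames:
            if kind == "raw":
                self.cache[hashlib.sha256(payload).digest()] = payload
                out += payload
            else:
                out += self.cache[payload]      # expand the reference from the cache
        return bytes(out)

sender, receiver = WanOptBox(), WanOptBox()
original = b"the same backup block " * 2000
assert receiver.rehydrate(sender.dedupe(original)) == original  # originals, fully rehydrated
assert receiver.rehydrate(sender.dedupe(original)) == original  # second pass: mostly references
```

The point of the round trip is the one made above: the far end gets the original bytes back, and nothing has to be kept afterward beyond the pattern cache both ends use to match future duplicates.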
Load Balancers for Developers – ADCs WAN Optimization Functionality

It's been a good long while since I wrote an installment of Load Balancing for Developers, but you all keep reading them, and they are still my most-read blog posts on a monthly basis. So, since I have an increased interest in WAN Optimization, and F5 has a great set of WAN Optimization products, I thought I'd tag right onto the end with more information that will help you understand what Application Delivery Controllers (ADCs) are doing for (and to) your code, and how they can help you tweak your application without writing even more code. If you're new to the series, it can be found here: Load Balancers For Developers on F5 DevCentral. This is number eight in the series, so if you haven't already read seven, you might check out the link also.

To continue the story, your application Zap-N-Go! has grown much faster than you had expected, and it is time to set up a redundant data center to ensure that your customers are always able to access your mondo-cool application. The solution is out there for you: Application Delivery Controllers with WAN Optimization capabilities turned on. WAN Optimization is the process of making your Internet communications faster. The whole idea is to improve the performance of your application by applying optimizations to the connection, the protocol, and the application. Sometimes the application is very specific – like VMware vMotion – and sometimes it is more generic, like CIFS or HTTP. There are multiple steps to get there, but it all starts in one place… Your application places information on the wire, or requests information from a remote location, and you need it to be snappy in responding.

Related Articles and Blogs:
Like a Matrushka, WAN Optimization is nested
Users Find the Secret of WAN Optimization
WAN Optimization 101: Know Your Options
BIG-IP WOM Product Overview (pdf)
WAN Optimization is not Application Acceleration
If I Were in IT Management Today…

I've had a couple of blog posts talking about how there is a disconnect between "the market" and "the majority of customers" where things like cloud (and less so storage) are concerned. So I thought I'd try this out as a follow-on: if I were running your average medium to large IT shop (not talking extremely huge, just medium to large), what would I be focused on right now?

By way of introduction, for those who don't know, I'm relatively conservative in my use of IT. I've been around the block, been burned a few times (OS/2 Beta Tester, WFW, WP… the list goes on), and the organizations I've worked for where I was part of "Enterprise IT" were all relatively conservative (Utilities, Financials), while the organizations I worked in Product or App Development for were all relatively cutting edge. I've got a background in architecture, App Dev, and large systems projects, and think that IT Management is (sadly) 50% corporate politics and 50% actually managing IT.

I'll focus on problems that we all have in general here, rather than a certain vertical, and most of these problems are applicable to all but the largest and smallest IT shops today. By way of understanding, this list is the stuff I would be spending research or education time on, and it is kept limited because the bulk of you and your staff's time is of course spent achieving or fixing for the company, not researching. Though most IT shops I know of have room for the amount of research I'm talking about below.