Cloud Storage Gateways. Short term win, but long term…?
In the rush to cloud, there are many tools and technologies out there that are brand new. I've covered a few of them – nowhere near a complete list – and it's interesting to see what is going on out there from a broad-spectrum view. I have talked a bit about Cloud Storage Gateways here, and I'm slowly becoming a fan of this technology for those who are considering storing in the cloud tier. There are a couple of good reasons to consider these products, and I have been thinking about those reasons and whether they still hold up. I thought I'd share where I stand on them at this time, and what I see happening that might impact their value proposition.

The two vendors I have taken some time to research while preparing this blog for you are Nasuni and Panzura. No doubt there are plenty of others, but I'm writing you a blog here, not researching a major IT initiative. So I researched two of them to have some points of comparison, and leave the in-depth vendor selection research to you and your staff. These two vendors present similar base technology and very different additional feature sets. Both rely heavily upon local caching in the controller box, both work with multiple cloud vendors, and both claim to manage compression.

Nasuni delivers as a Virtual Appliance and includes encryption on your network before transmitting to the cloud, automated cloud provisioning, and caching that has timed updates to the cloud but can perform a forced update if the cache gets full. It presents the cloud storage you've provisioned as a NAS on your end.

Panzura delivers a hardware appliance that also presents the cloud as a NAS, works with multiple cloud vendors, handles encryption on-device, and claims global dedupe. I say claims because "global" means "all", and in their case "all" means "all the storage we know about", not "all the storage you have". I would prefer a different term, but I get what they mean. Like everything else, they can't de-dupe what they don't control. They too present the cloud storage you've provisioned as a NAS on your end, but claim to accelerate CIFS and NFS also. Panzura is also trying to make a big splash about speeding access to MS-Sharepoint, but honestly, as a TMM for F5, a company that makes two astounding products that speed access to Sharepoint and nearly everything else on the Internet (LTM and WOM), I'm not impressed by Sharepoint acceleration. In fact, our Sharepoint Application Ready Solution is here, and our list of Application Ready Solutions is here. Those are just complete architectures we support directly, and don't touch on what you can do with the products through Virtuals, iRules, profiles, and the host of other dials and knobs. I could go on and on about this topic, but that's not the point of this blog, so suffice it to say there are some excellent application acceleration and WAN Optimization products out there, and this point solution alone should not be a buying criterion.

There are some compelling reasons to purchase one of these products if you are considering cloud storage as a possible solution. Let's take a look at them.

Present cloud storage as a NAS – This is a huge benefit right now, but over time the importance will hopefully decrease as standards for cloud storage access emerge. Even if there is no actual standard that everyone agrees to, it will behoove smaller players to emulate the larger players that are allowing access to their storage in a manner that is similar to other storage technologies.
Encryption – As far as I can see this will always be a big driver. They're taking care of encryption for you, so you can sleep at night as they ship your files to the public cloud. If you're considering them for non-public cloud, this point may still be huge if your pipe to the storage is over the public Internet.

Local Caching – With current broadband bandwidths, this will be a large driver for the foreseeable future. You need your storage to be responsive, and local caching increases responsiveness; depending upon implementation, cache size, and how many writes you are doing, this could be a huge improvement.

De-duplication – I wish I had more time to dig into what these vendors mean by dedupe. Replacing duplicate files with a symlink is simplest and most resembles existing file systems, but it is also significantly less effective than partial-file de-dupe. Let's face it, most organizations have a lot more duplication lying around in files named Filename.Draft1.doc through Filename.DraftX.doc than they do in completely duplicate files. Check with the vendors if you're considering this technology to find out what you can hope to gain from their de-dupe. This is important for the simple reason that in the cloud, you pay for what you use. That makes de-duplication more important than it has historically been.

The largest caution sign I can see is vendor viability. This is a new space, and we have plenty of history with early entry players in a new space. Some will fold, some will get bought up by companies in adjacent spaces, some will be successful… at something other than Cloud Storage Gateways, and some will still be around in five or ten years. Since these products compress, encrypt, and de-dupe your data, and both of them manage your relationship with the cloud vendor, losing them is a huge risk. I would advise some due diligence before signing on with one – new companies in new market spaces are not always a risky proposition, but you'll have to explore the possibilities to make sure your company is protected. After all, if they're as good as they seem, you'll soon have more data running through them than you'll have free space in your data center, making eliminating them difficult at best.

I haven't done the research to say which product I prefer, and my gut reaction may well be wrong, so I'll leave it to you to check into them if the topic interests you. They would certainly fit well with an ARX, as I mentioned in that other blog post. Here's a sample architecture that would make "the Cloud Tier" just another piece of your virtual storage directory under ARX, complete with the automated tiering and replication capabilities that ARX owners thrive on. This sample architecture shows your storage going to a remote data center over EDGE Gateway, to the cloud over Nasuni, and to NAS boxes, all run through an ARX to make the client (which could be a server or a user – remember this is the NAS client) see a super-simplified, unified directory view of the entire thing. Note that this is theoretical; to my knowledge no testing has occurred between Nasuni and ARX, and usually (though certainly not always) the storage traffic sent over EDGE Gateway will be from a local NAS to a remote one, but there is no reason I can think of for this not to work as expected – as long as the Cloud Gateway really presents itself as a NAS. That gives you several paths to replicate your data, and still presents client machines with a clean, single-directory NAS that participates in ADS if required.
In this case Tier one could be NAS Vendor 1, Tier two NAS Vendor 2, your replication targets securely connected over EDGE Gateway, and tier 3 (things you want to save but no longer need to replicate, for example) is the cloud as presented by the Cloud Gateway. The Cloud Gateway would arbitrate between actual file systems and whatever idiotic interface the cloud provider decided to present and tell you to deal with, while the ARX presents all of these different sources as a single-directory-tree NAS to the clients, handling tiering between them, access control, etc. And yes, if you're not an F5 shop, you could indeed accomplish pieces of this architecture with other solutions. Of course, I'm biased, but I'm pretty certain the solution would not be nearly as efficient, cool, or let you sleep as well at night.

Storage is complicated, but this architecture cleans it up a bit. And that's got to be good for you. All things considered, the only issue that is truly concerning is failure of a startup cloud gateway vendor. If another vendor takes one over, they'll either support it or provide a migration path; if they are successful at something else, you'll have plenty of time to move off of their storage gateway product. So only outright failure is a major concern.

Related Articles and Blogs
Panzura Launches ANS, Cloud Storage Enabled Alternative to NAS
Nasuni Cloud Storage Gateway
InfoSmack Podcasts #52: Nasuni (Podcast)
F5's BIG-IP Edge Gateway Solution Takes New Approach to Unifying, Optimizing Data Center Access
Tiering is Like Tables, or Storing in the Cloud Tier

Copied Data. Is it a Replica, Snapshot, Backup, or an Archive?
It is interesting to me the number of variant Transformers that have been put out over the years, and the effect that has on those who like Transformers. There are four different "Construction Devastator" figures put out over the years (there may be more; I know of four), and every Transformers collector or fan that I know – including my youngest son – wants them all. That's great marketing on the part of Hasbro, for certain, but it does mean that those who are trying to collect them are going to have a hard time of it, just because they were produced and then stopped, and all of them consist of seven or more parts. That's a lot of things to go wrong. But still, it is savvy for Hasbro to recognize that a changed Transformer equates to more sales, even though it angers the diehard fans.

As time moves forward, technology inevitably changes things. In IT that statement implies "at the speed of light". Just like your laptop has been replaced with a newer model before you get it, and is "completely obsolete" within 18 months, so other portions of the IT field are quickly subsumed or consumed by changes. The difference is that IT is less likely to get caught up in the "new gadget" hype than the mass market. So while your laptop was technically outdated before it landed in your lap, IT knows that it is still perfectly usable and will only replace it when the warranty is up (if you work for a smart company) or it completely dies on you (at a company pinching pennies). The same is true of every piece of storage; it is just that we don't suffer from "Transformer Syndrome". Old storage is just fine for our purposes, unless it actually breaks. Since you can just continue to pay annual licensing fees, there's no such thing as "out of warranty" storage unless you purchase very inexpensive gear, or choose to let it lapse. For the very highest end, letting it lapse isn't an option, since you're licensing the software. The same is true with how we back up and restore that data.

Devastator, image courtesy of Gizmodo.com

But even with a stodgy group like IT, who has been bitten enough times to know that we don't change something unless there's a darned good reason, eventually change does come. And it's coming to backup and replication. There are a lot of people still differentiating between backups and replication. I think it's time for us to stop doing so. What are the differences? Let's take a look.

1. Backups go to tape. Hello Virtual Tape Libraries, how are you?
2. Backups are archival. Hello tiering; you allow us to move things to different storage types, and replicate them at different intervals, right? So all is correctly backed up for its usage levels?
3. Replication is near-real-time. Not really. You're thinking of Continuous Data Protection (CDP), which is gaining traction by app, not broadly.
4. Replication goes to disk and that makes it much faster. See #1. VTL is fast too.
5. Tape is slow. Right, but that's a target problem, not a backup problem. VTLs are fast.
6. Replication can do just the changes. Yeah, why this one ever became a myth, I'll never know, but remember "incremental backups"? Same thing.

I'm not saying they're exactly the same – incremental replicas can be reverse-applied so that you can take a version of the file without keeping many copies, and that takes work in a backup environment. What I AM saying is that once you move to disk (or virtual disk in the case of cloud storage), there isn't really a difference worthy of keeping two different phrases.
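To make the "same mechanism, different schedule" point concrete, here is a minimal sketch in Python. It is mine, not any vendor's implementation, and every name in it is made up; it simply copies whatever changed since the last run, which is all an incremental backup or an incremental replica fundamentally does at the file level.

```python
import hashlib
import shutil
from pathlib import Path

def file_digest(path: Path) -> str:
    """Hash a file's contents so unchanged files can be skipped."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def incremental_copy(source: Path, target: Path, manifest: dict) -> dict:
    """Copy only files whose contents changed since the last run.

    Pass in the manifest returned by the previous run (or {} the first
    time). Deletions are left out to keep the sketch short; a real
    product handles those too.
    """
    new_manifest = {}
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        rel = str(src.relative_to(source))
        digest = file_digest(src)
        new_manifest[rel] = digest
        if manifest.get(rel) != digest:  # new or changed file
            dest = target / rel
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)
    return new_manifest
```

Run it nightly against a VTL-backed share and it is an incremental backup; run it every few minutes against a replica target and it is replication. Same code, different schedule – which is the point.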
Tape isn't dead – many of you still use a metric ton of it a year – but it is definitely waning, slowly. Meaning more and more of us are backing up or replicating to disk.

Where did this come from? A whitepaper I wrote recently came back from technical review with "this is not accurate when doing backups", and that got me to thinking "why the heck not?" If the reason for maintaining two different names is simply a people reason, while the technology is rapidly becoming the same mechanisms – disk in, disk out – then I humbly suggest we just call it one thing, because all maintaining two names and one fiction does is cause confusion. For those who insist that replicas are regularly updated, I would say making a copy or snapshotting them eliminates even that difference – you now have an archival copy that is functionally the same as a major backup. Add in an incremental snapshot and, well, we're doing a backup cycle. With tiering, you can set policies to create snapshots or replicas on different timelines for different storage platforms, meaning that your tier three data can be backed up very infrequently, while your tier one (primary) storage is replicated all of the time. Did you see what I did there? The two are used interchangeably. Nobody died, and there's less room for confusion.

Of course I think you should use our ARX to do your tiering, ARX Cloud Extender to do your cloud connections, and take advantage of the built-in rules engine to help maintain your backup schedule. But the point is that we just don't need two names for what is essentially the same thing any more. So let's clean up the lingo. Since replication is more accurate to what we're doing these days, let's just call it replication. We have "snapshot", which is already associated with replication for point-in-time copies, and that lets us differentiate between a regularly updated replica and a frozen-in-time "backup". Words fall in and out of usage all of the time; let's clean up the tech lingo and all get on the same page. No, no we won't, but I've done my bit by suggesting it. And no doubt it will help those confused by the current state of the lingo to understand that yes, the two are essentially the same thing; only archaic history keeps them separate. Or you could buy all three – replicate to a place where you can take a snapshot and then back up the snapshot (not as crazy as it sounds, I have seen this architecture deployed to get the backup process out of production, but I was being facetious). And you don't need a ton of names. You replicate to secondary (or tertiary) storage, then take a snapshot, then move or replicate the snapshot to a remote location – like the cloud or a remote datacenter. Not so tough, and one term is removed from the confusion, inadvertently adding crispness to the other terms.

The Right (Platform) Tool For the Job(s).
One of my hobbies is modeling – mostly for wargaming, but also for the sake of modeling. In an average year I do a lot of WWII models, some modern military, some civilian vehicles, figures from an array of historical time periods, and the occasional sci-fi figure for one of my sons… the oldest (24 y/o) being a WarHammer 40k player and the youngest (3 y/o) just plain enjoying anything that looks like a robot. While I have been modeling more or less for decades, only in the last five years have I had the luxury of owning an airbrush, and I restrict it to very limited uses – mostly base-coating larger models like cars, tanks, or spaceships.

The other day I was reading on my airbrush vendor's website and discovered that they had purchased a competitor that specialized in detailing airbrushes – so detailed that the line is used to decorate fingernails. This got me to thinking that I could do more detailed bits on models – like shovel blades and flesh-tones – with an airbrush if I had one of these little detail brushes. Lori told me to send her a link to them so that she had it on the list for possible gifts, so I went out and started researching which model of the line was most suited to my goals.

The airbrush I have is one of the best on the market – a Badger Airbrush Company model 150. It has dual-action, which means that pushing down on the trigger lets air out, and pulling the trigger back while pushing down lets an increasing amount of paint flow through. I use this to determine the density of paint I'm applying, but have never thought too much about it. Well, in my research I wanted to see how much difference there was between my airbrush and the Omni that I was interested in. The answer… almost none. Which confused me at first, as my airbrush, even with the finest needle and tip available and a pressure valve on my compressor to control the amount of air being pumped through it, sprays a lot of paint at once. So I researched further, and guess what? The volume of paint adjustment that is controlled by how far you draw back the trigger, combined with the PSI you allow through the regulator, will control the width of the paint flow. My existing airbrush can get down to 2mm – sharpened-pencil-point widths. I have a brand-new fine tip and needle (in poor lighting I confused my fine needle with my reamer and bent the tip a few weeks ago, so I ordered a new one), and my pressure regulator is a pretty good one; all that is left is to play with it until I have the right pressure, and I may be doing more detailed work with my airbrush in the near future.

Airbrushing isn't necessarily better – for some jobs I like the results better, like single-color finishes, because if you thin the paint and go with several coats, you can get a much more uniform worn look to surfaces – but overall it is just different. The reason I would want to use my airbrush more is, simply, time. Because you don't have to worry about crevices and such (the air blows paint into them), you don't have to take nearly as long to paint a given part with an airbrush as you do with a brush. At least for the base coat anyway; you still need a brush for highlighting and shadowing… or at least I do… But it literally cuts hours off of a group of models if I can arrange one trip down to the spray area versus brush-painting those same models.

What does all of this have to do with IT? The same thing it usually does.
You have a ton of tools in your datacenter that do one job very well, but you have never had reason to look into alternate uses at which the tool might do just as well or better. This is relatively common with Application Delivery Controllers, where they are brought in just to do load balancing, or just for application acceleration, or just for WAN Optimization, and the other things that the tool does just as well haven't been explored. But you might want to do some research on your platforms, just to see if they can serve other needs than you're putting them to today. Let's face it, you've paid for them, and in many cases they will work as-is, or with a slight cost add-on, to do even more. It is worth knowing what "more" is for a given product, if for no other reason than having that information in your pocket when exploring solutions going forward.

A similar situation is starting to develop with our ARX family of products, and no doubt with some competitors also (though I haven't heard of it from competitors, I'm simply conjecturing) – as ARX grows in its capabilities, many existing customers aren't taking advantage of the sweet new tools that are available to them for free or for a modest premium on their existing investment. ARX Cloud Extender is the largest case of this phenomenon that I know of, but this week's EMC Atmos announcement might well go a long way toward remedying that. To me it is very cool that ARX can virtualize your NAS devices AND include cloud and/or object storage alongside NAS so as to appear to be one large pool of storage. Whether you're a customer or not, it's worth checking out.

Of course, like my airbrush, you'll have some learning to do if you try new things with your existing hardware. I'll spend a couple of hours with the airbrush figuring out how to make reliable lines of those sizes, then determine where best to use it. While I could have achieved the same or similar results with masking, the time investment for masking is large and repetitive, and the dollar cost is repetitive too. I also could have paid a large chunk of money for a specialized detail airbrush, but then I'd have two tools to maintain, when one will do it all… And this is true of alternatives to learning new things about your existing hardware – the learning curve will be there whether you implement new functionality on your existing platforms or purchase a point solution, so it is best to figure out the cost in time and money to solve the problem from either direction. Often, you'll find the cost of learning a new function on familiar hardware is much lower than purchasing and learning all-new hardware.

WWII Russians – vehicle is airbrushed, figures not.

It is Never Easy, But There's a Lot of Change Going On.
Every spring I get excited. I live in Wisconsin, which my travels have shown me you may not understand. I have actually been told "that is not your house, there is snow on the ground. All of America is sun and beaches." Well, in Wisconsin, it gets cold. Moscow-style cold. There are a couple of weeks each winter where going out is something you do only after bundling up like a toddler… mittens, hats, coat, another coat, boots… But then spring comes, and once the temperature gets to the point where the snow starts to melt, the sun starts to feel warm again. It's at that point that I start to get that burst of energy, and every year it surprises me. I realize that I was, toward the end of the winter, slowing down. Not work-wise, but home-wise. You can't do too much work outside, and there are days I didn't even break down boxes for recycling because it was too cold in the (unheated) garage. So inside things take precedence. This year it was staining some window frames, helping Lori get her monstrous new fishtank set up, and working on some fun stuff I'd been sitting on.

I register a very similar surprise in IT, even though, just like winter, it is a predictable cycle. The high-tech industry just keeps turning out new ideas, products, and hype cycles.

Black Bear Hibernating – www.bear.org

But this round seems different to me. Instead of a rush of new followed by a predictable lull while enterprises digest the new and turn it into functional solutions, it seems that, even given the global economy, the new just keeps coming. From Server Virtualization to Server Consolidation to Storage Virtualization to Primary Dedupe, through network virtualization and the maturity of load balancers into ADCs, then the adaptation of the best ADCs into tools to manage virtualization sprawl. Throw in Cloud, then Cloud Storage, heap network convergence (with storage networks) onto the pile, and then drop the mobile device bomb… Wow. It's been a run. IT has always held the belief that the only constant is change, but the rate of change seems to be in high gear over the last several years.

The biggest problem with that is none of this stuff exists in a vacuum, and you don't really get the opportunity to digest any of it and make it an integral part of your architecture if you're doing it all. F5 and several other companies have some great stuff to help you take the bull by the horns, ours being instantiated as what we call Strategic Points of Control, but they too require time and effort. The theory is, of course, that we're going to a better place, that IT will be more adaptable and less fragile. That needs to be in your sights at all times if you are participating in several of these changes at the same time, but the short term must be in your sights also – don't make your IT less adaptable and more fragile today on the promise of making it less so in the future. And that's a serious risk if you move too fast. That is a lot of change in your systems, and while I've talked about them individually, an architecture plan (can you tell I was an Enterprise Architect once?) that coordinates the changes you're making and leaves breathing space so you can make the changes a part of your systems is a good idea. I'm not saying drag your feet, but I am saying that the famous saying "He who defends everything defends nothing" has an IT corollary: "He who changes everything risks everything."

Do we here at F5 want you to buy our products? Of course we do. We wouldn't make them if we didn't think they rocked.
Do we want you to redesign your network on-the-fly on a Sunday night from one end to the other? Not if it risks you failing. We look bad if you look bad because of us. So take your time, figure out which of the many new trends holds the most promise for your organization, prioritize, then implement. Make sure you know what you have before moving on to the next change. Many of you have stable virtualized server environments already, so moving on from there is easier, but many of you do not yet have stability in virtualization. VMware and others make some great tools to help with managing your virtualized environment, but only if you've been in the virtualization game long enough to realize you need them.

Where will we end up? I honestly don't know. For sure with highly virtualized datacenters, and with much-shortened lead times for IT to implement new systems. Perhaps we'll end up 100% in the cloud, but there are inherent risks that make 100% doubtful – like outsourcing, you're only as good as the date on your contract. So the future is cloudy, pun intended.

So take your time. I've said it before, and will likely say it again: we're here to help, but we want to help, not help shove you over the cliff. Good vendors will still be around if you delay implementation of some new architectural wonder by six weeks or six months to stabilize the one you just implemented, and the vendors that aren't around? Well, imagine if you'd bought into them. :-) Another old adage that has new meaning at the current rate of change is "Anything worth doing is worth doing right". Of course there will be politics in many of the most recent round of changes – pressure to do it faster – and I can't help you there, other than to suggest you point out that the difference between responsive and reckless is directly related to the pressure applied.

My big kick at the moment is access to cloud storage from your local network. It's a big bang for the buck: whether you're using our ARX Cloud Extender or one of the various cloud storage gateways out there, it gives you a place to move stuff that you don't have to back up, but don't have to risk losing either.

No Really. Broadband.
In nature, things seek a balance that is sustainable. In the case of rivers, if there is too much pressure from water flowing, they either flood or open streams to let off the pressure. Both are technically examples of erosion, but we're not here to discuss that particular natural process; we're here to consider the case of a stream off a river when there is something changing the natural balance. Since I grew up around a couple of man-made lakes – some dug, some created when the mighty AuSable River was dammed – I'll use man-made lakes as my examples, but there are plenty of more natural examples, such as earthquakes, that create the same type of phenomenon. Now that I've prattled a bit, we'll get down to the science.

A river will sometimes create off-shoots that run to relieve pressure. When these off-shoots stay and have running water, they're streams or creeks. Take the river in the depiction below:

The river flows right to left, and the stream is not a tributary – it is not dumping water into the river, it is a pressure relief stream taking water out. These form in natural depressions when, over time, the flow of a river is more than erosion can adjust for. They're not at all a problem, and indeed distribute water away from the source river and into what could be a booming forest or prime agricultural land. But when some event – such as man dredging a man-made lake – creates a vacuum at the end of the stream, then the dynamic changes. Take, for example, the following depiction.

When the bulbous lake at the top is first dug, it is empty. The stream will have the natural resistance of its banks removed, and will start pulling a LOT more water out of the river. This can have the effect of widening the stream in areas with loose-packed soil, or of causing it to flow very fast in less erosion-friendly environments like stone or clay. Either way, there is a lot more flowing through that stream. Make the lake big enough, and you can divert the river – at least for a time, and depending upon geography, maybe for good. This happens because water follows the path of least resistance, and if the pull from that gaping hole that you dug is strong enough, you will quickly cause the banks of the stream to erode and take the entire river's contents into your hole.

And that is pretty much what public cloud adoption promises to do to your Internet connection. At 50,000 feet, your network environment today looks like this:

Notice how your Internet connection is comparable to the stream in the first picture? Where it's only taking a tiny fraction of the traffic that your LAN is utilizing? Well, adding in public cloud is very much like digging a lake. It creates more volume running through your Internet connection. If you can't grow the width of your connection (due to monthly overhead implications), then you're going to have to make it go much faster. This is going to be a concern, since most applications of cloud – from storage to apps – are going to require two-way communication with your datacenter. Whether it be for validating users or accessing archived files, there's going to be more traffic going through your WAN connection and your firewall.

Am I saying "don't use public cloud"? Absolutely not. It is a tool like any other; if you are not already piloting a project out there, I suggest you do so, just so you know what it adds to your toolbox and what new issues it creates.
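To put rough numbers on how much wider that stream can get, here is a back-of-envelope sketch. Every figure in it is an assumption I made up for illustration – plug in your own link size, user counts, and replication volumes.

```python
# Back-of-envelope check: will the Internet link absorb the new cloud traffic?
# All numbers below are illustrative assumptions -- substitute your own.

link_mbps = 100.0            # current Internet connection
current_utilization = 0.40   # fraction already in use at peak

users = 500                  # users hitting cloud-hosted apps
kbps_per_user = 120.0        # average per-user app traffic at peak
replication_mbps = 25.0      # storage replication to the cloud during peak

new_mbps = users * kbps_per_user / 1000.0 + replication_mbps
total = link_mbps * current_utilization + new_mbps

print(f"Projected peak: {total:.0f} Mbps on a {link_mbps:.0f} Mbps link "
      f"({total / link_mbps:.0%} utilization)")
# Anything approaching (or passing) 100% says: widen the pipe or deploy WAN
# optimization (compression, dedupe, TCP tuning) before going to production.
```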
But the one thing that is certain is that the more you're going "out there" for apps and data, the more you'll need to improve performance of your Internet connections. Mandatory plug: F5 sells products like WOM, EDGE Gateway, and WAM to help you improve the throughput of your WAN connection, and they would be my first stop in researching how to handle increased volumes generated by cloud usage… But if you are a "Vendor X" shop, look at their WAN Optimization and Web Acceleration solutions. Don't wait until this becomes an actual problem rather than a potential one – when you set up a project team to do a production project out in the public cloud, along with security and appdev, make sure to include a WAN optimization specialist, so you can make certain your Internet connection is not the roadblock that sinks the project.

This is also the point where I direct your attention to that big firewall in the above diagram. Involve your security staff early in any cloud project. Most of the security folks I have worked with are really smart cookies, but they can't guarantee the throughput of the firewall if they don't know you're about to open up the floodgates on them. Give them time to consider more than just how to authenticate cloud application users. I know I've touched on this topic before, but I wanted it to be graphically drawn out, so you got to see my weak MS-Paint skills in action, and hopefully I gave you a more obvious view of why this is so important.

Cloud Storage Gateways, stairway to (thin provisioning) heaven?
With thanks to Led Zeppelin for some great lyrics.

There's a sign on the wall
But she wants to be sure
'Cause you know sometimes words have
Two meanings

Since cloud computing has a bit of an identity crisis, and cloud storage is just starting to realize one itself, it should be no surprise to anyone that "cloud storage gateway" has more than one meaning. While they are all a single market, implementation and deployment details make them very distinct products. In such a young market, differentiation is easy, even if selling your differentiation as a plus is not. Some vendors are already attempting to turn product differentiation into market segmentation – the upcoming Cirtas product, for example, is referred to by them as a "Cloud Storage Controller" because they believe that better defines their product, though they acknowledge that the market term "gateway" has caught on.

When she gets there she knows
If the stores are all closed
With a word she can get what she came for

Okay, so you don't quite have that power, but all of these products do offer you a significant bonus in terms of cloud storage in the form of thin provisioning. For several years now you have had the capability to tell your server it had more storage than was actually dedicated to it, and if it ran over what was actually dedicated, more was allocated from a pool. The problem with this model is that you have to be certain you have enough storage to cover the worst reasonable case – what percentage of your servers might request extra storage over the weekend, and how much might they request? Weekend, month, year… whatever your timeframe for buying new storage. The point of over-provisioning is that you'll likely be oversubscribed, but you're taking the risk that the oversubscription will never come due at the same time with one large calamitous bang. I wrote about this scenario and how virtualization has made the risk worse here.

Yes, there are two paths you can go by
But in the long run
There's still time to change
The road you're on

Enter Cloud Storage Gateways. First, a bit about cloud providers… They scale up to as much as you need (as long as your payments are covering it, anyway), and down as your usage goes down. I won't say all of them fit this pattern, because there are a bewildering number of players looking to make a name in this space, and believe it or not, F5 doesn't pay me to ponder cloud storage or cloud storage gateways, they merely allow me to chat about it, so I'm not taking weeks researching all of this, more like hours. The big players do indeed scale up and down, though, billing you only for actual usage. Now that we have covered that for any who didn't know: the cloud storage gateway handles the intricacies of dealing with various cloud storage providers, and most cache locally and encrypt on the way out. Starting to see the silver lining yet? They give you thin provisioning limited only by how much money you're willing to risk. The current model gives you thin provisioning limited by either how much you're willing to pay to guarantee you have enough disk for the worst case, or the amount of risk you're willing to take on. Cloud storage gateways navigate that mine-laden sea for you and guarantee that your servers will stay up as long as you're willing to foot the bill. Of course that doesn't eliminate planning for you, but it does allow you to move the choke point up and down much more easily.
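If you want to see that difference in cold numbers, here is a tiny illustrative calculation. The figures are invented for the example – swap in your own pool sizes and growth rates.

```python
# Illustrative oversubscription math for traditional thin provisioning.
# All figures are assumptions for the sake of the example.

provisioned_tb = 200.0       # storage promised to servers
physical_tb = 80.0           # disk actually in the pool
used_tb = 60.0               # currently consumed
growth_tb_per_month = 6.0    # how fast allocations are actually consumed

oversubscription = provisioned_tb / physical_tb
months_of_headroom = (physical_tb - used_tb) / growth_tb_per_month

print(f"Oversubscribed {oversubscription:.1f}:1, "
      f"~{months_of_headroom:.0f} months before the pool is exhausted")

# With a cloud storage gateway behind the pool, physical_tb stops being a
# hard ceiling: headroom becomes a budget question (cost per TB-month)
# rather than a procurement deadline.
```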
You can eliminate the risk without a significant cash outlay, if that is your desire, as long as you know what it will cost you if your servers all start requesting more and more storage.

And it makes me wonder

(If you just went "Ooohhh Yeahhh-hah" in your head, take a moment to laugh at yourself, it's healthy.)

The biggest risk in cloud storage gateways is the one I mentioned in a previous blog. If they are scooped up by cloud storage vendors that suddenly remembered the lingua franca of enterprise IT storage is not SOA, they will surely limit your options on the back-end. One of the strengths of these products is that you can point one at two completely different cloud storage vendors and remind each that with a flick of a switch, you can be on the other one, so 48-hour response times are not acceptable. That benefit would almost certainly disappear if a cloud storage vendor bought up your cloud storage gateway vendor. Otherwise, the risk is not any larger than any other cloud solution.

Your head is humming and it won't go
In case you don't know
The piper's calling you to join him

Cloud everything is the buzz du jour, and cloud storage with a gateway to make it appear on your network as a NAS is a good idea for tier three, and these vendors are all (I've only spoken to three, so "all" is a bit of a stretch) saying they're getting traction on tier two also, which makes sense for a lot of tier two data. Either way, it's coming your way, and you should consider if it has a space in your datacenter. The idea of truly thin provisioning is a huge one that even further removes you from the limitation of monstrous disk arrays. And if thin provisioning with no oversubscription worries is on your list of things that would help you sleep at night, I suggest you go out and try…

Bu-uying a stairway… To (thin provisioning) heaven

And I won't even get into what these solutions coupled with the automated tiering of ARX can do for you. That's for another blog.

Related Articles and Blogs
Stairway to Heaven Lyrics
Thin Provisioning Plus VMs - Armageddon in a Virtual Box
Cloud Storage Gateways: Short Term Win, but Long Term…?
More Cloud Storage Gateways Come Out
Show Me The Gateway – Talking Storage to the Cloud

Remember When Hand Carts Were State Of The Art? Me either.
Funny thing about the advancement of technology: in most of the modern world we enshrine it, spend massive amounts of money to find "the next big thing", and act as if change is not only inevitable, but rapid. The truth is that change is inevitable, but not necessarily rapid, and sometimes it's about necessity. Sometimes it is about productivity. Sometimes, it just plain isn't about either. Handcarts are still used for serious purposes in parts of the world, by people who are happy to have them and think a motorized vehicle would be a waste of resources. Think on that for a moment. What high-tech tool that was around 20 years ago are you still using? Let alone 200 years ago. The replacement of handcarts as a medium for transport not only wasn't instant, it's still going on 100 years after cars were mass produced.

Handcart in use – Mumbai Daily

We in high-tech are constantly in a state of flux from this technology to that solution to the other architecture. The question you have to ask yourself – and this is getting more important for enterprise IT in my opinion – is "does this do something good for the company?" It used to be that IT folks could try out all sorts of new doo-dads just to play with them and justify the cost based on the future potential benefit to the company. I'd love to say that this had a powerful positive effect, but frankly, it only rarely paid off. Why? Because we're geeks. We buy this stuff on our own dime if the company won't foot for it, and our eclectic tastes don't necessarily jibe with the needs of the organization.

These days, the change is pretty intense, and focuses on infrastructure and application deployment architectures. Where can you run this application, and what form will the application take? Virtualized? Dedicated hardware? Cloud? The list goes on. And all of these questions spur thoughts about security, storage, and the other bits of infrastructure required to support an application no matter where it is deployed. These are things that you can model in your basement, but can't really test out, simply because the architecture of an enterprise is far more complex than the architecture of even the geekiest home network. Lori and I have a pretty complex network in our basement, but it doesn't hold a candle to our employers' worldwide network supporting dev and sales offices on every continent, users in many languages, and a potpourri of access methods that must be protected and available.

Sometimes, change is simply a change of perspective. F5's new iApps, for example, put the ADC infrastructure bits together for the application: instead of managing application security within the module that handles application security (ASM), an iApp bundles security in with all of the other bits – like load balancing, SSL offload, etc. – that an application requires. This is pretty powerful; it speeds deployment and troubleshooting because everything is in one place, and it speeds adding another machine because you simply apply the same iApp Template. That means you spin up another instance of the VM in question, tweak the settings, and apply the template already being used on existing instances, and you're up.

Sometimes, change is more radical. Deploying to the cloud is a good example of this, and cloud deployments suffer for it. Indeed, private and hybrid clouds are growing rapidly precisely because of the radical change that public cloud can introduce. Cloud storage was so radical that very few were willing to use it even as most thought it was a good idea.
Along came cloud storage gateways like our ARX Cloud Extender or a variety of others, and suddenly the weakness was ameliorated… because the radical bit of cloud storage was simply that it didn't talk like storage traditionally has. With a gateway, it does. And with most gateways (check with your provider) you get compression and encryption, making the cloud storage more efficient and secure in the process.

But like the handcart, the idea that cloud, or virtualization, or consumerization must take hold overnight, and that you're behind the times if you weren't doing it yesterday, is misplaced. Figure out what's best for your organization, not just in terms of technology, but in terms of timelines also. Sure, some things, like support for the CEO's iPad, will take on a life of their own, but in general you've got time to figure out what you need, when you need it, and how best to implement it. As I've mentioned before, at the cutting edge of technology, when the hype cycle is way overblown, that's where you'll find the largest number of vendors that won't be around to support you in five years. If you can wait until the noise about a space quiets down, you'll be better served, because the level of competition will have eliminated the weaker companies and you'll be dealing with the technological equivalent of the Darwinian fittest. Sure, some of those companies will fail or get merged also, but the chances that your vendor of choice won't, or that their products will live on, are much better after the hype cycle.

After all, even though engine-powered conveyances have largely replaced hand carts, have you heard of White Motor Company, Autocar Company, or Diamond T Company? All three made automobiles. They lived through boom and were swallowed in bust. Though in automobiles the cycle is much longer than in high-tech (Autocar started in the late 1800s and was purchased by White in the 1950s, for example, who was purchased later by Audi), the same process occurs, so count on it. And no, I haven't developed a sudden interest in automobile history; all of these companies thrived making half-tracks in World War Two, and that's how I knew to look for them amongst the massive number of failed car companies.

Stay in touch with the new technologies out there, pay attention to how they can help you, but as I've said quite often, what's in the hype cycle isn't necessarily what is best for your organization.

1908 Autocar XV (Wikipedia.org)

Of course I think things like our VE product line and our new V.11 with both iApps and app mobility are just the thing for most organizations, though even with those I will say "depending upon your needs".

When The Walls Come Tumbling Down.
When horrid disasters strike, and both people and corporations are put on notice that they suddenly have a lot more important things to do, will you be ready? It is a testament to man's optimism that with very few exceptions we really aren't – not at the personal level, not at the corporate level. I've worked a lot of places, and none of them had a complete, ready-to-rock DR plan. The insurance company I worked at was the closest – they had an entire duplicate datacenter sitting dark in a location very remote from HQ, awaiting need. Every few years they would refresh it to make certain that the standby DC had the correct equipment to take over, but they counted on relocating staff from what would be a ravaged area in the event of a catastrophe, and were going to restore thousands of systems from backups before the remote DC could start running. At the time it was a good plan. Today it sounds quaint. And it wasn't that long ago.

There are also a lot of you who have yet to launch a cloud initiative of any kind. This is not from lack of interest, but more because you have important things to do that are taking up your time. Most organizations are dragging their feet replacing people, and few – according to a recent survey, very few – are looking to add headcount (proud plug: F5 is – check out our careers page if you're looking). It's tough to run off and try new things when you can barely keep up with the day-to-day workloads. Some organizations are lucky enough to have R&D time set aside. I've worked at a couple of those too, and honestly, they're better about making use of technology than those who do not have such policies. Though we could debate if they're better because they take the time, or take the time because they're better.

And the combination of these two items brings us to a possible pilot project. You want to be able to keep your organization online, or be able to bring it back online quickly, in the event of an emergency. Technology is making it easier and easier to complete this arrangement without investing in an entire datacenter and constantly refreshing the hardware to have quick recovery times. Global DNS in various forms is available to redirect users from the disabled datacenter to a datacenter that is still capable of handling the load; if you don't have multiple datacenters, then it can redirect elsewhere – like to virtual servers running in the cloud. ADCs are starting to be able to work similarly whether they are cloud deployed or DC deployed. That leaves keeping a copy of your necessary data and applications in the cloud, and cloud storage with a cloud storage gateway (such as the Cloud Extender functionality in our ARX product) allows this to be done with a minimum of muss and fuss. These technologies, used together, yield a DR architecture that looks something like this:

Notice that the cloud extender isn't listed here, because it is useful for getting the data copied, but would most likely reside in your damaged datacenter. Assuming that the cloud provider was one like our partner Rackspace, who does both cloud VMs and cloud storage, this architecture is completely viable. You'll still have to work some things out, like guaranteeing that security in the cloud is acceptable, but we're talking about an emergency DR architecture here, not a long-running solution, so app-level security and functionality to block malicious attacks at the ADC layer will cover most of what you need. AND it's a cloud project.
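The heart of the architecture above is the global DNS decision: answer queries with the primary datacenter while it is healthy, and with the cloud DR site when it is not. Real GSLB products (our GTM among them) do this with far richer monitors; here is a minimal sketch of just the idea in Python, with hypothetical site names and health URLs standing in for yours.

```python
import urllib.request

# Sites in preference order; names and URLs are illustrative stand-ins.
SITES = [
    ("primary-dc", "https://dc1.example.com/health"),
    ("cloud-dr", "https://dr.cloudprovider.example.com/health"),
]

def healthy(url: str, timeout: float = 3.0) -> bool:
    """A site counts as 'up' if its health endpoint answers 200 in time."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers connection errors, timeouts, HTTP errors
        return False

def resolve_target() -> str:
    """Return the first healthy site in preference order.

    Hand this answer to the DNS layer, and users quietly land on the
    DR site when the primary datacenter goes dark.
    """
    for name, health_url in SITES:
        if healthy(health_url):
            return name
    return "cloud-dr"  # last resort: fail toward the DR site
```

In production you would check much more than an HTTP 200 – capacity, the synchronization state of the replicated storage, and so on – but the decision structure is the same.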
The cost is far, far lower than a full-blown DR project, and you'll be prepared in case you need it. This buys you time to ingest the fact that your datacenter has been wiped out. I've lived through it; there is so much that must be done immediately – finding a new location, dealing with insurance, digging up purchase documentation, recovering what can be recovered… Having a plan like this one in place is worth your while. Seriously. It's a strangely emotional time, and having a plan is a huge help in keeping people focused.

Simply put, disasters come, often without warning – mine was a flood caused by a broken pipe. We found out when our monitoring equipment fried from being soaked and sent out a raft of bogus messages. The monitoring equipment was six feet above the floor at the time. You can't plan for everything, but to steal and twist a famous phrase, "he who plans for nothing protects nothing."

Cloud is Defined, Right?
At the end of the odd but intriguing movie Existenz, one of the primary characters looks at the other, after killing a bunch of people, and says "We're still in the game, right?" – with the implication that you, the viewer, really don't know if they're still in the Virtual Reality game they were playing. Sometimes, Cloud feels like that. I can just go "We're still in the cloud, right?"

Here we are, it is 2010, the pundits have been hailing cloud for years, and yet there is still a vast gulf of understanding out there about what, exactly, the cloud is. Recently I was involved in a Twitter conversation with Mike Fratto (of Network Computing), Andy Ellis (of Akamai), Lori (of F5, as am I), with occasional input from Dustin Amrhein (of IBM), Greg Knieriemen (of Chi Corp and the InfoSmack podcast), Vanessa Alvarez (of Frost and Sullivan), and Tom Petrocelli (formerly of IP.com), where it became painfully clear that it is indeed not at all settled. Not even amongst such an august group of individuals.

The thing is, nearly two years ago Lori gave "The Last Cloud Definition You'll Ever Need", and laid out what Cloud was and was not. It's a good definition, but it is one that many vendors do not want to acknowledge for a variety of reasons, the most prevalent of which is marketing of their own products. In this conversation, Andy was simply asking "is a Content Delivery Network (CDN) not cloud?" Now you might be quick to say "marketing! He works for Akamai!", but my experience is that he's deeper than that, and not really a marketer… He IS the CSO of Akamai after all, not exactly a marketing position. Also, his question made sense to me. A CDN delivers content across a wide geography on demand. A CDN is billed very much like all cloud services are billed, and many have APIs to manipulate what's out there, how it's delivered, etc.

It didn't make sense to Lori, and she was continuing the conversation with a different set of individuals in that wonderful "Twitter fracture effect" that causes conversations to veer off not just in topic but in participants also; they generally seemed to be in agreement with her assessment. Mike was pretty much in agreement with her also. I spend a lot of time talking cloud with Lori because we're together essentially all of the time, and she's largely focused in that space right now. I don't see it as that clear. If Cloud is or includes Infrastructure-as-a-Service (IaaS), and CDN was the original IaaS product, then it seems to me that it is worth exploring whether it is indeed not cloud just by virtue of being what it is. In the definition I linked to above, CDN is excluded by the simple fact that it is not an application delivery mechanism, but a content delivery mechanism. But that definition would preclude cloud storage from the definition of "what is cloud" also. And I think that cloud storage meets all of the numbered points in the definition, though due to the difference between storing data and delivering applications, point number four is only adhered to by some cloud storage providers.

The NIST Notional Definition of Cloud Computing (MS-Word Doc) is commonly referenced by Cloud aficionados, and also seems to limit "Cloud" to application services, which would imply that there is no "Cloud Storage" – see above for my views on that notion. The problem is that you just can't say (as Brenda Michelson of Elemental Links said while I was writing this blog) "The tubes formerly known as web".
She was joking, and even used the #snark tag, but we do need a definition that goes beyond application delivery, or at least makes cloud storage fit into the application delivery paradigm (heh. I said paradigm in a blog. I'm so 90s) that is currently en vogue. I'm trying to get a handle on the issue, because if you read the SNIA Cloud Storage documentation, it doesn't clear the issue up; it ignores it and says "all these things are cloud storage". Nice, but how does that fit into the current definitions of cloud? And that doesn't even touch Microsoft offering MS-SQL Server on Azure, which is technically SaaS, but smacks of cloud because of who the customer is…

I think that is where we come to the key to defining cloud as it exists today. It does leave some vendors out, but only those aiming at end users, and I'd argue those are cloud-delivered applications. So how about this for a new definition… "IT services designed to interact with and/or take the place of core enterprise IT hardware or software infrastructure". It's short, and no doubt I'll get feedback that will help refine it, but it does encompass the core. This would be inclusive of databases as services; it would be inclusive of what we all think of traditionally as "cloud" – server allocation; it would include cloud storage; and it would rule out applications with end user interfaces that happen to be hosted in cloud or cloud-like environments, because those have never been considered "infrastructure". It would include CDN, but if you're going to put cloud storage in the bucket, you're going to have to accept that CDN is a service that meets the above definition. It fits, it captures the spirit of cloud computing, and it leaves out those IaaS vendors that have been a serious bone of contention for everyone else. Some IaaS vendors can claim "hosted in the cloud", and that would make everyone happy; just don't claim to be "the cloud" if your target user is an end user and not an administrator.

Feedback, conversations, commentary, and even flames are all welcome. Visits from the men in white coats will not be appreciated. And thanks to all those listed; my blog topic for today was not nearly as interesting as this one.

Related Articles and Blogs
Wikipedia Cloud Computing Entry
What Cloud Computing Really Means
Twenty One Experts Define Cloud Computing
Cloud Computing – The Last Definition You'll Ever Need

Whither Cloud Gateways?
Farm tractors and military tanks share an intertwined history that started when some smart person proposed the tracks on some farming equipment as the cross-country tool that tanks needed to get across a rubble- and shell-hole-strewn World War One battlefield. For the ensuing sixty years, improvements in one set of tracks spurred improvements in the other. Early on it was the farm vehicles developing improvements, but through World War Two, and even some today, tanks did most of the developing. That is simply a case of experience. Farmers and farm tractor manufacturers had more experience when tanks were first invented, but the Second World War and the variety of terrain, climate, and usage gave tanks the edge. After World War Two, the Cold War drove much more research money into tank improvements than commercial tractors received, so the trend continued. In fact, construction equipment eventually picked up where farming equipment dropped off. This is no coincidence; bulldozers received a lot of usage in the same wildly varying terrain as tanks during the Second World War. Today, nearly all tracked construction equipment can trace its track and/or road wheel arrangements back to a specific tank (one bulldozer brand, for example, uses a slightly modified version of the wheel system of the LT vz. 35 – Panzer 35(t) in German service – invented in Czechoslovakia in the 1930s. That suspension was itself a modification of an even earlier Vickers tank design).

Bradley AFV tug-o-war with a Farm Tractor

What does all this have to do with cloud gateways? Well, technology follows somewhat predictable patterns, be it cloud and cloud communications or track and suspension systems. Originally, cloud gateways came out a few years back as the solution to making the cloud work for you. Not too long after cloud storage came along, some smart people thought the cloud gateway idea was a good one, and adopted a modified version called Cloud Storage Gateways. The driving difference between the two from the perspective of users was that Cloud Storage was practically useless without a gateway, while the Cloud could be used for application deployment in a much broader sense without a gateway.

So Cloud Storage Gateways like F5's ARX Cloud Extender are a fact of life. Without them, Cloud Storage is just a blob that does not communicate with the rest of your storage infrastructure – including the servers that need to access said storage. With a Cloud Storage Gateway, storage looks and acts the way all of the other IT products out there expect it to work. In the rush, Cloud Gateways largely fell by the wayside. Citrix sells one, and CloudSwitch is making a good business of it (there are more startups than just CloudSwitch, but they seem to be leading the pack), but the uptake seems to be nothing like the Cloud Storage Gateway uptake. And I think that's a mistake. A cloud gateway is the key to cloud interoperability, and every organization needs at least a bare-minimum level of cloud portability, simply so they can point out to their cloud vendor that there are other players in the cloud space should the relationship become unprofitable for the customer. Add to that the ability to secure data on its way to the cloud and back, and Cloud Gateways are hugely important. What I don't know is why uptake and competition in the space seem so slight. My guess would be that organizations aren't attempting to integrate cloud-deployed applications into their architecture in the manner that Cloud Storage must be in order to be used.
Which would scream that Cloud adoption has not actually begun yet. Though it doesn't indicate whether that's because Cloud is being dismissed by decision-makers as a place to host core applications, or just that uptake is slow. I'd be interested in hearing from you if you have more data that I'm somehow missing. It just seems incongruous to me that uptake isn't closer to Cloud usage uptake claims. Meanwhile, security (encryption, tunneling, etc.) can be had from your BIG-IP… But no, I don't think BIG-IP is the reason Cloud Gateway uptake seems so low, or I wouldn't have written this blog. I know some people are using it that way, with LTM-VE on the Cloud side and LTM on the datacenter side, but I have no reason to suspect it is a large percentage of our customer base (I haven't asked, this is pure conjecture).

I'd like to see the two "gateway" products move in cooperative fits and starts until they are what is needed to secure, utilize, and access portable cloud-deployed applications and storage. You decide which is tank and which is tractor, though…

And since we're talking about tanks, at least a little bit, proof that ever-smaller technology is not new in the computer age – The Nazi Goliath tank – Courtesy of militaryphotos.net

Related Blogs:
Cloud Storage Gateways. Short term win, but long term…?
Cloud Storage Gateways, stairway to (thin provisioning) heaven?
Certainly Cirtas! Cloud Storage Gains Momentum
Tiering is Like Tables, or Storing in the Cloud Tier
Cloud Storage Use Models
Useful Cloud Advice, Part One. Storage
Cloud Storage: Just In Time For Data Center Consolidation.
Don MacVittie - F5 BIG-IP IPv6 Gateway Module
If I Were in IT Management Today…