cloud storage gateways
Cloud Storage Gateways. Short term win, but long term…?
In the rush to cloud, there are many tools and technologies out there that are brand new. I've covered a few, though that's nowhere near a complete list, and it's interesting to see what is going on out there from a broad-spectrum view. I have talked a bit about Cloud Storage Gateways here, and I'm slowly becoming a fan of this technology for those who are considering storing in the cloud tier. There are a couple of good reasons to consider these products, and I was thinking about those reasons and their continuing validity. Thought I'd share with you where I stand on them at this time, and what I see happening that might impact their value proposition.

The two vendors I have taken some time to research while preparing this blog for you are Nasuni and Panzura. No doubt there are plenty of others, but I'm writing you a blog here, not researching a major IT initiative. So I researched two of them to have some points of comparison, and leave the in-depth vendor selection research to you and your staff. These two vendors present similar base technology and very different additional feature sets. Both rely heavily upon local caching in the controller box, both work with multiple cloud vendors, and both claim to handle compression.

Nasuni delivers as a Virtual Appliance, includes encryption on your network before transmitting to the cloud, automated cloud provisioning, and caching that has timed updates to the cloud but can perform a forced update if the cache gets full. It presents the cloud storage you've provisioned as a NAS on your end. Panzura delivers a hardware appliance that also presents the cloud as a NAS, works with multiple cloud vendors, handles encryption on-device, and claims global dedupe. I say claims, because "global" has a meaning that is "all", and in their case "all" means "all the storage we know about", not "all the storage you own". I would prefer a different term, but I get what they mean. Like everything else, they can't de-dupe what they don't control. They too present the cloud storage you've provisioned as a NAS on your end, but claim to accelerate CIFS and NFS also.

Panzura is also trying to make a big splash about speeding access to Microsoft SharePoint, but honestly, as a TMM for F5, a company that makes two astounding products that speed access to SharePoint and nearly everything else on the Internet (LTM and WOM), I'm not impressed by SharePoint acceleration. In fact, our SharePoint Application Ready Solution is here, and our list of Application Ready Solutions is here. Those are just complete architectures we support directly, and don't touch on what you can do with the products through Virtuals, iRules, profiles, and the host of other dials and knobs. I could go on and on about this topic, but that's not the point of this blog, so suffice it to say there are some excellent application acceleration and WAN optimization products out there, so this point solution alone should not be a buying criterion.

There are some compelling reasons to purchase one of these products if you are considering cloud storage as a possible solution. Let's take a look at them.

Present cloud storage as a NAS – This is a huge benefit right now, but over time the importance will hopefully decrease as standards for cloud storage access emerge. Even if there is no actual standard that everyone agrees to, it will behoove smaller players to emulate the larger players that are allowing access to their storage in a manner that is similar to other storage technologies.
Encryption – As far as I can see this will always be a big driver. They're taking care of encryption for you, so you can sleep at night as they ship your files to the public cloud. If you're considering them for non-public cloud, this point may still be huge if your pipe to the storage runs over the public Internet.

Local Caching – With current broadband bandwidths, this will be a large driver for the foreseeable future. You need your storage to be responsive, and local caching increases responsiveness; depending upon implementation, cache size, and how many writes you are doing, this could be a huge improvement.

De-duplication – I wish I had more time to dig into what these vendors mean by dedupe. Replacing duplicate files with a symlink is simplest and most resembles existing file systems, but it is also significantly less effective than partial-file de-dupe (see the sketch a bit further down for the difference). Let's face it, most organizations have a lot more duplication lying around in files named Filename.Draft1.doc through Filename.DraftX.doc than they do in completely duplicate files. Check with the vendors if you're considering this technology to find out what you can hope to gain from their de-dupe. This is important for the simple reason that in the cloud, you pay for what you use. That makes de-duplication more important than it has historically been.

The largest caution sign I can see is vendor viability. This is a new space, and we have plenty of history with early entry players in a new space. Some will fold, some will get bought up by companies in adjacent spaces, some will be successful… at something other than Cloud Storage Gateways, and some will still be around in five or ten years. Since these products compress, encrypt, and de-dupe your data, and both of them manage your relationship with the cloud vendor, losing them is a huge risk. I would advise some due diligence before signing on with one – new companies in new market spaces are not automatically a bad bet, but you'll have to explore the possibilities to make sure your company is protected. After all, if they're as good as they seem, you'll soon have more data running through them than you'll have free space in your data center, making eliminating them difficult at best.

I haven't done the research to say which product I prefer, and my gut reaction may well be wrong, so I'll leave it to you to check into them if the topic interests you. They would certainly fit well with an ARX, as I mentioned in that other blog post. Here's a sample architecture that would make "the Cloud Tier" just another piece of your virtual storage directory under ARX, complete with the automated tiering and replication capabilities that ARX owners thrive on. This sample architecture shows your storage going to a remote data center over EDGE Gateway, to the cloud over Nasuni, and to NAS boxes, all run through an ARX to make the client (which could be a server or a user – remember this is the NAS client) see a super-simplified, unified directory view of the entire thing. Note that this is theoretical; to my knowledge no testing has occurred between Nasuni and ARX, and usually (though certainly not always) the storage traffic sent over EDGE Gateway will be from a local NAS to a remote one, but there is no reason I can think of for this not to work as expected – as long as the Cloud Gateway really presents itself as a NAS. That gives you several paths to replicate your data, and still presents client machines with a clean, single-directory NAS that participates in ADS if required.
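Since de-dupe semantics vary so much from vendor to vendor, here is a minimal, purely illustrative sketch of the difference between whole-file and partial-file (block-level) de-dupe. This is not how Nasuni, Panzura, or ARX implement anything; it just shows why whole-file matching misses the Draft1-through-DraftX case, and real partial-file engines typically use content-defined chunking rather than the fixed blocks used here.

```python
# Illustrative only -- not any vendor's dedupe engine.
import hashlib
from collections import defaultdict
from pathlib import Path

def whole_file_duplicates(root):
    """Group files under `root` by a hash of their full contents.

    Only byte-for-byte identical files match, so Filename.Draft1.doc and
    Filename.Draft2.doc (one changed paragraph) look entirely unique."""
    groups = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            groups[digest].append(path)
    # Hashes seen more than once are candidates for replacement with a
    # link to a single stored copy.
    return {h: p for h, p in groups.items() if len(p) > 1}

def block_overlap(path_a, path_b, block_size=4096):
    """Fraction of path_b's fixed-size blocks that also appear in path_a.

    A crude stand-in for partial-file dedupe; production systems usually
    use variable-size, content-defined chunks so an inserted paragraph
    does not shift every block that follows it."""
    def block_hashes(path):
        data = Path(path).read_bytes()
        return [hashlib.sha256(data[i:i + block_size]).hexdigest()
                for i in range(0, len(data), block_size)]
    known = set(block_hashes(path_a))
    blocks_b = block_hashes(path_b)
    return sum(1 for h in blocks_b if h in known) / max(1, len(blocks_b))
```

The point of the comparison is simply that the second measure is the one that matters for the Draft1-through-DraftX pattern, which is where most organizations actually carry their duplication.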
In the sample architecture above, tier one could be NAS Vendor 1, tier two NAS Vendor 2, your replication targets securely connected over EDGE Gateway, and tier three (things you want to save but no longer need to replicate, for example) the cloud as presented by the Cloud Gateway. The Cloud Gateway would arbitrate between actual file systems and whatever idiotic interface the cloud provider decided to present and tell you to deal with, while the ARX presents all of these different sources as a single-directory-tree NAS to the clients, handling tiering between them, access control, etc. And yes, if you're not an F5 shop, you could indeed accomplish pieces of this architecture with other solutions. Of course, I'm biased, but I'm pretty certain the solution would not be nearly as efficient, cool, or let you sleep as well at night.

Storage is complicated, but this architecture cleans it up a bit. And that's got to be good for you. All things considered, the only issue that is truly concerning is failure of a startup cloud gateway vendor. If another vendor takes one over, they'll either support it or provide a migration path; if they are successful at something else, you'll have plenty of time to move off of their storage gateway product. So only outright failure is a major concern.

Related Articles and Blogs
Panzura Launches ANS, Cloud Storage Enabled Alternative to NAS
Nasuni Cloud Storage Gateway
InfoSmack Podcasts #52: Nasuni (Podcast)
F5's BIG-IP Edge Gateway Solution Takes New Approach to Unifying, Optimizing Data Center Access
Tiering is Like Tables, or Storing in the Cloud Tier

Force Multipliers and Strategic Points of Control Revisited
On occasion I have talked about military force multipliers. These are things like terrain and minefields that can make your force able to do its job much more effectively if utilized correctly. In fact, a study of military history is every bit as much a study of battlefields as it is a study of armies. He who chooses the best terrain generally wins, and he who utilizes tools like minefields effectively often does too. Rommel in the desert often used wadis to hide his dreaded 88mm guns, which at the time could rip through any tank the British fielded. For the last couple of years, we've all been inundated with the story of the 300 Spartans who held off an entire army. Of course it was more than just the 300 Spartans in that pass, but they were still massively outnumbered. Over and over again throughout history, it is the terrain and the technology that give a force the edge. Perhaps the first person to notice this trend, and certainly among the first to write a detailed work on the topic, was von Clausewitz. His writing is foundational military theory, and much of it is still relevant today, if you are interested in that type of writing.

For those of us in IT, it is much the same. He who chooses the best architecture and makes the most of available technology wins. In this case, as in a war, winning is temporary and must constantly be revisited, but that is indeed what our job is: keeping the systems in tip-top shape with the resources available. Do you put in the tool that is the absolute best at what it does but requires a zillion man-hours to maintain, or do you put in the tool that covers everything you need and takes almost no time to maintain? The answer to that question is not always as simple as it sounds like it should be. By way of example, which solution would you like your bank to put between your account and hackers? Probably a different one than the one you would like your bank to put in for employee timekeeping.

An 88 in the desert, compliments of WW2inColor

Unlike warfare though, a lot of companies are in the business of making tools for our architecture needs, so we get plenty of options, and most spaces have a happy medium. Instead of inserting all the bells and whistles, vendors inserted the bells and made them relatively easy to configure, or they merged products to make your life easier. When the terrain suits a commander's needs in wartime, the need for force multipliers such as barbed wire and minefields is eliminated, because an attacker can be channeled into the desired defenses by terrain features like cliffs and swamps. The same could be said of your network. There are a few places on the network that are Strategic Points of Control, where so much information (incidentally including attackers, though this is not, strictly speaking, a security blog) is funneled through that you can increase your visibility, level of control, and even implement new functionality. We here at F5 like to talk about three of them: between your users and the apps they access, between your systems and the WAN, and between consumers of file services and the providers of those services. These are places where you can gather an enormous amount of information and act upon that information without a lot of staff effort – force multipliers, so to speak.
When a user connects to your systems, the strategic point of control at the edge of your network can perform pre-application-access security checks, route them to a VPN, determine the best of a pool of servers to service their requests, encrypt the stream (on the front, the back, or both sides), or redirect them to a completely different datacenter or to an instance of the application they are requesting that actually resides in the cloud… The possibilities are endless. When a user accesses a file, the strategic point of control between them and the physical storage allows you to direct them to the file no matter where it might be stored, allows you to optimize the file for the pattern of access that is normally present, and allows you to apply security checks before the physical file system is ever touched; again, the list goes on and on. When an application like replication or remote email is accessed over the WAN, the strategic point of control between the app and the actual Internet allows you to encrypt, compress, dedupe, and otherwise optimize the data before putting it out over your bandwidth-limited, publicly exposed WAN connection.

The first strategic point of control listed above gives you control over incoming traffic and early detection of attack attempts. It also gives you force multiplication with load balancing, so your systems are unlikely to get overloaded unless something else is going on. Finally, you get the security of SSL termination or full-stream encryption. The second point of control gives you the ability to balance your storage needs by scripting movement of files between NAS devices or tiers without the user having to see a single change. This means you can do more with less storage, and support for cloud storage providers and cloud storage gateways extends your storage to nearly unlimited space – depending upon your appetite for monthly payments to cloud storage vendors. The third force-multiplies the dollars you are spending on your WAN connection by reducing the traffic going over it, while offloading a ton of work from your servers because encryption happens on the way out the door, not on each VM.

Taking advantage of these strategic points of control (architectural force multipliers) offers you the opportunity to do more with less daily maintenance. For instance, the point between users and applications can be hooked up to your ADS or LDAP server and be used to authenticate that a user attempting to access internal resources from… say… an iPad… is indeed an employee before they ever get to the application in question. That limits the attack vectors on software that may be highly attractive to attackers. There are plenty more examples of multiplying your impact without increasing staff size or even growing your architectural footprint beyond the initial investment in tools at the strategic point of control. For F5, we have LTM at the Application Delivery Network strategic point of control. Once that investment is made, a whole raft of options can be tacked on – APM, WOM, WAM, ASM, the list goes on again (tired of that phrase for this blog yet?). Since each resides on LTM, there is only one "bump in the wire", but a ton of functionality that can be brought to bear, including integration with some of the biggest names in applications – Microsoft, Oracle, IBM, etc. – adding business value like remote access for devices while multiplying your IT force.
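To make the load-balancing piece of that first point of control concrete, here is a toy sketch of the kind of decision an ADC makes for every request. It is not LTM's algorithm or configuration, just a generic least-connections pick among health-checked pool members, with made-up member names.

```python
# Toy pool-member selection -- illustrative only, not a product implementation.
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    healthy: bool              # result of the most recent health monitor
    active_connections: int

def pick_member(pool):
    """Least-connections selection among healthy members of the pool."""
    candidates = [m for m in pool if m.healthy]
    if not candidates:
        # Nothing healthy locally: this is where a global DNS / multi-site
        # decision would send the user somewhere else entirely.
        raise RuntimeError("no healthy members in pool")
    return min(candidates, key=lambda m: m.active_connections)

pool = [
    Member("app-01", healthy=True,  active_connections=42),
    Member("app-02", healthy=True,  active_connections=17),
    Member("app-03", healthy=False, active_connections=0),
]
print(pick_member(pool).name)   # -> app-02
```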
I recommend that you check it out if you haven't; there is definitely a lot to be gained, and it costs you nothing but a little bit of your precious time to look into it. No matter what you do, looking closely at these strategic points of control and making certain you are using them effectively to meet the needs of your organization is easy and important. The network is not just a way to hook users to machines anymore, so make certain that's not all you're using it for. Make the most of the terrain.

And yes, if you also read Lori's blog, we were indeed watching the same shows and talking about this concept, so it's no surprise our blogs are on similar wavelengths.

Related Blogs:
What is a Strategic Point of Control Anyway?
Is Your Application Infrastructure Architecture Based on the ...
F5 Tech Field Day – Intro To F5 As A Strategic Point Of Control
What CIOs Can Learn from the Spartans
What We Learned from Anonymous: DDoS is now 3DoS
What is Network-based Application Virtualization and Why Do You ...
They're Called Black Boxes Not Invisible Boxes
Service Virtualization Helps Localize Impact of Elastic Scalability
F5 Friday: It is now safe to enable File Upload

It is Never Easy, But There's a Lot of Change Going On.
Every spring I get excited. I live in Wisconsin, which my travels have shown me you may not understand. I have actually been told "that is not your house, there is snow on the ground. All of America is sun and beaches". Well, in Wisconsin, it gets cold. Moscow-style cold. There are a couple of weeks each winter where going out is something you do only after bundling up like a toddler… mittens, hats, coat, another coat, boots… But then spring comes, and once the temperature gets to the point where the snow starts to melt, the sun starts to feel warm again. It's at that point that I start to get that burst of energy, and every year it surprises me. I realize that I was, toward the end of the winter, slowing down. Not work-wise, but home-wise. You can't do too much work outside, and there are days I didn't even break down boxes for recycling because it was too cold in the (unheated) garage. So inside things take precedence. This year it was staining some window frames, helping Lori get her monstrous new fishtank set up, and working on some fun stuff I'd been sitting on.

I register a very similar surprise in IT, even though, just like winter, it is a predictable cycle. The high-tech industry just keeps turning out new ideas, products, and hype cycles.

Black Bear Hibernating – www.bear.org

But this round seems different to me. Instead of a rush of new followed by a predictable lull while enterprises digest the new and turn it into functional solutions, it seems that, even given the global economy, the new just keeps coming. From Server Virtualization to Server Consolidation to Storage Virtualization to Primary Dedupe, through network virtualization and the maturity of load balancers into ADCs, then the adaptation of the best ADCs into tools to manage virtualization sprawl. Throw in Cloud, then Cloud Storage, heap network convergence (with storage networks) onto the pile, and then drop the mobile device bomb… Wow. It's been a run. IT has always held the belief that the only constant is change, but the rate of change seems to have been in high gear over the last several years.

The biggest problem with that is none of this stuff exists in a vacuum, and you don't really get the opportunity to digest any of it and make it an integral part of your architecture if you're doing it all. F5 and several other companies have some great stuff to help you take the bull by the horns, ours being instantiated as what we call Strategic Points of Control, but they too require time and effort. The theory is, of course, that we're going to a better place, that IT will be more adaptable and less fragile. That needs to be in your sights at all times if you are participating in several of these changes at the same time, but also in your sights must be the short term: don't make your IT less adaptable and more fragile today on the promise of making it better in the future. And that's a serious risk if you move too fast. That is a lot of change in your systems, and while I've talked about them individually, an architecture plan (can you tell I was an Enterprise Architect once?) that coordinates the changes you're making and leaves breathing space so you can make the changes a part of your systems is a good idea. I'm not saying drag your feet, but I am saying that the famous saying "He who defends everything defends nothing" has an IT corollary: "He who changes everything risks everything". Do we here at F5 want you to buy our products? Of course we do. We wouldn't make them if we didn't think they rocked.
Do we want you to redesign your network on the fly on a Sunday night, from one end to the other? Not if it risks you failing. We look bad if you look bad because of us. So take your time, figure out which of the many new trends holds the most promise for your organization, prioritize, then implement. Make sure you know what you have before moving on to the next change. Many of you have stable virtualized server environments already, so moving on from there is easier, but many of you do not yet have stability in virtualization. VMware and others make some great tools to help with managing your virtualized environment, but only if you've been in the virtualization game long enough to realize you need them.

Where will we end up? I honestly don't know. For sure with highly virtualized datacenters, and with much shortened lead times for IT to implement new systems. Perhaps we'll end up 100% in the cloud, but there are inherent risks that make 100% doubtful – like outsourcing, you're only as good as the date on your contract. So the future is cloudy, pun intended.

So take your time. I've said it before, and will likely say it again: we're here to help, but we want to help, not help shove you over the cliff. Good vendors will still be around if you delay implementation of some new architectural wonder by six weeks or six months to stabilize the one you just implemented, and the vendors that aren't around? Well, imagine if you'd bought into them. :-) Another old adage that has new meaning at the current rate of change is "Anything worth doing is worth doing right". Of course there will be politics in many of the most recent round of changes – pressure to do it faster – and I can't help you there, other than to suggest you point out that the difference between responsive and reckless is directly related to the pressure applied.

My big kick at the moment is access to cloud storage from your local network. It's a big bang for the buck: whether you're using our ARX Cloud Extender or one of the various cloud storage gateways out there, it gives you a place to move stuff so that you don't have to back it up, but you don't risk losing it either.

Cloud Storage Gateways, stairway to (thin provisioning) heaven?
With thanks to Led Zeppelin for some great lyrics.

There's a sign on the wall
But she wants to be sure
'Cause you know sometimes words have
Two meanings

Since cloud computing has a bit of an identity crisis, and cloud storage is just starting to realize one itself, it should be no surprise to anyone that "cloud storage gateway" has more than one meaning. While they are all a single market, implementation and deployment details make them very distinct products. In such a young market, differentiation is easy, even if selling your differentiation as a plus is not. Some vendors are already attempting to turn product differentiation into market segmentation – the upcoming Cirtas product, for example, is referred to by them as a "Cloud Storage Controller" because they believe that better defines their product, though they acknowledge that the market term "gateway" has caught on.

When she gets there she knows
If the stores are all closed
With a word she can get what she came for

Okay, so you don't quite have that power, but all of these products do offer you a significant bonus in terms of cloud storage in the form of thin provisioning. For several years now you have had the capability to tell your server it had more storage than was actually dedicated to it, and if it ran over what was actually dedicated, more was allocated from a pool. The problem with this model is that you have to be certain you have enough storage to cover the worst reasonable case – what percentage of your servers might request extra storage over the weekend, and how much might they request. Weekend, month, year… whatever your timeframe for buying new storage. The point of thin provisioning is that you'll likely be oversubscribed, but you're taking the risk that the oversubscription will never come due at the same time with one large calamitous bang. I wrote about this scenario and how virtualization has made the risk worse here.

Yes, there are two paths you can go by
But in the long run
There's still time to change
The road you're on

Enter Cloud Storage Gateways. First a bit about cloud providers… They scale up to as much as you need (as long as your payments are covering it anyway), and down as your usage goes down. I won't say all of them fit this pattern, because there are a bewildering number of players looking to make a name in this space, and believe it or not, F5 doesn't pay me to ponder cloud storage or cloud storage gateways, they merely allow me to chat about it, so I'm not taking weeks researching all of this, more like hours. The big players do indeed scale up and down, though, billing you only for actual usage. Now that we have covered that for any who didn't know: the cloud storage gateway handles the intricacies of dealing with various cloud storage providers, and most cache locally and encrypt on the way out. Starting to see the silver lining yet? They give you thin provisioning limited only by how much money you're willing to risk. The current model gives you thin provisioning limited by either how much you're willing to pay to guarantee you have enough disk for the worst case, or the amount of risk you're willing to take on. Cloud storage gateways navigate that mine-laden sea for you and guarantee that your servers will stay up as long as you're willing to foot the bill. Of course that doesn't eliminate planning for you, but it does allow you to move the choke point up and down much more easily.
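To put rough numbers on that oversubscription gamble, here is a back-of-the-envelope sketch. All of the figures are invented for illustration; the only point is the arithmetic of promising more than you physically have and asking whether growth will catch up with the pool before you can expand it.

```python
# Invented numbers, illustrative arithmetic only.
provisioned_per_server_tb = 2.0    # what each server thinks it has
servers = 50
physical_pool_tb = 40.0            # what is actually on the floor

provisioned_tb = provisioned_per_server_tb * servers     # 100 TB promised
oversubscription = provisioned_tb / physical_pool_tb     # 2.5x

def pool_survives(current_used_tb, growth_tb_per_week, weeks_to_add_capacity):
    """Will actual consumption stay inside the physical pool long enough
    for you to buy (or rent) more capacity?"""
    projected = current_used_tb + growth_tb_per_week * weeks_to_add_capacity
    return projected <= physical_pool_tb

print(f"Promised {provisioned_tb:.0f} TB against {physical_pool_tb:.0f} TB "
      f"physical ({oversubscription:.1f}x oversubscribed)")
print(pool_survives(current_used_tb=30.0,
                    growth_tb_per_week=1.5,
                    weeks_to_add_capacity=8))   # False: 42 TB > 40 TB pool
```

With a cloud-backed gateway behind the same mount point, that physical ceiling effectively becomes whatever you are willing to pay for in a given month, which is exactly the choke-point flexibility described above.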
You can eliminate the risk without a significant cash outlay, if that is your desire, as long as you know what it will cost you if your servers all start requesting more and more storage.

And it makes me wonder

(If you just went "Ooohhh Yeahhh-hah" in your head, take a moment to laugh at yourself, it's healthy.) The biggest risk in cloud storage gateways is the one I mentioned in a previous blog. If they are scooped up by cloud storage vendors that suddenly remembered the lingua franca of enterprise IT storage is not SOA, they will surely limit your options on the back-end. One of the strengths of these products is that you can point one at two completely different cloud storage vendors and remind each that, with a flick of a switch, you can be on the other one, so 48-hour response times are not acceptable. That benefit would almost certainly disappear if a cloud storage vendor bought up your cloud storage gateway vendor. Otherwise, the risk is not any larger than any other cloud solution.

Your head is humming and it won't go
In case you don't know
The piper's calling you to join him

Cloud everything is the buzz du jour, and cloud storage with a gateway to make it appear on your network as a NAS is a good idea for tier three, and these vendors are all (I've only spoken to three, so "all" is a bit of a stretch) saying they're getting traction on tier two also, which makes sense for a lot of tier two data. Either way, it's coming your way, and you should consider if it has a space in your datacenter. The idea of truly thin provisioning is a huge one that further removes you from the limitation of monstrous disk arrays. And if thin provisioning with no subscription worries is on your list of things that would help you sleep at night, I suggest you go out and try…

Bu-uying a stairway…
To (thin provisioning) heaven

And I won't even get into what these solutions coupled with the automated tiering of ARX can do for you. That's for another blog.

Related Articles and Blogs
Stairway to Heaven Lyrics
Thin Provisioning Plus VMs - Armageddon in a Virtual Box
Cloud Storage Gateways: Short Term Win, but Long Term…?
More Cloud Storage Gateways Come Out
Show Me The Gateway – Talking Storage to the Cloud

When The Walls Come Tumbling Down.
When horrid disasters strike and both people and corporations are put on notice that they suddenly have a lot more important things to do, will you be ready? It is a testament to man's optimism that, with very few exceptions, we really aren't – not at the personal level, not at the corporate level. I've worked a lot of places, and none of them had a complete, ready-to-rock DR plan. The insurance company I worked at was the closest – they had an entire duplicate datacenter sitting dark in a location very remote from HQ, awaiting need. Every few years they would refresh it to make certain that the standby DC had the correct equipment to take over, but they counted on relocating staff from what would be a ravaged area in the event of a catastrophe, and were going to restore thousands of systems from backups before the remote DC could start running. At the time it was a good plan. Today it sounds quaint. And it wasn't that long ago.

There are also a lot of you who have yet to launch a cloud initiative of any kind. This is not from lack of interest, but more because you have important things to do that are taking up your time. Most organizations are dragging their feet replacing people, and few – according to a recent survey, very few – are looking to add headcount (proud plug that F5 is – check out our careers page if you're looking). It's tough to run off and try new things when you can barely keep up with the day-to-day workloads. Some organizations are lucky enough to have R&D time set aside. I've worked at a couple of those too, and honestly, they're better about making use of technology than those who do not have such policies. Though we could debate if they're better because they take the time, or take the time because they're better.

And the combination of these two items brings us to a possible pilot project. You want to be able to keep your organization online, or be able to bring it back online quickly, in the event of an emergency. Technology is making it easier and easier to complete this arrangement without investing in an entire datacenter and constantly refreshing the hardware to have quick recovery times. Global DNS in various forms is available to redirect users from the disabled datacenter to a datacenter that is still capable of handling the load; if you don't have multiple datacenters, then it can redirect elsewhere – like to virtual servers running in the cloud. ADCs are starting to be able to work similarly whether they are cloud-deployed or DC-deployed. That leaves keeping a copy of your necessary data and applications in the cloud, and cloud storage with a cloud storage gateway (such as the Cloud Extender functionality in our ARX product) allows this to be done with a minimum of muss and fuss. These technologies, used together, yield a DR architecture that looks something like this:

Notice that the cloud extender isn't listed here, because it is useful for getting the data copied, but would most likely reside in your damaged datacenter. Assuming that the cloud provider was one like our partner Rackspace, who does both cloud VMs and cloud storage, this architecture is completely viable. You'll still have to work some things out, like guaranteeing that security in the cloud is acceptable, but we're talking about an emergency DR architecture here, not a long-running solution, so app-level security and functionality to block malicious attacks at the ADC layer will cover most of what you need. AND it's a cloud project.
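The global DNS piece of that picture is worth a small sketch. This is not BIG-IP GTM configuration, just the decision logic in miniature: probe each site, and answer queries with addresses from the first site that is still healthy. Hostnames and addresses are hypothetical.

```python
# Illustrative global-DNS failover logic only; not a product configuration.
import socket

SITES = [
    # (site name, health-probe endpoint, addresses to hand out)
    ("primary-dc", ("app.dc1.example.com", 443), ["192.0.2.10"]),
    ("cloud-dr",   ("app.dr.example.net", 443),  ["198.51.100.20"]),
]

def site_is_up(host, port, timeout=2.0):
    """Crude health monitor: can we complete a TCP connection in time?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def answer_for(fqdn="app.example.com"):
    """Return the addresses of the first healthy site, preferring the primary."""
    for name, probe, addresses in SITES:
        if site_is_up(*probe):
            return name, addresses
    return None, []   # nothing healthy: time for the incident bridge

print(answer_for())
```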
The cost is far, far lower than a full-blown DR project, and you'll be prepared in case you need it. This buys you time to ingest the fact that your datacenter has been wiped out. I've lived through it; there is so much that must be done immediately – finding a new location, dealing with insurance, digging up purchase documentation, recovering what can be recovered… Having a plan like this one in place is worth your while. Seriously. It's a strangely emotional time, and having a plan is a huge help in keeping people focused.

Simply put, disasters come, often without warning – mine was a flood caused by a broken pipe. We found out when our monitoring equipment fried from being soaked and sent out a raft of bogus messages. The monitoring equipment was six feet above the floor at the time. You can't plan for everything, but to steal and twist a famous phrase, "he who plans for nothing protects nothing."

Whither Cloud Gateways?
Farm tractors and military tanks share an intertwined history that started when some smart person proposed the tracks on some farming equipment as the cross-country tool that tanks needed to get across a rubble- and shell-hole-strewn World War One battlefield. For the ensuing sixty years, improvements in one set of tracks spurred improvements in the other. Early on it was the farm vehicles developing improvements, but through World War Two and even somewhat today, tanks did most of the developing. That is simply a case of experience. Farmers and farm tractor manufacturers had more experience when tanks were first invented, but the second world war and the variety of terrain, climate, and usage gave tanks the edge. After World War Two, the Cold War drove much more research money into tank improvements than commercial tractors received, so the trend continued. In fact, construction equipment eventually picked up where farming equipment dropped off. This is no coincidence; bulldozers received a lot of usage in the same wildly varying terrain as tanks during the second world war. Today, nearly all tracked construction equipment can trace its track and/or road wheel arrangements back to a specific tank (one bulldozer brand, for example, uses a slightly modified LT vz. 35 – Panzer 35(t) in German service – wheel system, invented in Czechoslovakia in the 1930s. That suspension was a modification of an even earlier Vickers tank design).

Bradley AFV tug-o-war with a Farm Tractor

What does all this have to do with cloud gateways? Well, technology follows somewhat predictable patterns, be it cloud and cloud communications or track and suspension systems. Originally, cloud gateways came out a few years back as the solution to making the cloud work for you. Not too long after cloud storage came along, some smart people thought the cloud gateway idea was a good one, and adopted a modified version called Cloud Storage Gateways. The driving difference between the two from the perspective of users was that Cloud Storage was practically useless without a gateway, while the Cloud could be used for application deployment in a much broader sense without a gateway.

So Cloud Storage Gateways like F5's ARX Cloud Extender are a fact of life. Without them, Cloud Storage is just a blob that does not communicate with the rest of your storage infrastructure – including the servers that need to access said storage. With a Cloud Storage Gateway, storage looks and acts the way all of the other IT products out there expect it to work. In the rush, Cloud Gateways largely fell by the wayside. Citrix sells one, and CloudSwitch is making a good business of it (there are more startups than just CloudSwitch, but they seem to be leading the pack), but the uptake seems to be nothing like the Cloud Storage Gateway uptake. And I think that's a mistake. A cloud gateway is the key to cloud interoperability, and every organization needs at least a bare-minimum level of cloud portability, simply so they can point out to their cloud vendor that there are other players in the cloud space should the relationship become unprofitable for the customer. Add to that the ability to secure data on its way to the cloud and back, and Cloud Gateways are hugely important. What I don't know is why uptake and competition in the space seem so slight. My guess would be that organizations aren't attempting to integrate cloud-deployed applications into their architecture in the manner that Cloud Storage must be integrated in order to be used.
Which would scream that actual Cloud adoption has not really begun yet, though it doesn't indicate whether that's because Cloud is being dismissed by decision-makers as a place to host core applications, or just that uptake is slow. I'd be interested in hearing from you if you have more data that I'm somehow missing. It just seems incongruous to me that uptake isn't closer to Cloud usage uptake claims. Meanwhile, security (encryption, tunneling, etc.) can be had from your BIG-IP… But no, I don't think BIG-IP is the reason Cloud Gateway uptake seems so low, or I wouldn't have written this blog. I know some people are using it that way, with LTM-VE on the Cloud side and LTM on the datacenter side, but I have no reason to suspect it is a large percentage of our customer base (I haven't asked, this is pure conjecture). I'd like to see the two "gateway" products move in cooperative fits and starts until they are what is needed to secure, utilize, and access portable cloud-deployed applications and storage. You decide which is tank and which is tractor though…

And since we're talking about tanks, at least a little bit, proof that ever-smaller technology is not new in the computer age: The Nazi Goliath tank – Courtesy of militaryphotos.net

Related Blogs:
Cloud Storage Gateways. Short term win, but long term…?
Cloud Storage Gateways, stairway to (thin provisioning) heaven?
Certainly Cirtas! Cloud Storage Gains Momentum
Tiering is Like Tables, or Storing in the Cloud Tier
Cloud Storage Use Models
Useful Cloud Advice, Part One. Storage
Cloud Storage: Just In Time For Data Center Consolidation.
Don MacVittie - F5 BIG-IP IPv6 Gateway Module
If I Were in IT Management Today…

Like "API" Is "Storage Tier" Redefining itself?
There is an interesting bit in high-tech that isn't much mentioned but happens pretty regularly – a good idea gets adapted and moved to new uses, raised a bit in the stack or revised to keep up with the times. The quintessential example of this phenomenon is the progression from "subroutines" to "libraries" to "frameworks" to "APIs" to "Web Services". The progression is logical and useful, but those assembler and C programmers who were first stuffing things into reusable subroutines could not have foreseen the entire spectrum of what their "useful" idea was going to become over time. I had the luck of developing in all of those stages. I wrote assembly routines right before they were no longer necessary for everyday development, and wrote web services/SOA routines for the first couple of years they were around.

Cloud Storage: Just In Time For Data Center Consolidation.
There's this funny thing about pouring two bags of M&Ms into one candy dish. The number of M&Ms is exactly the same as when you started, but now they're all in one location. You have, in theory, saved yourself from having to wash a second candy dish, but the same number of people can enjoy the same number of M&Ms, you'll run out of M&Ms at about the same time, and if you have junior high kids in the crowd, the green M&Ms will disappear at approximately the same rate. The big difference is that fewer people will fit around one candy dish than two, unless you take extraordinary steps to make that one candy dish more accessible. And if the one candy dish is specifically designed to hold one or one and a half bags of M&Ms, well then you're going to need a place to store the excess.

The debate about whether data center consolidation is a good thing or not is pretty much irrelevant if, for any reason, your organization chooses to pursue this path. Seriously, while analysts want to make a trend out of everything these days, there are good reasons to consolidate data centers, ranging from a skills shortage at one location to a hostile regulatory environment at another. Cost savings are very real when you consolidate data centers, though they're rarely as large as you expect them to be in the planning stages, because the work still has to be done, the connections still have to be routed, and the data still has to be stored. You will get some synergies by hosting apps side-by-side that would normally need separate resources, but honestly, a datacenter consolidation project isn't an application consolidation project. It can be, but that's a distinct piece of the pie that introduces a whole lot more complexity than simply shifting loads, and all the projects I've seen with both as a part of them have them in two separate and distinct phases: "let's get everything moved, and then focus on reducing our app footprint".

Lori and the M&Ms of doom.

While F5 offers products to help you with all manner of consolidation problems, this is not a sales blog, so I'll focus on one cloud opportunity that is simply too low-hanging a fruit for you not to be considering it: moving the "no longer needed no matter what" files out to the cloud. I've mentioned this in previous Cloud Storage and Cloud Storage Gateway posts, but in the context of data center consolidation, it moves from the "it just makes sense" category to the "urgently needed" category. You're going to be increasing the workload at your converged datacenter by an unknown amount, and storage requirements will stay relatively static, but you're shifting those requirements from two or more datacenters to one. This is the perfect time to consider your options with cloud storage. What if you could move an entire classification of your data out to the cloud, so you didn't have to care if you were accessing it from a data center in Seattle or Cairo? What if you could move that selection of data out to the cloud and then purposely shift data centers without having to worry about that data? Well you can… And I see this as one of the drivers for Cloud Storage adoption. In general you will want a Cloud Storage Gateway like our ARX Cloud Extender, and using ARX or another rules-based tiering engine will certainly make the initial cloud storage propagation process easier, but the idea is simple enough to sketch in a few lines.
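Here is a minimal sketch of that skim, assuming a hypothetical NAS mount point and an upload_to_cloud() placeholder standing in for whatever your provider or gateway actually exposes. It is illustrative policy logic, not ARX Cloud Extender or any product's behavior; note also that it relies on access times, which some filesystems are mounted not to track.

```python
# Age-based skim of cold files to cloud storage -- illustrative only.
import time
from pathlib import Path

DAYS = 180                                  # "X": agree on this with the data owners
CUTOFF = time.time() - DAYS * 86400

def skim_candidates(root):
    """Yield files under `root` whose last access time is older than X days."""
    for path in Path(root).rglob("*"):
        if path.is_file() and path.stat().st_atime < CUTOFF:
            yield path

def upload_to_cloud(path):
    """Hypothetical placeholder: replace with your provider's or gateway's call."""
    raise NotImplementedError

def migrate(root, dry_run=True):
    eligible_bytes = 0
    for path in skim_candidates(root):
        eligible_bytes += path.stat().st_size
        if not dry_run:
            upload_to_cloud(path)
            path.unlink()                   # or leave a stub, if your gateway wants one
    print(f"{eligible_bytes / 2**30:.1f} GiB eligible to leave the NAS")

migrate("/mnt/nas/projects")                # hypothetical share; dry run by default
```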
Skim off those thousands of files that haven't been accessed in X days and move them to cloud storage, freeing up local space so that maybe you won't need to move or replace that big old NAS system from the redundant data center. X is very much dependent upon your business and even the owning business unit; I would seriously work with the business leaders to set reasonable numbers, and offer them guidance about what it will take (in terms of how many days X needs to be) to save the company from moving or replacing an expensive (and expensive-to-ship) NAS.

While the benefits appear to be short-term – not consolidating the NAS devices while consolidating datacenters – they are actually very long term. They allow you to learn about cloud storage and how it fits into your architectural plans with relatively low-risk data. As time goes on, the number of files (and terabytes) that qualify for movement to the cloud will continue to increase, keeping an escape valve on your NAS growth, and the files that generally don't need to be backed up every month or so will all be hanging off your cloud storage gateway, simplifying the backup process and reducing backup/replication windows.

I would be remiss if I didn't point out the ongoing costs of cloud storage; after all, you will be paying each and every month. But I contend you would be anyway. If this becomes an issue from the business or from accounts payable, it should be relatively easy with a little research to come up with a number for what storage growth costs the company when it owns the NAS devices. The only number available to act as a damper on this cost would be the benefits of depreciation, but that's a fraction of the total in real-dollar benefits, so my guess is that companies within the normal bounds of storage growth over the last five years can show a cost reduction over time without having to include cost-of-money-over-time calculations for "buy before you use" storage. So the cost of cloud being pieced out over months is beneficial, particularly at the prices in effect today for cloud storage.

There will no doubt be a few speed bumps, but getting them out of the way now with this never-accessed data is better than waiting until you need cloud storage and trying to figure it out on the fly. And it does increase your ability to respond to rapidly changing storage needs… which over the last decade have been rapidly changing in the upward direction. Datacenter consolidation is never easy on a variety of fronts, but this could make it just a little bit less painful and provide lasting benefits into the future. It's worth considering if you're in that position – and truthfully, to avoid storage hardware sprawl, even if you're not.

Related Articles and Blogs
Cloud Storage Gateways, stairway to (thin provisioning) heaven?
Certainly Cirtas! Cloud Storage Gains Momentum
Cloud Storage Gateways. Short term win, but long term…?
Cloud Storage and Adaptability. Plan Ahead
Like "API" Is "Storage Tier" Redefining itself?
The Problem With Storage Growth is That No One Is Minding the Store
F5 Friday: F5 ARX Cloud Extender Opens Cloud Storage
Chances Are That Your Cloud Will Be Screaming WAN.

Let's Rethink Our Views of Storage Before It Is Too Late.
When I was in Radiographer (X-Ray Tech) training in the Army, we were told the cautionary tale of a man who walked into an emergency room with a hatchet in his forehead and blood everywhere. As the staff of the emergency room rushed to treat the man's very serious head injury, his condition continued to degrade. Blood everywhere, people rushing to and fro, the X-Ray tech with a portable X-Ray machine trying to squeeze in while nurses and doctors are working hard to keep the patient alive. And all the frenzied work failed. If you've ever been in an ER where a patient dies – particularly one that dies of traumatic injuries rather than long-term illness – it is difficult at best. You want to save everyone, but some people just don't make it. They're too injured, or came to the ER too late, or the precise injury is not treatable in the time available. It happens, but no one is in a good mood about it, and everyone is wondering if they could have done something different. In US emergency rooms at least, it is very rare that a patient dies and the reason lies in failure of the staff to take some crucial step. There are too many people in the room, too much policy and procedure built up, to fail at that level. And part of that policy and procedure was teaching us the cautionary tale.

You see, the tale wasn't over with the death of the patient. The tale goes on to say that the coroner's report said the patient died not of a head injury, but of bleeding to death through a knife wound in his back. The story ends with the warning not to focus on the obvious injury so exclusively that you miss the other things going on with the patient. It was a lesson well learned, and I used it to good effect a couple of times in my eight years in Radiography.

Since the introduction of Hierarchical Storage Management (HSM) many years ago, the focus of many in the storage space has been on managing the amount of data that is being stored on your system, optimizing access times, and ensuring that files are accessible to those who need them, when they need them. That's important stuff; our users count upon us to keep their files safe and serve up their unstructured data in a consistent and reliable manner. At this stage of the game we have automated tiering such as that offered by F5's ARX platform, we have remote storage for some data, we have cloud storage if there is overflow, and there are backups, replications, snapshots, and even some cases of Continuous Data Protection… And all of these items focus on getting the data to users when they want it, in the most reliable manner possible.

But, like our cautionary tale above, it is far too easy to focus on one piece of the puzzle and miss the rest. The rest is that tons of your unstructured data is chaff. Yes indeed, you've got some fine golden grains of wheat that you are protecting, but it is a common misperception today that, to do so, you have to protect the chaff too. It's time for you to start pushing back, perhaps past time. The buildup of unnecessary files is costing the organization money and making it more difficult to manage the files that really are important to the day-to-day running of your organization. My proposal is simple: tell business leaders to clean up their act. Only keep what is necessary; stop hoarding files that were of marginal use when created and are of negligible or no use today.
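If you want to walk into that conversation with data instead of assertions, a staleness report is easy to generate. Here is a minimal sketch, assuming a hypothetical home share layout where each top-level folder belongs to a user or business unit; it is illustrative only, and the two-year threshold is just an example to negotiate from.

```python
# Illustrative staleness report -- one line per top-level owner folder.
import time
from collections import defaultdict
from pathlib import Path

STALE_DAYS = 730                       # e.g. "not touched in two years"
CUTOFF = time.time() - STALE_DAYS * 86400

def staleness_report(share_root):
    """Total stale bytes per top-level folder (per user or business unit)."""
    totals = defaultdict(int)
    root = Path(share_root)
    for path in root.rglob("*"):
        if path.is_file():
            st = path.stat()
            if max(st.st_mtime, st.st_atime) < CUTOFF:
                owner = path.relative_to(root).parts[0]
                totals[owner] += st.st_size
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for folder, size in staleness_report("/mnt/nas/home"):    # hypothetical share
    print(f"{folder:30s} {size / 2**30:8.1f} GiB untouched for {STALE_DAYS}+ days")
```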
We have treated storage as an essentially unlimited resource for long enough; it is time to say "well yes, but each disk we add to the storage hierarchy increases costs to the organization". Meet with business leaders and ask them to assign people to go through files. If your organization is like ones I've worked at, when someone leaves, their entire user folder is kept, almost like a gravestone. Not necessarily touched, just kept. Most of those files aren't needed at all, and it becomes obvious after a couple of months which those are. So have your business units clean up after themselves. I've said it before and I'll say it again: IT is not in a position to decide what stays and what goes; only those deeply involved in the running of that bit of the business can make those calls. The other option is to use whatever storage tiering mechanism you have to shuffle them off to neverland, but again, do you want a system making permanent delete decisions about a file that may not have been touched in two years but that (perhaps) the law requires you keep for seven? You can do it, but it will always be much better to have users police their own area, if you can.

While focused on availability of files, don't forget to deal with deletion of unneeded information. And there is a lot of it out there, if the enterprises I'm familiar with are any indication. Recruit business leaders; maybe take them a sample that shows them just how outdated or irrelevant some of their unstructured data is ("the football pool for the 1997 season… is that necessary?" is a good one). Unstructured storage needs are going to continue to grow, mitigated by tiering, enhanced resource utilization, compression, and dedupe, but why bother deduping or even saving a file that was needed for a short time and is now just a waste of space?

No, no it won't be easy to recruit such help. The business is worried about tomorrow, not last year. But convincing them that this is a necessary step toward saving money for more projects tomorrow is part of what IT management does. And if you can convince them, you'll see a dramatic savings in space that might put off more drastic measures. If you can't convince them, then you'll need a way to "get rid of" those files without getting rid of them. Traditional archival storage or a Cloud Storage Gateway are both options in that case, but it's best to just recruit the help cleaning up the house.