Tiering

Tiering is Like Tables, or Storing in the Cloud Tier
We take tables for granted. Really take them for granted. We cover them with our stuff, we sit down to eat at them, we put them in front of the TV to hold our computers, we put them pretty much everywhere, and we use them for everything from holding collections of important papers we will eventually get around to sorting to serving as workbenches to hold stuff down while we saw. In Lori's and my house alone I could fill a blog with the things we use tables for. And yet we never see them when we're not interacting with them. Unless we bought really cheap ones, or have gone literally years without maintaining them, or put them together wrong when we first got them, we never fear that they will fall apart randomly on us, and we never fear that they just won't be there tomorrow. We interact with them on a daily basis, we put them to uses they were not designed for, and still we don't worry that they won't be there. And while tables have been specialized a zillion different ways, we interact with them much the same way no matter which style a given table is.

That is exactly where we need our storage to be moving forward. I should not have to care about NAS vs. SAN anymore – we're in the twenty-first century, after all – and storage is as important to our systems as tables are to our lives. In many ways they serve the same purpose: a place to store stuff, a place to drop stuff temporarily while we do something else… You get the analogy.

Let us face basic facts here: an Open Source group (or ten) was able to come up with a way to host both proprietary and Open Source operating systems on the same hardware, determining what to use when – and that's not even touching on the functionality of USB auto-detection – so why is it that you can't auto-detect whether my share is running CIFS or NFS, or for that matter, in an IP world, iSCSI?

For all of my "use caution when approaching cloud" writing – which is mostly to offset the "everything will be in the cloud yesterday!" crowd – I do see one bit that would drive me to look closer: RAIN (Redundant Array of Independent Nodes) plus a cloud gateway like that offered by Nasuni gives you highly redundant storage without having to care how you access it. Yes, there are drawbacks to this method of access, but the gateway makes it look like a NAS, meaning you can treat your cloud the same as you treat your NAS boxes. No increase in complexity, but access to essentially unlimited archival storage… Sounds like a plan to me.

There are some caveats, of course. You'd need to put anything highly proprietary or business critical on disks that weren't copied out to the cloud, at least for now, since security is absolutely less certain than within your data center – and anyone who argues that point is likely selling cloud storage services. There aren't other people accessing data on your SAN or NAS boxes, only employees. In the cloud, others are sharing your space. People you don't know. That carries additional security risk. End of debate.

But with a File Virtualization product like our ARX, you could easily position proprietary or sensitive info on disk that has no third tier, while everything else is on disk that has three tiers – primary, secondary, and tertiary – with the tertiary being provisioned from the cloud through a box like the Nasuni gateway.
Of course, storage has some rocket science bits, and thus it is not guaranteed that ARX works with Nasuni… Since they're a newer player and I've not heard anyone else propose this type of solution, I'm guessing our test team hasn't yet run compatibility testing with them – or even considered doing so. Though if the LAN side of the gateway is standards-based, there is no reason ARX shouldn't just plug-and-play with them.

Why would you bother? Simple. The tertiary tier could be your offsite backup for less critical data. No need to build a data center, no need to put a bunch of custom cloud interfaces into place – farm it off to the cloud and forget about it. No worries about transporting it to an abandoned mine… or that they'll misplace it. Then you have your infrequently accessed stuff on slower media (it's in the cloud; latency alone says slower), and it need not be backed up. That's a huge chunk of your data that is automatically cared for.

Just blue-skying how you can have your table and eat at it too: it sure would be cool if cloud started making our storage standardized. It's 2010, and we shouldn't be excited that major vendors are finally implementing unified look-and-feel management interfaces for NAS/SAN, so I'm looking for what will be truly exciting instead of a decade or more overdue. This might just be it. If you could back up a huge chunk of your unstructured data simply with a shadow copy or robocopy and be as assured as possible that it was protected simply by nature of how RAIN technology works… and you could do that without having to work through a specific cloud API, and you could do that through a device that would locally cache frequently accessed files until you could move them off to alternate storage… Well, that wouldn't be a panacea, but it would certainly be a step in that direction.

Lori, ever the cloud buzzword manager, immediately coined it "the Cloud Tier" when I was discussing this with her… So I herewith dub it.
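To make that three-tier picture concrete, here is a minimal sketch in Python of the kind of placement rule being described. To be clear, real ARX rules are configured in the product itself rather than scripted like this, and the tier names, age thresholds, and sensitive-share paths below are all assumptions for illustration.

```python
import os
import time

# Tier names, thresholds, and sensitive paths are illustrative assumptions.
TIER_PRIMARY, TIER_SECONDARY, TIER_CLOUD = "primary", "secondary", "cloud"

# Hypothetical shares whose contents must never leave the data center.
SENSITIVE_ROOTS = ("/shares/finance", "/shares/legal")

def choose_tier(path: str, to_secondary_days: int = 90,
                to_cloud_days: int = 365) -> str:
    """Pick a tier from last-modified age, pinning sensitive data on-premise."""
    age_days = (time.time() - os.path.getmtime(path)) / 86400
    if path.startswith(SENSITIVE_ROOTS):
        # Sensitive info gets no third tier: secondary at most.
        return TIER_SECONDARY if age_days > to_secondary_days else TIER_PRIMARY
    if age_days > to_cloud_days:
        return TIER_CLOUD  # tertiary, provisioned from the cloud via the gateway
    return TIER_SECONDARY if age_days > to_secondary_days else TIER_PRIMARY
```

The point of the sketch is the exclusion list: everything qualifies for demotion by age, but the sensitive shares simply have no cloud tier to fall into.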
Table pictures from www.treehugger.com, Array picture CC-BY-SA Michael Moll

Graduating Your Storage

Lori's and my youngest daughter graduated from high school this year, and her class chose one of the many good Vince Lombardi quotes for the theme of their graduation: "The measure of who we are is what we do with what we have." Those who know me well know that I'm not a huge football fan (don't tell my friends here in Green Bay that… the stadium can hold roughly half the city's population, and they aren't real friendly to those who don't join in the frenzy), but Vince Lombardi certainly had a lot of great quotes over the course of his career, and I am a fan of solid quotes. This is a good example of his ability to say things short and to the point. This is the point where I say that I'm proud of our daughter – for a lot more than simply making it through school – and wish her the best of luck in that rollercoaster ride known as adult life.

About the same time as our daughter was graduating, Lori sent me a link to this Research And Markets report on storage usage at High Performance Computing sites. I found it to be very interesting, because HPC sites are generally on the larger end of storage environments, and are where the rubber really hits the road in terms of storage performance and access times. One thing that stood out was the large percentage of disk that is reported as DAS. While you know there's a lot of disk sitting in servers underutilized, I would have expected the age of virtualization to have used a larger chunk of that disk with local images and more swap space for the multiple OS instances. Another thing of interest was that NAS and SAN are about evenly represented. Just a few years ago, that would not have been true at all. Fibre Channel has definitely lost some ground to IP-based storage if they're about even in HPC environment deployments. What's good for some of the most resource-intensive environments on earth is good for most enterprises, and I suspect that NAS has eclipsed SAN in terms of sheer storage space in the average enterprise (though that's conjecture on my part, not anything from the report).

And that brings us back to the Vince Lombardi quote. NAS disk space is growing. DAS disk space is still plentiful. The measure of the service your IT staff delivers will be what you do with what you have. And in this case, what you have is DAS disk not being used and a growing number of NAS heads to manage all that NAS storage. What do you do with that? Well, you do what makes the most sense. In this case, storage tiering comes to mind – but DAS isn't generally considered a tier, right? It is if you have file virtualization (also called directory virtualization) in place. Seriously. By placing all that spare DAS into the directory tree, it is available as a pool of resources to service storage needs, and by utilizing automated, rule-based tiering, what is stored there can be tightly controlled by tiering rules so that you are not taking over all of the available space on the DAS, and things are stored in the place that makes the most sense based upon modification and/or access times.

With tiering and file virtualization in place, you have a solution that can utilize all that DAS, and an automated system to move things to the place that makes the most sense, as the sketch below illustrates. While you're at it, move the rest of the disk into the virtual directory, and you can run backups off the directory virtualization engine, rather than set them up for each machine. You can even create rules to copy data off to slow disk and back it up from there, if you like.
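As a rough illustration of what such a rule might do under the covers, here is a sketch that demotes long-untouched files from a NAS share to a spare DAS pool while respecting a cap on DAS usage. The mount points, threshold, and cap are hypothetical, and a real file virtualization engine would do this transparently behind the virtual directory rather than with visible moves.

```python
import os
import shutil
import time

# Hypothetical mount points and thresholds; a real file virtualization
# engine applies rules like this behind the virtual directory tree.
NAS_SHARE = "/mnt/nas/projects"
DAS_POOL = "/mnt/das/tier2"
DAS_MAX_USE = 0.80   # never consume more than 80% of the spare DAS
STALE_DAYS = 180     # demote files untouched for six months

def das_has_room() -> bool:
    usage = shutil.disk_usage(DAS_POOL)
    return usage.used / usage.total < DAS_MAX_USE

def demote_stale_files() -> None:
    """Move long-untouched files from the NAS share to the spare DAS pool."""
    cutoff = time.time() - STALE_DAYS * 86400
    for root, _dirs, files in os.walk(NAS_SHARE):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getatime(src) < cutoff and das_has_room():
                dst = os.path.join(DAS_POOL, os.path.relpath(src, NAS_SHARE))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                # Users keep seeing the file at its old path only because
                # the virtualization layer hides the physical move.
                shutil.move(src, dst)
```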
And with the direction things are headed, throw in an encrypting Cloud Storage Gateway like our ARX Cloud Extender, and you have a solution that utilizes your internal DAS and NAS both intelligently and to the maximum, plus a gateway to Cloud storage for overflow, Tier N, or archival storage… depending upon how you're using cloud storage. Then you are doing the most with what you have – and setting up an infinitely expandable pool to cover for unforeseen growth.

All of the above makes your storage environment more rational, improves utilization of DAS (and in most cases NAS), retains your files with their names intact, and moves unstructured data to the storage that makes the most sense for it. There is very little not to like. So check it out. We have ARX; other vendors offer their solutions – though ARX is the leader in this space, so I don't feel I'm pandering to say you'll find us a better fit.
Let's Rethink Our Views of Storage Before It Is Too Late.

When I was in Radiographer (X-Ray Tech) training in the Army, we were told the cautionary tale of a man who walked into an emergency room with a hatchet in his forehead and blood everywhere. As the staff of the emergency room rushed to treat the man's very serious head injury, his condition continued to degrade. Blood everywhere, people rushing to and fro, the X-Ray tech with a portable X-Ray machine trying to squeeze in while nurses and doctors worked hard to keep the patient alive. And all the frenzied work failed.

If you've ever been in an ER where a patient dies – particularly one that dies of traumatic injuries rather than long-term illness – it is difficult at best. You want to save everyone, but some people just don't make it. They're too injured, or came to the ER too late, or the precise injury is not treatable in the time available. It happens, but no one is in a good mood about it, and everyone wonders if they could have done something different. In US emergency rooms at least, it is very rare that a patient dies because the staff failed to take some crucial step. There are too many people in the room, too much policy and procedure built up, to fail at that level. And part of that policy and procedure was teaching us the cautionary tale. You see, the tale wasn't over with the death of the patient. The tale goes on to say that the coroner's report found the patient died not of a head injury, but of bleeding to death through a knife wound in his back. The story ends with the warning not to focus on the obvious injury so exclusively that you miss the other things going on with the patient. It was a lesson well learned, and I used it to good effect a couple of times in my eight years in Radiography.

Since the introduction of Hierarchical Storage Management (HSM) many years ago, the focus of many in the storage space has been on managing the amount of data stored on your system, optimizing access times, and ensuring that files are accessible to those who need them, when they need them. That's important stuff; our users count upon us to keep their files safe and serve up their unstructured data in a consistent and reliable manner. At this stage of the game we have automated tiering such as that offered by F5's ARX platform, we have remote storage for some data, we have cloud storage if there is overflow, there are backups, replications, snapshots, and even some cases of Continuous Data Protection… And all of these items focus on getting the data to users when they want it in the most reliable manner possible.

But, like our cautionary tale above, it is far too easy to focus on one piece of the puzzle and miss the rest. The rest is that tons of your unstructured data is chaff. Yes indeed, you've got some fine golden grains of wheat that you are protecting, but it is a common misperception that to do so you have to protect the chaff too. It's time for you to start pushing back – perhaps past time. The buildup of unnecessary files is costing the organization money and making it more difficult to manage the files that really are important to the day-to-day running of your organization. My proposal is simple. Tell business leaders to clean up their act. Only keep what is necessary; stop hoarding files that were of marginal use when created, and negligible or no use today.
We have treated storage as an essentially unlimited resource for long enough; it is time to say, "well yes, but each disk we add to the storage hierarchy increases costs to the organization." Meet with business leaders and ask them to assign people to go through files. If your organization is like ones I've worked at, when someone leaves, their entire user folder is kept – almost like a gravestone. Not necessarily touched, just kept. Most of those files aren't needed at all, and it becomes obvious after a couple of months which those are. So have your business units clean up after themselves. I've said it before, I'll say it again: IT is not in a position to decide what stays and what goes; only those deeply involved in the running of that bit of the business can make those calls. The other option is to use whatever storage tiering mechanism you have to shuffle them off to neverland, but again, do you want a system making permanent delete decisions about a file that may not have been touched in two years but (perhaps) the law requires you keep for seven? You can do it, but it will always be much better to have users police their own area, if you can.

While focused on availability of files, don't forget to deal with deletion of unneeded information. And there is a lot of it out there, if the enterprises I'm familiar with are any indication. Recruit business leaders – maybe take them a sample that shows them just how outdated or irrelevant some of their unstructured data is. "The football pool for the 1997 season… Is that necessary?" is a good one. Unstructured storage needs are going to continue to grow, mitigated by tiering, enhanced resource utilization, compression, and dedupe – but why bother deduping or even saving a file that was needed for a short time and is now just a waste of space?

No, it won't be easy to recruit such help. The business is worried about tomorrow, not last year. But convincing them that this is a necessary step to saving money for more projects tomorrow is part of what IT management does. And if you can convince them, you'll see a dramatic savings in space that might put off more drastic measures. If you can't convince them, then you'll need a way to "get rid of" those files without getting rid of them. Traditional archival storage or a Cloud Storage Gateway are both options in that case, but it is best to just recruit the help cleaning up the house.
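If you want to hand business leaders that sample, a report is easy to generate. Here is a minimal sketch that lists the longest-untouched files on a share – note that it only reports and never deletes, since that call belongs to the business. One caveat: many filesystems mount with noatime, in which case last-modified time is a more trustworthy signal than the last-access time used here.

```python
import os
import time
from datetime import datetime

def stale_file_report(share: str, years: int = 2, limit: int = 50) -> None:
    """Print the longest-untouched files on a share, oldest first.

    Report only -- deleting is a business decision, not an IT one.
    """
    cutoff = time.time() - years * 365 * 86400
    stale = []
    for root, _dirs, files in os.walk(share):
        for name in files:
            path = os.path.join(root, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # vanished or unreadable; skip it
            if st.st_atime < cutoff:
                stale.append((st.st_atime, st.st_size, path))
    stale.sort()  # oldest first
    for atime, size, path in stale[:limit]:
        day = datetime.fromtimestamp(atime).strftime("%Y-%m-%d")
        print(f"{day}  {size / 1e6:9.1f} MB  {path}")

# e.g. stale_file_report("/mnt/nas/departments/marketing")
```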
Boxes and Sorting. How Tiering Helps

When you're going through your basement, attic, or garage and reorganizing, you move things from box to box, shuffle locations of boxes, buy better boxes to hold things that are more precious, and take steps to see to their safety by keeping boxes off the floor… There is an entire sorting mechanic going on that you are likely hardly even thinking about.

Related Articles and Blogs

Give Your Unstructured Data the Meyers-Briggs
The Problem With Storage Virtualization Is That No One Is Minding The Store
The State Of Storage Is Not The State Of Your Storage
Tiering Is Like Tables, Or Storing In The Cloud Tier
Better Networking Through Storage Virtualization
F5 Friday: F5 ARX Cloud Extender Opens Cloud Storage

Bridging the gap between data access and cloud storage to enable a critical storage strategy: tiering.

There's a disconnect between the way in which we access files and the way in which cloud storage providers offer us access to files stored "in the cloud". We use well-established file system access methods – CIFS, SMB, NFS – while they provide access via web-based standards, a la HTTP, SOAP, etc. That means it is difficult to actually leverage cloud storage services directly. There's a gap between implementations that needs to be addressed if we're going to leverage cloud storage in ways that make sense, for example as part of a larger storage tiering strategy. Such a strategy is increasingly important, and storage tiering was recently identified at the Gartner Data Center conference as one of the "next big things":

#GartnerDC Major IT Trend #2 is: 'Big Data - The Elephant in the Room'. Growth 800% over next 5 years - w/80% unstructured. Tiering critical @ZimmerHDS Harry Zimmer

Enabling tiering to a cloud storage service, however, requires some kind of intermediary that bridges the gap between traditional access protocols and cloud access protocols. F5 announced just such a solution with its ARX Cloud Extender. I'll let our resident storage expert, Don MacVittie, fill you in on the details. Happy Tiering!

Anyone who reads my blog knows that I'm a fan of Cloud Storage Gateways. Whatever possessed cloud storage vendors to implement an interface that is utterly foreign to the systems their target market uses escapes me. As I said on Cloudchasers, Cloud Storage Gateways make Cloud Storage into Useful Storage. And though I have repeatedly said I don't have a dog in this race, it turns out that now I do. F5 has released ARX Cloud Extender (ACE), a set of products that allow you to access cloud services as if they were local NAS devices, and that work with certified partners' Cloud Storage Gateways to increase the benefits of these applications. That means if you have an ARX in place to virtualize your file systems, that virtualization is now extended to "the cloud tier".

Different customers are using cloud storage in different ways, with most using it for backup and archival storage, but some actually using it as primary storage. And of course there are some in between. ACE allows you to make the jump to cloud storage in a controlled manner, writing rules to determine which files are migrated to the cloud and how they are migrated, all while making it appear that they are on your local systems. This is huge if you need to be able to see your archival data, but don't want to spend your expensive local disk storing little-used files. You now have tiering, capacity planning, file/directory replication, and file migration extended to the "Cloud Tier". With support for other vendors' cloud storage gateways, you can choose the architecture that best suits your enterprise's needs and bridge the gap between pay-up-front and pay-as-you-go storage.

In short, you can set up rules in your ARX to shuffle files from tier one to tier two as your organization's needs decrease, and then eventually off to the Cloud tier for long-term storage. Then, should the file see more frequent access, rules can automatically move it right back out to tier two and on to tier one. If your needs are different, the rules can behave differently. It's all in what works best for you, and how your organization wishes to use Cloud Storage.
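To illustrate that demote-and-promote behavior, here is a toy model of such a rule in Python. Actual ARX rules are defined in the product's own configuration, not in code like this; the tier names, thresholds, and access statistics below are invented purely to show the shape of the logic.

```python
from dataclasses import dataclass

@dataclass
class FileStats:
    days_since_access: int
    accesses_last_30d: int

def target_tier(f: FileStats, current: str) -> str:
    """Toy demote/promote rule: files drift cloudward as they go quiet,
    and renewed activity pulls them back toward fast disk."""
    if f.accesses_last_30d >= 5:
        return "tier1"                    # hot again: promote all the way
    if current == "cloud" and f.accesses_last_30d > 0:
        return "tier2"                    # warming up: pull back on-premise
    if f.days_since_access > 365:
        return "cloud"                    # long quiet: long-term storage
    if f.days_since_access > 90:
        return "tier2"
    return current

# target_tier(FileStats(days_since_access=400, accesses_last_30d=0), "tier2")
# -> "cloud"
```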
So check it out. Chances are that your organization is ready to take advantage of cloud computing, but even if not, ARX is still a primo file virtualization engine to help you keep your unstructured data under control.

Related blogs & articles:

Disk May Be Cheap but Storage is Not
All F5 Friday Posts on DevCentral
WILS: Controllers and Gateways
Let's Rethink Our Views of Storage Before It Is Too Late.
Certainly Cirtas! Cloud Storage Gains Momentum
Tiering is Like Tables, or Storing in the Cloud Tier
Given Enough Standards, Define Anarchy
Enterprise-Class File Virtualization
Swapping Rack Space for Rack Space
F5 Friday: ARX VE Offers New Opportunities

Virtualization has many benefits in the data center – some that aren't necessarily about provisioning and deployment.

There are some things on your shopping list that you'd never purchase sight unseen or untested. Houses, cars, even furniture. So-called "big ticket" items that are generally expensive enough to be viewed as "investments" rather than purchases are rarely acquired without the customer physically checking them out. Except in IT. When it comes to hardware-based solutions there has often been the opportunity for what vendors call "evaluation units", but these are guarded by field and sales engineers as if they're gold from Fort Knox. And oftentimes, like cars and houses, the time in which you can evaluate them – if you're lucky enough to get one – is very limited. That makes it difficult to really test out a solution and determine if it's going to fit into your organization and align with your business goals.

Virtualization is changing that. While some view virtualization in light of its ability to enable cloud computing and highly dynamic architectures, there's another side to virtualization that is just as valuable if not more so: evaluation and development. It's been a struggle, for example, to encourage developers to take advantage of application delivery capabilities when they're not allowed to actually test and play around with those capabilities in development. Virtual editions of application delivery controllers make that possible – without the expense of acquisition and the associated administrative costs that go with it. Similarly, it's hard to convince someone of the benefits of storage virtualization without giving them the chance to actually try it out. It's one thing to write a white paper or put up a web page with a lot of marketing-like speak about how great it is, but as they say, the proof is in the pudding. In the implementation.

Not every solution is a good fit for production-level virtualization. It's just not – for performance or memory or reliability reasons. But for testing and evaluation purposes, it makes sense for just about every technology that fits in the data center. So it was, as Don put it, "very exciting" to see our "virtual edition" options grow with the addition of ARX VE, F5's storage virtualization solution. It just makes sense that, like finding "your chair", you test it out before you make a decision. From automated tiering and shadow copying to unified governance, storage virtualization like ARX provides some tangible benefits to the organization that can address some of the issues associated with the massive growth of data in the enterprise. You may recall that storage tiering was recently identified at the Gartner Data Center conference as one of the "next big things", primarily due to the continued growth of data:

#GartnerDC Major IT Trend #2 is: 'Big Data - The Elephant in the Room'. Growth 800% over next 5 years - w/80% unstructured. Tiering critical @ZimmerHDS Harry Zimmer

Virtualization gives us at F5 the opportunity to give you a chance to test drive a solution in ARX VE that addresses that critical need. Don, who was won over to the side of "storage virtualization is awesome" only after he actually tried it out himself, has more details on our latest addition to our growing stable of virtualized offerings.

INTRODUCING ARX VE

As we here at F5 grow our stable of Virtual Edition products, we like to keep you abreast of the latest and greatest releases available to you.
Today's Virtual Edition discussion is about ARX VE Trial, a completely virtualized version of our ARX File/Directory Virtualization product. ARX has huge potential in helping you get NAS sprawl under control, but until now you had to either jump through hoops to get a vendor trial into place, or pay for the product before you fully understood how it worked in your environment. Not any more. ARX VE Trial is free to download and license, includes limited support, and is fully functional for testing purposes. If you have VMware ESX 4.0 update 2 or VMware ESX 4.1, then you can download and install the trial for free. There's no time limit on how long the system can run, but there is a limit on the number of NAS devices it can manage and the number of shares it can export. It is plenty adequate for the testing you'll want to do to see how it performs, though.

Now you can see what heterogeneous tiering of NAS devices can do for you; you can test out shadow copying for replication and moving users' data stores without touching the desktop. You can see how easy managing access control is when everything is presented as a single massive file system. And you can do all of this (and more) for free. As NAS-based storage architectures have grown, management costs have increased simply due to the amount of disk and number of arrays/shares/whatever under management. This is your chance to push those costs back in the other direction. Or at least your chance to find out if ARX will help in your specific environment without having to pay up front or work through a long process to get a test box.

You can get your copy of ARX VE (or FirePass VE or LTM VE) at our trial download site.
Cloud Storage at Computer Technology Review

Since I've mentioned it a couple of times, I thought I'd offer you all a link to my article in Computer Technology Review about The Cloud Tier. The point was to delve into the how/when/where/why of cloud storage usage. While there is a lot to say on that topic and the article was of limited word count, I think the key point is that cloud storage can fit into your existing architecture with minimal changes and then be utilized to service the needs of the business in a better/faster/more agile manner.

Normally I keep my blogs relatively vendor-independent. This one talks mostly about our ARX solution because the article referenced was vendor-independent, and I do think we've got some cool enabling stuff, so sometimes you just gotta talk about the cool toys. No worries – if you're not an ARX customer and won't be, there's still info in here for you.

For our part, F5 ARX is our link into that enabling story. Utilizing ARX (or another NAS virtualization engine), you can automatically direct qualifying files to the cloud, while pulling files that are accessed or modified frequently back into your tier one or tier two NAS storage. This optimizes storage usage while keeping the files available to your users. We call that point between CIFS/NFS clients and the storage they use one of our Strategic Points of Control – a spot where you can add value if you have the right tools in place. This one is a big one, because files that move to the cloud can appear to users not to have moved at all: ARX has a file virtualization engine that shows them the file in a virtual directory structure. Where it is physically stored behind that virtual directory structure is completely unrelated to where the user thinks it is, and IT can move the file as needed – or write rules to have the ARX move files as needed. The only difference the user might notice is that the files they use every day are faster to open, and the ones they never or almost never access are slower. That's called responsive, and it makes IT look good.

It also means that you can say "We're using the cloud, check out our storage", and instead of planning for a huge outlay to put another array in, you can plan small monthly payments to cover the cost of storage in use. That's one of the reasons I think cloud storage for non-critical data will take off relatively quickly compared to most cloud technologies. The benefits are obvious and have numbers associated with them; they reduce the amount of "excess capacity" you have to keep in the datacenter to account for growth, while shifting the cost of that excess capacity to much smaller monthly payments.

What I don't know is how the long-term cost of cloud storage will compare to the long-term cost of purchasing the same storage. And I don't think anyone can know what that graph looks like at this time. Cloud storage is new enough that it is safe to say the costs are anything but stabilized. Indeed, the race to the bottom, price-per-terabyte-wise, early in cloud storage's growth nearly guarantees that prices will go up over the long term – but how far up, we just don't have the information to figure out yet. Operations costs for cloud storage (from the vendor perspective) are new, and while the cost of storage over time in the datacenter is a starting point, the needs of administering cloud storage are not the same as enterprise storage, so it will be interesting to see how much the long-term operation of a cloud storage vendor impacts prices over the next two to five years. Don't let me scare you off.
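For readers who want to start sketching that graph themselves, here is a simple break-even calculation. Every number in the example is a placeholder – plug in your own array quote and cloud pricing.

```python
def breakeven_months(array_cost: float, annual_maintenance: float,
                     tb_stored: float, cloud_price_per_tb_month: float) -> float:
    """Months until cumulative cloud fees exceed buying the array outright."""
    monthly_cloud = tb_stored * cloud_price_per_tb_month
    monthly_maint = annual_maintenance / 12
    if monthly_cloud <= monthly_maint:
        return float("inf")  # cloud never catches up to maintenance alone
    # Solve: array_cost + m * maint = m * cloud, for m months.
    return array_cost / (monthly_cloud - monthly_maint)

# A made-up comparison: a $60,000 array with $6,000/year maintenance
# versus 20 TB in the cloud at $150 per TB-month:
# breakeven_months(60_000, 6_000, 20, 150) -> 24.0 months
```

The sketch ignores power, floor space, admin time, and future price changes – which is exactly the author's point: the shape of the curve past the break-even point is the unknown.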
I still think it's a worthy addition to your tiering strategy; I would only recommend that you have a way out. Don't just assume your cloud vendor will be there forever, because there is that fulcrum where, in order to survive, they may have to raise prices beyond your organization's willingness (or even ability) to pay. You need to plan for that scenario, because at worst having a plan is a waste of some man-hours and maybe a minimal fee to keep a second cloud provider on line just in case, while the worst case if things go wrong is losing your files. No comparison of the risks there.

Of course, if you have ARX in place, moving between cloud storage providers is easier, but frankly, running that much data through your WAN link is going to be a crushing exercise (see the back-of-the-envelope sketch at the end of this post). So plan for two options – one where you have the time to trickle the movement of data through your WAN connection (remember, if you're moving cloud storage providers, all that data needs to come into the building and back out that same WAN link unless you have alternatives available), and one where you have to get your data off quickly for whatever reason.

Like everything else, there's benefit and cost savings to be had, but keep a plan in place. It's still your storage, so an addition to the Disaster Recovery plan is in order too – under "WAN link down" you need to add "what to do about non-critical files". Likely you won't care about them in an extreme emergency, but there are scenarios where you're going to want an alternate way to get at your files. Meanwhile, you have a theoretically unlimited tier out there that could be holding a ton of those not-so-critical files that are eating up terabytes in your datacenter. I would look into taking advantage of it.
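Here is that back-of-the-envelope sketch for the WAN problem. The link speed, data volume, and utilization factor are all assumptions; the point is the order of magnitude.

```python
def transfer_days(terabytes: float, wan_mbps: float,
                  utilization: float = 0.7) -> float:
    """Rough days to push a data set through a WAN link."""
    bits = terabytes * 8e12                        # decimal TB -> bits
    seconds = bits / (wan_mbps * 1e6 * utilization)
    return seconds / 86400

# Switching providers means the data crosses your link twice: in, then out.
# 50 TB over a 100 Mbps link at 70% utilization:
# 2 * transfer_days(50, 100) -> roughly 132 days
```

Four-plus months of a saturated link is the "crushing exercise" in numbers, and it is why the trickle plan and the get-out-fast plan need to be written down before you need them.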
F5 Friday: Big Data? Big Risk…

#bigdata #infosec Storing sensitive data in the cloud is made more palatable by applying a little security before the data leaves the building…

When corporate hardware, usually a laptop, is stolen, one of the first questions asked by information security professionals is whether or not the data on the drive was encrypted. While encryption of data is certainly not a panacea, it's a major deterrent to those who would engage in the practice of stealing data for dollars. Many organizations are aware of this and use encryption judiciously when data is at rest in the data center storage network. But as the Corollary to Hoff's Law states, even "if your security practices don't suck in the physical realm, you'll be concerned by the inability to continue that practice when you move to Cloud." It's not that you can't encrypt data being moved to cloud storage services; it's that doing so isn't necessarily a part of the processes or APIs used to do so. This makes it much more difficult to enforce such a policy, and for some organizations, unless they are guaranteed data will be secured at rest, they aren't going to give the okay. A recent Ponemon study speaks to just this issue:

According to the report entitled "Data Security in the Cloud Survey of U.S. IT Operations, IT Security and Compliance Practitioners", only one third of IT security practitioners believe cloud infrastructure (IaaS) environments are as secure as on premise datacenters, while half of compliance officers think IaaS is as secure.
-- Ponemon Institute Survey on Cloud Data Security Exposes Gulf between IT Security and Compliance Officers

INTEGRATION and REPLICATION

In order to make cloud a more palatable option, it is necessary to ensure that data can be stored securely off-premise. A tried and true method is to encrypt the data before it leaves the building. And yet the same Ponemon study found that less than one-third of respondents' organizations do just that. A possible explanation for organizations' failure to encrypt data being transferred to the cloud is a lack of process and integration with the ways in which the data is transferred. Storing data in "the cloud" is generally accomplished via an API, and rarely do these APIs include a flag for "hey, encrypt my data." There are technical reasons why this is the case; encryption – at least encryption worth the effort and compute consumed – often makes use of certificates and keys. Those keys should be unique to the organization. Using a general cloud storage service encryption API would require either sharing that key (bad idea) or using a common provider key (yet another bad idea), neither of which is an acceptable solution.

The answer is, of course, to encrypt the data before transfer to the cloud storage service. The cloud storage service, after all, doesn't care what the data is – it just cares that it has to store it for you. This brings us back to the problem of process and integration at the infrastructure layer. What organizations need to leverage cloud storage services is the means to automatically encrypt data as it's headed for the cloud. What organizations need is for that cloud storage service to be integrated with their own, data center based storage in a way that makes it possible to leverage cloud storage automatically, encrypting the data when it's bound for the cloud.
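To see why encrypt-before-transfer is straightforward in principle (key management is the genuinely hard part, and it is waved away here), here is a minimal sketch of the pattern using Python's cryptography package and AES-256-GCM. This illustrates the general approach, not ARX CE's actual implementation.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_cloud(plaintext: bytes, key: bytes) -> bytes:
    """AES-256-GCM: confidentiality plus an integrity tag per object."""
    nonce = os.urandom(12)                 # must be unique per object
    ct = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ct                      # store the nonce with the object

def decrypt_from_cloud(blob: bytes, key: bytes) -> bytes:
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, None)

# The key never leaves your data center (an HSM or key server in practice);
# only ciphertext crosses the wire -- over HTTPS -- to the provider.
key = AESGCM.generate_key(bit_length=256)
sealed = encrypt_for_cloud(b"quarterly forecast", key)
assert decrypt_from_cloud(sealed, key) == b"quarterly forecast"
```

The provider stores an opaque blob it cannot read, which is exactly the property that answers the "others are sharing your space" objection raised earlier in this series.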
Organizations need a common, overarching storage solution that can seamlessly integrate cloud storage into operational processes and automatically provide a layer of security through encryption of the data when that data might be stored off-site, in a cloud storage service. F5 ARX and ARX Cloud Extender (CE) is that solution. In addition to its core aggregation and intelligent tiering capabilities, adding ARX CE to the architecture allows for the seamless extension of storage to the cloud – securely.

When ARX CE is preparing to send data to public cloud destinations, the data is encrypted using AES-256 bit encryption for each object. Further, all transfers from the ARX CE-enabled Windows file server to public cloud storage occur over SSL (HTTPS), which provides network layer encryption.
-- Securing Data in the Cloud with ARX CE

The Ponemon study revealed that "less than half of IT practitioners (35%) and compliance officers (42%) believe their organizations have adequate technologies to secure their IaaS environments." So not only do organizations believe the cloud is less secure, they also believe they don't have the right tools to secure it and thus take advantage of it. F5 ARX and ARX CE address the operational risk associated with storage in the cloud: by integrating cloud storage services into operational processes, they alleviate the manual burden imposed on IT to schedule transfers and prioritize files across tiers. With the ability to automatically apply encryption to data and use a secure transport channel to cloud storage services, the solution adds a layer of security to data stored in the cloud that would otherwise not exist, giving IT the confidence required to take advantage of lower-cost storage in the cloud and realize its benefits.

F5 ARX Cloud Extender Resources

Securing Data in the Cloud with ARX CE (How To)
ARX Tiered Storage: Best Practices
Getting Up And Running With F5 ARX Virtual Edition
F5 Storage Solutions
F5 ARX 1500 and 2500
F5's New ARX Platforms Help Organizations Reap the Benefits of File Virtualization
Network World – F5 Rolls Out New File Virtualization Appliances
Analyzing Performance Metrics for File Virtualization
Strategies for a Seamless and Secure Transition to Enterprise Cloud Storage
Building a Cloud-Ready File Storage Infrastructure
SSDs, Velocity and the Rate of Change
F5 Friday: If Data is King then Storage Virtualization is the Castellan
F5 Friday: F5 ARX Cloud Extender Opens Cloud Storage
F5 Friday: ARX VE Offers New Opportunities
Disk May Be Cheap but Storage is Not
All F5 Friday Posts on DevCentral
Tiering is Like Tables, or Storing in the Cloud Tier