In The End, You Have to Clean.
Lori and I have a large technical reference library, both in print and electronic. Part of the reason it is large is that we are electronics geeks. We seriously want to know what there is to know about computers, networks, systems, and development tools. Part of the reason is that we don't often enough sit down and decide to pare the collection down by those books that no longer have a valid reason for sitting on our (many) bookshelves of technical reference. The collection runs the gamut from the outdated to the state of the art, from the old standbys to the obscure, and we've been at it for 20 years… So many of them just don't belong any more. One time we went through and cleaned up. The few books we got rid of were not only out of date (a mainframe Pascal data structures book was one of them), but weren't very good when they were new. And we need to do it again.

From where I sit at my desk, I can see an OSF DCE reference, the Turbo Assembler documentation, a Perl 5 reference, a MicroC/OS-II reference, and Mastering Web Server Security. All of which are just not relevant anymore. There's more, but I'll save you the pain; you get the point. The thing is, I'm more likely to take a ton of my valuable time and sort through these books, recycling those that no longer make sense unless they have sentimental value – Lori and I wrote an object-oriented programming book back in 1996, and that's not going to recycling – than you are to go through your file system and clean the junk out of it.

Two of ten…

A funny thing happens in highly complex areas of human endeavor: people start avoiding ugly truths by thinking they're someone else's problem. In my case (and Lori's), I worry about recycling a book that she has a future use for. Someone else's problem syndrome (or an SEP field, if you read Douglas Adams) has been the source of tremendous folly throughout mankind's history, and storage at enterprises is a prime example of just such folly. Now don't get me wrong: I've been around the block, been responsible for an ever-growing pool of storage, and I know that IT management has to worry that the second they start deleting unused files they're going to end up in the hot seat because someone thought they needed the picture of the sign in front of the building circa 1995… But if IT (who owns the storage space) isn't doing it, and business unit leaders (who own the files on the storage) aren't doing it… Well, you're going to have a nice big stack of storage building up over the next couple of years. Just like the last couple.

I could – and will – tell you that you can use our ARX product to help you solve the problem, particularly with ARX Cloud Extender and a trusted cloud provider, by shuffling data out to the cloud. But in the longer term, you've got to clean up the bookshelf, so to speak. ARX is very good at many things, but not at making those extra files disappear. You're going to pay for more disk, or you're going to pay a cloud provider, until you delete them.

I haven't been in IT management for a while, but if I were right now, I'd get the storage guys to build me a pie chart showing who owns how much data, then gather a couple of outrageous examples of wasted space (a PowerPoint that is more than five years old is good, better than the football pool for marketing from ten years ago, because PowerPoint uses a ton more disk space), and then talk with business leaders about the savings they can bring the company by cleaning up. While you can't make it their priority, you can give them the information they need.
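As a rough illustration of what "the information they need" might look like, here is a minimal sketch of a per-owner usage report for a single share. It is not an F5 tool or anything described in this post; the mount point, the one-year threshold, and the POSIX owner lookup are assumptions you'd swap for whatever fits your environment (a real shop would more likely lean on the NAS vendor's reporting or a purpose-built tool):

```python
#!/usr/bin/env python3
"""Minimal sketch: per-owner usage and "untouched in a year" report for one share.

Not an F5 tool; the mount point, the one-year threshold, and the POSIX owner
lookup are assumptions to adjust for your own environment.
"""
import os
import pwd          # POSIX-only owner lookup; swap for your platform's equivalent
import time
from collections import defaultdict

SHARE_ROOT = "/mnt/corp_share"   # hypothetical NAS mount point
STALE_AFTER = 365 * 24 * 3600    # "untouched" threshold: one year, in seconds

def scan(root):
    totals = defaultdict(int)    # owner -> total bytes
    stale = defaultdict(int)     # owner -> bytes not accessed in a year
    now = time.time()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                st = os.stat(os.path.join(dirpath, name))
            except OSError:
                continue         # skip files that vanish mid-scan
            try:
                owner = pwd.getpwuid(st.st_uid).pw_name
            except KeyError:
                owner = str(st.st_uid)
            totals[owner] += st.st_size
            if now - st.st_atime > STALE_AFTER:
                stale[owner] += st.st_size
    return totals, stale

if __name__ == "__main__":
    totals, stale = scan(SHARE_ROOT)
    grand_total = sum(totals.values()) or 1
    for owner, used in sorted(totals.items(), key=lambda kv: -kv[1]):
        pct_share = 100.0 * used / grand_total
        pct_stale = 100.0 * stale[owner] / used if used else 0.0
        print(f"{owner:15s} {used / 2**30:8.1f} GiB  "
              f"({pct_share:4.1f}% of share, {pct_stale:4.1f}% untouched in a year)")
```

Run against each department's share, output like that is exactly the raw material for the pie chart and the "hasn't been touched in a year" conversation that follows.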
If marketing is responsible for 30% of the disk usage on NAS boxes (or I suppose unstructured storage in general, though this exercise is a bit more complex with mixed SAN/NAS numbers – not terribly more complex), and you can show that 40% of the files owned by Marketing haven't been touched in a year… That's compelling at the C-level. That's 12% of your disk sitting there, just from one department, with easy-to-identify unused files on it. Some CIOs I've known have laid the smackdown – "delete X percent by Y date or we will remove this list of files" is actually from a CIO's memo – but that's just bad PR in my opinion. Convincing business leaders that they're costing the company money – what's 12% of your NAS investment, for example, plus 12% of the time of the storage staff dedicated to NAS – is a much better plan, because you're not the bad guy, you're the person trying to save money while not negatively impacting their jobs.

So yeah, install ARX, because it has a ton of other benefits, but go to the bookshelf, dust off that copy of the Fedora 2 Admin Guide, and finally put it to rest. That's what I'll be doing this weekend, I know that.

The Question Is Not "Are You Ready For Cloud Storage?"
I recently read a piece in Network Computing Magazine that was pretty disparaging of NAS devices, and with a hand-wave the author pronounced NAS dead, long live cloud storage. Until now, storage has been pretty much immune to the type of hype that "The Cloud" gets. Sure, there have been some saying that we should use the cloud for primary storage, and others predicting that it will kill this or that technology, but nothing like the outrageous and intangible claims that accompany placing your applications in the cloud. My favorite, repeated even by a lot of people I respect, is that cloud mystically makes you greener.

Okay, I'll sidetrack for a moment and slay that particular demon yet again, because it is just too easy. Virtualization makes you more green by running more apps on less hardware. Moving virtualized anything to the cloud changes not one iota of carbon footprint, because it still has to run on hardware. So if you take 20 VMs from one server and move them to your favorite cloud provider, you have moved where they are running, but they are certainly running on at least one server. Just because it is not your datacenter does not change the fact that it is in a datacenter. Not greener, not smaller carbon footprint.

But this column was awash with the claim that cloud storage is it. We no longer need those big old NAS boxes, and they can just go away from the datacenter, starting with the ones that have been cloudwashed.

The future is cloudy, cloouuuudddyyy

Okay, let us just examine a hypothetical corporation for a moment – I'll use my old standby, Zap-N-Go. Sally, the CIO of Zap-N-Go, is under pressure to "do something with the cloud!" or "Identify three applications to move to the cloud within the next six months!" Now this is a painful way to run an IT shop, but it's happening all over, so Sally assigns Bob to check out the possibilities, and Bob suggests that moving storage to the cloud might be a big win because of the cost of putting in a new NAS box. They work out a plan to move infrequently accessed files to the cloud as a test of robustness, but that's not a bold enough step for the rest of senior management, so their plan to test the waters turns into a full-blown movement of primary data to the cloud. Now this may be a bit extreme – Sally, like any good CIO, would dig in her heels at this one – but bear with me.

They move primary storage to the cloud on a cloudy Sunday, utilizing ARX or one of the other cloud-enabled devices on the market, and start to reorganize everything so that people can access their data. On Monday morning, everyone comes in and starts to work, but work is slow; nothing is performing like it used to. The calls start coming to the help desk. "Why is my system so slow?" And then the CEO calls Sally directly. "It should not take minutes to open an Excel spreadsheet," he harrumphs. And Sally goes down to help her staff figure out how to improve performance. Since the storage move was the day before, everyone knows the ultimate source of the problem; they're just trying to figure out what is happening. Sue, the network wizard, pops off with "Our Internet connection is overloaded," and everyone stops looking. After some work, the staff is able to get WOM running with the cloud provider to accelerate data flowing between the two companies… But doing so in the middle of the business day has cost the company money, and Sally is in trouble.
After days of redress meetings, and acceptable if not perfect performance, all seems well, and Sally can report to the rest of upper management that files have been moved to the cloud, and now a low monthly fee will be paid instead of large incremental chunks of budget going to new NAS devices.

It's Almost Ready for Primary Storage…

Until the first time the Internet connection goes down. And then, gentle reader, Sally and Bob's résumés will end up on your desk, because they will not survive the aftermath of "no one can do anything". Cloud in general and cloud storage in particular have amazing promise – I really believe that – but pumping it full of meaningless hyperbole does no one any good. Not IT, not the business, and not whatever you're hawking. So take such proclamations with a grain of salt, and keep your eye on the goal: secure, fast, and agile solutions for your business, not going "all in" like it's a poker table. And don't let such buffoons sour you on the promise of cloud. While I wouldn't call them visionary, I do see a day when most of our storage and apps are in a cloud somewhere. It's just not tomorrow. Or next year. Next year archiving and tier three will be out there; let's just see how that goes before we start discussing primary storage.

…And Ask Not "Are We Ready For Cloud Storage?" but rather "Is Cloud Storage Ready For Us?"

My vote? Archival and tier three are getting a good workout, start there.

Graduating Your Storage
Lori's and my youngest daughter graduated from high school this year, and her class chose one of the many good Vince Lombardi quotes for the theme of their graduation – "The measure of who we are is what we do with what we have." Those who know me well know that I'm not a huge football fan (don't tell my friends here in Green Bay that… The stadium can hold roughly half the city's population, and they aren't real friendly to those who don't join in the frenzy), but Vince Lombardi certainly had a lot of great quotes over the course of his career, and I am a fan of solid quotes. This is a good example of his ability to say things short and to the point. This is the point where I say that I'm proud of our daughter, for a lot more than simply making it through school, and wish her the best of luck in that rollercoaster ride known as adult life.

About the same time as our daughter was graduating, Lori sent me a link to this Research And Markets report on High Performance Computing site storage usage. I found it to be very interesting, just because HPC sites are generally on the larger end of storage environments, and are where the rubber really hits the road in terms of storage performance and access times. One thing that stood out was the large percentage of disk that is reported as DAS. While you know there's a lot of disk sitting in servers underutilized, I would have expected the age of virtualization to have used a larger chunk of that disk with local images and more swap space for the multiple OS instances. Another thing of interest was that NAS and SAN are about evenly represented. Just a few years ago, that would not have been true at all. Fibre Channel has definitely lost some space to IP-based storage if they're about even in HPC environment deployments. What's good for some of the most resource-intensive environments on earth is good for most enterprises, and I suspect that NAS has eclipsed SAN in terms of sheer storage space in the average enterprise (though that's a conjecture on my part, not anything from the report).

And that brings us back to the Vince Lombardi quote. NAS disk space is growing. DAS disk space is still plentiful. The measure of the service your IT staff delivers will be what you do with what you have. And in this case, what you have is DAS disk not being used and a growing number of NAS heads to manage all that NAS storage. What do you do with that? Well, you do what makes the most sense. In this case, storage tiering comes to mind, but DAS isn't generally considered a tier, right? It is if you have file virtualization (also called directory virtualization) in place. Seriously. By placing all that spare DAS into the directory tree, it is available as a pool of resources to service storage needs – and by utilizing automated, rule-based tiering, what is stored there can be tightly controlled by tiering rules, so that you are not taking over all of the available space on the DAS, and things are stored in the place that makes the most sense based upon modification and/or access times. With tiering and file virtualization in place, you have a solution that can utilize all that DAS, and an automated system to move things to the place that makes the most sense. While you're at it, move the rest of the disk into the virtual directory, and you can run backups off the directory virtualization engine, rather than set them up for each machine. You can even create rules to copy data off to slow disk and back it up from there, if you like.
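To make the rule-based part concrete, here is a minimal sketch of an age-based demotion rule, purely illustrative, with made-up paths and a 180-day threshold. In practice a file/directory virtualization layer such as ARX applies a rule like this behind a stable logical path, so clients never see the file move; this standalone script only demonstrates the selection-and-move logic:

```python
#!/usr/bin/env python3
"""Sketch of an age-based tiering rule: demote files untouched for N days from a
tier-1 NAS path to spare DAS capacity.

Paths and the 180-day threshold are made up for illustration. A file/directory
virtualization layer (ARX-style) would perform a move like this behind a stable
logical path so clients never notice; this script only shows the rule itself.
"""
import os
import shutil
import time

TIER1_ROOT = "/mnt/tier1_nas"   # hypothetical expensive tier
TIER2_ROOT = "/mnt/das_pool"    # hypothetical spare DAS capacity
DEMOTE_AFTER_DAYS = 180

def demote_cold_files(src_root, dst_root, max_age_days):
    cutoff = time.time() - max_age_days * 86400
    for dirpath, _dirs, files in os.walk(src_root):
        for name in files:
            src = os.path.join(dirpath, name)
            try:
                st = os.stat(src)
            except OSError:
                continue
            # Use the newer of access/modification time so active files stay put.
            if max(st.st_atime, st.st_mtime) >= cutoff:
                continue
            rel = os.path.relpath(src, src_root)
            dst = os.path.join(dst_root, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)   # demote to the cheaper tier, keeping the relative path
            print(f"demoted {rel}")

if __name__ == "__main__":
    demote_cold_files(TIER1_ROOT, TIER2_ROOT, DEMOTE_AFTER_DAYS)
```

The point of the sketch is the policy, not the plumbing: pick the timestamps that matter to your business, pick a threshold, and let automation do the shuffling.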
And with the direction things are headed, throw in an encrypting Cloud Storage Gateway like our ARX Cloud Extender, and you have a solution that utilizes your internal DAS and NAS both intelligently and to the maximum, and a gateway to cloud storage for overflow, Tier N, or archival storage… depending upon how you're using cloud storage. Then you are doing the most with what you have – and setting up an infinitely expandable pool to cover for unforeseen growth.

All of the above makes your storage environment more rational, improves utilization in DAS (and in most cases NAS), retains your files with their names intact, and moves unstructured data to the storage that makes the most sense for it. There is very little not to like. So check it out. We have ARX, and other vendors offer their solutions – though ARX is the leader in this space, so I don't feel I'm pandering to say you'll find us a better fit.

F5 Friday: ARX VE Offers New Opportunities
Virtualization has many benefits in the data center – some that aren't necessarily about provisioning and deployment.

There are some things on your shopping list that you'd never purchase sight unseen or untested. Houses, cars, even furniture. So-called "big ticket" items that are generally expensive enough to be viewed as "investments" rather than purchases are rarely acquired without the customer physically checking them out. Except in IT. When it comes to hardware-based solutions there's often been the opportunity for what vendors call "evaluation units", but these are guarded by field and sales engineers as if they're gold from Fort Knox. And oftentimes, like cars and houses, the time in which you can evaluate them – if you're lucky enough to get one – is very limited. That makes it difficult to really test out a solution and determine if it's going to fit into your organization and align with your business goals.

Virtualization is changing that. While some view virtualization in light of its ability to enable cloud computing and highly dynamic architectures, there's another side to virtualization that is just as valuable, if not more so: evaluation and development. It's been a struggle, for example, to encourage developers to take advantage of application delivery capabilities when they're not allowed to actually test and play around with those capabilities in development. Virtual editions of application delivery controllers make it possible to make that happen – without the expense of acquisition and the associated administrative costs that go with it. Similarly, it's hard to convince someone of the benefits of storage virtualization without giving them the chance to actually try it out. It's one thing to write a white paper or put up a web page with a lot of marketing-like speak about how great it is, but as they say, the proof is in the pudding. In the implementation.

Not every solution is a good fit for production-level virtualization. It's just not – for performance or memory or reliability reasons. But for testing and evaluation purposes, it makes sense for just about every technology that fits in the data center. So it was, as Don put it, "very exciting" to see our "virtual edition" options grow with the addition of ARX VE, F5's storage virtualization solution. It just makes sense that, like finding "your chair", you test it out before you make a decision. From automated tiering and shadow copying to unified governance, storage virtualization like ARX provides some tangible benefits to the organization that can address some of the issues associated with the massive growth of data in the enterprise. You may recall that storage tiering was recently identified at the Gartner Data Center conference as one of the "next big things", primarily due to the continued growth of data:

#GartnerDC Major IT Trend #2 is: 'Big Data - The Elephant in the Room'. Growth 800% over next 5 years - w/80% unstructured. Tiering critical @ZimmerHDS Harry Zimmer

Virtualization gives us at F5 the opportunity to give you a chance to test drive a solution in ARX VE that is addressing that critical need. Don, who was won over to the side of "storage virtualization is awesome" only after he actually tried it out himself, has more details on our latest addition to our growing stable of virtualized offerings.

INTRODUCING ARX VE

As we here at F5 grow our stable of Virtual Edition products, we like to keep you abreast of the latest and greatest releases available to you.
Today's Virtual Edition discussion is about ARX VE Trial, a completely virtualized version of our ARX File/Directory Virtualization product. ARX has huge potential in helping you get NAS sprawl under control, but until now you had to either jump through hoops to get a vendor trial into place, or pay for the product before you fully understood how it worked in your environment. Not anymore. ARX VE Trial is free to download and license, includes limited support, and is fully functional for testing purposes.

If you have VMware ESX 4.0 Update 2 or VMware ESX 4.1, then you can download and install the trial for free. There's no time limit on how long the system can run, but there is a limit on the number of NAS devices it can manage and the number of shares it can export. It is plenty adequate for the testing you'll want to do to see how it performs, though. Now you can see what heterogeneous tiering of NAS devices can do for you, and you can test out shadow copying for replication and moving users' data stores without touching the desktop. You can see how easy managing access control is when everything is presented as a single massive file system. And you can do all of this (and more) for free.

As NAS-based storage architectures have grown, management costs have increased simply due to the amount of disk and number of arrays/shares/whatever under management. This is your chance to push those costs back in the other direction. Or at least your chance to find out if ARX will help in your specific environment without having to pay up front or work through a long process to get a test box. You can get your copy of ARX VE (or FirePass VE or LTM VE) at our trial download site.

Once Again, I Can Haz Storage As A Service?
While plenty of people have had a mouthful (or page full, or pipe full) of things to say about the Amazon outage, the one thing that it brings to the fore is not a problem with cloud, but a problem with storage. Long ago, the default mechanism for "high availability" was to have two complete copies of something (say a network switch), and when one went down, the other was brought up with the same IP. It is sad to say that even this is far and away better than the level of redundancy that most of us place in our storage. The reasons are pretty straightforward: you can put in a redundant pair of NAS heads, or a redundant pair of file/directory virtualization appliances like our own ARX, but a redundant pair of all of your storage? The cost alone would be prohibitive. Amazon's problems seem to stem from a controller problem, not a data redundancy problem, but I'm not an insider, so that is just appearances. Most of us suffer from the opposite: high availability entry points protect data that is all too often a single point of failure. I know I lived through the sudden and irrevocable crashing of an AIX NAS once, and it wasn't pretty. When the third disk turned up bad, we were out of luck, and had to wait for priority shipment of new disks and then do a full restore… The entire time being down in a business where time is money.

The critical importance of the data that is the engine of our enterprises these days makes building that cost-prohibitive, truly redundant architecture a necessity. If you don't already have a complete replica of your data somewhere, it is worth looking into file and database replication technologies. Honestly, if you choose to keep your replicas on cheaper, slower disk, you can save a bundle and still have the security that even if your entire storage system goes down, you'll have the ability to keep the business running.

But what I'd like to see is full-blown storage as a service. We couldn't call it SaaS, so I'll propose we name it Storage Tiers As A Service Host, just so we can use the acronym Staash. The worthy goal of this technology would be the ability to automatically, with no administrator interaction, redirect all traffic to device A over to device B, heterogeneously. So your core datacenter NAS goes down hard – let's call it a power failure to one or more racks – and Staash would detect that the primary is offline and substitute your secondary for it in the storage hierarchy. People might notice that files are served up more slowly, depending upon your configuration, but they'll also still be working. Given sufficient maturity, this model could even go so far as to allow them to save changes made to documents that were open at the time that the primary NAS went down, though this would be a future iteration of the concept. Today we have automated redundancy all the way up to that final step; it is high time we implemented redundancy on that last little bit, and made our storage more agile.

While I could reasonably argue that a file/directory virtualization device like F5's ARX is the perfect place to implement this functionality – it is already heterogeneous, it sits between users and data, and it is capable of being deployed in HA pairs… all the prerequisites for Staash to be implemented – I don't think your average storage or server administrator much cares where it is implemented, as long as it is implemented. We're about 90% there. You can replicate your data – and you can replicate it heterogeneously.
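Just to make the Staash idea a little more concrete, here is a toy sketch of the "detect the primary is offline and substitute the secondary" behavior. Everything in it is hypothetical – the mount points, the crude health check, and the symlink swap – and a real implementation would live in a file/directory virtualization layer with proper failback and open-file handling, not in a looping script:

```python
#!/usr/bin/env python3
"""Toy illustration of the "Staash" idea: if the primary NAS mount stops
responding, repoint an active-storage symlink at a (slower) replica so
work can continue.

Everything here is hypothetical: the mount points, the health check, and the
symlink swap. A real implementation would sit in a file/directory
virtualization layer, not a looping script, and would also handle failback
and open-file state.
"""
import os
import time

PRIMARY = "/mnt/primary_nas"     # hypothetical tier-1 storage
REPLICA = "/mnt/replica_nas"     # hypothetical cheaper replica
ACTIVE = "/srv/active_storage"   # the path users and applications actually use

def healthy(mount_point):
    """Crude health check: can we list the mount point at all?"""
    try:
        os.listdir(mount_point)
        return True
    except OSError:
        return False

def point_active_at(target):
    """Atomically repoint the ACTIVE symlink at the given storage root."""
    tmp = ACTIVE + ".tmp"
    if os.path.islink(tmp):
        os.unlink(tmp)
    os.symlink(target, tmp)
    os.replace(tmp, ACTIVE)      # atomic swap of the active pointer
    print(f"active storage now -> {target}")

if __name__ == "__main__":
    while True:
        point_active_at(PRIMARY if healthy(PRIMARY) else REPLICA)
        time.sleep(30)           # re-check every 30 seconds
```

Crude as it is, it captures the contract: users keep one logical path, and the infrastructure quietly decides which physical copy answers it.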
You can set up an HA pair of NAS heads (if you are a single-vendor shop) or file/directory virtualization devices whether you are single-vendor or heterogeneous, and with a file/directory virtualization tool you have already abstracted the user from the physical storage location in a very IT-friendly way (files are still saved together, storage is managed in a reasonable manner, only files with naming conflicts are renamed, etc.). All that is left is to auto-switch from your high-end primary to a replica created however your organization does these things… And then you are truly redundantly designed. It's been what, forty years? That's almost as long as I've been alive.

Of course, I think this would fit in well with my long-term vision of protocol independence too, but sometimes I want to pack too much into one idea or one blog, so I'll leave it with "let's start implementing our storage architecture like we do our server architecture… no single point of failure." No doubt someone out there is working on this configuration… Here's hoping they call it Staash when it comes out.

The cat in the picture above is Jennifer Leggio's kitty Clarabelle. Thanks to Jen for letting me use the pic!

Cloud Storage: Just In Time For Data Center Consolidation.
There's this funny thing about pouring two bags of M&Ms into one candy dish. The number of M&Ms is exactly the same as when you started, but now they're all in one location. You have, in theory, saved yourself from having to wash a second candy dish, but the same number of people can enjoy the same number of M&Ms, you'll run out of M&Ms at about the same time, and if you have junior high kids in the crowd, the green M&Ms will disappear at approximately the same rate. The big difference is that fewer people will fit around one candy dish than two, unless you take extraordinary steps to make that one candy dish more accessible. And if the one candy dish is specifically designed to hold one or one and a half bags of M&Ms, well then you're going to need a place to store the excess.

The debate about whether data center consolidation is a good thing or not is pretty much irrelevant if, for any reason, your organization chooses to pursue this path. Seriously, while analysts want to make a trend out of everything these days, there are good reasons to consolidate data centers, ranging from a skills shortage at one location to a hostile regulatory environment at another. Cost savings are very real when you consolidate data centers, though they're rarely as large as you expect them to be in the planning stages, because the work still has to be done, the connections still have to be routed, and the data still has to be stored. You will get some synergies by hosting apps side-by-side that would normally need separate resources, but honestly, a datacenter consolidation project isn't an application consolidation project. It can be, but that's a distinct piece of the pie that introduces a whole lot more complexity than simply shifting loads, and all the projects I've seen with both as a part of them have them in two separate and distinct phases – "let's get everything moved, and then focus on reducing our app footprint".

Lori and the M&Ms of doom.

While F5 offers products to help you with all manner of consolidation problems, this is not a sales blog, so I'll focus on one opportunity in the cloud that is just too low-hanging a fruit for you not to be considering it: moving the "no longer needed no matter what" files out to the cloud. I've mentioned this in previous Cloud Storage and Cloud Storage Gateway posts, but in the context of data center consolidation, it moves from the "it just makes sense" category to the "urgently needed" category. You're going to be increasing the workload at your converged datacenter by an unknown amount, and storage requirements will stay relatively static, but you're shifting those requirements from two or more datacenters to one. This is the perfect time to consider your options with cloud storage. What if you could move an entire classification of your data out to the cloud, so you didn't have to care if you were accessing it from a data center in Seattle or Cairo? What if you could move that selection of data out to the cloud and then purposely shift data centers without having to worry about that data? Well you can… And I see this as one of the drivers for cloud storage adoption. In general you will want a Cloud Storage Gateway like our ARX Cloud Extender, and using ARX or another rules-based tiering engine will certainly make the initial cloud storage propagation process easier, but the idea is simple.
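That simple idea, spelled out in the next paragraph, can even be priced out ahead of time. Here is a rough what-if sketch that totals up everything on a share that hasn't been accessed in X days and estimates the monthly cloud bill for holding it. The share path, the value of X, and the per-GB price are all assumptions to adjust for your situation, and the actual move would normally go through a cloud storage gateway so the data still looks like an ordinary share:

```python
#!/usr/bin/env python3
"""What-if sketch for the "skim the cold files to the cloud" idea: total up
everything under a share that hasn't been accessed in X days and estimate the
monthly cloud bill for holding it.

X, the share path, and the $/GB-month price are assumptions; check your own
provider's pricing and set X with the business units that own the data.
"""
import os
import time

SHARE_ROOT = "/mnt/corp_share"       # hypothetical NAS mount
X_DAYS = 365                         # "cold" threshold, set with the business
CLOUD_PRICE_PER_GB_MONTH = 0.10      # assumed price, not a quote

def cold_bytes(root, days):
    cutoff = time.time() - days * 86400
    total = 0
    count = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                st = os.stat(os.path.join(dirpath, name))
            except OSError:
                continue
            if st.st_atime < cutoff:
                total += st.st_size
                count += 1
    return count, total

if __name__ == "__main__":
    count, total = cold_bytes(SHARE_ROOT, X_DAYS)
    gb = total / 2**30
    print(f"{count} files ({gb:.1f} GiB) untouched in {X_DAYS} days")
    print(f"Estimated cloud cost: ${gb * CLOUD_PRICE_PER_GB_MONTH:,.2f} per month")
    print("...versus whatever that capacity costs to buy, power, and back up locally.")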
Skim off those thousands of files that haven't been accessed in X days and move them to cloud storage, freeing up local space so that maybe you won't need to move or replace that big old NAS system from the redundant data center. X is very much dependent upon your business and even the owning business unit; I would seriously work with the business leaders to set reasonable numbers, offering them guidance about what it will take (in terms of how many days X needs to be) to save the company from moving or replacing an expensive (and expensive-to-ship) NAS.

While the benefits appear to be short-term – not having to consolidate the NAS devices while consolidating datacenters – they are actually very long-term. They allow you to learn about cloud storage and how it fits into your architectural plans with relatively low-risk data. As time goes on, the number of files (and terabytes) that qualify for movement to the cloud will continue to increase, keeping an escape valve on your NAS growth, and the files that generally don't need to be backed up every month or so will all be hanging off your cloud storage gateway, simplifying the backup process and reducing backup/replication windows.

I would be remiss if I didn't point out the ongoing costs of cloud storage; after all, you will be paying each and every month. But I contend you would be anyway. If this becomes an issue from the business or from accounts payable, it should be relatively easy, with a little research, to come up with a number for what storage growth costs the company when it owns the NAS devices. The only number available to act as a damper on this cost would be the benefits of depreciation, but that's a fraction of the total in real-dollar benefits, so my guess is that companies within the normal bounds of storage growth over the last five years can show a cost reduction over time without having to include cost-of-money-over-time calculations for "buy before you use" storage. So the cost of cloud being pieced out over months is beneficial, particularly at the prices in effect today for cloud storage.

There will no doubt be a few speed bumps, but getting them out of the way now with this never-accessed data is better than waiting until you need cloud storage and trying to figure it out on the fly. And it does increase your ability to respond to rapidly changing storage needs… which over the last decade have been rapidly changing in the upward direction. Datacenter consolidation is never easy on a variety of fronts, but this could make it just a little bit less painful and provide lasting benefits into the future. It's worth considering if you're in that position – and truthfully, to avoid storage hardware sprawl, even if you're not.

Related Articles and Blogs
Cloud Storage Gateways, stairway to (thin provisioning) heaven?
Certainly Cirtas! Cloud Storage Gains Momentum
Cloud Storage Gateways. Short term win, but long term…?
Cloud Storage and Adaptability. Plan Ahead
Like "API" Is "Storage Tier" Redefining itself?
The Problem With Storage Growth is That No One Is Minding the Store
F5 Friday: F5 ARX Cloud Extender Opens Cloud Storage
Chances Are That Your Cloud Will Be Screaming WAN.

VE as in Very Exciting. ARX VE Trial
The limiting factor in adoption of file virtualization has been, in my opinion, twofold. First is the FUD created by the confusion with block-level virtualization and proprietary vendors wanting to sell you more of their gear – both of which are rapidly disappearing – and second is the unknown element. The simple "how does this set of products improve my environment, save me money, or cut man-hours?" Well, now this issue is going to rapidly go away also, because you can find out easily enough.

Those of you who follow my writing know that I was a hard sell for file virtualization services. In fact, until I had a device in my environment and running, giving me a chance to tinker with it, I remained a bit skeptical even after understanding the use cases. The reasons are probably well known to many in IT… What does file virtualization offer that the independent NAS boxes don't cover in one way or another? The answer that I came to was something we here at F5 call strategic points of control. The ARX in my environment allowed me to utilize the back-end NAS devices/file servers to the maximum while alleviating quite a bit of "out of disk space" concern. This is simply a case of the ARX seeing across NAS devices and giving the ability to move things to less-utilized space without client machines having to even know they moved. This goes a lot further than simple disk space utilization and allows tiering and enhanced automated backup. But I digress. My point is that, much like you and I cannot know what it is like to walk in space, I didn't "get it" until I had my hands on the tool and could toy with it. Oh, I conceptually understood the benefits, but wasn't certain of the ROI for those benefits versus the cost of the device. Having one to configure and implement changes through was what it took for me to fully understand what benefits file virtualization had to offer in my environment.

Today our Data Solutions Group introduced a new version of F5 ARX – F5 ARX VE Trial, or ARX Virtual Edition Trial. Yes indeed, now, assuming you have the right environment, you can download a copy of ARX and kick the tires, see for yourself what I found in our network – that file management, replication, and tiering are all enabled by the F5 ARX line of products at a level that makes life easier for storage admins, desktop support, security, and systems admins. Of course, no software exists in a vacuum, so I'll cover the minimum requirements here, then talk about issues and differences from an ARX appliance.

Image Courtesy of NASA

Requirements are not too strenuous, and shouldn't be too much of a surprise. First, it is a VM, so you'll need VMware – specifically, VMware ESX 4.0 Update 2 or VMware ESX 4.1. The VMware install must be able to offer the ARX VM one virtual processor, two gigabytes of memory, and forty gigabytes of disk space in order for it to run. And finally, you'll need Internet access – either directly from the VM, or via a management machine. This is so you can get the license key from the F5 license server. You'll want it to have routes to whatever subnets the storage you're trying it out with is on, of course, and clients should have a route to it – or you won't be doing much with it – but other than that, you're set.

I know there are a lot of you out there who have wondered at my change of heart vis-à-vis file virtualization… Several of you have written to me about it, in fact.
But now is your chance to see why I went from wondering that this market even exists to advocating that you put your storage behind such a device. The trial is free, with a few limitations, so let's go over them here. Remember, the point of this product is to try out ARX, not to put in a fully functional production VM. More on that later; for now, understand that the following limitations exist and should still offer more than enough power for you to check it out:

The biggest one, in my opinion, is that you are limited to 32,768 files per system. That means your test environment will have to be carefully selected – you'd be amazed how fast 32K files (not 32K of storage, actual unique files) build up.

Next is that you are really only going to have 32 mount points available on the ARX. This is somewhat less of an issue because from a single mount point at root you can get to the entire storage system.

The documentation that I have does not mention NFS at all, so presumably it is not supported in the Trial version – but let me caveat that with "Just because I haven't seen it doesn't mean it isn't there". I'll be installing and playing with this over the next couple of weeks, and will pop back in to let you know what I find.

All in all, you can drop this into a VM, fire it up, and figure out just how much benefit you could get from file virtualization. That should be the point of a Trial version, so I think they hit the nail on the head. As to upgrading in the future, there are some caveats. What you do in the Trial Edition won't transfer to a production box, for example; you'll have to reconfigure. But it's meant for testing only, so that's not a huge limitation. I know when I first install any unfamiliar infrastructure element there is that first bit of learning time that creates clutter anyway, so losing that clutter shouldn't be all bad. Unless you're just better than me anyway :-).

In D&D Parlance, Your Network is Already Converged.
For decades now, the game Dungeons and Dragons has suffered from what is commonly called "Edition Wars". When the publisher of the game releases a new version, they of course want to sell the new version and stop talking about the old – they're a business, and it certainly does make it tough to be profitable if people don't make the jump from version X to version Y. The problem is that people become heavily invested in whatever version they're playing. When Fourth Edition was released, the MSRP on just the three books required to play the game was $150 or thereabouts. The price has come down, and a careful shopper can get it delivered to their home for about half of that now… But that's still expensive, considering that those books only give you enough to play with if you invest a significant amount of time in preparing the game beforehand. So those who have spent hundreds or even thousands of dollars on reference material for the immediately previous edition are loath to change, and this manifests as sniping at the new edition. This immediately raises the ire of those who have made the switch, and they begin sniping about your preferred edition. Since "best" is relative in any game, and more so in a role-playing game, it is easy to pick pieces you don't like out of any given edition and talk about how much better your chosen edition of the game is. And this has gone on for so long that it's nearly a ritual. A new version comes out, and people put up their banners and begin nit-picking all other versions. I have a friend (who goes by DungeonDelver in most of his gaming interactions) who is certain that nothing worthy has come out since the release of the original Tactical Studies Rules box set in the early seventies, and other friends who can't understand why anyone would play those "older versions" of the game. For those not familiar with the industry, "threetard" was coined to talk about those who loved Third Edition, for example. While not the worst flame that's coursed through these conversations, for a while there it was pervasive.

And they all seem to miss the point. Each edition has had good stuff in it; all you have to do is determine what is best for you and your players, and go play. Picking apart someone else's version might be an entertaining pastime, but it is nowhere near the fun that actually playing the game is. Whatever version of the game. Because in the end, they all are the same thing… games designed to allow you to take on the persona of a character in a fantastical world and go forth to right the wrongs of that world.

A similar problem happens almost daily in storage, and though it is a bit more complex than the simple "edition wars" of D&D, it is also more constant. We have different types of storage – NAS, SAN, DAS – different protocols and even networks – iSCSI, FCoE, FC, CIFS, etc. – different vendors trying to convince you that their chosen infrastructure is "best", and a whole lot of storage/systems admins that are heavily invested in whatever their organization uses for primary storage. But, like the edition wars, there is no "right" answer. I for one would love to see a reduction in options, but that is highly unlikely unless and until customers vote definitively with their dollars. The most recent example is the marketing push for "converged networking". That's interesting – I could have sworn we were already sending both data (NAS/iSCSI/FCoE) and communications over our IP connections?
Apparently I was wrong, and I need this new expensive gizmo to put data on my network… And that's just the most recent example. Some simple advice I've picked up in my years watching the edition wars… Look at your environment, look at your needs, and continue to choose the storage that makes sense for the application. Not all environments and not all applications are the same, so that's a determination you need to make. And you should make it vendor-free. Sure, some vendors would rather sell you a multi-million dollar SAN with redundancy and high availability, and sure, some other vendors want to drop a NAS box into your network and then walk away with your money. They're in the business of selling you what they make, not necessarily what you need. The "what you need" part is your job, and if you're buying a Mercedes where a Hyundai would do, you're doing your organization a disservice. Make sure you're familiar with what's going on out there, how it fits into your org, and how you can make the most out of what you have. RAID makes cheaper disk more appealing, iSCSI makes connecting to a SAN more user-friendly, but both have limits in how much they improve things. Know what your options are, then make a best-fit analysis.

Me? I chose a Dell NX3000 for my last storage purchase – with an iSCSI host. All converged, and not terribly expensive compared to the other similar-performing options. But that was for my specific network, with characteristics that show nowhere near the traffic you're seeing right now on your enterprise network, so my solution is likely not your best solution.

Oh, you meant the edition wars? I play a little of everything, though AD&D First Edition is my favorite and Third Edition is my least favorite. I'm currently playing nearly 100% Castles and Crusades, with a switch soon to AD&D 2nd Edition. Again, they suit what our needs are; your needs are likely to vary. Don't base your decision upon my opinion, base it on your analysis of your needs. And buy an ARX. They can't be beat. No, I really believe that, but I only added that in here because I think it's funny, after telling you to make your decisions vendor-free. ARX only does NAS ;-).

F5 Friday: Data Inventory Control
Today's F5 Friday post comes to you courtesy of our own Don MacVittie, who blogs more often than not on storage-related topics like file virtualization, cloud storage, and automated tiering goodness. You can connect, converse, and contradict Don in any of the usual social networking avenues. Enjoy!

I have touched a few times on managing your unstructured data, and knowing what you have so you know what to do with it. As you no doubt know, automating some of that process in a generic manner is nearly impossible, since it is tied to your business, your vertical, and your organizational structure. But some of it – file-level metadata and some small amount of content metadata – can absolutely be handled for you by an automated system. This process is increasingly necessary as tiering, virtualization, and cloud become more and more prevalent as elements of the modern data center. The cost driver is pretty obvious to anyone that has handled storage budgeting in the last decade… Disk is expensive, and tier one disk the most expensive. Add to that the never-ending growth of unstructured data and you have a steady bleed of IT infrastructure dollars that you're going to want to get under control.

One of the tools that F5 (and presumably other vendors, but this is an F5 Friday post) offers to help you get a handle on your unstructured data – with an eye to making your entire storage ecosystem more efficient, reliable, and scalable – is Data Manager. Data Manager is a software tool that helps you categorize the data flowing through your corporate NAS infrastructure by looking at the unstructured files and evaluating all that it can about them from context, metadata, and source. Giving you a solid inventory of your unstructured data files is a good start toward automated tiering, including things like using an F5 ARX and a Cloud Storage Gateway to store your infrequently accessed data encrypted in the cloud. Automating tiering is a well-known science, but providing you with data about your files, heterogeneous file system usage, and data mobility is less covered in the marketplace. But you cannot manage what you don't understand, and we all know that we've been managing storage simply by buying more each budgeting cycle. That process is starting to weigh on the ops budget as well as the tightened budgets that the current market is enforcing – and though there are signs that the tight market might be lifting, who wants to keep overhead any higher than it absolutely has to be?

Auto-tiering is well known to the developers that make it happen, but consider the complexity of classifying files, identifying tiers, and moving that data while not causing disruptions to users or applications that need access to the files in question. It is definitely not the easiest task your data center performs. The guys who write this stuff certainly have my admiration, but it does work. The part where you identify file servers and classify data – setting up communications with the various file servers and accessing the various folders to get at the unstructured files – is necessary just for classification, and that is what Data Manager is all about. Add in that Data Manager can help you understand utilization on all of these resources – in fact, does help you understand utilization of them – and you've got a powerful tool for understanding what is going on in your NAS storage infrastructure.
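As a back-of-the-envelope illustration of the kind of classification such an inventory pass performs, here is a small sketch that buckets one share's files by last-access age. It is not Data Manager (which scans your NAS devices over the network and produces real reports); the share path and the age bands are assumptions for illustration:

```python
#!/usr/bin/env python3
"""Back-of-the-envelope version of an unstructured-data inventory: bucket one
share's files by last-access age.

Purely illustrative; the share path and the age bands are assumptions, and a
real inventory tool works across NAS devices over the network.
"""
import os
import time

SHARE_ROOT = "/mnt/corp_share"   # hypothetical NAS mount
# Age bands, in days; None means "everything older than the last band".
BANDS = {"<30 days": 30, "30-90 days": 90, "90-365 days": 365, ">1 year": None}

def inventory(root):
    sizes = {band: 0 for band in BANDS}
    now = time.time()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                st = os.stat(os.path.join(dirpath, name))
            except OSError:
                continue
            age_days = (now - st.st_atime) / 86400
            for band, limit in BANDS.items():
                if limit is None or age_days < limit:
                    sizes[band] += st.st_size
                    break
    return sizes

if __name__ == "__main__":
    sizes = inventory(SHARE_ROOT)
    total = sum(sizes.values()) or 1
    for band, size in sizes.items():
        print(f"{band:12s} {size / 2**30:8.1f} GiB  ({100.0 * size / total:4.1f}%)")
```

Even a crude breakdown like this makes the tiering conversation concrete: the ">1 year" band is your candidate list for cheaper disk or the cloud.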
The reports it generates are in PDF, and can be run from the directory level all the way up to a group of NAS boxes. Here's a sample directory-level report from one of our NAS devices…

The coolest part? Data Manager is free for 90 days. Just download it here, install it, and start telling it about your NAS devices. See what it can do for you, decide if you like it, provide us with feedback, and consider if it helps enough to warrant a purchase. It is an excellent tool for you to discover where you can get the most benefit out of file virtualization solutions like the F5 ARX. And yes, we hope you'll buy Data Manager and ask about ARX. But the point is, you can try it out and decide if it helps your organization or not. If nothing else, you will get an inventory of your NAS devices and what type of utilization you are getting out of them.

First Conferences, 12 TB, and a Road Trip!
Thursday was quite the day for us. I mentioned earlier in the week that I was setting up the storage for Lori to digitize all of the DVDs; well, we came to the conclusion that we needed 12 terabytes of raw disk to hold movies plus music. Our current NAS total was just over four terabytes, clearly not enough. While I take it in stride that I would consider purchasing an additional 12 TB of disk space, you have to stop in awe for a moment, don't you? It was just a decade ago that many pundits were saying most enterprises didn't need more than a terabyte of data, and now I'm considering 12 for personal use? And while it isn't cheap, it is in the range of my budget, if I shop smartly. Kind of mind-boggling. Makes you wonder where most enterprises are at. Sure, they're not digitizing HD movies, but they are producing HD content, and that's just a portion of what they're turning out. I'll have to go dig up some research on current market trends.

So I ordered another NAS from Dell. For price plus simplicity, they suited our needs best – though the HP product a PR rep sent me a link to was interesting, it only had 1.5 TB of space, and I don't feel like bulk-replacing disks on brand new kit. It seems that my ARX is going to get its workout in the near future… I almost gave it up about a month ago; now I'm glad I didn't, since that's four NASes plus a couple of large shares that it can manage for us. We're also reconfiguring backups, and I (at this time) have no idea how best to make use of the ARX in this process… It's learning time.

Shortly after I placed the order for the new NAS, we left for Milwaukee, where Lori was due to speak at the Paragon Development Services conference (they're a partner of F5's). We decided that since he was nearly three years old, it was time to take The Toddler to his first IT conference… He went, fit in well, and appears to have all the traits of conference attendees. He picked up on the "free water" thing pretty quickly, paid rapt attention, and was ready to go before she even started. Since PDS is pretty big on private clouds, and Lori knows a thing or two about clouds, their issues, their potential, and how to get started, it was a natural fit for her to speak there, and all accounts are that she did well. Yes, I had The Toddler in hand, so I left before she actually started to speak; I didn't want him to do to her what he did in church, shouting "I see a barn!" into the silence of her taking a breath. But everyone that talked to us on the way out was pleased with her presentation, and the slides were rock-solid, so I'm assuming she rocked it like she always does.

So our Thursday was pretty full of excitement, but all went well. We have a shiny new NAS on the way (Dell Storage employees, it is on the way, isn't it?), The Toddler got to see his first tech conference, and we ran all over Milwaukee looking for a hobby store afterward. The one we wanted to go to either closed or moved – the building was empty… And we ended up coming home without the hobby store stop along the way.

Lori preparing to speak at PDS Tech 2010

Free Water!

I will no doubt update you about the new NAS when it arrives, much the same as I have on previous new NAS purchases. Meanwhile, it's back to the joys of WOM for me…