F5 Friday: The More The Merrier
Heterogeneous storage systems remain one of the more difficult data center components to virtualize. F5 ARX and ARX Cloud Extender continue to broaden support for more systems, making it easier to normalize data storage – even if the data and provider interfaces aren't normalized. This week Don joins us to share the latest news from the F5 Data Solutions Group.

The advent of directory virtualization opened up the ability to intelligently tier storage without a lot of manual intervention. Sitting at the strategic point of control between consumers of file services and the providers of those services, with programmable rules and a single-directory architecture, made moving files from tier to tier without impacting users a viable option for both the short and the long term. This means an organization can put in SSD drives for the most frequently utilized and performance-critical files, a tier-one high-speed disk or disk/SSD hybrid for the next level of files, and a tier-two vendor's slower solution for the bulk of files maintained in any given enterprise. The most-used files go on the fastest, most expensive equipment the organization is willing to purchase, while the bulk of files go on slower but less expensive disk. For most of you this is nothing new; it is the real benefit of storage tiering.

But unstructured data growth just continues to roll on – at an astounding 61% annually, according to research firm IDC – without concern for the fact that many organizations are running on much tighter budgets than they were five years ago, or that those budgets are the status quo going forward. That is a whole lot of data to manage while trying to do all of the other things that IT is responsible for. It drives the relevance of storage tiering to the forefront, and it makes an alternate storage mechanism appealing for companies that have identified the least frequently used data on their traditional NAS storage.

Enter F5 ARX Cloud Extender, the tool that allows your directory virtualization appliance to make use of cloud storage and/or cloud storage gateways. When first released, ARX Cloud Extender was qualified for use with Iron Mountain VFS Cloud Storage, Amazon S3, and NetApp Storage Grid. The programmable API used to interface with these vendors was always intended to expand the offering and give customers the most options for their cloud storage needs.
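To make the tiering idea concrete, here is a minimal Python sketch of the kind of policy a directory-virtualization layer automates: classify files by how recently they were accessed and decide which tier they belong on. The tier names, age thresholds, and mount path are invented for the example; ARX applies rules like this inside the appliance, behind its single namespace, not as a script run against a share.

```python
import os
import time

# Hypothetical tier thresholds (days since last access) -- illustrative only.
TIERS = [
    ("tier0-ssd", 7),         # touched in the last week -> fastest storage
    ("tier1-fast-disk", 90),  # touched in the last quarter -> mid tier
    ("tier2-cloud", None),    # everything else -> cheapest tier or cloud
]

def classify(path, now=None):
    """Pick a tier for a file based on its last-access time."""
    now = now or time.time()
    age_days = (now - os.stat(path).st_atime) / 86400
    for tier, max_age_days in TIERS:
        if max_age_days is None or age_days <= max_age_days:
            return tier
    return TIERS[-1][0]

def plan_moves(root):
    """Walk a share and report which tier each file belongs on."""
    for dirpath, _, files in os.walk(root):
        for name in files:
            full_path = os.path.join(dirpath, name)
            yield full_path, classify(full_path)

if __name__ == "__main__":
    for path, tier in plan_moves("/mnt/example-share"):  # hypothetical mount
        print(f"{tier:16s} {path}")
```

Because ARX owns the directory namespace, the actual migration happens without clients noticing; a script like this could only tell you where files should live, not move them transparently.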
The Right (Platform) Tool For the Job(s).

One of my hobbies is modeling – mostly for wargaming, but also for the sake of modeling itself. In an average year I do a lot of WWII models, some modern military, some civilian vehicles, figures from an array of historical time periods, and the occasional sci-fi figure for one of my sons… the oldest (24) being a Warhammer 40K player and the youngest (3) just plain enjoying anything that looks like a robot. While I have been modeling more or less for decades, only in the last five years have I had the luxury of owning an airbrush, and even then I restrict it to very limited uses – mostly base-coating larger models like cars, tanks, or spaceships.

The other day I was reading on my airbrush vendor's website and discovered that they had purchased a competitor that specialized in detailing airbrushes – so detailed that the line is used to decorate fingernails. This got me thinking that I could do more detailed bits on models – like shovel blades and flesh tones – with an airbrush if I had one of these little detail brushes. Lori told me to send her a link so she had it on the list of possible gifts, so I went out and started researching which model in the line was best suited to my goals.

The airbrush I have is one of the best on the market – a Badger Airbrush Company model 150. It is dual-action, which means that pushing down on the trigger lets air out, and pulling the trigger back while pushing down lets an increasing amount of paint flow through. I use this to control the density of paint I'm applying, but have never thought too much about it. In my research I wanted to see how much difference there was between my airbrush and the Omni I was interested in. The answer… almost none. Which confused me at first, as my airbrush – even with the finest needle and tip available and a pressure valve on my compressor to control the amount of air being pumped through it – sprays a lot of paint at once. So I researched further, and guess what? The volume of paint, controlled by how far you draw back the trigger, combined with the PSI you allow through the regulator, determines the width of the paint flow. My existing airbrush can get down to 2mm – sharpened-pencil-point widths. I have a brand-new fine tip and needle (in poor lighting I confused my fine needle with my reamer and bent the tip a few weeks ago, so I ordered a new one), and my pressure regulator is a pretty good one. All that is left is to play with it until I have the right pressure, and I may be doing more detailed work with my airbrush in the near future.

Airbrushing isn't necessarily better – for some jobs, like single-color finishes, I like the results better, because if you thin the paint and go with several coats you can get a much more uniform worn look to surfaces – but overall it is just different. The reason I would want to use my airbrush more is, simply, time. Because you don't have to worry about crevices and such (the air blows paint into them), you don't have to take nearly as long to paint a given part with an airbrush as you do with a brush. At least for the base coat, anyway; you still need a brush for highlighting and shadowing… or at least I do. But it literally cuts hours off a group of models if I can arrange one trip down to the spray area versus brush-painting those same models.

What does all of this have to do with IT? The same thing it usually does.
You have a ton of tools in your data center that do one job very well, but you have never had reason to look into alternate uses the tool might handle just as well or better. This is relatively common with Application Delivery Controllers, which are brought in just to do load balancing, or just for application acceleration, or just for WAN optimization, while the other things the tool does just as well go unexplored. You might want to do some research on your platforms, just to see if they can serve other needs than the ones you're putting them to today. Let's face it: you've paid for them, and in many cases they will do even more as-is or with a slight cost add-on. It is worth knowing what "more" is for a given product, if for no other reason than having that information in your pocket when exploring solutions going forward.

A similar situation is starting to develop with our ARX family of products, and no doubt with some competitors also (though I haven't heard it from competitors – I'm simply conjecturing). As ARX grows in its capabilities, many existing customers aren't taking advantage of the sweet new tools available to them for free or for a modest premium on their existing investment. ARX Cloud Extender is the largest case of this phenomenon that I know of, but this week's EMC Atmos announcement might well go a long way toward changing that. To me it is very cool that ARX can virtualize your NAS devices AND include cloud and/or object storage alongside NAS so that it all appears to be one large pool of storage. Whether you're a customer or not, it's worth checking out.

Of course, like my airbrush, you'll have some learning to do if you try new things with your existing hardware. I'll spend a couple of hours with the airbrush figuring out how to make reliable lines of those sizes, then determine where best to use it. While I could have achieved the same or similar results with masking, the time investment for masking is large and repetitive, and the dollar cost repeats with every project. I also could have paid a large chunk of money for a specialized detail airbrush, but then I'd have two tools to maintain when one will do it all… And this is true of alternatives to learning new things about your existing hardware: the learning curve will be there whether you implement new functionality on your existing platforms or purchase a point solution, so it's best to figure out the cost in time and money to solve the problem from either direction. Often you'll find the cost of learning a new function on familiar hardware is much lower than purchasing and learning all-new hardware.

WWII Russians – vehicle is airbrushed, figures not.
I Want My Converged IP…

Every once in a while you hear something going on in the political spectrum that strikes you as meaningful and useful, something you hope against hope they will manage to get past the details and partisanship and move forward on. Right about the time of this writing, in the United States, we're hearing a lot of these things because it's an election cycle. The problem is that with 300 million people it is rare that any one idea is agreed upon by everyone, and politicians cover the spectrum, so often what sounds like a good idea is tough to push through – and election years tend to generate more ideas and less action than other years.
A Storage (Capacity) Optimization Buying Spree!

Remember when Beanie Babies were free in Happy Meals, and tons of people ran out to buy the Happy Meals but only really wanted the Beanie Babies? Yeah, that's what the storage compression/dedupe market is starting to look like these days. Lots of big names are out snatching up at-rest de-duplication and compression vendors to get the products onto their sales sheets. We'll have to see if they wanted the real value of such an acquisition – the bright staff that brought these products to fruition – or if they're buying for the product and going to give or throw away the meat of the transaction. Yeah, that sentence is so pun-laden that I think I'll leave it like that. Except there is no actual meat in a Happy Meal; I'm pretty certain of that.

Today IBM announced that it is formally purchasing Storwize, whose file compression appliance is designed to compress data on NAS devices. That leaves few enough players in the storage optimization space, and only one – Permabit – whose name I readily recognize. Since I wrote the blog about Dell picking up Ocarina, and that blog is still being read pretty avidly, I figured I'd weigh in on this one also.

Storwize is a pretty smart purchase for IBM on the surface. The products support NAS at the protocol level – they claim to be "storage agnostic," but personal experience in the space is that there's no such thing… CIFS and NFS tend to require tweaks from vendor A to vendor B, meaning that to be "agnostic" you have to "write to the device." An interesting conundrum. Regardless, they support CIFS and NFS, are stand-alone appliances that the vendors claim are simple to set up and require little or no downtime, and offer straight-up compression. Storwize and IBM are both claiming zero performance impact; I cannot imagine how that is possible in a compression engine, but that's their claim. The key here is that they work on everyone's NAS devices. If IBM is smart, the products will still work on everyone's devices in a year.

Related Articles and Blogs
IBM Buys Storwize
Dell Buys Ocarina Networks
Wikipedia definition – Capacity Optimization
Capacity Optimization – A Core Storage Technology (PDF)
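The post above is about in-line compression of data at rest on NAS shares. As a rough, hypothetical illustration of the trade-off behind a "zero performance impact" claim, the Python snippet below compresses a file with zlib and reports the ratio achieved and the CPU time spent getting it. It is not how Storwize works internally; the file path and compression level are placeholders.

```python
import time
import zlib

def compression_report(path, level=6):
    """Compress a file in memory and report the ratio and time spent."""
    data = open(path, "rb").read()
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(data) / max(len(compressed), 1)
    return {
        "original_bytes": len(data),
        "compressed_bytes": len(compressed),
        "ratio": round(ratio, 2),
        "seconds": round(elapsed, 4),
    }

if __name__ == "__main__":
    # Hypothetical file on a CIFS/NFS mount -- substitute your own path.
    print(compression_report("/mnt/nas-share/report.csv"))
```

Running it against a few representative files gives a quick feel for both the space savings and the CPU cost that an in-line appliance has to absorb somewhere.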
Dell Buys Ocarina Networks. Dedupe For All?

De-duplication of storage at rest has been a growing point of interest for most IT staffs over the last year or so, simply because de-duplication allows you to purchase less hardware over time – and if that hardware is a big old storage array sucking a ton of power and costing a not-insignificant amount to install and maintain, well, it's appealing. Most of the recent buzz has been about primary storage de-duplication, but that is merely a case of where the market is. Backup de-duplication has existed for a good long while, and secondary storage de-duplication is not new. Only recently have people decided that at-rest de-dupe is stable enough to give it a go on their primary storage – where all the most important and/or active information is kept. I don't think I'd call it a "movement" yet, but it does seem that the market's resistance to anything that obfuscates data storage is eroding at a rapid rate, due to the cost of the hardware (and attendant maintenance) needed to keep up with storage growth.

Related Articles and Blogs
Dell-Ocarina deal will alter landscape of primary storage deduplication
Data dedupe technology helps curb virtual server sprawl
Expanding Role of Data Deduplication
The Reality of Primary Storage Deduplication
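For readers new to the technique the post discusses, here is a minimal, hypothetical sketch of block-level de-duplication in Python: split data into fixed-size blocks, hash each block, and count how many are unique. Real products use variable-length chunking, collision handling, and far smarter indexing; the file paths here are placeholders for your own data.

```python
import hashlib

def dedupe_estimate(paths, block_size=4096):
    """Fixed-size block dedupe estimate: hash every block, keep unique ones."""
    seen = set()
    total_blocks = 0
    for path in paths:
        with open(path, "rb") as f:
            while True:
                block = f.read(block_size)
                if not block:
                    break
                total_blocks += 1
                seen.add(hashlib.sha256(block).hexdigest())
    unique = len(seen)
    ratio = total_blocks / max(unique, 1)
    return total_blocks, unique, round(ratio, 2)

if __name__ == "__main__":
    # Hypothetical sample files -- point this at real data to see your own ratio.
    blocks, unique, ratio = dedupe_estimate(["/data/vm1.img", "/data/vm2.img"])
    print(f"{blocks} blocks, {unique} unique, approx {ratio}:1 dedupe")
```

Duplicate-heavy data such as VM images and backup sets is where ratios like these get dramatic, which is part of why backup de-duplication matured first.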
The Storage Future is Cloudy, and it is about time.

One of the things I have talked about quite a bit in the last couple of months is the disjoint between the needs of enterprise IT and the offerings of a wide swath of the cloud marketplace. Sometimes it seems like many cloud vendors are telling customers "here's what we choose to offer you, deal with it." The problem is that oftentimes what they're offering is not what the enterprise needs. There are, of course, some great examples of how to do cloud for the enterprise. Rackspace (among others) has done a smashing job of offering users a server, with added services to install a database or web server on it. There are still some security concerns the enterprise needs to address, but at least it's a solid start toward giving IT the option of using the cloud in the same manner that they use VMs. Microsoft has done a good job of setting up databases that follow a similar approach: if you use MS database products, you know almost all you need to know to run an Azure database.

The storage market has been a bit behind the rest of the cloud movement, with several offerings that aren't terribly useful to the enterprise and a few that are, more or less. That is changing, and I personally am thrilled. The most recent entry into cloud storage that is actually useful to the enterprise – without rewriting entire systems or buying a cloud storage gateway – is Hitachi Data Systems (HDS). HDS announced their low-risk cloud storage services on the 29th of June, and the press largely yawned. I'm not terribly certain why they didn't get as excited as I am, other than that they are currently suffering information overload where cloud is concerned. With the style of offering HDS has set up, you can use your HDS gear as an "internal" cloud and their services as an "external" cloud, all managed from the same place. And most importantly, all of it presents as the storage you're used to dealing with.

The trend of many companies to offer storage as an API is self-defeating, as it seriously limits usefulness to the enterprise and has spawned the entire (mondo-cool, in context) cloud storage gateway market. The HDS solution lets you hook up your disk as if it were disk – you don't write extra code in every application to utilize disk space "in the cloud"; you use the same methods you always have "in the DC." To do the same with most other offerings requires the purchase of a cloud storage gateway. So you can have your cloud and internal too.

The future of storage is indeed looking cloudy these days, and I'm glad for it. Let the enterprise use cloud, and instead of telling them "everyone is doing it," give them a way to use it that makes sense in the real world. The key here is enabling. Now that we're past the early offerings, the winners' circle will be filled with those that can make cloud storage accessible to IT. After that, the winners' circle will slowly be filled with those who make it accessible to IT processes and secured against everything else. And it's coming, so IT actually can take advantage of what is an astounding concept for storage. If you're not an HDS customer (and don't want to be), cloud storage gateways can give you similar functionality, or you can hang out for a little bit. Looking like local storage is what must happen to cloud storage for it to be accessible, so more is on its way.
And contrary to some of the hype out there, most of the big storage vendors have the technology to build an environment like – or even better than – the one Hitachi Data Systems has put together, so have no doubt that they will, since the market seems to be going there. There are some bright people working for these companies, so likely they're almost there now. You might say that EMC has "been there, done that," but not in a unified manner, in my estimation. I for one am looking forward to seeing how this all pans out. Now that cloud storage is doing for customers instead of to them, it should be an interesting ride. Until then, cloud storage vendors take note: storage is not primarily accessed, data files created or copied, or backups performed through an API.

Related Articles and Blogs:
Hitachi Data Systems to Help Customers Deploy…
EMC Announces New End-to-End Storage Provisioning Solution
EMC Cans Atmos Online Service
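The "storage as an API" complaint in the post above is easier to see side by side. The hypothetical Python sketch below contrasts the file I/O applications already do against a mounted share with the PUT/GET object calls that early cloud storage required; the ObjectStoreClient is a made-up, in-memory stand-in, not any vendor's actual SDK. A cloud storage gateway – or an offering like the HDS one described above – exists precisely to hide the second style behind the first.

```python
class ObjectStoreClient:
    """Made-up, in-memory stand-in for an S3-style object API."""
    def __init__(self):
        self._buckets = {}

    def put_object(self, bucket, key, body):
        self._buckets.setdefault(bucket, {})[key] = body

    def get_object(self, bucket, key):
        return self._buckets[bucket][key]

# Style 1: what applications already do -- plain file I/O on a mounted share.
with open("/tmp/report.txt", "w") as f:
    f.write("quarterly numbers\n")

# Style 2: what "storage as an API" asks every application to be rewritten for.
store = ObjectStoreClient()
store.put_object(bucket="corp-archive", key="reports/report.txt",
                 body=b"quarterly numbers\n")
print(store.get_object("corp-archive", "reports/report.txt"))
```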
If I Were in IT Management Today…

I've had a couple of blog posts talking about how there is a disconnect between "the market" and "the majority of customers" where things like cloud (and, to a lesser extent, storage) are concerned. So I thought I'd try this out as a follow-on: if I were running your average medium-to-large IT shop (not talking extremely huge, just medium to large), what would I be focused on right now?

By way of introduction, for those who don't know, I'm relatively conservative in my use of IT. I've been around the block and been burned a few times (OS/2 beta tester, WFW, WP… the list goes on). The organizations I've worked for where I was part of Enterprise IT were all relatively conservative (utilities, financials), while the organizations I worked in Product or App Development for were all relatively cutting-edge. I've got a background in architecture, app dev, and large systems projects, and I think that IT management is (sadly) 50% corporate politics and 50% actually managing IT.

I'll focus on problems that we all have in general, rather than a certain vertical, and most of these problems are applicable to all but the largest and smallest IT shops today. By way of understanding, this list is the stuff I would be spending research or education time on, and it is kept limited because the bulk of your and your staff's time is, of course, spent achieving or fixing for the company, not researching. Though most IT shops I know of have room for the amount of research I'm talking about below.
The Problem With Storage Growth is That No One Is Minding the Store

In late 2008, IDC predicted a more than 61% annual growth rate for unstructured data in traditional data centers through 2012. The numbers appear to hold up thus far, and were perhaps even conservative. This was one of the first reports to include growth from cloud storage providers in its numbers, and that particular group was showing a much higher rate of growth – understandable, since they have to turn up the storage they're going to resell. The update to this document, titled Worldwide Enterprise Systems Storage Forecast and published in April of this year, shows that even in light of the recent financial troubles, storage space is continuing to grow.

Related Articles and Blogs
Unstructured Data Will Become the Primary Task for Storage
Our Storage Growth (good example of someone who can't do the above)
Tiered Storage Tames Data Storage Growth says Construction CIO
Data Deduplication Market Driven by Storage Growth
Tiering is Like Tables or Storing in the Cloud Tier
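To put the 61% figure in concrete terms, here is a small worked example in Python (my own arithmetic, not a figure from the IDC report): capacity compounding at that rate multiplies roughly 6.7x over four years.

```python
def project_growth(start_tb, annual_rate, years):
    """Compound a starting capacity at a fixed annual growth rate."""
    capacity = start_tb
    for year in range(1, years + 1):
        capacity *= (1 + annual_rate)
        print(f"Year {year}: {capacity:,.0f} TB")
    return capacity

# Hypothetical shop: 100 TB of unstructured data growing 61% per year over four years.
project_growth(start_tb=100, annual_rate=0.61, years=4)
# Roughly 161, 259, 417, 672 TB -- call it a 6.7x increase in four years.
```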
Our data is so deduped that no two bits are alike!

Related Articles and Blogs
Dedupe Ratios Do Matter (NWC)
Ask Dr Dedupe: NetApp Deduplication Crosses the Exabyte Mark (NetApp)
Dipesh on Dedupe: Deduplication Boost or Bust? (CommVault)
Deduplication Ratios and their Impact on DR Cost Savings (About Restore)
Make the Right Call (Online Storage Optimization) – okay, that one's a joke
BIG-IP WAN Optimization Module (F5 – PDF)
Like a Matrushka, WAN Optimization is Nested (F5 DevCentral)