In D&D Parlance, Your Network is Already Converged.
For decades now, the game Dungeons and Dragons has suffered from what is commonly called “Edition Wars”. When the publisher of the game releases a new version, they of course want to sell the new version and stop talking about the old – they’re a business, and it’s tough to stay profitable if people don’t make the jump from version X to version Y. The problem is that people become heavily invested in whatever version they’re playing. When Fourth Edition was released, the MSRP on just the three books required to play the game was $150 or thereabouts. The price has come down, and a careful shopper can get them delivered for about half that now… but that’s still expensive, considering those books only give you enough to play with if you invest a significant amount of time preparing the game beforehand. So those who have spent hundreds or even thousands of dollars on reference material for the immediately previous edition are loath to change, and this manifests as sniping at the new edition. That immediately raises the ire of those who have made the switch, and they begin sniping back at your preferred edition. Since “best” is relative in any game, and more so in a Role Playing Game, it is easy to pick pieces you don’t like out of any given edition and talk about how much better your chosen edition is. This has gone on for so long that it’s nearly a ritual: a new version comes out, people put up their banners and begin nit-picking all other versions. I have a friend (who goes by DungeonDelver in most of his gaming interactions) who is certain that nothing worthy has come out since the release of the original Tactical Studies Rules box set in the early seventies, and other friends who can’t understand why anyone would play those “older versions” of the game. For those not familiar with the industry, “threetard” was coined to describe those who loved Third Edition, for example. While not the worst flame that’s coursed through these conversations, for a while there it was pervasive.

And they all seem to miss the point. Each edition has had good stuff in it; all you have to do is determine what is best for you and your players, and go play. Picking apart someone else’s version might be an entertaining pastime, but it is nowhere near as much fun as actually playing the game – whatever version of the game. Because in the end, they are all the same thing: games designed to let you take on the persona of a character in a fantastical world and go forth to right the wrongs of that world.

A similar problem plays out almost daily in storage, and though it is a bit more complex than the simple “edition wars” of D&D, it is also more constant. We have different types of storage – NAS, SAN, DAS – different protocols and even networks – iSCSI, FCoE, FC, CIFS, etc. – different vendors trying to convince you that their chosen infrastructure is “best”, and a whole lot of storage/systems admins who are heavily invested in whatever their organization uses for primary storage. But, like the edition wars, there is no “right” answer. I for one would love to see a reduction in options, but that is highly unlikely unless and until customers vote definitively with their dollars. The most recent example is the marketing push for “converged networking”. That’s interesting – I could have sworn we were already sending both data (NAS/iSCSI/FCoE) and communications over our IP connections?
Apparently I was wrong, and I need this new expensive gizmo to put data on my network… And that’s just the most recent example.

Some simple advice I’ve picked up in my years watching the edition wars: look at your environment, look at your needs, and continue to choose the storage that makes sense for the application. Not all environments and not all applications are the same, so that’s a determination you need to make. And you should make it vendor-free. Sure, some vendors would rather sell you a multi-million dollar SAN with redundancy and high availability, and sure, some other vendors want to drop a NAS box into your network and then walk away with your money. They’re in the business of selling you what they make, not necessarily what you need. The “what you need” part is your job, and if you’re buying a Mercedes where a Hyundai would do, you’re doing your organization a disservice. Make sure you’re familiar with what’s going on out there, how it fits into your org, and how you can make the most of what you have. RAID makes cheaper disk more appealing, iSCSI makes connecting to a SAN more user-friendly, but both have limits on how much they improve things. Know what your options are, then make a best-fit analysis.

Me? I chose a Dell NX3000 for my last storage purchase – with an iSCSI host. All converged, and not terribly expensive compared to other similar-performing options. But that was for my specific network, with traffic nowhere near what you’re seeing on your enterprise network right now, so my solution is likely not your best solution.

Oh, you meant the edition wars? I play a little of everything, though AD&D First Edition is my favorite and Third Edition is my least favorite. I’m currently playing nearly 100% Castles and Crusades, with a switch soon to AD&D Second Edition. Again, they suit our needs; your needs are likely to vary. Don’t base your decision upon my opinion, base it on your analysis of your needs.

And buy an ARX. They can’t be beat. No, I really believe that, but I only added it here because I think it’s funny after telling you to make your decisions vendor-free. ARX only does NAS ;-).
Store Storing Stored? Or Blocked?
Now that Lori has her new HP TouchSmart for an upcoming holiday gift, we are finally digitizing our DVD collection. You would think that since our tastes are somewhat similar, we’d be good to go with a relatively small number of DVDs… We’re not. I’m a huge fan of well-done war movies and documentaries, we share history and fantasy interests, and she likes a pretty eclectic list of pop-culture movies, so the pile is pretty big. I’m working out how to store them all on the NAS so that we can play them on any TV on the network, and that got me pondering the nature of storage access these days. We own a SAN, but it never occurred to me to put these shows on it – that would limit access to those devices with an FC card… or we’d end up creating a share to run them all through one machine with an FC card acting as a NAS head of sorts.

In the long litany of different ways that we store things – direct attached or networked, cloud or WAN, object store or hierarchical – the distinction that stands out as the most glaring, and the one that has traditionally gotten the most attention, is file versus block. For at least a decade the argument has raged over which is more suited to enterprise use, while most of us have watched from the sidelines, somewhat bemused by the conversation, because the enterprise is using both. As a rule of thumb, if you need to boot from it or write sectors of data to it, you need block. Everything else is generally file.

And that’s where I’m starting to wonder. I know there was a movement not too many years ago to make databases file based instead of block based, and that the big vendors were going in that direction, but I do wonder if maybe it’s time for block to retire at the OS level. Of course, for old disks to be compatible the OS would still have to handle block, but restricting sector reads/writes to OS-level calls (I know, that gets harder with each release – death by a thousand cuts) would resolve much of the problem. Then a VMware-style boot-from-file-structure would resolve the last bit. Soon we could cut our storage protocols in half.

Seriously, at this point in time, what does block give us? Not much, actually. Thin/auto provisioning is available on NAS, high-end performance tweaks are available on NAS, and the extensive secondary network (be it FC or IP) is not necessary for NAS. There are some cases where throughput may demand a dedicated network, but those are not the everyday case in a world of 1 Gig networks with multi-Gig backplanes on most devices – and 10 Gig is available pretty readily these days. SAN has been slowly dying; I’m just pondering the question of whether it should be finished off. People say “SAN is the only thing for high performance!”, but I can guarantee you that I can find plenty of NAS boxes that perform better than plenty of SAN networks – it’s just a question of vendor and connectivity. I’m a big fan of iSCSI, but am no longer sure there’s a need for it out there.

Our storage environment, as I’ve blogged before, has become horribly complex, with choices at every turn, many of which are tied more to vendors and profits than to needs and customer desires. Strip away the marketing and I wonder if SAN has a place in the future of the enterprise. I’m starting to think not, but I won’t declare it dead, as I am still laughing at those who have declared tape dead for the last 20 years – and still are, regardless of what tape vendors’ sales look like. It would be hypocritical of me to laugh at them and make the same type of pronouncement.
SAN will be dead when customers stop buying it, not before. Block will end when vendors stop supporting it, not before… so I really am just pondering the state of the market, playing devil’s advocate a bit. I have heard people proclaim that block is much faster for database access. I have written and optimized B-Tree code, and yeah, it is. But that’s because we write databases to work on blocks. If we used a different mechanism, we’d get a different result. It is no trivial thing to move to a different storage method, but if the DB already supports file access, the work is half done; all that remains is optimizing for the new method or introducing shims to make chunks of files look like blocks. If you think about it, when your DB is running in a VM this is already essentially the case. The VM is in a file, and the DB is in that file… so though the DB might think it is directly accessing disk blocks, it is not. Food for thought.
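To make that last point concrete, here is a minimal sketch – Python chosen purely for illustration, and the backing-file name and 4 KB block size are my own assumptions, not anything from a particular product – of how “block” access is really just fixed-size reads and writes at computed offsets, which an ordinary file serves as readily as a raw device does:

```python
# Minimal illustration: emulating block-style I/O on top of an ordinary file.
# BACKING_FILE plays the role a VM's disk image plays for the guest OS and DB.
import os

BLOCK_SIZE = 4096                    # assumed block size; typical, but arbitrary here
BACKING_FILE = "virtual_disk.img"    # hypothetical backing file

def write_block(f, block_number, data):
    """Write one fixed-size block at its computed byte offset."""
    if len(data) != BLOCK_SIZE:
        raise ValueError("data must be exactly one block long")
    f.seek(block_number * BLOCK_SIZE)
    f.write(data)

def read_block(f, block_number):
    """Read one fixed-size block from its computed byte offset."""
    f.seek(block_number * BLOCK_SIZE)
    return f.read(BLOCK_SIZE)

if __name__ == "__main__":
    # Create a small 16-"block" backing file, then treat it like a block device.
    with open(BACKING_FILE, "wb") as f:
        f.truncate(16 * BLOCK_SIZE)
    with open(BACKING_FILE, "r+b") as f:
        write_block(f, 3, b"B" * BLOCK_SIZE)
        assert read_block(f, 3)[:1] == b"B"
    os.remove(BACKING_FILE)
```

Point the same code at a raw device path instead of a file and nothing changes – which is roughly why the DB inside a VM never notices it is living in a file.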
Given Enough Standards, Define Anarchy
If a given nation independently developed twelve or fourteen governmental systems that all sat side-by-side and attempted to cooperate but never interoperate, anarchy would result. Not necessarily overnight, but issues about who is responsible for what, where a given function is best handled, and more would spring up nearly every day.

Related Articles and Blogs:
NEC’s New I/O Technology Enables Simultaneous Sharing of I/O
Storage Area Networking
Network Attached Storage
SNIA (website)
HP Flexfabric Gets Raves from Storage Networking Vendors
Taking the Final Server Virtualization Steps
There is a trend in the high-tech industry to jump from one hot technology to another without waiting for customers to catch up. We’re certainly seeing it with Cloud; there are people out there pushing the “everyone else is doing it and gaining agility!” button every day. But you’re not there yet. Part of the reason you’re not there yet is that virtualization is still growing up. Between VM sprawl, resource over-utilization, virtual versus physical infrastructure, and the inherent task of IT to continue to support the business as it sits today, there isn’t a ton of time left for hopping on the Cloud bandwagon. And some of these things – VM sprawl and resource over-utilization, for example – counter-indicate a move to Cloud, simply because they will cost you money on a platform that charges you by the rate of transfer or the number of VMs. As Lori so aptly put it in one of her blogs, if you can’t manage it internally, you can’t manage it externally either.

Related Articles and Blogs
Virtual Sprawl is Not the Real Problem
The Virtual Virtualization Case Study
Is VM Stall The Next Big Virtualization Challenge
The Best Virtualization Joke Ever (no, it really is a joke)
Virtualization’s Downsides
Virtualization Planning: 4 Systems Management Keys to Success
The Problem With Storage Growth is That No One Is Minding the Store
In late 2008, IDC predicted more than 61% annual growth for unstructured data in traditional data centers through 2012. The numbers appear to hold up thus far, and may even have been conservative. This was one of the first reports to include the growth from cloud storage providers in its numbers, and that particular group was showing a much higher rate of growth – understandable, since they have to turn up the storage they’re going to resell. The update to this document, titled World Wide Enterprise Systems Storage Forecast and published in April of this year, shows that even in light of the recent financial troubles, storage space is continuing to grow (a quick compounding of that growth rate follows the related links below).

Related Articles and Blogs
Unstructured Data Will Become the Primary Task for Storage
Our Storage Growth (good example of someone who can’t do the above)
Tiered Storage Tames Data Storage Growth says Construction CIO
Data Deduplication Market Driven by Storage Growth
Tiering is Like Tables or Storing in the Cloud Tier
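For a sense of scale, here is a quick back-of-the-envelope compounding of that 61% figure. Only the rate comes from the report cited above; the 2008 baseline of 100 (arbitrary units) is my own assumption for illustration:

```python
# Compound IDC's ~61% annual growth rate over 2009-2012 from an arbitrary baseline.
rate = 0.61
capacity = 100.0            # assumed 2008 baseline, arbitrary units
for year in range(2009, 2013):
    capacity *= 1 + rate
    print(f"{year}: {capacity:.0f}")
# Roughly a 6.7x increase over the four years at that rate.
```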
The State of Storage is not the State of Your Storage
George Crump posted an interesting article over on Storage Switzerland that talks about the current state of the storage market from a protocol perspective. Interestingly to me, CIFS is specifically excluded from the conversation – NAS is featured, but the guts of the NAS bit only talk about NFS. In reality, NFS is a small percentage of the shared storage out there, since CIFS is built into Microsoft systems and is often used at the departmental or project level to keep storage costs down or to lighten the burden on the SAN. But now that I’ve nit-picked, it’s a relatively solid article. A little heavy on Brocade in the SAN section, but not so much that it takes away from the article.

The real issue at hand is to determine what will work for you/your organization/projectX/whatever over the longer term. Applications in enterprises tend to take on a life of their own and keep on going long after the designers and developers have moved off to other projects, other jobs, or sometimes even retirement. That’s a chunk of the reason there are still so many mainframes out there: they weren’t as easy to kill as the distributed crowd (myself included) thought, because they were the workhorses of the 70s and 80s, and those applications are still running today in many organizations. The same is going to be true of enterprise storage. You can choose FCoE or even iSCSI, but they’re a bit higher risk than choosing FC or NAS, simply because FC and NAS are guaranteed to be around for a good long time; there are more than a handful of storage boxes running both. I personally feel that FCoE and iSCSI are safe at this point. They are not without their adherents, and there is a lot of competition for both, signifying vendor belief that needs will grow. But they are still a bigger risk than FC or NAS, for all the reasons stated above.

There’s also the increasing complexity issue. Three of the IT shops I’ve worked in have tried major standardization efforts… none tried to standardize their storage protocol. But that day should be coming. You’re already living with one file-level and one block-level protocol if you’re a mid-sized shop or larger; don’t make it worse unless you’re going to reap benefits that warrant further fragmenting how your storage is deployed. If you’re contemplating cloud computing, your storage is going to become more complex anyway. FCoE is your best option to limit that complexity – I suspect encrypted FCoE will eventually take the cloud, since providers can then put a SAN behind it and be done – but right now it’s just overhead and a new standard for your staff to learn. It certainly doesn’t look like Google Storage for Developers is FCoE compliant, and they’re the gorilla in that room at the moment.

Knowing that you have a base of a given architecture, it is an acceptable choice to focus instead on improving the usage of that architecture and growing it for the time being, with perhaps only a few pilot projects to explore your options and the capabilities of other technologies. As many times as Fiber Channel has been declared dead, I would not be surprised if you’re starting to get a bit sheepish about continuing to deploy it. But Mr. Crump is right: FC has inertia on its side. All that Fiber Channel isn’t going away unless something replaces it that is either close and familiar or so compelling that we’ll need the new functionality the replacement offers. Thus far that protocol has not appeared.

The shared network thing hinders FCoE and iSCSI.
Lots of people worry about putting this stuff on the same network as their applications, due to the congestion it could create. But storage staff are not the people to create a dedicated Ethernet segment for your IP-based storage either, so working with the network team becomes a requirement – which I see as a good thing. To the business, the company has one IT group, and it doesn’t care about the details. Imagine HR saying “we don’t have a system for you to take time off; our compensation sub-team was unable to meet with the time accounting team”. Yeah, that’s the way it sounds when IT starts mumbling about network segments and cross-functional problems. No one gets much past the “We don’t have…” part.

I’m still an iSCSI fan-boy, even though the above doesn’t sound like it. I think it will take work to get the infrastructure right, considering half the terms for an iSCSI network are not the standard fare of storage geeks. But having everything on one network topology is a step toward having everything look and feel the same. The way storage grew up, we naturally consider SAN and NAS two different beasts with two different sets of requirements and two different use cases. To the rest of the world, it is all just storage. And they’re (in general) right.

So instead of looking at adding another protocol to the mix or changing your infrastructure, take a look at optimizations. HBAs are available for iSCSI if you need them (and the more virtualized you are, the more likely they are to be needed). Your FC network could probably use a speed boost, and they’re constantly working on the next larger speed (Mr. Crump says 16 Gb is on the way… astounding). FCoE converged adapters do much the same thing as iSCSI HBAs, but also handle IP traffic at 10 Gb. And 10 Gb will help your NAS too… assuming said NAS can utilize it, or the switch was your bottleneck anyway. Tiering products like our ARX can relieve pressure points on your network behind your back, following rules you have set (a minimal sketch of that kind of rule follows the related links below). FC has virtualization tools that can do much the same, though they’re more complex should you ever lose the virtualization product. As Mr. Crump pointed out in other Storage Switzerland articles, adding an SSD tier can speed applications without a major network overhaul… and for all of these technologies, more disk is always an option. Something like the Dell EqualLogic series can even suck in an entire new array and add it to a partition without you having to do much more than say “yes, yes, this is the partition I want to grow”. Throw in the emerging SSD market for ultra-high-speed access, and, well, major changes in protocol are not required.

So moving forward, paying attention to the market is important, but as always, paying attention to what’s in your data center is more important. The days of implementing “cool new technology” just because it is “cool new technology” are long, long gone for most of us. More on that in another blog, though.

Related Articles and Blogs
SSD is the new Green
FCoE on Wikipedia
Microsoft’s iSCSI Users Guide (MS-Word)
FCIA Advanced Case Studies
Other Storage blogs by me
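As promised above, here is a minimal sketch of the kind of age-based rule a tiering product applies. It is emphatically not how ARX works internally – ARX virtualizes the namespace rather than leaving visibly moved files behind – and the mount points and 90-day threshold are assumptions for illustration only:

```python
# Minimal age-based tiering rule: files untouched for AGE_DAYS move to a cheaper tier.
import os
import shutil
import time

FAST_TIER = "/mnt/fast_nas"      # hypothetical primary share
SLOW_TIER = "/mnt/archive_nas"   # hypothetical archive share
AGE_DAYS = 90                    # assumed "cold" threshold

def migrate_cold_files(src_root, dst_root, age_days):
    """Move files not modified within age_days from src_root to dst_root."""
    cutoff = time.time() - age_days * 86400
    for dirpath, _, filenames in os.walk(src_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if os.path.getmtime(src) < cutoff:
                rel = os.path.relpath(src, src_root)
                dst = os.path.join(dst_root, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)  # a real product would leave a stub or virtualize the namespace
                print(f"tiered {rel}")

if __name__ == "__main__":
    migrate_cold_files(FAST_TIER, SLOW_TIER, AGE_DAYS)
```

Run from cron against a pair of NAS shares, even something this crude takes pressure off the fast tier; the value a real product adds is doing it without breaking the paths your users and applications already know.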