F5 ARX Cloud Extender
Remember When Hand Carts Were State Of The Art? Me either.
Funny thing about the advancement of technology: in most of the modern world we enshrine it, spend massive amounts of money to find “the next big thing”, and act as if change is not only inevitable, but rapid. The truth is that change is inevitable, but not necessarily rapid, and sometimes it’s about necessity. Sometimes it is about productivity. Sometimes it just plain isn’t about either. Handcarts are still used for serious purposes in parts of the world, by people who are happy to have them and think a motorized vehicle would be a waste of resources. Think on that for a moment. What high-tech tool that was around 20 years ago are you still using? Let alone 200 years ago. The replacement of handcarts as a medium for transport not only wasn’t instant, it’s still going on 100 years after cars were first mass produced.

Handcart in use – Mumbai Daily

We in high-tech are constantly in a state of flux from this technology to that solution to the other architecture. The question you have to ask yourself – and this is getting more important for enterprise IT, in my opinion – is “does this do something good for the company?” It used to be that IT folks could try out all sorts of new doo-dads just to play with them and justify the cost based on the future potential benefit to the company. I’d love to say that this had a powerful positive effect, but frankly, it only rarely paid off. Why? Because we’re geeks. We buy this stuff on our own dime if the company won’t foot the bill, and our eclectic tastes don’t necessarily jibe with the needs of the organization.

These days, the change is pretty intense, and focuses on infrastructure and application deployment architectures. Where can you run this application, and what form will the application take? Virtualized? Dedicated hardware? Cloud? The list goes on. And all of these questions spur thoughts about security, storage, and the other bits of infrastructure required to support an application no matter where it is deployed. These are things that you can model in your basement, but can’t really test out, simply because the architecture of an enterprise is far more complex than the architecture of even the geekiest home network. Lori and I have a pretty complex network in our basement, but it doesn’t hold a candle to our employer’s worldwide network supporting dev and sales offices on every continent, users in many languages, and a potpourri of access methods that must be protected and available.

Sometimes, change is simply a change of perspective. F5’s new iApps, for example, put the ADC infrastructure bits together for the application: instead of managing application security within the module that handles application security (ASM), an iApp bundles security in with all of the other bits – like load balancing, SSL offload, etc. – that an application requires. This is pretty powerful. It speeds deployment and troubleshooting because everything is in one place, and it speeds adding another machine because you simply apply the same iApp template. That means you spin up another instance of the VM in question, tweak the settings, apply the template already being used on existing instances, and you’re up.

Sometimes, change is more radical. Deploying to the cloud is a good example of this, and cloud deployments suffer for it. Indeed, private and hybrid clouds are growing rapidly precisely because of the radical change that public cloud can introduce. Cloud storage was so radical that very few were willing to use it, even as most thought it was a good idea.
Along came cloud storage gateways like our ARX Cloud Extender or a variety of others, and suddenly the weakness was ameliorated… because the radical bit of cloud storage was simply that it didn’t talk the way storage traditionally has. With a gateway, it does. And with most gateways (check with your provider) you get compression and encryption, making the cloud storage more efficient and secure in the process (a rough sketch of that write path closes out this post).

But like the handcart, the idea that cloud, or virtualization, or consumerization must take hold overnight – and that you’re behind the times if you weren’t doing it yesterday – is misplaced. Figure out what’s best for your organization, not just in terms of technology, but in terms of timelines also. Sure, some things, like support for the CEO’s iPad, will take on a life of their own, but in general you’ve got time to figure out what you need, when you need it, and how best to implement it.

As I’ve mentioned before, at the cutting edge of technology, when the hype cycle is way overblown, that’s where you’ll find the largest number of vendors that won’t be around to support you in five years. If you can wait until the noise about a space quiets down, you’ll be better served, because the level of competition will have eliminated the weaker companies and you’ll be dealing with the technological equivalent of the Darwinian most fit. Sure, some of those companies will fail or get merged also, but the chances that your vendor of choice won’t – or that their products will live on – are much better after the hype cycle. After all, even though engine-powered conveyances have largely replaced hand carts, have you heard of White Motor Company, Autocar Company, or Diamond T Company? All three made automobiles. They lived through boom and were swallowed in bust. Though in automobiles the cycle is much longer than in high-tech (Autocar started in the late 1800s and was purchased by White in the 1950s, for example, and White was itself later purchased by Volvo), the same process occurs, so count on it. And no, I haven’t developed a sudden interest in automobile history; all of these companies thrived making half-tracks in World War Two, which is how I knew to look for them amongst the massive number of failed car companies. Stay in touch with the new technologies out there, pay attention to how they can help you, but as I’ve said quite often, what’s in the hype cycle isn’t necessarily what is best for your organization.

1908 Autocar XV (Wikipedia.org)

Of course I think things like our VE product line and our new V.11 with both iApps and app mobility are just the thing for most organizations, but even with those I will say “depending upon your needs”. Because contrary to what most marketing and many analysts want to tell you, it really is about your organization and its needs.

When The Walls Come Tumbling Down.
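To make the compression-and-encryption point above a bit more concrete, here is a minimal, purely illustrative Python sketch of a gateway-style write path: compress, then encrypt, before anything leaves the building. This is not how ARX Cloud Extender is implemented; the function names and key handling are invented for the example, and it assumes the third-party cryptography package is installed.

```python
import zlib
from cryptography.fernet import Fernet  # assumes the third-party "cryptography" package

# Hypothetical illustration only -- not the ARX Cloud Extender implementation.
# A gateway-style write path: compress, then encrypt, before the bytes ever
# leave the datacenter for the cloud provider.

key = Fernet.generate_key()   # in practice the key stays with the gateway, not the cloud
cipher = Fernet(key)

def to_cloud(file_bytes: bytes) -> bytes:
    """What a gateway might do on write: deflate, then encrypt."""
    return cipher.encrypt(zlib.compress(file_bytes))

def from_cloud(stored_bytes: bytes) -> bytes:
    """And the reverse on read: decrypt, then inflate."""
    return zlib.decompress(cipher.decrypt(stored_bytes))

if __name__ == "__main__":
    original = b"CIFS/NFS file contents " * 100
    shipped = to_cloud(original)
    assert from_cloud(shipped) == original
    print(f"{len(original)} bytes locally, {len(shipped)} bytes shipped to the cloud")
```

Note the order: compression has to happen before encryption, because well-encrypted data looks random and will not compress.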
When horrid disasters strike and both people and corporations are put on notice that they suddenly have a lot more important things to do, will you be ready? It is a testament to man’s optimism that, with very few exceptions, we really aren’t – not at the personal level, not at the corporate level. I’ve worked a lot of places, and none of them had a complete, ready-to-rock DR plan. The insurance company I worked at was the closest – they had an entire duplicate datacenter sitting dark in a location very remote from HQ, awaiting need. Every few years they would refresh it to make certain that the standby DC had the correct equipment to take over, but they counted on relocating staff from what would be a ravaged area in the event of a catastrophe, and were going to restore thousands of systems from backups before the remote DC could start running. At the time it was a good plan. Today it sounds quaint. And it wasn’t that long ago.

There are also a lot of you who have yet to launch a cloud initiative of any kind. This is not from lack of interest, but more because you have important things to do that are taking up your time. Most organizations are dragging their feet replacing people, and few – according to a recent survey, very few – are looking to add headcount (proud plug that F5 is – check out our careers page if you’re looking). It’s tough to run off and try new things when you can barely keep up with the day-to-day workloads. Some organizations are lucky enough to have R&D time set aside. I’ve worked at a couple of those too, and honestly, they’re better about making use of technology than those who do not have such policies. Though we could debate if they’re better because they take the time, or take the time because they’re better.

And the combination of these two items brings us to a possible pilot project. You want to be able to keep your organization online, or be able to bring it back online quickly, in the event of an emergency. Technology is making it easier and easier to complete this arrangement without investing in an entire datacenter and constantly refreshing the hardware to have quick recovery times. Global DNS in various forms is available to redirect users from the disabled datacenter to a datacenter that is still capable of handling the load; if you don’t have multiple datacenters, it can redirect elsewhere – like to virtual servers running in the cloud. ADCs are starting to be able to work similarly whether they are cloud deployed or DC deployed. That leaves keeping a copy of your necessary data and applications in the cloud, and cloud storage with a cloud storage gateway – such as the Cloud Extender functionality in our ARX product – allows for this to be done with a minimum of muss and fuss. These technologies, used together, yield a DR architecture that looks something like this:

Notice that the cloud extender isn’t listed here, because it is useful for getting the data copied, but would most likely reside in your damaged datacenter. Assuming that the cloud provider was one like our partner Rackspace, which does both cloud VMs and cloud storage, this architecture is completely viable. You’ll still have to work some things out, like guaranteeing that security in the cloud is acceptable, but we’re talking about an emergency DR architecture here, not a long-running solution, so app-level security and functionality to block malicious attacks at the ADC layer will cover most of what you need. AND it’s a cloud project.
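The “Global DNS redirects users” piece is the heart of that architecture. As a purely conceptual sketch – this is not GTM configuration, and the site names and addresses are hypothetical – the decision being made looks roughly like this:

```python
import socket

# Purely illustrative: the decision a global DNS (GSLB) tier makes when a
# datacenter goes dark. The site names and addresses below are hypothetical.
SITES = [
    {"name": "primary-dc", "app_ip": "192.0.2.10", "probe": ("192.0.2.10", 443)},
    {"name": "cloud-dr", "app_ip": "198.51.100.10", "probe": ("198.51.100.10", 443)},
]

def site_is_up(probe, timeout=2.0):
    """Crude health monitor: can we open a TCP connection to the site?"""
    try:
        with socket.create_connection(probe, timeout=timeout):
            return True
    except OSError:
        return False

def resolve_app():
    """Answer a DNS query for the app with the first healthy site's address."""
    for site in SITES:
        if site_is_up(site["probe"]):
            return site["app_ip"]
    raise RuntimeError("no healthy site to send users to")

if __name__ == "__main__":
    print("users are currently being sent to", resolve_app())
```

A real global load balancer layers on smarter health monitors, persistence, and weighting, but the failover logic boils down to answering the DNS query with an address that is actually up.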
The cost is far, far lower than a full-blown DR project, and you’ll be prepared in case you need it. This buys you time to ingest the fact that your datacenter has been wiped out. I’ve lived through it; there is so much that must be done immediately – finding a new location, dealing with insurance, digging up purchase documentation, recovering what can be recovered… Having a plan like this one in place is worth your while. Seriously. It’s a strangely emotional time, and having a plan is a huge help in keeping people focused.

Simply put, disasters come, often without warning – mine was a flood caused by a broken pipe. We found out when our monitoring equipment fried from being soaked and sent out a raft of bogus messages. The monitoring equipment was six feet above the floor at the time. You can’t plan for everything, but to steal and twist a famous phrase, “he who plans for nothing protects nothing.”

Whither Cloud Gateways?
Farm tractors and military tanks share an intertwined history that started when some smart person proposed the tracks on some farming equipment as the cross-country tool that tanks needed to get across a rubble- and shell-hole-strewn World War One battlefield. For the ensuing sixty years, improvements in one set of tracks spurred improvements in the other. Early on it was the farm vehicles developing improvements, but through World War Two and even some today, tanks did most of the developing. That is simply a case of experience. Farmers and farm tractor manufacturers had more experience when tanks were first invented, but the second world war and the variety of terrain, climate, and usage gave tanks the edge. After World War Two, the Cold War drove much more research money into tank improvements than commercial tractors received, so the trend continued. In fact, construction equipment eventually picked up where farming equipment dropped off. This is no coincidence; bulldozers received a lot of usage in the same wildly varying terrain as tanks during the second world war. Today, nearly all tracked construction equipment can trace its track and/or road wheel arrangements back to a specific tank (one bulldozer brand, for example, uses a slightly modified LT vz. 35 – Panzer 35(t) in German service – wheel system, invented in Czechoslovakia in the 1930s; that suspension was a modification of an even earlier Vickers tank design).

Bradley AFV tug-o-war with a Farm Tractor

What does all this have to do with cloud gateways? Well, technology follows somewhat predictable patterns, be it cloud and cloud communications or track and suspension systems. Originally, cloud gateways came out a few years back as the solution to making the cloud work for you. Not too long after cloud storage came along, some smart people thought the cloud gateway idea was a good one, and adopted a modified version called Cloud Storage Gateways. The driving difference between the two, from the perspective of users, was that cloud storage was practically useless without a gateway, while the cloud could be used for application deployment in a much broader sense without a gateway.

So Cloud Storage Gateways like F5’s ARX Cloud Extender are a fact of life. Without them, cloud storage is just a blob that does not communicate with the rest of your storage infrastructure – including the servers that need to access said storage. With a Cloud Storage Gateway, storage looks and acts the way all of the other IT products out there expect it to. In the rush, Cloud Gateways largely fell by the wayside. Citrix sells one, and CloudSwitch is making a good business of it (there are more startups than just CloudSwitch, but they seem to be leading the pack), but the uptake seems to be nothing like the Cloud Storage Gateway uptake. And I think that’s a mistake. A cloud gateway is the key to cloud interoperability, and every organization needs at least a bare-minimum level of cloud portability, simply so they can point out to their cloud vendor that there are other players in the cloud space should the relationship become unprofitable for the customer. Add to that the ability to secure data on its way to the cloud and back, and Cloud Gateways are hugely important. What I don’t know is why uptake and competition in the space seem so slight. My guess would be that organizations aren’t attempting to integrate cloud-deployed applications into their architecture in the manner that cloud storage must be integrated in order to be used.
Which would scream that cloud has not yet begun actual adoption. Though it doesn’t indicate whether that’s because cloud is being dismissed by decision-makers as a place to host core applications, or just that uptake is slow. I’d be interested in hearing from you if you have more data that I’m somehow missing. It just seems incongruous to me that uptake isn’t closer to cloud usage uptake claims. Meanwhile, security (encryption, tunneling, etc.) can be had from your BIG-IP… But no, I don’t think BIG-IP is the reason Cloud Gateway uptake seems so low, or I wouldn’t have written this blog. I know some people are using it that way, with LTM-VE on the cloud side and LTM on the datacenter side, but I have no reason to suspect it is a large percentage of our customer base (I haven’t asked, this is pure conjecture).

I’d like to see the two “gateway” products move in cooperative fits and starts until they are what is needed to secure, utilize, and access portable cloud-deployed applications and storage. You decide which is tank and which is tractor though… And since we’re talking about tanks, at least a little bit, proof that ever-smaller technology is not new in the computer age:

The Nazi Goliath tank – Courtesy of militaryphotos.net

Related Blogs:
Cloud Storage Gateways. Short term win, but long term…?
Cloud Storage Gateways, stairway to (thin provisioning) heaven?
Certainly Cirtas! Cloud Storage Gains Momentum
Tiering is Like Tables, or Storing in the Cloud Tier
Cloud Storage Use Models
Useful Cloud Advice, Part One. Storage
Cloud Storage: Just In Time For Data Center Consolidation.
Don MacVittie - F5 BIG-IP IPv6 Gateway Module
If I Were in IT Management Today…

Copied Data. Is it a Replica, Snapshot, Backup, or an Archive?
It is interesting to me how many variant Transformers have been put out over the years, and the effect that has on those who like Transformers. There are four different “Construction Devastator” figures put out over the years (there may be more, I know of four), and every Transformers collector or fan that I know – including my youngest son – wants them all. That’s great marketing on the part of Hasbro, for certain, but it does mean that those who are trying to collect them are going to have a hard time of it, just because they were produced and then stopped, and all of them consist of seven or more parts. That’s a lot of things to go wrong. But still, it is savvy for Hasbro to recognize that a changed Transformer equates to more sales, even though it angers the diehard fans.

As time moves forward, technology inevitably changes things. In IT that statement implies “at the speed of light”. Just like your laptop has been replaced with a newer model before you get it, and is “completely obsolete” within 18 months, so other portions of the IT field are quickly subsumed or consumed by changes. The difference is that IT is less likely to get caught up in the “new gadget” hype than the mass market. So while your laptop was technically outdated before it landed in your lap, IT knows that it is still perfectly usable and will only replace it when the warranty is up (if you work for a smart company) or it completely dies on you (for a company pinching pennies). The same is true of every piece of storage; it is just that we don’t suffer from “Transformer Syndrome”. Old storage is just fine for our purposes, unless it actually breaks. Since you can just continue to pay annual licensing fees, there’s no such thing as “out of warranty” storage unless you purchase very inexpensive gear, or choose to let support lapse. For the very highest end, letting it lapse isn’t an option, since you’re licensing the software. The same is true with how we back up and restore that data.

Devastator, image courtesy of Gizmodo.com

But even with a stodgy group like IT, who has been bitten enough times to know that we don’t change something unless there’s a darned good reason, eventually change does come. And it’s coming to backup and replication. There are a lot of people still differentiating between backups and replication. I think it’s time for us to stop doing so. What are the differences? Let’s take a look.

1. Backups go to tape. Hello, Virtual Tape Libraries, how are you?
2. Backups are archival. Hello, tiering – you allow us to move things to different storage types, and replicate them at different intervals, right? So everything is backed up appropriately for its usage level?
3. Replication is near-real-time. Not really. You’re thinking of Continuous Data Protection (CDP), which is gaining traction by app, not broadly.
4. Replication goes to disk, and that makes it much faster. See #1. VTLs are fast too.
5. Tape is slow. Right, but that’s a target problem, not a backup problem. VTLs are fast.
6. Replication can do just the changes. Yeah, why this one ever became a myth, I’ll never know, but remember “incremental backups”? Same thing.

I’m not saying they’re exactly the same – incremental replicas can be reverse-applied so that you can take a version of the file without keeping many copies, and that takes work in a backup environment. What I AM saying is that once you move to disk (or virtual disk in the case of cloud storage), there isn’t really a difference worthy of keeping two different phrases.
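For the “just the changes” item above, here is a minimal sketch of the mechanism – copy only the files that are new or newer than what the target already holds. The paths are hypothetical placeholders; call the result an incremental backup or an incremental replica, the code is the same either way.

```python
import shutil
from pathlib import Path

# A minimal sketch of "just the changes": copy only files that are new or newer
# than what the target already holds. Call it an incremental backup or an
# incremental replica -- disk in, disk out, the mechanism is the same.
# The paths are hypothetical placeholders.

def replicate_changes(source: Path, target: Path) -> int:
    copied = 0
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        dst_file = target / src_file.relative_to(source)
        # Copy when the target copy is missing or older than the source copy.
        if not dst_file.exists() or dst_file.stat().st_mtime < src_file.stat().st_mtime:
            dst_file.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dst_file)  # copy2 preserves timestamps
            copied += 1
    return copied

if __name__ == "__main__":
    changed = replicate_changes(Path("/data/primary"), Path("/data/replica"))
    print(f"{changed} changed files copied")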
Tape isn’t dead – many of you still use a metric ton of it a year – but it is definitely waning, slowly. Meaning more and more of us are backing up or replicating to disk. Where did this come from? A whitepaper I wrote recently came back from technical review with “this is not accurate when doing backups”, and that got me to thinking “why the heck not?” If the reason for maintaining two different names is simply a people reason, while the technology is rapidly becoming the same mechanism – disk in, disk out – then I humbly suggest we just call it one thing, because all maintaining two names and one fiction does is cause confusion. For those who insist that replicas are regularly updated, I would say making a copy or snapshotting them eliminates even that difference – you now have an archival copy that is functionally the same as a major backup. Add in an incremental snapshot and, well, we’re doing a backup cycle. With tiering, you can set policies to create snapshots or replicas on different timelines for different storage platforms, meaning that your tier three data can be backed up very infrequently, while your tier one (primary) storage is replicated all of the time. Did you see what I did there? The two are used interchangeably. Nobody died, and there’s less room for confusion.

Of course I think you should use our ARX to do your tiering, ARX Cloud Extender to do your cloud connections, and take advantage of the built-in rules engine to help maintain your backup schedule. But the point is that we just don’t need two names for what is essentially the same thing any more. So let’s clean up the lingo. Since replication is more accurate to what we’re doing these days, let’s just call it replication. We already have “snapshot” associated with replication for point-in-time copies, which lets us differentiate between a regularly updated replica and a frozen-in-time “backup”. Words fall in and out of usage all of the time; let’s clean up the tech lingo and all get on the same page. No, no we won’t, but I’ve done my bit by suggesting it. And no doubt there are those confused by the current state of the lingo whom this will help to understand that yes, they are essentially the same thing; only archaic history keeps them separate.

Or you could buy all three – replicate to a place where you can take a snapshot and then back up the snapshot (not as crazy as it sounds, I have seen this architecture deployed to get the backup process out of production, but I was being facetious). And you don’t need a ton of names. You replicate to secondary (or tertiary) storage, then take a snapshot, then move or replicate the snapshot to a remote location – like the cloud or a remote datacenter. Not so tough, and one term is removed from the confusion, inadvertently adding crispness to the other terms.

Useful Cloud Advice, Part One. Storage
There’s a whole lot of talk about cloud revolutionizing IT, a whole lot of argument about public versus private cloud, and even a considerable amount of talk about what belongs in the cloud. But there’s not much talk about helping you determine which applications and storage are good candidates to move there – considering all of the angles that matter to IT. This blog will focus on storage, the next one on applications, because I don’t want to bury you in a blog as long as a feature-length article. It amazes me when I see comments like “no one needs a datacenter” while the definition of what, exactly, cloud is still carries the weight of debate.

For the purposes of this blog, we will limit the definition of cloud to Infrastructure as a Service (IaaS) – VM containers and the things to support them, or storage containers and the things to support them. My reasoning is simple: the only other big category of “cloud” at the moment is SaaS, and since SaaS has been around for about a decade, you should already have a decision-making process for outsourcing a given application to such a service. Salesforce.com and Google Docs are examples of what is filtered out by saying “SaaS is a different beast”. Hosting services and CDNs are a chunk of the market, but increasingly they are starting to look like IaaS as they add functionality to meet the demands of their IT customers. So we’ll focus on the “new and shiny” aspects of cloud that have picked up a level of mainstream support.

Related Blogs:
Cloud Storage Gateways, stairway to (thin provisioning) heaven?
Cloud Storage Gateways. Short term win, but long term…?
Certainly Cirtas! Cloud Storage Gains Momentum
Cloud Storage Use Models
Cloud Storage and Adaptability. Plan Ahead
Cloud Storage: Just In Time For Data Center Consolidation.
The Storage Future is Cloudy, and it is about time.
Cloud. Glass Half Empty or Half Full?
Chances Are That Your Cloud Will Be Screaming WAN. Would You Like Some Disk With That Cable?
F5 Friday: Data Inventory Control

In Times Of Change, IT Can Lead, Follow, Or Get Out of the Way.
Information Technology – geeks like you and me – has been responsible for an amazing transformation of business over the last thirty or forty years. The systems that have been put into place since computers became standard fare for businesses have allowed the business to scale out in almost every direction. Greater production, more customers, better marketing and sales follow-through, even insanely targeted marketing for those of you selling to consumers. There is not a piece of the business that would be better off without us.

With that change came great responsibility, though. Inability to access systems and/or data brings the organization to a screeching halt. So we spend a lot of time putting in redundant systems – for all of its power as an Advanced Application Delivery Controller, many of F5’s customers rely on BIG-IP LTM to keep their systems online even if a server fails. Because it’s good at that (among other things), and they need redundancy to keep the business running.

When computerization first came about, and later when Palm and Blackberry were introducing the first personal devices, people – not always IT people – advocated change, and those changes impacted every facet of the business and provide you and me with steady work. The people advocating were vocal, persistent, and knew that there would be long-term benefit from the systems, or even short-term benefit in dealing with ever-increasing workloads. Many of them were rewarded with work maintaining and improving the systems they had advocated for, and all of them were leaders.

As we crest the wave of virtualization and start to seriously consider cloud computing on a massive scale – be it cloud storage, cloud applications, or SOA applications that have been cloud-washed – it is time to seriously consider IT’s role in this process once again. Those leaders of the past pushed at business management until they got the systems they thought the organization needed, and another group of people will do the same this time. So, as I’ve said before, you need to facilitate this activity. Don’t make them go outside the IT organization, because history says that any application or system allowed to grow outside the IT organization will inevitably fall upon the shoulders of IT to manage. Take that bull by the horns; frame the conversation in the manner that makes the most sense to your business, your management, and your existing infrastructure. Companies like F5 can help you move to the cloud with products like ARX Cloud Extender to make cloud storage look like local NAS, and BIG-IP LTM VE to make cloud apps able to partake of load balancing and other ADC functionality, but all the help in the world doesn’t do you any good if you don’t have a plan.

Look at the cloud options available – they’re certainly telling you about themselves right now, so that should be easy – then look at your organization’s acceptance of risk, and the policies of cloud service providers in regard to that risk, and come up with ideas on how to utilize the cloud. One thing about a new market that includes a cool buzzword like cloud: if you aren’t proposing where it fits, someone in your organization is. And that person is never going to be as qualified as IT to determine which applications and data belong outside the firewall. Never. I’ve said “make a plan” before, but many organizations don’t seem to be listening, so I’m saying it again. Whether cloud is an enabling technology for your organization or a disruptive one for IT is completely in your hands.
Be like those leaders of the past. It’s exciting stuff if managed properly, and like many new technologies, scary stuff if not managed in the context of the rest of your architecture. So build a checklist, pick some apps and even files that could sit in the cloud without a level of risk greater than your organization is willing to accept, and take the list to business leaders. Tell them that cloud is helping to enable IT to better serve them, and ask if they’d like to participate in bringing cloud to the enterprise. It doesn’t have to be big stuff – just enough to make them feel like you’re leading the effort, and enough to make you feel like you’re checking cloud out without going “all in”. After a few pilots, you’ll find you have one more set of tools to solve business problems. And that is almost never a bad thing. Even if you decide cloud usage isn’t for your organization, you chose what was put out there, not a random business person who sees the possibilities but doesn’t know the steps required and the issues to confront.

Related Blogs:
Risk is not a Synonym for “Lack of Security”
Cloud Changes Cost of Attacks
Cloud Computing: Location is important, but not the way you think
Cloud Storage Gateways, stairway to (thin provisioning) heaven?
If Security in the Cloud Were Handled Like Car Accidents
Operational Risk Comprises More Than Just Security
Quarantine First to Mitigate Risk of VM App Stores
CloudFucius Tunes into Radio KCloud
Risk Averse or Cutting Edge? Both at Once.

The Question Is Not “Are You Ready For Cloud Storage?”
I recently read a piece in Network Computing Magazine that was pretty disparaging of NAS devices, and with a hand-wave the author pronounced NAS dead, long live cloud storage. Until now, storage has been pretty much immune to the type of hype that “The Cloud” gets. Sure, there have been some saying that we should use the cloud for primary storage, and others predicting that it will kill this or that technology, but nothing like the outrageous and intangible claims that accompany placing your applications in the cloud. My favorite, repeated even by a lot of people I respect, is that cloud mystically makes you greener.

Okay, I’ll sidetrack for a moment and slay that particular demon yet again, because it is just too easy. Virtualization makes you more green by running more apps on less hardware. Moving virtualized anything to the cloud changes not one iota of carbon footprint, because it still has to run on hardware. So if you take 20 VMs from one server and move them to your favorite cloud provider, you have moved where they are running, but they are certainly running on at least one server. Just because it is not your datacenter does not change the fact that it is in a datacenter. Not greener, not a smaller carbon footprint.

But this column was awash with the claim that cloud storage is it. We no longer need those big old NAS boxes, and they can just go away from the datacenter, starting with the ones that have been cloudwashed.

The future is cloudy, cloouuuudddyyy

Okay, let us just examine a hypothetical corporation for a moment – I’ll use my old standby, Zap-N-Go. Sally, the CIO of Zap-N-Go, is under pressure to “do something with the cloud!” or “identify three applications to move to the cloud within the next six months!” Now this is a painful way to run an IT shop, but it’s happening all over, so Sally assigns Bob to check out the possibilities, and Bob suggests that moving storage to the cloud might be a big win because of the cost of putting in a new NAS box. They work out a plan to move infrequently accessed files to the cloud as a test of robustness, but that’s not bold enough for the rest of senior management, so their plan to test the waters turns into a full-blown movement of primary data to the cloud. Now this may be a bit extreme – Sally, like any good CIO, would dig in her heels at this one – but bear with me.

They move primary storage to the cloud on a cloudy Sunday, utilizing ARX or one of the other cloud-enabled devices on the market, and start to reorganize everything so that people can access their data. On Monday morning, everyone comes in and starts to work, but work is slow; nothing is performing like it used to. The calls start coming to the help desk: “Why is my system so slow?” And then the CEO calls Sally directly. “It should not take minutes to open an Excel spreadsheet,” he harrumphs. And Sally goes down to help her staff figure out how to improve performance. Since the storage move was the day before, everyone knows the ultimate source of the problem; they’re just trying to figure out what is happening. Sue, the network wizard, pops off with “Our Internet connection is overloaded,” and everyone stops looking. After some work, the staff is able to get WOM (WAN optimization) running with the cloud provider to accelerate data flowing between the two companies… But doing so in the middle of the business day has cost the company money, and Sally is in trouble.
After days of redress meetings, and acceptable if not perfect performance, all seems well, and Sally can report to the rest of upper management that files have been moved to the cloud, and now a low monthly fee will be paid instead of large incremental chunks of budget going to new NAS devices.

It’s Almost Ready for Primary Storage…

Until the first time the Internet connection goes down. And then, gentle reader, Sally’s and Bob’s resumes will end up on your desk, because they will not survive the aftermath of “no one can do anything”. Cloud in general and cloud storage in particular have amazing promise – I really believe that – but pumping it full of meaningless hyperbole does no one any good. Not IT, not the business, and not whatever you’re hawking. So take such proclamations with a grain of salt and keep your eye on the goal: secure, fast, and agile solutions for your business, not going “all in” like it’s a poker table. And don’t let such buffoons sour you on the promise of cloud. While I wouldn’t call them visionary, I do see a day when most of our storage and apps are in a cloud somewhere. It’s just not tomorrow. Or next year. Next year archiving and tier three will be out there; let’s just see how that goes before we start discussing primary storage.

…And Ask Not “Are We Ready For Cloud Storage?” but rather “Is Cloud Storage Ready For Us?”

My vote? Archival and tier three are getting a good workout – start there.

Graduating Your Storage
Lori’s and my youngest daughter graduated from high school this year, and her class chose one of the many good Vince Lombardi quotes for the theme of their graduation: “The measure of who we are is what we do with what we have.” Those who know me well know that I’m not a huge football fan (don’t tell my friends here in Green Bay that… the stadium can hold roughly half the city’s population, and they aren’t real friendly to those who don’t join in the frenzy), but Vince Lombardi certainly had a lot of great quotes over the course of his career, and I am a fan of solid quotes. This is a good example of his ability to say things short and to the point. This is the point where I say that I’m proud of our daughter – for a lot more than simply making it through school – and wish her the best of luck in that rollercoaster ride known as adult life.

About the same time as our daughter was graduating, Lori sent me a link to this Research And Markets report on High Performance Computing site storage usage. I found it to be very interesting, just because HPC sites are generally on the larger end of storage environments, and are where the rubber really hits the road in terms of storage performance and access times. One thing that stood out was the large percentage of disk that is reported as DAS. While you know there’s a lot of disk sitting in servers underutilized, I would have expected the age of virtualization to have used a larger chunk of that disk with local images and more swap space for the multiple OS instances. Another thing of interest was that NAS and SAN are about evenly represented. Just a few years ago, that would not have been true at all. Fibre Channel has definitely lost some ground to IP-based storage if they’re about even in HPC environment deployments. What’s good for some of the most resource-intensive environments on earth is good for most enterprises, and I suspect that NAS has eclipsed SAN in terms of sheer storage space in the average enterprise (though that’s a conjecture on my part, not anything from the report).

And that brings us back to the Vince Lombardi quote. NAS disk space is growing. DAS disk space is still plentiful. The measure of the service your IT staff delivers will be what you do with what you have. And in this case, what you have is DAS disk not being used and a growing number of NAS heads to manage all that NAS storage. What do you do with that? Well, you do what makes the most sense. In this case, storage tiering comes to mind, but DAS isn’t generally considered a tier, right? It is if you have file virtualization (also called directory virtualization) in place. Seriously. By placing all that spare DAS into the directory tree, it is available as a pool of resources to service storage needs – and by utilizing automated, rule-based tiering, what is stored there can be tightly controlled by tiering rules, so that you are not taking over all of the available space on the DAS, and things are stored in the place that makes the most sense based upon modification and/or access times. With tiering and file virtualization in place, you have a solution that can utilize all that DAS, and an automated system to move things to the place that makes the most sense. While you’re at it, move the rest of the disk into the virtual directory, and you can run backups off the directory virtualization engine rather than set them up for each machine. You can even create rules to copy data off to slow disk and back it up from there, if you like.
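As a rough illustration of what such an age-based tiering rule boils down to (this is not the ARX rules engine – the tier paths and the 90-day threshold are invented for the example):

```python
import shutil
import time
from pathlib import Path

# Not the ARX rules engine -- just the shape of an age-based tiering rule:
# anything untouched for N days gets relocated to the spare-DAS tier.
# The tier paths and the 90-day threshold are invented for the example.

TIER_ONE = Path("/mnt/nas-tier1")
DAS_TIER = Path("/mnt/spare-das")
MAX_IDLE_DAYS = 90

def demote_cold_files():
    """Move files not accessed or modified within the window to the DAS tier."""
    cutoff = time.time() - MAX_IDLE_DAYS * 86400
    moved = []
    for f in TIER_ONE.rglob("*"):
        if f.is_file() and f.stat().st_atime < cutoff and f.stat().st_mtime < cutoff:
            dest = DAS_TIER / f.relative_to(TIER_ONE)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(f), str(dest))  # frequently touched files simply stay put
            moved.append(dest)
    return moved

if __name__ == "__main__":
    print(f"demoted {len(demote_cold_files())} cold files to the DAS tier")
```

With file virtualization in front of this, the logical path users see would not change when a file is demoted; only its physical home does. One practical caveat: plenty of filesystems are mounted noatime these days, so production rules tend to lean on modification time or richer metadata rather than access time alone.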
And with the direction things are headed, throw in an encrypting cloud storage gateway like our ARX Cloud Extender, and you have a solution that utilizes your internal DAS and NAS both intelligently and to the maximum, plus a gateway to cloud storage for overflow, tier N, or archival storage… depending upon how you’re using cloud storage. Then you are doing the most with what you have – and setting up an infinitely expandable pool to cover for unforeseen growth.

All of the above makes your storage environment more rational, improves utilization of DAS (and in most cases NAS), retains your files with their names intact, and moves unstructured data to the storage that makes the most sense for it. There is very little not to like. So check it out. We have ARX; other vendors offer their solutions – though ARX is the leader in this space, so I don’t feel I’m pandering to say you’ll find us a better fit.

The Right (Platform) Tool For the Job(s).
One of my hobbies is modeling – mostly for wargaming, but also for the sake of modeling. In an average year I do a lot of WWII models, some modern military, some civilian vehicles, figures from an array of historical time periods, and the occasional sci-fi figure for one of my sons… the oldest (24 y/o) being a Warhammer 40K player and the youngest (3 y/o) just plain enjoying anything that looks like a robot. While I have been modeling more or less for decades, only in the last five years have I had the luxury of owning an airbrush, and even then I restrict it to very limited uses – mostly base-coating larger models like cars, tanks, or spaceships. The other day I was reading on my airbrush vendor’s website and discovered that they had purchased a competitor that specialized in detailing airbrushes – so detailed that the line is used to decorate fingernails. This got me to thinking that I could do more detailed bits on models – like shovel blades and flesh tones – with an airbrush, if I had one of these little detail brushes. Lori told me to send her a link to them so that she had it on the list for possible gifts, so I went out and started researching which model of the line was most suited to my goals.

The airbrush I have is one of the best on the market – a Badger Airbrush Company model 150. It has dual action, which means that pushing down on the trigger lets air out, and pulling the trigger back while pushing down lets an increasing amount of paint flow through. I use this to determine the density of paint I’m applying, but have never thought too much about it. Well, in my research I wanted to see how much difference there was between my airbrush and the Omni that I was interested in. The answer… almost none. Which confused me at first, as my airbrush, even with the finest needle and tip available and a pressure valve on my compressor to control the amount of air being pumped through it, sprays a lot of paint at once. So I researched further, and guess what? The volume of paint – controlled by how far you draw back the trigger – combined with the PSI you allow through the regulator will control the width of the paint flow. My existing airbrush can get down to 2mm – sharpened-pencil-point widths. I have a brand-new fine tip and needle (in poor lighting I confused my fine needle with my reamer and bent the tip a few weeks ago, so I ordered a new one), my pressure regulator is a pretty good one, and all that is left is to play with it until I have the right pressure. I may be doing more detailed work with my airbrush in the near future.

Airbrushing isn’t necessarily better – for some jobs I like the results better, like single-color finishes, because if you thin the paint and go with several coats, you can get a much more uniform worn look to surfaces – but overall it is just different. The reason I would want to use my airbrush more is, simply, time. Because you don’t have to worry about crevices and such (the air blows paint into them), you don’t have to take nearly as long to paint a given part with an airbrush as you do with a brush. At least for the base coat, anyway – you still need a brush for highlighting and shadowing… or at least I do… But it literally cuts hours off of a group of models if I can arrange one trip down to the spray area versus brush-painting those same models.

What does all of this have to do with IT? The same thing it usually does.
You have a ton of tools in your datacenter that do one job very well, but you have never had reason to look into the alternate uses that the tool might serve just as well or better. This is relatively common with Application Delivery Controllers, where they are brought in just to do load balancing, or just for application acceleration, or just for WAN optimization, and the other things the tool does just as well haven’t been explored. But you might want to do some research on your platforms, just to see if they can serve other needs than you’re putting them to today. Let’s face it, you’ve paid for them, and in many cases they will work as-is, or with a slight cost add-on, to do even more. It is worth knowing what “more” is for a given product, if for no other reason than having that information in your pocket when exploring solutions going forward.

A similar situation is starting to develop with our ARX family of products, and no doubt with some competitors also (though I haven’t heard of it from competitors, I’m simply conjecturing) – as ARX grows in its capabilities, many existing customers aren’t taking advantage of the sweet new tools that are available to them for free or for a modest premium on their existing investment. ARX Cloud Extender is the largest case of this phenomenon that I know of, but this week’s EMC Atmos announcement might well go a long way toward remedying that. To me it is very cool that ARX can virtualize your NAS devices AND include cloud and/or object storage alongside NAS, so as to appear to be one large pool of storage. Whether you’re a customer or not, it’s worth checking out.

Of course, like my airbrush, you’ll have some learning to do if you try new things with your existing hardware. I’ll spend a couple of hours with the airbrush figuring out how to make reliable lines of those sizes, then determine where best to use it. While I could have achieved the same or similar results with masking, the time investment for masking is large and repetitive, and the dollar cost is repetitive. I also could have paid a large chunk of money for a specialized detail airbrush, but then I’d have two tools to maintain, when one will do it all… And this is true of alternatives to learning new things about your existing hardware – the learning curve will be there whether you implement new functionality on your existing platforms or purchase a point solution, so it’s best to figure out the cost in time and money to solve the problem from either direction. Often, you’ll find the cost of learning a new function on familiar hardware is much lower than purchasing and learning all-new hardware.

WWII Russians – vehicle is airbrushed, figures not.

Cloud Storage at Computer Technology Review
Since I’ve mentioned it a couple of times, I thought I’d offer you all a link to my article in Computer Technology Review about The Cloud Tier. The point was to delve into the how/when/where/why of cloud storage usage. While there is a lot to say on that topic and the article was of limited word count, I think the key point is that cloud storage can fit into your existing architecture with minimal changes and then be utilized to service the needs of the business in a better/faster/more agile manner. Normally I keep my blogs relatively vendor-independent. This one talks mostly about our ARX solution because the article referenced was vendor-independent, and I do think we’ve got some cool enabling stuff, so sometimes you just gotta talk about the cool toys. No worries – if you’re not an ARX customer and won’t be, there’s still info in here for you.

For our part, F5 ARX is our link into that enabling story. Utilizing ARX (or another NAS virtualization engine), you can automatically direct qualifying files to the cloud, while pulling files that are accessed or modified frequently back into your tier one or tier two NAS storage. We call that point between CIFS/NFS clients and the storage they use one of our Strategic Points of Control. This one is a big one, because files that move to the cloud can appear to users not to have moved at all – ARX has a file virtualization engine that shows them the file in a virtual directory structure. Where a file is physically stored behind that virtual directory structure is completely unrelated to where the user thinks it is, and IT can move the file as needed – or write rules to have the ARX move files as needed. The only difference the user might notice is that the files they use every day are faster to open, and the ones they never or almost never access are slower. That’s called responsive, and it makes IT look good. It also means that you can say “We’re using the cloud, check out our storage”, and instead of planning for a huge outlay to put another array in, you can plan small monthly payments to cover the cost of storage in use. That’s one of the reasons I think cloud storage for non-critical data will take off relatively quickly compared to most cloud technologies. The benefits are obvious and have numbers associated with them; they reduce the amount of “excess capacity” you have to keep in the datacenter to account for growth, while shifting the cost of that excess capacity to much smaller monthly payments.

What I don’t know is how the long-term cost of cloud storage will compare to the long-term cost of purchasing the same storage. And I don’t think anyone can know what that graph looks like at this time. Cloud storage is new enough that it is safe to say the costs are anything but stabilized. Indeed, the race to the bottom, price-per-terabyte-wise, early in cloud storage’s growth nearly guarantees that the costs will go up over the long term, but how far up we just don’t have the information to figure out yet. Operations costs for cloud storage (from the vendor perspective) are new, and while the cost of storage over time in the datacenter is a starting point, the needs of administering cloud storage are not the same as enterprise storage, so it will be interesting to see how much the long-term operation of a cloud storage vendor impacts prices over the next two to five years. Don’t let me scare you off.
I still think it’s a worthy addition to your tiering strategy; I would only recommend that you have a way out. Don’t just assume your cloud vendor will be there forever, because there is that fulcrum where, in order to survive, they may have to raise prices beyond your organization’s willingness (or even ability) to pay. You need to plan for that scenario, because at worst having a plan is a waste of some man-hours and maybe a minimal fee to keep a second cloud provider online just in case, while the worst case if things go wrong is losing your files. No comparison of the risks there. Of course, if you have ARX in place, moving between cloud storage providers is easier, but frankly, running that much data through your WAN link is going to be a crushing exercise. So plan for two options – one where you have the time to trickle the movement of data through your WAN connection (remember, if you’re moving cloud storage providers, all that data needs to come into the building and back out that same WAN link unless you have alternatives available), and one where you have to get your data off quickly for whatever reason.

Like everything else, there’s benefit and cost savings to be had, but keep a plan in place. It’s still your storage, so an addition to the disaster recovery plan is in order too – under “WAN link down” you need to add “what to do about non-critical files”. Likely you won’t care about them in an extreme emergency, but there are scenarios where you’re going to want an alternate way to get at your files. Meanwhile, you have a theoretically unlimited tier out there that could be holding a ton of those not-so-critical files that are eating up terabytes in your datacenter. I would look into taking advantage of it.
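Since the file virtualization idea came up again in this post, here is one last conceptual sketch of what a virtualization layer does: keep the logical path fixed while the physical location moves. The dict-based map and the paths are hypothetical – this shows the idea, not how ARX stores its metadata.

```python
# Illustration only: the core idea behind file (directory) virtualization.
# Clients keep one logical path; IT can repoint the physical location -- a
# tier one NAS share, slower disk, or a cloud bucket -- without the client
# ever noticing. The dict-based map and the paths are hypothetical.

physical_location = {
    "/projects/q3-report.xlsx": "nas-tier1:/vol1/q3-report.xlsx",
    "/projects/2008-archive.zip": "cloud:zapngo-archive/2008-archive.zip",
}

def resolve(logical_path: str) -> str:
    """What a virtualization layer does on every open(): map logical to physical."""
    return physical_location[logical_path]

def migrate(logical_path: str, new_physical: str) -> None:
    """Move the data, then update the map; the logical path never changes."""
    physical_location[logical_path] = new_physical

if __name__ == "__main__":
    print(resolve("/projects/q3-report.xlsx"))
    migrate("/projects/q3-report.xlsx", "cloud:zapngo-archive/q3-report.xlsx")
    print(resolve("/projects/q3-report.xlsx"))  # same logical name, new home
```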