storage virtualization
What is a Strategic Point of Control Anyway?
From mammoth hunting to military maneuvers to the datacenter, the key to success is control. Recalling your elementary school lessons, you’ll probably remember that mammoths were large and dangerous creatures and, like most animals, quite deadly to primitive man. Yet man found a way to hunt them effectively and, we assume, with more than a small degree of success, as we are still here and, well, the mammoths aren’t.

Marx cavemen (photo and artwork: Fred R. Hinojosa).

The theory of how man successfully hunted ginormous creatures like the mammoth goes something like this: a group of hunters would single out a mammoth and herd it toward a point at which the hunters would have an advantage – a narrow mountain pass, a clearing enclosed by large rock, etc. The qualifying criterion for the place in which the hunters would finally confront their next meal was that it afforded the hunters a strategic point of control over the mammoth’s movement. The mammoth could not move away without either (a) climbing sheer rock walls or (b) being attacked by the hunters. By forcing mammoths into a confined space, the hunters controlled the environment and the mammoth’s ability to flee; thus a successful hunt was had by all. At least by all the hunters; the mammoths probably didn’t find it successful at all.

Whether you consider mammoth hunting or military maneuvers or strategy-based games (chess, checkers), one thing remains the same: a winning strategy almost always involves forcing the opposition into a situation over which you have control. That might be a mountain pass, or a densely wooded forest, or a bridge. The key is to force the entire complement of the opposition through an easily and tightly controlled path. Once they’re on that path – and can’t turn back – you can execute your plan of attack.
These easily and highly constrained paths are “strategic points of control.” They are strategic because they are the points at which you are empowered to perform some action with a high degree of assurance of success. In data center architecture there are several “strategic points of control” at which security, optimization, and acceleration policies can be applied to inbound and outbound data. These strategic points of control are important to recognize, as they are the most efficient – and effective – points at which control can be exerted over the use of data center resources.

DATA CENTER STRATEGIC POINTS of CONTROL

In every data center architecture there are aggregation points. These are points (one or more components) through which all traffic is forced to flow, for one reason or another. For example, the most obvious strategic point of control within a data center is at its perimeter – the router and firewalls that control inbound access to resources and in some cases control outbound access as well. All data flows through this strategic point of control, and because it’s at the perimeter of the data center it makes sense to implement broad resource access policies at this point. Similarly, strategic points of control occur internal to the data center at several “tiers” within the architecture. Several of these tiers are:

Storage virtualization provides a unified view of storage resources by virtualizing storage solutions (NAS, SAN, etc.). Because the storage virtualization tier manages all access to the resources it is managing, it is a strategic point of control at which optimization and security policies can be easily applied.

Application delivery / load balancing virtualizes application instances and ensures availability and scalability of an application. Because it is virtualizing the application, it becomes a point of aggregation through which all requests and responses for an application must flow.
It is a strategic point of control for application security, optimization, and acceleration.

Network virtualization is emerging internal to the data center architecture as a means to provide inter-virtual machine connectivity more efficiently than can be achieved through traditional network connectivity. Virtual switches often reside on a server on which multiple applications have been deployed within virtual machines. Traditionally it might be necessary for communication between those applications to physically exit and re-enter the server’s network card. By virtualizing the network at this tier, the physical traversal path – and the associated latency – is eliminated, and more efficient inter-VM communication can be achieved. This is a strategic point of control at which access to applications at the network layer should be applied, especially in a public cloud environment where inter-organizational residency on the same physical machine is highly likely.

OLD SKOOL VIRTUALIZATION EVOLVES

You might have begun noticing a central theme to these strategic points of control: they are all points at which some kind of virtualization – and thus aggregation – occurs naturally in a data center architecture. This is the original (first) kind of virtualization: the presentation of many resources as a single resource, a la load balancing and other proxy-based solutions. When a one-to-many (1:M) virtualization solution is employed, it naturally becomes a strategic point of control by virtue of the fact that all “X” traffic must flow through that solution, and thus policies regarding access, security, logging, etc. can be applied in a single, centrally managed location. The key here is “strategic” and “control”: the former relates to the ability to apply the latter over data at a single point in the data path. This kind of 1:M virtualization has been a part of datacenter architectures since the mid 1990s.
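The 1:M pattern lends itself to a short sketch. The following toy proxy (all names hypothetical; this illustrates the general technique, not any particular product's API) shows why a virtualization point is also a policy point: every request must pass through it before reaching the pool of real resources.

```python
import itertools

class VirtualService:
    """Toy 1:M virtualization point: one virtual address fronting many
    real servers. Because all traffic flows through it, policies can be
    applied and managed in a single place."""

    def __init__(self, backends, policies=None):
        self.backends = list(backends)            # the "many" behind the "one"
        self.policies = policies or []            # centrally managed policies
        self._rr = itertools.cycle(self.backends)

    def handle(self, request):
        # Apply every policy at the single point of control.
        for policy in self.policies:
            allowed, reason = policy(request)
            if not allowed:
                return {"status": 403, "reason": reason}
        # Then distribute the request across the aggregated resources.
        return {"status": 200, "served_by": next(self._rr)}

def block_admin_paths(request):
    """Example access policy: external requests may not reach /admin."""
    if request["path"].startswith("/admin") and not request["internal"]:
        return False, "admin paths are internal-only"
    return True, ""

vs = VirtualService(["app-1", "app-2"], policies=[block_admin_paths])
assert vs.handle({"path": "/", "internal": False})["served_by"] == "app-1"
assert vs.handle({"path": "/admin", "internal": False})["status"] == 403
```

Adding a new policy or a third backend changes nothing for clients; that single, centrally managed location is the operational payoff of the aggregation point.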
It’s evolved to provide ever broader and deeper control over the data that must traverse these points of control by nature of network design. These points have become, over time, strategic in terms of the ability to consistently apply policies to data in as operationally efficient a manner as possible. Thus have these virtualization layers become “strategic points of control.” And you thought the term was just another square on the buzz-word bingo card, didn’t you?

Building an elastic environment requires elastic infrastructure
One of the reasons behind some folks pushing for infrastructure as virtual appliances is the on-demand nature of a virtualized environment. When network and application delivery infrastructure hits capacity in terms of throughput - regardless of the layer of the application stack at which it happens - it's frustrating to think you might need to upgrade the hardware rather than just add more compute power via a virtual image. The truth is that this makes sense. The infrastructure supporting a virtualized environment should be elastic. It should be able to dynamically expand without requiring a new network architecture, a higher performing platform, or new configuration. You should be able to just add more compute resources and walk away. The good news is that this is possible today. It just requires that you carefully consider your choices in network and application network infrastructure when you build out your virtualized infrastructure.

ELASTIC APPLICATION DELIVERY INFRASTRUCTURE

Last year F5 introduced VIPRION, an elastic, dynamic application networking delivery platform capable of expanding capacity without requiring any changes to the infrastructure. VIPRION is a chassis-based, bladed application delivery controller, and its bladed system behaves much in the same way that a virtualized equivalent would. Say you start with one blade in the system, and soon after you discover you need more throughput and more processing power. Rather than bring online a new virtual image of such an appliance to increase capacity, you add a blade to the system and voila! VIPRION immediately recognizes the blade and simply adds it to its pools of processing power and capacity. There's no need to reconfigure anything; VIPRION essentially treats each blade like a virtual image and distributes requests and traffic across the network and application delivery capacity available on the blade automatically.
Just as a virtual appliance model would – but without the concerns about the reliability and security of the platform. Traditional application delivery controllers can also be scaled out horizontally to provide similar functionality and behavior. By deploying additional application delivery controllers in what is often called an active-active model, you can rapidly deploy and synchronize configuration of the master system to add more throughput and capacity. Meshed deployments comprising more than a pair of application delivery controllers can also provide additional network compute resources beyond what is offered by a single system. The latter option (the traditional scaling model) requires more work to deploy than the former (VIPRION), simply because it requires additional hardware and all the overhead required of such a solution. The elastic option with bladed, chassis-based hardware is really the best option in terms of elasticity and the ability to grow on-demand as your infrastructure needs increase over time.

ELASTIC STORAGE INFRASTRUCTURE

Often overlooked in the network diagrams detailing virtualized infrastructures is the storage layer. The increase in storage needs in a virtualized environment can be overwhelming, as there is a need to standardize the storage access layer such that virtual images of applications can be deployed in a common, unified way regardless of which server they might need to be executing on at any given time. This means a shared, unified storage layer on which to store images that are necessarily large. This unified storage layer must also be expandable. As the number of applications and associated images grows, storage needs increase. What's needed is a system in which additional storage can be added in a non-disruptive manner.
If you have to modify the automation and orchestration systems driving your virtualized environment when additional storage is added, you've lost some of the benefits of a virtualized storage infrastructure. F5's ARX series of storage virtualization provides that layer of unified storage infrastructure. By normalizing the namespaces through which files (images) are accessed, the systems driving a virtualized environment can be assured that images are available via the same access method regardless of where the file or image is physically located. Virtualized storage infrastructure systems are dynamic; additional storage can be added to the infrastructure and "plugged in" to the global namespace to increase the storage available in a non-disruptive manner. An intelligent virtualized storage infrastructure can further improve the efficiency of the available storage by tiering it. Images and files accessed more frequently can be stored on fast, tier-one storage so they are loaded and execute more quickly, while less frequently accessed files and images can be moved to less expensive and perhaps less performant storage systems. By deploying elastic application delivery network infrastructure instead of virtual appliances you maintain stability, reliability, security, and performance across your virtualized environment. Elastic application delivery network infrastructure is already dynamic, and offers a variety of options for integration into automation and orchestration systems via standards-based control planes, many of which are nearly turn-key solutions. The reasons why some folks might desire a virtual appliance model for their application delivery network infrastructure are valid. But the reality is that the elasticity and on-demand capacity offered by a virtual appliance is already available in proven, reliable hardware solutions today that do not require sacrificing performance, security, or flexibility.
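As a rough illustration of the namespace normalization and non-disruptive migration described above, here is a toy global namespace (a sketch of the general file-virtualization technique; all names are invented and this is not how ARX is implemented):

```python
class GlobalNamespace:
    """Toy file-virtualization layer: clients always use one logical path,
    while files physically live on whichever device and tier currently
    holds them. Files can be migrated between tiers without the logical
    path ever changing."""

    def __init__(self):
        self._map = {}    # logical path -> (device, physical path)
        self._hits = {}   # logical path -> access count (tiering input)

    def add(self, logical, device, physical):
        self._map[logical] = (device, physical)
        self._hits[logical] = 0

    def resolve(self, logical):
        # Clients resolve the same logical path before and after migration.
        self._hits[logical] += 1
        return self._map[logical]

    def migrate(self, logical, device, physical):
        # Non-disruptive move: only the mapping behind the path changes.
        self._map[logical] = (device, physical)

ns = GlobalNamespace()
ns.add("/images/web.vmdk", "tier2-nas", "/vol7/web.vmdk")
assert ns.resolve("/images/web.vmdk") == ("tier2-nas", "/vol7/web.vmdk")
ns.migrate("/images/web.vmdk", "tier1-san", "/fast/web.vmdk")  # hot file promoted
assert ns.resolve("/images/web.vmdk") == ("tier1-san", "/fast/web.vmdk")
```

The client-visible path never changes; only the mapping behind it does, which is what makes tier migration non-disruptive and keeps the orchestration systems out of the loop when storage is added.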
Related articles by Zemanta:
How to instrument your Java EE applications for a virtualized environment
Storage Virtualization Fundamentals
Automating scalability and high availability services
Building a Cloudbursting Capable Infrastructure
EMC unveils Atmos cloud offering
Are you (and your infrastructure) ready for virtualization?

Does your virtualization strategy create an SEP field?
There is a lot of hype around all types of virtualization today, with one of the primary drivers often cited being a reduction in management costs. I was pondering whether or not that hype was true, given the amount of work that goes into setting up not only the virtual image, but the infrastructure necessary to properly deliver the images and the applications they contain. We've been using imaging technology for a long time, especially in lab and testing environments. It made sense then because a lot of work goes into setting up a server and the applications running on it before it's "imaged" for rapid deployment use. Virtual images that run inside virtualization servers like VMware brought not just the ability to rapidly deploy a new server and its associated applications, but the ability to do so in near real-time. But it's not the virtualization of the operating system that really offers a huge return on investment; it's the virtualization of the applications that are packaged up in a virtual image that offers the most benefits. While there's certainly a lot of work that goes into deploying a server OS - the actual installation, configuration, patching, more patching, and licensing - there's even more work that goes into deploying an application, simply because they can be ... fussy. So once you have a server and application configured and ready to deploy, it certainly makes sense that you'd want to "capture" it so that it can be rapidly deployed in the future. Without the proper infrastructure, however, the benefits can be drastically reduced. Four questions immediately come to mind that require some answers:

Where will the images be stored?
How will you manage the applications running on deployed virtual images?
What about updates and patches to not only the server OS but the applications themselves?
What about changes to your infrastructure?
The savings realized by reducing the management and administrative costs of building, testing, and deploying an application in a virtual environment can be negated by a simple change to your infrastructure, or the need to upgrade/patch the application or operating system. Because the image is basically a snapshot, that snapshot needs to change as the environment in which it runs changes. And the environment means more than just the server OS; it means the network, application, and delivery infrastructure. Addressing the complexity involved in such an environment requires an intelligent, flexible infrastructure that supports virtualization. And not just OS virtualization, but other forms of virtualization such as server virtualization and storage or file virtualization. There's a lot more to virtualization than just setting up a VMware server, creating some images, and slapping each other on the back for a job well done. If your infrastructure isn't ready to support a virtualized environment then you've simply shifted the costs - and responsibility - associated with deploying servers and applications to someone else and, in many cases, several someone elses. If you haven't considered how you're going to deliver the applications on those virtual images then you're in danger of simply shifting the costs of delivering applications elsewhere. Without a solid infrastructure that can support the dynamic environment created by virtual imaging, the benefits you think you're getting quickly diminish as other groups are suddenly working overtime to configure and manage the rest of the infrastructure necessary to deliver those images and applications to servers and users. We often talk about silos in terms of network and applications' groups; but virtualization has the potential to create yet another silo, and that silo may be taller and more costly than anyone has yet considered. Virtualization has many benefits to you and your organization.
Consider carefully whether your infrastructure is prepared to support virtualization, or risk discovering that implementing a virtualized solution is creating an SEP (Somebody Else's Problem) field around delivering and managing those images.

Force Multipliers and Strategic Points of Control Revisited
On occasion I have talked about military force multipliers. These are things like terrain and minefields that can make your force able to do their job much more effectively if utilized correctly. In fact, a study of military history is every bit as much a study of battlefields as it is a study of armies. He who chooses the best terrain generally wins, and he who utilizes tools like minefields effectively often does too. Rommel in the desert often used wadis to hide his dreaded 88mm guns – which at the time could rip through any tank the British fielded. For the last couple of years, we’ve all been inundated with the story of the 300 Spartans that held off an entire army. Of course it was more than just the 300 Spartans in that pass, but they were still massively outnumbered. Over and over again throughout history, it is the terrain and the technology that give a force the edge. Perhaps the first person to notice this trend, and certainly the first to write a detailed work on the topic, was von Clausewitz. His writing is some of the oldest military theory, and much of it is still relevant today, if you are interested in that type of writing. For those of us in IT, it is much the same. He who chooses the best architecture and makes the most of available technology wins. In this case, as in a war, winning is temporary and must constantly be revisited, but that is indeed what our job is – keeping the systems at their tip-top shape with the resources available. Do you put in the tool that is the absolute best at what it does but requires a zillion man-hours to maintain, or do you put in the tool that covers everything you need and takes almost no time to maintain? The answer to that question is not always as simple as it sounds like it should be. By way of example, which solution would you like your bank to put between your account and hackers? Probably a different one than the one you would like your bank to put in for employee timekeeping.
An 88 in the desert (photo compliments of WW2inColor).

Unlike warfare, though, a lot of companies are in the business of making tools for our architecture needs, so we get plenty of options and most spaces have a happy medium. Instead of inserting all the bells and whistles, they inserted the bells and made them relatively easy to configure, or they merged products to make your life easier. When the terrain suits a commander’s needs in wartime, the need for such force multipliers as barbed wire and minefields is eliminated, because an attacker can be channeled into the desired defenses by terrain features like cliffs and swamps. The same could be said of your network. There are a few places on the network that are Strategic Points of Control, where so much information (incidentally including attackers, though this is not, strictly speaking, a security blog) is funneled through that you can increase your visibility, level of control, and even implement new functionality. We here at F5 like to talk about three of them: between your users and the apps they access, between your systems and the WAN, and between consumers of file services and the providers of those services. These are places where you can gather an enormous amount of information and act upon that information without a lot of staff effort – force multipliers, so to speak. When a user connects to your systems, the strategic point of control at the edge of your network can perform pre-application-access security checks, route them to a VPN, determine the best of a pool of servers to service their requests, encrypt the stream (on front, back, or both sides), or redirect them to a completely different datacenter or an instance of the application they are requesting that actually resides in the cloud… The possibilities are endless.
When a user accesses a file, the strategic point of control between them and the physical storage allows you to direct them to the file no matter where it might be stored, allows you to optimize the file for the pattern of access that is normally present, allows you to apply security checks before the physical file system is ever touched, again, the list goes on and on. When an application like replication or remote email is accessed over the WAN, the strategic point of control between the app and the actual Internet allows you to encrypt, compress, dedupe, and otherwise optimize the data before putting it out of your bandwidth-limited, publicly exposed WAN connection. The first strategic point of control listed above gives you control over incoming traffic and early detection of attack attempts. It also gives you force multiplication with load balancing, so your systems are unlikely to get overloaded unless something else is going on. Finally, you get the security of SSL termination or full-stream encryption. The second point of control gives you the ability to balance your storage needs by scripting movement of files between NAS devices or tiers without the user having to see a single change. This means you can do more with less storage, and support for cloud storage providers and cloud storage gateways extends your storage to nearly unlimited space – depending upon your appetite for monthly payments to cloud storage vendors. The third force-multiplies the dollars you are spending on your WAN connection by reducing the traffic going over it, while offloading a ton of work from your servers because encryption happens on the way out the door, not on each VM. Taking advantage of these strategic points of control, architectural force multipliers offers you the opportunity to do more with less daily maintenance. 
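The WAN-side force multiplier from deduplication and compression can be sketched with a simple chunk-hash cache (a generic illustration of the technique, not any product's actual algorithm; all names are made up):

```python
import hashlib
import random
import zlib

def wan_optimize(chunks, seen):
    """Send compressed bytes for chunks never seen before; send only a
    short hash reference for chunks already in the dedupe cache.

    `seen` persists across transfers, standing in for the chunk cache a
    WAN optimizer keeps at each end of the link."""
    wire_bytes = 0
    for chunk in chunks:
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            wire_bytes += len(digest)               # 32-byte reference only
        else:
            seen.add(digest)
            wire_bytes += len(zlib.compress(chunk))  # full (compressed) payload
    return wire_bytes

cache = set()
rng = random.Random(0)
chunks = [rng.randbytes(4096) for _ in range(3)]  # three distinct data chunks
first = wan_optimize(chunks, cache)   # initial transfer: payloads cross the WAN
repeat = wan_optimize(chunks, cache)  # e.g. nightly replication of the same data
assert repeat == 3 * 32               # only tiny references cross the link
assert repeat < first
```

The repeat transfer costs a few dozen bytes instead of kilobytes, which is the sense in which the point of control between the app and the WAN multiplies the dollars spent on the connection.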
For instance, the point between users and applications can be hooked up to your ADS or LDAP server and be used to authenticate that a user attempting to access internal resources from… say… an iPad… is indeed an employee before they ever get to the application in question. That limits the attack vectors on software that may be highly attractive to attackers. There are plenty more examples of multiplying your impact without increasing staff size or even growing your architectural footprint beyond the initial investment in tools at the strategic point of control. For F5, we have LTM at the Application Delivery Network Strategic Point of Control. Once that investment is made, a whole raft of options can be tacked on – APM, WOM, WAM, ASM; the list goes on again (tired of that phrase for this blog yet?). Since each resides on LTM, there is only one “bump in the wire”, but a ton of functionality that can be brought to bear, including integration with some of the biggest names in applications – Microsoft, Oracle, IBM, etc. Adding business value like remote access for devices, while multiplying your IT force. I recommend that you check it out if you haven’t; there is definitely a lot to be gained, and it costs you nothing but a little bit of your precious time to look into it. No matter what you do, looking closely at these strategic points of control and making certain you are using them effectively to meet the needs of your organization is easy and important. The network is not just a way to hook users to machines anymore, so make certain that’s not all you’re using it for. Make the most of the terrain. And yes, if you also read Lori’s blog, we were indeed watching the same shows and talking about this concept, so no surprise our blogs are on similar wavelengths.

Related Blogs:
What is a Strategic Point of Control Anyway?
Is Your Application Infrastructure Architecture Based on the ...
F5 Tech Field Day – Intro To F5 As A Strategic Point Of Control
What CIOs Can Learn from the Spartans
What We Learned from Anonymous: DDoS is now 3DoS
What is Network-based Application Virtualization and Why Do You ...
They're Called Black Boxes Not Invisible Boxes
Service Virtualization Helps Localize Impact of Elastic Scalability
F5 Friday: It is now safe to enable File Upload

What Is Your Reason for Virtualization and Cloud, Anyway?
Gear shifting in a modern car is a highly virtualized application nowadays. Whether you’re driving a stick or an automatic, it is certainly not the same as your great grandaddy’s shifting (assuming he owned a car). The huge difference between a stick and an automatic is how much work the operator has to perform to get the job done. In the case of an automatic, the driver sets the car up correctly (putting it into drive as opposed to one of the other gears), and then forgets about it other than depressing and releasing the gas and brake pedals. A small amount of up-front effort followed by blissful ignorance – until the transmission starts slipping, anyway. In a stick, the driver has much more granular control of the shifting mechanism, but is required to pay attention to dials and the feel of the car, while operating both pedals and the shifting mechanism. Two different solutions with two different strengths and weaknesses. Manual transmissions are much more heavily influenced by the driver, both in terms of operating efficiency (gas mileage, responsiveness, etc.) and longevity (a careful driver can keep the clutch from going bad for a very long time; a clutch-popping driver can destroy those pads in near-zero time). Automatic transmissions are less overhead day-to-day, but don’t offer the advantages of a stick. This is the same type of trade-off you have to ask about the goals of your next generation architecture. I’ve touched on this before, and no doubt others have too, but it is worth calling out as its own blog. Are you implementing virtualization and/or cloud technologies to make IT more responsive to the needs of the user, or are you implementing them to give users “put it in drive and don’t worry about it” control over their own application infrastructure? The difference is huge, and the two may have some synergies, but they’re certainly not perfectly complementary.
In the case of making IT more responsive, you want to give your operators a ton of dials and whistles to control the day-to-day operations of applications and make certain that load is distributed well and all applications are responsive in a manner keeping with business requirements. In the case of push-button business provisioning, you want to make the process bullet-proof and not require user interaction. It is a different world to say “It is easy for businesses to provision new applications.” (yes, I do know the questions that statement spawns, but there are people doing it anyway – more in a moment) than it is to say “Our monitoring and virtual environment give us the ability to guarantee uptime and shift load to the servers/locales/geographies that make sense.” While you can do the second as a part of the first, they do not require each other, and unless you know where you’re going, you won’t ever get there. Some of you have been laughing since I first mentioned giving business the ability to provision their own applications. Don’t. There are some very valid cases where this is actually the answer that makes the most sense. Anyone reading this that works at a University knows that this is the emerging standard model for the student virtualization efforts. Let students provision a gazillion servers, because they know what they need, and University IT could never service all of the requests. Then between semesters, wipe the virtual arrays clean and start over. The early results show that for the university model, this is a near-perfect solution. For everyone not at a university, there are groups within your organization capable of putting up applications - a content management server for example - without IT involvement… Except that IT controls the hardware. If you gave them single-button ability to provision a standard image, they may well be willing to throw up their own application. 
There are still a ton of issues, security and DB access come to mind, but I’m pointing out that there are groups with the desire who believe they have the ability, if IT gets out of their way. Are you aiming to serve them? If so, what do you do for less savvy groups within the organization or those with complex application requirements that don’t know how much disk space or how many instances they’ll need? For increasing IT agility, we’re ready to start that move today. Indeed, virtualization was the start of increasing IT’s responsiveness to business needs, and we’re getting more and more technology on-board to cover the missing pieces of agile infrastructure. By making your infrastructure as adaptable as your VM environment, you can leverage the strategic points of control built into your network to handle ADC functionality, security, storage virtualization, and WAN Optimization to make sure that traffic keeps flowing and your network doesn’t become the bottleneck. You can also leverage the advanced reporting that comes from sitting in one of those strategic points of control to foresee problem areas or catch them as they occur, rather than waiting for user complaints. Most of us are going for IT agility in the short term, but it is worth considering if, for some users, one-click provisioning wouldn’t reduce IT overhead and let you focus on new strategic projects. Giving user groups access to application templates and raw VM images configured for some common applications they might need is not a 100% terrible idea if they can use them with less involvement from IT than is currently the case. Meanwhile, watch this space, F5 is one of the vendors driving the next generation of network automation, and I’ll mention it when cool things are going on here. Or if I see something cool someone else is doing, I occasionally plug it here, like I did for Cirtas when they first came out, or Oracle Goldengate. Make a plan. Execute on it. 
Stand ready to serve the business in the way that makes the most sense with the least time investment from your already busy staff. And listen to a lot of loud music; it lightens the stress level. I was listening to ZZ Top and Buckcherry writing this. Maybe that says something, I don’t quite know.

Building the Hydra – Array Virtualization is not File Virtualization
So I’m jealous that Lori works D&D references into her posts regularly and I never have… Until today! For those who aren’t gamers or literary buffs, a Hydra is a big serpent or lizard with a variable number of heads (normally five to nine in both literature and gaming). They’re very powerful and very dangerous, and running into one unprepared is likely to get you p0wned. The worst part about them is that mythologically speaking, if you cut one of the heads off, two grow in its place. Ugly stuff if you’re determined to defeat it. That’s the way I see array-based file virtualization and other tack-on functionality. Vendors who are implementing it (many of whom are F5 partners), try to tell you that they’re unifying everything and the world is a wonderful place with greener grass and more smiling children due to their efforts. And they’re right…If you’re a homogenous shop with nothing but their storage gear. Then their multi-headed hydra looks pretty appealing. Everyone else feels like it only does a part of the job and is wary of getting too close. For the rest of us there are products like ARX to take care of that nasty truth that no organization is an all-one-vendor shop, particularly not in the NAS space, where higher end gear can cost hundreds of thousands while entry level is a commodity server with a thousand bucks worth of disk slapped into it. In fact, I have never seen an IT department that was all one vendor for NAS, and that’s the problem with single-vendor messaging. Sure they can give you a handle on their stuff, help you virtualize it, give you a unified directory structure, automate tiering, but what about that line where their box ends and the rest of the organization begins? That’s the demarcation line where you have to find other products to do the job. 
Picture from Pantheon.org

Related Articles and Blogs:
Storage Virtualization, Redux: Arise File Virtualization
Lernaean Hydra on Wikipedia
Storage Vendors – The Deduplication Stakes are Raised
SAN-Based Data Replication

Another Guest Post on F5 Fridays
As you all know, I try to keep my marketing spiel for F5 to a minimum here. I don’t hesitate to mention when F5 has a product that will solve your problem, but I try to focus on the problem and technical solutions. Sometimes, though, I want to crow about how good our product lines really are. Thankfully, Lori provides a venue for us to do just that, called F5 Fridays.

This week I guest-wrote an F5 Friday article about our new ARX Cloud Extender product, and it’s cool enough that I thought I’d let those of you who read my blog and don’t follow Lori’s know that it’s out there. Check it out here. If you’re a File Virtualization customer, or you want to take advantage of the cloud from your traditional storage arrays, it is worth a quick read.

F5 Friday: ARX VE Offers New Opportunities
Virtualization has many benefits in the data center, and some of them aren’t about provisioning and deployment at all.

There are some things on your shopping list that you’d never purchase sight unseen or untested: houses, cars, even furniture. So-called “big ticket” items that are generally expensive enough to be viewed as “investments” rather than purchases are rarely acquired without the customer physically checking them out.

Except in IT. When it comes to hardware-based solutions there has often been the opportunity for what vendors call “evaluation units,” but these are guarded by field and sales engineers as if they were gold from Fort Knox. And often, like cars and houses, the time in which you can evaluate them, if you’re lucky enough to get one at all, is very limited. That makes it difficult to really test out a solution and determine whether it’s going to fit into your organization and align with your business goals.

Virtualization is changing that. While some view virtualization in light of its ability to enable cloud computing and highly dynamic architectures, there’s another side to virtualization that is just as valuable, if not more so: evaluation and development. It’s been a struggle, for example, to encourage developers to take advantage of application delivery capabilities when they’re not allowed to actually test and play around with those capabilities in development. Virtual editions of application delivery controllers make that possible, without the expense of acquisition and the associated administrative costs that go with it. Similarly, it’s hard to convince someone of the benefits of storage virtualization without giving them the chance to actually try it out. It’s one thing to write a white paper or put up a web page with a lot of marketing-like speak about how great it is, but as they say, the proof is in the pudding. In the implementation.

Not every solution is a good fit for production-level virtualization.
It’s just not, for performance or memory or reliability reasons. But for testing and evaluation purposes, it makes sense for just about every technology that fits in the data center. So it was, as Don put it, “very exciting” to see our “virtual edition” options grow with the addition of ARX VE, F5’s storage virtualization solution. It just makes sense that, like finding “your chair,” you test it out before you make a decision.

From automated tiering and shadow copying to unified governance, storage virtualization like ARX provides tangible benefits to the organization that can address some of the issues associated with the massive growth of data in the enterprise. You may recall that storage tiering was recently identified at the Gartner Data Center conference as one of the “next big things,” primarily due to the continued growth of data:

#GartnerDC Major IT Trend #2 is: 'Big Data - The Elephant in the Room'. Growth 800% over next 5 years - w/80% unstructured. Tiering critical @ZimmerHDS Harry Zimmer

Virtualization gives us at F5 the opportunity to give you a chance to test drive, in ARX VE, a solution that addresses that critical need. Don, who was won over to the side of “storage virtualization is awesome” only after he actually tried it out himself, has more details on our latest addition to our growing stable of virtualized offerings.

INTRODUCING ARX VE

As we here at F5 grow our stable of Virtual Edition products, we like to keep you abreast of the latest and greatest releases available to you. Today’s Virtual Edition discussion is about ARX VE Trial, a completely virtualized version of our ARX file/directory virtualization product. ARX has huge potential in helping you get NAS sprawl under control, but until now you had to either jump through hoops to get a vendor trial into place or pay for the product before you fully understood how it worked in your environment. Not any more.
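Before getting to the trial details, it may help to picture what the automated tiering mentioned above does conceptually: files that haven’t been touched in a while migrate to a cheaper tier, while the namespace keeps the client-visible path unchanged. The sketch below is my own simplification with invented tier names and an arbitrary 90-day cutoff; it illustrates the policy idea, not ARX’s actual engine.

```python
from datetime import datetime, timedelta

def plan_tiering(files, now, cold_after=timedelta(days=90)):
    """files: list of (path, last_access, current_tier) tuples.
    Returns the moves a tiering engine would schedule this pass."""
    moves = []
    for path, last_access, tier in files:
        cold = (now - last_access) > cold_after
        if cold and tier == "fast":
            moves.append((path, "fast", "archive"))   # demote stale data
        elif not cold and tier == "archive":
            moves.append((path, "archive", "fast"))   # promote hot data back
    return moves

now = datetime(2011, 1, 1)
files = [
    ("/eng/specs/widget.doc", datetime(2010, 12, 20), "fast"),       # recently used, stays
    ("/eng/old/budget-2008.xls", datetime(2009, 3, 1), "fast"),      # cold, demote
    ("/eng/archive/roadmap.ppt", datetime(2010, 12, 30), "archive"), # hot again, promote
]
print(plan_tiering(files, now))
```

The interesting part in a real heterogeneous deployment is that “fast” and “archive” can be arrays from different vendors, which is exactly what a single-vendor hydra cannot give you.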
ARX VE Trial is free to download and license, includes limited support, and is fully functional for testing purposes. If you have VMware ESX 4.0 Update 2 or VMware ESX 4.1, you can download and install the trial for free. There’s no time limit on how long the system can run, but there is a limit on the number of NAS devices it can manage and the number of shares it can export. It is plenty adequate for the testing you’ll want to do to see how it performs, though.

Now you can see what heterogeneous tiering of NAS devices can do for you, and you can test out shadow copying for replication and for moving users’ data stores without touching the desktop. You can see how easy managing access control is when everything is presented as a single massive file system. And you can do all of this (and more) for free.

As NAS-based storage architectures have grown, management costs have increased simply due to the amount of disk and the number of arrays/shares/whatever under management. This is your chance to push those costs back in the other direction. Or at least your chance to find out whether ARX will help in your specific environment without having to pay up front or work through a long process to get a test box. You can get your copy of ARX VE (or FirePass VE or LTM VE) at our trial download site.

Here Comes Payback Time. Prepare for Storage Shortages.
The last couple of years have been painful, to say the least. Some call them unprecedented, financially, but I believe that is pushing the descriptor a bit far, since there have been plenty of instances where business pretty much en masse questioned the return on its IT investment and cut budgets. The feel of this recession is not much different from what we’ve felt before; it’s just that this time the cuts come from necessity. The funny bit is that everyone seems to agree that IT spending still went up in 2009, just by a massively reduced amount. Since the pinch is definitely out there, one can only assume that a 1.6% (or so, depending upon your source) increase in spending was not enough to cover increases in maintenance costs and new purchases.

The impact on IT is pretty straightforward, at least in my mind. Major IT projects were delayed or canceled based on tough funding decisions, and those projects ran the gamut from development to networking to outsourcing services. Some of these projects were not critical, and some were cut when the business they were going to support was curtailed, but some are “hidden gems” that will in the long run cost the business more than it is saving today. Belt-tightening went on across the entire organization, though, so IT is left to struggle with its portion of the pie, hoping that the shortfalls (necessary project-wise) will be made up in the future. The only bright spot from a budgeting perspective is that new programs and products were cut before IT, since IT is corporate-wide and viewed (mostly) strategically.

Related Articles and Blogs:
Breaking Point: 2010 State of Storage (membership required)
Top Five Data Storage Compression Methods
Storage In A Virtualized Environment
Thin Provisioning Plus VMs – Armageddon in a Virtual Box?
Cloud Storage Gateways – Stairway to (thin provisioning) Heaven
Give Your Unstructured Data the Meyers-Briggs(TM)