netapp
A Storage (Capacity) Optimization Buying Spree!
Remember when Beanie Babies were free in Happy Meals, and tons of people ran out to buy the Happy Meals but only really wanted the Beanie Babies? Yeah, that’s what the storage compression/dedupe market is starting to look like these days. Lots of big names are out snatching up at-rest de-duplication and compression vendors to get the products onto their sales sheets. We’ll have to see whether they wanted the real value of such an acquisition – the bright staff that brought these products to fruition – or whether they’re buying for the product and going to give or throw away the meat of the transaction. Yeah, that sentence is so pun laden that I think I’ll leave it like that. Except there is no actual meat in a Happy Meal, I’m pretty certain of that.

Today IBM announced that it is formally purchasing Storwize, a file compression tool designed to compress data on NAS devices. That leaves few enough players in the storage optimization space, and only one – Permabit – whose name I readily recognize. Since I wrote the blog about Dell picking up Ocarina, and this is happening while that blog is still being read pretty avidly, I figured I’d weigh in on this one also.

Storwize is a pretty smart purchase for IBM on the surface. The products support NAS at the protocol level – they claim to be “storage agnostic”, but personal experience in the space is that there’s no such thing… CIFS and NFS tend to require tweaks from vendor A to vendor B, meaning that to be “agnostic” you have to “write to the device”. An interesting conundrum. Regardless, they support CIFS and NFS, are stand-alone appliances that the vendors claim are simple to set up and require little or no downtime, and offer straight-up compression. Again, Storwize and IBM are both claiming zero performance impact; I cannot imagine how that is possible in a compression engine, but that’s their claim. The key here is that they work on everyone’s NAS devices. If IBM is smart, the products still will work on everyone’s devices in a year. 
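On that “zero performance impact” claim: compression always trades some CPU time for capacity, and a quick measurement makes the trade-off concrete. The sketch below is purely illustrative (zlib standing in for whatever engine a compression appliance actually uses, and made-up sample data), not a statement about Storwize’s implementation:

```python
import time
import zlib

# Highly compressible sample data, standing in for typical NAS file content.
data = b"log entry: user=alice action=read path=/share/reports\n" * 20_000

start = time.perf_counter()
compressed = zlib.compress(data, 6)  # default-ish compression level
elapsed = time.perf_counter() - start

ratio = len(data) / len(compressed)
print(f"original:   {len(data)} bytes")
print(f"compressed: {len(compressed)} bytes ({ratio:.0f}:1)")
print(f"CPU time:   {elapsed * 1000:.1f} ms")

# Round-trip check: at-rest compression must be lossless.
assert zlib.decompress(compressed) == data
```

The ratio on real mixed data will be far lower than on repetitive logs, but the point stands: the CPU time is never zero, so “no impact” claims really mean “impact we hope you won’t notice.”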
Related Articles and Blogs:
IBM Buys Storwize
Dell Buys Ocarina Networks
Wikipedia definition – Capacity Optimization
Capacity Optimization – A Core Storage Technology (PDF)

Dell Buys Ocarina Networks. Dedupe For All?
Storage at-rest de-duplication has been a growing point of interest for most IT staffs over the last year or so, just because de-duplication allows you to purchase less hardware over time, and if that hardware is a big old storage array sucking a ton of power and costing a not-insignificant amount to install and maintain, well, it’s appealing. Most of the recent buzz has been about primary storage de-duplication, but that is merely a case of where the market is. Backup de-duplication has existed for a good long while, and secondary storage de-duplication is not new. Only recently have people decided that at-rest de-dupe was stable enough to give it a go on their primary storage – where all the most important and/or active information is kept. I don’t think I’d call it a “movement” yet, but it does seem that the market’s resistance to anything that obfuscates data storage is eroding at a rapid rate due to the cost of the hardware (and attendant maintenance) to keep up with storage growth.

Related Articles and Blogs:
Dell-Ocarina deal will alter landscape of primary storage deduplication
Data dedupe technology helps curb virtual server sprawl
Expanding Role of Data Deduplication
The Reality of Primary Storage Deduplication

F5 Friday: Enhancing FlexPod with F5
#VDI #cloud #virtualization Black-box style infrastructure is good, but often fails to include application delivery components. F5 resolves that issue for NetApp FlexPod.

The best thing about the application delivery tier (load balancing, acceleration, remote access) is that it spans both networking and application demesnes. The worst thing about the application delivery tier (load balancing, acceleration, remote access) is that it spans both networking and application demesnes. The reality of application delivery is that it stands with one foot firmly in the upper layers of the stack and the other firmly in the lower layers of the stack, which means it’s often left out of infrastructure architectures merely because folks don’t know which box it should go in. Thus, when “black-box” style infrastructure architecture solutions like NetApp’s FlexPod arrive, they often fail to include any component that doesn’t firmly fit in one of three neat little boxes: storage, network, server (compute). FlexPod isn’t the only such offering, and I suspect we’ll continue to see more “architecture in a rack” solutions in the future as partnerships are solidified and solution providers continue to expand their understanding of what’s required to support a dynamic data center. FlexPod is a great example both of an “architecture in a rack” supporting the notion of a dynamic data center and of the reality that application delivery components are rarely included. “FlexPod™, jointly developed by NetApp and Cisco, is a flexible infrastructure platform composed of pre-sized storage, networking, and server components. 
It’s designed to ease your IT transformation from virtualization to cloud computing with maximum efficiency and minimal risk.” -- NetApp FlexPod Data Sheet

NetApp has done a great job of focusing on the core infrastructure, but it has also gone the distance and tested FlexPod to ensure compatibility with application deployments across a variety of hypervisors, operating systems and applications, including:

VMware® View and vSphere™
Citrix XenDesktop
Red Hat Enterprise Linux® (RHEL)
Oracle®
SAP®
Microsoft® Exchange, SQL Server® and SharePoint®
Microsoft Private Cloud built on FlexPod

What I love about this particular list is that it parallels so nicely the tested and fully validated solutions from F5 for delivering all these solutions:

Citrix XenDesktop
VMware View and vSphere
Oracle
SAP
Microsoft® Exchange, SQL Server® and SharePoint®

That means that providing a variety of application delivery services for these applications – secure remote access, load balancing, acceleration and optimization – should be a breeze for organizations to implement. It should also be a requirement, at least in terms of load balancing and optimization services. If FlexPod makes it easier to dynamically manage resources supporting these applications, then adding an F5 application delivery tier to the mix will ensure those resources and the user experience are optimized. 
SERVERS should SERVE

While FlexPod provides the necessary storage, compute, and layer 2 networking components, critical application deployments are enhanced by F5 BIG-IP solutions for several reasons:

Increase Capacity – Offloads CPU-intensive processes from virtual servers, freeing up resources and increasing VM density and application capacity.
Improved Performance – Accelerates the end-user experience using adaptive compression and connection pooling technologies.
Enables Transparent and Rapid Scalability – New virtual server instances hosted in FlexPod can be added to and removed from BIG-IP Local Traffic Manager (LTM) virtual pools to ensure seamless elasticity.
Enables Automated Disaster Recovery – F5 BIG-IP Global Traffic Manager (GTM) provides DNS global server load balancing services to automate disaster recovery or dynamic redirection of user requests based on location.
Accelerated Replication Traffic – BIG-IP WAN Optimization Module (WOM) can improve the performance of high-latency or packet-loss-prone WAN links. NetApp replication technology (SnapMirror) will see substantial benefit when customers add BIG-IP WOM to enhance WAN performance.
Bonus: Operational Consistency – Because BIG-IP is an application delivery platform, it allows the deployment of a variety of application delivery services on a single, unified platform with a consistent operational view of all application delivery services. That extends to other BIG-IP solutions, such as BIG-IP Access Policy Manager (APM) for providing unified authentication to network and application resources across remote, LAN, and wireless access.

Operational consistency is one of the benefits a platform-based approach brings to the table and is increasingly essential to ensuring that the cost-saving benefits of cloud and virtualization are not lost when disparate operational and management systems are foisted upon IT. FlexPod only provides certified components for storage, compute and layer 2 networking. 
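The “seamless elasticity” point above boils down to something simple: a load-balancing pool is just a membership list that traffic rotates through, so adding or draining a VM is a list operation rather than a reconfiguration event. Here is a toy round-robin pool to illustrate the idea – the class and method names are mine, not the BIG-IP LTM API:

```python
class Pool:
    """Toy round-robin load-balancing pool (illustrative only --
    not the BIG-IP LTM API)."""

    def __init__(self):
        self.members = []

    def add(self, member):
        # A new VM instance comes online and starts receiving traffic.
        self.members.append(member)

    def remove(self, member):
        # A VM instance is drained/retired; traffic flows on uninterrupted.
        self.members.remove(member)

    def next_member(self):
        # Rotate through members for each new connection.
        member = self.members[0]
        self.members.append(self.members.pop(0))
        return member

pool = Pool()
pool.add("vm-web-01")
pool.add("vm-web-02")
print([pool.next_member() for _ in range(4)])
# ['vm-web-01', 'vm-web-02', 'vm-web-01', 'vm-web-02']
```

A real LTM adds health monitoring, persistence, and weighted algorithms on top, but the elasticity story is exactly this: membership changes, not architecture changes.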
Most enterprise application deployments require application delivery services – whether for load balancing, security, or optimization – and those that do not still realize significant benefits when such services are deployed. Marrying F5 application delivery services with a NetApp FlexPod solution will yield significant benefits in terms of resource utilization and cost reductions, and will address critical components of operational risk without introducing additional burdens on already overwhelmed IT staff.

Operational Risk Comprises More Than Just Security
The Future of Cloud: Infrastructure as a Platform
At the Intersection of Cloud and Control…
The Pythagorean Theorem of Operational Risk
The Epic Failure of Stand-Alone WAN Optimization
Mature Security Organizations Align Security with Service Delivery
F5 Friday: Doing VDI, Only Better

F5 Friday: Efficient Long Distance Transfer of VMs with F5 BIG-IP WOM and NetApp Flexcache
BIG-IP WOM and NetApp Flexcache speed movement of your VMs across the WAN.

One of the major obstacles to the concept of cloud computing and “on-demand” is implementing the “on-demand” piece of the equation. Virtualization in theory allows organizations to shuffle virtual machine images of applications to and fro without the Big Hairy Mess that’s generally involved in physically migrating an application from one location to another. Just the differences in hardware – and thus potential conflicts between hardware drivers, and the inevitable “lack of support” for some piece of critical hardware the application depends on – can doom an application migration. Virtualization, of course, removes these concerns and moves the interoperability issues up to the hypervisor layer. That makes migration a much simpler process and, assuming all is well at that layer, mitigates many of the issues that had been present in the past with moving an application – such as ensuring all the right files and adapters and connections were with the application. It’s an excellent packaging scheme that supports migration as well as it does rapid provisioning. The problem, of course, has been in the network. Virtual images aren’t small by any stretch of the imagination, while Internet connectivity has always been more constrained. Organizations did not run out and increase the amount of bandwidth they had available upon embarking on their virtualization journey, and even if they did, they still have little to no control over the quality of that connection. So while it was possible in theory to move these packages of applications around to and fro, it wasn’t always necessarily feasible. Thus it is that solutions are appearing to address these problems, to make it not only possible but feasible to perform migration of virtual images on-demand. NetApp Flexcache is just one such solution. Flexcache leverages data reduction and caching to ease the burden on the network of transferring such “big data”. 
Alone it is a powerful addition to vMotion, but it’s focused on storage, on the image, on data. It’s not necessarily addressing many of the core network issues that can cause a storage vMotion to fail. That’s where we come in, because F5 BIG-IP WOM (WAN Optimization Module) does address those core network issues and makes it possible to successfully complete a storage vMotion across the WAN. Application migration, on-demand. Today’s F5 Friday is a guest post by Don MacVittie who, as you may know, keeps a close eye on storage and WAN optimization concerns and technologies in his blog. So without further ado, I’ll let Don explain more about the combined F5 BIG-IP WOM and NetApp Flexcache solution for long distance transfer of virtual machines.

VMware vMotion allows you to transfer VMs from one server to another, or even from one datacenter to another, provided the latency between the datacenters is small. It does this in a two-step process that first moves the image, and then moves the running “dynamic” portions of the VM. Moving the image is much more intensive than moving the dynamic bits, as the image is everything you have on disk for the VM, while the dynamic part is just the current state of the machine. Moving the image is referred to as “storage vMotion” in VMware lingo. NetApp Flexcache enhances the experience by handling the transfer of the image for you, making it possible to utilize Flexcache’s data reduction and cache refresh mechanisms to transfer the image for the vMotion system. While Flexcache alone is a powerful addition to vMotion, it does not address latency issues, and if the network is lossy, it will suffer performance degradation as any application will. F5 BIG-IP WAN Optimization Module (WOM) boosts the performance and reliability of your WAN connections, be they down the street or on a different continent. 
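The two-step process described above – move the big, mostly static disk image first, then the small dynamic state – can be sketched in a few lines. This is a hypothetical illustration of the flow, not VMware’s actual API; `Host` is a stand-in for a hypervisor:

```python
# Hypothetical sketch of two-step VM migration -- not VMware's API.

class Host:
    def __init__(self, name):
        self.name = name
        self.images = {}   # vm -> list of disk blocks (the big part)
        self.state = {}    # vm -> runtime memory/CPU state (the small part)

def migrate_vm(vm, source, target):
    # Step 1: "storage vMotion" -- copy the large, mostly static disk image.
    # This is the bulk of the transfer, and the part Flexcache and WOM help with.
    target.images[vm] = list(source.images[vm])

    # Step 2: copy the small "dynamic" part -- the current machine state.
    target.state[vm] = source.state[vm]

    # Cut over: release the source copy so the VM runs only on the target.
    del source.images[vm], source.state[vm]

east, west = Host("dc-east"), Host("dc-west")
east.images["app01"] = ["block0", "block1"]
east.state["app01"] = {"memory": "...", "registers": "..."}
migrate_vm("app01", east, west)
print("app01" in west.images and "app01" not in east.images)  # True
```

In the real thing step 1 dominates the transfer window, which is exactly why data reduction on the image path pays off so much more than optimizing the state transfer.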
In the datacenter-to-datacenter scenario, utilizing iSessions, two BIG-IP WOM devices can drastically improve the performance of your WAN link. Adding F5 BIG-IP WOM to the VMware/Flexcache architecture provides you with latency mitigation techniques, loss prevention mechanisms, and more data reduction capability. As shown in this solution profile, a VMware/Flexcache/WOM solution greatly increases the mobility of your VMs between datacenters. It also allows you to optimize all traffic flowing between the source and destination datacenters, not just the vMotion and Flexcache traffic. While the solution involving Flexcache (diagrammed in the above-mentioned solution profile) is more complex, a generic depiction of F5 BIG-IP WOM’s ability to speed, secure, and stabilize data transfers looks like this: So whether you are merging datacenters, shifting load, or opening a new datacenter, VMware vMotion + NetApp Flexcache + F5 BIG-IP WOM is your path to quick and painless VM transfers across the WAN.

Related blogs & articles:
F5 Friday: Rackspace CloudConnect - Hybrid Architecture in Action
F5 Friday: The 2048-bit Keys to the Kingdom
All F5 Friday Posts on DevCentral
F5 Friday: Elastic Applications are Enabled by Dynamic Infrastructure
F5 Friday: It is now safe to enable File Upload
F5 Friday: Application Access Control - Code, Agent, or Proxy?
Oracle RMAN Replication with F5's BIG-IP WOM
Don MacVittie - WOM
Nojan Moshiri - BIGIP-WOM
How May I Speed and Secure Replication? Let Me Count the Ways.
WOM nom nom nom nom – DevCentral
WOM and iRules - DevCentral

Our data is so deduped that no two bits are alike!
Related Articles and Blogs:
Dedupe Ratios Do Matter (NWC)
Ask Dr Dedupe: NetApp Deduplication Crosses the Exabyte Mark (NetApp)
Dipesh on Dedupe: Deduplication Boost or Bust? (CommVault)
Deduplication Ratios and their Impact on DR Cost Savings (About Restore)
Make the Right Call (Online Storage Optimization) – okay, that one’s a joke
BIG-IP WAN Optimization Module (F5 – PDF)
Like a Matrushka, WAN Optimization is Nested (F5 DevCentral)

F5 Friday: F5 BIG-IP WOM Puts the Snap(py) in NetApp SnapMirror
Data replication is still an issue for large organizations, and as data growth continues, those backup windows are getting longer and longer… With all the hype surrounding cloud computing and dynamic resources on demand for cheap, you’d think that secondary and tertiary data centers are a thing of the past. Not so. Large organizations with multiple data centers – even those that are evolving out of growth at remote offices – still need to be able to replicate and back up data between corporate-owned sites. Such initiatives are often fraught with peril due to the explosive growth in data which, by all accounts, is showing no signs of slowing down any time soon. The reason this is problematic is that the pipes connecting those data centers are not expanding, and expanding them simply to speed up transfer rates and decrease transfer windows is cost prohibitive. It’s the same story as any type of capacity – expanding to meet periodic bursts results in idle resources, and idle resources are no longer acceptable in today’s cost-conscious, waste-not want-not data centers. Organizations that have a NetApp solution for storage replication in place are in luck today, as F5 has a solution that can improve transfer rates by employing data reduction technologies: F5 BIG-IP WAN Optimization Module (WOM). One of the awesome advantages of WOM (and all F5 modules) over other solutions is that a BIG-IP module is a component of our unified application delivery platform. That’s an advantage because of the way in which BIG-IP modules interact with one another and are integrated with the rest of a dynamic data center infrastructure. The ability to leverage core functionality across a shared, high-speed internal messaging platform means context is never lost and interactions are optimized internally, minimizing the impact of chaining multiple point solutions together across the network. 
I could go on and on myself about its benefits when employed to improve site-to-site transfer of big data, but I’ve got colleagues like Don MacVittie who are well-versed in telling that story, so I’ll let him introduce this solution instead. Happy Replicating!

NetApp’s SnapMirror is a replication technology that allows you to keep a copy of a NetApp storage system on a remote system over the LAN or WAN. While NetApp has built in some impressive compression technology, there is still room for improvement in the WAN space, and F5 BIG-IP WOM picks up where SnapMirror leaves off. Specialized in getting the most out of your WAN connection, WOM (WAN Optimization Module) improves your SnapMirror performance and WAN connection utilization. Not just improves it – it delivers performance that, in our testing, shows a manifold increase in both throughput and overall performance. And since it is a rare WAN connection that is only transferring SnapMirror data, the other applications on that same connection will also see an impressive benefit. Why upgrade your WAN connection when you can get the most out of it at any throughput rating? Add in the encrypted tunneling capability of BIG-IP WOM and you are more fast, more secure, and more available. With the wide range of adjustments you can make to determine which optimizations apply to which data streams, you can customize your traffic to suit the needs of your specific usage scenarios. Or as we like to say, IT Agility, Your Way. You can find out more about how NetApp SnapMirror and F5 BIG-IP WOM work together by reading our solution profile. 
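One practical note on combining data reduction with encrypted tunneling: the reduction has to happen before encryption, because good ciphertext is indistinguishable from random bytes and no longer compresses. A quick sketch makes the point (using `os.urandom` to stand in for ciphertext – this is an illustration of the principle, not of WOM’s internals):

```python
import os
import zlib

# Redundant replication-style payload (illustrative data).
plaintext = b"SnapMirror replication payload " * 2000

# Compressing *before* encryption works well on redundant traffic...
compressed = zlib.compress(plaintext)

# ...but encrypted bytes look random (simulated here with os.urandom,
# standing in for real ciphertext), so compressing *after* encryption
# yields essentially nothing.
ciphertext = os.urandom(len(plaintext))
compressed_ciphertext = zlib.compress(ciphertext)

print(f"plaintext  {len(plaintext)} -> compressed {len(compressed)}")
print(f"ciphertext {len(ciphertext)} -> compressed {len(compressed_ciphertext)}")
# The first shrinks dramatically; the second does not shrink at all.
```

That ordering is why a symmetric pair of optimizers on either end of the tunnel is the natural design: each side sees cleartext on its LAN leg, reduces it, then encrypts it for the WAN leg.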
Related blogs & articles:
Why Single-Stack Infrastructure Sucks
F5 Friday: Microsoft and F5 Lync Up on Unified Communications
F5 Friday: The 2048-bit Keys to the Kingdom
All F5 Friday Posts on DevCentral
F5 Friday: Elastic Applications are Enabled by Dynamic Infrastructure
Optimizing NetApp SnapMirror with BIG-IP WAN Optimization Module
Top-to-Bottom is the New End-to-End

American Suzuki Case Study with F5 Networks
American Suzuki is passionate about affordable performance, both in their products and in how they run the company. When business growth led to IT storage issues, they needed a solution. Backups took 18 hours; users had performance issues with file shares; and data replication took weeks. Using industry best practices with NetApp and F5’s ARX Data Manager, 18-hour backups are now done in 90 minutes, and users are no longer complaining about performance. David Gonsalves, Associate IT Director, explains how operations improved, how money was saved by reducing the amount of backup tape needed, and how their storage management is much more efficient, with an ROI of less than 12 months.

The Problem With Storage Growth is That No One Is Minding the Store
In late 2008, IDC predicted a more than 61% annual growth rate for unstructured data in traditional data centers through 2012. The numbers appear to hold up thus far; perhaps they were even conservative. This was one of the first reports to include the growth from cloud storage providers in its numbers, and that particular group was showing a much higher rate of growth – understandable, since they have to turn up the storage they’re going to resell. The update to this document, titled World Wide Enterprise Systems Storage Forecast and published in April of this year, shows that even in light of the recent financial troubles, storage space is continuing to grow.

Related Articles and Blogs:
Unstructured Data Will Become the Primary Task for Storage
Our Storage Growth (good example of someone who can’t do the above)
Tiered Storage Tames Data Storage Growth says Construction CIO
Data Deduplication Market Driven by Storage Growth
Tiering is Like Tables or Storing in the Cloud Tier

F5 Friday: NetApp SnapVault With BIG-IP WOM
Because ‘big data’ isn’t just a problem for data at rest – it’s a problem for data being transferred.

Remember when we talked about operational risk comprising more than security? One of the three core components of operational risk is availability, which is defined differently based not only on the vertical industry you serve but also on the business goals of the application. This includes disaster recovery goals, among which off-site backups are often used as a means to address the availability of data for critical applications in the event of a disaster. Data grows, it rarely shrinks, and operational tasks involving the migration of data – whether incremental or full backups – to secondary and even tertiary sites are critical to the successful “failover” of an organization from one site to another, as well as to the ability to restore data should something, heaven forbid, happen to the source. These backups are often moved across WAN connections to secondary data centers or off-site services. But the growth of data is not being mirrored by growth in connectivity throughput and speeds, causing backup windows to grow to unacceptable intervals of time.

“#GartnerDC Major IT Trend #2 is: 'Big Data - The Elephant in the Room'. Growth 800% over next 5 years - w/80% unstructured. Tiering critical” -- @ZimmerHDS (Harry Zimmer)

As a means to combat the increasing time required to transfer big data across WAN connections, it becomes necessary either to increase the amount of available bandwidth or to decrease the size of the data. The former is an expensive proposition, especially considering the benefits are only seen periodically, and the latter is a challenge unto itself. Data growth is not something that can be halted or slowed by operational needs, so it’s necessary to find the means to reduce the size of data in transit, only. That’s where BIG-IP WOM (WAN Optimization Module) comes into play. 
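The bandwidth-versus-data-size trade-off above reduces to simple arithmetic. The numbers below are purely illustrative (a 2 TB nightly backup over a 100 Mbps link, with a hypothetical 4:1 data reduction ratio), just to show why shrinking the data is so much cheaper than widening the pipe:

```python
# Back-of-the-envelope backup-window math with illustrative numbers.

def transfer_hours(data_gb, link_mbps, reduction_ratio=1.0):
    """Hours to move data_gb over a link, after data reduction."""
    effective_gb = data_gb / reduction_ratio
    megabits = effective_gb * 8 * 1024   # GB -> gigabits -> megabits
    return megabits / link_mbps / 3600

nightly_backup_gb = 2000   # 2 TB to move each night
link_mbps = 100            # 100 Mbps WAN link

print(f"raw transfer:  {transfer_hours(nightly_backup_gb, link_mbps):.1f} h")
print(f"4:1 reduction: {transfer_hours(nightly_backup_gb, link_mbps, 4):.1f} h")
# raw transfer:  45.5 h
# 4:1 reduction: 11.4 h
```

A backup window of 45 hours per night is simply impossible; the same data at 4:1 reduction fits in an overnight window, on the same link you already pay for.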
In conjunction with NetApp SnapVault, BIG-IP WOM can dramatically impact the performance of WAN connections, decreasing the time required for backup operations and ensuring that such operationally critical tasks complete as expected. As is often the case when we’re talking storage or WAN optimization, Don has more details on our latest solution for NetApp SnapVault. Happy Backups!

NetApp SnapVault is an optimized disk-to-disk backup system that copies changed blocks from a source file system to a target file system. Since the backup is a copy of the source file system, restore operations are immensely simplified compared with traditional disk-to-tape backup models. Only blocks that have changed since the last update are copied over the wire, making it very efficient at local backups. When the latencies and packet loss of a WAN connection are introduced, however, SnapVault suffers just the same as any other WAN application does, and can get backed up, making it difficult to meet your Recovery Point Objectives and Recovery Time Objectives. While SnapVault specializes in keeping a near-replica of your chosen file system, if that chosen file system is remote, it might just need a little help. Enter BIG-IP WAN Optimization Module (WOM), a compression/dedupe/encryption/TCP optimization add-on for BIG-IP LTM that improves WAN communications, and in many cases makes SnapVault over the WAN perform like SnapVault over the LAN. To achieve this, a BIG-IP WOM device is placed on the WAN link of both the source and target datacenters, a secure tunnel is created between the two, and data is transferred over the secure tunnel. With Symmetric Adaptive Compression and Symmetric Deduplication, BIG-IP WOM can achieve enormous data transfer reductions while maintaining your data integrity. Add in TCP optimizations to improve the reliability of your WAN link and reduce the overhead of TCP in error-prone environments, and you’ve got a massive performance booster that does plenty for SnapVault. 
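The changed-block behavior described above is the heart of why disk-to-disk backup is so efficient: compare per-block fingerprints against the previous backup and send only the blocks whose fingerprints differ. Here is a minimal hash-based sketch of that idea – an illustration of the general technique, not NetApp’s implementation, and the 4 KB block size is just a convenient example:

```python
import hashlib

BLOCK = 4096  # illustrative block size

def block_hashes(data):
    """Fingerprint each fixed-size block of a volume."""
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(source, previous_hashes):
    """Indices of source blocks that differ from the last backup."""
    return [i for i, h in enumerate(block_hashes(source))
            if i >= len(previous_hashes) or h != previous_hashes[i]]

# Three-block volume; only the middle block changes between backups.
old = b"A" * BLOCK + b"B" * BLOCK + b"C" * BLOCK
new = b"A" * BLOCK + b"X" * BLOCK + b"C" * BLOCK

print(changed_blocks(new, block_hashes(old)))  # [1]
```

Only block 1 crosses the wire, so the cost of each update scales with the change rate rather than the volume size – which is also exactly why the remaining WAN pain is latency and loss, not raw volume, and why a TCP-optimizing tunnel helps so much.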
In fact, our testing (detailed in this solution profile) showed a 59x improvement over standard SnapVault installations. BIG-IP WOM also supports Rate Shaping, giving you the ability to assign priority to your SnapVault backups and ensure that they receive enough bandwidth to stay up to date. You can find out more about SnapVault on the NetApp website, and more about F5 BIG-IP WAN Optimization Module on F5’s website. While the results for SnapVault are astounding, BIG-IP WOM has been tested with a wide array of replication products to give you the widest set of options possible for improving your bandwidth utilization on point-to-point communications. F5 BIG-IP WOM. Making long-distance SnapVault Secure, Fast, and Available.

Enhancing NetApp SnapVault Performance with F5 BIG-IP WOM
BIG-IP WAN Optimization Module Performance
F5 BIG-IP WAN Optimization Module in Data Replication Environments
Byte Caching, Compression, and WAN Optimization
No Really. Broadband. The Golden Age of Data Mobility?
Deduplication and Compression – Exactly the same, but different.
F5 Friday: BIG-IP WOM With Oracle Products
F5 Friday: F5 BIG-IP WOM Puts the Snap(py) in NetApp SnapMirror

If I Were in IT Management Today…
I’ve had a couple of blog posts talking about how there is a disconnect between “the market” and “the majority of customers” where things like cloud (and, less so, storage) are concerned, so I thought I’d try this out as a follow-on. If I were running your average medium-to-large IT shop (not talking extremely huge, just medium to large), what would I be focused on right now? By way of introduction, for those who don’t know: I’m relatively conservative in my use of IT; I’ve been around the block and been burned a few times (OS/2 Beta Tester, WFW, WP… the list goes on); and the organizations I’ve worked for where I was part of “Enterprise IT” were all relatively conservative (Utilities, Financials), while the organizations I worked in Product or App Development for were all relatively cutting edge. I’ve got a background in architecture, App Dev, and large systems projects, and I think that IT Management is (sadly) 50% corporate politics and 50% actually managing IT. I’ll focus on problems that we all have in general here, rather than on a certain vertical, and most of these problems are applicable to all but the largest and smallest IT shops today. By way of understanding, this list is the stuff I would be spending research or education time on, and it is kept limited because the bulk of you and your staff’s time is of course spent achieving or fixing for the company, not researching. Though most IT shops I know of have room for the amount of research I’m talking about below.